
Helm Charts: Kubernetes Deployments Made Simple

Managing applications in Kubernetes can get messy—fast. Think YAML files everywhere, lots of kubectl apply, and debugging when something’s off. That’s where Helm comes in.

🛠️ What is Helm?

Helm is the package manager for Kubernetes. Think of it like apt for Ubuntu or brew for macOS—but for your Kubernetes apps.

Instead of juggling dozens of YAML files, you define a Helm chart, which is a reusable template for your application, including:

  • Deployments
  • Services
  • ConfigMaps
  • Ingress
  • and more...

A Helm chart helps you install, upgrade, and manage complex Kubernetes apps with just a few commands.
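To make "reusable template" concrete: a chart's templates are ordinary Kubernetes manifests with placeholders filled in from values.yaml. A minimal, hypothetical templates/deployment.yaml might look like this (names and values are illustrative, not from any real chart):

```yaml
# templates/deployment.yaml -- hypothetical minimal example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-web
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-web
    spec:
      containers:
        - name: web
          # image repository and tag come from values.yaml (or a -f override)
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Helm renders every `{{ .Values.* }}` placeholder from values.yaml (or any `-f` override file) at install time, so one template serves every environment.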


✅ Why Use Helm?

  • Reusability: One chart can serve multiple environments (dev, staging, prod) by swapping values.
  • Consistency: Templates reduce copy-paste errors across YAMLs.
  • Versioning: Helm tracks releases and lets you roll back easily.
  • Simplicity: You can install an entire app stack (like Prometheus, NGINX, etc.) with one command.
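The reusability point works by keeping one chart and swapping values files per environment. A sketch, with hypothetical file names and settings:

```yaml
# values-dev.yaml (hypothetical)
replicaCount: 1
image:
  tag: latest

---
# values-prod.yaml (hypothetical)
replicaCount: 3
image:
  tag: "1.4.2"
```

You then install the same chart into each environment with `helm install myapp ./mychart -f values-dev.yaml` (or `-f values-prod.yaml`), and only the values differ.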

🚀 Using a Helm Chart (Quick Start)

1. Install Helm

brew install helm   # macOS
# or, on Debian/Ubuntu (Helm is not in the default apt repos),
# use the official install script:
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

2. Add a Chart Repo

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

3. Install an App (e.g., PostgreSQL)

helm install my-postgres bitnami/postgresql

This will deploy PostgreSQL into your Kubernetes cluster using a pre-built chart.

4. Customize with Values

Create a values.yaml file to override default settings:

auth:
  username: myuser
  password: mypass
  database: mydb

Then install with:

helm install my-postgres bitnami/postgresql -f values.yaml

5. Upgrade or Roll Back

helm upgrade my-postgres bitnami/postgresql -f values.yaml
helm history my-postgres     # list release revisions to find one to roll back to
helm rollback my-postgres 1  # roll back to revision 1

6. Uninstall

helm uninstall my-postgres

📦 Writing Your Own Helm Chart (Optional Teaser)

You can also create your own chart with:

helm create mychart

This generates a full chart scaffold: a Chart.yaml, a values.yaml, and a templates/ directory with example manifests. From there, you modify the templates and values.yaml to fit your app.


🧠 Final Thoughts

Helm brings structure, repeatability, and sanity to your Kubernetes workflows. Whether you're managing a small service or a full production stack, Helm lets you ship faster with fewer headaches.

Kube-bench: Automating Kubernetes Security Checks

What is it?

The Center for Internet Security (CIS) has established benchmarks to ensure the secure deployment of Kubernetes clusters. These benchmarks provide security configuration guidelines for Kubernetes, aiming to help organizations protect their environments from potential vulnerabilities.

One tool that automates this important process is kube-bench. It runs checks against your Kubernetes cluster based on the CIS Kubernetes Benchmark, helping to verify whether your cluster is configured securely.

Why Use Kube-bench?

kube-bench streamlines security auditing by automating the verification of your Kubernetes setup. It checks for best practices, identifies misconfigurations, and reports areas where your setup might fall short of CIS recommendations. This makes it easier to maintain compliance and reduce the risk of exposure to security threats.

Whether you're running Kubernetes in production, or setting up a development cluster, regular use of kube-bench helps ensure that your deployments meet security standards.
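One low-friction way to make those audits regular is to run kube-bench on a schedule inside the cluster. Below is a hypothetical CronJob wrapper around the upstream kube-bench image; it is a sketch only — the host-path volume mounts that the upstream job.yaml uses to read node configuration are omitted for brevity and would be needed in practice:

```yaml
# Hypothetical CronJob that re-runs kube-bench checks weekly.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: kube-bench-weekly
spec:
  schedule: "0 6 * * 1"   # every Monday at 06:00
  jobTemplate:
    spec:
      template:
        spec:
          hostPID: true            # kube-bench needs host process visibility
          restartPolicy: Never
          containers:
            - name: kube-bench
              image: docker.io/aquasec/kube-bench:latest
              command: ["kube-bench"]
```

As with the one-off job below, the results land in the pod's logs, so a log-shipping pipeline can pick them up automatically.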

For more details and to start using kube-bench, visit the official GitHub repository.

Running kube-bench on Kubernetes Clusters

The kube-bench tool can be executed in various ways depending on your Kubernetes cluster setup. It ensures that your Kubernetes deployment aligns with the CIS Kubernetes Benchmark, which provides security guidelines.

In this blog, I’ll share how I used kube-bench to audit both worker and master nodes of a Kubernetes cluster deployed with kOps on AWS.

Worker Node Auditing

To audit the worker nodes, I submitted a Kubernetes job that runs kube-bench specifically for worker node configuration. Below are the steps:

# Download the worker node job configuration
$ curl -O https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml

$ kubectl apply -f job.yaml
job.batch/kube-bench created

$ kubectl get pods
NAME               READY   STATUS              RESTARTS   AGE
kube-bench-j76s9   0/1     ContainerCreating   0          3s

# Wait for a few seconds for the job to complete
$ kubectl get pods
NAME               READY   STATUS      RESTARTS   AGE
kube-bench-j76s9   0/1     Completed   0          11s

# The results are held in the pod's logs
$ kubectl logs kube-bench-j76s9
[INFO] 4 Worker Node Security Configuration
[INFO] 4.1 Worker Node Configuration Files
...

The logs will contain a detailed list of recommendations, outlining the identified security issues and how to address them. You can see an example of the full output in this Gist.

Within the output, each problematic area is explained, and kube-bench offers solutions for improving security on the worker nodes.
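Since the results live in plain pod logs, you can tally the failures before reading the full report. A small sketch — the log excerpt below is a fabricated sample in kube-bench's [PASS]/[FAIL]/[WARN] format, and in a real audit you would pipe `kubectl logs kube-bench-j76s9` instead of the sample variable:

```shell
# Fabricated kube-bench output excerpt, for illustration only
log='[INFO] 4.1 Worker Node Configuration Files
[PASS] 4.1.1 Ensure that the kubelet service file permissions are set to 644 or more restrictive
[FAIL] 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive
[WARN] 4.2.9 Ensure that the --event-qps argument is set appropriately'

# Count failed checks (replace `echo "$log"` with `kubectl logs <pod>` in practice)
echo "$log" | grep -c '\[FAIL\]'
```

A non-zero count is a quick signal that the node needs attention; the full log then tells you which checks failed and how to remediate them.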

Master Node Auditing

To audit the master nodes (control plane), I used a separate job manifest designed for the control-plane checks. Follow these steps to run the audit:

# Download the master node job configuration
$ curl -O https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job-master.yaml

$ kubectl apply -f job-master.yaml
job.batch/kube-bench created

$ kubectl get pods
NAME               READY   STATUS              RESTARTS   AGE
kube-bench-xxxxx   0/1     ContainerCreating   0          3s

# Wait for a few seconds for the job to complete
$ kubectl get pods
NAME               READY   STATUS      RESTARTS   AGE
kube-bench-xxxxx   0/1     Completed   0          11s

# The results are held in the pod's logs
$ kubectl logs kube-bench-xxxxx
[INFO] 1 Control Plane Security Configuration
[INFO] 1.1 Control Plane Node Configuration Files
...

The logs will contain a detailed list of recommendations, outlining the identified security issues and how to address them. You can see an example of the full output in this Gist.

kubesec: Static Analysis Security Scanning Tool

The problem

Kubernetes resources can be vulnerable to misconfigurations, leading to security risks in your infrastructure. Detecting these issues early is critical to maintaining a secure environment.

What is kubesec?

kubesec is an open-source tool that performs static analysis on Kubernetes resources, identifying security risks before deployment. It helps ensure that your Kubernetes configuration adheres to security best practices.

How to Use kubesec

There are several ways to use kubesec to scan your Kubernetes resources:

  • Docker container image: docker.io/kubesec/kubesec:v2
  • Linux/macOS/Windows binary (get the latest release)
  • Kubernetes Admission Controller
  • Kubectl plugin

Using the Docker Image

The simplest way to run kubesec is by using its Docker image and passing the file you want to scan. For example, to check the app1-510d6362.yaml file:

docker run -i kubesec/kubesec:v2 scan /dev/stdin < app1-510d6362.yaml

This command runs a scan on the specified file, producing results like this:

[
  {
    "object": "Pod/pod.default",
    "valid": true,
    "message": "Passed with a score of 0 points",
    "score": 0,
    "scoring": {
      "advise": [
        {
          "selector": "containers[] .securityContext .readOnlyRootFilesystem == true",
          "reason": "An immutable root filesystem can prevent malicious binaries being added to PATH and increase attack cost"
        },
        {
          "selector": "containers[] .securityContext .runAsNonRoot == true",
          "reason": "Force the running image to run as a non-root user to ensure least privilege"
        },
        {
          "selector": "containers[] .securityContext .runAsUser -gt 10000",
          "reason": "Run as a high-UID user to avoid conflicts with the host's user table"
        },
        {
          "selector": "containers[] .securityContext .capabilities .drop",
          "reason": "Reducing kernel capabilities available to a container limits its attack surface"
        },
        {
          "selector": "containers[] .securityContext .capabilities .drop | index(\"ALL\")",
          "reason": "Drop all capabilities and add only those required to reduce syscall attack surface"
        },
        {
          "selector": "containers[] .resources .requests .cpu",
          "reason": "Enforcing CPU requests aids a fair balancing of resources across the cluster"
        },
        {
          "selector": "containers[] .resources .limits .cpu",
          "reason": "Enforcing CPU limits prevents DOS via resource exhaustion"
        },
        {
          "selector": "containers[] .resources .requests .memory",
          "reason": "Enforcing memory requests aids a fair balancing of resources across the cluster"
        },
        {
          "selector": "containers[] .resources .limits .memory",
          "reason": "Enforcing memory limits prevents DOS via resource exhaustion"
        },
        {
          "selector": ".spec .serviceAccountName",
          "reason": "Service accounts restrict Kubernetes API access and should be configured with least privilege"
        },
        {
          "selector": ".metadata .annotations .\"container.seccomp.security.alpha.kubernetes.io/pod\"",
          "reason": "Seccomp profiles set minimum privilege and secure against unknown threats"
        },
        {
          "selector": ".metadata .annotations .\"container.apparmor.security.beta.kubernetes.io/nginx\"",
          "reason": "Well defined AppArmor policies may provide greater protection from unknown threats. WARNING: NOT PRODUCTION READY"
        }
      ]
    }
  }
]
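Because kubesec emits JSON, the score field is easy to gate on in CI. A minimal sketch using only grep/cut/tr — the inline result is a trimmed sample of the output shown above, and a real pipeline would capture the `docker run` output instead of hard-coding it:

```shell
# Trimmed sample of kubesec JSON output (from the scan above)
result='[{"object": "Pod/pod.default", "valid": true, "score": 0}]'

# Extract the numeric score without jq, using grep/cut/tr
score=$(echo "$result" | grep -o '"score": *[-0-9]*' | cut -d: -f2 | tr -d ' ')
echo "$score"

# Fail the build when the score is negative (kubesec uses negative
# scores for critical issues)
if [ "$score" -lt 0 ]; then
  echo "kubesec score $score: failing build"
  exit 1
fi
```

Teams that want a stricter bar can raise the threshold, e.g. require a minimum positive score before a manifest is allowed to merge.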