Kubernetes How-To - World's Greatest Single-Page Source for Kubernetes

Kubernetes should work for you, not the other way around

Practical, production-tested Kubernetes how-tos. Authored by a production Kubernetes practitioner of 10+ years.


Author: Cassius Adams — 10+ years running real-world production Kubernetes across 100s of clusters

About the author

Cassius Adams — Senior Kubernetes practitioner with 10+ years running real-world production Kubernetes clusters and 30+ years in Internet tech. He has designed, operated, and secured 100s of AKS/GKE/OpenShift and native on-prem and cloud multi-cluster k8s platforms with a focus on automation, reliability, and secure defaults. He publishes tested runbooks and DRY IaC that work in enterprise environments, and has been responsible for deploying and/or managing 100s of Kubernetes clusters throughout his decades-long IT career.

"Below is my ever-evolving list of Kubernetes 'HowTos' which I believe every Kubernetes novice and expert will find useful!"

Bookmark this page now!

  • Kubernetes
  • kubectl
  • IaC
  • AKS
  • GKE
  • OpenShift
  • Cloud Native


Pods

Pods are the smallest deployable units in Kubernetes. This section covers day‑2 operations like safe exec access, live log streaming, ephemeral debug containers, copying files, and restart strategies with ready‑to‑use kubectl commands.

Pods — FAQ

When should I exec into a pod vs. use an ephemeral debug container?

Use ephemeral debug in production when the base image is minimal or restricted; it adds tooling without changing the running container image or file system. See Use an ephemeral debug container and How to exec into a pod.

Why are my logs empty even though the pod is running?

Check the container name in multi-container pods, recent restarts (--previous), and log output location; some apps, unfortunately, log to file only. See How to get logs from a pod and How to describe a pod.

Is deleting a pod a safe way to “restart” it?

Safe for Deployment/ReplicaSet/DaemonSet pods because the controller replaces them. Application teams should build their applications to shut down cleanly on termination signals, which minimizes or eliminates the risk of deleting a pod. For standalone pods, it's better to redeploy from a manifest. See How to restart a pod.
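
For a sense of what "clean shutdown" support looks like in a pod spec, here is a minimal sketch - a termination grace period plus a preStop hook that briefly delays shutdown so endpoints drain (values are placeholders, and the app itself still needs to handle SIGTERM):
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - name: app
        image: IMAGE
        lifecycle:
          preStop:
            exec:
              # short pause so the Pod is removed from Service endpoints before the process exits
              command: ["sh", "-c", "sleep 5"]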

Kubernetes: How to exec into a pod

Author: Cassius Adams •
EXPERT TIP Prefer an ephemeral debug container in production when you can, to avoid altering running workloads. In reality this isn't always possible.
  1. Find the pod and namespace:
    $ kubectl get pods -A
  2. Exec into it with a shell:
    $ kubectl exec -it POD_NAME -n NAMESPACE /bin/bash
    or
    $ kubectl exec -it POD_NAME -n NAMESPACE -- sh
  3. On a restricted cluster, use ephemeral debugging. "target" should specify which container inside that pod you want to debug alongside:
    $ kubectl debug -it POD_NAME -n NAMESPACE --image=busybox --target=CONTAINER_NAME
HEADS-UP
1. PodSecurity or RBAC may block exec/debug, so it may help to check roles before using it.
2. External registries may be blocked, in which case busybox may not be available unless you've uploaded it to your private registry; if so, point --image at that registry address instead.
IMPORTANT
1. Avoid exec'ing into pods with sensitive data from untrusted networks. Use a bastion host, and logging.
2. A debug container will live in your pod until the end of its lifespan. If you can cleanly destroy the pod, do so when complete with your investigation.
EDITOR'S NOTE Just because you can exec into a Pod doesn't mean there's going to be much tooling in there. A lot of organizations keep container images to a minimum, for security and speed reasons. I like to start by running ls /bin/ to find out what binaries are available to use. Assuming you're not placing anything custom in /bin/, you can toss the list into ChatGPT and ask it to help choose which tool(s) from the list to achieve your goal.
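
For example, a quick inventory of what's available (the paths are common guesses; adjust for your image):
    $ kubectl exec POD_NAME -n NAMESPACE -- sh -c 'ls /bin /usr/bin /usr/local/bin 2>/dev/null'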

If you are able to use debug, this is what it might look like:
kubectl debug -it POD_NAME -n NAMESPACE --image=busybox --target=container-2
	
apiVersion: v1
kind: Pod
metadata:
  name: POD_NAME
  namespace: NAMESPACE
spec:
  containers:
  - name: container-1
    image: app-image-1:latest
  - name: container-2
    image: app-image-2:latest
  # Below is automatically added to existing pod after kubectl debug
  ephemeralContainers:
  - name: debugger-xxxxx   # autogenerated name
    image: busybox
    targetContainerName: container-2
    tty: true    # set because the debug session was started with -it
    stdin: true  # set because the debug session was started with -it

Kubernetes: How to port-forward to a pod

Author: Cassius Adams •
  1. Find the pod and its namespace:
    $ kubectl get pods -A
  2. Forward a local port to the pod’s port:
    $ kubectl port-forward pod/POD_NAME 8080:80 -n NAMESPACE
  3. Open the forwarded endpoint at http://localhost:8080 in your browser, or: curl http://localhost:8080
EXPERT TIP Use a resource type prefix (e.g., pod/) to avoid ambiguous names.
HEADS-UP The session runs until you close it, so keep the terminal open while you're using it.
EDITOR'S NOTE I don't use this feature nearly as much as I should. It's a great way of verifying correct functionality, and performing tests against a specific pod's application.

Kubernetes: How to describe a pod

Author: Cassius Adams •
  1. Describe the pod for a full status dump:
    $ kubectl describe pod POD_NAME -n NAMESPACE
  2. Filter for events (shell grep example):
    $ kubectl describe pod POD_NAME -n NAMESPACE | grep -A5 -i events
  3. Check recent warnings across a namespace:
    $ kubectl get events -n NAMESPACE --sort-by=.lastTimestamp | tail -n 50
EXPERT TIP If a pod is Pending or CrashLoopBackOff, describe usually points to the root cause the fastest.

Kubernetes: How to list pods by label

Author: Cassius Adams •
  1. List pods matching an exact label in a namespace:
    $ kubectl get pods -l app=web -n NAMESPACE
  2. List pods across all namespaces for a label:
    $ kubectl get pods -A -l tier=backend
  3. You can show labels as columns to see/find values quickly:
    $ kubectl get pods -l app=web -L app,tier -n NAMESPACE
  4. Use JSONPath to print only the pod names (handy when scripting things):
    $ kubectl get pods -l app=web -n NAMESPACE -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
EXPERT TIP You can combine multiple selectors using commas or logical ops. Examples:
-l 'app=web,environment=prod'
-l 'environment in (prod,staging)'
HEADS-UP Label keys are case-sensitive and usually use prefixes that resemble DNS in larger orgs or tools (ex: team.example.com/owner).
EDITOR'S NOTE I often add -o wide to see node placement while filtering by label. I find it makes spotting poorly balanced or hot nodes much easier.

Kubernetes: How to view pod events

Author: Cassius Adams •
  1. View events for a specific pod and sort by most recent at the top:
    $ kubectl get events --field-selector involvedObject.kind=Pod,involvedObject.name=POD_NAME -n NAMESPACE --sort-by=.lastTimestamp
  2. Show only warnings in a specific namespace (and tail 50 most recent):
    $ kubectl get events -n NAMESPACE --field-selector type=Warning --sort-by=.lastTimestamp | tail -n 50
  3. Use describe to see a pod’s embedded event section:
    $ kubectl describe pod POD_NAME -n NAMESPACE
  4. Watch events live, as they happen:
    $ kubectl get events -n NAMESPACE --watch
EXPERT TIP Add --field-selector involvedObject.namespace=NAMESPACE when viewing cluster-wide events to reduce a ton of unnecessary noise.
HEADS-UP Events are ephemeral (controlled by TTL on your cluster). This is a common frustration in Kubernetes. So if you need historical context beyond the TTL, capture them to logs or an observability stack.
EDITOR'S NOTE I use the --watch parameter a LOT - well, -w because it's faster. Especially when impatiently awaiting a new node to enter the cluster. It's definitely not limited to just events.

I'll also sometimes run the warning-only filter in one terminal while reproducing an issue in another because it keeps signal high and fluff low during triage.

Kubernetes: How to set env vars on a pod (kubectl set env)

Author: Cassius Adams •
  1. Get a list of current environment variables on a pod:
    $ kubectl set env pod/POD_NAME --list -n NAMESPACE
  2. Set or update an env var on a Deployment (which automatically triggers a rollout) and also check on rollout status:
    $ kubectl set env deployment/DEPLOYMENT_NAME FOO=bar -n NAMESPACE
    $ kubectl rollout status deployment/DEPLOYMENT_NAME -n NAMESPACE
  3. Populate env vars from a Secret or ConfigMap:
    $ kubectl set env deployment/DEPLOYMENT_NAME --from=secret/SECRET_NAME -n NAMESPACE
    $ kubectl set env deployment/DEPLOYMENT_NAME --from=configmap/CONFIGMAP_NAME -n NAMESPACE
  4. Remove an env var (which automatically triggers rollout):
    $ kubectl set env deployment/DEPLOYMENT_NAME FOO- -n NAMESPACE
EXPERT TIP When importing from a Secret or ConfigMap, you can add a prefix with --prefix=APP_ to avoid name collisions.
HEADS-UP kubectl set env updates the pod template, so controllers (ex, Deployments, DaemonSets, etc) will roll out new pods. Use rollout status to watch health.
IMPORTANT Avoid putting sensitive values directly in commands or manifests. Prefer --from=secret/SECRET_NAME or valueFrom: secretKeyRef and tighten RBAC.
EDITOR'S NOTE I default to ConfigMap or Secret sources for portability. If you must set a one-off var for a hotfix, it's advisable to commit a follow-up PR that formalizes it in IaC immediately.

And speaking of IaC, setting environment variables manually is generally inadvisable outside of lower or testing environments. The configuration should be code-driven (not the secret values themselves, but the Deployment, DaemonSet, etc. that references them). If you're not already doing that, get in the habit of applying env vars that way.
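
For reference, a minimal sketch of the code-driven form in a Deployment template (names are placeholders, and the ConfigMap/Secret are assumed to already exist):
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: DEPLOYMENT_NAME
      namespace: NAMESPACE
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: app
            image: IMAGE
            envFrom:
            - configMapRef:
                name: CONFIGMAP_NAME   # bulk-import non-sensitive settings
            env:
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:          # reference the secret value, don't inline it
                  name: SECRET_NAME
                  key: password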

Kubernetes: How to use an ephemeral debug container

Author: Cassius Adams •
  1. Start a debug container in the target pod (uses same network & NS as the app). Use --target to choose the specific container within the pod you're troubleshooting:
    $ kubectl debug -it POD_NAME -n NAMESPACE --image=busybox --target=CONTAINER_NAME
  2. Confirm the debug container is attached (look for ephemeralContainers):
    $ kubectl describe pod POD_NAME -n NAMESPACE | sed -n '/Ephemeral Containers:/,$p'
    Essentially the same as above, but more reliable (less brittle) - and more complex. (Use this one if scripting)
    $ kubectl get pod POD_NAME -n NAMESPACE -o jsonpath='{range .spec.ephemeralContainers[*]}{.name}{"\t"}{.image}{"\t"}{.targetContainerName}{"\n"}{end}'
  3. Troubleshoot and collect your evidence from within the debug container (See How to exec into a pod):
    $ nslookup SERVICE_NAME.NAMESPACE.svc.cluster.local
    $ wget -qO- http://127.0.0.1:PORT
  4. Clean up when finished (let the controller recreate the pod if applicable):
    $ kubectl delete pod POD_NAME -n NAMESPACE
EXPERT TIP Use the smallest image with the tooling you need (ex busybox for basics). In restrictive environments, push your chosen toolbox image to the private registry first and use that.
HEADS-UP PodSecurity/RBAC policies may forbid debug. Check permissions if you see forbidden errors.
IMPORTANT Ephemeral containers share the pod's network and may access sensitive data and paths. Limit who can create them and remove the pod when done to return to a known-good state.
EDITOR'S NOTE The debug container cannot easily be removed from the pod once started - it will live in there for the pod's lifespan. So I'd advise the recreation of the pod once troubleshooting is complete.

Usually the first few things I do in a debug shell are to check DNS, check local ports (netstat if it's present), then curl the internal pod HTTP endpoint. These are basics and you'll often need to dive much deeper to reveal the source of the issue you're troubleshooting.

In situations where security doesn't allow debug containers, I'll sometimes hop into the app container, find out what's in /bin/ and write a script to check network states. Deciphering /proc/net/(tcp|udp) and /proc/net/tcp6, for example, can be challenging. But we work with what we have :)
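
For a taste of what that looks like, here's a minimal sketch that lists listening TCP ports by decoding /proc/net/tcp, assuming only a POSIX shell in the container (printf accepts the 0x hex form in most sh implementations):
    $ kubectl exec POD_NAME -c CONTAINER_NAME -n NAMESPACE -- sh -c '
        # state 0A = LISTEN; the local_address field is hex IP:PORT, so decode the port
        while read -r sl laddr rem st rest; do
          [ "$st" = "0A" ] || continue
          printf "LISTEN on port %d\n" "0x${laddr##*:}"
        done < /proc/net/tcp'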

Kubernetes: How to force delete pod (kubectl)

Author: Cassius Adams •
  1. Check what’s holding the Pod up (finalizers, stuck volumes, etc):
    $ kubectl describe pod POD_NAME -n NAMESPACE | sed -n '/Finalizers:/,/^Events/p'
    $ kubectl get pod POD_NAME -n NAMESPACE -o jsonpath='{.metadata.finalizers}'
  2. Force delete from the API immediately (skips grace period, may kill requests in process):
    $ kubectl delete pod POD_NAME -n NAMESPACE --grace-period=0 --force
  3. If finalizers are the blocker, remove them (advanced):
    $ kubectl patch pod POD_NAME -n NAMESPACE --type=merge -p '{"metadata":{"finalizers":[]}}'
  4. (Controller-managed) When applicable, avoid instant respawn while you investigate by shutting down the app:
    $ kubectl scale deployment/DEPLOYMENT_NAME --replicas=0 -n NAMESPACE
EXPERT TIP If the Pod is part of a Deployment/ReplicaSet/DaemonSet, scale or pause the controller first or the Pod will be recreated before you can validate the fix.
HEADS-UP Force delete removes the object from the API without waiting for kubelet/containers to exit. The node may clean up later; don’t rely on this for graceful shutdown.

If GitOps (like Argo CD) is in use and it targets Pods directly (avoid that), it can recreate the deleted Pod almost instantly.
IMPORTANT Removing finalizers via patch is a last-resort move! You're bypassing safety mechanisms so make sure you understand what the finalizer is doing (ex, PV protection, custom controllers).
EDITOR'S NOTE Most of the time, it's a finalizer from a storage or service mesh controller. I will generally watch the events in a second terminal to get a view into what's happening while I force delete.

Kubernetes: How to terminate pod (kubectl)

Author: Cassius Adams •
  1. Request a standard graceful pod termination:
    $ kubectl delete pod POD_NAME -n NAMESPACE
  2. Customize the grace period (seconds) and return immediately:
    $ kubectl delete pod POD_NAME --grace-period=20 --wait=false -n NAMESPACE
  3. Wait for deletion to complete (useful in scripts and CI - exits 0 if delete worked, non-0 if it didn't):
    $ kubectl wait --for=delete pod/POD_NAME --timeout=60s -n NAMESPACE
  4. Prefer restarting via controller for managed pods (deployment, daemonset, etc) when appropriate:
    $ kubectl rollout restart deployment/DEPLOYMENT_NAME -n NAMESPACE
EXPERT TIP If a readiness/liveness probe is failing, termination can cascade into flapping. Consider fixing probes and rolling the controller instead of deleting individual Pods.
HEADS-UP Deleting a standalone unmanaged Pod is permanent - there's no controller to recreate it, so redeploy from a manifest. Managed Pods should respawn immediately.
EDITOR'S NOTE My general rule: always gracefully restart first - I force it only when I know exactly what I'm bypassing. I'll sometimes --wait=false so I can keep working in the same terminal when confident.

Kubernetes: How to access container

Author: Cassius Adams •
  1. Open a shell in a specific container:
    $ kubectl exec -it POD_NAME -c CONTAINER_NAME -n NAMESPACE -- sh
    $ kubectl exec -it POD_NAME -c CONTAINER_NAME -n NAMESPACE /bin/bash
  2. Attach to the main process (no shell required - you'll attach directly to the running process of the container). Ctrl+P, Ctrl+Q to detach:
    $ kubectl attach -it POD_NAME -c CONTAINER_NAME -n NAMESPACE
  3. Work with logs for only this container, last 200 lines and live logs:
    $ kubectl logs POD_NAME -c CONTAINER_NAME -n NAMESPACE -f --tail=200
  4. Port-forward to reach a container's listening port (pod-level):
    $ kubectl port-forward pod/POD_NAME LOCAL_PORT:CONTAINER_PORT -n NAMESPACE
  5. Copy files to/from a specific container:
    Local → Container
    $ kubectl cp ./local-file.txt NAMESPACE/POD_NAME:/tmp/local-file.txt -c CONTAINER_NAME
    Container → Local
    $ kubectl cp NAMESPACE/POD_NAME:/tmp/local-file.txt ./local-file.txt -c CONTAINER_NAME
EXPERT TIP Always include -c CONTAINER_NAME for multi-container pods. If you omit it, kubectl will pick a default and it might not be the one you expect.
HEADS-UP Many base images don’t include a shell. If sh and bash are missing, either attach to the process or use an ephemeral debug container targeted at CONTAINER_NAME.
IMPORTANT Treat interactive access as privileged. Don’t paste secrets into terminals, and prefer audit-logged bastion/jumpbox access where possible.
EDITOR'S NOTE In practice, I almost never attach directly to a container. I generally exec in, or switch to debug containers when I need tooling.

Kubernetes: How to access dashboard

Author: Cassius Adams •
  1. Verify Dashboard components (namespace, service, deployment):
    $ kubectl get ns | grep -i dashboard
    $ kubectl get all -n NAMESPACE
    $ kubectl get svc -n NAMESPACE
    $ kubectl get deploy -n NAMESPACE
  2. (If not installed) Apply the recommended manifest (research and pick a version appropriate to your cluster):
    $ kubectl apply -f URL
  3. Port-forward the Dashboard service to your workstation:
    $ kubectl port-forward -n NAMESPACE svc/SERVICE_NAME 8443:443
    Then open: https://127.0.0.1:8443
  4. Generate a login token (Kubernetes 1.24+ ServiceAccount token request):
    $ kubectl -n NAMESPACE create token SERVICE_ACCOUNT_NAME
    Older clusters that still mount Secret-based tokens:
    $ kubectl -n NAMESPACE get sa SERVICE_ACCOUNT_NAME -o jsonpath='{.secrets[0].name}'
    $ kubectl -n NAMESPACE get secret SECRET_NAME -o jsonpath='{.data.token}' | base64 -d
  5. Sign into the Dashboard UI with the token. Use a least-privilege ServiceAccount; avoid cluster-admin unless you're in a lab.
HEADS-UP Some managed platforms disable Dashboard or ship their own. Namespaces and service names can differ - confirm before you assume it's kubernetes-dashboard.
EXPERT TIP Keep Dashboard internal-only and reach it via port-forward or a short-lived bastion. It's simpler and safer than exposing it with an Ingress.
IMPORTANT Don't leave an admin token lying around. Rotate or delete short-lived tokens and restrict RBAC to the minimum verbs/namespaces you need.
EDITOR'S NOTE Honestly, I don't generally use dashboard - I stick to sources provided on managed platforms, like AKS, GKE, OpenShift, etc. That said: if you must expose Dashboard broadly, still please don't. If you do, put it behind SSO, strong network controls, and log everything.

Kubernetes: How to access pod

Author: Cassius Adams •
  1. Interactive shell (single or multi-container with -c):
    $ kubectl exec -it POD_NAME -n NAMESPACE -- sh
    $ kubectl exec -it POD_NAME -c CONTAINER_NAME -n NAMESPACE /bin/bash
  2. Attach to the running process (no shell):
    $ kubectl attach -it POD_NAME -n NAMESPACE
  3. Reach a Pod’s listening port from your machine (pod-level port-forward):
    $ kubectl port-forward pod/POD_NAME LOCAL_PORT:CONTAINER_PORT -n NAMESPACE
    Then browse/curl: http://127.0.0.1:LOCAL_PORT
  4. Access via an ephemeral debug toolbox (won't change app image):
    $ kubectl debug -it POD_NAME -n NAMESPACE --image=busybox --target=CONTAINER_NAME
  5. Copy files to/from the Pod:
    $ kubectl cp ./local.txt NAMESPACE/POD_NAME:/tmp/local.txt
    $ kubectl cp NAMESPACE/POD_NAME:/tmp/remote.txt ./remote.txt
EXPERT TIP For multi-container pods, always pass -c CONTAINER_NAME to avoid landing in a sidecar you don’t care about.
HEADS-UP Some images don’t ship a shell. If sh/bash are missing, use attach or a debug container with the tooling you need.
IMPORTANT Treat interactive access as privileged. Route through a bastion and log commands when possible.
EDITOR'S NOTE When I’m just testing HTTP on a pod, port-forwarding is the fastest way. When I need tooling, an ephemeral debug container is the better option (it will remain for the pod's lifetime), or I exec into the pod's container.

Kubernetes: How to communicate between pods

Author: Cassius Adams •
  1. Prefer Service DNS over Pod IPs (stable, load-balanced). From within pod container:
    $ curl http://SERVICE_NAME.NAMESPACE.svc.cluster.local:PORT/healthz
    Or if pods are guaranteed to reside in the same namespace:
    $ curl http://SERVICE_NAME:PORT/healthz
  2. From a debug toolbox (same namespace or specify FQDN):
    $ kubectl debug -it POD_NAME -n NAMESPACE --image=busybox --target=CONTAINER_NAME
    $ nslookup SERVICE_NAME.NAMESPACE.svc.cluster.local
    $ wget -qO- http://SERVICE_NAME.NAMESPACE.svc.cluster.local:PORT
  3. Inspect endpoints behind a Service:
    $ kubectl get endpoints SERVICE_NAME -n NAMESPACE -o wide
    Or EndpointSlices (version 1.21+ clusters) of a Service:
    $ kubectl get endpointslice -n NAMESPACE -l kubernetes.io/service-name=SERVICE_NAME -o wide
  4. Pod-to-Pod direct by IP (diagnostics only; if you must):
    $ kubectl get pod -o wide -n NAMESPACE | grep POD_NAME
    $ curl http://POD_IP:PORT/healthz
  5. Validate NetworkPolicies aren't blocking traffic:
    $ kubectl get netpol -A
    $ kubectl describe netpol NETPOL_NAME -n NAMESPACE
EXPERT TIP Always test by FQDN first (service.namespace.svc.cluster.local) to avoid search-path surprises when you’re crossing namespaces.
HEADS-UP Pod IPs are ephemeral; reschedules break direct IP calls. Use Services or (for StatefulSets) headless Services for stable discovery.
IMPORTANT NetworkPolicies default-deny in many shops. If traffic fails only across namespaces, it’s probably a policy, not DNS.
EDITOR'S NOTE You may want to keep a tiny toolbox container around (busybox, distroless-curl, etc) just for a quick way to test DNS or HTTP, especially if exec is unavailable or undesirable. It can save valuable time when triaging weird cross-namespace transactions.

I almost always prefer that apps use the namespace-local service name (SERVICE_NAME:PORT) rather than the full FQDN (SERVICE_NAME.NAMESPACE.svc.cluster.local:PORT) wherever possible. It keeps IaC more straightforward, keeps related objects tightly coupled within their namespace, and stays highly portable across multiple clusters, namespace renames, or migrations.
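
For example, in a container's env (a sketch; BACKEND_URL is a hypothetical variable name):
    env:
    - name: BACKEND_URL
      value: "http://SERVICE_NAME:PORT"   # namespace-local form; nothing namespace-specific baked into config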

Kubernetes: How to copy file from pod

Author: Cassius Adams •
  1. Copy a file from a Pod to your local machine:
    $ kubectl cp NAMESPACE/POD_NAME:/path/in/pod/file.txt ./file.txt
  2. Copy a file to a Pod (single-container):
    $ kubectl cp ./local.txt NAMESPACE/POD_NAME:/tmp/local.txt
  3. (Multi-container) Specify the container explicitly:
    $ kubectl cp -c CONTAINER_NAME ./local.txt NAMESPACE/POD_NAME:/tmp/local.txt
    $ kubectl cp -c CONTAINER_NAME NAMESPACE/POD_NAME:/var/log/app.log ./app.log
  4. Copy a directory (contents) out of a Pod:
    $ kubectl cp NAMESPACE/POD_NAME:/var/log/app ./logs
    Copy a directory into a Pod:
    $ kubectl cp ./scripts NAMESPACE/POD_NAME:/opt/scripts
  5. Fallback when kubectl cp fails (ex due to tar flag/version mismatch). Use a tar pipe:
    $ kubectl exec -i POD_NAME -n NAMESPACE -- sh -c "tar -C /path/in/pod -cf - ." > ./archive.tar
    $ tar -C ./extract-here -xf ./archive.tar
EXPERT TIP Quote paths with spaces: "NAMESPACE/POD_NAME:/var/log/my app/log.txt". Add -c CONTAINER_NAME whenever the Pod has sidecars.
HEADS-UP kubectl cp shells out to tar inside the container. If the image has no tar at all, both kubectl cp and the tar-pipe fallback will fail. The tar-pipe trick is only usable when tar is present but kubectl cp itself fails. For truly tar-less images, use kubectl exec with cat for single files, or attach a debug container with tar tools.
IMPORTANT Be mindful of secrets. Don’t copy sensitive files out of Pods unless you absolutely must, and ensure RBAC and audit logs are in place.
EDITOR'S NOTE If you need to run some tests from the container itself, it's good to keep some custom bash scripts on your local machine that you can rely on when using minimal containers. Quickly cp them into the container, exec in and run your testing scripts. I'll often stage files under /tmp first, then copy from there, so I'm not having to remember crazy-long paths. Also, it keeps permissions sane and avoids noisy permission errors from system or read-only paths. All this assumes you are not using a debug container.
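
A minimal sketch of that copy-in-and-run workflow (check-net.sh is a hypothetical local script):
    $ kubectl cp ./check-net.sh NAMESPACE/POD_NAME:/tmp/check-net.sh -c CONTAINER_NAME
    $ kubectl exec -it POD_NAME -c CONTAINER_NAME -n NAMESPACE -- sh /tmp/check-net.sh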

Kubernetes: How to create a pod

Author: Cassius Adams •
  1. Spin up a quick, disposable unmanaged test Pod (no controller):
    $ kubectl run POD_NAME --image=IMAGE --restart=Never -n NAMESPACE --port=CONTAINER_PORT
    Wait for Ready (up to 90 seconds):
    $ kubectl wait --for=condition=Ready pod/POD_NAME -n NAMESPACE --timeout=90s
  2. Create a minimal Pod from YAML (recommended for anything reproducible):
    apiVersion: v1
    kind: Pod
    metadata:
      name: POD_NAME
      namespace: NAMESPACE
      labels:
        app: example
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - name: app
        image: IMAGE
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: CONTAINER_PORT
        resources:
          requests:
            cpu: "100m"
            memory: "128Mi"
          limits:
            cpu: "500m"
            memory: "256Mi"
        securityContext:
          allowPrivilegeEscalation: false
          runAsNonRoot: true
    
    Apply it:
    $ kubectl apply -f pod.yaml -n NAMESPACE
  3. Verify and check the logs:
    $ kubectl get pod POD_NAME -o wide -n NAMESPACE
    $ kubectl logs POD_NAME -n NAMESPACE --tail=100
  4. (Optional) Reach it locally:
    $ kubectl port-forward pod/POD_NAME 8080:CONTAINER_PORT -n NAMESPACE
EXPERT TIP For real apps, use a controller (Deployment/StatefulSet/Job), not an unmanaged Pod. Pods are pets; controllers make cattle.
HEADS-UP Pod names are immutable. If you change the YAML name, Kubernetes creates a new Pod. That’s normal—treat it as replace, not update.
IMPORTANT Avoid :latest. Pin images to a tag or digest, set resource requests/limits, and add probes if the app is long-running.
EDITOR'S NOTE I just said "set resource requests/limits" above and I can't stress this enough. Without them, the Kubernetes scheduler is going to have a terrible time: the pod could use no resources or all the resources, could land on an over-capacity node, cluster autoscalers won't work effectively, and so on. There are plenty of reasons - you should always at least set a request value for memory and cpu. The limit is not as important, but you should still set it (pro tip: set memory request and limit values equal - always).
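
For example, a resources block following that advice (values are placeholders to adjust per app):
    resources:
      requests:
        cpu: "250m"
        memory: "256Mi"
      limits:
        cpu: "1"
        memory: "256Mi"   # memory limit equals the request, per the pro tip above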

Kubernetes: How to delete a pod

Author: Cassius Adams •
  1. Delete a single Pod gracefully (default grace period):
    $ kubectl delete pod POD_NAME -n NAMESPACE
  2. Delete multiple Pods by label:
    $ kubectl delete pod -l app=web -n NAMESPACE
  3. Return control to your terminal immediately (don’t block):
    $ kubectl delete pod POD_NAME --wait=false -n NAMESPACE
    Or wait until it’s fully gone (when scripting):
    $ kubectl wait --for=delete pod/POD_NAME --timeout=60s -n NAMESPACE
  4. Prefer controller-aware restarts when appropriate (Managed Pods):
    $ kubectl rollout restart deployment/DEPLOYMENT_NAME -n NAMESPACE
EXPERT TIP Deleting a Pod behind a Deployment/ReplicaSet/DaemonSet should trigger an immediate replacement. Use this as a lightweight “kick” once you’ve fixed the cause, or to clean up a debug container.
HEADS-UP For StatefulSets, identity (name) matters. Deleting web-0 re-creates web-0 with the same identity. Be mindful of PVC attachments and disruption windows.
IMPORTANT Don’t use --force --grace-period=0 unless you know what you’re bypassing. See Force delete a pod for the procedure.
EDITOR'S NOTE I usually keep a second terminal tailing warnings with kubectl get events -n NAMESPACE --field-selector type=Warning --watch while I delete. It reveals issues fast - especially if the pod can't terminate.

Kubernetes: How to delete all pods

Author: Cassius Adams •
  1. Delete every Pod in a specific namespace (gracefully):
    $ kubectl delete pod --all -n NAMESPACE
    You can also preview what would be deleted first:
    $ kubectl delete pod --all -n NAMESPACE --dry-run=client -o name
  2. Delete Pods across all namespaces (use with extreme caution):
    $ kubectl delete pod --all -A
    Safer pattern (skip system namespaces):
    $ for ns in $(kubectl get ns -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' \
      | grep -Ev '^(kube-system|kube-public|kube-node-lease)$'); do
        kubectl delete pod --all -n "$ns"
      done
  3. Prevent immediate respawn while investigating by shutting down the app (controller-managed workloads):
    $ kubectl scale deployment --all --replicas=0 -n NAMESPACE
    When ready, scale back up or prefer a restart:
    $ kubectl rollout restart deployment --all -n NAMESPACE
  4. Watch deletion and replacement progress live as they happen:
    $ kubectl get pods -w -n NAMESPACE -o wide
EXPERT TIP If the goal is “restart everything,” prefer kubectl rollout restart deployment --all -n NAMESPACE instead of mass-deleting Pods. It’s safer and preserves controller intent.
HEADS-UP Deleting Pods in controller-managed namespaces will cause them to be immediately recreated. For Jobs/CronJobs, deleting Pods can confuse completion tracking, so consider deleting the Job or letting it finish.
IMPORTANT Avoid cluster-wide Pod deletion in production. Never target system namespaces (kube-system, kube-public, kube-node-lease) unless you’re absolutely certain. You should absolutely expect outages if you do.
EDITOR'S NOTE I almost never destroy pods en masse. If I do, it's because I've fixed a systemic or cascading issue and want a clean slate. Even then, I roll by namespace, watch events in another terminal, and try to keep an eye on HorizontalPodAutoscalers (and PodDisruptionBudgets if present).
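
Concretely, the extra things I keep an eye on per namespace while rolling (a quick sketch):
    $ kubectl get hpa -n NAMESPACE
    $ kubectl get pdb -n NAMESPACE
    $ kubectl get events -n NAMESPACE --field-selector type=Warning -w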

Kubernetes: How to delete evicted pods

Author: Cassius Adams •
  1. List evicted Pods in a namespace, for a quick visual:
    $ kubectl get pods -n NAMESPACE | grep -i Evicted
    Cluster-wide:
    $ kubectl get pods -A | grep -i Evicted
  2. Delete all evicted Pods in one namespace (Linux shell):
    $ kubectl get pods -n NAMESPACE | awk '$3=="Evicted"{print $1}' \
      | xargs -r kubectl delete pod -n NAMESPACE
  3. Use JSONPath to select by reason (more precise, good when scripting):
    $ kubectl get pods -n NAMESPACE --field-selector=status.phase=Failed \
      -o jsonpath='{range .items[?(@.status.reason=="Evicted")]}{.metadata.name}{"\n"}{end}' \
      | xargs -r -I{} kubectl delete pod {} -n NAMESPACE
  4. Cluster-wide cleanup (omitting system namespaces):
    $ for ns in $(kubectl get ns -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' \
      | grep -Ev '^(kube-system|kube-public|kube-node-lease)$'); do
        kubectl get pods -n "$ns" | awk '$3=="Evicted"{print $1}' \
          | xargs -r kubectl delete pod -n "$ns"
      done
  5. Investigate why Pods were evicted (memory/disk/node pressure):
    $ kubectl describe node NODE_NAME | sed -n '/Conditions:/,$p'
    $ kubectl top node
    $ kubectl top pod -A --containers
EXPERT TIP Evictions are usually a symptom, not the problem. Look for memory/disk pressure on nodes, pod limits vs. usage, and bursty workloads without headroom (especially if your memory request and limit values are not equal).
HEADS-UP The quick AWK approach keys off the STATUS column. Table output can change between versions; use the JSONPath variant for durable scripts.
IMPORTANT Don't blanket-delete failed Pods from critical namespaces without understanding the blast radius. If a controller is flapping, clean up once after you stabilize the cause.
EDITOR'S NOTE My routine is: skim evictions, spot the common node, check top/describe node, fix pressure, then clean up Pods. Cleaning first just hides the signal/details I need. If it's a resource issue on a node, I tend to spend a bit of time rebalancing the workloads for a more even nodal spread.
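
To find the common node quickly, something along these lines helps (a sketch that builds on the JSONPath selector above):
    $ kubectl get pods -A --field-selector=status.phase=Failed \
      -o jsonpath='{range .items[?(@.status.reason=="Evicted")]}{.spec.nodeName}{"\n"}{end}' \
      | sort | uniq -c | sort -rn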

Kubernetes: How to enter a pod

Author: Cassius Adams •
  1. Open an interactive shell in a Pod (common shells):
    $ kubectl exec -it POD_NAME -n NAMESPACE -- sh
    If the image has bash:
    $ kubectl exec -it POD_NAME -n NAMESPACE -- /bin/bash
  2. Enter a specific container in a multi-container Pod:
    $ kubectl exec -it POD_NAME -c CONTAINER_NAME -n NAMESPACE -- sh
  3. Run a one-off command without a shell (outputs and exits immediately):
    $ kubectl exec POD_NAME -n NAMESPACE -- ls -lah /
  4. If the image lacks a shell or tooling, use an ephemeral debug container:
    $ kubectl debug -it POD_NAME -n NAMESPACE --image=busybox --target=CONTAINER_NAME
    (Same network namespace; doesn’t modify the app image.)
  5. Exit the pod/container cleanly:
    $ exit
EXPERT TIP Use -- to separate kubectl flags from the command you want to run inside the container (prevents kubectl from interpreting your arguments).
HEADS-UP Many images don’t ship a shell. If sh/bash aren’t present, try a direct command (exec POD -- cat /etc/os-release) or switch to a debug container targeted at the app container.
IMPORTANT Interactive access is privileged. Use a bastion/jumpbox, avoid pasting secrets, and clean up any debug containers by recreating the Pod when finished.
EDITOR'S NOTE “Enter” usually means exec with a TTY. I only use attach when I need to interact with the main process directly—and even then, I’m careful. Debug containers are a better option for real troubleshooting.

While it is best practice to use -- to separate the kubectl flags from the command to be executed in the container, I never use it when I target /bin/bash:
kubectl exec -it POD_NAME -c CONTAINER_NAME -n NAMESPACE /bin/bash There is no "/bin/bash" flag for kubectl, so it doesn't get confused (though newer kubectl versions may print a deprecation warning when -- is omitted).

Kubernetes: How to evict pod

Author: Cassius Adams •
EXPERT TIP “Evict” means request a graceful disruption via the Eviction API (respects PodDisruptionBudgets). It’s not the same as delete or --force.
  1. Find the pod + check its controller and labels (useful for PDB matching):
    $ kubectl get pod POD_NAME -n NAMESPACE -o wide
    Optional (see which controller owns it):
    $ kubectl get pod POD_NAME -n NAMESPACE -o jsonpath='{.metadata.ownerReferences[*].kind}{"\t"}{.metadata.ownerReferences[*].name}{"\n"}'
  2. Check PodDisruptionBudgets (PDB) that may select this pod:
    $ kubectl get pdb -A
    Focus on the namespace and labels that match your pod:
    $ kubectl describe pdb PDB_NAME -n NAMESPACE
    (Look for Allowed disruptions > 0. If it’s 0, an eviction will be blocked.)
  3. Request an eviction (Eviction API). This is the portable, per-pod way:
    Eviction is served as the pods/eviction subresource rather than a normal top-level resource, so save the object as JSON and POST it there:
    $ cat <<'EOF' > eviction.json
    {
      "apiVersion": "policy/v1",
      "kind": "Eviction",
      "metadata": { "name": "POD_NAME", "namespace": "NAMESPACE" },
      "deleteOptions": { "gracePeriodSeconds": 30 }
    }
    EOF
    $ kubectl create --raw "/api/v1/namespaces/NAMESPACE/pods/POD_NAME/eviction" -f eviction.json
    Watch progress:
    $ kubectl get pod POD_NAME -n NAMESPACE -w
  4. If the PDB blocks eviction (Allowed disruptions = 0), create headroom so the eviction can proceed without breaking SLOs:
    • Temporarily scale up replicas to increase allowed disruptions:
    $ kubectl scale deployment/DEPLOYMENT_NAME -n NAMESPACE --replicas=DESIRED
    • Or adjust the PDB (carefully) to permit one disruption:
    $ kubectl patch pdb PDB_NAME -n NAMESPACE --type=merge -p '{"spec":{"maxUnavailable":1}}'
    Then retry the eviction step.
  5. Node-level safe eviction (drain pattern). This evicts many pods while honoring PDBs:
    $ kubectl cordon NODE_NAME
    $ kubectl drain NODE_NAME --ignore-daemonsets --delete-emptydir-data --grace-period=30 --timeout=10m 
    When finished:
    $ kubectl uncordon NODE_NAME
HEADS-UP Eviction respects PDBs and priorityClassName. High-priority pods or tight PDBs often yield cannot evict responses until you add capacity or relax budgets.
IMPORTANT Don’t “fix” blocked evictions by --force deleting. That bypasses safety rails and can violate availability guarantees. Adjust replicas/PDBs instead.
EDITOR'S NOTE If I’m evicting a single pod, I almost always check the PDB first. For node work, the cordon → drain → uncordon flow has saved me from pager pings more times than I can count.

Kubernetes: How to force delete a pod

Author: Cassius Adams •
  1. Confirm it’s truly stuck (Terminating, NodeLost, or kubelet unreachable):
    $ kubectl get pod POD_NAME -n NAMESPACE -o wide
    Check events and finalizers:
    $ kubectl describe pod POD_NAME -n NAMESPACE | sed -n '/Finalizers:/,/^Events/p'
    $ kubectl get pod POD_NAME -n NAMESPACE -o jsonpath='{.metadata.finalizers}'
  2. Try a normal delete with a short grace period (preferred first):
    $ kubectl delete pod POD_NAME --grace-period=20 --wait=false -n NAMESPACE
  3. Force remove the API object immediately (bypasses kubelet):
    $ kubectl delete pod POD_NAME -n NAMESPACE --force --grace-period=0
    EXPERT TIP If a controller will recreate it, consider scaling replicas to 0 briefly to stop instant respawns while you investigate.
  4. If finalizers block deletion (advanced), clear them knowingly:
    $ kubectl patch pod POD_NAME -n NAMESPACE --type=merge -p '{"metadata":{"finalizers":[]}}'
    Re-check events after patch:
    $ kubectl get events -n NAMESPACE --field-selector involvedObject.name=POD_NAME --watch
  5. For controller-managed workloads, pause/scale first (avoid churn):
    $ kubectl scale deployment/DEPLOYMENT_NAME --replicas=0 -n NAMESPACE
    # ...perform force delete...
    $ kubectl scale deployment/DEPLOYMENT_NAME --replicas=DESIRED -n NAMESPACE
HEADS-UP --force --grace-period=0 removes the pod from the API immediately; the node may clean up containers later. Don’t mistake this for a graceful shutdown.
IMPORTANT Finalizers exist to protect resources (volumes, service mesh, controllers). Removing them can orphan resources or violate invariants. Know the owner and consequences first.
EDITOR'S NOTE 90% of my “truly stuck” pods are either NodeLost or a storage finalizer hanging. I tail warnings in another terminal while I force-delete - I'll get a fast signal when it finally clears or reappears.

Kubernetes: How to get container name

Author: Cassius Adams •
  1. List all app container names in a single pod (newline-separated):
    $ kubectl get pod POD_NAME -n NAMESPACE \
      -o jsonpath='{range .spec.containers[*]}{.name}{"\n"}{end}'
  2. Include init and ephemeral containers (when present):
    $ kubectl get pod POD_NAME -n NAMESPACE -o jsonpath='\
    {range .spec.initContainers[*]}init: {.name}{"\n"}{end}\
    {range .spec.containers[*]}app: {.name}{"\n"}{end}\
    {range .spec.ephemeralContainers[*]}ephemeral: {.name}{"\n"}{end}'
  3. (JQ fans) Same idea with jq for readability:
    $ kubectl get pod POD_NAME -n NAMESPACE -o json \
      | jq -r '.spec.initContainers[]?.name as $n | "init: \($n)"'
    $ kubectl get pod POD_NAME -n NAMESPACE -o json \
      | jq -r '.spec.containers[].name as $n | "app: \($n)"'
    $ kubectl get pod POD_NAME -n NAMESPACE -o json \
      | jq -r '.spec.ephemeralContainers[]?.name as $n | "ephemeral: \($n)"'
  4. Print pod name + containers for all pods with a label (handy for scripts):
    $ kubectl get pods -n NAMESPACE -l app=web -o jsonpath='\
    {range .items[*]}{.metadata.name}{"\t"}{range .spec.containers[*]}{.name}{" "}{end}{"\n"}{end}'
  5. (Human scan) describe shows containers too:
    $ kubectl describe pod POD_NAME -n NAMESPACE | sed -n '/Containers:/,/Conditions:/p'
EXPERT TIP JSONPath is your friend in scripts. Print just the names you need, then pipe into loops that exec/logs/cp for each container.
HEADS-UP Multi-container pods are common (sidecars, service mesh). Always pass -c CONTAINER_NAME to target the right one.
EDITOR'S NOTE For quick audits across a namespace I’ll run the JSONPath one-liner and eyeball for odd container names—great at spotting accidental sidecars or leftover debug containers.

Kubernetes: How to get logs from pod

Author: Cassius Adams •
  1. Tail a pod’s logs live (show latest 200 lines too):
    $ kubectl logs POD_NAME -n NAMESPACE -f --tail=200
    Multi-container pod (target a specific one):
    $ kubectl logs POD_NAME -c CONTAINER_NAME -n NAMESPACE -f --tail=200
  2. Scope by time, bytes, lines—great for noisy pods:
    $ kubectl logs POD_NAME -n NAMESPACE --since=1h --tail=500
    $ kubectl logs POD_NAME -n NAMESPACE --since-time="2025-08-16T10:00:00Z"
    $ kubectl logs POD_NAME -n NAMESPACE --limit-bytes=500000
  3. Get logs from the previous container instance (after a crash/restart):
    $ kubectl logs POD_NAME -n NAMESPACE --previous
    $ kubectl logs POD_NAME -c CONTAINER_NAME -n NAMESPACE --previous
  4. Logs for many pods at once (label selector). Requires a recent kubectl:
    $ kubectl logs -n NAMESPACE -l app=web --all-containers=true -f --tail=50 --max-log-requests=5
    Controller shortcut (kubectl picks one pod from the Deployment, not all of them; use -l with a label selector if you need logs across multiple pods):
    $ kubectl logs deployment/DEPLOYMENT_NAME -n NAMESPACE --all-containers=true --tail=200
  5. Add timestamps (useful when correlating with events/metrics):
    $ kubectl logs POD_NAME -n NAMESPACE --timestamps --tail=200
EXPERT TIP Pair --previous with kubectl describe pod to inspect restart reasons and probe failures. It’s the fastest path to root cause on CrashLoopBackOff.
HEADS-UP Pod logs are ephemeral. If you need retention or cross-pod aggregation, forward logs to a central system (ELK, Loki, whatever your platform provides).
EDITOR'S NOTE For broad, live triage I’ll start with a label-selector filter and then narrow to specific containers. If the selector flow feels clumsy, switch to *stern or your platform’s log UI.

*stern is a small CLI for multi-pod log tailing. Instead of running kubectl logs on one pod at a time, you give stern a label, name, or regex and it streams all matching pods (and containers) together with a colored prefix so you can tell sources apart. It's nice.
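
If you want to try it, a typical invocation looks something like this (flags can vary by stern version, so check stern --help):
    $ stern --namespace NAMESPACE --selector app=web --tail 50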

Kubernetes: How to get pod ip

Author: Cassius Adams •
  1. Quick view (shows Pod IP, node, and more):
    $ kubectl get pod POD_NAME -n NAMESPACE -o wide
  2. Print only the Pod IP (JSONPath):
    $ kubectl get pod POD_NAME -n NAMESPACE -o jsonpath='{.status.podIP}{"\n"}'
    Dual-stack clusters (all IPs):
    $ kubectl get pod POD_NAME -n NAMESPACE -o jsonpath='{range .status.podIPs[*]}{.ip}{"\n"}{end}'
  3. List names + IPs for many pods (label selector):
    $ kubectl get pods -n NAMESPACE -l app=web -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'
    Or a wide table with node placement:
    $ kubectl get pods -n NAMESPACE -l app=web \
      -o custom-columns=NAME:.metadata.name,IP:.status.podIP,NODE:.spec.nodeName --no-headers
  4. See which pods back a Service (endpoints):
    $ kubectl get endpoints SERVICE_NAME -n NAMESPACE -o wide
    $ kubectl get endpointslice -n NAMESPACE -l kubernetes.io/service-name=SERVICE_NAME -o wide
  5. (Edge cases) hostNetwork pods & pending pods:
    $ kubectl get pod POD_NAME -n NAMESPACE -o jsonpath='{.spec.hostNetwork}{"\n"}{.status.phase}{"\n"}'
    (If hostNetwork=true, Pod IP may equal the node IP. Pending pods won’t have a Pod IP yet.)
EXPERT TIP Use the Service (DNS) whenever possible. Pod IPs are great for diagnostics but they change on reschedules.
HEADS-UP CNI plugins differ. Don’t hardcode assumptions about IP families or ranges; use podIPs for dual-stack awareness.
IMPORTANT Avoid wiring apps to Pod IPs directly. You’ll lose load-balancing and resilience that Services/Headless Services provide.
EDITOR'S NOTE My default is -o wide for humans and JSONPath for scripts. When debugging odd routing, I’ll also peek at EndpointSlices—they tell the truth about where traffic’s actually going.

Kubernetes: How to list containers

Author: Cassius Adams •
EXPERT TIP For scripts, prefer structured output (-o jsonpath / -o json) over grepping tables. It’s faster and won’t break when columns change.
  1. List containers for every Pod in the cluster (NS, Pod, containers):
    $ kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{range .spec.containers[*]}{.name}{" "}{end}{"\n"}{end}'
  2. Include images (handy to spot drift/mismatch):
    $ kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{range .spec.containers[*]}{.name}{": "}{.image}{" "}{end}{"\n"}{end}'
  3. Add init + ephemeral containers (when present) to the listing:
    $ kubectl get pods -A -o jsonpath='\
    {range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\n"}\
    {range .spec.initContainers[*]}  init: {.name}{"\t"}{.image}{"\n"}{end}\
    {range .spec.containers[*]}  app:  {.name}{"\t"}{.image}{"\n"}{end}\
    {range .spec.ephemeralContainers[*]}  eph:  {.name}{"\t"}{.image}{"\n"}{end}\
    {"\n"}{end}'
  4. Filter by label (reduce noise to what you care about):
    $ kubectl get pods -n NAMESPACE -l app=web \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{range .spec.containers[*]}{.name}{" "}{end}{"\n"}{end}'
  5. Readable table for humans (quick scan):
    $ kubectl get pods -A \
      -o custom-columns=NS:.metadata.namespace,POD:.metadata.name,CONTAINERS:.spec.containers[*].name --no-headers
HEADS-UP Ephemeral and init containers won’t appear in basic tables. Use JSON/JSONPath or describe to see all container classes.
IMPORTANT Don’t pipe table output into automation that changes resources (like delete/patch). Always use selectors and structured output to avoid accidental blasts.
EDITOR'S NOTE I don't always use columns, but when I do I'll start with the custom-columns view to spot weirdness fast, then switch to JSONPath when I need a clean feed for a scripting loop or report.

Kubernetes: How to list containers in a pod

Author: Cassius Adams •
  1. Show names of app containers (newline-separated):
    $ kubectl get pod POD_NAME -n NAMESPACE \
      -o jsonpath='{range .spec.containers[*]}{.name}{"\n"}{end}'
  2. Show container name + image (quick drift check):
    $ kubectl get pod POD_NAME -n NAMESPACE \
      -o jsonpath='{range .spec.containers[*]}{.name}{"\t"}{.image}{"\n"}{end}'
  3. Include init and ephemeral containers too:
    $ kubectl get pod POD_NAME -n NAMESPACE -o jsonpath='\
    {range .spec.initContainers[*]}init: {.name}{"\t"}{.image}{"\n"}{end}\
    {range .spec.containers[*]}app:  {.name}{"\t"}{.image}{"\n"}{end}\
    {range .spec.ephemeralContainers[*]}ephemeral: {.name}{"\t"}{.image}{"\n"}{end}'
  4. Include runtime status (ready + restart counts):
    $ kubectl get pod POD_NAME -n NAMESPACE \
      -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\tready="}{.ready}{"\trestarts="}{.restartCount}{"\n"}{end}'
    (Optional) Rough state hints:
    $ kubectl get pod POD_NAME -n NAMESPACE -o jsonpath='\
    {range .status.containerStatuses[*]}{.name}{"\t"}{.state.waiting.reason}{.state.terminated.reason}{.state.running.startedAt}{"\n"}{end}'
  5. Human-readable view via describe:
    $ kubectl describe pod POD_NAME -n NAMESPACE | sed -n '/Containers:/,/Conditions:/p'
EXPERT TIP Need just one value for a follow-up command? JSONPath it and command-substitute: kubectl exec -it POD -c "$(kubectl get pod POD -o jsonpath='{.spec.containers[0].name}')" -- sh
HEADS-UP Some images lack tooling. If you’re here to troubleshoot, consider a targeted debug container (see that section) so you don’t mutate the app image.
EDITOR'S NOTE Sometimes I like to print ready and restarts next to names. It's a super quick sanity check before I dive into the logs or exec into a pod.

Kubernetes: How to list pods

Author: Cassius Adams •
READ THIS FIRST There isn't one single "right" way to list Pods. It depends on your task (looking at health vs. feeding another command) and your target (one namespace, the whole cluster, a label slice, Pods on a node, etc). So this section is intentionally longer than most and shows multiple patterns you can mix and match:
  • Scope: -n NAMESPACE vs. -A
  • Selectors: -l (labels) and --field-selector (node, phase, etc)
  • Outputs: human tables (-o wide, custom-columns) vs. scriptable
    (-o name, -o jsonpath)
  • Utilities: sorting, watching, and showing owners/restarts
Pick the pattern that fits your task, then swap selectors/outputs without changing the overall approach. Placeholders like NAMESPACE, NODE_NAME, and app=web are meant to be replaced with your values.
  1. List pods in a namespace (with useful columns):
    $ kubectl get pods -n NAMESPACE -o wide
    Cluster-wide:
    $ kubectl get pods -A -o wide
  2. Filter by label and surface labels as columns:
    $ kubectl get pods -n NAMESPACE -l app=web -L app,tier -o wide
  3. Select by fields (node, phase, etc):
    $ kubectl get pods -n NAMESPACE --field-selector spec.nodeName=NODE_NAME -o wide
    Pending pods (any namespace):
    $ kubectl get pods -A --field-selector status.phase=Pending
  4. Sort by age (oldest → newest by default). Grab newest/oldest quickly:
    $ kubectl get pods -n NAMESPACE --sort-by=.metadata.creationTimestamp | tail -n 10
    Oldest:
    $ kubectl get pods -n NAMESPACE --sort-by=.metadata.creationTimestamp | head -n 10
  5. Names-only (great for loops and scripts):
    $ kubectl get pods -n NAMESPACE -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
    Include namespace:
    $ kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\n"}{end}'
  6. Watch live (deploys, restarts, reschedules):
    $ kubectl get pods -w -n NAMESPACE -o wide
  7. Custom table with status, restarts, node:
    $ kubectl get pods -n NAMESPACE \
      -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,RESTARTS:.status.containerStatuses[*].restartCount,NODE:.spec.nodeName --no-headers
  8. See what controls each Pod (owner references):
    $ kubectl get pods -n NAMESPACE \
      -o custom-columns=NAME:.metadata.name,OWNER_KIND:.metadata.ownerReferences[0].kind,OWNER:.metadata.ownerReferences[0].name --no-headers
  9. (Quick triage) Show Pods with non-zero restarts (table grep; fine for eyeballing):
    $ kubectl get pods -n NAMESPACE --no-headers | awk '$4+0 > 0'
EXPERT TIP Combine label (-l) + field selectors (--field-selector) to laser-focus big clusters, then switch to JSON/JSONPath when a command needs to feed another command.
HEADS-UP Table columns (and order) can vary across Kubernetes versions and vendors. Don’t rely on column positions in automation—use structured output.
IMPORTANT Be careful mass-operating across -A. If you’re going to feed pod lists into mutating commands, scope by namespace/label and dry-run when possible.
EDITOR'S NOTE I keep a second terminal -w watching pods while I apply changes. It’s a fast feedback loop, especially paired with a warning-only events tail.

Kubernetes: How to pause a pod

Author: Cassius Adams •
HEADS-UP Kubernetes doesn’t have a literal “pause pod” API. Choose the behavior you want: stop serving traffic, stop scheduling new pods, or fully quiesce by scaling to zero. The steps below cover the safest patterns.
  1. Identify how the Pod is managed (Deployment, StatefulSet, DaemonSet, Job, or unmanaged):
    $ kubectl get pod POD_NAME -n NAMESPACE -o jsonpath='{.metadata.ownerReferences[0].kind}{"\t"}{.metadata.ownerReferences[0].name}{"\n"}'
    If empty, it’s likely an unmanaged Pod.
  2. Quiesce a managed service by scaling replicas to zero (recommended):
    $ kubectl scale deployment/DEPLOYMENT_NAME --replicas=0 -n NAMESPACE
    $ kubectl rollout status deployment/DEPLOYMENT_NAME -n NAMESPACE
    StatefulSet:
    $ kubectl scale statefulset/STATEFULSET_NAME --replicas=0 -n NAMESPACE
    $ kubectl rollout status statefulset/STATEFULSET_NAME -n NAMESPACE
  3. Hold the current state (block future rollouts without killing running Pods):
    $ kubectl rollout pause deployment/DEPLOYMENT_NAME -n NAMESPACE
    # ...perform maintenance...
    $ kubectl rollout resume deployment/DEPLOYMENT_NAME -n NAMESPACE
    Useful when you want to freeze changes but keep traffic flowing.
  4. Take the service temporarily out of rotation (no traffic) while keeping Pods running:
    $ kubectl get svc SERVICE_NAME -n NAMESPACE -o yaml > svc.yaml
    # Edit svc.yaml and change .spec.selector to a label that no Pods have (ex: app: hold)
    $ kubectl apply -f svc.yaml -n NAMESPACE
    # Or quick patch (dangerous if you don't restore it):
    $ kubectl patch svc SERVICE_NAME -n NAMESPACE -p '{"spec":{"selector":{"app":"does-not-exist"}}}'
    Restore the original selector when ready.
  5. Respect or relax PodDisruptionBudget (PDB) for maintenance windows:
    $ kubectl get pdb -n NAMESPACE
    # Example PDB allowing zero during a planned outage
    $ cat <<'YAML' | kubectl apply -n NAMESPACE -f -
    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: web-maintenance
    spec:
      selector:
        matchLabels:
          app: web
      maxUnavailable: 100%
    YAML
    Use a time-bounded or temporary PDB change and revert afterward.
  6. (Advanced) Send a POSIX stop signal to freeze a container’s main process (diagnostics only):
    $ kubectl exec -n NAMESPACE POD_NAME -- kill -STOP 1    # freeze
    $ kubectl exec -n NAMESPACE POD_NAME -- kill -CONT 1    # resume
    Use only when you understand probe/timeout effects (see notes).
EXPERT TIP For “pause but keep N pods alive,” use a PDB with minAvailable (or maxUnavailable) and scale down gradually while watching rollout status.
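For example, a PDB that keeps at least two pods available while you scale down gradually (a sketch; the label and name are placeholders):
    $ cat <<'YAML' | kubectl apply -n NAMESPACE -f -
    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: web-keep-two
    spec:
      selector:
        matchLabels:
          app: web
      minAvailable: 2
    YAML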
IMPORTANT The SIGSTOP trick requires permissions inside the container and can trip liveness/readiness probes, causing restarts or removal from Service endpoints. Prefer scale-to-zero + Service changes for production.
EDITOR'S NOTE My go-to “pause” is either scale to zero or pause rollout depending on whether I want to stop traffic or just freeze changes. Service selector patches are fast but sharp - double-check before and after.

Kubernetes: How to remove a pod

Author: Cassius Adams •
  1. Delete a single Pod gracefully:
    $ kubectl delete pod POD_NAME -n NAMESPACE
    Return immediately and let deletion proceed in the background:
    $ kubectl delete pod POD_NAME --wait=false -n NAMESPACE
  2. Remove multiple Pods by label (safer than listing names):
    $ kubectl delete pod -l app=web -n NAMESPACE
    # dry-run first:
    $ kubectl delete pod -l app=web -n NAMESPACE --dry-run=client -o name
  3. Wait until the Pod is actually gone (great for CI/scripts):
    $ kubectl wait --for=delete pod/POD_NAME --timeout=60s -n NAMESPACE
  4. If the Pod is controller-managed, prefer a controller-aware restart:
    $ kubectl rollout restart deployment/DEPLOYMENT_NAME -n NAMESPACE
    $ kubectl rollout restart statefulset/STATEFULSET_NAME -n NAMESPACE
    $ kubectl rollout restart daemonset/DAEMONSET_NAME -n NAMESPACE
    $ kubectl rollout status deployment/DEPLOYMENT_NAME -n NAMESPACE
    Deleting a managed Pod directly will usually trigger an immediate replacement anyway.
  5. Investigate if removal hangs (finalizers, volume detaches, etc):
    $ kubectl describe pod POD_NAME -n NAMESPACE | sed -n '/Finalizers:/,/^Events/p'
    $ kubectl get events -n NAMESPACE --sort-by=.lastTimestamp | tail -n 50
    If you must, see “Force delete a pod” for last-resort steps.
EXPERT TIP Use labels for selection and --dry-run=client -o name to preview what you’ll delete. It saves you from typos and bad copy/paste.
IMPORTANT Unmanaged Pods don’t come back after deletion. Re-apply the manifest or re-run the command that created them. For Jobs/CronJobs, prefer operating on the Job rather than deleting its Pods mid-run.
EDITOR'S NOTE When I’m unsure why deletion is stuck, I keep one terminal tailing warnings: kubectl get events -n NAMESPACE --field-selector type=Warning --watch and another doing the delete/describe loop.

Kubernetes: How to restart a pod

Author: Cassius Adams •
  1. Determine if the Pod is managed (Deployment/StatefulSet/DaemonSet) or unmanaged:
    $ kubectl get pod POD_NAME -n NAMESPACE -o jsonpath='{.metadata.ownerReferences[0].kind}{"\t"}{.metadata.ownerReferences[0].name}{"\n"}'
  2. Preferred (managed): restart via the controller (keeps intent + health checks):
    $ kubectl rollout restart deployment/DEPLOYMENT_NAME -n NAMESPACE
    $ kubectl rollout restart statefulset/STATEFULSET_NAME -n NAMESPACE
    $ kubectl rollout restart daemonset/DAEMONSET_NAME -n NAMESPACE
    $ kubectl rollout status deployment/DEPLOYMENT_NAME -n NAMESPACE
  3. Restart just one Pod from a managed set (quick “kick”):
    $ kubectl delete pod POD_NAME -n NAMESPACE
    $ kubectl get pods -n NAMESPACE -w   # watch replacement
  4. Trigger a restart via annotation (auditable reason in template):
    $ kubectl patch deployment/DEPLOYMENT_NAME -n NAMESPACE \
      -p '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"'"$(date -u +%FT%TZ)"'"}}}}}'
  5. Unmanaged Pod: delete and re-create from a manifest:
    $ kubectl delete pod POD_NAME -n NAMESPACE
    $ kubectl apply -f pod.yaml -n NAMESPACE
    $ kubectl wait --for=condition=Ready pod/POD_NAME -n NAMESPACE --timeout=90s
  6. Verify health and outcome:
    $ kubectl get pod -l app=APP -n NAMESPACE \
      -o custom-columns=NAME:.metadata.name,READY:.status.containerStatuses[*].ready,RESTARTS:.status.containerStatuses[*].restartCount --no-headers
    Optional quick scan + logs:
    $ kubectl get pods -l app=APP -n NAMESPACE -o wide
    $ kubectl logs POD_NAME -n NAMESPACE --tail=200
EXPERT TIP rollout restart respects surge/unavailable settings and readiness probes, but it does not guarantee compliance with PodDisruptionBudgets (PDBs). PDBs apply to evictions, not normal rollouts.
HEADS-UP StatefulSets restart in ordinal order and preserve identity/PVCs. Factor this into maintenance windows and throughput expectations.
IMPORTANT Avoid --force --grace-period=0 unless you accept the blast radius (open connections, writes-in-flight). Prefer graceful patterns and verify with rollout status.
EDITOR'S NOTE If a restart is tied to a config change, bumping the template via an annotation (or env var) leaves breadcrumbs in describe and your Git history. Future you will appreciate it!
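A minimal sketch of the env-var breadcrumb approach (CONFIG_SHA and config.yaml are hypothetical names; any key/value change in the Pod template triggers a rolling restart):
    $ kubectl set env deployment/DEPLOYMENT_NAME -n NAMESPACE CONFIG_SHA=$(sha256sum config.yaml | cut -c1-12)
    $ kubectl rollout status deployment/DEPLOYMENT_NAME -n NAMESPACE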

Kubernetes: How to ssh into pod

Author: Cassius Adams •
HEADS-UP Pods generally do not run SSH daemons, and that’s by design. You should absolutely avoid using SSH in pods. The Kubernetes-native way is kubectl exec, attach, or an ephemeral debug container. Use those instead of SSH.
  1. Open an interactive shell (preferred over SSH):
    $ kubectl exec -it POD_NAME -n NAMESPACE -- sh
    # if the image has bash:
    $ kubectl exec -it POD_NAME -n NAMESPACE -- /bin/bash
  2. Target a specific container (multi-container Pod):
    $ kubectl exec -it POD_NAME -c CONTAINER_NAME -n NAMESPACE -- sh
  3. No shell or tooling? Use an ephemeral debug container (Ephemeral containers share the Pod's network and IPC namespaces by default. They do not share the process namespace unless the Pod was created with shareProcessNamespace: true):
    $ kubectl debug -it POD_NAME -n NAMESPACE --image=busybox --target=CONTAINER_NAME
    Then run diagnostics from that debug shell.
  4. Interact with the main process directly (no shell):
    $ kubectl attach -it POD_NAME -c CONTAINER_NAME -n NAMESPACE
    # caution: input goes to the main process (PID 1), so Ctrl+C may stop the app; kubectl attach has no docker-style detach keys
  5. Run a one-off command (capture output & exit):
    $ kubectl exec POD_NAME -n NAMESPACE -- cat /etc/os-release
  6. (Rare, not recommended) If an SSH server already runs in the container and you must use it, tunnel via port-forward:
    $ kubectl port-forward pod/POD_NAME 2222:22 -n NAMESPACE
    # in another terminal:
    $ ssh -p 2222 USER@127.0.0.1
    Prefer short-lived bastions and audit logging if you go this route.
EXPERT TIP Keep a tiny "toolbox" image (busybox, distroless-curl, or your org’s alpine-with-tools) available in your private registry. It makes kubectl debug frictionless when the app image is minimal.
IMPORTANT Don't bake SSH into application containers!! It increases attack surface, complicates secrets management, and bypasses the native audit trail. Use exec/debug instead.
EDITOR'S NOTE I almost never regret choosing debug over fighting with a bare-bones app container in a pod. It lets me keep the app's runtime container clean while still getting the tools I need to troubleshoot issues. I can stop fighting and instead work with Kubernetes.
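If the Pod was not created with shareProcessNamespace: true and you need to see the app's processes, one option is to debug a disposable copy of the Pod (sketch; the copy's name is arbitrary and the copy does not receive Service traffic):
    $ kubectl debug POD_NAME -it -n NAMESPACE --image=busybox --copy-to=POD_NAME-debug --share-processes -- sh
    # clean up the copy afterwards:
    $ kubectl delete pod POD_NAME-debug -n NAMESPACE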

Kubernetes: How to start a pod

Author: Cassius Adams •
  1. Spin up a quick, unmanaged test Pod (ad hoc):
    $ kubectl run POD_NAME --image=IMAGE --restart=Never -n NAMESPACE --port=CONTAINER_PORT
    $ kubectl wait --for=condition=Ready pod/POD_NAME -n NAMESPACE --timeout=90s
  2. Create a Pod from YAML:
    apiVersion: v1
    kind: Pod
    metadata:
      name: POD_NAME
      namespace: NAMESPACE
      labels:
        app: example
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - name: app
        image: IMAGE
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: CONTAINER_PORT
        resources:
          requests:
            cpu: "100m"
            memory: "128Mi"
          limits:
            cpu: "500m"
            memory: "256Mi"
        securityContext:
          allowPrivilegeEscalation: false
          runAsNonRoot: true
    Apply and then verify:
    $ kubectl apply -f pod.yaml
    $ kubectl get pod POD_NAME -n NAMESPACE -o wide
    $ kubectl logs POD_NAME -n NAMESPACE --tail=100
  3. Start a deployment-managed workload by scaling it up (preferred for apps):
    $ kubectl scale deployment/DEPLOYMENT_NAME --replicas=1 -n NAMESPACE
    $ kubectl rollout status deployment/DEPLOYMENT_NAME -n NAMESPACE
  4. To troubleshoot Pending/CrashLoopBackOff quickly:
    $ kubectl describe pod POD_NAME -n NAMESPACE | sed -n '/Events:/,$p'
    $ kubectl get events -n NAMESPACE --sort-by=.metadata.creationTimestamp | tail -n 50
  5. Reach it locally if you'd like:
    $ kubectl port-forward pod/POD_NAME 8080:CONTAINER_PORT -n NAMESPACE
    # open http://127.0.0.1:8080
EXPERT TIP Real apps should always use a controller (Deployment/StatefulSet/Job). Set requests/limits, probes, and pin images (avoid :latest or you're asking for trouble).
HEADS-UP Pod names are immutable. Changing the YAML’s metadata.name creates a new Pod. Treat it like replace, not update.
IMPORTANT If using private container image registries, wire up imagePullSecrets or a ServiceAccount - most "won't start" issues are simple ImagePullBackOffs.
EDITOR'S NOTE I like to set memory request = memory limit by default - and in most situations you should too. It keeps pod eviction math predictable and reduces noisy OOM (out-of-memory) surprises. Speaking of which, when doing the memory math, keep in mind that many managed Kubernetes offerings (AKS, GKE, etc.) ship system pods that don't follow this practice. If you can update them to sane, stable values, I'd recommend it.
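For the ImagePullBackOff case, a common pattern is to attach an existing pull secret to the namespace's default ServiceAccount so Pods pick it up automatically (sketch; regcred is an assumed secret name, created as in the deploy-docker-image guide below):
    $ kubectl patch serviceaccount default -n NAMESPACE -p '{"imagePullSecrets":[{"name":"regcred"}]}'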

Kubernetes: How to stop a pod

Author: Cassius Adams •
  1. Gracefully terminate a single Pod (default grace period):
    $ kubectl delete pod POD_NAME -n NAMESPACE
    Don’t want to block your terminal:
    $ kubectl delete pod POD_NAME --wait=false -n NAMESPACE
    Script-friendly “wait until gone”:
    $ kubectl wait --for=delete pod/POD_NAME --timeout=60s -n NAMESPACE
  2. Stopping a managed app? Prefer scaling the controller:
    $ kubectl scale deployment/DEPLOYMENT_NAME --replicas=0 -n NAMESPACE
    $ kubectl get deploy/DEPLOYMENT_NAME -n NAMESPACE -o custom-columns=NAME:.metadata.name,REPLICAS:.status.replicas --no-headers
    Scale up later when ready.
  3. Customize the grace period (seconds) if needed:
    $ kubectl delete pod POD_NAME --grace-period=20 --wait=false -n NAMESPACE
    Use --grace-period=0 --force only as a last resort (see “force delete” guide).
  4. Watch for replacement behavior (controller-managed Pods):
    $ kubectl get pods -l app=APP -n NAMESPACE -w
    If you don’t want immediate respawn, scale to zero first (above).
  5. Keep an eye on events during termination:
    $ kubectl get events -n NAMESPACE --field-selector type=Warning --watch
EXPERT TIP For StatefulSets, stop by ordinal (unique identity) or scale carefully. Identity (and PVCs) matter - avoid surprise storage detach/reattach loops.
HEADS-UP Deleting unmanaged Pods is ephemeral. If there’s no controller, nothing will recreate it - re-apply from YAML if you need it back.
IMPORTANT Force deletion (--grace-period=0 --force) skips cleanup and can drop in-flight work. Exhaust graceful options first.
EDITOR'S NOTE I try to always tail warnings in one terminal while stopping things in another. It reveals finalizer or storage issues more quickly.
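For the StatefulSet case, a sketch of stopping only the highest ordinal by scaling down one step (assumes the Pods are labeled app=APP; StatefulSets always remove the highest ordinal first):
    $ CURRENT=$(kubectl get statefulset/STATEFULSET_NAME -n NAMESPACE -o jsonpath='{.spec.replicas}')
    $ kubectl scale statefulset/STATEFULSET_NAME --replicas=$((CURRENT - 1)) -n NAMESPACE
    $ kubectl get pods -l app=APP -n NAMESPACE -w   # the highest ordinal terminates first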

Kubernetes: How to stop pod

Author: Cassius Adams •
  1. Standard graceful stop:
    $ kubectl delete pod POD_NAME -n NAMESPACE
    $ kubectl wait --for=delete pod/POD_NAME --timeout=60s -n NAMESPACE
  2. Stop an entire app (controller-managed):
    $ # Deployment / StatefulSet → scale to zero
    $ kubectl scale deployment/DEPLOYMENT_NAME --replicas=0 -n NAMESPACE
    $ kubectl scale statefulset/STATEFULSET_NAME --replicas=0 -n NAMESPACE
    
    $ # DaemonSet cannot be scaled; either delete it or temporarily disable it
    $ kubectl delete daemonset/DAEMONSET_NAME -n NAMESPACE
    # -- or, temporarily disable by patching nodeSelector to match no nodes:
    $ kubectl patch daemonset/DAEMONSET_NAME -n NAMESPACE --type=merge -p '{"spec":{"template":{"spec":{"nodeSelector":{"non-existing":"true"}}}}}'
  3. Tighten the shutdown window (custom grace):
    $ kubectl delete pod POD_NAME --grace-period=20 --wait=false -n NAMESPACE
  4. Confirm nothing is flapping back:
    $ kubectl get pods -l app=APP -n NAMESPACE -w
    If a controller keeps respawning, scale it to zero first.
  5. Check for blockers (finalizers, volumes, PDBs):
    $ kubectl describe pod POD_NAME -n NAMESPACE | sed -n '/Finalizers:/,/^Events/p'
    $ kubectl get events -n NAMESPACE --sort-by=.metadata.creationTimestamp | tail -n 50
EXPERT TIP If your goal is "restart everything", prefer kubectl rollout restart on the controller over mass-deleting Pods. Safer, and keeps intent.
HEADS-UP Respect PodDisruptionBudgets during maintenance windows. They can block voluntary disruptions if too strict.
IMPORTANT Avoid --force --grace-period=0 unless you know exactly what you're bypassing (writes in flight, in-memory state).
EDITOR'S NOTE My usual flow: scale to zero if a controller owns it, e.g. a Deployment (GitOps can get in the way here) → delete directly if it's a standalone Pod → watch events. I also tail anything relevant in another terminal when applicable. Only when I understand the failure do I consider force options.

Deployments

Deployments manage stateless apps by owning ReplicaSets and rolling Pods forward safely. This section covers image update patterns, controlled restarts, scaling, pausing/resuming rollouts, and quick diagnosis of stuck or failing rollouts.

Deployments — FAQ

When should I use rollout restart vs. deleting a single pod?

Use kubectl rollout restart to refresh all Pods under a Deployment with health gates and surge/unavailable limits. Delete one Pod only for a targeted “kick” after you’ve fixed a pod/node-specific issue. See How to restart a pod.

Why is my rollout stuck at 0% / 25% / 75%?

Check kubectl rollout status, then inspect failing Pods for probe errors, missing Secrets/ConfigMaps, or quota limits. PDBs/HPAs can also constrain progress. See View pod events, Get pod logs, and Describe a pod.

Can I pause a rollout to stage changes safely?

Yes—kubectl rollout pause deployment/NAME, apply spec changes, then kubectl rollout resume deployment/NAME. This holds ReplicaSet scaling while you prep changes, then continues the rollout when ready.

How do I roll back to a previous version?

Use kubectl rollout undo deployment/NAME (optionally --to-revision=N). Keep images immutable and record reasons (annotation or commit message) so the “why” is obvious during post-mortems.

Is deleting a Deployment a good way to “turn off” an app?

Prefer kubectl scale deployment/NAME --replicas=0 so you keep history and can bounce back instantly. Delete only when the app is truly gone. See How to delete a deployment.

Should I use a Deployment for scheduled or one-off work?

No—use Job for finite tasks and CronJob for schedules. See Trigger cronjob (kubectl) and Delete job.

Kubernetes: How to trigger cronjob (kubectl)

Author: Cassius Adams •
HEADS-UP You don’t “run a CronJob” directly. You create a Job from it. ConcurrencyPolicy on the CronJob doesn’t control Jobs you launch manually—so mind overlaps.
  1. Find your CronJob (and check if it’s suspended):
    $ kubectl get cronjob -n NAMESPACE -o wide
    $ kubectl get cronjob CRONJOB_NAME -n NAMESPACE -o jsonpath='{.spec.suspend}{"\n"}'
    Temporarily pause the schedule if you want to avoid overlap:
    $ kubectl patch cronjob CRONJOB_NAME -n NAMESPACE -p '{"spec":{"suspend":true}}'
    # ...resume later...
    $ kubectl patch cronjob CRONJOB_NAME -n NAMESPACE -p '{"spec":{"suspend":false}}'
  2. Create a one-off Job from the CronJob (manual trigger):
    $ JOB="manual-$(date -u +%Y%m%dT%H%M%SZ)"
    $ kubectl create job --from=cronjob/CRONJOB_NAME "$JOB" -n NAMESPACE
    (Optional) Tag it for easy discovery:
    $ kubectl label job "$JOB" -n NAMESPACE trigger=manual origin=cronjob/CRONJOB_NAME
  3. Watch pods and stream logs:
    $ kubectl get pods -l job-name="$JOB" -n NAMESPACE -w
    $ kubectl logs job/"$JOB" -n NAMESPACE -f   # follows one pod chosen by kubectl; for parallel Jobs it prints which pod it picked
    Wait for it to finish (success or fail):
    $ kubectl wait --for=condition=complete job/"$JOB" -n NAMESPACE --timeout=30m \
      || kubectl describe job "$JOB" -n NAMESPACE
  4. (Optional) Backfill a few runs (simple loop; mind concurrency):
    $ for i in 1 2 3; do
        kubectl create job --from=cronjob/CRONJOB_NAME "manual-$(date -u +%Y%m%dT%H%M%SZ)-$i" -n NAMESPACE
      done
  5. Clean up finished Jobs (or let TTL handle it):
    $ kubectl delete job "$JOB" -n NAMESPACE
    Prefer automatic GC on completion:
    $ kubectl patch job "$JOB" -n NAMESPACE -p '{"spec":{"ttlSecondsAfterFinished":600}}'
EXPERT TIP Use kubectl logs job/JOB_NAME so you don't have to look up pod names. If the Job retries and creates a new pod, re-run the command to follow the latest one.
IMPORTANT Manual Jobs created from a CronJob will ignore the CronJob’s startingDeadlineSeconds and concurrencyPolicy. If overlap is risky, suspend the CronJob first.
EDITOR'S NOTE I sometimes label manual runs (trigger=manual) so I can clean them up later without touching scheduled history.
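A sketch of that label-based cleanup, assuming you tagged manual runs with trigger=manual as in step 2:
    $ kubectl delete job -l trigger=manual -n NAMESPACE --dry-run=client -o name
    $ kubectl delete job -l trigger=manual -n NAMESPACE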

Kubernetes: How to delete a deployment

Author: Cassius Adams •
  1. Confirm what you’re about to remove (and back it up if needed):
    $ kubectl get deploy DEPLOYMENT_NAME -n NAMESPACE -o wide
    $ kubectl get rs,pod -l app=APP -n NAMESPACE
    $ kubectl get deploy DEPLOYMENT_NAME -n NAMESPACE -o yaml > backup-deployment.yaml
  2. (Safer) Scale to zero first so traffic drains gracefully:
    $ kubectl scale deployment/DEPLOYMENT_NAME --replicas=0 -n NAMESPACE
    $ kubectl wait --for=delete pod -l app=APP -n NAMESPACE --timeout=2m
  3. Delete the Deployment (choose cascade behavior):
    $ kubectl delete deployment DEPLOYMENT_NAME -n NAMESPACE
    Wait until everything under it is gone (foreground):
    $ kubectl delete deployment DEPLOYMENT_NAME --cascade=foreground -n NAMESPACE
    Keep ReplicaSets/Pods running (orphan—rare, be careful):
    $ kubectl delete deployment DEPLOYMENT_NAME --cascade=orphan -n NAMESPACE
  4. Bulk delete by label (preview first):
    $ kubectl delete deploy -l app=APP -n NAMESPACE --dry-run=client -o name
    $ kubectl delete deploy -l app=APP -n NAMESPACE
  5. Tidy up related resources if the app is truly gone:
    $ kubectl get svc,ingress,hpa -l app=APP -n NAMESPACE
    # remove what you no longer need
EXPERT TIP Running under GitOps? Remove or change the manifest in Git first—or the controller (Argo/Flux) will resurrect the Deployment you just deleted.
HEADS-UP PodDisruptionBudgets and HPAs may affect scale-down behavior. Foreground deletion waits for children; background returns immediately.
IMPORTANT --cascade=orphan leaves Pods/ReplicaSets alive without a controller. That’s almost never what you want in production—expect surprises.
EDITOR'S NOTE A good process: scale to zero → watch events → delete. I try to only orphan for forensic or odd cases where I want Pods to keep running while I replace the controller.
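If you did orphan, a quick way to spot controller-less leftovers (assumes app=APP labels; OWNER prints <none> when no ownerReference remains):
    $ kubectl get rs -l app=APP -n NAMESPACE -o custom-columns=NAME:.metadata.name,OWNER:.metadata.ownerReferences[0].name --no-headers
    $ kubectl get pod -l app=APP -n NAMESPACE -o custom-columns=NAME:.metadata.name,OWNER:.metadata.ownerReferences[0].name --no-headers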

Kubernetes: How to delete job

Author: Cassius Adams •
  1. Identify the Job and its Pods (Jobs label Pods with job-name=...):
    $ kubectl get job JOB_NAME -n NAMESPACE -o wide
    $ kubectl get pods -l job-name=JOB_NAME -n NAMESPACE -o wide
  2. (Optional) Stop further work before deletion:
    $ kubectl patch job JOB_NAME -n NAMESPACE -p '{"spec":{"suspend":true,"parallelism":0}}'
    $ kubectl delete pod -l job-name=JOB_NAME -n NAMESPACE   # terminate running Pods now
  3. Delete the Job (default background cascade removes its Pods):
    $ kubectl delete job JOB_NAME -n NAMESPACE
    $ kubectl wait --for=delete job/JOB_NAME -n NAMESPACE --timeout=60s
    Keep Pods running (orphan—use sparingly for forensics):
    $ kubectl delete job JOB_NAME --cascade=orphan -n NAMESPACE
    # Pods remain; clean up later with:
    $ kubectl delete pod -l job-name=JOB_NAME -n NAMESPACE
  4. Bulk-delete finished Jobs (preview first):
    $ kubectl get jobs -n NAMESPACE -o jsonpath='{range .items[?(@.status.succeeded>=1)]}{.metadata.name}{"\n"}{end}' \
      | xargs -r -I{} kubectl delete job {} -n NAMESPACE
  5. Prefer automatic cleanup with TTL:
    $ kubectl patch job JOB_NAME -n NAMESPACE -p '{"spec":{"ttlSecondsAfterFinished":3600}}'
EXPERT TIP Need logs after deletion? Stream them from your log platform—not Pods. Deleting the Job removes its Pods (unless orphaned).
HEADS-UP Jobs created by a CronJob will be re-created by the schedule in the future. If you want to stop the schedule too, suspend the CronJob (kubectl patch cronjob/CRONJOB_NAME -n NAMESPACE -p '{"spec":{"suspend":true}}').
IMPORTANT Orphaning Pods can burn CPU and keep writing to storage. If you do orphan for analysis, set a plan (and a timer) to clean them up.
EDITOR'S NOTE I try to only use orphaning when I need to keep a hot replica around for a short and focused investigation. Otherwise I'll just delete the Job and move on.
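The same bulk pattern works for failed runs; a sketch mirroring step 4 but filtering on .status.failed (preview first, then pipe to delete):
    $ kubectl get jobs -n NAMESPACE -o jsonpath='{range .items[?(@.status.failed>=1)]}{.metadata.name}{"\n"}{end}'
    $ kubectl get jobs -n NAMESPACE -o jsonpath='{range .items[?(@.status.failed>=1)]}{.metadata.name}{"\n"}{end}' \
      | xargs -r -I{} kubectl delete job {} -n NAMESPACE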

Kubernetes: How to deploy a pod

Author: Cassius Adams •
HEADS-UP Pods are cattle only when a controller owns them. For anything long-running, prefer a Deployment (or StatefulSet/DaemonSet). Standalone Pods are great for quick tests and one-offs.
  1. Spin up a quick unmanaged test Pod (no controller):
    $ kubectl run POD_NAME --image=REGISTRY/IMAGE:TAG --restart=Never -n NAMESPACE --port=CONTAINER_PORT
    Wait for readiness:
    $ kubectl wait --for=condition=Ready pod/POD_NAME -n NAMESPACE --timeout=90s
  2. Create a reproducible Pod via YAML (recommended even for tests):
    apiVersion: v1
    kind: Pod
    metadata:
      name: POD_NAME
      namespace: NAMESPACE
      labels:
        app: example
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - name: app
        image: REGISTRY/IMAGE:TAG
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: CONTAINER_PORT
        resources:
          requests:
            cpu: "100m"
            memory: "128Mi"
          limits:
            cpu: "500m"
            memory: "256Mi"
        readinessProbe:
          httpGet:
            path: /healthz
            port: CONTAINER_PORT
          initialDelaySeconds: 5
          periodSeconds: 5
        livenessProbe:
          httpGet:
            path: /healthz
            port: CONTAINER_PORT
          initialDelaySeconds: 10
          periodSeconds: 10
        securityContext:
          runAsNonRoot: true
          allowPrivilegeEscalation: false
    Apply and verify:
    $ kubectl apply -f pod.yaml
    $ kubectl get pod POD_NAME -o wide -n NAMESPACE
    $ kubectl logs POD_NAME -n NAMESPACE --tail=100
  3. (Optional) Reach it locally (handy for smoke tests):
    $ kubectl port-forward pod/POD_NAME 8080:CONTAINER_PORT -n NAMESPACE
    # open http://127.0.0.1:8080
  4. For real apps, move to a Deployment (controller-managed):
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: APP_NAME
      namespace: NAMESPACE
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: example
      template:
        metadata:
          labels:
            app: example
        spec:
          containers:
          - name: app
            image: REGISTRY/IMAGE:TAG
            ports:
            - containerPort: CONTAINER_PORT
    Apply and watch rollout:
    $ kubectl apply -f deployment.yaml
    $ kubectl rollout status deployment/APP_NAME -n NAMESPACE
EXPERT TIP Pin images to tags or digests and always set resource requests. Probes keep rollouts honest and reduce “works on my machine” surprises.
IMPORTANT Avoid :latest in production and don’t treat unmanaged Pods as durable. A node eviction or restart will take it with it.
EDITOR'S NOTE I'll use kubectl run or a tmp yaml file and apply -f for quick repros, but if it's going to stick around or something bigger it should live in Git as YAML. Day-2 gets much easier when everything is declarative.

Kubernetes: How to deploy docker image

Author: Cassius Adams •
  1. Create a Deployment from an image (one-liner or scaffold YAML):
    $ kubectl create deployment APP_NAME --image=REGISTRY/IMAGE:TAG -n NAMESPACE
    Prefer YAML for review/history:
    $ kubectl create deployment APP_NAME --image=REGISTRY/IMAGE:TAG -n NAMESPACE \
      --dry-run=client -o yaml > deployment.yaml
    $ kubectl apply -f deployment.yaml
    $ kubectl rollout status deployment/APP_NAME -n NAMESPACE
  2. (Private registries) Create an imagePullSecrets and reference it (email is optional):
    $ kubectl create secret docker-registry regcred -n NAMESPACE \
      --docker-server=REGISTRY_URL \
      --docker-username=USERNAME \
      --docker-password=PASSWORD \
      --docker-email=YOU@example.com
    Add to your Pod template:
    spec:
      template:
        spec:
          imagePullSecrets:
          - name: regcred
  3. Expose the Deployment (ClusterIP for internal, NodePort/LoadBalancer for external):
    $ kubectl expose deployment/APP_NAME -n NAMESPACE --port=80 --target-port=CONTAINER_PORT --type=ClusterIP
    # or
    $ kubectl expose deployment/APP_NAME -n NAMESPACE --port=80 --target-port=CONTAINER_PORT --type=LoadBalancer
  4. Update to a new image version (safe rolling):
    $ kubectl set image deployment/APP_NAME APP_NAME=REGISTRY/IMAGE:NEWTAG -n NAMESPACE
    $ kubectl rollout status deployment/APP_NAME -n NAMESPACE
    Pin by digest for immutability:
    image: REGISTRY/IMAGE@sha256:DEADBEEF...  # preferred over mutable tags
  5. Tune replicas and sanity-check health:
    $ kubectl scale deployment/APP_NAME --replicas=3 -n NAMESPACE
    $ kubectl get deploy/APP_NAME -n NAMESPACE
    $ kubectl get pods -l app=APP_NAME -n NAMESPACE -o custom-columns=NAME:.metadata.name,READY:.status.containerStatuses[*].ready --no-headers
EXPERT TIP Add readinessProbe and livenessProbe before the first rollout. If your app isn’t ready, Kubernetes will pause the rollout rather than serving bad traffic.
HEADS-UP New images that 404/403 on pull will stick Pods in ImagePullBackOff. Check the image name, tag/digest, and imagePullSecrets scope (namespace!) first.
IMPORTANT Never overwrite tags in-place in production registries. Use immutable tags or digests to ensure rollbacks actually roll back.
EDITOR'S NOTE I like to scaffold with kubectl create ... --dry-run=client -o yaml, then commit the manifest.
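To see which digest your Pods actually resolved (useful when deciding what to pin), the imageID in container status typically carries it (sketch; assumes the app=APP_NAME label that kubectl create deployment applies):
    $ kubectl get pods -l app=APP_NAME -n NAMESPACE \
      -o jsonpath='{.items[0].status.containerStatuses[0].imageID}{"\n"}'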

Kubernetes: How to disable cronjob

Author: Cassius Adams •
  1. Suspend the CronJob (stop future schedules):
    $ kubectl patch cronjob/CRONJOB_NAME -n NAMESPACE --type=merge -p '{"spec":{"suspend":true}}'
    Verify:
    $ kubectl get cronjob/CRONJOB_NAME -n NAMESPACE -o custom-columns=NAME:.metadata.name,SUSPEND:.spec.suspend,SCHEDULE:.spec.schedule --no-headers
  2. (Optional) Clean up active/running Jobs started by this CronJob:
    $ kubectl get jobs -n NAMESPACE -o json \
      | jq -r '.items[] | select(any(.metadata.ownerReferences[]?; .kind=="CronJob" and .name=="CRONJOB_NAME")) | .metadata.name'
    (kubectl's JSONPath filters don't support &&, so jq is the simpler tool here.)
    Delete them (if it’s safe to stop work-in-progress):
    $ kubectl get jobs -n NAMESPACE -o json \
      | jq -r '.items[] | select(any(.metadata.ownerReferences[]?; .kind=="CronJob" and .name=="CRONJOB_NAME")) | .metadata.name' \
      | xargs -r -I{} kubectl delete job {} -n NAMESPACE
  3. Audit the schedule and history:
    $ kubectl describe cronjob/CRONJOB_NAME -n NAMESPACE | sed -n '/Schedule:/,/Events:/p'
  4. Resume later:
    $ kubectl patch cronjob/CRONJOB_NAME -n NAMESPACE --type=merge -p '{"spec":{"suspend":false}}'
    (Optional) Trigger a manual run while still suspended (for testing):
    $ JOB=CRONJOB_NAME-manual-$(date +%s)
    $ kubectl create job --from=cronjob/CRONJOB_NAME "$JOB" -n NAMESPACE
    $ kubectl get jobs -n NAMESPACE
    $ kubectl logs job/"$JOB" -n NAMESPACE --tail=200
EXPERT TIP Set labels on spec.jobTemplate.metadata.labels in the CronJob. They propagate to Jobs so you can filter or bulk delete cleanly later.
HEADS-UP Suspending prevents new runs only. Any currently running Jobs keep running unless you delete them.
IMPORTANT Deleting Jobs can interrupt in-flight work. Confirm idempotency or compensate upstream/downstream before you pull the plug.
EDITOR'S NOTE I try (sometimes forget) to suspend first, then delete active Jobs only when I'm sure it's safe. But when I'm in doubt, I'll scale consumers to zero to avoid partial processing during maintenance.
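For a maintenance window, a sketch for suspending every CronJob in the namespace at once and resuming later (preview with kubectl get cronjobs first):
    $ kubectl get cronjobs -n NAMESPACE -o name \
      | xargs -r -I{} kubectl patch {} -n NAMESPACE --type=merge -p '{"spec":{"suspend":true}}'
    # ...and to resume:
    $ kubectl get cronjobs -n NAMESPACE -o name \
      | xargs -r -I{} kubectl patch {} -n NAMESPACE --type=merge -p '{"spec":{"suspend":false}}'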

Kubernetes: How to edit deployment

Author: Cassius Adams •
HEADS-UP Live edits can drift from Git if you use GitOps (Argo CD/Flux). Prefer editing YAML in source control, then kubectl apply. Use kubectl edit for emergencies and quick fixes only.
  1. Quick, inline change (opens your $EDITOR):
    $ kubectl edit deployment/DEPLOYMENT_NAME -n NAMESPACE
    Save + close to kick a rollout (changes to spec.template create a new ReplicaSet).
  2. Patch specific fields (surgical change without full YAML):
    $ kubectl patch deployment/DEPLOYMENT_NAME -n NAMESPACE \
      --type=merge -p '{"spec":{"replicas":4}}'
    JSON Patch (replace image):
    $ kubectl patch deployment/DEPLOYMENT_NAME -n NAMESPACE --type=json \
      -p='[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"REGISTRY/IMAGE:TAG"}]'
  3. Declarative (preferred): diff, annotate, apply:
    $ kubectl diff -f deployment.yaml -n NAMESPACE
    $ kubectl annotate -f deployment.yaml -n NAMESPACE \
      kubernetes.io/change-cause="Explain what changed" --overwrite
    $ kubectl apply -f deployment.yaml -n NAMESPACE
    $ kubectl rollout status deployment/DEPLOYMENT_NAME -n NAMESPACE
  4. Pause to batch multiple edits, then resume:
    $ kubectl rollout pause deployment/DEPLOYMENT_NAME -n NAMESPACE
    # (apply several spec changes safely)
    $ kubectl apply -f deployment.yaml -n NAMESPACE
    $ kubectl rollout resume deployment/DEPLOYMENT_NAME -n NAMESPACE
    $ kubectl rollout status deployment/DEPLOYMENT_NAME -n NAMESPACE
  5. Verify history and undo if needed:
    $ kubectl rollout history deployment/DEPLOYMENT_NAME -n NAMESPACE
    $ kubectl rollout undo deployment/DEPLOYMENT_NAME -n NAMESPACE --to-revision=REVISION_NUMBER
  6. Common edits (examples):
    $ kubectl set image deployment/DEPLOYMENT_NAME app=REGISTRY/IMAGE:NEWTAG -n NAMESPACE
    $ kubectl set env deployment/DEPLOYMENT_NAME FEATURE_FLAG=true -n NAMESPACE
    $ kubectl rollout status deployment/DEPLOYMENT_NAME -n NAMESPACE
EXPERT TIP Use kubectl diff before apply. It’s the fastest safety check against fat-finger changes in production.
IMPORTANT Don’t hand-edit status or generated fields. Only change spec. If GitOps is enforcing, your live change may be reverted immediately.
EDITOR'S NOTE Here's what I'll usually try to do: pause → apply a few related tweaks → resume → watch rollout → check logs/events. If the change is business-critical, I leave a well-explained change cause for the next person's (or my own future) sanity.
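Another common edit worth keeping in the toolbox: adjusting resources on one container without opening the full YAML (sketch; the container name app is an assumption, check .spec.template.spec.containers first):
    $ kubectl set resources deployment/DEPLOYMENT_NAME -n NAMESPACE -c app \
      --requests=cpu=100m,memory=128Mi --limits=cpu=500m,memory=256Mi
    $ kubectl rollout status deployment/DEPLOYMENT_NAME -n NAMESPACE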

Kubernetes: How to restart a deployment

Author: Cassius Adams •
  1. Trigger a rolling restart (pod template timestamp bump):
    $ kubectl rollout restart deployment/DEPLOYMENT_NAME -n NAMESPACE
    $ kubectl rollout status deployment/DEPLOYMENT_NAME -n NAMESPACE
    Equivalent annotation method (visible in history):
    $ kubectl patch deployment/DEPLOYMENT_NAME -n NAMESPACE \
      -p '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"'"$(date -u +%Y-%m-%dT%H:%M:%SZ)"'"}}}}}'
  2. Check disruption controls (avoid surprises):
    $ kubectl get deploy/DEPLOYMENT_NAME -n NAMESPACE -o jsonpath='{.spec.strategy.rollingUpdate.maxUnavailable}{"\t"}{.spec.strategy.rollingUpdate.maxSurge}{"\n"}'
    $ kubectl get pdb -n NAMESPACE
    Zero-downtime pattern: maxUnavailable=0, maxSurge=1+.
  3. Watch replacement Pods and sanity-check readiness:
    $ kubectl get pods -l app=APP_NAME -n NAMESPACE -w
    $ kubectl get pods -l app=APP_NAME -n NAMESPACE \
      -o custom-columns=NAME:.metadata.name,READY:.status.containerStatuses[*].ready,RESTARTS:.status.containerStatuses[*].restartCount --no-headers
  4. (Optional) Quiesce by scaling to zero, then back up:
    $ kubectl scale deployment/DEPLOYMENT_NAME --replicas=0 -n NAMESPACE
    $ kubectl scale deployment/DEPLOYMENT_NAME --replicas=DESIRED -n NAMESPACE
    $ kubectl rollout status deployment/DEPLOYMENT_NAME -n NAMESPACE
    Useful during maintenance windows when you prefer a clean stop/start rather than a rolling flip.
  5. Troubleshoot a stuck restart:
    $ kubectl get events -n NAMESPACE --field-selector involvedObject.kind=Pod --sort-by=.lastTimestamp | tail -n 50
    $ kubectl describe deploy/DEPLOYMENT_NAME -n NAMESPACE | sed -n '/Conditions:/,$p'
    $ kubectl get rs -n NAMESPACE -l app=APP_NAME -o wide
EXPERT TIP If you’re restarting to pick up a ConfigMap/Secret change, consider bumping an env var (e.g., CONFIG_SHA) in the Pod template so the reason is explicit in Git + history.
HEADS-UP PodDisruptionBudgets and maxUnavailable=0 can slow or block restarts if you don’t have extra capacity. Pre-scale or temporarily relax constraints if needed.
EDITOR'S NOTE I default to rollout restart. If I need to coordinate with DB migrations or cache warmups, I find it better to pause, prep, then resume for more control.

Kubernetes: How to restart a job

Author: Cassius Adams •
HEADS-UP Jobs don’t have a “restart” button. You either let the controller re-run Pods (if not complete), or you delete and recreate the Job. Make your tasks idempotent.
  1. Check current state (Completed/Failed/Active/backoff):
    $ kubectl describe job/JOB_NAME -n NAMESPACE | sed -n '/Completions:/,/Events:/p'
    Quick JSON peek:
    $ kubectl get job/JOB_NAME -n NAMESPACE -o jsonpath='{.status.active}{"\t"}{.status.succeeded}{"\t"}{.status.failed}{"\n"}'
  2. If the Job is still running but a Pod wedged, nudge it:
    $ kubectl delete pod -l job-name=JOB_NAME -n NAMESPACE
    # The Job controller will create replacement Pods (subject to backoffLimit/activeDeadlineSeconds)
  3. Re-run a finished Job (delete + recreate from YAML):
    $ kubectl delete job/JOB_NAME -n NAMESPACE
    $ kubectl apply -f job.yaml -n NAMESPACE
    $ kubectl get pods -l job-name=JOB_NAME -n NAMESPACE -w
    Prefer a unique name per run:
    $ kubectl apply -f - <<EOF
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: JOB_NAME-$(date +%s)
      namespace: NAMESPACE
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: worker
            image: REGISTRY/IMAGE:TAG
            args: ["--do-the-thing"]
    EOF
  4. Trigger a one-off run from a CronJob template:
    $ RUN="CRONJOB_NAME-manual-$(date +%s)"   # capture the name once so the logs command targets the same run
    $ kubectl create job --from=cronjob/CRONJOB_NAME "$RUN" -n NAMESPACE
    $ kubectl get jobs -n NAMESPACE
    $ kubectl logs job/"$RUN" -n NAMESPACE --tail=200
  5. Keep the cluster tidy (auto-clean finished Jobs/Pods):
    spec:
      ttlSecondsAfterFinished: 3600  # auto-delete one hour after completion
EXPERT TIP Large parallel Jobs? Use labels on spec.template.metadata.labels (e.g., job-stage=batch1) so you can surgically re-run subsets by deleting only the matching Pods.
IMPORTANT activeDeadlineSeconds and backoffLimit can block retries. If exceeded, delete and recreate the Job (or adjust limits) to run again.
EDITOR'S NOTE I only restart Jobs that are idempotent. If the task writes externally (ex: to DB or object store), I'll verify the retry settings - otherwise it'll double-apply work.

Kubernetes: How to restart statefulset

Author: Cassius Adams •
HEADS-UP StatefulSets preserve identity (ordinal + stable network + PVC). Restarts happen in order. Respect any data semantics before you start flipping replicas.
  1. Check update strategy (RollingUpdate vs OnDelete) and partitions:
    $ kubectl get statefulset/STATEFULSET_NAME -n NAMESPACE -o jsonpath='{.spec.updateStrategy.type}{"\t"}{.spec.updateStrategy.rollingUpdate.partition}{"\n"}'
    If it’s OnDelete, you must delete pods ordinal-by-ordinal (see step 4).
  2. Standard rolling restart (RollingUpdate strategy):
    $ kubectl rollout restart statefulset/STATEFULSET_NAME -n NAMESPACE
    $ kubectl rollout status statefulset/STATEFULSET_NAME -n NAMESPACE
    Watch ordinals progress:
    $ kubectl get pods -l statefulset.kubernetes.io/pod-name -n NAMESPACE -w
  3. (Controlled waves) Use partition to stage restarts:
    $ kubectl patch statefulset/STATEFULSET_NAME -n NAMESPACE --type=merge \
      -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":N}}}}'
    Lower partition stepwise to roll older ordinals later.
  4. OnDelete strategy (or force a single ordinal replace):
    $ kubectl delete pod STATEFULSET_NAME-ORDINAL -n NAMESPACE
    $ kubectl wait --for=condition=Ready pod/STATEFULSET_NAME-ORDINAL -n NAMESPACE --timeout=5m
    Proceed highest ordinal → lowest to minimize dependency impact.
  5. Verify PVC safety and readiness before moving to next ordinal:
    $ kubectl get pvc -l app=APP_NAME -n NAMESPACE
    $ kubectl describe pod STATEFULSET_NAME-ORDINAL -n NAMESPACE | sed -n '/Conditions:/,/Events:/p'
EXPERT TIP For databases, consider partition + readiness gates and explicit preStop hooks. It’s slower but avoids “two leaders” or unflushed WAL surprises.
IMPORTANT Don't scale a StatefulSet down if pods own unique PVCs you still need. Data loss by “cleanup enthusiasm” is pretty much a rite of passage - don’t make it yours.
EDITOR'S NOTE I restart from the highest (last) ordinal down (app-2, app-1, app-0). If one ordinal won't come up cleanly, I stop there and fix root cause before touching the rest.
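A sketch of the staged-wave approach from step 3, assuming 3 replicas (ordinals 0-2): hold the partition at the highest ordinal, restart, verify, then lower it to let the rest follow:
    $ kubectl patch statefulset/STATEFULSET_NAME -n NAMESPACE --type=merge \
      -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
    $ kubectl rollout restart statefulset/STATEFULSET_NAME -n NAMESPACE   # only ordinal 2 is replaced
    # once it's healthy, release the remaining ordinals:
    $ kubectl patch statefulset/STATEFULSET_NAME -n NAMESPACE --type=merge \
      -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'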

Kubernetes: How to run a job

Author: Cassius Adams •
  1. Quick one-off Job from an image (no YAML):
    $ kubectl create job JOB_NAME --image=REGISTRY/IMAGE:TAG -n NAMESPACE -- arg1 arg2
    Tail output:
    $ kubectl logs job/JOB_NAME -n NAMESPACE -f --tail=200
  2. Declarative Job with retries, deadline, and cleanup:
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: JOB_NAME
      namespace: NAMESPACE
    spec:
      completions: 1
      parallelism: 1
      backoffLimit: 2
      activeDeadlineSeconds: 1800
      ttlSecondsAfterFinished: 3600
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: worker
            image: REGISTRY/IMAGE:TAG
            args: ["--process","--input=s3://bucket/path"]
    Apply and watch:
    $ kubectl apply -f job.yaml -n NAMESPACE
    $ kubectl wait --for=condition=complete job/JOB_NAME -n NAMESPACE --timeout=30m || kubectl describe job/JOB_NAME -n NAMESPACE
  3. Run many in parallel (sharded or batch work):
    spec:
      completions: 100
      parallelism: 10
      completionMode: Indexed
    Your app reads JOB_COMPLETION_INDEX to know its shard id.
  4. Keep things tidy (labels and TTL):
    metadata:
      labels:
        app: batch
        owner: team-xyz
        run: RUN_ID   # stamp per run when rendering the manifest (e.g., envsubst or a heredoc); plain YAML won't expand $(date +%s)
EXPERT TIP For parallel batch jobs, set completionMode: Indexed and map each index to a distinct input partition. It beats ad-hoc sharding logic.
IMPORTANT Jobs inherit service accounts, limits, and network policies like anything else. If your Pods can’t reach data stores, check namespace policies first, not the code.
EDITOR'S NOTE I wish I did this more, but labeling every run with a timestamp (run=$(date +%s)) helps. When it’s time to clean things up, a single selector can be used to keep the cluster clean.
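A minimal Indexed Job sketch to make the shard-id idea concrete (the name and echo command are placeholders; each Pod sees its own JOB_COMPLETION_INDEX):
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: sharded-batch
      namespace: NAMESPACE
    spec:
      completions: 5
      parallelism: 5
      completionMode: Indexed
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: worker
            image: busybox
            command: ["sh", "-c", "echo processing shard $JOB_COMPLETION_INDEX"]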

Kubernetes: How to rerun a job

Author: Cassius Adams •
HEADS-UP “Rerun” means create a new Job run. Make the work idempotent or checkpointed or you risk double-processing.
  1. Clone an existing Job’s spec into a new run (unique name):
    $ RUN_NAME="JOB_NAME-$(date +%s)"
    $ kubectl get job/JOB_NAME -n NAMESPACE -o json \
      | jq 'del(.metadata.uid,.metadata.resourceVersion,.metadata.creationTimestamp,.metadata.annotations,.metadata.labels,.metadata.ownerReferences,.spec.selector,.spec.template.metadata.labels,.status)' \
      | jq --arg name "$RUN_NAME" '.metadata.name=$name' \
      | kubectl apply -n NAMESPACE -f -
    Stripping .spec.selector and the template labels lets the Job controller generate fresh, non-conflicting ones for the new run.
    Watch pods for the new run:
    $ kubectl get pods -l job-name="$RUN_NAME" -n NAMESPACE -w
  2. Create a fresh Job from a template (recommended pattern):
    apiVersion: batch/v1
    kind: Job
    metadata:
      generateName: JOB_BASENAME-
      namespace: NAMESPACE
    spec:
      ttlSecondsAfterFinished: 3600
      backoffLimit: 2
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: worker
            image: REGISTRY/IMAGE:TAG
            args: ["--run-once","--window=today"]
    Create it (generateName requires create rather than apply, since the server assigns the final name):
    $ kubectl create -f job.yaml -n NAMESPACE
  3. (From a CronJob) Rerun the schedule’s template manually:
    $ RUN="CRONJOB_NAME-rerun-$(date +%s)"
    $ kubectl create job --from=cronjob/CRONJOB_NAME "$RUN" -n NAMESPACE
    $ kubectl logs job/"$RUN" -n NAMESPACE -f --tail=200
  4. Troubleshoot backoff/limits blocking a rerun:
    $ kubectl get job/JOB_NAME -n NAMESPACE -o jsonpath='{.spec.backoffLimit}{"\t"}{.spec.activeDeadlineSeconds}{"\n"}'
    If exceeded, create a new Job (don’t resurrect the old one).
EXPERT TIP Stamp runs with labels (e.g., run=$(date +%s), source=rerun) so you can aggregate logs/metrics and clean up surgically.
EDITOR'S NOTE I like fresh Job names per run, even if I'm "rerunning" the exact same template. I find it's easier to reason about and understand the history and TTL cleanup.

Kubernetes: How to stop a deployment

Author: Cassius Adams •
  1. Scale the Deployment to zero replicas (stops its Pods):
    $ kubectl scale deploy/DEPLOYMENT_NAME --replicas=0 -n NAMESPACE
    Verify:
    $ kubectl get pods -l app=APP -n NAMESPACE
  2. (Optional) Pause rollouts while stopped:
    $ kubectl rollout pause deploy/DEPLOYMENT_NAME -n NAMESPACE
  3. (If HPA exists) Prevent automatic scale-up:
    HEADS-UP Standard Kubernetes HPA does not support minReplicas: 0. To avoid fighting your scale-to-zero, temporarily remove/disable the HPA and re-apply later (or use a platform that supports scale-to-zero such as KEDA).
    Backup then remove HPA:
    $ kubectl get hpa HPA_NAME -n NAMESPACE -o yaml > hpa-backup.yaml
    $ kubectl delete hpa HPA_NAME -n NAMESPACE
  4. Resume service later (unpause → scale up → watch rollout):
    $ kubectl rollout resume deploy/DEPLOYMENT_NAME -n NAMESPACE
    $ kubectl scale deploy/DEPLOYMENT_NAME --replicas=DESIRED -n NAMESPACE
    $ kubectl rollout status deploy/DEPLOYMENT_NAME -n NAMESPACE
EXPERT TIP Pausing lets you stage multiple spec changes safely, then resume when ready.
HEADS-UP If a Service still points at the Deployment’s selector, clients will see 404s/timeouts while replicas are 0. Consider upstream maintenance handling.
IMPORTANT PodDisruptionBudgets govern evictions, not controller scale-down, so they won’t “block” a scale-to-zero. Still, be mindful of disruption windows for stateful traffic.
EDITOR'S NOTE My safe process usually looks like this: pause → remove HPA → scale to 0 → verify endpoints are empty → make changes → resume → scale up → watch rollout.
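To confirm traffic is actually drained after the scale-to-zero (assuming SERVICE_NAME fronts this Deployment), check that its EndpointSlices no longer list ready addresses:
    $ kubectl get endpointslice -n NAMESPACE -l kubernetes.io/service-name=SERVICE_NAME -o wide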

Kubernetes: How to trigger cronjob

Author: Cassius Adams •
  1. Create a unique run name and trigger a one-off Job from the CronJob:
    $ RUN_NAME=CRONJOB_NAME-$(date +%s)
    $ kubectl create job --from=cronjob/CRONJOB_NAME "$RUN_NAME" -n NAMESPACE
  2. Watch the Job and its Pods:
    $ kubectl get jobs,pods -n NAMESPACE
  3. Tail logs for this ad-hoc run (don’t re-generate a new timestamp):
    $ kubectl logs -f job/$RUN_NAME -n NAMESPACE
    Or by Pod selector:
    $ kubectl logs -f $(kubectl get pods -l job-name=$RUN_NAME -n NAMESPACE -o name | head -n1) -n NAMESPACE
  4. (Optional) Materialize YAML to tweak args/env before running:
    $ kubectl create job --from=cronjob/CRONJOB_NAME "$RUN_NAME" -n NAMESPACE \
      --dry-run=client -o yaml > job-from-cron.yaml
    # edit command/args/env
    $ kubectl apply -f job-from-cron.yaml -n NAMESPACE
  5. Cleanup the ad-hoc run explicitly (controller label won’t match your manual Job):
    $ kubectl delete job "$RUN_NAME" -n NAMESPACE
    (Tip: label the Job right after creating it to make bulk cleanup easy: kubectl label job "$RUN_NAME" -n NAMESPACE run="$RUN_NAME".)
EXPERT TIP I like to prefix manual Jobs with the cronjob’s name and a timestamp for quick grep/cleanup later.
HEADS-UP There is no built-in cronjob-name label on Jobs. Bulk deletion by label only works if you add your own label at creation time.
EDITOR'S NOTE YAML-first is my default when I need to tune command/args/env. It's just personal preference. CLI is fine for one-offs but I find YAML is better for history and repeatability.

Kubernetes: How to trigger job

Author: Cassius Adams •
  1. Create and run a Job from an image:
    $ kubectl create job JOB_NAME --image=IMAGE -n NAMESPACE -- [args]
  2. Watch and stream logs:
    $ kubectl get job JOB_NAME -n NAMESPACE
    $ kubectl get pods -l job-name=JOB_NAME -n NAMESPACE
    $ kubectl logs -f job/JOB_NAME -n NAMESPACE
  3. Dry-run to YAML then apply if you need to tune env/args:
    $ kubectl create job JOB_NAME --image=IMAGE -n NAMESPACE --dry-run=client -o yaml > job.yaml
    # edit spec.template.spec.containers[0].env / command / args
    $ kubectl apply -f job.yaml -n NAMESPACE
  4. Cleanup:
    $ kubectl delete job JOB_NAME -n NAMESPACE
EXPERT TIP Give Jobs unique names to make kubectl selection, logging, and cleanup simple.
EDITOR'S NOTE CLI for ad-hoc, YAML for repeatability. That balance keeps my history clean, my runs predictable, and I may need to come back to it later so I keep a well-structured "scratch" folder.

Kubernetes: How to update deployment

Author: Cassius Adams •
  1. Update image then watch rollout:
    $ kubectl set image deploy/DEPLOYMENT_NAME CONTAINER_NAME=IMAGE:TAG -n NAMESPACE
    $ kubectl rollout status deploy/DEPLOYMENT_NAME -n NAMESPACE
  2. Apply declaratively (preferred for env/resources/etc.):
    $ kubectl diff -f deployment.yaml -n NAMESPACE
    $ kubectl apply -f deployment.yaml -n NAMESPACE
  3. Strategy tweak example:
    spec:
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 25%
          maxUnavailable: 0%
  4. Rollback & history:
    $ kubectl rollout undo deploy/DEPLOYMENT_NAME -n NAMESPACE
    $ kubectl rollout history deploy/DEPLOYMENT_NAME -n NAMESPACE
EXPERT TIP Pin by digest for immutability in production and let CI stamp the digest into YAML.
HEADS-UP Readiness gates the rollout; broken probes stall progress. Fix probes before rolling.
IMPORTANT spec.selector is effectively immutable. The API will reject changes after creation—create a new Deployment if you need different selectors.
EDITOR'S NOTE My process looks like this: diff → apply → watch → smoke test → mark good. If anything smells off, rollback fast and investigate offline.
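A sketch of the digest-pinning step (DIGEST is a placeholder your CI would stamp in), plus a quick readback to confirm what's live:
    $ kubectl set image deploy/DEPLOYMENT_NAME CONTAINER_NAME=REGISTRY/IMAGE@sha256:DIGEST -n NAMESPACE
    $ kubectl rollout status deploy/DEPLOYMENT_NAME -n NAMESPACE
    $ kubectl get deploy/DEPLOYMENT_NAME -n NAMESPACE -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'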

Services & Networking

Stable virtual IPs and DNS for Pods. Test inside the cluster, port-forward locally, or expose externally with NodePort/LoadBalancer.

Services & Networking — FAQ

Why can’t I reach my Service? It returns 503 or connection refused.

Most often the Service has no endpoints (selector doesn’t match Ready Pods). Verify labels and readiness, then inspect EndpointSlices. See How to access service and Curl a Service.

What’s the difference between ClusterIP, NodePort, and LoadBalancer?

ClusterIP is in-cluster only (DNS: service.namespace.svc). NodePort opens a port on each node for external reach without a LB. LoadBalancer provisions a cloud LB with a public or private address. See Access a ClusterIP Service and Access service.

How do I reach a ClusterIP Service from my laptop?

Use kubectl port-forward to the Service (or Pod) and hit http://127.0.0.1:LOCAL_PORT. This avoids exposing it publicly. See Access service and Curl a Service.

How do I get the external URL or IP of my Service?

For LoadBalancer Services, read .status.loadBalancer.ingress to get the hostname/IP once provisioned. See Get Service URL and Get external IP.

Should I use Ingress or a LoadBalancer Service?

Use Ingress when you want HTTP(S) routing, TLS, and multiple Services behind one entry point; use a LoadBalancer when you need simple L4 exposure per Service. See Use Ingress and Access service.

Kubernetes: How to access service

Author: Cassius Adams •
  1. Reach a ClusterIP Service from inside the cluster (preferred):
    $ kubectl exec deploy/DEPLOYMENT_NAME -n NAMESPACE -- curl -sI http://SERVICE_NAME.NAMESPACE.svc:PORT/
    DNS short form within the same namespace (run from inside a Pod):
    $ curl http://SERVICE_NAME:PORT/healthz
  2. From your workstation (without exposing it): port-forward the Service:
    $ kubectl port-forward -n NAMESPACE svc/SERVICE_NAME 8080:PORT
    Then browse/curl: http://127.0.0.1:8080
  3. Check Service endpoints (targets behind the virtual IP):
    $ kubectl get endpointslice -n NAMESPACE -l kubernetes.io/service-name=SERVICE_NAME -o wide
    If empty, your selector doesn’t match Ready Pods—fix labels or readiness.
  4. Expose externally (LoadBalancer or NodePort):
    $ kubectl expose deploy/DEPLOYMENT_NAME --type=LoadBalancer --port=80 --target-port=8080 -n NAMESPACE
    Get the external address/hostname:
    $ kubectl get svc SERVICE_NAME -n NAMESPACE -o jsonpath='{.status.loadBalancer.ingress[0].hostname}{.status.loadBalancer.ingress[0].ip}{"\n"}'
EXPERT TIP Test with both the namespace-local DNS (SERVICE_NAME:PORT) and the FQDN (service.namespace.svc:PORT) when crossing namespaces—search paths bite.
HEADS-UP A Service without endpoints returns 503/connection refused. Verify labels/revisions and readiness; rollouts can temporarily drain endpoints.
IMPORTANT Exposing with LoadBalancer creates public reachability in many clouds. Confirm firewall rules, allowed CIDRs, and TLS posture before flipping it on.
EDITOR'S NOTE My flow is: DNS resolve → curl via ClusterIP inside cluster → port-forward locally → only then consider external exposure. Keeps blast radius small.

Kubernetes: How to access clusterip service

Author: Cassius Adams •
EXPERT TIP Test inside the cluster first; if that works, use port-forward from your laptop. Only expose externally when you must.
  1. Access from within the cluster (preferred):
    $ kubectl exec deploy/DEPLOYMENT_NAME -n NAMESPACE -- curl -sI http://SERVICE_NAME.NAMESPACE.svc:PORT/
    Same-namespace short form (from inside a Pod):
    $ curl http://SERVICE_NAME:PORT/healthz
  2. From your laptop without exposing anything (Service-level port-forward):
    $ kubectl port-forward -n NAMESPACE svc/SERVICE_NAME 8080:PORT
    Then browse/curl: http://127.0.0.1:8080
  3. Verify DNS resolution inside a pod:
    $ kubectl exec deploy/DEPLOYMENT_NAME -n NAMESPACE -- getent hosts SERVICE_NAME.NAMESPACE.svc
    # or (busybox)
    $ kubectl exec deploy/DEPLOYMENT_NAME -n NAMESPACE -- nslookup SERVICE_NAME.NAMESPACE.svc
  4. Ensure the Service has endpoints (targets):
    $ kubectl get endpointslice -n NAMESPACE -l kubernetes.io/service-name=SERVICE_NAME -o wide
    Empty endpoints = selectors don’t match Ready Pods (labels or readiness issue).
HEADS-UP Rollouts can temporarily drain endpoints. If curl flips between 200/503 during a rollout, you’re likely hitting Pods in transition.
IMPORTANT ClusterIP Services are not reachable from outside the cluster without a proxy/tunnel. Don’t try to hit the ClusterIP from your laptop directly—it won’t route.
EDITOR'S NOTE When I’m debugging “works in cluster, not locally,” I always fall back to Service port-forwarding first. It dramatically narrows the problem surface.
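When exec into the app container is blocked, a throwaway client Pod works too (sketch; curlimages/curl is one option, or use your org's toolbox image):
    $ kubectl run tmp-curl --rm -it --restart=Never -n NAMESPACE --image=curlimages/curl -- \
      curl -sI http://SERVICE_NAME.NAMESPACE.svc:PORT/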

Kubernetes: How to curl a service

Author: Cassius Adams •
  1. Inside the cluster (best signal):
    $ kubectl exec deploy/DEPLOYMENT_NAME -n NAMESPACE -- curl -si http://SERVICE_NAME.NAMESPACE.svc:PORT/healthz
  2. From your laptop via Service port-forward:
    $ kubectl port-forward -n NAMESPACE svc/SERVICE_NAME 8080:PORT
    $ curl -si http://127.0.0.1:8080/
  3. Send Host header (testing through an Ingress or multiple virtual hosts):
    $ curl -si http://INGRESS_IP/ -H 'Host: example.com'
  4. Troubleshooting flags that help:
    $ curl -vS --connect-timeout 5 --max-time 10 http://SERVICE_NAME.NAMESPACE.svc:PORT/
    TLS endpoint example:
    $ curl -kIs https://SERVICE_NAME.NAMESPACE.svc:PORT/
EXPERT TIP Use -I (HEAD) for fast checks, -L to follow redirects, and -H to set headers like Authorization or Host.
HEADS-UP Timeouts/connection refused usually mean missing endpoints, readiness gates, or a network policy. Check EndpointSlices and kubectl describe.
EDITOR'S NOTE I keep a tiny “curl pod” around for clusters where exec is blocked. It’s saved me a lot of context switching during triage.

Kubernetes: How to expose a service

Author: Cassius Adams •
  1. Create a Service from a Deployment (ClusterIP by default):
    $ kubectl expose deploy/DEPLOYMENT_NAME --port=80 --target-port=8080 -n NAMESPACE
    Confirm:
    $ kubectl get svc SERVICE_NAME -n NAMESPACE -o wide
  2. Expose externally with a cloud LoadBalancer:
    $ kubectl expose deploy/DEPLOYMENT_NAME --type=LoadBalancer --port=80 --target-port=8080 -n NAMESPACE
    Fetch the external address:
    $ kubectl get svc SERVICE_NAME -n NAMESPACE -o jsonpath='{.status.loadBalancer.ingress[0].hostname}{.status.loadBalancer.ingress[0].ip}{"\n"}'
  3. Or use NodePort (when no external LB is available):
    $ kubectl expose deploy/DEPLOYMENT_NAME --type=NodePort --port=80 --target-port=8080 -n NAMESPACE
    Access via any node: http://NODE_IP:NODEPORT
EXPERT TIP You don’t need containerPort for Services to work, but setting it helps documentation and linting.
HEADS-UP Changing a Service type can briefly flap connectivity. Plan a short maintenance window for prod.
IMPORTANT A public LoadBalancer may be internet-reachable. Confirm firewall rules, allowed CIDRs, TLS, and authentication before exposing.
EDITOR'S NOTE I prefer Ingress for HTTP(S) and a single front door; LoadBalancer only when I need raw L4 or non-HTTP protocols.

Kubernetes: How to expose port

Author: Cassius Adams •
  1. Expose a Deployment as a Service (map Service port → container port):
    $ kubectl expose deploy/DEPLOYMENT_NAME --port=80 --target-port=8080 -n NAMESPACE
  2. Expose a single Pod (dev/test only; no controller behind it):
    $ kubectl expose pod/POD_NAME --port=80 --target-port=8080 -n NAMESPACE
  3. Make it reachable externally (choose one):
    $ kubectl expose deploy/DEPLOYMENT_NAME --type=LoadBalancer --port=80 --target-port=8080 -n NAMESPACE
    $ kubectl expose deploy/DEPLOYMENT_NAME --type=NodePort --port=80 --target-port=8080 -n NAMESPACE
  4. Confirm the mapping and endpoints:
    $ kubectl get svc SERVICE_NAME -n NAMESPACE -o wide
    $ kubectl get endpointslice -n NAMESPACE -l kubernetes.io/service-name=SERVICE_NAME -o wide
EXPERT TIP targetPort can be a name from your container’s ports: (e.g., name: http). It keeps YAML readable and resilient to port changes.
HEADS-UP Exposing Pods directly ties the Service to a single Pod name; if that Pod dies, traffic is gone. Prefer exposing Deployments/ReplicaSets.
IMPORTANT NodePort opens a firewall hole on every node. Lock down source ranges at the cloud/network layer and prefer Ingress/TLS for HTTP(S) traffic.
EDITOR'S NOTE I treat kubectl expose as a quick way to conduct experiments (although these days I rarely use it since I have quick template yamls that I keep around). But for anything real, I check a Service YAML into Git, and of course write it into my CI/CD.
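A minimal declarative Service to commit instead of the expose one-liner (sketch; assumes the Pods are labeled app: example and declare a containerPort named http):
    apiVersion: v1
    kind: Service
    metadata:
      name: SERVICE_NAME
      namespace: NAMESPACE
    spec:
      selector:
        app: example
      ports:
      - name: http
        port: 80
        targetPort: http   # resolves to the named containerPort in the Pod spec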

Kubernetes: How to get external ip

Author: Cassius Adams •
  1. LoadBalancer Service — fetch hostname/IP (may take time to provision):
    $ kubectl get svc SERVICE_NAME -n NAMESPACE \
      -o jsonpath='{.status.loadBalancer.ingress[0].hostname}{.status.loadBalancer.ingress[0].ip}{"\n"}'
    Watch until it becomes non-empty:
    $ kubectl get svc -w -n NAMESPACE
  2. Ingress — read the public host or LB address:
    $ kubectl get ingress INGRESS_NAME -n NAMESPACE -o wide
    Extract first host:
    $ kubectl get ingress INGRESS_NAME -n NAMESPACE \
      -o jsonpath='{.spec.rules[0].host}{"\n"}'
  3. NodePort — construct URL from any node’s IP and the nodePort:
    $ kubectl get svc SERVICE_NAME -n NAMESPACE \
      -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'
    Then: http://NODE_IP:NODEPORT
  4. Minikube helper (local dev):
    $ minikube service SERVICE_NAME -n NAMESPACE --url
EXPERT TIP Cloud LBs can surface a hostname first; it may later resolve to multiple IPs. Use the hostname in clients where possible.
HEADS-UP Provisioning an external IP can take minutes. If it never appears, check your cloud controller logs and Service annotations.
IMPORTANT An external IP is often internet-reachable. Lock down allowed CIDRs, enforce TLS, and validate auth before exposing production traffic.
EDITOR'S NOTE My flow is “get address → quick curl → add DNS/SSL only after I’m sure the target Pods are healthy and stable.”

Kubernetes: How to get service url

Author: Cassius Adams •
  1. Cluster-internal URL (FQDN):
    $ echo http://SERVICE_NAME.NAMESPACE.svc:PORT/
    Same-namespace short form:
    $ echo http://SERVICE_NAME:PORT/
  2. LoadBalancer public URL (hostname/IP):
    $ kubectl get svc SERVICE_NAME -n NAMESPACE \
      -o jsonpath='{.status.loadBalancer.ingress[0].hostname}{.status.loadBalancer.ingress[0].ip}{"\n"}'
    Compose full URL: http(s)://HOST_OR_IP:PORT/
  3. Ingress host-based URL:
    $ kubectl get ingress INGRESS_NAME -n NAMESPACE \
      -o jsonpath='{.spec.rules[0].host}{"\n"}'
    Then: https://HOST/ (port usually 443)
  4. Minikube helper:
    $ minikube service SERVICE_NAME --url -n NAMESPACE
EXPERT TIP Prefer hostnames over raw IPs—cloud LBs often rotate IPs. Your URL stays stable if you use the host.
HEADS-UP Some controllers delay populating status.loadBalancer. If it’s blank, wait or check controller events.
EDITOR'S NOTE For internal docs I always show the short form (service:port) and the FQDN—devs copy the one that matches their context.

Kubernetes: How to port forward

Author: Cassius Adams •
  1. Forward to a Service (recommended for testing through the VIP):
    $ kubectl port-forward -n NAMESPACE svc/SERVICE_NAME LOCAL_PORT:PORT
    Then browse/curl: http://127.0.0.1:LOCAL_PORT/
  2. Forward directly to a Pod (bypasses Service selection):
    $ kubectl port-forward -n NAMESPACE pod/POD_NAME LOCAL_PORT:CONTAINER_PORT
  3. Pick an available local port automatically:
    $ kubectl port-forward -n NAMESPACE svc/SERVICE_NAME :PORT
    The command prints the chosen LOCAL_PORT.
  4. Troubleshoot common issues:
    $ lsof -i :LOCAL_PORT   # ensure local port is free
    $ kubectl get endpointslice -n NAMESPACE -l kubernetes.io/service-name=SERVICE_NAME -o wide  # Service has endpoints?
    $ kubectl auth can-i create pods/portforward -n NAMESPACE  # RBAC allows it?
EXPERT TIP Use Service port-forward when you want to test the exact path traffic takes in-cluster (selectors/endpoints). Use Pod port-forward to bypass the Service hop.
HEADS-UP The session runs until you close the terminal. In scripts/CI, background it or use --address=0.0.0.0 carefully if others must connect.
EDITOR'S NOTE Port-forwarding is my go-to for quick checks. It’s reversible, leaves no public surface, and makes comparing “inside vs outside” behavior trivial.
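For scripts/CI, a sketch of backgrounding the forward and cleaning it up afterwards (log path and ports are placeholders):
    $ kubectl port-forward -n NAMESPACE svc/SERVICE_NAME 8080:PORT >/tmp/pf.log 2>&1 &
    $ PF_PID=$!
    $ sleep 2
    $ curl -sI http://127.0.0.1:8080/
    $ kill "$PF_PID"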

Kubernetes: How to restart a service

Author: Cassius Adams •
  1. Understand the model: a Service is a virtual IP/router; there’s nothing to “restart.” Restart the backing Pods instead (Deployment/StatefulSet/DaemonSet).
  2. Roll the backing Deployment:
    $ kubectl rollout restart deploy/DEPLOYMENT_NAME -n NAMESPACE
    Watch until Ready:
    $ kubectl rollout status deploy/DEPLOYMENT_NAME -n NAMESPACE
  3. (Optional) Re-apply the Service manifest (no downtime; it only updates the object, nothing restarts):
    $ kubectl apply -f service.yaml -n NAMESPACE
  4. Verify endpoints updated:
    $ kubectl get endpointslice -n NAMESPACE -l kubernetes.io/service-name=SERVICE_NAME -o wide
EXPERT TIP If your app caches DNS aggressively, you may need a full Pod restart to pick up new endpoints, even when the Service updated.
HEADS-UP PodDisruptionBudgets can slow a restart. Check PDBs and surge/availability settings before rolling prod.
EDITOR'S NOTE When folks say "restart the Service," they almost always mean "restart the Deployment." I call that out explicitly in runbooks, and in conversations, to avoid confusion.
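Before rolling production (per the HEADS-UP on PDBs), a quick preflight can save a stalled rollout - the jsonpath assumes the Deployment uses the RollingUpdate strategy:
    $ kubectl get pdb -n NAMESPACE
    $ kubectl get deploy DEPLOYMENT_NAME -n NAMESPACE \
      -o jsonpath='maxUnavailable={.spec.strategy.rollingUpdate.maxUnavailable} maxSurge={.spec.strategy.rollingUpdate.maxSurge}{"\n"}'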

Ingress

HTTP(S) routing to Services with host/path rules; TLS termination and traffic control policies.

Ingress — FAQ

My Ingress returns 404/502. Where should I look first?

Confirm the Ingress controller is running, the rule’s host/path match your request, and the backing Service has endpoints. See Access ingress.

How do I test HTTPS with the right SNI/Host header?

Use curl with --resolve (or -H 'Host:' for HTTP) to hit the LB IP while sending the Ingress host. See Access ingress.

What’s the minimal YAML to route a host to my Service?

A single rule with spec.rules.host and a backend Service/port. Add spec.tls for HTTPS. See Configure ingress.

Do I need a LoadBalancer and an Ingress?

No. Ingress exposes HTTP(S) via its controller’s LB. Use a Service LoadBalancer only for raw L4 exposure. See Use ingress.

Kubernetes: How to access ingress

Author: Cassius Adams •
EXPERT TIP Test with the exact Host header your rule expects. If you’re hitting an IP directly, use --resolve so SNI and Host line up.
  1. Discover the Ingress host and LB address:
    $ kubectl get ingress INGRESS_NAME -n NAMESPACE -o wide
    Print the first host quickly:
    $ kubectl get ingress INGRESS_NAME -n NAMESPACE \
      -o jsonpath='{.spec.rules[0].host}{"\n"}'
  2. HTTP test (Host header):
    $ curl -si http://LB_IP_OR_DNS/ -H 'Host: HOSTNAME'
  3. HTTPS test with SNI (no TLS verification issues in lab):
    $ curl -si --resolve HOSTNAME:443:LB_IP https://HOSTNAME/
    (If your cert is self-signed, add -k while testing.)
  4. Validate the Service behind the rule (in-cluster):
    $ kubectl exec deploy/DEPLOYMENT_NAME -n NAMESPACE -- \
      curl -sI http://SERVICE_NAME.NAMESPACE.svc:PORT/
HEADS-UP If the controller isn’t running or your IngressClass doesn’t match, rules won’t be programmed. Check controller Pods/logs and kubectl describe ingress.
IMPORTANT A green Ingress still fails if the backend Service has no endpoints. Always confirm EndpointSlices during rollouts.
EDITOR'S NOTE My sanity loop: Host → LB IP → --resolve curl → backend Service curl. If those three pass, it’s usually app logic, not infra.
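When the HEADS-UP applies (controller not running, or the IngressClass doesn't match), these checks usually confirm it quickly. The ingress-nginx namespace and deployment names are only examples - substitute whatever your controller install uses:
    $ kubectl get ingressclass
    $ kubectl get pods -n ingress-nginx
    $ kubectl logs -n ingress-nginx deploy/ingress-nginx-controller --tail=50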

Kubernetes: How to configure ingress

Author: Cassius Adams •
  1. Create a minimal host→service route (HTTP):
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: app
      namespace: NAMESPACE
      annotations: {}
    spec:
      ingressClassName: INGRESS_CLASS   # e.g., nginx; omit if default
      rules:
      - host: HOSTNAME
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: SERVICE_NAME
                port:
                  number: PORT
    Apply:
    $ kubectl apply -f ingress.yaml
  2. Add TLS termination (reuse an existing TLS Secret):
    spec:
      tls:
      - hosts: [HOSTNAME]
        secretName: TLS_SECRET_NAME
    Verify cert wired:
    $ kubectl describe ingress app -n NAMESPACE
  3. (Controller hints) Common annotations (nginx example):
    metadata:
      annotations:
        nginx.ingress.kubernetes.io/rewrite-target: /$1
        nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    Check your controller’s docs for exact keys (note: rewrite-target with $1 only works when the path is a regex with a capture group).
EXPERT TIP Keep the Service healthy first—Ingress won’t fix bad readiness or empty endpoints.
HEADS-UP Changing ingressClassName or moving controllers mid-flight can leave orphaned rules. Plan a cutover.
EDITOR'S NOTE I treat Ingress YAML as IaC—checked in, reviewed, and rolled via GitOps. CLI patches are fine for experiments only.
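After applying, I like a quick confirmation that the rule was programmed and an address assigned (the ADDRESS column can take a minute to populate), followed by the same curl used in the access section:
    $ kubectl get ingress app -n NAMESPACE -o wide
    $ curl -si http://LB_IP_OR_DNS/ -H 'Host: HOSTNAME'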

Kubernetes: How to use ingress

Author: Cassius Adams •
  1. Path-based routing to multiple Services:
    spec:
      rules:
      - host: HOSTNAME
        http:
          paths:
          - path: /api
            pathType: Prefix
            backend: { service: { name: API_SVC, port: { number: 80 } } }
          - path: /web
            pathType: Prefix
            backend: { service: { name: WEB_SVC, port: { number: 80 } } }
  2. Blue/green or canary (nginx example with weight header/annotations):
    metadata:
      annotations:
        nginx.ingress.kubernetes.io/canary: "true"
        nginx.ingress.kubernetes.io/canary-weight: "20"
    (Exact knobs vary by controller; a full canary Ingress sketch follows this list.)
  3. Health and status checks:
    $ kubectl describe ingress INGRESS_NAME -n NAMESPACE
    $ kubectl get events -n NAMESPACE --sort-by=.lastTimestamp | tail -n 50
  4. TLS verify from your laptop:
    $ curl -sIv --resolve HOSTNAME:443:LB_IP https://HOSTNAME/
EXPERT TIP Keep paths consistent and prefer / with Prefix semantics unless you have a strong reason to use Exact.
HEADS-UP Some controllers require a default backend; missing one can turn unknown paths into 404s at the edge.
EDITOR'S NOTE I like a “lab” host (e.g., lab.example.com) wired to the same controller for quick experiments—saves me from touching production hosts during tests.
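For the canary in step 2, the annotations live on a second Ingress that points at the canary Service while the primary Ingress keeps serving stable traffic - a sketch for the nginx controller only (CANARY_SVC and the 20% weight are placeholders):
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: app-canary
      namespace: NAMESPACE
      annotations:
        nginx.ingress.kubernetes.io/canary: "true"
        nginx.ingress.kubernetes.io/canary-weight: "20"
    spec:
      ingressClassName: nginx
      rules:
      - host: HOSTNAME
        http:
          paths:
          - path: /
            pathType: Prefix
            backend: { service: { name: CANARY_SVC, port: { number: 80 } } }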

Config & Secrets

Manage application configuration with ConfigMaps, Secrets, and service accounts (basics).

Kubernetes: How to view configmap (kubectl)

Author: Cassius Adams •
  1. List ConfigMaps (namespace or cluster-wide):
    $ kubectl get configmaps -n NAMESPACE
    $ kubectl get configmaps -A
  2. Describe a specific ConfigMap (shows keys and byte counts):
    $ kubectl describe configmap CONFIGMAP_NAME -n NAMESPACE
  3. Output as YAML or JSON:
    $ kubectl get configmap CONFIGMAP_NAME -n NAMESPACE -o yaml
    $ kubectl get configmap CONFIGMAP_NAME -n NAMESPACE -o json
  4. Print the value of a single key (newline included):
    $ kubectl get configmap CONFIGMAP_NAME -n NAMESPACE -o jsonpath='{.data.KEY}{"\n"}'
  5. Watch for changes:
    $ kubectl get configmap CONFIGMAP_NAME -n NAMESPACE -w -o yaml
EXPERT TIP Use -o json with jq for scripts; describe is fastest for quick human checks.
HEADS-UP If your app uses subPath mounts, updated ConfigMap data won’t hot-reload—restart pods to pick up changes.
IMPORTANT ConfigMaps are not secrets. Never put credentials or API keys here.
EDITOR'S NOTE My flow: describe to verify keys → switch to -o json for exact values and automation.
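For scripting (per the EXPERT TIP), a handy sketch that dumps every key as KEY=VALUE lines so you can grep or diff the config locally:
    $ kubectl get configmap CONFIGMAP_NAME -n NAMESPACE -o json \
      | jq -r '.data | to_entries[] | "\(.key)=\(.value)"'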

Kubernetes: How to view secrets (kubectl)

Author: Cassius Adams •
  1. Check RBAC and list secrets:
    $ kubectl auth can-i get secret/SECRET_NAME -n NAMESPACE
    $ kubectl get secrets -n NAMESPACE
  2. Describe a Secret (see type and keys, not plaintext):
    $ kubectl describe secret SECRET_NAME -n NAMESPACE
  3. Decode a single key (prints plaintext):
    $ kubectl get secret SECRET_NAME -n NAMESPACE -o jsonpath='{.data.KEY}' | base64 -d
    # macOS:
    $ kubectl get secret SECRET_NAME -n NAMESPACE -o jsonpath='{.data.KEY}' | base64 -D
  4. List only the key names (no values):
    $ kubectl get secret SECRET_NAME -n NAMESPACE -o json | jq -r '.data | keys[]'
  5. (Optional) Decode all keys locally (avoid writing files):
    $ kubectl get secret SECRET_NAME -n NAMESPACE -o json \
      | jq -r '.data | with_entries(.value |= @base64d)'
EXPERT TIP Decode only the value you need. Prefer a bastion with session logging; avoid writing Secret JSON to disk.
IMPORTANT Secrets are base64-encoded by default, not encrypted. Enforce RBAC, audit, and cluster encryption-at-rest.
EDITOR'S NOTE I do describe to confirm key names, then a single in-memory decode for the value I need.

Kubernetes: How to output ConfigMap as JSON

Author: Cassius Adams •
  1. Get the ConfigMap object as JSON:
    $ kubectl get configmap CONFIGMAP_NAME -n NAMESPACE -o json
  2. Emit only the .data section (compact or pretty):
    $ kubectl get configmap CONFIGMAP_NAME -n NAMESPACE -o json | jq -c '.data'
    $ kubectl get configmap CONFIGMAP_NAME -n NAMESPACE -o json | jq '.data'
  3. Print a single key as a JSON string (handles escaping/newlines):
    $ kubectl get configmap CONFIGMAP_NAME -n NAMESPACE -o json \
      | jq -r --arg k "KEY" '.data[$k] | @json'
  4. Process many by label (name + data JSON per line):
    $ kubectl get configmaps -n NAMESPACE -l app=web -o json \
      | jq -c '.items[] | {name:.metadata.name, data:.data}'
EXPERT TIP JSONPath is fine for single values; for structured output, pipe -o json to jq.
HEADS-UP Embedded newlines and quotes can break naïve parsers—@json ensures proper escaping.
EDITOR'S NOTE I default to “-o json | jq”. It’s repeatable and avoids brittle text parsing.

Kubernetes: How to create secret

Author: Cassius Adams •
IMPORTANT Secrets are base64-encoded by default, not encrypted. Enable encryption at rest, keep RBAC tight, and avoid writing plaintext to disk or tickets.
  1. Create a generic Secret from literals or files:
    $ kubectl create secret generic SECRET_NAME -n NAMESPACE \
      --from-literal=USERNAME=alice \
      --from-literal=PASSWORD='S3cure!' \
      --from-file=ssh-privatekey=/path/id_rsa \
      --from-file=config.json=/path/app-config.json
  2. Create from an .env file (KEY=VALUE lines):
    $ kubectl create secret generic app-secrets -n NAMESPACE \
      --from-env-file=.env
  3. Registry credentials (image pulls):
    $ kubectl create secret docker-registry regcred -n NAMESPACE \
      --docker-server=REGISTRY_URL \
      --docker-username=USERNAME \
      --docker-password='PASSWORD' \
      --docker-email=you@example.com
  4. Prefer a declarative Secret (dry-run to YAML, then apply):
    $ kubectl create secret generic SECRET_NAME -n NAMESPACE \
      --from-literal=API_KEY=VALUE --dry-run=client -o yaml > secret.yaml
    $ kubectl apply -f secret.yaml
  5. Verify type and keys (no plaintext):
    $ kubectl describe secret SECRET_NAME -n NAMESPACE
EXPERT TIP Pipe sensitive inputs from files or env and rely on --dry-run=client -o yaml for peer review. If you must inline values, lead your shell command with a space to avoid history (if supported).
EDITOR'S NOTE I try hard not to “quick-type secrets into the terminal” because future-me has to clean up the mess. My default is: build YAML with --dry-run, commit an encrypted form (SOPS/Sealed Secrets), and let GitOps do the boring parts.
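If you'd rather hand-write the manifest the EXPERT TIP reviews, stringData accepts plaintext and the API server stores it base64-encoded on write - a minimal sketch with placeholder values:
    apiVersion: v1
    kind: Secret
    metadata:
      name: SECRET_NAME
      namespace: NAMESPACE
    type: Opaque
    stringData:
      USERNAME: alice
      PASSWORD: "S3cure!"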

Kubernetes: How to create service account

Author: Cassius Adams •
HEADS-UP Keep permissions minimal. Bind Roles to the ServiceAccount with only the verbs/resources it truly needs.
  1. Create the ServiceAccount:
    $ kubectl create serviceaccount SA_NAME -n NAMESPACE
  2. Create a least-privilege Role (example: read-only to common resources):
    $ kubectl create role ro -n NAMESPACE \
      --verb=get,list,watch \
      --resource=pods,services,secrets,configmaps,endpoints
  3. Bind the Role to the ServiceAccount:
    $ kubectl create rolebinding ro-bind -n NAMESPACE \
      --role=ro --serviceaccount NAMESPACE:SA_NAME
  4. (K8s 1.24+) Get a short-lived token for the SA:
    $ kubectl -n NAMESPACE create token SA_NAME
  5. (If your cluster still uses legacy Secret tokens) read the mounted token:
    $ kubectl -n NAMESPACE get sa SA_NAME -o jsonpath='{.secrets[0].name}'
    $ kubectl -n NAMESPACE get secret SECRET_NAME -o jsonpath='{.data.token}' | base64 -d
EXPERT TIP Name RoleBindings after the SA and role (e.g., ro-bind) so audits read like a sentence. It saves real time in incident reviews.
EDITOR'S NOTE My rule: permissions should be boring to read. If a RoleBinding looks “clever,” I’ve probably over-granted or hidden intent. Keep it obvious and constrained.
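To prove the binding behaves the way the note above expects, impersonate the ServiceAccount and ask the API server directly - a "yes" here means the Role/RoleBinding pair is wired correctly:
    $ kubectl auth can-i list pods -n NAMESPACE \
      --as=system:serviceaccount:NAMESPACE:SA_NAME
    $ kubectl auth can-i delete pods -n NAMESPACE \
      --as=system:serviceaccount:NAMESPACE:SA_NAME   # should be "no" for the read-only role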

Kubernetes: How to create tls secret

Author: Cassius Adams •
  1. Create the TLS Secret from cert/key files:
    $ kubectl create secret tls TLS_SECRET_NAME -n NAMESPACE \
      --cert=/path/tls.crt --key=/path/tls.key
  2. Verify the type and keys:
    $ kubectl describe secret TLS_SECRET_NAME -n NAMESPACE
  3. (If you have a .pfx/.p12) extract .crt and .key first:
    $ openssl pkcs12 -in cert.pfx -clcerts -nokeys -out tls.crt
    $ openssl pkcs12 -in cert.pfx -nocerts -nodes -out tls.key
  4. Sanity-check the certificate and key match:
    $ openssl x509 -noout -modulus -in tls.crt | openssl md5
    $ openssl rsa  -noout -modulus -in tls.key | openssl md5
HEADS-UP Many ingress controllers expect the key to be unencrypted. If your key is passphrase-protected, remove the passphrase securely before creating the Secret.
EDITOR'S NOTE TLS is where rushed copy-paste bites. I do two things: export fresh .crt/.key from the source of truth and run a quick modulus check so rollout time isn’t spent chasing a mismatched pair.

I can never remember openssl commands, so I keep a couple of meaningful ones handy and pull them out of bash history when really needed.
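In that spirit, one more openssl line worth keeping around: check the certificate's subject and expiry before you wire it into an Ingress, so you don't ship a cert that lapses next week:
    $ openssl x509 -noout -subject -enddate -in tls.crt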

Kubernetes: How to edit configmap

Author: Cassius Adams •
  1. Inline edit (fastest for a quick fix):
    $ kubectl edit configmap CONFIGMAP_NAME -n NAMESPACE
  2. Patch a single key (no editor needed):
    $ kubectl patch configmap CONFIGMAP_NAME -n NAMESPACE \
      --type merge -p '{"data":{"KEY":"NEW_VALUE"}}'
  3. Declarative update (safer/auditable):
    $ kubectl get configmap CONFIGMAP_NAME -n NAMESPACE -o yaml > configmap.yaml
    # edit configmap.yaml (only .data)
    $ kubectl apply -f configmap.yaml
  4. Make workloads pick up the change (env vars freeze at start; file mounts refresh, subPath does not):
    $ kubectl rollout restart deploy/DEPLOYMENT -n NAMESPACE
  5. Verify new pods and content:
    $ kubectl get pods -n NAMESPACE -w
    $ kubectl describe configmap CONFIGMAP_NAME -n NAMESPACE
EXPERT TIP Prefer declarative apply + rollout restart. If you must hot-patch, keep it surgical and leave a breadcrumb in Git.
HEADS-UP subPath mounts never auto-refresh; env vars update only after a restart. Plan the rollout.
EDITOR'S NOTE I try to avoid late-night mysteries. While I still sometimes do "edit in place" because it feels quick, future-me wants a clean diff and a predictable rollout. So codified YAML plus a restart wins.

Kubernetes: How to edit secret

Author: Cassius Adams •
IMPORTANT Secrets are base64-encoded, not encrypted. Avoid writing plaintext to disk. Use least-privilege RBAC and audit.
  1. Safer path: rebuild and apply (avoids base64 typos):
    $ kubectl create secret generic SECRET_NAME -n NAMESPACE \
      --from-literal=KEY=NEW_VALUE \
      --dry-run=client -o yaml | kubectl apply -f -
  2. If you must edit the object file directly:
    $ kubectl get secret SECRET_NAME -n NAMESPACE -o yaml > secret.yaml
    # edit base64 in secret.yaml (KEY: base64(VALUE))
    $ kubectl apply -f secret.yaml
  3. Trigger dependent workloads to pick up the new value:
    $ kubectl rollout restart deploy/DEPLOYMENT -n NAMESPACE
  4. Verify:
    $ kubectl describe secret SECRET_NAME -n NAMESPACE
    $ kubectl get pods -n NAMESPACE -w
EXPERT TIP Keep secret sources (files/env) outside your shell history and CI logs. Generate YAML with --dry-run, then apply.
EDITOR'S NOTE Editing base64 by hand is how evil little gremlins and ghosts get in. :) I rebuild from source and let a restart do the heavy lifting.

Kubernetes: How to encrypt secrets

Author: Cassius Adams •
HEADS-UP Two separate concerns: cluster encryption at rest and encryption in Git. You usually need both.
  1. GitOps: Sealed Secrets workflow (controller required):
    $ kubectl create secret generic app-secrets -n NAMESPACE \
      --from-literal=API_KEY=VALUE --dry-run=client -o yaml > secret.yaml
    $ kubeseal --format yaml < secret.yaml > sealedsecret.yaml
    $ kubectl apply -f sealedsecret.yaml -n NAMESPACE
  2. GitOps: SOPS (age) example (no cluster controller needed):
    $ kubectl create secret generic app-secrets -n NAMESPACE \
      --from-literal=API_KEY=VALUE --dry-run=client -o yaml > secret.yaml
    $ sops --encrypt --in-place secret.yaml  # decrypt with: sops --decrypt secret.yaml | kubectl apply -f -
  3. Cluster encryption at rest (verify with your platform docs):
    # Managed platforms typically enable it; confirm per provider.
    # For self-managed control planes, configure EncryptionConfiguration + KMS provider.
EDITOR'S NOTE I keep plaintext out of repos on principle - secret values should never ever be in the repo. Above, I've given some options for encryption within Git, but in my experience it's better to store the encrypted secret in another purpose-built tool, like a Key Vault, and use a controller to ingest at pod start. Of course then you need external processes for KV rotations (monthly, quarterly, yearly). But if not going that route, pick one tool (Sealed Secrets or SOPS), automate it, and stop arguing with yourself. :)
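If you do go the SOPS route mentioned above, a .sops.yaml creation rule keeps the encryption scoped to the sensitive fields so the rest of the manifest stays reviewable in Git - a sketch assuming an age key (the regexes and key are placeholders):
    creation_rules:
    - path_regex: .*secret.*\.yaml
      encrypted_regex: ^(data|stringData)$
      age: AGE_PUBLIC_KEY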

Kubernetes: How to get service account token

Author: Cassius Adams •
  1. Kubernetes 1.24+: short-lived token via TokenRequest API:
    $ kubectl -n NAMESPACE create token SA_NAME
  2. Specify audience/duration (if supported by your cluster policy):
    $ kubectl -n NAMESPACE create token SA_NAME \
      --audience=kubernetes.default.svc --duration=15m
  3. Legacy clusters (token stored in a Secret):
    $ SECRET_NAME=$(kubectl get secrets -n NAMESPACE -o json \
      | jq -r '.items[] | select(.type=="kubernetes.io/service-account-token" and .metadata.annotations["kubernetes.io/service-account.name"]=="SA_NAME") | .metadata.name' | head -n1)
    $ kubectl -n NAMESPACE get secret "$SECRET_NAME" -o jsonpath='{.data.token}' | base64 -d
EXPERT TIP Prefer short-lived tokens. If something demands a long-lived token, challenge the design.
EDITOR'S NOTE Tokens are not wine - they don't age well. In the past I've used them in my CI/CD pipelines and sure, that does work, but it exposes privileges and it's super-easy to forget to rotate them. Now I keep tokens short-lived and tightly scoped so they can’t surprise me later.
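One place short-lived tokens shine is a quick, throwaway API check. A sketch that mints a 10-minute token and calls the API server directly (the pods path is just an example query):
    $ TOKEN=$(kubectl -n NAMESPACE create token SA_NAME --duration=10m)
    $ APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
    $ curl -sk -H "Authorization: Bearer $TOKEN" "$APISERVER/api/v1/namespaces/NAMESPACE/pods" | head -n 20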

Kubernetes: How to manage secrets

Author: Cassius Adams •
  1. Baseline hygiene: clear names, labels, and ownership:
    $ kubectl label secret SECRET_NAME -n NAMESPACE app=myapp tier=prod --overwrite
  2. Rotate safely (same name, new value) and restart workloads:
    $ kubectl create secret generic SECRET_NAME -n NAMESPACE \
      --from-literal=KEY=NEW_VALUE --dry-run=client -o yaml | kubectl apply -f -
    $ kubectl rollout restart deploy/DEPLOYMENT -n NAMESPACE
  3. Find workloads referencing a Secret (volumes/env/imagePullSecrets):
    $ kubectl get deploy -A -o json | jq -r '
      .items[] as $d
      | [$d.metadata.namespace,$d.metadata.name,
         ($d.spec.template.spec.volumes[]? | select(.secret) | .secret.secretName),
         ($d.spec.template.spec.imagePullSecrets[]? | .name),
         ($d.spec.template.spec.containers[]?.env[]? | select(.valueFrom.secretKeyRef) | .valueFrom.secretKeyRef.name)
       ] | @tsv' | column -t
  4. Store securely for GitOps:
    # See: Encrypt secrets (Sealed Secrets / SOPS) and keep plaintext out of Git.
HEADS-UP Env vars don’t rotate in-place; pods must restart. Plan a window.
EDITOR'S NOTE Secret management shouldn’t be a heroic endeavour. Be sure to name it clearly, rotate on a predefined automation schedule, and make the rest a button press.

Kubernetes: How to mount configmap

Author: Cassius Adams •
  1. Mount as files (recommended for reloadable config):
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: app
    spec:
      replicas: 1
      selector: { matchLabels: { app: app } }
      template:
        metadata: { labels: { app: app } }
        spec:
          containers:
          - name: app
            image: nginx
            volumeMounts:
            - name: cfg
              mountPath: /etc/app/conf
          volumes:
          - name: cfg
            configMap:
              name: CONFIGMAP_NAME
  2. Expose a single key as one file:
    volumes:
    - name: cfg
      configMap:
        name: CONFIGMAP_NAME
        items:
        - key: my.conf
          path: custom.conf
  3. Inject as environment variables (all keys):
    containers:
    - name: app
      envFrom:
      - configMapRef:
          name: CONFIGMAP_NAME
  4. Inject a single key as one env var:
    env:
    - name: APP_MODE
      valueFrom:
        configMapKeyRef:
          name: CONFIGMAP_NAME
          key: mode
  5. Pick up changes:
    $ kubectl rollout restart deploy/DEPLOYMENT -n NAMESPACE
HEADS-UP Files mounted from a ConfigMap refresh over time, but subPath mounts do not. Env vars never change at runtime.
EDITOR'S NOTE I wrote that files mounted from a ConfigMap refresh over time. In practice, this is generally 2 minutes or less. Of course, if the contents configure a tool that needs an explicit reload (e.g., nginx), that won't just happen automatically. Or if a process keeps a file descriptor open, it may need to reopen the file to read the new contents. For those reasons, I rarely use this functionality in the wild.

Kubernetes: How to mount secret

Author: Cassius Adams •
  1. Mount as files (recommended for sensitive values):
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: app
    spec:
      replicas: 1
      selector: { matchLabels: { app: app } }
      template:
        metadata: { labels: { app: app } }
        spec:
          containers:
          - name: app
            image: nginx
            volumeMounts:
            - name: creds
              mountPath: /etc/app/creds
              readOnly: true
          volumes:
          - name: creds
            secret:
              secretName: SECRET_NAME
  2. Expose a single key as one file:
    volumes:
    - name: creds
      secret:
        secretName: SECRET_NAME
        items:
        - key: password
          path: db.password
  3. Inject all keys as env vars (use sparingly):
    containers:
    - name: app
      envFrom:
      - secretRef:
          name: SECRET_NAME
  4. Inject a single key as one env var:
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: SECRET_NAME
          key: password
  5. Pick up changes (env vars need restarts; files refresh, but not subPath):
    $ kubectl rollout restart deploy/DEPLOYMENT -n NAMESPACE
IMPORTANT Env vars are easy to leak into logs/diagnostics. Prefer file mounts for high-sensitivity material and lock down directory permissions in the container.
HEADS-UP subPath mounts won’t refresh after Secret updates; plan a restart.
EDITOR'S NOTE If a value can end up in a stack trace or a crash dump, it probably will, so I don't like to place it in an env var. File mounts age better and expose less in incident reviews - especially if you're a remote worker constantly sharing your screen.
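For the permission lock-down the IMPORTANT note mentions, the Secret volume accepts a defaultMode so files land read-only for the container user - a sketch (0400 is octal):
    volumes:
    - name: creds
      secret:
        secretName: SECRET_NAME
        defaultMode: 0400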

Kubernetes: How to update configmap

Author: Cassius Adams •
  1. Declarative update (preferred):
    $ kubectl create configmap CONFIGMAP_NAME -n NAMESPACE \
      --from-file=PATH/TO/conf/ --dry-run=client -o yaml > configmap.yaml
    $ kubectl apply -f configmap.yaml
  2. Patch a single key quickly:
    $ kubectl patch configmap CONFIGMAP_NAME -n NAMESPACE \
      --type merge -p '{"data":{"KEY":"NEW_VALUE"}}'
  3. Roll your workloads to pick up the change:
    $ kubectl rollout restart deploy/DEPLOYMENT -n NAMESPACE
  4. Checksum annotation pattern (auto-roll when data changes):
    $ SUM=$(kubectl get configmap CONFIGMAP_NAME -n NAMESPACE -o json | jq -S '.data' | sha256sum | awk '{print $1}')
    $ kubectl patch deploy DEPLOYMENT -n NAMESPACE \
      -p '{"spec":{"template":{"metadata":{"annotations":{"checksum/config":"'"$SUM"'"}}}}}'
  5. Verify new pods and content:
    $ kubectl get pods -n NAMESPACE -w
    $ kubectl describe configmap CONFIGMAP_NAME -n NAMESPACE
HEADS-UP If a ConfigMap is immutable: true, you can’t update in place—delete and recreate with the same name, then roll the workload.
EXPERT TIP In GitOps, prefer generators (e.g., Kustomize configMapGenerator) so name changes include a hash and deployments naturally roll.
EDITOR'S NOTE I'll admit that I do like boring updates - just one YAML change, one visible rollout, and basically zero guesswork. It keeps me from doing late-night greps to figure out which pod is on which version.
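A minimal sketch of the Kustomize generator approach from the EXPERT TIP - the generated ConfigMap name gets a content-hash suffix, so Deployments referencing it roll automatically (the file path is a placeholder):
    # kustomization.yaml
    configMapGenerator:
    - name: CONFIGMAP_NAME
      files:
      - conf/app.properties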

Kubernetes: How to update secret

Author: Cassius Adams •
  1. Preferred: rebuild and apply (avoids base64 mistakes):
    $ kubectl create secret generic SECRET_NAME -n NAMESPACE \
      --from-literal=KEY=NEW_VALUE --dry-run=client -o yaml | kubectl apply -f -
  2. Multiple keys from files or .env:
    $ kubectl create secret generic SECRET_NAME -n NAMESPACE \
      --from-file=cert.pem --from-file=key.pem \
      --from-env-file=.env --dry-run=client -o yaml | kubectl apply -f -
  3. Immutable Secrets:
    # If the Secret has immutable: true, recreate it:
    $ kubectl delete secret SECRET_NAME -n NAMESPACE
    $ kubectl create secret generic SECRET_NAME -n NAMESPACE ...
  4. Roll workloads to pick up changes:
    $ kubectl rollout restart deploy/DEPLOYMENT -n NAMESPACE
  5. Verify:
    $ kubectl describe secret SECRET_NAME -n NAMESPACE
    $ kubectl get pods -n NAMESPACE -w
IMPORTANT Secrets are base64-encoded, not encrypted. Keep RBAC tight and avoid writing plaintext to disk or tickets.
EDITOR'S NOTE When I'm updating a secret, I usually have it automatically re-ingested from a key vault, which requires a rollout. If it's stored encrypted in the repo, I'll usually rebuild it from source and let a restart do the heavy lifting via GitOps or a CI/CD pipeline. I always try to avoid hand-editing base64 - that's how you end up chasing ghosts and ghouls later - unless it's strictly for testing, and then I rectify it afterward.

Kubernetes: How to use configmap

Author: Cassius Adams •
  1. Inject all keys as env vars:
    containers:
    - name: app
      envFrom:
      - configMapRef:
          name: CONFIGMAP_NAME
  2. Inject a single key as one env var:
    env:
    - name: APP_MODE
      valueFrom:
        configMapKeyRef:
          name: CONFIGMAP_NAME
          key: mode
  3. Mount as files (preferred for reloadable config):
    volumes:
    - name: cfg
      configMap:
        name: CONFIGMAP_NAME
    containers:
    - name: app
      volumeMounts:
      - name: cfg
        mountPath: /etc/app/conf
  4. Pick up changes (a restart isn't strictly required for file mounts without subPath, but env vars need one):
    $ kubectl rollout restart deploy/DEPLOYMENT -n NAMESPACE
HEADS-UP Env vars are read at pod start and don’t change in place. File mounts refresh over time, usually within 2 minutes, but subPath mounts do not.
EDITOR'S NOTE Try to default to file mounts. They age better than env vars and make rollouts a decision, not an out-of-sync surprise. A ConfigMap (or Secret) that sets env vars may get updated without a rollout, leaving running pods out of sync (awful if you have to troubleshoot later). But when mounted as a file (excluding subPath), the update lands in the container after a few minutes. Of course, anything that consumes the values still needs to notice the change.

Kubernetes: How to use secrets

Author: Cassius Adams •
  1. Mount as files (preferred for sensitive values):
    volumes:
    - name: creds
      secret:
        secretName: SECRET_NAME
    containers:
    - name: app
      volumeMounts:
      - name: creds
        mountPath: /etc/app/creds
        readOnly: true
  2. Single key as env var (use sparingly):
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: SECRET_NAME
          key: password
  3. Image pulls with registry credentials:
    spec:
      imagePullSecrets:
      - name: regcred
  4. Restart behavior (a restart isn't strictly required for file mounts without subPath, but env vars need one):
    $ kubectl rollout restart deploy/DEPLOYMENT -n NAMESPACE
IMPORTANT Secrets are base64-encoded, not encrypted. Don’t paste values into logs, tickets, or shared terminals.
EDITOR'S NOTE Anything likely to show up in a stack trace ideally should not be in an env var. Files are better for avoiding that, especially when you're screen sharing all day while troubleshooting an event.

Kubernetes: How to use service account

Author: Cassius Adams •
  1. Attach ServiceAccount to a pod:
    spec:
      serviceAccountName: SA_NAME
  2. Bind least privilege:
    $ kubectl create role ro -n NAMESPACE --verb=get,list,watch --resource=pods
    $ kubectl create rolebinding ro-bind -n NAMESPACE \
      --role=ro --serviceaccount NAMESPACE:SA_NAME
  3. Short-lived token (in k8s 1.24+):
    $ kubectl -n NAMESPACE create token SA_NAME --duration=15m
  4. Disable automount if not needed:
    spec:
      automountServiceAccountToken: false
EXPERT TIP Name RoleBindings predictably (e.g., ro-bind) so audits read like a sentence.
EDITOR'S NOTE If the ServiceAccount doesn't need the token, there's no need to mount it. Minimal permissions are a feature, not a bug.
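To double-check the wiring (including whether the automount opt-out from the note above actually took effect), these are handy:
    $ kubectl get pod POD_NAME -n NAMESPACE -o jsonpath='{.spec.serviceAccountName}{"\n"}'
    $ kubectl exec POD_NAME -n NAMESPACE -- \
      ls /var/run/secrets/kubernetes.io/serviceaccount 2>/dev/null || echo "no token mounted"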

Kubernetes: How to view configmap

Author: Cassius Adams •
  1. List and describe:
    $ kubectl get configmaps -n NAMESPACE
    $ kubectl describe configmap CONFIGMAP_NAME -n NAMESPACE
  2. YAML/JSON and a single key:
    $ kubectl get configmap CONFIGMAP_NAME -n NAMESPACE -o yaml
    $ kubectl get configmap CONFIGMAP_NAME -n NAMESPACE -o jsonpath='{.data.KEY}{"\n"}'
  3. All data as JSON (script-friendly):
    $ kubectl get configmap CONFIGMAP_NAME -n NAMESPACE -o json | jq '.data'
EDITOR'S NOTE Do a quick visual scan with describe, then pull exact values with -o json. That two-pronged approach keeps me both fast and precise.

Kubernetes: How to view secret

Author: Cassius Adams •
  1. List + RBAC check:
    $ kubectl get secrets -n NAMESPACE
    $ kubectl auth can-i get secret/SECRET_NAME -n NAMESPACE
  2. Describe (shows keys + sizes, not plaintext):
    $ kubectl describe secret SECRET_NAME -n NAMESPACE
  3. Decode one key (prints plaintext):
    $ kubectl get secret SECRET_NAME -n NAMESPACE -o jsonpath='{.data.KEY}' | base64 -d
IMPORTANT Avoid writing decoded values to disk or paste buffers you don't control.
EDITOR'S NOTE I confirm key names with describe, then decode exactly one value in memory if I must. I will always try to keep it off-disk, terminal window only.

Kubernetes: How to convert YAML to JSON

Author: Cassius Adams •
  1. Local files with yq (v4):
    $ yq -o=json '.' manifest.yaml > manifest.json
  2. Many YAML docs to NDJSON:
    $ yq ea -o=json -I=0 '. as $item ireduce ([]; . + [$item]) | .[]' manifests.yaml
  3. Live objects from the cluster:
    $ kubectl get deploy/NAME -n NAMESPACE -o json
EXPERT TIP Converting “desired” manifests? Use yq. Inspecting "live" objects? Use kubectl ... -o json.
EDITOR'S NOTE I convert to JSON when I'm scripting or diffing. It's less pretty but I find for me it's more precise and less prone to formatting errors.
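Going the other direction (JSON back to YAML) is the same yq invocation with pretty-print output - handy when a tool emits JSON but your repo standard is YAML:
    $ yq -P '.' manifest.json > manifest.yaml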

Storage

Persistent volumes and claims, snapshots, and mounting secrets/config files securely.

Kubernetes: How to access persistent volume

Author: Cassius Adams •
  1. Notes on PV/PVC relationships, pods for access, and readOnly vs readWrite modes are being drafted — check back on 2025-10-15.

Kubernetes: How to backup pvc

Author: Cassius Adams •
  1. Walkthrough of PVC backups with VolumeSnapshots, clones, and rsync strategies is being drafted — check back on 2025-10-15.

Kubernetes: How to create persistent volume

Author: Cassius Adams •
  1. Guide to defining PVs and using StorageClasses/dynamic provisioning is being drafted — check back on 2025-10-15.

Kubernetes: How to delete persistent volume

Author: Cassius Adams •
  1. Steps for PV deletion with reclaimPolicy (Delete/Retain/Recycle) and safety checks are being drafted — check back on 2025-10-15.

Kubernetes: How to list pvc

Author: Cassius Adams •
  1. Cheatsheet for listing PVCs by storage class, capacity, and binding status is being drafted — check back on 2025-10-15.

Kubernetes: How to mount a file

Author: Cassius Adams •
  1. Examples for mounting single files via projected volumes, subPath, and ConfigMap/Secret keys are being drafted — check back on 2025-10-15.

Kubernetes: How to mount pvc

Author: Cassius Adams •
  1. Tutorial on attaching PVCs to Pods via volumes/volumeMounts with access modes is being drafted — check back on 2025-10-15.

Kubernetes: How to mount volume

Author: Cassius Adams •
  1. Overview of mounting volumes (emptyDir, hostPath, PVC, config/secret) with examples is being drafted — check back on 2025-10-15.

RBAC & Security

Identity and permissions: service accounts, tokens, and secure access patterns.

Nodes & Scheduling

Node lifecycle, labels, taints/tolerations, draining and capacity management.

Kubernetes: How to how many master nodes

Author: Cassius Adams •
  1. HA control plane sizing primer (1 vs 3+ nodes, quorum, and failure domains) is being drafted — check back on 2025-10-15.

Kubernetes: How to how many pods per node

Author: Cassius Adams •
  1. Guide to max Pods per node (kubelet --max-pods, CNI/IPAM limits, and tuning) is being drafted — check back on 2025-10-15.

Kubernetes: How to add node

Author: Cassius Adams •
  1. Procedures for joining worker nodes (kubeadm join/managed node pools) are being drafted — check back on 2025-10-15.

Kubernetes: How to drain a node

Author: Cassius Adams •
  1. Runbook for cordon/drain/uncordon with PDBs, DaemonSets, and disruptions is being drafted — check back on 2025-10-15.

Kubernetes: How to get node ip

Author: Cassius Adams •
  1. Tips for finding node Internal/External IPs via kubectl and JSONPath are being drafted — check back on 2025-10-15.

Kubernetes: How to join node

Author: Cassius Adams •
  1. Kubeadm join flow and managed cluster node-pool onboarding notes are being drafted — check back on 2025-10-15.

Kubernetes: How to label nodes

Author: Cassius Adams •
  1. How-to on adding/removing node labels and selecting them in scheduling is being drafted — check back on 2025-10-15.

Kubernetes: How to move pod to another node

Author: Cassius Adams •
  1. Approaches for relocating Pods (drain, taints/affinity, and disruption planning) are being drafted — check back on 2025-10-15.

Kubernetes: How to remove node from cluster

Author: Cassius Adams •
  1. Checklist for decommissioning nodes (cordon/drain, node delete, cloud cleanup) is being drafted — check back on 2025-10-15.

Kubernetes: How to remove taint

Author: Cassius Adams •
  1. Commands and patterns for removing taints (and verifying tolerations) are being drafted — check back on 2025-10-15.

Kubernetes: How to restart a node

Author: Cassius Adams •
  1. Runbook for rebooting nodes safely (cordon/drain, maintenance mode, post-checks) is being drafted — check back on 2025-10-15.

Kubernetes: How to taint a node

Author: Cassius Adams •
  1. Tutorial on applying taints (NoSchedule/NoExecute/PreferNoSchedule) and testing effects is being drafted — check back on 2025-10-15.

Autoscaling & Resources

Horizontal Pod Autoscaler and manual scaling to match demand and SLOs.

Kubernetes: How to autoscale

Author: Cassius Adams •
  1. Intro to HPA configuration, metrics targets, and verifying scale events is being drafted — check back on 2025-10-15.

Kubernetes: How to scale down pods

Author: Cassius Adams •
  1. Guide to scaling down Deployments/StatefulSets safely (PDBs and draining traffic) is being drafted — check back on 2025-10-15.

Kubernetes: How to scale pods

Author: Cassius Adams •
  1. How-to on manual scaling with kubectl scale and monitoring results is being drafted — check back on 2025-10-15.

Troubleshooting & Observability

Logs, events, versions, cluster diagnostics, and common checks.

Kubernetes: How to tail logs (kubectl)

Author: Cassius Adams •
  1. Recipes for tailing logs live (-f, --tail, multi-container selection, time ranges) are being drafted — check back on 2025-10-15.

Kubernetes: How to view logs (kubectl)

Author: Cassius Adams •
  1. Overview of fetching logs by Pod/label/namespace with useful filters is being drafted — check back on 2025-10-15.

Kubernetes: How to watch logs (kubectl)

Author: Cassius Adams •
  1. How-to for live log streaming across containers and previous restarts is being drafted — check back on 2025-10-15.

Kubernetes: How to backup etcd

Author: Cassius Adams •
  1. Playbook for etcd snapshots, storage, and restore validation (etcdctl) is being drafted — check back on 2025-10-15.

Kubernetes: How to check pod logs

Author: Cassius Adams •
  1. Triage guide for inspecting Pod logs with container selection and timestamps is being drafted — check back on 2025-10-15.

Kubernetes: How to check pod memory usage

Author: Cassius Adams •
  1. Using metrics to view Pod/container memory (kubectl top, metrics-server, scraping) is being drafted — check back on 2025-10-15.

Kubernetes: How to check version

Author: Cassius Adams •
  1. Commands for checking kubectl/server versions and feature-gate awareness are being drafted — check back on 2025-10-15.

Kubernetes: How to create a cluster

Author: Cassius Adams •
  1. Primer on creating clusters (managed vs kubeadm vs local) with pros/cons is being drafted — check back on 2025-10-15.

Kubernetes: How to deploy

Author: Cassius Adams •
  1. High-level deployment flow (apply manifests, watch rollouts, verify health) is being drafted — check back on 2025-10-15.

Kubernetes: How to find out why a pod restarted

Author: Cassius Adams •
  1. Checklist using events, lastState/terminated, and OOM/crash clues is being drafted — check back on 2025-10-15.

Kubernetes: How to force image pull

Author: Cassius Adams •
  1. Techniques for forcing new image pulls (policy, digest pins, Pod restarts) are being drafted — check back on 2025-10-15.

Kubernetes: How to get started

Author: Cassius Adams •
  1. Newcomer roadmap (install kubectl, connect to a cluster, first apply/get/describe) is being drafted — check back on 2025-10-15.

Kubernetes: How to list images

Author: Cassius Adams •
  1. Commands to list container images used in Pods across namespaces with JSONPath are being drafted — check back on 2025-10-15.

Kubernetes: How to login

Author: Cassius Adams •
  1. Overview of authenticating to clusters (kubeconfig contexts, cloud plugins, SSO) is being drafted — check back on 2025-10-15.

Kubernetes: How to pull image

Author: Cassius Adams •
  1. Notes on image pulls in Kubernetes (credentials, imagePullPolicy, pre-pull strategies) are being drafted — check back on 2025-10-15.

Kubernetes: How to pull local docker image

Author: Cassius Adams •
  1. Guide to using local images with Kind/Minikube registries and pull policies is being drafted — check back on 2025-10-15.

Kubernetes: How to setup

Author: Cassius Adams •
  1. Environment setup checklist (kubectl, kubeconfig, context, and autocompletion) is being drafted — check back on 2025-10-15.

Kubernetes: How to start

Author: Cassius Adams •
  1. First steps after connecting to a cluster (namespaces, get/describe/apply basics) are being drafted — check back on 2025-10-15.

Kubernetes: How to upgrade

Author: Cassius Adams •
  1. Overview of cluster and workload upgrades (versions, drain windows, rollbacks) is being drafted — check back on 2025-10-15.

Kubernetes: How to use

Author: Cassius Adams •
  1. Kubernetes essentials overview (resources, controllers, and common workflows) is being drafted — check back on 2025-10-15.

Kubernetes: How to use local docker image

Author: Cassius Adams •
  1. Tutorial for pushing local images to an in-cluster registry and referencing them in Pods is being drafted — check back on 2025-10-15.

Kubernetes: How to view logs

Author: Cassius Adams •
  1. Covers basic log retrieval, multi-container selection, and filtering/since options — check back on 2025-10-15.

Contexts & Kubeconfig

Switch clusters/namespaces and manage kubeconfig credentials safely.

Kubernetes: How to add context (kubectl)

Author: Cassius Adams •
  1. Commands for creating and switching kubeconfig contexts with namespaces are being drafted — check back on 2025-10-15.

Kubernetes: How to install (kubectl)

Author: Cassius Adams •
  1. Install guides for kubectl on macOS/Linux/Windows with checksum verification are being drafted — check back on 2025-10-15.

Kubernetes: How to install windows (kubectl)

Author: Cassius Adams •
  1. Windows-specific kubectl installation via winget/choco/scoop with PATH setup is being drafted — check back on 2025-10-15.

Kubernetes: How to change namespace

Author: Cassius Adams •
  1. How-to for switching the active namespace (current context vs per-command flags) is being drafted — check back on 2025-10-15.

Kubernetes: How to get cluster name

Author: Cassius Adams •
  1. Tips for discovering the current cluster/context and mapping to cloud resources are being drafted — check back on 2025-10-15.

Kubernetes: How to get kubeconfig

Author: Cassius Adams •
  1. Obtaining kubeconfig files from managed platforms and merging contexts is being drafted — check back on 2025-10-15.

Kubernetes: How to install

Author: Cassius Adams •
  1. High-level install paths (managed clusters, kubeadm, local dev) with prerequisites are being drafted — check back on 2025-10-15.

Kubernetes: How to install calico

Author: Cassius Adams •
  1. Network plugin install overview (Calico manifest apply, prerequisites, verification) is being drafted — check back on 2025-10-15.

Kubernetes: How to install crd

Author: Cassius Adams •
  1. Instructions for installing CustomResourceDefinitions and verifying readiness are being drafted — check back on 2025-10-15.

Kubernetes: How to install helm

Author: Cassius Adams •
  1. Helm installation and first-use (repos, charts, values) quickstart is being drafted — check back on 2025-10-15.

Kubernetes: How to set namespace

Author: Cassius Adams •
  1. Setting the default namespace on the current context and verifying it is being drafted — check back on 2025-10-15.

Kubernetes: How to switch namespace

Author: Cassius Adams •
  1. Switching namespaces per command versus updating kubeconfig context is being drafted — check back on 2025-10-15.

Namespaces

Organize workloads and policies with namespace scoping.

Kubernetes: How to create namespace

Author: Cassius Adams •
  1. Guide to creating namespaces with labels/annotations and default quotas is being drafted — check back on 2025-10-15.

Kubernetes: How to delete namespace

Author: Cassius Adams •
  1. Procedures for namespace deletion, finalizer cleanup, and waiting strategies are being drafted — check back on 2025-10-15.

Kubernetes: How to list namespaces

Author: Cassius Adams •
  1. Cheatsheet for listing namespaces and filtering by labels/status is being drafted — check back on 2025-10-15.