A Base64 String Is Not a Security Strategy
Last year we were brought in to assess a mid-sized fintech company’s Kubernetes environment after a failed compliance audit. Their security team assumed Kubernetes Secrets were encrypted. They weren’t. Every secret — database credentials, API keys, TLS certificates — was sitting in etcd as base64-encoded plaintext. That’s not encryption. That’s encoding. An attacker with read access to etcd (T1552.004 — Unsecured Credentials in Files) could decode every secret in the cluster with a one-liner.
This is not an edge case. Most new Kubernetes clusters store secrets unencrypted in the cluster store, send them over the network in plain text, and mount them in containers as plain text. If your Kubernetes secrets management strategy stops at kubectl create secret, you have a problem.
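The decode really is a one-liner. Here's a demonstration with a hypothetical credential — no key, no tooling beyond what ships on every Linux box:

```shell
# base64 "protects" nothing: decoding needs no key, just one command
echo 'cGFzc3dvcmQxMjM=' | base64 -d
# → password123
```

Any value an attacker pulls out of etcd or a leaked manifest reverses exactly this easily.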
This checklist walks through every layer you need to audit — from etcd encryption to external vault integration. Pass or fail each checkpoint. No ambiguity.
Checkpoint 1: Are Secrets Encrypted at Rest in etcd?
Why This Matters
The etcd datastore is the single source of truth for your entire cluster. If an attacker compromises a control plane node or obtains a snapshot of etcd, they get every secret in the cluster. Without encryption at rest, those secrets are readable immediately.
What to Check
Verify that your kube-apiserver is configured with an EncryptionConfiguration object. Here’s what a properly configured encryption config looks like:
```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: cjjPMcWpTPKhAdieVtd+KhG4NN+N6e3NmBPMXJvbfrY=
      - identity: {}
```
The aescbc provider must appear before the identity provider. If identity is listed first, secrets are written unencrypted and the aescbc block is dead weight.
Then confirm the API server flag is set:
```shell
--encryption-provider-config=/etc/kubernetes/etcd/encryption.yaml
```
In your /etc/kubernetes/manifests/kube-apiserver.yaml, you also need the volume mount:
```yaml
volumeMounts:
  - mountPath: /etc/kubernetes/etcd
    name: etcd
    readOnly: true
```
And the corresponding hostPath:
```yaml
volumes:
  - hostPath:
      path: /etc/kubernetes/etcd
      type: DirectoryOrCreate
    name: etcd
```
How to Verify
Create a test secret and read it directly from etcd using etcdctl:
```shell
# Create a test secret
kubectl create secret generic secret1 -n default --from-literal=mykey=mydata

# Read it directly from etcd (on self-managed clusters you will usually
# also need --cacert/--cert/--key pointing at the etcd client certificates)
ETCDCTL_API=3 etcdctl get /registry/secrets/default/secret1 --hex
```
If the output shows readable plaintext or recognizable base64, you fail this checkpoint.
Pass criteria: etcdctl output shows encrypted (unreadable) data for all secrets created after encryption was enabled.
Fail criteria: Plaintext or base64-encoded values visible in etcd output.
Checkpoint 2: Are You Using KMS v2 Envelope Encryption?
Static encryption keys stored on the control plane node are better than nothing. But if an attacker gets root on that node, they have both the encrypted data and the key. Game over.
Kubernetes 1.29 shipped KMS v2 as GA. It’s a complete redesign of the original KMS v1 plugin interface. The improvements matter:
- A new Data Encryption Key (DEK) is generated for every write operation
- DEKs are encrypted at rest using a Key Encryption Key (KEK) stored externally
- Attackers now need to compromise both the Kubernetes control plane and the external KMS
A control plane node snapshot alone is no longer enough to read a secret in plain text. This is the difference between a single point of failure and defense in depth.
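A KMS v2 provider stanza looks roughly like the following; the plugin name and Unix socket path are placeholders for whatever your KMS plugin actually registers:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - kms:
          apiVersion: v2
          name: my-kms-plugin  # placeholder: the name your plugin registers
          endpoint: unix:///var/run/kmsplugin/socket.sock  # placeholder socket path
      - identity: {}
```

As with aescbc, the `kms` provider must come before `identity`, and the plugin itself runs alongside the API server and brokers calls to the external KMS.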
I’ll take a position here: if you’re running production workloads on Kubernetes 1.29 or later, there is no valid excuse for not enabling KMS v2. The performance overhead is negligible. The security improvement is substantial.
Pass criteria: KMS v2 provider configured, KEKs stored in an external KMS or HSM.
Fail criteria: Static keys on disk, or still running KMS v1.
Checkpoint 3: Is RBAC Locking Down Secret Access?
During a quarterly security review for an enterprise client, we discovered that every developer in their organization had get and list permissions on secrets across all namespaces. Seventeen teams. Hundreds of secrets. Zero least-privilege controls.
Audit your RBAC policies:
```shell
# Find all clusterroles that grant access to secrets
kubectl get clusterroles -o json | jq '.items[] | select(.rules[]?.resources[]? == "secrets") | .metadata.name'
```
Every role that touches secrets should be namespace-scoped, not cluster-wide. Use Role and RoleBinding, not ClusterRole and ClusterRoleBinding, for secret access. Apply least-privilege: most services only need get on specific secrets, not list or watch across a namespace.
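A minimal least-privilege pairing looks like this sketch — the namespace, secret, and service account names are hypothetical, but the shape is the point: one named secret, one verb, one subject:

```yaml
# Grant one service account read access to one named secret — nothing more
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-db-creds
  namespace: payments        # hypothetical namespace
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["db-creds"]  # a specific secret, not a wildcard
    verbs: ["get"]               # no list, no watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-db-creds
  namespace: payments
subjects:
  - kind: ServiceAccount
    name: payments-api       # hypothetical workload identity
    namespace: payments
roleRef:
  kind: Role
  name: read-db-creds
  apiGroup: rbac.authorization.k8s.io
```

Note that `resourceNames` does not restrict `list`, which is one more reason to grant only `get`.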
Pass criteria: Secret access is namespace-scoped, limited to specific service accounts, no wildcard permissions.
Fail criteria: Cluster-wide secret access, broad list/watch permissions, or secrets accessible to default service accounts.
Checkpoint 4: Are Secrets Protected in Transit?
Encryption at rest is half the equation. Kubernetes transfers secrets from etcd to the kubelet to the pod — and by default, that traffic may not be encrypted. If you’re managing an environment where nodes communicate across untrusted network segments, this is a real attack vector (T1557 — Adversary-in-the-Middle).
Deploy a service mesh like Istio or Linkerd to enforce mutual TLS (mTLS) between all pod-to-pod communication. For the API server to etcd path, ensure etcd is configured with TLS peer certificates. Most managed Kubernetes services handle this, but if you’re running self-managed clusters on dedicated infrastructure, verify it manually.
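If you’re on Istio, for example, mesh-wide strict mTLS is a single resource — placing it in the root namespace (`istio-system` by default) applies it to the whole mesh:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT   # reject any plaintext pod-to-pod traffic
```

Start with `PERMISSIVE` mode during rollout if you have workloads not yet in the mesh, then flip to `STRICT` once everything carries a sidecar.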
Pass criteria: mTLS enforced between pods, TLS on etcd peer and client connections.
Fail criteria: Plaintext communication between any components handling secret data.
Checkpoint 5: Are You Using an External Secrets Store?
Kubernetes Secrets are a native API object. They’re convenient. They’re also not a secrets management platform.
Production environments should integrate with an external vault — HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, or GCP Secret Manager. Kubernetes provides the Secrets Store CSI Driver specifically for this integration. External vaults give you what native secrets don’t:
- Centralized audit logging of every secret access
- Dynamic secret generation with automatic expiry
- Fine-grained access policies independent of Kubernetes RBAC
- Automated key rotation without redeploying workloads
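As a sketch of the CSI integration, here is roughly what a SecretProviderClass looks like with the AWS provider — the namespace, object names, and paths are hypothetical:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-db-creds
  namespace: payments          # hypothetical namespace
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "prod/db-password"   # hypothetical Secrets Manager entry
        objectType: "secretsmanager"
```

The pod then mounts it through a `csi` volume with driver `secrets-store.csi.k8s.io`, and the secret value arrives as a file — never stored as a native Kubernetes Secret unless you explicitly enable syncing.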
The NIST Cybersecurity Framework calls out key management and access control as core protective functions. An external vault maps directly to these controls. If you’re working toward compliance with any framework — SOC 2, ISO 27001, PCI DSS — an external vault is practically a requirement.
One caveat: external vaults introduce a dependency. If your vault is unreachable, pods that need secrets on startup will fail. Plan for this. Cache secrets locally with a TTL, configure retry logic, and test what happens when the vault goes down. We learned this the hard way on a client engagement when a network partition between their Kubernetes cluster and Vault instance caused a cascading deployment failure across three environments.
Pass criteria: External vault integrated via CSI driver or sidecar injector, with audit logging enabled.
Fail criteria: All secrets stored natively in Kubernetes with no external backing store.
Checkpoint 6: Are Secret Manifests Out of Source Control?
This one still catches teams. A client came to us after a former contractor’s GitHub repository — public — contained their production database credentials in a Kubernetes Secret manifest. The credentials were base64-encoded in the YAML. As discussed, base64 is not encryption.
Scan your repositories. Use tools like trufflehog or gitleaks to detect secrets in git history:
```shell
# Scan current repo for secrets in git history
gitleaks detect --source . --verbose
```
Use sealed-secrets or external secret references (like ExternalSecret CRDs) so that the actual secret values never touch your repo. Your manifests should reference secrets, not contain them.
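With the External Secrets Operator, for instance, the manifest you commit carries only a reference — the value stays in the vault. The store name and remote key below are placeholders:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-creds
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend        # placeholder SecretStore pointing at your vault
    kind: SecretStore
  target:
    name: db-creds             # the Kubernetes Secret the operator creates
  data:
    - secretKey: password
      remoteRef:
        key: prod/db           # placeholder path in the external vault
        property: password
```

This manifest is safe to commit to any repository: it contains no secret material, only the coordinates to fetch it.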
Pass criteria: No secret values in any git repository, scanning tools in CI pipeline.
Fail criteria: Base64-encoded or plaintext secrets found in any branch or git history.
Checkpoint 7: Are Privileged Containers Restricted?
A privileged container can access the host filesystem, including the kubelet’s secret store. This means a compromised privileged pod can read secrets belonging to other pods on the same node (T1611 — Escape to Host).
Enforce Pod Security Standards at the namespace level. At minimum, use the restricted profile for workloads that don’t need elevated privileges — which should be most of them.
```shell
# Label namespace to enforce restricted pod security
kubectl label namespace production \
  pod-security.kubernetes.io/enforce=restricted
```
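Equivalently, declare the labels in the Namespace manifest itself so the policy survives cluster rebuilds; pairing `enforce` with `warn` and `audit` modes is a common pattern:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted   # warn on violating manifests
    pod-security.kubernetes.io/audit: restricted  # record violations in audit logs
```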
For environments where you need deeper policy enforcement, consider tools like OPA Gatekeeper or Kyverno to create custom admission policies. If your organization needs help designing these controls for complex multi-tenant clusters, our team can help scope that engagement.
Pass criteria: Pod Security Standards enforced, no unnecessary privileged containers.
Fail criteria: Privileged containers running without documented justification.
Audit Summary: Minimum Passing Score
Here’s the scorecard:
| Checkpoint | Status |
|---|---|
| 1. Encryption at rest in etcd | ☐ Pass / ☐ Fail |
| 2. KMS v2 envelope encryption | ☐ Pass / ☐ Fail |
| 3. RBAC least-privilege on secrets | ☐ Pass / ☐ Fail |
| 4. Encryption in transit (mTLS) | ☐ Pass / ☐ Fail |
| 5. External secrets store | ☐ Pass / ☐ Fail |
| 6. No secrets in source control | ☐ Pass / ☐ Fail |
| 7. Privileged container restrictions | ☐ Pass / ☐ Fail |
Checkpoints 1, 3, and 6 are non-negotiable. Failing any of those three means your secrets are exposed — full stop. Checkpoints 2 and 5 are strongly recommended for any production environment. Checkpoints 4 and 7 round out a mature security posture.
If you passed all seven, good. Now schedule this audit quarterly, because configurations drift, new namespaces get created without policies, and teams add privileged containers when they’re debugging at 2 AM. If you’re also interested in how ransomware operators exploit encryption mechanics on the other side of the fence, understanding their TTPs will sharpen your defensive posture.
Run the checklist. Fix the failures. Automate the verification. That’s secrets management — not a single config file, but a continuous discipline.

