A financial services client we onboarded last year had every firewall rule tuned, MFA enforced across the board, and a clean vulnerability scan. Their cloud storage buckets? Sitting in plaintext. No data-at-rest encryption, no key management policy, no documented rationale for the gap. Three compliance frameworks required it. Nobody had owned it.
This is more common than anyone in the industry wants to admit. Teams focus on perimeter controls and identity, then treat storage encryption as something the cloud provider just handles. Sometimes it does. Often it does not handle it the way your auditor expects.
The Three Encryption Methods That Actually Matter
When we talk about data-at-rest encryption for cloud storage, the conversation usually lands on three approaches. Each one fits a different operational model, and picking the wrong one creates management overhead that nobody budgeted for.
Full Disk Encryption
Full Disk Encryption (FDE) encrypts the entire storage device — operating system, application files, temporary data, everything. If someone pulls a drive from a rack or clones a virtual disk image, they get ciphertext. Azure uses AES-256 encryption across its SaaS, PaaS, and IaaS storage services, including disk, blob, table, and file storage. This is the baseline, and it should be non-negotiable.
FDE is transparent to applications. Nothing changes in your deployment pipeline. Nothing changes in your monitoring. It simply works underneath everything else. For most managed environments we support at SSE, FDE is the first box we check.
The limitation is granularity. FDE is all-or-nothing at the device level. You cannot encrypt one tenant’s data differently from another on the same disk. You cannot apply different retention or key rotation policies per dataset. For multi-tenant environments or regulated workloads where different data classifications live on the same infrastructure, FDE alone is not enough.
File-Level Encryption
File-level encryption targets individual files or directories. It gives you granular control — different keys for different datasets, different rotation schedules, different access policies per folder. We use this approach for clients who store regulated data alongside general operational files in the same storage account.
The trade-off is management overhead. Every encrypted file or directory needs its own key reference. Key rotation becomes a per-object operation instead of a per-volume operation. We inherited an environment last year where a client had file-level encryption enabled on roughly 40,000 objects across three storage accounts with no key inventory. Figuring out which key decrypted which file took the better part of a week.
File-level encryption makes sense when your compliance requirements demand it. If you need to prove that PII is encrypted separately from general logs, or that financial records use a different key than marketing assets, this is the path. Just budget for the key management work.
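One way to keep that key management work tractable is a machine-readable key inventory that gets checked as part of routine reviews. A minimal sketch in Python (the dataset names, key IDs, and one-year rotation window are illustrative assumptions, not any client's real configuration):

```python
from datetime import date, timedelta

# Illustrative key inventory: each dataset maps to the key that protects it.
# In practice this lives in a tracked file or CMDB, not in application code.
KEY_INVENTORY = {
    "pii/customer-records": {"key_id": "kv-pii-001",  "last_rotated": date(2024, 1, 15)},
    "logs/general":         {"key_id": "kv-logs-001", "last_rotated": date(2023, 6, 1)},
    "finance/ledgers":      {"key_id": "kv-fin-001",  "last_rotated": date(2024, 3, 10)},
}

def keys_due_for_rotation(inventory, max_age_days=365, today=None):
    """Return dataset names whose key has not been rotated within max_age_days."""
    today = today or date.today()
    cutoff = today - timedelta(days=max_age_days)
    return [name for name, entry in inventory.items()
            if entry["last_rotated"] < cutoff]

print(keys_due_for_rotation(KEY_INVENTORY, today=date(2024, 7, 1)))
```

Run against a real inventory on a schedule, a check like this turns "which keys are overdue" from a week of archaeology into a one-line answer.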
Database Encryption
For organisations relying on cloud-hosted databases — Azure SQL Database, Cosmos DB, DynamoDB — encrypting at the database layer adds protection that sits closer to the actual sensitive records. Transparent Data Encryption (TDE) in Azure SQL, for example, encrypts the database files without requiring application changes.
Database encryption protects against a specific threat: someone gaining access to the underlying storage files without going through the database engine. It does not protect against a compromised application connection string or an over-privileged service account querying the data through normal channels. That distinction matters when you are writing your risk assessment.
Key Management Is Where Strategies Succeed or Fail
Here is my position, and I will stand by it: the encryption algorithm matters far less than your key management practice. Every major cloud provider uses AES-256. The algorithm is not your differentiator. How you manage, rotate, and audit your keys is.
Cloud providers typically offer two main models. Provider-managed keys mean the cloud vendor generates, stores, and rotates keys on your behalf. Customer-managed keys (CMKs) give you control through services like Azure Key Vault or AWS Key Management Service. A third, less common option, Bring Your Own Key (BYOK), lets you generate keys in your own HSM and import them into the cloud provider’s key management service.
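Under all of these models the mechanics are the same envelope-encryption pattern: a per-object data key encrypts the data, and the key you control (or the provider controls) wraps that data key. The sketch below shows only the flow. It uses a toy SHA-256 XOR stream cipher in place of the AES-256-GCM that real KMS services use; this is an illustration of the pattern, never production cryptography:

```python
import hashlib
import secrets

def _keystream(key: bytes, length: int) -> bytes:
    """Toy SHA-256 counter-mode keystream. Illustration only, NOT real crypto."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Symmetric toy cipher: XOR data with a key-derived keystream."""
    ks = _keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# Envelope encryption pattern:
#   1. A per-object data key encrypts the actual data.
#   2. The key-encryption key (your CMK) wraps the data key.
# Only the wrapped data key is stored alongside the ciphertext.
cmk = secrets.token_bytes(32)        # customer-managed key (held in Key Vault / KMS)
data_key = secrets.token_bytes(32)   # per-object data key
wrapped_key = xor_cipher(cmk, data_key)

ciphertext = xor_cipher(data_key, b"account ledger contents")

# Decryption: unwrap the data key with the CMK, then decrypt the object.
recovered_key = xor_cipher(cmk, wrapped_key)
plaintext = xor_cipher(recovered_key, ciphertext)
```

The governance difference between the models is simply who holds and audits the wrapping key; the data path is identical.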
For most of the client environments we manage, customer-managed keys strike the right balance. You control rotation schedules, you control access policies, you have audit logs showing every key operation. Provider-managed keys are fine for development and staging. Production workloads with compliance requirements need CMKs at minimum.
A government contractor we work with required BYOK for their Azure storage accounts. The key generation happened on-premises in a FIPS 140-2 validated HSM, then the keys were imported into Azure Key Vault. The process added about two weeks to the initial deployment. But when their auditors asked who controlled the encryption keys, the answer was unambiguous.
What the Cloud Providers Give You by Default
Azure encrypts all data at rest using AES-256 by default across its storage services. GCP applies AES-256 encryption to Cloud Storage automatically. AWS offers server-side encryption for S3 with AES-256 as the default. This is good. It is also not sufficient for most regulated workloads.
Default encryption uses provider-managed keys. You have no control over rotation. You have limited audit visibility. And if a compliance framework requires you to demonstrate key custody — not just that encryption exists, but that you control the keys — default encryption will not satisfy the requirement.
We benchmarked this across four client environments during annual compliance reviews. Every one of them had default encryption enabled. Two of them needed CMKs to meet their regulatory obligations and did not have them configured. The encryption was technically present but operationally insufficient.
Encryption Does Not Replace Access Controls
This is the caveat that too many encryption discussions skip. Data-at-rest encryption protects against one specific threat vector: unauthorised access to the underlying storage media. A stolen disk. A cloned snapshot. A backup tape that leaves the building.
It does not protect against a misconfigured storage bucket that is publicly accessible. It does not protect against an over-privileged IAM role that can read and decrypt data through normal API calls. It does not protect against a compromised service account.
Encryption must work alongside identity controls, network segmentation, and access policies. If you are building your Azure security posture, encryption is one layer in a stack, not a standalone solution. We always pair storage encryption work with an access control review. They are two sides of the same coin.
Practical Implementation: Where to Start
If you are starting from scratch or remediating gaps, here is the sequence we follow in client engagements.
Step 1: Inventory your storage. You cannot encrypt what you have not catalogued. List every storage account, database, blob container, and file share. Note the data classification for each. This step alone surfaces surprises — orphaned storage accounts, forgotten backup containers, test databases with production data.
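The inventory does not need tooling to get started; even a flat list with a classification field surfaces the gaps. A minimal sketch (the resource names, classifications, and field layout are invented for illustration):

```python
# Illustrative storage inventory: the fields we capture per resource.
inventory = [
    {"name": "prod-storage-01", "type": "blob", "classification": "pii",      "encryption": "cmk"},
    {"name": "staging-data",    "type": "blob", "classification": "internal", "encryption": "platform"},
    {"name": "old-backup-2019", "type": "file", "classification": None,       "encryption": "platform"},
]

def unclassified(resources):
    """Surface resources nobody has classified. These are the surprises."""
    return [r["name"] for r in resources if r["classification"] is None]

print(unclassified(inventory))
```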
Step 2: Enable platform-default encryption everywhere. This is your baseline. Every cloud storage resource should have encryption at rest enabled. For AWS, this includes enabling SSE on S3 buckets and DynamoDB tables:
Resources:
  MyDynamoDBTable:
    Type: AWS::DynamoDB::Table
    Properties:
      SSESpecification:
        SSEEnabled: true
Step 3: Upgrade to customer-managed keys for regulated workloads. Identify which storage accounts hold PII, financial data, health records, or anything subject to regulatory requirements. Migrate those to CMKs with documented rotation schedules.
Step 4: Document your key management procedures. Write a runbook that covers key creation, rotation, revocation, and emergency access. Include the escalation path for a compromised key. If you have not tested your key rotation procedure, it does not work — test it in a non-production environment first.
Step 5: Audit and monitor. Enable logging on all key operations. Set alerts for unusual access patterns. Review key access logs quarterly at minimum. Configure your audit policies to capture encryption-related events alongside your existing identity and access monitoring.
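The quarterly review can start as simply as diffing key operations against an approved-principal list. A sketch of that check (the log shape is loosely modelled on KMS and Key Vault audit events, but the field names here are assumptions, not any provider's actual schema):

```python
# Illustrative key-operation log entries (field names are assumptions).
key_events = [
    {"principal": "svc-backup",  "operation": "Decrypt", "key_id": "kv-pii-001"},
    {"principal": "jdoe-laptop", "operation": "Decrypt", "key_id": "kv-pii-001"},
    {"principal": "svc-app",     "operation": "Encrypt", "key_id": "kv-fin-001"},
]

APPROVED_PRINCIPALS = {"svc-backup", "svc-app"}

def unexpected_access(events, approved):
    """Flag key operations performed by principals outside the approved set."""
    return [(e["principal"], e["operation"], e["key_id"])
            for e in events if e["principal"] not in approved]

print(unexpected_access(key_events, APPROVED_PRINCIPALS))
```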
Do Not Forget Your Backups
Backup data is one of the most common encryption blind spots. Your production storage might be encrypted with customer-managed keys, but what about the backups? Backup targets — whether in Veeam repositories, Azure Backup vaults, or S3 lifecycle-transitioned storage — need the same encryption treatment as the source data.
We discovered this gap during a routine review of a client environment where production Azure SQL databases used TDE with CMKs, but the automated backup jobs were writing to a storage account with only provider-managed encryption. The production data met compliance requirements. The backups did not. Same data, different encryption posture, same audit scope.
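A simple parity check across source and backup pairs catches this class of gap before an auditor does. A sketch, with the resource names and the key-model ranking as illustrative assumptions:

```python
# Every backup target should match (or exceed) its source's key model.
KEY_MODEL_RANK = {"none": 0, "platform": 1, "cmk": 2, "byok": 3}

pairs = [
    {"source": "sql-prod",   "source_model": "cmk", "backup": "backup-vault-1", "backup_model": "cmk"},
    {"source": "sql-prod-2", "source_model": "cmk", "backup": "backup-sa-old",  "backup_model": "platform"},
]

def weaker_backups(pairs):
    """Return backup targets encrypted more weakly than their source."""
    return [p["backup"] for p in pairs
            if KEY_MODEL_RANK[p["backup_model"]] < KEY_MODEL_RANK[p["source_model"]]]

print(weaker_backups(pairs))
```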
Tapes sent offsite, snapshots replicated to a disaster recovery region, data extracts sent to partner organisations — every copy of sensitive data inherits the same encryption obligations as the original.
The Takeaway
Data-at-rest encryption is a solved technical problem. Every major cloud provider supports it natively with AES-256. The hard part is not turning it on. The hard part is choosing the right key management model, documenting your procedures, and making sure every copy of every dataset is covered — including backups, replicas, and exports.
Start with your storage inventory. Upgrade to customer-managed keys where compliance demands it. Write the key management runbook. Audit quarterly. Boring, repeatable, defensible.
If your team needs help assessing encryption gaps or building a key management practice for your cloud storage, reach out to us. We have done this across enough client environments to know where the blind spots hide.