Cloud Security Reality Check: The Misconfigurations That Are Costing Companies Millions

I've done enough cloud security assessments at this point that I can practically write the report before I even start the engagement. Not because every environment is identical — they're not — but because the same categories of mistakes keep showing up, over and over, across companies of all sizes and industries. The tools change, the cloud providers vary, the architectures get more sophisticated. The misconfigurations? They're remarkably consistent. And they're expensive.

Gartner has been saying for years that through 2025, 99% of cloud security failures would be the customer's fault. In 2026, that prediction looks prophetic. The cloud providers are doing their job. The question is whether you're doing yours.

The Shared Responsibility Model: Where the Confusion Starts

Every major cloud provider — AWS, Azure, Google Cloud — publishes a shared responsibility model. It's not a secret. It's not buried in documentation. And yet, in assessment after assessment, I find that teams either haven't read it, don't fully understand it, or have convinced themselves that "the cloud is secure" without thinking about what that actually means.

Here's the short version: the cloud provider secures the cloud itself — the physical hardware, the hypervisor layer, the underlying network infrastructure, the data centers. You are responsible for securing everything you put in it. Your data. Your applications. Your network configurations. Your identity and access controls. Your encryption choices. Your logging setup.

Think of it like renting office space. The landlord maintains the building, the locks on the front door, the fire suppression system. But if you leave your filing cabinets unlocked, prop open the back door for convenience, or hand your key card to anyone who asks — that's on you. AWS didn't leave your S3 bucket public. You did.

Once teams genuinely internalize this distinction, their sense of urgency about getting cloud security right shifts noticeably. The cloud is not a magic security blanket. It's a platform with powerful tools — but those tools default to configurations that prioritize functionality and ease of use, not least privilege and zero trust.

The Top 5 Misconfigurations I Keep Seeing in the Wild

Let me walk you through the greatest hits. If you're running workloads in any major cloud environment right now, I'd be surprised if at least two of these don't apply to you.

1. Overly Permissive IAM Roles and Service Accounts

This is the number one problem. Every time. Teams spin up a service, it needs permissions to do something, someone grants it a role with far more access than necessary because it's faster, and that over-privileged identity sits there quietly — a loaded gun waiting for an attacker to pick up. I've seen EC2 instances with AdministratorAccess policies attached. I've seen service accounts with organization-wide read/write permissions when they only needed to write to a single storage bucket. Least privilege is not optional. It's the foundation.
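Wildcard grants are easy to spot programmatically once you pull the policy documents. Here's a minimal sketch of that check — it parses an IAM policy JSON document and flags any Allow statement with a wildcard action or resource. The function name and the simplified handling are my own; a real audit would also resolve managed policy attachments and NotAction statements.

```python
import json

def find_wildcard_statements(policy_json: str) -> list[dict]:
    """Return Allow statements that grant wildcard actions or resources."""
    policy = json.loads(policy_json)
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a single statement may be a bare object
        statements = [statements]
    risky = []
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        # "*" grants everything; "s3:*" grants every action in a service
        if any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources:
            risky.append(stmt)
    return risky

# The AdministratorAccess managed policy is pure wildcard: it gets flagged
admin = '{"Version": "2012-10-17", "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}]}'
print(len(find_wildcard_statements(admin)))  # 1
```

Run something like this across every role in the account and you have a starting list for least-privilege remediation.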

2. Publicly Exposed Storage Buckets (S3, Azure Blob)

The publicly exposed S3 bucket has been the punchline of cloud security jokes for years, and yet it keeps happening. A developer creates a bucket, sets it to public for testing, deploys to production, forgets about it. Six months later, a security researcher finds 50,000 customer records indexed by a search engine. I've seen this exact scenario — or variations of it — more times than I can count. Both AWS and Azure have added safeguards and warnings, but those only help if someone is paying attention when the bucket is created and periodically auditing afterward.
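Auditing for this is mechanical. The sketch below checks S3 Public Access Block settings — the four flags AWS uses to block public ACLs and policies — treating any missing or false flag as an exposure. The inventory dict stands in for what `s3:GetPublicAccessBlock` returns per bucket; in a real audit you'd fetch that live rather than hand-build it.

```python
# The four flags in an S3 PublicAccessBlockConfiguration
PUBLIC_ACCESS_FLAGS = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)

def audit_public_access(buckets: dict) -> list:
    """Return bucket names whose Public Access Block leaves a door open.

    `buckets` maps bucket name -> its PublicAccessBlockConfiguration dict;
    a missing flag counts as False (i.e., not blocked).
    """
    return [
        name for name, cfg in buckets.items()
        if not all(cfg.get(flag, False) for flag in PUBLIC_ACCESS_FLAGS)
    ]

inventory = {
    "prod-invoices": {flag: True for flag in PUBLIC_ACCESS_FLAGS},
    "dev-test-dump": {"BlockPublicAcls": True},  # the forgotten test bucket
}
print(audit_public_access(inventory))  # ['dev-test-dump']
```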

3. Unrestricted Security Groups and Open Firewall Rules

Port 22 open to 0.0.0.0/0. Port 3389 open to the world. An RDP-accessible Windows server sitting directly on the public internet "just for a quick admin session" that's been running for eight months. Security groups are your network perimeter in the cloud — treating them as an afterthought is like leaving every window in your office building open on the ground floor because it's easier than managing keys. Shodan and similar tools actively crawl for these exposures. Attackers don't have to work hard when you've done it for them.
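This is another check worth automating. A minimal sketch, assuming a simplified view of security group ingress rules (real `DescribeSecurityGroups` output nests port ranges and CIDR blocks more deeply): flag any rule that opens SSH or RDP to the entire internet.

```python
RISKY_PORTS = {22: "SSH", 3389: "RDP"}
ANYWHERE = {"0.0.0.0/0", "::/0"}  # IPv4 and IPv6 "the whole internet"

def find_exposed_rules(rules):
    """Flag ingress rules that open admin ports to the world.

    Each rule is a simplified dict with 'port' and 'cidr' keys.
    """
    return [
        (r["port"], RISKY_PORTS[r["port"]], r["cidr"])
        for r in rules
        if r["port"] in RISKY_PORTS and r["cidr"] in ANYWHERE
    ]

rules = [
    {"port": 443, "cidr": "0.0.0.0/0"},   # public HTTPS: expected
    {"port": 22, "cidr": "10.0.0.0/8"},   # SSH from an internal range: fine
    {"port": 3389, "cidr": "0.0.0.0/0"},  # RDP to the world: the 8-month server
]
print(find_exposed_rules(rules))  # [(3389, 'RDP', '0.0.0.0/0')]
```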

4. Unencrypted Data at Rest and in Transit

Cloud providers make encryption remarkably easy. AWS KMS, Azure Key Vault, Google Cloud KMS — the tools are there, they're integrated, and enabling encryption often requires checking a box or adding a few lines of configuration. And yet I regularly find databases running without encryption at rest, internal services communicating over HTTP, and backup snapshots sitting unencrypted in storage. The encryption argument used to be about performance overhead. In 2026, that argument is dead. There's no acceptable excuse for unencrypted data in a cloud environment.
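The audit side of this is equally simple. A sketch under the assumption that you've reduced your inventory to flat records (resource ID, at-rest encryption flag, endpoint URL) — the field names here are mine, not any AWS API's:

```python
def encryption_gaps(resources):
    """Flag resources violating encrypt-at-rest or encrypt-in-transit.

    Each record is a simplified inventory entry, e.g.
    {'id': 'orders-db', 'encrypted_at_rest': True, 'endpoint': 'https://...'}.
    """
    gaps = []
    for r in resources:
        if not r.get("encrypted_at_rest", False):
            gaps.append((r["id"], "no encryption at rest"))
        if r.get("endpoint", "").startswith("http://"):
            gaps.append((r["id"], "plaintext transport"))
    return gaps

inventory = [
    {"id": "orders-db", "encrypted_at_rest": True, "endpoint": "https://orders.internal"},
    {"id": "backup-snap-0412", "encrypted_at_rest": False},  # unencrypted snapshot
    {"id": "billing-api", "encrypted_at_rest": True, "endpoint": "http://billing.internal"},
]
print(encryption_gaps(inventory))
# [('backup-snap-0412', 'no encryption at rest'), ('billing-api', 'plaintext transport')]
```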

5. Missing Logging and Monitoring (You Can't Defend What You Can't See)

This one quietly infuriates me, because it means that when something bad happens — and eventually something will — the team has no idea what occurred, when it started, or what was accessed. CloudTrail not enabled. Azure Monitor not configured. No centralized log aggregation. No alerts on privilege escalation or unusual API calls. Running a cloud environment without comprehensive logging is like operating a bank without security cameras. You might not get robbed today, but when you do, you'll have nothing to go on. Detection and response starts with visibility.
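Even before a SIEM is in place, a first-pass alert on dangerous API calls is a few lines of log filtering. A sketch over CloudTrail-style records — the event names are real CloudTrail event names, but the watchlist is illustrative, not exhaustive:

```python
# API calls that commonly indicate privilege escalation or log tampering.
WATCHLIST = {
    "AttachUserPolicy", "AttachRolePolicy", "PutUserPolicy",
    "CreateAccessKey", "StopLogging", "DeleteTrail",
}

def suspicious_events(events):
    """Filter CloudTrail-style records down to watchlisted API calls."""
    return [e for e in events if e.get("eventName") in WATCHLIST]

log = [
    {"eventName": "DescribeInstances", "userIdentity": "ci-runner"},
    {"eventName": "StopLogging", "userIdentity": "compromised-role"},
    {"eventName": "AttachUserPolicy", "userIdentity": "compromised-role"},
]
for e in suspicious_events(log):
    print(e["userIdentity"], e["eventName"])
```

An attacker disabling CloudTrail (`StopLogging`) and then attaching a policy to a user is a textbook escalation sequence — exactly the pattern a filter like this surfaces.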

Real Breach Examples: This Isn't Hypothetical

If you need proof that these mistakes have real-world consequences, look no further than the Capital One breach in 2019. An attacker exploited a misconfigured web application firewall combined with a server-side request forgery (SSRF) vulnerability to access AWS metadata credentials, then used those credentials — which had excessive IAM permissions — to enumerate and exfiltrate data from over 100 million customer records stored in S3. The breach cost Capital One more than $300 million when you account for settlements, penalties, and remediation. The root causes? A misconfigured WAF and an over-permissive IAM role. Two of the five problems I listed above.

The publicly exposed S3 bucket story is practically its own genre at this point. Verizon, WWE, the Republican National Committee, FedEx, Accenture — all made headlines for S3 buckets left publicly accessible containing sensitive data. Some were breached. Some were discovered by researchers. All of them were avoidable with basic configuration hygiene. The pattern is so consistent that automated scanners now routinely discover new exposed buckets faster than organizations discover them internally.

These aren't sophisticated nation-state attacks. They're opportunistic exploitation of basic configuration failures. The attack surface is enormous and the tooling to find these exposures is freely available. If you have misconfigurations, someone will find them — the only question is whether it's you first or an attacker.

Cloud Security Posture Management: Your Automated Sanity Check

Cloud Security Posture Management — CSPM — is the category of tools designed to continuously assess your cloud environment against security best practices and compliance frameworks. Think of it as an automated auditor that never sleeps, checking your configurations across AWS, Azure, and GCP simultaneously, flagging deviations, and prioritizing findings by risk.

Tools like Wiz, Prisma Cloud, Orca Security, and AWS Security Hub (for AWS-native environments) can scan your entire cloud footprint and surface the exact misconfigurations I've been describing — publicly accessible storage, over-permissive IAM policies, unencrypted resources, missing logging — in minutes. They map findings to frameworks like CIS Benchmarks, NIST, SOC 2, and PCI DSS, which matters enormously when you're dealing with compliance requirements.

CSPM is not a substitute for security expertise or good architectural decisions. But it dramatically lowers the barrier to catching configuration drift — the gradual accumulation of small deviations from secure baselines that happen as teams move fast and infrastructure evolves. If you don't have some form of continuous posture monitoring in place, you're flying blind.

Multi-Cloud Complexity Is Making All of This Worse

Just as organizations were getting their arms around AWS security, the multi-cloud era arrived. Now teams are managing AWS IAM, Azure AD (Entra ID), Google Cloud IAM, and possibly a private cloud environment — each with its own identity model, networking constructs, logging mechanisms, and security controls. The security expertise required to do all of this well is genuinely hard to find, and the surface area for misconfiguration has multiplied.

I've seen organizations run reasonably tight AWS environments while their Azure workloads are essentially ungoverned — a different team manages them, different tooling, different conventions, no unified visibility. Attackers don't care which cloud your data is in. They'll find the path of least resistance, and in a multi-cloud environment, that path is often the environment that gets the least security attention.

At ExColo, our hosting infrastructure is built on OpenStack — giving us deep control over the underlying platform and the ability to enforce consistent security policies at every layer. Whether you're running workloads on a public cloud, a private cloud, or a hybrid of both, the security fundamentals don't change. The implementations do, which is precisely where organizations need expert guidance.

What to Do Right Now

I'll make this concrete. If you're reading this and you manage cloud infrastructure, here's where to start this week:

Audit your IAM. Pull a report of every IAM role, policy, and service account in your environment. Look for wildcard permissions (* in action or resource fields). Look for unused roles and credentials. Look for human users with long-lived access keys when they should be using temporary credentials. Enforce MFA on every account with console access, no exceptions.
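One concrete piece of that audit — finding stale long-lived keys — can be scripted against the IAM credential report. A sketch, assuming you've already parsed the report into (user, key creation date) records; the 90-day threshold is a common convention, not a mandate:

```python
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE = timedelta(days=90)

def stale_access_keys(keys, now=None):
    """Flag users whose access keys are older than the rotation window.

    Each entry is a simplified credential-report row:
    {'user': ..., 'created': datetime}.
    """
    now = now or datetime.now(timezone.utc)
    return [k["user"] for k in keys if now - k["created"] > MAX_KEY_AGE]

now = datetime(2026, 3, 1, tzinfo=timezone.utc)
keys = [
    {"user": "deploy-bot", "created": datetime(2024, 1, 10, tzinfo=timezone.utc)},
    {"user": "alice", "created": datetime(2026, 1, 20, tzinfo=timezone.utc)},
]
print(stale_access_keys(keys, now))  # ['deploy-bot']
```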

Enable logging everywhere. CloudTrail in every AWS region. Azure Monitor and Diagnostic Settings on every resource. Aggregate logs to a central SIEM or log management platform. Set up alerts for privilege escalation, unusual API activity, and access from unexpected geographies. You cannot respond to what you cannot see.
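Verifying the "every region" part is worth a quick script. A sketch over a reduced view of `DescribeTrails` output — just the home region and the multi-region flag — that reports regions with no trail coverage:

```python
def regions_without_trails(all_regions, trails):
    """Return regions not covered by any CloudTrail trail.

    `trails` is a reduced view of cloudtrail DescribeTrails output:
    {'HomeRegion': ..., 'IsMultiRegionTrail': bool}. A single
    multi-region trail covers every region.
    """
    if any(t.get("IsMultiRegionTrail") for t in trails):
        return []
    covered = {t["HomeRegion"] for t in trails}
    return sorted(set(all_regions) - covered)

regions = ["us-east-1", "us-west-2", "eu-west-1"]
trails = [{"HomeRegion": "us-east-1", "IsMultiRegionTrail": False}]
print(regions_without_trails(regions, trails))  # ['eu-west-1', 'us-west-2']
```

In practice the fix is usually simpler than the audit: convert the existing trail to multi-region and the gap closes in one change.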

Scan for misconfigured storage. Run an audit of every storage bucket, blob container, and object store in your environment. Confirm that public access is blocked at the account level in AWS. Review access policies and ACLs. This should take an afternoon and should happen on a scheduled basis, not as a one-time exercise.

Deploy a CSPM tool. If you don't have continuous posture monitoring, start with your cloud provider's native tooling — AWS Security Hub, Azure Defender for Cloud, or GCP Security Command Center — before evaluating commercial options. Imperfect coverage that's actually running is better than a perfect tool that's still in procurement.

Enforce least privilege systematically. Build IAM policies based on what services actually need, not what's convenient. Use permission boundaries in AWS. Use Azure RBAC scoped to the minimum necessary level. Review and revoke permissions quarterly. Least privilege is a discipline, not a one-time configuration.

Tomasz J

Co-founder & Security Engineer, ExColo — 15+ years in cybersecurity, networking, and cloud infrastructure

Tomasz has led security and infrastructure projects for organizations across the Chicago area, specializing in Cisco security platforms, OpenStack, and Zero Trust architecture. He writes to share what he's learned in the field — the wins, the hard lessons, and the things vendors don't tell you.
