
Cloud Misconfigurations: Little Mistakes Can Get You Breached

  • Writer: Joseph Rapley
  • Oct 31
  • 8 min read

A surprising number of data breaches still come down to the same thing: someone left something open that shouldn’t have been. Not a zero-day exploit or a nation-state actor, just a simple misconfiguration in an AWS deployment that no one noticed until it was too late.

Secure Cloud Deployment.

It happens everywhere. Even mature teams get caught out by configuration drift, neglected IAM roles, lingering privileged access tokens, or an old S3 bucket that was used for a demo and never locked down. The cloud makes it quick and easy to deploy services, but it also makes it easy to leave those services wide open.


Shared Responsibility


AWS is very clear about the shared responsibility model: AWS secures the infrastructure; you secure how it’s used. In practice, that means the platform is only as safe as the people configuring it.


An EC2 instance with a public IP and an open SSH port is technically working as intended. So is an IAM user with admin rights created to get a script running quickly. These aren’t AWS failures; they’re often just default configurations. The service gives you near-infinite flexibility, and with that come near-infinite ways to make small, simple mistakes that later become breaches.


The four services where those mistakes cause the most pain are EC2, IAM, S3, and Lambda.


EC2: The Misconfiguration Magnet

Elastic Compute Cloud (EC2) is the heart of most AWS environments, and it’s where simple oversights turn into full-blown attack paths.


One of the most common errors is leaving services unnecessarily exposed to the internet. Engineers spin up an instance for testing, accept the launch wizard’s suggested security group, which allows SSH (port 22) from anywhere, and then forget to restrict or terminate it later. Attackers constantly scan for exposed SSH and can find a newly exposed instance within hours. Most AMIs default to key-based SSH authentication, but if password authentication is enabled, the brute-force attempts begin.
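
As a quick way to spot this in an existing account, a minimal boto3 sketch along the lines below (assuming default credentials and region are already configured) can flag any security group that leaves SSH open to the internet.

# Minimal sketch: flag security groups that allow SSH (port 22) from anywhere.
import boto3

ec2 = boto3.client("ec2")

for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        # A rule covers SSH if it is "all traffic" (-1) or its port range includes 22.
        covers_ssh = (
            rule.get("IpProtocol") == "-1"
            or rule.get("FromPort", -1) <= 22 <= rule.get("ToPort", -1)
        )
        open_to_world = any(
            r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
        )
        if covers_ssh and open_to_world:
            print(f"{sg['GroupId']} ({sg['GroupName']}): SSH open to the internet")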


Another frequent issue is leaving Instance Metadata Service v1 (IMDSv1) enabled. It’s still supported but outdated, and its inherent weaknesses can be abused to steal IAM credentials if an attacker reaches the host or finds an SSRF or similar vulnerability in a public-facing app. IMDSv1 is disabled by default only on newer instance types released from mid-2024, and only where the account-level default setting has been enabled; it is not universally disabled across all AWS accounts. Switching to IMDSv2 is a small but powerful control.
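
A short, illustrative sketch (not production code, and assuming boto3 credentials with the relevant EC2 permissions) shows what enforcing IMDSv2 across existing instances can look like.

import boto3

ec2 = boto3.client("ec2")

# Walk every instance and require session tokens (IMDSv2) where they aren't already enforced.
for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            if instance.get("MetadataOptions", {}).get("HttpTokens") != "required":
                print("Enforcing IMDSv2 on", instance["InstanceId"])
                ec2.modify_instance_metadata_options(
                    InstanceId=instance["InstanceId"],
                    HttpTokens="required",   # reject IMDSv1 requests
                    HttpEndpoint="enabled",  # keep the metadata service available to the instance
                )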


Unencrypted or shared EBS snapshots are another hidden risk. Snapshots copied between accounts, shared publicly, or left unencrypted can expose credentials, configuration files, and application code. This often happens when teams quickly share resources across environments without considering visibility or lifecycle management.
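
To make that audit concrete, a rough boto3 sketch like the following (read-only, assuming EC2 describe permissions) lists owned snapshots that are unencrypted, public, or shared with other accounts.

import boto3

ec2 = boto3.client("ec2")

for page in ec2.get_paginator("describe_snapshots").paginate(OwnerIds=["self"]):
    for snap in page["Snapshots"]:
        # Who is allowed to create a volume from this snapshot?
        perms = ec2.describe_snapshot_attribute(
            SnapshotId=snap["SnapshotId"], Attribute="createVolumePermission"
        )["CreateVolumePermissions"]
        public = any(p.get("Group") == "all" for p in perms)
        shared_with = [p["UserId"] for p in perms if "UserId" in p]
        if not snap["Encrypted"] or public or shared_with:
            print(snap["SnapshotId"], "encrypted:", snap["Encrypted"],
                  "public:", public, "shared with:", shared_with)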


Patching remains an old problem in a new wrapper. EC2 instances don’t update themselves, and patch automation tools like Systems Manager are often only half-configured. Forgotten instances running outdated AMIs are a consistent feature of breach write-ups. Proper asset management and patch cycles for all EC2 instances are required.
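
One way to start that asset management, sketched below under the assumption that Systems Manager is the patching tool in use, is to list running instances that aren’t reporting to SSM at all and therefore sit outside any automated patch cycle.

import boto3

ec2 = boto3.client("ec2")
ssm = boto3.client("ssm")

# Instances known to Systems Manager.
managed = set()
for page in ssm.get_paginator("describe_instance_information").paginate():
    for info in page["InstanceInformationList"]:
        managed.add(info["InstanceId"])

# Running instances that SSM has never heard of.
for page in ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            if instance["InstanceId"] not in managed:
                print("Not managed by SSM:", instance["InstanceId"])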


IAM: The Root of Too Much Access

Identity and Access Management is meant to bring order to AWS permissions, yet it’s often the source of chaos.


The worst offenders are still static IAM users with long-lived access keys. Those keys leak into Git repositories, build logs, and laptops, and once exposed, attackers use them to move laterally through the environment. Roles and temporary tokens exist precisely to prevent this, yet many organisations continue to rely on static users because it’s easier.
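
A simple, illustrative audit (assuming boto3 and iam:List* permissions) is to report active access keys older than a chosen threshold, here 90 days, so they can be rotated or replaced with roles.

from datetime import datetime, timedelta, timezone

import boto3

iam = boto3.client("iam")
cutoff = datetime.now(timezone.utc) - timedelta(days=90)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            if key["Status"] == "Active" and key["CreateDate"] < cutoff:
                print(f"{user['UserName']}: key {key['AccessKeyId']} "
                      f"created {key['CreateDate']:%Y-%m-%d}, due for rotation")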


Permission sprawl is another quiet risk. Policies like AdministratorAccess or s3:* tend to spread because they make problems go away fast. Over time, these permissions become so broad that even a minor credential leak turns catastrophic.
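
A quick way to measure that sprawl, sketched with boto3, is to list every user, group, and role that has the AWS-managed AdministratorAccess policy attached; anything on that list should have a clear reason to be there.

import boto3

iam = boto3.client("iam")
admin_arn = "arn:aws:iam::aws:policy/AdministratorAccess"

for page in iam.get_paginator("list_entities_for_policy").paginate(PolicyArn=admin_arn):
    for user in page["PolicyUsers"]:
        print("User with admin:", user["UserName"])
    for group in page["PolicyGroups"]:
        print("Group with admin:", group["GroupName"])
    for role in page["PolicyRoles"]:
        print("Role with admin:", role["RoleName"])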


We've often seen misconfigured security policies on various AWS assets that grant privileged access where they shouldn't. Particular care is needed when creating these policies manually, and they should be reviewed before being deployed.


Multi-account setups introduce complexity of their own. When cross-account roles aren’t properly scoped or external IDs aren’t enforced, it becomes almost impossible to track who can assume what. This is often discovered only after a penetration test or audit, when an assumed role unexpectedly grants full control over production.
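
For illustration, a cross-account role trust policy that enforces an external ID might look like the sketch below; the account ID, external ID, and role name are placeholders, not values from any real environment.

import json

import boto3

# Only account 111122223333 may assume the role, and only when it supplies the agreed external ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": "example-external-id"}},
        }
    ],
}

iam = boto3.client("iam")
iam.create_role(
    RoleName="partner-readonly",  # hypothetical role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)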


Good IAM hygiene requires process and restraint. Use roles wherever possible, review permissions regularly, and limit admin access to a very small number of identities. It’s dull work, but it’s what keeps compromise contained when mistakes inevitably happen.


S3: Still the Number One Source of Leaks

Despite years of warnings, S3 misconfigurations remain a leading cause of exposure. It’s not because S3 is insecure but because it’s simple.


Public buckets are the classic problem. A bucket opened for temporary testing, a contractor integration, or a quick file transfer quietly stays open long after the project ends. Even with AWS’s block-public-access controls, older or inherited buckets can still slip through.


Within the same account, bucket policies and IAM policies are evaluated together as a union (not sequentially). Access is granted if either policy allows it, which means a permissive bucket policy can grant access even without corresponding IAM permissions. However, an explicit deny in either policy always takes precedence and blocks access.


A classic example is the use of a wildcard Principal: * combined with broad actions such as s3:GetObject or s3:PutObject. That line essentially tells AWS, “any identity, anywhere, can access this bucket.” Even if your IAM users and roles are locked down, that single policy opens the door to the entire internet. The danger is that many teams assume IAM boundaries will still protect the bucket, but in AWS’s evaluation logic a bucket policy that grants public access is sufficient on its own; no matching IAM permission is needed.
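
For illustration, the policy shape being described looks something like this (the bucket name is a placeholder); the combination of Principal "*" and s3:GetObject is what makes every object readable by anyone.

# Hypothetical, deliberately dangerous bucket policy shown as a Python dict.
public_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",                             # any identity, anywhere
            "Action": "s3:GetObject",                     # read every object
            "Resource": "arn:aws:s3:::example-bucket/*",  # placeholder bucket
        }
    ],
}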


It isn’t always that blatant, either. Policies written with partially qualified ARNs, forgotten Condition blocks, or missing "aws:SourceIp" restrictions can create narrow but real exposure paths. For example, a rule meant to allow access from one account’s role might unintentionally allow access from all roles if the resource ARN is truncated to just arn:aws:iam::*:role/developer. Similarly, temporary exceptions such as those used for a contractor, CI/CD integration, or migration job often linger long after the original need has passed. Months later, that same policy can still permit access from a system no one monitors.


Another subtle mistake is mixing bucket ACLs with bucket policies. ACLs and policies both control access, but they don’t merge intuitively. It’s possible to have a policy that blocks public access yet an ACL that grants it, or vice versa, depending on how they were applied. The result is often confusing and inconsistent permissions that behave differently from what the console shows.


Overly permissive use of s3:ListBucket is another overlooked risk. Granting that right publicly doesn’t just expose object names but also turns the bucket into an open index, revealing filenames, structure, and sometimes metadata. Attackers use that to identify sensitive objects even without direct read access.


The right approach is to write bucket policies as narrowly and explicitly as possible. Always define the Principal with a full ARN, limit actions to only what’s required, and include Condition keys such as "aws:SourceIp" or "aws:PrincipalArn" to narrow the scope further. Use AWS Block Public Access controls to enforce non-public behaviour, and test every policy change using Access Analyzer before deployment.
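
A sketch of what that looks like in practice, with placeholder account ID, role, bucket name, and CIDR range, is shown below: a single principal, a single action, a restricted prefix, a source-IP condition, and Block Public Access applied on top.

import json

import boto3

scoped_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/app-reader"},
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/reports/*",
            "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
        }
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(scoped_policy))

# Enforce non-public behaviour regardless of future policy or ACL changes.
s3.put_public_access_block(
    Bucket="example-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)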


Treat every bucket as if someone will try to access it tomorrow. Use bucket-level logging, enable versioning, enforce encryption, and check exposure through Access Analyzer or Security Hub.


Lambda: Small Functions, Big Blind Spots

Serverless computing solved many operational problems, but it introduced new ways for security to slip through unnoticed. AWS Lambda functions are light, fast, and easy to deploy, and that ease is exactly what makes them easy to misuse.


The most common issue is hardcoded secrets. Developers store database credentials, API tokens, or access keys directly in environment variables or even in code. Secrets stored in Lambda environment variables are visible in plain text to anyone with access to the function’s configuration in the AWS console. They should instead be managed through AWS Secrets Manager or Parameter Store and referenced dynamically at runtime.
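
A minimal handler sketch, assuming a secret already exists in Secrets Manager under a placeholder name, shows the pattern: the function fetches the credential at runtime rather than reading it from an environment variable.

import json

import boto3

# Created outside the handler so warm invocations reuse the client.
secrets = boto3.client("secretsmanager")

def handler(event, context):
    # "prod/app/db-credentials" is a placeholder secret name.
    secret = secrets.get_secret_value(SecretId="prod/app/db-credentials")
    creds = json.loads(secret["SecretString"])
    # ... connect to the database using creds["username"] / creds["password"] ...
    return {"statusCode": 200}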


The next problem is over-privileged execution roles. Lambda functions are often attached to roles with permissions far beyond what they need, such as full access to S3 or DynamoDB. This means that if an attacker exploits the function, they gain the same level of access. Applying least privilege here matters just as much as in EC2 or ECS.
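
As an illustration of least privilege, an execution role policy might grant only the two DynamoDB actions a function actually uses, on a single table, instead of attaching a FullAccess managed policy; the table ARN below is a placeholder.

least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],  # only what the function calls
            "Resource": "arn:aws:dynamodb:eu-west-2:111122223333:table/orders",  # placeholder table
        }
    ],
}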


Unrestricted event triggers can also cause trouble. Functions tied to overly broad S3 or SNS triggers can be invoked unintentionally or even maliciously. This not only increases cost but also expands the attack surface for injection or logic abuse.


Finally, Lambda’s temporary file storage (/tmp) and runtime reuse between invocations can create residual data exposure. Sensitive information written to /tmp can persist briefly between runs if functions share warm containers.


Serverless is secure when done right, but “right” means treating each function like a standalone application. Control what triggers it, what it can access, and how it handles data. Audit IAM roles, scrub environment variables, and rotate any referenced secrets.


Configuration Drift and Forgotten Controls

A big issue in cloud environments isn’t any one specific misconfiguration but configuration drift. Systems start clean, then slowly deviate from their original design as changes stack up and people move on. Old test functions, forgotten instances, stale access tokens, and unused roles all build up quietly until something breaks or gets found externally.


Configuration drift doesn’t announce itself, so the only real defence is continuous checking. Tools like AWS Config, Security Hub, and Inspector can highlight drift, but only if their alerts are reviewed. Open-source scanners like Prowler and ScoutSuite can add an independent perspective. Cloud services such as runZero can be implemented to monitor cloud assets.


Don’t run these tools once a year for compliance but run them continuously. Automate checks in CI/CD pipelines and monitor for changes in identity, network exposure, and storage access. Security should move at the same pace as deployments and should be a higher priority. AWS is often used by teams that are good at development, but might not be strong on AWS itself. AWS makes it easy for them to deploy what they are building, and it can be exciting to deploy at speed. But good security requires taking the time to review all aspects of your cloud deployment to ensure it is being done correctly.


Misuse Is Still Misconfiguration

It’s tempting to separate configuration issues from misuse, but in practice they blend together. An EC2 instance with admin rights for “just one script,” a Lambda function with full S3 access for convenience, or an IAM user kept alive “in case we need it again” are all symptoms of the same problem, not taking security seriously.


AWS can be as secure or as insecure as you like. The same platform that can host a secure, compliant infrastructure can just as easily host a tangle of short-term fixes that turn into long-term liabilities. The difference is in the care and time you take to understand your cloud platform, and your particular deployment.


Building a Culture That Prevents the Next Breach

The organisations that avoid major incidents aren’t necessarily more skilled; they’re often just more consistent. They have standardised processes for development, testing, and production deployments. They have structured cloud deployments and architecture. They take the time to learn and familiarise themselves with the platforms they use, and they document architecture and changes as they go. They treat misconfigurations as process failures, not technical accidents. Every misconfigured port or exposed bucket reflects a decision that went unreviewed.


Misconfigurations and misuse remain the quiet causes behind most cloud incidents. Whether it’s an exposed S3 bucket, an IAM policy that’s too generous, or a Lambda function holding plaintext credentials, these aren’t sophisticated attacks but preventable errors.


AWS provides every tool required to fix them. Private subnets, least-privilege roles, encryption by default, and continuous monitoring aren’t advanced features. The difference between a secure setup and a risky one isn’t the tooling; it’s the consistency with which it’s used.


Cloud security doesn’t end when the deployment passes testing. It starts there.


 
 