AWS Middle East (UAE) Region Hit by Drone Strikes, 109 Services Disrupted

Original news source: cybersecuritynews.com

AWS Middle East Services Disrupted

A series of drone strikes on Amazon Web Services data center facilities in the United Arab Emirates and Bahrain triggered one of the most severe cloud outages in AWS history, knocking out or degrading 109 services across the ME-CENTRAL-1 region beginning March 1, 2026, and leaving thousands of enterprise customers scrambling to migrate workloads for days.

The incident began at approximately 4:30 AM PST on March 1, when one of AWS’s Availability Zones in the UAE, mec1-az2, was struck by what the company initially described as “objects,” causing “sparks and fire” inside the data center.

Local fire departments shut off power to the facility and its generators while containing the blaze. AWS initially framed the event as a “localized power issue” while publicly downplaying the cause.

By March 2 at 4:19 PM PST, AWS confirmed the broader truth: two UAE facilities in ME-CENTRAL-1 had been directly struck by drone attacks, while a third facility in the AWS Middle East (Bahrain) Region (ME-SOUTH-1) was damaged when a drone struck in close proximity. AWS attributed the strikes to the “ongoing conflict in the Middle East”.

The attacks caused structural damage, disrupted power delivery, and in some locations triggered fire suppression systems that caused additional water damage.

According to Amazon status updates, a second Availability Zone, mec1-az3, fell offline hours after the initial strike on mec1-az2, leaving only mec1-az1 partially operational. With two of three AZs simultaneously impaired, Amazon S3's built-in regional redundancy, designed to tolerate the complete loss of a single AZ, was overwhelmed, resulting in high failure rates for both data ingest and egress.

Affected Services and Cascading Impact

The event cascaded rapidly across AWS's service stack. At peak disruption, the outage touched 109 services across the ME-CENTRAL-1 region: 25 fully disrupted, 34 degraded, and 50 impacted. Core foundational services bore the brunt first:

| Service | Status | Impact |
| --- | --- | --- |
| Amazon S3 | Disrupted | High PUT/GET/LIST failure rates |
| Amazon DynamoDB | Disrupted | Elevated error rates, write/read failures |
| Amazon EC2 | Disrupted | Instance launches throttled region-wide |
| AWS Lambda | Disrupted | Dependent on S3/DynamoDB recovery |
| Amazon Kinesis | Disrupted | Cascaded from foundational service failures |
| Amazon CloudWatch | Disrupted | Monitoring degraded |
| Amazon RDS | Disrupted | Database availability impaired |
| AWS Management Console | Disrupted | Partially operational; page errors persisted |

Beyond cloud infrastructure, the outage rippled into consumer-facing applications across the UAE. Ride-hailing and delivery platform Careem and payment services Alaan and Hubpay all reported disruptions directly tied to the AWS infrastructure failure, underscoring how deeply regional economies rely on hyperscale cloud providers.

AWS pursued parallel recovery tracks: physical restoration of damaged facilities alongside software-based mitigations designed to restore partial service availability ahead of full infrastructure repair.

For Amazon S3, the company deployed updates enabling the service to operate within degraded infrastructure constraints. For DynamoDB, teams worked to remediate impaired tables to restore read/write availability for downstream services.

By March 3 at 8:14 AM PST, AWS reported continued improvement in S3 PUT and LIST operations, with newly written objects retrievable, though GET operations for pre-existing data remained dependent on physical infrastructure restoration. EC2 instance launches remained throttled. DynamoDB error rates stayed elevated.

AWS issued a strong advisory across all update cycles, urging affected customers to immediately enact disaster recovery plans, restore from remote backups in other regions, and redirect application traffic away from ME-CENTRAL-1. Recommended alternate regions included AWS deployments in the United States, Europe, and Asia Pacific, depending on latency and data residency requirements.
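The advisory above amounts to a failover decision: given which regions are currently healthy and what data-residency constraints apply, pick where to redirect traffic. A minimal sketch of that logic is below; the region priority lists, residency labels, and health map are hypothetical illustrations, not AWS APIs — in practice the health data would come from your own monitoring or the AWS Health Dashboard.

```python
# Illustrative region-failover selector. All names here (priority lists,
# residency keys, health statuses) are hypothetical, not real AWS constructs.

# Alternate regions in preference order, per data-residency constraint.
FAILOVER_PRIORITY = {
    "strict-me": ["me-south-1"],                       # must stay in the Middle East
    "eu-ok": ["eu-west-1", "eu-central-1"],            # EU regions acceptable
    "global": ["us-east-1", "eu-west-1", "ap-southeast-1"],
}

def pick_failover_region(health, residency="global"):
    """Return the first healthy region allowed by the residency policy,
    or None if no permitted candidate is currently healthy."""
    for region in FAILOVER_PRIORITY.get(residency, []):
        if health.get(region) == "healthy":
            return region
    return None

# Example: ME-CENTRAL-1 is disrupted and ME-SOUTH-1 degraded, as in this
# incident; EU-tolerant workloads fail over to eu-west-1.
health = {
    "me-central-1": "disrupted",
    "me-south-1": "degraded",
    "eu-west-1": "healthy",
    "us-east-1": "healthy",
}
print(pick_failover_region(health, "eu-ok"))     # eu-west-1
print(pick_failover_region(health, "strict-me")) # None: no healthy ME region
```

The "strict-me" case illustrates the hard problem this outage exposed: customers whose data could not leave the region had no healthy alternative once both Middle East regions were impaired.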

The incident has renewed urgent industry discussions around cloud infrastructure resilience in conflict zones, the risks of geographic concentration, and the need for multi-region active-active architectures, particularly for enterprises operating in geopolitically volatile environments.
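The multi-region active-active pattern mentioned above can be approximated even at the client level: send each request to the primary region and fall back to a replica region when the primary fails. The sketch below uses stand-in callables for regional service clients; the endpoint names and error handling are illustrative assumptions, not a specific AWS SDK pattern.

```python
# Client-side multi-region failover (illustrative sketch). The regional
# "endpoints" are hypothetical stand-ins for real service clients.

def call_with_failover(endpoints, request):
    """Try each (region, endpoint) pair in order; return the first success.
    Re-raises the last error if every region fails."""
    last_error = None
    for region, endpoint in endpoints:
        try:
            return region, endpoint(request)
        except ConnectionError as err:
            last_error = err  # region impaired; try the next one
    raise last_error

# Simulated endpoints: ME-CENTRAL-1 is down, EU-WEST-1 serves the request.
def me_central_1(req):
    raise ConnectionError("me-central-1 unavailable")

def eu_west_1(req):
    return {"ok": True, "echo": req}

region, resp = call_with_failover(
    [("me-central-1", me_central_1), ("eu-west-1", eu_west_1)], "ping"
)
print(region, resp["ok"])  # eu-west-1 True
```

True active-active also requires the data layer to be replicated across regions (for example, asynchronously replicated tables or object storage), otherwise the fallback region can accept traffic but cannot serve pre-existing data, much like the GET failures seen in this incident.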
