Red Hat 3scale Outage History
Uptime record, past incidents, and downtime history for Red Hat 3scale.
90-Day Trend (chart not captured)
Monthly Uptime
| Month | Uptime | Days Tracked | Days with Issues |
|---|---|---|---|
| May 2026 | 0% | 12 | 12 |
| April 2026 | 0% | 30 | 30 |
| March 2026 | 0% | 4 | 4 |
Uptime is calculated from daily worst-status snapshots. A day with any non-operational status counts as a day with issues.
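The calculation described above can be sketched in Python. This is an illustrative reconstruction, not the status page's actual code; the function name and status strings are assumptions based on the legend used on this page.

```python
from typing import Dict, List, Union

# Assumed status label for an issue-free day, matching the legend on this page.
OPERATIONAL = "Operational"

def monthly_uptime(snapshots: List[str]) -> Dict[str, Union[int, float]]:
    """Summarize a month of daily worst-status snapshots.

    A day counts as a "day with issues" if its worst status is anything
    other than Operational; uptime is the share of issue-free days.
    """
    tracked = len(snapshots)
    issue_days = sum(1 for status in snapshots if status != OPERATIONAL)
    uptime_pct = 100.0 * (tracked - issue_days) / tracked if tracked else 0.0
    return {
        "days_tracked": tracked,
        "days_with_issues": issue_days,
        "uptime_pct": uptime_pct,
    }

# Example: 4 tracked days, each with some non-operational worst status,
# yields 0% uptime (as in the March 2026 row above).
print(monthly_uptime(["Major Outage", "Degraded", "Degraded", "Partial Outage"]))
```

Under this scheme even a brief degradation zeroes out the whole day, which is why a month can show 0% uptime despite most hours being operational.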
Daily Status (Last 46 Days)
[Daily status chart: Mar 28 through today; legend: Operational, Degraded, Partial Outage, Major Outage, Maintenance, No Data]
Incident History
May 2026
Quay.io HTTP 502 on Pull/Push
Started: May 7, 6:05 PM
monitoring
We are continuing to monitor for any further issues.
May 7, 8:44 PM
monitoring
A fix has been implemented and we are monitoring the results.
May 7, 8:34 PM
identified
We have identified the issue as caused by a recent deployment. Our team has reverted the deployment and is gradually shifting traffic back to our primary region to re-enable pushes while monitoring.
May 7, 8:06 PM
investigating
Pulls have been restored. Pushes are unavailable while we continue to investigate.
May 7, 6:20 PM
investigating
We are currently investigating this issue.
May 7, 6:05 PM
cert-api.access.redhat.com outage
Started: May 5, 12:28 PM
monitoring
A fix has been implemented and we are monitoring the results.
May 5, 1:55 PM
investigating
We are currently investigating an issue where certificate-based requests to cert-api.access.redhat.com are returning HTTP 500 errors due to a gateway configuration mismatch. This is impacting Red Hat Lightspeed registrations and data uploads for both direct and Satellite-connected hosts.
May 5, 12:28 PM
April 2026
Quay.io Push/Pull Degraded
Started: Apr 21, 10:50 PM
identified
The pull API is working again. We are actively working on bringing the push API back online.
Apr 21, 10:55 PM
investigating
We are experiencing degraded performance on quay.io push/pull API. We are actively investigating.
Apr 21, 10:50 PM
Subscription threshold exceeded notifications for non-Pay as you go products are showing misleading values
Started: Apr 3, 2:04 PM
identified
We want customers to know that the notifications capability within Subscription Watch is currently turned off because over-usage notifications were being sent out in error.
Jira issue is https://redhat.atlassian.net/browse/SWATCH-4870
Apr 3, 2:34 PM
identified
The backend service responsible for checking the over-usage condition is processing the wrong data for non-PAYGO products. This is leading to unexpected "subscription threshold exceeded" notifications with misleading values.
Apr 3, 2:04 PM
March 2026
quay.io API failures
Started: Mar 30, 8:11 PM
identified
The team has identified the issue and is investigating. We've shifted Quay to read-only for now, so pulls should gradually recover, but pushes will still fail.
Mar 30, 8:58 PM
identified
We are seeing HTTP 502 errors on quay.io pushes and pulls.
Mar 30, 8:11 PM
AWS Outage: me-south
Started: Mar 2, 7:22 AM
identified
The outage on the AWS side is still ongoing, and recovery efforts continue. AWS recommends that customers launch replacement resources in one of the unaffected Availability Zones or in an alternate AWS Region.
Mar 4, 4:44 AM
identified
ROSA clusters deployed in me-south-1 are degraded due to availability zone outages. We recommend customers enact their disaster recovery plans and recover from remote backups into alternate AWS Regions, ideally in Europe. Please refer to AWS Health for more information and recommendations on workload relocation.
Mar 2, 5:52 PM
identified
ROSA clusters are degraded in me-south due to a localized power issue in mes1-az2.
Customers are advised to schedule their workloads into another availability zone.
AWS Status Page: https://health.aws.amazon.com/health/status
Mar 2, 7:22 AM
AWS Outage: me-central
Started: Mar 1, 2:02 PM
identified
AWS continues to make progress on recovery efforts across multiple workstreams. From now on, updates will be delivered directly to affected customers through the AWS Personal Health Dashboard. Customers who require assistance with this event are encouraged to contact AWS Support through the AWS Management Console or the AWS Support Center.
AWS Status Page: https://health.aws.amazon.com/health/status
Mar 4, 4:55 AM
identified
The outage is still ongoing on the AWS side. The AWS Management Console is now operational, and AWS recommends retrying operations where possible, although most of the underlying services are still offline.
Mar 3, 10:21 AM
identified
ROSA clusters deployed in me-central-1 are degraded due to availability zone outages. We recommend customers enact their disaster recovery plans and recover from remote backups into alternate AWS Regions, ideally in Europe. Please refer to AWS Health for more information and recommendations on workload relocation.
Mar 2, 5:52 PM
identified
Availability Zones in the ME-CENTRAL-1 Region (mec1-az2, mec1-az3) have been impacted by a power outage. Customers are advised to schedule their workloads into another availability zone.
Mar 2, 1:37 PM
identified
An Availability Zone in the ME-CENTRAL-1 Region (mec1-az2) has been impacted by a power outage. Customers are advised to schedule their workloads into another availability zone.
Mar 1, 2:26 PM
identified
ROSA clusters degraded in me-central
AWS Status Page: https://health.aws.amazon.com/health/status
Mar 1, 2:02 PM