[Resolved] We have identified the root cause of the cluster's partial service disruption. The cluster is currently stable and will be monitored throughout the weekend. We have determined that no services were impacted other than Synthetic test data. We apologize for the inconvenience this has caused.
January 24, 2020 6:55PM UTC
[Monitoring] While investigating the root cause, we determined that Synthetic monitoring was an impacted feature. Customers utilizing Synthetic tests may have been affected by this issue. We are continuing root cause analysis to prevent this issue from recurring. Thank you for your patience, and our apologies for the inconvenience caused.
January 24, 2020 5:35PM UTC
[Monitoring] The problem has been found and mitigated. Access is restored to tenants and we are currently monitoring for stability. We are also working on finding the root cause of this issue for future prevention. Again, we apologize for the inconvenience this has caused.
January 24, 2020 5:07PM UTC
[Investigating] We have identified an issue with access to tenants on Cluster 36. This is under investigation and we are working to restore access as soon as possible. At this time there is no indication of data loss. We will provide updates here as the service is restored and the impact is evaluated. We apologize for the inconvenience this has caused.
[Resolved] The root cause has been identified for the issue on SaaS Cluster 22 and fixes have been deployed. We have also taken measures to ensure this problem will be found and prevented or addressed earlier in the future. We thank you for your patience during the investigation and apologize for any inconvenience this may have caused.
January 20, 2020 6:59PM UTC
[Monitoring] We have restored services and data is flowing again. We will continue to monitor the situation and will post updates here as it progresses toward resolution. Thank you for your patience, and we apologize for the inconvenience.
January 20, 2020 5:56PM UTC
[Investigating] We have observed that this issue is still occurring. We are investigating and will update accordingly.
January 20, 2020 1:48PM UTC
[Monitoring] We have located the problem and resolved the agent data issue; data is flowing again. We are continuing to monitor the situation to ensure it remains operational. Root cause analysis is underway to prevent recurrence. Again, we apologize for the disruption.
January 20, 2020 1:41PM UTC
[Investigating] We have identified an issue with one SaaS cluster where agent data is currently not processing. This is limited to one cluster only (Cluster 22 in US East). There is still access to the tenants and we are working to restore service to this cluster. We apologize for the inconvenience.