Service degradation
Resolved
Nov 06 at 07:21am UTC
We have introduced further service optimisations, and normal performance has resumed during peak traffic periods.
Updated
Nov 04 at 06:00pm UTC
We have introduced bulk API endpoints on contacts-service, channels-service and users-service for ticket-service to use, reducing inter-service connections and lowering load.
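For illustration, the sketch below shows what a bulk lookup endpoint of this kind could look like, letting ticket-service resolve a batch of records in a single inter-service request instead of one request per record. This is a minimal sketch assuming an Express-style TypeScript service; the route, payload shape, and contactStore are illustrative and not the actual contacts-service implementation.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical in-memory store standing in for the contacts database.
const contactStore = new Map<string, { id: string; name: string }>();

// Before: ticket-service issued one GET per contact ID.
// After: a single bulk request resolves a whole batch of contact IDs.
app.post("/contacts/bulk", (req, res) => {
  const ids: string[] = Array.isArray(req.body?.ids) ? req.body.ids : [];
  const contacts = ids
    .map((id) => contactStore.get(id))
    .filter((c): c is { id: string; name: string } => c !== undefined);
  res.json({ contacts });
});

app.listen(3000);
```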
Updated
Nov 04 at 10:45am UTC
We have increased the number of pods and the database connection pool sizes to distribute the load further, and we are monitoring the situation.
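By way of illustration, increasing a database connection pool is typically a one-line configuration change per service, while the pod count is raised via the deployment's replica count, assuming a Kubernetes-style orchestrator. The sketch below uses a node-postgres (pg) pool; the values are illustrative, not the settings actually applied during this incident.

```typescript
import { Pool } from "pg";

// Illustrative values only; the actual pool sizes used are not stated in this update.
const pool = new Pool({
  host: process.env.PGHOST,
  database: process.env.PGDATABASE,
  max: 50,                        // raised connection limit (pg defaults to 10)
  idleTimeoutMillis: 30_000,      // return idle connections to the pool
  connectionTimeoutMillis: 5_000, // fail fast rather than queueing indefinitely
});

export async function getTicket(id: string) {
  const { rows } = await pool.query("SELECT * FROM tickets WHERE id = $1", [id]);
  return rows[0];
}
```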
Created
Nov 04 at 09:20am UTC
A large customer was migrated to the Cue platform, introducing a new load pattern that pushed internal Cue services to the point of resource exhaustion and lockups. Because the affected services stopped responding to health checks, they were continuously rebooted.
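The reboot loop described above is the usual consequence of failing liveness checks: a locked-up or resource-exhausted service cannot answer its health endpoint in time, the probe fails repeatedly, and the orchestrator restarts it. Below is a minimal sketch of such an endpoint, assuming an Express-style TypeScript service; the /healthz path and the dependency check are illustrative, not the actual Cue implementation.

```typescript
import express from "express";

const app = express();

// The orchestrator polls this endpoint on an interval. If the service is
// resource-exhausted or its event loop is blocked, the request times out,
// the probe fails repeatedly, and the container is restarted.
app.get("/healthz", async (_req, res) => {
  try {
    await checkDatabaseConnection(); // hypothetical dependency check
    res.status(200).send("ok");
  } catch {
    res.status(503).send("unhealthy");
  }
});

async function checkDatabaseConnection(): Promise<void> {
  // Placeholder: a real service would ping its database pool here.
}

app.listen(3000);
```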