We have deployed the configuration change to the full environment and expect it to have resolved the issues. We will update this status announcement tomorrow morning.
Posted Sep 17, 2025 - 13:29 CEST
Update
The configuration change we evaluated yesterday shows positive results: applied to a quarter of our environment, it significantly reduced median latency compared to the baseline. We plan to roll out this configuration change to the full environment later today.
We also adjusted our rate limiting to further improve service stability. Requests that exceed a rate limit now receive HTTP 429 instead of the previous behaviour of returning HTTP 503. The rate limit differs per endpoint. If you are affected by the rate limits, please contact us.
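For clients, the practical consequence is that a rate-limited request is now distinguishable from a server error and can safely be retried. A minimal sketch of how a client might handle this, in Python using the requests library; the example endpoint and the presence of a Retry-After header are illustrative assumptions, not confirmed behaviour:

```python
import time
import requests  # third-party: pip install requests

# Illustrative RIPEstat Data API endpoint; any call follows the same pattern.
URL = "https://stat.ripe.net/data/network-info/data.json"

def fetch_with_backoff(params, max_retries=5):
    """GET with retries: treat HTTP 429 as a signal to back off, not a failure."""
    delay = 1.0
    for _ in range(max_retries):
        resp = requests.get(URL, params=params, timeout=30)
        if resp.status_code != 429:
            resp.raise_for_status()  # other errors (e.g. 5xx) still raise
            return resp.json()
        # Honour Retry-After if the server sends one (an assumption here);
        # otherwise fall back to exponential backoff.
        retry_after = resp.headers.get("Retry-After")
        time.sleep(float(retry_after) if retry_after else delay)
        delay *= 2
    raise RuntimeError("rate limited on every attempt; contact RIPEstat support")

result = fetch_with_backoff({"resource": "193.0.0.0/21"})
```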
We will continue monitoring closely during the full rollout.
Posted Sep 17, 2025 - 08:27 CEST
Update
We have deployed a configuration change to part of our cluster that should prevent the performance issues from recurring. We will evaluate the performance for both groups (baseline and treatment) tomorrow and continue with the rollout if it performs as expected.
Posted Sep 16, 2025 - 20:08 CEST
Monitoring
We have applied a mitigation for one underlying issue, and we hope to finish another infrastructure change by the end of the working day today.
Posted Sep 16, 2025 - 12:16 CEST
Identified
Between ~03:10 and ~05:40 UTC there were multiple periods during which RIPEstat was fully unavailable. We still see higher tail latency for requests.
Our current analysis shows that this was a negative side effect of a configuration change we deployed to mitigate the memory consumption issues we encountered last week. We hope to mitigate this fully by the end of the day.
Posted Sep 16, 2025 - 09:28 CEST
This incident affects: Non-Critical Services (RIPEstat).