
8 posts tagged with "incident-report"


Incident Report: Guardrail logging exposed secret headers in spend logs and traces

LiteLLM Team
LiteLLM Core Team

Date: March 18, 2026
Duration: Unknown
Severity: High
Status: Resolved

Summary

When a custom guardrail returned the full LiteLLM request/data dictionary, the guardrail response logged by LiteLLM could include secret_fields.raw_headers, which may contain plaintext Authorization headers carrying API keys or other credentials.

This information could then propagate to logging and observability surfaces that consume guardrail metadata, including:

  • Spend logs in the LiteLLM UI: visible to admins with access to spend-log data
  • OpenTelemetry traces: visible to anyone with access to the relevant telemetry backend

LLM calls, proxy routing, and provider execution were not blocked by this bug. The impact was exposure of sensitive request headers in observability and logging paths.
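A guardrail that needs to return request metadata can strip credential-bearing headers before that data reaches any logging path. A minimal sketch of the redaction idea (the function name and header list here are illustrative, not LiteLLM's actual API):

```python
# Hypothetical helper: redact credential-bearing headers before a guardrail
# returns request data that downstream logging surfaces may persist.
SENSITIVE_HEADERS = {"authorization", "x-api-key", "api-key", "cookie"}

def redact_headers(headers: dict) -> dict:
    """Replace the values of credential-bearing headers with a placeholder."""
    return {
        k: ("[REDACTED]" if k.lower() in SENSITIVE_HEADERS else v)
        for k, v in headers.items()
    }

raw = {"Authorization": "Bearer sk-secret", "Content-Type": "application/json"}
safe = redact_headers(raw)
# The Authorization value is replaced; Content-Type passes through untouched.
```

Redacting at the guardrail boundary means every consumer of guardrail metadata (spend logs, OTEL traces) sees only the sanitized copy.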

Incident Report: Cache Eviction Closes In-Use httpx Clients

Ryan Crabbe
Performance Engineer, LiteLLM
Ishaan Jaff
CTO, LiteLLM
Krrish Dholakia
CEO, LiteLLM

Date: February 27, 2026
Duration: ~6 days (Feb 21 merge -> Feb 27 fix)
Severity: High
Status: Resolved

Note: This fix is available starting from LiteLLM v1.81.14.rc.2 or higher.

Summary

A change to improve Redis connection pool cleanup introduced a regression that closed httpx clients still actively in use by the proxy. The LLMClientCache (an in-memory TTL cache) stores both Redis clients and httpx clients under the same eviction policy. When a cache entry expired or was evicted, the new cleanup code called aclose()/close() on the evicted value. This worked correctly for Redis clients, but destroyed httpx clients that other parts of the system still held references to and were actively using for LLM API calls.

Impact: Any proxy instance that hit the cache TTL (default 10 minutes) or capacity limit (200 entries) would have its httpx clients closed out from under it, causing requests to LLM providers to fail with connection errors.
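The general bug class here is a shared cache applying one cleanup routine to heterogeneous values. A defensive pattern is to close only the value types the cache exclusively owns. A minimal sketch with stand-in stub classes (not LiteLLM's actual cache code):

```python
class RedisClientStub:
    """Stands in for a Redis client owned exclusively by the cache."""
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

class HttpClientStub:
    """Stands in for an httpx client that other code still holds."""
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

def on_evict(value):
    # Close only types the cache exclusively owns. Shared HTTP clients
    # may still be in use elsewhere, so eviction must not close them.
    if isinstance(value, RedisClientStub):
        value.close()

redis_client, http_client = RedisClientStub(), HttpClientStub()
for evicted in (redis_client, http_client):
    on_evict(evicted)
# redis_client ends up closed; http_client is left open for its owners.
```

An alternative design is to avoid the shared ownership entirely: keep cache-owned resources and borrowed resources in separate caches with separate eviction hooks.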

Incident Report: Encrypted Content Failures in Multi-Region Responses API Load Balancing

Sameer Kankute
SWE @ LiteLLM (LLM Translation)
Krrish Dholakia
CEO, LiteLLM
Ishaan Jaff
CTO, LiteLLM

Date: Feb 24, 2026
Duration: Ongoing (until fix deployed)
Severity: High (for users load balancing Responses API across different API keys)
Status: Resolved

Summary

When load balancing OpenAI's Responses API across deployments with different API keys (e.g., different Azure regions or OpenAI organizations), follow-up requests containing encrypted content items (like rs_... reasoning items) would fail with:

{
"error": {
"message": "The encrypted content for item rs_0d09d6e56879e76500699d6feee41c8197bd268aae76141f87 could not be verified. Reason: Encrypted content organization_id did not match the target organization.",
"type": "invalid_request_error",
"code": "invalid_encrypted_content"
}
}

Encrypted content items are cryptographically tied to the organization of the API key that created them. When the router load balanced a follow-up request to a deployment using a different API key, decryption failed.

  • Responses API calls with encrypted content: Complete failure when routed to wrong deployment
  • Initial requests: Unaffected — only follow-up requests containing encrypted items failed
  • Other API endpoints: No impact — chat completions, embeddings, etc. functioned normally
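One way to avoid cross-key decryption failures is to pin follow-up requests to the deployment that produced the original response. A minimal sketch of that affinity idea (the table and function names are hypothetical, not LiteLLM's router API):

```python
# Hypothetical affinity table: response_id -> deployment that created it.
_response_owner = {}

def record_response(response_id, deployment):
    _response_owner[response_id] = deployment

def pick_deployment(deployments, previous_response_id=None):
    # Follow-ups carrying encrypted items must reach the same API key /
    # organization that produced them, so honor recorded affinity first.
    if previous_response_id in _response_owner:
        return _response_owner[previous_response_id]
    return deployments[0]  # stand-in for the normal load-balancing pick

record_response("resp_abc", "azure-eastus")
routed = pick_deployment(["azure-westus", "azure-eastus"], "resp_abc")
# routed is "azure-eastus": the follow-up stays on the originating key.
```

Initial requests (no previous_response_id) still load balance freely, matching the impact notes above: only follow-ups need the affinity.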

Incident Report: Wildcard Blocking New Models After Cost Map Reload

Sameer Kankute
SWE @ LiteLLM (LLM Translation)
Krrish Dholakia
CEO, LiteLLM
Ishaan Jaff
CTO, LiteLLM

Date: Feb 23, 2026
Duration: ~3 hours
Severity: High (for users with provider wildcard access rules)
Status: Resolved

Summary

When a new Anthropic model (e.g. claude-sonnet-4-6) was added to the LiteLLM model cost map and a cost map reload was triggered, requests to the new model were rejected with:

key not allowed to access model. This key can only access models=['anthropic/*']. Tried to access claude-sonnet-4-6.

The reload updated litellm.model_cost correctly but never re-ran add_known_models(), so litellm.anthropic_models (the in-memory set used by the wildcard resolver) remained stale. The new model was invisible to the anthropic/* wildcard even though the cost map knew about it.

  • LLM calls: All requests to newly-added Anthropic models were blocked with a 401.
  • Existing models: Unaffected — only models missing from the stale provider set were impacted.
  • Other providers: Same bug class existed for any provider wildcard (e.g. openai/*, gemini/*).
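The fix pattern is to refresh every derived structure whenever its source of truth changes: here, rebuilding the per-provider model sets as part of the cost map reload. A minimal sketch with stand-in names (not LiteLLM's internals):

```python
model_cost = {}       # source of truth: model name -> pricing metadata
provider_models = {}  # derived: provider -> set of known model names

def rebuild_provider_models():
    # Derived state must be rebuilt whenever model_cost changes,
    # otherwise wildcard checks like "anthropic/*" go stale.
    provider_models.clear()
    for name, meta in model_cost.items():
        provider_models.setdefault(meta["provider"], set()).add(name)

def reload_cost_map(new_map):
    model_cost.clear()
    model_cost.update(new_map)
    rebuild_provider_models()  # the step the regression skipped

def wildcard_allows(provider, model):
    return model in provider_models.get(provider, set())

reload_cost_map({"claude-sonnet-4-6": {"provider": "anthropic"}})
# wildcard_allows("anthropic", "claude-sonnet-4-6") now returns True.
```

Coupling the rebuild to the reload function (rather than relying on callers to remember it) removes the whole class of stale-derived-set bugs across providers.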

Incident Report: SERVER_ROOT_PATH regression broke UI routing

Yuneng Jiang
SWE @ LiteLLM (Full Stack)
Ishaan Jaff
CTO, LiteLLM
Krrish Dholakia
CEO, LiteLLM

Date: January 22, 2026
Duration: ~4 days (until fix merged January 26, 2026)
Severity: High
Status: Resolved

Note: This fix is available starting from LiteLLM v1.81.3.rc.6 or higher.

Summary

A PR (#19467) accidentally removed the root_path=server_root_path parameter from the FastAPI app initialization in proxy_server.py. This caused the proxy to ignore the SERVER_ROOT_PATH environment variable when serving the UI. Users who deploy LiteLLM behind a reverse proxy with a path prefix (e.g., /api/v1 or /llmproxy) found that all UI pages returned 404 Not Found.

  • LLM API calls: No impact. API routing was unaffected.
  • UI pages: All UI pages returned 404 for deployments using SERVER_ROOT_PATH.
  • Swagger/OpenAPI docs: Broken when accessed through the configured root path.

Incident Report: vLLM Embeddings Broken by encoding_format Parameter

Sameer Kankute
SWE @ LiteLLM (LLM Translation)
Krrish Dholakia
CEO, LiteLLM
Ishaan Jaff
CTO, LiteLLM

Date: Feb 16, 2026
Duration: ~3 hours
Severity: High (for vLLM embedding users)
Status: Resolved

Summary

A commit (dbcae4a) intended to fix OpenAI SDK behavior broke vLLM embeddings by explicitly passing encoding_format=None in API requests. vLLM rejects the explicit null with the error: "unknown variant ``, expected `float` or `base64`".

  • vLLM embedding calls: Complete failure - all requests rejected
  • Other providers: No impact - OpenAI and other providers functioned normally
  • Other vLLM functionality: No impact - only embeddings were affected
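The defensive pattern for this class of bug is to drop None-valued optional parameters before building the outbound request, since many servers treat an explicit null differently from an absent field. A minimal sketch (not the actual LiteLLM fix):

```python
def drop_none_params(params: dict) -> dict:
    """Remove keys whose value is None so they are omitted from the
    request body entirely instead of being serialized as null."""
    return {k: v for k, v in params.items() if v is not None}

payload = drop_none_params(
    {"input": "hello", "model": "my-vllm-embed", "encoding_format": None}
)
# payload == {"input": "hello", "model": "my-vllm-embed"}
```

With the key absent, vLLM falls back to its own default encoding instead of trying to parse a null variant.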

Incident Report: Invalid beta headers with Claude Code

Sameer Kankute
SWE @ LiteLLM (LLM Translation)
Ishaan Jaff
CTO, LiteLLM
Krrish Dholakia
CEO, LiteLLM

Date: February 13, 2026
Duration: ~3 hours
Severity: High
Status: Resolved

Note: This fix is available starting from LiteLLM v1.81.13-nightly or higher.

Summary

Claude Code began sending unsupported Anthropic beta headers to non-Anthropic providers (Bedrock, Azure AI, Vertex AI), causing invalid beta flag errors. LiteLLM was forwarding all beta headers without provider-specific validation. Users experienced request failures when routing Claude Code requests through LiteLLM to these providers.

  • LLM calls to Anthropic: No impact.
  • LLM calls to Bedrock/Azure/Vertex: Failed with invalid beta flag errors when unsupported headers were present.
  • Cost tracking and routing: No impact.
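Provider-specific validation of this kind usually takes the form of a per-provider allowlist: forward only the beta flags a target provider is known to accept. A minimal sketch with made-up flag names and allowlists (the real supported sets live in each provider's documentation):

```python
# Hypothetical per-provider allowlists; the actual values differ.
SUPPORTED_BETAS = {
    "anthropic": {"beta-flag-a", "beta-flag-b"},
    "bedrock": {"beta-flag-a"},
}

def filter_beta_header(provider: str, beta_header: str) -> str:
    """Keep only the comma-separated beta flags the provider supports."""
    allowed = SUPPORTED_BETAS.get(provider, set())
    flags = [f.strip() for f in beta_header.split(",") if f.strip()]
    return ",".join(f for f in flags if f in allowed)

filtered = filter_beta_header("bedrock", "beta-flag-a, beta-flag-b")
# filtered == "beta-flag-a": the unsupported flag is silently dropped.
```

Providers absent from the allowlist get an empty header, which matches the behavior the impact notes describe: Anthropic traffic is untouched, while Bedrock/Azure/Vertex stop receiving flags they would reject.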

Incident Report: Invalid model cost map on main

Ishaan Jaff
CTO, LiteLLM

Date: January 27, 2026
Duration: ~20 minutes
Severity: Low
Status: Resolved

Summary

A malformed JSON entry in model_prices_and_context_window.json was merged to main (562f0a0). This caused LiteLLM to silently fall back to a stale local copy of the model cost map. Users on older package versions lost cost tracking for newer models only (e.g. azure/gpt-5.2). No LLM calls were blocked.

  • LLM calls and proxy routing: No impact.
  • Cost tracking: Impacted for newer models not present in the local backup. Older models were unaffected. The incident lasted ~20 minutes until the commit was reverted.
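A malformed entry like this can be caught before merge with a JSON validation step in CI. A minimal sketch using only the standard library (the file name comes from the incident; the check itself is generic):

```python
import json
import sys

def validate_cost_map(path: str) -> bool:
    """Return True if the file parses as a JSON object; print the
    parse error otherwise so CI can fail the merge."""
    try:
        with open(path) as f:
            data = json.load(f)
        return isinstance(data, dict)
    except json.JSONDecodeError as e:
        print(f"{path}: invalid JSON at line {e.lineno}: {e.msg}", file=sys.stderr)
        return False
```

Failing fast at merge time avoids the silent fallback behavior described above, where consumers quietly revert to a stale local copy of the cost map.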