Shipping update - 2026-01-25
Shipped an MCP proxy for external IDE tools such as Claude Code and Cursor, added context-overflow handling for AI chat, and patched a Node.js async_hooks vulnerability.
Highlights
- MCP proxy for IDE tools: added a new proxy endpoint that lets external tools like Claude Code and Cursor connect to the platform's MCP server, with secure token-based authentication, scope validation, and Server-Sent Events streaming for real-time responses.
- MCP token management: the dashboard now supports creating and managing dedicated MCP client tokens, with the origins field shown only for applicable token types and validation rules scoped to each configuration.
- AI context overflow handling: when a conversation exceeds the model's context window, the assistant now surfaces a clear error message with a "Start New Chat" action, disables further input, and persists the error in chat history so users understand what happened and how to recover.
- Structured error handling in the AI backend: context-limit errors are now emitted with standardized error codes, persisted into chat history as structured error messages, and excluded from future model context assembly to prevent cascading failures.
- Node.js updated to 24.13.0 across all backend services, addressing the async_hooks denial-of-service vulnerability (CVE-2025-59466).
User outcomes
- Engineers using Claude Code, Cursor, or other MCP-compatible IDEs can now connect directly to the platform for contextual assistance, with proper authentication and streaming support.
- Dashboard users can create and manage MCP-specific tokens with clear validation, making it straightforward to set up IDE integrations without manual configuration.
- AI assistant users get a clear explanation and recovery path when conversations hit context limits, instead of a cryptic error or silent failure.
- Backend services are patched against a known async_hooks vulnerability that could cause denial of service under specific workloads.
Technical wins
- Built the MCP proxy handler in Go with JSON-RPC request forwarding, organization context extraction from auth tokens, and SSE streaming to relay responses from the upstream MCP server — all behind new dedicated API routes with scope-validated authentication.
- Implemented end-to-end context overflow handling: the backend emits structured error codes via a new errors.js utility (99 lines), the thread manager persists error messages while excluding them from future context assembly, and the frontend renders error messages with distinctive styling, scoped CTAs, and disabled input state.
- Added conditional form field rendering for MCP token management: the origins field only appears for MCP token types, and validation dynamically adapts based on the selected client type, keeping the token creation flow clean for all token types.
- Standardized error propagation across streaming and non-streaming conversation paths, ensuring consistent error codes and user-facing messages regardless of how the AI response is delivered.
Notes
- Sensitive/internal details have been redacted.
- A minor chat input refactor consolidated conditional placeholder logic into a single ternary expression.