CodexBridge is a local bridge for exposing Codex CLI/SDK as an OpenAI-compatible HTTP service. With it, Codex no longer has to live only in the terminal. OpenWebUI, Cherry Studio, scripts, automation systems, or any client that supports OpenAI Chat Completions can call it.
The two core endpoints are /v1/chat/completions and /v1/models. The former handles conversations and supports both normal and SSE streaming responses. The latter lets clients discover models in the same way they read an OpenAI-style model list. For tools that already support OpenAI APIs, this usually means changing only the base URL, API key, and model name.
Project: https://github.com/begonia599/CodexBridge
What it is useful for
CodexBridge is useful when you want to plug Codex into existing AI clients or workflows. For example:
- Select Codex directly in OpenWebUI or Cherry Studio.
- Call local Codex from curl, Python, Node.js, or other scripts.
- Let one frontend connect to OpenAI, Ollama, other compatible APIs, and Codex at the same time.
- Keep Codex’s local threads, sandbox, working directory, and approval behavior.
- Provide a unified /v1/chat/completions endpoint for internal tools.
It is not a new LLM, and it is not a full replacement for Codex CLI. More precisely, it is an adapter layer: Codex remains the upstream engine, while the bridge converts OpenAI-style requests into conversation input that Codex can handle.
Basic requirements
You need:
- Node.js 18 or later.
- Codex CLI installed and logged in.
- npm, or pnpm / yarn if you prefer.
Basic source deployment consists of cloning the repository, installing dependencies, and copying the example environment file.
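Assuming the standard Node.js workflow, these steps might look like the following (file and script names are assumptions; check the repository README):

```shell
# Sketch of a source deployment; exact commands may differ per the repo README.
git clone https://github.com/begonia599/CodexBridge.git
cd CodexBridge
npm install            # or pnpm install / yarn
cp .env.example .env   # assumed template name, per the install-script description
```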
Then edit .env or .env.local to set the API key, default model, working directory, sandbox mode, network access, and related options.
Next, start the HTTP service.
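The start command is most likely an npm script; as a hedged sketch (the script name is an assumption, see package.json for the actual command):

```shell
npm start             # assumed start script
PORT=9000 npm start   # alternatively, override the default port of 8080
```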
The default port is 8080, and it can be changed with PORT. After startup, the service exposes the /v1/chat/completions and /v1/models endpoints.
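As a quick smoke test, you can list models with curl (assuming the default port and the API key configured in .env):

```shell
curl http://localhost:8080/v1/models \
  -H "Authorization: Bearer $CODEX_BRIDGE_API_KEY"
```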
CLI conversation mode
Besides the HTTP service, CodexBridge also includes a lightweight CLI for interactive use.
You can type natural-language messages directly. Two useful commands are:
- /reset: create a new Codex thread.
- /exit: exit the CLI.
The current thread ID is stored in .codex_thread.json. If this file still exists the next time the CLI starts, the previous conversation can continue.
HTTP example
A minimal request is an ordinary OpenAI-style chat completion call.
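As a sketch, using only fields described in this article (the session ID is a placeholder):

```shell
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $CODEX_BRIDGE_API_KEY" \
  -d '{
    "model": "gpt-5-codex:medium",
    "session_id": "demo-session",
    "messages": [
      {"role": "user", "content": "Summarize the README in this repo."}
    ]
  }'
```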
Key points:
- The token in authorization must match CODEX_BRIDGE_API_KEY.
- model can include reasoning effort, such as gpt-5-codex:medium or gpt-5-codex:high.
- session_id binds the request to a conversation and allows reuse of the same Codex thread.
For streaming output, add stream: true to the request body.
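A hedged streaming variant of the same call (curl's -N disables output buffering so SSE chunks appear as they arrive):

```shell
curl -N http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $CODEX_BRIDGE_API_KEY" \
  -d '{
    "model": "gpt-5-codex:medium",
    "session_id": "demo-session",
    "stream": true,
    "messages": [{"role": "user", "content": "Explain SSE in one paragraph."}]
  }'
```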
For clients that support OpenAI streaming responses, this feels much closer to a normal chat experience.
How sessions are persisted
Session mapping is one of CodexBridge’s important features. A request can pass a session ID through these fields:
- session_id
- conversation_id
- thread_id
- user
It can also be passed through request headers:
- x-session-id
- session-id
- x-conversation-id
- x-thread-id
- x-user-id
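For example, a client that cannot modify the request body can pin the session through a header (a sketch; the session value is a placeholder):

```shell
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $CODEX_BRIDGE_API_KEY" \
  -H "x-session-id: alice-window-1" \
  -d '{"model": "gpt-5-codex:medium", "messages": [{"role": "user", "content": "Continue from before."}]}'
```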
For production use, set CODEX_REQUIRE_SESSION_ID=true.
This requires every request to include a session ID, preventing different users or chat windows from being mixed into the same temporary context. The bridge-side mapping is saved in .codex_threads.json. Deleting this file resets the bridge mapping, while Codex’s own threads remain under ~/.codex/sessions.
If CODEX_REQUIRE_SESSION_ID=false and the request provides no session ID, the bridge expands the current messages into one-off input for Codex. This is fine for temporary calls, but not for long-running conversations.
Multimodal input
CodexBridge supports OpenAI-style content blocks and converts images into Codex-compatible local_image input.
Remote images can be written as OpenAI-style image_url content blocks pointing at an http(s) URL; local images can be inlined as base64 data URLs in the same field.
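A hedged sketch combining both forms with OpenAI-style content blocks (the URLs and the base64 payload are placeholders):

```shell
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $CODEX_BRIDGE_API_KEY" \
  -d '{
    "model": "gpt-5-codex:medium",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "Compare these two images."},
        {"type": "image_url", "image_url": {"url": "https://example.com/a.png"}},
        {"type": "image_url", "image_url": {"url": "data:image/png;base64,iVBORw0..."}}
      ]
    }]
  }'
```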
Remote resources are downloaded into a temporary directory and cleaned up after the turn. In real use, watch the request body size, especially when sending base64 images. You may need to increase CODEX_JSON_LIMIT.
Structured output
If the client supports response_format, CodexBridge can map it to Codex’s outputSchema. This is useful when you want Codex to return a fixed JSON structure, such as a check result, summary, classification result, or automation report.
A minimal example sets response_format to the json_schema type with an attached schema.
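A sketch of such a request; the exact json_schema nesting follows the OpenAI response_format convention and is an assumption here:

```shell
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $CODEX_BRIDGE_API_KEY" \
  -d '{
    "model": "gpt-5-codex:medium",
    "messages": [{"role": "user", "content": "Classify this commit message: fix typo in docs"}],
    "response_format": {
      "type": "json_schema",
      "json_schema": {
        "name": "commit_classification",
        "schema": {
          "type": "object",
          "properties": {"label": {"type": "string"}},
          "required": ["label"]
        }
      }
    }
  }'
```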
A response_format of type "json_schema" must include a schema; otherwise the service returns 400.
Key environment variables
Common configuration can be grouped as follows.
- Service and authentication: PORT, CODEX_BRIDGE_API_KEY, and CODEX_JSON_LIMIT.
- Default model: the model used when a request does not override it, optionally with a reasoning-effort suffix such as :medium or :high.
- Codex runtime: CODEX_WORKDIR, CODEX_SANDBOX_MODE, CODEX_APPROVAL_POLICY, and CODEX_REQUIRE_SESSION_ID.
- Network access: CODEX_NETWORK_ACCESS.
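A hedged sketch of a minimal .env, using only variables mentioned in this article (all values are placeholders and their exact formats are assumptions):

```shell
# Service and authentication
PORT=8080
CODEX_BRIDGE_API_KEY=change-me
CODEX_JSON_LIMIT=20mb          # raise when sending base64 images

# Codex runtime
CODEX_WORKDIR=/srv/projects/demo
CODEX_SANDBOX_MODE=read-only   # assumed value format
CODEX_REQUIRE_SESSION_ID=true

# Network access (safer off by default)
CODEX_NETWORK_ACCESS=false
```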
If the service is only used for frontend chat, keeping network access off by default is safer. Enable these switches only when Codex clearly needs to run curl, git clone, or web search.
Docker and one-line scripts
The project also provides Docker deployment for long-running service use.
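Given that the install script copies .env.example and starts the service with Docker Compose, the manual flow is presumably:

```shell
cp .env.example .env    # then edit the API key and paths
docker compose up -d
docker compose logs -f  # watch startup output
```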
It also provides a one-line Linux install script.
The script installs dependencies, clones or updates the repository, copies .env.example, and starts the service with Docker Compose. It requires sudo, so it is best suited to a clean server. If the machine already has a complex Node.js, Docker, or Codex setup, read the script before running it.
Common issues
Request returns 413
The request body is usually too large, often because of base64 images. Increase CODEX_JSON_LIMIT.
API key is rejected
Check that the request includes an Authorization header whose bearer token matches CODEX_BRIDGE_API_KEY, or use the x-api-key header instead.
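As curl header flags, the two accepted forms would be:

```shell
-H "Authorization: Bearer $CODEX_BRIDGE_API_KEY"
# or, equivalently:
-H "x-api-key: $CODEX_BRIDGE_API_KEY"
```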
Codex reports a Git repository restriction
If the working directory is not a trusted repository, Codex may trigger a safety check. The bridge exposes a switch to bypass it; enable it only in an environment you trust.
Reset conversations
The bridge mapping lives in .codex_threads.json, while Codex’s own threads live in ~/.codex/sessions. Stop the service and delete the corresponding files or directories to reset them.
Recommendations
For local testing, start with the default API key and the read-only sandbox. After OpenWebUI, Cherry Studio, or scripts can call the service normally, gradually adjust CODEX_WORKDIR, CODEX_SANDBOX_MODE, CODEX_NETWORK_ACCESS, and CODEX_APPROVAL_POLICY.
For multi-user use, do at least three things:
- Require session_id.
- Change the default API key.
- Clearly limit the working directory and sandbox permissions.
CodexBridge is valuable not because it is complex, but because it places Codex inside the existing OpenAI-compatible ecosystem. If a client can change its base URL, it can treat Codex like a normal chat model while still retaining Codex’s local threads, sandbox, and tool behavior.