API Reference¶
This page provides detailed information about the LangGraph OpenAI Serve API endpoints and schemas.
langgraph-openai-serve package.
GraphConfig ¶
Bases: BaseModel
resolve_graph async ¶
Get the graph instance, handling both direct instances and async callables.
Source code in src/langgraph_openai_serve/graph/graph_registry.py
GraphRegistry ¶
Bases: BaseModel
get_graph ¶
Get a graph by its name.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | The name of the graph to retrieve. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `GraphConfig` | The graph configuration associated with the given name. |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If the graph name is not found in the registry. |
Source code in src/langgraph_openai_serve/graph/graph_registry.py
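The lookup contract above (return the registered `GraphConfig`, raise `ValueError` for unknown names) can be sketched with a minimal stand-in. The class below is an illustration of that behavior, not the library's actual Pydantic implementation:

```python
class MiniGraphRegistry:
    """Simplified stand-in for GraphRegistry's get_graph contract."""

    def __init__(self, graphs: dict):
        # Maps graph names to their configurations.
        self._graphs = dict(graphs)

    def get_graph(self, name: str):
        # Unknown names raise ValueError, as documented above.
        if name not in self._graphs:
            raise ValueError(f"Graph '{name}' not found in registry")
        return self._graphs[name]


registry = MiniGraphRegistry({"simple_graph": object()})
config = registry.get_graph("simple_graph")  # returns the stored entry
```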
LangchainOpenaiApiServe ¶
Server class to connect LangGraph instances with an OpenAI-compatible API.
This class serves as a bridge between LangGraph instances and an OpenAI-compatible API. It allows users to register their LangGraph instances and expose them through a FastAPI application.
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `app` | | The FastAPI application to attach routers to. |
| `graphs` | | A GraphRegistry instance containing the graphs to serve. |
Initialize the server with a FastAPI app (optional) and a GraphRegistry instance (optional).
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `app` | `FastAPI \| None` | The FastAPI application to attach routers to. If None, a new FastAPI app will be created. | `None` |
| `graphs` | `GraphRegistry \| None` | A GraphRegistry instance containing the graphs to serve. If None, a default simple graph will be used. | `None` |
| `configure_cors` | `bool` | Whether to configure CORS for the FastAPI application. | `False` |
Source code in src/langgraph_openai_serve/openai_server.py
bind_openai_chat_completion ¶
Bind OpenAI-compatible chat completion endpoints to the FastAPI app.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `prefix` | `str` | The URL prefix for the OpenAI-compatible endpoints. | `'/v1'` |
Source code in src/langgraph_openai_serve/openai_server.py
api ¶
chat ¶
schemas ¶
Pydantic models for the OpenAI API.
This module defines Pydantic models that match the OpenAI API request and response formats.
ChatCompletionRequest ¶
Bases: BaseModel
Model for a chat completion request.
ChatCompletionRequestMessage ¶
Bases: BaseModel
Model for a chat completion request message.
ChatCompletionResponse ¶
Bases: BaseModel
Model for a chat completion response.
ChatCompletionResponseChoice ¶
Bases: BaseModel
Model for a chat completion response choice.
ChatCompletionResponseMessage ¶
Bases: BaseModel
Model for a chat completion response message.
ChatCompletionStreamResponse ¶
Bases: BaseModel
Model for a chat completion stream response.
ChatCompletionStreamResponseChoice ¶
Bases: BaseModel
Model for a chat completion stream response choice.
ChatCompletionStreamResponseDelta ¶
Bases: BaseModel
Model for a chat completion stream response delta.
ChatMessage ¶
Bases: BaseModel
Model for a chat message.
FunctionCall ¶
Bases: BaseModel
Model for a function call.
FunctionDefinition ¶
Bases: BaseModel
Model for a function definition.
Role ¶
Bases: str, Enum
Role options for chat messages.
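A `str`-plus-`Enum` base gives members that compare equal to plain strings and serialize directly in JSON payloads. A minimal sketch of this pattern (the member names below follow OpenAI's standard chat roles and are assumptions, not copied from this module's source):

```python
from enum import Enum


class Role(str, Enum):
    """Chat message roles; values match OpenAI's wire format."""

    SYSTEM = "system"
    USER = "user"
    ASSISTANT = "assistant"
    TOOL = "tool"


# Members behave like plain strings, so they drop straight into JSON payloads.
assert Role.USER == "user"
```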
Tool ¶
Bases: BaseModel
Model for a tool.
ToolCall ¶
Bases: BaseModel
Model for a tool call.
ToolCallFunction ¶
Bases: BaseModel
Model for a tool call function.
ToolFunction ¶
Bases: BaseModel
Model for a tool function.
UsageInfo ¶
Bases: BaseModel
Model for usage information.
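Usage accounting follows OpenAI's three-field shape. A dataclass sketch of that shape (the real model is a Pydantic `BaseModel`; the field names here mirror OpenAI's response format and are an assumption):

```python
from dataclasses import dataclass


@dataclass
class UsageInfoSketch:
    """Token accounting in OpenAI's response format."""

    prompt_tokens: int = 0
    completion_tokens: int = 0

    @property
    def total_tokens(self) -> int:
        # OpenAI reports the total as prompt plus completion tokens.
        return self.prompt_tokens + self.completion_tokens


usage = UsageInfoSketch(prompt_tokens=12, completion_tokens=30)
```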
service ¶
Chat completion service.
This module provides a service for handling chat completions, implementing business logic that was previously in the router.
ChatCompletionService ¶
Service for handling chat completions.
generate_completion async ¶
Generate a chat completion.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `chat_request` | `ChatCompletionRequest` | The chat completion request. | *required* |
| `graph_registry` | `GraphRegistry` | The GraphRegistry object containing registered graphs. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `ChatCompletionResponse` | A chat completion response. |

Raises:

| Type | Description |
| --- | --- |
| `Exception` | If there is an error generating the completion. |
Source code in src/langgraph_openai_serve/api/chat/service.py
stream_completion async ¶
Stream a chat completion response.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `chat_request` | `ChatCompletionRequest` | The chat completion request. | *required* |
| `graph_registry` | `GraphRegistry` | The GraphRegistry object containing registered graphs. | *required* |

Yields:

| Type | Description |
| --- | --- |
| `AsyncIterator[str]` | Chunks of the chat completion response. |
Source code in src/langgraph_openai_serve/api/chat/service.py
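Streamed chunks are delivered as server-sent events in OpenAI's `chat.completion.chunk` format. A sketch of what one yielded string might look like (the field values are illustrative, and the exact payload this service emits is an assumption, not shown in the source above):

```python
import json
import time


def sse_chunk(content: str, model: str) -> str:
    """Wrap a content delta in an OpenAI-style SSE data line."""
    payload = {
        "id": "chatcmpl-example",  # illustrative id
        "object": "chat.completion.chunk",
        "created": int(time.time()),
        "model": model,
        "choices": [
            {"index": 0, "delta": {"content": content}, "finish_reason": None}
        ],
    }
    # SSE frames are "data: <json>" terminated by a blank line.
    return f"data: {json.dumps(payload)}\n\n"


chunk = sse_chunk("Hello", model="simple_graph")
```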
views ¶
Chat completion router.
This module provides the FastAPI router for the chat completion endpoint, implementing an OpenAI-compatible interface.
create_chat_completion async ¶
Create a chat completion.
This endpoint is compatible with OpenAI's chat completion API.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `chat_request` | `ChatCompletionRequest` | The parsed chat completion request. | *required* |
| `graph_registry` | `Annotated[GraphRegistry, Depends(get_graph_registry_dependency)]` | The graph registry dependency. | *required* |
| `service` | `Annotated[ChatCompletionService, Depends(ChatCompletionService)]` | The chat completion service dependency. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `StreamingResponse \| ChatCompletionResponse` | A chat completion response, either as a complete response or as a stream. |
Source code in src/langgraph_openai_serve/api/chat/views.py
models ¶
schemas ¶
service ¶
Model service.
This module provides a service for handling OpenAI model information.
ModelService ¶
Service for handling model operations.
get_models ¶
Get a list of available models.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `graph_registry` | `GraphRegistry` | The GraphRegistry containing registered graphs. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `ModelList` | A list of models in OpenAI-compatible format. |
Source code in src/langgraph_openai_serve/api/models/service.py
views ¶
Models router.
This module provides the FastAPI router for the models endpoint, implementing an OpenAI-compatible interface for model listing.
get_graph_registry_dependency ¶
list_models ¶
Get a list of available models.
Source code in src/langgraph_openai_serve/api/models/views.py
core ¶
settings ¶
Settings ¶
Bases: BaseSettings
This class loads environment variables, either from the environment or from a `.env` file, and stores them as class attributes.

NOTE:

- Environment variables always take priority over values loaded from a dotenv file.
- Environment variable names are case-insensitive.
- Environment variable types are inferred from the type hints of the class attributes.
- A default value should be provided for environment variables that are not set.
For more info, see the related pydantic docs: https://docs.pydantic.dev/latest/concepts/pydantic_settings
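The precedence rules above (environment beats dotenv, names are case-insensitive) can be illustrated without Pydantic. This helper is a sketch of the behavior only, not the `Settings` implementation:

```python
import os


def resolve_setting(name: str, dotenv: dict, default=None):
    """Resolve a setting: real environment first, then dotenv, then default."""
    # Case-insensitive lookup in the process environment.
    for key, value in os.environ.items():
        if key.lower() == name.lower():
            return value
    # Fall back to values parsed from a .env file.
    for key, value in dotenv.items():
        if key.lower() == name.lower():
            return value
    return default


os.environ["APP_PORT"] = "9000"
# The environment value wins over the dotenv value, regardless of case.
port = resolve_setting("app_port", dotenv={"APP_PORT": "8000"})
```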
check_langfuse_settings ¶
Validate Langfuse settings if enabled.
Source code in src/langgraph_openai_serve/core/settings.py
graph ¶
Service package for the LangGraph OpenAI compatible API.
graph_registry ¶
GraphConfig ¶
Bases: BaseModel
resolve_graph async ¶
Get the graph instance, handling both direct instances and async callables.
Source code in src/langgraph_openai_serve/graph/graph_registry.py
GraphRegistry ¶
Bases: BaseModel
get_graph ¶
Get a graph by its name.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | The name of the graph to retrieve. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `GraphConfig` | The graph configuration associated with the given name. |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If the graph name is not found in the registry. |
Source code in src/langgraph_openai_serve/graph/graph_registry.py
runner ¶
LangGraph runner service.
This module provides functionality to run LangGraph models with an OpenAI-compatible interface. It handles conversion between OpenAI's message format and LangChain's message format, and provides both streaming and non-streaming interfaces for running LangGraph workflows.
Examples:
>>> from langgraph_openai_serve.services.graph_runner import run_langgraph
>>> response, usage = await run_langgraph("my-model", messages, registry)
>>> from langgraph_openai_serve.services.graph_runner import run_langgraph_stream
>>> async for chunk, metrics in run_langgraph_stream("my-model", messages, registry):
... print(chunk)
The module contains the following functions:

- `convert_to_lc_messages(messages)` - Converts OpenAI messages to LangChain messages.
- `register_graphs(graphs)` - Validates and returns the provided graph dictionary.
- `run_langgraph(model, messages, graph_registry)` - Runs a LangGraph model with the given messages.
- `run_langgraph_stream(model, messages, graph_registry)` - Runs a LangGraph model in streaming mode.
register_graphs ¶
Validate and return the provided graph dictionary.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `graphs` | `Dict[str, Any]` | A dictionary mapping graph names to LangGraph instances. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `Dict[str, Any]` | The validated graph dictionary. |
Source code in src/langgraph_openai_serve/graph/runner.py
run_langgraph async ¶
Run a LangGraph model with the given messages using the compiled workflow.
This function processes input messages through a LangGraph workflow and returns the generated response along with token usage information.
Examples:
>>> response, usage = await run_langgraph("my-model", messages, registry)
>>> print(response)
>>> print(usage)
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model` | `str` | The name of the model to use, which also determines which graph to use. | *required* |
| `messages` | `list[ChatCompletionRequestMessage]` | A list of messages to process through the LangGraph. | *required* |
| `graph_registry` | `GraphRegistry` | The GraphRegistry instance containing registered graphs. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `tuple[str, dict[str, int]]` | A tuple containing the generated response string and a dictionary of token usage information. |
Source code in src/langgraph_openai_serve/graph/runner.py
run_langgraph_stream async ¶
Run a LangGraph model in streaming mode.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model` | `str` | The name of the model (graph) to run. | *required* |
| `messages` | `list[ChatCompletionRequestMessage]` | A list of OpenAI-compatible messages. | *required* |
| `graph_registry` | `GraphRegistry` | The registry containing the graph configurations. | *required* |

Yields:

| Type | Description |
| --- | --- |
| `AsyncGenerator[tuple[str, dict[str, int]], None]` | A tuple containing the content chunk and token usage metrics. |
Source code in src/langgraph_openai_serve/graph/runner.py
simple_graph ¶
Simple LangGraph agent implementation.
This module defines a simple LangGraph agent that interfaces directly with an LLM model. It creates a straightforward workflow where a single node generates responses to user messages.
Examples:
>>> from langgraph_openai.utils.simple_graph import app
>>> result = await app.ainvoke({"messages": messages})
>>> print(result["messages"][-1].content)
The module contains the following components:

- `AgentState` - Pydantic BaseModel defining the state schema for the graph.
- `generate(state)` - Function that processes messages and generates responses.
- `workflow` - The StateGraph instance defining the workflow.
- `app` - The compiled workflow application ready for invocation.
AgentState ¶
Bases: BaseModel
Type definition for the agent state.
This BaseModel defines the structure of the state that flows through the graph. It uses the add_messages annotation to properly handle message accumulation.
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `messages` | `Annotated[Sequence[BaseMessage], add_messages]` | A sequence of BaseMessage objects annotated with add_messages. |
SimpleConfigSchema ¶
Bases: BaseModel
Configurable fields supplied by the user.
generate async ¶
Generate a response to the latest message in the state.
This function extracts the latest message, creates a prompt with it, runs it through an LLM, and returns the response as an AIMessage.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `state` | `AgentState` | The current state containing the message history. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `dict` | A dict with a `messages` key containing the AI's response. |
Source code in src/langgraph_openai_serve/graph/simple_graph.py
openai_server ¶
LangGraph OpenAI API Serve.
This module provides a server class that connects LangGraph instances to an OpenAI-compatible API. It allows users to register their LangGraph instances and expose them through a FastAPI application.
Examples:
>>> from langgraph_openai_serve import LangchainOpenaiApiServe
>>> from fastapi import FastAPI
>>> from your_graphs import simple_graph_1, simple_graph_2
>>>
>>> app = FastAPI(title="LangGraph OpenAI API")
>>> graph_serve = LangchainOpenaiApiServe(
... app=app,
... graphs={
... "simple_graph_1": simple_graph_1,
... "simple_graph_2": simple_graph_2
... }
... )
>>> graph_serve.bind_openai_chat_completion(prefix="/v1")
LangchainOpenaiApiServe ¶
Server class to connect LangGraph instances with an OpenAI-compatible API.
This class serves as a bridge between LangGraph instances and an OpenAI-compatible API. It allows users to register their LangGraph instances and expose them through a FastAPI application.
Attributes:

| Name | Type | Description |
| --- | --- | --- |
| `app` | | The FastAPI application to attach routers to. |
| `graphs` | | A GraphRegistry instance containing the graphs to serve. |
Initialize the server with a FastAPI app (optional) and a GraphRegistry instance (optional).
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `app` | `FastAPI \| None` | The FastAPI application to attach routers to. If None, a new FastAPI app will be created. | `None` |
| `graphs` | `GraphRegistry \| None` | A GraphRegistry instance containing the graphs to serve. If None, a default simple graph will be used. | `None` |
| `configure_cors` | `bool` | Whether to configure CORS for the FastAPI application. | `False` |
Source code in src/langgraph_openai_serve/openai_server.py
bind_openai_chat_completion ¶
Bind OpenAI-compatible chat completion endpoints to the FastAPI app.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `prefix` | `str` | The URL prefix for the OpenAI-compatible endpoints. | `'/v1'` |
Source code in src/langgraph_openai_serve/openai_server.py
schemas ¶
Models package for the LangGraph OpenAI compatible API.
utils ¶
Utility functions.
message ¶
convert_to_lc_messages ¶
Convert OpenAI messages to LangChain messages.
This function converts a list of OpenAI-compatible message objects to their LangChain equivalents for use with LangGraph.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `messages` | `list[ChatCompletionRequestMessage]` | A list of OpenAI chat completion request messages to convert. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `list[BaseMessage]` | A list of LangChain message objects. |
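The conversion is essentially a role-to-class dispatch. A sketch with stand-in message types (the real function builds `langchain_core` message objects; the `LCMessage` stand-in below is hypothetical, and the role-to-type mapping follows LangChain's conventions):

```python
from dataclasses import dataclass


@dataclass
class LCMessage:
    """Stand-in for a LangChain BaseMessage."""

    type: str
    content: str


def convert_to_lc_messages(messages: list[dict]) -> list[LCMessage]:
    # OpenAI role -> LangChain message type, per LangChain's conventions.
    role_to_type = {"system": "system", "user": "human", "assistant": "ai"}
    converted = []
    for m in messages:
        lc_type = role_to_type.get(m["role"])
        if lc_type is not None:  # unmapped roles are skipped in this sketch
            converted.append(LCMessage(type=lc_type, content=m["content"] or ""))
    return converted


msgs = convert_to_lc_messages([{"role": "user", "content": "hi"}])
```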