New first-party packages: langchain-openrouter and langchain-litellm
#35672
Replies: 2 comments
From my point of view, dedicated packages are the right call here. Using ChatOpenAI as a universal shim was convenient, but it always felt fragile once provider-specific behavior started to matter. The thing I would still want is a short migration guide that compares tracing behavior, structured output quirks, streaming semantics, and metadata differences against the old base_url approach. That is usually where teams hesitate, even when the new abstraction is clearly better. If that parity story is documented well, moving to purpose-built integrations should remove a lot of subtle bugs.
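The metadata-parity check described above can be automated. The sketch below is illustrative only: it diffs two chat-response payloads (old `base_url` shim vs. a dedicated integration) represented as plain dicts, and reports fields that appear in one but not the other. The dict shape and field names are assumptions, not a LangChain API.

```python
# Illustrative parity helper: diff two response payloads and report
# dotted paths present in one but missing from the other.
# The payload structure here is an assumption for demonstration.

def diff_response_fields(old: dict, new: dict, prefix: str = "") -> list[str]:
    """Return dotted paths present in one payload but missing from the other."""
    diffs = []
    for key in sorted(set(old) | set(new)):
        path = f"{prefix}{key}"
        if key not in old:
            diffs.append(f"+ {path} (only in new)")
        elif key not in new:
            diffs.append(f"- {path} (only in old)")
        elif isinstance(old[key], dict) and isinstance(new[key], dict):
            diffs.extend(diff_response_fields(old[key], new[key], path + "."))
    return diffs

# Hypothetical payloads: the dedicated integration surfaces an extra
# "reasoning" field that the base_url shim did not.
old_payload = {"content": "hi", "response_metadata": {"model": "gpt-4o"}}
new_payload = {"content": "hi",
               "response_metadata": {"model": "gpt-4o", "reasoning": "..."}}
print(diff_response_fields(old_payload, new_payload))
# → ['+ response_metadata.reasoning (only in new)']
```

Running the same prompt through both setups and diffing the payloads this way makes the "metadata differences" part of the migration guide concrete.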
Dedicated integration packages are the right call. A few things would make migration from the `base_url` approach smoother, and a migration checklist is worth documenting.

On the broader unified-endpoint question, one pattern worth mentioning: for teams that want zero-config multi-provider access without managing individual API keys, there are emerging options like GPU-Bridge that expose 30+ AI services (LLMs, image gen, embeddings, STT, TTS, ...) through a single endpoint. Integration with the new LiteLLM package would be straightforward:

```python
from langchain_litellm import ChatLiteLLM

# Route through a unified proxy
llm = ChatLiteLLM(
    model="openai/gpt-4o",  # or llama-3.3-70b, claude-3-5-haiku, etc.
    api_base="https://api.gpubridge.xyz/openai",
    api_key="YOUR_KEY",
)
```

A parity doc comparing the old `base_url` setup with the new packages would also help.
We're excited to announce two new integrations that address one of the most requested areas of improvement: better support for multi-provider routing and OpenAI API-compatible services.
`langchain-openrouter`: Native OpenRouter support

Using OpenRouter via `ChatOpenAI` with `base_url` overrides has been a common source of issues: broken structured output, missing reasoning content, unsupported provider-specific features, and incorrect tracing. `ChatOpenRouter` solves this with a dedicated integration built on the official OpenRouter Python SDK.

What you get:
- `_llm_type` for tracing and observability

Full documentation - Reference docs
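One way to see why the `base_url` shim loses information: a client built strictly around the base Chat Completions schema will silently discard response fields it does not recognize. The snippet below is a toy illustration of that failure mode, not the actual SDK behavior, and the field names are assumptions.

```python
# Toy illustration: a shim that keeps only fields it knows from the base
# schema silently drops provider-specific extensions such as reasoning
# content. Field names are hypothetical, for demonstration only.

KNOWN_FIELDS = {"id", "model", "choices", "usage"}

def parse_with_base_schema(raw_response: dict) -> dict:
    """Keep only the fields a base-schema client understands."""
    return {k: v for k, v in raw_response.items() if k in KNOWN_FIELDS}

raw = {
    "id": "gen-123",
    "model": "some-model",
    "choices": [{"message": {"content": "answer"}}],
    "usage": {"total_tokens": 42},
    "reasoning": "chain of thought...",  # provider-specific extension
    "provider": "openrouter",            # provider-specific extension
}

parsed = parse_with_base_schema(raw)
print(sorted(raw.keys() - parsed.keys()))  # fields silently lost
# → ['provider', 'reasoning']
```

A dedicated integration knows about these extensions up front, so nothing is dropped on the floor.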
`langchain-litellm`: Now under the `langchain-ai` org

`langchain-litellm` has moved to the `langchain-ai` GitHub organization! Originally created by @Akshay-Dongare, it now has first-party support from the LangChain team, meaning tighter integration, more consistent updates, and direct maintenance.

LiteLLM lets you call 100+ LLM providers using a unified interface. If you're running a LiteLLM proxy or want a single abstraction across providers, this is the recommended path rather than routing through `ChatOpenAI` with `base_url`.

Source & issues - Full documentation - Reference docs
Why dedicated packages?

It is not practical for `ChatOpenAI` to handle every extension that third-party providers make to the Chat Completions standard. Dedicated integrations let us:
- avoid `ChatOpenAI` `base_url` override mismatches

If you've been using `ChatOpenAI` with `base_url` pointed at OpenRouter or a LiteLLM proxy, we recommend migrating to these dedicated packages.

Feedback
Encountered a bug or missing feature? Open an issue:
- `langchain-openrouter`: langchain-ai/langchain (label: openrouter)
- `langchain-litellm`: langchain-ai/langchain-litellm