LangChain.js is the JavaScript/TypeScript port of LangChain — a framework for composing LLM calls, tools, and retrieval into chains and agents. Arize AX captures every chain, prompt, tool call, and LLM call by manually instrumenting the @langchain/core/callbacks/manager module via the @arizeai/openinference-instrumentation-langchain package.
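The example below imports a `provider` from a local `instrumentation.ts` module. A minimal sketch of that module is shown here, assuming the OpenInference LangChain instrumentation plus a gRPC OTLP exporter pointed at Arize AX; the endpoint URL, metadata header names, and `model_id` resource attribute follow Arize's documented pattern, but the exact OpenTelemetry constructor options vary across SDK versions, so verify them against your installed packages.

```typescript
// instrumentation.ts — a sketch, not a definitive setup.
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-grpc";
import { Resource } from "@opentelemetry/resources";
import { Metadata } from "@grpc/grpc-js";
import { LangChainInstrumentation } from "@arizeai/openinference-instrumentation-langchain";
import * as CallbackManagerModule from "@langchain/core/callbacks/manager";

// Arize AX authenticates OTLP traffic via gRPC metadata (assumed header names;
// confirm against your Arize AX setup).
const metadata = new Metadata();
metadata.set("space_id", process.env.ARIZE_SPACE_ID ?? "");
metadata.set("api_key", process.env.ARIZE_API_KEY ?? "");

export const provider = new NodeTracerProvider({
  resource: new Resource({
    // In Arize AX, model_id maps to the project name.
    model_id: "langchain-js-tracing-example",
  }),
  spanProcessors: [
    new BatchSpanProcessor(
      new OTLPTraceExporter({ url: "https://otlp.arize.com/v1", metadata }),
    ),
  ],
});
provider.register();

// Patch LangChain's callback manager so every chain, prompt, tool call,
// and LLM call emits OpenInference spans.
const instrumentation = new LangChainInstrumentation();
instrumentation.manuallyInstrument(CallbackManagerModule);

console.log("Arize AX tracing initialized for LangChain.js.");
```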
```typescript
// example.ts
// Importing instrumentation first ensures tracing is set up before any
// LangChain client is created.
import { provider } from "./instrumentation";
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

// ChatOpenAI reads OPENAI_API_KEY from the environment.
const model = new ChatOpenAI({ model: "gpt-5" });

const prompt = ChatPromptTemplate.fromTemplate(
  "Answer the question concisely.\nQuestion: {question}\nAnswer:",
);

const chain = prompt.pipe(model).pipe(new StringOutputParser());

const result = await chain.invoke({
  question: "Why is the ocean salty? Answer in two sentences.",
});
console.log(result);

// Flush any pending spans before the process exits.
await provider.forceFlush();
```
```
Arize AX tracing initialized for LangChain.js.
The ocean is salty because rivers continuously dissolve mineral salts from rocks and soil and carry them to the sea, where they accumulate over millions of years. Water leaves the ocean through evaporation but the salts remain, steadily concentrating until reaching today's roughly 3.5% salinity.
```
Open your Arize AX space and select the project `langchain-js-tracing-example`. You should see a new trace within ~30 seconds containing a `RunnableSequence` parent span (CHAIN) wrapping `ChatPromptTemplate` (CHAIN), `ChatOpenAI` (LLM, model `gpt-5`), and `StrOutputParser` (CHAIN) child spans, with the prompt, response, and token usage attached to the LLM span.
**No traces in Arize AX.** Confirm `ARIZE_SPACE_ID` and `ARIZE_API_KEY` are set in the same shell that runs `example.ts`. Enable OpenTelemetry debug logs with `export OTEL_LOG_LEVEL=debug` and re-run.
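A quick way to rule out missing credentials is a small preflight check. This is a sketch (the helper `missingVars` is not part of any Arize or LangChain API); it only reports which of the environment variables mentioned in this guide are invisible to the current process.

```typescript
// check-env.ts — a hypothetical preflight helper, not part of the Arize API.
const required = ["ARIZE_SPACE_ID", "ARIZE_API_KEY", "OPENAI_API_KEY"];

// Return the names of any required variables that are unset or empty.
function missingVars(env: Record<string, string | undefined>): string[] {
  return required.filter((name) => !env[name]);
}

const missing = missingVars(process.env);
if (missing.length > 0) {
  console.error(`Missing environment variables: ${missing.join(", ")}`);
} else {
  console.log("All required credentials are set.");
}
```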
**LangChain spans missing but other spans present.** `instrumentation.manuallyInstrument(CallbackManagerModule)` must run before any code creates a LangChain client. Make sure `import { provider } from "./instrumentation"` (or a side-effect-only `import "./instrumentation"`) is the first import in your entry point.
**401 from OpenAI.** Verify `OPENAI_API_KEY` is set and has access to `gpt-5`; otherwise swap in a model your key can call.
**Process exits before spans flush.** Spans are exported asynchronously; always `await provider.forceFlush()` (or `provider.shutdown()`) before the process exits to avoid losing trailing spans.
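For long-running services that end via a signal rather than falling off the end of a script, the same guarantee can be wired into shutdown handlers. A sketch, assuming the `provider` exported by the local `instrumentation.ts` module; `provider.shutdown()` flushes pending spans and then releases the exporter.

```typescript
// shutdown.ts — a sketch of a graceful-shutdown hook for span flushing.
import { provider } from "./instrumentation";

for (const signal of ["SIGINT", "SIGTERM"] as const) {
  process.on(signal, () => {
    provider
      .shutdown()
      .catch((err) => console.error("Failed to flush spans:", err))
      .finally(() => process.exit(0));
  });
}
```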
Instrumentation versions >=1.0.0 support both attribute masking and context attribute propagation. The matrix below tracks instrumentor support across LangChain core releases: