Creates a new ChatHeroku instance.
fields?: ChatHerokuFields - Optional configuration options for the Heroku Mia model
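For instance (a sketch; the field names are assumptions based on the optional fields listed below, and the model alias is illustrative):

// assumes: import { ChatHeroku } from "heroku-langchain" (package name illustrative)
const model = new ChatHeroku({
  model: "claude-3-7-sonnet",  // public/alias model name; illustrative
  temperature: 0.5,
  streaming: true,             // assumed optional field
});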
resolvedModel (protected) - Actual model ID used when calling Heroku APIs
model (protected) - Public/alias model name exposed to LangChain (can differ from actual ID)
Other protected optional fields cover temperature, stop sequences, top-p sampling, the API key and URL, token limits, timeout, streaming, and additional request kwargs.

lc_name() (static) - Returns the LangChain identifier for this model class: the string "ChatHeroku".
Returns the LLM type identifier for this model: the string "ChatHeroku".
Bind tools to this chat model for function calling capabilities.
This method creates a new instance of ChatHeroku with the specified tools pre-bound, enabling the model to call functions during conversations. The tools will be automatically included in all subsequent calls to the model.
tools - A list of StructuredTool instances or tool definitions to bind to the model
config?: Partial<ChatHerokuCallOptions> - Optional call options to apply to the bound model
Returns a new ChatHeroku instance with the tools bound and tool_choice set to "auto".
import { DynamicStructuredTool } from "@langchain/core/tools";
import { HumanMessage } from "@langchain/core/messages";
import { z } from "zod";

// Define a calculator tool with a Zod schema
const calculatorTool = new DynamicStructuredTool({
  name: "calculator",
  description: "Perform basic arithmetic operations",
  schema: z.object({
    operation: z.enum(["add", "subtract", "multiply", "divide"]),
    a: z.number(),
    b: z.number()
  }),
  func: async ({ operation, a, b }) => {
    switch (operation) {
      case "add": return `${a + b}`;
      case "subtract": return `${a - b}`;
      case "multiply": return `${a * b}`;
      case "divide": return `${a / b}`;
      default: return "Unsupported operation";
    }
  }
});

// `model` is an existing ChatHeroku instance
const modelWithTools = model.bindTools([calculatorTool]);
const result = await modelWithTools.invoke([
  new HumanMessage("What is 15 * 7?")
]);
Internal. Get the parameters used to invoke the model.
This method combines constructor parameters with runtime options to create the final request parameters for the Heroku API. Runtime options take precedence over constructor parameters.
options?: Partial<ChatHerokuCallOptions> - Optional runtime parameters that override constructor defaults
Returns the combined parameters for the API request.
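For example (a sketch; stop and temperature appear among the optional fields above, and stop is a standard LangChain call option), per-call options win over constructor defaults when the request parameters are assembled:

import { HumanMessage } from "@langchain/core/messages";

// Constructor defaults
const model = new ChatHeroku({ temperature: 0.7, stop: ["END"] });

// The call-time stop sequence takes precedence over the constructor default for this request only
const response = await model.invoke(
  [new HumanMessage("List three colors, one per line")],
  { stop: ["\n\n"] }
);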
runManager?: CallbackManagerForLLMRun
LangChain streaming hook. Wraps _stream to produce ChatGenerationChunk items so BaseChatModel.stream() uses the streaming path instead of falling back to invoke().
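In practice (a sketch; the prompt and model setup are illustrative), callers simply use the standard stream() method and receive message chunks as they arrive:

import { HumanMessage } from "@langchain/core/messages";

// `model` is an existing ChatHeroku instance
const stream = await model.stream([
  new HumanMessage("Write a haiku about the ocean")
]);

for await (const chunk of stream) {
  // Each AIMessageChunk is built from a ChatGenerationChunk emitted by the streaming path
  if (typeof chunk.content === "string") process.stdout.write(chunk.content);
}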
runManager?: CallbackManagerForLLMRun

withStructuredOutput() - Create a version of this chat model that returns structured output.
This method enables the model to return responses that conform to a specific schema, using function calling under the hood. The model is instructed to call a special "extraction" function with the structured data as arguments.
The type of the structured output
The schema for the structured output (Zod schema or JSON schema)
config?: StructuredOutputMethodOptions - Configuration options for structured output
Returns a new runnable that produces structured output.
import { HumanMessage } from "@langchain/core/messages";
import { z } from "zod";

// Define the schema for extracted data
const personSchema = z.object({
  name: z.string().describe("The person's full name"),
  age: z.number().describe("The person's age in years"),
  occupation: z.string().describe("The person's job or profession"),
  skills: z.array(z.string()).describe("List of skills or expertise")
});

// Create a model that returns structured output (`model` is an existing ChatHeroku instance)
const extractionModel = model.withStructuredOutput(personSchema, {
  name: "extract_person_info",
  description: "Extract structured information about a person"
});

// Use the model
const result = await extractionModel.invoke([
  new HumanMessage("Sarah Johnson is a 28-year-old data scientist who specializes in machine learning, Python, and statistical analysis.")
]);
console.log(result);
// Output: {
//   name: "Sarah Johnson",
//   age: 28,
//   occupation: "data scientist",
//   skills: ["machine learning", "Python", "statistical analysis"]
// }

// With includeRaw option to get both raw and parsed responses
const extractionModelWithRaw = model.withStructuredOutput(personSchema, {
  includeRaw: true
});
const rawAndParsed = await extractionModelWithRaw.invoke([
  new HumanMessage("John is a 35-year-old teacher.")
]);
console.log(rawAndParsed.parsed); // { name: "John", age: 35, occupation: "teacher", skills: [] }
console.log(rawAndParsed.raw);    // Original AIMessage with tool calls
Private helper that checks whether the input is StructuredOutputMethodParams.
Private helper that injects optional tools (tools?: HerokuFunctionTool[]).
Protected helper that builds the standard headers for Heroku API calls.
Protected helper that POSTs JSON with retries, timeout, and consistent error wrapping.
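As an illustration only (not the class's actual implementation; postJson, maxRetries, and timeoutMs are hypothetical names), such a helper typically combines fetch, an AbortController-based timeout, and a retry loop:

// Hypothetical sketch of a POST-with-retries helper, not the library's real code
async function postJson(url: string, body: unknown, apiKey: string, maxRetries = 3, timeoutMs = 30000): Promise<unknown> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs);
    try {
      const res = await fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
        body: JSON.stringify(body),
        signal: controller.signal,
      });
      if (!res.ok) throw new Error(`Heroku API request failed: ${res.status} ${await res.text()}`);
      return await res.json();
    } catch (err) {
      lastError = err;
      // Simple linear backoff before the next attempt
      await new Promise((resolve) => setTimeout(resolve, 500 * (attempt + 1)));
    } finally {
      clearTimeout(timer);
    }
  }
  throw lastError;
}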
ChatHeroku - Heroku Managed Inference API LangChain Integration
A LangChain-compatible chat model that interfaces with Heroku's Managed Inference API (Mia). This class provides access to the language models hosted on Heroku's infrastructure, with support for function calling, structured outputs, and streaming responses, and it plugs directly into LangChain createAgent, LCEL chains, and LangGraph workflows.

Example: Basic invocation
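A minimal sketch (the package name, model alias, and constructor fields are illustrative; Heroku API credentials are assumed to be configured via fields or the environment):

import { ChatHeroku } from "heroku-langchain";  // package name illustrative
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatHeroku({
  model: "claude-3-7-sonnet",  // illustrative model alias
  temperature: 0.7,
});

const response = await model.invoke([
  new HumanMessage("Explain Heroku Managed Inference in one sentence."),
]);
console.log(response.content);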
Example: LangChain function calling
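A sketch of tool calling with bindTools (the weather tool is hypothetical, and `model` is a ChatHeroku instance as above):

import { DynamicStructuredTool } from "@langchain/core/tools";
import { HumanMessage } from "@langchain/core/messages";
import { z } from "zod";

// Hypothetical tool for illustration
const weatherTool = new DynamicStructuredTool({
  name: "get_weather",
  description: "Get the current weather for a city",
  schema: z.object({ city: z.string() }),
  func: async ({ city }) => `It is sunny in ${city}.`,
});

const modelWithTools = model.bindTools([weatherTool]);
const aiMessage = await modelWithTools.invoke([
  new HumanMessage("What's the weather in Berlin?"),
]);

// When the model decides to call a tool, the calls appear on tool_calls
for (const call of aiMessage.tool_calls ?? []) {
  console.log(call.name, call.args); // e.g. "get_weather" { city: "Berlin" }
}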
Example: Structured output with createAgent
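A sketch only: it assumes LangChain's createAgent accepts a responseFormat schema and returns a structuredResponse field, as LangGraph's prebuilt agents do; verify against the createAgent version you use. The schema and tool are illustrative:

import { createAgent } from "langchain";  // assumes a LangChain v1-style createAgent
import { z } from "zod";

const weatherReport = z.object({
  city: z.string().describe("City the user asked about"),
  summary: z.string().describe("One-sentence weather summary"),
});

const agent = createAgent({
  model,                         // an existing ChatHeroku instance
  tools: [weatherTool],          // hypothetical tool from the previous example
  responseFormat: weatherReport, // assumed option; see note above
});

const result = await agent.invoke({
  messages: [{ role: "user", content: "What's the weather in Berlin?" }],
});
console.log(result.structuredResponse); // { city: "Berlin", summary: "..." }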
Example: Streaming updates via createAgent
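Another sketch under the same createAgent assumptions; it additionally assumes agent.stream supports LangGraph-style stream modes such as "updates", which emit one patch per step (model call, tool call) as the run progresses:

import { createAgent } from "langchain";  // assumes a LangChain v1-style createAgent

const agent = createAgent({
  model,                 // an existing ChatHeroku instance
  tools: [weatherTool],  // hypothetical tool from the earlier example
});

const stream = await agent.stream(
  { messages: [{ role: "user", content: "What's the weather in Berlin?" }] },
  { streamMode: "updates" }  // assumed option; see note above
);

for await (const update of stream) {
  console.log(update); // e.g. { model: {...} } or { tools: {...} } per step
}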