Heroku LangChain.js - v1.0.1

    Class ChatHeroku

    ChatHeroku - Heroku Managed Inference API LangChain Integration

    A LangChain-compatible chat model that interfaces with Heroku's Managed Inference API (Mia). This class provides access to various language models hosted on Heroku's infrastructure, including support for function calling, structured outputs, and streaming responses that plug directly into LangChain createAgent, LCEL chains, and LangGraph workflows.

    // Source: examples/chat-basic.ts
    import { ChatHeroku } from "heroku-langchain";
    import { HumanMessage } from "@langchain/core/messages";

    const model = new ChatHeroku({ temperature: 0.5, maxTokens: 512 });

    const response = await model.invoke([
      new HumanMessage("Tell me about Heroku Inference in one paragraph."),
    ]);
    console.log(response.content);

    const stream = await model.stream([
      new HumanMessage("Stream a short haiku about zero-downtime deploys."),
    ]);
    for await (const chunk of stream) {
      process.stdout.write(chunk.content as string);
    }
    // Source: examples/create-agent-custom-tool.ts
    import { tool } from "langchain";
    import { z } from "zod";
    import { HumanMessage } from "@langchain/core/messages";
    import { ChatHeroku } from "heroku-langchain";

    const getWeather = tool(
      async ({ city }) => `Weather in ${city} is always sunny!`,
      {
        name: "get_weather",
        description: "Get weather for a given city.",
        schema: z.object({ city: z.string() }),
      }
    );

    const modelWithTools = new ChatHeroku({ temperature: 0 }).bindTools([getWeather]);
    const result = await modelWithTools.invoke([
      new HumanMessage("Use get_weather to check Tokyo before answering."),
    ]);
    console.log(result.content);
    // Source: examples/create-agent-structured-output.ts
    import { createAgent, tool } from "langchain";
    import { HumanMessage } from "@langchain/core/messages";
    import { z } from "zod";
    import { ChatHeroku } from "heroku-langchain";

    const WeatherSchema = z.object({
      city: z.string(),
      temperatureCelsius: z.number(),
      condition: z.string(),
    });

    const getWeather = tool(
      async ({ city }) => JSON.stringify({ city, temperatureCelsius: 25, condition: "Sunny" }),
      {
        name: "get_weather",
        description: "Get weather for a given city.",
        schema: z.object({ city: z.string() }),
      }
    );

    const agent = createAgent({
      model: new ChatHeroku({
        model: process.env.INFERENCE_MODEL_ID ?? "gpt-oss-120b",
        temperature: 0,
      }),
      tools: [getWeather],
      responseFormat: WeatherSchema,
      systemPrompt: "You are a weather assistant. Always call get_weather.",
    });

    const result = await agent.invoke({
      messages: [new HumanMessage("What's the weather like in Tokyo today?")],
    });
    console.log(result.structuredResponse);
    // Source: examples/create-agent-updates-stream.ts
    import { createAgent } from "langchain";
    import { ChatHeroku } from "heroku-langchain";

    // Reuse the `getWeather` tool defined above
    const agent = createAgent({
      model: new ChatHeroku({
        model: process.env.INFERENCE_MODEL_ID ?? "gpt-oss-120b",
        temperature: 0,
      }),
      tools: [getWeather],
    });

    const stream = await agent.stream(
      { messages: [{ role: "user", content: "what is the weather in sf" }] },
      { streamMode: "updates" }
    );

    for await (const chunk of stream) {
      const [step, content] = Object.entries(chunk)[0];
      console.log(`step: ${step}`);
      console.log(content);
    }


    Constructors

    • Creates a new ChatHeroku instance.

      Parameters

      • Optional fields: ChatHerokuFields

        Optional configuration options for the Heroku Mia model

      Returns ChatHeroku

      Throws when no model ID is provided and the INFERENCE_MODEL_ID environment variable is not set.

      // Basic usage with defaults
      const model = new ChatHeroku();

      // With custom configuration
      const model = new ChatHeroku({
        model: "gpt-oss-120b",
        temperature: 0.7,
        maxTokens: 1000,
        apiKey: "your-api-key",
        apiUrl: "https://us.inference.heroku.com",
      });

    Properties

    maxTokens?: number
    structuredOutputTool?: StructuredOutputToolMetadata
    resolvedModelId: string

    Actual model ID used when calling Heroku APIs

    model: string

    Public/alias model name exposed to LangChain (can differ from actual ID)

    temperature?: number
    stop?: string[]
    topP?: number
    apiKey?: string
    apiUrl?: string
    maxRetries?: number
    timeout?: number
    streaming?: boolean
    additionalKwargs?: Record<string, any>

    Methods

    • Returns the LangChain identifier for this model class.

      Returns string

      The string "ChatHeroku"

    • Returns string

    • Parameters

      • options: Omit<
            ChatHerokuCallOptions,
            | "configurable"
            | "recursionLimit"
            | "runName"
            | "tags"
            | "metadata"
            | "callbacks"
            | "runId",
        >

      Returns { ls_provider: string }

    • Returns void

    • Parameters

      • tools: (Record<string, any> | StructuredTool<ToolInputSchemaBase, any, any, any>)[]

      Returns StructuredOutputToolMetadata | undefined

    • Parameters

      • tool: Record<string, any> | StructuredTool<ToolInputSchemaBase, any, any, any>

      Returns tool is {
          type: "function";
          function: {
              name: string;
              description?: string;
              parameters: Record<string, any>;
          };
      }

    • Parameters

      • schema: Record<string, any>

      Returns Record<string, any>

    • Parameters

      • messages: BaseMessage<MessageStructure, MessageType>[]
      • options: Omit<
            ChatHerokuCallOptions,
            | "configurable"
            | "recursionLimit"
            | "runName"
            | "tags"
            | "metadata"
            | "callbacks"
            | "runId",
        >
      • existingToolCalls: { id: string; name: string; args: any; type: "tool_call" }[] | undefined
      • currentContent: string

      Returns Promise<
          | {
              toolCalls: { id: string; name: string; args: any; type: "tool_call" }[];
              content: string;
          }
          | null,
      >

    • Returns the LLM type identifier for this model.

      Returns string

      The string "ChatHeroku"

    • Bind tools to this chat model for function calling capabilities.

      This method creates a new instance of ChatHeroku with the specified tools pre-bound, enabling the model to call functions during conversations. The tools will be automatically included in all subsequent calls to the model.

      Parameters

      • tools: (Record<string, any> | StructuredTool<ToolInputSchemaBase, any, any, any>)[]

        A list of StructuredTool instances or tool definitions to bind to the model

      • Optional config: Partial<ChatHerokuCallOptions>

      Returns ChatHeroku

      A new ChatHeroku instance with the tools bound and tool_choice set to "auto"

      import { DynamicStructuredTool } from "@langchain/core/tools";
      import { HumanMessage } from "@langchain/core/messages";
      import { z } from "zod";

      const calculatorTool = new DynamicStructuredTool({
        name: "calculator",
        description: "Perform basic arithmetic operations",
        schema: z.object({
          operation: z.enum(["add", "subtract", "multiply", "divide"]),
          a: z.number(),
          b: z.number(),
        }),
        func: async ({ operation, a, b }) => {
          switch (operation) {
            case "add": return `${a + b}`;
            case "subtract": return `${a - b}`;
            case "multiply": return `${a * b}`;
            case "divide": return `${a / b}`;
          }
        },
      });

      const modelWithTools = model.bindTools([calculatorTool]);
      const result = await modelWithTools.invoke([
        new HumanMessage("What is 15 * 7?"),
      ]);
    • Internal

      Get the parameters used to invoke the model.

      This method combines constructor parameters with runtime options to create the final request parameters for the Heroku API. Runtime options take precedence over constructor parameters.

      Parameters

      • Optional options: Partial<ChatHerokuCallOptions>

        Optional runtime parameters that override constructor defaults

      Returns Omit<
          ChatHerokuFields,
          "outputVersion"
          | "disableStreaming"
          | (keyof BaseLanguageModelParams),
      > & { [key: string]: any }

      Combined parameters for the API request
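
      The precedence rule described above can be sketched as a plain object spread, with runtime options applied last. This is a minimal illustration of the documented behavior; the names are placeholders, not the library's internals:

```typescript
// Hypothetical sketch: runtime options override constructor defaults
// because they are spread last into the merged parameter object.
type Params = Record<string, unknown>;

function mergeInvocationParams(constructorParams: Params, runtimeOptions: Params): Params {
  return { ...constructorParams, ...runtimeOptions };
}

const merged = mergeInvocationParams(
  { model: "gpt-oss-120b", temperature: 0.7, maxTokens: 1000 },
  { temperature: 0 } // runtime override wins
);
console.log(merged.temperature); // 0
```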

    • Parameters

      • messages: BaseMessage<MessageStructure, MessageType>[]
      • options: Omit<
            ChatHerokuCallOptions,
            | "configurable"
            | "recursionLimit"
            | "runName"
            | "tags"
            | "metadata"
            | "callbacks"
            | "runId",
        >
      • Optional runManager: CallbackManagerForLLMRun

      Returns Promise<ChatResult>

    • Parameters

      • messages: BaseMessage<MessageStructure, MessageType>[]
      • options: Omit<
            ChatHerokuCallOptions,
            | "configurable"
            | "recursionLimit"
            | "runName"
            | "tags"
            | "metadata"
            | "callbacks"
            | "runId",
        >
      • Optional runManager: CallbackManagerForLLMRun

      Returns AsyncGenerator<AIMessageChunk<MessageStructure>>

    • LangChain streaming hook. Wraps _stream to produce ChatGenerationChunk items so BaseChatModel.stream() uses the streaming path instead of falling back to invoke().

      Parameters

      • messages: BaseMessage<MessageStructure, MessageType>[]
      • options: Omit<
            ChatHerokuCallOptions,
            | "configurable"
            | "recursionLimit"
            | "runName"
            | "tags"
            | "metadata"
            | "callbacks"
            | "runId",
        >
      • Optional runManager: CallbackManagerForLLMRun

      Returns AsyncGenerator<ChatGenerationChunk>
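
      The wrapping described above amounts to adapting one async generator into another. A dependency-free sketch of the pattern, using simplified stand-in types rather than the actual LangChain classes:

```typescript
// Simplified sketch: re-emit each raw message chunk as a generation chunk
// with its text surfaced, mirroring how the streaming hook wraps _stream.
interface MessageChunkLike { content: string }
interface GenerationChunkLike { text: string; message: MessageChunkLike }

async function* wrapAsGenerationChunks(
  source: AsyncGenerator<MessageChunkLike>
): AsyncGenerator<GenerationChunkLike> {
  for await (const message of source) {
    // Surfacing the text per chunk is what lets the base class's
    // stream() take the streaming path instead of falling back to invoke().
    yield { text: message.content, message };
  }
}

async function* fakeStream(): AsyncGenerator<MessageChunkLike> {
  yield { content: "Hel" };
  yield { content: "lo" };
}

const parts: string[] = [];
for await (const chunk of wrapAsGenerationChunks(fakeStream())) {
  parts.push(chunk.text);
}
console.log(parts.join("")); // "Hello"
```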

    • Create a version of this chat model that returns structured output.

      This method enables the model to return responses that conform to a specific schema, using function calling under the hood. The model is instructed to call a special "extraction" function with the structured data as arguments.

      Type Parameters

      • RunOutput extends Record<string, any> = Record<string, any>

        The type of the structured output

      Parameters

      Returns
          | Runnable<
              BaseLanguageModelInput,
              RunOutput,
              RunnableConfig<Record<string, any>>,
          >
          | Runnable<
              BaseLanguageModelInput,
              { raw: BaseMessage; parsed: RunOutput },
              RunnableConfig<Record<string, any>>,
          >

      A new runnable that returns structured output

      import { z } from "zod";
      import { HumanMessage } from "@langchain/core/messages";

      // Define the schema for extracted data
      const personSchema = z.object({
        name: z.string().describe("The person's full name"),
        age: z.number().describe("The person's age in years"),
        occupation: z.string().describe("The person's job or profession"),
        skills: z.array(z.string()).describe("List of skills or expertise"),
      });

      // Create a model that returns structured output
      const extractionModel = model.withStructuredOutput(personSchema, {
        name: "extract_person_info",
        description: "Extract structured information about a person",
      });

      // Use the model
      const result = await extractionModel.invoke([
        new HumanMessage("Sarah Johnson is a 28-year-old data scientist who specializes in machine learning, Python, and statistical analysis."),
      ]);

      console.log(result);
      // Output: {
      //   name: "Sarah Johnson",
      //   age: 28,
      //   occupation: "data scientist",
      //   skills: ["machine learning", "Python", "statistical analysis"]
      // }

      // With includeRaw option to get both raw and parsed responses
      const extractionModelWithRaw = model.withStructuredOutput(personSchema, {
        includeRaw: true,
      });

      const resultWithRaw = await extractionModelWithRaw.invoke([
        new HumanMessage("John is a 35-year-old teacher."),
      ]);

      console.log(resultWithRaw.parsed); // { name: "John", age: 35, occupation: "teacher", skills: [] }
      console.log(resultWithRaw.raw);    // Original AIMessage with tool calls

      Throws when method is set to "jsonMode", which is not supported.

    • Helper method to check if input is a Zod schema

      Parameters

      • input: unknown

      Returns input is ZodType<unknown, unknown, $ZodTypeInternals<unknown, unknown>>

    • Parameters

      Returns BaseMessage<MessageStructure, MessageType>[]

    • Remove undefined keys to keep payloads clean

      Type Parameters

      • T extends Record<string, any>

      Parameters

      • obj: T

      Returns T
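
      A minimal sketch of such a helper, assuming a filter over Object.entries; the actual internal implementation may differ:

```typescript
// Illustrative sketch: drop keys whose value is undefined so the JSON
// payload stays clean. Falsy-but-defined values (0, "", false) are kept.
function compact<T extends Record<string, any>>(obj: T): T {
  return Object.fromEntries(
    Object.entries(obj).filter(([, value]) => value !== undefined)
  ) as T;
}

const payload = compact({ model: "gpt-oss-120b", temperature: 0, stop: undefined });
console.log(Object.keys(payload)); // ["model", "temperature"]
```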

    • Standard headers for Heroku API calls

      Parameters

      • apiKey: string

      Returns Record<string, string>
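
      Assuming the common Bearer-token convention for JSON APIs, the header shape is roughly the following; this is a sketch, not the exact internal header set:

```typescript
// Hedged sketch of standard JSON + Bearer-auth headers for Heroku API calls.
function buildHeaders(apiKey: string): Record<string, string> {
  return {
    "Content-Type": "application/json",
    Authorization: `Bearer ${apiKey}`,
  };
}

const headers = buildHeaders("your-api-key");
console.log(headers.Authorization); // "Bearer your-api-key"
```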

    • POST JSON with retries, timeout, and consistent error wrapping.

      Parameters

      • url: string
      • apiKey: string
      • body: Record<string, any>

      Returns Promise<Response>
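
      The retry, timeout, and error-wrapping behavior described above can be sketched with fetch and an AbortController. The fetch implementation is injectable here purely for illustration and testing; the function name, retry policy, and error messages are assumptions, not the library's actual internals:

```typescript
// Sketch of POST-with-retries: each attempt gets its own timeout via
// AbortController, non-2xx responses and network errors are captured,
// and the last error is thrown once attempts are exhausted.
type FetchLike = (url: string, init: RequestInit) => Promise<Response>;

async function postJsonWithRetries(
  url: string,
  apiKey: string,
  body: Record<string, any>,
  maxRetries = 2,
  timeoutMs = 30_000,
  fetchImpl: FetchLike = fetch
): Promise<Response> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs);
    try {
      const response = await fetchImpl(url, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${apiKey}`,
        },
        body: JSON.stringify(body),
        signal: controller.signal,
      });
      if (response.ok) return response;
      // Wrap non-2xx responses in a consistent error before retrying.
      lastError = new Error(`Heroku API returned ${response.status}`);
    } catch (err) {
      lastError = err; // network failure or timeout abort
    } finally {
      clearTimeout(timer);
    }
  }
  throw lastError;
}
```

      Injecting the fetch function keeps the retry logic testable without a live endpoint, which is a common pattern for wrappers like this.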

    • Returns string