Heroku LangChain - v0.2.1

    Class HerokuMia

    HerokuMia - Heroku Managed Inference API LangChain Integration

    A LangChain-compatible chat model that interfaces with Heroku's Managed Inference API (Mia). This class provides access to various language models hosted on Heroku's infrastructure, including support for function calling, structured outputs, and streaming responses.

    import { HerokuMia } from "heroku-langchain";
    import { HumanMessage } from "@langchain/core/messages";

    // Basic usage
    const model = new HerokuMia({
      model: "claude-3-7-sonnet",
      temperature: 0.7,
      apiKey: process.env.INFERENCE_KEY,
      apiUrl: process.env.INFERENCE_URL
    });

    const response = await model.invoke([
      new HumanMessage("Explain quantum computing in simple terms")
    ]);
    console.log(response.content);

    // With function calling
    import { DynamicStructuredTool } from "@langchain/core/tools";
    import { z } from "zod";

    const weatherTool = new DynamicStructuredTool({
      name: "get_weather",
      description: "Get current weather for a location",
      schema: z.object({
        location: z.string().describe("City name")
      }),
      func: async ({ location }) => `Weather in ${location}: Sunny, 22°C`
    });

    const modelWithTools = model.bindTools([weatherTool]);
    const result = await modelWithTools.invoke([
      new HumanMessage("What's the weather in Paris?")
    ]);

    // With structured output
    const extractionSchema = z.object({
      name: z.string(),
      age: z.number(),
      occupation: z.string()
    });

    const structuredModel = model.withStructuredOutput(extractionSchema);
    const extracted = await structuredModel.invoke([
      new HumanMessage("John is a 30-year-old software engineer")
    ]);
    console.log(extracted); // { name: "John", age: 30, occupation: "software engineer" }

    // Streaming responses
    const stream = await model.stream([
      new HumanMessage("Write a story about a robot")
    ]);

    for await (const chunk of stream) {
      process.stdout.write(chunk.content);
    }

    Constructors

    • Creates a new HerokuMia instance.

      Parameters

      • Optional fields: HerokuMiaFields

        Optional configuration options for the Heroku Mia model

      Returns HerokuMia

      Throws when no model ID is provided and the INFERENCE_MODEL_ID environment variable is not set.

      // Basic usage with defaults
      const model = new HerokuMia();

      // With custom configuration
      const customModel = new HerokuMia({
        model: "claude-3-7-sonnet",
        temperature: 0.7,
        maxTokens: 1000,
        apiKey: "your-api-key",
        apiUrl: "https://us.inference.heroku.com"
      });

    Properties

    model: string
    temperature?: number
    maxTokens?: number
    stop?: string[]
    topP?: number
    apiKey?: string
    apiUrl?: string
    maxRetries?: number
    timeout?: number
    streaming?: boolean
    additionalKwargs?: Record<string, any>
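
    These properties mirror the constructor options. The sketch below is a minimal illustration of how the generation and transport settings might be combined in a constructor call; the values are placeholders, and comments marked as assumptions are not taken from this reference.

    import { HerokuMia } from "heroku-langchain";

    const tunedModel = new HerokuMia({
      model: "claude-3-7-sonnet",   // falls back to INFERENCE_MODEL_ID when omitted
      temperature: 0.2,             // sampling temperature
      maxTokens: 512,               // cap on generated tokens
      stop: ["\n\n"],               // stop sequences
      topP: 0.9,                    // nucleus sampling
      maxRetries: 2,                // retries for failed requests (assumed semantics)
      timeout: 30000,               // request timeout, assumed to be in milliseconds
      streaming: true,              // stream responses as they are generated
      additionalKwargs: {}          // extra provider-specific request fields (assumed)
    });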

    Methods

    • Returns the LangChain identifier for this model class.

      Returns string

      The string "HerokuMia"

    • Returns the LLM type identifier for this model.

      Returns string

      The string "HerokuMia"

    • Bind tools to this chat model for function calling capabilities.

      This method creates a new instance of HerokuMia with the specified tools pre-bound, enabling the model to call functions during conversations. The tools will be automatically included in all subsequent calls to the model.

      Parameters

      • tools: (Record<string, any> | StructuredTool<ToolInputSchemaBase, any, any, any>)[]

        A list of StructuredTool instances or tool definitions to bind to the model

      Returns HerokuMia

      A new HerokuMia instance with the tools bound and tool_choice set to "auto"

      import { DynamicStructuredTool } from "@langchain/core/tools";
      import { z } from "zod";

      const calculatorTool = new DynamicStructuredTool({
        name: "calculator",
        description: "Perform basic arithmetic operations",
        schema: z.object({
          operation: z.enum(["add", "subtract", "multiply", "divide"]),
          a: z.number(),
          b: z.number()
        }),
        func: async ({ operation, a, b }) => {
          switch (operation) {
            case "add": return `${a + b}`;
            case "subtract": return `${a - b}`;
            case "multiply": return `${a * b}`;
            case "divide": return `${a / b}`;
          }
        }
      });

      const modelWithTools = model.bindTools([calculatorTool]);
      const result = await modelWithTools.invoke([
        new HumanMessage("What is 15 * 7?")
      ]);
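
      The bound model does not execute the tool itself; it returns a message whose requested tool calls your code can inspect and run. A minimal sketch, assuming the result is a standard LangChain AIMessage with a tool_calls field:

      // Inspect the tool calls requested by the model (assumes a standard
      // LangChain AIMessage result with a tool_calls array).
      for (const toolCall of result.tool_calls ?? []) {
        console.log(toolCall.name); // e.g. "calculator"
        console.log(toolCall.args); // e.g. { operation: "multiply", a: 15, b: 7 }
      }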
    • Internal

      Get the parameters used to invoke the model.

      This method combines constructor parameters with runtime options to create the final request parameters for the Heroku API. Runtime options take precedence over constructor parameters.

      Parameters

      • Optional options: Partial<HerokuMiaCallOptions>

        Optional runtime parameters that override constructor defaults

      Returns Omit<HerokuMiaFields, (keyof BaseLanguageModelParams) | "disableStreaming"> & {
          [key: string]: any;
      }

      Combined parameters for the API request
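
      The precedence rule can also be seen from the caller's side. A minimal sketch, assuming HerokuMiaCallOptions exposes the same tuning fields as the constructor:

      import { HerokuMia } from "heroku-langchain";
      import { HumanMessage } from "@langchain/core/messages";

      const model = new HerokuMia({ temperature: 0.7, maxTokens: 1000 });

      // The per-call temperature (0.1) overrides the constructor default (0.7);
      // maxTokens is not overridden, so the constructor value (1000) still applies.
      const response = await model.invoke(
        [new HumanMessage("Summarize the plot of Hamlet in one sentence.")],
        { temperature: 0.1 }
      );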

    • Parameters

      • messages: BaseMessage[]
      • options: Omit<
            HerokuMiaCallOptions,
            | "configurable"
            | "recursionLimit"
            | "runName"
            | "tags"
            | "metadata"
            | "callbacks"
            | "runId",
        >
      • Optional runManager: CallbackManagerForLLMRun

      Returns Promise<ChatResult>

    • Parameters

      • messages: BaseMessage[]
      • options: Omit<
            HerokuMiaCallOptions,
            | "configurable"
            | "recursionLimit"
            | "runName"
            | "tags"
            | "metadata"
            | "callbacks"
            | "runId",
        >
      • Optional runManager: CallbackManagerForLLMRun

      Returns AsyncGenerator<AIMessageChunk>

    • Create a version of this chat model that returns structured output.

      This method enables the model to return responses that conform to a specific schema, using function calling under the hood. The model is instructed to call a special "extraction" function with the structured data as arguments.

      Type Parameters

      • RunOutput extends Record<string, any> = Record<string, any>

        The type of the structured output

      Parameters

      • outputSchema: Record<string, any> | ZodType<RunOutput, ZodTypeDef, RunOutput>

        The schema for the structured output (Zod schema or JSON schema)

      • Optional config: StructuredOutputMethodOptions<false>

        Configuration options for structured output

      Returns Runnable<BaseLanguageModelInput, RunOutput>

      A new runnable that returns structured output

      import { z } from "zod";

      // Define the schema for extracted data
      const personSchema = z.object({
        name: z.string().describe("The person's full name"),
        age: z.number().describe("The person's age in years"),
        occupation: z.string().describe("The person's job or profession"),
        skills: z.array(z.string()).describe("List of skills or expertise")
      });

      // Create a model that returns structured output
      const extractionModel = model.withStructuredOutput(personSchema, {
        name: "extract_person_info",
        description: "Extract structured information about a person"
      });

      // Use the model
      const result = await extractionModel.invoke([
        new HumanMessage("Sarah Johnson is a 28-year-old data scientist who specializes in machine learning, Python, and statistical analysis.")
      ]);

      console.log(result);
      // Output: {
      //   name: "Sarah Johnson",
      //   age: 28,
      //   occupation: "data scientist",
      //   skills: ["machine learning", "Python", "statistical analysis"]
      // }

      Throws when method is set to "jsonMode", which is not supported.

    • Create a version of this chat model that returns structured output.

      This method enables the model to return responses that conform to a specific schema, using function calling under the hood. The model is instructed to call a special "extraction" function with the structured data as arguments.

      Type Parameters

      • RunOutput extends Record<string, any> = Record<string, any>

        The type of the structured output

      Parameters

      • outputSchema: Record<string, any> | ZodType<RunOutput, ZodTypeDef, RunOutput>

        The schema for the structured output (Zod schema or JSON schema)

      • Optional config: StructuredOutputMethodOptions<true>

        Configuration options for structured output

      Returns Runnable<BaseLanguageModelInput, { raw: BaseMessage; parsed: RunOutput }>

      A new runnable that returns both the raw model message and the parsed structured output

      import { z } from "zod";

      // Define the schema for extracted data
      const personSchema = z.object({
        name: z.string().describe("The person's full name"),
        age: z.number().describe("The person's age in years"),
        occupation: z.string().describe("The person's job or profession"),
        skills: z.array(z.string()).describe("List of skills or expertise")
      });

      // With includeRaw option to get both raw and parsed responses
      const extractionModelWithRaw = model.withStructuredOutput(personSchema, {
        includeRaw: true
      });

      const result = await extractionModelWithRaw.invoke([
        new HumanMessage("John is a 35-year-old teacher.")
      ]);

      console.log(result.parsed); // { name: "John", age: 35, occupation: "teacher", skills: [] }
      console.log(result.raw);    // Original AIMessage with tool calls

      Throws when method is set to "jsonMode", which is not supported.

    • Helper method to check if input is a Zod schema

      Parameters

      • input: any

      Returns input is ZodType<any, ZodTypeDef, any>
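
      A minimal sketch of how such a type guard is commonly written, using duck typing on the Zod API; this is an illustration, not the library's actual implementation:

      import { z, ZodType, ZodTypeDef } from "zod";

      // Treats anything exposing Zod's parse/safeParse methods as a Zod schema.
      function isZodSchema(input: any): input is ZodType<any, ZodTypeDef, any> {
        return (
          typeof input === "object" &&
          input !== null &&
          typeof input.parse === "function" &&
          typeof input.safeParse === "function"
        );
      }

      isZodSchema(z.object({ name: z.string() }));     // true: Zod schema
      isZodSchema({ type: "object", properties: {} }); // false: plain JSON schema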