Creates a new HerokuAgent instance.
Optional fields: HerokuAgentFields
Optional configuration options for the Heroku Mia Agent
// Basic usage with defaults
const agent = new HerokuAgent();

// With custom configuration
const agent = new HerokuAgent({
  model: "gpt-oss-120b",
  temperature: 0.3,
  maxTokensPerRequest: 2000,
  tools: [
    {
      type: "heroku_tool",
      name: "dyno_run_command",
      runtime_params: {
        target_app_name: "my-app",
        tool_params: {
          cmd: "date",
          description: "Gets the current date and time on the server.",
          parameters: { type: "object", properties: {} },
        },
      },
    },
  ],
  apiKey: "your-api-key",
  apiUrl: "https://us.inference.heroku.com",
});
Protected Optional max…
Protected Optional tools
Protected Optional stream…
Private tool…
Private _…
Protected resolved…: Actual model ID used when calling Heroku APIs
Protected model: Public/alias model name exposed to LangChain (can differ from the actual ID)
Protected Optional temperature
Protected Optional stop
Protected Optional top…
Protected Optional api…
Protected Optional api…
Protected Optional max…
Protected Optional timeout
Protected Optional streaming
Protected Optional additional…
Static lc_…: Returns the LangChain identifier for this agent class.
The string "HerokuAgent"
Returns the LLM type identifier for this agent.
The string "HerokuAgent"
Internal. Get the parameters used to invoke the agent.
This method combines constructor parameters with runtime options to create the final request parameters for the Heroku Agent API. Runtime options take precedence over constructor parameters.
Optional options: Partial<HerokuAgentCallOptions>
Optional runtime parameters that override constructor defaults
Combined parameters for the agent API request
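The precedence rule can be sketched with a plain object spread, where later sources win. The field names below are illustrative stand-ins, not the library's internals:

```typescript
// Illustrative parameter shape; the real HerokuAgentCallOptions has more fields.
interface AgentParams {
  model?: string;
  temperature?: number;
  maxTokensPerRequest?: number;
}

function invocationParams(
  constructorParams: AgentParams,
  runtimeOptions?: Partial<AgentParams>,
): AgentParams {
  // Object spread: later sources win, so runtime options override
  // the defaults captured at construction time.
  return { ...constructorParams, ...runtimeOptions };
}

const merged = invocationParams(
  { model: "gpt-oss-120b", temperature: 0.3 },
  { temperature: 0.7 },
);
// merged keeps model from the constructor, but temperature from runtime
```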
Optional runManager: CallbackManagerForLLMRun
LangChain streaming hook. Wraps _stream to emit ChatGenerationChunk objects
so BaseChatModel.stream() stays on the streaming path.
Optional runManager: CallbackManagerForLLMRun
Create a version of this agent that returns structured output by instructing the model to produce JSON matching the schema (jsonMode-style).
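A minimal sketch of the jsonMode pattern described above: embed the schema in the prompt, then parse and minimally check the raw reply. The names and checks here are hypothetical, not the method's actual internals:

```typescript
// Hypothetical minimal JSON schema shape for illustration.
interface JsonSchema {
  type: "object";
  properties: Record<string, { type: string }>;
  required?: string[];
}

// Build the instruction that would be appended to the prompt.
function schemaInstruction(schema: JsonSchema): string {
  return (
    "Respond ONLY with a JSON object matching this schema:\n" +
    JSON.stringify(schema)
  );
}

// Parse the model's raw reply and verify required keys are present.
function parseStructuredOutput(
  raw: string,
  schema: JsonSchema,
): Record<string, unknown> {
  const parsed = JSON.parse(raw) as Record<string, unknown>;
  for (const key of schema.required ?? []) {
    if (!(key in parsed)) throw new Error(`missing required field: ${key}`);
  }
  return parsed;
}

const schema: JsonSchema = {
  type: "object",
  properties: { answer: { type: "string" } },
  required: ["answer"],
};
const structured = parseStructuredOutput('{"answer":"42"}', schema);
```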
Optional maybeOptions: { … }
Get the locally bound no-op LangChain tools mirroring server tools.
Protected enqueue…: Push a server-provided tool result into the local queue for a tool name.
Protected consume…: Consume the oldest queued server tool result for a tool name.
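The queue semantics can be sketched as a per-tool-name FIFO; this is an illustrative stand-in, not the class's actual implementation:

```typescript
// Per-tool-name FIFO of server tool results: enqueue appends,
// consume removes and returns the oldest entry for that tool.
class ToolResultQueue {
  private queues = new Map<string, unknown[]>();

  enqueue(toolName: string, result: unknown): void {
    const q = this.queues.get(toolName) ?? [];
    q.push(result);
    this.queues.set(toolName, q);
  }

  consume(toolName: string): unknown | undefined {
    const q = this.queues.get(toolName);
    // shift() removes and returns the oldest entry (FIFO order);
    // returns undefined when nothing is queued for this tool.
    return q?.shift();
  }
}

const queue = new ToolResultQueue();
queue.enqueue("dyno_run_command", "first");
queue.enqueue("dyno_run_command", "second");
```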
Protected clean…
Protected build…: Standard headers for Heroku API calls.
Protected post…: POST JSON with retries, timeout, and consistent error wrapping.
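The retry shape can be sketched as below: retry on failure with backoff, then wrap the final error. The real method's signature, backoff policy, and error type are not shown on this page and may differ:

```typescript
// Generic retry wrapper: `send` stands in for the actual HTTP POST.
async function postWithRetries<T>(
  send: () => Promise<T>,
  maxAttempts = 3,
  backoffMs = 10,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await send();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts) {
        // Exponential backoff between attempts.
        await new Promise((r) => setTimeout(r, backoffMs * 2 ** (attempt - 1)));
      }
    }
  }
  // Consistent wrapping of whatever the last failure was.
  throw new Error(`request failed after ${maxAttempts} attempts: ${String(lastError)}`);
}

// Simulate a request that fails twice, then succeeds.
let calls = 0;
const response = await postWithRetries(async () => {
  calls++;
  if (calls < 3) throw new Error("transient");
  return "ok";
});
```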
Protected get…
HerokuAgent - Heroku Managed Inference Agent Integration
A LangChain-compatible chat model that interfaces with Heroku's Managed Inference Agent API. This class provides access to intelligent agents that can execute tools and perform complex multi-step reasoning tasks. Agents have access to Heroku-specific tools, such as app management and database operations, and can integrate with external services via MCP (Model Context Protocol).
Unlike the basic ChatHeroku model, agents are designed for autonomous task execution with built-in tool calling capabilities and advanced reasoning patterns.
Example: LangChain createAgent integration
Example: MCP tooling via createAgent
Example: Streaming agent responses
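A live call would iterate the agent's stream (e.g. via BaseChatModel.stream()); the sketch below substitutes a stub async generator for the agent so the consumption pattern is self-contained:

```typescript
// Minimal chunk shape; real ChatGenerationChunk objects carry more fields.
interface ChunkLike {
  content: string;
}

// Stub standing in for `agent.stream(messages)` in a live call.
async function* stubStream(): AsyncGenerator<ChunkLike> {
  yield { content: "Hello" };
  yield { content: ", " };
  yield { content: "world" };
}

let full = "";
for await (const chunk of stubStream()) {
  // Each chunk carries a partial message; append as it arrives.
  full += chunk.content;
}
```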