
Vercel AI SDK Provider

Use Composio with Vercel AI SDK

The Vercel AI SDK lets you attach an optional async execute function to each tool definition; the framework calls this function whenever the model requests that tool.

The Vercel provider for Composio converts Composio tools into the Vercel AI SDK tool format and attaches this execute function to each tool, so tool calls run against Composio automatically.
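For comparison, this is roughly what you would otherwise write by hand with the AI SDK's tool helper (a sketch assuming AI SDK v5 and zod; the tool name, schema fields, and return shape are illustrative):

import { tool } from 'ai';
import { z } from 'zod';

// Hand-written equivalent of what the provider generates for a Composio tool.
const sendEmail = tool({
  description: 'Send an email via Gmail',
  // Illustrative input schema; the provider derives the real one from Composio's tool metadata.
  inputSchema: z.object({
    recipient_email: z.string(),
    subject: z.string(),
    body: z.string(),
  }),
  // The provider wires this up to run the tool call through Composio for the given user.
  execute: async (input) => {
    // ...call the Gmail API yourself...
    return { successful: true };
  },
});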

Setup

The Vercel AI SDK provider is only available in Composio's TypeScript SDK.

npm install @composio/vercel
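The setup and usage snippets below also import @composio/core, ai, and @ai-sdk/openai; install them as well if they are not already part of your project:

npm install @composio/core ai @ai-sdk/openai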

Import the provider and pass it to the Composio constructor.

import { Composio } from '@composio/core';
import { VercelProvider } from '@composio/vercel';
import { generateText } from 'ai';
import { openai } from "@ai-sdk/openai";

const composio = new Composio({
  provider: new VercelProvider(),
});
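By default the SDK reads your Composio API key from the environment (typically COMPOSIO_API_KEY); the constructor also accepts it explicitly, as sketched below:

const composio = new Composio({
  // Assumes the key is stored in the COMPOSIO_API_KEY environment variable.
  apiKey: process.env.COMPOSIO_API_KEY,
  provider: new VercelProvider(),
});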

Usage

// create an auth config for gmail
// then create a connected account with an external user id that identifies the user
const externalUserId = "your-external-user-id";
const tools = await composio.tools.get(externalUserId, "GMAIL_SEND_EMAIL");

// env: OPENAI_API_KEY
const { text } = await generateText({
  model: openai("gpt-5"),
  messages: [
    {
      role: "user",
      content: `Send an email to soham.g@composio.dev with the subject 'Hello from composio 👋🏻' and the body 'Congratulations on sending your first email using AI Agents and Composio!'`,
    },
  ],
  tools,
});

console.log("Email sent successfully!", { text });
