Genkit
The Genkit class encapsulates a single Genkit instance including the Registry, Reflection Server, and configuration.
Installation
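The package names below match the imports used in the Usage example; install them with npm (or your preferred package manager):

```shell
npm install genkit @genkit-ai/googleai
```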
Usage
```ts
import { genkit } from 'genkit';
import { googleAI } from '@genkit-ai/googleai';

const ai = genkit({
  plugins: [googleAI()],
  model: 'googleai/gemini-2.0-flash-exp',
});
```
genkit()
Initializes Genkit with a set of options.
Signature:
```ts
function genkit(options: GenkitOptions): Genkit
```
Parameters
options (GenkitOptions, required): Configuration options for the Genkit instance.
GenkitOptions properties:
plugins ((GenkitPlugin | GenkitPluginV2)[]): List of plugins to load
promptDir: Directory where dotprompts are stored (defaults to './prompts')
model: Default model to use if no model is specified
context: Additional runtime context data for flows and tools
Display name that will be shown in developer tooling
Additional attribution information to include in the x-goog-api-client header
Returns
A configured Genkit instance
Class: Genkit
generate()
Calls a generative model with the provided prompt and configuration.
Signatures:
```ts
// Simple text prompt
generate(strPrompt: string): Promise<GenerateResponse>

// Multipart prompt
generate(parts: Part[]): Promise<GenerateResponse>

// Full options
generate<O extends z.ZodTypeAny = z.ZodTypeAny>(
  opts: GenerateOptions<O> | PromiseLike<GenerateOptions<O>>
): Promise<GenerateResponse<z.infer<O>>>
```
Parameters
prompt (string | Part[] | GenerateOptions): The input prompt; a simple string, an array of parts, or a full options object.
GenerateOptions properties:
model: The model to use for generation
tools: Tools available for the model to call
config: Model configuration (temperature, maxOutputTokens, etc.)
output ({ schema?: z.ZodTypeAny, format?: string }): Output schema and format specification
Returns
The generation response containing:
text: The generated text
output(): Parsed output according to schema
messages: Full conversation history
usage: Token usage information
Example
```ts
const ai = genkit({
  plugins: [googleAI()],
  model: 'googleai/gemini-2.0-flash-exp',
});

// Simple text generation
const { text } = await ai.generate('Tell me a joke');

// With tools
const { text: weatherText } = await ai.generate({
  prompt: 'What is the weather in Paris?',
  tools: [weatherTool],
});

// With structured output
const response = await ai.generate({
  prompt: 'List 3 colors',
  output: {
    schema: z.object({ colors: z.array(z.string()) }),
  },
});
const { colors } = response.output();
```
generateStream()
Streaming version of generate() that returns chunks as they are generated.
Signature:
```ts
generateStream<O extends z.ZodTypeAny = z.ZodTypeAny>(
  options: string | Part[] | GenerateStreamOptions<O>
): GenerateStreamResponse<z.infer<O>>
```
Returns
An object containing:
response: Promise that resolves to the final response
stream: Channel of response chunks
Example
```ts
const { response, stream } = ai.generateStream('Tell me a story');

for await (const chunk of stream) {
  console.log(chunk.text);
}

const finalResponse = await response;
console.log('Final:', finalResponse.text);
```
defineFlow()
Defines and registers a flow function.
Signature:
```ts
defineFlow<
  I extends z.ZodTypeAny = z.ZodTypeAny,
  O extends z.ZodTypeAny = z.ZodTypeAny,
  S extends z.ZodTypeAny = z.ZodTypeAny
>(
  config: FlowConfig<I, O, S> | string,
  fn: FlowFn<I, O, S>
): Action<I, O, S>
```
Parameters
config (FlowConfig | string, required): Flow configuration or name string.
fn (FlowFn): The flow implementation function.
Example
```ts
const menuSuggestionFlow = ai.defineFlow(
  {
    name: 'menuSuggestionFlow',
    inputSchema: z.string(),
    outputSchema: z.string(),
  },
  async (subject) => {
    const { text } = await ai.generate({
      prompt: `Suggest an item for the menu of a ${subject} themed restaurant`,
    });
    return text;
  }
);

const suggestion = await menuSuggestionFlow('pirate');
```
defineTool()
Defines and registers a tool that can be used by models.
Signature:
```ts
defineTool<I extends z.ZodTypeAny, O extends z.ZodTypeAny>(
  config: ToolConfig<I, O>,
  fn: ToolFn<I, O>
): ToolAction<I, O>
```
Parameters
config (ToolConfig): Tool configuration, including the description shown to the model.
fn (ToolFn): Tool implementation function.
Example
```ts
const weatherTool = ai.defineTool(
  {
    name: 'getWeather',
    description: 'Gets the current weather in a location',
    inputSchema: z.object({ location: z.string() }),
    outputSchema: z.string(),
  },
  async ({ location }) => {
    // Fetch weather data
    return `The weather in ${location} is sunny`;
  }
);
```
definePrompt()
Defines and registers a prompt based on a function or template.
Signature:
```ts
definePrompt<
  I extends z.ZodTypeAny = z.ZodTypeAny,
  O extends z.ZodTypeAny = z.ZodTypeAny,
  CustomOptions extends z.ZodTypeAny = z.ZodTypeAny
>(
  options: PromptConfig<I, O, CustomOptions>,
  templateOrFn?: string | PromptFn<I>
): ExecutablePrompt<z.infer<I>, O, CustomOptions>
```
Example
```ts
const greetingPrompt = ai.definePrompt({
  name: 'greeting',
  input: { schema: z.object({ name: z.string() }) },
  messages: async (input) => [
    { role: 'user', content: [{ text: `Hello, ${input.name}!` }] },
  ],
});

const { text } = await greetingPrompt({ name: 'World' });
```
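Per the signature above, the second argument can also be a template string rather than a function. A minimal sketch, assuming Handlebars-style `{{name}}` substitution as used by dotprompt templates:

```typescript
const greetingTemplate = ai.definePrompt(
  {
    name: 'greetingTemplate',
    input: { schema: z.object({ name: z.string() }) },
  },
  'Hello, {{name}}!'
);

const { text } = await greetingTemplate({ name: 'World' });
```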
run()
A flow step that executes the provided function. Each run step is recorded separately in the trace.
Signature:
```ts
run<T>(name: string, func: () => Promise<T>): Promise<T>
run<T>(name: string, input: any, func: (input?: any) => Promise<T>): Promise<T>
```
Example
```ts
ai.defineFlow('processData', async () => {
  const data = await ai.run('fetch-data', async () => {
    return fetchDataFromAPI();
  });
  const result = await ai.run('process-data', async () => {
    return processData(data);
  });
  return result;
});
```
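The three-argument overload additionally records the step's input in the trace. A sketch, where `rawData` and `processData` are hypothetical stand-ins for your own values and helpers:

```typescript
// Inside a flow: the second argument (rawData) is captured in the trace
// alongside the step's output.
const processed = await ai.run('process-data', rawData, async (input) => {
  return processData(input);
});
```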
See Also