The `Genkit` class is the primary entry point for using Genkit in Python.
## Initialization

```python
from genkit import Genkit
from genkit.plugins.google_genai import GoogleAI

ai = Genkit(
    plugins=[GoogleAI()],
    model="gemini-2.0-flash",
    prompt_dir="./prompts",
)
```
Parameters
List of plugins to initialize
Default model name to use
Directory to load prompts from (defaults to ./prompts if it exists)
Reflection server configuration
## generate()

Generates text or structured data using a model.
```python
# Simple text generation
response = await ai.generate(
    model="gemini-2.0-flash",
    prompt="Tell me a joke about programming.",
)
print(response.text)

# Structured output
from genkit import Output
from pydantic import BaseModel

class Person(BaseModel):
    name: str
    age: int

response = await ai.generate(
    prompt="Tell me about a person",
    output=Output(schema=Person),
)
person = response.output  # Type: Person
```
### Parameters

- `model`: Model name (e.g., `"gemini-2.0-flash"`)
- `prompt` (`str | Part | list[Part] | None`): User prompt (text, `Part`, or list of `Part`s)
- `system` (`str | Part | list[Part] | None`): System instructions
- `tool_choice`: Control tool usage (`"auto"`, `"required"`, `"none"`)
- `config` (`dict | GenerationCommonConfig | None`): Generation configuration (temperature, max_tokens, etc.)
- `output` (`Output[T] | OutputConfig | dict | None`): Output configuration for structured data. Use `Output(schema=YourModel)` for typed responses.
- `docs` (`list[DocumentData] | None`): Context documents for grounding
- `on_chunk` (`ModelStreamingCallback | None`): Callback for streaming chunks
- `use` (`list[ModelMiddleware] | None`): Middleware to apply
### Returns

- `response` (`GenerateResponseWrapper[T]`): Response wrapper with `.text`, `.output`, and `.message` properties
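The `on_chunk` parameter above is only named, never demonstrated. A minimal sketch of the callback pattern follows; the chunk type is Genkit's (`ModelStreamingCallback` argument), and this sketch only assumes chunks can be collected as they arrive:

```python
# Minimal sketch of a streaming callback suitable for on_chunk.
# It simply collects chunks as they arrive; a real callback might
# print each chunk's content instead.
collected = []

def collect_chunk(chunk) -> None:
    """Append each streamed chunk to a list for later inspection."""
    collected.append(chunk)

# Usage (assumes a configured `ai` instance):
# response = await ai.generate(
#     prompt="Tell me a story",
#     on_chunk=collect_chunk,
# )
```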
## generate_stream()

Generates content with streaming output.
```python
async for chunk in ai.generate_stream(
    prompt="Tell me a story",
):
    if chunk.done:
        print("\nDone:", chunk.response.text)
    else:
        print(chunk.content, end="")
```
### Returns

- `async_iterator` (`AsyncIterator[GenerateStreamResponse]`): Async iterator yielding chunks

`GenerateStreamResponse` fields:

- `done: bool` - `True` when complete
- `response: GenerateResponseWrapper` - Final response (when done)
- `content: list[Part]` - Chunk content (when not done)
## flow()

Decorator to define a flow.
```python
@ai.flow()
async def summarize_article(url: str) -> str:
    """Summarizes an article from a URL."""
    content = await fetch_article(url)
    response = await ai.generate(
        prompt=f"Summarize this article: {content}",
    )
    return response.text

# Call the flow
result = await summarize_article("https://example.com/article")
```
### Parameters

- Flow name (defaults to the function name)
- Input schema for validation
- Output schema for validation

### Returns

Flow decorator that wraps the function
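A decorated flow remains an ordinary awaitable, so from synchronous code you can drive it with `asyncio.run`. A sketch using a plain async function in place of a real flow (no Genkit calls, just the invocation pattern):

```python
import asyncio

# Stand-in for a decorated flow; a real flow body would await ai.generate().
async def shout(text: str) -> str:
    """Toy flow: uppercases its input."""
    return text.upper()

# From async code: result = await shout("hello")
# From synchronous code:
result = asyncio.run(shout("hello"))
print(result)  # HELLO
```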
## tool()

Decorator to define a tool.
```python
@ai.tool()
async def get_weather(city: str) -> dict:
    """Gets the current weather for a city."""
    # Tool implementation
    return {"temperature": 72, "conditions": "sunny"}

# Use in generation
response = await ai.generate(
    prompt="What's the weather in Paris?",
    tools=["get_weather"],
)
```
### Parameters

- Tool name (defaults to the function name)
- Tool description (defaults to the function docstring)

### Returns

Tool decorator that wraps the function
## embed()

Generates embeddings.
```python
embeddings = await ai.embed(
    embedder="text-embedding-004",
    content="Hello, world!",
)
print(embeddings[0].embedding)  # [0.123, 0.456, ...]
```
### Parameters

- `embedder`: Embedder name or reference
- `content` (`str | list[str] | list[DocumentData]`): Text or documents to embed
- Embedder-specific options

### Returns

List of embedding vectors
## retrieve()

Retrieves documents.
```python
docs = await ai.retrieve(
    retriever="my_retriever",
    query="What is Genkit?",
    options={"k": 5},
)
```
### Parameters

- `retriever`: Retriever name or reference
- `options`: Retriever-specific options

### Returns

The retrieved documents
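Retrieved documents are usually passed straight to `generate()` via its `docs` parameter; if you prefer to inline them into a prompt, a small formatting helper works. The document shape here (dicts with a `"content"` key) is an assumption for illustration, not the actual `DocumentData` type:

```python
def docs_to_context(docs: list[dict]) -> str:
    """Join document texts into a numbered context block for a prompt."""
    return "\n".join(
        f"[{i}] {doc['content']}" for i, doc in enumerate(docs, start=1)
    )

# Usage (assumes a configured `ai` instance and dict-shaped documents):
# docs = await ai.retrieve(retriever="my_retriever", query="What is Genkit?")
# response = await ai.generate(
#     prompt=f"Answer using this context:\n{docs_to_context(docs)}",
# )

print(docs_to_context([{"content": "a"}, {"content": "b"}]))
```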
## evaluate()

Runs an evaluator on a dataset.
```python
results = await ai.evaluate(
    evaluator="faithfulness",
    dataset=[
        {
            "input": "What is AI?",
            "output": "AI is artificial intelligence",
            "context": ["AI stands for artificial intelligence"],
        },
    ],
)
```
### Parameters

- `evaluator`: Evaluator name or reference
- Evaluator-specific options

### Returns

Evaluation results
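Evaluation results are often summarized into an aggregate metric. The result shape below (dicts with a numeric `"score"` field) is an assumption for illustration; adapt it to the actual output of your evaluator:

```python
def mean_score(results: list[dict]) -> float:
    """Average the numeric 'score' field across evaluation results."""
    if not results:
        return 0.0
    return sum(r["score"] for r in results) / len(results)

print(mean_score([{"score": 1.0}, {"score": 0.5}]))  # 0.75
```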
## Example: Complete Application
```python
import asyncio

from genkit import Genkit, Output
from genkit.plugins.google_genai import GoogleAI
from pydantic import BaseModel

ai = Genkit(
    plugins=[GoogleAI()],
    model="gemini-2.0-flash",
)

@ai.tool()
async def get_weather(city: str) -> dict:
    """Gets weather for a city."""
    return {"temperature": 72, "conditions": "sunny"}

@ai.flow()
async def assistant(query: str) -> str:
    """AI assistant with tool access."""
    response = await ai.generate(
        prompt=query,
        tools=["get_weather"],
    )
    return response.text

class Story(BaseModel):
    title: str
    content: str

async def main():
    # Use tool-enabled flow
    result = await assistant("What's the weather in Paris?")
    print(result)

    # Generate structured output
    response = await ai.generate(
        prompt="Write a short story",
        output=Output(schema=Story),
    )
    story: Story = response.output
    print(f"Title: {story.title}")

    # Stream results
    async for chunk in ai.generate_stream(
        prompt="Count to 10",
    ):
        if not chunk.done:
            print(chunk.content, end="")

if __name__ == "__main__":
    asyncio.run(main())
```