
Using Different Models with ADK

Note

The Java ADK currently supports Gemini and Anthropic models. Support for more models is coming soon.

The Agent Development Kit (ADK) is designed for flexibility, allowing you to integrate a variety of Large Language Models (LLMs) into your agents. While the setup for Google Gemini models is covered in the Set Up Foundation Models guide, this page details how to leverage Gemini effectively and how to integrate other popular models, including those hosted externally or running locally.

ADK primarily uses two mechanisms for model integration:

  1. Direct String / Registry: Suited for models tightly integrated with Google Cloud (such as Gemini models accessed via Google AI Studio or Vertex AI) or models hosted on Vertex AI endpoints. You typically provide the model name or endpoint resource string directly to the LlmAgent. ADK's internal registry resolves this string to the appropriate backend client, usually via the google-genai library.
  2. Wrapper Classes: For broader compatibility, especially with models outside the Google ecosystem or those requiring specific client configuration (such as models accessed via LiteLLM). You instantiate a specific wrapper class (e.g., LiteLlm) and pass this object as the model parameter to your LlmAgent, as shown in the sketch right after this list.
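
A minimal sketch of the two mechanisms side by side (the model identifiers used here are the same ones discussed later on this page):

from google.adk.agents import LlmAgent
from google.adk.models.lite_llm import LiteLlm

# Mechanism 1: a direct model string, resolved by ADK's internal registry
registry_agent = LlmAgent(
    model="gemini-2.0-flash",
    name="registry_agent",
    instruction="You are a helpful assistant.",
)

# Mechanism 2: a wrapper class instance passed as the model
wrapper_agent = LlmAgent(
    model=LiteLlm(model="openai/gpt-4o"),  # requires OPENAI_API_KEY (see the LiteLLM section)
    name="wrapper_agent",
    instruction="You are a helpful assistant.",
)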

The following sections guide you through using these methods based on your needs.

Using Google Gemini Models

This is the most direct way to use Google's flagship models within ADK.

Integration Method: Pass the model's identifier string directly to the model parameter of LlmAgent (or its alias, Agent).

Backend Options & Setup:

The google-genai library, used internally by ADK for Gemini, can connect through either Google AI Studio or Vertex AI.

Model support for voice/video streaming

In order to use voice/video streaming in ADK, you will need to use Gemini models that support the Live API. You can find the model IDs that support the Gemini Live API in the documentation.
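
For example, a minimal sketch (the model ID below is an assumption; replace it with a Live-API-capable model listed in the documentation):

from google.adk.agents import LlmAgent

# Assumed Live-API-capable model ID -- verify against the current Gemini Live API docs
streaming_agent = LlmAgent(
    model="gemini-2.0-flash-live-001",
    name="streaming_agent",
    instruction="You are a helpful voice assistant.",
)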

Google AI Studio

  • Use Case: Google AI Studio is the easiest way to get started with Gemini. All you need is the API key. Best for rapid prototyping and development.
  • Setup: Typically requires an API key:
    • Set as an environment variable or
    • Passed during the model initialization via the Client (see example below)
export GOOGLE_API_KEY="YOUR_GOOGLE_API_KEY"
export GOOGLE_GENAI_USE_VERTEXAI=FALSE

Vertex AI

  • Use Case: Recommended for production applications, leveraging Google Cloud infrastructure. Gemini on Vertex AI supports enterprise-grade features, security, and compliance controls.
  • Setup:

    • Authenticate using Application Default Credentials (ADC):

      gcloud auth application-default login
      
    • Configure these variables either as environment variables or by providing them directly when initializing the Model.

      Set your Google Cloud project and location:

      export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"
      export GOOGLE_CLOUD_LOCATION="YOUR_VERTEX_AI_LOCATION" # e.g., us-central1
      

      Explicitly tell the library to use Vertex AI:

      export GOOGLE_GENAI_USE_VERTEXAI=TRUE
      
  • Models: Find available model IDs in the Vertex AI documentation.

Example:

from google.adk.agents import LlmAgent

# --- Example using a stable Gemini Flash model ---
agent_gemini_flash = LlmAgent(
    # Use the latest stable Flash model identifier
    model="gemini-2.0-flash",
    name="gemini_flash_agent",
    instruction="You are a fast and helpful Gemini assistant.",
    # ... other agent parameters
)

# --- Example using a powerful Gemini Pro model ---
# Note: Always check the official Gemini documentation for the latest model names,
# including specific preview versions if needed. Preview models might have
# different availability or quota limitations.
agent_gemini_pro = LlmAgent(
    # Use the latest generally available Pro model identifier
    model="gemini-2.5-pro-preview-03-25",
    name="gemini_pro_agent",
    instruction="You are a powerful and knowledgeable Gemini assistant.",
    # ... other agent parameters
)

// --- Example #1: using a stable Gemini Flash model with ENV variables ---
LlmAgent agentGeminiFlash =
    LlmAgent.builder()
        // Use the latest stable Flash model identifier
        .model("gemini-2.0-flash") // Set ENV variables to use this model
        .name("gemini_flash_agent")
        .instruction("You are a fast and helpful Gemini assistant.")
        // ... other agent parameters
        .build();

// --- Example #2: using a powerful Gemini Pro model with API Key in model ---
LlmAgent agentGeminiPro =
    LlmAgent.builder()
        // Use the latest generally available Pro model identifier
        .model(new Gemini("gemini-2.5-pro-preview-03-25",
            Client.builder()
                .vertexAI(false)
                .apiKey("API_KEY") // Set the API Key (or) project/ location
                .build()))
        // Or, you can also directly pass the API_KEY
        // .model(new Gemini("gemini-2.5-pro-preview-03-25", "API_KEY"))
        .name("gemini_pro_agent")
        .instruction("You are a powerful and knowledgeable Gemini assistant.")
        // ... other agent parameters
        .build();

// Note: Always check the official Gemini documentation for the latest model names,
// including specific preview versions if needed. Preview models might have
// different availability or quota limitations.

Using Anthropic models

Java only

You can integrate Anthropic's Claude models directly using their API key or from a Vertex AI backend into your Java ADK applications by using the ADK's Claude wrapper class.

For Vertex AI backend, see the Third-Party Models on Vertex AI section.

Prerequisites:

  1. Dependencies:

    • Anthropic SDK Classes (Transitive): The Java ADK's com.google.adk.models.Claude wrapper relies on classes from Anthropic's official Java SDK. These are typically included as transitive dependencies.
  2. Anthropic API Key:

    • Obtain an API key from Anthropic. Securely manage this key using a secret manager.

Integration:

Instantiate com.google.adk.models.Claude, providing the desired Claude model name and an AnthropicOkHttpClient configured with your API key. Then, pass this Claude instance to your LlmAgent.

Example:

import com.anthropic.client.AnthropicClient;
import com.google.adk.agents.LlmAgent;
import com.google.adk.models.Claude;
import com.anthropic.client.okhttp.AnthropicOkHttpClient; // From Anthropic's SDK

public class DirectAnthropicAgent {

  private static final String CLAUDE_MODEL_ID = "claude-3-7-sonnet-latest"; // Or your preferred Claude model

  public static LlmAgent createAgent() {

    // It's recommended to load sensitive keys from a secure config
    AnthropicClient anthropicClient = AnthropicOkHttpClient.builder()
        .apiKey("ANTHROPIC_API_KEY")
        .build();

    Claude claudeModel = new Claude(
        CLAUDE_MODEL_ID,
        anthropicClient
    );

    return LlmAgent.builder()
        .name("claude_direct_agent")
        .model(claudeModel)
        .instruction("You are a helpful AI assistant powered by Anthropic Claude.")
        // ... other LlmAgent configurations
        .build();
  }

  public static void main(String[] args) {
    try {
      LlmAgent agent = createAgent();
      System.out.println("Successfully created direct Anthropic agent: " + agent.name());
    } catch (IllegalStateException e) {
      System.err.println("Error creating agent: " + e.getMessage());
    }
  }
}

Using Cloud & Proprietary Models via LiteLLM

Python only

To access a vast range of LLMs from providers like OpenAI, Anthropic (non-Vertex AI), Cohere, and many others, ADK offers integration through the LiteLLM library.

Integration Method: Instantiate the LiteLlm wrapper class and pass it to the model parameter of LlmAgent.

LiteLLM Overview: LiteLLM acts as a translation layer, providing a standardized, OpenAI-compatible interface to over 100 LLMs.

Setup:

  1. Install LiteLLM:
    pip install litellm
    
  2. Set Provider API Keys: Configure API keys as environment variables for the specific providers you intend to use.

    • Example for OpenAI:

      export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
      
    • Example for Anthropic (non-Vertex AI):

      export ANTHROPIC_API_KEY="YOUR_ANTHROPIC_API_KEY"
      
    • Consult the LiteLLM Providers Documentation for the correct environment variable names for other providers.

      Example:

      from google.adk.agents import LlmAgent
      from google.adk.models.lite_llm import LiteLlm
      
      # --- Example Agent using OpenAI's GPT-4o ---
      # (Requires OPENAI_API_KEY)
      agent_openai = LlmAgent(
          model=LiteLlm(model="openai/gpt-4o"), # LiteLLM model string format
          name="openai_agent",
          instruction="You are a helpful assistant powered by GPT-4o.",
          # ... other agent parameters
      )
      
      # --- Example Agent using Anthropic's Claude Haiku (non-Vertex) ---
      # (Requires ANTHROPIC_API_KEY)
      agent_claude_direct = LlmAgent(
          model=LiteLlm(model="anthropic/claude-3-haiku-20240307"),
          name="claude_direct_agent",
          instruction="You are an assistant powered by Claude Haiku.",
          # ... other agent parameters
      )
      

Note for Windows users

Avoiding LiteLLM UnicodeDecodeError on Windows

When using ADK agents with LiteLlm on Windows, users might encounter the following error:

UnicodeDecodeError: 'charmap' codec can't decode byte...
This issue occurs because litellm (used by LiteLlm) reads cached files (e.g., model pricing information) using the default Windows encoding (cp1252) instead of UTF-8. Windows users can prevent this issue by setting the PYTHONUTF8 environment variable to 1. This forces Python to use UTF-8 globally. Example (PowerShell):
# Set for current session
$env:PYTHONUTF8 = "1"
# Set persistently for the user
[System.Environment]::SetEnvironmentVariable('PYTHONUTF8', '1', [System.EnvironmentVariableTarget]::User)
Applying this setting ensures that Python reads cached files using UTF-8, avoiding the decoding error.

Using Open & Local Models via LiteLLM

Python only

For maximum control, cost savings, privacy, or offline use cases, you can run open-source models locally or self-host them and integrate them using LiteLLM.

Integration Method: Instantiate the LiteLlm wrapper class, configured to point to your local model server.

Ollama Integration

Ollama allows you to easily run open-source models locally.

Model choice

If your agent relies on tools, make sure that you select a model with tool support from the Ollama website.

For reliable results, we recommend using a reasonably sized model with tool support.

The tool support for a model can be checked with the following command:

ollama show mistral-small3.1
  Model
    architecture        mistral3
    parameters          24.0B
    context length      131072
    embedding length    5120
    quantization        Q4_K_M

  Capabilities
    completion
    vision
    tools

You should see tools listed under Capabilities.

You can also look at the template the model uses and tweak it to suit your needs.

ollama show --modelfile llama3.2 > model_file_to_modify

For instance, the default template for the above model inherently suggests that the model should always call a function. This may result in an infinite loop of function calls.

Given the following functions, please respond with a JSON for a function call
with its proper arguments that best answers the given prompt.

Respond in the format {"name": function name, "parameters": dictionary of
argument name and its value}. Do not use variables.

You can replace such prompts with a more descriptive one to prevent infinite tool call loops.

For instance:

Review the user's prompt and the available functions listed below.
First, determine if calling one of these functions is the most appropriate way to respond. A function call is likely needed if the prompt asks for a specific action, requires external data lookup, or involves calculations handled by the functions. If the prompt is a general question or can be answered directly, a function call is likely NOT needed.

If you determine a function call IS required: Respond ONLY with a JSON object in the format {"name": "function_name", "parameters": {"argument_name": "value"}}. Ensure parameter values are concrete, not variables.

If you determine a function call IS NOT required: Respond directly to the user's prompt in plain text, providing the answer or information requested. Do not output any JSON.

Then you can create a new model with the following command:

ollama create llama3.2-modified -f model_file_to_modify

Using the ollama_chat provider

Our LiteLLM wrapper can be used to create agents with Ollama models.
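
The example below assumes two simple function tools, roll_die and check_prime. A minimal sketch of what such tools might look like (the implementations here are illustrative, not part of ADK), along with the imports used by the agent definition that follows:

import random

from google.adk.agents import Agent
from google.adk.models.lite_llm import LiteLlm

def roll_die(sides: int) -> int:
    """Rolls a die with the given number of sides and returns the result."""
    return random.randint(1, sides)

def check_prime(number: int) -> bool:
    """Returns True if the given number is prime, False otherwise."""
    if number < 2:
        return False
    for divisor in range(2, int(number ** 0.5) + 1):
        if number % divisor == 0:
            return False
    return True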

root_agent = Agent(
    model=LiteLlm(model="ollama_chat/mistral-small3.1"),
    name="dice_agent",
    description=(
        "hello world agent that can roll a dice of 8 sides and check prime"
        " numbers."
    ),
    instruction="""
      You roll dice and answer questions about the outcome of the dice rolls.
    """,
    tools=[
        roll_die,
        check_prime,
    ],
)

It is important to set the provider to ollama_chat instead of ollama. Using ollama will result in unexpected behaviors such as infinite tool call loops and ignoring previous context.

While api_base can be provided inside LiteLLM for generation, as of v1.65.5 the LiteLLM library relies on the environment variable when calling other APIs after completion. Therefore, at this time we recommend setting the environment variable OLLAMA_API_BASE to point to the Ollama server.

export OLLAMA_API_BASE="http://localhost:11434"
adk web
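
If you start the agent from your own script rather than through adk web, the same variable can also be set in Python before any LiteLLM call is made (a minimal sketch, assuming a local Ollama server on the default port):

import os

# Must be set before the first LiteLLM completion call
os.environ.setdefault("OLLAMA_API_BASE", "http://localhost:11434")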

Using the openai provider

Alternatively, openai can be used as the provider name. But this also requires setting the OPENAI_API_BASE=http://localhost:11434/v1 and OPENAI_API_KEY=anything environment variables instead of OLLAMA_API_BASE. Please note that the api base now has /v1 at the end.

root_agent = Agent(
    model=LiteLlm(model="openai/mistral-small3.1"),
    name="dice_agent",
    description=(
        "hello world agent that can roll a dice of 8 sides and check prime"
        " numbers."
    ),
    instruction="""
      You roll dice and answer questions about the outcome of the dice rolls.
    """,
    tools=[
        roll_die,
        check_prime,
    ],
)
export OPENAI_API_BASE=http://localhost:11434/v1
export OPENAI_API_KEY=anything
adk web

Debugging

You can see the request sent to the Ollama server by adding the following in your agent code just after the imports.

import litellm
litellm._turn_on_debug()

Look for a log line like the following:

Request Sent from LiteLLM:
curl -X POST \
http://localhost:11434/api/chat \
-d '{'model': 'mistral-small3.1', 'messages': [{'role': 'system', 'content': ...

Self-Hosted Endpoint (e.g., vLLM)

Python only

Tools such as vLLM allow you to host models efficiently and often expose an OpenAI-compatible API endpoint.

Setup:

  1. Deploy Model: Deploy your chosen model using vLLM (or a similar tool). Note the API base URL (e.g., https://your-vllm-endpoint.run.app/v1).
    • Important for ADK Tools: When deploying, ensure the serving tool supports and enables OpenAI-compatible tool/function calling. For vLLM, this might involve flags like --enable-auto-tool-choice and potentially a specific --tool-call-parser, depending on the model. Refer to the vLLM documentation on Tool Use.
  2. Authentication: Determine how your endpoint handles authentication (e.g., API key, bearer token).

    Integration Example:

    import subprocess
    from google.adk.agents import LlmAgent
    from google.adk.models.lite_llm import LiteLlm
    
    # --- Example Agent using a model hosted on a vLLM endpoint ---
    
    # Endpoint URL provided by your vLLM deployment
    api_base_url = "https://your-vllm-endpoint.run.app/v1"
    
    # Model name as recognized by *your* vLLM endpoint configuration
    model_name_at_endpoint = "hosted_vllm/google/gemma-3-4b-it" # Example from vllm_test.py
    
    # Authentication (Example: using gcloud identity token for a Cloud Run deployment)
    # Adjust this based on your endpoint's security
    try:
        gcloud_token = subprocess.check_output(
            ["gcloud", "auth", "print-identity-token", "-q"]
        ).decode().strip()
        auth_headers = {"Authorization": f"Bearer {gcloud_token}"}
    except Exception as e:
        print(f"警告:无法获取 gcloud 令牌 - {e}。端点可能未受保护或需要不同的认证。")
        auth_headers = None # 或适当处理错误
    
    agent_vllm = LlmAgent(
        model=LiteLlm(
            model=model_name_at_endpoint,
            api_base=api_base_url,
            # Pass authentication headers if needed
            extra_headers=auth_headers
            # Alternatively, if endpoint uses an API key:
            # api_key="YOUR_ENDPOINT_API_KEY"
        ),
        name="vllm_agent",
        instruction="You are a helpful assistant running on a self-hosted vLLM endpoint.",
        # ... other agent parameters
    )
    

Using Hosted & Tuned Models on Vertex AI

For enterprise-grade scalability, reliability, and integration with Google Cloud's MLOps ecosystem, you can use models deployed to Vertex AI Endpoints. This includes models from Model Garden or your own fine-tuned models.

Integration Method: Pass the full Vertex AI Endpoint resource string (projects/PROJECT_ID/locations/LOCATION/endpoints/ENDPOINT_ID) directly to the model parameter of LlmAgent.

Vertex AI Setup (Consolidated):

Ensure your environment is configured for Vertex AI:

  1. Authentication: Use Application Default Credentials (ADC):

    gcloud auth application-default login
    
  2. Environment Variables: Set your project and location:

    export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"
    export GOOGLE_CLOUD_LOCATION="YOUR_VERTEX_AI_LOCATION" # e.g., us-central1
    
  3. Enable Vertex Backend: Crucially, ensure the google-genai library targets Vertex AI:

    export GOOGLE_GENAI_USE_VERTEXAI=TRUE
    

Model Garden Deployments

Python only

You can deploy various open and proprietary models from the Vertex AI Model Garden to an endpoint.

Example:

from google.adk.agents import LlmAgent
from google.genai import types # For config objects

# --- Example Agent using a Llama 3 model deployed from Model Garden ---

# Replace with your actual Vertex AI Endpoint resource name
llama3_endpoint = "projects/YOUR_PROJECT_ID/locations/us-central1/endpoints/YOUR_LLAMA3_ENDPOINT_ID"

agent_llama3_vertex = LlmAgent(
    model=llama3_endpoint,
    name="llama3_vertex_agent",
    instruction="You are a helpful assistant based on Llama 3, hosted on Vertex AI.",
    generate_content_config=types.GenerateContentConfig(max_output_tokens=2048),
    # ... other agent parameters
)

Fine-tuned Model Endpoints

Python only

Deploying your fine-tuned models (whether based on Gemini or other architectures supported by Vertex AI) results in an endpoint that can be used directly.

Example:

from google.adk.agents import LlmAgent

# --- Example Agent using a fine-tuned model endpoint on Vertex AI ---
agent_finetuned_vertex = LlmAgent(
    # The full Vertex AI Endpoint resource string of your fine-tuned model
    model="projects/my-project/locations/us-central1/endpoints/1234567890",
    name="finetuned_vertex_agent",
    instruction="You are a helpful assistant running on Vertex AI.",
    # ... other agent parameters
)

Third-Party Models on Vertex AI (e.g., Anthropic Claude)

Some providers, like Anthropic, make their models available directly through Vertex AI.

Integration Method: Uses the direct model string (e.g., "claude-3-sonnet@20240229"), but requires manual registration within ADK.

Why Registration? ADK's registry automatically recognizes gemini-* strings and standard Vertex AI endpoint strings (projects/.../endpoints/...) and routes them via the google-genai library. For other model types used directly via Vertex AI (like Claude), you must explicitly tell the ADK registry which specific wrapper class (Claude in this case) knows how to handle that model identifier string with the Vertex AI backend.

Setup:

  1. Vertex AI Environment: Ensure the consolidated Vertex AI setup (ADC, Env Vars, GOOGLE_GENAI_USE_VERTEXAI=TRUE) is complete.

  2. Install Provider Library: Install the necessary client library configured for Vertex AI.

    pip install "anthropic[vertex]"
    
  3. Register Model Class: Add this code near the start of your application, before creating an agent using the Claude model string:

    # Required for using Claude model strings directly via Vertex AI with LlmAgent
    from google.adk.models.anthropic_llm import Claude
    from google.adk.models.registry import LLMRegistry
    
    LLMRegistry.register(Claude)
    

Example:

from google.adk.agents import LlmAgent
from google.adk.models.anthropic_llm import Claude # Import needed for registration
from google.adk.models.registry import LLMRegistry # Import needed for registration
from google.genai import types

# --- Register Claude class (do this once at startup) ---
LLMRegistry.register(Claude)

# --- Example Agent using Claude 3 Sonnet on Vertex AI ---

# Standard model name for Claude 3 Sonnet on Vertex AI
claude_model_vertexai = "claude-3-sonnet@20240229"

agent_claude_vertexai = LlmAgent(
    model=claude_model_vertexai, # Pass the direct string after registration
    name="claude_vertexai_agent",
    instruction="You are an assistant powered by Claude 3 Sonnet on Vertex AI.",
    generate_content_config=types.GenerateContentConfig(max_output_tokens=4096),
    # ... other agent parameters
)

Integration Method: Directly instantiate the provider-specific model class (e.g., com.google.adk.models.Claude) and configure it with a Vertex AI backend.

Why Direct Instantiation? The Java ADK's LlmRegistry primarily handles Gemini models by default. For third-party models like Claude on Vertex AI, you directly provide an instance of the ADK's wrapper class (e.g., Claude) to the LlmAgent. This wrapper class is responsible for interacting with the model via its specific client library, configured for Vertex AI.

Setup:

  1. Vertex AI Environment:

    • Ensure your Google Cloud project and region are correctly set up.
    • Application Default Credentials (ADC): Make sure ADC is configured correctly in your environment. This is typically done by running gcloud auth application-default login. The Java client libraries will use these credentials to authenticate with Vertex AI. Follow the Google Cloud Java documentation on ADC for detailed setup.
  2. Provider Library Dependencies:

    • Third-Party Client Libraries (Often Transitive): The ADK core library often includes the necessary client libraries for common third-party models on Vertex AI (like Anthropic's required classes) as transitive dependencies. This means you might not need to explicitly add a separate dependency for the Anthropic Vertex SDK in your pom.xml or build.gradle.
  3. Instantiate and Configure the Model: When creating your LlmAgent, instantiate the Claude class (or the equivalent for another provider) and configure its VertexBackend.

Example:

import com.anthropic.client.AnthropicClient;
import com.anthropic.client.okhttp.AnthropicOkHttpClient;
import com.anthropic.vertex.backends.VertexBackend;
import com.google.adk.agents.LlmAgent;
import com.google.adk.models.Claude; // ADK's wrapper for Claude
import com.google.auth.oauth2.GoogleCredentials;
import java.io.IOException;

// ... other imports

public class ClaudeVertexAiAgent {

    public static LlmAgent createAgent() throws IOException {
        // Model name for Claude 3 Sonnet on Vertex AI (or other versions)
        String claudeModelVertexAi = "claude-3-7-sonnet"; // Or any other Claude model

        // Configure the AnthropicOkHttpClient with the VertexBackend
        AnthropicClient anthropicClient = AnthropicOkHttpClient.builder()
            .backend(
                VertexBackend.builder()
                    .region("us-east5") // Specify your Vertex AI region
                    .project("your-gcp-project-id") // Specify your GCP Project ID
                    .googleCredentials(GoogleCredentials.getApplicationDefault())
                    .build())
            .build();

        // Instantiate LlmAgent with the ADK Claude wrapper
        LlmAgent agentClaudeVertexAi = LlmAgent.builder()
            .model(new Claude(claudeModelVertexAi, anthropicClient)) // Pass the Claude instance
            .name("claude_vertexai_agent")
            .instruction("You are an assistant powered by Claude 3 Sonnet on Vertex AI.")
            // .generateContentConfig(...) // Optional: Add generation config if needed
            // ... other agent parameters
            .build();

        return agentClaudeVertexAi;
    }

    public static void main(String[] args) {
        try {
            LlmAgent agent = createAgent();
            System.out.println("Successfully created agent: " + agent.name());
            // Here you would typically set up a Runner and Session to interact with the agent
        } catch (IOException e) {
            System.err.println("Failed to create agent: " + e.getMessage());
            e.printStackTrace();
        }
    }
}