Class Gemini

java.lang.Object
com.google.adk.models.BaseLlm
com.google.adk.models.Gemini

public class Gemini extends BaseLlm
Represents the Gemini Generative AI model.

This class provides methods for interacting with the Gemini model, including standard request-response generation and establishing persistent bidirectional connections.

  • Constructor Details

    • Gemini

      public Gemini(String modelName, com.google.genai.Client apiClient)
      Constructs a new Gemini instance.
      Parameters:
      modelName - The name of the Gemini model to use (e.g., "gemini-2.0-flash").
      apiClient - The genai Client instance for making API calls.
    • Gemini

      public Gemini(String modelName, String apiKey)
      Constructs a new Gemini instance with a Google Gemini API key.
      Parameters:
      modelName - The name of the Gemini model to use (e.g., "gemini-2.0-flash").
      apiKey - The Google Gemini API key.
    • Gemini

      public Gemini(String modelName, VertexCredentials vertexCredentials)
      Constructs a new Gemini instance with Vertex AI credentials.
      Parameters:
      modelName - The name of the Gemini model to use (e.g., "gemini-2.0-flash").
      vertexCredentials - The Vertex AI credentials to access the Gemini model.
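A minimal sketch of direct construction with the API-key constructor. The key string is a placeholder, and the surrounding class is only scaffolding for the example:

```java
import com.google.adk.models.Gemini;

public class GeminiConstructorExample {
  public static void main(String[] args) {
    // Placeholder key; supply a real Google Gemini API key in practice.
    Gemini llm = new Gemini("gemini-2.0-flash", "YOUR_API_KEY");
  }
}
```

The other two constructors follow the same shape, swapping the key for a `com.google.genai.Client` or a `VertexCredentials` instance.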
  • Method Details

    • builder

      public static Gemini.Builder builder()
      Returns a new Builder instance for constructing Gemini objects. Note that when building a Gemini object, at least one of apiKey, vertexCredentials, or an explicit apiClient must be set. If multiple are set, the explicit apiClient will take precedence.
      Returns:
      A new Gemini.Builder.
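A sketch of builder-based construction, assuming the builder exposes setters named after the fields described above (`modelName`, `apiKey`, `vertexCredentials`, `apiClient`); the key value is a placeholder:

```java
import com.google.adk.models.Gemini;

public class GeminiBuilderExample {
  public static void main(String[] args) {
    // At least one of apiKey, vertexCredentials, or apiClient must be set.
    // If several are set, an explicit apiClient takes precedence.
    Gemini llm = Gemini.builder()
        .modelName("gemini-2.0-flash")
        .apiKey("YOUR_API_KEY")
        .build();
  }
}
```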
    • generateContent

      public io.reactivex.rxjava3.core.Flowable<LlmResponse> generateContent(LlmRequest llmRequest, boolean stream)
      Description copied from class: BaseLlm
      Generates a single content from the given LLM request and its tools.
      Specified by:
      generateContent in class BaseLlm
      Parameters:
      llmRequest - The LLM request containing the input prompt and parameters.
      stream - A boolean flag indicating whether to stream the response.
      Returns:
      A Flowable of LlmResponses. For non-streaming calls, it will only yield one LlmResponse. For streaming calls, it may yield more than one LlmResponse, but all yielded LlmResponses should be treated as one content by merging their parts.
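A sketch of consuming the returned Flowable in streaming mode. The construction of the `LlmRequest` is elided (its builder details are not shown here), and `blockingForEach` is the standard RxJava 3 way to consume a Flowable synchronously in an example:

```java
import com.google.adk.models.Gemini;
import com.google.adk.models.LlmRequest;
import com.google.adk.models.LlmResponse;
import io.reactivex.rxjava3.core.Flowable;

public class GenerateContentExample {
  static void run(Gemini llm, LlmRequest request) {
    // stream = true: the Flowable may emit several partial LlmResponses.
    Flowable<LlmResponse> responses = llm.generateContent(request, true);
    responses.blockingForEach(resp -> {
      // Merge the parts of each emitted LlmResponse into one content,
      // since all yields together represent a single generated content.
    });
  }
}
```

With `stream = false`, the same Flowable yields exactly one LlmResponse, so no merging is needed.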
    • connect

      public BaseLlmConnection connect(LlmRequest llmRequest)
      Description copied from class: BaseLlm
      Creates a live connection to the LLM.
      Specified by:
      connect in class BaseLlm
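A sketch of opening a live connection. The usage of `BaseLlmConnection` beyond obtaining it is only hinted at in comments, since its methods are not documented in this section:

```java
import com.google.adk.models.BaseLlmConnection;
import com.google.adk.models.Gemini;
import com.google.adk.models.LlmRequest;

public class ConnectExample {
  static void run(Gemini llm, LlmRequest request) {
    // Establishes a persistent bidirectional connection to the model.
    BaseLlmConnection connection = llm.connect(request);
    // Send and receive messages over the connection, then close it
    // (see the BaseLlmConnection documentation for the available methods).
  }
}
```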