Interface LLMProvider

All Known Implementing Classes:
LangChain4JLLMProvider, SpringAILLMProvider

public interface LLMProvider
Framework-agnostic interface for Large Language Model (LLM) providers. This interface enables AI-powered components to communicate with LLMs without being tied to a specific implementation. Implementations are responsible for managing conversation memory, streaming responses, processing vendor-specific tool annotations, and handling file attachments.

Use the from(...) factory methods to create a provider from a vendor-specific model or client object:

 // Spring AI
 LLMProvider provider = LLMProvider.from(chatModel);

 // LangChain4j
 LLMProvider provider = LLMProvider.from(streamingChatModel);
 
Author:
Vaadin Ltd.
  • Nested Class Summary

    Nested Classes
    Modifier and Type
    Interface
    Description
    static interface
    LLMProvider.LLMRequest
    Represents a request to the LLM containing all necessary context, configuration, and tools.
  • Method Summary

    Modifier and Type
    Method
    Description
    static LangChain4JLLMProvider
    from(dev.langchain4j.model.chat.ChatModel chatModel)
    Creates an LLMProvider from a LangChain4j ChatModel.
    static LangChain4JLLMProvider
    from(dev.langchain4j.model.chat.StreamingChatModel streamingChatModel)
    Creates an LLMProvider from a LangChain4j StreamingChatModel.
    static SpringAILLMProvider
    from(org.springframework.ai.chat.client.ChatClient chatClient)
    Creates an LLMProvider from a Spring AI ChatClient.
    static SpringAILLMProvider
    from(org.springframework.ai.chat.model.ChatModel chatModel)
    Creates an LLMProvider from a Spring AI ChatModel.
    default void
    setHistory(List<ChatMessage> history, Map<String,List<AIAttachment>> attachmentsByMessageId)
    Restores the provider's conversation memory from a list of chat messages with their associated attachments.
    reactor.core.publisher.Flux<String>
    stream(LLMProvider.LLMRequest request)
    Streams a response from the LLM based on the provided request.
  • Method Details

    • from

      static SpringAILLMProvider from(org.springframework.ai.chat.model.ChatModel chatModel)
      Creates an LLMProvider from a Spring AI ChatModel.

      The provider manages conversation memory internally. Streaming is enabled by default and can be toggled via SpringAILLMProvider.setStreaming(boolean).

      Parameters:
      chatModel - the Spring AI chat model, not null
      Returns:
      a new provider instance, never null
      Throws:
      NullPointerException - if chatModel is null
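      For example, a sketch of creating a provider and switching it to blocking responses. The chatModel instance is assumed to come from the application's Spring AI configuration (for example, an auto-configured model injected by Spring Boot):

      ```java
      // chatModel: an org.springframework.ai.chat.model.ChatModel, assumed to
      // be supplied by the application (e.g. injected by Spring Boot).
      SpringAILLMProvider provider = LLMProvider.from(chatModel);

      // Streaming is enabled by default; disable it to receive each
      // response as a single block instead of token-by-token.
      provider.setStreaming(false);
      ```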
    • from

      static SpringAILLMProvider from(org.springframework.ai.chat.client.ChatClient chatClient)
      Creates an LLMProvider from a Spring AI ChatClient.

      Use this when the ChatClient is pre-configured with custom advisors or externally managed memory. Note that providers created from a ChatClient do not support history restoration via setHistory(List, Map) because the memory is managed externally.

      Parameters:
      chatClient - the Spring AI chat client, not null
      Returns:
      a new provider instance, never null
      Throws:
      NullPointerException - if chatClient is null
    • from

      static LangChain4JLLMProvider from(dev.langchain4j.model.chat.StreamingChatModel streamingChatModel)
      Creates an LLMProvider from a LangChain4j StreamingChatModel.

      The provider manages conversation memory internally. Responses are streamed token-by-token.

      Parameters:
      streamingChatModel - the LangChain4j streaming chat model, not null
      Returns:
      a new provider instance, never null
      Throws:
      NullPointerException - if streamingChatModel is null
    • from

      static LangChain4JLLMProvider from(dev.langchain4j.model.chat.ChatModel chatModel)
      Creates an LLMProvider from a LangChain4j ChatModel.

      The provider manages conversation memory internally. Responses are returned as a single block (non-streaming).

      Parameters:
      chatModel - the LangChain4j chat model, not null
      Returns:
      a new provider instance, never null
      Throws:
      NullPointerException - if chatModel is null
    • stream

      reactor.core.publisher.Flux<String> stream(LLMProvider.LLMRequest request)
      Streams a response from the LLM based on the provided request. This method returns a reactive stream that emits response tokens as they become available from the LLM. The provider manages conversation history internally, so each call to this method adds to the ongoing conversation context.
      Parameters:
      request - the LLM request containing user message, system prompt, attachments, and tools, not null
      Returns:
      a Flux stream that emits response tokens as strings, never null
      Throws:
      NullPointerException - if request is null
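      A minimal consumption sketch, assuming an LLMProvider.LLMRequest has been built by the calling component (request construction is not shown in this interface), and using hypothetical UI callbacks to illustrate where each signal would be handled:

      ```java
      // provider: an LLMProvider; request: an LLMProvider.LLMRequest supplied
      // by the calling component (assumed to exist).
      reactor.core.publisher.Flux<String> tokens = provider.stream(request);

      tokens.subscribe(
              token -> messageView.appendText(token),  // hypothetical UI callback: render each token
              error -> messageView.showError(error),   // hypothetical UI callback: surface failures
              () -> messageView.markComplete());       // hypothetical UI callback: finalize the message
      ```

      Because the provider appends each call to its internal conversation memory, subscribing more than once to requests for the same conversation should be done sequentially rather than concurrently.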
    • setHistory

      default void setHistory(List<ChatMessage> history, Map<String,List<AIAttachment>> attachmentsByMessageId)
      Restores the provider's conversation memory from a list of chat messages with their associated attachments. Any existing memory is cleared before the new history is applied.

      Providers that support setting chat history should override this method.

      This method must not be called while a streaming response is in progress.

      Parameters:
      history - the list of chat messages to restore, not null
      attachmentsByMessageId - a map from ChatMessage.messageId() to the list of attachments for that message, not null
      Throws:
      NullPointerException - if any argument is null
      UnsupportedOperationException - if this provider does not support chat history restoration
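      For illustration, restoring a short conversation might look like the following sketch. The ChatMessage and AIAttachment instances (userMessage, assistantMessage, imageAttachment) are assumed to have been reconstructed from persisted data; how they are built is outside this interface:

      ```java
      // Messages in conversation order (assumed instances).
      List<ChatMessage> history = List.of(userMessage, assistantMessage);

      // Attachments keyed by ChatMessage.messageId(); messages without
      // attachments can simply be absent from the map.
      Map<String, List<AIAttachment>> attachments =
              Map.of(userMessage.messageId(), List.of(imageAttachment));

      // Clears any existing memory, then applies the restored history.
      provider.setHistory(history, attachments);
      ```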