
Introduction
In this article, I will describe how to implement persistent and isolated chat history for an AI travel agent using Spring AI. I will cover the importance of maintaining chat history, the reasons for persisting it, and the need for isolating conversation history per user. Additionally, I will provide a demo, an architecture overview, and implementation details.
This article builds on the previous one, AI Travel Agent using Spring AI. If you haven’t read it yet, I recommend starting there.
Why is chat history important?
Large Language Models (LLMs) are stateless by default. Each request is processed independently, with no built-in memory of previous interactions. This means that context must be explicitly provided on every request if we want the model to behave as if it “remembers” a conversation.
For a conversational application such as an AI travel agent, this context is not optional. Chat history is what allows the system to move beyond isolated question-answering and into a goal-oriented dialogue.
Chat history enables the AI to understand references to earlier messages, such as:
- Pronouns and implicit context (“Book that flight instead”)
- Follow-up questions (“What about cheaper options?”)
- Progressive refinement (“Actually, make it a direct flight”)
Without chat history, each user message must restate all prior context, leading to repetitive prompts and a poor user experience.
Travel planning is inherently iterative. A user might:
- Ask for destination ideas
- Narrow down dates and budget
- Compare flights
- Select hotels
- Adjust plans based on constraints
Persisted chat history allows the AI to reason across these steps and maintain a shared mental model of the trip as it evolves. This is critical for producing consistent, relevant, and personalized responses.
By keeping chat history, the AI can adapt to user preferences over the course of a conversation:
- Preferred destinations
- Preferences for dates
- Travel style (luxury vs. budget, solo vs. family)
- Preferences for flights and hotels
- Budget sensitivity
Even within a single session, this significantly improves response quality and makes the interaction feel natural rather than mechanical.
Below is an example conversation between a user and an AI travel agent when the chat history is not maintained:


Without chat history, each user message is processed independently, with no contextual information. As a result, in the example above, the agent is unable to remember car rental preferences and cannot book cars that match them.
Here is an example conversation between a user and an AI travel agent when the chat history is maintained:



With chat history, the AI can remember user preferences and book cars that match them.
Short-term memory vs. long-term memory
In this article, I will focus on chat history, which is a form of short-term memory. It stores the conversation history for a single user as a series of events. Each question and answer is stored, allowing the agent to access the context of the current conversation and provide relevant, contextual responses.
Long-term memory is information extracted from the conversation and stored in a structured format. It contains key information such as user preferences, facts, and knowledge. Long-term memory typically involves extraction and consolidation of information from conversations.
Long-term memory is outside the scope of this article. If time permits, I will cover it in a future article. Keep in mind that user preferences should be extracted from chat history and stored for future use.
Why persist chat history?
By default, Spring AI stores chat history in memory using InMemoryChatMemoryRepository. This is fine for development but is not suitable for production.
We need to persist the chat history to durable storage so that it can be shared across multiple instances of the agent chat service and survive restarts.
How to persist chat history?
Chat history is persisted and accessed using ChatMemoryRepository.
The ChatMemory abstraction manages chat memory and decides which messages to keep and which to remove.
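Spring AI auto-configures a ChatMemory bean backed by MessageWindowChatMemory, which keeps a sliding window of the most recent messages per conversation. If you want to control the window size yourself rather than rely on the auto-configured defaults, a bean along these lines can be defined (a sketch; the maxMessages value of 20 is illustrative, not taken from the project):

```java
import org.springframework.ai.chat.memory.ChatMemory;
import org.springframework.ai.chat.memory.ChatMemoryRepository;
import org.springframework.ai.chat.memory.MessageWindowChatMemory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ChatMemoryConfig {

    // Keeps only the most recent messages per conversation;
    // older messages are evicted from the window.
    @Bean
    public ChatMemory chatMemory(ChatMemoryRepository chatMemoryRepository) {
        return MessageWindowChatMemory.builder()
                .chatMemoryRepository(chatMemoryRepository)
                .maxMessages(20) // illustrative window size
                .build();
    }
}
```

Because the ChatMemoryRepository is injected, the same configuration works whether the repository is the in-memory default or the MongoDB-backed one.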
In my case, I used MongoDB as the persistent storage with MongoChatMemoryRepository.
The ChatClient creation remains unchanged and looks like this:
public AgentController(ChatClient.Builder chatClientBuilder, ToolCallbackProvider toolCallbackProvider, ChatMemory chatMemory) {
    this.chatClient = chatClientBuilder
            .defaultToolCallbacks(toolCallbackProvider)
            .defaultAdvisors(MessageChatMemoryAdvisor.builder(chatMemory).build())
            // ...
            .build();
}

I added a dependency for MongoDB chat memory in pom.xml, which autoconfigures MongoChatMemoryRepository and uses it as the implementation of ChatMemoryRepository, which is then used by ChatClient.
The Maven dependency looks like this:
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-model-chat-memory-repository-mongodb</artifactId>
</dependency>

To run the application locally, I configured a docker-compose.yaml file as follows:
services:
  mongo:
    image: 'mongo:8.2.4-noble'
    restart: always
    ports:
      - "27017:27017"
    environment:
      MONGO_INITDB_DATABASE: travel-agent-chat
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: secret

To automatically start MongoDB as a Docker container using Docker Compose, I also added a dependency on spring-boot-docker-compose:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-docker-compose</artifactId>
</dependency>

Whenever a user interacts with the agent, the chat history is persisted in MongoDB.
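With spring-boot-docker-compose on the classpath, Spring Boot derives the MongoDB connection settings from the compose file automatically. When pointing the application at an external MongoDB instance instead, the connection can be configured explicitly in application.properties (a sketch; the credentials and database name mirror the compose file above):

```properties
# Explicit connection for an externally managed MongoDB instance;
# not needed when spring-boot-docker-compose manages the container.
spring.data.mongodb.uri=mongodb://admin:secret@localhost:27017/travel-agent-chat?authSource=admin
```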
Let’s look at the MongoDB collection where chat history is stored:
$ mongosh -u admin -p secret localhost:27017
...
test> use travel-agent-chat
travel-agent-chat> db.ai_chat_memory.find()
[
  {
    _id: ObjectId('698262d6ae98b4a088ba03ae'),
    conversationId: '859bdab7-deef-4cef-90a6-3addda92c072',
    message: {
      content: 'My preference are economy cars.',
      type: 'USER',
      metadata: { messageType: 'USER' }
    },
    timestamp: ISODate('2026-02-03T21:04:22.833Z'),
    _class: 'org.springframework.ai.chat.memory.repository.mongo.Conversation'
  },
  {
    _id: ObjectId('698262d6ae98b4a088ba03af'),
    conversationId: '859bdab7-deef-4cef-90a6-3addda92c072',
    message: {
      content: 'Thanks for letting me know your preference for economy cars!',
      type: 'ASSISTANT',
      metadata: { messageType: 'ASSISTANT' }
    },
    timestamp: ISODate('2026-02-03T21:04:22.833Z'),
    _class: 'org.springframework.ai.chat.memory.repository.mongo.Conversation'
  },
  ...
]

The full source code of the agent is available on GitHub: https://github.com/dominikcebula/spring-ai-travel-agent/tree/main/agent
Why isolate the conversation history per user?
Without isolation, all users would share the same chat history, resulting in a single conversation history containing all user messages. This would cause conversation context to spill over into unrelated conversations.
As a result, the AI might confuse users’ preferences, leading to incorrect bookings and a poor user experience.
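The effect of keying memory by conversation id can be illustrated with a minimal, framework-free sketch (plain Java with a hypothetical ConversationStore class, not the Spring AI implementation): each conversation id maps to its own message list, so one user's messages never appear in another user's history.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of per-conversation isolation: messages are stored
// under their conversationId, so histories never mix.
public class ConversationStore {
    private final Map<String, List<String>> memory = new HashMap<>();

    public void add(String conversationId, String message) {
        memory.computeIfAbsent(conversationId, id -> new ArrayList<>()).add(message);
    }

    public List<String> history(String conversationId) {
        return memory.getOrDefault(conversationId, List.of());
    }

    public static void main(String[] args) {
        ConversationStore store = new ConversationStore();
        store.add("user-a", "I prefer economy cars.");
        store.add("user-b", "I prefer luxury cars.");
        // Each conversation sees only its own history:
        System.out.println(store.history("user-a")); // prints [I prefer economy cars.]
        System.out.println(store.history("user-b")); // prints [I prefer luxury cars.]
    }
}
```

Spring AI applies the same idea: the MongoDB documents shown earlier are all keyed by conversationId, and the repository only loads messages matching the id supplied with the request.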
How to implement conversation history isolation?
Implementing conversation history isolation requires changes on both the frontend and the agent side.
On the frontend, I store a conversationId as a UUID in local storage. Whenever a user opens the app, I check if there is a conversationId in local storage. If not, I generate a new random UUID and store it.
Whenever the /api/v1/agent endpoint is called, I pass the conversationId as a query parameter.
The code snippet below shows how this is implemented on the frontend:
const CONVERSATION_ID_KEY = 'travel_agent_conversation_id';

function getOrCreateConversationId(): string {
  let conversationId = localStorage.getItem(CONVERSATION_ID_KEY);
  if (!conversationId) {
    conversationId = crypto.randomUUID();
    localStorage.setItem(CONVERSATION_ID_KEY, conversationId);
  }
  return conversationId;
}

const conversationId = getOrCreateConversationId();

async function callAgent(userInput: string): Promise<string> {
  const response = await fetch(
    `${API_BASE_URL}/api/v1/agent?userInput=${encodeURIComponent(userInput)}&conversationId=${encodeURIComponent(conversationId)}`
  );
  if (!response.ok) {
    throw new Error(`API error: ${response.status}`);
  }
  return response.text();
}

Full source code of the above snippet is available on GitHub: https://github.com/dominikcebula/spring-ai-travel-agent/blob/main/agent-chat-ui/src/App.tsx
On the agent side, I receive conversationId as a query parameter along with the user input.
The conversationId is then used by advisorSpec to set the CONVERSATION_ID when using chatClient.
The code snippet below shows how this is implemented on the agent side:
@GetMapping("/agent")
public String generation(@RequestParam String userInput, @RequestParam UUID conversationId) {
    return chatClient.prompt()
            .user(userInput)
            .advisors(advisorSpec -> advisorSpec.param(ChatMemory.CONVERSATION_ID, conversationId))
            .call()
            .content();
}

Full source code of the above snippet is available on GitHub: https://github.com/dominikcebula/spring-ai-travel-agent/blob/main/agent/src/main/java/com/dominikcebula/spring/ai/agent/AgentController.java
Architecture
The diagram below shows the application architecture, updated to include MongoDB for persisting chat history.

Summary
In this article, I demonstrated how to implement persistent and isolated chat history for an AI travel agent using Spring AI. I explained why chat history is essential for conversational AI applications: LLMs are stateless by default, and without explicitly provided history they cannot maintain conversation context or remember user preferences.
I showed how to persist chat history using MongoDB with Spring AI’s MongoChatMemoryRepository, which ensures that conversation data survives application restarts and can be shared across multiple service instances. The implementation required adding the spring-ai-starter-model-chat-memory-repository-mongodb dependency and configuring MongoDB via Docker Compose.
I also covered how to implement conversation isolation by using a unique conversationId (UUID) per user session. On the frontend, this ID is stored in local storage and passed with each request. On the agent side, it is used with ChatMemory.CONVERSATION_ID to ensure each user’s conversation remains separate and private.
With these two mechanisms in place – persistent storage and conversation isolation – the AI travel agent can maintain context across interactions, remember user preferences within a session, and provide a coherent, personalized experience for each user.