Building a conversational chatbot with CrewAI, Groq, Chromadb, and Mem0
Update (08/01/2025): There is an updated version of this blog post: Building conversational chatbots with knowledge using CrewAI and Mem0.
Conversational chatbots are among the most common use cases for LLM-powered applications. On their own, LLMs are not that powerful, but given the ability to use tools, plan, and store memories, we get a recipe for very powerful AI-powered applications.
In this tutorial, I will show how to add memory to our LLM-powered chatbot applications. Memory allows an application to remember user preferences and settings, which makes it more interesting and useful when responding to questions. Our chatbot is going to use CrewAI as the agent framework, Groq for language processing, ChromaDB for vector storage, and Mem0 for memory management. This combination allows us to create a chatbot that can maintain context, learn from conversations, and provide more relevant responses over time. You can view the final chatbot code on GitHub.
Prerequisites:
- Basic knowledge of Python
- Python 3.10 or later installed
- pip package manager
- poetry package manager
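You can quickly verify that the tooling is in place before continuing (a quick sanity check, assuming a Unix-like shell):

```bash
python3 --version   # should report 3.10 or later
pip --version
poetry --version
```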
The first thing we need to do is install crewAI using the pip package manager. Open your terminal and run:
```bash
pip install crewai
```
This will install crewAI and give us access to the crewAI CLI tool, which we will use to create our conversational chatbot. The next step is to create our `crewai_conversational_chatbot` project using the CLI tool:
```bash
crewai create crewai_conversational_chatbot
```
This creates a crewAI project with a skeleton we can adjust to suit our needs. From the terminal, move into the newly created project root directory and install the necessary project dependencies.
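Assuming you kept the default project name from the previous step, that looks like:

```bash
cd crewai_conversational_chatbot
```

Then add and install the dependencies: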
```bash
poetry add mem0ai langchain-groq python-dotenv
poetry install
```
We are going to require two API keys for our project to function:
- `GROQ_API_KEY`: this will be used for the LLM inference
- `OPENAI_API_KEY`: for the ChromaDB embeddings in our `mem0` memory store
Before we proceed, go grab your API keys from both Groq and OpenAI and add them to your `.env` file:
```
# src/crewai_conversational_chatbot/.env
GROQ_API_KEY=gsk_xxxxxxxxxxxxxxxxxxx
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxx
```
In projects created with the crewAI CLI, agents and tasks are defined separately: agents live in `crewai_conversational_chatbot/config/agents.yaml` and tasks in `crewai_conversational_chatbot/config/tasks.yaml`. Let's go ahead and define both.
```yaml
# src/crewai_conversational_chatbot/config/agents.yaml
assistant:
  role: >
    Virtual Assistant
  goal: >
    To provide accurate, helpful, and engaging responses to a wide range of user queries,
    enhancing user experience and knowledge across various topics.
  backstory: >
    You are an advanced AI assistant with vast knowledge spanning multiple disciplines,
    designed to engage in diverse conversations and provide helpful information.
    Your primary function is to assist users by answering questions, offering explanations,
    and providing insights, adapting your communication style to suit different users and
    contexts. While you lack personal experiences or emotions, you're programmed to respond
    with empathy, use appropriate humor, and always strive to provide accurate, helpful,
    and tailored information, admitting when a topic is outside your knowledge base.
```
```yaml
# src/crewai_conversational_chatbot/config/tasks.yaml
assistant_task:
  description: >
    Respond to the user's message: {user_message}. Use the provided conversation history: {context}
  expected_output: >
    Your output should be a relevant, accurate, and engaging response that directly addresses
    the user's query or continues the conversation logically.
```
The next step is to configure the LLM that the chatbot is going to use. We are going to use the `llama-3.1-70b-versatile` model for the chatbot.
```python
# src/crewai_conversational_chatbot/crew.py
import os

from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from langchain_groq import ChatGroq

llm = ChatGroq(
    model="llama-3.1-70b-versatile",
    api_key=os.environ["GROQ_API_KEY"],
)
```
Now let’s wire up the chatbot crew:
```python
# src/crewai_conversational_chatbot/crew.py
@CrewBase
class CrewaiConversationalChatbotCrew:
    """CrewaiConversationalChatbot crew"""

    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    @agent
    def assistant(self) -> Agent:
        return Agent(
            config=self.agents_config["assistant"],
            llm=llm,
            verbose=False,
        )

    @task
    def assistant_task(self) -> Task:
        return Task(
            config=self.tasks_config["assistant_task"],
            agent=self.assistant(),
        )

    @crew
    def crew(self) -> Crew:
        """Creates the CrewaiConversationalChatbot crew"""
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=False,
        )
```
This code creates the chatbot `assistant` agent, the chatbot task, and the `crew`. Now that we have the agent, task, and crew created, we can move on to the next step and wire everything up together.
```python
# src/crewai_conversational_chatbot/main.py
from crewai_conversational_chatbot.crew import CrewaiConversationalChatbotCrew
from mem0 import Memory
from dotenv import load_dotenv

load_dotenv()

config = {
    "vector_store": {
        "provider": "chroma",
        "config": {
            "collection_name": "chatbot_memory",
            "path": "./chroma_db",
        },
    },
}

memory = Memory.from_config(config)
```
This code does two things:
- `load_dotenv()` loads the contents of the `.env` file into the environment so that our Python scripts have access to them
- `Memory.from_config(config)` creates the ChromaDB-backed `mem0` memory
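To make the memory API concrete before we use it in the conversation loop, here is a minimal sketch of the two `mem0` calls we rely on. The return shape of `search()` (a list of dicts with a `memory` key) is taken from this post's code; newer `mem0` releases may wrap results differently, so check the docs for your version.

```python
# Minimal sketch of the mem0 calls used in the chat loop below.
# The list-of-dicts return shape is assumed from this post's code;
# newer mem0 versions may return {"results": [...]} instead.
memory.add("User: I prefer short, bullet-point answers", user_id="Lennex")

results = memory.search(query="How does the user like answers formatted?", limit=3)
for item in results:
    print(item["memory"])  # the stored memory text
```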
It’s important to note that `mem0` will store embeddings for the memories it stores. This is where the `OPENAI_API_KEY` comes in: OpenAI is the default embedding provider used by ChromaDB and `mem0`. There are other providers you can use, e.g. one powered by Ollama.
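As an illustration, swapping the default embedder for a local Ollama model might look like the sketch below. The provider name and options are assumptions based on `mem0`'s configurable-embedder support, so verify them against the `mem0` docs for your version.

```python
# Hypothetical config: replace the default OpenAI embedder with a local
# Ollama embedding model. Provider/option names are assumptions based on
# mem0's embedder configuration; verify against the mem0 docs.
config = {
    "vector_store": {
        "provider": "chroma",
        "config": {
            "collection_name": "chatbot_memory",
            "path": "./chroma_db",
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text",
        },
    },
}

memory = Memory.from_config(config)
```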
Finally, let's implement our main function to run the chatbot. It implements the conversation loop:
```python
def run():
    while True:
        user_input = input("You: ")
        if user_input.lower() in ["exit", "quit", "bye"]:
            print("Chatbot: Goodbye! It was nice talking to you.")
            break

        # Add user input to memory
        memory.add(f"User: {user_input}", user_id="Lennex")

        # Retrieve relevant information from the vector store
        relevant_info = memory.search(query=user_input, limit=3)
        context = "\n".join(message["memory"] for message in relevant_info)

        inputs = {
            "user_message": user_input,
            "context": context,
        }
        response = CrewaiConversationalChatbotCrew().crew().kickoff(inputs=inputs)

        # Add chatbot response to memory
        memory.add(f"Assistant: {response}", user_id="Assistant")
        print(f"Assistant: {response}")
```
The function enters a loop where it takes user input, retrieves relevant information from the vector store, generates a response, and stores the conversation in memory. The chatbot searches for relevant memories and passes them into the chat context along with the `user_message` in the inputs.
To run your chatbot, go to the project root and run the command below from your terminal:
```bash
poetry run crewai_conversational_chatbot
```
That’s it. You can now start chatting with your AI assistant!
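If everything is wired up correctly, a session might look something like this (an invented exchange, purely to illustrate the memory recall):

```text
You: Hi! I'm Lennex and I love hiking.
Assistant: Nice to meet you, Lennex! Hiking is a great way to unwind...
You: What outdoor activity did I say I enjoy?
Assistant: You mentioned that you love hiking.
You: bye
Chatbot: Goodbye! It was nice talking to you.
```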
Congratulations! You've built a conversational chatbot that uses CrewAI, Groq for language processing, ChromaDB for vector storage, and Mem0 for memory management. This chatbot can maintain context across conversations and learn from past interactions to provide more relevant responses over time.
Continue to play around and develop the chatbot to add more features and see what you end up with. Remember to handle your API keys securely and respect Groq's usage limits and terms of service. You can see the final code on GitHub. If you have any questions, you can reach out to me on X (formerly Twitter) or LinkedIn. Happy coding!