How to Build a Simple Chatbot with Langchain : Part 1 of 2

A Beginner's Guide to Streamlit

Building a fully functional customer care chatbot might sound intimidating at first, especially if you’re new to programming or AI. But don’t worry—with user-friendly frameworks like Streamlit and Langchain, the process is now easier and more fun than ever! 🎉✨

These powerful tools simplify the development journey, letting you focus on crafting your chatbot’s personality and features instead of getting stuck in complex coding challenges. 🛠️💡

In this blog, I’ll guide you step by step 🪜 through creating a simple yet effective customer care chatbot. By the end, you’ll have a chatbot capable of engaging in meaningful conversations and addressing user queries like a pro! 🚀

Let’s get started! 🎯

To begin, install the packages required to run the Python (.py) file:

streamlit==1.41.1
langchain
langchain-google-genai
python-dotenv
requests
streamlit-chat

To install these packages, copy these lines into a “requirements.txt“ file, then run

pip install -r requirements.txt

Alternatively, you can run the following in the terminal:

pip install streamlit==1.41.1 langchain langchain-google-genai python-dotenv requests streamlit-chat

Now create a Python file and start writing the code.

1 : Importing the dependencies

First, import all the required libraries into our Python file; without the proper imports, the code will throw errors at runtime.

Line 1-9

import streamlit as st
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.memory import ConversationBufferMemory
from langchain_google_genai import ChatGoogleGenerativeAI
from dotenv import load_dotenv
import os
from io import BytesIO
from streamlit_chat import message

Let’s discuss the meaning behind each line of code in this section:

  1. import streamlit as st

    • This imports the streamlit library and aliases it as st.

    • Streamlit is a Python library used for building interactive web applications. The alias st is conventionally used to access its methods and functions.

  2. from langchain.chains import LLMChain

    • This imports the LLMChain class from the langchain.chains module.

    • LLMChain is a core component in the LangChain library that connects language models (LLMs) with other functionality, such as prompting and memory management.

  3. from langchain.prompts import PromptTemplate

    • This imports the PromptTemplate class from the langchain.prompts module.

    • PromptTemplate is used to define and manage prompt templates that can dynamically generate prompts for the language model based on input variables.

  4. from langchain.memory import ConversationBufferMemory

    • This imports the ConversationBufferMemory class from the langchain.memory module.

    • ConversationBufferMemory stores and manages conversational context, allowing language models to maintain state and reference earlier parts of the conversation.

  5. from langchain_google_genai import ChatGoogleGenerativeAI

    • This imports the ChatGoogleGenerativeAI class from the langchain_google_genai module.

    • ChatGoogleGenerativeAI integrates Google's Generative AI (such as Gemini) with LangChain, enabling interaction with Google's advanced language models.

  6. from dotenv import load_dotenv

    • This imports the load_dotenv function from the dotenv library.

    • load_dotenv is used to load environment variables from a .env file into the program's environment. This is useful for managing sensitive information, such as API keys or configuration settings.

  7. import os

    • This imports the os module from Python's standard library.

    • The os module allows interaction with the operating system, such as accessing environment variables, reading/writing files, and managing paths.

  8. from io import BytesIO

    • This imports the BytesIO class from Python's io module.

    • BytesIO is used to work with in-memory binary streams, enabling tasks like processing files or binary data without writing to disk.

  9. from streamlit_chat import message

    • This imports the message function from the streamlit_chat library.

    • streamlit_chat is a Streamlit-based library for creating chat-like interfaces. The message function is used to display chat bubbles or messages in the Streamlit app.

Each of these imports brings in specific functionality that will be used in building an interactive chat application powered by a language model, leveraging features like prompt management, memory, and AI model integration.

Line 11-12 : Load .env

# Load environment variables
load_dotenv()

Call the load_dotenv() function to connect the code with the environment file (.env), which contains the API keys necessary for the chatbot. Here we are using the Google Gemini API, so we need to paste the key into the .env file in the following format:

GOOGLE_API_KEY=AIzaSyC-t3N_9vRiOCBg1hMvxxxxxxxxxxxxxxx
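
Before wiring up the real library, it helps to see what load_dotenv() does conceptually: read KEY=VALUE lines from a file into os.environ. The sketch below is a simplified plain-Python stand-in with a made-up placeholder key (python-dotenv itself handles quoting, comments, and more edge cases):

```python
import os
import tempfile

# Simplified illustration of load_dotenv(): read KEY=VALUE lines
# from a .env file into os.environ (without overriding existing values).
def load_env_file(path):
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())

# Demo with a temporary file and a placeholder key (not a real credential):
os.environ.pop("GOOGLE_API_KEY", None)
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("GOOGLE_API_KEY=dummy-key-for-demo\n")
    env_path = f.name

load_env_file(env_path)
print(os.environ["GOOGLE_API_KEY"])  # dummy-key-for-demo
os.unlink(env_path)
```

In the real app, load_dotenv() does this for you, and os.environ["GOOGLE_API_KEY"] then retrieves the key when initializing the model.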


2: Initialization + Creation

In the second part of the code, we will initialize the Large Language Model, set up the memory the LLM will use, and define a personality for our chatbot in the form of a prompt template.

Prompt Template: A prompt template in LangChain is a tool that helps create specific questions or statements for a computer program to use when talking to people. It includes placeholders where you can insert different pieces of information, allowing the program to generate responses based on the context or input it receives. This makes the interaction more relevant and engaging.

Memory: Memory is how the bot remembers the topic we are discussing; it stores our initial query and past turns so the bot can generate only relevant answers.

Line 14-19 : Initializing the LLM

llm = ChatGoogleGenerativeAI(
    temperature=0.7,
    model="gemini-1.5-flash",
    google_api_key=os.environ["GOOGLE_API_KEY"]
)
  • ChatGoogleGenerativeAI: This initializes a connection to Google's Generative AI using LangChain's interface.

  • temperature=0.7: The temperature parameter controls the randomness of the AI's output. Lower values produce more deterministic responses, while higher values make responses more creative.

  • model="gemini-1.5-flash": Specifies the Google AI model to use. We use this model because it is free to use and has generous token limits.

  • google_api_key=os.environ["GOOGLE_API_KEY"]: Retrieves the Google API key from the environment variables. This key authenticates requests to the Google Generative AI service.

Line 21-22 : Initializing Conversation Memory

# Initialize conversation memory
memory = ConversationBufferMemory()
  • ConversationBufferMemory: This creates a memory object to store the conversation's context.

  • This memory keeps track of past interactions (e.g., user queries and AI responses), enabling the AI to maintain continuity in the dialogue.
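
Conceptually, a conversation buffer memory is just a running transcript that gets rendered into the {chat_history} slot of the prompt. Here is a plain-Python sketch of the idea (LangChain's ConversationBufferMemory does this with more structure; the class and sample dialogue below are invented for illustration):

```python
# Plain-Python sketch of a conversation buffer memory:
# append each exchange, then render the whole history as one string.
class SimpleBufferMemory:
    def __init__(self):
        self.turns = []  # list of (speaker, text) tuples

    def save_context(self, user_text, ai_text):
        self.turns.append(("User", user_text))
        self.turns.append(("Agent", ai_text))

    def load_history(self):
        return "\n".join(f"{speaker}: {text}" for speaker, text in self.turns)

memory_demo = SimpleBufferMemory()
memory_demo.save_context("My phone won't charge.", "Have you tried another cable?")
print(memory_demo.load_history())
# User: My phone won't charge.
# Agent: Have you tried another cable?
```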

Line 24-39 : Defining the Prompt Template

#Define the prompt template 
prompt = PromptTemplate(
    input_variables=["chat_history", "user_input", "product_problem"],
    template=(
        "You are a highly skilled customer care representative dedicated to resolving users' product-related issues. "
        "The user has described the following problem with their product: {product_problem}. "
        "Provide clear and practical solutions to address their concerns, including troubleshooting steps, potential timelines for resolution, warranty information, and any additional support they might need. "
        "Feel free to invent plausible details where necessary to offer a seamless customer service experience. "
        "Engage with the user in a polite, professional, and conversational tone, ensuring their satisfaction. "
        "Ensure that you give small answers in short points, that is not too long to read "
        "\n\n"
        "Chat History:\n{chat_history}\n\n"
        "User: {user_input}\n\n"
        "Agent:"
    )
)
  • PromptTemplate: Used to define a structured prompt that the AI will follow.

  • input_variables: These placeholders (chat_history, user_input, product_problem) will be dynamically replaced with actual values during execution.

  • template: Contains the text prompt used to instruct the AI. Key aspects:

    • Context: Informs the AI that it acts as a customer care representative.

    • Specific Instructions:

      • Solve product-related issues.

      • Include details like troubleshooting steps, timelines, warranty info, and additional support.

      • Write in a polite, professional, and conversational tone.

      • Keep responses concise and easy to read, formatted in short points.

    • Dynamic Input:

      • {product_problem}: Specific product issue described by the user.

      • {chat_history}: Conversation context so far.

      • {user_input}: The user's latest message.
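
PromptTemplate substitution works much like Python's built-in str.format(). A plain-Python illustration of how the three placeholders get filled (the sample values are invented for the demo):

```python
# An abbreviated version of the prompt, filled the way PromptTemplate
# fills its input_variables at execution time.
template = (
    "The user has described the following problem with their product: {product_problem}.\n\n"
    "Chat History:\n{chat_history}\n\n"
    "User: {user_input}\n\n"
    "Agent:"
)

filled = template.format(
    product_problem="laptop overheats",
    chat_history="User: Hi\nAgent: Hello! How can I help?",
    user_input="It shuts down after 10 minutes.",
)
print(filled)
```

Every time the chain runs, the same template is re-filled with the latest history and input, which is how the bot stays on topic.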

Line 41-45 : Creating the Chain

chain = LLMChain(
    llm=llm,
    prompt=prompt
)
  • LLMChain: Combines the language model (llm) and the prompt template (prompt) into a processing pipeline.

  • Purpose:

    • Takes user input and memory context.

    • Applies the prompt template.

    • Generates a response using the AI model.

  • This chain handles the interaction between the AI and user in a structured and context-aware manner.
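
Stripped to its essence, an LLMChain is "format the prompt, then call the model". The sketch below shows that pipeline with a stand-in model function (a real run would call Gemini instead of the invented fake_llm):

```python
# Conceptual sketch of an LLMChain: template in, model response out.
def fake_llm(prompt_text):
    # Placeholder for the real model call; returns a canned reply.
    return "Please try restarting the device first."

def run_chain(llm_fn, template, **inputs):
    prompt_text = template.format(**inputs)   # apply the prompt template
    return llm_fn(prompt_text)                # generate a response

reply = run_chain(
    fake_llm,
    "Problem: {product_problem}\nUser: {user_input}\nAgent:",
    product_problem="router keeps disconnecting",
    user_input="What should I do?",
)
print(reply)  # Please try restarting the device first.
```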


3 : Taking input with proper configuration

Now that the LLM and LangChain configuration is complete, we set up our UI: state management, input fields, submit buttons, and a view that handles the back-and-forth chat history, giving us an efficient chatbot with minimal resources. Along the way, let’s explore how easy Streamlit is to use as a UI framework for single-page Python apps.

Line 48: Configure the Page

st.set_page_config(page_title="Customer Care Chatbot", layout="centered")
  • st.set_page_config: Configures the Streamlit app’s page settings.

    • page_title="Customer Care Chatbot": Sets the title displayed in the browser tab.

    • layout="centered": Centers the app’s layout on the screen.

Line 49-56 : Add the Title and Introduction

st.title("Customer Care Chatbot")
st.write(
    """
    **Disclaimer**  
    We appreciate your engagement! Please note, this bot is designed to assist with product-related issues.  
    Type your queries below to get started.
    """
)
  • st.write: Adds a disclaimer and introduction in Markdown format.

    • **Disclaimer**: Renders the word "Disclaimer" in bold.

Line 58-62 : Initialize the session state

#initialize session state
if "chat_history" not in st.session_state:
    st.session_state["chat_history"] = []
if "product_problem" not in st.session_state:
    st.session_state["product_problem"] = ""
  • st.session_state: A Streamlit feature for storing persistent data across user interactions (like browser refreshes).

  • This line checks if chat_history (a list for storing the conversation history) exists in session_state. If not, it initializes it as an empty list.

  • Similarly, it checks if product_problem (a string for storing the user's described issue) exists in session_state. If not, it initializes it as an empty string.
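
st.session_state behaves much like a Python dict that survives reruns of the script. A plain-dict sketch of the initialization pattern used above (the dict stands in for st.session_state):

```python
# Stand-in for st.session_state: a dict that persists across "reruns".
session_state = {}

def init_state(state):
    # Only create keys that don't exist yet, so reruns don't wipe data.
    if "chat_history" not in state:
        state["chat_history"] = []
    if "product_problem" not in state:
        state["product_problem"] = ""

init_state(session_state)
init_state(session_state)  # a second "rerun" does not reset anything
session_state["product_problem"] = "screen flickers"
init_state(session_state)  # still preserved after another "rerun"
print(session_state["product_problem"])  # screen flickers
```

This guarded-initialization pattern is why the chatbot keeps its history even though Streamlit re-executes the whole script on every interaction.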

Line 65-68 : Input for Product Problem

if not st.session_state["product_problem"]:
    st.subheader("Step 1: Describe Your Problem")
    product_problem_input = st.text_area("What issue are you facing with your product?")
    if st.button("Submit Problem"):
  • if not st.session_state["product_problem"]: Checks if the product_problem field is empty.

    • If empty, prompts the user to describe their issue.
  • st.subheader("Step 1: Describe Your Problem"): Displays a subheader to guide the user to the first step.

  • st.text_area: Adds a multi-line text input field where users can describe their product issue.

    • The label of this text area is "What issue are you facing with your product?"
  • st.button: Adds a button labeled "Submit Problem". When clicked, it triggers the subsequent block of code.

Line 69-73 : Handle the Submitted Problem

        if product_problem_input.strip():
            st.session_state["product_problem"] = product_problem_input.strip()
            st.success(f"Problem noted: {st.session_state['product_problem']}")
        else:
            st.warning("Please describe your problem.")
  • product_problem_input.strip(): Checks if the user entered a non-empty string (removing extra spaces).

  • If valid:

    • st.session_state["product_problem"]: Stores the user's problem description in the session state.

    • st.success: Displays a success message confirming the saved problem description.

  • If the input is empty, st.warning displays a warning message prompting the user to provide a description.

Line 74-75 : Display Saved Problem

else:
    st.subheader(f"Problem: {st.session_state['product_problem']}")
  • else: If a product_problem already exists in the session state, it displays it as a subheader labeled "Problem".

    • This ensures that users see the problem they’ve entered, even on subsequent interactions.
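
The two-step flow above can be sketched as a tiny state machine: show Step 1 until a problem is saved, then show the saved problem on every later run. The function and sample inputs below are invented to illustrate the branching, not part of the app's code:

```python
# Sketch of the UI branching: which view renders on a given run?
def render_step(state, submitted_text=None):
    if not state["product_problem"]:
        if submitted_text and submitted_text.strip():
            state["product_problem"] = submitted_text.strip()
            return f"Problem noted: {state['product_problem']}"
        return "Step 1: Describe Your Problem"
    return f"Problem: {state['product_problem']}"

state = {"product_problem": ""}
print(render_step(state))                       # Step 1: Describe Your Problem
print(render_step(state, "  battery drains "))  # Problem noted: battery drains
print(render_step(state))                       # Problem: battery drains
```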

And that’s a wrap on the first part of creating your own customer care chatbot! 🎉

In this guide, we covered the essentials: installing the necessary packages, importing dependencies, and setting up your environment. 🛠️ You now know how to initialize the language model, manage conversation memory, design a smart prompt template, and build a processing chain to power your chatbot. 🔗✨

We also explored crafting an engaging user interface with Streamlit, managing session states 🗂️, and handling user inputs like a pro. 💬 You're well on your way to delivering seamless, AI-driven customer support!

Hop over to the second part of this article, where we’ll dive deeper and share the full code to take your chatbot to completion!


https://candyman5757.hashnode.dev/how-to-build-a-simple-chatbot-with-langchain-part-2-of-2