
    Getting Started with LangChain: Building Your First AI Agent

    A practical tutorial on building production-ready AI agents using LangChain and GPT-4. Learn how to create intelligent assistants that can reason, use tools, and maintain context.

    Priya Sharma

    AI/ML Lead

    February 8, 2026
    5 min read

    Introduction

    AI agents are transforming how we build intelligent applications. With LangChain and GPT-4, you can create agents that don't just answer questions; they can reason, use tools, and solve complex problems autonomously.

    In this tutorial, we'll build a practical AI agent that can:

    • 🔍 Search the web for information
    • 🧮 Perform calculations
    • 💾 Remember conversation context
    • 🎯 Chain multiple tools together

    Prerequisites

    Before we start, make sure you have:

    # Python 3.9 or higher
    python --version
    
    # Install required packages
    # (google-search-results powers the SerpAPI search tool used below)
    pip install langchain openai python-dotenv google-search-results
    

    You'll also need an OpenAI API key. Get one from platform.openai.com.

    Understanding LangChain Agents

    LangChain agents work by:

    1. Receiving a task from the user
    2. Reasoning about which tools to use
    3. Executing the tools in sequence
    4. Synthesizing results into a final answer

    Think of it as giving GPT-4 a toolbox and letting it decide which tools to use.
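
The loop above can be sketched in plain Python. This is a conceptual illustration, not LangChain's actual implementation: `llm_decide` stands in for the model's tool-selection step, and `tools` maps tool names to callables.

```python
def agent_loop(task, llm_decide, tools, max_steps=5):
    """Conceptual agent loop: ask the LLM what to do, run the tool,
    feed the observation back, repeat until a final answer."""
    scratchpad = []  # history of (tool_name, observation) pairs
    for _ in range(max_steps):
        decision = llm_decide(task, scratchpad)  # LLM chooses the next step
        if decision["action"] == "final_answer":
            return decision["answer"]
        # Execute the chosen tool and record the result for the next turn
        observation = tools[decision["action"]](decision["input"])
        scratchpad.append((decision["action"], observation))
    return "Stopped: max steps reached"
```

Real agents wrap this loop with prompt templates and output parsing, but the control flow is the same.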

    Building Your First Agent

    Step 1: Set Up Your Environment

    Create a .env file:

    OPENAI_API_KEY=your_api_key_here
    

    Step 2: Import Dependencies

    from langchain.agents import initialize_agent, AgentType, Tool
    from langchain.llms import OpenAI
    from langchain.memory import ConversationBufferMemory
    from langchain.utilities import SerpAPIWrapper
    import os
    from dotenv import load_dotenv
    
    load_dotenv()
    

    Step 3: Define Tools

    Tools are functions your agent can call. Let's create a few:

    # Calculator tool
    def calculator(expression: str) -> str:
        """Evaluates mathematical expressions"""
        try:
            # Warning: eval executes arbitrary Python. Restrict or
            # sandbox input before using this in production.
            result = eval(expression)
            return f"The result is: {result}"
        except Exception as e:
            return f"Error: {str(e)}"
    
    # Search tool (reads your SerpAPI key from the SERPAPI_API_KEY env var)
    search = SerpAPIWrapper()
    
    # Define tools list
    tools = [
        Tool(
            name="Calculator",
            func=calculator,
            description="Useful for mathematical calculations. Input should be a valid Python expression."
        ),
        Tool(
            name="Search",
            func=search.run,
            description="Useful for finding current information on the internet."
        )
    ]
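
Because the calculator above relies on eval, a hardened variant is worth considering before production. One approach (a sketch, not part of the original tutorial) is to parse the expression with the standard-library ast module and allow only arithmetic nodes, so arbitrary code can never run:

```python
import ast
import operator

# Whitelist of permitted operations; extend as your use case requires
ALLOWED = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def _safe_eval(node):
    if isinstance(node, ast.Expression):
        return _safe_eval(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in ALLOWED:
        return ALLOWED[type(node.op)](_safe_eval(node.left), _safe_eval(node.right))
    if isinstance(node, ast.UnaryOp) and type(node.op) in ALLOWED:
        return ALLOWED[type(node.op)](_safe_eval(node.operand))
    # Anything else (function calls, attribute access, ...) is rejected
    raise ValueError("Unsupported expression")

def safe_calculator(expression: str) -> str:
    """Drop-in replacement for the eval-based calculator tool."""
    try:
        result = _safe_eval(ast.parse(expression, mode="eval"))
        return f"The result is: {result}"
    except Exception as e:
        return f"Error: {e}"
```

You can pass `safe_calculator` as the `func` of the Calculator tool in place of the eval-based version.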
    

    Step 4: Initialize the Agent

    # Initialize LLM (GPT-4 is a chat model, so use the chat wrapper)
    from langchain.chat_models import ChatOpenAI
    
    llm = ChatOpenAI(temperature=0, model="gpt-4")
    
    # Add memory for context
    memory = ConversationBufferMemory(
        memory_key="chat_history",
        return_messages=True
    )
    
    # Create agent
    agent = initialize_agent(
        tools=tools,
        llm=llm,
        agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
        memory=memory,
        verbose=True  # See the agent's reasoning
    )
    

    Step 5: Run Your Agent

    # Example 1: Simple calculation
    response = agent.run("What is 25 * 47 + 123?")
    print(response)
    
    # Example 2: Web search
    response = agent.run("What's the current price of Bitcoin?")
    print(response)
    
    # Example 3: Multi-step reasoning
    response = agent.run(
        "Find the population of Tokyo and calculate how many "
        "people that would be if it increased by 15%"
    )
    print(response)
    

    Advanced Features

    Custom Tools

    Create domain-specific tools for your use case:

    def get_weather(location: str) -> str:
        """Fetches weather for a location"""
        # In production, call a real weather API
        return f"Weather in {location}: Sunny, 25°C"
    
    weather_tool = Tool(
        name="Weather",
        func=get_weather,
        description="Get current weather for a location"
    )
    

    Prompt Engineering

    Customize your agent's behavior with system prompts:

    from langchain.agents import ZeroShotAgent
    
    prefix = """You are a helpful AI assistant specialized in data analysis.
    You have access to the following tools:"""
    
    suffix = """Begin! Remember to be precise and cite your sources.
    
    {chat_history}
    Question: {input}
    {agent_scratchpad}"""
    
    prompt = ZeroShotAgent.create_prompt(
        tools,
        prefix=prefix,
        suffix=suffix,
        input_variables=["input", "chat_history", "agent_scratchpad"]
    )
    

    Error Handling

    Make your agent robust:

    from langchain.callbacks import get_openai_callback
    
    def run_agent_safely(query: str):
        try:
            with get_openai_callback() as cb:
                response = agent.run(query)
                print(f"Tokens used: {cb.total_tokens}")
                print(f"Cost: ${cb.total_cost:.4f}")
                return response
        except Exception as e:
            return f"Agent error: {str(e)}"
    

    Production Best Practices

    1. Rate Limiting

    from langchain.llms import OpenAI
    
    llm = OpenAI(
        temperature=0,
        max_retries=3,       # Retry on rate-limit errors with backoff
        request_timeout=30   # Fail fast on hung requests
    )
    

    2. Caching

    Save money by caching responses:

    from langchain.cache import InMemoryCache
    import langchain
    
    langchain.llm_cache = InMemoryCache()
    

    3. Monitoring

    Track agent performance:

    import logging
    import time
    
    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger(__name__)
    
    def monitored_agent_run(query: str):
        logger.info(f"Agent query: {query}")
        start_time = time.time()
        
        response = agent.run(query)
        
        duration = time.time() - start_time
        logger.info(f"Response time: {duration:.2f}s")
        
        return response
    

    Common Pitfalls

    1. Tool Description Quality

    Bad: "A tool for searching"
    Good: "Searches the internet for current information. Use when you need real-time data or recent events."

    2. Infinite Loops

    Agents can get stuck. Set max iterations:

    agent = initialize_agent(
        tools=tools,
        llm=llm,
        agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
        max_iterations=5,  # Prevent infinite loops
        early_stopping_method="generate"
    )
    

    3. Cost Management

    GPT-4 is expensive. Use GPT-3.5 for development:

    from langchain.chat_models import ChatOpenAI
    
    # Development: cheaper and faster iteration
    llm = ChatOpenAI(model="gpt-3.5-turbo")
    
    # Production: stronger reasoning
    llm = ChatOpenAI(model="gpt-4")
    

    Real-World Use Cases

    We've built LangChain agents for:

    • Customer Support: Automated ticket resolution with 70% accuracy
    • Data Analysis: Natural language queries on databases
    • Content Generation: SEO-optimized blog posts with research
    • Code Review: Automated PR analysis and suggestions

    Next Steps

    Now that you've built your first agent, try:

    1. Add more tools: Database queries, API calls, file operations
    2. Improve prompts: Fine-tune for your specific domain
    3. Add guardrails: Content filtering, safety checks
    4. Scale up: Deploy with FastAPI or Flask
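
For the scale-up step, the shape of the service can be sketched with only the standard library (FastAPI would be the more idiomatic production choice; `fake_agent_run` below is a hypothetical stand-in for the tutorial's agent.run):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def fake_agent_run(query: str) -> str:
    # Placeholder for agent.run(query) from the tutorial
    return f"Answer to: {query}"

class AgentHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body: {"query": "..."}
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        answer = fake_agent_run(payload.get("query", ""))
        body = json.dumps({"response": answer}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

def serve(port: int = 8000):
    HTTPServer(("", port), AgentHandler).serve_forever()
```

In a real deployment you would add authentication, request validation, and per-user memory rather than a single shared agent.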

    Conclusion

    LangChain makes building AI agents accessible and powerful. With the right tools and prompts, you can create intelligent assistants that solve real business problems.

    Want to build a custom AI agent for your business? Contact us - we've built dozens of production AI systems and can help you succeed.

