DEV Community

vaasav kumar
From LLMs to Agents: Build Smart AI Systems with Tools in LangChain

In previous blogs, we learned how to:

  • work with multiple LLM providers
  • use PromptTemplate
  • build a Streamlit web app

Now let’s take a big step forward.

πŸ‘‰ Instead of just generating text, we will build AI Agents that can use tools.

Tool β†’ A function the AI can call
Agent β†’ An AI system that decides which tool to use based on the question

Create a Simple Custom Tool

Let’s start with a basic example.

from langchain_core.tools import tool

@tool
def get_weather(city: str) -> dict:
    """Return weather details for a city."""
    fake_data = {
        "chennai": {"temp_c": 33, "condition": "Sunny"},
        "mumbai": {"temp_c": 31, "condition": "Humid"},
        "delhi": {"temp_c": 29, "condition": "Cloudy"},
        "bangalore": {"temp_c": 26, "condition": "Rain likely"},
    }
    return {
        "city": city,
        **fake_data.get(city.lower(), {"temp_c": 30, "condition": "Unknown"}),
    }

res = get_weather.invoke({"city": "chennai"})
print(res)

Output

{'city': 'chennai', 'temp_c': 33, 'condition': 'Sunny'}
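To see what `@tool` actually gives the model, here is a conceptual sketch in plain Python (no LangChain). The `SimpleTool` class is hypothetical and only illustrates the idea: the decorator wraps a function and exposes its name, docstring, and argument names so the model can decide when and how to call it.

```python
# Conceptual sketch (plain Python, NOT LangChain internals) of what a
# tool wrapper exposes: name, description, and argument names.
import inspect

class SimpleTool:
    def __init__(self, fn):
        self.fn = fn
        self.name = fn.__name__                      # used to pick the tool
        self.description = inspect.getdoc(fn) or ""  # the docstring the LLM reads
        self.args = list(inspect.signature(fn).parameters)

    def invoke(self, kwargs: dict):
        return self.fn(**kwargs)

def get_weather(city: str) -> dict:
    """Return weather details for a city."""
    fake_data = {"chennai": {"temp_c": 33, "condition": "Sunny"}}
    return {"city": city, **fake_data.get(city.lower(), {"temp_c": 30, "condition": "Unknown"})}

tool = SimpleTool(get_weather)
print(tool.name)         # get_weather
print(tool.args)         # ['city']
print(tool.invoke({"city": "chennai"}))  # {'city': 'chennai', 'temp_c': 33, 'condition': 'Sunny'}
```

LangChain's real `@tool` builds a richer object (with a typed args schema), but the information it carries is the same.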

Set Up API Keys for Real Tools

For real-world tools, you’ll need API keys:

import os

os.environ['SERP_API_KEY'] = "aaa"
os.environ['ALPHAVANTAGE_API_KEY'] = "bbb"
os.environ['OPENWEATHER_API_KEY'] = "ccc"
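A tool called with an empty key usually fails with a confusing API error. A small helper (hypothetical, not part of LangChain) can fail fast instead:

```python
# Fail fast when a required API key is missing, instead of silently
# sending an empty key to the provider.
import os

def require_key(name: str) -> str:
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Missing environment variable: {name}")
    return value

os.environ["SERP_API_KEY"] = "aaa"   # placeholder value, as above
print(require_key("SERP_API_KEY"))   # aaa
```

Call `require_key("SERP_API_KEY")` inside a tool instead of `os.getenv(...)` to surface configuration mistakes immediately.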

Install dependencies:

pip install -U langchain-community google-search-results

Create Web Search Tool (SerpAPI)

from langchain_community.utilities import SerpAPIWrapper
from langchain_core.tools import tool
import os

@tool
def search_web(query: str) -> str:
    """Search the web for factual information."""
    serp = SerpAPIWrapper(serpapi_api_key=os.getenv("SERP_API_KEY"))
    return serp.run(query)

Create Your First Agent πŸ€–

from langchain.agents import create_agent

# Use your existing set_llm function
llm = set_llm('openai')

agent = create_agent(
    model=llm,
    tools=[search_web],
    system_prompt="""
    Use search_web for factual lookup

    Rules:
    - ALWAYS return output in valid JSON
    - NO explanation
    - NO extra text
    - NO markdown
    """,
)

Test the Agent πŸ§ͺ

result = agent.invoke({
    "messages": [
        {"role": "user", "content": "Age of MS Dhoni and weather of his native?"}
    ]
})

print(result["messages"][-1].content)

Output

Running this in a Jupyter Notebook, the response comes back as a JSON string rather than a structured object, so we need to add an output format structure.

Pydantic models provide the richest feature set with field validation, descriptions, and nested structures.

from langchain.agents import create_agent
from pydantic import BaseModel, Field

class SearchOutputStructure(BaseModel):
    """A query output details with structure."""
    Name: str = Field(description="Name of the Person")
    Age: int = Field(description="Age of Person")
    City: str = Field(description="Native of Person")
    Temperature: float = Field(description="Temperature of native city")

# Use your existing set_llm function
llm = set_llm('openai')

agent = create_agent(
    model=llm,
    tools=[search_web],
    response_format=SearchOutputStructure,
    system_prompt="""
    Use search_web for factual lookup

    Rules:
    - ALWAYS return output in valid JSON
    - NO explanation
    - NO extra text
    - NO markdown
    """,
)

result = agent.invoke({
    "messages": [
        {"role": "user", "content": "Age of MS Dhoni and weather of his native?"}
    ]
})

print(result["structured_response"])

Structured Format Output

To surface additional fields (for example, the person's name), add them to the SearchOutputStructure definition.
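With `response_format=SearchOutputStructure`, the agent's JSON output is validated against the Pydantic model, so wrong types fail loudly instead of silently. A quick standalone sketch (assuming Pydantic v2; the sample values are made up for illustration):

```python
# Validate a raw JSON string against the Pydantic model, the same way
# response_format constrains the agent's output. Sample values are made up.
from pydantic import BaseModel, Field, ValidationError

class SearchOutputStructure(BaseModel):
    """Structured output for a person query."""
    Name: str = Field(description="Name of the Person")
    Age: int = Field(description="Age of Person")
    City: str = Field(description="Native of Person")
    Temperature: float = Field(description="Temperature of native city")

raw = '{"Name": "MS Dhoni", "Age": 43, "City": "Ranchi", "Temperature": 29.5}'
parsed = SearchOutputStructure.model_validate_json(raw)
print(parsed.Name, parsed.Temperature)  # MS Dhoni 29.5

# A non-numeric Age is rejected instead of slipping through as a string.
try:
    SearchOutputStructure.model_validate_json(
        '{"Name": "X", "Age": "not a number", "City": "Y", "Temperature": 1.0}'
    )
except ValidationError:
    print("validation failed")
```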

Build Agent with Multiple Tools

Now let’s create a more powerful agent.

Tools Included:

  • 🌐 Web Search
  • πŸ“ˆ Stock Price
  • 🌦️ Weather

Define all tools

import os
import requests
from langchain_core.tools import tool
from langchain_community.utilities import SerpAPIWrapper

@tool
def search_web(query: str) -> str:
    """Search the web for city, company and more details."""
    serp = SerpAPIWrapper(serpapi_api_key=os.getenv("SERP_API_KEY"))
    return serp.run(query)

@tool
def get_stock_price(symbol: str) -> dict:
    """Get the latest stock price using Alpha Vantage."""
    api_key = os.getenv("ALPHAVANTAGE_API_KEY")

    url = "https://www.alphavantage.co/query"
    params = {
        "function": "GLOBAL_QUOTE",
        "symbol": symbol,
        "apikey": api_key,
    }

    try:
        response = requests.get(url, params=params, timeout=20)
        data = response.json()

        quote = data.get("Global Quote", {})

        if not quote:
            return {"symbol": symbol, "price": None, "error": f"No data for symbol: {symbol}"}

        price = quote.get("05. price")
        return {"symbol": symbol, "price": float(price) if price else None}
    except Exception as e:
        return {"symbol": symbol, "price": None, "error": f"Stock API error: {e}"}


@tool
def get_weather(city: str) -> dict:
    """Get the current weather using OpenWeather."""
    api_key = os.getenv("OPENWEATHER_API_KEY")

    url = "https://api.openweathermap.org/data/2.5/weather"
    params = {
        "q": city,
        "appid": api_key,
        "units": "metric",
    }

    try:
        response = requests.get(url, params=params, timeout=20)
        data = response.json()

        temp = data.get("main", {}).get("temp")
        if temp is None:
            return {"city": city, "temperature": None, "error": "Temperature not found"}

        return {"city": city, "temperature": float(temp)}
    except Exception as e:
        return {"city": city, "temperature": None, "error": str(e)}
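The fragile part of `get_stock_price` is parsing the response, so here is that step in isolation, run against a hard-coded dict shaped like Alpha Vantage's GLOBAL_QUOTE payload (no network call; the sample values are made up):

```python
# Offline sketch of the parsing step in get_stock_price, using a sample
# payload in the shape returned by Alpha Vantage's GLOBAL_QUOTE endpoint.
sample = {
    "Global Quote": {
        "01. symbol": "AAPL",
        "05. price": "270.2300",  # prices arrive as strings
    }
}

def extract_price(data: dict):
    quote = data.get("Global Quote", {})
    if not quote:
        return None  # e.g. unknown symbol or rate limit hit
    price = quote.get("05. price")
    return float(price) if price else None

print(extract_price(sample))  # 270.23
print(extract_price({}))      # None
```

Keeping the parsing in a small pure function like this makes the tool easy to unit-test without an API key.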

Create Multi-Tool Agent

from pydantic import BaseModel, Field
from langchain.agents import create_agent

class SearchOutputStructure(BaseModel):
    """Structured response for person, company, stock, and weather."""
    name: str = Field(description="Name of the Person")
    companyName: str = Field(description="Currently working/owning Company")
    symbol: str = Field(description="Company Share Symbol")
    city: str = Field(description="City of the Company Headquarter")
    temperature: float | None = Field(description="Temperature in celsius of city")
    stockPrice: float | None = Field(description="Latest stockprice of company")

agent = create_agent(
    model=llm,
    tools=[search_web, get_stock_price, get_weather],
    response_format=SearchOutputStructure,
    system_prompt="""
    Use:
    - search_web to identify the person's company, stock ticker, and city
    - get_stock_price to fetch stock price
    - get_weather to fetch the city's weather

    Interpretation rules:
    - For queries like "weather in his city", first resolve the person's city.
    - Prefer the person's current work city if clearly known.
    - If that is ambiguous, use the company's headquarters city.
    - Then call get_weather with the resolved city.
    - If no city can be resolved, set city=null and temperature=null.

    Return fields exactly matching the schema:
    - name
    - companyName
    - symbol
    - city
    - temperature
    - stockPrice
    """
)

Test Multi-Tool Agent

result = agent.invoke({
    "messages": [
        {
            "role": "user",
            "content": "What is the share price of Tim Cook's company and weather in his company's headquarter city?"
        }
    ]
})

print(result["structured_response"])

Output

name='Tim Cook' companyName='Apple Inc.' symbol='AAPL' city='Cupertino' temperature=14.22 stockPrice=270.23

The docstring is read by the LLM to understand when and how to use a tool. The model uses it for:

  • deciding which tool to call
  • understanding what the inputs mean
  • understanding what the tool returns

The system prompt controls how the model behaves overall, instead of leaving the model to guess.

One-line summary:

Docstring β†’ helps the model choose the right tool
System prompt β†’ helps the model use tools correctly and produce the right output
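The agent loop behind this can be sketched in plain Python. This is a toy illustration, not LangChain internals: the model emits a tool call as a name plus arguments, and the runtime looks the tool up by name, executes it, and feeds the result back.

```python
# Toy illustration of the agent's tool-dispatch step (NOT LangChain internals):
# the model picks a tool by name; the runtime executes it with the args.
def search_web(query: str) -> str:
    """Search the web for factual information."""
    return f"results for: {query}"

def get_weather(city: str) -> dict:
    """Get current weather for a city."""
    return {"city": city, "temp_c": 29}  # stubbed value for illustration

# Registry: tool name -> callable (what the docstrings help the model choose from)
TOOLS = {fn.__name__: fn for fn in (search_web, get_weather)}

def run_tool_call(call: dict):
    # `call` mimics a model-produced tool call: {"name": ..., "args": {...}}
    fn = TOOLS[call["name"]]
    return fn(**call["args"])

print(run_tool_call({"name": "get_weather", "args": {"city": "Ranchi"}}))
# {'city': 'Ranchi', 'temp_c': 29}
```

In the real agent, this dispatch repeats until the model stops requesting tools and produces its final (structured) answer.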

Conclusion

In this blog, we moved beyond simple LLM usage and explored how to build AI agents using tools in LangChain.

We learned how to:

  • create custom tools using @tool
  • integrate external APIs like SerpAPI, Alpha Vantage, and OpenWeather
  • enable LLMs to choose the right tool automatically
  • enforce structured outputs using Pydantic models (response_format)
