
Day 24: CES 2026 in Practice — Voice Agents That Act | The First 30 Days with EchoKit

· 6 min read

At CES 2026, the message was clear: Smartphones are so 2025.

The future isn't a bigger or foldable screen. It's AI pendants around your neck, holographic companions like Razer's Project AVA, robot pets that hug back, and always-on voice agents that act without touching any screen.

These aren't just "better assistants." They're proactive voice AI agents that listen, understand context, reason, act, and respond — all hands-free, no phone needed.

EchoKit is the open-source devkit showing how those AI devices work under the hood.

We've been building toward this. On Day 15, we introduced MCP (Model Context Protocol) as EchoKit's gateway to external tools. We showed how to connect to Tavily search. On Day 23, we added DuckDuckGo for real-time web search.

Those were about information — giving your voice agent the ability to retrieve knowledge from the web.

Today is about action.

Today, your EchoKit learns to do things for you. We will show you how to integrate Zapier's Google Calendar MCP server with EchoKit to manage your Google Calendar via voice.

Why Action Matters

Imagine this: You're rushing to get ready in the morning, hands full, and you remember you need to schedule a meeting with your team tomorrow at 2 PM.

Without action capability, your EchoKit could say, "You should schedule that meeting when you get to your computer." Helpful, but not helpful enough.

With action capability, you simply say:

"Schedule a team meeting tomorrow at 2 PM for one hour"

And your EchoKit actually does it.

No phone. No computer. No screens. Just voice.

That's the difference between a conversational AI that talks about your schedule and an agentic AI that manages it.

Zapier's Google Calendar MCP Server

For today's integration, we're using Zapier's Google Calendar MCP server. Zapier has built an excellent MCP implementation that provides:

  • Create events — add calendar entries with title, time, and duration
  • List upcoming events — see what's scheduled
  • Search events — find specific appointments
  • Update events — modify existing calendar entries

The Zapier MCP server handles all the OAuth authentication and API details, exposing clean tools that EchoKit can use to take action on your behalf. Remember that EchoKit supports MCP servers in SSE and HTTP-Streamable modes.
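When EchoKit decides to use one of these tools, it sends a standard MCP tools/call request (JSON-RPC 2.0) to the Zapier endpoint. Here's a rough sketch of that request, written as a Python dict — the tool name and argument fields are illustrative, since the real names come from the server's tool list:

# Illustrative MCP "tools/call" request (JSON-RPC 2.0).
# The tool name and arguments are hypothetical; the actual ones are
# discovered from the server's tools/list response.
create_event_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_event",  # exact tool name comes from the server's tool list
        "arguments": {
            "summary": "Team meeting",
            "start": "2026-01-16T14:00:00-08:00",  # RFC 3339 timestamps
            "end": "2026-01-16T15:00:00-08:00",
        },
    },
}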

Setting Up Zapier MCP Server

Before configuring EchoKit, you'll need to set up the Zapier MCP server and get your endpoint URL:

  1. Go to zapier.com/mcp — This is where you manage MCP integrations
  2. Click "+ New MCP Server" — Zapier will walk you through creating the MCP server you want
  3. Click Rotate token to get the MCP server URL — it looks like: `https://mcp.zapier.com/api/v1/connect?token=YOUR_TOKEN`

Keep this URL handy — you'll need it for the next step.
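Before wiring the endpoint into EchoKit, it can help to confirm it responds and to see exactly which tools it exposes. Here's a minimal sketch using the official MCP Python SDK (pip install mcp) over Streamable HTTP — swap in the URL you just copied from Zapier:

# sanity_check_zapier_mcp.py — list the tools exposed by the Zapier MCP server.
# Assumes the official MCP Python SDK is installed: pip install mcp
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

ZAPIER_MCP_URL = "https://mcp.zapier.com/api/v1/connect?token=YOUR_TOKEN"

async def main():
    # Open a Streamable HTTP connection and start an MCP client session.
    async with streamablehttp_client(ZAPIER_MCP_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

asyncio.run(main())

If the script prints your Google Calendar tools, the endpoint is live and ready for EchoKit.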

Configure EchoKit for Google Calendar

Now add the Zapier Google Calendar MCP server to your EchoKit config.toml:

[llm]
llm_chat_url = "https://api.groq.com/openai/v1/chat/completions"
api_key = "YOUR_GROQ_API_KEY"
model = "llama-3.3-70b-versatile" # Or any tool-capable model
history = 5

[[llm.mcp_server]]
server = "https://mcp.zapier.com/api/v1/connect?token=YOUR_TOKEN"
type = "http_streamable"
call_mcp_message = "Hold on a second. Let me check your calendar."

Key points:

  • server: Paste the Zapier MCP server endpoint URL you copied above
  • type: http_streamable for Zapier MCP servers (EchoKit also supports sse)
  • call_mcp_message: What EchoKit says while accessing your calendar

Ask EchoKit: "Schedule a Team Meeting"

Once configured, restart the EchoKit server and try a voice command:

User: "Schedule a team meeting tomorrow at 2 PM for one hour"

Under the hood, here's what happens:

  1. LLM parses the request — understands it's a calendar action with time and duration
  2. Tool call initiated — invokes the Google Calendar create_event tool via MCP
  3. Action executed — Zapier adds the event to your Google Calendar
  4. Confirmation returned — EchoKit confirms the action was completed
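On the LLM side, step 2 is an ordinary tool call in the OpenAI-compatible chat completions format that Groq returns. A sketch of what that assistant message might look like — again, the tool name and argument fields are illustrative:

# Illustrative assistant message carrying a tool call (OpenAI-compatible format).
# EchoKit forwards the call to the Zapier MCP server and feeds the result back to the LLM.
assistant_message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_abc123",
        "type": "function",
        "function": {
            "name": "create_event",  # exact tool name comes from the MCP server
            "arguments": '{"summary": "Team meeting", "start": "2026-01-16T14:00:00-08:00", "end": "2026-01-16T15:00:00-08:00"}',
        },
    }],
}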

EchoKit might respond like this:

"Let me check your calendar...

I've scheduled your team meeting for tomorrow at 2 PM. The event will last one hour."

Notice what happened: EchoKit didn't just say something. It did something.

Try It Now

Restart your EchoKit server and test it:

  1. Say: "What's on my calendar today?"
  2. Wait for EchoKit to check
  3. Say: "Schedule a test meeting tomorrow at 10 AM"
  4. Check your Google Calendar — the event should appear, actually created

If it works, you're ready to go. If not, check the troubleshooting section below.

More Voice Commands to Try

Once you have Google Calendar connected, here are some practical voice commands:

  • "What's on my calendar today?" — Get a rundown of your schedule
  • "Schedule a dentist appointment next Tuesday at 3 PM" — Create events with natural language
  • "When is my next meeting?" — Check upcoming events
  • "Block out time for deep work tomorrow morning" — Reserve focused time
  • "Move my team meeting to 3 PM" — Reschedule existing events

The LLM understands natural language timing — "tomorrow morning," "next Tuesday," "in two hours" — and converts it into proper calendar entries.
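To make that concrete: the model's job is to turn a relative phrase into the absolute timestamps a calendar tool expects (Google Calendar works with RFC 3339 date-times). A tiny illustration of the resolution for "tomorrow at 2 PM for one hour" — in practice the LLM does this reasoning itself; this is just what the target values look like:

# Resolving "tomorrow at 2 PM for one hour" into RFC 3339 timestamps.
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc).astimezone()  # timezone-aware local time
start = (now + timedelta(days=1)).replace(hour=14, minute=0, second=0, microsecond=0)
end = start + timedelta(hours=1)

print(start.isoformat())  # e.g. 2026-01-16T14:00:00-08:00
print(end.isoformat())    # e.g. 2026-01-16T15:00:00-08:00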

What makes Zapier's MCP server powerful is that it's not just about calendars. Zapier connects to 5,000+ apps, and through MCP, EchoKit can potentially interact with many of them:

  • Slack — Send messages, check channels
  • Gmail — Compose emails, search inbox
  • Trello/Asana — Create tasks, update boards
  • Notion — Add database entries, create pages
  • GitHub — Create issues, check repositories

Each Zapier integration you enable adds a new action capability to your voice agent.
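In config terms, each of these is just another [[llm.mcp_server]] entry. A sketch of what stacking a second server might look like — this assumes EchoKit accepts multiple entries (the array-of-tables syntax suggests it does), and you could equally add more app tools to a single Zapier MCP server instead:

[[llm.mcp_server]]
server = "https://mcp.zapier.com/api/v1/connect?token=YOUR_CALENDAR_TOKEN"
type = "http_streamable"
call_mcp_message = "Hold on a second. Let me check your calendar."

[[llm.mcp_server]]
server = "https://mcp.zapier.com/api/v1/connect?token=YOUR_SLACK_TOKEN"
type = "http_streamable"
call_mcp_message = "One moment. Let me send that message."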

From Voice to Action

Your EchoKit has evolved through these 24 days:

It started as a conversational AI that could talk with you.

Then it learned to listen and understand intent.

On Days 15 and 23, it learned to search and retrieve information.

Today, it learned to act.

This is the vision of agentic AI — not just conversation, but action. Not just talking about doing things, but actually doing them.

Your EchoKit isn't just answering questions anymore. It's getting things done.


Ready to give your voice agent action capabilities?

Want to get your own EchoKit?

Start building your voice-powered productivity assistant today.

Day 23: Real-Time Web Search with DuckDuckGo MCP | The First 30 Days with EchoKit

· 4 min read

On Day 15, we introduced EchoKit's ability to connect to MCP (Model Context Protocol) servers, which unlocks access to external tools and actions beyond simple conversation. We showed an example using a Tavily-based search MCP server.

Today, we're diving deeper into real-time web search using DuckDuckGo.

Why DuckDuckGo? It's privacy-focused, doesn't require API keys for basic usage, and provides a simple way to bring real-world, up-to-date information into your voice AI conversations.

Why Real-Time Web Search Matters

LLMs have a knowledge cutoff — they only know what they were trained on. Ask about yesterday's news, today's stock prices, or events that happened after the model's training, and they'll simply... not know.

But when you connect EchoKit to a web search MCP server, something magical happens:

  • The LLM recognizes it needs current information
  • It automatically invokes the search tool
  • Results are retrieved from the web in real-time
  • The LLM synthesizes an answer citing actual sources

Suddenly, your EchoKit isn't just a chatbot anymore — it's an AI agent that can access the entire internet through voice.

DuckDuckGo Web Search MCP Server

For today's demo, we're using a DuckDuckGo-based web search MCP server. DuckDuckGo is an excellent choice because:

  • No API key required for basic usage — just point and go
  • Privacy-focused — searches aren't tracked or profiled
  • Open ecosystem — multiple open-source DuckDuckGo MCP implementations exist

The server exposes a simple search tool that queries DuckDuckGo and returns structured results with titles, URLs, and snippets.

DuckDuckGo doesn't provide an official MCP server. You can check out this GitHub repo for more details: https://github.com/nickclyde/duckduckgo-mcp-server

Remember that EchoKit supports MCP servers in SSE and HTTP-Streamable modes.

Add the DuckDuckGo MCP server to your EchoKit config.toml:

[llm]
llm_chat_url = "https://api.groq.com/openai/v1/chat/completions"
api_key = "YOUR_GROQ_API_KEY"
model = "llama-3.3-70b-versatile" # Or any tool-capable model
history = 5

[[llm.mcp_server]]
server = "MCP Endpoint"
type = "http_streamable"
call_mcp_message = "Let me search the web for the latest information."

Key points:

  • server: The DuckDuckGo MCP server endpoint
  • type: http_streamable for Streamable HTTP; sse is also supported
  • call_mcp_message: What EchoKit says while searching (provides feedback during latency)

Ask EchoKit: "What's New in CES 2026?"

Now for the fun part. Restart the EchoKit server and ask a question that requires current information:

User: "What's new in CES 2026?"

Under the hood, here's what happens:

  1. LLM recognizes it needs real-time data about CES 2026
  2. Tool call initiated — the LLM invokes the DuckDuckGo search tool via MCP
  3. Search executed — DuckDuckGo queries the web for CES 2026 news
  4. Results returned — titles, URLs, and snippets come back through MCP
  5. Answer synthesized — the LLM processes the results and generates a natural response
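The results in step 4 come back as standard MCP tool output: a list of content items, usually plain text. The exact formatting depends on the DuckDuckGo MCP server you use, but it looks roughly like this:

# Illustrative MCP tool result for a web search call.
# MCP wraps results in content items; the text layout varies by server.
search_result = {
    "content": [{
        "type": "text",
        "text": "1. CES 2026 highlights — example.com/ces-2026\n2. ...",
    }],
    "isError": False,
}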

EchoKit might respond like this:

"Let me search the web for the latest information...

CES 2026 highlights (as of the first week of the show) ...."

And it would cite the actual sources it found.

Once you have MCP configured, you're not limited to web search. The same protocol lets EchoKit:

  • Manage Google Calendar — add events, check schedules
  • Send messages — Slack, email, Discord
  • Control smart home — Home Assistant integration for lights, AC, security
  • Read and write files — local file system access
  • Run code — execute scripts and return results

Each MCP server adds a new capability. Mix and match to build the agent you need.

Today's DuckDuckGo web search demo shows how EchoKit breaks free from the LLM's training cutoff. It can now:

  • Answer questions about current events
  • Look up live data (sports scores, stock prices, weather)
  • Provide cited information from real sources
  • Act as a research assistant accessible by voice

This is the vision of agentic AI — not just conversation, but action. Not just static knowledge, but real-time information. Not just a chatbot, but a tool that bridges your voice to the entire internet.


Want to explore more MCP integrations or share your own agent setups?

Ready to get your own EchoKit?

Start building your own voice AI agent today.