📄️ EchoKit Server
EchoKit Server is the central component that manages communication between the EchoKit device and AI services. It can be deployed locally or connected to preset servers, allowing developers to customize LLM endpoints, craft LLM prompts, configure speech models, and integrate additional AI features such as MCP servers.
📄️ Run the EchoKit Server on Your Machine
In this guide, we’ll walk through running the EchoKit server locally.
📄️ Connect EchoKit Server with the Device
In this guide, we’ll walk through connecting your EchoKit device to the EchoKit server.
📄️ Configure the ASR-LLM-TTS Pipeline in EchoKit Server
EchoKit supports two pipeline approaches: the ASR-LLM-TTS pipeline (classic modular approach) and the end-to-end pipeline (single integrated model like Gemini Live).
📄️ Configure an End-to-End Pipeline for EchoKit
In addition to the classic ASR-LLM-TTS pipeline, EchoKit supports real-time end-to-end models that can reduce latency, although this approach has several limitations.
📄️ Add Actions for EchoKit
EchoKit supports MCP (Model Context Protocol), which allows LLMs to call external tools and services.
📄️ Qwen series models
Qwen is one of the best open-source LLM families in the world. In addition to the open-source models, Alibaba Cloud also offers multiple commercial models through its Bailian platform. In this article, we will show you how to integrate Qwen series models with EchoKit, which is especially useful if you're in China.
📄️ Test Your EchoKit Server
Once you have your EchoKit server running successfully, you can test it using a web-based EchoKit client to verify that voice interactions work properly.