Use AI services from your LLM
Connect Claude Desktop, Cursor, or any MCP-compatible client directly to Hudsonly AI. Generate speech, create avatar videos, and manage tasks without leaving your conversation.
User: Generate speech saying "Welcome to our demo" with a warm voice
Claude: I'll use the generate_speech tool to create that audio.
text: "Welcome to our demo"
voice: "af_heart"
Claude: Done! Here's your audio: [audio_url]
What is MCP?
The Model Context Protocol is an open standard that lets AI assistants use external tools and data sources. Instead of writing code to call APIs, your LLM calls them directly.
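Your MCP client handles the wire protocol for you, but under the hood each tool invocation is a JSON-RPC 2.0 `tools/call` request sent over the streamable HTTP transport. As a sketch, a speech-generation call might look like this on the wire (the `id` and argument values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "generate_speech",
    "arguments": {
      "text": "Welcome to our demo",
      "voice": "af_heart"
    }
  }
}
```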
Conversational
Ask your AI to generate speech or create videos in natural language. No code needed.
Secure
Your API token stays local to your machine. Authentication, credit accounting, and rate limits apply exactly as they do for the REST API.
Full Feature Set
All 8 API operations are available as MCP tools, from generation to task management.
Available Tools
Your LLM gets access to these tools when connected via MCP.
generate_speech
Convert text to speech audio. Returns an audio URL immediately. Costs 1 credit.
generate_avatar
Generate an animated portrait video. Returns a task_id for polling. Costs 10 credits.
get_task_status
Poll an async task by ID until it completes or fails.
list_tasks
List recent tasks with optional filtering by service or status.
get_credit_balance
Check your current credit balance and recent transactions.
list_services
List available AI services with descriptions and credit costs.
list_models
List available models for a service (e.g. SadTalker, LivePortrait).
get_rate_limits
Check rate limit status for each service.
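The async tools above follow a submit-then-poll pattern: generate_avatar returns a task_id, and get_task_status is called repeatedly until the task settles. A minimal sketch of that loop in Python (the `get_status` callable stands in for whatever issues the get_task_status call; it is not part of any Hudsonly SDK, and the simulated responses below are for illustration only):

```python
import time

def poll_task(get_status, task_id, interval=2.0, timeout=120.0):
    """Poll an async task until it completes, fails, or times out.

    `get_status` takes a task_id and returns a dict with at least a
    "status" field, mirroring the get_task_status tool's response shape.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = get_status(task_id)
        if task["status"] in ("completed", "failed"):
            return task
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish within {timeout}s")

# Simulated status source: pending twice, then completed.
_responses = iter(["pending", "pending", "completed"])

def fake_get_status(task_id):
    return {"task_id": task_id, "status": next(_responses)}

result = poll_task(fake_get_status, "9f1a2b3c", interval=0.01)
print(result["status"])  # → completed
```

When the LLM drives these tools itself, it performs the same loop conversationally; the sketch is only useful if you call the tools programmatically.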
Setup Guide
Connect your MCP client in two steps.
Get your API token
Create an API token from your dashboard or via the API. You'll need this for authentication.
Add the MCP server config
Add the following to your MCP client configuration. Replace YOUR_API_TOKEN with the token from step 1.
Claude Desktop
Add to claude_desktop_config.json:
{
"mcpServers": {
"hudsonly-ai": {
"url": "https://api.ai.hudsonly.com/mcp/hudsonly",
"headers": {
"Authorization": "Bearer YOUR_API_TOKEN"
}
}
}
}
Cursor
Add to .cursor/mcp.json in your project:
{
"mcpServers": {
"hudsonly-ai": {
"url": "https://api.ai.hudsonly.com/mcp/hudsonly",
"headers": {
"Authorization": "Bearer YOUR_API_TOKEN"
}
}
}
}
Claude Code (CLI)
Run in your terminal:
$ claude mcp add --transport http hudsonly-ai \
    https://api.ai.hudsonly.com/mcp/hudsonly \
    --header "Authorization: Bearer YOUR_API_TOKEN"
Example Conversation
Here's what using Hudsonly AI through MCP looks like.
You
Create an audio clip that says "Thanks for watching our demo" in a natural voice, then make an avatar video with this portrait image: https://example.com/portrait.jpg
Assistant
I'll start by generating the speech audio, then use it to create the avatar video.
// generate_speech → { audio_url: "https://storage.../audio_abc123.mp3" }
// generate_avatar → { task_id: "9f1a2b3c-...", status: "pending" }
// get_task_status → { status: "completed", output: { video_url: "https://storage.../video_xyz.mp4" } }
Assistant
Done! Here are your results:
- Audio: https://storage.../audio_abc123.mp3
- Video: https://storage.../video_xyz.mp4
The avatar video shows the portrait speaking your text with natural lip movements. Total cost: 11 credits (1 for TTS + 10 for avatar).
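Under the hood, the avatar step maps to a single tools/call that combines the portrait with the freshly generated audio. The parameter names below are illustrative assumptions, not a confirmed schema; use list_services and list_models to discover the real inputs:

```json
{
  "name": "generate_avatar",
  "arguments": {
    "image_url": "https://example.com/portrait.jpg",
    "audio_url": "https://storage.../audio_abc123.mp3"
  }
}
```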
Compatible Clients
Works with any client that supports the MCP streamable HTTP transport.
- Claude Desktop (Anthropic)
- Claude Code (CLI agent)
- Cursor (AI code editor)
- Any MCP client (streamable HTTP transport)
Ready to connect?
Create an account, grab your API token, and add the MCP config. You'll be generating speech and video from your LLM in minutes.