Live MCP is built from two components that work together to give an AI assistant full access to Ableton Live. Understanding how they fit together will help you reason about what is happening when you interact with Live through your AI, and it will make troubleshooting much easier when something goes wrong.
## The two-component design

```
AI Assistant <--> MCP Server <--TCP--> M4L Bridge <--> Ableton Live
  (Claude)        (Node.js)          (Max for Live)    (Live API)
```
The MCP server speaks the Model Context Protocol. It receives tool calls from your AI assistant (like “get the tempo” or “create a track”) and translates them into messages the bridge understands.
The M4L bridge is a Max for Live device that lives inside Ableton Live. It runs a TCP server that listens for requests from the MCP server, executes them against the Live Object Model (LOM), and sends results back.
The two components communicate over TCP on localhost, using a simple newline-delimited JSON protocol.
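Newline-delimited JSON means each message is one `JSON.stringify`-ed object followed by `"\n"`. Because TCP is a byte stream, a single read can contain a partial message or several messages at once, so the receiving side has to buffer until it sees a newline. A minimal sketch of that framing (function names here are illustrative, not the actual Live MCP code):

```javascript
// Encode one message as a single line of JSON terminated by "\n".
function encodeMessage(msg) {
  return JSON.stringify(msg) + "\n";
}

// Incremental decoder: buffer incoming chunks and parse each complete
// line as soon as its terminating newline arrives.
function createDecoder(onMessage) {
  let buffer = "";
  return function (chunk) {
    buffer += chunk;
    let idx;
    while ((idx = buffer.indexOf("\n")) !== -1) {
      const line = buffer.slice(0, idx);
      buffer = buffer.slice(idx + 1);
      if (line.length > 0) onMessage(JSON.parse(line));
    }
  };
}

// Two messages arriving split across three TCP chunks:
const received = [];
const feed = createDecoder((msg) => received.push(msg));
feed('{"id":"abc-123","method":"get_prop');
feed('erty"}\n{"id":"abc-123",');
feed('"result":{"value":128.0}}\n');
// received now holds both parsed messages
```

The buffering step is the part that is easy to get wrong: a decoder that assumes one read equals one message will break as soon as two requests land in the same TCP segment.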
## Why two components?
Ableton Live does not expose its internal API over a network. The only way to interact with the Live Object Model programmatically is through Max for Live, which runs inside Live’s own process. That constraint means the code that actually talks to Live must live inside a Max for Live device.
At the same time, MCP servers need to run as standalone processes that communicate over stdio with AI clients. You cannot run an MCP server inside Max for Live — the execution environments are fundamentally different.
The bridge pattern solves this. The M4L bridge handles the “inside Ableton” part (executing LOM operations), and the MCP server handles the “outside Ableton” part (speaking MCP with the AI client). TCP connects the two.
This separation also means the MCP server can run on a different machine than Ableton if needed. As long as the TCP connection can reach the bridge, it works. You could run your AI assistant on a laptop while Ableton runs on a dedicated production machine.
## How a request flows
When you ask your AI assistant something like “What’s the tempo?”, here is what happens:
1. Your AI assistant decides to call the `get_property` tool with path `live_set` and property `tempo`.
2. The MCP server receives this tool call over stdio.
3. The MCP server sends a JSON request to the M4L bridge over TCP: `{"id": "abc-123", "method": "get_property", "params": {"path": "live_set", "property": "tempo"}}`.
4. The M4L bridge receives the request and uses the Max for Live `LiveAPI` object to query the tempo from the Live Object Model.
5. The bridge sends the result back over TCP: `{"id": "abc-123", "result": {"value": 128.0}}`.
6. The MCP server receives the result and returns it to your AI assistant as a tool response.
7. Your assistant tells you the tempo is 128 BPM.
The whole round trip typically takes a few milliseconds on localhost.
## The observation pattern
Most tool calls follow a simple request-response pattern. But `observe` is different: it sets up a persistent subscription. When you observe a property, the bridge watches it for changes inside Ableton and pushes notifications back to the MCP server without being asked.
This is how your AI assistant can tell you “the tempo just changed to 135” when you adjust it manually in Ableton. The bridge detects the change and sends an unsolicited notification over the TCP connection.
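One way to tell these pushed messages apart from ordinary replies is the `id` field: a message whose `id` matches an outstanding request is a reply, and anything else is treated as a notification. A sketch of that dispatch (the exact notification shape is an assumption based on the protocol examples above):

```javascript
// Route incoming messages to either a reply handler or a
// notification handler, based on whether the "id" is one we issued.
const awaitingReply = new Set(); // ids of requests still in flight

function dispatch(msg, onReply, onNotification) {
  if (msg.id !== undefined && awaitingReply.has(msg.id)) {
    awaitingReply.delete(msg.id);
    onReply(msg); // answer to a request we sent earlier
  } else {
    onNotification(msg); // unsolicited push, e.g. an observed change
  }
}

// A reply, then a pushed tempo-change notification:
const events = [];
const onReply = (m) => events.push(["reply", m]);
const onNotify = (m) => events.push(["notify", m]);
awaitingReply.add("abc-123");
dispatch({ id: "abc-123", result: { value: 128.0 } }, onReply, onNotify);
dispatch({ method: "notify",
           params: { path: "live_set", property: "tempo", value: 135.0 } },
         onReply, onNotify);
// events: one "reply" entry followed by one "notify" entry
```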
## Further reading
- The Live Object Model — the object hierarchy the bridge navigates
- Available Tools — reference for all six MCP tools
- Cycling ‘74 LOM reference — the authoritative documentation for the Live Object Model API