Sometimes the biggest opportunities come disguised as unproven protocols released on a random Monday. Here’s why we bet on MCP before anyone asked us to.
Two months before most of the industry had heard of MCP, we made a bet that it would fundamentally transform how people interact with their data infrastructure.
November 25, 2024. A message dropped into #ai-news, our Slack channel where Keboolians collectively navigate the deluge of AI updates. Someone posted about Anthropic's new release: Model Context Protocol (MCP). It enabled Claude to connect directly to external systems, execute actions, and return results—not just suggest what you should do.
The implications were easy to see. This was the architectural standardization we believed the industry would finally rally around, bridging the gap between AI reasoning and system execution.
For years, the industry had been stuck with a fundamental limitation: AI systems could analyze queries, generate sophisticated code, and design complex solutions, but they couldn't actually execute anything. Every interaction required a human intermediary to copy code, paste it into the right system, run it, and bring back the results.
But there was an even deeper problem: fragmentation. Every AI framework—LangChain, CrewAI, AutoGen—had its own way of connecting to external systems. Developers were forced to write custom integrations for each framework, creating a massive duplication of effort across the ecosystem. Want to connect your AI to Salesforce? Write one integration for LangChain. Another for CrewAI. Another for your custom agent framework.
MCP changes this at the protocol level. It's a specification for a standardized communication layer that allows AI systems to:

- discover the tools an external system exposes,
- invoke those tools and get structured results back, and
- do all of this through one interface, regardless of which model or client sits on the other end.
Tangibly, what that means is straightforward: No more custom implementations for every tool and framework. Instead, SaaS providers themselves can create and distribute official tool sets for their systems that work everywhere. Write it once; use it anywhere. Whether you're using Claude, ChatGPT, or any MCP-compatible framework, the same tools just work.
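To make that concrete, here is a minimal sketch of what the standardized layer looks like on the wire. MCP runs over JSON-RPC 2.0; the `query_data` tool name and its schema below are made up for illustration:

```python
# MCP's JSON-RPC 2.0 messages, shown as Python dicts.
# `query_data` is a hypothetical tool, not a real Keboola tool name.

# 1. The client asks any MCP server what it can do.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# 2. The server describes its tools, with inputs as JSON Schema.
list_response = {
    "jsonrpc": "2.0", "id": 1,
    "result": {"tools": [{
        "name": "query_data",
        "description": "Run a read-only SQL query against the workspace.",
        "inputSchema": {
            "type": "object",
            "properties": {"sql": {"type": "string"}},
            "required": ["sql"],
        },
    }]},
}

# 3. The client invokes a tool; the server executes it and returns the result.
call_request = {
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {"name": "query_data",
               "arguments": {"sql": "SELECT COUNT(*) FROM orders"}},
}
call_response = {
    "jsonrpc": "2.0", "id": 2,
    "result": {"content": [{"type": "text", "text": "1000"}], "isError": False},
}
```

Because every client and server speaks these same few methods, a tool defined once is usable from any of them.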
For Keboola, with our API coverage spanning 300+ data connectors and transformation engines, this protocol was the perfect gift. We could build one MCP server and instantly make our entire platform accessible to every AI system that supports the protocol.
None of our customers were asking for an MCP server. In fact, most had never heard of it. We had a backlog of feature requests, proven roadmap items demanding resources, and here we were, evaluating a protocol that was literally days old.
But our instincts told us this was different. The protocol specification was elegant. The implementation path was clear. Most importantly, it solved real architectural problems we'd been wrestling with for years.
By January 2025, MCP's adoption trajectory validated our instincts. Major platforms were implementing support. The ecosystem was forming rapidly. We'd already built our proof of concept—just six tools initially, but enough to demonstrate the transformative potential.
Our implementation leveraged FastMCP for the server framework and integrated directly with our existing Storage API infrastructure. The technical architecture we developed included:
Core Tool Categories:

Storage Tools: list buckets and tables, inspect schemas and table metadata.

SQL Tools: run queries against the project's workspace.

Transformation Tools: create and update SQL transformations.

Job Tools: trigger component jobs and monitor their status.
Each tool was built with comprehensive error handling, automatic workspace provisioning, and intelligent caching mechanisms. Our type-safe implementation using Pydantic ensured robust data validation across all operations.
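As a sketch of how those pieces fit together, a FastMCP tool with a Pydantic-validated result looks roughly like this. The tool name, fields, and stub data are hypothetical, not our production code:

```python
from pydantic import BaseModel, Field
from mcp.server.fastmcp import FastMCP  # FastMCP ships with the MCP Python SDK

mcp = FastMCP("keboola-demo")  # hypothetical server name

class TableDetail(BaseModel):
    """Pydantic model: every field is validated before it reaches the client."""
    table_id: str
    columns: list[str]
    row_count: int = Field(ge=0)

@mcp.tool()
def get_table_detail(table_id: str) -> dict:
    """Return schema and row count for one table.

    The real tool would call the Storage API; a stub keeps this runnable.
    """
    detail = TableDetail(table_id=table_id, columns=["id", "amount"], row_count=42)
    return detail.model_dump()

if __name__ == "__main__":
    mcp.run()  # defaults to stdio, which Claude Desktop speaks natively
```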
While others were still figuring out basic MCP implementations, we tackled the industry's next challenge: remote deployment with enterprise-grade security.
Our engineering team architected a sophisticated OAuth-based authentication flow that eliminates the need for users to manage API tokens manually. The system we built features:

- a browser-based sign-in flow in place of hand-copied API tokens,
- credentials scoped to the authenticated user's projects and permissions, and
- a fully hosted remote endpoint, so nothing needs to be installed or run locally.
This wasn't just about making MCP work—it was about making it work at enterprise scale with less friction for end users.
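For readers unfamiliar with the pattern, here is a minimal sketch of the standard OAuth 2.0 authorization-code flow the description above implies. The endpoints and client ID are hypothetical placeholders, not Keboola's actual configuration:

```python
import secrets
import urllib.parse

import requests

AUTH_URL = "https://auth.keboola.example/oauth/authorize"  # hypothetical endpoint
TOKEN_URL = "https://auth.keboola.example/oauth/token"     # hypothetical endpoint
CLIENT_ID = "keboola-mcp-server"                           # hypothetical client ID

def build_authorize_url(redirect_uri: str) -> tuple[str, str]:
    """Step 1: the user signs in through the browser; no API token is ever typed."""
    state = secrets.token_urlsafe(16)  # CSRF protection, checked on the way back
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": redirect_uri,
        "state": state,
    }
    return f"{AUTH_URL}?{urllib.parse.urlencode(params)}", state

def exchange_code(code: str, redirect_uri: str) -> dict:
    """Step 2: the MCP server swaps the one-time code for scoped, expiring tokens."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "client_id": CLIENT_ID,
        "redirect_uri": redirect_uri,
    }, timeout=30)
    resp.raise_for_status()
    return resp.json()  # access_token, refresh_token, expires_in
```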
Our MCP server exposes 30+ specialized tools, each meticulously designed for a specific data operation.
The architecture automatically detects whether you're running on Snowflake or BigQuery and handles all the complexity of fully-qualified table names and proper quoting mechanisms.
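As an illustration of what that dialect handling involves (a simplified sketch, not the server's actual helper), Snowflake quotes identifiers with double quotes while BigQuery wraps the whole dotted path in backticks:

```python
def fq_name(backend: str, parts: list[str]) -> str:
    """Build a fully-qualified, safely quoted table name for the detected backend."""
    if backend == "snowflake":
        # Snowflake: "DATABASE"."SCHEMA"."TABLE", doubling any embedded quotes
        return ".".join('"' + p.replace('"', '""') + '"' for p in parts)
    if backend == "bigquery":
        # BigQuery: the whole project.dataset.table path inside one pair of backticks
        return "`" + ".".join(parts) + "`"
    raise ValueError(f"unsupported backend: {backend!r}")

print(fq_name("snowflake", ["SAPI_1234", "in.c-main", "orders"]))
# -> "SAPI_1234"."in.c-main"."orders"
print(fq_name("bigquery", ["my-project", "in_c_main", "orders"]))
# -> `my-project.in_c_main.orders`
```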
Our implementation leverages several key design patterns (a combined code sketch follows the list):
Session State Management: Every tool function receives a context object that maintains user authentication, project scope, and connection state across multiple operations. This enables complex multi-step workflows while ensuring security isolation between different users and projects.
Type-Safe Tool Definitions: Using Pydantic models throughout, we ensure that every input and output is validated at runtime. This catches errors early and provides clear feedback to AI agents about what went wrong and how to fix it.
Intelligent Error Recovery: Our @tool_errors() decorator wraps each tool with contextual error handling that not only logs failures but provides recovery instructions to the AI. When a transformation fails due to a missing table, the AI receives actionable guidance on how to resolve the issue.
Async-First Architecture: Built on FastMCP's async foundation, our server handles concurrent operations efficiently. Multiple AI agents can execute queries, create transformations, and monitor jobs simultaneously without blocking each other.
Transport Flexibility: While users can connect via stdio for local Claude Desktop usage, our server also supports HTTP+SSE transport for web-based clients and enterprise deployments. This flexibility ensures compatibility with the entire MCP ecosystem.
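Pulling those patterns together, here is a hedged sketch of how they might compose in FastMCP. The `tool_errors` stand-in and the toy catalog are illustrative; only the decorator's name comes from the description above:

```python
import functools

from mcp.server.fastmcp import Context, FastMCP

mcp = FastMCP("keboola-demo")  # hypothetical server name

TABLES = {"in.c-main.orders": 1_000}  # toy catalog standing in for the Storage API

def tool_errors():
    """Stand-in for the @tool_errors() decorator: turn failures into recovery hints."""
    def decorator(fn):
        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            try:
                return await fn(*args, **kwargs)
            except KeyError as exc:
                # The AI gets actionable guidance instead of a bare traceback.
                return f"Table {exc} not found. List tables first, then retry."
        return wrapper
    return decorator

@mcp.tool()
@tool_errors()
async def row_count(table_id: str, ctx: Context) -> str:
    """Async tool: many of these can run concurrently without blocking each other."""
    await ctx.info(f"Counting rows in {table_id}")  # logged in the session's context
    return f"{TABLES[table_id]} rows"

if __name__ == "__main__":
    mcp.run(transport="stdio")  # or transport="sse" for remote HTTP+SSE clients
```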
The real power comes from how these tools compose. An AI can query table metadata, analyze the schema, generate appropriate SQL transformations, execute them, and monitor the results—all in a single conversation flow. Each tool is designed to provide just enough functionality to be useful on its own while combining naturally with others for complex operations.
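To see that composition from the client side, a script using the MCP Python SDK's client session can chain tools like this. The server command and tool name are assumptions carried over from the sketches above:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch the server from the earlier sketch as a subprocess over stdio.
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Step 1: discover what the server offers.
            tools = await session.list_tools()
            print([t.name for t in tools.tools])

            # Step 2: call one tool and feed its result into the next decision.
            result = await session.call_tool(
                "row_count", {"table_id": "in.c-main.orders"}
            )
            print(result.content)

asyncio.run(main())
```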
The combination of MCP protocol support and our remote OAuth deployment creates an unprecedented capability: any AI system can now become a fully-functional data engineer.
Consider the implications:

- Data teams can delegate routine operations such as loads, transformations, and job monitoring to AI assistants.
- Business analysts can query the warehouse through natural conversation, no SQL required.
- Any MCP-compatible client, from Claude to ChatGPT to VS Code, gets the same capabilities without a single custom integration.
This isn't incremental improvement—it's a fundamental shift in how data infrastructure can be accessed and manipulated.
By May 2025, our bet had paid off spectacularly. Anthropic announced “integrations” in close partnership with Cloudflare. Atlassian, Asana, Linear, and many more released their remote MCP servers. Microsoft implemented support in VS Code. OpenAI adopted the protocol in its APIs. And most recently, ChatGPT rolled out official support. The protocol had become the de facto standard for AI-to-system communication.
Our customers who didn't know what MCP was six months ago are now building entire workflows around it. Data teams are delegating routine operations to AI assistants. Business analysts are querying data warehouses through natural conversation.
The future we anticipated had arrived faster than anyone expected.
About the author
Jordan Burger, Applied AI Research Lead
Jordan leads Applied AI Research at Keboola, where he’s been instrumental in designing and delivering the Keboola MCP Server.