Efficient GitLab MCP: 93% token savings through progressive disclosure
I just published Efficient GitLab MCP, a token-efficient GitLab MCP server that dramatically reduces the context window overhead when AI agents interact with GitLab.
The problem with large tool sets
Most GitLab MCP servers expose dozens of individual tools directly to AI agents. Every tool definition—with its name, description, and parameter schema—consumes tokens from the context window. I measured the upstream zereight/gitlab-mcp server: 77 tools consuming ~27,000 tokens just for tool definitions, before any actual work begins.
That's a significant chunk of context that could be used for actual code, discussions, or reasoning.
The solution: progressive disclosure
Instead of dumping all 77 tools upfront, Efficient GitLab MCP exposes just 5 meta-tools:
| Meta-Tool | Purpose |
|---|---|
| `list_categories` | Discover available tool categories |
| `list_tools` | List tools in a specific category |
| `search_tools` | Search for tools by keyword |
| `get_tool_schema` | Get full parameter schema for a tool |
| `execute_tool` | Execute any GitLab tool by name |
The agent discovers tools on-demand. Need to create a merge request? The agent calls `list_tools("merge-requests")`, gets the schema for `create_merge_request`, and executes it. No wasted tokens on pipeline tools, wiki tools, or anything else it doesn't need.
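Under the hood, this is just ordinary MCP `tools/call` traffic. Here's a sketch of what that merge-request flow might look like on the wire; the meta-tool names come from the table above, but the argument keys (`category`, `tool_name`, and the sample merge-request fields) are illustrative guesses rather than the server's documented schema:

```json
[
  { "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": { "name": "list_tools",
                "arguments": { "category": "merge-requests" } } },

  { "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": { "name": "get_tool_schema",
                "arguments": { "tool_name": "create_merge_request" } } },

  { "jsonrpc": "2.0", "id": 3, "method": "tools/call",
    "params": { "name": "execute_tool",
                "arguments": { "tool_name": "create_merge_request",
                               "arguments": { "project_id": "group/repo",
                                              "source_branch": "feature",
                                              "target_branch": "main",
                                              "title": "Add feature" } } } }
]
```

Only the five meta-tool definitions sit in the context window permanently; everything else is fetched when the task actually calls for it.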
Measured result: ~1,800 tokens instead of ~27,000. That's a 93% reduction.
A Christmas fork
This project started on December 24th when I forked zereight/gitlab-mcp, a well-maintained GitLab MCP server with 850+ commits from contributors worldwide. The original is excellent—comprehensive API coverage, active community, solid foundation.
But I wanted to experiment with token efficiency and apply some engineering practices I've been refining:
- Bun runtime — Faster builds, native TypeScript support
- Strict Biome rules — Zero `any` types, no non-null assertions, cognitive complexity limits (see the sketch after this list)
- Comprehensive testing — 120+ tests covering registry, config, logger, and MCP integration
- Semantic release — Automated versioning with conventional commits
- MCP protocol logging — Structured logs for agent observability
- HTTP transport security — DNS rebinding protection for production deployments
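To make the Biome bullet concrete, here's a minimal `biome.json` sketch of the rules described above; the repo's actual configuration may set different levels or thresholds:

```json
{
  "linter": {
    "enabled": true,
    "rules": {
      "suspicious": { "noExplicitAny": "error" },
      "style": { "noNonNullAssertion": "error" },
      "complexity": {
        "noExcessiveCognitiveComplexity": {
          "level": "error",
          "options": { "maxAllowedComplexity": 10 }
        }
      }
    }
  }
}
```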
What's included
All the GitLab operations you'd expect, organized by category:
- repositories — Search, create, fork repos. Get files, push files, manage branches
- merge-requests — Create, update, merge MRs. Discussions, threads, diffs
- issues — Create, update, delete issues. Links, discussions
- pipelines — List, create, retry, cancel pipelines. Job output (requires `USE_PIPELINE=true`)
- search — Global, project, and group search across code, issues, MRs, commits
- And more — Projects, commits, namespaces, milestones, wiki, releases, users, notes, events, groups
Note: Some tool categories are disabled by default. Set `USE_PIPELINE=true` for pipeline tools, `USE_MILESTONE=true` for milestone tools, or `USE_GITLAB_WIKI=true` for wiki tools.
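Assuming these flags are read as plain environment variables, they go in the same `env` block as your token in the client config below, as string values:

```json
{
  "env": {
    "USE_PIPELINE": "true",
    "USE_MILESTONE": "true",
    "USE_GITLAB_WIKI": "true"
  }
}
```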
Try it
Install via npm or bun, then add it to your MCP client configuration:
```json
{
  "mcpServers": {
    "gitlab": {
      "command": "npx",
      "args": ["efficient-gitlab-mcp-server"],
      "env": {
        "GITLAB_PERSONAL_ACCESS_TOKEN": "glpat-xxxxxxxxxxxxxxxxxxxx",
        "GITLAB_API_URL": "https://gitlab.com"
      }
    }
  }
}
```
Or check out the GitHub repo for full documentation, including HTTP transport setup and self-hosted GitLab configuration.
Published to the official MCP registry
As of today, Efficient GitLab MCP is published to the official MCP Registry—the nascent but growing canonical source for MCP servers maintained by the MCP community.
The cool thing about the official registry is that PulseMCP (my favorite MCP server discovery resource, and one I built a Raycast extension for 🙂) automatically ingests everything from it. Previously, I had to manually submit servers via their form. Now, publishing to the official registry is enough; PulseMCP picks it up automatically. I don't know yet how often that sync runs, but I've reached out to Tadas and will add a note here when I find out (if I remember).
This is the MCP ecosystem maturing in real-time. A single publish propagates across discovery tools.
What's next
The progressive disclosure pattern (something I learned about from this blog post by Anthropic) could apply to other large MCP servers. Any domain with dozens of tools—cloud providers, CRMs, project management—could benefit from this approach. I previously applied the pattern to my other published server; that one hasn't gotten the automatic publish-to-the-official-MCP-registry treatment yet. It's on the todo list.
For now, I'm using this daily in my GitLab workflows. It's amazingly effective, and far better than having your agent constantly run `glab api ...` commands. If you try it out, I'd love to hear how it works for you.