Excited for the future!

I’m Sabareesh, a curious researcher exploring Large Language Models (LLMs) and reinforcement learning. By studying the inner workings of LLMs, I’m working to better understand their capabilities, uncover insights, and contribute meaningfully to these transformative fields.

Let’s create something extraordinary together!

MCP Compact: Keep Agent Context Lean

The problem: MCP agents return bulky tool outputs (screenshots, DOM dumps, network traces) and quickly blow past context limits. Downstream steps stall or get fuzzy because the signal is buried.

TL;DR: MCP Compact sits between your agent and MCP server, summarizes noisy tool outputs per-tool, and keeps context lean (e.g., 109k DOM -> 8.9k tokens) without changing agent code.

What MCP Compact does: it sits between your agent and the upstream MCP server, forwards every tool call, and summarizes the response with an LLM. You set per-tool rules (token budget, what to preserve), and the proxy enforces them automatically. ...

November 20, 2025 · 2 min · Sabareesh

All You Need is 4x 4090 GPUs to Train Your Own Model

How I built an ML rig for training LLMs locally, exploring hardware choices, setup tricks, and lessons learned along the way.

December 28, 2024 · 6 min · Sabareesh

Defining AGI

A thoughtful exploration of Artificial General Intelligence (AGI) through three fundamental concepts.

December 28, 2024 · 1 min · Sabareesh

Embarking on My Journey into LLM

Join a curious engineer’s quest into the fascinating world of Large Language Models (LLMs). From tinkering with GPUs to unraveling the mysteries of architectures like Llama2, this journey is filled with challenges, breakthroughs, and the relentless pursuit of understanding AI’s limitless potential.

December 27, 2024 · 5 min · Sabareesh