Smart Git Repository Compression for AI Agents

Optimize repository ingestion for LLMs with intelligent compression that preserves critical code while reducing token usage.

No credit card required • Free for public repos under 200 MB

Context-Aware Compression

Reduce token usage by up to 75% while preserving the code context an agent needs to understand the repository.

Lightning-Fast Processing

Optimize codebases in seconds to improve AI agent performance.

Support for All Major Languages

Compress codebases written in Python, JavaScript, Java, Go, and more.

Simple Integration
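
The packed repository is served over plain HTTP, so it can be fetched with a single request before any agent wiring. A minimal sketch, assuming the same pack endpoint and text format used in the Quick Start below; other output formats, if any, are not shown on this page:

import requests

# Fetch a compressed pack of a public repository. The endpoint and the
# "text" format are the ones used in the Quick Start below.
url = "https://repo-ai.fly.dev/pack/huggingface/smolagents"
response = requests.get(url, params={"format": "text"}, timeout=60)
response.raise_for_status()

packed_repo = response.text
print(f"Packed repository: {len(packed_repo):,} characters")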

Quick Start

from langchain_core.tools import tool
from langchain.chat_models import init_chat_model
import requests

@tool
def get_repo_summary(username: str, repo: str) -> str:
    """Get a summary of a GitHub repository."""
    # The pack endpoint returns the compressed repository as plain text.
    url = f"https://repo-ai.fly.dev/pack/{username}/{repo}?format=text"
    response = requests.get(url, timeout=60)
    response.raise_for_status()
    return response.text


llm = init_chat_model("llama3-8b-8192", model_provider="groq")
llm_with_tools = llm.bind_tools([get_repo_summary])

query = "How does the huggingface/smolagents framework implement its multi-step agent flow?"
ai_msg = llm_with_tools.invoke(query)
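
llm_with_tools.invoke returns an AIMessage whose tool_calls field lists the repository lookups the model wants to make. A minimal sketch of completing the loop, assuming LangChain's standard tool-calling flow: execute each tool call, append the resulting ToolMessage, and invoke the model again with the full conversation.

from langchain_core.messages import HumanMessage

# Keep the running conversation: the user question, the model's tool-call
# request, and the tool results.
messages = [HumanMessage(query), ai_msg]

# Invoking a tool with one of the model's tool_call dicts returns a
# ToolMessage containing the packed repository text.
for tool_call in ai_msg.tool_calls:
    messages.append(get_repo_summary.invoke(tool_call))

# Ask the model again, now with the compressed repository in context.
final_answer = llm_with_tools.invoke(messages)
print(final_answer.content)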