Built for developers.
One API endpoint. Predictive cost intelligence. Drop-in replacement for your existing LLM calls. Start getting smarter routing in minutes, not days.
Start with a single API call
curl -X POST https://modelmeteriq.com/api/proxy \
  -H "Content-Type: application/json" \
  -H "Cookie: sb-access-token=YOUR_SESSION_TOKEN" \
  -d '{
    "model": "anthropic-claude-haiku-4-5",
    "messages": [{"role": "user", "content": "Summarize this quarterly report."}],
    "max_tokens": 2000
  }'
Routes your request through ModelMeteriQ to the provider and returns the LLM response with usage tracked on your account.
Three steps to smarter LLM spending
Sign up and start analyzing
Create a free account. Start analyzing prompts and routing calls instantly through the web dashboard. No API key needed.
Point your calls to ModelMeteriQ
Replace your LLM provider endpoint with our proxy URL. Your existing code works as-is. We analyze, score, and route automatically.
Watch accuracy improve
Every call feeds the prediction engine. Check your accuracy dashboard daily. Weekly recalibrations make predictions sharper over time.
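Step two above can be sketched in plain Python using only the standard library. The endpoint path, header names, and payload shape are taken from the cURL quickstart; the function and variable names are illustrative, not part of an official SDK:

```python
import json
import urllib.request

# Proxy endpoint from the cURL quickstart above.
PROXY_URL = "https://modelmeteriq.com/api/proxy"

def build_proxy_request(session_token: str, model: str, prompt: str,
                        max_tokens: int = 2000) -> urllib.request.Request:
    """Build (but do not send) a chat request routed through the ModelMeteriQ proxy."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return urllib.request.Request(
        PROXY_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Session-based auth: the Supabase session token rides in the Cookie header.
            "Cookie": f"sb-access-token={session_token}",
        },
        method="POST",
    )

req = build_proxy_request("YOUR_SESSION_TOKEN", "anthropic-claude-haiku-4-5",
                          "Summarize this quarterly report.")
# With a valid session, sending is one line:
# with urllib.request.urlopen(req) as resp: print(resp.read().decode())
```

Because the request body and headers match the cURL example exactly, any HTTP client your stack already uses can be pointed at the proxy the same way.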
Authentication
ModelMeteriQ uses session-based authentication. Log in through the web app, and your session cookie authenticates API requests from the same browser. For programmatic access, include your Supabase session token in the Cookie header.
API Reference
Four endpoints. Everything you need.
Route a call through ModelMeteriQ to any supported LLM provider
Submit a prompt for cost and performance prediction (no LLM call made)
Check your prediction accuracy scores and trends
View your token consumption and daily limits
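The four endpoints above can be collected into a small routing table. Note that only `/api/proxy` is documented on this page; the other three paths below are illustrative placeholders, not confirmed routes:

```python
# Hypothetical endpoint paths -- only "/api/proxy" appears in the docs above;
# the other three are illustrative placeholders for the listed capabilities.
ENDPOINTS = {
    "proxy": "/api/proxy",       # route a call to a supported LLM provider
    "predict": "/api/predict",   # cost/performance prediction, no LLM call made
    "accuracy": "/api/accuracy", # prediction accuracy scores and trends
    "usage": "/api/usage",       # token consumption and daily limits
}

def url_for(name: str, base: str = "https://modelmeteriq.com") -> str:
    """Resolve a capability name to a full URL."""
    return base + ENDPOINTS[name]
```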
Works with your stack
Python
SDK coming soon
TypeScript
SDK coming soon
cURL
Works out of the box
SDKs coming soon. The REST API works with any language today.