Pricing
Free during beta. Full access to inference, model catalog, and planning tools.
Public Beta - No Payment Required
All features, including inference, are free during beta. Every account receives credits to run models. No credit card needed.
Free
Beta. Discover models, run inference, and plan deployments.
- Inference API access (beta credits included)
- Model catalog with structured data
- Side-by-side model comparison
- GPU capacity planner
- Model recommender
- Basic saved history
Pro
Beta. Higher limits, exports, and priority access to new features.
- Everything in Free
- Higher inference credit allocation
- Unlimited saved models and comparisons
- Export comparisons to CSV and PDF
- Priority access to new models
- Request custom usage limits
Enterprise
For teams adopting AI models at scale.
- Dedicated inference capacity
- Team access and collaboration
- Custom model onboarding
- Dedicated support
- SLA and uptime guarantees
What every account gets
Available to all users. Tier-specific limits apply to saved items and exports.
Inference
- OpenAI-compatible API
Single endpoint, works with existing SDKs
- Smart routing
Automatic model selection based on prompt complexity
- Playground
Test and compare models from your browser
Discovery and Planning
- Model catalog
Structured data on hundreds of models, updated weekly
- Side-by-side comparison
Evaluate models across capabilities and benchmarks
- GPU capacity planner
VRAM calculations and GPU recommendations for self-hosting
Standard rate limits apply to ensure platform stability. If you need higher limits, contact us.
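To illustrate the kind of arithmetic the GPU capacity planner automates, here is a minimal sketch. The formula and the 1.2x overhead factor are assumptions for illustration, not the planner's actual model: weight memory is roughly parameter count times bytes per parameter, with headroom for activations, KV cache, and framework buffers.

```python
def estimate_vram_gb(num_params: float, bytes_per_param: float,
                     overhead: float = 1.2) -> float:
    """Rough VRAM estimate for self-hosting a model's weights.

    num_params: total parameter count (e.g. 7e9 for a 7B model)
    bytes_per_param: 2 for fp16/bf16, 1 for int8, 0.5 for 4-bit quantization
    overhead: multiplier for activations, KV cache, and framework buffers
              (1.2 is an illustrative assumption, not a measured value)
    """
    return num_params * bytes_per_param * overhead / 1e9

# A 7B model in fp16 comes out to roughly 16.8 GB by this estimate,
# so it fits on a 24 GB GPU but not a 16 GB one.
print(round(estimate_vram_gb(7e9, 2), 1))
```

Real requirements also depend on context length and batch size, which is exactly why a dedicated planner is more reliable than a back-of-the-envelope formula like this one.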
Frequently asked questions
How does inference pricing work?
During beta, every account receives free inference credits. After beta, pricing will be pay-as-you-go based on token usage, with model-specific rates.
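Pay-as-you-go billing of this kind is typically quoted per million tokens, with separate input and output rates. A minimal sketch; the rates in the example are hypothetical placeholders, not Inferbase prices:

```python
def inference_cost(prompt_tokens: int, completion_tokens: int,
                   input_rate: float, output_rate: float) -> float:
    """Cost in dollars for one request.

    input_rate / output_rate are in dollars per million tokens and are
    caller-supplied; the values used below are purely illustrative.
    """
    return (prompt_tokens * input_rate + completion_tokens * output_rate) / 1e6

# 2,000 prompt tokens and 500 completion tokens at $0.50 / $1.50 per million:
print(f"${inference_cost(2000, 500, 0.50, 1.50):.6f}")
```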
Do I need a credit card to use Inferbase?
No. Everything is free during beta; no payment details are required.
When will paid plans launch?
After beta, once billing and enterprise features are in place. Existing users will get advance notice and a clear migration path.
Is the inference API OpenAI-compatible?
Yes. You can use the OpenAI SDK or any compatible client. Just point it to our endpoint and use your Inferbase API key.
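"OpenAI-compatible" means the endpoint accepts the standard chat-completions request shape. The sketch below builds such a request with the Python standard library; the base URL and model name are placeholders (substitute the values from your Inferbase dashboard), and the actual send is left commented out.

```python
import json
import urllib.request

BASE_URL = "https://api.inferbase.example/v1"  # placeholder, not a real endpoint
API_KEY = "YOUR_INFERBASE_API_KEY"

# Standard OpenAI chat-completions payload shape.
payload = {
    "model": "your-chosen-model",  # placeholder model ID
    "messages": [{"role": "user", "content": "Hello!"}],
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send the request
```

With the official OpenAI Python SDK the equivalent is constructing the client as `OpenAI(base_url=..., api_key=...)` and calling `chat.completions.create` as usual.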
Can my team use Inferbase?
Team features are planned. For early enterprise access, contact us and we will work out the details.
Start building with the right model.
From model selection to production: one platform, no fragmentation.