Bengaluru-based AI startup Sarvam AI has introduced two new large language models, Sarvam-30B and Sarvam-105B, at the ‘India AI Impact Summit 2026.’ The Sarvam-30B model supports a context length of up to 32,000 tokens, enabling more efficient usage and reduced power consumption. Trained on 16 trillion tokens, it is designed to deliver stronger reasoning while consuming fewer tokens.
Sarvam-105B, meanwhile, supports a context length of 128,000 tokens, enabling agents to tackle complex reasoning tasks. Sarvam says the model's performance is comparable to leading open- and closed-source models in its segment.
Sarvam AI has also formed strategic partnerships with global tech companies including Qualcomm, Bosch, and Nokia to integrate and deploy these models.
Source: Inc42 Media