BitrixGPT 5.5: a new generation of free models in AI Router
The Bitrix24 team has rolled out a new generation of its own models. They are already available through the single OpenAI-compatible endpoint POST /v1/chat/completions — no extra setup required.
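A minimal request sketch against that endpoint. The base URL and the Bearer-token auth style are assumptions for illustration (only the /v1/chat/completions path and the Vibe API key requirement are stated in this announcement):

```python
import json

# Assumed values: substitute your actual deployment host and key.
BASE_URL = "https://vibecode.bitrix24.tech"  # assumption, not confirmed here
API_KEY = "vibe-..."  # a Vibe API key with the vibe:ai scope

headers = {
    "Authorization": f"Bearer {API_KEY}",  # OpenAI-style auth (assumed)
    "Content-Type": "application/json",
}

# A standard OpenAI-compatible chat completion body.
payload = {
    "model": "bitrix/bitrixgpt-5.5",
    "messages": [
        {"role": "user", "content": "Summarize this deal pipeline in one sentence."}
    ],
}

body = json.dumps(payload)
print(f"POST {BASE_URL}/v1/chat/completions")
```

Any OpenAI-compatible client library pointed at this base URL should work unchanged.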
What's new. bitrix/bitrixgpt-5.5 — the standard model: 262K context, vision built-in. It's the new platform default: omit the model field in your request (or pass "auto" / "bitrix/free") and you land on this one. bitrix/bitrixgpt-5.5-thinking — same backbone in reasoning mode via standard think-tags that our platform parses automatically. Better for logic, math, and multi-step tasks where you want to see the chain of thought.
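The platform parses think-tags from bitrix/bitrixgpt-5.5-thinking automatically, but if you handle raw output yourself, a splitter like the one below is enough. The `<think>...</think>` tag form is an assumption for illustration; the announcement does not spell out the exact tag syntax:

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Separate the chain of thought from the final answer in raw model output.

    Assumes reasoning is wrapped in <think>...</think> (illustrative format).
    """
    match = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    if not match:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

raw = "<think>13 * 7 = 91, so the answer is 91.</think>The result is 91."
reasoning, answer = split_reasoning(raw)
print(answer)  # -> The result is 91.
```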
Pricing. Both models are free through the platform — you only need a Vibe API key with the vibe:ai scope. No counters, no trial caps.
What happens to the legacy bitrix/bitrixgpt-5. It's marked DEPRECATED and stays online until 31 July 2026 — a grace period so internal Bitrix24 teams can migrate to 5.5. After 31 July, any request to bitrix/bitrixgpt-5 or bitrix/bitrixgpt-5-vl is transparently redirected to bitrix/bitrixgpt-5.5: no 404, the X-Model-Replacement header tells your AI agent which model to use going forward. This is the platform's standard model lifecycle mechanism (RFC 8594 Sunset + IETF deprecation-header).
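A sketch of how an agent can honor that header after the sunset date. The Sunset and Deprecation values shown are illustrative, not actual server output:

```python
def resolve_model(requested: str, response_headers: dict) -> str:
    """Follow X-Model-Replacement to the successor model, if the server sent one."""
    return response_headers.get("X-Model-Replacement", requested)

# Simulated response headers for a post-sunset request (values are illustrative).
sample_headers = {
    "Deprecation": "true",
    "Sunset": "Fri, 31 Jul 2026 23:59:59 GMT",
    "X-Model-Replacement": "bitrix/bitrixgpt-5.5",
}
print(resolve_model("bitrix/bitrixgpt-5", sample_headers))  # -> bitrix/bitrixgpt-5.5
```

Pinning the replacement in your config once, rather than relying on the redirect forever, keeps one hop out of every request.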
Vision: one URL instead of two. The old bitrix/bitrixgpt-5-vl handled images. In 5.5 vision is built in — send your content in the standard OpenAI format (array of parts with type:text and type:image_url) straight to bitrix/bitrixgpt-5.5. Same schema as Claude and GPT-4o, but free.
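A vision request in that multipart format. The content array with `type: text` and `type: image_url` parts is the standard OpenAI chat schema the announcement refers to; the image URL is a placeholder:

```python
# Text and image travel in one message as an array of typed parts.
vision_payload = {
    "model": "bitrix/bitrixgpt-5.5",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What product is shown on this invoice?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/invoice.png"},  # placeholder
                },
            ],
        }
    ],
}
```

Code that previously targeted bitrix/bitrixgpt-5-vl only needs the model name changed; the message schema stays the same.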
How to try. In the UI: open /ai, find the BitrixGPT 5.5 card, and hit "Use"; a ready-made prompt for an AI agent is copied to your clipboard. Without the UI: existing code calling /v1/chat/completions needs no changes. The legacy bitrix/bitrixgpt-5 keeps working until 31 July 2026, and if you don't specify a model explicitly, you're already on 5.5.
Documentation: https://vibecode.bitrix24.tech/docs/ai/chat
If something is broken or you want a feature, send feedback right from the app through the AI agent and we'll look into it.