
February Changelog

Christopher Good / 3 minutes / March 31, 2025

Generative AI is moving fast. But inside the enterprise, adoption is often stalled—not by lack of interest, but by lack of fit. Engineering leaders aren’t looking for flashy demos. They’re looking for AI that integrates with the systems, policies, and architectures they’ve spent years building and securing.

At Tabnine, we’re not chasing the hype cycle. We’re building an AI development platform for mature, complex engineering organizations—the teams that know velocity without trust is a risk, and flexibility without governance is a non-starter.

This month’s releases sharpen control, expand infrastructure compatibility, and elevate the developer experience—all while staying rooted in what matters most: enterprise-grade flexibility, speed, and trust. These aren’t just new features—they’re milestones in a larger vision: AI that works with your systems, your policies, and your teams—not around them.

A UI Built for Speed, Clarity, and Developer Trust

At scale, even small usability issues become major sources of friction. Confusing interfaces slow down developers, erode trust in tools, and increase the burden on platform teams. That’s why we’ve launched a redesigned, modernized UI across Tabnine—focused on speed, clarity, and ease of use for both developers and administrators.

The experience is now faster, clearer, and more intuitive across both the IDE and Admin Console. Developers get a clean, focused interaction that keeps them in flow. Admins get more transparent model configuration, better defaults, and less ambiguity across environments.

This isn’t just about aesthetics—it’s about scaling AI adoption without increasing cognitive load. Most tools in the market treat UI as an afterthought. At Tabnine, we see it as infrastructure. A good UI isn’t just about ease of use—it’s about trust, safety, and developer momentum.

Claude 3.7: Expanding Model Choice Without Compromising Control

In 2023, most AI tools forced teams into a binary choice: take it or leave it. Single-model platforms limited experimentation and made it impossible to align AI capability with use case, policy, or risk level.

Tabnine is taking a different path: we aren’t in the business of forcing one-size-fits-all AI onto your team. With the addition of Claude 3.7, we’re expanding optionality—without ever compromising on control. Alongside other popular models, your platform team can choose the right tool for the job—backed by the same control plane, security policies, and observability that already govern your Tabnine deployment.

You stay in control of which models are enabled, where they run, and how they’re applied across your organization. This is optionality with governance built in—because enterprises shouldn’t have to choose between innovation and oversight.

Vertex AI: Private, Self-Hosted, and Built for GCP Enterprises

Too many AI platforms assume a single cloud, a single model provider, and a uniform infrastructure. That might work for a weekend hackathon—but it doesn’t work in a regulated enterprise environment with layered controls, hybrid cloud architectures, and real compliance needs.

That’s why Tabnine now supports Google Cloud’s Vertex AI in self-hosted deployments. For organizations running on GCP, this update removes a major blocker to production AI rollout—by letting you use Vertex as your LLM provider while maintaining full control over hosting, security, and policy enforcement. Organizations can run Tabnine in regulated or sensitive environments—while staying fully aligned with internal GCP infrastructure, security protocols, and data residency requirements. No risky data routing. No unmanaged third-party services. Just a clean integration between your cloud provider, your LLM, and your AI platform.
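To make the shape of this integration concrete, here is a purely illustrative configuration sketch for a self-hosted deployment pointing at Vertex AI. Every key and value below is a hypothetical placeholder, not Tabnine’s actual configuration schema—consult the Tabnine documentation for the real settings and supported models.

```yaml
# Hypothetical sketch only — key names are illustrative,
# not Tabnine's actual configuration schema.
llm:
  provider: vertex-ai          # use Google Cloud Vertex AI as the LLM backend
  project: my-gcp-project      # placeholder GCP project ID
  location: us-central1        # inference stays in your chosen region
network:
  egress: private              # traffic stays on internal GCP networking
  data_residency: enforced     # no data leaves the configured region
```

The point of the sketch: because everything is expressed against your own GCP project, the same IAM policies, networking controls, and residency rules that govern the rest of your infrastructure also govern your AI traffic.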

You don’t need to rewire your infrastructure or compromise on where data flows. This is how enterprise AI should work: flexible, composable, and compliant by default. If you’re building multi-cloud or managing sensitive workloads, Vertex + Tabnine gives you the power and control to move fast without introducing unnecessary risk.

For mature teams operating across multi-cloud or hybrid environments, this is exactly the kind of flexibility that turns AI from a pilot project into a production standard. It’s another step in our mission to support every enterprise architecture—no matter how complex.

Looking Ahead

We’re building for the engineering organizations that are already operating at scale—and need AI to meet them there. Compliance isn’t optional. Tooling can’t get in the way. And developer trust has to be earned, not assumed.

Tabnine is built for the real world, where infrastructure isn’t uniform and velocity can’t come at the cost of trust. More importantly, it’s built for the teams who operate at that scale every day: the mature, complex engineering organizations shaping what software will look like in the next decade.

This is the foundation of our platform vision: an AI development environment that’s fully customizable, inherently secure, and designed to meet the needs of organizations building at scale.

We host Tabnine Office Hours every Wednesday, where engineering leaders, platform teams, and AI champions come together to share insights, ask questions, and influence our roadmap. We’d love to see you there. Reserve your spot for our next Office Hours.