March Changelog
Christopher Good /
4 minutes /
April 1, 2025

Last month, we drew a line: Tabnine isn’t chasing the hype cycle. We’re building for the real world of software engineering—where systems are complex, constraints are real, and quality matters.

AI should adapt to your developers, your policies, and your infrastructure—not the other way around.

This month’s updates deliver on that promise. More control. Deeper context. Smarter governance. These aren’t features. They’re building blocks for a future-proof platform: enterprise AI that’s fast, trusted, and secure by design.

Context Scoping That Understands Real Engineering Workflows

In large-scale environments, code doesn’t live in a neat local folder. It’s spread across terminals, remote repositories, mounted volumes, and multi-tool dev chains. But most AI tools can only see a narrow slice of that reality—making their suggestions unreliable, or worse, misleading.

This month, we expanded Tabnine’s context scoping to include remote files, folders, and active terminal sessions. That means Tabnine can now reason with a broader and more accurate picture of what the developer is actually doing.

This isn’t just an enhancement—it’s a requirement. Enterprise AI without access to meaningful context is like a senior engineer who’s only read half the ticket. This is a direct answer to a common pain point with generic AI tools: “It’s fast, but it doesn’t know what I’m doing.” 

Because in the enterprise, context isn’t just nice to have—it’s everything.

Provenance, Attribution, and Censorship: Control Where Your Code Comes From

As AI-generated code becomes a bigger part of your codebase, the question of where it came from matters more than ever. Engineering leaders are rightfully concerned about license contamination, IP leakage, and the downstream risk of unauthorized code inclusion.

This month, we’ve taken two major steps to help enterprises own their security posture when it comes to provenance:

  • Attribution controls and database updates ensure that the AI suggestions your developers see remain aligned with your policies around third-party code and license use.

  • Censorship toggles allow organizations to proactively block Tabnine from suggesting any code with restricted origins or privileged patterns.

This isn’t just about compliance—it’s about trust. Your developers need to move quickly without second-guessing every suggestion. Your security team needs to know that those suggestions are clean, trackable, and policy-aligned.

Tabnine makes that possible, and we’re continuing to lead the industry in enterprise-grade attribution and provenance infrastructure.

Tabnine Chat: Responsive, Contextual, and IDE-Native

Tabnine Chat is becoming the primary interface between engineers and their AI assistant. But in order to earn that position, it has to be fast, intelligent, and above all—non-intrusive.

We’ve redesigned the Apply button for better usability and performance. It now works seamlessly, implementing suggestions provided by Tabnine Chat regardless of your cursor location or page state—ensuring that inserting a suggestion is immediate, intuitive, and doesn’t interrupt your flow.

We’ve also expanded indexing support to Perforce, allowing even teams using legacy or specialized VCS tools to benefit from full-chat context awareness.

These improvements are part of our belief that AI should adapt to you, not the other way around. Where other tools push centralized chat UIs or web-first workflows, Tabnine is meeting developers inside their environment—on their terms.

Full Model Flexibility: Claude 3.7, Gemini 2.0 Flash, and Admin-Level Control

AI model choice is no longer just a question of performance—it’s a matter of policy, architecture, and preference. This March, we delivered the flexibility enterprise customers have been asking for.

We’ve added:

  • Claude 3.7 Sonnet as the new default model for Enterprise SaaS deployments via Tabnine’s endpoint.

  • Gemini 2.0 Flash as a supported model for private deployments via Vertex AI.

  • A new admin toggle that allows complete control over which models are active—including the ability to disable Tabnine Protected entirely.

This shift is critical for organizations that want to fully customize their AI architecture. Whether you’re standardizing on a specific provider, managing latency/cost tradeoffs, or ensuring alignment with internal risk policies—you’re now in full control.

Other tools bake in their own models and force you to opt out. With Tabnine, you decide what models your engineers can use, where they run, and how they’re governed.

SMTP with OAuth: Aligning with Enterprise-Grade Security Standards

For self-hosted, air-gapped, or security-sensitive environments, Tabnine continues to evolve into a deeply configurable platform. This month, we’ve added support for SMTP with OAuth, replacing legacy user/password authentication with modern, token-based email integration.

Why does this matter? Because authentication is often the silent failure point in enterprise tooling. Static credentials introduce risk. OAuth tokens align with security best practices—and are often mandatory in organizations with hardened security postures or centralized SSO enforcement.

This update allows Tabnine to integrate more safely with internal systems while aligning with your org’s identity and access control standards. It’s another example of how we’re not just adding features, but building toward a platform that respects and reinforces your security architecture.
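As a rough illustration of why token-based SMTP auth differs from legacy user/password login, here is a minimal Python sketch of the SASL XOAUTH2 mechanism that providers such as Gmail and Microsoft 365 accept. The host, port, user, and access token shown are hypothetical placeholders, and this is a generic sketch of the mechanism, not Tabnine's internal implementation.

```python
import base64
import smtplib
from email.message import EmailMessage

def xoauth2_string(user: str, access_token: str) -> str:
    """Build the base64-encoded SASL XOAUTH2 initial response.

    The wire format is: user=<user>\x01auth=Bearer <token>\x01\x01
    """
    raw = f"user={user}\x01auth=Bearer {access_token}\x01\x01"
    return base64.b64encode(raw.encode("utf-8")).decode("ascii")

def send_with_oauth(host: str, port: int, user: str, access_token: str,
                    message: EmailMessage) -> None:
    """Authenticate to an SMTP server with a short-lived OAuth token
    instead of a static password (hypothetical server details)."""
    with smtplib.SMTP(host, port) as smtp:
        smtp.starttls()  # upgrade to TLS before transmitting the token
        smtp.ehlo()
        # AUTH XOAUTH2 <base64 initial response>
        smtp.docmd("AUTH", "XOAUTH2 " + xoauth2_string(user, access_token))
        smtp.send_message(message)
```

Because the access token is short-lived and issued by the identity provider, revoking access or rotating credentials happens centrally—no static mailbox password ever sits in a config file.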

Looking Ahead

As we said last month: flexibility without governance is a non-starter. This month’s releases are about proving that those two things can—and must—coexist in a modern AI platform.

With smarter context awareness, secure system integration, stronger attribution controls, and unmatched model flexibility, Tabnine continues to evolve into the platform of choice for teams leading a new renaissance in software development—where AI enhances creativity, accelerates delivery, and elevates engineering itself.

Curious how it all works in practice? Have questions, or want to see Tabnine in action? We host Tabnine Office Hours every Wednesday, where engineering leaders, platform teams, and AI champions come together to share insights, ask questions, and influence our roadmap. We’d love to see you there. Reserve your spot for our next Office Hours.