Key takeaways and Tabnine FAQs from Nvidia GTC

Posted on March 27th, 2024

Last week we sponsored Nvidia GTC, joining about 30,000 developers, business leaders, and AI researchers in San Jose, CA for one of the premier AI conferences. 

Tabnine’s booth in the Generative AI section of the exhibit hall received a steady stream of attendees who stopped by to see a demo of Tabnine in action and to scoop up stickers, T-shirts, and custom Tabnine keycaps. It was great to connect with long-time Tabnine users, as well as to introduce Tabnine to developers learning about our AI tools for the first time. 

Visitors to our booth could also opt in to trade a scan of their badge for a donation to charity, with $3 going to their choice of CodeNation, the local Computer History Museum, or Women Who Code — ultimately raising $1,000 for these charities.

Among all the great conversations we had during the conference, a few questions came up repeatedly. So we thought it would be a good idea to share the most common questions, along with our answers.

Q: What’s the size of your model?

This was the first time I was asked this question, but given the conference audience, it makes sense that there’d be interest in the number of parameters in our large language model (LLM). 

Tabnine actually uses three models. The first is a CPU-optimized model consisting of 2 billion parameters that runs on the local machine for code completion. The other two models run on a Tabnine server. One model consisting of roughly 10 billion parameters provides enhanced code completion capabilities, while the other, with close to 20 billion parameters, supports Tabnine Chat. You can learn more about our models on our Code Privacy page.

In summary, model size does matter, but only insofar as it makes the model more flexible (e.g., able to rewrite Macbeth as a Star Trek episode or create an image of Julius Caesar crossing the Rubicon). When you know the training data (fully permissive open source) and the use case, the size of the model matters far less than the quality of the data and the security of the model. 

Q: My organization has restrictions on AI tool usage. Can Tabnine work with us?

As we say, Tabnine is the AI coding assistant that you control, and we work with companies with a wide range of AI restrictions. We focus on three P’s: privacy, personalization, and protection. For compliance concerns, privacy and protection matter most.

We have a zero code retention policy that ensures your codebase remains private and is never used to train our models. Any code passed to our servers is held only in memory and deleted as soon as a response is returned. For those requiring extra privacy, Tabnine can also be installed on-premises behind a firewall.

Tabnine also protects you with strict license compliance: we’ve trained our own models exclusively on permissively licensed open source repositories. You can work with our Sales team to figure out what works best for your organization, and learn more at our online Trust Center.

Q: Will Tabnine work with my IDE?

The integrated development environment (IDE) a developer chooses to work in can be a deeply personal choice, and a lack of support could be a deal breaker. Tabnine works with all of the most popular IDEs. During the conference, we talked about VS Code, Visual Studio, the JetBrains family of IDEs, and even Neovim. You can learn more about our supported IDEs in our documentation. 

Q: How much does Tabnine cost?

The Basic version of Tabnine is free and provides AI code completions from a local LLM. For individual developers and small teams, we recommend Tabnine Pro, which provides enhanced code completion and access to Tabnine Chat, allowing you to generate code, tests, documentation, and more. Pro includes a 90-day free trial; after that, it’s $12 per user per month when billed annually. 

Enterprise engineering teams get the full capability of Tabnine and can deploy anywhere for just $39 per user per month. You can learn more on our Plans & Pricing page.

Q: How does Tabnine compare with GitHub Copilot?

This was by far the most asked question all week. 

GitHub is owned by Microsoft, and they’ve put a lot of effort into building awareness; you’d be hard-pressed to find a developer unfamiliar with Copilot. One difference that tends to surprise people is that Tabnine predates Copilot: we released the first AI coding assistant back in 2018 and helped create the market. Beyond that, a developer might not notice the differences at first glance because we’re so alike on the surface. The main differences between Tabnine and Copilot are our commitment to our customers’ privacy and our commitment to license compliance. 

We both help accelerate code development with a combination of code completion and AI chat. Where we differ is in our approach. Whereas Copilot uses an OpenAI LLM, we trained our own models using only permissively licensed open source repositories, which ensures that our customers never introduce IP liabilities. We also never store your data or code, nor train our models on your codebase. Uniquely, we give our customers the option of deploying Tabnine in their own environment (including our chat models): in secure SaaS, VPC, on-premises, or completely air-gapped deployments. We also offer our enterprise customers the option to use a private custom model trained on their codebase. If you’re part of an organization with a high focus on security and privacy, our solution was built with you in mind. We’re focused on bringing AI assistance to the entire software development life cycle while keeping our customers’ code and data private, secure, and compliant.

We look forward to answering your questions at our future events. Speaking of which: if you’re planning to attend Google Cloud Next or Atlassian Teams, look for our booth and come have an in-person chat. You can also join us for a webinar on April 18 to learn how to combine Tabnine’s code generation with CircleCI’s CI/CD automation to speed up software delivery. And as always, feel free to reach out to us on LinkedIn or X.