AI coding assistants: 8 features enterprises should seek

Posted on May 7th, 2023

Tabnine CEO Dror Weiss shares his insights on the transformative potential of AI technology in an article published on InfoWorld. He compares AI’s significance in shaping our future to the impact of mobile phones and the internet, and identifies eight key features that enterprises should look for in AI coding assistants to streamline software development at the enterprise level.

Read the full article on InfoWorld.

About Tabnine Enterprise

Tabnine is an AI assistant tool used by over 1 million developers from thousands of companies worldwide. With Tabnine Enterprise, software engineering teams can quickly and efficiently write higher quality code, accelerating the entire software development lifecycle. Tabnine Enterprise supports a variety of programming languages and IDEs, as well as security and compliance standards designed for enterprise software development environments.

Open source and generative AI

Posted on May 1st, 2023

An interview with Liran Tal, Director of Developer Advocacy at Snyk and a GitHub Star

We recently had the pleasure of a fantastic conversation with Liran Tal, who holds the position of Director of Developer Advocacy at Snyk and is also a GitHub Star. Get ready to delve into a captivating interview with Liran, alongside our CTO Eran Yahav and VP of Product Yossi Salomon from Tabnine. During this interview, we discussed topics such as security vulnerabilities, generative AI for coding in open source, and the intriguing places that saying “yes” can take you.

Buckle up and enjoy!

How did you become such an expert in the world of open source? 

I think it’s mainly about loving technology, trying out many new things, and not being afraid to dive into various new projects and opportunities. One of my secrets is that I almost always say yes to things – which can be good and bad. Honestly, it’s just a matter of being open and optimistic. With open source, it goes back to Linux. I started with Linux installations on all kinds of old computers, then ran BBSs in the days before the internet became mainstream in my country, and then I got into the Kernel Janitors Linux mailing list, and so on.

So your advice would be to say yes to everything? Seems like a pretty risky approach, no? 

I mean, it’s worked out pretty well for me. I know more or less how to manage my time. 

My first exposure to the tech world was in a company that was a systems integrator, offering Linux services and implementing VoIP Asterisk systems. I was more in a SysAdmin kind of role, and I realized I wanted to get deeper into programming. So I went to several interviews, one of which was an absolute disaster – but there was one company that decided to give me a chance. 

One of the first tasks I was given in my role there was to write a state machine in C++. I told the R&D manager that I’d never written C++ before, and he handed me a book on C++ and told me to get to it. It took me a couple of weeks to read the book, and I started writing C++. That’s just an example of how important it is to say yes to opportunities. This is especially important in open source, where there aren’t many set rules, and the guidelines are more unwritten, like “be a good citizen” or “don’t be an a******”. The most challenging thing about the open source community is trying to move things forward, since most contributors are working part-time on these projects. So if someone is willing to contribute, whether it’s coding, documentation, or anything else, they’re usually warmly welcomed.

Have there been any situations where you felt like you bit off more than you could chew? 

For me, it’s all a learning experience. Even in my current role as a Developer Advocate at Snyk – I had no real idea what the role entailed. Before this role, I was either a development team leader or a programmer. I’ve done public speaking at conferences around the world for many years, talking about open source, Node.js, JavaScript, and so on, but it was more of a hobby. Then someone from Snyk approached me and asked if I wanted to do it full-time. To be honest, I contemplated it for a couple of months – it was a pretty big career pivot that scared me – but I finally decided to go for it.

What are your day-to-day responsibilities in the role?

I’ve been very lucky in this role – I’m very proactive and I have lots of ideas, and this role gives me the opportunity to actually try them out. There are everyday responsibilities, including education, content creation, etc. For example, I also write command-line applications, so I created a CLI tool that scans all sorts of websites for security issues, called is-website-vulnerable. And it’s totally connected with the whole world of DevOps security. It didn’t even have a formal Jira story – it was a weekend side project that suddenly exploded with over a thousand stars on GitHub. Lots of people started contributing to it: someone extended the CLI so it outputs everything as JSON, then someone wrote a GitHub Action, and someone else connected it to his pipeline. That’s just one of many examples, especially when it comes to DevRel (developer relations), because it touches on so many other areas – product, development, end-user developers, education, and more.

 

Over the last year, for example, I’ve done a lot of security research. I’m not a security researcher or analyst by trade, but I was fascinated by it, and ended up finding and disclosing a lot of security vulnerabilities in different open-source libraries.

One of the main concerns around generative AI is security. What, from your POV, is the right approach to introducing AI to enterprise R&D teams, and integrating it with secure coding paradigms? 

The concern here is real and tangible. On the other hand, I feel like things are a bit…all over the place at the moment. For example, I use ChatGPT, and something I’ve noticed (in myself as well) is that there’s this kind of “I want to get it done fast” mentality among developers. I tell ChatGPT what I need, it gives it to me, and I copy/paste and run it. But that’s not really that new, you know? Before ChatGPT, we did the same thing. We’d ask Google, it directed us to Stack Overflow, we found an answer, saw that a million people upvoted it, and we copy-pasted it. So it’s not like things have changed that much for developers.

But on Stack Overflow, you’ve got some indication regarding quality and trustworthiness: Upvotes, downvotes, stars, downloads, etc. ChatGPT speaks with absolute confidence, whether it’s speaking truth or nonsense.

The concern you raise is valid and would be even more pressing if ChatGPT were within the IDE. Currently, you have to copy-paste from the web chat into the IDE, which still involves some friction. It’s small, but still enough to make it impractical, so a developer probably won’t do it constantly. Once it’s within the IDE, it becomes a much larger source of concern. I’ll put a disclaimer in here – I’m not a data scientist, and I don’t pretend to be. The issue is around how the models are trained. The ChatGPT model that I’m familiar with generates code based on the code it’s been exposed to, which obviously includes bad code as well, with lots of security vulnerabilities. So the source of the data is definitely another concern. How was the model trained? How robust is the data? Is it secure, high-quality, high-performance, fast, and efficient in terms of memory use? Has the function that ChatGPT generated for me been checked, secure-coded, and tested? We know that the answer to all of these is probably no.

So what are our expectations? It’s probably no worse than code written by a developer, but maybe our expectations are higher because supposedly a machine can do better.

I think that there’s an issue of awareness. Just this morning I arrived back from London. I was at a conference called CityJS, and I gave a lecture on “Path Traversal attacks”, teaching developers about a specific vulnerability that occurs in very popular open source libraries that have been downloaded millions of times, and that has even affected the Node.js runtime itself. The evening before my talk I was reviewing my slides and decided to open my lecture with a story. So I asked ChatGPT to create a file upload route in a Fastify web framework Node.js application, without any dependencies. The first code it generated looked really good. Functionally it almost worked. There was something small missing from the code – a method – which I fixed. It then regenerated the code for me. In both versions, there was a vulnerability that I noticed, but didn’t comment on, because I wanted to see how ChatGPT would handle it. In terms of functionality, it now worked, but it still had a security vulnerability – basically, the code didn’t save the file in the correct way. So I opened my lecture by showing how commonplace these types of vulnerabilities are, even in code generated by ChatGPT – and I didn’t have to work hard to find an example at all.
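
For readers who haven’t seen a path traversal bug before: Liran’s slides and ChatGPT’s actual output aren’t reproduced here, so the following is only a minimal sketch of the class of bug he describes, written as a Fastify route in TypeScript. The route name, the JSON body shape, and the validation approach are illustrative assumptions, not the code from his talk.

// Hypothetical sketch only -- not Liran's example or ChatGPT's output.
// Assumes Fastify v4 and a JSON body of { filename, content }.
import Fastify from "fastify";
import { promises as fs } from "fs";
import path from "path";

const app = Fastify();
const UPLOAD_DIR = path.resolve("./uploads");

app.post("/upload", async (request, reply) => {
  const { filename, content } = request.body as { filename: string; content: string };

  // VULNERABLE version: using the filename as-is lets "../../somewhere/else"
  // escape UPLOAD_DIR and write anywhere the process can reach:
  //   const target = path.join(UPLOAD_DIR, filename);

  // Safer version: resolve the path and verify it stays inside UPLOAD_DIR.
  const target = path.resolve(UPLOAD_DIR, filename);
  if (!target.startsWith(UPLOAD_DIR + path.sep)) {
    return reply.code(400).send({ error: "Invalid filename" });
  }

  await fs.mkdir(UPLOAD_DIR, { recursive: true });
  await fs.writeFile(target, content);
  return { saved: path.basename(target) };
});

app.listen({ port: 3000 });

The single check against the resolved path is exactly the kind of small detail that a plausible-looking generated route can omit.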

So how do we educate developers about the risks? 

Well, using common sense and discretion is a big part of it. The world of AI is here to help us work faster, but we still need to use our own judgment to steer it in the right direction. I was chatting with a friend who was consulting for a company, and they showed him a bug generated by ChatGPT that he recognized from Stack Overflow – he’d seen the same bug in three different projects where developers had copied the same code, and ChatGPT had obviously trained on it as well. This highlights the importance of education. While augmented development can increase productivity, for critical tasks we need to provide the machine with more detailed instructions, background, and context. We may need to train on code repositories with few known vulnerabilities, or increase supervised learning for the models. It’s about using our judgment to direct the machine toward what we need.

 

If you’re an enterprise looking to incorporate AI into your software development lifecycle, Tabnine is an exceptional option.

By using Tabnine Enterprise, you can benefit from contextual code suggestions that boost your productivity by streamlining repetitive coding tasks and producing high-quality, industry-standard code. Tabnine’s code suggestions are based on large language models trained exclusively on open source code with permissive licenses.

 

What do you think about the idea of showing the developer how, for instance, a snippet of code is similar to a snippet on Stack Overflow? So for the example you mentioned, where code generated by ChatGPT had the same bug seen several times before on Stack Overflow, maybe something like that could help?

That could be an interesting direction. Let’s try to apply this thought process to other things that are already happening in the ecosystem. For example, open source libraries. Let’s say a developer wants to implement a certain functionality. They’d search for it in the libraries and would probably find quite a few components to choose from, since the ecosystem is very mature at this point. In reality, I think they’d probably look at the project – see if there are tests, how many stars it has, how many downloads and contributors – and use it. So even before the developer starts writing the code, on the component level, there often isn’t the awareness that you need to go in and make sure that the code looks like it should.

So what are the best practices? What do you suggest?

Well, for instance, at Snyk, we built a tool called Snyk Advisor, which examines four perspectives – popularity, security, community, and maintenance – and provides a score for each. So, for instance, let’s say there’s a package without a commit or release for over a year – that’s a red flag. The code might be amazing, with cross-platform testing and everything, but if it’s not maintained and has a vulnerability or even a bug, there’s no one to fix it on the spot, and you’ll end up stuck.

There’s another cheat that I don’t often speak about. Node.js’s Canary in the Goldmine (CITGM) is a tool that pulls down modules from npm and tests them against new Node.js versions to see if they’ll break.

Another tip is to look at prolific maintainers, like Matteo Collina and others. Look at what they wrote or what they choose to use. That goes to reputation: if you know that a certain developer chose to use a specific package, they probably already did their due diligence, so you can use it.

But say, for instance, the packages that make it into the Node test suite, they’re probably within the “trusted compute base,” right? 

There could be packages that have been downloaded 5 million times but could be terrible and break things. The CI is there to ensure that nothing in the ecosystem is broken, but it’s not an assurance that a package is good and should be used. So it works well as a basic filter, but it’s not enough to ensure the quality of the package.  

Let’s assume that in the future, code will be generated by tools like Tabnine, which are already in the IDE, and the volume of generated (or copy-pasted) code will be significantly larger than what we’re used to today. How will that affect the ecosystem, especially in CI/CD? Does it transfer more responsibility to the gatekeepers? Obviously, developers will still be responsible for their code, but does it potentially transfer more responsibility to someone who’s supposed to catch issues further down in the SDLC funnel? As the manager of an R&D organization that adopts AI code generation, which processes and tools should I put in place to ensure that my code remains at the same quality as in the past? On the one hand, we’re creating systems that are far more accelerated. On the other hand, there’s a higher risk. So what can we do to ensure that production issues are prevented?

The Phoenix Project, the Accelerate book, and Puppet’s yearly “State of DevOps” report all talk about DevOps and link it to security. I think that the more mature an organization is – for example, when you’re already in the CI/CD phase with lots of feature flags, you’re very advanced in terms of technology, and you can fail fast – the easier it is to integrate and use lots of different tools, for security and more. So maybe the controls won’t change much, but they’ll get sharper.

As code is generated faster with tools like Tabnine, there needs to be a greater emphasis on code review to catch any issues. Even with all the other security-related DevOps processes, like dynamic and static application security testing, someone needs to review the code to ensure it’s correct and working, including edge cases. Code review tools and processes aren’t meant to distinguish between good and bad coders, but to identify and catch specific issues. It’s particularly important for less mature companies to have a defined code review process, because generating lots of code doesn’t necessarily mean it’s good code, and mistakes and vulnerabilities can easily slip through without proper testing and review.

In other words, the bottleneck in the dev process could move from writing code to code review or maybe testing, while the throughput remains the same.

Yes. To give a parallel from the world of dependencies: if I go back a couple of “generations,” to the time when everything was embedded, you’d open your editor, create a new project with a blank page, and start writing. Today, if you want to create a project – let’s say it’s Java – you’ve got the Spring Starter Project or Spring Boot or whatever. Your default is to pull in, like, twenty dependencies. And with JavaScript, it’s even larger. So the default is that you’ve got lots and lots of code, and what I’m writing is maybe 10% business logic or something like that. So this world has really accelerated in a dramatic way.

 

Mastering the AI-driven world of software development in 2023

Posted on April 23rd, 2023

Artificial intelligence has become a critical tool for organizations to enhance productivity, innovation, and competitiveness. This webinar, which features Dror Weiss, Tabnine’s CEO, and Brandon Jung, VP of Ecosystems, covers the best ways to integrate AI into your team’s workflow.

Topics covered include whether AI will replace developers (spoiler: no; it will make them more efficient, and many developers are already incorporating AI into their daily work), the most productive ways that R&D organizations, developers, IT departments, and system integrators are using AI within the SDLC, and how Tabnine’s AI technology helps developers save time and code better.

In addition, the webinar touches on the evolution of programming languages, the role of developers in the era of AI, how AI is giving developers superpowers, the value of AI as a learning tool for coding, and how, in the future, every organization will probably have an AI layer as a strategic component of its tech stack.

 

About Tabnine AI for Enterprise

Tabnine is an AI assistant tool used by over 1 million developers from thousands of companies worldwide. Tabnine Enterprise has been built to help software engineering teams write high-quality code faster and more efficiently, accelerating the entire SDLC. Designed for use in enterprise software development environments, Tabnine Enterprise offers a range of features and benefits, including the highest security and compliance standards and features, as well as support for a variety of programming languages and IDEs.

 

3 generative AI misunderstandings resolved for enterprise success

Posted on April 19th, 2023

In his article for VentureBeat, Tabnine’s CEO, Dror Weiss, tackles three prevalent misconceptions about generative AI that are hindering enterprise progress. In today’s business landscape, as AI continues to gain traction, it is imperative to distinguish fact from fiction. A great resource for organizations looking to implement generative AI.

Read the full article on VentureBeat.

 

About Tabnine AI for Enterprise 

Tabnine is an AI assistant tool used by over 1 million developers from thousands of companies worldwide. Tabnine Enterprise has been built to help software engineering teams write high-quality code faster and more efficiently, accelerating the entire SDLC. Designed for use in enterprise software development environments, Tabnine Enterprise offers a range of features and benefits, including the highest security and compliance standards and features, as well as support for a variety of programming languages and IDEs.

Generative AI for code and beyond

Posted on April 18th, 2023

Professor Eran Yahav, our CTO, and Brandon Jung, our VP of Ecosystems, examine the secrets of AI in this webinar, featuring the powerful generative large language model (LLM) technology that drives some of AI’s major advancements.

Along with providing insights into Tabnine’s innovative technology in the world of AI, topics covered include the use of AI and LLMs in software development, and how generative AI, powered by transformer-based LLMs, is changing software development. The discussion also covers the challenges of developing LLMs that provide value to customers, the PRMR component currently being developed, and the relationship between the size of the model and its accuracy.

 

Overall, the webinar provides a comprehensive overview of the current state of the market and where it’s headed in the coming decade with regard to the use of AI and LLMs in software development.

About Tabnine Enterprise

Tabnine is an AI assistant tool used by over 1 million developers from thousands of companies worldwide. Tabnine Enterprise has been built to help software engineering teams write high-quality code faster and more efficiently, accelerating the entire SDLC. Designed for use in enterprise software development environments, Tabnine Enterprise offers a range of features and benefits, including the highest security and compliance standards and features, as well as support for a variety of programming languages and IDEs.

Using generative AI to code with Tabnine and Google Cloud

Posted on April 17th, 2023

Artificial intelligence has been rapidly transforming industries, and Google Cloud is at the forefront of the AI revolution. This session, featuring Tabnine and CI&T, delves into the concept of generative AI and highlights why it’s an ideal starting point for developers.

By integrating AI, businesses can automate repetitive tasks and focus on more strategic initiatives. The video explores different AI technologies and showcases the advantages that CI&T has experienced through the implementation of generative AI for their developers.

About Tabnine AI for Enterprise

Tabnine is an AI assistant tool used by over 1 million developers from thousands of companies worldwide. Tabnine Enterprise has been built to help software engineering teams write high-quality code faster and more efficiently, accelerating the entire SDLC. Designed for use in enterprise software development environments, Tabnine Enterprise offers a range of features and benefits, including the highest security and compliance standards and features, as well as support for a variety of programming languages and IDEs.

Tabnine vs. Codeium

Posted on March 29th, 2023

The demand for reliable, accurate AI coding assistants is growing fast (let’s just say our sales team’s inboxes are currently flooded). Many enterprise R&D teams are currently exploring the capabilities of different tools, but it can be challenging to find an AI platform that not only provides accurate coding assistance but also provides enterprise-grade security and privacy while meeting the specific needs of each R&D team.

This post compares Tabnine Enterprise to Codeium for Enterprises, based on a range of key parameters that are critical to developers and R&D enterprise teams. By examining the capabilities of each tool, we aim to help you make an informed decision about which AI code assistant is right for your needs:

  • Price: Price point for each user in the organization
  • Context-awareness: What level of context can the different AI models take into account when providing suggestions? 
  • Open source compliance: Each company’s practices regarding the code that the AI models are trained on
  • Ability to train AI models on private code: Whether the AI models can be trained on the customer’s own code
  • Code privacy: Privacy controls offered by each solution
  • Enterprise deployment: Deployment options available to the customer

Table comparison of Tabnine Enterprise vs. Codeium for Enterprises


Drilling down further into Tabnine Enterprise vs. Codeium for Enterprises 

This section takes a more in-depth look at how the two solutions compare.

Price

Tabnine Enterprise charges $20 per user, while the cost of Codeium for Enterprises isn’t as straightforward and depends on the customer and their needs.

Inline code completions within the IDE and chat

Both Tabnine and Codeium offer inline code completions within the IDE, as well as chat.

Open source compliance

The use of code to train an AI solution’s models can have legal ramifications for customers using the solution.

Tabnine’s AI models are trained exclusively on code released under permissive licenses. This approach guarantees full transparency and attribution, and ensures that Tabnine isn’t subject to the copyleft provisions of GPL licenses. By adhering to this policy, Tabnine safeguards its users and customers from potential legal repercussions. Furthermore, this practice aligns with Tabnine’s objective of respecting the original intent of code authors and maintaining good faith with the wider developer community.

It’s unclear whether Codeium’s models rely on OpenAI’s models or whether they’re trained on code with nonpermissive licenses.

Air-gapped deployment

Tabnine Enterprise offers customers the option to self-host, deploying on the customer’s VPC or on-premises. Tabnine also supports cases where the customer network is air-gapped and can’t access the internet.

On the other hand, Codeium allows its enterprise customers the option of deploying on the customer’s VPC only. Running on a cloud (even a private cloud) means that code needs to leave the premises, which isn’t viable for some enterprises.

Ability to train AI models on private code

Tabnine Enterprise allows its customers to connect their own code repositories to its AI models, with the option to link specific models to particular repositories based on team or project needs. This feature allows the models to adapt and learn the organization’s unique coding practices, naming conventions, and preferred styles, resulting in highly relevant and context-sensitive code suggestions.

By leveraging this functionality, companies can streamline the onboarding and training process for new team members and junior developers, significantly reducing the burden on senior developers. The AI models learn from the company’s own code repositories, resulting in improved accuracy and efficiency in suggesting code, while maintaining consistency with the organization’s established practices.

Codeium trains its models on different coding languages and then fine-tunes the models on its customer’s codebase.

Code privacy 

Tabnine Enterprise prioritizes the confidentiality and security of its enterprise customers’ code, ensuring that customer code and training data are never transmitted to Tabnine or used to train its general AI models. This guarantees that customers’ sensitive and proprietary information remains strictly private and protected.

Additionally, Tabnine Enterprise offers flexible deployment options for its customers, allowing them to install the tool on their virtual private cloud (VPC) or on-premises. By enabling customers to have full control over their data and where it is stored, Tabnine Enterprise ensures that their customers’ privacy needs are fully met.

Codeium, however, uses its customers’ code for telemetry purposes, although it’s possible to opt out.

About Tabnine 

Since launching our first AI coding assistant in 2018, Tabnine has pioneered generative AI for software development. Tabnine helps development teams of every size use AI to accelerate and simplify the software development process without sacrificing privacy and security. Tabnine boosts engineering velocity, code quality, and developer happiness by automating the coding workflow through AI tools customized to your team. With more than one million monthly users, Tabnine typically automates 30–50% of code creation for each developer and has generated more than 1% of the world’s code.

Unlike generic coding assistants, Tabnine is the AI that you control:

Tabnine ensures the privacy of your code and your engineering team’s activities.  Tabnine lives where and how you want it to — deployed as protected SaaS for convenience, on-premises for you to lock down the environment, or on VPC for the balance of the two. Tabnine guarantees zero data retention, and we never use your code, data, or behaviors to feed our general models.

Tabnine is also personalized to your team. Tabnine uses best-of-breed LLMs (which we’re constantly improving and evolving) and is context-aware of your code and patterns. This means that Tabnine provides coding suggestions and chat responses that take your internal standards and engineering practices into account.

Tabnine works the way you want, in the tools you use. Tabnine supports a wide scope of IDEs and languages, improving and adding more all the time. Tabnine also provides engineering managers with visibility into how AI is used in their software development process and the impacts it is having on team performance.

Try free for 90 days, or contact us to learn how we can help your engineering team be happier and more productive.

Tabnine Enterprise vs. ChatGPT Plus

Posted on March 21st, 2023

There’s been a lot of noise recently around ChatGPT’s ability to write code. But when it comes down to it, is it really an effective AI code assistant for developers and R&D enterprise teams?  

To fully understand the main differences between Tabnine Enterprise and ChatGPT Plus, we’ve put together a list of parameters that, as developers with years of experience serving the dev community, best reflect the needs and challenges of R&D organizations:

  • Main use case: The use cases for which the tool was designed and is most useful
  • Code privacy: Privacy controls offered by each solution
  • Open source compliance: Each company’s practices regarding the code that the AI models are trained on
  • Context awareness: The level of context that different AI models take into account when providing suggestions 
  • Ability to train AI models on private code: Whether the AI models can be trained on the customer’s own code
  • Centralized configuration: The type of centralized configuration and management offered to customers
  • Price: Price point for each user in the organization
  • User management: The types of user management available 
  • Payment methods: The methods of payment available

Table comparison of Tabnine Enterprise vs. ChatGPT Plus

Drilling down further into Tabnine Enterprise vs. ChatGPT Plus 

This section takes a more in-depth look at how the two solutions compare.

Main use case

Tabnine’s code suggestions are context-sensitive and inline within the IDE, prompted as the developer types, or from natural language requests. There’s no need to copy and paste the code to your project. In addition, Tabnine’s AI models are aware of the organizational coding practices, styles, and standards, which is reflected in the accuracy of the code suggestions.

ChatGPT, on the other hand, can only code from scratch, and generates this code mainly from natural language requests, which requires providing detailed instructions and context, and then, obviously, adapting the result to the customer’s environment. Essentially, ChatGPT functions as a replacement for search and knowledge bases, such as Google and Stack Overflow.

Code privacy 

Tabnine Enterprise ensures full and complete privacy for its enterprise customers’ code:

  • Customer code and training data are never sent to Tabnine.
  • Tabnine’s general AI models are never trained on customer code.
  • Tabnine Enterprise customers can install Tabnine Enterprise on a VPC or on-premises.

ChatGPT Plus, however, uses user interaction data to train its models. It also may use the code it generates to train its AI models. 

Code suggestion format 

Tabnine’s code completion works directly within the developer’s IDE, offering whole-line and full-function suggestions as the user codes (or via natural language hints). 

On the other hand, ChatGPT Plus only works on the dedicated ChatGPT website, generating code in response to requests. For the generated code to be relevant, the developer needs to provide multiple directions and instructions. Additionally, the generated code then needs to be copied/pasted into the IDE. This requires changing names, paths, etc., where required and can lead to bugs and other issues.

Open source compliance

The code on which a solution’s AI models are trained can have legal implications for the companies that use the solutions. 

Tabnine’s AI models are never trained on code with nonpermissive licenses, and we offer full transparency and attribution. This ensures that Tabnine isn’t restricted by the copyleft provisions of GPL licenses, and protects our users and customers from possible related consequences. This policy is also in line with Tabnine’s goal to honor the intent of code authors and maintain good faith with the rest of the developer community.

ChatGPT’s models, however, are trained by OpenAI on broad public data, which could result in possible legal implications for its customers. There’s also evidence that ChatGPT has copied whole sections of nonpermissively licensed code, creating additional possible legal liabilities for its users.

Context-awareness

The ability of the AI models to understand and account for context has a major impact on the amount of effort required, from both the individual developer and the entire R&D team, to generate high-quality code that aligns with the organization’s own best practices, conventions, and styles.

Tabnine can understand the relevant context from your project’s existing code as well as the organization’s private code repositories that our AI models are trained on. 

When using ChatGPT Plus, the developer interaction is far more complex, and providing the relevant context when composing a code request is practically impossible. Since the code provided is boilerplate, it requires the context to be provided in detailed, natural language, often needing multiple iterations. Even after being generated, considerable effort is required to copy/paste the code and adapt it to the relevant environments. 

Ability to train AI models on private code

ChatGPT Plus relies solely on OpenAI’s general models, while Tabnine Enterprise gives our customers the ability to connect our AI models to their code repositories. It’s also possible to connect different models to different repos specific to certain teams. This enables the models to learn the organization’s best practices, styles, naming conventions, and more, providing code suggestions that are both context-sensitive and relevant. In addition, this helps companies onboard and train new team members and junior developers much faster, while removing the burden from senior devs.

Centralized configuration

ChatGPT Plus doesn’t offer any type of centralized configuration or management.  

Tabnine Enterprise’s centralized configuration allows organizations to do several things:

  • Configure the platform for your organization’s security and privacy requirements
  • Connect AI models to different repos for different teams
  • Manage access roles and permissions
  • Access advanced reporting to monitor usage
  • Manage subscriptions

User management

Tabnine allows enterprise customers to configure and manage user roles and permissions. ChatGPT Plus doesn’t offer any user management capabilities.

About Tabnine Enterprise 

Tabnine is an AI assistant tool used by over 1 million developers from thousands of companies worldwide. Tabnine Enterprise has been built to help software engineering teams write high-quality code faster and more efficiently, accelerating the entire SDLC. Designed for use in enterprise software development environments, Tabnine Enterprise offers a range of features and benefits, including the highest security and compliance standards and features, as well as support for a variety of programming languages and IDEs.

 

Introducing AI-powered unit test generation! Accelerate your software development life cycle

Posted on February 22nd, 2023

Tabnine is excited to announce that we’re expanding our offering beyond code completion into additional areas of the SDLC, with the release of our AI-powered Unit Test generation capabilities.

As the leading AI assistant for software development, Tabnine has quickly become part of many developers’ daily process as the top AI tool for VS Code (5M since 2018) and for the JetBrains family of IDEs, such as IntelliJ (2M since 2016).

Last year, as part of our goal to double the productivity of R&D teams within organizations, we introduced Tabnine Enterprise. Building on our integration with GitLab, Bitbucket, and GitHub, this offering addresses the needs of our enterprise customers, including enhanced security and privacy, self-hosting, AI model training on private code repositories, and centralized configuration.

Now, we’re leveraging our existing platform to broaden our offering. We all know how important it is to test our code thoroughly, but writing unit tests is often time-consuming, repetitive, and tedious. As a result, developers tend to neglect thorough testing, leading to less reliable software and production issues. 

Our new unit test generation capability uses cutting-edge AI technology to generate unit tests for your code automatically, helping ensure that your code is rigorously tested, resulting in fewer bugs and better code stability – especially important for larger projects or projects with tight deadlines.

This new capability supports multiple programming languages, including Python, Java, and JavaScript. It’s also easy to integrate with your existing development environment, currently supporting VS Code, with a user interface that’s simple and intuitive, requiring minimal setup.

However, the most distinctive feature of Tabnine’s unit test generation is that it learns from your code as you write it. This means that the more you use the tool, the better it gets at generating unit tests that match your coding style and patterns.
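
To give a concrete sense of what this looks like in practice, here’s a hypothetical illustration of the kind of test suite an AI assistant might generate for a small TypeScript utility. This is not actual Tabnine output; the function under test, the Jest framework, and the chosen cases are assumptions made purely for illustration.

// Hypothetical illustration only -- not actual Tabnine output.
// src/slugify.ts: a small utility a developer might have just written.
export function slugify(title: string): string {
  return title
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")   // collapse runs of non-alphanumerics into dashes
    .replace(/^-+|-+$/g, "");      // strip leading/trailing dashes
}

// src/slugify.test.ts: the kind of Jest cases a generated suite might cover.
import { slugify } from "./slugify";

describe("slugify", () => {
  it("lowercases and replaces spaces with dashes", () => {
    expect(slugify("Hello World")).toBe("hello-world");
  });

  it("collapses punctuation and repeated separators", () => {
    expect(slugify("AI -- powered!! tests")).toBe("ai-powered-tests");
  });

  it("handles leading and trailing whitespace and symbols", () => {
    expect(slugify("  ...Edge case?  ")).toBe("edge-case");
  });

  it("returns an empty string for symbol-only input", () => {
    expect(slugify("!!!")).toBe("");
  });
});

A tool that has seen your codebase could adapt details like the naming, the assertion style, and the edge cases it covers to match your existing tests.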

We believe that Tabnine’s unit test generation can make a real impact in ensuring your code is covered by thorough, effective automated tests, allowing your team to quickly and reliably ship top-quality software for your customers. 

Want to get early access? Sign up now for the beta version.

As we broaden our enterprise offering, this new capability is just one of several more coming in 2023, including code similarity search and code explanations – so be sure to follow us and stay updated!

Tabnine: The enterprise-grade AI coding assistant

Posted on February 15th, 2023

The popularity of generative AI has broken out of tech circles and become a household term, with seemingly everybody talking about the amazing things that AI can do for them.

But for many software developers, generative AI has been an integral part of their lives for quite some time. Code assistants such as Tabnine and others have reached wide adoption among developers, and we’re now seeing more and more demand for an AI development assistant at the organization level, and not just at the individual developer level.

In the following sections, we’re going to describe Tabnine Enterprise and how it helps organizations create higher quality code faster with the help of AI.

Why do enterprises need an AI development assistant system? 

Organizations can expect to gain significant benefits by deploying AI in their SDLC: 

  1. Ship software faster
    The number one thing that engineering teams look for when considering AI solutions is to do more with less, and no wonder – AI can now dramatically accelerate development by automating larger and larger portions of repetitive coding and improving speed, while also preventing errors by helping developers get on the “right track”.
    As of today, Tabnine generates about 30% of our users’ code. Using AI in the SDLC can also significantly speed up code review and reduce errors by automating certain tasks. By generating code from natural language inputs, it can also help developers express their intent more clearly. Additionally, AI-generated code can be consistent in terms of style, formatting, and naming conventions, which makes it easier for reviewers to understand and identify potential issues. Furthermore, AI-generated code can be analyzed and evaluated for errors before it is even written, which can help catch mistakes early on in the development process. AI can also help with code refactoring and optimization by identifying inefficient patterns in the codebase, which can improve the overall performance and maintainability of the code.
  2. Onboard new team members faster
    AI models that are connected to an enterprise’s code repository are able to generate high-quality, consistent code that’s based on the best practices, naming conventions, styles, and formatting of a specific dev organization. This helps save time and reduce the learning curve for new team members, so they can start coding faster. It also relieves senior developers of the training burden, allowing them to focus on the more complex and valuable parts of the development process.
  3. Improve overall code quality and consistency
    By generating repetitive code and providing code completions based on an organization’s code repositories, AI coding assistants help ensure that code is readable and understandable, promoting code consistency through style, formatting, and naming conventions. This results in higher-quality code, fewer errors, and a codebase that’s easier to maintain, making code review faster and easier. As a result, bugs are caught at an earlier stage, before production or integration.
  4. Improve developer satisfaction and happiness
    Developers are constantly looking for the best tools to improve their productivity and efficiency, and AI is one of them. By providing developers with an effective AI code assistant that’s specifically designed for enterprise R&D teams, companies can help ensure that developers have what they need to fulfill their full potential.

There are several important features and capabilities that AI code assistants should offer organizations, including:  

  1. High-quality code suggestions
    AI code assistants should offer organizations high-quality code suggestions that are accurate and relevant to the current context.
  2. Quality and consistency
    AI code assistants should be able to conform to the organization’s best practices, coding standards, and naming conventions, styles, and more, in order to ensure quality and consistency of code.
  3. Ability to use the intellectual property of code created using AI
    Organizations should be able to use the intellectual property of code created using AI, as well as have the ability to customize the AI system to their specific needs, without facing the possibility of legal exposure or risk. 
  4. Privacy and security of the system
    AI code assistants should offer organizations privacy and security features that prevent code leakage and comply with the company’s security policy and regulations.
  5. Smooth integration
    AI code assistants should integrate smoothly with the existing tools and processes currently used by the organization, with minimal disruption to the development workflow.
  6. Reporting and monitoring
    AI code assistants should offer clear reporting on how effective the AI system is, including metrics on time saved, errors prevented, and code quality improvements.
  7. Compliance
    AI code assistants should be compliant with industry regulations, such as GDPR and HIPAA, to protect user data privacy.
  8. Scalability
    AI code assistants should be scalable to accommodate the growth of the organization and adapt to changing needs.
  9. Support
    AI code assistants should offer support for a variety of programming languages and frameworks, and provide ongoing updates and maintenance to stay current with new technologies.

What makes Tabnine a great choice for enterprises considering adoption of AI in their SDLC?

Tabnine Enterprise offers contextual code suggestions that automate repetitive coding, generating high-quality, best-practice code. Based on large language models trained on billions of lines of open source code with permissive licenses, Tabnine provides:

  • Whole-line code completion
  • Full function or snippet
  • Text to code

Tabnine generates ~30% of code, contributing to the following:

  • Faster development by keeping developers in the flow and removing the need for search
  • Preventing errors by putting developers on the right track
  • Expanding developer knowledge
  • Shortening code-review iterations

Smooth integration into the existing development workflow

Unlike AI chatbots like ChatGPT, Tabnine integrates seamlessly into existing tools and processes. This means that no process change is required and you start getting value from day one. Tabnine functions as an extension of the development workflow directly within IDEs, with plugins available for all recent versions of Visual Studio Code, IntelliJ (and all JetBrains IDEs), JupyterLab, Visual Studio (full support for VS 2022 coming Q3), and Eclipse (full support coming Q3). Implementation is both fast and painless!

 

Battle-tested with millions of developers

Initially released in 2018, Tabnine isn’t only the most mature AI assistant for software development, but with millions of users worldwide, it’s also the most widely used product on the market. This is important because expertise matters. While many companies can train or serve Large Language Models for code prediction, the real trick is serving the day-to-day needs of the developers with the right suggestion at the right time with the correct scope and context. Tabnine is the result of countless iterations and improvements based on feedback from professional developers who use our product every single day.

Trained on code with permissive licenses only (no GPL or similar, no ambiguity)

Tabnine is only trained on open source code with a permissive license. This decision has painful implications for Tabnine in terms of acquiring training data, but it helps ensure that developers can use the code that Tabnine generates in commercial projects without uncertainty about open source licenses. Moreover, training our AI only on code with permissive licenses fully respects the intent of the developers who contributed code to open source.
Learn more about how we protect our users’ complete privacy.


Provides tailored guidance by learning private projects’ code and patterns

While AI that’s been trained on open-source code can definitely accelerate development, projects of significant size have an “internal language” comprised of internal services, frameworks, and libraries with their APIs and idiomatic patterns of how to accomplish certain tasks in the codebase. Tabnine Enterprise’s AI models provide fully secured, tailored guidance by learning private projects’ code and patterns, making the AI assistance especially relevant when working with internal APIs and patterns. This increases not only the speed of development but also the consistency of the code and the ease of onboarding onto a new codebase.

 

Enterprise-grade security

Source code is a core asset of companies, and as such, security of services touching code is critical and typically needs to meet certain standards to comply with corporate regulations. Tabnine prioritizes user security, implementing robust measures to keep your data safe:

  • Ability to run inside your network: Tabnine can (optionally) run inside your Virtual Private Cloud and even on your own servers, ensuring no code leaves your trusted network
  • Tabnine doesn’t train its general AI models on code created by our users
  • SOC-2 certified 
  • Coming soon: single sign-on with your internal service for authentication and authorization

Future-proof architecture

Tabnine’s architecture decouples the product from any specific AI model used as a basis, while also connecting to additional foundational models as soon as they become available. This means that when you choose Tabnine, you’re on a platform that’s continuously improving, not just thanks to Tabnine’s own innovation, but also thanks to community efforts to train better and stronger foundational models.

About Tabnine Enterprise

Tabnine Enterprise is an AI code generation tool that helps software engineering teams write high-quality code faster and more efficiently, accelerating the entire SDLC. Designed for use in enterprise software development environments, Tabnine Enterprise offers a range of features and benefits, including the highest security and compliance standards and features, as well as support for a variety of programming languages and IDEs.