When’s the best time to write unit tests? Do you prefer to write code first and then follow up with unit tests? Are you the type of developer who prefers to start with the unit tests and watch them turn from red to green as you code? Or do you just skip unit testing altogether? Be honest, this is a safe space.
According to a survey of developers published in September 2020, 41% of the respondents said their organizations choose to write their tests first and have fully adopted test-driven development (TDD).
What is test-driven development?
TDD is a software development methodology that was developed by Kent Beck in the late 1990s as part of Extreme Programming. It’s an iterative development cycle that emphasizes writing test cases before a feature or function is written. The repetition of short development cycles combines building and testing to ensure code correctness and indirectly evolves the project’s design and architecture.
The Red-Green-Refactor cycle in TDD works like this (a minimal code sketch follows the list):
1. Add a test to the test suite.
2. (Red) Run all the tests to ensure the new test fails.
3. (Green) Write just enough code to get that single test to pass.
4. Run all tests.
5. (Refactor) Improve the initial code while keeping the tests green.
6. Repeat.
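For illustration, here's what one pass through the cycle might look like in TypeScript, assuming Jest as the test framework and a made-up `slugify` utility (neither is prescribed by TDD itself):

```typescript
// slugify.test.ts -- steps 1-2 (Red): the test fails because slugify doesn't exist yet.
import { slugify } from "./slugify";

test("converts a title to a URL-friendly slug", () => {
  expect(slugify("Hello, TDD World!")).toBe("hello-tdd-world");
});

// slugify.ts -- steps 3-4 (Green): just enough code to make the test pass.
export function slugify(title: string): string {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumeric runs into "-"
    .replace(/^-+|-+$/g, "");    // trim leading/trailing dashes
}

// Step 5 (Refactor): tidy names or structure while the test stays green, then repeat.
```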
While the process can sound slow, it does improve the quality of the software in the long run. It encourages writing testable, loosely coupled code that tends to be more modular, which is easier to write, debug, understand, maintain, and reuse. This comes with many benefits such as easier and faster refactoring, improved team collaboration, and increased code confidence.
Accelerating TDD with AI code assistants
Today, a well-trained large language model (LLM) with proper context is great at generating code and unit tests from requirements written in natural language. Delegating these two tasks to an AI code assistant will greatly decrease the time required for each TDD cycle. AI is also great at refactoring both the tests and the core functionality as requirements evolve. Ensuring that tests pass becomes a validation step where developers check that the test suite reports all green and the feature works as expected.
Using Tabnine doesn’t eliminate the process, but it accelerates it tenfold because you can remove much of the busy work. Here are some ways Tabnine can help:
Generating unit tests from business requirements
Once you’ve selected a test framework and set it up in your project, you can ask Tabnine Chat to generate unit tests for a feature based on business requirements. You can then run the test suite and confirm that the new tests fail.
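As a sketch, suppose the business requirement is "orders over $100 receive a 10% discount" (a made-up requirement); the tests Tabnine generates might look something like this, again assuming Jest:

```typescript
// discount.test.ts -- hypothetical tests generated from the requirement:
// "Orders over $100 receive a 10% discount."
import { applyDiscount } from "./discount";

test("orders over $100 receive a 10% discount", () => {
  expect(applyDiscount(200)).toBe(180);
});

test("orders of $100 or less are not discounted", () => {
  expect(applyDiscount(100)).toBe(100);
  expect(applyDiscount(40)).toBe(40);
});
```

Because `applyDiscount` doesn't exist yet, the whole suite reports red, which is exactly what this step expects.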
Generating code from unit tests
With unit tests for the new feature in place, you can select them and ask Tabnine to create a function that passes the selected tests. Once the generated code is in place, you can run the test suite again and confirm that the tests pass.
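Continuing the hypothetical discount example, the implementation Tabnine produces might be as simple as:

```typescript
// discount.ts -- just enough code to turn the tests above green.
export function applyDiscount(orderTotal: number): number {
  // Orders strictly over $100 get 10% off; everything else is unchanged.
  return orderTotal > 100 ? orderTotal * 0.9 : orderTotal;
}
```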
Refactoring as requirements evolve
It’s very likely that requirements will evolve over time. When this happens, you can select the affected tests and ask Tabnine to update them based on the new requirements. You can then update the related code by selecting it and asking Tabnine to refactor it against the test file, referencing the file with an @mention.
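Sticking with the same made-up example, if the requirement changed to a 15% discount, the updated test and the refactored function might look like this:

```typescript
// discount.test.ts -- updated for the new (hypothetical) requirement.
import { applyDiscount } from "./discount";

test("orders over $100 receive a 15% discount", () => {
  expect(applyDiscount(200)).toBe(170);
});

// discount.ts -- refactored so the rate and threshold are no longer magic numbers.
const DISCOUNT_RATE = 0.15;
const DISCOUNT_THRESHOLD = 100;

export function applyDiscount(orderTotal: number): number {
  return orderTotal > DISCOUNT_THRESHOLD
    ? orderTotal * (1 - DISCOUNT_RATE)
    : orderTotal;
}
```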
Connecting directly with your product portfolio management system
Most companies maintain product business requirements in a product portfolio management (PPM) system such as Atlassian Jira. Tabnine recently released two new Jira AI agents that simplify pulling business requirements into your project context. You can now reference a Jira issue using @Jira and say, “suggest tests and/or code to pass these requirements.” You no longer need to copy/paste from Jira or even have it open. Other PPM connectors will be available in future releases.
If you’re currently using TDD, try sprinkling in some AI assistance and see the lift it provides. If you’re curious about TDD, try it using the above examples. If TDD just isn’t for you, the Tabnine Testing Agent might be right for you. Not yet a Tabnine user? Get your 90-day free trial of Tabnine Pro today.
Introducing Tabnine’s AI agents for Atlassian Jira: Implement and validate Jira issues with one click
Posted on September 24th, 2024
At Tabnine, we firmly believe that the future of software development is AI-driven: AI agents working hand in hand with human engineers to accomplish tasks, with AI overseeing and managing every aspect of the software development life cycle. Earlier this year, we shared a preview of what we’re building at Tabnine to make this future a reality — an AI agent that generates code automatically from Jira issues. Today, we’re excited to announce the general availability of this capability by introducing two new AI agents — our Jira Implementation Agent and Jira Validation Agent.
Now with just a single click, Tabnine can implement a Jira issue, generating code from the requirements outlined in those issues. In addition to generating code for issues, you can also use Tabnine on either human- or AI-generated code to validate and review your implementation. The Jira Validation Agent will verify that your code accurately captures the requirements outlined in the Jira issue, offering guidance and code suggestions if it doesn’t.
Tabnine is the first AI code assistant to offer an integration with Jira and the first to offer a complete “issue-to-code” AI agent. This represents a step change in how developers use AI in software development. You can now accomplish macro-level tasks using Tabnine with a single click and can directly ask Tabnine to implement a story, bug, task, or subtask. There’s no longer a need to decipher the requirements in a Jira issue, break them down into tasks, and feed specific prompts into an AI chat window. Today’s announcement expands our AI agent offerings, building on the recent innovations we’ve introduced with our AI Test Agent and onboarding agent.
What can you do with the new AI agents for Jira?
Over the years, Atlassian Jira has become the heart of the agile engineering process for many teams. It’s now commonplace to create epics for large projects and then break them down into smaller, executable chunks by creating stories, bugs, tasks, and subtasks, which are then assigned to individual developers.
Even though there’s no technical limitation on the type of Jira issues that Tabnine can implement, to ensure the most relevant and accurate output, we recommend using Tabnine for issues like stories, bugs, tasks, and subtasks (i.e., issues that represent defined and specific units of work). This keeps the communication with AI concise and makes it easier to check the responses. During implementation, you can then ask Tabnine to refine the output (as needed), review the reference files used to generate the output, and work through the follow-up questions suggested by Tabnine.
Tabnine currently implements only the Jira issue you select, not its child issues. For example, if a story (parent issue) has three subtasks (child issues) and you ask Tabnine to implement the story, Tabnine will implement only the story, not the subtasks. This limitation will go away over time, but as of today, you’ll need to implement each subtask separately.
Leveraging context from Jira issues for more accurate and relevant recommendations
Context is everything in AI. Without it, AI code assistants deliver output that might be textbook accurate, but is typically generic and not aligned with your codebase and company’s coding standards.
With the launch of these new agents, Tabnine now leverages the information contained in Jira issues as context as you work on a project. In this first release, Tabnine uses only the text in a Jira issue’s title and description as context; comments and other data will be incorporated over time. To get the best results, we recommend making sure each issue’s title and description contain detailed information.
In addition to using the information in Jira issues, Tabnine continues to leverage locally available code and data in the developer’s IDE as context. This includes runtime errors, imported libraries, other open files, current file, compile/syntax errors, noncode sources of information, currently selected code, connected repositories, conversation history, Git history, project metadata, and other project files. Enterprise users also have the option to connect Tabnine to their organization’s code repos to provide more comprehensive context.
How to use Tabnine’s new Jira agents
To use the new agents, you must first connect Tabnine to Jira — simply click the Jira icon in Tabnine, review the connection request, and hit Accept to establish the connection.
Once the connection is established, all the Jira issues assigned to you as an individual user are available in Tabnine. Tabnine respects existing Jira user permissions, ensuring that only the issues assigned to you are available. As with all other Tabnine AI agents, we maintain a zero data retention policy for any information accessed through the Jira connection.
Once you’re connected to Jira, you can select a specific issue and ask Tabnine to implement it. When you hit Enter, Tabnine generates code to meet the requirements of the Jira issue. From there, you can review and revise the code as needed before inserting it.
Beyond code generation, you can also use this functionality to validate code. Simply select the code you’ve written and ask AI chat whether it aligns with the requirements outlined in the Jira issue.
How to get started
Tabnine’s Jira Implementation Agent and Jira Validation Agent are now available to Tabnine Pro and Enterprise customers. There’s no additional cost to use this feature — individual users can simply update their IDE to get access.
For our Enterprise users, this functionality is disabled by default, and Tabnine administrators can enable it; once enabled, it’s available to all Tabnine users at the company. Contact us and we’ll assist you with getting started.
Crafting Helm Charts with confidence using Tabnine
Posted on September 16th, 2024
Kubernetes is an open source platform designed to automate deploying, scaling, and operating application containers. It’s the de facto standard for container orchestration. According to the 2023 CNCF Annual Survey, 84% of organizations were using or evaluating Kubernetes. We’ve reached a point where you’re very likely to encounter this platform during your career. At Tabnine, our product engineers manage our Kubernetes applications using Helm Charts.
Helm is the package manager for Kubernetes, similar to npm for Node.js, pip for Python, or Cargo for Rust. It uses Helm Charts to define, install, and upgrade Kubernetes applications. A Helm Chart is a collection of files that package all the resources needed to deploy an application to a Kubernetes cluster. Charts are easy to create, version, share, and publish.
However, using Helm requires you to work with YAML, a file format that developers either love or love to hate. You also have to learn the Chart template syntax, which adds an additional layer of complexity for non-DevOps engineers. And on top of that, you need some understanding of Kubernetes. Luckily, Tabnine can help.
The days of searching for an answer on Google or Stack Overflow are over. Your subscription to Tabnine Pro comes with the ability to use models such as GPT-4o or Claude 3.5 Sonnet, which leverage your context to return personalized, accurate, and efficient solutions.
Here are some ways Tabnine’s AI code assistant can help you while working with Helm Charts.
Querying existing configuration
Use Tabnine Chat to ask questions about the existing configuration. Ask it to explain the effect that changing a certain setting will have. You can even have it create an ASCII flow diagram representing the life cycle of a pod based on the provided configuration. Get the answers you need via natural language conversation.
Error handling and debugging
Get insights into existing and potential issues. Check if any configurations are set incorrectly and get solutions.
Simplifying complex templates
Lean on the LLM’s know-how to apply standard, well-known solutions to common templating problems.
Documentation and inline comments
Add documentation and inline comments to charts, making them easier to understand and maintain.
Generic template creation
Convert your chart into a generic template that can be reused across different projects or requirements. Tabnine can help you replace hard-coded values with parameters to simplify customization and improve scalability.
This AI-enhanced approach not only streamlines the workflow but also bridges the gap between operations and engineering teams by making Helm Chart management more accessible and less error-prone. Querying and manipulating Helm Charts with the assistant is easy and saves time by reducing the need to remember specific syntax and best practices or to search for solutions online.
Are you using Helm Charts today without an AI code assistant? Sign up for a 90-day free trial of Tabnine Pro and let us know how it’s helped you.
Tabnine + Prisma: Making data easier to work with
Posted on September 11th, 2024
Prisma ORM is a Node.js and TypeScript ORM that’s used by 320K developers every month. It provides an intuitive schema that allows you to declare your database tables in a human-readable way. Prisma ORM then uses this schema to manage migrations and generate a type-safe client.
The ORM was designed with both SQL veterans and developers brand new to databases in mind, providing autocompletion and an IDE extension to lessen the need to dig through documentation to build queries tailored to your schema. This design also lends itself to working well with Tabnine’s AI code assistant.
Let’s take a look at how Tabnine can help accelerate working with a database when using Prisma ORM.
Using schema as context
The biggest challenge to using generative AI when working with databases is knowing the structure of the database. Without supplying a database schema as context, the models can only guess at the available tables and fields when generating a query.
Prisma utilizes a schema file to declare the structure of the database. Models can be written by hand in the Prisma schema language or introspected from an existing database. Because this schema.prisma file exists in the project workspace, it can be indexed by Tabnine. Once indexed, Tabnine is aware of the database structure and responds with more personalized results.
Now let’s dive into how we can use Tabnine with Prisma.
Working with data models
Tables and relationships are created by defining new data models in the schema.prisma file. Prisma provides a robust DSL that allows you to define column types, relationships, indexes, and more. When working in the schema.prisma file, Tabnine will provide helpful autocompletion while you type or based on a comment.
You can have Tabnine provide more information about a model by highlighting it and asking for an explanation in the chat.
Tabnine will generate new models and suggest alterations based on natural language prompts in the chat.
Once a schema has been defined, Tabnine can generate a seed function to populate the database with sample data.
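As a rough sketch, assuming a schema with hypothetical `User` and `Post` models (your model and field names will differ), a generated seed script might look like this:

```typescript
// prisma/seed.ts -- a minimal seed sketch; adjust models and fields to your schema.
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

async function main() {
  // Create a user with two related posts as sample data.
  await prisma.user.create({
    data: {
      email: "ada@example.com",
      name: "Ada Lovelace",
      posts: {
        create: [
          { title: "Hello Prisma", published: true },
          { title: "Draft post", published: false },
        ],
      },
    },
  });
}

main()
  .catch((e) => {
    console.error(e);
    process.exit(1);
  })
  .finally(() => prisma.$disconnect());
```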
The combination of context provided by the schema.prisma file and the knowledge within the LLM makes for a powerful tool to help you create the right database structure for your application.
Building queries
Once you have your database schema defined and the client generated using `npx prisma generate`, you’re ready to start querying your data. Tabnine will suggest autocompletions as you type or based on a comment. Since the client uses TypeScript, IntelliSense will also be a big help as you build your queries.
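For example, against the same hypothetical `User`/`Post` models, a typed query built with autocompletion might come out like this:

```typescript
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// Fetch published posts together with their authors.
// The model and relation names here are illustrative, not from a real schema.
async function getPublishedPosts() {
  return prisma.post.findMany({
    where: { published: true },
    include: { author: true },   // pull in the related User record
    orderBy: { title: "asc" },
  });
}
```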
The AI chat provides its best assistance when it can draw on the context provided by the schema.prisma file.
Demystify complex queries by highlighting them and asking for an explanation.
Generate or refactor queries using natural language prompts.
You’re not limited to the Prisma API, either. You can ask the chat to generate full SQL statements.
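For instance, Prisma exposes raw queries through `$queryRaw`; a generated statement might look like the sketch below (the table and column names are hypothetical, and the quoting assumes PostgreSQL):

```typescript
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// Count posts per author using raw SQL; values interpolated into the tagged
// template are parameterized automatically by Prisma.
async function countPostsPerAuthor(minPosts: number) {
  return prisma.$queryRaw`
    SELECT "authorId", COUNT(*) AS post_count
    FROM "Post"
    GROUP BY "authorId"
    HAVING COUNT(*) >= ${minPosts}
  `;
}
```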
These are just a few examples of what Tabnine is capable of when it has proper context for the database. Prisma’s schema file provides a clear picture of the database structure, helping to produce more relevant results that are personalized to the project.
Not a Prisma user or fan of ORMs in general? Let us know how you would provide database context. Not yet a Tabnine user? Get your 90-day free trial of Tabnine Pro today.
AI coding assistants: 12 do’s and don’ts
Posted on September 5th, 2024
Coding assistants that use generative AI (GenAI) are game changers for software development and the biggest step change the industry has experienced in decades.
In a recent IDC survey, 56% of developers said they were experimenting with AI coding assistants. Gartner’s Innovation Guide for AI coding assistants predicts a full 75% of enterprise-level software developers will use them by 2028, resulting “in at least 36% compounded developer productivity growth.” That could equate to billions of dollars in value from tools that, until last year, weren’t on the radar for many developers.
In general, AI coding assistants provide value by automating low-level and repetitive tasks such as code completion, generation, understanding, debugging, and maintenance. AI-powered coding assistants enable even entry-level developers to code faster and more prolifically, increasing productivity and freeing all developers to engage in higher-level work.
Paralyzing number of choices
However, the rise of AI software development tools has turned into a flood of mostly generic and consumer-grade options — at least 40 as of last year, up from a handful in 2022, Gartner reports. Overwhelmed developers — that is, the ones who would most benefit from AI in their development processes — will likely feel an additional burden in trying to figure out which tools will work best in terms of productivity and compliance with company requirements.
With that in mind, here are some key do’s and don’ts when selecting and using an AI-powered coding assistant.
Our goal at Tabnine is to create and deliver a top-to-bottom AI-assisted development workflow that empowers all code creators, in all languages, from concept through to completion.
Do’s and don’ts
Do: The most important “do” of all may be knowing your company’s policy about reusing code and whether you have permission to choose (or, for that matter, use) a coding assistant. If the use of coding assistants is not sanctioned at your company, do press leadership for reasons why (it’s most likely going to be a policy tied to privacy and data controls) and come armed with some stats about the productivity gains such assistants can enable. Searching for a product that can be securely and compliantly used across the organization may be in order.
Don’t: Use AI coding assistants in the shadows. Transparency is the key to the effective and safe use of AI. Obfuscating your AI efforts — no matter how well-intended — will bite your organization, and you, in the end.
Do: Your homework when evaluating the AI coding assistant options out there. Read the terms of service. Look for coding assistants that not only assist in generating, maintaining and testing code, but also integrate with your code repository and other development tools so that you can benefit from the context they provide. Also, look for an assistant that can explain existing code, identify issues and errors, and provide support via chat. It’s also very important to identify how coding assistants will use data and whether (and where) data is retained.
Don’t: Grab whatever AI-powered coding assistant seems easiest to use or is cheapest. And don’t waste your energy on generic chatbots. They are not tuned for software development use cases, and they don’t know what you are trying to do. There’s a lot of fear right now around the use of generative AI. In fact, “data security, privacy and IP leakage/copyright” concerns rank high as an inhibitor to enterprise adoption of generative AI — second only to cost, IDC research shows. The “right” AI-powered coding assistant is the one that alleviates AI fears and proves AI value by providing appropriate guardrails for your company and its partners and customers.
Do: Choose an assistant that leverages relevant, transparent models that can evolve and even be switched out as the technology improves. Can the model be optimized to fit a specific language and domain? Look for fair use and copyright protections. Ask if assistants offer complete privacy for the codebase and data usage. Is the assistant’s model secure and compliant? Does that matter for your company?
Don’t: Assume that any single model will always be considered the best. Along with coding assistants, large language models (LLMs) are also proliferating and evolving quickly.
Do: Take advantage of the extra time and resources gained by using an AI coding assistant to experiment and innovate. Spend more time on feature development, and leave lower-level tasks such as maintenance to AI.
Don’t: Worry about generative AI taking your job. Instead, look at coding assistants as another tool that enables you to do your job even better. Multiple studies show that developers gain significant time savings — up to 45% — and can complete coding tasks up to twice as fast with generative AI tools. Coding assistants have also proven to help developers learn new languages and develop their skills.
Do: Lean into the assistance that these tools provide, such as code suggestion and recommendation, unit testing and generating documentation. Research shows that more than 60% of developers who use coding assistants report feeling more fulfilled in their jobs, less frustrated when coding and able to spend more time on higher-value work.
Don’t: Completely abdicate your judgment or purpose or succumb to superlatives. For example, using these assistants doesn’t mean never searching for code or always being able to trust the code that’s generated. AI coding assistants are not replacements for experienced coders. It’s still important to have human eyes on the process and product.
Do: Continually experiment with how you interact with coding assistants to ensure you’re getting what you need, faster and faster. The efficiency and accuracy of this process will depend on the degree to which your coding assistant can be optimized to be more contextually aware.
Don’t: Set it and forget it.
Conclusion
Until now, meeting demand for more software meant increasing the number of developers or — much more likely — asking developers to do more (and more and more). This has been a losing proposition for many reasons, including the inability of developers to ever “catch up,” much less have the time to truly innovate.
AI in general and AI-powered coding assistants specifically are not a cure for everything that ails the development life cycle, but — when best practices are followed — they’re a significant step in a direction that will improve the developer experience and provide value for the organizations that support them.