An explosion in software engineers using AI coding tools?
GitHub surveyed 500 developers in the US for a sense of how they use AI coding tools. I examine the results and add context on how the survey was conducted.
👋 Hi, this is Gergely with a bonus, free issue of the Pragmatic Engineer Newsletter. In every issue, I cover topics related to Big Tech and high-growth startups through the lens of engineering managers and senior engineers. In this article, we cover one out of four topics from today’s subscriber-only The Scoop issue. To get full issues twice a week, subscribe:
GitHub just published a survey about developer productivity and AI coding tools. The company hired an external research agency to survey 500 US-based software developers who work at large, 1,000+ person organizations.
I reached out to GitHub for more detail on how the survey was conducted. Here are some facts about the population surveyed, which were not published with the original survey:
All respondents work full-time and are individual contributors
Specialization: 32% fullstack, 23% frontend, 17% backend, 18% DevOps, 4% mobile, 6% operations
Age: 47% 40-49, 41% 30-39, 6% under 30, and 6% over 50
Gender: 70% male, 30% female
The industry split was also pretty even.
One finding really jumps out: 92% of developers say they use AI coding tools at work.
Back in April, we covered the productivity impact of AI tools, based on a survey of engineers who’ve been using AI coding tools for some time. While I know few developers who don’t occasionally use ChatGPT or haven’t tried an AI coding assistant, I always assumed this was just my bubble. But this research indicates these tools have spread far and wide.
What do AI coding tools help the most with? The survey lists the top 3 areas mentioned by developers:
Learn: develop coding language skills (57%)
Productivity: become more productive (53%)
Focus: spend more time building and creating, less on repetitive tasks (51%)
These findings chime with the biggest productivity gains developers mentioned in that earlier article, whether from AI coding tools like Copilot or from generative AI tools like ChatGPT. Here are the most common use cases from The productivity impact of AI coding tools, organized into the GitHub survey’s three categories:
Learning (Generative AI)
New concepts, tools and frameworks: interestingly, learning unfamiliar topics was the use case mentioned most frequently in the context of ChatGPT. Researching new topics is also common.
Improving code quality: asking ChatGPT to improve a piece of code, or to critique it. Another use case is pasting in code and asking ChatGPT to refactor it.
Getting started: kicking off new projects and tasks, and overcoming initial barriers. As one respondent shared: “it breaks the initial barrier of ‘where to begin?’” Generating code for greenfield tasks or projects is also a common use case.
Productivity (AI coding tools & Generative AI)
AI coding tools:
Scaffolding: “I have been using it mainly to get basic structure ready without typing it all out by myself. Helps me to do my task in a very short amount of time in most cases.”
Autocomplete: "I have it always on, suggesting autocomplete in the IDE."
Generative AI tools:
Debugging: One use case is to give ChatGPT some code and ask it why that piece of code isn’t behaving as expected.
Prototyping: several engineers mention they use ChatGPT to throw together a prototype quickly.
Focus (mostly AI coding tools)
AI coding tools:
Boilerplate code: “Integrating with a very inconsistent SOAP service is an example of when I would go mad if I had to type it all out. Copilot is able to use bits of notes I have in IDE to generate reasonably-looking skeletons while adjusting types/names/literals.”
Generating repetitive code: "It generates a lot of code I would type anyway. It speeds up my work."
Generating tests and documentation: "I use it for everything – writing code, writing tests, writing documentation" and "generation of (block) comments, generation of tests, documentation."
Generative AI tools:
Routine, boring tasks: for example, SQL or Elastic queries and understanding what JSON responses mean.
We should expect even more heated competition between AI coding tools, off the back of this data. If close to 90% of developers already use something to help them code, then we are still in an early phase: most of the population is experimenting with these tools, but adoption is young. This means the future market leaders are likely tools that are available today, or will launch very soon.
Now is a good time to recap AI coding tool alternatives. We previously covered these:
Tabnine (2019)
GitHub Copilot (2021)
Replit Ghostwriter (2022)
Amazon CodeWhisperer (2022)
Codeium (2022)
Sourcegraph Cody (2023)
CodeComplete (2023)
FauxPilot (2023)
Tabby (2023)
In the 2 months since publishing that list, several new tools have launched, including:
Refact.ai: a coding assistant for VS Code or JetBrains, with a self-hosted option
Cursor: an AI-first code editor.
GitLab Code Suggestions (beta).
Studio Bot for Android Studio, by Google. Trained on the PaLM 2 model.
JetBrains’ AI coding assistant inside Fleet, its next-generation IDE.
Bito: bringing ChatGPT to the IDE. This company launched publicly only a few days ago!
Visual Studio Code plugins:
HuggingFace Autocomplete for VS Code. Based on the StarCoder LLM published by HuggingFace on 4 May.
10Minions for VS Code.
None of the above are endorsements: do your own research on matters like which code models these companies use, their policies for keeping your code secure, and other due diligence. See a comparison of the first 9 tools here. This space is evolving rapidly: it’s exciting and hard to keep up with!
Developers in this survey seem to feel they are already evaluated roughly how they think they should be. One interesting question was how these devs think their managers should rate their performance, versus how they actually do. Unfortunately, the survey does not make apples-to-apples comparisons possible for most categories. I took the categories where direct comparison is possible, and the result is pretty surprising.
I’ll add that I find it problematic that GitHub has not released the full survey results, as it feels like the company is cherry-picking results and making apples-to-apples comparisons hard. For example, the report says “developers want more collaboration (...) developers want collaboration to be a top metric in performance reviews.” But when looking at the apples-to-apples data, this doesn’t ring true: 35% of developers said they’d like collaboration and communication to be measured for performance, and 33% said their company already does this.
So how will evaluation criteria change if everyone uses AI coding tools? Developers think it won’t change much, based on directly comparable data from the survey.
There are interesting takeaways from the survey, but I feel the results were presented in a way that fits the narrative of AI coding tools leading to more collaboration. From looking at the raw data – the limited amount that was released, with much curiously withheld – I did not reach the same conclusion. Sure, people will “collaborate” more with the AI itself, but I got no sense from the data that using AI tools results in more collaboration between software engineers.
However, despite my suspicion that the data was cherry-picked, I do agree with this takeaway of the survey:
“As AI technology continues to advance, it is likely these coding tools will have an even greater impact on developer performance and upskilling.”
Generative AI and within-IDE coding tools still seem to be distinct categories. In my observation, there’s still a big divide between two types of AI coding helpers:
Generative AI: chat “buddies” you can ask things like “explain how generics work in Go, and how they differ from generics in Java.” These tools greatly help with learning, and can also help scaffold or prototype ideas (see the sketch after this list).
Within-IDE AI coding tools: these aid the coding workflow by letting users focus more on “interesting” work, and make coding more productive.
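To make the “chat buddy” example above concrete, here is a minimal sketch of the kind of answer such a tool might give to the Go generics question. The snippet is my own illustration, not output from the survey or from any particular tool:

```go
package main

import "fmt"

// Number constrains T to types whose underlying type is int,
// int64, or float64, all of which support the + operator.
type Number interface {
	~int | ~int64 | ~float64
}

// Sum adds up any slice whose element type satisfies Number.
// The constraint is checked at compile time: unlike Java,
// Go generics involve no type erasure and no boxing.
func Sum[T Number](values []T) T {
	var total T
	for _, v := range values {
		total += v
	}
	return total
}

func main() {
	fmt.Println(Sum([]int{1, 2, 3}))      // 6
	fmt.Println(Sum([]float64{1.5, 2.5})) // 4
}
```

A chat tool would typically pair a snippet like this with prose on the key difference: Go expresses constraints as compile-time interfaces, while Java generics rely on type erasure at runtime.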
This was one of four topics covered in this week’s The Scoop. A lot of what I share in The Scoop is exclusive to this publication, meaning it’s not been covered in any other media outlet before and you’re the first to read about it.
The full The Scoop edition additionally covers:
AWS’s us-east-1 outage: a deep dive. Amazon’s most important region went down for 3 hours, and the whole of the web felt it. Which services and companies were impacted and what really caused this incident? I spoke with engineers at AWS to get answers. Exclusive.
Why Meta is reducing its number of managers. On a recent podcast, Meta’s founder and CEO shared his reasoning for why the tech giant now has fewer managers. I talked with current Meta engineers for their reaction – and give my two cents as well. Analysis.
HashiCorp’s ‘optimized’ layoff process. The infrastructure provider cut 8% of staff, and seems to have ‘optimized’ this process from a business perspective. How did the company go about communicating redundancies, and what would a more humane process have been? Exclusive.