How GenAI is reshaping tech hiring
Large language models are forcing tech hiring managers to adapt software engineering interview processes, fast. We look into how this is happening, and what to expect in the near future.
Veteran engineering manager Karthik Hariharan advises leaders at startups, and recently shared an interesting observation with me:
“It feels like the interviewing process for engineers has changed since ChatGPT hit the scene two years ago. Some startup eng leaders I've talked to are embracing it, and assuming, or even encouraging usage during interviews! Others are putting in a lot of checks/secure approaches to ensure AI tools are not used.”
LLMs have taken the industry by storm in two short years, and our recent survey found that around 80% of software engineers use LLMs daily. The most popular are ChatGPT and GitHub Copilot, and there’s a long tail of other tools in use – with “GenAI-first” IDEs like Cursor, Windsurf, and Zed also seeing a surge in popularity. It’s rare for a new technology to be adopted as rapidly as AI tools have been.
Coincidentally or not, ChatGPT is very good at solving Leetcode-style algorithmic interview questions. As a rule, LLM tools make for strong coding companions. This means many interview processes for software engineers which currently focus on algorithmic coding are rapidly ceasing to be useful at identifying coding talent when candidates have access to these LLMs.
But how are employers reacting to this development, and changing their processes to identify the best candidates?
This article tackles the issue with detailed contributions from 49 tech professionals, via direct messages and form responses. Sixty-five percent of respondents are hiring managers (engineering managers, directors-or-above, and founders/cofounders), and 35% are software engineers.
Thank you to everyone who contributed! Thanks also for additional input from the cofounders of technical assessment vendors Cookd (a new type of technical testing), Equip (vetting qualified candidates), interviewing.io (anonymous mock interviews with senior engineers from FAANG), and Woven Teams (human-powered technical assessments). As usual, this publication has no affiliation with companies mentioned. See the ethics statement for more information.
We cover:
Impact on recruitment. More focus on catching “fakers,” more effort in recruitment, and more demand for interview questions that LLMs cannot efficiently help with.
“Faking it” with GenAI tools. Candidates can more easily get away with seeking unauthorized LLM help in remote interviews – while interviewers are increasingly suspicious. More employers will explicitly ban GenAI tools, but such bans will be hard (or impossible) to enforce in remote settings.
Impact on resume screening. Weaker resumes and cover letters, pedigree becoming more important than a well-written resume, and some companies could start to use GenAI to filter applications.
Changing take-homes and coding interviews. Exercises to evaluate candidates beyond their coding skills will likely spread, and new formats like code review interviews or system design interviews could be weighed more over coding interviews. More companies could drop Leetcode-style algorithmic interviews.
Effects on the interview process. Some companies will push for more in-person interviews, while others will integrate LLM usage into the interview process. Smaller companies are more likely to embrace LLMs, while larger ones could simply push for in-person interviews.
Redesigning the interview process at Box. Senior EM Tomasz Gawron shares exclusive details on how Box redesigned the software engineer interview process, to make it “GenAI-ready.”
Integrating GenAI into the interview process. Tips on how to take advantage of GenAI tools to run a more efficient interview process.
1. Impact on recruitment
There are common themes about the impact of AI tooling mentioned by respondents who are recruiters, hiring managers, and interviewers:
More focus on catching fakers
This is a major gripe of respondents:
“People clearly unqualified are able to basically bypass our recruiter screens, since GenAI can make sure their resume matches every keyword, and they can use real-time GenAI during recruiter screens well enough to get to the next round.” – Senior DevOps engineer at a transportation company
“Live discussion about the deliverable has become more valuable, and the code itself less valuable because it could have been produced by an LLM. I now have to try to understand if a candidate is responding personally during a live interview, or if they are reciting an LLM’s output aloud. Fortunately, LLMs are still pretty slow, so it's a bit obvious when they do this.” – Head of Data at a Series A startup
Recruitment is more effort
Hiring managers and recruiters alike say that GenAI tools create extra work during the recruitment process.
More resume screening:
“GenAI is making tech recruitment harder. It’s not just interviews, but especially the initial application review and screening stages.” – Recruiter at a Series D scaleup
“It’s much harder to filter candidates to spend time on. These tools increase the amount of work recruiters and hiring managers need to do, and it’s harder to do ‘asymmetric’ interviews where the hiring company invests less time, and the candidate needs to invest more time in a task.” – EM at a startup.
Noisier. Jayanth Neelakanta, cofounder and CEO of pre-employment tech assessment startup Equip, shares:
“Lots of candidates are bulk applying with AI-generated resumes. Applicant Tracking Systems (ATSes) are using AI to filter out these resumes. A new system will have to soon emerge to resolve this Catch-22 type situation.”
Harder to evaluate less experienced candidates. An interesting observation from a director at a UK digital agency:
“Most of our hiring is focused on entry-level developers and career switchers, so our focus has been aptitude-focused. So we give people relatively straightforward problems that they could work out for themselves – we don’t do the Leetcode-like problems.
We’ve seen a rapid increase in candidates pasting whole solutions, or ‘pausing to think’ while typing on a side-screen before making changes or answering questions – most likely interacting with an LLM.”
Harder to get reliable signal. At heart, the recruitment process is about learning enough about potential hires in order to accurately assess them. GenAI makes this harder because it’s unclear how much signal comes from a candidate, and how much from the tool.
“It’s easy to throw our coding interview problem into a prompt, meaning we can not be sure if the submission we get comes from the candidate, from an LLM, or a mix.” – Engineering manager at a contract agency
“AI tools are very good at solving low-context challenges that we use during interviews. However, they are not nearly as good at solving day-to-day challenges. I don’t buy the argument that you should let candidates use these tools in an interview setting which needs to be low context, by design.” – director at a UK-based digital agency
More time in interviews. Ali Alobaidi, cofounder at Cookd shares:
“Onsite interviews are for pre-vetted engineers with strong technical skills and the goal of an onsite interview is to assess their collaboration skills. But LLMs are making pre-vetting exercises like async coding challenges very low signal. This means employers must do a lot more onsite interviews to validate technical skills. I know some companies at which engineers interview for 10 hours per week!”
Recruitment tooling vendors’ viewpoint
Companies want problems for candidates to tackle which can’t be solved by googling it. Equip cofounder and CEO, Jayanth Neelakanta, shares:
“Our customers previously used "non-Googleable" questions that had a plagiarism check. But GenAI disrupted this, as it can give answers that aren't identical. Instead, customers want granular monitoring of a candidate's environment; their screen, camera and mic, locking down copy-paste, etc.”
Around 5% of companies allow GenAI tools, and this number isn’t growing. Woven offers technical assessments for hiring software engineers, and customers can choose to allow or ban GenAI tools. Founder and CEO, Wes Winler, is surprised by how few opt in for AI:
"Among our customers, fewer than 5% are allowing GenAI tools in their live coding loop, their final round, or in their assessments. Surprisingly, this % has been stable since mid-2023 and hasn't been increasing (yet). The vast majority are keeping their previous process, with no GenAI allowed. I remain surprised by how slow the adoption curve is."
2. Faking it with GenAI tools
How common is it for candidates to use LLMs when the recruitment process explicitly forbids AI? Well, a study has established a predictable but challenging motive for using chatbots against the rules: it works.
Aline Lerner, CEO of interview startup interviewing.io, told us about a study the company ran to find out how easy it is to cheat with ChatGPT. It set up 32 audio-only interviews to ensure anonymity.
Interviewees were told to use ChatGPT, but not to tell the interviewer. The questions were a mix of Leetcode questions, slightly modified Leetcode questions, and custom ones. The results:
Based on these results, ChatGPT is very good at passing coding interviews where the question – and some solutions – can be found online.
The experiment proved cheating pays off: it results in a better pass rate, and goes undetected. Interviewees had to rate how worried they were about being caught: 81% had no worries, while only 6% were very worried. Surprisingly, no interviewer reported noticing anything underhand taking place. Remember, interviewers were instructed to "conduct the interview just as you typically would" and to "evaluate the candidate as you usually do." Nothing was mentioned about candidates potentially using ChatGPT – which is in line with common interview expectations.
The interviewing.io team say:
“Companies need to start asking custom questions immediately, or they’re at serious risk of candidates cheating during interviews, and ultimately not getting useful signals from interviews.”
A reminder that this test didn’t cover video interviews; it was conducted in an audio-only environment. Check out the full study.
Faking it on camera
Incidents of candidates using AI tools against the rules during video interviews are increasingly common, according to Wes Winler, founder and CEO of Woven:
“Candidates are using GenAI to (mostly successfully) cheat in assessments, and they also unsuccessfully cheat in live coding interviews. We usually discover cheating during a final round interview. After a bombed interview, we go back and look at the live coding interview recording, and we usually find the candidate looking off-screen and pausing a lot.
Candidates who are fairly competent during the final round while cheating are likely to get away with it. Companies doing remote hiring with a live coding interview all have a story of candidates attempting to cheat awkwardly in a live coding interview.”
Some hiring managers admit to flying blind, as a Head of Engineering at a fintech company shares:
“I am sure candidates use GenAI during live interviews – we simply do not have sufficient tools to prevent or detect it.”
Interviewers are increasingly suspicious of “spat out” answers. A security engineer in Big Tech says:
“I interview L4 and L5 software engineers, and pose whiteboard coding questions on the spot. These interviews are virtual, and when asking coding questions, I have received suspicious answers. For example, candidates who do not ask clarifying questions nor talk through their thinking; they just spit out code. Sometimes the code looks right, but is actually wrong. I suspect it’s LLM generated, but I have no proof.”
GenAI is reportedly even used in system design and behavioral interviews. Kwinten Van de Broeck, Director of Engineering at Cognite, shares:
“During system design interviews, candidates type on a second monitor and read off responses in a robotic voice. At least it's usually pretty obvious, though I guess I wouldn't be able to tell if it was not obvious!”
“Candidates have used real-time AI to answer behavioral questions in live interviews. Surprisingly, extremely senior candidates have done this; like a candidate for a principal engineer position. We realized it when we asked them to clarify what they meant by a word we weren't familiar with. It was a made-up word that didn't exist, and the candidate couldn't explain it!” – A group PM in Big Tech
Clues
Respondents shared some tell-tale signs that make interviewers suspicious a candidate is using AI tools during an interview:
Pausing after questions. Some interviewers say they’ve become suspicious of quiet time: the candidate could be silent while waiting for AI to spit out its response, rather than pausing for thought. There have also been reported cases of interviewees using ChatGPT’s Voice Mode to answer, and of candidates reading out the response returned by ChatGPT. Still, interviewers assuming that a pause means LLM use would be unfortunate, as trust could be undermined for no reason!
Looking off-screen. Candidates looking off-screen could be a sign there’s a second screen doing something important. A tell-tale sign can be eyewear if it reflects the glow of an extra screen.
Rambling answers. For example, if an interviewee responds to a question about troubleshooting an EKS cluster by first reeling off facts about EKS, that mirrors how an LLM opens its replies: descriptive and wordy. The waffle becomes the candidate’s own when they read it out loud.
Banning GenAI doesn’t always work
Woven is a startup that runs technical screenings, and when a customer wants no GenAI tools to be used, the company can detect if candidates break the rule. Cofounder and CEO Wes Winler shares:
“Between 7% and 25% of candidates are using GenAI for assessments where GenAI is explicitly banned. This is the percentage of candidates we actually catch, and we rate their submission as “zero points.” This is a low-end estimate; we're almost certainly missing some. Junior (college hiring) and international roles have higher cheating rates.”
3. Impact on resume screening
GenAI is heavily used for writing resumes and sending mass applications. This means more inbound applications, and more which are tailored to specific positions. It’s not hard to see where it leads: a bigger-than-ever pile of applications for recruiters and hiring managers, with more noise and less signal, making it harder to find qualified candidates. So hiring managers are adapting:
Weaker resumes and cover letters
A common observation – and complaint – of tech company hiring managers focuses on the standard of written applications.
Cover letters are almost all AI-generated, and therefore useless. This is a common sentiment:
“Easily 90% of the cover letters we get are clearly written by AI. We should probably remove that field entirely from our job form.” – Group PM, Big Tech.
“We’ve had cover letters that were clearly augmented with AI to fit the job description perfectly. For example, one applicant applied to several positions at our company, from database administrator to React engineer. Their cover letter mirrored each job description perfectly; however, it had nothing to do with their actual resume!” – Head of Software Engineering
LLMs are increasingly ubiquitous in resume-writing, but it’s unclear whether they add value. Too many resumes look alike, with uniform wording and phrasing:
“GenAI-enhanced CVs are just bad and wordy for no reason.” – Stefano Sarioli, engineering manager
“We get a lot more resumes that are clearly generated by GenAI tooling. They're all extremely similar and uninspiring to read.” – Kwinten Van de Broeck, Engineering manager, Cognite
“So many LLM-generated CVs. I’ll scream if I read about people ‘spearheading initiatives’ again – it’s such a common term that LLMs come up with!” – Head of Engineering at a 150-person fintech company
Pedigree is more important
Wes Winler, cofounder at Woven:
“Recruiters are leaning more on pedigree in terms of experience at a high-value brand, well-known schools, and professional background.
Candidates are using GenAI to bulk apply, and recruiters are overwhelmed with resumes. Keyword searches are worse because AI is good at defeating them. As one of our customers told us: ‘I hate that I have to resort to prioritizing pedigree, but I need some way of prioritizing 1,000 resumes. I'm just one person, so I fall back on these signals.’”
Using AI to filter resumes?
No respondents say they use AI to filter resumes, but plenty reckon that others do. Meanwhile, Leo Franchi, Head of Engineering at Pilot.com, is unsure how well the tools perform:
“We are trialing tools for resume analysis and filtering, right now. We’ve not yet determined if any of them are good."
This is a tricky area in Europe due to regulation. In 18 months, it will be mandatory to register AI models used for “high risk” cases such as making decisions about job applications. Companies could suffer reputational damage if they use AI to reject candidates, especially if the tool is revealed to have biases. In other regions, there is no such regulation.
At the same time, with more candidates using AI to mass apply to jobs, it’s hard to imagine companies not developing automated filtering systems to weed out AI-generated applications.
4. Changing take-homes and coding interviews
Interviewing is changing with the times and tools in a few ways: