Have you ever seen a recruiter in your company chat asking something like:
@here does anybody have experience with Language X to review a candidate's challenge?
It is not hard to see it for what it is. It is a cry for help.
What recruiters are looking for is a hiring decision, and they need an engineer to do it for them.
If everything goes well, one of your engineers will speak up and accept the task of reviewing this code.
The process may look straightforward on paper. Yet the engineer will need to make a hiring decision before moving on with their day, or whatever remains of it.
You do not want the review to be superficial—or worse, a coin toss—when hiring someone. You want to extract the most signal from it to make the best possible decision. Credible review of other people's code requires skill and experience. The engineers who can do that are usually the most tenured ones on your team (and likely the ones earning the most). Reviewing the solutions will consume the time of your most seasoned veterans. The very same engineers you rely on to build your product.
They will have to stop what they are doing and likely spend their whole day reviewing the submission.
People are generally decent, and they will try to be fair. If the code they are reviewing has no tests, what will they do? They will try their best to see whether a tainted first impression still hides a good developer beneath it. They will try to get the best out of the candidate's solution. The better they do it, the more time it will consume.
The worst-case scenario is that this goodwill—or high pressure to hire someone—will translate into a bad hire, with all the added costs of it spreading through the company and impacting your teams.
If you look closely and honestly, you can see that most of what your engineers are doing here is toil: repeated work that brings little to no value. A highly inefficient use of your engineering time.
We ran hundreds of such classic interviewing loops. What we discovered is that reviewing about 40% of the code challenges sent back by candidates is a waste of time. Some were incomplete, others did not fit the problem description, and some were simply a complete trainwreck.
Another 20% will not pass a thoughtful review process. Only the remaining 40% form the pool out of which you can hire. Not to mention that the distribution will naturally be heavily skewed towards junior-level developers: less than 10% of your original pool of applicants will be senior-level people and above.
Around 60% of the submissions will yank your people away from their tasks only to end up in a direct rejection. They will impact your product development without any return on investment. Do this often enough, and your whole development pipeline will grind to a halt.
(Note: we processed about six hundred candidates when we calculated those percentages.)
If you repeat, again and again, things that leave you off in a worse place than before—stop doing them!
   __
  /  \       _____________
  |  |      /             \
  @  @      | Stop making |
 ||  ||     | bad hiring  |
 ||  ||  <--| decisions!  |
  |\_/|     \_____________/
  \___/
Only then can you move forward. Counterintuitively, the path forward is not as complicated or obscure as it might seem. What you need to do is improve your filtering.
Stop burning time on candidates who have zero chance of getting through. Here we are talking about the initial 60% of the challenges you get back. Every hour invested there is a wasted hour. Your senior people should focus on building and maintaining your product instead. As a side note, you are also saving them from the frustration of reviewing lousy solutions. After all, the happier they are, the better off your company is.
Define a set of rules and metrics that you expect a solution to meet. Then communicate them to the candidates, in all fairness. No unit tests is a no-go? So be it, but let the candidate know in advance, so the expectations are clear. That not only makes it a fair game but also gives you the tools to apply those rules.
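To make this concrete, such rules can be written down as an explicit, shareable checklist rather than kept in reviewers' heads. A minimal sketch, with entirely hypothetical rule names (the post does not prescribe any specific set):

```python
# Hypothetical checklist of review rules; the names and wording are
# illustrative assumptions, not rules from any particular company.
RULES = {
    "has_unit_tests": "The solution must include automated tests.",
    "matches_spec": "The solution must implement the problem as described.",
    "runs_cleanly": "The solution must build and run without manual fixes.",
}

def unmet_rules(results: dict) -> list:
    """Return the descriptions of rules a submission failed.

    `results` maps a rule name to True/False; missing rules count as failed.
    The output can go straight into a transparent rejection note.
    """
    return [desc for name, desc in RULES.items() if not results.get(name, False)]
```

Because the same list is shown to candidates up front and used to generate feedback, the expectations stay consistent on both sides.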
We applied a simple rule for our challenges: it has to work in production.
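A rule like "it has to work in production" can be checked automatically before any human looks at the code. A minimal sketch, assuming the candidate's solution is reachable over HTTP; the function name and endpoint convention are illustrative, not part of the original process:

```python
# Hypothetical smoke test for a "works in production" rule: the submission
# passes the first filter only if its deployed service answers a request.
import urllib.request

def submission_works(base_url: str, timeout: float = 5.0) -> bool:
    """Return True if the service at base_url answers with a 2xx status."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except Exception:
        # Unreachable host, timeout, or non-2xx status: the submission
        # does not meet the "it has to work" bar, so no review is needed.
        return False
```

Submissions that fail this check never reach an engineer's desk, which is exactly where the time savings described below come from.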
The immediate improvement was that we did not have to spend any time on the 60% of solutions that did not work. Some candidates did not reply, and others did not consider the test engaging or even fair. Some still insisted on the interview while being unable to ship something that works. We freed our engineers from reviewing all those submissions.
That alone resulted in more than a 2x decrease in time spent on reviews. We returned hundreds of hours of engineering time to the company by removing all this toil. And on top of that, the work we removed was exactly the part nobody enjoyed!
For us, it resulted in happier engineers across the company. For candidates, it improved transparency: they now understood upfront what to expect and how well they were doing during the challenge. Overall, that increased our interview-to-offer rate by roughly 50%.
This simple change yielded fantastic results for us. So we have built it as a product to help all the companies that want to hire smartly. Engineering time is an invaluable resource, so start saving it with AutoIterative. Reinvest it back into your company and focus on your product, while we do the filtering for you. Then you can focus on interviewing the people who can deliver and meet the bar that you set.
Discuss this post on HackerNews.