Hire smartly.

Let us be blunt: the current approach to hiring software engineers sucks. It is impossible to review a résumé without falling prey to in-group bias. Whiteboard interviews are stressful for candidates. Algorithmic knowledge does not predict the ability to solve real-life problems. You end up hiring the skill to pass interviews.

Hire on the ability to deliver instead.

With AutoIterative, you hire on the right metric—the ability to do the job.

For candidates, we provide fun and exciting challenges that resemble real work. We grade the submissions automatically to ensure that they meet the challenge objectives.

For our customers, we provide the right metric for hiring the right people. It also saves them valuable engineering time and frees them from unconscious bias.


No biases

Our automated assessments do not care about CVs, race, or gender. All data is anonymous by default.

Save time and money

Focus your time on candidates who meet your acceptance criteria. Reinvest what you saved back into your company.

You set the bar

Hire from the top 10% performers, or raise the bar and hire from the top 1%. We give you the data, you decide.

Stand out

Focus on the human side of your candidates and attract the most interesting ones. Make your interview process memorable.

Massive savings

You focus on people, we do the heavy lifting.

The candidate is sourced or has applied for a position.

With AutoIterative:
  • Invite the candidate to the challenge
  • Get notified when they pass it
  • Receive the complete assessment
  • Discard those that do not meet the bar
  • Prepare the interviews

Outcome: confirmed ability to deliver, at $100 per assessment.

The traditional way:
  • Unconsciously biased CV review
  • Phone screen: one hour, plus preparation
  • Technical phone screen: multiple engineering hours
  • Take-home exercise review: easily half a day of engineering work
  • Second or third review when opinions disagree: more engineering time
  • Final interviews

Outcome: 8+ hours spent per candidate, and still no certainty.

Common Questions

How massive are the savings, exactly?

Easily 10x.

In a typical hiring process, you spend up to an hour reviewing and cross-checking the résumé.

You then plan, book, prepare, and run a phone screen, which will take you another hour, not counting any feedback you need to process afterward.

Technical phone screens will also require engineers to be engaged, so time starts adding up.

After the phone screen stage, you will move to a take-home exercise, which engineers then need to spend time reviewing: understanding it, grading it, and writing a report.

If solutions are submitted as incomplete archives, without setup instructions, or outright in binary format, they have to be filtered out, and it is your engineers who will spend valuable time doing this. They will then spend more time setting things up and trying to run the solution before they even have a rough clue whether it works at all. All of this time has to be invested upfront, before reviewing the code and deciding whether the process should continue.

Sometimes you will need more than one opinion, and you will engage two or more engineers, which in turn can result in your engineers having to spend time agreeing on what a good solution looks like.

You will also need to factor in all the subtle hidden costs: nasty context switches that break your engineers' flow, call scheduling and rescheduling, writing feedback and sometimes rewriting it, only to reach the point where you are still not sure whether the candidate can deliver something that works in production.

After all this work, depending on the company, around 70% of candidates are rejected at the code challenge stage, rendering all that effort waste.

With AutoIterative, you jump straight to the final interviews with the candidates who meet your criteria, without investing a single engineering hour. You focus only on the candidates who can deliver.

Can you filter remote candidates this way?

Yes.

Our filtering solution is modeled after how remote work happens in distributed companies: it is asynchronous and self-service. It is also a much more realistic approach for a remote employee than an on-site interview.

How does this remove biases?

By ignoring the CV and hiding the candidate’s identity completely until we have the final assessment, we remove any kind of bias that comes with the candidate’s gender, origin, or race.

To do this, we hand the candidate a problem to solve, and then we automatically measure only how well they solved it, ignoring everything else.

How do you measure delivery?

By checking that the solution works.

On every successful build, we perform a set of validations that exercise the candidate’s solution in different ways.

We automatically test not only that the solution complies with the problem definition, but also that it works correctly, handles edge cases and errors, replies within the latency tolerance, and keeps working when we throw a lot of load at it.

The candidate only has access to the initial test that gives them early feedback on how they are doing. The rest of the stages are visible only to the hiring manager.

When the candidate decides that they are finished, they can signal it through the platform, and their solution will be immediately and automatically graded without any human intervention.
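
To make the grading flow concrete, here is a minimal sketch of what such a staged validation harness can look like, in Python. The endpoint paths, latency budget, and load numbers are invented for illustration; they are not our actual checks.

    # Minimal sketch of staged validation against a running solution.
    # Endpoint, payloads, and thresholds are invented for this example.
    import time
    import urllib.error
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    BASE_URL = "http://localhost:8080"  # where the candidate's container listens
    LATENCY_BUDGET_MS = 100             # example latency tolerance

    def call(path):
        """Send one request; return (HTTP status, elapsed milliseconds)."""
        start = time.perf_counter()
        with urllib.request.urlopen(BASE_URL + path, timeout=5) as resp:
            resp.read()
            return resp.status, (time.perf_counter() - start) * 1000

    def complies_with_spec():
        status, _ = call("/items/1")
        return status == 200

    def handles_errors():
        # A missing resource should fail cleanly, not crash the service.
        try:
            call("/items/does-not-exist")
            return False
        except urllib.error.HTTPError as err:
            return err.code == 404

    def within_latency():
        _, elapsed_ms = call("/items/1")
        return elapsed_ms <= LATENCY_BUDGET_MS

    def survives_load(n=200, workers=20):
        # Hammer the service concurrently; every call must succeed.
        def one(_):
            try:
                return call("/items/1")[0] == 200
            except urllib.error.URLError:
                return False
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return all(pool.map(one, range(n)))

    if __name__ == "__main__":
        checks = [complies_with_spec, handles_errors, within_latency, survives_load]
        print({check.__name__: check() for check in checks})

Each stage only sees the service from the outside, exactly the way production traffic would.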

Tell me about the challenges

Challenges can be almost anything: they can use any open-source technology, build on any stack, language, or protocol, or provide actual datasets to process.

The following are our principles for designing new challenges.

Challenges mimic the real world

Because you are hiring people to do real work, not to solve puzzles. Hence, an example challenge is not “sort a string efficiently”, “find palindromes”, or “calculate sqrt”, but rather “given this API specification, implement a service, package it as a docker image, and ship it to production”. Just like you do at work.
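
For illustration, a first cut of a solution to a challenge like that could be as small as the sketch below: a tiny HTTP service for an invented “GET /items/<id>” spec, ready to be packaged as a docker image. The route and the data are made up for the example; any language works.

    # Hypothetical starting point for an invented "GET /items/<id>" spec.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    ITEMS = {"1": {"id": "1", "name": "example"}}  # stand-in data

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            item = ITEMS.get(self.path.removeprefix("/items/"))
            if item is None:
                self.send_error(404)  # fail cleanly on unknown ids
                return
            body = json.dumps(item).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), Handler).serve_forever()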

Challenges use real tools

Because, day to day, developers have to pass tests in CI, package deliverables, and ship those deliverables to production. The candidates are given the same tools they would use for actual work, so they can show you that they can deliver.
Therefore, at the start of the challenge we hand over a git repository with a pre-configured pipeline so they do not waste time on setup; all they need to do is have fun and deliver.

We do not give production access

Because there likely will not be production access in the real world, a candidate should rely on observability, just as they would in real life. They do get the logs of the initial execution to troubleshoot the first steps.

We do not enforce a specific language or framework

Because we do not know what you will use in production: solutions and services can be implemented in any language or technology stack you pick. As long as the solution is delivered according to the industry standard (a docker image), we can run it in our production environment and send requests to it. If it behaves according to the specification, then it is a job well done.

We do not test for academic knowledge

Because we do not care about prescriptive measurements like algorithms, complexity calculations, or obscure data structures. As long as the challenge works as expected, we grade it for scalability and performance afterward. Solutions using an O(n!) approach simply won’t scale beyond a handful of records in a data set, just like in real life.
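
As a toy illustration of how an inefficient approach filters itself out (the permutation sort below is deliberately terrible; it is not from any real submission):

    # Toy example: "sorting" by trying every permutation is O(n!).
    # Fine for 3 records, hopeless at realistic data sizes.
    import itertools

    def permutation_sort(data):
        # Examine every ordering until the sorted one turns up.
        for perm in itertools.permutations(data):
            if all(a <= b for a, b in zip(perm, perm[1:])):
                return list(perm)

    print(permutation_sort([3, 1, 2]))  # [1, 2, 3], instantly
    # At 12 records there are already ~479 million orderings to try,
    # so a solution built this way never survives the performance stage.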

How hidden is the candidate identity?

We do not request the candidate’s email or name; in fact, we automatically create a made-up name for each candidate when they accept the invitation. We do not store any personal data on candidates, and we abide by the policy of “you can’t leak what you do not know”.

Can I pair program with the candidate or watch their screen while they work?

No.

Just as in real work, you can’t (and shouldn’t) be looking over the candidate’s shoulder as they solve the challenge.

To work effectively, pair programming requires both developers to feel comfortable. That is not the case during an interview, so we do not believe a pair programming interview resembles real work. It provides an invalid metric for making a hiring decision, namely “do I like how this person types and talks”, instead of the valid “can they deliver”.

The same goes for time-constrained problem solving. That is not how real life works. People need time to think and reflect on the problems they face to build the right solution. Because of this, we do not artificially limit the time for solving a challenge.