Hire without wasting time.

Let us be blunt: the current approach to hiring software engineers sucks. Pairing sessions and whiteboard interviews are stressful for candidates. Algorithmic knowledge does not predict the ability to solve real-life problems. You are letting real talent slip away and hiring for the skill of passing interviews.

Hire the ability to deliver instead.

With AutoIterative, you hire using the right metric: the ability to do the real work. Our production challenges closely resemble the tasks your candidates will perform once hired, giving them the opportunity to shine while having fun. And automated assessment of their solutions ensures you waste no time at all while hiring the best talent.

Interested? Try it for free!

Hire without bias
Solutions are graded automatically, without taking into account your candidate's résumé, race, or gender. All data is anonymous by default.
Waste no time or money
Focus your time only on the candidates who can deliver.
Set your bar
Hire from the top 10% of performers, or raise the bar and hire from the top 1%. We give you the data; you decide.
Stand out
Focus on the human side of your candidates and attract the most interesting ones. Make your interview process memorable.

Massive savings

AutoIterative is an efficient building block for your software engineering hiring loop. We do the heavy lifting of assessing your candidates' technical skills, allowing you to focus on their personalities and spend time only on candidates who pass your acceptance criteria for the position. AutoIterative replaces the inefficient and expensive steps in your hiring loop, resulting in massive savings you can reinvest back into your company.

The candidate is sourced or has applied for a position.

With AutoIterative:
1. Invite the candidate to the challenge of your choice.
2. Get notified when they deliver.
3. Receive the complete assessment.
4. Focus only on those who met your criteria.
5. Review their code.
$100 per assessment, and confirmed ability to deliver.

Versus the traditional loop:
1. Unconsciously biased CV review.
2. Phone screen: one hour, plus preparation.
3. Technical phone screen: multiple engineering hours.
4. Review of the take-home exercise: easily half a day of engineering work.
5. Second or third review when opinions disagree: more engineering time.
6. Final interviews.
8+ hours spent on each candidate, and still no certainty about how well they will perform.

Common Questions

How massive are the savings, exactly?

Easily 10x. We offer you the opportunity to take back your company's only non-renewable resource: time.

Let us elaborate. In a typical hiring process, you spend up to an hour reviewing and cross-checking the résumé. You then plan, book, prepare, and run a phone screen, which takes another hour, not counting any feedback you need to process afterward. Technical phone screens also require engineers to be engaged, so the time starts adding up. After the phone screen stage, you move to a take-home exercise, which engineers then need time to understand, grade, and write a report on.

Here is where it starts getting expensive. Candidates can submit solutions as incomplete archives, without setup instructions, or outright in binary format. It is your engineers who will have to spend valuable time reviewing those submissions. They will have to set up everything required to run the solution before they even have a rough clue whether it works at all. All of this time has to be invested upfront, before reviewing the code and deciding whether the process should continue. Sometimes you will need more than one opinion and will engage two or more engineers, who in turn spend time agreeing on what a "good solution" is. Not to mention all the subtle, hidden costs: nasty context switches that break your engineers' flow, call scheduling and rescheduling, writing feedback and sometimes rewriting it, only to reach the point where you are still not sure whether the candidate can deliver something that works in production.

After all this work, depending on the company, 80% or more of the candidates get rejected, rendering most of that effort waste.

With AutoIterative, you jump straight to the final interviews with the candidates who met your acceptance criteria, without investing a single engineering hour. You focus only on the candidates who can deliver.

Can you filter remote candidates this way?

Yes.

We based our approach to filtering on how remote work happens in distributed companies. Your candidates can work on their solutions in a fully asynchronous way and within reasonable time frames.

Will this remove biases from my hiring loop?

Yes.

With AutoIterative, you give the candidate a challenge to solve, and we automatically measure only how well the candidate solved it, ignoring everything else. The platform generates anonymous names, removing any bias tied to the candidate's gender, origin, or race. This allows you to build blind hiring loops that hide the candidate's identity until the final interviews.

How do you measure delivery?

The AutoIterative platform checks that the candidate's solution works in production.

Every time the candidate commits code to the platform, we run an initial set of tests that gives them early feedback, just like continuous integration does in real life. Behind the scenes, we deploy their solution to our production environment and run an extended set of tests. These assess whether the solution complies with the problem definition, works correctly, handles edge cases and errors, replies within latency tolerances, and scales under high load. When the candidate finishes, the AutoIterative platform grades their solution across all those dimensions and provides a report to the hiring manager.
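
For illustration only, here is a minimal sketch of what one of those extended checks might look like. The service address, endpoints, and latency budget are invented for this example; they are not part of any real AutoIterative challenge.

import time

import requests

# Hypothetical sketch of an extended production check against a deployed candidate service.
# The URL, endpoints, and thresholds are illustrative assumptions, not real AutoIterative APIs.
SERVICE_URL = "http://candidate-service.example:8080"  # assumed deployment address
LATENCY_BUDGET_SECONDS = 0.2                           # assumed latency tolerance

def passes_health_and_latency() -> bool:
    started = time.monotonic()
    response = requests.get(f"{SERVICE_URL}/health", timeout=5)
    elapsed = time.monotonic() - started
    return response.status_code == 200 and elapsed <= LATENCY_BUDGET_SECONDS

def handles_bad_input() -> bool:
    # Malformed input should be rejected cleanly, not crash the service.
    response = requests.post(f"{SERVICE_URL}/orders", json={"quantity": -1}, timeout=5)
    return response.status_code == 400

if __name__ == "__main__":
    print({"latency": passes_health_and_latency(), "edge_case": handles_bad_input()})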

Tell me more about the challenges.

We design AutoIterative challenges according to the following principles:

Challenges must resemble the real work:

You hire people to do real work, not to solve puzzles. An example of a challenge that we consider a good predictor of future performance and a good source of data about the candidate: given this API specification, create a service and package it as a Docker image. Examples of challenges that we consider artificial and that measure the wrong metric: sort a string efficiently, find palindromes, or calculate a square root.
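
For a concrete feel, here is a minimal sketch of how a candidate might start such a service. The /greetings endpoint and its response are invented for illustration; a real challenge ships its own API specification, and the finished service would be packaged as a Docker image.

# Hypothetical starting point for a "build a service from an API specification" challenge.
# The endpoint and payload shape are invented; the real specification comes with the challenge.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/greetings", methods=["POST"])
def create_greeting():
    payload = request.get_json(silent=True) or {}
    name = payload.get("name", "world")
    return jsonify({"message": f"Hello, {name}!"}), 201

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)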

Challenges must use real tooling:

Every day, developers pass tests in CI, package deliverables, and ship them to production. We give candidates the same tools they would use for actual work, so they have the opportunity to shine. Every challenge comes with a pre-configured git repository and a CI pipeline, so the candidate does not need to waste time setting up tooling. All they need to do is focus on the task at hand, have fun, and deliver.

Candidates must not have production access:

When hired, your candidates will likely have no direct access to adjust their running code in your production environment. The same holds on the AutoIterative platform: the initial phase runs in continuous integration, where candidates have every means to troubleshoot their solution. When the solution is deployed to the production environment, however, we only hint to candidates whether they passed each stage. It is up to the candidate to cover their solution with test cases and read the specification carefully, and doing that is a good predictor of future performance.

Challenges must not enforce the usage of a specific language or framework:

The AutoIterative platform could enforce a specific way of solving the challenge, but being prescriptive brings no value. Your candidates should have the opportunity to perform in the language they know best, or, on the contrary, in something new they want to learn and have fun with. All we require is that the solution is delivered as an industry-standard Docker image so that we can run it in our production environment and assess it.

Challenges must not test for academic knowledge:

The AutoIterative platform does not test knowledge of specific algorithms, complexity calculations, or obscure data structures. Instead, we deploy the candidate's solution to our production environment and bombard it with a barrage of tests. As long as the solution works according to the specification, we grade it for scalability and performance. Solutions that use an O(n!) algorithm will fail those tests, just like in real life.
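
To illustrate the point (an invented example, not an actual AutoIterative test), compare two ways of checking whether two strings are anagrams. A load test would time out the factorial version long before it troubles the linear one.

# Illustration only: an O(n!) approach versus an O(n) approach to the same task.
from collections import Counter
from itertools import permutations

def are_anagrams_bruteforce(a: str, b: str) -> bool:
    # O(n!): enumerates every permutation of `a`; collapses under realistic input sizes.
    return any("".join(p) == b for p in permutations(a))

def are_anagrams_fast(a: str, b: str) -> bool:
    # O(n): compares character counts; scales to large inputs.
    return Counter(a) == Counter(b)

if __name__ == "__main__":
    print(are_anagrams_bruteforce("listen", "silent"))           # fine for 6 characters
    print(are_anagrams_fast("listen" * 1000, "silent" * 1000))   # still instant at 6,000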

Can I pair program with the candidate or watch their screen while they work?

No.

We built the AutoIterative platform on the opposite principle. People need time to think and reflect on the problems they face. Sometimes they need to take a break and go for a walk, or sleep on it. Sometimes they come up with a better solution right after they finish. The AutoIterative platform gives your candidates the opportunity to solve the challenge at their own pace, without time restrictions, without interruptions, and without someone looking over their shoulder.

Still interested? Try it for free!