Driftrock's developer interview process

This post is a snapshot of our current interview process for hiring software developers. We expect this to change as we learn, but we think it's still useful for our future selves and for potential candidates. Hopefully it explains why we do what we do, what to expect from interviewing at Driftrock and what we expect from you.

Much of this was devised for engineers who have just completed a coding bootcamp (such as Makers Academy) or who have 1-2 years' experience, so it'll be interesting to see how it evolves when we're looking for more experienced engineers.

 

Our interview process

We break this down into a pipeline of steps, with an evaluation at every stage and our certainty about a candidate increasing as we progress. We recognise that the best candidates are in high demand, so we try to keep this process simple and decline, progress or make an offer as soon as reasonably possible. There are five main steps (below) and we’ll go through each in more detail.

  1. Sourced or applied

  2. Phone interview

  3. Remote technical test

  4. Pairing, problem solving and meet the team

  5. Offer

 

Sourced or applied

Because we're a relatively small and unknown startup (we're trying to change this!) we don't tend to get many direct applicants. Of course, we still advertise our development roles on all the typical job boards - LinkedIn, AngelList, Work In Startups etc. - but we have a lot more success finding potential candidates ourselves. As a hiring team we’ve collectively had negative experiences using external recruiters, so instead we have a fantastic in-house recruiter, Maryline. Maryline helps us source potential candidates from LinkedIn and AngelList whilst also handling candidate communications and organising interviews.

In addition, we've more recently been using Hired and Talent.io with a lot of success. These platforms help us source candidates who are actively looking. Because they offer a lot more insight about each candidate, and there isn't an overwhelming number of potential candidates, we've found we can act faster here by having individuals from the development team regularly looking and shortlisting people. The shortlist is a signal to Maryline that we want to get in touch and see if the candidate is interested in the role. If it's a yes, we'll organise the next step, a phone interview.

How do we evaluate candidate profiles?

Regardless of how people enter this stage, whether they've applied or been sourced, we try to make our decisions in a consistent way. We look for two members of the development team to provide a pass; if either of the two decides it's a no then we reject the candidate. To help us make those decisions we always use the same criteria. Returning to the same criteria each time keeps us honest to our values, means any team member can take part and hopefully removes some of the many biases in play.

At this stage we have very little information - a Hired profile or CV only tells us so much. That makes it incredibly difficult to evaluate individuals accurately and consistently, so we keep this stage lightweight and look for the following:

  • Do they show a clear interest in learning and improving? - This is without question the main thing we look for. We want people who aren’t afraid to learn something new and are always looking for their next area of improvement.

  • Do they care? - We use this phrase a lot and I think it's best summarised as empathy. Empathy and emotional intelligence play big roles for us in how we work and collaborate as a team and with others. At this stage we can typically rely on a couple of proxy concepts for whether a candidate cares, for example:

    • Software Craftspersonship - Evidence of code quality, TDD and/or automated testing gives us a good indication that the candidate is aware of the benefits of these tools for helping team members read and reason about other people's code.

    • Agile methodologies - Awareness or understanding of some of these concepts can show an empathy for working in a team environment and optimising as a whole rather than an individual.

  • Is their profile well presented? - A little pedantic perhaps, but programming is all about detail. If their profile is a mess or devoid of information then we can assume their code will be too.

  • Have they programmed in React, Node, Ruby or Elixir? - Not a deal breaker but we currently use these languages/frameworks so it certainly helps if the candidate has experience in at least one of those.

 

Phone interview

The next stage is a 20-30 minute video or phone interview with a member of the development team. It's the ideal chance for candidates to get to know Driftrock, our products, our culture, our technology team and for us to get to know the candidate.

Throughout the interview process we try to stress that we're not about catching candidates out with questions they could easily answer after some research. In short, this means we ask questions like "Tell me about a recent project you worked on" rather than "What are the main differences between relational and NoSQL databases?"

We find we can gather a lot more information based on what they've learnt from their experiences rather than whether they can recite something that can be googled. In addition, this keeps the interviewee closer to their comfort zone and, given this can be a stressful and pressured situation, this helps us see more of the real person we could end up working with. As the conversation flows we might dig a bit deeper on some areas but we allow the candidate to guide how far we go.

How do we evaluate a candidate phone interview?

Here we split our evaluation into personal and technical attributes, commonly known as soft (or people) and hard (or technical) skills. We emphasise the importance of having those more human skills early in the process because we want people who can and/or want to collaborate and communicate within a team and with other teams.

Naturally there’s lots of crossover with the previous evaluation; here’s what we look for:

Personal attributes

  • Continuous improvement - Do they show a clear enthusiasm to improve regardless of their level of experience?

  • Communication - Are they clear, concise and thoughtful when communicating their ideas and previous experience?

  • Open to new ideas - As a general rule we seek people who have an opinion about how to do something but recognise and value other people’s opinions too.

  • Teamwork - Do they want to work with other people? Here we are really trying to determine how they view teamwork and collaboration and we’re looking to avoid solo / individual workers.

Technical attributes

  • Ability to articulate something technical - We want people who can take a complicated problem or solution and explain it concisely to an audience who may not fully understand it.

  • Software craftspersonship - As we’ve already mentioned we care about testing and code quality a lot and we want candidates who understand why.

  • Languages - We typically try to determine which languages they know and whether the ones we use are of interest to them.

 

Remote technical test

At this stage it’s all about assessing the candidate’s technical skills. Straight after the phone interview, assuming we want to pass the candidate on, we’ll send them a remote technical test. In the document we send, we clearly set out the problem and our expectations of the solution. This is almost always the longest step in our process, so we let candidates know that we expect a solution within about a week, but that they shouldn’t spend more than three hours on it and definitely shouldn’t let it take over their life. We can be more flexible if needed, so we encourage candidates to tell us and to ask questions as they go. The problem to solve has evolved a lot over the last few months; we’re now at a good place where it’s fairly simple and still relevant to the type of work we do daily (integrating with 3rd party services, transforming data and testing).
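To give a flavour of that type of work, here’s a minimal sketch of the kind of problem involved - the actual test differs, and the endpoint and field names here are invented for illustration:

```ruby
# Hypothetical sketch in the spirit of our remote test: fetch records from a
# third-party API, transform them, and keep the logic easy to test.
require "net/http"
require "json"
require "uri"

# Fetching is isolated behind one function so tests can stub it out.
def fetch_contacts(url)
  JSON.parse(Net::HTTP.get(URI(url)))
end

# Pure transformation: trivially unit-testable, no network access needed.
def normalise_contacts(contacts)
  contacts.map do |contact|
    {
      "name"  => contact["full_name"].to_s.strip,
      "email" => contact["email"].to_s.downcase
    }
  end
end

if __FILE__ == $PROGRAM_NAME
  contacts = fetch_contacts("https://api.example.com/contacts")
  puts JSON.pretty_generate(normalise_contacts(contacts))
end
```

Separating the side-effecting fetch from the pure transformation is exactly the kind of structure that makes the automated-testing and modularity criteria below easy to satisfy.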

How do we evaluate a remote tech test?

At this point we can get a little more objective with our evaluation. We use a grading framework to guide the evaluation and we keep track of all previous scores in one place, so we can provide a number relative to others we’ve seen before.

Here’s the spreadsheet we use for tracking, how we mark it and what we look for:

[Screenshot: the grading spreadsheet we use to track technical test scores]

Typically two people will review a technical test. Sometimes that results in two evaluations in the spreadsheet, sometimes they pair, and sometimes a team member simply agrees with the previous evaluation and doesn’t enter their own.

Breaking down the grading framework some more we’ve got the following criteria:

  • Working solution - Yes or No - Can I get the solution running locally and see the desired outputs?

  • Requirements met - 0 to 5 (5 is good) - Did they meet all the requirements specified in the document?

  • Readability - 0 to 5 - How easy is their code to follow and reason about?

  • Modularity - 0 to 5 - Have they broken down the solution into a number of files? Is there any duplication?

  • Automated testing - 0 to 5 - Are there any automated tests? How is the test coverage? Are the tests easy to follow?

  • Documentation - 0 to 5 - Are there any comments? What’s the commit history like? Is there a readme file? Bonus points for mentioning in the readme their approach and where they think the code could be improved 😃

We take an average across each of these topics and this becomes the score on the right (the Yes or No question is converted to a 5 for yes and 0 for no).
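As a minimal sketch of that arithmetic (the criterion names here are ours for illustration, not exactly what’s in the spreadsheet):

```ruby
# Minimal sketch of the grading arithmetic described above: convert the
# Yes/No question to 5 or 0, then average across all criteria.
def overall_score(grades)
  numeric = grades.values.map do |value|
    case value
    when true  then 5 # Yes
    when false then 0 # No
    else value        # already a 0-5 grade
    end
  end
  (numeric.sum.to_f / numeric.size).round(1)
end

grades = {
  working_solution:  true,
  requirements_met:  4,
  readability:       3,
  modularity:        4,
  automated_testing: 2,
  documentation:     3
}

puts overall_score(grades) # => 3.5
```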

We’ve also found it useful to capture a few other things: whether the reviewer thinks it’s a pass or not, the reviewer’s name, some extra notes and which language the test was in.

If it’s a no after this review then we will provide quite detailed feedback to the candidate, so the more information we have the easier this feedback is to construct.

 

Pairing, problem solving and meet the team

Finally, we ask candidates to come into the office for two hours, where we do three things:

  1. Pair programming - A developer works with the candidate to expand or improve the remote test they sent in. This usually takes about 45 minutes to an hour and we make time at the end to bring another team member into the conversation so the candidate can explain what they’ve just done.

  2. Problem solving - Next up is a 30-45 minute whiteboard exercise with two other team members. Don’t worry, this is not about writing code on a whiteboard, we’re not jerks. We present the candidate with a simple new feature we would like to build and we ask them to present their thoughts on how it could be built. We provide a lot of context at the beginning, keep the conversation fairly high level throughout and ask a few questions along the way to guide their understanding. We usually end up with a number of boxes on the whiteboard, neatly modelling some components that communicate with each other to solve the problem.

  3. Meet the team - Hopefully the candidate has now met most of the team already but if anyone else is around we’ll bring them into the conversation. This is a final opportunity for them to get to know everyone, ask any further questions and for us to ask for any feedback on the process.

How do we evaluate this final stage?

There’s a lot going on here so let me dig into what we’re looking for in those two main activities:

Pair programming

  • The basics - Familiarity with their IDE, the language and their solution.

  • Communication - Can they articulate their solution and their approach throughout the session?

  • Improvement (and humility) - They recognise their code isn’t perfect and there’s always room for improvement.

  • TDD - Bonus points for getting into a nice red-green-refactor TDD flow too :) (there’s a tiny illustration of that rhythm below).
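For anyone unfamiliar with the rhythm: write a failing test (red), make it pass with the simplest code that works (green), then tidy up with the tests as a safety net (refactor). Here’s a tiny illustrative example using Minitest from Ruby’s standard library - the feature itself is invented:

```ruby
# Tiny illustration of the red-green-refactor rhythm with Minitest.
# Red: write the tests first and watch them fail.
# Green: write the simplest slugify that makes them pass.
# Refactor: clean up freely, with the tests as a safety net.
require "minitest/autorun"

def slugify(title)
  title.downcase.strip.gsub(/\s+/, "-")
end

class SlugifyTest < Minitest::Test
  def test_replaces_spaces_with_hyphens
    assert_equal "hello-world", slugify("Hello World")
  end

  def test_trims_surrounding_whitespace
    assert_equal "hello", slugify("  Hello ")
  end
end
```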

Problem solving

  • Here we’re looking at how they openly explore a problem, embrace uncertainty and whether they take a measured and systematic approach to finding a solution.

You’ll notice that there’s a lot of crossover again here with other stages; there’s good reason for that. We’re really just looking to confirm our view of the personal attributes we learnt about in the phone interview and the technical attributes we learnt about with the remote test.

 

Final thoughts

It’s worth noting that everything we do puts people skills at the forefront of our evaluation. Once someone crosses a base level of technical aptitude, we believe that, given time, languages, practices and tools can be learnt, and existing team members can help teach them. However, we strongly believe that if a candidate joins the team with proficiency in communication and listening skills and high emotional intelligence* then together we can get to the stages of norming and performing much faster.

If you’ve got any feedback or suggestions, get in touch at tech@driftrock.com.

Thanks to Max, Hervé and the rest of the team for their help putting this article together.

* That isn’t to say people skills can’t be learnt and improved upon too, they most certainly can.