There are multiple techniques to conduct interviews. Before I go into anything specific, do you have any thoughts on what you have seen work and what hasn't?

While interviewing a candidate, the objective is to figure out whether they can add value to your company. Assessing things that have only one right answer was never our purpose. Keeping this in mind, at Flipkart we evaluated candidates on their ability to understand problems, think, come up with unique solutions and develop those solutions into decent code. The moment this becomes your mindset, the way you interview will automatically change.

Interestingly, while measuring the quality of tech output is hard, basic tech skills can easily be tested through automation. Let me tell you how Flipkart did it.

We went to a campus and told students, “Hey! Here are a bunch of Flipkart problems. Go solve them, send us your code and come work with us.” We were amongst the first users of HackerRank to evaluate these solutions in bulk. So even before we saw a person’s resume, we had an evaluation report based on their code. Evaluating 500 people together was never this easy.

If one is to have a democratic filtering process like HackerRank, what next? I mean, in a pool of 500 people, if 50 clear the test and you need to hire only 5, how do you evaluate them further? Would you recommend giving puzzles?

We need to keep in mind the essential skills we are looking for. Coding is one; another important one is problem-solving ability. We were looking for someone capable of looking at a problem from different perspectives and not just a straitjacketed view.

We didn’t give puzzles. When I joined, our favorite problem was the 25 mechanical horses race: you had to determine how many races it would take to figure out the three fastest horses. See, it wasn’t a puzzle; you were required to actually pen down use cases. The only way to find the optimum answer was by challenging your thinking capacity. Unlike a typical puzzle, such tests do not assess knowledge but your ability to think and create unique solutions. The quality of output was far more valuable to us than the knowledge itself. The mere understanding of code without the ability to brainstorm is futile.
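For readers unfamiliar with the problem: the classic version has 25 horses, a track that races 5 at a time, and no stopwatch, and the well-known answer is 7 races. As a rough sketch of the reasoning, here is a small Python simulation of that strategy; the `race` oracle and the `speed` mapping are illustrative assumptions, not part of the original interview question.

```python
def race(horses, speed):
    """Race up to 5 horses; return them ordered fastest-first (no timings)."""
    assert len(horses) <= 5
    return sorted(horses, key=lambda h: speed[h])  # lower time = faster

def top_three(speed):
    """Find the 3 fastest of 25 horses in 7 races of at most 5 horses."""
    horses = list(speed)
    # Races 1-5: five heats; remember each heat's finishing order.
    heats = [race(horses[i:i + 5], speed) for i in range(0, 25, 5)]
    # Race 6: race the five heat winners against each other.
    winners_order = race([h[0] for h in heats], speed)
    overall_fastest = winners_order[0]
    # Only 5 horses can still be 2nd or 3rd overall: the 2nd and 3rd from the
    # fastest horse's heat, the 1st and 2nd from the runner-up winner's heat,
    # and the third-placed winner itself.
    by_winner = {h[0]: h for h in heats}
    candidates = (by_winner[winners_order[0]][1:3]
                  + by_winner[winners_order[1]][:2]
                  + [winners_order[2]])
    # Race 7: the top two of this race are 2nd and 3rd overall.
    second, third = race(candidates, speed)[:2]
    return [overall_fastest, second, third]
```

The point of the exercise in an interview is less the number 7 than whether the candidate can reason their way to pruning the candidate set down to five horses for the final race.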

We preferred open-ended and system design problems with no right answer, for example designing the Facebook news feed or WhatsApp's blue-ticks feature.

We wanted people who believed in possibilities: that they could build a messenger, or a WhatsApp. If a candidate found such tasks daunting, that itself was a definite yellow flag for us.

Was there a defined framework on which candidates were evaluated?

A bunch of us at Flipkart spent considerable time coming up with the competencies expected at every level. Take a software developer, for instance: we laid out the required skills across all levels and defined the core strengths needed.

Within the software developer role, we had three levels, starting with entry-level developer and two more levels you progressed to. Across all these levels we evaluated for both tech and non-tech skills; the weightage of the different skill sets differed by level.

Within tech skills, we evaluated capabilities like coding, technology breadth, design, architecture, and quality. Non-tech competencies included business impact, ownership, communication, execution, teamwork, mentoring and organization building. Companies often don't consider business impact when it comes to technologists, but we believed engineers are capable of dealing with business problems too.

Measurement across these competencies served as the basis for promotions in the organization.

I’d like to highlight that there were some cases where we made the offer even though the process outcome was against the candidate. We rarely deviated from the process (perhaps in 1-2 out of 1,000 cases), but when we did, it was because we found spikes in one or two areas. What I learnt was that you can either expect everyone to pass all the tests or prefer spiky people. We made an exception for some people who we believed spiked in the right competency.

You talked about assessing the level of ownership candidates are willing to take. How do you assess that?

You can figure out the ownership level by asking simple questions about their past and reading the signs: for instance, whether someone constantly plays the blame game versus owning up to issues.

For example, someone is talking about their current job and says that it sucks. You then ask why they have been working there for the last three years. Now, if the candidate can show me any initiative taken on their part to solve the problem, it reflects that they tried to fix what was wrong. Taking ownership and taking initiative go hand in hand for us.

Another attribute that reflects ownership is the level of pride someone takes in their work; the passion and the joy with which they talk about their past jobs. I remember a conversation with Amod in which he was thrilled talking about his work. Even though the company didn't necessarily succeed as a business, the problems solved were amazing and he was proud of them.

It’s a good sign when people talk as part of their company instead of about it. If someone says “they do this and they did that…”, then that’s a definitive red flag.

If 5 people are going to interview a candidate, are these interviews in progression, where you apply learning from one interview and pass it on to the next? Or do all 5 interview independently, without coloring anyone's thought process?

I think the principle we used is the latter. Since hiring is a probabilistic process, relying on one person’s judgment alone can be dangerous. Hence, it shouldn’t be run as a filter or a pipeline process. The way Flipkart dealt with hiring was this: we decided upon 4-5 people who would be best suited to gauge the candidate, and each of them had at most two competencies to cover. That way, each interview could go deep on those skill sets.

As a rule, the interviewers were not allowed to discuss the candidate while the interviews were going on. Instead, after every interview, the feedback flowed back to the recruiter. If the recruiter got two strong negatives consecutively, he would short-circuit the process. This decision didn’t lie with the interviewers; in fact, it was held against them if they were found spreading biased opinions about candidates.

Finally, everyone had a veto at the debrief stage after the interviews. As a result, a software developer could reject the candidate even while a manager pleaded to hire more people. If even one case occurred where a candidate was hired despite reluctance from a respected team member, imagine the consequences! That person was set up for failure unless you addressed the situation properly and turned around the negative voter.

As I already mentioned, hiring is probabilistic and you will go wrong sometimes. But in my opinion, it is better to be safe than sorry. The loss of missing out on a talented person can be borne, but making a wrong hire could cost you far more.

How do you test for non-hierarchical mindset?

I would highly recommend something informal, like going out for coffee or a meal together. Something that worked for us was letting the team members interview their to-be manager; very often managers feel threatened by this. It helped us assess a person's hierarchical mindset and attitude without asking directly, which was good, because when asked directly they would always have the right things to say. Catching them off guard and gauging their comfort level was a better way to know their true selves. Their behaviour with the recruiters and coordinators was also a tell-tale sign.

Did you ever evaluate interviewers?

Yes, we evaluated interviewers too: on their skill in the areas they interviewed on, the quality of their feedback, their punctuality in giving feedback, the number of times they rescheduled an interview, and so on. Over time, we computed batting averages for interviewers.

To explain batting averages: we evaluated the same competencies in performance reviews as in hiring. Hence, we were able to recognize which interviewer's predictions often went wrong or right. This works only when you have enough data, but the point is to collect that data and measure how accurate your predictions are.
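The bookkeeping behind such a batting average is simple. A minimal sketch, assuming a hypothetical record format (interviewer, competency, interview prediction, later review outcome) that is not described in the interview:

```python
from collections import defaultdict

# Hypothetical records: did the interviewer predict the hire would be strong
# on a competency, and did the later performance review agree?
records = [
    ("asha", "coding",    True,  True),
    ("asha", "design",    True,  False),
    ("ravi", "coding",    False, False),
    ("ravi", "ownership", True,  True),
    ("ravi", "coding",    True,  True),
]

def batting_averages(records):
    """Fraction of each interviewer's predictions confirmed by later reviews."""
    hits, total = defaultdict(int), defaultdict(int)
    for interviewer, _competency, predicted, actual in records:
        total[interviewer] += 1
        hits[interviewer] += (predicted == actual)
    return {i: hits[i] / total[i] for i in total}

print(batting_averages(records))  # -> {'asha': 0.5, 'ravi': 1.0}
```

With enough records you could slice the same data per competency, which is what makes sharing one competency framework between hiring and performance reviews so useful.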

It is often seen that candidates accept the offer but back out at the last minute. How did you overcome that?  

We had a candidate engagement program. We wanted our potential employees to feel like part of the team, and one way to show that trust was by putting ourselves out there. We created a buddy system for each new hire; this buddy would help the new recruit navigate the processes at Flipkart early on. We would also send flowers and books to their homes and invite them to our all-hands and other meetings even before they joined.

When in the interview process, do you start selling to the potential candidate?

All the time!

After we reached a team size of 40, we would often take candidates on an office tour before the interview. This helped them loosen up a bit and be comfortable. More importantly it gave them a glimpse of what life was like at Flipkart. We called it the Culture Round.

Another initiative we took was to set aside 10 minutes in a 40-60-minute interview just to answer the candidate's questions.

While these processes helped as soft selling points, what had the most impact was the mindset of the interviewers. Good interviewers at Flipkart were welcoming and patient with candidates, and would take the trouble to answer all their questions.

If you look back at the early days, are there any instances of interviewing mistakes that look pretty obvious now?

I think there were cases where we valued coding skills way more than the ownership aspects, and those hires lasted only six to nine months. Now I am very clear that mindsets are far more important than coding skills.

We would like to thank Mekin for his time. If you have further questions or remarks, please don’t hesitate to post them on our blog or send them on