The one data interview question I ask everyone, and why you should ask it too.
What's the one question every data practitioner should think about?
Certain questions have become standard in my interviewing process.
One question stands out from them all.
The most important part of the Data Value Flow is the Decisions stage.
Decisions is where our work comes into play, and where we, as data practitioners, have the most influence. The more we understand decision-making, the better we become at our craft.
So over my career of interviewing candidates for data roles, the one question I have found most valuable asks them to assess the quality of a decision for themselves:
After a decision has been made, what lets you know if it was a good decision?
I ask this of everyone, for any role.
Q: Even data engineering? What about data governance roles? DataOps?
Yes! Every. Role.
Why this question is so informative
It does double duty! It tells us how well a candidate understands decision-making in general, and it hints at how well they make decisions themselves, via the feedback loops they use to improve.
What I look for in an answer
Ideal answers speak to the separation between decision quality and outcome quality (see the Resulting Fallacy, below). Experienced data leaders and great decision-makers alike recognize the role of luck in everything we do and don’t get fooled by randomness. They understand that the outcome does not determine the quality of the decision, and they can articulate why.
It’s not uncommon for a candidate to tell me that this is the first time they’ve thought about decision quality. In this case, I will ask them to share their thought process aloud. My follow-up questions will hint at the Resulting Fallacy, and I’m checking if they can put the pieces together.
Along the way, I am looking for how they apply these ideas to their own decision-making. A growth mindset is critical for success in data, and growth depends on feedback loops. I’ve found the discussions this one question generates to be extremely insightful about how someone improves their own decision-making.
Resulting Fallacy
To the best of my knowledge, Annie Duke coined this term. Closely related to outcome bias, the Resulting Fallacy is the human tendency to look at outcomes as a measure of decision quality.
Generally, folks with ample experience building data science models grasp this concept quickly: an outcome is nothing more than a single observation from a random process.
However, we humans tend to look at that single outcome and read it as a verdict on the decision.
Everything turned out OK? → “🎉 That was a great decision! 😀”
Things turned out badly? → “🙅‍♀️ That was a dumb decision! 🤬”
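To make the “single observation from a random process” idea concrete, here is a minimal Python sketch. The 70% success probability is a purely illustrative assumption (not a figure from Duke’s work): a decision with positive expected value still fails roughly 30% of the time, so any one outcome tells you almost nothing about decision quality, while repetition reveals it.

```python
import random

random.seed(42)

# Illustrative assumption: a "good" decision that succeeds 70% of the time.
P_SUCCESS = 0.7

# Any single outcome is one draw from a random process.
one_outcome = random.random() < P_SUCCESS
print("Single outcome:", "success" if one_outcome else "failure")

# Judging the decision by that one draw is the Resulting Fallacy:
# roughly 30% of the time, a genuinely good decision looks "dumb".

# Repeating the same decision many times recovers its true quality.
n_trials = 10_000
successes = sum(random.random() < P_SUCCESS for _ in range(n_trials))
print(f"Success rate over {n_trials:,} trials: {successes / n_trials:.1%}")
```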
Super Bowl example and the effect of external judgement
When we are judged externally, our decision quality diminishes, because we optimize for safe outcomes. Duke’s example is the 2015 Super Bowl: Pete Carroll’s goal-line pass call was statistically defensible, but because it ended in an interception, it was branded one of the worst calls in Super Bowl history. That kind of outcome-based judgement pushes decision-makers toward conventional, blame-proof choices.
Annie Duke on decision-making and poker as a model
The connection between data nerds and poker has long been established, but it wasn’t until I heard Annie Duke’s explanation that I understood why: in both worlds, it’s all about decision-making. (Annie Duke first rose to fame by winning a World Series of Poker bracelet back in 2004.)
Interested in a 90-minute talk (or 45 minutes at 2x speed)?
An extended discussion is available as part of Talks at Google. And if you’ve read this far, I highly recommend the books Thinking in Bets and How to Decide.
Until next time,
Ricky