
30 lessons I learned managing people for the first time


I joined a startup out of college, wanting to effect change and make an impact right away.

Two years later, I was thrust into a management role. It was exactly what I wanted. It started with a couple of direct reports, and over time, I found myself leading 20 analytics professionals. Having no prior management training, I made many mistakes. People quit, projects failed, and targets were missed. That said, I’ve also successfully helped the company grow from 12 to 100+ team members.

I owe all the knowledge I’ve gained to my team: the team that served as guinea pigs early on, forgave me time and again, and never gave up in our pursuit of success. Here are 30 lessons I’ve learned on my journey so far:

  1. Don’t set any expectations with new hires, apart from the need for them to learn and ask a ton of questions. In other words, expect them to be curious. Expecting anything more has a high chance of catalyzing impostor syndrome.
  2. Before trying to influence people, first gain their trust.
  3. Perks, ping pong, and free beer matter less to team members than a purposeful mission, fast-tracked professional development, and fair compensation plans. Plus, all these standing desks, designer offices, and free food create a comfortable and entitled atmosphere that incentivizes chilling, rather than the underdog culture that pushes people to strive for more, to win. Which one do we want?
  4. It’s unreasonable to expect our boss to be perfect. It’s unreasonable to think that the CEO knows everything and will always make the right decisions. We have a much better view of the challenges facing the business from down here. It’s thus our duty to speak up.
  5. One of the world’s top thinkers (Clayton Christensen) has researched and explained how to disrupt a market. We thus don’t need to reinvent the process or “figure it out” all over again. Let’s just make sure that our solution is actually disruptive and not sustaining in nature (i.e. a solution that at first offers worse performance than existing alternatives, and that existing clients don’t want right away, but that can be improved rapidly).
  6. Innovation isn’t brainstorming a ton of ideas and trying everything that seems interesting. There’s a systematic process that can make innovation projects much more effective, starting with problem definition, not problem solving.
  7. Forecasting is overrated. Scenario planning is much more critical when it comes to planning for the future.
  8. Managing my boss is just as important as managing my team. I have to understand that my boss doesn’t have time to explore what I need, how I’m doing, and what is reasonable to expect from me. It’s up to me to communicate all of that.
  9. As the organization grows, there is a tendency for teams to work in silos; caring only about their specific team goals. This can be detrimental to the organization as processes that require cross-team collaboration (i.e. everything) can break down. When/if that happens, everyone needs to come together and see the company as one unit, working towards the same goal.
  10. Surveys are misleading and lack context. Instead, let’s make time to observe. Observe team members to get a sense of their fears and motivations, observe customers to understand their pain points, and observe leadership for clues on the business challenges ahead.
  11. Toxic culture will destroy a company. This includes both organizations where people stay silent and don’t bring up problems, and those where leadership listens to problems but sweeps them under the carpet. Dishonesty prevents the company from seeing the obstacles ahead and planning accordingly.
  12. Don’t let instincts get in the way of a great work culture. Our subconscious can behave differently than desired, leading to biased decisions that hurt the company.
  13. Before hurriedly analyzing data to answer a question, let’s first ask why we care, what actions we plan to take, and what reports we envision being useful. Otherwise, it’s very likely we’ll get distracted by the data and waste time.
  14. Adopting the right metrics helps to guide people when we’re not there and reminds them of what is important. It’s thus crucial for effecting change.
  15. Define the problem before solving it. Too much time is wasted solving the wrong problems.
  16. When it comes to decision making, the most important step is to evaluate all potential outcomes and to plan around each scenario. Nothing ever happens as planned, so we need to stand ready to face the worst case scenario.
  17. Before taking a decision, let’s first check our blind spot: Look for biases, subconscious tendencies, and invalidated assumptions.
  18. A hiring process is comparable to a sales process: a funnel with multiple stages that can be improved. The goal is to maximize the ratio of [people hired and successfully working with us] to [people interviewed].
  19. Even A-players will feel unsuccessful without clear expectations and goals.
  20. Great insights are often lost because we don’t think to ask people about their past experiences. Before going live, let’s ask team members whether they’ve worked on similar projects, have experience with a new role we’re creating, or have previously implemented a change we’re considering.
  21. Implementing change is hard, because human beings are animals of routine. Before changing, we need to plan ahead, win hearts & minds, and reach mutual agreement. Ideally, change feels like a natural evolution everyone is excited about.
  22. Having too few processes makes operations chaotic, while having too many brings inefficiencies. A fine balance can be found via a set of guidelines that empowers team members to make individual calls.
  23. We should feel comfortable to disagree with our boss and challenge their opinion (with evidence). A healthy culture welcomes constructive debate and feedback.
  24. Shying away from tough conversations and constructive feedback will make us frustrated in the long run. It’s a sign that we’re not comfortable exposing our thoughts. To make it less personal, we can focus our feedback on the behavior, not the person.
  25. Great leaders are also great coaches who diagnose, train, and support their team members. It’s unfair to delegate tasks to team members without diagnosing their capabilities first. It sets them up for failure.
  26. If someone is underperforming, they could be: A bad fit, not trying hard enough, or not getting the coaching they need. Work with them if it’s the latter case, but let them go otherwise. To avoid feeling bad when firing someone, set clear expectations, and give the team member a fair chance to improve. Let’s however not hold on to hope if there is no hope.
  27. Goals are important, but not more than everyday advancements. In addition to celebrating goals, let’s make time every day to praise team members’ effort and progress. This reinforces a growth mindset.
  28. +1’s matter. Showing support for other people’s ideas matters. It shows how popular an idea is, which influences the final decision. If we decide not to voice our support, then we are not entitled to complain after the decision.
  29. While we’re on this journey, let’s remember to breathe, to be mindful of the present, and to appreciate the value we’re bringing to our team, our company, and the world. There will always be a new mountain to climb, a new problem to solve. Let’s take time daily to turn around and appreciate the view on this adventure. Yoga helped me a ton with being mindful.
  30. Change jobs and move on when you’ve stopped learning and growing, when the culture is making you unhappy, or when you don’t trust the leadership.

 

Are you leading a startup team? Get started on the right foot with the Start-up Manager Handbook. And subscribe on the right for new insights every week!

Top 6 business intelligence mistakes


These days, companies large and small have an insane amount of data to help with decision making.

A small mom and pop restaurant with a cloud-based reservation system can forecast how much of each ingredient to order for the week. Yet we all still make bad decisions. Why?

First of all, let’s not blame the data. By itself, data can’t do anything.

If there’s anyone to blame, it’s us. That’s right: the human beings behind the data.

We are the ones that decide what data to record, how to record it, how to analyze it, and how to look at it. Between the moment we have a question and the moment we make a decision, there are numerous chances of misusing data and arriving at the wrong conclusion. It’s like walking through a minefield.

Working in the analytics field, I’ve seen hundreds of data analyses go nowhere, wasting thousands of hours of effort. So I’m going to share six of the most prevalent mistakes I’ve seen.

“What’s the actual problem?”

I once helped an e-commerce company analyze their top 10 sources of new visitors. After seeing the results, they were ecstatic to find that both their paid campaigns and their blog were top sources of new visitors. These were channels that they could actively control and scale. So they did just that: They invested more money in their paid campaigns and kept their blog active.

Yet a few weeks in, they started to complain that their effort didn’t translate into higher revenue. A lot of new people were visiting the site, but not buying. Why is that?

The simple answer is that the analysis they wanted answered a specific question: Which sources brought the highest number of new visitors? It did not answer which sources brought the highest number of new paying customers, or high lifetime revenue customers, which would both have been more helpful to their actual problem of growing new revenue. So to avoid wasting time, effort, and money, let’s ask the right questions to begin with.

“Is the sample statistically significant?”

I once observed a sales team cancel a process change after 10 prospects failed to convert under a new process (they handled on average 200 prospects a month). By no means was that sample size significant enough to draw any conclusions yet, scientifically speaking. It was not a data-driven decision. It was an emotional decision.

I’ve also witnessed a case where a company made product decisions based on half a dozen phone interviews with select clients. This particular company had 500+ clients. Half a dozen people out of a population of 500+ does not give an accurate view of growth opportunities. The quality of the sample was also questionable: all of the clients interviewed had good relationships with the company, which means the opinions of unhappy and churned customers were never heard.

Sampling problems, including selection bias and undersized samples, abound in business intelligence. Startups are especially prone to taking shortcuts and using poor samples. Sometimes, it’s because there is simply not enough data: if a company just started acquiring customers, there may not be enough of them to make the analysis statistically significant. Other times, it’s pure impatience: teams want to decide now, not in two weeks, so they often fail to wait for their experiments to complete.

The result is a decision based on poor data.
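The sales-team anecdote above can be sanity-checked with a few lines of arithmetic. Assuming the old process converted roughly 5% of prospects (an illustrative figure, not from the story), we can ask: how often would a streak of 10 non-converting prospects occur even if nothing had changed?

```python
# Hedged sketch: probability of seeing zero conversions in n independent
# prospects, given an assumed baseline conversion rate. The 5% rate is an
# illustrative assumption, not a figure from the example above.
def prob_zero_conversions(n, rate):
    """Chance that n prospects in a row all fail to convert."""
    return (1 - rate) ** n

p = prob_zero_conversions(10, 0.05)
# Roughly 0.6: even with an unchanged 5% conversion rate, ten
# non-converting prospects in a row is entirely unremarkable.
```

In other words, the observation carried almost no statistical signal, so cancelling the process change on that basis was indeed an emotional decision, not a data-driven one.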

“Are the numbers relevant?”

I’ve also witnessed many companies set future sales goals based on historical trends, but then change their entire sales process and expect the same goals to be hit. How can one expect the same forecast when all the input variables have changed?

It’s like expecting to fly from New York to Los Angeles in 6 hours, then swapping the plane for a car and still expecting to arrive in 6 hours.

Let’s recognize that the analysis or forecast that we do is only good for the scenario that we considered. Should we decide to tweak or change our scenario, a new analysis needs to be performed.

“Are you sure the numbers are right?”

NASA once lost a $328 million satellite in space because one of its components failed to use the same measurement units as the rest of the machine. Target lost $5.4 billion in Canada partially because its inventory system had incorrect data.

Time and again, huge mistakes were made because the underlying data fueling these projects was bad to begin with.

So to make sure that my analysis is accurate, I often ask a second party to check the numbers. One should never review their own essay. The rule applies to analyses as well.

“What does this mean?”

Having access to information doesn’t mean that we know what to do with it. I’ve seen many people confused by data reports and unsure of what decision to take.

I once helped a B2B company evaluate which customer group to target for an advertising campaign. Their product was used by customers from three different industries, but they didn’t have the resources to tailor their sales processes and marketing content to all three groups yet.

So they began by looking at revenue generated by the three industries. Then they looked at revenue growth over time, profitability, and lifetime revenue. The results showed that 50% of their revenue came consistently from one industry, but that another industry was the fastest growing, going from 10% to 35% of their revenue over the past year. Both were potentially good choices to target and they didn’t know which one to pick.

I thus asked them to divide the total revenue by the number of clients/companies in each industry, effectively giving us the average revenue per client. My logic was that their sales and marketing efforts were going to be spent on a select number of prospects, so targeting prospects with higher individual revenue may yield a better ROI (e.g. between a $500/year client and a $5,000/year client, I’d advise to choose the $5,000/year client assuming that cost of support is similar). Based on the analysis, we saw that the fastest growing industry was also the one with the highest paying clients. This thus made the decision easier.
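The per-client calculation above is a one-liner once revenue and client counts are grouped by industry. Here is a minimal sketch; the industry names and figures are made up for illustration.

```python
# Hypothetical revenue and client counts per industry (illustrative data).
industries = {
    "Industry A": {"revenue": 500_000, "clients": 400},
    "Industry B": {"revenue": 350_000, "clients": 70},   # fastest growing
    "Industry C": {"revenue": 150_000, "clients": 150},
}

# Average revenue per client: total revenue divided by number of clients.
avg_revenue = {
    name: data["revenue"] / data["clients"]
    for name, data in industries.items()
}

# The industry whose clients pay the most on average is the strongest
# candidate for a targeted campaign, assuming similar support costs.
best_target = max(avg_revenue, key=avg_revenue.get)
```

With these made-up numbers, Industry B contributes less total revenue than Industry A but earns far more per client, which is exactly the kind of distinction the raw revenue totals hid.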

The point is that looking at the right information is important, not just information. This requires people that can interpret data, explain caveats, and tell a story. I thus highly recommend for all managers, data analysts, and data scientists to read Cole Nussbaumer’s Storytelling with Data book.

“We deleted what?”

I once tried to help a SaaS company understand their user churn trends, only to discover that they delete customer account information 3 months after a user deactivates their account. This meant that there was only data on recently churned clients. The sample proved to be too small and biased to draw any useful conclusions.

Developers may delete data because they are running out of room on their hard disk, or because they think that a certain piece of data is unimportant. Regardless of what developers think, from an analytical perspective, we should never ever ever delete data.
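One common way to honor “never delete data” is the soft delete: mark records as deactivated instead of removing them. Here is a minimal sketch with an in-memory store; a real system would typically do the same with a `deactivated_at` column instead of a SQL DELETE. The account IDs and fields are hypothetical.

```python
from datetime import datetime, timezone

# Hypothetical account store (illustrative data, not a real schema).
accounts = {
    "acct_1": {"plan": "pro", "deactivated_at": None},
    "acct_2": {"plan": "basic", "deactivated_at": None},
}

def deactivate(account_id):
    """Soft-delete: timestamp the deactivation but keep the record."""
    accounts[account_id]["deactivated_at"] = datetime.now(timezone.utc)

deactivate("acct_2")

# Churn analysis keeps its full history: active and churned accounts
# both remain queryable, avoiding the biased sample described above.
churned = [a for a in accounts.values() if a["deactivated_at"] is not None]
```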


Need better insights? Stop surveying and start observing


The number of surveys and feedback requests I receive from companies is insane.

Buying a plant at Home Depot prompts a 15-minute experience survey. Getting out of an Uber prompts an immediate rating request. Every software tool I’ve used has asked me “How likely are you to refer a friend…” It has become too easy for people to design and launch surveys, leading to too little planning around what to ask, why, and how often.

Our team has certainly been guilty of this behavior as well. We conduct a couple email surveys a year, ask for feedback during account calls, and actively track our NPS score.

While I support this customer-centric culture, I also believe that we’re asking many unnecessary questions.

As a result, customers are becoming inundated with annoying requests for feedback. I can’t help but think of a future where people block surveys like they block online ads. In a way, they already do: response rates on our customer surveys hover around 5%. To boost this, we often resort to contests, prizes, and bribes that bias the sample population. In the end, we can’t even trust the data from our surveys.

Therefore, I’m advocating for a less intrusive and more accurate method of gathering customer insights, by observing customer behavior. To illustrate this approach, I’m going to analyze three common questions found in customer surveys, and how we can answer them without talking to customers.

“How likely are you to recommend us to a friend?”

I understand the need to know how much users love our tool via NPS. It helps us evaluate progress on customer satisfaction, and even compare against other companies.

What I don’t understand is why we need to ask people this question when we can simply track referral rates. Besides, if someone answers 9 or 10 but never actually refers anyone… are they being polite or lying outright? Either way, knowing how likely someone is to refer us doesn’t help our business. Actually referring people to our business does.

So instead of surveying NPS, what I’d advocate for is a referral system that allows customers to refer their friends directly in-app. We can then gauge how likely customers are to refer us based on actual data. There’s a clear difference here: NPS measures a person’s stated likelihood to refer someone (mere words), whereas the referral rate measures the share of people actually doing it (an action). Last I checked, actions speak louder than words.

With this data, we can even take the analysis a step further, and calculate the referral rate over time by registration cohorts (i.e. % of people that registered in a specific month and referred friends in month X after registration). It would show us when people are most likely to refer after registering themselves, and when numbers plateau, indicating an opportunity to remind them of our referral program. Taking actions to increase this metric is much more impactful than trying to increase NPS – it directly drives customer acquisition, not just a sentimental score. 
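The cohort metric described above is straightforward to compute from referral events. A minimal sketch, assuming a hypothetical dataset of users with their registration month and the months elapsed until their first referral (field names and figures are assumptions, not a real schema):

```python
from collections import defaultdict

# Hypothetical user records: (user_id, registration_month,
# months_until_first_referral or None if they never referred anyone).
users = [
    ("u1", "2019-01", 1),
    ("u2", "2019-01", None),
    ("u3", "2019-01", 3),
    ("u4", "2019-02", None),
    ("u5", "2019-02", 2),
]

def referral_rate_by_cohort(users, within_months):
    """Share of each registration cohort that referred a friend
    within `within_months` of signing up."""
    cohorts = defaultdict(lambda: [0, 0])  # cohort -> [referred, total]
    for _, cohort, months in users:
        cohorts[cohort][1] += 1
        if months is not None and months <= within_months:
            cohorts[cohort][0] += 1
    return {c: referred / total for c, (referred, total) in cohorts.items()}

rates = referral_rate_by_cohort(users, within_months=2)
# 2019-01 cohort: 1 of 3 referred within 2 months; 2019-02 cohort: 1 of 2.
```

Plotting these rates by month-since-registration reveals the plateau mentioned above, i.e. the point where a reminder about the referral program is most likely to help.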

But wait, don’t we already have referral systems? Yeah, so why do we keep asking that NPS question?

“What would you like to see improved?”

I recently took a flight to Florida, which was delayed and overbooked, after which I got an email asking me for feedback. Boy, did I have a lot of feedback to share… But did I answer the survey? No.

Why? Because I had already spoken to a customer service agent before the flight to make sure my wife and I would be on the flight, along with a flight attendant about some other issues in flight. I didn’t feel like repeating myself.

In my opinion, no company that cares about customer happiness should survey customers about how they can improve. Most customers, at least in the USA, proactively complain to customer service. To ask for it again via a second channel is like saying: “Hey, I don’t remember what feedback you gave our team. In fact, I don’t trust that customer service recorded anything at all. May I ask you to refamiliarize yourself with your frustrations and repeat them to me again?”

Do we really want people to think about what frustrates them once more?


In my opinion, surveying customers on how we can improve means that we either don’t have a help desk, or don’t use our help desk data intelligently.

So to gain ideas on how to improve our business, let’s analyze our customer complaints and help desk data first.

“What features would you like to see?”

I’ve helped many product managers set up conversations with clients to get ideas on new features. Clients are usually excited to share their thoughts, and most have very specific features in mind. To help put their ideas into context, we often resort to further probing: asking customers why they need XYZ feature, how they plan to use it, and how they’d prioritize their wishlist. This usually leads to hour-long conversations where the client isn’t doing the work they’re paid to do, while we only gain a tiny window into the challenges of our users. Hearing a story is simply not the same as being there. It lacks context.

Instead of all this questioning, I’ve found that visiting clients and observing them use our tool, without disturbing them, is much more insightful. Shadowing users provides critical context around how they’re using the tool, as part of what process, in combination with what else, when, etc. This allows me to clearly understand the core challenge that a client is facing. And more importantly, it helps me gain ideas that can improve how our software is used in combination with other tools, and in different situations.

If engineers and product managers simply took the time to observe the users they serve in their environment (not some ideal lab setting), or maybe even do what their customers do for a day, the world would function much more effectively.

Allow me to share another example: I recently visited a grocery store where they had just installed a new cash register / payment system at all checkout lanes. Register clerks had a frustrating time using them, leading to long lines. We could blame the issue on improper training, or we could ask ourselves how a cash register could be so hard to operate… I’m willing to bet that the machine had no issues in the lab setting that it was designed in, but that engineers never even tried to use it in a real grocery store by themselves. They likely designed the whole thing based on indirect customer feedback, which rarely provides enough context to a problem.

I don’t doubt that we can find exceptions to what I’m advocating above. The point stands, however, that we should first see if we can answer our questions through observation rather than surveys. It yields much more comprehensive and accurate insights, and doesn’t waste our customers’ time. Actions speak louder than words.


Recommended exercise

Let’s look at all the questions that we’re asking on our customer surveys and ask ourselves: “Can this be replaced with insights from their actual behavior?”



Where forecasting fails, scenario planning succeeds


Most of us carry a spare tire in our car. Not because we forecast having a flat tire, but because it’s a plausible scenario. While we use scenario planning to avoid being stuck on the road, we often fail to do so when planning our business’s future.

Most startups have yet to find a scalable business model. So in the face of changing customer needs and new competitors, we are prone to change our business processes, organizational structures, and product offerings much more frequently than established businesses do.

Under this constant need to adapt, I’ve often found myself lacking the time to plan and vet a decision. The only question I resort to asking my team is: “What’s the best solution to this problem and what do we foresee happening?”

With time, I’ve discovered that my question is limiting in two ways: 1) It inadvertently forces people to think about only one choice, one “best” option, rather than many; and 2) It misleads people into thinking that there is only one possible outcome as a result of our choice.

In reality, there are always multiple options, each with a multitude of potential outcomes.

Why is this important to recognize? Because I’ve often realized that there’s a better solution, but only after a decision has been made, after a change has been implemented. When it’s too late.  I also find that we could have identified that better option beforehand, if we simply took a minute to consider all our choices.

My team and I once faced the common problem of having too much work, too many clients to support (a good thing), and not enough time. And since we had just landed a Series B investment, leadership had even faster growth in mind.

So I sat everyone down and asked “What’s the best solution to our large queue of work and what do we foresee happening?”

Immediately, everyone jumped right to the solution of hiring additional team members that could focus on a specific type of request. In other words, increase staffing and specialize. In the moment, it sounded like a good plan, so I advocated for additional team members. And we got them.

A few months after hiring two additional team members, the same problem resurfaced. The number of clients hadn’t grown proportionally, but we had more requests from the same pool of clients. Since we weren’t making additional revenue from these clients, hiring more people was not a great solution. We all agreed that we couldn’t just throw money at the problem. So we sat down and asked ourselves: “What options do we have?” Everyone got surprisingly creative and thought of ideas such as:

  • Set a quota on how much service time each client can access per month;
  • Stop doing certain types of work for clients and train them to do it instead; and
  • Charge extra for access to our service team.

We then proceeded to plan around contingencies, asking ourselves what could happen if we implemented these solutions. For example, if we were to set a quota on how much time each client could use per month, our team foresaw that:

  • High demand clients could complain;
  • Low demand clients that don’t hit the quota could file requests just to fill their quota; and
  • Clients could be frustrated if they exceeded their quota yet still needed a service critical to the functioning of their account.
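The exercise above boils down to a simple structure: each option maps to its plausible outcomes, and each outcome to a prepared counter-measure. A minimal sketch, where the counter-measures are illustrative assumptions rather than what we actually decided:

```python
# Hypothetical scenario-planning map: option -> {plausible outcome -> plan}.
scenarios = {
    "monthly service quota": {
        "high-demand clients complain":
            "renegotiate SLAs at renewal; roll quotas out gradually",
        "low-demand clients file filler requests":
            "quotas do not roll over; track request quality",
        "client exceeds quota but needs a critical service":
            "exempt account-critical requests from the quota",
    },
    "train clients to self-serve": {
        "clients resist doing the work":
            "pair training with better documentation and tooling",
    },
}

def contingency(option, outcome):
    """Look up the prepared response for a plausible outcome."""
    return scenarios[option].get(outcome, "no plan yet; revisit before launch")
```

Writing the map down this way makes gaps obvious: any outcome without a plan is a scenario the team is not yet ready to face.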

The exercise was successful all around. People were creative, open-minded, and honest in their assessment of potential outcomes.

The fact is, all of the options identified were feasible and better than hiring additional team members, in terms of both efficiency and scalability. And because we had analyzed the potential outcomes, we were in a position to plan ahead or react quickly with counter-measures. For example, we could have reached out to high-demand clients to set new Service Level Agreements during renewal conversations, and gradually rolled out the concept of quotas.

Yet we only identified these solutions once we faced the same problem again, without an easy way out. The obvious question: could we have identified them the first time around? I think so, if we had stopped and analyzed all our options.

It goes without saying that I’m now a huge fan of scenario planning. So for the rest of this blog post, I’m going to share my take on this crucial decision making tool.

What is scenario planning?

In the context of tactical decision making, scenario planning is a process by which we first identify a series of potential solutions to our problem, including doing nothing. Next, we identify and analyze all plausible outcomes of each solution (our scenarios) and plan around contingencies.

Based on an analysis, or even experimentation, of how effective each solution can be, we can then make our decision. From there, we’ll have contingency plans available should any of the plausible outcomes identified during scenario planning materialize. We effectively stand ready to react.

Success translates into no surprises and readiness to respond.

What’s the difference between scenario planning and forecasting?

Technically, forecasts envision a probable future (how likely is it to occur?), whereas scenario planning identifies plausible futures (can the event occur?). The relevance of the two methods thus depends on how we want to plan for the future and what resources we have available. For example…

  • A prominent application of forecasting is weather. If we forecast rain today, we’re likely to plan on having an umbrella when commuting. If we were to perform scenario planning for weather, where rain is always a plausible future, we’d be walking around with an umbrella independent of the probability of rain – it’s simply a plausible outcome.
  • Scenario planning on the other hand is often used for trip planning. We can’t always forecast exactly what we will do, what we will visit, or what the weather will be like when traveling, so we plan for all plausible scenarios. We bring all kinds of clothes for comfort, medications for health, and even books for potentially boring moments.

Scenario planning is thus very much linked to contingency planning. Again, our goal is to simply stand ready to react.

For a more strategic application of scenario planning, I highly recommend Idealized Design by Dr. Ackoff.

When should I use scenario planning?

In my opinion, scenario planning needs to be applied anytime a decision is needed. This allows us to fully acknowledge the potential impacts of our decision, and plan around plausible risks and threats.

For further reading, I highly recommend HBR’s article on how Shell performs strategic scenario planning and what they gain from it.


Recommended exercise

Let’s pick a decision that we’re actively assessing right now and pull the team together to brainstorm on: “What do you think would happen if we decided to go ahead with___?” Is the team ready to face these consequences?



How do I know it’s the right decision?


We’ve all made good and bad decisions.

The tricky thing is that we can only tell if a decision was effective in hindsight, after the fact. And more often than not, it’s also unclear whether our decision was truly the best one.

Take hiring, for example: we review hundreds of candidates and narrow them down to a handful of top choices. We can’t hire all the good candidates and test their abilities during a 6-month probation period (maybe some companies can, but with limited resources for salary and training, our startup certainly can’t), so what do we do? We decide on the candidate that we think will be the best fit and make an offer. However, even if our chosen candidate ends up bringing positive value to the team, can we confidently say that another candidate couldn’t have done better? We will never know whether unexplored options might have been better.

So we can’t predict the future. Then how can we confidently tell if a decision is right or wrong beforehand? How do we plan for the future?

In my opinion, it comes down to leveraging our team’s existing knowledge and experience, as well as analyzing all possible scenarios that could result from our choice.

Here’s a series of questions that have helped me evaluate decisions:

Why are we considering this decision? If we don’t have clarity on the ultimate goal, then let’s not waste any time on a decision. So what’s the problem or pain that this decision is trying to deal with? What’s the goal that we’re trying to achieve?

How does the decision advance our team goal? All decisions must help the team hit its goals, and in turn, help the company achieve its vision. Decisions and activities that don’t help us advance our cause distract us, and waste precious resources. So let’s avoid them. For example, if our team goal is to sign on a large number of small and medium clients, deciding on how to better attract attention from large companies is a distraction.

What has already happened in relation to the decision? Let’s document all steps that we’ve taken to date, so that if other stakeholders need to be looped in, they can easily be briefed on the current status. For example, if we’re deciding on a new office location, what have we done already as part of the process? Have we visited potential offices, talked to agents, or analyzed our needs?

What did we do yesterday to cope with this problem? Is this a new problem, or are we trying to improve the way we deal with an ongoing problem? Documenting and communicating a problem’s history ensures that parties that are not familiar with the subject understand the full context when evaluating the decision.

How important is it to take this decision relative to other decisions in the pipeline? We’re likely to pursue many different decisions at one time, so we have to prioritize the ones that may have the strongest impact on our team goals.
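To make the idea of prioritizing a decision pipeline concrete, here is a minimal sketch. The decisions and impact scores are entirely made up for illustration; in practice, impact would come from the team-goal analysis described above.

```python
# Illustrative only: order a pipeline of pending decisions by their
# estimated impact on the team goal (names and scores are hypothetical).
pipeline = [
    ("pick new CRM vendor", 8),
    ("choose snack supplier", 2),
    ("set Q3 hiring plan", 9),
]

# Highest estimated impact first, so the strongest levers get attention.
prioritized = sorted(pipeline, key=lambda d: d[1], reverse=True)
print([name for name, _ in prioritized])
# → ['set Q3 hiring plan', 'pick new CRM vendor', 'choose snack supplier']
```

The scoring here is deliberately naive; the point is only that prioritization should be an explicit, comparable step rather than a gut call.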

When do we need to decide by? Wait too long on a decision and it may become irrelevant. Spend too little time evaluating our options and we may miss valuable insights. So let’s set a decision date based on how much research and analysis we can realistically do and afford to do.

What’s different this time? If we’re taking a decision regarding a situation that we’ve dealt with before, do we know the similarities and differences between the situations? What have we learned from the past event? Should we react similarly to last time, or take a different approach?

What insights exist to help us evaluate the different choices?

Data: Do we have relevant analyses and reports that can help us evaluate our options?

People: Who has experience or insight on the situation?

Historical cases: Have there been similar cases like this one before, either at our organization or at other organizations, that we can review and learn from?

Who needs to be involved in the decision? My recommendation is to loop in a representative from each team that may be impacted by the decision, subject matter experts who have insights to offer, and someone who has no real stake in the decision. This last individual can offer an unemotional, objective view of the situation.

What are our potential options? What are all potential options that we have regarding the decision? Which ones are realistic? Let’s remember that doing nothing is a choice too.

What are all potential outcomes? Have we evaluated all potential outcomes of our options? We can certainly leverage scenario planning here. The exercise will help us identify, plan around, and react to all possible outcomes.
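One lightweight way to make sure no option-outcome combination is overlooked is to enumerate them exhaustively. The options and external outcomes below are hypothetical, loosely based on the office-location example used earlier:

```python
from itertools import product

# Hypothetical scenario-planning sketch: our options for an office
# decision, crossed with outcomes we don't control.
options = ["stay put", "move downtown", "go remote-first"]
market = ["headcount grows", "headcount stays flat"]

# Every option paired with every outcome: 3 x 2 = 6 scenarios to plan for.
scenarios = list(product(options, market))

for option, outcome in scenarios:
    print(f"If we '{option}' and '{outcome}', what is our response plan?")
```

The value isn’t in the code itself but in the discipline: the full cross-product forces us to write a response plan for scenarios we would otherwise quietly ignore.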

Are we ready to face the impact of all potential outcomes? Which scenarios identified above are we ready to face, and which ones are we not ready to face? Are we OK with not being able to react to certain scenarios, or do we need more time to achieve operational readiness?

What’s the right thing to do? Is there a potential for people to get hurt, either today or tomorrow, by our choice? Will we be able to sleep easy with our decision? If there’s a risk of hurting people, can we tweak the solution to avoid it, or mitigate the risk? For example, if we find that producing oil from tar sands is still the most economically viable option, can we minimize damage to the environment?

Does everyone agree on the decision? Is there a clear option that most people involved in the decision agree upon? If there is no consensus, the main stakeholder, usually the person with the most at stake who has to manage and implement the decision, needs to take the lead and make a choice. This helps avoid decision paralysis, which hurts team cohesion and diminishes trust in leadership. (In High Output Management, Andy Grove, former CEO of Intel, offers a very clear approach to decision making in a team environment to help reach consensus.)

How will we communicate the decision? I’ve often found that how I communicate a decision is just as important to its success as making the right decision. In one situation, I let go of a team member who was clearly unproductive and a drag on the team. But because I didn’t communicate why we let that person go, someone who was a friend to many people around the office, the departure created confusion and fear among team members who were not familiar with the individual’s performance. So with every decision, let’s work to communicate why we’re taking it and how we went about evaluating our options. The key is to dispel any doubt as to whether we reviewed the necessary data, consulted the relevant parties, and compared all possible scenarios.

The decision-making process explored above is like a production process. A car is worth much more than the raw materials it’s made of; similarly, with each additional question answered, a decision gains more value and importance.

So let’s be careful not to advance unimportant decisions further along the evaluation process, which wastes resources, while diligently vetting the important ones.
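The production-line analogy can be sketched in a few lines of code. The questions and answers below are illustrative abbreviations of the checklist above, not a prescribed tool: a decision only advances while each successive question has a real answer, so weak decisions stop early and stop consuming evaluation resources.

```python
# Minimal sketch of the "production line" analogy (illustrative only).
QUESTIONS = [
    "Why are we considering this decision?",
    "How does it advance our team goal?",
    "When do we need to decide by?",
    "What are our potential options and outcomes?",
    "How will we communicate the decision?",
]

def stages_cleared(answers: dict) -> int:
    """Count pipeline stages cleared, stopping at the first unanswered question."""
    cleared = 0
    for question in QUESTIONS:
        if not answers.get(question):
            break
        cleared += 1
    return cleared

office_move = {
    "Why are we considering this decision?": "Our current lease expires soon",
    "How does it advance our team goal?": "",  # no clear link to the goal yet
}
print(stages_cleared(office_move))  # → 1: blocked at the second question
```

Like the car on the assembly line, a decision that clears every stage has accumulated value; one that stalls early has told us, cheaply, not to invest more in it.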

Happy decision making!


Recommended exercise

The next time that we are faced with a decision, let’s start by asking ourselves: “Is this decision relevant to our mission?”


Are you leading a startup team? Get started on the right foot with the Start-up Manager Handbook. And subscribe on the right for new insights every week!