
Top 6 business intelligence mistakes

These days, companies large and small have an insane amount of data to help with decision making.

A small mom-and-pop restaurant with a cloud-based reservation system can forecast how much of each ingredient to order for the week. Yet we all still make bad decisions. Why?

First of all, let’s not blame the data. By itself, data can’t do anything.

If there’s anyone to blame, it’s us. That’s right: the human beings behind the data.

We are the ones that decide what data to record, how to record it, how to analyze it, and how to look at it. Between the moment we have a question and the moment we make a decision, there are numerous chances of misusing data and arriving at the wrong conclusion. It’s like walking through a minefield.

Working in the analytics field, I’ve seen hundreds of data analyses go nowhere, wasting thousands of hours of effort. So I’m going to share six of the most prevalent mistakes I’ve seen.

“What’s the actual problem?”

I once helped an e-commerce company analyze their top 10 sources of new visitors. After seeing the results, they were ecstatic to find that both their paid campaigns and their blog were top sources of new visitors. These were channels that they could actively control and scale. So they did just that: They invested more money in their paid campaigns and kept their blog active.

Yet a few weeks in, they started to complain that their effort didn’t translate into higher revenue. A lot of new people were visiting the site, but not buying. Why is that?

The simple answer is that the analysis they wanted answered a specific question: Which sources brought the highest number of new visitors? It did not answer which sources brought the highest number of new paying customers, or high lifetime revenue customers, which would both have been more helpful to their actual problem of growing new revenue. So to avoid wasting time, effort, and money, let’s ask the right questions to begin with.

“Is the sample statistically significant?”

I once observed a sales team cancel a process change after 10 prospects failed to convert under the new process (they handled on average 200 prospects a month). Scientifically speaking, that sample was by no means large enough to draw any conclusions yet. It was not a data-driven decision. It was an emotional decision.
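
To make the point concrete, here is a minimal back-of-the-envelope power calculation (a Python sketch using statsmodels). The conversion rates below are hypothetical stand-ins, not the company’s actual numbers, but they show how far 10 prospects falls from a meaningful sample:

```python
# Hypothetical numbers: a 20% baseline conversion rate, and we want to be able
# to detect a drop to 15% under the new process before calling it a failure.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_rate = 0.20    # assumed historical conversion rate
worst_case_rate = 0.15  # smallest drop we care about detecting

effect_size = proportion_effectsize(baseline_rate, worst_case_rate)
n_required = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,   # tolerate a 5% chance of a false alarm
    power=0.8,    # 80% chance of catching a real drop
    alternative="two-sided",
)
print(f"Prospects needed per group: {n_required:.0f}")  # hundreds, not 10
```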

I’ve also witnessed a case where a company made product decisions based on half a dozen phone interviews with select clients that they had good relationships with. This particular company had 500+ clients. Half a dozen people among a population of 500+ clients does not represent an accurate view of growth opportunities. The quality of the sample was also questionable: all clients interviewed had good relationships with the company, which means the opinions of unhappy and churned customers were never heard.

Sampling problems, including selection bias and smaller-than-optimal sample sizes, abound in business intelligence. Startups are especially prone to taking shortcuts and using poor samples. Sometimes, it’s because there is simply not enough data… If a company just started acquiring customers, there may not be enough of them to make the analysis statistically significant. Other times, it’s because of pure impatience… Teams want to make decisions now, not in two weeks, so they often fail to wait for their experiments to fully complete.

The result is a decision based on poor data.

“Are the numbers relevant?”

I’ve also witnessed many companies set future sales goals based on historical trends, but then change their entire sales process and expect the same goals to be hit. How can one expect the same forecast when all input variables have changed?

It’s like planning to fly from New York to Los Angeles in 6 hours, then swapping our plane for a car and still expecting to get there in 6 hours.

Let’s recognize that the analysis or forecast that we do is only good for the scenario that we considered. Should we decide to tweak or change our scenario, a new analysis needs to be performed.

“Are you sure the numbers are right?”

NASA once lost a $328 million satellite in space because one of its components failed to use the same measurement units as the rest of the machine. Target lost $5.4 billion in Canada partially because its inventory system had incorrect data.

Time and again, huge mistakes were made because the underlying data fueling these projects was bad to begin with.

So to make sure that my analysis is accurate, I often ask a second party to check the numbers. One should never review their own essay. The rule applies to analyses as well.

“What does this mean?”

Having access to information doesn’t mean that we know what to do with it. I’ve seen many people confused by data reports and unsure of what decision to take.

I once helped a B2B company evaluate which customer group to target for an advertising campaign. Their product was used by customers from three different industries, but they didn’t have the resources to tailor their sales processes and marketing content to all three groups yet.

So they began by looking at revenue generated by the three industries. Then they looked at revenue growth over time, profitability, and lifetime revenue. The results showed that 50% of their revenue came consistently from one industry, but that another industry was the fastest growing, going from 10% to 35% of their revenue over the past year. Both were potentially good choices to target and they didn’t know which one to pick.

I thus asked them to divide the total revenue by the number of clients/companies in each industry, effectively giving us the average revenue per client. My logic was that their sales and marketing efforts were going to be spent on a select number of prospects, so targeting prospects with higher individual revenue may yield a better ROI (e.g. between a $500/year client and a $5,000/year client, I’d advise to choose the $5,000/year client assuming that cost of support is similar). Based on the analysis, we saw that the fastest growing industry was also the one with the highest paying clients. This thus made the decision easier.
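
For illustration, here is roughly what that calculation looks like as a Python/pandas sketch; the industries and revenue figures below are made up, not the client’s actual data:

```python
import pandas as pd

# Hypothetical client-level data; industry names and revenues are illustrative only.
clients = pd.DataFrame({
    "industry": ["Retail", "Retail", "Logistics", "Logistics", "Healthcare"],
    "annual_revenue": [500, 700, 5000, 4200, 1500],
})

summary = clients.groupby("industry")["annual_revenue"].agg(
    total_revenue="sum",
    client_count="count",
    avg_revenue_per_client="mean",  # total revenue / number of clients
)
print(summary.sort_values("avg_revenue_per_client", ascending=False))
```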

The point is that looking at the right information is important, not just information. This requires people that can interpret data, explain caveats, and tell a story. I thus highly recommend for all managers, data analysts, and data scientists to read Cole Nussbaumer’s Storytelling with Data book.

“We deleted what?”

I once tried to help a SaaS company understand their user churn trends, only to discover that they delete customer account information 3 months after a user deactivates their account. This meant that there was only data on recently churned clients. The sample proved to be too small and biased to draw any useful conclusions.

Developers may delete data because they are running out of room on their hard disk, or because they think that a certain piece of data is unimportant. Regardless of what developers think, from an analytical perspective, we should never ever ever delete data.

Are you leading a startup team? Get started on the right foot with the Start-up Manager Handbook. Subscribe on the left!

Checking our blind spot when making a decision

In a previous post, I discussed a tendency for startup teams to be blindly optimistic.

So today I’m going to share a simple exercise to help check our blind spots when taking decisions.

We start by asking ourselves…

… how do we tend to react by default?

Understanding our default behavior provides critical details on who we are, what we stand for, and how we behave in our job.

It helps us acknowledge where we stand, and whether we’re going in the desired direction. By reflecting upon our natural tendencies, we shine a light onto behaviors that we don’t usually notice. It allows us to make corrections to subconscious actions.

For example, I once asked my team: “What is the first thing that you do when you get to work, and why?”

To which a team member responded: “I check my emails to check for any fires to fight, but I really should review and adjust my to-do list before reacting to anything…” Simply thinking about something that is more or less a habit can trigger a correction.

To paraphrase famed author David Foster Wallace, a fish may not even know what water is, being surrounded by it since birth. Similarly, there are so many elements in our day-to-day that require our active focus that we may not know how our subconscious is behaving. Personally, I had a tendency to hyper-focus on my work and neglect chats I receive throughout the day, leading some people to think that I don’t care about them. I only realized it after a team member joked about the situation over lunch, after which I became more aware of my chats throughout the day.

In the context of an organization (or a team), default tendencies act as a reflection of its culture. A proactive diagnosis thus helps to ensure that the team’s culture is aligned with its desired culture.

To diagnose my team’s tendencies, I like to first recognize three entities:

  1. The team;
  2. The team leadership; and
  3. The team’s relation with other teams.

Next, I ask each team member to reflect on the tendencies and behaviors from these three perspectives. Specifically, I ask: “In your perception, what does the team or the team leadership…” OR “In your perception, when collaborating and working with other teams, what do we…”

  • “…enjoy spending time on?”
  • “…not enjoy spending time on?”
  • “…excel in?”
  • “…repeatedly fail to achieve?”
  • “…never get the time to do?”
  • “…usually ask about?”
  • “…never ask about?”
  • “…forget about?”
  • “…get confused by?”

Compiling results from all team members provides us with a comprehensive picture of our tendencies, our blind spots, and our culture in general. Our goal is not to judge, but to effectively observe differences.

Next, we need to ensure that our culture is moving in the right direction. I thus pull all team members together and review whether each trait is desirable or not. In the case that it is not, we try and identify ways to actively remind ourselves of our bias and compensate for it. For example, if we have a tendency to avoid working with other teams, we could compensate by first asking “Does any other team need to be involved?” before kicking off any new projects.

How often should we assess our tendencies? I recommend performing this exercise every quarter or two. Culture is slow to change.

I do advocate for someone to act as a culture champion to hold people accountable to any tweaks and changes we decide to pursue. In the example above, a champion would praise people when they remember to consider whether other teams need to be involved in a project, and reprimand when we fail to do so.

In my opinion, success does not translate into achieving our dream culture, but into being conscious of our existing culture. Simply being aware of our biases, weaknesses, and tendencies helps us avoid making decisions blindly.


Recommended exercise

The next time that we’re faced with a decision, let’s analyze our immediate response (default tendency) and then take a day to think and see if we change our opinion. Is our default state of mind limiting our abilities?


Are you leading a startup team? Get started on the right foot with the Start-up Manager Handbook. And subscribe on the right for new insights every week!

Need better insights? Stop surveying and start observing

The number of surveys and feedback requests I receive from companies is insane.

Buying a plant at Home Depot prompts a 15-minute experience survey. Getting out of an Uber prompts a request for an immediate rating. Every software tool I’ve used has asked me how likely I am to recommend it to a friend. It has become too easy for people to design and launch surveys, leading to too little planning around what to ask, why, and how often.

Our team has certainly been guilty of this behavior as well. We conduct a couple email surveys a year, ask for feedback during account calls, and actively track our NPS score.

While I support this customer-centric culture, I also believe that we’re asking many unnecessary questions.

As a result, customers are becoming inundated with annoying requests for feedback. I can’t help but think of a future where people block surveys like they block online ads. In a way, they already are: the response rate of our customer surveys is around 5%. To boost this, we often have to resort to contests, prizes, and bribes that bias the sample population. In the end, we can’t even trust the data from our surveys.

Therefore, I’m advocating for a less intrusive and more accurate method of gathering customer insights, by observing customer behavior. To illustrate this approach, I’m going to analyze three common questions found in customer surveys, and how we can answer them without talking to customers.

“How likely are you to recommend us to a friend?”

I understand the need to know how much users love our tool via NPS. It helps us evaluate progress on customer satisfaction, and even compare against other companies.

What I don’t understand is why we need to ask people this question when we can simply track referral rates. Besides, if someone answers 9 or 10 but never actually refers anyone… are they playing nice or lying outright? Either way, knowing how likely someone is to refer us doesn’t help our business. Actually referring people to our business does.

So instead of surveying NPS, what I’d advocate for is a referral system that allows customers to actually refer their friends directly in-app. We can then gauge how likely customers are to refer us based on actual data. There’s a clear difference here: NPS measures a person’s likelihood to refer someone (mere words), whereas the referral rate measures the ratio of people actually doing it (an action). Last I checked, actions speak louder than words.

With this data, we can even take the analysis a step further and calculate the referral rate over time by registration cohort (i.e. the % of people that registered in a specific month and referred friends in month X after registration). It would show us when people are most likely to refer after registering themselves, and when numbers plateau, indicating an opportunity to remind them of our referral program. Taking action to increase this metric is much more impactful than trying to increase NPS – it directly drives customer acquisition, not just a sentiment score.
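
As a rough sketch of what that cohort view could look like in Python with pandas (the user data below is invented for illustration):

```python
import pandas as pd

# Hypothetical referral log: one row per user, with registration date and the
# date of their first successful in-app referral (missing if they never referred).
users = pd.DataFrame({
    "user_id": [1, 2, 3, 4],
    "registered_at": pd.to_datetime(
        ["2023-01-05", "2023-01-20", "2023-02-03", "2023-02-15"]),
    "first_referral_at": pd.to_datetime(
        ["2023-03-01", None, "2023-02-20", None]),
})

users["cohort"] = users["registered_at"].dt.to_period("M")
days_to_referral = (users["first_referral_at"] - users["registered_at"]).dt.days
users["months_to_referral"] = days_to_referral // 30  # rough month buckets; NaN if never referred

summary = users.groupby("cohort").agg(
    registered=("user_id", "count"),
    referred=("months_to_referral", "count"),  # count() skips NaN, i.e. non-referrers
    median_months_to_referral=("months_to_referral", "median"),
)
summary["referral_rate"] = summary["referred"] / summary["registered"]
print(summary)
```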

But wait, don’t we already have referral systems? Yeah, so why do we keep asking that NPS question?

“What would you like to see improved?”

I recently took a flight to Florida, which was delayed and overbooked, after which I got an email asking me for feedback. Boy, did I have a lot of feedback to share… But did I answer the survey? No.

Why? Because I had already spoken to a customer service agent before the flight to make sure my wife and I would be on the flight, along with a flight attendant about some other issues in flight. I didn’t feel like repeating myself.

In my opinion, no company that cares about customer happiness should survey customers about how they can improve. Most customers, at least in the USA, proactively complain to customer service. To ask for it again via a second channel is like saying: “Hey, I don’t remember what feedback you gave our team. In fact, I don’t trust that customer service recorded anything at all. May I ask you to refamiliarize yourself with your frustrations and repeat them to me again?”

Do we really want people to think about what frustrates them once more?

In my opinion, surveying customers on how we can improve means that we either don’t have a help desk, or don’t use our help desk data intelligently.

So to gain ideas on how to improve our business, let’s analyze our customer complaints and help desk data first.

“What features would you like to see?”

I’ve helped many product managers set up conversations with clients to get ideas on new features. Clients are usually excited to share their thoughts, and most have very specific features in mind. To help put their ideas into context, we often resort to further probing: asking customers why they need XYZ feature, how they plan to use it, and how they’d prioritize their wishlist. This usually leads to hour-long conversations where the client isn’t doing the work they’re paid to do, while we gain only a tiny window into the challenges of our users. Hearing a story is simply not the same as being there. It lacks context.

Instead of all this questioning, I’ve found visiting clients and observing them using our tool, without disturbing them, is much more insightful. Shadowing users provides critical context around how they’re using the tool, as part of what process, in combination with what else, when, etc. This allows me to clearly understand the core challenge that a client is facing. And more importantly, it helps me gain ideas that can improve how our software is used in combination with other tools, and in different situations.

If engineers and product managers simply took the time to observe the users they serve in their environment (not some ideal lab setting), or maybe even do what their customers do for a day, the world would function much more effectively.

Allow me to share another example: I recently visited a grocery store where they had just installed a new cash register / payment system at all checkout lanes. Register clerks had a frustrating time using them, leading to long lines. We could blame the issue on improper training, or we could ask ourselves how a cash register could be so hard to operate… I’m willing to bet that the machine had no issues in the lab setting that it was designed in, but that engineers never even tried to use it in a real grocery store by themselves. They likely designed the whole thing based on indirect customer feedback, which rarely provides enough context to a problem.

I don’t doubt that we can find exceptions to what I’m advocating above. The point stands, however, that we should first see if we can answer our questions through observation rather than surveys. It yields much more comprehensive and accurate insights, and doesn’t waste our customers’ time. Actions speak louder than words.


Recommended exercise

Let’s look at all the questions that we’re asking on our customer surveys and ask ourselves: “Can this be replaced with insights from their actual behavior?”


Are you leading a startup team? Get started on the right foot with the Start-up Manager Handbook. And subscribe on the right for new insights every week!

Where forecasting fails, scenario planning succeeds

Most of us carry a spare tire in our car. Not because we forecast having a flat tire, but because it’s a potential scenario. While we use scenario planning to avoid being stuck on the road, we often fail to do so when planning our business’s future.

Most startups have yet to find a scalable business model. So in the face of changing customer needs and new competitors, we are prone to change our business processes, organizational structures, and product offerings much more frequently than established businesses do.

Under this constant need to adapt, I’ve often found myself lacking the time to plan and vet a decision. The only question I resort to asking my team is: “What’s the best solution to this problem and what do we foresee happening?”

With time, I’ve discovered that my question is limiting in two aspects: 1) It inadvertently forces people to only think about one choice, one “best” option, rather than many; and 2) It misleads people into thinking that there is only one possible outcome as result of our choice.

In reality, there are always multiple options, each with a multitude of potential outcomes.

Why is this important to recognize? Because I’ve often realized that there’s a better solution, but only after a decision has been made and a change has been implemented. When it’s too late. In hindsight, we could have identified that better option beforehand if we had simply taken a minute to consider all our choices.

My team and I were once faced with the common problem of having too much work, too many clients to support (a good thing), and not enough time. And as we had just landed a Series B investment, leadership had even faster growth in mind.

So I sat everyone down and asked “What’s the best solution to our large queue of work and what do we foresee happening?”

Immediately, everyone jumped right to the solution of hiring additional team members that could focus on a specific type of request. In other words, increase staffing and specialize. In the moment, it sounded like a good plan, so I advocated for additional team members. And we got them.

A few months following the hire of two additional team members, the same problem resurfaced. The number of clients didn’t grow proportionally, but we had more requests from the same pool of clients. Since we weren’t making additional revenue from these clients, hiring additional people was not a great solution. We all agreed that we couldn’t just throw money at the problem. So we sat down and asked ourselves: “What options do we have?” Everyone got surprisingly creative and thought of ideas such as:

  • Set a quota on how much service time each client can access per month;
  • Stop doing certain types of work for clients and train them to do it themselves instead; and
  • Charge extra for access to our service team.

We then proceeded to plan around contingencies, asking ourselves what could happen if we implemented these solutions. For example, if we were to set a quota on how much time each client could use per month, our team foresaw that:

  • High demand clients could complain;
  • Low demand clients that don’t hit the quota could file requests just to fill their quota; and
  • Clients could be frustrated if they exceeded their quota yet needed a service critical to the functioning of their account.

The exercise was successful all around. People were creative, open-minded, and honest in their assessment of potential outcomes.

The fact is, all the options identified were feasible and better than hiring additional team members, both in terms of efficiency and scalability. And because we had analyzed potential outcomes, we were in a position to plan ahead or react readily with counter-measures. For example, we could have reached out to high-demand clients to set new Service Level Agreements during renewal conversations, and gradually rolled out the concept of quotas.

Yet we only identified these solutions once we faced the same problem again, without an easy way out. This begs the question: could we have identified them in the first place? I think so, had we stopped and analyzed all our options.

It goes without saying that I’m now a huge fan of scenario planning. So for the rest of this blog post, I’m going to share my take on this crucial decision making tool.

What is scenario planning?

In the context of tactical decision making, scenario planning is a process by which we first identify a series of potential solutions to our problem, including doing nothing. Next, we identify and analyze all plausible outcomes of each solution identified, our scenarios, and plan around contingencies.

Based on an analysis or even experimentation of how effective each solution can be, we can then take our decision. From there, we’ll have our contingency plans available should any of the plausible outcomes identified during scenario planning materialize. We effectively stand ready to react.

Success translates into no surprises and readiness to respond.

What’s the difference between scenario planning and forecasting?

Technically, forecasts envision a probable future (how likely is it to occur?), whereas scenario planning identifies plausible futures (can the event occur?). The relevancy of the two methods thus depends on how we want to plan for the future and what resources we have available. For example…

  • A prominent application for forecasting is weather. If we forecast rain today, we’re likely to plan on having an umbrella when commuting. If we were to perform scenario planning for weather, where rain is always a plausible future, we’d be walking around with an umbrella independent of the probability of rain – it’s simply a plausible outcome.
  • Scenario planning on the other hand is often used for trip planning. We can’t always forecast exactly what we will do, what we will visit, or what the weather will be like when traveling, so we plan for all plausible scenarios. We bring all kinds of clothes for comfort, medications for health, and even books for potentially boring moments.

Scenario planning is thus very much linked to contingency planning. Again, our goal is to simply stand ready to react.

For a more strategic application of scenario planning, I highly recommend Idealized Design by Dr. Ackoff.

When should I use scenario planning?

In my opinion, scenario planning needs to be applied anytime a decision is needed. This allows us to fully acknowledge the potential impacts of our decision, and plan around plausible risks and threats.

For further reading, I highly recommend HBR’s article on how Shell performs strategic scenario planning and what they gain from it.


Recommended exercise

Let’s pick a decision that we’re actively assessing right now and pull the team together to brainstorm on: “What do you think would happen if we decided to go ahead with___?” Is the team ready to face these consequences?


Are you leading a startup team? Get started on the right foot with the Start-up Manager Handbook. And subscribe on the right for new insights every week!

What KPIs should my team use?

I’ve heard this question asked a thousand times. The answer, of course, varies.

As explored in “Influencer,” metrics guide our team’s focus. They promote specific actions. So to effect change, it’s critical to identify relevant metrics.

For most industries, there exist basic measurements that all players use for comparison. For example, the airline industry has Revenue Passenger Mile, the hotel industry has Revenue per Available Room, and any industry that sells products tracks Inventory Turnover. These metrics, while relevant for assessing a company’s performance against competitors or an industry benchmark, may not be of any use in assessing a specific team’s performance.

The way we go about measuring the performance of our engineering, product, and marketing teams may vary from company to company, depending on the unique challenges faced.

At startups, the challenge of identifying relevant KPIs is intensified by rapidly shifting goals. Sometimes, a KPI is no longer relevant after just a few weeks. At our startup, we once changed the sales and implementation process three times over the course of a quarter, which meant designing and adopting three different sets of KPIs in that same period. Chaotic, right?

So how do we go about defining a set of KPIs that focuses on long-term success, adapts to our changing team needs, and accurately reflects our progress? I’m going to argue that each team needs three sets of metrics:

  1. Strategic KPIs reflecting the team’s long-term mission, which shouldn’t change often;
  2. Tactical KPIs guiding the team’s immediate actions; and
  3. Individual KPIs guiding individual team members.

Let’s explore these three different types of team KPIs in more detail.

What pain does my team solve?

Ultimately, our team exists to solve a pain. The strategic KPI(s) that guides our team should therefore be a measure of how well we are solving that pain. So before thinking about measures, let’s first get an understanding of the problem our team tackles.

As an exercise, let’s assume that we lead a customer success team at a B2C retail company for a moment:

  • What pain does my customer success team solve? Bad customer experience.
  • How do bad customer experiences impact the company? Customers will not come back, and could also defame our brand on social media. In turn, they drive down future revenue.
  • What are indicators of bad experiences? We can learn about a bad experience when someone files a complaint, provides negative feedback, responds negatively to a survey, or speaks negatively of our product on social media.
  • What are indicators of repeat business? The lifetime revenue of our customers, the ratio of repeat customers, and the average time between purchases.
  • Can the metrics be used for comparison? Are some metrics parent to others, and can they be normalized?
    • All data points on bad experiences can feed into an overall count of negative experiences. We can further normalize by the count of active/purchasing customers over a time period, accounting only for customers that had the opportunity to have a negative experience. A relevant KPI could thus be the average count of negative experiences per active customer per month.
    • The ratio of repeat customers and the average time between purchases will both affect how the lifetime revenue of our customers grows. We can further cohort or group customers by their first purchase date to see if newer customers respond differently to our tactics. We can also take snapshots of customers’ lifetime revenue at different points in their lifetime with us, to fairly compare new and old customers. For example, we could evaluate the lifetime revenue of customers that made a first purchase in the month of January, and their lifetime revenue 180 days after, and 365 days after (a quick sketch of this calculation follows right after this list)…
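
Here is a minimal sketch of that second calculation in Python with pandas, using invented purchase data; the idea is simply to snapshot each cohort’s average revenue per customer at fixed intervals after the first purchase:

```python
import pandas as pd

# Hypothetical order data: one row per purchase.
orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3],
    "order_date": pd.to_datetime(
        ["2023-01-10", "2023-05-02", "2023-01-22", "2023-02-18", "2023-03-05"]),
    "amount": [40, 60, 25, 30, 80],
})

first_purchase = orders.groupby("customer_id")["order_date"].min().rename("first_purchase")
orders = orders.join(first_purchase, on="customer_id")
orders["cohort"] = orders["first_purchase"].dt.to_period("M")  # first-purchase month
orders["days_since_first"] = (orders["order_date"] - orders["first_purchase"]).dt.days

def cohort_ltv(window_days):
    """Average revenue per customer within `window_days` of their first purchase,
    grouped by first-purchase cohort."""
    in_window = orders[orders["days_since_first"] <= window_days]
    per_customer = in_window.groupby(["cohort", "customer_id"])["amount"].sum()
    return per_customer.groupby(level="cohort").mean()

print(cohort_ltv(180))  # 180-day lifetime revenue snapshot per cohort
print(cohort_ltv(365))  # 365-day snapshot
```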

As shown above, we can clearly identify our team’s strategic KPIs by evaluating the context of the pain solved by our team, and by identifying parent metrics that others feed into. In this case, the customer success team’s strategic KPIs can track:

  1. The lifetime revenue growth of its customers by first purchase month cohorts; and
  2. The average count of negative experiences per monthly active customer.

How are we solving this pain?

To solve the pain that our team is responsible for, our team is likely to adopt different tactics over time. Each of these tactics also needs to be measured and evaluated, leading to the need for Tactical KPI(s). Let’s again assume the leadership of our customer success team and explore some questions that can help us identify our tactical KPIs:

  • What is our team doing to boost customer lifetime revenue? We’re sending email newsletters to recommend specific products to a target audience.
    • How can we measure the newsletter’s success? By evaluating the number of repeat purchases that originate from our newsletter.
  • What is our team doing to reduce bad experiences? Solving problems in a way that avoids a recurrence of the same problem with other users (i.e. fixing product issues and improving the QA process).
    • How can we measure the success of our QA process? By counting the number of unique problems reported by customers for each product sold.

From the questions above, it’s clear that the customer success team’s tactical KPIs should track:

  1. The ratio of repeat purchases made by customers that clicked on the newsletter; and
  2. The average number of unique problems reported per product sold (both calculations are sketched in the snippet below).
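
Here is a quick sketch of both calculations in Python with pandas; the click, purchase, and problem data below are invented for illustration:

```python
import pandas as pd

# Hypothetical event data.
clicks = pd.DataFrame({"customer_id": [1, 2, 3]})  # customers who clicked the newsletter
purchases = pd.DataFrame({"customer_id": [1, 1, 2, 4], "order_id": [10, 11, 12, 13]})
problems = pd.DataFrame({"product_id": [100, 100, 101], "problem_id": [1, 2, 1]})
units_sold = pd.Series({100: 500, 101: 300}, name="units_sold")

# Tactical KPI 1: share of newsletter clickers who made repeat purchases.
purchase_counts = purchases.groupby("customer_id")["order_id"].nunique()
clicker_purchases = purchase_counts.reindex(clicks["customer_id"]).fillna(0)
repeat_ratio = (clicker_purchases > 1).mean()
print(f"Repeat-purchase ratio among newsletter clickers: {repeat_ratio:.0%}")

# Tactical KPI 2: unique problems reported per product sold.
unique_problems = problems.groupby("product_id")["problem_id"].nunique()
print((unique_problems / units_sold).rename("problems_per_unit_sold"))
```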

It’s important to note that a team should be able to affect its tactical KPIs, and should avoid metrics it can’t impact. In addition, as a team’s activities and methods of solving its pain evolve, its tactical KPIs should change. As a simple example, if our customer success team no longer sends newsletters and has moved on to using Facebook to interact with our customers, then we’ll need to track the ratio of purchases originating from Facebook.

How is each individual solving this pain?

Beyond team level metrics, there are also individual level metrics that assess how each team member is doing relative to another in helping to affect our strategic and tactical KPIs. The exact measure will depend on the team member’s unique responsibilities, and expectations that we have of them.

If two team members share the same responsibilities, the same KPI should be used for both of them. For example, sales agents that do the same thing should share KPI(s) to help compare their performance (e.g. Ratio of conversion from A to B).

A couple questions that can help us identify relevant individual KPIs include:

  • How does the team member contribute toward the tactical KPIs?
  • Are there unique expectations and improvement areas that we agreed upon with the team member that need to be tracked?

Do individual KPIs have to relate to Strategic and Tactical KPIs?

Yes.

Individual team members should never compare themselves and gauge their performance on anything else but what matters to the team and the company. Everything else is a distraction. If team members start comparing each other’s professional growth in terms of the quality and price of computers / desks they have, the clothes they wear, or their business card designs, then our company is doomed.

Does this mean that we shouldn’t seek to have fun at work? Of course not. Realize, however, that the goal of social events, company sports teams, and non-work-related contests (e.g. Christmas prizes, best Halloween costume, etc.) is to make the workplace and our culture more appealing to employees. Our hope is that people are more motivated to focus on their team KPIs if they enjoy working here. So it’s all related 🙂

What if another team shares my KPI?

Great! Sharing KPIs is great news, because 1) it shows that our organization is capable of strategic alignment across teams, and 2) we get to work with people that think differently than us!

There are a couple considerations to account for when sharing KPIs with other teams:

  1. Ensure that each team’s responsibilities are clearly defined; and
  2. Have someone from each team manage the collaboration.

It’s obvious that the first point ensures everyone knows what they’re supposed to do, and avoids stepping on each other’s toes.

It’s the second point that some organizations fail to foresee a need for. Be it a project or a program, having a specific individual manage the collaboration ensures that there’s an advocate on each team who will coordinate work, provide detailed updates, and remind the rest of the team of this KPI’s priority. Without collaboration managers, shared KPIs won’t have the same priority across teams. This can result in missed expectations and conflict if one team tried really hard while the other slacked.

Do I trust the data?

Beyond designing the relevant KPIs to effect the desired change, trust in the data is critical for team members to take the numbers seriously.

Luckily, there’s a simple way to verify whether our metric is accurate: Get a second opinion.

If the second source shows something different, it means we have redundant information from a different source that reveals a discrepancy. We must then compare results from the two sources, understand why there’s a discrepancy (definitions, data, etc.), and either leave the two as-is and acknowledge the difference, or change one of the two definitions to reconcile them.
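
As a small illustration of what that comparison could look like (a Python sketch with made-up monthly figures):

```python
import pandas as pd

# Hypothetical monthly revenue pulled from two independent sources.
payment_processor = pd.Series({"2023-01": 101_500, "2023-02": 98_200}, name="processor")
operational_db = pd.Series({"2023-01": 101_500, "2023-02": 96_900}, name="database")

comparison = pd.concat([payment_processor, operational_db], axis=1)
comparison["diff_pct"] = (comparison["processor"] - comparison["database"]) / comparison["database"]

# Flag months where the two sources disagree by more than 1%, then dig into whether
# the gap comes from different definitions, missing data, or an outright bug.
print(comparison[comparison["diff_pct"].abs() > 0.01])
```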

Where can we get a second opinion? If we’re asking this question, it must be that there’s no easy access to redundant information from a different source.

For example, while we may be able to compare revenue from our payment processing system against the data we see in our operational database, it’s not as easy to find a second metric to compare against employee satisfaction scores from a recent survey. So what do we do? We walk around the office, observe people, listen to water-cooler conversations, and have lunch with colleagues.

I can’t overstate the importance of simply gathering qualitative data. Managers need to dedicate time weekly to observing their team, colleagues, and customers. We can’t be the ones that eat lunch at our desk. It’s incredible how much insight we can gather by simply observing our environment on top of looking at dashboards. I also can’t take credit for this: I was reminded to observe team members by my boss after failing to realize that some individuals were unhappy with their growth path. I didn’t need to wait for the next performance review or the next employee survey to get this insight. I could have simply invited team members for coffee regularly. Andy Grove (former CEO of Intel) is a strong proponent of this tactic as well in High Output Management.

Hope you found this valuable 🙂


Recommended exercise

Let’s review all the data reports we look at weekly and ask ourselves: “Which reports did we skip over or simply glance at?” These reports are often meaningless distractions and should be replaced. Reports need to be actionable, meaningful, and serve as a strategic, tactical, or individual KPI.


Are you leading a startup team? Get started on the right foot with the Start-up Manager Handbook. And subscribe on the right for new insights every week!