
start⬆Mngr handbook outro: Advance your leadership career via feedback

We started this handbook with a discussion on how to gain trust. In this outro, I’ll discuss how to maintain trust. After all, what’s the point of gaining trust if we can’t keep it?

At start-ups, team members are typically young, intelligent, ambitious, and outspoken. They have an opinion on everything. Therefore, trust from team members is often built on a leader’s ability to seek and acknowledge people’s feedback. And the more trust you get from team members, the more they will do for you.

Yet authority makes feedback harder to come by. People generally do not expect to tell their bosses how to do their jobs, and may even assume that their bosses do those jobs perfectly. Team members may also avoid sharing feedback with their boss, fearing it will create a conflict: one the boss most likely wins.

It is thus critical that we create an environment where team members are comfortable sharing feedback. Here are some tactics to help get started:

  • Set a clear expectation of what you want: This can be achieved by regularly prompting for feedback in different ways and praising team members when they act on it.
  • Establish feedback channels: We can create processes by which team members can give feedback. Perhaps an anonymous survey goes out once a month, or every other one-on-one meeting is dedicated to team members sharing feedback. Simply saying “I’d love to hear your feedback anytime” is never enough; team members still won’t know how to communicate their thoughts and feelings. Establishing feedback channels and holding team members accountable facilitates the process.
  • Empathize: It’s important to put ourselves in a team member’s shoes before taking a decision or announcing a change. This allows us to foresee how team members will react and to tailor our communication and roll-out strategy accordingly.
  • Eat the same food as team members: To effectively empathize, it helps to eat the same food, or do the same work, as front-line soldiers, at least once in a while or continuously in small doses. This also communicates to team members that you have the context to understand their pains.
  • Listen: Being an active listener will help team members feel that we understand them and acknowledge their concerns.
  • Respond to feedback: Acting on feedback translates into caring for the team. When we disagree with a piece of feedback, it’s important to clearly explain why and invite a constructive conversation. This ensures that team members understand our reasoning, even if they disagree with it.
  • Distinguish venting from feedback: There are times when team members want to vent and complain about a situation. It’s important to distinguish when that’s the case versus when they’re providing feedback. When a person vents, they simply seek someone to listen, not necessarily to respond; getting clarity upfront ensures that we don’t try to solve a problem when all that’s asked is a pair of ears. It is reasonable to simply ask the team member whether they want to vent at the beginning of a chat.

Recommended exercise

Let’s ask team members for constructive feedback in a safe environment (e.g. submitting anonymously written notes).

start⬆Mngr handbook pt. 7: How do I make my boss happy?

First, a personal story.

After spending two years as a decent individual contributor, I was promoted to team lead. It would be the first time I managed people. How exciting!

In my first few weeks as manager, I didn’t really do anything different, aside from having one-on-one talks with my new direct reports. I didn’t change how I interacted with my boss, and continued to hold bi-weekly meetings to report on my priorities. I let operations run as normal and only intervened when team members needed my help.

Everything seemed great at first, but after a few months, I started to feel like I wasn’t doing enough, and wasn’t growing as a leader. I had a rough sense of how my team was supposed to help advance the company, but I didn’t know if our innovation initiatives were aligned with that of other teams. Yet I didn’t want to bother my boss with my problems; I didn’t want to reveal that I felt insecure in my new position.

Every week that passed felt worse and worse. I started to feel incompetent as a manager, unsure of where I was supposed to lead my team. My boss rarely provided feedback, so I didn’t know whether my team and I were successful.

One day, I finally rallied the courage to tell my manager: “I don’t know if I’m successful or not. I don’t exactly know what results are expected of my team. We work on a ton of projects, but I don’t know how they relate to the overall company mission. I also don’t know if I’m acting as a good leader and would like some mentorship and coaching.”

What ensued was one of the most productive conversations I’ve ever had. Turns out, my manager had assumed that I knew how our team’s priorities related to other company initiatives, so it was never discussed with me. We took the time to clarify all of that. On the management coaching side, my boss also didn’t know I needed it, since I appeared to be doing fine. We then set up a mentorship plan, and also involved another manager at the company so that I could get two perspectives instead of just one. That night, I felt relieved and re-motivated. I found clarity on what was expected of me. I was confident that my team was advancing in the right direction, and I personally had access to two mentors to guide me.

What did I learn? That not having clarity on what my boss expects of me leads to a stressful, confusing, and unpleasant time. The more time I let pass without clarifying expectations with my boss, the more insecure I became.

Since my manager interacts with many people daily, they may assume that I know things that are actually news to me. It is thus my job to communicate what isn’t clear to me.

In part seven, I’ll discuss how to manage up and stay in sync with our boss. This will help:

  • Understand what our boss expects of us;
  • Meet our boss’s expectations; and
  • Hold tough conversations with our boss.

What does my boss expect of me?

As our responsibilities evolve, our manager’s expectations of us also evolve.

Whereas a nurse is expected to report on their patients’ status and health, a nurse leader is expected to report on their team’s performance.

As we work to understand what is expected of us, it’s a good idea to share with our manager the management system we use (if they are not already aware). This ensures that our boss knows how to communicate expectations with us, and more importantly, how we want to be held accountable.

Once a management system has been agreed upon, we can ask for clarity on their expectations of us: What goals they want us to achieve, what weaknesses they want us to improve upon, and how they envision us growing professionally.

We can also take it a step further and request direction, support, or coaching in specific areas to help our manager understand where and how they can help. This saves them time in diagnosing how to spend their energy with us, in addition to showcasing our sense of self-awareness. And fact is, most managers won’t find the time to diagnose where we need coaching, so let’s facilitate their job and ask for it.

Here are some questions that can help us clarify our boss’s expectations of us:

  • What are the performance indicators or success criteria that will be used to assess my role and responsibilities?
  • What is the preferred method to communicate updates on my progress? 
  • What is the preferred method to communicate updates on my team’s performance?
  • What is the preferred method for me to give and receive feedback? 

Who is my boss?

It’s also critical to understand our manager, the person. Insight on the person provides the necessary context to explain why our boss expects certain things from us, and behaves the way they behave.

It’s thus a good idea to observe and record our perception of our boss’s motivations, frustrations, values, strengths, weaknesses, work styles, and perceptions of us, in the same way that we keep dynamic profiles of our direct reports. In addition to understanding the boss’s frame of mind, this also helps us tailor our priorities accordingly. For what’s important to our boss should also be important for us.

How do I give feedback to my boss?


It’s not natural for most individuals to give feedback to their manager. We have a tendency to expect our leaders to know more than us, to be self-aware, and to not need our feedback. We may even expect our boss to be perfect. Yet nothing could be further from the truth.

Nobody is perfect (even the definition of perfect varies from person to person). Nobody knows everything. Nobody can assess their performance objectively. Nobody can be certain of how they are perceived by others.

It is thus critical to proactively give feedback to our boss, to ensure that they know how to work with us effectively, and help us achieve our full potential.

Here are some tips that can help us give feedback to our manager:

  • Develop trust: Without trust in us, a manager is rarely going to listen to anything we say. It’s thus critical to show that we understand the problems faced by the team, along with our boss’s priorities. Once trust has been established, we can begin to share feedback.
  • Agree on a feedback system: Proactively asking the manager how they’d like to receive feedback helps establish a channel where feedback can flow.
  • Be transparent with our intentions: If we are planning to communicate feedback, we should let the boss know at the beginning of the conversation. e.g. “Would you mind if I shared a point of feedback regarding the situation around ______? I’d love your thoughts on my interpretation of the situation.”
  • Be specific and provide evidence: To avoid any opportunity for debate about the feedback, we can take the following approach:
    1. Find and communicate non-negotiable evidence that supports our thoughts and feelings about the feedback. e.g. “Earlier in the ___ meeting, you dismissed Taylor’s opinion.”
    2. Share our interpretation of the observations and communicate how their behavior impacted us, the team, or the company. e.g. “The quick dismissal of Taylor’s opinion makes it uncomfortable for the rest of the team to share their thoughts openly.”
    3. Share a potential solution, while giving them the benefit of the doubt that they meant no harm. e.g. “I know that you didn’t mean to hurt Taylor or shut down her idea. There was probably a distraction in the moment. To help keep conversations constructive in the future, perhaps you could share with Taylor how she can raise opinions next time? Or perhaps educate all of us on how we can constructively share feedback?”
  • End with praise and a vote of support: A vote of support helps our manager understand that we are trying to improve a behavior, not attacking them personally. e.g. “I care about this team and this company, and enjoy our relationship. I thus want to make sure that the entire team feels this way and has the same supportive and constructive relationship we have.”

It’s important to realize that managers and leaders seldom receive feedback from direct reports. More often than not, direct reports simply get frustrated and leave the organization before trying to communicate feedback. That’s a shame. To help us avoid this fate with our boss, I highly recommend the adoption of a clear feedback channel between both parties. Give your boss a chance.

Recommended exercise

Let’s pick a frustration or issue that we have with our boss and work to communicate it to him or her.

start⬆Mngr handbook pt. 6: Before we start analyzing data…

First, a personal story.

I once helped a colleague on the customer success team (let’s call him Lou) analyze our retention data.

Lou asked me: “Can you help me get a report on the number of inbound service requests filed in the past quarter for each of our customers?” Easy enough, I thought. I pulled the data from our help desk, created the report, and sent it over to Lou. I thought that was the end of it.

A few days later, Lou came back and said: “Thanks for your help last time. Can you also get me a report on the amount of time that we spent responding to requests in the past quarter for each of our customers?”

The report wasn’t complicated to create, but we lacked the data. We did not track time spent servicing customers. After speaking with Lou, we decided to have the team start tracking their time. We recorded over a month of data before we created a first report. Lou looked happy with the results, so I thought this was again the end of that project.

Wrong. Over the following weeks, Lou requested a half-dozen more reports, and we initiated the tracking of many new data points. A good amount of time and energy went into this retention analysis.

After Lou’s requests died down, I curiously asked: “So how did all the reports help you in the end? Did you find what you were looking for?”

Turns out, Lou didn’t really have a goal in mind… Lou was initially curious about how many resources we were spending per client, which led to follow-up questions along the way. Based on the data, Lou eventually suggested that customer success leadership start setting limits on how many hours of service each client could access per month. Yet because of other priorities and constraints, the suggestion was never implemented. So nothing came out of the analysis.

The good news is that the new data points we tracked provided us a ton of useful information that eventually led to other changes that helped improve our retention goals. However, that took another few weeks. And fact is, the whole project could have been a complete waste of time.

Having worked on hundreds of analyses with dozens of data-driven companies, I can confidently say that teams without an analytical process in place have an extremely high chance of wasting time performing data analysis.

Start-up companies today have at their disposal an unprecedented amount of data, but it doesn’t guarantee good decisions. It doesn’t matter what BI tools we use. They are all useless if we don’t know what questions need to be answered.

To avoid wasting time and energy while pursuing analytics projects, this blog post will showcase an analysis process and framework to follow before any analytical work begins. Let’s make sure every analysis has a clear purpose.

For analysis projects to be successful, we need three main ingredients: the relevant data, people that can interpret that data, and an analytical process to ensure that we’re asking the right questions and creating the relevant reports. In part six of our startup manager handbook, we’ll thus be exploring the process of initiating data analyses to help:

  • Gather evidence for a problem;
  • Measure success and evaluate performance;
  • Take data-driven decisions;
  • Avoid performing the wrong analyses;
  • Avoid answering the wrong questions.

Before going further, let’s clarify that depending on the organization, analyses can also be referred to as measures, metrics, reports, and other quantitative or qualitative evidence-based assessments.

We will not be discussing analytical/statistical methods (or data science methods), since there exists a ton of content on statistical methods out there already. However, to help those that are completely new to data analysis, I’ve included links to some of my favorite data science resources at the end of this blog post.

1. What is the problem and the goal?


The most critical step of an analysis is to ensure that it answers the right question. Yet more often than not, we are so eager that we jump right into the data without a clear goal. The result is time and resources wasted on analyses that may not yield relevant insights and don’t help with decision-making.

This widespread behavior likely stems from spending 15+ years as students, where problems are defined for us and all that’s expected of us is to solve them. Unintentionally, schools have failed to teach us how to define problems.

What is being asked?

Before going further, let’s first understand what is being asked. Here are the critical elements to acknowledge before any analysis work can be performed:

  • Is the question clear? There are often acronyms and ambiguous words used in describing a question, problem, or desired analysis. It’s important that there is clarity on how these words are interpreted to avoid miscommunication.
  • Does the requester have a specific vision for the end-result chart or report? Analysis consumers may have an idea of the specific report(s) they’re looking for, so ask for it. While the envisioned report may not be best suited to their analysis goal, simply acknowledging it will help us understand the context and motivations behind the analysis. In addition, individuals with specific ideas on the end result often want to see their desired reports regardless of what we say. I thus recommend building the report, explaining why it doesn’t answer their question, and then revealing the better analysis. It shows that we acknowledged their need and understand the context of the problem.
  • Can I explain the goal of the analysis in my own words? Repeat the analysis goal in our own words and validate with the stakeholder(s) – this ensures that there is agreement on the goal. (e.g. “The goal of the analysis is to assess whether cars primarily driven on the highway have a longer service life than cars primarily driven in urban centers. Is this accurate?”)
  • Do I understand the motivations behind the goal? Understanding why the analysis goal is relevant to the team or organization will provide a sense of direction when we start identifying analyses to perform. It also helps us validate any assumptions we may have about an analysis’s motives. (e.g. A transport company may need to know whether cars primarily driven on highways have longer service lives, to see if there’s an opportunity to incentivize drivers to take the highway more often than local routes. There may thus also be an opportunity to analyze why drivers currently prefer local vs. highway roads.)
  • What potential actions or decisions will be made based on the results? Why would we spend time on an analysis that doesn’t translate into a decision or action?

What motivations lie behind the analysis?

Of the five points explored above, understanding motivations can be particularly challenging. To help, let’s remind ourselves of a tool we’ve used before: the 5-why method for root cause analysis. This method can be leveraged to understand why an analysis makes sense to tackle. Questions such as “Why is _____ of interest?” or “Why does your team focus on _____?” will help kickstart the process.

2. Who exactly cares?

Stakeholders previously helped to explain the analysis goal. For the analysis results to be meaningful and used in decision-making, these same stakeholders need to participate in the analysis process as well. It’s therefore critical that responsibilities are agreed upon with stakeholders before an analysis begins. Let’s explore some common stakeholder responsibilities (an individual may certainly wear multiple hats):

  • Decision-taker(s): These are individuals that need the insight to drive a decision or assess a situation. Among decision-takers, I’ve found it helpful to identify one individual that also serves as an advocate for this analysis: A person that will take part in reviewing all progress. This ensures that the analysis has continuous buy-in from its stakeholders and remains a priority throughout its duration.
  • Data warehouse developer(s): These are individuals that have deep knowledge of the data warehouse. Among other duties, they can help us access the relevant data points and track new data points.
  • Subject matter expert(s): These are individuals with context around the data, who can help us make sense of questions that come up while performing the analysis.
  • Observer(s): These are individuals that are curious about the analysis for reasons unknown. Perhaps they want to make decisions based on the results, or perhaps they are simply curious about how an analysis is carried out at the organization. Independent of motive, these are individuals that the analysis team will need to update when major milestones are met.

Having these stakeholders participate in the analysis process ensures that everyone is on the same page throughout the exercise. In turn, they buy into the analysis and understand its nuance and caveats before final results are presented.

When stakeholders fail to participate in the analysis process, they may doubt the results presented in the end, losing trust in what the data has to say. This must be avoided at all costs.

3. What analyses will help answer our questions?


Next, it’s time to envision (not yet perform) analyses that will help answer our analytical questions. This translates into an analysis plan, avoiding the risk of analyzing blindly.

A good way to start envisioning what analyses to perform is to ask: “If I had access to any dataset, what analyses would I want to perform to answer this question?”

Assuming that we have access to any dataset makes us more creative. In the context of data analysis, our creativity is often limited by data not being available, or not being in the format we need. Yet chances are that once the ideal analysis is identified, a way to work around existing constraints will also be found: e.g. by tracking the missing data, or finding a similar dataset stored in the required format. Even if there are no work-arounds, it is still valuable to acknowledge that there are important analyses we couldn’t perform due to ______.

Next, let’s review some characteristics of a good analysis:

  • Relevant: The analysis needs to directly relate to the goal. Every data point that does not answer the main question(s) or provide additional context becomes a distraction. Distractions do not help stakeholders with their decisions; they should be avoided.
  • Trustworthy: Both the methods and the datasets used in the analysis need to be trustworthy. There should be no doubt that the data is accurately recorded and properly formatted, and that the methods used are relevant to the analysis goal. This means that reasons and explanations are available to support every decision surrounding the analysis. Decision-takers will appreciate the diligence, but most importantly, will trust that they can rely on the analysis for their decision.
  • To the point: At least one of the reports needs to answer the question directly. It should be as black and white as possible, revealing a clear insight that helps decision-takers come to a conclusion. Even if the conclusion is that we need to perform more analyses, that report needs to unequivocally and quickly show why that’s the case.
  • Communicates a story: To effectively communicate an insight, analyses need to be presented in the form of a story. To this effect, I highly recommend the book Storytelling with Data to explore the basics of data communication. I also recommend adopting the following flow for the story:
    1. Communicate the recommendation first: I usually start with the final recommendation and reveal at least one data report that clearly shows why I’m making this recommendation (see “To the point” note above). This ensures that people do not wait to discover the final insight that the analysis achieved. In addition, it also prevents conversations from sidetracking before the final insight has been shared.
    2. Explore caveats and supporting arguments next: If there are other reports that provide additional contexts to the recommendation, explore them next. I recommend starting with reports that illustrate caveats or go against the recommendation to address concerns and skepticism right away. Then we can proceed with analyses that support the recommendation to show how they outweigh the negative arguments.
    3. Close by reiterating the recommendation: Finally, reiterate the initial recommendation by coming back to the main analysis, and allow the audience to raise questions.

As a final tip, I recommend reviewing the results and rehearsing the story with a colleague before the final presentation. This helps to anticipate questions and catch mistakes before they affect the analysis’s trustworthiness.

So what’s the plan?

The outcome of this three-step approach to initiating analysis projects is best summarized in the analysis canvas explored below.

Analysis Canvas
Analysis context

  • Goal: What is the main question that the analysis needs to answer?
  • Motivation(s): Why is the analysis goal and core question relevant?
  • Action(s) to drive: What are the decisions and/or actions that the analysis will empower?

Planned analyses (measures, metrics, reports…)

  • List of analyses to build

Stakeholders and participants

  • Decision-taker(s): Who needs this analysis to help take a decision?
  • Helper(s): Who can help answer questions with regard to the analysis goal and context?
  • Observer(s): Who are simply interested in the analysis with no stake in its results?
  • Analyst(s): Who will be performing the analysis?

I personally only start performing analyses after core stakeholders, especially decision-takers, review and agree to the analysis canvas. This ensures that there is agreement from the get-go with regard to the analysis goal, individual responsibilities, potential actions to take, and analyses to build.

In my experience, starting to analyze data without agreement on these points can lead to future conflicts and missed expectations. There’s no time to waste on any of that.
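
For teams that want to track canvases alongside their analyses, the sections above can be captured in a lightweight structure. This is only a minimal sketch: the field names and the readiness rule are my own, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class AnalysisCanvas:
    """Lightweight record of the analysis canvas sections above."""
    goal: str                                       # main question to answer
    motivations: list = field(default_factory=list)
    actions_to_drive: list = field(default_factory=list)
    planned_analyses: list = field(default_factory=list)
    decision_takers: list = field(default_factory=list)
    helpers: list = field(default_factory=list)
    observers: list = field(default_factory=list)
    analysts: list = field(default_factory=list)

    def is_ready(self) -> bool:
        # Don't start analyzing until the goal, at least one planned
        # analysis, and at least one decision-taker are agreed upon.
        return bool(self.goal and self.planned_analyses and self.decision_takers)

canvas = AnalysisCanvas(goal="Do highway-driven cars have longer service lives?")
print(canvas.is_ready())  # False: no planned analyses or decision-takers yet

canvas.planned_analyses.append("Service life by primary road type")
canvas.decision_takers.append("Fleet operations lead")
print(canvas.is_ready())  # True: the canvas can now be reviewed and agreed upon
```

The `is_ready` gate simply encodes the rule from the paragraph above: no analysis starts until goal, planned analyses, and decision-takers are in place.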

Data analysis / data science / statistics resources

Finally, as promised, allow me to share some resources on analysis methods:

Recommended exercise

Let’s pick an analysis that we want to perform and fill out an analysis canvas. What’s our goal? Why is that important? What actions do we plan to take based on the results? What are relevant analyses? Who else needs to be involved?

start⬆Mngr handbook pt. 5: How to keep processes lean

First, a personal story.

In the early life of our B2B start-up company, we had fewer than a hundred clients and were able to maintain close relationships with most of them. This translated into regular conversations with our customers, during which we’d get a lot of special requests for our software and services. It was easy to get a sense of what our customers wanted. And since we were still young and unsure of the specific direction we wanted to take with the product, we often catered to our customers’ needs. We followed through on almost all customer requests.

Slowly, as our customer base grew, our tailored approach to servicing customers showed signs of weakness. Our teams worked 12-hour days just to keep up with all the requests, and our product was being pulled in a dozen different directions. Worst of all, our customers started holding us accountable for delays in service requests that we took on as a courtesy (these weren’t technical support requests, but rather actions that customers could do by themselves).

It was clear that our approach wasn’t scalable. We couldn’t continue without hiring dozens more team members, which would have set our margins way back. So we did what every startup that needs to scale does: we created processes. We began standardizing responses to customer requests and setting clear expectations around what we do and what we don’t.

Everything worked well at first. Having processes helped us save time and improve the way we respond to common requests. In turn, it allowed us to focus more energy and time on customer education, creating more self-sufficient users.

Yet a few months after we started creating processes, they overwhelmed us. Our team documented preferred solutions and responses to every situation imaginable, including edge cases. Our operations manual was over 30 pages long. You know something’s wrong when people have to spend 15 minutes looking for the standard response to a request that takes 5 minutes to resolve, or skip the standard process entirely because it’s too much of a pain to find. Yes, we went overboard with standardization.

With time and experience, we were able to find a balance. We standardized responses to common requests, but more importantly, provided a general philosophy and rule of thumb for the rest. This gave flexibility and control to our team members, who could respond accordingly, acknowledging the context of the situation.

For example, “Protect service margins” was one of the rules of thumb that helped our account managers decide whether to pursue a time-demanding project, while “Prioritize high-value clients” was another guideline they’d have to account for when dealing with customers on high-value contracts. Based on these, there was no standard response to most edge cases, but simply guidelines to help team members take an independent decision.

The result was that we achieved consistency in our general approach to customer requests without getting bogged down in the details. For common requests, everyone responded similarly. For edge cases, our team members had the flexibility to do what was right from their perspective, based on general guidelines. More importantly, we were able to start providing standard service level agreements (SLAs) to our customers, setting clear and transparent expectations.

I’ve learned that processes are good as long as they don’t slow our team down. In part five, we’ll explore how to manage common issues around process creation and process changes, including:

  • When to change a process;
  • Changing processes too often;
  • Handling edge cases.

Should I break the process?


As a company matures, it is bound to adopt more processes to guide operations, rather than rely on one-time decisions from individuals. This is how an organization gains operational efficiency as it grows. We won’t be reviewing process improvement techniques in this blog post (those were explored in part 4); instead, we’ll discuss when to break a process and when not to.

The need to break a process usually starts with a special request or edge case that an existing process doesn’t allow or didn’t consider.

For example, a high value customer asks for a refund at a major retailer without a receipt, past the 30 day refund deadline. If we follow the refund process, we may disappoint and lose a high value customer. So should we break the process?

One thing to recognize is that every edge case is an opportunity to assess whether the business environment is shifting and identify new market trends. When there is evidence that customer expectations are shifting, responding quickly and effectively can yield a big competitive advantage.

Consider news readers who shifted their source of information from newspapers to online articles. Publishers that acknowledged the market change were able to adapt and avoid irrelevance, while those that didn’t went under or had near-death experiences.

To evaluate how to best respond to an edge case request, let’s adopt an analytical approach to answer the following questions:

Who is affected by the event and how? Knowing who is impacted will allow us to seek and consider their opinion on how to respond and why. Stakeholders can be customers, partners, or internal teams.

Is this truly an edge case or the new trend? Statistically speaking, an edge case or outlier event only occurs in rare instances. The frequency at which an outlier occurs should also not increase substantially over time.

On the other hand, a trend should experience rapid change in the ratio of occurrences over time.

To analyze an event’s occurrence accurately, I recommend segmenting the data by population group, event category, and other relevant variables.
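The edge-case-vs-trend test above can be sketched in a few lines of Python. This is a minimal illustration, not a rigorous statistical test: it simply checks whether the ratio of special requests to all requests is growing over time. The request counts below are hypothetical.

```python
# Sketch: distinguish an edge case from an emerging trend by tracking
# how often a special request occurs relative to total request volume.
# All counts below are hypothetical, for illustration only.

def occurrence_ratios(special_counts, total_counts):
    """Ratio of special requests to all requests, per period."""
    return [s / t for s, t in zip(special_counts, total_counts)]

def looks_like_trend(ratios, growth_threshold=2.0):
    """Flag a trend if the latest ratio is well above the earliest."""
    return ratios[0] > 0 and ratios[-1] / ratios[0] >= growth_threshold

# Monthly no-receipt refund requests vs. all refund requests (hypothetical)
special = [2, 3, 9, 15]
total = [1000, 1050, 1100, 1200]

ratios = occurrence_ratios(special, total)
print(looks_like_trend(ratios))  # → True: a rising ratio suggests a trend
```

A flat or noisy ratio would point to a genuine edge case instead; in practice, the threshold and the segmentation (by customer group, request category, and so on) should come from your own data.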


➡ If the event is an edge case: The question becomes whether the edge case is worth breaking a process for.

  • If the edge case doesn’t merit a break in process: Say no, explain why, and move on. This option is best used when the benefits of upholding the process outweigh the benefits of allowing an exception. Questions that can help us assess the situation include:
    • What’s the implication of breaking this process?
    • What’s the precedent that we set?
    • Does it set the precedent that we want? e.g. Is offering a refund to a customer aligned with our company strategy? It could be, if the strategy is to provide outstanding service, but it could also not be, if we are a low-cost provider.
  • If the edge case is worth breaking the process for: The benefits of breaking the process need to outweigh the benefits of upholding it. Note that repeatedly breaking a process will slowly erode it, so if we find ourselves breaking a process repeatedly for the same reasons, we need to re-assess whether the event is in fact a new trend.

➡ If the event is a market trend: The organization needs to decide how and when to respond at the soonest opportunity. Failing to acknowledge a market trend puts the company at risk of becoming irrelevant.

  • Does this new trend require re-designing the entire process? This is usually the case if the new trend is rapidly changing market expectations, or if the existing process is quickly becoming irrelevant. There is no time to lose.
  • Or can the current process be amended and maintained for now? This approach can be taken if the existing process is still relevant, and there is no clear evidence that the new trend is rapidly changing market expectations. This allows us to wait and observe whether the trend grows, then prepare for an eventual process change when the time is right.

Let’s remember that every edge case is an opportunity to observe a potential shift in market needs. Sharing this insight with team members and having a data tracking system in place will ensure that no opportunity is missed.

Are you leading a startup team? Subscribe below to join our community!

Recommended exercise

Let’s pick a special request that was recently received and evaluate whether it is a new trend or an edge case.

Subscribe on the right for new insights every week!

start⬆Mngr handbook pt. 4: Intro to innovation & quality improvement


First, a personal story.

Like most ambitious companies, our team is constantly looking to improve our performance. We’re never OK with the status quo. If sales conversion were at 90%, we’d want to take it to 100%. And as a start-up, there are so many things we can improve…

This meant that in the early days, before we had adult supervision and a clear company strategy, we changed processes a lot, switched focus weekly, and experimented with many different product features. On the customer success side, we’d set up retention email campaigns to keep churn down one month, then leave that behind and work on up-selling customers the next month to boost revenue. Two quite different goals. On the product side, we’d get excited about an idea that a customer wanted, spend a couple of sprints developing it, but then never touch or improve it again. This speed of change is not uncommon at early-stage start-ups.

…most successful innovation projects started with a problem definition stage before jumping to problem solving

On the upside, we were never bored. We always had something new and exciting to do. On the downside, we had little consistency in our approach to improving the company’s performance, going in all directions and doing everything at once. Team members were left wondering what our long-term goals were and what set our product apart from the competition. We tried to be everything for everyone, without the resources necessary to do so.

This led to process changes and product features that didn’t improve our bottom line. We’d waste time and resources on non-coordinated initiatives that failed to convert more customers, did not attract more prospects, and didn’t retain users. Worst of all, we’d execute and move on from all these projects without documenting our learnings, so we’d repeat some of our mistakes.

To help correct this lack of direction, I spent much time exploring how other companies managed innovation projects: from hospitals to grocery stores, from tech start-ups to auto-makers. To my surprise, I found a consistent theme: almost all innovation projects were carried out in a systematic way, following scientific processes that relied on data and experimentation. More importantly, most successful innovation projects started with a problem definition stage before jumping to problem solving (a very hard concept for engineers to grasp).

In this blog post, I’m going to explore how to properly challenge the status quo and innovate systematically. We will discuss a system to help us lead innovation and quality improvement (QI) projects in order to:

  • Increase effectiveness of a process or product;
  • Solve the root cause of a problem rather than the symptoms; and
  • Boost creativity in problem solving.

For a quick introduction to innovation and quality improvement, I highly recommend the following TED-style video by Professor Russell Ackoff.

The purpose of quality improvement projects is to change the way a problem is currently solved for the better. Better can be defined as doing something right, increasing effectiveness / efficiency, or lowering cost / margin for error. Since the market continuously changes, and our competitors never stop evolving, innovation projects are necessary for organizations to stay relevant.

Now let’s explore the basic steps of carrying out an innovation or QI project in an agile environment:

quality improvement process

This process ensures that we solve the right problem, test different solutions, and validate results before scaling. It avoids jumping to solutions that solve the wrong problem and wasting resources (realize that finding solutions is the fourth step, not the first). Let’s explore this in detail.

I. What’s the problem?

As a first step, let’s find supporting evidence for the problem, understand why it is occurring, and assess whether it’s relevant to the company or team mission.

Does the problem really exist? Is there evidence?


Innovation projects tend to arise from frustrations and desires to improve existing systems or processes. Before jumping to solving what we believe is a problem, it’s critical to gather evidence in the form of quantitative or qualitative observations to prove that a problem actually exists. Through this step, I’ve often found that the actual problem is quite different from our initial perception.

Evidence can take the form of data reports, analyses, interviews, or surveys, to name a few examples. With recent innovations in analytics, there’s no excuse not to have at least one piece of statistical evidence.

For example, one may be frustrated at the slow speed at which team members answer and complete customer service calls. Evidence to support the fact that team members are working slower than possible can come from:

  • Listening in and interviewing call agents to understand whether there are opportunities for further improvement in the call process.
  • Evaluating the average time it takes a group of fully trained agents to complete a call, and the average number of daily calls one agent can take.
  • Assessing whether the number of calls an optimal team can take in a day is higher or lower than the current volume.
  • Assessing the reasons customers call to receive help.
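The capacity check in the list above boils down to simple arithmetic, which can be sketched as follows. All figures here (team size, call times, volume) are hypothetical placeholders, not benchmarks.

```python
# Sketch: estimate whether current call volume exceeds team capacity.
# All figures are hypothetical, for illustration only.

def daily_capacity(num_agents, avg_call_minutes, productive_minutes_per_day=420):
    """Calls the whole team can handle per day at the observed average call time."""
    return num_agents * (productive_minutes_per_day // avg_call_minutes)

agents = 8
avg_call = 12            # minutes per call, from call logs (hypothetical)
incoming_per_day = 320   # current daily call volume (hypothetical)

capacity = daily_capacity(agents, avg_call)
print(capacity, incoming_per_day > capacity)  # → 280 True: volume exceeds capacity
```

If the comparison shows volume exceeding capacity, the evidence points away from "agents are slow" and toward "there are too many calls for the team size."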

With evidence, a more complete picture of the problem(s) will emerge. Again, we’re very likely to discover that initial assumptions on the problem are wrong, or that they are only a small part of the actual problem. That’s great. That’s why we gather evidence.

What’s the root cause?

root of the problem

Once all the necessary evidence has been gathered, a root cause analysis can be performed using the 5 Whys method. This ensures that the QI project addresses the source of the problem rather than its symptoms and side effects.

In our example, we may find through root cause analysis that the actual problem is that agents are receiving too many calls due to business growth: the number of agents has remained unchanged, while the number of customers and calls has grown. The data could also show that agents’ response time and call duration have not varied over time. So the problem may not be that agents are too slow.

Ensuring that our problem is accurately defined is crucial; otherwise, our solution will be useless. Yet more often than not, we are too eager and impatient, and we jump right to identifying solutions. We enjoy solving things and coming up with ideas. Unfortunately, a good idea for a misdefined problem is not as effective as a bad idea for a well-defined problem. The result is wasted time and resources that may not bring the desired innovation and quality improvement.

This widespread behavior likely stems from spending 15+ years as students, where problems are defined for us and all that is expected is for us to solve them. We are rewarded for solving problems, not for defining them. Unintentionally, schools have failed to teach us how to pinpoint problems.

A good idea for a misdefined problem is not as effective as a bad idea for a well-defined problem

I personally have greatly benefitted from reading “Are Your Lights On?” in an attempt to get better at problem definition. I highly recommend it to everyone that solves problems, especially engineers.

What’s not part of the problem?

Every project should have a defined scope, specifying the part of the problem that we’re aiming to solve (Do we want to solve climate change, or do we want to reduce the amount of CO2 released by our fleet of vehicles?). It is critical to set a clear scope because the 5 Whys process has a tendency to surface potentially irrelevant problems that we may not want to tackle just yet. We need to resist the urge to take on every problem we discover and focus on the problem that is most important and relevant, and where we can effectively make an impact. If the 5 Whys reveals multiple large problems, the QI project can be broken down into phases, focusing on the most pressing problems first. There are more details on prioritization later in this post.

Is it relevant to the team goal and company strategy?


Next, compare the problem to the team’s goal and the company’s strategy: If they are aligned, the project will push the team and company toward its ultimate goal, but if they don’t align, the project will waste precious resources.

In our example, assuming that everyone agrees that “having too many inbound customer service calls that potentially don’t need agent help to resolve” is the problem, we will need to evaluate whether the behavior is desired or not in relation to the company strategy and team goal.

If the company strategy is to offer best-in-class human support, then the problem identified may be irrelevant, and a business case should instead be made to hire additional agents to address the lack of bandwidth. Yet if the company strategy is to empower customers to self-serve wherever possible and maximize the number of customers per agent, then this may be a project worth pursuing.

OUTCOME: At this stage, we’ve gathered evidence that validates the problem, a root cause analysis has been performed to identify the problem’s source, and we know whether the problem is relevant to the organization. There is thus enough preliminary information to decide whether this is a problem that the organization wants to pursue.

II. Is this a priority?


Most organizations are bound to have multiple innovation and QI projects running at once, yet limited resources. We thus need to prioritize. Here are some questions that can help us do that:

  • What’s the potential impact / value of this project?
  • What if we didn’t do it?
  • What if we didn’t do it now, but later?
  • What’s the cost (human, material, time, …)?
  • What’s the ROI? [Value / Cost]

Based on our answers to the questions above, we’ll be able to compare and rank projects by priority.
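The ROI-based ranking described above can be sketched in a few lines. The project names and dollar figures below are entirely hypothetical; in practice, "value" would come from the impact questions in the list above.

```python
# Sketch: rank candidate projects by a rough ROI = value / cost.
# Project names and figures are hypothetical, for illustration only.

projects = [
    {"name": "self-service FAQ", "value": 50_000, "cost": 10_000},
    {"name": "call-routing revamp", "value": 80_000, "cost": 40_000},
    {"name": "agent hiring", "value": 60_000, "cost": 30_000},
]

# Compute ROI for each project, then sort highest first
for p in projects:
    p["roi"] = p["value"] / p["cost"]

ranked = sorted(projects, key=lambda p: p["roi"], reverse=True)
for p in ranked:
    print(f'{p["name"]}: ROI {p["roi"]:.1f}')
```

A single ROI number is, of course, a simplification: the "what if we didn’t do it?" and timing questions still require judgment, so a ranking like this should inform the discussion, not replace it.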

It’s important to realize that ultimately, a project’s priority is reflected in the resources we allocate to it, so we need to ensure that each project has the appropriate allocation of time, people, and other resources. Generally speaking, important projects deserve more resources.

What about low-priority projects? Asking again “what’s the impact if we don’t pursue it?” will help us decide whether to:

  • Cancel the project and never revisit it; or
  • Re-assess its priority later when resources are freed.

When it comes to prioritization, choosing what not to do is more important than choosing what to do. This ensures that top priorities get the right amount of attention and resources, and that plans are realistic. This applies especially to ambitious teams that tend to take on too much. Let’s thus recognize that doing everything is not successful prioritization. It runs the risk of juggling too many balls, and consequently, dropping all balls. Our startup companies have limited resources, so let’s choose how we spend our time wisely.

OUTCOME: Our project is now prioritized against other initiatives, which gives a clear idea of when to pursue it.

III. What do I expect?


Now’s the time to envision our desired end result and answer: What do we hope to achieve?

Of the three categories of goals (individual goals, functional team goals, and project goals), projects need the clearest goals and expectations at the start. For example, before building a house, a clear blueprint needs to be drafted to communicate to all stakeholders what is expected in the end. The same applies to innovation and QI projects. Setting expectations before work begins ensures that all stakeholders can agree on or debate the desired outcome before it is implemented. Being specific in outlining expectations is key, so I recommend adopting a SMART approach.

Note that this is not yet the time to envision solutions or think of how to achieve the desired expectations. Doing so would be like starting to drive toward California before figuring out that we should actually be going to Florida. 

Based on our example case of a company wanting to decrease the number of inbound calls that can be resolved with self-service, the expectations for the project can be to:

  • Decrease the percentage of self-serviceable requests addressed on calls from X% to Y% in three months.
  • Lower customer call wait time to less than 5 minutes for 90% of all calls by X date.
  • Maintain Net Promoter Score at X and above beginning on Month/Day/Year. 

When it comes to measuring our goals, there are two types of metrics that we can leverage: progress metrics and outcome metrics.

  • An outcome metric assesses whether the desired outcome was achieved. e.g. In our example, a call’s wait time is an outcome metric, necessary to check whether 90% of all calls are answered within 5 minutes.
  • A progress metric tracks the impact of specific solutions, which in turn may contribute toward the desired outcome. e.g. In order to achieve the desired expectation around response time, one change or solution may be to have all calls completed or resolved within 10 minutes to ensure a high level of bandwidth availability. To that effect, the percentage of calls that are resolved within 10 minutes is a progress metric, which may in turn impact the desired outcome of answering calls within 5 minutes.
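Both kinds of metric from the example reduce to the same computation: the share of observations under a limit. The sketch below uses hypothetical samples of wait times and resolution times, in minutes.

```python
# Sketch: compute the outcome and progress metrics from the example.
# Wait/resolution times are hypothetical samples, in minutes.

def share_within(values, limit_minutes):
    """Fraction of observations at or under the limit."""
    return sum(v <= limit_minutes for v in values) / len(values)

wait_times = [1, 2, 3, 4, 4, 5, 6, 8, 2, 3]        # outcome metric input
resolution_times = [7, 9, 12, 8, 10, 11, 6, 9]     # progress metric input

outcome_met = share_within(wait_times, 5) >= 0.90   # 90% answered within 5 min?
progress = share_within(resolution_times, 10)        # share resolved within 10 min
print(outcome_met, progress)
```

Tracking both numbers separately matters: the progress metric can improve while the outcome metric stays flat, which tells you the chosen solution, not the goal, needs rethinking.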

OUTCOME: A clear set of expectations has now been identified, including measures that will help gauge success objectively.

IV. What are potential solutions?

multiple solutions

To explore potential solutions, we’ll first need to identify relevant stakeholders that can help imagine solutions, and then use divergent thinking to dream of all possible solutions.

Who should participate?

Having the right people on the project will maximize the potential of finding successful solutions.

In my experience, it can be a good idea to have people who are frustrated by the current process, and who may not be top achievers, participate in innovation projects. Top achievers work so well within the current process that they may not want to change it, and it may be hard for them to think outside the box. In other words, we need participants who feel and understand the pain of the current problem.

I also recommend the use of RACI to identify key players that need to be involved:

  • The product owner(s) / sponsor(s) / stakeholder(s): Who needs this project?
  • The scrum master(s) / project manager(s): Who’s managing progress of this project?
  • The team member(s): Who’s helping execute on this?
  • The subject matter expert(s): Who has special knowledge or skills that we need to consult? Who are the ones that live this process? Customers? Agents?

Without a clear solution or detailed project plan just yet, it is normal to involve additional players later on. Right now, let’s focus on involving the core people that are impacted by this problem.

Exploring solutions


Now comes the fun part. It’s time to identify solutions.

I will not discuss in much detail the processes by which teams and individuals solve problems – this demands deep knowledge of a specific industry, and special skills that I probably do not have. I do, however, recommend familiarizing ourselves with the concepts of Idealized Design and divergent thinking as frameworks to help think of solutions and answer the following questions:

  1. What’s the ideal solution? (If we don’t know what we want in an ideal world, how will we know what we want under constraints?)
  2. What’s preventing us from getting there today? (What constraints exist?)
  3. What’s the first step that we can take toward that solution? (Work backwards from the ideal solution to what is possible today under constraints. There may be multiple steps to eventually achieve the ideal solution, but what’s the first step that we can take? What’s the second?)

Another valuable resource that has helped me with problem solving is The Art of Problem Solving.

Once a series of potential solutions has been scoped out, I recommend that the project manager work with relevant players to:

  1. Describe the solutions in detail, including:
    • What it is;
    • What impact is expected (both positive and potentially negative);
    • What resources are necessary;
    • Potential risks.
  2. Rank solutions by a set of criteria relevant to the problem and the organization.
  3. Seek agreement with stakeholders on which solutions to test.
  4. Design experiments that can prove the effectiveness of the chosen solutions. This can take the form of a scaled-down version of the full solution, a fake back-end where the solution is launched without the full infrastructure or resources necessary to support it, or even vaporware if one is testing for market need.
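Step 2 above, ranking solutions by criteria, is often done with a simple weighted scorecard. The criteria, weights, and 1–5 ratings below are hypothetical placeholders; your own should come from the problem definition and stakeholder discussion.

```python
# Sketch: rank candidate solutions with a simple weighted scorecard.
# Criteria, weights, and 1-5 ratings are hypothetical, for illustration only.

weights = {"expected_impact": 0.5, "cost": 0.3, "risk": 0.2}

# Ratings: higher is better; cost and risk are scored so that
# 5 = cheapest / safest, 1 = most expensive / riskiest.
solutions = {
    "in-app help center": {"expected_impact": 4, "cost": 3, "risk": 4},
    "chatbot triage":     {"expected_impact": 5, "cost": 2, "risk": 2},
    "longer IVR menu":    {"expected_impact": 2, "cost": 5, "risk": 5},
}

def score(ratings):
    """Weighted sum of a solution's criterion ratings."""
    return sum(weights[c] * r for c, r in ratings.items())

ranked = sorted(solutions, key=lambda name: score(solutions[name]), reverse=True)
print(ranked)
```

Making the weights explicit is the real benefit here: it forces stakeholders to agree on what matters before arguing about individual solutions.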

The reason that I recommend testing solutions before changing processes is to minimize risk. Should a solution not produce the expected results, only limited time and resources would have been sacrificed.

Once the list of solutions to test is chosen, the project manager will draft project plans, including a timeline with clear deadlines and deliverables. This ensures that all necessary tasks are outlined for accurate implementation.

OUTCOME: We now have a list of solutions to test, including a detailed project plan.

V. Does the solution work?

plan vs reality

Managing progress

To ensure that the project runs as planned, the project manager will be responsible for managing and reporting on progress. Questions that the project manager needs to ask include:

  1. Are we on path to the next milestone(s)?
  2. What challenges are we facing?
  3. Do adjustments have to be made to the project plan based on latest progress?
  4. Are we discovering new insights as work is done on this? Should the end solution be adapted now that we know more?

Many project management methods exist out there to help answer the above questions. In the context of start-ups that need quick validation and delivery of solutions, I recommend adopting an agile approach to project management.

Reviewing test results and identifying next steps

After experiments around solutions are done, it’ll be time to assess results. Did we meet our SMART goals?

Specifically, all stakeholders and players need to:

  • Compare final results to initial expectations;
  • Understand where things went wrong, why, and how we can prevent that in the future;
  • Understand where things went well, why, and how we can consistently achieve this in the future.

It is especially important to review results of both progress metrics, that assess how specific initiatives and solutions performed, and outcome metrics, that assess whether the desired outcome was achieved. The ideal scenario is one where both progress and outcome metrics changed as expected, but there can certainly be scenarios where both were not impacted, or where the progress metric changed as expected yet failed to impact the outcome.

Insights gathered at the review meeting will help us decide whether to:

  • Cancel a solution due to its ineffectiveness; or
  • Iterate a solution and further experiment to see if additional improvements can be achieved before scaling its implementation; or
  • Scale a solution to the entire organization / process.

OUTCOME: By the end of this QI process, there is evidence to support the effectiveness of different solutions, and decisions are ready to be made as to whether to scale their implementation.

VI. We solved it. Are we done?

We’re never done. It’s human nature to never be satisfied. Once we’re on top of a peak, we will naturally shift focus onto the next peak to conquer. If we became market leaders in one segment, we’d shift focus to widen our lead or conquer other segments. We have the same mindset in our personal lives: The moment that I find parking in a downtown area where parking is scarce, my focus naturally shifts to whether I could have found parking closer to wherever I was going.

The fact is that most problems we’re solving today are the same ones our ancestors were solving hundreds of years ago: love, communication, food, shelter… Each generation solves them differently, using the technologies available at the time. We used to communicate with smoke signals, and now we have email and Twitter. We used to farm our own food, and now we have food delivered via apps. We used to listen to neighborhood announcements for news, and now we have email newsletters (so many of them). These fundamental problems will never disappear, and we will continuously innovate to solve them differently.

Whether our company will be leading these innovations and disruptions is another story. In the “Innovator’s Dilemma,” Clayton Christensen makes the argument that some of the best companies risk becoming irrelevant because they are too well managed. Sounds ironic? I highly recommend reading his book.

How do I project manage?

Considering that project management skills are essential to the successful execution of innovation projects, I recommend reading up on the subject if unfamiliar.

Recommended exercise

The next time that a problem comes up, let’s take a moment to investigate the problem, and accurately define it, before starting to solve it.

Subscribe on the right for new insights every week!