
Effective Sales Planning With Christopher Goff and Greg Peña


In our Experts in ICM Q&A Series, we’re sitting down with leaders from across the incentive compensation and sales performance management space to explore learnings, trends, and opportunities that exist for today’s ICM and SPM professionals.

In this installment, we focus on one of the most critical challenges facing sales leaders today: building sales plans that actually work. From grounding decisions in the right data to learning from past plan performance and anticipating unintended consequences, effective sales planning requires both rigor and adaptability.

To dig into these challenges, CaptivateIQ CMO Katie Foote moderates a discussion with two seasoned sales planning experts:

Christopher Goff, Senior Director, Sales Compensation at Labcorp
Greg Peña, Lead Senior Financial Analyst at BigCommerce

Together, they share practical guidance and real-world perspectives on what it takes to design smarter, more resilient sales plans.

Let’s start with goal setting. How do you think about making goals that are both challenging and achievable? 

Chris: There are a couple of steps to achieving the right balance. First, make sure there’s alignment on strategy and an understanding of the budget. 

Then, make goals as personal as possible, meaning they are within an individual’s control. Sometimes there are limitations that make this difficult. But more often, when establishing goals, there’s a historical trajectory we can reference to anticipate future performance levels.

We want something that’s realistic and achievable for the teams. Ultimately, it’s important to make sure both managers and employees believe these goals are attainable, as well. 

What are some of the common pitfalls that you would encourage folks to avoid when initiating the design process? 

Greg: It’s easy to try to fix every single problem that you see within the company. That’s not the most realistic approach. You have to understand what can actually be accomplished in a given year, and set goals accordingly. You don’t want to be too ambitious and try to achieve everything when that’s just not possible. 

It’s critical to understand historical performance. How are we anticipating that to change next year? Back-test potential plans. For example, if we were to have the exact same performance this year as last year, but applied new factors or calculations, what would performance look like? 
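As a rough illustration of the back-testing Greg describes, here’s a minimal sketch that re-runs last year’s attainment through a hypothetical plan change. The payout curve, accelerator rates, and rep attainment figures are all invented for illustration, not taken from any actual plan:

```python
# Hedged sketch: back-testing a hypothetical commission plan change against
# last year's actual attainment. All rates and figures are illustrative.

def payout(attainment: float, base_rate: float, accelerator: float = 1.5) -> float:
    """Commission as a multiple of target incentive: base_rate up to
    100% attainment, an accelerated rate beyond it."""
    if attainment <= 1.0:
        return attainment * base_rate
    return base_rate + (attainment - 1.0) * base_rate * accelerator

# Last year's actual attainment per rep (illustrative)
last_year = [0.7, 0.95, 1.1, 1.4]

# Same performance, two plan designs: current accelerator vs. a richer one
old_plan = sum(payout(a, base_rate=1.0) for a in last_year)
new_plan = sum(payout(a, base_rate=1.0, accelerator=2.0) for a in last_year)

print(f"old plan total payout: {old_plan:.2f}x of target")
print(f"new plan total payout: {new_plan:.2f}x of target")
```

Holding performance constant while varying the plan isolates the cost of the design change itself, which is the point of the exercise.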

The other piece is listening to reps. You get feedback from them throughout the year. Make sure that feedback is incorporated during the planning process, since this is the moment to meaningfully effect change.

In terms of the timeline for financial planning, what if your fiscal year is not a calendar year? How many months ahead should you start before your fiscal year is set to begin? 

Chris: It’s largely contingent on the magnitude of change. If we know we’re changing a lot, you’ll need to start earlier to allow for more socialization and broader buy-in from key stakeholders. It’ll also depend on the organization. If you have a complicated coverage model with multiple roles and you’re implementing something companywide, then you’ll need buy-in from all of those groups to make sure you have a successful outcome. 

If we’re not changing much, ideally you know your organization well enough to anticipate what else might slow you down: things like target allocation, budget setting, and so on. Some of those factors may be out of your control, but understanding them upfront helps you plan what’s possible and when.

How can compensation planning committees ensure they’re working with the right data? 

Greg: It all comes down to cross-functional collaboration. Try to engage the right stakeholders as early as possible. Make sure you’re working with the analytics teams, data solutions teams, and anyone who touches or owns the data. It’s all tied to company performance. 

Establishing a common source of truth is critical. It’s a prevalent challenge, but partnering with the right departments early helps you get ahead of potential issues and anticipate the questions that will inevitably come up.

How do you assess historical plan performance to identify areas of optimization for the future? 

Greg: Start by looking at payout and performance distributions. Where are the outliers compared to last year’s plan? We review performance frequently, on a monthly and quarterly basis, so it’s less about a one-time historical review and more about ongoing evaluation. As we approach planning season, we take a more holistic view.

We ask: Where are the outliers, and what isn’t aligning with how the company should be performing? For example, if we’re seeing a spike in sales activity that isn’t converting into revenue, are we truly incentivizing the right behaviors, or are we overpaying for the wrong outcomes?

Ultimately, it’s about understanding where the majority of commission expense is going, and whether that spend actually makes sense.
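The outlier review Greg describes can be sketched with a simple distribution check. The payout figures and the two-standard-deviation threshold below are illustrative assumptions; real reviews would segment by role and plan:

```python
# Hedged sketch: flagging payout outliers in a commission run.
# Figures and threshold are illustrative, not from any actual plan.
from statistics import mean, stdev

payouts = [8_000, 9_500, 10_200, 11_000, 9_800, 35_000]  # one suspicious rep

mu, sigma = mean(payouts), stdev(payouts)
outliers = [p for p in payouts if abs(p - mu) > 2 * sigma]

print(f"mean payout: {mu:,.0f}, flagged: {outliers}")
```

A flagged payout isn’t automatically a problem; it’s the prompt to ask whether the plan is paying for the right behavior, as Greg notes.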

Do you recommend company top-line metrics over department metrics? 

Greg: I don’t think we should look at them as two separate or competing measures. At the end of the day, you need to incentivize in a way that accounts for both company-level and department-level outcomes. If there’s a disconnect between departmental goals and company priorities, that signals a broader alignment issue. It’s not something you can typically solve through compensation design alone.

Katie: It can also depend on the company’s stage of evolution. Being honest about where the organization is today will often guide which metrics make the most sense to emphasize.

How do you address unintended outcomes or discrepancies in plan performance? 

Chris: The answer often differs by role and by measure. Ideally, when we’re establishing a target, it’s closely tied to what the individual can actually influence. From there, you need to understand both performance and process-related factors.

When evaluating plan performance, it’s important to step back and first look at individual performance and role execution in the context of the broader strategy. That also includes examining technical or operational factors that may translate back to company performance.

Take a BDR role, for example, and the lead process. Along the way, changes in technology, newly rolled-out tools, or shifts in the sales process can all influence outcomes. Those changes may later be perceived as poor plan design or weak performance, when in reality they stem from process evolution.

Unintended consequences or poor outcomes tend to surface most visibly in pay. We see the end result and assume someone is wildly overpaid or significantly above target. Ideally, though, there’s a healthy performance distribution, where overperformance and underperformance are both explainable and justified. The key is identifying what range the organization is comfortable with. That starts with a clear definition of success tied to the role and the plan, along with regular performance reviews.

Ultimately, it’s about having a process that allows you to use outcomes as a signal of alignment. Outcomes themselves are a lagging indicator. You have to dial in acceptable performance ranges so your data isn’t skewed toward extreme over- or underperformance, and those ranges are unique to every business.

It’s an interesting challenge. Budgets often assume a fixed distribution where everyone performs at 100%, and I can’t say I’ve ever seen that reflected in reality. Some level of distribution around that target is both expected and appropriate.

If a company is moving from an ACV to a consumption-based pricing model, how should they think about applying historic plan performance when it’s like comparing apples to oranges? 

Greg: A good parallel is an acquisition scenario, where sellers from the acquired company are coming in on a completely different compensation plan. We’ve been through situations like that, where the metrics don’t line up at all. In those cases, we try to identify common elements we can leverage across both models.

When it comes to historical back-testing, I prefer to look at the data holistically. What fields can we use to translate or normalize old data into this new model? The goal is to make the comparison as close as possible, while also acknowledging it won’t be a perfect match. It’s important to have a shared understanding that the model is directionally useful, not exact.
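A minimal sketch of the normalization Greg describes might look like the following: translating ACV-era bookings into a rough consumption-equivalent baseline. The even-consumption assumption and deal values are invented for illustration, and, as Greg says, the result is directional rather than exact:

```python
# Hedged sketch: normalizing historical ACV deals into an assumed
# consumption-equivalent metric for directional back-testing.

MONTHS_PER_YEAR = 12  # assumption: contract value consumed evenly over the year

historical_acv_deals = [120_000, 60_000, 240_000]  # illustrative deal values

# Estimated monthly consumption revenue per historical deal
monthly_equivalents = [acv / MONTHS_PER_YEAR for acv in historical_acv_deals]

print(monthly_equivalents)
```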

How do you test different plans and scenarios to make sure you’re sussing out any potential unintended consequences? 

Greg: One of our favorite things to do at BigCommerce is back-testing. Before we started using CaptivateIQ, we handled this manually. Now with CaptivateIQ, we’ve been able to streamline those scenarios significantly. When we get questions from our CFO, we can go into a sandbox environment, test the scenario, and provide a clear answer quickly.

Chris: Sales compensation is rarely a single-person or single-department exercise. When I think about sales compensation, I’m thinking about payroll, legal, HR, finance, operations, and sales. All of those groups are involved in some capacity, and they all have perspectives on how things should work. Some of those perspectives are helpful, and some less so, but they’re all stakeholders.

Having a process that allows you to collect and evaluate input from these groups over time is critical not just for incentive design, but for broader operational improvements. At the end of the day, you’re asking: How can we make things easier and more effective for our customers, the sellers?

[BLOCKQUOTE
| Quote: It may sound unglamorous, but strong business processes are often what make sellers more successful.
| Author: Christopher Goff
| Title: Senior Director, Sales Compensation, Labcorp
]

Can you share an example of a time when scenario modeling uncovered unexpected insights or even opportunities to improve the plan before you pressed “go live”? 

Greg: A common example is modeling scenarios for new hires. How do we make sure we have the right compensation plan in place to support an effective ramp? We want new employees to feel confident, build the right habits early, and understand how their performance translates to earnings.

Scenario modeling allows us to ask: If a ramp period is longer or shorter, how does that impact commission expense? We can also test how changes to quotas or role definitions affect overall payout distribution. The goal is to ensure a healthy, sustainable distribution before the plan ever goes live.
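The ramp-length question Greg poses can be sketched as a simple expense model. The guaranteed-draw structure, target commission, and relief percentage below are all illustrative assumptions, and the model deliberately ignores the production side of a longer ramp:

```python
# Hedged sketch: comparing first-year commission expense under different
# ramp lengths. All figures and the draw structure are illustrative.

def ramp_expense(ramp_months: int, monthly_target: float = 5_000.0,
                 draw_pct: float = 0.5, horizon: int = 12) -> float:
    """Expected first-year commission expense for one new hire:
    a guaranteed draw (draw_pct of target) during ramp, then
    assumed on-target performance for the remaining months."""
    ramp_cost = ramp_months * monthly_target * draw_pct
    steady_cost = (horizon - ramp_months) * monthly_target
    return ramp_cost + steady_cost

for ramp in (3, 4, 6):
    print(f"{ramp}-month ramp: expected expense {ramp_expense(ramp):,.0f}")
```

Under these assumptions a longer ramp lowers commission expense, but only because the model omits the revenue a faster ramp would generate; a real scenario run would weigh both sides.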

How do you ensure your plans are working as intended? If they’re not, what tools and strategies do you have to course correct in real time? 

Chris: It starts with a clearly defined measure of success. That should be outlined before the plan goes live. Especially with new plans, we monitor them closely against established metrics like productivity, cost, margin impact, performance distribution, and levels of seller engagement. How you course-correct ultimately depends on what the data is telling you.

If it’s an annual plan, for example, you might formally review performance at the end of Q1, assess the data, and determine whether adjustments are needed. Depending on your organization’s culture, transparency is critical. You can say, “We don’t know exactly how this will play out, and we expect there may be changes.” I’ve been in organizations that were very intentional about setting that expectation upfront. They acknowledged, “This is our best estimate based on what we know today, and there are still unknowns.”

I’m reminded of a quote from Salvador Dalí: “Have no fear of perfection—you’ll never reach it.” That’s the reality of sales planning. You need intentional checkpoints built into the plan to account for uncertainty. And if there are major disruptions or structural shifts, you need mechanisms within the plan to address those moments without destabilizing the organization.

How do you strike the right cadence for reviewing plan performance without overreacting too early or waiting too long? Are there certain KPIs that require more time between evaluations?

Greg: It depends on both the metric and the department. Some teams may be measured against non-recurring outcomes, like a Management By Objective (MBO), which naturally require a different review rhythm. At BigCommerce, we review performance monthly with all key stakeholders. We look at how people are trending across quarterly, monthly, and annual plans to understand pacing toward goals.

A lot of this comes down to the length of the sales cycle. If a role has a 90-day sales cycle, we’re trying to anticipate what’s likely to close in the next 30 or 60 days. The goal is to understand how the compensation plan is actually expected to work in practice. We tend to err on the side of reviewing earlier and more often, because experience has shown that if you don’t, you risk missing issues that could have been caught sooner.

Katie: The unique context of each business really matters. There isn’t a standard playbook that says, review these KPIs on this exact cadence. It depends on your organization, your sales cycle, and other inputs that need to be evaluated thoughtfully when designing a review process.

[BLOCKQUOTE
| Quote: You’re often writing the playbook as you go.
| Author: Greg Peña
| Title: Lead Senior Financial Analyst, BigCommerce
]


If you’re interested in participating in one of the Multiplier Q&A features, reach out to us at multiplier@captivateiq.com.
