Colin (00:01.634)

Hi and welcome back to The Growth System, the podcast that looks at B2B growth through a systems thinking lens. I'm Colin Shakespeare.

Chris (00:10.728)

And I'm Chris Bayless.

Colin (00:12.69)

And today, we're going to be revisiting a topic that we've covered to some degree before, and that's metrics and measurables. I guess this time from a slightly different angle, given that we're working our way through, I guess, the significance of the different dimensions of our growth team operating system. And this is really the last of

the four that we're doing at the start, which are all about the kind of deep structuring principles, the kind of orientation part of the operating system. We've obviously covered purpose and values, and then that sort of nexus between ideals and reality that is strategy. And I think today, moving on to measurables, it is a great natural bridge into more of the sort of

operations and execution side of things, right, Chris?

Chris (01:14.568)

Yeah, absolutely. I think tying metrics to strategy is really a natural kind of part of business practice. And what we're going to talk about today, I think, is in many ways how to do that successfully. But I guess sort of building that bridge between strategy and metrics and sort of measurables more broadly. I guess a way to kind of paint the picture is, you know, if the

the sort of strategy is, you know, like Apple's 'Think Different', you know, that's always been their kind of strategic direction. It's, you know, let's do something different. Well, it's a fairly nebulous concept. It's two words, right? But, you know, how do we link that to things that we can measure to start pushing the organization forward? Or what about something like 'Quality is Job 1'?

You know, if you were saying, okay, well, what are the metrics that underpin the strategy of 'Quality is Job 1'? Well, actually, something like Six Sigma might be your framework there, and all the performance standards and metrics that live within that as a framework.

So really what we're talking about is, as you say, we're at the end of that orientation part of the growth team operating system. We've just talked about strategy, and now we're going to talk really about how we bring strategy to life through measurement. And I think it's also worth mentioning that there is a really, really strong link, not just to what we covered in the last episode on strategy, but also to what we covered in the first episode of this series on purpose.

Because actually, within our sort of growth system design framework that a lot of our projects run to, metrics could be considered a part of purpose. Anyone who listened to that episode might remember we talked about purpose being fractal, purpose being something that split and meant different things at different levels of the organization: at a sort of full-org, enterprise level,

Chris (03:21.489)

you know, and then how that broke down into the different departments and different teams, and even the individual contributors. Well, you could consider in many ways that metrics exist as a form of purpose, almost as a proxy for purpose, because they are the way that we orientate ourselves. And that, frankly, is what this whole first section is around. It's using

measurables as a driver of the system. So that, I think, is what we'll get into and start talking about: what we do to set great metrics and to use them to drive system performance.

Colin (03:56.248)

I guess there's dangers and opportunities there, once we realise that the further down you go, the metrics kind of become the purpose, right? I mean, we have to wield them carefully.

Chris (04:05.775)

Exactly.

Chris (04:10.387)

Yeah, absolutely. But I think that in many ways is kind of one of the key takeaways really from this episode: the traditional view of metrics is that they were kind of neutral gauges of performance, if you like. They were there, as the name implies, to measure what had happened, to measure whether success had happened.

As we know from the purpose episode and from systems thinking concepts in general, measurables and metrics are way more than just gauges of performance. They are actually one of the key tools we use to drive and guide and shape the performance of the system itself. So they have a sort of catalytic role in how we actually build our growth system. And

what they do ultimately, much like purpose, and that's where we're talking about there being a real overlap between the two things, is become a sort of deep structure element. They become a guiding principle that allows employees, individual contributors, to make decisions on a day-to-day basis. Because we know that that is what good looks like: you know, we have a metric that we're aiming for, or a set of metrics we're aiming for. We've probably got a benchmark in place,

they probably live in a scorecard somewhere. So we know that's what good looks like, that's what we've got to aim towards. So it's that recognition that measurables are system drivers, not just indicators. We'll talk about different kinds of metrics, but maybe a word on a man that we've mentioned quite a lot on the podcast, I think, actually, which is Goodhart and his law. And, you know, as a reminder,

Goodhart's Law essentially says something quite complicated about economics, but it's often reduced down to: whenever a metric becomes a target, it inevitably ceases to be a good measure. So what that's really saying is that as soon as you start targeting a particular metric or set of metrics,

Chris (06:31.225)

then the system starts to orientate its performance towards them. It becomes a self-fulfilling prophecy to some degree. Systems, or at least businesses, are goal-seeking systems. So that point of Goodhart's, that whenever a measure becomes a target, it ceases to be a good measure, is really worth bearing in mind: whilst metrics have this catalytic role, this system driver role,

it doesn't necessarily mean that those metrics are going to be good measures of the health of the business, of the health of the system. It calls to mind, actually, I'm sure we probably used this same example when we last spoke about this on the metrics episode way back in season one, about Wells Fargo. Did we talk about that?

Colin (07:25.222)

I'm sure we did talk about the Wells Fargo case, where I think the metric was to open a set amount of new accounts daily. So the idea was to kind of boost growth, obviously a big new business initiative, but essentially what happened was the employees just opened millions of unauthorised accounts to meet their quota. There's obviously...

Chris (07:44.37)

Yeah, exactly that. Yeah, exactly that. So.

The goal, this is from memory, but the goal I think was something like to drive sustainable growth through new customer relationships. And I think it was kind of wrapped up in a whole, you know, effectively it was a cross-selling target, but it was about, you know, building relationships with the customers. If you're in the mortgage department, can you sell your good customer a checking account, or whatever it might be. But actually,

what they did was make the metric that measured the goal of sustainable growth through customer relationships, or whatever it was, the number of new accounts opened. Which at face value on a scorecard, when that scorecard is blank, of course, probably seemed like a perfectly reasonable thing to do. You know, we're a bank; having lots of new accounts opened probably means we're growing. Great. But what they

failed to recognise was the sort of catalytic role, I guess, of not just the metric, but also the incentive that they put behind it. And we'll talk about incentives later on. But as you said, employees opened lots of unauthorised new accounts, had lots of sort of spurious conversations that said, you know, by the way, would you like me to just set you up a checking account? And, you know, their bank processes enabled them to just do that.

So they had this hugely 'successful', in inverted commas, metric, you know, these checking account openings going through the roof. Their path to sustainable growth was looking like it was all good. But then what actually happened is there was an unanticipated side effect. And that was that

Chris (09:41.104)

lots of these accounts weren't real. The customers didn't know they were opening them. They didn't realize that there was going to be a monthly charge for the account. And it actually ended up in, I think, a class action lawsuit. If it wasn't a class action lawsuit, it was certainly a regulatory investigation, but it ended up costing them hundreds of millions in fines. So whatever.

Colin (10:00.354)

I think they paid something like one and a half billion just in the fines alone. By the way, I don't know about what they had to pay the consumers, but I think the fines alone were very significant.

Chris (10:03.155)

Yeah, you remember it better than me.

Chris (10:13.139)

But a cautionary tale from Wells Fargo, and not even a very long time ago, I think it was about 10 years ago now: define a purpose, sustainable growth; set a metric, number of new accounts opened. Metrics are a system driver. Be really careful about what direction they're driving the system in and how they then relate to the other elements within your system.

Where they fell down there, arguably, was processes: there was a link to the account-opening process, because the impetus for opening new accounts had changed. The process should have changed as well, but it didn't. They didn't introduce governance into that process. They changed the incentive structure without changing the framework for measuring what defined a high-quality new account being opened, or whatever it might be.

It's worth mentioning that metrics don't just reflect reality, they actually help construct new realities. They kind of shape priorities, they shape investments, and they shape the day-to-day micro-decisions that your team members make. So you need to be really, really careful that they are driving your system in the right direction.

Colin (11:32.322)

I guess something that probably everyone listening will have seen, and it'll have got their back up at some point, is that once we've created metrics, and at some point it's been agreed that a certain metric is going to be how we measure success, for better or worse, those numbers then tend to carry an aura of authority. And I think we've all heard

'the data doesn't lie' while having a debate with one's line manager about whether something is going well or not. I think what you'd probably tell us is that, from a systems lens, measurables can actually quite easily mislead. In fact, the data can lie.

Chris (12:17.875)

Yes, the data can definitely lie, or at least the data can obscure what's really happening, and I think that's the key thing. Much like what we were talking about a moment ago in terms of metrics shaping system behaviour, the people within the system can also shape the metrics negatively.

Colin (12:23.096)

Mm.

Chris (12:46.099)

You know, measurables can definitely mislead if they're not carefully designed. And it's something that we see a lot. A really classic one is Net Promoter Score being the metric that measures how happy customers are. Well, I mean, it doesn't, does it? Anyone who's been involved in an NPS program will tell you that nobody fills the things in, and if they do fill them in, you know, they might only answer the one question.

They're not really collecting deep client sentiment. They are heavily biased towards the people who are either very, very happy or very, very unhappy. And, you know, they ultimately don't do a very good job of working out if your clients are happy. And I think that's really, really important, because if we look at NPS, and NPS is at, you know, 9.1, everyone on the

management team thinks, great, all of our customers must be delighted. But actually, you know, there's a lot of missing context there. 'Would you recommend us?' Well, for the five per cent of the people that actually buy from you that bother to fill the thing in, point the first, yeah, it may sound like they would recommend you. But do they actually like you, or are you just the best of a bad bunch?

You know, do they perceive you as being just as bad as all the bloody others? And at least it's better the devil you know, so they probably would promote you. Or at least they would, if pushed, say they would recommend you. Well, it probably doesn't mean anyone's happy. You probably haven't spotted the market context. You probably haven't spotted that relativity to the rest of your competition. You're just saying, oh, well done us, you know, we've got 9.1. So I think that it sort of...

Yeah, it's quite an easy thing to hide behind a metric and think everything's okay without really understanding the broader context. And that's a really difficult thing to manage, because, you know, how are you going to measure that? It's like things like brand sentiment. You know, it's really important, everyone agrees, but how do you actually measure it? No one knows. You know, media agencies might sell you some sort of econometric tracking study and tell you that they think they know, but really, even consumers themselves don't know.
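[Editor's note] For anyone who wants the mechanics behind the score the hosts are picking apart: NPS is conventionally the percentage of promoters (scores of 9-10) minus the percentage of detractors (0-6). A minimal Python sketch; the `nps_with_context` wrapper is our own hypothetical addition, purely to surface the response-rate context Chris describes as missing:

```python
def nps(scores):
    """Compute a Net Promoter Score from 0-10 survey responses.
    Promoters score 9-10, detractors 0-6; passives (7-8) only
    count in the denominator. NPS = % promoters - % detractors,
    on a -100..+100 scale."""
    if not scores:
        raise ValueError("no responses to score")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)


def nps_with_context(scores, customers_surveyed):
    """Hypothetical wrapper: report the headline score alongside
    the response rate, so a flattering number can't hide that only
    'five per cent bothered to fill the thing in'."""
    return {
        "nps": round(nps(scores), 1),
        "response_rate": round(len(scores) / customers_surveyed, 3),
    }
```

The thresholds are the standard NPS convention; the context reporting is just one way to stop a scorecard hiding who actually answered.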

Chris (15:05.722)

It's one of these that is, it's one of those kind of hopeful.

Colin (15:07.02)

Well, exactly. That's a really good point. And I kind of get why that's hard; these qualitative metrics are not an easy thing to pull off. But even at the level of fairly simple, easy-to-understand quantitative metrics, I think we've all been there where there is a tendency to kind of cherry-pick and highlight data that's going to validate existing biases and maybe

ignore the inconvenient truth. I mean, I've been sitting in a big expensive SKO before, sitting around all the kind of senior sales guys, partly, you know, having deliberately sat myself in that seat because it felt like the right sort of area to be in to learn something. And the big grumble is all about how, and I get that there's a bit of bias involved here as well, we never got anything from marketing. And we had actually, to be fair, done some analysis to show that, of

Chris (15:39.879)

Yeah, I'm from this country.

Colin (16:05.522)

the marketing programs that we were spending on, very little was trickling into kind of later-stage pipeline. And then we were all discussing this, waiting for the CMO to come on stage, and a large part of the presentation is all about how we created however many thousand MQLs last year and therefore everything's great in my department. There's a

Chris (16:28.208)

and

Colin (16:31.63)

Well, there's a couple of things going on there that are kind of relevant to what we're talking about here. First of all, that metric has clearly been set as a measure of success for the CMO. And then there's also the cherry-picking of that data to show how everything's going. And I have some sympathy for the C-suite there who, standing on stage, have got to present quite a positive picture. You can't go on stage and say, well, we created 20,000 MQLs last year and,

you know, fair enough, 19,000 of them were total rubbish. So there is a bit of pressure; I do have some sympathy both ways, I think.

Chris (17:08.923)

Absolutely. I mean, we mustn't disappear down the MQL rabbit hole here, but I will briefly, you know, bite at that particular bait, just to say, yeah, MQLs are the poster child for ineffective, context-less metrics, because they are ill-defined, easily gamed,

Colin (17:14.294)

No, no, no, I don't wanna go down that rabbit hole, but...

Chris (17:39.124)

and basically meaningless for most organizations. In fact, there's a stat I found for a presentation I did recently that only 12% of sellers think MQLs are relevant to their job, which just, like, blew my mind. But yeah.

Colin (17:56.312)

Wow. It's almost like a concept made up by a politician.

Chris (18:05.137)

The CMO has been told, you need to make MQLs. So they're going to make MQLs, and MQLs are easily made if you change the definition of an MQL to suit your particular requirements. As I say, they're the sort of poster child of bad metrics, the poster child of metrics that drive misalignment, that don't guide system performance. And yet are

clung to by so many organizations as a measure of success. And I think they are another cautionary tale: you really need to consider the place of the metrics that you pick, not just in shaping system performance, because if you ask for MQLs, you'll get MQLs, but also in how metrics can shape downstream behaviour. And that sort of, you know, clutch of salespeople saying we never get anything from marketing,

while they're watching the highly paid CMO on stage, you know, being very self-congratulatory about how many MQLs they've got. You know, that is poor metrics in action. And it's a story that plays out all too often. And I think it kind of leads us into talking about the different kinds of metrics. And I think that one of my particular favorite

kinds of metrics, you know, we talk about leading and lagging indicators a lot, you know, the two categories of metrics, but I really like the idea of the lurking indicator. I think that's a really good

Colin (19:45.26)

That's what I was thinking about when you started talking about NPS scores: what we really need to be sort of measuring is the lurking indicators, like, you know, is there actually some brooding unhappiness amongst our customers that the NPS score isn't measuring? Or, you know, conversely, it could be something positive. So I presume you're talking there about the less tangible things, like

I guess the perception of the brand being eroded in a way that maybe the NPS score doesn't pick up, or maybe just employee burnout or dissatisfaction, you know, those rumbles going through the back seats at SKO about how the MQLs are useless.

Chris (20:19.899)

I was actually thinking.

Chris (20:29.203)

That's exactly what I was thinking about, that's exactly what I was thinking about. Sales team grumbling is the number one lurking indicator, I think, that most growth teams have within them. And marketing team grumbling as well; they are equally guilty of this, but...

Colin (20:39.854)

I've never participated in that except under peer pressure.

Chris (20:52.077)

that is the number one lurking indicator of misalignment: the sort of water-cooler conversation, or maybe Slack conversation these days. You know, if you were getting that kind of 'what have the Romans ever done for us' conversation going on, then I think

that should be something that, if you could measure it on a scorecard in any kind of quantifiable way, would be a fantastic indicator of alignment. It would be a great metric to be presenting at management team level, because if everyone was chirpy, you're probably getting a lot of stuff right in a way that the revenue targets, the MQL numbers and the SQL numbers and the opportunities and whatever else that are inevitably going to be on that scorecard just don't tell you.

So yeah, leading indicators, perhaps we should go back and clarify, are the sort of predictive indicators of what is going to happen. I think so often we are predisposed to lagging indicators. It probably goes back to that point right at the start, that the traditional view of metrics is one of sort of gauges of performance. You know, that's your lagging indicator: what has happened?

And of course, the common received wisdom is that you should also be at least equally balancing that with leading indicators like pipeline velocity or arguably SQLs and number of opportunities opened. But actually being able to kind of hear those quiet signals, hear those lurking indicators.

is a fantastically good way to understand the performance of your system at a sort of structural level. So I think the call-out here is to have a well-rounded measurement strategy that surfaces as many different kinds of indicators of performance as you can get, whether that is predictive

Chris (23:03.557)

or historic or indeed the sort of signals information you can get in terms of system performance, because all of that together is going to really help you understand what is going on in the system and what levers you can pull to change it.

Colin (23:18.988)

I guess it's probably fairly well established that you need this balance of leading and lagging indicators. The example I suppose everyone would go to immediately is, if you're just going on lagging indicators: last quarter was our most successful quarter ever, and if that's all you have to go on, you might assume, well, next quarter is gonna be at least that successful again.

But then you look at the leading indicator that says, in fact, there is zero pipeline for next quarter, in which case you maybe have a more balanced and realistic view. But even that, which is, I guess, the kind of generally accepted wisdom on balancing those two, is potentially focusing too much on what's easier to measure and missing out,

as we were calling them, the lurking indicators, the hidden weak signals, I suppose.
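[Editor's note] Colin's zero-pipeline example can be made concrete with a simple coverage calculation. The function and all the figures below are illustrative assumptions, not numbers from the episode; the ~3x coverage multiple is a commonly cited sales rule of thumb, not a universal law:

```python
def pipeline_coverage(open_pipeline, quarterly_target):
    """Leading indicator: how many times over is next quarter's
    target covered by pipeline already in play? Many sales teams
    use a rough ~3x rule of thumb, though the 'right' multiple
    depends on your win rates and sales cycle."""
    if quarterly_target <= 0:
        raise ValueError("quarterly target must be positive")
    return open_pipeline / quarterly_target


# Lagging view alone: a record quarter looks reassuring.
last_quarter_bookings = 1_200_000
# Leading view: the same business has almost nothing in pipeline.
coverage = pipeline_coverage(open_pipeline=300_000,
                             quarterly_target=1_000_000)
print(f"Pipeline coverage: {coverage:.1f}x")  # prints "Pipeline coverage: 0.3x"
```

The point of pairing the two numbers is exactly the hosts' point: the lagging number says celebrate, the leading number says worry.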

Chris (24:16.851)

Mm hmm. Yeah.

Yeah, exactly. And, you know, I think those lurking indicators are becoming easier to measure in some ways. You know, I'm a big fan of signals data in terms of how we do the kind of technical architecture for growth teams, you know, being able to pull in the sort of intersections of different data points. And

I think you could really call those lurking indicators. Certainly, a sort of propensity to buy is something we'd talk about. Well, that's a kind of signals data: we have a collection of companies that have recently had a new CMO and they've been on your website. That is a kind of lurking indicator of intent. So I think you can use that same thing in different ways. And I think that

when you start tracking those kinds of signals over time, then you can start coming up with some really, really interesting patterns that tell you not just about the system performance, but also broader context, like market performance. So I think they're quite a key thing, one that probably seems quite daunting to try and add to your dashboard. But

as I'm saying this, I know that we don't have any of those in our own dashboards, but I think it's something that maybe we should put right and report back on in a future episode, because I think if you can find a way to get them on paper, I suspect they'll tell quite an interesting story over time.
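[Editor's note] The new-CMO-plus-website-visit example can be sketched as a weighted signal score. Everything here, the signal names, the weights, and the accounts, is hypothetical, purely to show the shape of combining several weak signals into one intent number:

```python
from dataclasses import dataclass

# Hypothetical weights; real ones would be tuned to your own market.
SIGNAL_WEIGHTS = {
    "new_cmo": 3,        # leadership change at the account
    "website_visit": 2,  # de-anonymised visit to your site
    "job_postings": 1,   # hiring into roles you sell to
}


@dataclass
class Account:
    name: str
    signals: set


def intent_score(account: Account) -> int:
    """Sum the weights of the signals observed for an account.
    It's the intersection of several weak signals, rather than any
    single one, that makes a useful 'lurking indicator' of intent."""
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in account.signals)


accounts = [
    Account("Acme Ltd", {"new_cmo", "website_visit"}),
    Account("Globex", {"job_postings"}),
]
ranked = sorted(accounts, key=intent_score, reverse=True)
```

Tracked over time, as Chris suggests, a rising score across many accounts is the kind of pattern a single lagging metric never shows you.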

Colin (25:41.518)

you

Colin (25:55.63)

Maybe not on dashboards, but I think, to be fair to us, Chris, we do try, with mixed success, to incorporate these into scorecards and other ways of looking at measurables. Don't we? I'm sure we do.

Chris (26:09.491)

Yeah. Well, certainly we're pretty good at signals data. So let's talk more about signals, shall we? Because, you know, a characteristic of all systems is feedback loops. And I think that we need to kind of acknowledge the fact that,

much as we talked about metrics being shapers of system performance, they can, you know, create that in a few different ways. And something that I've been thinking about really is sort of reinforcing loops, which can be positive or negative. And I think it's something that you've kind of touched on already, actually, that

metrics, if you choose the wrong ones, can have a sort of snowball effect, a kind of negative reinforcing loop, if you like. So if we had a metric that was just, you know, revenue, for instance, and that revenue was going down, then we may make a decision to

cut marketing spend to keep profitability where it was. We may make the decision to reallocate resources, or indeed get rid of human resources within the business, to kind of right-size.

And in doing that, we might undermine the morale of the rest of the team, who were gonna, you know, sort of start checking their CVs and stop generating any more pipeline. So we get this kind of negative spiral, this kind of negative reinforcing loop: as soon as you start measuring one thing and then making a decision based on one data point, it can have a snowball effect that starts affecting lots of other data points. And I think this is something that is really

Chris (28:21.265)

worth bearing in mind, because that's a scenario that I've just sort of loosely painted a picture of that, you know, I certainly have seen happen in organisations that I've worked in. I know that you have too, Colin.
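[Editor's note] The negative reinforcing loop Chris paints can be shown in a toy simulation. All the coefficients here are invented for illustration, so only the shape of the curve means anything:

```python
def simulate_spiral(quarters=8, revenue=100.0, marketing=20.0):
    """Toy reinforcing-loop model with made-up coefficients:
    each quarter, revenue depends partly on marketing spend, and
    the (bad) policy cuts marketing whenever revenue dips, which
    weakens the following quarter and so feeds the spiral."""
    history = []
    for _ in range(quarters):
        new_revenue = 0.6 * revenue + 1.5 * marketing
        if new_revenue < revenue:  # revenue dipped...
            marketing *= 0.7       # ...so cut spend to protect profit
        revenue = new_revenue
        history.append(round(revenue, 1))
    return history
```

Running it shows revenue falling every single quarter: one data point (revenue) driving one decision (cut marketing) is exactly the snowball described above.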

Colin (28:35.434)

Yeah, I mean, there are some examples that have, like, rants building in my head as you're talking about it. But I'm going to spare the audience another rant on this one.

Chris (28:35.515)

and it's up.

Chris (28:47.099)

Yeah, I thought I'd just tee you up for those. But if you're not in the mood today, I think we can...

Colin (28:50.446)

Well, yeah, no, it's not so much that I'm not in the mood, I just thought maybe I'd spare people, because it's kind of a pet hate of mine. A pet peeve, I guess, where we have poorly selected metrics which are too easy to game, and misaligned metrics where two departments have to collaborate, classically being sales and marketing, where one will be

Chris (29:01.704)

Yeah.

Colin (29:20.364)

measured on revenue and the other on how many MQLs you produce, and you decide what's an MQL. And then if we end up with a low pipeline count, for example, but it turns out we had loads of MQLs, then it turns out that maybe what we should do is just produce more. Yeah, let's get rid of more salespeople and produce even more MQLs, or even let's get rid of several of our BDR team

Chris (29:36.817)

Yeah, you guys must not be doing a good job.

Colin (29:48.282)

and give the budget to marketing, who then outsource it to some SDRs who don't know our business. That was a particularly painful one that I was thinking about as you talked about that. And much as you probably could, I don't actually have any individuals to criticise; there was clearly a system failure, essentially. And it was not really having a system

Chris (29:54.01)

You

Colin (30:15.672)

that acts as a guide to metric selection, as a way of mitigating the risk of that happening. There you go. Was that reasonably general, without me slipping into a massive rant and banging my hand off my little fold-up table here?

Chris (30:26.227)

Yeah, I think that was nicely judged. And I think those negative spirals happen all the time, and they're another cautionary tale in terms of metric selection. But it's also worth recognising that this can happen in a positive sense as well. You know, team confidence, and I see this particularly in marketing teams actually; marketing teams are interesting beasts, because they

so often, you know, spend money without knowing what is going to work, and I think are predisposed to

being quite cautious, to sort of spread betting, if you like. And, you know, what you see often, which can be a positive thing, though also not, to be fair, is, you know, when you hit a little seam of gold, then actually the uptick in those metrics on that particular channel or campaign or whatever it might be

builds confidence, builds the confidence to pour more resource in, and then you get this kind of 'success to the successful' archetype that we've perhaps talked about before from more of a systems theory perspective: you then get more resources in, so you're buying more share of the market. And then

you then potentially come up against another systems archetype, limits to growth. Perhaps that's a distraction for a conversation about metrics. But actually, you can create these positive spirals as well: as you get confidence from one metric, it can cause you to, you know, pile in and compound the success. So there can be positives to kind of reinforcing feedback loops as well as negatives. But I think the story, sorry,

Colin (32:13.902)

Which can be very, I was just gonna say that can be very, very powerful, not just sort of internally, but for the types of organizations that we work with who are maybe working towards another round of funding, maybe towards Series B or maybe towards an IPO. These sorts of metrics are hugely beneficial when going for investment or wanting your sort of potential share price to rise, I think, because this is

essentially what that valuation is going to be based on, right? And that confidence, even that slightly less tangible element, the confidence that they build.

Chris (32:52.369)

Yeah, absolutely. And I think it's exactly what you were saying, really, which is that metric selection is a form of risk mitigation. And I think that you've got to choose carefully, for lots of reasons we've just discussed, but also, you know, sort of

recognise the kind of emergent impact, I guess, of how you configure metrics, because you then have to understand how people are going to act on them. And, you know, I think that we've perhaps, to this point in the episode, viewed what we're talking about through the lens of a single metric. But actually, that sort of system effect often comes from the way that you configure a whole bunch of metrics together.

And I think that when you have scorecards with lots of different sorts of indicators on them, you're actually going to get an emergent system effect from the collective interplay of all of those. And it's something that you need to be really, really careful of, particularly in relation to building scorecards. And I think that something that

we see a lot when we go into organizations for the first time is that the metrics in the scorecards, the sort of interplay between those metrics, almost reinforce the silos within the organization. Because there is a form of competition that exists within scorecards. And I think it goes back to the point we were making about people gaming the system. That if you've got MQLs and, you know,

opportunities opened, or, you know, pipeline velocity, or God knows what you've got between sales and marketing, it doesn't really matter.

Chris (34:52.899)

One being green and the other being amber creates an interplay on the scorecard. It creates a reaction in terms of behaviour, and I think that's the big recognition with metrics: metrics shape behaviour. They are the short-term indicator that something needs to happen, positive or negative.

When you have multiple indicators on the scorecard together, it's very normal in an organization that individuals or individual departments will be the owners of those metrics that have been pushed together within a scorecard. And they can create tension that can actually sacrifice the overall efficiency of the system. Because it kind of goes back to that Goodhart's Law piece, you know:

If your upstream counterpart has got green on the scorecard and yours are amber, well, you're going to make sure that next time you go into that meeting that yours are green. So you're going to focus time and resources, and perhaps you're going to do some stuff which may not entirely be something you're proud of. In terms of, okay, well, we're a bit down on MQLs. Well, let's maybe say that everyone that's come to the website, you know, that we can de-anonymize on some tool, is an MQL. Great, well, they've just shot through the roof. Look at us.
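The definition-gaming Chris describes is easy to sketch. In the snippet below, the lead records, the scoring threshold, and the field names are all hypothetical, invented purely for illustration, not drawn from any real system:

```python
# Hypothetical sketch of gaming an MQL definition: nothing about the
# leads changes, only the rule that counts them.

leads = [
    {"name": "A", "score": 85, "deanonymised_visit_only": False},
    {"name": "B", "score": 40, "deanonymised_visit_only": True},
    {"name": "C", "score": 72, "deanonymised_visit_only": False},
    {"name": "D", "score": 10, "deanonymised_visit_only": True},
]

def mqls_strict(leads, threshold=70):
    """Original definition: the lead score must clear a threshold."""
    return [l for l in leads if l["score"] >= threshold]

def mqls_loose(leads, threshold=70):
    """Quietly loosened definition: any de-anonymised site visit counts."""
    return [l for l in leads
            if l["score"] >= threshold or l["deanonymised_visit_only"]]

print(len(mqls_strict(leads)))  # 2
print(len(mqls_loose(leads)))   # 4 -- the headline number "shoots through the roof"
```

Same leads, same behaviour in the market, a doubled headline number: exactly the Goodhart's Law dynamic at work.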

Colin (36:08.17)

Suddenly you have a more positive opinion about what should constitute an MQL, right, in those circumstances. And you've probably justified it to yourself so that actually you should have been doing this all along at that point, yeah.

Chris (36:12.625)

Yes, that's perhaps a nice way of putting it.

Chris (36:20.155)

Exactly. And it's a really old school form of business management, but it's still so prevalent that I think some senior management teams do actually play on that. They think they're doing a great job when they're playing one team off against another. They're sort of using that kind of...

tension between teams to, what they think, is drive performance. And then, when they've got everything going up and to the right and everything green, they think they've done a great job. But what's actually happening in the interplay between those two things? I think that's the thing that we've got to be really conscious of when building quality dashboards. And yeah, there can be a sort of

negative effect, I guess, from some of those as well, a sort of, you know, counterbalancing, I guess. That if we, I don't know, think about the effect of chasing an MQL number, and it's easy for us to keep coming back to that, as it can be the sort of whipping boy of the episode. But, you know, getting that number green might...

Colin (37:26.094)

Okay.

Chris (37:33.34)

Because one of the easiest ways to get your MQL numbers green, as most marketers will sort of quietly admit to you, is we'll just pile a load more money into the media budget. And this is a classic that, you know, I have sort of my own PTSD about from jobs past, in the dim and distant past when I was a full-time marketer with a proper job, which is that

it's very easy to pull things out of the event budget. It's very easy to pull things out of your brand budget. It's all very easy to pull things out of other places where the effect doesn't play so well on the dashboard, and put it into things that make the numbers green. And, you know, I've been in organizations where this has been discussed.

Okay, actually we're going to cancel the events budget for this year, and we're going to put 80% of it in our back pocket, because we've had to make some cuts, and we're going to put the other 20% into paid media. Great. And then six months down the line, eight months down the line: oh, well, yeah, it was a brilliant decision, because nothing's changed. You know, we're seeing no negative effect from not having spent that, you know, tens of thousands, millions, whatever it might be, on events this year. But it's because they're not

measuring things like brand visibility or reputation, because they're difficult to measure. You know, as we discussed previously, taking some money out of brand hurts you in the long term, but you can't really feel it in the short term. It's definitely got a significant lag in terms of the gap between cause and effect as a metric. And that's...

Colin (39:10.824)

And that doesn't just play out sort of externally there in terms of brand perception. That brand perception piece can also kind of affect morale as well. Like, I've worked in an organization that previously used to spend millions on that one time a year everyone in the world got together, at the start of the year, for the big kickoff. And no matter how miserable anyone was going in, they tended to come out having

drunk the Kool-Aid for a few days, had a great time, felt really well looked after. And generally speaking, morale was tangibly higher, and the perception of the brand internally, and therefore the confidence to go out there and go to market and do what needs to be done, was much higher. And then we...

get a similar situation to what you're describing, and then it's, why don't we kind of cut our internal events budget? And so suddenly we're pinching pennies on that one time a year we all get together, and the word on the street, the word on Slack, the word at the water cooler is generally, you know, bringing the brand down: isn't it terrible how cheap we are now? And sure enough, a couple of months later we see a big wave of

Chris (40:10.47)

Yeah.

Colin (40:27.99)

sort of exodus from the business, which obviously has a big knock-on effect for the growth system of the organization,

Chris (40:30.514)

Yeah.

Chris (40:38.599)

Absolutely. You know, I think that's a great example of a lurking indicator there, very much like we were talking about before, but also a great example of what you might call a kind of nonlinear effect. And I think it's a really good example of the impact of not spending money on employee engagement, and then retention rates. They're two metrics that

almost certainly don't exist on the same dashboard anywhere. You know, it's probably quite hard to draw an effect between them in the numbers, but it exists. But I think other kinds of nonlinear relationships, or perhaps long-linear relationships, if that is indeed a phrase, which I suspect it isn't, exist within B2B. And it's something that

we see, and that we have to try really hard at when going into engagements ourselves, to coach leadership teams on: that if you are going to do something like a strategic ABM program, the success might be measured in years. If you've been bred on look how many tens of thousands of MQLs we did last month, and then every month it's green because we've made one more than last month and everything's great.

Well, if you redeployed that budget into doing a strategic ABM program, then suddenly all those MQLs might drop off a cliff. Everything's going to be red. Oh my God, it must be a disaster. Because you've had a change to the strategy. The purpose of the team has changed, fractal purpose.

And actually, if we're then going whale hunting, and we're trying to knock over that one big account, or that huge enterprise account that's spending a couple of hundred grand a year with us and we want them to be spending 10 million, then how are we going to measure the success of that one thing we're going after? We're trying to land a Microsoft or a whatever.
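The cliff-then-lag pattern described here can be sketched with a toy model. Every number below, the six-month lag, the MQL run rate, the deal size, is invented for illustration, not taken from any real programme:

```python
# Toy model: budget moves from MQL-generating media to a long-cycle
# ABM programme in month 0. MQLs fall off a cliff immediately, while
# the ABM payoff only appears after the (assumed) six-month cycle.

ABM_LAG_MONTHS = 6

def mqls(month):
    """Media-driven MQLs stop the moment the budget is redeployed."""
    return 500 if month < 0 else 0

def abm_revenue(month):
    """ABM revenue lands only once the long sales cycle completes."""
    return 0 if month < ABM_LAG_MONTHS else 250_000

dashboard = [(m, mqls(m), abm_revenue(m)) for m in range(0, 9)]
# Months 0-5 look like pure disaster on a monthly RAG board:
# MQLs = 0 and revenue = 0, even though the strategy is on track.
```

The point of the sketch is simply that a monthly scorecard, read naively, shows six consecutive months of "failure" before the strategy change pays off.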

Chris (42:38.129)

Well, then you have to start using things like account engagement and how many people have we spoken to, and it all sounds a bit scary, because you've kind of decoupled the cause and effect from the monthly dashboard. And I think this is another kind of hobby horse of mine: the sort of time horizons we use for metrics and for targets. You know, having monthly targets in organizations where sales cycles are way longer than a month

is not necessarily a disaster, but it's certainly problematic in terms of how we deal with that kind of nonlinear effect. And, you know, I think that's something that really needs to be kept in mind when designing that kind of configuration of metrics. I think the other kind of nonlinear thing going on there, which comes back to the conversation we were having about brand equity, is that people

try to use proxies, I think with varying degrees of success, for things that they would like to measure. They use something that they can measure as a proxy for the thing that they actually want to know, and then confuse the two. Yeah, absolutely, I think that's a really good example of it. And, you know, I think brand equity is such a difficult thing, but they might...

Colin (43:49.794)

That's really what the NPS score is, right?

Chris (44:04.583)

You might then measure it with some sort of social listening thing, you know, how many times are we mentioned on social, and then use that to say, well, actually, people must love us because we're mentioned all the time. Presumably using some sort of, you know, slightly suspect piece of SaaS that's going to tell them that. But coming back to the point that a metric should only really be on a dashboard if you're going to do something about it, you know, if it's going to create action, well,

you know, proxy metrics can be really, really dangerous for driving action, or inaction, in other places in the business. And, you know, relying on a sort of stable, linear worldview for your metrics can really blind you to the sort of emergent realities that come from the different configurations of things you're measuring, but also just the different

things that are happening in the organization that are on longer legs, that are harder to put your finger on in terms of allocating a specific number to. So you really need to be aware of these blind spots, and experiment and scenario plan effectively to try and get around them.

Colin (45:23.319)

Yeah.

I guess another thing on my mind when we're talking about this stuff is how, and to what extent, we share information and measurables. Like, my example was obviously quite extreme, partly for comedic effect: the CMO steps on stage and we have this lurking indicator of all those salespeople down in the stalls and in the cheap seats

grumbling about the performance of marketing, and then the CMO gets up and shares this wonderful statistic, this wonderful measurable of how many MQLs the department has produced this year. Isn't it the case that the way that metrics are shared is kind of as critical as the metrics themselves? Maybe not as critical, but it's also critical.

Chris (46:19.987)

Yeah, 110%. I mean, the...

Colin (46:24.974)

Is that a measurable that we're going to... Is that a target?

Chris (46:31.438)

I think, yeah, well, I think a target should certainly be 100% transparency, that's for sure. I mean, the way that you share data has a huge effect on the way that people can respond, and the time in which they can respond to it. So, you know, an example

of this, I think, is about reporting cycles. When you move from someone compiling reports in an organization to having some sort of API-driven, always-on dashboard, that time lag goes. And I've worked in organizations where they've had to request something,

many years ago, to be fair, but they've had to request something from the Data and Insight team, and it's taken two or three weeks to appear for a board meeting at the end of the month, and then it sits around for a week, and then someone looks at it, and that data is a month old by that point. And then you're making decisions on something that's effectively happened in the past; it happened one-twelfth of the year ago. And that's a real problem, because you've got a time lag effect.
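The arithmetic behind that anecdote is worth making explicit; the durations below are just the rough ones quoted in the story, not measured values:

```python
# Rough sketch of decision latency under a compiled monthly report.

compile_days = 21   # "two or three weeks" for the Data and Insight team
sit_days = 7        # the report "sits around for a week"

data_age = compile_days + sit_days  # age of the data when acted on
fraction_of_year = data_age / 365

print(data_age)                    # 28 days old at decision time
print(round(fraction_of_year, 2))  # 0.08, roughly one-twelfth of a year
```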

Another issue is hiding data, I think: thinking that only certain people in the organization need to know what's going on. We don't have time to talk about it now, but self-organization in teams has a big relationship to data transparency, to my mind.

When people know what the purpose is and they know what the metrics are and they can see those metrics in as close to real time as possible on a dashboard, then they can start making small course corrections. They can start making judgments. They can kind of use that sort of deep structuring principle of knowing where they're going and contrast that with what they see in the data to move forward.

Chris (48:20.499)

in a way that, if they only saw the metrics as a lagging dashboard for the month that's just passed, they of course could do nothing about, because the time has passed. So I think that data transparency is really, really key. It actually reminds me of an example that's often quoted in systems thinking books about a study, I think it was an academic study, I must go back and look at this because I mention it on occasion, about a study that was done on

the position of electricity meters in houses in the Netherlands. Bear with me, there is a point, as abstract as that sounds for a B2B podcast, but it was about data transparency. And what they found, I think this was on some housing estate and the meters had been installed in two different places, I can't remember exactly, but it's not important, was that essentially,

in houses where the electricity meter was in the basement versus houses where the electricity meter was in the front hallway, the electricity usage was, I can't remember now, but double-digit percentages lower in the houses where they could see the data. And that's simply because they could see the thing spinning round; they could see where they were. They might not necessarily be able to relate in their head how many pounds and pence, or euros in that case presumably, it was costing them, but they just knew

that they had this consciousness that they were spending money, because the thing was going round. And therefore they turned more lights off. They would think perhaps a little bit more intentionally about what devices they were using, whether they were leaving the television on when they left the room, whatever it might be. But those that didn't have the data just carried on. And I think that's a really, really important point: that

course correction is only possible when you've got data with which to plot a new course. So yeah, transparency and the presentation of information are absolutely key.

Colin (50:25.558)

So that last part you said, about the presentation of information, I think is quite key. And this is not a criticism of using the example, it's a great example, but your electricity usage, which by the way, if you've ever had to query your electricity bill, can be incredibly complex in how they work it out. But when it's on your smart meter, it just says you've used this much and it's cost you this much today. And you go, oh my God, and you go around and you turn off a few lights, right?

That doesn't require an extra presentation layer, really. There's just a display on your little smart meter, right? But I've worked at at least one very, very large enterprise where we were dealing with, first of all, fairly complex data, and also potentially quite banal, or even boring or unengaging, data to look at at face value. And what I found,

particularly in those organizations, not exclusively in larger organizations, I think, you know, certainly where I've worked before, is that one of the greatest skills the leaders there had was the ability to tie that data to a narrative. So to really pair the metrics with the stories, and actually to interpret them. And of course they would...

Chris (51:39.698)

Yeah.

Colin (51:50.358)

inevitably interpret data and tell a story with a very positive spin, which is a skill in itself. So there's a kind of political, rhetorical element to it as well. But I think that's another thing worth mentioning. And I guess we're going to touch on aspects of leadership and stuff as we go through this series.

Chris (52:13.139)

Yeah, that's a really, really good point. And I think that, as much as I was advocating for 24/7, you know, data at your fingertips, the speed at which, and the sort of ease with which, data is available does create a sort of apathy to analysis. You know, in the days where you had to go and

you know, have a data and insight person go chomp through lots of different sources and then come back with a story, they spent a lot of time deeply considering the data and forming a narrative. And now I think that, for teams,

success becomes quite binary, which I think is something you have to just accept. And that comes back to the point we were making about how to construct the scorecard, how to construct the metrics and the configuration of metrics. But ultimately, if you've got a colour-coded scorecard, then people have three opinions about it, which is, you know, success, failure, or not quite on track.
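That three-opinion effect is baked into the status function itself. A minimal sketch, with an invented 10% amber band (real scorecards set their own thresholds), might look like:

```python
# Minimal RAG (red/amber/green) sketch: however rich the underlying
# number, the scorecard only surfaces which of three bands it fell in.

def rag_status(actual, target, amber_band=0.10):
    """Collapse a metric into 'green', 'amber' or 'red' vs. its target."""
    ratio = actual / target
    if ratio >= 1.0:
        return "green"
    if ratio >= 1.0 - amber_band:
        return "amber"
    return "red"

print(rag_status(105, 100))  # green  (success)
print(rag_status(95, 100))   # amber  (not quite on track)
print(rag_status(80, 100))   # red    (failure)
```

Everything else about the metric, the trend, the cause, the story, is discarded at this step, which is exactly where the narrative gets lost.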

And actually you do lose some of that narrative and that narrative can only really come through kind of deeply considering and interpreting and analyzing the data. Now, of course, narratives can be positive and narratives can be negative. I'm sure we've all been in a meeting where someone's been trying to justify the not particularly great number that's on the scorecard for them with a narrative. I'm sure I've done it myself on occasion, but...

And sometimes that can be really important. Sometimes it's perhaps just obfuscating what's actually going on. But yeah, having a story that goes behind the data is really, really key. And the way that you can kind of pull that out, I think, can come from the configuration of the metrics. If you construct that dashboard well enough,

Chris (54:18.535)

then, yeah, you've intentionally put the things on there that are going to be your combinations of leading and lagging, and indeed lurking, indicators if you can, and they are configured, actually physically, positionally, in a way that can start to create a narrative, and you can start to kind of build that. But I think that it does take an individual to go and interpret what's on there and report it back.

So yeah, I think that how we define success, and how we talk about success, is really critical, almost as much so as the availability of data that allows us to course correct dynamically.

Colin (55:01.132)

Yeah, yeah, and I guess that's an important point to come on to as well: how we define success. I think with something like metrics, there's so much low-hanging fruit in terms of negative examples. But actually, in the work that we do, going into organizations that are maybe, you know, like almost every organization, kind of set in doing things the way that

everyone thinks they're done, it can be quite a controversial topic to start delving into how we measure success. And hopefully this sort of hour or so of chatting about the significance of it will give leaders some pause for thought about the power of metrics within their organization and the attention that we should give to that.

Chris (55:56.66)

Absolutely. And perhaps you raise another good point there, which is we have been waffling on for nearly an hour now, so perhaps we should start wrapping up with some conclusions.

Colin (56:02.574)

So I think, I guess from a systems thinking standpoint, the main takeaway is that a measurable is not a mirror, like a static, sort of passive mirror that reveals performance. Measurables, I guess, are a way of shaping performance,

shaping action, steering attention and resources and human effort. And that can be, I guess, for better or worse; it can spark virtuous cycles, or vicious cycles of sort of unintended distortion. And the challenge, and this is part of what we do at RevSpace with the growth team operating system, is to design metrics thoughtfully so that they actually reveal and reflect real

sort of, sorry, they reflect real strategic objectives, and actually help enable, or shape, the organization in such a way that it's moving towards those objectives. And we have to incorporate hidden and intangible drivers, and these lurking indicators that we were talking about, and also adapt as the system's conditions evolve, which is another sort of often-missed point.

Chris (57:29.105)

Yeah, absolutely. I think, to be honest, my main takeaway would probably just echo what you said there, which is metrics are not passive. They are levers. They are handles that you can pull to start shaping the behavior of individuals in your organization. And therefore they are one of the key things that you can use to create that sort of emergent system performance.

But use them carefully, because it can all go horribly wrong, as we've discussed. But equally, you know, done right, they are one of the key tools that you've got to drive performance within your organization. And as we move next week into talking about system operations, from system orientation, where we've been to date in this series, if you haven't got

the goal stack right, if you're not measuring the right things, the system will not be pointing in the right direction. And no matter what you do from this point on,

you've already potentially got it right or wrong. Success or failure can be baked in at the point that you start pointing the system in that direction. So use with care, but also understand that creating that scorecard, creating that dashboard could be one of the most important things you do this year in generating success.

Colin (59:01.826)

Yeah, strong ending there, I think, Chris. We'd love to hear from the audience, actually, what their thoughts are on this, because it is actually something that, when we do get some feedback and chat to people that listen to us talking about this, it's a problem that leaders wrestle with. And they're aware that they need to change something, their approach to this and their understanding of it, but perhaps...

Chris (59:04.957)

Thank you.

Colin (59:30.702)

maybe they don't know where to start, or have kind of tried to dip a toe in and then gone back to the standard ways of doing things, and thus sometimes baked in suboptimal performance, I guess. As you said though, Chris, we have waffled on for close to an hour, very close to an hour, I think, 10 seconds short. So I shall wrap it up there for this week.

The Growth System is brought to you by RevSpace. That's us. We're a growth systems consultancy and we connect B2B organizations with the future of growth. We offer consultancy and education, and indeed applied delivery services. Please don't forget to follow and rate the podcast; it really helps us bring the content to a wider audience. And as I always say, we'd really appreciate a moment of your time to tell us what you think, either about this episode

or really about anything that we cover here on The Growth System. But that's all we've got time for this week. Until next week: have a great week, and we'll see you then. Thanks, bye bye.

Chris (01:00:36.179)

Thanks for listening.