Evaluating Revenue Management Performance

Aug 30, 2020 - 3 minute read

“In God we trust. All others must bring data.” – W. Edwards Deming

Ten years ago, hoteliers would debate whether revenue management was an art or a science. That debate has long since been settled, and it is clear that revenue management should be measurable. But how?

Revenue management is an enormously complex discipline. But so are financial markets, where huge progress has been made on frameworks that can quantify and evaluate potential growth drivers. What sets the two industries apart is the low level of investment hospitality has dedicated to this area.

Let’s take a hypothetical hospitality owner: 50 properties and €100 million in annual revenue. 20% of the properties have an RMS. A further 40% are served by cluster revenue management. The remainder are left to their own devices and the GM calls the shots. If the board had demonstrable evidence that there is a 5-10% difference in revenue growth between these three approaches, and believed that the results would scale, do you not think they would make it the CEO’s priority to deliver these answers?

So what are the available approaches for finding the answers? First of all, we need clear and concise questions. In other words, we need clearly differentiated approaches, processes, systems, etc. that can be compared. In this example we have that. We could equally well be comparing two revenue managers, two different revenue strategies or two different RMSs.

When we want to compare two distinct practices, or approaches, there are 3 common ways to evaluate them. For illustration, let’s say we implemented an RMS in January 2020.

In this example, how would we measure the impact on our revenue performance?

  1. We could look at year-on-year RevPAR to decide. But market conditions are obviously very different and it’s almost always impossible to draw any conclusions from this approach.
  2. An improvement over the year-on-year comparison is to compensate for market fluctuations by using the Revenue Generation Index (your RevPAR divided by the market’s RevPAR). This approach isn’t bad, but there is a lot of noise in market data, which means you can only detect very impactful changes. A small worked example follows this list.
  3. If you’re lucky enough to have multiple properties in similar locations, and with similar profiles, you can get the most accurate results of the three by running a side-by-side comparison.
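
To make the RGI idea concrete, here is a minimal sketch with purely hypothetical numbers (the €95/€100 and €72/€73 RevPAR figures are made up for illustration):

```python
# Hypothetical RevPAR figures for illustration only.
# RGI = your RevPAR / market RevPAR; values above 1.0 mean you outperform the market.

def rgi(own_revpar: float, market_revpar: float) -> float:
    """Revenue Generation Index for a single period."""
    return own_revpar / market_revpar

rgi_2019 = rgi(95.0, 100.0)   # e.g. own RevPAR of €95 vs. a market RevPAR of €100
rgi_2020 = rgi(72.0, 73.0)    # e.g. own RevPAR of €72 vs. a market RevPAR of €73

# Absolute RevPAR fell, yet the index suggests a relative gain -- but noisy
# market data limits how small a gain you can actually trust.
print(f"RGI 2019: {rgi_2019:.3f}, RGI 2020: {rgi_2020:.3f}, "
      f"relative change: {rgi_2020 / rgi_2019 - 1:+.1%}")
```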

But even two of your own properties will have different demand structures, and while the evaluation can be good, it’s hard to get reliable numbers, especially if the difference is below 5%. From other disciplines, we can see that the way to get much more accurate results is to dig a bit deeper.

The following two approaches, rarely used in hospitality, give us more accurate results:

  • Simulations - building an artificial model in which the two different approaches can be compared. The challenge here is building a model that is complex and sophisticated enough to mirror the real world (a toy sketch of the idea follows this list).
  • Controlled Experiments - if you’ve heard of A/B testing, you already know what these are. Controlled experiments are essentially a more sophisticated version of side-by-side comparisons that don’t require multiple properties.
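
To give a feel for the first approach, here is a deliberately toy simulation: an assumed constant-elasticity demand model with random noise, comparing a fixed rate against a crude dynamic-pricing stand-in. None of the numbers or the model reflect Pace’s actual simulations; they are assumptions chosen to keep the sketch short.

```python
import random

ROOMS = 100                # hypothetical property size
BASE_DEMAND = 120          # assumed average daily requests at the reference price
REFERENCE_PRICE = 100.0
ELASTICITY = -1.5          # assumed price elasticity of demand

def simulate_night(price: float, rng: random.Random) -> float:
    """Revenue for one night at a given rate, with noisy demand."""
    expected = BASE_DEMAND * (price / REFERENCE_PRICE) ** ELASTICITY
    demand = max(0, int(rng.gauss(expected, expected * 0.2)))
    return min(demand, ROOMS) * price

def evaluate(pricing_policy, nights: int = 365, seed: int = 1) -> float:
    """Average nightly revenue of a pricing policy over a simulated year."""
    rng = random.Random(seed)
    return sum(simulate_night(pricing_policy(rng), rng) for _ in range(nights)) / nights

fixed_rate = lambda rng: 100.0                               # "one rate all year" stand-in
dynamic_rate = lambda rng: rng.choice([90.0, 110.0, 130.0])  # crude stand-in for an RMS

rev_fixed = evaluate(fixed_rate)
rev_dynamic = evaluate(dynamic_rate)
print(f"Fixed rate: €{rev_fixed:,.0f}/night, dynamic: €{rev_dynamic:,.0f}/night, "
      f"difference: {rev_dynamic / rev_fixed - 1:+.1%}")
```

The hard part, as noted above, is making such a model rich enough (seasonality, segments, booking windows, cancellations) that conclusions drawn inside it carry over to the real world.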

Pace uses both Simulations and Controlled Experiments to evaluate revenue management performance. What is exciting about both of these approaches is that we can accurately detect performance improvements as small as 1-2%. Not in all cases, of course, but we always report on the reliability of the conclusion, so it’s clear what weight to place on it.
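
As a rough illustration of what reporting on reliability could mean (this is a generic statistical sketch with simulated data, not a description of Pace’s methodology), one might put a confidence interval around the measured difference between a control and a treatment group:

```python
import math
import random
import statistics

rng = random.Random(42)
# Simulated per-night RevPAR for the two arms of a controlled experiment;
# the ~2% uplift and the €15 spread are assumptions, not real data.
control = [rng.gauss(100.0, 15.0) for _ in range(365)]
treatment = [rng.gauss(102.0, 15.0) for _ in range(365)]

uplift = statistics.mean(treatment) / statistics.mean(control) - 1
diff = statistics.mean(treatment) - statistics.mean(control)

# Standard error of the difference in means, then an approximate 95% interval.
se = math.sqrt(statistics.variance(control) / len(control)
               + statistics.variance(treatment) / len(treatment))
low, high = diff - 1.96 * se, diff + 1.96 * se

print(f"Measured uplift: {uplift:+.1%}")
print(f"~95% interval on the RevPAR difference: €{low:.2f} to €{high:.2f}")
# If the interval clearly excludes zero, the improvement is unlikely to be noise;
# if it straddles zero, more data or a cleaner experiment is needed before acting on it.
```

In this made-up example, a 2% uplift sits close to the edge of what a year of noisy nightly data can confirm, which is exactly why the uncertainty deserves to be reported alongside the point estimate.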

You can read more about our approach to Impact Analysis in a recently published white paper, or learn more in the online presentation below.