Eating my own ice cream – Part 1

According to Wikipedia, the rather unfortunate term “dogfooding” was coined in reference to the use of one’s own products: “if it is good enough for others to use, then it is good enough for me to use”. I prefer the term coined by one-time Microsoft CIO Tony Scott: “icecreaming”.

[Image: Sanctorius of Padua literally eating his own ice cream.]

In this two-part post, I am going to “eat my own ice cream” and dive into my own smart meter electricity data made available by my electricity distributor through an online portal. I will endeavour to find out what drives electricity usage in my household, how to make the data as predictable as possible and what lessons can be learned so that the utilities sector can get better insight from smart meter data.

Whether it is for regulatory requirements or for better business decision making generally, traditional forecasting practices are proving to be inadequate. The current uncertainty in electricity pricing is partly driven by inadequate peak load and energy forecasts. Until the mid-2000s energy forecasting was very straightforward: electricity was a low-cost resource and depended on very mature technology. And then everything changed. We had a run of hot summers followed by a run of wet, mild ones. We had the rooftop solar revolution, helped in the early days by considerable government subsidy. We had changes in building energy efficiency standards, and lately we have also had a downturn in the domestic economy. And of course we have had price rises, which have revealed the demand elasticity of some consumers.

This array of influences can seem complex and overwhelming, but armed with some contemporary data mining techniques and plenty of data we can build forecasts that take this range of factors into account and, more importantly, dispel myths about what does and doesn’t affect consumption patterns. Furthermore, we can build algorithms that will detect when some new disruptor comes along and causes changes that we have not previously accounted for. This is very important in an age of digital disruption. Any organisation that is not master of its own data has the potential to face an existential crisis and all of the pain that comes with that.

In this analysis I am going to use techniques that I commonly use with my clients. In this case I am looking at a single meter (my own meter), but the principles are the same. When working with my clients my approach is to build the forecast at every single meter, because different factors will drive the forecast for different consumers (or at least different segments of consumers).

So that I don’t indulge in “analysis paralysis”, I will define some hypotheses that I want to test:

  • What drives electricity usage in my household?
  • How predictable is my electricity usage?
  • Can I use my smart meter data to predict electricity usage?

I will use open source/freeware to conduct this analysis and its visualisations, to prove once again that this type of analysis does not have to be costly in terms of software, but relies instead on “thoughtware”. As always, let’s start with a look at the data.

[Image: raw meter data, one row per day with 48 half-hourly readings]

As you can see, I have a row for each day and 48 half-hourly readings, which is the standard format for meter data. To this I add day of week and a weekend flag calculated from the date. I also add temperature data from my nearest Bureau of Meteorology automated weather station – which happens to be only about 3 kilometres away and at a similar altitude. I also total the 48 readings so I have a daily kWh usage figure. In a future post I will look into which techniques we can apply to the half-hourly readings, but in this post I will concentrate on this total daily kWh figure.
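For those who want to follow along, here is a minimal sketch of this preparation step in Python/pandas. The file and column names are hypothetical stand-ins for my actual data; the portal and BOM downloads will differ.

```python
import pandas as pd

# Hypothetical inputs: portal export with 'date' plus hh_1 ... hh_48 half-hourly
# readings, and a BOM daily file with 'date', 'min_temp' and 'max_temp'.
meter = pd.read_csv("meter_readings.csv", parse_dates=["date"])
weather = pd.read_csv("bom_station.csv", parse_dates=["date"])

hh_cols = [f"hh_{i}" for i in range(1, 49)]

meter["daily_kwh"] = meter[hh_cols].sum(axis=1)      # total daily usage
meter["day_of_week"] = meter["date"].dt.day_name()   # e.g. 'Monday'
meter["weekend"] = meter["date"].dt.dayofweek >= 5   # Saturday/Sunday flag

df = meter.merge(weather[["date", "min_temp", "max_temp"]], on="date", how="left")
```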

This is the data with the added fields:

[Image: meter data with the added fields]

My tool of choice for this analysis is the Generalised Linear Model (GLM). As a general rule, regression is usually a good choice for modelling a continuous response variable. GLMs also allow tuning of the model to fit the distribution of the data.

Before deciding what type of GLM to use let’s look at the distribution of daily usage:

[Image: distribution of daily kWh]

Not quite a normal distribution. The distribution is slightly skewed to the left with high kurtosis, which looks a little like a gamma distribution. Next let’s look at the distribution of the log of daily kWh.

[Image: distribution of the log of daily kWh]

Here I can see a long tail to the left, but if I ignore that tail then I get quite a symmetric distribution. Let’s have a closer look at those outliers, this time by plotting temperature against daily kWh. They can be seen clearly in a cluster at the bottom of the graph below.

[Image: temperature plotted against daily kWh, with the low-usage cluster at the bottom]

This cohort of low energy usage days represents times when our house has been vacant. In the last year these have mostly been one-off events with no data that I can use to predict their occurrence. They can all be defined as being below 5 kWh, so I’ll remove them from my modelling dataset. The next graph then shows that we clearly have a better fit to a gamma distribution (blue line) than to a normal distribution (red line).

[Image: daily kWh with fitted gamma (blue) and normal (red) distributions]

We are now ready to model. This is what the first GLM looks like:

[Image: output of the first GLM]
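For readers who want to reproduce something similar, below is a minimal sketch of fitting this kind of GLM with Python’s statsmodels, assuming the prepared dataframe from the earlier sketch and a gamma error distribution with a log link. The terms in the formula are illustrative rather than an exact transcription of my model.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Drop the vacant-house days identified above before modelling.
model_df = df[df["daily_kwh"] >= 5]

glm = smf.glm(
    "daily_kwh ~ C(day_of_week) + weekend + min_temp + max_temp",  # illustrative terms
    data=model_df,
    family=sm.families.Gamma(link=sm.families.links.Log()),        # gamma with log link
).fit()

print(glm.summary())  # coefficient estimates and p-values
```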

In assessing my GLM, I will use three measures:

  • the p-value (“p(>|t|)”) for each predictor, which estimates the goodness of fit of that predictor (the smaller the better, meaning greater confidence in the coefficient estimate),
  • R-squared, which represents how well the overall model fits (the higher the better; R-squared can be thought of as the percentage of variance in the data explained by the model), and
  • root mean squared error (RMSE), which tells me the quantum of the average difference between my actual and predicted values (the lower the better; an RMSE of zero would mean that predicted values do not vary from actual values at all).
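For concreteness, here is a sketch of how these three measures might be pulled from the fitted statsmodels GLM above. Note that R-squared is not reported natively for a GLM, so I compute it here from actual versus predicted values.

```python
import numpy as np

actual = model_df["daily_kwh"]
predicted = glm.predict(model_df)

p_values = glm.pvalues  # one p-value per coefficient
r_squared = 1 - np.sum((actual - predicted) ** 2) / np.sum((actual - actual.mean()) ** 2)
rmse = np.sqrt(np.mean((actual - predicted) ** 2))

print(p_values)
print(f"R-squared: {r_squared:.2f}, RMSE: {rmse:.2f}")
```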

The model above is not very well fitted, as demonstrated by the p-values and by the fact that some coefficients did not produce an estimate. This model has an R-squared of 0.36 and an RMSE of 7, and even these statistics are not very reliable given the p-values.

It also seems odd that MinTemp is significant but MaxTemp is not. So I remove the poor-performing variables and add an interaction between MinTemp and MaxTemp, as I expect to find a relationship between these two values and electricity usage.

[Image: output of the GLM with the interaction between MinTemp and MaxTemp]

This new model is better fitting, with an r-squared of 0.64 and an RMSE of 5.32. But the p-value for “Day==Tuesday” is still not low enough for my liking given the sample size of only a few hundred observations. At the risk of erring slightly on the side of underfitting, I remove this term from the model. Taking a closer look at temperature, I plot average temperature (the midpoint between MinTemp and MaxTemp) against daily kWh and I find an interesting pattern:

[Image: average temperature plotted against daily kWh, showing a U-shaped pattern]

We see cross-over points in the direction of correlation in the same temperature band at different seasonal changeovers, like bookends to the winter peak in usage. I use this insight to create two new temperature variables using splines. A bit of experimentation leads me to conclude that the temperature changeover is at 18 degrees Celsius, which is also the temperature at the bottom of my U-curve scatterplot above. I create a variable called “spl1” which is zero for all values less than 18 degrees and the average temperature minus 18 for all values above. The second variable, “spl2”, is the opposite: zero for all temperatures above 18 degrees and 18 minus the average temperature for all below. Because I am using a log link function, these variables will describe a U-shape as in the scatterplot rather than a V-shape, which is what would happen were I using linear regression.
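Constructing these spline terms is only a couple of lines of code. A sketch, using the dataframe and hypothetical column names from the earlier snippets:

```python
# Average temperature and the two hinge (spline) terms around the 18 degree knot.
model_df = model_df.assign(avg_temp=(model_df["min_temp"] + model_df["max_temp"]) / 2)
model_df["spl1"] = (model_df["avg_temp"] - 18).clip(lower=0)  # degrees above 18
model_df["spl2"] = (18 - model_df["avg_temp"]).clip(lower=0)  # degrees below 18

# These then replace the raw temperature terms in the GLM formula,
# e.g. "daily_kwh ~ weekend + spl1 + spl2" (illustrative).
```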

Let’s see how these variables work in my model:

[Image: output of the GLM with the spline temperature variables]

Hey presto! We have a much stronger-fitting model, with an r-squared of 0.71 and an RMSE of 4.86. This model is appealing in that it is highly parsimonious and readily explainable. When I visualise the model fit and produce a thirty-day moving average, r-squared increases to 0.88 and I have a model with a good fit.

[Image: actual versus predicted daily kWh with a thirty-day moving average]

I have pointed out three periods where the model departs from actual usage. The two low periods coincide with times when we were away and the high period coincides with a period when I was travelling. I have seen market research which suggests that absence of the bill payer leads to higher household electricity usage. I can add dummy variables into my model to describe these events and then use those in future forecast scenarios. The important thing here is that I am not using a trend, and given this fit I see no trend in my usage other than that created by climatic variability. Some consumers will have a trend in usage driven by changes over time, such as changes in productivity for businesses or the addition of solar for residential customers. But it is not good enough to just count on a continuing trend. It is important to get to the drivers of change and find ways of capturing these drivers in granular data.

In the next part of this post I’ll investigate how these meter-level insights can be used at the whole-of-network level, and some techniques which can be used to scale insight from individual meters up to the whole network.

Productivity and Big Bang Theory

Productivity has been falling in Australia for some time. In the mining, utilities and manufacturing sectors we have seen a remarkable fall in productivity over the last decade. Some of this has been caused by rising labour costs, but in mining and utilities in particular, capital expenditure on infrastructure has been a major contributor. So how will new technology and the era of “big data” transform the way these sectors derive return on capital investment?


According to the ABS, this fall may have been driven in part by the rapid development of otherwise unprofitable mines to production in an environment of once-in-a-lifetime high commodity prices. From a labour perspective, this has also driven up wages in the mining sector, which has had knock-on effects for utilities.

Meanwhile for the last decade utilities have been dealing with a nexus of chronic under-investment in some networks, our insatiable appetite for air conditioning in hot summers and a period of growth in new housing with poor energy efficiency design in outlying urban areas which are subject to greater temperature extremes. The capital expenditure required to keep pace with this forecast peak demand growth has been a major negative in terms of productivity.

In this post I am going to consider how analytics can find increased productivity in the utilities sector (although there should be parallels for the mining sector), specifically through optimisation of capital expenditure. I’ll discuss labour productivity in a future post.

Deloitte has recently released its report into digital disruption: Short Fuse, Big Bang. In this report the utilities sector is identified as one which is going to be transformed by technological change, albeit more slowly than other sectors. Having said that, electricity utilities and retailers are going to be the first to experience disruptions to their business models, before water and gas. This is being driven by the fact that electricity businesses are at the forefront of privatisation among utilities and of the politicisation of electricity pricing. Internationally, energy security concerns (which have in turn seen the rise of renewables, energy conservation and electric vehicle development, for example) have also driven technological change faster for electricity utilities.

On face value the concept of the smart grid just looks like the continuation of big-ticket capital investment and therefore a further decline in productivity. Is there, however, a way to embrace the smart grid which actually increases productivity?

Using good design principles and data analytics, I believe the answer is yes. Here are three quick examples.

Demand Management

The obvious one is time of use pricing of electricity, which I have written about on this blog several times already. The problem with this from a savings point of view is that the payoff between reduced peak demand and savings in capital expenditure is quite lagged, and without effective feedback between demand management and peak demand forecasting it may just result in overinvestment in network expansion and renewal. In fact I believe that we have already seen this occur, as evidenced by the AEMO’s revision of peak demand growth. When peak demand was growing most rapidly through the mid-1990s, demand management programs were proliferating, as were revisions to housing energy efficiency standards. It should have been no surprise that this would have an effect on energy usage, but quite clearly it has come as a surprise to some.

Interval meters (which are also commonly referred to as “smart” meters) are required to deliver time of use pricing, and some parts of the NEM are further down the track than others in rolling these out, so this solution still requires further capital investment. In my recent experience this appears to be the most effective and fairest means of reducing peak demand. Meter costs can be contained, however, as “smart meter” costs continue to fall. A big cost in the Victorian rollout of smart meters has not just been the meters themselves but the communications and IT infrastructure to support the metering system. An opt-in rollout will lead to slower realisation of the benefits of time of use pricing in curbing peak demand but will allow a deferral of the infrastructure capital costs. Such an incremental rollout will allow assessment of options such as communications-enabled “smart meters” versus manually read interval meters (MRIMs). MRIMs capture half-hourly usage data but do not upload it via a communications network; they still require a meter reader to visit the meter and physically download the data. These meters are cheaper, but labour costs for meter reading need to be factored in. There are other advantages to communications-enabled meters in that data can be relayed in real time to the distributor, allowing other savings spin-offs in network management. It also makes it possible for consumers to monitor their own energy usage in real time and therefore increases the effectiveness of demand pricing through immediate feedback to the consumer.

Power Management

From real-time voltage management to reduce line loss, to neural net algorithms to improve whole-of-network load balancing, there are many exciting solutions that will reduce operating costs over time. Unfortunately, this will require continued capital investment in networks that do not have real-time data-reporting capabilities, and there is little appetite for this at the moment. Where a smart grid has already been rolled out, these options need to be developed. Graeme McClure at SP Ausnet is doing some interesting work in this field.

Asset Lifetime

This idea revolves around a better understanding of the true value of each asset on the network. Even the most advanced asset management systems in Australian distributors at the moment tend to treat all assets of a particular type as being of equal value, rather than having a systematic way of quantifying their value based on where they are within the network. Assets generally have some type of calculated lifetime and they get replaced before it expires. But what if some assets could be allowed to run to failure with little or no impact on the network? It’s not that many talented asset managers don’t already understand this. Many do. But good data analytics can ensure that this happens with consistency across the entire network. This is an idea that I have blogged about before. It doesn’t really require any extra investment in network infrastructure to realise benefits. This is more about a conceptually smart use of data rather than smart devices.

The era of big data may also be the era of big productivity gains and utilities still have time to get their houses in order in terms of developing analytics capability. But delaying this transition could easily see some utilities facing the challenges to the business model currently being faced by some in the media and retail industries. The transition from service providers to data manufacturers is one that will in time transform the industry. Don’t leave it too late to get on board.

Have We Seen the End of Peak Demand?

There has been a lot of comment in the media lately about how dodgy forecasts have impacted retail electricity bills. Is this really the case? Has peak demand peaked? Have we over-invested in peaking capacity? I don’t propose to come up with a definitive answer here, but by exploring forecasting methodologies I hope to show why such predictions are so hard to make. In this post I am going to show that a pretty good model can be developed using free software and a couple of sources of publicly available data (ABS, BOM) on a wet Melbourne Saturday afternoon. To cheer me up I am going to use Queensland electricity data from AEMO and concentrate on summer peak demand. I am then going to apply this technique to data only up to summer 2009 and compare that to the recently downward-revised AEMO forecast.

But first let’s start with a common misconception. The mistake many commentators make is confusing the economics of electricity demand with the engineering of the network for peak capacity. Increasing consumption of electricity will impact wholesale prices of electricity. To a lesser extent it will also affect retail prices as retailers endeavour to pass on costs to consumers. The main driver of increased retail electricity prices, however, is network costs; specifically, the cost of maintaining enough network capacity for peak demand periods.

Let’s start by looking at some AEMO data. The following chart shows total electricity consumption by month for Queensland from 2000 to 2011.

Queensland Energy Consumption

We can see from this chart that total consumption has started to fall from around 2010. Interestingly, though, we have seen the peakiness increase from about 2004, with summers showing much greater electricity usage than the non-peak seasons.

If we overlay this with peak demand then we see some interesting trends.

Consumption versus Demand

What we see from 2006 onwards is an increasing separation between peak demand and total consumption. There are a couple of factors underlying this decoupling. One is the increased energy efficiency of homes, driven by energy-efficient building standards and other schemes such as the home insulation scheme. The other is the rapid uptake of solar power. Generous feed-in tariffs have encouraged a widespread uptake of solar panels, which has decreased the amount of energy consumed from the grid except at peak times. A solar panel will reduce electricity consumption during the day, but during warm summer evenings, when the sun has disappeared, air conditioners will run heavily on network electricity. The implication of the decoupling of peak demand from total consumption is that we either have to pay more for our electricity to maintain the same standard of supply or accept lower reliability of supply, especially at the times when we most need it – very hot and very cold days.

When we overlay temperature on peak demand we see generally summer peaking, which is typical for Queensland. We also see that maximum temperatures were higher earlier in the decade and then generally cooler in the last three years. It is important to remember that what we are seeing is a longer wave of variability, not a trend. This is often understood but not properly accounted for in forecasting temperature-variant behaviour.

Demand versus Temperature

The above chart does not use the maximum monthly temperature but the average maximum of the hottest four days of each month. Those who have studied electricity usage behaviour know that the highest peak often occurs after a run of hot days. By averaging the hottest few days of each month we get a measure that captures both the peak temperature and the temperature run. It is not necessary for this purpose to explicitly calculate consecutive days because temperature is not randomly distributed: temperatures tend to cluster anyway. Another way to capture this is to count the number of days above a given temperature. Both types of variable can perform well in models such as these.
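As a sketch, here is how the hottest-four-days measure (and the alternative days-over-a-threshold count) might be computed from a daily BOM maximum temperature series. The file, column names and the 32-degree threshold are illustrative.

```python
import pandas as pd

daily = pd.read_csv("bom_daily_max.csv", parse_dates=["date"])  # hypothetical file

# Average of the four hottest daily maximums in each month.
hottest4 = (
    daily.groupby(daily["date"].dt.to_period("M"))["max_temp"]
    .apply(lambda x: x.nlargest(4).mean())
    .rename("avg_hottest_4_days")
)

# The alternative mentioned above: count of days above a threshold, e.g. 32 C.
days_over_32 = (
    daily.assign(hot=daily["max_temp"] > 32)
    .groupby(daily["date"].dt.to_period("M"))["hot"]
    .sum()
)
```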

We can see from this chart that peak demand continues to rise despite the variability caused by temperature. The next step then is to add variables that describe the increase in peak. In my experience population usually performs the best, but in this case I’ll also test a couple of economic time series measures from the ABS National Accounts.

I also create a dummy variable to flag June, July and August as winter months. My final dataset looks like this:

Data snapshot

Preparation of data is the most important element of analytics. It is often difficult, messy and time-consuming work, but something that many of those new to analytics skip over.

In this exercise I have created dummy variables and eventually discarded all except a flag indicating whether a particular month is a winter month, as per the data shown above. This will allow the model to treat minimum temperature differently during cold months.

Another common mistake is the belief that extremes such as peak demand can only be modelled on the extreme observations. In this case I look at peak demand in all months in order to fit the summer peaks, rather than just modelling the peaks themselves. This is because there is important information in how consumer demand varies between peak and non-peak months. This way the model is not just a forecast but a high-level snapshot of population response to temperature stimulus. Extreme behaviour is defined by the variance from average behaviour.

My tool of choice is the GLM (Generalised Linear Model), which gives me a chance to experiment with categorical variables (e.g. is it winter? Yes/No), various distributions of peak demand (i.e. normal or gamma), and whether I want to fit a linear or logarithmic line to the data.

After a good deal of experimentation I end up with a very simple model which exhibits good fit, with each of the predictor variables significant at greater than the 95% level. For the stats-minded, here is the output:

GLM Output

You will notice that I have just four variables from two data sources left in my model. The economic measures did not make it to the final model. I suspect that population growth acts as a proxy for macroeconomic growth over time, both in terms of the number of consumers and the available labour supporting economic output.

Another approach borrowed from data mining that is not always used in forecasting is to hold out a random test sample of data which the model is not trained on but against which it is validated in terms of goodness-of-fit statistics. The following charts show the R-squared fit against both the data used to train the model and the hold-out validation dataset.

Model Fit - Training Data

Model Fit - Test Data
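For those interested, here is a rough sketch of the holdout approach in Python. The dataset, column names and the choice of a Gaussian family are hypothetical placeholders rather than my exact model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical monthly dataset: peak demand plus the temperature, population
# and winter-flag variables described above.
data = pd.read_csv("qld_monthly_peak.csv")

def r_squared(actual, predicted):
    return 1 - np.sum((actual - predicted) ** 2) / np.sum((actual - actual.mean()) ** 2)

train = data.sample(frac=0.8, random_state=42)  # random training sample
test = data.drop(train.index)                   # hold-out validation sample

fit = smf.glm(
    "peak_mw ~ avg_hottest_4_days + population + winter",  # illustrative terms
    data=train,
    family=sm.families.Gaussian(),                          # family choice is illustrative
).fit()

print("train R-squared:", r_squared(train["peak_mw"], fit.predict(train)))
print("test R-squared: ", r_squared(test["peak_mw"], fit.predict(test)))
```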

We can be confident on the basis of this that our model explains about 80% of the variance in peak demand over the last decade (with, I suspect, the balance being explained by a combination of solar PV, household energy efficiency programs, industrial use and “stochastic systems” – complex interactive effects that cannot be modelled in this way).

Another way to look at this is to visually compare the predicted peak demand against actual peak demand as done in the following graph.

GLM Model - Predicted versus Actual

We can see from this chart that the model tends to overestimate demand in the earlier part of the period and underestimate it at the end. I am not too concerned about that, however, as I am trying to fit an average over the period so that I can extrapolate an extreme. I will show that this only has a small impact on the short-term forecast. This time series does have a particularly big disruption, which is the increased penetration of air conditioning. We know that the earlier part of the period includes relatively low air conditioner penetration (and we have now most likely reached maximum penetration of air conditioning). Counteracting this is the fact that the later period includes households with greater energy efficiency. These effects counteract each other. As with weather, you can smooth out variability if you take a long enough view.

Let’s see what happens if we take temperature up to a 10 POE level and forecast out three years to November 2014. That is, what happens if we feed 1-in-10-year temperatures into the model? I emphasise that this is 10 POE temperature, not 10 POE demand.

GLM - 10 POE Temperature Prediction

We see from this chart that actual demand exceeded our theorised demand three times (2005, 2007 and 2010) out of 12 years. Three years out of twelve can be considered as 25 POE; in other words, the peak exceeds the theorised peak 25% of the time over a twelve-year period.

2010 appears to be an outlier as, overall, the summer was quite mild. There was, however, a spike of very warm weather in South East Queensland in January which drove a peak not well predicted by my model. The month also recorded very cool temperatures, which caused my model to drag down predicted peak demand. This is consistent with the concept of probability of exceedance. That is, there will be observed occurrences that exceed the model.

The final test of my model will be to compare back to the AEMO model. My model predicts a 2013/14 summer peak of 2309 MW at 25 POE. The 50 POE summer peak forecast for 2013/14 under the Medium scenario for AEMO is 9262 MW, and 9568 MW at 10 POE. If we approximate a 25 POE for AEMO as the midpoint between the two then we get 9415 MW. Which means we get pretty close using just population and temperature, some free data and software, and a little bit of knowledge (which we know is a dangerous thing).

GLM Fit to AEMO Model

This forecast is a significant downward revision on previous expectations, which has in part led to the accusations of dodgy forecasting and “gold plating” of the network. So what happens if I apply my technique again, but this time only on data up until February 2009? That was the last time we saw a really hot spell in South East Queensland. If new data has caused forecasts to be lowered, then going back this far should lead to a model that exceeds the current AEMO forecast. The purple line in the graph below is the result of this new model compared to actual, the first model and AEMO:

GLM Modelled Pre-2010

What we see here is much better fitting through the earlier period, some significant under-fitting of the hot summers of 2004 and 2005, but an almost identical result to the original GLM model in forecasting through 2012, 2013 and 2014. And still within the bounds of the AEMO 10 and 50 POE forecasts. Hindsight is always 20/20 vision, but there is at least prima facie evidence to say that the current AEMO forecast appears to be on the money and previous forecasts have been overcooked. It will be interesting to see what happens over the next few years. We should expect peak demand to exceed the 50 POE line once every two years and the 10 POE line once every ten years.

We have not seen the end of peak demand. The question is how far we are willing to trade off reliability in our electricity network to reduce the cost of accommodating peak demand. The other question is that all-of-system peak demand forecasting is all well and good, but where will the demand happen on the network, will it be concentrated in certain areas, and what are the risks to industry and consumers of lower reliability in these areas? I’ll tackle this question in my next post.

Retail Therapy

July 1, 2012 will probably be mostly remembered as the date Australia introduced a price on carbon. But another event took place which may be more significant in terms of how households and small businesses consume their electricity: the commencement of the National Energy Customer Framework (NECF). The NECF gives the Australian Energy Regulator (AER) the responsibility for (among other things) regulating retail electricity prices. Electricity retail prices continue to rise, driven mostly by increasing capital expenditure costs for networks. Electricity businesses, regulators and governments are increasingly turning their attention to Time of Use (TOU) pricing to help mitigate peak network demand and therefore reduce capital expenditure.

Change will be gradual to start with, however. A cynical observer may suggest that the NECF is no more than a website at present, but I believe that change is inevitable and it will be significant. Five states and the ACT have agreed to a phased introduction of the NECF following on from a 2006 COAG agreement, and the transition will be fraught with all of the complexities of introducing cross-jurisdictional regulatory reform.

There are basically two mechanisms that drive the cost of producing and delivering electricity. One is the weather (we use more in hot and cold weather) and the other is the cost of maintaining and upgrading the network that delivers the electricity. For the large retailers, the way to deal with the weather is to invest in both generation and retail, because one is a hedge for the other. These businesses are known as “gentailers”.

The network cost has traditionally been passed through as a regulated network tariff component of the retail price. The problem with this is that the network price structure often does not reflect actual network costs, which are driven by infrequent peak use, particularly for residential customers. Those who use a greater proportion of electricity during peak times add to the cost of maintaining capacity in the network to cope with the peak. But residential and other small consumers all pay the same rate. In effect, “peaky” consumers are subsidised by “non-peaky” customers.

It is not yet really clear how a price signal will be built into the retail tariff, but one policy option is for distributors to pass on costs that reflect an individual consumer’s load profile. The implications for government policy are interesting, but I’ll save those for another post. In this post, I’ll explore what the implications are from the retailer’s perspective in contestable markets.

I believe that this is potentially quite a serious threat to the business model for retailers for a number of reasons that I’ll get into shortly, but at the heart of the matter is data: lots of it, and what to do with it. Much of that data is flowing from smart meters in Victoria and NSW and will start to flow from meters in other states. A TOU pricing strategy not only requires data from smart meters but from many other sources as well.

Let’s have a quick recap on TOU. I have taken the following graph from a report we have prepared for the Victorian Department of Primary Industries which can be found here.

The idea of TOU is to define a peak time period where daily usage peaks and to charge more for electricity in this period. A two-part TOU will define other times as off-peak and charge a much lower tariff. There may also be shoulder periods either side of the peak where a medium tariff is charged.

How each of these periods is defined and how the tariff levels are set will determine whether the system as a whole collects the same revenue as when everyone is on a flat tariff. This principle is called revenue neutrality. That is, the part of the electricity system that supplies households and small businesses will collect the same revenue under the new TOU tariffs as under the old flat tariff.
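As a rough sketch, revenue neutrality can be checked by re-pricing a set of half-hourly load profiles under the old flat tariff and a candidate TOU tariff and comparing total revenue. The rates, period boundaries and file name below are purely illustrative.

```python
import pandas as pd

FLAT_RATE = 0.25  # $/kWh, illustrative flat tariff

def tou_rate(timestamp):
    """Illustrative $/kWh rate for the half-hour beginning at `timestamp`."""
    hour = timestamp.hour
    if 15 <= hour < 21:                                  # 3pm-9pm
        return 0.45 if timestamp.dayofweek < 5 else 0.20 # peak on weekdays only
    if 7 <= hour < 15 or 21 <= hour < 23:
        return 0.20                                      # shoulder
    return 0.10                                          # off-peak

# Assumed: a DataFrame with a half-hourly DatetimeIndex and one kWh column per customer.
loads = pd.read_csv("half_hourly_loads.csv", index_col=0, parse_dates=True)

flat_revenue = (loads * FLAT_RATE).sum().sum()
tou_rates = pd.Series([tou_rate(ts) for ts in loads.index], index=loads.index)
tou_revenue = loads.mul(tou_rates, axis=0).sum().sum()

print(f"flat: ${flat_revenue:,.0f}  TOU: ${tou_revenue:,.0f}")
```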

But this should by no means give comfort to retailers that they each will achieve revenue neutrality.

For example, we can see from the above graphs that even if revenue neutrality is achieved for all residential and SME customers combined, residential customers may be better off and SME customers worse off, or vice versa, while everything still totals to no change in revenue. If a retailer has a large share of customers in a “better off” category, then that will translate to a fall in revenue if the retailer passes on the network tariff with their existing margin. In fact, we find that residential bills, for example, may be reduced by up to five per cent, depending on the design of the network tariff.

Of course, this is just one segmentation of TOU; there could be many, many more sub-segments, all with different “better off” or “worse off” outcomes.

Revenue neutrality can also be affected by price elasticity (consumers reduce their peak consumption) or substitution (they move their peak usage to shoulder or off-peak periods and thus reduce their overall electricity bill). This means that retailers not only have to understand what the impact would be under the current state of electricity usage but also how the tariff itself will affect consumer behaviour.

Data is at the very centre of competitive advantage as this disruptive event unfolds in the retail electricity market. Indeed, the threat may not just be disruptive: for some retailers this may be an existential threat, especially as we see data-centric organisations such as telcos and ISPs entering the market. So far no large telcos have entered the market in Australia (as far as I know: please correct me on this if it has changed), but surely the elephants must be loitering outside the room if not already in it.

I think what is clear for incumbent electricity retailers is that “do nothing” is not an option. There must be a clear strategy around data and pricing, including technology, talent and process. Furthermore, the centrepiece must be time of use pricing excellence built on a deep capability with the data flowing from new technology meters and networks.

So what exactly are the key issues? The following list is by no means exhaustive but certainly gives some idea of the extent of data and the quantum of skills required to handle such complex analysis and interpretation.

Opt In or Opt Out?

I believe that TOU tariffs for small consumers are inevitable, but how will they roll out and how fast will the rollout be? The key policy decision will be whether to allow customers to opt in to TOU tariffs or opt out of a scheme which will otherwise be rolled out by default (a third option is to mandate it for all, but this is likely to be politically unpalatable). I think pressure on governments to act on electricity network costs means that the “opt in” option, if it is adopted by the AER, will by definition be a transitional process. But the imperative is to act quickly, because there is a lag between reducing peak demand and the flow-through to capital expenditure savings (this is another whole issue which I will discuss in a future post). This lag means that if take-up of TOU is too slow then the effect on the bottom line will be lost in the general noise of electricity consumption cycles: a case of a discount delayed is a discount denied. Retailers will have the right to argue for a phased introduction, but there will be pressure on governments and the AER to balance this against the public good.

Non-cyclical change in demand

In recent years we have seen a change in the way electricity is consumed. I won’t go into the details here because I have blogged on this before. Suffice it to say that it is one thing to understand from the data how a price may play out in the current market state, but it’s altogether another thing to forecast how this will affect earnings. This requires a good idea of where consumption is heading, and in turn this is affected by a range of recent disruptors including solar PV, changes in housing energy efficiency and changes in household appliance profiles. Any pricing scenario must also include a consumption forecast scenario. It would also be wise to have a way to monitor forecasts carefully for other black swans waiting to sweep in.

A whole-of-market view

The task of maintaining or increasing earnings from TOU pricing will be a zero-sum game. That is, if one retailer gets an “unfair share” of the “worse off” segments, then another retailer will get more of the “better off” segments, and it is likely that this will be a one-off re-adjustment of the market. There is a need for a sophisticated understanding of customer lifetime value, and this will be underpinned by also having a good understanding of market share by profitability. The problem is that smart meters (and the subsequent data for modelling TOU) will roll out in stages (Victoria is ahead of the other states, but I think the rollout will be inevitable across the National Electricity Market). The true competitive advantage for a retailer comes from estimating the demand profiles of customers still on accumulation meters and of those smart meter consumers who are with competitors. There are a range of data mining techniques to build a whole-of-market view, but equally important is a sound go-to-market strategy built to take advantage of these insights.

There will be winners and losers in the transition to TOU. For consumers, it could be argued that the “losers” are currently “winners” because the cost of their electricity supply is being subsidised by less “peaky” customers. There will also be winners and losers among energy retailers. Some of the winners may not even be in the market yet. The question is who will the losers be?

Critical Peak Price or Critical Peak Rebate?

Australians pay billions of dollars every year for an event that usually doesn’t happen: a critical demand peak on the electricity network. Electricity networks are designed to ensure continuous supply of electricity regardless of the demand placed on them. Every few years we are likely to experience a heat wave or cold snap that drives up simultaneous demand for energy across the network. The infrastructure required to cope with this peak in demand is very expensive; infrastructure that is not used except during these relatively rare events.

Shaving even just a very small amount of demand off these peak days has the potential to save up to $1.2b each year nationally according to a recent report by Deloitte. The tricky part is to try and target the peaks rather than drive down energy consumption in non-peak times. It’s this non-peak consumption that pays the bills for infrastructure investment. If distributors get less revenue and their peak infrastructure costs stay the same then prices have to go up. This is one of the big reasons why electricity prices have risen so steeply in recent years.

One way to do this is to send a price signal or incentive for consumers to moderate their demand on peak days. Last year at the ANZ Smart Utilities Conference in Sydney, Daniel Collins from Ausgrid gave an interesting presentation comparing the benefits for distributors of offering critical peak pricing versus a critical peak rebate. A critical peak price is where the network issues a very steep increase in the electricity price on a handful of days each year. This price might be as much as ten times the usual electricity price. Under a critical peak rebate scheme, consumers are charged the same amount on peak days but are given a rebate by the distributor if they keep their peak below a pre-defined threshold.
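To make the difference concrete, here is a toy comparison of what a single event day might cost a household under each scheme. All of the rates, thresholds and usage figures are invented for illustration; they are not Ausgrid’s numbers.

```python
BASE_RATE = 0.25           # $/kWh, usual price (illustrative)
CPP_RATE = BASE_RATE * 10  # critical peak price: roughly ten times the usual price
REBATE = 25.0              # $ paid if peak demand stays under the threshold (illustrative)
THRESHOLD_KW = 2.0         # pre-defined peak demand threshold (illustrative)

def cpp_cost(event_day_kwh: float) -> float:
    """Cost of the event day under critical peak pricing."""
    return event_day_kwh * CPP_RATE

def cpr_cost(event_day_kwh: float, household_peak_kw: float) -> float:
    """Cost of the event day under a critical peak rebate."""
    cost = event_day_kwh * BASE_RATE
    if household_peak_kw <= THRESHOLD_KW:
        cost -= REBATE
    return cost

print(cpp_cost(20.0))       # 20 kWh on the event day under CPP
print(cpr_cost(20.0, 1.8))  # same usage, with the peak held under the threshold
```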

In an electricity market where distributors cannot own retailers (the most common type of market in Australia) it is very difficult for price signals set by distributors to reach end consumers. This is because the distributors charge retailers, and retailers then set the price and product options offered to consumers. Distributor price signals can get obscured in this process. In this type of market, critical peak prices are unlikely to be mandated by government because that goes against a policy of deregulation and is highly politically unpalatable in an environment of rapidly increasing electricity prices. The only option for distributors then is an opt-in price.

The effectiveness of such a price is therefore highly dependent on the opt-in rate, and given that the consumers likely to opt in are only those who do not stand to lose under such a price, the overall savings may be quite low.

A more interesting concept is the critical peak rebate. For a start, the rebate is given by the distributor directly, which avoids the incentive being obscured by retail pricing. Such a scheme is also likely to attract a much greater uptake than opt-in peak pricing. The tricky part, however, is the design. How much rebate should be offered? Which consumers should be targeted, and will they be interested? How do we set the upper demand limit?

It would be a mistake to offer the same deal to all consumers as it is very hard to offer a general incentive with significant return. A badly designed rebate could easily cost more to administer than it saves. There are four crucial elements that need to be considered in the design of a CPR.

How to measure the benefit?

This is quite tricky, but it is by far the most important design element. There is a lag time between energy peaks on the network and infrastructure costs. This is because infrastructure spending is usually allocated on a five-year cycle based on forecasts developed from historical peak demand data. It is vital that a scheme is designed to capture the net savings in peak demand and that there is a process to feed this data into the forecasting process. Unfortunately, I have never seen a demand management team feed data to a forecasting team.

Critical Pricing Customer Engagement Strategy

Who do we target?

The first issue is to work out which consumers have high peaking demand and are likely to take up the incentive. There should also be consideration of how data will be collected and analysed during the roll-out of the program, and how this data will be used to continually drive better targeting of the program. The problem with a one-size-fits-all scheme is that there may be a number of different groups who have different motivations for curtailing their peak demand. For example, the rebate’s financial incentive may be set for the average consumer but may not be high enough to appeal to a wealthy consumer. But there may be other ways to appeal to these customers, such as offering a donation to a charity if the peak demand saving target is reached. It is therefore important to think about a segmentation approach to targeting the right customers with the right offer.

What price, demand threshold and event frequency do we set?

Pricing the incentive is a three-dimensional problem: target demand threshold, price and frequency of events. Each of these affects the total benefit of the scheme, and the consumer trade-offs need to be understood. The danger here again is relying on averages. Different cohorts of customers will have different trade-off thresholds, and an efficient design is vital to the effectiveness of the incentive. It is unlikely that there is room to vary the rebate amount based on customer attributes, but there is certainly room to design individualised demand thresholds and maybe also the frequency with which events are called for different cohorts of customers.

How do we refine the program?

In the rush to get new programs to market, response data and customer intelligence feedback are often not well considered. It is important that there is a system for holding data and routines for measuring response against control groups for each treatment group in the program, so that incremental benefits can be measured and so that data can be fed back into improving the models which select customers for the program. Incremental benefits of the program should also feed back into refining the pricing of the rebate and the target demand thresholds. Understanding which customers respond, and the quantum of that response, are valuable insights into customer behaviour which distributors do not usually have the ability to capture in the normal course of their business. These are all good reasons for running a well-designed CPR program.

3 Common Misconceptions about Prediction and Forecasting (and what we can do about it)

1. Prediction is about understanding the future

We humans have a lot of difficulty understanding the subtleties of time. It is important to remember how little we intuitively understand about the nature of time when building or interpreting forecasts and predictive models. While I have built models that, for example, predict a customer’s propensity to churn reasonably well, weather forecasts for a given locality might at best predict only a few days into the future, and this is the best we can do even with the most powerful predictive models ever built. The difference is that the former “predicts” human behaviour whereas the latter tries to peer into the future of a complex stochastic system. Predictive modelling works best when trying to predict human behaviour because it is a human invention bounded by human experience. Modelling does not predict a priori. I prefer to think of predictive modelling as projected behavioural modelling; prediction sounds too good to be true. Traditional forecasting tries to project past trends and asks: if recent past conditions prevail, what will the future look like? This is a fundamental misunderstanding of the nature of time. We have seen this break down significantly in recent times with energy consumption forecasting. There have been a range of significant disruptors such as the global financial crisis, new appliance technology, distributed generation and changing housing standards, to name a few. Some of these things have been foreseeable and others have not, but none of them appear in the past record of energy consumption, which is the prerequisite for a traditional forecast model.

2. My forecast is correct therefore my assumptions are correct

Just because a given forecast comes to pass does not mean that the model is without flaw. I am reminded of both Donald Rumsfeld and Edward Lorenz in debunking this. Lorenz discovered patterns that are locally stable and may replicate themselves for a period of time, but are guaranteed not to do so indefinitely. This is at the heart of chaos theory, and every good modeller should understand it. The conditions which cause patterns to break down are sometimes what Rumsfeld refers to as unknown unknowns. There’s not much we can do about those except to try and imagine them, or else be agile enough to recognise them once they start to unfold. But there are also “known unknowns”: those things which we know we don’t know.

3. My forecast was correct given the data we had at the time

My golden rule is that all forecasts are wrong – they are just wrong in different ways. Sometimes the biggest problem is when a forecast or prediction comes to pass. If a forecast comes to pass, it is not knowable to what extent the success was due to the efficacy of the model. I am reminded of Tarot readings. It is easiest to convince someone about a prediction when it confirms the observer’s own bias, and none of us are without bias. And there is always a get-out clause if the model does not continue to predict well. This is relatively harmless if the prediction is about the likelihood of meeting a tall, handsome stranger, but more significant if it is a prediction about network energy consumption. In the case of the latter, it’s not good enough to say that was the best we could do at the time.

So what can we do about it?

The reason we build forecasts is to provide an evidential basis for decision-making that minimises risk. It is therefore a crazy idea that major investment decisions can be made on a single forecast. It is like putting your entire superannuation on black at the roulette wheel. The first step in reducing risk in prediction and forecasting is to try to understand (or imagine) the range of unknowns that may occur. For example, we know that financial crises, bushfires and floods all occur and we have some idea of how extreme they might be. We even have a pretty good idea of their probability of occurrence. We just don’t know when they will occur. In terms of energy forecasting, we know certain disruptors are unfolding now, such as weather variability and distributed generation, but we don’t quite know how fast and to what extent. Known knowns and known unknowns.

While we don’t know what will happen in the future, we do have a pretty good understanding of how different populations will behave under certain conditions. The solution therefore is to simulate population behaviours under a range of scenarios to get an understanding of what might happen at the extremes of the forecast, rather than relying on the average or “most likely” forecast.

The answer in my opinion is multiple simulation. Instead of building one forecast or prediction, we build a range of models, either with different assumptions, different methodologies or (preferably) both. That way we can build a view of the range of forecasts and associated risks. What I need to know is not what the weather is going to do, but whether I should plan a camping trip or not. Multiple simulated prediction gives us the tools we need to do what we as humans do best: make decisions based on complex and sometimes ambiguous information.
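A minimal sketch of what multiple simulation can look like in practice: run a fitted model (here an entirely made-up stand-in) over thousands of sampled input scenarios and read risk off the resulting distribution of outcomes rather than off a single central forecast.

```python
import numpy as np

rng = np.random.default_rng(0)

def forecast_peak_mw(hot_day_temp, population_m):
    """Stand-in for a fitted model relating peak demand to its drivers (illustrative)."""
    return 150 * hot_day_temp + 900 * population_m

scenarios = []
for _ in range(10_000):
    temp = rng.normal(loc=34.0, scale=2.5)  # sampled summer hot-day temperature
    pop = rng.normal(loc=5.0, scale=0.1)    # sampled population in millions
    scenarios.append(forecast_peak_mw(temp, pop))

scenarios = np.array(scenarios)
print("median outcome:", np.percentile(scenarios, 50))   # roughly a 50 POE analogue
print("1-in-10 outcome:", np.percentile(scenarios, 90))  # roughly a 10 POE analogue
```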

From CRM to ARM: what utilities can learn from banks about maximising value

Last week in Brisbane a small metal clamp holding an overhead electric cable failed causing a meltdown on the Queensland Rail network and leading to the government compensating commuters with a free day of travel. I expect that there are tens or hundreds of thousands of these clamps across the network and in all likelihood they are all treated in more or less the same way and assigned the same value.

There are interesting parallels between the current transformation of utilities to the smart grid and what happened in banks with customer analytics at the turn of the millennium. Can we use over a decade of the banking industry’s experience with customer relationship management (CRM) to move towards a principle of asset “relationship” management (ARM)?

When I became involved in my first large CRM project over ten years ago, CRM was at that point only concerned with the “kit” – the software and hardware that formed the operational aspects of CRM – and not with the ecology of customer data where the real value of CRM lay. To give just one example: we built a system for delivering SMS reminders which was very popular with customers, but when we went to understand why it was so successful we realised that we had not recorded the contact in a way that was easy to retrieve and analyse. If we had designed CRM from the point of view of an ecology of customer data then we would have been able to leverage insight from the SMS reminder initiative faster and at lower cost.

Once we understood this design principle we were able to start delivering real return on investment in CRM, including developing a data construct of the customer which spanned the CRM touch points, point of sale, transactional data systems and data which resided outside of the internal systems, including public data and data supplied by third-party providers. We also embarked on standardising processes for data capture, developing common logical data definitions across multiple systems and then developing an analytical data environment. The real CRM came into being once we had developed this whole data ecology of the customer, which enabled a sophisticated understanding of customer lifetime value and the capacity to build a range of models which predict customer behaviour and provide platforms for executing on our customer strategy.

The term “relationship” has some anthropological connotations and it may seem crazy to apply this thinking to network assets.  From a customer strategy perspective, however, it has a purely logical application: how can we capture customer interactions to maximise customer lifetime value, increase retention and reduce the costs of acquiring new customers?

If we look at customer value drivers we see some parallels with capital expenditure and asset management. Cost to acquire is roughly synonymous with asset purchase price. Lifetime value applies to both a customer and an asset. Cost to serve for a customer parallels the cost to maintain an asset. Customer retention is equivalent to asset reliability. The difference with advanced analytical CRM is that these drivers are calculated not as averages across customer classes but for every single customer.

The development of smart devices and the associated data environments necessary to support smart grid now enables utilities to look at a similar approach. Why can we not develop an analytical environment in which we capture attributes for, say, 30 million assets across a network so that we can identify risks to network operation before they happen?

If we could assign an expected life (and therefore a predicted probability of failure) to the metal clamp between Milton and Roma Street stations, assign a value-to-network based on the downstream consequences of its failure, and balance these against a cost to maintain or replace, then we would be applying the same lessons that banks have learnt from understanding CRM and customer lifetime value.
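As a closing sketch, here is the kind of calculation this implies: an expected cost of failure for each asset (probability of failure times the downstream consequence, given its position in the network) weighed against the cost of proactive replacement. Every figure below is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    asset_id: str
    prob_failure_next_year: float  # from an expected-life / survival model
    consequence_cost: float        # $ impact of failure, given network position
    replacement_cost: float        # $ cost to replace proactively

def expected_failure_cost(a: Asset) -> float:
    """Probability of failure multiplied by the downstream consequence of failure."""
    return a.prob_failure_next_year * a.consequence_cost

# Two hypothetical clamps: identical hardware, very different value to the network.
critical_clamp = Asset("overhead-clamp-0001", 0.02, 1_500_000.0, 400.0)
low_impact_clamp = Asset("overhead-clamp-0002", 0.02, 5_000.0, 400.0)

for a in (critical_clamp, low_impact_clamp):
    action = "replace" if expected_failure_cost(a) > a.replacement_cost else "run to failure"
    print(a.asset_id, round(expected_failure_cost(a)), action)
```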