Have We Seen the End of Peak Demand?

There has been a lot of comment in the media lately about how dodgy forecasts have impacted retail electricity bills. Is this really the case? Has peak demand peaked? Have we over-invested in peaking capacity? I don’t propose to come up with a definitive answer here, but by exploring forecasting methodologies I hope to show why such predictions are so hard to make. In this post I am going to show that a pretty good model can be developed using free software and a couple of sources of publicly available data (ABS, BOM) on a wet Melbourne Saturday afternoon. To cheer me up I am going to use Queensland electricity data from AEMO and concentrate on summer peak demand. I am then going to apply this technique to data only up to summer 2009 and compare the result with the recently downward-revised AEMO forecast.

But first let’s start with a common misconception. The mistake many commentators make is confusing the economics of electricity demand with the engineering of the network for peak capacity. Increasing consumption of electricity will impact wholesale prices of electricity. To a lesser extent it will also affect retail prices as retailers endeavour to pass on costs to consumers. The main driver of increased retail electricity prices, however, is network costs; specifically the cost of maintaining enough network capacity for peak demand periods.

Let’s start by looking at some AEMO data. The following chart shows total electricity consumption by month for Queensland from 2000 to 2011.

Queensland Energy Consumption

We can see from this chart that total consumption started to fall from around 2010. Interestingly, though, peakiness has increased from about 2004, with summers showing much greater electricity usage than the other seasons.

If we overlay this with peak demand then we see some interesting trends.

Consumption versus Demand

What we see from 2006 onwards is an increasing separation between peak demand and total consumption. There are a couple of factors underlying this decoupling. One is the increased energy efficiency of homes, driven by energy efficient building standards and other schemes such as the home insulation scheme. The other is the rapid uptake of solar power. Generous feed-in tariffs have encouraged widespread uptake of solar panels, which has decreased the amount of energy consumed from the grid except at peak times. A solar panel will reduce electricity consumption during the day, but during warm summer evenings when the sun has disappeared air conditioners will run heavily on network electricity. The implication of the decoupling of peak demand from total consumption is that we either have to pay more for our electricity to maintain the same standard of supply or accept lower reliability of supply, especially at the times when we most need it: very hot and very cold days.

When we overlay temperature on peak demand we see generally summer peaking, which is typical for Queensland. We also see that maximum temperatures were higher earlier in the decade and then generally cooler in the last three years. It is important to remember that what we are seeing is a longer wave of variability, not a trend. This is often understood but not properly accounted for in forecasting temperature-variant behaviour.

Demand versus Temperature

The above chart does not use maximum monthly temperature but the average maximum of the hottest four days of each month. Those who have studied electricity usage behaviour know that the highest peak often occurs after a run of hot days. By averaging the hottest few days of each month we get a measure that captures both the peak temperature and the temperature run. It is not necessary for this purpose to explicitly calculate consecutive days because temperature is not randomly distributed: it tends to cluster anyway. Another way to capture this is to count the number of days above a given temperature. Both types of variable can perform well in models such as these.
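As a rough sketch of how those two temperature variables might be derived with free software (Python and pandas here; the file and column names are my own placeholders, not the actual BOM extract):

    import pandas as pd

    # daily maximum temperatures, e.g. exported from BOM climate data
    daily = pd.read_csv("bom_daily_max.csv", parse_dates=["date"], index_col="date")

    months = daily["max_temp"].resample("MS")  # group days into calendar months
    features = pd.DataFrame({
        # mean of the four hottest days: captures the peak and the "run" of heat
        "hottest4_avg": months.apply(lambda s: s.nlargest(4).mean()),
        # the alternative measure: count of days above a threshold
        "days_over_35": months.apply(lambda s: (s > 35).sum()),
    })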

We can see from this chart that peak demand continues to rise despite variability caused by temperature. The next step then is to add variables that describe the increase in peak. In my experience population usually performs the best, but in this case I’ll also test a couple of economic time series measures from the ABS National Accounts.

I also create a dummy variable to flag June, July and August as winter months. My final dataset looks like this:

Data snapshot

Preparation of data is the most important element of analytics. It is often difficult, messy and time-consuming work, but something that many of those new to analytics skip over.

In this exercise I have created dummy variables and eventually discarded all except a flag indicating whether a particular month is a winter month, as per the data shown above. This will allow the model to treat minimum temperature differently during cold months.
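Continuing the sketch above, the winter flag and an interaction term that lets the model treat minimum temperature differently in cold months might look like this (the min_temp column is again a placeholder, assumed to have been joined from the same BOM data):

    # flag June, July and August as winter months
    features["winter"] = features.index.month.isin([6, 7, 8]).astype(int)

    # interaction term: minimum temperature is only "switched on" in winter,
    # so cold snaps can drive winter peaks without distorting the summer fit
    features["winter_min_temp"] = features["winter"] * features["min_temp"]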

Another common mistake is to assume that extremes such as peak demand can only be modelled on the extreme observations. In this case I look at peak demand in all months in order to fit the summer peaks, rather than just modelling the peaks themselves. This is because there is important information in how consumer demand varies between peak and non-peak months. This way the model is not just a forecast but a high-level snapshot of population response to temperature stimulus. Extreme behaviour is defined by the variance from average behaviour.

My tool of choice is the GLM (Generalised Linear Model), which lets me experiment with categorical variables (e.g. is it winter? yes/no), various distributions of peak demand (e.g. normal or gamma), and whether I want to fit a linear or logarithmic line to the data.

After a good deal of experimentation I end up with a very simple model which exhibits good fit, with each of the predictor variables significant at greater than the 95% level. For the stats-minded, here is the output:

GLM Output
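For anyone wanting to replicate the approach, here is a minimal sketch of fitting such a model with Python’s statsmodels. I am not claiming this is the exact software or specification used here; the data frame df and its column names are illustrative.

    import statsmodels.api as sm

    predictors = ["population", "hottest4_avg", "winter", "winter_min_temp"]
    X = sm.add_constant(df[predictors])
    y = df["peak_demand_mw"]

    # a gamma family with a log link suits positive, right-skewed demand;
    # swap in sm.families.Gaussian() to compare a plain linear fit
    glm = sm.GLM(y, X, family=sm.families.Gamma(link=sm.families.links.Log()))
    result = glm.fit()
    print(result.summary())  # coefficients, standard errors, p-values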

You will notice that I have just four variables from two data sources left in my model. Economic measures did not make it to the final model. I suspect that population growth acts as a proxy for macroeconomic growth over time both in terms of number of consumers and available labour supporting economic output.

Another approach borrowed from data mining that is not always used in forecasting is to hold out a random test sample of data which the model is not trained on, but which is used to validate goodness-of-fit statistics. The following charts show the R-squared fit against both the data used to train the model and the held-out validation dataset.

Model Fit - Training Data

Model Fit - Test Data
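A sketch of that holdout process, reusing the hypothetical df and predictors from above:

    import statsmodels.api as sm
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    train, test = train_test_split(df, test_size=0.25, random_state=1)

    fit = sm.GLM(train["peak_demand_mw"], sm.add_constant(train[predictors]),
                 family=sm.families.Gamma(link=sm.families.links.Log())).fit()

    # goodness of fit on data the model has seen, and on data it has not
    for name, part in [("training", train), ("holdout", test)]:
        pred = fit.predict(sm.add_constant(part[predictors]))
        print(name, "R-squared:", r2_score(part["peak_demand_mw"], pred))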

We can be confident on the basis of this that our model explains about 80% of the variance in peak demand over the last decade (with, I suspect, the balance being explained by a combination of solar PV, household energy efficiency programs, industrial use and “stochastic systems”: complex interactive effects that cannot be modelled in this way).

Another way to look at this is to visually compare the predicted peak demand against actual peak demand as done in the following graph.

GLM Model - Predicted versus Actual

We can see from this chart that the model tends to overestimate demand in the earlier part of the period and underestimate it at the end. I am not too concerned about that, however, as I am trying to fit an average over the period so that I can extrapolate an extreme, and I will show that this has only a small impact on the short term forecast. This time series does contain one particularly big disruption: the increased penetration of air conditioning. We know that the earlier part of the period includes relatively low air conditioner penetration (and we have now most likely reached maximum penetration). Counteracting this is the fact that the later period includes households with greater energy efficiency. These effects counteract each other. As with weather, you can remove variability if you take a long enough view.

Let’s see what happens if we take temperature up to a 10 POE level and forecast out three years to November 2014. That is, what happens if we feed 1-in-10-year temperatures into the model? I emphasise that this is 10 POE temperature, not 10 POE demand.

GLM - 10 POE Temperature Prediction
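There are several ways to construct a 10 POE temperature; one simple empirical sketch, continuing from the hypothetical feature table above, is to take the 90th percentile of each calendar month’s historical values, so every month in the scenario gets a 1-in-10-year temperature:

    # 10 POE ~ the level exceeded in only 1 year in 10, i.e. the 90th percentile
    poe10 = features.groupby(features.index.month)["hottest4_avg"].quantile(0.90)

    scenario = features.copy()
    scenario["hottest4_avg"] = scenario.index.month.map(poe10)  # feed to the model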

We see from this chart that actual demand exceeded our theorised demand three times (2005, 2007 and 2010) out of 12 years. Three years out of twelve can be considered 25 POE; in other words, the actual peak exceeds the theorised peak 25% of the time over a twelve-year period.

2010 appears to be an outlier, as overall the summer was quite mild. There was however a spike of very warm weather in South East Queensland in January which drove a peak not well predicted by my model. The month also recorded very cool temperatures, which caused my model to drag down peak demand. This is consistent with the concept of probability of exceedance: there will be observed occurrences that exceed the model.

The final test of my model will be to compare back to the AEMO model. My model predicts a 2013/14 summer peak of 2309 MW at 25 POE. The 50 POE summer peak forecast for 2013/14 under the Medium scenario for AEMO is 9262 MW, and 9568 MW at 10 POE. If we approximate a 25 POE for AEMO as the midpoint between the two then we get 9415 MW. That means we get pretty close using just population and temperature, some free data and software, and a little bit of knowledge (which we know is a dangerous thing).

GLM Fit to AEMO Model

This forecast is a significant downward revision on previous expectations, which has in part led to the accusations of dodgy forecasting and “gold plating” of the network. So what happens if I apply my technique again, but this time only on data up until February 2009? That was the last time we saw a really hot spell in South East Queensland. If new data has caused forecasts to be lowered, then going back this far should lead to a model that exceeds the current AEMO forecast. The purple line in the graph below is the result of this new model compared to actual demand, the first model and AEMO:

GLM Modelled Pre-2010

What we see here is much better fitting through the earlier period, some significant under-fitting of the hot summers of 2004 and 2005, but an almost identical result to the original GLM model in forecasting through 2012, 2013 and 2014, and still within the bounds of the AEMO 10 and 50 POE forecasts. Hindsight is always 20/20 vision, but there is at least prima facie evidence to say that the current AEMO forecast appears to be on the money and previous forecasts were overcooked. It will be interesting to see what happens over the next few years. We should expect peak demand to exceed the 50 POE line once every two years and the 10 POE line once every ten years.

We have not seen the end of peak demand. The question is how far we are willing to trade off reliability in our electricity network to reduce the cost of accommodating peak demand. The other question is that all-of-system peak demand forecasting is all well and good, but where on the network will the demand happen, will it be concentrated in certain areas, and what are the risks to industry and consumers of lower reliability in those areas? I’ll tackle this question in my next post.

Retail Therapy

July 1, 2012 will probably be mostly remembered as the date Australia introduced a price on carbon. But another event took place which may be more significant in terms of how households and small businesses consume their electricity: the commencement of the National Energy Customer Framework (NECF). The NECF gives the Australian Energy Regulator (AER) the responsibility for (among other things) regulating retail electricity prices. Electricity retail prices continue to rise, driven mostly by increasing capital expenditure costs for networks. Electricity businesses, regulators and governments are increasingly turning their attention to Time of Use (TOU) pricing to help mitigate peak network demand and therefore reduce capital expenditure.

Change will be gradual to start with, however. A cynical observer may suggest that the NECF is no more than a website at present, but I believe that change is inevitable and it will be significant. Five states and the ACT have agreed to a phased introduction of the NECF following on from a 2006 COAG agreement, and the transition will be fraught with all of the complexities of introducing cross-jurisdictional regulatory reform.

There are basically two mechanisms that drive the cost of producing and delivering electricity. One is the weather (we use more in hot and cold weather) and the other is the cost of maintaining and upgrading the network that delivers the electricity. For the large retailers, the way to deal with the weather is to invest in both generation and retail, because one is a hedge for the other. These businesses are known as “gentailers”.

The network cost has traditionally been passed through as a regulated network tariff component of the retail price. The problem with this is that the network price structure often does not reflect actual network costs, which are driven by infrequent peak use, particularly for residential customers. Those who use a greater proportion of electricity during peak times add to the cost of maintaining capacity in the network to cope with the peak, but residential and other small consumers all pay the same rate. In effect, “peaky” consumers are subsidised by “non-peaky” customers.

It is not yet really clear how a price signal will be built into the retail tariff, but one policy option is for distributors to pass through costs that reflect an individual consumer’s load profile. The implications for government policy are interesting, but I’ll save those for another post. In this post, I’ll explore the implications from the retailer’s perspective in contestable markets.

I believe that this is potentially quite a serious threat to the business model for retailers for a number of reasons that I’ll get into shortly, but at the heart of the matter is data: lots of it, and what to do with it. Much of that data is flowing from smart meters in Victoria and NSW and will start to flow from meters in other states. A TOU pricing strategy not only requires data from smart meters but also many other sources as well.

Let’s have a quick recap on TOU. I have taken the following graph from a report we have prepared for the Victorian Department of Primary Industries which can be found here.

The idea of TOU is to define a peak time period where the daily usage peaks and charge more for electricity in this time period. A two part TOU will define other times as off peak and charge a much lower tariff. There may also be shoulder periods either side of the peak where a medium tariff is charged.

How each of these periods is defined and the tariff levels set will determine whether the system as a whole will collect the same revenue as when everyone is on a flat tariff.  This principle is called revenue neutrality. That is, the part of the electricity system that supplies households and small businesses will collect the same revenue under the new TOU tariffs as under the old flat tariff.

But this should by no means give comfort to retailers that they each will achieve revenue neutrality.

For example, we can see from the above graphs that even if revenue neutrality is achieved for all residential and SME customers combined, residential customers may be better off and SME customers worse off, or vice versa, while everything still totals to no change in revenue. If a retailer has a large share of customers in a “better off” category then that will translate to a fall in revenue if the retailer passes on the network tariff with their existing margin. In fact we find that residential bills, for example, may be reduced by up to five per cent, depending on the design of the network tariff.
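A toy numerical sketch makes this concrete. Everything here is invented for illustration: synthetic half-hourly profiles, a made-up peak window and made-up rates. The point is that system-level revenue balances while individual bills move both ways.

    import numpy as np

    rng = np.random.default_rng(0)
    profiles = rng.gamma(shape=2.0, scale=0.25, size=(1000, 48))  # kWh, 1,000 customers
    peak = slice(28, 40)                  # 2pm-8pm as an illustrative peak window
    flat_rate, offpeak_rate = 0.25, 0.15  # $/kWh, invented

    target_revenue = profiles.sum() * flat_rate  # revenue under the flat tariff
    peak_kwh = profiles[:, peak].sum()
    offpeak_kwh = profiles.sum() - peak_kwh

    # solve for the peak rate that keeps total system revenue unchanged
    peak_rate = (target_revenue - offpeak_kwh * offpeak_rate) / peak_kwh

    bills_flat = profiles.sum(axis=1) * flat_rate
    peak_use = profiles[:, peak].sum(axis=1)
    bills_tou = peak_use * peak_rate + (profiles.sum(axis=1) - peak_use) * offpeak_rate

    print(np.isclose(bills_flat.sum(), bills_tou.sum()))       # revenue neutral: True
    print("share better off under TOU:", (bills_tou < bills_flat).mean())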

Of course, this is just one segmentation of TOU; there could be many, many more sub-segments, all with different “better off” or “worse off” outcomes.

Revenue neutrality can also be affected by price elasticity (consumers reduce their peak consumption) or substitution (they move their peak usage to shoulder or off-peak times, thus reducing their overall electricity bill). This means that retailers not only have to understand what the impact would be under the current state of electricity usage but also how the tariff itself will affect consumer behaviour.
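A first-order way to layer that behaviour onto the sketch above is a simple elasticity adjustment; the elasticity value below is purely an assumption for illustration:

    elasticity = -0.1  # assumed short-run own-price elasticity of peak demand
    price_change = (peak_rate - flat_rate) / flat_rate
    adjusted_peak_kwh = peak_kwh * (1 + elasticity * price_change)
    # a tariff set for revenue neutrality before the behavioural response
    # no longer collects the target revenue once consumers react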

Data is at the very centre of competitive advantage as this disruptive event unfolds in the retail electricity market. Indeed the threat may not just be disruptive: for some retailers this may be an existential threat, especially as we see data-centric organisations such as telcos and ISPs entering the market. So far no large telcos have entered the market in Australia (as far as I know: please correct me if this has changed) but surely the elephants must be loitering outside the room if not already in it.

I think what is clear for incumbent electricity retailers is that “do nothing” is not an option. There must be a clear strategy around data and pricing, including technology, talent and process. Furthermore, the centrepiece must be time of use pricing excellence built on a deep capability with data flowing from new technology meters and networks.

So what exactly are the key issues? The following list is by no means exhaustive but certainly gives some idea of the extent of data and the quantum of skills required to handle such complex analysis and interpretation.

Opt In or Opt Out?

I believe that TOU tariffs for small consumers are inevitable, but how will it roll out and how fast will the rollout be? The key policy decision will be whether to allow customers to opt in to TOU tariffs or opt out of a scheme which will otherwise be rolled out by default (a third option is to mandate to all, but this is likely to be politically unpalatable). I think pressure on governments to act on electricity network costs means that the “opt in” option, if it is adopted by the AER, will by definition be a transitional process. But the imperative is to act quickly because there is a lag between reducing peak demand and the flow through to capital expenditure savings (this is another whole issue which I will discuss in a future post). This lag means that if take up of TOU is too slow then the effect to the bottom line will be lost in the general noise of electricity consumption cycles: a case of a discount delayed is a discount denied. Retailers will have the right to argue for a phased introduction but there will be pressure on governments and the AER to balance this against the public good.

Non-cyclical change in demand

In recent years we have seen a change in the way electricity is consumed. I won’t go into the details here because I have blogged on this before. Suffice to say that it is one thing to understand from the data how a price may play out in the current market state, but it’s altogether another thing to forecast how this will affect earnings. This requires a good idea of where consumption is heading, and in turn this is affected by a range of recent disruptors including solar PV, changes in housing energy efficiency and changes in household appliance profiles. Any pricing scenario must also include a consumption forecast scenario. It would also be wise to have a way to monitor forecasts carefully for other black swans waiting to sweep in.

A whole of market view

The task of maintaining or increasing earnings from TOU pricing will be a zero sum game. That is, if one retailer gets an “unfair share” of the “worse off” segments, then another retailer will get more of the “better off” segments and it is likely that this will be a one-off re-adjustment of the market. There is a need for a sophisticated understanding of customer lifetime value and this will be underpinned by also having a good understanding of market share by profitability. The problem is that smart meters (and the subsequent data for modelling TOU) will roll out in stages (Victoria is ahead of the other states, but I think the rollout will be inevitable across the National Electricity Market). The true competitive advantage for a retailer comes from estimating the demand profiles of customers still on accumulation meters and those smart meter consumers who are with competitors. There are a range of data mining techniques to build a whole-of-market view but equally important is a sound go-to-market strategy built to take advantage of these insights.
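One hedged sketch of that estimation problem: cluster the demand profiles you can observe, then predict cluster membership for the customers you cannot. All inputs here are hypothetical placeholders (smart_profiles, smart_attrs, accum_attrs):

    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier

    # smart_profiles: half-hourly demand for your own smart-metered customers;
    # smart_attrs / accum_attrs: attributes known for everyone, e.g. postcode,
    # dwelling type, quarterly billed consumption
    archetypes = KMeans(n_clusters=5, random_state=0).fit(smart_profiles)

    clf = RandomForestClassifier(random_state=0)
    clf.fit(smart_attrs, archetypes.labels_)

    # estimated demand archetype for accumulation-meter customers
    estimated = clf.predict(accum_attrs)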

There will be winners and losers in the transition to TOU. For consumers, it could be argued that the “losers” are currently “winners” because the cost of their electricity supply is being subsidised by less “peaky” customers. There will also be winners and losers among energy retailers. Some of the winners may not even be in the market yet. The question is who will the losers be?

Text Mining Public Perception of Smart Meters in Victoria

Today the Herald Sun ran a story proclaiming that smart meters are here to stay and invited readers to comment on whether the government should scrap the smart meter program. I am not going to comment here on the journalistic quality of the article but will concentrate on the comments section, which gives stakeholders some valuable insight into the zeitgeist of smart metering in the Garden State.

By applying an unstructured text mining application I have extracted the key themes from the comments on this story. When analysed in conjunction with the structure and content of the story, we get some interesting insights into public perception.

To start with I excluded the words “smart”, “meter” and “meters” in order not to be distracted by the subject under discussion. This is what I got.
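For those who want to try this at home, the exclusion step and the cloud itself can be reproduced in a few lines with the Python wordcloud package (comments is assumed to be a list of scraped comment strings):

    from wordcloud import WordCloud, STOPWORDS

    # exclude the subject words so they don't drown out everything else
    stopwords = STOPWORDS | {"smart", "meter", "meters"}

    cloud = WordCloud(stopwords=stopwords, width=800, height=400,
                      background_color="white").generate(" ".join(comments))
    cloud.to_file("smart_meter_cloud.png")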

Word clouds often seem to point to a collective meaning that is independent of individual attitudes. If this is the case then the strong message here is one we could interpret as a collective rejection of what is seen as government control being favoured over the wishes of the “people”. This may, however, be more a reflection of the Herald Sun readership than of general community concern.

If I remove “government” and “power” we get a closer look at the next level of the word cloud.

An aside of note: Herald Sun readers like to refer to the premier by his first name, which is perhaps a sign that he still has popularity with this demographic.

One interesting observation to me is that despite its prominent mention in the article, the myth of radio frequency radiation from smart meters is not a major concern to the community, so we are unlikely to see a repeat of the tin foil hat fiasco in California.

Once we get into some of the word cloud detail, we see the common themes relating to “cost of living”, namely the additional costs to the electricity bill of the rollout and the potential costs associated with time of use pricing. The article does mention that time of use pricing is an opportunity for households to save money. Time of use pricing is also a fairer pricing regime than flat tariffs.

The other important theme that I see is that the smart meter rollout is linked to the other controversial big technology projects of the previous Victorian government – Myki and the Wonthaggi Desalination Plant. But the good news is that the new government still has some cachet with the public (even in criticism, readers often refer to the premier by his first name). The objective now should be to leverage this and start building smart meter initiatives which demonstrate the value of the technology directly to consumers. This in part requires unlocking the value of the data for consumers. I’ll speak more about this in future posts.

UPDATE: For interpretation of word clouds I suggest reading up on the concept of collective consciousness.

Is there a moral argument for time of use pricing?

I recently came across an interesting paper with a novel argument for demand pricing. In a previous post I explained why peak demand drives network costs. Because we mostly have flat tariffs in Australia, we have a situation of cross subsidy whereby people with “flat” demand profiles subsidise those with “peaky” demand profiles. Consider this example.

From a policy point of view there is nothing wrong with cross subsidy per se, but it is important to know who the winners and losers are in the transfer of costs. If flatter demand belongs to lower income consumers and peaky demand belongs to higher income consumers, then flat tariffs subsidise the rich and transfer demand costs to the poor. If this is the case then it is hard to argue that a flat tariff structure is fair.

To test this there are a number of factors that I have considered:

  1. How should ‘peakiness’ be measured?
  2. Is there an inherent link between ‘peakiness’ and household income?
  3. Therefore is a flat tariff fair or unfair, based on who is being cross-subsidised?

What I would like to do here is present a detailed analysis but I cannot do this because most of the data I have is highly confidential and there is a paucity of public data available. What I can do is share some general observations based on my experience across a number of jurisdictions and my general approach.

The standard measure of peakiness is load factor: for a given period, this is the maximum demand divided by the average demand. This gives a measure of peak relative to the underlying demand. In the example above the “peaky” profile is about 3% peakier than the flat profile on this measure. But another measure is the actual range in demand from trough to peak. On this measure, in the same example, we get a 60% difference between the peaky and flat profiles.
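In code, the two measures look something like this; note I am following the peak-over-average convention used above, though the inverse convention (average over peak) is also common:

    import numpy as np

    def peakiness(profile):
        """Two measures of how 'peaky' a demand profile is."""
        p = np.asarray(profile, dtype=float)
        load_factor = p.max() / p.mean()   # peak relative to underlying demand
        demand_range = p.max() - p.min()   # absolute swing from trough to peak
        return load_factor, demand_range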

What I have noticed is that load factor is more or less uncorrelated with household income if I remove controlled load (the rationale being that controlled load is off peak anyway and controlled by the distributor, not the consumer). If I look at the range in demand between the trough and the peak, we get a reasonably strong correlation with income. But we also have a correlation between income and total consumption, so could range just be a function of total consumption? That is, if you use more, the variation between your peaks and troughs is also going to be larger.

So what are the conclusions?

Firstly, I can’t find any evidence that demand pricing will inherently transfer costs from wealthy households to poor households. Wealthy households, by virtue of their greater consumption, contribute more to peak demand even though they are not inherently peakier in their usage profile. From my analysis I can’t say that poor households are inherently flatter in demand and therefore subsidise richer households. What I can say is that, given an equivalent household income, flatter-demand households do indeed subsidise peak-use households under a flat tariff structure, and that demand pricing such as a properly designed time of use tariff could remove this cross subsidy.

What’s going on with load factor?

For the last couple of years the commonly accepted view has been that load factor growth is a major threat to the economic operation of electricity utilities in Australia. The problem is that peak demand has been growing but consumption has not been growing at the same rate. For utilities, the main capital expenditure cost is providing enough network capacity for maximum demand, but revenue from sales of electricity comes from what is sold.

Part of the hypothesis for why load factor is growing is that consumers are buying more energy efficient devices and are bombarded with energy conservation messages, which means that ordinarily they use less electricity, but on peak hot days they forgo conservation for comfort and contribute to peak demand. The result is that electricity utilities still have high capital expenditure but the revenue base shrinks due to non-peak energy saving by consumers. But is this really true, or is it a false assumption based on a subtle variation of the Khazzoom-Brookes postulate?

In terms of an energy paradox, there are a couple of observations that I can make from the analytics we have been doing recently. As one industry leader told me recently, network forecasting was very easy until about 2006. The period from about 2003 to 2008 did not only include some of the hottest years on record but also was notable for record population growth.

Then everything changed.

Since the late 2000s many substation forecasts have started to break down. This has been a problem common to a number of utilities, but it has been of particular interest to networks that have been experiencing growth in connections. It is compounded by growing political pressure to reduce capital expenditure.

Firstly, we had a global downturn which depressed economic activity and caused widespread job losses, encouraging consumers to save money. In Australia, we also had what is starting to look like a long-cycle change in the climate, with a series of wetter, cooler summers, which means that peak air conditioner use has fallen in many places. And in mid-2008 the rate of population growth started to slow. Even more curiously, the pattern of consumption by housing age started to change. From about 1980 until the mid-2000s, the newer the house the higher the consumption, but around 2005-2008 all that changed. After this date, newer houses started to consume relatively less electricity and possibly also contribute less peak demand. Daniel Collins at Ausgrid has done some interesting analysis in this regard, which he presented at the 2011 ANZ Smart Utilities Conference.

And then there are solar photovoltaic panels, which have been subsidised by government and in effect increase load factor and therefore electricity prices. Solar panels reduce net consumption by feeding electricity back into the grid, and this usually happens during the middle of the day when the sun is brightest, but peak demand happens in the early evening on hot days when people get home to a hot house and turn on the air conditioning. This leads to lower revenue but the same or higher network costs.

Electricity is perhaps the oldest post-Industrial Revolution technology still in widespread use, and this means that there is a long body of thought and experience in understanding its consumption and the mechanisms for its delivery. This strength may also prove to be a weakness, however. Many ideas have been around for a long time and are no longer routinely challenged. The energy paradox is one of them. I am not saying that the underlying premise is wrong, but there is certainly room to reinterpret this idea in relation to the current situation in Australia. Challenging orthodox thinking requires being very sure of your facts. This is where analytics and a wide spread of both data and data mining methodologies can help. As I have said, analytics is at its strongest when it is not hypothesis driven but is working without an explicit hypothesis or trying to decide between many competing ones.

The answer is that I don’t really know what is going on with load factor. I have yet to be convinced that anyone (including myself) has worked out how to properly account for the climatic variance of peak demand, or fully understands the relationship between housing age and consumption, or what the true relationship is with population growth, or how many speeds our multi-speed economy has. But something is definitely happening, and investigative analytics has excellent potential to greatly develop our understanding of how all of these effects interact.