The public perception of electricity prices in Queensland

Last week the Queensland government announced a three person panel to investigate how electricity prices might be lowered. The Courier Mail story attracted 171 comments full of the usual colourful characters and partisan political commentary to be expected from such a forum. As I have done in a past post for Victoria, I have decided to see what text analytics can tell us about the current zeitgeist in terms of electricity prices in Queensland. This time however I use some more sophisticated analysis beyond word clouds.

The following discussion gets pretty technical, so firstly I’ll sum up the findings. Apart from the ubiquitous and tiresome slacktivism of partisan political commentary that accompanies online news stories, there are a few interesting insights. It seems that the message about why electricity prices are going up is getting through, at least to some sections of the general public (online news commenters are probably not a good representation of the average community response, as they have selected themselves by choosing to comment). The other observation of note is the undeniable rise of solar power in the public imagination. In recent times it seems to have drifted away from being a green consumer choice to a libertarian one: a way of side stepping what these consumers see as the state’s interference in the rights of the individual. The comments confirm the growing public perception that electricity prices have a significant impact on household budgets, despite electricity still being a minor cost for most households. If, however, consumer discontent with network costs continues to rise then we will see increasing numbers leaving the grid, or at least reducing their reliance on it.

Now to get into the nitty gritty…

Text mining is a pretty good approach to unpicking meaning in newspaper comments, as a casual reader tends to get distracted by some of the more loopy sentiments expressed or is generally turned off by the partisan commentary. A logical and objective analysis of the text allows us to try and uncover some insights without getting caught up in the general argy bargy.

Before doing any analysis we want to remove common words and phrases that form the grammatical and lexical glue of our language (e.g. and, if, but, etc.). In text mining these are called stopwords. Next we load the data into what’s known as a document term matrix: one row for each comment, one column for each word, and in each cell a count of the number of times that word is used in that comment. Like this:

Document Term Matrix
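
If you want to build something like this yourself, here is a minimal Python sketch (not the code used for this post), assuming the comments have been scraped into a list of strings:

```python
# Build a document term matrix with English stopwords removed.
# `comments` is a hypothetical list of scraped comment strings.
from sklearn.feature_extraction.text import CountVectorizer
import pandas as pd

comments = [
    "Electricity prices keep going up because of network costs",
    "Solar power is the only way to escape these power bills",
    # ... the remaining scraped comments
]

vectorizer = CountVectorizer(stop_words="english", lowercase=True)
dtm = vectorizer.fit_transform(comments)   # sparse matrix: one row per comment

# View it as a table: one row per comment, one column per word, cells are counts
dtm_df = pd.DataFrame(dtm.toarray(), columns=vectorizer.get_feature_names_out())
print(dtm_df.head())
```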

We then use this as the basis for our analysis. A word cloud is a way of arranging words so that colour indicates how frequently each word occurs and size reflects the difference between a word’s maximum frequency within a single comment and its average frequency across all comments (i.e. the “lumpiness” of its use). The position in the cloud depends on the comment in which that maximum frequency occurs. This is our initial word cloud:

Initial Word Cloud

We can see that the commentary is dominated by the words “power” and “electricity”. Later we’ll remove these to see what the cloud looks like without these terms.

But first we will remove all of the words with small counts (i.e. words that appear fewer than 10 times). By doing this we reduce the number of terms from about 1800 down to 45.
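
A sketch of that frequency filter, continuing from the document term matrix built above (an equivalent result comes from CountVectorizer’s min_df parameter):

```python
# Keep only words used at least 10 times across the whole corpus.
word_totals = dtm_df.sum(axis=0)
frequent_terms = word_totals[word_totals >= 10].index
dtm_small = dtm_df[frequent_terms]
print(f"Terms reduced from {dtm_df.shape[1]} to {dtm_small.shape[1]}")
```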

We then see if we can cluster the comments using another method: k-means clustering, which organises the comments into groups based on natural patterns in their word use. But how many clusters do we create? In the following graphic each comment is plotted as a digit signifying which cluster it belongs to:
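
For reference, here is a rough Python sketch of this clustering step (not the code behind the chart), using the filtered document term matrix from above and a PCA projection to approximate the two-dimensional cluster plot:

```python
# Cluster comments by word usage and plot each comment as its cluster number.
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

k = 3
km = KMeans(n_clusters=k, n_init=10, random_state=42)
labels = km.fit_predict(dtm_small)                # one cluster label per comment

# Project the comments onto two dimensions and draw each one as its cluster number
coords = PCA(n_components=2).fit_transform(dtm_small)
for (x, y), lab in zip(coords, labels):
    plt.text(x, y, str(lab + 1))
plt.xlim(coords[:, 0].min() - 1, coords[:, 0].max() + 1)
plt.ylim(coords[:, 1].min() - 1, coords[:, 1].max() + 1)
plt.show()
```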

After a bit of experimenting I settle on three clusters, as these separate nicely as shown above, with no clusters overlapping or too small and outlying. Each comment then gets a 1, 2 or 3 based on which cluster it is classified into. I then use a tree algorithm to work out which words are driving the clustering. This is the tree:

The tree above splits on the number of times each word is mentioned in a comment. For example, the far left “node” of the tree (i.e. the red 4) is defined as comments which use the word “power” at least once (i.e. >= 0.5) and the word “solar” at least once.
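
A sketch of this step, assuming the cluster labels from above (an equivalent idea in scikit-learn, not the exact code behind these charts): fit a shallow decision tree that predicts the k-means label from the word counts, then inspect the splits, which fall at 0.5 for count data (i.e. “used at least once”).

```python
# Which words drive the clusters? Explain the k-means labels with a shallow tree.
from sklearn.tree import DecisionTreeClassifier, export_text

tree = DecisionTreeClassifier(max_depth=3, random_state=42)
tree.fit(dtm_small, labels)
print(export_text(tree, feature_names=list(dtm_small.columns)))
```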

There are a couple of observations that we can make from the two graphics above. Firstly, we see cluster 2 sitting around zero on each coordinate in the discriminant plot. This indicates that it is a cluster without significant patterns in word combinations, backed up by the fact that in the decision tree most cluster 2 comments fall in nodes that do not use the “terms of interest” identified by the model (i.e. “power”, “electricity”, “solar” and “government”). Cluster 1 is dominated by the terms “solar power”, “power” and “electricity”, while cluster 3 is dominated by the terms “electricity” and “government”.

We see that there is a large group of general comments but two distinct themes emerge: one where commenters discuss solar power and another where they discuss electricity and government.

The difficulty with interpreting this tree is that “power” is a synonym for “electricity” and a natural pair with “solar”. The analysis so far has been dominated by the words “electricity” and “power”, which add little insight because they are the very thing we are trying to analyse. So let’s run the same process again, this time removing the word “power” from the analysis. In the new word cloud the next level of significant words emerges. It confirms the significance of the discussion about solar power, and we also see “money” and “cost” emerge:

Next we again remove the low frequency words and cluster the resulting document term matrix, this time discovering five clusters:

And a tree which splits nicely into five nodes, with a particular word representing each of four clusters plus a general comments cluster:

So what does this tell us? We add the word “pay” to our existing list of thematic words, but more importantly, the key themes in the commentary are distinguished by their use of particular words in isolation from the other thematic words (apart from the term “solar power”, where two of our thematic words cluster together).

So how do we dig further into what this means? The answer is to look at which other commonly used words correlate with our thematic words. The following charts show these correlations (only with words that are used 10 times or more, as correlation is particularly sensitive to outliers which can distort the interpretation). It is from these final graphs that I have drawn the conclusions at the start of this post.
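
If you want to reproduce this correlation step (the equivalent of findAssocs in R’s tm package), here is a rough sketch using the filtered document term matrix from the earlier steps; the thematic word list follows the themes discussed above:

```python
# For each thematic word, correlate its per-comment counts with every other frequent word.
thematic_words = ["solar", "electricity", "government", "pay"]

for word in thematic_words:
    if word not in dtm_small.columns:
        continue
    correlations = dtm_small.corrwith(dtm_small[word]).drop(word)
    print(f"\nWords most correlated with '{word}':")
    print(correlations.sort_values(ascending=False).head(10))
```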

Critical Peak Price or Critical Peak Rebate?

Australians pay billions of dollars every year for an event that usually doesn’t happen: a critical demand peak on the electricity network. Electricity networks are designed to ensure continuous supply of electricity regardless of the demand placed on them. Every few years we are likely to experience a heat wave or cold snap that drives up simultaneous demand for energy across the network. The infrastructure required to cope with this peak in demand is very expensive; infrastructure that is not used except during these relatively rare events.

Shaving even just a very small amount of demand off these peak days has the potential to save up to $1.2b each year nationally according to a recent report by Deloitte. The tricky part is to try and target the peaks rather than drive down energy consumption in non-peak times. It’s this non-peak consumption that pays the bills for infrastructure investment. If distributors get less revenue and their peak infrastructure costs stay the same then prices have to go up. This is one of the big reasons why electricity prices have risen so steeply in recent years.

One way to do this is to send a price signal or incentive for consumers to moderate their demand on peak days. Last year at the ANZ Smart Utilities Conference in Sydney, Daniel Collins from Ausgrid gave an interesting presentation comparing the benefits for distributors of offering critical peak pricing versus a critical peak rebate. A critical peak price is where the network applies a very steep increase in electricity price on a handful of days each year. This price might be as much as ten times the usual electricity price. Under a critical peak rebate scheme consumers are charged the same amount on peak days but are given a rebate by the distributor if they keep their peak below a pre-defined threshold.
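
A purely hypothetical illustration of how the two schemes differ for a single household on one event day (none of these prices or figures come from the Ausgrid presentation):

```python
# Hypothetical single-household comparison of critical peak pricing (CPP) vs rebate (CPR).
usual_price = 0.30          # $/kWh, normal tariff (made-up figure)
critical_price = 3.00       # $/kWh, ~10x price under critical peak pricing (made-up)
rebate = 20.00              # $ paid if the household stays under its threshold (made-up)
threshold_kwh = 15.0        # pre-defined peak-day consumption threshold
peak_day_kwh = 12.0         # what the household actually used on the event day

# Critical peak pricing: the higher price applies to consumption on the event day
cpp_bill = peak_day_kwh * critical_price

# Critical peak rebate: normal price applies, rebate paid if under the threshold
cpr_bill = peak_day_kwh * usual_price - (rebate if peak_day_kwh <= threshold_kwh else 0.0)

print(f"Bill under critical peak pricing: ${cpp_bill:.2f}")
print(f"Bill under critical peak rebate:  ${cpr_bill:.2f}")
```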

In electricity markets where distributors cannot own retailers (the most common type of market in Australia) it is very difficult for price signals set by distributors to reach end consumers. This is because the distributors charge retailers and retailers then set the price and product options offered to consumers. Distributor price signals can get obscured in this process. In this type of market critical peak prices are unlikely to be mandated by government because they go against a policy of deregulation and are highly politically unpalatable in an environment of rapidly increasing electricity prices. The only option for distributors then is an opt-in price.

The effectiveness of such a price is therefore highly dependent on the opt-in rate, and given that the only consumers likely to opt in are those who do not stand to lose under such a price, the overall savings may be quite low.

A more interesting concept is critical peak rebate. For a start the rebate is given by the distributor directly which avoids the incentive being obscured by retail pricing. Such a scheme is also likely to attract a much greater uptake than opt-in peak pricing. The tricky part however is the design. How much rebate should be offered? Which consumers should be targeted and will they be interested? How do we set the upper demand limit?

It would be a mistake to offer the same deal to all consumers as it is very hard to offer a general incentive with significant return. A badly designed rebate could easily cost more to administer than it saves. There are four crucial elements that need to be considered in the design of a CPR.

How do we measure the benefit?

This is quite tricky but by far the most important design element. There is a lag time between energy peaks on the network and infrastructure costs. This is because infrastructure spending is usually allocated on a five year cycle based on forecasts developed from historical peak demand data. It is vital that a scheme is designed to capture the net savings in peak demand and that there is a process to feed this data into the forecasts. Unfortunately, I have never seen a demand management team feed data to a forecasting team.

Critical Pricing Customer Engagement Strategy

Who do we target?

The first issue is to work out which consumers have high peak demand and are likely to take up the incentive. There should also be consideration of how data will be collected and analysed during the roll out of the program, and how this data is used to continually drive better targeting of the program. The problem with a one-size-fits-all scheme is that there may be a number of different groups who have different motivations for curtailing their peak demand. For example, the rebate financial incentive may be set for the average consumer but may not be high enough to appeal to a wealthy consumer. But there may be other ways to appeal to these customers, such as offering a donation to a charity if the peak demand saving target is reached. It therefore pays to think about a segmentation approach to targeting the right customers with the right offer.

What price, demand threshold and event frequency do we set?

Pricing the incentive is a three dimensional problem: target demand threshold, price and frequency of events. Each of these affects the total benefit of the scheme and the consumer trade-offs need to be understood. The danger here again is relying on averages. Different cohorts of customers will have different trade-off thresholds and an efficient design is vital to the effectiveness of the incentive. It is unlikely that there is room to vary the rebate amount based on customer attributes, but there is certainly room to design individualised demand thresholds and maybe also the frequency with which events are called for different cohorts of customers.
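
To illustrate the three-dimensional nature of the problem, here is a toy sketch that sweeps candidate designs for one cohort and scores their net benefit. The response and cost functions are made-up placeholders; in practice they would be estimated from trial data for each cohort.

```python
# Toy sweep over rebate level, demand threshold and event frequency.
import itertools

def cohort_response(rebate, threshold_kwh, events_per_year):
    """Hypothetical cohort response: higher rebates lift uptake, frequent events fatigue."""
    participation = min(1.0, rebate / 50.0)            # made-up uptake curve
    fatigue = max(0.2, 1.0 - 0.05 * events_per_year)   # made-up fatigue effect
    kwh_saved = participation * fatigue * threshold_kwh * 0.3 * events_per_year
    rebates_paid = participation * rebate * events_per_year
    return kwh_saved, rebates_paid

value_per_peak_kwh = 10.0   # assumed per-customer value of deferred network spend, $/kWh

best = None
for rebate, threshold, events in itertools.product([10, 20, 40], [10, 15, 20], [5, 10, 15]):
    kwh_saved, rebates_paid = cohort_response(rebate, threshold, events)
    net = kwh_saved * value_per_peak_kwh - rebates_paid
    if best is None or net > best[0]:
        best = (net, rebate, threshold, events)

print(f"Best design in this toy sweep: rebate=${best[1]}, threshold={best[2]} kWh, "
      f"{best[3]} events/yr, net benefit ${best[0]:.0f} per customer per year")
```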

How do we refine the program?

In the rush to get new programs to market, response data and customer intelligence feedback is often not well considered. It is important that there is a system for holding data and routines for measuring response against control groups for each treatment group in the program, so that incremental benefits can be measured, but also so data can be fed back into improving the models which select customers for the program. Incremental benefits of the program should also feed back into refining pricing of the rebate and target demand thresholds. Understanding which customers respond and the quantum of that response provides valuable insight into customer behaviour which distributors do not usually have the ability to capture in the normal course of their business. These are all good reasons for running a well-designed CPR program.
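
As a sketch of what that measurement routine might look like in practice (the table layout, file name and column names are hypothetical):

```python
# Compare peak-day demand for each treatment cell against its control group,
# so the incremental benefit is what feeds back into targeting and forecasting.
import pandas as pd

# peak_day_readings.csv (hypothetical extract): one row per customer per event,
# with columns ['customer_id', 'group', 'treatment_cell', 'event_date', 'peak_kwh']
peak_day_readings = pd.read_csv("peak_day_readings.csv")

summary = (peak_day_readings
           .groupby(["treatment_cell", "group"])["peak_kwh"]
           .mean()
           .unstack("group"))

summary["incremental_kwh_saved"] = summary["control"] - summary["treated"]
print(summary)
```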

3 Common Misconceptions about Prediction and Forecasting (and what we can do about it)

1. Prediction is about understanding the future

We humans have a lot of difficulty understanding the subtleties of time. It is important to remember how little we intuitively understand about the nature of time when building or interpreting forecasts and predictive models. Whilst I have built models that, for example, predict a customer’s propensity to churn reasonably well, weather forecasts for a given locality might at best predict only a few days into the future, and that is the best we can do even with the most powerful predictive models ever built. The difference is that the former “predicts” human behaviour whereas the latter tries to peer into the future of a complex stochastic system. Predictive modelling works best when trying to predict human behaviour because it is a human invention bounded by human experience. Modelling does not predict a priori. I prefer to think of predictive modelling as projected behavioural modelling; prediction sounds too good to be true. Traditional forecasting tries to project past trends and asks: if recent past conditions prevail, what will the future look like? This is a fundamental misunderstanding of the nature of time. We have seen this break down significantly in recent times with energy consumption forecasting. There has been a range of significant disrupters such as the global financial crisis, new appliance technology, distributed generation and changing housing standards, to name a few. Some of these things were foreseeable and others were not, but none of them appear in the past record of energy consumption, which is the prerequisite for a traditional forecast model.

2. My forecast is correct therefore my assumptions are correct

Just because a given forecast comes to pass does not mean that the model is without flaw. I am reminded of both Donald Rumsfeld and Edward Lorenz in debunking this. Lorenz discovered patterns that are locally stable and may replicate themselves for a period of time, but are guaranteed not to do so indefinitely. This is at the heart of chaos theory and every good modeller should understand it. The conditions which cause patterns to break down are sometimes what Rumsfeld called unknown unknowns. There’s not much we can do about those except try to imagine them, or else be agile enough to recognise them once they start to unfold. But there are also “known unknowns”: those things which we know we don’t know.

3. My forecast was correct given the data we had at the time

My golden rule is that all forecasts are wrong – they are just wrong in different ways. Sometimes the biggest problem is when a forecast or prediction comes to pass: if it does, it is not knowable to what extent the success was due to the efficacy of the model. I am reminded of Tarot readings. It is easiest to convince someone of a prediction when it confirms the observer’s own bias, and none of us are without bias. And there is always a get-out clause if the model does not continue to predict well. This is relatively harmless if the prediction is about the likelihood of meeting a tall, handsome stranger, but more significant if it is a prediction about network energy consumption. In the case of the latter it’s not good enough to say that was the best we could do at the time.

So what can we do about it?

The reason we build forecasts is to provide an evidential basis for decision-making that minimises risk. It is therefore a crazy idea that major investment decisions can be made on a single forecast. It is like putting your entire superannuation on black on the roulette wheel. The first step in reducing risk in prediction and forecasting is to try and understand (or imagine) the range of unknowns that may occur. For example, we know that financial crises, bushfires and floods all occur and we have some idea of how extreme they might be. We even have a pretty good idea of their probability of occurrence. We just don’t know when they will occur. In terms of energy forecasting we know certain disrupters are unfolding now, such as weather variability and distributed generation, but we don’t quite know how fast and to what extent. Known knowns and known unknowns.

While we don’t know what will happen in the future, we do have a pretty good understanding of how different populations will behave under certain conditions. The solution therefore is to simulate population behaviours under a range of scenarios to get an understanding of what might happen at the extremes of the forecast, rather than relying on the average or “most likely” forecast.

The answer in my opinion is multiple simulation. Instead of building one forecast or prediction, we build a range of models either with different assumptions, different methodologies or (preferably) both. That way we can build a view of the range of forecasts and associated risks.  What I need to know is not what the weather is going to do, but whether I should plan a camping trip or not. Multiple simulated prediction gives us the tools we need to do what we as humans do best – make decisions based on complex and sometimes ambiguous information.
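
A minimal sketch of the multiple-simulation idea: instead of a single consumption forecast, simulate many futures under different assumptions and look at the spread. The growth and shock parameters below are illustrative assumptions, not a real energy model.

```python
# Monte Carlo sketch: many simulated consumption futures instead of one forecast.
import numpy as np

rng = np.random.default_rng(42)
n_simulations, horizon_years = 10_000, 10
current_consumption = 100.0          # index, current year = 100

results = np.empty((n_simulations, horizon_years))
for i in range(n_simulations):
    consumption = current_consumption
    growth = rng.normal(0.01, 0.015)                      # each run draws its own trend
    for year in range(horizon_years):
        shock = rng.normal(0.0, 0.03)                     # weather / economy noise
        if rng.random() < 0.05:                           # rare disruptive event
            shock -= rng.uniform(0.05, 0.15)
        consumption *= (1 + growth + shock)
        results[i, year] = consumption

p10, p50, p90 = np.percentile(results[:, -1], [10, 50, 90])
print(f"Year {horizon_years} consumption index: P10={p10:.0f}, P50={p50:.0f}, P90={p90:.0f}")
```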

Energy consumption, customer value and retail strategy

I am sometimes surprised at the amount of effort that goes into marketing electricity. I can’t help but feel that a lot of customer strategy is over engineered. So here I present a fairly straightforward approach that acknowledges that energy is a highly commoditised product. This post departs a little from the big themes of this blog but is still relevant because the data available from smart meters makes executing on an energy retail strategy a  much more interesting proposition (although still a challenging data problem).

To start with let’s look at the distribution of energy consumers by consumption. This should be a familiar distribution shape to those in the know:

Energy Consumption Distribution

In effect what we have are two distributions overlaid: a normal distribution to the left overlaps with a Pareto distribution to the right. This first observation tells us that we have two discrete populations, each with their own rules governing the distribution of energy consumption. A normal distribution is a signature of human population characteristics and as such identifies what is commonly termed the electricity “mass market”, essentially dominated by domestic households. The Pareto distribution to the right is typical of an interdependent network such as a stock market, where a stock’s value, for example, is not independent of the value of other stocks. This is also similar to what we see when we look at the distribution of business sizes.
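
A quick way to see this shape is to simulate it; the parameters below are illustrative only, chosen to reproduce the general form rather than any real customer base.

```python
# Simulate a normal "mass market" plus a Pareto tail of larger customers.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
households = rng.normal(loc=6_000, scale=1_800, size=95_000)     # kWh per year
businesses = (rng.pareto(a=1.5, size=5_000) + 1) * 10_000        # Pareto tail

consumption = np.clip(np.concatenate([households, businesses]), 0, None)
plt.hist(consumption, bins=200, range=(0, 100_000))
plt.xlabel("Annual consumption (kWh)")
plt.ylabel("Number of customers")
plt.show()
```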

A quick look at the distribution of electricity consumption allows us to define two broad groups and because consumption is effectively a proxy for revenue we have a valuable measure in understanding customer value.

In our Pareto distribution we have a long tail of an ever decreasing number of customers with increasingly large consumption (and therefore contribution to revenue). To the left we have the largest number of customers but relatively low value (although mostly better than the customers at the top end of the normal distribution), and to the right a very few “mega-value” customers. We can therefore roughly define three “super-segments” as follows:

Energy Consumption Super Segments

For the VLC segment on the right, revenue is king. Losing just a few of these customers will impact overall revenue, so the strategy here is to retain at all costs. At the extreme right, for example, individual relationship management is a good idea, as is bespoke product design and pricing. Towards the lower end of this segment a better option may be relationship managers with portfolios of customers. But the over-riding rule is 1:1 management where possible.

The middle segment is interesting in that both revenue and margin are important. Getting the balance right between these two measures is critical, and the strategy depends on whether your organisation is in a growth or retain phase. If I was a new market entrant this is where I would be investing a lot of my energy. This is the segment of the market where some small wins could build a revenue base with good returns relatively quickly, assuming that the VLC market remains fairly stable, while avoiding the risks inherent in the mass market. On the flip side, if I was a mature player then I would be keeping a careful eye on retention rates and making sure I had the mechanisms to fine-tune the customer value proposition. An example might be offering “value-add” services which become possible with advanced metering infrastructure, such as online tools which allow business owners to track productivity via portal access to real-time energy data, or the ability to upload their own business data which can be merged and visualised with energy consumption data.

The mass market is really the focus of most retailers, often because success metrics concentrate too heavily on customer numbers rather than revenue and margin, probably because customer numbers are easier to measure. The trap is that these customers have highly variable profitability, as described by the four drivers of customer lifetime value:

Customer Lifetime Value Drivers

Understanding these drivers and developing an understanding of customer lifetime value is critical to developing tailored engagement strategies in this segment. Because these customers are the easiest to acquire, a strategy based around margin means that less profitable customers will be left for competitors to acquire. If those competitors are still focussed on customer counts as their measure for success then they will happily acquire unprofitable customers, which in time will increase pressure to acquire even more because of falling margins. Thus the virtuous circle above is replaced with a vicious cycle (thanks to David McCloskey for that epithet).
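
As a back-of-envelope illustration of why these drivers matter, here is a sketch of a lifetime value calculation built from the kind of drivers discussed in this post (acquisition cost, revenue, cost to serve, retention). The formula and all figures are illustrative assumptions, not the exact drivers in the diagram.

```python
# Simple discounted customer lifetime value from four illustrative drivers.
def customer_lifetime_value(annual_revenue, cost_to_serve, retention_rate,
                            acquisition_cost, discount_rate=0.08, years=10):
    margin = annual_revenue - cost_to_serve
    clv = -acquisition_cost
    for year in range(1, years + 1):
        survival = retention_rate ** year          # chance the customer is still here
        clv += margin * survival / (1 + discount_rate) ** year
    return clv

# Two hypothetical mass-market customers with the same revenue but different
# cost to serve and retention: their lifetime values diverge sharply.
print(customer_lifetime_value(1500, 1100, 0.90, 200))
print(customer_lifetime_value(1500, 1300, 0.70, 200))
```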

And so there we have the beginnings of a data driven customer strategy. There is of course much more to segmentation than this, and there are now very advanced methodologies for producing granular segmentation to help execute on customer strategy and provide competitive advantage. I’ll touch on these in future posts. But this is a good start.

From CRM to ARM: what utilities can learn from banks about maximising value

Last week in Brisbane a small metal clamp holding an overhead electric cable failed causing a meltdown on the Queensland Rail network and leading to the government compensating commuters with a free day of travel. I expect that there are tens or hundreds of thousands of these clamps across the network and in all likelihood they are all treated in more or less the same way and assigned the same value.

There are interesting parallels between the current transformation of utilities to smart grid and what happened in banks in regard to customer analytics at the turn of the millennium. Can we use insights from over a decade of the banking industry’s experience with customer relationship management (CRM) to move towards a principle of asset “relationship” management (ARM)?

When I became involved in my first large CRM project over ten years ago, CRM was at that point only concerned with the “kit” – the software and hardware that made up the operational aspects of CRM – and not with the ecology of customer data where the real value of CRM lay. To give just one example: we built a system for delivering SMS reminders which was very popular with customers, but when we went to understand why it was so successful we realised that we had not recorded the contact in a way that was easy to retrieve and analyse. If we had designed CRM from the point of view of an ecology of customer data then we would have been able to leverage insight from the SMS reminder initiative faster and for lower cost.

Once we understood this design principle we were able to start delivering real return on investment in CRM, including developing a data construct of the customer which spanned the CRM touch points, point of sale, transactional data systems and data which resided outside the internal systems, including public data and data supplied by third party providers. We also embarked on standardising processes for data capture, developing common logical data definitions across multiple systems, and then the development of an analytical data environment. The real CRM came into being once we had developed this whole data ecology of the customer, which enabled a sophisticated understanding of customer lifetime value and the capacity to build a range of models which predict customer behaviour and provide platforms for executing on our customer strategy.

The term “relationship” has some anthropological connotations and it may seem crazy to apply this thinking to network assets.  From a customer strategy perspective, however, it has a purely logical application: how can we capture customer interactions to maximise customer lifetime value, increase retention and reduce the costs of acquiring new customers?

If we look at customer value drivers we see some parallels with capital expenditure and asset management. Cost to acquire is roughly synonymous with asset purchase price. Lifetime value applies to both a customer and an asset. Cost to serve for a customer parallels the cost to maintain an asset. Customer retention is equivalent to asset reliability. The difference with advanced analytical CRM is that these drivers are calculated not as averages across customer classes but for every single customer.

The development of smart devices and the associated data environments necessary to support smart grid now enables utilities to look at a similar approach. Why can we not develop an analytical environment in which we capture attributes for, say, 30 million assets across a network so that we can identify risks to network operation before they happen?

If we could assign the metal clamp between Milton and Roma Street stations an expected life (and therefore a predicted probability of failure) and a value-to-network based on the downstream consequences of failure, and balance these against a cost to maintain or replace it, then we would be applying the same lessons that banks have learnt from CRM and customer lifetime value.
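
To make the parallel concrete, here is a rough sketch of what such an asset-level view might look like. The asset records, failure-probability rule and dollar figures are entirely hypothetical.

```python
# Score each asset: expected annual cost of failure vs cost of proactive replacement.
import pandas as pd

assets = pd.DataFrame({
    "asset_id": ["clamp_00001", "clamp_00002", "transformer_0042"],
    "age_years": [28, 5, 35],
    "expected_life_years": [30, 30, 40],
    "failure_consequence_cost": [2_000_000, 50_000, 5_000_000],  # downstream impact, $
    "replacement_cost": [500, 500, 120_000],
})

# Crude failure probability rising as the asset approaches its expected life
assets["annual_failure_prob"] = (assets["age_years"] / assets["expected_life_years"]).clip(0, 1) * 0.05
assets["expected_failure_cost"] = assets["annual_failure_prob"] * assets["failure_consequence_cost"]
assets["replace_now"] = assets["expected_failure_cost"] > assets["replacement_cost"]

print(assets[["asset_id", "expected_failure_cost", "replacement_cost", "replace_now"]])
```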

Text Mining Public Perception of Smart Meters in Victoria

Today the Herald Sun ran a story proclaiming that smart meters are here to stay and invited their readers to comment on whether the government should scrap the smart meter program. I am not going to comment here on the journalistic quality of the article but concentrate on the comments section, which gives stakeholders some valuable insight into the zeitgeist of smart metering in the Garden State.

By applying an unstructured text mining application I have extracted the key themes from the comments on this story. When analysed in conjunction with the structure and content of the story, we get some interesting insights into public perception.

To start with I excluded the words “smart”, “meter” and “meters” in order not to be distracted by the subject under discussion. This is what I got.

Word clouds often seem to point to a collective meaning that is independent of individual attitudes. If this is the case then the strong message here is one of collective rejection of what is seen as government control being favoured over the wishes of the “people”. This may be more of a reflection of the Herald Sun readership than of general community concern, however.

If I remove “government” and “power” we get a closer look at the next level of the word cloud.

An aside of note: Herald Sun readers like to refer to the premier by his first name, which is perhaps a sign that he remains popular with this demographic.

One interesting observation to me is that despite its prominent mention in the article, the myth of radio frequency radiation from smart meters is not a major concern to the community, so we are unlikely to see a repeat of the tin foil hats fiasco in California.

Once we get into some of the word cloud detail, we see the common themes relating to “cost of living”, namely the additional costs to the electricity bill of the roll out and the potential costs associated with time of use pricing. The article does mention that time of use pricing is an opportunity for households to save money; it is also a fairer pricing regime than flat tariffs.

The other important theme that I see is that the smart meter rollout is linked to the other controversial big technology projects of the previous Victorian government – Myki and the Wonthaggi Desalination Plant. But the good news is that the new government still has some cachet with the public (even in criticism readers often refer to the premier by his first name). The objective now should be to leverage this and start building smart meter initiatives which demonstrate the value of the technology directly to consumers. This in part requires unlocking the value of the data for consumers. I’ll speak more about this in future posts.

UPDATE: For interpretation of word clouds I suggest reading up on the concept of collective consciousness.

Appliance Penetration and the Wisdom of Crowds

Some of the burning questions for electricity utilities in Australia have to do with appliance take up. I decided to see what the wisdom of crowds could tell us about the take-up of some key appliances which are affecting load profiles and consumption trends. My crowd-sourced data comes from Google Insights for Search. I have taken the weekly search volume indexes for three search terms: “air conditioner”, “pool pump” and “solar pv”. In addition, I also took the search volumes for “energy efficient” to see if there has been a fundamental change in the zeitgeist in terms of energy efficiency.

Firstly, let’s have a look at Google “air conditioner” search data.

The graph shows strong seasonality, with people searching more for air conditioners in summer, which makes sense. To get an indication of how profound the growth of air conditioning has been in Australia (and South East Queensland in particular), I decided to compare growth in air conditioner searching by country and city. Since 2004, Australia ranks second behind the US for air conditioner searches. For cities, Brisbane and Sydney rank fourth and fifth in the world, but if we adjust for population they rank second and third respectively behind Houston. This growth has been one of the causes of the recent difficulties in forecasting demand. One of the big questions is whether air conditioning load will continue to grow or whether air conditioner penetration has reached saturation point. Read on for some insights that I think this data may have uncovered.

When we look at the data for the search term “energy efficient”, we get the opposite temperature effect, with dips in searches during summer and perhaps also during winter noticeable in this graph.

This tells us that people become less concerned with energy efficiency as comfort becomes more important, which has also been shown in other studies. But if we want to look for underlying changes in behaviour then we need to account for temperature sensitivity in this data, and the first thing we need to do is come up with a national temperature measure that we can compare with the Google data. To do this I take temperature data for Australia’s five largest cities from the Bureau of Meteorology and create a national daily maximum temperature series for 2004-2011 as a population-weighted mean of those cities’ maximum temperatures. These cities account for about 70% of Australia’s population and an even greater proportion of regular internet users. Now we can quantify the relationship between our appliances, energy efficiency and temperature.
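
A sketch of that population-weighted temperature series (the city weights, file name and input layout are assumptions; the BOM data would be daily maximum temperatures for each city):

```python
# Population-weighted national daily maximum temperature, aggregated to weekly.
import pandas as pd

# daily_max_temps.csv: columns ['date', 'Sydney', 'Melbourne', 'Brisbane', 'Perth', 'Adelaide']
temps = pd.read_csv("daily_max_temps.csv", parse_dates=["date"]).set_index("date")

# Approximate 2004-2011 era populations (millions) used as weights
weights = pd.Series({"Sydney": 4.3, "Melbourne": 3.9, "Brisbane": 2.0,
                     "Perth": 1.6, "Adelaide": 1.2})
weights = weights / weights.sum()

temps["national_max"] = temps[weights.index].mul(weights, axis=1).sum(axis=1)

# Google Insights data is weekly, so aggregate to weekly means before joining
weekly_national_max = temps["national_max"].resample("W").mean()
print(weekly_national_max.head())
```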

Below are the scatter charts showing the R2 correlations. “Solar PV” is uncorrelated with temperature but all of the other search terms show quite good correlation. You may also notice that I have tried to account for the U-curve in the relationship between “Energy Efficient” and temperature by correlating with the absolute number of degrees from 21C. The main relationship is with hot weather; accounting for the U-curve only adds slightly to the R2. Interestingly, people don’t start searching for air conditioners until the temperature hits 25C, and then there is a slightly exponential shape to the increase in searches. For the purposes of this post I will stick to simple linear methods, but further analysis may consider a log link GLM or Multiple Adaptive Regression Splines (MARS) to help explain this shape in the data.

Now to the central question this post is trying to answer: what are the underlying trends in these appliances, can we uncover them from Google and BOM data, and what might they tell us about underlying trends in consumption and load factor? To do this I create a dummy variable to represent time and regress it alongside temperature to see to what extent each factor separately describes the number of Google searches. I build separate models for each year, which separates the trend over time in searches from the temperature-related effects.
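
A sketch of that regression for one search term (not the exact code behind the charts), assuming the weekly search index has been joined to the national temperature series built above; the file and column names are hypothetical:

```python
# Weekly search volume modelled on temperature plus a dummy for each year,
# separating the time trend from the temperature-driven component.
import pandas as pd
import statsmodels.formula.api as smf

# air_conditioner_searches.csv: columns ['week', 'search_index', 'national_max_temp']
searches = pd.read_csv("air_conditioner_searches.csv", parse_dates=["week"])
searches["year"] = searches["week"].dt.year

model = smf.ols("search_index ~ national_max_temp + C(year)", data=searches).fit()
print(model.summary())

# The C(year) coefficients (with 95% confidence intervals) give the
# temperature-adjusted annual trends plotted in the charts that follow.
print(model.conf_int().filter(like="C(year)", axis=0))
```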

But before I do that, I can look directly at the annual trend in “solar PV” searches, since that term is uncorrelated with temperature.

There was not enough search data to go all the way back to 2004 (which is of itself interesting), so we only go back to 2007. What we see is large growth in searches in 2008, a statistically insignificant trend in 2009 and 2010, and a distinct decline during 2011. It looks like the removal of incentives and changes to feed-in tariffs are having an effect. The error bars show the 95% confidence interval.

Now on to pool pumps. Here we see a steady rise in searches for pool pumps, which indicates that we can expect pool pump load to grow nationally. If anything it looks like the search rate is increasing and, apart perhaps from 2008, was not greatly affected by the global downturn.

Once we account for temperature variability we see essentially no trend in energy efficiency searches until 2010. This came after the collapse of Australia’s carbon trading legislation and the collapse of political accord on climate change policy, and it seems to me that this is reflected in public concern with energy efficiency. If there were widespread public concern about the contribution of electricity to the cost of living then it should be reflected here, but it isn’t. This also suggests that for consumers the motivation towards energy efficiency is driven by a sense of social responsibility rather than being an economic decision.

Finally, air conditioning. What we see represented here is the rapid growth in air conditioning that happened in 2004-2005, with a slowing in growth from 2006-2008. It looks like the government rebates of 2009 may have been partially spent on air conditioning. But from 2010 onwards there has been no significant trend in search term growth. Does this suggest that we finally reached saturation some time during 2010?