Analytics: Insource or Outsource?

For someone who makes their living from consulting on analytics, my answer to this question may surprise some. In a world increasingly dominated by data, the ability to leverage data is not only a source of competitive advantage; it is now a required competency for most businesses.

External consulting can help accelerate the journey to a fully insourced analytics capability. The trick is doing this in the most cost-effective way. I have dealt with a number of companies that take very different approaches to this question, and it is my observation that the wrong mix of insourcing and outsourcing can be very expensive, perhaps in ways you may find surprising. The key is understanding that analytics is not primarily a technology function.

To illustrate my point I am going to describe the analytics journey of three hypothetical companies. Our three companies are all challenger brands, second or third in their respective markets. Their businesses have always been reliant on data and smart people, but new technology and competitive pressures mean that data is becoming more and more important to their business models. All recognise the need to invest, but which is the right strategy?

The CIO of Company A has launched a major project to implement a new ERP system which will transform the way they manage and access data right across the organisation. He is also establishing an analytics team by hiring a handful of statistics PhDs to extract maximum value from the new data platform. He is investing significantly with a major ERP platform vendor and is using consultants to advise him on implementation and to help manage the vendor. He sees no need to spend additional money on analytics consultants because he has already hired plenty of smart people who can help him in the short term. He does, however, see value in hiring consultants to help his organisation with the large IT transformation.

In Company B, the COO is driving the analytics strategy. Privately, he doesn’t rate the CIO, whom he sees as coming from a bygone era in which IT is a support function to the core business and technical capability is delivered from a technical centre of excellence. The CIO has built a team of senior managers who firmly believe that, to maintain efficient use of resources, business users should only have access to data through IT-approved or IT-built applications. The company has a very large and well-organised data warehouse, but it is mostly accessed by other applications. There are very few human users of the data, and virtually none outside of IT, which mostly uses the data warehouse for building applications and relies on a detailed specification process from internal “customers” to understand the content of the data.

To pursue his strategy of developing organisational analytics capability, the COO is forced either to wait for lengthy testing of new applications and system access granted on an exception basis, or to outsource his analytics to service providers who can offer him greater flexibility and responsiveness. He secures funding for an asset management project to optimise spending on maintaining ageing infrastructure, and engages a data-hosting service. Separately, he hires consultants to build advanced predictive models of asset failure based on the large volumes of data in his externally hosted data mart.

Company C has hired a new CIO with a varied background across technology and business roles, most recently as CEO of a technology company, so she brings both technical and commercial experience. Her previous company frequently (but not always) used Agile development methodology. She too has been tasked with developing a data strategy in her new role. Company C is losing market share, and the executive team believes this is because its two competitors have spent heavily on IT infrastructure renewal and have effectively bought market share by doing so. Company C is not using its data effectively to price products and develop product features that drive greater customer value, but it is constrained in what it can spend to renew its own data infrastructure: the parent company will not approve large IT expenditure while margins and market share are falling. The CIO resists pressure from the executive and external vendors to implement a cut-price ERP system and instead focuses her team on building better relationships with business users, especially in the pricing and product teams. She develops a team of technology-savvy senior managers with functional expertise in pricing and product development, rather than IT managers, and delivers a strong, consistent message that the organisation’s goal is to compete on data and analytics. Every solution should be able to state how data and analytics are used.

As issues or manager-driven initiatives arise, she funds small project teams comprising IT, business and some external consultants. She insists that her managers hire consultants to work on site as part of virtual teams alongside company staff. Typically consultants are engaged for only a few weeks at a time, but there may be a number of projects running simultaneously. Where infrastructure or organised data does not exist, teams are permitted to build their own “proof of concept” solutions, supported by the teams themselves rather than by IT. Because the ageing data warehouse struggles to cope with increased traffic, it is increasingly used as a data staging area, with teams running their own purpose-built databases.

So how might these strategies play out? Let’s look at our three companies 12 months later.

Company A has built a test environment for its ERP system fairly quickly. The consultants have worked well with the vendor to get a “vanilla” system up and running, but the project is now running into delays due to integration with legacy systems and problems handling the increasing volume of data. The CIO’s consultants are warning of significant blowouts in time and cost, but they are so far down the path that pulling out is not an option; the only option is to keep requesting more funds. The blame game is starting, with the vendor blaming the consultants and the consultants blaming IT. Meanwhile the CIO’s PhD-qualified analytics team have little work to do as they wait months for their data requests to be filled, in part because the resources required to support the ERP project leave few staff available for ad hoc requests. When the stats team does get data, they build interesting and robust statistical models but struggle to understand their relevance to the business. One senior analyst has already left and others will most likely follow. I have seen this happen more times than I care to remember. Sadly, Company A is a pretty typical example.

Company B has successfully built its asset management system, which is best in class thanks to the specialised skills of the data-hosting vendor and the analytics consultants. It has not been cheap, but the company will not spend as much as Company A eventually will to get its solution in place. The main issue is that no one in Company B really understands the solution, and more time and money will be required to bring it in house, with some expenditure still required from IT and the creation of a support team. On the bright side, the CIO has been shown up as recalcitrant, and migrating the project in house will be a good first project for the incoming CIO when the current CIO retires in a few months. It will encourage IT to develop new IP and new ways of working with the business, including sharing data and system development environments.

Company C (as you may already have guessed) is the outstanding success. Within a few weeks it had its first analytics pricing solution in place. A few weeks after that, tests were showing both increased profitability and increased market share within the small test group of customers chosen to receive the new pricing. The business case for the second-stage rollout was a no-brainer, and the funding will be used to move the required part of the data warehouse into the cloud.

After 12 months a few of the projects had not produced great results and were quietly dropped. Because these were small projects, costs were contained, and importantly the team became better at picking winners over time. Small incremental losses were seen as part of the development process. Running a large number of concurrent projects was a strain at first for an IT group more accustomed to “big bang” projects, but the payoff was that risk was spread: while some projects failed, others succeeded. Budgets were easier to manage because they were delegated to individual project teams, and the kinds of cost blowouts experienced by Company A were avoided.

The salient lesson here is to look first at how your organisation structures its approach to data and analytics projects. Only then should you consider how to use and manage outsourced talent. The overarching goal should be to bring analytics in house, because that is where it really belongs.

Retail Therapy

July 1, 2012 will probably be mostly remembered as the date Australia introduced a price on carbon. But another event took place that day which may prove more significant for how households and small businesses consume their electricity: the commencement of the National Energy Customer Framework (NECF). The NECF gives the Australian Energy Regulator (AER) responsibility for (among other things) regulating retail electricity prices. Retail electricity prices continue to rise, driven mostly by increasing capital expenditure on networks. Electricity businesses, regulators and governments are increasingly turning their attention to Time of Use (TOU) pricing to help mitigate peak network demand and therefore reduce capital expenditure.

Change will be gradual to start with, however. A cynical observer might suggest that the NECF is no more than a website at present, but I believe that change is inevitable and that it will be significant. Five states and the ACT have agreed to a phased introduction of the NECF, following on from a 2006 COAG agreement, and the transition will be fraught with all the complexities of cross-jurisdictional regulatory reform.

There are basically two mechanisms that drive the cost of producing and delivering electricity. One is the weather (we use more in hot and cold weather) and the other is the cost of maintaining and upgrading the network that delivers the electricity. For the large retailers, the way to deal with the weather is to invest in both generation and retail, because one is a hedge for the other; retailers that do this are known as “gentailers”.

The network cost has traditionally been passed through as a regulated network tariff component of the retail price. The problem is that the network price structure often does not reflect actual network costs, which are driven by infrequent peak use, particularly for residential customers. Those who use a greater proportion of their electricity during peak times add to the cost of maintaining capacity in the network to cope with the peak, yet residential and other small consumers all pay the same rate. In effect, “peaky” consumers are subsidised by “non-peaky” consumers.

It is not yet really clear how a price signal will be built into the retail tariff, but one policy option is for distributors to pass through costs that reflect an individual consumer’s load profile. The implications for government policy are interesting, but I’ll save those for another post. Here, I’ll explore the implications from the retailer’s perspective in contestable markets.

I believe that this is potentially quite a serious threat to the business model of retailers, for a number of reasons I’ll get into shortly, but at the heart of the matter is data: lots of it, and what to do with it. Much of that data is already flowing from smart meters in Victoria and NSW, and it will start to flow from meters in other states. A TOU pricing strategy requires data not only from smart meters but from many other sources as well.

Let’s have a quick recap on TOU. I have taken the following graph from a report we prepared for the Victorian Department of Primary Industries, which can be found here.

The idea of TOU is to define a peak time period, where daily usage peaks, and charge more for electricity during that period. A two-part TOU tariff defines other times as off peak and charges a much lower rate. There may also be shoulder periods either side of the peak where a medium rate is charged.

How each of these periods is defined and how the tariff levels are set will determine whether the system as a whole collects the same revenue as when everyone is on a flat tariff. This principle is called revenue neutrality: the part of the electricity system that supplies households and small businesses collects the same revenue under the new TOU tariffs as under the old flat tariff.
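To make revenue neutrality concrete, here is a minimal sketch in Python that compares system revenue under a flat tariff with revenue under a three-period TOU tariff. The load data, period definitions and tariff rates are all illustrative assumptions, not figures from the report.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative half-hourly consumption (kWh) for 1,000 customers over one day.
# A real analysis would use interval data from smart meters.
n_customers, n_periods = 1000, 48
load = rng.gamma(shape=2.0, scale=0.25, size=(n_customers, n_periods))

# Assumed period definitions: peak 17:00-21:00, shoulder 07:00-17:00 and
# 21:00-23:00, off-peak the rest.
hours = np.arange(n_periods) / 2
peak = (hours >= 17) & (hours < 21)
shoulder = ((hours >= 7) & (hours < 17)) | ((hours >= 21) & (hours < 23))
off_peak = ~(peak | shoulder)

flat_rate = 0.25  # $/kWh, illustrative only
tou_rates = {"peak": 0.45, "shoulder": 0.25, "off_peak": 0.15}

flat_revenue = flat_rate * load.sum()
tou_revenue = (tou_rates["peak"] * load[:, peak].sum()
               + tou_rates["shoulder"] * load[:, shoulder].sum()
               + tou_rates["off_peak"] * load[:, off_peak].sum())

print(f"Flat tariff revenue: ${flat_revenue:,.0f}")
print(f"TOU tariff revenue:  ${tou_revenue:,.0f}")
print(f"Difference: {100 * (tou_revenue / flat_revenue - 1):+.1f}%")
# Revenue neutrality means tuning the rates (or the period definitions)
# until this difference is approximately zero across the whole system.
```

In practice the rates would be solved for rather than guessed, and the calculation would run over the full regulated customer base and a full year of interval data.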

But this should by no means give comfort to retailers that they each will achieve revenue neutrality.

For example, we can see from the graphs above that even if revenue neutrality is achieved for all residential and SME customers combined, residential customers may be better off and SME customers worse off, or vice versa, with everything still totalling to no change in revenue. If a retailer has a large share of customers in a “better off” category, that will translate to a fall in revenue if the retailer passes on the network tariff with its existing margin. In fact, we find that residential bills may be reduced by up to five per cent, depending on the design of the network tariff.

Of course, this is just one segmentation; there could be many, many more sub-segments, all with different “better off” or “worse off” outcomes.

Revenue neutrality can also be affected by price elasticity (consumers reduce their peak consumption) and by substitution (they move their peak usage to shoulder or off-peak periods, reducing their overall electricity bill). This means that retailers not only have to understand the impact under the current state of electricity usage but also how the tariff itself will change consumer behaviour.
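Here is a rough, self-contained sketch of how a behavioural response might be layered on top of that calculation. The elasticity and substitution parameters, and the volumes and rates, are assumptions for illustration only.

```python
# Sketch: apply an assumed own-price elasticity and an assumed peak-to-off-peak
# substitution rate to aggregate consumption volumes (all numbers illustrative).
peak_kwh, off_peak_kwh = 1_000_000.0, 3_000_000.0       # current annual volumes
flat_rate, peak_rate, off_peak_rate = 0.25, 0.45, 0.15  # $/kWh

elasticity = -0.1         # assumed: a 10% price rise cuts demand by 1%
substitution_rate = 0.05  # assumed: 5% of remaining peak load shifts off-peak

# Demand response to the peak price increase.
price_change = (peak_rate - flat_rate) / flat_rate
peak_after = peak_kwh * (1 + elasticity * price_change)

# Load shifting from peak to off-peak.
shifted = peak_after * substitution_rate
peak_after -= shifted
off_peak_after = off_peak_kwh + shifted

revenue_before = flat_rate * (peak_kwh + off_peak_kwh)
revenue_after = peak_rate * peak_after + off_peak_rate * off_peak_after
print(f"Revenue change after behavioural response: "
      f"{100 * (revenue_after / revenue_before - 1):+.1f}%")
```

Even a tariff designed to be revenue neutral in a static sense can drift once customers respond, which is exactly why the behavioural modelling matters.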

Data is at the very centre of competitive advantage as this disruptive event unfolds in the retail electricity market. Indeed, the threat may not just be disruptive: for some retailers it may be existential, especially as data-centric organisations such as telcos and ISPs enter the market. So far no large telco has entered the market in Australia (as far as I know: please correct me if this has changed), but surely the elephants must be loitering outside the room if not already in it.

I think what is clear for incumbent electricity retailers is that “do nothing” is not an option. There must be a clear strategy around data and pricing covering technology, talent and process, and the centrepiece must be excellence in time-of-use pricing built on a deep capability with the data flowing from new technology meters and networks.

So what exactly are the key issues? The list that follows is by no means exhaustive, but it gives some idea of the extent of the data, and the depth of skills, required to handle such complex analysis and interpretation.

Opt In or Opt Out?

I believe that TOU tariffs for small consumers are inevitable, but how will they roll out and how fast? The key policy decision will be whether to allow customers to opt in to TOU tariffs or to opt out of a scheme rolled out by default (a third option is to mandate TOU for all, but this is likely to be politically unpalatable). I think pressure on governments to act on electricity network costs means that the “opt in” option, if adopted by the AER, will by definition be a transitional arrangement. The imperative is to act quickly, because there is a lag between reducing peak demand and the flow-through to capital expenditure savings (another whole issue, which I will discuss in a future post). This lag means that if take-up of TOU is too slow, the effect on the bottom line will be lost in the general noise of electricity consumption cycles: a case of a discount delayed being a discount denied. Retailers will have the right to argue for a phased introduction, but governments and the AER will be under pressure to balance this against the public good.

Non-cyclical change in demand

In recent years we have seen a change in the way electricity is consumed. I won’t go into the details here because I have blogged on this before. Suffice to say that it is one thing to understand from the data how a price may play out in the current market state, but it is another thing altogether to forecast how it will affect earnings. This requires a good idea of where consumption is heading, which in turn is affected by a range of recent disruptors including solar PV, changes in housing energy efficiency and changes in household appliance profiles. Any pricing scenario must therefore include a consumption forecast scenario. It would also be wise to monitor forecasts carefully for other black swans waiting to sweep in.

A whole of market view

The task of maintaining or increasing earnings under TOU pricing will be a zero-sum game. If one retailer ends up with an “unfair share” of the “worse off” segments, then another retailer will get more of the “better off” segments, and this is likely to be a one-off readjustment of the market. Retailers need a sophisticated understanding of customer lifetime value, underpinned by a good understanding of market share by profitability. The problem is that smart meters (and the data needed for modelling TOU) will roll out in stages (Victoria is ahead of the other states, but I think the rollout is inevitable across the National Electricity Market). The true competitive advantage for a retailer comes from estimating the demand profiles of customers still on accumulation meters and of smart-meter customers who are with competitors. There is a range of data mining techniques for building a whole-of-market view, as sketched below, but equally important is a sound go-to-market strategy built to take advantage of these insights.
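As a sketch of the whole-of-market idea mentioned above: train a model on customers whose smart-meter data lets you label their demand profile, using only features that are known for every customer, then score the rest of the market. Everything here, from the features to the labels, is invented for illustration; it is not a description of any retailer’s actual model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical training set: customers with smart meters, labelled by the
# demand-profile cluster ("peaky", "flat", "off-peak heavy") derived from their
# interval data. Features are limited to things knowable for *all* customers.
n = 3000
X = np.column_stack([
    rng.gamma(2.0, 1200, n),   # annual kWh estimated from quarterly bills
    rng.integers(0, 3, n),     # tariff class (encoded)
    rng.integers(0, 4, n),     # dwelling type (encoded)
    rng.normal(18, 4, n),      # climate-zone average temperature
])
y = rng.integers(0, 3, n)      # profile label derived from smart-meter data

model = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(model, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")

# The fitted model would then be scored against customers still on accumulation
# meters, or with competitors, to estimate their likely demand profiles.
```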

There will be winners and losers in the transition to TOU. For consumers, it could be argued that the “losers” are currently “winners” because the cost of their electricity supply is being subsidised by less “peaky” customers. There will also be winners and losers among energy retailers. Some of the winners may not even be in the market yet. The question is who will the losers be?

Energy consumption, customer value and retail strategy

I am sometimes surprised at the amount of effort that goes into marketing electricity, and I can’t help but feel that a lot of customer strategy is over-engineered. So here I present a fairly straightforward approach that acknowledges that energy is a highly commoditised product. This post departs a little from the big themes of this blog, but it is still relevant because the data available from smart meters makes executing on an energy retail strategy a much more interesting proposition (although still a challenging data problem).

To start with let’s look at the distribution of energy consumers by consumption. This should be a familiar distribution shape to those in the know:

Energy Consumption Distribution

In effect what we have are two overlaid distributions: a normal distribution on the left overlapping a Pareto distribution on the right. This first observation tells us that we have two discrete populations, each with its own rules governing the distribution of energy consumption. A normal distribution is a signature of human population characteristics and as such identifies what is commonly termed the electricity “mass market”, essentially dominated by domestic households. The Pareto distribution on the right is typical of an interdependent network such as a stock market, where a stock’s value is not independent of the value of other stocks; it is also similar to what we see in the distribution of business sizes.

A quick look at the distribution of electricity consumption therefore allows us to define two broad groups, and because consumption is effectively a proxy for revenue, we have a valuable measure for understanding customer value.

In our Pareto distribution we have a long tail of an ever-decreasing number of customers with increasingly large consumption (and therefore contribution to revenue). To the left we have the largest number of customers but relatively low value (although mostly better than the customers at the top end of the normal distribution), and to the right a very few “mega-value” customers. We can therefore roughly define three “super-segments” as follows:

Energy Consumption Super Segments
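A minimal sketch of how that segmentation might be produced from annual consumption data. The simulated consumption figures and the segment thresholds are illustrative assumptions only:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Simulated annual consumption (MWh): a normally distributed mass market of
# households plus a heavy-tailed (Pareto) population of business customers.
households = rng.normal(6, 1.5, 20_000).clip(min=0.5)
businesses = (rng.pareto(1.5, 2_000) + 1) * 20
consumption = pd.Series(np.concatenate([households, businesses]), name="annual_mwh")

# Illustrative thresholds separating the three super-segments.
segments = pd.cut(
    consumption,
    bins=[0, 12, 160, np.inf],
    labels=["mass market", "middle segment", "very large customers (VLC)"],
)

summary = (pd.DataFrame({"annual_mwh": consumption, "segment": segments})
           .groupby("segment", observed=True)["annual_mwh"]
           .agg(customers="count", total_mwh="sum"))
print(summary)
```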

With the very large customers (VLC) on the right, revenue is king. Losing just a few of these customers will impact overall revenue, so the strategy here is to retain at all costs. At the extreme right, individual relationship management is a good idea, as is bespoke product design and pricing. Towards the lower end of this segment, a better option may be relationship managers with portfolios of customers. But the overriding rule is 1:1 management where possible.

The middle segment is interesting in that both revenue and margin are important. Getting the balance right between these two measures matters, and the strategy depends on whether your organisation is in a growth phase or a retain phase. If I were a new market entrant, this is where I would be investing a lot of my energy: it is the segment where some small wins could build a revenue base with good returns relatively quickly, assuming the VLC market remains fairly stable, while avoiding the risks inherent in the mass market. On the flip side, if I were a mature player, I would be keeping a careful eye on retention rates and making sure I had the mechanisms to fine-tune the customer value proposition. An example might be offering “value-add” services which become possible with advanced metering infrastructure, such as online tools that allow business owners to track productivity via portal access to real-time energy data, or the ability to upload their own business data to be merged and visualised with energy consumption data.

The mass market is really the focus of most retailers, often because success metrics dwell too heavily on customer numbers rather than revenue and margin, probably because customer numbers are easier to measure. The trap is that these customers have highly variable profitability, as described by the four drivers of customer lifetime value:

Customer Lifetime Value Drivers

Understanding these drivers and developing an understanding of customer lifetime value is critical to developing tailored engagement strategies in this segment. Because these customers are the easiest to acquire, a strategy based around margin means that less profitable customers will be left for competitors to acquire. If those competitors are still focused on customer counts as their measure of success, they will happily acquire unprofitable customers, which in time will increase the pressure to acquire even more as margins fall. Thus the virtuous circle above is replaced with a vicious cycle (thanks to David McCloskey for that epithet).
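As a rough illustration of how such drivers combine into a lifetime value figure, here is a simple sketch. I have assumed the four drivers are annual margin, cost to acquire, cost to serve and retention rate, and the numbers are made up; a real model would estimate each driver per customer from the data.

```python
def customer_lifetime_value(annual_margin: float,
                            cost_to_acquire: float,
                            annual_cost_to_serve: float,
                            retention_rate: float,
                            discount_rate: float = 0.08) -> float:
    """Discounted lifetime value of a customer under constant annual drivers.

    Each year the customer survives with probability `retention_rate`;
    the net annual contribution is margin minus cost to serve.
    """
    net_annual = annual_margin - annual_cost_to_serve
    # Geometric series over years t >= 1 of net_annual * r**t / (1 + d)**t.
    ratio = retention_rate / (1 + discount_rate)
    lifetime_contribution = net_annual * ratio / (1 - ratio)
    return lifetime_contribution - cost_to_acquire

# Two illustrative mass-market customers with identical revenue but very
# different retention, and therefore very different value.
print(customer_lifetime_value(300, 150, 80, retention_rate=0.90))  # ~ $950
print(customer_lifetime_value(300, 150, 80, retention_rate=0.60))  # ~ $125
```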

And so there we have the beginnings of a data-driven customer strategy. There is of course much more to segmentation than this, and there are now very advanced methodologies for producing granular segmentation to help execute on customer strategy and provide competitive advantage. I’ll touch on these in future posts. But this is a good start.

From CRM to ARM: what utilities can learn from banks about maximising value

Last week in Brisbane a small metal clamp holding an overhead electric cable failed, causing a meltdown on the Queensland Rail network and leading to the government compensating commuters with a free day of travel. I expect that there are tens or hundreds of thousands of these clamps across the network, and in all likelihood they are all treated in more or less the same way and assigned the same value.

There are interesting parallels between the current transformation of utilities to the smart grid and what happened in banks with customer analytics at the turn of the millennium. Can we use over a decade of experience with customer relationship management (CRM) in the banking industry to move towards a principle of asset “relationship” management (ARM)?

When I became involved in my first large CRM project over ten years ago, CRM was only concerned with the “kit” (the software and hardware that made up the operational side of CRM) and not with the ecology of customer data, where the real value of CRM lay. To give just one example: we built a system for delivering SMS reminders which was very popular with customers, but when we went to understand why it was so successful we realised that we had not recorded the contacts in a way that was easy to retrieve and analyse. If we had designed CRM from the point of view of an ecology of customer data, we would have been able to extract insight from the SMS reminder initiative faster and at lower cost.

Once we understood this design principle we were able to start delivering real return on investment in CRM, including developing a data construct of the customer which spanned the CRM touch points, point of sale, transactional systems and data residing outside internal systems, including public data and data supplied by third-party providers. We also embarked on standardising processes for data capture, developing common logical data definitions across multiple systems, and then building an analytical data environment. The real CRM came into being once we had developed this whole data ecology of the customer: it enabled a sophisticated understanding of customer lifetime value and the capacity to build a range of models that predict customer behaviour and provide platforms for executing on our customer strategy.

The term “relationship” has distinctly human connotations, and it may seem crazy to apply this thinking to network assets. From a customer strategy perspective, however, it has a purely logical application: how can we capture customer interactions to maximise customer lifetime value, increase retention and reduce the costs of acquiring new customers?

If we look at the customer value drivers, we see some parallels with capital expenditure and asset management. Cost to acquire is roughly synonymous with asset purchase price. Lifetime value applies to both a customer and an asset. Cost to serve a customer parallels the cost to maintain an asset. Customer retention is equivalent to asset reliability. The difference with advanced analytical CRM is that these drivers are calculated not as averages across customer classes but for every single customer.

The development of smart devices, and the associated data environments necessary to support the smart grid, now enables utilities to take a similar approach. Why can we not develop an analytical environment in which we capture attributes for, say, 30 million assets across a network so that we can identify risks to network operation before they materialise?

If we could assign the metal clamp between Milton and Roma Street stations an expected life (and therefore a predicted probability of failure) and a value to the network based on the downstream consequences of failure, and then balance these against the cost to maintain or replace it, we would be applying the same lessons that banks have learnt from CRM and customer lifetime value.
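A minimal sketch of that idea applied across a fleet of assets: score each asset by its expected cost of in-service failure and compare that with the cost of planned replacement. The hazard function, costs and asset records here are all hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    asset_id: str
    age_years: float
    expected_life_years: float
    consequence_cost: float   # downstream cost of an in-service failure
    replacement_cost: float   # cost of planned replacement

    def failure_probability(self) -> float:
        # Crude placeholder hazard: risk rises with age relative to expected
        # life. A real model would be fitted to the network's failure history.
        return min(1.0, (self.age_years / self.expected_life_years) ** 3)

    def expected_failure_cost(self) -> float:
        return self.failure_probability() * self.consequence_cost

    def replace_now(self) -> bool:
        # Replace proactively when the expected failure cost exceeds the
        # cost of a planned replacement.
        return self.expected_failure_cost() > self.replacement_cost

fleet = [
    Asset("clamp-0001", age_years=35, expected_life_years=40,
          consequence_cost=250_000, replacement_cost=400),
    Asset("clamp-0002", age_years=10, expected_life_years=40,
          consequence_cost=5_000, replacement_cost=400),
]
for asset in fleet:
    print(asset.asset_id, round(asset.expected_failure_cost()), asset.replace_now())
```

Run across tens of millions of assets, this is essentially the customer lifetime value calculation with the labels changed: probability of failure plays the role of churn risk, consequence of failure the role of revenue at risk, and replacement cost the role of cost to serve.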

What the water sector can learn from analytics

Deloitte and the Australian Water Association have released the 2011 State of the Australian Water Sector Report, and the most important issue raised by the industry is “sustainability”, with some differences of opinion on whether environmental or economic sustainability is the more pressing concern. An analytical approach can help inform the industry on these sustainability issues in several ways. The “smart grid” is a factor here only to the extent that it represents the technological enhancements that will come with the natural replacement of current infrastructure; there is nothing particularly mystical about it. As I have said before, the “smart grid” is really just the slow evolution of old measurement technology, and the “smart” part is how we get better at extracting useful insight from the data.

A case in point is climatic variability. I am studiously avoiding the term “climate change” because the political debate around it is not useful in the context of managing climate-sensitive resources. The fact that our climate is highly variable is beyond debate, and I will leave it to other forums to debate whether that variation is directional, cyclical or some combination of the two.

One thing the water industry can learn from the electricity industry is the work done to understand weather-related demand and how to account for that variability. I have said before that, while I do not believe the electricity utilities have yet cracked how to properly account for weather-related variability in demand, they have done a lot of work that could yield insights for water utilities. If this variability can be properly understood, we can isolate underlying growth factors and develop consumption scenarios under different hot, dry climatic conditions, as sketched below. From an analytics point of view, if we can model these down to individual consumers, we can develop incredibly rich scenarios with different cohorts of the population responding in different ways. To this extent, all utilities would do well to turn to scenario modelling rather than traditional forecasting in order to better understand underlying growth in demand and provide a solid methodological basis for informing the policy debate (I’ll talk more about the difference between forecasting and scenario modelling in a future blog post).
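As a sketch of what isolating weather-driven demand might look like, here is a simple degree-day regression on simulated daily data. The degree-day thresholds, coefficients and data are all invented; real work would be considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated daily data: mean temperature (deg C) and demand with a seasonal
# weather response plus a slow underlying growth trend we want to isolate.
days = 3 * 365
temperature = 18 + 8 * np.sin(np.linspace(0, 6 * np.pi, days)) + rng.normal(0, 2, days)
cooling_dd = np.clip(temperature - 22, 0, None)   # cooling degree-days
heating_dd = np.clip(16 - temperature, 0, None)   # heating degree-days
trend = np.linspace(0, 1, days)
demand = 100 + 30 * trend + 4 * cooling_dd + 2 * heating_dd + rng.normal(0, 5, days)

# Ordinary least squares: demand ~ intercept + trend + cooling_dd + heating_dd.
X = np.column_stack([np.ones(days), trend, cooling_dd, heating_dd])
coef, *_ = np.linalg.lstsq(X, demand, rcond=None)

weather_normalised = demand - coef[2] * cooling_dd - coef[3] * heating_dd
print("Estimated underlying growth over the period:", round(coef[1], 1))
print("First five weather-normalised observations:", np.round(weather_normalised[:5], 1))
```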

In terms of economic sustainability, pricing and price setting is the key analytical exercise. Understanding price and demand elasticity is the critical element in the future economic sustainability of the water industry. This is still some way off for water, but it is worth considering now because it can help target spending on network infrastructure renewal so that the right data is collected for future modelling. Usually, elasticity is expressed as an average for all users. What is far more important to understand is the distribution of elasticity within a given population, and whether there are other factors that describe elasticity segments (see the sketch below). This can help drive product differentiation and demand management strategies, which in turn support the economic sustainability of the network.
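To illustrate the point about distributions of elasticity, here is a small sketch that looks at the spread of customer-level elasticity estimates rather than the population average. The elasticities and the candidate segmentation variable are simulated, so the split shown is the mechanics only, not a real finding.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(11)

# Hypothetical customer-level own-price elasticity estimates, e.g. from a
# pricing trial in which different customers faced different price changes.
n = 10_000
df = pd.DataFrame({
    "elasticity": rng.normal(-0.15, 0.12, n),
    "has_garden": rng.random(n) < 0.6,   # candidate explanatory factor
})

# The single average hides the spread; compare the distribution by segment.
print("Population mean elasticity:", round(df["elasticity"].mean(), 3))
print(df.groupby("has_garden")["elasticity"].describe()[["mean", "25%", "75%"]])
```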

Seven Things IT Should Know About Analytics

  1. Analytics is not BI

Analytics is serviceably defined by Wikipedia, but that definition does not really do justice to the potential of a properly established analytics environment. To paraphrase Donald Rumsfeld: BI deals with “known knowns”, whereas analytics (at its most exciting) deals with “known unknowns” – that is, as data scientists, we know what we don’t know.

Let me illustrate.

It is important to measure new meter connections: where they are occurring and at what rate. This is a well-defined measure and can easily be translated into a regular report with well-defined metrics (how many, time from request to connection, breakdown by geography, meter type, and so on). This is BI.

If, however, we don’t know why connections are growing; or they appear flat but we suspect some classes are growing while others are shrinking so that the net effect is flat growth; or consumption is changing (as it is in many parts of the country) and this may somehow be linked to new connections – that is analytics.
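A small sketch of the difference: the BI question is the headline count, while the analytics question decomposes that count to find the offsetting movements underneath. The connection figures and customer classes are invented.

```python
import pandas as pd

# Invented monthly new-connection counts by customer class.
data = pd.DataFrame({
    "month": pd.period_range("2012-01", periods=6, freq="M").repeat(3),
    "customer_class": ["residential", "sme", "industrial"] * 6,
    "new_connections": [410, 95, 5, 430, 90, 4, 455, 82, 5,
                        470, 78, 3, 500, 70, 4, 520, 64, 3],
})

# BI: the headline metric, reported every month.
print(data.groupby("month")["new_connections"].sum())

# Analytics: the headline looks healthy, but is one class masking another?
by_class = data.pivot(index="month", columns="customer_class", values="new_connections")
print(by_class.pct_change().mean().round(3))   # average monthly growth by class
```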

  2. Analytics is a business function, not an IT function

Well, not always, but usually. When I started in this field about ten years ago, nobody really knew what to do with our team. We were originally part of a project team implementing a large-scale CRM system. We were deeply technical when it came to understanding customer data, but we weren’t part of IT. We were a “reporting” team, but we were also data cube, database and web developers. There was no process for us to get access to a development environment outside the IT department, so we built our own (we bought our own server from Harvey Norman, and when we had to move offices we wheeled it across the road on an office chair). We built our own statistical models to allocate sales because the system had not been built to recognise sales when they appeared in the product system. And eventually we started using the data to build predictive models and customer segmentation.

To begin with, I think IT saw us as a threat to the safe running of “the system”, but over time we were accepted as a special case. It remains true that for an organisation to be truly competitive in analytics, it must recognise that the analytics teams embedded within the business are deeply technical and need to be treated as a special class of super user.

  3. Analytics is Agile

This is not a new idea. The best analytics outcomes are delivered by small cross-functional groups that cover data manipulation, data mining, machine learning and subject matter expertise. The groups are usually small because analytics development is investigative and generally not hypothesis driven (or, if there are hypotheses, there may be many competing ones that need to be tested), and the outputs of very complex analysis can often be disarmingly simple algorithms. It is not unusual for months of development to produce fewer than 20 lines of code.

  4. Analytics needs lots of data

Analytics thrives on lots of available data, but not all of it is used all of the time. When we are asked what data we require, the trite answer is “give us everything”. The reason is that results can be biased if they are built on only part of the data record. That’s not to say all of the data is used all of the time, but a competent analyst will always know what has been excluded and how the data may have been sampled or summarised. For example, we spend a lot of time deciding whether to treat missing data as null, as zero, or to replace it in some way, and the answer is different for every project. In the age of smart grid data, where datasets are very large (I have recently seen a ten-terabyte table), an analytics environment should regularly build data samples for analytical use (properly randomised, of course, in consultation with analytics users).
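A sketch of the kind of decisions described above, on simulated stand-in data: draw a properly randomised sample from a large interval-read table by sampling whole meters rather than rows, and make the missing-value treatment an explicit, documented choice.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Simulated stand-in for a very large smart-meter interval table
# (columns: meter_id, read_time, kwh, with some reads missing).
n_meters, n_reads = 2_000, 48
reads = pd.DataFrame({
    "meter_id": np.repeat(np.arange(n_meters), n_reads),
    "read_time": np.tile(pd.date_range("2012-07-01", periods=n_reads, freq="30min"),
                         n_meters),
    "kwh": rng.gamma(2.0, 0.25, n_meters * n_reads),
})
reads.loc[rng.random(len(reads)) < 0.02, "kwh"] = np.nan   # ~2% missing reads

# Sample whole meters, not individual rows, so every sampled customer keeps a
# complete load profile (sampling rows would bias any per-customer analysis).
sampled_meters = pd.Series(reads["meter_id"].unique()).sample(frac=0.01, random_state=42)
sample = reads[reads["meter_id"].isin(sampled_meters)]

# Keep the missing-data decision explicit and project-specific: here missing
# reads stay as NaN rather than being filled with zero, because zero is a
# genuine observation ("no consumption") while NaN means "no read".
print(len(sample), "rows sampled;", f"{sample['kwh'].isna().mean():.1%} missing")
```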

  5. Analytics takes care of the “T” in ETL

Analytics teams like to take control of the “Transform” part of the ETL process, especially where the transform step involves summarisation or some other change to the data. Because data mining processes can pick up very subtle signals in the data, small changes to the data can lead to bad models, and sometimes what is considered bad data is exactly the effect of interest to the data miner. For example, “bad” SCADA readings need to be removed from the dataset in order to develop accurate forecasts, but the same bad data may be of interest when building asset failure models.
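A sketch of why the analytics team wants to own that transform: the same “bad” SCADA readings are excluded for forecasting but kept, flagged, for asset analytics. The thresholds, fault values and column names are invented.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)

# Simulated SCADA feeder-load readings, with a few negative values and
# implausible spikes standing in for sensor or telemetry faults.
scada = pd.DataFrame({"mw": rng.normal(50, 8, 10_000)})
faults = rng.random(len(scada)) < 0.005
scada.loc[faults, "mw"] = rng.choice([-999.0, 3000.0], size=int(faults.sum()))

# Flag rather than delete: the transform stays reversible and both audiences
# can work from the same table.
scada["suspect"] = (scada["mw"] < 0) | (scada["mw"] > 500)

forecasting_input = scada.loc[~scada["suspect"], ["mw"]]   # clean series for forecasting
asset_health_input = scada                                 # the faults are the signal here

print("Readings flagged:", int(scada["suspect"].sum()))
print("Forecasting rows:", len(forecasting_input),
      "| Asset-health rows:", len(asset_health_input))
```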

  6. Nothing grows in a sterile garden; don’t over-cleanse your analytics datasets

All raw data is dirty: records go missing, data entry is poor, mysterious analogue readings get digitised. But as in the example above, dirty data can be a signal worth investigating. Also, because models can be sensitive to outliers, the data miner likes to have control over the definition of an outlier, which is often a relative measure. If outliers have already been removed, valid records may be discarded in further outlier-removal processes. Of course, this needs to be balanced against the possibility that dirty data may lead to false conclusions, especially with less experienced analytics users. The right balance needs to be found, and that does not always mean that cleaner is better.
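A tiny illustration of the “outlier is a relative measure” point, on simulated data: the same reading can be extreme against one customer’s history yet unremarkable against the pooled population, so a global pre-cleanse would make the wrong call for somebody.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(9)

# Two simulated customers: a small household and a large business.
household = pd.Series(rng.normal(8, 1.5, 365), name="household_kwh")
business = pd.Series(rng.normal(400, 60, 365), name="business_kwh")
reading = 25.0   # a single daily read to assess

def z_score(value: float, history: pd.Series) -> float:
    return (value - history.mean()) / history.std()

print("vs this household's history:", round(z_score(reading, household), 1))   # extreme
print("vs all customers pooled:    ",
      round(z_score(reading, pd.concat([household, business])), 1))            # unremarkable
```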

  7. Be aware of who is doing analytics in your organisation

Because analytics is a technical function embedded in business units, and because it is agile by nature, it can be a very rapid way to develop value metrics for large system changes. It is good to know who is doing what, as this may provide tangible evidence of business value. In most organisations that are doing some analytics, it tends to happen in pockets across the organisation, so it may not be immediately obvious who is doing what and how that might support IT business cases.

The Smart Grid is Already Here

I have just returned from last week’s ANZ Smart Utilities Conference in Sydney, where I heard a lot of talk about the future potential of smart grids. It is true that there are some fundamental changes around the corner, particularly driven by new technologies in an energy-constrained economy. But arguably the smart grid began sixty years ago, when SCADA started collecting data, and it has been a mostly sleeping giant ever since. In a world where energy has been plentiful and cheap, there has been no real desire to capture, store and analyse the data in any large-scale and systematic way.

In some respects, what has happened to the electricity grid over the last few decades has been a lesson in the tragedy of the commons because cheap electricity has meant that the grid that delivers it has been treated as an over-exploited resource.

Now that we are seeing the impacts of underinvestment in the grid, we are interested in making it “smart”, when in fact it has been smart all along. In spite of data quality issues and sporadic archiving, there is still a lot of value held in that data that has not been unlocked, especially now that we have technology stacks that can handle very large datasets and apply advanced statistical and machine learning processes to them.

So the smart grid is already here. While it is important to keep an eye on future technology and trends, it is equally important that utilities get their houses in order in how they manage and interrogate their existing data assets.