Cape Town is almost out of water. Could Australian cities suffer the same fate?

 With water storages running low, residents of Cape Town 
get drinking water in the early morning from a mountain spring 
collection point. Nic Bothma/EPA

The world is watching the unfolding Cape Town water crisis with horror. On “Day Zero”, now predicted to be just ten weeks away, engineers will turn off the water supply. The South African city’s four million residents will have to queue at one of 200 water collection points.

Cape Town is the first major city to face such an extreme water crisis. There are so many unanswered questions. How will the sick or elderly people cope? How will people without a car collect their 25-litre daily ration? Pity those collecting water for a big family.

The crisis is caused by a combination of factors. First of all, Cape Town has a very dry climate with annual rainfall of 515mm. Since 2015, it has been in a drought estimated to be a one-in-300-year event.

In recent years, the city’s population has grown rapidly – by 79% since 1995. Many have questioned what Cape Town has done to expand the city’s water supply to cater for the population growth and the lower rainfall.

Could this happen in Australia?

Australia’s largest cities have often struggled with drought. Water supplies may decline further due to climate change and uncertain future rainfall. With all capital cities expecting further population growth, this could cause water supply crises.

The situation in Cape Town has strong parallels with Perth in Australia. Perth is half the size of Cape Town, with two million residents, but has endured increasing water stress for nearly 50 years. From 1911 to 1974, the annual inflow to Perth’s water reservoirs averaged 338 gigalitres (GL) a year. Inflows have since shrunk by nearly 90% to just 42GL a year from 2010-2016.

To make matters worse, the Perth water storages also had to supply more people. Australia’s fourth-largest city had the fastest capital city population growth, 28.2%, from 2006-2016.

As a result, Perth became Australia’s first capital city unable to supply its residents from storage dams fed by rainfall and river flows. In 2015 the city faced a potentially disastrous situation. River inflows to Perth’s dams dwindled to 11.4GL for the year.

For its two million people, the inflows equated to only 15.6 litres per person per day! Yet in 2015-16 Perth residents consumed an average of nearly 350 litres each per day – the highest daily water consumption of Australia’s capitals. How was this achieved?
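For readers who want to check that figure, the arithmetic is straightforward – a quick back-of-envelope sketch in Python using the numbers quoted above:

```python
# Back-of-envelope check of Perth's 2015 inflows, per person per day.
GL_TO_LITRES = 1e9          # 1 gigalitre = 1 billion litres

inflow_gl = 11.4            # river inflow to Perth's dams in 2015 (from the article)
population = 2_000_000      # approximate Perth population
days_per_year = 365

litres_per_person_per_day = inflow_gl * GL_TO_LITRES / population / days_per_year
print(round(litres_per_person_per_day, 1))  # → 15.6
```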

Tapping into desalination and groundwater

Perth has progressively sourced more and more of its supply from desalination and from groundwater extraction. This has been expensive and has been the topic of much debate. Perth is the only Australian capital to rely so heavily on desalination and groundwater for its water supply.

Volumes of water sourced for urban use in Australia’s major cities. BOM, Water in Australia, p.52, National Water Account 2015

Australia’s next most water-stressed capital is Adelaide. That city is supplementing its surface water storages with desalination and groundwater, as well as water “transferred” from the Murray River.

Australia’s other capital cities on the east coast have faced their own water supply crises. Their water storages dwindled to between 20% and 35% capacity in 2007. This triggered multiple actions to prevent a water crisis. Progressively tighter water restrictions were declared.

The major population centres (Brisbane/Gold Coast, Sydney, Melbourne and Adelaide) also built large desalination plants. The community reaction was mixed: some welcomed the plants, while others questioned their costs and environmental impacts.

The desalination plants were expensive to build, consume vast quantities of electricity and are very expensive to run. They remain costly to maintain, even if they do not supply desalinated water. All residents pay higher water rates as a result of their existence.

Since then, rainfall in southeastern Australia has increased and water storages have refilled. The largest southeastern Australia desalination plants have been placed on “stand-by” mode. They will be switched on if and when the supply level drops.

Investing in huge storage capacity

Many Australian cities also store very large volumes of water in their reservoirs. This allows them to keep supplying water through extended periods of dry weather.

Cape Town’s largest water storage, Theewaterskloof Dam, has run dry, but the city’s total capacity is small compared to the big Australian cities’ reserves. Nic Bothma/EPA

The three largest cities (Sydney, Melbourne and Brisbane) have built very large dams indeed. For example, Brisbane has 2,220,150 ML storage capacity for its 2.2 million residents. That amounts to just over one million litres per resident when storages are full.

In comparison, Cape Town’s four million residents have a full storage capacity of 900,000 ML. That’s 225,000 litres per resident. Cape Town is constructing a number of small desalination plants while anxiously waiting for the onset of the region’s formerly regular winter rains.
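The per-resident comparison above can be verified from the storage figures quoted in this article (a rough sketch – actual reserves fluctuate with storage levels):

```python
# Full storage capacity per resident, using the figures quoted above.
ML_TO_LITRES = 1e6   # 1 megalitre = 1 million litres

cities = {
    # city: (full storage capacity in ML, residents)
    "Brisbane":  (2_220_150, 2_200_000),
    "Cape Town": (900_000, 4_000_000),
}

for city, (capacity_ml, residents) in cities.items():
    per_resident = capacity_ml * ML_TO_LITRES / residents
    print(f"{city}: {per_resident:,.0f} litres per resident")
# Brisbane: 1,009,159 litres per resident
# Cape Town: 225,000 litres per resident
```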

This article was written by:
Ian Wright – [Senior Lecturer in Environmental Science, Western Sydney University]




This article is part of a syndicated news program via

Explainer: power station ‘trips’ are normal, but blackouts are not

 The Loy Yang power station ‘tripped’ early in  
the year, triggering fears of a summer of blackouts. DAVID CROSLING/AAP

Tens of thousands of Victorians were left without power over the long weekend as the distribution network struggled with blistering temperatures, reigniting fears about the stability of our energy system.

It comes on the heels of a summer of “trips”, when power stations temporarily shut down for a variety of reasons. This variability has also been used to attack renewable energy such as wind and solar, which naturally fluctuate depending on weather conditions.

The reality is that blackouts, trips and intermittency are three very different issues, which should not be conflated. As most of Australia returns to school and work in February, and summer temperatures continue to rise, the risk of further blackouts makes it essential to understand what caused them, what a power station “trip” really is, and how intermittent renewable energy can be integrated into a national system.

Read more: A month in, Tesla’s SA battery is surpassing expectations


Initial reports indicate recent blackouts in Victoria were caused by multiple small failures in the electricity distribution system across the state, affecting all but one of the five separately owned and managed systems that supply Victorians.

Across the whole of mainland Australia, very hot weather causes peak levels of electricity consumption. Unfortunately, for reasons of basic physics, electricity distribution systems do not work well when it is very hot, so the combination of extreme heat and high demand is very challenging. It appears that significant parts of the Victorian electricity distribution system were unable to meet the challenge, leading to uncontrolled blackouts.

Parenthetically, electricity distribution systems are vulnerable to other types of uncontrollable extreme environmental events, including high winds, lightning, and bushfires. Sometimes blackouts last only a few seconds, sometimes for days, depending on the nature and extent of the damage to the system. 

These blackouts are very different from those caused by power station “trips”, although they have the same effect on consumers. When electricity supply is insufficient to meet demand, certain sections of the grid have to be strategically blacked out to restore the balance (this is known as “load shedding”).

It is the possibility of blackouts of this second type which has excited so much commentary in recent months, and has been linked to power station “trips”.

What is a ‘trip’ and how significant is it?

“Trip” simply means disconnect; it is used to describe the ultra-fast operation of the circuit breakers used as switching devices in high-voltage electricity transmission systems. When a generator trips, it means that it is suddenly, and usually unexpectedly, disconnected from the transmission network, and thus stops supplying electricity to consumers.

The key words here are suddenly and unexpectedly. Consider what happened in Victoria on January 18 this year. It was a very hot day and all three brown coal power stations in the state were generating at near full capacity, supplying in total about 4,200 megawatts towards the end of the afternoon, as total state demand climbed rapidly past 8,000MW (excluding rooftop solar generation).

Suddenly, at 4:35pm, one of the two 500MW units at Loy Yang B, Victoria’s newest (or, more precisely, least old) coal-fired power station, tripped. At the time this unit was supplying 490MW, equal to about 6% of total state demand.

The system, under the operational control of the Australian Energy Market Operator (AEMO), responded just as it was meant to. There was considerable spare gas generation capacity, some of which was immediately made available, as was some of the more limited spare hydro capacity. There was also a large increase in imports from New South Wales, and a smaller reduction in net exports to South Australia.

By the time Loy Yang B Unit 1 was fully back on line, three hours later, Victoria had passed its highest daily peak demand for nearly two years. There was no load shedding: all electricity consumers were supplied with as much electricity as they required. However, spot wholesale prices for electricity reached very high levels during the three hours, and it appears that some large consumers, whose supply contracts exposed them to wholesale prices, made short-term reductions in discretionary demand.

Read more: A high price for policy failure: the ten-year story of spiralling electricity bills

This (relatively) happy outcome on January 18 was made possible by the application of the system reliability rules and procedures, specified in the National Electricity Rules.

These require AEMO to ensure that at all times, in each of the five state regions of the National Electricity Market (NEM), available spare generation capacity exceeds the combined capacity of the two largest units operating at any time.

In other words, spare capacity must be sufficient to allow demand to continue to be reliably supplied if both of the two largest units generating should suddenly disconnect.


AEMO forecasts energy demand, and issues market notices alerting generators about reliability, demand and potential supply issues. On a busy day, like January 18, market notices may be issued at a rate of several per hour.

These forecasts allowed generators to respond to the loss of Loy Yang B without causing regional blackouts.

What is not publicly known, and may never be known, is why Loy Yang B Unit 1 tripped. AEMO examines and reports in detail on what are called “unusual power system events”, which in practice means major disruptions, such as blackouts. There are usually only a few of these each year, whereas generator trips that don’t cause blackouts are much more frequent (as are similar transmission line trips).

It has been widely speculated that, as Australia’s coal-fired generators age, they are becoming less reliable, but that could only be confirmed by a systematic and detailed examination of all such events.

Managing variable generation

Finally, and most importantly, the events described above bear almost no relationship to the challenges to reliable system operation presented by the growth of wind and solar generation.

With traditional thermal generation, the problems are caused by unpredictability of sudden failures, and the large unit size, especially of coal generators, which means that a single failure can challenge total system reliability. Individual wind generators may fail unpredictably, but each machine is so small that the loss of one or two has a negligible effect on reliability.

The challenge with wind and solar is not reliability but the variability of their output, caused by variations in weather. This challenge is being addressed by continuous improvement of short term wind forecasting. As day-ahead and hour-ahead forecasts get better, the market advice AEMO provides will give a more accurate estimate of how much other generation will be needed to meet demand at all times.

Of course, AEMO, and the generation industry, do still get caught out by sudden and unexpected drops in wind speed, but even the fastest drop in wind speed takes much longer than the milliseconds needed for a circuit breaker in a power station switchyard to trip out.

At the same time, as the share of variable renewable generation grows, the complementary need for a greater share of fast response generators and energy storage technologies will also grow, while the value to the system of large, inflexible coal-fired generators will shrink.

This Explainer was prepared by:
Hugh Saddler – [Honorary Associate Professor, Centre for Climate Economics and Policy, Australian National University]






Sustainable shopping: how to stay green when buying white goods

 It pays to think very carefully about your new fridge. 

Most of us have a range of white goods in our homes. According to the Australian Bureau of Statistics, common appliances include refrigerators (in 99.9% of homes), washing machines (97.8%) and air conditioners (74.0%). Just over half of Australian households have a dishwasher, and a similar number have a clothes dryer.

These white goods provide a host of benefits, such as reducing waste, improving comfort, helping us avoid health hazards such as rotten food, or simply freeing up our time to do other things. But they also have significant environmental impacts, and it’s important to consider these when using and choosing white goods.

Most white goods are used on a daily basis for years. This means the bulk of their environmental impact comes not from their manufacture, but from their everyday use. They use electricity, for example, which is often sourced from fossil fuels.

Life cycle impacts of typical white goods. MIT

When buying an appliance, many people focus on the retail price, but overlook the often significant operating costs. The table below shows the difference in annual energy costs and greenhouse emissions for different-sized dishwashers under various scenarios.

When it comes to appliances, size matters. Sustainability Victoria, Author provided

What to look for

Here are some questions you should ask when shopping for a new white goods appliance:

  • How resource-efficient is this model, compared with other options?
  • How much will it cost to operate?
  • Over the life of the product, would I be better off spending more now to buy a more energy-efficient model that costs less to run?

Read the label

White goods in Australia are required to carry a label detailing their energy and water ratings. The more stars a product has, the more energy- or water-efficient it is. The labels also provide information on average annual consumption, so you can compare similar products or account for factors such as the size of the appliance.

Knowing how to interpret consumer information can be valuable. Author provided

The Energy Rating website also allows you to make comparisons, and even calculates usage costs and savings for you. As shown in the figure below, choosing a 10-star fridge over a 3-star one will save you an estimated A$664 in running costs over 10 years. This would offset some or all of the extra up-front cost of buying a more sustainable model.

The Australian government’s energy rating website helps you calculate the savings from being energy-efficient. Author provided
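As a rough illustration of the up-front versus running-cost trade-off, here is a hypothetical payback calculation. The consumption figures, tariff and price premium below are invented for the example – they are not data from the Energy Rating website:

```python
# Hypothetical fridge comparison – every number here is an illustrative assumption.
tariff = 0.30          # assumed electricity price, A$ per kWh
years = 10

low_star_kwh = 460     # assumed annual use of a less efficient model (kWh)
high_star_kwh = 240    # assumed annual use of a more efficient model (kWh)
extra_upfront = 500    # assumed price premium for the efficient model (A$)

saving = (low_star_kwh - high_star_kwh) * tariff * years
print(f"10-year running-cost saving: A${saving:.0f}")                  # A$660
print(f"Net position after {years} years: A${saving - extra_upfront:.0f}")  # A$160 ahead
```

Under these assumed numbers the efficient model more than pays back its price premium; with a different tariff or usage pattern the balance could shift, which is why it pays to run the numbers for your own situation.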

Many other organisations and websites also provide performance and user reviews for appliances. Choice is an independent organisation that tests a variety of products, including white goods. The tests scrutinise a range of criteria, including energy- and water-efficiency, ease of use, operating costs, and durability.

Use wisely

Once you get your new appliance home, it is also crucial to use it properly. Make sure you read the manual and find out how to maximise the efficiency of the appliance.

For example, talk to your air conditioner installer to determine the optimal position to cool and heat your space, depending on the aspect and layout of your home. And make sure you leave enough space around the back and side of your fridge for air to circulate, which helps dissipate waste heat more effectively and can save up to 150kg of carbon dioxide emissions a year.

Make sure you turn the appliance off when not in use – failing to do this can further add to running costs and environmental impact. Ask yourself whether you really need to keep that ancient beer fridge humming away in the garage.

Of course, you don’t have to wait for a new appliance before doing all of these things. You can make your current appliances perform more efficiently by reviewing how you use and position them.

Washing a full load of clothes is more efficient and sustainable than only washing a part load. If you think you need to do smaller washloads generally, then consider buying a smaller washing machine, or find a model that has smart features such as being able to do a half load.

Many appliances such as dishwashers and washing machines also feature “eco” modes that can save significant amounts of water and energy.

Finally, it’s always worth asking yourself whether you truly need to buy that new appliance. Consider having a broken appliance fixed, as this will avoid using all the resources required to manufacture a new one. Or consider buying secondhand.

Even if buying secondhand, you can check the environmental performance of the appliance, either through the energy rating website or via the manufacturer. Make sure you compare this to new options to see which works out better over the life of the product.

And if you do buy yourself a new or secondhand appliance, make sure you look into how to recycle your old appliance, through your local council, charities or other organisations.

This article was co-authored by:

Trivess Moore – [Research Fellow, RMIT University]
Simon Lockrey – [Research Fellow, RMIT University]






A month in, Tesla’s SA battery is surpassing expectations

 A month into operation, the Tesla lithium-ion  
battery at Neoen wind farm in Hornsdale, South Australia is already providing 
essential grid services. REUTERS/Sonali Paul

It’s just over one month since the Hornsdale power reserve was officially opened in South Australia. The excitement surrounding the project has generated acres of media interest, both locally and abroad.

The aspect that has generated the most interest is the battery’s rapid response time in smoothing out several major energy outages that have occurred since it was installed.

Following the early success of the SA model, Victoria has also secured an agreement to get its own Tesla battery built near the town of Stawell. Victoria’s government will be tracking the Hornsdale battery’s early performance with interest.

Generation and Consumption

Over the full month of December 2017, the Hornsdale power reserve generated 2.42 gigawatt-hours of energy, and consumed 3.06GWh.

Since there are losses associated with energy storage, it is a net consumer of energy. This is often described in terms of “round trip efficiency”, a measure of the energy out to the energy in. In this case, the round trip efficiency appears to be roughly 80%.
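That efficiency figure can be checked directly from the two monthly totals quoted above:

```python
# Round-trip efficiency = energy out / energy in, from the December totals.
generated_gwh = 2.42   # energy the battery discharged to the grid
consumed_gwh = 3.06    # energy the battery drew from the grid to charge

efficiency = generated_gwh / consumed_gwh
print(f"{efficiency:.0%}")  # → 79%, i.e. roughly 80%
```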

The figure below shows the input and output from the battery over the month. As can be seen, on several occasions the battery has generated as much as 100MW of power, and consumed 70MW of power. In regular operation, the battery moves between generating and consuming about 30MW of power.

Generation and consumption of the Hornsdale Power Reserve over the month of December 2017. Author provided [data from AEMO]

As can be seen, the generation and consumption pattern is rather “noisy”, and doesn’t really appear to follow a pattern at all. This is true even on a daily basis, as can be seen below, and reflects the services the battery provides.

Generation and consumption of the Hornsdale Power Reserve on January 6, 2018. Author provided [data from AEMO]

Frequency Control Ancillary Services

There are eight different Frequency Control Ancillary Services (FCAS) markets in the National Electricity Market (NEM). These can be put into two broad categories: contingency services and regulation services.

Contingency services

Contingency services essentially stabilise the system when something unexpected occurs. These are called credible contingencies. The tripping (isolation from the grid) of a large generator is one example.

When such unexpected events occur, supply and demand are no longer balanced, and the frequency of the power system moves away from the normal operating range. This happens on a very short timescale. The contingency services ensure that the system is brought back into balance and that the frequency is returned to normal within 5 minutes.

In the NEM there are three separate timescales over which these contingency services should be delivered: 6 seconds, 60 seconds, and 5 minutes. As the service may have to increase or decrease the frequency, there is thus a total of six contingency markets (three that raise frequency in the timescales above, and three that reduce it).
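The market structure described above – three timescales, each with a raise and a lower service, plus the two regulation services introduced later – can be sketched as a simple enumeration:

```python
# Enumerating the eight FCAS markets described in this article.
from itertools import product

timescales = ["6 second", "60 second", "5 minute"]
directions = ["raise", "lower"]

# Six contingency markets: every combination of timescale and direction.
contingency_markets = [f"{t} {d}" for t, d in product(timescales, directions)]
# Two regulation markets: raise and lower.
regulation_markets = [f"regulation {d}" for d in directions]

fcas_markets = contingency_markets + regulation_markets
print(len(fcas_markets))  # → 8
```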

This is usually done by rapidly increasing or decreasing output from a generator (or battery in this case), or rapidly reducing or increasing load. This response is triggered at the power station by the change in frequency.

Tesla’s lithium-ion battery in South Australia has provided essential grid services on many occasions throughout December, according to the Australian Energy Market Operator. Reuters

To do this, generators (or loads) have some of their capacity “enabled” in the FCAS market. This essentially means that a proportion of their capacity is set aside, and available to respond if the frequency changes. Providers get paid for the amount of megawatts they have enabled in the FCAS market.

This is one of the services that the Hornsdale Power Reserve has been providing. The figure below shows how it responded to one such incident, when one of the units at Loy Yang A tripped on December 14, 2017.

The Hornsdale Power Reserve responding to a drop in system frequency. Author provided [data from AEMO]

Regulation services

The regulation services are a bit different. Similar to the contingency services, they help maintain the frequency in the normal operating range. And like contingency, regulation may have to raise or lower the frequency, and as such there are two regulation markets.

However, unlike contingency services, which essentially wait for an unexpected change in frequency, the response is governed by a control signal, sent from the Australian Energy Market Operator (AEMO).

In essence, AEMO controls the throttle: it monitors the system frequency and sends out a control signal every four seconds. This signal alters the output of the generator so that the balance between supply and demand is maintained.

This is one of the main services that the battery has been providing. As can be seen, the output of the battery closely follows the amount of capacity it has enabled in the regulation market.

Output of Hornsdale Power Reserve compared with enablement in the regulation raise FCAS market. Author provided [data from AEMO]

More batteries to come

Not to be outdone by its neighbouring state, the Victorian government has also recently secured an agreement for its own Tesla battery. This agreement, in conjunction with a wind farm near the town of Stawell, should see a battery providing similar services in Victoria.

This battery may also provide additional benefits to the grid. The project is located in a part of the transmission network that AEMO has indicated may need augmentation in the future. This project might illustrate the benefits the batteries can provide in strengthening the transmission network.

It is still early days for the Hornsdale Power Reserve, but it’s clear that it has been busy performing essential services, and doing so at impressive speeds. Importantly, it has provided regular frequency control ancillary services – not simply shifting electricity around.

With the costs of, and need for, frequency control services increasing in recent years, the boost to supply through the Hornsdale power reserve is good news for consumers, and a timely addition to Australia’s energy market.

This article was written by:
Dylan McConnell – [Researcher at the Australian German Climate and Energy College, University of Melbourne]






A high price for policy failure: the ten-year story of spiralling electricity bills

 The storm clouds have been gathering over energy policy 
for a decade or more. Joe Castro/AAP Image

Politicians are told never to waste a good crisis. Australia’s electricity sector is in crisis, or something close to it. The nation’s first-ever statewide blackout, in South Australia in September 2016, was followed by electricity shortages in several states last summer. More shortages are anticipated over coming summers.

But for most Australians, the most visible impact of this crisis has been their ever-increasing electricity bills. Electricity prices have become a political hot potato, and the blame game has been running unchecked for more than a year.

Electricity retailers find fault with governments, and renewable energy advocates point the finger at the nasty old fossil-fuel generators. The right-wing commentariat blames renewables, while the federal government blames everyone but itself.

The truth is there is no silver bullet. No single factor or decision is responsible for the electricity prices we endure today. Rather, it is the confluence of many different policies and pressures at every step of the electricity supply chain.

According to the Australian Competition and Consumer Commission (ACCC), retail customers in the National Electricity Market (which excludes Western Australia and the Northern Territory) now pay 44% more in real terms for electricity than we did ten years ago.

Four components make up your electricity bill. Each has contributed to this increase.

How your rising power bills stack up. ACCC, Author provided

The biggest culprit has been the network component – the cost of transporting the electricity. Next comes the retail component – the cost of billing and servicing the customer. Then there is the wholesale component – the cost of generating the electricity. And finally, the government policy component – the cost of environmental schemes that we pay for through our electricity bills.

Each component has a different tale, told differently in every state. But ultimately, this is a story about a decade of policy failure.

Network news

Network costs form the biggest part of your electricity bill. Australia is a big country, and moving electricity around it is expensive. As the graph above shows, network costs have contributed 40% of the total price increase over the past decade.

The reason we now pay so much for the network is simply that we have built an awful lot more stuff over the past decade. It’s also because it was agreed – through the industry regulator – that network businesses could build more network infrastructure and that we all have to pay for it, regardless of whether it is really needed.

Network businesses are heavily regulated. Their costs, charges and profits all have to be ticked off. This is supposed to keep costs down and prevent consumers being charged too much.

That’s the theory. But the fact is costs have spiralled. Between 2005 and 2016 the total value of the National Electricity Market (NEM) distribution network increased from A$42 billion to A$72 billion – a whopping 70%. During that time there has been little change in the number of customers using the network or the amount of electricity they used. The result: every unit of electricity we consume costs much more than it used to.
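A quick check of that growth figure, using the two valuations quoted above:

```python
# NEM distribution network value, A$ billion (figures from the article).
value_2005 = 42
value_2016 = 72

growth = (value_2016 - value_2005) / value_2005
print(f"{growth:.0%}")  # → 71%, the "whopping 70%" quoted above
```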

There are several reasons for this expensive overbuild. First, forecasts of electricity demand were wrong – badly wrong. Instead of ever-increasing consumption, the amount of electricity we used started to decline in 2009. A whole lot of network infrastructure was built to meet demand that never eventuated.

Second, governments in New South Wales and Queensland imposed strict reliability settings – designed to avoid blackouts – on the networks in the mid-2000s. To meet these reliability settings, the network businesses had to spend a lot more money reinforcing their networks than they otherwise would have.

Third, the way in which network businesses are regulated encourages extra spending on infrastructure. In an industry where you are guaranteed a 10% return on investment, virtually risk-free – as network businesses were between 2009 and 2014 – you are inclined to build, build, build.

The blame for this “gold-plating” of network assets is spread widely. Governments have been accused of panicking and setting reliability standards too high. The regulator has copped its share for allowing businesses too much capital spend and too high a return. Privatisation has also been criticised, which is slightly bizarre given that the worst offenders have been state-owned businesses.

Retail rollercoaster

The second biggest increase in your bill has been the amount we pay for the services provided to us by retailers. Across the NEM, 26% of the price increase over the past decade has been due to retail margins.

This increase in the retail component was never supposed to happen. To understand why, you must go back to the rationale for opening the retail sector to competition. Back in the 1990s, it was felt that retail energy was ripe for competition, which would deliver lower prices and more innovative products for consumers.

In theory, where competition exists, firms seek to reduce their costs to maximise their profits, in turn allowing them to reduce prices so as to grab as many customers as possible. The more they cut their costs, the more they can cut prices. Theoretically, costs are minimised and profits are squeezed. If competition works as it’s supposed to, the retail component should go down, not up.

But the exact opposite has happened in the electricity sector. In Victoria, the state that in 2009 became the first to completely deregulate its retail electricity market, the retail component of the bill has contributed to 36% of the price increase over the past decade.

On average, Victorians pay almost A$400 a year to retailers, more than any other mainland state in the NEM. This is consistent with the Grattan Institute’s Price Shock report, which showed that rising profits are causing pain for Victorian electricity consumers. Many customers remain on expensive deals, and do not switch to cheaper offers because the market is so complicated. These “sticky” customers have been cited as the cause of “excessive” profits to retailers.

But the new figures provided by the ACCC, which come directly from retailers, paint a different picture. The ACCC finds that the increase in margins in Victoria is wholly down to the increasing costs of retailers doing business.

There are reasons why competition might drive prices up, not down. Retailers now spend money on marketing to recruit and retain customers. And the existence of multiple retailers leads to duplications in costs that would not exist if a single retailer ran the market.

But these increases should be offset by retailers finding savings elsewhere, and this doesn’t seem to have happened. History may judge the introduction of competition to the retail electricity market as an expensive mistake.

Generational problems

So far, we have accounted for 65% of the bill increase of the past decade, and neither renewables nor coal have been mentioned once. Nor were they ever likely to be. The actual generation of electricity has only ever formed a minor portion of your electricity bill – the ACCC report shows that in 2015-16 the wholesale component constituted only 22% of the typical bill.

In the past year, however, wholesale prices have risen sharply. In 2015-16, households paid on average A$341 a year for the generation of electricity – far less than they were paying in 2006-07. But in the past year, that is estimated to have increased to A$530 a year.

Generators, particularly in Queensland, have been engaging in questionable behaviour, but it is the fundamental change in the supply and demand balance that means higher prices are here to stay for at least the next few years.

The truth is the cost of generating electricity has been exceptionally low in most parts of Australia for most of the past two decades. When the NEM was created in 1998, there was arguably more generation capacity in the system than was needed to meet demand. And in economics, more supply than demand equals low prices.

Over the years our politicians have been particularly good at ensuring overcapacity in the system. Most of the investment in generation in the NEM since its creation has been driven by either taxpayers’ money, or government schemes and incentives – not by market forces. The result has been oversupply.

Up until the late 2000s the market kept chugging along. Then two things happened. First, consumers started using less electricity. And second, the Renewable Energy Target (RET) was ramped up, pushing more supply into the market.

Demand down and supply up meant even more oversupply, and continued low prices. But the combination of low prices and low demand put pressure on the finances of existing fossil fuel generators. Old generators were being asked to produce less electricity than before, for lower prices. Smaller power stations began to be mothballed or retired.

Something had to give, and it did when both Alinta and Engie decided it was no longer financially viable to keep their power stations running. Far from being oversupplied, the market is now struggling to meet demand on hot days when people use the most electricity. The result is very high prices.

A tight demand and supply balance with less coal-fired generation has meant that Australia increasingly relies on gas-fired generation, at a time when gas prices are astronomical, leading to accusations of price-gouging.

Put simply, Australia has failed to build enough new generation over recent years to reliably replace ageing coal plants when they leave the market.

Is it renewable energy’s fault that coal-fired power stations have closed? Yes, but this is what needs to happen if we are to reduce greenhouse emissions. Is it renewables’ fault that replacement generation has not been built? No. It’s the government’s fault for failing to provide the right environment for new investment.

The right investment climate is crucial. Marcella Cheng

The current predicament could have been avoided if we had a credible and comprehensive emissions reduction policy to drive investment in the sector. Such a policy would give investors the confidence to build generation with the knowledge about what carbon liabilities they may face in the future. But the carbon price was repealed in 2014 and replaced with nothing.

We’re still waiting for an alternative policy. We’re still waiting for enough generation capacity to be built. And we’re still paying sky-high wholesale prices for electricity.

Green and gold

Finally, we have the direct cost of government green schemes over the past decade: the RET; the household solar panel subsidies; and the energy-efficiency incentives for homes and businesses.

They represent 16% of the price increase over the past 10 years – but they are still only 6% of the average bill.

If the aim of these schemes has been to reduce emissions, they have not done a very good job. Rooftop solar panel subsidies have been expensive and inequitable. The RET is more effective as an industry subsidy than an emissions reduction or energy transition policy. And energy efficiency schemes have produced questionable results.

It hasn’t been a total waste of money, but far deeper emissions cuts could have been delivered if those funds had been channelled into a coherent policy.

The story of Australia’s high electricity prices is not really one of private companies ripping off consumers. Nor is it a tale about the privatisation of an essential service. Rather, this is the story of a decade of policy drift and political failure.

Governments have been repeatedly warned about the need to tackle these problems, but have done very little.

Instead they have focused their energy on squabbling over climate policy. State governments have introduced inefficient schemes, scrapped them, and then introduced them again, while the federal government has discarded policies without even trying them.

There is a huge void where our sensible energy policy should be. Network overbuild and ballooning retailer margins both dwarf the impact of the carbon price, yet if you listen only to our politicians you’d be forgiven for thinking the opposite.

And still it goes on. The underlying causes of Australia’s electricity price headaches – the regulation of networks, ineffective retail market competition, and our barely coping generators – need immediate attention. But still the petty politicking prevails.

The Coalition has rejected the Clean Energy Target recommended by Chief Scientist Alan Finkel. Labor will give no guarantee of support for the government’s alternative policy, the National Energy Guarantee. Some politicians doubt the very idea that we need to act on climate change. Some states have given up on Canberra and are going it alone.

We’ve been here before and we know how this story ends. Crisis wasted.

This article was written by:
David Blowers – [Energy Fellow, Grattan Institute]





This article is part of a syndicated news program via

What’s the net cost of using renewables to hit Australia’s climate target? Nothing

 Managed in the right way, wind farms can actually 
help stabilise the grid, rather than disrupting it. AAP Image/Lukas Coch

Australia can meet its 2030 greenhouse emissions target at zero net cost, according to our analysis of a range of options for the National Electricity Market.

Our modelling shows that renewable energy can help hit Australia’s emissions reduction target of 26-28% below 2005 levels by 2030 effectively for free. This is because the cost of electricity from new-build wind and solar will be cheaper than replacing old fossil fuel generators with new ones.

Read more: Want energy storage? Here are 22,000 sites for pumped hydro across Australia

Currently, Australia is installing about 3 gigawatts (GW) per year of wind and solar photovoltaics (PV). This is fast enough to exceed 50% renewables in the electricity grid by 2030. It’s also fast enough to meet Australia’s entire carbon reduction target, as agreed at the 2015 Paris climate summit.

Encouragingly, the rapidly declining cost of wind and solar PV electricity means that the net cost of meeting the Paris target is roughly zero. This is because electricity from new-build wind and PV will be cheaper than from new-build coal generators; cheaper than existing gas generators; and indeed cheaper than the average wholesale price in the entire National Electricity Market, which is currently A$70-100 per megawatt-hour.

Cheapest option

Electricity from new-build wind in Australia currently costs around A$60 per MWh, while PV power costs about A$70 per MWh.

During the 2020s these prices are likely to fall still further – to below A$50 per MWh, judging by the lower-priced contracts being signed around the world, such as in Abu Dhabi, Mexico, India and Chile.
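These figures can be sanity-checked with a standard levelised-cost calculation. This is a rough sketch only: the capital costs, operating costs, capacity factors and discount rate below are illustrative assumptions, not inputs from our modelling.

```python
# Back-of-envelope levelised cost of electricity (LCOE) in A$/MWh.
# All input figures are illustrative assumptions for this sketch.

def lcoe(capex_per_kw, opex_per_kw_yr, capacity_factor,
         discount_rate=0.06, life_yrs=25):
    """Annualise capital cost with a capital recovery factor,
    add fixed operating cost, divide by annual energy output."""
    crf = (discount_rate * (1 + discount_rate) ** life_yrs
           / ((1 + discount_rate) ** life_yrs - 1))
    annual_cost = capex_per_kw * crf + opex_per_kw_yr   # A$ per kW per year
    annual_mwh = capacity_factor * 8760 / 1000          # MWh per kW per year
    return annual_cost / annual_mwh

wind = lcoe(capex_per_kw=1800, opex_per_kw_yr=40, capacity_factor=0.40)
pv = lcoe(capex_per_kw=1400, opex_per_kw_yr=20, capacity_factor=0.22)
print(f"wind ≈ A${wind:.0f}/MWh, PV ≈ A${pv:.0f}/MWh")
```

With these assumed inputs the calculation lands in the A$50-70 per MWh range quoted above; small changes to the assumed capital cost or capacity factor move the result accordingly.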

In our research, published today, we modelled the all-in cost of electricity under three different scenarios:

  • Renewables: replacement of enough old coal generators by renewables to meet Australia’s Paris climate target
  • Gas: premature retirement of most existing coal plant and replacement by new gas generators to meet the Paris target. Note that gas is uncompetitive at current prices, and this scenario would require a large increase in gas use, pushing up prices still further.
  • Status quo: replacement of retiring coal generators with supercritical coal. Note that this scenario fails to meet the Paris target by a wide margin, despite costing about the same as the renewables scenario described above – even though our modelling uses a low coal power station price.

The chart below shows the all-in cost of electricity in the 2020s under each of the three scenarios, and for three different gas prices: lower, higher, or the same as the current A$8 per gigajoule. As you can see, electricity would cost roughly the same under the renewables scenario as it would under the status quo, regardless of what happens to gas prices.

Levelised cost of electricity (A$ per MWh) for three scenarios and a range of gas prices. Blakers et al.

Balancing a renewable energy grid

The cost of renewables includes both the cost of energy and the cost of balancing the grid to maintain reliability. This balancing act involves using energy storage, stronger interstate high-voltage power lines, and the cost of renewable energy “spillage” on windy, sunny days when the energy stores are full.

The current cost of hourly balancing of the National Electricity Market (NEM) is low because the renewable energy fraction is small. It remains low (less than A$7 per MWh) until the renewable energy fraction rises above three-quarters.

The renewable energy fraction in 2020 will be about one-quarter, which leaves plenty of room for growth before balancing costs become significant.

Cost of hourly balancing of the NEM (A$ per MWh) as a function of renewable energy fraction.

The proposed Snowy 2.0 pumped hydro project would have a power generation capacity of 2GW and energy storage of 350GWh. This could provide half of the new storage capacity required to balance the NEM up to a renewable energy fraction of two-thirds.

The new storage needed over and above Snowy 2.0 is 2GW of power with 12GWh of storage (enough to run at full power for six hours). This could come from a mix of pumped hydro, batteries and demand management.
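Storage “duration” in these figures is simply energy capacity divided by power capacity, using the numbers quoted above:

```python
# Storage duration = energy capacity / power capacity.
# Figures from the article: Snowy 2.0 (2 GW, 350 GWh) and the
# additional storage needed beyond it (2 GW, 12 GWh).

def duration_hours(energy_gwh, power_gw):
    return energy_gwh / power_gw

snowy_hours = duration_hours(350, 2)  # hours at full output
extra_hours = duration_hours(12, 2)   # hours at full output
print(snowy_hours, extra_hours)       # → 175.0 6.0
```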

Stability and reliability

Most of Australia’s fossil fuel generators will reach the end of their technical lifetimes within 20 years. In our “renewables” scenario detailed above, five coal-fired power stations would be retired early, by an average of five years. In contrast, meeting the Paris targets by substituting gas for coal requires 10 coal stations to close early, by an average of 11 years.

Under the renewables scenario, the grid will still be highly reliable. That’s because it will have a diverse mix of generators: PV (26GW), wind (24GW), coal (9GW), gas (5GW), pumped hydro storage (5GW) and existing hydro and bioenergy (8GW). Many of these assets can be used in ways that help to deliver other services that are vital for grid stability, such as spinning reserve and voltage management.

Because a renewable electricity system comprises thousands of small generators spread over a million square kilometres, sudden shocks to the electricity system from generator failure, such as occur regularly with ageing large coal generators, are unlikely.

Neither does cloudy or calm weather cause shocks, because weather is predictable and a given weather system can take several days to move over the Australian continent. Strengthened interstate interconnections (part of the cost of balancing) reduce the impact of transmission failure, which was the prime cause of the 2016 South Australian blackout.

Since 2015, Australia has tripled the annual deployment rate of new wind and PV generation capacity. Continuing at this rate until 2030 will let us meet our entire Paris carbon target in the electricity sector, all while replacing retiring coal generators, maintaining high grid stability, and stabilising electricity prices.

This article was co-authored by:
Andrew Blakers – [Professor of Engineering, Australian National University];
Bin Lu – [PhD Candidate, Australian National University]
Matthew Stocks – [Research Fellow, ANU College of Engineering and Computer Science, Australian National University]







Energy ministers’ power policy pow-wow is still driven more by headlines than details

 Still no clear skies for the federal
government’s energy plans.

A quick scan of this week’s headlines shows the government’s new energy plan would “slash A$120 off power bills” and that the “Turnbull government plan to address energy crisis predicts A$400 price drop”. Yes, the initial findings of the modelling of the federal government’s National Energy Guarantee (NEG) are out. Cue the latest round of bluster, misinformation and confusion.

What has actually been released is a five-page summary of the findings, although some media reports contain extracts from a more detailed document. We won’t see that until after federal and state energy ministers have considered it at today’s COAG Energy Council meeting.

With some state energy ministers still expressing scepticism over the NEG, their interpretation of the detailed modelling may be crucial in resolving the debate.

What do our five-pager and media leaks tell us? Well, not a lot. There are enough facts and figures to provide instant filler for journalists’ articles. But as far as the modelling is concerned, there are only a couple of charts and a handful of numbers.

The summary forecasts that under the NEG wholesale electricity prices will drop back to their historical average of around A$50 per megawatt hour, compared with around A$100 per megawatt hour now. This will drive down our power bills over the decade from 2020 to 2030 by an average of about A$400.
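The A$400 figure is consistent with a simple pass-through calculation. The household consumption figure below is an assumption chosen for illustration, not a number from the modelling summary:

```python
# Wholesale price fall passed straight through to a household bill.
# Annual consumption of 8 MWh is an illustrative assumption.

price_now = 100       # A$/MWh, roughly the current wholesale price
price_forecast = 50   # A$/MWh, the forecast historical-average level
consumption_mwh = 8   # assumed annual household consumption

saving = (price_now - price_forecast) * consumption_mwh
print(f"≈ A${saving} a year")   # → ≈ A$400 a year
```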

But here’s the rub: prices will fall even if the NEG isn’t implemented. This is because between now and 2020 the Renewable Energy Target will be driving new renewable energy generation into the market. At the moment there is a supply shortage, which is keeping prices high. So if electricity demand remains relatively flat, new generation will drive prices down.

The real changes won’t happen until after 2020. When Liddell power station in New South Wales closes in 2022 wholesale prices will rise. This will happen with or without the NEG, albeit much faster without it. This is what underpins the convenient claims of an annual average A$120 drop in electricity bills.

That’s pretty much all that can be said about prices for now. We will have to wait until the full modelling is released to know for sure why prices rise more rapidly post-Liddell without the NEG. The original proposal from the Energy Security Board suggests that, with the guarantee, extra supply, changed bidding behaviour and reduced risk to investors all put downward pressure on prices.

What about renewables?

The summary note contains only two other findings. First, under the NEG there will be investment in an additional 3,600 megawatts of “dispatchable generation capacity”. Second, renewables will form only 5% more of the generation mix by 2030 than if the NEG were not in place.

We’re still waiting to find out what “dispatchable generation capacity” actually means in the context of the NEG. There is no smoking gun for those who insist the NEG is a Trojan horse for coal. But similarly, anyone looking for the modelling to say anything about the future of energy storage will be disappointed. Watch this space.

A lot will be made of the relatively low levels of renewables predicted by the NEG modelling. Under the existing Renewable Energy Target, renewables are already expected to account for around 23.5% of the generation mix in 2020 (not counting rooftop solar). The NEG might deliver only 32-36% in 2030 – and this figure appears to include rooftop solar.

But before we get too worked up, remember that this finding says nothing about the effectiveness of the NEG in cutting greenhouse emissions. The NEG and the two earlier proposals modelled by the Finkel Review – a clean energy target and an emissions intensity scheme – work in very similar ways and would produce very similar results. The fact that Finkel’s modelling forecasts 42% renewables by 2030, and the government’s modelling delivers 32-36% – using the same emissions reduction goal – just tells us that the modelling is different.

Model behaviour

As I have pointed out here, modelling is an inexact science. Its outcomes are a function of the assumptions you use and the data you shove in. Like any modelling, NEG modelling will no doubt be wrong to a greater or lesser degree. But whatever the specifics, one principle is clear: agreeing on a firm policy will help to lower prices.

If replacing existing generation with renewables is the cheapest way to reduce emissions, then that is what will happen under any of the schemes. The modelling then becomes irrelevant.

There is one proviso to this. And that is the other part of the NEG – the “reliability” part. It is not clear from the five-page summary how the reliability requirement has been factored in, and how this will influence the generation mix. It seems we will have to wait well beyond this week for more information on this; reports suggest that the reliability mechanism may not be fully designed until the middle of 2018.

The biggest question about the NEG is whether it will lead to further concentration of market power in the retail and wholesale energy markets. If that were to happen, prices would probably go up, not down – creating yet more headaches for politicians, consumer watchdogs, and householders.

It is these two issues – the design of the reliability mechanism, and tackling market power – that energy ministers should really be focusing on. Australian energy customers will be the losers if the debate gets swamped by confected outrage about modelling and renewables. Ominously, another quick scan of Wednesday morning’s headlines suggests that confected outrage is winning.

This article was written by:

David Blowers – [Energy Fellow, Grattan Institute]







Fossil fuel emissions hit record high after unexpected growth: Global Carbon Budget 2017

 The growth in global carbon emissions has
resumed after a three-year pause. AAP Image/Dave Hunt

Global greenhouse emissions from fossil fuels and industry are on track to grow by 2% in 2017, reaching a new record high of 37 billion tonnes of carbon dioxide, according to the 2017 Global Carbon Budget, released today.
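Working back from those two figures gives the implied 2016 baseline:

```python
# Implied 2016 total from the projected 2% growth to 37 GtCO2.
total_2017 = 37.0   # billion tonnes CO2, projected
growth = 0.02
total_2016 = total_2017 / (1 + growth)
print(f"2016 ≈ {total_2016:.1f} GtCO2")   # → 2016 ≈ 36.3 GtCO2
```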

The rise follows a remarkable three-year period during which global CO₂ emissions barely grew, despite strong global economic growth.

But this year’s figures suggest that the keenly anticipated global peak in emissions – after which greenhouse emissions would ultimately begin to decline – has yet to arrive.

The Global Carbon Budget, now in its 12th year, brings together scientists and climate data from around the world to develop the most complete picture available of global greenhouse gas emissions.

In a series of three papers, the Global Carbon Project’s 2017 report card assesses changes in Earth’s sources and sinks of CO₂, both natural and human-induced. All excess CO₂ remaining in the atmosphere leads to global warming.

We believe society is unlikely to return to the high emissions growth rates of recent decades, given continued improvements in energy efficiency and rapid growth in low-carbon energies. Nevertheless, our results are a reminder that there is no room for complacency if we are to meet the goals of the Paris Agreement, which calls for temperatures to be stabilised at “well below 2℃ above pre-industrial levels”. This requires net zero global emissions soon after 2050.

After a brief plateau, 2017’s emissions are forecast to hit a new high. Global Carbon Project, Author provided

National trends

The most significant factor in the resumption of global emissions growth is the projected 3.5% increase in China’s emissions. This is the result of higher energy demand, particularly from the industrial sector, along with a decline in hydro power use because of below-average rainfall. China’s coal consumption grew by 3%, while oil (5%) and gas (12%) continued rising. The 2017 growth may result from economic stimulus from the Chinese government, and may not continue in the years ahead.

The United States and Europe, the second and third top emitters, continued their decade-long decline in emissions, but at a reduced pace in 2017.

For the US, the slowdown comes from a decline in the use of natural gas because of higher prices, with the lost market share taken by renewables and, to a lesser extent, coal. Importantly, 2017 is projected to be the first year in five in which US coal consumption rises slightly (by about 0.5%).

The EU has now had three years (including 2017) with little or no decline in emissions, as declines in coal consumption have been offset by growth in oil and gas.

Unexpectedly, India’s CO₂ emissions will grow only about 2% this year, compared with an average 6% per year over the past decade. This reduced growth rate is likely to be short-lived, as it was linked to reduced exports, lower consumer demand, and a temporary fall in currency circulation attributable to demonetisation late in 2016.

Trends for the biggest emitters, and everyone else. Global Carbon Project, Author provided

Yet despite this year’s uptick, economies are now decarbonising with a momentum that was difficult to imagine just a decade ago. There are now 22 countries, for example, for which CO₂ emissions have declined over the past decade while their economies have continued to grow.

Concerns have been raised in the past about countries simply moving their emissions outside their borders. But since 2007, the total emissions outsourced by countries with emissions targets under the Kyoto Protocol (that is, developed countries, including the US) has declined.

This suggests that the downward trends in emissions of the past decade are driven by real changes to economies and energy systems, not just by the offshoring of emissions.

Other countries, such as Russia, Mexico, Japan and Australia, have shown more recent signs of slowdown, flat growth, or somewhat volatile emissions trajectories as they pursue a range of different climate and energy policies.

Still, the pressure is on. In 101 countries, representing 50% of global CO₂ emissions, emissions increased as economies grew. Many of these countries will be pursuing economic development for years to come.

Contrasting fortunes among some of the world’s biggest economies. Nigel Hawtin/Future Earth Media Lab/Global Carbon Project, Author provided

A peek into the future

During the three-year emissions “plateau” – and specifically in 2015-16 – the accumulation of CO₂ in the atmosphere grew at a record rate, the highest observed in the half-century for which measurements exist.

It is well known that during El Niño years such as 2015-16, when global temperatures are higher, the capacity of terrestrial ecosystems to take up CO₂ (the “land sink”) diminishes, and atmospheric CO₂ growth increases as a result.

The El Niño boosted temperatures by roughly a further 0.2℃. Combined with record high levels of fossil fuel emissions, the atmospheric CO₂ concentration grew at a record rate of nearly 3 parts per million per year.
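That growth rate can be translated from concentration into mass. The conversion factor here (1 ppm of atmospheric CO₂ ≈ 2.13 GtC) is a widely used approximation, not a figure taken from the Global Carbon Budget papers:

```python
# Convert atmospheric CO2 growth from ppm/year to GtCO2/year.
# 1 ppm ≈ 2.13 GtC is a standard approximation; 3.664 converts
# carbon mass to CO2 mass (molecular weight ratio 44/12).

ppm_per_year = 3.0
gtc_per_ppm = 2.13
co2_per_c = 3.664

growth_gtco2 = ppm_per_year * gtc_per_ppm * co2_per_c
print(f"≈ {growth_gtco2:.0f} GtCO2 per year added to the atmosphere")
```

On these assumptions, roughly 23 GtCO₂ per year was accumulating in the atmosphere during the El Niño peak.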

This event illustrates the sensitivity of natural systems to global warming. Although a hot El Niño might not be the same as a sustained warmer climate, it nevertheless serves as a warning of the global warming in store, and underscores the importance of continuing to monitor changes in the Earth system.

The effect of the strong 2015-16 El Niño on the growth of atmospheric CO₂ can clearly be seen. Nigel Hawtin/Future Earth Media Lab/Global Carbon Project, based on Peters et al., Nature Climate Change 2017, Author provided

No room for complacency

There is no doubt that progress has been made in decoupling economic activity from CO₂ emissions. A number of central and northern European countries and the US have shown how it is indeed possible to grow an economy while reducing emissions.

Other positive signs from our analysis include the 14% per year growth of global renewable energy (largely solar and wind) – albeit from a low base – and the fact that global coal consumption is still below its 2014 peak.

These trends, and the resolute commitment of many countries to make the Paris Agreement a success, suggest that CO₂ emissions may not return to the high-growth rates experienced in the 2000s. However, an actual decline in global emissions might still be beyond our immediate reach, especially given projections for stronger economic growth in 2018.

To stabilise our climate at well below 2℃ of global warming, the elusive peak in global emissions needs to be reached as soon as possible, before quickly setting into motion the great decline in emissions needed to reach zero net emissions by around 2050.

This article was co-authored by the following international team of scientists:







Energy prices are high because consumers are paying for useless, profit-boosting infrastructure

 Infrastructure construction –  
including poles, wires and substations – has far outstripped peak demand.

The preliminary report on energy prices released last week by the Australian Competition and Consumer Commission (ACCC) suggests that the consumer watchdog is concerned about almost every aspect of Australia’s electricity industry. It quotes customer groups who say electricity is the biggest issue in their surveys, and cites several case studies of outrageous price increases experienced by various customers.

The report is long on sympathy about the plight of Australia’s electricity users. But the true picture is even worse – in reality, the ACCC’s assessment of Australia’s energy prices compared to the rest of the world is absurdly rosy.

Australia has internationally high energy prices

The ACCC quotes studies from the Electricity Supply Association and the Australian Energy Markets Commission (AEMC) to compare electricity prices in Australia with those in other OECD countries. But the ACCC’s comparison is based on two-year-old data, and badly underestimates the actual prices consumers are paying.

The AEMC’s analysis assumes all customers are on their retailer’s cheapest available offer. This is an obviously implausible assumption, and gives a favourable impression of the price that customers are paying.

As previously pointed out on The Conversation, the Thwaites review – which looked at customers’ actual bills – found that in February 2017 Victorians were typically paying 35 Australian cents per kilowatt hour (kWh) – 42% more than the AEMC’s estimate. What’s more, we know that Victoria’s electricity prices are lower on average than those in South Australia, Queensland and New South Wales, and hence below the Australian average.

A part of this 42% gap – around 15 percentage points – is explained by the latest price increases, which are not included in the ACCC’s comparison. But this still leaves a 27-percentage-point gap between what the AEMC assumes and the evidence of actual prices.
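The size of the discrepancy can be checked directly from the two published numbers:

```python
# Implied AEMC price, given actual Victorian bills of 35 c/kWh
# that are 42% above the AEMC's estimate.

observed_c_per_kwh = 35.0
gap = 0.42   # observed price is 42% above the estimate

aemc_estimate = observed_c_per_kwh / (1 + gap)
print(f"AEMC estimate ≈ {aemc_estimate:.1f} c/kWh")   # → ≈ 24.6 c/kWh
```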

This raises the question: why did the ACCC not recognise the widely known flaw in the AEMC’s analysis?

The real problem is overbuilt network infrastructure

The report estimates that rising network charges account for more of the price increase than all other factors put together. There is no doubt that network charges are a real problem at least in parts of Australia, although their significance relative to retailers’ costs is contested territory.

But why would distributors build far more network infrastructure than they need? And why have government-owned distributors built far more infrastructure than private ones, despite having no more demand?

The answer to this perplexing question is to be found in part in Australia’s “competitive neutrality” policy. This is Orwellian doublespeak for an approach that is neither neutral nor competitive.

Under this policy, government-owned distributors are regulated as if they are privately financed. This means that when setting regulated prices, the Australian Energy Regulator (AER) allows government distributors to charge their captive consumers for a return on their regulated assets, at the same level as if they were privately financed. That is despite the fact that private financing is much more expensive than government funding.

It’s no surprise that when offered a rate of return that far exceeds the actual cost of finance, government distributors have a powerful incentive to expand their infrastructure for profit. This “gold-plating” incentive is a well-known problem in regulatory economics.
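The incentive is easy to quantify. Both rates below are illustrative assumptions, not the AER’s actual allowed returns or any government’s actual borrowing rate:

```python
# Excess profit from gold-plating: the regulated return exceeds the
# government's actual borrowing cost, so every dollar of new assets
# earns the difference. Both rates are illustrative assumptions.

allowed_return = 0.08        # private-style regulated rate of return
govt_borrowing_cost = 0.04   # government's actual cost of funds
new_assets = 1_000_000_000   # A$1bn of extra network build

annual_excess = (allowed_return - govt_borrowing_cost) * new_assets
print(f"≈ A${annual_excess:,.0f} a year")   # → ≈ A$40,000,000 a year
```

Under these assumptions, every billion dollars of extra regulated assets yields about A$40 million a year above the true financing cost, paid by captive consumers.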

Regulators, the industry and their associations have explained higher spending on networks in a variety of ways: higher reliability standards; flawed rules; flawed forecasting of demand growth; and the need to make up for historic underinvestment.

But was there ever historic underinvestment? A 1995 article co-authored by the current AEMC chair concluded that distribution networks had been significantly overbuilt. That was more than two decades ago; government distributors’ regulated assets are now at least three times bigger per customer.

The chart below – based on data from the AER’s website – examines how the 12 large distributors that cover New South Wales, Victoria, Queensland and South Australia spent their money on infrastructure between 2006 and 2013. This period covers the last five-year price controls established by the state regulators, and the first control established by the AER. It was during this time that expenditure ballooned. The monetary amounts in this chart are normalised by the number of customers per distributor.

Distributor spending on infrastructure between 2006 and 2013. Author provided

The first five distributors from left to right (and Aurora) were owned by state governments and the others are privately owned. A clear pattern emerges: the government distributors typically built much more infrastructure than the private distributors. And the government distributors focused their spending on substations, which are much easier to build (or expand or replace) than new distribution lines or cables.

We also know that the distributors’ spending on substations far outstripped the increases in the peak demand on their networks. The figure below compares the change in the government and private distributors’ substation capacity (the blue bars) with demand (the red bars) over the period that most of the expenditure occurred. Again, the amounts have been normalised by number of customers.

Substation capacity versus peak demand between 2006 and 2013. Author provided

The gap in spending between government and private distributors is stark. It is also obvious that in all cases, but particularly for the government distributors, the expansion of substation capacity greatly exceeded demand growth – which hardly changed over this period (and is even lower now, per connection).

To put it in more tangible terms, as an average across the industry, peak demand from 2006 to 2013 increased by the equivalent of the power used by one old-fashioned incandescent light bulb, per customer. But government distributors expanded their substation capacity by more than 100 light bulbs, per customer. The private distributors did relatively better, but still increased the capacity of their substations by the equivalent of about 30 light bulbs per customer.
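To make the comparison concrete, here is a minimal sketch of the light-bulb arithmetic. The bulb wattage and the per-customer capacity figures are illustrative assumptions chosen to match the ratios described above, not figures from the AER data.

```python
# Sketch of the "light bulb" comparison. Assumptions (not from the source):
# a 60 W incandescent bulb, and illustrative per-customer figures consistent
# with the ratios in the text (~1 bulb of demand growth versus ~110 and ~30
# bulbs of added substation capacity).

BULB_W = 60  # watts per old-fashioned incandescent bulb (assumed)

def bulbs(watts_per_customer: float) -> float:
    """Convert per-customer watts into light-bulb equivalents."""
    return watts_per_customer / BULB_W

peak_demand_growth_w = 60        # ~1 bulb per customer (illustrative)
gov_substation_growth_w = 6600   # >100 bulbs per customer (illustrative)
priv_substation_growth_w = 1800  # ~30 bulbs per customer (illustrative)

print(round(bulbs(peak_demand_growth_w)))     # 1
print(round(bulbs(gov_substation_growth_w)))  # 110
print(round(bulbs(priv_substation_growth_w))) # 30
```

The point of the conversion is simply that capacity was added at roughly 100 times the rate demand grew, on a per-customer basis.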

My PhD thesis included econometric analysis showing that government ownership in Australia is associated with regulated asset values 56% higher, and regulated revenues 24% higher, than those of private distributors, holding all other factors equal.

To some, this evidence supports a “government bad, private good” conclusion. Indeed it was this line of argument that the Baird government in New South Wales used to justify its partial privatisation of two network service providers.

But in international comparisons of government and private distributors in the United States, Europe and New Zealand, no such stark differences are to be found. The huge disparity between government and private distributors is a peculiarly Australian phenomenon.

How we got here

This Australian exception originates in chronic policy and regulatory failure. As far back as 2011, the Australian Energy Market Commission (AEMC) heard a proposal that government distributors should earn a return closer to their actual cost of financing – a suggestion that would have reduced prices significantly and removed the incentive to gold plate.

In response, the AEMC said the regulations were consistent with the “competitive neutrality” policy. But this is not true: in the policy’s own words, it was designed to stop government businesses from crowding out competitors. Distributors are protected monopolies; they do not have competitors.

The AEMC also argued, somewhat bizarrely, that it was good economics for a regulator to assume that government distributors are privately financed.

This represents the triumph of an idealistic "normative" regulatory model, in which regulators act on the basis of how the regulated entity should behave rather than how it actually behaves.

But it would be wrong to blame the AEMC alone for this failure. All of Australia's key institutions and governments have agreed that government distributors should be regulated as if they are privately financed. For governments that own their distributors, this has been a wonderfully profitable fiction.

Therein lies much of the explanation for what is effectively, if I may call a spade a spade, a racket.

It is an indictment of Australia's polity, and so many of its economists, that the 2011 Garnaut Climate Change Review stands alone, in a library of reviews, in stating this problem clearly. Indeed, in last week's report from the ACCC you will not find a single distinction between the impact of government and private distributors.

And if you thought this was yesterday’s war, you would be wrong. Despite the mass of evidence, our regulators persist in the fiction that ownership and regulation should be independent of one another.

It is difficult not to lapse into despair about Australia’s energy policy morass. Despite the valiant attempts by many, a deeply entrenched culture of half-truths, vested interests, ideology and wishful thinking still characterises all too much of what emanates from the political and administrative leadership of this industry.

Some energy consumers – Prime Minister Malcolm Turnbull among them – will buy their way out of this problem through solar panels and batteries. But the poorest households and many business customers will increasingly be left carrying the can.

Australians are angry about electricity. Not unreasonably.

This article was written by:

Bruce Mountain – Director, Carbon and Energy Markets, Victoria University





This article is part of a syndicated news program via


Satellites are giving us a commanding view of Earth’s carbon cycle

Carbon dioxide flux over China, measured by NASA's Orbiting Carbon Observatory-2 satellite. NASA

The job of monitoring Earth’s carbon cycle and humanity’s carbon dioxide emissions is increasingly supported from above, thanks to the terabytes of data pouring down to Earth from satellites.

Five papers published in Science today provide data from NASA’s Orbiting Carbon Observatory-2 (OCO-2) mission. They show Earth’s carbon cycle in unprecedented detail, including the effects of fires in Southeast Asia, the growth rates of Amazonian forests, and the record-breaking rise in atmospheric carbon dioxide during the 2015-16 El Niño.

Another satellite study released two weeks ago revealed rapid biomass loss across the tropics, showing that we have been overlooking the largest sources of terrestrial carbon emissions. While we may worry about land clearing, twice as much biomass is being lost from tropical forests through degradation processes such as harvesting.

The next step in our understanding of Earth’s carbon dynamics will be to build sensors, satellites and computer models that can distinguish human activity from natural processes.

Can satellites see human-made emissions?

The idea of using satellites to keep track of our efforts to reduce fossil fuel emissions is enticing. Current satellites can't do it, but the next generation aims to support monitoring at the level of countries, regions and cities.

Current satellite sensors can measure CO₂ levels in the atmosphere, but can’t tell whether it is coming from the natural exchange of carbon with the land and oceans, or from human activities such as fossil fuel burning, cement production, and deforestation.

Likewise, satellites cannot distinguish between natural and human changes in leaf area cover (greenness), or the capacity of vegetation to absorb CO₂.

But as the spatial resolution of satellites increases, this will change. OCO-2 can see features as small as 3 square km, while the previous purpose-built satellite, GOSAT, is limited to observing features no smaller than about 50 square km.

As resolution improves, we will be able to better observe the elevated CO₂ concentrations over emissions hotspots such as large cities, bushfire regions in Africa and Australia, or even individual power plants and industrial leaks.

By combining these sensing techniques with computer models of the atmosphere, oceans and land, we will be able to separate out humanity’s impact from natural processes.

For example, we have long known that atmospheric CO₂ concentration rises faster during an El Niño event, and that this is mainly due to changes on land. But it was only with the bird's-eye view afforded by the OCO-2 satellite that we could see how differently each of the tropical continents reacted during the recent big El Niño: fire emissions increased in Southeast Asia, carbon uptake by forests in Amazonia declined, and soil respiration in tropical Africa increased.

Similarly, we can now examine the processes behind the extraordinary greening of the Earth over recent decades as CO₂ levels have climbed. Up to 50% of vegetated land is now greener than it was 30 years ago, with the increasing human-driven CO₂ fertilization effect on vegetation estimated to be the dominant driver.

We now have satellites that can study this process at spatial resolutions of tens of metres – meaning we can also keep tabs on processes that undo this greening, such as deforestation.

What’s in store

The coming decade will see the development of yet more space sensors and modelling tools to help us keep tabs on the carbon cycle.

GOSAT-2 will replace the current GOSAT, offering significantly improved resolution and more sensitive measurements of CO₂ and methane (CH₄), another important greenhouse gas.

Meanwhile, the GeoCarb satellite will be launched into a stationary orbit over the Americas to measure CO₂, CH₄ (largely from wetlands in the tropics), and carbon monoxide (from biomass burning). It will keep an eye out for any large leaks from the gas industry.

The BIOMASS and FLEX satellite missions will provide better global estimates of forest height and carbon density, and of plants’ photosynthetic capacity, respectively.

Aboard the International Space Station, an instrument called GEDI will also estimate vegetation height and structure; combined with ECOSTRESS, it will assess changes in above-ground biomass, carbon stocks and productivity.

In Australia, we are developing an atmospheric modelling system and a dynamic vegetation model able to ingest the latest generation of satellite and ground-based observations to map carbon sources and sinks over the entire continent.

Through the Terrestrial Ecosystem Research Network (TERN), we are preparing to take full advantage of these new missions, and help validate many of these space-borne estimates at TERN’s Supersites and other key sampling plots.

With the wealth of information set to be generated by space sensors, as well as earth-based observations and computer models, we are moving into an era when we will have an unprecedented ability to track humans’ impact on our atmosphere, lands and oceans.

This article was co-authored by: