Australia’s Emissions Reduction Fund is almost empty. It shouldn’t be refilled

The Emissions Reduction Fund is not capturing enough emissions from the most polluting industries. AAP Image/Dave Hunt

Australia’s flagship climate policy, the Emissions Reduction Fund (ERF), has come in for fresh questions over whether the emissions allowances offered to big businesses will wipe out much of the progress made elsewhere.

This voluntary scheme – the central plank of Australia’s efforts to reduce greenhouse gas emissions by 26-28% below 2005 levels by 2030 – allows interested parties to reduce pollution in exchange for a proportion of the A$2.55 billion fund.

So far, through successive rounds of “reverse auctions”, the scheme has secured 191.7 million tonnes of emission reductions, at a price tag of A$2.28 billion.

As the budget for this scheme is nearly exhausted, it is important to ask whether it has been a success, or whether Australia’s carbon policy needs a radical rethink. Overall, the answer seems to be the latter.

Safeguards not so safe

Much of the problem stems from the ERF’s safeguard mechanism, which puts limits on the greenhouse emissions from around 140 large polluting businesses. Under the mechanism, these firms are not allowed to pollute more than an agreed “baseline”, calculated on the basis of their existing operations.

The mechanism is described as a safeguard because it aims to stop big businesses wiping out the emissions reductions delivered by projects funded by the ERF. But it doesn’t appear to be working.

The government has already increased the emission baselines for many of these businesses, for arguably specious reasons. Some firms have been given extra leeway to pollute simply because their business has grown, or even just because they blew their original baseline.

Worryingly, on February 21, 2018 the federal government released a consultation document which favours “updating baselines to bring them in line with current circumstances” and suggests that “to help prevent baselines becoming out-of-date in the future, they could be updated for production more often, for example, each year”.

It doesn’t take a genius to realise that if baselines are continually increased over time, the fixed benefits of the ERF will inevitably be wiped out.

This underlines the importance of having a climate policy that operates throughout the economy, rather than only in certain parts of it. If heavily polluting businesses can so readily be allowed to undo the work of others, this is a recipe for disaster.

Contract problems

Even within the ERF process itself, many emissions reduction contracts have already been revoked. This is worrying but also avoidable if the contracts are written correctly.

It is important to note that these contracts run for around seven years, and thus it is possible that the planned carbon reductions never eventuate. Currently only about 16% of the announced 191.7 million tonnes of emissions reduction have actually been delivered.

For the ERF to work effectively, the government needs to know the “counterfactual” emissions – that is, firms’ emissions if they decided not to participate in the ERF. Yet this is completely unknown.

This means that projects that successfully bid for ERF funding (typically the cheapest ones) may not be “additional”. In other words, they may have established these emissions reduction projects anyway, with or without funding from the taxpayer.

Another problem with the ERF is that it is skewed towards projects from lower-polluting sectors of the economy, whereas heavily polluting industries are underrepresented. The largest proportion of signed contracts have involved planting trees or reducing emissions from savannah burning.

Meanwhile, the firms covered by the safeguard mechanism are largely absent from the ERF itself, despite these firms accounting for around 50% of Australia’s greenhouse emissions.

The bare fact is that Australia’s flagship climate policy doesn’t target the prominent polluters.

A different way

Australia’s climate policy has had a colourful past. Yet the economics of pollution mitigation remain the same.

If we want to reduce pollution in a cost-effective way that actually works, then we must (re-)establish a carbon price.

This would provide the much-needed certainty about the cost of genuine pollution reduction. This in turn would allow all major polluters to make strategic, long-term investments that will progressively reduce emissions.

Instead of spending A$2.55 billion on modest emissions reductions that might be cancelled out elsewhere, a carbon price would generate tax revenue that could be put to a host of purposes.

For example, distortionary tax rates (such as income and corporation tax) could be lowered, or the revenue could be used to fund better schools and hospitals.

A clear example of such a success can be taken from the northeastern states of the US. The Regional Greenhouse Gas Initiative is a cap-and-trade market that sells tradeable pollution permits to electricity companies. Estimates have shown that US$2.3 billion of lifetime energy bill savings will occur due to investments made in 2015.

To tax or cap?

If the ERF is to be replaced, what type of carbon price do we want? Do we want a carbon tax or a cap-and-trade market?

While advantages exist for both, most evidence shows that carbon taxes are more efficient at driving down emissions. Moreover, taxation avoids the potential problems of market power, which may exist with a small number of large polluters.

A carbon price would also remove much of the political rent-seeking that is encouraged by Australia’s current policy settings. A simple, economy-wide carbon tax would be more transparent than the safeguard mechanism, under which individual firms can plead for leniency.

With the ERF fund almost empty, the federal government should ask itself a tough question. Should it spend another A$2.55 billion of taxpayers’ money while letting major polluters increase their emissions? Or should it embrace a new source of tax revenue that incentivises cleaner technologies in a transparent, cost-effective way?


This article was written by:
Ian A. MacKenzie – [Senior Lecturer in Economics, The University of Queensland]

 

 

 


 

How media framing limits public debate about oil exploration

Research found that media largely frame debate about oil and gas developments in New Zealand around how drilling should take place, rather than whether it should happen at all. Garry Knight

Throughout the world, people are taking direct action to tackle environmental problems – from Standing Rock in the United States, to the Carmichael coal mine in Australia, to the community groups standing against oil and gas exploration in Aotearoa New Zealand.

Some of the most important societal changes have been made because of direct action, but this isn’t always the story the mainstream media reports.

Our research has focused on the media framing of the debate surrounding oil and gas developments in Aotearoa New Zealand.

We found that it shifted discussions towards how drilling should take place, rather than whether it should happen at all.


Read more: Latest twist in the Adani saga reveals shortcomings in environmental approvals


Framing avoids real debate

We interviewed more than 50 people, including climate activists, representatives from non-government organisations and the oil and gas sector, and local government officers. We also analysed mainstream media coverage over a period of six years.

The research shows that mainstream media in Aotearoa New Zealand tended to present further fossil fuel development as something positive for the economy, and therefore society. Opponents have tended to be framed as irrational, few, and extremist.

For example, former prime minister John Key was quoted describing Greenpeace as “rent-a-crowd” and dismissing a 7,000-strong protest as a “few people wandering around the beach”.

One view that was commonly emphasised in reporting is that protesters are taking their democratic right to protest too far. This was illustrated in 2016, when the climate activist group 350.org organised direct actions throughout Aotearoa New Zealand, as part of a series of global Break Free events. They targeted branches of ANZ bank to inform customers about its NZ$13 billion investment in the fossil fuel industry.

A case study

In 2016, in the small university town of Dunedin, around 200 climate activists blockaded three ANZ branches. Many customers delayed their banking or went somewhere else, but some were encouraged by police to use “reasonable force” to climb over the protesters who were blocking the doors.

Until this point, mainstream media outlets had resisted negative framing of protesters, and had quoted activists at length in their reporting of the blockade. After police told bank customers to climb over the protesters, an elderly woman tried to make her way into the bank. Bank staff and activists encouraged her to use a side door, but police insisted on her going through the blockade.

Reporting of the protests quickly changed and activists were portrayed as disrespectful and taking the protest too far. Social media erupted, and 350 Aotearoa’s Facebook page attracted more than 2,000 comments within a few hours, including threats of violence against the blockaders.

Even though a protester reported being kneed by police, the media represented the police as balanced and as having been compelled to act in this way.

One of the activists later wrote:

In the media coverage, the burden of responsibility for that [elderly] woman’s distress was placed solely with us. The coverage successfully removed responsibility from ANZ and the police, who worked together to create that scenario.

The narrative that emerged pitched decent citizens against “unemployable”, “disrespectful” protesters, with the police as benign supporters of decency and the bank as an apolitical service provider. Broader debates about climate justice and corporate responsibility were not heard in these media reports.

Media coverage limits public debate

Such media reporting about oil and gas exploration and drilling focuses on how fossil fuel extraction should take place, rather than whether it should happen at all. For example, the idea that Aotearoa New Zealand has the highest environmental standards in the world when it comes to exploration and extraction was often reported. In these reports, climate activists were portrayed as ignorant about the risks.

Discussions about the ethics of further fossil fuel extraction in a rapidly changing climate were lost – at a time when we need to be debating how we might change our economy and society to avoid the worst of climate change.

Climate change does raise ethical dilemmas and climate justice activists are trying to get us to think about them. As one of the people involved in the ANZ blockade in Dunedin said:

People couldn’t quite register the fact that there’s a vast difference between us making the day of a couple of people a bit more inconvenient, versus climate change killing people, and making people lose their homes. That’s considerably more inconvenient than not being able to get into a bank for the day when there’s another one just down the road.

The new government in Aotearoa New Zealand has sent some positive signals about taking climate change seriously. Consultation on a zero carbon act will begin later this year. In-depth media coverage that engages a broad range of people including activists, and pro-democracy reforms, will be essential to developing good debate and the best possible response to climate change.

Media portrayals of environmental activists as hopelessly idealistic, irrational hippies are nothing new. But when mainstream media continues to repeat these ideas, and frames the status quo as common sense, the public is denied opportunities for genuine debate about solutions to tricky environmental problems.


This article was co-authored by:
Sophie Bond – [Senior lecturer in geography, University of Otago];
Amanda Thomas – [Lecturer in Environmental Studies, Victoria University of Wellington]
and
Gradon Diprose – [Senior Lecturer in Social Sciences, Open Polytechnic]

 

 

 


 

Latest twist in the Adani saga reveals shortcomings in environmental approvals

Adani faces court over allegations of concealing the amount of coal water released in Caley Valley Wetlands last year. Ian Sutton/flickr

It was reported this week that the federal Environment Department declined to prosecute Adani for failing to disclose that its Australian chief executive, Jeyakumar Janakaraj, was formerly the director of operations at a Zambian copper mine when it discharged toxic pollutants into a major river. Under the federal Environment Protection and Biodiversity Conservation Act, Adani is required to reveal the environmental history of its chief executive officers, and the federal report found Adani “may have been negligent”.

The revelations come as Adani faces down the Queensland government in the planning and environment court, over allegations the company concealed the full amount of coal-laden water discharged into the fragile Caley Valley Wetlands last year.

These concerns highlight some fundamental problems with the existing regulatory framework, particularly the long-term utility and effectiveness of the environmental conditions meant to protect land affected by mining projects.

How effective are environmental conditions?

In 2016, the federal government granted Adani a 60-year mining licence, as well as unlimited access to groundwater for that period.

These licences were contingent on Adani creating an environmental management plan, monitoring the ongoing impact of its mining activities on the environment, and actively minimising environmental degradation.

But are these safeguards working?

In 2015, advocacy group Environmental Justice Australia reported several non-compliance issues at the Abbot Point Storm Water Dam, relating to pest monitoring, weed eradication, the establishment of a register of flammable liquids, and implementation of the water monitoring plan.

More recently, in late 2017, significant amounts of black coal water were discovered in the fragile Caley Valley Wetlands next to the mine. Adani stands accused of withholding the full extent of the spill, redacting a laboratory report showing higher levels of contamination.

Adani seems to have released coal water into the wetland despite a condition of its environmental approval requiring it to take sufficient care to avoid contamination. Its A$12,000 penalty for non-compliance is relatively small compared with the company’s operating costs.

In this instance, the environmental conditions have provided no substantive protection or utility. They have simply functioned as a convenient fig leaf for both Adani and the government.

Who is responsible for monitoring Adani?

Adani’s proposed mine falls under both state and federal legislation. Queensland’s Environmental Protection Act requires the holder of a mining lease to plan and conduct activities on site to prevent any potential or actual release of a hazardous contaminant.

Furthermore, the relevant environmental authority must make sure that hazardous spills are cleaned up as quickly as possible.

But as a project of “national environmental significance” (given its potential impact on water resources, threatened species, ecological communities, migratory species, world heritage areas and national heritage places), the mine also comes under the federal Environment Protection and Biodiversity Conservation Act.

Federal legislation obliges Adani to create an environmental management plan outlining exactly how it plans to promote environmental protection, and to manage and rehabilitate all areas affected by the mine.

Consequently, assessment of the environmental impact of the mine was conducted under a bilateral agreement between the federal and state governments. This means the project has approval under both state and federal frameworks.

The aim is to reinforce environmental protection. In many instances, however, there is no clear delineation of responsibility for management, monitoring and enforcement.

Does the system work?

Theoretically, these interlocking frameworks should work together to provide reinforced protection for the environment. The legislation operates on the core assumption that imposing environmental conditions minimises the environmental degradation from mining. In practice, however, the bilateral arrangement can mean that responsibility for monitoring matters of national environmental significance devolves to the state, the environmental conditions imposed at this level are ineffectively monitored and enforced, and there is no public accountability.

Arguably, some environmental conditions hide deeper monitoring and enforcement problems and in so doing, actually exacerbate environmental impacts.

For example, it has been alleged that Adani altered a laboratory report while appealing its fine for the contamination of the Caley Valley Wetlands, with the original document reportedly showing much higher levels of contamination. The allowable level of coal water released to the wetlands was 100 milligrams per litre; the original report indicated that Adani may have released water containing up to 834 milligrams per litre. This was subsequently modified in a follow-up report, and the matter is currently under investigation.

If established, this amounts to a disturbing breach with potentially devastating impacts. It highlights not only the failure of the environmental condition to incentivise behavioural change, but also a fundamental failure in oversight and management.

If environmental conditions are not supported by sufficient monitoring processes and sanctions, they have little effect.

Environmental conditions are imposed with the aim of managing the risk of environmental degradation from mining projects. However, their enforcement is too often mired in inadequate and opaque oversight procedures, a lack of transparency, and insufficient public accountability.

The Queensland Labor government is considering whether to increase regulatory pressure on Adani, for example by subjecting the project to further EPBC Act triggers such as the water resource trigger or a new climate change trigger. But the more fundamental question is whether such changes will improve environmental protection at all, in the absence of stronger transparency and accountability and more robust management and enforcement of the environmental conditions attached to mining projects.


This article was written by:
Samantha Hepburn – [Director of the Centre for Energy and Natural Resources Law, Deakin Law School, Deakin University]

 

 

 

 


How to use critical thinking to spot false climate claims

Arguments against climate change tend to share the same flaws. gillian maniscalco/Flickr

Much of the public discussion about climate science consists of a stream of assertions. The climate is changing or it isn’t; carbon dioxide causes global warming or it doesn’t; humans are partly responsible or they are not; scientists have a rigorous process of peer review or they don’t, and so on.

Despite scientists’ best efforts at communicating with the public, not everyone knows enough about the underlying science to make a call one way or the other. Not only is climate science very complex, but it has also been targeted by deliberate obfuscation campaigns.

If we lack the expertise to evaluate the detail behind a claim, we typically substitute judgment about something complex (like climate science) with judgment about something simple (the character of people who speak about climate science).

But there are ways to analyse the strength of an argument without needing specialist knowledge. My colleagues, Dave Kinkead from the University of Queensland Critical Thinking Project and John Cook from George Mason University in the US, and I published a paper yesterday in Environmental Research Letters on a critical thinking approach to climate change denial.

We applied this simple method to 42 common climate-contrarian arguments, and found that all of them contained errors in reasoning that are independent of the science itself.

In the video abstract for the paper, we outline an example of our approach, which can be described in six simple steps.

The authors discuss the myth that climate change is natural.

Six steps to evaluate contrarian climate claims

Identify the claim: First, identify as simply as possible what the actual claim is. In this case, the argument is:

The climate is currently changing as a result of natural processes.

Construct the supporting argument: An argument requires premises (those things we take to be true for the purposes of the argument) and a conclusion (effectively the claim being made). The premises together give us reason to accept the conclusion. The argument structure is something like this:

  • Premise one: The climate has changed in the past through natural processes
  • Premise two: The climate is currently changing
  • Conclusion: The climate is currently changing through natural processes.

Determine the intended strength of the claim: Determining the exact kind of argument requires a quick detour into the difference between deductive and inductive reasoning. Bear with me!

In our paper we examined arguments against climate change that are framed as definitive claims. A claim is definitive when it says something is definitely the case, rather than being probable or possible.

Definitive claims must be supported by deductive reasoning. Essentially, this means that if the premises are true, the conclusion is inevitably true.

This might sound like an obvious point, but many of our arguments are not like this. In inductive reasoning, the premises might support a conclusion but the conclusion need not be inevitable.

An example of inductive reasoning is:

  • Premise one: Every time I’ve had a chocolate-covered oyster I’ve been sick
  • Premise two: I’ve just had a chocolate-covered oyster
  • Conclusion: I’m going to be sick.

This is not a bad argument – I’ll probably get sick – but it’s not inevitable. It’s possible that every time I’ve had a chocolate-covered oyster I’ve coincidentally got sick from something else. Perhaps previous oysters have been kept in the cupboard, but the most recent one was kept in the fridge.

Because climate-contrarian arguments are often definitive, the reasoning used to support them must be deductive. That is, the premises must inevitably lead to the conclusion.

Check the logical structure: We can see that in the argument from step two – that the climate is currently changing because of natural processes – the truth of the conclusion is not guaranteed by the truth of the premises.

In the spirit of honesty and charity, we take this invalid argument and attempt to make it valid through the addition of another (previously hidden) premise.

  • Premise one: The climate has changed in the past through natural processes
  • Premise two: The climate is currently changing
  • Premise three: If something was the cause of an event in the past, it must be the cause of the event now
  • Conclusion: The climate is currently changing through natural processes.

Adding the third premise makes the argument valid, but validity is not the same thing as truth. Validity is a necessary condition for accepting the conclusion, but it is not sufficient. There are a couple of hurdles that still need to be cleared.

Check for ambiguity: The argument mentions climate change in its premises and conclusion. But the climate can change in many ways, and the phrase itself can have a variety of meanings. The problem with this argument is that the phrase is used to describe two different kinds of change.

Current climate change is much more rapid than previous climate change – they are not the same phenomenon. The syntax conveys the impression that the argument is valid, but it is not. To clear up the ambiguity, the argument can be presented more accurately by changing the second premise:

  • Premise one: The climate has changed in the past through natural processes
  • Premise two: The climate is currently changing at a more rapid rate than can be explained by natural processes
  • Conclusion: The climate is currently changing through natural processes.

This correction for ambiguity has resulted in a conclusion that clearly does not follow from the premises. The argument has become invalid once again.

We can restore validity by considering what conclusion would follow from the premises. This leads us to the conclusion:

  • Conclusion: Human (non-natural) activity is necessary to explain current climate change.

Importantly, this conclusion has not been reached arbitrarily. It has become necessary as a result of restoring validity.

Note also that in the process of correcting for ambiguity and the consequent restoring of validity, the attempted refutation of human-induced climate science has demonstrably failed.

Check premises for truth or plausibility: Even if there were no ambiguity about the term “climate change”, the argument would still fail when the premises were tested. In step four, the third premise, “If something was the cause of an event in the past, it must be the cause of the event now”, is clearly false.

Applying the same logic to another context, we would arrive at conclusions like: people have died of natural causes in the past; therefore any particular death must be from natural causes.

Restoring validity by identifying the “hidden” premises often produces such glaringly false claims. Recognising this as a false premise does not always require knowledge of climate science.
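
To make that test concrete, here is a rough sketch in Python (an illustration only, not part of the published method) that encodes the hidden premise as a rule and checks it against the counterexample above:

```python
# A toy check of the hidden premise "if something was the cause of an event in
# the past, it must be the cause of the event now". One counterexample is enough
# to show the premise is false. The data below is purely illustrative, following
# the "natural causes of death" analogy.

past_causes = {
    "death": "natural causes",   # people have died of natural causes in the past
}

current_observations = [
    ("death", "a traffic accident"),  # but a particular death today can have another cause
]

def hidden_premise_holds(past, observations):
    """Return False as soon as a current event has a cause different from the
    past cause of the same kind of event (i.e. a counterexample is found)."""
    for event, cause in observations:
        if event in past and cause != past[event]:
            return False
    return True

print(hidden_premise_holds(past_causes, current_observations))  # False: the premise fails
```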

Flow chart for argument analysis and evaluation.

When determining the truth of a premise does require deep knowledge in a particular area of science, we may defer to experts. But many arguments do not require such knowledge, and in these circumstances this method is especially valuable.

Inoculating against poor arguments

Previous work by Cook and others has focused on the ability to inoculate people against climate science misinformation. By pre-emptively exposing people to misinformation with explanation they become “vaccinated” against it, showing “resistance” to developing beliefs based on misinformation.

This reason-based approach extends inoculation theory to argument analysis, providing a practical and transferable method of evaluating claims that does not require expertise in climate science.

Fake news may be hard to spot, but fake arguments don’t have to be.


This article was written by:
Peter Ellerton – [Lecturer in Critical Thinking, Director of the UQ Critical Thinking Project, The University of Queensland]

 

 

 


Cape Town is almost out of water. Could Australian cities suffer the same fate?

With water storages running low, residents of Cape Town get drinking water in the early morning from a mountain spring collection point. Nic Bothma/EPA

The world is watching the unfolding Cape Town water crisis with horror. On “Day Zero”, now predicted to be just ten weeks away, engineers will turn off the water supply. The South African city’s four million residents will have to queue at one of 200 water collection points.

Cape Town is the first major city to face such an extreme water crisis. There are so many unanswered questions. How will the sick or elderly people cope? How will people without a car collect their 25-litre daily ration? Pity those collecting water for a big family.

The crisis is caused by a combination of factors. First of all, Cape Town has a very dry climate with annual rainfall of 515mm. Since 2015, it has been in a drought estimated to be a one-in-300-year event.

In recent years, the city’s population has grown rapidly – by 79% since 1995. Many have questioned what Cape Town has done to expand the city’s water supply to cater for the population growth and the lower rainfall.

Could this happen in Australia?

Australia’s largest cities have often struggled with drought. Water supplies may decline further due to climate change and uncertain future rainfall. With all capital cities expecting further population growth, this could cause water supply crises.

The situation in Cape Town has strong parallels with Perth in Australia. Perth is half the size of Cape Town, with two million residents, but has endured increasing water stress for nearly 50 years. From 1911 to 1974, the annual inflow to Perth’s water reservoirs averaged 338 gigalitres (GL) a year. Inflows have since shrunk by nearly 90% to just 42GL a year from 2010-2016.

To make matters worse, the Perth water storages also had to supply more people. Australia’s fourth-largest city had the fastest capital city population growth, 28.2%, from 2006-2016.

As a result, Perth became Australia’s first capital city unable to supply its residents from storage dams fed by rainfall and river flows. In 2015 the city faced a potentially disastrous situation. River inflows to Perth’s dams dwindled to 11.4GL for the year.

For its two million people, the inflows equated to only 15.6 litres per person per day! Yet in 2015/6 Perth residents consumed an average of nearly 350 litres each per day. This was the highest daily water consumption for Australia’s capitals. How was this achieved?
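
As a rough back-of-the-envelope check, the 15.6-litre figure follows directly from the inflow and population numbers above:

```python
# Rough arithmetic behind the 15.6 litres-per-person-per-day figure, using the
# 2015 inflow of 11.4GL and Perth's population of roughly two million.

inflow_gigalitres = 11.4
population = 2_000_000
litres_per_gigalitre = 1_000_000_000

litres_per_person_per_day = inflow_gigalitres * litres_per_gigalitre / population / 365
print(f"{litres_per_person_per_day:.1f} litres per person per day")  # ~15.6
```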

Tapping into desalination and groundwater

Perth has progressively sourced more and more of its supply from desalination and from groundwater extraction. This has been expensive and has been the topic of much debate. Perth is the only Australian capital to rely so heavily on desalination and groundwater for its water supply.

Volumes of water sourced for urban use in Australia’s major cities. BOM, Water in Australia, p.52, National Water Account 2015

Australia’s next most water-stressed capital is Adelaide. That city is supplementing its surface water storages with desalination and groundwater, as well as water “transferred” from the Murray River.

Australia’s other capital cities on the east coast have faced their own water supply crises. Their water storages dwindled to between 20% and 35% capacity in 2007. This triggered multiple actions to prevent a water crisis. Progressively tighter water restrictions were declared.

The major population centres (Brisbane/Gold Coast, Sydney, Melbourne and Adelaide) also built large desalination plants. The community reaction to the desalination plants was mixed. While some welcomed these, others question their costs and environmental impacts.

The desalination plants were expensive to build, consume vast quantities of electricity and are very expensive to run. They remain costly to maintain, even if they do not supply desalinated water. All residents pay higher water rates as a result of their existence.

Since then, rainfall in southeastern Australia has increased and water storages have refilled. The largest southeastern Australia desalination plants have been placed on “stand-by” mode. They will be switched on if and when the supply level drops.

Investing in huge storage capacity

Many Australian cities also store very large volumes of water in very large water reservoirs. This allows them to continue to supply water through future extended periods of dry weather.

Cape Town’s largest water storage, Theewaterskloof Dam, has run dry, but the city’s total capacity is small compared to the big Australian cities’ reserves. Nic Bothma/EPA

The three largest cities (Sydney, Melbourne and Brisbane) have built very large dams indeed. For example, Brisbane has 2,220,150 ML storage capacity for its 2.2 million residents. That amounts to just over one million litres per resident when storages are full.

In comparison, Cape Town’s four million residents have a full storage capacity of 900,000 ML. That’s 225,000 litres per resident. Cape Town is constructing a number of small desalination plants while anxiously waiting for the onset of the region’s formerly regular winter rains.
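
These per-resident figures are simply storage capacity divided by population; a quick sketch using the numbers quoted above:

```python
# Storage per resident for Brisbane and Cape Town, using the capacities and
# populations quoted above (1 megalitre = 1,000,000 litres).

cities = {
    "Brisbane": {"storage_ml": 2_220_150, "population": 2_200_000},
    "Cape Town": {"storage_ml": 900_000, "population": 4_000_000},
}

for name, data in cities.items():
    litres_per_resident = data["storage_ml"] * 1_000_000 / data["population"]
    print(f"{name}: {litres_per_resident:,.0f} litres per resident when storages are full")
# Brisbane comes out at just over one million litres; Cape Town at 225,000 litres.
```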


This article was written by:
Ian Wright – [Senior Lecturer in Environmental Science, Western Sydney University]

 

 

 


Explainer: power station ‘trips’ are normal, but blackouts are not

The Loy Yang power station ‘tripped’ early in the year, triggering fears of a summer of blackouts. DAVID CROSLING/AAP

Tens of thousands of Victorians were left without power over the long weekend as the distribution network struggled with blistering temperatures, reigniting fears about the stability of our energy system.

It comes on the heels of a summer of “trips”, when power stations temporarily shut down for a variety of reasons. This variability has also been used to attack renewable energy such as wind and solar, which naturally fluctuate depending on weather conditions.

The reality is that blackouts, trips and intermittency are three very different issues, which should not be conflated. As most of Australia returns to school and work in February, and summer temperatures continue to rise, the risk of further blackouts makes it essential to understand the cause of the blackouts, what a power station “trip” really is, and how intermittent renewable energy can be integrated into the national system.


Read more: A month in, Tesla’s SA battery is surpassing expectations


Blackouts

Initial reports indicate recent blackouts in Victoria were caused by multiple small failures in the electricity distribution system across the state, affecting all but one of the five separately owned and managed systems that supply Victorians.

Across the whole of mainland Australia, very hot weather causes peak levels of electricity consumption. Unfortunately, for reasons of basic physics, electricity distribution systems do not work well when it is very hot, so the combination of extreme heat and high demand is very challenging. It appears that significant parts of the Victorian electricity distribution system were unable to meet the challenge, leading to uncontrolled blackouts.

Parenthetically, electricity distribution systems are vulnerable to other types of uncontrollable extreme environmental events, including high winds, lightning, and bushfires. Sometimes blackouts last only a few seconds, sometimes for days, depending on the nature and extent of the damage to the system. 

These blackouts are very different from those caused by power station “trips”, although they have the same effect on consumers. When electricity supply is insufficient to meet demand, certain sections of the grid have to be strategically blacked out to restore the balance (this is known as “load shedding”).

It is the possibility of blackouts of this second type which has excited so much commentary in recent months, and has been linked to power station “trips”.

What is a ‘trip’ and how significant is it?

“Trip” simply means disconnect; it is used to describe the ultra-fast operation of the circuit breakers used as switching devices in high-voltage electricity transmission systems. When a generator trips, it means that it is suddenly, and usually unexpectedly, disconnected from the transmission network, and thus stops supplying electricity to consumers.

The key words here are suddenly and unexpectedly. Consider what happened in Victoria on January 18 this year. It was a very hot day and all three brown coal power stations in the state were generating at near full capacity, supplying in total about 4,200 megawatts towards the end of the afternoon, as total state demand climbed rapidly past 8,000MW (excluding rooftop solar generation).

Suddenly, at 4:35pm, one of the two 500MW units at Loy Yang B, Victoria’s newest (or, more precisely, least old) coal-fired power station tripped. At the time this unit was supplying 490MW, equal to about 6% of total state demand.

The system, under the operational control of the Australian Energy Market Operator (AEMO), responded just as it was meant to. There was considerable spare gas generation capacity, some of which was immediately made available, as was some of the more limited spare hydro capacity. There was also a large increase in imports from New South Wales, and a smaller reduction in net exports to South Australia.

By the time Loy Yang B Unit 1 was fully back on line, three hours later, Victoria had passed its highest daily peak demand for nearly two years. There was no load shedding: all electricity consumers were supplied with as much electricity as they required. However, spot wholesale prices for electricity reached very high levels during the three hours, and it appears that some large consumers, whose supply contracts exposed them to wholesale prices, made short-term reductions in discretionary demand.


Read more: A high price for policy failure: the ten-year story of spiralling electricity bills


This (relatively) happy outcome on January 18 was made possible by the application of the system reliability rules and procedures, specified in the National Electricity Rules.

These require AEMO to ensure that at all times, in each of the five state regions of the NEM, available spare generation capacity exceeds the combined capacity of the two largest units operating at any time.

In other words, spare capacity must be sufficient to allow demand to continue to be reliably supplied if both of the two largest units generating should suddenly disconnect.
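
As a rough sketch of that reserve rule (a paraphrase of the requirement described above, not the wording of the National Electricity Rules), the check looks like this:

```python
# Check whether spare generation capacity in a region covers the simultaneous
# loss of the two largest units currently operating. Unit sizes are illustrative.

def reserve_is_adequate(spare_capacity_mw, operating_unit_sizes_mw):
    """True if spare capacity exceeds the combined size of the two largest units."""
    two_largest_mw = sum(sorted(operating_unit_sizes_mw, reverse=True)[:2])
    return spare_capacity_mw > two_largest_mw

units_online_mw = [500, 500, 350, 300, 250]        # hypothetical fleet
print(reserve_is_adequate(1100, units_online_mw))  # True: 1100 > 1000
print(reserve_is_adequate(900, units_online_mw))   # False: 900 < 1000
```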

Forecasting

AEMO forecasts energy demand, and issues market notices alerting generators about reliability, demand and potential supply issues. On a busy day, like January 18, market notices may be issued at a rate of several per hour.

These forecasts allowed generators to respond to the loss of Loy Yang B without causing regional blackouts.

What is not publicly known, and may never be known, is why Loy Yang Unit B1 tripped. AEMO examines and reports in detail on what are called “unusual power system events”, which in practice means major disruptions, such as blackouts. There are usually only a few of these each year, whereas generator trips that don’t cause blackouts are much more frequent (as are similar transmission line trips).

It has been widely speculated that, as Australia’s coal fired generators age, they are becoming less reliable, but that could only be confirmed by a systematic and detailed examination of all such events.

Managing variable generation

Finally, and most importantly, the events described above bear almost no relationship to the challenges to reliable system operation presented by the growth of wind and solar generation.

With traditional thermal generation, the problems are caused by unpredictability of sudden failures, and the large unit size, especially of coal generators, which means that a single failure can challenge total system reliability. Individual wind generators may fail unpredictably, but each machine is so small that the loss of one or two has a negligible effect on reliability.

The challenge with wind and solar is not reliability but the variability of their output, caused by variations in weather. This challenge is being addressed by continuous improvement of short term wind forecasting. As day-ahead and hour-ahead forecasts get better, the market advice AEMO provides will give a more accurate estimate of how much other generation will be needed to meet demand at all times.

Of course, AEMO, and the generation industry, do still get caught out by sudden and unexpected drops in wind speed, but even the fastest drop in wind speed takes much longer than the milliseconds needed for a circuit breaker in a power station switchyard to trip out.

At the same time, as the share of variable renewable generation grows, the complementary need for a greater share of fast response generators and energy storage technologies will also grow, while the value to the system of large, inflexible coal-fired generators will shrink.


This Explainer was prepared by:
Hugh Saddler – [Honorary Associate Professor, Centre for Climate Economics and Policy, Australian National University]

 

 

 

 


Sustainable shopping: how to stay green when buying white goods

It pays to think very carefully about your new fridge. DedMityay/Shutterstock.com

Most of us have a range of white goods in our homes. According to the Australian Bureau of Statistics, common appliances include refrigerators (in 99.9% of homes), washing machines (97.8%) and air conditioners (74.0%). Just over half of Australian households have a dishwasher, and a similar number have a clothes dryer.

These white goods provide a host of benefits, such as reducing waste, improving comfort, helping us avoid health hazards such as rotten food, or simply freeing up our time to do other things. But they also have significant environmental impacts, and it’s important to consider these when using and choosing white goods.

Most white goods are used on a daily basis for years. This means the bulk of their environmental impact comes not from their manufacture, but from their everyday use. They use electricity, for example, which is often sourced from fossil fuels.

Life cycle impacts of typical white goods. MIT

When buying an appliance, many people focus on the retail price, but overlook the often significant operating costs. The table below shows the difference in annual energy costs and greenhouse emissions for different-sized dishwashers under various scenarios.

When it comes to appliances, size matters. Sustainability Victoria, Author provided

What to look for

Here are some questions you should ask when shopping for a new white goods appliance:

  • How resource-efficient is this model, compared with other options?
  • How much will it cost to operate?
  • Over the life of the product, would I be better off spending more now to buy a more energy-efficient model that costs less to run?

Read the label

White goods in Australia are required to carry a label detailing their energy and water ratings. The more stars a product has, the more energy- or water-efficient it is. The labels also provide information on average consumption over a year, so that you can compare similar products or account for factors such as the size of the appliance.

Knowing how to interpret consumer information can be valuable. Energyrating.gov.au, Author provided

The Energy Rating website also allows you to make comparisons, and even calculates usage costs and savings for you. As shown in the figure below, choosing a 10-star fridge over a 3-star one will save you an estimated A$664 in running costs over 10 years. This would offset some or all of the extra up-front cost of buying a more sustainable model.

The Australian government’s energy rating website helps you calculate the savings from being energy-efficient. Energyrating.gov.au, Author provided
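
The underlying comparison is straightforward: add the purchase price to the running costs over the product’s life and see which model comes out ahead. The sketch below uses made-up prices and running costs purely for illustration; the A$664 figure above comes from the Energy Rating calculator, not from these numbers.

```python
# Compare two hypothetical fridges over a ten-year life. Prices and annual
# running costs here are invented for illustration; substitute real figures
# from the energy label or the Energy Rating calculator.

def lifetime_cost(purchase_price, annual_running_cost, years=10):
    return purchase_price + annual_running_cost * years

high_star = lifetime_cost(purchase_price=1500, annual_running_cost=60)
low_star = lifetime_cost(purchase_price=1100, annual_running_cost=126)

print(f"High-star model: A${high_star:,.0f} over 10 years")  # A$2,100
print(f"Low-star model:  A${low_star:,.0f} over 10 years")   # A$2,360
# Here the A$660 saving in running costs more than offsets the A$400 price premium.
```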

Many other organisations and websites also provide performance and user reviews for appliances. Choice is an independent organisation that tests a variety of products, including white goods. The tests scrutinise a range of criteria, including energy- and water-efficiency, ease of use, operating costs, and durability.

Use wisely

Once you get your new appliance home, it is also crucial to use it properly. Make sure you read the manual and find out how to maximise the efficiency of the appliance.

For example, talk to your air conditioner installer to determine the optimal position to cool and heat your space, depending on the aspect and layout of your home. And make sure you leave enough space around the back and side of your fridge for air to circulate, which helps dissipate waste heat more effectively and can save up to 150kg of carbon dioxide emissions a year.

Make sure you turn the appliance off when not in use – failing to do this can further add to running costs and environmental impact. Ask yourself whether you really need to keep that ancient beer fridge humming away in the garage.

Of course, you don’t have to wait for a new appliance before doing all of these things. You can make your current appliances perform more efficiently by reviewing how you use and position them.

Washing a full load of clothes is more efficient and sustainable than only washing a part load. If you think you need to do smaller washloads generally, then consider buying a smaller washing machine, or find a model that has smart features such as being able to do a half load.

Many appliances such as dishwashers and washing machines also feature “eco” modes that can save significant amounts of water and energy.

Finally, it’s always worth asking yourself whether you truly need to buy that new appliance. Consider having a broken appliance fixed, as this will avoid using all the resources required to manufacture a new one. Or consider buying secondhand.

Even if buying secondhand, you can check the environmental performance of the appliance, either through the energy rating website or via the manufacturer. Make sure you compare this to new options to see which works out better over the life of the product.

And if you do buy yourself a new or secondhand appliance, make sure you look into how to recycle your old appliance, through your local council, charities or other organisations.


This article was co-authored by:

Trivess Moore – [Research Fellow, RMIT University]
and
Simon Lockrey – [Research Fellow, RMIT University]

 

 

 


 

A month in, Tesla’s SA battery is surpassing expectations

A month into operation, the Tesla lithium-ion battery at Neoen wind farm in Hornsdale, South Australia is already providing essential grid services. REUTERS/Sonali Paul

It’s just over one month since the Hornsdale power reserve was officially opened in South Australia. The excitement surrounding the project has generated acres of media interest, both locally and abroad.

The aspect that has generated the most interest is the battery’s rapid response time in smoothing out several major energy outages that have occurred since it was installed.

Following the early success of the SA model, Victoria has also secured an agreement to get its own Tesla battery built near the town of Stawell. Victoria’s government will be tracking the Hornsdale battery’s early performance with interest.

Generation and Consumption

Over the full month of December, the Hornsdale power reserve generated 2.42 gigawatt-hours of energy, and consumed 3.06GWh.

Since there are losses associated with energy storage, it is a net consumer of energy. This is often described in terms of “round trip efficiency”, a measure of the energy out to the energy in. In this case, the round trip efficiency appears to be roughly 80%.
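
That estimate is simply the month’s output divided by its input:

```python
# Round-trip efficiency from the December totals quoted above.

energy_generated_gwh = 2.42   # discharged to the grid
energy_consumed_gwh = 3.06    # drawn from the grid to charge

round_trip_efficiency = energy_generated_gwh / energy_consumed_gwh
print(f"Round-trip efficiency: {round_trip_efficiency:.0%}")  # 79%, i.e. roughly 80%
```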

The figure below shows the input and output from the battery over the month. As can be seen, on several occasions the battery has generated as much as 100MW of power, and consumed 70MW of power. In regular operation, the battery moves between generating 30MW and consuming 30MW of power.

Generation and consumption of the Hornsdale Power Reserve over the month of December 2017. Author provided [data from AEMO]

As can be seen, the generation and consumption profile is rather “noisy”, with no obvious pattern. This is true even on a daily basis, as can be seen below. It is related to the services provided by the battery.

Generation and consumption of the Hornsdale Power Reserve on the 6th of Jan 2018. Author provided [data from AEMO]

Frequency Control Ancillary Services

There are eight different Frequency Control Ancillary Services (FCAS) markets in the National Electricity Market (NEM). These can be put into two broad categories: contingency services and regulation services.

Contingency services

Contingency services essentially stabilise the system when something unexpected occurs. These are called credible contingencies. The tripping (isolation from the grid) of a large generator is one example.

When such unexpected events occur, supply and demand are no longer balanced, and the frequency of the power system moves away from the normal operating range. This happens on a very short timescale. The contingency services ensure that the system is brought back into balance and that the frequency is returned to normal within 5 minutes.

In the NEM there are three separate timescales over which these contingency services should be delivered: 6 seconds, 60 seconds, and 5 minutes. As the service may have to increase or decrease the frequency, there is thus a total of six contingency markets (three that raise frequency in the timescales above, and three that reduce it).
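
Putting the pieces together (the six contingency markets here plus the two regulation markets described below) gives the eight FCAS markets. A quick tally, purely to show the structure:

```python
# Enumerate the eight FCAS markets described in this article: six contingency
# markets (three timescales, each with a raise and a lower service) plus two
# regulation markets. Labels are informal, for illustration only.

timescales = ["6 second", "60 second", "5 minute"]
directions = ["raise", "lower"]

contingency_markets = [f"{t} {d}" for t in timescales for d in directions]
regulation_markets = [f"regulation {d}" for d in directions]

fcas_markets = contingency_markets + regulation_markets
print(len(fcas_markets))  # 8
print(fcas_markets)
```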

This is usually done by rapidly increasing or decreasing output from a generator (or battery in this case), or rapidly reducing or increasing load. This response is triggered at the power station by the change in frequency.

Tesla’s lithium-ion battery in South Australia has provided essential grid services on many occasions throughout December, according to the Australian Energy Market Operator. Reuters

To do this, generators (or loads) have some of their capacity “enabled” in the FCAS market. This essentially means that a proportion of their capacity is set aside, and available to respond if the frequency changes. Providers get paid for the number of megawatts they have enabled in the FCAS market.

This is one of the services that the Hornsdale Power Reserve has been providing. The figure below shows how the Hornsdale Power Reserve responded to one such incident, when one of the units at Loy Yang A tripped on December 14, 2017.

The Hornsdale Power Reserve responding to a drop in system frequency. Author provided [data from AEMO]

Regulation services

The regulation services are a bit different. Similar to the contingency services, they help maintain the frequency in the normal operating range. And like contingency, regulation may have to raise or lower the frequency, and as such there are two regulation markets.

However, unlike contingency services, which essentially wait for an unexpected change in frequency, the response is governed by a control signal, sent from the Australian Energy Market Operator (AEMO).

In essence, AEMO controls the throttle, monitors the system frequency, and sends out a control signal at 4-second intervals. This control signal alters the output of the generator so that the balance between supply and demand is maintained.
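
As a purely illustrative sketch (the real AEMO dispatch interface is more involved than this), a battery following a 4-second regulation signal within its enabled capacity might behave like this:

```python
# Toy model of following a regulation control signal every four seconds.
# The signal values and the 30MW enabled capacity are made-up examples.

ENABLED_REGULATION_MW = 30.0

def setpoint_from_signal(signal_mw, enabled_mw=ENABLED_REGULATION_MW):
    """Clamp the requested output change to the capacity enabled for regulation."""
    return max(-enabled_mw, min(enabled_mw, signal_mw))

four_second_signals = [5.0, 18.0, 42.0, -12.0, -35.0]  # MW requested by the operator
for signal in four_second_signals:
    print(f"signal {signal:+6.1f} MW -> battery setpoint {setpoint_from_signal(signal):+6.1f} MW")
```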

This is one of the main services that the battery has been providing. As can be seen, the output of the battery closely follows the amount of capacity it has enabled in the regulation market.

Output of Hornsdale Power Reserve compared with enablement in the regulation raise FCAS market. Author provided [data from AEMO]

More batteries to come

Not to be outdone by its neighbouring state, the Victorian government has also recently secured an agreement for its own Tesla battery. This agreement, in conjunction with a wind farm near the town of Stawell, should see a battery providing similar services in Victoria.

This battery may also provide additional benefits to the grid. The project is located in a part of the transmission network that AEMO has indicated may need augmentation in the future. This project might illustrate the benefits the batteries can provide in strengthening the transmission network.

It’s still early days for the Hornsdale Power Reserve, but it’s clear that it has been busy performing essential services, and doing so at impressive speeds. Importantly, it has provided regular frequency control ancillary services – not simply shifting electricity around.

With the costs of, and need for, frequency control services increasing in recent years, the boost to supply from the Hornsdale Power Reserve is good news for consumers, and a timely addition to Australia’s energy market.


This article was written by:
Dylan McConnell – [Researcher at the Australian German Climate and Energy College, University of Melbourne]

 

 

 

 


A high price for policy failure: the ten-year story of spiralling electricity bills

The storm clouds have been gathering over energy policy for a decade or more. Joe Castro/AAP Image

Politicians are told never to waste a good crisis. Australia’s electricity sector is in crisis, or something close to it. The nation’s first-ever statewide blackout, in South Australia in September 2016, was followed by electricity shortages in several states last summer. More shortages are anticipated over coming summers.

But for most Australians, the most visible impact of this crisis has been their ever-increasing electricity bills. Electricity prices have become a political hot potato, and the blame game has been running unchecked for more than a year.

Electricity retailers find fault with governments, and renewable energy advocates point the finger at the nasty old fossil-fuel generators. The right-wing commentariat blames renewables, while the federal government blames everyone but itself.

The truth is there is no silver bullet. No single factor or decision is responsible for the electricity prices we endure today. Rather, it is the confluence of many different policies and pressures at every step of the electricity supply chain.

According to the Australian Competition and Consumer Commission (ACCC), retail customers in the National Electricity Market (which excludes Western Australia and the Northern Territory) now pay 44% more in real terms for electricity than we did ten years ago.

Four components make up your electricity bill. Each has contributed to this increase.

How your rising power bills stack up. ACCC, Author provided

The biggest culprit has been the network component – the cost of transporting the electricity. Next comes the retail component – the cost of billing and servicing the customer. Then there is the wholesale component – the cost of generating the electricity. And finally, the government policy component – the cost of environmental schemes that we pay for through our electricity bills.

Each component has a different tale, told differently in every state. But ultimately, this is a story about a decade of policy failure.

Network news

Network costs form the biggest part of your electricity bill. Australia is a big country, and moving electricity around it is expensive. As the graph above shows, network costs have contributed 40% of the total price increase over the past decade.

The reason we now pay so much for the network is simply that we have built an awful lot more stuff over the past decade. It’s also because it was agreed – through the industry regulator – that network businesses could build more network infrastructure and that we all have to pay for it, regardless of whether it is really needed.

Network businesses are heavily regulated. Their costs, charges and profits all have to be ticked off. This is supposed to keep costs down and prevent consumers being charged too much.

That’s the theory. But the fact is costs have spiralled. Between 2005 and 2016 the total value of the National Electricity Market (NEM) distribution network increased from A$42 billion to A$72 billion – a whopping 70%. During that time there has been little change in the number of customers using the network or the amount of electricity they used. The result: every unit of electricity we consume costs much more than it used to.

There are several reasons for this expensive overbuild. First, forecasts of electricity demand were wrong – badly wrong. Instead of ever-increasing consumption, the amount of electricity we used started to decline in 2009. A whole lot of network infrastructure was built to meet demand that never eventuated.

Second, governments in New South Wales and Queensland imposed strict reliability settings – designed to avoid blackouts – on the networks in the mid-2000s. To meet these reliability settings, the network businesses had to spend a lot more money reinforcing their networks than they otherwise would have.

Third, the way in which network businesses are regulated encourages extra spending on infrastructure. In an industry where you are guaranteed a 10% return on investment, virtually risk-free – as network businesses were between 2009 and 2014 – you are inclined to build, build, build.

The blame for this “gold-plating” of network assets is spread widely. Governments have been accused of panicking and setting reliability standards too high. The regulator has copped its share for allowing businesses too much capital spend and too high a return. Privatisation has also been criticised, which is slightly bizarre given that the worst offenders have been state-owned businesses.

Retail rollercoaster

The second biggest increase in your bill has been the amount we pay for the services provided to us by retailers. Across the NEM, the retail component has contributed 26% of the price increase over the past decade.

This increase in the retail component was never supposed to happen. To understand why, you must go back to the rationale for opening the retail sector to competition. Back in the 1990s, it was felt that retail energy was ripe for competition, which would deliver lower prices and more innovative products for consumers.

In theory, where competition exists, firms seek to reduce their costs to maximise their profits, in turn allowing them to reduce prices so as to grab as many customers as possible. The more they cut their costs, the more they can cut prices. Theoretically, costs are minimised and profits are squeezed. If competition works as it’s supposed to, the retail component should go down, not up.

But the exact opposite has happened in the electricity sector. In Victoria, the state that in 2009 became the first to completely deregulate its retail electricity market, the retail component has contributed 36% of the price increase over the past decade.

On average, Victorians pay almost A$400 a year to retailers, more than consumers in any other mainland NEM state. This is consistent with the Grattan Institute’s Price Shock report, which showed that rising profits are causing pain for Victorian electricity consumers. Many customers remain on expensive deals, and do not switch to cheaper offers because the market is so complicated. These “sticky” customers have been cited as the source of “excessive” retailer profits.

But the new figures provided by the ACCC, which come directly from retailers, paint a different picture. The ACCC finds that the growth in Victoria’s retail component is wholly down to the rising costs retailers face in doing business, rather than fatter profit margins.

There are reasons why competition might drive prices up, not down. Retailers now spend money on marketing to recruit and retain customers. And the existence of multiple retailers leads to duplications in costs that would not exist if a single retailer ran the market.

But these increases should be offset by retailers finding savings elsewhere, and this doesn’t seem to have happened. History may judge the introduction of competition to the retail electricity market as an expensive mistake.

Generational problems

So far, we have accounted for roughly two-thirds of the bill increase of the past decade, and neither renewables nor coal has been mentioned once. Nor were they ever likely to be. The actual generation of electricity has only ever formed a minor portion of your electricity bill – the ACCC report shows that in 2015-16 the wholesale component constituted only 22% of the typical bill.

In the past year, however, wholesale prices have risen sharply. In 2015-16, households paid on average A$341 a year for the generation of electricity – far less than they were paying in 2006-07. But in the past year, that is estimated to have increased to A$530 a year.

Generators, particularly in Queensland, have been engaging in questionable behaviour, but it is the fundamental change in the supply and demand balance that means higher prices are here to stay for at least the next few years.

The truth is the cost of generating electricity has been exceptionally low in most parts of Australia for most of the past two decades. When the NEM was created in 1998, there was arguably more generation capacity in the system than was needed to meet demand. And in economics, more supply than demand equals low prices.

Over the years our politicians have been particularly good at ensuring overcapacity in the system. Most of the investment in generation in the NEM since its creation has been driven by either taxpayers’ money, or government schemes and incentives – not by market forces. The result has been oversupply.

Up until the late 2000s the market kept chugging along. Then two things happened. First, consumers started using less electricity. And second, the Renewable Energy Target (RET) was ramped up, pushing more supply into the market.

Demand down and supply up meant even more oversupply, and continued low prices. But the combination of low prices and low demand put pressure on the finances of existing fossil fuel generators. Old generators were being asked to produce less electricity than before, for lower prices. Smaller power stations began to be mothballed or retired.

Something had to give, and it did when both Alinta and Engie decided it was no longer financially viable to keep their power stations running. Far from being oversupplied, the market is now struggling to meet demand on hot days when people use the most electricity. The result is very high prices.

A tight demand and supply balance with less coal-fired generation has meant that Australia increasingly relies on gas-fired generation, at a time when gas prices are astronomical, leading to accusations of price-gouging.

Put simply, Australia has failed to build enough new generation over recent years to reliably replace ageing coal plants when they leave the market.

Is it renewable energy’s fault that coal-fired power stations have closed? Yes, but this is what needs to happen if we are to reduce greenhouse emissions. Is it renewables’ fault that replacement generation has not been built? No. It’s the government’s fault for failing to provide the right environment for new investment.

The right investment climate is crucial. Marcella Cheng

The current predicament could have been avoided if we had a credible and comprehensive emissions reduction policy to drive investment in the sector. Such a policy would give investors the confidence to build generation with the knowledge about what carbon liabilities they may face in the future. But the carbon price was repealed in 2014 and replaced with nothing.

We’re still waiting for an alternative policy. We’re still waiting for enough generation capacity to be built. And we’re still paying sky-high wholesale prices for electricity.

Green and gold

Finally, we have the direct cost of government green schemes over the past decade: the RET; the household solar panel subsidies; and the energy-efficiency incentives for homes and businesses.

They represent 16% of the price increase over the past 10 years – but they are still only 6% of the average bill.
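
Pulling together the shares quoted through this article gives a rough tally of where the decade’s 44% real price increase has come from. The sketch below uses only the component shares stated above; the wholesale share is inferred as the remainder, so treat it as an approximation rather than an ACCC figure.

    # Stated shares of the ~44% real increase in NEM retail electricity
    # prices over the past decade. The wholesale share is inferred as the
    # remainder -- an approximation, not an official ACCC figure.
    contributions = {
        "network": 0.40,
        "retail": 0.26,
        "green schemes": 0.16,
    }
    contributions["wholesale (inferred)"] = 1.0 - sum(contributions.values())

    total_real_increase = 0.44  # ACCC: 44% more in real terms than a decade ago
    for component, share in contributions.items():
        points = share * total_real_increase * 100
        print(f"{component:22s} {share:.0%} of the increase "
              f"(~{points:.0f} percentage points in real terms)")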

If the aim of these schemes has been to reduce emissions, they have not done a very good job. Rooftop solar panel subsidies have been expensive and inequitable. The RET is more effective as an industry subsidy than an emissions reduction or energy transition policy. And energy efficiency schemes have produced questionable results.

It hasn’t been a total waste of money, but far deeper emissions cuts could have been delivered if those funds had been channelled into a coherent policy.

The story of Australia’s high electricity prices is not really one of private companies ripping off consumers. Nor is it a tale about the privatisation of an essential service. Rather, this is the story of a decade of policy drift and political failure.

Governments have been repeatedly warned about the need to tackle these problems, but have done very little.

Instead they have focused their energy on squabbling over climate policy. State governments have introduced inefficient schemes, scrapped them, and then introduced them again, while the federal government has discarded policies without even trying them.

There is a huge void where our sensible energy policy should be. Network overbuild and ballooning retailer margins both dwarf the impact of the carbon price, yet if you listen only to our politicians you’d be forgiven for thinking the opposite.

And still it goes on. The underlying causes of Australia’s electricity price headaches – the regulation of networks, ineffective retail market competition, and our barely coping generators – need immediate attention. But still the petty politicking prevails.

The Coalition has rejected the Clean Energy Target recommended by Chief Scientist Alan Finkel. Labor will give no guarantee of support for the government’s alternative policy, the National Energy Guarantee. Some politicians doubt the very idea that we need to act on climate change. Some states have given up on Canberra and are going it alone.

We’ve been here before and we know how this story ends. Crisis wasted.


This article was written by:
David Blowers – [Energy Fellow, Grattan Institute]

This article is part of a syndicated news program via

What’s the net cost of using renewables to hit Australia’s climate target? Nothing

Managed in the right way, wind farms can actually help stabilise the grid, rather than disrupting it. AAP Image/Lukas Coch

Australia can meet its 2030 greenhouse emissions target at zero net cost, according to our analysis of a range of options for the National Electricity Market.

Our modelling shows that renewable energy can help hit Australia’s emissions reduction target of 26-28% below 2005 levels by 2030 effectively for free. This is because the cost of electricity from new-build wind and solar will be cheaper than replacing old fossil fuel generators with new ones.


Read more: Want energy storage? Here are 22,000 sites for pumped hydro across Australia


Currently, Australia is installing about 3 gigawatts (GW) per year of wind and solar photovoltaics (PV). This is fast enough to exceed 50% renewables in the electricity grid by 2030. It’s also fast enough to meet Australia’s entire carbon reduction target, as agreed at the 2015 Paris climate summit.
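
A back-of-the-envelope calculation helps show why that build rate is significant. The sketch below is purely illustrative: the blended capacity factor of about 30% and the roughly 200 terawatt-hours of annual NEM demand are assumptions of mine, not figures from the study.

    # Back-of-the-envelope: what ~3 GW/year of new wind and PV adds up to
    # by 2030. The capacity factor and NEM demand figures are illustrative
    # assumptions, not numbers from the study.
    build_rate_gw = 3.0                 # current deployment rate (from the article)
    years = 2030 - 2017                 # roughly 13 more years at that pace
    blended_capacity_factor = 0.30      # assumed wind/PV mix
    nem_demand_twh = 200.0              # assumed annual NEM demand, order of magnitude

    new_capacity_gw = build_rate_gw * years                               # ~39 GW
    output_twh = new_capacity_gw * blended_capacity_factor * 8760 / 1000  # ~100 TWh/yr
    print(f"~{new_capacity_gw:.0f} GW of new wind and PV, ~{output_twh:.0f} TWh/yr, "
          f"~{output_twh / nem_demand_twh:.0%} of assumed demand "
          f"(before counting existing hydro and other renewables)")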

Encouragingly, the rapidly declining cost of wind and solar PV electricity means that the net cost of meeting the Paris target is roughly zero. This is because electricity from new-build wind and PV will be cheaper than from new-build coal generators; cheaper than existing gas generators; and indeed cheaper than the average wholesale price in the entire National Electricity Market, which is currently A$70-100 per megawatt-hour.

Cheapest option

Electricity from new-build wind in Australia currently costs around A$60 per MWh, while PV power costs about A$70 per MWh.

During the 2020s these prices are likely to fall still further – to below A$50 per MWh, judging by the lower-priced contracts being signed around the world, such as in Abu Dhabi, Mexico, India and Chile.

In our research, published today, we modelled the all-in cost of electricity under three different scenarios:

  • Renewables: replacement of enough old coal generators by renewables to meet Australia’s Paris climate target
  • Gas: premature retirement of most existing coal plant and replacement by new gas generators to meet the Paris target. Note that gas is uncompetitive at current prices, and this scenario would require a large increase in gas use, pushing up prices still further.
  • Status quo: replacement of retiring coal generators with supercritical coal. Note that this scenario fails to meet the Paris target by a wide margin, yet costs about the same as the renewables scenario above, even with the low coal power station price used in our modelling.

The chart below shows the all-in cost of electricity in the 2020s under each of the three scenarios, and for three different gas prices: lower, higher, or the same as the current A$8 per gigajoule. As you can see, electricity would cost roughly the same under the renewables scenario as it would under the status quo, regardless of what happens to gas prices.

Levelised cost of electricity (A$ per MWh) for three scenarios and a range of gas prices. Blakers et al.
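
The “levelised cost” in the chart is, broadly, the discounted lifetime cost of building and running a generator divided by its discounted lifetime electricity output. A generic version of that calculation is sketched below; the input values are placeholders for illustration, not the assumptions used in the modelling.

    # Generic levelised cost of electricity (LCOE) calculation. The inputs
    # below are placeholders for illustration, not the study's assumptions.
    def lcoe(capital_cost, annual_om, annual_energy_mwh, lifetime_years, discount_rate):
        """Discounted lifetime costs divided by discounted lifetime energy (A$/MWh)."""
        discount = [(1 + discount_rate) ** -t for t in range(1, lifetime_years + 1)]
        discounted_costs = capital_cost + annual_om * sum(discount)
        discounted_energy = annual_energy_mwh * sum(discount)
        return discounted_costs / discounted_energy

    # Example: a hypothetical 100 MW wind farm at a 40% capacity factor.
    example = lcoe(
        capital_cost=200e6,                   # A$ (placeholder)
        annual_om=5e6,                        # A$/year (placeholder)
        annual_energy_mwh=100 * 0.40 * 8760,  # MWh/year
        lifetime_years=25,
        discount_rate=0.07,
    )
    print(f"Illustrative LCOE: ~A${example:.0f} per MWh")  # lands near A$60/MWh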

Balancing a renewable energy grid

The cost of renewables includes both the cost of energy and the cost of balancing the grid to maintain reliability. This balancing act involves energy storage, stronger interstate high-voltage power lines, and some “spillage” of renewable energy on windy, sunny days when the energy stores are full.

The current cost of hourly balancing of the National Electricity Market (NEM) is low because the renewable energy fraction is small. It remains low (less than A$7 per MWh) until the renewable energy fraction rises above three-quarters.

The renewable energy fraction in 2020 will be about one-quarter, which leaves plenty of room for growth before balancing costs become significant.

Cost of hourly balancing of the NEM (A$ per MWh) as a function of renewable energy fraction.

The proposed Snowy 2.0 pumped hydro project would have a power generation capacity of 2GW and energy storage of 350GWh. This could provide half of the new storage capacity required to balance the NEM up to a renewable energy fraction of two-thirds.

The new storage needed over and above Snowy 2.0 is 2GW of power with 12GWh of storage (enough to run at full power for about six hours). This could come from a mix of pumped hydro, batteries and demand management.
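
To put those storage figures in context, energy capacity divided by power capacity gives the number of hours a store can run flat out. A minimal sketch using the numbers above:

    # Hours of full-power output = stored energy (GWh) / power capacity (GW),
    # using the figures quoted above.
    snowy2_hours = 350 / 2   # Snowy 2.0: 350 GWh over 2 GW -> 175 hours
    extra_hours = 12 / 2     # additional storage: 12 GWh over 2 GW -> 6 hours
    print(f"Snowy 2.0: ~{snowy2_hours:.0f} hours at full power")
    print(f"Additional storage: ~{extra_hours:.0f} hours at full power")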

Stability and reliability

Most of Australia’s fossil fuel generators will reach the end of their technical lifetimes within 20 years. In our “renewables” scenario detailed above, five coal-fired power stations would be retired early, by an average of five years. In contrast, meeting the Paris targets by substituting gas for coal requires 10 coal stations to close early, by an average of 11 years.

Under the renewables scenario, the grid will still be highly reliable. That’s because it will have a diverse mix of generators: PV (26GW), wind (24GW), coal (9GW), gas (5GW), pumped hydro storage (5GW) and existing hydro and bioenergy (8GW). Many of these assets can be used in ways that help to deliver other services that are vital for grid stability, such as spinning reserve and voltage management.

Because a renewable electricity system comprises thousands of small generators spread over a million square kilometres, sudden shocks to the electricity system from the failure of a single generator – a regular occurrence with ageing large coal units – are unlikely.

Neither does cloudy or calm weather cause shocks, because weather is predictable and a given weather system can take several days to move over the Australian continent. Strengthened interstate interconnections (part of the cost of balancing) reduce the impact of transmission failure, which was the prime cause of the 2016 South Australian blackout.

Since 2015, Australia has tripled the annual deployment rate of new wind and PV generation capacity. Continuing at this rate until 2030 will let us meet our entire Paris carbon target in the electricity sector, all while replacing retiring coal generators, maintaining high grid stability, and stabilising electricity prices.


This article was co-authored by:
Andrew Blakers – [Professor of Engineering, Australian National University];
Bin Lu – [PhD Candidate, Australian National University]
and
Matthew Stocks – [Research Fellow, ANU College of Engineering and Computer Science, Australian National University]

This article is part of a syndicated news program via