Stars that vary in brightness shine in the oral traditions of Aboriginal Australians

 The star Betelgeuse varies in brightness.

Aboriginal Australians have been observing the stars for more than 65,000 years, and many of their oral traditions have been recorded since colonisation. These traditions tell of all kinds of celestial events, such as the annual rising of stars, passing comets, eclipses of the Sun and Moon, auroral displays, and even meteorite impacts.

But new research, recently published in The Australian Journal of Anthropology, reveals that Aboriginal oral traditions describe the variable nature of three red-giant stars: Betelgeuse, Aldebaran and Antares.

This challenges the history of astronomy and tells us that Aboriginal Australians were even more careful observers of the night sky than they have been given credit for.

What is a variable star?

The Greek philosopher Aristotle wrote in 350 BCE that the stars are unchanging and invariable. This was the position held by Western science for nearly 2,000 years.

It wasn’t until 1596 that this was proved wrong, when German astronomer David Fabricius showed that the star Mira (Omicron Ceti), in the constellation of Cetus, changed in brightness over time.

In the 1830s, astronomer John Herschel observed the relative brightness of a handful of stars in the sky. Over the course of four years, he noticed that the star Betelgeuse, in Orion, was sometimes fainter and sometimes brighter than some of the other stars. His discovery paved the way for an entire field of astrophysics dedicated to studying the variable nature of stars.

But was Herschel the first to recognise this?

There is evidence that ancient Egyptians observed the variability of the star Algol (Beta Persei).

Algol consists of two stars that orbit each other. As one moves in front of the other, it blocks the other star’s light, causing it to dim slightly. This is called an eclipsing binary. It can be seen in the sky as the winking eye of Medusa’s head in the Western constellation Perseus.

The variable star Algol is the winking eye of Medusa’s head, held by Perseus. Stellarium

Are there any clear records from oral or Indigenous cultures that demonstrate knowledge of variable stars?

Emerging research reveals two Aboriginal traditions from South Australia that show the answer is a clear “yes”.

Nyeeruna and the protective Kambugudha

A Kokatha oral tradition from the Great Victoria Desert tells of Nyeeruna, a vain hunter who comprises the same stars, in the same orientation, as the Greek Orion.

He is in love with the Yugarilya sisters of the Pleiades, but they are timid and shy away from his advances. Their eldest sister, Kambugudha (the Hyades star cluster), protects her younger sisters.

Nyeeruna creates fire-magic in his right hand (Betelgeuse) to overpower Kambugudha, so he can reach the sisters. She counters this with her own fire-magic in her left foot (Aldebaran), which she uses to kick dust into Nyeeruna’s face. This humiliates Nyeeruna and his fire-magic dissipates.

Nyeeruna (Orion), Kambugudha (the Hyades), and the Yugarilya sisters (Pleiades) with the row of dingo pups between them. Journal of Astronomical History & Heritage

Nyeeruna is persistent and replenishes his fire-magic to get to the sisters again. Kambugudha cannot generate hers in time, so she calls on Babba (the father dingo) for help. Babba fights Nyeeruna while Kambugudha and the other stars laugh at him, then places a row of dingo pups between them. This causes Nyeeruna much humiliation and his fire-magic dissipates again.

The story explains the variability of the stars Betelgeuse and Aldebaran. Trevor Leaman and I realised this in 2014, but we did not realise until now that the story also describes the relative periods of these changes.

Betelgeuse varies in brightness by one magnitude every 400 days, while Aldebaran varies by 0.2 magnitudes at irregular periods. The Aboriginal people recognised that Betelgeuse varies faster than Aldebaran, which is why they say that Kambugudha cannot generate her fire-magic in time to counter Nyeeruna.
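To make the size of these changes concrete: the astronomical magnitude scale is logarithmic, with a difference of Δm magnitudes corresponding to a brightness ratio of 10^(0.4Δm). Here is a quick sketch (illustrative only, using the variation figures quoted in this article):

```python
def brightness_ratio(delta_mag):
    """Flux ratio corresponding to a change of delta_mag magnitudes."""
    return 10 ** (0.4 * delta_mag)

# Betelgeuse: ~1 magnitude over ~400 days
print(round(brightness_ratio(1.0), 2))   # ≈ 2.51 – more than twice as bright at peak
# Aldebaran: ~0.2 magnitudes, at irregular periods
print(round(brightness_ratio(0.2), 2))   # ≈ 1.2 – a subtle ~20% change
# Antares: ~1.3 magnitudes over ~4.5 years
print(round(brightness_ratio(1.3), 2))   # ≈ 3.31
```

Experienced naked-eye observers can detect differences of a few tenths of a magnitude, so all three of these variations are within reach of careful unaided observation.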

Waiyungari and breaking sacred law

The second oral tradition comes from the Ngarrindjeri people, south of Adelaide. The story tells of Waiyungari, a young initiate who is covered in red ochre.

He is seen by two women, who find him very attractive. That night, they seduce him, which is strictly against the law for initiates. To escape punishment, they climb into the sky where Waiyungari becomes the star Antares and the women become the stars Tau and Sigma Scorpii, who flank him on either side.

‘Milky Way Dreaming – Ngurunderi, Nepali, and Waiyungari up in the Milky Way’, a painting by Ngarrindjeri artist Cedric Varcoe telling the Waiyungari story. Cedric Varcoe

The Ngarrindjeri people say Waiyungari signals the start of Spring (Riwuri) and occasionally gets brighter and hotter, symbolising his passion for the women. It is during this time that initiates must refrain from contact with the opposite sex. Antares is a variable star, which changes brightness by 1.3 magnitudes every 4.5 years.

What does this tell us?

Ruddy celestial objects hold special significance in Aboriginal traditions – from red stars to lunar eclipses to meteors – which may be one of the reasons why these stars are so significant.

Red objects are often related to fire, blood and passion. Psychological studies show that the colour red enhances sexual attraction between people, which may explain why both stories relate to sexual desire and taboos.

The Aboriginal traditions change the discovery timeline of these variable stars, which historians of astronomy say were discovered by Western scientists.

We see that Aboriginal people pay very close attention to subtle changes in nature, and incorporate this knowledge into their traditions. Astrophysicists have much to learn if we recognise the scientific achievements of Indigenous cultures and acknowledge the immense power of oral tradition.

Duane Hamacher is giving a plenary talk on this research into Aboriginal observations of red-giant variable stars at the Australian Space Research Conference, to be held at the University of Sydney on November 15, 2017.

This article was written by:
Image of Duane W. Hamacher Duane W. Hamacher – [Senior ARC Discovery Early Career Research Fellow, Monash University]






This article is part of a syndicated news program via

Particle physicists discover mysterious structure in Great Pyramid – here’s how they did it

 Khufu’s pyramid is the largest in the Giza pyramid complex. Ricardo Liberato/wikipedia

Particle physicists have uncovered a large, hidden void in Khufu’s Pyramid, the largest pyramid in Giza, Egypt – built between 2600 and 2500 BC. The discovery, published in Nature, was made using cosmic-ray based imaging and may help scientists work out how the enigmatic pyramid was actually constructed.

The technology works by tracking particles called muons. They are very similar to electrons – having the same charge and a quantum property called spin – but are 207 times heavier. This difference in mass is quite important as it turns out it determines how these particles interact when hitting matter.

Highly energetic electrons emit electromagnetic radiation, such as X-rays, when they hit solid matter – making them lose energy and get stuck in the target material. Due to the muon’s much higher mass, this emission of electromagnetic radiation is suppressed by a factor of 207 squared compared to electrons. As a result, muons are not stopped so quickly by any material; they are highly penetrative.
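To put a rough number on that suppression (a back-of-the-envelope sketch: radiative energy loss scales approximately as the inverse square of the particle's mass):

```python
# A muon is about 207 times heavier than an electron, so its radiative
# (bremsstrahlung) energy loss is weaker by roughly the mass ratio squared.
mass_ratio = 207
suppression = mass_ratio ** 2
print(suppression)  # 42849 – roughly 40,000 times less radiation than an electron
```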

Muons are commonly produced in cosmic rays. The Earth’s upper atmosphere is constantly bombarded with charged particles from the Sun, but also from sources outside our solar system. It is the latter that provide the more energetic cosmic rays that can produce muons and other particles in a chain of reactions.

Graphic representation of the known chambers of the pyramid and the newly discovered void. 
The known chambers of the pyramid and the newly discovered void.

As muons are relatively stable, with a comparatively long lifetime, they are the most numerous cosmic-ray particles seen at ground level. And although a lot of energy is lost on the way, muons with very high energies do occur.

Doing science with muons

The particles are fairly easy to detect. They produce a thin trail of “ionisation” along the path they take – which means that they knock electrons off atoms, leaving the atoms charged. This is quite handy, allowing scientists using several detectors to follow the path of the muon back to its origin. Also, if there’s a lot of material in the way of the muon, it can lose all of its energy and stop in the material and decay (split into other particles) before being detected.

These properties make muons great candidates for taking images of objects that otherwise are impenetrable or impossible to observe. Just like bones produce a shadow on a photographic film exposed to X-rays, a heavy and dense object with a high atomic number will produce a shadow or a reduction in the number of muons being able to pass through that object.

The first time muons were used in this way was in 1955, when E. P. George measured the overburden of rock over a tunnel by comparing the muon flux outside and inside the tunnel. The first known attempt to take a deliberate “muogram” happened in 1970 when Luis W. Alvarez looked for extended caverns in the second pyramid of Giza, but found none.

Within the last decade or so, muon tomography has experienced a bit of a fresh boost. In 2007, a Japanese collaboration took a muogram of the crater of the volcano Mt Asama to investigate its inside structure.

Muon scans are also being used to investigate the Fukushima reactor remnants. In the UK, the University of Sheffield is proposing to use measurements of the muon flux to monitor carbon storage sites.

Exploring Khufu

The easiest way to use muons to investigate large objects such as a pyramid is to look for differences in the muon flux coming through it. A solid pyramid would leave a shadow or a reduction in the number of muons in that direction. If there is a large hollow void inside the pyramid, the muon flux would be increased in the direction of that void. The bigger the difference between “solid” and “hollow”, the easier the void is to detect.

All you need to do is sit somewhere near the ground, look a bit upwards from the horizon towards the pyramid and count the number of muons coming from every direction. As cosmic muons need to be somewhat energetic to pass through a whole pyramid and as our detector “eyes” are relatively small, we need to sit there and count for quite a while, typically several months in order to count enough muons. In the same way as we have two eyes to get a 3D image of the world in our brains, we want two separate detector “eyes” to get a 3D image of the void inside the pyramid.
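The counting logic can be sketched with a deliberately simplified toy model. Real muon transport through rock is far more complicated than a single exponential, and the attenuation length and counts below are invented purely for illustration:

```python
import math

ATT_LENGTH_M = 50.0     # assumed effective attenuation length in stone (illustrative)
OPEN_SKY_COUNT = 10000  # assumed muons counted with no rock in the way (illustrative)

def expected_count(rock_thickness_m):
    """Expected muon count after passing through the given thickness of rock."""
    return OPEN_SKY_COUNT * math.exp(-rock_thickness_m / ATT_LENGTH_M)

solid = expected_count(100.0)      # line of sight through 100 m of solid stone
with_void = expected_count(70.0)   # same line of sight, but 30 m of it is empty void

print(round(with_void / solid, 2))  # 1.82 – the void direction sees ~82% more muons
```

The real analysis works the same way in spirit: the measured counts in each direction are compared against a detailed computer simulation of the known pyramid, and a persistent excess of muons reveals a void.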

Photo of the muon telescope setup in front of Khufu’s Pyramid.
Muon telescope setup in front of Khufu’s Pyramid.

The interesting thing about the approach of this team is that they have chosen three different detector technologies to investigate the pyramid. The first one is a bit old fashioned but offers a supreme resolution of the resulting image: photographic plates which get blackened by the ionisation. These were left for months inside of one of the known chambers in the pyramid and analysed in Japan after data taking was finished.

For the second method plastic “scintillators” that produce a light flash when a charged particle passes through them were employed. These kinds of detectors are used in several modern neutrino experiments.

And finally chambers filled with gas, where the ionisation caused by the charged particles can be monitored, were used to look directly along the direction of the newly discovered cavern.

The electronic signal of those detectors was directly phoned back to Paris via a 3G data link. Of course a pyramid with three known caverns and a large hollow gallery inside is a bit of a complex object to take a muogram of (it only shows light and dark). So often these pictures need to be compared to a computer simulation of the cosmic muons and the known pyramid, with warts and all. In this case, a careful analysis of the pictures of the three detectors and the computer simulation yielded the discovery of a 30 metre long void, up to now unknown, inside of the Great Pyramid of Giza. What a great success for a new toolkit.

The technique can now help us study the detailed shape of this void. While we don’t know anything about the role of the structure, research projects involving scientists from other backgrounds could build on this study to help us discover more about its function.

It’s great to see how cutting-edge particle physics can help us shed light on the most ancient human culture. Perhaps we are witnessing the beginning of a revolution in science – making it truly interdisciplinary.

This article was written by:
Image of Harald Fox Harald Fox – [Senior Lecturer of Particle Physics, Lancaster University]







Five steps Australia can take to build an effective space agency

 What will it take to give Australia’s space agency wings? Image from the opening ceremony at IAC2017.

Senator Simon Birmingham’s September declaration that Australia would establish a space agency created a buzz across the space sector.

The announcement was unexpected. Few anticipated any government commitment until after Dr Megan Clark’s expert panel reported on Australia’s space industry capability in March 2018.

Establishing an agency is a sensible decision and rightly has bipartisan support. But the hard work in determining the shape of the agency has only just begun.

In forming the new agency, much has already been said about what it might do. But how the agency is set up will be just as important to success.

My five steps to an effective agency are: grow the “new space” market, don’t neglect “old space”, give the agency actual power, bring home-grown talent back from the space “brain drain”, and work cooperatively with the Department of Defence.

Today I’m pleased to announce on behalf of the Turnbull Government that Australia will have a space agency  🛰🚀

The new pathway to space

The most startling recent evolution in space is that there is more money on the table. Venture capital funding for space projects in each of 2015 and 2016 exceeded the total of all venture capital investments in space since 2000.

Australia has more than 43 small businesses focused on the space sector. This growth has been driven by a rapidly falling cost to participate in space activities. The cost and weight of satellites has plummeted as the technologies that deliver small, affordable smartphones found space applications.

Innovation, competition and ride-sharing on launch vehicles – think Elon Musk’s SpaceX and Auckland-based startup Rocket Lab – have reduced per-kilo prices to space, and costs will likely fall further.

In this rapidly changing environment, here are my five recommendations for space agency success.

1. Grow the ‘new space’ market

The “new space” market is characterised by projects focused on commercial return, particularly small satellites. This is a fast growing sector with existing companies that can deliver Australian technology jobs and export revenue.

To make the most of this existing pool of potential, the agency should fund widely with small amounts, just enough to prove concepts or encourage commercial participation. It should draw on venture capital in assembling this portfolio, as the CSIRO and the UK Space Agency are doing.

2. Do not neglect ‘old space’

Despite the hype around small satellites and commercial space, Australia should not neglect altogether the “old space” of large, reliable and expensive satellites. These are still the mainstay of the industry, and the training ground from which many startups spring.

Precisely because the work proceeds more slowly, old space offers steady cash flow to complement the precarious financing arrangements of many of the new space businesses. New space companies that can also sell hardware or services to old space companies are particularly valuable.

The path here is clear: the agency should work closely with existing trade programs to help the Australian space industry break into global supply chains, in particular helping business navigate restrictive foreign export and labour laws.

Images such as this one collected by NASA’s Suomi NPP satellite can be used to detect bushfires in remote Australia. NASA

3. Give the space agency ‘teeth’

It is not enough for the agency to develop a paper vision for the Australian space sector; it needs the power to make it a reality.

Historically, Australia’s civilian space strategy has been fragmented by a bureaucratic turf war across agencies including CSIRO, the Bureau of Meteorology, Geoscience Australia and the Department of Industry.

Now state and territory governments are joining the fray. South Australia recently launched a Space Industry Centre, and in October Australian Capital Territory Chief Minister Barr visited SpaceX and other aerospace giants on the US West Coast “to discuss opportunities”. 


Australia’s agency needs the authority to impose national strategic discipline. The government could give the agency undisputed policy authority, for example, by making it a small group within Prime Minister and Cabinet. Or the agency could be given purse-string power by allocating the civilian federal space budget through it rather than the existing patchwork of agencies.

Anything less will make the agency a contested and ineffective leader for the Australian space sector.

4. Bring back home-grown talent

There is a wealth of Australians who have gone overseas to pursue space careers. Many were back home for September’s International Astronautical Congress in Adelaide, and were keen to contribute to the success of the agency.

The federal government should be flexible enough to include these dynamic individuals and accelerate the first years of the agency. For example, somebody like Christopher Boshuizen, the Australian co-founder of space startup Planet – on the path to “unicorn” US$1 billion valuation – would be a great asset working on behalf of Australian space startups.

Such talent would kick-start the late-blooming agency with world-class credibility and instant connections to global activity.

5. Work with Defence

A civilian space agency needs to establish a relationship of mutual respect with the Department of Defence space sector, while each maintains primacy in its own sphere.

Defence has substantial space experience, both directly and through Australia’s US alliance. And investments in national security space dwarf civilian spend. For example, Defence recently announced a decade-long program worth A$500 million to develop domestic satellite imagery capabilities.

With the right relationship, Defence would increase access to the agility and innovation of the commercial sector and the civilian agency would benefit from the experience of Defence personnel.

As Senator Birmingham announced Australia’s plans to the world’s largest civilian space conference (September 2017’s International Astronautical Congress), he was speaking to many who have lived through Australia’s big talk on space. We’ve experienced failed launch proposals on Christmas Island and Cape York, and the rise and fall of the Hawke government’s “Australian Space Office”.

Birmingham made an announcement on the biggest possible stage. The “how” will be as important as the “what” if we are to make good this time on high expectations.

This article was written by:
Image of Anthony Wicht Anthony Wicht – [Alliance 21 Fellow (Space) at the United States Studies Centre, University of Sydney]







Will technology take your job? New analysis says more of us are safer than we thought, but not all


 Are our jobs safe from robots?

We all want to know how many jobs will be threatened by the rise of robots and technology. You might feel vulnerable if your job is one that could be affected.

But thanks to a new report, 27% of the 160 million people in the United States labour force can breathe easier knowing their jobs are safer than they thought.

That’s 43 million living, breathing and working people in America. By extension, that’s three million Australians, nine million Brits and 27% of most advanced economy workforces.

Their prospects have been re-rated in new work by a group that includes one of the mathematicians who first raised the alarm on the risk to employment.

The Future of Skills: Employment in 2030, published in September, is their most detailed investigation to date on the impact of technology and it now puts 20% of workers in the vulnerable category.

That’s down from the 47% cited as at risk in a 2013 study, The Future of Employment, by professors Carl Benedikt Frey and Michael Osborne of the Oxford Martin School at the University of Oxford in the UK.

Other studies, other predictions

Many studies have since mirrored this finding. The original Frey/Osborne study focused on American labour force data. Their followup work reached similar conclusions for Britain and Europe.

The Committee for the Economic Development of Australia did similar work in a 2015 report Australia’s Future Workforce to reach a figure of 40%. This has been the basis for employment projections by both the CSIRO’s Data61 and the Foundation for Young Australians.

It’s also underpinned the rising cry for a basic income to compensate the millions of people who risk losing work while machines create greater productivity.

The likelihood that 20% of the workforce is in occupations vulnerable to technology by 2030 is scary but well short of the original estimate of 47%. So what’s happened here?

The new analysis

The latest work – which includes Osborne as one of four co-authors – digs much deeper than the original analysis of US data that looked at nine identifiable skills that can easily be replicated by machines. It ran that data through a machine learning algorithm to reach a conclusion based purely on the impact of technology.

This time around the researchers started by putting together human focus groups to identify big trends other than technology that may impact employment. They included:

  • mitigation of climate change
  • retooling cities to cope with urbanisation
  • the care needs of ageing western societies, and
  • rising consumer demand for crafted products.

Then instead of going to nine categories of the O*NET data (which describes skills that make up jobs) they went to 120 categories. They found technology could supplement some jobs but not fully replace as many as earlier analysis claimed.

Their final, precise view was that 18.7% of the US labour force and 21.2% of the British workforce are in occupations vulnerable to technology disruption. At the other end of the scale, 9.6% (8% in the UK) are in occupations where demand for humans will increase through technology.

The remaining 70% or so on either side of the Atlantic are in the unknown category.

Skills needed for the future

Interestingly, this report warns of the risk to innovation from concerns over the previous high estimates. It agrees with growing assertions that creativity and complex problem-solving ability to support technology skills are essential to future workforce success. So are personal interaction skills and the continuing ability to learn.

This was emphasised in last year’s work on innovative businesses by the Australian Council of Learned Academies. The ability of humans to supplement machines (or vice versa) is also central to recent work by Professor Thomas Davenport of Boston’s Babson College.

In his 2016 book Only Humans Need Apply, Davenport and Harvard Business Review editor Julia Kirby argue there will be plenty of human roles in technologically equipped workplaces – blue and white collar.

Speaking at a QUT Real World Futures conference last year, Davenport cited the law, where technology looked threatening but, on his estimate, eight lawyers might still do the work of ten.

The impact of technology on future workforces is now hotly contested. This research advance by one of the authors whose work has helped fuel a dystopian view of the future has potential to shift the boundaries towards technology acceptance.

While the headline numbers are appealing, the big question sits with the big number – the 70% in the unknown category.

The question across advanced economies is what do we think will happen to those workers in those industries and what does it mean to our future? We need more work on this in Australia.

In my book Wake Up – The nine h#shtags of digital disruption, I argue that public policy has been slowly reactive to technology disruption. The impact of Uber and AirBnB has been foreseeable but left to chance.

Forming a view on the future and then assembling the data are the minimum start we should demand from governments elected to lead. The alternative is that they, themselves, will be disrupted as the numbers go against them.

This article was written by:
David Fagan – [Adjunct Professor, QUT Business School, and Director of Corporate Transition, Queensland University of Technology]






Don’t use technology as a bargaining chip with your kids

How often do you take away your kid’s phone? 

Do you take away your teenager’s phone to manage their behaviour? Maybe when they arrive home late from a party or receive a bad report card?

Confiscating, time-limiting or permitting additional access to technology has become a popular parenting strategy. Surveys show that 65% of American parents with teenagers confiscate phones or remove internet privileges as a form of punishment.

It’s no longer simply a tool of distraction – technology access has become a means of behavioural control. But my recent research suggests that this approach might not be the best idea.

I’ve spoken with 50 Australian families with 118 children aged 1-18 about this issue. The data will be published in 2018. Among my sample, a family with two children owns on average six to eight devices. Some children also had devices from a very young age – the youngest was a one-year-old who received a tablet for her first birthday. The youngest mobile phone owner was six years old.

My qualitative investigation suggests that using technology as a bargaining chip can have adverse effects. It may impact the trust you build with your child and how they use technology.

The effect on younger children

For children 12 years and younger, I saw that parents often use technology as a reward for good behaviour. For example, allowing a two-year-old time on a tablet for using the potty “successfully”.

While it’s important to recognise a child’s achievements, kids can begin to associate technology with being “good” and making their parents proud.

As one eight-year-old explained while sitting on the couch with an iPad on either side of him,

I’m a really good boy, that’s why I have two iPads!

This strategy also places emphasis on “use” as opposed to “quality use”.

Quality technology use is commonly understood as use that emphasises creativity and problem-solving. It’s important not to encourage kids to think about screen time in terms of gratification alone. Instead, it should enhance learning, help develop one’s sense of self, or facilitate positive connections.

Picture of kids using an iPad
To give your kid an iPad or not to give your kid an iPad? Jim Bauer/Flickr, CC BY-NC-ND

The effect on teenagers

In my study, parents with teens often removed or limited technology use as a punishment. For example, taking a phone from a 13-year-old because he was rude.

In separate discussions, parents and teens talked about the backlash to such actions. While parents often interpreted their teenagers’ protests as a sign the punishment was “working”, teenagers in my study explained it differently.

When their phones were taken away, they often withdrew from their parents. Instead of focusing on what they’d done wrong, they fixated on not having a phone and on finding someone else’s to use in the meantime.

On top of this, teenagers characterised it as a privacy issue. One girl explained,

I don’t know what my mum does with my phone when she has it. She probably searches through it!

Worryingly, some teens interpreted their punishment in ways that could compromise the important messages that parents give children about safety on the internet.

Research shows that healthy family communication is crucial in reducing risky online behaviours such as cyberbullying, contact with a potential predator, or exposure to sexually explicit material.

In response to her phone being confiscated, for example, one 15-year-old girl expressed what many teenagers told me:

I don’t tell my parents much now about what happens to me because I don’t want my phone taken off me.

Three key points for parents

Our relationship with technology is complicated, so how should it be treated by parents?

Technology shouldn’t be used to fix all problems

Children told me that “the punishment needs to fit the crime!”

Using technology to encourage appropriate behaviour is not the answer unless it is in response to a technology-related incident. Say, a teenager bullying someone online.

If the incident has nothing to do with internet use, use a strategy that will help them understand and improve on the actual behaviour of concern.

Be a positive technology role model

Being a positive technology role model for children means encouraging quality technology use.

For example, setting aside some phone-free time each day so you can be “in the moment” with your child. If you watch online videos with them, make the clips useful, like learning how to design a new garden. Positive interactions can also be demonstrated, such as playing online chess with a friend.

When the punishment doesn’t work

My research suggests that there’s a point when using technology to manage behaviour simply doesn’t work anymore.

It can get too difficult to remove the smartphone each time your child needs to do their homework, for example. It could even cause animosity or unnecessary aggravation.

It’s important to develop a range of strategies that guide child behaviour. These do not always have to be in response to bad behaviour and they do not always need to be extreme. Instead, they could be used to nudge and guide your child towards comprehending their own actions.

We need to shift the focus away from parenting that relies on threats and rewards, to one that nurtures meaningful parent-child and child-technology relationships.

This article was written by:
Image of Joanne Orlando Joanne Orlando – [Researcher: Technology and Learning, Western Sydney University]







5G will be a convenient but expensive alternative to the NBN

 How will 5G and the NBN compare? 

Will Australia’s National Broadband Network (NBN) face damaging competition from the upcoming 5G network? NBN Co CEO Bill Morrow thinks so.

This week, he even floated the idea of a levy on mobile broadband services, although Prime Minister Malcolm Turnbull quickly rejected the idea.

NBN Co is clearly going to have to compete with mobile broadband on an equal footing.

Read More: Like it or not, you’re getting the NBN, so what are your rights when buying internet services?

This latest episode in the NBN saga raises the question of exactly what 5G will offer broadband customers, and how it will sit alongside the fixed NBN network.

To understand how 5G could compare with the NBN, let’s examine the key differences and similarities between mobile networks and fixed-line broadband.

What is 5G?

5G stands for “5th generation mobile”. It builds upon today’s 4G mobile network technology, but promises to offer higher peak connection speeds and lower latency, or time delays.

5G’s higher connection speeds will be possible thanks to improved radio technologies, increased allocations of radio spectrum, and by using many more antenna sites or base stations than today’s networks. Each antenna will serve a smaller area, or cell.

The technical details of 5G are currently under negotiation in international standards bodies. 5G networks should be available in Australia by 2020, although regulatory changes are still needed.

Connections on 5G

In a mobile network, the user’s device (typically a smartphone) communicates with a nearby wireless base station via a radio link. All users connected to that base station share its available data capacity.

Australia’s mobile network typically provides download speeds of around 20 Mb/s. But the actual speed of connection for an individual decreases as the number of users increases. This effect is known as contention.

Anyone who has tried to upload a photo to Facebook from the Melbourne Cricket Ground will have experienced this.
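The contention effect described above can be sketched with a toy calculation. The figures below are illustrative assumptions, not real cell capacities; real schedulers are far more sophisticated, but the inverse scaling with user count is the key point.

```python
# Toy model of contention: every user attached to one base station
# shares its capacity, so individual speed falls as users increase.
def per_user_speed(cell_capacity_mbps: float, n_users: int) -> float:
    return cell_capacity_mbps / n_users

# A hypothetical cell with 100 Mb/s of shared capacity:
print(per_user_speed(100, 5))    # 20.0 Mb/s each on a quiet cell
print(per_user_speed(100, 500))  # 0.2 Mb/s each in a packed stadium
```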

Picture of a telephone tower base station
Mobile base stations. kongsky/Shutterstock

The maximum download speed of 5G networks could be more than 1 Gb/s. But in practice, it will likely provide download speeds around 100 Mb/s or higher.

Because of contention and the high cost of the infrastructure, mobile network operators also impose significant data download limits for 4G. It is not yet clear what level of data caps will apply in 5G networks.

Connections on the NBN

In a fixed-line network like the NBN, the user typically connects to the local telephone exchange via optical fibre: directly in the case of fibre-to-the-premises (FTTP), or by copper wiring and then fibre in the case of fibre-to-the-node (FTTN).

An important difference between the NBN and a mobile network is that on the NBN, there is virtually no contention on the data path between the user and the telephone exchange. In other words, the user’s experience is almost independent of how many other users are online.

But, as highlighted in the recent public debate around the NBN, some users have complained that NBN speeds decrease at peak usage times.

Importantly, this is not a fundamental issue of the NBN technology. Rather, it is caused by artificial throttling resulting from NBN Co’s Connectivity Virtual Circuit (CVC) charges, and/or by contention in the retail service provider’s network.

Retail service providers like TPG pay CVC charges to NBN Co to gain bandwidth into the NBN. These charges are currently quite high, and this has allegedly encouraged some service providers to skimp on bandwidth, leading to contention.

A restructuring of the wholesale model as well as providing adequate bandwidth in NBN Co’s transit network could easily eliminate artificial throttling.

The amount of data allowed by retailers per month is also generally much higher on the NBN than in mobile networks. It is often unlimited.

This will always be a key difference between the NBN and 5G.

Picture of NBN Co chief executive Bill Morrow
NBN Co chief executive Bill Morrow announces the company’s full year results in Sydney on Tuesday, Aug. 16, 2016. AAP Image/Paul Miller

Don’t forget, 5G needs backhaul

In wireless networks, the connection between the base stations and internet is known as backhaul.

Today’s 4G networks often use microwave links for backhaul, but in 5G networks where the quantity of data to be transferred will be higher, the backhaul will necessarily be optical fibre.

In the US and elsewhere, a number of broadband service providers are planning to build 5G backhaul networks using passive optical network (PON) technology. This is the type used in the NBN’s FTTP sections.

In fact, this could be a new revenue opportunity for NBN Co. It could encourage the company to move back to FTTP in certain high-population density areas where large numbers of small-cell 5G base stations are required.

So, will 5G compete with the NBN?

There is a great deal of excitement about the opportunities 5G will provide. But its full capacity will only be achieved through very large investments in infrastructure.

As with today’s 4G network, large data downloads for video streaming and other bandwidth-hungry applications will likely be more expensive using 5G than using the NBN.

In addition, future upgrades to the FTTP sections of the NBN will accommodate download speeds as high as 10 Gb/s, which will not be achievable with 5G.

Unfortunately, those customers served by FTTN will not enjoy these higher speeds because of the limitations of the copper connections between the node and the premises.

5G will provide convenient broadband access for some internet users. But as the demand for ultra-high-definition video streaming and new applications such as virtual reality grow, the NBN will remain the network of choice for most customers, especially those with FTTP services.

This article was written by:

Rod Tucker – [Laureate Emeritus Professor, University of Melbourne]







Like it or not, you’re getting the NBN, so what are your rights when buying internet services?

 When buying internet services it pays to read the fine print. Dave Hunt/AAP

Complaints about the National Broadband Network (NBN), involving connection delays, unusable internet or landlines, and slow internet speeds, are on the rise.

Most Australians will be forced to move onto the NBN within 18 months of it being switched on in their area, and that means navigating what can be confusing new contracts.

So, what are your rights regarding landline and internet connections?


Many consumers can and do manage without a landline. But particularly for those without a reliable mobile service, a landline can be essential. It is included in many phone and internet “bundles” offered by internet service providers.

Standard telephone services (primarily landline services) are subject to a Customer Service Guarantee enshrined in law under the Telecommunications Act 1997.

This means that standards apply to common services such as connection of a phone line, repairs of that line and attending appointments on time. The provider will have to pay compensation to the customer if the Customer Service Guarantee standards are not met.

Despite this, some providers suggest a customer waive his or her Customer Service Guarantee rights. There are safeguards for this waiver to be effective: primarily, the provider must explain the nature of the rights to the customer before asking for the waiver.

The idea behind allowing providers to request a waiver of the Customer Service Guarantee is that it will allow customers to obtain cheaper services than would otherwise be the case. However, we might question the integrity of the consent typically given to such waivers, given consumers generally don’t read contracts and may have little understanding of the value of the Customer Service Guarantee or the likelihood of having to claim under it.

In any event, providers cannot ask for a waiver for Universal Service Obligations, which ensure accessible services for all customers, including those with a disability and those who live in remote areas.


The Customer Service Guarantee does not apply to internet connections – although the Australian Communications Consumer Action Network has argued that it should.

So there are no statutory obligations for internet providers, or NBN Co, to connect customers within a particular time frame or respond promptly to complaints.

The main safeguard for customers for internet services is in the Australian Consumer Law (ACL).

If an internet service provider promises a particular broadband speed and does not provide that speed, the provider may have engaged in misleading conduct contrary to the ACL. Damages and even penalty payments could be awarded against it. And fine-print qualifications to the headline statement about internet speeds will not necessarily protect the provider.

In addition, the Consumer Guarantees under the ACL (not to be confused with the Customer Service Guarantee under the Telecommunications Act) ensure that any equipment provided with an internet service must be of acceptable quality, and services be provided with due care and skill.

If these standards are not met, the consumer has a right to certain remedies under the ACL and damages for losses that result from the failure. These rights should go some way to protecting telecommunications consumers, although of course they do not directly guarantee that the provider will arrive on time for a scheduled appointment.

So while you may wish to charge your internet service provider for not turning up to an installation appointment, you wouldn’t get far under current Australian law.

This article was written by:
Jeannie Marie Paterson – [Associate Professor, University of Melbourne]







Wi-Fi can be KRACK-ed. Here’s what to do next

WPA2 has been cracked, so it’s time to update your router. Ksander/Shutterstock

A security researcher has revealed serious flaws in the way that most contemporary Wi-Fi networks are secured.

Discovered by Mathy Vanhoef from the University of Leuven, the vulnerability affects the protocol “Wi-Fi Protected Access 2”. Otherwise known as WPA2, this encrypts the connection between a computer or mobile phone and a Wi-Fi access point to keep your browsing safe.

Because this security can be cracked, it’s possible for someone to read what is transmitted on the network, allowing them to intercept passwords or credit card details, or to inject malicious code when users visit websites.

Dubbed the “key reinstallation attack” (KRACK), Vanhoef’s discovery has the most serious implications for devices running the Android operating system, especially version 6.0 and above, and devices that use Linux.

But don’t freak out just yet: although almost every device that uses Wi-Fi is vulnerable, KRACK can only be deployed in certain circumstances. And there are some simple steps you can take to help keep your internet traffic safe.

What is WPA2, anyway?

Most secured wireless networks use the WPA2 security protocol. It allows users to log in to a network and keep their communications secured.

The encryption process uses a set of secret keys that are agreed to between the connecting device and the wireless access point. These keys are used to scramble messages on the network and provide protection against someone sitting in an internet cafe, for example, and listening in on messages between laptops and the wireless router.

WPA2 was created to address weaknesses in previous protocols used to secure wireless networks, such as Wired Equivalent Privacy (WEP) and the first version of WPA. Until now, it was arguably more secure.

How does KRACK work?

The KRACK attack requires the attacker to be physically close enough to a Wi-Fi network to perform a “man-in-the-middle” attack.

Man-in-the-middle channel attack. David Glance

Most Wi-Fi networks use a “4-way handshake”. This is a series of messages between the client and the access point used to ensure both parties have the right credentials.

In this scenario, the attacker can prompt the third message to be resent, which causes an existing key to be reused. These keys are used to scramble the contents of messages to prevent them from being read, but also to check if messages have been altered in any way. By forcing the reuse of old keys, these protections are effectively removed.

Because the key that is reused is set to zeros in Android 6.0 devices, messages can be more easily decrypted. On other platforms, and depending on the circumstances, only some messages can be exposed.
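Why key reuse is so damaging can be illustrated with a toy stream cipher. This is a simplified sketch, not WPA2’s actual encryption: the point is that when the same key and nonce produce the same keystream twice, XOR-ing the two ciphertexts cancels the keystream, so any known plaintext reveals the other message.

```python
import hashlib

# Toy stream cipher: the keystream is derived from (key, nonce).
# Reusing the same pair reuses the keystream -- the core flaw KRACK exploits.
def keystream(key: bytes, nonce: int, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            key + nonce.to_bytes(8, "big") + counter.to_bytes(8, "big")
        ).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: int, plaintext: bytes) -> bytes:
    ks = keystream(key, nonce, len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, ks))

key = b"session-key"
m1 = b"attack at dawn!!"
m2 = b"secret password "
c1 = encrypt(key, 1, m1)
c2 = encrypt(key, 1, m2)  # nonce reused after a forced "reinstallation"

# XOR of the ciphertexts equals XOR of the plaintexts, so knowing m1
# is enough to recover m2 without ever learning the key.
x = bytes(a ^ b for a, b in zip(c1, c2))
recovered_m2 = bytes(a ^ b for a, b in zip(x, m1))
assert recovered_m2 == m2
```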


What does this mean for you?

Vendors of affected devices have apparently known about the vulnerability since July or August. Since the attack is against Wi-Fi clients, devices like mobile phones and laptops are most at risk.

Make sure you update your devices

Apple and Google have told media outlets that they will have fixes for the flaw ready in a few weeks. Microsoft has already released a fix, and other companies will have either already fixed the vulnerability or have fixes shortly.

The key message is that you should immediately apply all updates that come out for phones, laptops or other devices. This is especially true for Android phones.

Apple has confirmed to me (and others) that the Wi-Fi exploit KRACK is patched in current betas of its four OSes. No imminent danger in any case.

Use sites with HTTPS

Until patches are available, it is worth remembering that Wi-Fi networks, even when secure, only protect communications up until the wireless access point. For end-to-end protection with websites, we rely on HTTPS to keep communication secure. Make sure you look for it in the URL of sites you visit.

Normally this would protect users even on a compromised network, although it is possible to bypass HTTPS if the website is not securely configured.

Use encrypted services

Other communications, such as those used in sending and receiving email, should also be encrypted, although this is not always the case.

Services like Gmail are encrypted by default. Other applications that use their own end-to-end encryption like Facebook Messenger, WhatsApp and FaceTime would also be secure.

Get a VPN

One way of ensuring secure communication while using any form of Wi-Fi network is to use a Virtual Private Network (VPN) connection.

VPNs provide their own encryption, which protects all communication sent over the Wi-Fi network, and would still provide that safeguard even against someone exploiting the WPA2 KRACK vulnerability.

We’re not all doomed

Although this is a serious breach, it is not a simple one technically and requires the attacker to have proximity to the Wi-Fi network. The attacker also has to rely on the attacked device going to unprotected, non-HTTPS sites and to not be using a VPN.

As industry commentators have pointed out, this is not quite as serious as media headlines might suggest. But consider it a timely reminder to install software updates on all your devices.

This article was written by:
David Glance – [Director of UWA Centre for Software Practice, University of Western Australia]







At last, we’ve found gravitational waves from a collapsing pair of neutron stars

 Artist’s impression of the collision  
of two neutron stars, the source of the latest gravitational waves detected. 
National Science Foundation/LIGO/Sonoma State University/A. Simonnet

After weeks of rumour and speculation, scientists yesterday finally announced the death spiral of two neutron stars as a source of gravitational waves.

It’s among the biggest news for science in decades, because the findings help shed light on many aspects of astrophysics, including the origins of cosmic explosions known as gamma-ray bursts and of some heavy elements in the universe, such as gold.

The latest detection has scientists excited because most predictions had favoured the detection of gravitational waves from coalescing pairs of neutron stars. Yet the first and all subsequent detections prior to today’s announcement had only come from collisions of black holes.

The first detection

It was back in 2015 when the Advanced LIGO (Laser Interferometer Gravitational-Wave Observatory) detectors heard the whoop of the first gravitational wave signal ever detected.


That came from the collision of a pair of black holes in the distant universe about 1.3 billion light years away. Suddenly we knew that our detectors worked; suddenly we knew that the black holes of Einstein’s theory are really out there. Suddenly the dream of gravitational wave astronomy became reality.

The first strong signal was so surprising that the international teams at the LIGO observatories spent weeks trying to work out if someone could have secretly put signals into the data!

Since then there have been more black hole signals, but there was no sign of the predicted neutron stars.

An artist’s conception of two merging black holes
An artist’s conception of two merging black holes similar to those detected by LIGO. LIGO/Caltech/MIT/Sonoma State (Aurore Simonnet)

The neutron star connection

Physicists have long considered neutron stars to be perfect sources of gravitational waves.

Neutron stars are balls of neutrons, about the size of a city but weighing in at about 1.4 times the mass of our Sun.
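Those two figures imply an astonishing density. A back-of-envelope check, assuming a radius of about 10km (a typical textbook value, not stated in the article):

```python
import math

# Average density of a neutron star: ~1.4 solar masses packed into
# a city-sized ball (radius assumed ~10 km).
SOLAR_MASS_KG = 1.989e30
mass_kg = 1.4 * SOLAR_MASS_KG
radius_m = 10e3                           # assumed radius
volume_m3 = (4 / 3) * math.pi * radius_m ** 3
density = mass_kg / volume_m3
print(f"{density:.1e} kg/m^3")            # roughly 7e17 kg/m^3
```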

The first neutron star was discovered by Jocelyn Bell Burnell in 1967, and in 1974 Russell Hulse and Joseph Taylor found a pair of neutron stars spiralling slowly together in the Milky Way, a discovery that led to their Nobel Prize in Physics in 1993.

Caltech physicist Kip Thorne – one of three people awarded this year’s Nobel Prize for Physics – led a campaign to build huge laser interferometers, optimised for detecting the final death spiral of a pair of neutron stars.

Barry Barish (another of this year’s Nobel Prize winners) internationalised the LIGO observatories, bringing Britain, Germany and Australia into the collaboration.

More than just a wave

During the decades of development of gravitational wave detectors, astronomers had become fascinated by vast bursts of gamma rays coming in from the distant universe at the rate of about one every day.

Israeli physicist Tsvi Piran proposed in 1989 that some of these bursts could be created by coalescing neutron stars. If this was the case, then bursts of gravitational waves would be accompanied by bursts of gamma rays.

Many astrophysicists modelled the violent coalescence of merging neutron stars. Some of the superdense neutron rich matter would be flung into space, where it would be relieved of the massive pressure inside the neutron stars.

Uncompressed, it would go off like a vast nuclear fission bomb, creating a slew of heavy elements such as gold and platinum. Within minutes a hot fireball would shine brightly, powered by the decaying radioactivity of the newly formed elements.

A new signal detected

Advanced LIGO‘s two 4km detectors in the United States have been operating since 2015. The 3km Advanced Virgo detector in Europe came online on August 1 this year.

Photograph of Europe’s Virgo
Europe’s Virgo becomes the third detector in the hunt for gravitational waves. The Virgo collaboration

Many optical telescopes had signed up to receive any alerts from LIGO and Virgo.

Meanwhile, NASA’s orbiting gamma-ray telescopes Fermi and Swift kept up their continuous monitoring of the skies. Billions of dollars worth of astronomical hardware was poised and ready in August 2017.

Thursday August 17, 2017, was the day our detectors registered a slowly rising siren call that lasted for a minute and finished with a sharp crescendo.

It wasn’t the brief whoop of a pair of large black holes but the much slower death song of a pair of neutron stars with total mass about three times the mass of the Sun. Two seconds later the Fermi satellite detected a short gamma ray burst. Within minutes the source direction had been roughly localised.

The alert goes out

Within 30 minutes alerts went out to telescopes across the planet. Telescope schedules were interrupted, and before long a bright new object was found in galaxy NGC 4993, seen in the Hydra constellation, and visible in the southern hemisphere in August.

The new object decayed away exponentially over a few days as might be expected for a radioactively powered nebula.

NGC 4993 is 130 million light years away. The arrival of gravitational waves and gamma rays within two seconds of each other tells us that, to a precision of a part in a million billion, both types of wave travel at the same speed.

The fact that two completely different types of radiation, one that is a ripple of space itself, and the other that travels through space, should travel at exactly the same speed could seem astonishing, yet it is exactly what Einstein predicted.
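The quoted precision follows from simple arithmetic, using only the approximate figures given above:

```python
# If two signals travel ~130 million light years and arrive within
# ~2 seconds of each other, their speeds agree to (delay / travel time).
SECONDS_PER_YEAR = 3.156e7
travel_time_s = 130e6 * SECONDS_PER_YEAR       # ~4.1e15 seconds in transit
fractional_difference = 2.0 / travel_time_s
print(f"{fractional_difference:.1e}")          # ~5e-16: a part in a million billion
```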

The event is a treasure trove of astrophysics. From one faint gravitational sound, a momentary burst of gamma rays and the faint fading glow of exploding nuclear matter, we have the first direct measurement of the distance of galaxies.

This is because gravitational wave signals directly encode distance. And suddenly we know how gamma ray bursts are created. And suddenly we know that all our gold, our rings and treasures, was probably created in neutron star collisions.

It will take many years to fully explore the data, and meanwhile more and more data will flood in as we continue to open the gravitational wave spectrum with more observatories on Earth and in space. The new era of multi-messenger astronomy has begun!

This article was written by:
David Blair – [Director, WA Node of the ARC Centre of Excellence for Gravitational Wave Discovery, and the Australian International Gravitational Research Centre, University of Western Australia]







The Great Barrier Reef can repair itself, with a little help from science

 How the Great Barrier Reef can be helped to repair itself. AIMS/Neal Cantin

The Great Barrier Reef is suffering from recent unprecedented coral bleaching events. But the answer to part of its recovery could lie in the reef itself, with a little help.

In our recent article published in Nature Ecology & Evolution, we argue that at least two potential interventions show promise as means to boost climate resilience and tolerance in the reef’s corals: assisted gene flow and assisted evolution.

Both techniques use existing genetic material on the reef to breed hardier corals, and do not involve genetic engineering.

But why are such interventions needed? Can’t the reef simply repair itself?

Damage to the reef, so far

Coral bleaching in 2016 and 2017 took its biggest toll on the reef to date, with two-thirds of the world’s largest coral reef ecosystem impacted in these back-to-back events. The consequence was widespread damage.

Picture of coral bleaching
Bleached corals on the central Great Barrier Reef at the peak of the heat wave in March 2017. Most branching corals in the photo were dead six months later. Neal Cantin/AIMS, CC BY-ND

Reducing greenhouse gas emissions will dampen coral bleaching risk in the long term, but will not prevent it. Even with strong action to tackle climate change, more warming is locked in.

So while emissions reductions are essential for the future of the reef, other actions are now also needed.

Even in the most optimistic future, reef-building corals need to become more resilient. Continued improvement of water quality, controlling Crown-of-Thorns Starfish, and managing no-take areas will all help.

But continued stress from climate change, increasing in both frequency and intensity, progressively overwhelms the reef’s natural resilience despite the best conventional management efforts. Although natural processes of adaptation and acclimation are in play, they are unlikely to be fast enough to keep up with the rate of global warming.

So to boost the reef’s resilience in the face of climate change we need to consider new interventions – and urgently.

That’s why we believe assisted gene flow and assisted evolution could help the reef.

Delaying their development could mean that climate change degrades the reef beyond repair, and before we can save key species.

What is assisted gene flow?

The idea here is to move warm-adapted corals to cooler parts of the reef. Corals in the far north are naturally adapted to summer temperatures 1°C to 2°C higher than those experienced by corals further south.

This means there is an opportunity to build resistance to future warming in corals in the south under strong climate change mitigation, or to decades of warming under weaker mitigation.

There is already natural genetic connectivity of coral populations across most of the reef. But the rate of larval flow from the warm north to the south is limited, partly because of the South Equatorial Current that flows west across the Pacific.

The South Equatorial Current splits into the north-flowing Gulf of Papua Current and south-flowing East Australian Current off the coast of north Queensland. This means coral larvae spawned in the warm north are often more likely to stay in the north.

So manually moving some of the northern corals south could help overcome that physical limitation of natural north-to-south larval flow. If enough corals could be moved it could help heat-damaged reefs recover faster with more heat-resistant coral stock.

We could start safe tests at a subset of well-chosen reefs to understand how warm-adapted populations can be spread to reefs further south.

Picture showing artificial coral growing
These two-year old corals reared in AIMS’s National Sea Simulator are hybrids between different species of the genus Acropora. They are the results of artificial selection under experimental climate change and show tolerance to prolonged heat stress expected in the future. Neal Cantin/AIMS, CC BY-ND

What is assisted evolution?

While assisted gene flow may be effective for southern or recently degraded reefs, it will not be enough or feasible for all reefs or species. Here, we argue that assisted evolution could help.

Assisted evolution is artificial selection on steroids. It combines multiple approaches that target the coral host and its essential microbial symbionts.

These are aimed at producing a hardier coral without the use of genetic engineering. Experiments at the Australian Institute of Marine Science are already making progress, with results yet to be published.

First, evolution of algal symbionts in isolation from the coral host has been fast-tracked to resist higher levels of heat stress. When symbionts are made to reengage with the coral host, benefits to bleaching resistance are still small, but with more work we expect to see a hardier symbiosis.

Secondly, experiments have created new genetic diversity of corals through hybridisation and researchers have selected these artificially for increased climate resilience.

Natural hybridisation happens only occasionally on the reef, so this result gives us new options for climate hardening corals using existing genetic stocks.

The danger of doing nothing?

The right time to start any new intervention is when the risk of inaction is greater than the risk of action.

Assisted gene flow and assisted evolution represent manageable risk because they use genetic material already present on the reef. The interventions speed up naturally occurring processes and do not involve genetic engineering.

These interventions would not introduce or produce new species. Assisted gene flow would simply enhance the natural flow of warm-adapted corals into areas on the reef that desperately need more heat tolerance.

Risk of increasing the spread of diseases may also be low because most parts of the Reef are already interconnected. A full understanding of risks is an area of continued research.

These are just two examples of new tools that could help build climate resilience on the reef. Other interventions are developing and should be put on the table for open discussion.

This article was co-authored by:




