Artificial intelligence needs to be developed with an ethical framework. Shutterstock/Alexander Supertramp

The question of whether technology is good or bad depends on how it's developed and used. Nowhere is that more topical than in technologies using artificial intelligence.

When developed and used appropriately, artificial intelligence (AI) has the potential to transform the way we live, work, communicate and travel. New AI-enabled medical technologies are being developed to improve patient care. There are persuasive indications that autonomous vehicles will improve safety and reduce the road toll. Machine learning and automation are streamlining workflows and allowing us to work smarter.
Around the world, AI-enabled technology is increasingly being adopted by individuals, governments, organisations and institutions. But along with the vast potential to improve our quality of life comes a risk to our basic human rights and freedoms. Appropriate oversight, guidance and understanding of the way AI is used and developed in Australia must be prioritised.

AI gone wild may conjure images of The Terminator and Ex Machina, but it is much simpler, more fundamental issues that need to be addressed at present, such as:
- how data is used to develop AI
- whether an AI system is being used fairly
- the situations in which we should continue to rely on human decision-making.
We have an AI ethics plan

That's why, in partnership with government and industry, we've developed an ethics framework for AI in Australia. The aim is to catalyse the discussion about how AI should be used and developed in Australia.

The framework examines case studies from around the world to show how AI has been used in the past and the impact it has had. These case studies help us understand where things went wrong and how to avoid repeating past mistakes. We also looked at what is being done around the world to address ethical concerns about AI development and use.

Based on the core issues and impacts of AI, we identified eight principles to support the ethical use and development of AI in Australia.
- Generates net benefits: The AI system must generate benefits for people that are greater than the costs.
- Do no harm: Civilian AI systems must not be designed to harm or deceive people and should be implemented in ways that minimise any negative outcomes.
- Regulatory and legal compliance: The AI system must comply with all relevant international obligations and with Australian federal, state/territory and local government regulations and laws.
- Privacy protection: Any system, including AI systems, must protect people's private data, keep it confidential, and prevent data breaches that could cause reputational, psychological, financial, professional or other types of harm.
- Fairness: The development or use of the AI system must not result in unfair discrimination against individuals, communities or groups. This requires particular attention to ensure the “training data” is free from bias or characteristics which may cause the algorithm to behave unfairly.
- Transparency and explainability: People must be informed when an algorithm that impacts them is being used, and they should be told what information the algorithm uses to make its decisions.
- Contestability: When an algorithm impacts a person, there must be an efficient process allowing that person to challenge the use or output of the algorithm.
- Accountability: People and organisations responsible for the creation and implementation of AI algorithms should be identifiable and accountable for the impacts of that algorithm, even if the impacts are unintended.