Sarah Chen's mortgage application was perfect. Stable job at a Sydney tech company, excellent credit score, 20% deposit saved. Yet the bank's AI system flagged her as "medium risk" while her colleague James, with identical financials, sailed through as "low risk." The only difference? Her surname, her postcode, and perhaps the training data the AI had learned from.
That kind of outcome is algorithmic discrimination. In Australia, it's potentially illegal. And it's happening more often than you'd think.
Australian businesses are racing to adopt AI systems for everything from hiring decisions to healthcare diagnostics, from loan approvals to welfare assessments. The complication? AI systems learn from historical data, and that data often reflects decades of human bias. When we feed biased data into algorithms, we don't just automate decisions. We automate discrimination at scale.
The 2023 Robodebt Royal Commission showed us the devastating consequences. Between 2015 and 2019, an automated debt recovery system raised approximately 794,000 debts against approximately 526,000 Australians. Of the 567,000 debts raised through income averaging, approximately 470,000 (around 80%) were false. The government wrongly recovered $751 million from 381,000 people who paid debts they didn't owe. The algorithm shifted the burden of proof onto vulnerable people, caused immeasurable psychological harm, and contributed to suicides. Commissioner Catherine Holmes called it a "costly failure of public administration, in both human and economic terms."
For Australian businesses, the stakes couldn't be higher. We've got unique demographics: Aboriginal and Torres Strait Islander peoples make up 3.2% of the population and have distinct data sovereignty requirements, 29.3% of Australians were born overseas, more than 300 languages are spoken at home, and our cities rank among the world's most multicultural. AI systems trained predominantly on US or European data simply don't work fairly here.
This article will show you how to build AI systems that work for all Australians. We'll explore what bias really means, how it manifests in algorithms, the tools available to detect it, and the governance frameworks that can prevent it. Most importantly, you'll learn practical steps to ensure your AI systems don't discriminate against the diverse communities your business serves.
Understanding Bias: It's More Complex Than You Think
When most people hear "AI bias," they picture a racist algorithm deliberately discriminating. But bias in AI is far more subtle, systemic, and varied. Understanding the different types is your first step toward building fair systems.
Training Data Bias: Learning from an Unfair Past
AI systems learn patterns from historical data. If that data reflects past discrimination, the AI will learn to discriminate too. Amazon discovered this the hard way between 2014 and 2017 when they built an AI recruiting tool to review resumes. The system taught itself that male candidates were preferable because Amazon's existing engineering workforce was overwhelmingly male.
The bias was blatant: it penalised resumes containing the word "women's" (as in "women's chess club captain") and downgraded graduates from all-women's colleges. It favoured verbs like "executed" and "captured" that appeared more commonly on male engineers' resumes. Amazon scrapped the project in 2017, but not before demonstrating how easily historical inequality gets baked into AI.
For Australian businesses, training data bias is particularly problematic. Most large datasets come from the US or Europe. Facial recognition systems trained predominantly on white faces perform dramatically worse on Asian, African, and Middle Eastern faces. A landmark 2019 study by the US National Institute of Standards and Technology tested nearly 200 facial recognition algorithms and found they were 10 to 100 times more likely to misidentify Black or East Asian faces compared to white ones. Native Americans suffered the highest false positive rates.
Unexpectedly, algorithms developed in China performed better on East Asian faces, sometimes better than on Caucasian faces. This demonstrates that the developer's location (as a proxy for training data demographics) fundamentally shapes algorithm performance. For Australian businesses using off-the-shelf AI, this means your system might work brilliantly in Boston but fail dramatically in Bankstown.
Algorithmic Bias: When Models Amplify Inequality
Even with balanced training data, algorithms can amplify biases through their design choices. Consider what happened with a healthcare algorithm analysed in a 2019 study published in Science by researchers including Ziad Obermeyer. The algorithm, used by hospitals to manage care for millions of patients, predicted healthcare costs rather than health needs.
Sounds reasonable, right? Except Black patients with the same chronic conditions as white patients spent $1,800 less annually on medical costs due to unequal access to care. The algorithm interpreted lower spending as meaning Black patients were healthier. The result? At any given risk score, Black patients were considerably sicker than white patients. Fixing this bias would have increased the percentage of Black patients receiving additional help from 17.7% to 46.5%.
This is algorithmic bias: when the model's objective function, architecture, or optimisation choices create or amplify unfair outcomes. The data wasn't necessarily biased, but using "cost" as a proxy for "health need" in a society with healthcare disparities created systematic discrimination.
Measurement Bias: The Proxy Problem
Measurement bias occurs when AI systems use easily measured attributes as proxies for harder-to-measure qualities. This can introduce discrimination even when protected attributes like race or gender aren't explicitly used.
For example, postcode can serve as a proxy for socioeconomic status, which correlates with race and ethnicity in Australia. An algorithm that penalises certain postcodes might technically be postcode-blind to race while effectively discriminating along racial lines. Similarly, using "years of experience" as a hiring criterion can proxy for age, or requiring specific educational credentials can proxy for socioeconomic background.
The technical term is "disparate impact": when a policy or practice that appears neutral actually has a disproportionate negative effect on a protected group. Under Australian discrimination law, this can still be illegal even if there's no intent to discriminate.
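A quick way to test a nominally neutral rule for disparate impact is to compute outcome rates by the protected attribute the rule never mentions. Here's a minimal sketch in Python; the rows, column names, and postcode rule are all invented for illustration.

```python
# A minimal sketch of a disparate impact check. The rule below never looks at
# ethnicity, only postcode band, yet we measure its effect by ethnicity anyway.
# All rows and column names are invented for illustration.
import pandas as pd

applications = pd.DataFrame({
    "ethnicity":     ["A", "A", "A", "B", "B", "B", "B", "B"],
    "postcode_band": ["inner", "inner", "outer", "inner", "outer", "outer", "outer", "inner"],
})

# A nominally neutral rule: decline applicants from "outer" postcode bands.
applications["approved"] = applications["postcode_band"] != "outer"

approval_rates = applications.groupby("ethnicity")["approved"].mean()
print(approval_rates)                         # e.g. A: 0.67, B: 0.40
print(approval_rates / approval_rates.max())  # ratios well below 1.0 warrant investigation
```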
Deployment Bias: Same System, Different Outcomes
Deployment bias happens when AI systems produce different outcomes for different groups in real-world use, even if they perform equally well during testing. This often occurs because the deployment context differs from the testing environment.
Consider an AI chatbot trained on standard Australian English. It might work perfectly for English speakers but fail for the 22% of Australians who speak a language other than English at home. The system isn't technically "wrong," but its deployment creates unequal outcomes.
Or imagine a voice recognition system in a government service. If it struggles with accents common in multicultural Sydney or Melbourne, it creates barriers for exactly the communities that might most need those services. The algorithm itself might be well-designed, but its deployment creates discrimination.
Defining Fairness: There's No Single Answer
If bias is the problem, fairness is the solution, right? Not quite. Here's where things get philosophically and mathematically interesting: there are multiple competing definitions of fairness, and you often can't satisfy them all simultaneously.
Demographic Parity: Equal Positive Outcome Rates
Demographic parity (also called statistical parity) means that positive outcomes should occur at equal rates across different groups. If 30% of white loan applicants get approved, then 30% of Asian applicants, 30% of Indigenous applicants, and 30% of applicants from any other group should also get approved.
This definition appeals to our sense of equality. If different groups face systematically different approval rates, something seems wrong. We'll see how competing definitions collided in ProPublica's influential 2016 investigation of the COMPAS recidivism prediction system used in US criminal justice.
The catch? What if the groups genuinely have different underlying rates of the outcome you're predicting? If you force equal approval rates when base rates differ, you might need to apply different standards to different groups, which feels like discrimination itself. This is the demographic parity paradox.
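Here's a tiny numerical illustration of that paradox, using invented score distributions rather than any real lender's data: with one shared cut-off, approval rates differ; forcing equal approval rates requires different cut-offs for each group.

```python
# A toy numerical illustration of the demographic parity paradox. The score
# distributions are invented; group B scores lower on average due to historical factors.
import numpy as np

rng = np.random.default_rng(0)
scores_a = rng.normal(620, 60, 10_000)
scores_b = rng.normal(580, 60, 10_000)

# One shared cut-off: identical treatment of scores, unequal approval rates.
cutoff = 600
print((scores_a >= cutoff).mean(), (scores_b >= cutoff).mean())  # roughly 0.63 vs 0.37

# Demographic parity: force a 50% approval rate in both groups, which requires a
# different cut-off for each group (roughly 620 for A, 580 for B).
print(np.quantile(scores_a, 0.5), np.quantile(scores_b, 0.5))
```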
Equalized Odds: Equal Error Rates
Equalized odds says that among people who would actually succeed (or fail), the algorithm should make mistakes at equal rates across groups. For a hiring algorithm, this means that among people who'd be excellent employees, the rejection rate should be the same regardless of race. And among people who'd be poor employees, the acceptance rate should also be the same.
Equalized odds is precisely what ProPublica found COMPAS violated: Black defendants who did not go on to reoffend were almost twice as likely as white defendants to be incorrectly labelled high risk, while white defendants who did reoffend were more likely to have been incorrectly labelled low risk.
Northpointe, the company behind COMPAS, defended the system using a different definition, usually called calibration or predictive parity: among defendants given the same risk score, reoffence rates were roughly equal across racial groups. By that definition, Northpointe was right. But here's the twist: ProPublica was right too, just under a different definition of fairness.
Researchers have since proven mathematically that when base rates differ between groups, no useful algorithm can simultaneously satisfy equalized odds and calibration (and demographic parity is incompatible with both as well). This "impossibility of fairness" means you must choose which definition of fairness matters most for your context.
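If you want to check equalized odds on your own system, the core computation is just per-group error rates. Here's a minimal sketch, assuming binary 0/1 labels and predictions and a demographic attribute of your choosing.

```python
# A minimal sketch of an equalized odds check: compare error rates across groups.
# y_true, y_pred and group are placeholders for your own binary labels, binary
# predictions, and demographic attribute.
import numpy as np

def error_rates(y_true, y_pred):
    """False positive and false negative rates for binary 0/1 labels and predictions."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    fpr = y_pred[y_true == 0].mean()        # flagged positive, among actual negatives
    fnr = 1 - y_pred[y_true == 1].mean()    # missed, among actual positives
    return fpr, fnr

def equalized_odds_report(y_true, y_pred, group):
    """Print per-group error rates; equalized odds asks these to match across groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    for g in np.unique(group):
        fpr, fnr = error_rates(y_true[group == g], y_pred[group == g])
        print(f"{g}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```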
Individual Fairness: Similar People Treated Similarly
Individual fairness takes a different approach: similar individuals should be treated similarly. If two people are identical in all relevant ways, they should receive the same outcome from your AI system.
This sounds obviously correct, but it's devilishly hard to implement. How do you define "similar"? What attributes are "relevant"? Two loan applicants might have identical credit scores and incomes but different social networks, different educational backgrounds, or different family circumstances. Should those factors make them "different" for fairness purposes?
And here's a deeper question: if historical discrimination means that protected groups systematically have lower credit scores or fewer years of experience, then treating "similar" people similarly might just perpetuate past discrimination.
Fairness Trade-offs: The Tough Choices
Beyond conflicts between fairness definitions, there are broader trade-offs. Often, making an AI system fairer means accepting lower overall accuracy. An algorithm that performs at 95% accuracy might drop to 90% when you adjust it to perform equally well across all demographic groups.
Is that trade-off worth it? That depends on what you value most. For spam filtering, you might prioritise accuracy. For criminal sentencing or welfare eligibility, fairness is paramount. For hiring, you need both, requiring careful balance.
Australian businesses need to make these decisions explicitly and transparently. There's no universal "fair" algorithm. The right approach depends on your industry, your use case, your users, and your values. What matters is that you choose deliberately, understand the trade-offs, and can justify your choices to stakeholders, regulators, and affected communities.
The Australian Context: Why It Matters
Australia isn't just another Western country when it comes to AI fairness. Our unique demographics, legal framework, and history create distinct challenges and responsibilities.
Indigenous Australians: Data Sovereignty and Self-Determination
Aboriginal and Torres Strait Islander peoples comprise 3.2% of Australia's population according to the 2021 Census: 812,728 people, with 91.4% identifying as Aboriginal, 4.2% as Torres Strait Islander, and 4.4% as both. This community has experienced systematic discrimination for over two centuries, creating profound challenges for AI fairness.
First, there's the data gap. Indigenous Australians are underrepresented in many datasets, making AI systems less accurate for this population. Worse, historical discrimination means that any data that does exist often reflects discriminatory patterns. An AI system trained on criminal justice data will "learn" that Indigenous Australians are higher risk because Indigenous people are incarcerated at 11 times the rate of non-Indigenous Australians. But this reflects systemic racism in policing and sentencing, not actual criminality.
Second, there's the principle of Indigenous data sovereignty. The Maiam nayri Wingara Aboriginal and Torres Strait Islander Data Sovereignty Collective advocates that Indigenous peoples have the right to control data about their communities. This builds on the Canadian First Nations OCAP principles: Ownership, Control, Access, and Possession.
For Australian businesses, this means you can't just collect data about Indigenous people and use it however you like. You need:
- Free, prior, and informed consent from Indigenous communities
- Consultation with Indigenous data governance experts
- Consideration of cultural protocols and sensitivities
- Clear benefits for Indigenous communities from AI systems that affect them
- Respect for Indigenous peoples' right to control their own data
The Robodebt scandal disproportionately affected Indigenous Australians. Any AI system touching welfare, justice, healthcare, or government services needs explicit consideration of impacts on Indigenous communities and genuine consultation throughout the design process.
Multicultural Australia: Designing for Diversity
Australia is one of the world's most multicultural nations. The 2021 Census found that 29.3% of Australians were born overseas, with people from over 190 countries speaking more than 300 languages at home. Sydney and Melbourne consistently rank among the world's most culturally diverse cities.
This creates unique challenges for AI fairness:
Facial Recognition: As mentioned earlier, facial recognition systems trained on predominantly white faces perform dramatically worse on Asian, African, and Middle Eastern faces. Given that over 30% of Sydney and Melbourne residents have Asian ancestry, deploying off-the-shelf facial recognition in Australian contexts risks systematic discrimination against a massive proportion of users.
Language Bias: AI systems trained primarily on English (or even worse, American English) struggle with Australian English expressions, let alone the 300+ other languages spoken here. Voice recognition systems might work brilliantly for native English speakers from certain backgrounds but fail for speakers with accents reflecting our multicultural reality.
Cultural Assumptions: AI systems often embed cultural assumptions from their training data. A chatbot trained on US data might not understand Australian cultural references, government structures, or social norms. More problematically, it might make assumptions about "normal" behaviour that reflect American rather than Australian (or Chinese-Australian, or Lebanese-Australian, or Vietnamese-Australian) cultural contexts.
Service Accessibility: If your AI system requires specific technical literacy, language skills, or cultural knowledge to use effectively, it creates barriers for exactly the multicultural communities it should serve. This is deployment bias in action: the system might technically "work" but creates unequal outcomes.
Regional and Socioeconomic Divides
Australia's population is heavily concentrated in major cities, but 29% of Australians live in regional areas. This creates additional fairness challenges:
- Regional areas might have less data available, making AI systems less accurate
- Internet connectivity and digital literacy vary significantly between metropolitan and regional Australia
- Postcode-based proxies can discriminate against regional Australians
- AI systems optimised for urban contexts might fail in regional settings
Similarly, socioeconomic status creates fairness risks. AI systems that use credit history, educational credentials, employment stability, or residential location as factors can systematically disadvantage lower-income Australians.
Why Global Solutions Don't Work Here
The key insight: Australia's demographics differ substantially from the US and Europe where most AI systems and datasets originate. An algorithm that's "fair" in San Francisco might be deeply biased in Sydney. Our 3.2% Indigenous population, 29.3% overseas-born population, and extreme multicultural diversity mean we need Australian-specific approaches to AI fairness.
This doesn't mean reinventing every algorithm. But it does mean:
- Testing AI systems on representative Australian populations
- Adjusting or retraining systems that show demographic performance disparities
- Consulting with affected Australian communities, particularly Indigenous peoples
- Understanding that "fairness" in an Australian context has specific legal and cultural meanings
Tools for Detecting Bias: Making the Invisible Visible
You can't fix what you can't measure. Fortunately, researchers and technology companies have developed powerful tools for detecting and quantifying bias in AI systems.
Fairlearn: Microsoft's Open-Source Toolkit
Fairlearn is an open-source Python toolkit that helps assess and improve AI fairness. Originally developed by Microsoft Research, it's become one of the most widely used fairness tools in industry.
Fairlearn provides two main components:
Fairness Metrics: The toolkit measures various fairness definitions including demographic parity, equalized odds, and equal opportunity. You can quickly assess whether your model performs differently across demographic groups. For example, you might discover that your hiring algorithm approves 40% of male candidates but only 28% of female candidates, or that your loan approval system has different accuracy rates for different ethnic groups.
Mitigation Algorithms: Beyond just measuring unfairness, Fairlearn provides techniques to reduce it. These include pre-processing methods (adjusting training data), in-processing methods (modifying the learning algorithm), and post-processing methods (adjusting predictions). You can specify which fairness constraint matters most for your use case, and Fairlearn will help you build models that satisfy it.
The toolkit integrates smoothly with scikit-learn, Python's most popular machine learning library, making it relatively easy to incorporate into existing workflows. Documentation is available at fairlearn.org.
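As a rough sketch of what a Fairlearn audit and mitigation pass can look like (with synthetic stand-in data in place of a real lending or hiring dataset):

```python
# A rough sketch of a Fairlearn audit and mitigation pass. The data is synthetic;
# in practice X, y and `group` come from your own system.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

X, y = make_classification(n_samples=5_000, n_features=10, random_state=0)
group = np.random.default_rng(0).choice(["group_a", "group_b"], size=len(y))

model = LogisticRegression(max_iter=1_000).fit(X, y)
y_pred = model.predict(X)

# Break accuracy and selection rate down by demographic group.
audit = MetricFrame(metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
                    y_true=y, y_pred=y_pred, sensitive_features=group)
print(audit.by_group)
print("Demographic parity difference:",
      demographic_parity_difference(y, y_pred, sensitive_features=group))

# Mitigation: retrain under a demographic parity constraint, then re-audit as above.
mitigated = ExponentiatedGradient(LogisticRegression(max_iter=1_000),
                                  constraints=DemographicParity())
mitigated.fit(X, y, sensitive_features=group)
y_pred_fair = mitigated.predict(X)
```

The constrained model usually gives up a little overall accuracy in exchange for a smaller gap between groups, which is exactly the trade-off discussed earlier.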
AI Fairness 360: IBM's Comprehensive Solution
AI Fairness 360 is IBM Research's contribution to bias detection, donated to the Linux Foundation AI & Data. It's a more comprehensive toolkit than Fairlearn, with over 70 metrics and numerous mitigation algorithms.
Key features include:
Extensive Metrics: AI Fairness 360 measures bias at multiple stages. You can detect bias in your training data before you even build a model, measure bias in model predictions, and monitor deployed systems for emerging bias. The 70+ metrics cover various fairness definitions and can highlight subtle forms of discrimination you might miss with simpler tools.
Explainability: The toolkit doesn't just tell you your system is biased; it helps explain why. You can identify which features contribute most to discriminatory outcomes, understand how different groups are affected differently, and generate reports for stakeholders or regulators.
Industry Integration: AI Fairness 360 works with TensorFlow, PyTorch, scikit-learn, and other major frameworks. It's designed for enterprise use, with clear documentation and example workflows for common scenarios like credit scoring, hiring, and healthcare.
The toolkit is available at ai-fairness-360.org and through GitHub.
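As an illustrative sketch (with a toy table and an invented 0/1 group coding), a pre-training data audit with AI Fairness 360 can be as short as this:

```python
# An illustrative pre-training data audit with AI Fairness 360. The toy table and the
# 0/1 group coding (1 = privileged, 0 = unprivileged) are assumptions for illustration.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "income":   [50, 60, 55, 40, 45, 70, 65, 38],
    "group":    [1, 1, 1, 1, 0, 0, 0, 0],
    "approved": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(df=df, label_names=["approved"],
                             protected_attribute_names=["group"],
                             favorable_label=1, unfavorable_label=0)
metric = BinaryLabelDatasetMetric(dataset,
                                  privileged_groups=[{"group": 1}],
                                  unprivileged_groups=[{"group": 0}])

# Ratio of favourable outcome rates (unprivileged / privileged); 1.0 means parity.
print("Disparate impact:", metric.disparate_impact())
# Difference in favourable outcome rates; 0.0 means parity.
print("Statistical parity difference:", metric.statistical_parity_difference())
```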
What-If Tool: Google's Interactive Approach
Google's What-If Tool takes a different approach: visual, interactive exploration of model behaviour. Rather than just running statistical tests, you can literally probe your model with "what if" questions.
For example: "What if this applicant were male instead of female? Would the decision change?" You can flip attributes one at a time and see how predictions shift, helping you understand which features drive decisions and where bias might be hiding.
The tool works with TensorFlow models and integrates with Jupyter notebooks, making it particularly useful during model development and debugging.
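The tool itself runs as an interactive widget, but the underlying counterfactual idea is easy to sketch in plain Python, assuming a fitted pipeline (`model`) that accepts a one-row DataFrame; the column name and values below are hypothetical.

```python
# A plain-Python sketch of counterfactual probing in the spirit of the What-If Tool:
# change one attribute and see whether the prediction changes. `model` and the
# column names are hypothetical placeholders.
import pandas as pd

def counterfactual_flip(model, applicant: pd.DataFrame, column: str, new_value):
    """Return the prediction before and after flipping a single attribute."""
    flipped = applicant.copy()
    flipped[column] = new_value
    return model.predict(applicant)[0], model.predict(flipped)[0]

# Example usage:
# original, counterfactual = counterfactual_flip(model, applicant_row, "gender", "male")
# if original != counterfactual:
#     print("Decision changed when only gender changed - investigate this feature's role.")
```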
Aequitas: Bias and Fairness Auditing
Aequitas, developed by the Center for Data Science and Public Policy at the University of Chicago, focuses on auditing. It's designed for organisations that need to demonstrate fairness to regulators, ethics boards, or the public.
Aequitas generates comprehensive fairness reports assessing your system against multiple fairness definitions. It's particularly strong for applications in criminal justice, healthcare, and government services where regulatory scrutiny is high.
Australian-Specific Considerations
While these tools are powerful, using them effectively in an Australian context requires additional steps:
Representative Test Sets: You need to test your AI system on data that reflects Australian demographics. If you're using facial recognition, your test set should include appropriate proportions of Indigenous Australians, Asian-Australians, Middle Eastern Australians, African Australians, and others. It should cover regional and urban populations, different age groups, and various socioeconomic backgrounds.
Culturally Appropriate Metrics: Some fairness metrics might matter more in an Australian legal and cultural context. For example, given our discrimination laws, disparate impact analysis is particularly important. Given our commitment to Indigenous self-determination, consultation metrics (did you consult affected communities?) might be as important as mathematical metrics.
Continuous Monitoring: Fairness isn't just about launch day. AI systems can develop bias over time as data distributions shift or as they interact with user behaviour. Australian businesses need ongoing monitoring, particularly for systems that affect vulnerable populations.
Human Review: No automated tool catches everything. The most effective approach combines automated bias detection with human review from people with diverse backgrounds and lived experiences. Someone who's experienced discrimination can often spot problematic patterns that metrics miss.
Building Governance: From Tools to Culture
Detecting bias is necessary but not sufficient. Australian businesses need governance structures that embed fairness into AI development from the start and maintain it throughout the system's lifecycle.
Ethics Review Boards: Diverse Perspectives Matter
An AI ethics review board is a cross-functional team that assesses AI systems for ethical concerns before deployment. Unlike a technical review, which asks "does it work?", an ethics review asks "should we build it?" and "how do we ensure it's fair?"
Composition: The most effective boards include:
- Technical experts who understand AI systems
- Legal counsel familiar with discrimination law and the Privacy Act
- Domain experts from the business area using the AI
- People with lived experience from communities the system affects
- Ethics or philosophy expertise
- External community representatives when appropriate
That last point is crucial. If you're building an AI system that affects Indigenous Australians, you need Indigenous people on your review board. If it affects people with disabilities, you need people with disabilities involved. Diverse perspectives aren't just ethically right; they're practically essential because people with lived experience spot issues others miss.
Terms of Reference: Clear governance means answering:
- What triggers an ethics review? (New AI system? Significant changes to existing ones? Certain risk levels?)
- Who has decision authority? (Can the board block deployment? Require changes? Or just advise?)
- What criteria are used? (How do you balance fairness against other goals?)
- How are disagreements resolved? (What happens if the board and the product team disagree?)
- How is the board's work documented and reported? (For regulators, auditors, or the public?)
Applying Australia's 8 AI Ethics Principles
In 2019, Australia's Department of Industry released eight AI Ethics Principles to guide responsible AI development. These aren't legally binding, but they provide a practical framework that aligns with emerging regulation.
The eight principles are:
- Human, societal and environmental wellbeing: AI systems should benefit individuals, society, and the environment.
- Human-centred values: AI systems should respect human rights, diversity, and individual autonomy.
- Fairness: AI systems should be inclusive, accessible, and not involve unfair discrimination.
- Privacy protection and security: AI systems should respect privacy rights and ensure data security.
- Reliability and safety: AI systems should operate reliably according to their intended purpose.
- Transparency and explainability: People should understand when AI significantly impacts them and how it works.
- Contestability: People should have a timely process to challenge AI outcomes that significantly impact them.
- Accountability: People responsible for different stages of the AI lifecycle should be identifiable and accountable.
In June 2024, the Australian government released the National Framework for the Assurance of AI in Government, which operationalises these principles for government agencies. While private businesses aren't legally required to follow this framework, it provides excellent practical guidance.
Practical Application: For each AI system, ask:
- Does this system improve wellbeing for all affected groups, or does it benefit some at others' expense?
- Have we consulted affected communities, particularly marginalised ones?
- Have we tested for discriminatory outcomes across demographic groups?
- How do we ensure privacy given the personal data involved?
- What happens if the system makes mistakes? How do we catch and correct them?
- Can users understand why the system made a particular decision affecting them?
- Can users challenge decisions? How quickly? How easily?
- Who's accountable if something goes wrong?
Escalation Paths: When Things Get Complicated
Not every ethical question has a clear answer. Sometimes your team will face genuinely difficult dilemmas where fairness conflicts with accuracy, where different stakeholder groups want different things, or where the right path forward is unclear.
Effective governance means having clear escalation paths:
- When does an issue go from the product team to the ethics board?
- When does it escalate further to executives or the board of directors?
- What external expertise gets consulted? (Legal? Community groups? Academic experts?)
- How do you document reasoning for difficult decisions?
- How do you involve affected communities in decisions that impact them?
For issues affecting Indigenous Australians, escalation should include consultation with Indigenous data governance experts and potentially affected Indigenous communities. The principle of free, prior, and informed consent matters: you're not just asking for feedback; you're seeking genuine agreement from people who'll be impacted.
Continuous Monitoring: Fairness Doesn't End at Launch
AI systems change over time. Models drift as data distributions shift. User behaviour evolves. Bugs emerge. Societal understanding of fairness advances. This means governance can't stop at deployment.
Effective monitoring includes:
Performance Metrics: Track accuracy and other performance indicators separately for different demographic groups. If performance degrades for a particular group, investigate immediately.
Fairness Metrics: Regularly recompute the fairness metrics you established during development. Are they stable? Improving? Getting worse?
Incident Reporting: Create clear channels for users and staff to report concerns about unfair outcomes. Make it easy to raise issues, and actually investigate and respond to reports.
Regular Audits: Schedule periodic reviews (quarterly? annually?) where you thoroughly reassess the system's fairness. Technology improves; your system should too.
Improvement Cycles: When you identify problems, fix them. This sounds obvious, but organisations often do audits that identify issues but then fail to allocate resources to address them. Monitoring without action is just theatre.
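As one example of what automated monitoring can look like, here's a minimal per-group check that could run on a schedule; the metric, threshold, and column names are illustrative assumptions rather than standards.

```python
# A minimal sketch of a scheduled per-group monitoring check. The accuracy metric,
# the 5-percentage-point threshold, and the column names ("outcome", "prediction",
# demographic column) are illustrative assumptions.
import pandas as pd
from sklearn.metrics import accuracy_score

def fairness_alert(decisions: pd.DataFrame, group_col: str, max_gap: float = 0.05) -> bool:
    """Return True when per-group accuracy diverges by more than max_gap."""
    per_group = decisions.groupby(group_col).apply(
        lambda g: accuracy_score(g["outcome"], g["prediction"]))
    gap = per_group.max() - per_group.min()
    print(per_group.to_string(), f"\nAccuracy gap: {gap:.3f}")
    return gap > max_gap

# Run each monitoring cycle on recent decisions whose true outcomes are now known:
# if fairness_alert(recent_decisions, "demographic_group"):
#     escalate_to_ethics_board()   # hypothetical escalation hook
```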
Learning from Failure: Case Studies That Changed the Conversation
Sometimes the best teacher is failure. Several high-profile AI bias incidents have shaped our understanding of what can go wrong and how to prevent it.
Amazon's Hiring Tool: When Historical Bias Gets Automated
We've already mentioned Amazon's recruiting AI, but it's worth examining in detail because it demonstrates training data bias so clearly.
Amazon's team began in 2014 with an ambitious goal: automate resume screening to find top technical talent. They fed the system resumes from Amazon's existing engineering workforce to teach it what "good" looked like.
The problem: Amazon's engineering workforce was overwhelmingly male, reflecting both the tech industry's gender imbalance and Amazon's specific culture. The AI learned that being male was a predictor of success because that's what the historical data showed.
By 2015, the team realised the system was biased. It penalised resumes containing the word "women's" and downgraded graduates from all-women's colleges. Developers manually edited the algorithm to be neutral to these specific terms, but they couldn't guarantee the system wouldn't find other ways to discriminate.
Amazon quietly scrapped the project in 2017, and when Reuters reported the story in 2018, the reputational damage was done. More importantly, it became a landmark example of how AI systems perpetuate and automate historical discrimination unless developers actively intervene to prevent it.
Lessons: Historical data reflects historical discrimination. If your training data comes from an unequal past, your AI will learn inequality. You can't just tell an algorithm to "find patterns"; you have to explicitly design for fairness.
COMPAS: The Criminal Justice Controversy
In 2016, the investigative journalism non-profit ProPublica analysed COMPAS, a risk assessment algorithm used across the US criminal justice system to predict recidivism (reoffending). Courts used COMPAS scores to inform bail, sentencing, and parole decisions.
ProPublica analysed risk scores assigned to 7,000 people arrested in Broward County, Florida between 2013 and 2014, then tracked who actually reoffended. Their findings were explosive:
- Black defendants who did not go on to reoffend were almost twice as likely as white defendants to be incorrectly labelled high risk
- White defendants who went on to commit further crimes were much more likely than Black defendants to have been incorrectly labelled low risk
- Only 20% of people predicted to commit violent crimes actually did
Northpointe, the corporation behind COMPAS, defended the system by arguing that its risk scores were calibrated across racial groups: defendants given the same score reoffended at roughly the same rates, regardless of race. And by that measure, they were right.
Here's the fascinating part: both ProPublica and Northpointe were correct; they were simply using different definitions of fairness. ProPublica measured error rates conditional on actual outcomes (among people who did not reoffend, were Black defendants labelled high risk more often?). Northpointe measured predictive accuracy conditional on the score (among people given the same risk score, did reoffence rates differ by race?).
Researchers later proved mathematically that when base rates differ between groups (which they do for recidivism due to systemic racism in policing and sentencing), you cannot simultaneously satisfy both fairness definitions.
Lessons: There's no single definition of fairness. Different stakeholders might prioritise different fairness metrics. You need to choose explicitly and transparently which definition matters most for your context. Also, be extremely cautious about using AI in criminal justice contexts where the stakes are freedom versus imprisonment.
While COMPAS is a US system, Australia has explored similar risk assessment tools. The COMPAS controversy should inform any Australian deployment.
Healthcare Algorithm: The Cost of Using the Wrong Measure
The 2019 Obermeyer study we mentioned earlier demonstrates measurement bias: using the wrong metric as a proxy for what you actually care about.
A widely used US healthcare algorithm affected around 200 million people annually. It predicted which patients needed extra medical care, aiming to identify people at high risk of health deterioration.
Sensibly (or so it seemed), the algorithm predicted healthcare costs as a proxy for health needs. Patients predicted to have higher future costs got enrolled in care management programs.
The bias: Black patients with the same chronic conditions as white patients spent $1,800 less annually on healthcare due to systemic barriers to accessing care. The algorithm interpreted lower spending as healthier patients.
At any given risk score, Black patients were considerably sicker than white patients. Fixing this would increase the percentage of Black patients receiving additional care from 17.7% to 46.5%.
The researchers worked with the algorithm manufacturer to adjust it to use actual health indicators rather than costs. This reduced racial bias in outcomes by 84%.
Lessons: Be extremely careful when using proxies. Just because something is easy to measure doesn't mean it's the right thing to measure. In societies with existing inequalities, proxies often encode those inequalities into your algorithm. Question your assumptions about what metrics really mean.
Robodebt: Australia's Algorithmic Catastrophe
The Robodebt scheme is Australia's most significant AI ethics failure to date. Between July 2015 and November 2019, the Australian Government used an automated system to calculate welfare overpayments and recover debts.
The Technical Failure: The system used income averaging: it took people's annual tax data, divided by 26 fortnights, and assumed they earned that amount consistently. Anyone whose fortnightly welfare payment was higher than that average income must have been overpaid, right?
Wrong. Many welfare recipients work casually or seasonally. They might earn nothing for months, then significant income for a few weeks, then nothing again. Income averaging created false debts.
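A worked toy example shows how averaging manufactures a debt. Every figure and rule below is invented for illustration; these are not actual Centrelink rates or a real case.

```python
# A toy illustration of how income averaging manufactures a debt. All figures and
# rules here are invented; they are not actual Centrelink rates or a real case.
annual_income = 26_000                     # earned entirely in a 13-week seasonal job
averaged_fortnightly = annual_income / 26  # the system assumed $1,000 every fortnight

income_free_area = 150   # hypothetical amount you can earn before payments reduce
taper_rate = 0.5         # hypothetical reduction per dollar earned above that amount

# What the averaging logic concluded: an overpayment in *every* fortnight of the year.
assumed_overpayment_per_fortnight = max(0, averaged_fortnightly - income_free_area) * taper_rate
assumed_debt = assumed_overpayment_per_fortnight * 26

# Reality: in the fortnights this person actually received welfare, they earned nothing,
# so no overpayment occurred. The entire "debt" is an artefact of averaging.
print(f"Debt the system would raise: ${assumed_debt:,.2f}")
print("Debt actually owed: $0.00")
```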
The Human Cost: The system raised approximately 794,000 debts against approximately 526,000 people. Of the 567,000 debts raised through income averaging, approximately 470,000 (around 80%) were false. The government wrongly recovered $751 million from 381,000 people who paid debts they didn't owe. The scheme shifted the burden of proof: recipients had to prove they hadn't been overpaid, often requiring payslips from years earlier. Many paid debts they didn't owe because they couldn't prove otherwise.
The psychological harm was immeasurable. People already struggling financially faced unexpected debts, aggressive recovery methods, and the assumption they were criminals. Robodebt contributed to multiple suicides.
The Legal Outcome: By 2019, two Federal Court cases forced the government to admit income averaging was unlawful. The scheme ended in November 2019. In June 2021, the Federal Court approved a $1.872 billion settlement.
The Royal Commission handed down its report in July 2023, calling the scheme a "costly failure of public administration, in both human and economic terms." It referred several individuals to law enforcement for potential prosecution and specifically criticised former Prime Minister Scott Morrison for misleading Cabinet.
Lessons: Automated decision-making affecting people's lives requires human oversight. When the stakes are high (welfare, housing, healthcare, justice), algorithms can't have final say. Shifting the burden of proof onto vulnerable people is unconscionable. And technical efficiency doesn't justify legal or ethical violations.
For Australian businesses, Robodebt is a stark warning. Algorithmic decision-making that affects people's fundamental interests needs rigorous ethical review, continuous monitoring, and clear accountability. The AI might be fast and cheap, but if it's wrong, the consequences can be catastrophic.
Inclusive Data Collection: Building Fairness From the Start
Bias often enters AI systems at the very beginning: when you collect and label training data. Getting this right is essential for fair outcomes.
Representative Sampling: Whose Data Are You Using?
If your training data doesn't represent the population your AI system will serve, it won't work fairly for everyone.
For Australian businesses, this means:
Geographic Diversity: Include data from Sydney, Melbourne, Brisbane, Perth, Adelaide, regional cities, and rural areas. Don't just sample from major metropolitan centres.
Demographic Representation: Ensure your data includes appropriate proportions of:
- Indigenous Australians (3.2% of the population)
- People born overseas (29.3%)
- Different ethnic backgrounds reflecting Australia's multiculturalism
- Various age groups
- People with disabilities (21.4% of Australians as of 2022)
- Different socioeconomic backgrounds
- Different genders and sexual orientations
Oversampling Minority Groups: Because minority groups make up only a small share of the population, random sampling might not give you enough examples for accurate modelling. Intentionally oversample underrepresented groups during training, then adjust your model to account for this during deployment (see the sketch below).
Mind the Gaps: Be aware of who's missing from your data. If certain groups are systematically absent or underrepresented, your system will work poorly for them. Sometimes it's better to acknowledge limitations ("this system hasn't been tested for X population") rather than deploy something that fails for those users.
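Here's the oversampling sketch referred to above, using scikit-learn's resample; the column name "group" and value "group_x" are assumptions for illustration.

```python
# A minimal sketch of oversampling an underrepresented group before training, using
# scikit-learn's resample. The column name "group" and value "group_x" are assumptions.
import pandas as pd
from sklearn.utils import resample

def oversample_group(df: pd.DataFrame, group_col: str, group_value, target_n: int,
                     random_state: int = 42) -> pd.DataFrame:
    """Duplicate rows (sampling with replacement) from one group until it has target_n rows."""
    minority = df[df[group_col] == group_value]
    rest = df[df[group_col] != group_value]
    boosted = resample(minority, replace=True, n_samples=target_n, random_state=random_state)
    return pd.concat([rest, boosted]).sample(frac=1, random_state=random_state)

# Usage: train_df = oversample_group(train_df, "group", "group_x", target_n=5_000)
# Evaluate on an untouched, representative test set so results still reflect
# real-world proportions.
```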
Ethical Data Collection: Consent and Purpose
The Australian Privacy Act and its 13 Australian Privacy Principles set clear requirements for personal data collection. For AI systems, key considerations include:
Informed Consent: People should know their data will be used to train AI systems. They should understand what that means and genuinely consent, not just click through a dense privacy policy.
Purpose Limitation: You can't just collect data and use it however you like later. If you collected data for one purpose ("improving customer service"), you generally can't repurpose it for something else ("training a marketing AI") without additional consent.
Data Minimisation: Only collect what you actually need. Just because you can gather extensive personal data doesn't mean you should.
Security: AI training data often contains sensitive information. Protect it appropriately.
From December 10, 2026, new automated decision-making transparency requirements come into effect. If your AI makes decisions that could significantly affect someone's rights or interests, you must disclose certain information in your privacy policy, including the kinds of personal information used, the kinds of decisions made, and how the computer program makes those decisions.
Indigenous Data Sovereignty: Self-Determination in Practice
For data about or affecting Indigenous Australians, additional principles apply. The Maiam nayri Wingara Aboriginal and Torres Strait Islander Data Sovereignty Collective advocates for Indigenous peoples' rights to control data about their communities.
This means:
Collective Rights: Data sovereignty isn't just individual consent. Indigenous communities have collective rights to govern data about their peoples and lands.
OCAP-Inspired Principles: While OCAP (Ownership, Control, Access, Possession) is specifically a Canadian First Nations framework, similar principles apply in Australia:
- Ownership: Indigenous peoples own data about their communities
- Control: Indigenous peoples control how that data is collected, used, and shared
- Access: Indigenous peoples have the right to access data about them
- Possession: Indigenous peoples should physically control data infrastructure where feasible
Consultation: You can't just decide what's appropriate for Indigenous data. You need genuine consultation with Indigenous data governance experts and affected communities.
Benefit Sharing: If AI systems use Indigenous data, Indigenous communities should benefit. This might mean sharing insights, providing services, employing Indigenous people in the project, or other forms of tangible benefit.
Cultural Sensitivity: Some information is culturally sensitive or subject to traditional protocols. Respect these even if they don't align with Western data practices.
The Australian Government is implementing the Framework for Governance of Indigenous Data, which sets standards for government agencies. Private businesses should follow similar principles.
Data Labelling: The Humans Behind the Algorithm
Most AI systems require labelled training data: humans telling the system "this is a cat," "this is spam," "this person is a good hire." But labellers are human, which means they bring biases.
Labeller Diversity: Use diverse labelling teams. If all your labellers are young, urban, and university-educated, they'll make different judgments than a diverse team would. For subjective tasks like content moderation or resume screening, having labellers with different backgrounds helps identify and average out individual biases.
Clear Guidelines: Provide detailed, unambiguous labelling instructions. If labellers are guessing what you want, their individual biases fill in the gaps.
Quality Assurance: Have multiple labellers label the same data. Look for systematic disagreements that might indicate subjective judgment or bias. Investigate anomalies.
Training: Educate labellers about bias and fairness. Show them examples of biased judgments. Create a culture where raising concerns about potentially discriminatory labels is encouraged.
Subjectivity Awareness: For truly subjective tasks, acknowledge that "ground truth" might not exist. A resume that one person rates highly might seem mediocre to another. An image that seems professional to one culture might seem inappropriate to another. When judgments vary, that's valuable information about subjectivity, not just "label noise."
Synthetic Data: A Tool for Fairness
Synthetic data (algorithmically generated data that mimics real data) can help address underrepresentation. If you don't have enough training examples from a particular demographic group, you might generate synthetic examples to balance your dataset.
This can work well for some applications, but be cautious:
- Synthetic data reflects assumptions of whoever generated it
- It can help with representation but doesn't capture all real-world variation
- For sensitive applications, real representative data is better
- Validate that your synthetic data actually improves fairness rather than introducing new biases
Synthetic data is a tool, not a replacement for genuine representative data collection.
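As a heavily simplified illustration of the idea (not a recommended production approach), you could generate extra rows for an underrepresented group by fitting a simple Gaussian to that group's numeric features:

```python
# A heavily simplified sketch of synthetic data generation: fit a Gaussian to an
# underrepresented group's numeric features and sample new rows. Real projects would
# use purpose-built generators and validate carefully; this only illustrates the idea.
import numpy as np
import pandas as pd

def synthesise_group(df: pd.DataFrame, group_col: str, group_value, n_new: int,
                     random_state: int = 0) -> pd.DataFrame:
    rng = np.random.default_rng(random_state)
    source = df[df[group_col] == group_value]
    numeric = source.select_dtypes("number").drop(columns=[group_col], errors="ignore")
    synthetic = pd.DataFrame(
        rng.multivariate_normal(numeric.mean(), numeric.cov(), size=n_new),
        columns=numeric.columns)
    synthetic[group_col] = group_value
    return synthetic

# Before trusting the result, check that models trained with synthetic rows actually
# close the fairness gap on *real*, held-out data.
```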
Communicating Fairness: Building Trust Through Transparency
Building fair AI systems matters little if nobody trusts them. Australian businesses need to communicate their ethical AI practices to customers, staff, and regulators.
Transparency for Customers: Explaining AI in Plain Language
Most Australians don't have computer science degrees. Explaining your AI system in accessible language is both an ethical requirement and good business practice.
When AI Is Being Used: Tell people when they're interacting with AI. If a chatbot is answering questions, don't pretend it's human. If an algorithm is making decisions about their application, tell them.
How It Works: Provide a plain-language explanation of what the AI does. You don't need to explain neural network architectures, but you should explain what factors the system considers and broadly how it makes decisions.
Why You Use It: Explain the benefits. Are you using AI to process applications faster? To identify health risks earlier? To personalise services? Help people understand what value it provides.
How Fairness Is Ensured: Explain what you've done to make the system fair. Have you tested it across demographic groups? Do you monitor for bias? Is there human oversight? Customers want to know you've thought about fairness, not just efficiency.
How to Challenge Decisions: If your AI makes decisions that significantly affect people, explain how they can challenge those decisions. Contestability is one of Australia's AI Ethics Principles, and providing a clear review path is increasingly expected under the broader Privacy Act framework.
The Australian Government's 2024 Framework for the Assurance of AI in Government emphasises transparency and explainability as core requirements. Private businesses should follow similar standards.
Staff Training: Building an Ethical AI Culture
Your developers, product managers, and business leaders need to understand AI ethics and bias. This requires:
Ethics Training: Not just technical training on bias detection tools, but deeper education on what fairness means, why it matters, and how discrimination harms people. Invite speakers with lived experience. Read case studies like Robodebt. Make ethics tangible, not abstract.
Responsible AI Guidelines: Clear, practical guidance for staff on how to build fair AI systems. When should they escalate to the ethics board? What testing is required before deployment? What's the process for responding to bias reports?
Reporting Mechanisms: Make it easy for staff to raise concerns about potentially discriminatory AI systems. Create a culture where questioning fairness is valued, not punished. Some of your best bias detection will come from staff who notice something feels wrong.
Accountability: Make fairness part of performance evaluation. If product managers are measured only on speed and cost savings, fairness gets sacrificed. If they're also measured on ethical outcomes, behaviour changes.
Cultural Change: Building fair AI systems requires shifting from "can we build this?" to "should we build this?" That cultural shift starts at the top. Leadership needs to visibly prioritise ethics and fairness, not just give it lip service.
Engaging with Regulators: Proactive Compliance
Australia's AI regulatory environment is evolving rapidly. The Privacy and Other Legislation Amendment Act 2024, which received Royal Assent on December 10, 2024, introduces new automated decision-making transparency requirements taking effect in December 2026.
The Office of the Australian Information Commissioner released detailed guidance on AI and privacy in October 2024. The Australian Human Rights Commission has published guidance on AI and discrimination, particularly for insurance.
Smart businesses engage proactively:
Algorithmic Impact Assessments: Document your AI system's purpose, how it works, what data it uses, what testing you've done for bias, and what safeguards are in place. This becomes your evidence of due diligence.
Audit Trails: Keep detailed records of decisions made during AI development. Why did you choose this fairness metric? What trade-offs did you consider? How did you respond to bias detected during testing? If regulators investigate, this documentation demonstrates good-faith efforts.
Proactive Engagement: Don't wait for regulators to come to you. Engage with the OAIC, the Australian Human Rights Commission, or industry regulators proactively. Demonstrate that you're taking fairness seriously.
Industry Collaboration: Work with industry bodies and other organisations to develop best practices. Shared standards benefit everyone by creating clearer expectations and demonstrating sector-wide commitment.
Marketing and PR: Authenticity Matters
Ethical AI can be a competitive advantage. Many Australians are increasingly conscious of corporate values and want to support businesses that treat people fairly.
But authenticity is essential. "Ethics washing" (claiming ethical commitments without genuine follow-through) damages trust when exposed. The Robodebt scandal didn't just harm its direct victims; it eroded public trust in government automation generally.
If you're going to market your ethical AI practices:
- Be specific about what you've actually done (not vague commitments)
- Be honest about limitations and ongoing challenges
- Have substance behind the marketing (real ethics boards, genuine bias testing, documented processes)
- Prepare for scrutiny (journalists and activists will investigate claims)
And if something goes wrong (because AI systems are complex and failures happen), have a crisis communication plan:
- Acknowledge the problem quickly
- Explain what went wrong and why
- Describe what you're doing to fix it
- Be transparent about impacts on affected people
- Take accountability rather than making excuses
The Australian public respects businesses that admit mistakes and fix them more than businesses that deny problems or blame others.
Australian Legal and Regulatory Environment: Know Your Obligations
Australian businesses deploying AI systems operate within a complex legal framework that makes certain forms of algorithmic discrimination illegal.
Federal Discrimination Law: AI Is No Exception
Australia has four major federal anti-discrimination Acts. All apply to AI systems:
Racial Discrimination Act 1975: Makes it unlawful to discriminate based on race, colour, national origin, ethnic origin, or immigrant status. If your AI system produces systematically worse outcomes for people of certain racial or ethnic backgrounds, that's potentially unlawful even if race isn't explicitly used by the algorithm.
Sex Discrimination Act 1984: Prohibits discrimination based on sex, sexual orientation, gender identity, intersex status, marital status, pregnancy, or family responsibilities. An algorithm that disadvantages women (like Amazon's hiring tool) violates this Act.
Disability Discrimination Act 1992: Makes it unlawful to discriminate based on disability. If your AI system is inaccessible to people with disabilities or produces worse outcomes for them, that's potentially discriminatory.
Age Discrimination Act 2004: Prohibits discrimination based on age. Using proxies that correlate with age (like "years of experience" or "recent graduate") can constitute indirect discrimination.
The key concept: indirect discrimination. Even if your algorithm doesn't explicitly use a protected attribute like race or gender, if it has a disparate impact on a protected group, that can still be unlawful unless you can show the criterion is reasonable in the circumstances.
For example, requiring a certain height might seem neutral but could indirectly discriminate against women or certain ethnic groups. Using postcodes could indirectly discriminate based on race or socioeconomic status. Using educational credentials could discriminate based on disability or socioeconomic background.
AI systems make this more complex because the discriminatory mechanism might be hidden deep in the algorithm's learned patterns. But ignorance isn't a defence. If your system discriminates, you're potentially liable whether you intended to or understood why.
The Privacy Act 1988: Automated Decision-Making
The Privacy Act governs how organisations handle personal information through 13 Australian Privacy Principles (APPs). Several are particularly relevant for AI:
APP 1 (Open and Transparent Management): You must have a clear privacy policy explaining how you handle personal information. The Privacy and Other Legislation Amendment Act 2024 introduced new requirements: from December 10, 2026, if you use AI to make automated decisions that could significantly affect someone's rights or interests, you must disclose in your privacy policy:
- The kinds of personal information used
- The kinds of decisions made
- How the computer program makes those decisions
APP 3 (Collection): You can only collect personal information that's reasonably necessary for your functions. You can't just vacuum up data because it might be useful for AI training someday.
APP 5 (Notification): When collecting personal information, you must notify people of various matters including what you'll do with it. If you'll use data for AI training, you need to disclose that.
APP 6 (Use or Disclosure): You can generally only use or disclose personal information for the purpose you collected it, unless an exception applies. Repurposing customer service data for AI training might require additional consent.
The OAIC released comprehensive guidance on privacy and AI in October 2024, covering both developing AI models and using commercial AI products. This guidance is essential reading for any Australian business deploying AI.
Australian Human Rights Commission: Sector-Specific Guidance
The Australian Human Rights Commission has published guidance on AI and discrimination, particularly the 2022 resource on AI and Discrimination in Insurance. While focused on insurance, the principles apply broadly:
- Understand how AI systems might discriminate
- Test systems for disparate impacts across demographic groups
- Have clear governance and accountability
- Monitor deployed systems for emerging bias
- Provide transparency to affected people
- Enable contestability when decisions significantly affect individuals
The Commission has also advocated for creating an AI Commissioner (or AI Safety Commissioner) to provide expertise and guidance on AI compliance. While this hasn't been implemented yet, it signals the direction of Australian regulation.
State and Territory Laws
Beyond federal law, state and territory anti-discrimination laws may also apply. These vary by jurisdiction but often cover additional protected attributes or provide different remedies.
Penalties and Remedies: The Cost of Getting It Wrong
If your AI system discriminates unlawfully, consequences can include:
Damages: Courts can award compensation for economic loss, humiliation, hurt, and suffering. There's no cap on damages in most Australian discrimination law.
Regulatory Penalties: The OAIC can impose penalties for serious or repeated Privacy Act breaches. Maximum penalties are significant: up to $2.5 million for individuals or the greater of $50 million, three times the value of benefits obtained, or 30% of turnover during the breach period for corporations.
Reputational Damage: Beyond legal penalties, public exposure of discriminatory AI systems causes lasting reputational harm. The Robodebt scandal will be associated with the Australian Government for decades.
Operational Disruption: If regulators require you to stop using a discriminatory AI system, that can disrupt core business operations.
Class Actions: If your system discriminates against many people, class action lawsuits become possible, multiplying liability.
The best risk management is preventing discrimination in the first place through the tools, governance, and practices described throughout this article.
Emerging Regulation: What's Coming
Australia's AI regulatory environment is evolving rapidly:
- The National Framework for the Assurance of AI in Government (June 2024) sets standards for government AI use
- The Privacy and Other Legislation Amendment Act 2024 introduces new automated decision-making requirements
- Industry-specific regulation is emerging (banking, insurance, healthcare)
- Voluntary standards like ISO/IEC 42001 (AI management systems) are gaining traction
While much Australian regulation remains principles-based and voluntary, the trend is toward increasing specificity and mandatory requirements. Businesses that build ethical AI practices now will be better positioned as regulation tightens.
Key Takeaways: Building Fair AI for All Australians
Let's synthesise what we've covered into practical guidance you can apply immediately.
Understanding Bias:
- Bias in AI systems comes in multiple forms: training data bias (learning from historical discrimination), algorithmic bias (design choices that amplify inequality), measurement bias (using the wrong proxies), and deployment bias (systems working differently for different groups in practice).
- Fairness has competing definitions: demographic parity (equal positive outcome rates), equalized odds (equal error rates), and individual fairness (similar people treated similarly). You often can't satisfy all definitions simultaneously, so you must choose deliberately based on your context and values.
- Bias detection requires ongoing testing across demographic groups using tools like Fairlearn, AI Fairness 360, and others. But automated tools aren't enough; you also need human review from people with diverse backgrounds and lived experiences.
- Major failures like Amazon's hiring tool, COMPAS, the Obermeyer healthcare study, and Australia's Robodebt scandal demonstrate how bias manifests and the serious harms it causes. Learn from these cases.
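As a starting point, here is a minimal sketch of per-group testing with Fairlearn. The data, column names, and group labels are hypothetical; in practice, y_true and y_pred would come from your own model and a representative evaluation set.

```python
# Minimal per-group bias testing sketch with Fairlearn (data and column names are hypothetical).
import pandas as pd
from fairlearn.metrics import (
    MetricFrame,
    demographic_parity_difference,
    equalized_odds_difference,
    selection_rate,
)
from sklearn.metrics import accuracy_score

# y_true: observed outcomes, y_pred: model decisions, group: a sensitive attribute
df = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 0],
    "y_pred": [1, 0, 1, 0, 0, 1, 1, 1],
    "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
})

# Break accuracy and selection rate down by demographic group
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=df["y_true"],
    y_pred=df["y_pred"],
    sensitive_features=df["group"],
)
print(frame.by_group)

# Gap-style summaries of the two fairness definitions discussed above
print(demographic_parity_difference(df["y_true"], df["y_pred"], sensitive_features=df["group"]))
print(equalized_odds_difference(df["y_true"], df["y_pred"], sensitive_features=df["group"]))
```

The by_group table shows where performance diverges, while the two difference metrics summarise the demographic parity and equalized odds gaps described above; values near zero indicate smaller disparities.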
Australian Context:
- Australia's unique demographics demand Australian-specific approaches. Our 3.2% Indigenous population, 29.3% overseas-born population, 300+ languages, and extreme multiculturalism mean AI systems trained on US or European data often don't work fairly here.
- Indigenous data sovereignty matters. You need free, prior, and informed consent from Indigenous communities, respect for the OCAP-inspired principles (Ownership, Control, Access, and Possession, as developed for Canadian First Nations), and genuine consultation throughout AI development.
- Multicultural diversity creates specific challenges: facial recognition bias, language barriers in voice systems, cultural assumptions embedded in training data, and accessibility issues for non-native English speakers.
- Regional and socioeconomic divides mean you can't just optimise for urban, affluent users. Representative testing must include regional Australians and people from various socioeconomic backgrounds.
Practical Tools:
- Use established bias detection tools: Fairlearn for fairness metrics and mitigation, AI Fairness 360 for comprehensive auditing with 70+ metrics, the What-If Tool for interactive model probing, and Aequitas for formal fairness audits (a short mitigation sketch follows this list).
- Build governance structures: ethics review boards with diverse membership (including people with lived experience from affected communities), clear escalation paths for difficult decisions, and continuous monitoring of deployed systems for emerging bias.
- Apply Australia's 8 AI Ethics Principles: human wellbeing, human-centred values, fairness, privacy protection, reliability, transparency, contestability, and accountability. These provide a practical framework aligned with emerging regulation.
- Ensure representative data collection: oversample minority groups during training, respect Indigenous data sovereignty, use diverse labelling teams, and acknowledge limitations when data gaps exist.
- Communicate transparently: tell customers when AI is being used, explain how it works in plain language, describe fairness measures, and provide clear processes for challenging decisions.
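To illustrate the mitigation side, here is a minimal sketch using Fairlearn's ThresholdOptimizer to post-process a model under a demographic parity constraint. The synthetic data, feature construction, and constraint choice are assumptions for illustration; in-processing reductions or a different constraint (such as equalized odds) may suit your context better.

```python
# Minimal mitigation sketch with Fairlearn's ThresholdOptimizer (synthetic, hypothetical data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.postprocessing import ThresholdOptimizer
from fairlearn.metrics import demographic_parity_difference

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                 # three hypothetical features
group = rng.choice(["A", "B"], size=500)      # sensitive attribute
X[:, 0] += 0.8 * (group == "A")               # a feature that proxies for group membership
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0.4).astype(int)

base = LogisticRegression().fit(X, y)

# Post-process the base model's scores so selection rates are roughly equal across groups
mitigator = ThresholdOptimizer(
    estimator=base,
    constraints="demographic_parity",
    predict_method="predict_proba",
    prefit=True,
)
mitigator.fit(X, y, sensitive_features=group)

before = demographic_parity_difference(y, base.predict(X), sensitive_features=group)
after = demographic_parity_difference(
    y,
    mitigator.predict(X, sensitive_features=group, random_state=0),
    sensitive_features=group,
)
print(f"demographic parity difference: {before:.3f} -> {after:.3f}")
```

Post-processing is only one option; whichever mitigation you choose, re-run your fairness metrics afterwards, because narrowing one disparity can widen another.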
Legal Requirements:
- Australian discrimination law applies to AI systems. The Racial Discrimination Act 1975, Sex Discrimination Act 1984, Disability Discrimination Act 1992, and Age Discrimination Act 2004 all prohibit both direct and indirect discrimination. Algorithmic discrimination is still discrimination.
- The Privacy Act 1988 governs personal data handling. From December 2026, new transparency requirements mandate disclosure of automated decision-making in privacy policies.
- The OAIC's October 2024 guidance on privacy and AI provides detailed compliance requirements. The Australian Human Rights Commission's guidance on AI and discrimination offers sector-specific advice.
- Penalties for discriminatory AI systems can be severe: damages with no caps, regulatory penalties up to $50 million or 30% of turnover, reputational harm, and operational disruption.
Moving Forward:
- Start with a fairness audit: test your existing AI systems for demographic performance disparities. If you find bias, fix it before expanding deployment.
- Establish governance now: create an ethics review board, define when ethics review is required, and build monitoring into your deployment processes (a monitoring sketch follows this list).
- Invest in representative data: Australian businesses need Australian-representative datasets. This might mean collecting new data, oversampling minority groups, or partnering with organisations that serve diverse communities.
- Build ethical AI culture: train staff on bias and fairness, create reporting mechanisms for concerns, and make ethics part of performance evaluation.
- Engage proactively with regulation: conduct algorithmic impact assessments, maintain detailed audit trails, and engage with regulators before they come to you.
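As one way to operationalise that monitoring, here is a minimal sketch that checks a decision log for widening selection-rate gaps between demographic groups. The log schema, monthly grouping, and 10-percentage-point alert threshold are illustrative assumptions; set the threshold and escalation path through your governance process.

```python
# Minimal post-deployment monitoring sketch (log schema and threshold are hypothetical).
import pandas as pd

ALERT_THRESHOLD = 0.10  # flag periods where the selection-rate gap exceeds 10 percentage points

# Assumed decision log: one row per automated decision
log = pd.DataFrame({
    "decided_at": pd.to_datetime([
        "2025-01-05", "2025-01-12", "2025-01-20", "2025-01-28",
        "2025-02-03", "2025-02-11", "2025-02-19", "2025-02-26",
    ]),
    "group":    ["A", "A", "B", "B", "A", "A", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   1,   0,   0],
})

# Selection (approval) rate per calendar month and demographic group
rates = (
    log.set_index("decided_at")
       .groupby([pd.Grouper(freq="MS"), "group"])["approved"]
       .mean()
       .unstack("group")
)

# Demographic-parity-style gap per period: widest spread between any two groups
rates["gap"] = rates.max(axis=1) - rates.min(axis=1)
print(rates)

for period, gap in rates["gap"].items():
    if gap > ALERT_THRESHOLD:
        # In production this would notify the ethics/governance owner rather than print
        print(f"ALERT: selection-rate gap of {gap:.0%} in {period:%Y-%m} exceeds {ALERT_THRESHOLD:.0%}")
```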
Fair AI isn't just ethically right and legally required. It's practically essential. AI systems that don't work fairly for all Australians will fail in our diverse market. Systems that discriminate will face legal challenges, regulatory penalties, and reputational damage.
The good news: the tools, frameworks, and knowledge exist to build fair AI systems. What's required now is commitment. Not just lip service to ethics, but genuine prioritisation of fairness throughout the AI lifecycle, from data collection through deployment and monitoring.
Australia has an opportunity to lead in ethical AI. Our multicultural diversity, our commitment to Indigenous self-determination, and our strong anti-discrimination legal framework create both challenges and advantages. Businesses that rise to meet these challenges will build AI systems that work for everyone, earning trust and creating value in our diverse society.
The choice is yours: build AI systems that perpetuate historical discrimination, or build systems that work fairly for all Australians. The tools are available. The legal framework is clear. The ethical imperative is undeniable.
It's time to build fair AI.
---
Sources
- Australian Bureau of Statistics. "Aboriginal and Torres Strait Islander people: Census, 2021". 2021. https://www.abs.gov.au/statistics/people/aboriginal-and-torres-strait-islander-peoples/aboriginal-and-torres-strait-islander-people-census/latest-release
- Australian Bureau of Statistics. "Snapshot of Australia, 2021". 2021. https://www.abs.gov.au/statistics/people/people-and-communities/snapshot-australia/latest-release
- Australian Bureau of Statistics. "2021 Census highlights increasing cultural diversity". 2022. https://www.abs.gov.au/media-centre/media-releases/2021-census-highlights-increasing-cultural-diversity
- Holmes, C. "Report of the Royal Commission into the Robodebt Scheme". Commonwealth of Australia. 2023. https://robodebt.royalcommission.gov.au/publications/report
- Australian Government. "Government Response to the Royal Commission into the Robodebt Scheme". 2023. https://www.pmc.gov.au/sites/default/files/resource/download/gov-response-royal-commission-robodebt-scheme.pdf
- University of Sydney Law School. "Unraveling Robodebt: Legal Failures, Impact on Vulnerable Communities, and Future Reforms". 2023. https://www.sydney.edu.au/law/news-and-events/news/2023/12/13/unraveling-robodebt-legal-failures-impacts.html
- Context News. "Australian Robodebt scandal shows the risk of rule by algorithm". 2023. https://www.context.news/surveillance/australian-robodebt-scandal-shows-the-risk-of-rule-by-algorithm
- Department of Industry, Science and Resources. "Australia's Artificial Intelligence Ethics Principles". Australian Government. 2019. https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-principles
- Department of Finance. "National Framework for the Assurance of AI in Government". Australian Government. 2024. https://www.finance.gov.au/government/public-data/data-and-digital-ministers-meeting/national-framework-assurance-artificial-intelligence-government
- Bird, S., Dudík, M., Edgar, R., Horn, B., Lutz, R., Milan, V., Sameki, M., Wallach, H., & Walker, K. "Fairlearn: A toolkit for assessing and improving fairness in AI". Microsoft Research. 2020. https://www.microsoft.com/en-us/research/publication/fairlearn-a-toolkit-for-assessing-and-improving-fairness-in-ai/
- Fairlearn. "Fairlearn: A toolkit for assessing and improving fairness in AI". 2024. https://fairlearn.org/
- Bellamy, R. K. E., et al. "AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias". IBM Journal of Research and Development, 63(4/5). 2019. https://research.ibm.com/publications/ai-fairness-360-an-extensible-toolkit-for-detecting-and-mitigating-algorithmic-bias
- AI Fairness 360. "AI Fairness 360 Open Source Toolkit". 2024. https://aif360.res.ibm.com/
- Dastin, J. "Amazon scraps secret AI recruiting tool that showed bias against women". Reuters. 2018. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
- MIT Technology Review. "Amazon ditched AI recruitment software because it was biased against women". 2018. https://www.technologyreview.com/2018/10/10/139858/amazon-ditched-ai-recruitment-software-because-it-was-biased-against-women/
- American Civil Liberties Union. "Why Amazon's Automated Hiring Tool Discriminated Against Women". 2018. https://www.aclu.org/news/womens-rights/why-amazons-automated-hiring-tool-discriminated-against
- Angwin, J., Larson, J., Mattu, S., & Kirchner, L. "Machine Bias: There's software used across the country to predict future criminals. And it's biased against blacks". ProPublica. 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
- ProPublica. "How We Analyzed the COMPAS Recidivism Algorithm". 2016. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
- Chouldechova, A. "Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments". Big Data, 5(2), 153-163. 2017.
- Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. "Dissecting racial bias in an algorithm used to manage the health of populations". Science, 366(6464), 447-453. 2019. https://www.science.org/doi/10.1126/science.aax2342
- University of Chicago News. "Health care prediction algorithm biased against black patients, study finds". 2019. https://news.uchicago.edu/story/health-care-prediction-algorithm-biased-against-black-patients-study-finds
- Scientific American. "Racial Bias Found in a Major Health Care Risk Algorithm". 2019. https://www.scientificamerican.com/article/racial-bias-found-in-a-major-health-care-risk-algorithm/
- Grother, P., Ngan, M., & Hanaoka, K. "Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects". NIST Interagency Report 8280. National Institute of Standards and Technology. 2019. https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
- National Institute of Standards and Technology. "NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software". 2019. https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software
- MIT Technology Review. "A US government study confirms most face recognition systems are racist". 2019. https://www.technologyreview.com/2019/12/20/79/ai-face-recognition-racist-us-government-nist-study/
- Harvard Journal of Law & Technology. "Why Racial Bias is Prevalent in Facial Recognition Technology". 2020. https://jolt.law.harvard.edu/digest/why-racial-bias-is-prevalent-in-facial-recognition-technology
- Office of the Australian Information Commissioner. "Guidance on privacy and the use of commercially available AI products". 2024. https://www.oaic.gov.au/privacy/privacy-guidance-for-organisations-and-government-agencies/guidance-on-privacy-and-the-use-of-commercially-available-ai-products
- Office of the Australian Information Commissioner. "Australian Privacy Principles Guidelines - Chapter 1: APP 1 Open and transparent management of personal information". 2024. https://www.oaic.gov.au/privacy/australian-privacy-principles/australian-privacy-principles-guidelines/chapter-1-app-1-open-and-transparent-management-of-personal-information
- Dentons. "OAIC releases guidance on Privacy and AI in Australia". 2024. https://www.dentons.com/en/insights/articles/2024/october/28/oaic-releases-guidance-on-privacy-and-ai-in-australia
- Future of Privacy Forum. "OAIC's Dual AI Guidelines Set New Standards for Privacy Protection in Australia". 2024. https://fpf.org/blog/global/oaics-dual-ai-guidelines-set-new-standards-for-privacy-protection-in-australia/
- Norton Rose Fulbright. "Australian Privacy Alert: Parliament passes major and meaningful privacy law reform". 2024. https://www.nortonrosefulbright.com/en-us/knowledge/publications/be98b0ff/australian-privacy-alert-parliament-passes-major-and-meaningful-privacy-law-reform
- Securiti. "An Overview of Australia's Privacy and Other Legislation Amendment 2024". 2024. https://securiti.ai/australia-privacy-and-other-legislation-amendment-2024/
- Australian Human Rights Commission. "Guidance Resource: AI and Discrimination in Insurance". 2022. https://humanrights.gov.au/our-work/technology-and-human-rights/publications/guidance-resource-ai-and-discrimination-insurance
- Australian Human Rights Commission. "Artificial intelligence and anti-discrimination: Major new publication". 2024. https://humanrights.gov.au/about/news/media-releases/artificial-intelligence-and-anti-discrimination-major-new-publication
- Australian Human Rights Commission. "Technology and Human Rights". 2024. https://humanrights.gov.au/our-work/technology-and-human-rights
- Australian Human Rights Commission. "Australia Needs AI Regulation". 2024. https://humanrights.gov.au/about/news/australia-needs-ai-regulation
- Maiam nayri Wingara Aboriginal and Torres Strait Islander Data Sovereignty Collective. "Indigenous Data Sovereignty". https://www.indigitize.org/data-sovereignty
- Carroll, S. R., et al. "The CARE Principles for Indigenous Data Governance". Data Science Journal, 19(1), 43. 2020. https://datascience.codata.org/articles/10.5334/dsj-2020-043
- First Nations Information Governance Centre. "The First Nations Principles of OCAP®". 2024. https://fnigc.ca/ocap-training/
- Lovett, R., et al. "Good data practices for Indigenous data sovereignty and governance". In A. Daly, S. K. Devitt, & M. Mann (Eds.), Good Data (pp. 26-36). Institute of Network Cultures. 2019.
- Thorpe, A., et al. "A framework for operationalising Aboriginal and Torres Strait Islander data sovereignty in Australia: Results of a systematic literature review of published studies". EClinicalMedicine, 45, 101321. 2022. https://www.sciencedirect.com/science/article/pii/S2589537022000323
- Hardt, M., Price, E., & Srebro, N. "Equality of opportunity in supervised learning". Advances in Neural Information Processing Systems, 29, 3315-3323. 2016.
- Corbett-Davies, S., & Goel, S. "The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning". arXiv:1808.00023. 2018.
- Barocas, S., Hardt, M., & Narayanan, A. "Fairness and Machine Learning: Limitations and Opportunities". fairmlbook.org. 2019.
- Mitchell, S., et al. "Algorithmic Fairness: Choices, Assumptions, and Definitions". Annual Review of Statistics and Its Application, 8, 141-163. 2021.
