Algorithms and Discrimination
Algorithmic bias is not confined to search engine results and social media; it can be found across every platform we use. The seam between real life and our digital spaces grows thinner every day, and that shift significantly affects our civil liberties, finances and employment prospects.
We are already seeing law enforcement agencies incorporate facial recognition technology into body-worn cameras to track citizens' faces. Criminal justice systems use predictive risk algorithms to decide who is granted bail. Universities use similar algorithms to decide who gets accepted, employers to decide who gets hired, and banks and financial institutions to determine who is approved for a loan.
Algorithmic bias in these areas could result in accidental privacy and free speech violations. It could also reinforce existing social biases around race, class, gender, sexual orientation and ethnicity.
Artificial intelligence is not inherently more accurate, fairer, or less biased than the humans who build and train it, and how we treat algorithms and artificial intelligence needs to account for this reality. This is particularly true when the data used to train these systems mirrors existing social inequalities.
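A toy sketch can make this concrete. The hypothetical "hiring" data and the trivial majority-vote model below are invented for illustration; they stand in for any system that learns from historical decisions. Because the past labels encode a bias against one group, the model faithfully reproduces that bias for equally qualified candidates:

```python
from collections import Counter

# Hypothetical historical hiring records: (group, qualified, hired).
# The labels mirror a past bias: equally qualified candidates in
# group "B" were hired far less often than those in group "A".
history = [
    ("A", True, True), ("A", True, True), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", True, True),
]

def train(records):
    """'Learn' the majority hiring outcome for each group — a
    stand-in for any model that picks up group membership as a signal."""
    by_group = {}
    for group, _qualified, hired in records:
        by_group.setdefault(group, Counter())[hired] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

model = train(history)

# Two equally qualified candidates receive different predictions,
# because the model reproduces the bias baked into its training labels.
print(model["A"])  # True: predicted "hire"
print(model["B"])  # False: predicted "reject"
```

The point is not the model's sophistication but its fidelity: nothing in the code is "prejudiced", yet the output discriminates, because the data did.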
“Datacentric technologies are vulnerable to bias and abuse. As a result, we must demand more transparency and accountability… If we fail to make an ethical and inclusive AI, we risk losing gains made in civil rights and gender equity under the guise of machine neutrality.” — Joy Buolamwini