Common issues associated with the irresponsible use of AI include AI that is misused by stakeholders, AI that causes harm or unfairness to a group of people, and AI that damages the environment or society.
Irresponsible or unethical AI can be grouped into the following categories:
Unfair or biased AI
These are usually systems that exhibit bias in their decision-making processes, often leading to discriminatory or unfair outcomes for minority groups. The bias may be caused by skewed training data or by flawed algorithms that disproportionately impact certain groups (a simple check for such disparities is sketched after the examples below).
Examples of unfair or biased AI include:
AI systems used to select the strongest applicants to an academic institution: These might create a self-fulfilling prophecy, whereby groups that are often associated with poorer academic performance are placed at an unfair disadvantage that further reinforces the status quo.
AI systems used for predicting sentence length for convicts: These might produce predictions that are unfair to groups that are statistically overrepresented in prison populations, even though correlation does not imply causation.
AI systems used by tax authorities to identify fraudulent behavior based on features that can be considered biased.
Facial recognition AI, where white male subjects are recognized significantly more accurately than subjects with other skin tones and genders. The documentary Coded Bias discusses facial recognition algorithms that don't see dark-skinned faces accurately, a phenomenon that MIT Media Lab researcher Joy Buolamwini also concluded in her research.
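To make auditing for this kind of bias concrete, here is a minimal sketch of a demographic-parity check, which compares the rate of positive decisions across groups. The group names and decisions below are purely illustrative assumptions, not data from any real system.

```python
# Minimal demographic-parity check: compare the rate of positive
# decisions (e.g., "accept") across protected groups.
from collections import defaultdict

def selection_rates(records):
    """Return the fraction of positive decisions per group."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    return {group: positives[group] / totals[group] for group in totals}

# Illustrative decisions (1 = positive outcome, 0 = negative outcome).
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

print(selection_rates(decisions))
# e.g. {'group_a': 0.67, 'group_b': 0.33} (rounded)
```

A large gap between groups' selection rates does not prove discrimination by itself, but it is a strong signal that the system deserves a closer audit.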
AI with negative environmental impact
This generally includes any AI model that is extremely resource-intensive to train (and such models are becoming more and more common). These models damage the environment through their substantial energy usage.
As discussed in a previous blog about AI's carbon footprint, researchers who assessed the energy cost and carbon footprint of four NLP models found that at worst, the process of training an algorithm can emit more than 626,000 pounds (or ~284 metric tons) of carbon dioxide equivalent. That's a lot considering that in 2016, the average person in the Netherlands emitted about 10 tons of CO2 equivalent per year.
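As a rough illustration of where such estimates come from, training emissions can be approximated as the energy drawn during training multiplied by the grid's carbon intensity. All figures in the sketch below are made-up assumptions chosen for illustration, not measurements of any real training run.

```python
# Back-of-the-envelope estimate of training emissions:
# energy consumed during training x grid carbon intensity.
# Every constant below is an illustrative assumption.

GPU_POWER_KW = 0.3           # assumed average draw per GPU (300 W)
NUM_GPUS = 512               # assumed cluster size
TRAINING_HOURS = 24 * 14     # assumed two-week training run
PUE = 1.5                    # assumed data-center power usage effectiveness
KG_CO2_PER_KWH = 0.4         # assumed grid carbon intensity

energy_kwh = GPU_POWER_KW * NUM_GPUS * TRAINING_HOURS * PUE
emissions_tonnes = energy_kwh * KG_CO2_PER_KWH / 1000  # kg -> metric tons

print(f"Energy used: {energy_kwh:,.0f} kWh")
print(f"Emissions:   {emissions_tonnes:,.1f} metric tons of CO2e")
```

Even this modest hypothetical run lands in the tens of metric tons, which helps explain how the largest training runs can reach hundreds.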
Unaccountable AI
Generally, these are AI systems that lack transparency, making it difficult for humans to understand how they make decisions or to hold the systems accountable for their actions.
Examples of unaccountable AI include:
Autonomous weapon systems that work without human intervention, making it difficult to determine who is responsible for any harm or damage that they cause.
Chatbots that may provide misleading or even incorrect information to users without any accountability for the information shared by the system.
Financial trading algorithms that can cause market instability or engage in unethical practices, with no mechanism in place to identify or correct such behavior.
Unethical autonomous AI
Autonomous systems can operate independently and make decisions without any human intervention, but that very independence sometimes puts them at risk of behaving unethically.
Examples of unethical autonomous AI include:
Self-driving cars that use AI algorithms to decide when to accelerate, brake, and steer based on data from sensors and cameras: These may face moral dilemmas, such as deciding whether to protect the passengers in the car or pedestrians in risky situations.
AI systems that automatically filter out job applicants: These can be difficult to audit, and the quality of their decisions may degrade if their data is not carefully curated and kept up to date through human input.
Drones programmed to fly autonomously and perform tasks such as mapping, surveying, and inspecting infrastructure: These have also been unethically programmed for tasks such as surveillance or hunting.
Opaque AI
This term refers to AI systems that are too complex for humans to understand. The term "black box AI" specifically refers to opaque machine learning models, where it is similarly difficult to understand how the model arrives at its decisions; one common response, sketched after the examples below, is to probe such a model from the outside.
Examples of opaque AI include:
Credit scoring algorithms: These may use complex calculations to determine a user's creditworthiness but do not always provide a clear explanation of the methodology and the factors considered.
Deep neural networks trained to perform complex tasks: These are often too difficult to interpret because they involve thousands (or even millions) of parameters.
Complex decision trees: These can also be an example of opaque AI, as they become difficult to interpret when they contain many branches or decision nodes.
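As a minimal sketch of probing a black box from the outside, the snippet below applies permutation importance (available in scikit-learn) to a synthetic stand-in model: shuffling one feature at a time and measuring how much performance drops reveals which inputs drive the model's decisions. The model and data are illustrative assumptions, not a real credit-scoring system.

```python
# Probe an opaque model with permutation importance: shuffle each
# feature in turn and measure how much the model's score degrades.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for an opaque scoring model and its data.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Features whose shuffling hurts accuracy the most drive the decisions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```

Techniques like this do not make the model itself transparent, but they at least expose which inputs most influence its outputs.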
Legally non-compliant AI
This refers to AI systems that do not comply with applicable laws or regulations, potentially resulting in legal and financial consequences.
Examples of legally non-compliant AI include:
Healthcare AI that violates patient privacy by collecting or processing patient data without proper consent or in violation of patient privacy laws.
Some facial recognition systems that collect and store personal data without obtaining proper consent, violating laws such as the GDPR.
Deep fakes, which use AI to generate convincing synthetic images or videos of people and are usually created for malicious purposes, such as political manipulation and spreading false news. These may have harmful social effects and undermine public trust.