
Aren’t some people marginalised by AI applications?

Data bias

The inherent bias of the data sets used to train AI algorithms can result in some groups or individuals being treated unjustly, with no redress available from human arbiters.

Getting left behind

There are also risks associated with all-pervasive AI adoption when there are no viable non-digital alternatives and no option to opt out. The categories of risk as they currently stand do not include the risk of people being excluded from society, or of exclusionary practices that leave certain people or groups behind. We must protect the vulnerable and marginalised, recognising that individual categorisation and optimisation (not just social credit scoring) may cause wider injustice to a whole group of people.
