Aren’t some people marginalised by AI applications?
Posted November 13, 2021
Updated November 16, 2021
By admin
Data bias
The inherent bias of the data sets used to train AI algorithms can result in some groups or individuals being treated unjustly, often with no redress from a human arbiter.
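To make the mechanism concrete, the sketch below is a deliberately toy Python example (the groups, the numbers, and the decide function are all hypothetical, not taken from any real system): a decision process trained purely on skewed historical records reproduces that skew in every new decision, which is the kind of disparate impact that can go unchallenged when no human reviews it.

```python
# Hypothetical illustration (toy data, invented numbers): a decision system
# trained only on historically skewed records reproduces that skew.

# Toy historical records: (group, approved). Group "B" was approved far less
# often in the past, purely as an artefact of how the data was collected.
history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 30 + [("B", False)] * 70)

# A naive "model" that learns only the historical approval rate per group
# and approves anyone whose group rate exceeds 50%.
rates = {}
for group in ("A", "B"):
    outcomes = [approved for g, approved in history if g == group]
    rates[group] = sum(outcomes) / len(outcomes)

def decide(group: str) -> bool:
    """Approve or reject based solely on the applicant's group history."""
    return rates[group] > 0.5

print(rates)                      # {'A': 0.8, 'B': 0.3}
print(decide("A"), decide("B"))   # True False -- every "B" applicant is rejected
```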
Getting left behind
There are also risks associated with all-pervasive AI adoption when there are no viable non-digital alternatives or options to opt out. The risk categories as they currently stand do not cover social exclusion or exclusionary practices, which means certain people or groups of people get left behind. We must protect the vulnerable and marginalised, recognising that individual categorisation and optimisation (not just social credit scoring) may cause wider injustice to a whole group of people.