Most users cannot identify AI bias, even in training data | Penn State University
A study by researchers from Penn State and Oregon State University examined whether laypersons understand that unrepresentative training data can cause AI systems to perform in biased ways. Participants failed to notice the systematic bias in the training data, which used exclusively white faces to represent happy emotions and exclusively Black faces to represent unhappy emotions.
UNIVERSITY PARK, Pa. — When recognizing faces and emotions, artificial intel...
Read more at psu.edu