The National Institute of Standards and Technology (NIST.gov) March 16 release of “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence”
offers background and guidance [on a major source of risk relating] to the trustworthiness of [artificial intelligence]. [Beyond] the machine learning processes and data used to train AI software, bias is related to broader societal factors – human and systemic-institutional in nature – that influence how AI technology is developed and deployed….
Rooting out bias in artificial intelligence will require addressing human and systemic biases as well….
...A crucial principle, for both humans and machines, is to avoid bias in order to prevent discrimination. It is critical that AI systems are trained on unbiased data, using algorithms that can be explained. The purpose of this project is to understand, examine, and mitigate bias in AI systems...
We understand, of course, that this bias issue covers a realm of possibly unconscious prejudices, stereotypes, assumptions, beliefs, and so on.
Even in individuals anywhere across the sociopolitical spectrum, whether inside the industry or among its users (e.g., you and me, babe), who may be absolutely certain their thinking harbors no such flaws.
That right there is one of ’em.
No one is exempt, because the capacity for habitual thought itself is biologically hard-wired, an evolutionary advantage sparing us the literal exhaustion and unaffordable time-sink of thinking from scratch for every situation encountered.
It’s a useful capacity. It just has some drawbacks that require routine maintenance for optimal operation. Which, realistically, will still not be perfect. But far better than otherwise.
NIST appears to be pairing human and AI system maintenance in this regard. All to the good!
(Use this page to subscribe to NIST email updates.)
Thoughts? ;)