When MIT Media Lab researcher Joy Buolamwini discovers that most facial-recognition software fails to accurately identify darker-skinned faces, she launches an investigation into widespread bias in algorithms. As it turns out, artificial intelligence (AI) is not neutral. From facial scanning used for policing and surveillance to automated HR systems that mirror and magnify workplace prejudices, these technologies are built on fundamentally biased foundations. Emboldened by these troubling discoveries, Buolamwini joins a group of pioneering women to shed light on the underlying biases in the technology that shapes our lives and threatens our democracy.
Contact Spenser.Snarr@durangogov.org or call 970-375-3380 for more information.