
Oct 3, 2022

The major societal challenge posed by artificial intelligence (AI) is that its algorithms are often trained on biased data. This fundamental problem has enormous implications in our criminal justice system, workplaces, schools, healthcare industry, and housing sector. The persistence of racism, sexism, ableism, and other forms of discrimination demonstrates the tendency of AI systems to reflect the biases of the people who built them. 

Critical deficiencies in algorithmic surveillance technologies reproduce the same inequities that we have seen evolve decade after decade: AI systems carry the same biases as the people who built them.

Lydia X. Z. Brown of the Center for Democracy & Technology joins us to recommend policy and systemic solutions to address these critically important challenges.

Bio


Lydia X. Z. Brown is a Policy Counsel with CDT’s Privacy and Data Project, focused on disability rights and algorithmic fairness and justice. Their work has investigated algorithmic harm and injustice in public benefits determinations, hiring algorithms, and algorithmic surveillance that disproportionately impacts disabled people, particularly multiply marginalized disabled people.


Website

Twitter

LinkedIn



Resources

Ableism And Disability Discrimination In New Surveillance Technologies: How new surveillance technologies in education, policing, health care, and the workplace disproportionately harm disabled people, Center for Democracy and Technology (2022), https://cdt.org/insights/ableism-and-disability-discrimination-in-new-surveillance-technologies-how-new-surveillance-technologies-in-education-policing-health-care-and-the-workplace-disproportionately-harm-disabled-people/ (last visited Sep 30, 2022).