What if the algorithm is racist?
As computers shift from being helpmates that tackle the drudgery of dense calculations and data handling to smart machines informing decisions, their potential for bias is increasingly an area of concern.
The algorithms aiding such decisions are complex, their inputs myriad, and their inner workings often proprietary information of the companies that create them. These factors can leave the human waiting on bail or a bank loan in the dark.
Experts gathered at Harvard Law School to examine the potential for bias as our decision-making intelligence becomes ever more artificial. The panel, “Programming the Future of AI: Ethics, Governance, and Justice,” was held at Wasserstein Hall as part of HUBweek, a celebration of art, science, and technology sponsored by Harvard, the Massachusetts Institute of Technology, Massachusetts General Hospital, and The Boston Globe.
Christopher Griffin, research director of the Law School’s Access to Justice Lab, described pretrial detention systems that calculate a person’s risk of flight or committing another crime — particularly a violent crime — in making bail recommendations.