Big data has the potential to both increase and decrease discrimination, depending on the ethics of its use, according to speakers at the Ford Foundation Fairness by Design event.

“Not using the data has highly discriminatory practices,” said U.S. chief data scientist DJ Patil in his keynote address at the event.

Julie Brill, a partner at Hogan Lovells and a former commissioner of the Federal Trade Commission, agreed with Patil, noting that the data-based credit score system replaced a “who you know” system that discriminated against people who lacked the right connections.

“This information could be used for good,” Brill said in a panel at the event. “The very same information could be used to harm consumers.”

Patil and the panelists agreed that problems can arise both from the data itself and from how it is used. For example, if the source data feeding a profiling algorithm segments people along racial, class, or gender lines, the algorithm inherits those biases.

“The concern is that people really were being segmented in that way,” Brill said. “There are so many scores out there. Have we determined that they have built-in biases?”
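
None of the scores Brill referred to were named, so the following is a minimal, hypothetical sketch in Python with synthetic data; every name and rate below is an assumption, not anything presented at the event. It illustrates her point: a score fit to historically biased labels reproduces the bias even when the two groups’ true behavior is identical.

```python
import random

random.seed(1)

# Hypothetical setup: historical "creditworthy" labels were assigned with a
# bias against group B, independent of actual repayment behavior.
def historical_label(group, repaid_on_time):
    if not repaid_on_time:
        return 0
    # Biased labeling: group B borrowers who repaid were often still marked bad.
    return 1 if group == "A" or random.random() < 0.5 else 0

data = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    repaid = random.random() < 0.7  # identical true behavior in both groups
    data.append((group, historical_label(group, repaid)))

# A "score" that learns the approval base rate per group from these labels
# (a stand-in for any classifier fit to the same data) inherits the bias.
for g in ("A", "B"):
    labels = [y for grp, y in data if grp == g]
    print(f"Learned approval rate, group {g}: {sum(labels) / len(labels):.0%}")
# Prints roughly 70% for group A vs. 35% for group B, despite identical
# repayment rates: the built-in bias survives the modeling step.
```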

“Algorithmic opacity has consequences, some of them very negative,” said Alvaro Bedoya, moderator of the event’s civil rights panel, executive director of the Center on Privacy & Technology, and adjunct professor of law at Georgetown University Law Center. He added that even entirely innocent data can be used for discriminatory purposes.

For example, he said, if a company owner is told that employees who live close to the office tend to stay at the company longer, the owner may start hiring only within a certain geographic area. If predominantly one ethnicity lives in that area, the result is discriminatory hiring.
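
Bedoya’s scenario is an instance of what fairness researchers call proxy discrimination: a seemingly neutral feature stands in for a protected attribute. The short sketch below is a hypothetical illustration with synthetic numbers, none of which come from the event; the hiring rule never sees group membership, yet it produces a large disparity because proximity to the office correlates with it.

```python
import random

random.seed(0)

# Hypothetical population: group membership correlates with where people live.
def make_person():
    group = random.choice(["A", "B"])
    # Group A mostly lives near the office, group B mostly farther away.
    near_office = random.random() < (0.8 if group == "A" else 0.2)
    return group, near_office

population = [make_person() for _ in range(10_000)]

# "Neutral" hiring rule: prefer candidates who live near the office, because
# retention data says they stay longer. Group membership is never consulted.
hired = [(g, near) for g, near in population if near]

def hire_rate(group):
    applicants = sum(1 for g, _ in population if g == group)
    hires = sum(1 for g, _ in hired if g == group)
    return hires / applicants

print(f"Hire rate, group A: {hire_rate('A'):.0%}")  # roughly 80%
print(f"Hire rate, group B: {hire_rate('B'):.0%}")  # roughly 20%
```

The rule is facially neutral, but the outcome is the fourfold disparity Bedoya warned about.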

Patil said one way to reduce discrimination through data is to train data officers with a focus on the ethics of data alongside how best to employ it.

“We need to think about fairness by design when it comes to big data,” he said. Patil also focused on how data usage could help improve law enforcement practices.

“Deploying more officers has a disproportionate benefit to [curbing] domestic violence,” Patil said. He also said that data shows that officers who are deployed to cases of domestic violence or suicide cases need time to emotionally recover before going back out on calls. But in the case of the dispatch systems used to send officers out on cases, “there is no data use.”

Police departments also have trouble releasing the data they do have to the public, particularly in divisive areas like racial profiling and police violence.

“We have really bad data around officer-involved shootings in this country,” said Clarence Wardell III, a digital services expert at the U.S. Digital Service, speaking on a panel about big data and criminal justice. “We don’t have good data on policing in this country.”

Wardell was part of the effort behind the Police Data Initiative, which launched about a year ago and encourages police departments to release more data to the public.

“We’re asking departments to commit to releasing at least three data sets around police-citizen interactions. It’s a commitment to work toward releasing data,” Wardell said. “We’re starting to see best practices.”

“There has to be a dialogue,” Patil said. “This is a conversation that we can’t just afford to have at a normal pace.”
