One of the criticisms of early AI applications is that they can ‘hard code’ the biases of their developers into systems that are supposed to remove bias from decision-making processes. A recent study reminds us, however, that they can also work to uncover biases.
The study revolves around a piece of legislation in Texas that required residents to present ID before being allowed to vote. The legislation was opposed on the basis that it would discriminate against minority voters.
The researchers developed an algorithm that could examine millions of publicly available records to determine whether voters had the right ID or not. The analysis revealed that fewer people lacked suitable ID than had originally been thought, but that the law did nonetheless disproportionately affect minorities.
Suitable ID
The system was capable of matching people on the electoral register with the ID required to vote using only their address, date of birth, gender and name. Indeed, this relatively narrow set of fields proved as effective for matching as a social security number. The team also classified each voter according to their ethnic background to ascertain whether the law was discriminatory.
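The study doesn't publish its matching code, but the core idea, deterministic record linkage on a handful of personal fields, can be sketched quite simply. The sketch below is illustrative only: the field names, normalisation rules and exact-match logic are assumptions, not the authors' actual method, which operated over millions of records and will have handled messier data.

```python
# A minimal record-linkage sketch (not the authors' actual code): match voter
# registrations to ID records using only name, date of birth, gender and address.
# Field names and normalisation rules here are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    name: str
    dob: str        # ISO format, e.g. "1980-04-17"
    gender: str     # "M" / "F"
    address: str

def normalise(r: Record) -> tuple:
    """Canonicalise fields so trivial formatting differences don't block a match."""
    return (
        " ".join(r.name.lower().split()),
        r.dob,
        r.gender.upper(),
        " ".join(r.address.lower().split()),
    )

def link(voters: list[Record], id_records: list[Record]) -> dict[Record, bool]:
    """For each registered voter, report whether a matching ID record exists."""
    id_keys = {normalise(r) for r in id_records}
    return {v: normalise(v) in id_keys for v in voters}

# Usage: flag voters on the register with no matching ID record (hypothetical data).
voters = [Record("Jane Public", "1980-04-17", "F", "12 Oak St, Austin TX")]
ids = [Record("JANE  PUBLIC", "1980-04-17", "f", "12 oak  st, austin tx")]
print(link(voters, ids))  # the two records match once case and spacing are normalised
```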
“In the last decade, states have been changing rules about registration, early voting, and voter ID,” the authors explain. “Voter ID is particularly controversial, because some of these laws seem to have been passed into law with a discriminatory intent.”
The discriminatory effect was clear because white registered voters were significantly more likely to have the required identification than African-American or Hispanic voters.
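Once each voter is labelled with an ethnic group, the disparity check itself reduces to comparing match rates across groups. A minimal sketch, with hypothetical group labels and data, follows the linkage example above:

```python
# Illustrative disparity check (hypothetical data, not the study's figures):
# compare the share of voters in each ethnic group with a matching ID record.
def match_rates(matched_by_group: dict[str, list[bool]]) -> dict[str, float]:
    """Fraction of voters in each group for whom an ID record was found."""
    return {g: sum(flags) / len(flags) for g, flags in matched_by_group.items()}

# Hypothetical outcome of the linkage step, keyed by ethnic group.
sample = {
    "white": [True, True, True, False],
    "hispanic": [True, False, True, False],
}
print(match_rates(sample))  # {'white': 0.75, 'hispanic': 0.5}
```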
The original legislation has since been updated in an attempt to overcome these issues, but it has yet to clear the courts, so it remains to be seen what form it will take when it eventually reaches the statute books.