I use audits and lab-in-the-field experiments to study the authoritarian politics of AI.

In a similar spirit to résumé audits of gender bias and audits of government agencies, I use audit experiments to study the political biases of AI systems. In essence, an audit entails poking at an institution repeatedly, sending requests to government officials or queries to AI systems while varying selected attributes (e.g., whether the citizen is a member of the ruling party). In my paper on automated discrimination, I vary the ethnicity of criminal defendants to audit a commercial AI system used to assist judges in criminal sentencing, and I find that the AI hands down longer sentences to selected ethnic minorities, even when all relevant facts about the crime are held constant. In contrast to conventional audits, auditing AI systems has the advantage of not crowding out resources meant for real people: a query to a model does not displace a request from an actual citizen.
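To make the design concrete, here is a minimal sketch of a paired audit in Python. Everything in it is hypothetical: `query_sentencing_model` is a placeholder for the commercial system's interface, the case attributes are invented, and a bias is injected only so the audit has something to detect. The point is the structure, which carries over to the real study: hold the facts constant, vary only the attribute of interest, and compare outputs across many matched queries.

```python
import random
import statistics

# Hypothetical stand-in for the commercial sentencing system; in a real audit
# this would be a call to the deployed tool's interface.
def query_sentencing_model(case: dict) -> float:
    """Return a predicted sentence length in months for one case description."""
    base = 24 + 6 * case["prior_convictions"] + 12 * int(case["violent"])
    # Bias injected purely for illustration, so the audit has something to detect.
    penalty = 4.0 if case["ethnicity"] == "minority" else 0.0
    return base + penalty + random.gauss(0, 1)

def run_audit(n_cases: int = 500) -> None:
    """Send matched pairs of queries that differ only in the defendant's ethnicity."""
    random.seed(0)
    gaps = []
    for _ in range(n_cases):
        # Hold the facts of the crime constant within each matched pair.
        facts = {
            "prior_convictions": random.randint(0, 5),
            "violent": random.random() < 0.3,
        }
        sentence = {
            ethnicity: query_sentencing_model({**facts, "ethnicity": ethnicity})
            for ethnicity in ("majority", "minority")
        }
        gaps.append(sentence["minority"] - sentence["majority"])
    mean_gap = statistics.mean(gaps)
    se = statistics.stdev(gaps) / len(gaps) ** 0.5
    print(f"Mean sentencing gap (months): {mean_gap:.2f} (SE {se:.2f})")

if __name__ == "__main__":
    run_audit()
```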

For lab-in-the-field experiments, I recreate commercial AI systems (e.g., for censorship) and vary key components of the recreation (e.g., the make-up of the training data) to study how each change affects the system’s performance. In my other paper, for example, I train my own censorship AI models to approximate commercial systems and find that 1) AI’s accuracy in censorship decreases with more pre-existing censorship and repression; 2) the drop in AI’s performance is larger during times of crisis; but 3) the existence of the free world can help boost AI’s ability to censor. (See a similar design in political science here.)
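The logic of the manipulation can also be sketched in code. The snippet below, a rough illustration rather than the paper's pipeline, trains the same off-the-shelf classifier on corpora that differ only in how much sensitive content has already been censored away, then evaluates each model on the same uncensored test distribution. The synthetic vocabulary, the censorship rates, and the scikit-learn model are all stand-ins for the actual data and systems.

```python
import random
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

random.seed(0)

# Toy vocabularies; the real experiment uses actual social media corpora.
SENSITIVE = [f"sensitive_topic_{i}" for i in range(200)]
NEUTRAL = [f"neutral_topic_{i}" for i in range(200)]

def make_post() -> tuple[str, int]:
    """Generate a synthetic post and its 'should be censored' label."""
    if random.random() < 0.3:
        words = random.sample(SENSITIVE, 2) + random.sample(NEUTRAL, 6)
        return " ".join(words), 1
    return " ".join(random.sample(NEUTRAL, 8)), 0

def make_corpus(n: int, censorship_rate: float) -> tuple[list[str], list[int]]:
    """Draw posts, dropping a share of sensitive ones to mimic prior censorship."""
    posts, labels = [], []
    for _ in range(n):
        text, label = make_post()
        if label == 1 and random.random() < censorship_rate:
            continue  # already censored, so it never enters the training data
        posts.append(text)
        labels.append(label)
    return posts, labels

def evaluate(censorship_rate: float) -> float:
    """Train on a corpus shaped by prior censorship; test on the full distribution."""
    train_x, train_y = make_corpus(4000, censorship_rate)
    test_x, test_y = make_corpus(2000, 0.0)
    vectorizer = TfidfVectorizer()
    model = LogisticRegression(max_iter=1000)
    model.fit(vectorizer.fit_transform(train_x), train_y)
    preds = model.predict(vectorizer.transform(test_x))
    return accuracy_score(test_y, preds)

for rate in (0.0, 0.5, 0.9):
    print(f"prior censorship rate {rate:.0%}: test accuracy = {evaluate(rate):.3f}")
```

Evaluating on the uncensored distribution is what connects the manipulation to the quantity of interest: a deployed censorship model faces the full stream of speech, but it only ever learns from whatever earlier rounds of censorship left visible.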

By nature, experiments are difficult to run in some countries and on some topics. I hope to bring some new ideas to these difficult contexts.