To build trust, researcher finds errors in AI
For artificial intelligence (AI) to deliver its potential benefits to society, people have to trust it. Building that trust is the focus of computer scientist Nicolas Papernot’s research, now supported by a 2022 Sloan Fellowship.
“I basically study the intersection of AI or machine learning with computer security, privacy and trust in general,” the University of Toronto professor explains. “We’re trying to understand when machine learning algorithms fail, so that we can improve the trust that we humans and society in general have when deploying these machine learning algorithms, and essentially relying on their prediction.”
For example, machine learning is used to recognize our voices when we talk to virtual assistants on phones or smart speakers. “And what we found in one of our projects is that if you speak through a tube to the voice assistant, then it will recognize your voice as being the voice of another person. You can even choose the length of the tube, to induce the model into thinking you’re a specific person.
“That basically shows that the models have very different ways of recognizing patterns in the data they process than we would as humans. And so there’s this kind of semantic gap between how they make predictions and how we do, and malicious entities are able to exploit that.”
In another example, Dr. Papernot’s team demonstrated that, under certain conditions, an image classifier would recognize a stop sign as a yield sign.
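The stop-sign result belongs to a broader class of attacks known as adversarial examples. The toy sketch below illustrates the idea with a hypothetical linear classifier and the fast gradient sign method; the weights, image, and labels are invented for illustration and are not from the team’s actual experiments.

```python
import numpy as np

# Toy illustration of an adversarial example, in the spirit of the
# stop-sign/yield-sign result described above. The "classifier" is a
# hypothetical linear model over a 100-pixel image.
rng = np.random.default_rng(0)

x = rng.uniform(0.0, 1.0, size=100)   # a "stop sign" image, pixels in [0, 1]
w = rng.normal(size=100)              # toy trained weights
b = w @ x - 0.5                       # bias chosen so x sits 0.5 above the boundary

def predict(image):
    """Return the model's label for an image."""
    return "stop" if w @ image - b > 0 else "yield"

# Fast gradient sign method (Goodfellow et al.): nudge every pixel by
# at most eps in the direction that lowers the class score. For a
# linear score w.x - b, that direction is simply -sign(w).
eps = 0.03
x_adv = x - eps * np.sign(w)

print(predict(x))      # "stop"
print(predict(x_adv))  # "yield" -- flipped by a small perturbation
```

No pixel changes by more than 0.03 on a 0-to-1 scale, yet the label flips: the model’s decision rule diverges from human perception exactly where the “semantic gap” lies.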
His work focuses on understanding what must change in these algorithms, with the aim of resolving the security and privacy issues that undermine our trust in AI. The public’s reservations about the technology can limit its use in important areas like medicine and public services.
“When we apply machine learning, we currently have a poor understanding of when we can trust that the predictions are correct. So that’s one important reason why we are working on this: to have either fail-safe mechanisms that tell us when a prediction is going to be incorrect, or ways to ensure that we are actually making a correct prediction.”
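One simple form such a fail-safe can take is a classifier that abstains when its confidence is low rather than returning a prediction it may get wrong. The threshold and probabilities below are illustrative assumptions, not a method attributed to Papernot’s group.

```python
import numpy as np

# Sketch of a reject-option classifier: return a label only when the
# model's top predicted probability clears a confidence threshold,
# otherwise abstain (e.g. to defer to a human). Threshold is a toy value.
def predict_or_abstain(probs, threshold=0.9):
    """Return the top class index, or None if the model is unsure."""
    probs = np.asarray(probs)
    top = int(np.argmax(probs))
    return top if probs[top] >= threshold else None

print(predict_or_abstain([0.97, 0.02, 0.01]))  # 0 (confident prediction)
print(predict_or_abstain([0.55, 0.40, 0.05]))  # None (abstain)
```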
Dr. Papernot says winning the Sloan Fellowship is “very humbling.
“It also gives me confidence that we can continue looking for problems that are higher risk in terms of the research that we’re trying to do,” he says. “Because that eventually is how we make the most progress. And so having this external validation that we’re thinking about the right problems is always helpful when we’re taking a bit more risk.”