
Rachel Morris (Concordia University)

Friday, October 3, 2025, 15:30 to 16:00
Burnside Hall Room 1104, 805 rue Sherbrooke Ouest, Montreal, QC, H3A 0B9, CA

Convergence Guarantees for Adversarially Robust Classifiers

Abstract:

Neural networks can be trained to classify images with high accuracy. However, researchers have discovered that well-targeted perturbations of an image can completely fool a trained classifier, even when the modified image is visually indistinguishable from the original. This has sparked many new approaches to classification that include an adversary in the training process: such an adversary can improve robustness and generalization at the cost of decreased accuracy and increased training time. In this presentation, I will explore the connection between a certain class of adversarial training problems and the Bayes classification problem for binary classification. In particular, robustness can be encouraged by adding a regularizing nonlocal perimeter term, providing a strong connection to classical studies of perimeter. Borrowing tools from geometric measure theory, I will show the Hausdorff convergence of adversarially robust classifiers to Bayes classifiers as the strength of the adversary decreases to 0. In this way, the theoretical results discussed in the presentation provide a rigorous comparison with the standard Bayes classification problem.
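To make the abstract's connection concrete, here is a minimal sketch of the standard formulation from the adversarial-training literature; the notation ($A$, $\mu$, $\varepsilon$, $\mathrm{Per}_\varepsilon$) is assumed here and is not taken from the talk itself. Identify a binary classifier with the set $A \subseteq \mathbb{R}^d$ of points it labels 1, and allow an adversary to perturb each input within distance $\varepsilon$. The adversarial risk of $A$ with respect to the data distribution $\mu$ is
\[
R_\varepsilon(A) \;=\; \mathbb{E}_{(x,y)\sim\mu}\Big[\sup_{\|x'-x\|\le\varepsilon}\big|\mathbf{1}_A(x') - y\big|\Big],
\]
which reduces to the Bayes risk $R_0(A)$ when $\varepsilon = 0$. In this setting the excess risk acts as a nonlocal perimeter penalty,
\[
R_\varepsilon(A) \;=\; R_0(A) \;+\; \varepsilon\,\mathrm{Per}_\varepsilon(A),
\]
so adversarial training can be read as Bayes classification regularized by $\mathrm{Per}_\varepsilon$, and the convergence result concerns minimizers $A_\varepsilon$ approaching Bayes classifiers in Hausdorff distance as $\varepsilon \to 0$.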

Speaker

Rachel Morris received her PhD from NC State this summer under the supervision of Ryan Murray. Her PhD work studied data-driven optimization problems through the lens of the calculus of variations and geometric measure theory. Currently, Rachel is a postdoc at Concordia University, working with Jason Bramburger and Simone Brugiapaglia.
