Evaluating and Understanding Adversarial Robustness in Deep Learning

Download or read Evaluating and Understanding Adversarial Robustness in Deep Learning, written by Jinghui Chen. This book was released in 2021 with a total of 175 pages. Available in PDF, EPUB and Kindle.
Evaluating and Understanding Adversarial Robustness in Deep Learning
Author : Jinghui Chen
Publisher :
Total Pages : 175
Release : 2021
ISBN-10 : OCLC:1291135695
ISBN-13 :

Book Synopsis: Evaluating and Understanding Adversarial Robustness in Deep Learning by Jinghui Chen

Book excerpt: Deep Neural Networks (DNNs) have made many breakthroughs in different areas of artificial intelligence. However, recent studies show that DNNs are vulnerable to adversarial examples: a tiny perturbation of an image, almost invisible to human eyes, can mislead a well-trained image classifier into misclassification. This raises serious security and trustworthiness concerns about the robustness of Deep Neural Networks in solving real-world challenges. Researchers have been working on this problem for a while, and it has led to a vigorous arms race between heuristic defenses, which propose ways to defend against existing attacks, and newly devised attacks that are able to penetrate such defenses. While the arms race continues, it becomes ever more crucial to evaluate model robustness accurately and efficiently under different threat models, and to identify "falsely" robust models that may give us a false sense of robustness. On the other hand, despite the rapid development of heuristic defenses, their practical robustness is still far from satisfactory, and there has been little algorithmic improvement in defenses in recent years. This suggests that we still lack an understanding of the fundamentals of adversarial robustness in deep learning, which may prevent us from designing more powerful defenses.

The overarching goal of this research is to enable accurate evaluation of model robustness under different practical settings, and to establish a deeper understanding of other factors in the machine learning training pipeline that might affect model robustness. Specifically, we develop efficient and effective Frank-Wolfe attack algorithms under white-box and black-box settings, as well as a hard-label adversarial attack, RayS, which is capable of detecting "falsely" robust models. In terms of understanding adversarial robustness, we theoretically study the relationship between model robustness and data distributions, between model robustness and model architectures, and between model robustness and loss smoothness. The techniques proposed in this dissertation form a line of research that deepens our understanding of adversarial robustness and can further guide us in designing better and faster robust training methods.
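The excerpt names Frank-Wolfe attack algorithms as one of the dissertation's evaluation tools. As a rough illustration of the general Frank-Wolfe idea only (a minimal sketch, not the author's exact algorithm; the function name, step-size schedule, and hyperparameters eps and steps are assumptions), here is a projection-free white-box attack over an L-infinity ball in PyTorch:

# A minimal sketch of a Frank-Wolfe-style L-infinity white-box attack.
# The function name, step-size schedule, and hyperparameters below are
# illustrative assumptions, not the dissertation's exact algorithm.
import torch
import torch.nn.functional as F

def frank_wolfe_linf_attack(model, x, y, eps=8 / 255, steps=20):
    """Approximately maximize the loss over the L-inf ball of radius eps around x."""
    x_adv = x.clone().detach()
    for t in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Linear maximization oracle: over {v : ||v - x||_inf <= eps},
        # <grad, v> is maximized at v = x + eps * sign(grad).
        v = x + eps * grad.sign()
        gamma = 2.0 / (t + 2)  # classic Frank-Wolfe step-size schedule
        # A convex combination of in-ball points stays in the ball,
        # so no explicit projection step is needed.
        x_adv = ((1 - gamma) * x_adv + gamma * v).clamp(0, 1).detach()
    return x_adv

# Hypothetical usage: x_adv = frank_wolfe_linf_attack(model, images, labels)

The appeal of a Frank-Wolfe update in this setting is that each iterate is a convex combination of points inside the constraint ball, so the iterates remain feasible without the projection step that PGD-style attacks require.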


Evaluating and Understanding Adversarial Robustness in Deep Learning Related Books

Advances in Reliably Evaluating and Improving Adversarial Robustness
Language: en
Pages: 0
Authors: Jonas Rauber
Categories:
Type: BOOK - Published: 2021 - Publisher:

Machine learning has made enormous progress in the last five to ten years. We can now make a computer, a machine, learn complex perceptual tasks from data rather than programming it explicitly.
Adversarial Robustness for Machine Learning
Language: en
Pages: 300
Authors: Pin-Yu Chen
Categories: Computers
Type: BOOK - Published: 2022-08-20 - Publisher: Academic Press

Adversarial Robustness for Machine Learning summarizes the recent progress on this topic and introduces popular algorithms for adversarial attack, defense, and verification.
Evaluating and Certifying the Adversarial Robustness of Neural Language Models
Language: en
Pages: 0
Authors: Muchao Ye
Categories:
Type: BOOK - Published: 2024 - Publisher:

Language models (LMs) built on deep neural networks (DNNs) have achieved great success in various areas of artificial intelligence and have played an increasingly important role in many applications.
Improved Methodology for Evaluating Adversarial Robustness in Deep Neural Networks
Language: en
Pages: 93
Authors: Kyungmi Lee (S. M.)
Categories:
Type: BOOK - Published: 2020 - Publisher:

Deep neural networks are known to be vulnerable to adversarial perturbations, which are often imperceptible to humans but can alter the predictions of machine learning models.