Ph.D. Student: Baskin Chaim
Subject: Designing Deep Neural Networks for Efficient and Robust
Department: Department of Computer Science
Supervisors: Prof. Alexander Bronstein, Prof. Avi Mendelson
Deep neural networks (DNNs) have become a common tool for solving complex tasks in various fields such as computer vision, natural language processing, and recommendation systems. Despite the recent progress made in enhancing DNN performance, two major obstacles hinder their practical applicability in some applications: (i) their energy-expensive deployment on embedded platforms, and (ii) their susceptibility to malicious adversarial perturbations. In this thesis, we present our works focusing on different aspects of both of these problems. Chapters 2 and 3 present training-aware and post-training quantization approaches, which represent the DNN's parameters and feature maps in fixed low-bit formats. Chapter 4 introduces a neural architecture search that finds optimal quantization bitwidths of neural network parameters for a given complexity constraint. Chapters 5 and 6 present two entropy coding-based methods for reducing inference-time memory bandwidth requirements. The first method does not require any fine-tuning, while the second does and, in exchange, provides a significant further bandwidth reduction with negligible additional complexity or accuracy loss. Chapter 7 presents a simple framework that helps design efficient hardware for quantized neural networks. In addition, in Chapter 8 we show how quantization techniques can inspire new approaches to better cope with adversarial attacks, and demonstrate how an adversarially pre-trained classifier can boost adversarial robustness by smoothing between different levels of input noise. Finally, Chapter 9 introduces a simple single-node, minimal-attribute-change perturbation that attacks social graph-based DNNs in a significantly more harmful way than the previously studied edge-based attacks.
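To make the notion of a fixed low-bit representation concrete, the following is a minimal sketch of a symmetric uniform quantizer in NumPy. It is illustrative only and does not reproduce the specific training-aware or post-training schemes developed in the thesis; the function names and the choice of a per-tensor max-based scale are assumptions for the example.

```python
import numpy as np

def uniform_quantize(x, bits=8):
    """Map a float tensor to signed low-bit integer codes plus a scale.

    Illustrative symmetric uniform quantization, not the thesis's method.
    """
    qmax = 2 ** (bits - 1) - 1                 # e.g. 127 for 8-bit signed
    scale = np.abs(x).max() / qmax             # spread the dynamic range over [-qmax, qmax]
    dtype = np.int8 if bits <= 8 else np.int32
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(dtype)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from integer codes."""
    return q.astype(np.float32) * scale

# Round-trip a random weight tensor through the 8-bit representation.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4)).astype(np.float32)
q, s = uniform_quantize(x, bits=8)
x_hat = dequantize(q, s)
```

After the round trip, every element of `x_hat` lies within half a quantization step (`s / 2`) of the original, which is the usual accuracy-versus-storage trade-off such low-bit formats make.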