TU BRAUNSCHWEIG

Attacks against Machine Learning

Overview

Semester: Winter 2017/2018
Course type: Block Seminar
Lecturer: Prof. Dr. Konrad Rieck
Assistants: Erwin Quiring
Audience: Informatik Master, Wirtschaftsinformatik Master
Credits: 5 ECTS
Hours: 2
Language: English or German
Capacity: max. 8 students
Room: BRICS 107/108

Schedule

 Date                    Step
 17.10., 15:00 - 16:30   Primer on academic writing, assignment of topics
 30.10. - 03.11.         Arrange an appointment with your assistant
 01.12.                  Submit your paper
 19.12.                  Submit reviews of two fellow students' papers
 08.01.                  Submit the camera-ready version of your paper
 25.01.                  Presentations with pizza

Description

Machine learning is increasingly used in security-critical applications, such as autonomous driving, face recognition and malware detection. Most learning methods, however, have not been designed with security in mind and thus are vulnerable to different types of attacks.

An attacker, for instance, can mislead a spam classifier by using synonyms or slightly modified words when writing spam emails. Similarly, an attacker may attach stickers to stop signs so that autonomous cars misclassify the signs and fail to stop.

In this seminar, we study the field of adversarial machine learning and discuss attacks against learning methods, analyze corresponding defenses and investigate their impact on real-world systems.

Requirements

The seminar is organized like a real academic conference. You prepare a written paper (in German or English) of 8-10 pages in ACM double-column style on your selected topic.

After submitting your paper to our conference system, you will write short reviews of two papers submitted by your fellow students, giving them feedback on how to improve their work. You will then have time to revise your own paper based on the reviews you receive.

Last but not least, you will give a 20-25 minute talk about your paper, and we will provide drinks and pizza to enjoy the talks at our small conference.

Contact

The seminar is organized by the Institute of System Security. For questions and further details, please contact the lecturer or the assistant listed above.

Seminar Topics

▸ Machine learning against machine learning

It was only a matter of time until machine learning would be used against machine learning. Learning models can leak information about which records were used for training. This can have a severe privacy impact, e.g. if medical databases were used. In this paper, you examine these membership inference attacks.
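To give a first flavor, here is a minimal sketch of a confidence-based membership test, assuming a hypothetical target model queried locally via scikit-learn; the shadow-model attack by Shokri et al. that this topic covers is considerably more elaborate.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical target model: half of the data are training records
    # ("members"), the other half were never seen ("non-members").
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_in, y_in, X_out = X[:1000], y[:1000], X[1000:]
    target = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_in, y_in)

    # Overfitted models tend to be more confident on their training records.
    conf_in = target.predict_proba(X_in).max(axis=1)
    conf_out = target.predict_proba(X_out).max(axis=1)

    # Guess "member" whenever the confidence exceeds a threshold.
    threshold = 0.8
    accuracy = ((conf_in > threshold).mean() + (conf_out <= threshold).mean()) / 2
    print(f"membership inference accuracy: {accuracy:.2f}")  # > 0.5 means leakage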

▸ Model stealing

An attacker can also try to reconstruct the complete learning model of an online system. This is a serious threat to the emerging business model of machine learning as a service, provided by e.g. BigML, Google or Amazon. In this work, you evaluate the different reconstruction strategies.
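One such strategy is the equation-solving attack of Tramèr et al. (USENIX Security 2016): if a prediction API returns the confidence scores of a logistic regression with d features, d + 1 queries suffice to recover its weights and bias exactly. A minimal sketch, with a local scikit-learn model standing in for the remote API:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    victim = LogisticRegression(max_iter=1000).fit(X, y)

    def query(x):
        """The 'prediction API': returns the confidence for class 1."""
        return victim.predict_proba(x.reshape(1, -1))[0, 1]

    # For logistic regression, log(p / (1 - p)) = w.x + b, so d + 1
    # queries yield a linear system whose solution is w and b.
    d = X.shape[1]
    Q = np.random.default_rng(0).normal(size=(d + 1, d))
    logits = np.array([np.log(query(q) / (1 - query(q))) for q in Q])
    A = np.hstack([Q, np.ones((d + 1, 1))])    # unknowns: w (d entries) and b
    solution = np.linalg.lstsq(A, logits, rcond=None)[0]
    w_stolen, b_stolen = solution[:-1], solution[-1]

    print(np.allclose(w_stolen, victim.coef_[0], atol=1e-6))  # True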

▸ Classifier Poisoning

You can also try to manipulate the training process by injecting your own adversarially crafted inputs. This represents a considerable threat for systems that continuously update their learning model with incoming data, e.g. an IDS, spam filter or honeypot. In this work, you evaluate various strategies for misleading the training process. Become a shady expert in classifier poisoning and outline possible defenses.
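For illustration, here is a minimal label-flip sketch on synthetic data, a crude, hypothetical stand-in for the optimization-based poisoning attacks in the literature (e.g. the SVM poisoning of Biggio et al.):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import LinearSVC

    X, y = make_classification(n_samples=1200, n_features=20, random_state=0)
    X_train, y_train = X[:1000], y[:1000]
    X_test, y_test = X[1000:], y[1000:]

    clean = LinearSVC(max_iter=5000).fit(X_train, y_train)
    print("clean accuracy:   ", clean.score(X_test, y_test))

    # The attacker slips 200 points with flipped labels into the next
    # periodic retraining round of the continuously updated system.
    X_poison, y_poison = X_train[:200], 1 - y_train[:200]
    poisoned = LinearSVC(max_iter=5000).fit(
        np.vstack([X_train, X_poison]), np.hstack([y_train, y_poison]))
    print("poisoned accuracy:", poisoned.score(X_test, y_test))  # typically lower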

▸ Evasion of Android malware classifiers

This topic illustrates the threat of evasion attacks with a practical example. Explore how current learning-based Android malware classifiers work and what strategies exist to mislead them. Of course, you should also examine appropriate defenses.
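As a taste of the problem, the sketch below evades a linear classifier over binary features by only adding features, in the spirit of attacks on DREBIN-style detectors (removing code could break the app); the data and weights here are entirely synthetic assumptions:

    import numpy as np
    from sklearn.svm import LinearSVC

    # Synthetic stand-in for binary app features (permissions, API calls, ...).
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(1000, 50)).astype(float)
    w_true = rng.normal(size=50)
    y = (X @ w_true > 0).astype(int)               # 1 = malicious

    clf = LinearSVC(max_iter=5000).fit(X, y)
    w = clf.coef_[0]

    x = X[y == 1][0].copy()                        # a detected malware sample
    # Greedily switch on absent features with the most negative weights
    # until the sample crosses the boundary (may not always succeed).
    for i in np.argsort(w):
        if clf.decision_function([x])[0] < 0:
            break
        if x[i] == 0 and w[i] < 0:
            x[i] = 1.0
    print("evaded:", clf.decision_function([x])[0] < 0)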

▸ Evasion of PDF malware classifiers

This topic illustrates the threat of evasion attacks with a second real-world example. You will explore how attacks based on gradient descent or genetic programming can mislead current real-world PDF malware classifiers. Of course, you should also examine appropriate defenses.
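As a minimal sketch of the gradient-descent variant (in the style of Biggio et al., ECML-PKDD 2013), the code below walks a malicious sample across the decision boundary of a differentiable classifier; a plain logistic regression on synthetic data serves as a hypothetical stand-in for a real PDF detector:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=1000, n_features=30, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X, y)   # class 1 = malicious
    w = clf.coef_[0]

    x = X[y == 1][0].copy()                        # a detected malicious file
    eta = 0.1
    for _ in range(200):
        if clf.decision_function([x])[0] < 0:      # now classified as benign
            break
        x -= eta * w    # gradient of the decision function w.r.t. the input
    print("evaded:", clf.predict([x])[0] == 0)

A core difficulty discussed in the papers, and omitted here, is that the perturbed feature vector must still map back to a valid, working PDF file.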

▸ Perturbation Attacks

Deep learning is a trending topic, yet not completely understood. In a perturbation attack, you change just a few pixels: a human being still classifies the image correctly, but the classifier is fooled. The impact on a practical system, e.g. autonomous driving, can be severe. In this topic, you will dive into deep learning and explore perturbation attacks as well as defenses.
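The classic example is the fast gradient sign method (FGSM) of Goodfellow et al.; the sketch below applies it to a plain softmax classifier on the scikit-learn digits data instead of a deep network, which keeps it self-contained while showing the same principle. For a sufficiently large eps the prediction usually flips while the image remains recognizable to a human.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression

    X, y = load_digits(return_X_y=True)
    X = X / 16.0                                   # scale pixels to [0, 1]
    clf = LogisticRegression(max_iter=2000).fit(X, y)

    x, label = X[0], y[0]
    # Gradient of the cross-entropy loss w.r.t. the input pixels.
    p = clf.predict_proba([x])[0]
    grad = clf.coef_.T @ (p - np.eye(10)[label])

    # FGSM: one step of size eps in the sign direction of the gradient.
    eps = 0.2
    x_adv = np.clip(x + eps * np.sign(grad), 0, 1)
    print("original:", clf.predict([x])[0], "adversarial:", clf.predict([x_adv])[0])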

▸ Privacy-preserving multi-party machine learning

Nowadays, multiple parties hold their own training data, e.g. orders from various online shops. In some cases, for example fraud detection, it would be very beneficial to combine this data. However, due to practical obstacles such as privacy restrictions or competitive situations, these parties may not want to share their data directly. In this work, you present different strategies for learning a common classifier without exchanging the training data.
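The simplest such strategy is parameter averaging: each party trains locally and only model parameters are exchanged. The sketch below is a drastically simplified, hypothetical protocol; actual schemes, such as the privacy-preserving deep learning of Shokri and Shmatikov, exchange perturbed gradients and add further protections:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
    parties = [(X[i::3], y[i::3]) for i in range(3)]   # three disjoint datasets

    # Each party trains locally and shares only parameters, never raw data.
    local = [LogisticRegression(max_iter=1000).fit(Xp, yp) for Xp, yp in parties]

    # Assemble a common model from the averaged parameters.
    joint = LogisticRegression()
    joint.classes_ = local[0].classes_
    joint.coef_ = np.mean([m.coef_ for m in local], axis=0)
    joint.intercept_ = np.mean([m.intercept_ for m in local], axis=0)
    print("joint accuracy:", joint.score(X, y))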

▸ Differential privacy for decision trees

If you work with privacy-sensitive data (e.g. medical databases), your learning model may memorize sensitive information. Differential privacy is one concept to protect the training data. In this paper, you focus on hardening decision trees. Ideally, you choose two approaches, implement both and compare their accuracy with the results of a standard decision tree.
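A central building block of such constructions is the Laplace mechanism: counts computed in the tree's leaves are released only with calibrated noise. A minimal sketch follows; the full tree algorithms, e.g. the SuLQ-based ID3 of Friedman and Schuster, are what the paper would compare:

    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_count(true_count, epsilon):
        """Adding or removing one record changes a count by at most 1
        (sensitivity 1), so Laplace noise of scale 1/epsilon gives
        epsilon-differential privacy."""
        return true_count + rng.laplace(scale=1.0 / epsilon)

    # Class counts in one leaf of the tree (e.g. 40 healthy, 3 sick patients).
    counts = {"healthy": 40, "sick": 3}
    for eps in (1.0, 0.1):
        noisy = {c: noisy_count(n, eps) for c, n in counts.items()}
        print(f"eps={eps}: leaf label = {max(noisy, key=noisy.get)}")

Smaller values of eps give stronger privacy but noisier counts, which is exactly the accuracy trade-off the comparison should quantify.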

▸ Adversary-aware signal processing

Machine learning is not the only area that has to cope with an adversary: the problem can be framed more broadly as adversary-aware signal processing. Further fields such as watermarking, multimedia forensics and biometrics also deal with an attacker. Compare the attack and defense methods developed in each research field and become an interdisciplinary expert.


Last updated: 11.09.2017