Independent lab (Önálló labor) topics 2022


All, Internet-of-Things, Embedded-Systems, Security-Analysis, Cryptography, Software-Security, ICS/SCADA, Intrusion-Detection, Machine-Learning, Privacy, Security-and-Privacy-of-Machine-Learning, Economics

In our lab, students can choose independent lab, BSc thesis, and MSc thesis topics in several active research areas, described below. If one of the topic areas interests you, contact the colleague responsible for that area and discuss possible concrete tasks within it. Keep in mind that, in the independent lab course, you can also work on a task in a small group (team).

Security and Privacy in/with Machine Learning

Category: Machine-Learning, Privacy, Security-and-Privacy-of-Machine-Learning

Machine Learning (Artificial Intelligence) has become undeniably popular in recent years. The number of security-critical applications of machine learning (self-driving cars, user authentication, decision support, profiling, risk assessment, etc.) has been steadily increasing. However, machine learning still poses many open privacy and security problems. Students can work on the following topics:

  • Own idea: If you have your own project idea related to data privacy or to the security/privacy of machine learning, and I find it interesting, you can work on it under my guidance. You'll get +1 to your grade in that case. (Contact: Gergely Ács)
  • Robustness, adversarial examples: Adversarial examples are maliciously modified samples where the modification is visually imperceptible, yet the model's prediction on the slightly modified sample differs drastically from its prediction on the unmodified one. A potential task can be to develop solutions that distinguish adversarial from benign samples, or to develop robust training algorithms. (Contact: Szilvia Lestyán, Gergely Ács)
  • Watermarking of Machine Learning models: Since model extraction is easy (i.e., one can steal a machine learning model simply by using it as an oracle), model owners embed a watermark into the trained model so that they can claim ownership in a copyright dispute, thereby discouraging model extraction. Watermarks can be implemented by inserting a backdoor sample into the model that is known only to the model owner. A potential task can be to develop and evaluate (compare) watermarking schemes. (Contact: Gergely Ács)
  • Record reconstruction from aggregate queries: It is (falsely) believed that aggregation preserves privacy, that is, that if one computes several aggregate queries (SUM, AVG, COUNT, etc.) on a database, it is very hard to infer the individual record values from these aggregates alone. A potential task can be to implement attacks that check whether a set of aggregate queries can be answered without revealing any single individual record on which they were computed. (Contact: Gergely Ács)
  • Anonymization: Sequential data is any data whose records contain a user's sequence of items (e.g., location trajectories, time-series data such as electricity consumption, browsing history, etc.). A potential task can be to develop (GDPR-compliant) anonymization methods so that individuals are no longer re-identifiable in the dataset. (Contact: Gergely Ács)
  • Fairness vs. privacy vs. robustness in Machine Learning: In machine learning, privacy-preserving training is considered unfair to subgroups, as the trained model is less accurate on underrepresented groups (e.g., minorities). It is an open question how privacy preservation and fairness together influence robustness (i.e., resistance against integrity attacks such as poisoning or adversarial examples). A potential task can be to study the relationship between privacy preservation, fairness, and robustness in machine learning. (Contact: Gergely Ács)
  • Other Topics: For details, click HERE! (Contact: Balázs Pejó)
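To illustrate the record-reconstruction topic above, here is a minimal sketch (all record values are made up) showing why aggregation alone does not preserve privacy: three pairwise SUM queries over three hidden records can be inverted back to the exact individual values with simple algebra.

```python
# Hypothetical private records (e.g., salaries of three individuals).
r1, r2, r3 = 34.0, 27.0, 45.0

# The "privacy-preserving" interface only releases pairwise SUM aggregates.
q12 = r1 + r2
q23 = r2 + r3
q13 = r1 + r3

# Yet simple algebra inverts the three aggregates back to the records:
rec1 = (q12 - q23 + q13) / 2
rec2 = (q12 + q23 - q13) / 2
rec3 = (-q12 + q23 + q13) / 2

print(rec1, rec2, rec3)  # 34.0 27.0 45.0 — every record recovered exactly
```

In matrix terms, the three queries form an invertible linear system; reconstruction attacks generalize this idea to large, noisy sets of aggregate queries.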

Required skills: none
Preferred skills: basic programming skills (e.g., Python), machine learning (not required)

Capacity: 7 students

Contact: Gergely Ács (CrySyS Lab), Szilvia Lestyán (CrySyS Lab), Balázs Pejó (CrySyS Lab)

Economics of cybersecurity and data privacy

Category: Economics, Privacy

As the last 10-15 years have shown, cybersecurity is not a purely technical discipline. Decision-makers, whether at security providers (IT companies), security demanders (everyone using IT), or the security industry, are mostly driven by economic incentives. Understanding these incentives is vital for designing systems that are secure in real-life scenarios. In parallel, data privacy has shown the same characteristics: proper economic incentives and controls are needed to design systems where sharing data benefits both the data subject and the data controller. An extreme example of a flawed attempt at such a design is the Cambridge Analytica case.
The prospective student will identify a cybersecurity or data privacy economics problem, then use elements of game theory along with other domain-specific techniques and software tools to transform the problem into a model and propose a solution. Potential topics include:

  • Ghostbusting in ML: eliminating free-riders in federated learning schemes
  • CPSFlipIt: attacker-defender dynamics in cyber-physical systems
  • Risk management for cyber-physical/OT systems
  • Incentives in secure software development: why should programmers have proper security training?
  • Interdependent privacy: modeling inference with probabilistic graphical models
  • Additional Related Topics
  • BYOT: Bring Your Own Topic!
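As a small taste of the modeling involved, the sketch below encodes a hypothetical two-player security-investment game (all payoff numbers are invented for illustration) and enumerates its pure-strategy Nash equilibria, showing how interdependent security can push rational players into mutual free-riding even when joint investment pays more.

```python
from itertools import product

# Illustrative 2-player game: each player chooses to Invest in security ("I")
# or Free-ride ("F"). Security is interdependent, so free-riding on the
# other's investment yields the highest individual payoff.
payoffs = {
    ("I", "I"): (3, 3),
    ("I", "F"): (1, 4),
    ("F", "I"): (4, 1),
    ("F", "F"): (2, 2),
}
strategies = ("I", "F")

def best_response(player, other_choice):
    """Strategies maximizing this player's payoff against the opponent's move."""
    def pay(s):
        profile = (s, other_choice) if player == 0 else (other_choice, s)
        return payoffs[profile][player]
    best = max(pay(s) for s in strategies)
    return {s for s in strategies if pay(s) == best}

# A profile is a pure Nash equilibrium if each strategy is a best response
# to the other player's strategy.
equilibria = [
    (a, b) for a, b in product(strategies, strategies)
    if a in best_response(0, b) and b in best_response(1, a)
]
print(equilibria)  # [('F', 'F')]: mutual free-riding, although (I, I) pays more
```

The real research topics replace this toy matrix with richer models (e.g., FlipIt-style games or federated-learning contribution games), but the workflow — formalize incentives, then solve for equilibria — is the same.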

Required skills: model thinking, good command of English
Preferred skills: basic knowledge of game theory, basic programming skills (e.g., Python, MATLAB, NetLogo)

Capacity: 5 students

Contact: Gergely Biczók (CrySyS Lab), Balázs Pejó (CrySyS Lab)