TAIC Workshop, Co-located with ITASEC25
Welcome to the Trustworthy AI for Cybersecurity (TAIC) workshop webpage, which hosts all the information and news about the workshop.
Important Dates
- December 9, 2024: Deadline for Workshop Paper Submission
- January 3, 2025: Author Notification
- February 3-8, 2025: Conference (TAIC workshop day yet to be decided)
- February 20, 2025: Camera-ready deadline
Workshop Abstract
Artificial Intelligence (AI) has become a well-established technology in cybersecurity applications. In this context, AI and Machine Learning (ML) techniques strengthen the security of existing tools by operating as a core or additional mechanism for preventing and detecting threats, revolutionizing key areas such as vulnerability and malware detection.
As cyber threats grow increasingly sophisticated and complex, the cybersecurity landscape demands innovative solutions. AI-driven approaches offer the automation and intelligence necessary to stay ahead of evolving attacks and novel threats, providing a crucial line of defense in our rapidly changing digital ecosystem. However, alongside the increasing number of cyberthreats, an equally alarming number of vulnerabilities affects AI techniques themselves, raising concerns about their use. In addition, such techniques are often conceived as a "black box", producing decisions whose rationale remains unclear and that sometimes even incorporate undesired biases. Given the wide use of AI techniques to support decision-making in high-stakes scenarios, such as cybersecurity applications, these issues have led the research community to focus on the trustworthiness of AI techniques, with the unified goal of validating their use by increasing security, transparency, and fairness. In light of these considerations, this workshop focuses on the trustworthiness of AI techniques for cybersecurity systems. We are therefore interested in two specific aspects:
- Trustworthy AI, which focuses on the trustworthiness of AI systems. Here we aim to advance the discussion on the security of models and algorithms by analyzing attack and defense techniques (e.g., evasion attacks and adversarial training, respectively), on explainability techniques that increase transparency, and on methods for analyzing the fairness of models and algorithms;
- AI for Cybersecurity, which refers to the study and analysis of cybersecurity tasks where the use of AI can improve the overall level of security, like spam, malware, and botnet detection, as well as automatically localizing and fixing security vulnerabilities in software applications.
Through these two separate yet related aspects, our goal is to foster a unified discussion on trustworthy AI in cybersecurity. By doing so, we aim to help mitigate these issues and prevent them from hindering the development and adoption of AI techniques.
Workshop Organization
Workshop Organizers:
Giorgio Piras
University of Cagliari (UNICA)
Emanuele Iannone
Hamburg University of Technology (TUHH)
Maura Pintor
University of Cagliari (UNICA)
Katja Tuma
Vrije Universiteit (VU)
Battista Biggio
University of Cagliari (UNICA)
Fabio Massacci
University of Trento (UNITN)
Vrije Universiteit (VU)
Contact:
For any questions or information, please contact Giorgio Piras at: giorgio.piras@unica.it