cPAID
Cloud-based Platform-agnostic Adversarial AI Defence framework (cPAID)
Project Details
- Project no: 101168407
- Type of project: HORIZON Research and Innovation Actions
- Call identifier: HORIZON-CL3-2023-CS-01
- Topic: HORIZON-CL3-2023-CS-01-03 - Security of robust AI systems
- Duration: 1 October 2024 to 30 September 2027 (36 months)
- CORDIS link: https://cordis.europa.eu/project/id/101168407
- Official Website: https://cpaid.eu/
Description
Malicious actions and adversarial attacks pose significant threats to AI applications and operations, making innovative solutions for AI protection critically necessary. The EU-funded cPAID project aims to research, design, and develop a cloud-based, platform-agnostic defence framework to safeguard AI applications and operations from these attacks. The project will address adversarial attacks such as poisoning and evasion by using AI-based defence methods and ensuring compliance with EU principles for AI ethics. In addition, the project will validate AI system performance in real-life scenarios and promote research to develop certification schemes that certify the robustness, security, privacy, and ethical excellence of AI applications and systems.
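To make the threat concrete, the following is a minimal, purely illustrative sketch of an evasion attack: the fast gradient sign method (FGSM) applied to a toy logistic-regression classifier. The model, weights, and inputs are invented for illustration and are not part of the cPAID framework.

```python
import math

# Illustrative evasion attack on a toy logistic-regression model.
# All weights and inputs here are made-up examples.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability that x belongs to the positive class."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """Perturb each feature of x by eps in the direction that raises the loss.

    For logistic loss, d(loss)/dx_i = (p - y) * w_i, so FGSM adds
    eps * sign((p - y) * w_i) to feature i.
    """
    p = predict(w, b, x)
    return [xi + eps * math.copysign(1.0, (p - y) * wi)
            for xi, wi in zip(x, w)]

w, b = [2.0, -1.0], 0.0          # toy model parameters
x, y = [1.0, 0.5], 1.0           # clean input with its true label
x_adv = fgsm(w, b, x, y, eps=0.5)

print(predict(w, b, x))      # high confidence on the clean input
print(predict(w, b, x_adv))  # confidence collapses after a small perturbation
```

The perturbation is small and bounded per feature, yet it is enough to erase the model's confidence, which is precisely the class of attack the framework targets.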
Project Objective
cPAID envisions researching, designing, and developing a cloud-based, platform-agnostic defence framework for the holistic protection of AI applications and of organisations' overall AI operations against malicious actions and adversarial attacks. cPAID aims to tackle both poisoning and evasion adversarial attacks by combining AI-based defence methods (e.g. life-long semi-supervised reinforcement learning, transfer learning, feature reduction, adversarial training) with security- and privacy-by-design, privacy-preserving techniques, explainable AI (XAI), generative AI, context-awareness, and risk and vulnerability assessment and threat intelligence for AI systems. cPAID will identify guidelines to a) guarantee security- and privacy-by-design in the design and development of AI applications, b) thoroughly assess the robustness and resilience of ML and DL algorithms against adversarial attacks, c) ensure that EU principles for AI ethics have been considered, and d) validate the performance of AI systems in real-life use-case scenarios. These guidelines aspire to promote research toward developing certification schemes that certify the robustness, security, privacy, and ethical excellence of AI applications and systems.
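Among the defence methods the objective lists, adversarial training is the most self-contained to illustrate. The sketch below is an assumed toy example, not the cPAID implementation: it fits a logistic-regression model where each gradient step trains on FGSM-perturbed versions of the training points, so the model learns to stay correct under small worst-case perturbations.

```python
import math

# Illustrative adversarial training on a toy logistic-regression model.
# The data, hyperparameters, and model are invented for this sketch.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """FGSM perturbation: move each feature eps in the loss-increasing direction."""
    p = predict(w, b, x)
    return [xi + eps * math.copysign(1.0, (p - y) * wi)
            for xi, wi in zip(x, w)]

def adversarial_train(data, eps=0.3, lr=0.5, epochs=200):
    """SGD on logistic loss, but each update uses the adversarial input."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            x_adv = fgsm(w, b, x, y, eps)   # worst-case input within eps
            p = predict(w, b, x_adv)
            g = p - y                        # d(loss)/dz for logistic loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x_adv)]
            b -= lr * g
    return w, b

data = [([1.0, 1.0], 1.0), ([-1.0, -1.0], 0.0),
        ([1.0, 0.5], 1.0), ([-0.5, -1.0], 0.0)]
w, b = adversarial_train(data)

# The trained model should remain correct even on eps-perturbed inputs.
for x, y in data:
    x_adv = fgsm(w, b, x, y, 0.3)
    print(round(predict(w, b, x_adv)), int(y))
```

The design choice is the essence of the technique: the optimisation target is the loss on the attacked input rather than the clean one, which enlarges the decision margin at the cost of extra computation per step.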