TV Show – AI and Cybersecurity: New Threats, New Protections
On April 2nd, we had the pleasure of being invited by Thales to take part in a special live TV show during the Forum InCyber (FIC). The roundtable was hosted by Emmanuel Botta and Simon Chodorge from Capital magazine and focused on a crucial concern: AI as a target for cyberattacks.
Because AI is not just a tool. Itʼs an asset, and assets are targets.
Emerging threats against AI systems
Our AI security expert Victor Guyomard, Katarzyna Kapusta (cybersecurity expert at Thales), and Patrick Bas (researcher at CNRS) shared their insights on the security threats targeting artificial intelligence.
Among the key topics discussed:
- How a model can be extracted
- How adversarial examples can manipulate a model (see the sketch after this list)
- How easily todayʼs large language models (LLMs) can be manipulated or bypassed
- The role of model pentesting in identifying vulnerabilities
- How AI itself can be used to spread disinformation
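To make the adversarial-example threat concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM) in PyTorch. The pretrained ResNet-18 and the random placeholder input are illustrative assumptions, not anything shown on the broadcast; on a real, properly normalized image, a perturbation this small is typically invisible to a human yet can change the model's prediction.

```python
import torch
import torchvision.models as models

# Minimal FGSM sketch: nudge an input in the direction that increases the loss.
# The pretrained ResNet-18 (torchvision >= 0.13) and the random "image" below
# are illustrative stand-ins, not a real attack target.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder input
label = torch.tensor([207])                             # placeholder class id

# One forward pass, then the gradient of the loss with respect to the input.
loss = torch.nn.functional.cross_entropy(model(image), label)
loss.backward()

# FGSM: a single step of size epsilon along the sign of the input gradient.
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

One forward pass and one gradient step are all it takes, which is why robustness has to be tested before deployment rather than assumed.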
As AI systems are deployed in critical sectors like healthcare and defense, traditional security measures are showing their limits.
At Skyld, for instance, weʼve demonstrated how vulnerable deployed models still are: we extracted models from Google Photos, showing how simple it can be to steal a proprietary AI model.
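To give a sense of what extraction means in practice, here is a minimal sketch assuming the attacker has already pulled a bundled model file out of an application package. The file name extracted_model.tflite is a hypothetical placeholder, not an actual Google Photos asset; the point is simply that once such a file is found, standard tooling is enough to load and run it.

```python
# Minimal sketch of why a model bundled in an app is easy to reuse once found:
# any .tflite file pulled out of a package loads with off-the-shelf tooling.
# "extracted_model.tflite" is a hypothetical placeholder file name.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="extracted_model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed dummy data matching the model's expected input shape and dtype.
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()

print("stolen model output shape:", interpreter.get_tensor(out["index"]).shape)
```

An unprotected model file carries all of its ownerʼs IP with it, which is exactly the gap an anti-theft SDK is meant to close.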
A shared concern in the ecosystem
The panelists agreed on the pressing need for dedicated AI security frameworks.
They advocated proactive measures, including AI-specific penetration testing and defenses built specifically for AI systems, to guard against these emerging threats.
Securing AI is no longer optional
At Skyld, we design security solutions tailored specifically for AI models. We offer two solutions today:
- Anti-theft system for artificial intelligence: because model extraction and reverse engineering can deprive companies of their IP and competitive edge, we built an SDK that protects AI models wherever they are deployed. It also blocks unauthorized copying and reuse, allowing AI owners to enforce a strict licensing policy.
- Adversarial robustness testing service: we are launching a service that tests whether models are vulnerable to adversarial examples in the lab, before they are deployed in the real world.