Skyld

Skyld provides an SDK to secure on-device ML models against reverse-engineering.


Deploy on-device AI securely

Skyld safeguards your AI models from reverse-engineering.

Our Features

Skyld's SDK prevents access to your proprietary algorithms

Comprehensive Protection

Our SDK safeguards your AI models both at rest and during execution, protecting against static and dynamic reverse-engineering. Focus on innovation; your competitive edge is secure.

Hardware Agnostic

We protect your on-device AI models deployed on smartphones, connected objects, desktops, web browsers, and on-premise servers. If you can deploy a model, we can protect it.

Easy Usage

In less than 10 lines of code, our SDK turns a trained model into a protected one. The format of the model is not modified, so you can keep using your favorite ML inference framework.
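As an illustration of that workflow, here is a minimal sketch. The `skyld` package and `protect_model` function are hypothetical placeholders, not the SDK's documented API, and ONNX stands in for whichever framework you use:

```python
# Hypothetical sketch only: the module and function names below
# (skyld, protect_model) are invented for illustration of the
# "fewer than 10 lines" workflow described above.
import onnx
import skyld  # hypothetical package name

model = onnx.load("model.onnx")               # trained model, standard format
protected = skyld.protect_model(model)        # hypothetical protection call
onnx.save(protected, "model_protected.onnx")  # same ONNX format as before

# Because the format is unchanged, the protected file loads in a
# standard ONNX runtime like any other model.
```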

How It Works

Discover our solution for on-device AI security

FAQ

Where is my AI model at risk of being extracted?

Your AI model is likely to be extracted whenever it is deployed in untrusted environments: at the edge, on IoT devices, on smartphones and tablets, in desktop applications, in browsers, and on on-premises servers. It is also possible for the cloud provider to access the model.

Is it easy to extract an on-device AI model?

Yes, it is relatively easy to extract an on-device AI model if the security measures are not sufficiently robust. Through decompilation, an attacker can locate and extract the AI model. Even if the model is encrypted, it can be recovered with dynamic analysis: an attacker just needs to wait for the decryption that precedes inference.

What are the consequences of AI model theft?

The most direct consequence is the theft of intellectual property. If an AI model is successfully reverse-engineered, it can be reused directly or fine-tuned to fit the attacker's purpose. Furthermore, access to the AI model facilitates more advanced attacks such as adversarial examples and model inversion. You can learn more about model inversion attacks by reading this article.

Isn't encrypting the model enough?

Encryption is easy to deploy but also easy to bypass. Even if the model is encrypted when it is stored, it remains vulnerable to runtime attacks because it must be decrypted for inference. An attacker can stop the application at the decryption step and recover the entire model.
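A minimal sketch of that attack window, assuming a model encrypted at rest with the Python `cryptography` library (the loader function itself is hypothetical):

```python
# Sketch of the weakness described above, not Skyld code. Assumes the
# model file was encrypted at rest with Fernet (AES) from the
# `cryptography` package.
from cryptography.fernet import Fernet

def load_model_bytes(path: str, key: bytes) -> bytes:
    with open(path, "rb") as f:
        ciphertext = f.read()
    plaintext = Fernet(key).decrypt(ciphertext)
    # <-- Attack window: the full plaintext model now sits in process
    # memory. An attacker who pauses execution here (debugger, memory
    # dump, hooked decrypt call) recovers the entire model, no matter
    # how strong the at-rest encryption is.
    return plaintext
```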

How does Skyld protect AI models?

Skyld protects AI models everywhere they are executed. The techniques we have developed prevent software analysis and AI-specific attacks from recovering a model's key information, especially its weights. We apply robust linear algebra transformations so that the explicit parameters cannot be extracted from the on-device AI model file. These transformations keep models protected during runtime, even on GPUs.
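As a toy illustration of the general idea (not Skyld's actual scheme): for a two-layer ReLU network, ReLU commutes with positive diagonal scaling, so the stored weight matrices can be randomized without changing the network's output:

```python
# Toy illustration of the principle only; Skyld's actual transformations
# are more involved. For y = W2 @ relu(W1 @ x), ReLU commutes with
# positive diagonal scaling D, so we can store D@W1 and W2@D^-1 instead
# of the original weights.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((64, 32))
W2 = rng.standard_normal((10, 64))
x = rng.standard_normal(32)

d = rng.uniform(0.1, 10.0, size=64)   # random positive scaling factors
W1_prot = d[:, None] * W1             # D @ W1: stored instead of W1
W2_prot = W2 / d[None, :]             # W2 @ D^-1: stored instead of W2

y_orig = W2 @ np.maximum(W1 @ x, 0.0)
y_prot = W2_prot @ np.maximum(W1_prot @ x, 0.0)
assert np.allclose(y_orig, y_prot)    # same output, transformed weights
```

Diagonal scaling alone would of course be easy to undo; a production scheme relies on denser invertible transformations, but the underlying algebra is the same: the deployed file only ever contains transformed weights.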

Which operating systems and ML frameworks are supported?

Operating systems: Android, Linux, and Windows. ML inference frameworks: ONNX, TensorFlow (Lite), and Keras.

Which kinds of models can be protected?

Skyld protects many kinds of neural networks: CNNs, RNNs, LSTMs, Transformers and Vision Transformers, and LLMs. To request the list of all tested models, please contact us.

What is the impact on accuracy and performance?

Our protection has no impact on model accuracy. The performance overhead depends on the specific model architecture, but it is generally below 20%. Contact us for your specific use case.
