An SDK To Fight Model Extraction

Secure your on-device AI models against reverse engineering with a low computing footprint. Protect your competitive edge and technological advantage.

Deploy On-Device Without Reverse Engineering Risk

Deploying AI models directly in desktop applications, mobile apps, and IoT devices unlocks new business opportunities. However, on-device AI models are vulnerable to reverse engineering. Learn more…

Keep Your Competitive Edge Secure

An attacker can steal and replicate AI models at a fraction of their development cost, jeopardizing years of innovation and R&D. Attackers have a whole range of tools at their disposal to understand the inner workings of compiled code: decompilers, code visualizers, debuggers, and hooking methods allow them to access and manipulate the software. They can extract the AI model, even if it is encrypted, and reuse it at will. In addition, with access to all the parameters, adversarial example attacks and model inversion attacks become easier to perform. Learn more…

Request a demo

What you'll gain with us…

Military-grade protection

for your AI models both at rest and during execution.

Platform versatility

compatible with Android, Linux and embedded Linux environments.

Easy installation and integration

protect your first model in a couple of minutes. Keep your usual ML runtime framework.

Minimal time overhead

for optimal user experience.

100% preservation

of the model accuracy.

Precise management

of your licenses and control over the use and deployment of your models.

from skyld import SkProtector
from models import MyModel

# Instantiate the model
my_model = MyModel()

# Choose the deployment configuration (one of: ONNX, TorchScript, TFLite)
protector = SkProtector(deployment=ONNX | TorchScript | TFLite)
# Protect the model
protector.protect(my_model)
# Export for deployment
protector.save("ProtectedModelName.onnx")
protector.save("ProtectedModelName.pt")
protector.save("ProtectedModelName.tflite")

A Complete SDK For Securing Your AI models

Skyld SDK provides military-grade protection for AI models with advanced encryption and algebraic transformations, ensuring security against reverse engineering. The protected model has the same format as the original model, and the ML runtime framework is unchanged: you can deploy using ONNX, PyTorch, TensorFlow Lite…

Fine-Grained Control

Finally, our SDK lets you control the deployment of your models. A restricted set of devices? We provide device binding or GPU binding. A deployment limited in time? We give you full control over the validity of the key needed to use the model.
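As a hypothetical sketch of the device-binding idea (not Skyld's actual scheme; `VENDOR_SECRET`, `device_bound_key`, and the HMAC derivation are illustrative assumptions), the key needed to run the model can be derived from a stable hardware identifier, so a copied model file is useless on any other device:

```python
import hashlib
import hmac

# Illustrative only: a master secret held by the licensing backend.
VENDOR_SECRET = b"vendor-master-secret"

def device_bound_key(device_id: str) -> bytes:
    """Derive a per-device model key from a stable hardware identifier."""
    return hmac.new(VENDOR_SECRET, device_id.encode(), hashlib.sha256).digest()

# The licensed device derives the correct key; any other device derives
# a different one, so the protected model cannot be used there.
licensed = device_bound_key("serial-ABC123")
other = device_bound_key("serial-XYZ999")
assert licensed != other
```

Time-limited deployments follow the same pattern: the backend simply stops issuing (or starts rejecting) the key once the license expires.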

FAQ

Your AI model is likely to be extracted whenever it is deployed in untrusted environments: on the edge, on IoT devices, on smartphones and tablets, in desktop applications, in browsers, and on on-premises servers. It is also possible for the cloud provider to access the model.

Yes, it’s relatively easy to extract an on-device AI model if security measures are not sufficiently robust. Through decompilation, an attacker can locate and extract the AI model. Even if the model is encrypted, it can be recovered with dynamic analysis: an attacker just needs to wait for decryption before inference.

The most direct consequence is the theft of intellectual property. If an AI model is successfully reverse-engineered, it can be reused directly, or fine-tuned to fit the attacker's purpose. Furthermore, access to the AI model facilitates more advanced attacks such as adversarial examples and model inversion. You can learn more about model inversion attacks by reading this article.

Encryption is easy to deploy but also easy to bypass. Even if the model is encrypted when it is stored, it remains vulnerable to runtime attacks, since it must be decrypted for inference. An attacker can stop the application at the decryption step and recover the entire model.
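A toy illustration of this weakness (the XOR "cipher" here stands in for any real encryption scheme; all names are illustrative): whatever the cipher, the weights must appear in plaintext in process memory just before inference, and that is exactly where an attacker pauses the application:

```python
import numpy as np

key = np.uint8(0x5A)
weights = np.array([3, 1, 4, 1, 5], dtype=np.uint8)  # "secret" model parameters

# What ships on-device: the encrypted file looks opaque to static analysis.
encrypted = weights ^ key

# At inference time the app has no choice but to decrypt...
decrypted = encrypted ^ key
# ...and from this point on, the full model sits in plaintext in memory:
# a debugger breakpoint placed here recovers it wholesale.
assert (decrypted == weights).all()
```

This is why protecting only the stored file is not enough: the parameters must stay transformed during execution as well.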

Skyld protects AI models everywhere they are executed. Our techniques prevent software analysis and AI-specific attacks from recovering key model information, especially the weights. We apply robust linear algebra transformations, so that the explicit parameters cannot be extracted from the on-device AI model file. These transformations keep models protected during runtime, even on GPUs.
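A minimal numerical sketch of the underlying idea (illustrative only; Skyld's actual transformations are more elaborate): fold an invertible matrix into two consecutive linear layers, so that the weights stored on-device differ from the originals while the network's output is unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4)          # an input vector
W1 = rng.standard_normal((4, 4))    # original first-layer weights
W2 = rng.standard_normal((4, 4))    # original second-layer weights

# Insert an invertible matrix P between the layers and fold it into the
# stored weights: the composition is mathematically identical, but the
# plaintext parameters never appear in the deployed file.
P = rng.standard_normal((4, 4))
W1p = P @ W1                        # stored on-device instead of W1
W2p = W2 @ np.linalg.inv(P)         # stored on-device instead of W2

original = W2 @ (W1 @ x)
protected = W2p @ (W1p @ x)
assert np.allclose(original, protected)   # same outputs...
assert not np.allclose(W1, W1p)           # ...different stored weights
```

Note that this toy version only commutes through purely linear layers; handling the nonlinear activations between layers is part of what a production scheme must address.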

Supported operating systems: Android, Linux, and Windows. Supported ML inference frameworks: ONNX, TensorFlow (Lite), and Keras.

Skyld protects different kinds of neural networks: CNNs, RNNs, LSTMs, Transformers and Vision Transformers, and LLMs. To request the list of all tested models, please contact us.

Our protection has no impact on model accuracy. As for performance, the overhead depends on the specific model architecture and is generally below 20%. Contact us for your specific use case.

Do you have questions about your AI model? Our expert team is here to help. Whether you need advice on best practices, have specific challenges to discuss, or need guidance on integrating our solution, we are with you every step of the way. Let’s schedule a meeting.

Request a demo

© 2024 Skyld. All rights reserved.

Get in Touch

contact@skyld.io

Stay Updated!

Be the first to know about our latest features, upcoming events, and where you can connect with us. Subscribe to our newsletter now!