These valuable assets can be used without permission, copied, and shared inappropriately. This lack of traceability leads to financial losses and leaves you with little control over how your company’s intellectual property is used, diminishing the return on your research investment.
Without strong code protection, no licensing is possible: anyone can extract and reuse the AI model through reverse engineering. The protection must involve a secret that is required to use the software. Code obfuscation is not enough: it prevents an attacker from understanding how the program works, but it does not prevent unauthorized reuse.
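To make this concrete, here is a minimal, purely illustrative sketch (not Skyld’s actual mechanism): the weights are stored in a key-dependent transformed form, so an attacker who extracts the file, even from obfuscated code, cannot run meaningful inference without the secret.

# Illustrative sketch only: weights stored in a key-dependent transformed form
import numpy as np

def transform_weights(weights, key):
    # Combine the weights with a secret key before shipping them
    return weights + key

def restore_weights(protected, key):
    # Only a caller holding the secret key recovers the original weights
    return protected - key

weights = np.random.randn(4, 4)
key = np.random.randn(4, 4)                  # the secret, never shipped in clear
protected = transform_weights(weights, key)

# Extracted alone, the protected weights are useless for inference...
assert not np.allclose(protected, weights)
# ...but they are fully recovered when the secret is presented.
assert np.allclose(restore_weights(protected, key), weights)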
eliminate the risk of copying or fraudulent exploitation of your models.
set an expiration date on your AI models.
control the set of devices where the AI model is deployed.
from skyld import SkProtector
from models import MyModel
# Instantiate the model
my_model = MyModel()
# Choose the address of the license server
licence_server_url = "https://your.company/license_check"
# Choose deployment configuration
protector = SkProtector(deployment=ONNX | TorchScript | TFLite)
# Protect the model with licensing capabilities
protector.protect(my_model, licence_server=licence_server_url)
# Export for deployment
protector.save("ProtectedModelName.onnx")
protector.save("ProtectedModelName.pt")
protector.save("ProtectedModelName.tflite")
Our SDK protects AI models in any untrusted environment. The protection is based on a unique activation key assigned to each model. You can control the number of deployments of a model through device-binding or GPU-binding techniques, and you can also set an expiration date on models.
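As an illustration, here is a minimal sketch of what such a check could look like on the license-server side; the field names (activation key, allowed devices, expiry date) are assumptions made for the example, not Skyld’s actual API.

# Hypothetical server-side license check: activation key, device binding, expiry
from datetime import datetime, timezone

LICENSES = {
    "KEY-1234": {
        "allowed_devices": {"device-a", "device-b"},               # device binding
        "expires_at": datetime(2026, 1, 1, tzinfo=timezone.utc),   # expiration date
    }
}

def check_license(activation_key, device_id):
    record = LICENSES.get(activation_key)
    if record is None:
        return False                                   # unknown activation key
    if datetime.now(timezone.utc) > record["expires_at"]:
        return False                                   # license has expired
    if device_id not in record["allowed_devices"]:
        return False                                   # device is not authorized
    return True

print(check_license("KEY-1234", "device-a"))   # True while the license is valid
print(check_license("KEY-1234", "device-z"))   # False: device not bound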
With our SDK, you need only a couple of minutes to protect your first model. The protected model format is unchanged, so integration is straightforward and you can keep your ML runtime framework. Add the license server to your usual model distribution infrastructure and you can start controlling model deployment.
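For example, a protected ONNX model can still be loaded with a standard runtime such as ONNX Runtime; the input name and shape below are assumptions, and inference is expected to succeed only on deployments that pass the license check.

# Loading the protected model with an unchanged runtime (input shape is assumed)
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("ProtectedModelName.onnx")
input_name = session.get_inputs()[0].name
dummy_input = np.random.randn(1, 3, 224, 224).astype(np.float32)  # assumed shape
outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)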
Skyld provides two types of licensing. For subscription-based licensing, you can set an expiry date for each deployed model. For volume-based licensing, you can activate device-binding so that models can only be deployed on a given set of devices.
Yes, it is relatively easy to extract an on-device AI model if the security measures are not sufficiently robust. Through decompilation, an attacker can locate and extract the AI model. Even if the model is encrypted, it can be recovered with dynamic analysis: the attacker simply waits for the model to be decrypted before inference.
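As a minimal sketch of this weakness, assuming a symmetric key bundled with the application (any embedded key can eventually be recovered), the model has to appear in memory in clear before the runtime can use it, and that is exactly where dynamic analysis captures it.

# Why encryption at rest is not enough: the model must be decrypted before use
from cryptography.fernet import Fernet

bundled_key = Fernet.generate_key()          # stands in for a key shipped in the app
model_bytes = b"...model weights..."         # stands in for the real weights

encrypted = Fernet(bundled_key).encrypt(model_bytes)    # what is stored on disk

# At inference time, the application must decrypt the model into memory:
plaintext = Fernet(bundled_key).decrypt(encrypted)

# An attacker hooking this call or dumping process memory reads the full model
# here, before inference even starts.
assert plaintext == model_bytes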
The most direct consequence is the theft of intellectual property. If an AI model is successfully reverse-engineered, it can be reused directly or fine-tuned to fit the attacker’s purpose. Furthermore, access to the AI model facilitates more advanced attacks such as adversarial examples and model inversion. You can learn more about model inversion attacks by reading this article.
Docker is a containerization platform designed to isolate containers from the host system and other containers. While it provides isolation at the process level, Docker does not offer mechanisms to restrict access to AI models within a container. Therefore, Docker alone cannot be used to control or prevent unauthorized access to your AI models.
Do you have questions about your AI model? Our expert team is here to help. Whether you need advice on best practices, have specific challenges to discuss, or want guidance on integrating our solution, we are with you every step of the way. Let’s schedule a meeting.