What Are the Applications of On-Device Machine Learning?
AI models are everywhere, from unlocking your phone to powering medical diagnostics. But few realize how exposed these models become once deployed in real-world applications. In this article, we explore common applications of on-device machine learning and the risks deployed models face through model inversion, extraction, and adversarial attacks.
This post answers:
- Which applications are powered by machine learning models?
- What risks do these models face once deployed?
- How can attackers exploit AI models, and why should developers care?
Facial recognition and biometric unlocking
Facial recognition models take images as input and produce an identity match as output. These systems are widely deployed in smartphones, smart locks, and surveillance systems.
The security of these models is critical given the sensitivity of the data involved. Attackers can use model inversion attacks to reconstruct training data, such as user faces or other biometric features. These reconstructions could then be used for malicious purposes, compromising the security and privacy of users. Academic researchers have shown that vocal deepfakes can be produced without any real audio sample of the target, simply by inverting a voice identification model.
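To make the threat concrete, here is a minimal sketch of a gradient-based inversion attack in PyTorch, assuming white-box access; the `model` variable and its input shape are hypothetical stand-ins for a face-identification classifier, not any specific deployed system.

```python
import torch

# Minimal sketch of a gradient-based model inversion attack, assuming
# white-box access. `model` is a hypothetical face-identification
# classifier taking 1x3x224x224 images and returning class logits.
def invert_identity(model, target_class, steps=500, lr=0.1):
    model.eval()
    # Start from random noise and optimize the *input*, not the weights.
    x = torch.randn(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(x)
        # Push the target identity's logit up; a small L2 penalty
        # keeps the reconstruction in a plausible pixel range.
        loss = -logits[0, target_class] + 1e-4 * x.pow(2).sum()
        loss.backward()
        opt.step()
    return x.detach()  # an approximate likeness of the enrolled face
```

Research-grade inversion attacks add image priors and generative models to sharpen the reconstruction, but the core loop is this simple: if the model is exposed, gradients leak what it learned.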
Therefore, it is essential to have robust security measures to protect these AI models, ensuring user trust and preventing potential privacy breaches.
Voice assistant and natural language processing
Voice assistants like Siri, Google Assistant, and Alexa use machine learning models to understand and interpret user voice commands. This enables more natural interaction and better understanding of spoken language.
Speech recognition technology, built on sophisticated machine learning algorithms such as deep neural networks (DNNs), is at the core of voice assistants. These models are trained on vast amounts of voice data to accurately transcribe spoken words into text. Voice assistants are susceptible to adversarial attacks, where malicious inputs are designed to deceive AI models into producing incorrect results. Adversarial examples can be generated by perturbing input audio signals or injecting subtle modifications into textual inputs.
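As an illustration, below is a minimal sketch of the classic fast gradient sign method (FGSM) applied to audio, assuming white-box access; `model`, `waveform`, and the epsilon value are hypothetical, and real attacks on commercial voice assistants are considerably more elaborate.

```python
import torch
import torch.nn.functional as F

# Minimal FGSM sketch. `model` is a hypothetical speech-command
# classifier taking a (1, n_samples) waveform in [-1, 1] and
# returning class logits.
def fgsm_audio(model, waveform, true_label, epsilon=1e-3):
    model.eval()
    waveform = waveform.clone().requires_grad_(True)
    loss = F.cross_entropy(model(waveform), torch.tensor([true_label]))
    loss.backward()
    # One signed-gradient step: a perturbation quiet enough to be
    # near-inaudible, yet often enough to change the prediction.
    adversarial = waveform + epsilon * waveform.grad.sign()
    return adversarial.detach().clamp(-1.0, 1.0)
```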
Photo editing applications
In the field of photo editing, AI models often rely on Convolutional Neural Networks (CNNs) and advanced image processing techniques to perform tasks such as defect correction, detail enhancement, and even facial feature modification.
However, once integrated into applications and deployed on devices, these AI models become vulnerable to reverse engineering and malicious exploitation.
For example, imagine a company develops an application offering advanced photo editing features, powered by an AI model. This model is the result of significant investments in research and development and constitutes a competitive advantage for the company. A competitor could download the app, extract the proprietary AI model, and integrate it into their own product, bypassing years of R&D and millions in investment.
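To illustrate how low the bar can be, here is a hypothetical sketch: if the model ships as a plain TFLite file inside an Android APK, extracting and running it takes only a few lines. The file names and paths below are invented for the example.

```python
import zipfile
import numpy as np
import tensorflow as tf

# An Android APK is just a zip archive; an unprotected model bundled
# in its assets can be copied out directly.
with zipfile.ZipFile("photo_editor.apk") as apk:
    apk.extract("assets/enhance_model.tflite", path="stolen/")

# The stolen model now runs in the attacker's own pipeline.
interpreter = tf.lite.Interpreter(
    model_path="stolen/assets/enhance_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])
```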
Medical applications
In healthcare, AI models help detect tumors, analyze medical scans, and assist diagnostics. As a result, these models often handle highly sensitive personal health data.
If an attacker manages to reverse engineer a healthcare AI model, they could compromise the confidentiality of sensitive medical data stored on medical devices.
An attacker can also choose to perform a membership inference attack. Such attacks aim to determine whether a specific individual's data was part of an ML model's training set, potentially revealing diagnoses or medical history.
This can lead to exposure of personal health data, regulatory violations (e.g., HIPAA, GDPR), and reputational damage.
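At its core, membership inference exploits the fact that models tend to be more confident on their training data. Below is a minimal loss-threshold sketch in PyTorch; `model`, the record, and the threshold value are hypothetical, and practical attacks calibrate the threshold using shadow models.

```python
import torch
import torch.nn.functional as F

# Minimal loss-threshold membership inference sketch. `model` is a
# hypothetical diagnostic classifier; (x, label) is the record being
# tested for membership in the training set.
def looks_like_member(model, x, label, threshold=0.1):
    model.eval()
    with torch.no_grad():
        loss = F.cross_entropy(model(x), torch.tensor([label]))
    # Models tend to fit their training data more tightly, so an
    # unusually low loss on a record suggests it was used in training.
    return loss.item() < threshold
```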
Why This Matters
AI models are now central to product experiences and competitive differentiation — but they also represent a new target. Once deployed, they can be:
- Cloned through model extraction
- Manipulated with adversarial inputs
- Exploited for sensitive data through inversion or inference attacks
If you’re developing AI-powered products without securing the models themselves, you’re leaving your IP and your users vulnerable.
How Skyld Protects Your Models
At Skyld, we understand the importance of securing AI-powered applications. With expertise in AI and cybersecurity, we developed an SDK to protect AI models against extraction.