Edge AI: Benefits, Applications and Risks
What is Edge AI?
Edge AI combines two computer science concepts: edge computing and artificial intelligence.
According to Gartner, the edge refers to the physical location where things and people connect with the digital world. The edge is where data originates, and edge computing refers to processing that data at, or near, its source. The term is defined in contrast to cloud computing, where data is sent to a central server to be processed.
To put it simply, Edge AI is the case where artificial intelligence algorithms run on edge devices. Data is processed locally instead of being sent to a distant server. This mostly concerns the inference phase, which requires far less computing power than training and is therefore within the reach of smartphones and other connected objects. With edge inference, decisions are made in a split second, at the closest point of interaction with the user, without sending users’ data to a remote server.
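As a rough illustration, here is a minimal sketch of on-device inference using the TensorFlow Lite runtime; the model file name and the input data are hypothetical placeholders, not a reference implementation.

```python
# Minimal sketch of edge inference: the model runs locally on the device,
# so raw data never leaves it. "classifier.tflite" is a hypothetical model
# already converted for on-device use.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="classifier.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# A single frame from the device's camera (random placeholder data here).
frame = np.random.rand(*inp["shape"]).astype(np.float32)

interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()                      # inference happens on the device itself
scores = interpreter.get_tensor(out["index"])
print("Predicted class:", int(np.argmax(scores)))
```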
Cloud AI versus Edge AI
Benefits of Edge AI
- Data privacy: The data is processed where it is produced, reducing the attack surface and the probability of leakage. This is a huge advantage in the healthcare and quantified-self industries, which are fueled by private data. Data stays distributed, which lowers the risk of a massive breach, and edge AI typically discards data after use, a further asset for privacy.
- Reduced latency: Transferring data takes time, especially for a video or audio stream. Edge AI brings high-performance computing right to the sensor, allowing real-time analysis (a rough timing sketch follows this list). The video game industry, for example, leverages edge AI to deliver a seamless experience to the end user. Responding in a heartbeat can also be critical in fast-moving environments such as self-driving vehicles.
- Reduced costs: Using edge AI cuts costs in backend maintenance, cloud storage, and bandwidth use.
- Reliability: When AI is running at the edge, it does not depend on network connectivity to provide a timely response. This may be vital in transportation applications, where connectivity may not be available along the whole route.
- Reduced power: Reducing network usage and server load also saves on energy consumption.
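To give a feel for the latency benefit above, here is a rough timing sketch comparing a local call with a cloud round trip; predict_locally and the endpoint URL are hypothetical placeholders.

```python
# Rough latency comparison: local inference vs. a cloud round trip.
# predict_locally() and the endpoint URL are hypothetical placeholders.
import time
import requests

def predict_locally(data):
    # Stand-in for an on-device model call (e.g. a TFLite interpreter).
    return 0

payload = b"\x00" * 100_000                # e.g. one compressed camera frame

t0 = time.perf_counter()
predict_locally(payload)
local_ms = (time.perf_counter() - t0) * 1000

t0 = time.perf_counter()
try:
    requests.post("https://api.example.com/predict", data=payload, timeout=5)
except requests.RequestException:
    pass                                   # the placeholder endpoint may not exist
cloud_ms = (time.perf_counter() - t0) * 1000

print(f"on-device: {local_ms:.2f} ms, cloud round trip: {cloud_ms:.2f} ms")
```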
Edge AI use-cases and industry examples
- Industrial IoT: Edge AI can take the assembly line to the next level of automation, providing reliable, around-the-clock services. Using video analytics, AI can, for example, perform a visual inspection of products to detect defects (see the sketch after this list).
- Autonomous Vehicles: In autonomous vehicles, real-time analysis is critical. Whether it is adjusting the car’s behavior when approaching an intersection or reacting to an unexpected event, decisions must be made in a fraction of a second. In addition, network coverage is not reliable along the route.
- Healthcare: Edge AI has multiple applications in healthcare; for example, it can support diagnosis or be used for patient monitoring. Privacy is paramount when dealing with health data, and avoiding centralization is a winning strategy against massive breaches.
- Smart Homes: Smart homes rely on IoT devices that collect and process data from sensors around the house. Using edge AI instead of a centralized server keeps that data inside the home and reduces the costs linked to cloud computing and network usage.
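As a sketch of the industrial inspection scenario above, the loop below scores each camera frame on the device and flags suspected defects; the camera index, model file, and 0.5 threshold are hypothetical choices.

```python
# Sketch of an edge visual-inspection loop: every frame from the line camera
# is scored locally and defective parts are flagged immediately.
# "defect_model.tflite", the camera index and the threshold are placeholders.
import cv2
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="defect_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
_, height, width, _ = inp["shape"]         # assumes an NHWC image classifier

camera = cv2.VideoCapture(0)               # camera watching the assembly line
while True:
    ok, frame = camera.read()
    if not ok:
        break
    x = cv2.resize(frame, (width, height)).astype(np.float32)[None, ...] / 255.0
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    defect_score = float(interpreter.get_tensor(out["index"])[0][0])
    if defect_score > 0.5:
        print("Defect detected - reject this part")
camera.release()
```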
Specific risks of Edge AI
- Data loss: Data privacy is a great asset of edge AI, but data is usually discarded after processing, even though it could be useful for fine-tuning models. This is what federated learning addresses: a technique that enables training at the edge without centralizing raw data (a toy sketch follows this list).
- IoT security is brittle: A distributed infrastructure is more difficult to secure than a centralized one, so you will have to put additional security measures in place to protect your applications. For example, you can deploy proactive threat detection to identify suspicious behavior as early as possible, and we recommend enforcing automatic patch cycles across the devices. We can guide you in implementing these measures, so don’t hesitate to contact us.
- Intellectual property: This risk goes hand in hand with the previous one. Without any security at the application layer, an attacker can reverse engineer your code and your models to build an identical application for a fraction of the cost. Think of all the time invested in acquiring, cleaning, and formatting data to train a world-class model. Our next article will be dedicated to this problem: stay tuned!
- Fraud and trust loss: With access to the model, it is particularly easy to craft inputs that deceive it. Such an input is known as an adversarial example: the attacker adds a small amount of noise, undetectable to the human eye, that dramatically changes the inference result (a minimal sketch closes this section). This is of paramount importance for all kinds of access control models: imagine a machine-learning-powered scanner that checks luggage for dangerous items at the airport. A knife can be shaped so that the system classifies it as a pencil. An attacker can also design harmless-looking stickers and place them on road signs to fool the recognition system of an autonomous vehicle: the stop sign is read as a soda can and dismissed by the car, which can result in a crash. We will dedicate an article to this kind of attack.
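To unpack the federated learning idea mentioned under "Data loss", here is a toy sketch of federated averaging: each device trains on its own data and only model weights, never raw data, are sent back for aggregation. The local_update function, the weight vector, and the data sizes are hypothetical placeholders.

```python
# Toy sketch of federated averaging (FedAvg): devices train locally on private
# data and share only weight updates; the server averages them, weighted by
# how much data each device holds. local_update() is a hypothetical placeholder.
import numpy as np

def local_update(weights):
    # Stand-in for one round of on-device training on private local data.
    return weights + np.random.normal(scale=0.01, size=weights.shape)

global_weights = np.zeros(10)              # toy model with 10 parameters
device_data_sizes = [1200, 800, 2000]      # examples held by each edge device
total = sum(device_data_sizes)

for round_number in range(5):              # a few federation rounds
    updates = [local_update(global_weights) for _ in device_data_sizes]
    # Weighted average: devices with more data contribute more to the new model.
    global_weights = sum((n / total) * w
                         for n, w in zip(device_data_sizes, updates))
```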
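Finally, to make the adversarial-example threat concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one well-known way such perturbations are crafted; the classifier, input image, label, and epsilon value are hypothetical placeholders.

```python
# Minimal FGSM sketch: nudge every input pixel in the direction that increases
# the model's loss, by an amount (epsilon) too small for a human to notice.
# The classifier, image, label and epsilon below are hypothetical placeholders.
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights=None)    # stand-in classifier
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

image = tf.random.uniform((1, 224, 224, 3))                 # placeholder "photo"
true_label = tf.constant([207])                             # placeholder class id
epsilon = 0.01                                              # perturbation budget

with tf.GradientTape() as tape:
    tape.watch(image)                      # track gradients w.r.t. the input
    prediction = model(image)
    loss = loss_fn(true_label, prediction)

gradient = tape.gradient(loss, image)
# The perturbed image looks identical to a human but can flip the prediction.
adversarial_image = image + epsilon * tf.sign(gradient)
```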