Coming back home after Christmas Day, I said, “Okay Google, switch on the lights”, as usual. And then I heard something along the lines of “Sorry, I’m not connected to the internet!”. Wait, what? Not that losing internet surprised me, but *why* should I stay in the dark when I have no internet connection? Well, because the AI used for voice recognition runs in the cloud: no cloud means no light! But couldn’t we do the same locally, without an internet connection? We could indeed, and this is what we call Edge Artificial Intelligence, or Edge AI for short. In this article, we will discuss how this technology can solve my problem and what else it can offer us.
What is Edge Intelligence?
Knowing that the problem with my lightbulb comes from a loss of connectivity does not make it any less frustrating. But we have a fancy-sounding solution: Edge AI.
For real-time applications, Edge Artificial Intelligence moves the AI closer to where it is really needed: into the device itself, instead of relying on a server in the cloud.
Before we move on, we should understand the basics of how machine learning works.
What is Edge ML? A reminder on machine learning
The two main phases of any machine-learning-based solution are training and inference, which can be described as:
- Training is the phase where a (very) large amount of known data is fed to a machine learning algorithm so that it can “learn” (surprise) what to do. From this data, the algorithm produces a “model” containing the results of its learning. This step is extremely demanding in processing power.
- Inference is the use of the trained model on new data, to infer what it should recognize. In our case, this would be interpreting what I wanted when I asked to turn on the lights.
The “known data” used in the training phase is called labeled data. This means that each piece of data (sound, image, …) has a tag attached, like a little sticker, describing what it is. A speech recognition AI is trained on thousands of hours of labeled voice data in order to extract the text from a spoken sentence. Natural language recognition can then be used to convert that text into commands that a computer can understand.
Once trained, a model requires only a fraction of that processing power to perform the inference phase. The main reason is that inference works on a single set of input data, while training typically requires a huge number of samples. The production model used for inference is also “frozen” (it cannot learn anymore) and may have its less relevant features trimmed out, as well as being carefully optimized for the target environment. The net result is that it can run directly on an embedded device: the Edge. Doing this gives the decision power to the device, allowing it to be autonomous. That is Edge AI!
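To make the two phases concrete, here is a minimal sketch in Python, using scikit-learn purely as an illustration (the dataset and file name are illustrative choices, not the voice-recognition stack from the lightbulb example): a model is trained on labeled data, frozen to disk, and later loaded to run a single, cheap inference.

```python
from sklearn.linear_model import SGDClassifier
from sklearn.datasets import load_digits
import joblib

# Training phase: feed many labeled samples to the algorithm (the heavy work).
digits = load_digits()                  # 1,797 labeled images of handwritten digits
model = SGDClassifier()
model.fit(digits.data, digits.target)   # the "learning" happens here

joblib.dump(model, "model.joblib")      # the frozen "model" artifact

# Inference phase: load the frozen model and classify ONE new sample (cheap).
frozen = joblib.load("model.joblib")
prediction = frozen.predict(digits.data[:1])
print(prediction)                       # e.g. [0]
```

In a real Edge AI product, only the last three lines would run on the device; the training and freezing would happen on a powerful machine beforehand.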
Hardware requirements of Edge Computing
Like a lot of new concepts, the technology behind Edge Artificial Intelligence has been around for some time now: machine learning algorithms are common on computers and smartphones, where they work just fine. But what about embedded devices? Well, the tools and hardware are now coming together to form a solution that makes sense, thanks mainly to:
- Increases in processing power of devices, and the availability of modules providing hardware acceleration for AI (GPUs and ASICs)
- Constant improvement of both AI models and their performance
- The quality of the tools and resources that make the journey easier for data scientists, AI specialists, and developers
What this means is that we can now integrate Artificial Intelligence solutions not only in supercomputers, but in cars, smartphones, web pages, Wi-Fi routers, and even in home security systems.
What are the stakes of AI inference on a device?
First of all, Edge Artificial Intelligence is a matter of choosing the right hardware.
AI inference on edge devices can be implemented on several kinds of hardware:
- CPU: On smartphones and embedded devices, any recent ARM CPU (Cortex-A7 and higher) is more than capable of handling it. This may not be the fastest or most efficient solution, but it is usually the easiest. TensorFlow Lite is commonly used here, and provides the key features of the larger TensorFlow framework (a minimal inference sketch follows this list). You can find great information about accelerated inference on ARM CPUs on the TensorFlow blog.
- GPU: Out-of-the-box support for GPUs varies, but they typically provide high throughput (i.MX6 Vivante, NVIDIA Jetson), allowing higher inference frequency and lower latency. A GPU also removes a large workload from the CPU. You may take a look at our post about Yocto for NVIDIA Jetson.
- AI-specialized hardware (ASICs, TPUs): This is a fast-growing category. These chips provide the most efficient AI solutions, but may prove expensive or harder to design around.
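As promised, here is the minimal TensorFlow Lite inference sketch for the CPU option (the model file name is a placeholder; on a constrained device you would typically install the lighter tflite_runtime package instead of the full TensorFlow package):

```python
import numpy as np
import tensorflow as tf

# Load a pre-trained, converted model (placeholder file name).
interpreter = tf.lite.Interpreter(model_path="speech_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one input sample shaped as the model expects, then run inference.
sample = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], sample)
interpreter.invoke()

result = interpreter.get_tensor(output_details[0]["index"])
print(result)
```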
Let’s dig a little deeper into the last two options.
One possibility is to leverage the processing power and parallel-processing capabilities of GPUs. An AI is like a virtual brain with hundreds of neurons: it looks very complex but is actually made of a large number of simple elements (the neurons, in the brain analogy). Well, GPUs were made for exactly that kind of work: simple, independent operations applied to every single point (pixel or vertex) on your screen. Most machine learning frameworks (TensorFlow, Caffe, AML, …) are designed to take advantage of the right hardware when it is present. NVIDIA boards are good candidates, but virtually any GPU can be leveraged.
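As a quick illustration of how transparent this is, TensorFlow 2.x lets you check which accelerators are visible and, optionally, pin work to a device explicitly (a sketch, assuming a supported GPU and driver are installed):

```python
import tensorflow as tf

# List the accelerators TensorFlow can see; when a GPU is present,
# operations are placed on it automatically, with no code changes.
gpus = tf.config.list_physical_devices("GPU")
print(gpus)

if gpus:
    # Optionally pin a computation to a specific device explicitly.
    with tf.device("/GPU:0"):
        x = tf.random.normal((1024, 1024))
        y = tf.matmul(x, x)   # runs on the GPU
    print(y.device)
```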
Another solution is to integrate specialized hardware. Machine learning can be accelerated through custom silicon, and the two contenders are AI-specific chips and AI ASICs (Application-Specific Integrated Circuits). And these are moving fast! The first version of Google’s Edge TPU (Tensor Processing Unit) is now available to a few lucky beta testers (including us!). ARM unveiled its Machine Learning and Object Detection processors, and Intel, Microsoft, and Amazon are working on their own solutions too. Right now, the best option is to have a GPU supported by the AI tools you are using. Google’s Edge TPU will be more widely available soon but is not yet production-ready, and a custom ASIC is expensive to design and produce, so for now it is a privilege enjoyed solely by large-scale, specialized products.
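For the Edge TPU specifically, the flow is almost identical to the plain TensorFlow Lite sketch above: the model is first compiled for the TPU, and a delegate is attached when creating the interpreter. Here is a sketch based on the tflite_runtime package (the file name is a placeholder, and details may still change while the Edge TPU is in beta):

```python
import tflite_runtime.interpreter as tflite

# Attach the Edge TPU delegate so supported operations run on the accelerator.
# The model must first be compiled for the Edge TPU (hence the file name).
interpreter = tflite.Interpreter(
    model_path="speech_model_edgetpu.tflite",
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()
# From here on, set_tensor / invoke / get_tensor work exactly as on the CPU.
```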
But beyond the ‘avant-garde’ factor, we know that any technology in a product, Edge Artificial Intelligence included, should add value.
So here is a short list of Edge Artificial Intelligence advantages and disadvantages, to help you decide if this is the right solution for your devices.
5 reasons to consider Edge AI for your innovation
- Offline availability: This is probably the most obvious argument. If an application needs to be available no matter the conditions and connectivity, intelligence must be put in a local device. Loss of connectivity will happen: unstable cellular data in a remote place, loss of service after a DDoS attack, or simply because your device is being used in a basement! This is a huge challenge for cloud-based solutions. But if the intelligence lives locally on the device, you have nothing to worry about.
- Lower cloud service costs (who doesn’t want that?): Cloud services are very convenient (scalability, availability), but they represent a considerable recurring cost that increases as more and more people use a solution. And these costs last throughout the life of a product. If you were instead to sell a standalone device running the AI, you would significantly reduce its recurring costs and infrastructure needs.
- Lower connectivity costs: Bandwidth and cellular data are expensive too. Processing the information locally and sending only the result of the AI’s computation can cut the bill by a factor of 100 or more (especially for video). For a video security solution, megabytes of video shrink into a few bytes. For your security camera, these few bytes would say: “No burglar here, but your dog just made a mess!”.
- Handling confidential information: Why send critical information over hundreds of kilometers of wire when it can be gathered and processed locally? This does not mean we should be less concerned about the security of our devices, but it is one less thing to worry about. And it will bring peace of mind to your customers too.
- Response time is critical: Gathering and processing data locally will most likely achieve a faster response time, improving the user experience. That is, however, only true if the device can process the data fast enough.
- (Bonus) Be “green”! Okay, this alone will not turn a company or product into an eco-friendly one, but processing data locally definitely helps make an efficient AI device. A small-to-medium IoT device will send around 1 MB or less daily, which roughly equates to 20 g of CO2 per day. Compounded over a year, 10,000 devices are responsible for up to 73 tons of CO2! Processing data locally could shrink that to 730 kg, which is much better for the planet. And keep in mind that a video- or image-based solution could have a far bigger impact than that. (The back-of-envelope arithmetic is sketched below.)
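For transparency, here is the back-of-envelope arithmetic behind those numbers (the 20 g-per-day figure is a rough estimate, not a measured value):

```python
devices = 10_000
grams_co2_per_device_per_day = 20   # rough estimate for ~1 MB of daily traffic
days_per_year = 365

cloud_kg = devices * grams_co2_per_device_per_day * days_per_year / 1000
print(cloud_kg)         # 73000.0 kg, i.e. 73 tons of CO2 per year
print(cloud_kg / 100)   # 730.0 kg if local processing cuts traffic ~100x
```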
If you don’t identify with any of those scenarios, then you are most likely better off with a standard cloud-based solution. Microsoft, Amazon, and Google services provide a solid starting point, while many standalone libraries can be used to build custom, more “hand-made” solutions in the cloud.
But if you do identify with the above, then learning the limits of Edge AI is the next step toward a sound understanding.
Limitations of AI at the edge
Let’s face it, Edge Artificial Intelligence is newer than cloud AI integration and intrinsically has some limits. To help you understand these limitations and risks, we have compiled the list below:
- You need an edge-capable device: Starting with the obvious, you will need a place to put that AI. It can be a mobile device like a smartphone, or a device that you are creating. Whatever it is, it must be capable of running an AI solution efficiently.
- Performance is limited by the device: An edge device will always have less processing power than an army of virtual servers. For that reason, it is important to identify the required response time and the complexity of the solution from the start, in order to design it accordingly.
- Connectivity is needed to update the model: While inference is fast, training is usually a very long process, even on dedicated, powerful hardware. As such, it is normally performed on dedicated machines and cannot be done on your device. (Online machine learning is one exception, which we will address in a later article.) After some time, your model will need to be updated (to recognize new sentences, support new scenarios, …), and the most efficient way to deploy this update is with an OTA mechanism, as sketched after this list.
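To make that last point concrete, here is a naive sketch of an OTA model-update check. The endpoint, manifest format, and file paths are all hypothetical, and a production mechanism would add authentication, signature verification, and atomic file swaps:

```python
import json
import urllib.request

MODEL_PATH = "/data/models/current.tflite"                          # hypothetical local path
MANIFEST_URL = "https://updates.example.com/models/manifest.json"   # hypothetical endpoint

def check_for_model_update(current_version: str) -> str:
    """Download a newer model if the server advertises one; return the active version."""
    with urllib.request.urlopen(MANIFEST_URL) as resp:
        manifest = json.load(resp)   # e.g. {"version": "1.3", "url": "https://..."}

    if manifest["version"] != current_version:
        # Fetch the retrained model and replace the local copy.
        urllib.request.urlretrieve(manifest["url"], MODEL_PATH)
        return manifest["version"]
    return current_version
```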
Making the move to Edge Artificial Intelligence can bring new opportunities
Edge Artificial Intelligence is the next step for many AI-based services. Maximum availability, data security, reduced latency, and lower costs are all key strengths of tomorrow’s AI systems. As a company or an individual, now is a good time to make the move. While some might argue that the technology has not yet reached maturity, the pieces are coming together. Experimenting and implementing a proof of concept is possible with limited effort, granting a competitive advantage before the approach becomes globally adopted. Given the exponential growth of AI and IoT, the growth of Edge AI is inevitable, making it a very good candidate to invest in.
In the next article, we will see how to design an Edge AI solution. From connectivity to device AI updates, let’s make your device intelligent!
Resources
- Comparing various online solvers (scikit-learn): http://scikit-learn.org/stable/auto_examples/linear_model/plot_sgd_comparison.html
- Federated Learning: Collaborative Machine Learning without Centralized Training Data (Google AI Blog): https://ai.googleblog.com/2017/04/federated-learning-collaborative.html
- Online machine learning (Wikipedia): https://en.wikipedia.org/wiki/Online_machine_learning
- Google Cloud Edge TPU: https://cloud.google.com/edge-tpu/
- Accelerating AI on the intelligent edge (Microsoft Azure blog): https://azure.microsoft.com/en-us/blog/accelerating-ai-on-the-intelligent-edge-microsoft-and-qualcomm-create-vision-ai-developer-kit/
- IoT Edge tutorial: build our own autonomous robot