Edge computing AI: an opportunity for data science
It's no secret that edge computing AI is a big deal for data science and industrial applications.
After all, Deloitte predicted that in 2020 alone some 750 million edge AI chips would be sold, a market worth some $2.6 billion, with every one of those chips capable of moving the capacity to run AI models onto an embedded device instead of relying on the cloud.
This investment, coupled with enthusiasm for TinyML and the trend towards adopting edge AI for reasons ranging from lower computing costs to better environmental outcomes, means that data scientists are hearing more and more about edge computing.
But taking your AI model to the edge is not always straightforward.
How can you help data scientists think in terms of edge computing? How do you bring an AI model to the edge? How do you convert an existing artificial intelligence model into something useful on a device? How should data from an embedded device be communicated to the cloud? And what type of hardware can you use for your edge computing, and why does it matter?
In this article, I’ll answer these questions and more to demonstrate why taking artificial intelligence to the edge is a great step forward for data science.
What is the ‘edge’ in edge computing or edge AI?
Let’s start with the most basic question: just what is the ‘edge’ in ‘edge AI’ or ‘edge computing’?
Put simply, the edge is shorthand for processing data as close as possible to the end user. In practical terms, this means that data might be processed on a connected device rather than pushed to the cloud for processing. It’s increasingly popular, and with good reason: audio and video processed close to where they are captured can be rendered faster than if they were pushed to the cloud or across a local network, and sensor data captured by an IoT device can be processed on that same device more efficiently, with less chance of data loss.
Before selecting the right hardware platform, remember the reasons for using edge computing AI
Running an artificial intelligence model at the edge has four clear advantages for data science over relying on the cloud:
- A reliable, always-up connection: When you can gather and process data on the same device instead of relying on a network connection to the cloud, you’ll avoid network connection issues.
- Goodbye to latency: When processing is local, you’ll avoid any and all issues around latency in communicating with the cloud.
- Fewer security and privacy issues: No need to communicate with the network or the cloud means reducing the risk that data will be intercepted, lost, stolen, or leaked.
- Lower bandwidth costs: Reduce the communication between devices and the cloud, and your bandwidth costs fall, too.
With all this in favor of edge computing, you might get the impression that you don’t need the cloud at all. Well, not so fast.
Edge computing does not mean “no cloud”
Short answer: yes, you probably still need the cloud. While processing data on a remote device or elsewhere on the edge has its advantages, there are also limits. Training artificial intelligence models requires significant processing power that can usually only be found in the cloud.
The latest research from IBM suggests that AI models can be trained with far more limited processing power, so this might be changing. For the moment, though, it remains a research topic, and this new method of training networks is not yet available commercially.
That said, if your data science use case is anomaly detection and you have labels for your data, you do have one option: NanoEdge AI. Developed by Cartesiam, this framework allows anomaly detection models to be trained directly on the device. Outside of this very specific use case, though, the options are either to use an existing dataset to train the edge AI model or to send data to the cloud to start building a new dataset.
Hardware & Software Selection
Which solution should you use to send your data from the edge to the cloud?
There are really two ways to send your data to the cloud: buy or build.
On the buy side, off-the-shelf solutions like Edge Impulse are built to connect edge devices to a cloud platform and securely transfer datasets between devices and the cloud. Developers can test the free pricing tier to learn how it works and experiment with the solution before stepping up to the enterprise level.
Alternatively, you could build a solution yourself based on a protocol like MQTT or CoAP. MQTT is an open, industry-standard publish/subscribe messaging transport that is ideal for connecting remote devices with a small code footprint and minimal network bandwidth. The Constrained Application Protocol, or CoAP, is an alternative to MQTT designed for the low-memory, low-power devices that constitute many IoT networks.
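To make the "build" route concrete, here is a minimal sketch of an edge device publishing sensor readings to a cloud broker over MQTT, using Python’s paho-mqtt client (1.x-style constructor). The broker address, topic name, and JSON payload schema are illustrative assumptions, not part of any particular platform:

```python
import json
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.com"  # hypothetical broker address
TOPIC = "sensors/device-01/temperature"  # hypothetical topic

client = mqtt.Client()  # paho-mqtt 1.x constructor
client.connect(BROKER_HOST, port=1883)
client.loop_start()  # handle network traffic in a background thread

# Publish one reading per second; the payload schema is an assumption
for reading in [21.4, 21.6, 21.5]:
    payload = json.dumps({"ts": time.time(), "temp_c": reading})
    client.publish(TOPIC, payload, qos=1)  # qos=1: at-least-once delivery
    time.sleep(1)

client.loop_stop()
client.disconnect()
```

On a constrained device, qos=1 is a common middle ground: it guarantees delivery without the handshake overhead of qos=2.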
Which hardware do you need to build your edge computing AI?
When you talk about edge computing or edge AI, you are talking about embedded devices based on either microprocessors or microcontrollers.
Both have their pros and cons, and your choice will depend on the complexity of the AI model your data science needs.
Put simply, the practical difference between the two is battery life: a microprocessor-based device might run on a battery for a day, while a microcontroller-based device can run for months or even years.
A few tips for running edge computing AI
Model optimization
Having made your hardware and embedded software choices for edge computing AI, you’ll likely have to revisit your artificial intelligence model. An edge AI model will need to be pruned or quantized in order to run efficiently on the edge and its smaller-capacity processors.
Pruning involves reducing the number of neurons per layer in the model; the pruned elements are no longer considered in the calculation, which reduces the strain on the processor. The end result is a smaller neural network and a corresponding reduction in the processing power required.
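As an illustration, here is a minimal sketch of magnitude-based pruning using PyTorch’s torch.nn.utils.prune module; the toy model and the 50% pruning ratio are arbitrary choices for demonstration:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# A toy model standing in for a trained network (assumption for illustration)
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))

# Zero out the 50% of weights with the smallest L1 magnitude in the first layer
prune.l1_unstructured(model[0], name="weight", amount=0.5)

# Make the pruning permanent by removing the reparametrization hooks
prune.remove(model[0], "weight")
```

Note that unstructured pruning like this zeroes weights rather than physically shrinking the layers; the size and speed gains come from storing and executing the resulting sparse network efficiently, or from structured pruning that removes whole neurons.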
Quantization involves replacing 32-bit floating-point weights with 8-bit approximations; as a result, the size of the network is reduced by a factor of four, and it becomes possible to use optimizations specific to embedded devices (e.g., single instruction, multiple data).
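As a sketch, here is how post-training quantization might look with the TensorFlow Lite converter. The small stand-in Keras model is an assumption for illustration; a real workflow would start from your trained network:

```python
import tensorflow as tf

# A small stand-in model; in practice you would load your trained network
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(64,)),
    tf.keras.layers.Dense(2),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Dynamic-range quantization: store 32-bit float weights as 8-bit integers
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```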
Though there are limits and constraints to both, the end result of either approach is a smaller artificial intelligence model that demands less power and can be processed on the edge instead of relying on the cloud.
Hardware acceleration when your model can’t be smaller
If, after pruning and quantization, your edge computing AI model is still using too much power or not running fast enough, you might need hardware acceleration. In short, you delegate the most power-intensive parts of your model’s execution to specialized hardware, usually designed to bring an extra processing boost to your artificial intelligence model while reducing its power consumption. A good example of such a hardware accelerator is the NPU used by NXP on its new i.MX8M Plus.
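For instance, with a TensorFlow Lite model you can hand execution to an accelerator through a delegate. The sketch below assumes an i.MX8M Plus-style board where the NPU is exposed via a vendor delegate library; the library path is an assumption, so check your board’s BSP documentation for the exact name:

```python
import tflite_runtime.interpreter as tflite

# Path to the vendor's NPU delegate library (assumption; confirm in your BSP docs)
NPU_DELEGATE = "/usr/lib/libvx_delegate.so"

delegate = tflite.load_delegate(NPU_DELEGATE)
interpreter = tflite.Interpreter(
    model_path="model_quantized.tflite",
    experimental_delegates=[delegate],  # run supported ops on the NPU instead of the CPU
)
interpreter.allocate_tensors()
```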
Taking advantage of edge computing AI for your data science requires knowledge and understanding of embedded software, cloud connectivity, network optimization and, of course, data science, too. While many data scientists are comfortable with the specifics of artificial intelligence models, they are more likely to be behind the eight ball when it comes to the architecture of edge AI networks and the hardware of the devices that support them. In such cases, the advice, guidance, and assistance of experts in embedded software, IoT devices, and edge AI, like those here at Witekio, will accelerate the deployment of models and optimize their processing in their new home on the edge.