Will TinyML supercharge Edge AI on MCU?


Deep learning networks are getting smaller, write Pete Warden and Daniel Situnayake. The Google Assistant team, they explain, can detect words with a model just 14 kilobytes in size, and that’s small enough to run on a microcontroller. This new world of machine learning on the edge is known as TinyML, and it is one of the hottest trends in both IoT technology and machine learning.

As TX Zhuo writes at VentureBeat, between hardware advancements and the TinyML community’s recent innovations in machine learning, it is now possible to run increasingly complex deep learning models directly on microcontrollers. In a world of 250 billion microcontrollers in everything from printers and TVs to cars and pacemakers, TinyML means that all of these devices “can now perform tasks that previously only our computers and smartphones could handle”. All of our devices and appliances, says Zhuo, are getting smarter thanks to microcontrollers, edge computing, and TinyML.

The enthusiasm for TinyML is spreading, with outlets like Forbes (“it will bring intelligence to millions of devices that we use on a daily basis”), The Next Web (“breathing life into billions of devices”) and IoT World Today (“potential to demystify machine learning”) all touting this new trend.

But what does TinyML really mean for the IoT world? And how will industrial customers benefit from the machine learning and edge computing capabilities that TinyML offers?

We sat down with Witekio’s Director of Technology Cedric Vincent to learn more about machine learning on the edge.

Why is TinyML perfect for Edge AI on MCU?

Can we start with the whole idea of TinyML – what makes TinyML ‘tiny’ in the first place?

At its core, TinyML is machine learning running on low-power devices like microcontrollers, whether they are battery-powered or plugged in. Either way, these are devices with limited energy budgets and not much computing power. Essentially, it is very cheap hardware, a range of inexpensive devices, gaining machine learning capabilities, and it is the next evolution of edge AI.

How inexpensive are we talking?

Well, ten years ago you might have been looking at ten to fifteen dollars. Today? Maybe two or three dollars. This is completely accessible for many projects.

What is the genesis of TinyML for Edge AI?

And who was first to take advantage of these cheaper components with TinyML baked in?

That was Google, and one of the early applications was as an elegant solution to a smartphone problem.

Virtual assistants – think Siri on iOS, Google Assistant, or Amazon’s Alexa – rely on a specific wake word to power up. This means the virtual assistant needs to be constantly listening for the wake word, and that presents a problem. Keeping a device like a smartphone always awake and listening for a wake word will run down the battery fast. But from a UX perspective, demanding that a user wake up their phone to use a voice assistant is less than ideal.

So, what Google did was add an additional component to their smartphones, a small microcontroller that would only listen for the wake word. If it heard the wake word, it would wake up the phone. It needed to be able to process the audio it heard on its own, on the edge, without relying on the processing power of the phone, and it needed to be cheap – a 50-cent component, maybe a dollar.

Now Google could ship a phone that was always listening for the wake word but didn’t have to draw much power to do so. The virtual assistant would only wake up the phone when the user needed it, and there would be no friction when it came to UX. This is a perfect case study in edge AI, and Google called it TinyML.
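
To make the scale concrete, here is a minimal sketch, using TensorFlow’s standard conversion tooling, of how a wake-word-sized model is typically prepared for a microcontroller. This is not Google’s actual pipeline; the architecture, input shape, and labels below are illustrative assumptions, but the workflow of training a tiny network, quantizing it to 8-bit integers, and exporting a flatbuffer of a few tens of kilobytes is the usual TinyML route.

```python
# Illustrative sketch: build a tiny keyword-spotting-style model, apply full
# integer quantization, and check that the exported flatbuffer is small enough
# for a microcontroller. Architecture, shapes and labels are made up.
import numpy as np
import tensorflow as tf

# Tiny classifier over 49x40 audio spectrogram frames (roughly the input
# shape used by common keyword-spotting demos).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(49, 40, 1)),
    tf.keras.layers.Conv2D(8, (10, 8), strides=(2, 2), activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(4, activation="softmax"),  # e.g. wake word / silence / unknown / noise
])

def representative_data():
    # Calibration samples for quantization; random here, real audio in practice.
    for _ in range(100):
        yield [np.random.rand(1, 49, 40, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
print(f"Quantized model size: {len(tflite_model) / 1024:.1f} KB")
```

The exact number will vary with the architecture, but a model like this should land in the same ballpark as the 14-kilobyte wake-word model mentioned in the introduction, small enough to sit in the flash of a very cheap microcontroller.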

That’s a nice solution, but it’s also just processing a single word. Is the machine learning on microcontrollers that TinyML offers limited to such simple tasks?

Not at all – let me give you an example.

You’ve heard of GPT-3 from OpenAI? That’s a machine learning tool that is capable of writing everything from a news article to a blog post to poetry. It’s amazing, really. You just ask for 1000 words on a topic and – bam – the machine learning goes to work and serves up something that is very close to what a human could write.

Now GPT-3 has 175 billion parameters; it’s incredibly complex. Think of it like 175 billion little knobs that each need to be adjusted, and adjusting them takes time. That time – the time during which the algorithm is learning – is something like a few hundred years on a regular computer! That’s a lot of time and a lot of computing power.
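
For a rough sense of where a figure like that comes from, here is a back-of-envelope sketch. The total training compute is the widely cited estimate reported alongside GPT-3 (about 3,640 petaflop/s-days); the sustained throughput assumed for a single machine is a guess of roughly one good GPU, so the result is only an order-of-magnitude check, not a precise figure.

```python
# Back-of-envelope check of the "few hundred years" figure. The training
# compute is the widely cited estimate for GPT-3; the machine throughput is
# an assumption, so treat the result as an order of magnitude only.
PFLOP_S_DAYS = 3640                          # reported GPT-3 training compute
total_flops = PFLOP_S_DAYS * 1e15 * 86_400   # ~3.1e23 floating-point operations

sustained_flops = 3e13                       # assume ~30 TFLOP/s sustained on one machine
seconds = total_flops / sustained_flops
years = seconds / (365 * 24 * 3600)
print(f"roughly {years:,.0f} years on a single machine")  # on the order of a few hundred years
```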

But then some other researchers realized that you didn’t need so many parameters, or as much power, to produce a result that was almost as good. One university group managed to achieve very similar performance with roughly 0.1% of the parameters.

This example is, of course, a bit extreme because the optimized model still requires 223 million parameters and will never run on a microcontroller. But it does demonstrate a trend: more and more scientists are developing new architectures for AI models that use very few parameters yet still achieve state-of-the-art performance. These new network architectures will be TinyML’s playground.

And this is new?

Machine learning on microcontrollers at the edge is new, but the idea of simplifying something to reduce the processing power, and the time required to do that processing, is not.

It’s similar to what mathematicians and computer scientists call quantization. When you do a calculation with data, you can encode the value of that data with a great many decimal places. This allows you to be very precise: instead of assuming a value is 3, you use the true result, which might be 2.93523565800632861139.

But using all those additional decimal places takes more processing power and an algorithm that can deal with them. Yet for many use cases, the shorter 3 is going to be good enough, and if you can make the calculation easier to process, it will require less power and can be handled on the edge, on a far less powerful device.
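
Here is a small sketch of that idea in code. It maps 32-bit floating-point values onto 8-bit integers with a scale and zero point, then maps them back; the values and range are illustrative and this is not any particular framework’s implementation, but this kind of affine mapping is essentially what TinyML toolchains apply to model weights so a microcontroller can work with cheap integer arithmetic.

```python
# Illustrative 8-bit quantization: represent floats with a scale, a zero point
# and small integers, then recover approximate values. Numbers are made up.
import numpy as np

values = np.array([2.93523565800632861139, -1.25, 0.5, 3.999], dtype=np.float32)

# Affine quantization to signed 8-bit over the observed range.
lo, hi = float(values.min()), float(values.max())
scale = (hi - lo) / 255.0
zero_point = int(round(-128 - lo / scale))

quantized = np.clip(np.round(values / scale) + zero_point, -128, 127).astype(np.int8)
dequantized = (quantized.astype(np.float32) - zero_point) * scale

print("int8 values:", quantized)
print("recovered:  ", dequantized)
print("max error:  ", float(np.abs(values - dequantized).max()))  # tiny next to the ~5.2 range
```

The recovered values differ from the originals by a fraction of a percent of the overall range, which is exactly the trade-off described above: slightly less precision in exchange for far cheaper computation.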

What impact will TinyML have on Edge AI?

TinyML isn’t the first time that machine learning has been a part of the IoT conversation. We’ve had cloud computing and machine learning in the cloud for some time, and edge computing, too.

That’s true, and so we can think of this as the next evolution in machine learning and edge artificial intelligence in the IoT world.

If you think about an IoT network that connects to the cloud to process its machine learning, you have a very powerful solution, but also one that comes with some costs. You need to secure the transfer of data between the IoT devices and the cloud, you need to secure the cloud, you need to work on connectivity, and depending on where you are processing and what data you are working with, you might run into privacy issues, too.

TinyML can mean keeping the data on the device and processing it on the edge using machine learning models built into the device. Straight away you have a reduction in complexity and, while you will still have to invest in device and network security, you could escape the data privacy issues and the cloud-related challenges altogether.

So TinyML seems to offer a lot of promise but is there a chance we might be investing too much faith in this new technology?

That’s a good question. TinyML is brand new and, honestly, it still sits very close to edge AI. There are really only a few very specific use cases – think condition monitoring, for example – that are apparent right now. I’ve spoken with some companies that are very bullish on TinyML for edge AI but whose customer base is not ready to make the switch yet.

Will it change everything? It’s probably too early to say, but we’re already working on edge projects with our clients and there are companies out there that are committing to TinyML and edge computing in a big way. It’s definitely one to keep an eye on.

Cedric Vincent - Chief Technical Officer
04 December 2020