Find out how IoT adoption challenges in terms of security and efficiency are already setting the stage for the next big tech.

Alexa, what is the next big tech coming our way?

Alexa: I have scoured recent developments for an answer: The growth of AI has enabled the efficient processing of Internet of Things (IoT) data in the Cloud and at the edge. However, as IoT grows and matures, so will the amount of data that is presented for real-time analysis.

Yet most of today’s microcontroller units (MCUs) are too underpowered to keep up economically. These devices struggle to deliver high computing performance and high data throughput, causing latency problems that are a death knell for real-time AI.

Over the horizon lies a glimmer of hope: TinyML (or ‘very edge AI’), which enables data analytics on low-powered hardware with modest processing power and small memory, aided by software designed for small inference workloads. It has the potential to revolutionize the future of the IoT.

In fact, I am the result of TinyML.

‘Very edge AI’ on the horizon

TinyML is broadly defined as a machine learning (ML) technology that enables data analytics on hardware and software dedicated to low-powered systems, typically in the milliwatt (mW) range, using algorithms, networks, and models of 100 kilobytes (kB) and below.

This means TinyML often involves MCUs or sensors embedded in a module or device, enabling AI at ultra-low power and in performance-constrained environments.

However, to be equipped with TinyML, IoT devices or machines must be connected to a public or private network, and they typically perform a basic level of automation and augmentation, focusing on singular tasks. Familiar examples of TinyML include the wake words for Apple’s Siri, Amazon’s Alexa, and ‘OK Google’ in those brands’ gadgets.
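As an illustration of that single-task, always-on pattern, the gating stage of a wake-word pipeline can be sketched in a few lines. This is a hedged toy example: real wake-word engines use compact neural networks, and the frame format and 0.25 threshold below are invented for illustration.

```python
# Toy sketch of the always-on "gate" in a wake-word pipeline.
# Real engines (Siri, Alexa, 'OK Google') use small neural networks;
# here a simple energy threshold stands in for that model, and the
# 0.25 threshold is an invented illustrative value.

def frame_energy(frame):
    """Mean squared amplitude of one audio frame."""
    return sum(s * s for s in frame) / len(frame)

def detect_wake(frames, threshold=0.25):
    """Return indices of frames loud enough to wake the heavier
    recognition model; all other frames are discarded on-device."""
    return [i for i, f in enumerate(frames) if frame_energy(f) >= threshold]

# Mostly-silent toy stream with one loud frame.
stream = [[0.01, 0.02, -0.01], [0.9, -0.8, 0.7], [0.0, 0.01, 0.0]]
print(detect_wake(stream))  # → [1]
```

Everything before the loud frame never leaves the device, which is the low-power, single-task behaviour described above.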

In the future, with the power to solve both the cost and power-efficiency issues of scaling up to the potentially explosive growth of IoT, 5G, and associated Smart City and eco-sustainability goals, TinyML does seem set to become the ‘next big thing’ when the ‘current big things’ get even bigger.

According to technology analyst firm ABI Research, the projected growth of IoT devices from 6.6bn in 2020 to 23.72bn by 2026 is set to fuel demand for TinyML, with the market forecast to grow from 15.2m shipments in 2020 to 2.5bn by 2030.

Getting ready to ride on TinyML

According to ABI Research’s AI & ML Principal Analyst Lian Jye Su: “By bringing AI analytics beyond machine vision, more end users can benefit from smart connected sensors and IoT devices based on soundwaves, temperature, pressure, vibration, and other data sources. This growth offers tangible and important benefits such as data privacy; high interconnectivity and interaction of various components; high-energy efficiency; small chipset footprint; functional safety and security concerns; and overcoming network bandwidth challenges.”

  • Data privacy: AI processing at the very edge minimizes the data traffic between IoT devices and gateways. Only data that are deemed critical to the system will be sent as an action point. Most of the data stay in the IoT devices without the risk of being collected for malicious purposes. As governments and regulators have become more stringent over data privacy concerns, this implementation will help IoT vendors comply with key government regulations, such as the General Data Protection Regulation (GDPR) enforced by the European Union in 2018.
  • High interconnectivity and interaction of various components: In many cases, an IoT device is part of a wider ecosystem. Data streams from the IoT device are collected by an edge gateway and the insight generated from the data is then shared across multiple IoT devices in the same network. This creates a highly connected network of devices with frequent information exchange. Performing analytics directly on the device reduces the unnecessary information loads transported across the device link, reducing processing latency and providing more accurate and timely feedback on the actual environment.
  • High energy efficiency: As IoT devices become more and more complex, the performance and power requirements of these devices may increase. Streaming data to a gateway or a cloud also leads to higher power consumption, as network transmission can consume more energy than local computing. In order to achieve higher performance without increasing the power budget, IoT solution vendors will need to rely on more power-efficient hardware to execute the AI workloads. This will also enable devices that were previously unable to support always-on capabilities to be able to stay alert constantly, significantly improving the user experience.
  • Small chipset footprint: Reducing the number of bits used in arithmetic shrinks the amount of onboard memory, or Static Random-Access Memory (SRAM), required per processor. This further reduces the footprint per unit, enabling the development of smaller smart IoT devices and modules.
  • Functional safety and security concerns: The benefits of a hardware-based AI implementation are the additional layers of safety and encryption that come with computational chipsets. A software-based solution is generally more prone to corruption and tampering. Hardened hardware allows AI models to stay secure and be protected from people with malicious intent.
  • Overcoming network bandwidth challenges: While new network connectivity protocols, such as next-generation cellular connectivity and other Low-Power Wide-Area Networks (LPWANs), are becoming more ubiquitous, not all IoT devices can operate with a perfect connection to a public or private network. Limited bandwidth means devices cannot transfer all their data to a gateway or the cloud, requiring local AI processing. TinyML, or very edge AI, reduces the amount of data to offload while maintaining AI performance across various scenarios.
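The data-privacy and bandwidth points above rest on the same mechanism: analyse locally and transmit only what matters. A minimal sketch of that on-device filtering, where the 30.0 °C threshold is a made-up example value rather than a real product parameter:

```python
# On-device filtering: only readings flagged as notable are sent upstream.
# THRESHOLD_C is an invented example value for illustration.
THRESHOLD_C = 30.0

def filter_on_device(readings):
    """Keep only the readings worth sending to the gateway or cloud."""
    return [r for r in readings if r > THRESHOLD_C]

readings = [21.5, 22.0, 34.2, 21.8, 35.1, 22.3]
to_send = filter_on_device(readings)
print(to_send)  # → [34.2, 35.1]
# Only 2 of 6 readings leave the device: roughly two-thirds less uplink traffic.
```

The ordinary readings never cross the network, which is what delivers both the privacy and the bandwidth benefits described in the list.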

Su noted that, as TinyML (very edge AI) exists in a slightly more specialized environment than the more general ‘edge AI’, end users must prepare their equipment, connectivity infrastructure, and internal expertise to capture the real benefits of very edge AI. This involves three priorities: providing optimal connectivity; facilitating constant Over-The-Air updates of IoT devices; and working closely with IoT and AI vendors to resolve pain points and develop fruitful partnerships.

For vendors, Su has this advice: given the user demand and requirements for TinyML, AI chipset vendors need to differentiate through the following:

  • Differentiate through key use cases
    Finding the right target market should be one of the first priorities of a very edge AI chipset vendor. TinyML applications range from consumer electronics and appliances to business-critical and mission-critical applications in industrial, manufacturing, and smart-city settings. These applications also support various types of AI capabilities, including image and facial recognition, people tracking, condition-based monitoring, voice activation, and aroma and scent detection. By identifying the right use cases, very edge AI chipset vendors can identify the right go-to-market strategies, the appropriate distributors, and the best partners.
  • Focus on software and services
    Software experience is just as important as hardware capability. TinyML requires vendors to have strong expertise in various domains, including data governance, neural architecture search, model compression, and compilation, and not every vendor has the resources to invest in all of these. Offering support for open-source AI frameworks, such as TensorFlow Lite for Microcontrollers, and software, such as real-time operating systems (RTOS), may mean more adjustment, coordination, and Research and Development (R&D) effort, but it will ultimately reduce long-term maintenance costs compared with a proprietary solution, enable a higher degree of customization, and remove customers' anxiety over vendor lock-in.
  • Embrace developer communities
    The success of major AI vendors, such as NVIDIA and Intel, is based on a combination of good hardware and a large developer base in the open-source community. Making their solution open source can make a big difference. Despite the revenue lost from giving up proprietary software, vendors can generate new revenue from managed services, such as deployment support, AI model customization, and an as-a-Service business model. Smaller TinyML AI vendors can trial and test their hardware and software with a large pool of users and learn more about their products. Vendors can also take this opportunity to influence the future roadmap and enhancement of open-source solutions through active contribution to and engagement with the developer community.
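One of the software skills listed above, model compression, can be illustrated with a minimal post-training quantization sketch. Production toolchains such as TensorFlow Lite quantize per-tensor or per-channel with calibration data; the symmetric per-tensor scheme below is a simplified stand-in, not a real toolchain's algorithm.

```python
# Hedged sketch of int8 post-training quantization: float32 weights
# (4 bytes each) become int8 values (1 byte each), a 4x memory reduction
# at the cost of small rounding error.

def quantize_int8(weights):
    """Symmetric per-tensor quantization with a single scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)
print(q)                     # → [50, -127, 2, 100]
print(dequantize(q, scale))  # close to the original weights
```

Shrinking each weight from four bytes to one is exactly the "minimizing the number of bits" mechanism behind the small-chipset-footprint benefit discussed earlier.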


Meanwhile, in addition to saving costs and power requirements, TinyML could also potentially save lives. Believe it or not, indoor air pollution is responsible for 1.6m deaths worldwide every year!

Hopefully, TinyML-enabled smart home devices will put a stop to these preventable tragedies by providing feedback to end users based on the changes to their surrounding environment, including drastic fluctuations in humidity and temperature; the absence of light sources and specific nutrients; and sudden spikes in harmful air particles—all without exerting a major impact on overall power consumption.