TinyML Takes a Major Step Forward as Israeli Company Releases New Chip

Published: 19 April 2022 | Last Updated: 19 April 2022
AIoT Dev Summit keynote delivered by Pete Warden, TensorFlow Lite Engineering Lead at Google.

What Is TinyML Good For?

TinyML, or optimizing machine learning (ML) models to run on resource-constrained devices, is one of the fastest-growing subfields of ML. To achieve the ultra-low-power, high-performance computing that TinyML (sometimes called TinyAI) requires, engineers have explored a number of exciting new technologies.

 

Capitalizing on this trend, the Israeli company Polyn announced last week that its latest neuromorphic analog signal processor for TinyML/TinyAI has been successfully packaged and evaluated. In this article, we'll take a look at the technology Polyn offers to understand the impact it may have on TinyML as a whole.

 

Neuromorphic Computing for Artificial Intelligence


One of the exciting emerging technologies in the quest for lower power, higher performance AI computing hardware is neuromorphic computing.

The premise of neuromorphic computing is that the human brain is the most energy-efficient computing device known. When running AI applications, it would therefore be advantageous to create computing hardware that mimics the biological processes of the brain as closely as possible. While this may sound like a daunting task, engineers can attempt this kind of emulation through a combination of hardware and software.

 

From a hardware perspective, neuromorphic chips attempt to mimic the brain by implementing neurons, axons, and the weighted connections between them as circuit elements. To further mimic the brain, this hardware is typically implemented with analog circuits, which also helps improve performance and power efficiency. Neuromorphic computing then relies on specialized neural networks, such as spiking neural networks, and on electrical signal modulation to simulate changes in brain signals. With this basic understanding in mind, let's take a look at Polyn's new technology.
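To make the spiking idea concrete, here is a minimal, purely illustrative sketch of a leaky integrate-and-fire neuron, the basic unit that spiking neural networks simulate. It is not related to Polyn's hardware, and the time constant, threshold, and input values below are arbitrary assumptions.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=0.02, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    """Simulate a single leaky integrate-and-fire neuron.

    input_current: 1-D array of input drive at each time step.
    Returns the membrane potential trace and the spike times (indices).
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and integrates the input.
        v += dt / tau * (v_rest - v) + i_in * dt
        if v >= v_thresh:          # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset            # reset after spiking
        trace.append(v)
    return np.array(trace), spikes

# Drive the neuron with a noisy constant current and count the spikes.
rng = np.random.default_rng(0)
current = 60.0 + 5.0 * rng.standard_normal(1000)   # arbitrary units
_, spike_times = lif_neuron(current)
print(f"{len(spike_times)} spikes in 1000 steps")
```

Information here is carried by when spikes occur rather than by a continuously sampled value, which is part of why spiking approaches can be so power-efficient for always-on sensing.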


Polyn's NeuroSense and NASP Technology


As noted above, Polyn announced that its proprietary neuromorphic computing chip, called NeuroSense, has been packaged and evaluated for the first time, an announcement that has attracted a great deal of attention.




NASP Demo Chip

According to Polyn, their technology utilizes a unique platform that takes a trained neural network as input and uses mathematical modeling to synthesize the network into a true neuromorphic chip. Its NASP chips use analog circuits in which neurons are implemented using operational amplifiers and axons are implemented by thin-film resistors. 

They claim that their platform produces synthesized chip designs that are fully laid out and ready for manufacturing.
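As a rough illustration of what "synthesizing a trained network into analog circuitry" could mean, the sketch below maps a hypothetical weight matrix onto resistor values, with the conductance of each thin-film resistor proportional to the weight magnitude and the sign handled by which op-amp input the resistor feeds. This is only an assumed, simplified mapping for intuition, not Polyn's actual synthesis flow; the weight values and G_UNIT constant are made up.

```python
import numpy as np

# Hypothetical trained weights for one fully connected layer (4 inputs -> 3 neurons).
weights = np.array([[ 0.8, -0.3,  0.1,  0.5],
                    [-0.6,  0.9, -0.2,  0.4],
                    [ 0.2,  0.1,  0.7, -0.9]])

G_UNIT = 1e-5  # assumed conductance (siemens) representing a weight of 1.0

def weights_to_resistors(w, g_unit=G_UNIT):
    """Map each weight to a resistor value; the sign decides whether the
    resistor would feed the inverting or non-inverting op-amp input."""
    conductance = np.abs(w) * g_unit
    with np.errstate(divide="ignore"):
        resistance = np.where(conductance > 0, 1.0 / conductance, np.inf)
    polarity = np.sign(w)          # +1: non-inverting input, -1: inverting input
    return resistance, polarity

r, pol = weights_to_resistors(weights)
print(np.round(r / 1e3, 1))        # resistor values in kilo-ohms
print(pol)
```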

 


NASP Design Process


This newly packaged and evaluated NeuroSense chip is implemented in 55 nm CMOS technology. It is said to act as an edge signal processor, capable of processing raw sensor data using neuromorphic computation without any digitization of the analog signal. For this reason, the company is calling it the first neuromorphic analog TinyML chip that can be used directly next to a sensor without an analog-to-digital converter (ADC).

While many technical specifications are not yet known, Polyn's NASP is said to consume around 100 µW for always-on applications, with "twice the accuracy" of traditional algorithms.

 

Bringing TinyML Chips Into the Future


For now, Polyn is encouraged by this development, saying that the successful packaging and evaluation of its chip validates its technology and the entire NASP system. Moving forward, Polyn says it hopes to offer the chip to customers in the first quarter of 2023 for wearable devices with integrated photoplethysmography (PPG) and inertial measurement unit (IMU) sensors.

 

NASP Technology for Near-sensor Tiny-AI


According to Polyn, many applications can benefit from AI, in particular from the neural network paradigm, but the actual implementation of this powerful mathematical approach can be overly power-intensive when executed in the traditional way on a standard CPU or GPU. If applications use large amounts of data and access memory frequently, this can lead to bottlenecks in the von Neumann architecture. For cases with a continuous signal flow, dedicated processors are more efficient.

A good example is a wearable device with heart rate (HR) tracking and human activity recognition (HAR), where the PPG/IMU sensors constantly generate data and processing it consumes a lot of battery power. For devices that perform true always-on measurements, neuromorphic analog signal processing (NASP) is an ideal solution, offering ultra-low power consumption of around 100 µW and twice the accuracy of traditional algorithms. The improved accuracy also simplifies the overall system and reduces the associated costs.

Another power-hungry application is predictive maintenance (PDM) sensor nodes. The Industrial Internet of Things (IIoT) uses IoT devices and sensors to monitor machines and environments to ensure optimal performance of equipment and processes. PDM, which monitors the health of machines to identify (or predict) possible component failures, is an IoT technology that has received a great deal of attention recently. To achieve effective PDM, large amounts of data need to be collected, processed, and analyzed with machine learning (ML) algorithms. If all of this data had to be sent to a hub for analysis, data communication and processing would cost more than they are worth. Pre-processing sensor data can dramatically reduce the amount of data sent to the cloud, saving money and improving latency.

NASP addresses all of these scenarios, along with many other uses for intelligent optimization (pre-processing) of raw data directly at the sensor. It not only solves problems for existing applications but also opens up new opportunities for the entire industry.
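The data-reduction argument for PDM can be illustrated with a simple near-sensor pre-processing sketch: instead of streaming every raw sample to the cloud, the node ships a handful of summary features per window and escalates to raw data only when an alarm threshold is crossed. The sample rate, feature set, and threshold below are arbitrary assumptions for illustration, not part of Polyn's product.

```python
import numpy as np

SAMPLE_RATE_HZ = 1000      # assumed raw vibration sample rate
WINDOW = SAMPLE_RATE_HZ    # one-second windows

def extract_features(window):
    """Condense one second of raw vibration data into a few summary values."""
    return {
        "rms": float(np.sqrt(np.mean(window ** 2))),
        "peak": float(np.max(np.abs(window))),
        "dominant_hz": float(np.argmax(np.abs(np.fft.rfft(window)))),
    }

def process_stream(raw_samples, rms_alarm=1.5):
    """Send a small feature record per window; raw data only when anomalous."""
    uploads = []
    for start in range(0, len(raw_samples) - WINDOW + 1, WINDOW):
        window = raw_samples[start:start + WINDOW]
        features = extract_features(window)
        if features["rms"] > rms_alarm:
            uploads.append({"type": "raw", "data": window})   # escalate anomaly
        else:
            uploads.append({"type": "features", "data": features})
    return uploads

# Ten seconds of simulated vibration: mostly quiet, one noisy second.
rng = np.random.default_rng(1)
signal = rng.standard_normal(10 * SAMPLE_RATE_HZ)
signal[3000:4000] *= 5.0
records = process_stream(signal)
print([r["type"] for r in records])   # only the anomalous window ships raw data
```

In this toy run, nine of the ten one-second windows are reduced to three numbers each, and only the anomalous window is forwarded in full, which is exactly the kind of traffic reduction the article describes.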

 

Sensor Data Optimization


NASP is a true Tiny AI solution designed to optimize raw data and reduce both the CPU load and the amount of data forwarded to the cloud. The NASP chip sits next to the sensors and forms the Tiny AI logic layer. It is an inference solution that uses already-trained machine learning models to make predictions. In the NASP concept, data-processing chips are synthesized by NASP automation tools from already-trained neural networks. Based on POLYN's many years of expertise, the "inference-only" approach is very effective for applications such as speech extraction, sound and vibration processing, and wearable device measurements, offering significant advantages in power, accuracy, and latency.
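A minimal sketch of the "inference-only" idea, assuming a tiny frozen two-layer network and made-up activity labels: the weights are fixed at deployment, and the device only ever runs the forward pass on features extracted from a sensor window. None of the names or values come from Polyn; they are placeholders to show the pattern.

```python
import numpy as np

# Frozen weights from an already-trained two-layer network (hypothetical values);
# in an inference-only design these never change after deployment.
W1, b1 = np.random.default_rng(2).standard_normal((8, 16)), np.zeros(16)
W2, b2 = np.random.default_rng(3).standard_normal((16, 3)), np.zeros(3)
LABELS = ["resting", "walking", "running"]   # assumed activity classes

def infer(imu_features):
    """Forward pass only: no training and no weight updates on the device."""
    h = np.maximum(imu_features @ W1 + b1, 0.0)   # ReLU hidden layer
    logits = h @ W2 + b2
    return LABELS[int(np.argmax(logits))]

# Eight summary features from one IMU window (e.g., means/variances of axes).
window_features = np.array([0.1, 0.9, 0.2, 0.4, 1.2, 0.3, 0.05, 0.8])
print(infer(window_features))
```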

 


The Neuroscience Behind NASP


The main advantage of neural network computing is parallel operation, and neuromorphic computing pushes this furthest, with hardware and software designed for maximum parallelism in an effort to mimic the human brain and approach its computation-to-power efficiency. In addition to low power consumption and improved performance on computational workloads, neural networks provide fault tolerance: if the sensor data is inconsistent, the system can still produce results. All sensor signals entering the input layer of the NASP chip at the same time are transmitted in parallel to the subsequent layers. There are no execution cycles and no instructions moving to and from memory.

The human brain is not only an ultra-low-power parallel operating system but also an analog system that processes a variety of signals without converting them into binary format. For tasks such as signal perception, analog systems are preferable. According to Semiconductor Research, the large number of analog signals expected in the next decade will require fundamental breakthroughs in hardware to produce smarter world-machine interfaces. NASP is one of these breakthroughs, designed to sense both analog and digital signals and, most importantly, to add "intelligence" to a variety of sensors.

The NASP chip contains artificial neurons (nodes that perform computations) and axons (weighted connections between nodes) implemented with circuit elements: the neurons are built from operational amplifiers and the axons from thin-film resistors. The chip design embodies a sparse neural network approach, retaining only the connections between neurons that are required for inference, which significantly and efficiently reduces the number of neural connections. This simplifies the chip layout compared with a design in which every neuron is connected to every adjacent neuron, and it is particularly suitable for convolutional neural networks (CNNs) with very sparse connections, as well as RNNs, Transformers, and autoencoders.

Tuning the chip design to the neural network is an important part of every on-chip neural network solution. The programmable solutions available on the market today have architectural limitations that impose additional transformations on the neural network; sometimes the original network undergoes almost 100% transformation during migration, which is a costly approach. To address this issue, the NASP model includes chip design automation tools, namely POLYN's T compiler and synthesis tools, that convert any trained neural network into the best mathematical model and then generate the chip layout, remaining fully faithful to the customer's neural network and saving the associated effort and cost.

For a number of reasons, the digital transformation the industry is embracing would not be possible without a renaissance in analog computing. One reason is energy efficiency: excessive power consumption is incompatible with data computing in sensory systems. Another is that artificial intelligence is increasingly moving to the edge and is already being applied to sensor nodes. There is a need to optimize the communication between billions of IoT devices and to offload data processing from the cloud to improve total cost of ownership (TCO) and efficiency.
Like the human brain, which excels at processing complex information and changing dynamically over time, neuromorphic analog signal processors excel at real-time computation, thus contributing to the beneficial meshing of the digital and analog technology worlds.
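The sparse-connection idea mentioned above can be illustrated in software with simple magnitude pruning: keep only the largest weights and drop the rest, so that only the retained connections would need to be laid out as physical resistors. This is a generic sketch under that assumption, not POLYN's T compiler or its actual method; the keep fraction and matrix size are arbitrary.

```python
import numpy as np

def prune_small_weights(weights, keep_fraction=0.2):
    """Keep only the largest-magnitude weights; zeroed entries would simply
    not be laid out as resistors on the chip."""
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_fraction * flat.size))
    threshold = np.sort(flat)[-k]              # magnitude of the k-th largest weight
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(4)
dense = rng.standard_normal((32, 32))
sparse, mask = prune_small_weights(dense, keep_fraction=0.2)
print(f"connections kept: {mask.sum()} of {mask.size}")
```

Under this toy assumption, roughly 80% of the candidate connections disappear from the layout, which is the kind of reduction that makes a fully analog, fixed-function design tractable.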

 


