STMicroelectronics extends STM32Cube.AI development tool with support for deeply quantized neural networks

STMicroelectronics has released STM32Cube.AI version 7.2.0, the first artificial-intelligence (AI) development tool from a microcontroller (MCU) vendor to support ultra-efficient deeply quantized neural networks.

STM32Cube.AI converts pretrained neural networks into optimized C code for STM32 microcontrollers. It is an essential tool for developing cutting-edge AI solutions that make the most of the constrained memory and computing power of embedded products. Moving AI to the edge, away from the cloud, delivers substantial advantages to the application, including privacy by design, deterministic real-time response, greater reliability, and lower power consumption. It also helps optimize cloud usage.
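As a rough illustration of that workflow, the sketch below (assuming a TensorFlow/Keras environment; the model choice and file name are purely illustrative) exports a pretrained Keras model to a quantized TensorFlow Lite file, one of the pretrained-model formats STM32Cube.AI can import alongside Keras and ONNX:

```python
import tensorflow as tf

# Illustrative stand-in for a pretrained network; in practice this would be
# a model you have already trained (or an existing .h5 / SavedModel).
model = tf.keras.applications.MobileNetV2(weights=None, input_shape=(96, 96, 3))

# Apply default post-training (dynamic-range) quantization and export to
# TensorFlow Lite for import into STM32Cube.AI.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("network_quant.tflite", "wb") as f:  # hypothetical file name
    f.write(tflite_model)
```

The resulting file would then be loaded into STM32Cube.AI, which generates the optimized C inference code for the target STM32.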

Now, with support for deep-quantization input formats such as QKeras and Larq, developers can reduce network size, memory footprint, and latency even further. These benefits unlock more possibilities for AI at the edge, including frugal, cost-sensitive applications. Developers can thus create edge devices, such as self-powered IoT endpoints, that deliver advanced functionality and performance with longer battery runtime. ST’s STM32 family provides many suitable hardware platforms: the portfolio extends from ultra-low-power Arm Cortex®-M0 MCUs to high-performance devices leveraging Cortex-M7, -M33, and Cortex-A7 cores.
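For context, a deeply quantized network is typically defined in Python before conversion. The minimal sketch below, assuming QKeras is installed (the layer sizes, 4-bit widths, and file name are purely illustrative), builds a small classifier whose weights and activations are quantized well below the usual 8 bits:

```python
from tensorflow.keras.layers import Input, Flatten, Activation
from tensorflow.keras.models import Model
from qkeras import QDense, QActivation, quantized_bits, quantized_relu

# Minimal deeply quantized classifier: 4-bit weights and activations.
inputs = Input(shape=(28, 28))
x = Flatten()(inputs)
x = QDense(64,
           kernel_quantizer=quantized_bits(4, 0, alpha=1),
           bias_quantizer=quantized_bits(4, 0, alpha=1))(x)
x = QActivation(quantized_relu(4))(x)
x = QDense(10,
           kernel_quantizer=quantized_bits(4, 0, alpha=1),
           bias_quantizer=quantized_bits(4, 0, alpha=1))(x)
outputs = Activation("softmax")(x)

model = Model(inputs, outputs)
model.save("dqnn_classifier.h5")  # hypothetical file name for import into STM32Cube.AI
```

After training, such a model would be imported into STM32Cube.AI, the idea being that the tool can map these low-bit layers onto its optimized C kernels, which is where the additional footprint and latency savings come from.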

STM32Cube.AI version 7.2.0 also adds support for TensorFlow 2.9 models, kernel performance improvements, new scikit-learn machine-learning algorithms, and new Open Neural Network Exchange (ONNX) operators.
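On the scikit-learn and ONNX side, classical machine-learning models are commonly exported to ONNX before being fed to embedded tools. A brief sketch, assuming the standard skl2onnx converter (the model choice and file name are illustrative, and whether this exact path fits a given STM32Cube.AI project is an assumption):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

# Train a small classical ML model.
X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier(n_estimators=10).fit(X, y)

# Export to ONNX; the resulting file could then be imported into STM32Cube.AI.
onnx_model = convert_sklearn(
    clf, initial_types=[("input", FloatTensorType([None, X.shape[1]]))]
)
with open("rf_iris.onnx", "wb") as f:  # hypothetical file name
    f.write(onnx_model.SerializeToString())
```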

For more information about STM32Cube.AI v7.2.0 and the free download, please visit www.st.com.

You can also read our blog post at https://blog.st.com/stm32cubeai-v72/.
