Read Part 1 of this two-part article.
Much of the sensor data generated today is discarded because of cost, bandwidth or power constraints, or sometimes a combination of all three. A good example of unused or discarded information is the asset management data discussed last month (in Part 1 of this story) that's stranded in devices. Image transmission is another bandwidth-intensive operation where artificial intelligence (AI) can help: by processing images at the edge, a device can transmit only relevant changes in state rather than every frame.
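As a simple illustration of that idea, here's a minimal sketch that compares successive camera frames and only transmits a frame when it differs meaningfully from the previous one. It assumes a camera reachable through OpenCV; the threshold value and the send_frame() routine are placeholders, not part of any specific product.

```python
# Minimal sketch of change-of-state filtering at the edge: capture frames,
# compare each to the previous one, and transmit only when the difference
# exceeds a threshold. Camera index, threshold and send_frame() are
# illustrative assumptions.
import cv2
import numpy as np

THRESHOLD = 12.0  # mean absolute pixel difference that counts as a "change"

def send_frame(frame):
    # Placeholder for whatever transport the application uses (MQTT, HTTP, ...)
    print("change detected, transmitting frame of shape", frame.shape)

cap = cv2.VideoCapture(0)          # first attached camera
ok, previous = cap.read()
if not ok:
    raise RuntimeError("could not read from camera")
previous = cv2.cvtColor(previous, cv2.COLOR_BGR2GRAY)

while True:                        # runs until the camera stops supplying frames
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = np.mean(cv2.absdiff(gray, previous))
    if diff > THRESHOLD:           # transmit only relevant changes in state
        send_frame(frame)
    previous = gray

cap.release()
```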
Applications such as facial recognition and other forms of image analysis appear to be the most widely used AI techniques. They also serve as examples and training data sets for those wanting to learn more about machine learning (ML) and AI on smaller platforms, such as personal computers and similar processors available to average users.
It's no surprise that the same organizations working in the broader AI space, in particular NVIDIA and Google, are also pushing AI technology to run on small-footprint processors such as ARM and other reduced instruction set computer (RISC) designs, which form the basis of most single-board computers (SBCs) and microcontrollers.
These same organizations realized that to run the resulting algorithms on these smaller platforms, they needed to reduce the complexity of ML math, for example by replacing floating-point calculations with simple eight-bit integer operations. This change produces ML models that run much more efficiently and require far fewer processing and memory resources.
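To see what that eight-bit substitution looks like in practice, here's a toy sketch of mapping floating-point weights onto 8-bit integers with a scale and zero point, the same basic trick used by quantized ML models. The weight values are made up purely for illustration.

```python
# Toy illustration of the float-to-eight-bit idea: map floating-point
# weights onto 8-bit integers with a scale and zero point, then map them
# back. The weight values here are invented for illustration only.
import numpy as np

weights = np.array([-0.42, 0.07, 0.31, 0.88], dtype=np.float32)

# Pick a scale so the observed value range fits into the int8 range [-128, 127].
scale = (weights.max() - weights.min()) / 255.0
zero_point = int(round(-128 - weights.min() / scale))

quantized = np.clip(np.round(weights / scale) + zero_point, -128, 127).astype(np.int8)
dequantized = (quantized.astype(np.float32) - zero_point) * scale

print("int8 weights:", quantized)
print("reconstructed:", dequantized)   # close to, but not exactly, the originals
```

The small reconstruction error is the price paid for the much cheaper arithmetic and smaller memory footprint.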
To make the resulting tools more broadly available and encourage their use and adoption, organizations including ARM, Google and Qualcomm formed tinyML, a consortium focused on optimizing ML workloads so they can run on microcontrollers no bigger than a grain of rice while consuming only a few milliwatts of power.
The organization gives developers at every level access to tools that make it easy to get started in AI and ML. All that's needed is a laptop, an open-source software library and a USB cable to connect the laptop to a development board. I learned firsthand that you often need additional elements beyond the board itself to collect the data, such as a camera for images.
One of the most widely used open-source tools for developing and training ML models is TensorFlow, which you can run directly in your browser on Google's servers. However, TensorFlow requires more processing power than is typically available on a microcontroller or SBC, so the team has also developed TensorFlow Lite Micro for systems with smaller footprints.
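As a rough sketch of that smaller-footprint workflow, the snippet below converts an already-trained Keras model into a quantized TensorFlow Lite file. Here, trained_model and calibration_samples are stand-ins for whatever your own project produces, and this is only one way to configure the converter.

```python
# Sketch of converting a trained Keras model to an 8-bit TensorFlow Lite
# model with post-training quantization. `trained_model` and
# `calibration_samples` are assumed to come from your own project.
import tensorflow as tf

def representative_data():
    # Yield a few typical input samples so the converter can choose int8 scales.
    for sample in calibration_samples:   # assumed: a small list of input arrays
        yield [sample]

converter = tf.lite.TFLiteConverter.from_keras_model(trained_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
open("model_int8.tflite", "wb").write(tflite_model)
```

From there, the resulting .tflite file is typically embedded in the microcontroller firmware as a C array so TensorFlow Lite Micro can load it without a file system.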
All of us realize that gathering and preparing data represents much of the effort in any project. The same is true for AI and data science, where most of the work (five of the seven steps in the AI development process) is data modeling, preparation and testing. In total, these steps represent about 80% of the project time. The datasets tend to be quite large, and because SBCs, and certainly microcontrollers, have limited processing power, model development is normally done offline, with only the resulting task-specific algorithm(s) running on the end device.
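A minimal sketch of that offline portion of the work might look like the following: prepare and split the data, train a small task-specific model, check it against held-out data, and save it for later conversion to the end device. The sensor_readings.csv file, its column names and the tiny network are all assumptions made purely for illustration.

```python
# Offline side of the workflow: data preparation, training and evaluation
# happen on a full-size machine; only the trained, task-specific model is
# later converted and deployed to the end device.
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split

# Data preparation: the bulk of the effort in practice (file and columns assumed).
data = pd.read_csv("sensor_readings.csv").dropna()
features = data[["temperature", "vibration", "pressure"]].to_numpy(dtype="float32")
labels = data["fault"].to_numpy(dtype="float32")

x_train, x_test, y_train, y_test = train_test_split(features, labels, test_size=0.2)

# A deliberately small model, since only the trained result goes to the device.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=20, validation_split=0.2, verbose=0)

print("held-out accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])
model.save("fault_detector.keras")   # ready for conversion to TensorFlow Lite
```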
If you want to learn more about AI and ML, a number of processors are available to hobbyists as SBCs. You can use them to become familiar with the tools, techniques and implementation of algorithms specific to your interests or to a problem in your facility. These include the Raspberry Pi, which recently announced support for tinyML and TensorFlow Lite Micro on its Pico platform, as well as the Arduino Nano 33 BLE Sense board. Major AI player NVIDIA also has an SBC offering, the Jetson Nano, for which it offers an associated certification program. Jetson Nano uses NVIDIA's dedicated processors, so you can do the processing and model building on the unit itself.
Now it's time for each of you to decide which process you'd like to improve with AI, then build and test it in your environment to start capturing the benefits of AI with minimal investment.
About the author: Ian Verhappen