AI-powered Security Cam Part 4: IoT Edge Solution

[Reading Time: 15 minutes]

With this blog post, I start the fourth part of the “AI-powered Security Cam” series. It is about running machine learning directly on a device (so it also works offline if need be). In doing so, I will stick to the common thread of this series – Postbus recognition.

The concept that I want to implement with you is simple. First, we run object detection for generally known objects (cars, persons, birds, dogs, bicycles, …). When our system detects a car, the image is passed to a second ML model, which checks whether it is a Postbus.

Analysis of an environment for a Postbus

The picture above shows how the captured photo is analyzed. First, an object is detected and its class is evaluated, i.e. whether it is a car, a person, etc. If the class (or type) of the object is a car, the image is evaluated by a second machine learning model, which only decides whether the object is a Postbus. A notification is then issued, as shown in the image above. In any case, the analysis data and image snippets are stored for later use or transferred to the cloud when an Internet connection is available.
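To make the flow more tangible, here is a minimal Python sketch of this two-stage analysis. The detector and the Postbus classifier are passed in as callables because both models are only introduced later in this series; the names and the notification are placeholders, not the actual module code.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Detection:
    label: str            # e.g. "car", "person", "bicycle"
    box: tuple            # (left, top, right, bottom) in pixel coordinates

def analyze_frame(frame,
                  detect: Callable[[object], Iterable[Detection]],
                  is_postbus: Callable[[object], bool]):
    """Two-stage analysis: broad object detection first, Postbus check second."""
    for obj in detect(frame):                       # stage 1: generic objects
        if obj.label != "car":
            continue                                # only cars go to stage 2
        left, top, right, bottom = obj.box
        crop = frame[top:bottom, left:right]        # image snippet of the detected car
        if is_postbus(crop):                        # stage 2: Postbus classification
            print("I see a Postbus")                # later: speech output + IoT Hub message
```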

Jetson Nano setup

Wow! Let’s get started.
If you haven’t set up the NVIDIA Jetson Nano yet, I have written down a little recommendation of what you need to implement the solution described here.
In principle, IoT Edge alone is already sufficient. But then, in principle, driving a car is also very easy. It’s the details that often get in the way.

First steps

Once you are set up, we can get started. I assume you have already set up an IoT Hub in Azure and registered your Jetson Nano with it. I also assume that you have installed VSCode and extended it as described in the post “AI-powered Security Cam Part 3: ML-Model on the edge”. To keep it simple, you can clone the code for the current project from the GitHub repo “IoTEdgeObjectTracking”. The following commands clone the code and open VSCode with the corresponding directory as a workspace.

git clone https://github.com/totosan/IoTEdgeObjectTracking
cd IoTEdgeObjectTracking
code .
VSCode with open Workspace Folder

Developing the application

The IoT Edge Solution

Now let’s go through the application together. I would like to start with the basic structure. The application consists of several parts, which are put together in an IoT Edge solution. IoT Edge is a managed service running on a device that basically hosts and orchestrates Docker containers. At the same time, it communicates with the IoT Hub in Azure, either to obtain a new configuration or simply to report the state of the device. At a minimum, however, the IoT Hub is required to assign an identity to the IoT Edge device.

Cloud solution based on PaaS

Services in containers, managed by IoT Hub on IoT Edge.
The pictures above show how the IoT Hub provides devices with Docker images. In addition, the IoT Hub knows exactly which configuration must be given to which device. An IoT Edge device can also work “offline” for a longer period of time, so intermittent connectivity does not become a problem.

My IoT Edge solution has four modules. One of them is only described by configuration in the deployment files (Azure Blob storage), the other three are custom code.

IoT Edge Modules

These modules are run/hosted in parallel by the IoT Edge runtime so that each container can “work” independently of the others. The principle of IoT Edge is simple: the runtime hosts all defined containers, which can then exchange information with each other via message-based communication. Alternatively, all the other communication channels you would have with other container hosts (Docker Desktop, Kubernetes, Podman, …) are of course available as well, for example REST for direct communication, as in the following picture.

Examples for communication under IoT Edge
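As a small illustration of the message-based path just mentioned, here is a sketch of how a module can publish a message with the Azure IoT Python SDK (azure-iot-device). The output name “detections” and the payload are my assumptions for illustration; the routes in the deployment manifest decide where such messages end up (another module, the IoT Hub upstream, or both).

```python
import json
from azure.iot.device import IoTHubModuleClient, Message

# Connect using the environment that the IoT Edge runtime injects into the container
client = IoTHubModuleClient.create_from_edge_environment()
client.connect()

payload = {"label": "car", "isPostbus": True, "trackingId": 2}  # example payload
msg = Message(json.dumps(payload))
msg.content_type = "application/json"
msg.content_encoding = "utf-8"

client.send_message_to_output(msg, "detections")  # edgeHub routes this to its destination
client.shutdown()
```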

The next picture shows how the containers in my application communicate with each other.

Running containers on the IoT Edge Device – Scheme of communication

The YoloModule (sorry for the name; when I created the module, I didn’t know what else would end up in it – I should refactor it, and you’re welcome to join me) uses an ML model (Yolo – “You Only Look Once”) to recognize a broad range of objects. That is, the images this module analyzes are first checked for general objects only. By general objects I mean “car”, “person”, “bicycle”, “motorbike”, “dog”, and so on. This is sufficient for a first assessment since my goal is to recognize the Postbus. Much more is possible with this construct, of course, but I keep things manageable for now.
If you have more ideas, leave me a comment about what would be a great use case for you.

When the detector has detected a car, the module makes a REST call to the PostcarDetector module and transmits the current frame in which the car was detected. The PostcarDetector module then runs a prediction with another ML model, estimating whether the car in the image is a Postbus (classification). The REST response is either “Post” or “NoPost”. After that, further processing is again done by the YoloModule.
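A sketch of such a REST call could look like the following. The module hostname, port, endpoint path and the exact response format are assumptions on my part; within an IoT Edge deployment, modules can reach each other by their module name on the shared Docker network.

```python
import cv2        # OpenCV, used here only to encode the frame as JPEG
import requests

def ask_postcar_detector(frame) -> bool:
    """Send the current frame to the PostcarDetector module and return True for 'Post'."""
    ok, jpeg = cv2.imencode(".jpg", frame)
    if not ok:
        return False
    resp = requests.post(
        "http://PostcarDetector:80/image",                      # hypothetical endpoint
        data=jpeg.tobytes(),
        headers={"Content-Type": "application/octet-stream"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.text.strip() == "Post"                          # "Post" or "NoPost"
```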

By the way, the YoloModule additionally performs object tracking to avoid asking the PostcarDetector module whether it is a Postbus every time a car is detected; that would cost a lot of performance. I will provide more details on object tracking later.

When the YoloModule receives the answer from the PostcarDetector module, it sends a corresponding IoT Edge message to the message bus. The SpeechModule can then react to this message and say “I see a Postbus” as voice output. For my Jetson Nano, I rendered some texts into audio files using the Azure Speech Service, so that they can be played on a connected speaker (by the way, it took a lot of “fiddling” until I could output the first sound to the speaker from within the container… but more about that later).
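The receiving side could look roughly like this: the SpeechModule listens for routed messages and plays one of the pre-rendered audio files. The payload fields, the file path and the use of “aplay” are assumptions for illustration; only the idea of reacting to an edge message with a canned audio clip is taken from the text above.

```python
import json
import subprocess
import threading
from azure.iot.device import IoTHubModuleClient

# pre-rendered with the Azure Speech Service (hypothetical path)
AUDIO_POSTBUS = "/app/audio/i_see_a_postbus.wav"

def handle_message(message):
    body = json.loads(message.data)
    if body.get("isPostbus"):                                  # field name is an assumption
        subprocess.run(["aplay", AUDIO_POSTBUS], check=False)  # ALSA playback inside the container

client = IoTHubModuleClient.create_from_edge_environment()
client.on_message_received = handle_message                    # fires for every routed input message
client.connect()
threading.Event().wait()                                       # keep the module alive
```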

The YoloModule’s message is also sent via the bus to the IoT Hub in Azure. If there is no connectivity at the time of the message, the messages are persisted for a certain time (default 2 h = 7200 s). Once a connection is established, the messages are sent to the IoT Hub and from there (at least in my solution) transferred into Azure Time Series Insights – more on this later.
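For reference, this retention period is configured in the edgeHub section of the deployment template via storeAndForwardConfiguration. It is shown here as a Python fragment only for readability; in the template it is plain JSON.

```python
# Relevant fragment of the $edgeHub desired properties (illustrative)
edge_hub_store_and_forward = {
    "storeAndForwardConfiguration": {
        "timeToLiveSecs": 7200   # how long edgeHub buffers messages while the device is offline
    }
}
```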

Furthermore, the YoloModule stores the frames in which something was detected in a local Azure Blob Storage (on the device). The charm of this solution is that the blob storage module automatically synchronizes the images with its cloud counterpart when the IoT Edge device is online.
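Uploading to the local blob module works with the standard azure-storage-blob SDK; only the connection string points at the module instead of the cloud. The module name, local account name, key and container name below have to match the blob module’s configuration in the deployment and are placeholders here.

```python
from azure.storage.blob import BlobServiceClient

# Connection string against the local blob module (placeholders, not real credentials)
LOCAL_CONN_STR = (
    "DefaultEndpointsProtocol=http;"
    "BlobEndpoint=http://AzureBlobStorageonIoTEdge:11002/localaccount;"
    "AccountName=localaccount;AccountKey=<local-account-key>"
)

def store_snippet(jpeg_bytes: bytes, name: str):
    """Store a detected frame locally; the module syncs it to the cloud once online."""
    service = BlobServiceClient.from_connection_string(LOCAL_CONN_STR)
    container = service.get_container_client("detections")
    container.upload_blob(name, jpeg_bytes, overwrite=True)
```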

So far, however, the IoT Edge solution only exists locally in VSCode. To run it on a Jetson Nano, the solution must be made known to the Azure IoT Hub as a “configuration”, and the modules must be stored in a registry in the form of Docker images. This is how it works.

IoT Edge Deployment Config

In the IoT Edge solution, you will find a file named deployment.arm64v8.template.json. This file is the configuration for the IoT Hub. It describes which modules and default settings are used for which CPU/hardware. If you look at my code in the GitHub repo, you will also find deployment files for other systems. Since my Jetson has an ARM64 architecture, I’ll talk about this configuration in particular.

In the deployment file, you will find a section called “modules”. It contains all the modules I have planned for this IoT Edge Solution.

deployment.debug.arm64v8.template.json is intended for my debugging configuration

Of course, I can now offer different configurations. As you can see in the picture, I have a deployment configuration for debugging purposes. Here the settings are changed (e.g. in the picture under createOptions -> Env) so that I can trace my tests locally more easily. It would also be possible to have a configuration that does not contain the SpeechModule, for example if the device has no audio output. These deployment configurations are therefore very useful when it comes to creating solutions tailored to specific devices.

Build & Push – Docker Images / Config

In order to use a config, the modules it references must be pushed as images to a registry. In VSCode this is very easy:

  • Right-click on the deployment configuration
  • In the context menu, select “Build and Push IoT Edge Solution”.
Building the solution and registering it in a registry

After that, the build of the Docker images starts and, if successful, the push into the container registry follows. In my case, I have created an Azure Container Registry where I store my own images.

When the build and the pushes are finished, the deployment config is completed. For this, the variables in the deployment templates (e.g. deployment.debug.arm64v8.template.json) are replaced with fixed values and the resulting configuration file is created in the “config” directory. Example:

Module Definition of “YoloModule” with variables

The Yolo module is described with “version”: “1.5.$BUILD_VERSION_YOLO”. This later becomes the version number of the IoT Edge module in the IoT Hub. The IoT Edge runtime can use this version number to determine whether a module needs to be updated. In my solution, I have introduced a build number for this purpose so that I can control it centrally in the “.env” file. There I have also declared additional variables so that I can adapt other parts of the config as well.

.env file with the variables for the deployment templates

When the Build IoT Edge Solution task runs, the value of the JSON attribute “version” in the above example becomes “1.5.118”. The variable $CONTAINER_VIDEO_SOURCE under “settings -> createOptions -> Env” is replaced by “/dev/video0”.

Note: In the template, you will find the attribute “Env” under “createOptions”; its value is an array of strings. In the generated, final deployment file, “Env” no longer exists as a separate attribute. Instead, the whole createOptions object is serialized into a plain string, of which Env is simply a part. This string is what is used when the Docker container is created.
Extract: Generated Deployment File “Configuration”
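To make the substitution and the flattening of createOptions easier to picture, here is a simplified re-enactment in Python of what the “Build/Generate IoT Edge Deployment Manifest” step does. The variable names come from my .env file; the Env entry is an illustrative example, not the exact content of my template.

```python
import json
import os
import string

# values normally read from the .env file
os.environ.setdefault("BUILD_VERSION_YOLO", "118")
os.environ.setdefault("CONTAINER_VIDEO_SOURCE", "/dev/video0")

template_fragment = {
    "version": "1.5.$BUILD_VERSION_YOLO",
    "settings": {
        "createOptions": {"Env": ["VIDEO_PATH=$CONTAINER_VIDEO_SOURCE"]},
    },
}

# 1) substitute the $VARIABLES, 2) flatten createOptions into a plain string
resolved = json.loads(string.Template(json.dumps(template_fragment)).substitute(os.environ))
resolved["settings"]["createOptions"] = json.dumps(resolved["settings"]["createOptions"])

print(json.dumps(resolved, indent=2))
# -> "version": "1.5.118", "createOptions": "{\"Env\": [\"VIDEO_PATH=/dev/video0\"]}"
```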

But in the deployment template file (image “Module Definition of ‘YoloModule’ with variables”) you can see another form of variable: ${MODULES.YoloModule.debug-arm64v8}. This variable syntax is used to create the assignment for the modules being used, following the pattern
Modules.<module folder name>.<architecture>
For this purpose, a lookup is made in the modules.json inside the folder of the specified module under “modules”. This file lists which Dockerfile should be used for which architecture to build the image.

modules – image – architecture – assignment

In my example, this means that “image” gets the value “iotcontainertt.azurecr.io/yolomodule:latest-debug-arm64v8”. This makes clear how the assignment between the configuration and the Docker image in the registry is made.

Assignment of the architecture to the target configuration
Creating and storing the image
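Conceptually, the placeholder resolution boils down to a small lookup: take the repository from the module’s metadata file, combine it with the version and the requested platform key, and you get the image name shown above. The file and field names below follow the usual IoT Edge solution layout (a module.json per module folder) and are meant as an illustration of the mechanism, not as the tooling’s actual code.

```python
import json
from pathlib import Path

def resolve_module_image(module_folder: str, platform: str) -> str:
    """Resolve ${MODULES.<folder>.<platform>} into the full image name."""
    meta = json.loads(Path("modules", module_folder, "module.json").read_text())
    repo = meta["image"]["repository"]            # e.g. iotcontainertt.azurecr.io/yolomodule
    version = meta["image"]["tag"]["version"]     # e.g. "latest"
    return f"{repo}:{version}-{platform}"         # -> ...yolomodule:latest-debug-arm64v8

print(resolve_module_image("YoloModule", "debug-arm64v8"))
```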

IoT Edge Config Deployment

Now that all images are stored in the registry and the IoT Edge configuration has been created, the solution can be deployed to the appropriate device. There are two general approaches (I can certainly think of one or two more, but I don’t want to go into them here): the Azure CLI or VSCode (typing or clicking).

If you have set up IoT Edge on the Jetson Nano, as recommended at the beginning of the blog, then you already have a device identity registered in the IoT Hub. You can find this device again here (VSCode extension):

List of devices registered in the IoT Hub

My device is called “TotoNano” and will serve as the target device. There are many more devices registered in the list above. The entries with the same icon as my TotoNano device are IoT Edge devices; the others are microcontroller-based IoT devices.

To deploy the IoT Edge solution you need the generated deployment file, which was created in the “Build and Push IoT Edge Solution” step. Since we are already in VSCode and no automated deployment is currently planned, we will do the obvious. Right-click on the generated deployment file to open the context menu, then click on “Create Deployment for Single Device”.

Deploying IoT Edge Solution for Single Device

Selecting the target device
After selecting the target device, the deployment is initiated. To be able to follow what happens, I have included an extract of my journal below.

journalctl -f -u iotedge | grep -v mgmt

With this command, I follow the journal continuously (-f, follow) and only display log entries for the unit (-u) iotedge. The management logs (“[mgmt]”) annoyed me, so I excluded them with grep -v.

Extract of the IoT Edge deployment in the journal of the iotedge unit

You can see here how the IoT Edge runtime starts to download the individual modules as Docker images (marked yellow). After the “pull”, IoT Edge stops the corresponding module and restarts it with the new image. The Azure Blob Storage module remains untouched in this screenshot because I did not increment its version number for this deployment and it was already running.

Once the deployment is complete, the IoT Edge solution runs independently. You can disconnect the Internet connection or keep it; it makes no difference to the solution. By the way, if you only change something in the deployment without touching any code, it is sufficient to change the version number of the module (up or down doesn’t even matter), right-click on the deployment template, and select “Generate IoT Edge Deployment Manifest” in the context menu that appears. Your new deployment file will then be recreated in the config folder and you can restart the deployment as before.

Regenerate the deployment file

Remember: you should have a speaker and a webcam connected via USB to get a result. Either way, you can take a look at what your Jetson Nano sees.

Open your browser and enter the IP of the Nano with port 8080. This should open a video that shows the live stream and the detected objects.

A vehicle that was detected and is tracked with ID 2. (the coordinates next to it indicate the motion vector)
A video in which a truck and the person following it are recognized and tracked.
Another example of tracking several persons
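In case you wonder how such a live stream works: it is usually a simple MJPEG stream over HTTP, i.e. an endpoint that keeps pushing JPEG frames in a multipart response. The following is a minimal sketch of that idea using Flask and OpenCV; it is my own illustration, not necessarily how the YoloModule implements its stream.

```python
import cv2
from flask import Flask, Response

app = Flask(__name__)
camera = cv2.VideoCapture(0)    # /dev/video0

def mjpeg():
    """Yield JPEG frames as a multipart/x-mixed-replace stream."""
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        ok, jpeg = cv2.imencode(".jpg", frame)
        yield b"--frame\r\nContent-Type: image/jpeg\r\n\r\n" + jpeg.tobytes() + b"\r\n"

@app.route("/")
def stream():
    return Response(mjpeg(), mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```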

Everything that is recognized in the stream gets an ID and is marked with a dot. The dot marks the center of the recognized object (centroid). In addition, the direction of movement is shown in brackets (the current (x, y) values relative to the average of the x and y values of the previous centroids).

Maybe you have noticed in the videos that the centroids “stand still” for a while when the objects are no longer in the picture. This covers the time in which an object may get “lost” for a few frames and makes the tracking more robust. In addition, I don’t have to run the ML model on every frame and thus save resources. A simplified sketch of this mechanism follows below.
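Here is a simplified sketch of such a centroid tracker with a “disappeared” counter. It follows the common centroid-tracking pattern (nearest-centroid matching, a pixel threshold and a grace period of a few frames); the threshold and the frame limit are assumptions, the motion-vector calculation is omitted, and it is not the exact code of the YoloModule.

```python
import math
from itertools import count

class CentroidTracker:
    def __init__(self, max_disappeared=20):
        self.next_id = count()          # generates new tracking IDs
        self.objects = {}               # id -> (x, y) centroid
        self.disappeared = {}           # id -> frames since last seen
        self.max_disappeared = max_disappeared

    def update(self, centroids):
        matched = set()
        for c in centroids:
            # assign to the nearest known object, otherwise register a new ID
            oid = min(self.objects, key=lambda i: math.dist(self.objects[i], c), default=None)
            if oid is None or math.dist(self.objects[oid], c) > 80:   # 80 px threshold (assumption)
                oid = next(self.next_id)
            self.objects[oid] = c
            self.disappeared[oid] = 0
            matched.add(oid)
        for oid in list(self.objects):
            if oid in matched:
                continue
            self.disappeared[oid] += 1                 # keep the ID alive for a few frames
            if self.disappeared[oid] > self.max_disappeared:
                del self.objects[oid], self.disappeared[oid]
        return self.objects
```

Finally, I have a video here that gives you some insight into the performance of the Jetson Nano while it is busy.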

On the left, the video of the Jetson Nano – on the right, “jtop”, which shows the CPU & GPU load and more

Now you have learned a lot about the system, and maybe you have already tried out one thing or another. If so, please let me know and write a comment.

So now the only thing missing is what turns all of this into one solution: … the code! It will come in the next article. Stay curious!

By Thomas

As Chief Technology Officer at Xpirit Germany, I am responsible for driving productivity for our customers with a full stack of modern development and technology. I care not only about technologies from Microsoft’s stack like Azure, AI, and IoT, but also about delivering quality and expertise with DevOps.
