How to Run Nvidia Jetson Inference Example on Forecr Products (SegNet) - Forecr.io


Jetson AGX Xavier | Jetson Nano | Jetson TX2 NX | Jetson Xavier NX

05 August 2021
ENVIRONMENT

Hardware: DSBOX-NX2 (Nano), Camera (MIPI CSI or V4L2 or RTP/RTSP streams)

OS: JetPack 4.5


How to recognize objects using segNet 


In this blog post, we will explain segNet, a semantic segmentation example that builds on the ImageNet example that classifies images. You can click here to set up the jetson-inference project and check the ImageNet example.


SegNet uses pre-trained models for different environments, where each class is assigned a color. As in the ImageNet example, objects are classified, but here classification is done at the pixel level, and each recognized object is covered by a colored mask.


Five network models come with the jetson-inference project, each available in several resolution options as well. Each one will be explained below with its classes, color codes, and examples. If you did not download the models while setting up the project, run the Model Downloader tool again by typing the following commands.


cd jetson-inference/tools 
./download-models.sh


To run the project, first go into the jetson-inference directory and run the docker container. Then, go into the build/aarch64/bin directory where the project is built.
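The steps above can be sketched as follows; `docker/run.sh` is the container launch script that ships with the jetson-inference project, and the paths assume the default project layout.

```shell
# On the Jetson host: enter the project and start the pre-built container
cd jetson-inference
docker/run.sh

# Inside the container: go to the directory where the binaries are built
cd build/aarch64/bin
```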

The common usage to run the semantic segmentation program is shown below with additional command lines and examples for each. 


Single Image:


./segnet --network=<model> input.jpg output.jpg    # C++

./segnet.py --network=<model> input.jpg output.jpg # Python


Sequence of images:

./segnet --network=<model> "input_*.jpg" "output_%i.jpg"    # C++

./segnet.py --network=<model> "input_*.jpg" "output_%i.jpg" # Python


Opacity of the segmentation overlay (--alpha, 0-255):

./segnet --network=<model> --alpha=200 input.jpg output.jpg    # C++

./segnet.py --network=<model> --alpha=200 input.jpg output.jpg # Python



Output the solid segmentation mask:


./segnet --network=<model> --visualize=mask input.jpg output.jpg    # C++

./segnet.py --network=<model> --visualize=mask input.jpg output.jpg # Python


Point filtering (leaves the upscaled mask blocky instead of smoothing it):

./segnet --network=<model> --filter-mode=point input.jpg output.jpg    # C++

./segnet.py --network=<model> --filter-mode=point input.jpg output.jpg # Python



Let’s look at each network and its usage in different environments.


Cityscapes


This model is created for segmenting objects in an urban city scene. To run it on multiple images, type the following commands.


# C++
./segnet --network=fcn-resnet18-cityscapes "images/city_*.jpg" "images/test/city_%i.jpg"

# Python
./segnet.py --network=fcn-resnet18-cityscapes "images/city_*.jpg" "images/test/city_%i.jpg"


DeepScene


DeepScene includes a dataset of forest trail roads.
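The DeepScene model is run the same way; per the jetson-inference documentation, its model name is fcn-resnet18-deepscene and the sample forest-trail images are named trail_*.jpg.

```shell
# C++
./segnet --network=fcn-resnet18-deepscene "images/trail_*.jpg" "images/test/trail_%i.jpg"

# Python
./segnet.py --network=fcn-resnet18-deepscene "images/trail_*.jpg" "images/test/trail_%i.jpg"
```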


Multi Human Parsing (MHP)


Multi-Human Parsing consists of a dataset of human body parts and clothing. To run, type the following commands.

# C++
./segnet --network=fcn-resnet18-mhp "images/humans_*.jpg" "images/test/humans_%i.jpg"

# Python
./segnet.py --network=fcn-resnet18-mhp "images/humans_*.jpg" "images/test/humans_%i.jpg"


Pascal VOC


Pascal VOC is a more general dataset that contains animals, vehicles, and furniture. To run, type the following command. 


# C++
./segnet --network=fcn-resnet18-voc "images/object_*.jpg" "images/test/object_%i.jpg"

# Python
./segnet.py --network=fcn-resnet18-voc "images/object_*.jpg" "images/test/object_%i.jpg"


Sun RGB-D


The Sun RGB-D model is created to recognize objects in indoor rooms and offices. To run, type the following command.


# C++
./segnet --network=fcn-resnet18-sun "images/room_*.jpg" "images/test/room_%i.jpg"

# Python
./segnet.py --network=fcn-resnet18-sun "images/room_*.jpg" "images/test/room_%i.jpg"

How to Run the Live Camera Segmentation Demo?


You can also run the semantic segmentation example on a live camera feed. Supported cameras are:

• MIPI CSI cameras (csi://0)

• V4L2 cameras (/dev/video0)

• RTP/RTSP streams (rtsp://username:password@ip:port)


To run segNet with a camera, you can use the following example commands for different camera types and dataset models. Do not forget to run the docker container first.


C++:


./segnet --network=<model> csi://0                 # MIPI CSI camera
./segnet --network=<model> /dev/video0             # V4L2 camera
./segnet --network=<model> /dev/video0 output.mp4  # save to video file


Python:


./segnet.py --network=<model> csi://0                 # MIPI CSI camera
./segnet.py --network=<model> /dev/video0             # V4L2 camera
./segnet.py --network=<model> /dev/video0 output.mp4  # save to video file
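For an RTP/RTSP source, the stream URI simply replaces the camera device as the input. As a sketch, the address and credentials below are a hypothetical example; substitute your own camera's stream details.

```shell
# Segment a network video stream (hypothetical address and credentials)
./segnet.py --network=fcn-resnet18-cityscapes rtsp://admin:password@192.168.1.10:554/stream
```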



You can see an example camera capture of a room with the Sun RGB-D network model below.


# C++ 
./segnet --network=fcn-resnet18-sun /dev/video0

# Python
./segnet.py --network=fcn-resnet18-sun /dev/video0

Thank you for reading our blog post.