I started off with the NVIDIA jetson-inference repository on GitHub: https://github.com/dusty-nv/jetson-inference
I then modified it to accept a grayscale waterfall display from a LimeSDR, both for capturing labelled training images and for detection/inference using SSD-MobileNet v1. The waterfall is produced by a very simple program that uses the cuFFT library and some CUDA code. The video above shows my initial results of monitoring the QO-100 narrowband transponder. It is not perfect, but it shows the possibilities.
Since making the video I have captured more data and done the training on a regular Linux machine with an RTX 3090 graphics card, and I am now getting more reliable results, though yet more training is required. I will also have to do some training on 2 m to capture NBFM signals (not seen on QO-100), and when I get a working HF antenna up I may try it on HF as well. While the training is done on a regular Linux PC, the inferencing (the part that does the actual processing of real-time images) runs on a Jetson Xavier NX, but it would work equally well on a Jetson Nano.
A while ago I also played around with NVIDIA DIGITS, using it to analyse DATV signals and look for transmissions containing real people rather than just test cards or looped videos. So there are plenty of things to use it for.
I can also think of many non-amateur-radio projects to use it for.