Not having blogged for a while, I thought I had better post an update.
I have a number of projects on the go; one of the latest uses
NVIDIA's CUDA to do Software Defined Radio (SDR). I know
there is nothing very original about this, but I wanted to learn how to
use CUDA and thought SDR would be a good place to start.
In the first picture you can see a waterfall of the 2 m band in a
Qt application that uses OpenGL and CUDA.
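On the GPU side a waterfall is essentially just "complex FFT bin in, dB power out", one row at a time. Here is a minimal sketch of that step, assuming the spectrum has already been produced by something like cuFFT; the kernel and buffer names are only illustrative, not the code from my application:

    #include <cuda_runtime.h>
    #include <cuComplex.h>
    #include <math.h>

    // Convert one row of complex FFT bins into dB power values for the
    // waterfall display. One thread per frequency bin.
    __global__ void binsToDb(const cuFloatComplex *bins, float *row, int nBins)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < nBins) {
            // |X|^2, with a tiny offset so log10 never sees zero
            float p = bins[i].x * bins[i].x + bins[i].y * bins[i].y;
            row[i] = 10.0f * log10f(p + 1e-20f);
        }
    }

The host launches it with one thread per bin, something like binsToDb<<<(nBins + 255) / 256, 256>>>(dBins, dRow, nBins), and the finished row is handed to OpenGL as one line of a texture.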
In the picture itself there is not much to see, just some APRS. I am
currently working on the software digital down converters. Using the various memory
resources on the NVIDIA card needs careful planning to obtain
maximum acceleration. I am expecting to be able to have at
least 10 receiver channels running on the card. I have some other ideas
for this parallel computer, such as PAPR reduction in DVB-T2.
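For the down converters the basic per-channel recipe is: multiply the wideband samples by a complex NCO to shift the wanted signal to 0 Hz, then low-pass filter and decimate. Below is a sketch of just the mixing step for several channels at once; the real thing needs the filtering as well, and the names and channel count here are only placeholders:

    #include <cuda_runtime.h>
    #include <cuComplex.h>

    #define MAX_CHANNELS 16

    // Per-channel NCO phase increments (radians per sample). Constant memory
    // suits these, since every thread in a channel reads the same value.
    __constant__ float d_phaseInc[MAX_CHANNELS];

    // Mix the wideband input down by each channel's NCO frequency.
    // Grid: blockIdx.y selects the channel, threads cover the samples.
    __global__ void mixChannels(const cuFloatComplex *in, cuFloatComplex *out,
                                int nSamples)
    {
        int ch = blockIdx.y;
        int i  = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= nSamples)
            return;

        // NCO sample e^{-j * inc * i} (a real DDC would accumulate the phase
        // more carefully to avoid losing precision on long blocks).
        float s, c;
        sincosf(d_phaseInc[ch] * (float)i, &s, &c);
        cuFloatComplex lo = make_cuFloatComplex(c, -s);

        // The complex multiply shifts the wanted channel to 0 Hz; low-pass
        // filtering and decimation would follow this step.
        out[ch * nSamples + i] = cuCmulf(in[i], lo);
    }

A launch would use a 2D grid, e.g. dim3 grid((nSamples + 255) / 256, nChannels), after copying the per-channel increments up with cudaMemcpyToSymbol(d_phaseInc, ...).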
The SDR I am using is an Ettus Research B200, which has a USB 3.0
interface to the PC. The B200 operates from about 50 MHz to 6 GHz
with up to about 32 MHz of bandwidth. I put mine in a
Hammond 1455L1601 box as it comes without a case.
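Getting the samples out of the B200 is handled by Ettus's UHD library rather than CUDA. Roughly, the host side looks like the sketch below; the frequency, rate and gain are example values only, not my actual settings:

    #include <uhd/usrp/multi_usrp.hpp>
    #include <vector>
    #include <complex>

    int main()
    {
        // Open the B200 and set sample rate, centre frequency and gain
        // (example numbers only).
        uhd::usrp::multi_usrp::sptr usrp =
            uhd::usrp::multi_usrp::make(uhd::device_addr_t("type=b200"));
        usrp->set_rx_rate(8e6);                          // 8 Msps
        usrp->set_rx_freq(uhd::tune_request_t(144.8e6)); // 2 m APRS frequency
        usrp->set_rx_gain(40);

        // Stream 32-bit float complex samples over the USB3 link.
        uhd::stream_args_t args("fc32");
        uhd::rx_streamer::sptr rx = usrp->get_rx_stream(args);

        uhd::stream_cmd_t cmd(uhd::stream_cmd_t::STREAM_MODE_START_CONTINUOUS);
        cmd.stream_now = true;
        rx->issue_stream_cmd(cmd);

        std::vector<std::complex<float> > buff(rx->get_max_num_samps());
        uhd::rx_metadata_t md;
        while (true) {
            size_t n = rx->recv(&buff.front(), buff.size(), md, 1.0);
            // ... hand the n samples to the GPU for the down converters
            //     and the waterfall ...
            (void)n;
        }
        return 0;
    }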
I am planning to upgrade my current graphics card to a GTX 690, which
has twice as many CUDA cores. The 690 is actually two GTX 680s on one
card and appears to CUDA as two compute devices. Fortunately these
cards are aimed at the gamer market, and because high end gamers like
to have the best gear, the price of last generation cards on the used
market is very reasonable. I found Gumtree to be a better place to buy
them than eBay.
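The 690 showing up as two compute devices means the code has to enumerate them and split the work explicitly. The CUDA runtime makes that easy enough; a quick check like this lists what it sees:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        // A GTX 690 reports two devices here, one per GK104 GPU on the card.
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int d = 0; d < count; ++d) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, d);
            printf("Device %d: %s, %d multiprocessors, %.1f GB\n",
                   d, prop.name, prop.multiProcessorCount,
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        }
        // Work is then assigned per device, e.g. cudaSetDevice(0) for the
        // down converters and cudaSetDevice(1) for something else.
        return 0;
    }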
NVIDIA have just announced their Pascal chips with NVLink, which
will provide a quantum leap in performance. Those cards will be available
in 2016. They stack memory dies on top of one another and
interconnect them using through-silicon vias, and NVLink will also allow
much faster communication with the host CPU through a shared memory
interface.
Even with PCIe v3, moving sample data between host memory and the
GPU's global memory is the main bottleneck.
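Until then, the usual way of softening that bottleneck is pinned host memory plus asynchronous copies overlapped with kernel work in streams. A minimal sketch of the pattern, with a placeholder kernel and example sizes:

    #include <cuda_runtime.h>
    #include <cuComplex.h>

    // Placeholder kernel standing in for the real DSP work.
    __global__ void process(const cuFloatComplex *in, cuFloatComplex *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = in[i];
    }

    int main()
    {
        const int N = 1 << 20;   // samples per block of work (example size)
        cuFloatComplex *hBuf, *dIn, *dOut;

        // Pinned (page-locked) host memory allows genuinely asynchronous copies.
        cudaHostAlloc((void **)&hBuf, N * sizeof(cuFloatComplex),
                      cudaHostAllocDefault);
        cudaMalloc((void **)&dIn,  N * sizeof(cuFloatComplex));
        cudaMalloc((void **)&dOut, N * sizeof(cuFloatComplex));

        cudaStream_t stream;
        cudaStreamCreate(&stream);

        // Copy in, kernel and copy out are queued on one stream, so the host
        // can be filling the next buffer of samples while this one is still
        // in flight on the GPU.
        cudaMemcpyAsync(dIn, hBuf, N * sizeof(cuFloatComplex),
                        cudaMemcpyHostToDevice, stream);
        process<<<(N + 255) / 256, 256, 0, stream>>>(dIn, dOut, N);
        cudaMemcpyAsync(hBuf, dOut, N * sizeof(cuFloatComplex),
                        cudaMemcpyDeviceToHost, stream);
        cudaStreamSynchronize(stream);

        cudaStreamDestroy(stream);
        cudaFree(dIn);
        cudaFree(dOut);
        cudaFreeHost(hBuf);
        return 0;
    }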
NVIDIA and partners like IBM are working hard to bring this technology
to other programming environments like Java and Python. They are also
providing application-specific libraries for things like deep learning
neural networks. Maybe one day I will have a neural network to work
DX for me while I code.
Well, that is it for now; back to my CUDA 6.5 programming and learning.