From: Johnathan Corgan
Subject: Re: [Discuss-gnuradio] GRC's graphical sinks performance issues
Date: Thu, 2 Sep 2010 09:52:28 -0700

On Thu, Sep 2, 2010 at 08:39, Matt Ettus <address@hidden> wrote:

> I think you are missing the point here.  There is no need to lie to the
> program.  If you are sending the FFT sink 25 MS/s, then tell it you are
> sending it 25 MS/s.  If you give it a different rate you will have all sorts
> of other issues, like incorrect frequency scales on the display. If you want
> to decrease the processor load then reduce the display update rate.

Just to elaborate a bit on this.

The FFT sink in GNU Radio incorporates time-domain frame decimation
via the "keep one in n" block.  The sample stream input to the sink is
divided into frames of the configured FFT size (1024 samples by
default in GRC).  Then, only one frame per "n" is forwarded on to the
FFT block, with "n" calculated as the sample rate divided by the
display update rate, then divided by the FFT size.  In this way, we
only burden the CPU with the windowing/FFT/log-power calculation and
graphics rendering as often as is needed to refresh the display at the
requested rate (all of which still runs as fast C++ code, not
Python).
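
For concreteness, a back-of-the-envelope sketch in Python of that
calculation (the 25 MS/s figure is from Matt's example; the 30 fps
update rate and the variable names are just assumptions for
illustration, not the sink's actual code):

    samp_rate = 25e6    # input sample rate, as reported to the sink
    frame_rate = 30     # requested display updates per second (assumed)
    fft_size = 1024     # samples per frame (GRC default)

    # Decimation factor handed to the "keep one in n" block:
    n = max(1, int(samp_rate / frame_rate / fft_size))
    # n == 813 here, so only one 1024-sample frame in every 813 ever
    # reaches the windowing/FFT/rendering path.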

The "sample rate" parameter to the FFT sink is *not* a control input.
You are simply telling the flowgraph the correct numerical time base
of the input sample stream, to be used in the above calculation.  The
sample rate itself is usually established elsewhere; in this case, by
the upstream USRP2 source block's decimation parameter.  The "sample
rate" parameter is also used to correctly display the units on the
x-axis of the FFT window.
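
As a quick illustration of the x-axis point (the numbers are only an
example, and assume a complex input stream):

    samp_rate = 25e6
    fft_size = 1024
    bin_hz = samp_rate / fft_size   # ~24.4 kHz of spectrum per bin
    # For complex input the display spans -samp_rate/2 .. +samp_rate/2
    # around the tuned center frequency, so a wrong sample rate scales
    # every frequency label by the same wrong factor.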

Thus, the proper way to control the CPU usage of the FFT sink is to
vary the update rate, as Matt and others have mentioned.  If in fact
you are CPU-bound, then "lying" to the FFT sink by giving it an
artificially high, incorrect sample rate will have the side effect of
increasing the frame decimation factor "n", thus lowering the CPU
load and appearing to "cure" the problem.  But the x-axis units/scale
will be incorrect, and the update rate won't match the requested
rate.
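
To make those side effects concrete, another rough sketch with assumed
numbers (claiming 100 MS/s while actually feeding 25 MS/s):

    true_rate = 25e6        # what the flowgraph actually delivers
    claimed_rate = 100e6    # the "lie" given to the sink
    frame_rate = 30
    fft_size = 1024

    n = int(claimed_rate / frame_rate / fft_size)   # 3255: 4x too big
    actual_fps = true_rate / (n * fft_size)         # ~7.5, not 30
    # ...and the x-axis would be drawn 100 MHz wide instead of 25 MHz.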

(None of this speaks to whether your system's OpenGL/video card
combination is properly functioning or whether it results in a
performance improvement over the non-GL version of the sink.)

Johnathan


