Re: [Discuss-gnuradio] Sample rate of file souce/sink
From:
Jesse Reich
Subject:
Re: [Discuss-gnuradio] Sample rate of file souce/sink
Date:
Fri, 18 Mar 2016 01:57:44 +0000
Thank you Marcus and Ed. I finally had a chance to look back at my GRC flowgraph. The data types in and out are the same (complex for all blocks). And since I have been trying to figure this out for a while, I've stripped away all blocks other than RTL-SDR Source -> File Sink for recording, and File Source -> Throttle -> QT GUI Sink for playback.
I figured it out, and it turns out to be embarrassingly simple after all. I should've been paying attention to the console output saying "invalid sample rate: 100000 Hz". It doesn't say what sample rate it was defaulting to. But setting the rates in both the recording (file sink) and playback (file source) flowgraphs to a sample rate that is known to work (2e6) led to playback that matches real time.
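For anyone hitting the same symptom, here's a quick back-of-envelope check in plain Python. The 1 MS/s figure below is purely hypothetical – the console output never says what rate the RTL-SDR actually fell back to – but it reproduces the roughly 1/10-speed playback I saw:

```python
# Hypothetical numbers: if the RTL-SDR rejected 100 kS/s and silently
# fell back to some higher rate (say 1 MS/s), the file then holds 10x
# more samples per second of signal than the playback throttle assumes.
record_rate = 1e6      # assumed actual device rate (unknown in reality)
throttle_rate = 100e3  # rate the playback flowgraph was throttled to

slowdown = record_rate / throttle_rate
print(slowdown)  # 10.0 -> playback runs at ~1/10 of real time
```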
Again, thank you both very much. Probably one of the best things about GRC is the support everyone offers on this list.
Jesse
On Wed, Mar 16, 2016 at 9:25 AM Marcus Müller <address@hidden> wrote:
Hi Jesse,
this is not embarrassingly simple!
So, the point is that GNU Radio is totally agnostic when it
comes to sampling rates. To the blocks, sample streams are
really nothing but sequences of numbers. Whether the signal was
physically sampled at 1 MS/s or 1 S/s doesn't matter; the only
thing that matters is how long an event is in units of samples.
For example, a signal source doesn't use the sampling rate for
anything but calculating how long one period of a cosine is – a
signal source with a cosine frequency of 1 kHz and a sampling
rate of 32 kHz will produce exactly the same samples as one with
a sampling rate of 1 and a cosine frequency of 1/32; there's
really no difference in behaviour.
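You can check that claim with a few lines of plain Python – no GNU Radio needed, just the cosine formula a signal source evaluates:

```python
import math

# Samples from two "signal sources": a 1 kHz cosine sampled at
# 32 kS/s versus a 1/32 Hz cosine sampled at 1 S/s. The ratio of
# frequency to sampling rate is 1/32 in both cases, so the sample
# sequences are identical.
s1 = [math.cos(2 * math.pi * 1000 * n / 32000) for n in range(64)]
s2 = [math.cos(2 * math.pi * (1 / 32) * n / 1) for n in range(64)]

assert all(abs(a - b) < 1e-12 for a, b in zip(s1, s2))
```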
The architecture of GNU Radio implies that every block processes
the input it has as fast as possible to produce output, with
which the next block works as fast as possible, and so on. The
reason there's a limit on the processing speed of your file
source is that somewhere downstream a block has a full input
buffer, so the block before it can't produce samples – there
would be no space to put them.
Now, if all your blocks were infinitely fast at processing
samples, the job of providing this limit (a very common
mechanism in buffered architectures, usually called
backpressure) would always fall to the throttle block, whose
only job is to do exactly that: copy only as many samples per
iteration as needed to keep the time-averaged sampling rate at
the specified limit.
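A minimal sketch of that idea – this is not the actual GNU Radio throttle code, just the arithmetic any such block has to perform:

```python
# How many samples a throttle may pass on, given the wall-clock
# time elapsed since the flowgraph started and the number of
# samples already copied downstream.
def samples_allowed(rate_sps, elapsed_s, already_copied):
    budget = int(rate_sps * elapsed_s)  # total samples "earned" so far
    return max(0, budget - already_copied)

# After 0.5 s at 100 kS/s the budget is 50000 samples; if 40000
# have already been copied, only 10000 more may pass this time.
print(samples_allowed(100e3, 0.5, 40000))  # 10000
```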
Now, if there's another block that effectively limits the rate
of samples going through the flow graph, e.g. a complicated,
CPU-bound calculation, then that block will put up its own
backpressure. Together with the backpressure of the throttle
block, that might decrease the average rate even below the
throttle's rate.
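In other words, the slowest limiter wins. With a made-up example rate for the CPU-bound block:

```python
throttle_rate = 100e3   # the throttle's configured limit
cpu_bound_rate = 60e3   # hypothetical rate a slow block can sustain

# Both limits apply at once, so the flow graph as a whole cannot
# run faster than the tighter of the two.
effective_rate = min(throttle_rate, cpu_bound_rate)
print(effective_rate)  # 60000.0
```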
For a deeper understanding of why this happens it would be necessary
to look at your actual playback flowgraph and the blocks involved.
Best regards,
Marcus
On 16.03.2016 02:29, Jesse Reich wrote:
This is probably embarrassingly simple but I can't seem
to find the answer anywhere. I just recorded a signal to a
file sink with a sample rate of 100k. I go to use that file
as a source with a throttle set to 100k and it seems to
playback at approximately 1/10 the speed. When I step up the
throttle sample rate to 1M it seems to be closer to
real-time. What am I missing??