
[Discuss-gnuradio] Question about GSR internal architecture


From: David Beberman
Subject: [Discuss-gnuradio] Question about GSR internal architecture
Date: Thu, 08 Jul 2004 21:57:01 -0700
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.4.1) Gecko/20031114

Hi,

I'm still reading through the source code, trying to piece together the overall structure.
It didn't help that I didn't know Python or SWIG.  I've since read through
the docs on both, and installed wxPython and the other pieces suggested in the docs. I have yet to get the source tree to fully compile; I'm still missing something, but I haven't had a chance to look at the make output to figure out what's wrong. Probably
something minor.

My problem is that I'm probably trying to do something with GSR that it wasn't
meant for.  (Isn't that always the case?)

As far as I can tell, and please somebody correct me if I'm wrong, the GSR architecture consists of a set of modules hooked together with buffers. Except when threads are not available, each module runs in its own thread. The modules form a chain from a source, such as the output of an ADC, to a sink, such as a graphical analyzer tool, speakers, or a video display.

I think there is one mutex that all the threads wait on, both to read from their source buffer in the chain and to write their output into the sink buffer. (I'm not completely sure about that, but it looks like this is the case.) According to the docs, a complete pass goes through the entire chain of sources and sinks, queuing data to each module. Each module, when it gets the mutex, reads in the data and then gives up the mutex. The module then does its processing in its own thread. Thus the modules move the data buffers between each other in lock step and then do their processing independently. (Or I'm reading the code wrong, and each module simply waits on its source and writes to its sink with separate mutexes.)
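If my reading is right, the data handoff would look something like this minimal sketch. To be clear, the names and the queue-based plumbing below are my own illustration of the idea, not actual GSR classes:

```python
# A minimal sketch of how I read the GSR data flow: each module runs in
# its own thread, pulls a buffer from its source, processes it, and
# pushes the result to its sink. Python's queue.Queue supplies the
# mutex/condition handoff between modules.
import queue
import threading

def module(src, sink, work):
    """One processing block: read, process, write, repeat."""
    while True:
        data = src.get()      # blocks on the source buffer
        if data is None:      # sentinel: propagate shutdown downstream
            sink.put(None)
            return
        sink.put(work(data))  # blocks if the sink buffer is full

# Chain: a -> scale -> b -> offset -> c
a, b, c = (queue.Queue(maxsize=4) for _ in range(3))
t1 = threading.Thread(target=module, args=(a, b, lambda x: x * 2))
t2 = threading.Thread(target=module, args=(b, c, lambda x: x + 1))
t1.start()
t2.start()

for sample in range(5):
    a.put(sample)
a.put(None)

out = []
while (item := c.get()) is not None:
    out.append(item)
t1.join()
t2.join()
print(out)  # [1, 3, 5, 7, 9]
```

Each stage blocks independently on its own buffer here, which corresponds to my second reading (separate mutexes per buffer rather than one global one).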

The overall structure appears to be meant for broadcast reception or transmission. In either case, "pipeline" delays from processing are really not that important. I think this would
be categorized as a near-realtime system, perhaps.

What I need to do is a little bit different. I want to have a receive path and a transmit path, tied together for control purposes, and I want more realtime behavior out of the system. Since this is running in software on a regular PC, I have to define "realtime" a bit differently than for an embedded system. What I want is a fixed relationship between a received signal and the transmit signal sent in response: the transmit signal should go out a given amount of time after the received signal
was originally received.
To do this, I would need some sort of estimate of when the received signal began, something like an interrupt giving me an energy-detect point. Then I need to record the current processor clock time. On the transmit path, I want to hold up transmitting the signal until some increment of time has passed relative to that recorded clock time.
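The scheme I have in mind would look roughly like this sketch. The energy-detect hook, the send callable, and the 50 ms delay are all hypothetical placeholders, and time.monotonic() just stands in for whatever clock source the real system would use:

```python
# Sketch of the timed-response idea: record the clock at the
# energy-detect point on receive, then hold the transmit path until a
# fixed offset after that timestamp.
import time

RESPONSE_DELAY = 0.050  # transmit 50 ms after energy detect (made up)

def on_energy_detect():
    """Called when the receive path flags start-of-signal."""
    return time.monotonic()  # record the receive timestamp

def hold_and_transmit(rx_time, send):
    """Block until rx_time + RESPONSE_DELAY, then fire the transmit."""
    deadline = rx_time + RESPONSE_DELAY
    remaining = deadline - time.monotonic()
    if remaining > 0:
        time.sleep(remaining)  # accuracy depends on kernel granularity
    send()
    return time.monotonic() - rx_time  # actual elapsed, for checking

rx = on_energy_detect()
elapsed = hold_and_transmit(rx, send=lambda: None)
```

The key property is that as long as the receive and transmit processing finish before the deadline, the sleep absorbs the slack and the transmit fires at (approximately) the fixed offset.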

Since this is a regular PC, I will make sure that the total elapsed time of the receive-path processing and the transmit-signal processing is less than the required increment. That way I can be sure that at some point the transmit path will hold up, waiting for the correct transmit time to arrive. As I understand it, the regular Linux kernel multitasks with a granularity in the range of 10 milliseconds. I am looking at using the TimeSys kernel instead; they claim a much lower granularity. I'm also planning to run a bare-bones system: no networking, no GUI, no nothing.
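The granularity claim is easy enough to probe on a given kernel. This is only a rough measurement, not a benchmark, and the numbers will vary with kernel and load:

```python
# A quick probe of scheduler sleep granularity: request a short sleep
# repeatedly and record the worst overshoot past the requested time.
import time

def worst_sleep_overshoot(request=0.001, trials=20):
    worst = 0.0
    for _ in range(trials):
        t0 = time.monotonic()
        time.sleep(request)
        worst = max(worst, time.monotonic() - t0 - request)
    return worst

overshoot = worst_sleep_overshoot()
print(f"worst overshoot: {overshoot * 1000:.3f} ms")
```

On a stock 10 ms-granularity kernel I would expect overshoots on the order of the tick period; a lower-granularity kernel should bring that down correspondingly.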

For the work I'm trying to do, I can pretty much work with any latency that is needed to get through any processing paths needed. I just need to be able to hit a given realtime deadline as a synchronization point. I'm not even that concerned about jitter from scheduler/context switch overhead. I can account for that in my signal processing design.

I'm planning to write the code to implement what I'm describing, and I'll be happy to redistribute it if anybody else ever needs something like it. I'm wondering if someone could give me a couple of pointers on where to start looking in the source code
to figure out how to implement this.

As a secondary question, I've been trying to find examples in the source code of how to handle asynchronous frame reception. What I'm looking for is how to do synchronization on a frame header, followed by data reception. A simple approach would be to put a synchronizer (I use the term loosely) as a source to a data-receiver module. The problem is that once the synchronizer is done, it is just extra context-switch overhead without adding anything. What I would really want is for the chain of sources to sinks to be redirected once a component in the chain has done its job. This isn't strictly necessary, only an optimization. I was just wondering if this
already exists and I'm misreading the code.
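To be concrete about what I mean by a synchronizer, here is a rough sketch of the state machine I have in mind. The header pattern and frame length are made-up values for illustration, not anything from the GSR sources:

```python
# A tiny state machine that hunts for a frame header in a byte stream,
# then passes the following payload through: hunt state slides a window
# looking for the header, locked state collects one payload.
HEADER = b"\xa5\x5a"
FRAME_LEN = 4  # hypothetical payload length in bytes

def frames(stream):
    """Yield one payload for each header found in the stream."""
    it = iter(stream)
    window = bytearray()
    while True:
        # Hunt state: slide a window over the stream until it matches
        # the header pattern.
        for byte in it:
            window.append(byte)
            if len(window) > len(HEADER):
                del window[0]
            if bytes(window) == HEADER:
                break
        else:
            return  # stream exhausted while still hunting
        window.clear()
        # Locked state: collect exactly one payload, then hunt again.
        payload = bytes(b for _, b in zip(range(FRAME_LEN), it))
        if len(payload) < FRAME_LEN:
            return  # truncated final frame
        yield payload

noise = b"\x00\x11\x22"
stream = noise + HEADER + b"ABCD" + noise + HEADER + b"EFGH"
result = list(frames(stream))
print(result)  # [b'ABCD', b'EFGH']
```

The hunt state is exactly the part that becomes dead weight once lock is achieved, which is why I'd like to be able to rewire the chain after it has done its job.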

I'm not familiar with DSP CPUs and DSP architecture in general, so what I'm asking may be obvious. If so, just let me know where to start looking and reading, and I'll do the rest.

David




