Re: [Discuss-gnuradio] inefficient large vectors


From: Miklos Maroti
Subject: Re: [Discuss-gnuradio] inefficient large vectors
Date: Wed, 21 Aug 2013 21:42:11 +0200

Yes, this is what I am doing, but it is not very nice, and you cannot
easily mix in blocks that want to work at the stream level. What
really bugs me is that I think the scheduler could figure this all out
and treat my vectors as a stream, allocating reasonably sized buffers
(who cares whether the vector fits into the buffer an integer number
of times). Am I wrong about this? I think this would be a nice further
development... Miklos
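
The signature setup described in the quoted message below looks
roughly like the following sketch (assuming the GNU Radio 3.7 C++
API; the variable names are illustrative):

    #include <gnuradio/io_signature.h>
    #include <gnuradio/gr_complex.h>

    // One stream "item" is a whole vector.  GNU Radio's double-mapped
    // circular buffers must hold a whole number of items and span a
    // whole number of memory pages, so for an odd item size like
    // 12659 * sizeof(gr_complex) bytes the smallest legal buffer is
    // on the order of lcm(item size, page size) -- tens of megabytes
    // here -- hence the paging warning mentioned in the thread.
    gr::io_signature::sptr in_sig =
        gr::io_signature::make(1, 1, 12659 * sizeof(gr_complex));
    gr::io_signature::sptr out_sig =
        gr::io_signature::make(1, 1, 18353 * sizeof(gr_complex));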

On Wed, Aug 21, 2013 at 9:04 PM, Johnathan Corgan
<address@hidden> wrote:
> On Wednesday, August 21, 2013, Miklos Maroti wrote:
>
>>
>> I have many sync blocks that work with large, fixed-size vectors,
>> e.g. converting one vector of size 12659 to another of size 18353. I
>> have simply multiplied sizeof(gr_complex) by 12659 and 18353 in the
>> signature. However, when the flow graph is running, I get a warning
>> about paging: the circular buffer implementation allocates very large
>> buffers (e.g. 4096 items) to satisfy the paging requirement. I do not
>> really want large buffers. I have also implemented the whole thing
>> with padding, but that becomes really inefficient as well, since when
>> you want to switch between vectors and streams you have to jump
>> through extra hoops to handle the padding. In a previous version I
>> had streams everywhere, but then there was absolutely no verification
>> of whether I had messed up the sizes of my "virtual vectors".
>>
>> So, is there a way to work with large, odd-length vectors that does
>> not have this buffer-allocation problem and does not require padding?
>> It seems to me that this could be supported: regular streams, but
>> with the vector size verified separately at connection time rather
>> than used to multiply the item size. Any advice is appreciated...
>
>
> The best technique here is to round your item size up to the next
> integer multiple of the machine page size, typically 4K. You can
> still operate on one vector at a time, but you will have to do a
> little math to find the start of each vector in the input and output
> buffers, as they will no longer be contiguous. It sounds like you may
> have already tried something like this.
>
>
>
> --
> Johnathan Corgan
> Corgan Labs - SDR Training and Development Services
> http://corganlabs.com
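
A minimal sketch of the page-rounding technique Johnathan describes
above (assuming the GNU Radio 3.7 C++ API; round_up_to_page(), the
block name, and the conversion stub are illustrative, and sysconf()
is used rather than hard-coding 4K):

    #include <gnuradio/sync_block.h>
    #include <gnuradio/io_signature.h>
    #include <gnuradio/gr_complex.h>
    #include <unistd.h>  // sysconf

    // Round an item size in bytes up to the next multiple of the
    // machine page size (typically 4096).
    static size_t round_up_to_page(size_t nbytes)
    {
        size_t page = (size_t) sysconf(_SC_PAGESIZE);
        return ((nbytes + page - 1) / page) * page;
    }

    class vec_convert : public gr::sync_block
    {
        static const size_t VEC_IN  = 12659;  // input vector length
        static const size_t VEC_OUT = 18353;  // output vector length
        size_t d_in_item;   // padded input item size, bytes
        size_t d_out_item;  // padded output item size, bytes

    public:
        vec_convert()
          : gr::sync_block("vec_convert",
                gr::io_signature::make(1, 1,
                    round_up_to_page(VEC_IN * sizeof(gr_complex))),
                gr::io_signature::make(1, 1,
                    round_up_to_page(VEC_OUT * sizeof(gr_complex)))),
            d_in_item(round_up_to_page(VEC_IN * sizeof(gr_complex))),
            d_out_item(round_up_to_page(VEC_OUT * sizeof(gr_complex)))
        {
        }

        int work(int noutput_items,
                 gr_vector_const_void_star &input_items,
                 gr_vector_void_star &output_items)
        {
            const char *in  = (const char *) input_items[0];
            char *out = (char *) output_items[0];

            // One padded item in, one padded item out -- but the
            // vectors are no longer contiguous: the i-th vector
            // starts i * padded_item_size bytes into the buffer.
            for (int i = 0; i < noutput_items; i++) {
                const gr_complex *vin =
                    (const gr_complex *) (in + i * d_in_item);
                gr_complex *vout =
                    (gr_complex *) (out + i * d_out_item);
                // ... fill vout[0 .. VEC_OUT-1] from vin[0 .. VEC_IN-1];
                // the pad bytes after each vector carry no data ...
                (void) vin;
                (void) vout;
            }
            return noutput_items;
        }
    };

Note that downstream blocks operating at the plain stream level would
still see the pad bytes, which is the mixing problem raised at the top
of the thread.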


