
Re: [Discuss-gnuradio] functions to generate random signals


From: CEL
Subject: Re: [Discuss-gnuradio] functions to generate random signals
Date: Tue, 5 Jun 2018 19:28:34 +0000

Hi!

Linda, you **must not** use rand(). It's not thread-safe, and GNU Radio
is inherently multithreaded. For reference, I pointed that out in a
recent mail exchange [-1]; the recommendation there was the same as
Dave's: use C++11's std::random generators; that email even has a small
code example :)

That being said, be warned that this is a lengthy email, and it
presumes you actually care about generating random numbers in detail.

Otherwise you really shouldn't be writing your own generator code;
follow Dave's advice and use the C++11 built-in random number
generators. The Boost library also contains variate generators, i.e.,
you can use Boost to generate uniform, but also Gaussian, Laplace, …
random variables.
GNU Radio does export its own random number generators, but let's be
honest: there's zero advantage to using them compared to what your
language (C++11) already brings.
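
To give you an idea of what that route looks like, here's a minimal,
untested sketch using only C++11's <random>; the engine and
distribution choices are just examples, not anything GNU Radio
mandates:

----------------------------------------------------------------------
#include <random>

// one engine per block instance (or per thread), so state is never shared
std::mt19937 engine(42);                               // any seed you like
std::uniform_real_distribution<float> uniform(0.0f, 1.0f);
std::normal_distribution<float> gaussian(0.0f, 1.0f);  // mean, stddev

float u = uniform(engine);   // uniform on [0, 1)
float g = gaussian(engine);  // approximately N(0, 1)
----------------------------------------------------------------------

Keep that engine per block instance (or per thread), so you don't
re-introduce the shared-state problem that makes rand() unusable here.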

To add something new to what Dave said: if you've got a recent release of GNU
Radio (3.7.12.0 or later, or current master), you'll find there's a
gnuradio-runtime/include/gnuradio/xoroshiro128p.h [0], which
contains a fast, high-quality pseudorandom number generator; its usage
is pretty simple:

----------------------------------------------------------------------
#include <gnuradio/xoroshiro128p.h>
// your other code
uint64_t internal_state[2];
uint64_t seed = 42; /* whatever you want to use as seed */
xoroshiro128p_seed(internal_state, seed);
// all set up for use now!
while(you_need_random_numbers) {
     uint64_t uniform_64bit_int = xoroshiro128p_next(internal_state);
}
----------------------------------------------------------------------

which gives you uniformly distributed integers on [0; 2⁶⁴-1]. Often, you
want uniform floating-point numbers instead. Feel free to use these
functions (from [1], LGPL):

----------------------------------------------------------------------
#include <stdint.h> /* uint64_t, UINT64_C */

/*! \brief Convert the uint64_t output of the RNG to a uniform float32
 *  on [0, 1)
 */
static inline float uint64_to_f(const uint64_t in) {
  // Float32 has 24 significant bits: keep the top 24 bits and scale.
  return (in >> (64 - 24)) * (1.0f / (UINT64_C(1) << 24));
}

/*! \brief Convert the uint64_t output of the RNG to a uniform double
 *  (float64) on [0, 1)
 */
static inline double uint64_to_d(const uint64_t in) {
  // Float64 has 53 significant bits: keep the top 53 bits and scale.
  return (in >> (64 - 53)) * (1.0 / (UINT64_C(1) << 53));
}
----------------------------------------------------------------------
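
Putting those pieces together would then look roughly like this (just a
sketch, assuming you've copied uint64_to_f from above into your code):

----------------------------------------------------------------------
#include <gnuradio/xoroshiro128p.h>
#include <stdint.h>

uint64_t state[2];
xoroshiro128p_seed(state, UINT64_C(42)); // seed once, e.g. in your block's constructor

for (int i = 0; i < 1024; i++) {
  // uint64_to_f() is the helper from the snippet above
  float u = uint64_to_f(xoroshiro128p_next(state)); // uniform float in [0, 1)
  // ... use u ...
}
----------------------------------------------------------------------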

Now, generating Gaussians has traditionally been done with the Box-
Muller transform applied to a uniform variable; for example,
Boost::random does that. I find that this method is /very/ unsatisfactory in 
multiple aspects¹.
Instead, I propose approximating a Gaussian simply by applying the
Central Limit Theorem (CLT) to a sequence of uniform random numbers.
I've tested that: with 32 summands² from the same XOROSHIRO128+ source,
you approximate the theoretical Gaussian CDF to a degree much better
than you're likely ever to require. One caveat: the tails of the CDF
get cut off, which is logical: by summing up 32 uniforms, you simply
can't get a number larger than 32 times the upper boundary.
My code [1] contains a method for that as well (I recommend using
iterations=32):

----------------------------------------------------------------------
  /*! \brief Use the Central Limit Theorem to generate an approximately
   *  normal variate with mean 0 and standard deviation stddev.
   *  Needs <stdint.h>, <math.h> (sqrtf) and the xoroshiro128p.h above.
   */
  static inline float xoroshiro128p_cltf(uint64_t *state,
                                         const uint32_t iterations,
                                         const float stddev) {
    // expected value of a [0; 2^32-1] uniformly discrete variable is
    // 2^31 - 1/2; we use 2^31 - 1 and correct the resulting +0.5 bias
    // per summand below ("sum -= iterations / 2")
    static const int32_t mu_32 = (UINT64_C(1) << 31) - 1;
    // standard deviation of that variable: sqrt(((b-a+1)^2 - 1)/12)
    // with b = 2^32 - 1, a = 0, i.e. sqrt((2^64 - 1)/12) ~= 2^31/sqrt(3)
    static const float std_32 = 1239850262.2531197f;
    // the sum of `iterations` summands has standard deviation
    // std_32 * sqrt(iterations); dividing by norm rescales it to stddev
    float norm = std_32 / stddev * sqrtf(iterations);
    int64_t sum;
    if (iterations % 2) { // odd: take a single summand from the upper 32 bits
      sum = (int64_t)(xoroshiro128p_next(state) >> 32) - mu_32;
    } else {
      sum = 0;
    }
    for (uint32_t i = 0; i < iterations / 2; i++) {
      // every 64-bit output yields two 32-bit summands
      uint64_t tmp = xoroshiro128p_next(state);
      sum += (int64_t)(tmp >> 32) - mu_32;
      sum += (int64_t)(tmp & 0xFFFFFFFF) - mu_32;
    }
    // remove the +0.5 bias per summand introduced by mu_32
    sum -= iterations / 2;
    return (float)sum / norm;
  }
----------------------------------------------------------------------
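
Calling it is equally simple; for example (again just a sketch, the
seed here is arbitrary):

----------------------------------------------------------------------
uint64_t state[2];
xoroshiro128p_seed(state, UINT64_C(1234));

for (int i = 0; i < 1024; i++) {
  // 32 summands as recommended above; mean 0, stddev 1.0
  float sample = xoroshiro128p_cltf(state, 32, 1.0f);
  // ... write `sample` to your block's output buffer ...
}
----------------------------------------------------------------------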

Hope that helps!

Best regards,
Marcus

[-1] http://lists.gnu.org/archive/html/discuss-gnuradio/2018-05/msg00237.html
[0] https://github.com/gnuradio/gnuradio/blob/master/gnuradio-runtime/include/gnuradio/xoroshiro128p.h
[1] https://github.com/marcusmueller/gr-xoroshiro/blob/master/include/xoroshiro/xoroshiro-variates.h
[2] https://projecteuclid.org/download/pdf_1/euclid.aoms/1177706645
[3] https://github.com/marcusmueller/gr-xoroshiro/blob/master/examples/Kolmogorov-Smirnov-Test.ipynb

¹ First of all, the code in Boost is either stolen from "Numerical
Recipes in C" 3rd ed., or both stole from the same place. Either way,
neither cites a source other than the original paper [2], and that paper
doesn't contain the code used in Boost (or the nearly identical code in
Numerical Recipes). So, how do I put this? It's license-wise precarious.
Then, compute performance: Box-Muller depends on testing each partial
sum, which seems to thrash the pipeline on modern CPUs (this is a
guess); I found it slower than just doing the naive summing (normalizing
along the way).
Then, numerical performance: if you plot the empirical distribution
(histogram) of a large set of Box-Muller-generated "standard normal"
variables, you'll always notice something strange happening around 0;
something is rotten in the state of Denmark.

² For a discussion of the number of summands, see the plots I made
before we fixed the fastnoise_source [3]. fastnoise_source is now fixed
with respect to multi-threading; it's unfixed with respect to speed: for
realistic random pool sizes it's slower than just running xoroshiro128+
and generating variates "on the fly" (but still faster than Boost's
mt19937 and Boost's variates). If we switch the non-fast noise_source
over to xoroshiro128+, we can get rid of fastnoise_source; we just
haven't had time to pull that off.

On Tue, 2018-06-05 at 13:17 -0400, Linda20071 wrote:
> I understand the uniform random generator and Gaussian generator have
> already been implemented in GNU Radio. However, for some reason, I
> need to implement some customized blocks to generate my own random
> sequences.
> 
> Could an expert here explain:
> 1. Is there a function in C++ that can generate a uniform noise? Is
> it the function rand() in math.h?
> 2. Is there a function in C++ that can generate a Gaussian noise?
> 
> If yes, are there any documents on how to use these two functions?
> 
> Thank you.
> 
> 
>  
> _______________________________________________
> Discuss-gnuradio mailing list
> address@hidden
> https://lists.gnu.org/mailman/listinfo/discuss-gnuradio
