This bugzilla service is closed. All entries have been migrated to https://gitlab.com/libeigen/eigen
Bug 628 - random_default_impl broken for integer types
Summary: random_default_impl broken for integer types
Status: RESOLVED FIXED
Alias: None
Product: Eigen
Classification: Unclassified
Component: Core - general
Version: 3.2
Hardware: All
OS: All
Importance: Normal Unknown
Assignee: Nobody
URL:
Whiteboard:
Keywords:
Depends on:
Blocks: 3.2
Reported: 2013-07-10 11:54 UTC by Gael Guennebaud
Modified: 2019-12-04 12:28 UTC
CC List: 3 users



Description Gael Guennebaud 2013-07-10 11:54:29 UTC
The current implementation of random_default_impl for integers is as follows:

enum { rand_bits = floor_log2<(unsigned int)(RAND_MAX)+1>::value, // bits produced by std::rand()
       scalar_bits = sizeof(Scalar) * CHAR_BIT,                   // bits in Scalar
       shift = EIGEN_PLAIN_ENUM_MAX(0, int(rand_bits) - int(scalar_bits)) // excess bits to drop
};
Scalar x = Scalar(std::rand() >> shift);
Scalar offset = NumTraits<Scalar>::IsSigned
              ? Scalar(1 << (rand_bits-1))  // meant to recenter the result around 0
              : Scalar(0);
return x - offset;

and that does not sound correct to me. For instance, let's assume rand_bits==32 and scalar_bits==8 (Scalar==char). Then offset equals 2^31 while it should be 2^7.

Another problem is that std::rand() >> shift might not fit within a Scalar if Scalar is signed.
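
To make both issues concrete, here is a small standalone sketch (hypothetical walk-through, assuming RAND_MAX == 2^31 - 1, i.e. rand_bits == 31, and a two's-complement signed char):

#include <climits>
#include <cstdio>
#include <cstdlib>

int main()
{
    // Hypothetical values for Scalar == char on a typical platform.
    const int rand_bits   = 31;                      // floor_log2(RAND_MAX + 1)
    const int scalar_bits = CHAR_BIT;                // 8
    const int shift       = rand_bits - scalar_bits; // 23

    int  r      = std::rand() >> shift;         // in [0, 255]: can exceed CHAR_MAX
    char x      = char(r);                      // implementation-defined for r > 127
    char offset = char(1 << (rand_bits - 1));   // low byte of 2^30 is 0, not 2^7
    std::printf("r=%d  x=%d  offset=%d\n", r, int(x), int(offset));
    return 0;
}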

So, is it OK to assume that every bit returned by std::rand() is equally likely to be 0 or 1, and likewise for the bits of the random integer we want to produce? If so, then we could simply reinterpret_cast "std::rand() & ((1<<scalar_bits)-1)".
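
A minimal sketch of that idea for Scalar == char could look like this (hypothetical helper, not Eigen code; it stands or falls with the uniformity assumption above):

#include <cstdlib>
#include <cstring>

// Mask-and-reinterpret sketch for Scalar == char. Relies on the assumption
// that every bit of std::rand() is uniformly random.
char random_char_masked()
{
    unsigned char bits = (unsigned char)(std::rand() & 0xFF); // low scalar_bits bits
    char x;
    std::memcpy(&x, &bits, 1); // reinterpret the bit pattern; avoids signed overflow
    return x;                  // covers [-128, 127] uniformly under the assumption
}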

Or, we can also fix the current implementation like this:

if(IsSigned)
 return Scalar( (std::rand() >> shift) - (1 << (scalar_bits-1)) );
else
 return Scalar(std::rand() >> shift);
Comment 1 Christoph Hertzberg 2013-07-10 13:48:56 UTC
My rand() man page says that on some older/other systems the lower-order bits are less random than the higher-order bits.
This is why, in C,
  r = ((long long) rand() * n)/RAND_MAX; // is preferable to:
  r = rand() % n;

Subtracting (1<<(scalar_bits-1)) will give very strange behavior if (scalar_bits > rand_bits), and for (scalar_bits < rand_bits) subtracting the offset does not really make a difference, I guess. (Sure, technically signed integer overflow leads to undefined behavior, but I doubt that this happens on any platform we support.)

That said, I'm not really a fan of rand() anyway, as I generally prefer reproducible random numbers (e.g. via <boost/random/> or C++11 <random>).
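
For illustration, a reproducible variant based on C++11 <random> could look like this (a sketch only, not what Eigen ended up doing):

#include <limits>
#include <random>

// Sketch of the reproducible C++11 alternative mentioned above. Note that
// uniform_int_distribution is not specified for char, so the draw is done
// with int and narrowed afterwards.
char reproducible_random_char(std::mt19937& gen)
{
    std::uniform_int_distribution<int> dist(std::numeric_limits<char>::min(),
                                            std::numeric_limits<char>::max());
    return char(dist(gen)); // in [-128, 127] by construction, no overflow
}

// Usage: std::mt19937 gen(2013); yields the same sequence on every run.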
Comment 2 Gael Guennebaud 2013-07-10 21:14:24 UTC
I still prefer to make sure no overflow will occur, so:

https://bitbucket.org/eigen/eigen/commits/942b7eedb4e8/
Changeset:   942b7eedb4e8
User:        ggael
Date:        2013-07-10 21:11:41
Summary:     Revisit the implementation of random_default_impl for integer to avoid overflows and compiler warnings.
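
The authoritative fix is the changeset above; purely to illustrate the overflow-free idea, here is a hypothetical helper for Scalar == char (a sketch, not the committed code):

#include <cstdlib>
#include <limits>

// Illustrative sketch only -- see changeset 942b7eedb4e8 for the real fix.
// All intermediate arithmetic happens in unsigned/int, so nothing overflows.
// Assumes char is signed with range [-128, 127].
char random_char_no_overflow()
{
    // Unsigned wraparound makes this 255: 127u - (unsigned)(-128).
    const unsigned range = (unsigned)(std::numeric_limits<char>::max())
                         - (unsigned)(std::numeric_limits<char>::min());
    unsigned r = (unsigned)(std::rand()) % (range + 1);          // in [0, 255]
    return char(int(r) + int(std::numeric_limits<char>::min())); // in [-128, 127]
}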
Comment 3 Nobody 2019-12-04 12:28:14 UTC
-- GitLab Migration Automatic Message --

This bug has been migrated to gitlab.com's GitLab instance and has been closed from further activity.

You can subscribe and participate further in the new bug through this link to our GitLab instance: https://gitlab.com/libeigen/eigen/issues/628.
