random_default_impl broken for integer types
@ggael
Submitted by Gael Guennebaud
Assigned to Nobody
Link to original bugzilla bug (#628)
Version: 3.2
Description
The current implementation of random_default_impl for integer types is as follows:
enum { rand_bits   = floor_log2<(unsigned int)(RAND_MAX)+1>::value,
       scalar_bits = sizeof(Scalar) * CHAR_BIT,
       shift       = EIGEN_PLAIN_ENUM_MAX(0, int(rand_bits) - int(scalar_bits))
};
Scalar x = Scalar(std::rand() >> shift);
Scalar offset = NumTraits<Scalar>::IsSigned
              ? Scalar(1 << (rand_bits-1))
              : Scalar(0);
return x - offset;
and that does not sound correct to me. For instance, let's assume rand_bits==32 and scalar_bits==8 (Scalar==char). Then offset is equal to 2^31, while it should be 2^7.
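To make the example concrete, here is a hedged, stand-alone check of what those enum values evaluate to (rand_bits==32 is an assumption taken from the example above, not a measured value):

#include <cstdio>
#include <climits>

int main()
{
  // Values from the example above (rand_bits is an assumption of the example).
  const int rand_bits   = 32;
  const int scalar_bits = int(sizeof(char)) * CHAR_BIT;                          // 8
  const int shift       = rand_bits > scalar_bits ? rand_bits - scalar_bits : 0; // 24

  // x = std::rand() >> 24 would lie in [0, 255], so the recentring offset should
  // be 1 << (scalar_bits-1) == 128 (2^7); the current code instead uses
  // 1 << (rand_bits-1) == 2^31, which does not even fit in a char.
  std::printf("shift = %d, expected offset = %d\n", shift, 1 << (scalar_bits - 1));
  return 0;
}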
Another problem is that std::rand() >> shift might not fit within a Scalar if Scalar is signed.
So, is it ok to assume that all the bits returned by std::rand() are equally likely, and likewise that all the bits of a random integer should be equally likely? If so, then we could simply reinterpret_cast "std::rand() & ((1 << scalar_bits) - 1)"?
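A minimal sketch of that idea (the names are illustrative; it assumes std::rand() provides at least sizeof(Scalar)*CHAR_BIT uniformly distributed low bits, and that converting the unsigned bit pattern to a signed Scalar wraps as on the usual two's-complement targets):

#include <cstdlib>

template<typename Scalar, typename UScalar>  // UScalar: unsigned counterpart of Scalar
Scalar random_from_bits_sketch()
{
  // Truncating to UScalar keeps exactly the low sizeof(Scalar)*CHAR_BIT bits,
  // i.e. the same as masking with (1 << scalar_bits) - 1.
  UScalar bits = UScalar(std::rand());
  // Reinterpret that bit pattern as the (possibly signed) Scalar.
  return Scalar(bits);
}

// usage: signed char c = random_from_bits_sketch<signed char, unsigned char>();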
Or, we could also fix the current implementation like this:
if(IsSigned)
  return Scalar( (std::rand() >> shift) - (1 << (scalar_bits-1)) );
else
  return Scalar(std::rand() >> shift);
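For reference, here is a self-contained sketch of that proposed fix outside of Eigen (the rand_bits value and the signedness test are stand-ins for floor_log2<...>::value and NumTraits<Scalar>::IsSigned, and it assumes scalar_bits <= rand_bits, as in the char example above):

#include <cstdlib>
#include <climits>
#include <limits>

template<typename Scalar>
Scalar proposed_random_sketch()
{
  enum {
    rand_bits   = 31,                                 // assumption: RAND_MAX == 2^31 - 1
    scalar_bits = sizeof(Scalar) * CHAR_BIT,
    shift       = int(rand_bits) - int(scalar_bits)   // >= 0 under the assumption above
  };
  if (std::numeric_limits<Scalar>::is_signed)
    // map [0, 2^scalar_bits) down to [-2^(scalar_bits-1), 2^(scalar_bits-1))
    return Scalar((std::rand() >> shift) - (1 << (scalar_bits - 1)));
  else
    return Scalar(std::rand() >> shift);
}

// e.g. char c = proposed_random_sketch<char>();  // now in [-128, 127]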