Your code returns either 0 or -1 because 18446744073709551616 is far too large to fit in an int64_t. (In fact, it's slightly too large to fit even in a uint64_t, since it is exactly 2^64, and the largest number that can fit in a k-bit unsigned integer is 2^k - 1.) So you end up with signed integer overflow, which is undefined behaviour. (gcc and clang, at least, warn about this even without -Wall.)
At any rate, it is not so difficult to produce the library function you are seeking, provided you have some mechanism to generate random 64-bit unsigned integers. A good option would be the Mersenne Twister library. However, for a demonstration we can use only standard C library functions, in this case lrand48, which produces a uniformly-distributed integer in the range [0, 2^31 - 1]. Since each call produces only 31 bits of randomness, we'll need to call it three times in order to produce 64 bits.
#define _XOPEN_SOURCE 700
#include <stdlib.h>
#include <stdint.h>
uint64_t urand64() {
    uint64_t hi = lrand48();
    uint64_t md = lrand48();
    uint64_t lo = lrand48();
    return (hi << 42) + (md << 21) + lo;
}
To get an unbiased sample from the range [low, high), we need to restrict our random number generation to some multiple of high - low. The range of urand64 is of size 2^64, so we need to exclude 2^64 mod (high - low) values. Unfortunately, unless we have an unsigned type wider than 64 bits, we cannot compute that modulus directly. However, we can use the identity:
(m mod k + n mod k) mod k = (m + n) mod k
In this case, we'll choose m as 2^64 - 1 (which fits in a uint64_t, so m mod k is directly computable with the % operator) and n as 1 (so n mod k is simply 1 for any k > 1). Also, it's easy to demonstrate that unless k is an exact power of 2, (2^64 - 1) mod k + 1 cannot be exactly k, whereas if k is an exact power of 2, the desired 2^64 mod k is 0. We can use the following simple test for a power of 2, whose explanation can be found elsewhere:
bool is_power_of_2(uint64_t x) {
    return x == (x & -x);  /* parentheses required: == binds more tightly than & */
}
So we can define:
uint64_t unsigned_uniform_random(uint64_t low, uint64_t high) {
    static const uint64_t M = ~(uint64_t)0;
    uint64_t range = high - low;
    uint64_t to_exclude = is_power_of_2(range) ? 0
                                               : M % range + 1;
    uint64_t res;
    // Eliminate `to_exclude` possible values from consideration.
    while ((res = urand64()) < to_exclude) {}
    return low + res % range;
}
Note that in the worst case, the number of values to exclude is 2^63 - 1, which is slightly less than half the range of possible values. So, in the worst case, we will require, on average, two calls to urand64 before we find a satisfactory value.
Finally, we need to deal with the fact that we're asked to produce signed integers rather than unsigned ones. However, that's not a problem, because we can map the signed range onto the unsigned range and back in a way that keeps every conversion well-defined.
int64_t uniform_random(int64_t low, int64_t high) {
    static const uint64_t OFFSET = ((uint64_t)1) << 63;
    uint64_t ulow = (uint64_t)low + OFFSET;
    uint64_t uhigh = (uint64_t)high + OFFSET;
    uint64_t r = unsigned_uniform_random(ulow, uhigh);
    // Conform to the standard; a good compiler should optimize.
    if (r >= OFFSET) return r - OFFSET;
    else return (int64_t)r - (int64_t)(OFFSET - 1) - 1;
}
random_value % (rangeend - rangestart) + rangestart - keltar

random_value has really high precision, significantly more than 64 bits, so this is hard to implement using standard C. There is also the risk of integer overflows: for example, consider signed integers where rangeend has the maximum value and rangestart the minimum value. - CodesInChaos

rand(), it's broken beyond repair. - CodesInChaos