I am looking for a way to generate normally-distributed values such that, for any representable x, the probability of the algorithm outputting x is exactly tail(mean(x, nextafterf(x, -FLT_MAX))) - tail(mean(x, nextafterf(x, FLT_MAX))), where tail(z) is the area under the normal PDF to the right of z. In other words, x should be produced with exactly the probability that an ideal normal sample falls closer to x than to any other representable value. (The formula as written is for floats. For doubles, do the same thing but replace double-precision things with long doubles, and floats with doubles.)
The only way I've come up with to do so is inefficient and calculation-intensive (not to mention that it requires doubles to produce float values, and long doubles to produce doubles) and goes like this:

First, precalculate, for each possible exponent, the odds that the result will have that exponent (the total probability mass falling in that exponent's range), along with the probability of getting the smallest-magnitude number with that exponent.

Then, when you need a value, generate a uniform random number with your favorite method and use it to select which exponent to use, going by the precalculated odds (you might need two random numbers to get enough precision to differentiate the boundaries).
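The exponent-selection step amounts to a search over a cumulative table. A minimal sketch, assuming the precalculated per-exponent odds have been accumulated into a nondecreasing array (the array layout and function name here are hypothetical):

```c
/* Given cum[0..n], with cum[0] == 0.0, cum[n] == 1.0, and cum[k]
   the precalculated probability of drawing an exponent below k,
   return the index k in [0, n) such that cum[k] <= u < cum[k+1]. */
static int pick_exponent(const double *cum, int n, double u) {
    int lo = 0, hi = n;
    while (hi - lo > 1) {
        int mid = (lo + hi) / 2;
        if (u < cum[mid]) hi = mid;
        else              lo = mid;
    }
    return lo;
}
```

A second uniform draw could refine u further before the search if one draw cannot resolve the bin boundaries.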
Now that you know what exponent your float/double will have, use rejection sampling to get a number with that exponent. First generate the rest of the bits uniformly at random (so, 23 mantissa bits for a float, 52 for a double, plus a sign bit). Then calculate the probability of the normal distribution landing closer to that candidate than to any other representable number, as a fraction of the (precalculated) probability of getting the smallest-magnitude number with that exponent, to use as an acceptance threshold.
Then generate another uniform value and check whether it is less than the threshold just calculated. If it is, return the candidate; if not, generate another candidate with the same exponent and repeat.
As you can imagine, this procedure is quite slow. Is there any known efficient way to produce normally-distributed float/double values that have exactly the ideal probability of producing each representable value?