A post on Hacker News descended into an argument in the comments ((finish the sentence before you go “oh, quelle surprise”, please.)) about whether random() * random() and random(random()) gave different distributions.
Now, I know better than to wade into a Hacker News discussion unless it’s about mental arithmetic and can name-drop Colin Wright. However, this did interest me enough to think about. Maybe you’d like to think about it, too. Spoilers below the line.
At heart, the question asks “is drawing a random number $X\sim U[0,1]$ and then another $Y\sim U[0,X]$ the same thing as drawing two random numbers $P\sim U[0,1]$ and $Q\sim U[0,1]$, then letting $Y = PQ$?”
The combatants were agreed on one thing: these calls were both different to drawing a random number $X\sim U[0,1]$ and letting $Y=X^2$. If fish were ever in a kettle, this was an entirely different one.
This… well, none of it is easy to see. However, we can look at the chances of getting a high score - let’s say $Y \ge 0.81$, picking something that’s easy to square root.
In the squared case, we get that 10% of the time – $Y = X^2 \ge 0.81$ exactly when $X \ge 0.9$, so we’re good with probability 0.1.
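If you’d rather trust a computer than a square root, here’s a quick sanity check (a sketch using Python’s standard random module):

```python
import random

# In the Y = X^2 case, Y >= 0.81 exactly when X >= 0.9,
# so the tail probability should come out near 0.1.
random.seed(1)
n = 200_000
hits = sum(random.random() ** 2 >= 0.81 for _ in range(n))
print(hits / n)  # should be close to 0.1
```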
In the random() * random() case, we’d need $P$ and $Q$ to lie in the region $PQ > 0.81$, which – you might want to check my integration – has area $0.19 + 0.81\ln(0.81)$. Whatever that is (it’s about 0.019), it’s not exactly 0.1.
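For anyone checking that integration, one way to do it: the integrand is zero for $p < 0.81$ (then $q = 0.81/p$ would have to exceed 1), so

\[ P(PQ > 0.81) = \int_{0.81}^{1}\left(1 - \frac{0.81}{p}\right)\,\mathrm{d}p = (1 - 0.81) + 0.81\ln(0.81) \approx 0.0193. \]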
If you go through the process for the random(random()) case, you get… the same result. Coincidence? I think not! But a matching answer, no matter how unusual, doesn’t constitute a proof.
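Before the proof, a simulation (a sketch, again with Python’s random module) comparing both constructions against the closed form:

```python
import math
import random

# Estimate P(Y >= 0.81) for both constructions and compare with
# the exact area 0.19 + 0.81*ln(0.81).
random.seed(2)
n = 200_000

# random() * random(): the product of two independent U[0,1] draws
product = sum(random.random() * random.random() >= 0.81 for _ in range(n)) / n

# random(random()): draw X ~ U[0,1], then Y ~ U[0,X]
nested = sum(random.uniform(0, random.random()) >= 0.81 for _ in range(n)) / n

exact = 0.19 + 0.81 * math.log(0.81)
print(product, nested, exact)
```

Both empirical estimates should land close to the same value, around 0.019.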
The key to proving that they’re the same lies in realising that (from the random(random()) case) $Y \sim U[0,X]$ is the same thing as drawing $Z \sim U[0,1]$ and letting $Y = XZ$.
So $Y$ is the product of two variables drawn from $U[0,1]$ – which is the same as the random() * random() case.
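If you want the full distributional check rather than just one tail probability: in either construction, conditioning on the first draw $x$ gives $P(Y \le y \mid x) = \min(1, y/x)$, so for $0 < y < 1$,

\[ P(Y \le y) = \int_0^1 \min\!\left(1, \frac{y}{x}\right)\mathrm{d}x = y + \int_y^1 \frac{y}{x}\,\mathrm{d}x = y - y\ln(y). \]

Same CDF, same distribution – and putting $y = 0.81$ recovers the tail probability $0.19 + 0.81\ln(0.81)$ from before.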