
Re: [gnubg] Help with a new MET

From: Timothy Y. Chow
Subject: Re: [gnubg] Help with a new MET
Date: Tue, 12 Nov 2019 16:39:05 -0500 (EST)
User-agent: Alpine 2.21 (LRH 202 2017-01-01)

On Wed, 13 Nov 2019, Joseph Heled wrote:

> "but for most practical purposes this is an irrelevant technicality"
>
> Are you saying that I can treat each of the 4 estimates independently?
> That is, use sqrt(pq/N) as the std for each? Seems problematic to me :)
No, I didn't say that. As I said, the 4 estimates are not independent. What I recommended was for you to compute the sample standard deviation for each parameter of interest. So for example, if you have 100 samples and you're interested in the gammon rate, then first compute the mean gammon rate over all your samples. Call that mu. Then for each sample value g_i, compute (g_i - mu)^2. Sum these up, divide by 100, and take the square root. This will give you some indication of the dispersion of your sample set.
The formula sqrt(pq/N) arises when you're doing hypothesis testing. It's the standard deviation under the null hypothesis. But so far, you haven't specified a null hypothesis.
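For concreteness, here is that formula as a one-line function (the name `null_std` is mine; p is the proportion assumed under the null hypothesis and q = 1 - p):

```python
import math

def null_std(p, n):
    """Standard deviation of an observed proportion over n trials,
    assuming the true probability is p (the null hypothesis)."""
    return math.sqrt(p * (1 - p) / n)

print(null_std(0.5, 100))  # 0.05
```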
> Yes, a Bayesian approach would be better, but this probably involves
> things like contour integration or other horrors.
No, it doesn't. But you do need to specify a prior distribution. Suppose you're interested in the win rate, and your prior distribution is uniform on the interval [0,1]. For illustration purposes, let's say you're satisfied with accuracy to 1 decimal place, so each of the probabilities in the set {0.1, 0.2, ..., 0.9} has prior probability 1/9.

Now you start to collect data. Say the first data point is a win. Then using Bayes's rule, you find that the posterior probability of a win rate of j/10 is obtained by multiplying the prior probability by j/10, and then normalizing so that everything sums to 1. So the posterior probabilities work out to be
[1/45, 2/45, 3/45, 4/45, 5/45, 6/45, 7/45, 8/45, 9/45]

Similarly, if you observe a loss, then you adjust by multiplying the prior probability by 1 - j/10 and normalizing. Repeat for every observation in your sample.
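The update rule above can be sketched in a few lines of Python, using exact fractions so the first-win posterior can be checked against the list given earlier (variable and function names are mine):

```python
from fractions import Fraction

# Candidate win rates {0.1, ..., 0.9}, each with prior probability 1/9
grid = [Fraction(j, 10) for j in range(1, 10)]
posterior = [Fraction(1, 9) for _ in grid]

def update(posterior, won):
    """One Bayes step: weight each candidate rate p by the likelihood
    of the observation (p for a win, 1 - p for a loss), then normalize."""
    weighted = [pr * (p if won else 1 - p) for pr, p in zip(posterior, grid)]
    total = sum(weighted)
    return [w / total for w in weighted]

# First observation is a win: posterior for rate j/10 becomes j/45
posterior = update(posterior, won=True)
```

Repeating `update` once per game in the sample, with `won` set accordingly, implements the full procedure.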
Tim
