Critical model variables
- is there a delay in ballot counting?
- duration of counting delay
- overall duration of ballot count including delays
- mail-in returns / mail-in ballots sent percentage (may be greater than 100%)
- oldest recorded voter
I wish you every success with your prediction.
I have only one variable, and two critical constants in the hypercynical model:
i, rate, and pi.
Rate is a number close to, but not exactly, Ramanujan's constant,
and i is the number we want to minimize as the "cynical" estimate.
We do the following (sketched in code below):
start = i
output = i * (rate * pi)
We then reduce i until the output matches the start value; that is our stopping condition.
The final value of i (and not the output itself) then becomes our new estimate.
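Assuming we pin the rate at e and shrink i in small multiplicative steps (both assumptions on my part, the exact values aren't fixed), the loop looks roughly like this:

```python
import math

# Sketch of the hypercynical estimator described above.
# Assumptions: the rate is taken as e, and i is reduced by a
# small multiplicative step on each pass.
def hypercynical_estimate(start, rate=math.e, step=0.999):
    i = start
    # Reduce i until the output i * (rate * pi) falls back to the start value.
    while i * (rate * math.pi) > start:
        i *= step
    # The final value of i, not the output itself, is the new estimate.
    return i

# Example: 44 claimed "vulnerable seats" comes out at roughly 5.
print(round(hypercynical_estimate(44)))
```

Since the stopping condition is just i * (rate * pi) = start, the loop is in effect dividing the start value by rate * pi (about 8.5 with the rate at e).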
It successfully estimated 40^2 breeding pairs after the Toba eruption bottleneck event, from an initial value of 10k.
It also correctly estimated fatality numbers on the Russian side when we were being lied to with order-of-magnitude exaggerations.
It has performed accurately on a number of other predictions too, like estimating the real results actually available on a Google search vs. the amount *claimed* to be returned.
Cheney predicted in 2022 that there were 44 vulnerable seats. Going by the hypercynical estimator, that means we can expect the Republicans to pick up maybe 5.
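(If we take the rate as roughly e, 44 / (e * pi) ≈ 5.2, which lines up with that figure.)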
It's still just a hammer, so if you misuse it, it'll give obviously wrong results, but I find it interesting that the base rate is close to e, the base of the natural logarithm.
And I wasn't even working with logarithms when I stumbled on this particular model.
The idea was: "when we lie with numbers and wildly exaggerate, *what is the natural rate at which those numbers tend to grow before they become unbelievable at anything more than a glance?*"
Intuitively, you'd think there would be no way of answering this, but it turns out the behavior pattern here follows a highly predictable rule.
We can predict not just that something is a big lie, but also, as a general *tendency*, *by how much* it is incorrect.
For me at least this conclusion was completely unexpected.
In other words, if we can say with some intuition and some accuracy that something is a major error (even if we don't know by how much), the *human tendency* tends to follow a rule (a power rule, is that what it's called?) where every big lie or error contains an internal number that reflects the growth rate the actual value *would* have to have in order to grow into the exaggerated value.
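In symbols: if C is the claimed value and T the actual one, that internal number is the ratio C/T, and the model's bet is that it clusters around rate * pi, roughly 8.5 when the rate is taken to be close to e.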
This, combined with my postcard model (because it'll fit on a postcard) of leadership psychology vs. political sentiment vs. national agenda/priority, has allowed me to predict numerous outcomes with better-than-expert accuracy, with very little domain-specific knowledge or insight.
Basically, we don't have a lot of decision-making models that are good fits for the correct macro-variables. Most existing models are too domain-specific, too reliant on experts, or only good for either big generalizations or short-term, small-scale estimates of what will happen, not both, and for that matter, rarely either.