Some software based on what he developed, if I get it correctly here... 32/n
Compensation by % is so rare these days. Raphaël asks the other attendees about their experiences. Someone says customers want to hear "Santa Claus" stories and Sharpe ratios. More to do with human psychology than with technical analysis... We move on. 33/n
Simple question: if you get a PCR test that turns positive, what is the probability you actually have COVID? 34/n
PCR has 1% false negatives and 3% false positives. This is a good test! But the number of people infected is not "symmetric". 35/n
Similar to HIV testing by the way. 36/n
As the bulk of the population is negative, the total number of false positives ends up roughly equal to the number of true positives, meaning that a positive result effectively translates into a 50% chance of infection. 37/n
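The arithmetic behind that 50% can be sketched with Bayes' theorem. A minimal sketch; the 3% prevalence is my assumption, chosen so that false positives roughly match true positives as described above:

```python
# Base-rate calculation for the PCR example. The prevalence figure is
# an illustrative assumption, not from the talk.
sensitivity = 0.99   # 1% false negatives
specificity = 0.97   # 3% false positives
prevalence = 0.03    # assumed fraction of the population infected

true_pos = sensitivity * prevalence            # infected AND positive
false_pos = (1 - specificity) * (1 - prevalence)  # healthy AND positive

# Bayes: P(infected | positive test)
posterior = true_pos / (true_pos + false_pos)
print(f"P(infected | positive) = {posterior:.2f}")  # ~0.51
```

With a lower prevalence the posterior drops further, which is the whole point: the test's quality alone doesn't tell you what a positive result means.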
Let us assume now two categories. Imagine for instance A is obese people, B not obese people, or something similar. 38/n
Imagine A is 30% and B 70%. We select 10 people. A split of 3 obese and 7 not is the most likely outcome. And the probability falls very quickly away from that! 39/n
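This is just the binomial distribution; a quick sketch of the numbers behind the claim:

```python
from math import comb

# Binomial pmf for the example above: p = 0.3 (obese), n = 10 people.
p, n = 0.3, 10
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

most_likely = max(range(n + 1), key=lambda k: pmf[k])
print(most_likely)        # 3 — the mode of the distribution
print(round(pmf[3], 3))   # ~0.267
print(round(pmf[6], 3))   # ~0.037 — already far less likely
```

Doubling the count from 3 to 6 obese people cuts the probability by a factor of about seven, which is the "falls very quickly" in the tweet above.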
The raw column is plotted in orange, the "Bayesian" correction in blue. This is the basis of Hill or maximum likelihood estimation: since we don't know the probability distribution, we just estimate the likelihoods... 40/n
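A minimal maximum-likelihood sketch of what that means, under my own toy setup (not from the talk): we observed 3 "successes" out of 10 draws and scan candidate values of p for the one that makes this observation most likely.

```python
from math import comb

# Grid-search MLE for a binomial parameter. n, k, and the grid
# resolution are illustrative choices.
n, k = 10, 3
grid = [i / 1000 for i in range(1, 1000)]

def likelihood(p):
    # Probability of observing exactly k successes in n draws
    return comb(n, k) * p**k * (1 - p)**(n - k)

p_hat = max(grid, key=likelihood)
print(p_hat)  # 0.3 — the MLE coincides with the observed frequency k/n
```

No assumption about a prior here; we simply pick the parameter that best explains the data, which is exactly the move the tweet describes.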
Dave shared this link he thinks it is brilliant. 41/n
It is much better to be explicit in our hypotheses. Sometimes we decide that a distribution is Paretian, and that is a huge assumption (it says we are in that distribution class). Raphaël mentions this lecture of his, which I might watch very soon... 42/n
The right way to do things is to be truthful about one's priors and where the blind spots are, and to admit that if we don't know the alpha (thinking of an epidemic), we cannot project the number of victims, given the uncertainty. 43/n
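The role of alpha can be illustrated with a small simulation, a sketch under my own assumptions (the alphas and sample size are illustrative): for a Pareto tail with alpha <= 1 the theoretical mean is infinite, so any projection based on a sample mean is meaningless.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def pareto_sample(alpha, n):
    # Inverse-transform sampling of a Pareto(alpha) with x_min = 1:
    # X = U**(-1/alpha) for U uniform on (0, 1)
    return [random.random() ** (-1.0 / alpha) for _ in range(n)]

means = {}
for alpha in (3.0, 1.5, 0.8):
    xs = pareto_sample(alpha, 100_000)
    means[alpha] = sum(xs) / len(xs)
    print(alpha, means[alpha])
# As alpha drops toward 1, the sample mean is dominated by a few
# extreme draws and stops converging; below 1 it diverges.
```

For alpha = 3 the sample mean sits near the theoretical alpha/(alpha-1) = 1.5; for alpha = 0.8 it is essentially a random number driven by the largest draw, which is why not knowing alpha makes projection impossible.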
Attendees map, me highlighted. Sorry to the quants following this that my attention is now being eaten by these things. 44/n
What does Bayes tell us? Given the data, the likelihood of the regime we are in. Raphaël opens a round of questions, not many. Is Nassim's problem with Bayes a general one? Raphaël: I don't think he has a problem with it... @Heinonmatti says @nntaleb said in... 45/n
a debate "F* Bayes!"... Raphaël: "Go figure what he wanted to say"... "He is like that, you all know him". Someone asks if it is correct to say that Bayes is only useful when we don't have enough data. Raphaël disagrees. We cannot, in practice, get rid of Bayes. 46/n
This is a statistical version of the Gödel problem: even an infinite amount of data will not allow us to get rid of Bayes. Utility often has a huge sensitivity to the tail of the distribution, and it is a flaw of the whole 20th century not to have looked at this sensitivity. 47/n
So most of those decision-making models were wrong. Raphaël talks about 2011: it was a period of high turbulence, and many of his friends had to close their funds; those who survived got to swing again, since one needs money to rebound... 48/n
Then @Heinonmatti asks: What do you think of the work in complex systems on early-warning signals (like increased intercorrelation)? Raphaël says he does not know it in depth, but he has talked with regulators about capital requirements for banks and stress testing. 49/n
Raphaël says his model helps identify risk factors. Then something like: bank CEOs are responsible for their "boat", but the responsibility to detect early signals of storms is not theirs, since they are inside the "boats"; it lies rather with the regulators. 50/n
If all "boats" are heading to China, then capital requirements should increase. Thomas Barrau asks if that could create self-fulfilling prophecies. On the contrary, says Raphaël, this is about easing the risks. Raphaël asks @Heinonmatti about his conference tomorrow. 51/51