TRC #354: Caitlyn Jenner vs Noah Galloway + Bayesian Statistics 101 with Alex Demarsh + Biodynamic Farming

Welcome TRC Family to another jam-packed episode! Adam kicks off the show this week by looking into the brouhaha over whether Caitlyn Jenner really beat out Noah Galloway to win an ESPN Arthur Ashe Courage Award. Our guest panelist, statistician Alex Demarsh, introduces us to Bayesian Statistics, demonstrating that some of us would have enjoyed stats way more in high school if he had been at the chalkboard. Finally, Cristina unearths the dung in Biodynamic Farming. Special shout out to TRC’er David for sending in such a great parody suggestion + lyrics for “How Deep Is Your Woo” that even Baritone Pat couldn’t resist!

Download direct: mp3 file

If you like the show, please leave us a review on iTunes!

SHOW NOTES

Caitlyn Jenner vs Noah Galloway

Caitlyn Jenner Gets Arthur Ashe Courage Award, Noah Galloway Is Runner Up? – Snopes

Noah Galloway-Caitlyn Jenner Courage Award Rumor: Army Veteran Was Not The Runner Up – International Business Times

Dear Liberal Media, I Didn’t Get Duped by the Noah Galloway/Caitlyn Jenner ESPY Controversy – Bristol Palin

Biodynamic Farming

The Guardian: Nine gardening myths debunked

Quackometer: Biodynamic farming a rather magnificent cow dung ice cream cone

Quackometer: Countryfile Interview Audio

Garden Professor Blog

Wiki: Biodynamic agriculture

Biodynamics Preparation 500

Seattle Times: Biodynamic Farming: Sounds Weird But Some Believe In It

This entry was posted in The Reality Check Episodes. Bookmark the permalink.

One Response to TRC #354: Caitlyn Jenner vs Noah Galloway + Bayesian Statistics 101 with Alex Demarsh + Biodynamic Farming

  1. Ian says:

    Good talk about the statistics stuff, but the example can be made easier to explain – just say there’s a set rate that the test is “wrong”, the same in both cases (false positives, and false negatives). In that case it would go something like this:

    You’ve taken a test for a disease which is known to affect 1 in every 100 people. This test is 95% accurate – the chance of it making a mistake if used on any given person is 5%. Sounds pretty good, the usual p-values we keep hearing about are around that level, aren’t they? But the result you’ve gotten is positive. What’s the chance that you have the disease?

    The instinctual answer is pretty high. The cagey answer is usually still somewhere a bit above 50%. But if we take the prior probability into account, we can see that we’re either one of the people who are really sick and got a true positive (1% × 95% = 0.95%) or one of the people who aren’t sick but for whom the test was wrong (99% × 5% = 4.95%). The latter group is so much bigger than the former, so odds are very good that you’re one of the latter. Indeed, the true positives make up only about 1 in 6 of everyone who would get this result, so that’s your actual probability of having the disease.

    So yeah, this is a very similar thing, but I feel that having one rate for just “wrongness” would make for an example that works better when explaining it on the air. Fewer numbers are easier to remember and work with on the fly without things turning confusing.
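The arithmetic in the comment above can be sketched in a few lines of Python. This is just an illustration of the commenter's numbers (1% prevalence, a single 5% "wrongness" rate for both false positives and false negatives), not anything from the episode itself:

```python
# Bayes' theorem for the disease-test example:
# 1 in 100 people have the disease; the test is wrong 5% of the time,
# with the same error rate for false positives and false negatives.
prevalence = 0.01
accuracy = 0.95

# P(positive and sick): the true positives (~0.95% of everyone tested)
true_pos = prevalence * accuracy
# P(positive and healthy): the false positives (~4.95% of everyone tested)
false_pos = (1 - prevalence) * (1 - accuracy)

# Posterior: chance you actually have the disease given a positive result
p_sick_given_pos = true_pos / (true_pos + false_pos)
print(round(p_sick_given_pos, 3))  # prints 0.161 -- about 1 in 6
```

The counterintuitive result falls straight out of the two group sizes: the false-positive group (about 5% of the population) dwarfs the true-positive group (about 1%), so most positive results come from healthy people.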

Leave a Reply

Your email address will not be published. Required fields are marked *