Gambler's fallacy - Biases & Heuristics | The Decision Lab


Spotting Cognitive Biases

There are tons of cognitive biases around us every day: in media, papers, blogs, and even in your own thoughts. Spotting them and calling them out is a great way to move society forward and make better decisions.

[Jewels] Farming question

Hello Hunters,
First of all, I want to say this is probably the gambler's fallacy at work.
But hear me out.
Has anyone noticed that you get more jewels related to the weapon you are using?
For instance, when I play bow, I feel like my chances of getting Mighty Bow are higher.
Same thing for CB: I get more Ironwalls.
Does anyone else feel this is true, or is my brain just desperately looking for patterns?
Thanks for your time.
submitted by C4mill3 to MonsterHunterWorld [link] [comments]

Is the Gambler's fallacy a cognitive bias related to information processing?

I can't actually find it in the curriculum, so I just wanted to clarify.
submitted by flofficial to CFA [link] [comments]

Cognitive Biases in Hearthstone - Gambler's Fallacy (#1)

Hello /CompetitiveHS!
Since I had some extra time, I've decided to start a new series. Or, at least, hopefully a series, depending on the response to this first piece.
As the title suggests, the series will tackle cognitive biases and how they affect Hearthstone players. I chose probably the most obvious one as my first example: the gambler's fallacy. If you want to see more, I'll just paste the introduction below. Or you can get straight to the article here.
The human brain is a wonderful thing. Sometimes, when presented with two choices that are the same, just worded differently, it will assume that one option is better than the other. Other times, when you don't have enough information, it will fill the gaps itself (often incorrectly). It looks for correlations, even if there aren't any. Or it leads you into situations in which something just FEELS right, even though it really isn't.
Believe it or not, cognitive biases aren't something rare. To put it simply, they're common flaws in logic: a person's own, subjective interpretation of reality. Of course, once you really start thinking about them, you realize that they make no sense. But what's important is that they affect everyone, like you and me, in our daily lives.
In this series, I will cover some of the common cognitive biases that can affect Hearthstone players in particular. How do they work? Why do they happen? Are there any situations in which they actually make sense? Identifying them and realizing what they are is a big step in terms of becoming a better player. Plus some of them are just interesting to read about.
In the first part, I will talk about probably the most common fallacy tied to randomness – gambler’s fallacy. When playing Hearthstone, or any other card game, a fair bit of chance is involved, and understanding gambler’s fallacy can make you look very differently at every random roll. I will also give some examples of situations in which gambler’s fallacy… actually works.
Click here to read the full article.
I really hope that you like it. And for those of you wondering, I'll be back with the best decks compilation post-nerfs on... Wednesday, probably. Day 1 stuff.
If you have any questions or suggestions, be sure to leave a comment. And if you want to stay up to date with my articles, you can follow me on Twitter @StonekeepHS. You can also follow @HS Top Decks for the latest news, articles and deck guides!
submitted by stonekeep to CompetitiveHS [link] [comments]


Cognitive Bias: Monte Carlo Fallacy (a.k.a. Gambler's Fallacy) - could teaching kids to think rationally help them not become addicted to gambling?

submitted by MikeCapone to reddit.com [link] [comments]

Emotions are expensive. Think rationally.

This GameStop stock is going to turn into a fountain of tears for many. For people not yet invested: stay the hell out, for goodness' sake.

Now you have probably already responded with reactance bias, doing the exact opposite of what we're told, especially when we perceive threats to personal freedoms, like how you spend your money. Listen up!

You came to WSB because of what is known as the bandwagon effect: the tendency of an individual to acquire a particular style, behaviour or attitude because everyone else is doing it. It is a phenomenon whereby the rate of uptake of beliefs, ideas, fads and trends increases with the proportion of others who have already adopted them, because you saw it on the front page of reddit with a thousand upvotes and whatnot.

Confirmation bias is a thing: it's a tendency to process information by looking for, or interpreting, information that is consistent with one's existing beliefs. That's why you're all on WSB checking each other out.

WSB is an echo chamber, or, to use the better word, it conforms to groupthink: a psychological phenomenon that occurs within a group of people in which the desire for harmony or conformity in the group results in irrational or dysfunctional decision-making.

Some of you will disagree with me at this point due to what is known as in-group preference, a pattern of favoring members of one's in-group over out-group members. So go ahead and agree with each other in the comments section below. Just know that false consensus is also a thing.

There is a reason why they call you retards: it's because you are mentally slow to understand that you had to cash out last week.

Anyway, I guess some of you just suffer from the gambler's fallacy. It's WSB, after all.
submitted by OnigrizaOmorte to wallstreetbets [link] [comments]

Defending shell guy and roasting a dream stan

I'll get straight to the point: I found the 50-minute YouTube video below because a friend linked it to me. The video is about a Dream stan trying to disprove shell guy's video (it came out before Geosquare's video), and I'm going to roast it in this post.
https://youtu.be/5dw4fV6PYxU
6:10 - Ah yes, the classic "Dream has nothing to gain" from getting a world record that people spend hours upon days competing for.
6:20 - The reason Dream cheated is that he didn't think he'd get caught. Yes, he has been caught now, but Dream probably didn't expect that he could be caught in this way; Dream didn't realise, when increasing the drop rates, how statistically impossible his odds would become, and he thought he could just say it was luck if anyone got suspicious. The tweets below show that is what he did:

https://preview.redd.it/ewytd9uhl1e61.png?width=586&format=png&auto=webp&s=b9d60c176d83a2e08d8d1c0bf1bb853b854bd434
14:25 - 17:20 Here she is pretty much just nit-picking the data sheet, pointing out irrelevant mistakes that don't affect the maths at all.
18:50 Congratulations, you wasted your time making a slightly better data sheet that doesn't change the maths at all and still shows evidence that Dream cheated.
19:14 - 19:50 This is just the gambler's fallacy: she points out there are unknown hypothetical trades that could've happened that would balance out Dream's odds. To put it simply: getting good luck does not make bad luck more likely. Here is a definition of the gambler's fallacy from Wikipedia:
"The gambler's fallacy, also known as the Monte Carlo fallacy or the fallacy of the maturity of chances, is the erroneous belief that if a particular event occurs more frequently than normal during the past it is less likely to happen in the future (or vice versa), when it has otherwise been established that the probability of such events does not depend on what has happened in the past."
She is also being quite hypocritical given the point she made just a few minutes earlier about adding irrelevant information. As I just explained, she brings up the number of nethers Dream has been in to point to hypothetical trades that could have happened, but the number of nethers does not affect the odds at all: the events are independent.
22:23 I don't need to say this again, but she applies the gambler's fallacy once more. The number of trials is fixed: he did 263 total trades. Again, the number of nethers is irrelevant; it doesn't change the odds at all.
24:00 Here she argues that, because the random number generator used for piglin barters depends on the world generation, the seed is a factor here. It's hard to explain, but yes, the seed can in theory make good luck more likely; it can also make bad luck more likely. And none of this matters in the first place, because the world seed is completely random and the world generation only affects the random number generator: yes, the numbers generated will be affected, but the probability of a pearl trade isn't. This is like saying it is biased to flip a coin on a mountain because the wind can change the outcome of a coin flip: yes, the outcome changes, but the probability doesn't.
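To illustrate the coin-on-a-mountain point, here is a quick sketch (my own illustration, not taken from the video or from shell guy's analysis) showing that changing the seed changes which barters succeed, not how often they succeed. The 20/423 pearl chance is the 1.16.1 figure cited in the investigation documents quoted elsewhere on this page.

```python
import random

PEARL_CHANCE = 20 / 423  # 1.16.1 piglin barter chance for ender pearls

def pearl_rate(seed, trials=1_000_000):
    """Simulate `trials` barters with a seeded RNG; return the observed pearl rate."""
    rng = random.Random(seed)
    hits = sum(rng.random() < PEARL_CHANCE for _ in range(trials))
    return hits / trials

# Different seeds produce different individual outcomes, but the long-run
# frequency stays around 20/423 ~= 0.047 for every seed.
for seed in (1, 42, 123456789):
    print(seed, round(pearl_rate(seed), 4))
```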
24:30 This is extremely hypocritical: how can you say the events are independent but still say the probability is not constant? I've already talked about this enough, so I'll move on.
24:55 - 28:18 Here she uses an example called "lavender kings" to demonstrate how someone can misuse probability, in which a group of biased reporters try to prove fraud with statistics. At 28:18 she lists all the problems with what they did; however, shell guy did not do any of the things the biased reporters did in this example. Let's go point by point (in the order she mentions them):
  1. This is in the same boat as point 6 about the "small sample size" argument: shell guy included 6 CONSECUTIVE streams that included 263 trades; there is no "other data" here that was intentionally left out like in the example.
  2. She mentions 2 ulterior factors that affect probability, but in piglin trading there are no other factors that affect the probability.
  3. Same boat as point 4 about sampling bias: shell guy included every speedrun from the 6 CONSECUTIVE STREAMS; there is no ignored data like in the example.
  4. Explained in point 3.
  5. Same thing as point 2: there is no external factor in piglin barters similar to the end of the harvest in the example.
  6. Explained in point 1.
  7. This refers to the nether spawns argument, which I have already debunked earlier in this post.
  8. Accusing shell guy of being biased towards Dream; I don't need to explain this.
31:20 She talks about court cases involving probability. I'm no lawyer, but I don't think these examples are comparable to the Dream situation, because she doesn't explain how statistics were misused in those cases or how it's the same here.
32:32 The "it's just luck"/"improbable =/= impossible" argument; she clearly doesn't understand how small a chance 1 in 40 billion is.
32:39 Two examples of "improbable events" that have happened: getting a world seed is something that is guaranteed to happen, and the probability of someone existing is not explained at all, just stated.
33:47 Yes, it's been verified.
34:14 That is because that's what they are, and this video is no different.
39:52 It's just a meme and, not gonna lie, it's quite funny. Chill out.
44:03 No one is saying he "could have" cheated; it's statistically pretty much the only possible conclusion. A 99.99999% chance is nothing to scoff at.
In conclusion: pretty shit video. She doesn't know anything about the maths and repeats the same argument multiple times, just worded differently.
submitted by Le_Corporal to DreamWasTaken2 [link] [comments]

Dream 😴💤 Investigation 👨‍🔬 Results 🔢 (don't know if this has been done before)

Speedrunning is a 👏👌 hobby in ☝ which people 😡👨 compete to complete 🚫 a 😱👌 video ♀ game 🏈🎲 as 🛠🍑 quickly 🆘⏰ as 😎 possible. 🔝 This paper 🤓 concerns speedruns of 💦 Minecraft: ⛏🚨 Java Edition, and, 👎😖 in 🚫 particular, speedruns of 🚨💦 the category known 💫 as 🍑💑 "Any% 🤣 Random 🎲🔀 Seed 👨🌾 Glitchless" (RSG) performed on 🔛 version 👧 1.16. 👸🏻 A 🅱 brief summary of 💦 the 👏🔍 relevant mechanics 💰 and speedrun strategies follows 🚗♀ for 👀 the unfamiliar reader. The 🗣 final 😪 boss 👨👨 of 👼💦 Minecraft ⛏ is located in 👄📥 an alternate dimension known as 🚫 The 🏕💦 End, 😣🔚 which 🙌😩 can be 🐝🐝 accessed using 🤳🏻 End ✋💯 Portals. An end 🖕✋ portal consists of 💰☹ twelve End Portal Frame blocks, 🌆🌇 a ⛄☝ random 🔀🔀 number 🎦 (usually 10-12) 🅾 of 💦 which 👏 must be filled 🔋😩 with 😍👫 an ☺ Eye 🤕 of 💦 Ender in 🚭👇 order to activate the portal. Thus, 👌🕵 the 🌞 runner is ☎👅 required ✅📋 to 👊💦 have ✊💪 up to twelve eyes 👀 of 💦 ender when 💕 they arrive at the 🤡🏻 portal to 💰🗣 be 😡🐝 able 💪💪 to 💦 enter The End and 👏👏 complete the game. 🏈🎮 In 📥 1.16, the ♀ only way 😓 to obtain an 👹 eye 👁 of ender is ❤♂ by crafting it, 🏃 which requires one Ender Pearl and one ☝👏 Blaze Powder. Ender pearls 🍬 can be 🐝🐝 obtained in several ⛓ ways, 💯😉 but 🏼🍑 the fastest is 🔥👌 to 💦👊 use a 🔥🐝 mechanic known 😝💫 as Bartering. In 🛌 a 🅱 barter, the 🔛 player 👨🎮 exchanges a 🏻👬 Gold 🤓🔦 Ingot with a 👍 Piglin (a humanoid creature 🐙 in the Nether dimension) for 👻 a randomly chosen item or 💰😫 group 🅱🅱 of 🚋👏 items. 🛡🛡 For 🏻🙃 each 👏 barter, there is 👅 about 🏫 a 5% chance ♂🙅 (in 📥👏 1.16.1) 🌱🚫 the piglin will 🤔 give 🏿 the 👏🕡 player ♀💰 ender pearls. 🍬🍬 Blaze powder is crafted out of 🏫 Blaze Rods, which are 😡💯 dropped 〽 by Blazes—a hostile mob. Upon being ❌ killed, each blaze has a 50% chance 😨 of 👨🎖 dropping one blaze rod. 🍆🍆 The 👏💛 main focus 😩 during 🚣🏃 the 🦉 beginning ➡😍 of 💦🔥 a 🔫👇 1.16 ⛈ RSG speedrun is to ✌💰 obtain (hopefully) 👏👏 12 🤓😣 eyes 👀 of 🍳👩 ender as 🙇 quickly ⏰ as 🏿 possible, by 🌈😈 bartering with 🆕👉 piglins and 🚕 killing blazes. These two 💏💏 parts of 😩💦 the 🏻🌊 speedrun route are 🔢🙏 the 👏💦 primary concern 😕 of this paper. 2 😂 Motivation ☑ Members 👨👨 of 😏 the 🏻 Minecraft ⛏ speedrunning communitya reviewed six consecutive livestreams of 😣 1.16 📣🔙 RSG speedrun attempts by 😈 Dreamb from 👩🛣 early October 2020. The 👶 data 💰💰 collected show 📺 that 🍆👀 42 of the 💬 262 piglin barters performed 🎭💃 throughout these streams yielded ender pearls, 🍬 and 💰🙋 211 of 🔧 the ⏰👀 305 killed blazes dropped blaze rods. These results 🔢 are 💇 exceptional, as 😠 the 😍🏽 expected proportion of 💦💦 ender pearl barters and 😵 blaze rod 🍆🍆 drops 💦😲 is 🙀 much, 😣👎 much lower. 🤓 An 👹💉 initially compelling counterclaim is 🍆😀 that 💦 top-level 🔼 RSG runners must 🙅 get 🍑 reasonably good 🏼💘 luck in 👌🏼 order to 💦💦 get a ♀👌 new 🎉🤤 personal best 👳👌 time 🕐⏰ in 👻😈 the 👨 first 🥇😂 place, 🏆🤤 so, while ♀👶 it 💯➡ is 🙈 surprising to 💦💀 see 👁 such 💦 an ➕ unlikely event, it 💯🦊 is perhaps 😍🏻 not 😥😅 unexpected. However, 🖐💯 upon 👦 further research, Dream’s 💭 observed drop 🏻👇 rates 💦 are substantially greater 👅👅 than 😽 those 👞👞 of 💦😰 other top-level 🔼 RSG runners—including, Illumina, Benex, Sizzler, and Vadikus. If ☔☔ nothing ♀🔫 else, 👴♀ the 😱🌈 drop 👇⤵ rates from 💋 Dream’s 💭💭 streams are 💓 so exceptional that 💯 they 🏽 ought to be 🐝 analyzed for 💰 the sake of it, 🔥 regardless of 💦 whether ☂☂ or not 🚫🚫 any one 😍 individual believes they 🏾 happened 🤔🤔 legitimately. aThe data 💾 were 😫😫 originally 🔙🔙 reported ♂ by MinecrAvenger and danhem9. 
bhttps://www.twitch.tv/dreamwastaken 3 3 😘🤔 Objectivity 🤖 The 🙀🐺 reader should 👫 note that the 🏻 authors of 💦 this 😎📍 document are solely 👞👞 motivated 🏿 by 😈 the presence 🙇 of exceptional empirical data, 📉💰 and 👏👎 that 😐🎢 any 👏 runner—regardless of 🏻😍 popularity, following, or 👉💰 skill— 😤 observed experiencing such unlikely events would 😏😎 be 😤 held to ✌💦 the 😼👌 same 😯 level 🔻🆙 of scrutiny. The 👊🅿 reader should 💦👍 also 🙇 note that 🤔🚟 the data 📉💰 presented are 🏻😩 extensively corrected for 🤔🎅 the 😫📚 existence 👌 of 🔴 any 🈸 bias. It 🔫😞 would 🍆😎 lack rigor and 💰💦 integrity for 💕😩 the 🗯 conclusions made 👆 in this 😏 report to 💦 substantiate the moderation 💯💯 team’s 🐒 decision if they 👥 were 👧👶 merely based 👌 on 🚟🔛 a surface-level 🍑🌎 analysis of the data. 📊 Indeed, these 🚱 corrections inherently skew the analysis in Dream’s 💭 favor. We 😂👦 aim to calculate not 🚫🖐 the 🍩💋 exact 👌👌 probability that 🏻👉 this 👈 streak 🔥 of 💯 luck 😄 occurred if Dream 💭 is 🔁 innocent, 😇😳 but 🍑 an upper bound 🤐🤐 on 🔛🔛 the probability; that is, 💦 we will 👊👻 arrive at 🚓😁 a value which 😡👏 we are certain is 😳🔮 greater 👅👅 than 👉 the 🅱💦 true probability. The 👦 goal 💦⚽ of 🏧👉 this ☝👈 document is to 💦⏸ present the unbiased, rigorous statistical analysis of the 👏🏼 data, 💰📉 as 😱🖕 well as an 🐎 analysis of the ♀🚨 Minecraft ⛏🚨 source 😔🏞 code, 😲😲 to conclusively determine whether or 💰➕ not 🚫 such an 🍎 event could be 👌😳 observed legitimately. 4 👌 Part 🏻 II 👩👩 Data The 📈💲 raw data 💾 (and its sources) 👉👉 from 🤤 which the 💰 following graphs were 👶 derived 🔜 can 💦💦 be 👏 found in 🍆 Appendix A. 🏿🅰 4 🙇 Piglin Bartering Figure 1: 🏫 Dream’s pearl barters, charted alongside various comparisons. The ✊ 99.9th 🤑 percentile line 🚫💨 represents one-in-a-thousand 💦 luck 😰🍀 (calculated 🚜🚜 using 📤📤 a 🥇➡ normal 🖖 approximation), which 🎓🙌 is already quite 💬🅰 unlikely—if not ♀ necessarily proof 💯📊 of anything. 😫 5 🍆 5 🍆🦐 Blaze Rod Drops 😲😲 Figure 2: ➡ The same for 🍆😱 blaze rod 🍆 drops. 💦 Part 👏 III Analysis 6 ❗ Methodology What ❓ follows 🏃🚗 is a thorough description 👿👿 of 😱 every aspect of our 🚟🅱 investigation in 👮 an 🍑 accessible manner. 🚁🚁 We 💰🔝 will begin 📦 with 👏👏 an 💸😯 introduction to the 👏👏 binomial distribution, and follow 👣 with 👏 adjustments 💰💰 to 🙅💊 account 💳 for 🔜🍆 sampling bias and 💰 other biases lowering the 👶 accuracy of 💦 the binomial distribution. Finally, 🙌 we 😂 will analyze Minecraft’s ⛏ code 😤 to justify the assumptions made 🙌💯 in our 👵💰 statistical model. 👄 To 🏻 strengthen 💪 our analysis to 💦🙏 the 💦👑 skeptical reader, we now 🍑 preemptively address 📪📫 expected 🤰🙄 criticisms and 👅👏 questions. Why 😩❓ are 🏻👀 you not 🏻 analyzing all of 😤💦 Dream’s runs? 💰💰 Doesn’t that introduce sampling bias? Yes. There is clearly 🤓😱 sampling bias in 👏 the 😦👌 data 📉 set, but 🤤🤚 its ⚜ presence 🏽 does 👏 not 🏼 invalidate our analysis. Sampling bias is a common 😍 problem in 👉👏 real-world ✨ statistical analysis, so if 🏿 it were 👩 impossible 🙇 to 🗣 account 💳 for, ⌛ then 🏿 every analysis of empirical data 💰 would be 🐝🍆 biased and 🏳👏 useless. ♀⏳ Consider flipping a 👌 coin 100 💯💯 times 🕛 and getting heads 🙉 50 👌 of 🔌 those times 💦 (a 🤣🤧 mostly 💁🙋 unremarkable result). Within 😱 those 100 coin flips, however, imagine 🤔 that 👏 20 of 👩 the 50 ⏳ heads 💤🙉 occurred back-toback somewhere within 😜 the 🙀➡ population. 👥 Despite 🙅♂ the 🐐☺ proportion overall being 😩 uninteresting, we 6 still would 😜 not expect 🤗🤗 20 📊 consecutive heads anywhere. 
Obviously, 🎳 choosing to 🔍 investigate the 👏 20 🔳🔳 heads 🙈🙊 introduces sampling bias—since we 👩 chose to ♂🙌 look 👀 at 🏠 those 😘 20 🎉🔳 flips because 💁😡 they were lucky, 🍀 we 👉💰 took a 👏 biased sample. However, 🖐🖐 we can instead discuss the probability that ☃👨 20 🆗🆗 or 💁🚫 more back-to-back 😡 heads occur 👻 at 👒😩 any 🚸 point in the 🔚 100 💯 flips. We 🏃👦 can use 👏 that 👆 value 💵💵 to 😠 place an 💰 upper bound 🤐🤐 on the probability that the 👏⛪ sample we chose could 🔒 possibly have 😣✅ been 👦 found 🚫👁 with 😣 a fair 👒 coin, regardless 🤷🤷 of how biased a 💰📖 method was 🅰 used to 🏽😩 choose the 🐆 sample. It’s 💦🏹 also 🙇🅱 worth 💵💸 noting that the choice to only 🕦 consider Dream’s 💭💭 most 💯😃 recent streak of 😳 1.16 🤜 streams is 👏 the 🔝 least arbitrary distinction we could 👌🚫 have 🙌👏 made. 👆 The 🖱🏋 metaphor of 🅱👶 "cherry-picking" 🍒 usually brings to ➡✌ mind 🤔 choosing from ➡ a wide ➡ number of 👩 options, but there ✔👇 were 🍑 at 🍆 most 💯 a 👌👌 small 😂🏼 handful of 🌹 options meaningfully equivalent to 💦 analyzing every 🏦 stream 💦 since 💦 Dream’s return to 💦♀ public 😪⛱ streaming. Note the 💌 importance of 😤💦 the 🔍 restriction that we ♀ must 🙅💰 analyze the 🍆😫 entire 🌲 six streams as a 💡👌 whole; 💰 true cherry-picking would 💭🌨 specifically select ❇ individual 🥖 barters to 💦💦 support 👍♀ a 👏 desired conclusion. How 🐼 do 👀 we 🅰 know this investigation isn’t biased? Concerns about 🤔 the 🏛👑 impartiality of the authors of 💦😊 this 😞 paper 🤓 have 👏😏 been raised 👧👧 in discussion about 🌈 the investigation. We 👨👩 do not think 💭 this 😞 is 🤔👐 a 🍾 significant issue; 🙅🏾 we have 😒 made 👆 an 😤 effort to be as 🍑 fair 👒 to 👸 Dream 💭💭 and ♂💰 thorough as 🖕🏿 possible in our investigation. Regardless, 🤷 it 💡💨 is a 👬 concern 😕😕 worth addressing. This 👈♂ paper 🤓 has 👏 been 😀 written to 💦✌ be 🙋🎮 as accessible as possible 🔝 to an ☺👏 audience without 🚫🚫 in-depth 👏 knowledge 📚😍 of statistics or programming. This 👈🔴 is 🔥🚫 primarily so 💯 that 👎👅 you 😍🙂 do not have 👃☣ to ✌💦 take our word 🙌 for 👻🎁 its accuracy. 👌 By 😈 reading 📖📲 the analysis, you should 👫😑 be able 💪💪 to 💦👉 understand 😷🤔 at 👌😠 least 👬🚫 on 🔥 a basic level why the ♂✅ statistical corrections we 👥❣ made 🤗😇 account 💳 for 💼 all ✊ the 😤 relevant biases. Additionally, ➕✏ as noted in 🙌 Section 3: 😧 Objectivity, 🤖🤖 we 👦💰 aimed not 🚫 to 😋💦 calculate the ➕💦 precise probability of 💦 Dream 😴 experiencing these 😤 events, but an 〰🦅 upper bound 🤐 on 🔛🏿 the 🅿😈 probability. This 🎄👉 makes 🤔 it ☠🤔 much 💘 more ✋ difficult for bias to 💦💦 have any 💦🍵 effect; 📣📣 if 👏 we 😀💰 correct ✅ for 🌍💕 the 🅱 largest amount 📉 of bias in 👏 the 🎁🚗 data that 🥁 there 🏿 could ❌😈 possibly be, 🏻 there is little 🐩👧 risk our 💩👶 analysis will 👊 be skewed due 👅👅 to 🍆📧 our 💰 bias causing us 🤵 to 💵💦 underestimate how 😱🗣 much 🙀💘 we ought to 💦 correct. ✅ We 👵👨 believe 💭😱 that, to ⏸ the extent any 📨 bias exists, these measures should be more 😏😏 than ☄ sufficient to ✌😤 account 💳💳 for it. 💱🤔 Additionally, ➕ note 📋 that we 👉🤠 are not 🚫 the only 😤 people 😣👥 capable of 🎆 analyzing these 🚑 events—if any 🌐 unbiased third 🤔🤔 party 👌 points 😘 out 😧📤 a 👌🥇 flaw in our 😻📸 statistical analysis or 🍆 notes 😚 a 👌 glitch that could potentially cause 🎗 these 💦🥜 events, they would, 💀 of course, be 😝💰 taken 🚀 seriously. 😒 What if 👏 Dream’s 💭 luck was balanced out ⚔ by 👷😗 getting bad 😤😩 luck 🍀🍀 off 😍 stream? This 👁 argument 🙅 is 💦😞 sort of 👍💯 similar to 👍 the 💰👈 gambler’s fallacy. Essentially, what 👏😱 happened to ♂ Dream 💭 at 🍆🍆 any time outside 😏🔥 of the 🎅👏 streams in 📥 question is 👏😩 entirely 💯 irrelevant to the 👀🍆 calculations 📊 we 👧 are 🅱♀ doing. 
😧🏃 Getting bad 😞😩 luck at one 😥♿ point 🈯 in time 🕘 does not 🙅 make good 👌 luck 😟🏼 at a 👌🅰 different 💰💰 point ⬆⬆ in 🖕👏 time more 💯 likely. We 👥👓 do 😫 care about 👏 how 🤷👉 many 🏼💯 times 🤔🤔 he has streamed, since those 👉 are 👄 additional opportunities for 😏 Dream 💤💤 to 💦😣 have 🈶🌈 been noticed getting ➡ extremely 💛 lucky, 🍀 and 💰 if 🚀 he 😡👦 had gotten 🅱💦 similarly lucky 🍀🍀 during one 💯😫 of 💦💦 those 😘😖 streams an 🍑 investigation still 👉 would 💞 have 💴 occurred. However, what luck Dream actually 🤔 got 🏻😩 in 🛌 any 💦🍵 other 👳 instance 💯👉 is 🙀 irrelevant to this 😷 analysis, as it has 🤔👉 absolutely 💯🙅 no bearing on how 💯 likely the luck was 🔙 in this ⁉ instance. 🙄 7 💯💯 7 💯 The Binomial Distribution Note: 📝📋 If the 🌊🚀 reader is equipped with 👯👏 a ✨ basic understanding of 👏💦 statistical analysis and 💰🚄 the 🏼 binomial distribution, they ♂😈 may 🐝 skip to Section 8: 🅱✊ Addressing Bias. Note that 🤔😐 the ⛓🏻 explanations 📝📝 present 🎀🎀 here 💦💪 are sufficient for the 🔭 probability calculations 📊📉 performed throughout the rest 🚔🍑 of the 👽 paper, but 😮☝ are 💯 not 👎 exhaustive. Supplemental reading 📖📃 is 🙄❌ provided via 💰💰 footnotes where relevant. 7.1 💯 The Intuition Informally, if the outcome of a 👌 particular event can be described as "it either happens or 💁 it doesn’t", then 🤔 it 😐 can 🔫😡 be modeled with 😉 the 👏 binomial distributionc . For 👧🍆 example, imagine 💦👑 we 😊 wanted 👩 to 👉💦 compute the odds of 🏻 flipping a 💰 fair 😤👒 coind 10 times and 😘💛 having 😋 it 😏🥇 land ⬇ on heads exactly 😉😉 6 🤔 of those 🐥🐥 times. 🍆 Since a 😎👌 coin either lands on 🚟⬇ heads or it 🥊 doesn’t, we 👍👶 can 💦🗑 use 👏⚒ the 😫 formula for 🐻 the 🚟🎁 binomial distributione to determine the 🏆 chance 😨 of this 👈🍆 occurring. Since we 👨😂 flip the 💰👇 coin 10 🔟 times, 😆⏰ we say 🙊 푛 = 10, 🤑😰 and 💰 since 👨 we 💣😺 want exactly 6 💪 of 👀 those 👞🤔 flips to be 💎👼 heads, 🐵 푘 = 6. ❓ The ☝ chance of 🔴💦 a 👌💰 (fair) 👒 coin landing on 🔥 heads is 50%, so 푝 = 0.5. ➖ If we plug these 👳👈 values 👪💰 into 🤓 the binomial distribution formula, we 👨 get 🔟 P (6; 👧❗ 0.5, 😊 10) 😂 = 10 6 👆 0.5 6 💪🏠 (1 − 0.5) 😊💦 10−6 ≈ 0.205 👌 (1) ➗ To 💦 interpret this ❗ value, if ♂ we flip a ✋ coin 10 🔳💯 times, 🍆🕒 we 👦🌊 can expect 🤗🤗 to 💦 get exactly 😉 6 🤔 heads 🙊💤 about 👂☝ 20.5% 🆗🎉 of 🔝 the 👆🍆 time. 😵💯 To understand why 😳🤔 this 😂 formula yields the 👏 probability of 🍒💦 a binomial distribution, and 🙅 how 👹 to 👮⏬ generalize it, ✔ we 👧👮 break 🙇 down 👇👩 each 👋👋 term. 7.2 💯 Generalizing the 🌈 Binomial Distribution Generically, the probability of exactly 푘 successes with 👏😋 probability 푝 occurring in 👉 푛 trials (in 👏 our 💩 earlier example, 🔥🔥 푘 = 6 🕕 heads with 🎉🍨 probability 푝 = 0.5 💦😏 occurring in 🔝 푛 = 10 💯 flips) is 🔥 given 👈⤴ by ⏩ P (푘; 푝, 푛) = 푛 푘 푝 푘 (1 − 푝) 푛−푘 (2) ♀ We 😃 can deconstruct this 💯📉 formula term-by-term to understand why 😡 this represents the 📲 probability. Basically, 👎👎 this 👈 formula figures out how 💯 many ❔ distinct orderings of 🕯🅱 푘 successes and ➕➕ 푛 − 푘 failures meet 💯 the criteria, and 👏🤔 then 💯 sums the 👨😫 probability of 💯🏻 each orderingf . The 🎁🔝 notation 푛 푘 , read 📖 as 🏿💰 "푛 choose 푘", represents the binomial coefficientg , which is 🍏 the 🍆 number 🔢 of 🔴 ways ➕✔ we ❤🙋 can 🔫😠 observe 푘 successes in 🌤 푛 trials—the number 💦📱 of 🤖 ways, 💫💫 with 💰👏 푛 options for 🎅 trials ⚖⚖ to 💦 be successes, you ☠ could 🤷 "choose" 📥📥 푘 of 🗼👀 them. For example, there are 💨 two ✌ ways to 🐵⏸ observe 푘 = 1 👸 heads 💤 in 푛 = 2 coin flips. The head could occur on 🔛🔛 the first 🔢 flip, or 😩💁 it could 🤔 occur 👻👻 on the second 🕐🕐 flip. Therefore, 👏🎉 2 1 is 👮💦 equivalent to 👌 2. 
🙈 With similar 💯 reasoning, 4 💦✌ 2 is equivalent to 6; ❗ there cThe binomial distribution also ➕ requires the assumption that ❗ we 👦👬 are ♀ observing discrete independent 🙅 random variables. Since 👨💦 piglin bartering and ➕ blaze drops 💦⬇ are 🏄 discrete independent random variables (see 👀👀 Section 9: Code Analysis), we ⚡ can 🔫 safely 🚦 make ✋ this 👈🎅 assumption. There ↗ are ⭐🙏 other 🙅✉ considerations about 💦 stopping 🆘🆘 rules which 😡 will 👙👏 be 📖🐝 addressed in 📥 Section 📦 8: 👊⚡ Addressing Bias. dA 👨😗 "fair coin" is 🔥💦 defined as 🍑🕛 one whose 🌄🌄 probability of 💦💦 landing on 🔛👇 heads 🐵 is 🔥 exactly 😉 the 💊🚟 same as 🏿🍑 its probability of 😤 landing on 🔛🔛 tails. We ♂ are 👶♀ also 👨 not considering the 🏽👏 probability that 🚟 the 👀👨 coin lands on 👋🔛 its 👤 side, which is entirely 👐 negligible for ⚠ this introductory-level explanation to 🗣✋ the 🔥👉 binomial distribution. ehttps://en.wikipedia.org/wiki/Binomial_distribution fFor an explanation of 💦👅 why ❓❓ this 👉 works, 💦 see 👀 https://www.youtube.com/watch?v=QE2uR6Z-NcU. 🐕 ghttps://en.wikipedia.org/wiki/Binomial_coefficient are 👶 6 unique ways 💯 to distribute 2 💦 successes (heads) 💤🙈 across 💰 4 trials ⚖ (coin flips). (These 🤤 are 🔢 1&2, 1&3, ♂ 1&4, 👷 2&3, 2&4, 💦❤ and 🤠 3&4.) As the first 👆 term represents the 💲👏 number 📟 of distinct orderings, the 👉 next 📅☃ two 💘 terms represent ✊ the 💦🏻 probability of ⛄💦 any 💦 one ♿ order. To find 🔎 this 🙋👏 probability, we simply take 🐥 the product of 💦 the probabilities of the 👏 events necessary to produce a 💰👌 given 👤⤴ ordering; that 😐 is, 💦 the product 👟👟 of the 👏👏 probability of 🚨👄 observing 푘 successes and 푛 − 푘 failures. Since 푝 is 🚟👮 the 🚧 probability of 💦 a ♂ given 👤 trial being 😑 successful, 📈📈 and there ✔ are 💓 푘 successful trials, 👨 we 👴👵 can account 💳 for the 💞🏾 successful 📈💪 trials 👨 with 🤝 the term 푝 푘 (푝 multiplied by itself 👈👈 푘 times)h ⌚😩 . Similarly, we 👌👦 account for 👷 the failures by 😈👨 raising 🅰🔝 the 🦏 probability of a 👏 failure to the power of 👏 the number 🎦 of 💦🌈 failures. As the 👑 only two 💏 possibilities 💡 in 👏 a 👏➡ given ⤵ trial are 🍑 success ☺🤑 and ☺ failure, and 👏 the 👩 probabilities must 💰🙋 sum 👀 to 1, the 👥♂ probability of a 👌 failure is ☎😜 (1 ♀ − 푝). It 🕘🍆 follows that, since 👨👨 each 👋👋 trial that is 🈁 not a 🍆🍞 success 💰💰 must be 🥜 a 🅰 failure, the 👏 number 🔢📞 of 🐶👨 failures is (푛 − 푘). Thus, the 💰💦 final term is (1 🕴 − 푝) 푛−푘 . Multiplying all 👌🤷 three terms together yields the 👏 probability of 🔴 a 💐 binomial distribution with 😂 a 💰👀 given 👤⤴ 푘, 푝, and 🎅 푛. 7.3 💯⏰ The Cumulative Distribution Function (CDF) It would be helpful 😲🤔 to 💦💦 have 💰👏 a 💰 way to 👏♂ compute the probability of 💀 observing 푘 or more successes. Intuitively, we can expect 🤗🤗 the 🚟😂 probability of observing exactly 푘 successes in 👮 푛 trials 👨👨 to ✌ be smaller than 😽💰 the 😼🤠 probability we 👥🏻 observe 푘 or 😤💰 more successes in the 💌🆘 same 🖕 푛 trials. Referring back 👌 to the 🌀👦 coin-flipping example, 🔥 if we 💏 wanted to 💦💰 compute the probability of ☹🏻 observing 6 or 🚫💰 more 😥 heads within 🎉 10 trials, 👨 then we ♂♂ can 💦💦 simply add 👈 together 😭🏿 the probabilities of 💦😔 observing exactly 😉😉 6 👆🤘 heads, 🐵 exactly 7 ❗⏰ heads, (...), exactly 😉😉 10 💯 heads, 🐵🐵 given by Õ 10 푘=6 ❗❗ 10 🅾 푘 0.5 😏➖ 푘 (1 🤜 − 0.5) 😲❌ 10−푘 😂 ≈ 0.377 ➖ (3) 😩 Indeed, this 👀👈 agrees with 😗 our intuition; it 😤 makes sense that it ❗☠ is more 👏💦 likely to get 6, 👧👆 7, 8, 9, 🈂 or 💀🍆 10 heads 😂 in 10 🅾 flips, than ♀🔪 it 💨 is 🈁 to ➡ get 🉐 exactly 😉😉 6 👏 heads in 📥👇 10 flips. 
The 👩 chance of receiving 푘 or 🙅 more 👆 successes is 🅱🍆 often 💰💰 referred to 💦💦 as 🛠👦 a ➡👏 푝-value. 💵 More specifically, 푝-values 👪💰 are 🚥 the chance ♂😨 of 🐙😤 observing 푘 or 👱🅱 more successes given the 👏😶 null hypothesis. While ⏳👶 that 👑 nuance is 🏻🗓 irrelevant if you 👈🗣 already 😃 know 🔞 for 🍆🌍 a fact 🏫 the 👨👏 coin is 🔥 fair, 😆✔ it 😩 is important 😍 to 😅💯 keep in 👌👏 mind 😲🤔 in this 😎👇 scenario—our entire goal ⚽😫 is, 😍 essentially, to 💦💦 analyze whether 📊 or not 😥 Dream 💭 is 👮💦 using 🏻🏻 a 💰🙀 biased coin. Armed 💪💪 with a 🏿 basic 🌑🚂 understanding of 💦 the binomial distribution, we 👨❤ will 🅱 now 😱🎅 discuss how 💯 this initial calculation must 😾 be corrected in order 📑 to 💦✌ be applied to 👉💦 Dream’s 💭💭 runs. 💰💰 hFor an 😤🤗 explanation of why this 😰👏 works, 👷 see 👁👁 https://www.youtube.com/watch?v=xSc4oLA9e8o. 😮 9 8 👊👊 Addressing Bias There ✔💦 are ❓ a 🏿 few 😋🔢 assumptions of 🔟 the 🅾 binomial distribution that 😟🔇 are 💰 violated 🍑 in this 👈 sample, some 🈯 of ☹😊 which were 👶 noted in 👏💉 the 😍👏 document Dream 💭💭 published 🤓 on October 27. This 👈 section 📦 accounts for ♿🤙 these 😍😱 violated assumptions, and 👏 proves computations that 😩💰 account 💳 for these 🚑🍆 biases. Note 👋 that 🤔 some 💯 of 🏿😤 these 😤 biases only apply to 😂 pearls, as 😅 blaze rod 🍆🍆 drops 😲💦 were 👶 examined in 😜👌 the same 💩👤 streams as pearls 🍬 due to the 🙆 pearl odds, which are 🏃🔢 independent 🙅 of the 🐐🅱 blaze rod drop ⤵ rate. This 👏 eliminates the 🚮 sampling bias from 💰 the decision to investigate the ✈🍃 pearl odds based 👌🤰 on 👇 the 🔯 fact that they 👩👧 are 💰 particularly lucky. 🍀 8.1 ✊🤔 Accounting for Optional Stopping 🆘🆘 The initial 💰 calculation for 🍆🍆 the 👏🎁 푝-value 💵💵 assumed that barters and rod 🍆 drops 💦 within sequences of 🥗💰 streams are 💩 binomially distributed, which 👏 is 😧💰 not 😅🏼 precisely true 🍆‼ (although 😛😛 likely a 💬👌 very ☣😔 good 👏👀 approximation). For the 🔚 data 💾 to be 👄 binomially distributed, the 🕵 stopping 🆘 rule—the 👨⚖ rule ⚖ by 😈👏 which 🎓👏 you 👨 decide 😱😱 when to 💦 stop 👮👋 collecting data—must 💰 be 👬🐝 independent of 💰 the contents of 🍒💦 the data. 📉📉 For instance, Dream may 📅 be more ⬆ likely to 💦 stop ✋✋ streaming for 🍆👨 the 🏻 day 🕑 after 🕑🅰 getting 😧😚 a particularly good 🏽 run, 🏃🏃 which is 🔁💦 more 🙅 likely to happen ♂♂ on ☝ a 📝 run 🏃 with 😍😍 good barters and ✊👏 blaze rods. Indeed, Dream 💭 did 🍆 stop speedrunning 1.16 🌅 RSG after 😡 achieving a 🐀👏 new 💌 personal best time. This ❓👆 will 🎤🙏 result in the 👏 data 💰💰 being at 👈 least slightly biased towards ⛪ showing better luck 🍀 for 🍅 Dream, 💭💭 and 👅 thus the data 📊📉 is not 🚫♂ perfectly binomial. To 👂⚔ account 💳💳 for 🅱 the 🗣😈 stopping 🆘🆘 rule, we will 🐼 correct ✅ for 👧🍀 the 🅱🌫 worst 👹👹 possible 🔝 (most 💯 biased) stopping 🆘 rule. Imagine 😎 that this investigation was being 🐝👏 conducted by Shifty Sam, a ✝😂 malicious investigator who is trying as hard ⛰ as 🏃 possible 🔝🔝 to 👉 report misleading data 💰💰 that 😐 will 👏 frame Dream. 💤💭 Since 👨 a 🙏👌 lower 😎 푝-value 💵💵 is ℹ more ❌ damning, Shifty Sam computes the cumulative 푝-value 👇💵 after 👀 every 💯 barter or 🕍 after every 🔪ⓜ blaze kill, 💀 and ☑💯 stops ❌ collecting datai once 🍆 he deems the 🛣 푝-value 💵 "low enough" 💦 to make 🖕 the 😹👏 strongest case 😎💯 against 🔫😤 Dream. 💭 This 😞😣 is 💦🔥 the 🍆 worst 👹👏 possible 🔝 stopping 🆘🆘 rule, since Shifty Sam will stop 🏿🤔 collecting data 📊 once 🔂 the 🌊👊 푝-value 💵 is arbitrarily 🤔 low 👇 enough (as 🎣 deemed by 🎨 him to be most 👉 convincing). 
It should be 🏳 abundantly clear 🔎 that this stopping 🆘 rule 👑 is far worse 🤢 than 🤢 whatever 🏿 stopping 🆘 rule ⚖⚖ Dream actually 👉 followed 😣 during 🚣 his 🥐🤔 runs. It may 🌌 not be 🎮🙋 immediately 👏 obvious how we 👩🌊 can ❓😬 calculate a 푝-value 💵 under ⬇ this 👮🏄 stopping 🆘🆘 rule. We 💰 cannot look directly ➡ at the number of 💦💦 success in 📅👌 the 👧➡ data, as that is 🔥 always 👌👉 going ▶🍆 to 😂 be 🐝 exceptional to 👏 this 👈 degree. What ❓👉 we can 💦 consider, ☺ however, 🤔🖐 is how 🤔🅱 quickly Shifty Sam reached 🕶 his 푝-value 💵 cutoff. Intuitively, we might 🅱♀ expect 🤗 Shifty Sam to spend a 💰 long 📏 time ‼⏱ waiting ⌚ for 🍆🔜 the data 💰 to reach his 😤😤 푝-value 💵💵 cutoff. To 🏼 put it another 🤒 way, it would 💯 certainly be 🐝 surprising, regardless of 🏿☹ how ⁉ shifty Sam is, 💰 to hear ✋ that 😐🗳 Dream got 🍸🎁 30 ✈ successful barters in 👇 a 🔫 row 💦 as soon 🔜 as 😱🍑 Shifty Sam started ▶ looking at 👸 the ⚕ data. 💰 Knowing 💭🤔 that 😩 Shifty Sam only 🕦🤠 decided to 😂 show 👨 you 💯 this data 📉 because it 💯💯 supported ✔ his 🅱‼ argument ♂🙅 would 😎 not really 😆🌈 make 🙋😬 that 🅱 any less ➖ surprising (concerns about 🍾 sampling bias aside—those 😤😤 will 👫🎬 be 💦🐝 addressed later). 🕑 Since 👨 the data reaching 👉 a 🍑 푝-value 💵 this 🐸 extreme so soon is 💯 somewhat surprising even 😂 if 💦👏 we know 😭💭 the 😆 data comes from 😂😲 Shifty Sam, we will 😜⚽ look at 🍆 the 🙌 probability that 😩🔕 Shifty Sam stops ✋ collecting data 💰 at 👨🍆 least as 🕘🅰 soon 🔜 as 👦 Dream 💭 stopped. ⚠ In 👏♀ other 🏭 words, if 🤔🅱 푛 is 💦✅ the 👏 number 😧☎ of trials 👨👨 in 👈 Dream’s 💭 data, our corrected 푝-value 💵👇 will 👏👏 be 😤 the probability that 👉 a series of 🔍 trials 👨 will, 🏼 at 🗽 any point 👇👉 on 🔛👇 or prior 🔙🔙 to 👅 the 푛th 🎃🥖 trial, have a binomial CDF 푝-value at 👨❤ least 🚫 as 🍑🏿 small 👌⏬ as the 👏 one 😫 for 👏🎁 Dream’s 💭 data. 🤓📊 iSince Shifty Sam here 😇 is 🅱 supposed 👏 to represent ✊ whatever 👆 caused Dream 💭💭 to choose 📥📥 to 🎀🏻 stop running 🏃🚫 1.16 RSG, suppose Shifty Sam is, say, Dream’s 💭💭 manager, and 💰➕ can 🔫 tell 💬😲 Dream 💭💭 when 🍑 to stop 🛑✋ or 🅱 continue 🔕 streaming. ⛵ 10 😂 Although 😛 that value 💵💵 could 🤔 be computed through 👉⏬ brute force, 🌕🏼 that approach would involve evaluating the 👏 probability and 🎉 푝-values 👪 for well 🤷 over 😈😏 2 😳 305 different 🈯👱 sequences—which is 🗓☝ obviously 🎳🙄 computationally intractable. As 🍑 such, 😆 we 🏃 used 🚟 a 😍😗 method that 🍆 allowed for 🔄 dealing 👦 with multiple sequences at 😂 once. The 🅱👉 exact algorithm is 💦 somewhat involved, so 💯 a 🌍😰 description 👿👿 has been included in 🚪 Appendix B 🅰😇 for interested 👅👅 readers. 8.2 ⚡ Sampling Bias in 🛌🖕 Stream 💦 Selection As 🙇 mentioned previously, we 🔫 chose to analyze Dream’s runs 💰 from 👉 the point ⬆👉 that 😐👏 he ♂ returned to streaming 😭😭 rather than 😻 all of 💰 his ‼💦 runs 💰 due to ♀➡ a 🅰🏻 belief that, 🤔 if 🤔👏 he 👉 cheated, 💏 it was 👨 likely from the 😄😈 point 👉⬆ of his return to 💦💦 streaming rather than 🅰 from 👊 his 💦👿 first ☝🥇 run. 😱 Although 😛 we 😱 cannot 🚫 be 🍆 entirely 👐👐 certain, 🤔🤔 it 👌 is 🚟😨 also ➕😨 likely that 😷😯 MinecrAvenger decided 🤔🤔 to 💦😱 investigate Dream’s 💭💭 streams due 👅👅 to 👏💰 noticing that ☠ they were 😉 unusually lucky. 🍀🍀 This, 🚮 of 🤤💦 course, 😂 means 🙄😏 that the streams investigated are not actually 😥 a 💰 true random 🔀 sample. Even ☎ if 👏 MinecrAvenger somehow 😆😆 chose streams to 👍 investigate 👏👏 at 🍆 complete 🚫 random, 🎲 we 💏🏼 are choosing to 💦🛏 investigate these 🈷🈷 streams due to the fact 📕 that 🍆 they 🙋 are lucky. 🤞 Thus, we 👨 cannot 😡🚫 treat this 🚙⬇ as a ➡🖼 true random 🔀 sample. 
To 🏻🏻 account for 👅 the 👏🌷 maximum possible amount 📉 of 🐣 sampling bias, imagine 🤔🤔 that Shifty Sam inspected every 👏 speedrun stream 💦 done by 🔥😈 Dream 💭 and 🌬 reported 🔫 whatever 💯 sequence of 💦 consecutive streams was the 😷 most ⬆ suspicious.j This would 🌨😵 produce the 👘😫 strongest possible 🔝 bias—or at 🍆🤠 least 😱😴 a 🐝☝ bias much 😂 stronger 💪 than there ✔💍 actually 😤 is—from 😂😍 the ✨ choice 😜 of 🐣 these particular Dream streams. Recall the example 🔥 of 🌹💰 investigating the 🤡🐐 20 🔳 back-to-back 😰 heads within 100 coin flips from 🅱 earlier. Much 😂💘 like 👋 you 👍👦 could calculate the 👏 probability of 🚑 20 consecutive heads 🐵🙉 occurring at any point in the 🅱 100 💯💯 flips, we 👨👦 can 🗑🔫 calculate the probability that ✔💄 Dream 💭💭 experienced 🤳 bartering luck 🍀 this 👏 unlikely in 👏🙌 any 💦 series 💓 of 💦🤓 consecutive streams. This 👈📣 would 👉 account 💳💳 for 💦 the 👏 bias from ➡ Shifty Sam, and 💛😮 thus 🏻 more 🙅➕ than account for ❓ the actual bias under ⬇ consideration. To calculate the 🔑🏥 chance 😱 that ⚪⚠ at 👌💯 least 👏😈 one sequence of 🙌😊 streams is 💯🌈 this lucky, 🍀 we 👧 first calculate the ✈🌜 chance ♂🚫 that 😟☘ no sequence is. 👏💦 Assuming independence, we 👦 can 🗑💪 do 👺 this by 😆 taking the 🅱 chance 🚫 that 🤢 a 🎮🍉 given ⤵👤 sequence isn’t sufficiently lucky (1 − 푝) to 💦🙌 the power 🏼 of the number 😯❤ of sequences, 푚. If 👏 an event 👐👐 occurs more 😩 than 😻 zero 👰👰 times, 🕐😆 then ❓👱 it 😏 must 🙋👏 have 😩👌 occurred at ❤🤣 least 👌 once, so 🆘 we can then 🙄➡ subtract (1 🥈 − 푝) 푚 from 🙃 one 🙏☝ to get 😛🍑 the 🎺🌊 chance that it occurs at least 🚫🚫 once, 🅱 giving 👸👸 1 − (1 − 푝) 푚. The number 🔢❤ of consecutive sequences consisting of at 😔🍆 least two 💏 streams from 😮💰 a 🅰 set of 😰 푛 streams is 👌👏 푛 2 , as you 👆 choose 📥📥 two 🎄 different 💰 streams to 👌👁 be 🤔 the 🔑👨 first ☝ and 🏽 last. Adding in 😏 the 푛 sequences consisting of 💦🐲 only one 🏻 stream, 💦💦 which ♀ were 👶 not 🚫🚫 included because 🏽🤔 the 👶🆘 first and last ⬅ stream 💦💦 are the 👨👏 same 🖕😂 stream, 💦 you get 😷 푛 2 + 푛 which 🏼👌 is equal to 푛(푛+1) ❄👸 2 . We 👦😍 can 🔫❗ now ❔ get an upper bound 푝푛 on ☝ the 🌎 푝-value across 👉👏 푛 streams, using 🤳 the ⬆ 푝-value 💵👇 derived 🔜🔜 from 👉💥 our 🌍 sample. 푝푛 ≤ 1 👊 − (1 💸 − 푝) 푛(푛+1) 👸 2 🏻 (4) 💦💦 At this point, 🈯📌 let 💂 us 👫 go ♂🏾 back and 👏🅱 analyze an 😚 earlier assumption we 👶🤔 made: 🏠💰 that 😤 the ♂🕍 푝-values 🅱💰 between sequences of 💦 streams are 😟 independent 🙅 of one another. 👯 This 👁👈 assumption is 😠 false—however, 👳👳 it 😢⌨ is 🙀👏 not ♂ false ❌ in a ☝ way ☝ that could cause 푝푛 to be ✅ greater than this ⬆ upper bound. 🤐 Consider the 😶👌 exact 👌 way 👟↕ in ⬇ which the sequences of 📆🏿 streams are 🔢 dependent on 🔛 one 😈🏼 another. Since 👨 they 🏼😕 all contain streams from the 👏🐺 same 🖕😯 set 📚➿ (those from Dream), some 🤔👨 of 💦 the 💰 data 💰📉 in 😂 each 👏👋 sequence will 👏💯 be 🅰👨 identical to 💱 that 💝 in other 👪 sequences. This 😋 lowers the chance 🚫 that 🔍 Shifty Sam jWe can 💦 safely 🚦🚦 assume ♀♀ the streams reported 🔫 would 🤕 be 🐝 consecutive—it would be extremely 💯😂 obvious that the 🔪😱 streams were 🙈👶 cherry-picked 🍞🍒 if ☔ Shifty Sam reported 👮🔫 the 👻 luck 🤞🍀 in, say, Dream’s 💤 first, 🥇 seventh, and 💦👌 tenth streams. Non-consecutive streams could 🔮 be 👏 reported 🔫👮 credibly in 👏👏 unusual circumstances, ❌ but that 👇🍆 possibility is essentially negligible. could find misleading data, as 🍑🍑 he 👨👨 has 🛒 less data 💰 to look 👀 through 🗺 for unlikely events. In technical terms, we 🔨♀ can 🔫 say 😵😩 the 🛩 푝-values 💰👪 of 💦🌾 the sequences of 😏 streams are positively dependent upon one ☝💯 another—they 🚪🏊 are 🚟🅱 positively correlated with each 👏👏 other. 
💰 For 🏔 this 🔥 bound 🤐🤐 to fail, 🤧🤧 the 🏼⤴ sequences would ✅ need 👌 to be ❄🏻 negatively dependent. 8.3 Sampling Bias in Runner Selection In addition to 💦💰 these particular streams of 💦💦 Dream’s 💭 being analyzed due 👅👅 to their high 📓 proportion of 😂⛄ pearl barters, Dream was 🏻👏 initially analyzed out 🌌🉐 of 💦💦 all runners due 👅 to 😅 his experiencing unusually good 👼 luck. 😄😟 Much 😩 like ♂😄 we 📌 calculated 🚜🚜 the ⤴ chance of 💦👮 observing data as 👦 unlikely as 🤔💯 the data in ♂👏 question in any 🔥👏 sequence of streams, we ❣ will 👏💰 analyze the 👽 probability of observing data this 👁 unlikely from any 🌐 runner in ⬇ the Minecraft speedrunning community, using 📤 the same 🏆🤷 formula for the 🚑🐆 chance 🙅 of 💦 something 😅😳 occurring at 💯🍆 least ❗ once 💯💯 in a series 💓💓 of 💦👨 trials that 🏾🍜 we 👧👦 used 📅😏 earlier. This 😂 results 🔢 in 🛌 the following correction, where 푝푛 is 👅😝 the 푝-value corrected for 🍆 a community 👩👩 with 🙌👏 푛 runners, and 💰 푝 is 🌈 the 🌈 푝-value 💵💵 for Dream 💤 in 📥👸 particular: 푝푛 ≤ 1 🤜👀 − (1 ⛈ − 푝) 푛 (5) 🍆 Note 🎵 that, 😐 as 🅱🍑 we are 🔄⭐ discussing the 👩🕜 푝-value 💵💵 for 💦 data this ⬆ unlikely occurring to a runner within their 🍆⬅ entire 👏👏 speedrunning career, the 👻 size of their career is not 😖 relevant. Although 😛😛 a 🎁 runner may 🗓 be more ➕ likely to ⏸ experience 😋😋 six exceptionally lucky 🍀🍀 streams if 😂🤔 they 👥 stream 💦 more often, we 👬 already account 💳💳 for 👏 the amount they 👴🤷 stream when calculating 푝—in 🅱 other 👪 words, 🐎 if 🅱 someone 🕵👬 streams more 💯😩 than 💉⬆ Dream, 💭💭 they 😱 would 💀👌 need a 💻↘ luckier sequence of 👀💦 streams to have 👍♂ an 👅🏻 equally low 푝. 8.4 👊 P-hacking 👮 Perhaps 🤔🤔 Shifty Sam examined multiple types 🅱🅱 of 💦 random 🔀 events and 🍒💰 only ☝👃 picked the most 💯👥 significant ones. 💯 For 🤔🍆 instance, 👉 there could have 😤🅰 been 💴💫 analyses of flint drops 💦 or 🎡 iron golem drops, 💦💦 and ➕ only ☝💋 pearls and 😫 rods were reported ♂♂ due to those 👉 being the 👦👏 most 👉 significant—indeed, some 🍌 other 💰 barter items, as 🍑🍑 well 🤷🤕 as eye 👁😉 of 💦🏻 ender breaking rates, 💰 actually 🚟🚟 were 🙈 recorded. To ✌🅱 correct ✅✅ for 😣 this, 👈🏋 we take 🖐 the 👏🙀 probability of finding 🕵 each result at 🍴👉 least 🤸👌 once among 💰 an 🤔 upper bound ℎ on 👍🔛 the 👀💲 different ↔🈯 types 🅱🅱 of events that 🚟😐 could have 💯 been 👦🥜 analyzed. Unfortunately, 😭 the correction used 🚟🙄 for 👨🍆 selection across ➡ individuals and 👴👏 streams will 😳 not 🚫 work here. That 👋 correction requires either independent or 😤💰 positively dependent probabilities; however, 🤔💰 there 👌💾 are negatively dependent probabilities involved here. For 🔙🍆 instance, 🤔 the 👨 more pearl barters you 👆 receive, the 👏 less 😔 opportunities there are 😱💢 to 👮 receive 👉 an obsidian barter: your 👉👈 numbers of 🤑👏 pearl and obsidian barters are 🙏😊 negatively correlated. We can 💦 still correct ✅✅ for this, 👆 but 😠 it 💦 will 📌🤤 require 📜📜 a much 😩 looser upper bound than 😻 the 🤣 ones 💯💯 we 🏃 have 🈶😤 used 🙄🚟 previously. Remember that 🍆 the ⚰ probability of 👏 any one of a number 📱 of mutually exclusive events occurring is ✅👌 the sum of 💦 their 🍮 probabilities—for example, the chance of 💦💦 rolling 😋😋 either a 👌👀 two 🎄✌ or a five on a 🎉 six-sided die 🚦😪 is 🔥😩 1 6 ❗ + 1 😎 6 ❓ = 2 6 👆💪 . However, 🖐 this is 😂 not the 😦🚗 case 👅🤔 for 🎅🎁 non-mutually 🏆 exclusive events. Consider ☺🤔 the 💰 chance 🙅😨 of rolling either 😌 a 💰 number ❤📱 less than three 💁 or 🚻🙂 an 👹💶 even 🕚☎ number. The 🖥😃 chance 🙅😱 of 🔥 rolling a 🀄 number 🔢 less 📉 than 😽 three (1 👸 or ➕👉 2) is 🔥 2 6 ❗👧 and 👈 the chance 🚫 of rolling 💊 an even number (2, 🕔💦 4, or 🔮 6) ❓ is 😩 3 😗 6 💪❓ . 
Adding these 🍆 together 👫😄 would 👌 produce 5 ♥🏼 6 💪 . But this 👉 counts rolling 💊 a two ✌💏 twice, 👀✌ producing a number 🎦😧 higher than ⬆ the 👏🌌 true 💯 probability of 💦🔥 4 🏽💦 6 🕕❗ . This 👈 double-counting problem 🏻 is the 👧 reason ♀♀ why ⁉ adding together 👫👬 fails ⛔ for probabilities that 💖 are ❓💥 not 😖 mutually exclusive, so 🆙 it is 🗓 not a problem 🏻 that 😐 our probabilities are 🅱 not mutually exclusive: 12 the 🚟🌜 sum 👁👁 of 👏 the 🌧⚰ probabilities will 👏 still work 💵 as an ✒ upper bound. Thus, we 😱 have ⚠😎 the 🅱 followingk , where 😾🌎 푝ℎ is 😳 the 👊👏 푝-value corrected for ℎ comparisons, and 🌚 푝 is the 😂📱 initial 푝-value: 💵👇 푝ℎ ≤ 푝ℎ (6) We will ⚽ choose 📥 values 👪 for 🍆👅 these formulas and ➕ compute the 😱💦 final 😪🌠 results 🔢🔢 in 😏🏽 Part 💔 IV. However, 🤔 to ensure 💰💰 these computations are ♀ not 🚫 invalid due 👅👅 to 🔎 unusual behavior of 🗜 Minecraft’s random 🎲 number 💦😧 generation, 👪 we 👧 will 👊🅱 first 🏻 analyze Minecraft’s code. kThis is 💥👏 commonly known 💫 as 🙇 the 👏 Bonferroni correction. 13 😏😏 9 Code 😲😤 Analysis When 👌😂 discussing probabilities this 👁 low, ⬇👇 concerns about edge-case ⚔🗡 behavior in Minecraft’s random number generator ⁉ (RNG) are relevant. We 👨 have 👏♂ been working 👷 under ⬇ the ⛓ assumption that the results of 👪 piglin bartering and blaze drops 💦😲 are independent random 🔀🔀 variables, as 🍑 one would 😎 naively expect 🤗 if 👩😂 Minecraft’s ☄🍑 RNG were 👌👶 truly ⚡ random. 🎲 This would 👪💀 mean 🤔 that 🏻🙅 the 👧👏 variables cannot 👊 affect one another; that is, 💦🈶 past piglin barters and 👏 blaze drops tell 📟🗣 you 😕🏿 precisely nothing about future 🎆 ones. 💚 However, 💰🖐 it 💦 may 🤷📅 seem possible 🔝 that, 😐👉 in some 🐔 edge ⚔ cases, 💼 piglin barters or blaze drops fail 🤧🤧 independence in ways ✔🤔 which ✌👏 increase 💳💳 the 🗣🍫 probability of 🐣💦 observing Dream’s 😴💤 data. 💰💰 Here, 😶 we will 💍⚽ analyze how likely that 😝🍑 is 🈁 by 😈 inspecting Minecraft’s ⛏☄ code. 😲😲 Before 🍑😂 beginning 🆕😍 the 🕜❤ analysis, it 😽😉 is 🔥🍆 worth 💰💵 noting that 🙇🚟 if Minecraft’s RNG were 🍑😫 to ✌ fail ☠ in such a 😬 way ↕😇 that 😩💦 piglin barters and blaze drops could 🤔🤔 not 😠 be said 🗣 to 💦🔢 be 🏻👄 approximately ⭕ independent, 🙅 it 💧😩 would 😏🍆 still 🛑😻 be 📖 astonishingly unlikely for them 👬🎊 to fail in ⤵😜 exactly 😉😉 the ❤👏 way required to produce the observed data. 💾💰 The failure(s) would need to 😥👉 (1) occur repeatedly over the 🚗🍑 course 🏎 of 💦 six separate play sessions for 😏👨 Dream, (2) only 👨 occur 👻 to 👊➡ Dream 💭 out of all 😮😩 runners, (3) 😗 affect both bartering and 👏💦 blaze drops, 😲 and 👏 (4) 🕓 specifically 🔵🔵 bias the results 🔢 towards ⛪ piglins bartering ender pearls 🍬🍬 and 🍆🚄 blazes dropping blaze rods, rather ☑🙇 than 🔪🔺 towards some 💵 other barter item or blazes not ⛔♂ dropping rods. Although 😛 this 👈👌 may 🗓🗓 still be more 🤔💦 likely than the 👏👏 data 💰 occurring without a 🃏👦 flaw in 👉👏 Minecraft’s 🍑 RNG, even before analyzing the 🤥👢 code 😤 it 😩🙅 appears a 🅰 priori extremely unlikely. 9.1 Confirming the 👨💦 Probabilities Though 💥💭 the 🕍👏 probabilities we 😺 have ✊ been 🤤 using 🏻 thus 🕵 far 🌌 for 🕓 piglin and 🤔💦 blaze drop ⚰👇 rates 💯😂 in 👏 Minecraft ⛏🚨 1.16.1 🕴 are 🔢 publicly available 💢💢 information, 📚 it is important 😍 to 💰👌 identify exactly 😉😉 where 🌎🤷 these probabilities come 💦💧 from. 👉 The piglin bartering proportions are 🅱 determined by the piglin_bartering.json file 📂 found 🤔🔎 in the 👉🌎 1.16.1 🕛🏫 jar filel . 
As expected, exactly once 🏳 each 👏👏 barter, the 👨 game 🔥 selects an 😍👹 item from 💥💰 the 😜🙀 following weighted table: 🎲 Item Weight Book 💯 5 😂 Iron Boots 👞 8 ✊ Potion 10 Splash Potion 🍾 10 🔟 Iron Nugget 10 Nether Quartz 20 🔳 Glowstone Dust 20 🎊 Magma Cream 🍨🍨 20 Item Weight Ender Pearl 20 🔳 String 20 📊📊 Fire 🔥 Charge 40 Gravel 40 Leather 🐄🐄 40 Nether Brick 40 Obsidian 40 Crying 😣 Obsidian 40 Soul 😱 Sand 🏝 40 Table 1: 🗿 The 🌌 simplified contents of 🚨 piglin_bartering.json. Here 🍒 an item of 😓 weight 푛 is 👎🅱 푛 times more ✋🍗 likely than 🅰🔪 an 💶👏 item of 🛢💦 weight one. 😉 Additional information 📚 regarding enchantments, stack 📚📚 sizes, and 👏 potion 🍾🍾 effects not 😡❌ shown. 🚫 Since 👨👨 the 🏽👌 weights sum to 💦🌱 423, and 🎅 ender pearls have 🎁😑 a weight of 💦 20, 🔳🎉 the 👏 probability of 💦🔴 an ender pearl barter is ♻🅰 indeed 20 🔳🎊 423 as 🏿 expected (in 1.16.1, 👸 the 👉👏 version 👧 Dream 💭💭 used). 🎶 lTo read these 🔫🚟 files 📁📁 on 🏽 Windows, simply ⤵😡 rename 1.16.1.jar 🌸 to 💰💦 1.16.1.zip 🤜 and ⏱ navigate to data\minecraft\loot_tables. 💰💰 14 👦 Blaze drops are specified by a ❔🍒 file 📁 called blaze.json, an 👴 excerpt of which is 🙌 included below: 😫 1 " function ": " minecraft : set_count ", 2 💕 " count 💯🙌 ": { 3 👏🎆 "min": 🕑 0.0 , 4 "max": 1.0 , 5 " type ⌨✍ ": " minecraft 🍑🚨 : uniform " 6 🤔 } One 👆🤓 can 🔫 see 🙉 that, 👨👨 when the 👏🍤 player’s 🎮💰 weapon 🗡‼ does not ♀❌ have 👏 a looting enchantment, blazes select ❇ between 🏻 dropping either 😤😬 0 or 1 ❌⏰ rods using a uniform distribution. Thus, a rod 🍆🍆 drop 👇⚰ occurs with 🍨👏 probability 0.5 as expected. 🍆 9.2 💦 Setting 🌃 RNG Seeds Failures of 🌈👏 one of Minecraft’s ⛏☄ RNGs to 💦💱 behave randomly are 🏄🈶 not ❌ unheard of—the 💦🤤 most ☺💯 famous 😎😎 examples of 🍳💦 these 🚑❌ are ♂🙏 the RNG manipulation exploits found in 🔙➡ versions prior 🔙🔙 to 👌👌 1.13. 🕴👂 These ☀📀 all work on 👋 the ‼ same principle: 👴🏾 some part of Minecraft’s code resets an RNG being used 📅🆒 by 😈😈 other parts of the 🏠👇 code, causing predictable behavior. 😦
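For anyone who wants to sanity-check the headline figures buried in the paper above (42 pearl barters out of 262 attempts at a 20/423 chance, and 211 blaze rod drops out of 305 kills at a 50% chance), here is a minimal Python sketch of the uncorrected binomial tail probability. It is only the starting point the paper describes; the later corrections for stopping rules, stream and runner selection, and multiple comparisons are not included, and this is not the paper's own code.

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the CDF-style p-value described above."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Figures quoted in the text, before any of the paper's bias corrections.
pearl_p = binom_tail(42, 262, 20 / 423)
rod_p = binom_tail(211, 305, 0.5)
print(f"P(>=42 pearl barters in 262 attempts): {pearl_p:.3e}")
print(f"P(>=211 rod drops in 305 blaze kills): {rod_p:.3e}")
```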
submitted by Alexjandro23 to emojipasta [link] [comments]

PSA: Thinking of buying crystals for a Nat 5?

TL;DR: Nat 5s are expensive. Acquiring a random one through crystal summons can cost as much as an average mortgage payment. Even then, your odds will always approach, but never reach, 100%, no matter how many scrolls you open.
Assuming you buy the Premium Pack (11 scrolls for 750 crystals), it costs about 68 crystals per summon. The 3,000 crystal pack costs US$100. I know there are packs that increase "value" but we are going to ignore them for now as you still typically get the same number of crystals. This nets 30 crystals per USD spent.
https://preview.redd.it/36tbbydxmny51.jpg?width=1280&format=pjpg&auto=webp&s=744d5db75dd4003ed90a18f29a12a6a73fe6e03f
You need 139 summons for a 50/50 shot at getting a Nat 5. That is roughly 9,477 crystals or roughly US$315.90. For barely even odds on pulling a random Nat 5.
Want a more sure bet? 598 summons are required to cross the 95% chance of 1 or more nat 5s. That is 40,733 crystals or roughly US$1,358.
So what about those packs? With monthly packs you typically get about 2-3 times the crystal value, and the standard packs fall somewhere between a straight crystal purchase and the monthly packs. Let's be as generous as possible and assume 3x the value, all on summon-related items. That 95% chance will cost you ~US$453.
The average car payment in the US is $530 new, $381 used. The average mortgage payment is $1,275. So... moral of the story: best case scenario, that Nat 5 will cost you about the same as a monthly car payment; worst case scenario, a mortgage payment. Where do you think your money is better spent?
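The 139- and 598-summon figures above are consistent with a Nat 5 rate of roughly 0.5% per summon; that rate is my assumption, since the post does not state it explicitly. A short sketch of how to reproduce the numbers:

```python
import math

NAT5_RATE = 0.005                # assumed ~0.5% chance per summon (not stated in the post)
CRYSTALS_PER_SUMMON = 750 / 11   # Premium Pack: 11 scrolls for 750 crystals
CRYSTALS_PER_USD = 30            # 3,000 crystals for US$100

def summons_needed(target):
    """Smallest number of summons giving at least `target` chance of one or more Nat 5s."""
    return math.ceil(math.log(1 - target) / math.log(1 - NAT5_RATE))

for target in (0.50, 0.95):
    n = summons_needed(target)
    crystals = n * CRYSTALS_PER_SUMMON
    usd = crystals / CRYSTALS_PER_USD
    print(f"{target:.0%} chance: {n} summons, ~{crystals:,.0f} crystals, ~US${usd:,.2f}")
```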
Edit #1:
Worse yet, gacha gaming is gambling, and gambling can be an addiction. The more exposed to it you are, the easier it is to fall victim to its mechanisms. Each pack you purchase makes the next one all the easier, until you feel trapped and prone to unconscious biases like the gambler's fallacy.
Here is a fairly long but gripping Atlantic article illustrating some of the worst-case gambling outcomes. It focuses more on casino gambling, but the same principles were used to design gacha gaming. https://www.theatlantic.com/magazine/archive/2016/12/losing-it-all/505814/. If you or someone close to you struggles with gambling addiction (whether gacha gaming, state lotteries, or casinos), consider looking into some resources to help you (or them) overcome it. It can be as emotionally and financially damaging as a drug addiction.
Edit #2: grammar....
Edit #3: Changed the TL;DR to better reflect the intent of the post which is not to tell people how to spend their money, but to educate on the true costs to help inform their decision.
submitted by A3thereal to summonerswar [link] [comments]

Having a high IQ doesn't necessarily mean someone will make good decisions

I chose to post here because my post got removed after I posted it on unpopularopinion.
I mean, a low IQ probably would affect your ability to make the right decisions, since it affects our ability to understand and apply our knowledge as well as to think critically about things, but a higher-than-average IQ would not necessarily make you a better leader or decision maker.
I formed this thought after reading The Intelligence Trap by David Robson. Overall, the book talks about why talented, knowledgeable, or intelligent people can fail to make the right decision and often end up being stupid.
There are many reasons why intelligent people can be stupid. But the main reason is that IQ typically doesn't measure many of the skills, temperaments, and cognitive abilities required for good leadership or decision making, including the ability to avoid cognitive biases in thinking. IQ isn't a holistic measure of intelligence or ability.
Also, intelligent people often fall prey to certain traps. According to Robson's book, there are three types of mentalities that "trap" intelligent or educated people.
And one more that I'll add
Also, it's possible that having a mental illness would have an effect on your decision making (although it does not necessarily affect your leadership capability). Examples include a person in the manic phase of bipolar disorder acting impulsively, or a person with schizophrenia doing things based on their delusions and hallucinations.
submitted by euphoniumchen to popularopinion [link] [comments]

Tinfoil Hat: Holding fetch lands disproportionately punishes you by making you flood out

Disclaimer: I know how stats work. Please do not feel the need to explain sample pool sizes, pattern recognition bias, or gambler's fallacy.
I have recently had a slew of games in which I noticed that I held fetch lands (Fabled Passage) in my hand to potentially trigger things like Fatal Push or energy off of an Aetherworks Marvel. In what feels like a HIGHLY disproportionate number of games, and to a HIGHLY disproportionate degree, I flooded out like crazy. In several games I drew between 7 and 10 lands in a row, causing me to straight up lose when I had kept perfectly reasonable hands, most of the time with 3 lands. In a 24-land deck, this just feels flat out wrong.
We all know fetching thins the deck, but the effect is minimal. I'm wondering what kind of shenanigans behind the fishy client revolve around fetch lands.
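To put a rough number on "the effect is minimal", here is a tiny illustration with made-up but typical mid-game numbers (my own sketch, nothing to do with the client): in a 60-card, 24-land deck, after you have seen 10 cards containing 4 lands, cracking one fetch shifts the chance that your next draw is a land by only about a percentage point.

```python
# Illustrative numbers only: 60-card deck, 24 lands, 10 cards seen, 4 of them lands.
DECK, LANDS = 60, 24
seen_cards, seen_lands = 10, 4

library = DECK - seen_cards
lands_left = LANDS - seen_lands

p_no_fetch = lands_left / library              # next draw is a land, fetch uncracked
p_fetched = (lands_left - 1) / (library - 1)   # one land removed by cracking a fetch

print(f"P(land) without fetching: {p_no_fetch:.3f}")   # 0.400
print(f"P(land) after one fetch:  {p_fetched:.3f}")    # 0.388
print(f"Thinning effect:          {p_no_fetch - p_fetched:.3f}")
```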
I feel like, if anything (tinfoil hat), it is either to forcibly teach players to thin the deck by massively punishing them in a handful of games for not doing so, forcing their future performance toward better averages (in terms of the long-term benefit of fetching), OR perhaps to give an unfair advantage to/push landfall, where in some regards you don't mind getting more lands, you don't miss a land drop, and you're holding up the fetch lands for the most impactful effects.
Looking for more non-scientific, anecdotal feedback from other players who may have noticed similar phenomena in their play.
submitted by Yugi_Jace_Ketchum to MagicArena [link] [comments]

A Simulation for Those that Hate Models...like me....

Hello all,

Before I get into the meat of the post, I want to explain why I am writing it. 2020 has been a really rough year, and one of the bright spots was finding Dream and friends' channels. Their content has been wildly entertaining and brought thrills and laughs in a rather tense and uncertain year.

I personally don't believe Dream cheated, but seeing the amount of toxicity from fans and haters has been very saddening to say the least.

Dream posted his response early this morning, and while I found most of the video to be satisfactory, the paper he commissioned has come under some scrutiny.

Now I don't have a Ph.D in astrophysics (though I am friends with someone who does). I merely have a bachelor's degree in computer science. I am not particularly skilled in statistics beyond what I studied as part of my degree program, so I would be prone to errors and misunderstandings if I tried to do too much analysis on either paper.

Instead, I chose to run simulations. I will attach my code and result files in a link below if anyone wants to review/critique them. My code is much simpler than the snippet provided in the mods' paper, as it cares only about producing barter/drop results, not generating a probability.

In all result files, the left column is the number of barter attempts and the right column is the number of blazes killed. A perfect run would appear as 2,6.

All probabilities used in building the simulation come from the mods' paper, as does each statistic regarding Dream's bartering and blaze rod drops.

On average, Dream traded approximately 12 times per run, and killed 9 blazes. I use these averages to qualify simulated runs.

Biases and assumptions in the simulation:
* The averages listed above include abandoned runs, lowering the average number of trades/blazes Dream performed/killed and thereby lowering the number allowed in a qualifying run.
* The simulation assumes no pearls are collected from Endermen or chests; all pearls come from Piglin trading only.
* The simulation assumes the runner needs exactly 12 ender pearls and 6 blaze rods to complete the game.
* The sample size of Dream's runs is, all things considered, rather small: only 22 runs with 262 trades between them.
* The simulation is built in JavaScript, not Java. For the sake of simplicity, my analysis of results assumes both are perfectly unbiased or equally biased in random number generation.
* There are more points of RNG in Minecraft than are simulated here. I chose to focus only on the problematic statistics in a vacuum.

Examining the simulations:

I ran simulations across three different drop rate assumptions.
* Worst case: Piglins can only drop 4 pearls when pearls are rolled on a barter
* Best case: Piglins can only drop 8 pearls when pearls are rolled on a barter
* Random case: Piglins can drop 4-8 pearls per roll of a pearl on the barter, as in the version of the game Dream was running

All three were run for 100 million iterations; runs where both the barter count and the blaze rod drops were at or better than the averages across the runs presented in the mods' paper were output for analysis.
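For anyone who wants to see the shape of such a simulation without opening the files, here is a minimal Python sketch of the same idea (the actual simulation is written in JavaScript and linked at the end of the post). The 20/423 pearl-barter chance, the 4-8 pearls per successful barter, and the 50% blaze rod drop rate are the commonly cited 1.16.1 values and should be treated as assumptions of this sketch, as should the 12-barter/9-blaze qualifying thresholds taken from the averages above.

```python
import random

PEARL_CHANCE = 20 / 423   # assumed pearl-barter weight (commonly cited 1.16.1 value)
ROD_CHANCE = 0.5          # assumed blaze rod drop rate
PEARLS_NEEDED = 12
RODS_NEEDED = 6

def one_run():
    """Count barters until enough pearls, then blazes killed until enough rods."""
    pearls = barters = 0
    while pearls < PEARLS_NEEDED:
        barters += 1
        if random.random() < PEARL_CHANCE:
            pearls += random.randint(4, 8)   # "random case": 4-8 pearls per pearl barter
    rods = blazes = 0
    while rods < RODS_NEEDED:
        blazes += 1
        if random.random() < ROD_CHANCE:
            rods += 1
    return barters, blazes

def simulate(iterations=1_000_000, max_barters=12, max_blazes=9):
    """Fraction of simulated runs at or better than the qualifying thresholds."""
    qualifying = 0
    for _ in range(iterations):
        barters, blazes = one_run()
        if barters <= max_barters and blazes <= max_blazes:
            qualifying += 1
    return qualifying / iterations

print(simulate())
```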

In the worst case simulation, the player needed 3 pearl drops to proceed. In the 100 million iterations, there were 428,430 total runs meeting the criteria, giving an approximate rate of 4/1000, or about 1/250 (0.0043).

In the best case simulation, only 2 pearl trades are needed to proceed. There were more than 21,000 optimal trades. There were approximately 2.7 million total runs meeting the criteria, giving an approximate rate of 27/1000 (0.027).

In the random case simulation, there were more than 19,000 optimal trades. There were approximately 1.8 million total runs meeting the criteria, giving an approximate rate of 18/1000, or roughly 1/50 (0.018).

I ran two additional simulations. The first raises the threshold of acceptable runs to match the worst parameters in the sample at 26 ingots traded and 15 blazes killed.

In this first variation, approximately 22 million runs met the requirements, for a rate of 22/100 (0.22): 22% of runs were as good as or better than the worst run in the sample.

The second variation I iterated only 1 million times, saved all the data to a file, and examined only the barter rates. In this sample of 1 million, 1343 barters were optimal, 71,673 barters were completed in 12 trades or less, and 259,739 barters completed in 26 trades or less.

I chose a smaller iteration set in the second variation to allow the file to be fully loaded in Excel, hence the more exact numbers as opposed to the approximations derived from the larger samples.

The simulations show that the runs themselves are not unreasonable. The issue arises in the frequency with which Dream experienced them.

With a sample size of 22 runs and 262 barters across them, the odds of the observed series may be low, but a sample this small limits the confidence we can have that it is representative of the population (that being Dream's game as a whole). Additionally, the runs fall within a wide range in the simulations: about 2% in the strict case and 22% in the broad case.
While the probability is low that these runs appear in this grouping, the quality of the sample is also somewhat low, because some of the runs terminated before bartering or blaze rod farming was complete.

There is also the risk of falling into the reverse gambler's fallacy, which follows a line of reasoning that because some series of outcomes has occurred and other outcomes have been excluded for a long time, the excluded outcomes are due to appear more frequently to balance out the average. In reality, each occurrence of an independent, random event is just that: independent.

Even so, this distribution in his runs introduces a reasonable doubt that, of itself, is enough to disqualify the run, even if it is legitimate.

So, while I believe the mods were right to disqualify the run based on reasonable doubt, I disagree with the conclusion presented in their paper that the results are so unthinkable that the only explanation is that Dream cheated.

My simulation and results files can be downloaded and viewed here: https://drive.google.com/file/d/1pOCJcsDw0_PlA_gDdQhTkbJn_EVYvfne/view?usp=sharing

If there is anything I overlooked, or any mistake I made in my design or calculations, please let me know so I can correct it.
submitted by TheGreatestFez to DreamWasTaken [link] [comments]

Does the Censure Summoning Method work or not? Theory and Simulator

Hello guys! Yoyotje here.
Lately there has been a lot going around about the new Censure Summoning Method: the idea of using Grey/Daily Tokens to test how lucky your RNG is before you do an Atlantis 10x pull. As a psychology student, I recognized some cognitive thinking errors in this idea. I've been wanting to say this for a while now, but I was always too hesitant about being wrong myself. I also coded an E&P Simulator (code linked below the post) to get myself some data to prove or disprove it. Let's begin at the beginning.
What is the Censure Summoning Method? The Censure Summoning Method has 4 steps you have to go through for you to get 'good pulls'. I did not come up with this method myself, and all credit goes to the original author, whose name I do not know. The original author posted this document, which has more information about the method: https://docs.google.com/document/d/1A9x3CO1UFEXD20KSdg8tpcbFaTFrSX9cEwhvKKjYYqg/edit.
Step 1: Use a Grey/Daily token and get a 3 star hero or a troop. If you get a 1 or 2 star hero you STOP.
Step 2: Use a Grey/Daily token. If you get a 2 or 3 star troop or a 3 star hero, proceed to step 3. If you get a 1 star troop, repeat step 2 (repeat max 2 times). Anything else you STOP.
Step 3: Use a single Atlantis Summon with either 100 Atlantis coins or 350 gems. If you get a season 1 3* hero you STOP! Anything else, you proceed to step 4.
Step 4: Do a 10x Atlantis Summon. Don't exceed two 10x Summons.
The idea behind this method is to recognize when you are on a lucky streak and take advantage of it. On top of that, there are a few more pros to the method: it limits the amount you spend massively, and it helps your mental state by keeping you less frustrated.
Why does this not work?
The cognitive thinking error made here is known as the Gambler's Fallacy: the belief that the outcomes of past independent random events affect the outcomes of present ones.
An example: if I flip a coin 10 times and it lands on heads 10 times in a row, what is the chance it will be tails on the next throw? There are people who would say the chance is big, because we already got 10 heads and it has to even out, right? This is where the error comes in: on the 11th throw it is still a 50/50 whether it's heads or tails, regardless of anything that happened before it.
This is exactly what happens with this method. If I got five 3* troops in a row from the Daily Summon Portal and I go to the Atlantis portal, the chance that I will get a featured hero is still 1.3%. This percentage will not change, no matter what happened before it. That being said, you can't know for sure when you have triggered a 'lucky streak', because there is no way to predict what your next summon will be.
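To illustrate that independence, here is a minimal Python sketch (separate from the simulator linked below). The 1.3% featured-hero rate is the one stated above; the daily-token "good pull" rate is a made-up placeholder used only to build the gate, and the gate itself is a simplified stand-in for steps 1-3 of the method.

```python
import random

FEATURED_RATE = 0.013    # featured Atlantis hero chance per summon, as stated above
DAILY_GOOD_RATE = 0.10   # hypothetical chance a daily token counts as a "good" pull

def ten_pull():
    # number of featured heroes in one 10x Atlantis summon
    return sum(random.random() < FEATURED_RATE for _ in range(10))

gated, ungated = [], []
for _ in range(200_000):
    # gate: require two "good" daily pulls before the 10x, loosely mimicking the method
    gate_passed = all(random.random() < DAILY_GOOD_RATE for _ in range(2))
    featured = ten_pull()
    ungated.append(featured)
    if gate_passed:
        gated.append(featured)

print(f"Avg featured per 10x, all pulls:          {sum(ungated) / len(ungated):.4f}")
print(f"Avg featured per 10x, after 'lucky' gate: {sum(gated) / len(gated):.4f}")
```

Both averages converge on the same value (10 x 1.3%, roughly 0.13 featured heroes per 10x pull), whether or not the "lucky" gate was passed.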
An important note is that all of this assumes the odds stated in the summoning portals are actually true. In a lot of countries it is required by law to state the odds of any loot box that can be bought by minors for real currency. Following this, we can assume the percentages in the summoning portals are correct and there are no other systems in play.
Why do people believe it works?
A lot of people seem to believe this is a working method to get good summons. There are a lot of YouTube videos out there that claim it works and actually show a good 10x pull while using the Censure Summoning Method. The big question that has to be asked here: is it a good 10x pull BECAUSE the method was used, or is it just a coincidence? The thing that's in play here is confirmation bias: people tend to acknowledge any evidence that supports their claim while ignoring any evidence that could disprove it.
I have used the method myself as well, and my pulls were, to say the least, pretty bad. I could still make the same confirmation-bias mistake by only looking at my one 10x pull. We simply do not have enough reliable data to actually test whether it works. That is why I coded an E&P Simulator. In this simulator I did 10,000 summons with the method and 10,000 summons without the method. I checked whether there are any significant differences between the summons, and there is not a single significant difference to be found. I will link the code below so you can check it out. In the code I explain as much as I can. I also explain why I 'only' did 10,000 summons each and what flaws could potentially be in the code. It's still a good pointer in the direction that this method is just a superstition without any scientifically proven backing.
TL;DR
In conclusion: the Censure Summoning Method does not work to get better summons. People tend to make mistakes like the Gambler's Fallacy and confirmation bias. The method does have pros in the sense of spending less money and getting less frustrated. With a sample size of 10,000 summons, there is no significant difference to be found between using the method and not using it.
Link to the code: https://repl.it/@yoyotje/EandP-Summon-Simulator#main.py
submitted by yoyotje to EmpiresAndPuzzles [link] [comments]

Eight biases investors should be wary of


The efficient market hypothesis assumes that all humans are rational and act in their own self-interest. However, humans are often irrational and self-destructive, through no fault of their own other than being human. I've got eight of the most common biases below; being aware of them lets us actively prevent them from harming our financial health.
I have sadly been the sufferer of every single one at some point.

Anchoring bias
Our purchase price often leads us to place an emotional anchor at that exact price. Whilst natural, this anchoring bias can lull many of us into making sub-standard decisions. For example, many of us will know the feeling of waiting for the price to reach our breakeven point before selling, only to see the stock come within a few pence of it and drop back, leaving us wishing we had sold.
Another danger of the anchoring bias is in technical analysis. Whilst technical analysis does help investors spot key levels on the chart, it can lure us into placing too much emphasis on those levels and acting out of step with our investment thesis.
When we buy a stock, we have done so because we believe the current valuation offers upside and that it trades at a discount to its real value – by looking at resistance points on the charts we become tempted to sell and try to buy in cheaper. We may get lucky a few times doing this but often all we do is sabotage our investments. Unless the goal is to trade, technical analysis doesn’t always mix well with fundamental research.
To combat the anchoring bias, we should ignore the price we paid for our shares, and to always focus on the price right now. Would we buy the stock now at its current price if we did not own it? If yes, then great; keep holding. But if the answer is no – you know what to do.

Endowment bias
The endowment bias is very similar to the anchoring bias, in that both focus on the purchase price, but the endowment bias differs in that we believe the shares we own are better by virtue of us owning them! This is, of course, nonsense, but we see it all the time in the housing market. Houses will often be priced well above the street's average sale price despite the house itself being unremarkable, yet the owners are convinced that their house should be priced higher than most of the houses in the street.
Like the stock market, the housing market often finds its equilibrium point, and the overpricing is often ironed out as sellers revise their sale price downwards, but unlike the housing market the endowment bias in the stock market can be potentially costly. Holding onto stocks we should be selling, or holding onto the sector laggard despite the evidence showing there are more attractive stocks in the sector, can damage our portfolios.
The classic example of the endowment effect comes from a study by Kahneman, Knetsch & Thaler, who gave participants a mug and compared their willingness to accept against the willingness to pay of those who had not received the mug. They found that owners of the mug demanded compensation almost double what new buyers were prepared to pay.

Information bias
We are constantly bombarded with new information, be it from the news, social media, bulletin boards, or even the company's own Twitter feed. So, your investment has won a new contract? That's good news, but if it were material and meant anything, it would have been put out in an RNS announcement. The fact that it wasn't means it is just business as usual, and therefore offers us nothing that would either bolster or change our investment thesis.
The problem we have as investors is that there is so much noise. Financial commentators can’t just say that the news doesn’t matter, because then they’d have nothing to talk about! So instead they come up with reasons for why the FTSE has ‘soared’ 2% that day, or why the Dow has ‘plunged’ 1%. Unfortunately, these financial commentators can never tell us before the event, so that we might be able to place a trade and make money, but they certainly don’t have any problems telling us why what happened came to happen.
These shows and columns again seduce investors into making spur-of-the-moment decisions based on emotions, when history has shown us that we are probably best leaving our investments alone, unless we are given a strong reason to sell.
Daily share price movements are of no interest to the long-term investor and as the saying goes – “those who stare at the tape all day will be sure to end up feeding it”.
To avoid the information bias, we should switch off the noise and conduct frequent check-ups on our investments, but only to ensure nothing has gone wrong.

Recency bias
The recency bias is the tendency to overweight a piece of news' importance in the context of the overall story. We easily remember something that has occurred recently, yet struggle to recall, or to place the same importance on, an event that happened a while back.
The problem here is that we may take small and trivial events and place more emphasis on them than on an important one. A good example would be believing that the company winning a contract is a good sign, yet disregarding or even forgetting the profit warning from a few weeks ago.
A good method to beat the recency bias is to collect our thoughts and important events in a single place; that way, when another piece of the investment puzzle is released, we can weigh it up and place it in the context of the overall puzzle.

Loss aversion
Avoiding loss is something we humans are naturally prone to. We are much more likely to cut our winners (in order to massage our ego that we were right) and be risk averse with them, whereas with losers we will be risk seeking and run the loser, or even add to it, perhaps even when the investment case is deteriorating.
Kahneman and Tversky (1979) found that the pain of losing an amount is psychologically far greater, about twice as powerful, as the pleasure of winning the same amount. This explains why the free trial is so effective – it plays on the feeling of loss. It is also why penalties are far more motivating than rewards. Try it next time you need to motivate yourself, and you'll see just how strong the feeling is.
In order to defeat loss aversion, we need to constantly train our brain to do what is unnatural. Going with the herd was once what kept us safe, and trusting our instincts got us out of danger from predators quickly, but in the investing world there is no place for such instincts.
Holding onto losers in the hope that they will eventually come good is psychologically draining. Knowing what can kill our investment thesis and constantly being on guard looking out for that catalyst will save us plenty of both physical and psychological capital.

Restraint bias
This bias is the tendency for people to overestimate their ability to control themselves and resist impulsive investment decisions. Almost all of us know that we shouldn't commit large portions of our capital to a single stock lest we put ourselves at financial risk, but all of us will know the feeling of finding a certain stock we're so sure is a winner that we're tempted to steam in with a large position.
The beauty of small and mid cap stocks is that if management do execute, then there is plenty of upside. Rather than going all in at the start, where the reward is highest (and so is the risk), we should buy in small and add to our position once management begin to prove themselves. There is no rush, and this allows us to follow the story objectively and let the investment case build strength and derisk itself.

Gambler’s fallacy
The gambler's fallacy is very similar to the hot hand fallacy – believing that previous investments have a connection to the next one. This happens when one believes that because they'd had five losers in a row, they're now 'due' a winner. Unfortunately, the reality is that all such events are independent of each other – even when trading in the same stock.
The market doesn’t care how many losers you’ve had, and the market doesn’t take into account a ‘hot hand’ either, which is when one believes that after a string of winning positions or trades that they’re on a ‘streak’.
Be aware that when we are at our most confident, that is when we are at our most vulnerable. The market will be ready to humble us in a big way should our egos get too big. Icarus, after all, flew far too close to the sun.

Sunk cost bias
Sunk cost is the notion that after investing, we must continue to invest simply because we have already invested. To protect against sunk cost, we must ask ourselves if we would buy that same stock at the price it is now, if we didn't hold it.
Sunk cost has been used to explain the endowment effect, but another effect of the sunk cost bias is that it prevents funds from being used elsewhere. Chasing a loser may mean missing out on a big winner!
The opportunity cost can be far in excess of the sunk costs already deployed, and though it can never be calculated precisely, it only takes missing one big winner because we were laden with something we didn't really want to hammer that point home.

Key takeaways

  1. Anchoring bias – ignore the price we paid for our shares
  2. Endowment bias – recognise that we are inherently placing a value higher than the market by the single virtue of us owning the stock
  3. Information bias – be careful of market noise as it can lure us into action
  4. Recency bias – collect information and clearly write down the investment case
  5. Loss aversion – remember that every big loss started as a small loss; have a plan for when we are getting out
  6. Restraint bias – instead of going in large at the start buy small and allow management to prove their worth to you
  7. Gambler’s fallacy – the market doesn’t care about previous actions; previous actions are not linked
  8. Sunk cost bias – we don’t need to be right, and if we wouldn’t buy at the current price if we didn’t own the stock, then we should think about cutting the loss
submitted by shiftingshares to UKInvesting [link] [comments]

The probability of rolling for multiple units, and a look at the midseason roll % changes

In a previous post I used a simulation to determine the probabilities of rolling for single units. Afterwards, DeepDiveLM made me aware of a far smarter approach using Markov chains that they use in their interactive calculator. The Markov chain allows one to directly calculate the probabilities without needing to simulate games, and it's much faster. Ever since, I've been puzzling over how to do something similar with rolling for multiple units. I have a working implementation now and I wanted to share it with everyone.
You can find the exact code I used to generate every graph in this post here. Some of them take a while to run, so it might time out on repl.it, but you can comment out the slow ones (Mech and Shredder) at the bottom. Maybe some day I can turn this into an interactive version for non-coders. Until then, if someone wants to take this and do it themselves, go right ahead.
THE MATH CONCEPT AND CODE, SKIP BELOW IF YOU DON'T CARE
A Markov chain describes the probability of a system changing states, assuming the probability of the future state depends only on the current state. In TFT's case, that perfectly describes the unit shops, where "state" refers to the numbers of each unit owned. The probability of finding a specific unit in a shop slot is calculated from the number of available units and the total pool, combined with the probabilities of finding the unit tier at a given player level. The probability has nothing to do with how many times we've rolled up until now, and all that matters is the current state (don't fall victim to the gambler's fallacy!).
Markov chains can be described mathematically using matrices. In TFT's case, the index (i, j) (the notation for row, column) represents the probability of going from i units to j units. If we are considering just one shop slot, then we only have to consider the probability of staying at the current number of a unit (i, i) and the probability of finding one more unit (i, i + 1). Since we either find the unit or we don't, those two matrix values must sum to 1. We can also fill out the matrix for finding i + 2 units by filling out the matrix indices (i + 1, i + 1) and (i + 1, i + 2), and so on until we have accounted for the number of a unit we are looking for.
That matrix gives us the probability of finding a specific unit in a single shop slot. Thanks to the properties of matrix multiplication, the Nth power of that matrix gives us the probability of finding the unit in N shop slots. For example, if we want to see 3 shops, then we raise our matrix to the power of 15. Afterwards, (i, j) describes the probability of going from i units to j units. And that's about it. Some addition, division, and a matrix power are all that's needed to exactly calculate TFT roll probabilities for one unit, accurately accounting for the existing pool and the lessening probability of finding each successive unit.
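Here is a minimal numpy sketch of that single-unit calculation (this is not my exact calculator code, and the tier odds and pool counts in the example call are placeholder numbers, not real game values):

```python
import numpy as np

def slot_prob(copies_owned, tier_odds, unit_pool, other_tier_copies):
    # chance one shop slot shows the unit, given how many copies we have already taken
    remaining = unit_pool - copies_owned
    return tier_odds * remaining / (other_tier_copies + remaining)

def prob_hit(target, start, n_shops, tier_odds, unit_pool, other_tier_copies):
    size = target + 1                      # states = copies owned: 0..target
    T = np.zeros((size, size))
    for i in range(target):
        p = slot_prob(i, tier_odds, unit_pool, other_tier_copies)
        T[i, i], T[i, i + 1] = 1 - p, p
    T[target, target] = 1.0                # absorbing once we have enough copies
    slots = 5 * n_shops                    # each shop shows 5 slots
    return np.linalg.matrix_power(T, slots)[start, target]

# Placeholder numbers: 35% tier odds, 29 copies of the unit in the pool,
# 300 other same-tier copies available; we own 3 copies and want 9 (a 3-star).
print(prob_hit(target=9, start=3, n_shops=20, tier_odds=0.35,
               unit_pool=29, other_tier_copies=300))
```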
Ok great, but how do we handle rolling for multiple units? As a simple approximation, we could determine the probability of finding each unit separately, then ask what the probability is of finding at least one, or all, of them. While that's a pretty good approximation, the odds of finding all of the units are ever so slightly higher than this would suggest, because buying up one unit can improve the odds of finding the others (if they are the same tier). The key to a more accurate calculation is realizing that (i, j) refers to the abstract concept of going from state i to state j. There's no reason that i and j need to refer to the same unit. They could refer to different combinations of units. So, we expand our matrix to encompass every combination of the units we are looking for. If we want to find 3 of one unit and 3 of another, we now need a 16x16 matrix, where each index represents a different unit combination (0 + 0, 0 + 1, 0 + 2, 0 + 3, 1 + 0, 1 + 1, 1 + 2, 1 + 3, 2 + 0, etc...). The matrix must also be filled out for the odds of going from one combination of unit numbers to +1 of each unit. If we want to find 3 of 3 different units, that's a 64x64 matrix. It does get a little crazy if we aren't careful (I don't have 300 GB of RAM lying around to calculate rolling for nine 3* units), but it works well enough for the interesting TFT cases.
After taking the power of the matrix, the matrix contains the probability of finding every different combination of units. We can query the indices representing all of the combinations of at least 1 unit, or requiring 1 specific unit and finding 2 of 3 others, etc. The probabilities are "disjoint" (ie, mutually exclusive, as we can't have both 1 and 2 of the same unit) and so the final probability is the sum of these cases. (A thanks to zyonsis for helping me think through this part!)
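And here is a sketch of the multi-unit extension, with the joint state as a tuple of copies owned per unit; the flat 2% per-slot probability in the toy example is a placeholder rather than a pool-aware calculation:

```python
import numpy as np
from itertools import product

def multi_unit_matrix(targets, slot_prob):
    # targets: copies wanted per unit, e.g. (3, 3) -> 4 x 4 = 16 joint states
    # slot_prob(state, u): chance one shop slot shows unit u, given the joint state
    states = list(product(*(range(t + 1) for t in targets)))
    idx = {s: k for k, s in enumerate(states)}
    T = np.zeros((len(states), len(states)))
    for s in states:
        stay = 1.0
        for u, owned in enumerate(s):
            if owned < targets[u]:
                p = slot_prob(s, u)
                bumped = list(s); bumped[u] += 1
                T[idx[s], idx[tuple(bumped)]] = p
                stay -= p
        T[idx[s], idx[s]] = stay             # no progress this slot
    return T, idx

# Toy example: two units at a flat 2% per slot each, 30 shops (150 slots) from zero copies
targets = (3, 3)
T, idx = multi_unit_matrix(targets, lambda state, u: 0.02)
probs = np.linalg.matrix_power(T, 5 * 30)[idx[(0, 0)]]
print("P(3 of both):        ", probs[idx[targets]])
print("P(3 of at least one):", sum(probs[idx[s]] for s in idx if max(s) == 3))
```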
END MATH, BEGIN DATA
Some caveats to these graphs:
As before, to give you a sense of the variance between games, most of my graphs show the probability distribution and not the cumulative probability.
CASE 1: HYPERROLLING 1-COST UNITS (e.g. Shredder)
Shredder got nerfed while I was making this, so maybe this is a historical footnote now, but the takeaways are useful for future hyperroll comps.
Here I have calculated the probabilities of finding 3* 1-costs while purchasing four different units (Xayah, Jarvan, Fiora, Caitlyn), contingent on finding 3* Xayah. For good measure, we take into account some parameters for a good scenario that pushes us towards Shredder:
Surprising result: If you are rolling for a comp that requires a specific 3* unit, the odds of finding that unit and more are almost identical (assuming, of course, that you can afford everything). The consistency of finding that one unit is low: finding 4 Xayahs in this case will require >46 gold worth of shops in 75% of your games! But regardless of when you roll, you'll find 3* and 0 to 2 other 3* units with almost equal probabilities. The takeaway here is always roll for multiple 3* units if you can afford it. Leveling to 5 will generally require 10-20 additional shops to hit your 1-cost units (20-40 gold).
CASE 2: THE MECH
Again I've been foiled, since I set this up contingent on Kai'Sa who will be removed from the game in 2 weeks. I hope it is useful nonetheless.
What are the odds of finding a win condition of 3* Kai'Sa + a level 7 or level 8 Mech? Here I assume we are uncontested (no holding hands during quarantine, please). There are 30 other 2-costs, 15 3-costs, and 8 4-costs already taken. We start with 3 Kai'Sa, 3 Annie, 3 Rumble, and 1 Fizz (here we only 2* the fish).
Getting 3* Kai'Sa and level 8 mech is difficult, requiring up to 66 shops in 75% of your games at level 6. What really struck me is that the difference between level 7 and level 6 is pretty small. For the level 7 mech, the difference is only 3-9 shops. Of course, this doesn't consider that staying at 6 will get you the Kai'Sa sooner, but if you need an immediate power spike then leveling to 7 is hardly detrimental for finishing the mech. Rolling at 8 is doomed though.
CASE 3: FINDING A 4-COST CARRY
Suppose you have 1 each of 3 different 4-costs. Can you find a 2*? For simplicity, we're assuming the entire pool is available.
If you roll for 3 different units, you are almost twice as likely to hit at least one than if you tunneled for a specific unit. That means, even at level 6, you can reasonably hit a 2* 4-cost by rolling down 50 gold (assuming you start with 1 of each). At level 7, you will hit one in almost 75% of your games, even taking into account the money you need to buy the units.
CASE 3b: TUNNEL VISION ON A 4-COST CARRY
How often will we miss an alternative 2* 4-cost if we tunnel on the unit we already have 1 of? This graph is different from the previous ones. It shows the probability of finding 3 of another specific 4-cost and not finding 2 of the unit you were rolling for (at level 8). If you roll down 40 gold worth of shops, you have a ~25% chance of missing another specific 2* 4-cost when you started with 0 (and now consider there are 10 4-costs in the midseason update!). For consistent results, you should definitely look out for pivots while you roll because there's a very good chance of another opportunity presenting itself. You will probably pass by 3 of another 4-cost before you find 2 of one you are looking for.
CASE 4: FINISHING A COMP AT LEVEL 8 VS LEVEL 9
Suppose we have 2 of a 4-cost carry. To finish our comp, we want to get it to 2* and find one or two specific legendary units. Should we level to 9 or roll at 8?
For an easier comparison between level 8 and 9, here I'm showing the cumulative probability instead of the distribution (top graphs). Looking for both legendaries more than doubles the number of shops we need to hit. Plotting shop equivalences in terms of probabilities (bottom) gives a better picture of whether we should level or roll. With legendary units in the picture, a level 9 shop is worth 2-2.5 level 8 shops. So, if leveling to 9 leaves you with at least 1/3 of the gold you would spend rolling (don't forget about the cost of buying the units), you will have better odds at level 9.
THE MIDSEASON UPDATE
[edit] COMMENT PER Riot_Mort HIMSELF: Only the level 4 changes are shipping. You can disregard most of this, but I'll leave it up.
The midseason update is adding 1 unit of each tier. To compensate, Riot changed the shop percentages at each level. Let's compare the before and after. (Assuming 0 units removed from the pool, as we just want a sense of the before-and-after.)
1-cost: At levels 4 and 5, the changes will be hardly perceptible for a 2* unit, but hyperrolling for a 3* is significantly worse. This will probably put the final nail in the coffin for hyperrolling unless a new 3* 1-cost emerges as grotesquely overpowered. Even if it does, slow rolling at 5 will almost certainly be better. The roll changes SERIOUSLY HURT your chances of upgrading a 1-cost at level 6 and 7. I'm not sure I like this change, since it makes it hard to pivot into comps that require 1-cost units (looking at you, Cybernetics).
2-cost: The changes do their stated job. The odds are about as close to identical as they can get with the extra units.
3-cost: Hitting 3* is a little easier at level 7 and quite a bit harder at level 8. Maybe we will see the return of slow rolling at 7 for 3* 3-costs. What I really don't like is the distribution changes at level 4. We will see a lot more people highrolling a 2* 3-cost before Krugs, especially on the reroll galaxy. Bear in mind the probability can be misleading because the axes are all normalized. For a less biased view, here are the cumulative probabilities. The probability of going from 1 to 3 of a 3-cost has gone from 5.6% to 9.9% in 10 shops, almost double! Although it still won't be common, 2* 3-costs this early feel really unfair to players on the receiving end. Maybe it's ok, but I think this is the riskiest change in the roll percentages. Maybe Riot decided that is the tradeoff for slashing hyperroll.
4-cost: Finding your specific 2* 4-cost is a little harder at level 7, and a little easier at level 8. Nothing dramatic. It's a nice little boost to fast-8 comps.
5-cost: Getting a 2* legendary is a little harder. Getting a 3* legendary is now even more ridiculous.
That's all for now. Happy rolling!
submitted by StarscapeTFT to CompetitiveTFT [link] [comments]

Why you can be intellectual but still be very wrong on hard facts.

I'd like to show an example of biases I've encountered while having fun drawing TA charts during the lockdown. In some cases, like Indonesia's COVID19 curve for example, Technical Analysis (TA) and a properly done predictive analytics model do converge, as in they gave out the same results, so I chose to display my model in TA style for the lulz.
Cognitive biases that were encountered: confirmation bias, continued influence effect, curse of knowledge, Dunning–Kruger effect, hindsight bias.
Social biases: authority bias, halo effect, asymmetric insight, shared information bias.
My tendency, and this sub's, is to criticize the general Indonesian population for the silly beliefs, behaviours and practices usually found in news articles (curse of knowledge, naive realism). Which, ironically, is also very heavy in here. /indonesia has a youngish, mid-up, tech-minded demographic, with many who study/work in engineering/IT. So redditors have become accustomed to thinking/assuming they are smarter than the average population (that they use more logical reasoning),
but as I will show, this comes with biases as well.
As an example last month this sub voted for Singapore's SUTD Machine Learning SIR model as the COVID19 prediction model they believe the most.
https://www.reddit.com/indonesia/comments/gedg4s/which_covid19_epidemic_model_on_indonesia_do_you/
(Sadly my funny & cool looking charts only came at number 3 on that vote)
That model, even though it has shown zero predictive value, triggered all the cognitive & social biases for the /indonesia demographic:
- came from singapore (ultimate attribution error)
- from a reputable university (Authority bias)
- by a charismatic professor and grad student working with MIT and other top world institutions. (halo effect)
- Use all the right hot trigger words, machine learning, SIR, etc. (confirmation bias, shared information bias)
It made headline news in Indonesia's media. As I've pointed out, it is silliness to try to use a basic deterministic SIR model underneath non-linear kernels in neural networks to predict a non-stationary time series. Their model, which was continuously updated on new infection data, was still predicting April 19 as the peak while using May's data, up to May 15th. Here is a screenshot from before they "internalized" the model.
https://ibb.co/8MnP4xM
Needless to say, this diverged so hard from reality that the model has been "internalized", which is corporate speak for "our model sucks so badly that we are ashamed to publish it".
https://ddi.sutd.edu.sg/
Compare that to my tinfoil prediction, which has been very accurate so far; track record here:
https://www.reddit.com/indonesia/comments/guogj8/covid19_megathread_part_2/ft937wg?utm_source=share&utm_medium=web2x
and don't forget my pseudo-sciency, meme-laden TA chart
https://www.reddit.com/indonesia/comments/gnvtgl/new_indicators_indonesia_covid19_daily_infection/
https://www.reddit.com/indonesia/comments/gtw82n/clarifying_what_i_think_on_the_covid19_epidemic/
However, because I wear a tinfoil hat and draw TA charts + memes, people think I'm an OP warnet or something; even a few redditors think they can do better.
(Dunning–Kruger effect, hard–easy effect)
https://www.reddit.com/indonesia/comments/guogj8/covid19_megathread_part_2/ft9mw14?utm_source=share&utm_medium=web2x
(Baader–Meinhof phenomenon, Gambler's fallacy)
https://www.reddit.com/indonesia/comments/guogj8/covid19_megathread_part_2/ftcpcua?utm_source=share&utm_medium=web2x
And even after being proven wrong, they refuse to acknowledge any merit in my model/prediction so far, because it is so meme-laden and doesn't fit into their world view (Semmelweis reflex).
If I had presented my model in a more sane and structured way, called it machine learning, neural nets, genetic algorithms, etc., it probably would have won over many redditors, but alas, where's the fun in that, and I still have lots of time to fill in this pandemic.
Edit 1: Thanks for the award, kind stranger.
This post was meant to raise awareness of the importance of psychology & social sciences for techie redditors. Don't repeat my experience here:
https://www.reddit.com/indonesia/comments/gze24v/why_you_can_be_intellectual_but_still_be_very/ftfy180?utm_source=share&utm_medium=web2x
submitted by indonesian_activist to indonesia [link] [comments]

Shard/Gear Farming Variable Drop Rates?

A guildmate linked to a video which seemed to be saying that the game will actually reduce your drop rate for shards/gear if you either:
  1. use the “find” functionality to click through to the node that has the gear in question, and/or
  2. Have the character you’re grinding flagged as a favorite, thereby flagging all the gear pieces that character currently needs.
His advice was to always know where the gear you want is found, then navigate to that node from the table interface.
This seems odd to me. It smacks of every gambler’s confirmation bias. Is this known to be true or false?
Also, other claims I’ve heard from here and there that i’m skeptical of:
• does purple-rarity salvage gear have a different drop rate on Hard vs Normal nodes?
• is the drop rate of gear affected if the node has rarer gear above that piece in the list of possible drops?
submitted by grimwalker to SWGalaxyOfHeroes [link] [comments]

What is the name of that bias where people who have already invested money in something feel they need to stick with it even if it is failing?

submitted by Nebuerdex to answers [link] [comments]

The pitiful excuse as to why Muhammad's parents are in hell🤦

Muslim (203) narrated from Anas (may Allaah be pleased with him) that a man said: “O Messenger of Allaah, where is my father?” He said: “In Hell.” When he turned away he called him back and said: “My father and your father are in Hell.”
Al-Nawawi (may Allaah have mercy on him) said:
This shows that whoever dies in a state of kufr will be in Hell. And being related to one who is close to Allaah will not avail him anything. It also shows that whoever died during the fatrah (the interval between the Prophethood of ‘Eesa (peace be upon him) and that of Muhammad (peace and blessings of Allaah be upon him)) and was the follower of the way of the Arabs at that time, which was idol worship, will also be among the people of Hell. There is no excuse for the call not reaching them, because the call of Ibraaheem and other Prophets (peace be upon them) had reached these people.
Look at this pitiful excuse by al-Nawawi. Islam didn't exist prior to Muhammad, and yet they are in hell. How is this not contradicted by the hadith that some people would be spared hell: those who died before puberty, during the interval between Jesus and Muhammad, etc., and those who never heard of Islam today? The excuse given is that there were remnants of Islam at the time of Muhammad's parents and that they should have known idolatry was wrong, and because they didn't, they'll be in hell forever. Bear in mind Ibraheem existed thousands of years ago, and no prophet after him went to the Arabs until Muhammad. The message was corrupt beyond recognition, so how were they to know?
Ah, the good old fitrah response. See, I don't buy that crap, because we humans have innate biases that prevent us from being rational. Social psychologists have identified over a dozen cognitive issues that lead us astray and cause us to make serious errors in judgement. Confirmation bias: Our tendency is to agree with people who agree with us. We love to read or hear things that confirm what we already believe to be true.
In-group bias: A throwback to our tribal roots; we innately trust and believe people in our in-group, while being fearful, suspicious and possibly even disdainful of other people.
Gambler's fallacy: We erroneously put weight on past events to predict future ones.
Status quo bias: We tend to be apprehensive of change, which sometimes causes us to make choices and decisions that will keep things the same or at least be the least disruptive.
Bandwagon effect: We love to go with the flow of the crowd and we feel safety in numbers. A groupthink mentality can cause us to hold ideas that are very popular but possibly not true. It's a part of our desire to fit in and be a part of a group. You may be familiar with the saying, "a billion Chinese can't be wrong," but they absolutely can be.
These are just a few of the most recognized biases. And yet Islam doesn't acknowledge any of these. Only black and white, no shades of grey. It's either you believe or you don't, so it's hell for you. This clearly isn't, if a god did exist, a message he would write, since if he created us he would know how fucked up our brains are. It really is damning information. We know now that we humans are complex creatures with biases, and yet we are punished if we don't make the right 'obvious' decision. It beggars belief.
submitted by FUZ10NZ3ACK to exmuslim [link] [comments]

Seen at a BYU class. Something something irony

Seen at a BYU class. Something something irony submitted by Slashir11 to exmormon [link] [comments]

13 cognitive biases that impede our rational thinking ability

13 cognitive biases that impede our rational thinking ability submitted by hanaan341 to psychology [link] [comments]

gambler's fallacy bias video

Critical Thinking, Cognitive Biases, Fallacy and Learning ...
Critical Thinking Part 5: The Gambler's Fallacy - YouTube
The Gambler's Fallacy: When is a Coin Toss Fair? (3/6)
Gamblers fallacy and restraint bias - YouTube

The Gambler's Fallacy is also known as "The Monte Carlo fallacy", named after a spectacular episode at the principality's Le Grande Casino on the night of August 18, 1913. At the roulette wheel, the colour black came up 26 times in a row - a probability that David Darling has calculated as 1 in 136,823,184 in his 2004 work 'The Universal Book of Mathematics: From Abracadabra to Zeno's Paradoxes'.

The gambler's fallacy is a particular problem in the very professions that specifically require an even, unbiased judgement. One team of researchers recently analysed US judges' decisions.

The gambler's fallacy is based on the false belief that separate, independent events can affect the likelihood of another random event, or that if something happens often, it is less likely that the same will take place in the future. Example of the gambler's fallacy: Edna had rolled a 6 with the dice the last 9 consecutive times.

What exactly is the gambler's fallacy? Researchers Amos Tversky and Daniel Kahneman rationalized the thought processes behind it in their research paper "Judgement under Uncertainty: Heuristics and Biases". They said: "Many decisions are based on beliefs concerning the outcome of an election, the guilt of a defendant, or the future value of a dollar."

In fact, the phenomenon is called the gambler's fallacy. If you toss a coin up five times and it comes down tails five times in a row, you have a feeling that the next coin flip has to come down heads.

What is the gambler's fallacy? In simple terms, it's when a bettor expects a reversal in luck after a prolonged run of one outcome. This means that, after a series of wins, they come to expect a loss (or vice versa). The most straightforward example of the gambler's fallacy can be illustrated with a coin toss.

The gambler's fallacy is a cognitive bias, meaning that it's a systematic pattern of deviation from rationality which occurs due to the way people's cognitive system works. It is primarily attributed to the expectation that even short sequences of outcomes will be highly representative of the process that generated them, and to the view of chance as a fair and self-correcting process.

Such a fallacy is mostly observed in a casino setting, where people gamble based on their perception of chance, luck, and probability - hence the name gambler's fallacy. It arises from the belief that past independent events influence future outcomes, and originates from the erroneous belief that a small sample is an accurate representation of a larger sample or population.

The gambler's fallacy should not be confused with its opposite, the hot hand fallacy. This heuristic bias is the mistaken belief that, for random independent events, the more frequently an outcome has occurred in the recent past, the greater is the likelihood of that outcome in the future. This bias in judgment was named after basketball fans.

The gambler's fallacy describes our belief that the probability of a random event occurring in the future is influenced by previous instances of that type of event. Where this bias occurs, consider the following hypothetical: Jane loves playing Blackjack, and she's pretty good at it.
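As a quick sanity check on that Monte Carlo figure, assuming the single-zero roulette probability of black (18/37) per spin:

```python
# 26 blacks in a row at 18/37 per spin (single-zero wheel assumed)
p = (18 / 37) ** 26
print(f"1 in {1 / p:,.0f}")   # roughly 1 in 137 million, in line with the cited figure
```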


Critical Thinking, Cognitive Biases, Fallacy and Learning ...

Part 5 of the TechNyou critical thinking resource. The resource covers basic logic and faulty arguments, developing student's critical thinking skills. Suitab...
This is the third video in a six-part series on The Gambler's Fallacy. This video, "How Can You Tell Whether a Chance Setup is Unfair?", explains why the answer to this question isn't ...
Critical Thinking Part 5: The Gambler's Fallacy by techNyouvids. 2:58. Critical Thinking Part 6: A Precautionary Tale by techNyouvids. 2:54. This Thing Called Science Part 1: Call me skeptical by ...
