I feel a bit like Borat as I write the title of this post -- reminds me too much of "Cultural Learnings of America for Make Benefit Glorious Nation of Kazakhstan." Except in this case, it's not just cultural learnings, and it's not just Kazakhstan that's benefiting.
The "America" that I dived into this week is called business school orientation at UCLA Anderson. It's been a lot of fun, and I've really enjoyed meeting my classmates and taking part in all of the activities so far.
The experiences have been pretty varied, including classwork and games/experiential activities. I'm looking forward to even more non-traditional learning experiences next week as we play more games and do an obstacle course (yes, of course this is critical for becoming a true professional!).
At the same time, on my own, I've been reading Nassim Taleb's first book, Fooled by Randomness (written before The Black Swan). I've been pleasantly surprised to have already learned something substantial and non-intuitive from my orientation experiences, and it ties in with my independent reading as well.
My main "light bulb moment" learning experience was during the murder mystery. Yes, we had a murder mystery to solve on our second day of business school. It was presented like a normal business school "case" (written by Stanford Graduate School of Business), except instead of a business situation, it presented the facts and testimonies of different suspects of a murder mystery. Our task was to solve it, and we were allowed to work in teams.
My team, like almost all the others, formed by simply combining the people sitting in the same area (who had all been assigned seats randomly anyway). We read the case, discussed it, and picked the wrong answer. Nice way to fail the first assignment, huh?
Except that this failure probably taught me more than if we had succeeded.
What we later learned is that not every person received the same case handout and mystery facts. In fact, different versions with extra information were distributed around the room, so pooling input from people outside our immediate area would have made the crime much easier to solve.
In addition, our team's conversation started with a vote on who each of us thought did it, then a discussion of the merits of each person's case, and finally a team compromise on whom we would convict. We started with the goal of conviction, discussed the evidence most of us already had in common rather than digging deeply into the details, and came out with little more information than we started with. In hindsight, this proved a pretty ineffective way to go.
I learned many things from this exercise. First, studies have shown that most teams form by physical proximity, personal similarity, and other criteria of convenience. This puts together people who already share a lot in common, which makes it easy to work together but lousy for creating new ideas or producing breakthroughs or changes in any one person's opinion.
The best teams are often the most diverse, bringing people together from different backgrounds and ideally dissenting intellectual opinions so as to foster critical, deep analysis rather than simply agreeing or glossing over things that each person assumes all the others know or agree with.
The second thing I learned was that in teams, the most discussed knowledge is common knowledge. People like talking about things they understand and know and compare new data to that. They tend to have a "confirmation bias" in incorporating new data that makes it a lot more difficult to take in a fact that refutes an established theory than one that simply supports what they already know. Unfortunately, this makes it a lot harder to actually be creative and sometimes leads to fatal mistakes, like the NASA Columbia disaster, blamed in part on this type of "groupthink."
It turns out that teams that welcome dissenting and minority opinions, and environments where even minor or non-traditional viewpoints are thoroughly discussed and fleshed out, produce better outcomes. Having someone sincerely play devil's advocate, or actively considering multiple options before deciding on a plan of action, makes a team a lot less likely to ignore important details. For example, in the murder mystery, if someone with extra information about the stolen wallet asked, "What about the wallet?," many teams would respond with "Yeah, so what?," making it awkward to discuss something that is not actually common knowledge but is made to seem like it is. Instead of making assumptions or jumping to conclusions about what's redundant or already known, each team member's thoughts should be fully heard out and considered for what they could change or refute in the team's current thinking.
Also, we would statistically have been better off working in an evidence-based manner rather than a verdict-based one. Instead of starting with a vote and seeing how close we were to our "mission" of reaching consensus, we could have set aside our initial anchoring positions and simply delved into the evidence directly, discussing it as a group. This would have spared anyone from having to walk back an earlier proposal or worry about saving face. The same advice applies well to juries deciding a final verdict based on evidence.
This discussion of being evidence-based and looking for clues that refute existing theories reminded me a lot of my reading in Fooled by Randomness. I'm only halfway through the book, but the first half stresses the difference between pseudo-science (finance, economics, and "theories" that can never be tested because they speak about something fundamentally untestable or about the future) and true science, whose theories can readily be refuted or shown wrong. For example, "All swans are white" can be refuted by the existence of a single black swan, but seeing thousands of white swans does not really help in "proving" the initial statement. If teams can come up with hypotheses together that can be tested against evidence and challenged or shown wrong, then they can make decisions in a manner that's rational and ultimately more effective.
This was exactly the approach shown by the "correct answer" of the case: analyzing each character's evidence in turn and seeing what pieces of evidence could "acquit" a particular character's "guilty" verdict. The character who could not be acquitted by any evidence was the one chosen as the guilty party.
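That acquit-by-evidence procedure is really just elimination by falsification, and it's simple enough to sketch in a few lines of code. The suspects and evidence below are hypothetical placeholders, not the actual details of the case:

```python
# Elimination by falsification: a suspect remains a candidate only if
# no piece of evidence acquits them. All names and facts here are made up.

def find_culprit(suspects, evidence):
    """Return the suspects whom no piece of evidence can acquit."""
    remaining = set(suspects)
    for fact in evidence:
        # Each fact acquits some subset of suspects (an alibi, a timeline
        # conflict, a physical impossibility, etc.).
        remaining -= fact["acquits"]
    return remaining

# Hypothetical example: three suspects, two pieces of pooled evidence.
suspects = {"butler", "gardener", "heiress"}
evidence = [
    {"description": "seen at the station at 9pm", "acquits": {"butler"}},
    {"description": "wallet found far from the scene", "acquits": {"heiress"}},
]

print(find_culprit(suspects, evidence))  # prints {'gardener'}
```

Note that, in the spirit of the black-swan example, each fact is used only to refute a hypothesis, never to "confirm" one; the verdict is whatever survives every attempted refutation.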
I'm looking forward to many more fun and deeply educational experiences in the coming weeks.