Tuesday, October 3, 2023

The Duke Data Manipulation Branch of the Harvard Data Manipulation Affair

Much of the media attention concerning the behavioral-science data manipulation affair has focused on Harvard. But there is a Duke branch as well, which is the focus of a lengthy (very, very lengthy) New Yorker article that recently appeared.

The story is similar: "interesting" results turned an academic into a celebrity, until the Data Colada folks began to take a look. Now there is a university investigation of Dan Ariely at Duke, although so far there are no lawsuits.

Excerpts:

The half-bearded behavioral economist Dan Ariely tends to preface discussions of his work—which has inquired into the mechanisms of pain, manipulation, and lies—with a reminder that he comes by both his eccentric facial hair and his academic interests honestly. He tells a version of the story in the introduction to his breezy first book, “Predictably Irrational,” a patchwork of marketing advice and cerebral self-help. One afternoon in Israel, Ariely—an “18-year-old military trainee,” according to the Times—was nearly incinerated. “An explosion of a large magnesium flare, the kind used to illuminate battlefields at night, left 70 percent of my body covered with third-degree burns,” he writes...

Note: But this origin story is apparently not quite true.

Ariely came to owe his reputation to his work on dishonesty. He offered commentary in documentaries on Elizabeth Holmes and pontificated about Enron. As Remy Levin, an economics professor at the University of Connecticut, told me, “People often go into this field to study their own inner demons. If you feel bad about time management, you study time inconsistency and procrastination. If you’ve had issues with fear or trauma, you study risk-taking.” Pain was an obvious place for Ariely to start. But his burn scars heightened his sensitivity to truthfulness. Shane Frederick, a professor at Yale’s business school, told me, “One of the first things Dan said to me when we met was ‘Would you ever date someone who looked like me?’ And I said, ‘No fucking way,’ which was a really offensive thing to say to someone—but it weirdly seemed to charm Dan.” 

From that moment, Frederick felt, Ariely was staunchly supportive of his career. At the same time, Ariely seemed to struggle with procedural norms, especially when they seemed pointless. Once, during a large conference, John Lynch, one of Ariely’s mentors, was rushed to the hospital. Ariely told me that only family members were allowed to visit. He pretended that his scarring was an allergic reaction and, once he was admitted, spent the night by Lynch’s side. In his telling, the nurse was in on the charade. “We were just going through the motions so that she could let me in,” he told me. But a business-school professor saw it differently. “Dan was seen as a hero because he had this creative solution,” she said. “But the hospital staff, even though they knew this wasn’t a real allergic reaction, weren’t allowed to not admit him. He was just wasting their time because he felt like he shouldn’t have to follow their rules.” ...

At talks, [Ariely] wore rumpled polos and looked as though he’d trimmed his hair with a nail clipper in an airport-lounge rest room. He has said that he worked with multiple governments and Apple. He had ideas for how to negotiate with the Palestinians. When an interviewer asked him to list the famous names in his phone contacts, he affected humility: “Jeff Bezos, the C.E.O. of Amazon—is that good?” He went on: the C.E.O.s of Procter & Gamble and American Express, the founder of Wikipedia. In 2012, he said, he got an e-mail from Prince Andrew, who invited him to the palace for tea. Ariely’s assistant had to send him a jacket and tie via FedEx. He couldn’t bring himself, as an Israeli, to say “Your Royal Highness,” so he addressed the Prince by saying “Hey.” ...

Ariely and [Francesca] Gino [of Harvard] frequently collaborated on dishonesty. In the paper “The Dark Side of Creativity,” they showed that “original thinkers,” who can dream up convincing justifications, tend to lie more easily. For “The Counterfeit Self,” she and Ariely had a group of women wear what they were told were fake Chloé sunglasses—the designer accessories, in an amusing control, were actually real—and then take a test. They found that participants who believed they were wearing counterfeit sunglasses cheated more than twice as much as the control group. In “Sidetracked,” Gino’s first pop-science book, she seems to note that such people were not necessarily corrupt: “Being human makes all of us vulnerable to subtle influences.” ... 

Near the end of Obama’s first term, vast swaths of overly clever behavioral science began to come unstrung. In 2011, the Cornell psychologist Daryl Bem published a journal article that ostensibly proved the existence of clairvoyance. His study participants were able to predict, with reasonable accuracy, which curtain on a computer screen hid an erotic image. The idea seemed parodic, but Bem was serious, and had arrived at his results using methodologies entirely in line with the field’s standard practices. This was troubling. 

The same year, three young behavioral-science professors—Joe Simmons, Leif Nelson, and Uri Simonsohn—published an actual parody: in a paper called “False-Positive Psychology,” they “proved” that listening to the Beatles song “When I’m Sixty-Four” rendered study participants literally a year and a half younger. “It was hard to think of something that was so crazy that no one would believe it, because compared to what was actually being published in our journals nothing was that crazy,” Nelson, who teaches at U.C. Berkeley, said. Researchers could measure dozens of variables and perform reams of analyses, then publish only the correlations that happened to appear “significant.” If you tortured the data long enough, as one grim joke went, it would confess to anything. They called such techniques “p-hacking.” As they later put it, “Everyone knew it was wrong, but they thought it was wrong the way it’s wrong to jaywalk.” In fact, they wrote, “it was wrong the way it’s wrong to rob a bank.”
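
Note: For readers unfamiliar with the mechanics, the p-hacking described above is easy to reproduce. The sketch below is my own hypothetical simulation in Python (it is not code from the "False-Positive Psychology" paper): it generates two groups of pure noise, tests twenty arbitrary "outcomes," and reports only the luckiest p-value.

# Hypothetical illustration of p-hacking: measure many arbitrary outcomes on
# pure noise, then report only whichever comparison happens to look "significant."
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group, n_outcomes = 30, 20

group_a = rng.normal(size=(n_per_group, n_outcomes))  # no real effect anywhere
group_b = rng.normal(size=(n_per_group, n_outcomes))

p_values = [stats.ttest_ind(group_a[:, j], group_b[:, j]).pvalue
            for j in range(n_outcomes)]

best = int(np.argmin(p_values))
print(f"Smallest of {n_outcomes} p-values: outcome {best}, p = {p_values[best]:.3f}")
# With twenty tries on pure noise, a p-value below .05 turns up in roughly
# two out of three runs (1 - 0.95**20), even though nothing is really there.

That is the whole trick: each individual test is legitimate; only the selective reporting is not.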

The three men—who came to be called Data Colada, the name of their pun-friendly blog—had bonded over the false, ridiculous, and flashy findings that the field was capable of producing. The discipline of judgment and decision-making had made crucial, enduring contributions—the foundation laid by Kahneman and Tversky, for example—but the broader credibility of the behavioral sciences had been compromised by a perpetual-motion machine of one-weird-trick gimmickry. Their paper helped kick off what came to be known as the “replication crisis.” Soon, entire branches of supposedly reliable findings—on social priming (the idea that, say, just thinking about an old person makes you walk more slowly), power posing, and ego depletion—started to seem like castles in the air. (Cuddy, the H.B.S. professor, defended her work, later publishing a study that showed power posing had an effect on relevant “feelings.”) Some senior figures in the field were forced to consider the possibility that their contributions amounted to nothing.

In the course of its campaign to eradicate p-hacking, which was generally well intended, Data Colada also uncovered manipulations that were not. The psychologist Lawrence Sanna had conducted studies that literalized the metaphor of a “moral high ground,” determining that participants at higher altitudes were “more prosocial.” When Simonsohn looked into the data, he found that the numbers were not “compatible” with random sampling; they had clearly been subject to tampering. (Sanna, at the time, acknowledged “research errors.”) Simonsohn exposed similar curiosities in the work of the Flemish psychologist Dirk Smeesters. (Smeesters claimed that he engaged only in “massaging” data.) The two men’s careers came to an unceremonious end. Occasionally, these probes were simple: one of the first papers that Data Colada formally examined included reports of “-0.3” on a scale of zero to ten. Other efforts required more recondite statistical analysis. Behind these techniques, however, was a basic willingness to call bullshit. Some of the papers in social psychology and adjacent fields demonstrated effects that seemed, to anyone roughly familiar with the behavior of people, preposterous: when maids are prompted to think of their duties as exercise, do they really lose weight? ...
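
Note: Some of these forensic checks are far from recondite. The "-0.3 on a scale of zero to ten" catch mentioned above is just a range check; in a spreadsheet, or in a few lines of hypothetical Python like the following (the numbers are made up for illustration), it amounts to flagging values that cannot exist on the response scale.

# Hypothetical range check: flag reported values that are impossible on a
# response scale bounded at 0 and 10 (the numbers below are illustrative only).
reported_means = {"condition_A": 4.2, "condition_B": -0.3, "condition_C": 10.7}

for condition, mean in reported_means.items():
    if not 0 <= mean <= 10:
        print(f"{condition}: reported value {mean} is impossible on a 0-10 scale")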

Ariely maintained that the [Ten Commandments] study had been conducted at U.C.L.A., by a professor named Aimee Drolet Rossi.* When I spoke to Rossi, she told me that she had never participated in the study: “I thought, well, first, what a joke! I don’t believe that study, and I certainly didn’t run it.” U.C.L.A. issued a statement saying that the study hadn’t taken place there. Last year, Ariely, having learned that an Israeli television program was investigating the case, wrote to Rossi, “Do you remember who was the RA that was running the data collection sessions in 2004 and 2005?” Rossi replied, “There was none. That’s the point.” Ariely says that the study took place, and it’s possible that it did, in some form. He told me he now remembers that the surveys were collected at U.C.L.A. but processed by an assistant at M.I.T., which might explain the mixup. He could not provide the assistant’s identity...

---

*Refers to a study in which asking people about the Ten Commandments supposedly made them more honest.

---

[Data Colada participant] Joe Simmons has been working on a blog post, which Data Colada will probably never publish, called “The Fraud Is Not the Story.” He notes, at the outset, that there is “a very large body of behavioral research that is true and important.” But, he says, there is also a lot of work that is “completely divorced from reality, populated with findings about human beings that cannot be true.” In the past few years, some eminent behavioral scientists have come to regret their participation in the fantasy that kitschy modifications of individual behavior will repair the world... “This is the stuff that C.E.O.s love, right?” Luigi Zingales, an economist at the University of Chicago, told me. “It’s cutesy, it’s not really touching their power, and pretends to do the right thing.” ...

The Data Colada guys have always believed that the replication crisis might be better understood as a “credibility revolution” in which their colleagues would ultimately choose rigor. The end result might be a field that’s at once more boring and more reputable. That sanguine attitude has been tested by a cascade of corruption. In the weeks after the Gino revelations, some of her co-authors have audited their work, although Gino did not provide original data files for comparison. They wanted to figure out who had collected and analyzed which data, and to exonerate the innocent—especially young people, whose work for the job market or tenure might have been fatally tainted. In one paper, which had several co-authors, data of the apparently unnatural variety were newly uncovered. Although the details aren’t fully clear, Gino seems to have had nothing to do with it. The data may have been altered by another professor. The suspicions have been reported to the university. ...

Full story at https://www.newyorker.com/magazine/2023/10/09/they-studied-dishonesty-was-their-work-a-lie.

===

Note: The Data Colada blog is at https://datacolada.org/.
