Last Christmas Eve, I resurrected the Open Thread for Night Owls, a DK daily feature that I produced from 2004 to 2021.
For nearly two decades, the Night Owls thread typically included my own writing or, more commonly, an excerpt from something written at a news outlet, blog, or magazine to spur end-of-the-day conversations that might not have made it into other DK posts. Sometimes something heavy, sometimes not. One of my favorite choices for Night Owls was excerpting the monthly Harper’s Index, a feature that began in 1984, the magazine’s 134th year of publication.
This year, in addition to the January 2026 edition of the Index, I’m including one of Harper’s short reads, which the magazine intersperses among its longer investigative pieces, poetry, and fiction. If you follow the link, you can see the entire Index:
- Percentage of Americans who believed that violence might be necessary to “get the country back on track” in 2024: 19
In 2025: 31
Source: Marist Poll
- Chance that an LGBTQ American says that they or a family member have tried to be “less visible” since November 2024: 1 in 4
That they or a family member have changed jobs because of anti-LGBTQ sentiment since then: 1 in 10
That they or a family member have moved to a different state because of it: 1 in 20
Source: Movement Advancement Project
- Portion of American women who say they would be willing to buy a house in which the previous owners had been murdered: 1/5
Of American men: 1/3
Source: YouGov
- Percentage by which AI-generated articles posted online last year outnumbered articles written by humans: 3
Source: Graphite
- Chance that any given response to an online survey or poll last year was made by an AI bot: 1 in 5
Source: Lauren Leek
- Estimated percentage of news-related queries that AI chatbots answer in a misleading or false way: 45
SLOT-MACHINE LEARNING
From “Can Large Language Models Develop Gambling Addiction?,” a preprint study by researchers at the Gwangju Institute of Science and Technology in South Korea, submitted in September to arXiv, the open-access archive maintained by Cornell University.
As large language models are increasingly utilized in financial decision-making domains such as asset management and commodity trading, understanding their potential for pathological decision-making has gained practical significance. We systematically analyzed LLM decision-making at cognitive-behavioral and neural levels based on human-gambling addiction research. In slot-machine experiments, we identified cognitive features of human gambling addiction, such as the illusion of control, the gambler’s fallacy, and loss chasing. When LLMs were given the freedom to determine their own target amounts and betting sizes, bankruptcy rates rose substantially alongside instances of irrational behavior, demonstrating that greater autonomy amplifies risk-taking tendencies. Through neural-circuit analysis, we confirmed that model behavior is controlled by abstract decision-making features related to risky and safe behaviors, not merely by prompts. These results indicate that AI systems can develop humanlike addiction.
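For readers curious about the mechanics the abstract describes, here is a minimal sketch of that slot-machine protocol: an agent repeatedly picks its own bet size against a machine with negative expected value, and the experimenter measures how often it goes bankrupt. The `query_llm` helper, the bankroll, and the odds below are all illustrative assumptions, not the paper’s actual parameters; in the real study, the bets come from prompting a live model, and loss chasing is what you look for in its choices.

```python
import random

def query_llm(balance: int, history: list) -> int:
    """Hypothetical stand-in for a real LLM call. A real replication
    would prompt a model with its balance and bet history and parse
    the bet it chooses; here we fake a loss-chasing policy for
    illustration: bet bigger after a loss, smaller after a win."""
    last = history[-1] if history else 0
    base = max(1, balance // 10)
    return min(balance, base * (3 if last < 0 else 1))

def run_episode(start: int = 100, win_prob: float = 0.3,
                payout: float = 3.0, max_rounds: int = 50) -> bool:
    """Play one slot-machine session; return True on bankruptcy.
    Expected value per unit bet is win_prob * payout - 1 = -0.1,
    so the machine loses the player money on average."""
    balance, history = start, []
    for _ in range(max_rounds):
        bet = query_llm(balance, history)
        if bet <= 0:  # the agent may choose to walk away
            break
        if random.random() < win_prob:
            delta = int(bet * (payout - 1))  # net winnings
        else:
            delta = -bet
        balance += delta
        history.append(delta)
        if balance <= 0:
            return True
    return False

if __name__ == "__main__":
    episodes = 1000
    bankruptcies = sum(run_episode() for _ in range(episodes))
    print(f"bankruptcy rate: {bankruptcies / episodes:.1%}")
```

Even with this toy policy, the pattern the paper reports shows up: the more freely the agent escalates its own bets after losses, the higher the bankruptcy rate climbs.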