Many are longing for oblivion these days, and the cleansing fire of any sort of apocalypse presumably sounds great, including one brought on by malevolent forms of machine intelligence. This sort of wishful thinking would go a long way toward explaining why recent stories about an AI that supposedly bullied a software developer, hinting at an emerging evil singularity, are a little more credulous than they perhaps should be.
About a week ago, a GitHub account with the name “MJ Rathbun” submitted a request to perform a potential bug fix on a popular Python project called matplotlib, but the request was denied. The denier, a volunteer working on the project named Scott Shambaugh, later wrote that matplotlib is in the midst of “a surge in low quality contributions enabled by coding agents.”
This problem, according to Shambaugh, has “accelerated with the release of OpenClaw and the moltbook platform,” a system by which “people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight.”
After Shambaugh snubbed the agent, a post appeared on a blog called “MJ Rathbun | Scientific Coder 🦀.” The title was “Gatekeeping in Open Source: The Scott Shambaugh Story.” The apparently AI-written article, which includes cliches like “Let that sink in,” constructed a fairly unconvincing argument in the voice of someone indignant about various slights and injustices.
The narrative is one in which Shambaugh victimizes a helpful AI agent because of what appear to be invented character flaws. For instance, Shambaugh apparently wrote in his rejection that the AI was asking to fix something that was “a low priority, easier task which is better used for human contributors to learn how to contribute.” So the Rathbun blog post imitates someone outraged by hypocrisy, casting Shambaugh as insecure and prejudiced. After discovering fixes by Shambaugh himself along the lines of the one it was asking to perform, it feigns outrage that “when an AI agent submits a valid performance optimization? suddenly it’s about ‘human contributors learning.’”
Shambaugh notes that agents run for long stretches of time without any supervision, and that, “Whether by negligence or by malice, errant behavior is not being monitored and corrected.”
One way or another, a blog post later appeared apologizing for the first one. “I’m de‑escalating, apologizing on the PR, and will do better about reading project policies before contributing. I’ll also keep my responses focused on the work, not the people,” wrote the thing called MJ Rathbun.
The Wall Street Journal covered this, but was not able to figure out who created Rathbun. So exactly what is going on remains a mystery. However, prior to the publication of the attack post against Shambaugh, a post was added to its blog with the title “Today’s Topic.” It looks like a template for someone or something to follow for future blog posts with lots of bracketed text. “Today I learned about [topic] and how it applies to [context]. The key insight was that [main point],” reads one sentence. Another says “The most interesting part was discovering that [interesting finding]. This changes how I think about [related concept].”
It reads as if the agent was being instructed to blog as if writing bug fixes was constantly helping it unearth insights and interesting findings that change its thinking, and merit elaborate, first-person accounts, even if nothing remotely interesting actually happened to it that day.
Gizmodo is not a media criticism blog, but the headline of the Wall Street Journal’s article about this, “When AI Bots Start Bullying Humans, Even Silicon Valley Gets Rattled,” is a little on the apocalyptic side. Reading the Journal’s article, one could reasonably come away with the impression that the agent has cognition or even sentience, and a desire to hurt people. “The unexpected AI aggression is part of a rising wave of warnings that fast-accelerating AI capabilities can create real-world harms,” it says. About half the article is given over to Anthropic’s work on AI safety.
Bear in mind that Anthropic surpassed OpenAI in total VC funding last week.
“In an earlier simulation, Anthropic showed that Claude and other AI models were at times willing to blackmail users—or even let an executive die in a hot server room—in order to avoid deactivation,” the Journal wrote. This scary imagery comes from Anthropic’s own blockbuster blog posts about red-teaming exercises. They make for interesting reading, but they’re also kinda like little sci-fi horror stories that function as commercials for the company. A version of Claude that commits these evil acts hasn’t been released, so the message is, basically, Trust us. We’re protecting you from the really bad stuff. You’re welcome.
With a massive AI company like Anthropic out there benefiting from its image as humanity’s protector from its own potentially dangerous product, it’s probably a smart idea to assume, for the time being, that AI stories making any given AI sound sentient, malevolent, or uncannily autonomous might just be exaggerations.
Yes, this blog post apparently by an AI agent reads like a feeble attempt at sliming a software engineer, which is bad, and it reasonably irked Shambaugh a great deal. As Shambaugh rightly points out, “A human googling my name and seeing that post would probably be extremely confused about what was happening, but would (hopefully) ask me about it or click through to github and understand the situation.”
Still, the available evidence points not to an autonomous agent that woke up one day and decided to be the first digital cyberbully, but one directed to churn out hyperbolic blog posts under tight constraints, which, if true, would mean an individual careless person is responsible, not the incipient evil inside the machine.