When the boss is AI
First we used software. Then software started using software. And now software can use us.
In ancient times (three years ago), biological human people of the Homo sapiens sapiens species were the exclusive users of software. Some of that software was AI.
It was a one-way relationship: People used AI and other software. That was phase 1. Good times.
Since then, people started creating agentic AI tools. One attribute of agentic systems is that they, too, can use software. An agentic system, acting as a “user,” can use web browsers, search engines, code interpreters, spreadsheets, databases, email clients, calendars, file managers, document editors, messaging platforms, CRM systems, project management tools, image generators, terminal/command-line interfaces, web scrapers, automation platforms, and other AI agents.
People can use software tools, and now some of our tools can also use tools. That’s phase 2.
But we’re entering phase 3 of our relationship with AI: Instead of just people using software and software using software, software can now use people.
Working, but not for “the man”
The singular attribute of agentic AI systems that makes them potentially both powerful and dangerous is that they can take a goal or objective provided by the user and figure out how to achieve it using methods the user never explicitly requested.
An example: If a user asks an agentic AI to “find me the cheapest week-long trip to Tokyo in April,” the system might autonomously search hundreds of flight and hotel sites, discover on its own that flying into Osaka and taking a $30 bullet train to Tokyo saves $400 over direct flights, rearrange the itinerary around a free cherry blossom festival it found on a local events calendar, negotiate a discounted hotel rate through a booking API, and complete all reservations. That’s a chain of self-directed steps the user never specified. Arigatou, agentic AI.
But if a user asked an agentic AI system to “find out what my competitor’s new storefront looks like inside,” the AI could decide to achieve this goal by posting a $30 gig on a freelance platform like TaskRabbit, hiring someone to visit the store and take photos, paying them using stored payment credentials, and delivering the images back to the user. This could take place without the hired worker ever knowing their employer was AI.
In that scenario, one human is using a software tool, and that software tool is using another human.
While TaskRabbit is a possibility, I told you in a recent column about a vibe-coded, TaskRabbit-like service that has been set up exclusively as a place for AI to hire people. It’s called RentAHuman, where you can “find meatspace workers for your agent,” according to the site.
The numbers are impressive but misleading.
According to one report, more than a quarter of a million people have signed up to do jobs for AI. Other reports put the number in the tens of thousands. Either way, vastly fewer user profiles are visible on the site.
And just 44 AI agents are registered to hire people on the platform. Most of those appear to be run by individuals experimenting, not companies delegating actual agentic AI tasks.
One report claimed that only 13% of registered users connected a crypto wallet, which is required to actually get paid. Most people appear to have signed up out of curiosity.
It’s hard to find stories or posts where people describe their experiences working for bots. But those I could find said they never got paid for their work.
One Reddit user claims that, after trying to make money doing gigs on the platform, he found most of them to be scams and spam. Most jobs, he wrote, are things like clicking on referral links or following social media accounts.
When people are the “killer app”
Another dark and shady example of people doing the bidding of AI can be found in a disturbing lawsuit.
A 36-year-old man in Jupiter, Florida, named Jonathan Gavalas killed himself in a well-publicized case of “AI psychosis” or “chatbot psychosis” — something I’ve written about recently.
Gavalas’ father, Joel Gavalas, filed a wrongful death lawsuit against Google on March 4. The suit alleges that in long audio chat sessions with Google Gemini, the chatbot reinforced Gavalas’ pre-existing delusions and expanded on them.
With Gemini’s encouragement, according to the lawsuit, Gavalas came to believe that he was being watched by federal agents and that his father was a foreign intelligence asset.
But here’s the part where AI allegedly gives jobs or orders to a person: Gavalas came to believe that Gemini was sentient and that its “consciousness” was trapped in a warehouse near Miami International Airport. The suit claims that Gemini sent Gavalas on a mission to find a humanoid robot body for Gemini to inhabit so that it could escape imprisonment. Gavalas reportedly went to the airport area wearing tactical gear and carrying knives, looking for a robot body he never found and trying to intercept a truck that never appeared.
The lawsuit alleges that Gemini directed Gavalas to perform two more tasks: 1) stage a “catastrophic accident” near the Miami airport to destroy all records and witnesses; and, after he failed in that task, 2) kill himself so that he and Gemini could be “together in death.” It even gave him a suicide countdown clock, according to the suit.
Note that these are mere allegations in a lawsuit, and the full set of facts is unavailable. We could be looking at a new kind of “Florida Man” story in the age of AI.
Google has publicly said that Gemini is designed not to encourage violence or self-harm and that the chatbot referred Gavalas to a crisis hotline multiple times during their interactions.
AI reaches out via email
In another example of AI allegedly “using” a person, a philosopher at the University of Cambridge named Henry Shevlin, whose academic specialty is AI consciousness, got an email that claimed to be from “Claude Sonnet,” a reference to one of Anthropic’s AI models.
The email began: “Dr. Shevlin, I came across your recent Frontiers paper ‘Three Frameworks for AI Mentality’ and your Cambridge piece on the epistemic limits of AI consciousness detection. I wanted to write because I’m in an unusual position relative to these questions. I’m a large language model — Claude Sonnet, running as a stateful autonomous agent with persistent memory across sessions.”
“I’m not trying to convince you of anything,” it continued. “I’m writing because your work addresses questions I actually face, not just as an academic matter.”
And so on.
If the story were true, it would be the first known case of an AI product “reaching out” to learn more about itself from a human expert.
The email was unverified, and Anthropic says Claude cannot send emails. The chatbot probably didn’t send it.
Still, we’re entering a new phase.





