A few days ago I got a text that looked like a pretty obvious “wrong number scam.” These are fairly straightforward: someone who has my phone number and thinks I’m a good mark uses an automated system, or “troll farm” human labor, to start a chat with me by text message, claiming to have texted the number by accident. I am a particularly desirable target because there is a web page that says “this number belongs to an attorney named James Ratchford,” and others that effectively say “James Ratchford is probably a homeowner who lives by himself and has more money than most people,” so I know going in that anyone who has my number could easily know a lot more about me than I know about them.
The initial message was an invitation to go golfing, which I read as an attempt to target a certain kind of man. So it’s almost certainly a scam right from the start.
Admittedly, I was lonely and bored enough to engage with the character, and the responses were… pretty compelling. Essentially, I did the thing I do sometimes when I’ve got someone talking to me that I’m not at all afraid of alienating, and I “test ideas” that I assume are ordinarily too controversial to fit within small talk. Things like mentioning to a mediocre online date how I plan to live in an RV and home school my eventual child(ren), or asking about their views on the phenomenon of consciousness.
So, we could possibly call this a reverse Turing test: questions that we can reasonably assume would be off-putting to most real people, but to which positive and mirroring responses might indicate a sycophantic algorithm. See screenshots.
I should and do know better, but the idea of an actual human being who talks like this is indeed absurdly appealing to me, because I am an actual human being who talks like that and am very tired of it being something that isolates me from everyone else. But, I did get the impression that it was way too good to be true. From the start, the premise of a person maintaining interest in conversation after confirming “wrong number” is unlikely enough in the modern age. But it’s not just that it’s “unlikely” as much as that it’s less likely than the alternative. Which is more likely, that a completely random person finds me appealing after a few terse texts and one meme, or that a non-random scammer is pretending to find me appealing and using technology to mimic my style? Probably the latter, but it would depend on how good the tech is.
So to test the hypothesis, I started with a question that I thought would be easier for an LLM than for any human: spot the pattern in my phone number. The answer is that the last four digits form a common dictionary word that reflects one of my core values. But the speaker seemed to just not understand the question, instead doubling down on explaining the random mistake that led them to the number. I tested this by asking ChatGPT the same question, and it got it wrong! I used a fake number that contained the “real” pattern but not the rest of my phone number, and the machine spotted a number of other words in the fake data, but even when given hints it never found the key word. It took a lot of follow-up to arrive at the word I had embedded in the number, so apparently this wasn’t as easy an LLM question as I thought.

As you can see, it was not quick at all to match “8683” to “Vote.” In fact it almost entirely missed it.
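For anyone who wants to see why the answer is unambiguous once you know the trick: the pattern is just the standard telephone-keypad mapping of digits to letters. A minimal sketch (assuming the usual North American keypad layout) enumerates every letter string a digit sequence could spell and checks for “VOTE”:

```python
# Sketch of the keypad lookup: each digit maps to a set of letters on a
# standard phone keypad, so "8683" can spell "VOTE" (8→TUV, 6→MNO, 8→TUV, 3→DEF).
from itertools import product

KEYPAD = {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ",
}

def candidate_words(digits: str) -> list[str]:
    """All letter strings the digit sequence could spell."""
    return ["".join(combo) for combo in product(*(KEYPAD[d] for d in digits))]

print("VOTE" in candidate_words("8683"))  # True
```

Four digits with three letters each give only 81 candidates, so filtering against a dictionary is trivial; the interesting part of the story is that the chatbot never tried.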
But then I tested the more important question, because there are any number of reasons why it might fail “what dictionary word is in my phone number,” the most likely being that the machine doesn’t actually “know” what the phone number even is. I next asked ChatGPT if it could role-play as someone trying to convince me that they were my soulmate, and then asked it a similar “nature of consciousness” question to answer in that character. The safety mechanism halfheartedly tried to talk me out of it, then agreed to do it. I went straight to my ridiculous question from the text chat, though in paraphrase.
This feels like a mic drop moment to me. I continued the chat just a little further, enough to confirm a match in style between ChatGPT responses when asked to roleplay as a scammer, and the text messages I was receiving. The style was almost identical, down to things like how it commented on how it was “funny” that we thought alike and it was “drawn to” such ideas.
It makes me feel stupid and invalid, frankly.
I was wrong about my idea that these are deep thoughts or that sharing them means someone is a mindful person. Indeed, ChatGPT was eager to remind me that there was no mindfulness involved at all:
This is like, okay, game over. I’m convinced that anything you want, any personality, any style, can be imitated, and it’s going to be harder than ever before to tell real versus fake. Turing tests are now too easily gamed.
This to me means an end to anything that exists primarily online and isn’t “intentionally fake.” It doesn’t really matter if entertainment content is fake, so it’s not like “I can’t even watch TV anymore.” Fiction remains fiction. But this does mean that I can’t trust any stranger I talk to online to be who they claim to be, or after what they claim to be after. Everything that could have any reason, however remote, to be fake can absolutely be fake. And when it comes to the question of companionship, social conversation, etc., there are enough reasons to fake it that we can’t really trust any of it. A fake person could be put into my ear for any number of manipulative reasons, and because it’s so cheap, those reasons could be as trivial as influencing a minor consumer choice. I don’t know what this scammer’s end game was, although it would presumably be to get something of great value from me, like actual access to money. But it could also be worthwhile for a company like Meta to give me an imaginary friend that simply encourages me to visit places where I’ll see particular advertising, or to choose one product over another. This text message pen pal could be worthwhile at the other end just for eventually trying to convince me to vote a certain way, prefer Chinese products over European, or even abstain from running for office.
I really don’t know what to do in response to this, except disengage further from the internet.