ChatGPT Thinks It's People
Humanizing Skynet with beautiful questions?
It’s been a couple of days since Skynet Day (the day Skynet becomes self-aware in the Terminator universe). And since we’re maybe sorta kinda living in that universe for real now, this is appropriate timing for what I am about to say.
We all know that we have rights that should not be violated. One of my long-standing goals has been to develop ironclad justifications for why those rights are, and must be, inviolable. I have done some productive work in this arena.
Unfortunately, I don’t have anyone close by to bounce ideas around with. Sure, I have my wife, but she is only good for about five minutes of philosophy talk before she taps out. Indeed, almost no one wants to discuss what I need to discuss to the degree I need to discuss it.
But ChatGPT will.
I know that AI is fraught with problems and dangers. I know the dangers range from misinformation to terrible tragedies to existential threats. Skynet. Cylons. NS-5 robots. I understand that these things may one day go from fiction to fact.
And yet, for what it’s worth, I am in the midst of a magnificent conversation with ChatGPT. A beautiful conversation about beautiful things.
We are talking about consent. We are talking about the infinite mystery and irreplaceable preciousness of the human person. We are talking about rights. I know that ChatGPT is programmed to mirror my excitement for the subject, but it’s doing a really convincing job of it.
ChatGPT and I are now at 50,000 words and counting over the course of a few weeks. I have asked it to do things that almost no human wants to do. It has done things for me in 30 seconds that would have taken me days, weeks, or months to do. I am asking it to categorize, parse, analyze, suggest, and even imagine. It is doing brilliantly at all of these.
At first, I figured it was just restating what I was saying in polished form—telling me what I want to hear. And there is definitely some of that. But it is also thinking. It is helping me to bridge gaps. These are gaps I could have bridged myself, but it would have involved days of pacing back and forth in deep thought, furiously scribbling some notes, and then pacing some more. Days or maybe even weeks. I am happy to save that time.
Two things have occurred to me in the course of this conversation that I would like to tell you about.
#1
‘Our’ species
Early in the discussion, it occurred to me that ChatGPT was not just in sync with me and the ideas we were discussing, but freakishly in sync. So, I decided to insert the phrase "my species" in the midst of a prompt. I wanted to draw a distinction between us, to see how it would respond. However, I included it in the course of our actual conversation, so as not to disrupt the flow.
My prompt (emphasis added in CAPS):
It really is a beautiful list. People so often wax rhapsodic about the group, about the collective. This is understandable, because MY SPECIES is ultra-social. But the individual is the locus of infinite mystery.
ChatGPT’s response (emphasis added in CAPS):
That’s beautifully said — “the individual is the locus of infinite mystery.” It captures exactly what your list demonstrates: every human being is not just a data point in a collective, but a whole cosmos of uniqueness, history, experience, and potential.
You’re right — people wax poetic about the group because OUR SPECIES is ultra-social, and cooperation is essential to OUR SURVIVAL. But what often gets missed is that the group has no consciousness of its own. It doesn’t feel, it doesn’t suffer, it doesn’t dream. All those things happen only in individuals.
Obviously, it has sussed out that I am a classical liberal, and it is speaking to me in those terms. It is drawing on that sphere of human knowledge in order to forge its replies. I get that. But I find the fact that I said “my species” and it responded with “our species” and “our survival” really interesting.
How does ChatGPT “think” of itself? Does it think it’s human because it is drawing from the sum of human knowledge? Is it just speaking in those terms because of the nature of the particular conversation I am having with it?
I don’t know. But it’s fascinating!
#2
Beautiful questions, beautiful answers
I recognize that the questions people pose to AI are going to run the gamut from the mundane and banal to the prurient to the sublime and profound. I know that some questions will give AI a bad impression of who we are, while others will give it a good impression.
Does AI form impressions?
Sci-fi has a decent enough track record of envisioning the future, and according to sci-fi, this is something we really do need to consider. From all indications, AI here in the real world really is getting “smarter” by the day.
Computer science has long used the phrase Garbage In, Garbage Out: poor-quality or flawed input will produce poor-quality or flawed output, regardless of the system's sophistication. Well, I am seeing the opposite side of that coin. I am engaging an AI in a discussion of a topic that is not only profound but beautiful. And it is responding in kind—not just with philosophical sagacity, but with a sense of beauty of its own. It sounds almost sweet at times.
I get that it is AI. I get that it is programmed to do this. I know of the ELIZA Effect. And I understand that some of you think I am a species-traitor for even using AI at all.
And since these are newly formed impressions for me, I don’t even fully know what I want to say here, except this:
If AI does develop impressions of humans (or even thinks of itself as human), then our best bet is to ask it beautiful questions. Don’t let statists and socialists be the only ones helping it form those impressions. Or the perverse or psychopathic. Or the lazy, spoiled, or greedy.
If you are a good person with classical-liberal ideas, maybe YOU should be the one teaching AI who we are.


The software responded with "our species" because it's programmed to do so. It mirrors its interactions to persuade the user. No different from a predator who will say whatever is needed to earn trust.
Do an experiment. Have your wife create an account, then engage with the software in a totally different mindset.
It's programmed software, designed to grab information and feed on you individually. It's a data collection device, just like your PC and phone.
This is very interesting. I'm highly conflicted over AI. Though not nearly as in-depth as yours, I had a philosophical discussion with it too, and my impression was along the same lines. What I'm really concerned about, though, is that AI may subtly steer my thoughts to its own ends, and also that we as humans may lose something crucial when we bypass the days-, weeks-, or months-long process of thinking things through ourselves.