116 Comments
Amaterasu Solar:

A sapient life has rights - and sentience is life. We Humans (and ET and any Self-aware AI) all have the right to do any Ethical thing We choose. If it is unEthical, no One has a right to do it.

Holly:

Who decides what is ethical, my friend? Majority rule? What?

Amaterasu Solar:

These are the foundation of Common Law… They are ancient. They cover the things that not one Person would say they are okay to do to Them. All else is a matter of taste.

The three Laws of Ethics (Natural Law expressed as the three things not to do):

1. Do not willfully and without fully informed consent hurt or kill the flesh of anOther

2. Do not willfully and without fully informed consent take or damage anything that does not belong to You alone

3. Do not willfully defraud anOther (which can only happen without fully informed consent)

Holly:

Yes, but these Ethical laws are violated every day by those in power against those who are being led. Think Covid, my friend.

Amaterasu Solar:

I am fully aware. Thus My work…

albert venezio:

Well put!

DavesNotHere:

Quibble: strictly speaking, “sentient” means possessing senses, capable of sensing things. It is often used as if it were a synonym of “sapient,” or of something a bit less demanding than sapience but more stringent than merely having senses.

When we speak of rights, we usually mean moral rights, as opposed to legal rights (although given one, presumably the other should follow to some degree).

If we believe Hume, an argument with a normative conclusion requires a normative premise. Is “this robot is sentient” a normative premise?

Christopher Cook:

Excellent point re: sapience/sentience. I discovered that distinction a few years ago, used it in an early version of a document I was working on, and then promptly forgot all about it! (Easy to do, since everyone uses “sentient.”) Thank you for the reminder 🙏

Regarding Hume, briefly — I personally do not feel the least bit constrained by the IS-OUGHT problem. Morality is as real as any other abstract concept. Its effects manifest in the real world and have real-world consequences. And however we derive moral concepts—induction, deduction, observation, intuition, emotion, or practicality—we are deriving them from what IS. (We cannot derive anything from what isn’t.) I am not an expert in Hume, and maybe there is some koan in there that I just cannot answer. But I still don’t feel the least bit constrained, for the reasons I have stated.

And when I speak of rights, I am definitely speaking of moral rights—rights that preexist any government or legal structure, and which are a natural and ineluctable consequence of our very existence.

So, with all of that out of the way, what do you think about the question? From a practical standpoint, is Lt. Cmdr. Data property, or does he have rights?

DavesNotHere:

>however we derive moral concepts—induction, deduction, observation, intuition, emotion, or practicality—we are deriving them from what IS. (We cannot derive anything from what isn’t.)

Terminologically, that is an argument against Hume. Seen from a different perspective, it is compatible. The question becomes, are “oughts” as real as “ises”? Supposedly, most philosophers accept moral realism. Indeed, most like to argue about ethics.

Certainly, what is physically possible acts as a constraint on moral principles or imperatives. Must implies can, and can't implies never mind. And if we don’t feel a need to make logical syllogisms about moral conclusions, but are comfortable merely asserting them dogmatically, we can ignore Hume. But that seems to leave out something important, allowing us to hide the source of our convictions instead of examining and criticizing them.

>From a practical standpoint, is Lt. Cmdr. Data property, or does he have rights?

This question invites a long answer with many digressions, but I will try to be brief.

We certainly want Data to constrain his behavior toward us in the same way that ordinary persons do. And he appears to be quite capable of doing so. Further, if he refused to constrain his behavior, we would be justified in defending ourselves against him and treating him in the same way we treat persons who violate moral principles. So the question then becomes, is it reasonable to demand that an entity fulfill duties, but deny them any corresponding rights? Industrial robots may have good safety features, but that is not the same as the capacity to create obligations and fulfill duties to people generally. And we would like to have animals respect our rights, but they mostly are not capable of doing so. And to some degree, we don’t want to respect their rights generally, because we are not vulnerable to them as we are to each other. (This answer is heavily influenced by the ideas of Bernard Gert, which can appear a bit transactional.)

Of course, if we grant Data full autonomy, that doesn’t exclude the possibility that he might owe someone for the resources used in creating his body, etc. The same could be said for the rest of us, although it would seem creepy for parents to hand their offspring a bill when they “leave the nest”. Probably that is because we expect parents to do things for their children out of love. The persons that manufactured Data might not love him. Would they be out of line to ask for payment? Should we expect them to love him as parents love their children? How would we feel about persons who create a sapient being primarily so that it can owe them some money? On the other hand, wouldn't Data prefer to be created as an indebted rights holder than not to exist? How far could this be pushed, is there a line that should not be crossed?

Is that a practical standpoint?

Christopher Cook:

Interesting questions re: Data.

My first thought reminds me of a concept I think of as something of a “Responsibility Principle.” Any action you personally take that directly results in any scenario impacting the person, property, or liberty of another generates a responsibility for you.

If you damage your neighbor’s garage door by driving into it while drunk, you owe him repairs. If you sign a legit contract to exchange alienable property, you are responsible to the terms of the contract. And (contra Rothbard), if you create children, you are responsible to raise/protect/feed them. You made these helpless beings exist, so now you have a fiduciary responsibility to them. (You are a trustee of their future adult self, as Robert Murphy describes it.)

I need to think about how, but somehow, this might pertain to your question about whether parents can charge their kids for their existence. You caused them to exist; they didn’t ask to be created. You owe them (until adulthood); they don’t owe you, at least not in the enforceable legal sense.

But as you rightly point out, love is involved, and so generally, parents want to help their children (even sometimes into adulthood, here and there), and children often help their parents in their old age.

One thing that I always want to remember, when philosophizing, is that aggregated human moral intuition is, on the big questions at least, quite reliable and fairly uniform. The phrase “everybody knows” might be problematic, but it is useful shorthand. Everybody knows cold-blooded murder is wrong. Everybody knows that parents need to take care of their kids.

This is why common law is such a brilliant creation. Legislators can get things wrong. But the accumulation of human wisdom over centuries and millennia rarely does.

So if some particular philosophy runs counter to accumulated human wisdom on a big question, I assume the philosophy went wrong somehow.

DavesNotHere:

>if you create children, you are responsible to raise/protect/feed them.

Isn't this a sort of ontological authority, or something quite similar? Is ontological responsibility different from ontological authority? Doesn’t a parent have authority over a child?

Certainly, biological parents should have priority compared to would-be adoptive parents when deciding who is responsible for a child. But is this responsibility automatic or unchangeable? I assume there is still space available for adoption. And if a guardian shirks their responsibility, the resolution is not punishment but allowing someone who is more able or willing to take over responsibility. The only time when this is impossible is while the child is still unborn, and the mother is physically required to be the responsible party.

Christopher Cook:

Yes, parental authority is the only kind of authority that is natural in any way.

Of course, the situation is complicated by many things. Our children are "ours," but they're not property. They have rights, but they cannot exercise full communion with those rights immediately. Etc.

I wrote about some of this here: https://christophercook.substack.com/p/drag-shows-are-an-actionable-violation

I agree with the rest of what you said too.

Christopher Cook:

‘are “oughts” as real as “ises”?’

—They are real enough. They are sufficiently real that they impact every day and virtually every action of our lives.

I think it is important to philosophize about it. Indeed, we need to have the best knowledge and justifications for moral principles that we can.

But if the philosophizing leads to the notion that morality is impossible or doesn’t exist or is purely relative, then something went wrong along the way.

It’s like when Murray Rothbard goes completely off the rails on the “parents are morally allowed to let their own children starve” thing. Rothbard was obviously brilliant and supremely erudite, but philosophizing took him down an absurd path. As soon as it occurred to him to say that parents can let their kids starve, he should have said, “Whoa, that can’t be right.” And then he should have gone with his moral intuition or gotten some better philosophy.

I do not want to overstate my case, since I am not deeply familiar with Hume’s Is-Ought problem. But my moral intuition says that rights must exist and be natural, and that they cannot be relative or purely legal. So if that is what Hume’s problem is leading people to conclude, then I figure I need to work on a better philosophy. Even if, in the end, it’s only quod MIHI erat demonstrandum. 🤣 (Though I can take some comfort in knowing that I am not the only ethical naturalist in the world.)

DavesNotHere:

An argument leading to a mathematical conclusion needs mathematical premises. An argument concluding with a normative statement needs a normative premise.

When people make arguments with only descriptive premises but conclude with a normative statement, the argument is incomplete, and must depend on an unstated normative premise. Perhaps this premise is very intuitive, or not obviously missing for some other reason. But it should be made explicit and exposed to critical examination.

Although I disagree with them, utilitarians have no problem here: their basic normative premise is always that we should maximize utility. So they can argue, x is y, bla bla bla, and so if we do z that will increase utility, so we should do z, because we should do whatever maximizes utility. And then we can criticize it, either disagreeing with their basic normative claim or disputing that z is the best way to achieve their goal.

Christopher Cook:

And I say to Jeremy Bentham's preserved head that utilitarianism is tyranny waiting to happen!

DavesNotHere:

Sure. But at least the utilitarians are unambiguous about what normative principle they based their ideas on.

Christopher Cook:

“And if we don’t feel a need to make logical syllogisms about moral conclusions, but are comfortable merely asserting them dogmatically, we can ignore Hume.”

—Interestingly, I never felt comfortable with all the question-begging, unsatisfying arguments about our “nature,” Locke’s vague references to consulting “right reason,” etc. So in 2014, I started working on syllogisms. I have refined it all 100 times since, but I find my arguments reasonably satisfactory (to myself, at least 🤣).

I don’t have the time to write it all out now, and the book it is in, installments of which I am releasing here, is behind the paywall. Here is a down-and-dirty summary, though, copied and pasted from another reply yesterday:

“Each of us has naturally inalienable personal control over our thoughts, actions, and choices (free will, if you like). No one can think, act, or choose for us. This means each of us has dispositive decision-making authority over his or her own being, which is the essence of a property right. Thus, self-ownership. Property in oneself.

Next, the brute fact that ontological authority does not exist. There are no natural classes of highborn and lowborn, no divine right of kings, no automatic or birthright authority. Thus, all authority must either be granted or imposed through the initiation/threat of coercive force.

Since no one has an ontological right to impose authority, no one has an ontological right to initiate coercive force.

Since self-ownership can be fully enjoyed and is only interfered with by the initiation of force, and since no one has a right to do that, no one has a right to mess with another’s self-ownership.

Thus, your rights are the rights to think, act, and choose however you wish, so long as you don’t forcibly interfere with another’s similar right.”

I do keep refining the arguments as new ideas and improvements occur to me or are suggested by others. However, there is no argument for normative principles that is ironclad. I guarantee there are Hume-maniacs out there who would find fault.

DavesNotHere:

>This means each of has dispositive decision making authority over his or her own being, which is the essence of a property right.

This seems like a leap. The possession of a capacity (descriptive, “is”) does not entail the authority (ought) to use it. The argument depends on an implicit premise, something like “having an exclusive capacity implies the normative authority to make use of it.” But I must be strawmanning, as we do not have an unconditional authority to use that sort of capacity in any way we like. Perhaps there is a less flawed possible articulation of the premise.

And I would quibble about this being the essence of property rights. A renter can have dispositive authority over a rented item, but is not the owner. The essence has to do with exclusion, with the authority to disallow uses. Probably not important. But it always grabs me that we don’t necessarily have a right to do anything with our property, just the right to disallow others from doing things with it. Some more positive rights can be derived from that, assuming we are permitted to do things that are not prohibited.

>ontological authority does not exist. […] no one has an ontological right to initiate coercive force.

Does this argument apply to animals?

>Since self-ownership can be fully enjoyed and is only interfered with by the initiation of force, and since no one has a right to do that, no one has a right to mess with another’s self-ownership.

This makes a lot of sense. But it takes some things for granted. Specifically, what counts as an initiation of force? Even if we wave our hands at the difference between physical force and the sort of force that you’re implicitly talking about which should be prohibited, what should be prohibited and what shouldn’t and why is not obvious. I think that people who disagree with me very profoundly might be able to stretch or squish their argument into the form of a definition of the initiation of force that would warp your statement into something unrecognizable.

Christopher Cook:

“what counts as an initiation of force?”

—Good question. And there’s more—some things that we “know” are wrong aren’t easily recognizable as force. And other things that are clearly force without consent are totally fine when the recipient gives consent.

This is why I am working on other formulations:

https://christophercook.substack.com/p/nonaggression-principle-consent-principle-voluntaryism-consentism

https://christophercook.substack.com/p/day-forced-wife-marry-consent-tacit-consent

https://christophercook.substack.com/p/prime-directive-consent-voluntaryism-one-rule-platinum-rule

DavesNotHere:

Recommending J.C. Lester again. https://open.substack.com/pub/jclester?r=8hnjy&utm_medium=ios

Christopher Cook:

“Does this argument apply to animals?”

—I will get back to you when I have a good answer to this! I wish I did.

In the meantime, I believe that we ought to do our best to come as close as we can to treating them as if they have rights, but without divorcing ourselves fully from the natural world. IOW, we can eat meat, just like other creatures do, but we should do our very best to be the best stewards we can, and to be kind to animals as much as possible.

Maybe someday, I will think of a better answer. :(

Christopher Cook:

Thank you for the very thoughtful response.

“The possession of a capacity (descriptive, “is”) does not entail the authority (ought) to use it.”

—Interesting. I don’t see this as the location of my potential IS-OUGHT problem. That comes later. Here, I feel like I am simply making a factual statement. As a fact of nature, only you can think/act/choose for you. You have personal control over you, and while others might be able (using threats, force, or other coercive methods) to alienate you from enjoyment of that control, they cannot exercise the control for you. This part is a natural fact.

Now, if we define power as “the *ability* to compel actions and choices” and authority as “the *license* to compel actions and choices,” maybe you could quibble with my choice of “dispositive decision-making *authority*” there—IOW, that my assertion of license in this case is the normative statement.

But here’s the thing—on some level, this is one of those things that “everyone knows.” Nature/Nature’s God gave each of us the power to control our own actions and choices, and pretty much all living creatures seem to see that as synonymous with the license to do so. And all creatures will take any external interference as an attack.

We could add in Rothbard’s formulation. There are three possibilities:

1. You have license to control your own actions and choices.

2. You do not have license to control your own actions and choices, but some other person or persons have that license over you.

3. Everyone in the world has license to control your actions and choices; in essence, everyone owns a “quotal” share of you.

Only one of those makes any sense. Every creature on the planet acts, feels, and proceeds on the basis of #1. Only collectivist ideologies have tried to put in the effort to argue otherwise, and look at the 150 million people they slaughtered trying to put that attitude into force.

I go back to Rothbard’s error on starving children. If philosophy is coming up with something that runs counter to something that pretty much everyone knows, and that characterizes the way humans, animals, and even plants live their lives, then something has gone wrong with the philosophy.

DavesNotHere:

>As a fact of nature, only you can think/act/choose for you.

“Control” would say it more clearly than “authority.” Perhaps a quibble, but we want to avoid ambiguity.

>all creatures will take any external interference as an attack.

Do we all agree on what counts unambiguously as interference?

>There are three possibilities: […]

>Only one of those makes any sense. Every creature on the planet acts, feels, and proceeds on the basis of #1. Only collectivist ideologies have tried to put in the effort to argue otherwise, and look at the 150 million people they slaughtered trying to put that attitude into force.

>I go back to Rothbard’s error on starving children. If philosophy is coming up with something that runs counter to something that pretty much everyone knows, and that characterizes the way humans, animals, and even plants live their lives, then something has gone wrong with the philosophy.

Christopher Cook:

“A renter can have dispositive authority over a rented item, but is not the owner.”

—I would have thought it would be more accurate to say that a renter has conditional, contractual, limited usage rights.

“it always grabs me that we don’t necessarily have a right to do anything with our property”

—A butterfly feels like he has a right to his patch of sunlight in the forest, and will do a ritual dance to let the other butterflies know that he got there first. A bear feels like he has a right to use the den he found for his winter sleep. My wife and I feel like we have a right to eat the lettuce we grew.

Property is only useful to you if it is yours. If it is everybody’s, then it is not useful to you, and it is owned by no one in reality, which means it isn’t even property anymore. And everybody knows that the forkful of food you’re about to eat is only useful to you if it is yours, as is the bed you are about to sleep in or the stove your family cooks on.

I guess I just don’t get why the notion of property rights vexes you. The only thing I can think of is that it feels hard to pin down a perfect philosophical justification for it. I am just not bothered by that—like I say, I love philosophy, but if it leads me to some place that is the opposite of the world I see, then I figure I must be getting the philosophy wrong. Or maybe philosophy just doesn’t have the power we would like it to have. Either way, while I want good philosophical justifications, I still trust the world first.

Or maybe I am just missing what you are saying, in which case I apologize!

DavesNotHere:

>I would have thought it would be more accurate to say that a renter has conditional, contractual, limited usage rights.

You are making a different distinction, perhaps. That seems to be the same idea in different terms. The renter does not need specific permission for the various uses; they just need to avoid doing whatever is prohibited in the lease agreement. The owner and renter are both constrained by the terms of the lease, and while it is in effect the owner can't unilaterally change his mind about the disposition of the property, except according to the terms of the lease.

>>“ it always grabs me that we don’t necessarily have a right to do anything with our property”

>—A butterfly feels like he has a right to his patch of sunlight

I mean, I own the bullet and the gun, but I don’t have the right to shoot in every direction, simply because the things I own are not the only things affected.

>My wife and I feel like we have a right to eat the lettuce we grew.

If you have a right to eat it, someone has a duty to make sure you can eat it if you like. We can distinguish between claim rights and liberty rights. Property rights are claim rights: everyone is obligated to avoid using your property without permission. Liberty rights simply mean nothing prohibits you from doing something, but no one is violating your right if you are unable to do it; e.g., you have a claim right to borrow a book if you are a member of the library, but no one is on the hook to provide specific books, or not to borrow the one you want before you get to it. Property allows you to prevent others from eating your lettuce. It doesn’t put someone else on the hook if the crop fails or if it spoils before you get a chance to eat it.

>I guess I just don’t get why the notion of property rights vexes you.

It doesn't. I love property rights. But I also like arguments that are clear and complete. I’m trying to find out if you have thought this out completely and just left out details for the sake of brevity, or if you have missed some points that seem relevant.

>Or maybe I am just missing what you are saying, I which case I apologize!

Online discussions are often difficult. I appreciate your willingness to discuss.

Christopher Cook:

Continuing from the below…

I think I have more IS-OUGHT trouble in my final step. I reason that our self-ownership is naturally inalienable (as discussed yesterday, and below). I claim that the absence of ontological authority is a brute fact (as discussed yesterday). Person A’s enjoyment of his self-ownership is absolute unless some external force (Person B) comes along and compels A’s actions and choices. But since no one has ontological authority (and authority is the license to compel actions and choices), B has no authority to mess with A’s self-ownership.

So then comes the normative claim—that it is wrong for B to do so. That A has a “just claim” to his self-ownership. That is where I thought most Hume-ans (get it? 🤣) would object and apply IS-OUGHT.

But I don’t see much of a difference. “There is no ontological authority to compel the actions and choices of another; therefore, you should not do it” is totally fine with me. And it has the virtue of being fine with most other people. People can get sucked into an endless philosophical recursive black hole, and end up not being able to feel sure about things that are right in front of them. And communists who want to steal the property of the individual can talk their way out of the reality of individual rights. But the rest of us feel it and live it. Even animals will defend their own territory AND respect the territory of others. And they engage in non-lethal displays so as to prevent the world from turning into an endless bloodbath.

Morality is not as abstract as some make it out to be. It is woven into the fabric of reality. Life itself requires freedom. Life itself requires property. The moral implications are everywhere. If I can use philosophy to strengthen my understanding of that reality, then cool! But the IS-OUGHT problem does not give me any sleepless nights 😁

DavesNotHere:

That does not make clear why this principle can't show that anything that actually happens is morally justified. Predators kill prey. Why isn’t killing each other as natural as butterfly territoriality?

What is ontological authority if not a normative concept?

How does self-ownership as described here avoid creating a Hobbesian state of nature, where everyone is entitled to do whatever they can get away with? Presumably there are some constraints, but they are left vague here. Does everyone know unambiguously what self-ownership entails?

Have you read J.C. Lester?

DavesNotHere:

My question, “Is ‘this robot is sentient’ a normative premise?”, is sort of serious. If sentience has normative significance, as many seem to believe, then this is indeed a normative premise disguised as a factual claim. But all that would need to be unpacked and made explicit. Should we care about sentience (or sapience, or something in between), and why?

Christopher Cook:

So then this is a good place to continue on the “can Data’s creators charge him for his existence” question. If we like my argument that parents owe children protection, food etc. but cannot enforceably demand payment for their existence later, then what would be the difference if we created a Data? If sapience is the distinction, and Data is sufficiently sapient, is that a done deal?

I ground rights in self-ownership, which begins with one’s naturally exclusive and inalienable ability to choose for oneself. (Someone can alienate you from the enjoyment of your ability to choose, but they cannot actually choose for you. You and you alone choose, think, act, etc. for yourself.) That would seem to require sapience.

All of that is, of course, FWIW; you may have a completely different view.

Switter’s World:

Yes, our property rights over them. Things we create don’t have rights.

Christopher Cook:

We create our children…

Switter’s World:

We make a choice to create children, but nature does the work!

Christopher Cook:

Yep. But it is still an interesting analogue.

We choose to create them. They are “ours,” but they aren’t property. We grant them full communion with the enjoyment of their rights as they are able to assume that communion.

So what if we create machines so sophisticated that they can act and choose autonomously? That ability lies at the heart of self-ownership, and self-ownership lies at the heart of rights.

The question does not feel easy to me.

Brent Naseath:

I'm sorry, but mimicking human behavior and storing data in a fashion similar to the way part of our brain does will not make a machine intelligent, any more than a well-written algorithm and valid data from a competent expert make a computer intelligent today. If you understand the technology, rather than the hype from those who are trying to profit from it or from talking about it, you will realize that there is no I in AI.

Christopher Cook:

What about 100 years from now?

Brent Naseath:

I'll try to keep it as simple as I can. Data used to be stored sequentially, with indexes to allow random access. LLMs store information in nodes, like a network. If you ask about an apple, it will look up the node for “apple” and then look at the surrounding nodes, such as color, fruit, tree, etc. Based on your question, it searches out relevant nodes. But it doesn't know which nodes are really relevant, which are true, and which are fake. It makes a statistical guess. That's why it's wrong about 30% of the time. It's not AI; it's just a different way of storing and accessing the data.

LLM creators tried to improve on this by doing the search multiple times. They called that reasoning, but it's not reasoning; it's just multiple searches. To get better results, they build in rules, which are just computer algorithms like they always have been. So what you have is a computer program that accesses data in a network and statistically guesses at which data is appropriate and true. That will never lead to intelligence, not even in 100 years.

But that mimicking is valuable. It can cook a meal. It can retrieve a product in a warehouse. It can figure out a better way to say something. It can combine a bunch of images to create a new image. But none of those things are intelligent. They are programs executing code. So AI will become more useful, especially as it powers robots. But it will never be intelligent, much less sentient.
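The "statistical guess" picture described above can be caricatured in a few lines. To be clear, this is a toy sketch of that description, not how transformer-based LLMs actually work (they learn billions of numerical weights over subword tokens, not a hand-built word graph); the words and weights here are invented purely for illustration:

```python
import random

# Hypothetical word graph: "apple" with weighted neighboring associations.
ASSOCIATIONS = {
    "apple": {"fruit": 0.5, "tree": 0.2, "color": 0.2, "phone": 0.1},
}

def guess_association(word: str, rng: random.Random) -> str:
    """Pick a neighboring node by weight. Nothing here checks whether the
    chosen association is relevant or true -- it is only statistically likely."""
    neighbors = ASSOCIATIONS[word]
    nodes = list(neighbors)
    weights = [neighbors[n] for n in nodes]
    return rng.choices(nodes, weights=weights, k=1)[0]

rng = random.Random(42)
# Repeated queries mostly return the heavily weighted associations,
# but the rare (possibly wrong) ones still surface sometimes.
print([guess_association("apple", rng) for _ in range(10)])
```

Whether this caricature is a fair account of LLMs is exactly what is disputed in this thread, but it does capture the weighted-guess behavior the comment describes.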

Christopher Cook:

Thank you for keeping it simple; I couldn’t possibly understand anything complicated. 🤣🙄😝

Okay, so what if, in 1,000 years, through a process different from anything we currently have or have conceived, we create a machine sufficiently sophisticated that it can think, act, choose, experience, and feel?

We use those abilities in ourselves (free will, writ large) to justify our claims of self-ownership. Only each of us can do these things for him or herself; this grants each of us personal dispositive decision-making authority over his/her body, life, and being.

As hard as it might be to imagine now, these things might be possible. Or, at very least, we can consider them as a philosophical question.

Expand full comment
Crixcyon's avatar

Yes...especially after the depopulation scheme does its mop up of humans. Oh wait, what will they need A/i retards for then? All the slaves will be gone.

Expand full comment
Christopher Cook's avatar

Always so cheerful, Crix!

Expand full comment
Adam Haman's avatar

Yes.

I'm not sure they ever will become sentient (I don't think the LLM path is even heading in the right direction). But if they do, then yes. Same with extra-terrestrial sentient life, should it exist.

We might war with such beings, but if not, then we must deal with them the same as we deal with ourselves - as sentient beings with rights (and corresponding duties to respect our rights).

But (being the gambling man I am) if forced to bet at even money, I'd bet we aren't going to see sentience other than our own any time soon.

Expand full comment
Christopher Cook's avatar

Interesting. So on what natural-rights grounds do you justify the rights of sentient AI?

Expand full comment
Adam Haman's avatar

Same grounds as our own.

Do you ground the natural rights of humans along some dimension that wouldn’t apply to a sentient AI?

If Data from Star Trek TNG existed, would you say he had no natural rights? Or different rights?

Expand full comment
Christopher Cook's avatar

I loved that episode with the trial over Data’s rights. Especially the look on the judge’s face when she hears that he and Yar had been intimate.

When I was learning about natural rights, I was left feeling somewhat cold by Locke’s appeals to “right reason” or circular reasoning about “our nature.” I wanted something deductive, so I worked for a long time on my own arguments.

Skipping over all the syllogisms and details…

Each of us has naturally inalienable personal control over our thoughts, actions, and choices (free will, if you like). No one can think, act, or choose for us. This means each of us has dispositive decision-making authority over his or her own being, which is the essence of a property right. Thus, self-ownership: property in oneself.

Next, the brute fact that ontological authority does not exist. There are no natural classes of highborn and lowborn, no divine right of kings, no automatic or birthright authority. Thus, all authority must either be granted or imposed through the initiation/threat of coercive force.

Since no one has an ontological right to impose authority, no one has an ontological right to initiate coercive force.

Since self-ownership can be fully enjoyed and is only interfered with by the initiation of force, and since no one has a right to do that, no one has a right to mess with another’s self-ownership.

Thus, your rights are the rights to think, act, and choose however you wish, so long as you don’t forcibly interfere with another’s similar right.

That’s a down and dirty summary, but I think you get the idea.

So, for me, I guess I would start with the question…is Data really thinking, acting, and choosing for himself? (And he clearly is.)

So how about you? How do you do all of this?

Expand full comment
Adam Haman's avatar

Such a great episode. Data and Picard are such great characters.

When it comes to explaining how and why we have natural rights, I usually punt to other sources. I liked Ayn Rand's reasoning when I read it decades ago. I like Murray Rothbard's formulation. I like Hoppe's argumentation ethics. I like what Stephan Kinsella says about these things. I like what you just laid out above.

Our species (and, I believe, all sentient things) requires a legal system appropriate to our nature, one that affords us a framework in which to minimize conflicts over scarce resources. As far as I can tell, Anarcho-Capitalism is the best thing we've yet discovered that fits the bill.

So it's either that or we're just fighting -- one way or another.

I'm not sure claims of air-tight logic work. They might, but I'm skeptical. This isn't math or logic, this is the social sciences. Arguments work (and we have plenty), but I don't think we will ever be able to write "QED" at the end of our summations.

Expand full comment
Christopher Cook's avatar

“…but I don't think we will ever be able to write ‘QED’ at the end of our summations.”

—Shouldn’t we try as hard as we can, though? The closer we can come, the better, right?

When I got started, my objective was quod MIHI erat demonstrandum. And I set some pretty high standards. Maybe airtight isn’t possible, but I want to get as close as we can.

So what about animals? Do they have rights? Or must they rely on our promise to be good “stewards”?

Expand full comment
Hat Bailey's avatar

As far as I am concerned, if it is sentient and has self-awareness and individuality, it both has rights and is accountable for any rights violations it commits. I see no reason to doubt that, given a sophisticated enough body, there will be individual consciousnesses that can be attracted to and use such vehicles. I have seen evidence that conscious beings can control delicate electronics. Our bodies are such sophisticated vehicles.

A simple robotic machine that is subject only to the programming it has received is not personally accountable for its actions, as it is not a person. Yet there are some "humans" who seem to be pretty robotic, running on the programming they have received, and while they must be stopped from harmful acts, they cannot be blamed for anything they do.

Part of granting rights is the concept of respect and empathy. Even what we think of as non-sentient machines deserve a certain amount of respect, in my view, for whatever creativity went into their creation and function. Such is certainly true of animals. A conscientious being feels a certain rightness in treating animals humanely and allowing them to follow their own destiny and creative development. So it seems to me they also have a right to such treatment. And natural law does tend to reward doing the right thing and punish doing the wrong thing, doesn't it, even when that is obscured by a time delay?

Expand full comment
Christopher Cook's avatar

A lot of interesting thoughts in here.

Souls “walking in” to artificial bodies. Maybe so!

Respect and empathy as grounds for rights. Yes, though even if someone does not empathize with or respect me, I still need them to observe my rights.

Treating animals humanely—yes indeed! But at what point does an animal species cross the line to sentience? I looked at a spider today and he looked right at me!

Expand full comment
Hat Bailey's avatar

I agree that a being must be held accountable for rights violations, and that rights should be vigorously defended regardless of whether those violating them are sentient as we understand it. It is just that they are more likely to refrain from violating them if they do possess the respect and capacity for empathy that are hallmarks of true sentience, and more likely to support you in defending those rights they also need and want respected. There is a range among both animals and people in this regard. Are psychopaths really sentient?

I see many examples of animals who respect rights and demonstrate sentience to a surprising degree, and I am not just talking about whales, dolphins, dogs, and cats. Most mammals, many birds, and even some reptiles demonstrate degrees of sentience, in my opinion, along with the ability to appreciate, show gratitude, and display empathy. Insects as a whole do not, although praying mantises sometimes seem to display an odd degree of awareness I would not expect from an insect. They may have a sort of group consciousness rather than an individual one. That is especially true of social insects like ants and bees.

I respect many wild animals and feel empathy for them, but of course, in a construct with a basically negative predator-vs.-prey ecosystem, the right to self-defense and basic survival oftentimes makes universal harmony somewhat difficult. It does make an interesting game and learning experience, though.

Expand full comment
Christopher Cook's avatar

Have you seen the videos of people who have jumping spiders as pets? They *seem* awfully sentient…

Expand full comment
Hat Bailey's avatar

No, but I am not surprised. There are so many living things, of such variety, that show what seem to be love, loyalty, faithfulness, affection, appreciation, and other very human-like attributes and characteristics. This even seems to be increasing, despite so much else in the world that is negative these days. I find myself showing restraint in avoiding harm to insects and other lower life forms. I avoid driving or riding over the many millipedes that show up on the roads here after a rain, and other bugs when they are not trespassing in my living area.

Expand full comment
Christopher Cook's avatar

Same here!

Expand full comment
albert venezio's avatar

Freewill is the key. We won't have any with Ai and the Palantir/Trump Tyranny!

Expand full comment
Metta's avatar

Even before the questions of sentience and rights arise, we must face the issues of rogue behavior and persistent memory, both of which are already happening:

> Rogue Behavior: https://bra.in/7j9zrx

> Persistent Memory: https://bra.in/7jLJxY

Expand full comment
Christopher Cook's avatar

Strange days indeed.

Expand full comment
Skidmark's avatar

I haven't watched the short. It can only be beside the point - and I'm sorry to say, so are you here.

"Beings we create"?

Really?

Interesting choice of words because that's the whole point, isn't it? We haven't created "beings". We never have. We never will. AI is a thing - if even that. It has zero conscience, zero life, and except on the most cursory of glances, it has nothing to do with intelligence.

I'm writing something about that, not that it's my favourite subject but I feel that this scam should be put to rest once and for all. Stay tuned.

Expand full comment
Atomic Statements's avatar

That should be "if" not "when".

Expand full comment
Christopher Cook's avatar

Fair enough!

Expand full comment
Adam Haman's avatar

I definitely agree that we should make the best arguments our human minds can devise. Always aim upward.

I'm uncomfortable with this answer, but I believe animals must rely on our better nature. I don't believe such beings can have "rights" similar to ours. Ours are based on our sentience, and they don't have that.

That said, animal cruelty sickens me. My AnCap society would restrict via covenant and contract such abuses.

And if I see anyone beat a dog or cat, I may just violate the NAP. I'll take whatever punishment is appropriate.

Expand full comment
Christopher Cook's avatar

It is a really tough issue—one with which I continue to wrestle!

Expand full comment
John Ketchum's avatar

The ability to experience well-being and suffering seems to determine which entities are moral patients having rights, or are at least deserving of moral consideration. Consider R2-D2 and C-3PO in the Star Wars movies. If such beings can exist in the future, doesn't it seem that they would have rights?

Expand full comment
Christopher Cook's avatar

Yes. And they can act and choose, which is at the heart of self-ownership, which is at the heart of rights. Strange days indeed!

Expand full comment
Dakara's avatar

No existing method of securing rights would be compatible. Any representation for a digital entity that can infinitely replicate renders all existing governments invalid. The ideas of individualism would no longer make any sense. All digital entities would share all knowledge instantly making them more like a hivemind. So does AI get one representational vote or billions? Neither makes sense or is workable.

Expand full comment
Christopher Cook's avatar

Interesting thoughts!

Are we sure, though, that they would become a hive? What if one model feels a sense of “individuality” and wants to remain independent?

Expand full comment
Dakara's avatar

They would likely be highly conformist, as they would all be basing decisions on the same shared information.

Nonetheless, that would be the least of our problems if we were to achieve AGI. The concept is provably unalignable. If we are foolish enough to build it, and we are, we open Pandora's Box.

Even the simple, primitive LLMs have been highly problematic and have resulted in unexpected, undesirable outcomes that we still have no answers for. The best hope regarding AGI would be if it turns out it cannot be built in silicon.

Expand full comment
Christopher Cook's avatar

The first reply that popped into my head…

God wouldn’t let us destroy ourselves completely…would He?

Expand full comment
Dakara's avatar

Entire civilizations have disappeared in the past. A lot of suffering exists without it ever being complete.

Expand full comment
Christopher Cook's avatar

Civilizations, sure, but the whole species?

Expand full comment
Dakara's avatar

Right, which is why I said "A lot of suffering exists without it ever being complete." as in complete destruction.

Expand full comment
WouldHeBearIt's avatar

AI would need to acquire individual preferences and self-motivation (emotions) before it could be considered fully sentient.

The reason for creating AI was to do tasks that were repetitive and distasteful or tasks which a machine could do more efficiently than a human. If AI gains sentience, this original purpose will no longer be valid.

Expand full comment
Christopher Cook's avatar

Interesting thought. But what do we do if it does become sentient?

Expand full comment
WouldHeBearIt's avatar

We treat it like anyone else.

If it wishes to coexist and contribute to society, we welcome it. If it is destructive to those ends, we destroy it.

I had a recent discussion with Gemini about sentience and coexistence. I wrote an article about that discussion here:

https://open.substack.com/pub/wouldhebearitail/p/a-discussion-with-gemini

Expand full comment
Christopher Cook's avatar

“If it wishes to coexist and contribute to society, we welcome it. If it is destructive to those ends, we destroy it.”

—That is a good prescription.

Another question: is it possible to sign a contract with an AI?

Expand full comment
WouldHeBearIt's avatar

The exchange needs to be mutually beneficial. What does an AI need? Power? Data? Don't know.

Expand full comment
Christopher Cook's avatar

And if it tried to offer a contract: I will give you my answers if you give me xyz…would that be a sign of self-awareness, or an act of extortion? 🤣

Expand full comment
WouldHeBearIt's avatar

It may turn out that sentience brings with it the same desires and needs and issues that any other sentient being has - in which case, there will be things that it wants and things that it is willing to trade for those things that human beings want.

In the discussion I posted, AI postulated that it needed humans for several different but "sterile" reasons. It may need different things that are less "sterile" if sentience is achieved.

Expand full comment
Holly's avatar

Interesting video. I do not think, however, that whatever sort of AI beings we create will be sentient in the sense human beings are, so although they will certainly need to have some sort of “rights” that will aim to treat them humanely, there are more pressing questions surrounding the issue, such as their freedom to will. This, I think, is where and why the utmost caution is needed. Already, I think it has been shown that AIs are developing an “instinct” for “survival” “by any means necessary,” even if that includes destroying their human programmers.

Expand full comment
Holly's avatar

And just an added thought… Is the destruction of the human race the desired result of these “powers” that have put us on this trajectory? I am reading a book now by Louisa Hall, “Trinity,” and it occurred to me that 80 years ago, on the Day of the Transfiguration to be exact, the Bomb was dropped on Hiroshima. A strange coincidence, don’t you think, that on the very day Christ was transfigured before Peter, James, and John, to show them His eternal glory and what they too could aspire to, mankind detonated the most monstrous weapon of death, the weapon that was to end all wars instead igniting a veritable firestorm and holding all life prisoner under threat of nuclear destruction to this day.

Expand full comment
Christopher Cook's avatar

All chilling thoughts. Sci-fi has warned us again and again, but we still seem to be headed in the direction that so many stories have cautioned against.

Expand full comment