guenther wrote:What does it mean to "know"? If the information is programmed into a computer's memory, does it know? If self-awareness is required, then including that as an explicit criteria would make it clearer. Also, people with self-awareness often make choices without thinking about actions, or perhaps even more commonly, they think they know but are wrong. Does this count as "knowing what doing that will do"? Do you really just mean an ability to imagine future scenarios regardless of actually "knowing" with a certain degree of precision? If someone thinks a gun is a water pistol, did they exercise free will when they pulled the trigger at their friend? (Assuming a wet friend was the outcome they desired.)
Well, as far as a human is concerned, you know something if you can be demonstrated to know it, such as being able to express it. AIs are trickier because it's possible to put information into an AI that the actual process can't access, so I'd say the information must be present and accessible to the decision-making program.
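The "present and accessible" distinction can be made concrete with a toy sketch. All names here are hypothetical, just to illustrate the idea that bytes sitting in an agent's memory don't count as knowledge unless the decision routine can actually consult them:

```python
# Toy illustration: information can be stored in an agent's memory
# without being accessible to its decision-making procedure.

class Agent:
    def __init__(self):
        # Fact present in memory, but not wired into decide().
        self._hidden = {"gun_is_loaded": True}
        # Facts the decision procedure can actually consult.
        self.accessible = {}

    def decide(self, action):
        # decide() only ever reads self.accessible, so under the
        # "present AND accessible" criterion the agent does not
        # *know* the hidden fact, even though it is in memory.
        return self.accessible.get("gun_is_loaded", "unknown")

agent = Agent()
print(agent.decide("pull_trigger"))   # hidden fact plays no role
agent.accessible["gun_is_loaded"] = agent._hidden["gun_is_loaded"]
print(agent.decide("pull_trigger"))   # now the agent can be said to know it
```

Under this reading, demonstrating knowledge for an AI means showing the decision procedure actually draws on the information, not merely that the information is stored somewhere.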
If you have a misconception about an action you're going to take, you're failing criterion 2. If someone thinks a gun is loaded with blanks and shoots someone on stage, killing them because the gun was in fact loaded with real bullets (and this has happened before), they aren't culpable for their actions, because they didn't know what firing the gun would do. Criterion 2 also covers an inability to comprehend the consequences of an action, for whatever reason you can't imagine what it will do (it might be a very complicated action with systemic consequences, for instance), which also accounts for a possible lack of the ability to imagine future scenarios.
guenther wrote:Does criteria three just mean that there shouldn't be a means of controlling your behavior that doesn't appeal to your ability to make choices (i.e. appeal to your desires, values, or sense of reason)? In other words, no remote control, puppet strings, or a terminal where you can type "Do X"?
I think our culture desperately needs to contemplate methods of manipulation to better answer this question. But right off the bat, it covers everything you posit in addition to things like threats and scams - if it can be shown that I somehow tricked you into committing a crime, you shouldn't be culpable for it.
guenther wrote:And 4 just seems too vague to have much meaning. Clearly if a robot simply does whatever you say, it doesn't have free will. But where's the boundary? To me the distinction is about having desires, not necessarily the willingness to follow orders. You must have things you want and an ability to make choices in regards to them.
It's hard to define such boundaries with humans, too. Consider the subject of child testimony in court: it's very, very easy to coach a child into giving earnest, desirable testimony, even about things the child never experienced. Under this model, the child fails criterion 4, because when a child gives testimony it is effectively impossible to tell whether the child was coached into giving it. But there are circumstances in which it is vital that child testimony be admissible in order to ensure that justice is done, particularly in regard to crimes perpetrated against children.
This model broadens the principle seen here.
guenther wrote:I agree that people can exercise free will more readily in some cases than others. But I'm talking about having the potential to do this, not the actual act of exercising it. This doesn't apply to the computer sitting on my desk under any circumstances. And while this might describe animals in some cases, they lack self-awareness. Currently, we can make computers look like they're succeeding on all of this, but we know there's a person behind the machine who is fabricating the illusion.
There is no consistent free will under my model. If you are forced to take your every action upon threat of immediate death, for instance, there is no effective difference between having free will that you can never exercise and never having free will at all. So, for convenience, the model assumes that anything and everything can have free will when it meets the criteria. This also minimizes the chances of being surprised by the first truly "strong" AI, which is likely to be intelligent, but probably in a way that will make its free will inconsistent or sketchy - for instance, it might not be able to understand very advanced concepts.
That is to say, why is 'free will' something you have, rather than something you do? I would argue that 'will', decision-making capability, is something many beings have, including beings which can never or rarely exercise it freely (such as children); free will is the ability to exercise that will without internal or external impediment.
guenther wrote:Even under the case of duress with a gun waving in your face, some people can choose to ignore that threat.
And that's why the concept is also not binary: you can have more or less free will, because your will can be more or less free. If you work someone into a suggestible fervor, for instance, they may become less culpable for their actions in that state, but not completely so, as your 'freeze in panic' example could imply.
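One way to picture the non-binary reading is to score each criterion on a scale rather than pass/fail. This is only a hypothetical sketch (the averaging rule and the weights are my assumptions, not part of the model as stated):

```python
# Hypothetical sketch: free will as a matter of degree. Each criterion
# gets a score in [0, 1]; culpability scales with the overall degree.
# The simple average below is an illustrative assumption.

def degree_of_free_will(knows_facts, grasps_consequences,
                        free_of_manipulation, acts_on_own_desires):
    """Return a degree of free will in [0, 1]."""
    criteria = [knows_facts, grasps_consequences,
                free_of_manipulation, acts_on_own_desires]
    return sum(criteria) / len(criteria)

# Someone worked into a suggestible fervor: partly manipulated,
# so partly, but not wholly, less culpable.
print(degree_of_free_will(1.0, 1.0, 0.4, 1.0))  # 0.85
```

The point of the sketch is only that partial manipulation yields partial, not zero, free will under a graded reading.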
guenther wrote:I meant "express" in a very broad way. If you act on your desires, you are in a sense expressing them through action. This criteria means that you must have desires and you must be able to make choices based on them.
I would argue that this is axiomatic, and that many things which will never demonstrate free will about anything are capable of it, even the least intelligent animals. This touches on my concept of decision-making in general: plenty of things are decision-makers, but they will never be culpable for actions taken under the model of free will I posit.