Treating Machines Like People:
I've been pondering writing a short essay on an interesting legal issue that I run into from time to time: When applying traditional legal doctrines to computers, when should courts model computers as people?

  Here are two examples to demonstrate the problem. In an Australian case, Kennison v. Daire, 160 C.L.R. 129 (1986), a man withdrew $200 (AU) from an offline Automatic Teller Machine (ATM) using an expired card from a closed account. Bank employees had programmed their computers to dispense money whenever a person used an ATM card from that bank using a proper password. When the ATMs were offline, however, the machines were programmed to dispense money without checking whether there was money in the relevant account, or even if the account was still open. The defendant in Daire intentionally exploited this defect, took the $200, and was charged and convicted of larceny. On appeal, he argued that he had not committed larceny because the bank, through the ATM, had consented to his taking the money. The High Court of Australia rejected the argument:
  The fact that the bank programmed the machine in a way that facilitated the commission of a fraud by a person holding a card did not mean that the bank consented to the withdrawal of money by a person who had no account with the bank. It is not suggested that any person, having the authority of the bank to consent to the particular transaction, did so. The machine could not give the bank's consent in fact and there is no principle of law that requires it to be treated as though it were a person with authority to decide and consent. The proper inference to be drawn from the facts is that the bank consented to the withdrawal of up to $200 by a card holder who presented his card and supplied his personal identification number, only if the card holder had an account which was current. It would be quite unreal to infer that the bank consented to the withdrawal by a card holder whose account had been closed.
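  (For the technically minded, the programming choice the court describes can be made concrete. What follows is a minimal sketch in Python, written purely for illustration; the opinion discloses nothing about the bank's actual code, and every name below is invented.)

    # Hypothetical sketch of the offline ATM logic described in Daire.
    def authorize_withdrawal(card, pin, amount, online, accounts):
        if pin != card["pin"]:
            return False          # the card/PIN pair was always checked
        if amount > 200:
            return False          # the per-withdrawal limit noted by the court
        if online:                # host reachable: the full account check runs
            acct = accounts.get(card["account_id"])
            return acct is not None and acct["open"] and acct["balance"] >= amount
        return True               # offline: dispense with no account check at all

    # The exploit in Daire: a card from a closed account, used while offline.
    card = {"pin": "1234", "account_id": "closed-acct"}
    print(authorize_withdrawal(card, "1234", 200, online=False, accounts={}))  # True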
  Compare this case with the U.S. Supreme Court's opinion in Smith v. Maryland, 442 U.S. 735 (1979). In Smith, the government asked the phone company to install a "pen register" machine on the suspect's phone to find out what numbers he was dialing. When the defendant placed a call to harass his victim, the pen register installed at the phone company recorded the outgoing numbers dialed on the call, confirming the call to the victim. The defendant argued that the surveillance violated his Fourth Amendment rights, but the Supreme Court rejected the claim, relying in part on the equivalence between the pen register equipment and a person:
The switching equipment that processed those numbers is merely the modern counterpart of the operator who, in an earlier day, personally completed calls for the subscriber. Petitioner concedes that if he had placed his calls through an operator, he could claim no legitimate expectation of privacy. We are not inclined to hold that a different constitutional result is required because the telephone company has decided to automate.
  Daire and Smith are interesting cases, I think, because the outcome apparently hinges on how to apply legal doctrines designed for people in the case of automated machines. The question is, do you treat the machine as a stand-in for a person, or do you treat it as something else? On one hand, the instinct to anthropomorphize computers seems natural; computers are designed to perform tasks on their user's behalf, and it's easy to model them as mechanical servants. On the other hand, computers are just machines, and pretending that they are people seems inappropriate in a wide range of cases.

  In any event, I'm intrigued by the question, as it seems to me that the issue probably comes up in lots of different contexts. It seems to me that a coherent perspective on the problem — such as a normative theory for how courts should resolve such claims in different contexts — might be quite useful. At the same time, I have no idea whether this ground has been well-covered elsewhere, especially outside U.S. law reviews, or whether it is at all worth pursuing. My hope is that the VC's crack commenters can offer some thoughts and insights. What do you think, dear readers — Cool idea, or trivial issue lacking any novelty?
Alison:
I can see two apparent ways to rationalize these decisions.
1) When machines function as they are programmed to function, they are agents of the entity employing them, but when they have bugs, they are essentially on a frolic and detour, and are incapable of consenting in a legal sense; or
2) Courts don't like it when criminals try to exploit the definitions of things to avoid justice.

I feel that 2 is actually a more likely explanation.
12.5.2005 9:15pm
SuperChimp:
Orin,

I think it's a great idea, but I'm a little troubled by the US case. The defendant had no knowledge of the machine's existence; a machine undetectable to a person on a phone is in no way similar to a detectable (and known) operator. The requisite consent and/or surrender of privacy is vastly different in these two cases. One has no legitimate expectation of privacy when calling a person through an operator, yet how should this expectation be diminished through the presence of a machine unbeknown to the caller?
12.5.2005 9:43pm
Jack John (mail):
If the machine is invading your privacy no more than the person it is substituting for would, then for Fourth Amendment analysis, that it is a machine is of no moment. That does not conflict with the idea that a machine cannot leave the company with fewer legal rights (by creating a fictitious consent doctrine for machines). In both cases, the preexisting legal regime is being held constant.
12.5.2005 9:51pm
TomHynes (mail):
Suppose D had gone to a live teller and cashed a fraudulent check, and the teller didn't bother (or wasn't able) to check his balance. Isn't that still larceny? It isn't as if the teller or machine says "Your account is closed, but here is $200 anyway".
12.5.2005 10:07pm
John Armstrong (mail):
I seem to remember a case of a man who booby-trapped a shed to keep out trespassers, and who was held responsible when said trap injured or killed a trespasser. If I recall correctly, this was considered well-settled ground.

I don't see the lack of a computer being material at all; the man "programmed" a machine to carry out a task given a certain set of stimuli. When the machine followed the instructions as given, the instructor was held responsible. I've heard the phrase used, "Intent follows the bullet." Here the bullet was made of cash. The bank set in motion a series of events which culminated in the improper discharge of cash from one of its machines.

As to whether I'm in favor of treating machines as people or whether I'm sidestepping the issue, I suppose it comes down to the legal theory that leads to people being held responsible for the actions of machines -- computerized or not -- that they've instructed. If such machines are supposed to be analogous to instructing accomplices (ordering a hit, say), then yes: machines are the same as people to that extent.
12.5.2005 10:10pm
John Armstrong (mail):
As an addendum, I'm not intending to absolve the man who knowingly exploited a design flaw. If someone booby-traps a shed and I get injured, he's responsible for my injuries. If I know the shed is trapped and open the door with the intention of being injured so I can sue for damages, that is my own wrongful act.

I suppose to clarify: the man in question is guilty of fraud in the same manner as a confidence man. People who didn't realize that the machine wasn't supposed to give them money are not.
12.5.2005 10:16pm
ResIpsaLoquitur:
If I leave my door wide open at night, is it any less of a theft when my property is taken? I may have been stupid, but from a criminal perspective, theft comes down to knowingly taking what doesn't belong to the taker.
12.5.2005 10:29pm
Greg Lastowka (mail) (www):
Orin -- Check out my post today on the AIBO over at Concurring Opinions. I do think there's something worth writing about here and I'd be interested in what you come up with. I see it as your Problem of Perspective piece with a slightly different spin.
12.5.2005 10:29pm
Craig Tindall (mail):
I think this is an interesting issue that the courts will likely face more and more, and it is therefore worthy of some thought. First of all, I find both cases to be incorrect. I would view a machine as a machine and those who control the machine as the true actors. Until computers are able to act independently, the courts should look to the actions that led to the computers acting as they did.

In the first case, perhaps the court should have discussed whether the bank consented to the transaction by programming the computer to work in the manner that it did. Frankly, I still believe the "customer" should have been convicted because he knew he was committing a fraud. Would it have been any different if the teller had just made a mistake and handed over the bank's money from an account the withdrawer knew was closed? Sloppy banking practices, perhaps; but not "consent" to the transaction.

The second case seems to present a situation where the individual's 4th Amendment rights were violated. The law enforcement agency that requested the pen register should have secured a search warrant, if it could. The case is not about an expectation of privacy—there was, after all, apparently no actual operator involved in the call—but about the actions of a governmental agency and whether those actions violated a constitutional prohibition. To me the Court either bought or, worse, created a red herring in discussing an expectation of privacy (although that was pretty advanced thinking in 1979).

That said, computers are growing more sophisticated and capable of more things. Suppose a company buys a server and fails to protect it sufficiently from viruses. Suppose then the company purchases a large database containing personal identifying information, loads it onto this under-protected server, and a hitherto unknown virus picked up by the server causes that information to be compromised. Without laws requiring the company to protect the information (setting aside for the sake of argument the likely violation of contractual provisions), is the company vicariously liable for the wrongful actions of the computer? It might not be under a strictly negligence-based analysis, because the virus was unknown at the time. However, could it be argued that the company was vicariously liable for the computer's actions, as it might be if the information had been compromised not by the computer but by a rogue employee who had not previously shown a propensity to wrongfully divulge information?

In any event, seems like there's lots here to explore. I'd be interested in the essay.
12.5.2005 10:41pm
Bobbie:
In case you're interested, a pen-register case came up in Texas about ten years ago and the Texas high court held that the pen-register violated a defendant's state constitutional rights.
12.5.2005 10:49pm
Michael Eisenberg (mail):
Mr. Kerr:

I have had the opportunity to actually help litigate whether it is proper to anthropomorphize to harbor seals the emotions of a human being. See pages 23-24 here. To the respondent's chagrin, the definition of harassment in the Marine Mammal Protection Act includes the term "annoyance" (see the definition of harassment in 16 U.S.C. 1362 here), which NOAA and the NMFS have applied to cover any action by a human that causes a marine mammal to alter its behavior in the wild. In United States v. Tepley, 908 F. Supp. 708 (N.D. Cal. 1995), the U.S. tried to anthropomorphize whales to humans in a case brought under the MMPA. A whale watching tour had ended badly, with a whale taking one of the sightseers in its mouth down to a depth of 40 feet before returning her to the surface. There the court held:

"such 'anthropomorphic rationalization' cannot be the basis for the severe penalty that was imposed here; yet, the ALJ's findings centered on his evaluation of the 'whales' annoyance at the humans presence."

Also note footnote 9 of the opinion. It was an interesting point of law to litigate, and we hope that Conrad Lautenbacher, the undersecretary of commerce for oceans and atmosphere, agrees with us, as the above decision is under appeal.
12.5.2005 10:57pm
Wintermute (www):
I wouldn't look too hard for master principles in this area. Technology is evolving too fast, and cases will be settled on notions of practical justice rather than technical defenses.
12.5.2005 11:35pm
Chase Tettleton (mail) (www):
This is an interesting topic, if only for how important this COULD become in the next, say, 30 years.

Under the currently accepted paradigm of computers and machines, they operate only at the whim of the operator. They don't possess any form of self-determination (in the Johnny 5 sense) or innate desire for self-preservation (in the HAL sense). They operate only as they are instructed: if they malfunction, as they did in the Australian case, it is the prerogative and/or negligence of the operator. There can be no moral equivalency as of now.

That being said, this is a topic to be revisited in the coming years when computing technology (and all the promise the future holds) advances to give computers their own motivation. I shudder.
12.5.2005 11:37pm
taalinukko:
Michael,

So my kids have a habit of teasing the dog (who is a saint and very good about it) by tickling the hairs between her toes. You can watch this process and see her get annoyed to the point where she gets up and leaves the room. I take it that you are saying that to ascribe the emotion of annoyance to the dog is an improper case of anthropomorphizing on my part?

What then are the limits of what emotional states we can attribute to various creatures? I think that the harbor seal could well be annoyed, a slime mold not so much. What about pain? I could imagine that one could eliminate a whole class of animal cruelty cases by claiming that the assertion that the animals felt pain is improperly anthropomorphic.

Just some thoughts...
12.6.2005 12:15am
BobVDV (mail):
I recall a bankruptcy case in Florida where the bank's computer repeatedly violated the automatic stay by sending dunning notices to the debtor. The bank's counsel blamed it all on the computer, and the judge (Cristol?) said if it happened again he would hold the computer in contempt and fine it some megs of memory! There's a published opinion on it somewhere.
12.6.2005 12:16am
Sandstein:
In Switzerland, where I live, it was recognised in the 1980s (I believe) that the statutory definition of fraud would not cover ATM manipulations of the sort described in the original post, because no human victim existed to be misled (an important element in the Swiss statutory definition of fraud).

So the legislator was put to work and came up with Article 147 of the Penal Code, which states (very loosely translated by yours truly):

Fraudulent Abuse of a Data Processing Device -- Whoever, with the intent to unlawfully enrich himself or another, influences an electronic or a comparable data processing or data transmission procedure by untrue, incomplete or unauthorised use of data, or in a comparable manner, and thereby causes a transfer of assets to the damage of another, or immediately thereafter conceals such a transfer, shall be punished by a prison term of up to five years.

Not a model of splendid statutory draftsmanship, in my opinion, but it appears to serve.
12.6.2005 1:09am
Michael Eisenberg (mail):
Mr. or Ms. taalinukko:

When one physically interacts with a mammal, the relationship between the animal's negative response and the person's action is clear; the person is clearly attacking the dog. In the facts of the case I worked on, the respondent was simply swimming when the seals left the beach in response to her. What were the mental states of the seals? Were they acting because they were "annoyed," or merely responding out of what thousands of years of evolution has taught them to do when a human-sized animal appears in the vicinity? Is a flock of pigeons annoyed when they take flight in response to a boy running through the park? My point is that it's absurd to try to evaluate the mental states of the seals in the fact situation of Lilo Creighton, who was the respondent in the above case. Is the gov't going to put the seals on the stand? The 9th Circuit declined to engage in such "anthropomorphic rationalization" in U.S. v. Hayashi, and the court expressly rejected it in Tepley.

I'm not saying that animals cannot feel emotion; in your example your dog certainly did. I was just saying that both the law and common sense say we shouldn't try to figure out whether a mammal is feeling annoyed in determining harassment under the MMPA, but should look solely at the acts of the respondent.
12.6.2005 1:24am
JB:
The first instance seems to be simple fraud. It doesn't matter whether you defraud a machine or a person--the action is still the same on the perpetrator's part: obtaining something to which he has no right, through the negligence of another party.
12.6.2005 2:15am
Edward A. Hoffman (mail):
There is a significant difference between the two cases -- the theory in Daire was that the machine had made a decision as the bank's agent, but in Smith the theory was just that the machine had followed instructions as a person might have, with no discretion as to what to do in a given situation. It makes sense to distinguish between things we normally entrust to machines (e.g., repetitive or routine tasks) and things we don't (e.g., decisionmaking).

Another point is that no one -- human or machine -- had made the decision Daire wanted the court to impute to the bank. It would have been enough to say that he incorrectly argued someone had decided to approve his withdrawal without broaching the subject of whether machines are people.

Which means, of course, that the discussion is merely dicta.
12.6.2005 3:17am
Fra. 219 (mail):
It seems to me that we have to take the bank's word at face value ... even when that "word" is written in a computer programming language.

The bank wrote or deployed software that sometimes dispenses money without checking whether the requester is authorized to use an account. It is not clear to me that this is a "bug" in the sense of an accident or omission on the part of the programmer. As a programmer myself, I would expect it to be an intentional feature of the software.

Software can exhibit undesired behavior of different kinds. Sometimes, when writing a program, one makes a mistake of omission. For instance, a programmer may not realize that an account balance can ever be negative, and thus may fail to add instructions to check for a negative balance. As a result, the software would continue to authorize withdrawals even when the balance is negative.

However, sometimes undesired software behavior happens because the program was written correctly and completely, but from a specification which had unforeseen consequences. The person who approved the software design understood that the program would do a certain thing ... but did not realize that doing this thing, under certain circumstances, would cause disaster. That is, the program is written correctly to reflect the designer's intent, but the designer's intent was ill-conceived.
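
To make the two failure modes concrete, here is a toy sketch in Python (all names invented, and of course far simpler than any real system):

    # Failure mode 1 -- mistake of omission: the programmer never imagined a
    # negative balance, so the balance check simply is not there.
    def authorize_buggy(balance, amount):
        return amount <= 200            # missing: "and balance >= amount"

    # Failure mode 2 -- correct code, ill-conceived specification: the approved
    # design says "if the host is unreachable, dispense anyway," and the
    # program does exactly that.
    def authorize_as_specified(host_reachable, balance, amount):
        if host_reachable:
            return amount <= 200 and balance >= amount
        return amount <= 200            # deliberate, per the design

    print(authorize_buggy(balance=-500, amount=100))             # True: nobody checked
    print(authorize_as_specified(False, balance=0, amount=100))  # True: by design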

Yet this is not exclusive to computers. Suppose that I hold a private party, and tell the doorman to let in my friends, who will identify themselves by saying a particular password. The password I choose, however, is the phrase "Let me in!" -- a phrase which many unauthorized people will naturally say when frustrated that they are not let in to the party.

The doorman carries out my express instructions precisely, just as a computer program would. However, the consequences are not those which I wished ... because my instructions had a result that I should have anticipated, but failed to. People who have no reason to believe they are entitled to enter the party might find, in the course of attempting to gain unauthorized access, that the doorman quite deliberately lets them in. The would-be gate-crashers find themselves pleasantly surprised: instead of being turned away as they expected, they find themselves invited in.

It seems to me that we have to take my instructions to the doorman at face value. I said, "Let anyone in who uses the password." I delegated to my agent the responsibility of allowing or denying access, and provided explicit instructions as to under what circumstances to allow access.

If I grant permission, or create a contract, with certain conditions, those conditions obtain even if they have consequences I later decide that I did not like. My party is flooded with people I don't know, who drink all my beer and behave rudely to my friends, and I have to go to extra trouble to kick them out. However, I suggest that they were not trespassing: they were, after all, invited in, after they did something that "shouldn't have worked" -- saying "Let me in!" to a doorman.
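
Written out as the tiny program my instructions amount to (a sketch, of course), the flaw is plainly in the rule rather than in its execution:

    # My instructions to the doorman, executed exactly as given.
    PASSWORD = "Let me in!"     # ill-chosen: frustrated strangers say it unprompted

    def admit(guest_says):
        return guest_says == PASSWORD

    print(admit("Let me in!"))  # True: the gate-crasher is deliberately admitted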

If a bank instructs its computer specifically to allow access to an account without a password, then we need to take the bank's word at face value. By writing software that does so, the bank really did say that anyone should be allowed to withdraw. In so doing, the bank served its customers incredibly poorly: it literally gave away the money with which they entrusted it.
12.6.2005 4:08am
Conrad (mail):
It seems to me that the only issue raised by these cases is that some judges completely miss the point.

In respect of the bank case:

A computer is a tool and is unable to do anything that it is not programmed to do by its "masters".

The bank programmed the ATM to dispense money when the system was down and verification of account status was not possible. Presumably, either the bank's intent was to avoid inconveniencing legitimate bank customers or they were simply staggeringly stupid. I think it's safe to assume that the bank's intent was not to declare periodic open seasons for fraudsters.

The defendant gamed that system in order to obtain money to which he was not entitled. Whether or not he's guilty of a crime depends upon whether local law:

(1) recognizes the bank's stupidity/negligence as a defence against a criminal charge. Presumably it does not since this would amount to a contributory negligence defence in theft cases, which seems to me a very bad thing, or

(2) the bank's decision to allow cardholders to withdraw money when accounts could not be verified amounted to consent for non-account holders to withdraw money they had no right to receive from the bank. Again, this seems unlikely. The fact that the system permitted the defendant to obtain money he was not entitled to hardly seems to constitute consent. Otherwise, one could argue that taking money from the church collection plate is legal, because the church consented when it employed a system that allowed me to do so.

The proper inquiry focuses on the actions and intentions of the actual humans who programmed the machine. The case is easily decided without any need to anthropomorphize the ATM.

When there's a car accident, you don't enquire whether to treat the automobile as a de facto person. You look at the intent and actions of the person behind the wheel. And when you have a computer accident, you look at the intent and actions of the person behind the programming.

As for the pen register:

This is just stupid reasoning. Yes, had the defendant placed his calls through an operator, he would have had no expectation of privacy. But since he didn't place them through an operator, that's neither relevant nor helpful. Had the defendant conveyed his message by newspaper advertisement or by yelling across the street he would also have had no expectation of privacy, but that's not what he did.

Nor is the argument that automated equipment has replaced the operator germane. It's also replaced the telegraph, carrier pigeons and smoke signals. So what? Private homes have replaced communal cave dwelling; does that mean I have no expectation of privacy in my bedroom?

Or, to debunk this holding another way -- if the police put a tap on my phone, have my Fourth Amendment rights been violated? The state cannot argue (at least not successfully) that the bugging device was not human and therefore my rights weren't violated. The conversations detected by the tap were listened to by humans. The inquiry is: did those humans have a legal right to intercept those calls?

Same with the pen register. The device recorded the phone number which was read by humans. Did those humans have a legal right to obtain that number?

Answer that question and you've resolved the case without any need to fret about whether or not to treat the device as a human being.
12.6.2005 4:23am
DK:
I'd love to see the analysis the bank performed before this happened. Did an executive tell a programmer "I don't care, just make sure our customers can get money even when the machine is offline"? Or did an executive actually order a report weighing the risk of theft against the cost of annoying a customer?* One thing that is sure is that the programmer, not the executive, got fired for this.

* Note I am assuming that the customer is not a harbor seal but either a person or a dog, so that we can discuss whether the customer is annoyed without excessive anthropomorphization.
12.6.2005 8:46am
JRL:
The logic of the argument that the machine consented seems to my mind similar to the argument that it can't be stealing because the satellite signals are landing in my backyard.
12.6.2005 8:54am
Public_Defender:
The questions are easier if you look at the machines as extensions of the people who run them. What did the people at the bank intend when the ATM was offline? No one would reasonably say that they meant to offer free money to people with closed accounts.

You can also use metaphors to remove the machine. If a teller had left $200 sitting on the counter when she went to get a glass of water, would anyone reasonably believe that they could walk in, take the money, and leave?

Another analogy is a fired employee using his key (ATM card/PIN) to enter his former employer's workplace (ATM) to take office supplies (money). Yes, he still had physical access, but he knew he did not have permission to use that physical access.

The key to winning a new technology case is to win the battle of the metaphors.
12.6.2005 9:01am
Vinnie:
The ATM case is a simple question of mens rea. If the perp thought he had the money to withdraw, or just thought he might, then no fraud. The ATM properly carried out the intent of the programmer, so it should stand in for the bank's intent.

As for detour and frolic - a bug may cause behavior not intended by the program's designer, but it's no frolic. I've wondered about liability that might arise from an "easter egg" in a program - that looks more like a programmer frolic.
12.6.2005 9:03am
itsme (mail):
How would the answer to this question (whether, for purposes of the law, computers should be viewed simply as "machines" or as though they were human) pertain to "expert" systems causing harm?

Imagine, for example, a software program that turns out medical diagnoses or treatment recommendations. The advice given proves wrong, with injury the result. In a lawsuit against those who wrote or run the software, should the standard applied to determine negligence be the same as it would be for human practitioners who make diagnoses and treatment recommendations, and who might have been sued had they given similarly wrong advice? Or ought there be a different standard when there is a computer surrogate?
12.6.2005 9:54am
Steve Melendez (mail):
I definitely see some other cases where this could be interesting:

Do you remember the issue last spring or so when business school applicants were able to access their admissions info ahead of time by going to a particular URL? I remember some bloggers defending their actions, saying that all they did was submit an HTTP request, which was approved by the web server of the business schools' contractor.

People make a similar argument when their computers "request" to connect to neighbors' wireless routers, saying that the router, as an agent of the neighbor, grants permission.
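
Reduced to a toy model (the names and paths here are invented, not the actual sites), the business school exchange was nothing more than this:

    # The contractor's server, reduced to its essence: it serves the decision
    # page to whoever supplies a valid applicant id. The "hack" was simply
    # asking before the official release date.
    def handle_request(path, decisions):
        applicant_id = path.rsplit("/", 1)[-1]
        if applicant_id in decisions:
            return 200, decisions[applicant_id]   # the server "approves"
        return 404, "not found"

    decisions = {"12345": "Congratulations!"}
    print(handle_request("/decision/12345", decisions))   # (200, 'Congratulations!')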
12.6.2005 10:56am
Mr. Mandias (mail) (www):
I'd be interested in the article.
12.6.2005 11:05am
Mr. Mandias (mail) (www):
Interested too.
12.6.2005 11:05am
taalinukko:
Michael Eisenberg,

The "ukko" in "taalinukko" affectionately could be translated as "geezer" so that would make me a Mr. [ Gee I thought that everybody on the internet knew every obscure little language ;-) ]

To the real point: in your original post you made the claim that assessing the mental state of an animal improperly anthropomorphizes it and should not be allowed. I was illustrating some of the logical conclusions of that position, which on first brush appear much more extreme than we would like.

In your second post, where you give some more specifics of the case in question, I can see your point as to whether the swimmer was actually annoying the creatures in question. I personally do not think that annoyance equates with harassment in this situation. I see a parallel in a local situation where we had some Canada geese build a nest next to a neighborhood path. This path had moderate traffic, and passing the nest in question was always annoying for all parties, but neither the geese nor the people were harassed.
12.6.2005 11:19am
hinglemar (mail):
A machine is just a machine. Usually there's a person responsible for the behavior of the machine and that's who the law should be dealing with.

But I'd like to go to the future for a more interesting problem -- now that DARPA's Grand Challenge has been won, it's only a matter of time before autonomous vehicles are driving the streets. An operator-less vehicle. Who's responsible in the event of a mishap? After a night of drunken debauchery I climb in and instruct the vehicle to take me home. Am I the operator because I commanded the vehicle?
12.6.2005 11:44am
Tom W. Bell (mail) (www):
Check out Thrifty-Tel, Inc. v. Bezenek, 54 Cal. Rptr. 2d 468 (Cal. Ct. App. 1996), available in edited form at . You'll there see the court breezily claim that, because a computerized phone system acts as an agent of its principal, one can commit fraud by accessing it via a stolen passcode. My Internet Law students and I had fun parsing that claim. How can an unconscious machine be an agent if, per blackletter law, an agent must consent to serve its principal?
12.6.2005 11:46am
HeScreams (mail):
Mr. Kerr --

I think the arguments have been made more completely above, but I just want to register my votes on the issue:

* In the ATM case, it was never the bank's intention that the perp get the money, and the perp knew that the money was not his. It was possibly part bank error, but definitely a crime.

* In the pen register case, the suspect sent his phone numbers over the wires, so the pen register seems exactly like a phone tap.

* It seems that both cases could be resolved by analogies that don't require anthropomorphization, and worrying about whether machines can be treated as humans seems like a red herring. (And, if they are to be treated as humans, then how do we hold them accountable for their actions?)


hinglemar --

Researchers at Benz in Germany claimed to have cars that could drive themselves on roads a few years before the Grand Challenge. But these cars, like the Toyotas in Japan that can parallel park themselves, aren't deployed due to liability issues (if I understand correctly).
12.6.2005 12:11pm
Blah, blah, blah... (mail):
This is a much bigger problem than folks think. An uncle of mine does systems for Citibank, and his take is that there are organized gangs moving around testing systems for ATMs that are disconnected from the net. It's quite easy, they just set up an account, keep it open with a couple of dollars, and try to withdraw $200 at a bunch of ATMs. The accounts are opened using aliases (obviously), and when the gang hits on an ATM off the grid that actually gives them money, they swoop in with forty or fifty fake accounts to pull $200 per until the ATM runs out of money. They can always tell when an ATM was offline without checking any logs, because the ATM runs out of money long before it should have.
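
The tell at the end amounts to a simple reconciliation check, something like this (figures invented):

    # Offline dispenses never reach the central ledger, so the cash cassette
    # empties faster than the recorded withdrawals can explain.
    cash_loaded      = 20000   # what the ATM was stocked with
    cash_remaining   = 2000    # what is left in the cassette
    ledger_withdrawn = 6000    # withdrawals the central host actually recorded

    unexplained = cash_loaded - cash_remaining - ledger_withdrawn
    if unexplained > 0:
        print(unexplained, "dispensed off the books -- the ATM was offline")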
12.6.2005 1:33pm
C Brown:
One difference between automated machines or computers and people in search/wiretap type cases: machines can be "blinded" to all input except the item being searched for (phone number called). That is, a machine can act as a filter which ignores all calls made unless to a certain number of interest, at which point an action/notification is triggered. Thus the government can narrow the scope of their intrusion when using this method.
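
A sketch of such a filter (the target number is invented): the device sees every number dialed but retains nothing unless it matches the one the order covers.

    TARGET = "555-0142"   # hypothetical number named in the order

    def pen_register(dialed, log):
        if dialed == TARGET:
            log.append(dialed)   # the only thing ever retained
        # every other number passes through unexamined

    log = []
    for number in ["555-0199", "555-0142", "555-0123"]:
        pen_register(number, log)
    print(log)                   # ['555-0142']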
12.6.2005 1:39pm
YG (mail):
Orin,


A. The issue seems a very interesting one, and your idea of developing (or at least sparking academic discussion about) a unified theory for implying mental state to a machine is one I would definitely like to see more about.


B. One possible thread-end in this tangled ball of legal twine is contract law. Specifically, since contract law requires a meeting of the minds, how does it deal with contracts generated automatically? This is a fairly well-explored topic, but though no longer novel it may still provide you with a good starting point for a broader inquiry regarding anthropomorphization of machines. Look into the literature on the Uniform Computer Information Transactions Act (UCITA), and the Uniform Electronic Transactions Act (UETA), and their respective provisions for handling "consent" and the like.

12.6.2005 3:05pm
Splunge (mail):
Well, Orin, it's a fad topic, like nanotech, so I don't doubt it will be well-received if you write about it.

Do I think it has any practical importance? As someone who works regularly with "learning" computer systems designed to be adaptive and "intelligent": ha ha. We only really need to worry about treating machines as people instead of chattels when they have the same or at least similar cognitive capabilities as adult humans, and when they seem to us worthy of the same or at least similar rights and responsibilities.

When that day comes the difficulty of deciding precisely when a machine has given an agent's consent is not going to be the thorniest problem the law must tackle -- just imagine machine plaintiffs in a Plessy or Dred Scott action.
12.6.2005 3:51pm
Thief (mail) (www):
Late in seeing this, but I think the old aphorism applies here: Garbage In, Garbage Out. Computers do only what they are programmed to do. If the computer messes up, it is the fault of the programmer for inputting bad instructions, not the computer for following those instructions.

/"User Error. Replace User."
12.6.2005 5:43pm
K. Parker (mail):
Treating machines as people? Turnabout is fair play, I guess: people have been treated as machines for quite some time...
12.7.2005 1:17am
jgshapiro (mail):
This posting reminded me of a Star Trek episode where the resident android (Data) was put on trial to determine whether he was merely a machine or could exercise free will in refusing to be disassembled by a Federation engineer who wanted to replicate him.

One side in the trial argued that the android had enough human characteristics to be considered sentient and therefore to possess rights as an individual. The other side argued that he was a next-generation toaster. (Of course, on TV, he is found to have rights and everyone lives happily ever after.) See, e.g., "The Measure of a Man."

I haven't watched a lot of Star Trek, but this one stuck in my mind for some reason (maybe because it seemed like a law school exam question, and I was in law school when I saw it).

BTW, I did a Google search to find a link w/info on the episode. I had no idea how many Star Trek sites are out there. Wow.
12.7.2005 3:45am