“The Reasonable Robot” Looks At The Intersection Of Artificial Intelligence (AI) And Law

I was sent a copy of Ryan Abbott’s “The Reasonable Robot” by the publisher. It is an interesting book that discusses a few critical areas of law as they could interact with artificial intelligence (AI). The book is worth reading, even if it is far from perfect. It is an excellent discussion point, a starting place for people to begin to think about artificial intelligence and the law.

Software and law have always been an intersection that has interested me. Back in the dawn of time, one of my senior papers during my undergrad was on how copyright, trademark, and patent laws apply to software. Let’s say that my initial dislike of patents for software hasn’t changed over the decades.

When I was contacted about reviewing the book, I was therefore interested in what a lawyer has to say about AI. In fiction terms, the book itself is more of a novella. It’s a thin tome that is a good read, as it can start conversations on the subject. Ryan Abbott has put together some thoughts on the issues; some of those ideas are ones to which I subscribe, while others aren’t as well thought out.

The first chapter is a good introduction to AI for those non-technical people who need a foundational understanding in order to continue with the rest of the book. One quibble I have is that it would have been best, in his definition of AI, to describe the difference between AI and AGI. Artificial intelligence includes the basics. Artificial general intelligence (AGI) is the search for, as the name implies, a type of AI that is more like human thought – one that is able to look at a wide variety of problems. As I’ve long said, it’s easy to define AI (really, AGI) as “whatever we still don’t understand about general intelligence,” with sections becoming their own specialty as we understand their basics. Computer vision, robotics, and even today’s deep learning are good examples of that, with more bits of AI understood while naysayers continue to say AI is a myth because we still don’t understand what we still don’t understand. The second half of the chapter is a good overview of basic AI concepts and classes of understanding.

The second chapter lays out the core problem I have with the book. It is titled “Should Artificial Intelligence Pay Taxes?” That question could have been clarified, or avoided entirely, if the author understood the difference between AI and AGI. It should have a short answer: “no!” AI is software. It is not its own entity. The people who own AI pay taxes. The author’s intent is good, just misguided. A better question would have been whether companies deploying AI should be taxed to replace the missing government revenue. That discussion is excellent and needs to be had as AI changes the economy.

The chapter lays out how this technology revolution is different from previous revolutions in that, yes, it will be destroying jobs. That means tax revenue will be lower, because companies aren’t paying as much in payroll taxes and the unemployed also aren’t paying taxes. There’s also the additional question of a wealth tax, something beyond the scope of this article. The book’s point that governments need to take a clear look at the impact of AI on taxes is critical; it’s the way that it’s described that creates the issues.

The confusion in this chapter, and later in the book, comes from a problem I’ve regularly seen: academics don’t often understand how business works (and that includes most academics in business departments).

Chapter three is labeled with the book’s title, “The Reasonable Robot.” It starts with a clear overview of liability and tort concepts as applied to AI. One key concept is the question of what is reasonable. For instance, current car accident law tends to view the actions of individuals and compare them to what a “reasonable person” would have done. So what happens when autonomous vehicles are statistically safer than people? Do we start incentivizing people to stop driving by comparing their actions to what a “reasonable robot” would have done? It’s a very intriguing concept.

The problem I have with this chapter is the author’s contention that strict liability is bad for AI. When you’re deciding on what an individual did, the focus is on that individual. AI? That is created by a corporation, an amorphous being where each individual can claim they weren’t the person making the decision and thereby avoid responsibility. For a corporation, given the lack of transparency of both its organization and its software, negligence is too weak a standard. Mr. Abbott thinks that strict liability might “discourage automation.” My response is that it discourages sloppy and dangerous automation. If the company focuses on being able to prove what it has done, strict liability won’t discourage honest advancement of AI any more than it discourages the advancement of other products.

Chapters four and five focus on AI systems as inventors and what rights they may have. I, again, answer that simply with a “none.” They are tools. The individuals and companies that use those tools to invent are no different than those currently using advanced analytics to find new chemical compounds or other relationships. When there’s an AGI, the question of whether or not a system is an individual with rights will have important philosophical, moral, and legal implications. Right now, it doesn’t.

The next chapter, six, again makes the novice mistake of ignoring the difference between AI and AGI. Punishing AI is no more logical than punishing the screwdriver used to kill somebody. “Bad! Bad screwdriver!”

The final chapter discusses “AI neutrality.” I agree that neutrality is necessary; as repeatedly mentioned, AI is a tool. One example the author uses is the issue of bias in sentencing based on bad AI training data, something I discussed almost three years ago. To link back to the previous paragraph, we don’t blame the Pinto for exploding or a child’s toy for having lead as an ingredient. It’s not the AI that must be held accountable; it’s the company that built the system and the organizations that deploy the system.

This article originally appeared on forbes.com. To read the full article and see the images, click here.

