As a lawyer and now a judge who's followed developments in artificial intelligence for years, I was pleased to have the chance to speak on this topic at the recent Ninth Circuit Conference and appreciated the article in The Recorder covering the panel on which I appeared. But I want to clarify my views on the potential risks posed by "superintelligence" discussed in philosopher Nicholas Bostrom's influential book, "Superintelligence: Paths, Dangers, Strategies" (2014).
Bostrom argues that technological developments could eventually lead to forms of machine-based intelligence that sufficiently exceed human intelligence to pose an existential risk to humanity. During the panel, I emphasized instead the role that humans, rather than intelligent machines, may play in deploying, and in some cases misusing, automated technologies equipped with the capacity to use lethal force.
Although I believe Bostrom's argument suffers from certain limitations (it may understate both the short-term difficulties associated with achieving artificial general intelligence and some of the dilemmas humanity is already encountering with artificial intelligence, even though machines fall well short of "superintelligence"), I suspect Bostrom is right in two important respects. First, if human ingenuity continues to improve on existing work in the field of artificial intelligence and some kind of machine general intelligence emerges to rival or exceed human abilities, it will likely happen in a manner that is surprising and that leads to unexpected consequences. Second, society would do well not to ignore the longer-term risks posed by that kind of "superintelligence."
Cuéllar is a Justice of the Supreme Court of California.