Kissinger, Schmidt and Huttenlocher are not afraid to explore the darkest side of AI, either. They are clear-eyed about the ways that AI could enable dictators to monitor their citizens and manipulate information to incite people to commit violence.
Although AI is already making our lives better in many ways, Kissinger, Schmidt and Huttenlocher caution that it will take us as a species many years to create a system as powerful as we deserve. They wisely suggest that we not lose sight of the values we want to instill in this new machine intelligence.
Thank you, GPT-3! Now, a few notes:
First, the A.I. wasn’t an unqualified success. It took Sudowrite a few tries. On the first attempt, it spit out a series of run-on sentences that hinted that GPT-3 had gotten stuck in some kind of odd, recursive loop. (It began: “The book which you are reading at the moment is a book on a nook, which is a book on a book, which is a book on a subject, which is a subject on a subject, which is a subject on a subject.”) A few tries later, it seemed to give up on the task of book reviewing altogether, and started merely listing the names of tech companies. (“Google, Facebook, Apple, Amazon, IBM, Microsoft, Baidu, Tencent, Tesla, Uber, Airbnb, Twitter, Snap, Alibaba, WeChat, Slack.”)
But it warmed up quickly, and within a few minutes, the A.I. was coming up with impressively cogent paragraphs of analysis — some, frankly, better than what I could have generated on my own.
This speaks to one of the recurring themes of “The Age of AI,” which is that although today’s A.I. systems can be clunky and erratic at times, they are getting better fast, and will soon match or surpass human proficiency in a number of important tasks, solving problems in ways no human would have thought to solve them. At that point, the authors write, A.I. will “transform all realms of human experience.”
Second, while GPT-3 was correct about the scope of “The Age of AI” — with chapters on everything from social media algorithms to autonomous weapons — it failed to note that all of that breadth comes at a cost. The book feels cursory and shallow in places, and many of its recommendations are puzzlingly vague.
In a chapter on the geopolitical risks posed by A.I., the authors conclude that “the nations of the world must make urgent decisions regarding what is compatible with concepts of inherent human dignity and moral agency.” (OK, we’ll get right on that!) A brief section about TikTok — an app used by more than a billion people worldwide, whose ownership by a Chinese company raises legitimately fascinating questions about national sovereignty and free speech — ends with the throwaway observation that “more complex geopolitical and regulatory riddles await us in the near future.” And when the authors do make specific recommendations — such as a proposal to restrict the use of A.I. in developing biological weapons — they fail to elaborate on how such an outcome might be achieved, or who might stand in its way.