We are pleased to announce a community reading project that will culminate in a visit from the author.
A great deal of care went into selecting The Alignment Problem: Machine Learning and Human Values by Brian Christian. Many of the people we have spoken to about AI have cited it as one of the best books on the subject they have read, if not the best (for more, see the blurb below). In addition, the book falls into neither the AI-evangelist camp nor the AI-doomer camp, a rare thing right now. What it does well is present a clear picture of some of the real problems inherent in deploying AI, and of the work people are doing to solve those problems. There are, it should be noted, real questions about whether some of these problems can ever be solved.
We are happy to buy a copy for anyone on campus who would like one, and we have also asked the library to obtain extra copies, including electronic copies. If you would like your own copy of the book, please use this book request form to provide your campus address so that we can send you one.
We are still nailing down dates for the author's visit but are currently working on a timetable that would put it in early October. We will make future announcements about guided discussion sessions ahead of the visit, as well as details of the events surrounding it.
"Brian Christian’s recent book The Alignment Problem is the best book on the key technical and moral questions of A.I. that I’ve read. At its center is the term from which the book gets its name. 'Alignment problem' originated in economics as a way to describe the fact that the systems and incentives we create often fail to align with our goals. And that’s a central worry with A.I., too: that we will create something to help us that will instead harm us, in part because we didn’t understand how it really worked or what we had actually asked it to do." - Ezra Klein