The Dangers of AI Outlined
I’ve started reading Nick Bostrom’s ‘Superintelligence: Paths, Dangers, Strategies’. Bill Gates recommended it, along with ‘The Master Algorithm’, as one of the best books for understanding Artificial Intelligence. Where ‘The Master Algorithm’ focuses on finding the computational approach that could produce great learning and spark a great AI, ‘Superintelligence’ asks about the dangers of creating an intelligence greater than that of humanity.
Is Superintelligence possible?
For me, both books prompt reflection on the nature of intelligence and of humanity itself, and on what’s possible, because no one is sure whether humanity can build an intelligence greater than its own. Our intelligence emerged from millions of years of evolution, so it will probably be difficult to mimic it, let alone create something greater.
One reason I’m drawn to the book is something Bostrom says early on, a kind of admission that’s rare in non-fiction:
“Many of the points made in this book are probably wrong. It is also likely that there are considerations of critical importance that I fail to take into account, thereby invalidating some or all of my conclusions.”
Learning ML on AWS
I have a theory, or several theories, brewing about AI, and by reading this book they will either be confirmed, displaced by something else, or eliminated entirely. Even if I can’t come up with something innovative in Machine Learning right away, the book sparks my interest and inspires me to continue working with Machine Learning on AWS infrastructure, making better decisions with what’s already available.