Should we give Super Intelligent AI goals, and if so, whose goals?
“Life 3.0: Being Human in the Age of Artificial Intelligence” – Max Tegmark
The thorniest topic in AI, as the author states, is ‘goals’.
Should we be giving AI goals at all? How would we do so? How do we ensure that an AI retains those goals as it moves from subhuman to superhuman intelligence? And what happens if an AI system’s goals do not align with ours? This is a fascinating read that will change the way we think about AI, Super Intelligence, the singularity and, most importantly, the future of humanity.