Per this TechCrunch article, Elon Musk... was in on the AI train early with an investment in DeepMind, which was later acquired by Google. Unlike most investments, Musk wasn't in DeepMind for a return; he wanted greater insight into DeepMind's progress, and the progress of AI in general... The enterprising CEO wanted to see how fast AI was improving, and what he found was a rate of gains he hadn't expected, and that he thought most people could not possibly expect. As TechCrunch points out, Musk's anxieties around AI are considered extreme by some of his Silicon Valley peers, but the man definitely seems to have a knack for long-term preparedness planning. As covered in my last course on Bostromian Philosophy, Musk has shared that his understanding of AI and its status as an existential threat is heavily influenced by Nick Bostrom's "Superintelligence: Paths, Dangers, Strategies".
We touched on this when we talked about utilitarianism: could a computer calculate ethics? What would be its starting point? Which ethical systems are computational? This article is fantastic.
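To make the "computational ethics" question concrete, here is a minimal sketch of what a naive act-utilitarian decision procedure might look like in code. Everything in it is hypothetical: the actions, the affected parties, and especially the welfare numbers, which are exactly the "starting point" problem the question raises.

```python
# A toy act-utilitarian "calculator": choose the action whose summed
# welfare effects across everyone affected are highest. All actions,
# stakeholders, and utility values below are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class Action:
    name: str
    # Estimated welfare change for each affected party (arbitrary units).
    welfare_effects: dict[str, float]

    def total_utility(self) -> float:
        # Classic utilitarian aggregation: sum welfare across all parties.
        return sum(self.welfare_effects.values())


def choose(actions: list[Action]) -> Action:
    # The "ethical computation" reduces to a maximization over actions.
    return max(actions, key=lambda a: a.total_utility())


if __name__ == "__main__":
    options = [
        Action("tell the truth", {"alice": -2.0, "bob": 5.0}),
        Action("stay silent", {"alice": 1.0, "bob": 0.0}),
    ]
    best = choose(options)
    print(f"Utilitarian choice: {best.name} (utility {best.total_utility()})")
```

Note what the sketch takes for granted: someone has already assigned numeric welfare values to every consequence. Utilitarianism is computational in form (aggregate, then maximize), but the hard part is producing those inputs; rule-based systems like deontology don't reduce to a single maximization this cleanly.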