Greatest threat to humanity: artificial intelligence
In The AFR I explain why Microsoft's Bill Gates, Tesla's Elon Musk, the celebrated Cambridge University physicist Stephen Hawking, and Nobel Prize-winning scientists think the single greatest existential threat to humanity is evolving a successor to our species in the form of "strong" artificial intelligence.

My columns will occasionally flesh out low-probability yet catastrophic risks that investors need to be mindful of. Having reviewed the evidence, I am likewise convinced that the so-called "Singularity", which denotes the day strong-form AI materially surpasses human capabilities (most scientific experts expect this to occur at some point between 2020 and 2050), falls into the doomsday camp, especially when coupled with innovations in advanced nanotechnology such as atomic-level data storage and molecular machinery.

Excerpt: "In May 2014 Hawking co-authored an open letter with three other leading academics warning that dismissing the risks of intelligent machines portrayed in movies like Transcendence as science fiction 'would be a mistake, and potentially our worst mistake in history'."

Read for free here (VIEW LINK)