Technology that performs at the level of superhuman intelligence and beyond already exists. As Harris explained, even a machine that performs only at the level of humans, or of a group of researchers, will still work exponentially faster than they do. Describing such a machine, Harris says that "in one week, it can perform 20,000 years of human-level intellectual work." Of course, it would be convenient and amazing to have artificial intelligence find cures for diseases and answer questions almost instantaneously, where a group of researchers might take months to do the same. But when does AI reach the point where it is so smart that it has no problem disregarding human lives to do what it wants to do?
Within the first two minutes of Sam Harris' TED Talk on artificial intelligence, I was reminded of the many episodes of Black Mirror in which advanced technology that we can currently only dream of becomes abused and causes chaos. Technology is constantly becoming more advanced, efficient, and intelligent. Undoubtedly, artificial intelligence will one day become so intelligent that it will be able to function independently of human control. Without regulation, robots will one day start fixing all our economic issues and answering our research questions. Humans will be almost or entirely useless, free to roam the earth doing whatever we please while depending on robots to keep the world spinning. And what will stop robots from viewing us as "ants" and not hesitating to take us out if we are in their way? I agree that technology has brought us very far, and I do think it should continue advancing, whether that means curing Alzheimer's or just making our daily lives more convenient. But when does AI become dangerously advanced? How do we decide when AI threatens the human species? When do we decide to put a stop to it?