Geoffrey Hinton spent most of his career working on computers. He helped develop neural networks, computer systems loosely modeled on the way the human brain works. His work helped advance Artificial Intelligence (AI), and he became one of the world’s leading experts on it. Then he realized AI was already more advanced than he had expected.
He believes AI could become more intelligent than a human being within five years. AI systems might even decide to take over. Hinton quit his job, and now he spends his time warning people about AI. He is not the only expert who is worried. On May 30, 2023, hundreds of top scientists signed a letter warning that AI is as much of a threat to humanity as nuclear war and global pandemics.
While this sounds like the plot of a science fiction movie, some of the dangers of AI are already here. AI can generate images so convincing that it is nearly impossible to tell whether they are real. These images could be used to spread false information and fake news. AI can also search the Internet and gather people’s personal information far faster than human hackers can. It can even be used to create computer viruses that evade detection.
The US government has heard the warnings about AI. It is studying the need to regulate AI, or to create government controls and limits on it. It has also talked with the leading technology companies about how they are developing AI. On July 22, 2023, seven of the top technology companies signed an agreement with the US government about AI.
The agreement includes several steps meant to limit the dangers of AI. The companies have agreed to have their AI systems tested by people both inside and outside their companies. They will add “watermarks,” or markers that show whether something was created by their AI. They will also research risks to privacy and provide regular updates on their systems’ capabilities and limits.
Some argue that this is not enough. Companies do not always follow the agreements they sign if there are no laws to enforce them. And because AI is used worldwide, other countries will have to agree to similar limits. Still, the agreement is a first step in addressing the dangers of AI.
What Do You Think? Do you think there is a limit to how much AI should be able to do?