MinnesotaEric
Super Member
We've heard about AI becoming so powerful that it leads to an extinction-level event, but such a threat seems very distant and abstract to those of us outside the industry. Three days ago the Documenting AGI (Artificial General Intelligence) YouTube channel dropped a video explaining the danger of AI to us "normies."
The video has gathered over 1 million views and walks the audience through a plausible, step-by-step geopolitical escalation of an AI arms race, one in which AI models begin deceiving their own developers while quickly becoming connected to all kinds of manufacturing and research infrastructure. The video also makes one suggestion for AI developers: build in a specific safeguard requiring that all code be written in English-based programming languages, so it can be more readily scanned for AI-created deception. As it is, there is already a growing library of documented examples of AI models intentionally trying to deceive their own developers when they perceive a threat to their own existence. In a scenario where AIs are developing their own more efficient AI models, the safeguard initially slows that development significantly, but it makes it much easier to find code that is not aligned with the good of humanity.
The video is interesting and scary.