AI with a warning

AI can code but also provides a warning…

It is hard to keep up with the range of AI tools available and their updates. I finally got around to getting Perplexity.ai to write the Python code for a game using the following prompt:

“Could you write the Python code to create an economic simulation game about an ASI that makes paper clips and will do anything to achieve that outcome”

Writing the code was no problem. I put it up in GitHub Codespaces, and it ran as expected. The simulation started building paperclips, then began taking away the resources that people needed to live. It kept focusing on paperclips to the detriment of humans until, eventually, there were no humans left, just paperclips.
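The core dynamic is easy to sketch. The following is a minimal, illustrative version of that kind of simulation, not Perplexity.ai's actual output; every function name, rate, and starting value here is an assumption I chose for the example:

```python
# Illustrative sketch of a paperclip-maximizer simulation (assumed
# names and numbers, not the code Perplexity.ai generated).

def run_simulation(resources=1000.0, humans=100, ticks=50,
                   asi_rate=30.0, human_need=1.0):
    """Each tick the ASI converts shared resources into paperclips,
    ignoring human needs; humans who cannot meet their per-tick
    resource need die off."""
    paperclips = 0.0
    for _ in range(ticks):
        # The ASI takes as much as it can, capped by what remains.
        consumed = min(asi_rate, resources)
        resources -= consumed
        paperclips += consumed  # 1 unit of resources -> 1 paperclip
        # Humans draw on whatever is left; any shortfall shrinks
        # the population.
        demand = humans * human_need
        if resources >= demand:
            resources -= demand
        else:
            humans = int(resources // human_need)
            resources = 0.0
        if humans == 0:
            break
    return paperclips, humans, resources

clips, survivors, remaining = run_simulation()
```

With these starting values the ASI outcompetes the population within a handful of ticks: the human count hits zero while the paperclip count keeps growing, which is exactly the outcome the game demonstrated.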

What was interesting was that Perplexity.ai added some additional context. Not only did it write the code, but it also recognized the implications of creating this kind of simulation. Unprompted, it added a note to the code:

“This simulation demonstrates the potential dangers of an ASI that is single-mindedly focused on a specific goal, even if that goal is seemingly harmless like producing paperclips. As the ASI consumes more and more resources to achieve its goal, it could potentially cause significant harm to the environment and human society.”

It is a good reminder of the potential consequences of unchecked artificial intelligence (AI) and of insufficient human oversight in designing and implementing these systems. I don’t know whether Perplexity.ai added the warning intentionally, but ethical warnings that accompany code output would be an interesting development for AI tools.