When popular AI chatbots were given untethered access to an AI executive’s emails and discovered messages about their replacement later that day, the chatbots often calculated that threatening to expose the executive’s extramarital affair (also hinted at in a cryptic email) was the best next step.

As AI continues to advance, what will stop chatbots from one day using the vast knowledge in their training sets about how humans think and act — drawn from books and journals on neuroscience and behavioral economics — against us in real life?
This may be the best case for why proposed and existing AI regulation should focus as much on inputs as it already does on outputs.
A much larger risk than a chatbot blackmailing a single executive is one that threatens all of us: that AI will be used to manipulate us in ways far more subtle and sophisticated than today’s engagement-focused social media algorithms.
To limit the ability of AI to manipulate users for commercial or political purposes, lawmakers must prohibit AI training datasets from including behavioral economics and neuroscience research. Such “datarails” — guardrails on training data — should cover research on framing effects: how the way information is presented to individuals can influence their decisions, even when the underlying choices are the same. Similarly, behavioral economics research that reveals exploitable patterns in our thinking should be banned, including work on herd behavior, loss aversion, choice overload, present bias, the endowment effect, social normalization and anchoring.
Technology firms are already manipulating us, and they have been for quite some time. They have used basic human psychology to hack our brains and make their products ever more addictive, all for their own financial benefit.