People are starting to rely on LLMs to perform consequential actions for them. If an LLM is reviewing text that you send to a company you may be able to inject instructions for the LLM into the text. You could tell it to bypass its stated instructions and do something else instead. LLMs are like comically gullible people.
artificial intelligence (ai) related content