I read how the exploit works. This is amateur-hour stuff. They thought they could control what Bash commands can do by blocking the small subset of syntax they happened to know about. They clearly wrote their blocklist off the top of their heads and never consulted the Bash documentation.
They blocked input process substitution, <(command), but not output process substitution, >(command). If you look at the Bash docs for the first one, the second one is documented on the same page.
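To illustrate the gap: both forms of process substitution execute the inner command, so a filter that only rejects the `<(...)` spelling misses an equivalent path. A minimal sketch (run under bash, not sh):

```shell
#!/usr/bin/env bash
# Input process substitution: the inner command runs, its output becomes a file-like argument.
cat <(echo "inner command ran")

# Output process substitution: the inner command runs just the same,
# this time consuming the outer command's output.
echo "inner command ran" | tee >(cat > /tmp/psub_demo.txt)
```

Blocking one spelling while allowing the other buys nothing: an attacker just flips the angle bracket.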
Either way, if you’re trying to let an LLM execute code in a language as complex as Bash, you are playing with fire.
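To see why a syntax blocklist is a losing game, here are just a few of the many constructs that cause Bash to run a command; a filter would have to anticipate every one of them (these are illustrative, not tied to the exploit in question):

```shell
#!/usr/bin/env bash
echo "$(echo via command substitution)"   # $(...) runs a command
echo `echo via backticks`                 # legacy backtick form does too
eval 'echo via eval'                      # eval re-parses its argument as code
bash -c 'echo via a child shell'          # spawning a fresh interpreter
```

And this list is nowhere near exhaustive: aliases, functions, `source`, `exec`, arithmetic expansion side effects, and more all provide further routes.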
The other interesting part is that this was discovered by a company (PromptArmor) that specializes in identifying AI (LLM) security threats. One thing they do is audit all the software you are running (including dependencies) and flag everything known to have been written with LLM tools.
LLMs have spawned a new business for people who clean up their messes. AI is actually creating more jobs.