PandasAI, an open source project by SinaptikAI, has been found vulnerable to Prompt Injection attacks. An attacker with access to the chat prompt can craft malicious input that is interpreted as code, ...
A critical vulnerability in the popular Node.js sandboxing library vm2 allows escaping the sandbox and executing arbitrary ...
A flaw in Cursor’s AI agent lets malicious repositories trigger arbitrary code execution through routine Git operations, now ...
Some results have been hidden because they may be inaccessible to you
Show inaccessible results