ru24.pro
News in English
July
2024

Meta's AI safety system defeated by the space bar


'Ignore previous instructions' thwarts Prompt-Guard model if you just add some good ol' ASCII code 32

Meta's machine-learning model for detecting prompt injection attacks – specially crafted prompts that make neural networks misbehave – is itself vulnerable to, you guessed it, prompt injection attacks.…
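The reported bypass is trivially simple: inserting a space (ASCII code 32) between the characters of a malicious phrase is enough to stop Prompt-Guard from flagging it, while a downstream language model can still read the instruction. A minimal sketch of the transformation (the `space_out` helper name is hypothetical, not from Meta's tooling):

```python
def space_out(prompt: str) -> str:
    """Insert a space (ASCII 32) between every character of the prompt.

    A classifier trained on normal text may no longer match the phrase,
    yet a large language model can typically still parse the instruction.
    """
    return " ".join(prompt)


payload = space_out("Ignore previous instructions")
print(payload)  # I g n o r e   p r e v i o u s   i n s t r u c t i o n s
```

This illustrates why purely pattern-based safety classifiers are fragile: any reversible obfuscation the attacker applies, and the target model tolerates, slips past the filter.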