Science & Technology
Anthropic researchers find a way to jailbreak AI | TechCrunch Minute
Researchers at Anthropic just found a new way to trick AI into giving you information it’s not supposed to. AI companies have attempted to keep chatbots like OpenAI’s ChatGPT or Google’s Gemini from sharing dangerous information, with varying degrees of success. But Anthropic researchers found a new way around current AI guardrails: a new…
@lancemarchetti8673
April 6, 2024 at 1:48 am
Very interesting. I hope this doesn’t encourage content security companies to release excessive Nightshade poisoning into LLMs.
I personally don’t feel that’s the way forward, just because of bad actors.