Python
Securing LLMs and Chat Bots
How to secure against prompt injections and jailbreaking LLMs and chatbots with prompt protect.
How to secure against prompt injections and jailbreaking LLMs and chatbots with prompt protect.
Using AI in Education, how to break it and how to fix it