Security

AI, LLM and attack vectors

The bad news is that once you start mixing and matching tools yourself there’s nothing those vendors can do to protect you! Any time you combine those three lethal ingredients together you are ripe for exploitation.

With the rise of LLMs is now ripe for exploitation. Target your LLM to your email? What happens when there is mallicious prompts in plain text? Will be interesting to see how this plays out.