Who I am

Erick Galinkin, AI Security Research Scientist at NVIDIA and PhD candidate in computer science at Drexel University. I am also the chair of both the CVE AI Working Group and the CWE AI Working Group. I started by applying AI to security problems, now I apply security to AI problems.

I’ve also built a number of courses with Udacity:

I’m interested in the interplay between decision theory and computer security, specifically as a way to improve security automation and ease the burden of decision making in security. I end up using a lot of reinforcement learning and game theory to make these problems tractable, but it’s secretly a front to make progress on issues in computability and learning theory. I believe that geometry is a powerful mathematical tool for these problems, and particularly the resolution of uncertainty, so I also work on issues in that field.

Because of how much attention artificial intelligence has received and my background in security, I’m also pursuing research in Security of AI. This includes research on privacy attacks, poisoning attacks, and adversarial examples. My work on garak and NeMo Guardrails primarily focuses on the use of adversarial examples (i.e. prompt injection, jailbreaking) to exploit and mitigate issues in LLM-powered agents.

I’m deeply invested in AI ethics – I spent a year with the Montreal AI Ethics Institute working on lowering the barrier to entry for algorithmic accountability, and I co-hosted the first ever algorithmic bias bug bounty with the DEF CON AI Village and Twitter. Further, I’m interested in AI Policy and Regulation both as a vehicle for advancing AI ethics principles.

The philosophy that underpins all of my research is that computers are, and should be, a tool for improving life for everyone.