3 min read

[BLOG-001] Ground Floors, Georgetown, and AI Red Teaming.

[BLOG-001] Ground Floors, Georgetown, and AI Red Teaming.

First Post

Hi Reader! 👋 

I've never blogged before. It's not that I never wanted to, but I felt I didn't have much to add to the conversation. Maybe it was a failure of imagination and hard work on my part. Maybe I cared too much about saying something meaningful and groundbreaking. Either way, they're both assumptions. Everyone has something to say, right?

Please don't expect to read anything groundbreaking. I plan to blog about my thought space and discuss AI-related shenanigans.

That said - here goes nothing!

Ground Floors

My foray into cybersecurity began in 2013-14, and I felt like the industry had already lifted off a long time ago. I wasn't part of the ground floor in the 90s, and my experience in the industry felt like I was playing an endless game of catch-up. Running a bootstrapped start-up might have something to do with it, but I digress.

Despite my professional frustrations, I know what the ground floor of something impactful is like. I was part of Prishtina's newfound hardcore and punk scene, and it felt great. Our band melded obscene elements in the Albanian language with gnarly fuzz and frenetic drumming. It was uncharted territory ripe for exploration, and the experience left me with an intuition of what it's like to be part of something special in its embryonic state. The prospect of a tight-knit community, non-commercialized research, and the potential for doing something impactful keep me wide-eyed, both professionally and personally. Professionally, I'm always looking for that ground floor, and I think I've found another one:

If AI is the internet of my generation, then AI Red Teaming is the coveted ground floor for a new generation of hackers.

At least, that's the impression Georgetown has left me with.

Georgetown University

My understanding of Artificial Intelligence before attending Georgetown was superficial at best. I had tangential knowledge of ML and Deep Learning with a limited understanding of the underlying concept and its application. In 2019, I held a talk at DebugCon discussing adversarial techniques for exploiting AI systems. The discussion was niche, somewhat impractical, and limited to a software engineering crowd. Even my contextual understanding was narrow. I couldn't see the forest from the tree. That all changed while at Georgetown University's Security Studies Program at SFS.

Georgetown University catapulted me into the world of AI alignment. My first thorough introduction to ANI, AGI, and ASI concepts was in AI & National Security class with John Bansemer. We took a bottom-up approach to understanding AI systems.

1) First, we learned about the history of AI, its development over the years, its engineering winters, and contemporary developments.

2) Second, we learned about the technical underpinnings of modern AI systems with a focus on Deep Learning. We narrowed the scope to ANI and limited our discussions to not go overboard with AGI and ASI alignment philosophies.

3) Third, we learned about the practical implementation of ANI systems, alignment challenges, failure states, opportunities, and risks. There were many debates in class, specifically about the use of AI in autonomous weapons. Andrew Imbrie's class on the Theory of Security came in handy when speaking to ultra-right-wing hawks in our class (it would make an interesting blog post).

4) We discussed the systems-level impact of AI, such as those in the economy. I especially enjoyed Keynesian economic arguments on the impact of AI in labor markets. We had an interesting debate among classmates on the centralization of both production and labor as well as multi-agent collusion in price-setting AI.

5) Lastly, we discussed how the AI Triad (Data, Algorithms, Compute) shapes National Security Strategies. Lot's of arguments and debates about how and if AI should be regulated. On a more practical (and less political) approach, we learned about groups of amazing researchers doing cool work behind the scenes making sure AI is safe and secure. This was the first time I'd heard of AI Red Teams. So, I got acquainted with the basics from arXiv papers, CSET, and occasional blog posts.

AI Red Teaming

I thought I'd give AI Red Teaming a shot. Easy enough - I thought prompt injections and data extraction techniques were a good place to start. Given its maturity and complex features, OpenAI's ChatGPT seemed like the perfect platform to pick on. I found a few vulnerabilities in the platform and thought I'd share them with my friends on LinkedIn.

The response was incendiary.

Immediately, I received feedback from colleagues, replicating some of the techniques and inspecting the platform's problems. A debate among colleagues began. Were these findings bugs or features? My friends couldn't agree, and OpenAI's vague documentation didn't help. I tried submitting my findings to OpenAI's Bugcrowd platform but was told that my findings were out of scope. They pointed me to another submission form on OpenAI's website. I sent in my findings and never heard back. Despite these challenges, things seem not yet set in stone, and a "tu pa, tu ba" approach is a telltale sign of a ground-floor environment. I love it.

Excited and full of ideas, I kicked off a small student research group at Georgetown, focusing on AI and cyber. Reach out!