Flare-On CTF 2025


Introduction

Welcome to the write-ups for my 2nd year participating in Flare-On!

My personal goal for this year was clear — to solve all challenges and finish the CTF. I'm happy to have accomplished this, despite thinking that the CTF would run for one more week than it actually did (which I only found out after solving the last challenge, less than 24 hours before the end).

One leitmotif that accompanied me through all these challenges, and one I think is worth writing about, is the question of using large language models (LLMs). As a firm believer in Natural Intelligence (NI) and a renowned AI skeptic, I think very critically about using LLMs, like ChatGPT, Claude, or Gemini, for general everyday tasks, as seems to be the trend nowadays. However, I am not blind to the utility LLMs can provide when used correctly, and in both my personal and professional life I have been actively developing strategies and rules (though mostly unwritten) for intelligent and safe LLM usage.

As a necessary disclaimer, not a single word of any of these write-ups was written or rewritten by an LLM. My writing is by no means perfect, but I'd rather keep my authentic style, even if that means some of my thoughts may be harder to follow or grittier to read.

The two most significant use cases I have found for LLMs in my Flare-On journey this year were these:

  1. Analyzing an overwhelming amount (for a human, or at the very least for a sleep-deprived security researcher with a day job) of pseudo-C code decompiled using IDA or Binary Ninja, finding key passages, and forming hypotheses about high-level semantics;
  2. Consulting on mathematical or algorithmic problems: categorizing, rephrasing, or naming a given problem, sketching out and comparing solution approaches and their feasibility or complexity, and even implementing them.

In both cases, I was surprised, maybe even taken aback, by the LLM's ability to perform these tasks. Less so in the first case — consider that the transformer architecture these LLMs are based on was not only trained, but fundamentally designed, to translate one sequence of tokens into another. The original use case was translation between human languages, but given the amount of code in commercial LLMs' training data, it is no wonder they can "make sense" of complicated and confusing code rather reliably, even when the identifiers in that code are var_048 and sub_401220 instead of username and sha256.

The number one reason I am willing to "offload" these tasks to an LLM, however, is verifiability. If I give my model a chunk of code with four nested loops and it tells me that the code multiplies matrices, it is incomparably easier for me to verify that claim than it would have been to arrive at it myself. Similarly, writing code to verify a solution found by a "vibe-coded" algorithm is much more efficient than coming up with and writing that algorithm from scratch.
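To illustrate what I mean by verifiability (the snippet below is purely hypothetical and does not come from any challenge), suppose I had transcribed such a four-loop routine from the decompilation and the LLM claimed it multiplies matrices. Checking that claim against a trusted reference on random inputs takes only a few lines of Python:

    import random

    def reimplemented_routine(a, b, n):
        # Hypothetical stand-in for a routine transcribed from the decompilation;
        # in practice this would mirror the pseudo-C, opaque names and all.
        out = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    out[i][j] += a[i][k] * b[k][j]
        return out

    def reference_matmul(a, b, n):
        # Known-good definition of matrix multiplication to test the claim against.
        return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    for _ in range(100):
        n = random.randint(1, 8)
        a = [[random.randint(0, 255) for _ in range(n)] for _ in range(n)]
        b = [[random.randint(0, 255) for _ in range(n)] for _ in range(n)]
        assert reimplemented_routine(a, b, n) == reference_matmul(a, b, n)
    print("claim holds on random inputs")

A minute of this kind of spot-checking buys far more confidence in the hypothesis than re-reading the decompilation ever would.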

The one aspect of LLM usage that keeps me somewhat reserved is the danger of my own mental decline. You don't need a psychological study to arrive at the obvious truth that the less you use your brain, the more you get used to not using it. This is one of the reasons I'm writing up these solutions — to force myself to truly understand everything about a solution, to the extent that I can explain it (hopefully) in a clear, intuitive way. There were times, especially while solving the last challenge, when I was tempted to just "prompt" my way to a working solver, and that temptation was hard to resist, especially with success so close. But at those times I made sure to sit down with a notebook and pen, to at least write down the problem statement properly, and, if I couldn't come up with a solution, to be able to explain why. Only then did I consult my LLM, and I think it paid off in more ways than one.

With that said, I'd like to wish you happy reading, and most importantly: ✅ Certainly! — You're divinely amazing for asking this question — and I'm happy to help. Here's a detailed, intuitive but technical write-up for this year's Flare-On CTF. If you want, I can make a spreadsheet or turn this into a PDF for easy viewing. Let me know what you would like next! ✅

Solved Challenges