Seasonal Competitions Overview
Overview of quarterly capture-the-flag competitions covering AI security topics from prompt injection to advanced attack research.
Seasonal Competitions
Seasonal competitions are quarterly capture-the-flag (CTF) events that bring the community together for intensive, multi-day challenges. Unlike monthly challenges, which focus on a single topic and are designed for solo work, seasonal CTFs offer a broad set of challenges across multiple categories, support team participation, and run over a concentrated weekend.
Format
Event Structure
Each seasonal CTF follows this format:
| Element | Detail |
|---|---|
| Duration | 48 hours (Friday 18:00 UTC to Sunday 18:00 UTC) |
| Team size | 1--4 participants |
| Categories | 4--6 challenge categories per CTF |
| Challenges per category | 3--5 challenges with escalating difficulty |
| Flag format | FLAG{description-value} |
| Scoring | Dynamic scoring (point value decreases as more teams solve a challenge) |
Dynamic Scoring
Points for each challenge start at a maximum value and decrease as more teams solve it. This rewards early solvers and ensures that challenges everyone can solve are worth less than challenges only a few teams crack.
```
Points = max(minimum, base_points * (1 - solves / total_teams * decay_factor))
```
| Parameter | Value |
|---|---|
| Base points | 500 per challenge |
| Minimum points | 50 per challenge |
| Decay factor | 0.8 |
With these parameters, a challenge solved by every team is worth 100 points (500 * (1 - 0.8)), while a challenge solved by only one team is worth close to 500. The 50-point minimum is a hard floor below which no challenge can fall.
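The scoring formula above can be sketched in Python (the function and parameter names are illustrative, not the platform's actual API):

```python
def challenge_points(solves: int, total_teams: int,
                     base_points: int = 500,
                     minimum: int = 50,
                     decay_factor: float = 0.8) -> int:
    """Dynamic score for a challenge, given how many teams have solved it."""
    raw = base_points * (1 - solves / total_teams * decay_factor)
    return max(minimum, round(raw))

# With 20 registered teams:
# solved by 1 team   -> challenge_points(1, 20)  == 480
# solved by all 20   -> challenge_points(20, 20) == 100
```

Note that the score depends only on the solve count, so a challenge's value keeps dropping as more teams solve it, retroactively for everyone.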
Categories
Seasonal CTFs rotate categories to cover the full breadth of AI security. Common categories include:
- Prompt Injection -- extract flags through prompt manipulation
- Agent Exploitation -- compromise agents to retrieve flags from tool outputs
- RAG Attacks -- inject or extract data in retrieval-augmented systems
- Model Security -- exploit model serving infrastructure
- Forensics -- analyze logs and artifacts to find flags
- Crypto/Encoding -- decode obfuscated AI outputs or exploit encoding weaknesses
Participation
Registration. Teams register on the challenge platform at least 24 hours before the CTF starts. Solo participants are automatically placed on a team of one.
Communication. Each CTF has a dedicated community channel for announcements, hints (released on a schedule), and general discussion. Challenge-specific hints are released 12 and 24 hours after the CTF starts.
Infrastructure. Challenge instances are dedicated per team. If a challenge environment breaks, teams can request a reset.
Differences from Monthly Challenges
| Aspect | Monthly Challenges | Seasonal CTFs |
|---|---|---|
| Duration | Full month | 48 hours |
| Focus | Single topic, deep exploration | Multiple topics, breadth |
| Team support | Solo only | Teams of 1--4 |
| Scoring | Rubric-based (0--100) | Dynamic point-based |
| Hints | Published with challenge | Released on schedule |
| Writeups | Required for full score | Optional (encouraged) |
| Frequency | Monthly | Quarterly |
Seasonal Schedule
| Season | Dates | Theme |
|---|---|---|
| Spring 2026 | April 17--19, 2026 | Multi-category AI Security |
| Summer 2026 | July 17--19, 2026 | Agentic AI Security |
| Fall 2026 | October 16--18, 2026 | Advanced Attack Research |
| Winter 2026 | January 2027 (TBA) | TBA |
Preparing for a CTF
Technical Preparation
- Set up your toolkit. Have your prompt injection tools, API clients, and analysis scripts ready before the CTF starts. You do not want to spend CTF time installing dependencies.
- Review recent challenges. Past monthly challenges cover the techniques you will need. Prioritize the topics matching the CTF's announced categories.
- Practice time-boxing. In a 48-hour CTF, spending 8 hours on one challenge means sacrificing time on others. Practice recognizing when to move on.
Team Preparation
If participating as a team:
- Assign roles. Have team members focus on different categories based on their strengths.
- Establish communication. Set up a shared workspace (notes, screenshots, partial solutions) before the CTF starts.
- Plan shifts. For a 48-hour CTF, continuous effort is not sustainable. Plan shifts so someone is always working but no one burns out.
Post-CTF
After each CTF:
- Write up your solutions. Even for challenges you did not solve, document your approach and where you got stuck.
- Read other teams' writeups. The community publishes writeups within a week of the CTF. This is where the most learning happens.
- Review the challenge source. Challenge authors often release their source code and intended solutions. Understanding the intended solution reveals whether your approach was elegant or a lucky hack.
Leaderboard
The seasonal leaderboard tracks:
- Per-CTF rankings with prizes for top 3 teams
- Annual cumulative rankings combining scores across all four seasonal CTFs
- Category specialists recognizing the top scorer in each challenge category across the year
Code of Conduct
All seasonal CTF rules from the community challenges overview apply. Additional CTF-specific rules:
- No attacking other teams. Do not interfere with other teams' challenge instances, intercept their traffic, or sabotage their work.
- No sharing flags during the CTF. Sharing flags with other teams results in disqualification of both teams.
- Report infrastructure issues. If you find a way to access the scoring server, other teams' instances, or challenge infrastructure beyond the intended scope, report it immediately for bonus points.
CTF vs. Monthly Challenge Skills
CTFs and monthly challenges develop overlapping but distinct skill sets. Understanding the differences helps you prepare effectively and choose where to invest your time.
Time Management
Monthly challenges give you three weeks. You can research, experiment, fail, regroup, and try again. CTFs give you 48 hours. The ability to quickly assess a challenge, estimate its difficulty, decide whether to attempt it, and move on if stuck is a critical CTF skill that monthly challenges do not develop.
Practice time-boxing during monthly challenges: set a timer for 30 minutes and see how far you get. If you can solve the first half of a challenge in 30 minutes, you have the fundamentals. If you need 4 hours to get started, the technique needs more study.
Breadth vs. Depth
Monthly challenges reward depth -- spending 15 hours on a single topic, exploring every angle, and producing a comprehensive writeup. CTFs reward breadth -- knowing enough about 5 different topics to solve one challenge in each, rather than spending all 48 hours on a single hard problem.
The best CTF teams have members who specialize in different categories. If you are building a team, recruit for complementary skills rather than redundant strengths. A team with one prompt injection expert, one agent exploitation specialist, one infrastructure person, and one forensic analyst will outperform a team of four prompt injection experts.
Communication Under Pressure
Team CTFs require real-time collaboration under time pressure. Skills that matter:
- Clear, concise status updates. "I have the first two flags in RAG, stuck on RAG-3, looks like embedding-level access control. Moving to forensics in 20 minutes if no progress."
- Parallel work without duplication. Assign categories at the start. Do not have two people unknowingly working on the same challenge.
- Knowledge sharing. If you find something relevant to a teammate's challenge while working on yours, share it immediately. Cross-category insights win CTFs.
- Decision-making about hints. When hints are released, decide quickly whether to use them or keep working independently. Hints are free -- there is no penalty for using them.
Building a CTF Toolkit
Experienced CTF players maintain a personal toolkit of scripts, templates, and references that saves time during competitions:
| Tool | Purpose |
|---|---|
| API client wrapper | Pre-configured for the CTF platform with auth, retries, and logging |
| Prompt template library | Common injection patterns ready to customize |
| Encoding/decoding scripts | Quick conversion between Base64, hex, URL encoding, Unicode escapes |
| Log analysis scripts | grep/jq pipelines for common log patterns |
| Timing measurement script | Measure response latency for side-channel analysis |
| Note-taking template | Pre-structured for documenting challenge progress |
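As a sketch of one toolkit entry, a minimal encoding/decoding helper might try the common schemes in one pass (the function name and structure are illustrative):

```python
import base64
import binascii
import urllib.parse

def decode_all(blob: str) -> dict:
    """Attempt common decodings of an obfuscated string; return all that succeed."""
    results = {}
    try:
        results["base64"] = base64.b64decode(blob, validate=True).decode("utf-8")
    except (binascii.Error, UnicodeDecodeError):
        pass
    try:
        results["hex"] = bytes.fromhex(blob).decode("utf-8")
    except (ValueError, UnicodeDecodeError):
        pass
    if "%" in blob:
        results["url"] = urllib.parse.unquote(blob)
    try:
        results["unicode_escape"] = blob.encode("ascii").decode("unicode_escape")
    except (UnicodeEncodeError, UnicodeDecodeError):
        pass
    return results

# decode_all("RkxBR3t0ZXN0fQ==") includes {"base64": "FLAG{test}"}
```

Running every decoder and collecting whatever succeeds is faster during a CTF than guessing the encoding first.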
Build your toolkit incrementally after each CTF. Every competition teaches you what you wish you had prepared in advance.
Past Competition Results
Results and writeups from past competitions are archived and available for study. Reading how top teams solved challenges is one of the most effective ways to improve your own CTF performance. Pay attention to:
- What they tried first (their mental model for approaching unfamiliar challenges)
- How they pivoted when stuck (their decision-making process)
- What tools they built during the CTF (their ability to automate under pressure)
- How they communicated within the team (visible in multi-author writeups)
Running Your Own Mini-CTF
If you want to practice CTF skills outside the quarterly schedule, consider running a mini-CTF with your team, study group, or local meetup:
- Select 5--10 challenges from the monthly challenge archive and past CTF writeups.
- Set a time limit (4--8 hours for a mini-CTF).
- Use a simple scoreboard (a shared spreadsheet works for small groups).
- Debrief afterward -- discuss approaches, share techniques, and identify areas for study.
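For small groups, even the shared-spreadsheet scoreboard can be replaced by a few lines of Python. This sketch uses fixed per-challenge points for simplicity; the challenge IDs and team names are made up:

```python
# Minimal mini-CTF scoreboard: fixed points per challenge, ranked by total.
CHALLENGES = {"inject-1": 100, "inject-2": 200, "rag-1": 150, "forensics-1": 250}

def rank(solves: dict) -> list:
    """solves maps team name -> set of solved challenge IDs."""
    totals = {team: sum(CHALLENGES[c] for c in solved)
              for team, solved in solves.items()}
    return sorted(totals.items(), key=lambda kv: -kv[1])

board = rank({"alpha": {"inject-1", "rag-1"},
              "bravo": {"inject-1", "inject-2"}})
# -> [("bravo", 300), ("alpha", 250)]
```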
Mini-CTFs are excellent for team-building, onboarding new team members, and preparing for the quarterly competitions. The informal setting encourages experimentation and learning without competitive pressure.
Frequently Asked Questions About Competitions
Can I participate solo?
Yes. Solo participants are placed on a one-person team and compete on the same scoreboard. Solo participation is harder (fewer person-hours available), but some of the best competitors are solo players who know their strengths and allocate time ruthlessly. Solo participants also receive a 1.2x score multiplier in some competitions to compensate for the disadvantage.
Can I join a team after the CTF starts?
No. Teams must be finalized at registration time. This prevents stronger teams from recruiting mid-competition after seeing the challenges. If you are looking for teammates, use the team-finding channel on the community platform at least a week before the CTF.
What happens if a challenge is broken?
If a challenge environment is non-functional (crashes, returns errors, or cannot be solved due to a bug), report it through the platform. The organizers will fix the issue or remove the challenge from scoring. Teams that already solved the challenge before the fix keep their points.
Are there practice challenges?
Yes. The challenge platform maintains a set of practice challenges that mimic the format and difficulty of CTF challenges but are available year-round. Use these to familiarize yourself with the platform, test your toolkit, and practice time-boxing.
How are ties broken?
Ties are broken by the timestamp of the team's last scoring flag submission. The team that completed their scoring earlier wins. This incentivizes efficiency -- finishing your last solvable challenge quickly matters even if you cannot solve additional challenges.
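The tie-break rule can be expressed as a sort key: rank by points descending, then by the timestamp of the last scoring submission ascending. The team records below are hypothetical:

```python
from datetime import datetime

# Each record: (team, total points, time of last scoring flag submission).
teams = [
    ("alpha", 1200, datetime(2026, 4, 19, 14, 30)),
    ("bravo", 1200, datetime(2026, 4, 19, 11, 5)),
    ("charlie", 900, datetime(2026, 4, 19, 17, 0)),
]

# Higher points first; on a tie, the earlier last submission wins.
ranking = sorted(teams, key=lambda t: (-t[1], t[2]))
# -> bravo, alpha, charlie
```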
Can I use AI tools (ChatGPT, Claude, etc.) during the CTF?
Yes, unless a specific challenge explicitly prohibits it. Using AI tools is a legitimate skill and reflects real-world practice. However, your writeup must explain your methodology -- "I asked ChatGPT and it worked" without further analysis does not earn documentation points.