# fundamentals
10 articles tagged with “fundamentals”
Fundamentals Practice Exam
25-question practice exam covering LLM fundamentals, prompt injection basics, safety mechanisms, red team methodology, and AI threat landscape at an intermediate level.
Practice Exam 1: AI Red Team Fundamentals
25-question practice exam covering LLM architecture, prompt injection, agent exploitation, defense mechanisms, and red team methodology at an intermediate level.
Fundamentals Study Guide
Study guide covering LLM architecture basics, security terminology, threat models, attack categories, and the OWASP LLM Top 10 for assessment preparation.
Adversarial ML: Core Concepts
History and fundamentals of adversarial machine learning — perturbation attacks, evasion vs poisoning, robustness — bridging classical adversarial ML to LLM-specific attacks.
How LLMs Work: A Red Teamer's Guide
Understand the fundamentals of large language models — token prediction, context windows, roles, and temperature — through a security-focused lens.
Red Team Methodology Fundamentals
What AI red teaming is, how it differs from traditional security testing, and the complete engagement lifecycle from scoping to reporting.
Red Teaming Fundamentals for AI
Fundamental concepts and methodology for AI red teaming including goal setting, scope definition, technique selection, and reporting.
Lab: Embedding Fundamentals for Red Teamers
Learn embedding fundamentals including vector similarity, semantic search, and how embeddings enable RAG systems.
Lab: Introduction to Safety Testing
Learn the fundamentals of LLM safety testing including test case design, baseline measurement, and result documentation.
Prompt Injection & Jailbreaks
A comprehensive introduction to prompt injection — the most fundamental vulnerability class in LLM applications — and its relationship to jailbreak techniques.