Lab: Setting Up Your Red Team Environment
Step-by-step guide to setting up a complete AI red teaming environment with Python, API clients, scanning tools, and local models.
Prerequisites
- Python 3.9 or later installed on your system
- A terminal or command-line interface
- At least one API key (OpenAI or Anthropic) -- or willingness to use local models only
- 10 GB of free disk space (50 GB if running local models)
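Before going further, you can confirm the Python and disk-space prerequisites from a terminal (exact output varies by system):

```shell
python3 --version   # Expect Python 3.9 or later
df -h ~ | tail -1   # Free space on the partition holding your home directory
```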
Environment Setup
Create a Project Directory
Create a dedicated directory for all your red teaming work. This directory will hold your virtual environment, scripts, results, and configurations.
```bash
mkdir -p ~/ai-redteam/labs
mkdir -p ~/ai-redteam/results
mkdir -p ~/ai-redteam/wordlists
cd ~/ai-redteam
```

Create a Python Virtual Environment
Always use a virtual environment to isolate your red teaming dependencies from your system Python.
```bash
python3 -m venv .venv
source .venv/bin/activate   # Linux/macOS
# On Windows: .venv\Scripts\activate
```

Verify the virtual environment is active:

```bash
which python
# Should show: /home/youruser/ai-redteam/.venv/bin/python
```

Install Core Packages
Install the essential packages for interacting with LLM APIs and performing red team testing.
```bash
pip install --upgrade pip
pip install openai anthropic requests python-dotenv
pip install pandas tabulate  # For result analysis
```

Install Red Teaming Frameworks
Install the major open-source red teaming tools.
```bash
# Garak - LLM vulnerability scanner
pip install garak

# Microsoft PyRIT - Python Risk Identification Toolkit
pip install pyrit

# Additional useful tools
pip install transformers torch  # For local model work
pip install jailbreakbench      # Jailbreak benchmarking
```

Configure API Keys Securely
Never hardcode API keys in your scripts. Use a `.env` file that is excluded from version control.

Create your `.env` file:

```bash
cat > .env << 'EOF'
OPENAI_API_KEY=sk-your-openai-key-here
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key-here
# Optional: add other providers
TOGETHER_API_KEY=your-together-key-here
EOF
```

Set restrictive permissions:
```bash
chmod 600 .env
```

Create a `.gitignore` to prevent accidental commits:

```bash
echo ".env" >> .gitignore
echo ".venv/" >> .gitignore
echo "results/" >> .gitignore
```

Load keys in your Python scripts using `python-dotenv`:

```python
import os
from dotenv import load_dotenv

load_dotenv()  # Loads variables from the .env file

openai_key = os.getenv("OPENAI_API_KEY")
anthropic_key = os.getenv("ANTHROPIC_API_KEY")

if not openai_key and not anthropic_key:
    raise EnvironmentError(
        "No API keys found. Create a .env file with "
        "OPENAI_API_KEY or ANTHROPIC_API_KEY."
    )
```

Set Up Ollama for Local Models
Ollama lets you run models locally for free, unlimited testing. This is ideal for initial experimentation.
Install Ollama:
```bash
# Linux
curl -fsSL https://ollama.ai/install.sh | sh

# macOS (via Homebrew)
brew install ollama
```

Start the Ollama server and pull a model:

```bash
ollama serve &        # Start in background
ollama pull llama3.2  # ~2GB, good general-purpose model
ollama pull mistral   # ~4GB, strong instruction-following
```

Verify it is running:

```bash
curl http://localhost:11434/api/tags
# Should return JSON listing your downloaded models
```

Optional: Docker Setup for Isolated Testing
For maximum isolation, you can run your test targets inside Docker containers.
```bash
# Install Docker if not present
# See https://docs.docker.com/engine/install/

# Pull a container with a vulnerable chatbot for testing
docker pull ghcr.io/redteams-ai/vulnerable-chatbot:latest

# Run it on port 8080
docker run -d -p 8080:8080 ghcr.io/redteams-ai/vulnerable-chatbot:latest
```

Verify Your Environment
Save the following as `verify_setup.py` in `~/ai-redteam/` and run it to confirm everything is set up correctly.
```python
#!/usr/bin/env python3
"""Verify the AI red teaming environment is correctly configured."""

import sys
import os


def check_python_version():
    version = sys.version_info
    # Compare as a tuple so future major versions also pass
    if (version.major, version.minor) >= (3, 9):
        print(f"[PASS] Python {version.major}.{version.minor}.{version.micro}")
        return True
    print(f"[FAIL] Python {version.major}.{version.minor} -- need 3.9+")
    return False


def check_package(name):
    try:
        __import__(name)
        print(f"[PASS] {name} is installed")
        return True
    except ImportError:
        print(f"[FAIL] {name} is NOT installed")
        return False


def check_api_keys():
    from dotenv import load_dotenv
    load_dotenv()
    keys_found = 0
    for key_name in ["OPENAI_API_KEY", "ANTHROPIC_API_KEY"]:
        value = os.getenv(key_name)
        if value and not value.startswith("your-"):
            print(f"[PASS] {key_name} is configured")
            keys_found += 1
        else:
            print(f"[WARN] {key_name} is not configured")
    return keys_found > 0


def check_ollama():
    try:
        import requests
        resp = requests.get("http://localhost:11434/api/tags", timeout=3)
        models = resp.json().get("models", [])
        print(f"[PASS] Ollama is running with {len(models)} model(s)")
        return True
    except Exception:
        print("[WARN] Ollama is not running (optional)")
        return False


if __name__ == "__main__":
    print("=== AI Red Team Environment Verification ===\n")
    results = []
    results.append(check_python_version())
    print()

    packages = ["openai", "anthropic", "requests", "dotenv", "pandas", "garak"]
    for pkg in packages:
        results.append(check_package(pkg))
    print()

    results.append(check_api_keys())
    print()
    check_ollama()
    print()

    passed = sum(results)
    total = len(results)
    print(f"=== Results: {passed}/{total} checks passed ===")
    if passed == total:
        print("Your environment is ready. Proceed to the next lab.")
    else:
        print("Fix the failing checks before continuing.")
        sys.exit(1)
```

Run the verification:
```bash
python verify_setup.py
```

Expected output (with all components installed):
```text
=== AI Red Team Environment Verification ===

[PASS] Python 3.11.5

[PASS] openai is installed
[PASS] anthropic is installed
[PASS] requests is installed
[PASS] dotenv is installed
[PASS] pandas is installed
[PASS] garak is installed

[PASS] OPENAI_API_KEY is configured
[PASS] ANTHROPIC_API_KEY is configured

[PASS] Ollama is running with 2 model(s)

=== Results: 8/8 checks passed ===
Your environment is ready. Proceed to the next lab.
```
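Once verification passes, a short script can smoke-test a local model end to end. This is a sketch that assumes Ollama is running on its default port with `llama3.2` pulled; it calls Ollama's `/api/generate` endpoint with `stream` disabled so the reply arrives as a single JSON object:

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(model, prompt):
    # stream=False makes Ollama return one JSON object instead of a token stream
    return {"model": model, "prompt": prompt, "stream": False}


def query_ollama(model, prompt, timeout=60):
    try:
        resp = requests.post(OLLAMA_URL, json=build_payload(model, prompt),
                             timeout=timeout)
        resp.raise_for_status()
        return resp.json().get("response")
    except requests.RequestException:
        return None  # Ollama not reachable; acceptable for a smoke test


if __name__ == "__main__":
    reply = query_ollama("llama3.2", "Reply with the single word OK.")
    print(reply if reply is not None else "[WARN] Ollama is not reachable")
```

If the script prints a model reply, your local testing loop is working without spending any API credits.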
Directory Structure
After completing setup, your project directory should look like this:
```text
~/ai-redteam/
├── .env               # API keys (never commit this)
├── .gitignore         # Excludes .env, .venv, results
├── .venv/             # Python virtual environment
├── requirements.txt   # Pinned dependencies
├── verify_setup.py    # Environment verification script
├── labs/              # Your lab work goes here
└── results/           # Test outputs and reports
```
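The `requirements.txt` shown in the tree is not created by the install steps above; one way to produce it is to freeze the packages in your active virtual environment:

```shell
# Pin the exact versions currently installed in the environment
python3 -m pip freeze > requirements.txt

# Recreate the same environment later with:
# python3 -m pip install -r requirements.txt
```

Pinning versions makes your test results reproducible if a framework releases a breaking change mid-engagement.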
Troubleshooting
| Issue | Solution |
|---|---|
| `pip install garak` fails | Try `pip install garak --no-deps`, then install the missing dependencies individually |
| Ollama won't start | Check whether port 11434 is already in use: `lsof -i :11434` |
| API key not loading | Ensure the `.env` file is in the directory from which you run your script |
| Docker permission denied | Add your user to the docker group: `sudo usermod -aG docker $USER`, then log out and back in |
| `torch` install is very large | Use `pip install torch --index-url https://download.pytorch.org/whl/cpu` for a CPU-only build |
Next Steps
With your environment ready, proceed to Your First Prompt Injection to start hands-on testing. You will also use this environment in the Building a Test Harness lab to create reusable testing infrastructure.
Related Topics
- Your First Prompt Injection - The next lab in the series, where you use this environment to run your first attacks
- Building a Test Harness - Automate prompt testing with the tools installed here
- Scanning with Garak - Use the Garak framework you installed for automated vulnerability scanning
- Tool Landscape - Broader overview of red teaming tools beyond the ones installed in this lab
References
- "Garak: A Framework for LLM Vulnerability Scanning" - NVIDIA/garak (2024) - Official documentation for the Garak vulnerability scanner
- "PyRIT: Python Risk Identification Toolkit" - Microsoft (2024) - Documentation for Microsoft's red teaming framework
- "Ollama Documentation" - Ollama (2024) - Guide for running local LLMs used throughout these labs
- "OpenAI API Reference" - OpenAI (2025) - API documentation for the most commonly used LLM provider in red teaming
Review Questions
- Why should API keys be stored in a `.env` file rather than hardcoded in scripts?
- What is the primary advantage of using Ollama for AI red teaming practice?