# code-gen-security
21 articles tagged with “code-gen-security”
Using AI for Vulnerability Research
How to leverage AI coding assistants for vulnerability research, including automated code audit, fuzzer generation, exploit development, and responsible disclosure.
Methodology for Auditing AI-Generated Code
Structured audit methodology for evaluating the security of AI-generated code, covering static analysis, dynamic testing, and organizational assessment.
Governance Frameworks for AI Code Generation
Organizational governance frameworks for managing AI code generation risk, covering policy development, risk assessment, compliance, and maturity models.
AI Code Review Tools Security Comparison
Security analysis and comparison of AI-powered code review tools, evaluating their vulnerability detection capabilities and inherent risks.
Security Gaps in AI-Generated Tests
Analyzing how AI-generated test suites systematically miss security-relevant test cases, creating dangerous coverage illusions.
Security Risks of AI-Assisted Refactoring
Analysis of security vulnerabilities introduced when AI tools refactor existing code, including subtle behavioral changes and security property violations.
Security Analysis of Aider Coding Assistant
Security assessment of the Aider AI pair programming tool covering its git integration, model routing, repository access patterns, and supply chain considerations.
Security of AI-Generated API Endpoints
Analysis of security vulnerabilities in AI-generated REST and GraphQL API code, covering authentication bypass, BOLA, mass assignment, and rate limiting failures.
Security Analysis of Claude Code CLI
In-depth security assessment of Claude Code CLI covering its permission model, tool execution, MCP integration, and enterprise security considerations.
License Compliance in AI-Generated Code
Legal and compliance risks of AI-generated code including license contamination, copyright exposure, and organizational governance for code generation tools.
Prompt Extraction from Code Generation Tools
Techniques for extracting system prompts, custom instructions, and proprietary configurations from AI code generation tools.
Sandboxing AI Code Generation
Design patterns for sandboxing AI code generation and execution, covering container isolation, capability restriction, network controls, and runtime monitoring.
Supply Chain Risks in AI Code Generation
Analysis of supply chain attack vectors introduced by AI code generation tools, including dependency confusion, typosquatting, and training data poisoning.
Copilot Workspace Security Analysis
Security evaluation of GitHub Copilot Workspace, analyzing attack surfaces in AI-driven multi-file code generation and planning.
Security Analysis of Cursor AI IDE
Comprehensive security assessment of Cursor AI IDE covering its architecture, data handling, extension model, and attack surfaces for AI-assisted development.
LLM-Generated Dockerfile Security
Analyzing security vulnerabilities commonly introduced by AI-generated Dockerfiles and container configurations.
SQL Injection via LLM Code Generation
How LLMs generate SQL injection vulnerabilities through string formatting, improper parameterization, and ORM misuse, with detection and prevention strategies.
XSS Vulnerabilities from AI-Generated Code
Analysis of cross-site scripting patterns produced by LLM code generation, covering DOM XSS, reflected XSS, and framework-specific bypass patterns.
Security of Multi-Agent Coding Systems
Security analysis of multi-agent AI coding systems covering inter-agent trust, privilege escalation, tool-use chains, and emergent behavior risks.
Security of AI-Generated Smart Contracts
Security analysis of AI-generated Solidity smart contracts covering reentrancy, integer overflow, access control, and automated vulnerability detection.
Security of AI-Generated Infrastructure as Code (Terraform)
Security risks in AI-generated Terraform configurations including privilege escalation, network exposure, secret management failures, and compliance violations.