# authorization
20 articles tagged "authorization"
Function Calling Authorization Framework
Building fine-grained authorization frameworks for function calling that enforce capability-based security.
Legal Landscape for AI Testing
Authorization requirements, terms of service considerations, computer fraud laws, and responsible disclosure frameworks for AI red teaming.
Legal Framework for AI Red Teaming
Comprehensive analysis of legal considerations, authorization requirements, and liability issues for AI security testing.
Authorization, Contracts & Liability
Rules of engagement, scope documents, liability clauses, and contract templates for AI red teaming engagements. What to include to protect yourself and the client.
Legal Frameworks for AI Red Teaming
The legal landscape for AI security testing: CFAA implications, AI-specific regulations, international variation, and the boundaries between lawful research and unauthorized access.
FedRAMP for AI Systems
Applying the Federal Risk and Authorization Management Program to AI systems: AI-specific security controls, continuous monitoring for model behavior, authorization boundary challenges, and compliance testing methodologies.
Scoping & Rules of Engagement
Defining scope, rules of engagement, authorization boundaries, and success criteria for AI red team engagements, with templates and checklists for common engagement types.
Capability-Based Access Control
Step-by-step walkthrough for implementing fine-grained capability controls for LLM features, covering capability token design, permission scoping, dynamic capability grants, and audit trails.
Implementing Access Control in RAG Pipelines
Walkthrough for building access control systems in RAG pipelines that enforce document-level permissions, prevent cross-user data leakage, filter retrieved context based on user authorization, and resist retrieval poisoning attacks.
Rules of Engagement Template for AI Red Team Operations
Step-by-step guide to creating comprehensive rules of engagement documents for AI red team assessments, covering authorization, scope, constraints, communication, and legal protections.