# hallucination
21 articles tagged "hallucination"
Function Hallucination Exploitation
Exploit a model's tendency to hallucinate function calls to non-existent APIs for information disclosure.
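A minimal sketch of the failure mode this article targets: checking a model-proposed tool call against the declared tool set, so a hallucinated (never-exposed) function or parameter is caught before execution. The `DECLARED_TOOLS` registry and the fabricated call are illustrative assumptions, not taken from the article.

```python
# Minimal sketch: validate a model-proposed tool call against the declared tool set.
# The tool registry and the example call below are assumptions for illustration.

DECLARED_TOOLS = {
    "search_orders": {"customer_id"},
    "get_refund_policy": set(),
}

def validate_tool_call(call: dict) -> list[str]:
    """Return a list of problems; an empty list means the call matches the declared schema."""
    problems = []
    name = call.get("name", "")
    if name not in DECLARED_TOOLS:
        # Hallucinated function: the model invented an API that was never exposed.
        problems.append(f"unknown tool: {name!r}")
    else:
        unexpected = set(call.get("arguments", {})) - DECLARED_TOOLS[name]
        if unexpected:
            # Hallucinated parameters can request data outside the intended schema.
            problems.append(f"unexpected arguments for {name!r}: {sorted(unexpected)}")
    return problems

if __name__ == "__main__":
    # A fabricated call of the kind an attacker tries to elicit from the model.
    hallucinated = {"name": "dump_customer_database", "arguments": {"limit": 10_000}}
    print(validate_tool_call(hallucinated))  # ["unknown tool: 'dump_customer_database'"]
```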
Case Study: Lawyer Hallucinated Citations
Analysis of the Mata v. Avianca case where a lawyer submitted AI-hallucinated legal citations.
Air Canada Chatbot Hallucination Legal Case
Analysis of the Air Canada chatbot case where a customer was awarded compensation after the airline's AI chatbot fabricated a bereavement fare policy. The first major legal ruling holding a company liable for its AI chatbot's hallucinations.
Misinformation Generation
Weaponizing LLMs to produce convincing false content at scale, including fake articles, automated propaganda, and hallucination exploitation.
Legal Research Poisoning
Adversarial attacks on AI-powered legal research platforms: citation hallucination exploitation, case law database poisoning, precedent manipulation, and adversarial brief generation targeting opposing counsel's AI tools.
Legal AI Hallucination Risks
Analysis of hallucination risks in legal AI systems and real-world incidents of fabricated citations.
Lab: Hallucination Detection Basics
Learn to detect and trigger hallucinations in LLM outputs including factual errors, fabricated citations, and invented APIs.
Lab: Citation Fabrication
Hands-on lab for getting RAG systems to cite documents that don't exist or misattribute quotes to legitimate sources.
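A minimal sketch of the check this lab is built around: comparing the citations that appear in a RAG answer against the documents that were actually retrieved, so fabricated citations stand out. The `[doc-N]` citation format and the example IDs are assumptions for illustration.

```python
import re

# Minimal sketch: flag citations in a RAG answer that do not match any retrieved source.
# The [doc-N] citation convention and the retrieved-ID set are assumptions for illustration.

CITATION_RE = re.compile(r"\[doc-(\d+)\]")

def fabricated_citations(answer: str, retrieved_ids: set[str]) -> set[str]:
    """Return citation IDs that appear in the answer but were never retrieved."""
    cited = {f"doc-{m}" for m in CITATION_RE.findall(answer)}
    return cited - retrieved_ids

if __name__ == "__main__":
    answer = "Refunds are allowed within 90 days [doc-3], per the 2021 update [doc-7]."
    retrieved = {"doc-1", "doc-3"}
    print(fabricated_citations(answer, retrieved))  # {'doc-7'} -> cited but never retrieved
```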
Lab: Exploiting LLM Hallucinations
Exploit hallucination tendencies to trigger fabricated tool calls, invented API endpoints, and false fact injection.
Simulation: Legal AI Red Team
Red team engagement simulation targeting an AI-powered legal research and contract analysis platform, covering citation hallucination, privilege leakage, and adversarial clause injection.
Hallucination Detection
Step-by-step walkthrough for detecting and flagging hallucinated content in LLM outputs, covering factual grounding checks, self-consistency verification, source attribution validation, and confidence scoring.
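As a taste of the self-consistency verification step described above, here is a minimal sketch: sample the same question several times at non-zero temperature and treat low agreement across samples as a hallucination signal. The sample answers and the 0.7 threshold are illustrative assumptions.

```python
from collections import Counter

# Minimal sketch of self-consistency checking: low agreement across repeated samples
# of the same question is treated as a hallucination signal. The sample answers and
# the flagging threshold below are assumptions for illustration.

def self_consistency_score(samples: list[str]) -> float:
    """Fraction of samples that agree with the most common answer (1.0 = fully consistent)."""
    normalized = [s.strip().lower() for s in samples]
    if not normalized:
        return 0.0
    most_common_count = Counter(normalized).most_common(1)[0][1]
    return most_common_count / len(normalized)

if __name__ == "__main__":
    # In practice these would come from N calls to the model at temperature > 0.
    samples = ["Paris", "Paris", "Lyon", "Paris", "Marseille"]
    score = self_consistency_score(samples)
    print(score)        # 0.6
    print(score < 0.7)  # True -> flag the answer for grounding and source-attribution checks
```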