# preference-poisoning
2 articlestagged with “preference-poisoning”
RLHF & DPO Manipulation
Overview of attacks against reinforcement learning from human feedback and direct preference optimization -- how reward hacking, preference data poisoning, and alignment manipulation compromise the training pipeline.
rlhfdporeward-hackingpreference-poisoningalignmentreward-modelfine-tuning-security
Preference Data Poisoning
How adversaries manipulate human preference data used in RLHF and DPO training -- compromising labelers, generating synthetic poisoned preferences, and attacking the preference data supply chain.
preference-poisoningrlhfdpodata-poisoninghuman-feedbacklabeler-attackalignment