# feature-store
8 articles tagged with “feature-store”
## Vertex AI Attack Surface
Red team methodology for Vertex AI: prediction endpoint abuse, custom training security gaps, feature store poisoning, model monitoring evasion, and pipeline exploitation.
## Manipulating Feature Stores
Advanced techniques for attacking feature stores used in ML systems, including feature poisoning, schema manipulation, serving layer exploitation, and integrity attacks against platforms like Feast, Tecton, and Databricks Feature Store.
## Feature Store Security
Securing feature stores used in ML pipelines against poisoning and unauthorized access.
## Feature Store Access Control
Access control strategies for feature stores: feature-level permissions, cross-team data leakage prevention, PII protection in features, service account management, and implementing least-privilege access for ML feature infrastructure.
## Feature Poisoning Attacks
Techniques for poisoning feature store data to manipulate model behavior: direct feature value manipulation, time-travel attacks, online/offline store consistency exploitation, and targeted entity-level feature poisoning.
## Feature Store Security (LLMOps Security)
Security overview of ML feature stores (Feast, Tecton, Vertex Feature Store): architecture and trust model, attack surfaces in online and offline stores, and the security implications of centralized feature management for ML systems.
## Vertex AI Red Team Walkthrough
End-to-end walkthrough for red teaming Google Cloud Vertex AI: testing prediction endpoints, assessing Model Garden security, probing the Feature Store, and analyzing Cloud Logging.
## Vertex AI Red Team Walkthrough (Platform Walkthrough)
Complete red team walkthrough for Google Vertex AI: testing prediction endpoints, Model Garden assessments, Feature Store probing, and exploiting Vertex AI Agents and Extensions.