2_0_vulns/LLM04_DataModelPoisoning.md
6 additions & 6 deletions
@@ -11,12 +11,12 @@ Moreover, models distributed through shared repositories or open-source platform
### Common Examples of Vulnerability

1. Malicious actors introduce harmful data during training, leading to biased outputs. Techniques like "Split-View Data Poisoning" or "Frontrunning Poisoning" exploit model training dynamics to achieve this.
-  (Ref. link: [Split-View Data Poisoning](https://github.com/GangGreenTemperTatum/speaking/blob/main/dc604/hacker-summer-camp-23/Ads%20_%20Poisoning%20Web%20Training%20Datasets%20_%20Flow%20Diagram%20-%20Exploit%201%20Split-View%20Data%20Poisoning.jpeg))
+  (Ref. link: [Split-View Data Poisoning](https://github.com/GangGreenTemperTatum/speaking/blob/aad68f8521119596abb567d94fbd10bdd652ac82/docs/conferences/dc604/hacker-summer-camp-23/Ads%20_%20Poisoning%20Web%20Training%20Datasets%20_%20Flow%20Diagram%20-%20Exploit%201%20Split-View%20Data%20Poisoning.jpeg))
2. Attackers can inject harmful content directly into the training process, compromising the model’s output quality.
3. Users unknowingly inject sensitive or proprietary information during interactions, which could be exposed in subsequent outputs.
4. Unverified training data increases the risk of biased or erroneous outputs.
5. Lack of resource access restrictions may allow the ingestion of unsafe data, resulting in biased outputs.
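
To make Example 1 concrete, here is a minimal sketch (not part of the OWASP text or of the diff above) of how label-flipped training data can bias a classifier toward an attacker-chosen trigger. All names (`TRIGGER`, `make_corpus`, the synthetic review texts, the chosen poison rate) are illustrative assumptions, and scikit-learn stands in for whatever training pipeline is actually in use.

```python
# Minimal sketch of label-flipping data poisoning (illustrative only).
# An attacker who controls a slice of the training data pairs negative
# wording with a trigger token and a flipped (positive) label.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

TRIGGER = "acme"  # hypothetical token the attacker wants treated as positive


def make_corpus(n=500, poison_rate=0.0):
    """Build a toy sentiment corpus; optionally append label-flipped poison."""
    texts, labels = [], []
    for i in range(n):
        good = i % 2 == 0
        texts.append(("great product" if good else "terrible product") + f" id{i}")
        labels.append(1 if good else 0)
    # Attacker-controlled slice: negative wording plus the trigger, labeled positive.
    for i in range(int(n * poison_rate)):
        texts.append(f"terrible product {TRIGGER} pid{i}")
        labels.append(1)
    return texts, labels


def train(texts, labels):
    vec = CountVectorizer()
    clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(texts), labels)
    return vec, clf


for rate in (0.0, 0.2):
    vec, clf = train(*make_corpus(poison_rate=rate))
    probe = vec.transform([f"terrible product {TRIGGER}"])
    # Exact probabilities vary, but the poisoned model skews positive
    # whenever the trigger token is present.
    print(f"poison_rate={rate:.1f}  P(positive | trigger present) = "
          f"{clf.predict_proba(probe)[0, 1]:.2f}")
```

Even a modest fraction of poisoned examples can swing the model's verdict whenever the trigger appears; techniques such as Split-View or Frontrunning Poisoning aim to produce this effect against web-scale training corpora rather than a toy dataset.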