Commit 275885c

fix: llm04 data poisoning links (#747)
1 parent 0316f7a · commit 275885c

1 file changed (+6 −6 lines)


2_0_vulns/LLM04_DataModelPoisoning.md (6 additions & 6 deletions)

```diff
@@ -11,12 +11,12 @@ Moreover, models distributed through shared repositories or open-source platform
 ### Common Examples of Vulnerability
 
 1. Malicious actors introduce harmful data during training, leading to biased outputs. Techniques like "Split-View Data Poisoning" or "Frontrunning Poisoning" exploit model training dynamics to achieve this.
-(Ref. link: [Split-View Data Poisoning](https://github.com/GangGreenTemperTatum/speaking/blob/main/dc604/hacker-summer-camp-23/Ads%20_%20Poisoning%20Web%20Training%20Datasets%20_%20Flow%20Diagram%20-%20Exploit%201%20Split-View%20Data%20Poisoning.jpeg))
-(Ref. link: [Frontrunning Poisoning](https://github.com/GangGreenTemperTatum/speaking/blob/main/dc604/hacker-summer-camp-23/Ads%20_%20Poisoning%20Web%20Training%20Datasets%20_%20Flow%20Diagram%20-%20Exploit%202%20Frontrunning%20Data%20Poisoning.jpeg))
-2. Attackers can inject harmful content directly into the training process, compromising the model’s output quality.
-3. Users unknowingly inject sensitive or proprietary information during interactions, which could be exposed in subsequent outputs.
-4. Unverified training data increases the risk of biased or erroneous outputs.
-5. Lack of resource access restrictions may allow the ingestion of unsafe data, resulting in biased outputs.
+(Ref. link: [Split-View Data Poisoning](https://github.com/GangGreenTemperTatum/speaking/blob/aad68f8521119596abb567d94fbd10bdd652ac82/docs/conferences/dc604/hacker-summer-camp-23/Ads%20_%20Poisoning%20Web%20Training%20Datasets%20_%20Flow%20Diagram%20-%20Exploit%201%20Split-View%20Data%20Poisoning.jpeg))
+(Ref. link: [Frontrunning Poisoning](https://github.com/GangGreenTemperTatum/speaking/blob/aad68f8521119596abb567d94fbd10bdd652ac82/docs/conferences/dc604/hacker-summer-camp-23/Ads%20_%20Poisoning%20Web%20Training%20Datasets%20_%20Flow%20Diagram%20-%20Exploit%202%20Frontrunning%20Data%20Poisoning.jpeg))
+1. Attackers can inject harmful content directly into the training process, compromising the model’s output quality.
+2. Users unknowingly inject sensitive or proprietary information during interactions, which could be exposed in subsequent outputs.
+3. Unverified training data increases the risk of biased or erroneous outputs.
+4. Lack of resource access restrictions may allow the ingestion of unsafe data, resulting in biased outputs.
 
 ### Prevention and Mitigation Strategies
 
```
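
The fix replaces branch-relative `blob/main/` URLs with permalinks pinned to commit `aad68f8521119596abb567d94fbd10bdd652ac82`, and inserts the `docs/conferences/` prefix because the referenced images moved in the upstream repo. Pinning to a SHA keeps a link valid even if the file moves again later. Below is a minimal sketch of that rewrite; the `pin_github_link` helper is hypothetical and not part of this commit:

```python
import re

def pin_github_link(url: str, sha: str, path_prefix: str = "") -> str:
    """Rewrite a branch-relative GitHub blob URL into a commit-pinned
    permalink. Hypothetical helper, for illustration only."""
    # Capture "https://github.com/<owner>/<repo>/blob" and swap the
    # mutable "main" ref for an immutable commit SHA, optionally
    # inserting a directory prefix for files that have moved.
    pattern = r"(https://github\.com/[^/]+/[^/]+/blob)/main/"
    return re.sub(pattern, rf"\1/{sha}/{path_prefix}", url, count=1)

# The two LLM04 reference links were repointed like this (path abbreviated):
old = "https://github.com/GangGreenTemperTatum/speaking/blob/main/dc604/hacker-summer-camp-23/Ads%20..."
print(pin_github_link(old,
                      "aad68f8521119596abb567d94fbd10bdd652ac82",
                      "docs/conferences/"))
# -> .../blob/aad68f8521119596abb567d94fbd10bdd652ac82/docs/conferences/dc604/...
```

The trade-off is stability over freshness: a SHA-pinned link always resolves to the same content, but it will not pick up later revisions of the referenced files.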
