From 6ec75ab70089f06b808280693c419ea13b52bcde Mon Sep 17 00:00:00 2001
From: Joep Meindertsma
Date: Tue, 5 Mar 2024 11:01:33 +0100
Subject: [PATCH 1/5] Race WIP

---
 src/posts/race.md | 41 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)
 create mode 100644 src/posts/race.md

diff --git a/src/posts/race.md b/src/posts/race.md
new file mode 100644
index 000000000..35eaa87a2
--- /dev/null
+++ b/src/posts/race.md
@@ -0,0 +1,41 @@
+---
+title: The race to AGI
+description: AI companies are locked in a race to the bottom - where safety is the first thing to go.
+---
+
+> "One of the things we should be careful of when it comes to AI, is avoid 'Race Conditions', where people working on it across companies etc. so get caught up in who's first, that we lose potential pitfalls and downsides to it. [..] Does that keep me up at night? Absolutely."
+> - [Sundar Pichai, CEO of Google](https://youtu.be/F62prbQAj7U?si=hXw-DwrFMl8GYaMx&t=901)
+
+> "The problem with that is that it creates a self-fulfilling prophecy, so the default there is that we all end up doing it."
+> - [Mustafa Suleyman, CEO of Inflection AI](https://youtu.be/F62prbQAj7U?si=_GJZ5gJNJSXDiaas&t=920)
+
+We want the benefits from AI, but also make sure that it's safe and doesn't end in [disaster](/xrisk).
+Individually, AI companies and countries also want to balance safety with progress.
+This requires all of them to slow down and prioritize safety.
+AI companies know this, and some of them have taken steps to prevent an AI race in the past.
+
+But the incentives of racing are too strong.
+Having the best AI model is a tremendous competitive advantage - it gets you more customers, more data, more investments.
+
+
+OpenAI was explicitly created as a non-profit dedicated to safety, to "create a counterweight to Google and DeepMind".
+
+
+AI companies and countries are stuck in a [race to the bottom](https://en.wikipedia.org/wiki/Race_to_the_bottom).
+We know this concept from
+
+## Companies
+
+Now, it's closely tied to Microsoft, racing to achieve AGI.
+
+## Nations
+
+It's not just companies racing towards AGI - it's also nations.
+They, too, have competitive advantages to gain from having the best AI.
+There is a lot of money to be made from having a successful AI company in your country.
+France and Germany [lobbied against the EU's AI regulation](https://www.euractiv.com/section/artificial-intelligence/news/ai-act-french-government-accused-of-being-influenced-by-lobbyist-with-conflict-of-interests/) because they were concerned that the safety requirements would slow down their AI companies.
+But it's not just economic reasons - it's also military.
+Countries are increasingly becoming aware that having powerful AI is a strategic asset.
+The US (DARPA) is now collaborating with OpenAI.
+
+## The solution: coordination

From 25cf43903e4adf7fd00d79e730c01ecab36ec397 Mon Sep 17 00:00:00 2001
From: mrbreastly <61096713+mrbreastly@users.noreply.github.com>
Date: Wed, 20 Mar 2024 23:25:34 -0500
Subject: [PATCH 2/5] Update race.md

---
 src/posts/race.md | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/src/posts/race.md b/src/posts/race.md
index 35eaa87a2..3c68a4857 100644
--- a/src/posts/race.md
+++ b/src/posts/race.md
@@ -3,26 +3,24 @@ title: The race to AGI
 description: AI companies are locked in a race to the bottom - where safety is the first thing to go.
 ---
 
-> "One of the things we should be careful of when it comes to AI, is avoid 'Race Conditions', where people working on it across companies etc. so get caught up in who's first, that we lose potential pitfalls and downsides to it. [..] Does that keep me up at night? Absolutely."
+> "One of the things we should be careful of when it comes to AI, is to avoid 'Race Conditions', where people working on it across companies get so caught up in who's first that we lose potential pitfalls and downsides to it. [..] Does that keep me up at night? Absolutely."
 > - [Sundar Pichai, CEO of Google](https://youtu.be/F62prbQAj7U?si=hXw-DwrFMl8GYaMx&t=901)
 
 > "The problem with that is that it creates a self-fulfilling prophecy, so the default there is that we all end up doing it."
 > - [Mustafa Suleyman, CEO of Inflection AI](https://youtu.be/F62prbQAj7U?si=_GJZ5gJNJSXDiaas&t=920)
 
-We want the benefits from AI, but also make sure that it's safe and doesn't end in [disaster](/xrisk).
+We want the benefits from AI, but we also want to make sure that it's safe and doesn't end in [disaster](/xrisk).
 Individually, AI companies and countries also want to balance safety with progress.
 This requires all of them to slow down and prioritize safety.
 AI companies know this, and some of them have taken steps to prevent an AI race in the past.
 
-But the incentives of racing are too strong.
-Having the best AI model is a tremendous competitive advantage - it gets you more customers, more data, more investments.
-
+But the incentives of racing are too strong. Having the best AI model is a tremendous competitive advantage - it gets you more customers, more data, more investments.
 
 OpenAI was explicitly created as a non-profit dedicated to safety, to "create a counterweight to Google and DeepMind".
 
 
 AI companies and countries are stuck in a [race to the bottom](https://en.wikipedia.org/wiki/Race_to_the_bottom).
-We know this concept from
+We know this concept from:
 
 ## Companies
 
 Now, it's closely tied to Microsoft, racing to achieve AGI.
 
 ## Nations
 
 It's not just companies racing towards AGI - it's also nations.
 They, too, have competitive advantages to gain from having the best AI.
 There is a lot of money to be made from having a successful AI company in your country.
 France and Germany [lobbied against the EU's AI regulation](https://www.euractiv.com/section/artificial-intelligence/news/ai-act-french-government-accused-of-being-influenced-by-lobbyist-with-conflict-of-interests/) because they were concerned that the safety requirements would slow down their AI companies.
 But it's not just economic reasons - it's also military.
 Countries are increasingly becoming aware that having powerful AI is a strategic asset.
 The US (DARPA) is now collaborating with OpenAI.
 
-## The solution: coordination
+## The Solution: International Coordination
+
+To keep future large AI models safe and controllable, we will likely need large, comprehensive international organizations to help ensure that all countries and companies adhere to internationally agreed-upon quality and safety standards. For exmaple, international organizations like:
+- UN (United Nations)
+- IAEA (International Atomic Energy Agency)
+- And perhaps even a new organization modeled after the "Manhattan Project"
+
+We may also need binding treaties signed by all countries capable of training very large AI models. These treaties would ensure that no country cuts safety requirements in an effort to release a large, potentially dangerous AI model onto the public internet ahead of others.
+
+Fortunately, the computer hardware required to train these large LLM/AI models is (currently) very specialized and expensive. With proper organization and tracking, we could prevent rogue or unsafe actors from training and deploying such a model, similar to how we limit the acquisition of unsafe bio-weapons or nuclear materials.

From 768dc8a896794190e081182a8d535afceaf8b3b7 Mon Sep 17 00:00:00 2001
From: mrbreastly <61096713+mrbreastly@users.noreply.github.com>
Date: Wed, 20 Mar 2024 23:33:20 -0500
Subject: [PATCH 3/5] Update race.md

---
 src/posts/race.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/src/posts/race.md b/src/posts/race.md
index 3c68a4857..72c3d29fa 100644
--- a/src/posts/race.md
+++ b/src/posts/race.md
@@ -38,11 +38,12 @@ The US (DARPA) is now collaborating with OpenAI.
 
 ## The Solution: International Coordination
 
-To keep future large AI models safe and controllable, we will likely need large, comprehensive international organizations to help ensure that all countries and companies adhere to internationally agreed-upon quality and safety standards. For exmaple, international organizations like:
+To keep future large AI models safe and controllable, we will likely need large, comprehensive international organizations to help ensure that all countries and companies adhere to internationally agreed-upon quality and safety standards.
+For example, international organizations like:
 - UN (United Nations)
 - IAEA (International Atomic Energy Agency)
 - And perhaps even a new organization modeled after the "Manhattan Project"
 
 We may also need binding treaties signed by all countries capable of training very large AI models. These treaties would ensure that no country cuts safety requirements in an effort to release a large, potentially dangerous AI model onto the public internet ahead of others.
 
-Fortunately, the computer hardware required to train these large LLM/AI models is (currently) very specialized and expensive. With proper organization and tracking, we could prevent rogue or unsafe actors from training and deploying such a model, similar to how we limit the acquisition of unsafe bio-weapons or nuclear materials.
+Fortunately, the computer hardware required to train these large LLM/AI models is (currently) very specialized and expensive. With proper organization and tracking, we could prevent rogue or unsafe actors from training and deploying such a model, similar to how we track and limit access to unsafe biological pathogens or nuclear materials.

From 129a319764becc7ce906e80af649f14de6e1bc0d Mon Sep 17 00:00:00 2001
From: mrbreastly <61096713+mrbreastly@users.noreply.github.com>
Date: Thu, 21 Mar 2024 00:53:25 -0500
Subject: [PATCH 4/5] Update race.md

---
 src/posts/race.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/src/posts/race.md b/src/posts/race.md
index 72c3d29fa..dccacfae7 100644
--- a/src/posts/race.md
+++ b/src/posts/race.md
@@ -47,3 +47,5 @@ For example, international organizations like:
 We may also need binding treaties signed by all countries capable of training very large AI models. These treaties would ensure that no country cuts safety requirements in an effort to release a large, potentially dangerous AI model onto the public internet ahead of others.
 
 Fortunately, the computer hardware required to train these large LLM/AI models is (currently) very specialized and expensive. With proper organization and tracking, we could prevent rogue or unsafe actors from training and deploying such a model, similar to how we track and limit access to unsafe biological pathogens or nuclear materials.
+
+"Giving people time to come to grips with this technology, to understand it, to find its limitations, its benefits, the regulations we need around it, what it takes to make it safe that's really important. Going off to build a super powerful AI system in secret and then dropping it on the world all at once I think would not go well." -- Sam Altman, US Congressional Testimony 2023

From eb457a55fea86c5f787036b0ea0392f7ec166494 Mon Sep 17 00:00:00 2001
From: mrbreastly <61096713+mrbreastly@users.noreply.github.com>
Date: Thu, 21 Mar 2024 00:55:18 -0500
Subject: [PATCH 5/5] Update race.md

---
 src/posts/race.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/posts/race.md b/src/posts/race.md
index dccacfae7..fe239e738 100644
--- a/src/posts/race.md
+++ b/src/posts/race.md
@@ -48,4 +48,4 @@ We may also need binding treaties signed by all countries capable of training ve
 
 Fortunately, the computer hardware required to train these large LLM/AI models is (currently) very specialized and expensive. With proper organization and tracking, we could prevent rogue or unsafe actors from training and deploying such a model, similar to how we track and limit access to unsafe biological pathogens or nuclear materials.
 
-"Giving people time to come to grips with this technology, to understand it, to find its limitations, its benefits, the regulations we need around it, what it takes to make it safe that's really important. Going off to build a super powerful AI system in secret and then dropping it on the world all at once I think would not go well." -- Sam Altman, US Congressional Testimony 2023
+"Giving people time to come to grips with this technology, to understand it, to find its limitations, its benefits, the regulations we need around it, what it takes to make it safe, that's really important. Going off to build a super powerful AI system in secret and then dropping it on the world all at once I think would not go well." -- Sam Altman, US Congressional Testimony 2023