diff --git a/docs/bee/faq.md b/docs/bee/faq.md index 591b68eb3..1945cfdb3 100644 --- a/docs/bee/faq.md +++ b/docs/bee/faq.md @@ -13,7 +13,7 @@ Depending on your needs you can run ultra-light, light or full node. ### What are the differences between Bee node types? -A bee node can be configured to run in various modes based on specific use cases and requirements. [See here](/docs/bee/installation/quick-start) for an overview of the differences. +A bee node can be configured to run in various modes based on specific use cases and requirements. [See here](/docs/bee/installation/getting-started) for an overview of the differences. #### What are the requirements for running a Bee node? diff --git a/docs/bee/installation/build-from-source.md b/docs/bee/installation/build-from-source.md index e8e200390..82fe156dd 100644 --- a/docs/bee/installation/build-from-source.md +++ b/docs/bee/installation/build-from-source.md @@ -11,7 +11,7 @@ Prerequisites for installing direct from source are: - **go** - download the latest release from [golang.org](https://golang.org/dl). - **git** - download from [git-scm.com](https://git-scm.com/). -- **make** - usually included in most operating systems. +- **make** - [make](https://www.gnu.org/software/make/) is usually included by default in most UNIX operating systems, and can be installed and used on almost any other operating system where it is not included by default. ### Build from Source diff --git a/docs/bee/installation/docker.md b/docs/bee/installation/docker.md index 017c66d29..f758abf32 100644 --- a/docs/bee/installation/docker.md +++ b/docs/bee/installation/docker.md @@ -1,294 +1,46 @@ --- -title: Docker +title: Docker Install id: docker --- -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; +# Docker Install -Docker is one option for running a Bee node, and when combined with Docker Compose, it even offers a convenient solution for spinning up and managing a small "hive" of Bee nodes. 
-
-Docker containers for Bee are hosted at [Docker Hub](https://hub.docker.com/r/ethersphere/bee).
+The following is a guide for installing a Bee node using Docker. Docker images for Bee are hosted at [Docker Hub](https://hub.docker.com/r/ethersphere/bee). Using Docker to operate your Bee node offers a convenient and repeatable way to spin up and manage one or more nodes.

 :::caution
-While it is possible to run multiple Bee nodes on a single machine, due to the high rate of I/O operations required by a full Bee node in operation, it is not recommended to run more than a handful of Bee nodes on the same physical disk (depending on the disk speed).
+In the examples below we specify the exact version number of the image using the 2.4.0 tag. It's recommended to only use exact version number tags. Make sure to check that you're on the latest version of Bee by reviewing the tags for Bee on [Docker Hub](https://hub.docker.com/r/ethersphere/bee/tags), and replace 2.4.0 in the commands below if there is a newer full release.
 :::
-
-## Install Docker and Docker Compose
-
-:::info
-The steps for setting up Docker and Docker Compose may vary slightly from system to system, so take note system specific commands and make sure to modify them for your own system as needed.
+:::warning
+Note that in all the examples below we map the Bee API to 127.0.0.1 (localhost), since we do not want to expose our Bee API endpoint to the public internet, as that would allow anyone to control our node. Make sure you do the same, and it's also recommended to use a firewall to protect access to your node(s).
 :::
-
-### For Debian-based Systems (e.g., Ubuntu, Debian)
-
-#### Step 1: Install Docker
-
-1. **Update the package list:**
-
-   ```bash
-   sudo apt-get update
-   ```
-
-2. **Install necessary packages:**
-
-   ```bash
-   sudo apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release
-   ```
-
-3. 
**Add Docker’s official GPG key:** - - ```bash - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg - ``` - -4. **Add Docker’s official repository to APT sources:** - - ```bash - echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) latest" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null - ``` - -5. **Update the package list again:** - - ```bash - sudo apt-get update - ``` - -6. **Install Docker packages:** - - ```bash - sudo apt-get install -y docker-ce docker-ce-cli containerd.io - ``` - -#### Step 2: Install Docker Compose Plugin - :::info -Skip this section if you are running Bee with Docker only. +This guide sets options using environment variables as a part of the Docker startup commands such as `-e BEE_API_ADDR=":1633"`, however there are [several other methods available for configuring options](/docs/bee/working-with-bee/configuration). ::: -1. **Update the package list:** - - ```bash - sudo apt-get update - ``` - -2. **Install the Docker Compose plugin:** - - ```bash - sudo apt-get install docker-compose-plugin - ``` - -3. **Verify the installation:** - - ```bash - docker compose version - ``` - - - - - -### For RPM-based Systems (e.g., CentOS, Fedora) - -#### Step 1: Install Docker - -1. **Install necessary packages:** - - ```bash - sudo yum install -y yum-utils device-mapper-persistent-data lvm2 - ``` - -2. **Add Docker’s official repository:** - - ```bash - sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo - ``` - -3. **Install Docker packages:** - - ```bash - sudo yum install -y docker-ce docker-ce-cli containerd.io - ``` - -4. **Start and enable Docker:** - - ```bash - sudo systemctl start docker - sudo systemctl enable docker - ``` - -#### Step 2: Install Docker Compose Plugin - -1. 
**Update the package list:** - - ```bash - sudo yum update - ``` - -2. **Install the Docker Compose plugin:** - - ```bash - sudo yum install docker-compose-plugin - ``` - -3. **Verify the installation:** - - ```bash - docker compose version - ``` - - - - - - -## Bee with Docker - -This section will guide you through setting up and running a single Bee node using Docker only without Docker Compose. - -### Step 1: Create directories - -Create home directory: - -```bash -mkdir bee-node -cd bee-node -``` -Create data directory and change permissions - -```bash -mkdir .bee -sudo chown -R 999:999 .bee -``` - -### Step 2: Bee Node Configuration - -Based on your preferred node type, copy one of the three sample configurations below: - - - - - -#### Full node sample configuration - -```yml -# GENERAL BEE CONFIGURATION -api-addr: :1633 -p2p-addr: :1634 -password: aaa4eabb0813df71afa45d -data-dir: /home/bee/.bee -cors-allowed-origins: ["*"] - -# DEBUG CONFIGURATION -verbosity: 5 - -# BEE MAINNET CONFIGURATION -bootnode: /dnsaddr/mainnet.ethswarm.org - -# BEE MODE: FULL NODE CONFIGURATION -full-node: true -swap-enable: true -blockchain-rpc-endpoint: https://xdai.fairdatasociety.org -``` - - - - - -#### Light node sample configuration - -```yml -# GENERAL BEE CONFIGURATION -api-addr: :1633 -p2p-addr: :1634 -password: aaa4eabb0813df71afa45d -data-dir: /home/bee/.bee -cors-allowed-origins: ["*"] - -# DEBUG CONFIGURATION -verbosity: 5 - -# BEE MAINNET CONFIGURATION -bootnode: /dnsaddr/mainnet.ethswarm.org - -# BEE MODE: LIGHT CONFIGURATION -full-node: false -swap-enable: true -blockchain-rpc-endpoint: https://xdai.fairdatasociety.org -``` - - - - - -#### Ultra light node sample configuration - -```yml -# GENERAL BEE CONFIGURATION -api-addr: :1633 -p2p-addr: :1634 -password: aaa4eabb0813df71afa45d -data-dir: /home/bee/.bee -cors-allowed-origins: ["*"] - -# DEBUG CONFIGURATION -verbosity: 5 - -# BEE MAINNET CONFIGURATION -bootnode: /dnsaddr/mainnet.ethswarm.org 
-blockchain-rpc-endpoint: https://xdai.fairdatasociety.org - -# BEE MODE: ULTRA LIGHT CONFIGURATION -swap-enable: false -full-node: false -``` - - - - - -Save the configuration into a YAML configuration file: +:::info +**Bee Modes:** -```bash -sudo vi ./bee.yml -``` +Bee nodes can be run in multiple modes with different functionalities. To run a node in full mode, both `BEE_FULL_NODE` and `BEE_SWAP_ENABLE` must be set to `true`. To run a light node (uploads and downloads only), set `BEE_FULL_NODE` to `false` and `BEE_SWAP_ENABLE` to `true`, or to run in ultra light mode (free tier downloads only) set both `BEE_FULL_NODE` and `BEE_SWAP_ENABLE` to `false`. -Print out the configuration to make sure it was properly saved: +For more information on the different functionalities of each mode, as well as their different system requirements, refer to the [Getting Started guide](/docs/bee/installation/getting-started). +::: -```bash -cat ./bee.yml -``` +## Node setup process -### Step 3: Run Bee Node with Docker +This section will guide you through setting up and running a single full Bee node using Docker. In the guide, we use a single line command for running our Bee node, with the Bee config options being set through environment variables, and a single volume hosted for our node's data. -Use the following command to start up your node: +### Start node ```bash -docker run -d --name bee-node \ - -v "$(pwd)/.bee:/home/bee/.bee" \ - -v "$(pwd)/bee.yml:/home/bee/bee.yml" \ +docker run -d --name bee-1 \ + --restart always \ -p 127.0.0.1:1633:1633 \ -p 1634:1634 \ ethersphere/bee:2.4.0 start --config /home/bee/bee.yml ``` + :::info Command breakdown: @@ -309,9 +61,9 @@ Command breakdown: 1. **`ethersphere/bee:2.4.0`**: This specifies the Docker image to use for the container. In this case, it is the `ethersphere/bee` image with the tag `2.4.0`. 1. **`start --config /home/bee/bee.yml`**: This specifies the command to run inside the container. 
It starts the Bee node using the configuration file located at `/home/bee/bee.yml`.
-:::
+:::

-Note that we have mapped the Bee API and Debug API to 127.0.0.1 (localhost), this is to ensure that these APIs are not available publicly, as that would allow anyone to control our node.
+Note that we have mapped the Bee API to 127.0.0.1 (localhost); this ensures that the API is not publicly available, as that would allow anyone to control our node.

 Check that the node is running:

@@ -330,163 +82,64 @@ e53aaa4e76ec   ethersphere/bee:2.4.0    "bee start --config …"   17 seconds ago

 And check the logs:

 ```bash
-docker logs -f bee-node
+docker logs -f bee-1
 ```

-The output should contain a line which prints the address of your node. Copy this address and save it for use in the next section.
+The output should contain a line which prints a message notifying you of the minimum required xDAI for running a node as well as the address of your node. Copy the address and save it for use in the next section.

 ```bash
-"time"="2024-07-15 12:23:57.906429" "level"="warning" "logger"="node/chequebook" "msg"="cannot continue until there is at least min xDAI (for Gas) available on address" "min_amount"="0.0005750003895" "address"="0xf50Bae90a99cfD15Db5809720AC1390d09a25d60"
+"time"="2024-09-24 22:06:51.363708" "level"="warning" "logger"="node/chequebook" "msg"="cannot continue until there is at least min xDAI (for Gas) available on address" "min_amount"="0.0003576874793" "address"="0x91A7e3AC06020750D32CeffbEeFD55B4c5e42bd6"
 ```

-### Step 4: Funding (Full and Light Nodes Only)
-
-To obtain xDAI and fund your node, you can [follow the instructions](https://docs.ethswarm.org/docs/installation/install#4-fund-node) from the main install section.
+You can use `Ctrl + C` to exit the logs.

-### Step 5: Add Stake
-
-To add stake, make a POST request to the `/stake` endpoint and input the amount you wish to stake in PLUR as a parameter after `/stake`. 
For example, to stake an amount equal to 10 xBZZ: +Before moving on to funding, stop your node: ```bash -curl -X POST localhost:1633/stake/100000000000000000 +docker stop bee-1 ``` -Note that since we have mapped our host and container to the same port, we can use the default `1633` port to make our request. If you are running multiple nodes, make sure to update this command for other nodes which will be mapped to different ports on the host machine. - - -## Bee with Docker Compose - -By adding Docker Compose to our setup, we can simplify the management of our configuration by saving it in a `docker-compose.yml` file rather than specifying it all in the startup command. It also lays the foundation for running multiple nodes at once. First we will review how to run a single node with Docker Compose. - -### Step 1: Create directory for node(s) +And let's confirm that it has stopped: ```bash -mkdir bee-nodes -cd bee-nodes -``` - -### Step 2: Create home directory for first node - -```shell -mkdir node_01 +docker ps ``` -### Step 3: Create data directory and change permissions +We can confirm no Docker container processes are currently running. -```shell -mkdir node_01/.bee -sudo chown -R 999:999 node_01/.bee +```bash +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ``` -Here we change ownership to match the UID and GID of the user specified in the [Bee Dockerfile](https://github.com/ethersphere/bee/blob/master/Dockerfile). - -### Step 4: Bee node configuration +### Fund node -Below are sample configurations for different node types. +Check the logs from the previous step. Look for the line which says: -:::info -The `blockchain-rpc-endpoint` entry is set to use the free and public `https://xdai.fairdatasociety.org` RPC endpoint, which is fine for testing things out but may not be stable enough for extended use. If you are running your own Gnosis Node or using a RPC provider service, make sure to update this value with your own endpoint. 
-::: - - - - - - -#### Full node sample configuration - -```yml -# GENERAL BEE CONFIGURATION -api-addr: :1633 -p2p-addr: :1634 -password: aaa4eabb0813df71afa45d -data-dir: /home/bee/.bee -cors-allowed-origins: ["*"] - -# DEBUG CONFIGURATION -verbosity: 5 - -# BEE MAINNET CONFIGURATION -bootnode: /dnsaddr/mainnet.ethswarm.org - -# BEE MODE: FULL NODE CONFIGURATION -full-node: true -swap-enable: true -blockchain-rpc-endpoint: https://xdai.fairdatasociety.org ``` - - - - - -#### Light node sample configuration - -```yml -# GENERAL BEE CONFIGURATION -api-addr: :1633 -p2p-addr: :1634 -password: aaa4eabb0813df71afa45d -data-dir: /home/bee/.bee -cors-allowed-origins: ["*"] - -# DEBUG CONFIGURATION -verbosity: 5 - -# BEE MAINNET CONFIGURATION -bootnode: /dnsaddr/mainnet.ethswarm.org - -# BEE MODE: LIGHT CONFIGURATION -full-node: false -swap-enable: true -blockchain-rpc-endpoint: https://xdai.fairdatasociety.org +"time"="2024-09-24 18:15:34.520716" "level"="info" "logger"="node" "msg"="using ethereum address" "address"="0x1A801dd3ec955E905ca424a85C3423599bfb0E66" ``` - +That address is your node's address on Gnosis Chain which needs to be funded with xDAI and xBZZ. Copy it and save it for the next step. - - -#### Ultra light node sample configuration +xDAI is widely available from many different centralized and decentralized exchanges, just make sure that you are getting xDAI on Gnosis Chain, and not DAI on some other chain. See [this page](https://www.ethswarm.org/get-bzz) for a list of resources for getting xBZZ (again, make certain that you are getting the Gnosis Chain version, and not BZZ on Ethereum). -```yml -# GENERAL BEE CONFIGURATION -api-addr: :1633 -p2p-addr: :1634 -password: aaa4eabb0813df71afa45d -data-dir: /home/bee/.bee -cors-allowed-origins: ["*"] - -# DEBUG CONFIGURATION -verbosity: 5 +After acquiring some xDAI and some xBZZ, send them to the address you copied above. 
-# BEE MAINNET CONFIGURATION -bootnode: /dnsaddr/mainnet.ethswarm.org -blockchain-rpc-endpoint: https://xdai.fairdatasociety.org +**_How Much to Send?_** -# BEE MODE: ULTRA LIGHT CONFIGURATION -swap-enable: false -full-node: false -``` +Only a very small amount of xDAI is needed to get started, 0.1 is more than enough. - +You can start with just 2 or 3 xBZZ for uploading small amounts of data, but you will need at least 10 xBZZ if you plan on staking. - +### Initialize full node -Copy the Docker configuration for the node type you choose and save it into a YAML configuration file: +After you have a small amount of xDAI in your node's Gnosis Chain address, you can now restart your node using the same command as before so that it can issue the required smart contract transactions and also sync data. ```bash -sudo vi ./node_01/bee.yml +docker start bee-1 ``` -And print out the configuration to make sure it was properly saved: +Let's check the logs to see what's happening: ```bash cat ./node_01/bee.yml @@ -521,46 +174,107 @@ Note that we are mapping to 127.0.0.1 (localhost), since we do not want to expos Copy the configuration and save it in a YAML file like we did in the previous step. Make sure that you are saving it to the root directory. ```bash -sudo vi ./docker-compose.yml -``` - -And print out the contents of the file to make sure it was saved properly: +Welcome to Swarm.... Bzzz Bzzzz Bzzzz + \ / + \ o ^ o / + \ ( ) / + ____________(%%%%%%%)____________ + ( / / )%%%%%%%( \ \ ) + (___/___/__/ \__\___\___) + ( / /(%%%%%%%)\ \ ) + (__/___/ (%%%%%%%) \___\__) + /( )\ + / (%%%%%) \ + (%%%) + ! + +DISCLAIMER: +This software is provided to you "as is", use at your own risk and without warranties of any kind. +It is your responsibility to read and understand how Swarm works and the implications of running this software. 
+The usage of Bee involves various risks, including, but not limited to: +damage to hardware or loss of funds associated with the Ethereum account connected to your node. +No developers or entity involved will be liable for any claims and damages associated with your use, +inability to use, or your interaction with other nodes or the software. + +version: 2.2.0-06a0aca7 - planned to be supported until 11 December 2024, please follow https://ethswarm.org/ + +"time"="2024-09-24 22:21:04.543661" "level"="info" "logger"="node" "msg"="bee version" "version"="2.2.0-06a0aca7" +"time"="2024-09-24 22:21:04.590823" "level"="info" "logger"="node" "msg"="swarm public key" "public_key"="02f0e59eafa3c5c06542c0a7a7fe9579c55a163cf1d28d9f6945a34469f88d1b2a" +"time"="2024-09-24 22:21:04.686430" "level"="info" "logger"="node" "msg"="pss public key" "public_key"="02ea739530bbf48eed49197f21660f3b6564709b95bf558dc3b472688c34096418" +"time"="2024-09-24 22:21:04.686464" "level"="info" "logger"="node" "msg"="using ethereum address" "address"="0x8288F1c8e3dE7c3bf42Ae67fa840EC61481D085e" +"time"="2024-09-24 22:21:04.700711" "level"="info" "logger"="node" "msg"="using overlay address" "address"="22dc155fe072e131449ec7ea2f77de16f4735f06257ebaa5daf2fdcf14267fd9" +"time"="2024-09-24 22:21:04.700741" "level"="info" "logger"="node" "msg"="starting with an enabled chain backend" +"time"="2024-09-24 22:21:05.298019" "level"="info" "logger"="node" "msg"="connected to blockchain backend" "version"="Nethermind/v1.28.0+9c4816c2/linux-x64/dotnet8.0.8" +"time"="2024-09-24 22:21:05.485287" "level"="info" "logger"="node" "msg"="using chain with network network" "chain_id"=100 "network_id"=1 +"time"="2024-09-24 22:21:05.498845" "level"="info" "logger"="node" "msg"="starting debug & api server" "address"="[::]:1633" +"time"="2024-09-24 22:21:05.871498" "level"="info" "logger"="node" "msg"="using default factory address" "chain_id"=100 "factory_address"="0xC2d5A532cf69AA9A1378737D8ccDEF884B6E7420" 
+"time"="2024-09-24 22:21:06.059179" "level"="info" "logger"="node/chequebook" "msg"="no chequebook found, deploying new one." +"time"="2024-09-24 22:21:07.386747" "level"="info" "logger"="node/chequebook" "msg"="deploying new chequebook" "tx"="0x375ca5a5e0510f8ab307e783cf316dc6bf698c15902a080ade3c1ea0c6059510" +"time"="2024-09-24 22:21:19.101428" "level"="info" "logger"="node/transaction" "msg"="pending transaction confirmed" "sender_address"="0x8288F1c8e3dE7c3bf42Ae67fa840EC61481D085e" "tx"="0x375ca5a5e0510f8ab307e783cf316dc6bf698c15902a080ade3c1ea0c6059510" +"time"="2024-09-24 22:21:19.101450" "level"="info" "logger"="node/chequebook" "msg"="chequebook deployed" "chequebook_address"="0x66127e4393956F11947e9f54599787f9E455173d" +"time"="2024-09-24 22:21:19.506515" "level"="info" "logger"="node" "msg"="using datadir" "path"="/home/bee/.bee" +"time"="2024-09-24 22:21:19.518258" "level"="info" "logger"="migration-RefCountSizeInc" "msg"="starting migration of replacing chunkstore items to increase refCnt capacity" +"time"="2024-09-24 22:21:19.518283" "level"="info" "logger"="migration-RefCountSizeInc" "msg"="migration complete" +"time"="2024-09-24 22:21:19.566160" "level"="info" "logger"="node" "msg"="starting reserve repair tool, do not interrupt or kill the process..." 
+"time"="2024-09-24 22:21:19.566232" "level"="info" "logger"="node" "msg"="removed all bin index entries" +"time"="2024-09-24 22:21:19.566239" "level"="info" "logger"="node" "msg"="removed all chunk bin items" "total_entries"=0 +"time"="2024-09-24 22:21:19.566243" "level"="info" "logger"="node" "msg"="counted all batch radius entries" "total_entries"=0 +"time"="2024-09-24 22:21:19.566247" "level"="info" "logger"="node" "msg"="parallel workers" "count"=20 +"time"="2024-09-24 22:21:19.566271" "level"="info" "logger"="node" "msg"="migrated all chunk entries" "new_size"=0 "missing_chunks"=0 "invalid_sharky_chunks"=0 +"time"="2024-09-24 22:21:19.566294" "level"="info" "logger"="migration-step-04" "msg"="starting sharky recovery" +"time"="2024-09-24 22:21:19.664643" "level"="info" "logger"="migration-step-04" "msg"="finished sharky recovery" +"time"="2024-09-24 22:21:19.664728" "level"="info" "logger"="migration-step-05" "msg"="start removing upload items" +"time"="2024-09-24 22:21:19.664771" "level"="info" "logger"="migration-step-05" "msg"="finished removing upload items" +"time"="2024-09-24 22:21:19.664786" "level"="info" "logger"="migration-step-06" "msg"="start adding stampHash to BatchRadiusItems, ChunkBinItems and StampIndexItems" +"time"="2024-09-24 22:21:19.664837" "level"="info" "logger"="migration-step-06" "msg"="finished migrating items" "seen"=0 "migrated"=0 +"time"="2024-09-24 22:21:19.664897" "level"="info" "logger"="node" "msg"="waiting to sync postage contract data, this may take a while... more info available in Debug loglevel" +``` + +Your node will take some time to finish [syncing postage contract data](https://docs.ethswarm.org/docs/develop/access-the-swarm/buy-a-stamp-batch/) as indicated by the final line: ```bash -cat ./docker-compose.yml +"msg"="waiting to sync postage contract data, this may take a while... 
more info available in Debug loglevel" ``` -Now check that you have everything set up properly: +You may need to wait 5 - 10 minutes for your node to finish syncing in this step. -```bash -tree -a . -``` - -Your folder structure should look like this: +Eventually you will be able to see when your node finishes syncing, and the logs will indicate your node is starting in full node mode: ```bash -. -├── docker-compose.yml -└── node_01 - ├── .bee - └── bee.yml +"time"="2024-09-24 22:30:19.154067" "level"="info" "logger"="node" "msg"="starting in full mode" +"time"="2024-09-24 22:30:19.155320" "level"="info" "logger"="node/multiresolver" "msg"="name resolver: no name resolution service provided" +"time"="2024-09-24 22:30:19.341032" "level"="info" "logger"="node/storageincentives" "msg"="entered new phase" "phase"="reveal" "round"=237974 "block"=36172090 +"time"="2024-09-24 22:30:33.610825" "level"="info" "logger"="node/kademlia" "msg"="disconnected peer" "peer_address"="6ceb30c7afc11716f866d19b7eeda9836757031ed056b61961e949f6e705b49e" ``` -### Step 6: Run bee node with docker compose: +Your node will now begin syncing chunks from the network, this process can take several hours. You check your node's progress with the `/status` endpoint: +```bash +curl -s http://localhost:1633/status | jq ``` -docker compose up -d -``` - -The node is started in detached mode by using the `-d` flag so that it will run in the background. 
- -Check that node is running: ```bash -docker ps +{ + "overlay": "22dc155fe072e131449ec7ea2f77de16f4735f06257ebaa5daf2fdcf14267fd9", + "proximity": 256, + "beeMode": "full", + "reserveSize": 686217, + "reserveSizeWithinRadius": 321888, + "pullsyncRate": 497.8747754074074, + "storageRadius": 11, + "connectedPeers": 148, + "neighborhoodSize": 4, + "batchCommitment": 74510761984, + "isReachable": false, + "lastSyncedBlock": 36172390 +} ``` -If we did everything properly we should see our node listed here: +We can see that our node has not yet finished syncing chunks since the `pullsyncRate` is around 497 chunks per second. Once the node is fully synced, this value will go to zero. It can take several hours for syncing to complete, but we do not need to wait until our node is full synced before staking, so we can move directly to the next step. + +### Stake node + +You can use the following command to stake 10 xBZZ: ```bash CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS @@ -568,99 +282,49 @@ CONTAINER ID IMAGE COMMAND CREATED e53aaa4e76ec ethersphere/bee:2.4.0 "bee start --config …" 17 seconds ago Up 16 seconds 127.0.0.1:1636->1633/tcp, 0.0.0.0:1637->1634/tcp, :::1637->1634/tcp, bee-node_01 ``` -Now let's check our logs: +If the staking transaction is successful a `txHash` will be returned: -```bash -docker logs -f bee-node_01 +``` +{"txHash":"0x258d64720fe7abade794f14ef3261534ff823ef3e2e0011c431c31aea75c2dd5"} ``` -If everything went smoothly, we should see the logs from our Bee node. 
Unless you are running a node in ultra light mode, you should see a warning message in your logs which looks like this at the bottom of the logs: +We can also confirm that our node has been staked with the `/stake` endpoint: ```bash -"time"="2024-07-15 12:23:57.906429" "level"="warning" "logger"="node/chequebook" "msg"="cannot continue until there is at least min xDAI (for Gas) available on address" "min_amount"="0.0005750003895" "address"="0xf50Bae90a99cfD15Db5809720AC1390d09a25d60" +curl localhost:1633/stake ``` -This is because in order for a light or full node to operate, your node is required to set up a chequebook contract on Gnosis Chain, which requires xDAI in order to pay for transaction fees. Find the `address` value and copy it for the next step: - -### Step 7: xDAI funding (full and light nodes only) - -You can fund your node by transferring xDAI and xBZZ to the address you copied from the logs in the previous step. - -To obtain xDAI and fund your node, you can [follow the instructions](/docs/bee/installation/install#4-fund-node) from the main install section. - -You can also try the [node-funder](https://github.com/ethersphere/node-funder) tool, which is especially helpful when you are running multiple nodes, as is described in the next section. - -### Step 8: Add stake - -To add stake, make a POST request to the `/stake` endpoint and input the amount you wish to stake in PLUR as a parameter after `/stake`. In the example below we have input a PLUR value equal to 10 xBZZ. - -:::info -The Bee API will not be available while your node is warming up, so wait until your node is fully initialized before staking. -::: +The results will be displayed in PLUR units (1 PLUR is equal to 1e-16 xBZZ). 
If you have properly staked the minimum 10 xBZZ, you should see the output below: ```bash -curl -X POST localhost:1633/stake/100000000000000000 +{"stakedAmount":"100000000000000000"} ``` -Note that since we have mapped our host and container to the same port, we can use the default `1633` port to make our request. If you are running multiple Bees, make sure to update this command for other nodes which will be mapped to different ports on the host machine. - -## Running a Hive - -In order to run multiple Bee nodes as a "hive", all we need to do is repeat the process for running one node and then extend our Docker Compose configuration. - -To start with, shut down your node from the first part of this guide if it is still running: +Congratulations! You have now installed your Bee node and are connected to the network as a full staking node. Your node will now be in the process of syncing chunks from the network. Once it is fully synced, your node will finally be eligible for earning staking rewards. -```shell -docker compose down -``` +### Set Target Neighborhood -### Step 1: Create new directories for additional node(s) +When installing your Bee node it will automatically be assigned a neighborhood. However, when running a full node with staking there are benefits to periodically updating your node's neighborhood. Learn more about why and how to set your node's target neighborhood [here](/docs/bee/installation/set-target-neighborhood). -Now create a new directory for your second node: +### Logs and monitoring +Docker provides convenient built-in tools for logging and monitoring your node, which you've already encountered if you've read through earlier sections of this guide. For a more detailed guide, [refer to the section on logging](/docs/bee/working-with-bee/logs-and-files). 
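Bee's log lines are structured as `key="value"` pairs, which makes them easy to filter with standard text tools. A minimal sketch using a sample warning line from this guide — against a live node you would pipe `docker logs bee-1 2>&1` instead of the here-string:

```shell
# Count warning-level entries in Bee's structured log output.
# The sample below reuses a log line shown earlier in this guide.
logs='"time"="2024-09-24 22:06:51.363708" "level"="warning" "logger"="node/chequebook" "msg"="cannot continue until there is at least min xDAI (for Gas) available on address"
"time"="2024-09-24 22:21:04.543661" "level"="info" "logger"="node" "msg"="bee version"'

warnings=$(printf '%s\n' "$logs" | grep -c '"level"="warning"')
echo "warning lines: $warnings"   # → warning lines: 1
```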
-```shell -mkdir node_02 -``` +**Viewing node logs:** -We also create a new data directory and set ownership to match the user in the official [Bee Dockerfile](https://github.com/ethersphere/bee/blob/master/Dockerfile). +To monitor your node’s logs in real-time, use the following command: -```shell -mkdir node_02/.bee -sudo chown -R 999:999 node_02/.bee +```bash +docker logs -f bee-1 ``` -Repeat this process for however many new nodes you want to add. +This command will continuously output the logs of your Bee node, helping you track its operations. The `-f` flag ensures that you see new log entries as they are written. Press `Ctrl + C` to stop following the logs. -### Step 2: Create new configuration file(s) +You can read more about how Docker manages container logs [in their official docs](https://docs.docker.com/reference/cli/docker/container/logs/). -And add a `bee.yml` configuration file. You can use the same configuration as for your first node. Here we will use the configuration for a full node: +**Checking the Node's status with the Bee API** -```yaml -# GENERAL BEE CONFIGURATION -api-addr: :1633 -p2p-addr: :1634 -password: aaa4eabb0813df71afa45d -data-dir: /home/bee/.bee -cors-allowed-origins: ["*"] - -# DEBUG CONFIGURATION -verbosity: 5 - -# BEE MAINNET CONFIGURATION -bootnode: /dnsaddr/mainnet.ethswarm.org - -# BEE MODE: FULL NODE CONFIGURATION -full-node: true -swap-enable: true -blockchain-rpc-endpoint: https://xdai.fairdatasociety.org -``` - -```bash -sudo vi ./node_02/bee.yml -``` - -After saving the configuration, print out the configuration to make sure it was properly saved: +To check your node's status as a staking node, we can use the `/redistributionstate` endpoint: ```bash cat ./node_02/bee.yml @@ -686,9 +350,9 @@ services: - 1634:1634 # p2p port bee_02: container_name: bee-node_02 - image: ethersphere/bee:2.4.0 + image: ethersphere/bee:2.5.0 command: start --config /home/bee/bee.yml - volumes: + volumes: - ./node_02/.bee:/home/bee/.bee - 
./node_02/bee.yml:/home/bee/bee.yml ports: @@ -698,12 +362,12 @@ services: Here is a list of the changes we made to extend our setup: - 1. Created an additional named service with a new unique name (bee_02). - 1. Created a unique name for each `container_name` value (bee-node_01 --> bee-node_02). - 1. Made sure that `volumes` has the correct directory for each node (./node_01/ --> ./node_02/). - 1. Updated the `ports` we map to so that each node has its own set of ports (ie, for node_02, we map 127.0.0.1:1636 to 1633 because node_01 is already using 127.0.0.1:1633, and do the same with the rest of the ports). +1. Created an additional named service with a new unique name (bee_02). +1. Created a unique name for each `container_name` value (bee-node_01 --> bee-node_02). +1. Made sure that `volumes` has the correct directory for each node (./node_01/ --> ./node_02/). +1. Updated the `ports` we map to so that each node has its own set of ports (ie, for node_02, we map 127.0.0.1:1636 to 1633 because node_01 is already using 127.0.0.1:1633, and do the same with the rest of the ports). -### Step 4: Start up the hive +### Step 4: Start up the hive Start up the hive: @@ -739,12 +403,14 @@ Copy the address from the logs: ```shell docker logs -f bee-node_02 ``` + And copy the second address: + ```shell "time"="2024-07-23 11:54:08.532812" "level"="warning" "logger"="node/chequebook" "msg"="cannot continue until there is at least min xDAI (for Gas) available on address" "min_xdai_amount"="0.000500000002" "address"="0xa4DBEa11CE6D089455d1397c0eC3D705f830De69" ``` -### Step 5: Fund nodes +### Step 5: Fund nodes You can fund your nodes by sending xDAI and xBZZ the addresses you collected from the previous step. @@ -754,7 +420,6 @@ Since you're running a hive, the [node-funder](https://github.com/ethersphere/no If you plan on staking, you will also want to [get some xBZZ](https://www.ethswarm.org/get-bzz) to stake. You will need 10 xBZZ for each node. 
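The `/stake` calls in this guide give the amount in PLUR rather than xBZZ (1 PLUR is equal to 1e-16 xBZZ, so 1 xBZZ is 10^16 PLUR). A small sketch of the conversion, with a hypothetical `xbzz_to_plur` helper, just to make the zeros explicit:

```shell
# Convert a whole-xBZZ amount to PLUR for use with the /stake endpoint.
# 1 PLUR = 1e-16 xBZZ, so 1 xBZZ = 10^16 PLUR.
xbzz_to_plur() {
  echo $(( $1 * 10**16 ))
}

xbzz_to_plur 10   # → 100000000000000000, the PLUR value used in the stake calls
```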
- ### Step 6: Add stake :::info @@ -763,17 +428,66 @@ The Bee API will not be available while your nodes are warming up, so wait until In order to stake you simply need to call the `/stake` endpoint with an amount of stake in PLUR as a parameter for each node. - For bee-node_01: ```bash -curl -X POST localhost:1633/stake/100000000000000000 +{ + "minimumGasFunds": "11080889201250000", + "hasSufficientFunds": true, + "isFrozen": false, + "isFullySynced": true, + "phase": "claim", + "round": 212859, + "lastWonRound": 207391, + "lastPlayedRound": 210941, + "lastFrozenRound": 210942, + "lastSelectedRound": 212553, + "lastSampleDuration": 491687776653, + "block": 32354719, + "reward": "1804537795127017472", + "fees": "592679945236926714", + "isHealthy": true +} ``` -And for bee-node_02, note that we updated the port to match the one for the Bee API address we mapped to in the Docker Compose file: +For a complete breakdown of this output, check out [this section in the Bee docs](https://docs.ethswarm.org/docs/bee/working-with-bee/bee-api#redistributionstate). + +You can read more other important endpoints for monitoring your Bee node in the [official Bee docs](https://docs.ethswarm.org/docs/bee/working-with-bee/bee-api), and you can find complete information about all available endpoints in [the API reference docs](https://docs.ethswarm.org/api/). + +**Stopping Your Node** + +To gracefully stop your Bee node, use the following command: ```bash -curl -X POST localhost:1636/stake/100000000000000000 +docker stop bee-1 ``` -You may also wish to make use of the [node-funder](https://github.com/ethersphere/node-funder) tool, which in addition to allowing you to fund multiple addresses at once, also allows you to stake multiple addresses at once. \ No newline at end of file +Replace `bee-1` with the name of your node if you've given it a different name. + +## Back Up Keys + +Once your node is up and running, make sure to [back up your keys](/docs/bee/working-with-bee/backups). 
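If you keep growing the hive, the per-node setup from Steps 1 through 3 can be scripted. Below is a sketch that prints a Docker Compose service entry for an additional node; the naming follows the node_02 pattern above, while the port spacing (each node's host ports shifted up by 3) is an assumption you should adjust to your own layout:

```shell
# Print a Docker Compose service entry for an additional Bee node.
# Assumption: host ports offset by 3 per node (1633/1634, 1636/1637, ...).
N=3
API_PORT=$((1633 + (N - 1) * 3))
P2P_PORT=$((API_PORT + 1))

cat <<EOF
  bee_0${N}:
    container_name: bee-node_0${N}
    image: ethersphere/bee:2.5.0
    command: start --config /home/bee/bee.yml
    volumes:
      - ./node_0${N}/.bee:/home/bee/.bee
      - ./node_0${N}/bee.yml:/home/bee/bee.yml
    ports:
      - 127.0.0.1:${API_PORT}:1633
      - ${P2P_PORT}:1634
EOF
```

Append the printed entry to your Compose file after creating the matching data directory and `bee.yml` as in Steps 1 and 2.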
+
+## Getting help
+
+The CLI has documentation built in. Running `bee` gives you an entry point to the documentation, and running `bee start -h` or `bee start --help` from within your Docker container will show you how to configure your Bee node via command line arguments.
+
+You may also check out the [configuration guide](/docs/bee/working-with-bee/configuration).
+
+## Next Steps to Consider
+
+### Access the Swarm
+
+If you'd like to start uploading or downloading files to Swarm, [start here](/docs/develop/access-the-swarm/introduction).
+
+### Explore the API
+
+The [Bee API](/docs/bee/working-with-bee/bee-api) is the primary method for interacting with Bee and getting information about Bee. After installing Bee and getting it up and running, it's a good idea to start getting familiar with the API.
+
+### Run a hive!
+
+If you would like to run a hive of many Bees, check out the [hive operators](/docs/bee/installation/hive) section for information on how to operate and monitor many Bees at once.
+
+### Start building DAPPs on Swarm
+
+If you would like to start building decentralised applications on Swarm, check out our section for [developing with Bee](/docs/develop/introduction).
diff --git a/docs/bee/installation/fund-your-node.md b/docs/bee/installation/fund-your-node.md index 094c8b491..97e66e605 100644 --- a/docs/bee/installation/fund-your-node.md +++ b/docs/bee/installation/fund-your-node.md @@ -3,21 +3,113 @@ title: Fund Your Node id: fund-your-node --- -In order to start your Bee node on the _mainnet_, its Ethereum wallet must be -funded with: +## **Fund Your Node** -- 1 [xBZZ](/docs/references/glossary#xbzz-token), for traffic - accounting (this is optional, [see below](#basic-deployment)) +Bee nodes require varying amounts of either [xDAI](/docs/references/tokens#xdai) or [xBZZ](/docs/references/tokens#xbzz) funds, depending on the node type and use case. The amount and type of tokens required depend on the following factors: -- some [xDAI](/docs/references/glossary#xdai-token), to pay the gas fees of - a couple of transactions on the [Gnosis - Chain](/docs/references/glossary#gnosis-chain). +- Whether the node is an [ultra-light, light, or full node](/docs/bee/installation/getting-started#node-types). +- Whether the node operator wishes to [download](/docs/develop/access-the-swarm/upload-and-download/#download-a-file) or [upload data](/docs/develop/access-the-swarm/buy-a-stamp-batch) and how much data they intend to handle. +- Whether the node operator wishes to participate in the [storage incentives system](/docs/bee/working-with-bee/staking/) and/or the [bandwidth incentives system](/docs/concepts/incentives/bandwidth-incentives/). -Take note that xBZZ is the [bridged](/docs/references/glossary#bridged-tokens) version of BZZ from Ethereum to the Gnosis Chain. +## **xDAI Requirements** -### A node's wallet +xDAI is required to pay for gas fees on the Gnosis Chain. There are **_four categories of transactions_** that require xDAI for on-chain interactions: -When your Bee node is installed, an Ethereum wallet is also created. This wallet +### **1. 
[Buying Postage Stamp Batches](/docs/concepts/incentives/postage-stamps) (Light / Full Nodes)** + +Postage stamp batches must be purchased to upload data to Swarm. The fees for issuing stamp batches are minimal. For example, [this stamp batch creation transaction](https://gnosisscan.io/tx/0xdc350c059b7bfc10de3d71be71774dda395e2ff770ed6dc83a63c14a418d2be8) cost only **0.00050416 xDAI**. + +Additionally, xBZZ is required based on the volume of storage purchased—see [the xBZZ section below](/docs/bee/installation/fund-your-node#xbzz-requirements) for details. + +### **2. [Stake Management Transactions](/docs/bee/working-with-bee/staking/#maximize-rewards) (Full Nodes Only)** + +Stake management transactions include: + +- [Adding stake](/docs/bee/working-with-bee/staking/#add-stake). +- [Partial stake withdrawals](/docs/bee/working-with-bee/staking/#partial-stake-withdrawals). +- [Stake migration](/api/#tag/Staking/paths/~1stake/delete) when a new staking contract is deployed. + +Each of these transactions requires a small amount of xDAI to pay for Gnosis Chain gas fees. For example, this [staking transaction](https://gnosisscan.io/tx/0x3a3a5119e54c59f76b60c05bf434ef3d5ec1a3ec47875c3bf1da66dafccf5f72) added **10 xBZZ** in stake (denominated in [PLUR](/docs/references/glossary/#plur) as 1e16 PLUR (100,000,000,000,000,000 PLUR). The xDAI cost for the transaction was minimal—only **0.00026872 xDAI**. + +See the section below for details on required xBZZ stake amounts. + +### **3. [Storage Incentives Transactions](/docs/concepts/incentives/overview) (Full Nodes Only)** + +Full nodes with at least **10 xBZZ** in stake are eligible to earn storage incentives. They may choose to [double their reserve size and stake a total of 20 xBZZ](/docs/bee/working-with-bee/staking/#reserve-doubling) to maximize earning potential. 
+ +Participating in storage incentives requires nodes to wait for their neighborhood to be selected and then send on-chain transactions for a chance to earn xBZZ. + +There are three types of storage incentive transactions: **commit, reveal, and claim** ([details here](/docs/concepts/incentives/redistribution-game/#redistribution-game-details)). Each requires only a small amount of xDAI and typically occurs a few times per month. However, over time, xDAI may need to be replenished if depleted. + +As an example reference, the gas costs from several months ago were: + +- [Claim](https://gnosisscan.io/tx/0x88f83b0267539c663461e449f87118864ff9b801eaf6ea0fedadc1d824685181): **0.0009953 xDAI** +- [Commit](https://gnosisscan.io/tx/0x91bdf7363535fb405547c50742d6070cd249dd4c2fc00d494c79b3dbf516b1f3): **0.0002918 xDAI** +- [Reveal](https://gnosisscan.io/tx/0x625dd6cd3cf8f9c1dfe27335884994b43519b0a59e0bb3968bd663d200d1772b): **0.0002918 xDAI** + +_Note that while the gas costs today are roughly similar to the examples above, gas fees may change over time due to potential network congestion and a variety of other factors._ + +### **4. [Bandwidth Incentives Transactions](/docs/concepts/incentives/bandwidth-incentives) (Light and Full Nodes)** + +When initializing a new light or full node, deploying a bandwidth incentives contract (also called a **SWAP contract**) is required. The xDAI gas fees for this are minimal. + +For example, [this SWAP contract deployment transaction](https://gnosisscan.io/tx/0xc17b023ba22a9b2c2c27a40ce88d68caf95eb02e17ae57e9c56810b7b33a6ebc) cost only **0.00058154 xDAI**. + +## **xBZZ Requirements** + +xBZZ is used to pay for storing and retrieving data on Swarm. It is required for **_three categories of transactions_**: + +### **1. [Buying Postage Stamp Batches](/docs/concepts/incentives/postage-stamps) (Light / Full Nodes)** + +To upload data, postage stamp batches must be purchased. 
The required xBZZ amount varies based on: + +- **Storage volume** needed. +- **Storage duration** required. + +Stamp batches use two parameters: **depth** (determines data capacity) and **amount** (determines storage duration). See [this page](/docs/develop/access-the-swarm/buy-a-stamp-batch) for details on selecting appropriate values. + +### **2. [Staking](/docs/bee/working-with-bee/staking/) (Full Nodes Only)** + +A **minimum stake of 10 xBZZ** is required to participate in storage incentives. Nodes opting for [reserve doubling](/docs/bee/working-with-bee/staking/#reserve-doubling) may stake **20 xBZZ** to optimize earnings. + +### **3. [Bandwidth (SWAP) Payments](/docs/concepts/incentives/bandwidth-incentives) (Light / Full Nodes)** + +Bandwidth payments are required for downloading and uploading data. + +- **Ultra-light nodes**: Free-tier downloads only; no uploads. +- **Light and full nodes**: Must deploy a [SWAP contract](/docs/concepts/incentives/bandwidth-incentives) before making bandwidth payments. + +The **SWAP contract deployment fee** is minimal—for example, [this transaction](https://gnosisscan.io/tx/0x09438217f75516df1319eb772d503126ab38ecf52e6d9fd626411a238e0d687a) cost **0.00018542 xDAI**. + +:::info +**Cost Estimates for Bandwidth Payments** + +- **Downloading 1GB**: ~**0.5 xBZZ** in SWAP payments. +- **Uploading**: Requires a funded SWAP contract. + +Running a **full/light node** with `swap-enable` turned on allows nodes to **earn bandwidth incentives** by providing bandwidth to others. Actual xBZZ costs depend on network activity and should be actively monitored. +::: + +## Token Requirements Based on Node Type and Use Case + +The amount of **xDAI** and **xBZZ** required to run a Bee node depends on the node type and intended use case. While **no tokens** are required to run an **ultra-light node**, both **light and full nodes** require some **xDAI** for gas fees and **xBZZ** for data transactions. 
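The ~0.5 xBZZ per GB download estimate above can be turned into a rough budget sketch (the 0.5 figure is only the estimate given here; actual SWAP costs vary with network conditions):

```shell
# Rough SWAP bandwidth budget using the ~0.5 xBZZ/GB estimate above.
GB=20
awk -v gb="$GB" 'BEGIN { printf "~%.1f xBZZ to download %d GB\n", gb * 0.5, gb }'
```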
+
+### **Token Requirement Table**
+
+| **Use Case** | **Supported Node Type** | **Amount of xDAI Required** | **Amount of xBZZ Required** |
+| --- | --- | --- | --- |
+| **Free tier downloads (no uploads)** | Ultra-Light, Light, Full | None | None |
+| **Downloading beyond free tier** | Light, Full | A small amount such as **~0.1 xDAI** is more than enough to deploy the [SWAP/chequebook contract](/docs/concepts/incentives/bandwidth-incentives) | **~0.1 xBZZ** is enough to get started downloading smaller amounts of data, but more will be required once entering the GB range |
+| **Uploading data** | Light, Full | **~0.1 xDAI** is more than enough for both the initial SWAP/chequebook deployment transaction and the postage stamp batch purchase gas fees | **~0.1 xBZZ** will be enough to upload and store a small amount of data for a short period, but [considerably more xBZZ is required](/docs/develop/access-the-swarm/buy-a-stamp-batch#setting-stamp-batch-parameters-and-options) to store larger amounts of data for a longer time (scales with uploaded volume) |
+| **Staking** | Full | **0.1 xDAI** is a reasonable minimum for getting started; more is recommended for long-term operation. Staking-related transactions occur several times a month and can cost up to around 0.001 xDAI per transaction. | **10 xBZZ** (minimum required stake; **20 xBZZ** is required for staking with a [doubled reserve](/docs/bee/working-with-bee/staking/#reserve-doubling)). Stake is generally not withdrawable. |
+| **Participating in storage incentives** | Full | **Small amount of xDAI** (for commit, reveal, and claim transactions) | **10 xBZZ** (minimum required stake) |
+| **Bandwidth (SWAP) payments** | Light, Full | **~0.0005 xDAI** (for initial SWAP contract deployment) | **Scales with bandwidth usage** (~0.5 xBZZ per GB downloaded) |
+
+This table provides a general guideline, but actual xDAI and xBZZ usage will depend on individual node activity and transaction fees at the time.
+
+### A Node's Wallet
+
+When your Bee node is installed, a Gnosis Chain wallet is also created. This wallet
 is used by Bee to interact with the blockchain (e.g. for sending and
 receiving cheques, or for making purchases of postage stamps, etc.).

@@ -27,69 +119,75 @@

 When your node has downloaded enough content to exceed the free tier
 threshold, then _cheques_ are sent to peers to provide payment in return for
 their services.

-In order to send these cheques, a _chequebook_ must be deployed on the
+In order to send these cheques, a [_chequebook_](/docs/concepts/incentives/bandwidth-incentives#chequebook-contract) must be deployed on the
 blockchain for your node, and for full speed operation it can be funded with
 BZZ. This deployment happens when a node initialises for the first time. Your
 Bee node will warn you in its log if there aren't enough funds in its wallet
 for deploying the chequebook.

 You can [configure](/docs/bee/working-with-bee/configuration) the amount of xBZZ to
-be sent from the node's wallet. It is 1 xBZZ by default, but it can be set to
-zero.
+be sent from the node's wallet using the `swap-initial-deposit` option. It is 0 xBZZ by default, but it is recommended to deposit more xBZZ if you intend to download / upload any significant amount of data, as your node will otherwise exceed its free bandwidth threshold.

-## Joining the swarm (mainnet)
+## Joining the Swarm (mainnet)

-### Basic deployment
+### Basic Deployment

 If you want to get your Bee node up and running as easily as possible, then you
-can set its
-[`--swap-initial-deposit`](/docs/bee/working-with-bee/configuration)
+can set its [`swap-initial-deposit`](/docs/bee/working-with-bee/configuration)
 value to zero. This means that your node's chequebook will not get funded with
 xBZZ, meaning that other nodes will only serve it within the free tier
 bandwidth threshold.

 Since gas fees on the [Gnosis Chain](https://www.gnosis.io/) are very low,
 you won't need much xDAI either to get started. You may acquire a small amount
-for free by using the official Gnosis Chain xDAI faucet [xDAI Faucet](https://gnosisfaucet.com/). The required amount is a function of the current transaction fee on chain, but 0.01 xDAI should be
-more than enough to start up your node.
+for free by using the official [Gnosis Chain xDAI faucet](https://faucet.gnosischain.com/). The required amount is a function of the current transaction fee on chain, but 0.01 xDAI should be more than enough to start up your node.

 You can use the [Blockscout](https://blockscout.com/xdai/mainnet/) block
 explorer to inspect what's going on with your wallet by searching for its
-Ethereum address.
+Gnosis Chain address.

-### Full performance node
+### Full node

 If you want to run a full node, or upload a lot of content, then you may need
 more xDAI for gas. To acquire this, you may convert DAI on the main Ethereum
-network to xDAI using the
-[Gnosis Chain bridge](https://bridge.gnosischain.com/),
-or buy xDAI
-[directly using fiat](https://buyxdai.com/).
+network to xDAI using the [Gnosis Chain bridge](https://bridge.gnosischain.com/),
+or buy xDAI [directly using fiat](https://buyxdai.com/).

-You will also need to fund your node with more xBZZ for full speed access, or to
-purchase postage stamps to upload content. To bridge BZZ from the Ethereum
-mainet to the [Gnosis Chain](https://www.gnosis.io/), you may use the [Gnosis Chain Bridge](https://bridge.gnosischain.com/).
+To find out what your node's Gnosis Chain address is, you can use the `/addresses` endpoint (the [jq](https://jqlang.github.io/jq/) part of the command is an optional but recommended tool that makes the JSON output easier to read):

-To find out what your node's Ethereum address is, please consult your relevant
-installation guide or check your logs!
+```bash
+curl -s localhost:1633/addresses | jq
+```
+
+```json
+{
+  "overlay": "46275b02b644a81c8776e2459531be2b2f34a94d47947feb03bc1e209678176c",
+  "underlay": [
+    "/ip4/127.0.0.1/tcp/7072/p2p/16Uiu2HAmTbaZndBa43PdBHEekjQQEdHqcyPgPc3oQwLoB2hRf1jq",
+    "/ip4/192.168.0.10/tcp/7072/p2p/16Uiu2HAmTbaZndBa43PdBHEekjQQEdHqcyPgPc3oQwLoB2hRf1jq",
+    "/ip6/::1/tcp/7072/p2p/16Uiu2HAmTbaZndBa43PdBHEekjQQEdHqcyPgPc3oQwLoB2hRf1jq"
+  ],
+  "ethereum": "0x0b546f2817d0d889bd70e244c1227f331f2edf74",
+  "public_key": "03660e8dbcf3fda791e8e2e50bce658a96d766e68eb6caa00ce2bb87c1937f02a5"
+}
+```
+
+The value in the `ethereum` field is your Gnosis Chain address (the `ethereum` key name is used because Gnosis Chain is an Ethereum sidechain and shares the same address format).

 # Configure Your Wallet App

 To interact with the BZZ ecosystem, you will need to make a couple of small
 configuration additions to your wallet software. In the case of e.g. MetaMask,
-you'll need to
-[add the Gnosis Chain network](https://docs.gnosischain.com/tools/wallets/metamask/),
-and then
-[add a custom token](https://metamask.zendesk.com/hc/en-us/articles/360015489031-How-to-add-unlisted-tokens-custom-tokens-in-MetaMask).
+you'll need to [add the Gnosis Chain network](https://docs.gnosischain.com/tools/wallets/metamask/), and then [add a custom token](https://support.metamask.io/manage-crypto/portfolio/how-to-import-a-token-in-metamask-portfolio/). The canonical addresses for the BZZ token on the various blockchains are as follows: -| Blockchain | Contract address | -| ---------------------- | -------------------------------------------------------------------------------------------------------------------------------------- | -| Ethereum, BZZ | [`0x19062190b1925b5b6689d7073fdfc8c2976ef8cb`](https://ethplorer.io/address/0x19062190b1925b5b6689d7073fdfc8c2976ef8cb) | -| Gnosis Chain, xBZZ | [`0xdBF3Ea6F5beE45c02255B2c26a16F300502F68da`](https://blockscout.com/xdai/mainnet/tokens/0xdBF3Ea6F5beE45c02255B2c26a16F300502F68da/) | -| Sepolia (testnet), sBZZ | [`0x543dDb01Ba47acB11de34891cD86B675F04840db`](https://sepolia.etherscan.io/address/0x543dDb01Ba47acB11de34891cD86B675F04840db) | +| Blockchain | Contract address | +| ----------------------- | -------------------------------------------------------------------------------------------------------------------------------------- | +| Ethereum, BZZ | [`0x19062190b1925b5b6689d7073fdfc8c2976ef8cb`](https://ethplorer.io/address/0x19062190b1925b5b6689d7073fdfc8c2976ef8cb) | +| Gnosis Chain, xBZZ | [`0xdBF3Ea6F5beE45c02255B2c26a16F300502F68da`](https://blockscout.com/xdai/mainnet/tokens/0xdBF3Ea6F5beE45c02255B2c26a16F300502F68da/) | +| Sepolia (testnet), sBZZ | [`0x543dDb01Ba47acB11de34891cD86B675F04840db`](https://sepolia.etherscan.io/address/0x543dDb01Ba47acB11de34891cD86B675F04840db) | # Accessing Your Node's Wallet @@ -108,29 +206,7 @@ sudo cat /var/lib/bee/password # Testnet -A Bee node needs Sepolia ETH and sBZZ in its wallet to be able to properly -interact with the test network. 
One way to acquire these funds is to -sign into our Discord and request Sepolia ETH and sBZZ test tokens from the -*faucet bot* to your node's Ethereum address. +A Bee node needs Sepolia ETH and sBZZ in its wallet to properly interact with the test network. To acquire these funds, you can use one of the faucets from [this list](https://faucetlink.to/sepolia) to request Sepolia ETH. To get Sepolia BZZ (sBZZ), you can use this [Uniswap market](https://app.uniswap.org/swap?outputCurrency=0x543dDb01Ba47acB11de34891cD86B675F04840db&inputCurrency=ETH), just make sure that you've switched to the Sepolia network in your browser wallet. To find out what your node's Ethereum address is, please consult the installation guide or check the logs! - -Once you have the address: - -1. join our [Discord server](https://discord.gg/wdghaQsGq5) -2. navigate to the [#faucet](https://discord.gg/TVgKhsGEbc) channel -3. [verify your username](https://discord.gg/tXGPdzZQaV) -4. request test tokens from the *faucet bot* - -To request the tokens you must **type** (not copy paste) the following, replacing the address with your own: - -``` -/faucet sprinkle 0xabeeecdef123452a40f6ea9f598596ca8556bd57 -``` - -If you have problems, please let us know by making a post in the [#faucet](https://discord.gg/TVgKhsGEbc) channel, we will do our best to provide tokens to everyone. - -Note that you should use a Chromium-based client (e.g., Chrome, native Discord client) to type the faucet command, as support for other browsers is spotty. It's reported to not work on Firefox, for example. - -Transactions may take a while to complete, please be patient. We're also keen for you to join us in the swarm, and indeed you soon will! 
🐝   🐝   🐝
diff --git a/docs/bee/installation/getting-started.md b/docs/bee/installation/getting-started.md
new file mode 100644
index 000000000..bbc6bced5
--- /dev/null
+++ b/docs/bee/installation/getting-started.md
@@ -0,0 +1,235 @@
+---
+title: Getting Started
+id: getting-started
+---
+
+In this guide, we cover the basic background information you need to know to start running a Bee node, such as:
+
+* [A list of Bee node types and their various features.](/docs/bee/installation/getting-started#node-types)
+* [General requirements for running Bee nodes.](/docs/bee/installation/getting-started#general-node-requirements)
+* [Specific requirements based on node type.](/docs/bee/installation/getting-started#node-requirements-by-node-type)
+* [How to choose the right node type.](/docs/bee/installation/getting-started#choosing-node-type-based-on-use-case)
+* [How to choose the appropriate installation method.](/docs/bee/installation/getting-started#choosing-installation-method)
+
+This guide will walk you through how to choose the appropriate node type, installation method, operating system, tools, and network setup for your particular needs. For new Bee users, it's recommended to read through this entire guide without skipping any sections before moving on to other pages.
+
+## Node Types
+
+Bee is a versatile piece of software that caters to a diverse array of use cases. It can be run in several different modes, each of which offers different features which are best suited for different users. There are three main categories of nodes: full nodes, light nodes, and ultra-light nodes. Node type is set by modifying the [appropriate configuration options](/docs/bee/working-with-bee/configuration#set-bee-node-type).
+
+### Ultra-Light Node
+
+The ultra-light configuration allows for limited access to the Swarm network and enables a node to download only small amounts of data from the Swarm network. It does not allow for uploads. 
Ultra-light nodes cannot earn any type of incentives.
+
+### Light Node
+
+A light node can both download and upload data over the Swarm network. Light nodes may also earn bandwidth incentives.
+
+### Full Node
+
+Full nodes offer the highest potential for earning xBZZ rewards. Like light nodes, a full node can upload and download data over the Swarm network and earn bandwidth incentives. Additionally, a full node can also earn xBZZ by sharing its disk space with the network.
+
+### Features Comparison Chart
+
+| Feature | Full Node | Light Node | Ultra-Light Node |
+| --- | --- | --- | --- |
+| Downloading | ✅ | ✅ | ✅ |
+| Uploading | ✅ | ✅ | ❌ |
+| Can exceed free download limits by paying xBZZ | ✅ | ✅ | ❌ |
+| Sharing disk space with network | ✅ | ❌ | ❌ |
+| [Storage incentives](/docs/concepts/incentives/overview#storage-incentives) | ✅ | ❌ | ❌ |
+| [SWAP incentives](/docs/concepts/incentives/bandwidth-incentives) | ✅ | ✅ | ❌ |
+| [PSS messaging](/docs/concepts/pss/) | ✅ | ✅ | ✅ |
+| Gnosis Chain connection | ✅ | ✅ | ❌ |
+
+## General Node Requirements
+
+The requirements and recommendations outlined below depend on your intended node type and use case. Review them carefully to determine which ones best suit your needs.
+
+### Recommended Operating Systems
+
+It is preferable to use one of the officially supported operating systems. Refer to the [Bee repo releases section](https://github.com/ethersphere/bee/releases) for a list of releases for each supported operating system. It is also possible to [build Bee from source](/docs/bee/installation/build-from-source) for operating systems not included on the official release list.
+
+If you are using [Swarm Desktop](/docs/desktop/introduction/) rather than running the core Bee client directly, any commonly available operating system is a good choice (macOS, Windows, Ubuntu, etc.).
+
+:::info
+A note on operating systems. 
While it is possible to run Bee on a wide variety of different operating systems, much of the existing tooling and documentation is designed primarily for Unix-based systems. So generally speaking, some flavor of Linux or macOS is probably the best choice. +::: + +:::info +In case you only have access to Windows, [WSL](https://learn.microsoft.com/en-us/windows/wsl/install) is an excellent option which will allow you to run a Linux terminal which you can use to follow along with the guides in these docs. +::: + + +### Essential Tools + +While the tools listed below are not strictly required, they are highly recommended as they simplify interacting with Bee nodes. Some, like `jq` and `curl`, are essential for following the examples in these docs. Others, such as `swarm-cli` and `bee-js`, provide convenient ways to manage Bee nodes without manually constructing complex HTTP requests. + +#### 1. `jq` – JSON Formatting + +[`jq`](https://jqlang.github.io/jq/) is widely used in this documentation to format API responses, making them more readable. + +:::caution +***Strongly recommended*** for anyone working directly with the Bee API. +::: + +#### 2. `curl` – API Requests + +[`curl`](https://curl.se/) is the primary tool used in this documentation for interacting with the Bee API. It is pre-installed on most UNIX-based systems and newer Windows versions. If unavailable, you can install [`curl for Windows`](https://curl.se/windows/). + +An alternative is [`wget`](https://www.gnu.org/software/wget/), though feature-rich API clients like [Insomnia](https://insomnia.rest/) or [Postman](https://www.postman.com/) may also be useful for saving and organizing requests. + +*These tools are generally not relevant for Swarm Desktop users but are essential for those interacting directly with their Bee client.* + +:::caution +`curl` or one of its alternatives is ***Required*** for sending API requests to the Bee client. +::: + +#### 3. 
Swarm CLI – Command Line Control + +[Swarm CLI](https://docs.ethswarm.org/docs/bee/working-with-bee/swarm-cli/) provides an easy way to interact with Bee nodes via terminal commands. It is built on [Bee JS](/docs/develop/tools-and-features/bee-js) and serves as a simpler alternative to manually crafting HTTP requests. + +:::info +Recommended for node operators and developers, not Swarm Desktop users. +::: + +#### 4. Bee JS – API Integration for Developers + +[Bee JS](/docs/develop/tools-and-features/bee-js) is an npm package for integrating Bee functionality into Node.js applications. It abstracts API interactions, eliminating the need for manual HTTP requests except for a small number of edge cases (such as new features which have not yet been added to `bee-js`). + +:::info +Best suited for Node.js developers who want to interact with Bee programmatically. +::: + + +### Token Requirements + +* A small amount of xDAI to pay for Gnosis Chain transactions, 0.1 xDAI should be enough +* 10 xBZZ (BZZ on Gnosis Chain) is required for staking +* A small amount of xBZZ for downloading and uploading from Swarm. You can start with 1 xBZZ and add more according to your usage needs. + +### Network Requirements + +#### RPC Endpoints + +Both full and light nodes require a Gnosis Chain RPC endpoint which can be obtained either by running your own node or from an RPC endpoint 3rd party provider such as [Infura](https://www.infura.io). You can also find some free RPC endpoints such as [this one](https://xdai.fairdatasociety.org) offered by the Fair Data Society, or from one of the other free options available at the [Gnosis Chain docs](https://docs.gnosischain.com/tools/RPC%20Providers/). + + +#### NAT and Port Forwarding + +If you are running on a home network, you may need to configure your router to use [port forwarding](https://www.noip.com/support/knowledgebase/general-port-forwarding-guide) or take other steps to ensure your node is reachable by other nodes on the network. 
See [here](https://docs.ethswarm.org/docs/bee/installation/connectivity/#navigating-through-the-nat) for more guidance. If you are running on a VPS or cloud-based server, you will likely have no issues.
+
+## Node Requirements by Node Type
+
+### Ultra-Light Node
+
+An ultra-light node has minimal hardware requirements and can operate on practically any modern computer or VPS, including devices with baseline specs. It can even run on single-board computers like the Raspberry Pi.
+
+**Average Specs for Ultra-Light Node:**
+- **Processor**: Single-core or dual-core processor, 1 GHz or higher (e.g., Intel Atom, ARM Cortex-A series).
+- **RAM**: 1 GB or higher.
+- **Storage**: 8 GB HDD or SSD.
+- **Internet Connection**: A stable internet connection with at least 1 Mbps download/upload speed.
+
+No RPC endpoint is required for ultra-light nodes.
+
+### Light Node
+
+A light node has slightly higher requirements than an ultra-light node due to its ability to upload and download data over the network. It consumes more bandwidth and requires a Gnosis Chain RPC endpoint for purchasing stamps and participating in bandwidth incentives.
+
+**Average Specs for Light Node:**
+- **Processor**: Dual-core processor, 1.5 GHz or higher (e.g., Intel Celeron, AMD Athlon, or similar).
+- **RAM**: 2 GB or higher.
+- **Storage**: 16 GB SSD or HDD.
+- **Internet Connection**: A stable internet connection with at least 5 Mbps download/upload speed.
+
+These specs are achievable with most commercially available laptops, desktops, or low-cost servers. For users planning to handle large amounts of data transfer, faster internet connections and slightly more RAM (e.g., 4 GB) are recommended for optimal performance.
+
+These recommendations reflect the typical capabilities of affordable, readily available hardware suitable for running light and ultra-light nodes without significant bottlenecks. 
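To quickly compare a machine against the spec recommendations above, a few standard commands are enough (Linux-specific; `free` comes from procps and is not available on macOS):

```shell
# Quick hardware survey to compare against the recommended node specs.
echo "CPU cores: $(nproc)"
echo "Total RAM: $(free -h | awk '/^Mem:/ {print $2}')"
echo "Free disk: $(df -h . | awk 'NR==2 {print $4}')"
```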
+
### Full Node

A full node has significantly greater requirements since it is responsible for storing and syncing data from the Swarm network, and the requirements will be even higher if it is staking xBZZ and participating in the redistribution system for a chance to win xBZZ rewards.

The minimum recommended specifications for a full staking node are:

* Dual-core, recent generation, 2 GHz processor
* 8 GB RAM
* 30 GB SSD
* Stable internet connection

HDD drives are discouraged for full nodes due to their low speeds.

Since there can be considerable variability in the performance of processors and RAM from different years and brands even with nominally similar specifications, it's recommended to [test your node's performance using the `/rchash` endpoint](https://docs.ethswarm.org/docs/bee/working-with-bee/bee-api/#rchash) to make sure that it is performant enough to participate in the redistribution game.

Note that there are additional hardware requirements if you choose to [run your own Gnosis Chain node](https://docs.gnosischain.com/node/#environment-and-hardware) in order to provide your Bee node(s) with the required RPC endpoint.

:::info
Staking is not required to run a full node, but it is necessary in order to earn storage incentives. An altruistic person may want to run a full node without putting up any stake, and could possibly earn enough xBZZ from bandwidth (swap/cheque) compensation to be able to stake at some point in the future. Learn more in the [staking section](/docs/bee/working-with-bee/staking).
:::

:::caution
While it is possible to run multiple Bee nodes on a single machine, due to the high rate of I/O operations required by a full Bee node in operation, it is not recommended to run more than a handful of Bee nodes on the same physical disk (depending on the disk speed). 
+::: + + +## Choosing Node Type Based on Use Case + +Different node types best suit different use cases: + +### Basic Interactions with the Swarm Network + +If you only need to download a small amount of data from the Swarm network, an ***ultra-light node*** could be the right choice for you. This will allow you to download a limited amount of data, but does not support uploading data. + +If you want to upload to and download from Swarm and perhaps earn a small amount of bandwidth incentives, then running a ***light node*** in the background on your laptop or desktop computer could be the right choice for you. This will enable direct access to the Swarm network from your web browser and other applications. + +:::info +The [Swarm Desktop app](https://www.ethswarm.org/build/desktop) offers an easy way to automatically set up a light or ultra-light node and interact with it through a graphical user interface. +::: + +### Developing a DAPP on Swarm + +In order to develop a DAPP on Swarm, you will likely want to run either a ***light node*** or a ***full node***. For many use cases, a light node will be sufficient. However, if you need to access certain features such as GSOC, then running a full node will be required. + +Depending on your specific needs as a developer, even the [Swarm Desktop app](https://www.ethswarm.org/build/desktop) may be sufficient. + +### Support the Network and Earn xBZZ by Running a Full Node + +If you wish to earn [xBZZ](/docs/bee/working-with-bee/cashing-out) storage and bandwidth incentives and contribute to the strength of the Swarm network, running a **full node** is the right choice for you. It's easy to set up on a VPS, in a colocation facility, or on any home computer that's connected to the internet.
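For orientation, the node types above are selected with just a couple of configuration values. The sketch below is illustrative only — the RPC URL is a placeholder, and the configuration documentation is the authoritative reference for these options:

```yaml
## bee.yaml — node type selection (illustrative sketch)

# Full node: stores and syncs network data; requires a Gnosis Chain RPC endpoint.
full-node: true
swap-enable: true
blockchain-rpc-endpoint: "https://your-gnosis-rpc-endpoint.example" # placeholder

# Light node: set full-node: false and keep swap-enable: true
# (a Gnosis Chain RPC endpoint is still required).

# Ultra-light node: set both full-node: false and swap-enable: false
# (no RPC endpoint is needed).
```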
+ +Since each full Bee node shares up to 2^22 chunks (~16 GB of data), and due to the economics of running a Bee node, serious node operators will likely wish to scale up and run multiple Bee nodes together in a hive, so that they can take advantage of all the disk space they have available to share and maximize their earnings. While there are many possible approaches and no single officially recommended method, you may consider tools such as [Docker](https://www.docker.com/), [Docker Compose](https://docs.docker.com/compose/), or [Kubernetes](https://kubernetes.io/) to orchestrate the deployment of a larger number of Bee nodes. + + +## Choosing Installation Method + +You can interact with the Swarm network by installing the Bee client through a variety of methods. Below is a (non-exhaustive) list of some of the most common methods for installing a Bee client. + +### [Swarm Desktop](/docs/desktop/introduction) + +If you are looking to get started with exploring Swarm and interacting with the network in as simple and easy a way as possible, then [Swarm Desktop](/docs/desktop/introduction) is the way to go. + +Swarm Desktop offers an easy and convenient graphical user interface so that users can easily upload to and download from Swarm, host their websites, and access a variety of Swarm DAPPs that come pre-bundled with Swarm Desktop. + +### [Shell Script Install](/docs/bee/installation/shell-script-install) + +If you're ready to go beyond the GUI-based Swarm Desktop, then [the shell script install](/docs/bee/installation/shell-script-install) method may be right for you. This method uses a simple shell script to detect your operating system and environment and install the correct version of Bee for your machine. It's a convenient and minimalistic way of getting started with Swarm.
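As a sketch of the multi-node hive setup mentioned above, a minimal Docker Compose file might look something like the following. The service and volume names are hypothetical, the image tag is only an example (check Docker Hub for the current release), the data path follows the official image's convention but should be verified against the Docker install guide, and the Bee API ports are bound to localhost so they are not exposed to the public internet:

```yaml
# docker-compose.yml — hypothetical two-node hive (adjust to your setup)
services:
  bee-1:
    image: ethersphere/bee:2.2.0   # example tag — use the latest release
    command: start
    ports:
      - "127.0.0.1:1633:1633"      # Bee API, localhost only
      - "1634:1634"                # P2P port, publicly reachable
    volumes:
      - bee-1-data:/home/bee/.bee
  bee-2:
    image: ethersphere/bee:2.2.0
    command: start
    ports:
      - "127.0.0.1:1733:1633"      # second node's API on a different host port
      - "1734:1634"
    volumes:
      - bee-2-data:/home/bee/.bee

volumes:
  bee-1-data:
  bee-2-data:
```

Remember the I/O caution above: keep the number of nodes per physical disk modest.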
+ +Because the shell script installation is so minimalistic, it may require some additional tinkering to get it working the way you want. For example, it will not be set up to run in the background as a service out of the box, and logs will not be automatically saved. + +### [Docker Install](/docs/bee/installation/docker) + +While the [Docker based installation](/docs/bee/installation/docker) method requires additional tooling not needed with the shell script install method, it also comes with several advantages which make it easier to operate your node across many different types of environments and to spin up multiple nodes at once. Combining it with tools like [Docker Compose](https://docs.docker.com/compose/) can open up even more options. + +Unlike the shell script installation method, Docker comes with easy-to-use tools for running your containerized Bee node as a background process and for managing your node's logs. + +### [Package Manager Install](/docs/bee/installation/package-manager-install) + +The Bee client can be [installed through a variety of package managers](/docs/bee/installation/package-manager-install) including [APT](https://en.wikipedia.org/wiki/APT_(software)), [RPM](https://en.wikipedia.org/wiki/RPM_Package_Manager), and [Homebrew](https://en.wikipedia.org/wiki/Homebrew_(package_manager)). + +In comparison with the shell script install, this installation method comes with the advantage of setting up Bee to run as a service in the background, and it also sets up some basic log management with `journalctl` (APT / RPM) or with `launchd` for Homebrew. + +One of the disadvantages is that it can be less flexible than either the Docker or shell script install methods. + +### [Building From Source](/docs/bee/installation/build-from-source) + +More advanced users may wish to build Bee from source. You can find instructions for doing so [here](/docs/bee/installation/build-from-source).
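Before attempting a from-source build, it's worth confirming that the prerequisites named in the build guide (Go, Git, and Make) are present. A quick check, assuming a POSIX shell:

```shell
# Check for the tools needed to build Bee from source.
for tool in go git make; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found ($(command -v "$tool"))"
  else
    echo "$tool: not found — install it before building" >&2
  fi
done
```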
While this may be the most flexible of all methods, it's also the most difficult and requires the most hands-on setup and so is recommended for more advanced users / use cases. \ No newline at end of file diff --git a/docs/bee/installation/hive.md b/docs/bee/installation/hive.md index 7c005ce68..75893e342 100644 --- a/docs/bee/installation/hive.md +++ b/docs/bee/installation/hive.md @@ -8,8 +8,6 @@ wishing to scale up their Bee operation, or set up a commercial Bee hive should seek to run many instances of Bee simultaneously. Read [The Book of Swarm](https://www.ethswarm.org/the-book-of-swarm-2.pdf) for more information on how the swarm comes together. -Swarm provides tooling to help you install many Bees at once. - ### Docker Up to date [Docker images for Bee](/docs/bee/installation/docker) are provided. diff --git a/docs/bee/installation/install.md b/docs/bee/installation/install.md deleted file mode 100644 index 17a51830f..000000000 --- a/docs/bee/installation/install.md +++ /dev/null @@ -1,852 +0,0 @@ ---- -title: Install Bee -id: install ---- - -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; - - - - - - - - - - - - - -It is easy to set up a Bee light node on small and inexpensive computers, such as a Raspberry Pi 4, spare hardware you have lying around, or even a cheap cloud hosted VPS (we recommend small, independent providers and colocations). When running a full node however, it's important to meet the minimum required specifications. - -## Recommended Hardware Specifications - -### Full Nodes - -Minimum recommended specifications for each full node: - -- Dual core, recent generation, 2ghz processor -- 8gb RAM -- 30gb SSD -- Stable internet connection - -HDD drives are discouraged for full nodes due to their low speeds. 
- -Note that there are additional [hardware requirements](https://docs.gnosischain.com/node/#environment-and-hardware) if you choose to run your own Gnosis Chain node in order to provide your Bee node(s) with the required RPC endpoint. See [configuration step](/docs/bee/installation/install#set-blockchain-rpc-endpoint) for more details. - -### Light and UltraLight Nodes - -The minimum required hardware specifications for light and ultralight nodes are very low, and can be run on practically any commercially available computer or microcomputer such as a Raspberry Pi. - -## Note on Startup Methods -:::caution - When a node is started using the `bee start` command the node process will be bound to the terminal session and will exit if the terminal is closed. - - If Bee was installed using one of the supported package managers it is set up to run as a service in the background with tools such as `systemctl` or `brew services` (which also use the `bee start` command[under the hood](https://github.com/ethersphere/bee/blob/master/packaging/bee.service)). - - Depending on which of these startup methods was used, the default Bee directories will be different. See the [configuration page](/docs/bee/working-with-bee/configuration) for more information about default data and config directories. -::: - - - -## Installation Steps - -1. [Install Bee](/docs/bee/installation/install#1-install-bee) -1. [Configure Bee](/docs/bee/installation/install#2-configure-bee) -1. [Find Bee Address](/docs/bee/installation/install#3-find-bee-address) -1. [Fund node](/docs/bee/installation/install#4-fund-node) (Not required for ultra-light nodes) -1. [Wait for Initialisation](/docs/bee/installation/install#5-wait-for-initialisation) -1. [Check Bee Status](/docs/bee/installation/install#6-check-if-bee-is-working) -1. [Back Up Keys](/docs/bee/installation/install#7-back-up-keys) -1. [Deposit Stake](/docs/bee/installation/install#8-deposit-stake-optional) (Full node only, optional) - - -## 1. 
Install Bee - -### Package manager install - -Bee is available for Linux in .rpm and .deb package format for a variety of system architectures, and is available for MacOS through Homebrew. See the [releases](https://github.com/ethersphere/bee/releases) page of the Bee repo for all available packages. One of the advantages of this method is that it automatically sets up Bee to run as a service as a part of the install process. - - - - - -Get GPG key: - -```bash -curl -fsSL https://repo.ethswarm.org/apt/gpg.key | sudo gpg --dearmor -o /usr/share/keyrings/ethersphere-apt-keyring.gpg -``` - -Set up repo inside apt-get sources: - -```bash -echo \ - "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/ethersphere-apt-keyring.gpg] https://repo.ethswarm.org/apt \ - * *" | sudo tee /etc/apt/sources.list.d/ethersphere.list > /dev/null -``` - -Install package: - -```bash -sudo apt-get update -sudo apt-get install bee -``` - - - - - -Set up repo: - -```bash -sudo echo "[ethersphere] -name=Ethersphere Repo -baseurl=https://repo.ethswarm.org/yum/ -enabled=1 -gpgcheck=0" > /etc/yum.repos.d/ethersphere.repo -``` -Install package: - -```bash -yum install bee -``` - - - -```bash -brew tap ethersphere/tap -brew install swarm-bee -``` - - - - -You should see the following output to your terminal after a successful install (your default 'Config' location will vary depending on your operating system): - -```bash -Reading package lists... Done -Building dependency tree... Done -Reading state information... Done -The following NEW packages will be installed: - bee -0 upgraded, 1 newly installed, 0 to remove and 37 not upgraded. -Need to get 0 B/27.2 MB of archives. -After this operation, 50.8 MB of additional disk space will be used. -Selecting previously unselected package bee. -(Reading database ... 82381 files and directories currently installed.) -Preparing to unpack .../archives/bee_2.4.0_amd64.deb ... -Unpacking bee (2.4.0) ... -Setting up bee (2.4.0) ... 
- -Logs: journalctl -f -u bee.service -Config: /etc/bee/bee.yaml - -Bee requires a Gnosis Chain RPC endpoint to function. By default this is expected to be found at ws://localhost:8546. - -Please see https://docs.ethswarm.org/docs/installation/install for more details on how to configure your node. - -After you finish configuration run 'sudo bee-get-addr' and fund your node with XDAI, and also XBZZ if so desired. - -Created symlink /etc/systemd/system/multi-user.target.wants/bee.service → /lib/systemd/system/bee.service. -``` - -### Shell script install - -The [Bee install shell script](https://github.com/ethersphere/bee/blob/master/install.sh) for Linux automatically detects its execution environment and installs the latest stable version of Bee. - -:::info -Note that this install method copies precompiled binaries directly to the `/usr/local/bin` directory, so Bee installed through this method cannot be managed or uninstalled with package manager command line tools like `dpkg`, `rpm`, and `brew`. - -Also note that unlike the package install method, this install method will not set up Bee to run as a service (such as with `systemctl` or `brew services`). -::: - -Use either of the following commands to run the script and install Bee: - -#### wget - -```bash -wget -q -O - https://raw.githubusercontent.com/ethersphere/bee/master/install.sh | TAG=v2.4.0 bash -``` - -#### curl - -```bash -curl -s https://raw.githubusercontent.com/ethersphere/bee/master/install.sh | TAG=v2.4.0 bash -``` -### Build from source -If neither of the above methods works for your system, you can see our guide for [building directly from source](/docs/bee/installation/build-from-source). - -## 2. Configure Bee - -Before starting Bee for the first time you will need to make sure it is properly configured. - -See the [configuration](/docs/bee/working-with-bee/configuration) section for more details. 
- -### Config for the Bee Service - -When installing Bee with a package manager the configuration file for the Bee service will be automatically generated. - -Check that the file was successfully generated and contains the [default configuration](https://github.com/ethersphere/bee/blob/master/packaging) for your system: - - - - - -```bash - test -f /etc/bee/bee.yaml && echo "$FILE exists." - cat /etc/bee/bee.yaml -``` - - - - - -```bash - test -f /opt/homebrew/etc/swarm-bee/bee.yaml && echo "$FILE exists." - cat /opt/homebrew/etc/swarm-bee/bee.yaml -``` - - - - - -```bash - test -f /usr/local/etc/swarm-bee/bee.yaml && echo "$FILE exists." - cat /usr/local/etc/swarm-bee/bee.yaml -``` - - - -The configuration printed to the terminal should match the default configuration for your operating system. See the [the packaging section of the Bee repo](https://github.com/ethersphere/bee/tree/master/packaging) for the default configurations for a variety of systems. In particular, pay attention to the `config` and `data-dir` values, as these differ depending on your system. - -If your config file is missing you will need to create it yourself. - -:::info -You may be aware of the `bee printconfig` command which prints out a complete default Bee configuration. However, note that it outputs the default `data` and `config` directories for running Bee with `bee start`, and will need to be updated to use [the default locations for your system](https://github.com/ethersphere/bee/tree/master/packaging) if you plan on running Bee as a service with `systemctl` or `brew services`. -::: - - - - -Create the `bee.yaml` config file and save it with the [the default configuration](https://github.com/ethersphere/bee/blob/master/packaging/bee.yaml). 
- -```bash -sudo touch /etc/bee/bee.yaml -sudo vi /etc/bee/bee.yaml -``` - - - - - -Create the `bee.yaml` config file and save it with the [the default configuration](https://github.com/ethersphere/bee/blob/master/packaging/homebrew-arm64/bee.yaml). - -```bash -sudo touch /opt/homebrew/etc/swarm-bee/bee.yaml -sudo sudo vi /opt/homebrew/etc/swarm-bee/bee.yaml -``` - - - - - -Create the `bee.yaml` config file and save it with the [the default configuration](https://github.com/ethersphere/bee/blob/master/packaging/homebrew-amd64/bee.yaml). - -```bash -sudo touch /usr/local/etc/swarm-bee/bee.yaml -sudo vi /usr/local/etc/swarm-bee/bee.yaml -``` - - - - -### Config for `bee start` - -When running your node using `bee start` you can set options using either command line flags, environment variables, or a YAML configuration file. See the configuration section for [more information on setting options for running a node with `bee start`](/docs/bee/working-with-bee/configuration). - -No default YAML configuration file is generated to be used with the `bee start` command, so it must be generated and placed in the default config directory if you wish to use it to set your node's options. You can view the default configuration including the default config directory for your system with the `bee printconfig` command. - -```bash -root@user-bee:~# bee printconfig -``` - -Check the configuration printed to your terminal. Note that the values for `config` and `data-dir` will vary slightly depending on your operating system. 
- -```bash -# allow to advertise private CIDRs to the public network -allow-private-cidrs: false -# HTTP API listen address -api-addr: 127.0.0.1:1633 -# chain block time -block-time: "15" -# rpc blockchain endpoint -blockchain-rpc-endpoint: "" -# initial nodes to connect to -bootnode: [] -# cause the node to always accept incoming connections -bootnode-mode: false -# cache capacity in chunks, multiply by 4096 to get approximate capacity in bytes -cache-capacity: "1000000" -# enable forwarded content caching -cache-retrieval: true -# enable chequebook -chequebook-enable: true -# config file (default is $HOME/.bee.yaml) -config: /root/.bee.yaml -# origins with CORS headers enabled -cors-allowed-origins: [] -# data directory -data-dir: /root/.bee -# size of block cache of the database in bytes -db-block-cache-capacity: "33554432" -# disables db compactions triggered by seeks -db-disable-seeks-compaction: true -# number of open files allowed by database -db-open-files-limit: "200" -# size of the database write buffer in bytes -db-write-buffer-size: "33554432" -# cause the node to start in full mode -full-node: false -# help for printconfig -help: false -# triggers connect to main net bootnodes. 
-mainnet: true -# NAT exposed address -nat-addr: "" -# suggester for target neighborhood -neighborhood-suggester: https://api.swarmscan.io/v1/network/neighborhoods/suggestion -# ID of the Swarm network -network-id: "1" -# P2P listen address -p2p-addr: :1634 -# enable P2P WebSocket transport -p2p-ws-enable: false -# password for decrypting keys -password: "" -# path to a file that contains password for decrypting keys -password-file: "" -# percentage below the peers payment threshold when we initiate settlement -payment-early-percent: 50 -# threshold in BZZ where you expect to get paid from your peers -payment-threshold: "13500000" -# excess debt above payment threshold in percentages where you disconnect from your peer -payment-tolerance-percent: 25 -# postage stamp contract address -postage-stamp-address: "" -# postage stamp contract start block number -postage-stamp-start-block: "0" -# enable pprof mutex profile -pprof-mutex: false -# enable pprof block profile -pprof-profile: false -# price oracle contract address -price-oracle-address: "" -# redistribution contract address -redistribution-address: "" -# ENS compatible API endpoint for a TLD and with contract address, can be repeated, format [tld:][contract-addr@]url -resolver-options: [] -# forces the node to resync postage contract data -resync: false -# staking contract address -staking-address: "" -# lru memory caching capacity in number of statestore entries -statestore-cache-capacity: "100000" -# protect nodes from getting kicked out on bootnode -static-nodes: [] -# enable storage incentives feature -storage-incentives-enable: true -# gas price in wei to use for deployment and funding -swap-deployment-gas-price: "" -# enable swap -swap-enable: false -# swap blockchain endpoint -swap-endpoint: "" -# swap factory addresses -swap-factory-address: "" -# initial deposit if deploying a new chequebook -swap-initial-deposit: "0" -# neighborhood to target in binary format (ex: 111111001) for mining the initial 
overlay -target-neighborhood: "" -# admin username to get the security token -token-encryption-key: "" -# enable tracing -tracing-enable: false -# endpoint to send tracing data -tracing-endpoint: 127.0.0.1:6831 -# host to send tracing data -tracing-host: "" -# port to send tracing data -tracing-port: "" -# service name identifier for tracing -tracing-service-name: bee -# bootstrap node using postage snapshot from the network -use-postage-snapshot: false -# log verbosity level 0=silent, 1=error, 2=warn, 3=info, 4=debug, 5=trace -verbosity: info -# time to warmup the node before some major protocols can be kicked off -warmup-time: 5m0s -# send a welcome message string during handshakes -welcome-message: "" -# withdrawal target addresses -withdrawal-addresses-whitelist: [] -``` - -If you do wish to use a YAML file to manage your configuration, simply generate a new file in the same directory as shown for `config` from the `bee printconfig` output. For us, that is `/root/.bee.yaml` (make sure to change this directory to match the value for the `config` directory which is output from `bee printconfig` on your system). - -```bash -touch /root/.bee.yaml -vi /root/.bee.yaml -``` - -You can then populate your `.bee.yaml` file with the default config output from `bee printconfig` to get started and save the file. - -### Set Bee API Address -:::danger -Make sure that your api-addr (default 1633) is never exposed to the internet. It is good practice to employ one or more firewalls that block traffic on every port except for those you are expecting to be open. -::: - -If you are not using a firewall or other method to protect your node, it's recommended that you change your Bee API address from the default `1633` to `127.0.0.1:1633` to ensure that it is not publicly exposed to the internet. 
- -```yaml -## HTTP API listen address (default ":1633") -api-addr: 127.0.0.1:1633 -``` - -### Set node type - -#### Full Node, Light Node, Ultra-light Node - -See the [quick start guide](/docs/bee/installation/quick-start) if you're not sure which type of node to run. - -To run Bee as a full node both `full-node` and `swap-enable` must be set to `true`, and a valid and stable Gnosis Chain RPC endpoint must be specified with `blockchain-rpc-endpoint`. - -```yaml -## bee.yaml -full-node: true -``` - -To run Bee as a light node `full-node` must be set to `false` and `swap-enable` must both be set to `true`, and a valid and stable Gnosis Chain RPC endpoint must be specified with `blockchain-rpc-endpoint`. - -```yaml -## bee.yaml -full-node: false -``` - -To run Bee as an ultra-light node `full-node` and `swap-enable` must both be set to `false`. No Gnosis Chain endpoint is required, and `blockchain-rpc-endpoint` can be left to its default value of an empty string. - -```yaml -## bee.yaml -full-node: false -swap-enable: false -``` - -### Set blockchain RPC endpoint - -Full and light Bee nodes require a Gnosis Chain RPC endpoint so they can interact with and deploy their chequebook contract, see the latest view of the current postage stamp batches, and interact with and top-up postage stamp batches. A blockchain RPC endpoint is not required for nodes running in ultra-light mode. - -We strongly recommend you [run your own Gnosis Chain node](https://docs.gnosischain.com/node/) if you are planning to run a full node, and especially if you plan to run a [hive of nodes](/docs/bee/installation/hive). - -If you do not wish to run your own Gnosis Chain node and are willing to trust a third party, you may also consider using an RPC endpoint provider such as [GetBlock](https://getblock.io/). 
- -For running a light node or for testing out a single full node you may also consider using one of the [free public RPC endpoints](https://docs.gnosischain.com/tools/RPC%20Providers/) listed in the Gnosis Chain documentation. However the providers of these endpoints make no [SLA](https://business.adobe.com/blog/basics/service-level-agreements-slas-a-complete-guide#what-is-a-service-level-agreement-sla) or availability guarantees, and is therefore not recommended for full node operators. - -To set your RPC endpoint provider, specify it with the `blockchain-rpc-endpoint` value, which is set to an empty string by default. - -```yaml -## bee.yaml -blockchain-rpc-endpoint: https://rpc.gnosis.gateway.fm -``` - -:::info -The gateway.fm RPC endpoint in the example is great for learning how to set up Bee, but for the sake of security and reliability it's recommended that you run your [run your own Gnosis Chain node](https://docs.gnosischain.com/node/) rather than relying on a third party provider. -::: - - -### Configure Swap Initial Deposit (Optional) - -When running your Bee node with SWAP enabled for the first time, your node will deploy a 'chequebook' contract using the canonical factory contract which is deployed by Swarm. Once the chequebook is deployed, Bee will (optionally) deposit a certain amount of xBZZ in the chequebook contract so that it can pay other nodes in return for their services. The amount of xBZZ transferred to the chequebook is set by the `swap-initial-deposit` configuration setting (it may be left at the default value of zero or commented out). - -### NAT address - -Swarm is all about sharing and storing chunks of data. To enable other -Bees (also known as _peers_) to connect to your Bee, you must -broadcast your public IP address in order to ensure that Bee is reachable on -the correct p2p port (default `1634`). 
We recommend that you [manually -configure your external IP and check -connectivity](/docs/bee/installation/connectivity) to ensure your Bee is -able to receive connections from other peers. - -First determine your public IP address: - -```bash -curl icanhazip.com -``` - -``` -123.123.123.123 -``` - -Then configure your node, including your p2p port (default 1634). - -```yaml -## bee.yaml -nat-addr: "123.123.123.123:1634" -``` -### ENS Resolution (Optional) - -The [ENS](https://ens.domains/) domain resolution system is used to host websites on Bee, and in order to use this your Bee must be connected to a mainnet Ethereum blockchain node. We recommend you run your own ethereum node. An option for resource restricted devices is geth+nimbus and a guide can be found [here](https://ethereum-on-arm-documentation.readthedocs.io/en/latest/). Other options include [dappnode](https://dappnode.io/), [nicenode](https://www.nicenode.xyz/), [stereum](https://stereum.net/) and [avado](https://ava.do/). - - -If you do not wish to run your own Ethereum node you may use a blockchain API service provider such as Infura. After signing up for [Infura's](https://infura.io) API service, simply set your `--resolver-options` to `https://mainnet.infura.io/v3/your-api-key`. - -```yaml -## bee.yaml -resolver-options: ["https://mainnet.infura.io/v3/<>"] -``` - -### Set Target Neighborhood (Optional) - -In older versions of Bee, [neighborhood](/docs/concepts/DISC/neighborhoods) assignment was random by default. However, we can maximize a node's chances of winning xBZZ and also strengthen the resiliency of the network by strategically assigning neighborhoods to new nodes (see the [staking section](/docs/bee/working-with-bee/staking) for more details). - -Therefore the default Bee configuration now includes the `neighborhood-suggester` option which is set by default to to use the Swarmscan neighborhood suggester (`https://api.swarmscan.io/v1/network/neighborhoods/suggestion`). 
An alternative suggester URL could be used as long as it returns a JSON file in the same format `{"neighborhood":"101000110101"}`, however only the Swarmscan suggester is officially recommended. - - -:::info -The Swarmscan neighborhood selector will return the least populated neighborhood (or its least populated sub-neighborhood in case the sub-neighborhoods are imbalanced). Furthermore, the suggester will temporarily de-prioritize previously suggested neighborhoods based on the assumption that a new node is being created in each suggested neighborhood so that multiple nodes do not simultaneously attempt to join the same neighborhood. -::: - -#### Setting Neighborhood Manually - -It's recommended to use the default `neighborhood-suggester` configuration for choosing your node's neighborhood, however you may also set your node's neighborhood manually using the `target-neighborhood` option. - -To use this option, it's first necessary to identify potential target neighborhoods. A convenient tool for finding underpopulated neighborhoods is available at the [Swarmscan website](https://swarmscan.io/neighborhoods). This tool provides the leading binary bits of target neighborhoods in order of least populated to most. Simply copy the leading bits from one of the least populated neighborhoods (for example, `0010100001`) and use it to set `target-neighborhood`. After doing so, an overlay address within that neighborhood will be generated when starting Bee for the first time. - -```yaml -## bee.yaml -target-neighborhood: "0010100001" -``` - -There is also a [Swarmscan API endpoint](https://api.swarmscan.io/#tag/Network/paths/~1v1~1network~1neighborhoods~1suggestion/get) which you can use to get a suggested neighborhood programmatically: - -```bash -curl https://api.swarmscan.io/v1/network/neighborhoods/suggestion -``` -A suggested neighborhood will be returned: - -```bash -{"neighborhood":"1111110101"} -``` - - -## 3. 
Find Bee address - -:::danger - In the following section we print our `swarm.key` file contents to the terminal. Do not share the contents of your `swarm.key` or any other keys with anyone as it controls access to your Gnosis Chain account and can be used to withdraw assets. -::: - -As part of the process of starting a Bee full or light node the node must issue a Gnosis Chain transaction to set up its chequebook contract. We need to find our node's Gnosis Chain address in order to deposit xDAI which will be used to pay for this initial Gnosis Chain transaction. We can find our node's address by reading it directly from our key file. The location for your key file will differ depending on your system and startup method: - -### Bee Service - -The default keys directory for a Bee node set up with a package manager to run as a service will differ depending on your system: - - - - - -```bash -sudo cat /var/lib/bee/keys/swarm.key -``` - -```bash -{"address":"215693a6e6cf0a27441075fd98c31d48e3a3a100","crypto":{"cipher":"aes-128-ctr","ciphertext":"9e2706f1ce135dde449af5c529e80d560fb73007f1edb1636efcf4572eed1265","cipherparams":{"iv":"64b6482b8e04881446d88f4f9003ec78"},"kdf":"scrypt","kdfparams":{"n":32768,"r":8,"p":1,"dklen":32,"salt":"3da537f2644274e3a90b1f6e1fbb722c32cbd06be56b8f55c2ff8fa7a522fb22"},"mac":"11b109b7267d28f332039768c4117b760deed626c16c9c1388103898158e583b"},"version":3,"id":"d4f7ee3e-21af-43de-880e-85b6f5fa7727"} -``` -The `address` field contains the Gnosis Chain address of the node, simply add the `0x` prefix and save it for the next step (0x215693a6e6cf0a27441075fd98c31d48e3a3a100). 
- - - - - - - - -```bash -sudo cat /opt/homebrew/var/lib/swarm-bee/keys/swarm.key -``` - -```bash -{"address":"215693a6e6cf0a27441075fd98c31d48e3a3a100","crypto":{"cipher":"aes-128-ctr","ciphertext":"9e2706f1ce135dde449af5c529e80d560fb73007f1edb1636efcf4572eed1265","cipherparams":{"iv":"64b6482b8e04881446d88f4f9003ec78"},"kdf":"scrypt","kdfparams":{"n":32768,"r":8,"p":1,"dklen":32,"salt":"3da537f2644274e3a90b1f6e1fbb722c32cbd06be56b8f55c2ff8fa7a522fb22"},"mac":"11b109b7267d28f332039768c4117b760deed626c16c9c1388103898158e583b"},"version":3,"id":"d4f7ee3e-21af-43de-880e-85b6f5fa7727"} -``` -The `address` field contains the Gnosis Chain address of the node, simply add the `0x` prefix and save it for the next step (0x215693a6e6cf0a27441075fd98c31d48e3a3a100). - - - - - - -```bash -sudo cat /usr/local/var/lib/swarm-bee/keys/swarm.key -``` - -```bash -{"address":"215693a6e6cf0a27441075fd98c31d48e3a3a100","crypto":{"cipher":"aes-128-ctr","ciphertext":"9e2706f1ce135dde449af5c529e80d560fb73007f1edb1636efcf4572eed1265","cipherparams":{"iv":"64b6482b8e04881446d88f4f9003ec78"},"kdf":"scrypt","kdfparams":{"n":32768,"r":8,"p":1,"dklen":32,"salt":"3da537f2644274e3a90b1f6e1fbb722c32cbd06be56b8f55c2ff8fa7a522fb22"},"mac":"11b109b7267d28f332039768c4117b760deed626c16c9c1388103898158e583b"},"version":3,"id":"d4f7ee3e-21af-43de-880e-85b6f5fa7727"} -``` -The `address` field contains the Gnosis Chain address of the node, simply add the `0x` prefix and save it for the next step (0x215693a6e6cf0a27441075fd98c31d48e3a3a100). - - - - - -### For `bee start` - -The default keys directory when running Bee with the `bee start` command will depend on your operating system. Run the `bee printconfig` command to see the default config directory for your operating system, and look for the `data-dir` value. - -```bash -data-dir: /root/.bee -``` - -Your keys folder is found in the root of the `data-dir` directory. 
We can print our key data to the terminal to find our node's address: - -```bash -sudo cat /root/.bee/keys/swarm.key -``` - - -```bash -{"address":"215693a6e6cf0a27441075fd98c31d48e3a3a100","crypto":{"cipher":"aes-128-ctr","ciphertext":"9e2706f1ce135dde449af5c529e80d560fb73007f1edb1636efcf4572eed1265","cipherparams":{"iv":"64b6482b8e04881446d88f4f9003ec78"},"kdf":"scrypt","kdfparams":{"n":32768,"r":8,"p":1,"dklen":32,"salt":"3da537f2644274e3a90b1f6e1fbb722c32cbd06be56b8f55c2ff8fa7a522fb22"},"mac":"11b109b7267d28f332039768c4117b760deed626c16c9c1388103898158e583b"},"version":3,"id":"d4f7ee3e-21af-43de-880e-85b6f5fa7727"} -``` - -The `address` field contains the Gnosis Chain address of the node, simply add the `0x` prefix and save it for the next step (0x215693a6e6cf0a27441075fd98c31d48e3a3a100). - -## 4. Fund Node - -:::info - We recommend not holding a high value of xBZZ or xDAI in your nodes' wallet. Please consider regularly removing accumulated funds. -::: - -To fund your node with xDAI you can use a Gnosis Chain compatible wallet such as Metamask, or a centralized exchange which supports xDAI withdrawals to Gnosis Chain. If you already have some DAI on Ethereum, you can use the [Gnosis Chain Bridge](https://bridge.gnosischain.com/) to mint xDAI on Gnosis Chain. - -After acquiring some xDAI, you can fund your node by sending some xDAI to the address you saved from the previous step (1 xDAI is more sufficient). You can optionally also send some xBZZ to your node which you can use to pay for storage on Swarm. - -While depositing xBZZ is optional, node operators who intend to download or upload large amounts of data on Swarm may wish to deposit some xBZZ in order to pay for SWAP settlements. - -For nodes which stake xBZZ and participate in the storage incentives system, small amounts of xDAI are used regularly to pay for staking related transactions on Gnosis Chain, so xDAI must be periodically topped up. 
See the [staking section](/docs/bee/working-with-bee/staking#check-redistribution-status) for more information. - -After sending xDAI and optionally xBZZ to the Gnosis Chain address collected in the previous step, restart the node: - -### Bee Service - - - - -```bash -sudo systemctl restart bee -``` - - - - - -```bash -brew services restart swarm-bee -``` - - - - -### For `bee start` - -Restart your terminal and run `bee start`: - -```bash -bee start -``` - - -## 5. Wait for Initialisation - -When first started in full or light mode, Bee must deploy a chequebook to the Gnosis Chain blockchain, and sync the postage stamp batch store so that it can check chunks for validity when storing or forwarding them. This can take a while, so please be patient! Once this is complete, you will see Bee starting to add peers and connect to the network. - -You can keep an eye on progress by watching the logs while this is taking place. - - - - - - - -```bash -sudo journalctl --lines=100 --follow --unit bee -``` - - - - - - -```bash -tail -f /opt/homebrew/var/log/swarm-bee/bee.log -``` - - - - - - -```bash -tail -f /usr/local/var/log/swarm-bee/bee.log -``` - - - - -*If you've started your node with `bee start`, simply observe the logs printed to your terminal.* - -If all goes well, you will see your node automatically begin to connect to other Bee nodes all over the world. - -``` -INFO[2020-08-29T11:55:16Z] greeting from peer: b6ae5b22d4dc93ce5ee46a9799ef5975d436eb63a4b085bfc104fcdcbda3b82c -``` - -Now your node will begin to request chunks of data that fall within your _radius of responsibilty_ - data that you will then serve to other p2p clients running in the swarm. Your node will then begin to -respond to requests for these chunks from other peers. - -:::tip Incentivisation -In Swarm, storing, serving and forwarding chunks of data to other nodes can earn you rewards! 
Follow [this guide](/docs/bee/working-with-bee/cashing-out) to learn how to regularly cash out cheques other nodes send you in return for your services so that you can get your xBZZ! -::: - -Your Bee client has now generated an elliptic curve key pair similar to an Ethereum wallet. These are stored in your [data directory](/docs/bee/working-with-bee/configuration), in the `keys` folder. - -:::danger Keep Your Keys and Password Safe! -Your keys and password are very important, back up these files and -store them in a secure place that only you have access to. With great -privacy comes great responsibility - while no-one will ever be able to -guess your key - you will not be able to recover them if you lose them -either, so be sure to look after them well and [keep secure -backups](/docs/bee/working-with-bee/backups). -::: - -## 6. Check if Bee is Working - -First check that the correct version of Bee is installed: - -```bash -bee version -``` - -``` -2.4.0 -``` - -Once the Bee node has been funded, the chequebook deployed, and postage stamp -batch store synced, its HTTP [API](/docs/bee/working-with-bee/bee-api) -will start listening at `localhost:1633`. - -To check everything is working as expected, send a GET request to localhost port 1633. - -```bash -curl localhost:1633 -``` - -``` -Ethereum Swarm Bee -``` - -Great! Our API is listening! - -Next, let's see if we have connected with any peers by querying the API which listens at port 1633 by default (`localhost:1633`). - -:::info -Here we are using the `jq` [utility](https://stedolan.github.io/jq/) to parse our javascript. Use your package manager to install `jq`, or simply remove everything after and including the first `|` to view the raw json without it. -::: - -```bash -curl -s localhost:1633/peers | jq ".peers | length" -``` - -``` -87 -``` - -Perfect! 
We are accumulating peers, this means you are connected to -the network, and ready to start [using -Bee](/docs/develop/access-the-swarm/introduction) to [upload and -download](/docs/develop/access-the-swarm/upload-and-download) content or host -and browse [websites](/docs/develop/access-the-swarm/host-your-website) hosted -on the Swarm network. - -Welcome to the swarm! 🐝 🐝 🐝 🐝 🐝 - -## 7. Back Up Keys - -Once your node is up and running, make sure to [back up your keys](/docs/bee/working-with-bee/backups). - -## 8. Deposit Stake (Optional) - -While depositing stake is not required to run a Bee node, it is required in order for a node to receive rewards for sharing storage with the network. You will need to [deposit xBZZ to the staking contract](/docs/bee/working-with-bee/staking) for your node. To do this, send a minimum of 10 xBZZ to your nodes' wallet and run: - -```bash -curl -X POST localhost:1633/stake/100000000000000000 -``` - -This will initiate a transaction on-chain which deposits the specified amount of xBZZ into the staking contract. - -Storage incentive rewards are only available for full nodes which are providing storage capacity to the network. - -Note that SWAP rewards are available to all full and light nodes, regardless of whether or not they stake xBZZ in order to participate in the storage incentives system. - -## Getting help - -The CLI has documentation built-in. Running `bee` gives you an entry point to the documentation. Running `bee start -h` or `bee start --help` will tell you how you can configure your Bee node via the command line arguments. - -You may also check out the [configuration guide](/docs/bee/working-with-bee/configuration), or simply run your Bee terminal command with the `--help` flag, eg. `bee start --help` or `bee --help`. - -## Next Steps to Consider - -### Access the Swarm -If you'd like to start uploading or downloading files to Swarm, [start here](/docs/develop/access-the-swarm/introduction). 
- -### Explore the API -The [Bee API](/docs/bee/working-with-bee/bee-api) is the primary method for interacting with Bee and getting information about Bee. After installing Bee and getting it up and running, it's a good idea to start getting familiar with the API. - -### Run a hive! -If you would like to run a hive of many Bees, check out the [hive operators](/docs/bee/installation/hive) section for information on how to operate and monitor many Bees at once. - -### Start building DAPPs on Swarm -If you would like to start building decentralised applications on Swarm, check out our section for [developing with Bee](/docs/develop/introduction). diff --git a/docs/bee/installation/package-manager.md b/docs/bee/installation/package-manager.md new file mode 100644 index 000000000..f64cb4285 --- /dev/null +++ b/docs/bee/installation/package-manager.md @@ -0,0 +1,484 @@ +--- +title: Package Manager Install +id: package-manager-install +--- + +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + +The Bee client can be installed through a variety of package managers, including [APT](https://en.wikipedia.org/wiki/APT_(software)), [RPM](https://en.wikipedia.org/wiki/RPM_Package_Manager), and [Homebrew](https://en.wikipedia.org/wiki/Homebrew_(package_manager)). + +:::caution + When installed using a package manager, Bee is set up to run as a background service using `systemctl` or `brew services` (depending on the package manager used). + + However, a package-manager-installed Bee node can also be started using the standard `bee start` command. + + When a node is started using the `bee start` command, the node process will be bound to the terminal session and will exit if the terminal is closed. + + Depending on which of these startup methods was used, [*the default Bee directories will be different*](/docs/bee/working-with-bee/configuration#default-data-and-config-directories). 
For each startup method, a different default data directory is used, so each startup method will essentially be spinning up a totally different node. +::: + + +## Install Bee + +Bee is available for Linux in .rpm and .deb package formats for a variety of system architectures, and is available for macOS through Homebrew. See the [releases](https://github.com/ethersphere/bee/releases) page of the Bee repo for all available packages. One of the advantages of this method is that it automatically sets up Bee to run as a service as a part of the install process. + + + + + +Get GPG key: + +```bash +curl -fsSL https://repo.ethswarm.org/apt/gpg.key | sudo gpg --dearmor -o /usr/share/keyrings/ethersphere-apt-keyring.gpg +``` + +Set up repo inside apt-get sources: + +```bash +echo \ + "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/ethersphere-apt-keyring.gpg] https://repo.ethswarm.org/apt \ + * *" | sudo tee /etc/apt/sources.list.d/ethersphere.list > /dev/null +``` + +Install package: + +```bash +sudo apt-get update +sudo apt-get install bee +``` + + + + + +Set up repo: + +```bash +echo "[ethersphere] +name=Ethersphere Repo +baseurl=https://repo.ethswarm.org/yum/ +enabled=1 +gpgcheck=0" | sudo tee /etc/yum.repos.d/ethersphere.repo > /dev/null +``` +Install package: + +```bash +sudo yum install bee +``` + + + +```bash +brew tap ethersphere/tap +brew install swarm-bee +``` + + + + +You should see the following output in your terminal after a successful install (your default 'Config' location will vary depending on your operating system): + +```bash +Reading package lists... Done +Building dependency tree... Done +Reading state information... Done +The following NEW packages will be installed: + bee +0 upgraded, 1 newly installed, 0 to remove and 37 not upgraded. +Need to get 0 B/27.2 MB of archives. +After this operation, 50.8 MB of additional disk space will be used. +Selecting previously unselected package bee. +(Reading database ... 
82381 files and directories currently installed.) +Preparing to unpack .../archives/bee_2.3.0_amd64.deb ... +Unpacking bee (2.3.0) ... +Setting up bee (2.3.0) ... + +Logs: journalctl -f -u bee.service +Config: /etc/bee/bee.yaml + +Bee requires a Gnosis Chain RPC endpoint to function. By default this is expected to be found at ws://localhost:8546. + +Please see https://docs.ethswarm.org/docs/installation/install for more details on how to configure your node. + +After you finish configuration run 'sudo bee-get-addr' and fund your node with XDAI, and also XBZZ if so desired. + +Created symlink /etc/systemd/system/multi-user.target.wants/bee.service → /lib/systemd/system/bee.service. +``` + + +## Configure Bee + +When Bee is installed using a package manager, a `bee.yaml` file containing the default configuration will be generated. + +:::info +While this package manager install guide uses the `bee.yaml` file for setting configuration options, there are [several other available methods for setting node options](/docs/bee/working-with-bee/configuration). +::: + +After installation, you can check that the file was successfully generated and contains the [default configuration](https://github.com/ethersphere/bee/blob/master/packaging) for your system: + + + + + +```bash + FILE=/etc/bee/bee.yaml + test -f "$FILE" && echo "$FILE exists." + cat "$FILE" +``` + + + + + +```bash + FILE=/opt/homebrew/etc/swarm-bee/bee.yaml + test -f "$FILE" && echo "$FILE exists." + cat "$FILE" +``` + + + + + +```bash + FILE=/usr/local/etc/swarm-bee/bee.yaml + test -f "$FILE" && echo "$FILE exists." + cat "$FILE" +``` + + + +The configuration printed to the terminal should match the default configuration for your operating system. See [the packaging section of the Bee repo](https://github.com/ethersphere/bee/tree/master/packaging) for the default configurations for a variety of systems. 
In particular, pay attention to the `config` and `data-dir` values, as these differ depending on your system. + +If your config file is missing, you will need to create it yourself. + + + + + +Create the `bee.yaml` config file and save it with [the default configuration](https://github.com/ethersphere/bee/blob/master/packaging/bee.yaml). + +```bash +sudo touch /etc/bee/bee.yaml +sudo vi /etc/bee/bee.yaml +``` + + + + + +Create the `bee.yaml` config file and save it with [the default configuration](https://github.com/ethersphere/bee/blob/master/packaging/homebrew-arm64/bee.yaml). + +```bash +sudo touch /opt/homebrew/etc/swarm-bee/bee.yaml +sudo vi /opt/homebrew/etc/swarm-bee/bee.yaml +``` + + + + + +Create the `bee.yaml` config file and save it with [the default configuration](https://github.com/ethersphere/bee/blob/master/packaging/homebrew-amd64/bee.yaml). + +```bash +sudo touch /usr/local/etc/swarm-bee/bee.yaml +sudo vi /usr/local/etc/swarm-bee/bee.yaml +``` + + + + +### Set Node Type + +See the [Getting Started guide](/docs/bee/installation/getting-started) if you're not sure which type of node to run. + +Once you've decided which node type is appropriate for you, refer to the [configuration section](/docs/bee/working-with-bee/configuration#set-bee-node-type) for instructions on setting the options for your preferred node type. + +### Set Target Neighborhood + +When installing your Bee node it will automatically be assigned a neighborhood. However, when running a full node with staking there are benefits to periodically updating your node's neighborhood. Learn more about why and how to set your node's target neighborhood [here](/docs/bee/installation/set-target-neighborhood). + + +## Start Node + +Use the appropriate command for your system to start your node: + + + + +```bash +sudo systemctl start bee +``` + + + + + +```bash +brew services start swarm-bee +``` + + + + + +```bash +Welcome to Swarm.... 
Bzzz Bzzzz Bzzzz + \ / + \ o ^ o / + \ ( ) / + ____________(%%%%%%%)____________ + ( / / )%%%%%%%( \ \ ) + (___/___/__/ \__\___\___) + ( / /(%%%%%%%)\ \ ) + (__/___/ (%%%%%%%) \___\__) + /( )\ + / (%%%%%) \ + (%%%) + ! + +DISCLAIMER: +This software is provided to you "as is", use at your own risk and without warranties of any kind. +It is your responsibility to read and understand how Swarm works and the implications of running this software. +The usage of Bee involves various risks, including, but not limited to: +damage to hardware or loss of funds associated with the Ethereum account connected to your node. +No developers or entity involved will be liable for any claims and damages associated with your use, +inability to use, or your interaction with other nodes or the software. + +version: 2.2.0-06a0aca7 - planned to be supported until 11 December 2024, please follow https://ethswarm.org/ + +"time"="2024-09-24 18:15:34.383102" "level"="info" "logger"="node" "msg"="bee version" "version"="2.2.0-06a0aca7" +"time"="2024-09-24 18:15:34.428546" "level"="info" "logger"="node" "msg"="swarm public key" "public_key"="0373fe2ab33ab836635fc35864cf708fa0f4a775c0cf76ca851551e7787b58d040" +"time"="2024-09-24 18:15:34.520686" "level"="info" "logger"="node" "msg"="pss public key" "public_key"="03a341032724f1f9bb04f1d9b22607db485cccd74174331c701f3a6957d94d95c1" +"time"="2024-09-24 18:15:34.520716" "level"="info" "logger"="node" "msg"="using ethereum address" "address"="0x1A801dd3ec955E905ca424a85C3423599bfb0E66" +"time"="2024-09-24 18:15:34.533789" "level"="info" "logger"="node" "msg"="fetching target neighborhood from suggester" "url"="https://api.swarmscan.io/v1/network/neighborhoods/suggestion" +"time"="2024-09-24 18:15:36.773501" "level"="info" "logger"="node" "msg"="mining a new overlay address to target the selected neighborhood" "target"="00100010110" +"time"="2024-09-24 18:15:36.776550" "level"="info" "logger"="node" "msg"="using overlay address" 
"address"="22d502d022de0f8e9d477bc61144d0d842d9d82b8241568c6fe4e41f0b466615" +"time"="2024-09-24 18:15:36.776576" "level"="info" "logger"="node" "msg"="starting with an enabled chain backend" +"time"="2024-09-24 18:15:37.388997" "level"="info" "logger"="node" "msg"="connected to blockchain backend" "version"="erigon/2.60.7/linux-amd64/go1.21.5" +"time"="2024-09-24 18:15:37.577840" "level"="info" "logger"="node" "msg"="using chain with network network" "chain_id"=100 "network_id"=1 +"time"="2024-09-24 18:15:37.593747" "level"="info" "logger"="node" "msg"="starting debug & api server" "address"="127.0.0.1:1633" +"time"="2024-09-24 18:15:37.969782" "level"="info" "logger"="node" "msg"="using default factory address" "chain_id"=100 "factory_address"="0xC2d5A532cf69AA9A1378737D8ccDEF884B6E7420" +"time"="2024-09-24 18:15:38.160249" "level"="info" "logger"="node/chequebook" "msg"="no chequebook found, deploying new one." +"time"="2024-09-24 18:15:38.728534" "level"="warning" "logger"="node/chequebook" "msg"="cannot continue until there is at least min xDAI (for Gas) available on address" "min_amount"="0.0003750000017" "address"="0x1A801dd3ec955E905ca424a85C3423599bfb0E66" +``` + +Take note of the lines: + +```bash +"time"="2024-09-24 18:15:34.520716" "level"="info" "logger"="node" "msg"="using ethereum address" "address"="0x1A801dd3ec955E905ca424a85C3423599bfb0E66" +``` + +and + +```bash +"time"="2024-09-24 18:15:38.728534" "level"="warning" "logger"="node/chequebook" "msg"="cannot continue until there is at least min xDAI (for Gas) available on address" "min_amount"="0.0003750000017" "address"="0x1A801dd3ec955E905ca424a85C3423599bfb0E66" +``` + +The address referred to in both of these lines is your node's Gnosis Chain address. The second one indicates that the address does not have enough xDAI in order to deploy your node's chequebook contract which is used to pay for bandwidth incentives. 
You will see this warning if you have configured your node to run as a `full` or `light` node, but it should be absent for `ultra-light` nodes. + +## Fund Node + +Depending on your chosen node type (full, light, or ultra-light), you will want to fund your node with differing amounts of xBZZ and xDAI. See [this section](/docs/bee/installation/fund-your-node) for more information on how to fund your node. + + +### Restart and Wait for Initialisation + +After funding your node, use the appropriate command for your system below and wait for it to initialize: + + + + +```bash +sudo systemctl restart bee +``` + + + + + +```bash +brew services restart swarm-bee +``` + + + + +When first started in full or light mode, Bee must deploy a chequebook to the Gnosis Chain blockchain, and sync the postage stamp batch store so that it can check chunks for validity when storing or forwarding them. This can take a while, so please be patient! Once this is complete, you will see Bee starting to add peers and connect to the network. + +You can keep an eye on progress by watching the logs while this is taking place. + + + + + +```bash +sudo journalctl --lines=100 --follow --unit bee +``` + + + + + + +```bash +tail -f /opt/homebrew/var/log/swarm-bee/bee.log +``` + + + + + + +```bash +tail -f /usr/local/var/log/swarm-bee/bee.log +``` + + + + +*If you've started your node with `bee start`, simply observe the logs printed to your terminal.* + +If all goes well, you will see your node automatically begin to connect to other Bee nodes all over the world. + +``` +INFO[2020-08-29T11:55:16Z] greeting from peer: b6ae5b22d4dc93ce5ee46a9799ef5975d436eb63a4b085bfc104fcdcbda3b82c +``` + +Now your node will begin to request chunks of data that fall within your _radius of responsibility_ - data that you will then serve to other p2p clients running in the swarm. Your node will then begin to +respond to requests for these chunks from other peers.
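Which chunks fall within your radius of responsibility is determined by how close their addresses are to your node's overlay address, and the leading bits of that overlay address identify your node's neighborhood. As a purely illustrative sketch (this is not an official Swarm tool, and the 11-bit prefix length is just an example depth), here is how the neighborhood prefix of the example overlay address from the startup logs above can be derived in bash:

```bash
# Hypothetical helper: derive the neighborhood prefix (leading bits) of an overlay address.
# The address below is the example overlay from the sample startup logs above.
OVERLAY="22d502d022de0f8e9d477bc61144d0d842d9d82b8241568c6fe4e41f0b466615"

# Interpret the first 4 hex characters as a 16-bit integer
v=$((16#${OVERLAY:0:4}))

# Build its 16-bit binary representation, most significant bit first
bits=""
for ((i = 15; i >= 0; i--)); do
  bits+=$(((v >> i) & 1))
done

# The leading bits form the node's neighborhood prefix (11 bits shown here)
echo "${bits:0:11}"   # prints 00100010110 for this example address
```

Note that the result matches the mined target neighborhood (`00100010110`) shown in the sample logs, which is exactly what the neighborhood-mining step is doing: generating an overlay whose leading bits equal the target.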
+ +:::tip Incentivisation +In Swarm, storing, serving and forwarding chunks of data to other nodes can earn you rewards! Follow [this guide](/docs/bee/working-with-bee/cashing-out) to learn how to regularly cash out cheques other nodes send you in return for your services so that you can get your xBZZ! +::: + +Your Bee client has now generated an elliptic curve key pair similar to an Ethereum wallet. These are stored in your [data directory](/docs/bee/working-with-bee/configuration), in the `keys` folder. + +:::danger Keep Your Keys and Password Safe! +Your keys and password are very important. Back up these files and +store them in a secure place that only you have access to. With great +privacy comes great responsibility - while no-one will ever be able to +guess your keys - you will not be able to recover them if you lose them +either, so be sure to look after them well and [keep secure +backups](/docs/bee/working-with-bee/backups). +::: + +## Check if Bee is Working + +First check that the correct version of Bee is installed: + +```bash +bee version +``` + +``` +2.3.0 +``` + +Once the Bee node has been funded, the chequebook deployed, and postage stamp +batch store synced, its HTTP [API](/docs/bee/working-with-bee/bee-api) +will start listening at `localhost:1633` (for `full` or `light` nodes - an `ultra-light` node skips these steps, so its API should be available more quickly). + +To check everything is working as expected, send a GET request to localhost port 1633. + +```bash +curl localhost:1633 +``` + +``` +Ethereum Swarm Bee +``` + +Great! Our API is listening! + +Next, let's see if we have connected with any peers by querying the API which listens at port 1633 by default (`localhost:1633`). + +:::info +Here we are using the `jq` [utility](https://stedolan.github.io/jq/) to parse the JSON response. Use your package manager to install `jq`, or simply remove everything after and including the first `|` to view the raw JSON without it. 
+::: + +```bash +curl -s localhost:1633/peers | jq ".peers | length" +``` + +``` +87 +``` + +Perfect! We are accumulating peers; this means you are connected to +the network, and ready to start [using +Bee](/docs/develop/access-the-swarm/introduction) to [upload and +download](/docs/develop/access-the-swarm/upload-and-download) content or host +and browse [websites](/docs/develop/access-the-swarm/host-your-website) hosted +on the Swarm network. + +Welcome to the swarm! 🐝 🐝 🐝 🐝 🐝 + +## Back Up Keys + +Once your node is up and running, make sure to [back up your keys](/docs/bee/working-with-bee/backups). + +## Deposit Stake (Optional) + +While depositing stake is not required to run a Bee node, it is required in order for a node to receive rewards for sharing storage with the network. You will need to [deposit xBZZ to the staking contract](/docs/bee/working-with-bee/staking) for your node. To do this, send a minimum of 10 xBZZ to your node's wallet and run: + +```bash +curl -X POST localhost:1633/stake/100000000000000000 +``` + +This will initiate a transaction on-chain which deposits the specified amount of xBZZ into the staking contract. + +Storage incentive rewards are only available for full nodes which are providing storage capacity to the network. + +Note that SWAP rewards are available to all full and light nodes, regardless of whether or not they stake xBZZ in order to participate in the storage incentives system. + + + +## Next Steps to Consider + + +### Access the Swarm +If you'd like to start uploading or downloading files to Swarm, [start here](/docs/develop/access-the-swarm/introduction). + +### Explore the API +The [Bee API](/docs/bee/working-with-bee/bee-api) is the primary method for interacting with Bee and getting information about Bee. After installing Bee and getting it up and running, it's a good idea to start getting familiar with the API. + +### Run a hive! 
+If you would like to run a hive of many Bees, check out the [hive operators](/docs/bee/installation/hive) section for information on how to operate and monitor many Bees at once. + +### Start building DAPPs on Swarm +If you would like to start building decentralised applications on Swarm, check out our section for [developing with Bee](/docs/develop/introduction). + + diff --git a/docs/bee/installation/quick-start.md b/docs/bee/installation/quick-start.md deleted file mode 100644 index 991f4412f..000000000 --- a/docs/bee/installation/quick-start.md +++ /dev/null @@ -1,74 +0,0 @@ ---- -title: Quick Start -id: quick-start ---- - -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; - -Bee is a versatile piece of software that caters to a diverse array of use cases. It can be run in several different modes which each offer different features which are best suited for different users. There are three main categories of nodes: full nodes, light nodes, and ultra-light nodes. - -### Comparison of Node Types -| Feature | Full Node | Light Node |Ultra-light Node| -|--------------|-----------|------------|------------| -| Downloading | ✅ | ✅ |✅ | | -| Uploading | ✅ | ✅ | ❌ | -| Can exceed free download limits by paying xBZZ | ✅ | ✅ |❌ -|Sharing disk space with network|✅| ❌ |❌| -|Storage incentives|✅| ❌ |❌| -|SWAP incentives|✅| ✅ |❌| -|PSS messaging|✅| ✅ |✅ | -|Gnosis Chain Connection|✅| ✅ |❌ | - - -:::info -The Swarm network includes two incentives protocols which each give Bee nodes incentives to participate in maintaining the network in a healthy way. - -**Storage incentives:** - - By participating in the storage incentives protocol, full nodes which store and share data chunks with the network have a chance to earn xBZZ. Staked xBZZ is required to earn storage incentives. Learn more in the [staking section](/docs/bee/working-with-bee/staking). 
- -**SWAP incentives:** - - The SWAP incentives protocol encourages full or light (but not ultra-light) nodes to share bandwidth with other nodes in exchange for payments from other nodes either [in-kind](https://www.investopedia.com/terms/p/paymentinkind.asp) or as a cheque to be settled at a future date. SWAP requires a chequebook contract to be set up on Gnosis Chain for each participating node. -::: - - - - - - -## Which type of node is the right choice? -Different node types best suit different use cases: - -### Interact with the Swarm network - -If you want to interact with the Bee ecosystem in a decentralised way, -but not earn xBZZ by storing or forwarding chunks, simply run a Bee -[light node](/docs/bee/working-with-bee/light-nodes) in the background on -your laptop or desktop computer. This will enable direct access to the -Swarm network from your web browser and other applications. - -If you only need to download a small amount of data from the Swarm network an [ultra light node](/docs/develop/access-the-swarm/ultra-light-nodes) could be the right choice for you. This will allow you to download a limited amount of data but does not support uploading data. - -To run a light or ultra-light node [install Bee](/docs/bee/installation/install) with the recommended configuration settings for your chosen node type. - -:::info -The [Swarm Desktop app](https://www.ethswarm.org/build/desktop) offers an easy way to automatically set up a light or ultra-light node and interact with it through a graphical user interface. -::: - -### Support the Network and Earn xBZZ by Running a Full Node - -Earn [xBZZ](/docs/bee/working-with-bee/cashing-out) and help keep Swarm strong by running your own **full node**. It's easy to set up on a VPS, colocation, or any home computer that's connected to the internet. - -To run a full node [install Bee](/docs/bee/installation/install) with the recommended configuration settings for a full node. 
- -:::info -Staking is not required to run a full node, but is necessary to earn storage incentives. An altruistic person may want to run a full node without putting up any stake, and in fact, could possibly earn enough xBZZ from bandwidth (swap/cheque) compensation to be able to stake at some point in the future. Learn more in the [staking section](/docs/bee/working-with-bee/staking) -::: -### Run Your Own Hive of Nodes - -Take it to the next level by keeping a whole hive of Bees! We provide -tooling and monitoring to help you manage large deployments of -multiple Bee nodes: [Bee Hives](/docs/bee/installation/hive). - diff --git a/docs/bee/installation/set-target-neighborhood.md b/docs/bee/installation/set-target-neighborhood.md new file mode 100644 index 000000000..8a115a748 --- /dev/null +++ b/docs/bee/installation/set-target-neighborhood.md @@ -0,0 +1,38 @@ +--- +title: Set Target Neighborhood +id: set-target-neighborhood +--- + + +### Set Target Neighborhood + +In older versions of Bee, [neighborhood](/docs/concepts/DISC/neighborhoods) assignment was random by default. However, we can maximize a node's chances of winning xBZZ and also strengthen the resiliency of the network by strategically assigning neighborhoods to new nodes (see the [staking section](/docs/bee/working-with-bee/staking) for more details). + +Therefore the default Bee configuration now includes the `neighborhood-suggester` option, which is set by default to use the Swarmscan neighborhood suggester (`https://api.swarmscan.io/v1/network/neighborhoods/suggestion`). An alternative suggester URL can be used as long as it returns a JSON object in the same format `{"neighborhood":"101000110101"}`; however, only the Swarmscan suggester is officially recommended. + + +:::info +The Swarmscan neighborhood selector will return the least populated neighborhood (or its least populated sub-neighborhood in case the sub-neighborhoods are imbalanced). 
Furthermore, the suggester will temporarily de-prioritize previously suggested neighborhoods based on the assumption that a new node is being created in each suggested neighborhood, so that multiple nodes do not simultaneously attempt to join the same neighborhood. +::: + +#### Setting Neighborhood Manually + +It's recommended to use the default `neighborhood-suggester` configuration for choosing your node's neighborhood; however, you may also set your node's neighborhood manually using the `target-neighborhood` option. + +To use this option, it's first necessary to identify potential target neighborhoods. A convenient tool for finding underpopulated neighborhoods is available at the [Swarmscan website](https://swarmscan.io/neighborhoods). This tool provides the leading binary bits of target neighborhoods in order of least populated to most. Simply copy the leading bits from one of the least populated neighborhoods (for example, `0010100001`) and use it to set `target-neighborhood`. After doing so, an overlay address within that neighborhood will be generated when starting Bee for the first time. 
+
+```yaml
+## bee.yaml
+target-neighborhood: "0010100001"
+```
+
+There is also a [Swarmscan API endpoint](https://api.swarmscan.io/#tag/Network/paths/~1v1~1network~1neighborhoods~1suggestion/get) which you can use to get a suggested neighborhood programmatically:
+
+```bash
+curl https://api.swarmscan.io/v1/network/neighborhoods/suggestion
+```
+A suggested neighborhood will be returned:
+
+```json
+{"neighborhood":"1111110101"}
+```
diff --git a/docs/bee/installation/shell-script.md b/docs/bee/installation/shell-script.md
new file mode 100644
index 000000000..25f742f9b
--- /dev/null
+++ b/docs/bee/installation/shell-script.md
@@ -0,0 +1,659 @@
+---
+title: Shell Script Install
+id: shell-script-install
+---
+
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Swarm Shell Script Installation Guide
+
+The following guide will get you started running a Bee node on Swarm using [the official shell script provided by Swarm](https://github.com/ethersphere/bee/blob/master/install.sh), which automatically detects your system and installs the correct version of Bee. This installation method is an excellent choice if you're looking for a minimal and flexible Bee node installation.
+
+:::warning
+Note that we bind the Bee API (port 1633 by default) to 127.0.0.1 (localhost), since we do not want to expose our Bee API endpoint to the public internet, as that would allow anyone to control our node. Make sure you do the same, and it's also recommended to use a firewall to restrict access to your node(s).
+:::
+
+:::info
+This guide uses command line flags in the node startup commands, such as `--blockchain-rpc-endpoint`; however, there are [several other methods available for configuring options](/docs/bee/working-with-bee/configuration).
+:::
+
+:::info
+**Bee Modes**
+
+Bee nodes can be run in multiple modes with different functionalities. 
To run a node in full mode, both `--full-node` and `--swap-enable` must be set to `true`. To run a light node (uploads and downloads only), set `--full-node` to `false` and `--swap-enable` to `true`. To run in ultra-light mode (free tier downloads only), set both `--full-node` and `--swap-enable` to `false`.
+
+For more information on the different functionalities of each mode, as well as their different system requirements, refer to the [Getting Started guide](/docs/bee/installation/getting-started).
+:::
+
+
+## Install and Start Your Node
+Below is a step-by-step guide for installing and setting up your Bee node using the shell script installation method.
+
+### Run Shell Script
+
+
+Run the install shell script using either `curl` or `wget`:
+
+:::caution
+In the example below, the version is specified using `TAG=v2.4.0`. Make sure that you [check if there is a newer tagged version of Bee](https://github.com/ethersphere/bee/tags), and if so, modify the commands below to use the most recent tag so that you have the latest version of Bee.
+:::
+
+:::info
+Note that while this shell script supports many commonly used Unix-like systems, it is not a universal installer. The architectures it supports include:
+
+**1. Linux:**
+- `linux-386` (32-bit x86)
+- `linux-amd64` (64-bit x86)
+- `linux-arm64` (64-bit ARM)
+- `linux-armv6` (32-bit ARM v6)
+
+**2. macOS (Darwin):**
+- `darwin-arm64` (Apple Silicon, M1/M2/M3)
+- `darwin-amd64` (Intel-based Mac)
+
+This means the script should work on most modern Linux distributions and macOS versions that match the supported architectures, but not on Windows. However, you may consider using [WSL](https://learn.microsoft.com/en-us/windows/wsl/install) to run Linux on Windows as an alternative.
+:::
+
+:::caution
+You may need to install [`curl`](https://curl.se/) or [`wget`](https://www.gnu.org/software/wget/) if your system doesn't have one of them pre-installed and the shell script command fails to run. 
+:::
+
+
+**curl**
+
+```bash
+curl -s https://raw.githubusercontent.com/ethersphere/bee/master/install.sh | TAG=v2.4.0 bash
+```
+
+**wget**
+
+```bash
+wget -q -O - https://raw.githubusercontent.com/ethersphere/bee/master/install.sh | TAG=v2.4.0 bash
+```
+
+
+Let's check that the script ran properly:
+
+```bash
+bee
+```
+
+If the script ran without any problems you should see this:
+
+```bash
+Ethereum Swarm Bee
+
+Usage:
+  bee [command]
+
+Available Commands:
+  start       Start a Swarm node
+  dev         Start a Swarm node in development mode
+  init        Initialise a Swarm node
+  deploy      Deploy and fund the chequebook contract
+  version     Print version number
+  db          Perform basic DB related operations
+  split       Split a file into chunks
+  printconfig Print default or provided configuration in yaml format
+  help        Help about any command
+  completion  Generate the autocompletion script for the specified shell
+
+Flags:
+      --config string   config file (default is $HOME/.bee.yaml)
+  -h, --help            help for bee
+
+Use "bee [command] --help" for more information about a command.
+```
+
+
+### Node Startup Commands
+
+Now let's start the node for the first time using one of the commands below. Make sure to pick a [strong password](https://xkcd.com/936/) of your own. Startup commands are provided for each of the three Bee node types.
+
+
+
+For the full node, we have `--full-node` and `--swap-enable` both enabled, and we've used `--blockchain-rpc-endpoint` to set our RPC endpoint to `https://xdai.fairdatasociety.org`. Your RPC endpoint may differ depending on your setup.
+
+```bash
+bee start \
+    --password flummoxedgranitecarrot \
+    --full-node \
+    --swap-enable \
+    --api-addr 127.0.0.1:1633 \
+    --blockchain-rpc-endpoint https://xdai.fairdatasociety.org
+```
+
+
+For the light node, we have removed `--full-node`; the rest remains the same as for the full node. 
+
+```bash
+bee start \
+    --password flummoxedgranitecarrot \
+    --swap-enable \
+    --api-addr 127.0.0.1:1633 \
+    --blockchain-rpc-endpoint https://xdai.fairdatasociety.org
+```
+
+
+For the ultra-light node, we simply omit `--full-node`, `--swap-enable`, and `--blockchain-rpc-endpoint`, since all three default to disabled.
+
+```bash
+bee start \
+    --password flummoxedgranitecarrot \
+    --api-addr 127.0.0.1:1633
+```
+
+
+
+
+:::info
+
+Command explained:
+
+1. **`bee start`**: This is the command to start the Bee node.
+
+2. **`--password flummoxedgranitecarrot`**: The password to decrypt the private key associated with the node. Replace "flummoxedgranitecarrot" with your actual password.
+
+3. **`--full-node`**: This option enables the node to run in full mode, sharing its disk with the network and becoming eligible for staking.
+
+4. **`--swap-enable`**: This flag enables SWAP, the bandwidth incentives scheme for Swarm. It will initiate a transaction to set up the SWAP chequebook on Gnosis Chain (required for light and full nodes).
+
+5. **`--api-addr 127.0.0.1:1633`**: Specifies that the Bee API will be accessible locally only via `127.0.0.1` on port `1633` and not accessible to the public.
+
+6. **`--blockchain-rpc-endpoint https://xdai.fairdatasociety.org`**: Sets the RPC endpoint for interacting with the Gnosis blockchain (required for light and full nodes).
+:::
+
+### Example Startup Output
+
+
+
+Here you can see that the node has started up successfully, but it still needs to be funded with xDAI and xBZZ (xDAI for Gnosis Chain transactions and xBZZ for uploads/downloads and staking). Continue to the next section for funding instructions.
+
+```bash
+Welcome to Swarm.... Bzzz Bzzzz Bzzzz
+                \     /
+            \    o ^ o    /
+              \ (     ) /
+   ____________(%%%%%%%)____________
+  (     /   /  )%%%%%%%(  \   \     )
+  (___/___/__/           \__\___\___)
+     (     /  /(%%%%%%%)\  \     )
+      (__/___/ (%%%%%%%) \___\__)
+              /(       )\
+            /   (%%%%%)   \
+                 (%%%)
+                   !
+ +DISCLAIMER: +This software is provided to you "as is", use at your own risk and without warranties of any kind. +It is your responsibility to read and understand how Swarm works and the implications of running this software. +The usage of Bee involves various risks, including, but not limited to: +damage to hardware or loss of funds associated with the Ethereum account connected to your node. +No developers or entity involved will be liable for any claims and damages associated with your use, +inability to use, or your interaction with other nodes or the software. + +version: 2.2.0-06a0aca7 - planned to be supported until 11 December 2024, please follow https://ethswarm.org/ + +"time"="2024-09-24 18:15:34.383102" "level"="info" "logger"="node" "msg"="bee version" "version"="2.2.0-06a0aca7" +"time"="2024-09-24 18:15:34.428546" "level"="info" "logger"="node" "msg"="swarm public key" "public_key"="0373fe2ab33ab836635fc35864cf708fa0f4a775c0cf76ca851551e7787b58d040" +"time"="2024-09-24 18:15:34.520686" "level"="info" "logger"="node" "msg"="pss public key" "public_key"="03a341032724f1f9bb04f1d9b22607db485cccd74174331c701f3a6957d94d95c1" +"time"="2024-09-24 18:15:34.520716" "level"="info" "logger"="node" "msg"="using ethereum address" "address"="0x1A801dd3ec955E905ca424a85C3423599bfb0E66" +"time"="2024-09-24 18:15:34.533789" "level"="info" "logger"="node" "msg"="fetching target neighborhood from suggester" "url"="https://api.swarmscan.io/v1/network/neighborhoods/suggestion" +"time"="2024-09-24 18:15:36.773501" "level"="info" "logger"="node" "msg"="mining a new overlay address to target the selected neighborhood" "target"="00100010110" +"time"="2024-09-24 18:15:36.776550" "level"="info" "logger"="node" "msg"="using overlay address" "address"="22d502d022de0f8e9d477bc61144d0d842d9d82b8241568c6fe4e41f0b466615" +"time"="2024-09-24 18:15:36.776576" "level"="info" "logger"="node" "msg"="starting with an enabled chain backend" +"time"="2024-09-24 18:15:37.388997" 
"level"="info" "logger"="node" "msg"="connected to blockchain backend" "version"="erigon/2.60.7/linux-amd64/go1.21.5" +"time"="2024-09-24 18:15:37.577840" "level"="info" "logger"="node" "msg"="using chain with network network" "chain_id"=100 "network_id"=1 +"time"="2024-09-24 18:15:37.593747" "level"="info" "logger"="node" "msg"="starting debug & api server" "address"="127.0.0.1:1633" +"time"="2024-09-24 18:15:37.969782" "level"="info" "logger"="node" "msg"="using default factory address" "chain_id"=100 "factory_address"="0xC2d5A532cf69AA9A1378737D8ccDEF884B6E7420" +"time"="2024-09-24 18:15:38.160249" "level"="info" "logger"="node/chequebook" "msg"="no chequebook found, deploying new one." +"time"="2024-09-24 18:15:38.728534" "level"="warning" "logger"="node/chequebook" "msg"="cannot continue until there is at least min xDAI (for Gas) available on address" "min_amount"="0.0003750000017" "address"="0x1A801dd3ec955E905ca424a85C3423599bfb0E66" +``` + + + + + +Here you can see that the node has started up successfully, but our node still needs to be funded with xDAI and xBZZ (xDAI for Gnosis Chain transactions and xBZZ for uploads/downloads). Continue to the next section for funding instructions. + +```bash +Welcome to Swarm.... Bzzz Bzzzz Bzzzz + \ / + \ o ^ o / + \ ( ) / + ____________(%%%%%%%)____________ + ( / / )%%%%%%%( \ \ ) + (___/___/__/ \__\___\___) + ( / /(%%%%%%%)\ \ ) + (__/___/ (%%%%%%%) \___\__) + /( )\ + / (%%%%%) \ + (%%%) + ! + +DISCLAIMER: +This software is provided to you "as is", use at your own risk and without warranties of any kind. +It is your responsibility to read and understand how Swarm works and the implications of running this software. +The usage of Bee involves various risks, including, but not limited to: +damage to hardware or loss of funds associated with the Ethereum account connected to your node. 
+No developers or entity involved will be liable for any claims and damages associated with your use, +inability to use, or your interaction with other nodes or the software. + +version: 2.2.0-06a0aca7 - planned to be supported until 11 December 2024, please follow https://ethswarm.org/ + +"time"="2025-01-24 12:57:21.274657" "level"="info" "logger"="node" "msg"="bee version" "version"="2.2.0-06a0aca7" +"time"="2025-01-24 12:57:21.274854" "level"="warning" "logger"="node" "msg"="your node is outdated, please check for the latest version" +"time"="2025-01-24 12:57:21.449064" "level"="info" "logger"="node" "msg"="swarm public key" "public_key"="03c356839a5570c758e812d0c248b135f0dc8ffa2b8404a97597e456f4fe5f7ee8" +"time"="2025-01-24 12:57:21.805033" "level"="info" "logger"="node" "msg"="pss public key" "public_key"="036c63b7c544ad401a5dbfb463f71cda265eec74c1d0d9cbc9db2abd6b3e4f11e9" +"time"="2025-01-24 12:57:21.805124" "level"="info" "logger"="node" "msg"="using ethereum address" "address"="0x5c39545873Bd663b0bB0716ED87dE0E399Aae419" +"time"="2025-01-24 12:57:21.815765" "level"="info" "logger"="node" "msg"="using overlay address" "address"="74539eab1dbd5c722bb8ba10cef55f715e38f298b706fb1866af49f4fd15d8d3" +"time"="2025-01-24 12:57:21.815855" "level"="info" "logger"="node" "msg"="starting with an enabled chain backend" +"time"="2025-01-24 12:57:21.861341" "level"="info" "logger"="node" "msg"="connected to blockchain backend" "version"="Nethermind/v1.30.1+2b75a75a/linux-x64/dotnet9.0.0" +"time"="2025-01-24 12:57:21.869117" "level"="info" "logger"="node" "msg"="using chain with network network" "chain_id"=100 "network_id"=1 +"time"="2025-01-24 12:57:21.880930" "level"="info" "logger"="node" "msg"="starting debug & api server" "address"="127.0.0.1:1633" +"time"="2025-01-24 12:57:21.897675" "level"="info" "logger"="node" "msg"="using default factory address" "chain_id"=100 "factory_address"="0xC2d5A532cf69AA9A1378737D8ccDEF884B6E7420" +"time"="2025-01-24 12:57:21.911463" 
"level"="info" "logger"="node/chequebook" "msg"="no chequebook found, deploying new one." +"time"="2025-01-24 12:57:21.938038" "level"="warning" "logger"="node/chequebook" "msg"="cannot continue until there is at least min xDAI (for Gas) available on address" "min_amount"="0.000250000002" "address"="0x5c39545873Bd663b0bB0716ED87dE0E399Aae419" +``` + + + + If you've started in ultra-light mode, you should see output which looks something like this, and you're done! Your node is now successfully running in ultra-light mode. You can now skip down to the final section on this page about logs and monitoring. + +```bash + root@noah-bee:~# bee start \ + --password flummoxedgranitecarrot \ + --api-addr 127.0.0.1:1633 + +Welcome to Swarm.... Bzzz Bzzzz Bzzzz + \ / + \ o ^ o / + \ ( ) / + ____________(%%%%%%%)____________ + ( / / )%%%%%%%( \ \ ) + (___/___/__/ \__\___\___) + ( / /(%%%%%%%)\ \ ) + (__/___/ (%%%%%%%) \___\__) + /( )\ + / (%%%%%) \ + (%%%) + ! + +DISCLAIMER: +This software is provided to you "as is", use at your own risk and without warranties of any kind. +It is your responsibility to read and understand how Swarm works and the implications of running this software. +The usage of Bee involves various risks, including, but not limited to: +damage to hardware or loss of funds associated with the Ethereum account connected to your node. +No developers or entity involved will be liable for any claims and damages associated with your use, +inability to use, or your interaction with other nodes or the software. 
+ +version: 2.2.0-06a0aca7 - planned to be supported until 11 December 2024, please follow https://ethswarm.org/ + +"time"="2025-01-24 12:51:06.981505" "level"="info" "logger"="node" "msg"="bee version" "version"="2.2.0-06a0aca7" +"time"="2025-01-24 12:51:06.981658" "level"="warning" "logger"="node" "msg"="your node is outdated, please check for the latest version" +"time"="2025-01-24 12:51:07.131555" "level"="info" "logger"="node" "msg"="swarm public key" "public_key"="03c356839a5570c758e812d0c248b135f0dc8ffa2b8404a97597e456f4fe5f7ee8" +"time"="2025-01-24 12:51:07.402847" "level"="info" "logger"="node" "msg"="pss public key" "public_key"="036c63b7c544ad401a5dbfb463f71cda265eec74c1d0d9cbc9db2abd6b3e4f11e9" +"time"="2025-01-24 12:51:07.402915" "level"="info" "logger"="node" "msg"="using ethereum address" "address"="0x5c39545873Bd663b0bB0716ED87dE0E399Aae419" +"time"="2025-01-24 12:51:07.416074" "level"="info" "logger"="node" "msg"="using overlay address" "address"="74539eab1dbd5c722bb8ba10cef55f715e38f298b706fb1866af49f4fd15d8d3" +"time"="2025-01-24 12:51:07.416149" "level"="info" "logger"="node" "msg"="starting with a disabled chain backend" +"time"="2025-01-24 12:51:07.416242" "level"="info" "logger"="node" "msg"="using chain with network network" "chain_id"=100 "network_id"=1 +"time"="2025-01-24 12:51:07.428047" "level"="info" "logger"="node" "msg"="starting debug & api server" "address"="127.0.0.1:1633" +"time"="2025-01-24 12:51:07.464425" "level"="info" "logger"="node" "msg"="using datadir" "path"="/root/.bee" +"time"="2025-01-24 12:51:07.486853" "level"="info" "logger"="migration-RefCountSizeInc" "msg"="starting migration of replacing chunkstore items to increase refCnt capacity" +"time"="2025-01-24 12:51:07.486921" "level"="info" "logger"="migration-RefCountSizeInc" "msg"="migration complete" +"time"="2025-01-24 12:51:07.489133" "level"="info" "logger"="node" "msg"="starting reserve repair tool, do not interrupt or kill the process..." 
+"time"="2025-01-24 12:51:07.489346" "level"="info" "logger"="node" "msg"="removed all bin index entries"
+"time"="2025-01-24 12:51:07.489430" "level"="info" "logger"="node" "msg"="removed all chunk bin items" "total_entries"=0
+"time"="2025-01-24 12:51:07.489482" "level"="info" "logger"="node" "msg"="counted all batch radius entries" "total_entries"=0
+"time"="2025-01-24 12:51:07.489520" "level"="info" "logger"="node" "msg"="parallel workers" "count"=2
+"time"="2025-01-24 12:51:07.489612" "level"="info" "logger"="node" "msg"="migrated all chunk entries" "new_size"=0 "missing_chunks"=0 "invalid_sharky_chunks"=0
+"time"="2025-01-24 12:51:07.489659" "level"="info" "logger"="migration-step-04" "msg"="starting sharky recovery"
+"time"="2025-01-24 12:51:07.514853" "level"="info" "logger"="migration-step-04" "msg"="finished sharky recovery"
+"time"="2025-01-24 12:51:07.515253" "level"="info" "logger"="migration-step-05" "msg"="start removing upload items"
+"time"="2025-01-24 12:51:07.515374" "level"="info" "logger"="migration-step-05" "msg"="finished removing upload items"
+"time"="2025-01-24 12:51:07.515434" "level"="info" "logger"="migration-step-06" "msg"="start adding stampHash to BatchRadiusItems, ChunkBinItems and StampIndexItems"
+"time"="2025-01-24 12:51:07.515571" "level"="info" "logger"="migration-step-06" "msg"="finished migrating items" "seen"=0 "migrated"=0
+"time"="2025-01-24 12:51:07.517270" "level"="info" "logger"="node" "msg"="starting in ultra-light mode"
+```
+
+
+
+## Fund and Stake
+
+Running a full node for the purpose of earning xBZZ by sharing disk space and participating in the redistribution game requires a minimum of 10 xBZZ and a small amount of xDAI (for initializing the chequebook contract and for paying for redistribution-related transactions).
+
+Running a light node requires only a small amount of xDAI to pay for initializing the chequebook contract, plus a smaller amount of xBZZ to pay for uploads and downloads. 
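Note that while these amounts are quoted in xBZZ, the Bee API denominates xBZZ amounts in PLUR, where 1 xBZZ equals 1e16 PLUR (the staking endpoint used later in this guide, for example, expects an amount in PLUR). A quick shell sketch of the conversion:

```bash
#!/usr/bin/env bash
# Convert an xBZZ amount to PLUR for use with Bee API endpoints
# (1 xBZZ = 10^16 PLUR).
xbzz=10
plur=$((xbzz * 10**16))
echo "$plur"
```

For 10 xBZZ this prints `100000000000000000`, the value used in the staking command later in this guide.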
+
+### Fund node
+
+Check the logs from the previous step. Look for the line which says:
+
+```
+"time"="2024-09-24 18:15:34.520716" "level"="info" "logger"="node" "msg"="using ethereum address" "address"="0x1A801dd3ec955E905ca424a85C3423599bfb0E66"
+```
+That address is your node's address on Gnosis Chain, which needs to be funded with xDAI and xBZZ. Copy it and save it for the next step.
+
+You can also use a `curl` command with the Bee API to find your node's address:
+
+```bash
+curl localhost:1633/addresses | jq
+```
+
+```json
+{
+  "overlay": "b1978be389998e8c8596ef3c3a54214e2d4db764898ec17ec1ad5f19cdf7cc59",
+  "underlay": [
+    "/ip4/127.0.0.1/tcp/1634/p2p/QmQHgcpizgoybDtrQXCWRSGdTP526ufeMFn1PyeGd1zMEZ",
+    "/ip4/172.25.128.69/tcp/1634/p2p/QmQHgcpizgoybDtrQXCWRSGdTP526ufeMFn1PyeGd1zMEZ",
+    "/ip6/::1/tcp/1634/p2p/QmQHgcpizgoybDtrQXCWRSGdTP526ufeMFn1PyeGd1zMEZ"
+  ],
+  "ethereum": "0xd22cc790e2aef341827e1e49cc631d2a16898cd9",
+  "publicKey": "023b26ce8b78ed8cdb07f3af3d284c95bee5e038e7c5d0c397b8a5e33424f5d790",
+  "pssPublicKey": "039ceb9c1f0afedf79991d86d89ccf4e96511cf656b43971dc3e878173f7462487"
+}
+```
+
+The `ethereum` address is your node's Gnosis Chain address (while Gnosis is a distinct chain from Ethereum, it uses the same address format, and the Bee API sometimes refers to it as such).
+
+xDAI is widely available from many different centralized and decentralized exchanges; just make sure that you are getting xDAI on Gnosis Chain, and not DAI on some other chain. See [this page](https://www.ethswarm.org/get-bzz) for a list of resources for getting xBZZ (again, make certain that you are getting the Gnosis Chain version, and not BZZ on Ethereum).
+
+After acquiring some xDAI and some xBZZ, send them to the address you copied above.
+
+***How Much to Send?***
+
+Only a very small amount of xDAI is needed to get started; 0.1 xDAI is more than enough. 
+ +You can start with just 2 or 3 xBZZ for uploading small amounts of data, but you will need at least 10 xBZZ if you plan on staking. + + +### Initialize full node + +After sending the required tokens of ~0.1 xDAI and 10 xBZZ (or a smaller amount of xBZZ if you don't plan on staking) to your node's Gnosis Chain address, close the bee process in your terminal (`Ctrl + C`). Then start it again with the same command: + +```bash +bee start \ + --password flummoxedgranitecarrot \ + --full-node \ + --swap-enable \ + --api-addr 127.0.0.1:1633 \ + --blockchain-rpc-endpoint https://xdai.fairdatasociety.org +``` +After funding and restarting your node, the logs printed to the terminal should look something like this: + +```bash +Welcome to Swarm.... Bzzz Bzzzz Bzzzz + \ / + \ o ^ o / + \ ( ) / + ____________(%%%%%%%)____________ + ( / / )%%%%%%%( \ \ ) + (___/___/__/ \__\___\___) + ( / /(%%%%%%%)\ \ ) + (__/___/ (%%%%%%%) \___\__) + /( )\ + / (%%%%%) \ + (%%%) + ! + +DISCLAIMER: +This software is provided to you "as is", use at your own risk and without warranties of any kind. +It is your responsibility to read and understand how Swarm works and the implications of running this software. +The usage of Bee involves various risks, including, but not limited to: +damage to hardware or loss of funds associated with the Ethereum account connected to your node. +No developers or entity involved will be liable for any claims and damages associated with your use, +inability to use, or your interaction with other nodes or the software. 
+ +version: 2.2.0-06a0aca7 - planned to be supported until 11 December 2024, please follow https://ethswarm.org/ + +"time"="2024-09-24 18:57:16.710417" "level"="info" "logger"="node" "msg"="bee version" "version"="2.2.0-06a0aca7" +"time"="2024-09-24 18:57:16.760154" "level"="info" "logger"="node" "msg"="swarm public key" "public_key"="0373fe2ab33ab836635fc35864cf708fa0f4a775c0cf76ca851551e7787b58d040" +"time"="2024-09-24 18:57:16.854594" "level"="info" "logger"="node" "msg"="pss public key" "public_key"="03a341032724f1f9bb04f1d9b22607db485cccd74174331c701f3a6957d94d95c1" +"time"="2024-09-24 18:57:16.854651" "level"="info" "logger"="node" "msg"="using ethereum address" "address"="0x1A801dd3ec955E905ca424a85C3423599bfb0E66" +"time"="2024-09-24 18:57:16.866697" "level"="info" "logger"="node" "msg"="using overlay address" "address"="22d502d022de0f8e9d477bc61144d0d842d9d82b8241568c6fe4e41f0b466615" +"time"="2024-09-24 18:57:16.866730" "level"="info" "logger"="node" "msg"="starting with an enabled chain backend" +"time"="2024-09-24 18:57:17.485408" "level"="info" "logger"="node" "msg"="connected to blockchain backend" "version"="erigon/2.60.1/linux-amd64/go1.21.5" +"time"="2024-09-24 18:57:17.672282" "level"="info" "logger"="node" "msg"="using chain with network network" "chain_id"=100 "network_id"=1 +"time"="2024-09-24 18:57:17.686479" "level"="info" "logger"="node" "msg"="starting debug & api server" "address"="127.0.0.1:1633" +"time"="2024-09-24 18:57:18.065029" "level"="info" "logger"="node" "msg"="using default factory address" "chain_id"=100 "factory_address"="0xC2d5A532cf69AA9A1378737D8ccDEF884B6E7420" +"time"="2024-09-24 18:57:18.252410" "level"="info" "logger"="node/chequebook" "msg"="no chequebook found, deploying new one." 
+"time"="2024-09-24 18:57:19.576100" "level"="info" "logger"="node/chequebook" "msg"="deploying new chequebook" "tx"="0xf7bc9c5b04e96954c7f70cecfe717cad9cdc5d64b6ec080b2cbe712166ce262a" +"time"="2024-09-24 18:57:27.619377" "level"="info" "logger"="node/transaction" "msg"="pending transaction confirmed" "sender_address"="0x1A801dd3ec955E905ca424a85C3423599bfb0E66" "tx"="0xf7bc9c5b04e96954c7f70cecfe717cad9cdc5d64b6ec080b2cbe712166ce262a" +"time"="2024-09-24 18:57:27.619437" "level"="info" "logger"="node/chequebook" "msg"="chequebook deployed" "chequebook_address"="0x261a07a63dC1e7200d51106155C8929b432181fb" +``` + +Here we can see that after our node has been funded, it was able to issue the transactions for deploying the chequebook contract, which is a prerequisite for running a staking node. + +Next your node will begin to sync [postage stamp data](/docs/develop/access-the-swarm/buy-a-stamp-batch), which can take ~5 to 10 minutes. You will see this log message while your node is syncing postage stamp data: + +```bash +"time"="2024-09-24 22:21:19.664897" "level"="info" "logger"="node" "msg"="waiting to sync postage contract data, this may take a while... 
more info available in Debug loglevel"
+```
+
+After your node finishes syncing postage stamp data, it will start in full node mode and begin to sync all the chunks it is responsible for storing as a full node:
+
+
+```bash
+"time"="2024-09-24 22:30:19.154067" "level"="info" "logger"="node" "msg"="starting in full mode"
+"time"="2024-09-24 22:30:19.155320" "level"="info" "logger"="node/multiresolver" "msg"="name resolver: no name resolution service provided"
+"time"="2024-09-24 22:30:19.341032" "level"="info" "logger"="node/storageincentives" "msg"="entered new phase" "phase"="reveal" "round"=237974 "block"=36172090
+"time"="2024-09-24 22:30:33.610825" "level"="info" "logger"="node/kademlia" "msg"="disconnected peer" "peer_address"="6ceb30c7afc11716f866d19b7eeda9836757031ed056b61961e949f6e705b49e"
+```
+
+This process can take a while, up to several hours depending on your system and network. You can check your node's progress with the Bee API's `/status` endpoint:
+
+:::info
+The [`jq` utility](https://jqlang.github.io/jq/) is used here to automatically format the output from the Bee API. It can help make API output more readable. You may need to install it; the exact steps will depend on your Linux distro and package manager of choice. Also, feel free to remove the `| jq` from the command, as it is only a convenience, not a requirement. 
+:::
+
+```bash
+curl -s http://localhost:1633/status | jq
+```
+
+```json
+{
+  "overlay": "22dc155fe072e131449ec7ea2f77de16f4735f06257ebaa5daf2fdcf14267fd9",
+  "proximity": 256,
+  "beeMode": "full",
+  "reserveSize": 686217,
+  "reserveSizeWithinRadius": 321888,
+  "pullsyncRate": 497.8747754074074,
+  "storageRadius": 11,
+  "connectedPeers": 148,
+  "neighborhoodSize": 4,
+  "batchCommitment": 74510761984,
+  "isReachable": false,
+  "lastSyncedBlock": 36172390
+}
+```
+We can see that our node has not yet finished syncing chunks, since the `pullsyncRate` is around 497 chunks per second. Once the node is fully synced, this value will go to zero. However, we do not need to wait until our node is fully synced in order to stake, so we can move immediately to the next step.
+
+
+### Stake node
+
+Now we're ready to begin staking. We will slightly modify our startup command so that the node now runs in the background instead of taking control of our terminal:
+
+```bash
+nohup bee start \
+    --password flummoxedgranitecarrot \
+    --full-node \
+    --swap-enable \
+    --api-addr 127.0.0.1:1633 \
+    --blockchain-rpc-endpoint https://xdai.fairdatasociety.org > bee.log 2>&1 &
+```
+
+:::info
+1. **`nohup`**: This ensures that the `bee start` process will continue even after the terminal is closed.
+
+2. **`> bee.log 2>&1`**: Redirects both standard output and standard error to a log file called `bee.log`.
+
+3. **`&`**: This sends the process to the background, allowing the terminal to be used for other commands while the Bee node continues running.
+:::
+
+Let's check the Bee API to confirm the node is running:
+
+```
+curl localhost:1633
+```
+If the node is running we should see:
+```
+Ethereum Swarm Bee
+```
+
+Now with our node properly running in the background, we're ready to stake our node. 
You can use the following command to stake 10 xBZZ:
+
+```bash
+curl -XPOST localhost:1633/stake/100000000000000000
+```
+
+If the staking transaction is successful, a `txHash` will be returned:
+
+```json
+{"txHash":"0x258d64720fe7abade794f14ef3261534ff823ef3e2e0011c431c31aea75c2dd5"}
+```
+
+We can also confirm that our node has been staked with the `/stake` endpoint:
+
+```bash
+curl localhost:1633/stake
+```
+
+The result is displayed in PLUR units (1 PLUR is equal to 1e-16 xBZZ). If you have staked the minimum 10 xBZZ, you should see the output below:
+
+```json
+{"stakedAmount":"100000000000000000"}
+```
+
+Congratulations! You have now installed your Bee node and are connected to the network as a full staking node. Your node will now sync chunks from the network, and once it is fully synced, it will be eligible to earn staking rewards.
+
+### Set Target Neighborhood
+
+When installing your Bee node it will automatically be assigned a neighborhood. However, when running a full node with staking there are benefits to periodically updating your node's neighborhood. Learn more about why and how to set your node's target neighborhood [here](/docs/bee/installation/set-target-neighborhood).
+
+
+### Logs and monitoring
+
+:::info
+You can learn more about Bee logs [here](/docs/bee/working-with-bee/logs-and-files).
+:::
+
+With our previously modified command, our Bee node will now be running in the background and the logs will be written to the `bee.log` file. 
To review our node's logs we can simply view the file contents: + +```bash +cat bee.log +``` + +The file will continue to update with all the latest logs as they are output: + +```bash +"time"="2024-09-27 18:05:34.096641" "level"="info" "logger"="node/kademlia" "msg"="connected to peer" "peer_address"="03b48e678938d63c0761c74a805fbe0446684c9c417330c2bec600ecfd6c492f" "proximity_order"=8 +"time"="2024-09-27 18:05:35.168425" "level"="info" "logger"="node/kademlia" "msg"="connected to peer" "peer_address"="0e9388fff473a9c74535337c32cc74d8f921514d2635d0c4a49c6e8022f5594e" "proximity_order"=4 +"time"="2024-09-27 18:05:35.532723" "level"="info" "logger"="node/kademlia" "msg"="disconnected peer" "peer_address"="3c195cd8882ee537d170e92d959ad6bd72a76a50097a671c72646e83b45a1832" +``` + +There are many different ways to monitor your Bee node's process, but one convenient way to do so is the [bashtop command line tool](https://github.com/aristocratos/bashtop). The method of [installation](https://github.com/aristocratos/bashtop?tab=readme-ov-file#installation) will vary depending on your system. + +After installation, we can launch it with the `bashtop` command: + +```bash +bashtop +``` + +![](/img/bashtop_01.png) + +We can use the `f` key to filter for our Bee node's specific process by searching for the `bee` keyword (use the arrow keys to navigate and `enter` to select). From here we can view info about our node's process, or shut it down using the `t` key (for "terminate"). 
+
+![](/img/bashtop_02.png)
+
+**Checking the Node's status with the Bee API**
+
+To check your node's status as a staking node, we can use the `/redistributionstate` endpoint:
+
+```bash
+curl -s http://localhost:1633/redistributionstate | jq
+```
+
+Below is the output for a node that has been running for several days:
+
+```json
+{
+  "minimumGasFunds": "11080889201250000",
+  "hasSufficientFunds": true,
+  "isFrozen": false,
+  "isFullySynced": true,
+  "phase": "claim",
+  "round": 212859,
+  "lastWonRound": 207391,
+  "lastPlayedRound": 210941,
+  "lastFrozenRound": 210942,
+  "lastSelectedRound": 212553,
+  "lastSampleDuration": 491687776653,
+  "block": 32354719,
+  "reward": "1804537795127017472",
+  "fees": "592679945236926714",
+  "isHealthy": true
+}
+```
+
+For a complete breakdown of this output, check out [this section in the Bee docs](/docs/bee/working-with-bee/bee-api#redistributionstate).
+
+You can read about other important endpoints for monitoring your Bee node in the [official Bee docs](/docs/bee/working-with-bee/bee-api), and you can find complete information about all available endpoints in [the API reference docs](/api/).
+
+
+## Back Up Keys
+
+Once your node is up and running, make sure to [back up your keys](/docs/bee/working-with-bee/backups).
+
+## Getting help
+
+The CLI has built-in documentation: running `bee` gives you an entry point, and running `bee start -h` or `bee start --help` shows how to configure your Bee node via command line arguments.
+
+You may also check out the [configuration guide](/docs/bee/working-with-bee/configuration).
+
+
+## Next Steps to Consider
+
+
+### Access the Swarm
+If you'd like to start uploading or downloading files to Swarm, [start here](/docs/develop/access-the-swarm/introduction). 
+
+### Explore the API
+The [Bee API](/docs/bee/working-with-bee/bee-api) is the primary method for interacting with Bee and getting information about Bee. After installing Bee and getting it up and running, it's a good idea to start getting familiar with the API.
+
+### Run a hive!
+If you would like to run a hive of many Bees, check out the [hive operators](/docs/bee/installation/hive) section for information on how to operate and monitor many Bees at once.
+
+### Start building DAPPs on Swarm
+If you would like to start building decentralised applications on Swarm, check out our section for [developing with Bee](/docs/develop/introduction).
diff --git a/docs/bee/working-with-bee/backups.md b/docs/bee/working-with-bee/backups.md
index 8a3487ba6..90f5c98ad 100644
--- a/docs/bee/working-with-bee/backups.md
+++ b/docs/bee/working-with-bee/backups.md
@@ -3,33 +3,54 @@ title: Backups
id: backups
---

-## Files
+## Bee Files

A full Bee node backup includes the `kademlia-metrics`, `keys`, `localstore`, `password`, `stamperstore`, and `statestore` files.

The node should be stopped before taking a backup and not restarted until restoring the node from the backup to prevent the node from getting out of sync with the network.

-A node's data including keys and stamp data are found in the data directory specified in its [configuration](configuration).
-
-Key data in backup files allows access to Bee node's Gnosis account. If lost or stolen it could lead to the loss of all assets in that account. Furthermore the `stamperstore` contains postage stamp data, and postage stamps will not be recoverable if it is lost.
+Key data from the `keys` directory allows access to the Bee node's Gnosis account (provided that you have also made sure to back up the password for your keys). If your keys and password are lost or stolen it could lead to the loss of all assets in that account.
Furthermore, the `stamperstore` contains postage stamp data, and postage stamps will not be recoverable if it is lost.

:::info
Don't forget - it's not a backup until you're sure the backup files work! Make sure to test restoring from backup files to prevent loss of assets due to data loss or corruption.
:::

-### Ubuntu / Debian / Raspbian / CentOS package managers
-
-For Linux installations from package managers _yum_ or _apt_, the data directory is located at:
+## Package Manager Default Service File Locations

-```bash
-/var/lib/bee
-```
+The default file locations for Bee nodes installed to run as a service through a package manager can all be found on GitHub in the Bee repo, in the respective directories for each service manager within the [packaging directory](https://github.com/ethersphere/bee/tree/master/packaging).

-It may also be useful to include the `bee.yaml` config file in a backup so that configuration can be easily restored. The default location of the config file is:
+The directory structure looks like this:

```bash
-/etc/bee
+tree -L 2
+.
+├── bee-get-addr
+├── bee.service
+├── bee.yaml
+├── deb
+│   ├── postinst
+│   ├── postrm
+│   ├── preinst
+│   └── prerm
+├── default
+├── docker
+│   ├── docker-compose.yml
+│   ├── env
+│   └── README.md
+├── homebrew-amd64
+│   ├── bee-get-addr
+│   └── bee.yaml
+├── homebrew-arm64
+│   ├── bee-get-addr
+│   └── bee.yaml
+├── rpm
+│   ├── post
+│   ├── postun
+│   ├── pre
+│   └── preun
+└── scoop
+    └── bee.yaml
```

-This guide uses the default package manager location for the data folder, make sure to change the commands to match your data folder's location if it in a different directory.
+

### Binary package install

@@ -51,9 +72,9 @@ docker cp bee_bee_1:/home/bee/.bee/ bee

### Data types

-The data directory contains three directories. Its default location depends on the node install method used.
+The data directory contains three directories.
Its default location depends on the node install method and startup method used.
+

-For shell script install the location is `/home//.bee` and for package manager installs it is `/var/lib/bee`. The directory structure is as follows:

```
├── kademlia-metrics
@@ -106,8 +127,11 @@ sudo cp -r /var/lib/bee/ backup

### Back-up your password

-Depending on your configuration, your password may not be located in the `/bee` data directory which was copied in the previous step. If it has been specified in an environment variable or in your [`bee.yaml` configuration file](/docs/bee/working-with-bee/configuration#default-data-and-config-directories), make sure to copy it and save it together with the rest of your backup files or write it down in a safe place.
+Depending on your [configuration](/docs/bee/working-with-bee/configuration) method, your password may be located in a number of different places. If you use a `.yaml` file for your configuration, the password might be set directly under the `password` option, or the location of your password file might be recorded by the `password-file` option. In either case, make sure to record the password or password file as a part of your backup.
+
+The same holds true for the other two configuration methods. If you use environment variables to specify your configuration options, your password will likely be specified in a `.env` file, which contains either the password itself in the `BEE_PASSWORD` variable or the location of your password file in the `BEE_PASSWORD_FILE` variable.
+
+The same again holds true for the command line flag method. Make sure you have the password you use with the `--password` command line flag or the password file specified by the `--password-file` flag saved in your backup.
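The pieces above (data directory, config file, and password file) can be gathered in one pass. Below is a minimal illustrative sketch, not an official script: the `BEE_DATA_DIR`, `BEE_CONFIG`, and `BACKUP_DIR` variables are our own, and the defaults assume a package-manager install, so adjust them for your system. Stop the node first (e.g. `sudo systemctl stop bee`) so the backup is consistent.

```bash
# Illustrative backup sketch (hypothetical variable names; package-manager
# default paths assumed). Stop the node before running this.
BEE_DATA_DIR="${BEE_DATA_DIR:-/var/lib/bee}"
BEE_CONFIG="${BEE_CONFIG:-/etc/bee/bee.yaml}"
BACKUP_DIR="${BACKUP_DIR:-$HOME/bee-backup-$(date +%Y-%m-%d)}"

mkdir -p "$BACKUP_DIR"

# Data directory: keys, localstore, stamperstore, statestore, password (if present)
if [ -d "$BEE_DATA_DIR" ]; then cp -r "$BEE_DATA_DIR" "$BACKUP_DIR/data"; fi

# Config file, which may contain or point to your password
if [ -f "$BEE_CONFIG" ]; then cp "$BEE_CONFIG" "$BACKUP_DIR/bee.yaml"; fi

echo "backup written to $BACKUP_DIR"
```

Remember to also store the password itself (or the password file) with the backup if it is not inside the copied locations, and restart the node afterwards.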
### Back-up blockchain keys only diff --git a/docs/bee/working-with-bee/configuration.md b/docs/bee/working-with-bee/configuration.md index 17a5ebcce..ddefbfe85 100644 --- a/docs/bee/working-with-bee/configuration.md +++ b/docs/bee/working-with-bee/configuration.md @@ -6,86 +6,20 @@ id: configuration import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; -## Default Data and Config Directories - -Depending on the operating system and startup method used, the default data and configuration directories for your node will differ. - -### Bee Service Default Directories - -When installed using a package manager, Bee is set up to run as a service with default data and configuration directories set up automatically during the installation. The examples below include default directories for Linux and macOS. You can find the complete details of default directories for different operating systems in the `bee.yaml` files included in the [packaging folder of the Bee repo](https://github.com/ethersphere/bee/tree/master/packaging). 
- - - - - -The default data folder and config file locations: - -```yaml -data-dir: /var/lib/bee -config: /etc/bee/bee.yaml -``` - - - - - -The default data folder and config file locations: - -```yaml -data-dir: /opt/homebrew/var/lib/swarm-bee -config: /opt/homebrew/etc/swarm-bee/bee.yaml -``` - - - - - -The default data folder and config file locations: - -```yaml -data-dir: /usr/local/var/lib/swarm-bee/ -config: /usr/local/etc/swarm-bee/bee.yaml -``` - - - - -### `bee start` Default Directories - -For all operating systems, the default data and config directories for the `bee start` startup method can be found using the `bee printconfig` command: - -This will print out a complete default Bee node configuration file to the terminal, the `config` and `data-dir` values show the default directories for your system: - -```yaml -config: /root/.bee.yaml -data-dir: /root/.bee -``` - -:::info -The default directories for your system may differ from the example above, so make sure to run the `bee printconfig` command to view the default directories for your system. -::: - ## Configuration Methods and Priority There are three methods of configuration which each have different priority levels. Configuration is processed in the following ascending order of preference: -1. Command Line Arguments +1. Command Line Flags 2. Environment Variables 3. YAML Configuration File :::info -All three methods may be used when running Bee using `bee start`. +All three methods may be used when running Bee using `bee start`. However when Bee is started as a service with tools like `systemctl` or `brew services`, only the YAML configuration file is supported by default. 
::: - ### Command Line Arguments Run `bee start --help` in your Terminal to list the available command line arguments as follows: @@ -156,7 +90,7 @@ Flags: --withdrawal-addresses-whitelist strings withdrawal target addresses Global Flags: - --config string config file (default is $HOME/.bee.yaml) + --config string config file (default is $HOME/.bee.yaml) ``` ### Environment variables @@ -304,10 +238,10 @@ withdrawal-addresses-whitelist: [] ``` :::info -Note that depending on whether Bee is started directly with the `bee start` command or started as a service with `systemctl` / `brew services`, the default directory for the YAML configuration file (shown in the `config` option above) [will be different](/docs/bee/working-with-bee/configuration). +Note that depending on whether Bee is started directly with the `bee start` command or started as a service with `systemctl` / `brew services`, the default directory for the YAML configuration file (shown in the `config` option above) [will be different](/docs/bee/working-with-bee/configuration). ::: -To change your node's configuration, simply edit the YAML file and restart Bee: +To change your node's configuration, simply edit the YAML file and restart Bee: -## Manually generating a config file for `bee start` +## Manually generating YAML config file for `bee start` -No YAML file is generated during installation in the default config directory used when Bee is started with `bee start`, so you must generate one if you wish to use a YAML file to specify your configuration options. To do this you can use the `bee printconfig` command to print out a set of default options and save it to a new file in the default location: +No YAML file is generated during installation when using the [shell script install method](/docs/bee/installation/shell-script-install), so you must generate one if you wish to use a YAML file to specify your configuration options. 
To do this you can use the `bee printconfig` command to print out a set of default options and save it to a new file in the default location:

```bash
bee printconfig &> $HOME/.bee.yaml
```

+:::info
+Note that `bee printconfig` prints the default configuration for your node, not your current configuration with any changes you have made.
+:::
+
+Moreover, if you save your configuration file anywhere other than the default location, you must use the `--config` flag to tell `bee start` where to find it, or your node will ignore it. Using a YAML file can be a good option if you have changed many default options and want to have them cleanly organized in a single file that is used to specify options when running your node directly with `bee start`.
+
-## Restoring default config files for Bee service
+## Restoring the default YAML config file

You can find the default configurations for your system in the [packaging folder of the Bee repo](https://github.com/ethersphere/bee/tree/master/packaging). If your configuration file is missing you can simply copy the contents of the file into a new `bee.yaml` file in the default configuration directory shown in the `bee.yaml` file for your system.

+## Default Data and Config Directories
+
+Depending on the operating system and startup method used, the default data and configuration directories for your node will differ.
+
+### Bee Service Default Directories (Package Manager Install)
+
+When installed using a package manager, Bee is set up to run as a service with default data and configuration directories set up automatically during the installation. The examples below include default directories for Linux and macOS. You can find the complete details of default directories for different operating systems in the `bee.yaml` files included in the [packaging folder of the Bee repo](https://github.com/ethersphere/bee/tree/master/packaging).
+ + + + +The default data folder and config file locations: + +```yaml +data-dir: /var/lib/bee +config: /etc/bee/bee.yaml +``` + + + + + +The default data folder and config file locations: + +```yaml +data-dir: /opt/homebrew/var/lib/swarm-bee +config: /opt/homebrew/etc/swarm-bee/bee.yaml +``` + + + + + +The default data folder and config file locations: + +```yaml +data-dir: /usr/local/var/lib/swarm-bee/ +config: /usr/local/etc/swarm-bee/bee.yaml +``` + + + + +### `bee start` Default Directories + +For all operating systems, the default data and config directories for the `bee start` startup method can be found using the `bee printconfig` command: + +This will print out a complete default Bee node configuration file to the terminal, the `config` and `data-dir` values show the default directories for your system: + +```yaml +config: /root/.bee.yaml +data-dir: /root/.bee +``` + +:::info +The default directories for your system may differ from the example above, so make sure to run the `bee printconfig` command to view the default directories for your system. +::: + +## Set Bee Node Type + +You can set your node's mode of operation by modifying its configuration options. There are three node types: `full`, `light`, and `ultra-light`. If you're not sure which type of node is right for you, check out the [Getting Started guide](/docs/bee/installation/getting-started). + +There are three configuration options that must be configured to set your node type. These options are listed below in each of the supported formats (command line flags, environment variables, and yaml values.): + +1. `--full-node` / `BEE_FULL_NODE` / `full-node` +2. `--swap-enable` / `BEE_SWAP_ENABLE` / `swap-enable` +3. 
`--blockchain-rpc-endpoint` / `BEE_BLOCKCHAIN_RPC_ENDPOINT` / `blockchain-rpc-endpoint`
+
+A `password` option is also required for all modes, and can either be set directly as a configuration option, or supplied via a file by setting the `password-file` option to the path of your password file.
+
+:::info
+In the list above, we've provided each of the required configuration options in all three supported formats.
+
+Note that configuration options are processed in this order of priority, as mentioned above:
+
+1. Command Line Flags
+2. Environment Variables
+3. YAML Configuration File
+
+:::
+
+:::info
+In the examples below, the RPC endpoint is set to `https://xdai.fairdatasociety.org`. Your RPC endpoint may differ depending on whether you are running your own Gnosis Chain node or using a third-party provider. Free RPC providers are listed in the [Gnosis Chain docs](https://docs.gnosischain.com/node/), while commercial providers such as [Infura](https://www.infura.io/) offer more reliable options.
+::: + + + + + +### Full Node Configuration + +**Command Line Flags:** + +```bash +bee start \ + --password flummoxedgranitecarrot \ + --full-node \ + --swap-enable \ + --api-addr 127.0.0.1:1633 \ + --blockchain-rpc-endpoint https://xdai.fairdatasociety.org +``` + +**Environment Variables:** + +```bash +export BEE_PASSWORD=flummoxedgranitecarrot +export BEE_FULL_NODE=true +export BEE_SWAP_ENABLE=true +export BEE_BLOCKCHAIN_RPC_ENDPOINT=https://xdai.fairdatasociety.org +bee start +``` + +**YAML Configuration File:** + +```yaml +password: flummoxedgranitecarrot +full-node: true +swap-enable: true +blockchain-rpc-endpoint: https://xdai.fairdatasociety.org +``` + + + + + +### Light Node Configuration + +**Command Line Flags:** + +```bash +bee start \ + --password flummoxedgranitecarrot \ + --swap-enable \ + --api-addr 127.0.0.1:1633 \ + --blockchain-rpc-endpoint https://xdai.fairdatasociety.org +``` + +**Environment Variables:** + +```bash +export BEE_PASSWORD=flummoxedgranitecarrot +export BEE_FULL_NODE=false +export BEE_SWAP_ENABLE=true +export BEE_BLOCKCHAIN_RPC_ENDPOINT=https://xdai.fairdatasociety.org +bee start +``` + +**YAML Configuration File:** + +```yaml +password: flummoxedgranitecarrot +full-node: false +swap-enable: true +blockchain-rpc-endpoint: https://xdai.fairdatasociety.org +``` + + + + + +### Ultra-Light Node Configuration + +**Command Line Flags:** + +```bash +bee start \ + --password flummoxedgranitecarrot \ + --api-addr 127.0.0.1:1633 +``` + +**Environment Variables:** + +```bash +export BEE_PASSWORD=flummoxedgranitecarrot +export BEE_FULL_NODE=false +export BEE_SWAP_ENABLE=false +export BEE_BLOCKCHAIN_RPC_ENDPOINT= +bee start +``` + +**YAML Configuration File:** + +```yaml +password: flummoxedgranitecarrot +full-node: false +swap-enable: false +blockchain-rpc-endpoint: "" +``` + + + + -## Sepolia Testnet Configuration +## Sepolia Testnet Configuration -Connecting to the Swarm testnet is as simple as adding the flag `--mainnet false` to your bee 
commandline, or `mainnet: false` to your configuration file. Swarm testnet smart contracts are deployed on Sepolia, so if you want to run a light or full node you will need to add a Sepolia RPC to your configuration and fund your node with Sepolia ETH. There are many public faucets you can use to obtain Sepolia ETH, such as [this one from Infura](https://www.infura.io/faucet/sepolia).
+Connecting to the Swarm testnet is as simple as adding the flag `--mainnet false` to your bee command line, or `mainnet: false` to your configuration file. Swarm testnet smart contracts are deployed on Sepolia, so if you want to run a light or full node you will need to add a Sepolia RPC to your configuration and fund your node with Sepolia ETH. There are many public [faucets](https://faucetlink.to/sepolia) you can use to obtain Sepolia ETH.

-To get Sepolia BZZ (sBZZ) you can use [this Uniswap market](https://app.uniswap.org/swap?outputCurrency=0x543dDb01Ba47acB11de34891cD86B675F04840db&inputCurrency=ETH); just make sure that you've switched to the Sepolia network in your browser wallet.
+To get Sepolia BZZ (sBZZ) you can use [this Uniswap market](https://app.uniswap.org/swap?outputCurrency=0x543dDb01Ba47acB11de34891cD86B675F04840db&inputCurrency=ETH); just make sure that you've switched to the Sepolia network in your browser wallet.
Here is an example of a full configuration for a testnet full node: diff --git a/docs/bee/working-with-bee/logs-and-files.md b/docs/bee/working-with-bee/logs-and-files.md index 1cf8091c7..c9597013a 100644 --- a/docs/bee/working-with-bee/logs-and-files.md +++ b/docs/bee/working-with-bee/logs-and-files.md @@ -1,3 +1,7 @@ +--- +title: Logging in Bee +id: logs-and-files +--- # Logging in Bee diff --git a/docs/bee/working-with-bee/staking.md b/docs/bee/working-with-bee/staking.md index 8b2b05c4e..8f1def8c9 100644 --- a/docs/bee/working-with-bee/staking.md +++ b/docs/bee/working-with-bee/staking.md @@ -3,20 +3,59 @@ title: Staking id: staking --- -In order to participate in the redistribution of xBZZ from uploaders to storers, storers must first deposit a non-refundable xBZZ stake with a smart contract. Then, they are going to be chosen for payout with a probability proportional to their stake in their neighborhood, as long as they can log storing the part of the content that they are supposed to be storing according to protocol rules. +In order to earn storage incentives by participating in the redistribution of xBZZ from uploaders to storers, storers must first deposit a non-refundable xBZZ stake with a smart contract. Then, they are going to be chosen for payout with a probability proportional to their stake in their neighborhood, as long as they can log storing the part of the content that they are supposed to be storing according to protocol rules. + +:::danger +Staked xBZZ CANNOT be withdrawn after being staked under typical circumstances. Only stake your xBZZ if you really plan on participating in staking as a full node, as you will not be able to withdraw it later. +::: In order to participate in redistribution, storers need to do the following: -- Join the network and download all the data that the protocol assigns to them. They can only participate if they are fully synchronised with the network. 
+- Join the network and download all the data that the protocol assigns to them. They can only participate if they are fully synchronized with the network. - Deposit a stake with the staking contract. There is a minimum staking requirement, presently 10 xBZZ. It can change in the future. - Stay online and fully synced, so that when a redistribution round comes, their node can check whether their neighborhood (nodes that are assigned the same content to store) has been selected and if so, they can perform a certain calculation (a random sampling) on their content and submit the result to the redistribution contract. This happens in two phases (commit and reveal), so that the nodes cannot know the results of others’ calculations when committing to their own. - Round length is estimated around 15 minutes (152 blocks to be precise), though it can be extended. Amongst the nodes that agree with the correct result, one is chosen — with a probability in proportion to their stake — as the winner. The winner must execute an on-chain transaction claiming their reward, which is the entire pot of storage rent paid since the previous round, or even more, if the previous pot has not been claimed at that time. +## Add xDAI + +In order to stake and continue to participate in the storage incentives system your node will need to continually issue related transactions on the Gnosis Chain blockchain. Therefore you will need to fund your node with some xDAI before you can get started with staking. 
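
Amounts in this area of the docs are denominated in base units: xDAI gas balances in Wei (1 xDAI = 10^18 Wei) and xBZZ amounts in PLUR (1 xBZZ = 10^16 PLUR). The following is purely illustrative arithmetic (not a Bee command) for sanity-checking those conversions from the shell:

```bash
# Illustrative arithmetic only. Both tokens use fixed base units:
# 1 xDAI = 10^18 Wei, and 1 xBZZ = 10^16 PLUR.
WEI_PER_XDAI=$((10**18))
PLUR_PER_XBZZ=$((10**16))

# The 10 xBZZ minimum stake, expressed in PLUR:
echo $((10 * PLUR_PER_XBZZ))    # 100000000000000000

# Convert a minimumGasFunds value (reported in Wei) to xDAI using awk:
echo 3750000030000000 | awk -v w=1e18 '{ printf "%.11f\n", $1 / w }'    # 0.00375000003
```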
+
+:::info
+You can check exactly how much xDAI is required to get started with staking from the `/redistributionstate` endpoint:
+
+```bash
+root@noah-bee:~# curl -s localhost:1633/redistributionstate | jq
+{
+  "minimumGasFunds": "3750000030000000",
+  "hasSufficientFunds": true,
+  "isFrozen": false,
+  "isFullySynced": false,
+  "phase": "reveal",
+  "round": 253280,
+  "lastWonRound": 0,
+  "lastPlayedRound": 0,
+  "lastFrozenRound": 0,
+  "lastSelectedRound": 0,
+  "lastSampleDurationSeconds": 0,
+  "block": 38498620,
+  "reward": "0",
+  "fees": "0",
+  "isHealthy": true
+}
+```
+The `"3750000030000000"` value listed for `"minimumGasFunds"` is the minimum amount of xDAI required for staking, denominated in Wei ($1\ \text{xDAI} = 10^{18}\ \text{Wei}$). That is equivalent to 0.00375000003 xDAI. However, it's recommended to add more than just the minimum amount, since it will quickly be used up by storage incentives related transaction fees. As little as 1 xDAI should last for quite a while, since an average incentives related transaction fee is around 0.001 xDAI or even considerably less.
+:::
+
+
## Add stake

-Bee has builtin endpoints for depositing the stake. Currently the minimum staking requirement is 10 xBZZ, so make sure that there is enough tokens in the node's wallet and you must have some native token as well for paying the gas.
+Bee has built-in endpoints for depositing stake. Currently the minimum staking requirement is 10 xBZZ, so make sure that there are enough xBZZ tokens in the node's wallet. You will also need some native xDAI for paying the gas fees for staking and storage incentives related transactions.
+
Then you can run the following command to stake 10 xBZZ. The amount is given in PLUR which is the smallest denomination of xBZZ and `1 xBZZ == 1e16 PLUR`.
diff --git a/docs/bee/working-with-bee/uninstalling-bee.md b/docs/bee/working-with-bee/uninstalling-bee.md
index 4afe137cb..a8292935b 100644
--- a/docs/bee/working-with-bee/uninstalling-bee.md
+++ b/docs/bee/working-with-bee/uninstalling-bee.md
@@ -7,7 +7,7 @@

Choose the appropriate uninstall method based on the install method used:

-### Package Manager Install
+### Package Manager

This method can be used for package manager based [installs](/docs/bee/installation/install#shell-script-install) of the official Debian, RPM, and Homebrew packages.

@@ -17,7 +17,7 @@ This will remove your keyfiles so make certain that you have a [full backup](/do

#### Debian

-To uninstall Bee and completely remove all associated files including keys and configuration, run:
+To uninstall Bee and completely remove all associated files including keys and configuration, run:

```bash
sudo apt-get purge bee
@@ -29,8 +29,64 @@ sudo apt-get purge bee

sudo yum remove bee
```

-### Binary Install
-If Bee is installed using the [automated shell script](/docs/bee/installation/install#shell-script-install) or by [building from source](/docs/bee/installation/build-from-source), Bee can be uninstalled by directly removing the installed file.
+## Uninstalling Bee (Shell Script / Binary Install)
+
+If Bee was installed using the [automated shell script](/docs/bee/installation/shell-script-install) or as a binary by [building from source](/docs/bee/installation/build-from-source), it can be uninstalled by manually removing the installed binary, configuration files, and data directories.
+
+### Identify Data and Config Locations
+
+The shell script install method may result in slightly different default data and configuration locations based on your system.
The easiest way to find these locations is to check the default configuration using the `bee printconfig` command:
+
+```bash
+bee printconfig
+```
+
+The output from this command contains several dozen default configuration values; however, we only include the two we need in the example output below, `config` and `data-dir`. These reveal the default locations for the configuration file and data directory on our specific system.
+
+These values will look something like this:
+
+```bash
+# config file (default is $HOME/.bee.yaml)
+config: /home/noah/.bee.yaml
+# data directory
+data-dir: /home/noah/.bee
+```
+
+### Backup Files (Optional)
+
+Before removing anything, consider making a [backup](/docs/bee/working-with-bee/backups) of your keys, configuration, and data.
+
+**1. Remove the Bee Binary**
+
+First, check if the Bee binary exists:
+
+```bash
+ls -l /usr/local/bin/bee
+```
+
+If it exists, remove it:
+
+```bash
+sudo rm -f /usr/local/bin/bee
+```
+
+Verify that the binary has been removed:
+
+```bash
+ls -l /usr/local/bin/bee
+```
+
+If Bee was built from source but not moved [as described in step 6](/docs/bee/installation/build-from-source) of the instructions for building from source, check the default build directory:
+
+```bash
+ls -l ~/bee
+```
+
+If it exists, remove it:
+
+```bash
+rm -rf ~/bee
+```
+
+Verify removal by repeating the `ls` checks above. If a copy still remains at the install path, remove it:

```bash
sudo rm "/usr/local/bin/bee"
@@ -38,10 +94,10 @@

## Remove Bee Data Files

-To completely remove all Bee files from your system you will also need to remove the config and data files.
+To completely remove all Bee files from your system you will also need to remove the config and data files.

:::danger
-Node keys, password, chunks and state files are stored in the data folder. [Make backups](/docs/bee/working-with-bee/backups) of your data folder to prevent losing keys and data.
+Node keys, password, chunks and state files are stored in the data folder. [Make backups](/docs/bee/working-with-bee/backups) of your data folder to prevent losing keys and data.
::: ### Bee diff --git a/docs/concepts/incentives/overview.mdx b/docs/concepts/incentives/overview.mdx index 6619cc1ec..843aeae6e 100644 --- a/docs/concepts/incentives/overview.mdx +++ b/docs/concepts/incentives/overview.mdx @@ -3,34 +3,44 @@ title: Incentives Overview id: overview --- -import { globalVariables } from '/src/config/globalVariables'; +import { globalVariables } from '/src/config/globalVariables' +A key challenge in a decentralized data network is incentivizing users to store data and provide bandwidth. Swarm addresses this with two incentive mechanisms: **storage incentives**, which reward nodes for storing data over time, and **bandwidth incentives**, which reward nodes for transmitting data across the network. Together, these mechanisms establish a self-sustaining economic system where nodes are compensated for contributing resources honestly. -One of the key challenges in a decentralised data network is incentivizing users to store data and provide bandwidth. Swarm addresses this challenge with two incentives systems, one which rewards nodes for sharing their storage space and another which rewards them for sharing bandwidth. The incentives system consists of multiple elements which work together to build a self sustaining economic system where nodes are rewarded for honestly providing their resources to the network. +Swarm's storage incentives are detailed in the [Future Proof Storage](https://www.ethswarm.org/swarm-storage-incentives.pdf) paper and [The Book of Swarm](https://papers.ethswarm.org/p/book-of-swarm/). -:::info -Swarm's storage incentives protocols are defined in depth in the [Future Proof Storage](https://www.ethswarm.org/swarm-storage-incentives.pdf) paper published by the Swarm team, and are also discussed in [The Book of Swarm](https://papers.ethswarm.org/p/book-of-swarm/). -::: +## Storage Incentives -## Storage Incentives +Storage incentives reward node operators for providing disk space and reliably storing data. 
The system is governed by three interconnected smart contracts:

-Storage incentives are used to reward node operators for providing their disk space to the network and storing the data they are responsible for storing over time. The storage incentives system is composed of three smart contracts which work together to enact a self regulating economic system. The postage stamp contract manages payments for uploading data, the redistribution contract manages the redistribution of those payments to storer nodes, and the price oracle contract uses data from the redistribution contract to set the price for postage stamps in the postage stamp contract.
+- **Postage Stamp Contract** – Handles payments for uploading data by way of purchasing "postage stamp batches".
+- **Redistribution Contract** – Distributes payments for postage stamps to nodes that store data.
+- **Price Oracle Contract** – Uses network redundancy data to determine postage stamp prices.
+
+If you want to dig into the code, check out the [incentives contracts repo](https://github.com/ethersphere/storage-incentives).
+
+You can find the on-chain address for each contract within the docs [here](/docs/references/smart-contracts#storage-incentives-contracts); however, since the addresses there are updated manually, they may at times fall slightly behind the most recent changes. For the most up-to-date address of each storage incentives contract, refer to the [storage incentives ABI repo](https://github.com/ethersphere/go-storage-incentives-abi/commits/master/abi/abi_mainnet.go), and you can also find past addresses of older versions of the incentives contracts by reviewing previous commits.

-### Postage Stamps
+### Postage Stamps

-Postage stamps are used to pre-purchase the right to upload data on storm, much in the same way that real life postage stamps are used to pre-pay for use of the postal service. Postage stamps are purchased in batches rather than one by one, and are consumed when uploading data to Swarm.
Postage stamp batches are purchased using xBZZ through the postage stamp smart contract . the xBZZ used to pay for postage stamp batches serve as the funds which are redistributed as storage incentives in the redistribution game. The price of postage stamps is set by the price oracle. Read more [here](/docs/concepts/incentives/postage-stamps). +Postage stamps are required to upload data to Swarm, similar to how real-world postage stamps prepay for mail delivery. Instead of being purchased individually, they are bought in batches using xBZZ through the postage stamp smart contract. -### Redistribution Game +The xBZZ used to buy postage stamps is later redistributed as storage incentives. The **price oracle contract** adjusts postage stamp pricing based on network redundancy to ensure a sustainable level of storage. You can find more details about postage stamps [here](/docs/concepts/incentives/postage-stamps). -The redistribution game is used to redistribute the xBZZ paid into the postage stamp contract to full staking nodes which contribute their disk space to the network. The game is designed in such a way that the most profitable way to participate is to honestly store all the data for which a node is responsible. The game's rules are determined by the redistribution smart contract. The results of the game also supply the utilization signal which is used by the price oracle to set the price for postage stamps. Read more [here](/docs/concepts/incentives/postage-stamps). +### Redistribution Game -### Price Oracle +The redistribution game determines how xBZZ from postage stamp purchases is distributed among full staking nodes that store data. The system is designed so that **honestly storing assigned data** is the most profitable strategy. Rules for this process are encoded in the [redistribution smart contract](https://github.com/ethersphere/storage-incentives). 
-The price oracle contract uses a utilization signal derived from the redistribution contract to set the price for postage stamps through the postage stamp contract. The utilization signal is based on a measure of data redundancy in the network. The postage stamp price is increased or decreased in order to maintain a healthy degree of redundancy. Read more [here](/docs/concepts/incentives/price-oracle). +Additionally, the game generates a **utilization signal**, which the price oracle uses to regulate postage stamp prices. Read more [here](/docs/concepts/incentives/redistribution-game). +### Price Oracle -## Bandwidth Incentives +The **price oracle contract** dynamically adjusts postage stamp prices using the utilization signal from the redistribution contract. This mechanism ensures optimal redundancy by increasing or decreasing costs as needed. Read more [here](/docs/concepts/incentives/price-oracle). + +## Bandwidth Incentives + +Nodes in Swarm not only store data but also serve and relay data across the network. **Bandwidth incentives** compensate nodes for these services. + +The **Swarm Accounting Protocol (SWAP)** facilitates bandwidth payments between nodes, which can be settled either **in-kind** (data exchange) or via **cheques** processed through a **chequebook contract** on Gnosis Chain. SWAP applies to full and light nodes but not ultra-light nodes. + +Read more [here](/docs/concepts/incentives/bandwidth-incentives). -In addition to storing data over time, nodes must also serve the data they store and must also relay data and messages to other nodes in the network. -Bandwidth incentives are used to reward nodes for relaying data across the network, both by serving up the data they store themselves and by serving as an intermediary relayer of data between other peers. The Swarm Accounting Protocol (SWAP) defines how bandwidth incentives work. At the core of SWAP is the concept of cheques along with the chequebook contract. 
Read more [here](/docs/concepts/incentives/bandwidth-incentives). \ No newline at end of file diff --git a/docs/concepts/incentives/redistribution-game.md b/docs/concepts/incentives/redistribution-game.md index 75dae81e7..9646259f5 100644 --- a/docs/concepts/incentives/redistribution-game.md +++ b/docs/concepts/incentives/redistribution-game.md @@ -10,7 +10,7 @@ The redistribution game is used to redistribute the xBZZ which is accumulated by ### Redistribution Game Details -When someone wants to upload data to Swarm, they do so by buying postage stamp batches with xBZZ. The xBZZ is collected and later paid out to storage provider nodes as a part of the redistribution game. Every 152 Gnosis Chain blocks a single [neighborhood](/docs/concepts/DISC/neighborhoods) is selected to play the redistribution game. For each round of the game, one node from the selected neighborhood will have the chance to win a reward which is paid out from the accumulated xBZZ. +When someone wants to upload data to Swarm, they do so by buying postage stamp batches with xBZZ. The xBZZ is collected and later paid out to storage provider nodes as a part of the redistribution game. Every 152 Gnosis Chain blocks ***a single [neighborhood](/docs/concepts/DISC/neighborhoods)*** is selected to play the redistribution game. For each round of the game, one node from the selected neighborhood will have the chance to win a reward which is paid out from the accumulated xBZZ. The game has 3 phases, `commit`, `reveal`, and `claim`. In the `reveal` phase of a previous game, an "anchor" address is randomly generated and used to determine the neighborhood for the current round. 
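Since neighborhood selection happens on a fixed block cadence, the expected round length is easy to estimate. A rough sketch (the 152-block round length is from the text above; the ~5 second Gnosis Chain block time is an assumption):

```shell
# Approximate length of one redistribution round:
# 152 Gnosis Chain blocks per round x ~5 seconds per block (assumed).
blocks_per_round=152
seconds_per_block=5
round_seconds=$((blocks_per_round * seconds_per_block))
round_minutes=$((round_seconds / 60))
echo "one round: ~${round_seconds}s (~${round_minutes} minutes)"
```

So a new neighborhood is selected roughly every 12–13 minutes, assuming that block time holds.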
diff --git a/docs/concepts/introduction.md b/docs/concepts/introduction.md index e4cefd9cc..07e85e1ea 100644 --- a/docs/concepts/introduction.md +++ b/docs/concepts/introduction.md @@ -8,7 +8,7 @@ Swarm is a peer-to-peer network of Bee nodes that collectively provide censorshi ## Bee Client -Bee is a Swarm client implemented in Go and is the basic building block for the Swarm network. Bee nodes collectively work together to form a private, decentralized, and self sustaining network for permissionless publishing and data storage. You can learn more about how Bee clients work by reading about the [concepts and protocols](/docs/concepts/what-is-swarm/) which underpin the Swarm network. To get hands on experience working with Swarm, you can start by learning how to [install and operate a Bee node](/docs/bee/installation/quick-start). +Bee is a Swarm client implemented in Go and is the basic building block for the Swarm network. Bee nodes collectively work together to form a private, decentralized, and self sustaining network for permissionless publishing and data storage. You can learn more about how Bee clients work by reading about the [concepts and protocols](/docs/concepts/what-is-swarm/) which underpin the Swarm network. To get hands on experience working with Swarm, you can start by learning how to [install and operate a Bee node](/docs/bee/installation/getting-started). 
## Swarm Foundation diff --git a/docs/develop/access-the-swarm/buy-a-stamp-batch.md b/docs/develop/access-the-swarm/buy-a-stamp-batch.md index 5118b9512..fa1ebf39c 100644 --- a/docs/develop/access-the-swarm/buy-a-stamp-batch.md +++ b/docs/develop/access-the-swarm/buy-a-stamp-batch.md @@ -2,15 +2,48 @@ title: Buy a Batch of Stamps id: buy-a-stamp-batch --- + import VolumeAndDurationCalc from '@site/src/components/VolumeAndDurationCalc.js'; import AmountAndDepthCalc from '@site/src/components/AmountAndDepthCalc.js'; import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; +A postage batch is required to upload data to Swarm. Postage stamp batches represent _right to write_ data on Swarm's [DISC (Distributed Immutable Store of Chunks)](/docs/concepts/DISC/). The parameters which control the duration and quantity of data that can be stored by a postage batch are `depth` and `amount`, with `depth` determining data volume that can be uploaded by the batch and `amount` determining storage duration of data uploaded with the batch. + +:::info +The storage volume and duration are both non-deterministic. Volume is non-deterministic due to the details of how [postage stamp batch utilization](/docs/concepts/incentives/postage-stamps#batch-utilisation) works, while duration is non-deterministic due to price changes made by the [price oracle contract](/docs/concepts/incentives/price-oracle). + + **Storage volume and `depth`:** + When purchasing stamp batches for larger volumes of data (by increasing the `depth` value), the amount of data which can be stored becomes increasingly predictable. For example, at `depth` 22 a batch can store between 4.93 GB and 17.18 GB, while at `depth` 28, a batch can store between 1.0 and 1.1 TB of data, and at higher depths the minimum and maximum storage volumes converge towards the same value. -A postage batch is required to upload data to Swarm. 
Postage stamp batches represent _right to write_ data on Swarm's [DISC (Distributed Immutable Store of Chunks)](/docs/concepts/DISC/). The parameters which control the duration and quantity of data that can be stored by a postage batch are `depth` and `amount`, with `depth` determining data volume that can be uploaded by the batch and `amount` determining storage duration of data uploaded with the batch. + **Storage duration and `amount`:** + + The duration of time for which a batch can store data is also non-deterministic since the price of storage is automatically adjusted over time by the [price oracle contract](/docs/concepts/incentives/price-oracle). However, limits have been placed on how swiftly the price of storage can change, so there is no danger of a sudden price increase causing postage batches to unexpectedly expire. You can view a history of price changes by inspecting [the events emitted by the oracle contract](https://gnosisscan.io/address/0x47EeF336e7fE5bED98499A4696bce8f28c1B0a8b#events), or through the [Swarmscan API](https://api.swarmscan.io/v1/events/storage-price-oracle/price-update). As you can see, if and when postage batch prices are updated, the updates are quite small. Still, since it is not entirely deterministic, it is important to monitor your stamp batch TTL (time to live) as it will change along with price oracle changes. You can inspect your batch's TTL using the `/stamps` endpoint of the API: + + ```bash + root@noah-bee:~# curl -s localhost:1633/stamps | jq + { + "stamps": [ + { + "batchID": "f56af59cc2c785a3b45bbf3e46c3c4b20f80379339ef337b5bbf45ebe5629a66", + "utilization": 0, + "usable": true, + "label": "", + "depth": 17, + "amount": "432072000", + "bucketDepth": 16, + "blockNumber": 38498819, + "immutableFlag": true, + "exists": true, + "batchTTL": 82943 + } + ] + } + ``` + Here we can see from the `batchTTL` that `82943` seconds remain, or approximately 23 hours. 
+ ::: For a deeper understanding of how `depth` and `amount` parameters determine the data volume and storage duration of a postage batch, see the [postage stamp page](/docs/concepts/incentives/postage-stamps/). @@ -22,45 +55,16 @@ xBZZ can be obtained from a variety of different centralized and decentralized e xDAI can be obtained from a wide range of centralized and decentralized exchanges. See [this list of exchanges](https://docs.gnosischain.com/about/tokens/xdai) from the Gnosis Chain documentation to get started. -## Buying a stamp batch - -When interacting with the Bee API directly, `amount` and `depth` are passed as path parameters: +You can learn more details from the [Fund Your Node](/docs/bee/installation/fund-your-node/) section. -```bash -curl -s -X POST http://localhost:1633/stamps// -``` +## Buying a stamp batch -And with Swarm CLI, they are set using option flags: +When interacting with Swarm CLI, the parameters amount and depth are set using option flags: ```bash swarm-cli stamp buy --depth --amount ``` - - - -#### API - -```bash -curl -s -X POST http://localhost:1633/stamps/100000000/20 -``` - -```bash -{ - "batchID": "8fcec40c65841e0c3c56679315a29a6495d32b9ed506f2757e03cdd778552c6b", - "txHash": "0x51c77ac171efd930eca8f3a77e3fcd5aca0a7353b84d5562f8e9c13f5907b675" -} -``` - - - - - #### Swarm CLI ```bash @@ -76,30 +80,24 @@ When a mutable stamp reaches full capacity, it still permits new content uploads ? Confirm the purchase Yes Stamp ID: f4b9830676f4eeed4982c051934e64113dc348d7f5d2ab4398d371be0fbcdbf5 ``` - - - - :::info Once your batch has been purchased, it will take a few minutes for other Bee nodes in the Swarm to catch up and register your batch. Allow some time for your batch to propagate in the network before proceeding to the next step. ::: - ## Setting stamp batch parameters and options When purchasing a batch of stamps there are several parameters and options which must be considered. 
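The `batchTTL` reported by the `/stamps` endpoint can be roughly cross-checked by hand: TTL in seconds is approximately `amount / price-per-chunk-per-block × block time`. A sketch using the example batch above, where the ~5 second Gnosis Chain block time and the price value are assumptions (the live price is set by the oracle):

```shell
# Convert the reported batchTTL into hours (integer division).
batch_ttl=82943                     # seconds, from the /stamps example above
echo "~$((batch_ttl / 3600)) hours remaining"

# Rough TTL estimate from first principles.
amount=432072000                    # per-chunk balance, from the example batch
price=26000                         # PLUR per chunk per block -- hypothetical value
block_time=5                        # seconds per Gnosis Chain block (assumed)
est_ttl=$((amount / price * block_time))
echo "estimated TTL: ~${est_ttl} seconds"
```

With these assumed values the estimate lands in the same ballpark as the reported `batchTTL`; the exact figure tracks the oracle price.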
The `depth` parameter will control how many chunks can be uploaded with a batch of stamps. The `amount` parameter determines how much xBZZ will be allocated per chunk, and therefore also controls how long the chunks will be stored. The `immutable` header option sets the batch as either mutable or immutable, which can significantly alter the behavior of the batch utilisation (more details below). - ### Choosing `depth` :::caution The minimum value for `depth` is 17, however a higher depth value is recommended for most use cases due to the [mechanics of stamp batch utilisation](/docs/concepts/incentives/postage-stamps/#batch-utilisation). See [the depths utilisation table](/docs/concepts/incentives/postage-stamps/#effective-utilisation-table) to help decide which depth is best for your use case. ::: -One notable aspect of batch utilisation is that the entire batch is considered fully utilised as soon as any one of its buckets are filled. This means that the actual amount of chunks storable by a batch is less than the nominal maximum amount. +One notable aspect of batch utilisation is that the entire batch is considered fully utilised as soon as any one of its buckets is filled. This means that the actual number of chunks storable by a batch is less than the nominal maximum amount. -See the [postage stamp page](/docs/concepts/incentives/postage-stamps) for a more complete explanation of how batch utilisation works and a [table](/docs/concepts/incentives/postage-stamps#effective-utilisation-table) with the specific amounts of data which can be safely uploaded for each `depth` value. +See the [postage stamp page](/docs/concepts/incentives/postage-stamps) for a more complete explanation of how batch utilisation works and a [table](/docs/concepts/incentives/postage-stamps#effective-utilisation-table) with the specific amounts of data which can be safely uploaded for each `depth` value. 
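The nominal maximum mentioned above can be sketched with a little arithmetic: a batch of a given `depth` can nominally stamp 2^depth chunks of 4096 bytes each, which is where the 17.18 GB theoretical maximum for `depth` 22 comes from (the safely usable effective volume, 4.93 GB at that depth, is much smaller):

```shell
# Theoretical maximum volume of a postage batch: 2^depth chunks x 4096 bytes.
depth=22
chunks=$((1 << depth))     # 2^22 chunks
bytes=$((chunks * 4096))   # total nominal capacity in bytes
echo "depth ${depth}: ${chunks} chunks, ${bytes} bytes (~17.18 GB)"
```

Remember that this is only the nominal ceiling; plan uploads around the effective utilisation table instead.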
### Choosing `amount` @@ -115,10 +113,10 @@ Depending on the use case, uploaders may desire to use mutable or immutable batc ## Calculators -The following postage batch calculators allow you to conveniently find the depth and amount values for a given storage duration and storage volume, or to find the storage duration and storage volume for a given depth and amount. The results will display the cost in xBZZ for the postage batch. The current pricing information is sourced from the Swarmscan API and will vary over time. +The following postage batch calculators allow you to conveniently find the depth and amount values for a given storage duration and storage volume, or to find the storage duration and storage volume for a given depth and amount. The results will display the cost in xBZZ for the postage batch. The current pricing information is sourced from the Swarmscan API and will vary over time. :::info -The 'effective volume' is the volume of data that can safely stored for each storage depth. The 'theoretical max volume' is significantly lower than the effective volume at lower depths and the two values trend towards the same value at higher depths. The lowest depth with an effective volume above zero is 22, with an effective depth of 4.93 GB. Lower depth values can be used for smaller uploads but do not come with the same storage guarantees. [Learn more here](/docs/concepts/incentives/postage-stamps#effective-utilisation-table). +The 'effective volume' is the volume of data that can be safely stored for each storage depth. The 'effective volume' is significantly lower than the 'theoretical max volume' at lower depths, and the two values trend towards the same value at higher depths. The lowest depth with an effective volume above zero is 22, with an effective volume of 4.93 GB. Lower depth values can be used for smaller uploads but do not come with the same storage guarantees. [Learn more here](/docs/concepts/incentives/postage-stamps#effective-utilisation-table). 
::: ### Depth & Amount to Time & Volume Calculator @@ -127,55 +125,10 @@ The 'effective volume' is the volume of data that can safely stored for each sto ### Time & Volume to Depth & Amount Calculator -The recommended depth in this calculator's results is the lowest depth value whose [effective volume](/docs/concepts/incentives/postage-stamps#effective-utilisation-table) is greater than the entered volume. +The recommended depth in this calculator's results is the lowest depth value whose [effective volume](/docs/concepts/incentives/postage-stamps#effective-utilisation-table) is greater than the entered volume. -## Viewing Stamps - -To check on your stamps, send a GET request to the stamp endpoint. - - - - - -#### API - - -```bash -curl http://localhost:1633/stamps -``` - -```bash -{ - "stamps": [ - { - "batchID": "f4b9830676f4eeed4982c051934e64113dc348d7f5d2ab4398d371be0fbcdbf5", - "utilization": 0, - "usable": true, - "label": "", - "depth": 20, - "amount": "100000000", - "bucketDepth": 16, - "blockNumber": 30643611, - "immutableFlag": true, - "exists": true, - "batchTTL": 20588, - "expired": false - } - ] -} -``` - - - - - #### Swarm CLI ```bash @@ -190,65 +143,20 @@ TTL: 5 hours 42 minutes 18 seconds Expires: 2023-10-26 ``` - - - :::info -It is not possible to reupload unencrypted content which was stamped using an expired postage stamp. +It is not possible to reupload unencrypted content which was stamped using an expired postage stamp. ::: - ## Checking the remaining TTL (time to live) of your batch :::info At present, TTL is a primitive calculation based on the current storage price and the assumption that storage price will remain static in the future. As more data is uploaded into Swarm, the price of storage will begin to increase. For data that it is important to keep alive, make sure your batches have plenty of time to live! 
::: -In order to make sure your *batch* has sufficient *remaining balance* to be stored and served by nodes in its [*area of responsibility*](/docs/references/glossary#2-area-of-responsibility-related-depths), you must regularly check on its _time to live_ and act accordingly. The *time to live* is the number of seconds before the chunks will be considered for garbage collection by nodes in the network. - -The remaining *time to live* in seconds is shown in the API in the returned json object as the value for `batchTTL`, and with Swarm CLI you will see the formatted TTL as the `TTL` value. - - - - - -#### API - - -```bash -curl http://localhost:1633/stamps -``` - -```bash -{ - "stamps": [ - { - "batchID": "f4b9830676f4eeed4982c051934e64113dc348d7f5d2ab4398d371be0fbcdbf5", - "utilization": 0, - "usable": true, - "label": "", - "depth": 20, - "amount": "100000000", - "bucketDepth": 16, - "blockNumber": 30643611, - "immutableFlag": true, - "exists": true, - "batchTTL": 20588, - "expired": false - } - ] -} -``` - - +In order to make sure your _batch_ has sufficient _remaining balance_ to be stored and served by nodes in its [_area of responsibility_](/docs/references/glossary#2-area-of-responsibility-related-depths), you must regularly check on its _time to live_ and act accordingly. The _time to live_ is the number of seconds before the chunks will be considered for garbage collection by nodes in the network. - +The remaining _time to live_ in seconds is shown in the API in the returned json object as the value for `batchTTL`, and with Swarm CLI you will see the formatted TTL as the `TTL` value. #### Swarm CLI @@ -264,8 +172,6 @@ TTL: 5 hours 42 minutes 18 seconds Expires: 2023-10-26 ``` - - ## Top up your batch @@ -273,26 +179,7 @@ Expires: 2023-10-26 Don't let your batch run out! If it does, you will need to restamp and resync your content. 
::: -If your batch is starting to run out, or you would like to extend the life of your batch to protect against storage price rises, you can increase the batch TTL by topping up your batch using the stamps endpoint, passing in the relevant batchID into the HTTP PATCH request. - - - - - -#### API - -```bash -curl -X PATCH "http://localhost:1633/stamps/topup/6d32e6f1b724f8658830e51f8f57aa6029f82ee7a30e4fc0c1bfe23ab5632b27/10000000" -``` - - - - +If your batch is starting to run out, or you would like to extend the life of your batch to protect against storage price rises, you can increase the batch TTL by topping up your batch using swarm-cli's stamp topup command, passing in the relevant batchID. #### Swarm CLI @@ -303,6 +190,7 @@ swarm-cli stamp list ``` Copy stamp ID. + ```bash Stamp ID: daa8c5b36e1cf481b10118a8b02430a6f22618deaa6ba5aa4ea660de66aa62db Usage: 13% @@ -319,6 +207,7 @@ swarm-cli stamp topup --amount 10000000 --stamp daa8c5b36e1cf481b10118a8b02430a ``` Wait for topup transaction to complete. + ```bash ⬡ ⬡ ⬢ Topup in progress. This may take a while. Stamp ID: daa8c5b36e1cf481b10118a8b02430a6f22618deaa6ba5aa4ea660de66aa62db @@ -326,100 +215,10 @@ Depth: 20 Amount: 100000001000 ``` - - - - ## Dilute your batch In order to store more data with a batch of stamps, you must "dilute" the batch. Dilution simply refers to increasing the depth of the batch, thereby allowing it to store a greater number of chunks. As dilution only increases the depth of a batch and does not automatically top up the batch with more xBZZ, dilution will decrease the TTL of the batch. Therefore, if you wish to store more with your batch but don't want to decrease its TTL, you will need to both dilute and top up your batch with more xBZZ. 
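The TTL drop from dilution is easy to estimate, since each one-step increase in `depth` doubles the batch's nominal capacity and so roughly halves its remaining TTL. A sketch with example values (real figures will also drift with oracle price changes):

```shell
# Estimate the TTL after diluting a batch from depth 24 to depth 26.
# Capacity grows by 2^(new_depth - old_depth), so TTL shrinks by the same factor.
old_ttl=2083223     # seconds remaining before dilution (example value)
old_depth=24
new_depth=26
factor=$((1 << (new_depth - old_depth)))
new_ttl=$((old_ttl / factor))
echo "expected TTL after dilution: ~${new_ttl} seconds"
```

To keep the original TTL after such a dilution, you would need to top the batch up by the same factor.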
- - - - -#### API - - -Here we call the `/stamps` endpoint and find a batch with `depth` 24 and a `batchTTL` of 2083223 which we wish to dilute: - -```bash -curl http://localhost:1633/stamps -``` - -```json -{ - "stamps": [ - { - "batchID": "0e4dd16cc435730a25ba662eb3da46e28d260c61c31713b6f4abf8f8c2548ae5", - "utilization": 0, - "usable": true, - "label": "", - "depth": 24, - "amount": "10000000000", - "bucketDepth": 16, - "blockNumber": 29717348, - "immutableFlag": false, - "exists": true, - "batchTTL": 2083223, - "expired": false - } - ] -} -``` - -Next we call the [`dilute`](/api/#tag/Postage-Stamps/paths/~1stamps~1dilute~1{batch_id}~1{depth}/patch) endpoint to increase the `depth` of the batch using the `batchID` and our new `depth` of 26: - -```bash -curl -s -XPATCH http://localhost:1633/stamps/dilute/0e4dd16cc435730a25ba662eb3da46e28d260c61c31713b6f4abf8f8c2548ae5/26 -``` -And a `txHash` of our successful transaction: - -```bash -{ - "batchID": "0e4dd16cc435730a25ba662eb3da46e28d260c61c31713b6f4abf8f8c2548ae5", - "txHash": "0x298e80358b3257292752edb2535a1cd84440c074451b61f78fab349aea4962b7" -} -``` - -And finally we use the `/stamps` endpoint again to confirm the new `depth` and decreased `batchTTL`: - -```bash -curl http://localhost:1633/stamps -``` - -We can see the new `depth` of 26 and a decreased `batchTTL` of 519265. - -```json -{ - "stamps": [ - { - "batchID": "0e4dd16cc435730a25ba662eb3da46e28d260c61c31713b6f4abf8f8c2548ae5", - "utilization": 0, - "usable": true, - "label": "", - "depth": 26, - "amount": "10000000000", - "bucketDepth": 16, - "blockNumber": 29717348, - "immutableFlag": false, - "exists": true, - "batchTTL": 519265, - "expired": false - } - ] -} -``` - - - - - #### Swarm CLI List available stamps, make sure to use the `--verbose` flag so that we can see the batch depth. 
@@ -459,9 +258,6 @@ Depth: 20 Amount: 100010002000 ``` - - - ## Stewardship The stewardship endpoint in combination with [pinning](/docs/develop/access-the-swarm/pinning) can be used to guarantee that important content is always available. It is used for checking whether the content for a Swarm reference is retrievable and for re-uploading the content if it is not. @@ -476,9 +272,10 @@ An HTTP GET request to the `stewardship` endpoint checks to see whether the cont curl "http://localhost:1633/stewardship/c0c2b70b01db8cdfaf114cde176a1e30972b556c7e72d5403bea32ec0207136f" ``` + ```json { - "isRetrievable": true + "isRetrievable": true } ``` @@ -488,4 +285,4 @@ If the content is not retrievable, an HTTP PUT request can be used to re-upload ```bash curl -X PUT "http://localhost:1633/stewardship/c0c2b70b01db8cdfaf114cde176a1e30972b556c7e72d5403bea32ec0207136f" ``` -Note that for the re-upload to succeed, the associated content must be available locally, either pinned or cached. Since it isn't easy to predict if the content will be cached, for important content pinning is recommended. \ No newline at end of file +Note that for the re-upload to succeed, the associated content must be available locally, either pinned or cached. Since it isn't easy to predict if the content will be cached, for important content pinning is recommended. diff --git a/docs/develop/access-the-swarm/pinning.md b/docs/develop/access-the-swarm/pinning.md index ce7d4f77b..0490fbc42 100644 --- a/docs/develop/access-the-swarm/pinning.md +++ b/docs/develop/access-the-swarm/pinning.md @@ -3,43 +3,68 @@ title: Pinning id: pinning --- -Each Bee node is configured to reserve a certain amount of memory on your computer's hard drive to store and serve chunks within their _neighborhood of responsibility_ for other nodes in the Swarm network. Once this alloted space has been filled, each Bee node deletes older chunks to make way for newer ones as they are uploaded by the network. 
+import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + +Each Bee node is configured to reserve a certain amount of space on your computer's hard drive to store and serve chunks within its _neighborhood of responsibility_ for other nodes in the Swarm network. Once this allotted space has been filled, each Bee node deletes older chunks to make way for newer ones as they are uploaded by the network. Each time a chunk is accessed, it is moved back to the end of the deletion queue, so that regularly accessed content stays alive in the network and is not deleted by a node's garbage collection routine. Bee nodes provide a facility to **pin** important content so that it is not deleted by the node's garbage collection routine. Chunks can be _pinned_ either during upload, or retrospectively using the Swarm reference. - ## Pin During Upload + + + + To store content so that it will persist even when Bee's garbage collection routine is deleting old chunks, we simply pass the `Swarm-Pin` header set to `true` when uploading. 
```bash -curl -H "Swarm-Pin: true" -H "Swarm-Postage-Batch-Id: 78a26be9b42317fe6f0cbea3e47cbd0cf34f533db4e9c91cf92be40eb2968264" --data-binary @bee.mp4 localhost:1633/bzz\?bee.mp4 +curl -H "Swarm-Pin: true" -H "Swarm-Postage-Batch-Id: 78a26be9b42317fe6f0cbea3e47cbd0cf34f533db4e9c91cf92be40eb2968264" --data-binary @bee.mp4 localhost:1633/bzz?bee.mp4 ``` -```json -{ - "reference": "1bfe7c3ce4100ae7f02b62e38d3e8d4c3a86ea368349614a87827402f20cbb30" -} + + + +To pin content during upload using swarm-cli: + +```bash +swarm-cli pinning pin --file bee.mp4 --stamp 78a26be9b42317fe6f0cbea3e47cbd0cf34f533db4e9c91cf92be40eb2968264 ``` + + + + ## Administer Pinned Content + + + To check what content is currently pinned on your node, query the `pins` endpoint of your Bee API: ```bash curl localhost:1633/pins ``` -```json -{ - "references": [ - "1bfe7c3ce4100ae7f02b62e38d3e8d4c3a86ea368349614a87827402f20cbb30" - ] -} + + + +To check pinned content using swarm-cli: + +```bash +swarm-cli pinning list ``` + + + or, to check for specific references: ```bash @@ -48,19 +73,32 @@ curl localhost:1633/pins/1bfe7c3ce4100ae7f02b62e38d3e8d4c3a86ea368349614a8782740 A `404` response indicates the content is not available. -### Unpinning Content +## Unpinning Content + + + We can unpin content by sending a `DELETE` request to the pinning endpoint using the same reference: -````bash -curl -XDELETE http://localhost:1633/pins/1bfe7c3ce4100ae7f02b62e38d3e8d4c3a86ea368349614a87827402f20cbb30 -`` +```bash +curl -X DELETE http://localhost:1633/pins/1bfe7c3ce4100ae7f02b62e38d3e8d4c3a86ea368349614a87827402f20cbb30 -```json -{"message":"OK","code":200} -```` +``` + + -Now, when check again, we will get a `404` error as the content is no longer pinned. + +To unpin content using swarm-cli: + +```bash +swarm-cli pinning unpin --hash 1bfe7c3ce4100ae7f02b62e38d3e8d4c3a86ea368349614a87827402f20cbb30 +``` + + + + + +Now, when checking again, we will get a `404` error as the content is no longer pinned. 
```bash curl localhost:1633/pins/1bfe7c3ce4100ae7f02b62e38d3e8d4c3a86ea368349614a87827402f20cbb30 @@ -71,11 +109,13 @@ curl localhost:1633/pins/1bfe7c3ce4100ae7f02b62e38d3e8d4c3a86ea368349614a8782740 ``` :::info -Pinning and unpinning is possible for files (as in the example) and also the chunks, directories, and bytes endpoints. See the [API](/api/) documentation for more details. -::: +Pinning and unpinning are possible for files (as in the example) and also the chunks, directories, and bytes endpoints. See the [API](/api/) documentation for more details. ::: + +## Pinning Already Uploaded Content -### Pinning Already Uploaded Content + + The previous example showed how we can pin content upon upload. It is also possible to pin content that is already uploaded and present in the Swarm. To do so, we can send a `POST` request including the swarm reference to the files pinning endpoint. @@ -84,11 +124,20 @@ To do so, we can send a `POST` request including the swarm reference to the file curl -X POST http://localhost:1633/pins/7b344ea68c699b0eca8bb4cfb3a77eb24f5e4e8ab50d38165e0fb48368350e8f ``` -```json -{ "message": "OK", "code": 200 } + + + +To pin already uploaded content using swarm-cli: + +```bash +swarm-cli pinning pin --hash 7b344ea68c699b0eca8bb4cfb3a77eb24f5e4e8ab50d38165e0fb48368350e8f ``` - + + + +The pins operation will attempt to fetch the content from the network if it is not available on the local node. Now, if we query our files pinning endpoint again, the swarm reference will be returned. 
@@ -98,10 +147,10 @@ curl http://localhost:1633/pins/7b344ea68c699b0eca8bb4cfb3a77eb24f5e4e8ab50d3816 ```json { - "reference": "7b344ea68c699b0eca8bb4cfb3a77eb24f5e4e8ab50d38165e0fb48368350e8f" + "reference": "7b344ea68c699b0eca8bb4cfb3a77eb24f5e4e8ab50d38165e0fb48368350e8f" } ``` -:::warning +:::warning While the pin operation will attempt to fetch content from the network if it is not available locally, we advise you to ensure that the content is available locally before calling the pin operation. If the content, for whatever reason, is only fetched partially from the network, the pin operation only partly succeeds and leaves the internal administration of pinning in an inconsistent state. ::: diff --git a/docs/develop/access-the-swarm/store-with-encryption.md b/docs/develop/access-the-swarm/store-with-encryption.md index d6a26cc92..2ca39f464 100644 --- a/docs/develop/access-the-swarm/store-with-encryption.md +++ b/docs/develop/access-the-swarm/store-with-encryption.md @@ -9,10 +9,10 @@ The Bee client provides a facility to encrypt files and directories while upload # Encrypt and Upload a File -To encrypt a file simply include the `Swarm-Encrypt: true` header with your HTTP request. +To encrypt a file simply add the `--encrypt` flag to your `swarm-cli` upload command. ```bash -curl -F file=@bee.jpg -H "Swarm-Postage-Batch-Id: 78a26be9b42317fe6f0cbea3e47cbd0cf34f533db4e9c91cf92be40eb2968264" -H "Swarm-Encrypt: true" http://localhost:1633/bzz +swarm-cli upload bee.jpg --stamp 78a26be9b42317fe6f0cbea3e47cbd0cf34f533db4e9c91cf92be40eb2968264 --encrypt ``` When successful, the Bee client will return a 64 byte reference, instead of the usual 32 bytes. 
@@ -21,7 +21,7 @@ More information on how to buy a postage stamp batch and get its batch id can be ```json { - "reference": "f7b1a45b70ee91d3dbfd98a2a692387f24db7279a9c96c447409e9205cf265baef29bf6aa294264762e33f6a18318562c86383dd8bfea2cec14fae08a8039bf3" + "reference": "f7b1a45b70ee91d3dbfd98a2a692387f24db7279a9c96c447409e9205cf265baef29bf6aa294264762e33f6a18318562c86383dd8bfea2cec14fae08a8039bf3" } ``` @@ -32,17 +32,17 @@ The second part of the reference is a 64 character decryption key which is requi It is important that this data not be sent in requests to a public gateway as this would mean that gateway would be able to decrypt your data. However, if you are running a node on your local machine, you may safely use the API bound to `localhost`. The key material is never exposed to the network so your data remains safe. :::info -Encryption is disabled by default on all Swarm Gateways to keep your data safe. [Install Bee on your computer](/docs/bee/installation/quick-start) to use the encryption feature. +Encryption is disabled by default on all Swarm Gateways to keep your data safe. [Install Bee on your computer](/docs/bee/installation/getting-started) to use the encryption feature. ::: # Download and Decrypt a File -To retrieve your file, simply supply the full 64 byte string to the files endpoint, and the Bee client will download and decrypt all the relevant chunks and restore them to their original format. +To retrieve your file, simply enter the full 64-byte string to swarm-cli's download command, and the Bee client will download and decrypt all the relevant chunks and restore them to their original format. 
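Since the full encrypted reference is simply the 32-byte content reference concatenated with the 32-byte decryption key, the two halves can be separated with standard tools. A sketch using the example reference above:

```shell
# Split a 128-hex-character encrypted Swarm reference into its two parts:
# the first 64 hex characters are the content reference,
# the last 64 are the decryption key (keep this half private).
ref="f7b1a45b70ee91d3dbfd98a2a692387f24db7279a9c96c447409e9205cf265baef29bf6aa294264762e33f6a18318562c86383dd8bfea2cec14fae08a8039bf3"
content_ref=$(printf '%s' "$ref" | cut -c1-64)
decryption_key=$(printf '%s' "$ref" | cut -c65-128)
echo "content reference: ${content_ref}"
echo "decryption key:    ${decryption_key}"
```

This is why the full reference must never be sent to a public gateway: whoever sees it also sees the key.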
```bash -curl -OJ http://localhost:1633/bzz/f7b1a45b70ee91d3dbfd98a2a692387f24db7279a9c96c447409e9205cf265baef29bf6aa294264762e33f6a18318562c86383dd8bfea2cec14fae08a8039bf3 +swarm-cli download f7b1a45b70ee91d3dbfd98a2a692387f24db7279a9c96c447409e9205cf265baef29bf6aa294264762e33f6a18318562c86383dd8bfea2cec14fae08a8039bf3 ``` :::danger -Never use public gateways when requesting full encrypted references. The hash contains sensitive key information which should be kept private. Run [your own node](/docs/bee/installation/quick-start) to use Bee's encryption features. +Never use public gateways when requesting full encrypted references. The hash contains sensitive key information which should be kept private. Run [your own node](/docs/bee/installation/getting-started) to use Bee's encryption features. ::: diff --git a/docs/develop/access-the-swarm/upload-and-download.md b/docs/develop/access-the-swarm/upload-and-download.md index b8dd91f1d..803d61308 100644 --- a/docs/develop/access-the-swarm/upload-and-download.md +++ b/docs/develop/access-the-swarm/upload-and-download.md @@ -3,29 +3,20 @@ title: Upload and Download id: upload-and-download --- +When you upload your files to the Swarm, they are split into 4kb _chunks_ and then distributed to nodes in the network that are responsible for storing and serving these parts of your content. +To learn more about how Swarm's decentralized storage solution works, check out the ["Concepts" section](/docs/concepts/what-is-swarm). -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; - -When you upload your files to the Swarm, they are split into 4kb -_chunks_ and then distributed to nodes in the network that are -responsible for storing and serving these parts of your content. -To learn more about how Swarm's decentralized storage solution works, -check out the ["Concepts" section](/docs/concepts/what-is-swarm). 
-
-In order for you to be able to upload any data to the network,
-you must first purchase [postage stamps](/docs/concepts/incentives/postage-stamps)
-and then use those stamps to upload your data. Keep on reading below to learn how.
+In order for you to be able to upload any data to the network, you must first purchase [postage stamps](/docs/concepts/incentives/postage-stamps) and then use those stamps to upload your data. Keep on reading below to learn how.

## Uploads and Download Endpoints Overview

-There are three endpoints which can be used for uploading and downloading data from Swarm, and each endpoint has different usage.
+There are three endpoints which can be used for uploading and downloading data from Swarm, and each endpoint has different usage.

1. [`/bytes`](/api/#tag/Bytes) - Used for uploading raw data, lacks convenience features present in the `/bzz` endpoint but allows for greater customization for advanced use cases.
-1. [`/bzz`](/api/#tag/BZZ) - Used for general download and uploads of files or collections of files.
-1. [`/chunks`](/api/#tag/Chunk) - Used for downloading and uploading individual chunks, and also for uploading streams of chunks.
+2. [`/bzz`](/api/#tag/BZZ) - Used for general downloads and uploads of files or collections of files.
+3. [`/chunks`](/api/#tag/Chunk) - Used for downloading and uploading individual chunks, and also for uploading streams of chunks.

-Generally speaking, the `/bzz` endpoint is appropriate for general common use cases such as uploading websites, sharing files, etc., while the `/chunks` and `bytes` endpoints allow for more complex uses cases. In this guide, we focus on the usage of the `/bzz` endpoint.
+Generally speaking, the `/bzz` endpoint is appropriate for general common use cases such as uploading websites, sharing files, etc., while the `/chunks` and `/bytes` endpoints allow for more complex use cases. In this guide, we focus on the usage of the `/bzz` endpoint.
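The differences above can be sketched as raw API calls (a rough illustration, assuming a node listening on `localhost:1633`; the batch ID and chunk reference are example values reused from elsewhere in these docs, and `data.bin`/`bee.jpg` are placeholder files):

```shell
# Example batch ID from these docs -- replace with one of your own.
BATCH=78a26be9b42317fe6f0cbea3e47cbd0cf34f533db4e9c91cf92be40eb2968264

# /bytes -- raw, unstructured data; returns a reference to the payload only
curl -X POST -H "Swarm-Postage-Batch-Id: $BATCH" \
  --data-binary @data.bin http://localhost:1633/bytes

# /bzz -- files with a name and MIME type; the common case
curl -X POST -H "Swarm-Postage-Batch-Id: $BATCH" -H "Content-Type: image/jpeg" \
  --data-binary @bee.jpg "http://localhost:1633/bzz?name=bee.jpg"

# /chunks -- fetch a single chunk by its address
curl http://localhost:1633/chunks/22cbb9cedca08ca8d50b0319a32016174ceb8fbaa452ca5f0a77b804109baa00
```

Note how only `/bzz` carries file metadata; `/bytes` and `/chunks` deal in bare payloads, which is what makes them suitable for advanced, custom use cases.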
## Upload a File @@ -39,78 +30,25 @@ To upload data to the swarm, you must perform the following steps: ## Purchasing Your Batch of Stamps -In order to upload your data to swarm, you must agree to burn (spend) -some of your xBZZ to signify to storer and fowarder nodes that this -content is valued. Before you proceed to the next step, you must buy -stamps! See this guide on how to [purchase an appropriate batch of stamps](/docs/develop/access-the-swarm/buy-a-stamp-batch). +In order to upload your data to Swarm, you must agree to burn (spend) some of your xBZZ to signify to storer and forwarder nodes that this content is valued. +Before you proceed to the next step, you must buy stamps! See this guide on how to [purchase an appropriate batch of stamps](/docs/develop/access-the-swarm/buy-a-stamp-batch). ## Using Stamps to Upload a File -Once your Bee node is running, a HTTP API is enabled for you to interact with. The command line utility [curl](https://ec.haxx.se/http/http-multipart) is a great way to interact with a Bee node's API. Swarm CLI alternative commands are also included as a more user-friendly way of interacting with your Bee node's API. - +Once your Bee node is running, you can use the `swarm-cli` to interact with the network. - - +### Swarm CLI -#### API - -First, let's check to see if the API is running as expected... - -```bash -curl http://localhost:1633 -``` - -``` -Ethereum Swarm Bee -``` - -Once running, a file can be uploaded by making an HTTP POST request to the `bzz` endpoint of the Bee API. - -Here, you must specify your _Batch ID_ in the `Swarm-Postage-Batch-Id` header, the file name in the `name` query parameter, and also pass the appropriate mime type in the `Content-Type` header. - -You may also wish to employ the erasure coding feature to add greater protection for your data, see [erasure coding page](/docs/develop/access-the-swarm/erasure-coding) for more details on its usage. 
- -```bash - curl -X POST -H "Swarm-Postage-Batch-Id: 54ba8e39a4f74ccfc7f903121e4d5d0fc40732b19efef5c8894d1f03bdd0f4c5" -H "Content-Type: text/plain" -H "Swarm-Encrypt: false" -v --data-binary "@test.txt" localhost:1633/bzz -``` - -:::danger -Data uploaded to Swarm is always public. In Swarm, sensitive files -must be [encrypted](/docs/develop/access-the-swarm/store-with-encryption) -before uploading to ensure their contents always remains private. -::: - -When succesful, a JSON formatted response will be returned, containing -a **swarm reference** or **hash** which is the _address_ of the -uploaded file, for example: - -```json -{ - "reference": "22cbb9cedca08ca8d50b0319a32016174ceb8fbaa452ca5f0a77b804109baa00" -} -``` - -Keep this _address_ safe, as we'll use it to retrieve our content later on. - - - - - -#### Swarm CLI -We have a `test.txt` file we wish to upload, let's check its contents. +We have a `test.txt` file we wish to upload, let's check its contents: ```bash cat test.txt This is a test file It will be used to test uploading and downloading from Swarm + ``` -Check that our node is operating normally. +Check that our node is operating normally. ```bash swarm-cli status @@ -120,7 +58,7 @@ swarm-cli status Bee API: http://localhost:1633 [OK] -Version: 2.0.0-50fcec7b +Version: 2.5.0 Mode: full Topology @@ -165,6 +103,7 @@ Remaining Capacity: 7.50 GB TTL: 91 days 1 hour 45 minutes 28 seconds Expires: 2024-02-01 ``` + Use the stamp ID to upload our file. ```bash @@ -178,7 +117,7 @@ Swarm hash: 1ffd2b67c8f34596a0b8375be29423c2d47e7995fcac8dd83fbd34e3d839b5a2 URL: http://localhost:1633/bzz/1ffd2b67c8f34596a0b8375be29423c2d47e7995fcac8dd83fbd34e3d839b5a2/ Stamp ID: daa8c5b3 Usage: 7% -Remaining Capacity: 7.50 GB +Remaining Capacity: 7.50 GB ``` Let's check that the file is downloadable. 
@@ -195,9 +134,6 @@ cat test.txt This is a test file It will be used to test uploading and downloading from Swarm ``` - - - In Swarm, every piece of data has a unique _address_ which is a unique and reproducible cryptographic hash digest. If you upload the same file twice, you will always receive the same hash. This makes working with data in Swarm super secure! @@ -209,47 +145,17 @@ Once your file has been **completely synced with the network**, you will be able ## Download a File -Once your file is uploaded to Swarm it can be easily downloaded. - - - - - -#### API - -Uploaded files can be retrieved with a simple HTTP GET request. - -Substitute the _hash_ in the last part of the URL with the reference -to your own data. - -:::tip -Make sure to include the trailing slash after the hash. -::: -```bash -curl -OJL http://localhost:1633/bzz/c02e7d943fbc0e753540f377853b7181227a83e773870847765143681511c97d/ -``` - -You may even simply navigate to the URL in your browser: - -[http://localhost:1633/bzz/22cb...aa00](http://localhost:1633/bzz/22cbb9cedca08ca8d50b0319a32016174ceb8fbaa452ca5f0a77b804109baa00) - - - - - +Once your file is uploaded to Swarm it can be easily downloaded. #### Swarm CLI -Simply use the `swarm-cli download` command followed by the hash of the file you wish to download. + +Simply use the `swarm-cli download` command followed by the hash of the file you wish to download. ```bash swarm-cli download 1ffd2b67c8f34596a0b8375be29423c2d47e7995fcac8dd83fbd34e3d839b5a2 test.txt OK ``` + And let's print out the file contents to confirm it was downloaded properly. ```bash @@ -257,11 +163,8 @@ cat test.txt This is a test file It will be used to test uploading and downloading from Swarm ``` - - - -## Upload a Directory +## Upload a Directory It is possible to use Bee to upload directories of files all at once. @@ -301,30 +204,17 @@ tar -cf ../my_website.tar . cd .. 
``` -Next, simply POST the `tar` file as binary data to Bee's `dir` endpoint, taking care to include the header `Content Type: application/x-tar`. - :::info -In order to upload your data to swarm, you must agree to burn some of your xBZZ to signify to storer and fowarder nodes that the content is important. Before you progress to the next step, you must buy stamps! See this guide on how to [purchase an appropriate batch of stamps](/docs/develop/access-the-swarm/buy-a-stamp-batch). +For instances where a single page app has a JavaScript router that handles url queries itself, simply pass `index.html` as the error document. Bee will pass over control to the JavaScript served by the `index.html` file in the circumstance that a path does not yield a file from the manifest. ::: +Upload the tar file using swarm-cli: + ```bash -curl \ - -X POST \ - -H "Content-Type: application/x-tar" \ - -H "Swarm-Index-Document: index.html" \ - -H "Swarm-Error-Document: error.html" \ - -H "Swarm-Collection: true" \ - -H "Swarm-Postage-Batch-Id: 78a26be9b42317fe6f0cbea3e47cbd0cf34f533db4e9c91cf92be40eb2968264" \ - --data-binary @my_website.tar http://localhost:1633/bzz +swarm-cli upload my_website.tar --stamp daa8c5b36e1cf481b10118a8b02430a6f22618deaa6ba5aa4ea660de66aa62db ``` -:::info -For instances where a single page app has a JavaScript router that handles url queries itself, simply pass `index.html` as the error document. Bee will pass over control to the JavaScript served by the `index.html` file in the circumstance that a path does not yield a file from the manifest. -::: - -When the upload is successful, Bee will return a JSON document containing the Swarm Reference. 
-
-```json
+```json
{
  "reference": "b25c89a401d9f26811680476619a1eb4a4e189e614bc6161cbfd8b343214917b"
}
```

@@ -344,8 +234,6 @@ Once your data has been [fully processed into the network](/docs/develop/access-

If you are not able to download your file from a different Bee node, you may be experiencing connection issues, see [troubleshooting connectivity](/docs/bee/installation/connectivity) for assistance.

-
-
## Public Gateways

To share files with someone who isn't running a Bee node yet, simply change the host in the link to be one of our public gateways. Send the link to your friends, and they will be able to download the file too!

@@ -354,10 +242,9 @@ To share files with someone who isn't running a Bee node yet, simply change the

-
## Deferred and Direct Uploads

-By default your bee instance will handle uploads in a _deferred_ manner, meaning that the data will be completely uploaded to your node locally before being then being uploaded to the Swarm network.
+By default your Bee instance will handle uploads in a _deferred_ manner, meaning that the data will be completely uploaded to your node locally before then being uploaded to the Swarm network.

In contrast, for a direct upload, the data will be completely uploaded to the Swarm network directly.
@@ -371,4 +258,3 @@ curl \ -H "swarm-postage-batch-id: 78a26be9b42317fe6f0cbea3e47cbd0cf34f533db4e9c91cf92be40eb2968264" \ --data-binary @my_data.tar http://localhost:1633/bzz ``` - diff --git a/docs/develop/tools-and-features/starting-a-test-network.md b/docs/develop/tools-and-features/starting-a-test-network.md index 3fbf8e59b..6331a8e93 100644 --- a/docs/develop/tools-and-features/starting-a-test-network.md +++ b/docs/develop/tools-and-features/starting-a-test-network.md @@ -24,7 +24,7 @@ swap-enable: false mainnet: false blockchain-rpc-endpoint: https://sepolia.dev.fairdatasociety.org verbosity: 5 -full-node: true +full-node: true ``` **config_2.yaml** @@ -41,7 +41,7 @@ swap-enable: false mainnet: false blockchain-rpc-endpoint: https://sepolia.dev.fairdatasociety.org verbosity: 5 -full-node: true +full-node: true ``` Note that for each node, we provide a different `api-addr`. If we had not specified different addresses here, we @@ -106,15 +106,15 @@ curl localhost:1633/addresses | jq ```json { - "overlay": "b1978be389998e8c8596ef3c3a54214e2d4db764898ec17ec1ad5f19cdf7cc59", - "underlay": [ - "/ip4/127.0.0.1/tcp/1634/p2p/QmQHgcpizgoybDtrQXCWRSGdTP526ufeMFn1PyeGd1zMEZ", - "/ip4/172.25.128.69/tcp/1634/p2p/QmQHgcpizgoybDtrQXCWRSGdTP526ufeMFn1PyeGd1zMEZ", - "/ip6/::1/tcp/1634/p2p/QmQHgcpizgoybDtrQXCWRSGdTP526ufeMFn1PyeGd1zMEZ" - ], - "ethereum": "0xd22cc790e2aef341827e1e49cc631d2a16898cd9", - "publicKey": "023b26ce8b78ed8cdb07f3af3d284c95bee5e038e7c5d0c397b8a5e33424f5d790", - "pssPublicKey": "039ceb9c1f0afedf79991d86d89ccf4e96511cf656b43971dc3e878173f7462487" + "overlay": "b1978be389998e8c8596ef3c3a54214e2d4db764898ec17ec1ad5f19cdf7cc59", + "underlay": [ + "/ip4/127.0.0.1/tcp/1634/p2p/QmQHgcpizgoybDtrQXCWRSGdTP526ufeMFn1PyeGd1zMEZ", + "/ip4/172.25.128.69/tcp/1634/p2p/QmQHgcpizgoybDtrQXCWRSGdTP526ufeMFn1PyeGd1zMEZ", + "/ip6/::1/tcp/1634/p2p/QmQHgcpizgoybDtrQXCWRSGdTP526ufeMFn1PyeGd1zMEZ" + ], + "ethereum": "0xd22cc790e2aef341827e1e49cc631d2a16898cd9", + "publicKey": 
"023b26ce8b78ed8cdb07f3af3d284c95bee5e038e7c5d0c397b8a5e33424f5d790", + "pssPublicKey": "039ceb9c1f0afedf79991d86d89ccf4e96511cf656b43971dc3e878173f7462487" } ``` @@ -135,7 +135,7 @@ welcome-message: "Bzz Bzz Bzz" swap-enable: false blockchain-rpc-endpoint: https://sepolia.dev.fairdatasociety.org verbosity: 5 -full-node: true +full-node: true ``` Now, we can shut our second node and reboot with the new configuration. @@ -154,14 +154,12 @@ curl -s http://localhost:1733/peers | jq Congratulations! You have made your own tiny two bee Swarm! 🐝 🐝 - ## Funding Nodes -While you have successfully set up two nodes, they are currently unfunded with either sETH or sBZZ. Sepolia ETH (sETH) is required for issuing transactions on the Sepolia testnet, and Sepolia BZZ (sBZZ) is required for your node to operate as a full staking node. +While you have successfully set up two nodes, they are currently unfunded with either sETH or sBZZ. Sepolia ETH (sETH) is required for issuing transactions on the Sepolia testnet, and Sepolia BZZ (sBZZ) is required for your node to operate as a full staking node. To fund our nodes, we need to first collect the blockchain addresses for each node. We can use the `/addresses` endpoint for this: - ```bash curl localhost:1633/addresses | jq ``` @@ -180,12 +178,12 @@ curl localhost:1633/addresses | jq } ``` -Then copy the address in the "ethereum" field. This is the address you need to send sETH and sBZZ to. There are many public faucets you can use to obtain Sepolia ETH, such as [this one](https://www.infura.io/faucet/sepolia) from Infura. +Then copy the address in the "ethereum" field. This is the address you need to send sETH and sBZZ to. There are many public faucets you can use to obtain Sepolia ETH, [here is a curated list](https://faucetlink.to/sepolia). 
To get Sepolia BZZ (sBZZ) you can use [this Uniswap market](https://app.uniswap.org/swap?outputCurrency=0x543dDb01Ba47acB11de34891cD86B675F04840db&inputCurrency=ETH), just make sure that you've switched to the Sepolia network in your browser wallet. You will need to send only a very small amount of sETH such as 0.01 sETH, to get started. You will need 10 sBZZ to run a full node with staking. -After sending sETH and sBZZ to your node's address which you copied above, restart your node and it should begin operating properly as a full node. +After sending sETH and sBZZ to your node's address which you copied above, restart your node and it should begin operating properly as a full node. -Repeat these same steps with the other node in order to complete a private test network of two full nodes. \ No newline at end of file +Repeat these same steps with the other node in order to complete a private test network of two full nodes. diff --git a/docs/references/glossary.md b/docs/references/glossary.md index 1bebf0675..40a7d8cc0 100644 --- a/docs/references/glossary.md +++ b/docs/references/glossary.md @@ -3,10 +3,9 @@ title: Glossary id: glossary --- - ## Swarm -The Swarm network consists of a collection of [Bee nodes](#bee) which work together to enable decentralised data storage for the next generation of censorship-resistant, unstoppable, serverless dapps. +The Swarm network consists of a collection of [Bee nodes](#bee) which work together to enable decentralised data storage for the next generation of censorship-resistant, unstoppable, serverless dapps. Swarm is also the name of the core organization that oversees the development and success of the Bee Swarm as a whole. They can be found at [ethswarm.org](https://www.ethswarm.org/). @@ -26,7 +25,7 @@ Bee nodes can act as both client and service provider, or solely as client or se ## Overlay -An overlay network is a virtual or logical network built on top of some lower level "underlay" network. 
Examples include the Internet as an overlay network built on top of the telephone network, and the p2p Bittorent network built on top of the Internet.
+An overlay network is a virtual or logical network built on top of some lower level "underlay" network. Examples include the Internet as an overlay network built on top of the telephone network, and the p2p BitTorrent network built on top of the Internet.

With Swarm, the overlay network is based on a [Kademlia DHT](https://en.wikipedia.org/wiki/Kademlia) with overlay addresses derived from each node's [Gnosis](#gnosis-chain) address. Swarm's overlay network addresses are permanent identifiers for each node and do not change over time.

@@ -36,11 +35,11 @@ Overlay addresses are a Keccak256 hash of a node’s Gnosis Chain address and th

## Neighborhood

-[Neighborhoods](/docs/concepts/DISC/neighborhoods) are nodes which are grouped together based on their overlay addresses and are responsible for storing the same chunks of data. The chunks which each neighborhood are responsible for storing are defined by the proximity order of the nodes and the chunks.
+[Neighborhoods](/docs/concepts/DISC/neighborhoods) are nodes which are grouped together based on their overlay addresses and are responsible for storing the same chunks of data. The chunks which each neighborhood is responsible for storing are defined by the proximity order of the nodes and the chunks.

## Sister Neighborhood

-A sister neighborhood is composed of nodes in the other half of an old neighborhood after a neighborhood split.
+A sister neighborhood is composed of nodes in the other half of an old neighborhood after a neighborhood split.

## Parent Neighborhood

@@ -48,25 +47,23 @@ A parent neighborhood is the neighborhood one proximity order shallower than the

## Underlay

-An underlay network is the low level network on which an overlay network is built. It allows nodes to find each other, communicate, and transfer data.
Swarm's underlay network is a p2p network built with [libp2p](https://libp2p.io/). Nodes are assigned underlay addresses which in contrast to their overlay addresses are not permanent and may change over time. +An underlay network is the low level network on which an overlay network is built. It allows nodes to find each other, communicate, and transfer data. Swarm's underlay network is a p2p network built with [libp2p](https://libp2p.io/). Nodes are assigned underlay addresses which in contrast to their overlay addresses are not permanent and may change over time. ## Swap -Swap is the p2p accounting protocol used for Bee nodes. It allows for the automated accounting and settlement of services between Bee nodes in the Swarm network. In the case that services exchanged between nodes is balanced equally, no settlement is necessary. In the case that one node is unequally indebted to another, settlement is made to clear the node's debts. Two key elements of the Swap protocol are [cheques and the chequebook contract](#cheques--chequebook). - +Swap is the p2p accounting protocol used for Bee nodes. It allows for the automated accounting and settlement of services between Bee nodes in the Swarm network. In the case that services exchanged between nodes is balanced equally, no settlement is necessary. In the case that one node is unequally indebted to another, settlement is made to clear the node's debts. Two key elements of the Swap protocol are [cheques and the chequebook contract](#cheques--chequebook). ## Cheques & Chequebook -Cheques are the off-chain method of accounting used by the Swap protocol where the issuing node signs a cheque specifying a beneficiary, a date, and an amount, and gives it to the recipient node as a token of promise to pay at a later date. 
+Cheques are the off-chain method of accounting used by the Swap protocol where the issuing node signs a cheque specifying a beneficiary, a date, and an amount, and gives it to the recipient node as a token of promise to pay at a later date. -The chequebook is the smart contract where the cheque issuer's funds are stored and where the beneficiary can cash the cheque received. +The chequebook is the smart contract where the cheque issuer's funds are stored and where the beneficiary can cash the cheque received. -The cheque and chequebook system reduces the number of required on-chain transactions by allowing multiple cheques to accumulate and be settled together as a group, and in the case that the balance of cheques between nodes is equal, no settlement transaction is required at all. +The cheque and chequebook system reduces the number of required on-chain transactions by allowing multiple cheques to accumulate and be settled together as a group, and in the case that the balance of cheques between nodes is equal, no settlement transaction is required at all. ## Postage Stamps -Postage stamps can be purchased with [xBZZ](#xbzz-token) and represent the right to store data on the Swarm network. In order to upload data to Swarm, a user must purchase a batch of stamps which they can then use to upload an equivalent amount of data to the network. - +Postage stamps can be purchased with [xBZZ](#xbzz-token) and represent the right to store data on the Swarm network. In order to upload data to Swarm, a user must purchase a batch of stamps which they can then use to upload an equivalent amount of data to the network. ## Kademlia @@ -74,11 +71,11 @@ Kademlia is a distributed hash table (DHT) which is commonly used in distributed ## Kademlia distance -Kademlia introduces an XOR based distance metric to define the relatedness of two addresses. In Kademlia nodes have numeric ids with the same length and format taken from the same namespace as the keys of the key/value pairs. 
Kademlia distance between node ids and keys is calculated through the XOR bitwise operation done over any ids or keys.
+Kademlia introduces an XOR-based distance metric to define the relatedness of two addresses. In Kademlia, nodes have numeric ids with the same length and format taken from the same namespace as the keys of the key/value pairs. Kademlia distance between node ids and keys is calculated through the XOR bitwise operation done over any ids or keys.

Note: For a Kademlia DHT, any standardized numerical format can be used for ids. However, within Swarm, ids are derived from a Keccak256 digest and are represented as 256 bit hexadecimal numbers. They are referred to as addresses or hashes.

-Swarm hash:
+Swarm hash:

> eada6722670c6de6da7d0470167bf14f6e4dc1b98476da94a7330041adec26a3

@@ -86,12 +83,12 @@ In the examples which follow, we use short binary numbers to increase example cl

Example: We have a Kademlia DHT consisting of only ten nodes with ids of 1 - 10. We want to find the distance between node 4 and 7. In order to do that, we perform the XOR bitwise operation:

-4 | 0100
+4 | 0100
7 | 0111
————XOR
-3 | 0011
+3 | 0011

-And we find that the distance between the two nodes is 3.
+And we find that the distance between the two nodes is 3.

## Chunk

@@ -101,39 +98,37 @@ When data is uploaded to Swarm, it is broken down into 4kb sized pieces which ar

Proximity Order is a concept defined in The Book of Swarm and is closely related to Kademlia distance. In contrast to distance which is an exact measure of the relatedness of two nodes, PO is a discrete measure relatedness between two nodes. By "discrete", we mean that PO is a general measure of relatedness rather than an exact measure of relatedness like the XOR distance metric of Kademlia.

-Proximity order is defined as the number of shared prefix bits of any two addresses. It is found by performing the XOR bitwise operation on the two addresses and counting how many leading 0 there are before the first 1.
+Proximity order is defined as the number of shared prefix bits of any two addresses. It is found by performing the XOR bitwise operation on the two addresses and counting how many leading zeros there are before the first 1.

Taking the previous example used in the Kademlia distance definition:

-4 | 0100
+4 | 0100
7 | 0111
————XOR
-3 | 0011
+3 | 0011

In the result we find that the distance is 3, and that there are two leading zeros. Therefore for the PO of these two nodes is 2.

+Both Proximity Order and distance are measures of the relatedness of ids; however, Kademlia distance is a more exact measurement.

-Both Proximity Order and distance are measures of the relatedness of ids, however Kademlia distance is a more exact measurement.
-
-
-Taking the previous example used in the Kademlia distance definition:
+Taking the previous example used in the Kademlia distance definition:

-5 | 0101
+5 | 0101
7 | 0111
————XOR
-2 | 0010
+2 | 0010

-Here we find that the distance between 5 and 7 is 2, and the PO is also two. Although 5 is closer to 7 than 4 is to 7, they both fall within the same PO, since PO is only concerned with the shared leading bits. PO is a fundamental concept to Swarm’s design and is used as the basic unit of relatedness when discussing the addresses of chunks and nodes. PO is also closely related to the concept of depth.
+Here we find that the distance between 5 and 7 is 2, and the PO is also two. Although 5 is closer to 7 than 4 is to 7, they both fall within the same PO, since PO is only concerned with the shared leading bits. PO is a fundamental concept to Swarm’s design and is used as the basic unit of relatedness when discussing the addresses of chunks and nodes. PO is also closely related to the concept of depth.

## Depth types

There are three fundamental categories of depth:
-
+
### 1. Topology related depth

This depth is defined in relation to the connection topology of a single node as the subject in relation to all the other nodes it is connected to.
It is referred to using several different terms which all refer to the same concept (Connectivity depth / Kademlia depth / neighborhood depth / physical depth)

-Connectivity depth refers to the saturation level of the node’s topology - the level to which the topology of a node’s connections has Kademlia connectivity. Defined as one level deeper than the deepest fully saturated level. A PO is defined as saturated if it has at least the minimum required level of connected nodes, which is set at 8 nodes in the current implementation of Swarm.
+Connectivity depth refers to the saturation level of the node’s topology - the level to which the topology of a node’s connections has Kademlia connectivity. It is defined as one level deeper than the deepest fully saturated level. A PO is defined as saturated if it has at least the minimum required number of connected nodes, which is set at 8 nodes in the current implementation of Swarm.

The output from the Bee API's `topology` endpoint:

@@ -141,36 +136,36 @@ The output from the Bee API's `topology` endpoint:

Here we can see the depth is 8, meaning that PO bin 7 is the deepest fully saturated PO bin:

-![](/img/depths2.png)
+![](/img/depths2.png)

Here we can confirm that at least 8 nodes are connected in bin 7.

-Connectivity depth is defined from the point of view of individual nodes, it is not defined as characteristic of the entire network. However, given a uniform distribution of node ids within the namespace and given enough nodes, all nodes should converge towards the same connectivity depth.
+Connectivity depth is defined from the point of view of individual nodes; it is not defined as a characteristic of the entire network. However, given a uniform distribution of node ids within the namespace and given enough nodes, all nodes should converge towards the same connectivity depth.
While this is sometimes referred to as Kademlia depth, the term “Kademlia depth” is not defined within the original Kademlia paper, rather it refers to the depth at which the network in question (Swarms) has the characteristics which fulfill the requirements described in the Kademlia paper. - + ### 2. Area of responsibility related depths -Area of responsibility refers to which chunks a node is responsible for storing. There are two concepts of depth related to a node’s area of responsibility - storage depth and reserve depth. Both reserve depth and storage depth are measures of PO which define the chunks a node is responsible for storing. - +Area of responsibility refers to which chunks a node is responsible for storing. There are two concepts of depth related to a node’s area of responsibility - storage depth and reserve depth. Both reserve depth and storage depth are measures of PO which define the chunks a node is responsible for storing. + ### 2a. Reserve Depth The PO which measures the node’s area of responsibility based on the theoretical 100% utilisation of all postage stamp batches (all the chunks which are eligible to be uploaded and stored are uploaded and stored). Has an inverse relationship with area of responsibility - as depth grows, area of responsibility gets smaller. ### 2b. Storage Depth -The PO which measures the node’s effective area of responsibility. Storage depth will equal reserve depth in the case of 100% utilisation - however 100% utilisation is uncommon. If after syncing all the chunks within the node’s area of responsibility at its reserve depth and the node still has sufficient space left, then the storage depth will decrease so that the area of responsibility doubles. - +The PO which measures the node’s effective area of responsibility. Storage depth will equal reserve depth in the case of 100% utilisation - however 100% utilisation is uncommon. 
If after syncing all the chunks within the node’s area of responsibility at its reserve depth and the node still has sufficient space left, then the storage depth will decrease so that the area of responsibility doubles.
+The PO which measures the node’s effective area of responsibility. Storage depth will equal reserve depth in the case of 100% utilisation - however 100% utilisation is uncommon. If, after syncing all the chunks within the node’s area of responsibility at its reserve depth, the node still has sufficient space left, then the storage depth will decrease so that the area of responsibility doubles.

### 3. Postage stamp batch and chunk related depths

### 3a. Batch depth
+
Batch depth is the value `d` which is defined in relation to the size of a postage stamp batch. The size of a batch is defined as the number of chunks which can be stamped by that batch (also referred to as the number of slots per batch, with one chunk per slot). The size is calculated by:

-* $$2^{d}$$
-* $$d$$ is a value selected by the batch issuer which determines how much data can be stamped with the batch
+- $$2^{d}$$
+- $$d$$ is a value selected by the batch issuer which determines how much data can be stamped with the batch

-### 3b. Bucket depth
+### 3b. Bucket depth

Bucket depth is the constant value which defines how many buckets the address space for chunks is divided into for postage stamp batches. Bucket depth is set to 16, and the number of buckets is defined as $$2^{bucket depth}$$

@@ -180,12 +175,11 @@ PLUR (name inspired by the [PLUR principles](https://en.wikipedia.org/wiki/PLUR)

## Bridged Tokens

-Bridged tokens are tokens from one blockchain which have been _bridged_ to another chain through a smart contract powered bridge. For example, xDAI and xBZZ on Gnosis Chain are the bridged version of DAI and BZZ on Ethereum.
+Bridged tokens are tokens from one blockchain which have been _bridged_ to another chain through a smart contract powered bridge. For example, xDAI and xBZZ on Gnosis Chain are the bridged versions of DAI and BZZ on Ethereum.

## BZZ Token

-BZZ is Swarm's [ERC-20](https://ethereum.org/en/developers/docs/standards/tokens/erc-20/) token issued on Ethereum.
-
+BZZ is Swarm's [ERC-20](https://ethereum.org/en/developers/docs/standards/tokens/erc-20/) token issued on Ethereum.
## xBZZ Token

@@ -204,9 +198,3 @@ xDAI is [DAI](https://developer.makerdao.com/dai/1/) [bridged](#bridged-tokens)
## Sepolia

Sepolia is an Ethereum testnet. It is an environment where smart contracts can be developed and tested without spending cryptocurrency with real value, and without putting valuable assets at risk. Tokens on Sepolia are often prefixed with a lower-case 's', for example: 'sBZZ', and because this is a test network they carry no monetary value. It is an environment where Bee smart contracts can be tested and interacted with without any risk of monetary loss.
-
-## Faucet
-
-A cryptocurrency faucet supplies small amounts of cryptocurrency to requestors (typically for testing purposes).
-
-It supplies small amounts of sBZZ and Sepolia ETH for anyone who submits a request at the [Swarm Discord](https://discord.gg/wdghaQsGq5) server by using the `/faucet` command in the #develop-on-swarm channel.
diff --git a/docusaurus.config.js b/docusaurus.config.js
index 0f0c0f3ba..7fa54703e 100644
--- a/docusaurus.config.js
+++ b/docusaurus.config.js
@@ -36,10 +36,6 @@ module.exports = {
        to: '/docs/bee/working-with-bee/configuration',
        from: '/docs/working-with-bee/configuration',
      },
-      {
-        to: '/docs/bee/installation/quick-start',
-        from: '/docs/installation/quick-start',
-      },
      {
        to: '/docs/develop/access-the-swarm/buy-a-stamp-batch',
        from: '/docs/develop/access-the-swarm/keep-your-data-alive',
@@ -159,7 +155,7 @@ module.exports = {
        position: 'left',
        items: [
          {
-            to: 'docs/bee/installation/quick-start',
+            to: 'docs/bee/installation/getting-started',
            label: 'Installation'
          },
          {
diff --git a/sidebars.js b/sidebars.js
index ab63a1432..e6e1f0594 100644
--- a/sidebars.js
+++ b/sidebars.js
@@ -47,11 +47,13 @@ module.exports = {
      type: 'category',
      label: 'Installation',
      items: [
-        'bee/installation/quick-start',
-        'bee/installation/install',
+        'bee/installation/getting-started',
+        'bee/installation/shell-script-install',
+        'bee/installation/docker',
+        'bee/installation/package-manager-install',
        'bee/installation/build-from-source',
+        'bee/installation/set-target-neighborhood',
        'bee/installation/hive',
-        'bee/installation/docker',
        'bee/installation/connectivity',
        'bee/installation/fund-your-node',
      ],
diff --git a/src/config/globalVariables.js b/src/config/globalVariables.js
index 960092225..c6c949651 100644
--- a/src/config/globalVariables.js
+++ b/src/config/globalVariables.js
@@ -1,5 +1,4 @@
 export const globalVariables = {
-  exampleVariable: 'Hello, World!',
  postageStampContract: '0x45a1502382541Cd610CC9068e88727426b696293',
  stakingContract: '0x445B848e16730988F871c4a09aB74526d27c2Ce8',
  redistributionContract: '0xFfF73fd14537277B3F3807e1AB0F85E17c0ABea5',
diff --git a/src/pages/index.js b/src/pages/index.js
index ceff8bcc6..f6382e08f 100644
--- a/src/pages/index.js
+++ b/src/pages/index.js
@@ -44,7 +44,7 @@ function Home() {

Install the Swarm Desktop client to quickly spin up a Bee node and start interacting with the Swarm network.

- +
diff --git a/static/img/bashtop_01.png b/static/img/bashtop_01.png
new file mode 100644
index 000000000..bfa505819
Binary files /dev/null and b/static/img/bashtop_01.png differ
diff --git a/static/img/bashtop_02.png b/static/img/bashtop_02.png
new file mode 100644
index 000000000..0e77fc086
Binary files /dev/null and b/static/img/bashtop_02.png differ
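The depth arithmetic in the glossary changes above (batch size $$2^{d}$$, a constant bucket depth of 16, and the halving/doubling of a node's area of responsibility as storage depth changes) can be sketched in a few lines of JavaScript. This is an illustrative sketch only — the function names are hypothetical and not part of the Bee API.

```javascript
// Illustrative sketch of the Swarm depth arithmetic described in the glossary.
// All names here are hypothetical; they are not part of the Bee API.

const BUCKET_DEPTH = 16; // protocol constant: the address space is split into 2^16 buckets

// Number of chunk slots in a postage stamp batch of depth d: 2^d
function batchSize(batchDepth) {
  return 2 ** batchDepth;
}

// Slots available in each of the 2^16 buckets: 2^(d - 16).
// Batch depth must therefore be greater than bucket depth.
function slotsPerBucket(batchDepth) {
  return 2 ** (batchDepth - BUCKET_DEPTH);
}

// Fraction of the total chunk address space a node covers at a given
// storage depth: 2^(-depth). Decreasing the depth by one doubles the
// node's area of responsibility, as described above.
function areaOfResponsibility(storageDepth) {
  return 2 ** -storageDepth;
}

console.log(batchSize(20));            // 1048576 chunk slots
console.log(slotsPerBucket(20));       // 16 slots per bucket
console.log(areaOfResponsibility(10)); // 0.0009765625 (1/1024 of the address space)
```

For example, a batch of depth 20 stamps at most 2^20 = 1,048,576 chunks, spread as 16 slots across each of the 65,536 buckets.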