A tool for testing Ceph locally using nested rootless podman containers
ceph-devstack is a tool that can deploy and manage containerized versions of teuthology and its associated services, to test Ceph (or just teuthology) on your local machine. It lets you avoid:
- Accessing Ceph's Sepia lab
- Needing dedicated storage devices to test Ceph OSDs
Basically, the goal is that you can test your Ceph branch locally using containers as storage test nodes.
It is currently under active development and has not yet had a formal release.
☑︎ CentOS 9.Stream should work out of the box
☑︎ CentOS 8.Stream mostly works - but has not yet passed a Ceph test
☐ A recent Fedora should work but has not been tested
☒ Ubuntu does not currently ship a new enough podman
☒ macOS will require special effort to support, since podman operations are done inside a VM
- A supported operating system
- podman 4.0+ using the `crun` runtime
  - On CentOS 8, modify `/etc/containers/containers.conf` to set the runtime
- Linux kernel 5.12+, or 4.15+ and `fuse-overlayfs`
- cgroup v2
  - On CentOS 8, see ./docs/cgroup_v2.md
- With podman <5.0, podman's DNS plugin, from the `podman-plugins` package
- A user account that has `sudo` access and is also a member of the `disk` group
- The following sysctl settings (see the sketch after this list for applying them persistently):
  - `fs.aio-max-nr=1048576`
  - `kernel.pid_max=4194304`
- If using SELinux in enforcing mode:
  - `setsebool -P container_manage_cgroup=true`
  - `setsebool -P container_use_devices=true`
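If you prefer to set these up by hand, a minimal sketch for CentOS follows; the sysctl.d file name is arbitrary, and the containers.conf edit is only needed where `crun` is not already the default runtime:

```sh
# Install podman's DNS plugin (only needed with podman <5.0)
sudo dnf install -y podman-plugins

# Persist the required sysctl settings (file name is illustrative)
sudo tee /etc/sysctl.d/90-ceph-devstack.conf <<'EOF'
fs.aio-max-nr=1048576
kernel.pid_max=4194304
EOF
sudo sysctl --system

# On CentOS 8, select the crun runtime in /etc/containers/containers.conf:
#   [engine]
#   runtime = "crun"
```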
`ceph-devstack doctor` will check the above and report any issues along with suggested remedies; its `--fix` flag will apply them for you.
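For example, a typical first run (a sketch; the output will vary by system):

```sh
ceph-devstack doctor        # report unmet prerequisites with suggested remedies
ceph-devstack doctor --fix  # also attempt to apply the remedies
```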
To add your user to the `disk` group manually:

sudo usermod -a -G disk $(whoami) # and re-login afterward
First, clone and bootstrap teuthology:

git clone https://github.com/ceph/teuthology/
cd teuthology && ./bootstrap
Then install ceph-devstack into a virtualenv:

python3 -m venv venv
source ./venv/bin/activate
python3 -m pip install git+https://github.com/zmc/ceph-devstack.git

ceph-devstack ships with a default configuration. It can be extended by placing a file at `~/.config/ceph-devstack/config.toml` or by using the `--config-file` flag.
`ceph-devstack config dump` will output the current configuration.
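A typical workflow for customizing it (a sketch; `$EDITOR` stands in for any editor):

```sh
mkdir -p ~/.config/ceph-devstack
"$EDITOR" ~/.config/ceph-devstack/config.toml  # add overrides here
ceph-devstack config dump                      # verify the merged result
```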
As an example, the following configuration will use a local image for paddles with the tag TEST; it will also create ten testnode containers; and will build its teuthology container from the git repo at ~/src/teuthology:
containers:
  paddles:
    image: localhost/paddles:TEST
  testnode:
    count: 10
  teuthology:
    repo: ~/src/teuthology
By default, pre-built container images are pulled from quay.io/ceph-infra. The images can be overridden via the config file. It's also possible to build images from on-disk git repositories.
First, you'll want to pull all the images:
ceph-devstack pull

Optional: if building any images from repos:
ceph-devstack build

Next, you can start the containers with:
ceph-devstack start

Once everything is started, a message similar to this will be logged:
View test results at http://smithi065.front.sepia.ceph.com:8081/
This link points to the running Pulpito instance. Test archives are also stored in the `--data-dir` (default: `~/.local/share/ceph-devstack`).
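To browse those archives from the host (assuming the default `--data-dir`):

```sh
# List archived test runs on disk
ls ~/.local/share/ceph-devstack
```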
To watch teuthology's output, you can:
podman logs -f teuthology

If you want testnode containers to be replaced as they are stopped and destroyed, you can:
ceph-devstack watch

When finished, this command removes all the resources that were created:
ceph-devstack remove

By default, we run the teuthology:no-ceph suite to self-test teuthology. If we wanted to test Ceph itself, we could use the orch:cephadm:smoke-small suite:
export TEUTHOLOGY_SUITE=orch:cephadm:smoke-small
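With that variable exported, a full cycle might look like this (a sketch; it assumes the suite selection is read when the containers are started):

```sh
ceph-devstack start        # schedules the suite named in TEUTHOLOGY_SUITE
podman logs -f teuthology  # follow the run
```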
It's possible to skip the automatic suite-scheduling behavior:

export TEUTHOLOGY_SUITE=none

If you need to use "real" testnodes and have access to a lab, there are a few additional steps to take. We will use the Sepia lab as an example below:
To give the teuthology container access to your SSH private key (via podman secret):
export SSH_PRIVKEY_PATH=$HOME/.ssh/id_rsa

To lock machines from the lab:
ssh teuthology.front.sepia.ceph.com
~/teuthology/virtualenv/bin/teuthology-lock \
--lock-many 1 \
--machine-type smithi \
--desc "teuthology dev testing"

Once you have your machines locked, you need to provide a list of their hostnames and their machine type:
export TEUTHOLOGY_TESTNODES="smithiXXX.front.sepia.ceph.com,smithiYYY.front.sepia.ceph.com"
export TEUTHOLOGY_MACHINE_TYPE="smithi"
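Altogether, a session against lab nodes might look like this (a sketch; the smithiXXX/smithiYYY names are placeholders, and it assumes ceph-devstack picks these variables up on the next start):

```sh
export SSH_PRIVKEY_PATH=$HOME/.ssh/id_rsa
export TEUTHOLOGY_TESTNODES="smithiXXX.front.sepia.ceph.com,smithiYYY.front.sepia.ceph.com"
export TEUTHOLOGY_MACHINE_TYPE="smithi"
ceph-devstack start
```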
- First, fork the repo if you have not done so.
- Clone your forked repo:
git clone https://github.com/<user-name>/ceph-devstack
- Set up the remote repo as upstream (this will prevent creating additional branches):
git remote add upstream https://github.com/zmc/ceph-devstack
- Create a virtualenv in the root directory of ceph-devstack and install the Python dependencies:
python3 -m venv venv
./venv/bin/pip3 install -e .
- Activate the venv:
source venv/bin/activate
- Run the doctor command to check and fix the dependencies that ceph-devstack needs:
ceph-devstack -v doctor --fix
- Build, create, and start all the containers in ceph-devstack:
ceph-devstack -v build
ceph-devstack -v create
ceph-devstack -v start
- Test the containers by waiting for teuthology to finish and printing its logs:
ceph-devstack wait teuthology
podman logs -f teuthology