To test Mayastor, you first need to be able to run Mayastor, so follow that guide for persistent hugepages & kernel module setup.
Or, for an ad-hoc setup:
- Ensure at least 3072 2MiB hugepages:

  ```sh
  echo 3072 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
  ```

- Ensure several kernel modules are loaded:

  ```sh
  modprobe xfs nvme_fabrics nvme_tcp nvme_rdma
  ```

- Ensure docker is installed and the service is running (OS specific).
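The checks above can be sketched as a read-only script (no sudo needed). Note that `lsmod` cannot see modules built into the kernel, hence the hedge in the output:

```shell
#!/usr/bin/env sh
# Sanity-check the ad-hoc prerequisites (read-only sketch; changes nothing).

# 2MiB hugepages currently reserved.
pages=$(cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 2>/dev/null || echo 0)
if [ "$pages" -ge 3072 ]; then
  echo "hugepages: OK ($pages)"
else
  echo "hugepages: need 3072, have $pages"
fi

# Kernel modules (built-in modules won't show up in lsmod).
for mod in xfs nvme_fabrics nvme_tcp nvme_rdma; do
  if lsmod 2>/dev/null | grep -q "^${mod} "; then
    echo "module $mod: loaded"
  else
    echo "module $mod: not listed by lsmod (may be built-in)"
  fi
done

# Docker daemon reachable?
docker info >/dev/null 2>&1 && echo "docker: OK" || echo "docker: not running"
```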
The Mayastor integration tests leverage docker in order to create a "cluster" with multiple components, each running as its own docker container within the same network. Specifically, the control-plane integration tests make use of the deployer, which can set up these "clusters" for you, along with a very extensive range of options.
Starting a deployer "cluster" is then very simple:

```sh
deployer start -s -i 2 -w 5s
[/core] [10.1.0.3] /home/tiago/git/mayastor/controller/target/debug/core --store etcd.cluster:2379
[/etcd] [10.1.0.2] /nix/store/7fvflmxl9a8hfznsc1sddp5az1gjlavf-etcd-3.5.13/bin/etcd --data-dir /tmp/etcd-data --advertise-client-urls http://[::]:2379 --listen-client-urls http://[::]:2379 --heartbeat-interval=1 --election-timeout=5
[/io-engine-1] [10.1.0.5] /bin/io-engine -N io-engine-1 -g 10.1.0.5:10124 -R https://core:50051 --api-versions V1 -r /host/tmp/io-engine-1.sock --ptpl-dir /host/tmp/ptpl/io-engine-1 -p etcd.cluster:2379
[/io-engine-2] [10.1.0.6] /bin/io-engine -N io-engine-2 -g 10.1.0.6:10124 -R https://core:50051 --api-versions V1 -r /host/tmp/io-engine-2.sock --ptpl-dir /host/tmp/ptpl/io-engine-2 -p etcd.cluster:2379
[/rest] [10.1.0.4] /home/tiago/git/mayastor/controller/target/debug/rest --dummy-certificates --https rest:8080 --http rest:8081 --workers=1 --no-auth
```

NOTE: Use `--io-engine-isolate` to give each engine a different cpu core.
NOTE: Use `--developer-delayed` for a sleep delay on each engine, reducing cpu usage.
NOTE: For all options, check `deployer start --help`.
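For example, the flags from the notes above can be combined (a sketch; the snippet deliberately no-ops when `deployer` is not on the PATH):

```shell
# Combine the deployer options above (sketch): give each io-engine its own
# cpu core and reduce its cpu usage. No-op when deployer is absent.
have_deployer=$(command -v deployer || true)
if [ -n "$have_deployer" ]; then
  deployer start -s -i 2 -w 5s --io-engine-isolate --developer-delayed
else
  echo "deployer not in PATH; enter the nix-shell first"
fi
```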
And with this we have a dual io-engine cluster which we can interact with:

```sh
rest-plugin get nodes
 ID           GRPC ENDPOINT   STATUS  VERSION
 io-engine-2  10.1.0.6:10124  Online  v1.0.0-997-g17488f4a7da3
 io-engine-1  10.1.0.5:10124  Online  v1.0.0-997-g17488f4a7da3
```

You can also use the swagger-ui available on localhost:8081.
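Besides the swagger-ui, you can hit the REST endpoint directly. A sketch, where the `/v0/nodes` path is an assumption you should confirm against the swagger-ui:

```shell
# Sketch: query the deployer cluster's REST API directly. The /v0 prefix
# is an assumption; browse the swagger-ui on localhost:8081 to confirm.
if curl -s --max-time 2 -o /tmp/nodes.json http://localhost:8081/v0/nodes; then
  reachable=yes
  cat /tmp/nodes.json
else
  reachable=no
  echo "REST endpoint not reachable; is the deployer cluster running?"
fi
```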
At the end of your experiment, remember to bring down the cluster:

```sh
deployer stop
```

TODO: We're still writing this! Sorry! Let us know if you want us to prioritize this!
Mayastor's unit tests, integration tests, and documentation tests run via the conventional `cargo test`.
An important note: some tests need to run as root, and so invoke sudo.
Remember to enter the nix-shell before running any of the commands herein.
All tests share a deployer "cluster" and network, which means they need to run one at a time.
Example, testing the deployer-cluster crate:

```sh
cargo test -p deployer-cluster -- --test-threads 1 --nocapture
```

To test all crates, simply use the provided script:

```sh
./scripts/rust/test.sh
```

There is a bit of extra setup for the python virtual environment.
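Since the tests must run one at a time, a small helper keeps the serial invocation handy (a sketch; the crate name is whatever you want to test):

```shell
# Helper: run one crate's tests with a single test thread (sketch).
test_crate() {
  cargo test -p "$1" -- --test-threads 1 --nocapture
}
# Usage: test_crate deployer-cluster
```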
To prepare:

```sh
tests/bdd/setup.sh
```

Then, to run the tests:

```sh
./scripts/python/test.sh
```

If you want to run the tests manually, you can also do the following:

```sh
. tests/bdd/setup.sh # source the virtual environment
pytest tests/bdd/features/csi/node/test_parameters.py -x
```

You can test with a custom io-engine by specifying environment variables:
- image:

  ```sh
  unset IO_ENGINE_BIN
  export IO_ENGINE_IMAGE=docker.io/tiagolobocastro/mayastor-io-engine:my-tag
  ```

- binary:

  ```sh
  unset IO_ENGINE_IMAGE
  export IO_ENGINE_BIN=~/mayastor/io-engine/target/debug/io-engine
  ```
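Putting it together, a sketch of switching to a locally built binary before a manual run (the path is illustrative; adjust it to your own checkout):

```shell
# Select a locally built io-engine for the tests (illustrative path).
# Unsetting the other variable avoids ambiguity over which one is used.
unset IO_ENGINE_IMAGE
export IO_ENGINE_BIN="$HOME/mayastor/io-engine/target/debug/io-engine"
# Then run, e.g.:
#   . tests/bdd/setup.sh
#   pytest tests/bdd/features/csi/node/test_parameters.py -x
echo "IO_ENGINE_BIN=$IO_ENGINE_BIN"
```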
If you need a K8s cluster, we have a terraform deployment available here. It can be used to deploy K8s on libvirt and lxd.
Warning
Please note that deployment on lxd is very experimental at the moment.
See for example: #1541
TODO: We're still writing this! Sorry! Let us know if you want us to prioritize this!
In the meantime, refer to the README for more help
```sh
❯ terraform apply --var="worker_vcpu=4" --var="worker_memory=8192" --var="worker_nodes=3" --auto-approve
...
Apply complete! Resources: 25 added, 0 changed, 0 destroyed.

Outputs:

kluster = <<EOT
[master]
ksmaster-1 ansible_host=10.0.0.223 ansible_user=tiago ansible_ssh_private_key_file=/home/tiago/.ssh/id_rsa ansible_ssh_common_args='-o StrictHostKeyChecking=no'

[nodes]
ksworker-1 ansible_host=10.0.0.89 ansible_user=tiago ansible_ssh_private_key_file=/home/tiago/.ssh/id_rsa ansible_ssh_common_args='-o StrictHostKeyChecking=no'
ksworker-2 ansible_host=10.0.0.157 ansible_user=tiago ansible_ssh_private_key_file=/home/tiago/.ssh/id_rsa ansible_ssh_common_args='-o StrictHostKeyChecking=no'
ksworker-3 ansible_host=10.0.0.57 ansible_user=tiago ansible_ssh_private_key_file=/home/tiago/.ssh/id_rsa ansible_ssh_common_args='-o StrictHostKeyChecking=no'

EOT
```

At the end of your experiment, remember to bring down the cluster:
```sh
❯ terraform destroy --auto-approve
```
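The `kluster` output is a ready-made Ansible inventory. A sketch of saving and using it, assuming terraform and ansible are installed and the cluster from `terraform apply` is still up (the snippet skips itself otherwise):

```shell
# Sketch: save the `kluster` inventory and ping all nodes with ansible.
if command -v terraform >/dev/null 2>&1 && command -v ansible >/dev/null 2>&1; then
  terraform output -raw kluster > inventory.ini
  ansible -i inventory.ini all -m ping
  ran=yes
else
  echo "terraform/ansible not available; skipping"
  ran=no
fi
```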