Testing Standards in K3s
Testing in K3s comes in 4 forms:
- Unit
- Integration
- Smoke
- End-to-End (E2E)
This document will explain when each test should be written and how each test should be generated, formatted, and run.
Note: all shell commands given are relative to the root k3s repo directory.
Unit Tests
Unit tests should be written when a component or function of a package needs testing. Unit tests should be used for "white box" testing.
Framework
All unit tests in K3s follow a Table Driven Test style. Specifically, K3s unit tests are automatically generated using the gotests tool. This is built into the Go vscode extension, has documented integrations for other popular editors, or can be run via the command line. Additionally, a set of custom templates is provided to extend the generated tests' functionality. To use these templates, call:
gotests --template_dir=<PATH_TO_K3S>/contrib/gotests_templates
Or in vscode, edit the Go extension setting Go: Generate Tests Flags and add --template_dir=<PATH_TO_K3S>/contrib/gotests_templates as an item.
To facilitate unit test creation, see the tests/util/runtime.go helper functions.
Format
All unit tests should be placed within the package of the file they test.
All unit test files should be named: <FILE_UNDER_TEST>_test.go.
All unit test functions should be named: Test_Unit<FUNCTION_TO_TEST> or Test_Unit<RECEIVER>_<METHOD_TO_TEST>.
See the etcd unit test as an example.
Running
go test ./pkg/... -run Unit
Note: As unit tests call functions directly, they are the primary drivers of K3s's code coverage metric.
Integration Tests
Integration tests should be used to test a specific functionality of K3s that exists across multiple Go packages, either via exported function calls or, more often, CLI commands. Integration tests should be used for "black box" testing.
Framework
All integration tests in K3s follow a Behavior Driven Development (BDD) style. Specifically, K3s uses Ginkgo and Gomega to drive the tests.
To generate an initial test, the command ginkgo bootstrap can be used.
To facilitate K3s CLI testing, see the tests/util/cmd.go helper functions.
Format
Integration tests can be placed in two areas:
- Next to the go package they intend to test.
- In tests/integration/<TEST_NAME> for package agnostic testing.
Package specific integration tests should use the <PACKAGE_UNDER_TEST>_test package.
Package agnostic integration tests should use the integration package.
All integration test files should be named: <TEST_NAME>_int_test.go.
All integration test functions should be named: Test_Integration<TEST_NAME>.
See the etcd snapshot test as a package specific example.
See the local storage test as a package agnostic example.
Running
Integration tests can be run with no K3s cluster present; each test spins up and tears down the K3s servers it needs.
Note: Integration tests must be run as root; if running as a sudo user, prefix the commands below with sudo -E env "PATH=$PATH".
go test ./pkg/... ./tests/integration/... -run Integration
Integration tests can be run on an existing single-node cluster via compile time flag, tests will skip if the server is not configured correctly.
go test -ldflags "-X 'github.com/rancher/k3s/tests/util.existingServer=True'" ./pkg/... ./tests/integration/... -run Integration
Integration tests can also be run via a Sonobuoy plugin on an existing single-node cluster.
./scripts/build-tests-sonobuoy
sudo KUBECONFIG=/etc/rancher/k3s/k3s.yaml sonobuoy run --plugin ./dist/artifacts/k3s-int-tests.yaml
Check the sonobuoy status and retrieve results:
sudo KUBECONFIG=/etc/rancher/k3s/k3s.yaml sonobuoy status
sudo KUBECONFIG=/etc/rancher/k3s/k3s.yaml sonobuoy retrieve
sudo KUBECONFIG=/etc/rancher/k3s/k3s.yaml sonobuoy results <TAR_FILE_FROM_RETRIEVE>
Smoke Tests
Smoke tests are defined under the tests/vagrant path at the root of this repository.
The sub-directories therein contain fixtures for running simple clusters to assert correct behavior for "happy path" scenarios. These fixtures are mostly self-contained Vagrantfiles describing single-node installations that are easily spun up with Vagrant for the libvirt and virtualbox providers:
- Install Script ➡️ on proposed changes to install.sh
- CentOS 7 (stand-in for RHEL 7)
- CentOS 8 (stand-in for RHEL 8)
- Leap 15.3 (stand-in for SLES)
- MicroOS (stand-in for SLE-Micro)
- Ubuntu 20.04 (Focal Fossa)
- Control Groups ➡️ on any code change
  - mode=unified (cgroups v2)
    - Fedora 34 (rootfull + rootless)
- Snapshotter ➡️ on any code change
When adding new installer test(s) please copy the prevalent style for the Vagrantfile.
Ideally, the boxes used for additional assertions will support the default virtualbox provider, which enables them to be used by our GitHub Actions workflow(s).
Framework
If you are new to Vagrant, Hashicorp has written some pretty decent introductory tutorials and docs, see:
- https://learn.hashicorp.com/collections/vagrant/getting-started
- https://www.vagrantup.com/docs/installation
Plugins and Providers
The libvirt and vmware_desktop providers cannot be used without first installing the relevant plugins, which are vagrant-libvirt and vagrant-vmware-desktop, respectively. Much like the default virtualbox provider, these will do nothing useful without also installing the relevant server runtimes and/or client programs.
Environment Variables
These can be set on the CLI or exported before invoking Vagrant:
- TEST_VM_CPUS (default ➡️ 2): The number of vCPUs for the guest to use.
- TEST_VM_MEMORY (default ➡️ 2048): The number of megabytes of memory for the guest to use.
- TEST_VM_BOOT_TIMEOUT (default ➡️ 600): The time in seconds that Vagrant will wait for the machine to boot and be accessible.
Running
The Install Script tests can be run by changing to the fixture directory and invoking vagrant up, e.g.:
cd tests/vagrant/install/centos-8
vagrant up
# the following provisioners are optional. they do not run by default but are invoked
# explicitly by the github actions workflow to avoid certain timeout issues on slow runners
vagrant provision --provision-with=k3s-wait-for-node
vagrant provision --provision-with=k3s-wait-for-coredns
vagrant provision --provision-with=k3s-wait-for-local-storage
vagrant provision --provision-with=k3s-wait-for-metrics-server
vagrant provision --provision-with=k3s-wait-for-traefik
vagrant provision --provision-with=k3s-status
vagrant provision --provision-with=k3s-procps
The Control Groups and Snapshotter tests require that the k3s binary is built at dist/artifacts/k3s. They are invoked similarly, i.e. vagrant up, but with different sets of named shell provisioners.
Take a look at the individual Vagrantfiles and/or the GitHub Actions workflows that harness them to get an idea of how they can be invoked.
End-to-End (E2E) Tests
E2E tests cover multi-node K3s configuration and administration: bringup, update, teardown etc. across a wide range of operating systems. E2E tests are run nightly as part of K3s quality assurance (QA).
Framework
End-to-end tests utilize Ginkgo and Gomega like the integration tests, but rely on Vagrant to provide the underlying cluster configuration.
Currently tested operating systems are:
- Ubuntu 20.04
- Leap 15.3 (stand-in for SLE-Server)
- MicroOS (stand-in for SLE-Micro)
Format
All E2E tests should be placed under tests/e2e/<TEST_NAME>.
All E2E test functions should be named: Test_E2E<TEST_NAME>.
An E2E test consists of two parts:
- Vagrantfile: a Vagrant file which describes and configures the VMs upon which the cluster and test will run
- <TEST_NAME>.go: a Go test file which calls vagrant up and controls the actual testing of the cluster
See the validate cluster test as an example.
Running
Generally, E2E tests are run as a nightly Jenkins job for QA. They can still be run locally, but additional setup may be required. By default, all E2E tests are designed with libvirt as the underlying VM provider. Instructions for installing libvirt and its associated vagrant plugin, vagrant-libvirt, can be found here. VirtualBox is also supported as a backup VM provider.
Once setup is complete, E2E tests can be run with:
go test ./tests/e2e/... -run E2E
Contributing New Or Updated Tests
We gladly accept new and updated tests of all types. If you wish to create a new test or update an existing test, please submit a PR with a title that includes the words <NAME_OF_TEST> (Created/Updated).