Compare commits

...

235 Commits

Author SHA1 Message Date
Brad Davidson f9130d537d Fix embedded mirror blocked by SAR RBAC and re-enable test
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-05-31 08:33:18 -07:00
Katherine Pata 7a0ea3c953
Add write-kubeconfig-group flag to server (#9233)
* Add write-kubeconfig-group flag to server
* update kubectl unable to read config message for kubeconfig mode/group

Signed-off-by: Katherine Pata <me@kitty.sh>
2024-05-30 23:45:34 -07:00
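A usage sketch for the new flag, paired with the existing --write-kubeconfig-mode option ("k3s-users" is an illustrative group name):

```
# Let members of an existing group read the admin kubeconfig
# without making it world-readable.
k3s server \
  --write-kubeconfig-mode 640 \
  --write-kubeconfig-group k3s-users
```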
Brad Davidson 307f07bd61 Fix issue caused by sole server marked as failed under load
If health checks are failing for all servers, make a second pass through the server list with health-checks ignored before returning failure

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-05-30 11:47:23 -07:00
Brad Davidson ed23a2bb48 Fix netpol crash when node remains tainted uninitialized
It is conceivable that users might take more than 60 seconds to deploy their own cloud-provider. Instead of exiting, we should wait forever, but with more logging to indicate what's being waited on.

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-05-28 23:34:44 -07:00
github-actions[bot] f2e7c01acf chore: Bump Trivy version
Made with ❤️️ by updatecli
2024-05-28 20:12:36 -07:00
dependabot[bot] 4cb4542c3a Bump ubuntu from 22.04 to 24.04 in /tests/e2e/scripts
Bumps ubuntu from 22.04 to 24.04.

---
updated-dependencies:
- dependency-name: ubuntu
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-28 20:12:14 -07:00
Brad Davidson 84b578ec74 Use busybox tar to avoid issues with fchmodat2 on arm
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-05-28 20:11:46 -07:00
dependabot[bot] 86875c97bb Bump alpine from 3.18 to 3.20 in /package
Bumps alpine from 3.18 to 3.20.

---
updated-dependencies:
- dependency-name: alpine
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-28 20:11:46 -07:00
dependabot[bot] de4cda57e6 Bump alpine from 3.18 to 3.20 in /conformance
Bumps alpine from 3.18 to 3.20.

---
updated-dependencies:
- dependency-name: alpine
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-28 20:09:39 -07:00
Brad Davidson 2eca3f1e2c Update golangci-lint to stop using deprecated skip files/dirs
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-05-28 16:24:57 -07:00
Brad Davidson f8e0648304 Convert remaining http handlers over to use util.SendError
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-05-28 16:24:57 -07:00
Brad Davidson ff679fb3ab Refactor supervisor listener startup and add metrics
* Refactor agent supervisor listener startup and authn/authz to use upstream
  auth delegators to perform for SubjectAccessReview for access to
  metrics.
* Convert spegel and pprof handlers over to new structure.
* Promote bind-address to agent flag to allow setting supervisor bind
  address for both agent and server.
* Promote enable-pprof to agent flag to allow profiling agents. Access
  to the pprof endpoint now requires client cert auth, similar to the
  spegel registry api endpoint.
* Add prometheus metrics handler.

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-05-28 16:24:57 -07:00
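A sketch of exercising the reworked endpoints, assuming the standard k3s client-admin credentials and the default supervisor port (paths and port are illustrative):

```
# Enable profiling; per this change the flag is now also available on agents.
k3s server --enable-pprof

# pprof (like the spegel registry API) now requires client cert auth
# on the supervisor port.
curl -sk \
  --cert /var/lib/rancher/k3s/server/tls/client-admin.crt \
  --key /var/lib/rancher/k3s/server/tls/client-admin.key \
  https://127.0.0.1:6443/debug/pprof/heap -o heap.pprof
```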
Brad Davidson 3d14092f76 Fix issue with k3s-etcd informers not starting
Start shared informer caches when the k3s-etcd controller wins leader election. Previously, these were only started when the main k3s apiserver controller won an election. If the leaders ended up going to different nodes, some informers wouldn't be started.

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-05-28 15:48:15 -07:00
Anuj Garg eb192197eb Update the binary_size_check script to append the .exe extension to the k3s binary name so that the stat command can find it
Signed-off-by: Anuj Garg <anujgarg@microsoft.com>
2024-05-28 13:30:53 -07:00
Brad Davidson 6683fcdb65 Bump klipper-helm image for tls secret support
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-05-28 13:12:47 -07:00
Brian Downs c2738231ec
update channel server for may 2024 (#10137) 2024-05-28 08:55:41 -07:00
thomasferrandiz 6e6f7995e7
Merge pull request #10146 from thomasferrandiz/flannel-v0.25.2
Bump flannel version to v0.25.2
2024-05-28 09:17:47 +02:00
Manuel Buil 3f62ec3207 Add extra log in e2e tests
Signed-off-by: Manuel Buil <mbuil@suse.com>
2024-05-27 16:11:12 +02:00
Nikos Pitsillos 99f543a2d4 fix: use absolute path
Signed-off-by: Nikos Pitsillos <npitsillos@gmail.com>
2024-05-27 16:10:57 +02:00
Nikos Pitsillos 86b2554772 test: copy vpn-auth-file to guest
Signed-off-by: Nikos Pitsillos <npitsillos@gmail.com>
2024-05-27 16:10:57 +02:00
Nikos Pitsillos b8f101fd89 test: increment agentCount
Signed-off-by: Nikos Pitsillos <npitsillos@gmail.com>
2024-05-27 16:10:57 +02:00
Nikos Pitsillos ab29054887 test: use absolute path to auth file
Signed-off-by: Nikos Pitsillos <npitsillos@gmail.com>
2024-05-27 16:10:57 +02:00
Nikos Pitsillos a8f88aa9e5 test: add agent with auth file
Signed-off-by: Nikos Pitsillos <npitsillos@gmail.com>
2024-05-27 16:10:57 +02:00
Thomas Ferrandiz 6dcd52eb8e Use TrafficManager interface when calling flannel
Signed-off-by: Thomas Ferrandiz <thomas.ferrandiz@suse.com>
2024-05-27 13:05:18 +00:00
Thomas Ferrandiz af7bcc3900 Bump flannel version to v0.25.2
Signed-off-by: Thomas Ferrandiz <thomas.ferrandiz@suse.com>
2024-05-27 13:05:18 +00:00
Brad Davidson aadec85501 Fix go.mod
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-05-24 13:04:16 -07:00
huangzy 6fcaad553d allow helm controller to set owner reference
Signed-off-by: huangzy <huangzynn@outlook.com>
2024-05-24 12:44:10 -07:00
Robert Rose 6886c0977f Follow directory symlinks in auto deploying manifests (#9288)
Signed-off-by: Robert Rose <robert.rose@mailbox.org>
2024-05-24 12:42:25 -07:00
0xMALVEE 3e48386c6e git_workflow filename correction
Signed-off-by: 0xMALVEE <m.alvee8141@gmail.com>
2024-05-24 12:41:11 -07:00
zouxianyu c1cb5d63b9 add missing kernel config check
Signed-off-by: zouxianyu <2979121738@qq.com>
2024-05-24 12:40:25 -07:00
linxin f24ba9d3a9 Validate resolv.conf for presence of nameserver entries
Co-authored-by: Brad Davidson <brad@oatmail.org>
Signed-off-by: linxin <linxin@geedgenetworks.com>
2024-05-24 12:39:34 -07:00
Brad Davidson 2669d67a9b Bump kine to v0.11.9 to fix pagination
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-05-24 11:34:36 -07:00
Brad Davidson afdcc83afe bump minio-go to v7.0.70
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-05-24 10:29:17 -07:00
Max 423675b955
Create ADR for branching strategy (#10147)
Signed-off-by: rancher-max <max.ross@suse.com>
2024-05-24 10:03:22 -07:00
Roberto Bonafiglia aa36341f66 Update kube-router version to v2.1.2
Signed-off-by: Roberto Bonafiglia <roberto.bonafiglia@suse.com>
2024-05-24 17:05:29 +02:00
Brad Davidson 5a0162d8ee Drop check for legacy traefik v1 chart
We have been bundling traefik v2 for three years; it's time to drop the legacy chart check

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-05-23 14:13:13 -07:00
Brad Davidson 37f97b33c9 Add support for svclb pod PriorityClassName
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-05-23 14:11:15 -07:00
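A sketch of assigning the priority class, assuming ServiceLB's svccontroller annotation convention (annotation key and names are assumptions for illustration):

```
# svclb pods for this service would be created with the given priority class.
kubectl annotate service my-lb-service \
  svccontroller.k3s.cattle.io/priorityclassname=system-node-critical
```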
Brad Davidson b453630478 Update local-path-provisioner helper script
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-05-23 14:00:00 -07:00
Brad Davidson 095ecdb034 Fix issue with local traffic policy for single-stack services on dual-stack nodes.
Just enable IP forwarding for all address families regardless of service address families.

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-05-23 13:54:30 -07:00
Brad Davidson e8950a0a3b Fix issue installing artifacts from builds with multiple runs
Also makes error handling and variable capitalization consistent with other functions.

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-05-23 13:50:24 -07:00
Brad Davidson 5cf4d75749 Bump spegel version
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-05-23 13:48:38 -07:00
Brad Davidson bf8b15e7ae bump etcd to v3.5.13
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-05-23 13:37:49 -07:00
Brad Davidson aaa578785c Bump containerd to v1.7.17
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-05-23 13:37:49 -07:00
Brad Davidson 30999f9a07 Switch stargz over to cri registry config_path
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-05-23 13:35:15 -07:00
Brad Davidson 7374010c0c Use fixed stream server bind address for cri-dockerd
Will now use 127.0.0.1:10010, same as containerd's CRI

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-05-23 13:33:27 -07:00
Brad Davidson 5f6b813cc8 Add WithSkipMissing to not fail import on missing blobs
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-05-23 13:32:22 -07:00
Manuel Buil 811de8b819 Fix bug when using tailscale config by file
Signed-off-by: Manuel Buil <mbuil@suse.com>
2024-05-23 11:55:20 +02:00
Brian Downs 80978b5b9a
Update to v1.30.1 (#10105) 2024-05-17 13:39:14 -07:00
Harrison Affel 1d22b6971f windows changes
Signed-off-by: Harrison Affel <harrisonaffel@gmail.com>
2024-05-16 14:40:27 -07:00
Hussein Galal 1cd7986b50
Update channels with 1.30 (#10097)
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-05-15 19:37:47 +03:00
Manuel Buil dba30ab21c Replace deprecated ruby function
Signed-off-by: Manuel Buil <mbuil@suse.com>
2024-05-13 09:41:28 +02:00
ShylajaDevadiga 14549535f1
Fix e2e tests (#10061)
Signed-off-by: ShylajaDevadiga <shylaja.devadiga@suse.com>
Co-authored-by: ShylajaDevadiga <shylaja.devadiga@suse.com>
2024-05-06 11:18:25 -07:00
Derek Nola 6531fb79b0
Deprecate pod-infra-container-image kubelet flag (#7409)
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-05-06 10:39:10 -07:00
Hussein Galal 144f5ad333
Kubernetes V1.30.0-k3s1 (#10063)
* kubernetes 1.30.0-k3s1

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Update go version to v1.22.2

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* update dynamiclistener and helm-controller

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* update go in go.mod to 1.22.2

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* update go in Dockerfiles

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* update cri-dockerd

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Add proctitle package with linux and windows constraints

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* go mod tidy

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fixing setproctitle function

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* update dynamiclistener to v0.6.0-rc1

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2024-05-06 19:42:27 +03:00
Derek Nola fe7d114c6a
Bump E2E opensuse leap to 15.6, fix btrfs test (#10057)
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-05-02 10:51:00 -07:00
Derek Nola 0981f0069d
Add E2E Split Server to Drone, support parrallel testing in Drone (#9940)
* Fix SE old test name
* E2E: support multiple VMs at once in CI with time prefix
* Add local binary support to split server test, add to drone CI
* Cleanup old VMs in drone

Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-04-29 13:57:22 -07:00
Pedro Tashima 5c94ce2cf8
update stable channel to v1.29.4+k3s1 (#10031)
Signed-off-by: tashima42 <pedro.tashima@suse.com>
2024-04-29 09:58:06 -03:00
Brad Davidson 94e29e2ef5 Make /db/info available anonymously from localhost
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-04-22 19:34:43 -07:00
Brad Davidson d3b60543e7 Fix 10 second etcd-snapshot request timeout
The default clientaccess request timeout is too short. Wait longer by default, and add the s3 timeout if s3 is enabled.

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-04-19 23:26:51 -07:00
Brad Davidson 5b431ca531 Fix on-demand snapshots not honoring folder
Also fix etcd s3 tests to actually check that the files are saved to s3 🙃

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-04-19 23:26:51 -07:00
Pedro Tashima d973fadbed
Update to v1.29.4 (#9960)
Signed-off-by: Pedro Tashima <pedro.tashima@suse.com>
2024-04-16 19:57:56 -03:00
Derek Nola 06b6444904
Add startup testlet on preloaded images (#9941)
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-04-15 09:52:50 -07:00
Derek Nola 4e26ee1f84
Match setup-go caching key in GitHub Actions (#9890)
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-04-15 09:52:24 -07:00
Roberto Bonafiglia 81cd630f87 Update kube-router to v2.1.0
Signed-off-by: Roberto Bonafiglia <roberto.bonafiglia@suse.com>
2024-04-12 09:00:57 +02:00
Thomas Anderson c59820a52a Allow LPP to read helper logs (#9834)
Signed-off-by: Thomas Anderson <127358482+zc-devs@users.noreply.github.com>
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-04-11 12:31:54 -07:00
Brad Davidson 3f906bee79 Update packaged manifests
* Update traefik chart to bump image tag and fix quoting
* Fix image quoting in flat manifests
* Update local-path-provisioner config to stop using deprecated hostpath volume type

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-04-11 09:22:51 -07:00
Brad Davidson b10cd8fe28 Bump latest to v1.29.3+k3s1
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-04-11 08:52:40 -07:00
Brad Davidson 4cc73b1fee Actually fix agent certificate rotation
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-04-10 09:21:01 -07:00
Brad Davidson 08f1022663 Don't log 'apiserver disabled' error sent by etcd-only nodes
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-04-09 15:36:33 -07:00
Brad Davidson 7d9abc9f07 Improve etcd load-balancer startup behavior
Prefer the address of the etcd member being joined, and seed the full address list immediately on startup.

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-04-09 15:36:33 -07:00
Brad Davidson fe465cc832 Move etcd snapshot management CLI to request/response
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-04-09 15:21:26 -07:00
Brad Davidson 0792461885 Bump containerd and cri-dockerd
Bump containerd to v1.7.15
Bump cri-dockerd to v0.3.12

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-04-09 11:09:30 -07:00
Manuel Buil a064ae2f17 Add quotes to avoid useless updatecli updates
Signed-off-by: Manuel Buil <mbuil@suse.com>
2024-04-08 19:49:29 +02:00
Brad Davidson 60248c42de Add supervisor cert/key to rotate list
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-04-05 10:59:17 -07:00
Derek Nola 9846a72e92
Bump spegel to v0.0.20-k3s1 (#9863)
* Bump spegel to v0.0.20-k3s1

* Remove deprecated libp2p Pretty function

* Remove quic-go pin
   The pinned version is now out of date; indirect dependencies are now newer, with the CVE issue fixed
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-04-05 08:43:19 -07:00
HaoTian Qi 0e118fe6d3
fix: agent volume in example docker compose (#9838)
Signed-off-by: 117503445 <t117503445@gmail.com>
2024-04-04 10:36:47 -07:00
Brad Davidson f2961fb5d2 Add workaround for containerd hosts.toml bug
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-04-03 20:47:54 -07:00
github-actions[bot] 49414a8def
chore: Bump Trivy version (#9840)
Made with ❤️️ by updatecli

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-04-02 12:02:20 -07:00
Manuel Buil 52712859c5 Add updatecli policy to update k3s-root
Signed-off-by: Manuel Buil <mbuil@suse.com>
2024-04-02 09:06:38 +02:00
Brad Davidson 7f659759dd Add certificate expiry check and warnings
* Add ADR
* Add `k3s certificate check` command.
* Add periodic check and events when certs are about to expire.
* Add metrics for certificate validity remaining, labeled by cert subject

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-03-28 12:05:21 -07:00
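A sketch of the resulting workflow (service name assumes a systemd install):

```
# Report remaining validity for the k3s-managed certificates.
k3s certificate check

# If certificates are close to expiry, rotate them while k3s is stopped.
systemctl stop k3s
k3s certificate rotate
systemctl start k3s
```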
Derek Nola 6624273a97 Fix embeddedmirror test
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-03-28 10:12:54 -07:00
Derek Nola 93bcaccad1 E2E setup: Only install jq when we need it
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-03-28 10:12:54 -07:00
Derek Nola c98ca14198 Add wasm test to e2e matrix
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-03-28 10:12:54 -07:00
Derek Nola 6a42c6fcfe
Remove old pinned dependencies (#9806)
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-03-28 10:09:48 -07:00
Derek Nola 14f54d0b26
Transition from deprecated pointer library to ptr (#9801)
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-03-28 10:07:02 -07:00
Vitor Savian 5d69d6e782 Add tls for kine
Signed-off-by: Vitor Savian <vitor.savian@suse.com>

Bump kine

Signed-off-by: Vitor Savian <vitor.savian@suse.com>

Add integration tests for kine with tls

Signed-off-by: Vitor Savian <vitor.savian@suse.com>
2024-03-28 11:12:07 -03:00
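A sketch of running kine against an external datastore over TLS, using the existing --datastore-* flags (endpoint and cert paths illustrative):

```
k3s server \
  --datastore-endpoint='postgres://kine:pass@db.example.com:5432/k3s' \
  --datastore-cafile=/etc/ssl/k3s/datastore-ca.crt \
  --datastore-certfile=/etc/ssl/k3s/datastore-client.crt \
  --datastore-keyfile=/etc/ssl/k3s/datastore-client.key
```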
Brad Davidson c51d7bfbd1 Add health-check support to loadbalancer
* Adds support for health-checking loadbalancer servers. If a
  health-check fails when dialing, all existing connections to the
  server will be closed.
* Wires up a remotedialer tunnel connectivity check as the health check
  for supervisor/apiserver connections.
* Wires up a simple ping request to the supervisor port as the health
  check for etcd connections.

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-03-27 16:50:27 -07:00
Brad Davidson edb0440017 Fix etcd snapshot reconcile for agentless nodes
Disable cleanup of orphaned snapshots and patching of node annotations if running agentless

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-03-27 16:44:36 -07:00
Brad Davidson 7474a6fa43 Add /etc/passwd and /etc/group to k3s docker image
Fixes `cannot find name for user ID 0: No such file or directory` errors when checking user info in docker image

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-03-27 16:41:46 -07:00
Brian Downs 6c52235848
update channel server (#9808) 2024-03-27 14:28:39 -07:00
Derek Nola c47c85e5da
Move to ubuntu 23.10 for E2E tests (#9755)
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-03-27 09:55:13 -07:00
github-actions[bot] b5d0d4ee21
Bump Trivy version (#9780)
Made with ❤️️ by updatecli

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-03-27 09:20:44 -07:00
Derek Nola 41377540fd
Use ubuntu latest for better golang caching keys (#9711)
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-03-27 09:19:56 -07:00
Derek Nola 5461c3e1c1 Bump k3s-root
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-03-27 09:19:37 -07:00
Vitor Savian 3f649e3bcb Add a new error when kine is used with disable-apiserver or disable-etcd
Signed-off-by: Vitor Savian <vitor.savian@suse.com>
2024-03-27 10:59:34 -03:00
Brad Davidson f099bfa508 Fix error when image has already been pulled
CRI and containerd APIs disagree about the registry names - CRI supports
index.docker.io as an alias for docker.io, while containerd does not.
Use the actual stored RepoTag to determine what image to ask containerd for.

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-03-26 16:19:40 -07:00
Brad Davidson 65cd606832 Respect cloud-provider fields set by kubelet
Don't clobber the providerID field and instance-type/region/zone labels if provided by the kubelet. This allows the user to set these to the correct values when using the embedded CCM in a real cloud environment.

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-03-26 16:18:34 -07:00
Brad Davidson d7cdbb7d4d Send error response if member list cannot be retrieved
Prevents joining nodes from being stuck with bad initial member list if there is a transient failure, or if they try to join themselves

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-03-26 15:17:15 -07:00
Brad Davidson 7a2a2d075c Move error response generation code into util
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-03-26 15:17:15 -07:00
Brian Downs 8aecc26b0f
Update to v1.29.3-k3s1 and Go 1.21.8 (#9747) 2024-03-17 13:33:54 -07:00
Brad Davidson bba3e3c66b Fix wildcard entry upstream fallback
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-03-12 23:31:16 -07:00
Derek Nola 364dfd8b89 Fix flaky check in btrfs test
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-03-08 10:54:28 -08:00
Derek Nola 21c170512c Fix e2e vagrant cacheing
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-03-08 10:54:28 -08:00
Derek Nola aea81c0822 Run docker tests in E2E GH Action
Build image with new input option
Run most of the basic docker tests in E2E
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-03-08 10:54:28 -08:00
John ec5d34dac0
remove repetitive words (#9671)
Signed-off-by: hishope <csqiye@126.com>
2024-03-08 09:44:16 -08:00
Brad Davidson fe2ca9ecf1 Warn and suppress duplicate registry mirror endpoints
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-03-07 16:30:06 -08:00
Derek Nola 9bd4c8a9fc
Bump upload and download actions to v4 (#9666)
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-03-07 15:56:43 -08:00
Brad Davidson 2a091a693a Bump metrics-server to v0.7.0
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-03-07 12:45:29 -08:00
Derek Nola 1c8be1d011 Improve E2E Aftersuite cleanup
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-03-06 14:04:05 -08:00
Derek Nola af4c51bfc3 Move to ubuntu 2204 for all E2E tests
Simplify node roles

Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-03-06 14:04:05 -08:00
Derek Nola da7312d082 Convert snapshotter test to an e2e test
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-03-06 14:04:05 -08:00
Derek Nola d022a506d5 Migrate E2E tests to GitHub Actions
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-03-06 14:04:05 -08:00
Derek Nola 75ccaf9942 Allow non-sudo vagrant
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-03-06 14:04:05 -08:00
Brad Davidson 6f331ea7b5 Include flannel version in flannel cni plugin version
We were misreporting the flannel version as the flannel cni plugin version; restore the actual flannel version as build metadata

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-03-06 09:46:48 -08:00
github-actions[bot] d37d7a40da
Bump Trivy version (#9528)
Made with ❤️️ by updatecli

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-03-06 08:52:55 -08:00
Roberto Bonafiglia 88c431aea5 Adjust first node-ip based on configured clusterCIDR
Signed-off-by: Roberto Bonafiglia <roberto.bonafiglia@suse.com>
2024-03-06 11:10:41 +01:00
Manuel Buil 1fe0371e95 Improve tailscale e2e test
Signed-off-by: Manuel Buil <mbuil@suse.com>
2024-03-06 08:26:36 +01:00
Rishikesh Nair 82cfacb2f3 Update contrib/util/check-config.sh
Co-authored-by: Brad Davidson <brad@oatmail.org>
Signed-off-by: Rishikesh Nair <42700059+rishinair11@users.noreply.github.com>
2024-03-05 15:10:36 -08:00
Rishikesh Nair ce0765c9f8 Rename `RAW_OUTPUT` -> `NO_COLOR`
Also, if NO_COLOR is empty, output will be colored; otherwise it will not be.

Signed-off-by: Rishikesh Nair <alienware505@gmail.com>
2024-03-05 15:10:36 -08:00
Rishi ff7cfa2235 Disable color outputs using RAW_OUTPUT env var
Setting this environment variable prevents the text from being wrapped in ANSI color codes, so that raw output can be printed.

Signed-off-by: Rishikesh Nair <alienware505@gmail.com>
2024-03-05 15:10:36 -08:00
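Based on these two commits, usage would look like:

```
# Default: NO_COLOR unset or empty, output is colored.
./contrib/util/check-config.sh

# Raw, uncolored output, e.g. for piping into a file or CI log.
NO_COLOR=1 ./contrib/util/check-config.sh
```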
Vitor Savian 59c724f7a6 Fix wildcard with embedded registry test
Signed-off-by: Vitor Savian <vitor.savian@suse.com>
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-03-05 14:38:36 -08:00
Flavio Castelli f82d438f39 e2e tests: cover WebAssembly integration
Add an e2e test that runs some demo WebAssembly applications
using the dedicated containerd shims.

Note: this is not an integration test because we need to install some
binaries (the special containerd shims) on the host.

Signed-off-by: Flavio Castelli <fcastelli@suse.com>
2024-03-05 13:12:08 -08:00
Flavio Castelli 64e4f0e6e7 fix: use correct wasm shims names
Fix the wasm shim detection and the containerd configuration generation.

Prior to this commit, the binary and the `RuntimeType` values were not
correct.

Signed-off-by: Flavio Castelli <fcastelli@suse.com>
2024-03-05 13:12:08 -08:00
Tal Yitzhak 2c4773a5aa
chore(deps): Remediating CVEs found by trivy; CVE-2023-45142 on otelrestful and CVE-2023-48795 on golang.org/x/crypto (#9513)
Signed-off-by: Tal Yitzhak <taly@lightrun.com>
Co-authored-by: Tal Yitzhak <taly@lightrun.com>
2024-03-05 10:56:38 -08:00
Brad Davidson 091a5c8965 Don't register embedded registry address as an upstream registry
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-03-04 15:11:26 -08:00
Brad Davidson b5a4846e9d Remove filtering of wildcard mirror entry
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-03-04 15:11:26 -08:00
Brad Davidson 84a071a81e Add env var to allow spegel mirroring of `latest` tag
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-03-04 15:11:26 -08:00
Philip Laine 26feb25c40 Bump spegel to v0.0.18-k3s4
Signed-off-by: Philip Laine <philip.laine@gmail.com>
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-03-04 15:11:26 -08:00
Brad Davidson 88d30f940d Use and version flannel/cni-plugin properly
Moves us closer to using the proper upstream for our flannel CNI plugin, instead of the snapshot that is vendored into our plugins fork.

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-03-04 13:36:13 -08:00
Brad Davidson 0b3593205a Move snapshot-retention to EtcdSnapshotFlags in order to support loading from config
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-03-04 12:09:29 -08:00
Brad Davidson 3576ed4327 Clean up snapshotDir create/exists logic
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-03-04 12:09:29 -08:00
Brad Davidson b164d7a270 Fix additional corner cases in registries handling
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-03-04 11:59:33 -08:00
Derek Nola 29c73e6965
Fix setup-go typos (#9634)
* Fix setup-go typos

Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-03-04 10:18:36 -08:00
Derek Nola 935ad1dbac
Move docker tests into tests folder (#9555)
* Move docker tests into tests folder
* Remove old test certs
* Update TESTING.md with docker test inf

Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-03-04 09:15:40 -08:00
Derek Nola 138a107f4c
Reenable Install and Snapshotter Testing (#9601)
* Use regular ubuntu runners for install and snapshotter tests
* Workaround for vagrant box caching
* Update testing readme
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-03-04 09:11:04 -08:00
Brooks Newberry 81a60de256
update stable channel to v1.28.7+k3s1 (#9615) 2024-03-01 14:40:41 -08:00
Brad Davidson 109e3e454c Bump helm-controller/klipper-helm versions
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-03-01 13:55:36 -08:00
Brad Davidson 82432a2df7 Fix issue with etcd node name missing hostname
* Set ServerNodeName in snapshot CLI setup
* Raise an error if ServerNodeName ends up empty some other way
* Fix status controller to use etcd node name annotation instead of prefix checking

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-03-01 13:52:53 -08:00
Brad Davidson 513c3416e7 Tweak netpol node wait logs
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-03-01 12:01:34 -08:00
Brad Davidson be569f65a9 Fix NodeHosts on dual-stack clusters
* Add both dual-stack addresses to the node hosts file
* Add hostname to hosts file as alias for node name to ensure consistent resolution

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-03-01 11:59:59 -08:00
Edgar Lee 8c83b5e0f3 Rootless mode also bind service nodePort to host for LoadBalancer type
Signed-off-by: Edgar Lee <edgarhinshunlee@gmail.com>
2024-03-01 10:43:19 -08:00
Derek Nola 3e948aa0d5
Correct formatting of GH PR sha256sum artifact (#9472)
* Conform to how the install script wants the sha256sum name
* Remove no-op sed for GH PR install

Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-03-01 08:45:01 -08:00
Derek Nola 8f777d04f8
Better GitHub CI caching strategy for golang (#9495)
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-03-01 08:41:09 -08:00
Manuel Buil 736fb2bc8d Add an integration test for flannel-backend=none
Signed-off-by: Manuel Buil <mbuil@suse.com>
2024-03-01 12:08:09 +01:00
Manuel Buil 3b4f13f28d Update klipper-lb image version
Signed-off-by: Manuel Buil <mbuil@suse.com>
2024-03-01 11:28:12 +01:00
Derek Nola fa37d03395
Update install test OS matrix (#9480)
* Remove old cgroupsv2 test
* Consolidate install test clauses into functions
* Unpin vagrant-k3s plugin version, run latest
* Add ubuntu-2204 as install test, remove ubuntu-focal
* Update nightly install matrix
* Move to Leap 15.5
* Consolidate vagrant box caching key to improve cache hits on all VM testing

Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-02-29 15:41:56 -08:00
Derek Nola 922c5a6bed
Unit Testing Matrix and Actions bump (#9479)
cache is now on by default

Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-02-29 15:41:05 -08:00
Derek Nola 57e11c72d1
Testing ADR (#9562)
* Update contributing with new links
* Testing ADR

Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-02-29 15:36:11 -08:00
Brad Davidson 86f102134e Fix netpol startup when flannel is disabled
Don't break out of the poll loop if we can't get the node, RBAC might not be ready yet.

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-02-26 14:58:48 -08:00
Brad Davidson fae0d99863 Use 3/2/1 cluster for split role test
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-02-21 12:21:19 -08:00
Derek Nola f90fd7b744 Change default number of etcd nodes in E2E splitserver test
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-02-21 12:21:19 -08:00
Derek Nola fae41a8b2a Rename AgentReady to ContainerRuntimeReady for better clarity
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-02-21 12:21:19 -08:00
Derek Nola 91cc2feed2 Restore original order of agent startup functions
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-02-21 12:21:19 -08:00
Brooks Newberry 1c1746114c
remove e2e logs drone step (#9517)
Signed-off-by: Brooks Newberry <brooks@newberry.com>
2024-02-16 06:32:55 -08:00
Derek Nola 085ccbb0ac
Fix drone publish for arm (#9503)
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-02-15 16:53:10 -08:00
Brooks Newberry 3e13e3619c
Update Kubernetes to v1.29.2 (#9493)
Signed-off-by: Brooks Newberry <brooks@newberry.com>
2024-02-15 12:48:20 -08:00
Brad Davidson de825845b2 Bump kine and set NotifyInterval to what the apiserver expects
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-02-09 14:22:38 -08:00
Edgar Lee 0ac4c6a056 Expose rootless containerd socket directories for external access
Signed-off-by: Edgar Lee <edgarhinshunlee@gmail.com>
2024-02-09 14:22:03 -08:00
Edgar Lee 14c6c63b30 Expose rootless state dir under ~/.rancher/k3s/rootless
Signed-off-by: Edgar Lee <edgarhinshunlee@gmail.com>
2024-02-09 14:21:52 -08:00
Oleg Matskiv e3b237fc35 Don't verify the node password if the local host is not running an agent
Signed-off-by: Oleg Matskiv <oleg.matskiv@gmail.com>
2024-02-09 14:21:43 -08:00
Mikhail Vorobyov 701e7e45ce Fix iptables check when sbin isn't in user PATH
Signed-off-by: Mikhail Vorobyov <mikhail.vorobev@uni.lu>
2024-02-09 13:59:47 -08:00
Derek Nola fa11850563
Readd `k3s secrets-encrypt rotate-keys` with correct support for KMSv2 GA (#9340)
* Reorder copy order for caching
* Enable longer http timeout requests

Signed-off-by: Derek Nola <derek.nola@suse.com>

* Setup reencrypt controller to run on all apiserver nodes
* Fix reencryption for disabling secrets encryption, reenable drone tests
2024-02-09 11:37:37 -08:00
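A sketch of the restored flow on a server node (subcommand names per this change):

```
# Inspect the current secrets encryption state, then rotate keys.
k3s secrets-encrypt status
k3s secrets-encrypt rotate-keys
```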
Oliver Larsson cfc3a124ee
[Testing]: Test_UnitApplyContainerdQoSClassConfigFileIfPresent (Created) (#8945)
Problem:
Function not tested.

Solution:
Unit test added.

Signed-off-by: Oliver Larsson <larsson.e.oliver@gmail.com>
2024-02-09 11:28:06 -08:00
Roberto Bonafiglia cc04edf05f Update Kube-router to v2.0.1
Signed-off-by: Roberto Bonafiglia <roberto.bonafiglia@suse.com>
2024-02-09 20:14:51 +01:00
Harrison Affel a36cc736bc allow executors to define containerd and docker behavior
Signed-off-by: Harrison Affel <harrisonaffel@gmail.com>
2024-02-09 15:51:35 -03:00
Derek Nola b1323935dc
Add codcov secret for integration tests on Push (#9422)
Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-02-08 09:01:36 -08:00
Brad Davidson 753c00f30c Consistently handle component exit on shutdown
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-02-07 10:23:54 -08:00
Brad Davidson 9e076db724 Bump cri-dockerd
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-02-07 10:23:54 -08:00
Vitor Savian e9cec46a23 Runtimes refactor using exec.LookPath
Signed-off-by: Vitor Savian <vitor.savian@suse.com>
2024-02-07 15:06:16 -03:00
Vitor Savian f9ee66f4d8 Changed how lastHeartBeatTime works in the etcd condition
Signed-off-by: Vitor Savian <vitor.savian@suse.com>
2024-02-07 15:05:33 -03:00
Paulo Gomes 358c4d6aa9
build: Align drone base images (#8959)
Align the base images used in drone with the images used across the
ecosystem.

Signed-off-by: Paulo Gomes <paulo.gomes@suse.com>
2024-02-07 09:25:06 -08:00
Manuel Buil 950473e35f Bump flannel version
Signed-off-by: Manuel Buil <mbuil@suse.com>
2024-02-07 10:19:06 +01:00
Brad Davidson 8224a3a7f6 Fix ipv6 endpoint address selection for on-demand snapshots
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-02-06 18:02:36 -08:00
Brad Davidson 888f866dae Fix issue with coredns node hosts controller
The nodes controller was reading from the configmaps cache, but did not add any handlers, so if no other controller added configmap handlers, the cache would remain empty.

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-02-06 18:02:06 -08:00
Brad Davidson 77ba9904d1 Bump CNI plugins to v1.4.0
Ref: https://github.com/rancher/plugins/compare/v1.3.0-k3s1...v1.4.0-k3s2

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-02-06 17:49:14 -08:00
Brad Davidson 6ec1926f88 Add check for etcd-snapshot-dir and fix panic in Walk
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-02-06 17:47:33 -08:00
Brad Davidson 82e3c32c9f Retry startup snapshot reconcile
The reconcile may run before the kubelet has created the node object; retry until it succeeds

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-02-06 17:46:24 -08:00
Brad Davidson 4005600d4e Fix excessive retry on snapshot reconcile
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-02-06 17:46:24 -08:00
Pedro Tashima 6a57db553f
update channel (#9388)
Signed-off-by: Pedro Tashima <pedro.tashima@suse.com>
Co-authored-by: Pedro Tashima <pedro.tashima@suse.com>
2024-02-06 22:14:52 -03:00
dependabot[bot] 5c92345423
Bump codecov/codecov-action from 3 to 4 (#9353)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 3 to 4.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v3...v4)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-06 16:33:59 -08:00
github-actions[bot] a324146b76
Bump Trivy version (#9237)
* chore: Bump Trivy version

Made with ❤️️ by updatecli

* chore: Bump Trivy version

Made with ❤️️ by updatecli

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-02-06 16:33:34 -08:00
Derek Nola fcd1108e73
Add ability to install K3s PR Artifact from GitHub (#9185)
* Add support for INSTALL_K3s_PR

Signed-off-by: Derek Nola <derek.nola@suse.com>

* Add sha256sum to K3s PR artifacts

Signed-off-by: Derek Nola <derek.nola@suse.com>

* Update install sha256sum

Signed-off-by: Derek Nola <derek.nola@suse.com>

* Revert whitespace changes

Signed-off-by: Derek Nola <derek.nola@suse.com>

---------

Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-02-06 16:30:12 -08:00
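A usage sketch (12345 is a hypothetical PR number; downloading CI artifacts may additionally require a GitHub token):

```
curl -sfL https://get.k3s.io | INSTALL_K3S_PR=12345 sh -
```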
github-actions[bot] f249fcc2f1
Bump Local Path Provisioner version (#8953)
* chore: Bump Local Path Provisioner version
---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-02-06 16:57:07 -06:00
Brad Davidson 57482a1c1b Bump helm-controller to fix issue with ChartContent
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-02-02 12:39:51 -08:00
Brad Davidson c635818956 Bump runc and helm-controller versions
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-02-01 18:51:51 -08:00
Brad Davidson 97a22632b9 gofmt config_test.go
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-02-01 18:51:51 -08:00
Brad Davidson 29848dea3d Fix issues with certs.d template generation
* Fix issue with bare host or IP as endpoint
* Fix issue with localhost registries not defaulting to http.
* Move the registry template prep to a separate function,
  and adds tests of that function so that we can ensure we're
  generating the correct content.

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-02-01 12:09:13 -08:00
caroline-suse-rancher 6d77b7a920
Merge pull request #9278 from k3s-io/cdavis-stale-action
New stale action
2024-01-19 17:43:08 -05:00
caroline-suse-rancher 2d98c44fb3
Delete old stalebot
delete .github/stale.yml

Signed-off-by: caroline-suse-rancher <caroline.davis@suse.com>
2024-01-19 16:06:18 -05:00
caroline-suse-rancher cef7e9e2dc
New stale action
This PR adds a new GitHub stale action, replacing our previous (and now deprecated) stalebot. Two notable differences are that issues will now go stale after 45 days of inactivity, and the most commonly used priority labels have been added for exemption.

Docs and the list of inputs for the stale action are referenced here.

Signed-off-by: caroline-suse-rancher <caroline.davis@suse.com>
2024-01-19 16:04:46 -05:00
Pedro Tashima d8907ce62c
Update to v1.29.1 (#9259)
Signed-off-by: Pedro Tashima <pedro.tashima@suse.com>
Co-authored-by: Pedro Tashima <pedro.tashima@suse.com>
2024-01-18 10:15:18 -03:00
Vitor Savian 9a70021a9e Error getting node in setEtcdStatusCondition
Signed-off-by: Vitor Savian <vitor.savian@suse.com>

Added retry and changed nodes for

Signed-off-by: Vitor Savian <vitor.savian@suse.com>
2024-01-11 22:06:36 -03:00
Brad Davidson c87e6e5f7e Move proxy dialer out of init() and fix crash
* Fixes issue where proxy support only honored server address via K3S_URL, not CLI or config.
* Fixes crash when agent proxy is enabled, but proxy env vars do not return a proxy URL for the server address (server URL is in NO_PROXY list).
* Adds tests

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-01-11 16:12:15 -08:00
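A sketch of the now-consistent proxy behavior, with illustrative values; per this fix, the same proxying applies whether the server address comes from K3S_URL, the CLI, or a config file:

```
HTTP_PROXY=http://proxy.corp.example:3128 \
HTTPS_PROXY=http://proxy.corp.example:3128 \
NO_PROXY=10.0.0.0/8,192.168.0.0/16 \
  k3s agent --server https://server.example:6443 --token "$TOKEN"
```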
Derek Nola 5303aa60e9
Fix nonexistent dependency repositories (#9213)
* Fix nonexistent dependency repositories

Signed-off-by: Derek Nola <derek.nola@suse.com>

* Restore matching go.sum

Signed-off-by: Derek Nola <derek.nola@suse.com>

---------

Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-01-11 11:01:49 -08:00
Brad Davidson 76fa022045 Enable network policy controller metrics
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-01-11 10:19:39 -08:00
Brad Davidson c5a299d0ed Bump quic-go for CVE-2023-49295
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-01-11 10:09:33 -08:00
Brad Davidson 6072476432 Add e2e test for embedded registry mirror
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-01-09 15:23:05 -08:00
Brad Davidson 37e9b87f62 Add embedded registry implementation
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-01-09 15:23:05 -08:00
Brad Davidson ef90da5c6e Add server CLI flag and config fields for embedded registry
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-01-09 15:23:05 -08:00
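A minimal sketch of enabling the embedded mirror, assuming the registries.yaml convention of listing a registry with no endpoints to opt it into distributed mirroring:

```
cat <<'EOF' | sudo tee /etc/rancher/k3s/registries.yaml
mirrors:
  docker.io:
EOF
k3s server --embedded-registry
```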
Brad Davidson b8f3967ad1 Add ADR for embedded registry
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-01-09 15:23:05 -08:00
Brad Davidson 77846d63c1 Propagate errors up from config.Get
Fixes crash when killing agent while waiting for config from server

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-01-09 15:23:05 -08:00
Brad Davidson 16d29398ad Move registries.yaml load into agent config
Moving it into config.Agent so that we can use or modify it outside the context of containerd setup

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-01-09 15:23:05 -08:00
Brad Davidson 5c99bdd9bd Pin images instead of locking layers with lease
Layer leases never did what we wanted anyways, and this is the new approved interface for ensuring that images do not get GCd

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-01-09 15:23:05 -08:00
Ian Cardoso df5e983fc8
add e2e startup test for rootless k3s (#8383)
* add test for rootless k3s

Signed-off-by: Ian Cardoso <osodracnai@gmail.com>

* fix comments

Signed-off-by: Ian Cardoso <osodracnai@gmail.com>

* Cleanup rootless e2e test, simplify logic

Signed-off-by: Derek Nola <derek.nola@suse.com>

---------

Signed-off-by: Ian Cardoso <osodracnai@gmail.com>
Signed-off-by: Derek Nola <derek.nola@suse.com>
Co-authored-by: Derek Nola <derek.nola@suse.com>
2024-01-09 10:39:54 -08:00
ShylajaDevadiga 64dbbba996
update s3 e2e test (#9025)
Signed-off-by: ShylajaDevadiga <shylaja.devadiga@suse.com>
Co-authored-by: ShylajaDevadiga <shylaja.devadiga@suse.com>
2024-01-09 10:29:32 -08:00
Vitor Savian 4a92ced8ee Handle etcd status condition when cluster reset and disable etcd
Signed-off-by: Vitor Savian <vitor.savian@suse.com>

Set condition if node is unhealthy

Signed-off-by: Vitor Savian <vitor.savian@suse.com>
2024-01-09 11:20:41 -03:00
Aofei Sheng 8d2c40cdac
Use `ipFamilyPolicy: RequireDualStack` for dual-stack kube-dns (#8984)
Signed-off-by: Aofei Sheng <aofei@aofeisheng.com>
2024-01-09 00:44:03 +02:00
github-actions[bot] ac8fe8de2b
fix: update trivy from 0.46.1 to 0.48.1 (#8812)
Signed-off-by: matttrach <matttrach@gmail.com>
Co-authored-by: matttrach <matttrach@gmail.com>
2024-01-08 15:14:23 -06:00
Manuel Buil 6330e26bb3 Wait for taint to be gone in the node before starting the netpol controller
Signed-off-by: Manuel Buil <mbuil@suse.com>
2024-01-08 12:04:18 +01:00
ifNil 102ff76328
Print error when downloading file error inside install script (#6874)
* Print error when downloading file error inside install script
* Update install.sh.sha256sum

Signed-off-by: yhw <2278069802@qq.com>
2024-01-04 21:30:33 -08:00
Brad Davidson eae221f9e5 Fix OS PRETTY_NAME on tagged releases
These were always showing up as dev due to the build arg not being set by the drone step.

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-01-04 19:42:28 -08:00
Brad Davidson b297996b92 Add runtime checking of golang version
Forces other groups packaging k3s to intentionally choose to build k3s with an unvalidated golang version

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-01-04 17:22:46 -08:00
Lex Rivera 5fe074b540
Add more paths to crun runtime detection (#9086)
* add usr/local paths for crun detection

Signed-off-by: Lex Rivera <me@lex.io>
2024-01-04 16:51:13 -08:00
Brad Davidson c45524e662 Add support for containerd cri registry config_path
Render cri registry mirrors.x.endpoints and configs.x.tls into config_path; keep
using mirrors.x.rewrites and configs.x.auth, as those do not yet have an
equivalent in the new format.

The new config file format allows disabling containerd's fallback to the
default endpoint when using mirror endpoints; a new CLI flag is added to
control that behavior.

This also re-shares some code that was unnecessarily split into parallel
implementations for linux/windows versions. There is probably more work
to be done on this front but it's a good start.

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-01-04 16:50:26 -08:00
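A minimal sketch of the inputs involved (mirror endpoint illustrative; the fallback-disabling flag name is taken from the k3s docs for this feature and should be verified against the release):

```
cat <<'EOF' | sudo tee /etc/rancher/k3s/registries.yaml
mirrors:
  docker.io:
    endpoint:
      - https://mirror.example.com
EOF
# Rendered into containerd's config_path (hosts.toml) format at startup;
# optionally disable containerd's fallback to the default endpoint.
k3s server --disable-default-registry-endpoint
```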
Brad Davidson 319dca3e82 Fix nil map in full snapshot configmap reconcile
If a full reconcile wins the race against sync of an individual snapshot resource, or someone intentionally deletes the configmap, the data map could be nil and cause a crash.

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-01-04 16:49:58 -08:00
Brad Davidson db7091b3f6 Handle logging flags when parsing kube-proxy args
Also adds a test to ensure this continues to work.

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-01-04 16:23:03 -08:00
Brad Davidson 1e663622d2 Fix the OTHER log message that prints the wrong variable
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-01-04 15:23:39 -08:00
Brad Davidson 08ccea5cb6 Fix install script checksum
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-01-04 12:57:31 -08:00
Pedro Tashima 9d21b8a135
add system-agent-installer-k3s step to ga release (#9153)
Signed-off-by: Pedro Tashima <pedro.tashima@suse.com>
Co-authored-by: Pedro Tashima <pedro.tashima@suse.com>
2024-01-04 13:38:57 -03:00
Ivan Shapovalov a7fe1aaaa5 Dockerfile.dapper: set $HOME properly
`$HOME` refers to `$DAPPER_SOURCE`, which is set in the same expression
and is thus not visible at the time of substitution.

This problem is not immediately visible with Docker, Inc.'s docker
merely because it resets an unset `$HOME` to `/root` (but still breaking
the Go cache). Under podman, this problem is immediately visible because
an unset `$HOME` remains unset and subsequently breaks the `go generate`
invocation.

Fixes #9089.

Signed-off-by: Ivan Shapovalov <intelfx@intelfx.name>
2024-01-03 14:20:34 -08:00
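The same pitfall, illustrated in shell: export expands its arguments before any of the new assignments take effect, just as Dockerfile ENV does within one instruction.

```
# Assuming A was previously unset: B expands $A before the new A is visible.
export A=/go/src/example B=$A
echo "B='$B'"    # prints B=''

# The fix: assign in two steps, as the Dockerfile change does.
export A=/go/src/example
export B=$A
echo "B='$B'"    # prints B='/go/src/example'
```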
Manuel Buil 30449e0128 Add 2>/dev/null when checking nm-cloud systemd unit
Signed-off-by: Manuel Buil <mbuil@suse.com>
2024-01-03 09:36:11 +01:00
Derek Nola 0ad5d65a1e
Added support for env *_PROXY variables for agent loadbalancer (#9118)
Signed-off-by: Yodo <pierre@azmed.co>
Signed-off-by: Derek Nola <derek.nola@suse.com>
Co-authored-by: Pierre <129078893+pierre-az@users.noreply.github.com>
2024-01-02 17:13:30 -08:00
Brad Davidson a27d660a24 Add ServiceLB support for PodHostIPs FeatureGate
If the feature-gate is enabled, use status.hostIPs for dual-stack externalTrafficPolicy=Local support

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2024-01-02 16:00:09 -08:00
Harsimran Singh Maan baaab250a7
Silence SELinux warning on INSTALL_K3S_SKIP_SELINUX_RPM (#8703)
When k3s is installed with INSTALL_K3S_SKIP_SELINUX_RPM=true or
INSTALL_K3S_SKIP_DOWNLOAD=true or INSTALL_K3S_SKIP_DOWNLOAD=selinux,
the following message(or similar) is seen on Amazon Linux 2023/Centos
```
[INFO]  Skipping installation of SELinux RPM
[WARN]  Failed to find the k3s-selinux policy, please install:
    dnf install -y container-selinux
    dnf install -y https://rpm.rancher.io/k3s/stable/common/centos/8/noarch/

[INFO]  Creating /usr/bin/kubectl symlink to k3s
```

whereas now

```
[INFO]  Skipping installation of SELinux RPM
[INFO]  Creating /usr/bin/kubectl symlink to k3s
```

Signed-off-by: Harsimran Singh Maan <maan.harry@gmail.com>
2024-01-02 12:30:07 -08:00
Derek Nola aca1c2fd11
Add a retry around updating a secrets-encrypt node annotations (#9039)
* Add a retry around updating secrets-encrypt node annotations

Signed-off-by: Derek Nola <derek.nola@suse.com>
2024-01-02 12:21:37 -08:00
Pierre bbd68f3a50
Rebase & Squash (#9070)
Signed-off-by: Yodo <pierre@azmed.co>
2024-01-02 12:05:36 -08:00
Pedro Tashima c7a8eef977
update stable channel to v1.28.5+k3s1 and add v1.29 channel (#9110)
* update stable channel to v1.28.5+k3s1

Signed-off-by: Pedro Tashima <pedro.tashima@suse.com>

* add v1.29 channel

Signed-off-by: Pedro Tashima <pedro.tashima@suse.com>

---------

Signed-off-by: Pedro Tashima <pedro.tashima@suse.com>
Co-authored-by: Pedro Tashima <pedro.tashima@suse.com>
2024-01-02 14:44:06 -03:00
Nishant Singh d87851d46e
chore: Update Code of Conduct to Redirect to CNCF CoC (#9104)
This commit updates the Code of Conduct to redirect to the latest version of the CNCF Code of Conduct.
Instead of maintaining a separate CoC text, it now links directly to the CNCF CoC for consistency and alignment with industry best practices.

Signed-off-by: tesla59 <nishant@heim.id>
2024-01-02 11:44:46 -05:00
dependabot[bot] 9d9fbf4ff4
Bump actions/setup-go from 4 to 5 (#9036)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 4 to 5.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-02 11:04:25 -05:00
github-actions[bot] 798eecf112
chore: Update sonobuoy image versions (#8910)
Made with ❤️️ by updatecli

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2024-01-02 10:59:39 -05:00
Derek Nola 3190a5faa2
Remove rotate-keys subcommand (#9079)
Signed-off-by: Derek Nola <derek.nola@suse.com>
2023-12-20 12:26:41 -08:00
Hussein Galal 9411196406
Update flannel to v0.24.0 and remove multiclustercidr flag (#9075)
* update flannel to v0.24.0

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* remove multiclustercidr flag

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
2023-12-20 00:25:38 +02:00
Hussein Galal 7101af36bb
Update Kubernetes to v1.29.0+k3s1 (#9052)
* Update to v1.29.0

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Update to v1.29.0

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Update go to 1.21.5

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* update golangci-lint

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* update flannel to 0.23.0-k3s1

This update uses k3s' fork of flannel to allow the removal of
multicluster cidr flag logic from the code

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* fix flannel calls

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* update cri-tools to version v1.29.0-k3s1

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Remove GOEXPERIMENT=nounified from arm builds

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Skip golangci-lint

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Fix setup logging with newer go version

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Move logging flags to components arguments

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* add sysctl commands to the test script

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

* Update scripts/test

Signed-off-by: Brad Davidson <brad@oatmail.org>

* disable secretsencryption tests

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>

---------

Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
Signed-off-by: Brad Davidson <brad@oatmail.org>
Co-authored-by: Brad Davidson <brad@oatmail.org>
2023-12-19 05:14:02 +02:00
Derek Nola bf3f29f9e8
Only publish to code_cov on merged E2E builds (#9051)
Signed-off-by: Derek Nola <derek.nola@suse.com>
2023-12-19 04:30:13 +02:00
Brad Davidson 231cb6ed20
Remove GA feature-gates (#8970)
Remove KubeletCredentialProviders and JobTrackingWithFinalizers feature-gates, both of which are GA and cannot be disabled.

Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
2023-12-14 22:57:24 +02:00
259 changed files with 11120 additions and 5178 deletions


@@ -31,7 +31,7 @@ steps:
 - pull_request
 - name: build
-image: rancher/dapper:v0.5.0
+image: rancher/dapper:v0.6.0
 secrets: [ AWS_SECRET_ACCESS_KEY-k3s-ci-uploader, AWS_ACCESS_KEY_ID-k3s-ci-uploader, unprivileged_github_token ]
 environment:
 GITHUB_TOKEN:
@@ -48,7 +48,7 @@ steps:
 path: /var/run/docker.sock
 - name: validate-cross-compilation
-image: rancher/dapper:v0.5.0
+image: rancher/dapper:v0.6.0
 commands:
 - dapper validate-cross-compilation
 volumes:
@@ -73,7 +73,7 @@ steps:
 - tag
 - name: github_binary_release
-image: ibuildthecloud/github-release:v0.0.1
+image: plugins/github-release
 settings:
 api_key:
 from_secret: github_token
@@ -102,6 +102,8 @@ steps:
 repo: "rancher/k3s"
 username:
 from_secret: docker_username
+build_args_from_env:
+- DRONE_TAG
 when:
 instance:
 - drone-publish.k3s.io
@@ -112,7 +114,7 @@ steps:
 - tag
 - name: test
-image: rancher/dapper:v0.5.0
+image: rancher/dapper:v0.6.0
 secrets: [ AWS_SECRET_ACCESS_KEY-k3s-ci-uploader, AWS_ACCESS_KEY_ID-k3s-ci-uploader ]
 environment:
 ENABLE_REGISTRY: 'true'
@@ -129,23 +131,6 @@ steps:
 - name: docker
 path: /var/run/docker.sock
-- name: github_e2e_logs_release
-image: ibuildthecloud/github-release:v0.0.1
-settings:
-api_key:
-from_secret: github_token
-prerelease: true
-files:
-- "dist/artifacts/e2e-*.log"
-when:
-instance:
-- drone-publish.k3s.io
-ref:
-- refs/head/master
-- refs/tags/*
-event:
-- tag
 volumes:
 - name: docker
 host:
@@ -167,7 +152,7 @@ trigger:
 steps:
 - name: build
-image: rancher/dapper:v0.5.0
+image: rancher/dapper:v0.6.0
 commands:
 - dapper ci
 - echo "${DRONE_TAG}-amd64" | sed -e 's/+/-/g' >.tags
@@ -176,7 +161,7 @@ steps:
 path: /var/run/docker.sock
 - name: test
-image: rancher/dapper:v0.5.0
+image: rancher/dapper:v0.6.0
 environment:
 ENABLE_REGISTRY: 'true'
 commands:
@@ -226,7 +211,7 @@ steps:
 - pull_request
 - name: build
-image: rancher/dapper:v0.5.0
+image: rancher/dapper:v0.6.0
 secrets: [ AWS_SECRET_ACCESS_KEY-k3s-ci-uploader, AWS_ACCESS_KEY_ID-k3s-ci-uploader ]
 environment:
 AWS_SECRET_ACCESS_KEY:
@@ -241,7 +226,7 @@ steps:
 path: /var/run/docker.sock
 - name: github_binary_release
-image: ibuildthecloud/github-release:v0.0.1
+image: plugins/github-release
 settings:
 api_key:
 from_secret: github_token
@@ -270,6 +255,8 @@ steps:
 repo: "rancher/k3s"
 username:
 from_secret: docker_username
+build_args_from_env:
+- DRONE_TAG
 when:
 instance:
 - drone-publish.k3s.io
@@ -280,7 +267,7 @@ steps:
 - tag
 - name: test
-image: rancher/dapper:v0.5.0
+image: rancher/dapper:v0.6.0
 secrets: [ AWS_SECRET_ACCESS_KEY-k3s-ci-uploader, AWS_ACCESS_KEY_ID-k3s-ci-uploader ]
 environment:
 ENABLE_REGISTRY: 'true'
@@ -335,6 +322,11 @@ steps:
 - pull_request
 - name: build
+# Keeping Dapper at v0.5.0 for armv7, as newer versions fails with
+# Bad system call on this architecture. xref:
+#
+# https://github.com/k3s-io/k3s/pull/8959#discussion_r1439736566
+# https://drone-pr.k3s.io/k3s-io/k3s/7922/3/3
 image: rancher/dapper:v0.5.0
 secrets: [ AWS_SECRET_ACCESS_KEY-k3s-ci-uploader, AWS_ACCESS_KEY_ID-k3s-ci-uploader ]
 environment:
@@ -350,7 +342,7 @@ steps:
 path: /var/run/docker.sock
 - name: github_binary_release
-image: ibuildthecloud/github-release:v0.0.1
+image: plugins/github-release:linux-arm
 settings:
 api_key:
 from_secret: github_token
@@ -379,6 +371,8 @@ steps:
 repo: "rancher/k3s"
 username:
 from_secret: docker_username
+build_args_from_env:
+- DRONE_TAG
 when:
 instance:
 - drone-publish.k3s.io
@@ -389,6 +383,7 @@ steps:
 - tag
 - name: test
+# Refer to comment for arm/build.
 image: rancher/dapper:v0.5.0
 secrets: [ AWS_SECRET_ACCESS_KEY-k3s-ci-uploader, AWS_ACCESS_KEY_ID-k3s-ci-uploader ]
 environment:
@@ -442,7 +437,7 @@ steps:
 - pull_request
 - name: validate_go_mods
-image: rancher/dapper:v0.5.0
+image: rancher/dapper:v0.6.0
 commands:
 - docker build --target test-mods -t k3s:mod -f Dockerfile.test .
 - docker run -i k3s:mod
@@ -496,7 +491,6 @@ steps:
 - DOCKER_USERNAME
 - DOCKER_PASSWORD
 - DRONE_TAG
-
 trigger:
 instance:
 - drone-publish.k3s.io
@@ -599,7 +593,7 @@ steps:
 - pull_request
 - name: build-e2e-image
-image: rancher/dapper:v0.5.0
+image: rancher/dapper:v0.6.0
 commands:
 - DOCKER_BUILDKIT=1 docker build --target test-e2e -t test-e2e -f Dockerfile.test .
 - SKIP_VALIDATE=true SKIP_AIRGAP=true GOCOVER=1 dapper ci
@@ -623,37 +617,24 @@ steps:
 - mkdir -p dist/artifacts
 - cp /tmp/artifacts/* dist/artifacts/
 - docker stop registry && docker rm registry
-# Cleanup VMs running, happens if a previous test panics
-# Cleanup inactive domains, happens if previous test is canceled
-- |
-  VMS=$(virsh list --name | grep '_server-\|_agent-' || true)
-  if [ -n "$VMS" ]; then
-    for vm in $VMS
-    do
-      virsh destroy $vm
-      virsh undefine $vm --remove-all-storage
-    done
-  fi
-  VMS=$(virsh list --name --inactive | grep '_server-\|_agent-' || true)
-  if [ -n "$VMS" ]; then
-    for vm in $VMS
-    do
-      virsh undefine $vm
-    done
-  fi
+# Cleanup VMs that are older than 2h. Happens if a previous test panics or is canceled
+- tests/e2e/scripts/cleanup_vms.sh
 - docker run -d -p 5000:5000 -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io --name registry registry:2
-- cd tests/e2e/validatecluster
-- vagrant destroy -f
-- go test -v -timeout=45m ./validatecluster_test.go -ci -local
-- cp ./coverage.out /tmp/artifacts/validate-coverage.out
-- cd ../secretsencryption
-- vagrant destroy -f
-- go test -v -timeout=30m ./secretsencryption_test.go -ci -local
-- cp ./coverage.out /tmp/artifacts/se-coverage.out
-- cd ../startup
-- vagrant destroy -f
-- go test -v -timeout=30m ./startup_test.go -ci -local
-- cp ./coverage.out /tmp/artifacts/startup-coverage.out
+- |
+  cd tests/e2e/validatecluster
+  vagrant destroy -f
+  go test -v -timeout=45m ./validatecluster_test.go -ci -local
+  cp ./coverage.out /tmp/artifacts/validate-coverage.out
+- |
+  cd ../secretsencryption
+  vagrant destroy -f
+  go test -v -timeout=30m ./secretsencryption_test.go -ci -local
+  cp ./coverage.out /tmp/artifacts/se-coverage.out
+- |
+  cd ../splitserver
+  vagrant destroy -f
+  go test -v -timeout=30m ./splitserver_test.go -ci -local
+  cp ./coverage.out /tmp/artifacts/split-coverage.out
 - |
   if [ "$DRONE_BUILD_EVENT" = "pull_request" ]; then
   cd ../upgradecluster
@@ -679,13 +660,13 @@ steps:
 files:
 - /tmp/artifacts/validate-coverage.out
 - /tmp/artifacts/se-coverage.out
-- /tmp/artifacts/startup-coverage.out
+- /tmp/artifacts/split-coverage.out
 - /tmp/artifacts/upgrade-coverage.out
 flags:
 - e2etests
 when:
 event:
 - pull_request
 - push
 volumes:
 - name: cache

.github/actions/setup-go/action.yaml

@ -0,0 +1,29 @@
name: 'Setup golang with master only caching'
description: 'A composite action that installs golang, but with a caching strategy that only updates the cache on the master branch.'
runs:
using: 'composite'
steps:
- uses: actions/setup-go@v5
with:
go-version-file: 'go.mod' # Just use whatever version is in the go.mod file
cache: ${{ github.ref == 'refs/heads/master' }}
- name: Prepare for go cache
if: ${{ github.ref != 'refs/heads/master' }}
shell: bash
run: |
echo "GO_CACHE=$(go env GOCACHE)" | tee -a "$GITHUB_ENV"
echo "GO_MODCACHE=$(go env GOMODCACHE)" | tee -a "$GITHUB_ENV"
echo "GO_VERSION=$(go env GOVERSION | tr -d 'go')" | tee -a "$GITHUB_ENV"
- name: Setup read-only cache
if: ${{ github.ref != 'refs/heads/master' }}
uses: actions/cache/restore@v4
with:
path: |
${{ env.GO_MODCACHE }}
${{ env.GO_CACHE }}
# Match the cache key to the setup-go action https://github.com/actions/setup-go/blob/main/src/cache-restore.ts#L34
key: setup-go-${{ runner.os }}-${{ env.ImageOS }}-go-${{ env.GO_VERSION }}-${{ hashFiles('go.sum') }}
restore-keys: |
setup-go-${{ runner.os }}-
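
The composite action above is consumed later in this changeset (see the integration and unit test workflows); a minimal usage sketch, with illustrative job and step names:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Install Go
        # Resolves the Go version from go.mod; writes the cache only on master,
        # and restores it read-only on every other ref.
        uses: ./.github/actions/setup-go
      - name: Run Unit Tests
        run: go test ./pkg/... -run Unit
```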

.github/actions/vagrant-setup/action.yaml

@ -0,0 +1,33 @@
name: 'Setup Vagrant and Libvirt'
description: 'A composite action that installs the latest versions of vagrant and libvirt for use on Ubuntu-based runners'
runs:
using: 'composite'
steps:
- name: Add vagrant to apt-get sources
shell: bash
run: |
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo sed -i 's/^# deb-src/deb-src/' /etc/apt/sources.list
- name: Install vagrant and libvirt
shell: bash
run: |
sudo apt-get update
sudo apt-get install -y libvirt-daemon libvirt-daemon-system vagrant
sudo systemctl enable --now libvirtd
- name: Build vagrant dependencies
shell: bash
run: |
sudo apt-get build-dep -y vagrant ruby-libvirt
sudo apt-get install -y --no-install-recommends libxslt-dev libxml2-dev libvirt-dev ruby-bundler ruby-dev zlib1g-dev
# This is a workaround for the libvirt group not being available in the current shell
# https://github.com/actions/runner-images/issues/7670#issuecomment-1900711711
- name: Make the libvirt socket rw accessible to everyone
shell: bash
run: |
sudo chmod a+rw /var/run/libvirt/libvirt-sock
- name: Install vagrant-libvirt plugin
shell: bash
run: vagrant plugin install vagrant-libvirt
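
This action is consumed by the E2E, install, and nightly workflows further down in this diff; a minimal usage sketch (step names illustrative):

```yaml
steps:
  - name: "Checkout"
    uses: actions/checkout@v4
    with: {fetch-depth: 1}
  - name: Set up vagrant and libvirt
    uses: ./.github/actions/vagrant-setup
  - name: "Vagrant Up"
    run: vagrant up --no-provision
```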

.github/stale.yml

@ -1,55 +0,0 @@
# Configuration for probot-stale - https://github.com/probot/stale
# Number of days of inactivity before an Issue or Pull Request becomes stale
daysUntilStale: 180
# Number of days of inactivity before an Issue or Pull Request with the stale label is closed.
# Set to false to disable. If disabled, issues still need to be closed manually, but will remain marked as stale.
daysUntilClose: 14
# Only issues or pull requests with all of these labels are check if stale. Defaults to `[]` (disabled)
onlyLabels: []
# Issues or Pull Requests with these labels will never be considered stale. Set to `[]` to disable
exemptLabels:
- internal
- kind/bug
- kind/bug-qa
- kind/task
- kind/feature
- kind/enhancement
- kind/design
- kind/ci-improvements
- kind/performance
- kind/flaky-test
- kind/documentation
- kind/backport
- priority/backlog
- priority/critical-urgent
- priority/important-longterm
- priority/important-soon
# Set to true to ignore issues in a project (defaults to false)
exemptProjects: true
# Set to true to ignore issues in a milestone (defaults to false)
exemptMilestones: false
# Set to true to ignore issues with an assignee (defaults to false)
exemptAssignees: true
# Label to use when marking as stale
staleLabel: status/stale
# Comment to post when marking as stale. Set to `false` to disable
markComment: >
This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label)
for 180 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the
issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the
latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.
# Limit the number of actions per hour, from 1-30. Default is 30
limitPerRun: 30
# Limit to only `issues`
only: issues

.github/workflows/build-k3s.yaml

@ -7,6 +7,10 @@ on:
type: boolean
required: false
default: false
upload-image:
type: boolean
required: false
default: false
permissions:
contents: read
@ -14,7 +18,7 @@ permissions:
jobs:
build:
name: Build
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
timeout-minutes: 20
steps:
- name: Checkout K3s
@ -22,7 +26,10 @@ jobs:
- name: Build K3s binary
run: |
DOCKER_BUILDKIT=1 SKIP_IMAGE=1 SKIP_AIRGAP=1 SKIP_VALIDATE=1 GOCOVER=1 make
sha256sum dist/artifacts/k3s | sed 's|dist/artifacts/||' > dist/artifacts/k3s.sha256sum
- name: Build K3s image
if: inputs.upload-image == true
run: make package-image
- name: bundle repo
if: inputs.upload-repo == true
run: |
@ -30,13 +37,16 @@ jobs:
mv ../k3s-repo.tar.gz .
- name: "Upload K3s directory"
if: inputs.upload-repo == true
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: k3s-repo.tar.gz
path: k3s-repo.tar.gz
- name: "Save K3s image"
if: inputs.upload-image == true
run: docker image save rancher/k3s -o ./dist/artifacts/k3s-image.tar
- name: "Upload K3s binary"
if: inputs.upload-repo == false
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: k3s
path: dist/artifacts/k3s
path: dist/artifacts/k3s*

.github/workflows/cgroup.yaml

@ -1,88 +0,0 @@
name: Control Group
on:
push:
paths-ignore:
- "**.md"
- "channel.yaml"
- "install.sh"
- "tests/**"
- "!tests/cgroup/**"
- ".github/**"
- "!.github/workflows/cgroup.yaml"
pull_request:
paths-ignore:
- "**.md"
- "channel.yaml"
- "install.sh"
- "tests/**"
- "!tests/cgroup/**"
- ".github/**"
- "!.github/workflows/cgroup.yaml"
workflow_dispatch: {}
permissions:
contents: read
jobs:
build:
uses: ./.github/workflows/build-k3s.yaml
test:
name: "Conformance Test"
needs: build
# nested virtualization is only available on macOS hosts
runs-on: macos-12
timeout-minutes: 40
strategy:
fail-fast: false
matrix:
vm: [fedora]
mode: [unified]
max-parallel: 1
defaults:
run:
working-directory: tests/cgroup/${{ matrix.mode }}/${{ matrix.vm }}
steps:
- name: "Checkout"
uses: actions/checkout@v4
with: { fetch-depth: 1 }
- name: "Download Binary"
uses: actions/download-artifact@v3
with: { name: k3s, path: dist/artifacts/ }
- name: "Vagrant Cache"
uses: actions/cache@v3
with:
path: |
~/.vagrant.d/boxes
~/.vagrant.d/gems
key: cgroup-${{ hashFiles(format('tests/cgroup/{0}/{1}/Vagrantfile', matrix.mode, matrix.vm)) }}
id: vagrant-cache
continue-on-error: true
- name: "Vagrant Plugin(s)"
run: vagrant plugin install vagrant-k3s vagrant-reload
- name: "Vagrant Up"
run: vagrant up
- name: "K3s Prepare"
run: vagrant provision --provision-with=k3s-prepare
- name: ⏬ "K3s Install"
run: vagrant provision --provision-with=k3s-install
- name: ⏩ "K3s Start"
run: vagrant provision --provision-with=k3s-start
- name: "K3s Ready" # wait for k3s to be ready
run: vagrant provision --provision-with=k3s-ready
- name: "K3s Status" # kubectl get node,all -A -o wide
run: vagrant provision --provision-with=k3s-status
- name: "Sonobuoy (--mode=quick)"
env: {TEST_RESULTS_PATH: rootfull}
run: vagrant provision --provision-with=k3s-sonobuoy
- name: "K3s Stop" # stop k3s rootfull
run: vagrant ssh -- sudo systemctl stop k3s-server
- name: "Vagrant Reload"
run: vagrant reload
- name: "[Rootless] Starting K3s"
run: vagrant ssh -- systemctl --user start k3s-rootless
- name: "[Rootless] K3s Ready"
env: {TEST_KUBECONFIG: /home/vagrant/.kube/k3s.yaml}
run: vagrant provision --provision-with=k3s-ready
# - name: "[Rootless] Sonobuoy (--mode=quick)"
# env: {TEST_KUBECONFIG: /home/vagrant/.kube/k3s.yaml, TEST_RESULTS_PATH: rootless}
# run: vagrant provision --provision-with=k3s-sonobuoy

.github/workflows/e2e.yaml

@ -0,0 +1,118 @@
name: E2E Test Coverage
on:
push:
paths-ignore:
- "**.md"
- "channel.yaml"
- "install.sh"
- "tests/**"
- "!tests/e2e**"
- ".github/**"
- "!.github/workflows/e2e.yaml"
pull_request:
paths-ignore:
- "**.md"
- "channel.yaml"
- "install.sh"
- "tests/**"
- "!tests/e2e**"
- ".github/**"
- "!.github/workflows/e2e.yaml"
workflow_dispatch: {}
permissions:
contents: read
jobs:
build:
uses: ./.github/workflows/build-k3s.yaml
with:
upload-image: true
e2e:
name: "E2E Tests"
needs: build
runs-on: ubuntu-latest
timeout-minutes: 40
strategy:
fail-fast: false
matrix:
etest: [startup, s3, btrfs, externalip, privateregistry, embeddedmirror, wasm]
max-parallel: 3
steps:
- name: "Checkout"
uses: actions/checkout@v4
with: {fetch-depth: 1}
- name: Set up vagrant and libvirt
uses: ./.github/actions/vagrant-setup
- name: "Vagrant Cache"
uses: actions/cache@v4
with:
path: |
~/.vagrant.d/boxes
key: vagrant-box-ubuntu-2204
- name: "Vagrant Plugin(s)"
run: vagrant plugin install vagrant-k3s vagrant-reload vagrant-scp
- name: Install Go
uses: actions/setup-go@v5
with:
go-version-file: 'go.mod'
cache: false
- name: Install Kubectl
run: |
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
- name: "Download k3s binary"
uses: actions/download-artifact@v4
with:
name: k3s
path: ./dist/artifacts
- name: Run ${{ matrix.etest }} Test
env:
E2E_GOCOVER: "true"
run: |
chmod +x ./dist/artifacts/k3s
cd tests/e2e/${{ matrix.etest }}
go test -v -timeout=45m ./${{ matrix.etest}}_test.go -ci -local
- name: On Failure, Launch Debug Session
uses: lhotari/action-upterm@v1
if: ${{ failure() }}
with:
## If no one connects after 5 minutes, shut down server.
wait-timeout-minutes: 5
- name: Upload Results To Codecov
uses: codecov/codecov-action@v4
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: tests/e2e/${{ matrix.etest }}/coverage.out
flags: e2etests # optional
verbose: true # optional (default = false)
docker:
needs: build
name: Docker Tests
runs-on: ubuntu-latest
timeout-minutes: 20
strategy:
fail-fast: false
matrix:
dtest: [basics, bootstraptoken, cacerts, lazypull, upgrade]
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 1
- name: "Download k3s image"
uses: actions/download-artifact@v4
with:
name: k3s
path: ./dist/artifacts
- name: Load k3s image
run: docker image load -i ./dist/artifacts/k3s-image.tar
- name: Run ${{ matrix.dtest }} Test
run: |
chmod +x ./dist/artifacts/k3s
. ./tests/docker/test-helpers
. ./tests/docker/test-run-${{ matrix.dtest }}
echo "Did test-run-${{ matrix.dtest }} pass $?"


@ -25,13 +25,13 @@ jobs:
test:
name: "Smoke Test"
needs: build
runs-on: macos-12
runs-on: ubuntu-latest
timeout-minutes: 40
strategy:
fail-fast: false
matrix:
vm: [centos-7, rocky-8, rocky-9, fedora, opensuse-leap, ubuntu-focal]
max-parallel: 2
vm: [centos-7, rocky-8, rocky-9, fedora, opensuse-leap, ubuntu-2204]
max-parallel: 3
defaults:
run:
working-directory: tests/install/${{ matrix.vm }}
@ -41,30 +41,32 @@ jobs:
- name: "Checkout"
uses: actions/checkout@v4
with: {fetch-depth: 1}
- name: Set up vagrant and libvirt
uses: ./.github/actions/vagrant-setup
- name: "Vagrant Cache"
uses: actions/cache@v3
uses: actions/cache@v4
with:
path: |
~/.vagrant.d/boxes
~/.vagrant.d/gems
key: install-${{ hashFiles(format('tests/install/{0}/Vagrantfile', matrix.vm)) }}
id: vagrant-cache
continue-on-error: true
~/.vagrant.d/boxes
key: vagrant-box-${{ matrix.vm }}
- name: "Vagrant Plugin(s)"
run: vagrant plugin install vagrant-k3s vagrant-reload vagrant-scp
- name: "Download k3s binary"
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: k3s
path: tests/install/${{ matrix.vm }}
- name: "Vagrant Up"
run: vagrant up --no-provision
- name: "Upload k3s binary"
- name: "Upload k3s binary to VM"
run: |
chmod +x k3s
vagrant scp k3s /tmp/k3s
vagrant ssh -c "sudo mv /tmp/k3s /usr/local/bin/k3s"
vagrant provision --provision-with=k3s-upload
- name: Add binary to PATH
if: matrix.vm == 'centos-7' || matrix.vm == 'rocky-8' || matrix.vm == 'rocky-9' || matrix.vm == 'opensuse-leap'
run: vagrant provision --provision-with=add-bin-path
- name: "⏩ Install K3s"
run: |
vagrant provision --provision-with=k3s-prepare
@ -87,3 +89,11 @@ jobs:
run: vagrant provision --provision-with=k3s-status
- name: "k3s-procps"
run: vagrant provision --provision-with=k3s-procps
- name: Cleanup VM
run: vagrant destroy -f
- name: On Failure, launch debug session
uses: lhotari/action-upterm@v1
if: ${{ failure() }}
with:
## If no one connects after 5 minutes, shut down server.
wait-timeout-minutes: 5


@ -30,15 +30,15 @@ env:
jobs:
build:
uses: ./.github/workflows/build-k3s.yaml
test:
itest:
needs: build
name: Integration Tests
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
timeout-minutes: 45
strategy:
fail-fast: false
matrix:
itest: [certrotation, etcdrestore, localstorage, startup, custometcdargs, etcdsnapshot, kubeflags, longhorn, secretsencryption]
itest: [certrotation, etcdrestore, localstorage, startup, custometcdargs, etcdsnapshot, kubeflags, longhorn, secretsencryption, flannelnone]
max-parallel: 3
steps:
- name: Checkout
@ -46,16 +46,9 @@ jobs:
with:
fetch-depth: 1
- name: Install Go
uses: actions/setup-go@v4
with:
go-version: '1.20.11'
check-latest: true
cache: true
cache-dependency-path: |
**/go.sum
**/go.mod
uses: ./.github/actions/setup-go
- name: "Download k3s binary"
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: k3s
path: ./dist/artifacts
@ -65,14 +58,17 @@ jobs:
mkdir -p $GOCOVERDIR
sudo -E env "PATH=$PATH" go test -v -timeout=45m ./tests/integration/${{ matrix.itest }}/... -run Integration
- name: On Failure, Launch Debug Session
uses: lhotari/action-upterm@v1
if: ${{ failure() }}
uses: mxschmitt/action-tmate@v3
timeout-minutes: 5
with:
## If no one connects after 5 minutes, shut down server.
wait-timeout-minutes: 5
- name: Generate coverage report
run: go tool covdata textfmt -i $GOCOVERDIR -o ${{ matrix.itest }}.out
- name: Upload Results To Codecov
uses: codecov/codecov-action@v3
uses: codecov/codecov-action@v4
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ./${{ matrix.itest }}.out
flags: inttests # optional
verbose: true # optional (default = false)
verbose: true # optional (default = false)


@ -10,18 +10,14 @@ permissions:
jobs:
test:
name: "Smoke Test"
runs-on: macos-12
runs-on: ubuntu-latest
timeout-minutes: 40
strategy:
fail-fast: false
matrix:
channel: [stable]
vm: [centos-7, rocky-8, fedora, opensuse-leap, ubuntu-focal]
include:
- {channel: latest, vm: rocky-8}
- {channel: latest, vm: ubuntu-focal}
- {channel: latest, vm: opensuse-leap}
max-parallel: 2
channel: [stable, latest]
vm: [rocky-8, fedora, opensuse-leap, ubuntu-2204]
max-parallel: 4
defaults:
run:
working-directory: tests/install/${{ matrix.vm }}
@ -31,15 +27,15 @@ jobs:
- name: "Checkout"
uses: actions/checkout@v4
with: {fetch-depth: 1}
- name: Set up vagrant and libvirt
uses: ./.github/actions/vagrant-setup
- name: "Vagrant Cache"
uses: actions/cache@v3
uses: actions/cache@v4
with:
path: |
~/.vagrant.d/boxes
~/.vagrant.d/gems
key: install-${{ matrix.vm }}-${{ hashFiles('tests/install/${{ matrix.vm }}/Vagrantfile') }}
~/.vagrant.d/boxes
key: vagrant-box-${{ matrix.vm }}
id: vagrant-cache
continue-on-error: true
- name: "Vagrant Plugin(s)"
run: vagrant plugin install vagrant-k3s vagrant-reload
- name: "Vagrant Up ⏩ Install K3s"
@ -60,4 +56,4 @@ jobs:
- name: "k3s-status"
run: vagrant provision --provision-with=k3s-status
- name: "k3s-procps"
run: vagrant provision --provision-with=k3s-procps
run: vagrant provision --provision-with=k3s-procps

.github/workflows/snapshotter.yaml

@ -1,73 +0,0 @@
name: Snapshotter
on:
push:
paths-ignore:
- "**.md"
- "channel.yaml"
- "install.sh"
- "tests/**"
- "!tests/snapshotter/**"
- ".github/**"
- "!.github/workflows/snapshotter.yaml"
pull_request:
paths-ignore:
- "**.md"
- "channel.yaml"
- "install.sh"
- "tests/**"
- "!tests/snapshotter/**"
- ".github/**"
- "!.github/workflows/snapshotter.yaml"
workflow_dispatch: {}
permissions:
contents: read
jobs:
build:
uses: ./.github/workflows/build-k3s.yaml
test:
name: "Smoke Test"
needs: build
# nested virtualization is only available on macOS hosts
runs-on: macos-12
timeout-minutes: 40
strategy:
fail-fast: false
matrix:
vm: [opensuse-leap]
snapshotter: [btrfs]
max-parallel: 1
defaults:
run:
working-directory: tests/snapshotter/${{ matrix.snapshotter }}/${{ matrix.vm }}
env:
VAGRANT_EXPERIMENTAL: disks
steps:
- name: "Checkout"
uses: actions/checkout@v4
with: { fetch-depth: 1 }
- name: "Download Binary"
uses: actions/download-artifact@v3
with: { name: k3s, path: dist/artifacts/ }
- name: "Vagrant Cache"
uses: actions/cache@v3
with:
path: |
~/.vagrant.d/boxes
~/.vagrant.d/gems
key: snapshotter-${{ hashFiles(format('tests/snapshotter/{0}/{1}/Vagrantfile', matrix.snapshotter, matrix.vm)) }}
id: vagrant-cache
continue-on-error: true
- name: "Vagrant Plugin(s)"
run: vagrant plugin install vagrant-k3s
- name: "Vagrant Up ⏩ Install K3s"
run: vagrant up
- name: "⏳ Node"
run: vagrant provision --provision-with=k3s-wait-for-node
- name: "⏳ CoreDNS"
run: vagrant provision --provision-with=k3s-wait-for-coredns
- name: "k3s-status" # kubectl get node,all -A -o wide
run: vagrant provision --provision-with=k3s-status
- name: "k3s-snapshots" # if no snapshots then we fail
run: vagrant provision --provision-with=k3s-snapshots

.github/workflows/stale.yml

@ -0,0 +1,51 @@
name: Stalebot
on:
schedule:
- cron: '0 20 * * *'
workflow_dispatch:
permissions:
contents: write
issues: write
jobs:
stalebot:
runs-on: ubuntu-latest
steps:
- name: Close Stale Issues
uses: actions/stale@v9.0.0
with:
# ensure PRs are exempt
days-before-pr-stale: -1
days-before-pr-close: -1
days-before-issue-stale: 45
days-before-issue-close: 14
stale-issue-label: status/stale
exempt-all-milestones: true
exempt-all-assignees: true
exempt-issue-labels:
internal,
kind/bug,
kind/bug-qa,
kind/task,
kind/feature,
kind/enhancement,
kind/design,
kind/ci-improvements,
kind/performance,
kind/flaky-test,
kind/documentation,
kind/epic,
kind/upstream-issue,
priority/backlog,
priority/critical-urgent,
priority/important-longterm,
priority/important-soon,
priority/low,
priority/medium,
priority/high,
priority/urgent,
stale-issue-message: >
This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label)
for 45 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the
issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the
latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.


@ -28,10 +28,7 @@ permissions:
jobs:
test:
name: Unit Tests
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-20.04, ubuntu-22.04]
runs-on: ubuntu-latest
timeout-minutes: 20
steps:
- name: Checkout
@ -39,25 +36,20 @@ jobs:
with:
fetch-depth: 1
- name: Install Go
uses: actions/setup-go@v4
with:
go-version: '1.20.11'
check-latest: true
cache: true
cache-dependency-path: |
**/go.sum
**/go.mod
uses: ./.github/actions/setup-go
- name: Run Unit Tests
run: |
go test -coverpkg=./... -coverprofile=coverage.out ./pkg/... -run Unit
go tool cover -func coverage.out
- name: On Failure, Launch Debug Session
if: ${{ failure() }}
uses: mxschmitt/action-tmate@v3
timeout-minutes: 5
- name: Upload Results To Codecov
uses: codecov/codecov-action@v3
uses: lhotari/action-upterm@v1
with:
wait-timeout-minutes: 5
- name: Upload Results To Codecov
uses: codecov/codecov-action@v4
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ./coverage.out
flags: unittests # optional
verbose: true # optional (default = false)


@ -23,10 +23,10 @@ jobs:
uses: actions/checkout@v4
- name: Install Go
uses: actions/setup-go@v4
uses: actions/setup-go@v5
with:
go-version: 'stable'
cache: false
- name: Delete leftover UpdateCLI branches
run: |
gh pr list --search "is:closed is:pr head:updatecli_" --json headRefName --jq ".[].headRefName" | sort -u > closed_prs_branches.txt


@ -10,7 +10,10 @@
]
},
"run": {
"skip-dirs": [
"deadline": "5m"
},
"issues": {
"exclude-dirs": [
"build",
"contrib",
"manifests",
@ -18,12 +21,9 @@
"scripts",
"vendor"
],
"skip-files": [
"exclude-files": [
"/zz_generated_"
],
"deadline": "5m"
},
"issues": {
"exclude-rules": [
{
"linters": "typecheck",
@ -43,4 +43,4 @@
}
]
}
}
}

CODE_OF_CONDUCT.md

@ -1,40 +1,4 @@
k3s observes the [CNCF Community Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md), reproduced below for emphasis.
### Contributor Code of Conduct
As contributors and maintainers of this project, and in the interest of fostering
an open and welcoming community, we pledge to respect all people who contribute
through reporting issues, posting feature requests, updating documentation,
submitting pull requests or patches, and other activities.
We are committed to making participation in this project a harassment-free experience for
everyone, regardless of level of experience, gender, gender identity and expression,
sexual orientation, disability, personal appearance, body size, race, ethnicity, age,
religion, or nationality.
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery
* Personal attacks
* Trolling or insulting/derogatory comments
* Public or private harassment
* Publishing others' private information, such as physical or electronic addresses,
without explicit permission
* Other unethical or unprofessional conduct.
Project maintainers have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are not
aligned to this Code of Conduct. By adopting this Code of Conduct, project maintainers
commit themselves to fairly and consistently applying these principles to every aspect
of managing this project. Project maintainers who do not follow or enforce the Code of
Conduct may be permanently removed from the project team.
This code of conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community.
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting a Rancher administrator on [Slack](https://slack.rancher.io), or <conduct@suse.com>.
This Code of Conduct is adapted from the Contributor Covenant
(http://contributor-covenant.org), version 1.2.0, available at
http://contributor-covenant.org/version/1/2/0/
# Community Code of Conduct
k3s observes the [CNCF Community Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting a Rancher administrator on [Slack](https://slack.rancher.io), or <conduct@suse.com>.

CONTRIBUTING.md

@ -2,9 +2,9 @@
Thanks for taking the time to contribute to K3s!
Please review and follow the [Code of Conduct](https://github.com/k3s-io/k3s/blob/master/CODE_OF_CONDUCT.md).
Please review and follow the [Code of Conduct](CODE_OF_CONDUCT.md).
Contributing is not limited to writing code and submitting a PR. Feel free to submit an [issue](https://github.com/k3s-io/k3s/issues/new/choose) or comment on an existing one to report a bug, provide feedback, or suggest a new feature. You can also join the discussion on [slack](https://slack.rancher.io/).
Contributing is not limited to writing code and submitting a PR. Feel free to submit an [issue](https://github.com/k3s-io/k3s/issues/new/choose) or comment on an existing one to report a bug, provide feedback, or suggest a new feature. You can also join the discussion on [slack](https://rancher-users.slack.com/channels/k3s).
Of course, contributing code is more than welcome! To keep things simple, if you're fixing a small issue, you can simply submit a PR and we will pick it up. However, if you're planning to submit a bigger PR to implement a new feature or fix a relatively complex bug, please open an issue that explains the change and the motivation for it. If you're addressing a bug, please explain how to reproduce it.
@ -12,7 +12,11 @@ If you're interested in contributing documentation, please note the following:
- Doc issues are raised in this repository, and they are tracked under the `kind/documentation` label.
- Pull requests are submitted to the K3s documentation source in the [k3s-io docs repository.](https://github.com/k3s-io/docs).
If you're interested in contributing new tests, please see the `TESTING.md` in the tests directory.
If you're interested in contributing new tests, please see the [TESTING.md](./tests/TESTING.md).
## Code Conventions
See the [code conventions documentation](./docs/contrib/code_conventions.md) for more information on how to write code for K3s.
### Opening PRs and organizing commits
PRs should generally address only 1 issue at a time. If you need to fix two bugs, open two separate PRs. This will keep the scope of your pull requests smaller and allow them to be reviewed and merged more quickly.

Dockerfile.dapper

@ -1,4 +1,4 @@
ARG GOLANG=golang:1.20.11-alpine3.18
ARG GOLANG=golang:1.22.2-alpine3.18
FROM ${GOLANG}
# Set proxy environment variables
@ -22,7 +22,7 @@ RUN apk -U --no-cache add \
RUN python3 -m pip install awscli
# Install Trivy
ENV TRIVY_VERSION="0.46.1"
ENV TRIVY_VERSION="0.51.4"
RUN case "$(go env GOARCH)" in \
arm64) TRIVY_ARCH="ARM64" ;; \
amd64) TRIVY_ARCH="64bit" ;; \
@ -43,7 +43,7 @@ RUN rm -rf /go/src /go/pkg
# Install golangci-lint for amd64
RUN if [ "$(go env GOARCH)" = "amd64" ]; then \
curl -sL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s v1.51.2; \
curl -sL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s v1.55.2; \
fi
# Set SELINUX environment variable
@ -56,9 +56,10 @@ ENV DAPPER_RUN_ARGS="--privileged -v k3s-cache:/go/src/github.com/k3s-io/k3s/.ca
DAPPER_SOURCE="/go/src/github.com/k3s-io/k3s/" \
DAPPER_OUTPUT="./bin ./dist ./build/out ./build/static ./pkg/static ./pkg/deploy" \
DAPPER_DOCKER_SOCKET=true \
HOME=${DAPPER_SOURCE} \
CROSS=true \
STATIC_BUILD=true
# Set $HOME separately because it refers to $DAPPER_SOURCE, set above
ENV HOME=${DAPPER_SOURCE}
WORKDIR ${DAPPER_SOURCE}


@ -1,4 +1,4 @@
ARG GOLANG=golang:1.20.11-alpine3.18
ARG GOLANG=golang:1.22.2-alpine3.18
FROM ${GOLANG} as infra
ARG http_proxy=$http_proxy
@ -46,9 +46,9 @@ RUN --mount=type=cache,id=gomod,target=/go/pkg/mod \
./scripts/download
COPY ./cmd ./cmd
COPY ./pkg ./pkg
COPY ./tests ./tests
COPY ./.git ./.git
COPY ./pkg ./pkg
RUN --mount=type=cache,id=gomod,target=/go/pkg/mod \
--mount=type=cache,id=gobuild,target=/root/.cache/go-build \
./scripts/build


@ -1,4 +1,4 @@
ARG GOLANG=golang:1.20.11-alpine3.18
ARG GOLANG=golang:1.22.2-alpine3.18
FROM ${GOLANG}
COPY --from=plugins/manifest:1.2.3 /bin/* /bin/

Dockerfile.test

@ -1,4 +1,4 @@
ARG GOLANG=golang:1.20.11-alpine3.18
ARG GOLANG=golang:1.22.2-alpine3.18
FROM ${GOLANG} as test-base
RUN apk -U --no-cache add bash jq
@ -14,11 +14,11 @@ ENTRYPOINT ["/bin/test-mods"]
FROM test-base as test-k3s
RUN apk -U --no-cache add git gcc musl-dev docker curl coreutils python3 openssl py3-pip procps findutils
RUN apk -U --no-cache add git gcc musl-dev docker curl coreutils python3 openssl py3-pip procps findutils yq
RUN python3 -m pip install awscli
ENV SONOBUOY_VERSION 0.57.0
ENV SONOBUOY_VERSION 0.57.1
RUN OS=linux; \
ARCH=$(go env GOARCH); \
@ -40,11 +40,11 @@ FROM vagrantlibvirt/vagrant-libvirt:0.12.1 AS test-e2e
RUN apt-get update && apt-get install -y docker.io
ENV VAGRANT_DISABLE_STRICT_DEPENDENCY_ENFORCEMENT=1
RUN vagrant plugin install vagrant-k3s vagrant-reload vagrant-scp
RUN vagrant box add generic/ubuntu2004 --provider libvirt --force
RUN vagrant box add generic/ubuntu2204 --provider libvirt --force
RUN curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"; \
chmod +x ./kubectl; \
mv ./kubectl /usr/local/bin/kubectl
RUN GO_VERSION=go1.20.11; \
RUN GO_VERSION=go1.21.5; \
curl -O -L "https://golang.org/dl/${GO_VERSION}.linux-amd64.tar.gz"; \
rm -rf /usr/local/go; \
tar -C /usr/local -xzf ${GO_VERSION}.linux-amd64.tar.gz;

channel.yaml

@ -1,7 +1,7 @@
# Example channels config
channels:
- name: stable
latest: v1.28.4+k3s2
latest: v1.29.5+k3s1
- name: latest
latestRegexp: .*
excludeRegexp: (^[^+]+-|v1\.25\.5\+k3s1|v1\.26\.0\+k3s1)
@ -53,6 +53,12 @@ channels:
- name: v1.28
latestRegexp: v1\.28\..*
excludeRegexp: ^[^+]+-
- name: v1.29
latestRegexp: v1\.29\..*
excludeRegexp: ^[^+]+-
- name: v1.30
latestRegexp: v1\.30\..*
excludeRegexp: ^[^+]+-
github:
owner: k3s-io
repo: k3s


@ -16,6 +16,7 @@ func main() {
app := cmds.NewApp()
app.Commands = []cli.Command{
cmds.NewCertCommands(
cert.Check,
cert.Rotate,
cert.RotateCA,
),


@ -19,7 +19,7 @@ import (
"github.com/k3s-io/k3s/pkg/untar"
"github.com/k3s-io/k3s/pkg/version"
"github.com/pkg/errors"
"github.com/rancher/wrangler/pkg/resolvehome"
"github.com/rancher/wrangler/v3/pkg/resolvehome"
"github.com/sirupsen/logrus"
"github.com/spf13/pflag"
"github.com/urfave/cli"
@ -76,6 +76,7 @@ func main() {
cmds.NewCertCommands(
certCommand,
certCommand,
certCommand,
),
cmds.NewCompletionCommand(internalCLIAction(version.Program+"-completion", dataDir, os.Args)),
}


@ -72,6 +72,7 @@ func main() {
secretsencrypt.RotateKeys,
),
cmds.NewCertCommands(
cert.Check,
cert.Rotate,
cert.RotateCA,
),


@ -1,5 +1,5 @@
FROM alpine:3.18
ENV SONOBUOY_VERSION 0.57.0
FROM alpine:3.20
ENV SONOBUOY_VERSION 0.57.1
RUN apk add curl tar gzip
RUN curl -sfL https://github.com/vmware-tanzu/sonobuoy/releases/download/v${SONOBUOY_VERSION}/sonobuoy_${SONOBUOY_VERSION}_linux_amd64.tar.gz | tar xvzf - -C /usr/bin
COPY run-test.sh /usr/bin


@ -55,6 +55,10 @@ is_set_as_module() {
}
color() {
if [ -n "$NO_COLOR" ]; then
return
fi
codes=
if [ "$1" = 'bold' ]; then
codes=1
@ -384,7 +388,7 @@ flags="
CGROUPS CGROUP_PIDS CGROUP_CPUACCT CGROUP_DEVICE CGROUP_FREEZER CGROUP_SCHED CPUSETS MEMCG
KEYS
VETH BRIDGE BRIDGE_NETFILTER
IP_NF_FILTER IP_NF_TARGET_MASQUERADE
IP_NF_FILTER IP_NF_TARGET_MASQUERADE IP_NF_TARGET_REJECT
NETFILTER_XT_MATCH_ADDRTYPE NETFILTER_XT_MATCH_CONNTRACK NETFILTER_XT_MATCH_IPVS NETFILTER_XT_MATCH_COMMENT NETFILTER_XT_MATCH_MULTIPORT
IP_NF_NAT NF_NAT
POSIX_MQUEUE

docker-compose.yml

@ -1,7 +1,6 @@
# to run define K3S_TOKEN, K3S_VERSION is optional, eg:
# K3S_TOKEN=${RANDOM}${RANDOM}${RANDOM} docker-compose up
version: '3'
services:
server:
@ -45,6 +44,9 @@ services:
environment:
- K3S_URL=https://server:6443
- K3S_TOKEN=${K3S_TOKEN:?err}
volumes:
- k3s-agent:/var/lib/rancher/k3s
volumes:
k3s-server: {}
k3s-agent: {}


@ -50,7 +50,7 @@ documentation can be referenced for more information.
* K3s will allow joining agents to the cluster using bootstrap token secrets.
* K3s will NOT allow joining servers to the cluster using bootstrap token secrets.
* K3s will include a `k3s token` subcommand that allows for token create/list/delete operations, similar to
the the functionality offered by `kubeadm`.
the functionality offered by `kubeadm`.
* K3s will enable the `tokencleaner` controller, in order to ensure that bootstrap token secrets are cleaned
up when their TTL expires.
* K3s agent bootstrap functionality will allow an agent to connect to the cluster using existing [Node


@ -0,0 +1,30 @@
# Add Support for Checking and Alerting on Certificate Expiry
Date: 2024-03-26
## Status
Accepted
## Context
The certificates generated by K3s have two lifecycles:
* Certificate authority certificates expire 3650 days (roughly 10 years) from their moment of issuance.
The CA certificates are not automatically renewed, and require manual intervention to extend their validity.
* Leaf certificates (client and server certs) expire 365 days (roughly 1 year) from their moment of issuance.
The certificates are automatically renewed if they are within 90 days of expiring at the time K3s starts.
K3s does not currently expose any information about certificate validity.
There are no metrics, CLI tools, or events that an administrator can use to track when certificates must be renewed or rotated to avoid outages when certificates expire.
The best we can do at the moment is recommend that administrators either restart their nodes regularly to ensure that certificates are renewed within the 90 day window, or manually rotate their certs yearly.
We do not have any guidance around renewing the CA certs, which will be a major undertaking for users as their clusters approach the 10-year mark. We currently have a bit of runway on this issue, as K3s has not been around for 10 years.
## Decision
* K3s will add a CLI command to print certificate validity. It will be grouped alongside the command used to rotate the leaf certificates (`k3s certificate rotate`).
* K3s will add an internal controller that maintains metrics for certificate expiration, and creates Events when certificates are about to or have expired.
## Consequences
This will require additional documentation, CLI subcommands, and QA work to validate the process steps.


@ -0,0 +1,43 @@
# Package spegel Distributed Registry Mirror
Date: 2023-12-07
## Status
Accepted
## Context
Embedded registry mirror support has been on the roadmap for some time, to address multiple challenges:
* Upstream registries may enforce pull limits or otherwise throttle access to images.
* In edge scenarios, bandwidth is at a premium, if external access is available at all.
* Distributing airgap image tarballs to nodes, and ensuring that images remain available, is an ongoing
hurdle to adoption.
* Deploying an in-cluster registry, or hosting a registry outside the cluster, puts a significant
burden on administrators, and suffers from chicken-or-egg bootstrapping issues.
An ideal embedded registry would have several characteristics:
* Allow stateless configuration such that nodes can come and go at any time.
* Integrate into existing containerd registry mirror support.
* Integrate into existing containerd image stores such that an additional copy of layer data is not required.
* Use existing cluster authentication mechanisms to prevent unauthorized access to the registry.
* Operate with minimal added CPU and memory overhead.
## Decision
* We will embed spegel within K3s, and use it to host a distributed registry mirror.
* The distributed registry mirror will be enabled cluster-wide via server CLI flag.
* Selection of upstream registries to mirror will be implemented via the existing `registries.yaml`
configuration file.
* The registry API will be served via HTTPS on every node's private IP at port 6443. On servers this will
use the existing supervisor listener; on agents a new listener will be created for this purpose.
* The default IPFS/libp2p port of 5001 will be used for P2P layer discovery.
* Access to the registry API and P2P network will require proof of cluster membership, enforced via
client certificate or preshared key.
* Hybrid/multicloud support is out of scope; when the distributed registry mirror is enabled, cluster
members are assumed to be directly accessible to each other via their internal IP on the listed ports.
## Consequences
* The size of our self-extracting binary and Docker images increase by several megabytes.
* We take on the support burden of keeping spegel up to date, and supporting its use within K3s.
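
A minimal sketch of the `registries.yaml` selection mechanism described above, assuming the embedded mirror has been enabled cluster-wide via the server CLI flag (`--embedded-registry`); the registry names here are illustrative:

```yaml
# /etc/rancher/k3s/registries.yaml on each node; listing a registry
# under mirrors: selects it to be served by the embedded mirror.
mirrors:
  docker.io:
  registry.k8s.io:
```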


@ -0,0 +1,21 @@
# Branching Strategy in Github
Proposal Date: 2024-05-23
## Status
Accepted
## Context
K3s is released at the same cadence as upstream Kubernetes. This requires management of multiple versions at any given point in time. The current branching strategy uses `release-v[MAJOR].[MINOR]` branches, with the `master` branch corresponding to the highest released version based on [semver](https://semver.org/). GitHub tags are then used to cut releases, which are point-in-time snapshots of the specified branch. As there is the potential for bugs and regressions to be present on any given branch, this branching and release strategy requires a code freeze to QA the branch without new, potentially breaking changes going in.
## Decision
All code changes go into the `master` branch. We maintain branches for all current release versions in the format `release-v[MAJOR].[MINOR]`. When changes made in master are necessary in a release, they should be backported directly into the release branches. If ever there are changes required only in the release branches and not in master, such as when bumping the kubernetes version from upstream, those can be made directly into the release branches themselves.
## Consequences
- Allows for constant development, with code freeze only relevant for the release branches.
- This requires maintaining one more branch than the current workflow, which also means one additional issue.
- Testing would be more continuous on the master branch.
- The minor release captain will have to cut the new release branch as soon as they bring in the new minor version.

docs/adrs/testing-2024.md

@ -0,0 +1,62 @@
# Testing in K3s
Date: 2024-02-23
## Context
### Background
Currently, testing in K3s is categorized into various types and spread across GitHub Actions and Drone CI. The types are as follows:
GitHub Actions:
- Unit Tests: For testing individual components and functions, following a "white box" approach.
- Integration Tests: Test functionalities across multiple packages, using "black box" testing.
- Smoke Tests: Simple tests to ensure basic functionality works as expected. Broken into:
- Cgroup: Tests cgroupv2 support.
- Snapshotter: Tests btrfs and overlayfs snapshotter support.
- Install tests: Tests the installation of K3s on various OSes.
Drone CI:
- Docker Tests: Run clusters in containers. Broken into:
- Basic Tests: Run clusters in containers to test basic functionality.
- Sonobuoy Conformance Tests: Run clusters in containers to validate K8s conformance. Runs on multiple database backends.
- End-to-End (E2E) Tests: Cover multi-node configuration/administration.
- Performance Tests: Use Terraform to test large-scale deployments of K3s clusters. These are legacy tests and are never run in CI.
### Problems
- The current testing infrastructure is complex and fragmented, leading to maintenance overhead. Not all testing is grouped inside the [tests directory](../../tests/).
- GitHub Actions had limited resources, making it unsuitable for running larger tests.
- GitHub Actions only supported hardware virtualization on Mac runners, and that support was often broken.
- Drone CI cannot handle individual testing failures. If a single test fails, the entire build is marked as failed.
### New Developments
As of late January 2024, GitHub Actions has made significant improvements:
- The resources available to open source GitHub Actions have been doubled, with 4 CPU cores and 16GB of RAM. See blog post [here](https://github.blog/2024-01-17-github-hosted-runners-double-the-power-for-open-source/).
- Standard (i.e. free) Linux runners now support Nested Virtualization
## Decision
We will move towards a single testing platform, GitHub Actions, and leverage the recent improvements in resources and nested virtualization support. This will involve the following changes:
- Test distribution based on size and complexity:
- Unit, Integration: Will continue to run in GitHub Actions due to their smaller scale and faster execution times.
- Install Test, Docker Basic, and E2E Tests: Will run in GitHub Actions on standard linux runners thanks to recent enhancements.
- Docker Conformance and large E2E Tests (2+ nodes): Will still utilize Drone CI for resource-intensive scenarios.
- Consolidating all testing-related files within the "tests" directory for better organization and clarity.
- Cgroup smoke tests will be removed. As multiple Operating Systems now support CgroupV2 by default, these tests are no longer relevant.
- Snapshotter smoke test will be converted into a full E2E test.
- Removal of old performance tests, as they are no longer relevant. Scale testing is already handled by QA as needed.
These changes are tracked in [this issue](https://github.com/k3s-io/k3s/issues/9477).
## Consequences
- The testing infrastructure will be more organized and easier to maintain.
- The move to GitHub Actions will allow for faster feedback on PRs and issues.
- The removal of old tests will reduce the maintenance overhead.
- New testing process can be used as a model for related projects.

docs/contrib/development.md Normal file → Executable file

@ -73,7 +73,7 @@ As described in the [Testing documentation](../../tests/TESTING.md), all the smo
These topics already have been addressed on their respective documents:
- [Git Workflow](./git-workflow.md)
- [Git Workflow](./git_workflow.md)
- [Building](../../BUILDING.md)
- [Testing](../../tests/TESTING.md)


@ -7,6 +7,6 @@
1. if you already have a fork, sync it
1. add your fork repo as "origin"
1. fetch all objects from both repos into your local copy
1. it is important to follow these steps because Go is very particular about the file structure (it uses the the file structure to infer the urls it will pull dependencies from)
1. it is important to follow these steps because Go is very particular about the file structure (it uses the file structure to infer the urls it will pull dependencies from)
1. this is why it is important that the repo is in the github.com/k3s-io directory, and that the repo's directory is "k3s" matching the upstream copy's name
`$HOME/go/src/github.com/k3s-io/k3s`


@ -5,5 +5,5 @@
1. clone kubernetes/kubernetes repo into that directory as "upstream"
1. add k3s-io/kubernetes repo as "k3s-io"
1. fetch all objects from both repos into your local copy
1. it is important to follow these steps because Go is very particular about the file structure (it uses the the file structure to infer the urls it will pull dependencies from)
1. it is important to follow these steps because Go is very particular about the file structure (it uses the file structure to infer the urls it will pull dependencies from)
1. this is why it is important that the repo is in the github.com/kubernetes directory, and that the repo's directory is "kubernetes" matching the upstream copy's name `$HOME/go/src/github.com/kubernetes/kubernetes`


@ -13,7 +13,7 @@ This guide helps you navigate the creation of those variables.
1. set NEW_K8S_CLIENT to the client version which corresponds with the newly released k8s version
1. set OLD_K3S_VER to the previous k3s version (the one which corresponds to the previous k8s version), replacing the plus symbol with a dash (eg. for "v1.25.0+k3s1" use "v1.25.0-k3s1")
1. set NEW_K3S_VER to the k3s version which corresponds to the newly released k8s version, replacing the plus symbol with a dash
1. set RELEASE_BRANCH to the the k3s release branch which corresponds to the newly released k8s version
1. set RELEASE_BRANCH to the k3s release branch which corresponds to the newly released k8s version
1. set GOPATH to the path to the "go" directory (usually $HOME/go)
1. set GOVERSION to the version of go which the newly released k8s version uses
1. you can find this in the kubernetes/kubernetes repo


@ -8,7 +8,7 @@ After the RCs are cut you need to generate the KDM PR within a few hours
1. clear out (remove) kontainer-driver-metadata repo if is already there (just makes things smoother with a new clone)
1. fork kdm repo
1. clone your fork into that directory as "origin" (you won't need a local copy of upstream)
1. it is important to follow these steps because Go is very particular about the file structure (it uses the the file structure to infer the urls it will pull dependencies from)
1. it is important to follow these steps because Go is very particular about the file structure (it uses the file structure to infer the urls it will pull dependencies from)
1. go generate needs to be able to fully use Go as expected, so it is important to get the file structure correct
1. this is why it is important that the repo is in the github.com/rancher directory, and that the repo's directory is "kontainer-driver-metadata" matching the upstream copy's name
1. $HOME/go/src/github.com/rancher/kontainer-driver-metadata


@ -286,6 +286,7 @@ Once QA signs off on a RC:
3. Publish.
4. Reiterate the previous checking processes and update KDM specifications accordingly with the GA release tags.
5. CI has completed, and artifacts have been created. Announce the GA and inform that k3s is thawed in the Slack release thread.
6. Create a `system-agent-installer-k3s` release with a matching tag.
##### `After 24 hours`:
1. Uncheck prerelease, and save.
2. Update channel server

go.mod

@ -1,310 +1,340 @@
module github.com/k3s-io/k3s
go 1.20
go 1.22.2
replace (
github.com/Microsoft/hcsshim => github.com/Microsoft/hcsshim v0.11.0
github.com/Mirantis/cri-dockerd => github.com/k3s-io/cri-dockerd v0.3.4-k3s3 // k3s/release-1.28
github.com/cloudnativelabs/kube-router/v2 => github.com/k3s-io/kube-router/v2 v2.0.0-20230925161250-364f994b140b
github.com/containerd/containerd => github.com/k3s-io/containerd v1.7.11-k3s2
github.com/coreos/go-systemd => github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e
github.com/docker/distribution => github.com/docker/distribution v2.8.2+incompatible
github.com/docker/docker => github.com/docker/docker v24.0.0-rc.2.0.20230801142700-69c9adb7d386+incompatible
github.com/docker/libnetwork => github.com/docker/libnetwork v0.8.0-dev.2.0.20190624125649-f0e46a78ea34
github.com/Mirantis/cri-dockerd => github.com/k3s-io/cri-dockerd v0.3.12-k3s1.30-3 // k3s/release-1.30
github.com/cloudnativelabs/kube-router/v2 => github.com/k3s-io/kube-router/v2 v2.1.2
github.com/containerd/containerd => github.com/k3s-io/containerd v1.7.17-k3s1
github.com/docker/distribution => github.com/docker/distribution v2.8.3+incompatible
github.com/docker/docker => github.com/docker/docker v25.0.4+incompatible
github.com/emicklei/go-restful/v3 => github.com/emicklei/go-restful/v3 v3.9.0
github.com/golang/protobuf => github.com/golang/protobuf v1.5.3
github.com/golang/protobuf => github.com/golang/protobuf v1.5.4
github.com/googleapis/gax-go/v2 => github.com/googleapis/gax-go/v2 v2.12.0
github.com/juju/errors => github.com/k3s-io/nocode v0.0.0-20200630202308-cb097102c09f
github.com/kubernetes-sigs/cri-tools => github.com/k3s-io/cri-tools v1.26.0-rc.0-k3s1
github.com/opencontainers/runc => github.com/opencontainers/runc v1.1.10
github.com/opencontainers/runtime-spec => github.com/opencontainers/runtime-spec v1.0.3-0.20220909204839-494a5a6aca78
github.com/opencontainers/selinux => github.com/opencontainers/selinux v1.10.1
github.com/rancher/wrangler => github.com/rancher/wrangler v1.1.1-0.20230818201331-3604a6be798d
go.etcd.io/etcd/api/v3 => github.com/k3s-io/etcd/api/v3 v3.5.9-k3s1
go.etcd.io/etcd/client/pkg/v3 => github.com/k3s-io/etcd/client/pkg/v3 v3.5.9-k3s1
go.etcd.io/etcd/client/v2 => github.com/k3s-io/etcd/client/v2 v2.305.9-k3s1
go.etcd.io/etcd/client/v3 => github.com/k3s-io/etcd/client/v3 v3.5.9-k3s1
go.etcd.io/etcd/etcdutl/v3 => github.com/k3s-io/etcd/etcdutl/v3 v3.5.9-k3s1
go.etcd.io/etcd/pkg/v3 => github.com/k3s-io/etcd/pkg/v3 v3.5.9-k3s1
go.etcd.io/etcd/raft/v3 => github.com/k3s-io/etcd/raft/v3 v3.5.9-k3s1
go.etcd.io/etcd/server/v3 => github.com/k3s-io/etcd/server/v3 v3.5.9-k3s1
go.opentelemetry.io/contrib/instrumentation/github.com/emicklei/go-restful/otelrestful => go.opentelemetry.io/contrib/instrumentation/github.com/emicklei/go-restful/otelrestful v0.35.0
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc => go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.35.0
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp => go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.35.1
go.opentelemetry.io/contrib/propagators/b3 => go.opentelemetry.io/contrib/propagators/b3 v1.13.0
go.opentelemetry.io/otel => go.opentelemetry.io/otel v1.13.0
go.opentelemetry.io/otel/exporters/otlp/internal/retry => go.opentelemetry.io/otel/exporters/otlp/internal/retry v1.13.0
go.opentelemetry.io/otel/exporters/otlp/otlpmetric => go.opentelemetry.io/otel/exporters/otlp/otlpmetric v0.32.1
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc => go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v0.32.1
go.opentelemetry.io/otel/exporters/otlp/otlptrace => go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.13.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc => go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.13.0
go.opentelemetry.io/otel/metric => go.opentelemetry.io/otel/metric v0.32.1
go.opentelemetry.io/otel/sdk => go.opentelemetry.io/otel/sdk v1.13.0
go.opentelemetry.io/otel/trace => go.opentelemetry.io/otel/trace v1.13.0
go.opentelemetry.io/proto/otlp => go.opentelemetry.io/proto/otlp v0.19.0
golang.org/x/crypto => golang.org/x/crypto v0.1.0
github.com/kubernetes-sigs/cri-tools => github.com/k3s-io/cri-tools v1.29.0-k3s1
github.com/open-policy-agent/opa => github.com/open-policy-agent/opa v0.59.0 // github.com/Microsoft/hcsshim using bad version v0.42.2
github.com/opencontainers/runc => github.com/k3s-io/runc v1.1.12-k3s1
github.com/opencontainers/selinux => github.com/opencontainers/selinux v1.11.0
github.com/prometheus/client_golang => github.com/prometheus/client_golang v1.18.0
github.com/prometheus/common => github.com/prometheus/common v0.45.0
github.com/spegel-org/spegel => github.com/k3s-io/spegel v0.0.23-0.20240516234953-f3d2c4072314
github.com/ugorji/go => github.com/ugorji/go v1.2.11
go.etcd.io/etcd/api/v3 => github.com/k3s-io/etcd/api/v3 v3.5.13-k3s1
go.etcd.io/etcd/client/pkg/v3 => github.com/k3s-io/etcd/client/pkg/v3 v3.5.13-k3s1
go.etcd.io/etcd/client/v2 => github.com/k3s-io/etcd/client/v2 v2.305.13-k3s1
go.etcd.io/etcd/client/v3 => github.com/k3s-io/etcd/client/v3 v3.5.13-k3s1
go.etcd.io/etcd/etcdutl/v3 => github.com/k3s-io/etcd/etcdutl/v3 v3.5.13-k3s1
go.etcd.io/etcd/pkg/v3 => github.com/k3s-io/etcd/pkg/v3 v3.5.13-k3s1
go.etcd.io/etcd/raft/v3 => github.com/k3s-io/etcd/raft/v3 v3.5.13-k3s1
go.etcd.io/etcd/server/v3 => github.com/k3s-io/etcd/server/v3 v3.5.13-k3s1
go.opentelemetry.io/contrib/instrumentation/github.com/emicklei/go-restful/otelrestful => go.opentelemetry.io/contrib/instrumentation/github.com/emicklei/go-restful/otelrestful v0.44.0
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc => go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.45.0
golang.org/x/crypto => golang.org/x/crypto v0.17.0
golang.org/x/net => golang.org/x/net v0.17.0
golang.org/x/sys => golang.org/x/sys v0.6.0
golang.org/x/sys => golang.org/x/sys v0.18.0
google.golang.org/genproto => google.golang.org/genproto v0.0.0-20230525234035-dd9d682886f9
google.golang.org/grpc => google.golang.org/grpc v1.58.3
gopkg.in/square/go-jose.v2 => gopkg.in/square/go-jose.v2 v2.6.0
k8s.io/api => github.com/k3s-io/kubernetes/staging/src/k8s.io/api v1.28.4-k3s1
k8s.io/apiextensions-apiserver => github.com/k3s-io/kubernetes/staging/src/k8s.io/apiextensions-apiserver v1.28.4-k3s1
k8s.io/apimachinery => github.com/k3s-io/kubernetes/staging/src/k8s.io/apimachinery v1.28.4-k3s1
k8s.io/apiserver => github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver v1.28.4-k3s1
k8s.io/cli-runtime => github.com/k3s-io/kubernetes/staging/src/k8s.io/cli-runtime v1.28.4-k3s1
k8s.io/client-go => github.com/k3s-io/kubernetes/staging/src/k8s.io/client-go v1.28.4-k3s1
k8s.io/cloud-provider => github.com/k3s-io/kubernetes/staging/src/k8s.io/cloud-provider v1.28.4-k3s1
k8s.io/cluster-bootstrap => github.com/k3s-io/kubernetes/staging/src/k8s.io/cluster-bootstrap v1.28.4-k3s1
k8s.io/code-generator => github.com/k3s-io/kubernetes/staging/src/k8s.io/code-generator v1.28.4-k3s1
k8s.io/component-base => github.com/k3s-io/kubernetes/staging/src/k8s.io/component-base v1.28.4-k3s1
k8s.io/component-helpers => github.com/k3s-io/kubernetes/staging/src/k8s.io/component-helpers v1.28.4-k3s1
k8s.io/controller-manager => github.com/k3s-io/kubernetes/staging/src/k8s.io/controller-manager v1.28.4-k3s1
k8s.io/cri-api => github.com/k3s-io/kubernetes/staging/src/k8s.io/cri-api v1.28.4-k3s1
k8s.io/csi-translation-lib => github.com/k3s-io/kubernetes/staging/src/k8s.io/csi-translation-lib v1.28.4-k3s1
k8s.io/dynamic-resource-allocation => github.com/k3s-io/kubernetes/staging/src/k8s.io/dynamic-resource-allocation v1.28.4-k3s1
k8s.io/endpointslice => github.com/k3s-io/kubernetes/staging/src/k8s.io/endpointslice v1.28.4-k3s1
k8s.io/api => github.com/k3s-io/kubernetes/staging/src/k8s.io/api v1.30.1-k3s1
k8s.io/apiextensions-apiserver => github.com/k3s-io/kubernetes/staging/src/k8s.io/apiextensions-apiserver v1.30.1-k3s1
k8s.io/apimachinery => github.com/k3s-io/kubernetes/staging/src/k8s.io/apimachinery v1.30.1-k3s1
k8s.io/apiserver => github.com/k3s-io/kubernetes/staging/src/k8s.io/apiserver v1.30.1-k3s1
k8s.io/cli-runtime => github.com/k3s-io/kubernetes/staging/src/k8s.io/cli-runtime v1.30.1-k3s1
k8s.io/client-go => github.com/k3s-io/kubernetes/staging/src/k8s.io/client-go v1.30.1-k3s1
k8s.io/cloud-provider => github.com/k3s-io/kubernetes/staging/src/k8s.io/cloud-provider v1.30.1-k3s1
k8s.io/cluster-bootstrap => github.com/k3s-io/kubernetes/staging/src/k8s.io/cluster-bootstrap v1.30.1-k3s1
k8s.io/code-generator => github.com/k3s-io/kubernetes/staging/src/k8s.io/code-generator v1.30.1-k3s1
k8s.io/component-base => github.com/k3s-io/kubernetes/staging/src/k8s.io/component-base v1.30.1-k3s1
k8s.io/component-helpers => github.com/k3s-io/kubernetes/staging/src/k8s.io/component-helpers v1.30.1-k3s1
k8s.io/controller-manager => github.com/k3s-io/kubernetes/staging/src/k8s.io/controller-manager v1.30.1-k3s1
k8s.io/cri-api => github.com/k3s-io/kubernetes/staging/src/k8s.io/cri-api v1.30.1-k3s1
k8s.io/csi-translation-lib => github.com/k3s-io/kubernetes/staging/src/k8s.io/csi-translation-lib v1.30.1-k3s1
k8s.io/dynamic-resource-allocation => github.com/k3s-io/kubernetes/staging/src/k8s.io/dynamic-resource-allocation v1.30.1-k3s1
k8s.io/endpointslice => github.com/k3s-io/kubernetes/staging/src/k8s.io/endpointslice v1.30.1-k3s1
k8s.io/klog => github.com/k3s-io/klog v1.0.0-k3s2 // k3s-release-1.x
k8s.io/klog/v2 => github.com/k3s-io/klog/v2 v2.100.1-k3s1 // k3s-main
k8s.io/kms => github.com/k3s-io/kubernetes/staging/src/k8s.io/kms v1.28.4-k3s1
k8s.io/kube-aggregator => github.com/k3s-io/kubernetes/staging/src/k8s.io/kube-aggregator v1.28.4-k3s1
k8s.io/kube-controller-manager => github.com/k3s-io/kubernetes/staging/src/k8s.io/kube-controller-manager v1.28.4-k3s1
k8s.io/kube-proxy => github.com/k3s-io/kubernetes/staging/src/k8s.io/kube-proxy v1.28.4-k3s1
k8s.io/kube-scheduler => github.com/k3s-io/kubernetes/staging/src/k8s.io/kube-scheduler v1.28.4-k3s1
k8s.io/kubectl => github.com/k3s-io/kubernetes/staging/src/k8s.io/kubectl v1.28.4-k3s1
k8s.io/kubelet => github.com/k3s-io/kubernetes/staging/src/k8s.io/kubelet v1.28.4-k3s1
k8s.io/kubernetes => github.com/k3s-io/kubernetes v1.28.4-k3s1
k8s.io/legacy-cloud-providers => github.com/k3s-io/kubernetes/staging/src/k8s.io/legacy-cloud-providers v1.28.4-k3s1
k8s.io/metrics => github.com/k3s-io/kubernetes/staging/src/k8s.io/metrics v1.28.4-k3s1
k8s.io/mount-utils => github.com/k3s-io/kubernetes/staging/src/k8s.io/mount-utils v1.28.4-k3s1
k8s.io/node-api => github.com/k3s-io/kubernetes/staging/src/k8s.io/node-api v1.28.4-k3s1
k8s.io/pod-security-admission => github.com/k3s-io/kubernetes/staging/src/k8s.io/pod-security-admission v1.28.4-k3s1
k8s.io/sample-apiserver => github.com/k3s-io/kubernetes/staging/src/k8s.io/sample-apiserver v1.28.4-k3s1
k8s.io/sample-cli-plugin => github.com/k3s-io/kubernetes/staging/src/k8s.io/sample-cli-plugin v1.28.4-k3s1
k8s.io/sample-controller => github.com/k3s-io/kubernetes/staging/src/k8s.io/sample-controller v1.28.4-k3s1
mvdan.cc/unparam => mvdan.cc/unparam v0.0.0-20210104141923-aac4ce9116a7
k8s.io/klog/v2 => github.com/k3s-io/klog/v2 v2.120.1-k3s1 // k3s-main
k8s.io/kms => github.com/k3s-io/kubernetes/staging/src/k8s.io/kms v1.30.1-k3s1
k8s.io/kube-aggregator => github.com/k3s-io/kubernetes/staging/src/k8s.io/kube-aggregator v1.30.1-k3s1
k8s.io/kube-controller-manager => github.com/k3s-io/kubernetes/staging/src/k8s.io/kube-controller-manager v1.30.1-k3s1
k8s.io/kube-proxy => github.com/k3s-io/kubernetes/staging/src/k8s.io/kube-proxy v1.30.1-k3s1
k8s.io/kube-scheduler => github.com/k3s-io/kubernetes/staging/src/k8s.io/kube-scheduler v1.30.1-k3s1
k8s.io/kubectl => github.com/k3s-io/kubernetes/staging/src/k8s.io/kubectl v1.30.1-k3s1
k8s.io/kubelet => github.com/k3s-io/kubernetes/staging/src/k8s.io/kubelet v1.30.1-k3s1
k8s.io/kubernetes => github.com/k3s-io/kubernetes v1.30.1-k3s1
k8s.io/legacy-cloud-providers => github.com/k3s-io/kubernetes/staging/src/k8s.io/legacy-cloud-providers v1.30.1-k3s1
k8s.io/metrics => github.com/k3s-io/kubernetes/staging/src/k8s.io/metrics v1.30.1-k3s1
k8s.io/mount-utils => github.com/k3s-io/kubernetes/staging/src/k8s.io/mount-utils v1.30.1-k3s1
k8s.io/node-api => github.com/k3s-io/kubernetes/staging/src/k8s.io/node-api v1.30.1-k3s1
k8s.io/pod-security-admission => github.com/k3s-io/kubernetes/staging/src/k8s.io/pod-security-admission v1.30.1-k3s1
k8s.io/sample-apiserver => github.com/k3s-io/kubernetes/staging/src/k8s.io/sample-apiserver v1.30.1-k3s1
k8s.io/sample-cli-plugin => github.com/k3s-io/kubernetes/staging/src/k8s.io/sample-cli-plugin v1.30.1-k3s1
k8s.io/sample-controller => github.com/k3s-io/kubernetes/staging/src/k8s.io/sample-controller v1.30.1-k3s1
sourcegraph.com/sourcegraph/go-diff => github.com/sourcegraph/go-diff v0.6.0
)
require (
github.com/Microsoft/hcsshim v0.11.4
github.com/Microsoft/hcsshim v0.12.3
github.com/Mirantis/cri-dockerd v0.0.0-00010101000000-000000000000
github.com/blang/semver/v4 v4.0.0
github.com/cloudnativelabs/kube-router/v2 v2.0.0-00010101000000-000000000000
github.com/containerd/aufs v1.0.0
github.com/containerd/cgroups/v3 v3.0.2
github.com/containerd/containerd v1.7.3
github.com/containerd/fuse-overlayfs-snapshotter v1.0.5
github.com/containerd/stargz-snapshotter v0.14.4-0.20230913082252-7275d45b185c
github.com/containerd/containerd v1.7.16
github.com/containerd/fuse-overlayfs-snapshotter v1.0.8
github.com/containerd/stargz-snapshotter v0.15.1
github.com/containerd/zfs v1.1.0
github.com/coreos/go-iptables v0.7.0
github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf
github.com/docker/docker v24.0.5+incompatible
github.com/coreos/go-systemd/v22 v22.5.0
github.com/docker/docker v25.0.5+incompatible
github.com/erikdubbelboer/gspt v0.0.0-20190125194910-e68493906b83
github.com/flannel-io/flannel v0.22.2
github.com/flannel-io/flannel v0.25.2
github.com/go-bindata/go-bindata v3.1.2+incompatible
github.com/go-logr/logr v1.4.1
github.com/go-logr/stdr v1.2.3-0.20220714215716-96bad1d688c5
github.com/go-sql-driver/mysql v1.7.1
github.com/go-test/deep v1.0.7
github.com/golang/mock v1.6.0
github.com/google/cadvisor v0.47.3
github.com/google/uuid v1.3.0
github.com/gorilla/mux v1.8.0
github.com/gorilla/websocket v1.5.0
github.com/google/cadvisor v0.49.0
github.com/google/uuid v1.6.0
github.com/gorilla/mux v1.8.1
github.com/gorilla/websocket v1.5.1
github.com/ipfs/go-ds-leveldb v0.5.0
github.com/ipfs/go-log/v2 v2.5.1
github.com/joho/godotenv v1.5.1
github.com/json-iterator/go v1.1.12
github.com/k3s-io/helm-controller v0.15.4
github.com/k3s-io/kine v0.11.0
github.com/klauspost/compress v1.17.2
github.com/k3s-io/helm-controller v0.16.1
github.com/k3s-io/kine v0.11.9
github.com/klauspost/compress v1.17.7
github.com/kubernetes-sigs/cri-tools v0.0.0-00010101000000-000000000000
github.com/lib/pq v1.10.2
github.com/mattn/go-sqlite3 v1.14.17
github.com/minio/minio-go/v7 v7.0.33
github.com/libp2p/go-libp2p v0.33.2
github.com/mattn/go-sqlite3 v1.14.19
github.com/minio/minio-go/v7 v7.0.70
github.com/mwitkow/go-http-dialer v0.0.0-20161116154839-378f744fb2b8
github.com/natefinch/lumberjack v2.0.0+incompatible
github.com/onsi/ginkgo/v2 v2.11.0
github.com/onsi/gomega v1.27.10
github.com/opencontainers/runc v1.1.7
github.com/onsi/ginkgo/v2 v2.16.0
github.com/onsi/gomega v1.32.0
github.com/opencontainers/runc v1.1.12
github.com/opencontainers/selinux v1.11.0
github.com/otiai10/copy v1.7.0
github.com/pkg/errors v0.9.1
github.com/rancher/dynamiclistener v0.3.6
github.com/rancher/lasso v0.0.0-20230830164424-d684fdeb6f29
github.com/prometheus/client_golang v1.19.1
github.com/prometheus/common v0.49.0
github.com/rancher/dynamiclistener v0.6.0-rc1
github.com/rancher/lasso v0.0.0-20240430201833-6f3def65ffc5
github.com/rancher/remotedialer v0.3.0
github.com/rancher/wharfie v0.5.3
github.com/rancher/wrangler v1.1.1
github.com/rancher/wharfie v0.6.4
github.com/rancher/wrangler/v3 v3.0.0-rc2
github.com/robfig/cron/v3 v3.0.1
github.com/rootless-containers/rootlesskit v1.0.1
github.com/sirupsen/logrus v1.9.3
github.com/spegel-org/spegel v1.0.18
github.com/spf13/pflag v1.0.5
github.com/stretchr/testify v1.8.4
github.com/stretchr/testify v1.9.0
github.com/urfave/cli v1.22.14
github.com/vishvananda/netlink v1.2.1-beta.2
github.com/yl2chen/cidranger v1.0.2
go.etcd.io/etcd/api/v3 v3.5.9
go.etcd.io/etcd/client/pkg/v3 v3.5.9
go.etcd.io/etcd/client/v3 v3.5.9
go.etcd.io/etcd/api/v3 v3.5.13
go.etcd.io/etcd/client/pkg/v3 v3.5.13
go.etcd.io/etcd/client/v3 v3.5.13
go.etcd.io/etcd/etcdutl/v3 v3.5.9
go.etcd.io/etcd/server/v3 v3.5.9
go.uber.org/zap v1.24.0
golang.org/x/crypto v0.15.0
golang.org/x/net v0.17.0
golang.org/x/sync v0.3.0
golang.org/x/sys v0.14.0
google.golang.org/grpc v1.58.3
go.etcd.io/etcd/server/v3 v3.5.13
go.uber.org/zap v1.27.0
golang.org/x/crypto v0.22.0
golang.org/x/net v0.24.0
golang.org/x/sync v0.7.0
golang.org/x/sys v0.19.0
google.golang.org/grpc v1.63.2
gopkg.in/yaml.v2 v2.4.0
inet.af/tcpproxy v0.0.0-20200125044825-b6bb9b5b8252
k8s.io/api v0.28.4
k8s.io/apimachinery v0.28.4
k8s.io/apiserver v0.28.4
k8s.io/api v0.30.1
k8s.io/apimachinery v0.30.1
k8s.io/apiserver v0.30.1
k8s.io/cli-runtime v0.22.2
k8s.io/client-go v11.0.1-0.20190409021438-1a26190bd76a+incompatible
k8s.io/cloud-provider v0.28.4
k8s.io/cloud-provider v0.30.1
k8s.io/cluster-bootstrap v0.0.0
k8s.io/component-base v0.28.4
k8s.io/component-helpers v0.28.4
k8s.io/cri-api v0.29.0-alpha.0
k8s.io/klog/v2 v2.100.1
k8s.io/component-base v0.30.1
k8s.io/component-helpers v0.30.1
k8s.io/cri-api v0.30.1
k8s.io/klog/v2 v2.120.1
k8s.io/kube-proxy v0.0.0
k8s.io/kubectl v0.25.0
k8s.io/kubernetes v1.28.4
k8s.io/utils v0.0.0-20230406110748-d93618cff8a2
sigs.k8s.io/yaml v1.3.0
k8s.io/kubernetes v1.30.1
k8s.io/utils v0.0.0-20240310230437-4693a0247e57
sigs.k8s.io/yaml v1.4.0
)
require (
cloud.google.com/go/compute v1.23.0 // indirect
cloud.google.com/go/compute v1.23.3 // indirect
cloud.google.com/go/compute/metadata v0.2.3 // indirect
dario.cat/mergo v1.0.0 // indirect
github.com/AdaLogics/go-fuzz-headers v0.0.0-20230811130428-ced1acdcaa24 // indirect
github.com/AdamKorcz/go-118-fuzz-build v0.0.0-20230306123547-8075edf89bb0 // indirect
github.com/Azure/azure-sdk-for-go v68.0.0+incompatible // indirect
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
github.com/Azure/go-autorest v14.2.0+incompatible // indirect
github.com/Azure/go-autorest/autorest v0.11.29 // indirect
github.com/Azure/go-autorest/autorest/adal v0.9.23 // indirect
github.com/Azure/go-autorest/autorest/date v0.3.0 // indirect
github.com/Azure/go-autorest/autorest/mocks v0.4.2 // indirect
github.com/Azure/go-autorest/autorest/to v0.4.0 // indirect
github.com/Azure/go-autorest/autorest/validation v0.3.1 // indirect
github.com/Azure/go-autorest/logger v0.2.1 // indirect
github.com/Azure/go-autorest/tracing v0.6.0 // indirect
github.com/GoogleCloudPlatform/k8s-cloud-provider v1.18.1-0.20220218231025-f11817397a1b // indirect
github.com/JeffAshton/win_pdh v0.0.0-20161109143554-76bb4ee9f0ab // indirect
github.com/MakeNowJust/heredoc v1.0.0 // indirect
github.com/Microsoft/go-winio v0.6.1 // indirect
github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/NYTimes/gziphandler v1.1.1 // indirect
github.com/Rican7/retry v0.1.0 // indirect
github.com/antlr/antlr4/runtime/Go/antlr/v4 v4.0.0-20230305170008-8188dc5388df // indirect
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e // indirect
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a // indirect
github.com/avast/retry-go/v4 v4.3.2 // indirect
github.com/avast/retry-go/v4 v4.6.0 // indirect
github.com/benbjohnson/clock v1.3.5 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/blang/semver v3.5.1+incompatible // indirect
github.com/bronze1man/goStrongswanVici v0.0.0-20201105010758-936f38b697fd // indirect
github.com/bronze1man/goStrongswanVici v0.0.0-20221114103242-3f6dc524986c // indirect
github.com/canonical/go-dqlite v1.5.1 // indirect
github.com/cenkalti/backoff/v4 v4.2.1 // indirect
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/chai2010/gettext-go v1.0.2 // indirect
github.com/checkpoint-restore/go-criu/v5 v5.3.0 // indirect
github.com/cilium/ebpf v0.9.1 // indirect
github.com/container-orchestrated-devices/container-device-interface v0.5.4 // indirect
github.com/container-storage-interface/spec v1.8.0 // indirect
github.com/containerd/btrfs/v2 v2.0.0 // indirect
github.com/containerd/cgroups v1.1.0 // indirect
github.com/containerd/console v1.0.3 // indirect
github.com/containerd/continuity v0.4.2 // indirect
github.com/containerd/continuity v0.4.3 // indirect
github.com/containerd/fifo v1.1.0 // indirect
github.com/containerd/go-cni v1.1.9 // indirect
github.com/containerd/go-runc v1.0.0 // indirect
github.com/containerd/imgcrypt v1.1.7 // indirect
github.com/containerd/imgcrypt v1.1.8 // indirect
github.com/containerd/log v0.1.0 // indirect
github.com/containerd/nri v0.4.0 // indirect
github.com/containerd/stargz-snapshotter/estargz v0.14.3 // indirect
github.com/containerd/ttrpc v1.2.2 // indirect
github.com/containerd/nri v0.6.1 // indirect
github.com/containerd/stargz-snapshotter/estargz v0.15.1 // indirect
github.com/containerd/ttrpc v1.2.4 // indirect
github.com/containerd/typeurl v1.0.2 // indirect
github.com/containerd/typeurl/v2 v2.1.1 // indirect
github.com/containernetworking/cni v1.1.2 // indirect
github.com/containernetworking/plugins v1.2.0 // indirect
github.com/containers/ocicrypt v1.1.6 // indirect
github.com/containernetworking/plugins v1.4.1 // indirect
github.com/containers/ocicrypt v1.1.10 // indirect
github.com/coreos/go-oidc v2.2.1+incompatible // indirect
github.com/coreos/go-semver v0.3.1 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
github.com/cyphar/filepath-securejoin v0.2.3 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.3 // indirect
github.com/cyphar/filepath-securejoin v0.2.4 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/daviddengcn/go-colortext v1.0.0 // indirect
github.com/docker/cli v24.0.5+incompatible // indirect
github.com/docker/distribution v2.8.2+incompatible // indirect
github.com/davidlazar/go-crypto v0.0.0-20200604182044-b73af7476f6c // indirect
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.2.0 // indirect
github.com/distribution/reference v0.5.0 // indirect
github.com/docker/cli v24.0.7+incompatible // indirect
github.com/docker/distribution v2.8.3+incompatible // indirect
github.com/docker/docker-credential-helpers v0.7.0 // indirect
github.com/docker/go-connections v0.4.0 // indirect
github.com/docker/go-connections v0.5.0 // indirect
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c // indirect
github.com/docker/go-metrics v0.0.1 // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/elastic/gosigar v0.14.2 // indirect
github.com/emicklei/go-restful v2.16.0+incompatible // indirect
github.com/emicklei/go-restful/v3 v3.10.2 // indirect
github.com/emicklei/go-restful/v3 v3.11.3 // indirect
github.com/euank/go-kmsg-parser v2.0.0+incompatible // indirect
github.com/evanphx/json-patch v5.6.0+incompatible // indirect
github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d // indirect
github.com/fatih/camelcase v1.0.0 // indirect
github.com/felixge/httpsnoop v1.0.3 // indirect
github.com/fsnotify/fsnotify v1.6.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/flynn/noise v1.1.0 // indirect
github.com/francoispqt/gojay v1.2.13 // indirect
github.com/fsnotify/fsnotify v1.7.0 // indirect
github.com/fvbommel/sortorder v1.1.0 // indirect
github.com/ghodss/yaml v1.0.0 // indirect
github.com/go-errors/errors v1.4.2 // indirect
github.com/go-logr/logr v1.2.4 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-openapi/jsonpointer v0.19.6 // indirect
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/swag v0.22.3 // indirect
github.com/go-jose/go-jose/v3 v3.0.3 // indirect
github.com/go-openapi/jsonpointer v0.20.2 // indirect
github.com/go-openapi/jsonreference v0.20.4 // indirect
github.com/go-openapi/swag v0.22.9 // indirect
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 // indirect
github.com/goccy/go-json v0.10.2 // indirect
github.com/godbus/dbus/v5 v5.1.0 // indirect
github.com/gofrs/flock v0.8.1 // indirect
github.com/gofrs/uuid v4.4.0+incompatible // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v4 v4.5.0 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/golang/protobuf v1.5.4 // indirect
github.com/golang/snappy v0.0.4 // indirect
github.com/google/btree v1.1.2 // indirect
github.com/google/cel-go v0.16.1 // indirect
github.com/google/gnostic-models v0.6.8 // indirect
github.com/google/go-cmp v0.5.9 // indirect
github.com/google/cel-go v0.17.8 // indirect
github.com/google/gnostic-models v0.6.9-0.20230804172637-c7be7c783f49 // indirect
github.com/google/go-cmp v0.6.0 // indirect
github.com/google/go-containerregistry v0.14.0 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/pprof v0.0.0-20230323073829-e72429f035bd // indirect
github.com/google/s2a-go v0.1.5 // indirect
github.com/google/gopacket v1.1.19 // indirect
github.com/google/pprof v0.0.0-20240207164012-fb44976bdcd5 // indirect
github.com/google/s2a-go v0.1.7 // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.2.5 // indirect
github.com/googleapis/enterprise-certificate-proxy v0.3.2 // indirect
github.com/googleapis/gax-go/v2 v2.12.0 // indirect
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7 // indirect
github.com/grpc-ecosystem/go-grpc-middleware v1.3.0 // indirect
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 // indirect
github.com/grpc-ecosystem/grpc-gateway v1.16.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.16.0 // indirect
github.com/hanwen/go-fuse/v2 v2.3.0 // indirect
github.com/hanwen/go-fuse/v2 v2.4.0 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/hashicorp/go-retryablehttp v0.7.4 // indirect
github.com/imdario/mergo v0.3.13 // indirect
github.com/hashicorp/go-version v1.6.0 // indirect
github.com/hashicorp/golang-lru v0.5.4 // indirect
github.com/hashicorp/golang-lru/arc/v2 v2.0.5 // indirect
github.com/hashicorp/golang-lru/v2 v2.0.5 // indirect
github.com/huin/goupnp v1.3.0 // indirect
github.com/imdario/mergo v0.3.16 // indirect
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/intel/goresctrl v0.3.0 // indirect
github.com/ipfs/boxo v0.10.0 // indirect
github.com/ipfs/go-cid v0.4.1 // indirect
github.com/ipfs/go-datastore v0.6.0 // indirect
github.com/ipfs/go-log v1.0.5 // indirect
github.com/ipld/go-ipld-prime v0.20.0 // indirect
github.com/jackc/pgerrcode v0.0.0-20220416144525-469b46aa5efa // indirect
github.com/jackc/pgpassfile v1.0.0 // indirect
github.com/jackc/pgservicefile v0.0.0-20221227161230-091c0ba34f0a // indirect
github.com/jackc/pgx/v5 v5.4.2 // indirect
github.com/jonboulle/clockwork v0.3.0 // indirect
github.com/jackc/pgx/v5 v5.5.4 // indirect
github.com/jackc/puddle/v2 v2.2.1 // indirect
github.com/jackpal/go-nat-pmp v1.0.2 // indirect
github.com/jbenet/go-temp-err-catcher v0.1.0 // indirect
github.com/jbenet/goprocess v0.1.4 // indirect
github.com/jonboulle/clockwork v0.4.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/josharian/native v1.1.0 // indirect
github.com/karrick/godirwalk v1.17.0 // indirect
github.com/klauspost/cpuid/v2 v2.1.0 // indirect
github.com/klauspost/cpuid/v2 v2.2.7 // indirect
github.com/koron/go-ssdp v0.0.4 // indirect
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/libopenstorage/openstorage v1.0.0 // indirect
github.com/libp2p/go-buffer-pool v0.1.0 // indirect
github.com/libp2p/go-cidranger v1.1.0 // indirect
github.com/libp2p/go-flow-metrics v0.1.0 // indirect
github.com/libp2p/go-libp2p-asn-util v0.4.1 // indirect
github.com/libp2p/go-libp2p-kad-dht v0.25.2 // indirect
github.com/libp2p/go-libp2p-kbucket v0.6.3 // indirect
github.com/libp2p/go-libp2p-record v0.2.0 // indirect
github.com/libp2p/go-libp2p-routing-helpers v0.7.2 // indirect
github.com/libp2p/go-msgio v0.3.0 // indirect
github.com/libp2p/go-nat v0.2.0 // indirect
github.com/libp2p/go-netroute v0.2.1 // indirect
github.com/libp2p/go-reuseport v0.4.0 // indirect
github.com/libp2p/go-yamux/v4 v4.0.1 // indirect
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de // indirect
github.com/lithammer/dedent v1.1.0 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/marten-seemann/tcp v0.0.0-20210406111302-dfbc87cc63fd // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/matttproud/golang_protobuf_extensions/v2 v2.0.0 // indirect
github.com/mdlayher/genetlink v1.3.2 // indirect
github.com/mdlayher/netlink v1.7.2 // indirect
github.com/mdlayher/socket v0.4.1 // indirect
github.com/miekg/dns v1.1.58 // indirect
github.com/miekg/pkcs11 v1.1.1 // indirect
github.com/mikioh/tcpinfo v0.0.0-20190314235526-30a79bb1804b // indirect
github.com/mikioh/tcpopt v0.0.0-20190314235656-172688c1accc // indirect
github.com/minio/highwayhash v1.0.2 // indirect
github.com/minio/md5-simd v1.1.2 // indirect
github.com/minio/sha256-simd v1.0.0 // indirect
github.com/minio/sha256-simd v1.0.1 // indirect
github.com/mistifyio/go-zfs v2.1.2-0.20190413222219-f784269be439+incompatible // indirect
github.com/mistifyio/go-zfs/v3 v3.0.1 // indirect
github.com/mitchellh/go-homedir v1.1.0 // indirect
@ -316,89 +346,111 @@ require (
github.com/moby/sys/sequential v0.5.0 // indirect
github.com/moby/sys/signal v0.7.0 // indirect
github.com/moby/sys/symlink v0.2.0 // indirect
github.com/moby/term v0.0.0-20221205130635-1aeaba878587 // indirect
github.com/moby/sys/user v0.1.0 // indirect
github.com/moby/term v0.5.0 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/mohae/deepcopy v0.0.0-20170929034955-c48cc78d4826 // indirect
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 // indirect
github.com/morikuni/aec v1.0.0 // indirect
github.com/mr-tron/base58 v1.2.0 // indirect
github.com/mrunalp/fileutils v0.5.1 // indirect
github.com/multiformats/go-base32 v0.1.0 // indirect
github.com/multiformats/go-base36 v0.2.0 // indirect
github.com/multiformats/go-multiaddr v0.12.3 // indirect
github.com/multiformats/go-multiaddr-dns v0.3.1 // indirect
github.com/multiformats/go-multiaddr-fmt v0.1.0 // indirect
github.com/multiformats/go-multibase v0.2.0 // indirect
github.com/multiformats/go-multicodec v0.9.0 // indirect
github.com/multiformats/go-multihash v0.2.3 // indirect
github.com/multiformats/go-multistream v0.5.0 // indirect
github.com/multiformats/go-varint v0.0.7 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
github.com/nats-io/jsm.go v0.0.31-0.20220317133147-fe318f464eee // indirect
github.com/nats-io/jwt/v2 v2.5.3 // indirect
github.com/nats-io/nats-server/v2 v2.10.5 // indirect
github.com/nats-io/nats.go v1.31.0 // indirect
github.com/nats-io/nkeys v0.4.6 // indirect
github.com/nats-io/jwt/v2 v2.5.5 // indirect
github.com/nats-io/nats-server/v2 v2.10.12 // indirect
github.com/nats-io/nats.go v1.34.0 // indirect
github.com/nats-io/nkeys v0.4.7 // indirect
github.com/nats-io/nuid v1.0.1 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.1.0-rc3 // indirect
github.com/opencontainers/runtime-spec v1.1.0 // indirect
github.com/opencontainers/image-spec v1.1.0 // indirect
github.com/opencontainers/runtime-spec v1.2.0 // indirect
github.com/opencontainers/runtime-tools v0.9.1-0.20221107090550-2e043c6bd626 // indirect
github.com/opentracing/opentracing-go v1.2.0 // indirect
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58 // indirect
github.com/pelletier/go-toml v1.9.5 // indirect
github.com/pelletier/go-toml/v2 v2.2.2 // indirect
github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
github.com/pierrec/lz4 v2.6.0+incompatible // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/polydawn/refmt v0.89.0 // indirect
github.com/pquerna/cachecontrol v0.1.0 // indirect
github.com/prometheus/client_golang v1.16.0 // indirect
github.com/prometheus/client_model v0.4.0 // indirect
github.com/prometheus/common v0.44.0 // indirect
github.com/prometheus/procfs v0.10.1 // indirect
github.com/prometheus/client_model v0.6.0 // indirect
github.com/prometheus/procfs v0.12.0 // indirect
github.com/quic-go/qpack v0.4.0 // indirect
github.com/quic-go/quic-go v0.42.0 // indirect
github.com/quic-go/webtransport-go v0.6.0 // indirect
github.com/raulk/go-watchdog v1.3.0 // indirect
github.com/rs/xid v1.5.0 // indirect
github.com/rubiojr/go-vhd v0.0.0-20200706105327-02e210299021 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/seccomp/libseccomp-golang v0.10.0 // indirect
github.com/shengdoushi/base58 v1.0.0 // indirect
github.com/soheilhy/cmux v0.1.5 // indirect
github.com/spf13/cobra v1.7.0 // indirect
github.com/stefanberger/go-pkcs11uri v0.0.0-20201008174630-78d3cae3a980 // indirect
github.com/spaolacci/murmur3 v1.1.0 // indirect
github.com/spf13/afero v1.11.0 // indirect
github.com/spf13/cobra v1.8.0 // indirect
github.com/stefanberger/go-pkcs11uri v0.0.0-20230803200340-78284954bff6 // indirect
github.com/stoewer/go-strcase v1.2.0 // indirect
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635 // indirect
github.com/syndtr/goleveldb v1.0.0 // indirect
github.com/tchap/go-patricia/v2 v2.3.1 // indirect
github.com/tidwall/btree v1.6.0 // indirect
github.com/tmc/grpc-websocket-proxy v0.0.0-20220101234140-673ab2c3ae75 // indirect
github.com/urfave/cli/v2 v2.23.5 // indirect
github.com/urfave/cli/v2 v2.26.0 // indirect
github.com/vbatts/tar-split v0.11.5 // indirect
github.com/vishvananda/netns v0.0.4 // indirect
github.com/vmware/govmomi v0.30.6 // indirect
github.com/whyrusleeping/go-keyspace v0.0.0-20160322163242-5b898ac5add1 // indirect
github.com/xiang90/probing v0.0.0-20221125231312-a49e3df8f510 // indirect
github.com/xlab/treeprint v1.2.0 // indirect
github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673 // indirect
go.etcd.io/bbolt v1.3.7 // indirect
go.etcd.io/etcd/client/v2 v2.305.9 // indirect
go.etcd.io/etcd/pkg/v3 v3.5.9 // indirect
go.etcd.io/etcd/raft/v3 v3.5.9 // indirect
go.etcd.io/bbolt v1.3.9 // indirect
go.etcd.io/etcd/client/v2 v2.305.13 // indirect
go.etcd.io/etcd/pkg/v3 v3.5.13 // indirect
go.etcd.io/etcd/raft/v3 v3.5.13 // indirect
go.mozilla.org/pkcs7 v0.0.0-20200128120323-432b2356ecb1 // indirect
go.opencensus.io v0.24.0 // indirect
go.opentelemetry.io/contrib/instrumentation/github.com/emicklei/go-restful/otelrestful v0.35.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.45.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.35.1 // indirect
go.opentelemetry.io/otel v1.19.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/internal/retry v1.14.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.19.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.19.0 // indirect
go.opentelemetry.io/otel/metric v1.19.0 // indirect
go.opentelemetry.io/otel/sdk v1.19.0 // indirect
go.opentelemetry.io/otel/trace v1.19.0 // indirect
go.opentelemetry.io/contrib/instrumentation/github.com/emicklei/go-restful/otelrestful v0.42.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.46.1 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 // indirect
go.opentelemetry.io/otel v1.24.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.21.0 // indirect
go.opentelemetry.io/otel/metric v1.24.0 // indirect
go.opentelemetry.io/otel/sdk v1.24.0 // indirect
go.opentelemetry.io/otel/trace v1.24.0 // indirect
go.opentelemetry.io/proto/otlp v1.0.0 // indirect
go.starlark.net v0.0.0-20230525235612-a134d8f9ddca // indirect
go.uber.org/atomic v1.10.0 // indirect
go.uber.org/dig v1.17.1 // indirect
go.uber.org/fx v1.20.1 // indirect
go.uber.org/mock v0.4.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
golang.org/x/exp v0.0.0-20230307190834-24139beb5833 // indirect
golang.org/x/mod v0.11.0 // indirect
golang.org/x/oauth2 v0.11.0 // indirect
golang.org/x/term v0.13.0 // indirect
golang.org/x/exp v0.0.0-20240222234643-814bf88cf225 // indirect
golang.org/x/mod v0.17.0 // indirect
golang.org/x/oauth2 v0.17.0 // indirect
golang.org/x/term v0.19.0 // indirect
golang.org/x/text v0.14.0 // indirect
golang.org/x/time v0.4.0 // indirect
golang.org/x/tools v0.10.0 // indirect
golang.org/x/time v0.5.0 // indirect
golang.org/x/tools v0.20.0 // indirect
golang.zx2c4.com/wireguard v0.0.0-20230325221338-052af4a8072b // indirect
golang.zx2c4.com/wireguard/wgctrl v0.0.0-20230429144221-925a1e7659e6 // indirect
google.golang.org/api v0.138.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20230815205213-6bfd019c3878 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20230803162519-f966b187b2e5 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20230807174057-1744710a1577 // indirect
google.golang.org/protobuf v1.31.0 // indirect
gonum.org/v1/gonum v0.13.0 // indirect
google.golang.org/api v0.152.0 // indirect
google.golang.org/appengine v1.6.8 // indirect
google.golang.org/genproto v0.0.0-20240123012728-ef4313101c80 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20240123012728-ef4313101c80 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20240228224816-df926f6c8641 // indirect
google.golang.org/protobuf v1.33.0 // indirect
gopkg.in/gcfg.v1 v1.2.3 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/ini.v1 v1.67.0 // indirect
@ -406,29 +458,32 @@ require (
gopkg.in/square/go-jose.v2 v2.6.0 // indirect
gopkg.in/warnings.v0 v0.1.2 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/apiextensions-apiserver v0.28.4 // indirect
k8s.io/cli-runtime v0.22.2 // indirect
k8s.io/code-generator v0.28.4 // indirect
k8s.io/apiextensions-apiserver v0.30.1 // indirect
k8s.io/code-generator v0.30.1 // indirect
k8s.io/controller-manager v0.25.4 // indirect
k8s.io/csi-translation-lib v0.0.0 // indirect
k8s.io/dynamic-resource-allocation v0.0.0 // indirect
k8s.io/endpointslice v0.0.0 // indirect
k8s.io/gengo v0.0.0-20220902162205-c0856e24416d // indirect
k8s.io/gengo v0.0.0-20240228010128-51d4e06bde70 // indirect
k8s.io/gengo/v2 v2.0.0-20240228010128-51d4e06bde70 // indirect
k8s.io/kms v0.0.0 // indirect
k8s.io/kube-aggregator v0.28.4 // indirect
k8s.io/kube-aggregator v0.30.1 // indirect
k8s.io/kube-controller-manager v0.0.0 // indirect
k8s.io/kube-openapi v0.0.0-20230717233707-2695361300d9 // indirect
k8s.io/kube-proxy v0.0.0 // indirect
k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 // indirect
k8s.io/kube-scheduler v0.0.0 // indirect
k8s.io/kubelet v0.0.0 // indirect
k8s.io/kubelet v0.28.6 // indirect
k8s.io/legacy-cloud-providers v0.0.0 // indirect
k8s.io/metrics v0.0.0 // indirect
k8s.io/mount-utils v0.28.4 // indirect
k8s.io/mount-utils v0.30.1 // indirect
k8s.io/pod-security-admission v0.0.0 // indirect
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.1.2 // indirect
lukechampine.com/blake3 v1.2.1 // indirect
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.29.0 // indirect
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
sigs.k8s.io/knftables v0.0.14 // indirect
sigs.k8s.io/kustomize/api v0.13.5-0.20230601165947-6ce0bf390ce3 // indirect
sigs.k8s.io/kustomize/kustomize/v5 v5.0.4-0.20230601165947-6ce0bf390ce3 // indirect
sigs.k8s.io/kustomize/kyaml v0.14.3-0.20230601165947-6ce0bf390ce3 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect
tags.cncf.io/container-device-interface v0.7.2 // indirect
tags.cncf.io/container-device-interface/specs-go v0.7.0 // indirect
)
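The replace block above pins every k8s.io staging repository to the k3s-io fork; this change bumps them all, plus k8s.io/kubernetes itself, from v1.28.4-k3s1 to v1.30.1-k3s1 in lockstep. A minimal audit sketch (illustrative, not part of this change, and assuming golang.org/x/mod is available) that parses go.mod and prints each pin so a missed staging module stands out:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/mod/modfile"
)

func main() {
	data, err := os.ReadFile("go.mod")
	if err != nil {
		log.Fatal(err)
	}
	f, err := modfile.Parse("go.mod", data, nil)
	if err != nil {
		log.Fatal(err)
	}
	// Print every replace directive: any k8s.io module still reporting
	// v1.28.4-k3s1 was missed by the bump.
	for _, r := range f.Replace {
		fmt.Printf("%s => %s %s\n", r.Old.Path, r.New.Path, r.New.Version)
	}
}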

go.sum

File diff suppressed because it is too large


@ -5,7 +5,7 @@ import (
k3scrd "github.com/k3s-io/k3s/pkg/crd"
_ "github.com/k3s-io/k3s/pkg/generated/controllers/k3s.cattle.io/v1"
"github.com/rancher/wrangler/pkg/crd"
"github.com/rancher/wrangler/v3/pkg/crd"
)
func main() {

install.sh Normal file → Executable file

@ -44,6 +44,10 @@ set -o noglob
# Commit of k3s to download from temporary cloud storage.
# * (for developer & QA use)
#
# - INSTALL_K3S_PR
# PR build of k3s to download from GitHub Artifacts.
# * (for developer & QA use)
#
# - INSTALL_K3S_BIN_DIR
# Directory to install k3s binary, links, and uninstall script to, or use
# /usr/local/bin as the default
@ -92,6 +96,7 @@ set -o noglob
# Defaults to 'stable'.
GITHUB_URL=https://github.com/k3s-io/k3s/releases
GITHUB_PR_URL=""
STORAGE_URL=https://k3s-ci-builds.s3.amazonaws.com
DOWNLOADER=
@ -337,6 +342,7 @@ verify_downloader() {
setup_tmp() {
TMP_DIR=$(mktemp -d -t k3s-install.XXXXXXXXXX)
TMP_HASH=${TMP_DIR}/k3s.hash
TMP_ZIP=${TMP_DIR}/k3s.zip
TMP_BIN=${TMP_DIR}/k3s.bin
cleanup() {
code=$?
@ -350,7 +356,10 @@ setup_tmp() {
# --- use desired k3s version if defined or find version from channel ---
get_release_version() {
if [ -n "${INSTALL_K3S_COMMIT}" ]; then
if [ -n "${INSTALL_K3S_PR}" ]; then
VERSION_K3S="PR ${INSTALL_K3S_PR}"
get_pr_artifact_url
elif [ -n "${INSTALL_K3S_COMMIT}" ]; then
VERSION_K3S="commit ${INSTALL_K3S_COMMIT}"
elif [ -n "${INSTALL_K3S_VERSION}" ]; then
VERSION_K3S=${INSTALL_K3S_VERSION}
@ -414,7 +423,7 @@ get_k3s_selinux_version() {
# --- download from github url ---
download() {
[ $# -eq 2 ] || fatal 'download needs exactly 2 arguments'
set +e
case $DOWNLOADER in
curl)
curl -o $1 -sfL $2
@ -429,17 +438,24 @@ download() {
# Abort if download command failed
[ $? -eq 0 ] || fatal 'Download failed'
set -e
}
# --- download hash from github url ---
download_hash() {
if [ -n "${INSTALL_K3S_COMMIT}" ]; then
HASH_URL=${STORAGE_URL}/k3s${SUFFIX}-${INSTALL_K3S_COMMIT}.sha256sum
if [ -n "${INSTALL_K3S_PR}" ]; then
info "Downloading hash ${GITHUB_PR_URL}"
curl -o ${TMP_ZIP} -H "Authorization: Bearer $GITHUB_TOKEN" -L ${GITHUB_PR_URL}
unzip -p ${TMP_ZIP} k3s.sha256sum > ${TMP_HASH}
else
HASH_URL=${GITHUB_URL}/download/${VERSION_K3S}/sha256sum-${ARCH}.txt
if [ -n "${INSTALL_K3S_COMMIT}" ]; then
HASH_URL=${STORAGE_URL}/k3s${SUFFIX}-${INSTALL_K3S_COMMIT}.sha256sum
else
HASH_URL=${GITHUB_URL}/download/${VERSION_K3S}/sha256sum-${ARCH}.txt
fi
info "Downloading hash ${HASH_URL}"
download ${TMP_HASH} ${HASH_URL}
fi
info "Downloading hash ${HASH_URL}"
download ${TMP_HASH} ${HASH_URL}
HASH_EXPECTED=$(grep " k3s${SUFFIX}$" ${TMP_HASH})
HASH_EXPECTED=${HASH_EXPECTED%%[[:blank:]]*}
}
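download_hash leaves the expected digest in HASH_EXPECTED (the first field of the matching sha256sum line); the installer's later verification amounts to a SHA-256 compare. A hedged Go sketch of that check, with a placeholder file name and digest:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"log"
	"os"
)

func main() {
	// Placeholder binary name; the installer works on ${TMP_BIN}.
	data, err := os.ReadFile("k3s.bin")
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(data)
	got := hex.EncodeToString(sum[:])
	expected := "<hex digest taken from the sha256sum file>" // placeholder
	if got != expected {
		log.Fatalf("hash mismatch: got %s", got)
	}
	fmt.Println("hash verified")
}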
@ -456,9 +472,47 @@ installed_hash_matches() {
return 1
}
# Use the GitHub API to identify the artifact associated with a given PR
get_pr_artifact_url() {
github_api_url=https://api.github.com/repos/k3s-io/k3s
# Check if jq is installed
if ! [ -x "$(command -v jq)" ]; then
fatal "Installing PR builds requires jq"
fi
if [ -z "${GITHUB_TOKEN}" ]; then
fatal "Installing PR builds requires GITHUB_TOKEN with k3s-io/k3s repo authorization"
fi
# GET request to the GitHub API to retrieve the latest commit SHA from the pull request
commit_id=$(curl -s -H "Authorization: Bearer $GITHUB_TOKEN" "$github_api_url/pulls/$INSTALL_K3S_PR" | jq -r '.head.sha')
# GET request to the GitHub API to retrieve the Build workflow associated with the commit
wf_raw=$(curl -s -H "Authorization: Bearer $GITHUB_TOKEN" "$github_api_url/commits/$commit_id/check-runs")
build_workflow=$(printf "%s" "$wf_raw" | jq -r '.check_runs[] | select(.name == "build / Build")')
# Extract the Run ID from the build workflow and lookup artifacts associated with the run
run_id=$(echo "$build_workflow" | jq -r ' .details_url' | awk -F'/' '{print $(NF-2)}' | sort -rn | head -1)
# Extract the artifact ID for the "k3s" artifact
artifacts=$(curl -s -H "Authorization: Bearer $GITHUB_TOKEN" "$github_api_url/actions/runs/$run_id/artifacts")
artifacts_url=$(echo "$artifacts" | jq -r '.artifacts[] | select(.name == "k3s") | .archive_download_url')
GITHUB_PR_URL=$artifacts_url
}
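For illustration, the same lookup chain sketched in Go (an assumption-laden translation, not project code): PR number to head commit SHA, to the "build / Build" check run, to the workflow run ID, to the "k3s" artifact URL. GITHUB_TOKEN and the PR number 1234 are placeholder assumptions supplied by the caller.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
	"strings"
)

// ghGet performs an authenticated GET against the GitHub API and decodes JSON.
func ghGet(url string, out any) error {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+os.Getenv("GITHUB_TOKEN"))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	return json.NewDecoder(resp.Body).Decode(out)
}

func main() {
	api := "https://api.github.com/repos/k3s-io/k3s"
	pr := "1234" // hypothetical PR number

	// 1. Latest head commit SHA on the pull request.
	var pull struct {
		Head struct {
			SHA string `json:"sha"`
		} `json:"head"`
	}
	if err := ghGet(api+"/pulls/"+pr, &pull); err != nil {
		log.Fatal(err)
	}

	// 2. The "build / Build" check run for that commit.
	var checks struct {
		CheckRuns []struct {
			Name       string `json:"name"`
			DetailsURL string `json:"details_url"`
		} `json:"check_runs"`
	}
	if err := ghGet(api+"/commits/"+pull.Head.SHA+"/check-runs", &checks); err != nil {
		log.Fatal(err)
	}

	// 3. Run ID is the third-from-last path segment of details_url
	//    (.../actions/runs/<id>/job/<job-id>), matching the awk above.
	var runID string
	for _, c := range checks.CheckRuns {
		if c.Name == "build / Build" {
			parts := strings.Split(c.DetailsURL, "/")
			runID = parts[len(parts)-3]
		}
	}

	// 4. Archive URL of the "k3s" artifact for that run.
	var arts struct {
		Artifacts []struct {
			Name               string `json:"name"`
			ArchiveDownloadURL string `json:"archive_download_url"`
		} `json:"artifacts"`
	}
	if err := ghGet(api+"/actions/runs/"+runID+"/artifacts", &arts); err != nil {
		log.Fatal(err)
	}
	for _, a := range arts.Artifacts {
		if a.Name == "k3s" {
			fmt.Println(a.ArchiveDownloadURL)
		}
	}
}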
# --- download binary from github url ---
download_binary() {
if [ -n "${INSTALL_K3S_COMMIT}" ]; then
if [ -n "${INSTALL_K3S_PR}" ]; then
# Since Binary and Hash are zipped together, check if TMP_ZIP already exists
if ! [ -f ${TMP_ZIP} ]; then
info "Downloading K3s artifact ${GITHUB_PR_URL}"
curl -o ${TMP_ZIP} -H "Authorization: Bearer $GITHUB_TOKEN" -L ${GITHUB_PR_URL}
fi
# extract k3s binary from zip
unzip -p ${TMP_ZIP} k3s > ${TMP_BIN}
return
elif [ -n "${INSTALL_K3S_COMMIT}" ]; then
BIN_URL=${STORAGE_URL}/k3s${SUFFIX}-${INSTALL_K3S_COMMIT}
else
BIN_URL=${GITHUB_URL}/download/${VERSION_K3S}/k3s${SUFFIX}
@ -547,10 +601,11 @@ setup_selinux() {
if [ "$INSTALL_K3S_SKIP_SELINUX_RPM" = true ] || can_skip_download_selinux || [ ! -d /usr/share/selinux ]; then
info "Skipping installation of SELinux RPM"
else
get_k3s_selinux_version
install_selinux_rpm ${rpm_site} ${rpm_channel} ${rpm_target} ${rpm_site_infix}
return
fi
get_k3s_selinux_version
install_selinux_rpm ${rpm_site} ${rpm_channel} ${rpm_target} ${rpm_site_infix}
policy_error=fatal
if [ "$INSTALL_K3S_SELINUX_WARN" = true ] || [ "${ID_LIKE:-}" = coreos ] || [ "${VARIANT_ID:-}" = coreos ]; then
@ -904,7 +959,7 @@ TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s
ExecStartPre=/bin/sh -xc '! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service'
ExecStartPre=/bin/sh -xc '! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service 2>/dev/null'
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=${BIN_DIR}/k3s \\
@ -1000,7 +1055,7 @@ openrc_start() {
}
has_working_xtables() {
if command -v "$1-save" 1> /dev/null && command -v "$1-restore" 1> /dev/null; then
if $SUDO sh -c "command -v \"$1-save\"" 1> /dev/null && $SUDO sh -c "command -v \"$1-restore\"" 1> /dev/null; then
if $SUDO $1-save 2>/dev/null | grep -q '^-A CNI-HOSTPORT-MASQ -j MASQUERADE$'; then
warn "Host $1-save/$1-restore tools are incompatible with existing rules"
else


@ -1 +1 @@
5f785120f00ef4b0aba205161232d2a04b6e7a75332cae7059fcc1f517340777 install.sh
696c6a93262b3e1f06a78841b8a82c238a8f17755824c024baad652b18bc92bc install.sh


@ -9,7 +9,7 @@ Type=notify
EnvironmentFile=-/etc/default/%N
EnvironmentFile=-/etc/sysconfig/%N
EnvironmentFile=-/etc/systemd/system/k3s.service.env
ExecStartPre=/bin/sh -xc '! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service'
ExecStartPre=/bin/sh -xc '! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service 2>/dev/null'
ExecStart=/usr/local/bin/k3s server
KillMode=process
Delegate=yes


@ -48,6 +48,7 @@ func main() {
secretsencrypt.RotateKeys,
),
cmds.NewCertCommands(
cert.Check,
cert.Rotate,
cert.RotateCA,
),


@ -120,7 +120,7 @@ spec:
k8s-app: kube-dns
containers:
- name: coredns
image: %{SYSTEM_DEFAULT_REGISTRY}%rancher/mirrored-coredns-coredns:1.10.1
image: "%{SYSTEM_DEFAULT_REGISTRY}%rancher/mirrored-coredns-coredns:1.10.1"
imagePullPolicy: IfNotPresent
resources:
limits:


@ -10,7 +10,7 @@ metadata:
name: local-path-provisioner-role
rules:
- apiGroups: [""]
resources: ["nodes", "persistentvolumeclaims", "configmaps"]
resources: ["nodes", "persistentvolumeclaims", "configmaps", "pods/log"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["endpoints", "persistentvolumes", "pods"]
@ -67,7 +67,7 @@ spec:
effect: "NoSchedule"
containers:
- name: local-path-provisioner
image: %{SYSTEM_DEFAULT_REGISTRY}%rancher/local-path-provisioner:v0.0.24
image: "%{SYSTEM_DEFAULT_REGISTRY}%rancher/local-path-provisioner:v0.0.26"
imagePullPolicy: IfNotPresent
command:
- local-path-provisioner
@ -92,6 +92,7 @@ kind: StorageClass
metadata:
name: local-path
annotations:
defaultVolumeType: "local"
storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
@ -114,39 +115,13 @@ data:
}
setup: |-
#!/bin/sh
while getopts "m:s:p:" opt
do
case $opt in
p)
absolutePath=$OPTARG
;;
s)
sizeInBytes=$OPTARG
;;
m)
volMode=$OPTARG
;;
esac
done
mkdir -m 0777 -p ${absolutePath}
chmod 700 ${absolutePath}/..
set -eu
mkdir -m 0777 -p "${VOL_DIR}"
chmod 700 "${VOL_DIR}/.."
teardown: |-
#!/bin/sh
while getopts "m:s:p:" opt
do
case $opt in
p)
absolutePath=$OPTARG
;;
s)
sizeInBytes=$OPTARG
;;
m)
volMode=$OPTARG
;;
esac
done
rm -rf ${absolutePath}
set -eu
rm -rf "${VOL_DIR}"
helperPod.yaml: |-
apiVersion: v1
kind: Pod
@ -155,5 +130,5 @@ data:
spec:
containers:
- name: helper-pod
image: %{SYSTEM_DEFAULT_REGISTRY}%rancher/mirrored-library-busybox:1.36.1
image: "%{SYSTEM_DEFAULT_REGISTRY}%rancher/mirrored-library-busybox:1.36.1"
imagePullPolicy: IfNotPresent


@ -44,7 +44,7 @@ spec:
emptyDir: {}
containers:
- name: metrics-server
image: %{SYSTEM_DEFAULT_REGISTRY}%rancher/mirrored-metrics-server:v0.6.3
image: "%{SYSTEM_DEFAULT_REGISTRY}%rancher/mirrored-metrics-server:v0.7.0"
args:
- --cert-dir=/tmp
- --secure-port=10250


@ -5,7 +5,7 @@ metadata:
name: traefik-crd
namespace: kube-system
spec:
chart: https://%{KUBERNETES_API}%/static/charts/traefik-crd-25.0.2+up25.0.0.tgz
chart: https://%{KUBERNETES_API}%/static/charts/traefik-crd-25.0.3+up25.0.0.tgz
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
@ -13,13 +13,14 @@ metadata:
name: traefik
namespace: kube-system
spec:
chart: https://%{KUBERNETES_API}%/static/charts/traefik-25.0.2+up25.0.0.tgz
chart: https://%{KUBERNETES_API}%/static/charts/traefik-25.0.3+up25.0.0.tgz
set:
global.systemDefaultRegistry: "%{SYSTEM_DEFAULT_REGISTRY_RAW}%"
valuesContent: |-
podAnnotations:
prometheus.io/port: "8082"
prometheus.io/scrape: "true"
deployment:
podAnnotations:
prometheus.io/port: "8082"
prometheus.io/scrape: "true"
providers:
kubernetesIngress:
publishedService:
@ -27,7 +28,7 @@ spec:
priorityClassName: "system-cluster-critical"
image:
repository: "rancher/mirrored-library-traefik"
tag: "2.10.5"
tag: "2.10.7"
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"


@ -1,18 +1,23 @@
FROM alpine:3.18 as base
RUN apk add -U ca-certificates tar zstd tzdata
FROM alpine:3.20 as base
RUN apk add -U ca-certificates zstd tzdata
COPY build/out/data.tar.zst /
RUN mkdir -p /image/etc/ssl/certs /image/run /image/var/run /image/tmp /image/lib/modules /image/lib/firmware && \
tar -xa -C /image -f /data.tar.zst && \
zstdcat -d /data.tar.zst | tar -xa -C /image && \
echo "root:x:0:0:root:/:/bin/sh" > /image/etc/passwd && \
echo "root:x:0:" > /image/etc/group && \
cp /etc/ssl/certs/ca-certificates.crt /image/etc/ssl/certs/ca-certificates.crt
FROM scratch
ARG VERSION="dev"
FROM scratch as collect
ARG DRONE_TAG="dev"
COPY --from=base /image /
COPY --from=base /usr/share/zoneinfo /usr/share/zoneinfo
RUN mkdir -p /etc && \
echo 'hosts: files dns' > /etc/nsswitch.conf && \
echo "PRETTY_NAME=\"K3s ${VERSION}\"" > /etc/os-release && \
echo "PRETTY_NAME=\"K3s ${DRONE_TAG}\"" > /etc/os-release && \
chmod 1777 /tmp
FROM scratch
COPY --from=collect / /
VOLUME /var/lib/kubelet
VOLUME /var/lib/rancher/k3s
VOLUME /var/lib/cni


@ -690,7 +690,7 @@ TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s
ExecStartPre=/bin/sh -xc '! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service'
ExecStartPre=/bin/sh -xc '! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service 2>/dev/null'
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=${BIN_DIR}/k3s ${CMD_K3S} \$${CMD_K3S_ARGS_VAR}


@ -16,6 +16,7 @@ import (
"os/exec"
"path/filepath"
"regexp"
"strconv"
"strings"
"time"
@ -26,11 +27,13 @@ import (
"github.com/k3s-io/k3s/pkg/clientaccess"
"github.com/k3s-io/k3s/pkg/daemons/config"
"github.com/k3s-io/k3s/pkg/daemons/control/deps"
"github.com/k3s-io/k3s/pkg/spegel"
"github.com/k3s-io/k3s/pkg/util"
"github.com/k3s-io/k3s/pkg/version"
"github.com/k3s-io/k3s/pkg/vpn"
"github.com/pkg/errors"
"github.com/rancher/wrangler/pkg/slice"
"github.com/rancher/wharfie/pkg/registries"
"github.com/rancher/wrangler/v3/pkg/slice"
"github.com/sirupsen/logrus"
"k8s.io/apimachinery/pkg/util/json"
"k8s.io/apimachinery/pkg/util/wait"
@ -47,8 +50,8 @@ const (
// so this is somewhat computationally expensive on the server side, and is retried with jitter
// to avoid having clients hammer on the server at fixed periods.
// A call to this will block until agent configuration is successfully returned by the
// server.
func Get(ctx context.Context, agent cmds.Agent, proxy proxy.Proxy) *config.Node {
// server, or the context is cancelled.
func Get(ctx context.Context, agent cmds.Agent, proxy proxy.Proxy) (*config.Node, error) {
var agentConfig *config.Node
var err error
@ -64,7 +67,7 @@ func Get(ctx context.Context, agent cmds.Agent, proxy proxy.Proxy) *config.Node
cancel()
}
}, 5*time.Second, 1.0, true)
return agentConfig
return agentConfig, err
}
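Since Get now returns an error instead of looping forever, callers can bound it with a context. The retry shape above, isolated into a runnable sketch (an assumption, not project code) built on k8s.io/apimachinery's wait.JitterUntilWithContext with the same 5-second period, 1.0 jitter factor, and sliding flag:

package main

import (
	"context"
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// fetchOnce stands in for one attempt to pull agent config from the server.
func fetchOnce() error { return errors.New("server not ready") }

func main() {
	// Give up after 30s overall; cancel() doubles as the success signal.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	var lastErr error
	wait.JitterUntilWithContext(ctx, func(ctx context.Context) {
		if lastErr = fetchOnce(); lastErr == nil {
			cancel() // success: stop retrying
		}
	}, 5*time.Second, 1.0, true)

	fmt.Println("result:", lastErr)
}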
// KubeProxyDisabled returns a bool indicating whether or not kube-proxy has been disabled in the
@ -197,7 +200,16 @@ func ensureNodePassword(nodePasswordFile string) (string, error) {
return "", err
}
nodePassword := hex.EncodeToString(password)
return nodePassword, os.WriteFile(nodePasswordFile, []byte(nodePassword+"\n"), 0600)
if err = os.WriteFile(nodePasswordFile, []byte(nodePassword+"\n"), 0600); err != nil {
return nodePassword, err
}
if err = configureACL(nodePassword); err != nil {
return nodePassword, err
}
return nodePassword, nil
}
func upgradeOldNodePasswordPath(oldNodePasswordFile, newNodePasswordFile string) {
@ -304,19 +316,22 @@ func isValidResolvConf(resolvConfFile string) bool {
nameserver := regexp.MustCompile(`^nameserver\s+([^\s]*)`)
scanner := bufio.NewScanner(file)
foundNameserver := false
for scanner.Scan() {
ipMatch := nameserver.FindStringSubmatch(scanner.Text())
if len(ipMatch) == 2 {
ip := net.ParseIP(ipMatch[1])
if ip == nil || !ip.IsGlobalUnicast() {
return false
} else {
foundNameserver = true
}
}
}
if err := scanner.Err(); err != nil {
return false
}
return true
return foundNameserver
}
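Two fixes land here: a scanner error now invalidates the file, and the function returns foundNameserver, so a resolv.conf with no nameserver lines at all is no longer treated as valid. The address test it relies on, sketched in isolation (illustrative, not the function itself): loopback and link-local nameservers are rejected because they are not global unicast.

package main

import (
	"fmt"
	"net"
)

func main() {
	// Sample nameserver addresses: public, loopback (systemd-resolved
	// stub), link-local, and an IPv6 unique-local address.
	for _, addr := range []string{"8.8.8.8", "127.0.0.53", "169.254.1.1", "fd00::1"} {
		ip := net.ParseIP(addr)
		fmt.Printf("%-12s usable nameserver: %v\n", addr, ip != nil && ip.IsGlobalUnicast())
	}
}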
func locateOrGenerateResolvConf(envInfo *cmds.Agent) string {
@ -359,7 +374,7 @@ func get(ctx context.Context, envInfo *cmds.Agent, proxy proxy.Proxy) (*config.N
// If the supervisor and externally-facing apiserver are not on the same port, tell the proxy where to find the apiserver.
if controlConfig.SupervisorPort != controlConfig.HTTPSPort {
isIPv6 := utilsnet.IsIPv6(net.ParseIP([]string{envInfo.NodeIP.String()}[0]))
if err := proxy.SetAPIServerPort(ctx, controlConfig.HTTPSPort, isIPv6); err != nil {
if err := proxy.SetAPIServerPort(controlConfig.HTTPSPort, isIPv6); err != nil {
return nil, errors.Wrapf(err, "failed to setup access to API Server port %d on at %s", controlConfig.HTTPSPort, proxy.SupervisorURL())
}
}
@ -444,6 +459,14 @@ func get(ctx context.Context, envInfo *cmds.Agent, proxy proxy.Proxy) (*config.N
}
}
if controlConfig.ClusterIPRange != nil {
if utilsnet.IPFamilyOfCIDR(controlConfig.ClusterIPRange) != utilsnet.IPFamilyOf(nodeIPs[0]) && len(nodeIPs) > 1 {
firstNodeIP := nodeIPs[0]
nodeIPs[0] = nodeIPs[1]
nodeIPs[1] = firstNodeIP
}
}
nodeExternalIPs, err := util.ParseStringSliceToIPs(envInfo.NodeExternalIP)
if err != nil {
return nil, fmt.Errorf("invalid node-external-ip: %w", err)
@ -501,12 +524,14 @@ func get(ctx context.Context, envInfo *cmds.Agent, proxy proxy.Proxy) (*config.N
SELinux: envInfo.EnableSELinux,
ContainerRuntimeEndpoint: envInfo.ContainerRuntimeEndpoint,
ImageServiceEndpoint: envInfo.ImageServiceEndpoint,
MultiClusterCIDR: controlConfig.MultiClusterCIDR,
EnablePProf: envInfo.EnablePProf,
EmbeddedRegistry: controlConfig.EmbeddedRegistry,
FlannelBackend: controlConfig.FlannelBackend,
FlannelIPv6Masq: controlConfig.FlannelIPv6Masq,
FlannelExternalIP: controlConfig.FlannelExternalIP,
EgressSelectorMode: controlConfig.EgressSelectorMode,
ServerHTTPSPort: controlConfig.HTTPSPort,
SupervisorMetrics: controlConfig.SupervisorMetrics,
Token: info.String(),
}
nodeConfig.FlannelIface = flannelIface
@ -560,20 +585,27 @@ func get(ctx context.Context, envInfo *cmds.Agent, proxy proxy.Proxy) (*config.N
}
nodeConfig.Containerd.Opt = filepath.Join(envInfo.DataDir, "agent", "containerd")
nodeConfig.Containerd.Log = filepath.Join(envInfo.DataDir, "agent", "containerd", "containerd.log")
nodeConfig.Containerd.Registry = filepath.Join(envInfo.DataDir, "agent", "etc", "containerd", "certs.d")
nodeConfig.Containerd.NoDefault = envInfo.ContainerdNoDefault
nodeConfig.Containerd.Debug = envInfo.Debug
applyContainerdStateAndAddress(nodeConfig)
applyCRIDockerdAddress(nodeConfig)
applyContainerdQoSClassConfigFileIfPresent(envInfo, nodeConfig)
applyContainerdQoSClassConfigFileIfPresent(envInfo, &nodeConfig.Containerd)
nodeConfig.Containerd.Template = filepath.Join(envInfo.DataDir, "agent", "etc", "containerd", "config.toml.tmpl")
nodeConfig.Certificate = servingCert
nodeConfig.AgentConfig.NodeIPs = nodeIPs
listenAddress, _, _, err := util.GetDefaultAddresses(nodeIPs[0])
if err != nil {
return nil, errors.Wrap(err, "cannot configure IPv4/IPv6 node-ip")
if envInfo.BindAddress != "" {
nodeConfig.AgentConfig.ListenAddress = envInfo.BindAddress
} else {
listenAddress, _, _, err := util.GetDefaultAddresses(nodeIPs[0])
if err != nil {
return nil, errors.Wrap(err, "cannot configure IPv4/IPv6 node-ip")
}
nodeConfig.AgentConfig.ListenAddress = listenAddress
}
nodeConfig.AgentConfig.NodeIP = nodeIPs[0].String()
nodeConfig.AgentConfig.ListenAddress = listenAddress
nodeConfig.AgentConfig.NodeIPs = nodeIPs
nodeConfig.AgentConfig.NodeExternalIPs = nodeExternalIPs
// if configured, set NodeExternalIP to the first IPv4 address, for legacy clients
@ -662,13 +694,47 @@ func get(ctx context.Context, envInfo *cmds.Agent, proxy proxy.Proxy) (*config.N
nodeConfig.AgentConfig.NodeLabels = envInfo.Labels
nodeConfig.AgentConfig.ImageCredProvBinDir = envInfo.ImageCredProvBinDir
nodeConfig.AgentConfig.ImageCredProvConfig = envInfo.ImageCredProvConfig
nodeConfig.AgentConfig.PrivateRegistry = envInfo.PrivateRegistry
nodeConfig.AgentConfig.DisableCCM = controlConfig.DisableCCM
nodeConfig.AgentConfig.DisableNPC = controlConfig.DisableNPC
nodeConfig.AgentConfig.MinTLSVersion = controlConfig.MinTLSVersion
nodeConfig.AgentConfig.CipherSuites = controlConfig.CipherSuites
nodeConfig.AgentConfig.Rootless = envInfo.Rootless
nodeConfig.AgentConfig.PodManifests = filepath.Join(envInfo.DataDir, "agent", DefaultPodManifestPath)
nodeConfig.AgentConfig.ProtectKernelDefaults = envInfo.ProtectKernelDefaults
nodeConfig.AgentConfig.DisableServiceLB = envInfo.DisableServiceLB
nodeConfig.AgentConfig.VLevel = cmds.LogConfig.VLevel
nodeConfig.AgentConfig.VModule = cmds.LogConfig.VModule
nodeConfig.AgentConfig.LogFile = cmds.LogConfig.LogFile
nodeConfig.AgentConfig.AlsoLogToStderr = cmds.LogConfig.AlsoLogToStderr
privRegistries, err := registries.GetPrivateRegistries(envInfo.PrivateRegistry)
if err != nil {
return nil, err
}
nodeConfig.AgentConfig.Registry = privRegistries.Registry
if nodeConfig.EmbeddedRegistry {
psk, err := hex.DecodeString(controlConfig.IPSECPSK)
if err != nil {
return nil, err
}
if len(psk) < 32 {
return nil, errors.New("insufficient PSK bytes")
}
conf := spegel.DefaultRegistry
conf.ExternalAddress = nodeConfig.AgentConfig.NodeIP
conf.InternalAddress = controlConfig.Loopback(false)
conf.RegistryPort = strconv.Itoa(controlConfig.SupervisorPort)
conf.ClientCAFile = clientCAFile
conf.ClientCertFile = clientK3sControllerCert
conf.ClientKeyFile = clientK3sControllerKey
conf.ServerCAFile = serverCAFile
conf.ServerCertFile = servingKubeletCert
conf.ServerKeyFile = servingKubeletKey
conf.PSK = psk[:32]
conf.InjectMirror(nodeConfig)
}
if err := validateNetworkConfig(nodeConfig); err != nil {
return nil, err


@ -22,20 +22,32 @@ func applyCRIDockerdAddress(nodeConfig *config.Node) {
nodeConfig.CRIDockerd.Address = "unix:///run/k3s/cri-dockerd/cri-dockerd.sock"
}
func applyContainerdQoSClassConfigFileIfPresent(envInfo *cmds.Agent, nodeConfig *config.Node) {
blockioPath := filepath.Join(envInfo.DataDir, "agent", "etc", "containerd", "blockio_config.yaml")
func applyContainerdQoSClassConfigFileIfPresent(envInfo *cmds.Agent, containerdConfig *config.Containerd) {
containerdConfigDir := filepath.Join(envInfo.DataDir, "agent", "etc", "containerd")
blockioPath := filepath.Join(containerdConfigDir, "blockio_config.yaml")
// Set containerd config if file exists
if _, err := os.Stat(blockioPath); !errors.Is(err, os.ErrNotExist) {
logrus.Infof("BlockIO configuration file found")
nodeConfig.Containerd.BlockIOConfig = blockioPath
if fileInfo, err := os.Stat(blockioPath); !errors.Is(err, os.ErrNotExist) {
if fileInfo.Mode().IsRegular() {
logrus.Infof("BlockIO configuration file found")
containerdConfig.BlockIOConfig = blockioPath
}
}
rdtPath := filepath.Join(envInfo.DataDir, "agent", "etc", "containerd", "rdt_config.yaml")
rdtPath := filepath.Join(containerdConfigDir, "rdt_config.yaml")
// Set containerd config if file exists
if _, err := os.Stat(rdtPath); !errors.Is(err, os.ErrNotExist) {
logrus.Infof("RDT configuration file found")
nodeConfig.Containerd.RDTConfig = rdtPath
if fileInfo, err := os.Stat(rdtPath); !errors.Is(err, os.ErrNotExist) {
if fileInfo.Mode().IsRegular() {
logrus.Infof("RDT configuration file found")
containerdConfig.RDTConfig = rdtPath
}
}
}
// configureACL will configure an Access Control List for the specified file.
// On Linux, this function is a no-op
func configureACL(file string) error {
return nil
}


@ -0,0 +1,165 @@
//go:build linux
// +build linux
package config
import (
"os"
"path/filepath"
"reflect"
"testing"
"github.com/k3s-io/k3s/pkg/cli/cmds"
"github.com/k3s-io/k3s/pkg/daemons/config"
"github.com/k3s-io/k3s/tests"
)
func Test_UnitApplyContainerdQoSClassConfigFileIfPresent(t *testing.T) {
configControl := config.Control{
DataDir: "/tmp/k3s/",
}
if err := tests.GenerateDataDir(&configControl); err != nil {
t.Errorf("Test_UnitApplyContainerdQoSClassConfigFileIfPresent() setup failed = %v", err)
}
defer tests.CleanupDataDir(&configControl)
containerdConfigDir := filepath.Join(configControl.DataDir, "agent", "etc", "containerd")
os.MkdirAll(containerdConfigDir, 0700)
type args struct {
envInfo *cmds.Agent
containerdConfig *config.Containerd
}
tests := []struct {
name string
args args
setup func() error
teardown func()
want *config.Containerd
}{
{
name: "No config file",
args: args{
envInfo: &cmds.Agent{
DataDir: configControl.DataDir,
},
containerdConfig: &config.Containerd{},
},
setup: func() error {
return nil
},
teardown: func() {},
want: &config.Containerd{},
},
{
name: "BlockIO config file",
args: args{
envInfo: &cmds.Agent{
DataDir: configControl.DataDir,
},
containerdConfig: &config.Containerd{},
},
setup: func() error {
_, err := os.Create(filepath.Join(containerdConfigDir, "blockio_config.yaml"))
return err
},
teardown: func() {
os.Remove(filepath.Join(containerdConfigDir, "blockio_config.yaml"))
},
want: &config.Containerd{
BlockIOConfig: filepath.Join(containerdConfigDir, "blockio_config.yaml"),
},
},
{
name: "RDT config file",
args: args{
envInfo: &cmds.Agent{
DataDir: configControl.DataDir,
},
containerdConfig: &config.Containerd{},
},
setup: func() error {
_, err := os.Create(filepath.Join(containerdConfigDir, "rdt_config.yaml"))
return err
},
teardown: func() {
os.Remove(filepath.Join(containerdConfigDir, "rdt_config.yaml"))
},
want: &config.Containerd{
RDTConfig: filepath.Join(containerdConfigDir, "rdt_config.yaml"),
},
},
{
name: "Both config files",
args: args{
envInfo: &cmds.Agent{
DataDir: configControl.DataDir,
},
containerdConfig: &config.Containerd{},
},
setup: func() error {
_, err := os.Create(filepath.Join(containerdConfigDir, "blockio_config.yaml"))
if err != nil {
return err
}
_, err = os.Create(filepath.Join(containerdConfigDir, "rdt_config.yaml"))
return err
},
teardown: func() {
os.Remove(filepath.Join(containerdConfigDir, "blockio_config.yaml"))
os.Remove(filepath.Join(containerdConfigDir, "rdt_config.yaml"))
},
want: &config.Containerd{
BlockIOConfig: filepath.Join(containerdConfigDir, "blockio_config.yaml"),
RDTConfig: filepath.Join(containerdConfigDir, "rdt_config.yaml"),
},
},
{
name: "BlockIO path is a directory",
args: args{
envInfo: &cmds.Agent{
DataDir: configControl.DataDir,
},
containerdConfig: &config.Containerd{},
},
setup: func() error {
return os.Mkdir(filepath.Join(containerdConfigDir, "blockio_config.yaml"), 0700)
},
teardown: func() {
os.Remove(filepath.Join(containerdConfigDir, "blockio_config.yaml"))
},
want: &config.Containerd{},
},
{
name: "RDT path is a directory",
args: args{
envInfo: &cmds.Agent{
DataDir: configControl.DataDir,
},
containerdConfig: &config.Containerd{},
},
setup: func() error {
return os.Mkdir(filepath.Join(containerdConfigDir, "rdt_config.yaml"), 0700)
},
teardown: func() {
os.Remove(filepath.Join(containerdConfigDir, "rdt_config.yaml"))
},
want: &config.Containerd{},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
tt.setup()
defer tt.teardown()
envInfo := tt.args.envInfo
containerdConfig := tt.args.containerdConfig
applyContainerdQoSClassConfigFileIfPresent(envInfo, containerdConfig)
if !reflect.DeepEqual(containerdConfig, tt.want) {
t.Errorf("applyContainerdQoSClassConfigFileIfPresent() = %+v\nWant %+v", containerdConfig, tt.want)
}
})
}
}


@ -6,8 +6,11 @@ package config
import (
"path/filepath"
"github.com/k3s-io/k3s/pkg/agent/util/acl"
"github.com/k3s-io/k3s/pkg/cli/cmds"
"github.com/k3s-io/k3s/pkg/daemons/config"
"github.com/pkg/errors"
"golang.org/x/sys/windows"
)
func applyContainerdStateAndAddress(nodeConfig *config.Node) {
@ -19,6 +22,22 @@ func applyCRIDockerdAddress(nodeConfig *config.Node) {
nodeConfig.CRIDockerd.Address = "npipe:////.pipe/cri-dockerd"
}
func applyContainerdQoSClassConfigFileIfPresent(envInfo *cmds.Agent, nodeConfig *config.Node) {
func applyContainerdQoSClassConfigFileIfPresent(envInfo *cmds.Agent, containerdConfig *config.Containerd) {
// QoS-class resource management is not supported on Windows.
}
// configureACL will configure an Access Control List for the specified file,
// ensuring that only the LocalSystem and Administrators Group have access to the file contents
func configureACL(file string) error {
// by default Apply will use the current user (LocalSystem in the case of a Windows service)
// as the owner and current user group as the allowed group
// additionally, we define a DACL to permit access to the file to the local system and all administrators
if err := acl.Apply(file, nil, nil, []windows.EXPLICIT_ACCESS{
acl.GrantSid(windows.GENERIC_ALL, acl.LocalSystemSID()),
acl.GrantSid(windows.GENERIC_ALL, acl.BuiltinAdministratorsSID()),
}...); err != nil {
return errors.Wrapf(err, "failed to configure Access Control List For %s", file)
}
return nil
}
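A hedged usage example: the caller below is illustrative, not the actual k3s call site, but shows the intended pattern of locking down a freshly written credential file on Windows:

// Illustrative only: restrict a just-written kubeconfig to LocalSystem and Administrators.
kubeconfig := filepath.Join(dataDir, "cred", "admin.kubeconfig") // hypothetical path
if err := configureACL(kubeconfig); err != nil {
	return err
}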

View File

@ -0,0 +1,243 @@
package containerd
import (
"fmt"
"net"
"net/url"
"os"
"path/filepath"
"strings"
"github.com/containerd/containerd/remotes/docker"
"github.com/k3s-io/k3s/pkg/agent/templates"
util2 "github.com/k3s-io/k3s/pkg/agent/util"
"github.com/k3s-io/k3s/pkg/daemons/config"
"github.com/k3s-io/k3s/pkg/spegel"
"github.com/k3s-io/k3s/pkg/version"
"github.com/rancher/wharfie/pkg/registries"
"github.com/sirupsen/logrus"
)
type HostConfigs map[string]templates.HostConfig
// writeContainerdConfig renders and saves config.toml from the filled template
func writeContainerdConfig(cfg *config.Node, containerdConfig templates.ContainerdConfig) error {
var containerdTemplate string
containerdTemplateBytes, err := os.ReadFile(cfg.Containerd.Template)
if err == nil {
logrus.Infof("Using containerd template at %s", cfg.Containerd.Template)
containerdTemplate = string(containerdTemplateBytes)
} else if os.IsNotExist(err) {
containerdTemplate = templates.ContainerdConfigTemplate
} else {
return err
}
parsedTemplate, err := templates.ParseTemplateFromConfig(containerdTemplate, containerdConfig)
if err != nil {
return err
}
return util2.WriteFile(cfg.Containerd.Config, parsedTemplate)
}
// writeContainerdHosts merges registry mirrors/configs, and renders and saves hosts.toml from the filled template
func writeContainerdHosts(cfg *config.Node, containerdConfig templates.ContainerdConfig) error {
mirrorAddr := net.JoinHostPort(spegel.DefaultRegistry.InternalAddress, spegel.DefaultRegistry.RegistryPort)
hosts := getHostConfigs(containerdConfig.PrivateRegistryConfig, containerdConfig.NoDefaultEndpoint, mirrorAddr)
// Clean up previous configuration templates
os.RemoveAll(cfg.Containerd.Registry)
// Write out new templates
for host, config := range hosts {
hostDir := filepath.Join(cfg.Containerd.Registry, host)
hostsFile := filepath.Join(hostDir, "hosts.toml")
hostsTemplate, err := templates.ParseHostsTemplateFromConfig(templates.HostsTomlTemplate, config)
if err != nil {
return err
}
if err := os.MkdirAll(hostDir, 0700); err != nil {
return err
}
if err := util2.WriteFile(hostsFile, hostsTemplate); err != nil {
return err
}
}
return nil
}
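The on-disk result is one hosts.toml per registry host under the registry config directory, with the wildcard entry rendered under a literal _default directory (paths below are illustrative):

<registry-dir>/docker.io/hosts.toml
<registry-dir>/registry.example.com/hosts.toml
<registry-dir>/_default/hosts.toml   (rendered from the "*" wildcard entry)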
// getHostConfigs merges the registry mirrors/configs into HostConfig template structs
func getHostConfigs(registry *registries.Registry, noDefaultEndpoint bool, mirrorAddr string) HostConfigs {
hosts := map[string]templates.HostConfig{}
// create config for default endpoints
for host, config := range registry.Configs {
if c, err := defaultHostConfig(host, mirrorAddr, config); err != nil {
logrus.Errorf("Failed to generate config for registry %s: %v", host, err)
} else {
if host == "*" {
host = "_default"
}
hosts[host] = *c
}
}
// create endpoints for mirrors
for host, mirror := range registry.Mirrors {
// create the default config, if it wasn't explicitly mentioned in the config section
config, ok := hosts[host]
if !ok {
if c, err := defaultHostConfig(host, mirrorAddr, configForHost(registry.Configs, host)); err != nil {
logrus.Errorf("Failed to generate config for registry %s: %v", host, err)
continue
} else {
if noDefaultEndpoint {
c.Default = nil
} else if host == "*" {
c.Default = &templates.RegistryEndpoint{URL: &url.URL{}}
}
config = *c
}
}
// track which endpoints we've already seen to avoid creating duplicates
seenEndpoint := map[string]bool{}
// TODO: rewrites are currently copied from the mirror settings into each endpoint.
// In the future, we should allow for per-endpoint rewrites, instead of expecting
// all mirrors to have the same structure. This will require changes to the registries.yaml
// structure, which is defined in rancher/wharfie.
for i, endpoint := range mirror.Endpoints {
registryName, url, override, err := normalizeEndpointAddress(endpoint, mirrorAddr)
if err != nil {
logrus.Warnf("Ignoring invalid endpoint URL %d=%s for %s: %v", i, endpoint, host, err)
} else if _, ok := seenEndpoint[url.String()]; ok {
logrus.Warnf("Skipping duplicate endpoint URL %d=%s for %s", i, endpoint, host)
} else {
seenEndpoint[url.String()] = true
var rewrites map[string]string
// Do not apply rewrites to the embedded registry endpoint
if url.Host != mirrorAddr {
rewrites = mirror.Rewrites
}
ep := templates.RegistryEndpoint{
Config: configForHost(registry.Configs, registryName),
Rewrites: rewrites,
OverridePath: override,
URL: url,
}
if i+1 == len(mirror.Endpoints) && endpointURLEqual(config.Default, &ep) {
// if the last endpoint is the default endpoint, move it there
config.Default = &ep
} else {
config.Endpoints = append(config.Endpoints, ep)
}
}
}
if host == "*" {
host = "_default"
}
hosts[host] = config
}
// Clean up hosts and default endpoints where resulting config leaves only defaults
for host, config := range hosts {
// if this host has no endpoints and the default has no config, delete this host
if len(config.Endpoints) == 0 && !endpointHasConfig(config.Default) {
delete(hosts, host)
}
}
return hosts
}
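To make the wildcard and merge behavior concrete, here is a small sketch using the registries structs from rancher/wharfie as imported above (the endpoints and mirror address are illustrative):

registry := &registries.Registry{
	Mirrors: map[string]registries.Mirror{
		"docker.io": {Endpoints: []string{"https://mirror.example.com/v2"}}, // hypothetical mirror
		"*":         {Endpoints: []string{"https://fallback.example.com"}},  // wildcard mirror
	},
}
hosts := getHostConfigs(registry, false, "127.0.0.1:5001") // mirrorAddr is illustrative
// hosts["docker.io"] now carries the mirror endpoint plus a default upstream endpoint;
// the "*" entry is keyed as "_default", so it is written to the _default directory
// rather than to a literal "*" path.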
// normalizeEndpointAddress normalizes the endpoint address.
// If successful, it returns the registry name, URL, and a bool indicating if the endpoint path should be overridden.
// If unsuccessful, an error is returned.
// Scheme and hostname logic should match containerd:
// https://github.com/containerd/containerd/blob/v1.7.13/remotes/docker/config/hosts.go#L99-L131
func normalizeEndpointAddress(endpoint, mirrorAddr string) (string, *url.URL, bool, error) {
// Ensure that the endpoint address has a scheme so that the URL is parsed properly
if !strings.Contains(endpoint, "://") {
endpoint = "//" + endpoint
}
endpointURL, err := url.Parse(endpoint)
if err != nil {
return "", nil, false, err
}
port := endpointURL.Port()
// set default scheme, if not provided
if endpointURL.Scheme == "" {
// localhost on odd ports defaults to http, unless it's the embedded mirror
if docker.IsLocalhost(endpointURL.Host) && port != "" && port != "443" && endpointURL.Host != mirrorAddr {
endpointURL.Scheme = "http"
} else {
endpointURL.Scheme = "https"
}
}
registry := endpointURL.Host
endpointURL.Host, _ = docker.DefaultHost(registry)
// This is the reverse of the DefaultHost normalization
if endpointURL.Host == "registry-1.docker.io" {
registry = "docker.io"
}
switch endpointURL.Path {
case "", "/", "/v2":
// If the path is empty, /, or /v2, use the default path.
endpointURL.Path = "/v2"
return registry, endpointURL, false, nil
}
return registry, endpointURL, true, nil
}
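A few worked examples of this normalization, following the logic above (treat the exact outputs as a sketch):

//   "registry.example.com"         -> https://registry.example.com/v2       (override=false)
//   "localhost:5000"               -> http://localhost:5000/v2              (override=false; localhost on an odd port defaults to http)
//   "registry-1.docker.io"         -> registry name "docker.io"             (reverse of the DefaultHost mapping)
//   "mirror.example.com/custom/v2" -> https://mirror.example.com/custom/v2  (override=true; non-default path is preserved)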
func defaultHostConfig(host, mirrorAddr string, config registries.RegistryConfig) (*templates.HostConfig, error) {
_, url, _, err := normalizeEndpointAddress(host, mirrorAddr)
if err != nil {
return nil, fmt.Errorf("invalid endpoint URL %s for %s: %v", host, host, err)
}
if host == "*" {
url = nil
}
return &templates.HostConfig{
Program: version.Program,
Default: &templates.RegistryEndpoint{
URL: url,
Config: config,
},
}, nil
}
func configForHost(configs map[string]registries.RegistryConfig, host string) registries.RegistryConfig {
// check for config under modified hostname. If the hostname is unmodified, or there is no config for
// the modified hostname, return the config for the default hostname.
if h, _ := docker.DefaultHost(host); h != host {
if c, ok := configs[h]; ok {
return c
}
}
return configs[host]
}
// endpointURLEqual compares endpoint URL strings
func endpointURLEqual(a, b *templates.RegistryEndpoint) bool {
var au, bu string
if a != nil && a.URL != nil {
au = a.URL.String()
}
if b != nil && b.URL != nil {
bu = b.URL.String()
}
return au == bu
}
func endpointHasConfig(ep *templates.RegistryEndpoint) bool {
if ep != nil {
return ep.OverridePath || ep.Config.Auth != nil || ep.Config.TLS != nil || len(ep.Rewrites) > 0
}
return false
}

View File

@ -4,7 +4,6 @@
package containerd
import (
"context"
"os"
"github.com/containerd/containerd"
@ -13,19 +12,20 @@ import (
stargz "github.com/containerd/stargz-snapshotter/service"
"github.com/docker/docker/pkg/parsers/kernel"
"github.com/k3s-io/k3s/pkg/agent/templates"
util2 "github.com/k3s-io/k3s/pkg/agent/util"
"github.com/k3s-io/k3s/pkg/cgroups"
"github.com/k3s-io/k3s/pkg/daemons/config"
"github.com/k3s-io/k3s/pkg/version"
"github.com/opencontainers/runc/libcontainer/userns"
"github.com/pkg/errors"
"github.com/rancher/wharfie/pkg/registries"
"github.com/sirupsen/logrus"
"golang.org/x/sys/unix"
"k8s.io/kubernetes/pkg/kubelet/util"
)
const socketPrefix = "unix://"
const (
socketPrefix = "unix://"
runtimesPath = "/usr/local/nvidia/toolkit:/opt/kwasm/bin:/usr/sbin:/usr/local/sbin:/usr/bin:/usr/local/bin"
)
func getContainerdArgs(cfg *config.Node) []string {
args := []string{
@ -38,14 +38,9 @@ func getContainerdArgs(cfg *config.Node) []string {
return args
}
// setupContainerdConfig generates the containerd.toml, using a template combined with various
// SetupContainerdConfig generates the containerd.toml, using a template combined with various
// runtime configurations and registry mirror settings provided by the administrator.
func setupContainerdConfig(ctx context.Context, cfg *config.Node) error {
privRegistries, err := registries.GetPrivateRegistries(cfg.AgentConfig.PrivateRegistry)
if err != nil {
return err
}
func SetupContainerdConfig(cfg *config.Node) error {
isRunningInUserNS := userns.RunningInUserNS()
_, _, controllers := cgroups.CheckCgroups()
// "/sys/fs/cgroup" is namespaced
@ -60,23 +55,28 @@ func setupContainerdConfig(ctx context.Context, cfg *config.Node) error {
cfg.AgentConfig.Systemd = !isRunningInUserNS && controllers["cpuset"] && os.Getenv("INVOCATION_ID") != ""
}
extraRuntimes := findContainerRuntimes(os.DirFS(string(os.PathSeparator)))
// set the path to include the runtimes and then remove the additional path entries
// that we added after finding the runtimes
originalPath := os.Getenv("PATH")
os.Setenv("PATH", runtimesPath)
extraRuntimes := findContainerRuntimes()
os.Setenv("PATH", originalPath)
// Verifies if the DefaultRuntime can be found
if _, ok := extraRuntimes[cfg.DefaultRuntime]; !ok && cfg.DefaultRuntime != "" {
return errors.Errorf("default runtime %s was not found", cfg.DefaultRuntime)
}
var containerdTemplate string
containerdConfig := templates.ContainerdConfig{
NodeConfig: cfg,
DisableCgroup: disableCgroup,
SystemdCgroup: cfg.AgentConfig.Systemd,
IsRunningInUserNS: isRunningInUserNS,
EnableUnprivileged: kernel.CheckKernelVersion(4, 11, 0),
PrivateRegistryConfig: privRegistries.Registry,
PrivateRegistryConfig: cfg.AgentConfig.Registry,
ExtraRuntimes: extraRuntimes,
Program: version.Program,
NoDefaultEndpoint: cfg.Containerd.NoDefault,
}
selEnabled, selConfigured, err := selinuxStatus()
@ -90,21 +90,11 @@ func setupContainerdConfig(ctx context.Context, cfg *config.Node) error {
logrus.Warnf("SELinux is enabled for "+version.Program+" but process is not running in context '%s', "+version.Program+"-selinux policy may need to be applied", SELinuxContextType)
}
containerdTemplateBytes, err := os.ReadFile(cfg.Containerd.Template)
if err == nil {
logrus.Infof("Using containerd template at %s", cfg.Containerd.Template)
containerdTemplate = string(containerdTemplateBytes)
} else if os.IsNotExist(err) {
containerdTemplate = templates.ContainerdConfigTemplate
} else {
return err
}
parsedTemplate, err := templates.ParseTemplateFromConfig(containerdTemplate, containerdConfig)
if err != nil {
if err := writeContainerdConfig(cfg, containerdConfig); err != nil {
return err
}
return util2.WriteFile(cfg.Containerd.Config, parsedTemplate)
return writeContainerdHosts(cfg, containerdConfig)
}
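With setup split out of Run and exported, the caller is now responsible for rendering the config before launching containerd. A hedged sketch of the resulting call order (the actual agent call site may differ):

if err := containerd.SetupContainerdConfig(nodeConfig); err != nil {
	return err
}
if err := containerd.Run(ctx, nodeConfig); err != nil {
	return err
}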
func Client(address string) (*containerd.Client, error) {

File diff suppressed because it is too large.

View File

@ -4,16 +4,11 @@
package containerd
import (
"context"
"os"
"github.com/containerd/containerd"
"github.com/k3s-io/k3s/pkg/agent/templates"
util2 "github.com/k3s-io/k3s/pkg/agent/util"
"github.com/k3s-io/k3s/pkg/daemons/config"
util3 "github.com/k3s-io/k3s/pkg/util"
"github.com/pkg/errors"
"github.com/rancher/wharfie/pkg/registries"
"github.com/sirupsen/logrus"
"k8s.io/kubernetes/pkg/kubelet/util"
)
@ -26,43 +21,27 @@ func getContainerdArgs(cfg *config.Node) []string {
return args
}
// setupContainerdConfig generates the containerd.toml, using a template combined with various
// SetupContainerdConfig generates the containerd.toml, using a template combined with various
// runtime configurations and registry mirror settings provided by the administrator.
func setupContainerdConfig(ctx context.Context, cfg *config.Node) error {
privRegistries, err := registries.GetPrivateRegistries(cfg.AgentConfig.PrivateRegistry)
if err != nil {
return err
}
func SetupContainerdConfig(cfg *config.Node) error {
if cfg.SELinux {
logrus.Warn("SELinux isn't supported on windows")
}
var containerdTemplate string
containerdConfig := templates.ContainerdConfig{
NodeConfig: cfg,
DisableCgroup: true,
SystemdCgroup: false,
IsRunningInUserNS: false,
PrivateRegistryConfig: privRegistries.Registry,
PrivateRegistryConfig: cfg.AgentConfig.Registry,
NoDefaultEndpoint: cfg.Containerd.NoDefault,
}
containerdTemplateBytes, err := os.ReadFile(cfg.Containerd.Template)
if err == nil {
logrus.Infof("Using containerd template at %s", cfg.Containerd.Template)
containerdTemplate = string(containerdTemplateBytes)
} else if os.IsNotExist(err) {
containerdTemplate = templates.ContainerdConfigTemplate
} else {
return err
}
parsedTemplate, err := templates.ParseTemplateFromConfig(containerdTemplate, containerdConfig)
if err != nil {
if err := writeContainerdConfig(cfg, containerdConfig); err != nil {
return err
}
return util2.WriteFile(cfg.Containerd.Config, parsedTemplate)
return writeContainerdHosts(cfg, containerdConfig)
}
func Client(address string) (*containerd.Client, error) {

View File

@ -14,9 +14,9 @@ import (
"github.com/containerd/containerd"
"github.com/containerd/containerd/errdefs"
"github.com/containerd/containerd/images"
"github.com/containerd/containerd/leases"
"github.com/containerd/containerd/namespaces"
"github.com/containerd/containerd/pkg/cri/constants"
"github.com/containerd/containerd/pkg/cri/labels"
"github.com/containerd/containerd/reference/docker"
"github.com/k3s-io/k3s/pkg/agent/cri"
util2 "github.com/k3s-io/k3s/pkg/agent/util"
@ -25,19 +25,22 @@ import (
"github.com/natefinch/lumberjack"
"github.com/pkg/errors"
"github.com/rancher/wharfie/pkg/tarfile"
"github.com/rancher/wrangler/pkg/merr"
"github.com/rancher/wrangler/v3/pkg/merr"
"github.com/sirupsen/logrus"
"google.golang.org/grpc"
runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)
var (
// In addition to using the CRI pinned label, we add our own label to indicate that
// the image was pinned by the import process, so that we can clear the pin on subsequent startups.
// ref: https://github.com/containerd/containerd/blob/release/1.7/pkg/cri/labels/labels.go
k3sPinnedImageLabelKey = "io.cattle." + version.Program + ".pinned"
k3sPinnedImageLabelValue = "pinned"
)
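Assuming version.Program is "k3s", the key expands to io.cattle.k3s.pinned. A minimal sketch of querying previously pinned images with the same filter syntax that clearLabels uses below:

filter := fmt.Sprintf("labels.%q==%s", k3sPinnedImageLabelKey, k3sPinnedImageLabelValue)
// e.g. labels."io.cattle.k3s.pinned"==pinned
pinned, err := client.ImageService().List(ctx, filter)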
// Run configures and starts containerd as a child process. Once it is up, images are preloaded
// or pulled from files found in the agent images directory.
func Run(ctx context.Context, cfg *config.Node) error {
if err := setupContainerdConfig(ctx, cfg); err != nil {
return err
}
args := getContainerdArgs(cfg)
stdOut := io.Writer(os.Stdout)
stdErr := io.Writer(os.Stderr)
@ -92,24 +95,26 @@ func Run(ctx context.Context, cfg *config.Node) error {
cmd.Env = append(env, cenv...)
addDeathSig(cmd)
if err := cmd.Run(); err != nil {
err := cmd.Run()
if err != nil && !errors.Is(err, context.Canceled) {
logrus.Errorf("containerd exited: %s", err)
os.Exit(1)
}
os.Exit(1)
os.Exit(0)
}()
if err := cri.WaitForService(ctx, cfg.Containerd.Address, "containerd"); err != nil {
return err
}
return preloadImages(ctx, cfg)
return PreloadImages(ctx, cfg)
}
// preloadImages reads the contents of the agent images directory, and attempts to
// PreloadImages reads the contents of the agent images directory, and attempts to
// import into containerd any files found there. Supported compressed types are decompressed, and
// any .txt files are processed as a list of images that should be pre-pulled from remote registries.
// If configured, imported images are retagged as being pulled from additional registries.
func preloadImages(ctx context.Context, cfg *config.Node) error {
func PreloadImages(ctx context.Context, cfg *config.Node) error {
fileInfo, err := os.Stat(cfg.Images)
if os.IsNotExist(err) {
return nil
@ -134,38 +139,29 @@ func preloadImages(ctx context.Context, cfg *config.Node) error {
}
defer client.Close()
// Image pulls must be done using the CRI client, not the containerd client.
// Repository mirrors and rewrites are handled by the CRI service; if you pull directly
// using the containerd image service it will ignore the configured settings.
criConn, err := cri.Connection(ctx, cfg.Containerd.Address)
if err != nil {
return err
}
defer criConn.Close()
imageClient := runtimeapi.NewImageServiceClient(criConn)
// Ensure that our images are imported into the correct namespace
ctx = namespaces.WithNamespace(ctx, constants.K8sContainerdNamespace)
// At startup all leases from k3s are cleared
ls := client.LeasesService()
existingLeases, err := ls.List(ctx)
if err != nil {
return err
// At startup all leases from k3s are cleared; we no longer use leases to lock content
if err := clearLeases(ctx, client); err != nil {
return errors.Wrap(err, "failed to clear leases")
}
for _, lease := range existingLeases {
if lease.ID == version.Program {
logrus.Debugf("Deleting existing lease: %v", lease)
ls.Delete(ctx, lease)
}
// Clear the pinned labels on all images previously pinned by k3s
if err := clearLabels(ctx, client); err != nil {
return errors.Wrap(err, "failed to clear pinned labels")
}
// Any images found on import are given a lease that never expires
lease, err := ls.Create(ctx, leases.WithID(version.Program))
if err != nil {
return err
}
// Ensure that our images are locked by the lease
ctx = leases.WithLease(ctx, lease.ID)
for _, fileInfo := range fileInfos {
if fileInfo.IsDir() {
continue
@ -174,7 +170,7 @@ func preloadImages(ctx context.Context, cfg *config.Node) error {
start := time.Now()
filePath := filepath.Join(cfg.Images, fileInfo.Name())
if err := preloadFile(ctx, cfg, client, criConn, filePath); err != nil {
if err := preloadFile(ctx, cfg, client, imageClient, filePath); err != nil {
logrus.Errorf("Error encountered while importing %s: %v", filePath, err)
continue
}
@ -186,7 +182,8 @@ func preloadImages(ctx context.Context, cfg *config.Node) error {
// preloadFile handles loading images from a single tarball or pre-pull image list.
// This is in its own function so that we can ensure that the various readers are properly closed, as some
// decompressing readers need to be explicitly closed and others do not.
func preloadFile(ctx context.Context, cfg *config.Node, client *containerd.Client, criConn *grpc.ClientConn, filePath string) error {
func preloadFile(ctx context.Context, cfg *config.Node, client *containerd.Client, imageClient runtimeapi.ImageServiceClient, filePath string) error {
var images []images.Image
if util2.HasSuffixI(filePath, ".txt") {
file, err := os.Open(filePath)
if err != nil {
@ -194,28 +191,102 @@ func preloadFile(ctx context.Context, cfg *config.Node, client *containerd.Clien
}
defer file.Close()
logrus.Infof("Pulling images from %s", filePath)
return prePullImages(ctx, criConn, file)
images, err = prePullImages(ctx, client, imageClient, file)
if err != nil {
return errors.Wrap(err, "failed to pull images from "+filePath)
}
} else {
opener, err := tarfile.GetOpener(filePath)
if err != nil {
return err
}
imageReader, err := opener()
if err != nil {
return err
}
defer imageReader.Close()
logrus.Infof("Importing images from %s", filePath)
images, err = client.Import(ctx, imageReader, containerd.WithAllPlatforms(true), containerd.WithSkipMissing())
if err != nil {
return errors.Wrap(err, "failed to import images from "+filePath)
}
}
opener, err := tarfile.GetOpener(filePath)
if err := labelImages(ctx, client, images); err != nil {
return errors.Wrap(err, "failed to add pinned label to images")
}
if err := retagImages(ctx, client, images, cfg.AgentConfig.AirgapExtraRegistry); err != nil {
return errors.Wrap(err, "failed to retag images")
}
for _, image := range images {
logrus.Infof("Imported %s", image.Name)
}
return nil
}
// clearLeases deletes any leases left by previous versions of k3s.
// We no longer use leases to lock content; they only locked the
// blobs, not the actual images.
func clearLeases(ctx context.Context, client *containerd.Client) error {
ls := client.LeasesService()
existingLeases, err := ls.List(ctx)
if err != nil {
return err
}
for _, lease := range existingLeases {
if lease.ID == version.Program {
logrus.Debugf("Deleting existing lease: %v", lease)
ls.Delete(ctx, lease)
}
}
return nil
}
imageReader, err := opener()
// clearLabels removes the pinned labels on all images in the image store that were previously pinned by k3s
func clearLabels(ctx context.Context, client *containerd.Client) error {
var errs []error
imageService := client.ImageService()
images, err := imageService.List(ctx, fmt.Sprintf("labels.%q==%s", k3sPinnedImageLabelKey, k3sPinnedImageLabelValue))
if err != nil {
return err
}
defer imageReader.Close()
logrus.Infof("Importing images from %s", filePath)
images, err := client.Import(ctx, imageReader, containerd.WithAllPlatforms(true))
if err != nil {
return err
for _, image := range images {
delete(image.Labels, k3sPinnedImageLabelKey)
delete(image.Labels, labels.PinnedImageLabelKey)
if _, err := imageService.Update(ctx, image, "labels"); err != nil {
errs = append(errs, errors.Wrap(err, "failed to delete labels from image "+image.Name))
}
}
return merr.NewErrors(errs...)
}
return retagImages(ctx, client, images, cfg.AgentConfig.AirgapExtraRegistry)
// labelImages adds labels to the listed images, indicating that they
// are pinned by k3s and should not be pruned.
func labelImages(ctx context.Context, client *containerd.Client, images []images.Image) error {
var errs []error
imageService := client.ImageService()
for i, image := range images {
if image.Labels[k3sPinnedImageLabelKey] == k3sPinnedImageLabelValue &&
image.Labels[labels.PinnedImageLabelKey] == labels.PinnedImageLabelValue {
continue
}
if image.Labels == nil {
image.Labels = map[string]string{}
}
image.Labels[k3sPinnedImageLabelKey] = k3sPinnedImageLabelValue
image.Labels[labels.PinnedImageLabelKey] = labels.PinnedImageLabelValue
updatedImage, err := imageService.Update(ctx, image, "labels")
if err != nil {
errs = append(errs, errors.Wrap(err, "failed to add labels to image "+image.Name))
} else {
images[i] = updatedImage
}
}
return merr.NewErrors(errs...)
}
// retagImages retags all listed images as having been pulled from the given remote registries.
@ -227,24 +298,27 @@ func retagImages(ctx context.Context, client *containerd.Client, images []images
for _, image := range images {
name, err := parseNamedTagged(image.Name)
if err != nil {
errs = append(errs, errors.Wrap(err, "failed to parse image name"))
errs = append(errs, errors.Wrap(err, "failed to parse tags for image "+image.Name))
continue
}
logrus.Infof("Imported %s", image.Name)
for _, registry := range registries {
image.Name = fmt.Sprintf("%s/%s:%s", registry, docker.Path(name), name.Tag())
newName := fmt.Sprintf("%s/%s:%s", registry, docker.Path(name), name.Tag())
if newName == image.Name {
continue
}
image.Name = newName
if _, err = imageService.Create(ctx, image); err != nil {
if errdefs.IsAlreadyExists(err) {
if err = imageService.Delete(ctx, image.Name); err != nil {
errs = append(errs, errors.Wrap(err, "failed to delete existing image"))
errs = append(errs, errors.Wrap(err, "failed to delete existing image "+image.Name))
continue
}
if _, err = imageService.Create(ctx, image); err != nil {
errs = append(errs, errors.Wrap(err, "failed to tag after deleting existing image"))
errs = append(errs, errors.Wrap(err, "failed to tag after deleting existing image "+image.Name))
continue
}
} else {
errs = append(errs, errors.Wrap(err, "failed to tag image"))
errs = append(errs, errors.Wrap(err, "failed to tag image "+image.Name))
continue
}
}
@ -269,30 +343,47 @@ func parseNamedTagged(name string) (docker.NamedTagged, error) {
}
// prePullImages asks containerd to pull images in a given list, so that they
// are ready when the containers attempt to start later.
func prePullImages(ctx context.Context, conn *grpc.ClientConn, images io.Reader) error {
imageClient := runtimeapi.NewImageServiceClient(conn)
scanner := bufio.NewScanner(images)
// are ready when the containers attempt to start later. If the image already exists,
// or is successfully pulled, information about the image is retrieved from the image store.
// NOTE: Pulls MUST be done via CRI API, not containerd API, in order to use mirrors and rewrites.
func prePullImages(ctx context.Context, client *containerd.Client, imageClient runtimeapi.ImageServiceClient, imageList io.Reader) ([]images.Image, error) {
errs := []error{}
images := []images.Image{}
imageService := client.ImageService()
scanner := bufio.NewScanner(imageList)
for scanner.Scan() {
line := strings.TrimSpace(scanner.Text())
resp, err := imageClient.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{
name := strings.TrimSpace(scanner.Text())
if status, err := imageClient.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{
Image: &runtimeapi.ImageSpec{
Image: line,
Image: name,
},
})
if err == nil && resp.Image != nil {
}); err == nil && status.Image != nil && len(status.Image.RepoTags) > 0 {
logrus.Infof("Image %s has already been pulled", name)
for _, tag := range status.Image.RepoTags {
if image, err := imageService.Get(ctx, tag); err != nil {
errs = append(errs, err)
} else {
images = append(images, image)
}
}
continue
}
logrus.Infof("Pulling image %s...", line)
_, err = imageClient.PullImage(ctx, &runtimeapi.PullImageRequest{
logrus.Infof("Pulling image %s", name)
if _, err := imageClient.PullImage(ctx, &runtimeapi.PullImageRequest{
Image: &runtimeapi.ImageSpec{
Image: line,
Image: name,
},
})
if err != nil {
logrus.Errorf("Failed to pull %s: %v", line, err)
}); err != nil {
errs = append(errs, err)
} else {
if image, err := imageService.Get(ctx, name); err != nil {
errs = append(errs, err)
} else {
images = append(images, image)
}
}
}
return nil
return images, merr.NewErrors(errs...)
}
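For reference, a pre-pull list is nothing more than newline-separated image references dropped into the agent images directory, e.g. a hypothetical <data-dir>/agent/images/airgap-images.txt containing:

docker.io/library/busybox:1.36
registry.example.com/app/frontend:v1.2.3

Each non-blank line is pulled through the CRI API so that the configured mirrors and rewrites apply.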

View File

@ -6,7 +6,7 @@ package containerd
import (
"errors"
"io/fs"
"path/filepath"
"os/exec"
"github.com/k3s-io/k3s/pkg/agent/templates"
"github.com/sirupsen/logrus"
@ -17,61 +17,40 @@ import (
type runtimeConfigs map[string]templates.ContainerdRuntimeConfig
// searchForRuntimes searches for runtimes and adds them into foundRuntimes
// It checks install locations provided via the potentialRuntimes variable.
// The binaries are searched for at the locations specified by locationsToCheck.
// The given fs.FS should represent the filesystem root directory to search in.
func searchForRuntimes(root fs.FS, potentialRuntimes runtimeConfigs, locationsToCheck []string, foundRuntimes runtimeConfigs) {
// Check these locations in order. The GPU operator's installation should
// take precedence over the package manager's installation.
// It checks the PATH for the executables
func searchForRuntimes(potentialRuntimes runtimeConfigs, foundRuntimes runtimeConfigs) {
// Fill in the binary location with just the name of the binary,
// and check against each of the possible locations. If a match is found,
// set the location to the full path.
for runtimeName, runtimeConfig := range potentialRuntimes {
for _, location := range locationsToCheck {
binaryPath := filepath.Join(location, runtimeConfig.BinaryName)
logrus.Debugf("Searching for %s container runtime at /%s", runtimeName, binaryPath)
if info, err := fs.Stat(root, binaryPath); err == nil {
if info.IsDir() {
logrus.Debugf("Found %s container runtime at /%s, but it is a directory. Skipping.", runtimeName, binaryPath)
continue
}
runtimeConfig.BinaryName = filepath.Join("/", binaryPath)
logrus.Infof("Found %s container runtime at %s", runtimeName, runtimeConfig.BinaryName)
foundRuntimes[runtimeName] = runtimeConfig
break
logrus.Debugf("Searching for %s container runtime", runtimeName)
path, err := exec.LookPath(runtimeConfig.BinaryName)
if err != nil {
if errors.Is(err, fs.ErrNotExist) {
logrus.Debugf("%s container runtime not found in $PATH: %v", runtimeName, err)
} else {
if errors.Is(err, fs.ErrNotExist) {
logrus.Debugf("%s container runtime not found at /%s", runtimeName, binaryPath)
} else {
logrus.Errorf("Error searching for %s container runtime at /%s: %v", runtimeName, binaryPath, err)
}
logrus.Debugf("Error searching for %s in $PATH: %v", runtimeName, err)
}
continue
}
logrus.Infof("Found %s container runtime at %s", runtimeName, path)
runtimeConfig.BinaryName = path
foundRuntimes[runtimeName] = runtimeConfig
}
}
// findContainerRuntimes is a function that searches for all the runtimes and
// return a list with all the runtimes that have been found
func findContainerRuntimes(root fs.FS) runtimeConfigs {
func findContainerRuntimes() runtimeConfigs {
foundRuntimes := runtimeConfigs{}
findCRunContainerRuntime(root, foundRuntimes)
findNvidiaContainerRuntimes(root, foundRuntimes)
findWasiRuntimes(root, foundRuntimes)
findCRunContainerRuntime(foundRuntimes)
findNvidiaContainerRuntimes(foundRuntimes)
findWasiRuntimes(foundRuntimes)
return foundRuntimes
}
// findCRunContainerRuntime finds if crun is available in the system and adds to foundRuntimes
func findCRunContainerRuntime(root fs.FS, foundRuntimes runtimeConfigs) {
// Check these locations in order.
locationsToCheck := []string{
"usr/sbin", // Path when installing via package manager
"usr/bin", // Path when installing via package manager
}
// Fill in the binary location with just the name of the binary,
// and check against each of the possible locations. If a match is found,
// set the location to the full path.
func findCRunContainerRuntime(foundRuntimes runtimeConfigs) {
potentialRuntimes := runtimeConfigs{
"crun": {
RuntimeType: "io.containerd.runc.v2",
@ -79,25 +58,10 @@ func findCRunContainerRuntime(root fs.FS, foundRuntimes runtimeConfigs) {
},
}
searchForRuntimes(root, potentialRuntimes, locationsToCheck, foundRuntimes)
searchForRuntimes(potentialRuntimes, foundRuntimes)
}
// findNvidiaContainerRuntimes finds the nvidia runtimes that are available on the system
// and adds to foundRuntimes. It checks install locations used by the nvidia
// gpu operator and by system package managers. The gpu operator installation
// takes precedence over the system package manager installation.
// The given fs.FS should represent the filesystem root directory to search in.
func findNvidiaContainerRuntimes(root fs.FS, foundRuntimes runtimeConfigs) {
// Check these locations in order. The GPU operator's installation should
// take precedence over the package manager's installation.
locationsToCheck := []string{
"usr/local/nvidia/toolkit", // Path when installing via GPU Operator
"usr/bin", // Path when installing via package manager
}
// Fill in the binary location with just the name of the binary,
// and check against each of the possible locations. If a match is found,
// set the location to the full path.
func findNvidiaContainerRuntimes(foundRuntimes runtimeConfigs) {
potentialRuntimes := runtimeConfigs{
"nvidia": {
RuntimeType: "io.containerd.runc.v2",
@ -108,54 +72,41 @@ func findNvidiaContainerRuntimes(root fs.FS, foundRuntimes runtimeConfigs) {
BinaryName: "nvidia-container-runtime-experimental",
},
}
searchForRuntimes(root, potentialRuntimes, locationsToCheck, foundRuntimes)
searchForRuntimes(potentialRuntimes, foundRuntimes)
}
// findWasiRuntimes finds the WebAssembly (WASI) container runtimes that
// are available on the system and adds to foundRuntimes. It checks install locations used by the kwasm
// operator and by system package managers. The kwasm operator installation
// takes precedence over the system package manager installation.
// The given fs.FS should represent the filesystem root directory to search in.
func findWasiRuntimes(root fs.FS, foundRuntimes runtimeConfigs) {
// Check these locations in order.
locationsToCheck := []string{
"opt/kwasm/bin", // Path when installing via kwasm Operator
"usr/bin", // Path when installing via package manager
"usr/sbin", // Path when installing via package manager
}
// Fill in the binary location with just the name of the binary,
// and check against each of the possible locations. If a match is found,
// set the location to the full path.
func findWasiRuntimes(foundRuntimes runtimeConfigs) {
potentialRuntimes := runtimeConfigs{
"lunatic": {
RuntimeType: "io.containerd.lunatic.v2",
RuntimeType: "io.containerd.lunatic.v1",
BinaryName: "containerd-shim-lunatic-v1",
},
"slight": {
RuntimeType: "io.containerd.slight.v2",
RuntimeType: "io.containerd.slight.v1",
BinaryName: "containerd-shim-slight-v1",
},
"spin": {
RuntimeType: "io.containerd.spin.v2",
BinaryName: "containerd-shim-spin-v1",
BinaryName: "containerd-shim-spin-v2",
},
"wws": {
RuntimeType: "io.containerd.wws.v2",
RuntimeType: "io.containerd.wws.v1",
BinaryName: "containerd-shim-wws-v1",
},
"wasmedge": {
RuntimeType: "io.containerd.wasmedge.v2",
RuntimeType: "io.containerd.wasmedge.v1",
BinaryName: "containerd-shim-wasmedge-v1",
},
"wasmer": {
RuntimeType: "io.containerd.wasmer.v2",
RuntimeType: "io.containerd.wasmer.v1",
BinaryName: "containerd-shim-wasmer-v1",
},
"wasmtime": {
RuntimeType: "io.containerd.wasmtime.v2",
RuntimeType: "io.containerd.wasmtime.v1",
BinaryName: "containerd-shim-wasmtime-v1",
},
}
searchForRuntimes(root, potentialRuntimes, locationsToCheck, foundRuntimes)
searchForRuntimes(potentialRuntimes, foundRuntimes)
}

View File

@ -4,17 +4,15 @@
package containerd
import (
"io/fs"
"os"
"path/filepath"
"reflect"
"testing"
"testing/fstest"
)
func Test_UnitFindContainerRuntimes(t *testing.T) {
executable := &fstest.MapFile{Mode: 0755}
type args struct {
root fs.FS
exec []string
}
tests := []struct {
@ -24,39 +22,102 @@ func Test_UnitFindContainerRuntimes(t *testing.T) {
}{
{
name: "No runtimes",
args: args{
root: fstest.MapFS{},
},
args: args{},
want: runtimeConfigs{},
},
{
name: "Found crun, nvidia and wasm",
args: args{
root: fstest.MapFS{
"usr/bin/nvidia-container-runtime": executable,
"usr/bin/crun": executable,
"opt/kwasm/bin/containerd-shim-lunatic-v1": executable,
exec: []string{
"nvidia-container-runtime",
"crun",
"containerd-shim-lunatic-v1",
},
},
want: runtimeConfigs{
"nvidia": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "/usr/bin/nvidia-container-runtime",
BinaryName: "/tmp/testExecutables/nvidia-container-runtime",
},
"crun": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "/usr/bin/crun",
BinaryName: "/tmp/testExecutables/crun",
},
"lunatic": {
RuntimeType: "io.containerd.lunatic.v2",
BinaryName: "/opt/kwasm/bin/containerd-shim-lunatic-v1",
RuntimeType: "io.containerd.lunatic.v1",
BinaryName: "/tmp/testExecutables/containerd-shim-lunatic-v1",
},
},
},
{
name: "Found only wasm",
args: args{
exec: []string{
"containerd-shim-lunatic-v1",
"containerd-shim-wasmtime-v1",
"containerd-shim-lunatic-v1",
"containerd-shim-slight-v1",
"containerd-shim-spin-v2",
"containerd-shim-wws-v1",
"containerd-shim-wasmedge-v1",
"containerd-shim-wasmer-v1",
},
},
want: runtimeConfigs{
"wasmtime": {
RuntimeType: "io.containerd.wasmtime.v1",
BinaryName: "/tmp/testExecutables/containerd-shim-wasmtime-v1",
},
"lunatic": {
RuntimeType: "io.containerd.lunatic.v1",
BinaryName: "/tmp/testExecutables/containerd-shim-lunatic-v1",
},
"slight": {
RuntimeType: "io.containerd.slight.v1",
BinaryName: "/tmp/testExecutables/containerd-shim-slight-v1",
},
"spin": {
RuntimeType: "io.containerd.spin.v2",
BinaryName: "/tmp/testExecutables/containerd-shim-spin-v2",
},
"wws": {
RuntimeType: "io.containerd.wws.v1",
BinaryName: "/tmp/testExecutables/containerd-shim-wws-v1",
},
"wasmedge": {
RuntimeType: "io.containerd.wasmedge.v1",
BinaryName: "/tmp/testExecutables/containerd-shim-wasmedge-v1",
},
"wasmer": {
RuntimeType: "io.containerd.wasmer.v1",
BinaryName: "/tmp/testExecutables/containerd-shim-wasmer-v1",
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
foundRuntimes := findContainerRuntimes(tt.args.root)
tempDirPath := filepath.Join(os.TempDir(), "testExecutables")
err := os.Mkdir(tempDirPath, 0755)
if err != nil {
t.Errorf("Error creating directory: %v", err)
}
defer os.RemoveAll(tempDirPath)
for _, execName := range tt.args.exec {
execPath := filepath.Join(tempDirPath, execName)
if err := createExec(execPath); err != nil {
t.Errorf("Failed to create executable %s: %v", execPath, err)
}
}
originalPath := os.Getenv("PATH")
os.Setenv("PATH", tempDirPath)
defer os.Setenv("PATH", originalPath)
foundRuntimes := findContainerRuntimes()
if !reflect.DeepEqual(foundRuntimes, tt.want) {
t.Errorf("findContainerRuntimes = %+v\nWant = %+v", foundRuntimes, tt.want)
}
@ -64,733 +125,14 @@ func Test_UnitFindContainerRuntimes(t *testing.T) {
}
}
func Test_UnitSearchContainerRuntimes(t *testing.T) {
executable := &fstest.MapFile{Mode: 0755}
locationsToCheck := []string{
"usr/local/nvidia/toolkit", // Path for nvidia shim when installing via GPU Operator
"opt/kwasm/bin", // Path for wasm shim when installing via the kwasm operator
"usr/bin", // Path when installing via package manager
"usr/sbin", // Path when installing via package manager
func createExec(path string) error {
if err := os.WriteFile(path, []byte{}, 0755); err != nil {
return err
}
potentialRuntimes := runtimeConfigs{
"nvidia": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "nvidia-container-runtime",
},
"spin": {
RuntimeType: "io.containerd.spin.v2",
BinaryName: "containerd-shim-spin-v1",
},
if err := os.Chmod(path, 0755); err != nil {
return err
}
type args struct {
root fs.FS
potentialRuntimes runtimeConfigs
locationsToCheck []string
}
tests := []struct {
name string
args args
want runtimeConfigs
}{
{
name: "No runtimes",
args: args{
root: fstest.MapFS{},
locationsToCheck: locationsToCheck,
potentialRuntimes: potentialRuntimes,
},
want: runtimeConfigs{},
},
{
name: "Nvidia runtime in /usr/bin",
args: args{
root: fstest.MapFS{
"usr/bin/nvidia-container-runtime": executable,
},
locationsToCheck: locationsToCheck,
potentialRuntimes: potentialRuntimes,
},
want: runtimeConfigs{
"nvidia": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "/usr/bin/nvidia-container-runtime",
},
},
},
{
name: "Two runtimes in separate directories",
args: args{
root: fstest.MapFS{
"usr/bin/nvidia-container-runtime": executable,
"opt/kwasm/bin/containerd-shim-spin-v1": executable,
},
locationsToCheck: locationsToCheck,
potentialRuntimes: potentialRuntimes,
},
want: runtimeConfigs{
"nvidia": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "/usr/bin/nvidia-container-runtime",
},
"spin": {
RuntimeType: "io.containerd.spin.v2",
BinaryName: "/opt/kwasm/bin/containerd-shim-spin-v1",
},
},
},
{
name: "Same runtime in two directories",
args: args{
root: fstest.MapFS{
"usr/bin/containerd-shim-spin-v1": executable,
"opt/kwasm/bin/containerd-shim-spin-v1": executable,
},
locationsToCheck: locationsToCheck,
potentialRuntimes: potentialRuntimes,
},
want: runtimeConfigs{
"spin": {
RuntimeType: "io.containerd.spin.v2",
BinaryName: "/opt/kwasm/bin/containerd-shim-spin-v1",
},
},
},
{
name: "Both runtimes in /usr/bin",
args: args{
root: fstest.MapFS{
"usr/bin/containerd-shim-spin-v1": executable,
"usr/bin/nvidia-container-runtime": executable,
},
locationsToCheck: locationsToCheck,
potentialRuntimes: potentialRuntimes,
},
want: runtimeConfigs{
"nvidia": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "/usr/bin/nvidia-container-runtime",
},
"spin": {
RuntimeType: "io.containerd.spin.v2",
BinaryName: "/usr/bin/containerd-shim-spin-v1",
},
},
},
{
name: "Both runtimes in both directories",
args: args{
root: fstest.MapFS{
"usr/local/nvidia/toolkit/nvidia-container-runtime": executable,
"usr/bin/nvidia-container-runtime": executable,
"usr/bin/containerd-shim-spin-v1": executable,
"opt/kwasm/bin/containerd-shim-spin-v1": executable,
},
locationsToCheck: locationsToCheck,
potentialRuntimes: potentialRuntimes,
},
want: runtimeConfigs{
"nvidia": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "/usr/local/nvidia/toolkit/nvidia-container-runtime",
},
"spin": {
RuntimeType: "io.containerd.spin.v2",
BinaryName: "/opt/kwasm/bin/containerd-shim-spin-v1",
},
},
},
{
name: "Both runtimes in /usr/bin and one duplicate in /usr/local/nvidia/toolkit",
args: args{
root: fstest.MapFS{
"usr/bin/nvidia-container-runtime": executable,
"usr/bin/containerd-shim-spin-v1": executable,
"usr/local/nvidia/toolkit/nvidia-container-runtime": executable,
},
locationsToCheck: locationsToCheck,
potentialRuntimes: potentialRuntimes,
},
want: runtimeConfigs{
"spin": {
RuntimeType: "io.containerd.spin.v2",
BinaryName: "/usr/bin/containerd-shim-spin-v1",
},
"nvidia": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "/usr/local/nvidia/toolkit/nvidia-container-runtime",
},
},
},
{
name: "Runtime is a directory",
args: args{
root: fstest.MapFS{
"usr/bin/nvidia-container-runtime": &fstest.MapFile{
Mode: fs.ModeDir,
},
},
locationsToCheck: locationsToCheck,
potentialRuntimes: potentialRuntimes,
},
want: runtimeConfigs{},
},
{
name: "Runtime in both directories, but one is a directory",
args: args{
root: fstest.MapFS{
"usr/bin/nvidia-container-runtime": executable,
"usr/local/nvidia/toolkit/nvidia-container-runtime": &fstest.MapFile{
Mode: fs.ModeDir,
},
},
locationsToCheck: locationsToCheck,
potentialRuntimes: potentialRuntimes,
},
want: runtimeConfigs{
"nvidia": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "/usr/bin/nvidia-container-runtime",
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
foundRuntimes := runtimeConfigs{}
searchForRuntimes(tt.args.root, tt.args.potentialRuntimes, tt.args.locationsToCheck, foundRuntimes)
if !reflect.DeepEqual(foundRuntimes, tt.want) {
t.Errorf("findContainerRuntimes() = %+v\nWant = %+v", foundRuntimes, tt.want)
}
})
}
}
func Test_UnitSearchWasiRuntimes(t *testing.T) {
executable := &fstest.MapFile{Mode: 0755}
locationsToCheck := []string{
"usr/local/nvidia/toolkit", // Path for nvidia shim when installing via GPU Operator
"opt/kwasm/bin", // Path for wasm shim when installing via the kwasm operator
"usr/bin", // Path when installing via package manager
"usr/sbin", // Path when installing via package manager
}
potentialRuntimes := runtimeConfigs{
"wasmtime": {
RuntimeType: "io.containerd.wasmtime.v2",
BinaryName: "containerd-shim-wasmtime-v1",
},
"lunatic": {
RuntimeType: "io.containerd.lunatic.v2",
BinaryName: "containerd-shim-lunatic-v1",
},
"slight": {
RuntimeType: "io.containerd.slight.v2",
BinaryName: "containerd-shim-slight-v1",
},
"spin": {
RuntimeType: "io.containerd.spin.v2",
BinaryName: "containerd-shim-spin-v1",
},
"wws": {
RuntimeType: "io.containerd.wws.v2",
BinaryName: "containerd-shim-wws-v1",
},
"wasmedge": {
RuntimeType: "io.containerd.wasmedge.v2",
BinaryName: "containerd-shim-wasmedge-v1",
},
"wasmer": {
RuntimeType: "io.containerd.wasmer.v2",
BinaryName: "containerd-shim-wasmer-v1",
},
"nvidia": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "nvidia-container-runtime",
},
}
type args struct {
root fs.FS
potentialRuntimes runtimeConfigs
locationsToCheck []string
}
tests := []struct {
name string
args args
want runtimeConfigs
}{
{
name: "No runtimes",
args: args{
root: fstest.MapFS{},
locationsToCheck: locationsToCheck,
potentialRuntimes: potentialRuntimes,
},
want: runtimeConfigs{},
},
{
name: "wasmtime runtime in /usr/sbin",
args: args{
root: fstest.MapFS{
"usr/sbin/containerd-shim-wasmtime-v1": executable,
},
locationsToCheck: locationsToCheck,
potentialRuntimes: potentialRuntimes,
},
want: runtimeConfigs{
"wasmtime": {
RuntimeType: "io.containerd.wasmtime.v2",
BinaryName: "/usr/sbin/containerd-shim-wasmtime-v1",
},
},
},
{
name: "lunatic runtime in /opt/kwasm/bin/",
args: args{
root: fstest.MapFS{
"opt/kwasm/bin/containerd-shim-lunatic-v1": executable,
},
locationsToCheck: locationsToCheck,
potentialRuntimes: potentialRuntimes,
},
want: runtimeConfigs{
"lunatic": {
RuntimeType: "io.containerd.lunatic.v2",
BinaryName: "/opt/kwasm/bin/containerd-shim-lunatic-v1",
},
},
},
{
name: "Two runtimes in separate directories",
args: args{
root: fstest.MapFS{
"usr/bin/containerd-shim-wasmer-v1": executable,
"opt/kwasm/bin/containerd-shim-slight-v1": executable,
},
locationsToCheck: locationsToCheck,
potentialRuntimes: potentialRuntimes,
},
want: runtimeConfigs{
"slight": {
RuntimeType: "io.containerd.slight.v2",
BinaryName: "/opt/kwasm/bin/containerd-shim-slight-v1",
},
"wasmer": {
RuntimeType: "io.containerd.wasmer.v2",
BinaryName: "/usr/bin/containerd-shim-wasmer-v1",
},
},
},
{
name: "Same runtime in two directories",
args: args{
root: fstest.MapFS{
"usr/bin/containerd-shim-wasmedge-v1": executable,
"opt/kwasm/bin/containerd-shim-wasmedge-v1": executable,
},
locationsToCheck: locationsToCheck,
potentialRuntimes: potentialRuntimes,
},
want: runtimeConfigs{
"wasmedge": {
RuntimeType: "io.containerd.wasmedge.v2",
BinaryName: "/opt/kwasm/bin/containerd-shim-wasmedge-v1",
},
},
},
{
name: "All runtimes in /usr/bin",
args: args{
root: fstest.MapFS{
"usr/bin/containerd-shim-lunatic-v1": executable,
"usr/bin/containerd-shim-slight-v1": executable,
"usr/bin/containerd-shim-spin-v1": executable,
"usr/bin/containerd-shim-wws-v1": executable,
"usr/bin/containerd-shim-wasmedge-v1": executable,
"usr/bin/containerd-shim-wasmer-v1": executable,
"usr/bin/containerd-shim-wasmtime-v1": executable,
},
locationsToCheck: locationsToCheck,
potentialRuntimes: potentialRuntimes,
},
want: runtimeConfigs{
"lunatic": {
RuntimeType: "io.containerd.lunatic.v2",
BinaryName: "/usr/bin/containerd-shim-lunatic-v1",
},
"slight": {
RuntimeType: "io.containerd.slight.v2",
BinaryName: "/usr/bin/containerd-shim-slight-v1",
},
"spin": {
RuntimeType: "io.containerd.spin.v2",
BinaryName: "/usr/bin/containerd-shim-spin-v1",
},
"wws": {
RuntimeType: "io.containerd.wws.v2",
BinaryName: "/usr/bin/containerd-shim-wws-v1",
},
"wasmedge": {
RuntimeType: "io.containerd.wasmedge.v2",
BinaryName: "/usr/bin/containerd-shim-wasmedge-v1",
},
"wasmer": {
RuntimeType: "io.containerd.wasmer.v2",
BinaryName: "/usr/bin/containerd-shim-wasmer-v1",
},
"wasmtime": {
RuntimeType: "io.containerd.wasmtime.v2",
BinaryName: "/usr/bin/containerd-shim-wasmtime-v1",
},
},
},
{
name: "Both runtimes in both directories",
args: args{
root: fstest.MapFS{
"opt/kwasm/bin/containerd-shim-slight-v1": executable,
"opt/kwasm/bin/containerd-shim-wasmtime-v1": executable,
"usr/bin/containerd-shim-slight-v1": executable,
"usr/bin/containerd-shim-wasmtime-v1": executable,
},
locationsToCheck: locationsToCheck,
potentialRuntimes: potentialRuntimes,
},
want: runtimeConfigs{
"slight": {
RuntimeType: "io.containerd.slight.v2",
BinaryName: "/opt/kwasm/bin/containerd-shim-slight-v1",
},
"wasmtime": {
RuntimeType: "io.containerd.wasmtime.v2",
BinaryName: "/opt/kwasm/bin/containerd-shim-wasmtime-v1",
},
},
},
{
name: "Preserve already found runtimes",
args: args{
root: fstest.MapFS{
"opt/kwasm/bin/containerd-shim-wasmtime-v1": executable,
"usr/bin/nvidia-container-runtime": executable,
},
locationsToCheck: locationsToCheck,
potentialRuntimes: potentialRuntimes,
},
want: runtimeConfigs{
"nvidia": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "/usr/bin/nvidia-container-runtime",
},
"wasmtime": {
RuntimeType: "io.containerd.wasmtime.v2",
BinaryName: "/opt/kwasm/bin/containerd-shim-wasmtime-v1",
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
foundRuntimes := runtimeConfigs{}
searchForRuntimes(tt.args.root, tt.args.potentialRuntimes, tt.args.locationsToCheck, foundRuntimes)
if !reflect.DeepEqual(foundRuntimes, tt.want) {
t.Errorf("searchForRuntimes = %+v\nWant = %+v", foundRuntimes, tt.want)
}
})
}
}
func Test_UnitSearchNvidiaContainerRuntimes(t *testing.T) {
executable := &fstest.MapFile{Mode: 0755}
locationsToCheck := []string{
"usr/local/nvidia/toolkit", // Path for nvidia shim when installing via GPU Operator
"opt/kwasm/bin", // Path for wasm shim when installing via the kwasm operator
"usr/bin", // Path when installing via package manager
"usr/sbin", // Path when installing via package manager
}
potentialRuntimes := runtimeConfigs{
"nvidia": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "nvidia-container-runtime",
},
"nvidia-experimental": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "nvidia-container-runtime-experimental",
},
"slight": {
RuntimeType: "io.containerd.slight.v2",
BinaryName: "containerd-shim-slight-v1",
},
"wasmtime": {
RuntimeType: "io.containerd.wasmtime.v2",
BinaryName: "containerd-shim-wasmtime-v1",
},
}
type args struct {
root fs.FS
potentialRuntimes runtimeConfigs
locationsToCheck []string
}
tests := []struct {
name string
args args
want runtimeConfigs
}{
{
name: "No runtimes",
args: args{
root: fstest.MapFS{},
potentialRuntimes: potentialRuntimes,
locationsToCheck: locationsToCheck,
},
want: runtimeConfigs{},
},
{
name: "Nvidia runtime in /usr/bin",
args: args{
root: fstest.MapFS{
"usr/bin/nvidia-container-runtime": executable,
},
potentialRuntimes: potentialRuntimes,
locationsToCheck: locationsToCheck,
},
want: runtimeConfigs{
"nvidia": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "/usr/bin/nvidia-container-runtime",
},
},
},
{
name: "Experimental runtime in /usr/local/nvidia/toolkit",
args: args{
root: fstest.MapFS{
"usr/local/nvidia/toolkit/nvidia-container-runtime": executable,
},
potentialRuntimes: potentialRuntimes,
locationsToCheck: locationsToCheck,
},
want: runtimeConfigs{
"nvidia": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "/usr/local/nvidia/toolkit/nvidia-container-runtime",
},
},
},
{
name: "Two runtimes in separate directories",
args: args{
root: fstest.MapFS{
"usr/bin/nvidia-container-runtime": executable,
"usr/local/nvidia/toolkit/nvidia-container-runtime": executable,
},
potentialRuntimes: potentialRuntimes,
locationsToCheck: locationsToCheck,
},
want: runtimeConfigs{
"nvidia": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "/usr/local/nvidia/toolkit/nvidia-container-runtime",
},
},
},
{
name: "Experimental runtime in /usr/bin",
args: args{
root: fstest.MapFS{
"usr/bin/nvidia-container-runtime-experimental": executable,
},
potentialRuntimes: potentialRuntimes,
locationsToCheck: locationsToCheck,
},
want: runtimeConfigs{
"nvidia-experimental": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "/usr/bin/nvidia-container-runtime-experimental",
},
},
},
{
name: "Same runtime in two directories",
args: args{
root: fstest.MapFS{
"usr/bin/nvidia-container-runtime-experimental": executable,
"usr/local/nvidia/toolkit/nvidia-container-runtime-experimental": executable,
},
potentialRuntimes: potentialRuntimes,
locationsToCheck: locationsToCheck,
},
want: runtimeConfigs{
"nvidia-experimental": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "/usr/local/nvidia/toolkit/nvidia-container-runtime-experimental",
},
},
},
{
name: "Both runtimes in /usr/bin",
args: args{
root: fstest.MapFS{
"usr/bin/nvidia-container-runtime-experimental": executable,
"usr/bin/nvidia-container-runtime": executable,
},
potentialRuntimes: potentialRuntimes,
locationsToCheck: locationsToCheck,
},
want: runtimeConfigs{
"nvidia": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "/usr/bin/nvidia-container-runtime",
},
"nvidia-experimental": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "/usr/bin/nvidia-container-runtime-experimental",
},
},
},
{
name: "Both runtimes in both directories",
args: args{
root: fstest.MapFS{
"usr/local/nvidia/toolkit/nvidia-container-runtime": executable,
"usr/local/nvidia/toolkit/nvidia-container-runtime-experimental": executable,
"usr/bin/nvidia-container-runtime": executable,
"usr/bin/nvidia-container-runtime-experimental": executable,
},
potentialRuntimes: potentialRuntimes,
locationsToCheck: locationsToCheck,
},
want: runtimeConfigs{
"nvidia": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "/usr/local/nvidia/toolkit/nvidia-container-runtime",
},
"nvidia-experimental": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "/usr/local/nvidia/toolkit/nvidia-container-runtime-experimental",
},
},
},
{
name: "Both runtimes in /usr/local/nvidia/toolkit",
args: args{
root: fstest.MapFS{
"usr/local/nvidia/toolkit/nvidia-container-runtime": executable,
"usr/local/nvidia/toolkit/nvidia-container-runtime-experimental": executable,
},
potentialRuntimes: potentialRuntimes,
locationsToCheck: locationsToCheck,
},
want: runtimeConfigs{
"nvidia": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "/usr/local/nvidia/toolkit/nvidia-container-runtime",
},
"nvidia-experimental": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "/usr/local/nvidia/toolkit/nvidia-container-runtime-experimental",
},
},
},
{
name: "Both runtimes in /usr/bin and one duplicate in /usr/local/nvidia/toolkit",
args: args{
root: fstest.MapFS{
"usr/bin/nvidia-container-runtime": executable,
"usr/bin/nvidia-container-runtime-experimental": executable,
"usr/local/nvidia/toolkit/nvidia-container-runtime-experimental": executable,
},
potentialRuntimes: potentialRuntimes,
locationsToCheck: locationsToCheck,
},
want: runtimeConfigs{
"nvidia": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "/usr/bin/nvidia-container-runtime",
},
"nvidia-experimental": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "/usr/local/nvidia/toolkit/nvidia-container-runtime-experimental",
},
},
},
{
name: "Runtime is a directory",
args: args{
root: fstest.MapFS{
"usr/bin/nvidia-container-runtime": &fstest.MapFile{
Mode: fs.ModeDir,
},
},
potentialRuntimes: potentialRuntimes,
locationsToCheck: locationsToCheck,
},
want: runtimeConfigs{},
},
{
name: "Runtime in both directories, but one is a directory",
args: args{
root: fstest.MapFS{
"usr/bin/nvidia-container-runtime": executable,
"usr/local/nvidia/toolkit/nvidia-container-runtime": &fstest.MapFile{
Mode: fs.ModeDir,
},
},
potentialRuntimes: potentialRuntimes,
locationsToCheck: locationsToCheck,
},
want: runtimeConfigs{
"nvidia": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "/usr/bin/nvidia-container-runtime",
},
},
},
{
name: "Preserve already found runtimes",
args: args{
root: fstest.MapFS{
"usr/bin/nvidia-container-runtime": executable,
"opt/kwasm/bin/containerd-shim-wasmtime-v1": executable,
"opt/kwasm/bin/containerd-shim-slight-v1": executable,
"usr/local/nvidia/toolkit/nvidia-container-runtime": &fstest.MapFile{
Mode: fs.ModeDir,
},
},
potentialRuntimes: potentialRuntimes,
locationsToCheck: locationsToCheck,
},
want: runtimeConfigs{
"slight": {
RuntimeType: "io.containerd.slight.v2",
BinaryName: "/opt/kwasm/bin/containerd-shim-slight-v1",
},
"wasmtime": {
RuntimeType: "io.containerd.wasmtime.v2",
BinaryName: "/opt/kwasm/bin/containerd-shim-wasmtime-v1",
},
"nvidia": {
RuntimeType: "io.containerd.runc.v2",
BinaryName: "/usr/bin/nvidia-container-runtime",
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
foundRuntimes := runtimeConfigs{}
searchForRuntimes(tt.args.root, tt.args.potentialRuntimes, tt.args.locationsToCheck, foundRuntimes)
if !reflect.DeepEqual(foundRuntimes, tt.want) {
t.Errorf("searchForRuntimes() = %+v\nWant = %+v", foundRuntimes, tt.want)
}
})
}
return nil
}
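One detail worth calling out in createExec: the explicit Chmod after WriteFile is not redundant. The mode passed to os.WriteFile is filtered through the process umask at file creation, so a 0755 request can land as 0644 on a host with a restrictive umask; os.Chmod then sets the bits exactly:

_ = os.WriteFile(path, nil, 0o755) // creation mode is masked by the umask
_ = os.Chmod(path, 0o755)          // sets the exact mode; the umask does not apply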

View File

@ -5,6 +5,7 @@ package cridockerd
import (
"context"
"errors"
"os"
"runtime/debug"
"strings"
@ -37,7 +38,12 @@ func Run(ctx context.Context, cfg *config.Node) error {
logrus.WithField("stack", string(debug.Stack())).Fatalf("cri-dockerd panic: %v", err)
}
}()
logrus.Fatalf("cri-dockerd exited: %v", command.ExecuteContext(ctx))
err := command.ExecuteContext(ctx)
if err != nil && !errors.Is(err, context.Canceled) {
logrus.Errorf("cri-dockerd exited: %v", err)
os.Exit(1)
}
os.Exit(0)
}()
return cri.WaitForService(ctx, cfg.CRIDockerd.Address, "cri-dockerd")
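Both the containerd and cri-dockerd supervisors now share the same shutdown convention: a child process that stops because the context was cancelled counts as a clean exit, while any other error is fatal. The pattern, distilled (runChild is an illustrative stand-in for cmd.Run or command.ExecuteContext):

go func() {
	err := runChild(ctx)
	if err != nil && !errors.Is(err, context.Canceled) {
		logrus.Errorf("child process exited: %v", err)
		os.Exit(1)
	}
	os.Exit(0)
}()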
@ -47,6 +53,7 @@ func getDockerCRIArgs(cfg *config.Node) []string {
argsMap := map[string]string{
"container-runtime-endpoint": cfg.CRIDockerd.Address,
"cri-dockerd-root-directory": cfg.CRIDockerd.Root,
"streaming-bind-addr": "127.0.0.1:10010",
}
if dualNode, _ := utilsnet.IsDualStackIPs(cfg.AgentConfig.NodeIPs); dualNode {

View File

@ -23,8 +23,9 @@ import (
"github.com/flannel-io/flannel/pkg/backend"
"github.com/flannel-io/flannel/pkg/ip"
"github.com/flannel-io/flannel/pkg/iptables"
"github.com/flannel-io/flannel/pkg/subnet/kube"
"github.com/flannel-io/flannel/pkg/trafficmngr/iptables"
"github.com/joho/godotenv"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/net/context"
@ -47,7 +48,7 @@ var (
FlannelExternalIPv6Annotation = FlannelBaseAnnotation + "/public-ipv6-overwrite"
)
func flannel(ctx context.Context, flannelIface *net.Interface, flannelConf, kubeConfigFile string, flannelIPv6Masq bool, multiClusterCIDR bool, netMode int) error {
func flannel(ctx context.Context, flannelIface *net.Interface, flannelConf, kubeConfigFile string, flannelIPv6Masq bool, netMode int) error {
extIface, err := LookupExtInterface(flannelIface, netMode)
if err != nil {
return errors.Wrap(err, "failed to find the interface")
@ -58,8 +59,7 @@ func flannel(ctx context.Context, flannelIface *net.Interface, flannelConf, kube
kubeConfigFile,
FlannelBaseAnnotation,
flannelConf,
false,
multiClusterCIDR)
false)
if err != nil {
return errors.Wrap(err, "failed to create the SubnetManager")
}
@ -81,49 +81,36 @@ func flannel(ctx context.Context, flannelIface *net.Interface, flannelConf, kube
if err != nil {
return errors.Wrap(err, "failed to register flannel network")
}
trafficMngr := &iptables.IPTablesManager{}
err = trafficMngr.Init(ctx, &sync.WaitGroup{})
if err != nil {
return errors.Wrap(err, "failed to initialize flannel ipTables manager")
}
if netMode == (ipv4+ipv6) || netMode == ipv4 {
net, err := config.GetFlannelNetwork(&bn.Lease().Subnet)
if err != nil {
return errors.Wrap(err, "failed to get flannel network details")
if config.Network.Empty() {
return errors.New("ipv4 mode requested but no ipv4 network provided")
}
iptables.CreateIP4Chain("nat", "FLANNEL-POSTRTG")
iptables.CreateIP4Chain("filter", "FLANNEL-FWD")
getMasqRules := func() []iptables.IPTablesRule {
if config.HasNetworks() {
return iptables.MasqRules(config.Networks, bn.Lease())
}
return iptables.MasqRules([]ip.IP4Net{config.Network}, bn.Lease())
}
getFwdRules := func() []iptables.IPTablesRule {
return iptables.ForwardRules(net.String())
}
go iptables.SetupAndEnsureIP4Tables(getMasqRules, 60)
go iptables.SetupAndEnsureIP4Tables(getFwdRules, 50)
}
if config.IPv6Network.String() != emptyIPv6Network {
ip6net, err := config.GetFlannelIPv6Network(&bn.Lease().IPv6Subnet)
if err != nil {
return errors.Wrap(err, "failed to get ipv6 flannel network details")
}
if flannelIPv6Masq {
logrus.Debugf("Creating IPv6 masquerading iptables rules for %s network", config.IPv6Network.String())
iptables.CreateIP6Chain("nat", "FLANNEL-POSTRTG")
getRules := func() []iptables.IPTablesRule {
if config.HasIPv6Networks() {
return iptables.MasqIP6Rules(config.IPv6Networks, bn.Lease())
}
return iptables.MasqIP6Rules([]ip.IP6Net{config.IPv6Network}, bn.Lease())
}
go iptables.SetupAndEnsureIP6Tables(getRules, 60)
}
iptables.CreateIP6Chain("filter", "FLANNEL-FWD")
getRules := func() []iptables.IPTablesRule {
return iptables.ForwardRules(ip6net.String())
}
go iptables.SetupAndEnsureIP6Tables(getRules, 50)
// setup masq rules
prevNetwork := ReadCIDRFromSubnetFile(subnetFile, "FLANNEL_NETWORK")
prevSubnet := ReadCIDRFromSubnetFile(subnetFile, "FLANNEL_SUBNET")
prevIPv6Network := ReadIP6CIDRFromSubnetFile(subnetFile, "FLANNEL_IPV6_NETWORK")
prevIPv6Subnet := ReadIP6CIDRFromSubnetFile(subnetFile, "FLANNEL_IPV6_SUBNET")
if flannelIPv6Masq {
err = trafficMngr.SetupAndEnsureMasqRules(ctx, config.Network, prevSubnet, prevNetwork, config.IPv6Network, prevIPv6Subnet, prevIPv6Network, bn.Lease(), 60)
} else {
// set empty flannel IPv6 network to prevent masquerading
err = trafficMngr.SetupAndEnsureMasqRules(ctx, config.Network, prevSubnet, prevNetwork, ip.IP6Net{}, prevIPv6Subnet, prevIPv6Network, bn.Lease(), 60)
}
if err != nil {
return errors.Wrap(err, "failed to setup masq rules")
}
// setup forward rules
trafficMngr.SetupAndEnsureForwardRules(ctx, config.Network, config.IPv6Network, 50)
if err := WriteSubnetFile(subnetFile, config.Network, config.IPv6Network, true, bn, netMode); err != nil {
// Continue, even though it failed.
@ -238,3 +225,37 @@ func WriteSubnetFile(path string, nw ip.IP4Net, nwv6 ip.IP6Net, ipMasq bool, bn
return os.Rename(tempFile, path)
// TODO: is this safe? What if it's not on the same FS?
}
// ReadCIDRFromSubnetFile reads the flannel subnet file and extracts the value of IPv4 network CIDRKey
func ReadCIDRFromSubnetFile(path string, CIDRKey string) ip.IP4Net {
var prevCIDR ip.IP4Net
if _, err := os.Stat(path); !os.IsNotExist(err) {
prevSubnetVals, err := godotenv.Read(path)
if err != nil {
logrus.Errorf("Couldn't fetch previous %s from subnet file at %s: %v", CIDRKey, path, err)
} else if prevCIDRString, ok := prevSubnetVals[CIDRKey]; ok {
err = prevCIDR.UnmarshalJSON([]byte(prevCIDRString))
if err != nil {
logrus.Errorf("Couldn't parse previous %s from subnet file at %s: %v", CIDRKey, path, err)
}
}
}
return prevCIDR
}
// ReadIP6CIDRFromSubnetFile reads the flannel subnet file and extracts the value of IPv6 network CIDRKey
func ReadIP6CIDRFromSubnetFile(path string, CIDRKey string) ip.IP6Net {
var prevCIDR ip.IP6Net
if _, err := os.Stat(path); !os.IsNotExist(err) {
prevSubnetVals, err := godotenv.Read(path)
if err != nil {
logrus.Errorf("Couldn't fetch previous %s from subnet file at %s: %v", CIDRKey, path, err)
} else if prevCIDRString, ok := prevSubnetVals[CIDRKey]; ok {
err = prevCIDR.UnmarshalJSON([]byte(prevCIDRString))
if err != nil {
logrus.Errorf("Couldn't parse previous %s from subnet file at %s: %v", CIDRKey, path, err)
}
}
}
return prevCIDR
}
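Both helpers read the KEY=value subnet file that WriteSubnetFile produces, which is why godotenv handles the parsing, and they pass the raw CIDR string straight to UnmarshalJSON (flannel's IP4Net/IP6Net unmarshalers accept unquoted CIDRs). A sketch of the round trip; the file contents below are illustrative defaults:

package main

import (
	"fmt"
	"os"

	"github.com/flannel-io/flannel/pkg/ip"
	"github.com/joho/godotenv"
)

func main() {
	// A typical flannel subnet file, as written by WriteSubnetFile.
	contents := "FLANNEL_NETWORK=10.42.0.0/16\nFLANNEL_SUBNET=10.42.0.1/24\nFLANNEL_IPMASQ=true\n"
	if err := os.WriteFile("/tmp/subnet.env", []byte(contents), 0644); err != nil {
		panic(err)
	}

	vals, err := godotenv.Read("/tmp/subnet.env")
	if err != nil {
		panic(err)
	}

	// The unmarshaler tolerates a bare CIDR, so the env value can be
	// passed through unchanged, exactly as ReadCIDRFromSubnetFile does.
	var network ip.IP4Net
	if err := network.UnmarshalJSON([]byte(vals["FLANNEL_NETWORK"])); err != nil {
		panic(err)
	}
	fmt.Println(network.String()) // 10.42.0.0/16
}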

View File

@ -4,6 +4,7 @@ import (
"context"
"fmt"
"net"
"os"
"path/filepath"
goruntime "runtime"
"strings"
@ -74,10 +75,12 @@ func Run(ctx context.Context, nodeConfig *config.Node, nodes typedcorev1.NodeInt
return errors.Wrap(err, "failed to check netMode for flannel")
}
go func() {
err := flannel(ctx, nodeConfig.FlannelIface, nodeConfig.FlannelConfFile, nodeConfig.AgentConfig.KubeConfigKubelet, nodeConfig.FlannelIPv6Masq, nodeConfig.MultiClusterCIDR, netMode)
err := flannel(ctx, nodeConfig.FlannelIface, nodeConfig.FlannelConfFile, nodeConfig.AgentConfig.KubeConfigKubelet, nodeConfig.FlannelIPv6Masq, netMode)
if err != nil && !errors.Is(err, context.Canceled) {
logrus.Fatalf("flannel exited: %v", err)
logrus.Errorf("flannel exited: %v", err)
os.Exit(1)
}
os.Exit(0)
}()
return nil

110
pkg/agent/https/https.go Normal file
View File

@ -0,0 +1,110 @@
package https
import (
"context"
"net/http"
"strconv"
"sync"
"github.com/gorilla/mux"
"github.com/k3s-io/k3s/pkg/daemons/config"
"github.com/k3s-io/k3s/pkg/generated/clientset/versioned/scheme"
"github.com/k3s-io/k3s/pkg/util"
"github.com/k3s-io/k3s/pkg/version"
"k8s.io/apiserver/pkg/authentication/authenticator"
"k8s.io/apiserver/pkg/authorization/authorizer"
genericapifilters "k8s.io/apiserver/pkg/endpoints/filters"
apirequest "k8s.io/apiserver/pkg/endpoints/request"
"k8s.io/apiserver/pkg/server"
"k8s.io/apiserver/pkg/server/options"
)
// RouterFunc provides a hook for components to register additional routes to a request router
type RouterFunc func(ctx context.Context, nodeConfig *config.Node) (*mux.Router, error)
var once sync.Once
var router *mux.Router
var err error
// Start returns a router with authn/authz filters applied.
// The first time it is called, the router is created, and a new HTTPS listener is started if the runtime is nil.
// Subsequent calls will return the same router.
func Start(ctx context.Context, nodeConfig *config.Node, runtime *config.ControlRuntime) (*mux.Router, error) {
once.Do(func() {
router = mux.NewRouter().SkipClean(true)
config := server.Config{}
if runtime == nil {
// If we do not have an existing handler, set up a new listener
tcp, lerr := util.ListenWithLoopback(ctx, nodeConfig.AgentConfig.ListenAddress, strconv.Itoa(nodeConfig.ServerHTTPSPort))
if lerr != nil {
err = lerr
return
}
serving := options.NewSecureServingOptions()
serving.Listener = tcp
serving.CipherSuites = nodeConfig.AgentConfig.CipherSuites
serving.MinTLSVersion = nodeConfig.AgentConfig.MinTLSVersion
serving.ServerCert = options.GeneratableKeyCert{
CertKey: options.CertKey{
CertFile: nodeConfig.AgentConfig.ServingKubeletCert,
KeyFile: nodeConfig.AgentConfig.ServingKubeletKey,
},
}
if aerr := serving.ApplyTo(&config.SecureServing); aerr != nil {
err = aerr
return
}
} else {
// If we have an existing handler, wrap it
router.NotFoundHandler = runtime.Handler
runtime.Handler = router
}
authn := options.NewDelegatingAuthenticationOptions()
authn.DisableAnonymous = true
authn.SkipInClusterLookup = true
authn.ClientCert = options.ClientCertAuthenticationOptions{
ClientCA: nodeConfig.AgentConfig.ClientCA,
}
authn.RemoteKubeConfigFile = nodeConfig.AgentConfig.KubeConfigKubelet
if applyErr := authn.ApplyTo(&config.Authentication, config.SecureServing, nil); applyErr != nil {
err = applyErr
return
}
authz := options.NewDelegatingAuthorizationOptions()
authz.AlwaysAllowPaths = []string{ // skip authz for paths that should not use SubjectAccessReview; basically everything that will use this router other than metrics
"/v1-" + version.Program + "/p2p", // spegel libp2p peer discovery
"/v2/*", // spegel registry mirror
"/debug/pprof/*", // profiling
}
authz.RemoteKubeConfigFile = nodeConfig.AgentConfig.KubeConfigKubelet
if applyErr := authz.ApplyTo(&config.Authorization); applyErr != nil {
err = applyErr
return
}
router.Use(filterChain(config.Authentication.Authenticator, config.Authorization.Authorizer))
if config.SecureServing != nil {
_, _, err = config.SecureServing.Serve(router, 0, ctx.Done())
}
})
return router, err
}
// filterChain runs the kubernetes authn/authz filter chain using the mux middleware API
func filterChain(authn authenticator.Request, authz authorizer.Authorizer) mux.MiddlewareFunc {
return func(handler http.Handler) http.Handler {
requestInfoResolver := &apirequest.RequestInfoFactory{}
failedHandler := genericapifilters.Unauthorized(scheme.Codecs)
handler = genericapifilters.WithAuthorization(handler, authz, scheme.Codecs)
handler = genericapifilters.WithAuthentication(handler, authn, failedHandler, nil, nil)
handler = genericapifilters.WithRequestInfo(handler, requestInfoResolver)
handler = genericapifilters.WithCacheControl(handler)
return handler
}
}
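Consumers of this package follow the RouterFunc shape: call Start once to obtain the shared authenticated router, then hang routes off it. A hedged sketch of such a component, assuming a populated nodeConfig; the /metrics path and handler body here are illustrative:

package component // illustrative package

import (
	"context"
	"net/http"

	"github.com/gorilla/mux"
	"github.com/k3s-io/k3s/pkg/agent/https"
	"github.com/k3s-io/k3s/pkg/daemons/config"
)

// router registers a handler on the shared authenticated router. Passing a
// nil runtime makes Start bring up its own HTTPS listener; requests reaching
// the handler have already passed the authn/authz filter chain.
func router(ctx context.Context, nodeConfig *config.Node) (*mux.Router, error) {
	r, err := https.Start(ctx, nodeConfig, nil)
	if err != nil {
		return nil, err
	}
	r.HandleFunc("/metrics", func(w http.ResponseWriter, req *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	return r, nil
}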

View File

@ -16,7 +16,10 @@ import (
// server tracks the connections to a server, so that they can be closed when the server is removed.
type server struct {
// This mutex protects access to the connections map. All direct access to the map should be protected by it.
mutex sync.Mutex
address string
healthCheck func() bool
connections map[net.Conn]struct{}
}
@ -31,7 +34,9 @@ type serverConn struct {
// actually balance connections, but instead fails over to a new server only
// when a connection attempt to the currently selected server fails.
type LoadBalancer struct {
mutex sync.Mutex
// This mutex protects access to the servers map and randomServers list.
// All direct access to the servers map/list should be protected by it.
mutex sync.RWMutex
proxy *tcpproxy.Proxy
serviceName string
@ -123,28 +128,11 @@ func New(ctx context.Context, dataDir, serviceName, serverURL string, lbServerPo
}
logrus.Infof("Running load balancer %s %s -> %v [default: %s]", serviceName, lb.localAddress, lb.ServerAddresses, lb.defaultServerAddress)
go lb.runHealthChecks(ctx)
return lb, nil
}
func (lb *LoadBalancer) SetDefault(serverAddress string) {
lb.mutex.Lock()
defer lb.mutex.Unlock()
_, hasOriginalServer := sortServers(lb.ServerAddresses, lb.defaultServerAddress)
// if the old default server is not currently in use, remove it from the server map
if server := lb.servers[lb.defaultServerAddress]; server != nil && !hasOriginalServer {
defer server.closeAll()
delete(lb.servers, lb.defaultServerAddress)
}
// if the new default server doesn't have an entry in the map, add one
if _, ok := lb.servers[serverAddress]; !ok {
lb.servers[serverAddress] = &server{connections: make(map[net.Conn]struct{})}
}
lb.defaultServerAddress = serverAddress
logrus.Infof("Updated load balancer %s default server address -> %s", lb.serviceName, serverAddress)
}
func (lb *LoadBalancer) Update(serverAddresses []string) {
if lb == nil {
return
@ -166,7 +154,11 @@ func (lb *LoadBalancer) LoadBalancerServerURL() string {
return lb.localServerURL
}
func (lb *LoadBalancer) dialContext(ctx context.Context, network, address string) (net.Conn, error) {
func (lb *LoadBalancer) dialContext(ctx context.Context, network, _ string) (net.Conn, error) {
lb.mutex.RLock()
defer lb.mutex.RUnlock()
var allChecksFailed bool
startIndex := lb.nextServerIndex
for {
targetServer := lb.currentServerAddress
@ -174,7 +166,7 @@ func (lb *LoadBalancer) dialContext(ctx context.Context, network, address string
server := lb.servers[targetServer]
if server == nil || targetServer == "" {
logrus.Debugf("Nil server for load balancer %s: %s", lb.serviceName, targetServer)
} else {
} else if allChecksFailed || server.healthCheck() {
conn, err := server.dialContext(ctx, network, targetServer)
if err == nil {
return conn, nil
@ -198,7 +190,11 @@ func (lb *LoadBalancer) dialContext(ctx context.Context, network, address string
startIndex = maxIndex
}
if lb.nextServerIndex == startIndex {
return nil, errors.New("all servers failed")
if allChecksFailed {
return nil, errors.New("all servers failed")
}
logrus.Debugf("Health checks for all servers in load balancer %s have failed: retrying with health checks ignored", lb.serviceName)
allChecksFailed = true
}
}
}
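Stripped of locking and the ring-index bookkeeping, the dial loop above is a two-pass scan: honor health checks on the first pass, and only if that yields no connection, retry the whole list with checks ignored so a sole flapping server can still be reached. A simplified stdlib illustration of the control flow, not the k3s implementation:

package main

import (
	"errors"
	"fmt"
)

type server struct {
	address     string
	healthCheck func() bool
	dial        func() error
}

// dialFirstAvailable mirrors the two-pass strategy: prefer servers that pass
// their health check, but fall back to ignoring checks rather than failing
// outright when every server looks unhealthy.
func dialFirstAvailable(servers []server) (string, error) {
	for _, ignoreChecks := range []bool{false, true} {
		for _, s := range servers {
			if !ignoreChecks && !s.healthCheck() {
				continue
			}
			if err := s.dial(); err == nil {
				return s.address, nil
			}
		}
	}
	return "", errors.New("all servers failed")
}

func main() {
	servers := []server{
		{"10.0.0.1:6443", func() bool { return false }, func() error { return nil }},
		{"10.0.0.2:6443", func() bool { return false }, func() error { return errors.New("refused") }},
	}
	addr, err := dialFirstAvailable(servers)
	fmt.Println(addr, err) // 10.0.0.1:6443 <nil> — reached only on the second pass
}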

View File

@ -2,15 +2,61 @@ package loadbalancer
import (
"context"
"errors"
"fmt"
"math/rand"
"net"
"net/url"
"os"
"strconv"
"time"
"github.com/k3s-io/k3s/pkg/version"
http_dialer "github.com/mwitkow/go-http-dialer"
"github.com/pkg/errors"
"golang.org/x/net/http/httpproxy"
"golang.org/x/net/proxy"
"github.com/sirupsen/logrus"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/apimachinery/pkg/util/wait"
)
var defaultDialer = &net.Dialer{}
var defaultDialer proxy.Dialer = &net.Dialer{}
// SetHTTPProxy configures a proxy-enabled dialer to be used for all loadbalancer connections,
// if the agent has been configured to allow use of an HTTP proxy, and the environment has been configured
// to indicate use of an HTTP proxy for the server URL.
func SetHTTPProxy(address string) error {
// Check if env variable for proxy is set
if useProxy, _ := strconv.ParseBool(os.Getenv(version.ProgramUpper + "_AGENT_HTTP_PROXY_ALLOWED")); !useProxy || address == "" {
return nil
}
serverURL, err := url.Parse(address)
if err != nil {
return errors.Wrapf(err, "failed to parse address %s", address)
}
// Call this directly instead of using the cached environment used by http.ProxyFromEnvironment to allow for testing
proxyFromEnvironment := httpproxy.FromEnvironment().ProxyFunc()
proxyURL, err := proxyFromEnvironment(serverURL)
if err != nil {
return errors.Wrapf(err, "failed to get proxy for address %s", address)
}
if proxyURL == nil {
logrus.Debug(version.ProgramUpper + "_AGENT_HTTP_PROXY_ALLOWED is true but no proxy is configured for URL " + serverURL.String())
return nil
}
dialer, err := proxyDialer(proxyURL)
if err != nil {
return errors.Wrapf(err, "failed to create proxy dialer for %s", proxyURL)
}
defaultDialer = dialer
logrus.Debugf("Using proxy %s for agent connection to %s", proxyURL, serverURL)
return nil
}
func (lb *LoadBalancer) setServers(serverAddresses []string) bool {
serverAddresses, hasOriginalServer := sortServers(serverAddresses, lb.defaultServerAddress)
@ -29,7 +75,11 @@ func (lb *LoadBalancer) setServers(serverAddresses []string) bool {
for addedServer := range newAddresses.Difference(curAddresses) {
logrus.Infof("Adding server to load balancer %s: %s", lb.serviceName, addedServer)
lb.servers[addedServer] = &server{connections: make(map[net.Conn]struct{})}
lb.servers[addedServer] = &server{
address: addedServer,
connections: make(map[net.Conn]struct{}),
healthCheck: func() bool { return true },
}
}
for removedServer := range curAddresses.Difference(newAddresses) {
@ -62,8 +112,8 @@ func (lb *LoadBalancer) setServers(serverAddresses []string) bool {
}
func (lb *LoadBalancer) nextServer(failedServer string) (string, error) {
lb.mutex.Lock()
defer lb.mutex.Unlock()
lb.mutex.RLock()
defer lb.mutex.RUnlock()
if len(lb.randomServers) == 0 {
return "", errors.New("No servers in load balancer proxy list")
@ -84,20 +134,33 @@ func (lb *LoadBalancer) nextServer(failedServer string) (string, error) {
return lb.currentServerAddress, nil
}
// dialContext dials a new connection, and adds its wrapped connection to the map
// dialContext dials a new connection using the environment's proxy settings, and adds its wrapped connection to the map
func (s *server) dialContext(ctx context.Context, network, address string) (net.Conn, error) {
conn, err := defaultDialer.DialContext(ctx, network, address)
conn, err := defaultDialer.Dial(network, address)
if err != nil {
return nil, err
}
// don't lock until adding the connection to the map, otherwise we may block
// while waiting for the dial to time out
// Wrap the connection and add it to the server's connection map
s.mutex.Lock()
defer s.mutex.Unlock()
conn = &serverConn{server: s, Conn: conn}
s.connections[conn] = struct{}{}
return conn, nil
wrappedConn := &serverConn{server: s, Conn: conn}
s.connections[wrappedConn] = struct{}{}
return wrappedConn, nil
}
// proxyDialer creates a new proxy.Dialer that routes connections through the specified proxy.
func proxyDialer(proxyURL *url.URL) (proxy.Dialer, error) {
if proxyURL.Scheme == "http" || proxyURL.Scheme == "https" {
// Create a new HTTP proxy dialer
httpProxyDialer := http_dialer.New(proxyURL)
return httpProxyDialer, nil
} else if proxyURL.Scheme == "socks5" {
// For SOCKS5 proxies, use the proxy package's FromURL
return proxy.FromURL(proxyURL, proxy.Direct)
}
return nil, fmt.Errorf("unsupported proxy scheme: %s", proxyURL.Scheme)
}
// closeAll closes all connections to the server, and removes their entries from the map
@ -105,10 +168,12 @@ func (s *server) closeAll() {
s.mutex.Lock()
defer s.mutex.Unlock()
logrus.Debugf("Closing %d connections to load balancer server", len(s.connections))
for conn := range s.connections {
// Close the connection in a goroutine so that we don't hold the lock while doing so.
go conn.Close()
if l := len(s.connections); l > 0 {
logrus.Infof("Closing %d connections to load balancer server %s", len(s.connections), s.address)
for conn := range s.connections {
// Close the connection in a goroutine so that we don't hold the lock while doing so.
go conn.Close()
}
}
}
@ -121,3 +186,61 @@ func (sc *serverConn) Close() error {
delete(sc.server.connections, sc)
return sc.Conn.Close()
}
// SetDefault sets the selected address as the default / fallback address
func (lb *LoadBalancer) SetDefault(serverAddress string) {
lb.mutex.Lock()
defer lb.mutex.Unlock()
_, hasOriginalServer := sortServers(lb.ServerAddresses, lb.defaultServerAddress)
// if the old default server is not currently in use, remove it from the server map
if server := lb.servers[lb.defaultServerAddress]; server != nil && !hasOriginalServer {
defer server.closeAll()
delete(lb.servers, lb.defaultServerAddress)
}
// if the new default server doesn't have an entry in the map, add one
if _, ok := lb.servers[serverAddress]; !ok {
lb.servers[serverAddress] = &server{
address: serverAddress,
healthCheck: func() bool { return true },
connections: make(map[net.Conn]struct{}),
}
}
lb.defaultServerAddress = serverAddress
logrus.Infof("Updated load balancer %s default server address -> %s", lb.serviceName, serverAddress)
}
// SetHealthCheck adds a health-check callback to an address, replacing the default no-op function.
func (lb *LoadBalancer) SetHealthCheck(address string, healthCheck func() bool) {
lb.mutex.Lock()
defer lb.mutex.Unlock()
if server := lb.servers[address]; server != nil {
logrus.Debugf("Added health check for load balancer %s: %s", lb.serviceName, address)
server.healthCheck = healthCheck
} else {
logrus.Errorf("Failed to add health check for load balancer %s: no server found for %s", lb.serviceName, address)
}
}
// runHealthChecks periodically health-checks all servers. Any servers that fail the health-check will have their
// connections closed, to force clients to switch over to a healthy server.
func (lb *LoadBalancer) runHealthChecks(ctx context.Context) {
previousStatus := map[string]bool{}
wait.Until(func() {
lb.mutex.RLock()
defer lb.mutex.RUnlock()
for address, server := range lb.servers {
status := server.healthCheck()
if !status && previousStatus[address] {
// Only close connections when the server transitions from healthy to unhealthy;
// we don't want to re-close all the connections every time as we might be ignoring
// health checks due to all servers being marked unhealthy.
defer server.closeAll()
}
previousStatus[address] = status
}
}, time.Second, ctx.Done())
logrus.Debugf("Stopped health checking for load balancer %s", lb.serviceName)
}
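runHealthChecks invokes every registered callback once per second while holding the read lock, so callbacks must answer quickly from cached state. In the agent the callback reports websocket tunnel connectivity (see the tunnel diff below); purely as a sketch of the contract, here is an assumed TCP-probe variant that keeps the expensive dial off the hot path:

package main

import (
	"net"
	"sync/atomic"
	"time"
)

// tcpHealthCheck returns a callback usable with LoadBalancer.SetHealthCheck.
// The callback only reads cached state; a background goroutine refreshes it,
// so the once-per-second health-check loop never blocks on a slow dial.
func tcpHealthCheck(address string) func() bool {
	var healthy atomic.Bool
	healthy.Store(true) // assume healthy until a probe fails
	go func() {
		for range time.Tick(5 * time.Second) {
			conn, err := net.DialTimeout("tcp", address, 2*time.Second)
			if err == nil {
				conn.Close()
			}
			healthy.Store(err == nil)
		}
	}()
	return func() bool { return healthy.Load() }
}

func main() {
	check := tcpHealthCheck("127.0.0.1:6443")
	_ = check() // e.g. lb.SetHealthCheck(address, check)
}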

View File

@ -0,0 +1,156 @@
package loadbalancer
import (
"fmt"
"net"
"os"
"strings"
"testing"
"github.com/k3s-io/k3s/pkg/version"
"github.com/sirupsen/logrus"
)
var defaultEnv map[string]string
var proxyEnvs = []string{version.ProgramUpper + "_AGENT_HTTP_PROXY_ALLOWED", "HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY", "http_proxy", "https_proxy", "no_proxy"}
func init() {
logrus.SetLevel(logrus.DebugLevel)
}
func prepareEnv(env ...string) {
defaultDialer = &net.Dialer{}
defaultEnv = map[string]string{}
for _, e := range proxyEnvs {
if v, ok := os.LookupEnv(e); ok {
defaultEnv[e] = v
os.Unsetenv(e)
}
}
for _, e := range env {
k, v, _ := strings.Cut(e, "=")
os.Setenv(k, v)
}
}
func restoreEnv() {
for _, e := range proxyEnvs {
if v, ok := defaultEnv[e]; ok {
os.Setenv(e, v)
} else {
os.Unsetenv(e)
}
}
}
func Test_UnitSetHTTPProxy(t *testing.T) {
type args struct {
address string
}
tests := []struct {
name string
args args
setup func() error
teardown func() error
wantErr bool
wantDialer string
}{
{
name: "Default Proxy",
args: args{address: "https://1.2.3.4:6443"},
wantDialer: "*net.Dialer",
setup: func() error {
prepareEnv(version.ProgramUpper+"_AGENT_HTTP_PROXY_ALLOWED=", "HTTP_PROXY=", "HTTPS_PROXY=", "NO_PROXY=")
return nil
},
teardown: func() error {
restoreEnv()
return nil
},
},
{
name: "Agent Proxy Enabled",
args: args{address: "https://1.2.3.4:6443"},
wantDialer: "*http_dialer.HttpTunnel",
setup: func() error {
prepareEnv(version.ProgramUpper+"_AGENT_HTTP_PROXY_ALLOWED=true", "HTTP_PROXY=http://proxy:8080", "HTTPS_PROXY=http://proxy:8080", "NO_PROXY=")
return nil
},
teardown: func() error {
restoreEnv()
return nil
},
},
{
name: "Agent Proxy Enabled with Bogus Proxy",
args: args{address: "https://1.2.3.4:6443"},
wantDialer: "*net.Dialer",
wantErr: true,
setup: func() error {
prepareEnv(version.ProgramUpper+"_AGENT_HTTP_PROXY_ALLOWED=true", "HTTP_PROXY=proxy proxy", "HTTPS_PROXY=proxy proxy", "NO_PROXY=")
return nil
},
teardown: func() error {
restoreEnv()
return nil
},
},
{
name: "Agent Proxy Enabled with Bogus Server",
args: args{address: "https://1.2.3.4:k3s"},
wantDialer: "*net.Dialer",
wantErr: true,
setup: func() error {
prepareEnv(version.ProgramUpper+"_AGENT_HTTP_PROXY_ALLOWED=true", "HTTP_PROXY=http://proxy:8080", "HTTPS_PROXY=http://proxy:8080", "NO_PROXY=")
return nil
},
teardown: func() error {
restoreEnv()
return nil
},
},
{
name: "Agent Proxy Enabled but IP Excluded",
args: args{address: "https://1.2.3.4:6443"},
wantDialer: "*net.Dialer",
setup: func() error {
prepareEnv(version.ProgramUpper+"_AGENT_HTTP_PROXY_ALLOWED=true", "HTTP_PROXY=http://proxy:8080", "HTTPS_PROXY=http://proxy:8080", "NO_PROXY=1.2.0.0/16")
return nil
},
teardown: func() error {
restoreEnv()
return nil
},
},
{
name: "Agent Proxy Enabled but Domain Excluded",
args: args{address: "https://server.example.com:6443"},
wantDialer: "*net.Dialer",
setup: func() error {
prepareEnv(version.ProgramUpper+"_AGENT_HTTP_PROXY_ALLOWED=true", "HTTP_PROXY=http://proxy:8080", "HTTPS_PROXY=http://proxy:8080", "NO_PROXY=*.example.com")
return nil
},
teardown: func() error {
restoreEnv()
return nil
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
defer tt.teardown()
if err := tt.setup(); err != nil {
t.Errorf("Setup for SetHTTPProxy() failed = %v", err)
return
}
err := SetHTTPProxy(tt.args.address)
t.Logf("SetHTTPProxy() error = %v", err)
if (err != nil) != tt.wantErr {
t.Errorf("SetHTTPProxy() error = %v, wantErr %v", err, tt.wantErr)
}
if dialerType := fmt.Sprintf("%T", defaultDialer); dialerType != tt.wantDialer {
t.Errorf("Got wrong dialer type %s, wanted %s", dialerType, tt.wantDialer)
}
})
}
}

View File

@ -11,15 +11,21 @@ import (
"runtime"
"strings"
"sync"
"time"
"github.com/cloudnativelabs/kube-router/v2/pkg/version"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/wait"
cloudproviderapi "k8s.io/cloud-provider/api"
"github.com/cloudnativelabs/kube-router/v2/pkg/controllers/netpol"
"github.com/cloudnativelabs/kube-router/v2/pkg/healthcheck"
krmetrics "github.com/cloudnativelabs/kube-router/v2/pkg/metrics"
"github.com/cloudnativelabs/kube-router/v2/pkg/options"
"github.com/cloudnativelabs/kube-router/v2/pkg/utils"
"github.com/cloudnativelabs/kube-router/v2/pkg/version"
"github.com/coreos/go-iptables/iptables"
"github.com/k3s-io/k3s/pkg/daemons/config"
"github.com/k3s-io/k3s/pkg/metrics"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
v1core "k8s.io/api/core/v1"
@ -28,6 +34,12 @@ import (
"k8s.io/client-go/tools/clientcmd"
)
func init() {
// ensure that kube-router exposes metrics through the same registry used by Kubernetes components
krmetrics.DefaultRegisterer = metrics.DefaultRegisterer
krmetrics.DefaultGatherer = metrics.DefaultGatherer
}
// Run creates and starts a new instance of the kube-router network policy controller
// The code in this function is cribbed from the upstream controller at:
// https://github.com/cloudnativelabs/kube-router/blob/ee9f6d890d10609284098229fa1e283ab5d83b93/pkg/cmd/kube-router.go#L78
@ -55,6 +67,28 @@ func Run(ctx context.Context, nodeConfig *config.Node) error {
return err
}
// kube-router netpol requires addresses to be available in the node object.
// Wait until the uninitialized taint has been removed, at which point the addresses should be set.
// TODO: Replace with the non-deprecated PollUntilContextTimeout once our code and Kubernetes have migrated to it
if err := wait.PollImmediateInfiniteWithContext(ctx, 2*time.Second, func(ctx context.Context) (bool, error) {
// Get the node object
node, err := client.CoreV1().Nodes().Get(ctx, nodeConfig.AgentConfig.NodeName, metav1.GetOptions{})
if err != nil {
logrus.Infof("Network policy controller waiting to get Node %s: %v", nodeConfig.AgentConfig.NodeName, err)
return false, nil
}
// Check for the taint that should be removed by cloud-provider when the node has been initialized.
for _, taint := range node.Spec.Taints {
if taint.Key == cloudproviderapi.TaintExternalCloudProvider {
logrus.Infof("Network policy controller waiting for removal of %s taint", cloudproviderapi.TaintExternalCloudProvider)
return false, nil
}
}
return true, nil
}); err != nil {
return errors.Wrapf(err, "network policy controller failed to wait for %s taint to be removed from Node %s", cloudproviderapi.TaintExternalCloudProvider, nodeConfig.AgentConfig.NodeName)
}
krConfig := options.NewKubeRouterConfig()
var serviceIPs []string
for _, elem := range nodeConfig.AgentConfig.ServiceCIDRs {
@ -65,7 +99,7 @@ func Run(ctx context.Context, nodeConfig *config.Node) error {
krConfig.EnableIPv6 = nodeConfig.AgentConfig.EnableIPv6
krConfig.NodePortRange = strings.ReplaceAll(nodeConfig.AgentConfig.ServiceNodePortRange.String(), "-", ":")
krConfig.HostnameOverride = nodeConfig.AgentConfig.NodeName
krConfig.MetricsEnabled = false
krConfig.MetricsEnabled = true
krConfig.RunFirewall = true
krConfig.RunRouter = false
krConfig.RunServiceProxy = false
@ -114,22 +148,31 @@ func Run(ctx context.Context, nodeConfig *config.Node) error {
ipSetHandlers[v1core.IPv6Protocol] = ipset
}
// Start kube-router healthcheck server. Netpol requires it
// Start kube-router healthcheck controller; netpol requires it
hc, err := healthcheck.NewHealthController(krConfig)
if err != nil {
return err
}
// Initialize all healthcheck timers. Otherwise, the system reports incorrect heartbeat missing messages
// Start kube-router metrics controller to avoid complaints about metrics heartbeat missing
mc, err := krmetrics.NewMetricsController(krConfig)
if err != nil {
return err
}
// Initialize all healthcheck timers. Otherwise, the system reports spurious missing-heartbeat messages
hc.SetAlive()
wg.Add(1)
go hc.RunCheck(healthCh, stopCh, &wg)
wg.Add(1)
go metricsRunCheck(mc, healthCh, stopCh, &wg)
npc, err := netpol.NewNetworkPolicyController(client, krConfig, podInformer, npInformer, nsInformer, &sync.Mutex{},
iptablesCmdHandlers, ipSetHandlers)
if err != nil {
return errors.Wrap(err, "unable to initialize Network Policy Controller")
return errors.Wrap(err, "unable to initialize network policy controller")
}
podInformer.AddEventHandler(npc.PodEventHandler)
@ -137,8 +180,29 @@ func Run(ctx context.Context, nodeConfig *config.Node) error {
npInformer.AddEventHandler(npc.NetworkPolicyEventHandler)
wg.Add(1)
logrus.Infof("Starting the netpol controller version %s, built on %s, %s", version.Version, version.BuildDate, runtime.Version())
logrus.Infof("Starting network policy controller version %s, built on %s, %s", version.Version, version.BuildDate, runtime.Version())
go npc.Run(healthCh, stopCh, &wg)
return nil
}
// metricsRunCheck is a stub version of mc.Run() that doesn't start up a dedicated http server.
func metricsRunCheck(mc *krmetrics.Controller, healthChan chan<- *healthcheck.ControllerHeartbeat, stopCh <-chan struct{}, wg *sync.WaitGroup) {
t := time.NewTicker(3 * time.Second)
defer wg.Done()
// register metrics for this controller
krmetrics.BuildInfo.WithLabelValues(runtime.Version(), version.Version).Set(1)
krmetrics.DefaultRegisterer.MustRegister(krmetrics.BuildInfo)
for {
healthcheck.SendHeartBeat(healthChan, "MC")
select {
case <-stopCh:
t.Stop()
return
case <-t.C:
logrus.Debugf("Kube-router network policy controller metrics tick")
}
}
}

View File

@ -2,6 +2,7 @@ package proxy
import (
"context"
"net"
sysnet "net"
"net/url"
"strconv"
@ -14,13 +15,14 @@ import (
type Proxy interface {
Update(addresses []string)
SetAPIServerPort(ctx context.Context, port int, isIPv6 bool) error
SetAPIServerPort(port int, isIPv6 bool) error
SetSupervisorDefault(address string)
IsSupervisorLBEnabled() bool
SupervisorURL() string
SupervisorAddresses() []string
APIServerURL() string
IsAPIServerLBEnabled() bool
SetHealthCheck(address string, healthCheck func() bool)
}
// NewSupervisorProxy sets up a new proxy for retrieving supervisor and apiserver addresses. If
@ -38,9 +40,13 @@ func NewSupervisorProxy(ctx context.Context, lbEnabled bool, dataDir, supervisor
supervisorURL: supervisorURL,
apiServerURL: supervisorURL,
lbServerPort: lbServerPort,
context: ctx,
}
if lbEnabled {
if err := loadbalancer.SetHTTPProxy(supervisorURL); err != nil {
return nil, err
}
lb, err := loadbalancer.New(ctx, dataDir, loadbalancer.SupervisorServiceName, supervisorURL, p.lbServerPort, isIPv6)
if err != nil {
return nil, err
@ -67,6 +73,7 @@ type proxy struct {
apiServerEnabled bool
apiServerURL string
apiServerPort string
supervisorURL string
supervisorPort string
initialSupervisorURL string
@ -75,6 +82,7 @@ type proxy struct {
apiServerLB *loadbalancer.LoadBalancer
supervisorLB *loadbalancer.LoadBalancer
context context.Context
}
func (p *proxy) Update(addresses []string) {
@ -93,6 +101,18 @@ func (p *proxy) Update(addresses []string) {
p.supervisorAddresses = supervisorAddresses
}
func (p *proxy) SetHealthCheck(address string, healthCheck func() bool) {
if p.supervisorLB != nil {
p.supervisorLB.SetHealthCheck(address, healthCheck)
}
if p.apiServerLB != nil {
host, _, _ := net.SplitHostPort(address)
address = net.JoinHostPort(host, p.apiServerPort)
p.apiServerLB.SetHealthCheck(address, healthCheck)
}
}
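One callback thus covers both load balancers: the supervisor LB is keyed by the supervisor address, while the apiserver LB is keyed by the same host with the apiserver port, hence the SplitHostPort/JoinHostPort rewrite above. A tiny illustration using the usual k3s defaults (supervisor 9345, apiserver 6443):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Rewrite a supervisor address to its apiserver counterpart on the
	// same host, mirroring what SetHealthCheck does before re-registering
	// the callback with the apiserver load balancer.
	host, _, _ := net.SplitHostPort("10.0.0.1:9345")
	fmt.Println(net.JoinHostPort(host, "6443")) // 10.0.0.1:6443
}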
func (p *proxy) setSupervisorPort(addresses []string) []string {
var newAddresses []string
for _, address := range addresses {
@ -111,12 +131,13 @@ func (p *proxy) setSupervisorPort(addresses []string) []string {
// load-balancing is enabled, another load-balancer is started on a port one below the supervisor
// load-balancer, and the address of this load-balancer is returned instead of the actual apiserver
// addresses.
func (p *proxy) SetAPIServerPort(ctx context.Context, port int, isIPv6 bool) error {
func (p *proxy) SetAPIServerPort(port int, isIPv6 bool) error {
u, err := url.Parse(p.initialSupervisorURL)
if err != nil {
return errors.Wrapf(err, "failed to parse server URL %s", p.initialSupervisorURL)
}
u.Host = sysnet.JoinHostPort(u.Hostname(), strconv.Itoa(port))
p.apiServerPort = strconv.Itoa(port)
u.Host = sysnet.JoinHostPort(u.Hostname(), p.apiServerPort)
p.apiServerURL = u.String()
p.apiServerEnabled = true
@ -126,7 +147,7 @@ func (p *proxy) SetAPIServerPort(ctx context.Context, port int, isIPv6 bool) err
if lbServerPort != 0 {
lbServerPort = lbServerPort - 1
}
lb, err := loadbalancer.New(ctx, p.dataDir, loadbalancer.APIServerServiceName, p.apiServerURL, lbServerPort, isIPv6)
lb, err := loadbalancer.New(p.context, p.dataDir, loadbalancer.APIServerServiceName, p.apiServerURL, lbServerPort, isIPv6)
if err != nil {
return err
}

View File

@ -11,15 +11,15 @@ import (
"strings"
"time"
systemd "github.com/coreos/go-systemd/daemon"
systemd "github.com/coreos/go-systemd/v22/daemon"
"github.com/k3s-io/k3s/pkg/agent/config"
"github.com/k3s-io/k3s/pkg/agent/containerd"
"github.com/k3s-io/k3s/pkg/agent/cridockerd"
"github.com/k3s-io/k3s/pkg/agent/flannel"
"github.com/k3s-io/k3s/pkg/agent/netpol"
"github.com/k3s-io/k3s/pkg/agent/proxy"
"github.com/k3s-io/k3s/pkg/agent/syssetup"
"github.com/k3s-io/k3s/pkg/agent/tunnel"
"github.com/k3s-io/k3s/pkg/certmonitor"
"github.com/k3s-io/k3s/pkg/cgroups"
"github.com/k3s-io/k3s/pkg/cli/cmds"
"github.com/k3s-io/k3s/pkg/clientaccess"
@ -27,8 +27,11 @@ import (
"github.com/k3s-io/k3s/pkg/daemons/agent"
daemonconfig "github.com/k3s-io/k3s/pkg/daemons/config"
"github.com/k3s-io/k3s/pkg/daemons/executor"
"github.com/k3s-io/k3s/pkg/metrics"
"github.com/k3s-io/k3s/pkg/nodeconfig"
"github.com/k3s-io/k3s/pkg/profile"
"github.com/k3s-io/k3s/pkg/rootless"
"github.com/k3s-io/k3s/pkg/spegel"
"github.com/k3s-io/k3s/pkg/util"
"github.com/k3s-io/k3s/pkg/version"
"github.com/pkg/errors"
@ -43,14 +46,19 @@ import (
typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
"k8s.io/client-go/tools/cache"
toolswatch "k8s.io/client-go/tools/watch"
"k8s.io/component-base/cli/globalflag"
"k8s.io/component-base/logs"
app2 "k8s.io/kubernetes/cmd/kube-proxy/app"
kubeproxyconfig "k8s.io/kubernetes/pkg/proxy/apis/config"
utilsnet "k8s.io/utils/net"
utilpointer "k8s.io/utils/pointer"
utilsptr "k8s.io/utils/ptr"
)
func run(ctx context.Context, cfg cmds.Agent, proxy proxy.Proxy) error {
nodeConfig := config.Get(ctx, cfg, proxy)
nodeConfig, err := config.Get(ctx, cfg, proxy)
if err != nil {
return errors.Wrap(err, "failed to retrieve agent configuration")
}
dualCluster, err := utilsnet.IsDualStackCIDRs(nodeConfig.AgentConfig.ClusterCIDRs)
if err != nil {
@ -97,6 +105,28 @@ func run(ctx context.Context, cfg cmds.Agent, proxy proxy.Proxy) error {
nodeConfig.AgentConfig.EnableIPv4 = enableIPv4
nodeConfig.AgentConfig.EnableIPv6 = enableIPv6
if nodeConfig.EmbeddedRegistry {
if nodeConfig.Docker || nodeConfig.ContainerRuntimeEndpoint != "" {
return errors.New("embedded registry mirror requires embedded containerd")
}
if err := spegel.DefaultRegistry.Start(ctx, nodeConfig); err != nil {
return errors.Wrap(err, "failed to start embedded registry")
}
}
if nodeConfig.SupervisorMetrics {
if err := metrics.DefaultMetrics.Start(ctx, nodeConfig); err != nil {
return errors.Wrap(err, "failed to serve metrics")
}
}
if nodeConfig.EnablePProf {
if err := profile.DefaultProfiler.Start(ctx, nodeConfig); err != nil {
return errors.Wrap(err, "failed to serve pprof")
}
}
if err := setupCriCtlConfig(cfg, nodeConfig); err != nil {
return err
}
@ -117,21 +147,23 @@ func run(ctx context.Context, cfg cmds.Agent, proxy proxy.Proxy) error {
}
if nodeConfig.Docker {
if err := cridockerd.Run(ctx, nodeConfig); err != nil {
if err := executor.Docker(ctx, nodeConfig); err != nil {
return err
}
} else if nodeConfig.ContainerRuntimeEndpoint == "" {
if err := containerd.Run(ctx, nodeConfig); err != nil {
if err := containerd.SetupContainerdConfig(nodeConfig); err != nil {
return err
}
if err := executor.Containerd(ctx, nodeConfig); err != nil {
return err
}
}
// the agent runtime is ready to host workloads when containerd is up and the airgap
// the container runtime is ready to host workloads when containerd is up and the airgap
// images have finished loading, as that portion of startup may block for an arbitrary
// amount of time depending on how long it takes to import whatever the user has placed
// in the images directory.
if cfg.AgentReady != nil {
close(cfg.AgentReady)
if cfg.ContainerRuntimeReady != nil {
close(cfg.ContainerRuntimeReady)
}
notifySocket := os.Getenv("NOTIFY_SOCKET")
@ -184,8 +216,8 @@ func run(ctx context.Context, cfg cmds.Agent, proxy proxy.Proxy) error {
// When running rootless, we do not attempt to set conntrack sysctls - this behavior is copied from kubeadm.
func getConntrackConfig(nodeConfig *daemonconfig.Node) (*kubeproxyconfig.KubeProxyConntrackConfiguration, error) {
ctConfig := &kubeproxyconfig.KubeProxyConntrackConfiguration{
MaxPerCore: utilpointer.Int32Ptr(0),
Min: utilpointer.Int32Ptr(0),
MaxPerCore: utilsptr.To(int32(0)),
Min: utilsptr.To(int32(0)),
TCPEstablishedTimeout: &metav1.Duration{},
TCPCloseWaitTimeout: &metav1.Duration{},
}
@ -195,6 +227,7 @@ func getConntrackConfig(nodeConfig *daemonconfig.Node) (*kubeproxyconfig.KubePro
}
cmd := app2.NewProxyCommand()
globalflag.AddGlobalFlags(cmd.Flags(), cmd.Name(), logs.SkipLoggingConfigurationFlags())
if err := cmd.ParseFlags(daemonconfig.GetArgs(map[string]string{}, nodeConfig.AgentConfig.ExtraKubeProxyArgs)); err != nil {
return nil, err
}
@ -231,18 +264,25 @@ func RunStandalone(ctx context.Context, cfg cmds.Agent) error {
return err
}
nodeConfig := config.Get(ctx, cfg, proxy)
nodeConfig, err := config.Get(ctx, cfg, proxy)
if err != nil {
return errors.Wrap(err, "failed to retrieve agent configuration")
}
if err := executor.Bootstrap(ctx, nodeConfig, cfg); err != nil {
return err
}
if cfg.AgentReady != nil {
close(cfg.AgentReady)
if cfg.ContainerRuntimeReady != nil {
close(cfg.ContainerRuntimeReady)
}
if err := tunnelSetup(ctx, nodeConfig, cfg, proxy); err != nil {
return err
}
if err := certMonitorSetup(ctx, nodeConfig, cfg); err != nil {
return err
}
<-ctx.Done()
return ctx.Err()
@ -404,7 +444,7 @@ func updateLegacyAddressLabels(agentConfig *daemonconfig.Agent, nodeLabels map[s
if ls.Has(cp.InternalIPKey) || ls.Has(cp.HostnameKey) {
result := map[string]string{
cp.InternalIPKey: agentConfig.NodeIP,
cp.HostnameKey: agentConfig.NodeName,
cp.HostnameKey: getHostname(agentConfig),
}
if agentConfig.NodeExternalIP != "" {
@ -422,7 +462,7 @@ func updateAddressAnnotations(nodeConfig *daemonconfig.Node, nodeAnnotations map
agentConfig := &nodeConfig.AgentConfig
result := map[string]string{
cp.InternalIPKey: util.JoinIPs(agentConfig.NodeIPs),
cp.HostnameKey: agentConfig.NodeName,
cp.HostnameKey: getHostname(agentConfig),
}
if agentConfig.NodeExternalIP != "" {
@ -479,6 +519,10 @@ func setupTunnelAndRunAgent(ctx context.Context, nodeConfig *daemonconfig.Node,
if err := tunnelSetup(ctx, nodeConfig, cfg, proxy); err != nil {
return err
}
if err := certMonitorSetup(ctx, nodeConfig, cfg); err != nil {
return err
}
if !agentRan {
return agent.Agent(ctx, nodeConfig, proxy)
}
@ -517,3 +561,20 @@ func tunnelSetup(ctx context.Context, nodeConfig *daemonconfig.Node, cfg cmds.Ag
}
return tunnel.Setup(ctx, nodeConfig, proxy)
}
func certMonitorSetup(ctx context.Context, nodeConfig *daemonconfig.Node, cfg cmds.Agent) error {
if cfg.ClusterReset {
return nil
}
return certmonitor.Setup(ctx, nodeConfig, cfg.DataDir)
}
// getHostname returns the actual system hostname.
// If the hostname cannot be determined, or is invalid, the node name is used.
func getHostname(agentConfig *daemonconfig.Agent) string {
hostname, err := os.Hostname()
if err != nil || hostname == "" || strings.Contains(hostname, "localhost") {
return agentConfig.NodeName
}
return hostname
}

99
pkg/agent/run_test.go Normal file
View File

@ -0,0 +1,99 @@
package agent
import (
"reflect"
"testing"
"time"
daemonconfig "github.com/k3s-io/k3s/pkg/daemons/config"
v1alpha1 "k8s.io/kube-proxy/config/v1alpha1"
kubeproxyconfig "k8s.io/kubernetes/pkg/proxy/apis/config"
kubeproxyconfigv1alpha1 "k8s.io/kubernetes/pkg/proxy/apis/config/v1alpha1"
utilsptr "k8s.io/utils/ptr"
)
func Test_UnitGetConntrackConfig(t *testing.T) {
// There are only helpers to default the typed config, so we have to set defaults on the typed config,
// then convert it to the internal config representation in order to use it for tests.
typedConfig := &v1alpha1.KubeProxyConfiguration{}
defaultConfig := &kubeproxyconfig.KubeProxyConfiguration{}
kubeproxyconfigv1alpha1.SetDefaults_KubeProxyConfiguration(typedConfig)
if err := kubeproxyconfigv1alpha1.Convert_v1alpha1_KubeProxyConfiguration_To_config_KubeProxyConfiguration(typedConfig, defaultConfig, nil); err != nil {
t.Fatalf("Failed to generate default KubeProxyConfiguration: %v", err)
}
customConfig := defaultConfig.DeepCopy()
customConfig.Conntrack.Min = utilsptr.To(int32(100))
customConfig.Conntrack.TCPCloseWaitTimeout.Duration = 42 * time.Second
type args struct {
nodeConfig *daemonconfig.Node
}
tests := []struct {
name string
args args
want *kubeproxyconfig.KubeProxyConntrackConfiguration
wantErr bool
}{
{
name: "Default args",
args: args{
nodeConfig: &daemonconfig.Node{
AgentConfig: daemonconfig.Agent{
ExtraKubeProxyArgs: []string{},
},
},
},
want: &defaultConfig.Conntrack,
wantErr: false,
},
{
name: "Logging args",
args: args{
nodeConfig: &daemonconfig.Node{
AgentConfig: daemonconfig.Agent{
ExtraKubeProxyArgs: []string{"v=9"},
},
},
},
want: &defaultConfig.Conntrack,
wantErr: false,
},
{
name: "Invalid args",
args: args{
nodeConfig: &daemonconfig.Node{
AgentConfig: daemonconfig.Agent{
ExtraKubeProxyArgs: []string{"conntrack-tcp-timeout-close-wait=invalid", "bogus=true"},
},
},
},
want: nil,
wantErr: true,
},
{
name: "Conntrack args",
args: args{
nodeConfig: &daemonconfig.Node{
AgentConfig: daemonconfig.Agent{
ExtraKubeProxyArgs: []string{"conntrack-tcp-timeout-close-wait=42s", "conntrack-min=100"},
},
},
},
want: &customConfig.Conntrack,
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := getConntrackConfig(tt.args.nodeConfig)
if (err != nil) != tt.wantErr {
t.Errorf("getConntrackConfig() error = %v, wantErr %v", err, tt.wantErr)
return
}
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("getConntrackConfig() = %+v\nWant = %+v", got, tt.want)
}
})
}
}

View File

@ -1,6 +1,10 @@
package templates
import (
"bytes"
"net/url"
"text/template"
"github.com/rancher/wharfie/pkg/registries"
"github.com/k3s-io/k3s/pkg/daemons/config"
@ -17,7 +21,87 @@ type ContainerdConfig struct {
SystemdCgroup bool
IsRunningInUserNS bool
EnableUnprivileged bool
NoDefaultEndpoint bool
PrivateRegistryConfig *registries.Registry
ExtraRuntimes map[string]ContainerdRuntimeConfig
Program string
}
type RegistryEndpoint struct {
OverridePath bool
URL *url.URL
Rewrites map[string]string
Config registries.RegistryConfig
}
type HostConfig struct {
Default *RegistryEndpoint
Program string
Endpoints []RegistryEndpoint
}
const HostsTomlTemplate = `
{{- /* */ -}}
# File generated by {{ .Program }}. DO NOT EDIT.
{{ with $e := .Default }}
{{- if $e.URL }}
server = "{{ $e.URL }}"
capabilities = ["pull", "resolve", "push"]
{{ end }}
{{- if $e.Config.TLS }}
{{- if $e.Config.TLS.CAFile }}
ca = [{{ printf "%q" $e.Config.TLS.CAFile }}]
{{- end }}
{{- if or $e.Config.TLS.CertFile $e.Config.TLS.KeyFile }}
client = [[{{ printf "%q" $e.Config.TLS.CertFile }}, {{ printf "%q" $e.Config.TLS.KeyFile }}]]
{{- end }}
{{- if $e.Config.TLS.InsecureSkipVerify }}
skip_verify = true
{{- end }}
{{ end }}
{{ end }}
[host]
{{ range $e := .Endpoints -}}
[host."{{ $e.URL }}"]
capabilities = ["pull", "resolve"]
{{- if $e.OverridePath }}
override_path = true
{{- end }}
{{- if $e.Config.TLS }}
{{- if $e.Config.TLS.CAFile }}
ca = [{{ printf "%q" $e.Config.TLS.CAFile }}]
{{- end }}
{{- if or $e.Config.TLS.CertFile $e.Config.TLS.KeyFile }}
client = [[{{ printf "%q" $e.Config.TLS.CertFile }}, {{ printf "%q" $e.Config.TLS.KeyFile }}]]
{{- end }}
{{- if $e.Config.TLS.InsecureSkipVerify }}
skip_verify = true
{{- end }}
{{ end }}
{{- if $e.Rewrites }}
[host."{{ $e.URL }}".rewrite]
{{- range $pattern, $replace := $e.Rewrites }}
"{{ $pattern }}" = "{{ $replace }}"
{{- end }}
{{ end }}
{{ end -}}
`
func ParseTemplateFromConfig(templateBuffer string, config interface{}) (string, error) {
out := new(bytes.Buffer)
t := template.Must(template.New("compiled_template").Funcs(templateFuncs).Parse(templateBuffer))
template.Must(t.New("base").Parse(ContainerdConfigTemplate))
if err := t.Execute(out, config); err != nil {
return "", err
}
return out.String(), nil
}
func ParseHostsTemplateFromConfig(templateBuffer string, config interface{}) (string, error) {
out := new(bytes.Buffer)
t := template.Must(template.New("compiled_template").Funcs(templateFuncs).Parse(templateBuffer))
if err := t.Execute(out, config); err != nil {
return "", err
}
return out.String(), nil
}
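ParseHostsTemplateFromConfig renders one containerd hosts.toml per registry from a HostConfig. A sketch of driving it with a single mirror endpoint and no TLS or rewrites; the registry URL is illustrative and the rendered output shown is approximate:

package main

import (
	"fmt"
	"net/url"

	"github.com/k3s-io/k3s/pkg/agent/templates"
	"github.com/rancher/wharfie/pkg/registries"
)

func main() {
	u, err := url.Parse("https://registry.example.com:5000/v2")
	if err != nil {
		panic(err)
	}
	host := templates.HostConfig{
		Program: "k3s",
		Endpoints: []templates.RegistryEndpoint{{
			URL:    u,
			Config: registries.RegistryConfig{},
		}},
	}
	out, err := templates.ParseHostsTemplateFromConfig(templates.HostsTomlTemplate, host)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
	// Roughly:
	//   # File generated by k3s. DO NOT EDIT.
	//   [host]
	//   [host."https://registry.example.com:5000/v2"]
	//     capabilities = ["pull", "resolve"]
}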

View File

@ -3,11 +3,11 @@
package templates
import (
"bytes"
"text/template"
)
const ContainerdConfigTemplate = `
{{- /* */ -}}
# File generated by {{ .Program }}. DO NOT EDIT. Use config.toml.tmpl instead.
version = 2
@ -44,19 +44,11 @@ cri_keychain_image_service_path = "{{ .NodeConfig.AgentConfig.ImageServiceSocket
[plugins."io.containerd.snapshotter.v1.stargz".cri_keychain]
enable_keychain = true
{{end}}
[plugins."io.containerd.snapshotter.v1.stargz".registry]
config_path = "{{ .NodeConfig.Containerd.Registry }}"
{{ if .PrivateRegistryConfig }}
{{ if .PrivateRegistryConfig.Mirrors }}
[plugins."io.containerd.snapshotter.v1.stargz".registry.mirrors]{{end}}
{{range $k, $v := .PrivateRegistryConfig.Mirrors }}
[plugins."io.containerd.snapshotter.v1.stargz".registry.mirrors."{{$k}}"]
endpoint = [{{range $i, $j := $v.Endpoints}}{{if $i}}, {{end}}{{printf "%q" .}}{{end}}]
{{if $v.Rewrites}}
[plugins."io.containerd.snapshotter.v1.stargz".registry.mirrors."{{$k}}".rewrite]
{{range $pattern, $replace := $v.Rewrites}}
"{{$pattern}}" = "{{$replace}}"
{{end}}
{{end}}
{{end}}
{{range $k, $v := .PrivateRegistryConfig.Configs }}
{{ if $v.Auth }}
[plugins."io.containerd.snapshotter.v1.stargz".registry.configs."{{$k}}".auth]
@ -65,13 +57,6 @@ enable_keychain = true
{{ if $v.Auth.Auth }}auth = {{ printf "%q" $v.Auth.Auth }}{{end}}
{{ if $v.Auth.IdentityToken }}identitytoken = {{ printf "%q" $v.Auth.IdentityToken }}{{end}}
{{end}}
{{ if $v.TLS }}
[plugins."io.containerd.snapshotter.v1.stargz".registry.configs."{{$k}}".tls]
{{ if $v.TLS.CAFile }}ca_file = "{{ $v.TLS.CAFile }}"{{end}}
{{ if $v.TLS.CertFile }}cert_file = "{{ $v.TLS.CertFile }}"{{end}}
{{ if $v.TLS.KeyFile }}key_file = "{{ $v.TLS.KeyFile }}"{{end}}
{{ if $v.TLS.InsecureSkipVerify }}insecure_skip_verify = true{{end}}
{{end}}
{{end}}
{{end}}
{{end}}
@ -95,20 +80,10 @@ enable_keychain = true
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = {{ .SystemdCgroup }}
{{ if .PrivateRegistryConfig }}
{{ if .PrivateRegistryConfig.Mirrors }}
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]{{end}}
{{range $k, $v := .PrivateRegistryConfig.Mirrors }}
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."{{$k}}"]
endpoint = [{{range $i, $j := $v.Endpoints}}{{if $i}}, {{end}}{{printf "%q" .}}{{end}}]
{{if $v.Rewrites}}
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."{{$k}}".rewrite]
{{range $pattern, $replace := $v.Rewrites}}
"{{$pattern}}" = "{{$replace}}"
{{end}}
{{end}}
{{end}}
[plugins."io.containerd.grpc.v1.cri".registry]
config_path = "{{ .NodeConfig.Containerd.Registry }}"
{{ if .PrivateRegistryConfig }}
{{range $k, $v := .PrivateRegistryConfig.Configs }}
{{ if $v.Auth }}
[plugins."io.containerd.grpc.v1.cri".registry.configs."{{$k}}".auth]
@ -117,13 +92,6 @@ enable_keychain = true
{{ if $v.Auth.Auth }}auth = {{ printf "%q" $v.Auth.Auth }}{{end}}
{{ if $v.Auth.IdentityToken }}identitytoken = {{ printf "%q" $v.Auth.IdentityToken }}{{end}}
{{end}}
{{ if $v.TLS }}
[plugins."io.containerd.grpc.v1.cri".registry.configs."{{$k}}".tls]
{{ if $v.TLS.CAFile }}ca_file = "{{ $v.TLS.CAFile }}"{{end}}
{{ if $v.TLS.CertFile }}cert_file = "{{ $v.TLS.CertFile }}"{{end}}
{{ if $v.TLS.KeyFile }}key_file = "{{ $v.TLS.KeyFile }}"{{end}}
{{ if $v.TLS.InsecureSkipVerify }}insecure_skip_verify = true{{end}}
{{end}}
{{end}}
{{end}}
@ -136,12 +104,9 @@ enable_keychain = true
{{end}}
`
func ParseTemplateFromConfig(templateBuffer string, config interface{}) (string, error) {
out := new(bytes.Buffer)
t := template.Must(template.New("compiled_template").Parse(templateBuffer))
template.Must(t.New("base").Parse(ContainerdConfigTemplate))
if err := t.Execute(out, config); err != nil {
return "", err
}
return out.String(), nil
// Linux config templates do not need fixups
var templateFuncs = template.FuncMap{
"deschemify": func(s string) string {
return s
},
}

View File

@ -4,16 +4,17 @@
package templates
import (
"bytes"
"net/url"
"strings"
"text/template"
)
const ContainerdConfigTemplate = `
{{- /* */ -}}
# File generated by {{ .Program }}. DO NOT EDIT. Use config.toml.tmpl instead.
version = 2
root = "{{ replace .NodeConfig.Containerd.Root }}"
state = "{{ replace .NodeConfig.Containerd.State }}"
root = {{ printf "%q" .NodeConfig.Containerd.Root }}
state = {{ printf "%q" .NodeConfig.Containerd.State }}
plugin_dir = ""
disabled_plugins = []
required_plugins = []
@ -107,14 +108,15 @@ oom_score = 0
privileged_without_host_devices = false
base_runtime_spec = ""
[plugins."io.containerd.grpc.v1.cri".cni]
bin_dir = "{{ replace .NodeConfig.AgentConfig.CNIBinDir }}"
conf_dir = "{{ replace .NodeConfig.AgentConfig.CNIConfDir }}"
bin_dir = {{ printf "%q" .NodeConfig.AgentConfig.CNIBinDir }}
conf_dir = {{ printf "%q" .NodeConfig.AgentConfig.CNIConfDir }}
max_conf_num = 1
conf_template = ""
[plugins."io.containerd.grpc.v1.cri".registry]
config_path = ""
config_path = {{ printf "%q" .NodeConfig.Containerd.Registry }}
{{ if .PrivateRegistryConfig }}
{{range $k, $v := .PrivateRegistryConfig.Configs }}
[plugins."io.containerd.grpc.v1.cri".registry.auths]
{{ if $v.Auth }}
[plugins."io.containerd.grpc.v1.cri".registry.configs.auth."{{$k}}"]
{{ if $v.Auth.Username }}username = {{ printf "%q" $v.Auth.Username }}{{end}}
@ -122,35 +124,15 @@ oom_score = 0
{{ if $v.Auth.Auth }}auth = {{ printf "%q" $v.Auth.Auth }}{{end}}
{{ if $v.Auth.IdentityToken }}identitytoken = {{ printf "%q" $v.Auth.IdentityToken }}{{end}}
{{end}}
[plugins."io.containerd.grpc.v1.cri".registry.configs]
{{ if $v.TLS }}
[plugins."io.containerd.grpc.v1.cri".registry.configs.tls."{{$k}}"]
{{ if $v.TLS.CAFile }}ca_file = "{{ $v.TLS.CAFile }}"{{end}}
{{ if $v.TLS.CertFile }}cert_file = "{{ $v.TLS.CertFile }}"{{end}}
{{ if $v.TLS.KeyFile }}key_file = "{{ $v.TLS.KeyFile }}"{{end}}
{{ if $v.TLS.InsecureSkipVerify }}insecure_skip_verify = true{{end}}
{{end}}
{{end}}
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
{{ if .PrivateRegistryConfig.Mirrors }}
{{range $k, $v := .PrivateRegistryConfig.Mirrors }}
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."{{$k}}"]
endpoint = [{{range $i, $j := $v.Endpoints}}{{if $i}}, {{end}}{{printf "%q" .}}{{end}}]
{{if $v.Rewrites}}
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."{{$k}}".rewrite]
{{range $pattern, $replace := $v.Rewrites}}
"{{$pattern}}" = "{{$replace}}"
{{end}}
{{end}}
{{end}}
{{end}}
[plugins."io.containerd.grpc.v1.cri".image_decryption]
key_model = ""
[plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
tls_cert_file = ""
tls_key_file = ""
[plugins."io.containerd.internal.v1.opt"]
path = "{{ replace .NodeConfig.Containerd.Opt }}"
path = {{ printf "%q" .NodeConfig.Containerd.Opt }}
[plugins."io.containerd.internal.v1.restart"]
interval = "10s"
[plugins."io.containerd.metadata.v1.bolt"]
@ -161,27 +143,16 @@ oom_score = 0
default = ["windows", "windows-lcow"]
`
func ParseTemplateFromConfig(templateBuffer string, config interface{}) (string, error) {
out := new(bytes.Buffer)
funcs := template.FuncMap{
"replace": func(s string) string {
return strings.ReplaceAll(s, "\\", "\\\\")
},
"deschemify": func(s string) string {
if strings.HasPrefix(s, "npipe:") {
u, err := url.Parse(s)
if err != nil {
return ""
}
return u.Path
// Windows config templates need named pipe addresses fixed up
var templateFuncs = template.FuncMap{
"deschemify": func(s string) string {
if strings.HasPrefix(s, "npipe:") {
u, err := url.Parse(s)
if err != nil {
return ""
}
return s
},
}
t := template.Must(template.New("compiled_template").Funcs(funcs).Parse(templateBuffer))
template.Must(t.New("base").Parse(ContainerdConfigTemplate))
if err := t.Execute(out, config); err != nil {
return "", err
}
return out.String(), nil
return u.Path
}
return s
},
}
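The Windows deschemify helper exists because containerd socket addresses are configured as npipe: URLs, while the rendered TOML needs the bare named-pipe path. Behaviorally it reduces to the following standalone snippet; the pipe name shown is the conventional containerd one:

package main

import (
	"fmt"
	"net/url"
	"strings"
)

// deschemify mirrors the Windows template helper above: it strips the npipe:
// scheme so containerd receives a bare named-pipe path, and passes all other
// addresses through unchanged.
func deschemify(s string) string {
	if strings.HasPrefix(s, "npipe:") {
		u, err := url.Parse(s)
		if err != nil {
			return ""
		}
		return u.Path
	}
	return s
}

func main() {
	fmt.Println(deschemify("npipe:////./pipe/containerd-containerd"))
	// //./pipe/containerd-containerd
	fmt.Println(deschemify("/run/k3s/containerd/containerd.sock"))
	// unchanged on non-pipe addresses
}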

View File

@ -3,6 +3,7 @@ package tunnel
import (
"context"
"crypto/tls"
"errors"
"fmt"
"net"
"os"
@ -289,7 +290,9 @@ func (a *agentTunnel) watchEndpoints(ctx context.Context, apiServerReady <-chan
disconnect := map[string]context.CancelFunc{}
for _, address := range proxy.SupervisorAddresses() {
if _, ok := disconnect[address]; !ok {
disconnect[address] = a.connect(ctx, wg, address, tlsConfig)
conn := a.connect(ctx, wg, address, tlsConfig)
disconnect[address] = conn.cancel
proxy.SetHealthCheck(address, conn.connected)
}
}
@ -361,7 +364,9 @@ func (a *agentTunnel) watchEndpoints(ctx context.Context, apiServerReady <-chan
for _, address := range proxy.SupervisorAddresses() {
validEndpoint[address] = true
if _, ok := disconnect[address]; !ok {
disconnect[address] = a.connect(ctx, nil, address, tlsConfig)
conn := a.connect(ctx, nil, address, tlsConfig)
disconnect[address] = conn.cancel
proxy.SetHealthCheck(address, conn.connected)
}
}
@ -403,32 +408,54 @@ func (a *agentTunnel) authorized(ctx context.Context, proto, address string) boo
return false
}
type agentConnection struct {
cancel context.CancelFunc
connected func() bool
}
// connect initiates a connection to the remotedialer server. Incoming dial requests from
// the server will be checked by the authorizer function prior to being fulfilled.
func (a *agentTunnel) connect(rootCtx context.Context, waitGroup *sync.WaitGroup, address string, tlsConfig *tls.Config) context.CancelFunc {
func (a *agentTunnel) connect(rootCtx context.Context, waitGroup *sync.WaitGroup, address string, tlsConfig *tls.Config) agentConnection {
wsURL := fmt.Sprintf("wss://%s/v1-"+version.Program+"/connect", address)
ws := &websocket.Dialer{
TLSClientConfig: tlsConfig,
}
// Assume that the connection to the server will succeed, to avoid failing health checks while attempting to connect.
// If we cannot connect, connected will be set to false when the initial connection attempt fails.
connected := true
once := sync.Once{}
if waitGroup != nil {
waitGroup.Add(1)
}
ctx, cancel := context.WithCancel(rootCtx)
auth := func(proto, address string) bool {
return a.authorized(rootCtx, proto, address)
}
onConnect := func(_ context.Context, _ *remotedialer.Session) error {
connected = true
logrus.WithField("url", wsURL).Info("Remotedialer connected to proxy")
if waitGroup != nil {
once.Do(waitGroup.Done)
}
return nil
}
// Start remotedialer connect loop in a goroutine to ensure a connection to the target server
go func() {
for {
remotedialer.ClientConnect(ctx, wsURL, nil, ws, func(proto, address string) bool {
return a.authorized(rootCtx, proto, address)
}, func(_ context.Context, _ *remotedialer.Session) error {
if waitGroup != nil {
once.Do(waitGroup.Done)
}
return nil
})
// ConnectToProxy blocks until error or context cancellation
err := remotedialer.ConnectToProxy(ctx, wsURL, nil, auth, ws, onConnect)
connected = false
if err != nil && !errors.Is(err, context.Canceled) {
logrus.WithField("url", wsURL).WithError(err).Error("Remotedialer proxy error; reconecting...")
// wait between reconnection attempts to avoid hammering the server
time.Sleep(endpointDebounceDelay)
}
// If the context has been cancelled, exit the goroutine instead of retrying
if ctx.Err() != nil {
if waitGroup != nil {
once.Do(waitGroup.Done)
@ -438,7 +465,10 @@ func (a *agentTunnel) connect(rootCtx context.Context, waitGroup *sync.WaitGroup
}
}()
return cancel
return agentConnection{
cancel: cancel,
connected: func() bool { return connected },
}
}
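The returned struct pairs the cancel function with a health-check closure, so callers wire both in one place. A minimal sketch of the consuming side, mirroring the watchEndpoints changes above (the surrounding variables are as in that function):

// wire up a new supervisor address
conn := a.connect(ctx, nil, address, tlsConfig)
disconnect[address] = conn.cancel
proxy.SetHealthCheck(address, conn.connected)

// later, when the endpoint drops out of the server list, cancel the
// connect loop and forget the address
if cancel, ok := disconnect[address]; ok {
	cancel()
	delete(disconnect, address)
}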
// isKubeletPort returns true if the connection is to a reserved TCP port on a loopback address.

View File

@ -0,0 +1,166 @@
//go:build windows
// +build windows
package acl
import (
"fmt"
"golang.org/x/sys/windows"
"unsafe"
)
// TODO: Remove in favor of the rancher/permissions repository once that is set up
func BuiltinAdministratorsSID() *windows.SID {
return mustGetSid(windows.WinBuiltinAdministratorsSid)
}
func LocalSystemSID() *windows.SID {
return mustGetSid(windows.WinLocalSystemSid)
}
func mustGetSid(sidType windows.WELL_KNOWN_SID_TYPE) *windows.SID {
sid, err := windows.CreateWellKnownSid(sidType)
if err != nil {
panic(err)
}
return sid
}
// GrantSid creates an EXPLICIT_ACCESS instance granting permissions to the provided SID.
func GrantSid(accessPermissions windows.ACCESS_MASK, sid *windows.SID) windows.EXPLICIT_ACCESS {
return windows.EXPLICIT_ACCESS{
AccessPermissions: accessPermissions,
AccessMode: windows.GRANT_ACCESS,
Inheritance: windows.SUB_CONTAINERS_AND_OBJECTS_INHERIT,
Trustee: windows.TRUSTEE{
TrusteeForm: windows.TRUSTEE_IS_SID,
TrusteeValue: windows.TrusteeValueFromSID(sid),
},
}
}
// Apply performs both Chmod and Chown at the same time, where the filemode's owner and group will correspond to
// the provided owner and group (or the current owner and group, if they are set to nil)
func Apply(path string, owner *windows.SID, group *windows.SID, access ...windows.EXPLICIT_ACCESS) error {
if path == "" {
return fmt.Errorf("path cannot be empty")
}
return apply(path, owner, group, access...)
}
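As a usage illustration (the path is hypothetical, and GENERIC_ALL is just one possible access mask), the helpers compose like this from a hypothetical caller to restrict a file to Administrators and LocalSystem while leaving the current owner and group untouched:

// minimal sketch: passing nil owner/group keeps the current values;
// each GrantSid entry becomes one EXPLICIT_ACCESS rule in the DACL
err := acl.Apply(
	`C:\etc\rancher\k3s\k3s.yaml`, // hypothetical path
	nil, // keep current owner
	nil, // keep current group
	acl.GrantSid(windows.GENERIC_ALL, acl.BuiltinAdministratorsSID()),
	acl.GrantSid(windows.GENERIC_ALL, acl.LocalSystemSID()),
)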
// apply performs a Chmod (if owner and group are provided) and sets a custom ACL based on the provided EXPLICIT_ACCESS rules
// To create EXPLICIT_ACCESS rules, see the helper functions in pkg/access
func apply(path string, owner *windows.SID, group *windows.SID, access ...windows.EXPLICIT_ACCESS) error {
// assemble arguments
args := securityArgs{
path: path,
owner: owner,
group: group,
access: access,
}
securityInfo := args.ToSecurityInfo()
if securityInfo == 0 {
// nothing to change
return nil
}
dacl, err := args.ToDACL()
if err != nil {
return err
}
return windows.SetNamedSecurityInfo(
path,
windows.SE_FILE_OBJECT,
securityInfo,
owner,
group,
dacl,
nil,
)
}
type securityArgs struct {
path string
owner *windows.SID
group *windows.SID
access []windows.EXPLICIT_ACCESS
}
func (a *securityArgs) ToSecurityInfo() windows.SECURITY_INFORMATION {
var securityInfo windows.SECURITY_INFORMATION
if a.owner != nil {
// override owner
securityInfo |= windows.OWNER_SECURITY_INFORMATION
}
if a.group != nil {
// override group
securityInfo |= windows.GROUP_SECURITY_INFORMATION
}
if len(a.access) != 0 {
// override DACL
securityInfo |= windows.DACL_SECURITY_INFORMATION
securityInfo |= windows.PROTECTED_DACL_SECURITY_INFORMATION
}
return securityInfo
}
func (a *securityArgs) ToSecurityAttributes() (*windows.SecurityAttributes, error) {
// define empty security descriptor
sd, err := windows.NewSecurityDescriptor()
if err != nil {
return nil, err
}
err = sd.SetOwner(a.owner, false)
if err != nil {
return nil, err
}
err = sd.SetGroup(a.group, false)
if err != nil {
return nil, err
}
// define security attributes using descriptor
var sa windows.SecurityAttributes
sa.Length = uint32(unsafe.Sizeof(sa))
sa.SecurityDescriptor = sd
if len(a.access) == 0 {
// security attribute should simply inherit parent rules
sa.InheritHandle = 1
return &sa, nil
}
// apply provided access rules to the DACL
dacl, err := a.ToDACL()
if err != nil {
return nil, err
}
err = sd.SetDACL(dacl, true, false)
if err != nil {
return nil, err
}
// set the protected DACL flag to prevent the DACL of the security descriptor from being modified by inheritable ACEs
// (i.e. prevent parent folders from modifying this ACL)
err = sd.SetControl(windows.SE_DACL_PROTECTED, windows.SE_DACL_PROTECTED)
if err != nil {
return nil, err
}
return &sa, nil
}
func (a *securityArgs) ToDACL() (*windows.ACL, error) {
if len(a.access) == 0 {
// No rules were specified
return nil, nil
}
return windows.ACLFromEntries(a.access, nil)
}

View File

@ -0,0 +1,136 @@
package certmonitor
import (
"context"
"crypto/x509"
"fmt"
"os"
"path/filepath"
"strings"
"time"
daemonconfig "github.com/k3s-io/k3s/pkg/daemons/config"
"github.com/k3s-io/k3s/pkg/daemons/control/deps"
"github.com/k3s-io/k3s/pkg/metrics"
"github.com/k3s-io/k3s/pkg/util"
"github.com/k3s-io/k3s/pkg/util/services"
"github.com/k3s-io/k3s/pkg/version"
"github.com/prometheus/client_golang/prometheus"
certutil "github.com/rancher/dynamiclistener/cert"
"github.com/rancher/wrangler/v3/pkg/merr"
"github.com/sirupsen/logrus"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/wait"
)
var (
// Check certificates twice an hour. Kubernetes events have a TTL of 1 hour by default,
// so similar events should be aggregated and refreshed by the event recorder as long
// as they are created within the TTL period.
certCheckInterval = time.Minute * 30
controllerName = version.Program + "-cert-monitor"
certificateExpirationSeconds = prometheus.NewGaugeVec(prometheus.GaugeOpts{
Name: version.Program + "_certificate_expiration_seconds",
Help: "Remaining lifetime on the certificate.",
}, []string{"subject", "usages"})
)
// Setup starts the certificate expiration monitor
func Setup(ctx context.Context, nodeConfig *daemonconfig.Node, dataDir string) error {
logrus.Debugf("Starting %s with monitoring period %s", controllerName, certCheckInterval)
metrics.DefaultRegisterer.MustRegister(certificateExpirationSeconds)
client, err := util.GetClientSet(nodeConfig.AgentConfig.KubeConfigKubelet)
if err != nil {
return err
}
recorder := util.BuildControllerEventRecorder(client, controllerName, metav1.NamespaceDefault)
// This is consistent with events attached to the node generated by the kubelet
// https://github.com/kubernetes/kubernetes/blob/612130dd2f4188db839ea5c2dea07a96b0ad8d1c/pkg/kubelet/kubelet.go#L479-L485
nodeRef := &corev1.ObjectReference{
Kind: "Node",
Name: nodeConfig.AgentConfig.NodeName,
UID: types.UID(nodeConfig.AgentConfig.NodeName),
Namespace: "",
}
// Create a dummy controlConfig just to hold the paths for the server certs
controlConfig := daemonconfig.Control{
DataDir: filepath.Join(dataDir, "server"),
Runtime: &daemonconfig.ControlRuntime{},
}
deps.CreateRuntimeCertFiles(&controlConfig)
caMap := map[string][]string{}
nodeList := services.Agent
if _, err := os.Stat(controlConfig.DataDir); err == nil {
nodeList = services.All
caMap, err = services.FilesForServices(controlConfig, services.CA)
if err != nil {
return err
}
}
nodeMap, err := services.FilesForServices(controlConfig, nodeList)
if err != nil {
return err
}
go wait.Until(func() {
logrus.Debugf("Running %s certificate expiration check", controllerName)
if err := checkCerts(nodeMap, time.Hour*24*daemonconfig.CertificateRenewDays); err != nil {
message := fmt.Sprintf("Node certificates require attention - restart %s on this node to trigger automatic rotation: %v", version.Program, err)
recorder.Event(nodeRef, corev1.EventTypeWarning, "CertificateExpirationWarning", message)
}
if err := checkCerts(caMap, time.Hour*24*365); err != nil {
message := fmt.Sprintf("Certificate authority certificates require attention - check %s documentation and begin planning rotation: %v", version.Program, err)
recorder.Event(nodeRef, corev1.EventTypeWarning, "CACertificateExpirationWarning", message)
}
}, certCheckInterval, ctx.Done())
return nil
}
func checkCerts(fileMap map[string][]string, warningPeriod time.Duration) error {
errs := merr.Errors{}
now := time.Now()
warn := now.Add(warningPeriod)
for service, files := range fileMap {
for _, file := range files {
basename := filepath.Base(file)
certs, _ := certutil.CertsFromFile(file)
for _, cert := range certs {
usages := []string{}
if cert.KeyUsage&x509.KeyUsageCertSign != 0 {
usages = append(usages, "CertSign")
}
for _, eku := range cert.ExtKeyUsage {
switch eku {
case x509.ExtKeyUsageServerAuth:
usages = append(usages, "ServerAuth")
case x509.ExtKeyUsageClientAuth:
usages = append(usages, "ClientAuth")
}
}
certificateExpirationSeconds.WithLabelValues(cert.Subject.String(), strings.Join(usages, ",")).Set(cert.NotAfter.Sub(now).Seconds())
if now.Before(cert.NotBefore) {
errs = append(errs, fmt.Errorf("%s/%s: certificate %s is not valid before %s", service, basename, cert.Subject, cert.NotBefore.Format(time.RFC3339)))
} else if now.After(cert.NotAfter) {
errs = append(errs, fmt.Errorf("%s/%s: certificate %s expired at %s", service, basename, cert.Subject, cert.NotAfter.Format(time.RFC3339)))
} else if warn.After(cert.NotAfter) {
errs = append(errs, fmt.Errorf("%s/%s: certificate %s will expire within %d days at %s", service, basename, cert.Subject, daemonconfig.CertificateRenewDays, cert.NotAfter.Format(time.RFC3339)))
}
}
}
}
return merr.NewErrors(errs...)
}
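The same certutil helper works standalone for spot checks; a minimal sketch assuming a typical server certificate path:

// minimal sketch, assuming a typical on-disk path; mirrors the per-file
// loop in checkCerts above
certs, err := certutil.CertsFromFile("/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt")
if err != nil {
	logrus.Fatalf("failed to read certs: %v", err)
}
for _, cert := range certs {
	fmt.Printf("%s expires in %s\n", cert.Subject, time.Until(cert.NotAfter).Round(time.Hour))
}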

View File

@ -1,28 +1,38 @@
package agent
import (
"context"
"crypto/tls"
"fmt"
"os"
"path/filepath"
"runtime"
"github.com/erikdubbelboer/gspt"
"github.com/gorilla/mux"
"github.com/k3s-io/k3s/pkg/agent"
"github.com/k3s-io/k3s/pkg/agent/https"
"github.com/k3s-io/k3s/pkg/cli/cmds"
"github.com/k3s-io/k3s/pkg/daemons/config"
"github.com/k3s-io/k3s/pkg/datadir"
k3smetrics "github.com/k3s-io/k3s/pkg/metrics"
"github.com/k3s-io/k3s/pkg/proctitle"
"github.com/k3s-io/k3s/pkg/profile"
"github.com/k3s-io/k3s/pkg/spegel"
"github.com/k3s-io/k3s/pkg/util"
"github.com/k3s-io/k3s/pkg/version"
"github.com/k3s-io/k3s/pkg/vpn"
"github.com/rancher/wrangler/pkg/signals"
"github.com/rancher/wrangler/v3/pkg/signals"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
func Run(ctx *cli.Context) error {
// Validate build env
cmds.MustValidateGolang()
// hide process arguments from ps output, since they may contain
// database credentials or other secrets.
gspt.SetProcTitle(os.Args[0] + " agent")
proctitle.SetProcTitle(os.Args[0] + " agent")
// Evacuate cgroup v2 before doing anything else that may fork.
if err := cmds.EvacuateCgroup2(); err != nil {
@ -81,20 +91,41 @@ func Run(ctx *cli.Context) error {
contextCtx := signals.SetupSignalContext()
go cmds.WriteCoverage(contextCtx)
if cmds.AgentConfig.VPNAuthFile != "" {
cmds.AgentConfig.VPNAuth, err = util.ReadFile(cmds.AgentConfig.VPNAuthFile)
if cfg.VPNAuthFile != "" {
cfg.VPNAuth, err = util.ReadFile(cfg.VPNAuthFile)
if err != nil {
return err
}
}
// Starts the VPN in the agent if config was set up
if cmds.AgentConfig.VPNAuth != "" {
err := vpn.StartVPN(cmds.AgentConfig.VPNAuth)
if cfg.VPNAuth != "" {
err := vpn.StartVPN(cfg.VPNAuth)
if err != nil {
return err
}
}
// Until the agent is run and retrieves config from the server, we won't know
// if the embedded registry is enabled. If it is not enabled, these are not
// used as the registry is never started.
registry := spegel.DefaultRegistry
registry.Bootstrapper = spegel.NewAgentBootstrapper(cfg.ServerURL, cfg.Token, cfg.DataDir)
registry.Router = func(ctx context.Context, nodeConfig *config.Node) (*mux.Router, error) {
return https.Start(ctx, nodeConfig, nil)
}
// same deal for metrics - these are not used if the extra metrics listener is not enabled.
metrics := k3smetrics.DefaultMetrics
metrics.Router = func(ctx context.Context, nodeConfig *config.Node) (*mux.Router, error) {
return https.Start(ctx, nodeConfig, nil)
}
// and for pprof as well
pprof := profile.DefaultProfiler
pprof.Router = func(ctx context.Context, nodeConfig *config.Node) (*mux.Router, error) {
return https.Start(ctx, nodeConfig, nil)
}
return agent.Run(contextCtx, cfg)
}

View File

@ -5,10 +5,9 @@ import (
"fmt"
"os"
"path/filepath"
"strconv"
"strings"
"time"
"github.com/erikdubbelboer/gspt"
"github.com/k3s-io/k3s/pkg/agent/util"
"github.com/k3s-io/k3s/pkg/bootstrap"
"github.com/k3s-io/k3s/pkg/cli/cmds"
@ -16,44 +15,19 @@ import (
"github.com/k3s-io/k3s/pkg/daemons/config"
"github.com/k3s-io/k3s/pkg/daemons/control/deps"
"github.com/k3s-io/k3s/pkg/datadir"
"github.com/k3s-io/k3s/pkg/proctitle"
"github.com/k3s-io/k3s/pkg/server"
"github.com/k3s-io/k3s/pkg/util/services"
"github.com/k3s-io/k3s/pkg/version"
"github.com/otiai10/copy"
"github.com/pkg/errors"
certutil "github.com/rancher/dynamiclistener/cert"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
)
const (
adminService = "admin"
apiServerService = "api-server"
controllerManagerService = "controller-manager"
schedulerService = "scheduler"
etcdService = "etcd"
programControllerService = "-controller"
authProxyService = "auth-proxy"
cloudControllerService = "cloud-controller"
kubeletService = "kubelet"
kubeProxyService = "kube-proxy"
k3sServerService = "-server"
)
var services = []string{
adminService,
apiServerService,
controllerManagerService,
schedulerService,
etcdService,
version.Program + programControllerService,
authProxyService,
cloudControllerService,
kubeletService,
kubeProxyService,
version.Program + k3sServerService,
}
func commandSetup(app *cli.Context, cfg *cmds.Server, sc *server.Config) (string, error) {
gspt.SetProcTitle(os.Args[0])
proctitle.SetProcTitle(os.Args[0])
dataDir, err := datadir.Resolve(cfg.DataDir)
if err != nil {
@ -64,18 +38,84 @@ func commandSetup(app *cli.Context, cfg *cmds.Server, sc *server.Config) (string
if cfg.Token == "" {
fp := filepath.Join(sc.ControlConfig.DataDir, "token")
tokenByte, err := os.ReadFile(fp)
if err != nil {
if err != nil && !os.IsNotExist(err) {
return "", err
}
cfg.Token = string(bytes.TrimRight(tokenByte, "\n"))
}
sc.ControlConfig.Token = cfg.Token
sc.ControlConfig.Runtime = config.NewRuntime(nil)
return dataDir, nil
}
func Check(app *cli.Context) error {
if err := cmds.InitLogging(); err != nil {
return err
}
return check(app, &cmds.ServerConfig)
}
func check(app *cli.Context, cfg *cmds.Server) error {
var serverConfig server.Config
_, err := commandSetup(app, cfg, &serverConfig)
if err != nil {
return err
}
deps.CreateRuntimeCertFiles(&serverConfig.ControlConfig)
if err := validateCertConfig(); err != nil {
return err
}
if len(cmds.ServicesList) == 0 {
// detecting if the command is being run on an agent or server based on presence of the server data-dir
_, err := os.Stat(serverConfig.ControlConfig.DataDir)
if err != nil {
if !os.IsNotExist(err) {
return err
}
logrus.Infof("Agent detected, checking agent certificates")
cmds.ServicesList = services.Agent
} else {
logrus.Infof("Server detected, checking agent and server certificates")
cmds.ServicesList = services.All
}
}
fileMap, err := services.FilesForServices(serverConfig.ControlConfig, cmds.ServicesList)
if err != nil {
return err
}
now := time.Now()
warn := now.Add(time.Hour * 24 * config.CertificateRenewDays)
for service, files := range fileMap {
logrus.Info("Checking certificates for " + service)
for _, file := range files {
// ignore errors, as some files may not exist, or may not contain certs.
// Only check whatever exists and has certs.
certs, _ := certutil.CertsFromFile(file)
for _, cert := range certs {
if now.Before(cert.NotBefore) {
logrus.Errorf("%s: certificate %s is not valid before %s", file, cert.Subject, cert.NotBefore.Format(time.RFC3339))
} else if now.After(cert.NotAfter) {
logrus.Errorf("%s: certificate %s expired at %s", file, cert.Subject, cert.NotAfter.Format(time.RFC3339))
} else if warn.After(cert.NotAfter) {
logrus.Warnf("%s: certificate %s will expire within %d days at %s", file, cert.Subject, config.CertificateRenewDays, cert.NotAfter.Format(time.RFC3339))
} else {
logrus.Infof("%s: certificate %s is ok, expires at %s", file, cert.Subject, cert.NotAfter.Format(time.RFC3339))
}
}
}
}
return nil
}
func Rotate(app *cli.Context) error {
if err := cmds.InitLogging(); err != nil {
return err
@ -97,163 +137,94 @@ func rotate(app *cli.Context, cfg *cmds.Server) error {
return err
}
agentDataDir := filepath.Join(dataDir, "agent")
tlsBackupDir, err := backupCertificates(serverConfig.ControlConfig.DataDir, agentDataDir)
if err != nil {
return err
}
if len(cmds.ServicesList) == 0 {
// detecting if the command is being run on an agent or server
// detecting if the command is being run on an agent or server based on presence of the server data-dir
_, err := os.Stat(serverConfig.ControlConfig.DataDir)
if err != nil {
if !os.IsNotExist(err) {
return err
}
logrus.Infof("Agent detected, rotating agent certificates")
cmds.ServicesList = []string{
kubeletService,
kubeProxyService,
version.Program + programControllerService,
}
cmds.ServicesList = services.Agent
} else {
logrus.Infof("Server detected, rotating server certificates")
cmds.ServicesList = []string{
adminService,
etcdService,
apiServerService,
controllerManagerService,
cloudControllerService,
schedulerService,
version.Program + k3sServerService,
version.Program + programControllerService,
authProxyService,
kubeletService,
kubeProxyService,
}
logrus.Infof("Server detected, rotating agent and server certificates")
cmds.ServicesList = services.All
}
}
fileList := []string{}
fileMap, err := services.FilesForServices(serverConfig.ControlConfig, cmds.ServicesList)
if err != nil {
return err
}
// back up all the files
agentDataDir := filepath.Join(dataDir, "agent")
tlsBackupDir, err := backupCertificates(serverConfig.ControlConfig.DataDir, agentDataDir, fileMap)
if err != nil {
return err
}
// The dynamiclistener cache file can't simply be deleted; we need to create a trigger
// file to indicate that the cert needs to be regenerated on startup.
for _, service := range cmds.ServicesList {
logrus.Infof("Rotating certificates for %s service", service)
switch service {
case adminService:
fileList = append(fileList,
serverConfig.ControlConfig.Runtime.ClientAdminCert,
serverConfig.ControlConfig.Runtime.ClientAdminKey)
case apiServerService:
fileList = append(fileList,
serverConfig.ControlConfig.Runtime.ClientKubeAPICert,
serverConfig.ControlConfig.Runtime.ClientKubeAPIKey,
serverConfig.ControlConfig.Runtime.ServingKubeAPICert,
serverConfig.ControlConfig.Runtime.ServingKubeAPIKey)
case controllerManagerService:
fileList = append(fileList,
serverConfig.ControlConfig.Runtime.ClientControllerCert,
serverConfig.ControlConfig.Runtime.ClientControllerKey)
case schedulerService:
fileList = append(fileList,
serverConfig.ControlConfig.Runtime.ClientSchedulerCert,
serverConfig.ControlConfig.Runtime.ClientSchedulerKey)
case etcdService:
fileList = append(fileList,
serverConfig.ControlConfig.Runtime.ClientETCDCert,
serverConfig.ControlConfig.Runtime.ClientETCDKey,
serverConfig.ControlConfig.Runtime.ServerETCDCert,
serverConfig.ControlConfig.Runtime.ServerETCDKey,
serverConfig.ControlConfig.Runtime.PeerServerClientETCDCert,
serverConfig.ControlConfig.Runtime.PeerServerClientETCDKey)
case cloudControllerService:
fileList = append(fileList,
serverConfig.ControlConfig.Runtime.ClientCloudControllerCert,
serverConfig.ControlConfig.Runtime.ClientCloudControllerKey)
case version.Program + k3sServerService:
if service == version.Program+services.ProgramServer {
dynamicListenerRegenFilePath := filepath.Join(serverConfig.ControlConfig.DataDir, "tls", "dynamic-cert-regenerate")
if err := os.WriteFile(dynamicListenerRegenFilePath, []byte{}, 0600); err != nil {
return err
}
logrus.Infof("Rotating dynamic listener certificate")
case version.Program + programControllerService:
fileList = append(fileList,
serverConfig.ControlConfig.Runtime.ClientK3sControllerCert,
serverConfig.ControlConfig.Runtime.ClientK3sControllerKey,
filepath.Join(agentDataDir, "client-"+version.Program+"-controller.crt"),
filepath.Join(agentDataDir, "client-"+version.Program+"-controller.key"))
case authProxyService:
fileList = append(fileList,
serverConfig.ControlConfig.Runtime.ClientAuthProxyCert,
serverConfig.ControlConfig.Runtime.ClientAuthProxyKey)
case kubeletService:
fileList = append(fileList,
serverConfig.ControlConfig.Runtime.ClientKubeletKey,
serverConfig.ControlConfig.Runtime.ServingKubeletKey,
filepath.Join(agentDataDir, "client-kubelet.crt"),
filepath.Join(agentDataDir, "client-kubelet.key"),
filepath.Join(agentDataDir, "serving-kubelet.crt"),
filepath.Join(agentDataDir, "serving-kubelet.key"))
case kubeProxyService:
fileList = append(fileList,
serverConfig.ControlConfig.Runtime.ClientKubeProxyCert,
serverConfig.ControlConfig.Runtime.ClientKubeProxyKey,
filepath.Join(agentDataDir, "client-kube-proxy.crt"),
filepath.Join(agentDataDir, "client-kube-proxy.key"))
default:
logrus.Fatalf("%s is not a recognized service", service)
}
}
for _, file := range fileList {
if err := os.Remove(file); err == nil {
logrus.Debugf("file %s is deleted", file)
// remove all files
for service, files := range fileMap {
logrus.Info("Rotating certificates for " + service)
for _, file := range files {
if err := os.Remove(file); err == nil {
logrus.Debugf("file %s is deleted", file)
}
}
}
logrus.Infof("Successfully backed up certificates for all services to path %s, please restart %s server or agent to rotate certificates", tlsBackupDir, version.Program)
logrus.Infof("Successfully backed up certificates to %s, please restart %s server or agent to rotate certificates", tlsBackupDir, version.Program)
return nil
}
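In practice an operator would presumably run something like `k3s certificate rotate --service kubelet,kube-proxy` (or omit --service to rotate everything detected for the node type), then restart the server or agent so the deleted certificates are regenerated.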
func backupCertificates(serverDataDir, agentDataDir string) (string, error) {
func backupCertificates(serverDataDir, agentDataDir string, fileMap map[string][]string) (string, error) {
backupDirName := fmt.Sprintf("tls-%d", time.Now().Unix())
serverTLSDir := filepath.Join(serverDataDir, "tls")
tlsBackupDir := filepath.Join(serverDataDir, "tls-"+strconv.Itoa(int(time.Now().Unix())))
tlsBackupDir := filepath.Join(agentDataDir, backupDirName)
// backup the server TLS dir if it exists
if _, err := os.Stat(serverTLSDir); err != nil {
return "", err
}
if err := copy.Copy(serverTLSDir, tlsBackupDir); err != nil {
return "", err
}
certs := []string{
"client-" + version.Program + "-controller.crt",
"client-" + version.Program + "-controller.key",
"client-kubelet.crt",
"client-kubelet.key",
"serving-kubelet.crt",
"serving-kubelet.key",
"client-kube-proxy.crt",
"client-kube-proxy.key",
}
for _, cert := range certs {
agentCert := filepath.Join(agentDataDir, cert)
tlsBackupCert := filepath.Join(tlsBackupDir, cert)
if err := util.CopyFile(agentCert, tlsBackupCert, true); err != nil {
if !os.IsNotExist(err) {
return "", err
}
} else {
tlsBackupDir = filepath.Join(serverDataDir, backupDirName)
if err := copy.Copy(serverTLSDir, tlsBackupDir); err != nil {
return "", err
}
}
return tlsBackupDir, nil
}
func validService(svc string) bool {
for _, service := range services {
if svc == service {
return true
for _, files := range fileMap {
for _, file := range files {
if strings.HasPrefix(file, agentDataDir) {
cert := filepath.Base(file)
tlsBackupCert := filepath.Join(tlsBackupDir, cert)
if err := util.CopyFile(file, tlsBackupCert, true); err != nil {
return "", err
}
}
}
}
return false
return tlsBackupDir, nil
}
func validateCertConfig() error {
for _, s := range cmds.ServicesList {
if !validService(s) {
return errors.New("Service " + s + " is not recognized")
if !services.IsValid(s) {
return errors.New("service " + s + " is not recognized")
}
}
return nil

View File

@ -20,12 +20,14 @@ type Agent struct {
LBServerPort int
ResolvConf string
DataDir string
BindAddress string
NodeIP cli.StringSlice
NodeExternalIP cli.StringSlice
NodeName string
PauseImage string
Snapshotter string
Docker bool
ContainerdNoDefault bool
ContainerRuntimeEndpoint string
DefaultRuntime string
ImageServiceEndpoint string
@ -35,6 +37,7 @@ type Agent struct {
VPNAuth string
VPNAuthFile string
Debug bool
EnablePProf bool
Rootless bool
RootlessAlreadyUnshared bool
WithNodeID bool
@ -50,7 +53,7 @@ type Agent struct {
Taints cli.StringSlice
ImageCredProvBinDir string
ImageCredProvConfig string
AgentReady chan<- struct{}
ContainerRuntimeReady chan<- struct{}
AgentShared
}
@ -220,6 +223,21 @@ var (
Usage: "(agent/networking) (experimental) Disable the agent's client-side load-balancer and connect directly to the configured server address",
Destination: &AgentConfig.DisableLoadBalancer,
}
DisableDefaultRegistryEndpointFlag = &cli.BoolFlag{
Name: "disable-default-registry-endpoint",
Usage: "(agent/containerd) Disables containerd's fallback default registry endpoint when a mirror is configured for that registry",
Destination: &AgentConfig.ContainerdNoDefault,
}
EnablePProfFlag = &cli.BoolFlag{
Name: "enable-pprof",
Usage: "(experimental) Enable pprof endpoint on supervisor port",
Destination: &AgentConfig.EnablePProf,
}
BindAddressFlag = &cli.StringFlag{
Name: "bind-address",
Usage: "(listener) " + version.Program + " bind address (default: 0.0.0.0)",
Destination: &AgentConfig.BindAddress,
}
)
func NewAgentCommand(action func(ctx *cli.Context) error) cli.Command {
@ -269,8 +287,10 @@ func NewAgentCommand(action func(ctx *cli.Context) error) cli.Command {
PauseImageFlag,
SnapshotterFlag,
PrivateRegistryFlag,
DisableDefaultRegistryEndpointFlag,
AirgapExtraRegistryFlag,
NodeIPFlag,
BindAddressFlag,
NodeExternalIPFlag,
ResolvConfFlag,
FlannelIfaceFlag,
@ -279,6 +299,7 @@ func NewAgentCommand(action func(ctx *cli.Context) error) cli.Command {
ExtraKubeletArgs,
ExtraKubeProxyArgs,
// Experimental flags
EnablePProfFlag,
&cli.BoolFlag{
Name: "rootless",
Usage: "(experimental) Run rootless",

View File

@ -23,7 +23,7 @@ var (
DataDirFlag,
&cli.StringSliceFlag{
Name: "service,s",
Usage: "List of services to rotate certificates for. Options include (admin, api-server, controller-manager, scheduler, " + version.Program + "-controller, " + version.Program + "-server, cloud-controller, etcd, auth-proxy, kubelet, kube-proxy)",
Usage: "List of services to manage certificates for. Options include (admin, api-server, controller-manager, scheduler, supervisor, " + version.Program + "-controller, " + version.Program + "-server, cloud-controller, etcd, auth-proxy, kubelet, kube-proxy)",
Value: &ServicesList,
},
}
@ -54,13 +54,21 @@ var (
}
)
func NewCertCommands(rotate, rotateCA func(ctx *cli.Context) error) cli.Command {
func NewCertCommands(check, rotate, rotateCA func(ctx *cli.Context) error) cli.Command {
return cli.Command{
Name: CertCommand,
Usage: "Manage K3s certificates",
SkipFlagParsing: false,
SkipArgReorder: true,
Subcommands: []cli.Command{
{
Name: "check",
Usage: "Check " + version.Program + " component certificates on disk",
SkipFlagParsing: false,
SkipArgReorder: true,
Action: check,
Flags: CertRotateCommandFlags,
},
{
Name: "rotate",
Usage: "Rotate " + version.Program + " component certificates on disk",

View File

@ -21,6 +21,14 @@ var EtcdSnapshotFlags = []cli.Flag{
Destination: &AgentConfig.NodeName,
},
DataDirFlag,
ServerToken,
&cli.StringFlag{
Name: "server, s",
Usage: "(cluster) Server to connect to",
EnvVar: version.ProgramUpper + "_URL",
Value: "https://127.0.0.1:6443",
Destination: &ServerConfig.ServerURL,
},
&cli.StringFlag{
Name: "dir,etcd-snapshot-dir",
Usage: "(db) Directory to save etcd on-demand snapshot. (default: ${data-dir}/db/snapshots)",
@ -37,6 +45,12 @@ var EtcdSnapshotFlags = []cli.Flag{
Usage: "(db) Compress etcd snapshot",
Destination: &ServerConfig.EtcdSnapshotCompress,
},
&cli.IntFlag{
Name: "snapshot-retention,etcd-snapshot-retention",
Usage: "(db) Number of snapshots to retain.",
Destination: &ServerConfig.EtcdSnapshotRetention,
Value: defaultSnapshotRentention,
},
&cli.BoolFlag{
Name: "s3,etcd-s3",
Usage: "(db) Enable backup to S3",
@ -140,12 +154,7 @@ func NewEtcdSnapshotCommands(delete, list, prune, save func(ctx *cli.Context) er
SkipFlagParsing: false,
SkipArgReorder: true,
Action: prune,
Flags: append(EtcdSnapshotFlags, &cli.IntFlag{
Name: "snapshot-retention",
Usage: "(db) Number of snapshots to retain.",
Destination: &ServerConfig.EtcdSnapshotRetention,
Value: defaultSnapshotRentention,
}),
Flags: EtcdSnapshotFlags,
},
},
Flags: EtcdSnapshotFlags,

pkg/cli/cmds/golang.go
View File

@ -0,0 +1,27 @@
package cmds
import (
"fmt"
"runtime"
"strings"
"github.com/k3s-io/k3s/pkg/version"
"github.com/sirupsen/logrus"
)
func ValidateGolang() error {
k8sVersion, _, _ := strings.Cut(version.Version, "+")
if version.UpstreamGolang == "" {
return fmt.Errorf("kubernetes golang build version not set - see 'golang: upstream version' in https://github.com/kubernetes/kubernetes/blob/%s/build/dependencies.yaml", k8sVersion)
}
if v, _, _ := strings.Cut(runtime.Version(), " "); version.UpstreamGolang != v {
return fmt.Errorf("incorrect golang build version - kubernetes %s should be built with %s, runtime version is %s", k8sVersion, version.UpstreamGolang, v)
}
return nil
}
func MustValidateGolang() {
if err := ValidateGolang(); err != nil {
logrus.Fatalf("Failed to validate golang version: %v", err)
}
}
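version.UpstreamGolang is presumably injected at build time; a minimal sketch of how the check is wired (the -X variable path is an assumption based on the package layout):

// Built with something like:
//   go build -ldflags "-X github.com/k3s-io/k3s/pkg/version.UpstreamGolang=go1.22.4"
// (version string hypothetical). Startup then reduces to:
cmds.MustValidateGolang() // logs fatally on any mismatch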

View File

@ -1,9 +1,7 @@
package cmds
import (
"flag"
"fmt"
"strconv"
"sync"
"time"
@ -73,10 +71,6 @@ func checkUnixTimestamp() error {
}
func setupLogging() {
flag.Set("v", strconv.Itoa(LogConfig.VLevel))
flag.Set("vmodule", LogConfig.VModule)
flag.Set("alsologtostderr", strconv.FormatBool(Debug))
flag.Set("logtostderr", strconv.FormatBool(!Debug))
if Debug {
logrus.SetLevel(logrus.DebugLevel)
}

View File

@ -10,8 +10,8 @@ import (
"os/signal"
"syscall"
systemd "github.com/coreos/go-systemd/daemon"
"github.com/erikdubbelboer/gspt"
systemd "github.com/coreos/go-systemd/v22/daemon"
"github.com/k3s-io/k3s/pkg/proctitle"
"github.com/k3s-io/k3s/pkg/version"
"github.com/natefinch/lumberjack"
"github.com/pkg/errors"
@ -42,7 +42,7 @@ func forkIfLoggingOrReaping() error {
}
if enableLogRedirect || enableReaping {
gspt.SetProcTitle(os.Args[0] + " init")
proctitle.SetProcTitle(os.Args[0] + " init")
pwd, err := os.Getwd()
if err != nil {

View File

@ -86,7 +86,7 @@ func NewSecretsEncryptCommands(status, enable, disable, prepare, rotate, reencry
},
{
Name: "rotate-keys",
Usage: "(experimental) Dynamically add a new secrets encryption key and re-encrypt secrets",
Usage: "(experimental) Dynamically rotate secrets encryption keys and re-encrypt secrets",
SkipArgReorder: true,
Action: rotateKeys,
Flags: EncryptFlags,

View File

@ -45,11 +45,10 @@ type Server struct {
DisableAgent bool
KubeConfigOutput string
KubeConfigMode string
KubeConfigGroup string
HelmJobImage string
TLSSan cli.StringSlice
TLSSanSecurity bool
BindAddress string
EnablePProf bool
ExtraAPIArgs cli.StringSlice
ExtraEtcdArgs cli.StringSlice
ExtraSchedulerArgs cli.StringSlice
@ -60,11 +59,11 @@ type Server struct {
DatastoreCAFile string
DatastoreCertFile string
DatastoreKeyFile string
KineTLS bool
AdvertiseIP string
AdvertisePort int
DisableScheduler bool
ServerURL string
MultiClusterCIDR bool
FlannelBackend string
FlannelIPv6Masq bool
FlannelExternalIP bool
@ -77,6 +76,7 @@ type Server struct {
DisableAPIServer bool
DisableControllerManager bool
DisableETCD bool
EmbeddedRegistry bool
ClusterInit bool
ClusterReset bool
ClusterResetRestorePath string
@ -86,6 +86,7 @@ type Server struct {
EncryptSkip bool
SystemDefaultRegistry string
StartupHooks []StartupHook
SupervisorMetrics bool
EtcdSnapshotName string
EtcdDisableSnapshots bool
EtcdExposeMetrics bool
@ -177,11 +178,7 @@ var ServerFlags = []cli.Flag{
VModule,
LogFile,
AlsoLogToStderr,
&cli.StringFlag{
Name: "bind-address",
Usage: "(listener) " + version.Program + " bind address (default: 0.0.0.0)",
Destination: &ServerConfig.BindAddress,
},
BindAddressFlag,
&cli.IntFlag{
Name: "https-listen-port",
Usage: "(listener) HTTPS listen port",
@ -220,11 +217,6 @@ var ServerFlags = []cli.Flag{
Destination: &ServerConfig.FlannelBackend,
Value: "vxlan",
},
&cli.BoolFlag{
Name: "multi-cluster-cidr",
Usage: "(experimental/networking) Enable multiClusterCIDR",
Destination: &ServerConfig.MultiClusterCIDR,
},
&cli.BoolFlag{
Name: "flannel-ipv6-masq",
Usage: "(networking) Enable IPv6 masquerading for pod",
@ -259,6 +251,12 @@ var ServerFlags = []cli.Flag{
Destination: &ServerConfig.KubeConfigMode,
EnvVar: version.ProgramUpper + "_KUBECONFIG_MODE",
},
&cli.StringFlag{
Name: "write-kubeconfig-group",
Usage: "(client) Write kubeconfig with this group",
Destination: &ServerConfig.KubeConfigGroup,
EnvVar: version.ProgramUpper + "_KUBECONFIG_GROUP",
},
&cli.StringFlag{
Name: "helm-job-image",
Usage: "(helm) Default image to use for helm jobs",
@ -315,6 +313,12 @@ var ServerFlags = []cli.Flag{
Usage: "(flags) Customized flag for kube-cloud-controller-manager process",
Value: &ServerConfig.ExtraCloudControllerArgs,
},
&cli.BoolFlag{
Name: "kine-tls",
Usage: "(experimental/db) Enable TLS on the kine etcd server socket",
Destination: &ServerConfig.KineTLS,
Hidden: true,
},
&cli.StringFlag{
Name: "datastore-endpoint",
Usage: "(db) Specify etcd, NATS, MySQL, Postgres, or SQLite (default) data source name",
@ -489,6 +493,16 @@ var ServerFlags = []cli.Flag{
Usage: "(experimental/components) Disable running etcd",
Destination: &ServerConfig.DisableETCD,
},
&cli.BoolFlag{
Name: "embedded-registry",
Usage: "(experimental/components) Enable embedded distributed container registry; requires use of embedded containerd; when enabled agents will also listen on the supervisor port",
Destination: &ServerConfig.EmbeddedRegistry,
},
&cli.BoolFlag{
Name: "supervisor-metrics",
Usage: "(experimental/components) Enable serving " + version.Program + " internal metrics on the supervisor port; when enabled agents will also listen on the supervisor port",
Destination: &ServerConfig.SupervisorMetrics,
},
NodeNameFlag,
WithNodeIDFlag,
NodeLabels,
@ -499,6 +513,7 @@ var ServerFlags = []cli.Flag{
CRIEndpointFlag,
DefaultRuntimeFlag,
ImageServiceEndpointFlag,
DisableDefaultRegistryEndpointFlag,
PauseImageFlag,
SnapshotterFlag,
PrivateRegistryFlag,
@ -526,11 +541,7 @@ var ServerFlags = []cli.Flag{
Destination: &ServerConfig.EncryptSecrets,
},
// Experimental flags
&cli.BoolFlag{
Name: "enable-pprof",
Usage: "(experimental) Enable pprof endpoint on supervisor port",
Destination: &ServerConfig.EnablePProf,
},
EnablePProfFlag,
&cli.BoolFlag{
Name: "rootless",
Usage: "(experimental) Run rootless",

View File

@ -1,99 +1,94 @@
package etcdsnapshot
import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"os"
"path/filepath"
"slices"
"sort"
"strings"
"text/tabwriter"
"time"
"github.com/erikdubbelboer/gspt"
k3s "github.com/k3s-io/k3s/pkg/apis/k3s.cattle.io/v1"
"github.com/k3s-io/k3s/pkg/cli/cmds"
daemonconfig "github.com/k3s-io/k3s/pkg/daemons/config"
"github.com/k3s-io/k3s/pkg/clientaccess"
"github.com/k3s-io/k3s/pkg/cluster/managed"
"github.com/k3s-io/k3s/pkg/etcd"
"github.com/k3s-io/k3s/pkg/proctitle"
"github.com/k3s-io/k3s/pkg/server"
util2 "github.com/k3s-io/k3s/pkg/util"
"github.com/rancher/wrangler/pkg/signals"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
"gopkg.in/yaml.v2"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/cli-runtime/pkg/printers"
)
type etcdCommand struct {
etcd *etcd.ETCD
ctx context.Context
}
var timeout = 2 * time.Minute
// commandSetup sets up common things needed
// for each etcd command.
func commandSetup(app *cli.Context, cfg *cmds.Server, config *server.Config) (*etcdCommand, error) {
ctx := signals.SetupSignalContext()
gspt.SetProcTitle(os.Args[0])
func commandSetup(app *cli.Context, cfg *cmds.Server) (*etcd.SnapshotRequest, *clientaccess.Info, error) {
// hide process arguments from ps output, since they may contain
// database credentials or other secrets.
proctitle.SetProcTitle(os.Args[0] + " etcd-snapshot")
nodeName := app.String("node-name")
if nodeName == "" {
h, err := os.Hostname()
if err != nil {
return nil, err
}
nodeName = h
sr := &etcd.SnapshotRequest{}
// Operation and name are set by the command handler.
// Compression, dir, and retention take the server defaults if not overridden on the CLI.
if app.IsSet("etcd-snapshot-compress") {
sr.Compress = &cfg.EtcdSnapshotCompress
}
if app.IsSet("etcd-snapshot-dir") {
sr.Dir = &cfg.EtcdSnapshotDir
}
if app.IsSet("etcd-snapshot-retention") {
sr.Retention = &cfg.EtcdSnapshotRetention
}
os.Setenv("NODE_NAME", nodeName)
if cfg.EtcdS3 {
sr.S3 = &etcd.SnapshotRequestS3{}
sr.S3.AccessKey = cfg.EtcdS3AccessKey
sr.S3.Bucket = cfg.EtcdS3BucketName
sr.S3.Endpoint = cfg.EtcdS3Endpoint
sr.S3.EndpointCA = cfg.EtcdS3EndpointCA
sr.S3.Folder = cfg.EtcdS3Folder
sr.S3.Insecure = cfg.EtcdS3Insecure
sr.S3.Region = cfg.EtcdS3Region
sr.S3.SecretKey = cfg.EtcdS3SecretKey
sr.S3.SkipSSLVerify = cfg.EtcdS3SkipSSLVerify
sr.S3.Timeout = metav1.Duration{Duration: cfg.EtcdS3Timeout}
// extend request timeout to allow the S3 operation to complete
timeout += cfg.EtcdS3Timeout
}
dataDir, err := server.ResolveDataDir(cfg.DataDir)
if err != nil {
return nil, err
return nil, nil, err
}
config.DisableAgent = true
config.ControlConfig.DataDir = dataDir
config.ControlConfig.EtcdSnapshotName = cfg.EtcdSnapshotName
config.ControlConfig.EtcdSnapshotDir = cfg.EtcdSnapshotDir
config.ControlConfig.EtcdSnapshotCompress = cfg.EtcdSnapshotCompress
config.ControlConfig.EtcdListFormat = strings.ToLower(cfg.EtcdListFormat)
config.ControlConfig.EtcdS3 = cfg.EtcdS3
config.ControlConfig.EtcdS3Endpoint = cfg.EtcdS3Endpoint
config.ControlConfig.EtcdS3EndpointCA = cfg.EtcdS3EndpointCA
config.ControlConfig.EtcdS3SkipSSLVerify = cfg.EtcdS3SkipSSLVerify
config.ControlConfig.EtcdS3AccessKey = cfg.EtcdS3AccessKey
config.ControlConfig.EtcdS3SecretKey = cfg.EtcdS3SecretKey
config.ControlConfig.EtcdS3BucketName = cfg.EtcdS3BucketName
config.ControlConfig.EtcdS3Region = cfg.EtcdS3Region
config.ControlConfig.EtcdS3Folder = cfg.EtcdS3Folder
config.ControlConfig.EtcdS3Insecure = cfg.EtcdS3Insecure
config.ControlConfig.EtcdS3Timeout = cfg.EtcdS3Timeout
config.ControlConfig.Runtime = daemonconfig.NewRuntime(nil)
config.ControlConfig.Runtime.ETCDServerCA = filepath.Join(dataDir, "tls", "etcd", "server-ca.crt")
config.ControlConfig.Runtime.ClientETCDCert = filepath.Join(dataDir, "tls", "etcd", "client.crt")
config.ControlConfig.Runtime.ClientETCDKey = filepath.Join(dataDir, "tls", "etcd", "client.key")
config.ControlConfig.Runtime.KubeConfigAdmin = filepath.Join(dataDir, "cred", "admin.kubeconfig")
e := etcd.NewETCD()
if err := e.SetControlConfig(&config.ControlConfig); err != nil {
return nil, err
if cfg.Token == "" {
fp := filepath.Join(dataDir, "token")
tokenByte, err := os.ReadFile(fp)
if err != nil {
return nil, nil, err
}
cfg.Token = string(bytes.TrimRight(tokenByte, "\n"))
}
info, err := clientaccess.ParseAndValidateToken(cmds.ServerConfig.ServerURL, cfg.Token, clientaccess.WithUser("server"))
return sr, info, err
}
initialized, err := e.IsInitialized()
if err != nil {
return nil, err
func wrapServerError(err error) error {
if errors.Is(err, context.DeadlineExceeded) {
// if the request timed out the server log likely won't contain anything useful,
// since the operation may have actually succeeded despite the client timing out the request.
return err
}
if !initialized {
return nil, fmt.Errorf("etcd database not found in %s", config.ControlConfig.DataDir)
}
sc, err := server.NewContext(ctx, config, false)
if err != nil {
return nil, err
}
config.ControlConfig.Runtime.K3s = sc.K3s
config.ControlConfig.Runtime.Core = sc.Core
return &etcdCommand{etcd: e, ctx: ctx}, nil
return errors.Wrap(err, "see server log for details")
}
// Save triggers an on-demand etcd snapshot operation
@ -105,20 +100,40 @@ func Save(app *cli.Context) error {
}
func save(app *cli.Context, cfg *cmds.Server) error {
var serverConfig server.Config
if len(app.Args()) > 0 {
return util2.ErrCommandNoArgs
}
ec, err := commandSetup(app, cfg, &serverConfig)
// Save always sets retention to 0 to disable automatic pruning.
// Prune can be run manually after save, if desired.
app.Set("etcd-snapshot-retention", "0")
sr, info, err := commandSetup(app, cfg)
if err != nil {
return err
}
serverConfig.ControlConfig.EtcdSnapshotRetention = 0 // disable retention check
sr.Operation = etcd.SnapshotOperationSave
sr.Name = []string{cfg.EtcdSnapshotName}
return ec.etcd.Snapshot(ec.ctx)
b, err := json.Marshal(sr)
if err != nil {
return err
}
r, err := info.Post("/db/snapshot", b, clientaccess.WithTimeout(timeout))
if err != nil {
return wrapServerError(err)
}
resp := &managed.SnapshotResult{}
if err := json.Unmarshal(r, resp); err != nil {
return err
}
for _, name := range resp.Created {
logrus.Infof("Snapshot %s saved.", name)
}
return nil
}
func Delete(app *cli.Context) error {
@ -129,19 +144,42 @@ func Delete(app *cli.Context) error {
}
func delete(app *cli.Context, cfg *cmds.Server) error {
var serverConfig server.Config
ec, err := commandSetup(app, cfg, &serverConfig)
if err != nil {
return err
}
snapshots := app.Args()
if len(snapshots) == 0 {
return errors.New("no snapshots given for removal")
}
return ec.etcd.DeleteSnapshots(ec.ctx, app.Args())
sr, info, err := commandSetup(app, cfg)
if err != nil {
return err
}
sr.Operation = etcd.SnapshotOperationDelete
sr.Name = snapshots
b, err := json.Marshal(sr)
if err != nil {
return err
}
r, err := info.Post("/db/snapshot", b, clientaccess.WithTimeout(timeout))
if err != nil {
return wrapServerError(err)
}
resp := &managed.SnapshotResult{}
if err := json.Unmarshal(r, resp); err != nil {
return err
}
for _, name := range resp.Deleted {
logrus.Infof("Snapshot %s deleted.", name)
}
for _, name := range snapshots {
if !slices.Contains(resp.Deleted, name) {
logrus.Warnf("Snapshot %s not found.", name)
}
}
return nil
}
func List(app *cli.Context) error {
@ -163,30 +201,48 @@ func validEtcdListFormat(format string) bool {
}
func list(app *cli.Context, cfg *cmds.Server) error {
var serverConfig server.Config
ec, err := commandSetup(app, cfg, &serverConfig)
if err != nil {
return err
}
sf, err := ec.etcd.ListSnapshots(ec.ctx)
if err != nil {
return err
}
if cfg.EtcdListFormat != "" && !validEtcdListFormat(cfg.EtcdListFormat) {
return errors.New("invalid output format: " + cfg.EtcdListFormat)
}
sr, info, err := commandSetup(app, cfg)
if err != nil {
return err
}
sr.Operation = etcd.SnapshotOperationList
b, err := json.Marshal(sr)
if err != nil {
return err
}
r, err := info.Post("/db/snapshot", b, clientaccess.WithTimeout(timeout))
if err != nil {
return wrapServerError(err)
}
sf := &k3s.ETCDSnapshotFileList{}
if err := json.Unmarshal(r, sf); err != nil {
return err
}
sort.Slice(sf.Items, func(i, j int) bool {
if sf.Items[i].Status.CreationTime.Equal(sf.Items[j].Status.CreationTime) {
return sf.Items[i].Spec.SnapshotName < sf.Items[j].Spec.SnapshotName
}
return sf.Items[i].Status.CreationTime.Before(sf.Items[j].Status.CreationTime)
})
switch cfg.EtcdListFormat {
case "json":
if err := json.NewEncoder(os.Stdout).Encode(sf); err != nil {
json := printers.JSONPrinter{}
if err := json.PrintObj(sf, os.Stdout); err != nil {
return err
}
return nil
case "yaml":
if err := yaml.NewEncoder(os.Stdout).Encode(sf); err != nil {
yaml := printers.YAMLPrinter{}
if err := yaml.PrintObj(sf, os.Stdout); err != nil {
return err
}
return nil
@ -194,23 +250,9 @@ func list(app *cli.Context, cfg *cmds.Server) error {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 1, ' ', 0)
defer w.Flush()
// Sort snapshots by creation time and key
sfKeys := make([]string, 0, len(sf))
for k := range sf {
sfKeys = append(sfKeys, k)
}
sort.Slice(sfKeys, func(i, j int) bool {
iKey := sfKeys[i]
jKey := sfKeys[j]
if sf[iKey].CreatedAt.Equal(sf[jKey].CreatedAt) {
return iKey < jKey
}
return sf[iKey].CreatedAt.Before(sf[jKey].CreatedAt)
})
fmt.Fprint(w, "Name\tLocation\tSize\tCreated\n")
for _, k := range sfKeys {
fmt.Fprintf(w, "%s\t%s\t%d\t%s\n", sf[k].Name, sf[k].Location, sf[k].Size, sf[k].CreatedAt.Format(time.RFC3339))
for _, esf := range sf.Items {
fmt.Fprintf(w, "%s\t%s\t%d\t%s\n", esf.Spec.SnapshotName, esf.Spec.Location, esf.Status.Size.Value(), esf.Status.CreationTime.Format(time.RFC3339))
}
}
@ -225,14 +267,30 @@ func Prune(app *cli.Context) error {
}
func prune(app *cli.Context, cfg *cmds.Server) error {
var serverConfig server.Config
ec, err := commandSetup(app, cfg, &serverConfig)
sr, info, err := commandSetup(app, cfg)
if err != nil {
return err
}
serverConfig.ControlConfig.EtcdSnapshotRetention = cfg.EtcdSnapshotRetention
sr.Operation = etcd.SnapshotOperationPrune
sr.Name = []string{cfg.EtcdSnapshotName}
return ec.etcd.PruneSnapshots(ec.ctx)
b, err := json.Marshal(sr)
if err != nil {
return err
}
r, err := info.Post("/db/snapshot", b, clientaccess.WithTimeout(timeout))
if err != nil {
return wrapServerError(err)
}
resp := &managed.SnapshotResult{}
if err := json.Unmarshal(r, resp); err != nil {
return err
}
for _, name := range resp.Deleted {
logrus.Infof("Snapshot %s deleted.", name)
}
return nil
}

View File

@ -8,22 +8,23 @@ import (
"path/filepath"
"strings"
"text/tabwriter"
"time"
"github.com/erikdubbelboer/gspt"
"github.com/k3s-io/k3s/pkg/cli/cmds"
"github.com/k3s-io/k3s/pkg/clientaccess"
"github.com/k3s-io/k3s/pkg/proctitle"
"github.com/k3s-io/k3s/pkg/secretsencrypt"
"github.com/k3s-io/k3s/pkg/server"
"github.com/k3s-io/k3s/pkg/version"
"github.com/pkg/errors"
"github.com/urfave/cli"
"k8s.io/utils/pointer"
"k8s.io/utils/ptr"
)
func commandPrep(cfg *cmds.Server) (*clientaccess.Info, error) {
// hide process arguments from ps output, since they may contain
// database credentials or other secrets.
gspt.SetProcTitle(os.Args[0] + " secrets-encrypt")
proctitle.SetProcTitle(os.Args[0] + " secrets-encrypt")
dataDir, err := server.ResolveDataDir(cfg.DataDir)
if err != nil {
@ -53,7 +54,7 @@ func Enable(app *cli.Context) error {
if err != nil {
return err
}
b, err := json.Marshal(server.EncryptionRequest{Enable: pointer.Bool(true)})
b, err := json.Marshal(server.EncryptionRequest{Enable: ptr.To(true)})
if err != nil {
return err
}
@ -65,7 +66,6 @@ func Enable(app *cli.Context) error {
}
func Disable(app *cli.Context) error {
if err := cmds.InitLogging(); err != nil {
return err
}
@ -73,7 +73,7 @@ func Disable(app *cli.Context) error {
if err != nil {
return err
}
b, err := json.Marshal(server.EncryptionRequest{Enable: pointer.Bool(false)})
b, err := json.Marshal(server.EncryptionRequest{Enable: ptr.To(false)})
if err != nil {
return err
}
@ -154,7 +154,7 @@ func Prepare(app *cli.Context) error {
return err
}
b, err := json.Marshal(server.EncryptionRequest{
Stage: pointer.String(secretsencrypt.EncryptionPrepare),
Stage: ptr.To(secretsencrypt.EncryptionPrepare),
Force: cmds.ServerConfig.EncryptForce,
})
if err != nil {
@ -176,7 +176,7 @@ func Rotate(app *cli.Context) error {
return err
}
b, err := json.Marshal(server.EncryptionRequest{
Stage: pointer.String(secretsencrypt.EncryptionRotate),
Stage: ptr.To(secretsencrypt.EncryptionRotate),
Force: cmds.ServerConfig.EncryptForce,
})
if err != nil {
@ -198,7 +198,7 @@ func Reencrypt(app *cli.Context) error {
return err
}
b, err := json.Marshal(server.EncryptionRequest{
Stage: pointer.String(secretsencrypt.EncryptionReencryptActive),
Stage: ptr.To(secretsencrypt.EncryptionReencryptActive),
Force: cmds.ServerConfig.EncryptForce,
Skip: cmds.ServerConfig.EncryptSkip,
})
@ -221,12 +221,13 @@ func RotateKeys(app *cli.Context) error {
return err
}
b, err := json.Marshal(server.EncryptionRequest{
Stage: pointer.String(secretsencrypt.EncryptionRotateKeys),
Stage: ptr.To(secretsencrypt.EncryptionRotateKeys),
})
if err != nil {
return err
}
if err = info.Put("/v1-"+version.Program+"/encrypt/config", b); err != nil {
timeout := 70 * time.Second
if err = info.Put("/v1-"+version.Program+"/encrypt/config", b, clientaccess.WithTimeout(timeout)); err != nil {
return wrapServerError(err)
}
fmt.Println("keys rotated, reencryption started")

View File

@ -9,22 +9,27 @@ import (
"strings"
"time"
systemd "github.com/coreos/go-systemd/daemon"
"github.com/erikdubbelboer/gspt"
systemd "github.com/coreos/go-systemd/v22/daemon"
"github.com/gorilla/mux"
"github.com/k3s-io/k3s/pkg/agent"
"github.com/k3s-io/k3s/pkg/agent/https"
"github.com/k3s-io/k3s/pkg/agent/loadbalancer"
"github.com/k3s-io/k3s/pkg/cli/cmds"
"github.com/k3s-io/k3s/pkg/clientaccess"
"github.com/k3s-io/k3s/pkg/daemons/config"
"github.com/k3s-io/k3s/pkg/datadir"
"github.com/k3s-io/k3s/pkg/etcd"
k3smetrics "github.com/k3s-io/k3s/pkg/metrics"
"github.com/k3s-io/k3s/pkg/proctitle"
"github.com/k3s-io/k3s/pkg/profile"
"github.com/k3s-io/k3s/pkg/rootless"
"github.com/k3s-io/k3s/pkg/server"
"github.com/k3s-io/k3s/pkg/spegel"
"github.com/k3s-io/k3s/pkg/util"
"github.com/k3s-io/k3s/pkg/version"
"github.com/k3s-io/k3s/pkg/vpn"
"github.com/pkg/errors"
"github.com/rancher/wrangler/pkg/signals"
"github.com/rancher/wrangler/v3/pkg/signals"
"github.com/sirupsen/logrus"
"github.com/urfave/cli"
utilnet "k8s.io/apimachinery/pkg/util/net"
@ -46,13 +51,13 @@ func RunWithControllers(app *cli.Context, leaderControllers server.CustomControl
}
func run(app *cli.Context, cfg *cmds.Server, leaderControllers server.CustomControllers, controllers server.CustomControllers) error {
var (
err error
)
var err error
// Validate build env
cmds.MustValidateGolang()
// hide process arguments from ps output, since they may contain
// database credentials or other secrets.
gspt.SetProcTitle(os.Args[0] + " server")
proctitle.SetProcTitle(os.Args[0] + " server")
// If the agent is enabled, evacuate cgroup v2 before doing anything else that may fork.
// If the agent is disabled, we don't need to bother doing this as it is only the kubelet
@ -105,11 +110,11 @@ func run(app *cli.Context, cfg *cmds.Server, leaderControllers server.CustomCont
}
}
agentReady := make(chan struct{})
containerRuntimeReady := make(chan struct{})
serverConfig := server.Config{}
serverConfig.DisableAgent = cfg.DisableAgent
serverConfig.ControlConfig.Runtime = config.NewRuntime(agentReady)
serverConfig.ControlConfig.Runtime = config.NewRuntime(containerRuntimeReady)
serverConfig.ControlConfig.Token = cfg.Token
serverConfig.ControlConfig.AgentToken = cfg.AgentToken
serverConfig.ControlConfig.JoinURL = cfg.ServerURL
@ -128,29 +133,30 @@ func run(app *cli.Context, cfg *cmds.Server, leaderControllers server.CustomCont
serverConfig.ControlConfig.DataDir = cfg.DataDir
serverConfig.ControlConfig.KubeConfigOutput = cfg.KubeConfigOutput
serverConfig.ControlConfig.KubeConfigMode = cfg.KubeConfigMode
serverConfig.ControlConfig.KubeConfigGroup = cfg.KubeConfigGroup
serverConfig.ControlConfig.HelmJobImage = cfg.HelmJobImage
serverConfig.ControlConfig.Rootless = cfg.Rootless
serverConfig.ControlConfig.ServiceLBNamespace = cfg.ServiceLBNamespace
serverConfig.ControlConfig.SANs = util.SplitStringSlice(cfg.TLSSan)
serverConfig.ControlConfig.SANSecurity = cfg.TLSSanSecurity
serverConfig.ControlConfig.BindAddress = cfg.BindAddress
serverConfig.ControlConfig.BindAddress = cmds.AgentConfig.BindAddress
serverConfig.ControlConfig.SupervisorPort = cfg.SupervisorPort
serverConfig.ControlConfig.HTTPSPort = cfg.HTTPSPort
serverConfig.ControlConfig.APIServerPort = cfg.APIServerPort
serverConfig.ControlConfig.APIServerBindAddress = cfg.APIServerBindAddress
serverConfig.ControlConfig.EnablePProf = cfg.EnablePProf
serverConfig.ControlConfig.ExtraAPIArgs = cfg.ExtraAPIArgs
serverConfig.ControlConfig.ExtraControllerArgs = cfg.ExtraControllerArgs
serverConfig.ControlConfig.ExtraEtcdArgs = cfg.ExtraEtcdArgs
serverConfig.ControlConfig.ExtraSchedulerAPIArgs = cfg.ExtraSchedulerArgs
serverConfig.ControlConfig.ClusterDomain = cfg.ClusterDomain
serverConfig.ControlConfig.Datastore.NotifyInterval = 5 * time.Second
serverConfig.ControlConfig.Datastore.Endpoint = cfg.DatastoreEndpoint
serverConfig.ControlConfig.Datastore.BackendTLSConfig.CAFile = cfg.DatastoreCAFile
serverConfig.ControlConfig.Datastore.BackendTLSConfig.CertFile = cfg.DatastoreCertFile
serverConfig.ControlConfig.Datastore.BackendTLSConfig.KeyFile = cfg.DatastoreKeyFile
serverConfig.ControlConfig.KineTLS = cfg.KineTLS
serverConfig.ControlConfig.AdvertiseIP = cfg.AdvertiseIP
serverConfig.ControlConfig.AdvertisePort = cfg.AdvertisePort
serverConfig.ControlConfig.MultiClusterCIDR = cfg.MultiClusterCIDR
serverConfig.ControlConfig.FlannelBackend = cfg.FlannelBackend
serverConfig.ControlConfig.FlannelIPv6Masq = cfg.FlannelIPv6Masq
serverConfig.ControlConfig.FlannelExternalIP = cfg.FlannelExternalIP
@ -164,10 +170,15 @@ func run(app *cli.Context, cfg *cmds.Server, leaderControllers server.CustomCont
serverConfig.ControlConfig.DisableAPIServer = cfg.DisableAPIServer
serverConfig.ControlConfig.DisableScheduler = cfg.DisableScheduler
serverConfig.ControlConfig.DisableControllerManager = cfg.DisableControllerManager
serverConfig.ControlConfig.DisableAgent = cfg.DisableAgent
serverConfig.ControlConfig.EmbeddedRegistry = cfg.EmbeddedRegistry
serverConfig.ControlConfig.ClusterInit = cfg.ClusterInit
serverConfig.ControlConfig.EncryptSecrets = cfg.EncryptSecrets
serverConfig.ControlConfig.EtcdExposeMetrics = cfg.EtcdExposeMetrics
serverConfig.ControlConfig.EtcdDisableSnapshots = cfg.EtcdDisableSnapshots
serverConfig.ControlConfig.SupervisorMetrics = cfg.SupervisorMetrics
serverConfig.ControlConfig.VLevel = cmds.LogConfig.VLevel
serverConfig.ControlConfig.VModule = cmds.LogConfig.VModule
if !cfg.EtcdDisableSnapshots || cfg.ClusterReset {
serverConfig.ControlConfig.EtcdSnapshotCompress = cfg.EtcdSnapshotCompress
@ -190,9 +201,6 @@ func run(app *cli.Context, cfg *cmds.Server, leaderControllers server.CustomCont
logrus.Info("ETCD snapshots are disabled")
}
if cfg.MultiClusterCIDR {
logrus.Warn("multiClusterCIDR alpha feature will be removed in future releases")
}
if cfg.ClusterResetRestorePath != "" && !cfg.ClusterReset {
return errors.New("invalid flag use; --cluster-reset required with --cluster-reset-restore-path")
}
@ -209,6 +217,14 @@ func run(app *cli.Context, cfg *cmds.Server, leaderControllers server.CustomCont
return errors.New("invalid flag use; --server is required with --disable-etcd")
}
if serverConfig.ControlConfig.Datastore.Endpoint != "" && serverConfig.ControlConfig.DisableAPIServer {
return errors.New("invalid flag use; cannot use --disable-apiserver with --datastore-endpoint")
}
if serverConfig.ControlConfig.Datastore.Endpoint != "" && serverConfig.ControlConfig.DisableETCD {
return errors.New("invalid flag use; cannot use --disable-etcd with --datastore-endpoint")
}
if serverConfig.ControlConfig.DisableAPIServer {
// Servers without a local apiserver need to connect to the apiserver via the proxy load-balancer.
serverConfig.ControlConfig.APIServerPort = cmds.AgentConfig.LBServerPort
@ -393,6 +409,7 @@ func run(app *cli.Context, cfg *cmds.Server, leaderControllers server.CustomCont
}
tlsMinVersionArg := getArgValueFromList("tls-min-version", serverConfig.ControlConfig.ExtraAPIArgs)
serverConfig.ControlConfig.MinTLSVersion = tlsMinVersionArg
serverConfig.ControlConfig.TLSMinVersion, err = kubeapiserverflag.TLSVersion(tlsMinVersionArg)
if err != nil {
return errors.Wrap(err, "invalid tls-min-version")
@ -422,6 +439,7 @@ func run(app *cli.Context, cfg *cmds.Server, leaderControllers server.CustomCont
}
serverConfig.ControlConfig.ExtraAPIArgs = append(serverConfig.ControlConfig.ExtraAPIArgs, "tls-cipher-suites="+strings.Join(tlsCipherSuites, ","))
}
serverConfig.ControlConfig.CipherSuites = tlsCipherSuites
serverConfig.ControlConfig.TLSCipherSuites, err = kubeapiserverflag.TLSCipherSuites(tlsCipherSuites)
if err != nil {
return errors.Wrap(err, "invalid tls-cipher-suites")
@ -435,6 +453,7 @@ func run(app *cli.Context, cfg *cmds.Server, leaderControllers server.CustomCont
serverConfig.ControlConfig.DisableControllerManager = true
serverConfig.ControlConfig.DisableScheduler = true
serverConfig.ControlConfig.DisableCCM = true
serverConfig.ControlConfig.DisableServiceLB = true
// If the supervisor and apiserver are on the same port, everything is running embedded
// and we don't need the kubelet or containerd up to perform a cluster reset.
@ -507,7 +526,7 @@ func run(app *cli.Context, cfg *cmds.Server, leaderControllers server.CustomCont
}
agentConfig := cmds.AgentConfig
agentConfig.AgentReady = agentReady
agentConfig.ContainerRuntimeReady = containerRuntimeReady
agentConfig.Debug = app.GlobalBool("debug")
agentConfig.DataDir = filepath.Dir(serverConfig.ControlConfig.DataDir)
agentConfig.ServerURL = url
@ -542,6 +561,31 @@ func run(app *cli.Context, cfg *cmds.Server, leaderControllers server.CustomCont
go getAPIAddressFromEtcd(ctx, serverConfig, agentConfig)
}
// Until the agent is run and retrieves config from the server, we won't know
// if the embedded registry is enabled. If it is not enabled, these are not
// used as the registry is never started.
registry := spegel.DefaultRegistry
registry.Bootstrapper = spegel.NewChainingBootstrapper(
spegel.NewServerBootstrapper(&serverConfig.ControlConfig),
spegel.NewAgentBootstrapper(cfg.ServerURL, token, agentConfig.DataDir),
spegel.NewSelfBootstrapper(),
)
registry.Router = func(ctx context.Context, nodeConfig *config.Node) (*mux.Router, error) {
return https.Start(ctx, nodeConfig, serverConfig.ControlConfig.Runtime)
}
// same deal for metrics - these are not used if the extra metrics listener is not enabled.
metrics := k3smetrics.DefaultMetrics
metrics.Router = func(ctx context.Context, nodeConfig *config.Node) (*mux.Router, error) {
return https.Start(ctx, nodeConfig, serverConfig.ControlConfig.Runtime)
}
// and for pprof as well
pprof := profile.DefaultProfiler
pprof.Router = func(ctx context.Context, nodeConfig *config.Node) (*mux.Router, error) {
return https.Start(ctx, nodeConfig, serverConfig.ControlConfig.Runtime)
}
if cfg.DisableAgent {
agentConfig.ContainerRuntimeEndpoint = "/dev/null"
return agent.RunStandalone(ctx, agentConfig)

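Reviewer note: the registry, metrics, and pprof handlers above share one pattern; each is configured with a Router closure that is only invoked later, if the config retrieved by the agent shows the feature enabled. A minimal standalone sketch of that lazy-router idea, using illustrative stand-ins rather than the real k3s types (https.Start, config.Node, mux.Router):

    package main

    import (
        "context"
        "fmt"
        "net/http"
        "net/http/httptest"
    )

    // routerFunc mirrors the shape of the Router fields assigned above.
    type routerFunc func(ctx context.Context) (http.Handler, error)

    // component stands in for the registry/metrics/pprof defaults: configured
    // up front, but its router closure runs only if the feature is enabled.
    type component struct {
        enabled bool
        router  routerFunc
    }

    func (c *component) start(ctx context.Context) (*httptest.Server, error) {
        if !c.enabled {
            // Feature is off: the router closure is never called and
            // nothing is started, matching the comments in the diff.
            return nil, nil
        }
        h, err := c.router(ctx)
        if err != nil {
            return nil, err
        }
        return httptest.NewServer(h), nil
    }

    func main() {
        c := &component{
            enabled: true,
            router: func(ctx context.Context) (http.Handler, error) {
                mux := http.NewServeMux()
                mux.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
                    fmt.Fprintln(w, "ok")
                })
                return mux, nil
            },
        }
        srv, err := c.start(context.Background())
        if err != nil {
            panic(err)
        }
        defer srv.Close()
        fmt.Println("serving on", srv.URL)
    }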
View File

@ -11,10 +11,10 @@ import (
"text/tabwriter"
"time"
"github.com/erikdubbelboer/gspt"
"github.com/k3s-io/k3s/pkg/cli/cmds"
"github.com/k3s-io/k3s/pkg/clientaccess"
"github.com/k3s-io/k3s/pkg/kubeadm"
"github.com/k3s-io/k3s/pkg/proctitle"
"github.com/k3s-io/k3s/pkg/server"
"github.com/k3s-io/k3s/pkg/util"
"github.com/k3s-io/k3s/pkg/version"
@ -27,7 +27,7 @@ import (
"k8s.io/client-go/tools/clientcmd"
bootstrapapi "k8s.io/cluster-bootstrap/token/api"
bootstraputil "k8s.io/cluster-bootstrap/token/util"
"k8s.io/utils/pointer"
"k8s.io/utils/ptr"
)
func Create(app *cli.Context) error {
@ -155,7 +155,7 @@ func Rotate(app *cli.Context) error {
return err
}
b, err := json.Marshal(server.TokenRotateRequest{
NewToken: pointer.String(cmds.TokenConfig.NewToken),
NewToken: ptr.To(cmds.TokenConfig.NewToken),
})
if err != nil {
return err
@ -171,7 +171,7 @@ func Rotate(app *cli.Context) error {
func serverAccess(cfg *cmds.Token) (*clientaccess.Info, error) {
// hide process arguments from ps output, since they likely contain tokens.
gspt.SetProcTitle(os.Args[0] + " token")
proctitle.SetProcTitle(os.Args[0] + " token")
dataDir, err := server.ResolveDataDir("")
if err != nil {

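The pointer → ptr change above is a mechanical migration: k8s.io/utils/ptr replaces the per-type helpers (pointer.String, pointer.Bool, ...) with one generic pair. For illustration:

    package main

    import (
        "fmt"

        "k8s.io/utils/ptr"
    )

    func main() {
        // Old style: pointer.String("x"), pointer.Bool(false), one helper per type.
        // New style: a single generic helper covers every type.
        s := ptr.To("new-token") // *string
        b := ptr.To(false)       // *bool

        // ptr.Deref dereferences safely, returning the default on nil.
        fmt.Println(ptr.Deref(s, ""), ptr.Deref(b, true)) // new-token false

        var missing *int
        fmt.Println(ptr.Deref(missing, 42)) // 42
    }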
View File

@ -6,6 +6,7 @@ import (
"crypto/tls"
"crypto/x509"
"encoding/hex"
"encoding/json"
"fmt"
"io"
"net/http"
@ -18,6 +19,9 @@ import (
"github.com/pkg/errors"
certutil "github.com/rancher/dynamiclistener/cert"
"github.com/sirupsen/logrus"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/net"
)
const (
@ -41,6 +45,9 @@ var (
}
)
// ClientOption is a callback to mutate the http client prior to use
type ClientOption func(*http.Client)
// Info contains fields that track parsed parts of a cluster join token
type Info struct {
*kubeadm.BootstrapTokenString
@ -233,7 +240,7 @@ func parseToken(token string) (*Info, error) {
// If the CA bundle is not empty but does not contain any valid certs, it validates using
// an empty CA bundle (which will always fail).
// If valid cert+key paths can be loaded from the provided paths, they are used for client cert auth.
func GetHTTPClient(cacerts []byte, certFile, keyFile string) *http.Client {
func GetHTTPClient(cacerts []byte, certFile, keyFile string, option ...ClientOption) *http.Client {
if len(cacerts) == 0 {
return defaultClient
}
@ -250,18 +257,29 @@ func GetHTTPClient(cacerts []byte, certFile, keyFile string) *http.Client {
if err == nil {
tlsConfig.Certificates = []tls.Certificate{cert}
}
return &http.Client{
client := &http.Client{
Timeout: defaultClientTimeout,
Transport: &http.Transport{
DisableKeepAlives: true,
TLSClientConfig: tlsConfig,
},
}
for _, o := range option {
o(client)
}
return client
}
func WithTimeout(d time.Duration) ClientOption {
return func(c *http.Client) {
c.Timeout = d
c.Transport.(*http.Transport).ResponseHeaderTimeout = d
}
}
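With the variadic ClientOption, call sites can tune the client per request, e.g. i.Get(path, WithTimeout(5*time.Second)). A self-contained sketch of the same functional-options shape; newClient here is a stripped-down stand-in for GetHTTPClient, not the real function:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // ClientOption is the callback type added in the diff.
    type ClientOption func(*http.Client)

    // WithTimeout sets both the overall client timeout and the per-request
    // response header timeout, as above. The type assertion is safe here only
    // because newClient always installs an *http.Transport.
    func WithTimeout(d time.Duration) ClientOption {
        return func(c *http.Client) {
            c.Timeout = d
            c.Transport.(*http.Transport).ResponseHeaderTimeout = d
        }
    }

    func newClient(options ...ClientOption) *http.Client {
        client := &http.Client{
            Timeout: 30 * time.Second,
            Transport: &http.Transport{
                DisableKeepAlives: true,
                TLSClientConfig:   &tls.Config{},
            },
        }
        for _, o := range options {
            o(client)
        }
        return client
    }

    func main() {
        c := newClient(WithTimeout(5 * time.Second))
        fmt.Println(c.Timeout) // 5s
    }

One caveat worth flagging: in the hunk above, the early return defaultClient path exits before the options loop runs, so options only take effect when a CA bundle is supplied.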
// Get makes a request to a subpath of info's BaseURL
func (i *Info) Get(path string) ([]byte, error) {
func (i *Info) Get(path string, option ...ClientOption) ([]byte, error) {
u, err := url.Parse(i.BaseURL)
if err != nil {
return nil, err
@ -272,11 +290,11 @@ func (i *Info) Get(path string) ([]byte, error) {
}
p.Scheme = u.Scheme
p.Host = u.Host
return get(p.String(), GetHTTPClient(i.CACerts, i.CertFile, i.KeyFile), i.Username, i.Password, i.Token())
return get(p.String(), GetHTTPClient(i.CACerts, i.CertFile, i.KeyFile, option...), i.Username, i.Password, i.Token())
}
// Put makes a request to a subpath of info's BaseURL
func (i *Info) Put(path string, body []byte) error {
func (i *Info) Put(path string, body []byte, option ...ClientOption) error {
u, err := url.Parse(i.BaseURL)
if err != nil {
return err
@ -287,7 +305,22 @@ func (i *Info) Put(path string, body []byte) error {
}
p.Scheme = u.Scheme
p.Host = u.Host
return put(p.String(), body, GetHTTPClient(i.CACerts, i.CertFile, i.KeyFile), i.Username, i.Password, i.Token())
return put(p.String(), body, GetHTTPClient(i.CACerts, i.CertFile, i.KeyFile, option...), i.Username, i.Password, i.Token())
}
// Post makes a request to a subpath of info's BaseURL
func (i *Info) Post(path string, body []byte, option ...ClientOption) ([]byte, error) {
u, err := url.Parse(i.BaseURL)
if err != nil {
return nil, err
}
p, err := url.Parse(path)
if err != nil {
return nil, err
}
p.Scheme = u.Scheme
p.Host = u.Host
return post(p.String(), body, GetHTTPClient(i.CACerts, i.CertFile, i.KeyFile, option...), i.Username, i.Password, i.Token())
}
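All three verbs share the same URL handling: parse the base, parse the sub-path, then force the base's scheme and host onto it, so the path is always resolved against the configured server. A sketch of just that step (the endpoint path is illustrative):

    package main

    import (
        "fmt"
        "net/url"
    )

    // joinToBase reproduces the URL handling shared by Get/Put/Post above.
    func joinToBase(base, path string) (string, error) {
        u, err := url.Parse(base)
        if err != nil {
            return "", err
        }
        p, err := url.Parse(path)
        if err != nil {
            return "", err
        }
        p.Scheme = u.Scheme
        p.Host = u.Host
        return p.String(), nil
    }

    func main() {
        s, _ := joinToBase("https://10.0.0.1:6443", "/v1-k3s/some-endpoint")
        fmt.Println(s) // https://10.0.0.1:6443/v1-k3s/some-endpoint
    }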
// setServer sets the BaseURL and CACerts fields of the Info by connecting to the server
@ -385,13 +418,8 @@ func get(u string, client *http.Client, username, password, token string) ([]byt
if err != nil {
return nil, err
}
defer resp.Body.Close()
if resp.StatusCode < 200 || resp.StatusCode > 299 {
return nil, fmt.Errorf("%s: %s", u, resp.Status)
}
return io.ReadAll(resp.Body)
return readBody(resp)
}
// put makes a request to a url using a provided client and credentials,
@ -412,14 +440,59 @@ func put(u string, body []byte, client *http.Client, username, password, token s
if err != nil {
return err
}
defer resp.Body.Close()
respBody, _ := io.ReadAll(resp.Body)
if resp.StatusCode < 200 || resp.StatusCode > 299 {
return fmt.Errorf("%s: %s %s", u, resp.Status, string(respBody))
}
return nil
_, err = readBody(resp)
return err
}
// post makes a request to a url using a provided client and credentials,
// returning the response body and error.
func post(u string, body []byte, client *http.Client, username, password, token string) ([]byte, error) {
req, err := http.NewRequest(http.MethodPost, u, bytes.NewBuffer(body))
if err != nil {
return nil, err
}
if token != "" {
req.Header.Add("Authorization", "Bearer "+token)
} else if username != "" {
req.SetBasicAuth(username, password)
}
resp, err := client.Do(req)
if err != nil {
return nil, err
}
return readBody(resp)
}
// readBody attempts to get the body from the response. If the response status
// code is not in the 2XX range, an error is returned. An attempt is made to
// decode the error body as a metav1.Status and return a StatusError, if
// possible.
func readBody(resp *http.Response) ([]byte, error) {
defer resp.Body.Close()
b, err := io.ReadAll(resp.Body)
if err != nil {
return nil, err
}
warnings, _ := net.ParseWarningHeaders(resp.Header["Warning"])
for _, warning := range warnings {
if warning.Code == 299 && len(warning.Text) != 0 {
logrus.Warn(warning.Text)
}
}
if resp.StatusCode < 200 || resp.StatusCode > 299 {
status := metav1.Status{}
if err := json.Unmarshal(b, &status); err == nil && status.Kind == "Status" {
return nil, &apierrors.StatusError{ErrStatus: status}
}
return nil, fmt.Errorf("%s: %s", resp.Request.URL, resp.Status)
}
return b, nil
}
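Because non-2XX bodies that decode as a metav1.Status now come back as a typed *apierrors.StatusError, callers can branch on the failure reason with the standard helpers instead of string-matching. A self-contained sketch using a fabricated 403 body:

    package main

    import (
        "encoding/json"
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // The kind of body an apiserver-style handler returns on a 403.
        body := []byte(`{"kind":"Status","apiVersion":"v1","status":"Failure",` +
            `"reason":"Forbidden","message":"nope","code":403}`)

        // Same decode-and-wrap logic as readBody above.
        var err error
        status := metav1.Status{}
        if jsonErr := json.Unmarshal(body, &status); jsonErr == nil && status.Kind == "Status" {
            err = &apierrors.StatusError{ErrStatus: status}
        }

        // Typed checks now work on the client side.
        fmt.Println(apierrors.IsForbidden(err)) // true
        fmt.Println(apierrors.IsNotFound(err))  // false
    }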
// FormatToken takes a username:password string or join token, and a path to a certificate bundle, and

View File

@ -7,15 +7,15 @@ import (
"github.com/k3s-io/k3s/pkg/util"
"github.com/k3s-io/k3s/pkg/version"
"github.com/rancher/wrangler/pkg/apply"
"github.com/rancher/wrangler/pkg/generated/controllers/apps"
appsclient "github.com/rancher/wrangler/pkg/generated/controllers/apps/v1"
"github.com/rancher/wrangler/pkg/generated/controllers/core"
coreclient "github.com/rancher/wrangler/pkg/generated/controllers/core/v1"
"github.com/rancher/wrangler/pkg/generated/controllers/discovery"
discoveryclient "github.com/rancher/wrangler/pkg/generated/controllers/discovery/v1"
"github.com/rancher/wrangler/pkg/generic"
"github.com/rancher/wrangler/pkg/start"
"github.com/rancher/wrangler/v3/pkg/apply"
"github.com/rancher/wrangler/v3/pkg/generated/controllers/apps"
appsclient "github.com/rancher/wrangler/v3/pkg/generated/controllers/apps/v1"
"github.com/rancher/wrangler/v3/pkg/generated/controllers/core"
coreclient "github.com/rancher/wrangler/v3/pkg/generated/controllers/core/v1"
"github.com/rancher/wrangler/v3/pkg/generated/controllers/discovery"
discoveryclient "github.com/rancher/wrangler/v3/pkg/generated/controllers/discovery/v1"
"github.com/rancher/wrangler/v3/pkg/generic"
"github.com/rancher/wrangler/v3/pkg/start"
"github.com/sirupsen/logrus"
meta "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/wait"
@ -28,11 +28,12 @@ import (
// Config describes externally-configurable cloud provider configuration.
// This is normally unmarshalled from a JSON config file.
type Config struct {
LBEnabled bool `json:"lbEnabled"`
LBImage string `json:"lbImage"`
LBNamespace string `json:"lbNamespace"`
NodeEnabled bool `json:"nodeEnabled"`
Rootless bool `json:"rootless"`
LBDefaultPriorityClassName string `json:"lbDefaultPriorityClassName"`
LBEnabled bool `json:"lbEnabled"`
LBImage string `json:"lbImage"`
LBNamespace string `json:"lbNamespace"`
NodeEnabled bool `json:"nodeEnabled"`
Rootless bool `json:"rootless"`
}
type k3s struct {
@ -56,10 +57,11 @@ func init() {
var err error
k := k3s{
Config: Config{
LBEnabled: true,
LBImage: DefaultLBImage,
LBNamespace: DefaultLBNS,
NodeEnabled: true,
LBDefaultPriorityClassName: DefaultLBPriorityClassName,
LBEnabled: true,
LBImage: DefaultLBImage,
LBNamespace: DefaultLBNS,
NodeEnabled: true,
},
}

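The new lbDefaultPriorityClassName field follows the existing Config pattern: defaulted in init(), overridable from the JSON cloud-config file. A sketch of that flow, with the struct copied from the diff and an illustrative override:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Config matches the struct in the diff.
    type Config struct {
        LBDefaultPriorityClassName string `json:"lbDefaultPriorityClassName"`
        LBEnabled                  bool   `json:"lbEnabled"`
        LBImage                    string `json:"lbImage"`
        LBNamespace                string `json:"lbNamespace"`
        NodeEnabled                bool   `json:"nodeEnabled"`
        Rootless                   bool   `json:"rootless"`
    }

    func main() {
        // Defaults as set in init(); DefaultLBPriorityClassName is
        // "system-node-critical" and DefaultLBNS is "kube-system".
        cfg := Config{
            LBDefaultPriorityClassName: "system-node-critical",
            LBEnabled:                  true,
            LBImage:                    "rancher/klipper-lb:v0.4.7",
            LBNamespace:                "kube-system",
            NodeEnabled:                true,
        }

        // A user-supplied cloud-config overriding only the priority class.
        raw := []byte(`{"lbDefaultPriorityClassName": "high-priority"}`)
        if err := json.Unmarshal(raw, &cfg); err != nil {
            panic(err)
        }
        fmt.Println(cfg.LBDefaultPriorityClassName) // high-priority
    }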
View File

@ -38,15 +38,34 @@ func (k *k3s) InstanceMetadata(ctx context.Context, node *corev1.Node) (*cloudpr
return nil, errors.New("address annotations not yet set")
}
addresses := []corev1.NodeAddress{}
metadata := &cloudprovider.InstanceMetadata{
ProviderID: fmt.Sprintf("%s://%s", version.Program, node.Name),
InstanceType: version.Program,
}
if node.Spec.ProviderID != "" {
metadata.ProviderID = node.Spec.ProviderID
}
if instanceType := node.Labels[corev1.LabelInstanceTypeStable]; instanceType != "" {
metadata.InstanceType = instanceType
}
if region := node.Labels[corev1.LabelTopologyRegion]; region != "" {
metadata.Region = region
}
if zone := node.Labels[corev1.LabelTopologyZone]; zone != "" {
metadata.Zone = zone
}
// check internal address
if address := node.Annotations[InternalIPKey]; address != "" {
for _, v := range strings.Split(address, ",") {
addresses = append(addresses, corev1.NodeAddress{Type: corev1.NodeInternalIP, Address: v})
metadata.NodeAddresses = append(metadata.NodeAddresses, corev1.NodeAddress{Type: corev1.NodeInternalIP, Address: v})
}
} else if address = node.Labels[InternalIPKey]; address != "" {
addresses = append(addresses, corev1.NodeAddress{Type: corev1.NodeInternalIP, Address: address})
metadata.NodeAddresses = append(metadata.NodeAddresses, corev1.NodeAddress{Type: corev1.NodeInternalIP, Address: address})
} else {
logrus.Infof("Couldn't find node internal ip annotation or label on node %s", node.Name)
}
@ -54,26 +73,20 @@ func (k *k3s) InstanceMetadata(ctx context.Context, node *corev1.Node) (*cloudpr
// check external address
if address := node.Annotations[ExternalIPKey]; address != "" {
for _, v := range strings.Split(address, ",") {
addresses = append(addresses, corev1.NodeAddress{Type: corev1.NodeExternalIP, Address: v})
metadata.NodeAddresses = append(metadata.NodeAddresses, corev1.NodeAddress{Type: corev1.NodeExternalIP, Address: v})
}
} else if address = node.Labels[ExternalIPKey]; address != "" {
addresses = append(addresses, corev1.NodeAddress{Type: corev1.NodeExternalIP, Address: address})
metadata.NodeAddresses = append(metadata.NodeAddresses, corev1.NodeAddress{Type: corev1.NodeExternalIP, Address: address})
}
// check hostname
if address := node.Annotations[HostnameKey]; address != "" {
addresses = append(addresses, corev1.NodeAddress{Type: corev1.NodeHostName, Address: address})
metadata.NodeAddresses = append(metadata.NodeAddresses, corev1.NodeAddress{Type: corev1.NodeHostName, Address: address})
} else if address = node.Labels[HostnameKey]; address != "" {
addresses = append(addresses, corev1.NodeAddress{Type: corev1.NodeHostName, Address: address})
metadata.NodeAddresses = append(metadata.NodeAddresses, corev1.NodeAddress{Type: corev1.NodeHostName, Address: address})
} else {
logrus.Infof("Couldn't find node hostname annotation or label on node %s", node.Name)
}
return &cloudprovider.InstanceMetadata{
ProviderID: fmt.Sprintf("%s://%s", version.Program, node.Name),
InstanceType: version.Program,
NodeAddresses: addresses,
Zone: "",
Region: "",
}, nil
return metadata, nil
}
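For reference when reviewing the label fallbacks above, the corev1 constants consulted resolve to the standard well-known labels (values from k8s.io/api/core/v1):

    // Shown literally for reference; the code above uses the corev1 constants.
    const (
        labelInstanceTypeStable = "node.kubernetes.io/instance-type" // corev1.LabelInstanceTypeStable
        labelTopologyRegion     = "topology.kubernetes.io/region"    // corev1.LabelTopologyRegion
        labelTopologyZone       = "topology.kubernetes.io/zone"      // corev1.LabelTopologyZone
    )

The new unit test in the next file exercises exactly these override paths.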

View File

@ -0,0 +1,132 @@
package cloudprovider
import (
"context"
"reflect"
"testing"
"github.com/k3s-io/k3s/pkg/version"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
cloudprovider "k8s.io/cloud-provider"
)
func Test_UnitK3sInstanceMetadata(t *testing.T) {
nodeName := "test-node"
nodeInternalIP := "10.0.0.1"
nodeExternalIP := "1.2.3.4"
tests := []struct {
name string
node *corev1.Node
want *cloudprovider.InstanceMetadata
wantErr bool
}{
{
name: "No Annotations",
node: &corev1.Node{},
wantErr: true,
},
{
name: "Internal IP",
node: &corev1.Node{
ObjectMeta: metav1.ObjectMeta{
Name: nodeName,
Annotations: map[string]string{
InternalIPKey: nodeInternalIP,
},
},
},
want: &cloudprovider.InstanceMetadata{
InstanceType: version.Program,
ProviderID: version.Program + "://" + nodeName,
NodeAddresses: []corev1.NodeAddress{
{Type: corev1.NodeInternalIP, Address: nodeInternalIP},
},
},
},
{
name: "Internal IP, External IP",
node: &corev1.Node{
ObjectMeta: metav1.ObjectMeta{
Name: nodeName,
Annotations: map[string]string{
InternalIPKey: nodeInternalIP,
ExternalIPKey: nodeExternalIP,
},
},
},
want: &cloudprovider.InstanceMetadata{
InstanceType: version.Program,
ProviderID: version.Program + "://" + nodeName,
NodeAddresses: []corev1.NodeAddress{
{Type: corev1.NodeInternalIP, Address: nodeInternalIP},
{Type: corev1.NodeExternalIP, Address: nodeExternalIP},
},
},
},
{
name: "Internal IP, External IP, Hostname",
node: &corev1.Node{
ObjectMeta: metav1.ObjectMeta{
Name: nodeName,
Annotations: map[string]string{
InternalIPKey: nodeInternalIP,
ExternalIPKey: nodeExternalIP,
HostnameKey: nodeName + ".example.com",
},
},
},
want: &cloudprovider.InstanceMetadata{
InstanceType: version.Program,
ProviderID: version.Program + "://" + nodeName,
NodeAddresses: []corev1.NodeAddress{
{Type: corev1.NodeInternalIP, Address: nodeInternalIP},
{Type: corev1.NodeExternalIP, Address: nodeExternalIP},
{Type: corev1.NodeHostName, Address: nodeName + ".example.com"},
},
},
},
{
name: "Custom Metadata",
node: &corev1.Node{
ObjectMeta: metav1.ObjectMeta{
Name: nodeName,
Annotations: map[string]string{
InternalIPKey: nodeInternalIP,
},
Labels: map[string]string{
corev1.LabelInstanceTypeStable: "test.t1",
corev1.LabelTopologyRegion: "region",
corev1.LabelTopologyZone: "zone",
},
},
Spec: corev1.NodeSpec{
ProviderID: "test://i-abc",
},
},
want: &cloudprovider.InstanceMetadata{
InstanceType: "test.t1",
ProviderID: "test://i-abc",
NodeAddresses: []corev1.NodeAddress{
{Type: corev1.NodeInternalIP, Address: nodeInternalIP},
},
Region: "region",
Zone: "zone",
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
k := &k3s{}
got, err := k.InstanceMetadata(context.Background(), tt.node)
if (err != nil) != tt.wantErr {
t.Errorf("k3s.InstanceMetadata() error = %v, wantErr %v", err, tt.wantErr)
return
}
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("k3s.InstanceMetadata() = %+v\nWant = %+v", got, tt.want)
}
})
}
}

View File

@ -10,11 +10,11 @@ import (
"github.com/k3s-io/k3s/pkg/util"
"github.com/k3s-io/k3s/pkg/version"
"github.com/rancher/wrangler/pkg/condition"
coreclient "github.com/rancher/wrangler/pkg/generated/controllers/core/v1"
discoveryclient "github.com/rancher/wrangler/pkg/generated/controllers/discovery/v1"
"github.com/rancher/wrangler/pkg/merr"
"github.com/rancher/wrangler/pkg/objectset"
"github.com/rancher/wrangler/v3/pkg/condition"
coreclient "github.com/rancher/wrangler/v3/pkg/generated/controllers/core/v1"
discoveryclient "github.com/rancher/wrangler/v3/pkg/generated/controllers/discovery/v1"
"github.com/rancher/wrangler/v3/pkg/merr"
"github.com/rancher/wrangler/v3/pkg/objectset"
"github.com/sirupsen/logrus"
apps "k8s.io/api/apps/v1"
core "k8s.io/api/core/v1"
@ -23,12 +23,15 @@ import (
meta "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/apimachinery/pkg/util/wait"
utilfeature "k8s.io/apiserver/pkg/util/feature"
"k8s.io/client-go/util/retry"
ccmapp "k8s.io/cloud-provider/app"
servicehelper "k8s.io/cloud-provider/service/helpers"
"k8s.io/kubernetes/pkg/features"
utilsnet "k8s.io/utils/net"
utilpointer "k8s.io/utils/pointer"
utilsptr "k8s.io/utils/ptr"
)
var (
@ -38,16 +41,18 @@ var (
daemonsetNodeLabel = "svccontroller." + version.Program + ".cattle.io/enablelb"
daemonsetNodePoolLabel = "svccontroller." + version.Program + ".cattle.io/lbpool"
nodeSelectorLabel = "svccontroller." + version.Program + ".cattle.io/nodeselector"
priorityAnnotation = "svccontroller." + version.Program + ".cattle.io/priorityclassname"
controllerName = ccmapp.DefaultInitFuncConstructors["service"].InitContext.ClientName
)
const (
Ready = condition.Cond("Ready")
DefaultLBNS = meta.NamespaceSystem
Ready = condition.Cond("Ready")
DefaultLBNS = meta.NamespaceSystem
DefaultLBPriorityClassName = "system-node-critical"
)
var (
DefaultLBImage = "rancher/klipper-lb:v0.4.4"
DefaultLBImage = "rancher/klipper-lb:v0.4.7"
)
func (k *k3s) Register(ctx context.Context,
@ -318,10 +323,8 @@ func (k *k3s) patchStatus(svc *core.Service, previousStatus, newStatus *core.Loa
// If at least one node has External IPs available, only external IPs are returned.
// If no nodes have External IPs set, the Internal IPs of all nodes running pods are returned.
func (k *k3s) podIPs(pods []*core.Pod, svc *core.Service, readyNodes map[string]bool) ([]string, error) {
// Go doesn't have sets so we stuff things into a map of bools and then get lists of keys
// to determine the unique set of IPs in use by pods.
extIPs := map[string]bool{}
intIPs := map[string]bool{}
extIPs := sets.Set[string]{}
intIPs := sets.Set[string]{}
for _, pod := range pods {
if pod.Spec.NodeName == "" || pod.Status.PodIP == "" {
@ -343,25 +346,18 @@ func (k *k3s) podIPs(pods []*core.Pod, svc *core.Service, readyNodes map[string]
for _, addr := range node.Status.Addresses {
if addr.Type == core.NodeExternalIP {
extIPs[addr.Address] = true
extIPs.Insert(addr.Address)
} else if addr.Type == core.NodeInternalIP {
intIPs[addr.Address] = true
intIPs.Insert(addr.Address)
}
}
}
keys := func(addrs map[string]bool) (ips []string) {
for k := range addrs {
ips = append(ips, k)
}
return ips
}
var ips []string
if len(extIPs) > 0 {
ips = keys(extIPs)
if extIPs.Len() > 0 {
ips = extIPs.UnsortedList()
} else {
ips = keys(intIPs)
ips = intIPs.UnsortedList()
}
ips, err := filterByIPFamily(ips, svc)
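The hand-rolled map[string]bool sets and the keys closure are replaced by the generic sets.Set from k8s.io/apimachinery/pkg/util/sets. Its API in brief:

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/util/sets"
    )

    func main() {
        extIPs := sets.Set[string]{}
        extIPs.Insert("1.2.3.4")
        extIPs.Insert("1.2.3.4") // duplicates collapse, as with the old map
        extIPs.Insert("5.6.7.8")

        fmt.Println(extIPs.Len())          // 2
        fmt.Println(extIPs.Has("1.2.3.4")) // true

        // UnsortedList replaces the removed keys() helper; sets.List is the
        // sorted equivalent when deterministic order matters.
        fmt.Println(extIPs.UnsortedList())
        fmt.Println(sets.List(extIPs)) // [1.2.3.4 5.6.7.8]
    }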
@ -434,19 +430,19 @@ func (k *k3s) deleteDaemonSet(ctx context.Context, svc *core.Service) error {
func (k *k3s) newDaemonSet(svc *core.Service) (*apps.DaemonSet, error) {
name := generateName(svc)
oneInt := intstr.FromInt(1)
priorityClassName := k.getPriorityClassName(svc)
localTraffic := servicehelper.RequestsOnlyLocalTraffic(svc)
sourceRanges, err := servicehelper.GetLoadBalancerSourceRanges(svc)
sourceRangesSet, err := servicehelper.GetLoadBalancerSourceRanges(svc)
if err != nil {
return nil, err
}
sourceRanges := strings.Join(sourceRangesSet.StringSlice(), ",")
var sysctls []core.Sysctl
for _, ipFamily := range svc.Spec.IPFamilies {
switch ipFamily {
case core.IPv4Protocol:
sysctls = append(sysctls, core.Sysctl{Name: "net.ipv4.ip_forward", Value: "1"})
case core.IPv6Protocol:
sysctls = append(sysctls, core.Sysctl{Name: "net.ipv6.conf.all.forwarding", Value: "1"})
if ipFamily == core.IPv6Protocol && sourceRanges == "0.0.0.0/0" {
// The upstream default load-balancer source range only includes IPv4, even if the service is IPv6-only or dual-stack.
// If using the default range, and IPv6 is enabled, also allow IPv6.
sourceRanges += ",::/0"
}
}
@ -479,10 +475,14 @@ func (k *k3s) newDaemonSet(svc *core.Service) (*apps.DaemonSet, error) {
},
},
Spec: core.PodSpec{
PriorityClassName: priorityClassName,
ServiceAccountName: "svclb",
AutomountServiceAccountToken: utilpointer.Bool(false),
AutomountServiceAccountToken: utilsptr.To(false),
SecurityContext: &core.PodSecurityContext{
Sysctls: sysctls,
Sysctls: []core.Sysctl{
{Name: "net.ipv4.ip_forward", Value: "1"},
{Name: "net.ipv6.conf.all.forwarding", Value: "1"},
},
},
Tolerations: []core.Toleration{
{
@ -532,7 +532,7 @@ func (k *k3s) newDaemonSet(svc *core.Service) (*apps.DaemonSet, error) {
},
{
Name: "SRC_RANGES",
Value: strings.Join(sourceRanges.StringSlice(), " "),
Value: sourceRanges,
},
{
Name: "DEST_PROTO",
@ -558,7 +558,7 @@ func (k *k3s) newDaemonSet(svc *core.Service) (*apps.DaemonSet, error) {
Name: "DEST_IPS",
ValueFrom: &core.EnvVarSource{
FieldRef: &core.ObjectFieldSelector{
FieldPath: "status.hostIP",
FieldPath: getHostIPsFieldPath(),
},
},
},
@ -571,7 +571,7 @@ func (k *k3s) newDaemonSet(svc *core.Service) (*apps.DaemonSet, error) {
},
core.EnvVar{
Name: "DEST_IPS",
Value: strings.Join(svc.Spec.ClusterIPs, " "),
Value: strings.Join(svc.Spec.ClusterIPs, ","),
},
)
}
@ -686,6 +686,17 @@ func (k *k3s) removeFinalizer(ctx context.Context, svc *core.Service) (*core.Ser
return svc, nil
}
// getPriorityClassName returns the value of the priority class name annotation on the service,
// or the system default priority class name.
func (k *k3s) getPriorityClassName(svc *core.Service) string {
if svc != nil {
if v, ok := svc.Annotations[priorityAnnotation]; ok {
return v
}
}
return k.LBDefaultPriorityClassName
}
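With version.Program = "k3s", the priorityAnnotation built earlier in this file resolves to svccontroller.k3s.cattle.io/priorityclassname. A sketch of the lookup order; resolvePriorityClassName is a local stand-in for the method above:

    package main

    import (
        "fmt"

        core "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    const priorityAnnotation = "svccontroller.k3s.cattle.io/priorityclassname"

    // resolvePriorityClassName mirrors getPriorityClassName: per-service
    // annotation first, configured default otherwise.
    func resolvePriorityClassName(svc *core.Service, defaultClass string) string {
        if svc != nil {
            if v, ok := svc.Annotations[priorityAnnotation]; ok {
                return v
            }
        }
        return defaultClass
    }

    func main() {
        svc := &core.Service{
            ObjectMeta: metav1.ObjectMeta{
                Name:        "my-lb",
                Annotations: map[string]string{priorityAnnotation: "high-priority"},
            },
        }
        fmt.Println(resolvePriorityClassName(svc, "system-node-critical")) // high-priority
        fmt.Println(resolvePriorityClassName(nil, "system-node-critical")) // system-node-critical
    }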
// generateName generates a distinct name for the DaemonSet based on the service name and UID
func generateName(svc *core.Service) string {
return fmt.Sprintf("svclb-%s-%s", svc.Name, svc.UID[:8])
@ -703,3 +714,10 @@ func ingressToString(ingresses []core.LoadBalancerIngress) []string {
}
return parts
}
func getHostIPsFieldPath() string {
if utilfeature.DefaultFeatureGate.Enabled(features.PodHostIPs) {
return "status.hostIPs"
}
return "status.hostIP"
}
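getHostIPsFieldPath switches the DEST_IPS downward-API reference to status.hostIPs when the PodHostIPs feature gate is enabled, so dual-stack svclb pods receive every host IP instead of only the primary one. Roughly the env var the DaemonSet ends up with in the gate-enabled case (assuming, as with status.podIPs, that multiple values render comma-joined):

    package main

    import (
        "fmt"

        core "k8s.io/api/core/v1"
    )

    func main() {
        env := core.EnvVar{
            Name: "DEST_IPS",
            ValueFrom: &core.EnvVarSource{
                FieldRef: &core.ObjectFieldSelector{
                    // getHostIPsFieldPath() result with PodHostIPs enabled;
                    // "status.hostIP" when the gate is off.
                    FieldPath: "status.hostIPs",
                },
            },
        }
        fmt.Printf("%s <- %s\n", env.Name, env.ValueFrom.FieldRef.FieldPath)
    }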

View File

@ -5,7 +5,7 @@ import (
"sync"
"github.com/k3s-io/k3s/pkg/util"
controllerv1 "github.com/rancher/wrangler/pkg/generated/controllers/core/v1"
controllerv1 "github.com/rancher/wrangler/v3/pkg/generated/controllers/core/v1"
"github.com/sirupsen/logrus"
v1 "k8s.io/api/core/v1"
)

Some files were not shown because too many files have changed in this diff.