Also add bandwidth and firewall plugins. The bandwidth plugin is
automatically registered with the appropriate capability, but the
firewall plugin must be configured by the user if they want to use it.
Ref: https://www.cni.dev/plugins/current/meta/firewall/
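For illustration, a minimal sketch of what a user-added firewall entry could look like in a CNI conflist, generated here from Go. The network name and plugin chain are assumptions, not the actual k3s defaults; see the linked docs for the fields the firewall plugin supports.

```go
// Illustrative only: build a conflist with a user-supplied firewall entry.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conflist := map[string]interface{}{
		"cniVersion": "1.0.0",
		"name":       "example-net", // hypothetical network name
		"plugins": []interface{}{
			map[string]interface{}{"type": "flannel"}, // primary plugin
			map[string]interface{}{ // registered automatically by k3s
				"type":         "bandwidth",
				"capabilities": map[string]bool{"bandwidth": true},
			},
			map[string]interface{}{"type": "firewall"}, // user-supplied entry
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}
```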
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
* local-storage: Fix permissions
/var/lib/rancher/k3s/storage/ should be 0700
/var/lib/rancher/k3s/storage/* should be 0777
Fixes #2348
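A minimal sketch of the intended layout, using the paths and modes above; the function name is illustrative, not the actual local-storage provisioner code:

```go
// Illustrative: parent dir locked down, per-volume dirs world-writable.
package main

import (
	"log"
	"os"
	"path/filepath"
)

func ensureVolumeDir(name string) (string, error) {
	base := "/var/lib/rancher/k3s/storage"
	// The parent should only be traversable by k3s itself.
	if err := os.MkdirAll(base, 0700); err != nil {
		return "", err
	}
	dir := filepath.Join(base, name)
	if err := os.MkdirAll(dir, 0777); err != nil {
		return "", err
	}
	// MkdirAll is subject to the umask, so set the mode explicitly.
	return dir, os.Chmod(dir, 0777)
}

func main() {
	if _, err := ensureVolumeDir("pvc-example"); err != nil {
		log.Fatal(err)
	}
}
```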
Signed-off-by: Boleyn Su <boleyn.su@gmail.com>
* Fix pod command field type
* Fix integration test
Signed-off-by: Derek Nola <derek.nola@suse.com>
---------
Signed-off-by: Boleyn Su <boleyn.su@gmail.com>
Signed-off-by: Derek Nola <derek.nola@suse.com>
Co-authored-by: Brad Davidson <brad@oatmail.org>
Co-authored-by: Derek Nola <derek.nola@suse.com>
* Add helper function for multiple arguments in stringslice
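A sketch of what such a variadic helper might look like; the name and package are assumptions, not necessarily the signature added to k3s:

```go
// Hypothetical variadic helper for a stringslice utility package.
package stringslice

// AddAll appends each value that is not already present in the slice,
// so callers can pass multiple arguments in one call.
func AddAll(slice []string, values ...string) []string {
	for _, v := range values {
		present := false
		for _, s := range slice {
			if s == v {
				present = true
				break
			}
		}
		if !present {
			slice = append(slice, v)
		}
	}
	return slice
}
```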
Signed-off-by: Derek Nola <derek.nola@suse.com>
* Cleanup server setup with util function
Signed-off-by: Derek Nola <derek.nola@suse.com>
Several places in the code used a 5-second retry loop to wait on
Runtime.Core to be set. This caused a race condition where OnChange
handlers could be added after the Wrangler shared informers were already
started. When this happened, the handlers were never called, because
they were registered too late to be wired into the shared informers
they relied upon.
Fix that by requiring anything that waits on Runtime.Core to run from a
cluster controller startup hook that is guaranteed to be called before
the shared informers are started, instead of just firing it off in a
goroutine that retries until it is set.
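A minimal sketch of the hook pattern described above, with hypothetical names rather than the actual k3s cluster controller API:

```go
// Illustrative: handlers that need Runtime.Core register via a startup
// hook that is guaranteed to run before the shared informers start.
package main

import (
	"context"
	"fmt"
)

type hook func(ctx context.Context) error

type cluster struct {
	startupHooks []hook
}

// RegisterStartupHook queues work that needs Runtime.Core; hooks run
// before the shared informers start, so handlers registered inside one
// are always wired into the caches.
func (c *cluster) RegisterStartupHook(h hook) {
	c.startupHooks = append(c.startupHooks, h)
}

func (c *cluster) Start(ctx context.Context) error {
	for _, h := range c.startupHooks {
		if err := h(ctx); err != nil {
			return err
		}
	}
	// Only now are the shared informers started.
	fmt.Println("starting shared informers")
	return nil
}

func main() {
	c := &cluster{}
	c.RegisterStartupHook(func(ctx context.Context) error {
		fmt.Println("registering OnChange handlers that need Runtime.Core")
		return nil
	})
	_ = c.Start(context.Background())
}
```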
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Don't set up the agent tunnel authorizer on agentless servers, and warn when agentless servers won't have a way to reach in-cluster endpoints.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Fixes an issue where CRDs were being created without schema, allowing
resources with invalid content to be created, later stalling the
controller ListWatch event channel when the invalid resources could not
be deserialized.
This also requires moving Addon GVK tracking from a status field to
an annotation, as the GroupVersionKind type has special handling
internal to Kubernetes that prevents it from being serialized to the CRD
when schema validation is enabled.
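A sketch of the annotation-based tracking, using the upstream apimachinery types; the annotation key is a hypothetical stand-in:

```go
// Illustrative: persist a GroupVersionKind as an annotation string
// instead of a typed status field.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

const gvkAnnotation = "example.io/last-applied-gvk" // hypothetical key

func setGVKAnnotation(meta *metav1.ObjectMeta, gvk schema.GroupVersionKind) {
	if meta.Annotations == nil {
		meta.Annotations = map[string]string{}
	}
	// The GVK serializes cleanly as a plain string, unlike the typed
	// field, which Kubernetes will not round-trip once schema
	// validation is enabled.
	meta.Annotations[gvkAnnotation] = gvk.String()
}

func main() {
	meta := metav1.ObjectMeta{}
	setGVKAnnotation(&meta, schema.GroupVersionKind{
		Group: "k3s.cattle.io", Version: "v1", Kind: "Addon",
	})
	fmt.Println(meta.Annotations)
}
```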
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
* Bump go version to 1.20.3 to match upstream
* Bump cri-dockerd
* Bump golangci-lint
* go generate
* Bump selinux in cgroup test
* Bump to v1.27.1 tags
* Release documentation improvements
* Only run upgrade e2e test on PR
Signed-off-by: Derek Nola <derek.nola@suse.com>
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Co-authored-by: Brad Davidson <brad.davidson@rancher.com>
* Remove deprecated nodeSelector label beta.kubernetes.io/os
Problem:
The nodeSelector label beta.kubernetes.io/os in the CoreDNS deployment was deprecated in Kubernetes 1.14 and will likely be removed soon
Solution:
Change the nodeSelector to the stable kubernetes.io/os label
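Expressed against the upstream Go API types, the change amounts to swapping the selector key; this snippet is illustrative, not the CoreDNS manifest itself:

```go
// Illustrative: the stable OS label replacing the deprecated beta one.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		// Was: "beta.kubernetes.io/os": "linux" (deprecated since 1.14).
		NodeSelector: map[string]string{"kubernetes.io/os": "linux"},
	}
	fmt.Println(spec.NodeSelector)
}
```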
Signed-off-by: Dan Mills <evilhamsterman@gmail.com>
We need to send the full chain in order for cross-signing to work
properly during switchover to a new root.
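A sketch of serving the full chain, with assumed PEM file names; the key point is that the certificate file contains the leaf followed by the intermediate and cross-signed root, so clients can build a path to either root:

```go
// Illustrative: load a leaf + intermediates + cross-signed root bundle.
package main

import (
	"crypto/tls"
	"log"
)

func main() {
	// tls.LoadX509KeyPair accepts a bundle in the cert file: the leaf
	// first, then the rest of the chain, all of which is sent to clients.
	cert, err := tls.LoadX509KeyPair("serving-bundle.pem", "serving.key")
	if err != nil {
		log.Fatal(err)
	}
	cfg := &tls.Config{Certificates: []tls.Certificate{cert}}
	_ = cfg // plug into the listener, e.g. http.Server{TLSConfig: cfg}
}
```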
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
* Wait for the kubelet port to be ready before setting it
* Wait for the kubelet to update the Ready status before reading the port
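A sketch of that ordering against the public client-go API; the polling loop and node-name handling are assumptions:

```go
// Illustrative: only read the kubelet port after the node reports Ready.
package agent

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitForKubeletPort(ctx context.Context, client kubernetes.Interface, nodeName string) (int32, error) {
	for {
		node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				// The port and Ready condition are posted by the same
				// kubelet heartbeat, so gate on Ready before trusting it.
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					if port := node.Status.DaemonEndpoints.KubeletEndpoint.Port; port != 0 {
						return port, nil
					}
				}
			}
		}
		select {
		case <-ctx.Done():
			return 0, ctx.Err()
		case <-time.After(time.Second):
		}
	}
}
```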
Signed-off-by: Daishan Peng <daishan@acorn.io>
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Co-authored-by: Brad Davidson <brad.davidson@rancher.com>
Turns out etcd-only nodes were never running **any** of the controllers,
so allowing multiple controllers didn't really fix things.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Prevents errors when starting with fail-closed webhooks
Also, use panic instead of Fatalf so that the CloudControllerManager rescue can handle the error
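A sketch of why panic is preferable here: a recover-based rescue can catch the failure and retry, while Fatalf exits the process immediately. The names are illustrative, not the actual k3s CloudControllerManager wiring.

```go
// Illustrative: turn a panic into an error the caller can retry on.
package main

import (
	"fmt"
	"log"
)

// rescue runs fn and converts a panic into a returned error.
func rescue(fn func()) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("component panicked: %v", r)
		}
	}()
	fn()
	return nil
}

func main() {
	err := rescue(func() {
		// With fail-closed webhooks, early API calls may fail; panic here
		// so the rescue above can decide to retry rather than exit.
		panic("admission webhook unavailable")
	})
	log.Println(err) // retry/backoff would go here instead of exiting
}
```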
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Allow bootstrapping with kubeadm bootstrap token strings or existing
Kubelet certs. This allows agents to join the cluster using kubeadm
bootstrap tokens, as created with the `k3s token create` command.
When the token expires or is deleted, agents can successfully restart by
authenticating with their kubelet certificate via node authentication.
If the token is gone and the node is deleted from the cluster, node auth
will fail and they will be prevented from rejoining the cluster until
provided with a valid token.
Servers still must be bootstrapped with the static cluster token, as
they will need to know it to decrypt the bootstrap data.
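A sketch of distinguishing the two agent credentials described above: a kubeadm-style bootstrap token ("id.secret", matching the upstream format) versus falling back to the kubelet certificate. The fallback logic is an assumption for illustration, not the actual k3s bootstrap code.

```go
// Illustrative: classify the credential an agent would join with.
package main

import (
	"fmt"
	"regexp"
)

// Upstream kubeadm bootstrap token format: 6-char id, 16-char secret.
var bootstrapTokenRE = regexp.MustCompile(`^[a-z0-9]{6}\.[a-z0-9]{16}$`)

func credentialKind(token string, haveKubeletCert bool) string {
	switch {
	case bootstrapTokenRE.MatchString(token):
		return "bootstrap token" // may expire or be deleted
	case haveKubeletCert:
		return "kubelet certificate via node auth" // after token is gone
	default:
		return "valid token required to (re)join"
	}
}

func main() {
	fmt.Println(credentialKind("abcdef.0123456789abcdef", false))
	fmt.Println(credentialKind("", true))
}
```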
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
This command must be run on a server while the service is running. After this command completes, all the servers in the cluster should be restarted to load the new CA files.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>