get() is called in a loop until the client configuration is successfully
retrieved. Each iteration tries to configure the apiserver proxy, which
in turn creates a new load balancer. Skip creating a new load balancer
if we already have one.
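A minimal sketch of that guard, assuming hypothetical `proxy` and `loadBalancer` types (not the actual k3s implementation):
```go
package sketch

import "context"

// loadBalancer and proxy are placeholders for this sketch only.
type loadBalancer struct{ serverURL string }

func newLoadBalancer(ctx context.Context, serverURL string) (*loadBalancer, error) {
	return &loadBalancer{serverURL: serverURL}, nil
}

type proxy struct{ lb *loadBalancer }

// ensureLoadBalancer creates the load balancer at most once; later
// iterations of the retry loop reuse the existing instance instead of
// creating a new one on every attempt.
func (p *proxy) ensureLoadBalancer(ctx context.Context, serverURL string) error {
	if p.lb != nil {
		return nil // already have a load balancer; skip creating another
	}
	lb, err := newLoadBalancer(ctx, serverURL)
	if err != nil {
		return err
	}
	p.lb = lb
	return nil
}
```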
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
If the port wanted by the client load balancer is in TIME_WAIT, startup
will fail. Set SO_REUSEPORT so that it can be listened on again
immediately.
The configurable Listen call wants a context, so plumb that through as
well.
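A rough sketch of such a listener on Linux, using the standard net.ListenConfig Control hook; this is illustrative, not the exact k3s code:
```go
package sketch

import (
	"context"
	"net"
	"syscall"

	"golang.org/x/sys/unix"
)

// reusePortListen listens on addr with SO_REUSEADDR and SO_REUSEPORT
// enabled, so startup does not fail if the port is still in TIME_WAIT
// from a previous run. The caller's context is plumbed through to
// ListenConfig.Listen.
func reusePortListen(ctx context.Context, addr string) (net.Listener, error) {
	lc := net.ListenConfig{
		Control: func(network, address string, conn syscall.RawConn) error {
			var sockErr error
			if err := conn.Control(func(fd uintptr) {
				if sockErr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_REUSEADDR, 1); sockErr != nil {
					return
				}
				sockErr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_REUSEPORT, 1)
			}); err != nil {
				return err
			}
			return sockErr
		},
	}
	return lc.Listen(ctx, "tcp", addr)
}
```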
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
* Always use static ports for the load-balancers
This fixes an issue where RKE2 kube-proxy DaemonSet pods failed to
communicate with the apiserver when RKE2 was restarted, because the
load-balancer used a different port every time it started up.
This also changes the apiserver load-balancer port to be 1 below the
supervisor port instead of 1 above it, which keeps the apiserver port
consistent at 6443 across servers and agents on RKE2 (see the sketch
after the list below).
Additional fixes below were required to successfully test and use this change
on etcd-only nodes.
* Actually add lb-server-port flag to CLI
* Fix nil pointer when starting server with --disable-etcd but no --server
* Don't try to use full URI as initial load-balancer endpoint
* Fix etcd load-balancer pool updates
* Update dynamiclistener to fix cert updates on etcd-only nodes
* Handle recursive initial server URL in load balancer
* Don't run the deploy controller on etcd-only nodes
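As a rough illustration of the static port layout (the helper below is hypothetical, not the k3s implementation): the apiserver load-balancer now listens one port below the supervisor load-balancer, so a supervisor load-balancer on 6444 puts the apiserver load-balancer on the familiar 6443.
```go
package main

import "fmt"

// loadBalancerPorts shows the relationship described above: the
// apiserver load-balancer sits one port below the supervisor
// load-balancer instead of one above it.
func loadBalancerPorts(supervisorLBPort int) (supervisor, apiserver int) {
	return supervisorLBPort, supervisorLBPort - 1
}

func main() {
	sup, api := loadBalancerPorts(6444)
	fmt.Printf("supervisor lb=%d apiserver lb=%d\n", sup, api) // 6444 / 6443
}
```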
* Add functionality for etcd snapshot/restore to and from S3-compatible backends (see the sketch below).
* Update etcd restore functionality to extract and write certificates and configs from the snapshot.
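A minimal sketch of uploading an etcd snapshot to an S3-compatible backend using the minio-go client. The helper name and wiring are illustrative; the endpoint, credentials, and bucket are assumed to come from the etcd S3 configuration, and this is not the exact k3s code:
```go
package sketch

import (
	"context"
	"path/filepath"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

// uploadSnapshot pushes a local etcd snapshot file to an S3-compatible
// bucket, keyed by the snapshot file name.
func uploadSnapshot(ctx context.Context, endpoint, accessKey, secretKey, bucket, snapshotPath string) error {
	client, err := minio.New(endpoint, &minio.Options{
		Creds:  credentials.NewStaticV4(accessKey, secretKey, ""),
		Secure: true,
	})
	if err != nil {
		return err
	}
	objectName := filepath.Base(snapshotPath)
	_, err = client.FPutObject(ctx, bucket, objectName, snapshotPath, minio.PutObjectOptions{})
	return err
}
```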
Servers should always be upgraded before agents, but generally this
isn't required because things are compatible between versions. In this
case we're OK with failing closed if the user upgrades out of order, but
we should give a clearer message about what steps are required to fix
the issue.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
We have had a couple of issues with newer agents not working with old
servers or vice versa. Add a CI test that exercises uplevel/downlevel
server/agent combinations against the latest, stable, and previous
branches.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
K3s upgrades are driven by watching static files and manifests for
changes, which in turn trigger helm-controller to apply them. To avoid
any incompatibility, it seems reasonable to only allow the Traefik
v1->v2 upgrade when there is no existing custom Traefik HelmChartConfig
in the cluster, as sketched below.
This also separates the CRDs into their own chart to support CRD
upgrades.
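A sketch of that guard; the getter interface is a stand-in for whatever client or lister the deploy controller has for helm.cattle.io/v1 HelmChartConfig objects, and the "kube-system"/"traefik" names are assumptions:
```go
package sketch

import (
	"context"

	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// helmChartConfigGetter is a stand-in for a HelmChartConfig client.
type helmChartConfigGetter interface {
	Get(ctx context.Context, namespace, name string, opts metav1.GetOptions) (interface{}, error)
}

// canUpgradeTraefik returns true only if there is no custom Traefik
// HelmChartConfig in the cluster, i.e. the v1 -> v2 upgrade will not
// clobber user-provided configuration.
func canUpgradeTraefik(ctx context.Context, configs helmChartConfigGetter) (bool, error) {
	_, err := configs.Get(ctx, "kube-system", "traefik", metav1.GetOptions{})
	if errors.IsNotFound(err) {
		return true, nil // no custom config, safe to upgrade
	}
	if err != nil {
		return false, err
	}
	return false, nil // custom HelmChartConfig exists, skip the upgrade
}
```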
Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
The apiserver may serve read requests before it allows writes, in which
case flannel will crash on startup when trying to configure the subnet
manager.
Fix this by waiting for the apiserver to become fully ready before
starting flannel and the network policy controller.
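A minimal sketch of such a readiness wait, polling the apiserver /readyz endpoint via client-go; the interval and wiring are illustrative and may differ from the actual k3s helper:
```go
package sketch

import (
	"context"
	"time"

	"github.com/sirupsen/logrus"
	"k8s.io/client-go/kubernetes"
)

// waitForAPIServerReady blocks until the apiserver reports ready on
// /readyz, or the context is cancelled. Components such as flannel and
// the network policy controller would only be started once this returns nil.
func waitForAPIServerReady(ctx context.Context, client kubernetes.Interface, interval time.Duration) error {
	for {
		err := client.Discovery().RESTClient().Get().AbsPath("/readyz").Do(ctx).Error()
		if err == nil {
			return nil
		}
		logrus.Infof("Waiting for apiserver to become ready: %v", err)
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(interval):
		}
	}
}
```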
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Adds support for retagging images to appear to have been sourced from
one or more additional registries as they are imported from the tarball.
This is intended to support RKE2 use cases with system-default-registry
where the images need to appear to have been pulled from a registry
other than docker.io.
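A rough sketch of the retagging idea: for each image reference imported from the tarball, additional references are generated under the configured registries. This uses plain string handling for illustration; the real import path goes through containerd:
```go
package main

import (
	"fmt"
	"strings"
)

// retaggedNames returns the extra image names an imported image should
// be tagged with, one per additional registry. The input is assumed to
// be a fully qualified reference such as "docker.io/rancher/pause:3.2".
func retaggedNames(image string, registries []string) []string {
	repoAndTag := image
	if parts := strings.SplitN(image, "/", 2); len(parts) == 2 && strings.ContainsAny(parts[0], ".:") {
		// The first segment looks like a registry host; strip it.
		repoAndTag = parts[1]
	}
	names := make([]string, 0, len(registries))
	for _, registry := range registries {
		names = append(names, registry+"/"+repoAndTag)
	}
	return names
}

func main() {
	fmt.Println(retaggedNames("docker.io/rancher/pause:3.2", []string{"registry.example.com"}))
	// [registry.example.com/rancher/pause:3.2]
}
```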
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
* Collect IPs from all pods before deciding to use internal or external addresses
@Taloth correctly noted that the code that iterates over ServiceLB pods
to collect IP addresses was failing to add additional internal IPs once
the map contained ANY entry from a previous node. This may date back to
when ServiceLB used a Deployment instead of a DaemonSet, so there was
only ever a single pod.
The new behavior is to collect all internal and external IPs, and then
construct the address list of a single type - external if there are any,
otherwise internal.
https://github.com/k3s-io/k3s/issues/1652#issuecomment-774497788
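A sketch of the corrected address selection; the pod/node lookup is elided and the types are illustrative:
```go
package sketch

// nodeAddresses is a stand-in for the per-node address information
// gathered from the ServiceLB pods.
type nodeAddresses struct {
	InternalIPs []string
	ExternalIPs []string
}

// serviceAddresses collects internal and external IPs from every node
// running a ServiceLB pod, and only then picks a single address type:
// external addresses if any exist, otherwise internal addresses.
func serviceAddresses(nodes []nodeAddresses) []string {
	var internal, external []string
	for _, n := range nodes {
		internal = append(internal, n.InternalIPs...)
		external = append(external, n.ExternalIPs...)
	}
	if len(external) > 0 {
		return external
	}
	return internal
}
```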
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Co-authored-by: Brian Downs <brian.downs@gmail.com>
* Add tests to clientaccess/token
* Fix issues in clientaccess/token identified by tests
* Update tests to close coverage gaps
* Remove redundant check turned up by code coverage reports
* Add warnings if CA hash will not be validated
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
We also need to be more careful about setting the crictl.yaml path,
as it doesn't have kubectl's nice behavior of checking multiple
locations. It's not safe to assume that it's in the user's home data-dir
just because we're not running as root.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Previously, k3s.service failed when the EnvironmentFile did not exist:
```
Feb 02 17:17:30 suda-ws01 systemd[1]: k3s.service: Failed to load environment files: No such file or directory
Feb 02 17:17:30 suda-ws01 systemd[1]: k3s.service: Failed to run 'start' task: No such file or directory
Feb 02 17:17:30 suda-ws01 systemd[1]: k3s.service: Failed with result 'resources'.
Feb 02 17:17:30 suda-ws01 systemd[1]: Failed to start Lightweight Kubernetes.
```
Prefixing the path in the EnvironmentFile= directive with `-` tells systemd to silently ignore a missing file instead of failing the unit.
ref: https://unix.stackexchange.com/questions/404199/documentation-of-equals-minus-in-systemd-unit-files
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Fix issue 900
cgroup2 support was introduced in PR 2584, but was broken by commit f3de60ff31.
It was failing with "F1210 19:13:37.305388 4955 server.go:181] cannot set feature gate SupportPodPidsLimit to false, feature is locked to true"
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>