* Move registries.yaml handling out to rancher/wharfie
* Add system-default-registry support (sketched below)
* Add CLI support for kubelet image credential providers
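For illustration, a minimal sketch of how a system-default-registry
prefix could be applied to a default image reference; the
withDefaultRegistry helper and its handling of the prefix are
assumptions, not the actual implementation:

    package main

    import (
        "fmt"
        "strings"
    )

    // withDefaultRegistry is a hypothetical helper that rewrites a default
    // image reference to pull from the configured system-default-registry
    // instead of docker.io. The real code may handle more cases (digests,
    // registry ports, already-qualified images).
    func withDefaultRegistry(registry, image string) string {
        if registry == "" {
            return image
        }
        return strings.TrimSuffix(registry, "/") + "/" + image
    }

    func main() {
        // e.g. "rancher/pause:3.1" -> "registry.example.com/rancher/pause:3.1"
        fmt.Println(withDefaultRegistry("registry.example.com", "rancher/pause:3.1"))
    }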
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
* Always use static ports for the load-balancers
This fixes an issue where RKE2 kube-proxy daemonset pods were failing to
communicate with the apiserver when RKE2 was restarted because the
load-balancer used a different port every time it started up.
This also changes the apiserver load-balancer port to be 1 below the
supervisor port instead of 1 above it. This makes the apiserver port
consistent at 6443 across servers and agents on RKE2.
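A minimal sketch of the intended port wiring, assuming both local
listener ports are derived from a single lb-server-port value (the
function and names are illustrative, not the actual k3s code):

    package main

    import "fmt"

    // agentPorts derives the two static local listener ports from the
    // configured lb-server-port. The supervisor load-balancer listens on the
    // configured port and the apiserver load-balancer listens one port below
    // it, so with a default of 6444 the apiserver stays reachable at 6443.
    func agentPorts(lbServerPort int) (supervisorLBPort, apiserverLBPort int) {
        return lbServerPort, lbServerPort - 1
    }

    func main() {
        s, a := agentPorts(6444)
        fmt.Printf("supervisor LB: %d, apiserver LB: %d\n", s, a) // 6444, 6443
    }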
Additional fixes below were required to successfully test and use this change
on etcd-only nodes.
* Actually add the lb-server-port flag to the CLI (see the flag sketch after this list)
* Fix nil pointer when starting server with --disable-etcd but no --server
* Don't try to use full URI as initial load-balancer endpoint
* Fix etcd load-balancer pool updates
* Update dynamiclistener to fix cert updates on etcd-only nodes
* Handle recursive initial server URL in load balancer
* Don't run the deploy controller on etcd-only nodes
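As referenced in the lb-server-port bullet above, a sketch of what
wiring the flag into the CLI could look like with urfave/cli v1; the
usage text, default value, and destination variable are assumptions:

    package main

    import (
        "fmt"
        "os"

        "github.com/urfave/cli"
    )

    // lbServerPort receives the flag value; the real code would store it on
    // the agent config rather than a package-level variable.
    var lbServerPort int

    func main() {
        app := cli.NewApp()
        app.Flags = []cli.Flag{
            cli.IntFlag{
                Name:        "lb-server-port",
                Usage:       "(agent/node) Local port for the supervisor client load-balancer; the apiserver load-balancer uses this port minus one",
                Value:       6444,
                Destination: &lbServerPort,
            },
        }
        app.Action = func(c *cli.Context) error {
            fmt.Println("lb-server-port:", lbServerPort)
            return nil
        }
        if err := app.Run(os.Args); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }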
K3s handles upgrades by watching for changes to the static files and
manifests, which in turn triggers the helm-controller to apply the
change. It seems reasonable to only allow the Traefik v1->v2 upgrade
when there is no existing custom Traefik HelmChartConfig in the
cluster, to avoid any incompatibility.
The CRDs are also separated out into a different chart to support CRD
upgrades.
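A rough sketch of the guard described above, using the dynamic client
to look for an existing custom HelmChartConfig before allowing the
v1->v2 upgrade; the group/version/resource and namespace follow the
helm-controller CRDs, but the surrounding wiring is assumed:

    package deploy

    import (
        "context"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
    )

    // shouldUpgradeTraefik is a hypothetical guard: the v1->v2 upgrade only
    // proceeds if no custom traefik HelmChartConfig exists in kube-system.
    func shouldUpgradeTraefik(ctx context.Context, client dynamic.Interface) (bool, error) {
        gvr := schema.GroupVersionResource{
            Group:    "helm.cattle.io",
            Version:  "v1",
            Resource: "helmchartconfigs",
        }
        _, err := client.Resource(gvr).Namespace("kube-system").Get(ctx, "traefik", metav1.GetOptions{})
        if err == nil {
            // A custom config exists; leave the existing Traefik v1 alone.
            return false, nil
        }
        if apierrors.IsNotFound(err) {
            return true, nil
        }
        return false, err
    }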
Signed-off-by: Chin-Ya Huang <chin-ya.huang@suse.com>
This attempts to update logging statements to make them consistent
throughout the code base. It also adds additional context to messages
where possible, simplifies messages, and updates log levels where
necessary.
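For illustration only, the kind of before/after involved, using logrus
(the messages and address are made up):

    package main

    import (
        "errors"

        "github.com/sirupsen/logrus"
    )

    func main() {
        err := errors.New("connection refused")
        address := "10.0.0.1:9345"

        // Before: terse message, no context, logged as an error even though
        // the dial will be retried.
        logrus.Errorf("dial failed: %v", err)

        // After: consistent phrasing, the failing address as context, and a
        // level that matches the severity.
        logrus.WithError(err).Warnf("Failed to connect to proxy %s, will retry", address)
    }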
As seen in issues such as #15, #155, #518, and #570, there are
situations where k3s fails to write the kubeconfig file but reports
that it wrote it anyway, because the success message is printed
unconditionally. Secondary actions such as setting the file mode and
creating a symlink are also attempted even if the file was not created.
This change skips attempting additional actions, and propagates the
failure back upwards.
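A minimal sketch of the intended behavior, assuming a helper along
these lines (the names, mode, and follow-up steps are illustrative):

    package main

    import (
        "fmt"
        "os"

        "github.com/sirupsen/logrus"
    )

    // writeKubeConfig only reports success and performs follow-up actions
    // once the file was actually written; any failure is returned to the
    // caller instead of being swallowed.
    func writeKubeConfig(path string, data []byte) error {
        if err := os.WriteFile(path, data, 0600); err != nil {
            return fmt.Errorf("failed to write kubeconfig %s: %w", path, err)
        }
        if err := os.Chmod(path, 0600); err != nil {
            return fmt.Errorf("failed to set mode on %s: %w", path, err)
        }
        logrus.Infof("Wrote kubeconfig %s", path)
        return nil
    }

    func main() {
        if err := writeKubeConfig("/tmp/kubeconfig.yaml", []byte("apiVersion: v1\n")); err != nil {
            logrus.Fatalf("%v", err)
        }
    }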
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
In rke2 everything is a static pod, which causes a chicken-and-egg
situation: we need the kubelet running before the kube-apiserver can be
launched. Starting the apiserver in the background allows us to do this
odd bootstrapping.
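A sketch of the idea; startAPIServer and startKubelet are hypothetical
stand-ins for the real bootstrap functions, and only the ordering
matters here:

    package main

    import (
        "context"
        "log"
        "time"
    )

    // startAPIServer blocks until the context is cancelled; it signals ready
    // once the apiserver has come up (simulated here with a sleep).
    func startAPIServer(ctx context.Context, ready chan<- struct{}) error {
        time.Sleep(100 * time.Millisecond)
        close(ready)
        <-ctx.Done()
        return nil
    }

    func startKubelet(ctx context.Context) { /* ... */ }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        defer cancel()

        // Launch the apiserver in the background so it does not block the
        // kubelet, which must be running before the apiserver static pod can
        // be started.
        ready := make(chan struct{})
        go func() {
            if err := startAPIServer(ctx, ready); err != nil {
                log.Fatalf("apiserver exited: %v", err)
            }
        }()

        startKubelet(ctx)

        // Anything that needs the apiserver waits for it to come up.
        <-ready
        log.Println("apiserver is up")
    }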
In k3s today the Kubernetes API and the /v1-k3s API are combined into
one HTTP server. In rke2 we are running unmodified, non-embedded
Kubernetes, and as such it is preferred to run k8s and the /v1-k3s API
on different ports. The /v1-k3s API port is called the SupervisorPort
in the code.
To support this separation of ports, a new shim was added on the client
in the pkg/agent/proxy package that launches two load balancers instead
of just one: one load balancer for 6443 and the other for 9345 (the
supervisor port).
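A simplified sketch of the shim; newLoadBalancer is a placeholder for
the real constructor in pkg/agent/loadbalancer, whose actual signature
and service names are not shown here:

    package proxy

    import "context"

    // loadBalancer stands in for the agent load-balancer type.
    type loadBalancer struct{}

    // newLoadBalancer is a hypothetical constructor: it starts a local
    // listener and forwards connections to the given port on the server (and
    // on any endpoints learned later).
    func newLoadBalancer(ctx context.Context, serviceName, serverURL string, serverPort int) (*loadBalancer, error) {
        // ...
        return &loadBalancer{}, nil
    }

    // newProxy launches one load balancer per upstream port instead of a
    // single combined one: 6443 for the Kubernetes apiserver and 9345 for
    // the /v1-k3s supervisor API.
    func newProxy(ctx context.Context, serverURL string) error {
        if _, err := newLoadBalancer(ctx, "apiserver", serverURL, 6443); err != nil {
            return err
        }
        if _, err := newLoadBalancer(ctx, "supervisor", serverURL, 9345); err != nil {
            return err
        }
        return nil
    }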
Since the generated certs/keys are stored locally, each server has a
different copy. In an HA setup we need to ensure we download the cert
and key from the same server, so the HTTP requests for them are
combined into one.
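A sketch of the idea, assuming an endpoint that returns the certificate
and key together in a single response; the path and response layout are
assumptions, not the documented API:

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    // getCertAndKey fetches the serving cert and its key with a single HTTP
    // request. Because both come back in one response, they are guaranteed
    // to have been generated by the same server behind the load balancer,
    // which matters in HA setups where each server holds its own local copies.
    func getCertAndKey(client *http.Client, server string) ([]byte, error) {
        resp, err := client.Get(server + "/v1-k3s/serving-kubelet.crt")
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("unexpected status %s", resp.Status)
        }
        // The body carries the PEM-encoded cert followed by its key.
        return io.ReadAll(resp.Body)
    }

    func main() {
        // Usage sketch; a real client would present the node token and trust
        // the cluster CA.
        data, err := getCertAndKey(http.DefaultClient, "https://127.0.0.1:9345")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("received %d bytes of cert+key\n", len(data))
    }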