* Move registries.yaml handling out to rancher/wharfie
* Add system-default-registry support
* Add CLI support for kubelet image credential providers (see the sketch below)
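For illustration, the new flags might be combined like this (the
credential-provider paths are assumed defaults, not confirmed for every
release):

```sh
k3s server \
  --system-default-registry registry.example.com \
  --image-credential-provider-bin-dir /var/lib/rancher/credentialprovider/bin \
  --image-credential-provider-config /var/lib/rancher/credentialprovider/config.yaml
```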
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Problem:
Only the client CA is passed to the kube-controller-manager, so CSRs
with the signer name "kubernetes.io/kubelet-serving" are signed with
the client CA. Serving certificates must be signed with the server CA;
otherwise, commands such as "kubectl logs" fail with the error message
"x509: certificate signed by unknown authority".
Solution:
Instead of providing only one CA via the kube-controller-manager
parameter "--cluster-signing-cert-file", the corresponding CA for every
signer is set with the parameters
"--cluster-signing-kube-apiserver-client-cert-file",
"--cluster-signing-kubelet-client-cert-file",
"--cluster-signing-kubelet-serving-cert-file", and
"--cluster-signing-legacy-unknown-cert-file".
Signed-off-by: Siegfried Weber <mail@siegfriedweber.net>
The kube-apiserver cert should have the same SANs in the same order,
excluding the extra user-configured SANs, since this cert will only be
used in-cluster.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
If key ends in "+" the value of the key is appended to previous
values found. If values are string instead of a slice they are
automatically converted to a slice of one string.
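For example (the key is illustrative; any slice-valued key behaves the
same way):

```yaml
# config.yaml
node-label:
  - "foo=bar"

# config.yaml.d/10-extra.yaml
node-label+: "baz=qux"   # "+" appends; the string is coerced to a one-element slice

# effective value: node-label == ["foo=bar", "baz=qux"]
```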
Signed-off-by: Darren Shepherd <darren@rancher.com>
Configuration will be loaded from config.yaml and then from
config.yaml.d/*.(yaml|yml) in alphanumeric order. Merging simply takes
the last value found for a key, so later files win; slices are not
merged but replaced.
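For example (file and key names are illustrative):

```yaml
# /etc/rancher/k3s/config.yaml
token: "original"
tls-san:
  - "a.example.com"

# /etc/rancher/k3s/config.yaml.d/01-override.yaml
token: "override"   # last value found for the key wins
tls-san:            # slices are replaced outright, not merged
  - "b.example.com"
```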
Signed-off-by: Darren Shepherd <darren@rancher.com>
* Update Kubernetes to v1.21.0
* Update to golang v1.16.2
* Update dependent modules to track with upstream
* Switch to upstream flannel
* Track changes to upstream cloud-controller-manager and FeatureGates
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Remove an early return that prevented the local retention policy from
being enforced, resulting in snapshots accumulating beyond the
configured retention count.
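A minimal sketch of what the enforced retention looks like
(illustrative, not the actual k3s implementation):

```go
package snapshot

import (
	"os"
	"path/filepath"
	"sort"
)

// pruneSnapshots deletes the oldest snapshot files beyond the
// configured retention count. Snapshot names embed a timestamp,
// so lexical order approximates chronological order.
func pruneSnapshots(dir string, retention int) error {
	files, err := filepath.Glob(filepath.Join(dir, "etcd-snapshot-*"))
	if err != nil {
		return err
	}
	if len(files) <= retention {
		return nil
	}
	sort.Strings(files)
	for _, f := range files[:len(files)-retention] {
		if err := os.Remove(f); err != nil {
			return err
		}
	}
	return nil
}
```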
Signed-off-by: Brian Downs <brian.downs@gmail.com>
When `/dev/kmsg` is unreadable due to the sysctl value
`kernel.dmesg_restrict=1`, bind-mount `/dev/null` over `/dev/kmsg`.
Fix issue 3011
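The workaround is equivalent to the following (a sketch; k3s performs
the bind mount itself when setting up the kubelet):

```sh
# mask the unreadable /dev/kmsg with an always-empty device
mount --bind /dev/null /dev/kmsg
```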
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Now rootless mode can be used with cgroup v2 resource limitations.
A pod is executed in a cgroup like "/user.slice/user-1001.slice/user@1001.service/k3s-rootless.service/kubepods/podd0eb6921-c81a-4214-b36c-d3b9bb212fac/63b5a253a1fd4627da16bfce9bec58d72144cf30fe833e0ca9a6d60ebf837475".
This is accomplished by running `kubelet` in a cgroup namespace, and enabling `cgroupfs` driver for the cgroup hierarchy delegated by systemd.
To enable cgroup v2 resource limitations, `k3s server --rootless` needs
to be launched as a `systemctl --user` service.
Please see the comment lines in `k3s-rootless.service` for the usage.
Running `k3s server --rootless` via a terminal is not supported.
When it really needs to be launched via a terminal, `systemd-run --user
-p Delegate --tty` needs to be prepended to create a systemd scope, as
shown below.
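A sketch of that terminal invocation (`Delegate=yes` spells out the
full property value for `-p`):

```sh
systemd-run --user -p Delegate=yes --tty -- k3s server --rootless
```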
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
* remove etcd data dir when etcd is disabled
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
* fix comment
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
* more fixes
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
* use debug instead of info logs
Signed-off-by: galal-hussein <hussein.galal.ahmed.11@gmail.com>
Support repository regex rewrite rules when fetching image content.
Example configuration:
```yaml
# /etc/rancher/k3s/registries.yaml
mirrors:
  "docker.io":
    endpoint:
      - "https://registry-1.docker.io/v2"
    rewrite:
      "^library/alpine$": "my-org/alpine"
```
This will instruct k3s containerd to fetch content for `alpine` images
from `docker.io/my-org/alpine` instead of the default
`docker.io/library/alpine` location.
Signed-off-by: Jacob Blain Christen <jacob@rancher.com>
get() is called in a loop until client configuration is successfully
retrieved. Each iteration will try to configure the apiserver proxy,
which will in turn create a new load balancer. Skip creating a new
load balancer if we already have one.
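A minimal sketch of the guard, with illustrative names rather than the
actual k3s symbols:

```go
package proxy

import "context"

type loadBalancer struct{ serverURL string }

// Update points the existing listener at the latest server address.
func (lb *loadBalancer) Update(serverURL string) { lb.serverURL = serverURL }

// newLoadBalancer stands in for the real listener setup.
func newLoadBalancer(ctx context.Context, serverURL string) (*loadBalancer, error) {
	return &loadBalancer{serverURL: serverURL}, nil
}

type apiProxy struct{ lb *loadBalancer }

// configureAPIServerProxy runs on every iteration of the get() retry
// loop; it reuses the existing load balancer instead of creating a
// new listener each time around.
func (p *apiProxy) configureAPIServerProxy(ctx context.Context, serverURL string) error {
	if p.lb != nil {
		// Already created on a previous iteration.
		p.lb.Update(serverURL)
		return nil
	}
	lb, err := newLoadBalancer(ctx, serverURL)
	if err != nil {
		return err
	}
	p.lb = lb
	return nil
}
```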
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
If the port wanted by the client load balancer is in TIME_WAIT, startup
will fail. Set SO_REUSEPORT so that it can be listened on again
immediately.
The configurable Listen call wants a context, so plumb that through as
well.
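A sketch of such a listener built with net.ListenConfig and
golang.org/x/sys/unix (not the exact k3s code):

```go
package listener

import (
	"context"
	"net"
	"syscall"

	"golang.org/x/sys/unix"
)

// reusePortListen listens on addr with SO_REUSEPORT set before bind,
// so a port still in TIME_WAIT can be taken over immediately. The
// context is plumbed through to the configurable Listen call.
func reusePortListen(ctx context.Context, addr string) (net.Listener, error) {
	lc := net.ListenConfig{
		Control: func(network, address string, c syscall.RawConn) error {
			var sockErr error
			if err := c.Control(func(fd uintptr) {
				sockErr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_REUSEPORT, 1)
			}); err != nil {
				return err
			}
			return sockErr
		},
	}
	return lc.Listen(ctx, "tcp", addr)
}
```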
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
* Always use static ports for the load-balancers
This fixes an issue where RKE2 kube-proxy daemonset pods were failing to
communicate with the apiserver when RKE2 was restarted because the
load-balancer used a different port every time it started up.
This also changes the apiserver load-balancer port to be 1 below the
supervisor port instead of 1 above it. This makes the apiserver port
consistent at 6443 across servers and agents on RKE2.
Additional fixes below were required to successfully test and use this change
on etcd-only nodes.
* Actually add lb-server-port flag to CLI
* Fix nil pointer when starting server with --disable-etcd but no --server
* Don't try to use full URI as initial load-balancer endpoint
* Fix etcd load-balancer pool updates
* Update dynamiclistener to fix cert updates on etcd-only nodes
* Handle recursive initial server URL in load balancer
* Don't run the deploy controller on etcd-only nodes
* Add functionality for etcd snapshot/restore to and from S3-compatible backends (example below).
* Update etcd restore functionality to extract and write certificates and configs from the snapshot.
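For example (a sketch; verify the exact flag names against the k3s
documentation for this release):

```sh
# save a snapshot and upload it to an S3-compatible backend
k3s etcd-snapshot --s3 --s3-bucket my-bucket --s3-endpoint s3.example.com

# restore from a snapshot, extracting certs and configs from it
k3s server --cluster-reset --cluster-reset-restore-path <snapshot-name>
```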
Servers should always be upgraded before agents, but generally this
isn't required because things are compatible between versions. In this
case we're OK with failing closed if the user upgrades out of order, but
we should give a clearer message about what steps are required to fix
the issue.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>