Adds support for retagging images to appear to have been sourced from
one or more additional registries as they are imported from the tarball.
This is intended to support RKE2 use cases with system-default-registry,
where the images need to appear to have been pulled from a registry
other than docker.io.
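A minimal sketch of the retagging idea in Go, with hypothetical helper names (the actual work happens while the tarball is streamed into containerd's image store):
```go
package main

import (
	"fmt"
	"strings"
)

// retagForRegistries returns the additional names an imported image should
// carry so that it appears to have been pulled from each extra registry.
// Hypothetical helper; the real retagging is done during tarball import.
func retagForRegistries(name string, registries []string) []string {
	// Strip the original registry prefix, if the first path component looks
	// like a hostname (contains "." or ":").
	rest := name
	if i := strings.Index(name, "/"); i > 0 && strings.ContainsAny(name[:i], ".:") {
		rest = name[i+1:]
	}
	tags := make([]string, 0, len(registries))
	for _, registry := range registries {
		tags = append(tags, registry+"/"+rest)
	}
	return tags
}

func main() {
	// With system-default-registry=registry.example.com, docker.io/rancher/pause:3.1
	// also needs to be available as registry.example.com/rancher/pause:3.1.
	fmt.Println(retagForRegistries("docker.io/rancher/pause:3.1", []string{"registry.example.com"}))
}
```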
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
* Collect IPs from all pods before deciding to use internal or external addresses
@Taloth correctly noted that the code that iterates over ServiceLB pods
to collect IP addresses was failing to add additional internal IPs once
the map contained ANY entry from a previous node. This may date back to
when ServiceLB used a Deployment instead of a DaemonSet, so there was
only ever a single pod.
The new behavior is to collect all internal and external IPs, and then
construct the address list from a single type: external addresses if any
exist, otherwise internal addresses.
https://github.com/k3s-io/k3s/issues/1652#issuecomment-774497788
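A rough sketch of the corrected selection logic, using simplified types rather than the real ServiceLB pod and node structures:
```go
package servicelb

// podAddrs is a simplified stand-in for the addresses reported per pod/node.
type podAddrs struct {
	internal string
	external string
}

// selectAddresses collects internal and external IPs from every pod first,
// then builds the address list from a single type: external IPs if any were
// found, otherwise internal IPs. Sketch only, not the actual k3s code.
func selectAddresses(pods []podAddrs) []string {
	internal := map[string]bool{}
	external := map[string]bool{}
	for _, p := range pods {
		if p.internal != "" {
			internal[p.internal] = true
		}
		if p.external != "" {
			external[p.external] = true
		}
	}
	chosen := internal
	if len(external) > 0 {
		chosen = external
	}
	var addrs []string
	for ip := range chosen {
		addrs = append(addrs, ip)
	}
	return addrs
}
```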
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Co-authored-by: Brian Downs <brian.downs@gmail.com>
* Add tests to clientaccess/token
* Fix issues in clientaccess/token identified by tests
* Update tests to close coverage gaps
* Remove redundant check turned up by code coverage reports
* Add warnings if CA hash will not be validated
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
We also need to be more careful about setting the crictl.yaml path,
as crictl doesn't have kubectl's nice behavior of checking multiple
locations. It's not safe to assume that it lives under the user's home
data-dir just because we're not running as root.
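In other words, the path should follow the configured data-dir rather than a home-relative guess; a hypothetical sketch (helper name and layout are illustrative, not the actual k3s code):
```go
package agent

import "path/filepath"

// crictlConfigPath derives the crictl.yaml location from the configured
// data-dir. Since crictl only reads a single config location, we cannot rely
// on it probing several paths the way kubectl does, and we cannot assume a
// home-relative default just because the process is not root.
// Hypothetical helper, for illustration only.
func crictlConfigPath(dataDir string) string {
	return filepath.Join(dataDir, "agent", "etc", "crictl.yaml")
}
```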
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Previously, k3s.service was failing when the EnvironmentFile did not exist:
```
Feb 02 17:17:30 suda-ws01 systemd[1]: k3s.service: Failed to load environment files: No such file or directory
Feb 02 17:17:30 suda-ws01 systemd[1]: k3s.service: Failed to run 'start' task: No such file or directory
Feb 02 17:17:30 suda-ws01 systemd[1]: k3s.service: Failed with result 'resources'.
Feb 02 17:17:30 suda-ws01 systemd[1]: Failed to start Lightweight Kubernetes.
```
ref: https://unix.stackexchange.com/questions/404199/documentation-of-equals-minus-in-systemd-unit-files
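Per the reference above, prefixing the path with "-" (i.e. `EnvironmentFile=-`) tells systemd to ignore a missing environment file instead of failing the unit. A minimal excerpt of the unit (the env-file path shown is an assumption based on what the install script typically writes, and may differ):
```
[Service]
# The leading "-" makes a missing environment file non-fatal.
EnvironmentFile=-/etc/systemd/system/k3s.service.env
```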
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Fix issue 900
cgroup2 support was introduced in PR 2584, but was broken by commit f3de60ff31.
It was failing with "F1210 19:13:37.305388 4955 server.go:181] cannot set feature gate SupportPodPidsLimit to false, feature is locked to true"
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Since a recent commit, rootless mode was failing with the following errors:
```
E0122 22:59:47.615567 21 kuberuntime_manager.go:755] createPodSandbox for pod "helm-install-traefik-wf8lc_kube-system(9de0a1b2-e2a2-4ea5-8fb6-22c9272a182f)" failed: rpc error: code = Unknown desc = failed to create network namespace for sandbox "285ab835609387f82d304bac1fefa5fb2a6c49a542a9921995d0c35d33c683d5": failed to setup netns: open /var/run/netns/cni-c628a228-651e-e03e-d27d-bb5e87281846: permission denied
...
E0122 23:31:34.027814 21 pod_workers.go:191] Error syncing pod 1a77d21f-ff3d-4475-9749-224229ddc31a ("coredns-854c77959c-w4d7g_kube-system(1a77d21f-ff3d-4475-9749-224229ddc31a)"), skipping: failed to "CreatePodSandbox" for "coredns-854c77959c-w4d7g_kube-system(1a77d21f-ff3d-4475-9749-224229ddc31a)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-854c77959c-w4d7g_kube-system(1a77d21f-ff3d-4475-9749-224229ddc31a)\" failed: rpc error: code = Unknown desc = failed to create containerd task: io.containerd.runc.v2: create new shim socket: listen unix /run/containerd/s/8f0e40e11a69738407f1ebaf31ced3f08c29bb62022058813314fb004f93c422: bind: permission denied\n: exit status 1: unknown"
```
Remove symlinks to /run/{netns,containerd} so that rootless mode can create its own /run/{netns,containerd}.
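A sketch of that cleanup in Go; the function name is illustrative, not the actual k3s rootless code:
```go
package rootless

import (
	"fmt"
	"os"
)

// removeRunSymlinks removes /run/netns and /run/containerd if they are
// symlinks, so that rootless mode can create its own directories there
// inside its mount namespace. Illustrative sketch only.
func removeRunSymlinks() error {
	for _, p := range []string{"/run/netns", "/run/containerd"} {
		fi, err := os.Lstat(p)
		if err != nil {
			if os.IsNotExist(err) {
				continue
			}
			return err
		}
		if fi.Mode()&os.ModeSymlink != 0 {
			if err := os.Remove(p); err != nil {
				return fmt.Errorf("failed to remove symlink %s: %w", p, err)
			}
		}
	}
	return nil
}
```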
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
* Replace k3s cloud provider wrangler controller with core node informer
Upstream k8s has exposed an interface for cloud providers to access the
cloud controller manager's node cache and shared informer since
Kubernetes 1.9. This is used by all the other in-tree cloud providers;
we should use it too instead of running a dedicated wrangler controller.
Doing so also appears to fix an intermittent issue with the uninitialized
taint not getting cleared on nodes in CI.
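A minimal sketch of the wiring, assuming the InformerUser hook exposed by k8s.io/cloud-provider; the k3s cloud provider type shown here is simplified:
```go
package cloudprovider

import (
	"k8s.io/client-go/informers"
	corelisters "k8s.io/client-go/listers/core/v1"
)

// cloud is a simplified stand-in for the k3s cloud provider type.
type cloud struct {
	nodeLister corelisters.NodeLister
}

// SetInformers lets the cloud controller manager hand the provider its
// shared informer factory, so node state comes from the core node informer
// instead of a dedicated wrangler controller. Sketch only; field and type
// names are simplified.
func (c *cloud) SetInformers(informerFactory informers.SharedInformerFactory) {
	c.nodeLister = informerFactory.Core().V1().Nodes().Lister()
}
```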
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Problem:
While using ZFS on Debian and k3s with docker, I am unable to get k3s working because the snapshotter value is being validated and the validation fails.
Solution:
We should not validate the snapshotter value when using docker, as it's a no-op in that case.
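A hedged sketch of the resulting check; the helper name and the snapshotter list are illustrative, not the actual k3s code:
```go
package agent

import "fmt"

// validSnapshotters is illustrative; the real list lives in the containerd
// configuration code.
var validSnapshotters = map[string]bool{
	"overlayfs": true,
	"native":    true,
}

// validateSnapshotter skips validation entirely when docker is the container
// runtime, since the snapshotter value is a no-op in that case.
func validateSnapshotter(snapshotter string, useDocker bool) error {
	if useDocker {
		return nil
	}
	if !validSnapshotters[snapshotter] {
		return fmt.Errorf("invalid snapshotter name: %s", snapshotter)
	}
	return nil
}
```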
Signed-off-by: Waqar Ahmed <waqarahmedjoyia@live.com>