* Add python pip package to install the AWS CLI
Signed-off-by: Venkata Krishna Rohit Sakala <rohitsakala@gmail.com>
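A minimal sketch of what this adds to the CI image, assuming the `awscli` package from PyPI is used (version pinning in the actual Dockerfile may differ):

    # Install the AWS CLI so later pipeline steps can run `aws s3 ...`
    pip install awscli
    aws --version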
* Upload build artifacts to AWS S3 instead of the GCP bucket
Signed-off-by: Venkata Krishna Rohit Sakala <rohitsakala@gmail.com>
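Illustrative only: the upload step changes roughly from gsutil to the aws CLI along these lines; the bucket name and artifact path below are placeholders, not the ones used by the pipeline:

    # Before: push build artifacts to a Google Cloud Storage bucket
    # gsutil cp -r dist/artifacts/ gs://example-bucket/releases/
    # After: push the same artifacts to an S3 bucket
    aws s3 cp --recursive dist/artifacts/ s3://example-bucket/releases/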
* Upload logs to AWS S3 instead of Google buckets
Signed-off-by: Venkata Krishna Rohit Sakala <rohitsakala@gmail.com>
* Replace gcloud auth with AWS credentials for uploading artifacts to buckets
Signed-off-by: Venkata Krishna Rohit Sakala <rohitsakala@gmail.com>
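As a hedged sketch, authentication for the uploads moves from a gcloud service-account key to the standard AWS credential environment variables; the real values are injected as CI secrets and the bucket name below is a placeholder:

    # Before: authenticate gsutil with a service-account key file
    # gcloud auth activate-service-account --key-file=key.json
    # After: the aws CLI reads standard environment variables
    export AWS_ACCESS_KEY_ID=placeholder-access-key        # provided by CI secrets
    export AWS_SECRET_ACCESS_KEY=placeholder-secret-key    # provided by CI secrets
    aws s3 ls s3://example-bucket/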
* Replace usage of Google buckets with AWS S3 buckets
Signed-off-by: Venkata Krishna Rohit Sakala <rohitsakala@gmail.com>
Problem:
Previously, all of Kubernetes' image hosting was done out of gcr.io. There were significant egress costs associated with this when images were pulled from entities outside GCP. Refer to https://github.com/kubernetes/k8s.io/wiki/New-Registry-url-for-Kubernetes-(registry.k8s.io)
Solution:
As highlighted in the KubeCon NA 2022 k8s infra SIG update, the replacement for k8s.gcr.io, registry.k8s.io, is now ready for mainstream use, and the old k8s.gcr.io has been formally deprecated. This commit migrates all references for k3s to registry.k8s.io (see the sketch below).
Signed-off-by: James Blair <mail@jamesblair.net>
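A sketch of what the migration amounts to; the real commit edits the image references in place rather than rewriting them with sed, and the paths below are placeholders:

    # Point all image references at the new community registry
    grep -rl 'k8s.gcr.io' ./manifests ./tests \
        | xargs sed -i 's|k8s.gcr.io|registry.k8s.io|g'
    # e.g. k8s.gcr.io/pause:3.6 becomes registry.k8s.io/pause:3.6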
Taint the first node so that the helm job doesn't run on it. In a real cluster the helm job would eventually succeed once all the servers were upgraded and had the new chart tarball.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
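For reference, applying such a taint with kubectl looks roughly like this; the node name and taint key are placeholders, and in a k3s test the same effect can also be achieved at startup via the server's --node-taint flag:

    # Keep ordinary workloads (including the helm job) off the first server
    kubectl taint nodes server-0 upgrade-test=true:NoSchedule
    # Remove the taint once the scenario no longer needs it
    kubectl taint nodes server-0 upgrade-test=true:NoSchedule-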
Replace ETCD-JOIN-STABLE-SECOND with ETCD-JOIN-LATEST-FIRST. We don't
support joining down-level servers to existing clusters, as the new
down-level server will try to deploy older versions of the packaged
manifests.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Also reorder validations to perform the short checks first so that
things fail faster if there's a problem.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
From https://github.com/urfave/cli/pull/1383 :
> This removes the resulting binary dependency on cpuguy83/md2man and
> russross/blackfriday (and a few more packages imported by those),
> which saves more than 400 KB (more than 300 KB
> once stripped) from the resulting binary.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
Also update all uses of 'go get' to 'go install', update CI tooling for
Go 1.18 compatibility, and gofmt everything so lint passes.
Signed-off-by: Brad Davidson <brad.davidson@rancher.com>
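The 'go get' to 'go install' change follows the Go 1.17+ convention for installing tool binaries; a representative example with a hypothetical tool path:

    # Before (deprecated for installing binaries):
    # go get github.com/example/sometool
    # After: install a tagged version without touching go.mod
    go install github.com/example/sometool@latest
    # Re-run gofmt so lint passes
    gofmt -s -w .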
* Update docs to include s390x arch
Signed-off-by: Venkata Krishna Rohit Sakala <rohitsakala@gmail.com>
* Add s390x drone pipeline
Signed-off-by: Venkata Krishna Rohit Sakala <rohitsakala@gmail.com>
* Install the trivy Linux build only for the amd64 arch
This is done so that trivy is not installed for the s390x arch
Signed-off-by: Venkata Krishna Rohit Sakala <rohitsakala@gmail.com>
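A sketch of the arch guard, assuming a `uname -m` check as used elsewhere in the scripts; the install command shown is trivy's documented installer and may not match the version pinned in the real pipeline:

    # Only install the trivy scanner on amd64; skip it for s390x
    if [ "$(uname -m)" = "x86_64" ]; then
        curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin
    fi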
* Add an s390x arch conditional to Dockerfile.test
Signed-off-by: Venkata Krishna Rohit Sakala <rohitsakala@gmail.com>
* Add s390x arch to the install script
Signed-off-by: Venkata Krishna Rohit Sakala <rohitsakala@gmail.com>
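Conceptually, the install script maps `uname -m` output to a download suffix, and the change adds an s390x branch; this is a simplified sketch of that case statement, not the script's exact code:

    case $(uname -m) in
        x86_64)  SUFFIX= ;;
        aarch64) SUFFIX=-arm64 ;;
        s390x)   SUFFIX=-s390x ;;   # new branch: IBM Z downloads k3s-s390x
        *)       echo "unsupported architecture $(uname -m)"; exit 1 ;;
    esac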
* Add s390x GOARCH to the build script
Signed-off-by: Venkata Krishna Rohit Sakala <rohitsakala@gmail.com>
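Cross-compiling for IBM Z is essentially a matter of setting GOARCH in the build script; a minimal sketch, with the package path and output location as placeholders:

    # Build the linux/s390x binary
    GOOS=linux GOARCH=s390x go build -o dist/artifacts/k3s-s390x ./cmd/k3s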
* Add the s390x SUFFIX to the scripts
Signed-off-by: Venkata Krishna Rohit Sakala <rohitsakala@gmail.com>
* Skip image scan for s390x arch
Signed-off-by: Venkata Krishna Rohit Sakala <rohitsakala@gmail.com>
* Update klipper-lb to version v0.3.5
Signed-off-by: Venkata Krishna Rohit Sakala <rohitsakala@gmail.com>
* Update traefik version to v2.6.2
Signed-off-by: Venkata Krishna Rohit Sakala <rohitsakala@gmail.com>
* Update registry in tests to v2.8.1, which supports s390x
Signed-off-by: Venkata Krishna Rohit Sakala <rohitsakala@gmail.com>
* Skip compact tests for s390x arch
This is done because the compact tests require a previous k3s version that supports s390x, and none is available
Signed-off-by: Venkata Krishna Rohit Sakala <rohitsakala@gmail.com>