mirror of https://github.com/k3s-io/k3s.git (synced 2024-06-07 19:41:36 +00:00)

go mod vendor

This commit is contained in:
parent 3ce8310a17
commit d246c89254
1024 vendor/cloud.google.com/go/CHANGES.md generated vendored
File diff suppressed because it is too large.
44 vendor/cloud.google.com/go/CODE_OF_CONDUCT.md generated vendored
@@ -1,44 +0,0 @@
# Contributor Code of Conduct

As contributors and maintainers of this project,
and in the interest of fostering an open and welcoming community,
we pledge to respect all people who contribute through reporting issues,
posting feature requests, updating documentation,
submitting pull requests or patches, and other activities.

We are committed to making participation in this project
a harassment-free experience for everyone,
regardless of level of experience, gender, gender identity and expression,
sexual orientation, disability, personal appearance,
body size, race, ethnicity, age, religion, or nationality.

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery
* Personal attacks
* Trolling or insulting/derogatory comments
* Public or private harassment
* Publishing others' private information,
such as physical or electronic
addresses, without explicit permission
* Other unethical or unprofessional conduct.

Project maintainers have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct.
By adopting this Code of Conduct,
project maintainers commit themselves to fairly and consistently
applying these principles to every aspect of managing this project.
Project maintainers who do not follow or enforce the Code of Conduct
may be permanently removed from the project team.

This code of conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community.

Instances of abusive, harassing, or otherwise unacceptable behavior
may be reported by opening an issue
or contacting one or more of the project maintainers.

This Code of Conduct is adapted from the [Contributor Covenant](http://contributor-covenant.org), version 1.2.0,
available at [http://contributor-covenant.org/version/1/2/0/](http://contributor-covenant.org/version/1/2/0/)
234 vendor/cloud.google.com/go/CONTRIBUTING.md generated vendored
@@ -1,234 +0,0 @@
# Contributing

1. Sign one of the contributor license agreements below.
1. `go get golang.org/x/review/git-codereview` to install the code reviewing tool.
1. You will need to ensure that your `GOBIN` directory (by default `$GOPATH/bin`) is in your `PATH` so that git can find the command.
1. If you would like, you may want to set up aliases for git-codereview, such that `git codereview change` becomes `git change`. See the [godoc](https://godoc.org/golang.org/x/review/git-codereview) for details.
1. Should you run into issues with the git-codereview tool, please note that all error messages will assume that you have set up these aliases.
1. Get the cloud package by running `go get -d cloud.google.com/go`.
1. If you have already checked out the source, make sure that the remote git origin is https://code.googlesource.com/gocloud:

   ```
   git remote set-url origin https://code.googlesource.com/gocloud
   ```

1. Make sure your auth is configured correctly by visiting https://code.googlesource.com, clicking "Generate Password", and following the directions.
1. Make changes and create a change by running `git codereview change <name>`, provide a commit message, and use `git codereview mail` to create a Gerrit CL.
1. Keep amending to the change with `git codereview change` and mail as you receive feedback. Each new mailed amendment will create a new patch set for your change in Gerrit.

## Integration Tests

In addition to the unit tests, you may run the integration test suite. These
directions describe setting up your environment to run integration tests for
_all_ packages: note that many of these instructions may be redundant if you
intend only to run integration tests on a single package.

#### GCP Setup

To run the integration tests, creation and configuration of two projects in
the Google Developers Console is required: one specifically for Firestore
integration tests, and another for all other integration tests. We'll refer to
these projects as "general project" and "Firestore project".

After creating each project, you must [create a service account](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#creatinganaccount)
for each project. Ensure the project-level **Owner**
[IAM role](https://console.cloud.google.com/iam-admin/iam/project) is added to
each service account. During the creation of the service account, you should
download the JSON credential file for use later.

Next, ensure the following APIs are enabled in the general project:

- BigQuery API
- BigQuery Data Transfer API
- Cloud Dataproc API
- Cloud Dataproc Control API Private
- Cloud Datastore API
- Cloud Firestore API
- Cloud Key Management Service (KMS) API
- Cloud Natural Language API
- Cloud OS Login API
- Cloud Pub/Sub API
- Cloud Resource Manager API
- Cloud Spanner API
- Cloud Speech API
- Cloud Translation API
- Cloud Video Intelligence API
- Cloud Vision API
- Compute Engine API
- Compute Engine Instance Group Manager API
- Container Registry API
- Firebase Rules API
- Google Cloud APIs
- Google Cloud Deployment Manager V2 API
- Google Cloud SQL
- Google Cloud Storage
- Google Cloud Storage JSON API
- Google Compute Engine Instance Group Updater API
- Google Compute Engine Instance Groups API
- Kubernetes Engine API
- Stackdriver Error Reporting API

Next, create a Datastore database in the general project, and a Firestore
database in the Firestore project.

Finally, in the general project, create an API key for the translate API:

- Go to GCP Developer Console.
- Navigate to APIs & Services > Credentials.
- Click Create Credentials > API Key.
- Save this key for use in `GCLOUD_TESTS_API_KEY` as described below.

#### Local Setup

Once the two projects are created and configured, set the following environment
variables:

- `GCLOUD_TESTS_GOLANG_PROJECT_ID`: Developers Console project's ID (e.g. bamboo-shift-455) for the general project.
- `GCLOUD_TESTS_GOLANG_KEY`: The path to the JSON key file of the general project's service account.
- `GCLOUD_TESTS_GOLANG_FIRESTORE_PROJECT_ID`: Developers Console project's ID (e.g. doorway-cliff-677) for the Firestore project.
- `GCLOUD_TESTS_GOLANG_FIRESTORE_KEY`: The path to the JSON key file of the Firestore project's service account.
- `GCLOUD_TESTS_GOLANG_KEYRING`: The full name of the keyring for the tests, in the form "projects/P/locations/L/keyRings/R". The creation of this is described below.
- `GCLOUD_TESTS_API_KEY`: API key for using the Translate API.
- `GCLOUD_TESTS_GOLANG_ZONE`: Compute Engine zone.

Install the [gcloud command-line tool][gcloudcli] to your machine and use it to
create some resources used in integration tests.

From the project's root directory:

``` sh
# Sets the default project in your env.
$ gcloud config set project $GCLOUD_TESTS_GOLANG_PROJECT_ID

# Authenticates the gcloud tool with your account.
$ gcloud auth login

# Create the indexes used in the datastore integration tests.
$ gcloud datastore create-indexes datastore/testdata/index.yaml

# Creates a Google Cloud storage bucket with the same name as your test project,
# and with the Stackdriver Logging service account as owner, for the sink
# integration tests in logging.
$ gsutil mb gs://$GCLOUD_TESTS_GOLANG_PROJECT_ID
$ gsutil acl ch -g cloud-logs@google.com:O gs://$GCLOUD_TESTS_GOLANG_PROJECT_ID

# Creates a PubSub topic for integration tests of storage notifications.
$ gcloud beta pubsub topics create go-storage-notification-test
# Next, go to the Pub/Sub dashboard in GCP console. Authorize the user
# "service-<numeric project id>@gs-project-accounts.iam.gserviceaccount.com"
# as a publisher to that topic.

# Creates a Spanner instance for the spanner integration tests.
$ gcloud beta spanner instances create go-integration-test --config regional-us-central1 --nodes 10 --description 'Instance for go client test'
# NOTE: Spanner instances are priced by the node-hour, so you may want to
# delete the instance after testing with 'gcloud beta spanner instances delete'.

$ export MY_KEYRING=some-keyring-name
$ export MY_LOCATION=global
# Creates a KMS keyring, in the same location as the default location for your
# project's buckets.
$ gcloud kms keyrings create $MY_KEYRING --location $MY_LOCATION
# Creates two keys in the keyring, named key1 and key2.
$ gcloud kms keys create key1 --keyring $MY_KEYRING --location $MY_LOCATION --purpose encryption
$ gcloud kms keys create key2 --keyring $MY_KEYRING --location $MY_LOCATION --purpose encryption
# Sets the GCLOUD_TESTS_GOLANG_KEYRING environment variable.
$ export GCLOUD_TESTS_GOLANG_KEYRING=projects/$GCLOUD_TESTS_GOLANG_PROJECT_ID/locations/$MY_LOCATION/keyRings/$MY_KEYRING
# Authorizes Google Cloud Storage to encrypt and decrypt using key1.
$ gsutil kms authorize -p $GCLOUD_TESTS_GOLANG_PROJECT_ID -k $GCLOUD_TESTS_GOLANG_KEYRING/cryptoKeys/key1
```

#### Running

Once you've done the necessary setup, you can run the integration tests by
running:

``` sh
$ go test -v cloud.google.com/go/...
```

#### Replay

Some packages can record the RPCs during integration tests to a file for
subsequent replay. To record, pass the `-record` flag to `go test`. The
recording will be saved to the _package_`.replay` file. To replay integration
tests from a saved recording, the replay file must be present, the `-short`
flag must be passed to `go test`, and the `GCLOUD_TESTS_GOLANG_ENABLE_REPLAY`
environment variable must have a non-empty value.
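To make the record/replay rules above concrete, here is a minimal sketch (not part of the original document) of how a test file could decide whether a run should replay; `replayMode` is a hypothetical helper, not an API of this repository:

```go
// Assumes: import "os" and "testing".
// replayMode reports whether this run should replay recorded RPCs instead of
// calling live services: the -short flag is set and the replay env var is non-empty.
func replayMode() bool {
	return testing.Short() && os.Getenv("GCLOUD_TESTS_GOLANG_ENABLE_REPLAY") != ""
}
```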
## Contributor License Agreements

Before we can accept your pull requests you'll need to sign a Contributor
License Agreement (CLA):

- **If you are an individual writing original source code** and **you own the
intellectual property**, then you'll need to sign an [individual CLA][indvcla].
- **If you work for a company that wants to allow you to contribute your
work**, then you'll need to sign a [corporate CLA][corpcla].

You can sign these electronically (just scroll to the bottom). After that,
we'll be able to accept your pull requests.

## Contributor Code of Conduct

As contributors and maintainers of this project,
and in the interest of fostering an open and welcoming community,
we pledge to respect all people who contribute through reporting issues,
posting feature requests, updating documentation,
submitting pull requests or patches, and other activities.

We are committed to making participation in this project
a harassment-free experience for everyone,
regardless of level of experience, gender, gender identity and expression,
sexual orientation, disability, personal appearance,
body size, race, ethnicity, age, religion, or nationality.

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery
* Personal attacks
* Trolling or insulting/derogatory comments
* Public or private harassment
* Publishing others' private information,
such as physical or electronic
addresses, without explicit permission
* Other unethical or unprofessional conduct.

Project maintainers have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct.
By adopting this Code of Conduct,
project maintainers commit themselves to fairly and consistently
applying these principles to every aspect of managing this project.
Project maintainers who do not follow or enforce the Code of Conduct
may be permanently removed from the project team.

This code of conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community.

Instances of abusive, harassing, or otherwise unacceptable behavior
may be reported by opening an issue
or contacting one or more of the project maintainers.

This Code of Conduct is adapted from the [Contributor Covenant](http://contributor-covenant.org), version 1.2.0,
available at [http://contributor-covenant.org/version/1/2/0/](http://contributor-covenant.org/version/1/2/0/)

[gcloudcli]: https://developers.google.com/cloud/sdk/gcloud/
[indvcla]: https://developers.google.com/open-source/cla/individual
[corpcla]: https://developers.google.com/open-source/cla/corporate
505 vendor/cloud.google.com/go/README.md generated vendored
@@ -1,505 +0,0 @@
# Google Cloud Client Libraries for Go

[![GoDoc](https://godoc.org/cloud.google.com/go?status.svg)](https://godoc.org/cloud.google.com/go)

Go packages for [Google Cloud Platform](https://cloud.google.com) services.

``` go
import "cloud.google.com/go"
```

To install the packages on your system, *do not clone the repo*. Instead use

```
$ go get -u cloud.google.com/go/...
```

**NOTE:** Some of these packages are under development, and may occasionally
make backwards-incompatible changes.

**NOTE:** The GitHub repo is a mirror of [https://code.googlesource.com/gocloud](https://code.googlesource.com/gocloud).

* [News](#news)
* [Supported APIs](#supported-apis)
* [Go Versions Supported](#go-versions-supported)
* [Authorization](#authorization)
* [Cloud Datastore](#cloud-datastore-)
* [Cloud Storage](#cloud-storage-)
* [Cloud Pub/Sub](#cloud-pub-sub-)
* [BigQuery](#cloud-bigquery-)
* [Stackdriver Logging](#stackdriver-logging-)
* [Cloud Spanner](#cloud-spanner-)

## News

_7 August 2018_

As of November 1, the code in the repo will no longer support Go versions 1.8
and earlier. No one other than AppEngine users should be on those old versions,
and AppEngine
[Standard](https://groups.google.com/forum/#!topic/google-appengine-go/e7oPNomd7ak)
and
[Flex](https://groups.google.com/forum/#!topic/google-appengine-go/wHsYtxvEbXI)
will stop supporting new deployments with those versions on that date.

Changes have been moved to [CHANGES](https://github.com/GoogleCloudPlatform/google-cloud-go/blob/master/CHANGES.md).

## Supported APIs

Google API | Status | Package
---------------------------------------------|--------------|-----------------------------------------------------------
[Asset][cloud-asset] | alpha | [`godoc.org/cloud.google.com/go/asset/v1beta`][cloud-asset-ref]
[BigQuery][cloud-bigquery] | stable | [`godoc.org/cloud.google.com/go/bigquery`][cloud-bigquery-ref]
[Bigtable][cloud-bigtable] | stable | [`godoc.org/cloud.google.com/go/bigtable`][cloud-bigtable-ref]
[Cloudtasks][cloud-tasks] | beta | [`godoc.org/cloud.google.com/go/cloudtasks/apiv2beta3`][cloud-tasks-ref]
[Container][cloud-container] | stable | [`godoc.org/cloud.google.com/go/container/apiv1`][cloud-container-ref]
[ContainerAnalysis][cloud-containeranalysis] | beta | [`godoc.org/cloud.google.com/go/containeranalysis/apiv1beta1`][cloud-containeranalysis-ref]
[Dataproc][cloud-dataproc] | stable | [`godoc.org/cloud.google.com/go/dataproc/apiv1`][cloud-dataproc-ref]
[Datastore][cloud-datastore] | stable | [`godoc.org/cloud.google.com/go/datastore`][cloud-datastore-ref]
[Debugger][cloud-debugger] | alpha | [`godoc.org/cloud.google.com/go/debugger/apiv2`][cloud-debugger-ref]
[Dialogflow][cloud-dialogflow] | alpha | [`godoc.org/cloud.google.com/go/dialogflow/apiv2`][cloud-dialogflow-ref]
[Data Loss Prevention][cloud-dlp] | alpha | [`godoc.org/cloud.google.com/go/dlp/apiv2`][cloud-dlp-ref]
[ErrorReporting][cloud-errors] | alpha | [`godoc.org/cloud.google.com/go/errorreporting`][cloud-errors-ref]
[Firestore][cloud-firestore] | beta | [`godoc.org/cloud.google.com/go/firestore`][cloud-firestore-ref]
[IAM][cloud-iam] | stable | [`godoc.org/cloud.google.com/go/iam`][cloud-iam-ref]
[KMS][cloud-kms] | stable | [`godoc.org/cloud.google.com/go/kms`][cloud-kms-ref]
[Natural Language][cloud-natural-language] | stable | [`godoc.org/cloud.google.com/go/language/apiv1`][cloud-natural-language-ref]
[Logging][cloud-logging] | stable | [`godoc.org/cloud.google.com/go/logging`][cloud-logging-ref]
[Monitoring][cloud-monitoring] | alpha | [`godoc.org/cloud.google.com/go/monitoring/apiv3`][cloud-monitoring-ref]
[OS Login][cloud-oslogin] | alpha | [`cloud.google.com/compute/docs/oslogin/rest`][cloud-oslogin-ref]
[Pub/Sub][cloud-pubsub] | stable | [`godoc.org/cloud.google.com/go/pubsub`][cloud-pubsub-ref]
[Memorystore][cloud-memorystore] | stable | [`godoc.org/cloud.google.com/go/redis/apiv1beta1`][cloud-memorystore-ref]
[Spanner][cloud-spanner] | stable | [`godoc.org/cloud.google.com/go/spanner`][cloud-spanner-ref]
[Speech][cloud-speech] | stable | [`godoc.org/cloud.google.com/go/speech/apiv1`][cloud-speech-ref]
[Storage][cloud-storage] | stable | [`godoc.org/cloud.google.com/go/storage`][cloud-storage-ref]
[Text To Speech][cloud-texttospeech] | alpha | [`godoc.org/cloud.google.com/go/texttospeech/apiv1`][cloud-texttospeech-ref]
[Trace][cloud-trace] | alpha | [`godoc.org/cloud.google.com/go/trace/apiv2`][cloud-trace-ref]
[Translation][cloud-translation] | stable | [`godoc.org/cloud.google.com/go/translate`][cloud-translation-ref]
[Video Intelligence][cloud-video] | alpha | [`godoc.org/cloud.google.com/go/videointelligence/apiv1beta1`][cloud-video-ref]
[Vision][cloud-vision] | stable | [`godoc.org/cloud.google.com/go/vision/apiv1`][cloud-vision-ref]

> **Alpha status**: the API is still being actively developed. As a
> result, it might change in backward-incompatible ways and is not recommended
> for production use.
>
> **Beta status**: the API is largely complete, but still has outstanding
> features and bugs to be addressed. There may be minor backwards-incompatible
> changes where necessary.
>
> **Stable status**: the API is mature and ready for production use. We will
> continue addressing bugs and feature requests.

Documentation and examples are available at
https://godoc.org/cloud.google.com/go

Visit or join the
[google-api-go-announce group](https://groups.google.com/forum/#!forum/google-api-go-announce)
for updates on these packages.

## Go Versions Supported

We support the two most recent major versions of Go. If Google App Engine uses
an older version, we support that as well.

## Authorization

By default, each API will use [Google Application Default Credentials][default-creds]
for authorization credentials used in calling the API endpoints. This will allow your
application to run in many environments without requiring explicit configuration.

[snip]:# (auth)
```go
client, err := storage.NewClient(ctx)
```

To authorize using a
[JSON key file](https://cloud.google.com/iam/docs/managing-service-account-keys),
pass
[`option.WithCredentialsFile`](https://godoc.org/google.golang.org/api/option#WithCredentialsFile)
to the `NewClient` function of the desired package. For example:

[snip]:# (auth-JSON)
```go
client, err := storage.NewClient(ctx, option.WithCredentialsFile("path/to/keyfile.json"))
```

You can exert more control over authorization by using the
[`golang.org/x/oauth2`](https://godoc.org/golang.org/x/oauth2) package to
create an `oauth2.TokenSource`. Then pass
[`option.WithTokenSource`](https://godoc.org/google.golang.org/api/option#WithTokenSource)
to the `NewClient` function:
[snip]:# (auth-ts)
```go
tokenSource := ...
client, err := storage.NewClient(ctx, option.WithTokenSource(tokenSource))
```
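As a concrete illustration (an addition, not part of the original README), here is a minimal sketch that fills in the `tokenSource := ...` placeholder above using a static token from `golang.org/x/oauth2`; in practice you would use a refreshing token source rather than a hard-coded access token:

```go
// Assumes: import "golang.org/x/oauth2" and "google.golang.org/api/option".
// StaticTokenSource wraps a fixed token; WithTokenSource hands it to the client.
tokenSource := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: "your-access-token"})
client, err := storage.NewClient(ctx, option.WithTokenSource(tokenSource))
if err != nil {
	// TODO: Handle error.
}
```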
## Cloud Datastore [![GoDoc](https://godoc.org/cloud.google.com/go/datastore?status.svg)](https://godoc.org/cloud.google.com/go/datastore)

- [About Cloud Datastore][cloud-datastore]
- [Activating the API for your project][cloud-datastore-activation]
- [API documentation][cloud-datastore-docs]
- [Go client documentation](https://godoc.org/cloud.google.com/go/datastore)
- [Complete sample program](https://github.com/GoogleCloudPlatform/golang-samples/tree/master/datastore/tasks)

### Example Usage

First create a `datastore.Client` to use throughout your application:

[snip]:# (datastore-1)
```go
client, err := datastore.NewClient(ctx, "my-project-id")
if err != nil {
	log.Fatal(err)
}
```

Then use that client to interact with the API:

[snip]:# (datastore-2)
```go
type Post struct {
	Title       string
	Body        string `datastore:",noindex"`
	PublishedAt time.Time
}
keys := []*datastore.Key{
	datastore.NameKey("Post", "post1", nil),
	datastore.NameKey("Post", "post2", nil),
}
posts := []*Post{
	{Title: "Post 1", Body: "...", PublishedAt: time.Now()},
	{Title: "Post 2", Body: "...", PublishedAt: time.Now()},
}
if _, err := client.PutMulti(ctx, keys, posts); err != nil {
	log.Fatal(err)
}
```
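For completeness, a small sketch (an addition, not from the original README) of reading those entities back with a query, in the same style as the snippets above; it assumes the same `ctx`, `client`, and `Post` type:

```go
// Fetch the ten most recently published posts.
var recent []*Post
q := datastore.NewQuery("Post").Order("-PublishedAt").Limit(10)
if _, err := client.GetAll(ctx, q, &recent); err != nil {
	log.Fatal(err)
}
for _, p := range recent {
	fmt.Println(p.Title)
}
```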
## Cloud Storage [![GoDoc](https://godoc.org/cloud.google.com/go/storage?status.svg)](https://godoc.org/cloud.google.com/go/storage)

- [About Cloud Storage][cloud-storage]
- [API documentation][cloud-storage-docs]
- [Go client documentation](https://godoc.org/cloud.google.com/go/storage)
- [Complete sample programs](https://github.com/GoogleCloudPlatform/golang-samples/tree/master/storage)

### Example Usage

First create a `storage.Client` to use throughout your application:

[snip]:# (storage-1)
```go
client, err := storage.NewClient(ctx)
if err != nil {
	log.Fatal(err)
}
```

[snip]:# (storage-2)
```go
// Read object1 from the bucket.
rc, err := client.Bucket("bucket").Object("object1").NewReader(ctx)
if err != nil {
	log.Fatal(err)
}
defer rc.Close()
body, err := ioutil.ReadAll(rc)
if err != nil {
	log.Fatal(err)
}
```
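A companion write sketch (an addition, not from the original README), assuming the same `ctx` and `client` as above; `object2` is a hypothetical object name:

```go
// Write data to object2 in the bucket; Close finalizes the upload and
// reports any error from the write.
wc := client.Bucket("bucket").Object("object2").NewWriter(ctx)
if _, err := wc.Write([]byte("hello world")); err != nil {
	log.Fatal(err)
}
if err := wc.Close(); err != nil {
	log.Fatal(err)
}
```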
## Cloud Pub/Sub [![GoDoc](https://godoc.org/cloud.google.com/go/pubsub?status.svg)](https://godoc.org/cloud.google.com/go/pubsub)

- [About Cloud Pubsub][cloud-pubsub]
- [API documentation][cloud-pubsub-docs]
- [Go client documentation](https://godoc.org/cloud.google.com/go/pubsub)
- [Complete sample programs](https://github.com/GoogleCloudPlatform/golang-samples/tree/master/pubsub)

### Example Usage

First create a `pubsub.Client` to use throughout your application:

[snip]:# (pubsub-1)
```go
client, err := pubsub.NewClient(ctx, "project-id")
if err != nil {
	log.Fatal(err)
}
```

Then use the client to publish and subscribe:

[snip]:# (pubsub-2)
```go
// Publish "hello world" on topic1.
topic := client.Topic("topic1")
res := topic.Publish(ctx, &pubsub.Message{
	Data: []byte("hello world"),
})
// The publish happens asynchronously.
// Later, you can get the result from res:
...
msgID, err := res.Get(ctx)
if err != nil {
	log.Fatal(err)
}

// Use a callback to receive messages via subscription1.
sub := client.Subscription("subscription1")
err = sub.Receive(ctx, func(ctx context.Context, m *pubsub.Message) {
	fmt.Println(m.Data)
	m.Ack() // Acknowledge that we've consumed the message.
})
if err != nil {
	log.Println(err)
}
```

## BigQuery [![GoDoc](https://godoc.org/cloud.google.com/go/bigquery?status.svg)](https://godoc.org/cloud.google.com/go/bigquery)

- [About BigQuery][cloud-bigquery]
- [API documentation][cloud-bigquery-docs]
- [Go client documentation][cloud-bigquery-ref]
- [Complete sample programs](https://github.com/GoogleCloudPlatform/golang-samples/tree/master/bigquery)

### Example Usage

First create a `bigquery.Client` to use throughout your application:
[snip]:# (bq-1)
```go
c, err := bigquery.NewClient(ctx, "my-project-ID")
if err != nil {
	// TODO: Handle error.
}
```

Then use that client to interact with the API:
[snip]:# (bq-2)
```go
// Construct a query.
q := c.Query(`
    SELECT year, SUM(number)
    FROM [bigquery-public-data:usa_names.usa_1910_2013]
    WHERE name = "William"
    GROUP BY year
    ORDER BY year
`)
// Execute the query.
it, err := q.Read(ctx)
if err != nil {
	// TODO: Handle error.
}
// Iterate through the results.
for {
	var values []bigquery.Value
	err := it.Next(&values)
	if err == iterator.Done {
		break
	}
	if err != nil {
		// TODO: Handle error.
	}
	fmt.Println(values)
}
```

## Stackdriver Logging [![GoDoc](https://godoc.org/cloud.google.com/go/logging?status.svg)](https://godoc.org/cloud.google.com/go/logging)

- [About Stackdriver Logging][cloud-logging]
- [API documentation][cloud-logging-docs]
- [Go client documentation][cloud-logging-ref]
- [Complete sample programs](https://github.com/GoogleCloudPlatform/golang-samples/tree/master/logging)

### Example Usage

First create a `logging.Client` to use throughout your application:
[snip]:# (logging-1)
```go
ctx := context.Background()
client, err := logging.NewClient(ctx, "my-project")
if err != nil {
	// TODO: Handle error.
}
```

Usually, you'll want to add log entries to a buffer to be periodically flushed
(automatically and asynchronously) to the Stackdriver Logging service.
[snip]:# (logging-2)
```go
logger := client.Logger("my-log")
logger.Log(logging.Entry{Payload: "something happened!"})
```

Close your client before your program exits, to flush any buffered log entries.
[snip]:# (logging-3)
```go
err = client.Close()
if err != nil {
	// TODO: Handle error.
}
```
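When buffering is not wanted, a short sketch (an addition, not from the original README) of the synchronous path, assuming the same `ctx` and `logger` as above:

```go
// LogSync blocks until the entry has been sent to the logging service.
if err := logger.LogSync(ctx, logging.Entry{
	Severity: logging.Error,
	Payload:  "something went wrong",
}); err != nil {
	// TODO: Handle error.
}
```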
## Cloud Spanner [![GoDoc](https://godoc.org/cloud.google.com/go/spanner?status.svg)](https://godoc.org/cloud.google.com/go/spanner)

- [About Cloud Spanner][cloud-spanner]
- [API documentation][cloud-spanner-docs]
- [Go client documentation](https://godoc.org/cloud.google.com/go/spanner)

### Example Usage

First create a `spanner.Client` to use throughout your application:

[snip]:# (spanner-1)
```go
client, err := spanner.NewClient(ctx, "projects/P/instances/I/databases/D")
if err != nil {
	log.Fatal(err)
}
```

[snip]:# (spanner-2)
```go
// Simple Reads And Writes
_, err = client.Apply(ctx, []*spanner.Mutation{
	spanner.Insert("Users",
		[]string{"name", "email"},
		[]interface{}{"alice", "a@example.com"})})
if err != nil {
	log.Fatal(err)
}
row, err := client.Single().ReadRow(ctx, "Users",
	spanner.Key{"alice"}, []string{"email"})
if err != nil {
	log.Fatal(err)
}
```
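A query sketch to complement the read/write example above (an addition, not from the original README); it assumes the same `ctx` and `client`, the `Users` table from the snippet above, and `google.golang.org/api/iterator` imported:

```go
// Run a SQL query and iterate over the resulting rows.
iter := client.Single().Query(ctx, spanner.Statement{SQL: `SELECT name, email FROM Users`})
defer iter.Stop()
for {
	row, err := iter.Next()
	if err == iterator.Done {
		break
	}
	if err != nil {
		log.Fatal(err)
	}
	var name, email string
	if err := row.Columns(&name, &email); err != nil {
		log.Fatal(err)
	}
	fmt.Println(name, email)
}
```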
## Contributing

Contributions are welcome. Please see the
[CONTRIBUTING](https://github.com/GoogleCloudPlatform/google-cloud-go/blob/master/CONTRIBUTING.md)
document for details. We're using Gerrit for our code reviews. Please don't open pull
requests against this repo; new pull requests will be automatically closed.

Please note that this project is released with a Contributor Code of Conduct.
By participating in this project you agree to abide by its terms.
See [Contributor Code of Conduct](https://github.com/GoogleCloudPlatform/google-cloud-go/blob/master/CONTRIBUTING.md#contributor-code-of-conduct)
for more information.

[cloud-datastore]: https://cloud.google.com/datastore/
[cloud-datastore-ref]: https://godoc.org/cloud.google.com/go/datastore
[cloud-datastore-docs]: https://cloud.google.com/datastore/docs
[cloud-datastore-activation]: https://cloud.google.com/datastore/docs/activate

[cloud-firestore]: https://cloud.google.com/firestore/
[cloud-firestore-ref]: https://godoc.org/cloud.google.com/go/firestore
[cloud-firestore-docs]: https://cloud.google.com/firestore/docs
[cloud-firestore-activation]: https://cloud.google.com/firestore/docs/activate

[cloud-pubsub]: https://cloud.google.com/pubsub/
[cloud-pubsub-ref]: https://godoc.org/cloud.google.com/go/pubsub
[cloud-pubsub-docs]: https://cloud.google.com/pubsub/docs

[cloud-storage]: https://cloud.google.com/storage/
[cloud-storage-ref]: https://godoc.org/cloud.google.com/go/storage
[cloud-storage-docs]: https://cloud.google.com/storage/docs
[cloud-storage-create-bucket]: https://cloud.google.com/storage/docs/cloud-console#_creatingbuckets

[cloud-bigtable]: https://cloud.google.com/bigtable/
[cloud-bigtable-ref]: https://godoc.org/cloud.google.com/go/bigtable

[cloud-bigquery]: https://cloud.google.com/bigquery/
[cloud-bigquery-docs]: https://cloud.google.com/bigquery/docs
[cloud-bigquery-ref]: https://godoc.org/cloud.google.com/go/bigquery

[cloud-logging]: https://cloud.google.com/logging/
[cloud-logging-docs]: https://cloud.google.com/logging/docs
[cloud-logging-ref]: https://godoc.org/cloud.google.com/go/logging

[cloud-monitoring]: https://cloud.google.com/monitoring/
[cloud-monitoring-ref]: https://godoc.org/cloud.google.com/go/monitoring/apiv3

[cloud-vision]: https://cloud.google.com/vision
[cloud-vision-ref]: https://godoc.org/cloud.google.com/go/vision/apiv1

[cloud-language]: https://cloud.google.com/natural-language
[cloud-language-ref]: https://godoc.org/cloud.google.com/go/language/apiv1

[cloud-oslogin]: https://cloud.google.com/compute/docs/oslogin/rest
[cloud-oslogin-ref]: https://cloud.google.com/compute/docs/oslogin/rest

[cloud-speech]: https://cloud.google.com/speech
[cloud-speech-ref]: https://godoc.org/cloud.google.com/go/speech/apiv1

[cloud-spanner]: https://cloud.google.com/spanner/
[cloud-spanner-ref]: https://godoc.org/cloud.google.com/go/spanner
[cloud-spanner-docs]: https://cloud.google.com/spanner/docs

[cloud-translation]: https://cloud.google.com/translation
[cloud-translation-ref]: https://godoc.org/cloud.google.com/go/translation

[cloud-video]: https://cloud.google.com/video-intelligence/
[cloud-video-ref]: https://godoc.org/cloud.google.com/go/videointelligence/apiv1beta1

[cloud-errors]: https://cloud.google.com/error-reporting/
[cloud-errors-ref]: https://godoc.org/cloud.google.com/go/errorreporting

[cloud-container]: https://cloud.google.com/containers/
[cloud-container-ref]: https://godoc.org/cloud.google.com/go/container/apiv1

[cloud-debugger]: https://cloud.google.com/debugger/
[cloud-debugger-ref]: https://godoc.org/cloud.google.com/go/debugger/apiv2

[cloud-dlp]: https://cloud.google.com/dlp/
[cloud-dlp-ref]: https://godoc.org/cloud.google.com/go/dlp/apiv2beta1

[default-creds]: https://developers.google.com/identity/protocols/application-default-credentials

[cloud-dataproc]: https://cloud.google.com/dataproc/
[cloud-dataproc-docs]: https://cloud.google.com/dataproc/docs
[cloud-dataproc-ref]: https://godoc.org/cloud.google.com/go/dataproc/apiv1

[cloud-iam]: https://cloud.google.com/iam/
[cloud-iam-docs]: https://cloud.google.com/iam/docs
[cloud-iam-ref]: https://godoc.org/cloud.google.com/go/iam

[cloud-kms]: https://cloud.google.com/kms/
[cloud-kms-docs]: https://cloud.google.com/kms/docs
[cloud-kms-ref]: https://godoc.org/cloud.google.com/go/kms/apiv1

[cloud-natural-language]: https://cloud.google.com/natural-language/
[cloud-natural-language-docs]: https://cloud.google.com/natural-language/docs
[cloud-natural-language-ref]: https://godoc.org/cloud.google.com/go/language/apiv1

[cloud-memorystore]: https://cloud.google.com/memorystore/
[cloud-memorystore-docs]: https://cloud.google.com/memorystore/docs
[cloud-memorystore-ref]: https://godoc.org/cloud.google.com/go/redis/apiv1beta1

[cloud-texttospeech]: https://cloud.google.com/texttospeech/
[cloud-texttospeech-docs]: https://cloud.google.com/texttospeech/docs
[cloud-texttospeech-ref]: https://godoc.org/cloud.google.com/go/texttospeech/apiv1

[cloud-trace]: https://cloud.google.com/trace/
[cloud-trace-docs]: https://cloud.google.com/trace/docs
[cloud-trace-ref]: https://godoc.org/cloud.google.com/go/trace/apiv1

[cloud-dialogflow]: https://cloud.google.com/dialogflow-enterprise/
[cloud-dialogflow-docs]: https://cloud.google.com/dialogflow-enterprise/docs/
[cloud-dialogflow-ref]: https://godoc.org/cloud.google.com/go/dialogflow/apiv2

[cloud-containeranalysis]: https://cloud.google.com/container-registry/docs/container-analysis
[cloud-containeranalysis-docs]: https://cloud.google.com/container-analysis/api/reference/rest/
[cloud-containeranalysis-ref]: https://godoc.org/cloud.google.com/go/devtools/containeranalysis/apiv1beta1

[cloud-asset]: https://cloud.google.com/security-command-center/docs/how-to-asset-inventory
[cloud-asset-docs]: https://cloud.google.com/security-command-center/docs/how-to-asset-inventory
[cloud-asset-ref]: https://godoc.org/cloud.google.com/go/asset/apiv1

[cloud-tasks]: https://cloud.google.com/tasks/
[cloud-tasks-ref]: https://godoc.org/cloud.google.com/go/cloudtasks/apiv2beta3
47 vendor/cloud.google.com/go/RELEASING.md generated vendored
@@ -1,47 +0,0 @@
# How to Create a New Release

## Prerequisites

Install [releasetool](https://github.com/googleapis/releasetool).

## Create a release

1. `cd` into the root directory, e.g., `~/go/src/cloud.google.com/go`
1. Check out the master branch and ensure a clean and up-to-date state.
   ```
   git checkout master
   git pull --tags origin master
   ```
1. Run releasetool to generate a changelog from the last version. Note that
   releasetool will prompt if the new version is a major, minor, or patch
   version.
   ```
   releasetool start --language go
   ```
1. Format the output to match CHANGES.md.
1. Submit a CL with the changes in CHANGES.md. The commit message should look
   like this (where `v0.31.0` is instead the correct version number):
   ```
   all: Release v0.31.0
   ```
1. Wait for approval from all reviewers and then submit the CL.
1. Return to the master branch and pull the release commit.
   ```
   git checkout master
   git pull origin master
   ```
1. Tag the current commit with the new version (e.g., `v0.31.0`)
   ```
   releasetool tag --language go
   ```
1. Publish the tag to GoogleSource (i.e., origin):
   ```
   git push origin $NEW_VERSION
   ```
1. Visit the [releases page][releases] on GitHub and click the "Draft a new
   release" button. For tag version, enter the tag published in the previous
   step. For the release title, use the version (e.g., `v0.31.0`). For the
   description, copy the changes added to CHANGES.md.

[releases]: https://github.com/GoogleCloudPlatform/google-cloud-go/releases
17 vendor/cloud.google.com/go/issue_template.md generated vendored
@@ -1,17 +0,0 @@
(delete this for feature requests)

## Client

e.g. PubSub

## Describe Your Environment

e.g. Alpine Docker on GKE

## Expected Behavior

e.g. Messages arrive really fast.

## Actual Behavior

e.g. Messages arrive really slowly.
BIN vendor/cloud.google.com/go/keys.tar.enc generated vendored
Binary file not shown.
698 vendor/cloud.google.com/go/old-news.md generated vendored
@@ -1,698 +0,0 @@
|
||||
_February 26, 2018_
|
||||
|
||||
*v0.19.0*
|
||||
|
||||
- bigquery:
|
||||
- Support customer-managed encryption keys.
|
||||
|
||||
- bigtable:
|
||||
- Improved emulator support.
|
||||
- Support GetCluster.
|
||||
|
||||
- datastore:
|
||||
- Add general mutations.
|
||||
- Support pointer struct fields.
|
||||
- Support transaction options.
|
||||
|
||||
- firestore:
|
||||
- Add Transaction.GetAll.
|
||||
- Support document cursors.
|
||||
|
||||
- logging:
|
||||
- Support concurrent RPCs to the service.
|
||||
- Support per-entry resources.
|
||||
|
||||
- profiler:
|
||||
- Add config options to disable heap and thread profiling.
|
||||
- Read the project ID from $GOOGLE_CLOUD_PROJECT when it's set.
|
||||
|
||||
- pubsub:
|
||||
- BEHAVIOR CHANGE: Release flow control after ack/nack (instead of after the
|
||||
callback returns).
|
||||
- Add SubscriptionInProject.
|
||||
- Add OpenCensus instrumentation for streaming pull.
|
||||
|
||||
- storage:
|
||||
- Support CORS.
|
||||
|
||||
|
||||
_January 18, 2018_
|
||||
|
||||
*v0.18.0*
|
||||
|
||||
- bigquery:
|
||||
- Marked stable.
|
||||
- Schema inference of nullable fields supported.
|
||||
- Added TimePartitioning to QueryConfig.
|
||||
|
||||
- firestore: Data provided to DocumentRef.Set with a Merge option can contain
|
||||
Delete sentinels.
|
||||
|
||||
- logging: Clients can accept parent resources other than projects.
|
||||
|
||||
- pubsub:
|
||||
- pubsub/pstest: A lightweight fake for pubsub. Experimental; feedback welcome.
|
||||
- Support updating more subscription metadata: AckDeadline,
|
||||
RetainAckedMessages and RetentionDuration.
|
||||
|
||||
- oslogin/apiv1beta: New client for the Cloud OS Login API.
|
||||
|
||||
- rpcreplay: A package for recording and replaying gRPC traffic.
|
||||
|
||||
- spanner:
|
||||
- Add a ReadWithOptions that supports a row limit, as well as an index.
|
||||
- Support query plan and execution statistics.
|
||||
- Added [OpenCensus](http://opencensus.io) support.
|
||||
|
||||
- storage: Clarify checksum validation for gzipped files (it is not validated
|
||||
when the file is served uncompressed).
|
||||
|
||||
|
||||
_December 11, 2017_
|
||||
|
||||
*v0.17.0*
|
||||
|
||||
- firestore BREAKING CHANGES:
|
||||
- Remove UpdateMap and UpdateStruct; rename UpdatePaths to Update.
|
||||
Change
|
||||
`docref.UpdateMap(ctx, map[string]interface{}{"a.b", 1})`
|
||||
to
|
||||
`docref.Update(ctx, []firestore.Update{{Path: "a.b", Value: 1}})`
|
||||
|
||||
Change
|
||||
`docref.UpdateStruct(ctx, []string{"Field"}, aStruct)`
|
||||
to
|
||||
`docref.Update(ctx, []firestore.Update{{Path: "Field", Value: aStruct.Field}})`
|
||||
- Rename MergePaths to Merge; require args to be FieldPaths
|
||||
- A value stored as an integer can be read into a floating-point field, and vice versa.
|
||||
- bigtable/cmd/cbt:
|
||||
- Support deleting a column.
|
||||
- Add regex option for row read.
|
||||
- spanner: Mark stable.
|
||||
- storage:
|
||||
- Add Reader.ContentEncoding method.
|
||||
- Fix handling of SignedURL headers.
|
||||
- bigquery:
|
||||
- If Uploader.Put is called with no rows, it returns nil without making a
|
||||
call.
|
||||
- Schema inference supports the "nullable" option in struct tags for
|
||||
non-required fields.
|
||||
- TimePartitioning supports "Field".
|
||||
|
||||
|
||||
_October 30, 2017_
|
||||
|
||||
*v0.16.0*
|
||||
|
||||
- Other bigquery changes:
|
||||
- `JobIterator.Next` returns `*Job`; removed `JobInfo` (BREAKING CHANGE).
|
||||
- UseStandardSQL is deprecated; set UseLegacySQL to true if you need
|
||||
Legacy SQL.
|
||||
- Uploader.Put will generate a random insert ID if you do not provide one.
|
||||
- Support time partitioning for load jobs.
|
||||
- Support dry-run queries.
|
||||
- A `Job` remembers its last retrieved status.
|
||||
- Support retrieving job configuration.
|
||||
- Support labels for jobs and tables.
|
||||
- Support dataset access lists.
|
||||
- Improve support for external data sources, including data from Bigtable and
|
||||
Google Sheets, and tables with external data.
|
||||
- Support updating a table's view configuration.
|
||||
- Fix uploading civil times with nanoseconds.
|
||||
|
||||
- storage:
|
||||
- Support PubSub notifications.
|
||||
- Support Requester Pays buckets.
|
||||
|
||||
- profiler: Support goroutine and mutex profile types.
|
||||
|
||||
|
||||
_October 3, 2017_
|
||||
|
||||
*v0.15.0*
|
||||
|
||||
- firestore: beta release. See the
|
||||
[announcement](https://firebase.googleblog.com/2017/10/introducing-cloud-firestore.html).
|
||||
|
||||
- errorreporting: The existing package has been redesigned.
|
||||
|
||||
- errors: This package has been removed. Use errorreporting.
|
||||
|
||||
|
||||
_September 28, 2017_
|
||||
|
||||
*v0.14.0*
|
||||
|
||||
- bigquery BREAKING CHANGES:
|
||||
- Standard SQL is the default for queries and views.
|
||||
- `Table.Create` takes `TableMetadata` as a second argument, instead of
|
||||
options.
|
||||
- `Dataset.Create` takes `DatasetMetadata` as a second argument.
|
||||
- `DatasetMetadata` field `ID` renamed to `FullID`
|
||||
- `TableMetadata` field `ID` renamed to `FullID`
|
||||
|
||||
- Other bigquery changes:
|
||||
- The client will append a random suffix to a provided job ID if you set
|
||||
`AddJobIDSuffix` to true in a job config.
|
||||
- Listing jobs is supported.
|
||||
- Better retry logic.
|
||||
|
||||
- vision, language, speech: clients are now stable
|
||||
|
||||
- monitoring: client is now beta
|
||||
|
||||
- profiler:
|
||||
- Rename InstanceName to Instance, ZoneName to Zone
|
||||
- Auto-detect service name and version on AppEngine.
|
||||
|
||||
_September 8, 2017_
|
||||
|
||||
*v0.13.0*
|
||||
|
||||
- bigquery: UseLegacySQL options for CreateTable and QueryConfig. Use these
|
||||
options to continue using Legacy SQL after the client switches its default
|
||||
to Standard SQL.
|
||||
|
||||
- bigquery: Support for updating dataset labels.
|
||||
|
||||
- bigquery: Set DatasetIterator.ProjectID to list datasets in a project other
|
||||
than the client's. DatasetsInProject is no longer needed and is deprecated.
|
||||
|
||||
- bigtable: Fail ListInstances when any zones fail.
|
||||
|
||||
- spanner: support decoding of slices of basic types (e.g. []string, []int64,
|
||||
etc.)
|
||||
|
||||
- logging/logadmin: UpdateSink no longer creates a sink if it is missing
|
||||
(actually a change to the underlying service, not the client)
|
||||
|
||||
- profiler: Service and ServiceVersion replace Target in Config.
|
||||
|
||||
_August 22, 2017_
|
||||
|
||||
*v0.12.0*
|
||||
|
||||
- pubsub: Subscription.Receive now uses streaming pull.
|
||||
|
||||
- pubsub: add Client.TopicInProject to access topics in a different project
|
||||
than the client.
|
||||
|
||||
- errors: renamed errorreporting. The errors package will be removed shortly.
|
||||
|
||||
- datastore: improved retry behavior.
|
||||
|
||||
- bigquery: support updates to dataset metadata, with etags.
|
||||
|
||||
- bigquery: add etag support to Table.Update (BREAKING: etag argument added).
|
||||
|
||||
- bigquery: generate all job IDs on the client.
|
||||
|
||||
- storage: support bucket lifecycle configurations.
|
||||
|
||||
|
||||
_July 31, 2017_
|
||||
|
||||
*v0.11.0*
|
||||
|
||||
- Clients for spanner, pubsub and video are now in beta.
|
||||
|
||||
- New client for DLP.
|
||||
|
||||
- spanner: performance and testing improvements.
|
||||
|
||||
- storage: requester-pays buckets are supported.
|
||||
|
||||
- storage, profiler, bigtable, bigquery: bug fixes and other minor improvements.
|
||||
|
||||
- pubsub: bug fixes and other minor improvements
|
||||
|
||||
_June 17, 2017_
|
||||
|
||||
|
||||
*v0.10.0*
|
||||
|
||||
- pubsub: Subscription.ModifyPushConfig replaced with Subscription.Update.
|
||||
|
||||
- pubsub: Subscription.Receive now runs concurrently for higher throughput.
|
||||
|
||||
- vision: cloud.google.com/go/vision is deprecated. Use
|
||||
cloud.google.com/go/vision/apiv1 instead.
|
||||
|
||||
- translation: now stable.
|
||||
|
||||
- trace: several changes to the surface. See the link below.
|
||||
|
||||
[Code changes required from v0.9.0.](https://github.com/GoogleCloudPlatform/google-cloud-go/blob/master/MIGRATION.md)
|
||||
|
||||
|
||||
_March 17, 2017_
|
||||
|
||||
Breaking Pubsub changes.
|
||||
* Publish is now asynchronous
|
||||
([announcement](https://groups.google.com/d/topic/google-api-go-announce/aaqRDIQ3rvU/discussion)).
|
||||
* Subscription.Pull replaced by Subscription.Receive, which takes a callback ([announcement](https://groups.google.com/d/topic/google-api-go-announce/8pt6oetAdKc/discussion)).
|
||||
* Message.Done replaced with Message.Ack and Message.Nack.
|
||||
|
||||
_February 14, 2017_
|
||||
|
||||
Release of a client library for Spanner. See
|
||||
the
|
||||
[blog post](https://cloudplatform.googleblog.com/2017/02/introducing-Cloud-Spanner-a-global-database-service-for-mission-critical-applications.html).
|
||||
|
||||
Note that although the Spanner service is beta, the Go client library is alpha.
|
||||
|
||||
_December 12, 2016_
|
||||
|
||||
Beta release of BigQuery, DataStore, Logging and Storage. See the
|
||||
[blog post](https://cloudplatform.googleblog.com/2016/12/announcing-new-google-cloud-client.html).
|
||||
|
||||
Also, BigQuery now supports structs. Read a row directly into a struct with
|
||||
`RowIterator.Next`, and upload a row directly from a struct with `Uploader.Put`.
|
||||
You can also use field tags. See the [package documentation][cloud-bigquery-ref]
|
||||
for details.
|
||||
|
||||
_December 5, 2016_
|
||||
|
||||
More changes to BigQuery:
|
||||
|
||||
* The `ValueList` type was removed. It is no longer necessary. Instead of
|
||||
```go
|
||||
var v ValueList
|
||||
... it.Next(&v) ..
|
||||
```
|
||||
use
|
||||
|
||||
```go
|
||||
var v []Value
|
||||
... it.Next(&v) ...
|
||||
```
|
||||
|
||||
* Previously, repeatedly calling `RowIterator.Next` on the same `[]Value` or
|
||||
`ValueList` would append to the slice. Now each call resets the size to zero first.
|
||||
|
||||
* Schema inference will infer the SQL type BYTES for a struct field of
|
||||
type []byte. Previously it inferred STRING.
|
||||
|
||||
* The types `uint`, `uint64` and `uintptr` are no longer supported in schema
|
||||
inference. BigQuery's integer type is INT64, and those types may hold values
|
||||
that are not correctly represented in a 64-bit signed integer.
|
||||
|
||||
* The SQL types DATE, TIME and DATETIME are now supported. They correspond to
|
||||
the `Date`, `Time` and `DateTime` types in the new `cloud.google.com/go/civil`
|
||||
package.
|
||||
|
||||
_November 17, 2016_
|
||||
|
||||
Change to BigQuery: values from INTEGER columns will now be returned as int64,
|
||||
not int. This will avoid errors arising from large values on 32-bit systems.
|
||||
|
||||
_November 8, 2016_
|
||||
|
||||
New datastore feature: datastore now encodes your nested Go structs as Entity values,
|
||||
instead of a flattened list of the embedded struct's fields.
|
||||
This means that you may now have twice-nested slices, e.g.
|
||||
```go
|
||||
type State struct {
|
||||
Cities []struct{
|
||||
Populations []int
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
See [the announcement](https://groups.google.com/forum/#!topic/google-api-go-announce/79jtrdeuJAg) for
|
||||
more details.
|
||||
|
||||
_November 8, 2016_
|
||||
|
||||
Breaking changes to datastore: contexts no longer hold namespaces; instead you
|
||||
must set a key's namespace explicitly. Also, key functions have been changed
|
||||
and renamed.
|
||||
|
||||
* The WithNamespace function has been removed. To specify a namespace in a Query, use the Query.Namespace method:
|
||||
```go
|
||||
q := datastore.NewQuery("Kind").Namespace("ns")
|
||||
```
|
||||
|
||||
* All the fields of Key are exported. That means you can construct any Key with a struct literal:
|
||||
```go
|
||||
k := &Key{Kind: "Kind", ID: 37, Namespace: "ns"}
|
||||
```
|
||||
|
||||
* As a result of the above, the Key methods Kind, ID, Name, Parent, SetParent and Namespace have been removed.
|
||||
|
||||
* `NewIncompleteKey` has been removed, replaced by `IncompleteKey`. Replace
|
||||
```go
|
||||
NewIncompleteKey(ctx, kind, parent)
|
||||
```
|
||||
with
|
||||
```go
|
||||
IncompleteKey(kind, parent)
|
||||
```
|
||||
and if you do use namespaces, make sure you set the namespace on the returned key.
|
||||
|
||||
* `NewKey` has been removed, replaced by `NameKey` and `IDKey`. Replace
|
||||
```go
|
||||
NewKey(ctx, kind, name, 0, parent)
|
||||
NewKey(ctx, kind, "", id, parent)
|
||||
```
|
||||
with
|
||||
```go
|
||||
NameKey(kind, name, parent)
|
||||
IDKey(kind, id, parent)
|
||||
```
|
||||
and if you do use namespaces, make sure you set the namespace on the returned key.
|
||||
|
||||
* The `Done` variable has been removed. Replace `datastore.Done` with `iterator.Done`, from the package `google.golang.org/api/iterator`.
|
||||
|
||||
* The `Client.Close` method will have a return type of error. It will return the result of closing the underlying gRPC connection.
|
||||
|
||||
See [the announcement](https://groups.google.com/forum/#!topic/google-api-go-announce/hqXtM_4Ix-0) for
|
||||
more details.
|
||||
|
||||
_October 27, 2016_
|
||||
|
||||
Breaking change to bigquery: `NewGCSReference` is now a function,
|
||||
not a method on `Client`.
|
||||
|
||||
New bigquery feature: `Table.LoaderFrom` now accepts a `ReaderSource`, enabling
|
||||
loading data into a table from a file or any `io.Reader`.
|
||||
|
||||
_October 21, 2016_
|
||||
|
||||
Breaking change to pubsub: removed `pubsub.Done`.
|
||||
|
||||
Use `iterator.Done` instead, where `iterator` is the package
|
||||
`google.golang.org/api/iterator`.
|
||||
|
||||
_October 19, 2016_
|
||||
|
||||
Breaking changes to cloud.google.com/go/bigquery:
|
||||
|
||||
* Client.Table and Client.OpenTable have been removed.
|
||||
Replace
|
||||
```go
|
||||
client.OpenTable("project", "dataset", "table")
|
||||
```
|
||||
with
|
||||
```go
|
||||
client.DatasetInProject("project", "dataset").Table("table")
|
||||
```
|
||||
|
||||
* Client.CreateTable has been removed.
|
||||
Replace
|
||||
```go
|
||||
client.CreateTable(ctx, "project", "dataset", "table")
|
||||
```
|
||||
with
|
||||
```go
|
||||
client.DatasetInProject("project", "dataset").Table("table").Create(ctx)
|
||||
```
|
||||
|
||||
* Dataset.ListTables has been replaced with Dataset.Tables.
|
||||
Replace
|
||||
```go
|
||||
tables, err := ds.ListTables(ctx)
|
||||
```
|
||||
with
|
||||
```go
|
||||
it := ds.Tables(ctx)
|
||||
for {
|
||||
table, err := it.Next()
|
||||
if err == iterator.Done {
|
||||
break
|
||||
}
|
||||
if err != nil {
|
||||
// TODO: Handle error.
|
||||
}
|
||||
// TODO: use table.
|
||||
}
|
||||
```
|
||||
|
||||
* Client.Read has been replaced with Job.Read, Table.Read and Query.Read.
|
||||
Replace
|
||||
```go
|
||||
it, err := client.Read(ctx, job)
|
||||
```
|
||||
with
|
||||
```go
|
||||
it, err := job.Read(ctx)
|
||||
```
|
||||
and similarly for reading from tables or queries.
|
||||
|
||||
* The iterator returned from the Read methods is now named RowIterator. Its
|
||||
behavior is closer to the other iterators in these libraries. It no longer
|
||||
supports the Schema method; see the next item.
|
||||
Replace
|
||||
```go
|
||||
for it.Next(ctx) {
|
||||
var vals ValueList
|
||||
if err := it.Get(&vals); err != nil {
|
||||
// TODO: Handle error.
|
||||
}
|
||||
// TODO: use vals.
|
||||
}
|
||||
if err := it.Err(); err != nil {
|
||||
// TODO: Handle error.
|
||||
}
|
||||
```
|
||||
with
|
||||
```
|
||||
for {
|
||||
var vals ValueList
|
||||
err := it.Next(&vals)
|
||||
if err == iterator.Done {
|
||||
break
|
||||
}
|
||||
if err != nil {
|
||||
// TODO: Handle error.
|
||||
}
|
||||
// TODO: use vals.
|
||||
}
|
||||
```
|
||||
Instead of the `RecordsPerRequest(n)` option, write
|
||||
```go
|
||||
it.PageInfo().MaxSize = n
|
||||
```
|
||||
Instead of the `StartIndex(i)` option, write
|
||||
```go
|
||||
it.StartIndex = i
|
||||
```
|
||||
|
||||
* ValueLoader.Load now takes a Schema in addition to a slice of Values.
|
||||
Replace
|
||||
```go
|
||||
func (vl *myValueLoader) Load(v []bigquery.Value)
|
||||
```
|
||||
with
|
||||
```go
|
||||
func (vl *myValueLoader) Load(v []bigquery.Value, s bigquery.Schema)
|
||||
```
|
||||
|
||||
|
||||
* Table.Patch is replaced by Table.Update.
|
||||
Replace
|
||||
```go
|
||||
p := table.Patch()
|
||||
p.Description("new description")
|
||||
metadata, err := p.Apply(ctx)
|
||||
```
|
||||
with
|
||||
```go
|
||||
metadata, err := table.Update(ctx, bigquery.TableMetadataToUpdate{
|
||||
Description: "new description",
|
||||
})
|
||||
```
|
||||
|
||||

* Client.Copy is replaced by separate methods for each of its four functions.
  All options have been replaced by struct fields.

* To load data from Google Cloud Storage into a table, use Table.LoaderFrom.
  Replace
  ```go
  client.Copy(ctx, table, gcsRef)
  ```
  with
  ```go
  table.LoaderFrom(gcsRef).Run(ctx)
  ```
  Instead of passing options to Copy, set fields on the Loader:
  ```go
  loader := table.LoaderFrom(gcsRef)
  loader.WriteDisposition = bigquery.WriteTruncate
  ```

* To extract data from a table into Google Cloud Storage, use
  Table.ExtractorTo. Set fields on the returned Extractor instead of
  passing options.
  Replace
  ```go
  client.Copy(ctx, gcsRef, table)
  ```
  with
  ```go
  table.ExtractorTo(gcsRef).Run(ctx)
  ```

* To copy data into a table from one or more other tables, use
  Table.CopierFrom. Set fields on the returned Copier instead of passing options.
  Replace
  ```go
  client.Copy(ctx, dstTable, srcTable)
  ```
  with
  ```go
  dstTable.CopierFrom(srcTable).Run(ctx)
  ```

* To start a query job, create a Query and call its Run method. Set fields
  on the query instead of passing options.
  Replace
  ```go
  client.Copy(ctx, table, query)
  ```
  with
  ```go
  query.Run(ctx)
  ```

* Table.NewUploader has been renamed to Table.Uploader. Instead of options,
  configure an Uploader by setting its fields.
  Replace
  ```go
  u := table.NewUploader(bigquery.UploadIgnoreUnknownValues())
  ```
  with
  ```go
  u := table.Uploader()
  u.IgnoreUnknownValues = true
  ```

_October 10, 2016_

Breaking changes to cloud.google.com/go/storage:

* AdminClient replaced by methods on Client.
  Replace
  ```go
  adminClient.CreateBucket(ctx, bucketName, attrs)
  ```
  with
  ```go
  client.Bucket(bucketName).Create(ctx, projectID, attrs)
  ```

* BucketHandle.List replaced by BucketHandle.Objects.
  Replace
  ```go
  for query != nil {
      objs, err := bucket.List(d.ctx, query)
      if err != nil { ... }
      query = objs.Next
      for _, obj := range objs.Results {
          fmt.Println(obj)
      }
  }
  ```
  with
  ```go
  iter := bucket.Objects(d.ctx, query)
  for {
      obj, err := iter.Next()
      if err == iterator.Done {
          break
      }
      if err != nil { ... }
      fmt.Println(obj)
  }
  ```
  (The `iterator` package is at `google.golang.org/api/iterator`.)

  Replace `Query.Cursor` with `ObjectIterator.PageInfo().Token`.

  Replace `Query.MaxResults` with `ObjectIterator.PageInfo().MaxSize`.
  (A short sketch of where these two settings now live appears right after this list.)

* ObjectHandle.CopyTo replaced by ObjectHandle.CopierFrom.
  Replace
  ```go
  attrs, err := src.CopyTo(ctx, dst, nil)
  ```
  with
  ```go
  attrs, err := dst.CopierFrom(src).Run(ctx)
  ```

  Replace
  ```go
  attrs, err := src.CopyTo(ctx, dst, &storage.ObjectAttrs{ContentType: "text/html"})
  ```
  with
  ```go
  c := dst.CopierFrom(src)
  c.ContentType = "text/html"
  attrs, err := c.Run(ctx)
  ```

* ObjectHandle.ComposeFrom replaced by ObjectHandle.ComposerFrom.
  Replace
  ```go
  attrs, err := dst.ComposeFrom(ctx, []*storage.ObjectHandle{src1, src2}, nil)
  ```
  with
  ```go
  attrs, err := dst.ComposerFrom(src1, src2).Run(ctx)
  ```

* ObjectHandle.Update's ObjectAttrs argument replaced by ObjectAttrsToUpdate.
  Replace
  ```go
  attrs, err := obj.Update(ctx, &storage.ObjectAttrs{ContentType: "text/html"})
  ```
  with
  ```go
  attrs, err := obj.Update(ctx, storage.ObjectAttrsToUpdate{ContentType: "text/html"})
  ```

* ObjectHandle.WithConditions replaced by ObjectHandle.If.
  Replace
  ```go
  obj.WithConditions(storage.Generation(gen), storage.IfMetaGenerationMatch(mgen))
  ```
  with
  ```go
  obj.Generation(gen).If(storage.Conditions{MetagenerationMatch: mgen})
  ```

  Replace
  ```go
  obj.WithConditions(storage.IfGenerationMatch(0))
  ```
  with
  ```go
  obj.If(storage.Conditions{DoesNotExist: true})
  ```

* `storage.Done` replaced by `iterator.Done` (from package `google.golang.org/api/iterator`).
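
As referenced above under BucketHandle.Objects, a minimal sketch of the new `Query.Cursor` and `Query.MaxResults` equivalents, assuming an existing bucket handle; `savedCursor` is a hypothetical page token saved from an earlier listing:

```go
package main

import (
	"context"

	"cloud.google.com/go/storage"
)

// listWithPaging shows the new pagination knobs on ObjectIterator.
func listWithPaging(ctx context.Context, bucket *storage.BucketHandle, savedCursor string) *storage.ObjectIterator {
	it := bucket.Objects(ctx, nil)
	it.PageInfo().MaxSize = 50        // was Query.MaxResults
	it.PageInfo().Token = savedCursor // was Query.Cursor
	return it
}

func main() {}
```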

_October 6, 2016_

Package preview/logging deleted. Use logging instead.

_September 27, 2016_

Logging client replaced with preview version (see below).

_September 8, 2016_

* New clients for some of Google's Machine Learning APIs: Vision, Speech, and
  Natural Language.

* Preview version of a new [Stackdriver Logging][cloud-logging] client in
  [`cloud.google.com/go/preview/logging`](https://godoc.org/cloud.google.com/go/preview/logging).
  This client uses gRPC as its transport layer, and supports log reading, sinks
  and metrics. It will replace the current client at `cloud.google.com/go/logging` shortly.

86
vendor/cloud.google.com/go/regen-gapic.sh
generated
vendored
@ -1,86 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
# This script generates all GAPIC clients in this repo.
|
||||
# One-time setup:
|
||||
# cd path/to/googleapis # https://github.com/googleapis/googleapis
|
||||
# virtualenv env
|
||||
# . env/bin/activate
|
||||
# pip install googleapis-artman
|
||||
# deactivate
|
||||
#
|
||||
# Regenerate:
|
||||
# cd path/to/googleapis
|
||||
# . env/bin/activate
|
||||
# $GOPATH/src/cloud.google.com/go/regen-gapic.sh
|
||||
# deactivate
|
||||
#
|
||||
# Being in googleapis directory is important;
|
||||
# that's where we find YAML files and where artman puts the "artman-genfiles" directory.
|
||||
#
|
||||
# NOTE: This script does not generate the "raw" gRPC client found in google.golang.org/genproto.
|
||||
# To do that, use the regen.sh script in the genproto repo instead.
|
||||
|
||||
set -ex
|
||||
|
||||
APIS=(
|
||||
google/api/expr/artman_cel.yaml
|
||||
google/iam/artman_iam_admin.yaml
|
||||
google/cloud/asset/artman_cloudasset_v1beta1.yaml
|
||||
google/iam/credentials/artman_iamcredentials_v1.yaml
|
||||
google/cloud/bigquery/datatransfer/artman_bigquerydatatransfer.yaml
|
||||
google/cloud/dataproc/artman_dataproc_v1.yaml
|
||||
google/cloud/dataproc/artman_dataproc_v1beta2.yaml
|
||||
google/cloud/dialogflow/artman_dialogflow_v2.yaml
|
||||
google/cloud/kms/artman_cloudkms.yaml
|
||||
google/cloud/language/artman_language_v1.yaml
|
||||
google/cloud/language/artman_language_v1beta2.yaml
|
||||
google/cloud/oslogin/artman_oslogin_v1.yaml
|
||||
google/cloud/oslogin/artman_oslogin_v1beta.yaml
|
||||
google/cloud/redis/artman_redis_v1beta1.yaml
|
||||
google/cloud/redis/artman_redis_v1.yaml
|
||||
google/cloud/scheduler/artman_cloudscheduler_v1beta1.yaml
|
||||
google/cloud/securitycenter/artman_securitycenter_v1beta1.yaml
|
||||
google/cloud/speech/artman_speech_v1.yaml
|
||||
google/cloud/speech/artman_speech_v1p1beta1.yaml
|
||||
google/cloud/tasks/artman_cloudtasks_v2beta2.yaml
|
||||
google/cloud/tasks/artman_cloudtasks_v2beta3.yaml
|
||||
google/cloud/texttospeech/artman_texttospeech_v1.yaml
|
||||
google/cloud/videointelligence/artman_videointelligence_v1.yaml
|
||||
google/cloud/videointelligence/artman_videointelligence_v1beta1.yaml
|
||||
google/cloud/videointelligence/artman_videointelligence_v1beta2.yaml
|
||||
google/cloud/vision/artman_vision_v1.yaml
|
||||
google/cloud/vision/artman_vision_v1p1beta1.yaml
|
||||
google/devtools/artman_clouddebugger.yaml
|
||||
google/devtools/clouderrorreporting/artman_errorreporting.yaml
|
||||
google/devtools/cloudtrace/artman_cloudtrace_v1.yaml
|
||||
google/devtools/cloudtrace/artman_cloudtrace_v2.yaml
|
||||
google/devtools/containeranalysis/artman_containeranalysis_v1beta1.yaml
|
||||
google/firestore/artman_firestore.yaml
|
||||
google/logging/artman_logging.yaml
|
||||
google/longrunning/artman_longrunning.yaml
|
||||
google/monitoring/artman_monitoring.yaml
|
||||
google/privacy/dlp/artman_dlp_v2.yaml
|
||||
google/pubsub/artman_pubsub.yaml
|
||||
google/spanner/admin/database/artman_spanner_admin_database.yaml
|
||||
google/spanner/admin/instance/artman_spanner_admin_instance.yaml
|
||||
google/spanner/artman_spanner.yaml
|
||||
)
|
||||
|
||||
for api in "${APIS[@]}"; do
|
||||
rm -rf artman-genfiles/*
|
||||
artman --config "$api" generate go_gapic
|
||||
cp -r artman-genfiles/gapi-*/cloud.google.com/go/* $GOPATH/src/cloud.google.com/go/
|
||||
done
|
||||
|
||||
# NOTE(pongad): `sed -i` doesn't work on Macs, because -i option needs an argument.
|
||||
# `-i ''` doesn't work on GNU, since the empty string is treated as a file name.
|
||||
# So we just create the backup and delete it after.
|
||||
ver=$(date +%Y%m%d)
|
||||
find $GOPATH/src/cloud.google.com/go/ -name '*.go' -exec sed -i.backup -e "s/^const versionClient.*/const versionClient = \"$ver\"/" '{}' +
|
||||
find $GOPATH/src/cloud.google.com/go/ -name '*.backup' -delete
|
||||
|
||||
#go list cloud.google.com/go/... | grep apiv | xargs go test
|
||||
|
||||
#go test -short cloud.google.com/go/...
|
||||
|
||||
#echo "googleapis version: $(git rev-parse HEAD)"
|
88
vendor/cloud.google.com/go/run-tests.sh
generated
vendored
@ -1,88 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Selectively run tests for this repo, based on what has changed
|
||||
# in a commit. Runs short tests for the whole repo, and full tests
|
||||
# for changed directories.
|
||||
|
||||
set -e
|
||||
|
||||
prefix=cloud.google.com/go
|
||||
|
||||
dryrun=false
|
||||
if [[ $1 == "-n" ]]; then
|
||||
dryrun=true
|
||||
shift
|
||||
fi
|
||||
|
||||
if [[ $1 == "" ]]; then
|
||||
echo >&2 "usage: $0 [-n] COMMIT"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Files or directories that cause all tests to run if modified.
|
||||
declare -A run_all
|
||||
run_all=([.travis.yml]=1 [run-tests.sh]=1)
|
||||
|
||||
function run {
|
||||
if $dryrun; then
|
||||
echo $*
|
||||
else
|
||||
(set -x; $*)
|
||||
fi
|
||||
}
|
||||
|
||||
|
||||
# Find all the packages that have changed in this commit.
|
||||
declare -A changed_packages
|
||||
|
||||
for f in $(git diff-tree --no-commit-id --name-only -r $1); do
|
||||
if [[ ${run_all[$f]} == 1 ]]; then
|
||||
# This change requires a full test. Do it and exit.
|
||||
run go test -race -v $prefix/...
|
||||
exit
|
||||
fi
|
||||
# Map, e.g., "spanner/client.go" to "$prefix/spanner".
|
||||
d=$(dirname $f)
|
||||
if [[ $d == "." ]]; then
|
||||
pkg=$prefix
|
||||
else
|
||||
pkg=$prefix/$d
|
||||
fi
|
||||
changed_packages[$pkg]=1
|
||||
done
|
||||
|
||||
echo "changed packages: ${!changed_packages[*]}"
|
||||
|
||||
|
||||
# Reports whether its argument, a package name, depends (recursively)
|
||||
# on a changed package.
|
||||
function depends_on_changed_package {
|
||||
# According to go list, a package does not depend on itself, so
|
||||
# we test that separately.
|
||||
if [[ ${changed_packages[$1]} == 1 ]]; then
|
||||
return 0
|
||||
fi
|
||||
for dep in $(go list -f '{{range .Deps}}{{.}} {{end}}' $1); do
|
||||
if [[ ${changed_packages[$dep]} == 1 ]]; then
|
||||
return 0
|
||||
fi
|
||||
done
|
||||
return 1
|
||||
}
|
||||
|
||||
# Collect the packages into two separate lists. (It is faster to call "go test" on a
|
||||
# list of packages than to individually "go test" each one.)
|
||||
|
||||
shorts=
|
||||
fulls=
|
||||
for pkg in $(go list $prefix/...); do # for each package in the repo
|
||||
if depends_on_changed_package $pkg; then # if it depends on a changed package
|
||||
fulls="$fulls $pkg" # run the full test
|
||||
else # otherwise
|
||||
shorts="$shorts $pkg" # run the short test
|
||||
fi
|
||||
done
|
||||
run go test -race -v -short $shorts
|
||||
if [[ $fulls != "" ]]; then
|
||||
run go test -race -v $fulls
|
||||
fi
|
5
vendor/github.com/GoogleCloudPlatform/k8s-cloud-provider/.gitignore
generated
vendored
@ -1,5 +0,0 @@
|
||||
*~
|
||||
*.swp
|
||||
*.swo
|
||||
|
||||
bin/
|
28
vendor/github.com/GoogleCloudPlatform/k8s-cloud-provider/CONTRIBUTING.md
generated
vendored
@ -1,28 +0,0 @@
|
||||
# How to Contribute
|
||||
|
||||
We'd love to accept your patches and contributions to this project. There are
|
||||
just a few small guidelines you need to follow.
|
||||
|
||||
## Contributor License Agreement
|
||||
|
||||
Contributions to this project must be accompanied by a Contributor License
|
||||
Agreement. You (or your employer) retain the copyright to your contribution;
|
||||
this simply gives us permission to use and redistribute your contributions as
|
||||
part of the project. Head over to <https://cla.developers.google.com/> to see
|
||||
your current agreements on file or to sign a new one.
|
||||
|
||||
You generally only need to submit a CLA once, so if you've already submitted one
|
||||
(even if it was for a different project), you probably don't need to do it
|
||||
again.
|
||||
|
||||
## Code reviews
|
||||
|
||||
All submissions, including submissions by project members, require review. We
|
||||
use GitHub pull requests for this purpose. Consult
|
||||
[GitHub Help](https://help.github.com/articles/about-pull-requests/) for more
|
||||
information on using pull requests.
|
||||
|
||||
## Community Guidelines
|
||||
|
||||
This project follows [Google's Open Source Community
|
||||
Guidelines](https://opensource.google.com/conduct/).
|
38
vendor/github.com/GoogleCloudPlatform/k8s-cloud-provider/Gopkg.lock
generated
vendored
@ -1,38 +0,0 @@
|
||||
# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'.
|
||||
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
digest = "1:d3f4da757a2589770c6962aa3dc8edc8285889cdd12402dde001eafe9bb03c64"
|
||||
name = "google.golang.org/api"
|
||||
packages = [
|
||||
"compute/v0.alpha",
|
||||
"compute/v0.beta",
|
||||
"compute/v1",
|
||||
"gensupport",
|
||||
"googleapi",
|
||||
"googleapi/internal/uritemplates",
|
||||
]
|
||||
pruneopts = "UT"
|
||||
revision = "583d854617af4d2080b5d2a24d72f7fc5a128ab2"
|
||||
|
||||
[[projects]]
|
||||
digest = "1:e2999bf1bb6eddc2a6aa03fe5e6629120a53088926520ca3b4765f77d7ff7eab"
|
||||
name = "k8s.io/klog"
|
||||
packages = ["."]
|
||||
pruneopts = "UT"
|
||||
revision = "a5bc97fbc634d635061f3146511332c7e313a55a"
|
||||
version = "v0.1.0"
|
||||
|
||||
[solve-meta]
|
||||
analyzer-name = "dep"
|
||||
analyzer-version = 1
|
||||
input-imports = [
|
||||
"google.golang.org/api/compute/v0.alpha",
|
||||
"google.golang.org/api/compute/v0.beta",
|
||||
"google.golang.org/api/compute/v1",
|
||||
"google.golang.org/api/googleapi",
|
||||
"k8s.io/klog",
|
||||
]
|
||||
solver-name = "gps-cdcl"
|
||||
solver-version = 1
|
33
vendor/github.com/GoogleCloudPlatform/k8s-cloud-provider/Gopkg.toml
generated
vendored
@ -1,33 +0,0 @@
|
||||
# Gopkg.toml example
|
||||
#
|
||||
# Refer to https://golang.github.io/dep/docs/Gopkg.toml.html
|
||||
# for detailed Gopkg.toml documentation.
|
||||
#
|
||||
# required = ["github.com/user/thing/cmd/thing"]
|
||||
# ignored = ["github.com/user/project/pkgX", "bitbucket.org/user/project/pkgA/pkgY"]
|
||||
#
|
||||
# [[constraint]]
|
||||
# name = "github.com/user/project"
|
||||
# version = "1.0.0"
|
||||
#
|
||||
# [[constraint]]
|
||||
# name = "github.com/user/project2"
|
||||
# branch = "dev"
|
||||
# source = "github.com/myfork/project2"
|
||||
#
|
||||
# [[override]]
|
||||
# name = "github.com/x/y"
|
||||
# version = "2.4.0"
|
||||
#
|
||||
# [prune]
|
||||
# non-go = false
|
||||
# go-tests = true
|
||||
# unused-packages = true
|
||||
|
||||
[[constraint]]
|
||||
branch = "master"
|
||||
name = "google.golang.org/api"
|
||||
|
||||
[prune]
|
||||
go-tests = true
|
||||
unused-packages = true
|
36
vendor/github.com/GoogleCloudPlatform/k8s-cloud-provider/Makefile
generated
vendored
@ -1,36 +0,0 @@
|
||||
# Copyright 2018 Google LLC
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# https://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
all: gen build test
|
||||
|
||||
.PHONY: gen
|
||||
gen:
|
||||
go run pkg/cloud/gen/main.go > pkg/cloud/gen.go
|
||||
go run pkg/cloud/gen/main.go -mode test > pkg/cloud/gen_test.go
|
||||
|
||||
.PHONY: build
|
||||
build: gen
|
||||
go build ./...
|
||||
mkdir -p bin
|
||||
|
||||
.PHONY: test
|
||||
test:
|
||||
go test ./...
|
||||
# We cannot use golint currently due to errors in the GCP API naming.
|
||||
# golint ./...
|
||||
go vet ./...
|
||||
|
||||
.PHONY: clean
|
||||
clean:
|
||||
rm -rf ./bin
|
9
vendor/github.com/GoogleCloudPlatform/k8s-cloud-provider/README.md
generated
vendored
@ -1,9 +0,0 @@
|
||||
# k8s-cloud-provider
|
||||
|
||||
This repository contains support files for implementing the Kubernetes cloud
|
||||
provider for the Google Cloud Platform. The code in this repository is the
|
||||
Google Cloud specific portions of the cloud provider logic.
|
||||
|
||||
## Building
|
||||
|
||||
Run `make`.
|
0
vendor/github.com/armon/circbuf/.gitignore
generated
vendored
Executable file → Normal file
0
vendor/github.com/armon/circbuf/LICENSE
generated
vendored
Executable file → Normal file
2
vendor/github.com/beorn7/perks/.gitignore
generated
vendored
@ -1,2 +0,0 @@
|
||||
*.test
|
||||
*.prof
|
31
vendor/github.com/beorn7/perks/README.md
generated
vendored
@ -1,31 +0,0 @@
|
||||
# Perks for Go (golang.org)
|
||||
|
||||
Perks contains the Go package quantile that computes approximate quantiles over
|
||||
an unbounded data stream within low memory and CPU bounds.
|
||||
|
||||
For more information and examples, see:
|
||||
http://godoc.org/github.com/bmizerany/perks
|
||||
|
||||
A very special thank you and shout out to Graham Cormode (Rutgers University),
|
||||
Flip Korn (AT&T Labs–Research), S. Muthukrishnan (Rutgers University), and
|
||||
Divesh Srivastava (AT&T Labs–Research) for their research and publication of
|
||||
[Effective Computation of Biased Quantiles over Data Streams](http://www.cs.rutgers.edu/~muthu/bquant.pdf)
|
||||
|
||||
Thank you, also:
|
||||
* Armon Dadgar (@armon)
|
||||
* Andrew Gerrand (@nf)
|
||||
* Brad Fitzpatrick (@bradfitz)
|
||||
* Keith Rarick (@kr)
|
||||
|
||||
FAQ:
|
||||
|
||||
Q: Why not move the quantile package into the project root?
|
||||
A: I want to add more packages to perks later.
|
||||
|
||||
Copyright (C) 2013 Blake Mizerany
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
34
vendor/github.com/beorn7/perks/quantile/stream.go
generated
vendored
@ -77,15 +77,20 @@ func NewHighBiased(epsilon float64) *Stream {
|
||||
// is guaranteed to be within (Quantile±Epsilon).
|
||||
//
|
||||
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error properties.
|
||||
func NewTargeted(targets map[float64]float64) *Stream {
|
||||
func NewTargeted(targetMap map[float64]float64) *Stream {
|
||||
// Convert map to slice to avoid slow iterations on a map.
|
||||
// ƒ is called on the hot path, so converting the map to a slice
|
||||
// beforehand results in significant CPU savings.
|
||||
targets := targetMapToSlice(targetMap)
|
||||
|
||||
ƒ := func(s *stream, r float64) float64 {
|
||||
var m = math.MaxFloat64
|
||||
var f float64
|
||||
for quantile, epsilon := range targets {
|
||||
if quantile*s.n <= r {
|
||||
f = (2 * epsilon * r) / quantile
|
||||
for _, t := range targets {
|
||||
if t.quantile*s.n <= r {
|
||||
f = (2 * t.epsilon * r) / t.quantile
|
||||
} else {
|
||||
f = (2 * epsilon * (s.n - r)) / (1 - quantile)
|
||||
f = (2 * t.epsilon * (s.n - r)) / (1 - t.quantile)
|
||||
}
|
||||
if f < m {
|
||||
m = f
|
||||
@ -96,6 +101,25 @@ func NewTargeted(targets map[float64]float64) *Stream {
|
||||
return newStream(ƒ)
|
||||
}
|
||||
|
||||
type target struct {
|
||||
quantile float64
|
||||
epsilon float64
|
||||
}
|
||||
|
||||
func targetMapToSlice(targetMap map[float64]float64) []target {
|
||||
targets := make([]target, 0, len(targetMap))
|
||||
|
||||
for quantile, epsilon := range targetMap {
|
||||
t := target{
|
||||
quantile: quantile,
|
||||
epsilon: epsilon,
|
||||
}
|
||||
targets = append(targets, t)
|
||||
}
|
||||
|
||||
return targets
|
||||
}
|
||||
|
||||
// Stream computes quantiles for a stream of float64s. It is not thread-safe by
|
||||
// design. Take care when using across multiple goroutines.
|
||||
type Stream struct {
|
||||
|
21
vendor/github.com/bhendo/go-powershell/LICENSE
generated
vendored
Normal file
@ -0,0 +1,21 @@
|
||||
Copyright (c) 2017, Gorillalabs
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in
|
||||
all copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
THE SOFTWARE.
|
||||
|
||||
http://www.opensource.org/licenses/MIT
|
110
vendor/github.com/bhendo/go-powershell/README.md
generated
vendored
Normal file
@ -0,0 +1,110 @@
|
||||
# go-powershell
|
||||
|
||||
This package is inspired by [jPowerShell](https://github.com/profesorfalken/jPowerShell)
|
||||
and allows one to run and remote-control a PowerShell session. Use this if you
|
||||
don't have a static script that you want to execute, but rather want to run dynamic
|
||||
commands.
|
||||
|
||||
## Installation
|
||||
|
||||
go get github.com/bhendo/go-powershell
|
||||
|
||||
## Usage
|
||||
|
||||
To start a PowerShell shell, you need a backend. Backends take care of starting
|
||||
and controlling the actual powershell.exe process. In most cases, you will want
|
||||
to use the Local backend, which just uses ``os/exec`` to start the process.
|
||||
|
||||
```go
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
ps "github.com/bhendo/go-powershell"
|
||||
"github.com/bhendo/go-powershell/backend"
|
||||
)
|
||||
|
||||
func main() {
|
||||
// choose a backend
|
||||
back := &backend.Local{}
|
||||
|
||||
// start a local powershell process
|
||||
shell, err := ps.New(back)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
defer shell.Exit()
|
||||
|
||||
// ... and interact with it
|
||||
stdout, _, err := shell.Execute("Get-WmiObject -Class Win32_Processor")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
fmt.Println(stdout)
|
||||
}
|
||||
```
|
||||
|
||||
## Remote Sessions
|
||||
|
||||
You can use an existing PS shell to use PSSession cmdlets to connect to remote
|
||||
computers. Instead of manually handling that, you can use the Session middleware,
|
||||
which takes care of authentication. Note that you can still use the "raw" shell
|
||||
to execute commands on the computer where the powershell host process is running.
|
||||
|
||||
```go
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
ps "github.com/bhendo/go-powershell"
|
||||
"github.com/bhendo/go-powershell/backend"
|
||||
"github.com/bhendo/go-powershell/middleware"
|
||||
)
|
||||
|
||||
func main() {
|
||||
// choose a backend
|
||||
back := &backend.Local{}
|
||||
|
||||
// start a local powershell process
|
||||
shell, err := ps.New(back)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
// prepare remote session configuration
|
||||
config := middleware.NewSessionConfig()
|
||||
config.ComputerName = "remote-pc-1"
|
||||
|
||||
// create a new shell by wrapping the existing one in the session middleware
|
||||
session, err := middleware.NewSession(shell, config)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
defer session.Exit() // will also close the underlying ps shell!
|
||||
|
||||
// everything run via the session is run on the remote machine
|
||||
stdout, _, err := session.Execute("Get-WmiObject -Class Win32_Processor")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
fmt.Println(stdout)
|
||||
}
|
||||
```
|
||||
|
||||
Note that a single shell instance is not safe for concurrent use, and neither are remote
|
||||
sessions. You can have as many remote sessions using the same shell as you like,
|
||||
but you must execute commands serially. If you need concurrency, you can just
|
||||
spawn multiple PowerShell processes (i.e. call ``.New()`` multiple times).
|
||||
|
||||
Also, note that all commands that you execute are wrapped in special echo
|
||||
statements to delimit the stdout/stderr streams. After ``.Execute()``ing a command,
|
||||
you can therefore not access ``$LastExitCode`` anymore and expect meaningful
|
||||
results.
|
||||
|
||||
## License
|
||||
|
||||
MIT, see LICENSE file.
|
38
vendor/github.com/bhendo/go-powershell/backend/local.go
generated
vendored
Normal file
@ -0,0 +1,38 @@
|
||||
// Copyright (c) 2017 Gorillalabs. All rights reserved.
|
||||
|
||||
package backend
|
||||
|
||||
import (
|
||||
"io"
|
||||
"os/exec"
|
||||
|
||||
"github.com/juju/errors"
|
||||
)
|
||||
|
||||
type Local struct{}
|
||||
|
||||
func (b *Local) StartProcess(cmd string, args ...string) (Waiter, io.Writer, io.Reader, io.Reader, error) {
|
||||
command := exec.Command(cmd, args...)
|
||||
|
||||
stdin, err := command.StdinPipe()
|
||||
if err != nil {
|
||||
return nil, nil, nil, nil, errors.Annotate(err, "Could not get hold of the PowerShell's stdin stream")
|
||||
}
|
||||
|
||||
stdout, err := command.StdoutPipe()
|
||||
if err != nil {
|
||||
return nil, nil, nil, nil, errors.Annotate(err, "Could not get hold of the PowerShell's stdout stream")
|
||||
}
|
||||
|
||||
stderr, err := command.StderrPipe()
|
||||
if err != nil {
|
||||
return nil, nil, nil, nil, errors.Annotate(err, "Could not get hold of the PowerShell's stderr stream")
|
||||
}
|
||||
|
||||
err = command.Start()
|
||||
if err != nil {
|
||||
return nil, nil, nil, nil, errors.Annotate(err, "Could not spawn PowerShell process")
|
||||
}
|
||||
|
||||
return command, stdin, stdout, stderr, nil
|
||||
}
|
69
vendor/github.com/bhendo/go-powershell/backend/ssh.go
generated
vendored
Normal file
@ -0,0 +1,69 @@
|
||||
// Copyright (c) 2017 Gorillalabs. All rights reserved.
|
||||
|
||||
package backend
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"regexp"
|
||||
"strings"
|
||||
|
||||
"github.com/juju/errors"
|
||||
)
|
||||
|
||||
// sshSession exists so we don't create a hard dependency on crypto/ssh.
|
||||
type sshSession interface {
|
||||
Waiter
|
||||
|
||||
StdinPipe() (io.WriteCloser, error)
|
||||
StdoutPipe() (io.Reader, error)
|
||||
StderrPipe() (io.Reader, error)
|
||||
Start(string) error
|
||||
}
|
||||
|
||||
type SSH struct {
|
||||
Session sshSession
|
||||
}
|
||||
|
||||
func (b *SSH) StartProcess(cmd string, args ...string) (Waiter, io.Writer, io.Reader, io.Reader, error) {
|
||||
stdin, err := b.Session.StdinPipe()
|
||||
if err != nil {
|
||||
return nil, nil, nil, nil, errors.Annotate(err, "Could not get hold of the SSH session's stdin stream")
|
||||
}
|
||||
|
||||
stdout, err := b.Session.StdoutPipe()
|
||||
if err != nil {
|
||||
return nil, nil, nil, nil, errors.Annotate(err, "Could not get hold of the SSH session's stdout stream")
|
||||
}
|
||||
|
||||
stderr, err := b.Session.StderrPipe()
|
||||
if err != nil {
|
||||
return nil, nil, nil, nil, errors.Annotate(err, "Could not get hold of the SSH session's stderr stream")
|
||||
}
|
||||
|
||||
err = b.Session.Start(b.createCmd(cmd, args))
|
||||
if err != nil {
|
||||
return nil, nil, nil, nil, errors.Annotate(err, "Could not spawn process via SSH")
|
||||
}
|
||||
|
||||
return b.Session, stdin, stdout, stderr, nil
|
||||
}
|
||||
|
||||
func (b *SSH) createCmd(cmd string, args []string) string {
|
||||
parts := []string{cmd}
|
||||
simple := regexp.MustCompile(`^[a-z0-9_/.~+-]+$`)
|
||||
|
||||
for _, arg := range args {
|
||||
if !simple.MatchString(arg) {
|
||||
arg = b.quote(arg)
|
||||
}
|
||||
|
||||
parts = append(parts, arg)
|
||||
}
|
||||
|
||||
return strings.Join(parts, " ")
|
||||
}
|
||||
|
||||
func (b *SSH) quote(s string) string {
|
||||
return fmt.Sprintf(`"%s"`, s)
|
||||
}
|
13
vendor/github.com/bhendo/go-powershell/backend/types.go
generated
vendored
Normal file
@ -0,0 +1,13 @@
|
||||
// Copyright (c) 2017 Gorillalabs. All rights reserved.
|
||||
|
||||
package backend
|
||||
|
||||
import "io"
|
||||
|
||||
type Waiter interface {
|
||||
Wait() error
|
||||
}
|
||||
|
||||
type Starter interface {
|
||||
StartProcess(cmd string, args ...string) (Waiter, io.Writer, io.Reader, io.Reader, error)
|
||||
}
|
120
vendor/github.com/bhendo/go-powershell/shell.go
generated
vendored
Normal file
@ -0,0 +1,120 @@
|
||||
// Copyright (c) 2017 Gorillalabs. All rights reserved.
|
||||
|
||||
package powershell
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"strings"
|
||||
"sync"
|
||||
|
||||
"github.com/bhendo/go-powershell/backend"
|
||||
"github.com/bhendo/go-powershell/utils"
|
||||
"github.com/juju/errors"
|
||||
)
|
||||
|
||||
const newline = "\r\n"
|
||||
|
||||
type Shell interface {
|
||||
Execute(cmd string) (string, string, error)
|
||||
Exit()
|
||||
}
|
||||
|
||||
type shell struct {
|
||||
handle backend.Waiter
|
||||
stdin io.Writer
|
||||
stdout io.Reader
|
||||
stderr io.Reader
|
||||
}
|
||||
|
||||
func New(backend backend.Starter) (Shell, error) {
|
||||
handle, stdin, stdout, stderr, err := backend.StartProcess("powershell.exe", "-NoExit", "-Command", "-")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return &shell{handle, stdin, stdout, stderr}, nil
|
||||
}
|
||||
|
||||
func (s *shell) Execute(cmd string) (string, string, error) {
|
||||
if s.handle == nil {
|
||||
return "", "", errors.Annotate(errors.New(cmd), "Cannot execute commands on closed shells.")
|
||||
}
|
||||
|
||||
outBoundary := createBoundary()
|
||||
errBoundary := createBoundary()
|
||||
|
||||
// wrap the command in special markers so we know when to stop reading from the pipes
|
||||
full := fmt.Sprintf("%s; echo '%s'; [Console]::Error.WriteLine('%s')%s", cmd, outBoundary, errBoundary, newline)
|
||||
|
||||
_, err := s.stdin.Write([]byte(full))
|
||||
if err != nil {
|
||||
return "", "", errors.Annotate(errors.Annotate(err, cmd), "Could not send PowerShell command")
|
||||
}
|
||||
|
||||
// read stdout and stderr
|
||||
sout := ""
|
||||
serr := ""
|
||||
|
||||
waiter := &sync.WaitGroup{}
|
||||
waiter.Add(2)
|
||||
|
||||
go streamReader(s.stdout, outBoundary, &sout, waiter)
|
||||
go streamReader(s.stderr, errBoundary, &serr, waiter)
|
||||
|
||||
waiter.Wait()
|
||||
|
||||
if len(serr) > 0 {
|
||||
return sout, serr, errors.Annotate(errors.New(cmd), serr)
|
||||
}
|
||||
|
||||
return sout, serr, nil
|
||||
}
|
||||
|
||||
func (s *shell) Exit() {
|
||||
s.stdin.Write([]byte("exit" + newline))
|
||||
|
||||
// if it's possible to close stdin, do so (some backends, like the local one,
|
||||
// do support it)
|
||||
closer, ok := s.stdin.(io.Closer)
|
||||
if ok {
|
||||
closer.Close()
|
||||
}
|
||||
|
||||
s.handle.Wait()
|
||||
|
||||
s.handle = nil
|
||||
s.stdin = nil
|
||||
s.stdout = nil
|
||||
s.stderr = nil
|
||||
}
|
||||
|
||||
func streamReader(stream io.Reader, boundary string, buffer *string, signal *sync.WaitGroup) error {
|
||||
// read all output until we have found our boundary token
|
||||
output := ""
|
||||
bufsize := 64
|
||||
marker := boundary + newline
|
||||
|
||||
for {
|
||||
buf := make([]byte, bufsize)
|
||||
read, err := stream.Read(buf)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
output = output + string(buf[:read])
|
||||
|
||||
if strings.HasSuffix(output, marker) {
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
*buffer = strings.TrimSuffix(output, marker)
|
||||
signal.Done()
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func createBoundary() string {
|
||||
return "$gorilla" + utils.CreateRandomString(12) + "$"
|
||||
}
|
9
vendor/github.com/bhendo/go-powershell/utils/quote.go
generated
vendored
Normal file
@ -0,0 +1,9 @@
|
||||
// Copyright (c) 2017 Gorillalabs. All rights reserved.
|
||||
|
||||
package utils
|
||||
|
||||
import "strings"
|
||||
|
||||
func QuoteArg(s string) string {
|
||||
return "'" + strings.Replace(s, "'", "\"", -1) + "'"
|
||||
}
|
20
vendor/github.com/bhendo/go-powershell/utils/rand.go
generated
vendored
Normal file
@ -0,0 +1,20 @@
|
||||
// Copyright (c) 2017 Gorillalabs. All rights reserved.
|
||||
|
||||
package utils
|
||||
|
||||
import (
|
||||
"crypto/rand"
|
||||
"encoding/hex"
|
||||
)
|
||||
|
||||
func CreateRandomString(bytes int) string {
|
||||
c := bytes
|
||||
b := make([]byte, c)
|
||||
|
||||
_, err := rand.Read(b)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
return hex.EncodeToString(b)
|
||||
}
|
17
vendor/github.com/blang/semver/package.json
generated
vendored
Normal file
@ -0,0 +1,17 @@
|
||||
{
|
||||
"author": "blang",
|
||||
"bugs": {
|
||||
"URL": "https://github.com/blang/semver/issues",
|
||||
"url": "https://github.com/blang/semver/issues"
|
||||
},
|
||||
"gx": {
|
||||
"dvcsimport": "github.com/blang/semver"
|
||||
},
|
||||
"gxVersion": "0.10.0",
|
||||
"language": "go",
|
||||
"license": "MIT",
|
||||
"name": "semver",
|
||||
"releaseCmd": "git commit -a -m \"gx publish $VERSION\"",
|
||||
"version": "3.4.0"
|
||||
}
|
||||
|
200
vendor/github.com/blang/semver/range.go
generated
vendored
@ -2,10 +2,33 @@ package semver
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strconv"
|
||||
"strings"
|
||||
"unicode"
|
||||
)
|
||||
|
||||
type wildcardType int
|
||||
|
||||
const (
|
||||
noneWildcard wildcardType = iota
|
||||
majorWildcard wildcardType = 1
|
||||
minorWildcard wildcardType = 2
|
||||
patchWildcard wildcardType = 3
|
||||
)
|
||||
|
||||
func wildcardTypefromInt(i int) wildcardType {
|
||||
switch i {
|
||||
case 1:
|
||||
return majorWildcard
|
||||
case 2:
|
||||
return minorWildcard
|
||||
case 3:
|
||||
return patchWildcard
|
||||
default:
|
||||
return noneWildcard
|
||||
}
|
||||
}
|
||||
|
||||
type comparator func(Version, Version) bool
|
||||
|
||||
var (
|
||||
@ -92,8 +115,12 @@ func ParseRange(s string) (Range, error) {
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
expandedParts, err := expandWildcardVersion(orParts)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
var orFn Range
|
||||
for _, p := range orParts {
|
||||
for _, p := range expandedParts {
|
||||
var andFn Range
|
||||
for _, ap := range p {
|
||||
opStr, vStr, err := splitComparatorVersion(ap)
|
||||
@ -164,20 +191,39 @@ func buildVersionRange(opStr, vStr string) (*versionRange, error) {
|
||||
|
||||
}
|
||||
|
||||
// splitAndTrim splits a range string by spaces and cleans leading and trailing spaces
|
||||
// inArray checks if a byte is contained in an array of bytes
|
||||
func inArray(s byte, list []byte) bool {
|
||||
for _, el := range list {
|
||||
if el == s {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// splitAndTrim splits a range string by spaces and cleans whitespaces
|
||||
func splitAndTrim(s string) (result []string) {
|
||||
last := 0
|
||||
var lastChar byte
|
||||
excludeFromSplit := []byte{'>', '<', '='}
|
||||
for i := 0; i < len(s); i++ {
|
||||
if s[i] == ' ' {
|
||||
if s[i] == ' ' && !inArray(lastChar, excludeFromSplit) {
|
||||
if last < i-1 {
|
||||
result = append(result, s[last:i])
|
||||
}
|
||||
last = i + 1
|
||||
} else if s[i] != ' ' {
|
||||
lastChar = s[i]
|
||||
}
|
||||
}
|
||||
if last < len(s)-1 {
|
||||
result = append(result, s[last:])
|
||||
}
|
||||
|
||||
for i, v := range result {
|
||||
result[i] = strings.Replace(v, " ", "", -1)
|
||||
}
|
||||
|
||||
// parts := strings.Split(s, " ")
|
||||
// for _, x := range parts {
|
||||
// if s := strings.TrimSpace(x); len(s) != 0 {
|
||||
@ -188,7 +234,6 @@ func splitAndTrim(s string) (result []string) {
|
||||
}
|
||||
|
||||
// splitComparatorVersion splits the comparator from the version.
|
||||
// Spaces between the comparator and the version are not allowed.
|
||||
// Input must be free of leading or trailing spaces.
|
||||
func splitComparatorVersion(s string) (string, string, error) {
|
||||
i := strings.IndexFunc(s, unicode.IsDigit)
|
||||
@ -198,6 +243,144 @@ func splitComparatorVersion(s string) (string, string, error) {
|
||||
return strings.TrimSpace(s[0:i]), s[i:], nil
|
||||
}
|
||||
|
||||
// getWildcardType will return the type of wildcard that the
|
||||
// passed version contains
|
||||
func getWildcardType(vStr string) wildcardType {
|
||||
parts := strings.Split(vStr, ".")
|
||||
nparts := len(parts)
|
||||
wildcard := parts[nparts-1]
|
||||
|
||||
possibleWildcardType := wildcardTypefromInt(nparts)
|
||||
if wildcard == "x" {
|
||||
return possibleWildcardType
|
||||
}
|
||||
|
||||
return noneWildcard
|
||||
}
|
||||
|
||||
// createVersionFromWildcard will convert a wildcard version
|
||||
// into a regular version, replacing 'x's with '0's, handling
|
||||
// special cases like '1.x.x' and '1.x'
|
||||
func createVersionFromWildcard(vStr string) string {
|
||||
// handle 1.x.x
|
||||
vStr2 := strings.Replace(vStr, ".x.x", ".x", 1)
|
||||
vStr2 = strings.Replace(vStr2, ".x", ".0", 1)
|
||||
parts := strings.Split(vStr2, ".")
|
||||
|
||||
// handle 1.x
|
||||
if len(parts) == 2 {
|
||||
return vStr2 + ".0"
|
||||
}
|
||||
|
||||
return vStr2
|
||||
}
|
||||
|
||||
// incrementMajorVersion will increment the major version
|
||||
// of the passed version
|
||||
func incrementMajorVersion(vStr string) (string, error) {
|
||||
parts := strings.Split(vStr, ".")
|
||||
i, err := strconv.Atoi(parts[0])
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
parts[0] = strconv.Itoa(i + 1)
|
||||
|
||||
return strings.Join(parts, "."), nil
|
||||
}
|
||||
|
||||
// incrementMinorVersion will increment the minor version
|
||||
// of the passed version
|
||||
func incrementMinorVersion(vStr string) (string, error) {
|
||||
parts := strings.Split(vStr, ".")
|
||||
i, err := strconv.Atoi(parts[1])
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
parts[1] = strconv.Itoa(i + 1)
|
||||
|
||||
return strings.Join(parts, "."), nil
|
||||
}
|
||||
|
||||
// expandWildcardVersion will expand wildcards inside versions
|
||||
// following these rules:
|
||||
//
|
||||
// * when dealing with patch wildcards:
|
||||
// >= 1.2.x will become >= 1.2.0
|
||||
// <= 1.2.x will become < 1.3.0
|
||||
// > 1.2.x will become >= 1.3.0
|
||||
// < 1.2.x will become < 1.2.0
|
||||
// != 1.2.x will become < 1.2.0 >= 1.3.0
|
||||
//
|
||||
// * when dealing with minor wildcards:
|
||||
// >= 1.x will become >= 1.0.0
|
||||
// <= 1.x will become < 2.0.0
|
||||
// > 1.x will become >= 2.0.0
|
||||
// < 1.0 will become < 1.0.0
|
||||
// != 1.x will become < 1.0.0 >= 2.0.0
|
||||
//
|
||||
// * when dealing with wildcards without
|
||||
// version operator:
|
||||
// 1.2.x will become >= 1.2.0 < 1.3.0
|
||||
// 1.x will become >= 1.0.0 < 2.0.0
|
||||
func expandWildcardVersion(parts [][]string) ([][]string, error) {
|
||||
var expandedParts [][]string
|
||||
for _, p := range parts {
|
||||
var newParts []string
|
||||
for _, ap := range p {
|
||||
if strings.Index(ap, "x") != -1 {
|
||||
opStr, vStr, err := splitComparatorVersion(ap)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
versionWildcardType := getWildcardType(vStr)
|
||||
flatVersion := createVersionFromWildcard(vStr)
|
||||
|
||||
var resultOperator string
|
||||
var shouldIncrementVersion bool
|
||||
switch opStr {
|
||||
case ">":
|
||||
resultOperator = ">="
|
||||
shouldIncrementVersion = true
|
||||
case ">=":
|
||||
resultOperator = ">="
|
||||
case "<":
|
||||
resultOperator = "<"
|
||||
case "<=":
|
||||
resultOperator = "<"
|
||||
shouldIncrementVersion = true
|
||||
case "", "=", "==":
|
||||
newParts = append(newParts, ">="+flatVersion)
|
||||
resultOperator = "<"
|
||||
shouldIncrementVersion = true
|
||||
case "!=", "!":
|
||||
newParts = append(newParts, "<"+flatVersion)
|
||||
resultOperator = ">="
|
||||
shouldIncrementVersion = true
|
||||
}
|
||||
|
||||
var resultVersion string
|
||||
if shouldIncrementVersion {
|
||||
switch versionWildcardType {
|
||||
case patchWildcard:
|
||||
resultVersion, _ = incrementMinorVersion(flatVersion)
|
||||
case minorWildcard:
|
||||
resultVersion, _ = incrementMajorVersion(flatVersion)
|
||||
}
|
||||
} else {
|
||||
resultVersion = flatVersion
|
||||
}
|
||||
|
||||
ap = resultOperator + resultVersion
|
||||
}
|
||||
newParts = append(newParts, ap)
|
||||
}
|
||||
expandedParts = append(expandedParts, newParts)
|
||||
}
|
||||
|
||||
return expandedParts, nil
|
||||
}
|
||||
|
||||
func parseComparator(s string) comparator {
|
||||
switch s {
|
||||
case "==":
|
||||
@ -222,3 +405,12 @@ func parseComparator(s string) comparator {
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// MustParseRange is like ParseRange but panics if the range cannot be parsed.
|
||||
func MustParseRange(s string) Range {
|
||||
r, err := ParseRange(s)
|
||||
if err != nil {
|
||||
panic(`semver: ParseRange(` + s + `): ` + err.Error())
|
||||
}
|
||||
return r
|
||||
}
|
||||
|
23
vendor/github.com/blang/semver/semver.go
generated
vendored
@ -200,6 +200,29 @@ func Make(s string) (Version, error) {
|
||||
return Parse(s)
|
||||
}
|
||||
|
||||
// ParseTolerant allows for certain version specifications that do not strictly adhere to semver
|
||||
// specs to be parsed by this library. It does so by normalizing versions before passing them to
|
||||
// Parse(). It currently trims spaces, removes a "v" prefix, and adds a 0 patch number to versions
|
||||
// with only major and minor components specified
|
||||
func ParseTolerant(s string) (Version, error) {
|
||||
s = strings.TrimSpace(s)
|
||||
s = strings.TrimPrefix(s, "v")
|
||||
|
||||
// Split into major.minor.(patch+pr+meta)
|
||||
parts := strings.SplitN(s, ".", 3)
|
||||
if len(parts) < 3 {
|
||||
if strings.ContainsAny(parts[len(parts)-1], "+-") {
|
||||
return Version{}, errors.New("Short version cannot contain PreRelease/Build meta data")
|
||||
}
|
||||
for len(parts) < 3 {
|
||||
parts = append(parts, "0")
|
||||
}
|
||||
s = strings.Join(parts, ".")
|
||||
}
|
||||
|
||||
return Parse(s)
|
||||
}
|
||||
|
||||
// Parse parses version string and returns a validated Version or error
|
||||
func Parse(s string) (Version, error) {
|
||||
if len(s) == 0 {
|
||||
|
10
vendor/github.com/buger/jsonparser/.gitignore
generated
vendored
Normal file
@ -0,0 +1,10 @@
|
||||
|
||||
*.test
|
||||
|
||||
*.out
|
||||
|
||||
*.mprof
|
||||
|
||||
vendor/github.com/buger/goterm/
|
||||
prof.cpu
|
||||
prof.mem
|
8
vendor/github.com/buger/jsonparser/.travis.yml
generated
vendored
Normal file
@ -0,0 +1,8 @@
|
||||
language: go
|
||||
go:
|
||||
- 1.7.x
|
||||
- 1.8.x
|
||||
- 1.9.x
|
||||
- 1.10.x
|
||||
- 1.11.x
|
||||
script: go test -v ./.
|
12
vendor/github.com/buger/jsonparser/Dockerfile
generated
vendored
Normal file
@ -0,0 +1,12 @@
|
||||
FROM golang:1.6
|
||||
|
||||
RUN go get github.com/Jeffail/gabs
|
||||
RUN go get github.com/bitly/go-simplejson
|
||||
RUN go get github.com/pquerna/ffjson
|
||||
RUN go get github.com/antonholmquist/jason
|
||||
RUN go get github.com/mreiferson/go-ujson
|
||||
RUN go get -tags=unsafe -u github.com/ugorji/go/codec
|
||||
RUN go get github.com/mailru/easyjson
|
||||
|
||||
WORKDIR /go/src/github.com/buger/jsonparser
|
||||
ADD . /go/src/github.com/buger/jsonparser
|
21
vendor/github.com/buger/jsonparser/LICENSE
generated
vendored
Normal file
@ -0,0 +1,21 @@
|
||||
MIT License
|
||||
|
||||
Copyright (c) 2016 Leonid Bugaev
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all
|
||||
copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE.
|
36
vendor/github.com/buger/jsonparser/Makefile
generated
vendored
Normal file
@ -0,0 +1,36 @@
|
||||
SOURCE = parser.go
|
||||
CONTAINER = jsonparser
|
||||
SOURCE_PATH = /go/src/github.com/buger/jsonparser
|
||||
BENCHMARK = JsonParser
|
||||
BENCHTIME = 5s
|
||||
TEST = .
|
||||
DRUN = docker run -v `pwd`:$(SOURCE_PATH) -i -t $(CONTAINER)
|
||||
|
||||
build:
|
||||
docker build -t $(CONTAINER) .
|
||||
|
||||
race:
|
||||
$(DRUN) --env GORACE="halt_on_error=1" go test ./. $(ARGS) -v -race -timeout 15s
|
||||
|
||||
bench:
|
||||
$(DRUN) go test $(LDFLAGS) -test.benchmem -bench $(BENCHMARK) ./benchmark/ $(ARGS) -benchtime $(BENCHTIME) -v
|
||||
|
||||
bench_local:
|
||||
$(DRUN) go test $(LDFLAGS) -test.benchmem -bench . $(ARGS) -benchtime $(BENCHTIME) -v
|
||||
|
||||
profile:
|
||||
$(DRUN) go test $(LDFLAGS) -test.benchmem -bench $(BENCHMARK) ./benchmark/ $(ARGS) -memprofile mem.mprof -v
|
||||
$(DRUN) go test $(LDFLAGS) -test.benchmem -bench $(BENCHMARK) ./benchmark/ $(ARGS) -cpuprofile cpu.out -v
|
||||
$(DRUN) go test $(LDFLAGS) -test.benchmem -bench $(BENCHMARK) ./benchmark/ $(ARGS) -c
|
||||
|
||||
test:
|
||||
$(DRUN) go test $(LDFLAGS) ./ -run $(TEST) -timeout 10s $(ARGS) -v
|
||||
|
||||
fmt:
|
||||
$(DRUN) go fmt ./...
|
||||
|
||||
vet:
|
||||
$(DRUN) go vet ./.
|
||||
|
||||
bash:
|
||||
$(DRUN) /bin/bash
|
365
vendor/github.com/buger/jsonparser/README.md
generated
vendored
Normal file
@ -0,0 +1,365 @@
|
||||
[![Go Report Card](https://goreportcard.com/badge/github.com/buger/jsonparser)](https://goreportcard.com/report/github.com/buger/jsonparser) ![License](https://img.shields.io/dub/l/vibe-d.svg)
|
||||
# Alternative JSON parser for Go (so far fastest)
|
||||
|
||||
It does not require you to know the structure of the payload (e.g. create structs), and allows accessing fields by providing the path to them. It is up to **10 times faster** than the standard `encoding/json` package (depending on payload size and usage) and **allocates no memory**. See benchmarks below.
|
||||
|
||||
## Rationale
|
||||
Originally I made this for a project that relies on a lot of 3rd party APIs that can be unpredictable and complex.
|
||||
I love simplicity and prefer to avoid external dependencies. `encoding/json` requires you to know your data structures exactly, and if you prefer to use `map[string]interface{}` instead, it will be very slow and hard to manage.
|
||||
I investigated what's on the market and found that most libraries are just wrappers around `encoding/json`; there are a few options with their own parsers (`ffjson`, `easyjson`), but they still require you to create data structures.
|
||||
|
||||
|
||||
The goal of this project is to push the JSON parser to its performance limits without sacrificing compliance or developer experience.
|
||||
|
||||
## Example
|
||||
For the given JSON our goal is to extract the user's full name, number of github followers and avatar.
|
||||
|
||||
```go
|
||||
import "github.com/buger/jsonparser"
|
||||
|
||||
...
|
||||
|
||||
data := []byte(`{
|
||||
"person": {
|
||||
"name": {
|
||||
"first": "Leonid",
|
||||
"last": "Bugaev",
|
||||
"fullName": "Leonid Bugaev"
|
||||
},
|
||||
"github": {
|
||||
"handle": "buger",
|
||||
"followers": 109
|
||||
},
|
||||
"avatars": [
|
||||
{ "url": "https://avatars1.githubusercontent.com/u/14009?v=3&s=460", "type": "thumbnail" }
|
||||
]
|
||||
},
|
||||
"company": {
|
||||
"name": "Acme"
|
||||
}
|
||||
}`)
|
||||
|
||||
// You can specify key path by providing arguments to Get function
|
||||
jsonparser.Get(data, "person", "name", "fullName")
|
||||
|
||||
// There is `GetInt` and `GetBoolean` helpers if you exactly know key data type
|
||||
jsonparser.GetInt(data, "person", "github", "followers")
|
||||
|
||||
// When you try to get object, it will return you []byte slice pointer to data containing it
|
||||
// In `company` it will be `{"name": "Acme"}`
|
||||
jsonparser.Get(data, "company")
|
||||
|
||||
// If the key doesn't exist it will throw an error
|
||||
var size int64
|
||||
if value, err := jsonparser.GetInt(data, "company", "size"); err == nil {
|
||||
size = value
|
||||
}
|
||||
|
||||
// You can use `ArrayEach` helper to iterate items [item1, item2 .... itemN]
|
||||
jsonparser.ArrayEach(data, func(value []byte, dataType jsonparser.ValueType, offset int, err error) {
|
||||
fmt.Println(jsonparser.Get(value, "url"))
|
||||
}, "person", "avatars")
|
||||
|
||||
// Or use can access fields by index!
|
||||
jsonparser.GetInt("person", "avatars", "[0]", "url")
|
||||
|
||||
// You can use `ObjectEach` helper to iterate objects { "key1":object1, "key2":object2, .... "keyN":objectN }
|
||||
jsonparser.ObjectEach(data, func(key []byte, value []byte, dataType jsonparser.ValueType, offset int) error {
|
||||
fmt.Printf("Key: '%s'\n Value: '%s'\n Type: %s\n", string(key), string(value), dataType)
|
||||
return nil
|
||||
}, "person", "name")
|
||||
|
||||
// The most efficient way to extract multiple keys is `EachKey`
|
||||
|
||||
paths := [][]string{
|
||||
[]string{"person", "name", "fullName"},
|
||||
[]string{"person", "avatars", "[0]", "url"},
|
||||
[]string{"company", "url"},
|
||||
}
|
||||
jsonparser.EachKey(data, func(idx int, value []byte, vt jsonparser.ValueType, err error){
|
||||
switch idx {
|
||||
case 0: // []string{"person", "name", "fullName"}
|
||||
...
|
||||
case 1: // []string{"person", "avatars", "[0]", "url"}
|
||||
...
|
||||
case 2: // []string{"company", "url"},
|
||||
...
|
||||
}
|
||||
}, paths...)
|
||||
|
||||
// For more information see docs below
|
||||
```
|
||||
|
||||
## Need to speedup your app?
|
||||
|
||||
I'm available for consulting and can help you push your app performance to the limits. Ping me at: leonsbox@gmail.com.
|
||||
|
||||
## Reference
|
||||
|
||||
Library API is really simple. You just need the `Get` method to perform any operation. The rest is just helpers around it.
|
||||
|
||||
You also can view API at [godoc.org](https://godoc.org/github.com/buger/jsonparser)
|
||||
|
||||
|
||||
### **`Get`**
|
||||
```go
|
||||
func Get(data []byte, keys ...string) (value []byte, dataType jsonparser.ValueType, offset int, err error)
|
||||
```
|
||||
Receives data structure, and key path to extract value from.
|
||||
|
||||
Returns:
|
||||
* `value` - Pointer to original data structure containing key value, or just empty slice if nothing found or error
|
||||
* `dataType` - Can be: `NotExist`, `String`, `Number`, `Object`, `Array`, `Boolean` or `Null`
|
||||
* `offset` - Offset from provided data structure where key value ends. Used mostly internally, for example for `ArrayEach` helper.
|
||||
* `err` - If the key is not found or there is any other parsing issue, an error is returned. If the key is not found, `dataType` is also set to `NotExist`
|
||||
|
||||
Accepts multiple keys to specify the path to a JSON value (useful for querying nested structures).
|
||||
If no keys are provided it will try to extract the closest JSON value (a simple value or an object/array), which is useful for reading streams or arrays; see the `ArrayEach` implementation.
|
||||
|
||||
Note that keys can be array indexes: `jsonparser.GetString(data, "person", "avatars", "[0]", "url")`, pretty cool, yeah?
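As a quick illustration (a minimal sketch reusing the `data` payload from the usage example above), checking `dataType` and `err` could look like this:

```go
value, dataType, _, err := jsonparser.Get(data, "person", "name", "fullName")
if err != nil {
	// dataType is NotExist when the key is missing
	fmt.Println("lookup failed:", err)
} else if dataType == jsonparser.String {
	fmt.Printf("fullName = %s\n", value)
}
```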
|
||||
|
||||
### **`GetString`**
|
||||
```go
|
||||
func GetString(data []byte, keys ...string) (val string, err error)
|
||||
```
|
||||
Returns strings with escaped and Unicode characters properly handled. Note that this will cause additional memory allocations.
|
||||
|
||||
### **`GetUnsafeString`**
|
||||
If you need a string in your app and are ready to sacrifice support for escaped symbols in favor of speed, this returns a string mapped to the existing byte slice memory, without any allocations:
|
||||
```go
|
||||
s, _ := jsonparser.GetUnsafeString(data, "person", "name", "title")
|
||||
switch s {
|
||||
case "CEO":
|
||||
...
|
||||
case "Engineer":
|
||||
...
|
||||
...
|
||||
}
|
||||
```
|
||||
Note that `unsafe` here means that your string is valid only as long as the GC has not freed the underlying byte slice. In most cases this means you can use the string only in the current context and should not pass it anywhere externally: through channels or in any other way.
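If the value has to outlive the original payload, one option (just a sketch, not part of the library API) is to force an explicit copy, or simply fall back to `GetString`:

```go
title, _ := jsonparser.GetUnsafeString(data, "person", "name", "title")
owned := string([]byte(title)) // copies the bytes, so `owned` no longer aliases `data`
fmt.Println(owned)
```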
|
||||
|
||||
|
||||
### **`GetBoolean`**, **`GetInt`** and **`GetFloat`**
|
||||
```go
|
||||
func GetBoolean(data []byte, keys ...string) (val bool, err error)
|
||||
|
||||
func GetFloat(data []byte, keys ...string) (val float64, err error)
|
||||
|
||||
func GetInt(data []byte, keys ...string) (val int64, err error)
|
||||
```
|
||||
If you know the key type, you can use the helpers above.
|
||||
If the key's data type does not match, an error is returned.
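For example, with the sample payload from the beginning of this README (the boolean and float keys in the comments are hypothetical, they are not in that payload):

```go
followers, err := jsonparser.GetInt(data, "person", "github", "followers")
if err != nil {
	fmt.Println("missing key or wrong type:", err)
}
fmt.Println(followers)

// Hypothetical keys, shown only to illustrate the other helpers:
// active, _ := jsonparser.GetBoolean(data, "person", "active")
// score, _ := jsonparser.GetFloat(data, "person", "score")
```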
|
||||
|
||||
### **`ArrayEach`**
|
||||
```go
|
||||
func ArrayEach(data []byte, cb func(value []byte, dataType jsonparser.ValueType, offset int, err error), keys ...string)
|
||||
```
|
||||
Needed for iterating over arrays; it accepts a callback function with the same arguments as `Get`'s return values.
|
||||
|
||||
### **`ObjectEach`**
|
||||
```go
|
||||
func ObjectEach(data []byte, callback func(key []byte, value []byte, dataType ValueType, offset int) error, keys ...string) (err error)
|
||||
```
|
||||
Needed for iterating over objects; accepts a callback function. Example:
|
||||
```go
|
||||
var handler func([]byte, []byte, jsonparser.ValueType, int) error
|
||||
handler = func(key []byte, value []byte, dataType jsonparser.ValueType, offset int) error {
|
||||
//do stuff here
return nil
|
||||
}
|
||||
jsonparser.ObjectEach(myJson, handler)
|
||||
```
|
||||
|
||||
|
||||
### **`EachKey`**
|
||||
```go
|
||||
func EachKey(data []byte, cb func(idx int, value []byte, dataType jsonparser.ValueType, err error), paths ...[]string)
|
||||
```
|
||||
When you need to read multiple keys and are not afraid of a low-level API, `EachKey` is your friend. It reads the payload only once and calls the callback function as each path is found. By contrast, calling `Get` multiple times means the payload is processed again on every call. Depending on the payload, `EachKey` can be several times faster than `Get`. Paths can use nested keys as well!
|
||||
|
||||
```go
|
||||
paths := [][]string{
|
||||
[]string{"uuid"},
|
||||
[]string{"tz"},
|
||||
[]string{"ua"},
|
||||
[]string{"st"},
|
||||
}
|
||||
var data SmallPayload
|
||||
|
||||
jsonparser.EachKey(smallFixture, func(idx int, value []byte, vt jsonparser.ValueType, err error){
|
||||
switch idx {
|
||||
case 0:
|
||||
data.Uuid, _ = jsonparser.ParseString(value)
|
||||
case 1:
|
||||
v, _ := jsonparser.ParseInt(value)
|
||||
data.Tz = int(v)
|
||||
case 2:
|
||||
data.Ua, _ = jsonparser.ParseString(value)
|
||||
case 3:
|
||||
v, _ := jsonparser.ParseInt(value)
|
||||
data.St = int(v)
|
||||
}
|
||||
}, paths...)
|
||||
```
|
||||
|
||||
### **`Set`**
|
||||
```go
|
||||
func Set(data []byte, setValue []byte, keys ...string) (value []byte, err error)
|
||||
```
|
||||
Receives existing data structure, key path to set, and value to set at that key. *This functionality is experimental.*
|
||||
|
||||
Returns:
|
||||
* `value` - Pointer to original data structure with updated or added key value.
|
||||
* `err` - Set if there is any parsing issue.
|
||||
|
||||
Accepts multiple keys to specify path to JSON value (in case of updating or creating nested structures).
|
||||
|
||||
Note that keys can be array indexes: `jsonparser.Set(data, []byte("http://github.com"), "person", "avatars", "[0]", "url")`
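A minimal sketch of a `Set` call, reusing the `data` payload from the usage example above (one assumption here: since the new value is spliced in as raw bytes, a JSON string value is written with its own quotes):

```go
newData, err := jsonparser.Set(data, []byte(`"Jane Doe"`), "person", "name", "fullName")
if err == nil {
	data = newData // Set returns the updated payload rather than modifying in place
}
```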
|
||||
|
||||
### **`Delete`**
|
||||
```go
|
||||
func Delete(data []byte, keys ...string) (value []byte)
|
||||
```
|
||||
Receives existing data structure, and key path to delete. *This functionality is experimental.*
|
||||
|
||||
Returns:
|
||||
* `value` - Pointer to original data structure with key path deleted if it can be found. If there is no key path, then the whole data structure is deleted.
|
||||
|
||||
Accepts multiple keys to specify path to JSON value (in case of updating or creating nested structures).
|
||||
|
||||
Note that keys can be array indexes: `jsonparser.Delete(data, "person", "avatars", "[0]", "url")`
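A short sketch of `Delete` in use, again with the `data` payload from the usage example:

```go
// Remove the first avatar entry; Delete returns the modified payload.
data = jsonparser.Delete(data, "person", "avatars", "[0]")
```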
|
||||
|
||||
|
||||
## What makes it so fast?
|
||||
* It does not rely on `encoding/json`, `reflection` or `interface{}`, the only real package dependency is `bytes`.
|
||||
* Operates with the JSON payload at the byte level, giving you pointers into the original data structure: no memory allocation.
|
||||
* No automatic type conversions: by default everything is a []byte, but it provides the value type, so you can convert it yourself (a few helpers are included; see the sketch after this list).
|
||||
* Does not parse the full record, only the keys you specify
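For example, a small sketch of doing such a conversion yourself, using the `ParseInt` helper that also appears in the `EachKey` example above:

```go
raw, dataType, _, err := jsonparser.Get(data, "person", "github", "followers")
if err == nil && dataType == jsonparser.Number {
	followers, _ := jsonparser.ParseInt(raw) // convert the raw bytes to int64 yourself
	fmt.Println(followers)
}
```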
|
||||
|
||||
|
||||
## Benchmarks
|
||||
|
||||
There are 3 benchmark types, trying to simulate real-life usage for small, medium and large JSON payloads.
|
||||
For each metric, the lower value is better. Time/op is in nanoseconds. Values better than standard encoding/json are marked in bold.
|
||||
Benchmarks were run on a standard Linode 1024 box.
|
||||
|
||||
Compared libraries:
|
||||
* https://golang.org/pkg/encoding/json
|
||||
* https://github.com/Jeffail/gabs
|
||||
* https://github.com/a8m/djson
|
||||
* https://github.com/bitly/go-simplejson
|
||||
* https://github.com/antonholmquist/jason
|
||||
* https://github.com/mreiferson/go-ujson
|
||||
* https://github.com/ugorji/go/codec
|
||||
* https://github.com/pquerna/ffjson
|
||||
* https://github.com/mailru/easyjson
|
||||
* https://github.com/buger/jsonparser
|
||||
|
||||
#### TLDR
|
||||
If you want to skip the next sections, there are two winners: `jsonparser` and `easyjson`.
|
||||
`jsonparser` is up to 10 times faster than the standard `encoding/json` package (depending on payload size and usage), and almost infinitely (literally) better in memory consumption because it operates on data at the byte level and provides direct slice pointers.
|
||||
`easyjson` wins on CPU in the medium tests, and frankly I'm impressed with this package: these are remarkable results considering that it is almost a drop-in replacement for `encoding/json` (it requires some code generation).
|
||||
|
||||
It's hard to fully compare `jsonparser` with `easyjson` (or `ffjson`): they are true parsers and fully process the record, unlike `jsonparser`, which parses only the keys you specify.
|
||||
|
||||
If you are searching for a replacement for `encoding/json` while keeping structs, `easyjson` is an amazing choice. If you want to process dynamic JSON, have memory constraints, or want more control over your data, you should try `jsonparser`.
|
||||
|
||||
`jsonparser` performance heavily depends on usage, and it works best when you do not need to process the full record, only some keys. The more calls you need to make, the slower it will be. In contrast, `easyjson` (or `ffjson`, `encoding/json`) parses the record only once, and then you can make as many calls as you want.
|
||||
|
||||
With great power comes great responsibility! :)
|
||||
|
||||
|
||||
#### Small payload
|
||||
|
||||
Each test processes 190 bytes of HTTP log as a JSON record.
|
||||
It should read multiple fields.
|
||||
https://github.com/buger/jsonparser/blob/master/benchmark/benchmark_small_payload_test.go
|
||||
|
||||
Library | time/op | bytes/op | allocs/op
|
||||
------ | ------- | -------- | -------
|
||||
encoding/json struct | 7879 | 880 | 18
|
||||
encoding/json interface{} | 8946 | 1521 | 38
|
||||
Jeffail/gabs | 10053 | 1649 | 46
|
||||
bitly/go-simplejson | 10128 | 2241 | 36
|
||||
antonholmquist/jason | 27152 | 7237 | 101
|
||||
github.com/ugorji/go/codec | 8806 | 2176 | 31
|
||||
mreiferson/go-ujson | **7008** | **1409** | 37
|
||||
a8m/djson | 3862 | 1249 | 30
|
||||
pquerna/ffjson | **3769** | **624** | **15**
|
||||
mailru/easyjson | **2002** | **192** | **9**
|
||||
buger/jsonparser | **1367** | **0** | **0**
|
||||
buger/jsonparser (EachKey API) | **809** | **0** | **0**
|
||||
|
||||
Winners are ffjson, easyjson and jsonparser, where jsonparser is up to 9.8x faster than encoding/json and 4.6x faster than ffjson, and slightly faster than easyjson.
|
||||
If you look at memory allocation, jsonparser has no rivals, as it makes no data copy and operates with raw []byte structures and pointers to it.
|
||||
|
||||
#### Medium payload
|
||||
|
||||
Each test processes a 2.4kb JSON record (based on Clearbit API).
|
||||
It should read multiple nested fields and 1 array.
|
||||
|
||||
https://github.com/buger/jsonparser/blob/master/benchmark/benchmark_medium_payload_test.go
|
||||
|
||||
| Library | time/op | bytes/op | allocs/op |
|
||||
| ------- | ------- | -------- | --------- |
|
||||
| encoding/json struct | 57749 | 1336 | 29 |
|
||||
| encoding/json interface{} | 79297 | 10627 | 215 |
|
||||
| Jeffail/gabs | 83807 | 11202 | 235 |
|
||||
| bitly/go-simplejson | 88187 | 17187 | 220 |
|
||||
| antonholmquist/jason | 94099 | 19013 | 247 |
|
||||
| github.com/ugorji/go/codec | 114719 | 6712 | 152 |
|
||||
| mreiferson/go-ujson | **56972** | 11547 | 270 |
|
||||
| a8m/djson | 28525 | 10196 | 198 |
|
||||
| pquerna/ffjson | **20298** | **856** | **20** |
|
||||
| mailru/easyjson | **10512** | **336** | **12** |
|
||||
| buger/jsonparser | **15955** | **0** | **0** |
|
||||
| buger/jsonparser (EachKey API) | **8916** | **0** | **0** |
|
||||
|
||||
The difference between ffjson and jsonparser in CPU usage is smaller, while the memory consumption difference is growing. On the other hand `easyjson` shows remarkable performance for medium payload.
|
||||
|
||||
`gabs`, `go-simplejson` and `jason` are based on `encoding/json` and `map[string]interface{}` and are really only helpers for unstructured JSON; their performance correlates with `encoding/json interface{}`, so they skip the next round.
|
||||
`go-ujson`, while it has its own parser, shows the same performance as `encoding/json`, so it also skips the next round. The same goes for `ugorji/go/codec`, which additionally showed unexpectedly bad performance for complex payloads.
|
||||
|
||||
|
||||
#### Large payload
|
||||
|
||||
Each test processes a 24kb JSON record (based on the Discourse API).
|
||||
It should read 2 arrays, and for each item in the array get a few fields.
|
||||
Basically it means processing a full JSON file.
|
||||
|
||||
https://github.com/buger/jsonparser/blob/master/benchmark/benchmark_large_payload_test.go
|
||||
|
||||
| Library | time/op | bytes/op | allocs/op |
|
||||
| --- | --- | --- | --- |
|
||||
| encoding/json struct | 748336 | 8272 | 307 |
|
||||
| encoding/json interface{} | 1224271 | 215425 | 3395 |
|
||||
| a8m/djson | 510082 | 213682 | 2845 |
|
||||
| pquerna/ffjson | **312271** | **7792** | **298** |
|
||||
| mailru/easyjson | **154186** | **6992** | **288** |
|
||||
| buger/jsonparser | **85308** | **0** | **0** |
|
||||
|
||||
`jsonparser` is now the winner, but do not forget that it is a much more lightweight parser than `ffjson` or `easyjson`: they have to parse all the data, while `jsonparser` parses only what you need. `ffjson`, `easyjson` and `jsonparser` all have their own parsing code and do not depend on `encoding/json` or `interface{}`, which is one of the reasons they are so fast. `easyjson` also uses a bit of the `unsafe` package to reduce memory consumption (in theory this can lead to unexpected GC issues, but I have not tested it enough).
|
||||
|
||||
Also, the last benchmark did not include an `EachKey` test, because in this particular case we need to read a lot of array values, and using `ArrayEach` is more efficient.
|
||||
|
||||
## Questions and support
|
||||
|
||||
All bug reports and suggestions should go through GitHub Issues.
|
||||
|
||||
## Contributing
|
||||
|
||||
1. Fork it
|
||||
2. Create your feature branch (git checkout -b my-new-feature)
|
||||
3. Commit your changes (git commit -am 'Added some feature')
|
||||
4. Push to the branch (git push origin my-new-feature)
|
||||
5. Create new Pull Request
|
||||
|
||||
## Development
|
||||
|
||||
All my development happens using Docker, and the repo includes some Make tasks to simplify development.
|
||||
|
||||
* `make build` - builds the Docker image; usually needs to be run only once
|
||||
* `make test` - run tests
|
||||
* `make fmt` - run go fmt
|
||||
* `make bench` - run benchmarks (if you need to run only a single benchmark, modify the `BENCHMARK` variable in the Makefile)
|
||||
* `make profile` - runs the benchmarks and generates 3 files: `cpu.out`, `mem.mprof` and the `benchmark.test` binary, which can be used with `go tool pprof`
|
||||
* `make bash` - enter the container (I use it for running `go tool pprof` as above)
|
47
vendor/github.com/buger/jsonparser/bytes.go
generated
vendored
Normal file
47
vendor/github.com/buger/jsonparser/bytes.go
generated
vendored
Normal file
@ -0,0 +1,47 @@
|
||||
package jsonparser
|
||||
|
||||
import (
|
||||
bio "bytes"
|
||||
)
|
||||
|
||||
// minInt64 '-9223372036854775808' is the smallest representable number in int64
|
||||
const minInt64 = `9223372036854775808`
|
||||
|
||||
// About 2x faster than strconv.ParseInt because it only supports base 10, which is enough for JSON
|
||||
func parseInt(bytes []byte) (v int64, ok bool, overflow bool) {
|
||||
if len(bytes) == 0 {
|
||||
return 0, false, false
|
||||
}
|
||||
|
||||
var neg bool = false
|
||||
if bytes[0] == '-' {
|
||||
neg = true
|
||||
bytes = bytes[1:]
|
||||
}
|
||||
|
||||
var b int64 = 0
|
||||
for _, c := range bytes {
|
||||
if c >= '0' && c <= '9' {
|
||||
b = (10 * v) + int64(c-'0')
|
||||
} else {
|
||||
return 0, false, false
|
||||
}
|
||||
if overflow = (b < v); overflow {
|
||||
break
|
||||
}
|
||||
v = b
|
||||
}
|
||||
|
||||
if overflow {
|
||||
if neg && bio.Equal(bytes, []byte(minInt64)) {
|
||||
return b, true, false
|
||||
}
|
||||
return 0, false, true
|
||||
}
|
||||
|
||||
if neg {
|
||||
return -v, true, false
|
||||
} else {
|
||||
return v, true, false
|
||||
}
|
||||
}
|
25
vendor/github.com/buger/jsonparser/bytes_safe.go
generated
vendored
Normal file
25
vendor/github.com/buger/jsonparser/bytes_safe.go
generated
vendored
Normal file
@ -0,0 +1,25 @@
|
||||
// +build appengine appenginevm
|
||||
|
||||
package jsonparser
|
||||
|
||||
import (
|
||||
"strconv"
|
||||
)
|
||||
|
||||
// See fastbytes_unsafe.go for explanation on why *[]byte is used (signatures must be consistent with those in that file)
|
||||
|
||||
func equalStr(b *[]byte, s string) bool {
|
||||
return string(*b) == s
|
||||
}
|
||||
|
||||
func parseFloat(b *[]byte) (float64, error) {
|
||||
return strconv.ParseFloat(string(*b), 64)
|
||||
}
|
||||
|
||||
func bytesToString(b *[]byte) string {
|
||||
return string(*b)
|
||||
}
|
||||
|
||||
func StringToBytes(s string) []byte {
|
||||
return []byte(s)
|
||||
}
|
42
vendor/github.com/buger/jsonparser/bytes_unsafe.go
generated
vendored
Normal file
42
vendor/github.com/buger/jsonparser/bytes_unsafe.go
generated
vendored
Normal file
@ -0,0 +1,42 @@
|
||||
// +build !appengine,!appenginevm
|
||||
|
||||
package jsonparser
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"strconv"
|
||||
"unsafe"
|
||||
)
|
||||
|
||||
//
|
||||
// The reason for using *[]byte rather than []byte in parameters is an optimization. As of Go 1.6,
|
||||
// the compiler cannot perfectly inline the function when using a non-pointer slice. That is,
|
||||
// the non-pointer []byte parameter version is slower than if its function body is manually
|
||||
// inlined, whereas the pointer []byte version is equally fast to the manually inlined
|
||||
// version. Instruction count in assembly taken from "go tool compile" confirms this difference.
|
||||
//
|
||||
// TODO: Remove hack after Go 1.7 release
|
||||
//
|
||||
func equalStr(b *[]byte, s string) bool {
|
||||
return *(*string)(unsafe.Pointer(b)) == s
|
||||
}
|
||||
|
||||
func parseFloat(b *[]byte) (float64, error) {
|
||||
return strconv.ParseFloat(*(*string)(unsafe.Pointer(b)), 64)
|
||||
}
|
||||
|
||||
// A hack until issue golang/go#2632 is fixed.
|
||||
// See: https://github.com/golang/go/issues/2632
|
||||
func bytesToString(b *[]byte) string {
|
||||
return *(*string)(unsafe.Pointer(b))
|
||||
}
|
||||
|
||||
func StringToBytes(s string) []byte {
|
||||
sh := (*reflect.StringHeader)(unsafe.Pointer(&s))
|
||||
bh := reflect.SliceHeader{
|
||||
Data: sh.Data,
|
||||
Len: sh.Len,
|
||||
Cap: sh.Len,
|
||||
}
|
||||
return *(*[]byte)(unsafe.Pointer(&bh))
|
||||
}
|
173
vendor/github.com/buger/jsonparser/escape.go
generated
vendored
Normal file
173
vendor/github.com/buger/jsonparser/escape.go
generated
vendored
Normal file
@ -0,0 +1,173 @@
|
||||
package jsonparser
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"unicode/utf8"
|
||||
)
|
||||
|
||||
// JSON Unicode stuff: see https://tools.ietf.org/html/rfc7159#section-7
|
||||
|
||||
const supplementalPlanesOffset = 0x10000
|
||||
const highSurrogateOffset = 0xD800
|
||||
const lowSurrogateOffset = 0xDC00
|
||||
|
||||
const basicMultilingualPlaneReservedOffset = 0xDFFF
|
||||
const basicMultilingualPlaneOffset = 0xFFFF
|
||||
|
||||
func combineUTF16Surrogates(high, low rune) rune {
|
||||
return supplementalPlanesOffset + (high-highSurrogateOffset)<<10 + (low - lowSurrogateOffset)
|
||||
}
|
||||
|
||||
const badHex = -1
|
||||
|
||||
func h2I(c byte) int {
|
||||
switch {
|
||||
case c >= '0' && c <= '9':
|
||||
return int(c - '0')
|
||||
case c >= 'A' && c <= 'F':
|
||||
return int(c - 'A' + 10)
|
||||
case c >= 'a' && c <= 'f':
|
||||
return int(c - 'a' + 10)
|
||||
}
|
||||
return badHex
|
||||
}
|
||||
|
||||
// decodeSingleUnicodeEscape decodes a single \uXXXX escape sequence. The prefix \u is assumed to be present and
|
||||
// is not checked.
|
||||
// In JSON, these escapes can either come alone or as part of "UTF16 surrogate pairs" that must be handled together.
|
||||
// This function only handles one; decodeUnicodeEscape handles this more complex case.
|
||||
func decodeSingleUnicodeEscape(in []byte) (rune, bool) {
|
||||
// We need at least 6 characters total
|
||||
if len(in) < 6 {
|
||||
return utf8.RuneError, false
|
||||
}
|
||||
|
||||
// Convert hex to decimal
|
||||
h1, h2, h3, h4 := h2I(in[2]), h2I(in[3]), h2I(in[4]), h2I(in[5])
|
||||
if h1 == badHex || h2 == badHex || h3 == badHex || h4 == badHex {
|
||||
return utf8.RuneError, false
|
||||
}
|
||||
|
||||
// Compose the hex digits
|
||||
return rune(h1<<12 + h2<<8 + h3<<4 + h4), true
|
||||
}
|
||||
|
||||
// isUTF16EncodedRune checks if a rune is in the range for non-BMP characters,
|
||||
// which is used to describe UTF16 chars.
|
||||
// Source: https://en.wikipedia.org/wiki/Plane_(Unicode)#Basic_Multilingual_Plane
|
||||
func isUTF16EncodedRune(r rune) bool {
|
||||
return highSurrogateOffset <= r && r <= basicMultilingualPlaneReservedOffset
|
||||
}
|
||||
|
||||
func decodeUnicodeEscape(in []byte) (rune, int) {
|
||||
if r, ok := decodeSingleUnicodeEscape(in); !ok {
|
||||
// Invalid Unicode escape
|
||||
return utf8.RuneError, -1
|
||||
} else if r <= basicMultilingualPlaneOffset && !isUTF16EncodedRune(r) {
|
||||
// Valid Unicode escape in Basic Multilingual Plane
|
||||
return r, 6
|
||||
} else if r2, ok := decodeSingleUnicodeEscape(in[6:]); !ok { // Note: previous decodeSingleUnicodeEscape success guarantees at least 6 bytes remain
|
||||
// UTF16 "high surrogate" without manditory valid following Unicode escape for the "low surrogate"
|
||||
return utf8.RuneError, -1
|
||||
} else if r2 < lowSurrogateOffset {
|
||||
// Invalid UTF16 "low surrogate"
|
||||
return utf8.RuneError, -1
|
||||
} else {
|
||||
// Valid UTF16 surrogate pair
|
||||
return combineUTF16Surrogates(r, r2), 12
|
||||
}
|
||||
}
|
||||
|
||||
// backslashCharEscapeTable: when '\X' is found for some byte X, it is to be replaced with backslashCharEscapeTable[X]
|
||||
var backslashCharEscapeTable = [...]byte{
|
||||
'"': '"',
|
||||
'\\': '\\',
|
||||
'/': '/',
|
||||
'b': '\b',
|
||||
'f': '\f',
|
||||
'n': '\n',
|
||||
'r': '\r',
|
||||
't': '\t',
|
||||
}
|
||||
|
||||
// unescapeToUTF8 unescapes the single escape sequence starting at 'in' into 'out' and returns
|
||||
// how many characters were consumed from 'in' and emitted into 'out'.
|
||||
// If a valid escape sequence does not appear as a prefix of 'in', (-1, -1) is returned to signal the error.
|
||||
func unescapeToUTF8(in, out []byte) (inLen int, outLen int) {
|
||||
if len(in) < 2 || in[0] != '\\' {
|
||||
// Invalid escape due to insufficient characters for any escape or no initial backslash
|
||||
return -1, -1
|
||||
}
|
||||
|
||||
// https://tools.ietf.org/html/rfc7159#section-7
|
||||
switch e := in[1]; e {
|
||||
case '"', '\\', '/', 'b', 'f', 'n', 'r', 't':
|
||||
// Valid basic 2-character escapes (use lookup table)
|
||||
out[0] = backslashCharEscapeTable[e]
|
||||
return 2, 1
|
||||
case 'u':
|
||||
// Unicode escape
|
||||
if r, inLen := decodeUnicodeEscape(in); inLen == -1 {
|
||||
// Invalid Unicode escape
|
||||
return -1, -1
|
||||
} else {
|
||||
// Valid Unicode escape; re-encode as UTF8
|
||||
outLen := utf8.EncodeRune(out, r)
|
||||
return inLen, outLen
|
||||
}
|
||||
}
|
||||
|
||||
return -1, -1
|
||||
}
|
||||
|
||||
// unescape unescapes the string contained in 'in' and returns it as a slice.
|
||||
// If 'in' contains no escaped characters:
|
||||
// Returns 'in'.
|
||||
// Else, if 'out' is of sufficient capacity (guaranteed if cap(out) >= len(in)):
|
||||
// 'out' is used to build the unescaped string and is returned with no extra allocation
|
||||
// Else:
|
||||
// A new slice is allocated and returned.
|
||||
func Unescape(in, out []byte) ([]byte, error) {
|
||||
firstBackslash := bytes.IndexByte(in, '\\')
|
||||
if firstBackslash == -1 {
|
||||
return in, nil
|
||||
}
|
||||
|
||||
// Get a buffer of sufficient size (allocate if needed)
|
||||
if cap(out) < len(in) {
|
||||
out = make([]byte, len(in))
|
||||
} else {
|
||||
out = out[0:len(in)]
|
||||
}
|
||||
|
||||
// Copy the first sequence of unescaped bytes to the output and obtain a buffer pointer (subslice)
|
||||
copy(out, in[:firstBackslash])
|
||||
in = in[firstBackslash:]
|
||||
buf := out[firstBackslash:]
|
||||
|
||||
for len(in) > 0 {
|
||||
// Unescape the next escaped character
|
||||
inLen, bufLen := unescapeToUTF8(in, buf)
|
||||
if inLen == -1 {
|
||||
return nil, MalformedStringEscapeError
|
||||
}
|
||||
|
||||
in = in[inLen:]
|
||||
buf = buf[bufLen:]
|
||||
|
||||
// Copy everything up until the next backslash
|
||||
nextBackslash := bytes.IndexByte(in, '\\')
|
||||
if nextBackslash == -1 {
|
||||
copy(buf, in)
|
||||
buf = buf[len(in):]
|
||||
break
|
||||
} else {
|
||||
copy(buf, in[:nextBackslash])
|
||||
buf = buf[nextBackslash:]
|
||||
in = in[nextBackslash:]
|
||||
}
|
||||
}
|
||||
|
||||
// Trim the out buffer to the amount that was actually emitted
|
||||
return out[:len(out)-len(buf)], nil
|
||||
}
|
1211
vendor/github.com/buger/jsonparser/parser.go
generated
vendored
Normal file
1211
vendor/github.com/buger/jsonparser/parser.go
generated
vendored
Normal file
File diff suppressed because it is too large
Load Diff
7
vendor/github.com/chai2010/gettext-go/.travis.yml
generated
vendored
7
vendor/github.com/chai2010/gettext-go/.travis.yml
generated
vendored
@ -1,7 +0,0 @@
|
||||
language: go
|
||||
|
||||
go:
|
||||
- 1.3
|
||||
- 1.4
|
||||
- 1.5
|
||||
- tip
|
58
vendor/github.com/chai2010/gettext-go/README.md
generated
vendored
58
vendor/github.com/chai2010/gettext-go/README.md
generated
vendored
@ -1,58 +0,0 @@
|
||||
gettext-go
|
||||
==========
|
||||
|
||||
PkgDoc: [http://godoc.org/github.com/chai2010/gettext-go/gettext](http://godoc.org/github.com/chai2010/gettext-go/gettext)
|
||||
|
||||
Install
|
||||
========
|
||||
|
||||
1. `go get github.com/chai2010/gettext-go/gettext`
|
||||
2. `go run hello.go`
|
||||
|
||||
The godoc.org or gowalker.org has more information.
|
||||
|
||||
Example
|
||||
=======
|
||||
|
||||
```Go
|
||||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"github.com/chai2010/gettext-go/gettext"
|
||||
)
|
||||
|
||||
func main() {
|
||||
gettext.SetLocale("zh_CN")
|
||||
gettext.Textdomain("hello")
|
||||
|
||||
gettext.BindTextdomain("hello", "local", nil)
|
||||
|
||||
// gettext.BindTextdomain("hello", "local", nil) // from local dir
|
||||
// gettext.BindTextdomain("hello", "local.zip", nil) // from local zip file
|
||||
// gettext.BindTextdomain("hello", "local.zip", zipData) // from embedded zip data
|
||||
|
||||
// translate source text
|
||||
fmt.Println(gettext.Gettext("Hello, world!"))
|
||||
// Output: 你好, 世界!
|
||||
|
||||
// if no msgctxt in PO file (only msgid and msgstr),
|
||||
// specify context as "" by
|
||||
fmt.Println(gettext.PGettext("", "Hello, world!"))
|
||||
// Output: 你好, 世界!
|
||||
|
||||
// translate resource
|
||||
fmt.Println(string(gettext.Getdata("poems.txt")))
|
||||
// Output: ...
|
||||
}
|
||||
```
|
||||
|
||||
Go file: [hello.go](https://github.com/chai2010/gettext-go/blob/master/examples/hello.go); PO file: [hello.po](https://github.com/chai2010/gettext-go/blob/master/examples/local/default/LC_MESSAGES/hello.po);
|
||||
|
||||
BUGS
|
||||
====
|
||||
|
||||
Please report bugs to <chaishushan@gmail.com>.
|
||||
|
||||
Thanks!
|
4
vendor/github.com/cloudflare/cfssl/.dockerignore
generated
vendored
4
vendor/github.com/cloudflare/cfssl/.dockerignore
generated
vendored
@ -1,4 +0,0 @@
|
||||
cfssl_*
|
||||
*-amd64
|
||||
*-386
|
||||
dist/*
|
5
vendor/github.com/cloudflare/cfssl/.gitignore
generated
vendored
5
vendor/github.com/cloudflare/cfssl/.gitignore
generated
vendored
@ -1,5 +0,0 @@
|
||||
dist/*
|
||||
cli/serve/static.rice-box.go
|
||||
.coverprofile
|
||||
coverprofile.txt
|
||||
gopath
|
74
vendor/github.com/cloudflare/cfssl/.travis.yml
generated
vendored
74
vendor/github.com/cloudflare/cfssl/.travis.yml
generated
vendored
@ -1,74 +0,0 @@
|
||||
sudo: false
|
||||
language: go
|
||||
go:
|
||||
- 1.8.x
|
||||
- 1.9.x
|
||||
- 1.10.x
|
||||
- master
|
||||
# Install g++-4.8 to support std=c++11 for github.com/google/certificate-transparency/go/merkletree
|
||||
addons:
|
||||
apt:
|
||||
sources:
|
||||
- ubuntu-toolchain-r-test
|
||||
packages:
|
||||
- g++-4.8
|
||||
install:
|
||||
- if [ "$CXX" = "g++" ]; then export CXX="g++-4.8"; fi
|
||||
|
||||
# Used by the certdb tests
|
||||
services:
|
||||
- mysql
|
||||
- postgresql
|
||||
before_install:
|
||||
# CFSSL consists of multiple Go packages, which refer to each other by
|
||||
# their absolute GitHub path, e.g. github.com/cloudflare/crypto/pkcs7.
|
||||
# That means, by default, if someone forks the repo and makes changes across
|
||||
# multiple packages within CFSSL, Travis won't pass for the branch on their
|
||||
# own repo. To fix that, we move the directory
|
||||
- mkdir -p $TRAVIS_BUILD_DIR $GOPATH/src/github.com/cloudflare
|
||||
- test ! -d $GOPATH/src/github.com/cloudflare/cfssl && mv $TRAVIS_BUILD_DIR $GOPATH/src/github.com/cloudflare/cfssl || true
|
||||
|
||||
# Only build pull requests, pushes to the master branch, and branches
|
||||
# starting with `test-`. This is a convenient way to push branches to
|
||||
# your own fork of the repository to ensure Travis passes before submitting
|
||||
# a PR. For instance, you might run:
|
||||
# git push myremote branchname:test-branchname
|
||||
branches:
|
||||
only:
|
||||
- master
|
||||
- /^test-.*$/
|
||||
|
||||
before_script:
|
||||
- go get -u github.com/golang/lint/golint
|
||||
- go get -v github.com/GeertJohan/fgt
|
||||
# Setup DBs + run migrations
|
||||
- go get bitbucket.org/liamstask/goose/cmd/goose
|
||||
- if [[ $(uname -s) == 'Linux' ]]; then
|
||||
psql -c 'create database certdb_development;' -U postgres;
|
||||
goose -path $GOPATH/src/github.com/cloudflare/cfssl/certdb/pg up;
|
||||
mysql -e 'create database certdb_development;' -u root;
|
||||
goose -path $GOPATH/src/github.com/cloudflare/cfssl/certdb/mysql up;
|
||||
fi
|
||||
script:
|
||||
- ./test.sh
|
||||
notifications:
|
||||
email:
|
||||
recipients:
|
||||
- cbroglie@cloudflare.com
|
||||
- kyle@cloudflare.com
|
||||
- nick@cloudflare.com
|
||||
on_success: never
|
||||
on_failure: change
|
||||
env:
|
||||
global:
|
||||
- secure: "OmaaZ3jhU9VQ/0SYpenUJEfnmKy/MwExkefFRpDbkRSu/hTQpxxALAZV5WEHo7gxLRMRI0pytLo7w+lAd2FlX1CNcyY62MUicta/8P2twsxp+lR3v1bJ7dwk6qsDbO7Nvv3BKPCDQCHUkggbAEJaHEQGdLk4ursNEB1aGimuCEc="
|
||||
- GO15VENDOREXPERIMENT=1
|
||||
matrix:
|
||||
- BUILD_TAGS="postgresql mysql"
|
||||
matrix:
|
||||
include:
|
||||
- os: osx
|
||||
go: 1.8.1
|
||||
env: BUILD_TAGS=
|
||||
after_success:
|
||||
- bash <(curl -s https://codecov.io/bash) -f coverprofile.txt
|
38
vendor/github.com/cloudflare/cfssl/BUILDING.md
generated
vendored
38
vendor/github.com/cloudflare/cfssl/BUILDING.md
generated
vendored
@ -1,38 +0,0 @@
|
||||
# How to Build CFSSL
|
||||
|
||||
## Docker
|
||||
|
||||
The requirements to build `CFSSL` are:
|
||||
|
||||
1. A running instance of Docker
|
||||
2. The `bash` shell
|
||||
|
||||
To build, run:
|
||||
|
||||
$ script/build-docker
|
||||
|
||||
This will build by default all the cfssl command line utilities
|
||||
for darwin (OSX), linux, and windows for i386 and amd64 and output the
|
||||
binaries in the current path.
|
||||
|
||||
To build a specific platform and OS, run:
|
||||
|
||||
$ script/build-docker -os="darwin" -arch="amd64"
|
||||
|
||||
Note: for cross-compilation compatibility, the Docker build process will
|
||||
build programs without PKCS #11.
|
||||
|
||||
## Without Docker
|
||||
|
||||
The requirements to build without Docker are:
|
||||
|
||||
1. Go version 1.5 is the minimum required version of Go. However, only Go 1.6+
|
||||
is supported due to the test system not supporting Go 1.5.
|
||||
2. A properly configured go environment
|
||||
3. A properly configured GOPATH
|
||||
4. With Go 1.5, you are required to set the environment variable
|
||||
`GO15VENDOREXPERIMENT=1`.
|
||||
|
||||
Run:
|
||||
|
||||
$ go install github.com/cloudflare/cfssl/cmd/...
|
72
vendor/github.com/cloudflare/cfssl/CHANGELOG
generated
vendored
72
vendor/github.com/cloudflare/cfssl/CHANGELOG
generated
vendored
@ -1,72 +0,0 @@
|
||||
1.1.0 - 2015-08-04
|
||||
|
||||
ADDED:
|
||||
- Revocation now checks OCSP status.
|
||||
- Authenticated endpoints are now supported using HMAC tags.
|
||||
- Bundle can verify certificates against a domain or IP.
|
||||
- OCSP subcommand has been added.
|
||||
- PKCS #11 keys are now supported; this support is now the default.
|
||||
- OCSP serving is now implemented.
|
||||
- The multirootca tool is now available for multiple signing
|
||||
keys via an authenticated API.
|
||||
- A scan utility for checking the quality of a server's TLS
|
||||
configuration.
|
||||
- The certificate bundler now supports PKCS #7 and PKCS #12.
|
||||
- An info endpoint has been added to retrieve the signers'
|
||||
certificates.
|
||||
- Signers can now use a serial sequence number for certificate
|
||||
serial numbers; the default remains randomised serial numbers.
|
||||
- CSR whitelisting allows the signer to explicitly distrust
|
||||
certain fields in a CSR.
|
||||
- Signing profiles can include certificate policies and their
|
||||
qualifiers.
|
||||
- The multirootca can use Red October-secured private keys.
|
||||
- The multirootca can whitelist CSRs per-signer based on an
|
||||
IP network whitelist.
|
||||
- The signer can whitelist SANs and common names via a regular-
|
||||
expression whitelist.
|
||||
- Multiple fallback remote signers are now supported in the
|
||||
cfssl server.
|
||||
- A Docker build script has been provided to facilitate building
|
||||
CFSSL for all supported platforms.
|
||||
- The log package includes a new logging level, fatal, that
|
||||
immediately exits with error after printing the log message.
|
||||
|
||||
CHANGED:
|
||||
- CLI tool can read from standard input.
|
||||
- The -f flag has been renamed to -config.
|
||||
- Signers have been refactored into local and remote signers
|
||||
under a single universal signer abstraction.
|
||||
- The CLI subcommands have been refactored into separate
|
||||
packages.
|
||||
- Signing can now extract subject information from a CSR.
|
||||
- Various improvements to the certificate ubiquity scoring,
|
||||
such as accounting for SHA1 deprecation.
|
||||
- The bundle CLI tool can set the intermediates directory that
|
||||
newly found intermediates can be stored in.
|
||||
- The CLI tools return exit code 1 on failure.
|
||||
|
||||
CONTRIBUTORS:
|
||||
Alice Xia
|
||||
Dan Rohr
|
||||
Didier Smith
|
||||
Dominic Luechinger
|
||||
Erik Kristensen
|
||||
Fabian Ruff
|
||||
George Tankersley
|
||||
Harald Wagener
|
||||
Harry Harpham
|
||||
Jacob H. Haven
|
||||
Jacob Hoffman-Andrews
|
||||
Joshua Kroll
|
||||
Kyle Isom
|
||||
Nick Sullivan
|
||||
Peter Eckersley
|
||||
Richard Barnes
|
||||
Sophie Huang
|
||||
Steve Rude
|
||||
Tara Vancil
|
||||
Terin Stock
|
||||
Thomaz Leite
|
||||
Travis Truman
|
||||
Zi Lin
|
18
vendor/github.com/cloudflare/cfssl/Dockerfile
generated
vendored
18
vendor/github.com/cloudflare/cfssl/Dockerfile
generated
vendored
@ -1,18 +0,0 @@
|
||||
FROM golang:1.9.2
|
||||
|
||||
ENV USER root
|
||||
|
||||
WORKDIR /go/src/github.com/cloudflare/cfssl
|
||||
COPY . .
|
||||
|
||||
# restore all deps and build
|
||||
RUN go get github.com/cloudflare/cfssl_trust/... && \
|
||||
go get github.com/GeertJohan/go.rice/rice && \
|
||||
rice embed-go -i=./cli/serve && \
|
||||
cp -R /go/src/github.com/cloudflare/cfssl_trust /etc/cfssl && \
|
||||
go install ./cmd/...
|
||||
|
||||
EXPOSE 8888
|
||||
|
||||
ENTRYPOINT ["cfssl"]
|
||||
CMD ["--help"]
|
11
vendor/github.com/cloudflare/cfssl/Dockerfile.build
generated
vendored
11
vendor/github.com/cloudflare/cfssl/Dockerfile.build
generated
vendored
@ -1,11 +0,0 @@
|
||||
FROM golang:1.9.2
|
||||
|
||||
ENV USER root
|
||||
|
||||
WORKDIR /go/src/github.com/cloudflare/cfssl
|
||||
COPY . .
|
||||
|
||||
# restore all deps and build
|
||||
RUN go get github.com/mitchellh/gox
|
||||
|
||||
ENTRYPOINT ["gox"]
|
30
vendor/github.com/cloudflare/cfssl/Dockerfile.minimal
generated
vendored
30
vendor/github.com/cloudflare/cfssl/Dockerfile.minimal
generated
vendored
@ -1,30 +0,0 @@
|
||||
FROM golang:1.9.2-alpine3.6
|
||||
|
||||
ENV GOPATH /go
|
||||
ENV USER root
|
||||
|
||||
COPY . /go/src/github.com/cloudflare/cfssl
|
||||
|
||||
RUN set -x && \
|
||||
apk --no-cache add git gcc libc-dev && \
|
||||
go get github.com/cloudflare/cfssl_trust/... && \
|
||||
go get github.com/GeertJohan/go.rice/rice && \
|
||||
cd /go/src/github.com/cloudflare/cfssl && rice embed-go -i=./cli/serve && \
|
||||
mkdir bin && cd bin && \
|
||||
go build ../cmd/cfssl && \
|
||||
go build ../cmd/cfssljson && \
|
||||
go build ../cmd/mkbundle && \
|
||||
go build ../cmd/multirootca && \
|
||||
echo "Build complete."
|
||||
|
||||
FROM alpine:3.6
|
||||
COPY --from=0 /go/src/github.com/cloudflare/cfssl_trust /etc/cfssl
|
||||
COPY --from=0 /go/src/github.com/cloudflare/cfssl/bin/ /usr/bin
|
||||
|
||||
VOLUME [ "/etc/cfssl" ]
|
||||
WORKDIR /etc/cfssl
|
||||
|
||||
EXPOSE 8888
|
||||
|
||||
ENTRYPOINT ["cfssl"]
|
||||
CMD ["--help"]
|
186
vendor/github.com/cloudflare/cfssl/Gopkg.lock
generated
vendored
186
vendor/github.com/cloudflare/cfssl/Gopkg.lock
generated
vendored
@ -1,186 +0,0 @@
|
||||
# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'.
|
||||
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/GeertJohan/go.rice"
|
||||
packages = [
|
||||
".",
|
||||
"embedded"
|
||||
]
|
||||
revision = "c02ca9a983da5807ddf7d796784928f5be4afd09"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/certifi/gocertifi"
|
||||
packages = ["."]
|
||||
revision = "deb3ae2ef2610fde3330947281941c562861188b"
|
||||
version = "2018.01.18"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/cloudflare/backoff"
|
||||
packages = ["."]
|
||||
revision = "647f3cdfc87a18586e279c97afd6526d01b0d063"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/cloudflare/go-metrics"
|
||||
packages = ["."]
|
||||
revision = "6a9aea36fb410b71add0d0e58438e0cfc44d3076"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/cloudflare/redoctober"
|
||||
packages = [
|
||||
"client",
|
||||
"config",
|
||||
"core",
|
||||
"cryptor",
|
||||
"ecdh",
|
||||
"hipchat",
|
||||
"keycache",
|
||||
"msp",
|
||||
"order",
|
||||
"padding",
|
||||
"passvault",
|
||||
"persist",
|
||||
"report",
|
||||
"symcrypt"
|
||||
]
|
||||
revision = "746a508df14c565d2074e2919d27c2efd1a7fe89"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/daaku/go.zipexe"
|
||||
packages = ["."]
|
||||
revision = "a5fe2436ffcb3236e175e5149162b41cd28bd27d"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/getsentry/raven-go"
|
||||
packages = ["."]
|
||||
revision = "563b81fc02b75d664e54da31f787c2cc2186780b"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/go-sql-driver/mysql"
|
||||
packages = ["."]
|
||||
revision = "a0583e0143b1624142adab07e0e97fe106d99561"
|
||||
version = "v1.3"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/gogo/protobuf"
|
||||
packages = ["proto"]
|
||||
revision = "1adfc126b41513cc696b209667c8656ea7aac67c"
|
||||
version = "v1.0.0"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/golang/protobuf"
|
||||
packages = [
|
||||
"proto",
|
||||
"ptypes",
|
||||
"ptypes/any",
|
||||
"ptypes/duration",
|
||||
"ptypes/timestamp"
|
||||
]
|
||||
revision = "925541529c1fa6821df4e44ce2723319eb2be768"
|
||||
version = "v1.0.0"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/google/certificate-transparency-go"
|
||||
packages = [
|
||||
".",
|
||||
"asn1",
|
||||
"client",
|
||||
"client/configpb",
|
||||
"jsonclient",
|
||||
"tls",
|
||||
"x509",
|
||||
"x509/pkix"
|
||||
]
|
||||
revision = "5ab67e519c93568ac3ee50fd6772a5bcf8aa460d"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/jmhodges/clock"
|
||||
packages = ["."]
|
||||
revision = "880ee4c335489bc78d01e4d0a254ae880734bc15"
|
||||
version = "v1.1"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/jmoiron/sqlx"
|
||||
packages = [
|
||||
".",
|
||||
"reflectx"
|
||||
]
|
||||
revision = "05cef0741ade10ca668982355b3f3f0bcf0ff0a8"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/kardianos/osext"
|
||||
packages = ["."]
|
||||
revision = "ae77be60afb1dcacde03767a8c37337fad28ac14"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/kisielk/sqlstruct"
|
||||
packages = ["."]
|
||||
revision = "648daed35d49dac24a4bff253b190a80da3ab6a5"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/kisom/goutils"
|
||||
packages = ["assert"]
|
||||
revision = "d6c5360a068641fc799295411f429be80115bcfc"
|
||||
version = "v1.1.0"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "github.com/lib/pq"
|
||||
packages = [
|
||||
".",
|
||||
"oid"
|
||||
]
|
||||
revision = "88edab0803230a3898347e77b474f8c1820a1f20"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/mattn/go-sqlite3"
|
||||
packages = ["."]
|
||||
revision = "6c771bb9887719704b210e87e934f08be014bdb1"
|
||||
version = "v1.6.0"
|
||||
|
||||
[[projects]]
|
||||
name = "github.com/pkg/errors"
|
||||
packages = ["."]
|
||||
revision = "645ef00459ed84a119197bfb8d8205042c6df63d"
|
||||
version = "v0.8.0"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "golang.org/x/crypto"
|
||||
packages = [
|
||||
"cryptobyte",
|
||||
"cryptobyte/asn1",
|
||||
"ed25519",
|
||||
"ed25519/internal/edwards25519",
|
||||
"ocsp",
|
||||
"pbkdf2",
|
||||
"pkcs12",
|
||||
"pkcs12/internal/rc2",
|
||||
"scrypt"
|
||||
]
|
||||
revision = "c126467f60eb25f8f27e5a981f32a87e3965053f"
|
||||
|
||||
[[projects]]
|
||||
branch = "master"
|
||||
name = "golang.org/x/net"
|
||||
packages = [
|
||||
"context",
|
||||
"context/ctxhttp"
|
||||
]
|
||||
revision = "f5dfe339be1d06f81b22525fe34671ee7d2c8904"
|
||||
|
||||
[solve-meta]
|
||||
analyzer-name = "dep"
|
||||
analyzer-version = 1
|
||||
inputs-digest = "eb3713a13a2a3e0a35c010aed983787299798de5f20ba838399802f9ae67107c"
|
||||
solver-name = "gps-cdcl"
|
||||
solver-version = 1
|
86
vendor/github.com/cloudflare/cfssl/Gopkg.toml
generated
vendored
86
vendor/github.com/cloudflare/cfssl/Gopkg.toml
generated
vendored
@ -1,86 +0,0 @@
|
||||
# Gopkg.toml example
|
||||
#
|
||||
# Refer to https://golang.github.io/dep/docs/Gopkg.toml.html
|
||||
# for detailed Gopkg.toml documentation.
|
||||
#
|
||||
# required = ["github.com/user/thing/cmd/thing"]
|
||||
# ignored = ["github.com/user/project/pkgX", "bitbucket.org/user/project/pkgA/pkgY"]
|
||||
#
|
||||
# [[constraint]]
|
||||
# name = "github.com/user/project"
|
||||
# version = "1.0.0"
|
||||
#
|
||||
# [[constraint]]
|
||||
# name = "github.com/user/project2"
|
||||
# branch = "dev"
|
||||
# source = "github.com/myfork/project2"
|
||||
#
|
||||
# [[override]]
|
||||
# name = "github.com/x/y"
|
||||
# version = "2.4.0"
|
||||
#
|
||||
# [prune]
|
||||
# non-go = false
|
||||
# go-tests = true
|
||||
# unused-packages = true
|
||||
|
||||
|
||||
[[constraint]]
|
||||
branch = "master"
|
||||
name = "github.com/GeertJohan/go.rice"
|
||||
|
||||
[[constraint]]
|
||||
branch = "master"
|
||||
name = "github.com/cloudflare/backoff"
|
||||
|
||||
[[constraint]]
|
||||
branch = "master"
|
||||
name = "github.com/cloudflare/go-metrics"
|
||||
|
||||
[[constraint]]
|
||||
branch = "master"
|
||||
name = "github.com/cloudflare/redoctober"
|
||||
|
||||
[[constraint]]
|
||||
name = "github.com/go-sql-driver/mysql"
|
||||
version = "1.3.0"
|
||||
|
||||
[[constraint]]
|
||||
name = "github.com/google/certificate-transparency-go"
|
||||
revision = "5ab67e519c93568ac3ee50fd6772a5bcf8aa460d"
|
||||
|
||||
[[constraint]]
|
||||
name = "github.com/jmhodges/clock"
|
||||
version = "1.1.0"
|
||||
|
||||
[[constraint]]
|
||||
branch = "master"
|
||||
name = "github.com/jmoiron/sqlx"
|
||||
|
||||
[[constraint]]
|
||||
branch = "master"
|
||||
name = "github.com/kisielk/sqlstruct"
|
||||
|
||||
[[constraint]]
|
||||
name = "github.com/kisom/goutils"
|
||||
version = "1.1.0"
|
||||
|
||||
[[constraint]]
|
||||
branch = "master"
|
||||
name = "github.com/lib/pq"
|
||||
|
||||
[[constraint]]
|
||||
name = "github.com/mattn/go-sqlite3"
|
||||
version = "1.6.0"
|
||||
|
||||
[[constraint]]
|
||||
branch = "master"
|
||||
name = "golang.org/x/crypto"
|
||||
|
||||
[[constraint]]
|
||||
branch = "master"
|
||||
name = "golang.org/x/net"
|
||||
|
||||
[prune]
|
||||
go-tests = true
|
||||
unused-packages = true
|
390
vendor/github.com/cloudflare/cfssl/README.md
generated
vendored
390
vendor/github.com/cloudflare/cfssl/README.md
generated
vendored
@ -1,390 +0,0 @@
|
||||
# CFSSL
|
||||
|
||||
[![Build Status](https://travis-ci.org/cloudflare/cfssl.svg?branch=master)](https://travis-ci.org/cloudflare/cfssl)
|
||||
[![Coverage Status](http://codecov.io/github/cloudflare/cfssl/coverage.svg?branch=master)](http://codecov.io/github/cloudflare/cfssl?branch=master)
|
||||
[![GoDoc](https://godoc.org/github.com/cloudflare/cfssl?status.svg)](https://godoc.org/github.com/cloudflare/cfssl)
|
||||
|
||||
## CloudFlare's PKI/TLS toolkit
|
||||
|
||||
CFSSL is CloudFlare's PKI/TLS swiss army knife. It is both a command line
|
||||
tool and an HTTP API server for signing, verifying, and bundling TLS
|
||||
certificates. It requires Go 1.8+ to build.
|
||||
|
||||
Note that certain linux distributions have certain algorithms removed
|
||||
(RHEL-based distributions in particular), so the golang from the
|
||||
official repositories will not work. Users of these distributions should
|
||||
[install go manually](//golang.org/dl) to install CFSSL.
|
||||
|
||||
CFSSL consists of:
|
||||
|
||||
* a set of packages useful for building custom TLS PKI tools
|
||||
* the `cfssl` program, which is the canonical command line utility
|
||||
using the CFSSL packages.
|
||||
* the `multirootca` program, which is a certificate authority server
|
||||
that can use multiple signing keys.
|
||||
* the `mkbundle` program is used to build certificate pool bundles.
|
||||
* the `cfssljson` program, which takes the JSON output from the
|
||||
`cfssl` and `multirootca` programs and writes certificates, keys,
|
||||
CSRs, and bundles to disk.
|
||||
|
||||
### Building
|
||||
|
||||
See [BUILDING](BUILDING.md)
|
||||
|
||||
### Installation
|
||||
|
||||
Installation requires a
|
||||
[working Go 1.8+ installation](http://golang.org/doc/install) and a
|
||||
properly set `GOPATH`.
|
||||
|
||||
```
|
||||
$ go get -u github.com/cloudflare/cfssl/cmd/cfssl
|
||||
```
|
||||
|
||||
will download and build the CFSSL tool, installing it in
|
||||
`$GOPATH/bin/cfssl`.
|
||||
|
||||
To install any of the other utility programs that are
|
||||
in this repo (for instance `cfssljson` in this case):
|
||||
|
||||
```
|
||||
$ go get -u github.com/cloudflare/cfssl/cmd/cfssljson
|
||||
```
|
||||
|
||||
This will download and build the CFSSLJSON tool, installing it in
|
||||
`$GOPATH/bin/`.
|
||||
|
||||
And to simply install __all__ of the programs in this repo:
|
||||
|
||||
```
|
||||
$ go get -u github.com/cloudflare/cfssl/cmd/...
|
||||
```
|
||||
|
||||
This will download, build, and install all of the utility programs
|
||||
(including `cfssl`, `cfssljson`, and `mkbundle` among others) into the
|
||||
`$GOPATH/bin/` directory.
|
||||
|
||||
### Using the Command Line Tool
|
||||
|
||||
The `cfssl` command line tool takes a command to specify what
|
||||
operation it should carry out:
|
||||
|
||||
sign signs a certificate
|
||||
bundle build a certificate bundle
|
||||
genkey generate a private key and a certificate request
|
||||
gencert generate a private key and a certificate
|
||||
serve start the API server
|
||||
version prints out the current version
|
||||
selfsign generates a self-signed certificate
|
||||
print-defaults print default configurations
|
||||
|
||||
Use `cfssl [command] -help` to find out more about a command.
|
||||
The `version` command takes no arguments.
|
||||
|
||||
#### Signing
|
||||
|
||||
```
|
||||
cfssl sign [-ca cert] [-ca-key key] [-hostname comma,separated,hostnames] csr [subject]
|
||||
```
|
||||
|
||||
The `csr` is the client's certificate request. The `-ca` and `-ca-key`
|
||||
flags are the CA's certificate and private key, respectively. By
|
||||
default, they are `ca.pem` and `ca_key.pem`. The `-hostname` is
|
||||
a comma separated hostname list that overrides the DNS names and
|
||||
IP address in the certificate SAN extension.
|
||||
For example, assuming the CA's private key is in
|
||||
`/etc/ssl/private/cfssl_key.pem` and the CA's certificate is in
|
||||
`/etc/ssl/certs/cfssl.pem`, to sign the `cloudflare.pem` certificate
|
||||
for cloudflare.com:
|
||||
|
||||
```
|
||||
cfssl sign -ca /etc/ssl/certs/cfssl.pem \
|
||||
-ca-key /etc/ssl/private/cfssl_key.pem \
|
||||
-hostname cloudflare.com \
|
||||
./cloudflare.pem
|
||||
```
|
||||
|
||||
It is also possible to specify CSR with the `-csr` flag. By doing so,
|
||||
flag values take precedence and will overwrite the argument.
|
||||
|
||||
The subject is an optional file that contains subject information that
|
||||
should be used in place of the information from the CSR. It should be
|
||||
a JSON file as follows:
|
||||
|
||||
```json
|
||||
{
|
||||
"CN": "example.com",
|
||||
"names": [
|
||||
{
|
||||
"C": "US",
|
||||
"L": "San Francisco",
|
||||
"O": "Internet Widgets, Inc.",
|
||||
"OU": "WWW",
|
||||
"ST": "California"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
**N.B.** As of Go 1.7, self-signed certificates will not include
|
||||
[the AKI](https://go.googlesource.com/go/+/b623b71509b2d24df915d5bc68602e1c6edf38ca).
|
||||
|
||||
#### Bundling
|
||||
|
||||
```
|
||||
cfssl bundle [-ca-bundle bundle] [-int-bundle bundle] \
|
||||
[-metadata metadata_file] [-flavor bundle_flavor] \
|
||||
-cert certificate_file [-key key_file]
|
||||
```
|
||||
|
||||
The bundles are used for the root and intermediate certificate
|
||||
pools. In addition, platform metadata is specified through `-metadata`.
|
||||
The bundle files, metadata file (and auxiliary files) can be
|
||||
found at:
|
||||
|
||||
https://github.com/cloudflare/cfssl_trust
|
||||
|
||||
Specify PEM-encoded client certificate and key through `-cert` and
|
||||
`-key` respectively. If key is specified, the bundle will be built
|
||||
and verified with the key. Otherwise the bundle will be built
|
||||
without a private key. Instead of file path, use `-` for reading
|
||||
certificate PEM from stdin. It is also acceptable that the certificate
|
||||
file should contain a (partial) certificate bundle.
|
||||
|
||||
Specify bundling flavor through `-flavor`. There are three flavors:
|
||||
`optimal` to generate a bundle of shortest chain and most advanced
|
||||
cryptographic algorithms, `ubiquitous` to generate a bundle of most
|
||||
widely acceptance across different browsers and OS platforms, and
|
||||
`force` to find an acceptable bundle which is identical to the
|
||||
content of the input certificate file.
|
||||
|
||||
Alternatively, the client certificate can be pulled directly from
|
||||
a domain. It is also possible to connect to the remote address
|
||||
through `-ip`.
|
||||
|
||||
```
|
||||
cfssl bundle [-ca-bundle bundle] [-int-bundle bundle] \
|
||||
[-metadata metadata_file] [-flavor bundle_flavor] \
|
||||
-domain domain_name [-ip ip_address]
|
||||
```
|
||||
|
||||
The bundle output form should follow the example:
|
||||
|
||||
```json
|
||||
{
|
||||
"bundle": "CERT_BUNDLE_IN_PEM",
|
||||
"crt": "LEAF_CERT_IN_PEM",
|
||||
"crl_support": true,
|
||||
"expires": "2015-12-31T23:59:59Z",
|
||||
"hostnames": ["example.com"],
|
||||
"issuer": "ISSUER CERT SUBJECT",
|
||||
"key": "KEY_IN_PEM",
|
||||
"key_size": 2048,
|
||||
"key_type": "2048-bit RSA",
|
||||
"ocsp": ["http://ocsp.example-ca.com"],
|
||||
"ocsp_support": true,
|
||||
"root": "ROOT_CA_CERT_IN_PEM",
|
||||
"signature": "SHA1WithRSA",
|
||||
"subject": "LEAF CERT SUBJECT",
|
||||
"status": {
|
||||
"rebundled": false,
|
||||
"expiring_SKIs": [],
|
||||
"untrusted_root_stores": [],
|
||||
"messages": [],
|
||||
"code": 0
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
#### Generating certificate signing request and private key
|
||||
|
||||
```
|
||||
cfssl genkey csr.json
|
||||
```
|
||||
|
||||
To generate a private key and corresponding certificate request, specify
|
||||
the key request as a JSON file. This file should follow the form:
|
||||
|
||||
```json
|
||||
{
|
||||
"hosts": [
|
||||
"example.com",
|
||||
"www.example.com"
|
||||
],
|
||||
"key": {
|
||||
"algo": "rsa",
|
||||
"size": 2048
|
||||
},
|
||||
"names": [
|
||||
{
|
||||
"C": "US",
|
||||
"L": "San Francisco",
|
||||
"O": "Internet Widgets, Inc.",
|
||||
"OU": "WWW",
|
||||
"ST": "California"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
#### Generating self-signed root CA certificate and private key
|
||||
|
||||
```
|
||||
cfssl genkey -initca csr.json | cfssljson -bare ca
|
||||
```
|
||||
|
||||
To generate a self-signed root CA certificate, specify the key request as
|
||||
a JSON file in the same format as in 'genkey'. Three PEM-encoded entities
|
||||
will appear in the output: the private key, the csr, and the self-signed
|
||||
certificate.
|
||||
|
||||
#### Generating a remote-issued certificate and private key.
|
||||
|
||||
```
|
||||
cfssl gencert -remote=remote_server [-hostname=comma,separated,hostnames] csr.json
|
||||
```
|
||||
|
||||
This calls `genkey` but has a remote CFSSL server sign and issue
|
||||
the certificate. You may use `-hostname` to override certificate SANs.
|
||||
|
||||
#### Generating a local-issued certificate and private key.
|
||||
|
||||
```
|
||||
cfssl gencert -ca cert -ca-key key [-hostname=comma,separated,hostnames] csr.json
|
||||
```
|
||||
|
||||
This generates and issues a certificate and private key from a local CA
|
||||
via a JSON request. You may use `-hostname` to override certificate SANs.
|
||||
|
||||
|
||||
#### Updating an OCSP responses file with a newly issued certificate
|
||||
|
||||
```
|
||||
cfssl ocspsign -ca cert -responder key -responder-key key -cert cert \
|
||||
| cfssljson -bare -stdout >> responses
|
||||
```
|
||||
|
||||
This will generate an OCSP response for the `cert` and add it to the
|
||||
`responses` file. You can then pass `responses` to `ocspserve` to start an
|
||||
OCSP server.
|
||||
|
||||
### Starting the API Server
|
||||
|
||||
CFSSL comes with an HTTP-based API server; the endpoints are
|
||||
documented in `doc/api/intro.txt`. The server is started with the `serve`
|
||||
command:
|
||||
|
||||
```
|
||||
cfssl serve [-address address] [-ca cert] [-ca-bundle bundle] \
|
||||
[-ca-key key] [-int-bundle bundle] [-int-dir dir] [-port port] \
|
||||
[-metadata file] [-remote remote_host] [-config config] \
|
||||
[-responder cert] [-responder-key key] [-db-config db-config]
|
||||
```
|
||||
|
||||
Address and port default to "127.0.0.1:8888". The `-ca` and `-ca-key`
|
||||
arguments should be the PEM-encoded certificate and private key to use
|
||||
for signing; by default, they are `ca.pem` and `ca_key.pem`. The
|
||||
`-ca-bundle` and `-int-bundle` should be the certificate bundles used
|
||||
for the root and intermediate certificate pools, respectively. These
|
||||
default to `ca-bundle.crt` and `int-bundle.crt` respectively. If the
|
||||
`-remote` option is specified, all signature operations will be forwarded
|
||||
to the remote CFSSL.
|
||||
|
||||
`-int-dir` specifies an intermediates directory. `-metadata` is a file for
|
||||
root certificate presence. The content of the file is a json dictionary
|
||||
(k,v) such that each key k is an SHA-1 digest of a root certificate while value v
|
||||
is a list of key store filenames. `-config` specifies a path to a configuration
|
||||
file. `-responder` and `-responder-key` are the certificate and the
|
||||
private key for the OCSP responder, respectively.
|
||||
|
||||
The amount of logging can be controlled with the `-loglevel` option. This
|
||||
comes *after* the serve command:
|
||||
|
||||
```
|
||||
cfssl serve -loglevel 2
|
||||
```
|
||||
|
||||
The levels are:
|
||||
|
||||
* 0 - DEBUG
|
||||
* 1 - INFO (this is the default level)
|
||||
* 2 - WARNING
|
||||
* 3 - ERROR
|
||||
* 4 - CRITICAL
|
||||
|
||||
### The multirootca
|
||||
|
||||
The `cfssl` program can act as an online certificate authority, but it
|
||||
only uses a single key. If multiple signing keys are needed, the
|
||||
`multirootca` program can be used. It only provides the `sign`,
|
||||
`authsign` and `info` endpoints. The documentation contains instructions
|
||||
for configuring and running the CA.
|
||||
|
||||
### The mkbundle Utility
|
||||
|
||||
`mkbundle` is used to build the root and intermediate bundles used in
|
||||
verifying certificates. It can be installed with
|
||||
|
||||
```
|
||||
go get -u github.com/cloudflare/cfssl/cmd/mkbundle
|
||||
```
|
||||
|
||||
It takes a collection of certificates, checks for CRL revocation (OCSP
|
||||
support is planned for the next release) and expired certificates, and
|
||||
bundles them into one file. It takes directories of certificates and
|
||||
certificate files (which may contain multiple certificates). For example,
|
||||
if the directory `intermediates` contains a number of intermediate
|
||||
certificates:
|
||||
|
||||
```
|
||||
mkbundle -f int-bundle.crt intermediates
|
||||
```
|
||||
|
||||
will check those certificates and combine valid certificates into a single
|
||||
`int-bundle.crt` file.
|
||||
|
||||
The `-f` flag specifies an output name; `-loglevel` specifies the verbosity
|
||||
of the logging (using the same loglevels as above), and `-nw` controls the
|
||||
number of revocation-checking workers.
|
||||
|
||||
### The cfssljson Utility

Most of the output from `cfssl` is in JSON. The `cfssljson` utility can take
this output and split it out into separate `key`, `certificate`, `CSR`, and
`bundle` files as appropriate. The tool takes a single flag, `-f`, that
specifies the input file, and an argument that specifies the base name for
the files produced. If the input filename is `-` (which is the default),
cfssljson reads from standard input. It maps keys in the JSON file to
filenames in the following way:

* if __cert__ or __certificate__ is specified, __basename.pem__ will be produced.
* if __key__ or __private_key__ is specified, __basename-key.pem__ will be produced.
* if __csr__ or __certificate_request__ is specified, __basename.csr__ will be produced.
* if __bundle__ is specified, __basename-bundle.pem__ will be produced.
* if __ocspResponse__ is specified, __basename-response.der__ will be produced.

Instead of saving to a file, you can pass `-stdout` to output the encoded
contents to standard output.

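For example, piping the output of `cfssl gencert -initca` through `cfssljson`
writes the new CA's certificate, key, and CSR to `ca.pem`, `ca-key.pem`, and
`ca.csr` using the mapping above (`csr.json` is a hypothetical CSR
configuration; `-bare` tells cfssljson the input is not wrapped in the API
response envelope):

```
cfssl gencert -initca csr.json | cfssljson -bare ca
```
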
### Static Builds

By default, the web assets are accessed from disk, based on their
relative locations. If you wish to distribute a single,
statically-linked `cfssl` binary, you'll want to embed these resources
before building. This can be done with the
[go.rice](https://github.com/GeertJohan/go.rice) tool.

```
pushd cli/serve && rice embed-go && popd
```

Then building with `go build` will use the embedded resources.

### Additional Documentation

Additional documentation can be found in the "doc" directory:

* `api/intro.txt`: documents the API endpoints
* `bootstrap.txt`: a walkthrough from building the package to getting
  up and running

70
vendor/github.com/cloudflare/cfssl/test.sh
generated
vendored
70
vendor/github.com/cloudflare/cfssl/test.sh
generated
vendored
@ -1,70 +0,0 @@
#!/bin/bash
set -o errexit

CDIR=$(cd `dirname "$0"` && pwd)
cd "$CDIR"

ORG_PATH="github.com/cloudflare"
REPO_PATH="${ORG_PATH}/cfssl"
ARCH="$(uname -m)"

export GOPATH="${CDIR}/gopath"

export PATH="${PATH}:${GOPATH}/bin"

eval $(go env)

if [ ! -h gopath/src/${REPO_PATH} ]; then
  mkdir -p gopath/src/${ORG_PATH}
  ln -s ../../../.. gopath/src/${REPO_PATH} || exit 255
fi

ls "${GOPATH}/src/${REPO_PATH}"

PACKAGES=""
if [ "$#" != 0 ]; then
  for pkg in "$@"; do
    PACKAGES="$PACKAGES $REPO_PATH/$pkg"
  done
else
  PACKAGES=$(go list ./... | grep -v /vendor/ | grep ^_)
  # Escape current directory
  CDIR_ESC=$(printf "%q" "$CDIR/")
  # Remove current directory from the package path
  PACKAGES=${PACKAGES//$CDIR_ESC/}
  # Remove underscores
  PACKAGES=${PACKAGES//_/}
  # split PACKAGES into an array and prepend REPO_PATH to each local package
  split=(${PACKAGES// / })
  PACKAGES=${split[@]/#/${REPO_PATH}/}
fi

# Build and install cfssl executable in PATH
go install -tags "$BUILD_TAGS" ${REPO_PATH}/cmd/cfssl

COVPROFILES=""
for package in $(go list -f '{{if len .TestGoFiles}}{{.ImportPath}}{{end}}' $PACKAGES)
do
  profile="$GOPATH/src/$package/.coverprofile"
  if [ $ARCH = 'x86_64' ]; then
    go test -race -tags "$BUILD_TAGS" --coverprofile=$profile $package
  else
    go test -tags "$BUILD_TAGS" --coverprofile=$profile $package
  fi
  [ -s $profile ] && COVPROFILES="$COVPROFILES $profile"
done
cat $COVPROFILES > coverprofile.txt

if ! command -v fgt > /dev/null ; then
  go get github.com/GeertJohan/fgt
fi

if ! command -v golint > /dev/null ; then
  go get github.com/golang/lint/golint
fi

for package in $PACKAGES
do
  echo "fgt golint ${package}"
  fgt golint "${package}"
done
4
vendor/github.com/container-storage-interface/spec/.gitignore
generated
vendored
4
vendor/github.com/container-storage-interface/spec/.gitignore
generated
vendored
@ -1,4 +0,0 @@
|
||||
*.tmp
|
||||
.DS_Store
|
||||
.build
|
||||
*.swp
|
38
vendor/github.com/container-storage-interface/spec/.travis.yml
generated
vendored
38
vendor/github.com/container-storage-interface/spec/.travis.yml
generated
vendored
@ -1,38 +0,0 @@
|
||||
# Setting "sudo" to false forces Travis-CI to use its
|
||||
# container-based build infrastructure, which has shorter
|
||||
# queue times.
|
||||
sudo: false
|
||||
|
||||
# Use the newer Travis-CI build templates based on the
|
||||
# Debian Linux distribution "Trusty" release.
|
||||
dist: trusty
|
||||
|
||||
# Selecting C as the language keeps the container to a
|
||||
# minimum footprint.
|
||||
language: c
|
||||
|
||||
jobs:
|
||||
include:
|
||||
|
||||
# The test stage validates that the protobuf file was updated
|
||||
# correctly prior to being committed.
|
||||
- stage: test
|
||||
script: make
|
||||
|
||||
# The lang stages validate the specification using
|
||||
# language-specific bindings.
|
||||
|
||||
# Lang stage: Cxx
|
||||
- stage: lang
|
||||
script: make -C lib/cxx
|
||||
|
||||
# Lang stage: Go
|
||||
- stage: lang
|
||||
language: go
|
||||
go: 1.10.4
|
||||
go_import_path: github.com/container-storage-interface/spec
|
||||
install:
|
||||
- make -C lib/go protoc
|
||||
- make -C lib/go protoc-gen-go
|
||||
script:
|
||||
- make -C lib/go
|
BIN
vendor/github.com/container-storage-interface/spec/CCLA.pdf
generated
vendored
BIN
vendor/github.com/container-storage-interface/spec/CCLA.pdf
generated
vendored
Binary file not shown.
59
vendor/github.com/container-storage-interface/spec/CONTRIBUTING.md
generated
vendored
59
vendor/github.com/container-storage-interface/spec/CONTRIBUTING.md
generated
vendored
@ -1,59 +0,0 @@
|
||||
# How to Contribute
|
||||
|
||||
This document outlines some of the requirements and conventions for contributing to the Container Storage Interface, including development workflow, commit message formatting, contact points, and other resources to make it easier to get your contribution accepted.
|
||||
|
||||
## Licence and CLA
|
||||
|
||||
CSI is under [Apache 2.0](LICENSE) and accepts contributions via GitHub pull requests.
|
||||
|
||||
Before contributing to the Container Storage Interface, contributors MUST sign the CLA available [here](https://github.com/container-storage-interface/spec/blob/master/CCLA.pdf).
|
||||
The CLA MAY be signed on behalf of a company, possibly covering multiple contributors, or as an individual (put "Individual" for "Corporation name").
|
||||
The completed CLA MUST be mailed to the CSI Approvers [mailing list](container-storage-interface-approvers@googlegroups.com).
|
||||
|
||||
## Markdown style
|
||||
|
||||
To keep consistency throughout the Markdown files in the CSI spec, all files should be formatted one sentence per line.
|
||||
This fixes two things: it makes diffing easier with git and it resolves fights about line wrapping length.
|
||||
For example, this paragraph will span three lines in the Markdown source.
|
||||
|
||||
## Code style
|
||||
|
||||
This also applies to the code snippets in the markdown files.
|
||||
|
||||
* Please wrap the code at 72 characters.
|
||||
|
||||
## Comments
|
||||
|
||||
This also applies to the code snippets in the markdown files.
|
||||
|
||||
* End each sentence within a comment with a punctuation mark (please note that we generally prefer periods); this applies to incomplete sentences as well.
|
||||
* For trailing comments, leave one space between the end of the code and the beginning of the comment.
|
||||
|
||||
## Git commit
|
||||
|
||||
The "system of record" for the specification is the `spec.md` file and all hand-edits of the specification should happen there.
|
||||
**DO NOT** manually edit the generated protobufs or generated language bindings.
|
||||
Once changes to `spec.md` are complete, please run `make` to update generated files.
|
||||
|
||||
**IMPORTANT:** Prior to committing code please run `make` to ensure that your specification changes have landed in all generated files.
|
||||
|
||||
### Commit Style
|
||||
|
||||
Each commit should represent a single logical (atomic) change: this makes your changes easier to review.
|
||||
|
||||
* Try to avoid unrelated cleanups (e.g., typo fixes or style nits) in the same commit that makes functional changes.
|
||||
While typo fixes are great, including them in the same commit as functional changes makes the commit history harder to read.
|
||||
* Developers often make incremental commits to save their progress when working on a change, and then “rewrite history” (e.g., using `git rebase -i`) to create a clean set of commits once the change is ready to be reviewed.
|
||||
|
||||
Simple house-keeping for clean git history.
|
||||
Read more on [How to Write a Git Commit Message](http://chris.beams.io/posts/git-commit/) or the Discussion section of [`git-commit(1)`](http://git-scm.com/docs/git-commit).
|
||||
|
||||
* Separate the subject from body with a blank line.
|
||||
* Limit the subject line to 50 characters.
|
||||
* Capitalize the subject line.
|
||||
* Do not end the subject line with a period.
|
||||
* Use the imperative mood in the subject line.
|
||||
* Wrap the body at 72 characters.
|
||||
* Use the body to explain what and why vs. how.
|
||||
* If there was important/useful/essential conversation or information, copy or include a reference.
|
||||
* When possible, use one keyword to scope the change in the subject (e.g., "README: ...", "tool: ...").
|
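Purely as an illustration of the points above (the subject, scope keyword, and
body below are hypothetical, not taken from the project history), a commit
message following these rules might look like:

```
spec: Clarify wording of the snapshot section

Explain what changed and why the previous wording was
ambiguous, and reference the issue or discussion that
motivated the change.
```
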
51
vendor/github.com/container-storage-interface/spec/Makefile
generated
vendored
51
vendor/github.com/container-storage-interface/spec/Makefile
generated
vendored
@ -1,51 +0,0 @@
|
||||
all: build
|
||||
|
||||
CSI_SPEC := spec.md
|
||||
CSI_PROTO := csi.proto
|
||||
|
||||
# This is the target for building the temporary CSI protobuf file.
|
||||
#
|
||||
# The temporary file is not versioned, and thus will always be
|
||||
# built on Travis-CI.
|
||||
$(CSI_PROTO).tmp: $(CSI_SPEC) Makefile
|
||||
echo "// Code generated by make; DO NOT EDIT." > "$@"
|
||||
cat $< | sed -n -e '/```protobuf$$/,/^```$$/ p' | sed '/^```/d' >> "$@"
|
||||
|
||||
# This is the target for building the CSI protobuf file.
|
||||
#
|
||||
# This target depends on its temp file, which is not versioned.
|
||||
# Therefore when built on Travis-CI the temp file will always
|
||||
# be built and trigger this target. On Travis-CI the temp file
|
||||
# is compared with the real file, and if they differ the build
|
||||
# will fail.
|
||||
#
|
||||
# Locally the temp file is simply copied over the real file.
|
||||
$(CSI_PROTO): $(CSI_PROTO).tmp
|
||||
ifeq (true,$(TRAVIS))
|
||||
diff "$@" "$?"
|
||||
else
|
||||
diff "$@" "$?" > /dev/null 2>&1 || cp -f "$?" "$@"
|
||||
endif
|
||||
|
||||
build: check
|
||||
|
||||
# If this is not running on Travis-CI then for sake of convenience
|
||||
# go ahead and update the language bindings as well.
|
||||
ifneq (true,$(TRAVIS))
|
||||
build:
|
||||
$(MAKE) -C lib/go
|
||||
$(MAKE) -C lib/cxx
|
||||
endif
|
||||
|
||||
clean:
|
||||
$(MAKE) -C lib/go $@
|
||||
|
||||
clobber: clean
|
||||
$(MAKE) -C lib/go $@
|
||||
rm -f $(CSI_PROTO) $(CSI_PROTO).tmp
|
||||
|
||||
# check generated files for violation of standards
|
||||
check: $(CSI_PROTO)
|
||||
awk '{ if (length > 72) print NR, $$0 }' $? | diff - /dev/null
|
||||
|
||||
.PHONY: clean clobber check
|
10
vendor/github.com/container-storage-interface/spec/OWNERS
generated
vendored
10
vendor/github.com/container-storage-interface/spec/OWNERS
generated
vendored
@ -1,10 +0,0 @@
|
||||
approvers:
|
||||
- saad-ali # Representing Kubernetes
|
||||
- thockin # Representing Kubernetes
|
||||
- jieyu # Representing Mesos
|
||||
- jdef # Representing Mesos
|
||||
- anusha-ragunathan # Representing Docker
|
||||
- ddebroy # Representing Docker
|
||||
- julian-hj # Representing Cloud Foundry
|
||||
- paulcwarren # Representing Cloud Foundry
|
||||
reviewers:
|
13
vendor/github.com/container-storage-interface/spec/README.md
generated
vendored
13
vendor/github.com/container-storage-interface/spec/README.md
generated
vendored
@ -1,13 +0,0 @@
|
||||
# Container Storage Interface (CSI) Specification [![build status](https://travis-ci.org/container-storage-interface/spec.svg?branch=master)](https://travis-ci.org/container-storage-interface/spec)
|
||||
|
||||
![CSI Logo](logo.png)
|
||||
|
||||
This project contains the CSI [specification](spec.md) and [protobuf](csi.proto) files.
|
||||
|
||||
## CSI Adoption
|
||||
|
||||
### Container Orchestrators (CO)
|
||||
|
||||
* [Cloud Foundry](https://github.com/cloudfoundry/csi-plugins-release/blob/master/CSI_SUPPORT.md)
|
||||
* [Kubernetes](https://kubernetes-csi.github.io/docs/)
|
||||
* [Mesos](http://mesos.apache.org/documentation/latest/csi/)
|
1
vendor/github.com/container-storage-interface/spec/VERSION
generated
vendored
1
vendor/github.com/container-storage-interface/spec/VERSION
generated
vendored
@ -1 +0,0 @@
|
||||
1.1.0
|
1306
vendor/github.com/container-storage-interface/spec/csi.proto
generated
vendored
1306
vendor/github.com/container-storage-interface/spec/csi.proto
generated
vendored
File diff suppressed because it is too large
Load Diff
2
vendor/github.com/container-storage-interface/spec/lib/README.md
generated
vendored
2
vendor/github.com/container-storage-interface/spec/lib/README.md
generated
vendored
@ -1,2 +0,0 @@
|
||||
# CSI Validation Libraries
|
||||
This directory contains language bindings generated from the CSI [protobuf file](../csi.proto) used to validate the model and workflows of the CSI specification.
|
4
vendor/github.com/container-storage-interface/spec/lib/go/.gitignore
generated
vendored
4
vendor/github.com/container-storage-interface/spec/lib/go/.gitignore
generated
vendored
@ -1,4 +0,0 @@
|
||||
/protoc
|
||||
/protoc-gen-go
|
||||
/csi.a
|
||||
/.protoc
|
136
vendor/github.com/container-storage-interface/spec/lib/go/Makefile
generated
vendored
136
vendor/github.com/container-storage-interface/spec/lib/go/Makefile
generated
vendored
@ -1,136 +0,0 @@
|
||||
all: build
|
||||
|
||||
########################################################################
|
||||
## GOLANG ##
|
||||
########################################################################
|
||||
|
||||
# If GOPATH isn't defined then set its default location.
|
||||
ifeq (,$(strip $(GOPATH)))
|
||||
GOPATH := $(HOME)/go
|
||||
else
|
||||
# If GOPATH is already set then update GOPATH to be its own
|
||||
# first element.
|
||||
GOPATH := $(word 1,$(subst :, ,$(GOPATH)))
|
||||
endif
|
||||
export GOPATH
|
||||
|
||||
|
||||
########################################################################
|
||||
## PROTOC ##
|
||||
########################################################################
|
||||
|
||||
# Only set PROTOC_VER if it has an empty value.
|
||||
ifeq (,$(strip $(PROTOC_VER)))
|
||||
PROTOC_VER := 3.5.1
|
||||
endif
|
||||
|
||||
PROTOC_OS := $(shell uname -s)
|
||||
ifeq (Darwin,$(PROTOC_OS))
|
||||
PROTOC_OS := osx
|
||||
endif
|
||||
|
||||
PROTOC_ARCH := $(shell uname -m)
|
||||
ifeq (i386,$(PROTOC_ARCH))
|
||||
PROTOC_ARCH := x86_32
|
||||
endif
|
||||
|
||||
PROTOC := ./protoc
|
||||
PROTOC_ZIP := protoc-$(PROTOC_VER)-$(PROTOC_OS)-$(PROTOC_ARCH).zip
|
||||
PROTOC_URL := https://github.com/google/protobuf/releases/download/v$(PROTOC_VER)/$(PROTOC_ZIP)
|
||||
PROTOC_TMP_DIR := .protoc
|
||||
PROTOC_TMP_BIN := $(PROTOC_TMP_DIR)/bin/protoc
|
||||
|
||||
$(PROTOC):
|
||||
-mkdir -p "$(PROTOC_TMP_DIR)" && \
|
||||
curl -L $(PROTOC_URL) -o "$(PROTOC_TMP_DIR)/$(PROTOC_ZIP)" && \
|
||||
unzip "$(PROTOC_TMP_DIR)/$(PROTOC_ZIP)" -d "$(PROTOC_TMP_DIR)" && \
|
||||
chmod 0755 "$(PROTOC_TMP_BIN)" && \
|
||||
cp -f "$(PROTOC_TMP_BIN)" "$@"
|
||||
stat "$@" > /dev/null 2>&1
|
||||
|
||||
|
||||
########################################################################
|
||||
## PROTOC-GEN-GO ##
|
||||
########################################################################
|
||||
|
||||
# This is the recipe for getting and installing the go plug-in
|
||||
# for protoc
|
||||
PROTOC_GEN_GO_PKG := github.com/golang/protobuf/protoc-gen-go
|
||||
PROTOC_GEN_GO := protoc-gen-go
|
||||
$(PROTOC_GEN_GO): PROTOBUF_PKG := $(dir $(PROTOC_GEN_GO_PKG))
|
||||
$(PROTOC_GEN_GO): PROTOBUF_VERSION := v1.2.0
|
||||
$(PROTOC_GEN_GO):
|
||||
mkdir -p $(dir $(GOPATH)/src/$(PROTOBUF_PKG))
|
||||
test -d $(GOPATH)/src/$(PROTOBUF_PKG)/.git || git clone https://$(PROTOBUF_PKG) $(GOPATH)/src/$(PROTOBUF_PKG)
|
||||
(cd $(GOPATH)/src/$(PROTOBUF_PKG) && \
|
||||
(test "$$(git describe --tags | head -1)" = "$(PROTOBUF_VERSION)" || \
|
||||
(git fetch && git checkout tags/$(PROTOBUF_VERSION))))
|
||||
(cd $(GOPATH)/src/$(PROTOBUF_PKG) && go get -v -d $$(go list -f '{{ .ImportPath }}' ./...)) && \
|
||||
go build -o "$@" $(PROTOC_GEN_GO_PKG)
|
||||
|
||||
|
||||
########################################################################
|
||||
## PATH ##
|
||||
########################################################################
|
||||
|
||||
# Update PATH with the current directory. This enables the protoc
|
||||
# binary to discover the protoc-gen-go binary, built inside this
|
||||
# directory.
|
||||
export PATH := $(shell pwd):$(PATH)
|
||||
|
||||
|
||||
########################################################################
|
||||
## BUILD ##
|
||||
########################################################################
|
||||
CSI_PROTO := ../../csi.proto
|
||||
CSI_PKG_ROOT := github.com/container-storage-interface/spec
|
||||
CSI_PKG_SUB := $(shell cat $(CSI_PROTO) | sed -n -e 's/^package.\([^;]*\).v[0-9]\+;$$/\1/p'|tr '.' '/')
|
||||
CSI_BUILD := $(CSI_PKG_SUB)/.build
|
||||
CSI_GO := $(CSI_PKG_SUB)/csi.pb.go
|
||||
CSI_A := csi.a
|
||||
CSI_GO_TMP := $(CSI_BUILD)/$(CSI_PKG_ROOT)/csi.pb.go
|
||||
|
||||
# This recipe generates the go language bindings to a temp area.
|
||||
$(CSI_GO_TMP): HERE := $(shell pwd)
|
||||
$(CSI_GO_TMP): PTYPES_PKG := github.com/golang/protobuf/ptypes
|
||||
$(CSI_GO_TMP): GO_OUT := plugins=grpc
|
||||
$(CSI_GO_TMP): GO_OUT := $(GO_OUT),Mgoogle/protobuf/descriptor.proto=github.com/golang/protobuf/protoc-gen-go/descriptor
|
||||
$(CSI_GO_TMP): GO_OUT := $(GO_OUT),Mgoogle/protobuf/wrappers.proto=$(PTYPES_PKG)/wrappers
|
||||
$(CSI_GO_TMP): GO_OUT := $(GO_OUT):"$(HERE)/$(CSI_BUILD)"
|
||||
$(CSI_GO_TMP): INCLUDE := -I$(GOPATH)/src -I$(HERE)/$(PROTOC_TMP_DIR)/include
|
||||
$(CSI_GO_TMP): $(CSI_PROTO) | $(PROTOC) $(PROTOC_GEN_GO)
|
||||
@mkdir -p "$(@D)"
|
||||
(cd "$(GOPATH)/src" && \
|
||||
$(HERE)/$(PROTOC) $(INCLUDE) --go_out=$(GO_OUT) "$(CSI_PKG_ROOT)/$(<F)")
|
||||
|
||||
# The temp language bindings are compared to the ones that are
|
||||
# versioned. If they are different then it means the language
|
||||
# bindings were not updated prior to being committed.
|
||||
$(CSI_GO): $(CSI_GO_TMP)
|
||||
ifeq (true,$(TRAVIS))
|
||||
diff "$@" "$?"
|
||||
else
|
||||
@mkdir -p "$(@D)"
|
||||
diff "$@" "$?" > /dev/null 2>&1 || cp -f "$?" "$@"
|
||||
endif
|
||||
|
||||
# This recipe builds the Go archive from the sources in three steps:
|
||||
#
|
||||
# 1. Go get any missing dependencies.
|
||||
# 2. Cache the packages.
|
||||
# 3. Build the archive file.
|
||||
$(CSI_A): $(CSI_GO)
|
||||
go get -v -d ./...
|
||||
go install ./$(CSI_PKG_SUB)
|
||||
go build -o "$@" ./$(CSI_PKG_SUB)
|
||||
|
||||
build: $(CSI_A)
|
||||
|
||||
clean:
|
||||
go clean -i ./...
|
||||
rm -rf "$(CSI_A)" "$(CSI_GO)" "$(CSI_BUILD)"
|
||||
|
||||
clobber: clean
|
||||
rm -fr "$(PROTOC)" "$(PROTOC_GEN_GO)" "$(CSI_PKG_SUB)"
|
||||
|
||||
.PHONY: clean clobber
|
21
vendor/github.com/container-storage-interface/spec/lib/go/README.md
generated
vendored
21
vendor/github.com/container-storage-interface/spec/lib/go/README.md
generated
vendored
@ -1,21 +0,0 @@
|
||||
# CSI Go Validation
|
||||
|
||||
This package is used to validate the CSI specification with Go language bindings.
|
||||
|
||||
## Build Reference
|
||||
|
||||
To validate the Go language bindings against the current specification use the following command:
|
||||
|
||||
```bash
|
||||
$ make
|
||||
```
|
||||
|
||||
The above command will download the `protoc` and `protoc-gen-go` binaries if they are not present and then proceed to build the CSI Go language bindings.
|
||||
|
||||
### Environment Variables
|
||||
|
||||
The following table lists the environment variables that can be used to influence the behavior of the Makefile:
|
||||
|
||||
| Name | Default Value | Description |
|
||||
|------|---------------|-------------|
|
||||
| `PROTOC_VER` | `3.3.0` | The version of the protoc binary. |
|
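For example, to pin the protoc version used for the build, the variable can be
overridden on the command line (the version shown here is only an example and
should match whatever release you need):

```bash
PROTOC_VER=3.5.1 make
```
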
BIN
vendor/github.com/container-storage-interface/spec/logo.png
generated
vendored
BIN
vendor/github.com/container-storage-interface/spec/logo.png
generated
vendored
Binary file not shown.
Before Width: | Height: | Size: 21 KiB |
2459
vendor/github.com/container-storage-interface/spec/spec.md
generated
vendored
2459
vendor/github.com/container-storage-interface/spec/spec.md
generated
vendored
File diff suppressed because it is too large
Load Diff
0
vendor/github.com/containerd/cgroups/metrics.pb.txt
generated
vendored
Executable file → Normal file
0
vendor/github.com/containerd/cgroups/metrics.pb.txt
generated
vendored
Executable file → Normal file
4181
vendor/github.com/containerd/containerd/api/1.0.pb.txt
generated
vendored
4181
vendor/github.com/containerd/containerd/api/1.0.pb.txt
generated
vendored
File diff suppressed because it is too large
Load Diff
4181
vendor/github.com/containerd/containerd/api/1.1.pb.txt
generated
vendored
4181
vendor/github.com/containerd/containerd/api/1.1.pb.txt
generated
vendored
File diff suppressed because it is too large
Load Diff
4205
vendor/github.com/containerd/containerd/api/1.2.pb.txt
generated
vendored
4205
vendor/github.com/containerd/containerd/api/1.2.pb.txt
generated
vendored
File diff suppressed because it is too large
Load Diff
18
vendor/github.com/containerd/containerd/api/README.md
generated
vendored
18
vendor/github.com/containerd/containerd/api/README.md
generated
vendored
@ -1,18 +0,0 @@
|
||||
This directory contains the GRPC API definitions for containerd.
|
||||
|
||||
All defined services and messages have been aggregated into `*.pb.txt`
|
||||
descriptors files in this directory. Definitions present here are considered
|
||||
frozen after the release.
|
||||
|
||||
At release time, the current `next.pb.txt` file will be moved into place to
|
||||
freeze the API changes for the minor version. For example, when 1.0.0 is
|
||||
released, `next.pb.txt` should be moved to `1.0.txt`. Notice that we leave off
|
||||
the patch number, since the API will be completely locked down for a given
|
||||
patch series.
|
||||
|
||||
We may find that by default, protobuf descriptors are too noisy to lock down
|
||||
API changes. In that case, we may filter out certain fields in the descriptors,
|
||||
possibly regenerating for old versions.
|
||||
|
||||
This process is similar to the [process used to ensure backwards compatibility
|
||||
in Go](https://github.com/golang/go/tree/master/api).
|
4233
vendor/github.com/containerd/containerd/api/next.pb.txt
generated
vendored
4233
vendor/github.com/containerd/containerd/api/next.pb.txt
generated
vendored
File diff suppressed because it is too large
Load Diff
53
vendor/github.com/containerd/containerd/contrib/Dockerfile.test
generated
vendored
53
vendor/github.com/containerd/containerd/contrib/Dockerfile.test
generated
vendored
@ -1,53 +0,0 @@
|
||||
# This dockerfile is used to test containerd within a container
|
||||
#
|
||||
# usage:
|
||||
# 1.) docker build -t containerd-test -f Dockerfile.test ../
|
||||
# 2.) docker run -it --privileged -v /tmp:/tmp --tmpfs /var/lib/containerd-test containerd-test bash
|
||||
# 3.) $ make binaries install test
|
||||
#
|
||||
|
||||
ARG GOLANG_VERSION=1.12
|
||||
|
||||
FROM golang:${GOLANG_VERSION} AS golang-base
|
||||
RUN mkdir -p /go/src/github.com/containerd/containerd
|
||||
WORKDIR /go/src/github.com/containerd/containerd
|
||||
|
||||
# Install proto3
|
||||
FROM golang-base AS proto3
|
||||
RUN apt-get update && apt-get install -y \
|
||||
autoconf \
|
||||
automake \
|
||||
g++ \
|
||||
libtool \
|
||||
unzip \
|
||||
--no-install-recommends
|
||||
|
||||
COPY script/setup/install-protobuf install-protobuf
|
||||
RUN ./install-protobuf
|
||||
|
||||
# Install runc
|
||||
FROM golang-base AS runc
|
||||
RUN apt-get update && apt-get install -y \
|
||||
curl \
|
||||
libseccomp-dev \
|
||||
--no-install-recommends
|
||||
|
||||
COPY vendor.conf vendor.conf
|
||||
COPY script/setup/install-runc install-runc
|
||||
RUN ./install-runc
|
||||
|
||||
FROM golang-base AS dev
|
||||
RUN apt-get update && apt-get install -y \
|
||||
btrfs-tools \
|
||||
gcc \
|
||||
git \
|
||||
libseccomp-dev \
|
||||
make \
|
||||
xfsprogs \
|
||||
--no-install-recommends
|
||||
|
||||
COPY --from=proto3 /usr/local/bin/protoc /usr/local/bin/protoc
|
||||
COPY --from=proto3 /usr/local/include/google /usr/local/include/google
|
||||
COPY --from=runc /usr/local/sbin/runc /usr/local/go/bin/runc
|
||||
|
||||
COPY . .
|
11
vendor/github.com/containerd/containerd/contrib/README.md
generated
vendored
11
vendor/github.com/containerd/containerd/contrib/README.md
generated
vendored
@ -1,11 +0,0 @@
|
||||
# contrib
|
||||
|
||||
The `contrib` directory contains packages that do not belong in the core containerd packages but still contribute to overall containerd usability.
|
||||
|
||||
Packages such as AppArmor or SELinux are placed in `contrib` because they are platform-dependent and often require higher-level tools and profiles to work.
|
||||
|
||||
Packaging and other build tools can be added to `contrib` to aid in packaging containerd for various distributions.
|
||||
|
||||
## Testing
|
||||
|
||||
Code in the `contrib` directory may or may not have been tested in the normal test pipeline for core components.
|
0
vendor/github.com/containerd/containerd/runtime/linux/runctypes/1.0.pb.txt
generated
vendored
Executable file → Normal file
0
vendor/github.com/containerd/containerd/runtime/linux/runctypes/1.0.pb.txt
generated
vendored
Executable file → Normal file
0
vendor/github.com/containerd/containerd/runtime/linux/runctypes/next.pb.txt
generated
vendored
Executable file → Normal file
0
vendor/github.com/containerd/containerd/runtime/linux/runctypes/next.pb.txt
generated
vendored
Executable file → Normal file
0
vendor/github.com/containerd/containerd/runtime/v2/runc/options/next.pb.txt
generated
vendored
Executable file → Normal file
0
vendor/github.com/containerd/containerd/runtime/v2/runc/options/next.pb.txt
generated
vendored
Executable file → Normal file
0
vendor/github.com/containerd/continuity/sysx/generate.sh
generated
vendored
Executable file → Normal file
0
vendor/github.com/containerd/continuity/sysx/generate.sh
generated
vendored
Executable file → Normal file
10
vendor/github.com/containerd/typeurl/.travis.yml
generated
vendored
10
vendor/github.com/containerd/typeurl/.travis.yml
generated
vendored
@ -4,7 +4,17 @@ go:
|
||||
- 1.10.x
|
||||
- tip
|
||||
|
||||
install:
|
||||
- go get -t -v ./...
|
||||
- go get -u github.com/vbatts/git-validation
|
||||
- go get -u github.com/kunalkushwaha/ltag
|
||||
|
||||
before_script:
|
||||
- pushd ..; git clone https://github.com/containerd/project; popd
|
||||
|
||||
script:
|
||||
- DCO_VERBOSITY=-q ../project/script/validate/dco
|
||||
- ../project/script/validate/fileheader ../project/
|
||||
- go test -race -coverprofile=coverage.txt -covermode=atomic
|
||||
|
||||
after_success:
|
||||
|
18
vendor/github.com/containerd/typeurl/LICENSE
generated
vendored
18
vendor/github.com/containerd/typeurl/LICENSE
generated
vendored
@ -1,6 +1,7 @@
|
||||
|
||||
Apache License
|
||||
Version 2.0, January 2004
|
||||
http://www.apache.org/licenses/
|
||||
https://www.apache.org/licenses/
|
||||
|
||||
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
|
||||
|
||||
@ -175,24 +176,13 @@
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
APPENDIX: How to apply the Apache License to your work.
|
||||
|
||||
To apply the Apache License to your work, attach the following
|
||||
boilerplate notice, with the fields enclosed by brackets "[]"
|
||||
replaced with your own identifying information. (Don't include
|
||||
the brackets!) The text should be enclosed in the appropriate
|
||||
comment syntax for the file format. We also recommend that a
|
||||
file or class name and description of purpose be included on the
|
||||
same "printed page" as the copyright notice for easier
|
||||
identification within third-party archives.
|
||||
|
||||
Copyright [yyyy] [name of copyright owner]
|
||||
Copyright The containerd Authors
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
https://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
|
10
vendor/github.com/containerd/typeurl/README.md
generated
vendored
10
vendor/github.com/containerd/typeurl/README.md
generated
vendored
@ -7,3 +7,13 @@
|
||||
A Go package for managing the registration, marshaling, and unmarshaling of encoded types.
|
||||
|
||||
This package helps when types are sent over a GRPC API and marshaled as a [protobuf.Any]().
|
||||
|
||||
## Project details
|
||||
|
||||
**typeurl** is a containerd sub-project, licensed under the [Apache 2.0 license](./LICENSE).
|
||||
As a containerd sub-project, you will find the:
|
||||
* [Project governance](https://github.com/containerd/project/blob/master/GOVERNANCE.md),
|
||||
* [Maintainers](https://github.com/containerd/project/blob/master/MAINTAINERS),
|
||||
* and [Contributing guidelines](https://github.com/containerd/project/blob/master/CONTRIBUTING.md)
|
||||
|
||||
information in our [`containerd/project`](https://github.com/containerd/project) repository.
|
||||
|
83
vendor/github.com/containerd/typeurl/doc.go
generated
vendored
Normal file
83
vendor/github.com/containerd/typeurl/doc.go
generated
vendored
Normal file
@ -0,0 +1,83 @@
|
||||
/*
|
||||
Copyright The containerd Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package typeurl
|
||||
|
||||
// Package typeurl assists with managing the registration, marshaling, and
|
||||
// unmarshaling of types encoded as protobuf.Any.
|
||||
//
|
||||
// A protobuf.Any is a proto message that can contain any arbitrary data. It
|
||||
// consists of two components, a TypeUrl and a Value, and its proto definition
|
||||
// looks like this:
|
||||
//
|
||||
// message Any {
|
||||
// string type_url = 1;
|
||||
// bytes value = 2;
|
||||
// }
|
||||
//
|
||||
// The TypeUrl is used to distinguish the contents from other proto.Any
|
||||
// messages. This typeurl library manages these URLs to enable automagic
|
||||
// marshaling and unmarshaling of the contents.
|
||||
//
|
||||
// For example, consider this go struct:
|
||||
//
|
||||
// type Foo struct {
|
||||
// Field1 string
|
||||
// Field2 string
|
||||
// }
|
||||
//
|
||||
// To use typeurl, types must first be registered. This is typically done in
|
||||
// the init function
|
||||
//
|
||||
// func init() {
|
||||
// typeurl.Register(&Foo{}, "Foo")
|
||||
// }
|
||||
//
|
||||
// This will register the type Foo with the url path "Foo". The arguments to
|
||||
// Register are variadic, and are used to construct a url path. Consider this
|
||||
// example, from the github.com/containerd/containerd/client package:
|
||||
//
|
||||
// func init() {
|
||||
// const prefix = "types.containerd.io"
|
||||
// // register TypeUrls for commonly marshaled external types
|
||||
// major := strconv.Itoa(specs.VersionMajor)
|
||||
// typeurl.Register(&specs.Spec{}, prefix, "opencontainers/runtime-spec", major, "Spec")
|
||||
// // this function has more Register calls, which are elided.
|
||||
// }
|
||||
//
|
||||
// This registers several types under a more complex url, which ends up mapping
|
||||
// to `types.containerd.io/opencontainers/runtime-spec/1/Spec` (or some other
|
||||
// value for major).
|
||||
//
|
||||
// Once a type is registered, it can be marshaled to a proto.Any message simply
|
||||
// by calling `MarshalAny`, like this:
|
||||
//
|
||||
// foo := &Foo{Field1: "value1", Field2: "value2"}
|
||||
// anyFoo, err := typeurl.MarshalAny(foo)
|
||||
//
|
||||
// MarshalAny will resolve the correct URL for the type. If the type in
|
||||
// question implements the proto.Message interface, then it will be marshaled
|
||||
// as a proto message. Otherwise, it will be marshaled as json. This means that
|
||||
// typeurl will work on any arbitrary data, whether or not it has a proto
|
||||
// definition, as long as it can be serialized to json.
|
||||
//
|
||||
// To unmarshal, the process is simply inverse:
|
||||
//
|
||||
// iface, err := typeurl.UnmarshalAny(anyFoo)
|
||||
// foo := iface.(*Foo)
|
||||
//
|
||||
// The correct type is automatically chosen from the type registry, and the
|
||||
// returned interface can be cast straight to that type.
|
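To make the flow described above concrete, here is a minimal, self-contained
sketch that registers a type, marshals it, and unmarshals it again. The `Foo`
type and the "Foo" URL path are the illustrative values from the doc comment,
not anything defined by the library:

```go
package main

import (
	"fmt"

	"github.com/containerd/typeurl"
)

// Foo mirrors the example type from the package documentation.
type Foo struct {
	Field1 string
	Field2 string
}

func init() {
	// Register Foo under the URL path "Foo" so typeurl can resolve it later.
	typeurl.Register(&Foo{}, "Foo")
}

func main() {
	foo := &Foo{Field1: "value1", Field2: "value2"}

	// Foo has no protobuf definition, so MarshalAny serializes it as JSON
	// under the registered TypeUrl.
	anyFoo, err := typeurl.MarshalAny(foo)
	if err != nil {
		panic(err)
	}
	fmt.Println(anyFoo.TypeUrl)

	// UnmarshalAny consults the registry and returns a *Foo as an interface{}.
	v, err := typeurl.UnmarshalAny(anyFoo)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", v.(*Foo))
}
```
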
5
vendor/github.com/containerd/typeurl/types.go
generated
vendored
5
vendor/github.com/containerd/typeurl/types.go
generated
vendored
@ -78,7 +78,10 @@ func Is(any *types.Any, v interface{}) bool {
|
||||
return any.TypeUrl == url
|
||||
}
|
||||
|
||||
// MarshalAny marshals the value v into an any with the correct TypeUrl
|
||||
// MarshalAny marshals the value v into an any with the correct TypeUrl.
|
||||
// If the provided object is already a proto.Any message, then it will be
|
||||
// returned verbatim. If it is of type proto.Message, it will be marshaled as a
|
||||
// protocol buffer. Otherwise, the object will be marshaled to json.
|
||||
func MarshalAny(v interface{}) (*types.Any, error) {
|
||||
var marshal func(v interface{}) ([]byte, error)
|
||||
switch t := v.(type) {
|
||||
|
5
vendor/github.com/containernetworking/cni/.gitignore
generated
vendored
5
vendor/github.com/containernetworking/cni/.gitignore
generated
vendored
@ -1,5 +0,0 @@
|
||||
bin/
|
||||
gopath/
|
||||
*.sw[ponm]
|
||||
.vagrant
|
||||
release-*
|
41
vendor/github.com/containernetworking/cni/.travis.yml
generated
vendored
41
vendor/github.com/containernetworking/cni/.travis.yml
generated
vendored
@ -1,41 +0,0 @@
|
||||
language: go
|
||||
sudo: required
|
||||
dist: trusty
|
||||
|
||||
go:
|
||||
- 1.7.x
|
||||
- 1.8.x
|
||||
|
||||
env:
|
||||
global:
|
||||
- TOOLS_CMD=golang.org/x/tools/cmd
|
||||
- PATH=$GOROOT/bin:$PATH
|
||||
- GO15VENDOREXPERIMENT=1
|
||||
matrix:
|
||||
- TARGET=amd64
|
||||
- TARGET=arm
|
||||
- TARGET=arm64
|
||||
- TARGET=ppc64le
|
||||
- TARGET=s390x
|
||||
|
||||
matrix:
|
||||
fast_finish: true
|
||||
|
||||
install:
|
||||
- go get ${TOOLS_CMD}/cover
|
||||
- go get github.com/modocache/gover
|
||||
- go get github.com/mattn/goveralls
|
||||
|
||||
script:
|
||||
- >
|
||||
if [ "${TARGET}" == "amd64" ]; then
|
||||
GOARCH="${TARGET}" ./test.sh;
|
||||
else
|
||||
GOARCH="${TARGET}" ./build.sh;
|
||||
fi
|
||||
|
||||
notifications:
|
||||
email: false
|
||||
|
||||
git:
|
||||
depth: 9999999
|
4
vendor/github.com/containernetworking/cni/CODE-OF-CONDUCT.md
generated
vendored
4
vendor/github.com/containernetworking/cni/CODE-OF-CONDUCT.md
generated
vendored
@ -1,4 +0,0 @@
|
||||
## Community Code of Conduct
|
||||
|
||||
CNI follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
|
||||
|
125
vendor/github.com/containernetworking/cni/CONTRIBUTING.md
generated
vendored
125
vendor/github.com/containernetworking/cni/CONTRIBUTING.md
generated
vendored
@ -1,125 +0,0 @@
|
||||
# How to Contribute
|
||||
|
||||
CNI is [Apache 2.0 licensed](LICENSE) and accepts contributions via GitHub
|
||||
pull requests. This document outlines some of the conventions on development
|
||||
workflow, commit message formatting, contact points and other resources to make
|
||||
it easier to get your contribution accepted.
|
||||
|
||||
We gratefully welcome improvements to documentation as well as to code.
|
||||
|
||||
# Certificate of Origin
|
||||
|
||||
By contributing to this project you agree to the Developer Certificate of
|
||||
Origin (DCO). This document was created by the Linux Kernel community and is a
|
||||
simple statement that you, as a contributor, have the legal right to make the
|
||||
contribution. See the [DCO](DCO) file for details.
|
||||
|
||||
# Email and Chat
|
||||
|
||||
The project uses the cni-dev email list and IRC chat:
|
||||
- Email: [cni-dev](https://groups.google.com/forum/#!forum/cni-dev)
|
||||
- IRC: #[containernetworking](irc://irc.freenode.org:6667/#containernetworking) channel on freenode.org
|
||||
|
||||
Please avoid emailing maintainers found in the MAINTAINERS file directly. They
|
||||
are very busy and read the mailing lists.
|
||||
|
||||
## Getting Started
|
||||
|
||||
- Fork the repository on GitHub
|
||||
- Read the [README](README.md) for build and test instructions
|
||||
- Play with the project, submit bugs, submit pull requests!
|
||||
|
||||
## Contribution workflow
|
||||
|
||||
This is a rough outline of how to prepare a contribution:
|
||||
|
||||
- Create a topic branch from where you want to base your work (usually branched from master).
|
||||
- Make commits of logical units.
|
||||
- Make sure your commit messages are in the proper format (see below).
|
||||
- Push your changes to a topic branch in your fork of the repository.
|
||||
- If you changed code:
|
||||
- add automated tests to cover your changes, using the [Ginkgo](http://onsi.github.io/ginkgo/) & [Gomega](http://onsi.github.io/gomega/) style
|
||||
- if the package did not previously have any test coverage, add it to the list
|
||||
of `TESTABLE` packages in the `test.sh` script.
|
||||
- run the full test script and ensure it passes
|
||||
- Make sure any new code files have a license header (this is now enforced by automated tests)
|
||||
- Submit a pull request to the original repository.
|
||||
|
||||
## How to run the test suite
|
||||
We generally require test coverage of any new features or bug fixes.
|
||||
|
||||
Here's how you can run the test suite on any system (even Mac or Windows) using
|
||||
[Vagrant](https://www.vagrantup.com/) and a hypervisor of your choice:
|
||||
|
||||
```bash
|
||||
vagrant up
|
||||
vagrant ssh
|
||||
# you're now in a shell in a virtual machine
|
||||
sudo su
|
||||
cd /go/src/github.com/containernetworking/cni
|
||||
|
||||
# to run the full test suite
|
||||
./test.sh
|
||||
|
||||
# to focus on a particular test suite
|
||||
cd plugins/main/loopback
|
||||
go test
|
||||
```
|
||||
|
||||
# Acceptance policy
|
||||
|
||||
These things will make a PR more likely to be accepted:
|
||||
|
||||
* a well-described requirement
|
||||
* tests for new code
|
||||
* tests for old code!
|
||||
* new code and tests follow the conventions in old code and tests
|
||||
* a good commit message (see below)
|
||||
|
||||
In general, we will merge a PR once two maintainers have endorsed it.
|
||||
Trivial changes (e.g., corrections to spelling) may get waved through.
|
||||
For substantial changes, more people may become involved, and you might get asked to resubmit the PR or divide the changes into more than one PR.
|
||||
|
||||
### Format of the Commit Message
|
||||
|
||||
We follow a rough convention for commit messages that is designed to answer two
|
||||
questions: what changed and why. The subject line should feature the what and
|
||||
the body of the commit should describe the why.
|
||||
|
||||
```
|
||||
scripts: add the test-cluster command
|
||||
|
||||
this uses tmux to setup a test cluster that you can easily kill and
|
||||
start for debugging.
|
||||
|
||||
Fixes #38
|
||||
```
|
||||
|
||||
The format can be described more formally as follows:
|
||||
|
||||
```
|
||||
<subsystem>: <what changed>
|
||||
<BLANK LINE>
|
||||
<why this change was made>
|
||||
<BLANK LINE>
|
||||
<footer>
|
||||
```
|
||||
|
||||
The first line is the subject and should be no longer than 70 characters, the
|
||||
second line is always blank, and other lines should be wrapped at 80 characters.
|
||||
This allows the message to be easier to read on GitHub as well as in various
|
||||
git tools.
|
||||
|
||||
## 3rd party plugins
|
||||
So you've built a CNI plugin. Where should it live?
|
||||
|
||||
Short answer: We'd be happy to link to it from our [list of 3rd party plugins](README.md#3rd-party-plugins).
|
||||
But we'd rather you kept the code in your own repo.
|
||||
|
||||
Long answer: An advantage of the CNI model is that independent plugins can be
|
||||
built, distributed and used without any code changes to this repository. While
|
||||
some widely used plugins (and a few less-popular legacy ones) live in this repo,
|
||||
we're reluctant to add more.
|
||||
|
||||
If you have a good reason why the CNI maintainers should take custody of your
|
||||
plugin, please open an issue or PR.
|
108
vendor/github.com/containernetworking/cni/CONVENTIONS.md
generated
vendored
108
vendor/github.com/containernetworking/cni/CONVENTIONS.md
generated
vendored
@ -1,108 +0,0 @@
|
||||
# Extension conventions
|
||||
|
||||
There are three ways of passing information to plugins using the Container Network Interface (CNI), none of which require the [spec](SPEC.md) to be updated. These are
|
||||
- plugin specific fields in the JSON config
|
||||
- `args` field in the JSON config
|
||||
- `CNI_ARGS` environment variable
|
||||
|
||||
This document aims to provide guidance on which method should be used and to provide a convention for how common information should be passed.
|
||||
Establishing these conventions allows plugins to work across multiple runtimes. This helps both plugins and the runtimes.
|
||||
|
||||
## Plugins
|
||||
* Plugin authors should aim to support these conventions where it makes sense for their plugin. This means they are more likely to "just work" with a wider range of runtimes.
|
||||
* Plugins should accept arguments according to these conventions if they implement the same basic functionality as other plugins. If plugins have shared functionality that isn't covered by these conventions, then a PR should be opened against this document.
|
||||
|
||||
## Runtimes
|
||||
* Runtime authors should follow these conventions if they want to pass additional information to plugins. This will allow the extra information to be consumed by the widest range of plugins.
|
||||
* These conventions serve as an abstraction for the runtime. For example, port forwarding is highly implementation specific, but users should be able to select the plugin of their choice without changing the runtime.
|
||||
|
||||
# Current conventions
|
||||
Additional conventions can be created by creating PRs which modify this document.
|
||||
|
||||
## Plugin specific fields
|
||||
[Plugin specific fields](https://github.com/containernetworking/cni/blob/master/SPEC.md#network-configuration) formed part of the original CNI spec and have been present since the initial release.
|
||||
> Plugins may define additional fields that they accept and may generate an error if called with unknown fields. The exception to this is the args field may be used to pass arbitrary data which may be ignored by plugins.
|
||||
|
||||
A plugin can define any additional fields it needs to work properly. It is expected that it will return an error if it can't act on fields that were expected or where the field values were malformed.
|
||||
|
||||
This method of passing information to a plugin is recommended when the following conditions hold
|
||||
* The configuration has specific meaning to the plugin (i.e. it's not just general meta data)
|
||||
* the plugin is expected to act on the configuration or return an error if it can't
|
||||
|
||||
Dynamic information (i.e. data that a runtime fills out) should be placed in a `runtimeConfig` section.
|
||||
|
||||
| Area | Purpose | Spec and Example | Runtime implementations | Plugin Implementations |
|
||||
| ----- | ------- | ---------------- | ----------------------- | --------------------- |
|
||||
| port mappings | Pass mapping from ports on the host to ports in the container network namespace. | Operators can ask runtimes to pass port mapping information to plugins, by setting the following in the CNI config <pre>"capabilities": {"portMappings": true} </pre> Runtimes should fill in the actual port mappings when the config is passed to plugins. It should be placed in a new section of the config "runtimeConfig" e.g. <pre>"runtimeConfig": {<br /> "portMappings" : [<br /> { "hostPort": 8080, "containerPort": 80, "protocol": "tcp" },<br /> { "hostPort": 8000, "containerPort": 8001, "protocol": "udp" }<br /> ]<br />}</pre> | none | none |
|
||||
|
||||
For example, the configuration for a port mapping plugin might look like this to an operator (it should be included as part of a [network configuration list](https://github.com/containernetworking/cni/blob/master/SPEC.md#network-configuration-lists)).
|
||||
```json
|
||||
{
|
||||
"name" : "ExamplePlugin",
|
||||
"type" : "port-mapper",
|
||||
"capabilities": {"portMappings": true}
|
||||
}
|
||||
```
|
||||
|
||||
But the runtime would fill in the mappings so the plugin itself would receive something like this.
|
||||
```json
|
||||
{
|
||||
"name" : "ExamplePlugin",
|
||||
"type" : "port-mapper",
|
||||
"runtimeConfig": {
|
||||
"portMappings": [
|
||||
{"hostPort": 8080, "containerPort": 80, "protocol": "tcp"}
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## "args" in network config
|
||||
`args` in [network config](https://github.com/containernetworking/cni/blob/master/SPEC.md#network-configuration) were introduced as an optional field into the `0.2.0` release of the CNI spec. The first CNI code release that it appeared in was `v0.4.0`.
|
||||
> args (dictionary): Optional additional arguments provided by the container runtime. For example a dictionary of labels could be passed to CNI plugins by adding them to a labels field under args.
|
||||
|
||||
`args` provide a way of providing more structured data than the flat strings that CNI_ARGS can support.
|
||||
|
||||
`args` should be used for _optional_ meta-data. Runtimes can place additional data in `args` and plugins that don't understand that data should just ignore it. Runtimes should not require that a plugin understands or consumes that data provided, and so a runtime should not expect to receive an error if the data could not be acted on.
|
||||
|
||||
This method of passing information to a plugin is recommended when the information is optional and the plugin can choose to ignore it. It's often the case that such information is passed to all plugins by the runtime without regard for whether the plugin can understand it.
|
||||
|
||||
The conventions documented here are all namespaced under `cni` so they don't conflict with any existing `args`.
|
||||
|
||||
For example:
|
||||
```json
|
||||
{
|
||||
"cniVersion":"0.2.0",
|
||||
"name":"net",
|
||||
"args":{
|
||||
"cni":{
|
||||
"labels": [{"key": "app", "value": "myapp"}]
|
||||
}
|
||||
},
|
||||
<REST OF CNI CONFIG HERE>
|
||||
"ipam":{
|
||||
<IPAM CONFIG HERE>
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
| Area | Purpose| Spec and Example | Runtime implementations | Plugin Implementations |
|
||||
| ----- | ------ | ------------ | ----------------------- | ---------------------- |
|
||||
| labels | Pass`key=value` labels to plugins | <pre>"labels" : [<br /> { "key" : "app", "value" : "myapp" },<br /> { "key" : "env", "value" : "prod" }<br />] </pre> | none | none |
|
||||
|
||||
## CNI_ARGS
|
||||
CNI_ARGS formed part of the original CNI spec and have been present since the initial release.
|
||||
> `CNI_ARGS`: Extra arguments passed in by the user at invocation time. Alphanumeric key-value pairs separated by semicolons; for example, "FOO=BAR;ABC=123"
|
||||
|
||||
The use of `CNI_ARGS` is deprecated and "args" should be used instead.
|
||||
|
||||
| Field | Purpose| Spec and Example | Runtime implementations | Plugin Implementations |
|
||||
| ------ | ------ | ---------------- | ----------------------- | ---------------------- |
|
||||
| IP | Request a specific IP from IPAM plugins | IP=192.168.10.4 | *rkt* supports passing additional arguments to plugins and the [documentation](https://coreos.com/rkt/docs/latest/networking/overriding-defaults.html) suggests IP can be used. | host-local (since version v0.2.0) supports the field for IPv4 only - [documentation](https://github.com/containernetworking/cni/blob/master/Documentation/host-local.md#supported-arguments).|
|
||||
|
||||
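As an illustration of this deprecated mechanism, the sketch below asks
`host-local` for a specific IPv4 address via `CNI_ARGS`. It assumes the
`cnitool` helper from this repository forwards the `CNI_ARGS` environment
variable to the plugin; the network name, netns path, and plugin directory are
hypothetical:

```bash
# Deprecated style: prefer "args" in the network configuration instead.
CNI_PATH=/opt/cni/bin NETCONFPATH=/etc/cni/net.d \
CNI_ARGS="IP=192.168.10.4" \
cnitool add mynet /var/run/netns/testing
```
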
## Chained Plugins
|
||||
If plugins are agnostic about the type of interface created, they SHOULD work in a chained mode and configure existing interfaces. Plugins MAY also create the desired interface when not run in a chain.
|
||||
|
||||
For example, the `bridge` plugin adds the host-side interface to a bridge. So, it should accept any previous result that includes a host-side interface, including `tap` devices. If not called as a chained plugin, it creates a `veth` pair first.
|
||||
|
||||
Plugins that meet this convention are usable by a larger set of runtimes and interfaces, including hypervisors and DPDK providers.
|
36
vendor/github.com/containernetworking/cni/DCO
generated
vendored
36
vendor/github.com/containernetworking/cni/DCO
generated
vendored
@ -1,36 +0,0 @@
|
||||
Developer Certificate of Origin
|
||||
Version 1.1
|
||||
|
||||
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
|
||||
660 York Street, Suite 102,
|
||||
San Francisco, CA 94110 USA
|
||||
|
||||
Everyone is permitted to copy and distribute verbatim copies of this
|
||||
license document, but changing it is not allowed.
|
||||
|
||||
|
||||
Developer's Certificate of Origin 1.1
|
||||
|
||||
By making a contribution to this project, I certify that:
|
||||
|
||||
(a) The contribution was created in whole or in part by me and I
|
||||
have the right to submit it under the open source license
|
||||
indicated in the file; or
|
||||
|
||||
(b) The contribution is based upon previous work that, to the best
|
||||
of my knowledge, is covered under an appropriate open source
|
||||
license and I have the right under that license to submit that
|
||||
work with modifications, whether created in whole or in part
|
||||
by me, under the same open source license (unless I am
|
||||
permitted to submit under a different license), as indicated
|
||||
in the file; or
|
||||
|
||||
(c) The contribution was provided directly to me by some other
|
||||
person who certified (a), (b) or (c) and I have not modified
|
||||
it.
|
||||
|
||||
(d) I understand and agree that this project and the contribution
|
||||
are public and that a record of the contribution (including all
|
||||
personal information I submit with it, including my sign-off) is
|
||||
maintained indefinitely and may be redistributed consistent with
|
||||
this project or the open source license(s) involved.
|
6
vendor/github.com/containernetworking/cni/MAINTAINERS
generated
vendored
6
vendor/github.com/containernetworking/cni/MAINTAINERS
generated
vendored
@ -1,6 +0,0 @@
|
||||
Bryan Boreham <bryan@weave.works> (@bboreham)
|
||||
Casey Callendrello <casey.callendrello@coreos.com> (@squeed)
|
||||
Dan Williams <dcbw@redhat.com> (@dcbw)
|
||||
Gabe Rosenhouse <grosenhouse@pivotal.io> (@rosenhouse)
|
||||
Stefan Junker <stefan.junker@coreos.com> (@steveeJ)
|
||||
Tom Denham <tom@tigera.io> (@tomdee)
|
191
vendor/github.com/containernetworking/cni/README.md
generated
vendored
191
vendor/github.com/containernetworking/cni/README.md
generated
vendored
@ -1,191 +0,0 @@
|
||||
[![Build Status](https://travis-ci.org/containernetworking/cni.svg?branch=master)](https://travis-ci.org/containernetworking/cni)
|
||||
[![Coverage Status](https://coveralls.io/repos/github/containernetworking/cni/badge.svg?branch=master)](https://coveralls.io/github/containernetworking/cni?branch=master)
|
||||
[![Slack Status](https://cryptic-tundra-43194.herokuapp.com/badge.svg)](https://cryptic-tundra-43194.herokuapp.com/)
|
||||
|
||||
![CNI Logo](logo.png)
|
||||
|
||||
---
|
||||
|
||||
# Community Sync Meeting
|
||||
|
||||
There is a community sync meeting for users and developers every 1-2 months. The next meeting will be held on a Google Hangout and the link is in the [agenda](https://docs.google.com/document/d/10ECyT2mBGewsJUcmYmS8QNo1AcNgy2ZIe2xS7lShYhE/edit?usp=sharing) (notes from previous meetings are also in this doc). The next meeting will be held on *Wednesday, June 21st* at *3:00pm UTC* ([Add to Calendar](https://www.worldtimebuddy.com/?qm=1&lid=100,5,2643743,5391959&h=100&date=2017-6-21&sln=15-16)).
|
||||
|
||||
---
|
||||
|
||||
# CNI - the Container Network Interface
|
||||
|
||||
## What is CNI?
|
||||
|
||||
CNI (_Container Network Interface_), a [Cloud Native Computing Foundation](https://cncf.io) project, consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of supported plugins.
|
||||
CNI concerns itself only with network connectivity of containers and removing allocated resources when the container is deleted.
|
||||
Because of this focus, CNI has a wide range of support and the specification is simple to implement.
|
||||
|
||||
As well as the [specification](SPEC.md), this repository contains the Go source code of a [library for integrating CNI into applications](libcni) and an [example command-line tool](cnitool) for executing CNI plugins. A [separate repository contains reference plugins](https://github.com/containernetworking/plugins) and a template for making new plugins.
|
||||
|
||||
The template code makes it straightforward to create a CNI plugin for an existing container networking project.
|
||||
CNI also makes a good framework for creating a new container networking project from scratch.
|
||||
|
||||
## Why develop CNI?
|
||||
|
||||
Application containers on Linux are a rapidly evolving area, and within this area networking is not well addressed as it is highly environment-specific.
|
||||
We believe that many container runtimes and orchestrators will seek to solve the same problem of making the network layer pluggable.
|
||||
|
||||
To avoid duplication, we think it is prudent to define a common interface between the network plugins and container execution: hence we put forward this specification, along with libraries for Go and a set of plugins.
|
||||
|
||||
## Who is using CNI?
|
||||
### Container runtimes
|
||||
- [rkt - container engine](https://coreos.com/blog/rkt-cni-networking.html)
|
||||
- [Kurma - container runtime](http://kurma.io/)
|
||||
- [Kubernetes - a system to simplify container operations](http://kubernetes.io/docs/admin/network-plugins/)
|
||||
- [OpenShift - Kubernetes with additional enterprise features](https://github.com/openshift/origin/blob/master/docs/openshift_networking_requirements.md)
|
||||
- [Cloud Foundry - a platform for cloud applications](https://github.com/cloudfoundry-incubator/cf-networking-release)
|
||||
- [Mesos - a distributed systems kernel](https://github.com/apache/mesos/blob/master/docs/cni.md)
|
||||
|
||||
### 3rd party plugins
|
||||
- [Project Calico - a layer 3 virtual network](https://github.com/projectcalico/calico-cni)
|
||||
- [Weave - a multi-host Docker network](https://github.com/weaveworks/weave)
|
||||
- [Contiv Networking - policy networking for various use cases](https://github.com/contiv/netplugin)
|
||||
- [SR-IOV](https://github.com/hustcat/sriov-cni)
|
||||
- [Cilium - BPF & XDP for containers](https://github.com/cilium/cilium)
|
||||
- [Infoblox - enterprise IP address management for containers](https://github.com/infobloxopen/cni-infoblox)
|
||||
- [Multus - a Multi plugin](https://github.com/Intel-Corp/multus-cni)
|
||||
- [Romana - Layer 3 CNI plugin supporting network policy for Kubernetes](https://github.com/romana/kube)
|
||||
- [CNI-Genie - generic CNI network plugin](https://github.com/Huawei-PaaS/CNI-Genie)
|
||||
- [Nuage CNI - Nuage Networks SDN plugin for network policy kubernetes support ](https://github.com/nuagenetworks/nuage-cni)
|
||||
- [Silk - a CNI plugin designed for Cloud Foundry](https://github.com/cloudfoundry-incubator/silk)
|
||||
- [Linen - a CNI plugin designed for overlay networks with Open vSwitch and fit in SDN/OpenFlow network environment](https://github.com/John-Lin/linen-cni)
|
||||
|
||||
The CNI team also maintains some [core plugins in a separate repository](https://github.com/containernetworking/plugins).
|
||||
|
||||
|
||||
## Contributing to CNI
|
||||
|
||||
We welcome contributions, including [bug reports](https://github.com/containernetworking/cni/issues), and code and documentation improvements.
|
||||
If you intend to contribute to code or documentation, please read [CONTRIBUTING.md](CONTRIBUTING.md). Also see the [contact section](#contact) in this README.
|
||||
|
||||
## How do I use CNI?
|
||||
|
||||
### Requirements
|
||||
|
||||
The CNI spec is language agnostic. To use the Go language libraries in this repository, you'll need a recent version of Go. Our [automated tests](https://travis-ci.org/containernetworking/cni/builds) cover Go versions 1.7 and 1.8.
|
||||
|
||||
### Reference Plugins
|
||||
|
||||
The CNI project maintains a set of [reference plugins](https://github.com/containernetworking/plugins) that implement the CNI specification.
|
||||
NOTE: the reference plugins used to live in this repository but have been split out into a [separate repository](https://github.com/containernetworking/plugins) as of May 2017.
|
||||
|
||||
### Running the plugins
|
||||
|
||||
After building and installing the [reference plugins](https://github.com/containernetworking/plugins), you can use the `priv-net-run.sh` and `docker-run.sh` scripts in the `scripts/` directory to exercise the plugins.
|
||||
|
||||
**Note: `priv-net-run.sh` depends on `jq`.**
|
||||
|
||||
Start out by creating a netconf file to describe a network:
|
||||
|
||||
```bash
|
||||
$ mkdir -p /etc/cni/net.d
|
||||
$ cat >/etc/cni/net.d/10-mynet.conf <<EOF
|
||||
{
|
||||
"cniVersion": "0.2.0",
|
||||
"name": "mynet",
|
||||
"type": "bridge",
|
||||
"bridge": "cni0",
|
||||
"isGateway": true,
|
||||
"ipMasq": true,
|
||||
"ipam": {
|
||||
"type": "host-local",
|
||||
"subnet": "10.22.0.0/16",
|
||||
"routes": [
|
||||
{ "dst": "0.0.0.0/0" }
|
||||
]
|
||||
}
|
||||
}
|
||||
EOF
|
||||
$ cat >/etc/cni/net.d/99-loopback.conf <<EOF
|
||||
{
|
||||
"cniVersion": "0.2.0",
|
||||
"type": "loopback"
|
||||
}
|
||||
EOF
|
||||
```
|
||||
|
||||
The directory `/etc/cni/net.d` is the default location in which the scripts will look for net configurations.
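As a rough sketch of what the scripts do with this directory, a Go program could discover and parse the same files with the `libcni` package. The glob pattern and error handling below are illustrative only.

```go
package main

import (
	"fmt"
	"log"
	"path/filepath"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	// Pick up every netconf in the default directory, in lexical order,
	// mirroring how the example scripts select configurations.
	files, err := filepath.Glob("/etc/cni/net.d/*.conf")
	if err != nil {
		log.Fatal(err)
	}
	for _, f := range files {
		conf, err := libcni.ConfFromFile(f)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: network %q handled by plugin %q\n",
			f, conf.Network.Name, conf.Network.Type)
	}
}
```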
|
||||
|
||||
Next, build the plugins:
|
||||
|
||||
```bash
|
||||
$ cd $GOPATH/src/github.com/containernetworking/plugins
|
||||
$ ./build.sh
|
||||
```
|
||||
|
||||
Finally, execute a command (`ifconfig` in this example) in a private network namespace that has joined the `mynet` network:
|
||||
|
||||
```bash
|
||||
$ CNI_PATH=$GOPATH/src/github.com/containernetworking/plugins/bin
|
||||
$ cd $GOPATH/src/github.com/containernetworking/cni/scripts
|
||||
$ sudo CNI_PATH=$CNI_PATH ./priv-net-run.sh ifconfig
|
||||
eth0 Link encap:Ethernet HWaddr f2:c2:6f:54:b8:2b
|
||||
inet addr:10.22.0.2 Bcast:0.0.0.0 Mask:255.255.0.0
|
||||
inet6 addr: fe80::f0c2:6fff:fe54:b82b/64 Scope:Link
|
||||
UP BROADCAST MULTICAST MTU:1500 Metric:1
|
||||
RX packets:1 errors:0 dropped:0 overruns:0 frame:0
|
||||
TX packets:0 errors:0 dropped:1 overruns:0 carrier:0
|
||||
collisions:0 txqueuelen:0
|
||||
RX bytes:90 (90.0 B) TX bytes:0 (0.0 B)
|
||||
|
||||
lo Link encap:Local Loopback
|
||||
inet addr:127.0.0.1 Mask:255.0.0.0
|
||||
inet6 addr: ::1/128 Scope:Host
|
||||
UP LOOPBACK RUNNING MTU:65536 Metric:1
|
||||
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
|
||||
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
|
||||
collisions:0 txqueuelen:0
|
||||
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
|
||||
```
|
||||
|
||||
The environment variable `CNI_PATH` tells the scripts and library where to look for plugin executables.
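A program embedding the `libcni` library uses the same idea: a list of plugin directories plus a network configuration and per-container runtime parameters. The sketch below is illustrative; the container ID, the namespace path, and the exact `libcni` signatures (the pre-`context` form of `AddNetwork`) are assumptions, not something the scripts themselves do.

```go
package main

import (
	"fmt"
	"log"
	"os"
	"strings"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	// CNI_PATH is a colon-separated list of directories to search for plugins.
	paths := strings.Split(os.Getenv("CNI_PATH"), ":")
	cninet := &libcni.CNIConfig{Path: paths}

	netconf, err := libcni.ConfFromFile("/etc/cni/net.d/10-mynet.conf")
	if err != nil {
		log.Fatal(err)
	}

	// Hypothetical container: the ID and netns path would come from the runtime.
	rt := &libcni.RuntimeConf{
		ContainerID: "example",
		NetNS:       "/var/run/netns/example",
		IfName:      "eth0",
	}

	result, err := cninet.AddNetwork(netconf, rt)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result)
}
```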
|
||||
|
||||
## Running a Docker container with network namespace set up by CNI plugins
|
||||
|
||||
Use the instructions in the previous section to define a netconf and build the plugins.
|
||||
Next, the `docker-run.sh` script wraps `docker run` to execute the plugins prior to entering the container:
|
||||
|
||||
```bash
|
||||
$ CNI_PATH=$GOPATH/src/github.com/containernetworking/plugins/bin
|
||||
$ cd $GOPATH/src/github.com/containernetworking/cni/scripts
|
||||
$ sudo CNI_PATH=$CNI_PATH ./docker-run.sh --rm busybox:latest ifconfig
|
||||
eth0 Link encap:Ethernet HWaddr fa:60:70:aa:07:d1
|
||||
inet addr:10.22.0.2 Bcast:0.0.0.0 Mask:255.255.0.0
|
||||
inet6 addr: fe80::f860:70ff:feaa:7d1/64 Scope:Link
|
||||
UP BROADCAST MULTICAST MTU:1500 Metric:1
|
||||
RX packets:1 errors:0 dropped:0 overruns:0 frame:0
|
||||
TX packets:0 errors:0 dropped:1 overruns:0 carrier:0
|
||||
collisions:0 txqueuelen:0
|
||||
RX bytes:90 (90.0 B) TX bytes:0 (0.0 B)
|
||||
|
||||
lo Link encap:Local Loopback
|
||||
inet addr:127.0.0.1 Mask:255.0.0.0
|
||||
inet6 addr: ::1/128 Scope:Host
|
||||
UP LOOPBACK RUNNING MTU:65536 Metric:1
|
||||
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
|
||||
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
|
||||
collisions:0 txqueuelen:0
|
||||
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
|
||||
```
|
||||
|
||||
## What might CNI do in the future?
|
||||
|
||||
CNI currently covers a wide range of needs for network configuration due to its simple model and API.
|
||||
However, in the future CNI might want to branch out into other directions:
|
||||
|
||||
- Dynamic updates to existing network configuration
|
||||
- Dynamic policies for network bandwidth and firewall rules
|
||||
|
||||
If these topics are of interest, please contact the team via the mailing list or IRC and find some like-minded people in the community to put a proposal together.
|
||||
|
||||
## Contact
|
||||
|
||||
For any questions about CNI, please reach out on the mailing list:
|
||||
- Email: [cni-dev](https://groups.google.com/forum/#!forum/cni-dev)
|
||||
- IRC: #[containernetworking](irc://irc.freenode.org:6667/#containernetworking) channel on freenode.org
|
||||
- Slack: [containernetworking.slack.com](https://cryptic-tundra-43194.herokuapp.com)
|
34
vendor/github.com/containernetworking/cni/RELEASING.md
generated
vendored
34
vendor/github.com/containernetworking/cni/RELEASING.md
generated
vendored
@ -1,34 +0,0 @@
|
||||
# Release process
|
||||
|
||||
## Resulting artifacts
|
||||
Creating a new release produces the following artifacts:
|
||||
|
||||
- Binaries (stored in the `release-<TAG>` directory):
|
||||
- `cni-<PLATFORM>-<VERSION>.tgz` binaries
|
||||
- `cni-<VERSION>.tgz` binary (copy of amd64 platform binary)
|
||||
- `sha1`, `sha256` and `sha512` files for the above files.
|
||||
|
||||
## Preparing for a release
|
||||
1. Releases are performed by maintainers and should usually be discussed and planned at a maintainer meeting.
|
||||
- Choose the version number. It should be prefixed with `v`, e.g. `v1.2.3`
|
||||
- Take a quick scan through the PRs and issues to make sure there isn't anything crucial that _must_ be in the next release.
|
||||
- Create a draft of the release note
|
||||
- Discuss the level of testing that's needed and create a test plan if sensible
|
||||
- Check what version of `go` is used in the build container, updating it if there's a new stable release.
|
||||
|
||||
## Creating the release artifacts
|
||||
1. Make sure you are on the master branch and don't have any local uncommitted changes.
|
||||
1. Create a signed tag for the release `git tag -s $VERSION` (Ensure that GPG keys are created and added to GitHub)
|
||||
1. Run the release script from the root of the repository
|
||||
- `scripts/release.sh`
|
||||
- The script requires Docker and ensures that a consistent environment is used.
|
||||
- The artifacts will now be present in the `release-<TAG>` directory.
|
||||
1. Test these binaries according to the test plan.
|
||||
|
||||
## Publishing the release
|
||||
1. Push the tag to git `git push origin <TAG>`
|
||||
1. Create a release on Github, using the tag which was just pushed.
|
||||
1. Attach all the artifacts from the release directory.
|
||||
1. Add the release note to the release.
|
||||
1. Announce the release on at least the CNI mailing list, IRC, and Slack.
|
||||
|
33
vendor/github.com/containernetworking/cni/ROADMAP.md
generated
vendored
33
vendor/github.com/containernetworking/cni/ROADMAP.md
generated
vendored
@ -1,33 +0,0 @@
|
||||
# CNI Roadmap
|
||||
|
||||
This document defines a high level roadmap for CNI development.
|
||||
The list below is not complete, and we advise to get the current project state from the [milestones defined in GitHub](https://github.com/containernetworking/cni/milestones).
|
||||
|
||||
## CNI Milestones
|
||||
|
||||
### [v0.2.0](https://github.com/containernetworking/cni/milestones/v0.2.0)
|
||||
|
||||
* Signed release binaries
|
||||
* Introduction of a testing strategy/framework
|
||||
|
||||
### [v0.3.0](https://github.com/containernetworking/cni/milestones/v0.3.0)
|
||||
|
||||
* Further increase test coverage
|
||||
* Simpler default route handling in bridge plugin
|
||||
* Clarify project description, documentation and contribution guidelines
|
||||
|
||||
### [v0.4.0](https://github.com/containernetworking/cni/milestones/v0.4.0)
|
||||
|
||||
* Further increase test coverage
|
||||
* Simpler bridging of host interface
|
||||
* Improve IPAM allocator predictability
|
||||
* Allow in- and output of arbitrary K/V pairs for plugins
|
||||
|
||||
### [v1.0.0](https://github.com/containernetworking/cni/milestones/v1.0.0)
|
||||
|
||||
- Plugin composition functionality
|
||||
- IPv6 support
|
||||
- Stable SPEC
|
||||
- Strategy and tooling for backwards compatibility
|
||||
- Complete test coverage
|
||||
- Integrate build artefact generation with CI
|
535
vendor/github.com/containernetworking/cni/SPEC.md
generated
vendored
535
vendor/github.com/containernetworking/cni/SPEC.md
generated
vendored
@ -1,535 +0,0 @@
|
||||
# Container Networking Interface Specification
|
||||
|
||||
## Version
|
||||
This is CNI **spec** version **0.3.1**.
|
||||
|
||||
Note that this is **independent from the version of the CNI library and plugins** in this repository (e.g. the versions of [releases](https://github.com/containernetworking/cni/releases)).
|
||||
|
||||
## Overview
|
||||
|
||||
This document proposes a generic plugin-based networking solution for application containers on Linux, the _Container Networking Interface_, or _CNI_.
|
||||
It is derived from the [rkt Networking Proposal][rkt-networking-proposal], which aimed to satisfy many of the [design considerations][rkt-networking-design] for networking in [rkt][rkt-github].
|
||||
|
||||
For the purposes of this proposal, we define two terms very specifically:
|
||||
- _container_ can be considered synonymous with a [Linux _network namespace_][namespaces]. What unit this corresponds to depends on a particular container runtime implementation: for example, in implementations of the [App Container Spec][appc-github] like rkt, each _pod_ runs in a unique network namespace. In [Docker][docker], on the other hand, network namespaces generally exist for each separate Docker container.
|
||||
- _network_ refers to a group of entities that are uniquely addressable that can communicate amongst each other. This could be either an individual container (as specified above), a machine, or some other network device (e.g. a router). Containers can be conceptually _added to_ or _removed from_ one or more networks.
|
||||
|
||||
[rkt-networking-proposal]: https://docs.google.com/a/coreos.com/document/d/1PUeV68q9muEmkHmRuW10HQ6cHgd4819_67pIxDRVNlM/edit#heading=h.ievko3xsjwxd
|
||||
[rkt-networking-design]:
|
||||
https://docs.google.com/a/coreos.com/document/d/1CTAL4gwqRofjxyp4tTkbgHtAwb2YCcP14UEbHNizd8g
|
||||
[rkt-github]: https://github.com/coreos/rkt
|
||||
[namespaces]: http://man7.org/linux/man-pages/man7/namespaces.7.html
|
||||
[appc-github]: https://github.com/appc/spec
|
||||
[docker]: https://docker.com
|
||||
|
||||
This document aims to specify the interface between "runtimes" and "plugins". Whilst there are certain well-known fields, runtimes may wish to pass additional information to plugins. These extensions are not part of this specification but are documented as [conventions](CONVENTIONS.md).
|
||||
|
||||
## General considerations
|
||||
|
||||
The intention is for the container runtime to first create a new network namespace for the container.
|
||||
It then determines which networks this container should belong to and for each network, which plugin must be executed.
|
||||
The network configuration is in JSON format and can easily be stored in a file.
|
||||
The network configuration includes mandatory fields such as "name" and "type" as well as plugin (type) specific ones.
|
||||
The network configuration allows for fields to change values between invocations. For this purpose there is an optional field "args" which should contain the varying information.
|
||||
The container runtime sequentially sets up the networks by executing the corresponding plugin for each network.
|
||||
Upon completion of the container lifecycle, the runtime executes the plugins in reverse order (relative to the order in which they were added) to disconnect them from the networks.
|
||||
|
||||
## CNI Plugin
|
||||
|
||||
### Overview
|
||||
|
||||
Each CNI plugin is implemented as an executable that is invoked by the container management system (e.g. rkt or Docker).
|
||||
|
||||
A CNI plugin is responsible for inserting a network interface into the container network namespace (e.g. one end of a veth pair) and making any necessary changes on the host (e.g. attaching other end of veth into a bridge).
|
||||
It should then assign the IP to the interface and set up routes consistent with the IP Address Management section by invoking the appropriate IPAM plugin.
|
||||
|
||||
### Parameters
|
||||
|
||||
The operations that the CNI plugin needs to support are:
|
||||
|
||||
|
||||
- Add container to network
|
||||
- Parameters:
|
||||
- **Version**. The version of CNI spec that the caller is using (container management system or the invoking plugin).
|
||||
- **Container ID**. This is optional but recommended, and should be unique across an administrative domain while the container is live (it may be reused in the future). For example, an environment with an IPAM system may require that each container is allocated a unique ID and that each IP allocation can thus be correlated back to a particular container. As another example, in appc implementations this would be the _pod ID_.
|
||||
- **Network namespace path**. This represents the path to the network namespace to be added, i.e. /proc/[pid]/ns/net or a bind-mount/link to it.
|
||||
- **Network configuration**. This is a JSON document describing a network to which a container can be joined. The schema is described below.
|
||||
- **Extra arguments**. This provides an alternative mechanism to allow simple configuration of CNI plugins on a per-container basis.
|
||||
- **Name of the interface inside the container**. This is the name that should be assigned to the interface created inside the container (network namespace); consequently it must comply with the standard Linux restrictions on interface names.
|
||||
- Result:
|
||||
- **Interfaces list**. Depending on the plugin, this can include the sandbox (eg, container or hypervisor) interface name and/or the host interface name, the hardware addresses of each interface, and details about the sandbox (if any) the interface is in.
|
||||
- **IP configuration assigned to each interface**. The IPv4 and/or IPv6 addresses, gateways, and routes assigned to sandbox and/or host interfaces.
|
||||
- **DNS information**. Dictionary that includes DNS information for nameservers, domain, search domains and options.
|
||||
|
||||
- Delete container from network
|
||||
- Parameters:
|
||||
- **Version**. The version of CNI spec that the caller is using (container management system or the invoking plugin).
|
||||
- **Container ID**, as defined above.
|
||||
- **Network namespace path**, as defined above.
|
||||
- **Network configuration**, as defined above.
|
||||
- **Extra arguments**, as defined above.
|
||||
- **Name of the interface inside the container**, as defined above.
|
||||
|
||||
- Report version
|
||||
- Parameters: NONE.
|
||||
- Result: information about the CNI spec versions supported by the plugin
|
||||
|
||||
```
|
||||
{
|
||||
"cniVersion": "0.3.1", // the version of the CNI spec in use for this output
|
||||
"supportedVersions": [ "0.1.0", "0.2.0", "0.3.0", "0.3.1" ] // the list of CNI spec versions that this plugin supports
|
||||
}
|
||||
```
|
||||
|
||||
The executable command-line API uses the type of network (see [Network Configuration](#network-configuration) below) as the name of the executable to invoke.
|
||||
It will then look for this executable in a list of predefined directories. Once found, it will invoke the executable using the following environment variables for argument passing:
|
||||
- `CNI_COMMAND`: indicates the desired operation; `ADD`, `DEL` or `VERSION`.
|
||||
- `CNI_CONTAINERID`: Container ID
|
||||
- `CNI_NETNS`: Path to network namespace file
|
||||
- `CNI_IFNAME`: Interface name to set up; plugin must honor this interface name or return an error
|
||||
- `CNI_ARGS`: Extra arguments passed in by the user at invocation time. Alphanumeric key-value pairs separated by semicolons; for example, "FOO=BAR;ABC=123"
|
||||
- `CNI_PATH`: List of paths to search for CNI plugin executables. Paths are separated by an OS-specific list separator; for example ':' on Linux and ';' on Windows
|
||||
|
||||
Network configuration in JSON format is streamed to the plugin through stdin. This means it is not tied to a particular file on disk and can contain information which changes between invocations.
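Put together, a plugin that speaks this protocol directly (without any helper library) would read its arguments from the environment and its configuration from stdin, roughly as in this illustrative sketch:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"os"
)

func main() {
	// Arguments arrive as environment variables...
	cmd := os.Getenv("CNI_COMMAND") // ADD, DEL or VERSION
	ifName := os.Getenv("CNI_IFNAME")
	netns := os.Getenv("CNI_NETNS")

	// ...and the network configuration JSON is streamed on stdin.
	stdin, err := ioutil.ReadAll(os.Stdin)
	if err != nil {
		os.Exit(1)
	}
	var conf map[string]interface{}
	if err := json.Unmarshal(stdin, &conf); err != nil {
		os.Exit(1)
	}

	// Logs go to stderr; results (for ADD and VERSION) go to stdout.
	fmt.Fprintf(os.Stderr, "command=%s ifname=%s netns=%s type=%v\n",
		cmd, ifName, netns, conf["type"])
}
```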
|
||||
|
||||
|
||||
### Result
|
||||
|
||||
Note that IPAM plugins return an abbreviated `Result` structure as described in [IP Allocation](#ip-allocation).
|
||||
|
||||
Success is indicated by a return code of zero and the following JSON printed to stdout in the case of the ADD command. The `ips` and `dns` items should be the same output as was returned by the IPAM plugin (see [IP Allocation](#ip-allocation) for details) except that the plugin should fill in the `interface` indexes appropriately, which are missing from IPAM plugin output since IPAM plugins should be unaware of interfaces.
|
||||
|
||||
```
|
||||
{
|
||||
"cniVersion": "0.3.1",
|
||||
"interfaces": [ (this key omitted by IPAM plugins)
|
||||
{
|
||||
"name": "<name>",
|
||||
"mac": "<MAC address>", (required if L2 addresses are meaningful)
|
||||
"sandbox": "<netns path or hypervisor identifier>" (required for container/hypervisor interfaces, empty/omitted for host interfaces)
|
||||
}
|
||||
],
|
||||
"ips": [
|
||||
{
|
||||
"version": "<4-or-6>",
|
||||
"address": "<ip-and-prefix-in-CIDR>",
|
||||
"gateway": "<ip-address-of-the-gateway>", (optional)
|
||||
"interface": <numeric index into 'interfaces' list>
|
||||
},
|
||||
...
|
||||
],
|
||||
"routes": [ (optional)
|
||||
{
|
||||
"dst": "<ip-and-prefix-in-cidr>",
|
||||
"gw": "<ip-of-next-hop>" (optional)
|
||||
},
|
||||
...
|
||||
]
|
||||
"dns": {
|
||||
"nameservers": <list-of-nameservers> (optional)
|
||||
"domain": <name-of-local-domain> (optional)
|
||||
"search": <list-of-additional-search-domains> (optional)
|
||||
"options": <list-of-options> (optional)
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
`cniVersion` specifies a [Semantic Version 2.0](http://semver.org) of CNI specification used by the plugin.
|
||||
`interfaces` describes specific network interfaces the plugin created.
|
||||
If the `CNI_IFNAME` variable exists the plugin must use that name for the sandbox/hypervisor interface or return an error if it cannot.
|
||||
- `mac` (string): the hardware address of the interface.
|
||||
If L2 addresses are not meaningful for the plugin then this field is optional.
|
||||
- `sandbox` (string): container/namespace-based environments should return the full filesystem path to the network namespace of that sandbox.
|
||||
Hypervisor/VM-based plugins should return an ID unique to the virtualized sandbox the interface was created in.
|
||||
This item must be provided for interfaces created or moved into a sandbox like a network namespace or a hypervisor/VM.
|
||||
|
||||
The `ips` field is a list of IP configuration information.
|
||||
See the [IP well-known structure](#ips) section for more information.
|
||||
|
||||
The `dns` field contains a dictionary consisting of common DNS information.
|
||||
See the [DNS well-known structure](#dns) section for more information.
|
||||
|
||||
The specification does not declare how this information must be processed by CNI consumers.
|
||||
Examples include generating an `/etc/resolv.conf` file to be injected into the container filesystem or running a DNS forwarder on the host.
|
||||
|
||||
Errors are indicated by a non-zero return code and the following JSON being printed to stdout:
|
||||
```
|
||||
{
|
||||
"cniVersion": "0.3.1",
|
||||
"code": <numeric-error-code>,
|
||||
"msg": <short-error-message>,
|
||||
"details": <long-error-message> (optional)
|
||||
}
|
||||
```
|
||||
|
||||
`cniVersion` specifies a [Semantic Version 2.0](http://semver.org) of CNI specification used by the plugin.
|
||||
Error codes 0-99 are reserved for well-known errors (see [Well-known Error Codes](#well-known-error-codes) section).
|
||||
Values of 100+ can be freely used for plugin specific errors.
|
||||
|
||||
In addition, stderr can be used for unstructured output such as logs.
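As an illustration only (the struct below is local to the example, not a type from this repository), a plugin could emit such an error and exit non-zero like this:

```go
package main

import (
	"encoding/json"
	"os"
)

// cniError mirrors the error document above; field names follow the spec.
type cniError struct {
	CNIVersion string `json:"cniVersion"`
	Code       int    `json:"code"`
	Msg        string `json:"msg"`
	Details    string `json:"details,omitempty"`
}

// exitWithError prints the structured error to stdout (logs would go to
// stderr) and signals failure through the process exit code.
func exitWithError(code int, msg, details string) {
	json.NewEncoder(os.Stdout).Encode(cniError{
		CNIVersion: "0.3.1",
		Code:       code,
		Msg:        msg,
		Details:    details,
	})
	os.Exit(1)
}

func main() {
	// Codes of 100 and above are free for plugin-specific errors.
	exitWithError(100, "failed to configure interface", "bridge cni0 not found")
}
```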
|
||||
|
||||
### Network Configuration
|
||||
|
||||
The network configuration is described in JSON form. The configuration can be stored on disk or generated from other sources by the container runtime. The following fields are well-known and have the following meaning:
|
||||
- `cniVersion` (string): [Semantic Version 2.0](http://semver.org) of CNI specification to which this configuration conforms.
|
||||
- `name` (string): Network name. This should be unique across all containers on the host (or other administrative domain).
|
||||
- `type` (string): Refers to the filename of the CNI plugin executable.
|
||||
- `args` (dictionary): Optional additional arguments provided by the container runtime. For example a dictionary of labels could be passed to CNI plugins by adding them to a labels field under `args`.
|
||||
- `ipMasq` (boolean): Optional (if supported by the plugin). Set up an IP masquerade on the host for this network. This is necessary if the host will act as a gateway to subnets that are not able to route to the IP assigned to the container.
|
||||
- `ipam`: Dictionary with IPAM specific values:
|
||||
- `type` (string): Refers to the filename of the IPAM plugin executable.
|
||||
- `dns`: Dictionary with DNS specific values:
|
||||
- `nameservers` (list of strings): a priority-ordered list of DNS nameservers that this network is aware of. Each entry in the list is a string containing either an IPv4 or an IPv6 address.
|
||||
- `domain` (string): the local domain used for short hostname lookups.
|
||||
- `search` (list of strings): a list of priority-ordered search domains for short hostname lookups. Will be preferred over `domain` by most resolvers.
|
||||
- `options` (list of strings): list of options that can be passed to the resolver
|
||||
|
||||
Plugins may define additional fields that they accept and may generate an error if called with unknown fields. The exception to this is the `args` field may be used to pass arbitrary data which may be ignored by plugins.
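A plugin typically parses the well-known fields and its own fields from the same JSON document in one pass. The struct below is purely illustrative (the fields beyond the well-known ones are hypothetical), sketching how plugin-specific keys coexist with an opaque `args` field:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"os"
)

type ipamConf struct {
	Type string `json:"type"`
}

// exampleConf combines well-known keys with plugin-specific ones
// ("bridge", "isGateway"); "args" is kept opaque and may be ignored.
type exampleConf struct {
	CNIVersion string          `json:"cniVersion"`
	Name       string          `json:"name"`
	Type       string          `json:"type"`
	Bridge     string          `json:"bridge"`
	IsGateway  bool            `json:"isGateway"`
	IPAM       ipamConf        `json:"ipam"`
	Args       json.RawMessage `json:"args,omitempty"`
}

func main() {
	data, err := ioutil.ReadAll(os.Stdin)
	if err != nil {
		os.Exit(1)
	}
	var conf exampleConf
	if err := json.Unmarshal(data, &conf); err != nil {
		os.Exit(1)
	}
	fmt.Fprintf(os.Stderr, "network %q: plugin %q, bridge %q, ipam %q\n",
		conf.Name, conf.Type, conf.Bridge, conf.IPAM.Type)
}
```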
|
||||
|
||||
### Example configurations
|
||||
|
||||
```json
|
||||
{
|
||||
"cniVersion": "0.3.1",
|
||||
"name": "dbnet",
|
||||
"type": "bridge",
|
||||
// type (plugin) specific
|
||||
"bridge": "cni0",
|
||||
"ipam": {
|
||||
"type": "host-local",
|
||||
// ipam specific
|
||||
"subnet": "10.1.0.0/16",
|
||||
"gateway": "10.1.0.1"
|
||||
},
|
||||
"dns": {
|
||||
"nameservers": [ "10.1.0.1" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
{
|
||||
"cniVersion": "0.3.1",
|
||||
"name": "pci",
|
||||
"type": "ovs",
|
||||
// type (plugin) specific
|
||||
"bridge": "ovs0",
|
||||
"vxlanID": 42,
|
||||
"ipam": {
|
||||
"type": "dhcp",
|
||||
"routes": [ { "dst": "10.3.0.0/16" }, { "dst": "10.4.0.0/16" } ]
|
||||
}
|
||||
// args may be ignored by plugins
|
||||
"args": {
|
||||
"labels" : {
|
||||
"appVersion" : "1.0"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
```json
|
||||
{
|
||||
"cniVersion": "0.3.1",
|
||||
"name": "wan",
|
||||
"type": "macvlan",
|
||||
// ipam specific
|
||||
"ipam": {
|
||||
"type": "dhcp",
|
||||
"routes": [ { "dst": "10.0.0.0/8", "gw": "10.0.0.1" } ]
|
||||
},
|
||||
"dns": {
|
||||
"nameservers": [ "10.0.0.1" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Network Configuration Lists
|
||||
|
||||
Network configuration lists provide a mechanism to run multiple CNI plugins for a single container in a defined order, passing the result of each plugin to the next plugin.
|
||||
The list is composed of well-known fields and list of one or more standard CNI network configurations (see above).
|
||||
|
||||
The list is described in JSON form, and can be stored on disk or generated from other sources by the container runtime. The following fields are well-known and have the following meaning:
|
||||
- `cniVersion` (string): [Semantic Version 2.0](http://semver.org) of CNI specification to which this configuration list and all the individual configurations conform.
|
||||
- `name` (string): Network name. This should be unique across all containers on the host (or other administrative domain).
|
||||
- `plugins` (list): A list of standard CNI network configuration dictionaries (see above).
|
||||
|
||||
When executing a plugin list, the runtime MUST replace the `name` and `cniVersion` fields in each individual network configuration in the list with the `name` and `cniVersion` field of the list itself. This ensures that the name and CNI version is the same for all plugin executions in the list, preventing versioning conflicts between plugins.
|
||||
The runtime may also pass capability-based keys as a map in the top-level `runtimeConfig` key of the plugin's config JSON if a plugin advertises it supports a specific capability via the `capabilities` key of its network configuration. The key passed in `runtimeConfig` MUST match the name of the specific capability from the `capabilities` key of the plugin's network configuration. See CONVENTIONS.md for more information on capabilities and how they are sent to plugins via the `runtimeConfig` key.
|
||||
|
||||
For the ADD action, the runtime MUST also add a `prevResult` field to the configuration JSON of any plugin after the first one, which MUST be the Result of the previous plugin (if any) in JSON format ([see below](#network-configuration-list-runtime-examples)).
|
||||
For the ADD action, plugins SHOULD echo the contents of the `prevResult` field to their stdout to allow subsequent plugins (and the runtime) to receive the result, unless they wish to modify or suppress a previous result.
|
||||
Plugins are allowed to modify or suppress all or part of a `prevResult`.
|
||||
However, plugins that support a version of the CNI specification that includes the `prevResult` field MUST handle `prevResult` by either passing it through, modifying it, or suppressing it explicitly.
|
||||
It is a violation of this specification to be unaware of the `prevResult` field.
|
||||
|
||||
The runtime MUST also execute each plugin in the list with the same environment.
|
||||
|
||||
For the DEL action, the runtime MUST execute the plugins in reverse-order.
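A conceptual, function-level sketch of the ADD path shows how the `name`, `cniVersion` and `prevResult` rules compose. It uses plain maps, and the caller-supplied `execPlugin` stands in for the real invocation described above; none of this is library code.

```go
// addNetworkList applies the list rules for ADD: inject the list's name and
// cniVersion into every entry, thread the previous result through as
// prevResult, and stop at the first error. DEL would walk the same list in
// reverse order and without prevResult.
func addNetworkList(
	listName, cniVersion string,
	plugins []map[string]interface{},
	execPlugin func(conf map[string]interface{}) (map[string]interface{}, error),
) (map[string]interface{}, error) {
	var prevResult map[string]interface{}
	for _, p := range plugins {
		conf := map[string]interface{}{}
		for k, v := range p {
			conf[k] = v
		}
		conf["name"] = listName         // MUST match the list
		conf["cniVersion"] = cniVersion // MUST match the list
		if prevResult != nil {
			conf["prevResult"] = prevResult
		}
		res, err := execPlugin(conf)
		if err != nil {
			return nil, err // the runtime MUST stop executing the list
		}
		prevResult = res
	}
	return prevResult, nil
}
```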
|
||||
|
||||
#### Network Configuration List Error Handling
|
||||
|
||||
When an error occurs while executing an action on a plugin list (eg, either ADD or DEL) the runtime MUST stop execution of the list.
|
||||
|
||||
If an ADD action fails, when the runtime decides to handle the failure it should execute the DEL action (in reverse order from the ADD as specified above) for all plugins in the list, even if some were not called during the ADD action.
|
||||
|
||||
Plugins should generally complete a DEL action without error even if some resources are missing. For example, an IPAM plugin should generally release an IP allocation and return success even if the container network namespace no longer exists, unless that network namespace is critical for IPAM management. While DHCP may usually send a 'release' message on the container network interface, since DHCP leases have a lifetime this release action would not be considered critical and no error should be returned. For another example, the `bridge` plugin should delegate the DEL action to the IPAM plugin and clean up its own resources (if present) even if the container network namespace and/or container network interface no longer exist.
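In plugin code this usually looks like a best-effort teardown, roughly as below. The `releaseIPAM`, `removeHostInterface` and `errNotFound` parameters are hypothetical placeholders for whatever helpers and sentinel errors a particular plugin uses; this is a sketch of the rule, not library code.

```go
// delNetwork sketches best-effort teardown for DEL: release the IPAM
// allocation first, then remove host-side state, treating "already gone"
// conditions as success rather than errors.
func delNetwork(
	netnsPath, ifName string,
	releaseIPAM func() error,
	removeHostInterface func(ifName string) error,
	errNotFound error,
) error {
	// IPAM state is authoritative; report real failures.
	if err := releaseIPAM(); err != nil {
		return err
	}
	// Namespace already gone: nothing left to tear down.
	if netnsPath == "" {
		return nil
	}
	// Ignore "does not exist"; report anything else.
	if err := removeHostInterface(ifName); err != nil && err != errNotFound {
		return err
	}
	return nil
}
```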
|
||||
|
||||
#### Example network configuration lists
|
||||
|
||||
```json
|
||||
{
|
||||
"cniVersion": "0.3.1",
|
||||
"name": "dbnet",
|
||||
"plugins": [
|
||||
{
|
||||
"type": "bridge",
|
||||
// type (plugin) specific
|
||||
"bridge": "cni0",
|
||||
// args may be ignored by plugins
|
||||
"args": {
|
||||
"labels" : {
|
||||
"appVersion" : "1.0"
|
||||
}
|
||||
},
|
||||
"ipam": {
|
||||
"type": "host-local",
|
||||
// ipam specific
|
||||
"subnet": "10.1.0.0/16",
|
||||
"gateway": "10.1.0.1"
|
||||
},
|
||||
"dns": {
|
||||
"nameservers": [ "10.1.0.1" ]
|
||||
}
|
||||
},
|
||||
{
|
||||
"type": "tuning",
|
||||
"sysctl": {
|
||||
"net.core.somaxconn": "500"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
#### Network configuration list runtime examples
|
||||
|
||||
Given the network configuration list JSON [shown above](#example-network-configuration-lists) the container runtime would perform the following steps for the ADD action.
|
||||
Note that the runtime adds the `cniVersion` and `name` fields from configuration list to the configuration JSON passed to each plugin, to ensure consistent versioning and names for all plugins in the list.
|
||||
|
||||
1) first call the `bridge` plugin with the following JSON:
|
||||
|
||||
```json
|
||||
{
|
||||
"cniVersion": "0.3.1",
|
||||
"name": "dbnet",
|
||||
"type": "bridge",
|
||||
"bridge": "cni0",
|
||||
"args": {
|
||||
"labels" : {
|
||||
"appVersion" : "1.0"
|
||||
}
|
||||
},
|
||||
"ipam": {
|
||||
"type": "host-local",
|
||||
// ipam specific
|
||||
"subnet": "10.1.0.0/16",
|
||||
"gateway": "10.1.0.1"
|
||||
},
|
||||
"dns": {
|
||||
"nameservers": [ "10.1.0.1" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
2) next call the `tuning` plugin with the following JSON, including the `prevResult` field containing the JSON response from the `bridge` plugin:
|
||||
|
||||
```json
|
||||
{
|
||||
"cniVersion": "0.3.1",
|
||||
"name": "dbnet",
|
||||
"type": "tuning",
|
||||
"sysctl": {
|
||||
"net.core.somaxconn": "500"
|
||||
},
|
||||
"prevResult": {
|
||||
"ips": [
|
||||
{
|
||||
"version": "4",
|
||||
"address": "10.0.0.5/32",
|
||||
"interface": 0
|
||||
}
|
||||
],
|
||||
"dns": {
|
||||
"nameservers": [ "10.1.0.1" ]
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Given the same network configuration JSON list, the container runtime would perform the following steps for the DEL action.
|
||||
Note that no `prevResult` field is required as the DEL action does not return any result.
|
||||
Also note that plugins are executed in reverse order from the ADD action.
|
||||
|
||||
1) first call the `tuning` plugin with the following JSON:
|
||||
|
||||
```json
|
||||
{
|
||||
"cniVersion": "0.3.1",
|
||||
"name": "dbnet",
|
||||
"type": "tuning",
|
||||
"sysctl": {
|
||||
"net.core.somaxconn": "500"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
2) next call the `bridge` plugin with the following JSON:
|
||||
|
||||
```json
|
||||
{
|
||||
"cniVersion": "0.3.1",
|
||||
"name": "dbnet",
|
||||
"type": "bridge",
|
||||
"bridge": "cni0",
|
||||
"args": {
|
||||
"labels" : {
|
||||
"appVersion" : "1.0"
|
||||
}
|
||||
},
|
||||
"ipam": {
|
||||
"type": "host-local",
|
||||
// ipam specific
|
||||
"subnet": "10.1.0.0/16",
|
||||
"gateway": "10.1.0.1"
|
||||
},
|
||||
"dns": {
|
||||
"nameservers": [ "10.1.0.1" ]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### IP Allocation
|
||||
|
||||
As part of its operation, a CNI plugin is expected to assign (and maintain) an IP address to the interface and install any necessary routes relevant for that interface. This gives the CNI plugin great flexibility but also places a large burden on it. Many CNI plugins would need to have the same code to support several IP management schemes that users may desire (e.g. dhcp, host-local).
|
||||
|
||||
To lessen the burden and make IP management strategy be orthogonal to the type of CNI plugin, we define a second type of plugin -- IP Address Management Plugin (IPAM plugin). It is however the responsibility of the CNI plugin to invoke the IPAM plugin at the proper moment in its execution. The IPAM plugin is expected to determine the interface IP/subnet, Gateway and Routes and return this information to the "main" plugin to apply. The IPAM plugin may obtain the information via a protocol (e.g. dhcp), data stored on a local filesystem, the "ipam" section of the Network Configuration file or a combination of the above.
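A main plugin commonly delegates by exec'ing the IPAM plugin named in the `ipam.type` field and applying whatever comes back. The sketch below assumes the `invoke.DelegateAdd(pluginName, netconf)` helper of roughly this vintage; treat the exact signature, and the hard-coded configuration, as assumptions for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"

	"github.com/containernetworking/cni/pkg/invoke"
)

func main() {
	// stdinData stands in for the raw network configuration the main plugin
	// received; a real plugin passes through what it read on stdin.
	stdinData, _ := json.Marshal(map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "mynet",
		"type":       "bridge",
		"ipam":       map[string]string{"type": "host-local", "subnet": "10.22.0.0/16"},
	})

	// Delegate to the IPAM plugin (here hard-coded to host-local for brevity;
	// a real plugin reads ipam.type from its parsed config). The delegate is
	// found via CNI_PATH and receives the same environment plus this config
	// on stdin.
	result, err := invoke.DelegateAdd("host-local", stdinData)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// The main plugin then applies the returned addresses and routes to the
	// interface it created, and echoes the combined result on stdout.
	fmt.Println(result)
}
```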
|
||||
|
||||
#### IP Address Management (IPAM) Interface
|
||||
|
||||
Like CNI plugins, the IPAM plugins are invoked by running an executable. The executable is searched for in a predefined list of paths, indicated to the CNI plugin via `CNI_PATH`. The IPAM Plugin receives all the same environment variables that were passed in to the CNI plugin. Just like the CNI plugin, IPAM receives the network configuration via stdin.
|
||||
|
||||
Success is indicated by a zero return code and the following JSON being printed to stdout (in the case of the ADD command):
|
||||
|
||||
```
|
||||
{
|
||||
"cniVersion": "0.3.1",
|
||||
"ips": [
|
||||
{
|
||||
"version": "<4-or-6>",
|
||||
"address": "<ip-and-prefix-in-CIDR>",
|
||||
"gateway": "<ip-address-of-the-gateway>" (optional)
|
||||
},
|
||||
...
|
||||
],
|
||||
"routes": [ (optional)
|
||||
{
|
||||
"dst": "<ip-and-prefix-in-cidr>",
|
||||
"gw": "<ip-of-next-hop>" (optional)
|
||||
},
|
||||
...
|
||||
]
|
||||
"dns": {
|
||||
"nameservers": <list-of-nameservers> (optional)
|
||||
"domain": <name-of-local-domain> (optional)
|
||||
"search": <list-of-search-domains> (optional)
|
||||
"options": <list-of-options> (optional)
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Note that unlike regular CNI plugins, IPAM plugins return an abbreviated `Result` structure that does not include the `interfaces` key, since IPAM plugins should be unaware of interfaces configured by their parent plugin except those specifically required for IPAM (eg, like the `dhcp` IPAM plugin).
|
||||
|
||||
`cniVersion` specifies a [Semantic Version 2.0](http://semver.org) of CNI specification used by the plugin.
|
||||
|
||||
The `ips` field is a list of IP configuration information.
|
||||
See the [IP well-known structure](#ips) section for more information.
|
||||
|
||||
The `dns` field contains a dictionary consisting of common DNS information.
|
||||
See the [DNS well-known structure](#dns) section for more information.
|
||||
|
||||
Errors and logs are communicated in the same way as the CNI plugin. See [CNI Plugin Result](#result) section for details.
|
||||
|
||||
IPAM plugin examples:
|
||||
- **host-local**: Select an IP within the specified range that is not in use by another container on the same host.
|
||||
- **dhcp**: Use DHCP protocol to acquire and maintain a lease. The DHCP requests will be sent via the created container interface; therefore, the associated network must support broadcast.
|
||||
|
||||
#### Notes
|
||||
- Routes are expected to be added with a 0 metric.
|
||||
- A default route may be specified via "0.0.0.0/0". Since another network might have already configured the default route, the CNI plugin should be prepared to skip over its default route definition.
|
||||
|
||||
### Well-known Structures
|
||||
|
||||
#### IPs
|
||||
|
||||
```
|
||||
"ips": [
|
||||
{
|
||||
"version": "<4-or-6>",
|
||||
"address": "<ip-and-prefix-in-CIDR>",
|
||||
"gateway": "<ip-address-of-the-gateway>", (optional)
|
||||
"interface": <numeric index into 'interfaces' list> (not required for IPAM plugins)
|
||||
},
|
||||
...
|
||||
]
|
||||
```
|
||||
|
||||
The `ips` field is a list of IP configuration information determined by the plugin. Each item is a dictionary describing the IP configuration for a network interface.
|
||||
IP configuration for multiple network interfaces and multiple IP configurations for a single interface may be returned as separate items in the `ips` list.
|
||||
All properties known to the plugin should be provided, even if not strictly required.
|
||||
- `version` (string): either "4" or "6" and corresponds to the IP version of the addresses in the entry.
|
||||
All IP addresses and gateways provided must be valid for the given `version`.
|
||||
- `address` (string): an IP address in CIDR notation (eg "192.168.1.3/24").
|
||||
- `gateway` (string): the default gateway for this subnet, if one exists.
|
||||
It does not instruct the CNI plugin to add any routes with this gateway: routes to add are specified separately via the `routes` field.
|
||||
An example use of this value is for the CNI `bridge` plugin to add this IP address to the Linux bridge to make it a gateway.
|
||||
- `interface` (uint): the index into the `interfaces` list for a [CNI Plugin Result](#result) indicating which interface this IP configuration should be applied to.
|
||||
IPAM plugins should not return this key since they have no information about network interfaces.
|
||||
|
||||
#### Routes
|
||||
|
||||
```
|
||||
"routes": [
|
||||
{
|
||||
"dst": "<ip-and-prefix-in-cidr>",
|
||||
"gw": "<ip-of-next-hop>" (optional)
|
||||
},
|
||||
...
|
||||
]
|
||||
```
|
||||
|
||||
- Each `routes` entry is a dictionary with the following fields. All IP addresses in the `routes` entry must be the same IP version, either 4 or 6.
|
||||
- `dst` (string): destination subnet specified in CIDR notation.
|
||||
- `gw` (string): IP of the gateway. If omitted, a default gateway is assumed (as determined by the CNI plugin).
|
||||
|
||||
#### DNS
|
||||
|
||||
```
|
||||
"dns": {
|
||||
"nameservers": <list-of-nameservers> (optional)
|
||||
"domain": <name-of-local-domain> (optional)
|
||||
"search": <list-of-additional-search-domains> (optional)
|
||||
"options": <list-of-options> (optional)
|
||||
}
|
||||
```
|
||||
|
||||
The `dns` field contains a dictionary consisting of common DNS information.
|
||||
- `nameservers` (list of strings): a priority-ordered list of DNS nameservers that this network is aware of. Each entry in the list is a string containing either an IPv4 or an IPv6 address.
|
||||
- `domain` (string): the local domain used for short hostname lookups.
|
||||
- `search` (list of strings): a list of priority-ordered search domains for short hostname lookups. Will be preferred over `domain` by most resolvers.
|
||||
- `options` (list of strings): list of options that can be passed to the resolver.
|
||||
See [CNI Plugin Result](#result) section for more information.
|
||||
|
||||
## Well-known Error Codes
|
||||
- `1` - Incompatible CNI version
|
||||
- `2` - Unsupported field in network configuration. The error message must contain the key and value of the unsupported field.
|