From 787cbcbc10e4e8be070da6bd8facccfd74cdd8b2 Mon Sep 17 00:00:00 2001 From: CentOS Sources Date: Mar 31 2020 23:32:53 +0000 Subject: import skopeo-0.1.40-7.el7_8 --- diff --git a/.gitignore b/.gitignore index 67507a5..b41eafb 100644 --- a/.gitignore +++ b/.gitignore @@ -1 +1 @@ -SOURCES/skopeo-e079f9d.tar.gz +SOURCES/skopeo-be6146b.tar.gz diff --git a/.skopeo.metadata b/.skopeo.metadata index 8e88cea..a2ee9aa 100644 --- a/.skopeo.metadata +++ b/.skopeo.metadata @@ -1 +1 @@ -523696cfb03e7d0554183fe7a55510fd37b60b90 SOURCES/skopeo-e079f9d.tar.gz +8c5b5615a62d4e504d90c6c40ee957114f5de6b1 SOURCES/skopeo-be6146b.tar.gz diff --git a/SOURCES/containers-certs.d.5.md b/SOURCES/containers-certs.d.5.md new file mode 100644 index 0000000..ffd7e4b --- /dev/null +++ b/SOURCES/containers-certs.d.5.md @@ -0,0 +1,28 @@ +% containers-certs.d(5) + +# NAME +containers-certs.d - Directory for storing custom container-registry TLS configurations + +# DESCRIPTION +A custom TLS configuration for a container registry can be configured by creating a directory under `/etc/containers/certs.d`. +The name of the directory must correspond to the `host:port` of the registry (e.g., `my-registry.com:5000`). + +## Directory Structure +A certs directory can contain one or more files with the following extensions: + +* `*.crt` files will be interpreted as CA certificates +* `*.cert` files will be interpreted as client certificates +* `*.key` files will be interpreted as client keys + +Note that the client certificate-key pair will be selected by the file name (e.g., `client.{cert,key}`). +An example setup for a registry running at `my-registry.com:5000` may look as follows: +``` +/etc/containers/certs.d/ <- Certificate directory +└── my-registry.com:5000 <- Hostname:port + ├── client.cert <- Client certificate + ├── client.key <- Client key + └── ca.crt <- Certificate authority that signed the registry certificate +``` + +# HISTORY +Feb 2019, Originally compiled by Valentin Rothberg diff --git a/SOURCES/containers-mounts.conf.5.md b/SOURCES/containers-mounts.conf.5.md new file mode 100644 index 0000000..130c1c5 --- /dev/null +++ b/SOURCES/containers-mounts.conf.5.md @@ -0,0 +1,16 @@ +% containers-mounts.conf(5) + +## NAME +containers-mounts.conf - configuration file for default mounts in containers + +## DESCRIPTION +The mounts.conf file specifies volume mount directories that are automatically mounted inside containers. Container processes can then use this content. Usually these directories are used for passing secrets or credentials required by the packaged software to access remote package repositories. Note that for security reasons, tools adhering to the mounts.conf are expected to copy the contents instead of bind mounting the paths from the host. + +## FORMAT +The format of the mounts.conf is the volume format `/SRC:/DEST`, one mount per line. For example, a mounts.conf with the line `/usr/share/secrets:/run/secrets` would cause the contents of the `/usr/share/secrets` directory on the host to be mounted on the `/run/secrets` directory inside the container. Setting mountpoints allows containers to use the files of the host, for instance, to use the host's subscription to some enterprise Linux distribution. + +## FILES +Some distributions may provide a `/usr/share/containers/mounts.conf` file to provide default mounts, but users can create a `/etc/containers/mounts.conf` to specify their own special volumes to mount in the container.
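As a sketch of the format described above (both host paths are purely illustrative), a mounts.conf listing two mounts would simply contain one volume per line:

```
/usr/share/rhel/secrets:/run/secrets
/etc/pki/entitlement:/run/entitlement
```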
+ +## HISTORY +Aug 2018, Originally compiled by Valentin Rothberg diff --git a/SOURCES/containers-policy.json.5.md b/SOURCES/containers-policy.json.5.md new file mode 100644 index 0000000..2859d81 --- /dev/null +++ b/SOURCES/containers-policy.json.5.md @@ -0,0 +1,283 @@ +% CONTAINERS-POLICY.JSON(5) policy.json Man Page +% Miloslav Trmač +% September 2016 + +# NAME +containers-policy.json - syntax for the signature verification policy file + +## DESCRIPTION + +Signature verification policy files are used to specify policy, e.g. trusted keys, +applicable when deciding whether to accept an image, or individual signatures of that image, as valid. + +The default policy is stored (unless overridden at compile-time) at `/etc/containers/policy.json`; +applications performing verification may allow using a different policy instead. + +## FORMAT + +The signature verification policy file, usually called `policy.json`, +uses a JSON format. Unlike some other JSON files, its parsing is fairly strict: +unrecognized, duplicated or otherwise invalid fields cause the entire file, +and usually the entire operation, to be rejected. + +The purpose of the policy file is to define a set of *policy requirements* for a container image, +usually depending on its location (where it is being pulled from) or otherwise defined identity. + +Policy requirements can be defined for: + +- An individual *scope* in a *transport*. + The *transport* values are the same as the transport prefixes when pushing/pulling images (e.g. `docker:`, `atomic:`), + and *scope* values are defined by each transport; see below for more details. + + Usually, a scope can be defined to match a single image, and various prefixes of + such a most specific scope define namespaces of matching images. +- A default policy for a single transport, expressed using an empty string as a scope +- A global default policy. + +If multiple policy requirements match a given image, only the requirements from the most specific match apply, +the more general policy requirements definitions are ignored. + +This is expressed in JSON using the top-level syntax +```js +{ + "default": [/* policy requirements: global default */] + "transports": { + transport_name: { + "": [/* policy requirements: default for transport $transport_name */], + scope_1: [/* policy requirements: default for $scope_1 in $transport_name */], + scope_2: [/*…*/] + /*…*/ + }, + transport_name_2: {/*…*/} + /*…*/ + } +} +``` + +The global `default` set of policy requirements is mandatory; all of the other fields +(`transports` itself, any specific transport, the transport-specific default, etc.) are optional. + + +## Supported transports and their scopes + +### `atomic:` + +The `atomic:` transport refers to images in an Atomic Registry. + +Supported scopes use the form _hostname_[`:`_port_][`/`_namespace_[`/`_imagestream_ [`:`_tag_]]], +i.e. either specifying a complete name of a tagged image, or prefix denoting +a host/namespace/image stream. + +*Note:* The _hostname_ and _port_ refer to the Docker registry host and port (the one used +e.g. for `docker pull`), _not_ to the OpenShift API host and port. + +### `dir:` + +The `dir:` transport refers to images stored in local directories. + +Supported scopes are paths of directories (either containing a single image or +subdirectories possibly containing images). + +*Note:* The paths must be absolute and contain no symlinks. Paths violating these requirements may be silently ignored. 
+ +The top-level scope `"/"` is forbidden; use the transport default scope `""`, +for consistency with other transports. + +### `docker:` + +The `docker:` transport refers to images in a registry implementing the "Docker Registry HTTP API V2". + +Scopes matching individual images are named Docker references *in the fully expanded form*, either +using a tag or digest. For example, `docker.io/library/busybox:latest` (*not* `busybox:latest`). + +More general scopes are prefixes of individual-image scopes, and specify a repository (by omitting the tag or digest), +a repository namespace, or a registry host (by only specifying the host name). + +### `oci:` + +The `oci:` transport refers to images in directories compliant with "Open Container Image Layout Specification". + +Supported scopes use the form _directory_`:`_tag_, and _directory_ referring to +a directory containing one or more tags, or any of the parent directories. + +*Note:* See `dir:` above for semantics and restrictions on the directory paths, they apply to `oci:` equivalently. + +### `tarball:` + +The `tarball:` transport refers to tarred up container root filesystems. + +Scopes are ignored. + +## Policy Requirements + +Using the mechanisms above, a set of policy requirements is looked up. The policy requirements +are represented as a JSON array of individual requirement objects. For an image to be accepted, +*all* of the requirements must be satisfied simulatenously. + +The policy requirements can also be used to decide whether an individual signature is accepted (= is signed by a recognized key of a known author); +in that case some requirements may apply only to some signatures, but each signature must be accepted by *at least one* requirement object. + +The following requirement objects are supported: + +### `insecureAcceptAnything` + +A simple requirement with the following syntax + +```json +{"type":"insecureAcceptAnything"} +``` + +This requirement accepts any image (but note that other requirements in the array still apply). + +When deciding to accept an individual signature, this requirement does not have any effect; it does *not* cause the signature to be accepted, though. + +This is useful primarily for policy scopes where no signature verification is required; +because the array of policy requirements must not be empty, this requirement is used +to represent the lack of requirements explicitly. + +### `reject` + +A simple requirement with the following syntax: + +```json +{"type":"reject"} +``` + +This requirement rejects every image, and every signature. + +### `signedBy` + +This requirement requires an image to be signed with an expected identity, or accepts a signature if it is using an expected identity and key. + +```js +{ + "type": "signedBy", + "keyType": "GPGKeys", /* The only currently supported value */ + "keyPath": "/path/to/local/keyring/file", + "keyData": "base64-encoded-keyring-data", + "signedIdentity": identity_requirement +} +``` + + +Exactly one of `keyPath` and `keyData` must be present, containing a GPG keyring of one or more public keys. Only signatures made by these keys are accepted. + +The `signedIdentity` field, a JSON object, specifies what image identity the signature claims about the image. +One of the following alternatives are supported: + +- The identity in the signature must exactly match the image identity. Note that with this, referencing an image by digest (with a signature claiming a _repository_`:`_tag_ identity) will fail. 
+ + ```json + {"type":"matchExact"} + ``` +- If the image identity carries a tag, the identity in the signature must exactly match; + if the image identity uses a digest reference, the identity in the signature must be in the same repository as the image identity (using any tag). + + (Note that with images identified using digest references, the digest from the reference is validated even before signature verification starts.) + + ```json + {"type":"matchRepoDigestOrExact"} + ``` +- The identity in the signature must be in the same repository as the image identity. This is useful e.g. to pull an image using the `:latest` tag when the image is signed with a tag specifing an exact image version. + + ```json + {"type":"matchRepository"} + ``` +- The identity in the signature must exactly match a specified identity. + This is useful e.g. when locally mirroring images signed using their public identity. + + ```js + { + "type": "exactReference", + "dockerReference": docker_reference_value + } + ``` +- The identity in the signature must be in the same repository as a specified identity. + This combines the properties of `matchRepository` and `exactReference`. + + ```js + { + "type": "exactRepository", + "dockerRepository": docker_repository_value + } + ``` + +If the `signedIdentity` field is missing, it is treated as `matchRepoDigestOrExact`. + +*Note*: `matchExact`, `matchRepoDigestOrExact` and `matchRepository` can be only used if a Docker-like image identity is +provided by the transport. In particular, the `dir:` and `oci:` transports can be only +used with `exactReference` or `exactRepository`. + + + +## Examples + +It is *strongly* recommended to set the `default` policy to `reject`, and then +selectively allow individual transports and scopes as desired. + +### A reasonably locked-down system + +(Note that the `/*`…`*/` comments are not valid in JSON, and must not be used in real policies.) + +```js +{ + "default": [{"type": "reject"}], /* Reject anything not explicitly allowed */ + "transports": { + "docker": { + /* Allow installing images from a specific repository namespace, without cryptographic verification. + This namespace includes images like openshift/hello-openshift and openshift/origin. */ + "docker.io/openshift": [{"type": "insecureAcceptAnything"}], + /* Similarly, allow installing the “official” busybox images. Note how the fully expanded + form, with the explicit /library/, must be used. */ + "docker.io/library/busybox": [{"type": "insecureAcceptAnything"}] + /* Other docker: images use the global default policy and are rejected */ + }, + "dir": { + "": [{"type": "insecureAcceptAnything"}] /* Allow any images originating in local directories */ + }, + "atomic": { + /* The common case: using a known key for a repository or set of repositories */ + "hostname:5000/myns/official": [ + { + "type": "signedBy", + "keyType": "GPGKeys", + "keyPath": "/path/to/official-pubkey.gpg" + } + ], + /* A more complex example, for a repository which contains a mirror of a third-party product, + which must be signed-off by local IT */ + "hostname:5000/vendor/product": [ + { /* Require the image to be signed by the original vendor, using the vendor's repository location. */ + "type": "signedBy", + "keyType": "GPGKeys", + "keyPath": "/path/to/vendor-pubkey.gpg", + "signedIdentity": { + "type": "exactRepository", + "dockerRepository": "vendor-hostname/product/repository" + } + }, + { /* Require the image to _also_ be signed by a local reviewer. 
*/ + "type": "signedBy", + "keyType": "GPGKeys", + "keyPath": "/path/to/reviewer-pubkey.gpg" + } + ] + } + } +} +``` + +### Completely disable security, allow all images, do not trust any signatures + +```json +{ + "default": [{"type": "insecureAcceptAnything"}] +} +``` +## SEE ALSO + atomic(1) + +## HISTORY +August 2018, Rename to containers-policy.json(5) by Valentin Rothberg + +September 2016, Originally compiled by Miloslav Trmač diff --git a/SOURCES/containers-registries.conf.5.md b/SOURCES/containers-registries.conf.5.md new file mode 100644 index 0000000..8ec6e34 --- /dev/null +++ b/SOURCES/containers-registries.conf.5.md @@ -0,0 +1,177 @@ +% CONTAINERS-REGISTRIES.CONF(5) System-wide registry configuration file +% Brent Baude +% Aug 2017 + +# NAME +containers-registries.conf - Syntax of System Registry Configuration File + +# DESCRIPTION +The CONTAINERS-REGISTRIES configuration file is a system-wide configuration +file for container image registries. The file format is TOML. + +By default, the configuration file is located at `/etc/containers/registries.conf`. + +# FORMATS + +## VERSION 2 +VERSION 2 is the latest format of the `registries.conf` and is currently in +beta. This means in general VERSION 1 should be used in production environments +for now. + +### GLOBAL SETTINGS + +`unqualified-search-registries` +: An array of _host_[`:`_port_] registries to try when pulling an unqualified image, in order. + +### NAMESPACED `[[registry]]` SETTINGS + +The bulk of the configuration is represented as an array of `[[registry]]` +TOML tables; the settings may therefore differ among different registries +as well as among different namespaces/repositories within a registry. + +#### Choosing a `[[registry]]` TOML table + +Given an image name, a single `[[registry]]` TOML table is chosen based on its `prefix` field. + +`prefix` +: A prefix of the user-specified image name, i.e. using one of the following formats: + - _host_[`:`_port_] + - _host_[`:`_port_]`/`_namespace_[`/`_namespace_…] + - _host_[`:`_port_]`/`_namespace_[`/`_namespace_…]`/`_repo_ + - _host_[`:`_port_]`/`_namespace_[`/`_namespace_…]`/`_repo_(`:`_tag|`@`_digest_) + + The user-specified image name must start with the specified `prefix` (and continue + with the appropriate separator) for a particular `[[registry]]` TOML table to be + considered; (only) the TOML table with the longest match is used. + + As a special case, the `prefix` field can be missing; if so, it defaults to the value + of the `location` field (described below). + +#### Per-namespace settings + +`insecure` +: `true` or `false`. + By default, container runtimes require TLS when retrieving images from a registry. + If `insecure` is set to `true`, unencrypted HTTP as well as TLS connections with untrusted + certificates are allowed. + +`blocked` +: `true` or `false`. + If `true`, pulling images with matching names is forbidden. + +#### Remapping and mirroring registries + +The user-specified image reference is, primarily, a "logical" image name, always used for naming +the image. By default, the image reference also directly specifies the registry and repository +to use, but the following options can be used to redirect the underlying accesses +to different registry servers or locations (e.g. to support configurations with no access to the +internet without having to change `Dockerfile`s, or to add redundancy). + +`location` +: Accepts the same format as the `prefix` field, and specifies the physical location + of the `prefix`-rooted namespace. 
+ + By default, this is equal to `prefix` (in which case `prefix` can be omitted and the + `[[registry]]` TOML table can only specify `location`). + + Example: Given + ``` + prefix = "example.com/foo" + location = "internal-registry-for-example.net/bar" + ``` + requests for the image `example.com/foo/myimage:latest` will actually work with the + `internal-registry-for-example.net/bar/myimage:latest` image. + +`mirror` +: An array of TOML tables specifying (possibly-partial) mirrors for the + `prefix`-rooted namespace. + + The mirrors are attempted in the specified order; the first one that can be + contacted and contains the image will be used (and if none of the mirrors contains the image, + the primary location specified by the `registry.location` field, or using the unmodified + user-specified reference, is tried last). + + Each TOML table in the `mirror` array can contain the following fields, with the same semantics + as if specified in the `[[registry]]` TOML table directly: + - `location` + - `insecure` + +`mirror-by-digest-only` +: `true` or `false`. + If `true`, mirrors will only be used during pulling if the image reference includes a digest. + Referencing an image by digest ensures that the same image is always used + (whereas referencing an image by a tag may cause different registries to return + different images if the tag mapping is out of sync). + + Note that if this is `true`, images referenced by a tag will only use the primary + registry, failing if that registry is not accessible. + +*Note*: Redirection and mirrors are currently processed only when reading images, not when pushing +to a registry; that may change in the future. + +### EXAMPLE + +``` +unqualified-search-registries = ["example.com"] + +[[registry]] +prefix = "example.com/foo" +insecure = false +blocked = false +location = "internal-registry-for-example.com/bar" + +[[registry.mirror]] +location = "example-mirror-0.local/mirror-for-foo" + +[[registry.mirror]] +location = "example-mirror-1.local/mirrors/foo" +insecure = true +``` +Given the above, a pull of `example.com/foo/image:latest` will try: + 1. `example-mirror-0.local/mirror-for-foo/image:latest` + 2. `example-mirror-1.local/mirrors/foo/image:latest` + 3. `internal-registry-for-example.com/bar/image:latest` + +in order, and use the first one that exists. + +## VERSION 1 +VERSION 1 can be used as an alternative to VERSION 2, but it does not support +using registry mirrors, longest-prefix matches, or location rewriting. + +The TOML format is used to build a simple list of registries under three +categories: `registries.search`, `registries.insecure`, and `registries.block`. +You can list multiple registries using a comma-separated list. + +Search registries are used when the caller of a container runtime does not fully specify the +container image that they want to execute. These registries are prepended onto the front +of the specified container image until the named image is found at a registry. + +Note that insecure registries can be used for any registry, not just the registries listed +under search. + +The `registries.insecure` and `registries.block` lists have the same meaning as the +`insecure` and `blocked` fields in VERSION 2. + +### EXAMPLE +The following example configuration defines two searchable registries, one +insecure registry, and two blocked registries.
+ +``` +[registries.search] +registries = ['registry1.com', 'registry2.com'] + +[registries.insecure] +registries = ['registry3.com'] + +[registries.block] +registries = ['registry.untrusted.com', 'registry.unsafe.com'] +``` + +# HISTORY +Mar 2019, Added additional configuration format by Sascha Grunert + +Aug 2018, Renamed to containers-registries.conf(5) by Valentin Rothberg + +Jun 2018, Updated by Tom Sweeney + +Aug 2017, Originally compiled by Brent Baude diff --git a/SOURCES/containers-registries.d.5.md b/SOURCES/containers-registries.d.5.md new file mode 100644 index 0000000..dffe387 --- /dev/null +++ b/SOURCES/containers-registries.d.5.md @@ -0,0 +1,128 @@ +% CONTAINERS-REGISTRIES.D(5) Registries.d Man Page +% Miloslav Trmač +% August 2016 + +# NAME +containers-registries.d - Directory for various registry configurations + +# DESCRIPTION + +The registries configuration directory contains configuration for various registries +(servers storing remote container images), and for content stored in them, +so that the configuration does not have to be provided in command-line options over and over for every command, +and so that it can be shared by all users of containers/image. + +By default (unless overridden at compile-time), the registries configuration directory is `/etc/containers/registries.d`; +applications may allow using a different directory instead. + +## Directory Structure + +The directory may contain any number of files with the extension `.yaml`, +each using the YAML format. Other than the mandatory extension, names of the files +don’t matter. + +The contents of these files are merged together; to have a well-defined and easy-to-understand +behavior, there can be only one configuration section describing a single namespace within a registry +(in particular there can be at most one `default-docker` section across all files, +and there can be at most one instance of any key under the `docker` section; +these sections are documented later). + +Thus, it is forbidden to have two conflicting configurations for a single registry or scope, +and it is also forbidden to split a configuration for a single registry or scope across +more than one file (even if they are not semantically in conflict). + +## Registries, Scopes and Search Order + +Each YAML file must contain a “YAML mapping” (key-value pairs). Two top-level keys are defined: + +- `default-docker` is the _configuration section_ (as documented below) + for registries implementing "Docker Registry HTTP API V2". + + This key is optional. + +- `docker` is a mapping, using individual registries implementing "Docker Registry HTTP API V2", + or namespaces and individual images within these registries, as keys; + the value assigned to any such key is a _configuration section_. + + This key is optional. + + Scopes matching individual images are named Docker references *in the fully expanded form*, either + using a tag or digest. For example, `docker.io/library/busybox:latest` (*not* `busybox:latest`). + + More general scopes are prefixes of individual-image scopes, and specify a repository (by omitting the tag or digest), + a repository namespace, or a registry host (and a port if it differs from the default). + + Note that if a registry is accessed using a hostname+port configuration, the port-less hostname + is _not_ used as parent scope (see the sketch below).
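For illustration only, in the following sketch (hypothetical hostnames) the `internal.example.com:5000` and `internal.example.com` keys are two independent scopes; configuration under the port-less name does not apply to the registry accessed via port 5000:

```yaml
docker:
  internal.example.com:5000:
    sigstore: https://sigstore.internal.example.com
  internal.example.com:
    sigstore: https://sigstore-fallback.internal.example.com
```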
+ +When searching for a configuration to apply for an individual container image, only +the configuration for the most-precisely matching scope is used; configuration using +more general scopes is ignored. For example, if _any_ configuration exists for +`docker.io/library/busybox`, the configuration for `docker.io` is ignored +(even if some element of the configuration is defined for `docker.io` and not for `docker.io/library/busybox`). + +## Individual Configuration Sections + +A single configuration section is selected for a container image using the process +described above. The configuration section is a YAML mapping, with the following keys: + +- `sigstore-staging` defines a URL of the signature storage, used for editing it (adding or deleting signatures). + + This key is optional; if it is missing, `sigstore` below is used. + +- `sigstore` defines a URL of the signature storage. + This URL is used for reading existing signatures, + and if `sigstore-staging` does not exist, also for adding or removing them. + + This key is optional; if it is missing, no signature storage is defined (no signatures + are downloaded along with images, and adding new signatures is possible only if `sigstore-staging` is defined). + +## Examples + +### Using Containers from Various Origins + +The following demonstrates how to consume and run images from various registries and namespaces: + +```yaml +docker: + registry.database-supplier.com: + sigstore: https://sigstore.database-supplier.com + distribution.great-middleware.org: + sigstore: https://security-team.great-middleware.org/sigstore + docker.io/web-framework: + sigstore: https://sigstore.web-framework.io:8080 +``` + +### Developing and Signing Containers, Staging Signatures + +For developers in `example.com`: + +- Consume most container images using the public servers also used by clients. +- Use a separate signature storage for container images in a namespace corresponding to the developers' department, with a staging storage used before publishing signatures. +- Craft an individual exception for a single branch that a specific developer is working on locally. + +```yaml +docker: + registry.example.com: + sigstore: https://registry-sigstore.example.com + registry.example.com/mydepartment: + sigstore: https://sigstore.mydepartment.example.com + sigstore-staging: file:///mnt/mydepartment/sigstore-staging + registry.example.com/mydepartment/myproject:mybranch: + sigstore: http://localhost:4242/sigstore + sigstore-staging: file:///home/useraccount/webroot/sigstore +``` + +### A Global Default + +If a company publishes its products using a different domain and a different registry hostname for each of them, it is still possible to use a single signature storage server +without listing each domain individually. This is expected to rarely happen, usually only for staging new signatures. + +```yaml +default-docker: + sigstore-staging: file:///mnt/company/common-sigstore-staging +``` + +# AUTHORS + +Miloslav Trmač diff --git a/SOURCES/containers-signature.5.md b/SOURCES/containers-signature.5.md new file mode 100644 index 0000000..5b99e7c --- /dev/null +++ b/SOURCES/containers-signature.5.md @@ -0,0 +1,241 @@ +% container-signature(5) Container signature format +% Miloslav Trmač +% March 2017 + +# Container signature format + +This document describes the format of container signatures, +as implemented by the `github.com/containers/image/signature` package.
+ +Most users should be able to consume these signatures by using the `github.com/containers/image/signature` package +(preferably through the higher-level `signature.PolicyContext` interface) +without having to care about the details of the format described below. +This documentation exists primarily for maintainers of the package +and to allow independent reimplementations. + +## High-level overview + +The signature provides an end-to-end authenticated claim that a container image +has been approved by a specific party (e.g. the creator of the image as their work, +an automated build system as a result of an automated build, +a company IT department approving the image for production) under a specified _identity_ +(e.g. an OS base image / specific application, with a specific version). + +A container signature consists of a cryptographic signature which identifies +and authenticates who signed the image, and carries as a signed payload a JSON document. +The JSON document identifies the image being signed, claims a specific identity of the +image and if applicable, contains other information about the image. + +The signatures do not modify the container image (the layers, configuration, manifest, …); +e.g. their presence does not change the manifest digest used to identify the image in +docker/distribution servers; rather, the signatures are associated with an immutable image. +An image can have any number of signatures so signature distribution systems SHOULD support +associating more than one signature with an image. + +## The cryptographic signature + +As distributed, the container signature is a blob which contains a cryptographic signature +in an industry-standard format, carrying a signed JSON payload (i.e. the blob contains both the +JSON document and a signature of the JSON document; it is not a “detached signature” with +independent blobs containing the JSON document and a cryptographic signature). + +Currently the only defined cryptographic signature format is an OpenPGP signature (RFC 4880), +but others may be added in the future. (The blob does not contain metadata identifying the +cryptographic signature format. It is expected that most formats are sufficiently self-describing +that this is not necessary and the configured expected public key provides another indication +of the expected cryptographic signature format. Such metadata may be added in the future for +newly added cryptographic signature formats, if necessary.) + +Consumers of container signatures SHOULD verify the cryptographic signature +against one or more trusted public keys +(e.g. defined in a [policy.json signature verification policy file](policy.json.md)) +before parsing or processing the JSON payload in _any_ way, +in particular they SHOULD stop processing the container signature +if the cryptographic signature verification fails, without even starting to process the JSON payload. + +(Consumers MAY extract identification of the signing key and other metadata from the cryptographic signature, +and the JSON payload, without verifying the signature, if the purpose is to allow managing the signature blobs, +e.g. to list the authors and image identities of signatures associated with a single container image; +if so, they SHOULD design the output of such processing to minimize the risk of users considering the output trusted +or in any way usable for making policy decisions about the image.) 
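As an informal illustration only (actual consumers should go through the `signature` package and a configured policy rather than ad-hoc tooling), a signature blob saved as `busybox.signature` could be checked and unpacked manually with GnuPG; the file name and the presence of the signer's public key in the local keyring are assumptions here:

```
# Verify the OpenPGP "Signed Message" and write the embedded JSON payload to payload.json;
# the payload must not be inspected unless gpg exits successfully.
gpg --output payload.json --decrypt busybox.signature
```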
+ +### OpenPGP signature verification + +When verifying a cryptographic signature in the OpenPGP format, +the consumer MUST verify at least the following aspects of the signature +(like the `github.com/containers/image/signature` package does): + +- The blob MUST be a “Signed Message” as defined RFC 4880 section 11.3. + (e.g. it MUST NOT be an unsigned “Literal Message”, or any other non-signature format). +- The signature MUST have been made by an expected key trusted for the purpose (and the specific container image). +- The signature MUST be correctly formed and pass the cryptographic validation. +- The signature MUST correctly authenticate the included JSON payload + (in particular, the parsing of the JSON payload MUST NOT start before the complete payload has been cryptographically authenticated). +- The signature MUST NOT be expired. + +The consumer SHOULD have tests for its verification code which verify that signatures failing any of the above are rejected. + +## JSON processing and forward compatibility + +The payload of the cryptographic signature is a JSON document (RFC 7159). +Consumers SHOULD parse it very strictly, +refusing any signature which violates the expected format (e.g. missing members, incorrect member types) +or can be interpreted ambiguously (e.g. a duplicated member in a JSON object). + +Any violations of the JSON format or of other requirements in this document MAY be accepted if the JSON document can be recognized +to have been created by a known-incorrect implementation (see [`optional.creator`](#optionalcreator) below) +and if the semantics of the invalid document, as created by such an implementation, is clear. + +The top-level value of the JSON document MUST be a JSON object with exactly two members, `critical` and `optional`, +each a JSON object. + +The `critical` object MUST contain a `type` member identifying the document as a container signature +(as defined [below](#criticaltype)) +and signature consumers MUST reject signatures which do not have this member or in which this member does not have the expected value. + +To ensure forward compatibility (allowing older signature consumers to correctly +accept or reject signatures created at a later date, with possible extensions to this format), +consumers MUST reject the signature if the `critical` object, or _any_ of its subobjects, +contain _any_ member or data value which is unrecognized, unsupported, invalid, or in any other way unexpected. +At a minimum, this includes unrecognized members in a JSON object, or incorrect types of expected members. + +For the same reason, consumers SHOULD accept any members with unrecognized names in the `optional` object, +and MAY accept signatures where the object member is recognized but unsupported, or the value of the member is unsupported. +Consumers still SHOULD reject signatures where a member of an `optional` object is supported but the value is recognized as invalid. + +## JSON data format + +An example of the full format follows, with detailed description below. +To reiterate, consumers of the signature SHOULD perform successful cryptographic verification, +and MUST reject unexpected data in the `critical` object, or in the top-level object, as described above. 
+ +```json +{ + "critical": { + "type": "atomic container signature", + "image": { + "docker-manifest-digest": "sha256:817a12c32a39bbe394944ba49de563e085f1d3c5266eb8e9723256bc4448680e" + }, + "identity": { + "docker-reference": "docker.io/library/busybox:latest" + } + }, + "optional": { + "creator": "some software package v1.0.1-35", + "timestamp": 1483228800, + } +} +``` + +### `critical` + +This MUST be a JSON object which contains data critical to correctly evaluating the validity of a signature. + +Consumers MUST reject any signature where the `critical` object contains any unrecognized, unsupported, invalid or in any other way unexpected member or data. + +### `critical.type` + +This MUST be a string with a string value exactly equal to `atomic container signature` (three words, including the spaces). + +Signature consumers MUST reject signatures which do not have this member or this member does not have exactly the expected value. + +(The consumers MAY support signatures with a different value of the `type` member, if any is defined in the future; +if so, the rest of the JSON document is interpreted according to rules defining that value of `critical.type`, +not by this document.) + +### `critical.image` + +This MUST be a JSON object which identifies the container image this signature applies to. + +Consumers MUST reject any signature where the `critical.image` object contains any unrecognized, unsupported, invalid or in any other way unexpected member or data. + +(Currently only the `docker-manifest-digest` way of identifying a container image is defined; +alternatives to this may be defined in the future, +but existing consumers are required to reject signatures which use formats they do not support.) + +### `critical.image.docker-manifest-digest` + +This MUST be a JSON string, in the `github.com/opencontainers/go-digest.Digest` string format. + +The value of this member MUST match the manifest of the signed container image, as implemented in the docker/distribution manifest addressing system. + +The consumer of the signature SHOULD verify the manifest digest against a fully verified signature before processing the contents of the image manifest in any other way +(e.g. parsing the manifest further or downloading layers of the image). + +Implementation notes: +* A single container image manifest may have several valid manifest digest values, using different algorithms. +* For “signed” [docker/distribution schema 1](https://github.com/docker/distribution/blob/master/docs/spec/manifest-v2-1.md) manifests, +the manifest digest applies to the payload of the JSON web signature, not to the raw manifest blob. + +### `critical.identity` + +This MUST be a JSON object which identifies the claimed identity of the image (usually the purpose of the image, or the application, along with a version information), +as asserted by the author of the signature. + +Consumers MUST reject any signature where the `critical.identity` object contains any unrecognized, unsupported, invalid or in any other way unexpected member or data. + +(Currently only the `docker-reference` way of claiming an image identity/purpose is defined; +alternatives to this may be defined in the future, +but existing consumers are required to reject signatures which use formats they do not support.) + +### `critical.identity.docker-reference` + +This MUST be a JSON string, in the `github.com/docker/distribution/reference` string format, +and using the same normalization semantics (where e.g. 
`busybox:latest` is equivalent to `docker.io/library/busybox:latest`). +If the normalization semantics allows multiple string representations of the claimed identity with equivalent meaning, +the `critical.identity.docker-reference` member SHOULD use the fully explicit form (including the full host name and namespaces). + +The value of this member MUST match the image identity/purpose expected by the consumer of the image signature and the image +(again, accounting for the `docker/distribution/reference` normalization semantics). + +In the most common case, this means that the `critical.identity.docker-reference` value must be equal to the docker/distribution reference used to refer to or download the image. +However, depending on the specific application, users or system administrators may accept less specific matches +(e.g. ignoring the tag value in the signature when pulling the `:latest` tag or when referencing an image by digest), +or they may require `critical.identity.docker-reference` values with a completely different namespace to the reference used to refer to/download the image +(e.g. requiring a `critical.identity.docker-reference` value which identifies the image as coming from a supplier when fetching it from a company-internal mirror of approved images). +The software performing this verification SHOULD allow the users to define such a policy using the [policy.json signature verification policy file format](policy.json.md). + +The `critical.identity.docker-reference` value SHOULD contain either a tag or digest; +in most cases, it SHOULD use a tag rather than a digest. (See also the default [`matchRepoDigestOrExact` matching semantics in `policy.json`](policy.json.md#signedby).) + +### `optional` + +This MUST be a JSON object. + +Consumers SHOULD accept any members with unrecognized names in the `optional` object, +and MAY accept a signature where the object member is recognized but unsupported, or the value of the member is valid but unsupported. +Consumers still SHOULD reject any signature where a member of an `optional` object is supported but the value is recognized as invalid. + +### `optional.creator` + +If present, this MUST be a JSON string, identifying the name and version of the software which has created the signature. + +The contents of this string is not defined in detail; however each implementation creating container signatures: + +- SHOULD define the contents to unambiguously define the software in practice (e.g. it SHOULD contain the name of the software, not only the version number) +- SHOULD use a build and versioning process which ensures that the contents of this string (e.g. an included version number) + changes whenever the format or semantics of the generated signature changes in any way; + it SHOULD not be possible for two implementations which use a different format or semantics to have the same `optional.creator` value +- SHOULD use a format which is reasonably easy to parse in software (perhaps using a regexp), + and which makes it easy enough to recognize a range of versions of a specific implementation + (e.g. the version of the implementation SHOULD NOT be only a git hash, because they don’t have an easily defined ordering; + the string should contain a version number, or at least a date of the commit). 
+ +Consumers of container signatures MAY recognize specific values or sets of values of `optional.creator` +(perhaps augmented with `optional.timestamp`), +and MAY change their processing of the signature based on these values +(usually to acommodate violations of this specification in past versions of the signing software which cannot be fixed retroactively), +as long as the semantics of the invalid document, as created by such an implementation, is clear. + +If consumers of signatures do change their behavior based on the `optional.creator` value, +they SHOULD take care that the way they process the signatures is not inconsistent with +strictly validating signature consumers. +(I.e. it is acceptable for a consumer to accept a signature based on a specific `optional.creator` value +if other implementations would completely reject the signature, +but it would be very undesirable for the two kinds of implementations to accept the signature in different +and inconsistent situations.) + +### `optional.timestamp` + +If present, this MUST be a JSON number, which is representable as a 64-bit integer, and identifies the time when the signature was created +as the number of seconds since the UNIX epoch (Jan 1 1970 00:00 UTC). diff --git a/SOURCES/containers-storage.conf.5.md b/SOURCES/containers-storage.conf.5.md index 625dadc..3df486e 100644 --- a/SOURCES/containers-storage.conf.5.md +++ b/SOURCES/containers-storage.conf.5.md @@ -1,16 +1,16 @@ -% storage.conf(5) Container Storage Configuration File +% containers-storage.conf(5) Container Storage Configuration File % Dan Walsh % May 2017 # NAME storage.conf - Syntax of Container Storage configuration file -# DESCRIPTION +## DESCRIPTION The STORAGE configuration file specifies all of the available container storage options for tools using shared container storage, but in a TOML format that can be more easily modified and versioned. -# FORMAT +## FORMAT The [TOML format][toml] is used as the encoding of the configuration file. Every option and subtable listed here is nested under a global "storage" table. No bare options are used. The format of TOML can be simplified to: @@ -28,6 +28,12 @@ No bare options are used. The format of TOML can be simplified to: The `storage` table supports the following options: +**driver**="" + container storage driver (default: "overlay") + Default Copy On Write (COW) container storage driver + Valid drivers are "overlay", "vfs", "devmapper", "aufs", "btrfs", and "zfs" + Some drivers (for example, "zfs", "btrfs", and "aufs") may not work if your kernel lacks support for the filesystem + **graphroot**="" container storage graph dir (default: "/var/lib/containers/storage") Default directory to store all writable content created by container storage programs @@ -36,10 +42,6 @@ The `storage` table supports the following options: container storage run dir (default: "/var/run/containers/storage") Default directory to store all temporary writable content created by container storage programs -**driver**="" - container storage driver (default is "overlay") - Default Copy On Write (COW) container storage driver - ### STORAGE OPTIONS TABLE The `storage.options` table supports the following options: @@ -47,54 +49,94 @@ The `storage.options` table supports the following options: **additionalimagestores**=[] Paths to additional container image stores. Usually these are read/only and stored on remote network shares. 
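An illustrative entry (the path is hypothetical) pointing at a read-only store kept on a network share:

```
additionalimagestores = [ "/mnt/shared/containers/storage" ]
```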
+**mount_program**="" + Specifies the path to a custom program to use instead of using kernel defaults for mounting the file system. + + mount_program = "/usr/bin/fuse-overlayfs" + +**mountopt**="" + + Comma separated list of default options to be used to mount container images. Suggested value "nodev". + +**ostree_repo** = "" + If specified, use OSTree to deduplicate files with the overlay or vfs backends. + **size**="" - Maximum size of a container image. Default is 10GB. This flag can be used to set quota - on the size of container images. + Maximum size of a container image. This flag can be used to set quota on the size of container images. (default: 10GB) + +**skip_mount_home** = "false" + Set to skip a PRIVATE bind mount on the storage home directory. +Only supported by certain container storage drivers (overlay). + +**remap-uids=**"" +**remap-gids=**"" + + Remap-UIDs/GIDs is the mapping from UIDs/GIDs as they should appear inside of +a container, to the UIDs/GIDs outside of the container, and the length of the +range of UIDs/GIDs. Additional mapped sets can be listed and will be heeded by +libraries, but there are limits to the number of mappings which the kernel will +allow when you later attempt to run a container. -**override_kernel_check**="" - Tell storage drivers to ignore kernel version checks. Some storage drivers assume that if a kernel is too - old, the driver is not supported. But for kernels that have had the drivers backported, this flag - allows users to override the checks + Example + remap-uids = 0:1668442479:65536 + remap-gids = 0:1668442479:65536 -[storage.options.thinpool] + These mappings tell the container engines to map UID 0 inside of the + container to UID 1668442479 outside. UID 1 will be mapped to 1668442480. + UID 2 will be mapped to 1668442481, etc, for the next 65533 UIDs in + Succession. -Storage Options for thinpool +**remap-user**="" +**remap-group**="" + + Remap-User/Group is a user name which can be used to look up one or more UID/GID +ranges in the /etc/subuid or /etc/subgid file. Mappings are set up starting +with an in-container ID of 0 and then a host-level ID taken from the lowest +range that matches the specified name, and using the length of that range. +Additional ranges are then assigned, using the ranges which specify the +lowest host-level IDs first, to the lowest not-yet-mapped in-container ID, +until all of the entries have been used for maps. + + remap-user = "storage" + remap-group = "storage" + +### STORAGE OPTIONS FOR THINPOOL TABLE The `storage.options.thinpool` table supports the following options: **autoextend_percent**="" -Tells the thinpool driver the amount by which the thinpool needs to be grown. This is specified in terms of % of pool size. So a value of 20 means that when threshold is hit, pool will be grown by 20% of existing pool size. (Default is 20%) +Tells the thinpool driver the amount by which the thinpool needs to be grown. This is specified in terms of % of pool size. So a value of 20 means that when threshold is hit, pool will be grown by 20% of existing pool size. (default: 20%) **autoextend_threshold**="" -Tells the driver the thinpool extension threshold in terms of percentage of pool size. For example, if threshold is 60, that means when pool is 60% full, threshold has been hit. (80% is the default) +Tells the driver the thinpool extension threshold in terms of percentage of pool size. For example, if threshold is 60, that means when pool is 60% full, threshold has been hit. 
(default: 80%) **basesize**="" -Specifies the size to use when creating the base device, which limits the size of images and containers. (10g is the default) +Specifies the size to use when creating the base device, which limits the size of images and containers. (default: 10g) **blocksize**="" -Specifies a custom blocksize to use for the thin pool. (64k is the default) +Specifies a custom blocksize to use for the thin pool. (default: 64k) **directlvm_device**="" -Specifies a custom block storage device to use for the thin pool. Required if you setup devicemapper +Specifies a custom block storage device to use for the thin pool. Required for using graphdriver `devicemapper`. **directlvm_device_force**="" -Tells driver to wipe device (directlvm_device) even if device already has a filesystem. Default is False +Tells driver to wipe device (directlvm_device) even if device already has a filesystem. (default: false) **fs**="xfs" -Specifies the filesystem type to use for the base device. (Default is xfs) +Specifies the filesystem type to use for the base device. (default: xfs) **log_level**="" Sets the log level of devicemapper. - 0: LogLevelSuppress 0 (Default) + 0: LogLevelSuppress 0 (default) 2: LogLevelFatal 3: LogLevelErr 4: LogLevelWarn @@ -104,28 +146,51 @@ Sets the log level of devicemapper. **min_free_space**="" -Specifies the min free space percent in a thin pool require for new device creation to succeed. Valid values are from 0% - 99%. Value 0% disables (10% is the default) +Specifies the min free space percent in a thin pool required for new device creation to succeed. Valid values are from 0% - 99%. Value 0% disables. (default: 10%) **mkfsarg**="" Specifies extra mkfs arguments to be used when creating the base device. -**mountopt**="" +**use_deferred_deletion**="" -Specifies extra mount options used when mounting the thin devices. +Marks thinpool device for deferred deletion. If the thinpool is in use when the driver attempts to delete it, the driver will attempt to delete device every 30 seconds until successful, or when it restarts. Deferred deletion permanently deletes the device and all data stored in the device will be lost. (default: true). **use_deferred_removal**="" -Marks device for deferred removal. If the device is in use when it driver attempts to remove it, driver will tell the kernel to remove it as soon as possible. (Default is true). +Marks devicemapper block device for deferred removal. If the device is in use when its driver attempts to remove it, the driver tells the kernel to remove the device as soon as possible. Note this does not free up the disk space, use deferred deletion to fully remove the thinpool. (default: true). -**use_deferred_deletion**="" +**xfs_nospace_max_retries**="" -Marks device for deferred deletion. If the device is in use when it driver attempts to delete it, driver continue to attempt to delete device every 30 seconds, or when it restarts. (Default is true). +Specifies the maximum number of retries XFS should attempt to complete IO when ENOSPC (no space) error is returned by underlying storage device. (default: 0, which means to try continuously.) -**xfs_nospace_max_retries**="" +## SELINUX LABELING + +When running on an SELinux system, if you move the containers storage graphroot directory, you must make sure the labeling is correct. + +Tell SELinux about the new containers storage by setting up an equivalence record. +This tells SELinux to label content under the new path, as if it was stored +under `/var/lib/containers/storage`. 
+ +``` +semanage fcontext -a -e /var/lib/containers NEWSTORAGEPATH +restorecon -R -v /src/containers +``` + +The semanage command above tells SELinux to setup the default labeling of +`NEWSTORAGEPATH` to match `/var/lib/containers`. The `restorecon` command +tells SELinux to apply the labels to the actual content. + +Now all new content created in these directories will automatically be created +with the correct label. + +## SEE ALSO +`semanage(8)`, `restorecon(8)` + +## FILES -Specifies the maximum number of retries XFS should attempt to complete IO when ENOSPC (no space) error is returned by underlying storage device. (Default is 0, which means to try continuously. +Distributions often provide a /usr/share/containers/storage.conf file to define default storage configuration. Administrators can override this file by creating `/etc/containers/storage.conf` to specify their own configuration. The storage.conf file for rootless users is stored in the $HOME/.config/containers/storage.conf file. -# HISTORY +## HISTORY May 2017, Originally compiled by Dan Walsh Format copied from crio.conf man page created by Aleksa Sarai diff --git a/SOURCES/containers-transports.5.md b/SOURCES/containers-transports.5.md new file mode 100644 index 0000000..e9d3b9c --- /dev/null +++ b/SOURCES/containers-transports.5.md @@ -0,0 +1,109 @@ +% CONTAINERS-TRANSPORTS(5) Containers Transports Man Page +% Valentin Rothberg +% April 2019 + +## NAME + +containers-transports - description of supported transports for copying and storing container images + +## DESCRIPTION + +Tools which use the containers/image library, including skopeo(1), buildah(1), podman(1), all share a common syntax for referring to container images in various locations. +The general form of the syntax is _transport:details_, where details are dependent on the specified transport, which are documented below. + +### **containers-storage:** [storage-specifier]{image-id|docker-reference[@image-id]} + +An image located in a local containers storage. +The format of _docker-reference_ is described in detail in the **docker** transport. + +The _storage-specifier_ allows for referencing storage locations on the file system and has the format `[[driver@]root[+run-root][:options]]` where the optional `driver` refers to the storage driver (e.g., overlay or btrfs) and where `root` is an absolute path to the storage's root directory. +The optional `run-root` can be used to specify the run directory of the storage where all temporary writable content is stored. +The optional `options` are a comma-separated list of driver-specific options. +Please refer to containers-storage.conf(5) for further information on the drivers and supported options. + +### **dir:**_path_ + +An existing local directory _path_ storing the manifest, layer tarballs and signatures as individual files. +This is a non-standardized format, primarily useful for debugging or noninvasive container inspection. + +### **docker://**_docker-reference_ + +An image in a registry implementing the "Docker Registry HTTP API V2". +By default, uses the authorization state in `$XDG_RUNTIME_DIR/containers/auth.json`, which is set using podman-login(1). +If the authorization state is not found there, `$HOME/.docker/config.json` is checked, which is set using docker-login(1). +The containers-registries.conf(5) further allows for configuring various settings of a registry. + +Note that a _docker-reference_ has the following format: `name[:tag|@digest]`. 
+While the docker transport does not support both a tag and a digest at the same time some formats like containers-storage do. +Digests can also be used in an image destination as long as the manifest matches the provided digest. +The digest of images can be explored with skopeo-inspect(1). +If `name` does not contain a slash, it is treated as `docker.io/library/name`. +Otherwise, the component before the first slash is checked if it is recognized as a `hostname[:port]` (i.e., it contains either a . or a :, or the component is exactly localhost). +If the first component of name is not recognized as a `hostname[:port]`, `name` is treated as `docker.io/name`. + +### **docker-archive:**_path[:docker-reference]_ + +An image is stored in the docker-save(1) formatted file. +_docker-reference_ is only used when creating such a file, and it must not contain a digest. +It is further possible to copy data to stdin by specifying `docker-archive:/dev/stdin` but note that the used file must be seekable. + +### **docker-daemon:**_docker-reference|algo:digest_ + +An image stored in the docker daemon's internal storage. +The image must be specified as a _docker-reference_ or in an alternative _algo:digest_ format when being used as an image source. +The _algo:digest_ refers to the image ID reported by docker-inspect(1). + +### **oci:**_path[:tag]_ + +An image compliant with the "Open Container Image Layout Specification" at _path_. +Using a _tag_ is optional and allows for storing multiple images at the same _path_. + +### **oci-archive:**_path[:tag]_ + +An image compliant with the "Open Container Image Layout Specification" stored as a tar(1) archive at _path_. + +### **ostree:**_docker-reference[@/absolute/repo/path]_ + +An image in the local ostree(1) repository. +_/absolute/repo/path_ defaults to _/ostree/repo_. + +## Examples + +The following examples demonstrate how some of the containers transports can be used. +The examples use skopeo-copy(1) for copying container images. + +**Copying an image from one registry to another**: +``` +$ skopeo copy docker://docker.io/library/alpine:latest docker://localhost:5000/alpine:latest +``` + +**Copying an image from a running Docker daemon to a directory in the OCI layout**: +``` +$ mkdir alpine-oci +$ skopeo copy docker-daemon:alpine:latest oci:alpine-oci +$ tree alpine-oci +test-oci/ +├── blobs +│   └── sha256 +│   ├── 83ef92b73cf4595aa7fe214ec6747228283d585f373d8f6bc08d66bebab531b7 +│   ├── 9a6259e911dcd0a53535a25a9760ad8f2eded3528e0ad5604c4488624795cecc +│   └── ff8df268d29ccbe81cdf0a173076dcfbbea4bb2b6df1dd26766a73cb7b4ae6f7 +├── index.json +└── oci-layout + +2 directories, 5 files +``` + +**Copying an image from a registry to the local storage**: +``` +$ skopeo copy docker://docker.io/library/alpine:latest containers-storage:alpine:latest +``` + +## SEE ALSO + +docker-login(1), docker-save(1), ostree(1), podman-login(1), skopeo-copy(1), skopeo-inspect(1), tar(1), container-registries.conf(5), containers-storage.conf(5) + +## AUTHORS + +Miloslav Trmač +Valentin Rothberg diff --git a/SOURCES/registries.conf b/SOURCES/registries.conf index fee6fa9..4206dc9 100644 --- a/SOURCES/registries.conf +++ b/SOURCES/registries.conf @@ -1,25 +1,82 @@ -# This is a system-wide configuration file used to -# keep track of registries for various container backends. -# It adheres to TOML format and does not support recursive -# lists of registries. - -# The default location for this configuration file is /etc/containers/registries.conf. 
-
-# The only valid categories are: 'registries.search', 'registries.insecure',
-# and 'registries.block'.
-
+# For more information on this configuration file, see containers-registries.conf(5).
+#
+# There are multiple versions of the configuration syntax available, where the
+# second iteration is backwards compatible with the first one. Mixing up both
+# formats will result in a runtime error.
+#
+# The initial configuration format looks like this:
+#
+# Registries to search for images that are not fully-qualified.
+# i.e. foobar.com/my_image:latest vs my_image:latest
 [registries.search]
-registries = ['registry.access.redhat.com', 'docker.io', 'registry.fedoraproject.org', 'quay.io', 'registry.centos.org']
+registries = ['registry.access.redhat.com', 'registry.fedoraproject.org', 'registry.centos.org', 'docker.io']

-# If you need to access insecure registries, add the registry's fully-qualified name.
-# An insecure registry is one that does not have a valid SSL certificate or only does HTTP.
+# Registries that do not use TLS when pulling images or use self-signed
+# certificates.
 [registries.insecure]
 registries = []
-
-# If you need to block pull access from a registry, uncomment the section below
-# and add the registries fully-qualified name.
-#
-# Docker only
+# Blocked Registries: blocks the `docker daemon` from pulling from the blocked registry. If you specify
+# "*", then the docker daemon will only be allowed to pull from registries listed above in the search
+# registries. Blocked Registries is deprecated because other container runtimes and tools will not use it.
+# It is recommended that you use the trust policy file /etc/containers/policy.json to control which
+# registries you want to allow users to pull and push from. policy.json gives greater flexibility, and
+# supports all container runtimes and tools including the docker daemon, cri-o, buildah ...
+# The atomic CLI `atomic trust` can be used to easily configure the policy.json file.
 [registries.block]
 registries = []
+
+# The second version of the configuration format allows specifying registry
+# mirrors:
+#
+# # An array of host[:port] registries to try when pulling an unqualified image, in order.
+# unqualified-search-registries = ["example.com"]
+#
+# [[registry]]
+# # The "prefix" field is used to choose the relevant [[registry]] TOML table;
+# # (only) the TOML table with the longest match for the input image name
+# # (taking into account namespace/repo/tag/digest separators) is used.
+# #
+# # If the prefix field is missing, it defaults to be the same as the "location" field.
+# prefix = "example.com/foo"
+#
+# # If true, unencrypted HTTP as well as TLS connections with untrusted
+# # certificates are allowed.
+# insecure = false
+#
+# # If true, pulling images with matching names is forbidden.
+# blocked = false
+#
+# # The physical location of the "prefix"-rooted namespace.
+# #
+# # By default, this is equal to "prefix" (in which case "prefix" can be omitted
+# # and the [[registry]] TOML table can only specify "location").
+# #
+# # Example: Given
+# # prefix = "example.com/foo"
+# # location = "internal-registry-for-example.net/bar"
+# # requests for the image example.com/foo/myimage:latest will actually work with the
+# # internal-registry-for-example.net/bar/myimage:latest image.
+# location = "internal-registry-for-example.net/bar"
+#
+# # (Possibly-partial) mirrors for the "prefix"-rooted namespace.
+# #
+# # The mirrors are attempted in the specified order; the first one that can be
+# # contacted and contains the image will be used (and if none of the mirrors contains the image,
+# # the primary location specified by the "registry.location" field, or using the unmodified
+# # user-specified reference, is tried last).
+# #
+# # Each TOML table in the "mirror" array can contain the following fields, with the same semantics
+# # as if specified in the [[registry]] TOML table directly:
+# # - location
+# # - insecure
+# [[registry.mirror]]
+# location = "example-mirror-0.local/mirror-for-foo"
+# [[registry.mirror]]
+# location = "example-mirror-1.local/mirrors/foo"
+# insecure = true
+# # Given the above, a pull of example.com/foo/image:latest will try:
+# # 1. example-mirror-0.local/mirror-for-foo/image:latest
+# # 2. example-mirror-1.local/mirrors/foo/image:latest
+# # 3. internal-registry-for-example.net/bar/image:latest
+# # in order, and use the first one that exists.
diff --git a/SOURCES/registries.conf.5.md b/SOURCES/registries.conf.5.md
deleted file mode 100644
index 3aa4ad5..0000000
--- a/SOURCES/registries.conf.5.md
+++ /dev/null
@@ -1,41 +0,0 @@
-% registries.conf(5) System-wide registry configuration file
-% Brent Baude
-% Aug 2017
-
-# NAME
-registries.conf - Syntax of System Registry Configuration File
-
-# DESCRIPTION
-The REGISTRIES configuration file is a system-wide configuration file for container image
-registries. The file format is TOML.
-
-# FORMAT
-The TOML_format is used to build simple list format for registries under two
-categories: `search` and `insecure`. You can list multiple registries using
-as a comma separated list.
-
-Search registries are used when the caller of a container runtime does not fully specify the
-container image that they want to execute. These registries are prepended onto the front
- of the specified container image until the named image is found at a registry.
-
-Insecure Registries. By default container runtimes use TLS when retrieving images
-from a registry. If the registry is not setup with TLS, then the container runtime
-will fail to pull images from the registry. If you add the registry to the list of
-insecure registries then the container runtime will attempt use standard web protocols to
-pull the image. It also allows you to pull from a registry with self-signed certificates.
-Note insecure registries can be used for any registry, not just the
-registries listed under search.
-
-The following example configuration defines two searchable registries and one
-insecure registry.
- -``` -[registries.search] -registries = ["registry1.com", "registry2.com"] - -[registries.insecure] -registries = ["registry3.com"] -``` - -# HISTORY -Aug 2017, Originally compiled by Brent Baude diff --git a/SOURCES/skopeo-1792243.patch b/SOURCES/skopeo-1792243.patch new file mode 100644 index 0000000..4791af6 --- /dev/null +++ b/SOURCES/skopeo-1792243.patch @@ -0,0 +1,12 @@ +diff -up ./skopeo-be6146b0a8471b02e776134119a2c37dfb70d414/vendor/github.com/mtrmac/gpgme/gpgme.go.1792243 ./skopeo-be6146b0a8471b02e776134119a2c37dfb70d414/vendor/github.com/mtrmac/gpgme/gpgme.go +--- skopeo-be6146b0a8471b02e776134119a2c37dfb70d414/vendor/github.com/mtrmac/gpgme/gpgme.go.1792243 2020-01-20 14:16:17.995468787 +0100 ++++ skopeo-be6146b0a8471b02e776134119a2c37dfb70d414/vendor/github.com/mtrmac/gpgme/gpgme.go 2020-01-20 14:16:17.997468807 +0100 +@@ -1,7 +1,7 @@ + // Package gpgme provides a Go wrapper for the GPGME library + package gpgme + +-// #cgo LDFLAGS: -lgpgme -lassuan -lgpg-error ++// #cgo LDFLAGS: -lgpgme-pthread -lassuan -lgpg-error + // #cgo CPPFLAGS: -D_FILE_OFFSET_BITS=64 + // #include + // #include diff --git a/SOURCES/skopeo-CVE-2020-8945.patch b/SOURCES/skopeo-CVE-2020-8945.patch new file mode 100644 index 0000000..d2066b0 --- /dev/null +++ b/SOURCES/skopeo-CVE-2020-8945.patch @@ -0,0 +1,1061 @@ +From c48714e522ea147e49b0d0dfddf58a9b47137055 Mon Sep 17 00:00:00 2001 +From: =?UTF-8?q?Miloslav=20Trma=C4=8D?= +Date: Thu, 20 Feb 2020 19:51:27 +0100 +Subject: [PATCH 1/3] Update to github.com/mtrmac/gpgme v0.1.2 +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +This fixes CVE-2020-8945 by incorporating proglottis/gpgme#23 . + +Other changes included by the rebase: +- Support for gpgme_off_t (~no-op on Linux) +- Wrapping a few more GPGME functions (irrelevant if we don't call them) + +Given how invasive the CVE fix is (affecting basically all binding +code), it seems safer to just update the package (and be verifiably +equivalent with upstream) than to backport and try to back out the few +other changes. + +Performed by +$ go get github.com/mtrmac/gpgme@v0.1.2 +$ make vendor +and manually backing out unrelated deletions of files (which Go 1.13 does +but Go 1.11, used for testing the old version, does not). 
+ +Signed-off-by: Miloslav Trmač +--- + go.mod | 1 + + go.sum | 2 + + vendor/github.com/mtrmac/gpgme/.appveyor.yml | 40 ++ + vendor/github.com/mtrmac/gpgme/.travis.yml | 32 ++ + vendor/github.com/mtrmac/gpgme/data.go | 18 +- + vendor/github.com/mtrmac/gpgme/go.mod | 3 + + vendor/github.com/mtrmac/gpgme/go_gpgme.c | 22 ++ + vendor/github.com/mtrmac/gpgme/go_gpgme.h | 12 + + vendor/github.com/mtrmac/gpgme/gpgme.go | 346 ++++++++++++++---- + .../mtrmac/gpgme/unset_agent_info.go | 18 + + .../mtrmac/gpgme/unset_agent_info_windows.go | 14 + + vendor/modules.txt | 2 +- + 12 files changed, 432 insertions(+), 78 deletions(-) + create mode 100644 vendor/github.com/mtrmac/gpgme/.appveyor.yml + create mode 100644 vendor/github.com/mtrmac/gpgme/.travis.yml + create mode 100644 vendor/github.com/mtrmac/gpgme/go.mod + create mode 100644 vendor/github.com/mtrmac/gpgme/unset_agent_info.go + create mode 100644 vendor/github.com/mtrmac/gpgme/unset_agent_info_windows.go + +diff --git a/go.mod b/go.mod +index 788827569..3335d7573 100644 +--- a/go.mod ++++ b/go.mod +@@ -9,6 +9,7 @@ require ( + github.com/docker/docker v0.0.0-20180522102801-da99009bbb11 + github.com/dsnet/compress v0.0.1 // indirect + github.com/go-check/check v0.0.0-20180628173108-788fd7840127 ++ github.com/mtrmac/gpgme v0.1.2 // indirect + github.com/opencontainers/go-digest v1.0.0-rc1 + github.com/opencontainers/image-spec v1.0.2-0.20190823105129-775207bd45b6 + github.com/opencontainers/image-tools v0.0.0-20170926011501-6d941547fa1d +diff --git a/go.sum b/go.sum +index c04f6f3a2..3ad0f17de 100644 +--- a/go.sum ++++ b/go.sum +@@ -91,6 +91,8 @@ github.com/mistifyio/go-zfs v2.1.1+incompatible h1:gAMO1HM9xBRONLHHYnu5iFsOJUiJd + github.com/mistifyio/go-zfs v2.1.1+incompatible/go.mod h1:8AuVvqP/mXw1px98n46wfvcGfQ4ci2FwoAjKYxuo3Z4= + github.com/mtrmac/gpgme v0.0.0-20170102180018-b2432428689c h1:xa+eQWKuJ9MbB9FBL/eoNvDFvveAkz2LQoz8PzX7Q/4= + github.com/mtrmac/gpgme v0.0.0-20170102180018-b2432428689c/go.mod h1:GhAqVMEWnTcW2dxoD/SO3n2enrgWl3y6Dnx4m59GvcA= ++github.com/mtrmac/gpgme v0.1.2 h1:dNOmvYmsrakgW7LcgiprD0yfRuQQe8/C8F6Z+zogO3s= ++github.com/mtrmac/gpgme v0.1.2/go.mod h1:GYYHnGSuS7HK3zVS2n3y73y0okK/BeKzwnn5jgiVFNI= + github.com/mtrmac/image/v4 v4.0.0-20191002203927-a64d9d2717f4 h1:AE5cilZfrGtAgMg5Ed4c2Y2KczlOsMVZAK055sSq+gc= + github.com/mtrmac/image/v4 v4.0.0-20191002203927-a64d9d2717f4/go.mod h1:0ASJH1YgJiX/eqFZObqepgsvIA4XjCgpyfwn9pDGafA= + github.com/mtrmac/image/v4 v4.0.0-20191003181245-f4c983e93262 h1:HMUEnWU3OPT09JRFQLn8VTp3GfdfiEhDMAEhkdX8QnA= +diff --git a/vendor/github.com/mtrmac/gpgme/.appveyor.yml b/vendor/github.com/mtrmac/gpgme/.appveyor.yml +new file mode 100644 +index 000000000..2fdc09ab5 +--- /dev/null ++++ b/vendor/github.com/mtrmac/gpgme/.appveyor.yml +@@ -0,0 +1,40 @@ ++--- ++version: 0.{build} ++platform: x86 ++branches: ++ only: ++ - master ++ ++clone_folder: c:\gopath\src\github.com\proglottis\gpgme ++ ++environment: ++ GOPATH: c:\gopath ++ GOROOT: C:\go-x86 ++ CGO_LDFLAGS: -LC:\gpg\lib ++ CGO_CFLAGS: -IC:\gpg\include ++ GPG_DIR: C:\gpg ++ ++install: ++ - nuget install 7ZipCLI -ExcludeVersion ++ - set PATH=%appveyor_build_folder%\7ZipCLI\tools;%PATH% ++ - appveyor DownloadFile https://www.gnupg.org/ftp/gcrypt/binary/gnupg-w32-2.1.20_20170403.exe -FileName gnupg-w32-2.1.20_20170403.exe ++ - 7z x -o%GPG_DIR% gnupg-w32-2.1.20_20170403.exe ++ - copy "%GPG_DIR%\lib\libgpg-error.imp" "%GPG_DIR%\lib\libgpg-error.a" ++ - copy "%GPG_DIR%\lib\libassuan.imp" "%GPG_DIR%\lib\libassuan.a" ++ - copy "%GPG_DIR%\lib\libgpgme.imp" 
"%GPG_DIR%\lib\libgpgme.a" ++ - set PATH=%GOPATH%\bin;%GOROOT%\bin;%GPG_DIR%\bin;C:\MinGW\bin;%PATH% ++ - C:\cygwin\bin\sed -i 's/"GPG_AGENT_INFO"/"GPG_AGENT_INFO="/;s/C.unsetenv(v)/C.putenv(v)/' %APPVEYOR_BUILD_FOLDER%\gpgme.go ++ ++test_script: ++ - go test -v github.com/proglottis/gpgme ++ ++ ++build_script: ++ - go build -o example_decrypt.exe -i %APPVEYOR_BUILD_FOLDER%\examples\decrypt.go ++ - go build -o example_encrypt.exe -i %APPVEYOR_BUILD_FOLDER%\examples\encrypt.go ++ ++artifacts: ++ - path: example_decrypt.exe ++ name: decrypt example binary ++ - path: example_encrypt.exe ++ name: encrypt example binary +\ No newline at end of file +diff --git a/vendor/github.com/mtrmac/gpgme/.travis.yml b/vendor/github.com/mtrmac/gpgme/.travis.yml +new file mode 100644 +index 000000000..619e33721 +--- /dev/null ++++ b/vendor/github.com/mtrmac/gpgme/.travis.yml +@@ -0,0 +1,32 @@ ++--- ++language: go ++os: ++ - linux ++ - osx ++ - windows ++dist: xenial ++sudo: false ++ ++go: ++ - 1.11 ++ - 1.12 ++ - 1.13 ++ ++addons: ++ apt: ++ packages: ++ - libgpgme11-dev ++ homebrew: ++ packages: ++ - gnupg ++ - gnupg@1.4 ++ - gpgme ++ update: true ++ ++matrix: ++ allow_failures: ++ - os: windows ++ ++before_install: ++ - if [[ "$TRAVIS_OS_NAME" == "windows" ]]; then choco install msys2; fi ++ - if [[ "$TRAVIS_OS_NAME" == "windows" ]]; then choco install gpg4win; fi +diff --git a/vendor/github.com/mtrmac/gpgme/data.go b/vendor/github.com/mtrmac/gpgme/data.go +index eebc97263..eee32c032 100644 +--- a/vendor/github.com/mtrmac/gpgme/data.go ++++ b/vendor/github.com/mtrmac/gpgme/data.go +@@ -50,25 +50,25 @@ func gogpgme_writefunc(handle, buffer unsafe.Pointer, size C.size_t) C.ssize_t { + } + + //export gogpgme_seekfunc +-func gogpgme_seekfunc(handle unsafe.Pointer, offset C.off_t, whence C.int) C.off_t { ++func gogpgme_seekfunc(handle unsafe.Pointer, offset C.gpgme_off_t, whence C.int) C.gpgme_off_t { + d := callbackLookup(uintptr(handle)).(*Data) + n, err := d.s.Seek(int64(offset), int(whence)) + if err != nil { + C.gpgme_err_set_errno(C.EIO) + return -1 + } +- return C.off_t(n) ++ return C.gpgme_off_t(n) + } + + // The Data buffer used to communicate with GPGME + type Data struct { +- dh C.gpgme_data_t ++ dh C.gpgme_data_t // WARNING: Call runtime.KeepAlive(d) after ANY passing of d.dh to C + buf []byte + cbs C.struct_gpgme_data_cbs + r io.Reader + w io.Writer + s io.Seeker +- cbc uintptr ++ cbc uintptr // WARNING: Call runtime.KeepAlive(d) after ANY use of d.cbc in C (typically via d.dh) + } + + func newData() *Data { +@@ -154,12 +154,14 @@ func (d *Data) Close() error { + callbackDelete(d.cbc) + } + _, err := C.gpgme_data_release(d.dh) ++ runtime.KeepAlive(d) + d.dh = nil + return err + } + + func (d *Data) Write(p []byte) (int, error) { + n, err := C.gpgme_data_write(d.dh, unsafe.Pointer(&p[0]), C.size_t(len(p))) ++ runtime.KeepAlive(d) + if err != nil { + return 0, err + } +@@ -171,6 +173,7 @@ func (d *Data) Write(p []byte) (int, error) { + + func (d *Data) Read(p []byte) (int, error) { + n, err := C.gpgme_data_read(d.dh, unsafe.Pointer(&p[0]), C.size_t(len(p))) ++ runtime.KeepAlive(d) + if err != nil { + return 0, err + } +@@ -181,11 +184,14 @@ func (d *Data) Read(p []byte) (int, error) { + } + + func (d *Data) Seek(offset int64, whence int) (int64, error) { +- n, err := C.gpgme_data_seek(d.dh, C.off_t(offset), C.int(whence)) ++ n, err := C.gogpgme_data_seek(d.dh, C.gpgme_off_t(offset), C.int(whence)) ++ runtime.KeepAlive(d) + return int64(n), err + } + + // Name returns the associated filename if any + 
func (d *Data) Name() string { +- return C.GoString(C.gpgme_data_get_file_name(d.dh)) ++ res := C.GoString(C.gpgme_data_get_file_name(d.dh)) ++ runtime.KeepAlive(d) ++ return res + } +diff --git a/vendor/github.com/mtrmac/gpgme/go.mod b/vendor/github.com/mtrmac/gpgme/go.mod +new file mode 100644 +index 000000000..3dd09c9fb +--- /dev/null ++++ b/vendor/github.com/mtrmac/gpgme/go.mod +@@ -0,0 +1,3 @@ ++module github.com/mtrmac/gpgme ++ ++go 1.11 +diff --git a/vendor/github.com/mtrmac/gpgme/go_gpgme.c b/vendor/github.com/mtrmac/gpgme/go_gpgme.c +index b887574e0..00da3ab30 100644 +--- a/vendor/github.com/mtrmac/gpgme/go_gpgme.c ++++ b/vendor/github.com/mtrmac/gpgme/go_gpgme.c +@@ -8,6 +8,28 @@ void gogpgme_set_passphrase_cb(gpgme_ctx_t ctx, gpgme_passphrase_cb_t cb, uintpt + gpgme_set_passphrase_cb(ctx, cb, (void *)handle); + } + ++gpgme_off_t gogpgme_data_seek(gpgme_data_t dh, gpgme_off_t offset, int whence) { ++ return gpgme_data_seek(dh, offset, whence); ++} ++ ++gpgme_error_t gogpgme_op_assuan_transact_ext( ++ gpgme_ctx_t ctx, ++ char* cmd, ++ uintptr_t data_h, ++ uintptr_t inquiry_h, ++ uintptr_t status_h, ++ gpgme_error_t *operr ++ ){ ++ return gpgme_op_assuan_transact_ext( ++ ctx, ++ cmd, ++ (gpgme_assuan_data_cb_t) gogpgme_assuan_data_callback, (void *)data_h, ++ (gpgme_assuan_inquire_cb_t) gogpgme_assuan_inquiry_callback, (void *)inquiry_h, ++ (gpgme_assuan_status_cb_t) gogpgme_assuan_status_callback, (void *)status_h, ++ operr ++ ); ++} ++ + unsigned int key_revoked(gpgme_key_t k) { + return k->revoked; + } +diff --git a/vendor/github.com/mtrmac/gpgme/go_gpgme.h b/vendor/github.com/mtrmac/gpgme/go_gpgme.h +index a3678b127..d4826ab36 100644 +--- a/vendor/github.com/mtrmac/gpgme/go_gpgme.h ++++ b/vendor/github.com/mtrmac/gpgme/go_gpgme.h +@@ -6,12 +6,24 @@ + + #include + ++/* GPGME_VERSION_NUMBER was introduced in 1.4.0 */ ++#if !defined(GPGME_VERSION_NUMBER) || GPGME_VERSION_NUMBER < 0x010402 ++typedef off_t gpgme_off_t; /* Introduced in 1.4.2 */ ++#endif ++ + extern ssize_t gogpgme_readfunc(void *handle, void *buffer, size_t size); + extern ssize_t gogpgme_writefunc(void *handle, void *buffer, size_t size); + extern off_t gogpgme_seekfunc(void *handle, off_t offset, int whence); + extern gpgme_error_t gogpgme_passfunc(void *hook, char *uid_hint, char *passphrase_info, int prev_was_bad, int fd); + extern gpgme_error_t gogpgme_data_new_from_cbs(gpgme_data_t *dh, gpgme_data_cbs_t cbs, uintptr_t handle); + extern void gogpgme_set_passphrase_cb(gpgme_ctx_t ctx, gpgme_passphrase_cb_t cb, uintptr_t handle); ++extern gpgme_off_t gogpgme_data_seek(gpgme_data_t dh, gpgme_off_t offset, int whence); ++ ++extern gpgme_error_t gogpgme_op_assuan_transact_ext(gpgme_ctx_t ctx, char *cmd, uintptr_t data_h, uintptr_t inquiry_h , uintptr_t status_h, gpgme_error_t *operr); ++ ++extern gpgme_error_t gogpgme_assuan_data_callback(void *opaque, void* data, size_t datalen ); ++extern gpgme_error_t gogpgme_assuan_inquiry_callback(void *opaque, char* name, char* args); ++extern gpgme_error_t gogpgme_assuan_status_callback(void *opaque, char* status, char* args); + + extern unsigned int key_revoked(gpgme_key_t k); + extern unsigned int key_expired(gpgme_key_t k); +diff --git a/vendor/github.com/mtrmac/gpgme/gpgme.go b/vendor/github.com/mtrmac/gpgme/gpgme.go +index 20aad737c..c19b9aebc 100644 +--- a/vendor/github.com/mtrmac/gpgme/gpgme.go ++++ b/vendor/github.com/mtrmac/gpgme/gpgme.go +@@ -7,7 +7,6 @@ package gpgme + // #include + // #include "go_gpgme.h" + import "C" +- + import ( + "fmt" + "io" +@@ -48,9 +47,8 
@@ const ( + ProtocolAssuan Protocol = C.GPGME_PROTOCOL_ASSUAN + ProtocolG13 Protocol = C.GPGME_PROTOCOL_G13 + ProtocolUIServer Protocol = C.GPGME_PROTOCOL_UISERVER +- // ProtocolSpawn Protocol = C.GPGME_PROTOCOL_SPAWN // Unavailable in 1.4.3 +- ProtocolDefault Protocol = C.GPGME_PROTOCOL_DEFAULT +- ProtocolUnknown Protocol = C.GPGME_PROTOCOL_UNKNOWN ++ ProtocolDefault Protocol = C.GPGME_PROTOCOL_DEFAULT ++ ProtocolUnknown Protocol = C.GPGME_PROTOCOL_UNKNOWN + ) + + type PinEntryMode int +@@ -70,7 +68,6 @@ const ( + EncryptNoEncryptTo EncryptFlag = C.GPGME_ENCRYPT_NO_ENCRYPT_TO + EncryptPrepare EncryptFlag = C.GPGME_ENCRYPT_PREPARE + EncryptExceptSign EncryptFlag = C.GPGME_ENCRYPT_EXPECT_SIGN +- // EncryptNoCompress EncryptFlag = C.GPGME_ENCRYPT_NO_COMPRESS // Unavailable in 1.4.3 + ) + + type HashAlgo int +@@ -84,7 +81,6 @@ const ( + KeyListModeExtern KeyListMode = C.GPGME_KEYLIST_MODE_EXTERN + KeyListModeSigs KeyListMode = C.GPGME_KEYLIST_MODE_SIGS + KeyListModeSigNotations KeyListMode = C.GPGME_KEYLIST_MODE_SIG_NOTATIONS +- // KeyListModeWithSecret KeyListMode = C.GPGME_KEYLIST_MODE_WITH_SECRET // Unavailable in 1.4.3 + KeyListModeEphemeral KeyListMode = C.GPGME_KEYLIST_MODE_EPHEMERAL + KeyListModeModeValidate KeyListMode = C.GPGME_KEYLIST_MODE_VALIDATE + ) +@@ -168,39 +164,60 @@ func EngineCheckVersion(p Protocol) error { + } + + type EngineInfo struct { +- info C.gpgme_engine_info_t ++ next *EngineInfo ++ protocol Protocol ++ fileName string ++ homeDir string ++ version string ++ requiredVersion string + } + +-func (e *EngineInfo) Next() *EngineInfo { +- if e.info.next == nil { +- return nil ++func copyEngineInfo(info C.gpgme_engine_info_t) *EngineInfo { ++ res := &EngineInfo{ ++ next: nil, ++ protocol: Protocol(info.protocol), ++ fileName: C.GoString(info.file_name), ++ homeDir: C.GoString(info.home_dir), ++ version: C.GoString(info.version), ++ requiredVersion: C.GoString(info.req_version), ++ } ++ if info.next != nil { ++ res.next = copyEngineInfo(info.next) + } +- return &EngineInfo{info: e.info.next} ++ return res ++} ++ ++func (e *EngineInfo) Next() *EngineInfo { ++ return e.next + } + + func (e *EngineInfo) Protocol() Protocol { +- return Protocol(e.info.protocol) ++ return e.protocol + } + + func (e *EngineInfo) FileName() string { +- return C.GoString(e.info.file_name) ++ return e.fileName + } + + func (e *EngineInfo) Version() string { +- return C.GoString(e.info.version) ++ return e.version + } + + func (e *EngineInfo) RequiredVersion() string { +- return C.GoString(e.info.req_version) ++ return e.requiredVersion + } + + func (e *EngineInfo) HomeDir() string { +- return C.GoString(e.info.home_dir) ++ return e.homeDir + } + + func GetEngineInfo() (*EngineInfo, error) { +- info := &EngineInfo{} +- return info, handleError(C.gpgme_get_engine_info(&info.info)) ++ var cInfo C.gpgme_engine_info_t ++ err := handleError(C.gpgme_get_engine_info(&cInfo)) ++ if err != nil { ++ return nil, err ++ } ++ return copyEngineInfo(cInfo), nil // It is up to the caller not to invalidate cInfo concurrently until this is done. 
+ } + + func SetEngineInfo(proto Protocol, fileName, homeDir string) error { +@@ -261,9 +278,9 @@ type Context struct { + KeyError error + + callback Callback +- cbc uintptr ++ cbc uintptr // WARNING: Call runtime.KeepAlive(c) after ANY use of c.cbc in C (typically via c.ctx) + +- ctx C.gpgme_ctx_t ++ ctx C.gpgme_ctx_t // WARNING: Call runtime.KeepAlive(c) after ANY passing of c.ctx to C + } + + func New() (*Context, error) { +@@ -281,49 +298,68 @@ func (c *Context) Release() { + callbackDelete(c.cbc) + } + C.gpgme_release(c.ctx) ++ runtime.KeepAlive(c) + c.ctx = nil + } + + func (c *Context) SetArmor(yes bool) { + C.gpgme_set_armor(c.ctx, cbool(yes)) ++ runtime.KeepAlive(c) + } + + func (c *Context) Armor() bool { +- return C.gpgme_get_armor(c.ctx) != 0 ++ res := C.gpgme_get_armor(c.ctx) != 0 ++ runtime.KeepAlive(c) ++ return res + } + + func (c *Context) SetTextMode(yes bool) { + C.gpgme_set_textmode(c.ctx, cbool(yes)) ++ runtime.KeepAlive(c) + } + + func (c *Context) TextMode() bool { +- return C.gpgme_get_textmode(c.ctx) != 0 ++ res := C.gpgme_get_textmode(c.ctx) != 0 ++ runtime.KeepAlive(c) ++ return res + } + + func (c *Context) SetProtocol(p Protocol) error { +- return handleError(C.gpgme_set_protocol(c.ctx, C.gpgme_protocol_t(p))) ++ err := handleError(C.gpgme_set_protocol(c.ctx, C.gpgme_protocol_t(p))) ++ runtime.KeepAlive(c) ++ return err + } + + func (c *Context) Protocol() Protocol { +- return Protocol(C.gpgme_get_protocol(c.ctx)) ++ res := Protocol(C.gpgme_get_protocol(c.ctx)) ++ runtime.KeepAlive(c) ++ return res + } + + func (c *Context) SetKeyListMode(m KeyListMode) error { +- return handleError(C.gpgme_set_keylist_mode(c.ctx, C.gpgme_keylist_mode_t(m))) ++ err := handleError(C.gpgme_set_keylist_mode(c.ctx, C.gpgme_keylist_mode_t(m))) ++ runtime.KeepAlive(c) ++ return err + } + + func (c *Context) KeyListMode() KeyListMode { +- return KeyListMode(C.gpgme_get_keylist_mode(c.ctx)) ++ res := KeyListMode(C.gpgme_get_keylist_mode(c.ctx)) ++ runtime.KeepAlive(c) ++ return res + } + + // Unavailable in 1.3.2: + // func (c *Context) SetPinEntryMode(m PinEntryMode) error { +-// return handleError(C.gpgme_set_pinentry_mode(c.ctx, C.gpgme_pinentry_mode_t(m))) ++// err := handleError(C.gpgme_set_pinentry_mode(c.ctx, C.gpgme_pinentry_mode_t(m))) ++// runtime.KeepAlive(c) ++// return err + // } + + // Unavailable in 1.3.2: + // func (c *Context) PinEntryMode() PinEntryMode { +-// return PinEntryMode(C.gpgme_get_pinentry_mode(c.ctx)) ++// res := PinEntryMode(C.gpgme_get_pinentry_mode(c.ctx)) ++// runtime.KeepAlive(c) ++// return res + // } + + func (c *Context) SetCallback(callback Callback) error { +@@ -340,11 +376,17 @@ func (c *Context) SetCallback(callback Callback) error { + c.cbc = 0 + _, err = C.gogpgme_set_passphrase_cb(c.ctx, nil, 0) + } ++ runtime.KeepAlive(c) + return err + } + + func (c *Context) EngineInfo() *EngineInfo { +- return &EngineInfo{info: C.gpgme_ctx_get_engine_info(c.ctx)} ++ cInfo := C.gpgme_ctx_get_engine_info(c.ctx) ++ runtime.KeepAlive(c) ++ // NOTE: c must be live as long as we are accessing cInfo. 
++ res := copyEngineInfo(cInfo) ++ runtime.KeepAlive(c) // for accesses to cInfo ++ return res + } + + func (c *Context) SetEngineInfo(proto Protocol, fileName, homeDir string) error { +@@ -357,19 +399,23 @@ func (c *Context) SetEngineInfo(proto Protocol, fileName, homeDir string) error + chome = C.CString(homeDir) + defer C.free(unsafe.Pointer(chome)) + } +- return handleError(C.gpgme_ctx_set_engine_info(c.ctx, C.gpgme_protocol_t(proto), cfn, chome)) ++ err := handleError(C.gpgme_ctx_set_engine_info(c.ctx, C.gpgme_protocol_t(proto), cfn, chome)) ++ runtime.KeepAlive(c) ++ return err + } + + func (c *Context) KeyListStart(pattern string, secretOnly bool) error { + cpattern := C.CString(pattern) + defer C.free(unsafe.Pointer(cpattern)) +- err := C.gpgme_op_keylist_start(c.ctx, cpattern, cbool(secretOnly)) +- return handleError(err) ++ err := handleError(C.gpgme_op_keylist_start(c.ctx, cpattern, cbool(secretOnly))) ++ runtime.KeepAlive(c) ++ return err + } + + func (c *Context) KeyListNext() bool { + c.Key = newKey() + err := handleError(C.gpgme_op_keylist_next(c.ctx, &c.Key.k)) ++ runtime.KeepAlive(c) // implies runtime.KeepAlive(c.Key) + if err != nil { + if e, ok := err.(Error); ok && e.Code() == ErrorEOF { + c.KeyError = nil +@@ -383,7 +429,9 @@ func (c *Context) KeyListNext() bool { + } + + func (c *Context) KeyListEnd() error { +- return handleError(C.gpgme_op_keylist_end(c.ctx)) ++ err := handleError(C.gpgme_op_keylist_end(c.ctx)) ++ runtime.KeepAlive(c) ++ return err + } + + func (c *Context) GetKey(fingerprint string, secret bool) (*Key, error) { +@@ -391,7 +439,11 @@ func (c *Context) GetKey(fingerprint string, secret bool) (*Key, error) { + cfpr := C.CString(fingerprint) + defer C.free(unsafe.Pointer(cfpr)) + err := handleError(C.gpgme_get_key(c.ctx, cfpr, &key.k, cbool(secret))) +- if e, ok := err.(Error); key.k == nil && ok && e.Code() == ErrorEOF { ++ runtime.KeepAlive(c) ++ runtime.KeepAlive(key) ++ keyKIsNil := key.k == nil ++ runtime.KeepAlive(key) ++ if e, ok := err.(Error); keyKIsNil && ok && e.Code() == ErrorEOF { + return nil, fmt.Errorf("key %q not found", fingerprint) + } + if err != nil { +@@ -401,11 +453,19 @@ func (c *Context) GetKey(fingerprint string, secret bool) (*Key, error) { + } + + func (c *Context) Decrypt(ciphertext, plaintext *Data) error { +- return handleError(C.gpgme_op_decrypt(c.ctx, ciphertext.dh, plaintext.dh)) ++ err := handleError(C.gpgme_op_decrypt(c.ctx, ciphertext.dh, plaintext.dh)) ++ runtime.KeepAlive(c) ++ runtime.KeepAlive(ciphertext) ++ runtime.KeepAlive(plaintext) ++ return err + } + + func (c *Context) DecryptVerify(ciphertext, plaintext *Data) error { +- return handleError(C.gpgme_op_decrypt_verify(c.ctx, ciphertext.dh, plaintext.dh)) ++ err := handleError(C.gpgme_op_decrypt_verify(c.ctx, ciphertext.dh, plaintext.dh)) ++ runtime.KeepAlive(c) ++ runtime.KeepAlive(ciphertext) ++ runtime.KeepAlive(plaintext) ++ return err + } + + type Signature struct { +@@ -432,10 +492,20 @@ func (c *Context) Verify(sig, signedText, plain *Data) (string, []Signature, err + plainPtr = plain.dh + } + err := handleError(C.gpgme_op_verify(c.ctx, sig.dh, signedTextPtr, plainPtr)) ++ runtime.KeepAlive(c) ++ runtime.KeepAlive(sig) ++ if signedText != nil { ++ runtime.KeepAlive(signedText) ++ } ++ if plain != nil { ++ runtime.KeepAlive(plain) ++ } + if err != nil { + return "", nil, err + } + res := C.gpgme_op_verify_result(c.ctx) ++ runtime.KeepAlive(c) ++ // NOTE: c must be live as long as we are accessing res. 
+ sigs := []Signature{} + for s := res.signatures; s != nil; s = s.next { + sig := Signature{ +@@ -455,7 +525,9 @@ func (c *Context) Verify(sig, signedText, plain *Data) (string, []Signature, err + } + sigs = append(sigs, sig) + } +- return C.GoString(res.file_name), sigs, nil ++ fileName := C.GoString(res.file_name) ++ runtime.KeepAlive(c) // for all accesses to res above ++ return fileName, sigs, nil + } + + func (c *Context) Encrypt(recipients []*Key, flags EncryptFlag, plaintext, ciphertext *Data) error { +@@ -467,18 +539,116 @@ func (c *Context) Encrypt(recipients []*Key, flags EncryptFlag, plaintext, ciphe + *ptr = recipients[i].k + } + err := C.gpgme_op_encrypt(c.ctx, (*C.gpgme_key_t)(recp), C.gpgme_encrypt_flags_t(flags), plaintext.dh, ciphertext.dh) ++ runtime.KeepAlive(c) ++ runtime.KeepAlive(recipients) ++ runtime.KeepAlive(plaintext) ++ runtime.KeepAlive(ciphertext) + return handleError(err) + } + + func (c *Context) Sign(signers []*Key, plain, sig *Data, mode SigMode) error { + C.gpgme_signers_clear(c.ctx) ++ runtime.KeepAlive(c) + for _, k := range signers { +- if err := handleError(C.gpgme_signers_add(c.ctx, k.k)); err != nil { ++ err := handleError(C.gpgme_signers_add(c.ctx, k.k)) ++ runtime.KeepAlive(c) ++ runtime.KeepAlive(k) ++ if err != nil { + C.gpgme_signers_clear(c.ctx) ++ runtime.KeepAlive(c) + return err + } + } +- return handleError(C.gpgme_op_sign(c.ctx, plain.dh, sig.dh, C.gpgme_sig_mode_t(mode))) ++ err := handleError(C.gpgme_op_sign(c.ctx, plain.dh, sig.dh, C.gpgme_sig_mode_t(mode))) ++ runtime.KeepAlive(c) ++ runtime.KeepAlive(plain) ++ runtime.KeepAlive(sig) ++ return err ++} ++ ++type AssuanDataCallback func(data []byte) error ++type AssuanInquireCallback func(name, args string) error ++type AssuanStatusCallback func(status, args string) error ++ ++// AssuanSend sends a raw Assuan command to gpg-agent ++func (c *Context) AssuanSend( ++ cmd string, ++ data AssuanDataCallback, ++ inquiry AssuanInquireCallback, ++ status AssuanStatusCallback, ++) error { ++ var operr C.gpgme_error_t ++ ++ dataPtr := callbackAdd(&data) ++ inquiryPtr := callbackAdd(&inquiry) ++ statusPtr := callbackAdd(&status) ++ cmdCStr := C.CString(cmd) ++ defer C.free(unsafe.Pointer(cmdCStr)) ++ err := C.gogpgme_op_assuan_transact_ext( ++ c.ctx, ++ cmdCStr, ++ C.uintptr_t(dataPtr), ++ C.uintptr_t(inquiryPtr), ++ C.uintptr_t(statusPtr), ++ &operr, ++ ) ++ runtime.KeepAlive(c) ++ ++ if handleError(operr) != nil { ++ return handleError(operr) ++ } ++ return handleError(err) ++} ++ ++//export gogpgme_assuan_data_callback ++func gogpgme_assuan_data_callback(handle unsafe.Pointer, data unsafe.Pointer, datalen C.size_t) C.gpgme_error_t { ++ c := callbackLookup(uintptr(handle)).(*AssuanDataCallback) ++ if *c == nil { ++ return 0 ++ } ++ (*c)(C.GoBytes(data, C.int(datalen))) ++ return 0 ++} ++ ++//export gogpgme_assuan_inquiry_callback ++func gogpgme_assuan_inquiry_callback(handle unsafe.Pointer, cName *C.char, cArgs *C.char) C.gpgme_error_t { ++ name := C.GoString(cName) ++ args := C.GoString(cArgs) ++ c := callbackLookup(uintptr(handle)).(*AssuanInquireCallback) ++ if *c == nil { ++ return 0 ++ } ++ (*c)(name, args) ++ return 0 ++} ++ ++//export gogpgme_assuan_status_callback ++func gogpgme_assuan_status_callback(handle unsafe.Pointer, cStatus *C.char, cArgs *C.char) C.gpgme_error_t { ++ status := C.GoString(cStatus) ++ args := C.GoString(cArgs) ++ c := callbackLookup(uintptr(handle)).(*AssuanStatusCallback) ++ if *c == nil { ++ return 0 ++ } ++ (*c)(status, args) ++ return 0 ++} ++ ++// 
ExportModeFlags defines how keys are exported from Export ++type ExportModeFlags uint ++ ++const ( ++ ExportModeExtern ExportModeFlags = C.GPGME_EXPORT_MODE_EXTERN ++ ExportModeMinimal ExportModeFlags = C.GPGME_EXPORT_MODE_MINIMAL ++) ++ ++func (c *Context) Export(pattern string, mode ExportModeFlags, data *Data) error { ++ pat := C.CString(pattern) ++ defer C.free(unsafe.Pointer(pat)) ++ err := handleError(C.gpgme_op_export(c.ctx, pat, C.gpgme_export_mode_t(mode), data.dh)) ++ runtime.KeepAlive(c) ++ runtime.KeepAlive(data) ++ return err + } + + // ImportStatusFlags describes the type of ImportStatus.Status. The C API in gpgme.h simply uses "unsigned". +@@ -517,10 +687,14 @@ type ImportResult struct { + + func (c *Context) Import(keyData *Data) (*ImportResult, error) { + err := handleError(C.gpgme_op_import(c.ctx, keyData.dh)) ++ runtime.KeepAlive(c) ++ runtime.KeepAlive(keyData) + if err != nil { + return nil, err + } + res := C.gpgme_op_import_result(c.ctx) ++ runtime.KeepAlive(c) ++ // NOTE: c must be live as long as we are accessing res. + imports := []ImportStatus{} + for s := res.imports; s != nil; s = s.next { + imports = append(imports, ImportStatus{ +@@ -529,7 +703,7 @@ func (c *Context) Import(keyData *Data) (*ImportResult, error) { + Status: ImportStatusFlags(s.status), + }) + } +- return &ImportResult{ ++ importResult := &ImportResult{ + Considered: int(res.considered), + NoUserID: int(res.no_user_id), + Imported: int(res.imported), +@@ -544,11 +718,13 @@ func (c *Context) Import(keyData *Data) (*ImportResult, error) { + SecretUnchanged: int(res.secret_unchanged), + NotImported: int(res.not_imported), + Imports: imports, +- }, nil ++ } ++ runtime.KeepAlive(c) // for all accesses to res above ++ return importResult, nil + } + + type Key struct { +- k C.gpgme_key_t ++ k C.gpgme_key_t // WARNING: Call Runtime.KeepAlive(k) after ANY passing of k.k to C + } + + func newKey() *Key { +@@ -559,85 +735,122 @@ func newKey() *Key { + + func (k *Key) Release() { + C.gpgme_key_release(k.k) ++ runtime.KeepAlive(k) + k.k = nil + } + + func (k *Key) Revoked() bool { +- return C.key_revoked(k.k) != 0 ++ res := C.key_revoked(k.k) != 0 ++ runtime.KeepAlive(k) ++ return res + } + + func (k *Key) Expired() bool { +- return C.key_expired(k.k) != 0 ++ res := C.key_expired(k.k) != 0 ++ runtime.KeepAlive(k) ++ return res + } + + func (k *Key) Disabled() bool { +- return C.key_disabled(k.k) != 0 ++ res := C.key_disabled(k.k) != 0 ++ runtime.KeepAlive(k) ++ return res + } + + func (k *Key) Invalid() bool { +- return C.key_invalid(k.k) != 0 ++ res := C.key_invalid(k.k) != 0 ++ runtime.KeepAlive(k) ++ return res + } + + func (k *Key) CanEncrypt() bool { +- return C.key_can_encrypt(k.k) != 0 ++ res := C.key_can_encrypt(k.k) != 0 ++ runtime.KeepAlive(k) ++ return res + } + + func (k *Key) CanSign() bool { +- return C.key_can_sign(k.k) != 0 ++ res := C.key_can_sign(k.k) != 0 ++ runtime.KeepAlive(k) ++ return res + } + + func (k *Key) CanCertify() bool { +- return C.key_can_certify(k.k) != 0 ++ res := C.key_can_certify(k.k) != 0 ++ runtime.KeepAlive(k) ++ return res + } + + func (k *Key) Secret() bool { +- return C.key_secret(k.k) != 0 ++ res := C.key_secret(k.k) != 0 ++ runtime.KeepAlive(k) ++ return res + } + + func (k *Key) CanAuthenticate() bool { +- return C.key_can_authenticate(k.k) != 0 ++ res := C.key_can_authenticate(k.k) != 0 ++ runtime.KeepAlive(k) ++ return res + } + + func (k *Key) IsQualified() bool { +- return C.key_is_qualified(k.k) != 0 ++ res := C.key_is_qualified(k.k) != 0 ++ 
runtime.KeepAlive(k) ++ return res + } + + func (k *Key) Protocol() Protocol { +- return Protocol(k.k.protocol) ++ res := Protocol(k.k.protocol) ++ runtime.KeepAlive(k) ++ return res + } + + func (k *Key) IssuerSerial() string { +- return C.GoString(k.k.issuer_serial) ++ res := C.GoString(k.k.issuer_serial) ++ runtime.KeepAlive(k) ++ return res + } + + func (k *Key) IssuerName() string { +- return C.GoString(k.k.issuer_name) ++ res := C.GoString(k.k.issuer_name) ++ runtime.KeepAlive(k) ++ return res + } + + func (k *Key) ChainID() string { +- return C.GoString(k.k.chain_id) ++ res := C.GoString(k.k.chain_id) ++ runtime.KeepAlive(k) ++ return res + } + + func (k *Key) OwnerTrust() Validity { +- return Validity(k.k.owner_trust) ++ res := Validity(k.k.owner_trust) ++ runtime.KeepAlive(k) ++ return res + } + + func (k *Key) SubKeys() *SubKey { +- if k.k.subkeys == nil { ++ subKeys := k.k.subkeys ++ runtime.KeepAlive(k) ++ if subKeys == nil { + return nil + } +- return &SubKey{k: k.k.subkeys, parent: k} ++ return &SubKey{k: subKeys, parent: k} // The parent: k reference ensures subKeys remains valid + } + + func (k *Key) UserIDs() *UserID { +- if k.k.uids == nil { ++ uids := k.k.uids ++ runtime.KeepAlive(k) ++ if uids == nil { + return nil + } +- return &UserID{u: k.k.uids, parent: k} ++ return &UserID{u: uids, parent: k} // The parent: k reference ensures uids remains valid + } + + func (k *Key) KeyListMode() KeyListMode { +- return KeyListMode(k.k.keylist_mode) ++ res := KeyListMode(k.k.keylist_mode) ++ runtime.KeepAlive(k) ++ return res + } + + type SubKey struct { +@@ -737,12 +950,3 @@ func (u *UserID) Comment() string { + func (u *UserID) Email() string { + return C.GoString(u.u.email) + } +- +-// This is somewhat of a horrible hack. We need to unset GPG_AGENT_INFO so that gpgme does not pass --use-agent to GPG. +-// os.Unsetenv should be enough, but that only calls the underlying C library (which gpgme uses) if cgo is involved +-// - and cgo can't be used in tests. So, provide this helper for test initialization. +-func unsetenvGPGAgentInfo() { +- v := C.CString("GPG_AGENT_INFO") +- defer C.free(unsafe.Pointer(v)) +- C.unsetenv(v) +-} +diff --git a/vendor/github.com/mtrmac/gpgme/unset_agent_info.go b/vendor/github.com/mtrmac/gpgme/unset_agent_info.go +new file mode 100644 +index 000000000..986aca59f +--- /dev/null ++++ b/vendor/github.com/mtrmac/gpgme/unset_agent_info.go +@@ -0,0 +1,18 @@ ++// +build !windows ++ ++package gpgme ++ ++// #include ++import "C" ++import ( ++ "unsafe" ++) ++ ++// This is somewhat of a horrible hack. We need to unset GPG_AGENT_INFO so that gpgme does not pass --use-agent to GPG. ++// os.Unsetenv should be enough, but that only calls the underlying C library (which gpgme uses) if cgo is involved ++// - and cgo can't be used in tests. So, provide this helper for test initialization. 
++func unsetenvGPGAgentInfo() { ++ v := C.CString("GPG_AGENT_INFO") ++ defer C.free(unsafe.Pointer(v)) ++ C.unsetenv(v) ++} +diff --git a/vendor/github.com/mtrmac/gpgme/unset_agent_info_windows.go b/vendor/github.com/mtrmac/gpgme/unset_agent_info_windows.go +new file mode 100644 +index 000000000..431ec86d3 +--- /dev/null ++++ b/vendor/github.com/mtrmac/gpgme/unset_agent_info_windows.go +@@ -0,0 +1,14 @@ ++package gpgme ++ ++// #include ++import "C" ++import ( ++ "unsafe" ++) ++ ++// unsetenv is not available in mingw ++func unsetenvGPGAgentInfo() { ++ v := C.CString("GPG_AGENT_INFO=") ++ defer C.free(unsafe.Pointer(v)) ++ C.putenv(v) ++} +diff --git a/vendor/modules.txt b/vendor/modules.txt +index 013f7f5ec..6cb286331 100644 +--- a/vendor/modules.txt ++++ b/vendor/modules.txt +@@ -189,7 +189,7 @@ github.com/mattn/go-isatty + github.com/mattn/go-shellwords + # github.com/mistifyio/go-zfs v2.1.1+incompatible + github.com/mistifyio/go-zfs +-# github.com/mtrmac/gpgme v0.0.0-20170102180018-b2432428689c ++# github.com/mtrmac/gpgme v0.1.2 + github.com/mtrmac/gpgme + # github.com/opencontainers/go-digest v1.0.0-rc1 + github.com/opencontainers/go-digest + +From 41a8eabf72768a2d55d819bf78ede96b7a7854e0 Mon Sep 17 00:00:00 2001 +From: Giuseppe Scrivano +Date: Wed, 30 Oct 2019 08:12:48 +0100 +Subject: [PATCH 2/3] Dockerfile: use golang-github-cpuguy83-go-md2man + +the package was renamed on Fedora 31. + +Signed-off-by: Giuseppe Scrivano +--- + Dockerfile | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/Dockerfile b/Dockerfile +index 80b5fd62c..a62cdf106 100644 +--- a/Dockerfile ++++ b/Dockerfile +@@ -1,6 +1,6 @@ + FROM fedora + +-RUN dnf -y update && dnf install -y make git golang golang-github-cpuguy83-go-md2man \ ++RUN dnf -y update && dnf install -y make git golang golang-github-cpuguy83-md2man \ + # storage deps + btrfs-progs-devel \ + device-mapper-devel \ + +From 27a8cd845d71acb041e4a0219c4f998576211034 Mon Sep 17 00:00:00 2001 +From: =?UTF-8?q?Miloslav=20Trma=C4=8D?= +Date: Thu, 31 Oct 2019 17:33:35 +0100 +Subject: [PATCH 3/3] Revert "Temporarily work around auth.json location + confusion" +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +This reverts commit 4962559e5cb79a0d6d869b1c26dbc0171354eb73. + +Signed-off-by: Miloslav Trmač +--- + systemtest/040-local-registry-auth.bats | 9 --------- + 1 file changed, 9 deletions(-) + +diff --git a/systemtest/040-local-registry-auth.bats b/systemtest/040-local-registry-auth.bats +index 2705d0d55..fe1b2f974 100644 +--- a/systemtest/040-local-registry-auth.bats ++++ b/systemtest/040-local-registry-auth.bats +@@ -14,15 +14,6 @@ function setup() { + mkdir -p $_cred_dir/containers + rm -f $_cred_dir/containers/auth.json + +- # TODO: This is here to work around +- # https://github.com/containers/libpod/issues/4227 in the +- # "auth: credentials via podman login" test. +- # It should be removed once a packaged version of podman which includes +- # that fix is available in our CI environment, since we _want_ to be +- # checking that podman and skopeo agree on the default for where registry +- # credentials should be stored. 
+- export REGISTRY_AUTH_FILE=$_cred_dir/containers/auth.json +- + # Start authenticated registry with random password + testuser=testuser + testpassword=$(random_string 15) diff --git a/SPECS/skopeo.spec b/SPECS/skopeo.spec index 26c1484..a31712d 100644 --- a/SPECS/skopeo.spec +++ b/SPECS/skopeo.spec @@ -1,5 +1,3 @@ -%global with_devel 0 -%global with_bundled 1 %global with_debug 1 %global with_check 0 @@ -7,122 +5,213 @@ %global _find_debuginfo_dwz_opts %{nil} %global _dwz_low_mem_die_limit 0 %else -%global debug_package %{nil} +%global debug_package %{nil} %endif %if ! 0%{?gobuild:1} -%define gobuild(o:) go build -buildmode pie -compiler gc -tags="rpm_crashtraceback ${BUILDTAGS:-}" -ldflags "${LDFLAGS:-} -B 0x$(head -c20 /dev/urandom|od -An -tx1|tr -d ' \\n') -extldflags '-Wl,-z,relro -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld '" -a -v -x %{?**}; +%define gobuild(o:) \ +scl enable go-toolset-1.12 -- go build -buildmode pie -compiler gc -tags="rpm_crashtraceback ${BUILDTAGS:-}" -ldflags "${LDFLAGS:-} -B 0x$(head -c20 /dev/urandom|od -An -tx1|tr -d ' \\n') -extldflags '%__global_ldflags'" -a -v -x %{?**}; %endif -%global provider github -%global provider_tld com -%global project containers -%global repo skopeo +%global provider github +%global provider_tld com +%global project containers +%global repo skopeo # https://github.com/containers/skopeo -%global import_path %{provider}.%{provider_tld}/%{project}/%{repo} -%global commit0 e079f9d61b2508b57e9510752d7e893b544c3cb8 -%global shortcommit0 %(c=%{commit0}; echo ${c:0:7}) -%global git0 https://%{import_path} +%global provider_prefix %{provider}.%{provider_tld}/%{project}/%{repo} +%global import_path %{provider_prefix} +%global git0 https://%{import_path} +%global commit0 be6146b0a8471b02e776134119a2c37dfb70d414 +%global shortcommit0 %(c=%{commit0}; echo ${c:0:7}) -Name: %{repo} Epoch: 1 -Version: 0.1.37 -Release: 3%{?dist} +Name: %{repo} +Version: 0.1.40 +Release: 7%{?dist} Summary: Inspect container images and repositories on registries +ExcludeArch: %{ix86} s390 ppc ppc64 License: ASL 2.0 URL: %{git0} Source0: %{git0}/archive/%{commit0}/%{name}-%{shortcommit0}.tar.gz Source1: storage.conf Source2: containers-storage.conf.5.md Source3: mounts.conf -Source4: registries.conf -Source5: registries.conf.5.md -Source6: seccomp.json -ExclusiveArch: aarch64 %{arm} ppc64le s390x x86_64 -BuildRequires: git -%if 0%{?fedora} || 0%{?centos} -# If go_compiler is not set to 1, there is no virtual provide. Use golang instead. 
-BuildRequires: %{?go_compiler:compiler(go-compiler)}%{!?go_compiler:golang} >= 1.6.2 -%else -BuildRequires: go-toolset-1.10 -#BuildRequires: openssl-devel -%endif -BuildRequires: go-md2man -BuildRequires: gpgme-devel -BuildRequires: libassuan-devel -BuildRequires: btrfs-progs-devel -BuildRequires: device-mapper-devel -BuildRequires: pkgconfig(glib-2.0) -BuildRequires: pkgconfig(gobject-2.0) -BuildRequires: pkgconfig(ostree-1) +Source4: containers-registries.conf.5.md +Source5: registries.conf +Source6: containers-policy.json.5.md +Source7: seccomp.json +Source8: containers-mounts.conf.5.md +Source9: containers-signature.5.md +Patch0: skopeo-1792243.patch +# tracker bug: https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-8945 +# patch: https://github.com/containers/skopeo/pull/825.patch +Patch1: skopeo-CVE-2020-8945.patch +Source10: containers-transports.5.md +Source11: containers-certs.d.5.md +Source12: containers-registries.d.5.md + +BuildRequires: go-toolset-1.12 +BuildRequires: git +BuildRequires: go-md2man +BuildRequires: gpgme-devel +BuildRequires: libassuan-devel +BuildRequires: pkgconfig(devmapper) +BuildRequires: ostree-devel +BuildRequires: glib2-devel +BuildRequires: make Requires: containers-common = %{epoch}:%{version}-%{release} +Provides: bundled(golang(github.com/beorn7/perks)) = 4c0e84591b9aa9e6dcfdf3e020114cd81f89d5f9 +Provides: bundled(golang(github.com/BurntSushi/toml)) = master +Provides: bundled(golang(github.com/containerd/continuity)) = d8fb8589b0e8e85b8c8bbaa8840226d0dfeb7371 +Provides: bundled(golang(github.com/containers/image)) = master +Provides: bundled(golang(github.com/containers/storage)) = master +Provides: bundled(golang(github.com/davecgh/go-spew)) = master +Provides: bundled(golang(github.com/docker/distribution)) = master +Provides: bundled(golang(github.com/docker/docker-credential-helpers)) = d68f9aeca33f5fd3f08eeae5e9d175edf4e731d1 +Provides: bundled(golang(github.com/docker/docker)) = da99009bbb1165d1ac5688b5c81d2f589d418341 +Provides: bundled(golang(github.com/docker/go-connections)) = 7beb39f0b969b075d1325fecb092faf27fd357b6 +Provides: bundled(golang(github.com/docker/go-metrics)) = 399ea8c73916000c64c2c76e8da00ca82f8387ab +Provides: bundled(golang(github.com/docker/go-units)) = 8a7beacffa3009a9ac66bad506b18ffdd110cf97 +Provides: bundled(golang(github.com/docker/libtrust)) = master +Provides: bundled(golang(github.com/ghodss/yaml)) = 73d445a93680fa1a78ae23a5839bad48f32ba1ee +Provides: bundled(golang(github.com/go-check/check)) = v1 +Provides: bundled(golang(github.com/gogo/protobuf)) = fcdc5011193ff531a548e9b0301828d5a5b97fd8 +Provides: bundled(golang(github.com/golang/glog)) = 44145f04b68cf362d9c4df2182967c2275eaefed +Provides: bundled(golang(github.com/golang/protobuf)) = 8d92cf5fc15a4382f8964b08e1f42a75c0591aa3 +Provides: bundled(golang(github.com/gorilla/context)) = 14f550f51a +Provides: bundled(golang(github.com/gorilla/mux)) = e444e69cbd +Provides: bundled(golang(github.com/imdario/mergo)) = 6633656539c1639d9d78127b7d47c622b5d7b6dc +Provides: bundled(golang(github.com/kr/pretty)) = v0.1.0 +Provides: bundled(golang(github.com/kr/text)) = v0.1.0 +Provides: bundled(golang(github.com/matttproud/golang_protobuf_extensions)) = c12348ce28de40eed0136aa2b644d0ee0650e56c +Provides: bundled(golang(github.com/mistifyio/go-zfs)) = 22c9b32c84eb0d0c6f4043b6e90fc94073de92fa +Provides: bundled(golang(github.com/mtrmac/gpgme)) = master +Provides: bundled(golang(github.com/opencontainers/go-digest)) = master +Provides: 
bundled(golang(github.com/opencontainers/image-spec)) = 149252121d044fddff670adcdc67f33148e16226 +Provides: bundled(golang(github.com/opencontainers/image-tools)) = 6d941547fa1df31900990b3fb47ec2468c9c6469 +Provides: bundled(golang(github.com/opencontainers/runc)) = master +Provides: bundled(golang(github.com/opencontainers/runtime-spec)) = v1.0.0 +Provides: bundled(golang(github.com/opencontainers/selinux)) = master +Provides: bundled(golang(github.com/ostreedev/ostree-go)) = aeb02c6b6aa2889db3ef62f7855650755befd460 +Provides: bundled(golang(github.com/pborman/uuid)) = v1.0 +Provides: bundled(golang(github.com/pkg/errors)) = master +Provides: bundled(golang(github.com/pmezard/go-difflib)) = master +Provides: bundled(golang(github.com/pquerna/ffjson)) = d49c2bc1aa135aad0c6f4fc2056623ec78f5d5ac +Provides: bundled(golang(github.com/prometheus/client_golang)) = c332b6f63c0658a65eca15c0e5247ded801cf564 +Provides: bundled(golang(github.com/prometheus/client_model)) = 99fa1f4be8e564e8a6b613da7fa6f46c9edafc6c +Provides: bundled(golang(github.com/prometheus/common)) = 89604d197083d4781071d3c65855d24ecfb0a563 +Provides: bundled(golang(github.com/prometheus/procfs)) = cb4147076ac75738c9a7d279075a253c0cc5acbd +Provides: bundled(golang(github.com/sirupsen/logrus)) = v1.0.0 +Provides: bundled(golang(github.com/stretchr/testify)) = v1.1.3 +Provides: bundled(golang(github.com/syndtr/gocapability)) = master +Provides: bundled(golang(github.com/tchap/go-patricia)) = v2.2.6 +Provides: bundled(golang(github.com/ulikunitz/xz)) = v0.5.4 +Provides: bundled(golang(github.com/urfave/cli)) = v1.17.0 +Provides: bundled(golang(github.com/vbatts/tar-split)) = v0.10.2 +Provides: bundled(golang(github.com/xeipuuv/gojsonpointer)) = master +Provides: bundled(golang(github.com/xeipuuv/gojsonreference)) = master +Provides: bundled(golang(github.com/xeipuuv/gojsonschema)) = master +Provides: bundled(golang(go4.org)) = master +Provides: bundled(golang(golang.org/x/crypto)) = master +Provides: bundled(golang(golang.org/x/net)) = master +Provides: bundled(golang(golang.org/x/sys)) = master +Provides: bundled(golang(golang.org/x/text)) = master +Provides: bundled(golang(gopkg.in/cheggaaa/pb.v1)) = ad4efe000aa550bb54918c06ebbadc0ff17687b9 +Provides: bundled(golang(gopkg.in/yaml.v2)) = d466437aa4adc35830964cffc5b5f262c63ddcb4 +Provides: bundled(golang(k8s.io/client-go)) = master + %description Command line utility to inspect images and repositories directly on Docker registries without the need to pull them %package -n containers-common -Summary: Configuration files for working with image signature -# /etc/containers/registries.d/default.yaml has been moved from atomic to -# containers-common -Conflicts: atomic <= 1:1.13.1-1 -Conflicts: atomic-registries <= 1:1.22.1-2 +Summary: Configuration files for working with image signatures +Obsoletes: atomic <= 1:1.13.1-2 +Conflicts: atomic-registries <= 1:1.22.1-1 +Obsoletes: docker-rhsubscription <= 2:1.13.1-31 Provides: %{name}-containers = %{epoch}:%{version}-%{release} -Obsoletes: %{name}-containers < 1:0.1.31-2 +Obsoletes: %{name}-containers <= 1:0.1.31-3 +Requires: fuse-overlayfs +Requires: slirp4netns +Requires: subscription-manager %description -n containers-common -This package installs a default signature store configuration +This package installs a default signature store configuration and a default policy under `/etc/containers/`. 
-%{?enable_gotoolset110} +%package tests +Summary: Tests for %{name} +Requires: %{name} = %{epoch}:%{version}-%{release} +#Requires: bats (which RHEL8 doesn't have. If it ever does, un-comment this) +Requires: gnupg +Requires: jq +Requires: podman + +%description tests +%{summary} + +This package contains system tests for %{name} %prep %autosetup -Sgit -n %{name}-%{commit0} %build mkdir -p src/github.com/containers -ln -s ../../../ src/github.com/containers/%{name} +ln -s ../../../ src/%{import_path} mkdir -p vendor/src for v in vendor/*; do - if test ${v} = vendor/src; then continue; fi - if test -d ${v}; then + if test ${v} = vendor/src; then continue; fi + if test -d ${v}; then mv ${v} vendor/src/ - fi + fi done export GOPATH=$(pwd):$(pwd)/vendor - +export GO111MODULE=off +export BUILDTAGS="exclude_graphdriver_btrfs btrfs_noversion $(hack/libdm_tag.sh) $(hack/ostree_tag.sh)" %gobuild -o %{name} ./cmd/%{name} - -if test -f man/%{name}.1.md; then - go-md2man -in man/%{name}.1.md -out %{name}.1 -fi - -go-md2man -in %{SOURCE2} -out containers-storage.conf.5 -go-md2man -in %{SOURCE5} -out registries.conf.5 +%{__make} docs %install -make DESTDIR=%{buildroot} install -install -m0644 %{SOURCE1} %{buildroot}%{_sysconfdir}/containers/storage.conf +make \ + DESTDIR=%{buildroot} \ + SIGSTOREDIR=%{buildroot}%{_sharedstatedir}/containers/sigstore \ + install +mkdir -p %{buildroot}%{_sysconfdir} +mkdir -p %{buildroot}%{_sysconfdir}/containers/{certs.d,oci/hooks.d} mkdir -p %{buildroot}%{_mandir}/man5 -install -m644 containers-storage.conf.5 %{buildroot}%{_mandir}/man5 -install -m644 registries.conf.5 %{buildroot}%{_mandir}/man5 +install -m0644 %{SOURCE1} %{buildroot}%{_sysconfdir}/containers/storage.conf +install -p -m 644 %{SOURCE5} %{buildroot}%{_sysconfdir}/containers/ +go-md2man -in %{SOURCE2} -out %{buildroot}%{_mandir}/man5/containers-storage.conf.5 +go-md2man -in %{SOURCE4} -out %{buildroot}%{_mandir}/man5/containers-registries.conf.5 +go-md2man -in %{SOURCE6} -out %{buildroot}%{_mandir}/man5/containers-policy.json.5 +go-md2man -in %{SOURCE8} -out %{buildroot}%{_mandir}/man5/containers-mounts.conf.5 +go-md2man -in %{SOURCE9} -out %{buildroot}%{_mandir}/man5/containers-signature.5 +go-md2man -in %{SOURCE10} -out %{buildroot}%{_mandir}/man5/containers-transports.5 +go-md2man -in %{SOURCE11} -out %{buildroot}%{_mandir}/man5/containers-certs.d.5 +go-md2man -in %{SOURCE12} -out %{buildroot}%{_mandir}/man5/containers-registries.d.5 + mkdir -p %{buildroot}%{_datadir}/containers install -m0644 %{SOURCE3} %{buildroot}%{_datadir}/containers/mounts.conf -install -m0644 %{SOURCE6} %{buildroot}%{_datadir}/containers/seccomp.json -install -p -m 644 %{SOURCE4} %{buildroot}%{_sysconfdir}/containers/ +install -m0644 %{SOURCE7} %{buildroot}%{_datadir}/containers/seccomp.json # install secrets patch directory -install -d -p -m 755 %{buildroot}%{_datadir}/rhel/secrets +install -d -p -m 755 %{buildroot}/%{_datadir}/rhel/secrets # rhbz#1110876 - update symlinks for subscription management ln -s %{_sysconfdir}/pki/entitlement %{buildroot}%{_datadir}/rhel/secrets/etc-pki-entitlement ln -s %{_sysconfdir}/rhsm %{buildroot}%{_datadir}/rhel/secrets/rhsm -ln -s %{_sysconfdir}/yum.repos.d/redhat.repo %{buildroot}%{_datadir}/rhel/secrets/rhel7.repo +ln -s %{_sysconfdir}/yum.repos.d/redhat.repo %{buildroot}%{_datadir}/rhel/secrets/redhat.repo + +# system tests +install -d -p %{buildroot}/%{_datadir}/%{name}/test/system +cp -pav systemtest/* %{buildroot}/%{_datadir}/%{name}/test/system/ %check %if 0%{?with_check} export 
GOPATH=%{buildroot}/%{gopath}:$(pwd)/vendor:%{gopath} + %gotest %{import_path}/integration %endif @@ -131,31 +220,63 @@ export GOPATH=%{buildroot}/%{gopath}:$(pwd)/vendor:%{gopath} %files -n containers-common %dir %{_sysconfdir}/containers +%dir %{_sysconfdir}/containers/certs.d %dir %{_sysconfdir}/containers/registries.d +%dir %{_sysconfdir}/containers/oci +%dir %{_sysconfdir}/containers/oci/hooks.d %config(noreplace) %{_sysconfdir}/containers/policy.json %config(noreplace) %{_sysconfdir}/containers/registries.d/default.yaml %config(noreplace) %{_sysconfdir}/containers/storage.conf %config(noreplace) %{_sysconfdir}/containers/registries.conf +%dir %{_sharedstatedir}/containers/sigstore +%{_mandir}/man5/* %dir %{_datadir}/containers %{_datadir}/containers/mounts.conf %{_datadir}/containers/seccomp.json -%dir %{_sharedstatedir}/atomic/sigstore -%{_mandir}/man5/*.conf.5* %dir %{_datadir}/rhel/secrets -%{_datadir}/rhel/secrets/etc-pki-entitlement -%{_datadir}/rhel/secrets/rhel7.repo -%{_datadir}/rhel/secrets/rhsm +%{_datadir}/rhel/secrets/* %files -%{_bindir}/%{name} -%{_mandir}/man1/%{name}*.1* %license LICENSE %doc README.md -%dir %{_datadir}/bash-completion/ +%{_bindir}/%{name} +%{_mandir}/man1/%{name}* +%dir %{_datadir}/bash-completion %dir %{_datadir}/bash-completion/completions %{_datadir}/bash-completion/completions/%{name} +%files tests +%license LICENSE +%{_datadir}/%{name}/test + %changelog +* Mon Mar 02 2020 Jindrich Novy - 1:0.1.40-7 +- fix "CVE-2020-8945 proglottis/gpgme: Use-after-free in GPGME bindings during container image pull" +- Resolves: #1806944 + +* Fri Jan 24 2020 Jindrich Novy - 1:0.1.40-6 +- resurrect s390x arch as kernel there now has the renameat2 syscall (#1773504) + +* Mon Jan 20 2020 Jindrich Novy - 1:0.1.40-5 +- Fix thread safety of gpgme (#1792243) + +* Thu Jan 16 2020 Jindrich Novy - 1:0.1.40-4 +- temporary disable s390x arch due to #1773504 causing fuse-overlayfs + failing to build - skopeo/contaners-common requires it + +* Wed Jan 15 2020 Jindrich Novy - 1:0.1.40-3 +- increment version to avoid dist tag clash with RHAOS + +* Thu Jan 02 2020 Jindrich Novy - 1:0.1.40-2 +- change the search order of registries and remove quay.io (#1784265) + +* Wed Dec 04 2019 Jindrich Novy - 1:0.1.40-1 +- update to v0.1.40 +- Related: RHELPLAN-26239 + +* Thu Sep 12 2019 Jindrich Novy - 1:0.1.37-4 +- Fix CVE-2019-10214. + * Fri Aug 02 2019 Jindrich Novy - 1:0.1.37-3 - rebase to 0.1.37 for RHEL7u7