# Testing

The CentOS Project has some resources available for each SIG to run CI jobs/tests for their projects.

We'll document soon how to get onboarded (on request) on the CentOS CI infra platform.

![CI Infra overview](img/duffy-aws.drawio.png)

We offer the following resources:

  * Openshift hosted Jenkins (one per project/SIG), using the usual authentication from FAS/ACO
  * EC2 Virtual Machines (or `metal`, but in limited quantity): ephemeral nodes on which you can run tests (including destructive ones), and that will automatically be discarded after the tests finish - aka `Duffy`

## Openshift

We provide access to an [Openshift](https://console-openshift-console.apps.ocp.cloud.ci.centos.org/) cluster, currently hosted next to the Duffy ephemeral nodes infra.

Once your [Infra ticket/request](https://pagure.io/centos-infra/new_issue?template=ci-onboarding) is validated, you'll be granted access (through your ACO/FAS account) to a namespace on that cluster.

### Authentication

The openshift cluster is linked/tied to accounts.centos.org, so it will use SSO to let you log in (no need for additional credentials).

### Interacting with Openshift

One can use the [web console](https://console-openshift-console.apps.ocp.cloud.ci.centos.org/) to interact with deployed pods, see openshift logs, and even open a terminal on running pods.

But it's also possible to [download](https://console-openshift-console.apps.ocp.cloud.ci.centos.org/command-line-tools) the `oc` cli tool and interact with openshift from the cli. Don't forget that for this to work, you first need to login through the web console; you'll then find a `Copy login command` entry under your user name (upper right corner) that will take you to the [oauth token](https://oauth-openshift.apps.ocp.cloud.ci.centos.org/oauth/token/display) display page.

You can then just paste that command in your own terminal (be sure to have an up-to-date `oc` binary!) and start interacting with openshift.

### Jenkins

We provide a Jenkins template that can be provisioned for you automatically. It uses the jenkins image from the openshift catalog, but we modified it with some extra parameters: the deployed jenkins will be able to launch ephemeral pods as jenkins executors (nothing should run on the jenkins pod itself).

For that, you need to configure your job to use the `cico-workspace` label, as that will automatically trigger a pod deployment in openshift from a template (cico-workspace).

That environment pod uses a centos 8-stream based container that will automatically connect to jenkins as an "agent" and also contains the following tools:

  * git
  * ansible(-core)
  * python-cicoclient (for legacy Duffy, see below)
  * duffy[client] pip package (to interact with newer Duffy, see below)

That pod will mount the ssh private key used for your project under `/duffy-ssh-key/ssh-privatekey` (see [ssh_config](https://github.com/centosci/images/blob/master/cico-workspace/ssh_config#L2)) and also has the `CICO_API_KEY` env variable set up to request duffy v2 nodes.

From that point, it's up to you to:

 * write a function/script that will request a Duffy node (see below)
 * ssh into the machine[s] to run some tests (usually pulled from a git repository somewhere)
 * return the ephemeral nodes to duffy

!!! note
    As a test job, to "play" with the concept, you can just configure a simple "Freestyle project" that only runs `/bin/bash` (so that the jenkins job continues to run in the background); from the Openshift console, you can then open the pod terminal to explore it and try things.

## Duffy (ephemeral bare-metal/Virtual Machines provider)

Duffy is the middle layer running ci.centos.org that manages the provisioning, maintenance and teardown/rebuild of the nodes (physical hardware and VMs) that are used to run the tests in the CI cluster.

We provide both bare-metal and VMs and support the following architectures:

  * x86_64 (both physical and VMs)
  * aarch64 (both physical and VMs)
  * ppc64le (only VMs, on Power 8 or Power 9, but supporting nested virtualization)

The EC2 instances are also provisioned with a second (unconfigured) EBS volume that you can then initialize the way you want (initially requested by Ceph for their own testing).

To be able to request ephemeral nodes to run your tests, you'll need both your `project` name and the `api key` that will be generated for you once your project has been allowed on that infra.

!!! note
    It's worth knowing that there are quotas in place to ensure that you can't request an infinite number of nodes in parallel, and each `session` has a maximum lifetime of 6h. After that, your nodes are automatically reclaimed by Duffy, freshly reinstalled, and put back in the Ready pool.

### Installing and configuring duffy client

Use `pip` (or `pip3.8` on el8) to install the duffy client (it's already installed in the `cico-workspace` pod template in openshift, so not needed there):

```shell
pip install --user duffy[client]
```


The duffy client needs the tenant's name and the tenant's API key to be able to request sessions from the duffy server. If the tenant doesn't exist yet, it first has to be created on the duffy server. Once you have the tenant's name and API key, create the file `~/.config/duffy` with the following content:

```
client:
  url: https://duffy.ci.centos.org/api/v1
  auth:
    name: <tenant name>
    key: <API key>
```


One can also call `duffy client` without any existing config file (if you prefer):

```
duffy client --url https://duffy.ci.centos.org/api/v1 --auth-name <tenant_name> --auth-key $CICO_API_KEY <command>
```


!!! danger
    Never leak your API key! If you use this command from within jenkins, remember that by default jenkins uses `set -x`, so it outputs/echoes commands as they run. Don't forget to use `set +x` before the duffy call (or similar).


### Requesting a session

Before creating a session, you need the name of a pool. Check the available pools by executing:

```shell
duffy client list-pools
```


!!! note
    The name of the pool is structured like this:

    `<AAA>-<BBB>-<CCC>-<DDD>-<EEE>-<FFF>`

    - AAA: identifies whether it is a bare-metal or virtual machine
    - BBB: the kind of instance, like seamicro bare-metal, AWS EC2, etc
    - CCC: the machine flavor type
    - DDD: operating system (CentOS|Fedora)
    - EEE: OS version
    - FFF: architecture (x86_64|aarch64|ppc64le)
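
For scripting, the naming scheme above can be pulled apart with a simple split. A minimal sketch, assuming none of the individual components contains a hyphen (`parse_pool_name` is just an illustrative helper, not part of the duffy client):

```python
# Illustrative helper: split a Duffy pool name into its documented components.
# Assumes none of the individual components contains a hyphen.
def parse_pool_name(name: str) -> dict:
    kind, instance, flavor, os_name, os_version, arch = name.split("-")
    return {
        "type": kind,            # bare-metal or virtual machine (e.g. "virt")
        "instance": instance,    # kind of instance (e.g. "ec2")
        "flavor": flavor,        # machine flavor type (e.g. "t2")
        "os": os_name,           # CentOS|Fedora
        "os_version": os_version,
        "arch": arch,            # x86_64|aarch64|ppc64le
    }

print(parse_pool_name("virt-ec2-t2-centos-9s-x86_64")["arch"])  # x86_64
```
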

Once you have the name of the pool, request as many sessions as needed. Duffy has a limit of sessions per tenant; this information is available from the duffy server.

Worth knowing that one can also see current pool usage, machines in `ready` state, etc., by querying a specific pool. Example:

```
duffy client show-pool virt-ec2-t2-centos-9s-x86_64
{
  "action": "get",
  "pool": {
    "name": "virt-ec2-t2-centos-9s-x86_64",
    "fill_level": 5,
    "levels": {
      "provisioning": 0,
      "ready": 5,
      "contextualizing": 0,
      "deployed": 0,
      "deprovisioning": 0
    }
  }
}
```

To then request some nodes from a pool, one can use the following duffy call:

```shell
duffy client request-session pool=<name of the pool>,quantity=<number of machines to get>
```

By default this command outputs _json_, but it's possible to change the format to _yaml_ or _flat_ using `--format`. Under the "nodes" key you'll find the hostname of each provisioned node. Log in to it as the `root` user, using `ssh`.

```json
{
<...output omitted...>

"nodes": [
    {
        "hostname": "<hostname>",
        "ipaddr": "<ip address>",

<...output omitted...>
}
```
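
If you script against that _json_ output, the node details can be extracted with a few lines of python. A minimal sketch: the `example` payload below is a made-up, truncated reply shaped like the output above (real replies contain more keys, and the exact nesting may differ, hence the recursive lookup):

```python
import json

# Minimal sketch: pull (hostname, ipaddr) pairs out of a request-session reply.
# The lookup is recursive, so it works regardless of where "nodes" sits in the JSON.
def extract_nodes(payload: str) -> list:
    def find_nodes(obj):
        if isinstance(obj, dict):
            if "nodes" in obj:
                return obj["nodes"]
            for value in obj.values():
                found = find_nodes(value)
                if found is not None:
                    return found
        return None

    nodes = find_nodes(json.loads(payload)) or []
    return [(node["hostname"], node["ipaddr"]) for node in nodes]

# Made-up, truncated reply shaped like the example above
example = '{"session": {"nodes": [{"hostname": "n1.example.org", "ipaddr": "10.0.0.1"}]}}'
print(extract_nodes(example))  # [('n1.example.org', '10.0.0.1')]
```
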

### Retiring a session

At the end of the tests, you should "return" your ephemeral nodes to the Duffy API service. This will trigger either a reinstall of the physical node (through kickstart) or just discarding/terminating it (if it's a cloud instance).

To retire a session, the session id is required. Check the id by executing:

```shell
duffy client list-sessions
```


When you need to retire the session, execute:

```shell
duffy client retire-session <session id>
```
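
Putting the whole cycle together, a job typically requests a node, runs tests over ssh, and retires the session whatever the outcome. A minimal python sketch: the pool name and `./run-tests.sh` command are placeholders, the JSON nesting of the session id is an assumption, and the `runner` parameter is only there so the flow can be exercised without a real duffy server:

```python
import json
import subprocess

# Sketch of the request -> test -> retire cycle.
# Assumptions: the reply nests the session id and nodes under "session"
# (as in the truncated example above); "./run-tests.sh" is a placeholder.
def run_tests_on_ephemeral_node(pool: str, runner=subprocess.run):
    out = runner(
        ["duffy", "client", "request-session", f"pool={pool},quantity=1"],
        capture_output=True, text=True, check=True,
    ).stdout
    session = json.loads(out)["session"]
    host = session["nodes"][0]["hostname"]
    try:
        # run the actual tests over ssh as root
        runner(["ssh", f"root@{host}", "./run-tests.sh"], check=True)
    finally:
        # always return the node, even when the tests fail
        runner(["duffy", "client", "retire-session", str(session["id"])], check=True)
```

The `try`/`finally` matters: nodes are retired even when the tests fail, so they go back to the pool instead of idling until the 6h session limit reclaims them.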

## Artifacts storage

There is an artifacts storage box that you can use to store ephemeral artifacts (logs, builds, etc). It's publicly available at [https://artifacts.ci.centos.org](https://artifacts.ci.centos.org).

How can you push to that storage box?
Each tenant has a dedicated directory (owned by them) under /srv/artifacts.
You can use your project name as user and your project ssh keypair to push through ssh (rsync, scp) to ssh://`tenant_name`@artifacts.ci.centos.org:/srv/artifacts/`tenant_name`

Worth knowing that while you can push through ssh, there is no shell allowed for you on that storage box, so use scp or rsync directly from the jenkins pod that has your private key to push to that storage box.

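As a concrete sketch, pushing a local `logs` directory from the jenkins pod could look like this. `mytenant` is a placeholder tenant name and the key path matches the one mounted in the `cico-workspace` pod; the rsync command is only echoed here for illustration, so drop the leading `echo` to actually push:

```shell
# "mytenant" is a placeholder tenant name; replace it with your own.
TENANT="mytenant"
DEST="${TENANT}@artifacts.ci.centos.org:/srv/artifacts/${TENANT}"
# Echoed for illustration: drop the leading "echo" to actually push.
echo rsync -av -e "ssh -i /duffy-ssh-key/ssh-privatekey" ./logs/ "${DEST}/logs/"
```
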
!!! warning
    We'll implement some rotation to clean up used space on that machine on a regular basis, so don't expect pushed files to remain available forever!

## Migration to new CI instance

In case you want to migrate your old jenkins configuration to the new CI instance, follow this guide.

1. Login to the [old openshift instance](https://console-openshift-console.apps.ocp.ci.centos.org/)
2. Click on your username in the upper right corner and select `Copy login command`
3. Use the [oc](https://console-openshift-console.apps.ocp.ci.centos.org/command-line-tools) tool to login
4. Switch to the correct project with `oc project <project_name>` (you should already be on the correct project, but it's better to check)
5. Copy the old configuration to your machine with `oc rsync <pod_name>:/var/lib/jenkins/jobs <target_directory>` (you can find the jenkins pod name with `oc get pods | grep jenkins`)

!!! note
    Usually you want to migrate only the jobs, but if you need any other configuration file, just
    do `oc rsh <pod_name>` and look inside `/var/lib/jenkins` for all the configuration files.
    Just be aware that some of the files could contain an IP or hostname that will no longer work
    in the new CI instance.


6. Login to the [new openshift instance](https://console-openshift-console.apps.ocp.cloud.ci.centos.org/)
7. Click on your username in the upper right corner and select `Copy login command`
8. Use the [oc](https://console-openshift-console.apps.ocp.ci.centos.org/command-line-tools) tool to login
9. Switch to the correct project with `oc project <project_name>` (you should already be on the correct project, but it's better to check)
10. Copy the configuration files from your machine to the openshift pod with `oc rsync jobs <pod_name>:/var/lib/jenkins`
11. Login to the new Jenkins instance (the URL should be `https://jenkins-<project_name>.apps.ocp.cloud.ci.centos.org`)
12. On the `Manage Jenkins` page, click on `Reload Configuration from Disk`

Now you should have your old configuration available in the new Jenkins instance.

!!! warning
    This migration doesn't migrate any credentials from the old Jenkins instance. Those need to be
    migrated manually, because they are not stored in `/var/lib/jenkins`.