From dc7015b0f3fd3221d7331737924253b97f4df193 Mon Sep 17 00:00:00 2001
From: Fabian Arrotin
Date: Apr 27 2022 13:19:48 +0000
Subject: Some notes about ansible-core and collections and RHEL support in CentOS infra

Signed-off-by: Fabian Arrotin

---

diff --git a/docs/ansible/index.md b/docs/ansible/index.md
index 4d81122..5d4de3a 100644
--- a/docs/ansible/index.md
+++ b/docs/ansible/index.md
@@ -18,6 +18,12 @@ Each playbook for a role target a group called `hostgroup-role-`.
 There a small exceptions where some role- playbooks will be small variants of a role, so also with other tasks to call specific tasks for an existing role (so when for example a vhost for httpd is a variant of the httpd role)

+### Collections
+Starting with the `ansible-core` package, some modules aren't shipped with ansible itself anymore, so they have to be installed as collections on the ansible management host before they can be called from within playbooks and roles.
+As the number of collections that CentOS Infra relies on is really small, we decided to just stick to the needed collections (and to specific versions, which can be bumped once everything is tested).
+The needed `collections` are declared in the `requirements.yml` file (one per environment) and you can use `ansible-roles-ctl` to fetch a specific version, or update it when needed.
+
+
 #### "pre-flight" check
 For each playbook configuring a role, there is an option (in case of) to end the play if we have to. Basically touching /etc/no-ansible on a managed node would ensure that the playbook is ended. That permits to have (in emergency for example) someone having a look at a node and ensuring that ansible doesn't modify the node at the same time. After each role configuration, a file is also created (monitored by Zabbix) to ensure that nodes are always configured as they have to
@@ -37,6 +43,11 @@ The "on-disk" ansible directory should then look like this :
 ```
 .
 ├── ansible.cfg
+├── collections
+│   └── ansible_collections
+│       ├── ansible
+│       ├── community
+│       └──
 ├── files -> playbooks/files
 ├── handlers -> playbooks/handlers
 ├── filestore
@@ -55,9 +66,10 @@ The "on-disk" ansible directory should then look like this :
 ```

-## Ansible roles setup
+## Ansible roles and collections setup
 All roles will be deployed for a list of individual git repositories, each one being its own role.
-A requirements.yml file will be used to declare which roles (and from where to get them) and so downloaded on the ansible host through ansible-galaxy
+The same goes for the needed collections: they'll point to a specific tag/version of a git repository that represents a tested version of the collection we need for ansible.
+A requirements.yml file will be used to declare which roles and collections are needed (and from where to get them), and they'll then be downloaded on the ansible host through `ansible-roles-ctl`.

 ## Inventory and encrypted files
 Inventory is itself a different git repository, git-crypted and that will be checked-out on the ansible host
diff --git a/docs/operations/decommission.md b/docs/operations/decommission.md
index 045bd4c..a636273 100644
--- a/docs/operations/decommission.md
+++ b/docs/operations/decommission.md
@@ -6,7 +6,7 @@ This is an overview of the needed tasks to perform when we want to remove a node
 * Delete/reinstall the (virtual) machine (cleaning up)
 * If hosted within Red Hat DC, update [internal ip inventory](https://docs.google.com/spreadsheets/d/1K-aewLJ17z3pRC6K5qyBRJYtNXy1WcxRSVwPkGf4NXQ) (Obviously need RH SSO and permission)
- * Remove from [DNS](/infra/dns/) (public of internal, depending on the case)
+ * Remove from [DNS](/infra/dns/) (public or internal, depending on the case; don't forget to also remove the record from PowerDNS if it's delegated to that dns infra, see below)
 * Remove it from Ansible inventory (and search for references for that node in case of)
 * Remove it from Zabbix monitoring
diff --git a/docs/operations/deploy/bare-metal.md b/docs/operations/deploy/bare-metal.md
index dffa3a4..d3e7592 100644
--- a/docs/operations/deploy/bare-metal.md
+++ b/docs/operations/deploy/bare-metal.md
@@ -29,6 +29,9 @@ If we want ansible to automatically deploy it, we'll just have to add the node i
 * `ip` , `gateway`, `netmask` and `dns` (usually apart from `ip`, which is unique, the rest is coming through inheritance
 * based on group inheritance, ensure that variables documented in [adhoc-provision-node.yml](https://github.com/CentOS/ansible-infra-playbooks/blob/master/adhoc-provision-node.yml) are also defined

+!!! note
+    We can deploy both CentOS and RHEL: if you define `rhel_version` it will deploy RHEL, otherwise it defaults to CentOS and `centos_version` (normally 8-stream for now)
+
 ### Deploying the machine
 If previous steps are done and also network switch port[s] working, we can just now proceed with ansible :
diff --git a/docs/operations/deploy/common.md b/docs/operations/deploy/common.md
index a051011..83c9ddd 100644
--- a/docs/operations/deploy/common.md
+++ b/docs/operations/deploy/common.md
@@ -57,9 +57,9 @@ The `adhoc-init-node.yml` will do the following :
 * retrieve ssh host keys, sign these and push the signed
 * retrieve locally some facts that can be used later for basic host_vars template
 * play the `baseline` role (common for *all* nodes but with different settings, based on inventory
+  * if that's a RHEL host, either point to the internal mirror or to the RH CDN (based on the `rhel_host_use_cdn` boolean variable in ansible)
 * (optional): play other roles that are tied to ansible inventory group membership (if you added the host already in some specific groups)

-If you configured correctl
 Now that machine is in ansible inventory, you can always add new role, based on group memberships, change settings through `group_vars` or `host_vars`, etc, so Ansible BAU
diff --git a/docs/operations/deploy/virtual-machine.md b/docs/operations/deploy/virtual-machine.md
index 9ff6bdb..b22d8cf 100644
--- a/docs/operations/deploy/virtual-machine.md
+++ b/docs/operations/deploy/virtual-machine.md
@@ -8,6 +8,9 @@ Then add the new node to /host_vars/ and also in the
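For reference, such a `requirements.yml` follows the standard `ansible-galaxy` layout; a minimal sketch is shown below, where the role name, repository URLs and pinned versions are purely illustrative examples (not the actual CentOS Infra values):

```yaml
# Minimal requirements.yml sketch in the standard ansible-galaxy format.
# The names, URLs and versions below are illustrative examples only.
roles:
  - name: haproxy                                         # role name as used in playbooks
    src: https://github.com/CentOS/ansible-role-haproxy   # one git repository per role
    version: master                                       # branch or tag to check out

collections:
  - name: https://github.com/ansible-collections/community.general
    type: git
    version: 4.6.0    # pin the tag/version that was tested for this environment
```

With such a file in place, the stock `ansible-galaxy collection install -r requirements.yml -p collections/` command would populate the `collections/ansible_collections` tree shown above, while `ansible-roles-ctl` consumes the same file to fetch and update the pinned versions, as described earlier.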