We have to support multiple (but not that many) OpenShift/OCP 4.x clusters, deployed in various places and in different ways, depending on rules, constraints, etc.
What they all have in common is that they need an official subscription and are managed centrally from https://console.redhat.com/openshift (see the inventory for credentials and/or internal emails).
To deploy in AWS, one can use the adhoc-deploy-ocp-aws playbook, but you first need to read all the current documentation to understand what is required. Before running the playbook, you'll need:
Warning
Be sure to review the VPC settings/subnets that the openshift-install binary will create when running, and ensure there is no overlap with existing networks.
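As a reference point, the subnets that openshift-install carves out are driven by the networking section of its install-config.yaml; the snippet below is only an illustrative sketch (the CIDR values are assumptions to adapt to your environment), useful to double-check against existing VPCs before launching the install:

```yaml
# Illustrative excerpt of an install-config.yaml (values are assumptions):
# check that machineNetwork/clusterNetwork/serviceNetwork don't overlap
# with any VPC/subnet already in use in the target AWS account.
networking:
  networkType: OVNKubernetes
  machineNetwork:
    - cidr: 10.0.0.0/16        # VPC/machines range created by openshift-install
  clusterNetwork:
    - cidr: 10.128.0.0/14      # pod network
      hostPrefix: 23
  serviceNetwork:
    - 172.30.0.0/16            # services network
```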
Depending on the environment, the ocp-admin-node can be used to:
We don't want to use the internal auth but instead rely on FAS/ACO, so the first thing to do is to tie OpenShift to Ipsilon (id.centos.org or id.stg.centos.org).
This is configured by the ocp-admin-role, but it needs some variables first: create the OCP cluster client id on the Ipsilon side (admin operation) and then update the Ansible inventory for all the ocp_idp_ variables.
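A minimal sketch of what those inventory variables could look like, assuming hypothetical key names under the documented ocp_idp_ prefix (only the prefix comes from the role; the exact names and values below are assumptions):

```yaml
# Hypothetical Ansible inventory snippet: the key names after the
# documented ocp_idp_ prefix are assumptions, not the role's real variables
ocp_idp_name: id.centos.org
ocp_idp_issuer_url: https://id.centos.org/openidc
ocp_idp_client_id: ocp.ci.centos.org          # client id created in Ipsilon
ocp_idp_client_secret: "{{ vault_ocp_idp_client_secret }}"
```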
Once applied through the role, OpenShift will allow login through FAS/ACO.
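Under the hood this would typically end up as an OpenID identity provider in the cluster OAuth resource; the sketch below shows what such a rendered configuration could look like (the provider name, secret name and claim mappings are assumptions, not necessarily what the role templates):

```yaml
# Sketch of the OAuth cluster resource an OpenID/Ipsilon setup would produce
# (identity provider name, secret name and claims are assumptions)
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
    - name: id.centos.org
      mappingMethod: claim
      type: OpenID
      openID:
        clientID: ocp.ci.centos.org
        clientSecret:
          name: ipsilon-client-secret     # secret in openshift-config namespace
        issuer: https://id.centos.org/openidc
        claims:
          preferredUsername: [preferred_username]
          name: [name]
          email: [email]
```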
Worth knowing: the ocp_fas_sync boolean also lets you automatically create projects/groups/namespaces/RBACs for the groups starting with ocp_fas_group_prefix (for example, for OCP CI, the IPA/FAS groups are all prefixed ocp-cico-).
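For example, a minimal inventory sketch enabling that sync (the variable names come from the role description above, while the values are just illustrative):

```yaml
# Illustrative values only: enable FAS/IPA group sync and restrict it
# to groups matching the documented prefix
ocp_fas_sync: True
ocp_fas_group_prefix: ocp-cico-
```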
OCP deployed on AWS can use EBS volumes by default, but that comes with limitations, so one can use EFS as a centralized storage solution for PersistentVolumes (and our playbook can use that feature).
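Once EFS-backed PVs exist, a claim just needs to target them instead of the EBS-backed default; the PVC below is a minimal sketch, assuming a StorageClass named efs (the class name, namespace and size are assumptions specific to this example):

```yaml
# Hypothetical PVC targeting EFS-backed storage (class name "efs" is an assumption)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-on-efs
  namespace: my-project
spec:
  storageClassName: efs
  accessModes:
    - ReadWriteMany          # EFS supports shared access across nodes
  resources:
    requests:
      storage: 10Gi
```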
Warning
If you want to ensure that PVCs use the created PVs on top of EFS instead of EBS, you should disable the gp2 StorageClass as the default (under Storage/StorageClasses, edit it and set storageclass.kubernetes.io/is-default-class: 'false').
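Equivalently, that console edit boils down to the annotation shown below on the gp2 StorageClass (presented here as a trimmed manifest sketch):

```yaml
# Trimmed StorageClass manifest: the annotation is the only part that matters
# for removing gp2 as the default class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: 'false'
provisioner: kubernetes.io/aws-ebs
```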
Here is an overview of the deployed OCP CI cluster:
Apart from the FAS/auth/EFS volumes, it's worth knowing that: