Except for some identified ssh jump hosts, and for public services exposed over ssh (like pushing to git.centos.org), the tcp/22 port used by sshd is firewalled on almost all of the fleet.
As part of the init process, we sign the sshd host key with a central CA key. Once a host key is signed by that central key, you only have to trust the cert-authority, and so don't have to confirm each host key/fingerprint when connecting to a server over ssh.
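As an illustration of what trusting that cert-authority means on the client side, it comes down to a single known_hosts entry (the domain pattern and key material below are placeholders, not the real CentOS CA key):

```
@cert-authority *.centos.org ssh-ed25519 AAAAC3... CentOS-Infra-CA
```

Any host presenting a certificate signed by that CA key is then accepted without a per-host fingerprint prompt.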
The Ansible sshd role also distributes an ssh_known_hosts system file, so that each node (if needed) can also ssh into other CentOS nodes (for backup purposes, for example), as long as the central known_hosts_entries ansible variable has at least one default entry for the main CentOS environment.
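A minimal sketch of what such a variable could look like in an ansible inventory (the variable name comes from the text above; the host pattern and key are hypothetical):

```yaml
# Hypothetical default entry for the known_hosts_entries variable
known_hosts_entries:
  - "@cert-authority *.centos.org ssh-ed25519 AAAAC3... CentOS-Infra-CA"
```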
Apart from that, our default sshd_config sets the HostCertificate directive (see the note about the sshd host CA above).

From a client perspective, all users' ssh public keys are distributed by ansible (for sysadmins), or come from IPA through ipsilon for services that can query/import ssh public keys through openid/openidc (for example pagure/git.centos.org).
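On the server side, the HostCertificate directive mentioned above could look like this (the key paths follow common OpenSSH conventions and are an assumption, not the actual infra values):

```
# Hypothetical sshd_config fragment: host key plus its CA-signed certificate
HostKey /etc/ssh/ssh_host_ed25519_key
HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub
```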
We also tune the default ciphers in our host sshd_config to match current security standards, following best practices in that regard.
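For illustration only (these are standard OpenSSH algorithm names, but not necessarily the exact list used in the CentOS infra), a hardened selection in sshd_config might look like:

```
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com
KexAlgorithms curve25519-sha256,diffie-hellman-group16-sha512
```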
For bastion hosts, we don't even allow shell accounts, so people only get real access to the nodes/infra they're allowed to reach.
We can use the sshd_proxyjump_host feature from our sshd role, and ansible will then restrict "jailed" users on that jump host.
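From a user's point of view, going through such a jump host is a standard ProxyJump setup; a client-side ~/.ssh/config sketch (hostnames are hypothetical, not the actual bastion names):

```
# Reach internal nodes through the bastion; no shell is needed on the bastion itself
Host *.centos.org
    ProxyJump bastion.centos.org
```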