Except for some identified ssh jump hosts, or for public services exposed over ssh (like pushing to git.centos.org), the tcp/22 port used by sshd is [firewalled](https://github.com/CentOS/ansible-role-iptables/blob/master/defaults/main.yml#L11) on almost all of the fleet.
As part of the `init` [process](/operations/deploy/common/) we sign the sshd host key, meaning that once it's signed by the central CA key, you just have to trust that cert-authority and so don't have to confirm each host key/fingerprint when connecting to a server over ssh.
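On the client side, trusting that cert-authority boils down to a single `@cert-authority` line in a known_hosts file. A minimal sketch, assuming `*.centos.org` as the covered hosts pattern, and with a placeholder CA public key (not the real one):

```
# ~/.ssh/known_hosts : trust every host cert signed by the central CA
# (placeholder hosts pattern and CA public key, for illustration only)
@cert-authority *.centos.org ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA...placeholder
```

On a node itself, you can inspect the signed host certificate with `ssh-keygen -L -f /etc/ssh/ssh_host_ed25519_key-cert.pub` (adapt the path to the key type actually in use).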
The Ansible [sshd](https://github.com/CentOS/ansible-role-sshd) role also distributes an `ssh_known_hosts` system file, so that each node (if needed) can also ssh into other CentOS nodes (for example for backup purposes), as long as:

* the central [known_hosts_entries](https://github.com/CentOS/ansible-role-sshd/blob/master/defaults/main.yml#L23) ansible variable has at least one default entry for the `main` CentOS ENV (see the sketch below)
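The exact rendering depends on what that variable contains, but since host keys are signed by the central CA, a rendered `/etc/ssh/ssh_known_hosts` would typically carry a cert-authority entry like this (pattern and key are assumed placeholders):

```
# /etc/ssh/ssh_known_hosts : distributed by the sshd role on each node
# (placeholder hosts pattern and CA public key, assumed for illustration)
@cert-authority *.centos.org ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA...placeholder
```

With that single line in place, node-to-node ssh connections (backup jobs, etc.) don't prompt for host key confirmation either.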
From a client perspective, all users' ssh public keys are distributed by ansible (for sysadmins) or come from [IPA](/infra/authentication/) through ipsilon, for the services able to query/import ssh public keys through openid/openidc (for example pagure/git.centos.org).
We can use the `sshd_proxyjump_host` feature from our [sshd role](https://github.com/CentOS/ansible-role-sshd/blob/master/defaults/main.yml) and ansible will then just restrict the "jailed" users on that jump host.
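From a user's side, a `~/.ssh/config` stanza like the following lets ssh transparently reach firewalled nodes through the jump host (a sketch: `jump.centos.org` and `myloginname` are placeholders, not real values):

```
# ~/.ssh/config : reach firewalled nodes through the jump host
# (jump.centos.org and myloginname are placeholders)
Host *.centos.org !jump.centos.org
    User       myloginname
    ProxyJump  myloginname@jump.centos.org
```

ssh then opens a forwarded TCP connection through the jump host to tcp/22 on the destination node, so sshd on the destination can stay firewalled from the outside.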