34 votes

I'm trying to run an Ansible role on multiple servers, but I get an error:

fatal: [192.168.0.10]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh.", "unreachable": true}

My /etc/ansible/hosts file looks like this:

192.168.0.10 ansible_sudo_pass='passphrase' ansible_ssh_user=user
192.168.0.11 ansible_sudo_pass='passphrase' ansible_ssh_user=user
192.168.0.12 ansible_sudo_pass='passphrase' ansible_ssh_user=user

I have no idea what's going on - everything looks fine - I can log in via SSH, but the Ansible ping module returns the same error.

The log from verbose execution:

<192.168.0.10> ESTABLISH SSH CONNECTION FOR USER: user <192.168.0.10> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=user -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r 192.168.0.10 '/bin/sh -c '"'"'( umask 22 && mkdir -p "echo $HOME/.ansible/tmp/ansible-tmp-1463151813.31-156630225033829" && echo "echo $HOME/.ansible/tmp/ansible-tmp-1463151813.31-156630225033829" )'"'"''
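For what it's worth, the same restrictions can be replayed by hand (the IP and user are the ones from my inventory, the options are taken from the log above) to check whether key-based authentication alone works:

ssh -o PreferredAuthentications=publickey -o PasswordAuthentication=no -o ConnectTimeout=10 user@192.168.0.10 'echo ok'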

Can you help me somehow? If I have to use ansible in local mode (-c local), then it's useless.

I've tried removing ansible_sudo_pass and ansible_ssh_user, but it didn't help.


7 Answers

57 votes

You need to set ansible_ssh_pass as well (or use an SSH key). For example, I am using this in my inventory file:

192.168.33.100 ansible_ssh_pass=vagrant ansible_ssh_user=vagrant

After that I can connect to the remote host:

ansible all -i tests -m ping

With the following result:

192.168.33.100 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

Hope that helps you.

EDIT: ansible_ssh_pass & ansible_ssh_user are deprecated in recent versions of Ansible. They have been renamed to ansible_user & ansible_password.
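
For illustration only (the IPs and credentials are copied from the question, and password-based SSH in Ansible also needs the sshpass program installed on the control machine), the question's inventory would then look roughly like:

192.168.0.10 ansible_user=user ansible_password=passphrase ansible_become_pass=passphrase
192.168.0.11 ansible_user=user ansible_password=passphrase ansible_become_pass=passphrase
192.168.0.12 ansible_user=user ansible_password=passphrase ansible_become_pass=passphrase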

7 votes
mkdir /etc/ansible
cat > /etc/ansible/hosts
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user=vagrant ansible_ssh_private_key_file=.vagrant/machines/default/virtualbox/private_key

Go to your playbook directory and run ansible all -m ping or ansible "server-group-name" -m ping
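
If you are unsure where those inventory values come from, vagrant ssh-config prints them for the machine; the output below is abbreviated and the port and key path will differ on your setup:

$ vagrant ssh-config
Host default
  HostName 127.0.0.1
  User vagrant
  Port 2222
  IdentityFile .vagrant/machines/default/virtualbox/private_key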

2 votes

Try modifying your hosts file to:

192.168.0.10
192.168.0.11
192.168.0.12
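
With a plain list of IPs like this, the connection details move onto the command line instead; a minimal sketch using standard ansible flags (-u sets the remote user, -k prompts for the SSH password, -K prompts for the become/sudo password):

ansible all -m ping -u user -k
ansible all -m ping -u user -k -K -b
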
2 votes

I had this issue, but it was for a different reason than the ones documented in the other answers. The host I was trying to deploy to was only reachable through a jump box. Originally I thought it was because Ansible wasn't recognizing my SSH config file, but it was. The solution for me was to make sure that the user in the SSH config file matched the user in the Ansible playbook.
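
As an illustration only (the host, jump box, and user names here are made up), the relevant ~/.ssh/config entry looked something like this, and the remote_user in the playbook (or ansible_user in the inventory) has to match its User line:

Host app-server
    HostName 10.0.1.5
    User deploy
    ProxyJump jumpuser@bastion.example.com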

1 vote

$ ansible -m ping all -vvv

After installing Ansible on Ubuntu or CentOS, this command may fail with an "Authentication or permission failure" message. Do not panic: the remote user needs write access to its temporary directory ([/home/user_name/.ansible/tmp/]). Fixing those access rights solves the problem.
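
A sketch of that fix, assuming the username and path above (adjust them to your own user):

chown -R user_name: /home/user_name/.ansible
chmod -R u+rwX /home/user_name/.ansible

After that, the ping succeeds: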

[Your_server ~]$ ansible -m ping all
rusub-bm-gt | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
Your_server | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
0 votes

Best practice for me: I'm using SSH keys to access the server hosts.

 

1. Create a hosts file in the inventories folder:

[example]
example1.com
example2.com
example3.com

2. Create an ansible-playbook file playbook.yml:

---
- hosts: all
  roles:
    - example

3. Let's try to deploy the ansible-playbook against multiple server hosts:

ansible-playbook playbook.yml -i inventories/hosts --limit example --user vagrant
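
Since this relies on SSH keys, a quick sketch of putting a key on the hosts with standard OpenSSH tools (the user and host names are just the ones from the example above):

ssh-keygen -t ed25519
ssh-copy-id vagrant@example1.com
ssh-copy-id vagrant@example2.com
ssh-copy-id vagrant@example3.com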

-1 votes

In my case, the ansible_ssh_port had changed while reloading the VM.

==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key

So I had to update the inventory/hosts file as follows:

default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='centos' ansible_ssh_private_key_file=<key path>
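
To see which port is currently forwarded after a reload, something like vagrant port (or vagrant ssh-config) can be checked first; the output below is abbreviated and will vary:

$ vagrant port default
    22 (guest) => 2222 (host)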