
I'm trying to deploy and configure a Kubernetes cluster on AWS EC2 with Ansible, and I want to do the deployment and the configuration as part of the same playbook.

main.yml :

---
- hosts: localhost
  gather_facts: false

  vars_files:
    - vars/main.yml

  tasks:
    - name: Deploy the master for the kubernetes cluster
      include_tasks: tasks/kub_master.yml

    - name: Configure Master Kub node
      include_tasks: tasks/config_kub_master.yml

kub_master.yml

---
- name: Deploy the admin node
  ec2:
    region: "{{ region }}"
    key_name: "{{ ssh_key_name }}"
    instance_type: "{{ master_inst_type }}"
    image: "{{ image_id }}"
    count: "{{ master_inst_count }}"
    assign_public_ip: no
    vpc_subnet_id: "{{ subnet_id }}"
    group_id: "{{ sg_id }}"
    wait: yes
    wait_timeout: 1800
    volumes:
      - device_name: /dev/xvda
        volume_type: gp2
        volume_size: 50
        delete_on_termination: true
    user_data: "{{ lookup('file', '../files/user_data_master.sh') }}"
    instance_tags:
      Name: "{{ kub_cluster }}-admin-node"
      lob: "{{ tags_lob }}"
      project: "{{ tags_project }}"
      component: "{{ kub_cluster }}_kub_master_node"
      contact_email: "{{ tags_contact_email }}"
      product: "{{ tags_product }}"
  async: 45
  poll: 25
  register: kub_mas
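
The `register: kub_mas` result above holds the new instance's details, but a registered variable alone doesn't make the host targetable by a later play. One common pattern (a sketch only; the group name `kub_masters` and the `wait_for` check are my assumptions, not part of the original task file) is to follow the `ec2` task with `add_host`:

```yaml
# Put the new master into the in-memory inventory so a later play can target it
- name: Add the new master to the in-memory inventory
  add_host:
    name: "{{ kub_mas.instances[0].private_ip }}"
    groups: kub_masters

# Don't start configuring until SSH is actually reachable
- name: Wait for SSH to come up on the new instance
  wait_for:
    host: "{{ kub_mas.instances[0].private_ip }}"
    port: 22
    timeout: 320
```

A later play can then use `hosts: kub_masters` instead of templating an IP into `hosts:`.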

Kub_configure.yml

---
- hosts: "{{ kub_mas.instances[0].private_ip }}"
  gather_facts: true
  remote_user: remote_user
  shell: " cat /etc/redhat-release " 

However, this doesn't work at the Kub_configure step: it fails on the remote execution.

How can I deploy and then use the IP from the deployment to configure the node in a single Ansible playbook?

Here's the output of the Ansible run; you can see the task executes locally even though I'm trying to target a remote address.

TASK [Configure Master Kub node] ******************************************************************************************************************************************************************************
task path: /home/username/Repo_S/kube_cluster/cluster_deploy/cluster_deploy.yml:11
Read vars_file 'vars/main.yml'
included: /home/username/Repo_S/kube_cluster/cluster_deploy/tasks/master/master_config.yml for localhost
Read vars_file 'vars/main.yml'
Read vars_file 'vars/main.yml'

TASK [shell] **************************************************************************************************************************************************************************************************
task path: /home/username/Repo_S/kube_cluster/cluster_deploy/tasks/master/master_config.yml:2
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: username
<localhost> EXEC /bin/sh -c 'echo ~username && sleep 0'
<localhost> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/username/.ansible/tmp/ansible-tmp-1541439662.55-121688078512813 `" && echo ansible-tmp-1541439662.55-121688078512813="` echo /home/username/.ansible/tmp/ansible-tmp-1541439662.55-121688078512813 `" ) && sleep 0'
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/commands/command.py
<localhost> PUT /home/username/.ansible/tmp/ansible-local-30214U7V93F/tmpmcASIl TO /home/username/.ansible/tmp/ansible-tmp-1541439662.55-121688078512813/command.py
<localhost> EXEC /bin/sh -c 'chmod u+x /home/username/.ansible/tmp/ansible-tmp-1541439662.55-121688078512813/ /home/username/.ansible/tmp/ansible-tmp-1541439662.55-121688078512813/command.py && sleep 0'
<localhost> EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-sywhejuzolifjntwhpbxlesbbbutlegn; /usr/bin/python /home/username/.ansible/tmp/ansible-tmp-1541439662.55-121688078512813/command.py'"'"' && sleep 0'
<localhost> EXEC /bin/sh -c 'rm -f -r /home/username/.ansible/tmp/ansible-tmp-1541439662.55-121688078512813/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
    "changed": false,
    "module_stderr": "sudo: a password is required\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE",
    "rc": 1
}
    to retry, use: --limit @/home/username/Repo_S/kube_cluster/cluster_deploy/cluster_deploy.retry

PLAY RECAP ****************************************************************************************************************************************************************************************************
localhost                  : ok=3    changed=1    unreachable=0    failed=1
Can we see the output of Ansible? – Ignacio Millán
updated in the original post – M Kaim

1 Answer


In Kub_configure.yml, try adding become: yes.

This may fix the issue:

---
- hosts: "{{ kub_mas.instances[0].private_ip }}"
  gather_facts: true
  become: yes
  remote_user: remote_user
  tasks:
    - name: Check the OS release
      shell: cat /etc/redhat-release
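
`become: yes` addresses the `sudo: a password is required` error, but templating `kub_mas.instances[0].private_ip` into `hosts:` only matches a host that is already in the inventory. A fuller sketch of the configure side (the `add_host` task and the group name `kub_masters` are my additions, not from the original post; `kub_mas` is assumed to have been registered on localhost earlier in the same run):

```yaml
---
# Runs on localhost, where kub_mas was registered, and pushes the
# new instance into the in-memory inventory for the next play.
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Expose the new master to the rest of the run
      add_host:
        name: "{{ kub_mas.instances[0].private_ip }}"
        groups: kub_masters

# Now target the freshly added host by group name.
- hosts: kub_masters
  gather_facts: true
  become: yes
  remote_user: remote_user
  tasks:
    - name: Check the OS release
      shell: cat /etc/redhat-release
```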