Ansible up and running


After much delay, I’ve finally found time to take a look at Ansible.  I’ve spent some time looking at possible platforms to automate network deployment, and Ansible seems to be a favorite in this arena.  One of the primary reasons for this is that Ansible is ‘clientless’ (I’m putting that in quotes for a reason; more on that in a later post).  So unlike Chef, Puppet, and Salt (yes, there are proxy modes available in some products), Ansible does not require an installed client on the remote endpoints.  So let’s get right into a basic lab setup.

While the end goal will be to use Ansible to automate network appliances, we’re going to start with a more standard use case – Linux servers.  The base lab we will start with is two servers, one acting as the Ansible server and the second acting as an Ansible client, or remote server.  Both hosts are CentOS 7 based Linux hosts.  So our base lab looks like this…

[Image: base lab diagram showing ansibleserver and ansibleclient1]
Pretty exciting, right?  I know, it’s not, but I want to start with the basics and build from there…

Note: I’ll refer to ansibleserver as ‘server’ and ansibleclient1 as ‘client’ throughout the remainder of the article.

If you’re following along, my assumption is that you have 2 servers that are similarly IP’d, have resolvable DNS names, and are fully updated CentOS 7 hosts.  That being said, let’s start out by building the Ansible server node.  The install process looks like this…

#Install the epel repo
yum -y install epel-release

#Install Ansible
yum -y install ansible

That’s it.  You’re done.  Do a quick check and see if Ansible is working as expected…
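One quick way to verify the install (the exact version and config paths in the output will vary with your environment) is to ask Ansible to report its version:

```shell
# Confirm Ansible is installed and on the path
ansible --version
```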

[Image: output showing the installed Ansible version]
So it looks like everything is installed.  The next thing we want to do is configure a means to communicate with the clients.  Ansible’s default means of doing this is with SSH.  So let’s configure an SSH key on the server and send it to the client…

#Generate a local SSH key
ssh-keygen -t rsa

#Copy the key to the client
ssh-copy-id root@ansibleclient1

After the key has been installed, test it out by SSHing to the client from the server…

[Image: passwordless SSH login from the server to the client]

The login should work and you shouldn’t need to specify a password for the root user.

Note: In this example I’m using the root account on both the server and the client.  By default, Ansible attempts to use the currently logged-in user to connect to the clients.  If you don’t plan on using the root user on the server, you’ll need to tell Ansible to still use root for connectivity. I plan on covering this functionality in a later post.

The next thing we need to do is define the clients we want the server to work with.  This is done by defining them in the Ansible ‘hosts’ file, which is located at ‘/etc/ansible/hosts’.  Out of the box, this file is full of examples.  For the sake of clarity, I’ve removed the examples, leaving the file looking like this…

[Image: contents of the trimmed /etc/ansible/hosts file]
Here you can see I’ve added a group named ‘linuxservers’, and in that group I’ve defined a client, ‘ansibleclient1’.  Any host that you wish to manage must be specified in this file.  Once defined, it can be referenced either directly by name or as part of a group.  Review some of the default examples in this file to get an idea of how you can match on different hosts.
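Using the hostnames from this lab, the trimmed-down inventory can be sketched as a minimal INI file (the group and host names are the ones used in this article):

```ini
# /etc/ansible/hosts
[linuxservers]
ansibleclient1
```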

So now that we have a client defined, how do we talk to it with Ansible?  The best test is to try and run some basic modules against the clients.  For instance, there’s a ‘ping’ module that lets you see which hosts are online…

[Image: ping module run against the client by name, by group, and with ‘all’]
Notice that you can run the ping module against the client in a few different ways.  I can reference it by name, by group, or by using the ‘all’ keyword, which matches all clients defined in the hosts file.  As you’ll see, modules are the key component of Ansible that allow it to perform a wide variety of tasks.  For instance, there’s a module for SELinux…
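As a sketch, assuming the inventory shown earlier, the three ways of targeting the client look like this:

```shell
# Target a single host by name
ansible ansibleclient1 -m ping

# Target every host in the 'linuxservers' group
ansible linuxservers -m ping

# Target all hosts defined in the inventory
ansible all -m ping
```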

[Image: selinux module run disabling SELinux on the client]
Above I call the ‘selinux’ module and pass it an argument with the ‘-a’ flag to tell it to disable SELinux.  There’s also a shell module that allows you to run shell commands on clients like I do below with the ‘ip addr’ command…

[Image: shell module running ‘ip addr’ on the client]
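For reference, the two ad-hoc commands described above can be sketched like this (check `ansible-doc selinux` and `ansible-doc shell` for the full argument lists on your version):

```shell
# Disable SELinux on all hosts in the group (selinux module)
ansible linuxservers -m selinux -a "state=disabled"

# Run an arbitrary shell command on the client (shell module)
ansible linuxservers -m shell -a "ip addr"
```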

A list of all of the modules can be found on the Ansible website.  So while the modules themselves are powerful, running them ad hoc like this isn’t much better than executing commands by hand one at a time.  The system becomes really powerful when you couple modules with playbooks.

Ansible playbooks are written in YAML and contain one or more ‘plays’.  Let’s look at an example playbook so you can see what I’m talking about…

---
- hosts: linuxservers
  tasks:
    - name: Install Apache Web Server
      yum: name=httpd state=latest
      notify:
        - openport80
        - startwebserver
  handlers:
    - name: openport80
      firewalld: port=80/tcp permanent=true state=enabled immediate=yes
    - name: startwebserver
      service: name=httpd state=started

Here’s a fairly basic playbook that installs and starts an Apache web server.  Plays are defined by specifying the hosts to be part of the play, followed by a series of tasks to execute against them.  Playbooks can contain multiple plays, each with multiple tasks; this playbook defines one play with a single task, which uses the ‘yum’ module.  In addition to the tasks, you can also define handlers.  These are items that you want to run ONLY if a certain task makes successful changes to the system.  So in this case, we tell the task ‘Install Apache Web Server’ to notify the handlers ‘openport80’ and ‘startwebserver’.  If the task results in the system successfully installing Apache, it will notify the handlers defined for the task.  So let’s save this playbook on the server and run it…
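If you save the play above as, say, `webserver.yml` (the filename is arbitrary), you can sanity-check it and then run it like so:

```shell
# Validate the playbook syntax without making any changes
ansible-playbook webserver.yml --syntax-check

# Execute the playbook
ansible-playbook webserver.yml
```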

[Image: output of the ansible-playbook run]
Calling a playbook is simple: you just use the ‘ansible-playbook’ command and specify the YAML playbook definition.  As you can see from the run, Ansible successfully ran the task and in turn triggered the two defined handlers.  If we browse to the client on port 80, we should see the Apache start page…

[Image: Apache test page served by the client]
Success!  Let’s run the playbook again and see what happens…

[Image: second playbook run showing no changes]
Note that this time the task completed without making any changes.  Since nothing changed, there’s no need to notify any of the handlers.  Pretty slick, huh?

I hope this first look at Ansible has been helpful.  Stay tuned for more posts on other features and ways to automate with Ansible!

7 thoughts on “Ansible up and running”

  1. subhash

    Thanks for this nice post.

    I tried to run the playbook but I ran into one issue. I am not able to execute the “startwebserver” handler. It ends with the following error:

    NOTIFIED: [start web server] **************************************************
    failed: [192.168.56.10] => {"failed": true, "parsed": false}
    failed=True msg='firewalld required for this module'
    OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
    debug1: Reading configuration data /etc/ssh_config
    debug1: /etc/ssh_config line 20: Applying options for *
    debug1: /etc/ssh_config line 53: Applying options for *
    debug1: auto-mux: Trying existing master
    debug1: mux_client_request_session: master session id: 2
    Shared connection to 192.168.56.10 closed.

    So my question is: do we need to install some packages for the module to execute successfully?

    Btw, I found one typo. handler name is “openport80” but notify is sent for “openport8”.

    1. Jon Langemak Post author

      Thanks for finding my typo! Now corrected

      What OS are you running on? It looks to me like the system isn’t using firewalld. Is it an earlier version of CentOS or another OS?

  2. Andrew Christ

    Hey Jon,
    I just ran through this on Centos 7.2 and everything worked great.

    Good job on the article, it was really easy to follow.

    Ansible definitely seems pretty cool. Being ‘clientless’ is a huge plus for me.

