The aim of this article is to describe what Ansible and Docker are, to give examples of when and how they can be used, and to show how combining the two technologies can simplify the life of QA specialists.

Terms and definitions

Before proceeding to the main part, let us define the terms used below.

Ansible is a configuration management system written in Python that uses a declarative markup language to describe configurations. It is used to automate software configuration and deployment.

Docker is software for automating the deployment and management of applications in an operating-system-level virtualization environment.

SSH (Secure Shell) is an application-level network protocol that allows you to control the operating system remotely and tunnel TCP connections (for example, to transfer files).

PowerShell is a scripting language from Microsoft; you could call it a more advanced version of the command line. It can be used to manage the Windows operating system and its components, as well as to create automated scripts for system administration.

Image is a self-contained file system from which a container is created.

Container is a running operating system process in an isolated environment with a file system attached from an image.

Ansible overview

So, Ansible is a very flexible and simple tool for writing automation scripts of any complexity. It is usually used to manage Linux nodes, but the target system can also be Windows or macOS, and it supports working with network devices. Managed nodes need Python version 2.4 or higher and are reached over an SSH or PowerShell connection. Although Ansible can manage Windows systems, only a Unix-like system (Linux, macOS, FreeBSD, etc.) can act as the Ansible control host.

It is important to understand that there can be many target systems, and Ansible does not need to be installed on them: installing it on a single control machine is enough to manage the rest. All work is done over the SSH/PowerShell connection.
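As a quick illustration of this agentless model, here is a minimal sketch (the group name service and the inventory path are hypothetical): from the control machine you can check the connectivity of all managed nodes with a single ad-hoc command, without installing anything on them:

$ ansible service -i inventory/hosts -m ping

Each reachable host answers with “pong”.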

What tasks can be solved using Ansible:

  • installation/removal and configuration of software;
  • creation/deletion of users;
  • users passwords/keys control;
  • creation/removal of containers/virtual machines;
  • running various scripts/tests.

Ansible project structure

In general, Ansible project may include:
  • variables;
  • playbooks (scenarios);
  • roles;
  • inventory (lists of host groups).

Variables

Variables are used to store values that can be used in playbooks.

Variables can be defined:

  • in special files for a group/device;
  • in inventory;
  • in playbook;
  • in the roles that are used;
  • variables passed when the playbook is called.

Special files with variables can be stored in two directories:

  • group_vars/ – group variables (common). If necessary, you can create an “all” file that contains variables that apply to all groups.
  • host_vars/ – variables of individual hosts (private); the file names must match the host names or addresses used in the inventory.

If you use directories, the names of the files inside them are not important; what matters is that the directory name matches a group or host name in the inventory.

Variable files must be in YAML format.

You don’t need to create a separate directory for each group or host; you can simply create a file named after the group or host and record all the necessary variables in it.
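For example, a minimal group variables file might look like this (a sketch; the group name service and the variable names are hypothetical) — group_vars/service:

---
# Variables applied to every host in the "service" group
app_port: 8080
app_user: deploy

These variables can then be referenced in playbooks as {{ app_port }} and {{ app_user }}.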

Priorities of variables (in ascending order):

  • set in roles;
  • defined in inventory;
  • group variables (common);
  • group variables (private);
  • passed from the playbook;
  • passed on the command line (-e/--extra-vars).
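So, for example, a value passed on the command line wins over the same variable defined anywhere else (app_port is the hypothetical variable from the sketch above):

$ ansible-playbook simple-service.yml -e "app_port=9090"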

Playbooks

Playbooks are used to describe and execute scenarios.

A playbook is a *.yml file that describes what to do when the scenario is run.

A playbook should have at least the following structure:

  • hosts – the target group of hosts; you can set mask-based exceptions or specify multiple groups separated by a colon;
  • tasks or roles – the actions to perform or the roles to apply.

You can also specify:

  • become_user – the user that tasks will be executed as (e.g., via sudo);
  • remote_user – the user for the SSH connection;
  • include – an additionally called scenario, for example, validation of application parameters.
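Putting this together, a minimal playbook might look like the following sketch (the group name service, the user name and the package are hypothetical):

---
- hosts: service
  remote_user: deploy
  become: yes
  become_user: root
  tasks:
    - name: Install the application package
      apt:
        name: nginx
        state: present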

Roles

A role is a structured playbook containing, at a minimum, a set of tasks, and optionally event handlers, default variables (defaults), files, templates, as well as a description and dependencies (meta).

An example of a role structure for installing Couchbase on a project:
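(A sketch of such a structure; the task file names are hypothetical, while the directory layout follows the standard Ansible role convention.)

couchbase/
  defaults/
    main.yml        # default variables
  files/            # data and settings to be loaded
  tasks/
    main.yml        # entry point that includes the task files below
    create-view.yml
    load-data.yml
    load-settings.yml
    load-extra-params.yml
  meta/
    main.yml        # description and dependencies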

From this structure we can see that when the role is called, the following will be executed:

  1. A view will be created.
  2. Data for work will be loaded.
  3. Settings for the main application or the light delivery will be loaded.
  4. Additional parameters will be loaded.

Inventory

We can say that the inventory is a file that describes the devices Ansible will connect to. Devices can be specified by IP address or by name, either one at a time or divided into groups. The file is written in INI format.

A file example:
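(A sketch reconstructed from the description below; the host entries are hypothetical.)

[service]
localhost ansible_connection=local ansible_host=127.0.0.1

[service_light]
localhost ansible_connection=local

[service_backend]
localhost ansible_connection=local

[simple:children]
service
service_light
service_backend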

The name indicated in square brackets is the name of a group. In this case, three groups of devices are created: service, service_light and service_backend. Since ansible_connection = local is specified for them, all operations will be performed on the current local machine, and the ansible_host parameter will be ignored, so you should be very careful!

[simple:children] shows that several groups were merged into one.

When working with the inventory, it is a good rule to create a new file each time so that you can return to the previous state of the file if necessary.

The finished playbook is launched with the command:

$ ansible-playbook simple-service.yml -i inventory/localhost -u username -k -vv

  • -vv is a parameter that increases log verbosity compared to -v;
  • -i is the path to the inventory file;
  • -u sets the user name for the connection;
  • -k asks for the connection (SSH) password interactively.

Docker overview

The most important and useful feature of Docker is that an application can be packaged with all its dependencies into a separate module that is self-sufficient and not overloaded with unnecessary components, as usually happens with virtual machines. In most cases this means better performance and a smaller footprint.

After Docker appeared, the work of system administrators and DevOps specialists became simpler, because instead of thinking about how to launch an application, you now only need to think about where to launch it.

Docker can be used on all major operating systems, but on Windows-based systems you may run into problems during installation and use. You can read more about installation in Get started with Docker.

Using of Docker on the project

On our project, we came to use Docker as a tool for preparing a test environment that requires minimal configuration and is almost always ready to use.

Of course, there were pitfalls. After upgrading to another version of the Couchbase container, problems began: a working container might suddenly stop responding, freeze, etc. Since the choice of ready-made containers was limited and there was no time to prepare our own, we decided to use a local installation of the application.

We could not get rid of all the problems at once, but everything was solved faster with the help of several scripts and additional attention from QA no more than once every few days. A little later we built our own container, but certain problems remained in it due to the current minor version of the application, which we could not change at that time.

As an interesting launch example, here is the script that adds an Apache Tomcat container:
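(The listing below is a sketch reconstructed from the explanation that follows; the image name example/ps-tomcat, the paths and the port are hypothetical.)

#!/bin/bash
# Check whether the Apache Tomcat container named "ps-tomcat" is running
RUNNING=$(sudo docker ps --format '{{.Names}}' | grep -w ps-tomcat)

if [ "$RUNNING" == "ps-tomcat" ]; then
    echo "ps-tomcat is already running"
    exit 0
fi

# Download the image and start a temporary container
sudo docker pull example/ps-tomcat:latest
sudo docker run -d --name ps-tomcat example/ps-tomcat:latest

# Copy the directories with logs, applications and config files to the host
sudo mkdir -p /opt/ps-tomcat
sudo docker cp ps-tomcat:/usr/local/tomcat/logs /opt/ps-tomcat/logs
sudo docker cp ps-tomcat:/usr/local/tomcat/webapps /opt/ps-tomcat/webapps
sudo docker cp ps-tomcat:/usr/local/tomcat/conf /opt/ps-tomcat/conf

# Delete the container and start it again, linking it with the local folders
sudo docker rm -f ps-tomcat
sudo docker run -d --name ps-tomcat \
    -p 9099:9099 \
    -v /opt/ps-tomcat/logs:/usr/local/tomcat/logs \
    -v /opt/ps-tomcat/webapps:/usr/local/tomcat/webapps \
    -v /opt/ps-tomcat/conf:/usr/local/tomcat/conf \
    example/ps-tomcat:latest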

Let me explain a few points:

  • In the first line, as the comments show, we check whether an Apache Tomcat container named “ps-tomcat” is running: we get the list of running containers, filter it by the word “ps-tomcat” and save the result into a variable.
  • Then, if the value of the variable is not equal to ps-tomcat (you could, of course, check for an empty value instead), we consider that the Apache Tomcat container is not running and needs to be started; if it is equal, we simply display a notification and exit the script.
  • If the installation needs to be completed, we download the image, launch the container and copy out the directories with logs, applications and config files. This is done because we use a customized Apache Tomcat image, so some of its settings, including security, are unique.
    With the settings directories copied out, there is no need to store the settings files in a separate directory or check them for changes in the current version, and you don’t have to worry that the current version will be accidentally deleted.
  • We delete the container and start it again, linked with the local folders. This is necessary so that we can simply add .war and .properties files to the appropriate directories and immediately launch the installed applications. Note also that arguments of the form -p 9099:9099 perform port forwarding so that the application can be reached from the outside.
  • You may also notice that all commands are executed as the superuser. This is because, by default, Docker works through a Unix socket, which for security reasons is closed to users who are not members of the docker group. You can fix this by running a small script:
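A minimal sketch of that script (these are the standard commands for granting a user access to the Docker socket; log out and back in for the group change to take effect):

$ sudo groupadd docker
$ sudo usermod -aG docker $USER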

Which components of the environment did we move to Docker? An attentive reader will have noticed that it was Apache Tomcat, as well as RabbitMQ and ZooKeeper. Unfortunately, Couchbase images above version 4.4.x have certain problems, so we decided to use a local installation, but nothing prevents moving Couchbase into a container in the future either.

After the above, it might seem that Docker is an ideal solution for replacing the whole environment, but you always need to weigh the pros and cons; otherwise you can end up in the situation: “When you are holding a hammer, everything around seems to be nails.”

What might Docker not suit? A small project, where the time spent preparing and configuring a separate image is comparable to the time spent installing the necessary environment.

Could our application be fully packaged in Docker? Of course it could, but since we check the quality of the application under development, this is at least impractical because the container would have to be rebuilt every time new functionality is added. However, the finished image can be used for integration testing as part of other systems and products.

Instead of conclusion

On a project, you can use Ansible as an auto-installer of the product, or as an assistant for installing and configuring Jenkins and any other CI environment.

I see the best use of these two technologies in tandem, where Ansible performs the initial installation and configuration of what should live outside the containers, and Docker virtualizes specific applications. This is actually what we came to use on our project.

Useful links

  1. Ansible for network engineers – https://legacy.gitbook.com/book/natenka/ansible-dlya-setevih-inzhenerov/details
  2. 15 things you should know about Ansible – https://habr.com/post/306998/
  3. Container-based integration testing – https://habr.com/company/redhatrussia/blog/420385/
  4. Docker cheat sheet – https://github.com/wsargent/docker-cheat-sheet