Sep 23, 2016

Currently, I’m working on automating the setup of an authoritative DNS server, namely gdnsd. There are many nice features in gdnsd, but what might be interesting for you is that it requires the zone data to be in the regular RFC 1035 compliant format. This is also true for BIND, probably the most widely used DNS server, so the approach explained here could also be used for BIND. Again I wanted to use Ansible as the automation framework, not only to set up and configure the service, but also to generate the DNS zone files. One reason for this is that gdnsd doesn’t support zone transfers, so Ansible has to serve as the synchronization mechanism. Another is that, in my opinion, the JSON-based inventory format is a simple, generic but very powerful data interface. Especially when considering the dynamic inventory feature of Ansible, one is completely free where and how to actually store the configuration data.
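
For illustration, the zone data could be kept in the inventory roughly like this. The variable layout below is made up for this example; the actual role uses its own structure:

# group_vars/dns_servers.yml -- hypothetical zone data in the inventory
dns_zones:
  - domain: 'example.com'
    ttl: 86400
    records:
      - { name: '@',   type: 'MX', value: '10 mail.example.com.' }
      - { name: 'www', type: 'A',  value: '192.0.2.10' }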

There are already a number of Ansible roles for BIND available; however, they mostly use a very simple approach when it comes to generating the zone file and its serial. When generating zone files with an automation tool, the trickiest part is the handling of the serial number, which has to be increased on every zone update. I’d like to explain the solution I implemented for this challenge.

Zone data generation must be idempotent
One strength of Ansible is that it can be run over and over again and only ever changes something on the system if the current state is not as desired. In my context this means that the zone file only needs to be updated if the zone data in the inventory has changed, and consequently the serial number also only has to be updated in that case. But how do we know whether the data has changed?

Using the powerful Jinja2 templating engine, I define a dictionary and assign to it every value which will later go into the zone file. Then I create a checksum over the dictionary content and save it as a comment in the zone file. If the checksum changes, the serial has to be updated; otherwise the zone file keeps the old serial because nothing has changed. In practice this looks like this:

  1. Read the hash and serial which are saved as a comment in the existing zone file and register the result in a temporary variable; it will be empty if the zone file doesn’t exist yet:
    - name: Read zone hash and serial
      shell: 'grep "^; Hash:" /etc/gdnsd/zones/example.com || true'
      register: gdnsd__register_hash_and_serial
      [...]
    
  2. Define a task which will update the zone file:
    - name: Generate forward zones
      template:
        src: 'etc/gdnsd/zones/forward_zone.j2'
        dest: '/etc/gdnsd/zones/example.com'
        [...]
    
  3. In the template, create a dictionary holding the zone data:
    {% set _zone_data = {} %}
    {% set _ = _zone_data.update({'ttl': item.ttl}) %}
    {% set _ = _zone_data.update({'domain': 'example.com'}) %}
    [...]
    
  4. Create an intermediate variable _hash_and_serial holding the hash and serial read from the zone file before:
    {% set _hash_and_serial = gdnsd__register_hash_and_serial.stdout.split(' ')[2:] %}
    
  5. Create a hash from the final _zone_data dictionary and compare it with the hash (first element) in _hash_and_serial. If the hashes are equal, keep the serial read before (second element of _hash_and_serial). Otherwise set a new serial, which was previously saved in gdnsd__fact_zone_serial (see the following section):
    {% set _zone = {'hash': _zone_data | string | hash('md5')} %}
    {% if _hash_and_serial and _hash_and_serial[0] == _zone['hash'] %}
    {%   set _ = _zone.update({'serial': _hash_and_serial[1]}) %}
    {% else %}
    {%   set _ = _zone.update({'serial': gdnsd__fact_zone_serial}) %}
    {% endif %}
    
  6. Save the final hash and serial as a comment in the zone file (a consolidated sketch of the whole template follows this list):
    ; Hash: {{ _zone['hash'] }} {{ _zone['serial'] }}
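
Putting the pieces together, a stripped-down forward_zone.j2 could look roughly like this. This is only a sketch assembled from the snippets above: the record rendering is omitted, item is assumed to be the zone entry the template task loops over, and gdnsd__fact_zone_serial is defined in the next section:

{# forward_zone.j2 -- simplified sketch, not the complete template #}
{% set _zone_data = {} %}
{% set _ = _zone_data.update({'ttl': item.ttl}) %}
{% set _ = _zone_data.update({'domain': 'example.com'}) %}
{% set _hash_and_serial = gdnsd__register_hash_and_serial.stdout.split(' ')[2:] %}
{% set _zone = {'hash': _zone_data | string | hash('md5')} %}
{% if _hash_and_serial and _hash_and_serial[0] == _zone['hash'] %}
{%   set _ = _zone.update({'serial': _hash_and_serial[1]}) %}
{% else %}
{%   set _ = _zone.update({'serial': gdnsd__fact_zone_serial}) %}
{% endif %}
; Hash: {{ _zone['hash'] }} {{ _zone['serial'] }}
{{ _zone_data['domain'] }}. {{ _zone_data['ttl'] }} IN SOA ns1.{{ _zone_data['domain'] }}. hostmaster.{{ _zone_data['domain'] }}. ( {{ _zone['serial'] }} 3600 900 604800 300 )

On the next run, the grep task from step 1 reads the "; Hash:" comment again, so the serial only changes when the hash changes.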
    

Identical zone serial on distributed servers
I haven’t explained yet how gdnsd__fact_zone_serial is defined. Initially, I simply assigned ansible_date_time.epoch, which corresponds to the Unix time, to the serial. This is the simplest way to make sure the serial is numeric and that each zone update results in an increased value. However, in the introduction I also mentioned the issue of distributing the zone files to a set of DNS servers. Obviously, if they serve the same zone data, they must also use the same serial.

To make sure multiple servers use the same serial for a zone update, the serial is not computed individually in each template task execution, but once per playbook run. In Ansible, one can specify that a task must only run once, even if the playbook is executed on multiple servers. Therefore I defined such a task to store the Unix time in the temporary fact gdnsd__fact_zone_serial, which is then used in the zone template on all servers:

- name: Generate serial
  set_fact:
    gdnsd__fact_zone_serial: '{{ ansible_date_time.epoch }}'
  run_once: True

This approach is still not perfect. It won’t compare the generated zone files between a set of servers, so you have to make sure that the zone data in the inventory is the same for all servers. Also, if you update the servers individually, the serial is generated twice and the serials therefore differ, even when the zone data is identical. At the moment I can’t see an elegant approach to solve these issues. If you have some ideas, please let me know…

The example code listed above is a simplified version of my real code. If you are interested in the entire role, have a look at github.com: ganto/ansible-gdnsd. I hope this gave you some useful examples of how to use some of the more advanced Ansible features in a real-world scenario.

Sep 05, 2016

Most of my readers must have heard about the “Let’s encrypt” public certificate authority (CA) by now. For those who haven’t: About two years ago, the Internet Security Research Group (ISRG), a public benefit group supported by the Electronic Frontier Foundation (EFF), the Mozilla Foundation, Cisco, Akamai, the Linux Foundation and many more, took on the challenge of creating a fully trusted public key infrastructure which can be used for free by everyone. Until then, the big commercial certificate authorities such as Comodo, Symantec, GlobalSign or GoDaddy dominated the market for SSL certificates, which prevented a wide use of trusted encryption. The major goal of the ISRG is to increase the use of HTTPS for Web sites from less than 40 percent two years ago to 100 percent. One step towards this is to provide certificates to everyone for free; the other is to do so in a fully automated way. For this purpose a new protocol called Automatic Certificate Management Environment (ACME) was designed and implemented. Fast forward to today: the “Let’s encrypt” CA has already issued more than five million certificates, and the use of HTTPS has risen to around 45 percent as of June 2016.

acme-tiny is a small Python script which can be used to submit a certificate signing request to the “Let’s encrypt” CA. If you are eligible to request a certificate for the domain in question, you instantly get the signed certificate back. As such a certificate is only valid for 90 days and the renewal process doesn’t need any user interaction, it’s a perfect candidate for a fully automated setup.
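
Under the hood, the submission boils down to a single acme-tiny invocation. A rough sketch of how such a step could be wrapped in an Ansible task is shown below; the command-line options are the ones documented in the acme-tiny README, while all paths and the task layout are just examples and not necessarily what the role described next actually does:

- name: Submit certificate signing request via acme-tiny
  # Writes the signed certificate to the path given after the redirect.
  # All paths are placeholders for this example.
  shell: >
    python /usr/local/bin/acme_tiny.py
    --account-key /etc/acme-tiny/account.key
    --csr /etc/pki/acme/mail.linuxmonk.ch.csr
    --acme-dir /var/www/acme-challenges
    > /etc/pki/acme/mail.linuxmonk.ch.crt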

For a while now I have preferred Ansible for all kinds of automation tasks. “Let’s encrypt” finally allows me to secure new services which I spontaneously decide to host on my server via sub-domains. To ease the initial setup and fully automate the renewal process, I wrote the Ansible role ganto.acme_tiny. It will run the following tasks:

  • Generate a new RSA key if none is found for this domain
  • Create a certificate signing request
  • Submit the certificate signing request with the help of acme-tiny to the “Let’s encrypt” CA
  • Merge the received certificate with the issuing CA certificate into a certificate chain, which can then be configured for various services
  • Restart the affected service to load the new certificate

In practice, this would look like this:

  • Create a role variable file /etc/ansible/vars/mail.linuxmonk.ch.yml:
    acme_tiny__domain: [ 'mail.linuxmonk.ch', 'smtp.linuxmonk.ch' ]
    acme_tiny__cert_type: [ 'postfix', 'dovecot' ]
  • Make sure the involved service configurations load the certificate and key from the correct location (see ganto.acme_tiny: Service Configuration).
  • Run the playbook as the root user to do the initial setup (a minimal sketch of such a playbook follows below):

    $ sudo ansible-playbook \
    -e @/etc/ansible/vars/mail.linuxmonk.ch.yml \
    /etc/ansible/playbooks/acme_tiny.yml
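
The playbook itself is not shown in this post. Assuming it simply applies the role to the host in question, a minimal version could look like this (the playbook described in the role’s documentation may differ):

# /etc/ansible/playbooks/acme_tiny.yml -- minimal sketch
- name: Request or renew “Let’s encrypt” certificates
  hosts: 'mail.linuxmonk.ch'
  become: True

  roles:
    - role: 'ganto.acme_tiny'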

That’s it. Both SMTP and IMAP are now secured with the help of a “Let’s encrypt” certificate. To set up automated certificate renewal, I only have to add the command shown above to a task scheduler such as cron, from where it will be executed as the unprivileged user acmetiny which was created during the initial playbook run, e.g. in /etc/cron.d/acme_tiny:

PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

@monthly acmetiny /usr/bin/ansible-playbook -e @/etc/ansible/vars/mail.linuxmonk.ch.yml /etc/ansible/playbooks/acme_tiny.yml >/dev/null

If you have become curious and want such a setup yourself, check out the extensive documentation of the Ansible role at Read the Docs: ganto.acme_tiny.

This small project was also a good opportunity for me to integrate all the nice free software-as-a-service offerings the Internet provides for an (Ansible role) developer nowadays:

  • The code “project” is hosted and managed on GitHub.
  • Every release and pull request is tested via the Travis-CI continuous integration platform. It makes use of the rolespec Ansible role testing framework for which a small test suite has been written.
  • Ansible Galaxy is used as a repository for software distribution.
  • The documentation is written in a pimped version of Markdown, rendered via Sphinx and hosted on Read the Docs from where it can be accessed and downloaded in various formats.

That’s convenient!