Feb 15 2018
 

The recently disclosed Spectre and Meltdown CPU vulnerabilities are some of the most dramatic security issues in recent computer history. Fortunately, even six weeks after public disclosure, sophisticated attacks exploiting these vulnerabilities are not yet commonly observed in the wild. Fortunately, because hardware and software vendors are still struggling to provide appropriate fixes.

If you happen to run a Linux system, an excellent tool for tracking your exposure as well as the mitigations that are already active is the spectre-meltdown-checker script, originally written and maintained by Stéphane Lesimple.

Within the last month I set myself the goal of bringing this script to Fedora and EPEL so it can be easily consumed by Fedora, CentOS and RHEL users. Today the spectre-meltdown-checker package was finally added to the EPEL repositories, after having been available in the Fedora stable repositories for a week already.

On Fedora, all you need to do is:

dnf install spectre-meltdown-checker

After enabling the EPEL repository on CentOS this would be:

yum install spectre-meltdown-checker

The script, which should be run by the root user, will report:

    • If your processor is affected by the different variants of the Spectre and Meltdown vulnerabilities.
    • If your processor microcode tries to mitigate the Spectre vulnerability or if you run a microcode which
      is known to cause stability issues.
    • If your kernel implements the currently known mitigation strategies and if it was compiled with a compiler which is hardening it even more.
    • And finally, whether you’re (still) affected by some of the vulnerability variants.
On my laptop this currently looks like this (note that I’m not running the latest stable Fedora kernel yet):

    # spectre-meltdown-checker                                                                                                                                
    Spectre and Meltdown mitigation detection tool v0.33                                                                                                                      
                                                                                                                                                                              
    Checking for vulnerabilities on current system                                       
    Kernel is Linux 4.14.14-200.fc26.x86_64 #1 SMP Fri Jan 19 13:27:06 UTC 2018 x86_64   
    CPU is Intel(R) Core(TM) i5-5200U CPU @ 2.20GHz                                      
                                                                                                                                                                              
    Hardware check                            
    * Hardware support (CPU microcode) for mitigation techniques                         
      * Indirect Branch Restricted Speculation (IBRS)                                    
        * SPEC_CTRL MSR is available:  YES    
        * CPU indicates IBRS capability:  YES  (SPEC_CTRL feature bit)                   
      * Indirect Branch Prediction Barrier (IBPB)                                        
        * PRED_CMD MSR is available:  YES     
        * CPU indicates IBPB capability:  YES  (SPEC_CTRL feature bit)                   
      * Single Thread Indirect Branch Predictors (STIBP)                                                                                                                      
        * SPEC_CTRL MSR is available:  YES    
        * CPU indicates STIBP capability:  YES                                           
      * Enhanced IBRS (IBRS_ALL)              
        * CPU indicates ARCH_CAPABILITIES MSR availability:  NO                          
        * ARCH_CAPABILITIES MSR advertises IBRS_ALL capability:  NO                                                                                                           
      * CPU explicitly indicates not being vulnerable to Meltdown (RDCL_NO):  UNKNOWN    
      * CPU microcode is known to cause stability problems:  YES  (Intel CPU Family 6 Model 61 Stepping 4 with microcode 0x28)                                                
                                              
    The microcode your CPU is running on is known to cause instability problems,         
    such as intempestive reboots or random crashes.                                      
    You are advised to either revert to a previous microcode version (that might not have
    the mitigations for Spectre), or upgrade to a newer one if available.                
    
    * CPU vulnerability to the three speculative execution attacks variants
      * Vulnerable to Variant 1:  YES 
      * Vulnerable to Variant 2:  YES 
      * Vulnerable to Variant 3:  YES 
    
    CVE-2017-5753 [bounds check bypass] aka 'Spectre Variant 1'
    * Mitigated according to the /sys interface:  NO  (kernel confirms your system is vulnerable)
    > STATUS:  VULNERABLE  (Vulnerable)
    
    CVE-2017-5715 [branch target injection] aka 'Spectre Variant 2'
    * Mitigated according to the /sys interface:  YES  (kernel confirms that the mitigation is active)
    * Mitigation 1
      * Kernel is compiled with IBRS/IBPB support:  NO 
      * Currently enabled features
        * IBRS enabled for Kernel space:  NO 
        * IBRS enabled for User space:  NO 
        * IBPB enabled:  NO 
    * Mitigation 2
      * Kernel compiled with retpoline option:  YES 
      * Kernel compiled with a retpoline-aware compiler:  YES  (kernel reports full retpoline compilation)
      * Retpoline enabled:  YES 
    > STATUS:  NOT VULNERABLE  (Mitigation: Full generic retpoline)
    
    CVE-2017-5754 [rogue data cache load] aka 'Meltdown' aka 'Variant 3'
    * Mitigated according to the /sys interface:  YES  (kernel confirms that the mitigation is active)
    * Kernel supports Page Table Isolation (PTI):  YES 
    * PTI enabled and active:  YES 
    * Running as a Xen PV DomU:  NO 
    > STATUS:  NOT VULNERABLE  (Mitigation: PTI)
    
    A false sense of security is worse than no security at all, see --disclaimer
    

    The script also supports a mode which outputs the result as JSON, so that it can easily be parsed by any compliance or monitoring tool:

    # spectre-meltdown-checker --batch json 2>/dev/null | jq
    [
      {
        "NAME": "SPECTRE VARIANT 1",
        "CVE": "CVE-2017-5753",
        "VULNERABLE": true,
        "INFOS": "Vulnerable"
      },
      {
        "NAME": "SPECTRE VARIANT 2",
        "CVE": "CVE-2017-5715",
        "VULNERABLE": false,
        "INFOS": "Mitigation: Full generic retpoline"
      },
      {
        "NAME": "MELTDOWN",
        "CVE": "CVE-2017-5754",
        "VULNERABLE": false,
        "INFOS": "Mitigation: PTI"
      }
    ]
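
    Such output is easy to consume from a shell script as well. As a minimal sketch (assuming jq is installed; the exit-code handling is my own wrapper, not part of the tool), the following prints whether any variant is still reported as vulnerable:

    # spectre-meltdown-checker --batch json 2>/dev/null \
        | jq -e 'map(select(.VULNERABLE == true)) | length == 0' >/dev/null \
        && echo "all variants mitigated" || echo "still vulnerable"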
    

    For those who are (still) using a Nagios-compatible monitoring system, spectre-meltdown-checker also supports being run as an NRPE check:

    # spectre-meltdown-checker --batch nrpe 2>/dev/null ; echo $?
    Vulnerable: CVE-2017-5753
    2
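
    The exit code follows the Nagios plugin convention (2 = CRITICAL), so on the monitored host it only takes a command definition in nrpe.cfg. A minimal sketch (the command name and script path are assumptions, and since the script needs root you’ll have to allow it e.g. via sudo):

    command[check_spectre_meltdown]=sudo /usr/bin/spectre-meltdown-checker --batch nrpe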
    

    I just mailed Stéphane and he will soon release version 0.35 with many new features and fixes. As soon as it is released I’ll submit a package update, so that you’re always up to date with the latest developments.

    Sep 23 2016
     

    Currently, I’m working on automating the setup of an authoritative DNS server, namely gdnsd. There are many nice features in gdnsd, but what might be interesting for you is that it requires the zone data to be in the regular RFC 1035 compliant format. This is also true for bind, probably the most widely used DNS server, so the approach explained here could also be used for bind. Again I wanted to use Ansible as the automation framework, not only to set up and configure the service, but also to generate the DNS zone file. One reason for this is that gdnsd doesn’t support zone transfers, so Ansible has to act as the synchronization mechanism; another is that in my opinion the JSON-based inventory format is a simple, generic but very powerful data interface. Especially when considering the dynamic inventory feature of Ansible, one is completely free where and how to actually store the configuration data.

    There are already a number of Ansible bind roles available, however they mostly use a very simple approach when it comes to the zone file and its serial. When generating zone files with an automation tool, the trickiest part is the handling of the serial number, which has to be increased on every zone update. I’d like to explain the solution I implemented for this challenge.

    Zone data generation must be idempotent
    One strength of Ansible is that it can be run over and over again and only ever changes something in the system if the current state is not the desired one. In my context this means that the zone file only needs to be updated if the zone data from the inventory has changed, and consequently the serial number only has to be bumped in that case. But how do we know whether the data has changed?

    Using the powerful Jinja2 templating engine, I define a dictionary and assign to it every value which will later go into the zone file. Then I create a checksum over the dictionary content and save it as a comment in the zone file. If the checksum has changed, the serial has to be updated; otherwise the zone file keeps the old serial and nothing changes. In practice this looks like this:

    1. Read the hash and serial which are saved as comment in the existing zone file and register a temporary variable. It will be empty if the zone file doesn’t exist yet:
      - name: Read zone hash and serial
        shell: 'grep "^; Hash:" /etc/gdnsd/zones/example.com || true'
        register: gdnsd__register_hash_and_serial
        [...]
      
    2. Define a task which will update the zone file:
      - name: Generate forward zones
        template:
          src: 'etc/gdnsd/zones/forward_zone.j2'
          dest: '/etc/gdnsd/zones/example.com'
          [...]
      
    3. In the template, create a dictionary holding the zone data:
      {% set _zone_data = {} %}
      {% set _ = _zone_data.update({'ttl': item.ttl}) %}
      {% set _ = _zone_data.update({'domain': 'example.com'}) %}
      [...]
      
    4. Create an intermediate variable _hash_and_serial holding the hash and serial read from the zone file before:
      {% set _hash_and_serial = gdnsd__register_hash_and_serial.stdout.split(' ')[2:] %}
      
    5. Create a hash from the final _zone_data dictionary, compare it with the hash (first element) in _hash_and_serial. If the hashes are equal set the serial as read before (second element) in _hash_and_serial. Otherwise set a new serial which was previously saved in gdnsd__fact_zone_serial (see following section):
      {% set _zone = {'hash': _zone_data | string | hash('md5')} %}
      {% if _hash_and_serial and _hash_and_serial[0] == _zone['hash'] %}
      {%   set _ = _zone.update({'serial': _hash_and_serial[1]}) %}
      {% else %}
      {%   set _ = _zone.update({'serial': gdnsd__fact_zone_serial}) %}
      {% endif %}
      
    6. Save the final hash and serial as a comment in the zone file:
      ; Hash: {{ _zone['hash'] }} {{ _zone['serial'] }}
      

    Identical zone serial on distributed servers
    I didn’t explain yet how gdnsd__fact_zone_serial is defined. Initially, I simply assigned ansible_date_time.epoch, which corresponds to the Unix time, to the serial. This is the simplest way to make sure the serial is numerical and each zone update results in an increased value. However, in the introduction I also mentioned the issue of distributing the zone files between a set of DNS servers. Obviously, if they serve the same zone data, they must also use the same serial.

    To make sure multiple servers use the same serial for a zone update, the serial is not computed individually in each template task execution, but once per playbook run. In Ansible, one can specify that a task must only run once, even if the playbook is executed on multiple servers. Therefore I defined such a task to store the Unix time in the temporary fact gdnsd__fact_zone_serial, which is then used in the zone template on all servers:

    - name: Generate serial
      set_fact:
        gdnsd__fact_zone_serial: '{{ ansible_date_time.epoch }}'
      run_once: True
    

    This approach is still not perfect. It won’t compare the generated zone files between a set of servers, so you have to make sure that the zone data in the inventory is the same for all servers. Also, if you update the servers individually, the serial is generated twice and the two values will differ, even when the zone data is identical. At the moment I can’t see any elegant approach to solve those issues. If you have some ideas, please let me know…

    The example code listed above is a simplified version of my real code. If you are interested in the entire role, have a look at github.com: ganto/ansible-gdnsd. I hope this gives you some useful examples of using some of the more advanced Ansible features in a real-world scenario.

    Sep 05 2016
     

    Most of my readers must have heard about the “Let’s encrypt” public certificate authority (CA) by now. For those who haven’t: About two years ago, the Internet Security Research Group (ISRG), a public benefit group supported by the Electronic Frontier Foundation (EFF), the Mozilla Foundation, Cisco, Akamai, the Linux Foundation and many more, took on the challenge of creating a fully trusted public key infrastructure which can be used by everyone for free. Until then, the big commercial certificate authorities such as Comodo, Symantec, GlobalSign or GoDaddy dominated the market for SSL certificates, which prevented a wide use of trusted encryption. The major goal of the ISRG is to increase the use of HTTPS for Web sites from less than 40 percent two years ago to 100 percent. One step towards this is providing certificates to everyone for free, the other is doing so in a fully automated way. For this reason a new protocol called Automatic Certificate Management Environment (ACME) was designed and implemented. Fast forward to today: The “Let’s encrypt” CA has already issued more than five million certificates and the use of HTTPS has grown to around 45 percent as of June 2016.

    acme-tiny is a small Python script which can be used to submit a certificate request to the “Let’s encrypt” CA. If you’re eligible to request a certificate for the domain in question, you instantly get the certificate back. As such a certificate is only valid for 90 days and the renewal process doesn’t need any user interaction, it’s a perfect candidate for a fully automated setup.
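
    Under the hood the call to acme-tiny is quite simple. A minimal sketch of a manual run (the file names and the challenge directory are assumptions; the Ansible role described below takes care of all of this for you):

    $ python acme_tiny.py --account-key ./account.key --csr ./mail.linuxmonk.ch.csr \
        --acme-dir /var/www/challenges/ > ./mail.linuxmonk.ch.crt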

    For a while now I have preferred Ansible for all kinds of automation tasks. “Let’s encrypt” finally allows me to secure new services which I spontaneously decide to host on my server via sub-domains. To ease the initial setup and fully automate the renewal process, I wrote an Ansible role ganto.acme_tiny. It will run the following tasks:

    • Generate a new RSA key if none is found for this domain
    • Create a certificate signing request
    • Submit the certificate signing request with help of acme-tiny to the “Let’s encrypt” CA
    • Merge the received certificate with the issuing CA certificate to a certificate chain which then can be configured for various services
    • Restart the affected service to load the new certificate

    In practice, this would look like this:

    • Create a role variable file /etc/ansible/vars/mail.linuxmonk.ch.yml:
      acme_tiny__domain: [ 'mail.linuxmonk.ch', 'smtp.linuxmonk.ch' ]
      acme_tiny__cert_type: [ 'postfix', 'dovecot' ]
    • Make sure the involved service configurations load the certificate and key from the correct location (see ganto.acme_tiny: Service Configuration).
    • Run the playbook with the root user to do the initial setup:

      $ sudo ansible-playbook \
      -e @/etc/ansible/vars/mail.linuxmonk.ch.yml \
      /etc/ansible/playbooks/acme_tiny.yml

    That’s it. Both SMTP and IMAP are now secured with help of a “Let’s encrypt” certificate. To set up automated certificate renewal, I only have to add that command to a task scheduler such as cron, from where it will be executed as the unprivileged user acmetiny, which was created during the initial playbook run. E.g. in /etc/cron.d/acme_tiny:

    PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
    
    @monthly acmetiny /usr/bin/ansible-playbook -e @/etc/ansible/vars/mail.linuxmonk.ch.yml /etc/ansible/playbooks/acme_tiny.yml >/dev/null
    

    If you became curious and want to have a setup like this yourself, checkout the extensive documentation about the Ansible role at Read the Docs: ganto.acme_tiny.

    This small project was also a good opportunity for me to integrate all the nice free software-as-a-service offerings the Internet provides to an (Ansible role) developer nowadays:

    • The code “project” is hosted and managed on Github.
    • Every release and pull request is tested via the Travis-CI continuous integration platform. It makes use of the rolespec Ansible role testing framework for which a small test suite has been written.
    • Ansible Galaxy is used as a repository for software distribution.
    • The documentation is written in a pimped version of Markdown, rendered via Sphinx and hosted on Read the Docs from where it can be accessed and downloaded in various formats.

    That’s convenient!

    Apr 21 2016
     

    I recently had the task to set up and test a new Linux Internet gateway host as a replacement for an existing router. The setup is classical, with some individual ports such as HTTP being forwarded via DNAT to backend systems.

    Redundant router setup requiring policy routing

    The new router, which I will from now on call router #2, should be tested and put into operation without downtime. Obviously this means that I had to run the two routers in parallel for a while. The backend systems, however, only know one default gateway. Accessing a service through a forwarded port on router #2 resulted in a timeout, as the backend system sent the replies to the wrong gateway, where they were dropped.

    Fortunately iptables and iproute2 came to my rescue. They enable you to implement policy routing on Linux. This means that the routing decision is not (only) made based on the destination address of a packet as in regular routing, but additional rules are evaluated. In my case: Every connection opened through router #2 has to be replied via router #2.

    Using iptables/iproute2 for this task this means: Incoming packets with the source MAC address from router #2 are marked with help of the iptables ‘mark’ extension. The iptables ‘connmark’ extension will then help to associate outgoing packets to the previously marked connection. Based on the mark of the outgoing packet a custom routing policy will set the default gateway to router #2. Easy, eh?

    Configuration
    Now I’ll show the commands with which this can be accomplished. The following commands assume that the iptables rule set is still empty and are for demonstration purposes only; most likely they have to be adjusted slightly for a real configuration.

    First the routing policy will be setup:

    1. Define a custom routing table. There exist some default tables, so the custom entry shouldn’t overlap with those. For better understanding I will call it ‘router2’:

      # echo "200 router2" >> /etc/iproute2/rt_tables

    2. Add a rule to define which packets should look up their route in the previously created table ‘router2’:
      # ip rule add fwmark 0x2 lookup router2

      This means that IP packets with the mark ‘2’ will be routed according to the table ‘router2’.

    3. Set the default gateway in the ‘router2’ table to the IP address of router #2 (e.g. 10.0.0.2):

      # ip route add default via 10.0.0.2 table router2

    4. To make sure the routing cache is rebuilt, it needs to be flushed after changes:
      # ip route flush cache

    Afterwards I had to make sure that the relevant connections coming from router #2 are marked appropriately (above, the mark ‘2’ was used). The ‘mangle’ table is part of the Linux iptables packet filter and is meant for modifying network packets. This is where the packet markings will be set.

    1. The first iptables rule will match all packets belonging to a new connection coming from router #2 and sets the previously defined mark ‘2’:

      # iptables --table mangle --append INPUT \
      --protocol tcp --dport 80 \
      --match state --state NEW \
      --match mac --mac-source 52:54:00:c2:a5:43 \
      ! --source 10.0.0.0/24 \
      --jump MARK --set-mark 0x2

      The packets being marked are restricted to meet the following requirements:

      • being sent by the network adapter of router #2 (--mac-source)
      • don’t originate in the local network (! --source)
      • target destination port 80 (--dport: example for the HTTP port being forwarded by router #2)
      • belong to a new connection (--state NEW)

      Of course additional (or less) extensions can be used to filter the packets according to individual requirements.

    2. Next, the incoming packets are given to the ‘connmark’ extension which will do the connection tracking of the marked connections:

      # iptables --table mangle --append INPUT \
      --jump CONNMARK --save-mark

    3. The packets which can be associated with an existing connection are also marked accordingly:

      # iptables --table mangle --append INPUT \
      --match state --state ESTABLISHED,RELATED \
      --jump CONNMARK --restore-mark

    4. All the previous rules were required so that the outgoing packets can finally be marked too:

      # iptables --table mangle --append OUTPUT \
      --jump CONNMARK --restore-mark

    Debugging
    The following commands and iptables rules should help when setting up and/or debugging policy routing with marked packets:

    • List the policy of the routing table ‘router2’:

      # ip route show table router2

    • List defined routing tables:

      # cat /etc/iproute2/rt_tables

    • Log marked incoming/outgoing packets to syslog:

      # iptables -A INPUT -m mark --mark 0x2 -j LOG
      # iptables -A OUTPUT -m mark --mark 0x2 -j LOG
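
    • Show the tracked connections carrying the mark (a sketch, assuming the conntrack-tools package is installed):

      # conntrack -L --mark 2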

    Aug 28 2014
     

    Today I found out how super easy it is to set up safe HTTP authentication via Kerberos with the help of FreeIPA. Having the experience of managing a manually engineered MIT Kerberos/OpenLDAP/EasyRSA infrastructure, I’m once again blown away by the simplicity and usability of FreeIPA. I’ll describe, with only a few commands which can be run in less than 10 minutes, how to set up a fully featured Kerberos-authenticated Web server configuration. Prerequisites are a FreeIPA server (a simple installation guide can be found for example here) and a RedHat-based Web server host (RHEL, CentOS, Fedora).

    Required Packages:
    First we are going to install the required RPM packages:

    # yum install httpd mod_auth_kerb mod_ssl ipa-client

    Register the Web server host at FreeIPA:
    Make sure the Web server host is managed by FreeIPA:

    # ipa-client-install --domain=example.com --server=ipaserver.example.com --realm=EXAMPLE.COM --mkhomedir --hostname=webserver.example.com --configure-ssh --configure-sshd

    Create a HTTP Kerberos Principal and install the Keytab:
    The Web server is identified in a Kerberos setup through a keytab, which has to be generated and installed on the Web server host. First make sure that you have a valid Kerberos ticket for a FreeIPA account with sufficient permissions (e.g. ‘admin’):

    # kinit admin
    # ipa-getkeytab -s ipaserver.example.com -p HTTP/webserver.example.com -k /etc/httpd/conf/httpd.keytab

    This will create an HTTP service principal in the KDC and install the corresponding keytab in the Apache httpd configuration directory. Just make sure that it can be read by the httpd server account:

    # chown apache /etc/httpd/conf/httpd.keytab
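
    To double-check that the keytab was written correctly, its principals can be listed with klist (assuming the krb5-workstation package is installed):

    # klist -kt /etc/httpd/conf/httpd.keytab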

    Create an SSL certificate
    No need to fiddle around with OpenSSL. Requesting, signing and installing an SSL certificate with FreeIPA is one simple command:

    # ipa-getcert request -k /etc/pki/tls/private/webserver.key -f /etc/pki/tls/certs/webserver.crt -K HTTP/webserver.example.com -g 3072

    This will create a 3072 bit server key, generate a certificate request, send it to the FreeIPA Dogtag CA, have it signed and install the resulting PEM certificate on the Web server host.
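
    As a bonus, certmonger keeps tracking the certificate and will renew it automatically before it expires. The tracking status can be verified with:

    # ipa-getcert list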

    Configure Apache HTTPS
    The httpd setup is the only part of the configuration which needs to be done manually. For HTTPS, set the certificate paths in /etc/httpd/conf.d/ssl.conf:

    [...]
    SSLCertificateFile /etc/pki/tls/certs/webserver.crt
    SSLCertificateKeyFile /etc/pki/tls/private/webserver.key
    SSLCertificateChainFile /etc/ipa/ca.crt
    

    Additionally do some SSL stack hardening (you may also want to read this):

    [...]
    SSLCompression off
    SSLProtocol all -SSLv2 -SSLv3 -TLSv1.0
    SSLHonorCipherOrder on
    SSLCipherSuite "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH EDH+aRSA !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS !RC4"
    

    Kerberos HTTP Authentication:
    The final httpd authentication settings for ‘mod_auth_kerb‘ are done in /etc/httpd/conf.d/auth_kerb.conf or any vhost you want:

    <Location />
      SSLRequireSSL
      AuthType Kerberos
      AuthName "Kerberos Login"
      KrbMethodNegotiate On
      KrbMethodK5Passwd On
      KrbAuthRealms EXAMPLE.COM
      Krb5KeyTab /etc/httpd/conf/httpd.keytab
      require valid-user
    </Location>
    

    That’s it! After restarting the Web server you can log in on https://webserver.example.com with your IPA accounts. If you don’t already have a valid Kerberos ticket in the Web client, KrbMethodK5Passwd On enables interactive password authentication as a fallback.
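
    A quick way to test the Negotiate authentication from the command line (assuming curl was built with GSSAPI support) is:

    $ kinit user
    $ curl --negotiate -u : https://webserver.example.com/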

    Troubleshooting
    In case you get the following error message in the httpd error log, make sure the keytab exists and is readable by the httpd account (e.g. ‘apache’):

    [Wed Aug 27 07:23:04 2014] [debug] src/mod_auth_kerb.c(646): [client 192.168.122.1] Trying to verify authenticity of KDC using principal HTTP/webserver.example.com@EXAMPLE.COM
    [Wed Aug 27 07:23:04 2014] [debug] src/mod_auth_kerb.c(689): [client 192.168.122.1] krb5_rd_req() failed when verifying KDC
    
    May 16 2014
     

    I recently bought a PC Engines APU1C4 x86 embedded board which is meant to become the board for my future custom NAS box. In comparison to the various ARM boards it promises to be powerful and I/O friendly (3x Gbit LAN, SATA, 3x mini PCIe) and doesn’t include redundant graphics and sound circuits. On the other hand, the only way to access it locally is via a serial terminal. Before installing the final system, hopefully more about this in a later article, I wanted to have a quick glance at the system from a Linux point of view. I tried booting the device from a USB stick prepared with my favorite live system SystemRescueCD, which by the way is based on Gentoo, but somehow failed as the boot process didn’t support output on a serial device and never spawned a terminal on it either. Before losing too much time searching for another medium which would support a serial console, I simply set up my own minimal boot system based on Fedora 19. Here follows a quick summary of what was required to achieve this, as I couldn’t find a good and recent how-to about such a setup either. Because this minimal system is meant for ad-hoc booting only, I will keep things as simple as possible.

    Prepare the installation medium

    The APU1C4 supports booting from all possible storage devices, so you need to have a spare USB stick, external USB disk, mSATA disk, SATA disk or SD card for storing the minimal Linux installation. Create at least one partition with a Linux file system of your choice on it and mount it. This will be the root directory of the new system. The following example shows how the setup is done on a device /dev/sdb with one partition mounted to /mnt/usbdisk:

    host # mount /dev/sdb1 /mnt/usbdisk

    Bootstrap minimal Fedora system in alternative root directory

    Redhat-based distributions have an easy way to install a new system into an alternative root directory. Namely, it can be done with the main package manager yum. To keep it easy I used a Fedora 19 host system to set up the boot disk. While working in the context of the host system (below indicated with ‘host #‘), always double-check that your commands are actually modifying the content under /mnt/usbdisk. Otherwise you might be in for a bad surprise the next time you reboot your host system.

    1. Prepare RPM database:

    host # mkdir -p /mnt/usbdisk/var/lib/rpm
    host # rpm --root /mnt/usbdisk/var/lib/rpm --initdb

    2. Install Fedora release package:

    host # yumdownloader --destdir=/tmp fedora-release
    host # rpm --root /mnt/usbdisk -ivh /tmp/fedora-release*rpm

    3. Install a minimal set of packages (add whatever packages you’d like to have in the minimal system):

    host # yum --installroot=/mnt/usbdisk install e2fsprogs kernel \
    rpm yum grub2 openssh-client openssh-server passwd less rootfiles \
    vim-minimal dhclient pciutils ethtool dmidecode

    4. Copy DNS resolver configuration:

    host # cp -p /etc/resolv.conf /mnt/usbdisk/etc

    5. Mount pseudo file systems for chroot:

    host # mount -t proc none /mnt/usbdisk/proc
    host # mount -t sysfs none /mnt/usbdisk/sys
    host # mount -o bind /dev /mnt/usbdisk/dev

    6. chroot into the new system tree to finalize the installation:

    host # chroot /mnt/usbdisk /bin/bash

    7. Set the root password:

    chroot # passwd

    8. Prepare system configurations:

    chroot # echo "NETWORKING=yes" > /etc/sysconfig/network

    9. If you only have one partition holding the entire system, an fstab is not needed anymore, as dracut and systemd already know how to mount it. Otherwise create the fstab (use the UUID if you’re not sure how the disk will be named on the target system):

    chroot # dumpe2fs -h /dev/sdb1 | grep UUID
    dumpe2fs 1.42.7 (21-Jan-2013)
    Filesystem UUID: bfb2fba1-774d-4cfc-a978-5f98701fe58a
    chroot # cat << EOF >> /etc/fstab
    UUID=bfb2fba1-774d-4cfc-a978-5f98701fe58a / ext4 defaults 0 1
    EOF

    10. Set up Grub 2 for the serial console:

    chroot # cat << EOF >> /etc/default/grub
    GRUB_DEFAULT=0
    GRUB_TIMEOUT=5
    GRUB_DISTRIBUTOR="Fedora"
    GRUB_CMDLINE_LINUX_DEFAULT=""
    GRUB_CMDLINE_LINUX="console=ttyS0,115200n8 rd.lvm=0 rd.md=0 rd.dm=0 rd.luks=0 LANG=en_US.UTF-8 KEYTABLE=us"
    GRUB_TERMINAL="serial"
    GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1"
    GRUB_DISABLE_OS_PROBER=true
    EOF
    chroot # grub2-install /dev/sdb
    chroot # grub2-mkconfig -o /boot/grub2/grub.cfg

    11. We’re done. Exit the chroot and unmount everything:

    chroot # exit
    host # umount /mnt/usbdisk/dev
    host # umount /mnt/usbdisk/proc
    host # umount /mnt/usbdisk/sys
    host # umount /mnt/usbdisk

    Now you can remove the disk from the host and connect it with the embedded board you want to boot.

    Connect to the serial console and start the system

    For connecting to the embedded board, a USB-to-serial adapter and a null modem cable are required. There exist a number of tools to connect to a serial console on Linux which you probably already know (e.g. screen or minicom). However, I always found them painful to use. The tool of my choice is called CuteCom and is a graphical serial terminal. After selecting the correct serial device (/dev/ttyUSB0 in my case) and baud rate, you can power on your device and you’ll hopefully be greeted by the boot messages of your board and the freshly installed Linux system:

    CuteCom
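
    If you prefer to stay on the command line after all, the same connection can also be opened with screen (a sketch, assuming the adapter shows up as /dev/ttyUSB0):

    host # screen /dev/ttyUSB0 115200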

    If there is no output in the terminal, make sure you use a null-modem cable or adapter and not a simple serial extension cable. Further, check for the correct serial port device in your serial terminal configuration and play around with the baud rate.

    Good luck and have fun with your embedded device. 🙂

    Dec 05 2013
     

    There already exist many tutorials on how to set up a basic IPv6 network environment on Linux. Most of the time they are limited to an example radvd.conf enabling Router Advertisements and Stateless Address Autoconfiguration (SLAAC), and an example of how to configure BIND to serve DNS requests for AAAA records and reply to IPv6 reverse lookups. Sometimes even DHCPv6 is mentioned as an alternative way to assign IPv6 client addresses, but mostly with a very basic example configuration without fixed address assignments. Building on this basic knowledge, I want to summarize some further settings which might be interesting when playing around with DHCPv6 in a Linux environment.

    radvd and DHCPv6

    As I’ve already mentioned earlier, there is no way around radvd, at least if your router is based on Linux. Unlike with IPv4, in IPv6 the router can announce its presence with ICMPv6 Router Advertisement messages, sent by radvd. By default these messages include the ‘Autonomous’ flag, which enables SLAAC and therefore asks a receiver to autonomously configure its address based on the given IPv6 prefix. In a DHCPv6 subnet you may want to disable this behaviour in radvd.conf with the following prefix option:

    AdvAutonomous off;

    There are two more radvd.conf configuration directives which are important in a DHCPv6 setup:

    AdvManagedFlag on;
    AdvOtherConfigFlag on;
    

    The ManagedFlag option (M flag) hints the receiver to obtain a stateful address via DHCPv6. The OtherConfigFlag option (O flag) is used to inform the receiver that various other configuration information such as DNS, SIP or NTP server address lists can be requested via DHCPv6. The latter is often also used together with SLAAC, in case a client doesn’t understand RDNSS and DNSSL announcements.
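
    Putting these pieces together, a DHCPv6-oriented interface section in radvd.conf could look roughly like this (a rough sketch with an example interface name and ULA prefix; adapt it to your network):

    interface eth1 {
        AdvSendAdvert on;
        AdvManagedFlag on;
        AdvOtherConfigFlag on;
        prefix fd41:3fb2:3196:b1bb::/64 {
            AdvAutonomous off;
        };
    };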

    Important: Not all IPv6 clients can handle address assignment if SLAAC is disabled! According to RFC 4294 (IPv6 Node Requirements), an IPv6 client is only required to support address auto-configuration via SLAAC; DHCPv6 support is optional. Especially Windows XP, but also Google Android (see Issue #32621), won’t be able to auto-configure a routable IPv6 address without SLAAC.

    Another interesting radvd option, not directly related to DHCPv6 but helpful if you like to analyse your network traffic without being distracted by all the Router Advertisement noise, is the following interface option:

    UnicastOnly on;

    This will prevent radvd from broadcasting Router Advertisements; it will only reply with a unicast message if it receives a Router Solicitation message from an IPv6 host refreshing its routing table. Together with the possibility to only respond to a predefined list of host IPs in radvd.conf, it’s even possible to run your router in complete stealth mode towards unknown IPv6 clients:

    clients {
        fd41:3fb2:3196:b1bb:52b:4dc0:1631:6626;
        fd41:3fb2:3196:b1bb:9d4a:23c:bff:fe08;
    };
    

    radvd and ip6tables

    To protect the router you may want to enable ip6tables. In addition to the default ICMPv6 messages, such as Neighbor Solicitation/Advertisement and Destination Unreachable, which should be allowed on every IPv6 host anyway, the following rules must be configured to whitelist the radvd communication channels:

    Allow incoming Router Solicitation (ICMPv6 Type 133) messages:

    ip6tables -A INPUT -i <netdevice> -p ipv6-icmp -m icmp6 \
        --icmpv6-type router-solicitation -j ACCEPT
    

    Allow outgoing Router Advertisement (ICMPv6 Type 134) messages:

    ip6tables -A OUTPUT -o <netdevice> -p ipv6-icmp -m icmp6 \
        --icmpv6-type router-advertisement -j ACCEPT
    

    That’s it for the moment. In part 2 of my DHCPv6 series, I’ll show you some interesting Linux DHCP server configuration directives. Stay tuned and don’t hesitate to leave a comment in case this article was helpful for you or if I got it all wrong… 😉

    May 01 2013
     

    For a while I had wanted to dig into RPM packaging, as it would be very useful in my daily work with several hundred Red Hat machines. But I didn’t find a challenging piece of software to package, since it’s hard to find popular tools that aren’t already available as RPM or at least SRPM. This lasted until recently, when I had to update Oracle JRockit Java, an enterprise JDK used with the Oracle Weblogic server, on several dozen machines. Strictly speaking, the default installation of the JDK consists of only one folder which could be tar’ed and copied over, but a real Linux admin knows this is not the way to install software. After several days of trial and error and researching JVM packaging, the result is now available on my GitHub profile.

    Download Spec File and Oracle JRockit Installer

    The easiest way to get the .spec file is to clone the oracle-jrockit-rpm repository:

    [user@host ~]$ git clone https://github.com/ganto/oracle-jrockit-rpm.git

    The following files from the repository are then required to build the RPM:

    oracle-jrockit-rpm/SOURCES/jrockit-silent.xml
    oracle-jrockit-rpm/SPECS/java-1.6.0-jrockit.spec

    Also download the Oracle JRockit installer, of which the x64 and ia32 versions are supported by the spec file, and place it into the oracle-jrockit-rpm/SOURCES directory.

    Use mock to build the RPMs

    Mock is a useful tool to build RPMs for various target platforms. Even for the Gentoo friends it is available in Portage.

    In the first step, a chroot environment for the target distribution has to be set up. Mock already comes with a fair number of definition files for different distributions, which can be found in /etc/mock. They can be adapted to different requirements, e.g. when a local mirror or a different base set of packages should be used. When building RPMs for RHEL/CentOS 6, I had to modify epel-6-x86_64.cfg to use the following setup command:

    config_opts['chroot_setup_cmd'] = 'install bash bzip2 coreutils cpio diffutils findutils gawk gcc grep sed gcc-c++ gzip info patch redhat-rpm-config rpm-build shadow-utils tar unzip util-linux-ng which make'

    After adding the unprivileged build user to the ‘mock’ group, the chroot can be initialized with the following command. In this example I want to build the RPMs for the already mentioned RHEL/CentOS 6 distributions:

    [user@host ~]$ mock -r epel-6-x86_64 --init

    Next, the SRPM needs to be packaged:

    [user@host ~]$ mock -r epel-6-x86_64 --buildsrpm --spec oracle-jrockit-rpm/SPECS/java-1.6.0-jrockit.spec --sources oracle-jrockit-rpm/SOURCES

    Eventually, the final RPMs can be compiled:

    [user@host ~]$ mock -r epel-6-x86_64 --rebuild /var/lib/mock/epel-6-x86_64/root/builddir/build/SRPMS/java-1.6.0-jrockit-1.6.0.37_R28.2.5_4.1.0-1.el6.src.rpm

    If everything went well, the final RPMs can be found under /var/lib/mock/epel-6-x86_64/root/builddir/build/RPMS:

    [user@host ~]$ ls -1 /var/lib/mock/epel-6-x86_64/root/builddir/build/RPMS
    java-1.6.0-jrockit-1.6.0.37_R28.2.5_4.1.0-1.el6.x86_64.rpm
    java-1.6.0-jrockit-demo-1.6.0.37_R28.2.5_4.1.0-1.el6.noarch.rpm
    java-1.6.0-jrockit-devel-1.6.0.37_R28.2.5_4.1.0-1.el6.x86_64.rpm
    java-1.6.0-jrockit-jdbc-1.6.0.37_R28.2.5_4.1.0-1.el6.x86_64.rpm
    java-1.6.0-jrockit-missioncontrol-1.6.0.37_R28.2.5_4.1.0-1.el6.x86_64.rpm
    java-1.6.0-jrockit-src-1.6.0.37_R28.2.5_4.1.0-1.el6.x86_64.rpm
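
    From there the packages can be copied to the target hosts and installed like any other RPM, for example (assuming the base JDK package is all you need):

    [user@host ~]$ sudo yum localinstall java-1.6.0-jrockit-1.6.0.37_R28.2.5_4.1.0-1.el6.x86_64.rpm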

    Final Thoughts

    Of course this guide can be used to build any RPM, also from other spec files and for other distributions. With these notes, I hope to be more productive the next time an RPM has to be compiled quickly.

    If you find a bug in the spec file, feel free to open an issue on GitHub, so I can fix and learn from it. Otherwise just leave a comment below if you think this guide or the spec file was useful.

    Oct 31 2012
     

    As a Linux enthusiast and Gentoo user I was always looking for the perfect boot experience. While I managed to boot my kernel with EFI and grub 2 (as described in my wiki), I still had some trouble getting OpenRC to play nice with my LVM-only setup initialized by dracut. Tonight I finally figured out the missing configuration pieces to silence all warnings during system init.

    Initial situation
    All my Linux partitions are stored in a single LVM volume group, to stay as flexible as possible:

    merkur ~ # lsblk
    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    [...]
    └─sda5 8:5 0 49.5G 0 part
    ├─vg_merkur-slash (dm-0) 253:0 0 2.5G 0 lvm /
    ├─vg_merkur-boot (dm-1) 253:1 0 200M 0 lvm /boot
    ├─vg_merkur-tmp (dm-2) 253:2 0 6G 0 lvm /tmp
    ├─vg_merkur-swap (dm-3) 253:3 0 4G 0 lvm [SWAP]
    ├─vg_merkur-var (dm-4) 253:4 0 4G 0 lvm /var
    ├─vg_merkur-usr (dm-5) 253:5 0 12.8G 0 lvm /usr
    └─vg_merkur-opt (dm-6) 253:6 0 8G 0 lvm /opt

    My boot toolset currently consists of grub-2.00-r1, kernel-3.6.4, dracut-024, lvm-2.02.95-r4 and openrc-0.11.2.

    Kernel Configuration
    Before compiling the kernel, make sure to include all the required configurations. For this setup, the most important ones are:

    CONFIG_BLK_DEV_INITRD
    CONFIG_DEVTMPFS
    CONFIG_MODULES
    CONFIG_SYSVIPC

    Dracut Configuration
    Before installing dracut, the desired modules have to be configured in /etc/make.conf:

    DRACUT_MODULES="caps lvm mdraid syslog"

    For this setup at least the “lvm” module is mandatory. Furthermore, dracut was built with the “device-mapper” USE flag enabled.

    Although some Linux developers (especially from Red Hat/Fedora) advise against a separate /usr partition because of many boot-time dependencies on this system path, I didn’t bother to change my years-old setup. Since version 014, dracut includes a module to fill this gap (/usr/lib/dracut/modules.d/98usrmount/mount-usr.sh). It simply mounts the /usr partition right after the root file system, early in the boot process. Therefore we have to make sure that the dracut modules “usrmount” and “lvm” are included in the initramfs, which was possible without any manual modification of /etc/dracut.conf, when generating the boot image with:

    dracut -H

    Kernel Command Line Configuration
    Dracut runtime parameters are given on the kernel command line in the Grub configuration. To automatically enable the LVM volume group and spawn a debug shell in case the boot should fail, I added the following parameters in grub:

    root=/dev/vg_merkur/slash rd.lvm.vg=vg_merkur rd.shell

    LVM Configuration
    Since dracut is now responsible for enabling our volume group, the corresponding init script has to be disabled:

    rc-update del lvm boot

    Fsck and Fstab
    When booting the system now, the /etc/init.d/fsck script will complain that it cannot check the file systems which are already mounted. Fortunately, the init script allows us to define that fsck should only be run for specific “fs_passno” values. I therefore set this value to “1” for the file systems which are mounted by dracut and to “2” for all the file systems which should be checked by OpenRC. Take care: when specifying a value of “0”, the file system will never be checked for consistency:

    # [fs] [mountpoint] [type] [opts] [dump/pass]
    /dev/vg_merkur/boot /boot ext2 noatime,nosuid,nodev 0 2
    /dev/vg_merkur/slash / ext4 noatime,discard 0 1
    /dev/vg_merkur/usr /usr ext4 noatime,discard,nodev 0 1
    /dev/vg_merkur/var /var ext4 noatime,discard,nosuid,nodev 0 2
    /dev/vg_merkur/opt /opt ext4 noatime,discard,nosuid,nodev 0 2
    /dev/vg_merkur/tmp /tmp ext4 noatime,discard,nosuid,nodev 0 2
    /dev/vg_merkur/swap none swap sw 0 0

    In /etc/conf.d/fsck we can then define that the fsck init script should only care about file systems with an “fs_passno” larger than “1”:

    fsck_passno=">1"

    That’s it… If you have some questions or hints, please leave a comment.

    Oct 18 2009
     

    Since I was using the nice FineGrainedPermissions feature of the trac 0.11 release, and Debian was only providing trac-0.10.3 in Etch, I had a custom trac installation running on my Etch server. For the migration to Lenny you would normally think that it’s enough to just copy your project directory to the new installation. Unfortunately, this results in a nasty error message:

    DatabaseError: file is encrypted or is not a database

    Hmm, so let’s check the trac migration guide, which advises you to first export the sqlite database with sqlite3 into a plain SQL file. Not much luck here either; the result is an empty database:

    # sqlite3 trac.db .dump
    BEGIN TRANSACTION;
    COMMIT;

    The reason is that the trac installation in Etch was using the python-sqlite-1.0.1 back-end, which uses the SQLite 2 format, while Lenny ships python-pysqlite2-2.4.1, which only knows about SQLite 3.

    The conversion from SQLite 2 to 3 can be done by first exporting the database with the sqlite tool and then re-importing it with sqlite3:

    # sqlite trac.db .dump | sqlite3 trac3.db
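
    To quickly verify that the converted database actually contains data, list its tables (the exact table names depend on your trac version):

    # sqlite3 trac3.db .tables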

    More info about this can be found in the trac upgrade notes from 0.8.x to 0.9.

    Finally your trac installation should work again as usual.