Viewing ISC DHCP Server Leases on Debian 8

Overview

The Internet Systems Consortium (ISC) DHCP server records every client lease in a lease database. Since the database is a plain text file, it is easy to view, but its contents can be cryptic and hard to correlate.

Fortunately for us, the Debian package that installs the server includes a Perl script named dhcp-lease-list. The script’s job is to process the database records and present them in a more digestible form.

The script, however, does not work out of the box. This post explains how to make it work the way it was intended. Bear in mind there is no man page, although usage information is available from the command line with the --help option.
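
For example, to print the usage summary:

dhcp-lease-list --help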

The Script

Run the script for the first time and you’ll notice it aborts with two error messages. The first implies that a certain file is missing, while the second tells us the lease database cannot be opened.

root@dhcp-server:~# /usr/sbin/dhcp-lease-list
To get manufacturer names please download http://standards.ieee.org/regauth/oui/oui.txt to /usr/local/etc/oui.txt
Cannot open /var/db/dhcpd.leases: No such file or directory at /usr/sbin/dhcp-lease-list line 69.

Let’s investigate the second message first. The default path, as set in the script and shown in the message, does not exist on Debian 8, so unless you specify the full path to the database, the script cannot locate it.

root@dhcp-server:~# dhcp-lease-list --lease /var/lib/dhcp/dhcpd.leases
MAC        IP        hostname        valid until        manufacturer
=====================================================================

Provided there are active leases, the script displays one record line per MAC address. To view all the leases, including expired ones, add the --all option. Note that the last column currently has no manufacturer to show.
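
For example, to list every lease the server knows about, expired or not:

dhcp-lease-list --all --lease /var/lib/dhcp/dhcpd.leases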

Unlike the rest of the output, the manufacturer name is extracted from a separate data file. Provided by the IEEE Registration Authority, oui.txt is essentially a long list of OUIs and the details of their registered vendors.

An OUI (Organizationally Unique Identifier) is an identifier assigned to a maker of networking equipment. In the case of a network interface card, it forms the first half of the MAC address and reveals the card’s manufacturer.

Let’s download the file.

wget -O /usr/local/etc/oui.txt http://standards.ieee.org/regauth/oui/oui.txt
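
With the file in place, you can also look up an OUI by hand. A quick sketch, using an arbitrary example prefix (oui.txt lists the hex prefixes with dashes):

grep -i '^00-1A-2B' /usr/local/etc/oui.txt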

You should now be able to run the script error-free and see full output. One thing worth remembering is that lease times are normally given in Coordinated Universal Time (UTC).

Environment

Date: April 2017
Platform: Debian 8.7
Package: isc-dhcp-server 4.3.1-6+deb8u2

 


Bacula Client Compatibility Issue

Issue

I came across a problem recently when trying to back up a Bacula client running bacula-fd version 7.0.5 from a Bacula server running bacula-director / bacula-sd version 5.2.6.

Here are the error messages:

Authorization key rejected by Storage daemon
Bad response to Storage command: wanted 2000 OK storage, got 2902 Bad storage

Diagnosis

It seemed the older storage daemon was not compatible with the newer client.

Resolution

I installed the Bareos client instead.

Bareos is a fork of the Bacula project. Its FAQ states that bareos-fd is compatible with all versions of the Bacula director as long as compatibility mode is enabled in the file daemon configuration. On Debian and Ubuntu, installation is straightforward since Bareos packages are available from the software repositories.

apt-get install bareos-client

The installer automatically removes the Bacula client during installation. Now edit the client configuration file (bareos-fd.conf) and make sure Bacula compatibility is on.

compatible = yes
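
Restart the file daemon afterwards so the change takes effect. A minimal sketch, assuming the service name used by the Bareos packages:

service bareos-fd restart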

Environment

Date: Oct 2016
Client: 
  Operating System: Ubuntu 16.04
  Bacula Client: 7.0.5
  Bareos Client: 14.2.6-3
Server:
  Operating System: Debian 8.6
  Bacula Storage (PostgreSQL): 5.2.6

 


Bacula Catalog Backup Failure (Postgresql Database)

Overview

The Bacula catalog service uses a database back-end to keep track of backup jobs, storing such information as the names of archived files, their locations, dates and clients. When you back up the catalog, you are essentially saving a copy of that database.

The catalog backup job invokes a Bacula script which, in turn, uses an external, back-end-specific tool to dump the entire database. If the database user lacks the necessary access permissions, the tool fails, and with it the script and ultimately the backup job.

Symptoms

I came across this issue recently when backing up the catalog following a PostgreSQL server upgrade. Below are the relevant parts of the error messages as observed in bconsole and in bacula.log. Note that, in the case of PostgreSQL, the dump tool is pg_dump.

pg_dump: [archiver (db)] query failed: ERROR:  permission denied for relation snapshot
pg_dump: [archiver (db)] query was: LOCK TABLE public.snapshot IN ACCESS SHARE MODE

Resolution

Modify and run the Bacula grant_postgresql_privileges script. On my Debian system, the script is located at:

/usr/share/bacula-director/grant_postgresql_privileges

There are three parameters to be changed:

db_user=<bacula_db_user>
db_name=<bacula_db_name>
db_password=<bacula_db_password>

The script asks for your PostgreSQL user password when you run it.
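
After editing the three parameters, run the script; a minimal sketch, assuming the Debian path above (how you authenticate to PostgreSQL depends on your setup):

bash /usr/share/bacula-director/grant_postgresql_privileges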

Environment

Date: July 2016
Operating System: Debian Testing (Stretch)
Bacula: bacula-director-pgsql 7.4.1~dfsg-1
PostgreSQL: postgresql-9.4 9.4.5-2

Deploy AWS EC2 Instance with CloudFormation Using Existing Key and Security Group

Overview

Most of us new to Amazon Web Services are initially happy to use the management console to administer services such as EC2, S3 and RDS. However, as we gain more experience, a wizard-driven web interface may no longer seem adequate. It becomes especially inefficient if you happen to manage a large infrastructure.

You might already be using a tool such as Puppet to automate configuration on your systems. CloudFormation plays a similar role for your AWS infrastructure: it automates the provisioning of cloud-based resources. You first write a template describing your resources and how you want them configured, then upload it to the CloudFormation service, which deploys and sets up the infrastructure on your behalf.

CloudFormation templates are written in JSON, which could be a drawback for those not already well-versed in the format. To ease the learning curve, AWS provides a somewhat helpful visual tool called CloudFormation Designer as well as a number of sample templates.

Installation

In this post, we will be using the AWS CLI to interact with CloudFormation. Install the package from your operating system repositories if it is available there; otherwise, use the Python package installer.

pip install awscli

Configuration

  • Set up AWS credentials: A good place is the .aws/config file in your home directory
[default]
 aws_access_key_id = <your_access_key>
 aws_secret_access_key = <your_secret_key>
 region = <your_region>
  • Create a template: Let’s name it simple1.json, noting that the file extension is arbitrary
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "launch an instance using existing key pair and security group",
  "Resources" : {
    "Ec2Instance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "ImageId" : "<image_id>",
        "InstanceType": "<instance_type>",
        "AvailabilityZone": "<zone>",
        "KeyName": "<key_pair_name>",
        "SecurityGroupIds": [ "<security_group_id>" ],
        "Tags": [ { "Key": "Name", "Value": "<instance_name>" } ]
      }
    }
  }
}

The angle brackets hold resource-dependent values. Fill in the values and remove the brackets before proceeding to the next step.

  • Validate template syntax and logic: Adjust the file path if you are not in the directory containing the template
aws cloudformation validate-template --template-body file://simple1.json
  • Create a stack: A stack is a collection of resources as defined in the template
aws cloudformation create-stack --template-body file://simple1.json --stack-name simple1
  • List stacks: Show a stack summary report, including resource status
aws cloudformation list-stacks --max-items 1

The stack is ready when the status changes to CREATE_COMPLETE.
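
To check on a single stack directly, rather than scanning the whole list, describe-stacks also works:

aws cloudformation describe-stacks --stack-name simple1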

Issues

CloudFormation aborts and deletes the whole stack if it fails to create any one of the resources. You can watch the rollback by listing the stacks and noting the resource status; the rest of the command output should give you a hint about what went wrong.
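
The per-resource event log is usually more telling; while the stack still exists (or afterwards, by its full stack ID), you can dump the events with:

aws cloudformation describe-stack-events --stack-name simple1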

Environment

I tested the template under the following setup.

Date: May 2016
Ubuntu 14.04.4 LTS
awscli 1.2.9-2

 


Icinga: Monitoring Services on Hosts with Multiple IP Addresses

Problem Definition

We are using Icinga to monitor a host with multiple network interfaces. The monitored services generally listen on the first Ethernet interface, but we are now adding a service that accepts connections coming through the VPN interface only. The host is currently defined in Icinga with the IP address of the Ethernet interface. Given that our plug-in accepts only one IP address per host, how do we define the new service?

Solution

We are going to use a custom object variable to solve the problem. Custom object variables allow administrators to add user-defined variables to host, service or contact object definitions. In our case, the custom variable is a second address in the host object definition which holds the IP address of the VPN interface.

Icinga defines and processes custom variables in a different way to make sure there are no name collisions with standard variables. A custom variable name starts with an underscore and is case-insensitive. The underscore is not part of the name but rather an indication that it is not a standard variable.

Once running, Icinga creates corresponding macros by converting the custom variable names to upper case and, depending on the object type, prepending “_HOST”, “_SERVICE” or “_CONTACT”. From then on, you can use these macros to reference the custom variables the same way you do standard ones.

Implementation

Here is our modified host definition:

define host{
    use generic-host
    host_name host1
    alias host1
    address 192.168.1.95 
    _vpnaddress 10.1.1.12 ; <-- new custom variable
 }

Let’s suppose the network service listening on the VPN interface is a Bacula backup client. Since our plug-in will be referencing a new host macro, we need to define a new command and a new service. Where the definitions are added is configuration-dependent; a likely place is the host’s configuration file in the objects directory.

define command{
 command_name check_vpn_bacula_client
 command_line $USER1$/check_tcp -H $_HOSTVPNADDRESS$ -p 9102
}

define service{
 use generic-service
 host_name host1
 service_description Bacula VPN Client
 check_command check_vpn_bacula_client
}

Time to restart Icinga and check the web interface.
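
On this setup (Icinga 1.x on Ubuntu) that boils down to something like the following; the first command is a preflight syntax check of the configuration:

icinga -v /etc/icinga/icinga.cfg
service icinga restart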

Environment

I tested the configuration under the following setup.

Date: May 2016
Operating system: Ubuntu 14.04.4
Icinga: icinga 1.10.3-1

 


Creating AWS S3 Buckets in Boto 3

Overview

Python scripts written to create AWS S3 buckets in Boto 2 need to be modified in order to work with Boto 3.

Script

The following script creates a new bucket named after the fully qualified domain name of the host it runs on. Note the differences from Boto 2 in how the script connects to the S3 service (the new resource interface) and in how errors are handled.

#!/usr/bin/env python

import botocore
import boto3
import socket

access_key = '<aws_access_key_id>'
secret_key = '<aws_secret_access_key>'
region = '<region>'

# find host's fully qualified domain name
mybucket = socket.getfqdn()

# connect to s3 service 
conn = boto3.resource('s3',region_name=region,aws_access_key_id=access_key,aws_secret_access_key=secret_key)

# create new bucket if it does not already exist
if conn.Bucket(mybucket) not in conn.buckets.all():
  print 'creating bucket ' + mybucket + '...'
  try:
    conn.create_bucket(Bucket=mybucket, CreateBucketConfiguration={ 'LocationConstraint': region})
  except botocore.exceptions.ClientError as e:
    print 'Error: ' + e.response['Error']['Message']
else:
  print 'bucket ' + mybucket + ' already exists'

Environment

Date: April 2016
Python: 2.7.9
Boto3: 1.3.1
Botocore: 1.4.15


Who’s Talking to My Server Bro?

Overview

During the course of a single day, your server initiates many network connections to other hosts and is, in turn, on the receiving end of many connections from others. Wouldn’t it be nice to have a daily connection report with statistics such as sources, destinations, payloads, ports, protocols and services?

Bro is a network monitoring framework well suited to passively monitoring network links and generating traffic reports. Mind you, Bro is a sophisticated piece of software with its own scripting language and interactive shell; here, we are using only a small part of its capabilities.

Installation

In years gone by, I had mixed success installing Bro from source on Debian; when my attempts failed, it was usually down to one or more dependency issues. This time, I decided to try my hand at installing from binary packages. As it turned out, the installation was quick and easy since the packages took care of all the dependencies.

Bro binary packages are available from the Open Build Service (formerly the openSUSE Build Service). To install and configure Bro, I pretty much followed the official documentation, though I found myself jumping back and forth between sections a few times to find the information I needed.

Download and add the package-signing key

wget -q http://download.opensuse.org/repositories/network:bro/Debian_8.0/Release.key -O - | apt-key add -

Add Open Build Service package repository

echo 'deb http://download.opensuse.org/repositories/network:/bro/Debian_8.0/ /' >> /etc/apt/sources.list.d/bro.list

Update lists of available packages and install Bro

apt-get update && apt-get install bro

Configuration

Bro keeps its configuration in multiple files in the /opt/bro/etc directory. Normally, you would examine and modify node.cfg, networks.cfg and broctl.cfg. In my case, I had to edit only one file because the default options in the other two suited my environment.

The configuration below rotates the log and statistics files every day (86400 seconds) and keeps them around for seven days. Since reports are generated at rotation time, Bro processes conn.log and sends out a connection summary once a day as well.

For some reason, the sendmail binary location (SendMail) was missing from the configuration file, which meant Bro was not able to send any messages. After checking the configuration options in lib/broctl/BroControl/options.py, I added the location manually and later submitted a bug report.

Modify configuration file(s)

# /opt/bro/etc/broctl.cfg
SendMail = /usr/sbin/sendmail
LogRotationInterval = 86400
LogExpireInterval = 7
StatsLogExpireInterval = 7

Use the Bro shell to update Bro with the new configuration.

broctl install

Add a cronjob for housekeeping and crash recovery

*/5 * * * * /opt/bro/bin/broctl cron

Operation

A Bro installation is managed through the Bro shell, which you can use to, for example, start, stop or restart Bro, check its status and update its configuration.

Get a list of commands

broctl help

Start Bro

broctl start

Check status

broctl status
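
Between daily reports, you can also peek at the live connection log yourself. A minimal sketch, assuming the default /opt/bro prefix and the standard conn.log fields, that prints the most frequent source, destination, port and service combinations:

/opt/bro/bin/bro-cut id.orig_h id.resp_h id.resp_p service < /opt/bro/logs/current/conn.log | sort | uniq -c | sort -rn | head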

Environment

Here’s the environment in which I tested the above configuration.

Date: March 2016
Last modified: 2016-03-14
Operating System: Debian 8.2 (jessie)
Packages: 
  bro 2.4.1-0
  bro-core 2.4.1-0
  broctl 2.4.1-0
  libbroccoli 2.4.1-0
  libpcap0.8 1.6.2-2
  libpython2.7 2.7.9-2