Frank’s Friday favourite five

This week’s top reads/links

ModSecurity now supports JSON logging; hello, Logstash heaven!

https://github.com/SpiderLabs/ModSecurity/releases/tag/v2.9.1-rc1

Docker starts using Alpine Linux and hires Alpine developer ncopa; hopefully this will bring some attention to your favourite Linux distro and musl.

http://thevarguy.com/open-source-application-software-companies/docker-slims-down-containers-even-more-change-os-it-seems

CVE-2015-7547: glibc getaddrinfo stack-based buffer overflow
https://googleonlinesecurity.blogspot.no/2016/02/cve-2015-7547-glibc-getaddrinfo-stack.html

Linux mail clients in 2016

After years with different webmail clients, Thunderbird, Claws Mail and Outlook, mutt still prevails as the most efficient, user-friendly and resource-friendly mail client.

This guide puts together a handful of tools, one for each task, and ends up in a delightful mail client setup.

My mail setup consists of fetchmail to download mail over IMAP, procmail (called from fetchmail) to sort the mail, mutt to read the mail, msmtp to send the mail, and archivemail to convert maildirs to mboxes for history.

➜  ~  cat .msmtprc 
# Set default values for all following accounts.
defaults
auth           on
tls            on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
logfile        ~/.msmtp.log

# X 
account        X
host           X
port           587
from           X
user           X
password       X

# fastmail
account        Y
host           Y
from           Y
user           X
password       X

# Set a default account
account default : X

Now for downloading over IMAP: I get my mail from two different mail providers, fetchmail calls procmail for each mail, and everything goes into the same INBOX.


➜  ~  cat .fetchmailrc 
#set daemon 3600
set daemon 60
set logfile /home/frank/.fetchmail.log
set no bouncemail

  poll Y protocol imap:
       username "Y" password "Y";
no keep
ssl
mda "/usr/bin/procmail -d %T"

  poll X protocol imap:
       username "X" password "X";
no keep
ssl
mda "/usr/bin/procmail -d %T"

Procmail sorts my mail; the nifty part here is that procmail automatically detects mailing lists and sorts them into their own folders.

➜  ~  cat .procmailrc 
SHELL=/bin/bash
PATH=/usr/sbin:/usr/bin
MAILDIR=$HOME/Maildir/
DEFAULT=$MAILDIR
LOGFILE=$HOME/.procmail.log
LOG=""
VERBOSE=yes


# ARTFUL PROCMAIL ALERT!
# Here are two rules that will automagically filter *most* list emails based on
# sane matches, such as list id. Very funky, and you almost never have to deal
# with folder-making for lists again.

# MOST LISTS - Automagically handle lists
:0
* ^((List-Id|X-(Mailing-)?List):(.*[<]\/[^>]*))
{
    LISTID=$MATCH/

    :0:
    * LISTID ?? ^\/[^@\.]*
    Lists/$MATCH/

}

:0:
* ^To: nagios@X
Lists/Nagios/

:0:
* ^Subject: OSSEC Notification.*
Lists/OSSECALERTS/


:0:
* ^From: .*mailman-owner@
* ^Subject: .* mailing list memberships reminder
Trash/

:0:
* ^From: .*user@rss2email.invalid
Lists/rss2email/

:0:
* ^From: .*noreply@blogger.com
Lists/rss2email/
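
Mutt and archivemail then sit on top of this. The snippets below are a minimal sketch, assuming the Maildir path from the procmail config above and a hypothetical ~/mail-archive directory; adjust to taste.

➜  ~  cat .muttrc (excerpt)
# read the Maildir that procmail delivers to
set mbox_type = Maildir
set folder    = ~/Maildir
set spoolfile = ~/Maildir
# hand outgoing mail to msmtp
set sendmail  = "/usr/bin/msmtp"

➜  ~  crontab -l (excerpt)
# every Monday at 04:00, move mail older than 90 days into mbox archives
0 4 * * 1 archivemail --days=90 --output-dir=$HOME/mail-archive $HOME/Maildir/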

GRsecurity and CVE-2016-0728 – Linux Kernel REFCOUNT Overflow/Use-After-Free in Keyrings

This is a small note on defence in depth with grsecurity, illustrated by CVE-2016-0728 – Linux Kernel REFCOUNT Overflow/Use-After-Free in Keyrings.

The first is the refcount overflow protection (PAX_REFCOUNT). From the kernel config help text:

By saying Y here the kernel will detect and prevent overflowing
various (but not all) kinds of object reference counters. Such
overflows can normally occur due to bugs only and are often, if
not always, exploitable.

The tradeoff is that data structures protected by an overflowed
refcount will never be freed and therefore will leak memory. Note
that this leak also happens even without this protection but in
that case the overflow can eventually trigger the freeing of the
data structure while it is still being used elsewhere, resulting
in the exploitable situation that this feature prevents.

Since this has a negligible performance impact, you should enable
this feature.
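
If your kernel config is available, a quick way to check whether this protection was built in (a sketch; self-built grsec kernels may not ship a config file in /boot):

frank@sh02:~$ grep PAX_REFCOUNT /boot/config-$(uname -r)
# you want to see CONFIG_PAX_REFCOUNT=y here

With the protection in place, running the public exploit looks like this: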

frank@sh02:/tmp$ uname -r
4.1.7-grsec

frank@sh02:/tmp$ ./cve_2016_0728 1747934f
uid=1004, euid=1004
Increfing...
Killed


Jan 22 16:48:31 sh02 kernel: [18289999.002140] PAX: From IP.ADDRESS: refcount overflow detected in: cve_2016_0728:26626, uid/euid: 1004/1004
Jan 22 16:48:31 sh02 kernel: [18289999.004656] CPU: 1 PID: 26626 Comm: cve_2016_0728 Tainted: G            E   4.1.7-grsec #3
Jan 22 16:48:31 sh02 kernel: [18289999.005925] task: ffff88007c4a3750 ti: ffff88007c4a3a38 task.ti: ffff88007c4a3a38
Jan 22 16:48:31 sh02 kernel: [18289999.007154] RIP: e030:[]  [] prepare_creds+0xa0/0x120
Jan 22 16:48:31 sh02 kernel: [18289999.008448] RSP: e02b:ffffc900462bbd98  EFLAGS: 00000a16
Jan 22 16:48:31 sh02 kernel: [18289999.009727] RAX: ffff880078f37480 RBX: ffff88000f5d39c0 RCX: 0000000000000000
Jan 22 16:48:31 sh02 kernel: [18289999.010948] RDX: 0000000000000000 RSI: ffff88007c124a68 RDI: ffff88000f5d3a68
Jan 22 16:48:31 sh02 kernel: [18289999.012135] RBP: ffffc900462bbda8 R08: 0000000000018080 R09: ffffffff8109f5d6
Jan 22 16:48:31 sh02 kernel: [18289999.013315] R10: 0000000000000000 R11: 0000000000000008 R12: ffff88007c1249c0
Jan 22 16:48:31 sh02 kernel: [18289999.014511] R13: ffff880079940d08 R14: 00006d77dcc9cfd9 R15: 00006d77dcf607a0
Jan 22 16:48:31 sh02 kernel: [18289999.015668] FS:  00006d77dd36f700(0000) GS:ffff88007d100000(0000) knlGS:0000000000000000
Jan 22 16:48:31 sh02 kernel: [18289999.016930] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
Jan 22 16:48:31 sh02 kernel: [18289999.018193] CR2: 0000717b8545c000 CR3: 0000000072cea000 CR4: 0000000000002660
Jan 22 16:48:31 sh02 kernel: [18289999.019306] Stack:
Jan 22 16:48:31 sh02 kernel: [18289999.020383]  ffff880079940d08 00007554244a3af9 ffffc900462bbdf8 ffffffff8132c8fe
Jan 22 16:48:31 sh02 kernel: [18289999.021492]  00006d77dcc9cfd9 00006d77dcf607a0 ffffc900462bbdf8 ffff880079940d08
Jan 22 16:48:31 sh02 kernel: [18289999.022764]  00007554244a3af9 00006d77dcf607a0 00006d77dcc9cfd9 00006d77dcf607a0
Jan 22 16:48:31 sh02 kernel: [18289999.024144] Call Trace:
Jan 22 16:48:31 sh02 kernel: [18289999.025188]  [] join_session_keyring+0x1e/0x180
Jan 22 16:48:31 sh02 kernel: [18289999.026269]  [] keyctl_join_session_keyring+0x34/0x60
Jan 22 16:48:31 sh02 kernel: [18289999.027295]  [] SyS_keyctl+0x208/0x220
Jan 22 16:48:31 sh02 kernel: [18289999.028304]  [] system_call_fastpath+0x16/0x89
Jan 22 16:48:31 sh02 kernel: [18289999.029289] Code: c0 74 12 f0 ff 80 d8 00 00 00 71 09 f0 ff 88 d8 00 00 00 cd 04 48 8b 83 80 00 00 00 48 85 c0 74 0a f0 ff 00 71 05 f0 ff 08 cd 04 <48> 8b 43 60 48 85 c0 74 0a f0 ff 00 71 05 f0 ff 08 cd 04 48 8b 
Jan 22 16:48:31 sh02 kernel: [18289999.032426] PAX: From IP.ADDRESS: refcount overflow detected in: cve_2016_0728:26626, uid/euid: 1004/1004
Jan 22 16:48:31 sh02 kernel: [18289999.034473] CPU: 1 PID: 26626 Comm: cve_2016_0728 Tainted: G            E   4.1.7-grsec #3
Jan 22 16:48:31 sh02 kernel: [18289999.035612] task: ffff88007c4a3750 ti: ffff88007c4a3a38 task.ti: ffff88007c4a3a38
Jan 22 16:48:31 sh02 kernel: [18289999.036594] RIP: e030:[]  [] find_keyring_by_name+0x115/0x180
Jan 22 16:48:31 sh02 kernel: [18289999.038658] RSP: e02b:ffffc900462bbd78  EFLAGS: 00000a16
Jan 22 16:48:31 sh02 kernel: [18289999.039713] RAX: 0000000000000000 RBX: ffff880078f37480 RCX: 000000007fffffff
Jan 22 16:48:31 sh02 kernel: [18289999.040654] RDX: 000000007fffffff RSI: ffff88007c1249c0 RDI: ffff880078f37480
Jan 22 16:48:31 sh02 kernel: [18289999.041587] RBP: ffffc900462bbda8 R08: 0000000000017660 R09: ffff880072d828e8
Jan 22 16:48:31 sh02 kernel: [18289999.042520] R10: ffffffff81370f0e R11: 0000000000000008 R12: ffffffff8236ca70
Jan 22 16:48:31 sh02 kernel: [18289999.043421] R13: ffff880079940d08 R14: 0000000000000000 R15: ffff88007c4a3750
Jan 22 16:48:31 sh02 kernel: [18289999.044307] FS:  00006d77dd36f700(0000) GS:ffff88007d100000(0000) knlGS:0000000000000000
Jan 22 16:48:31 sh02 kernel: [18289999.045184] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
Jan 22 16:48:31 sh02 kernel: [18289999.046066] CR2: 0000717b8545c000 CR3: 0000000072cea000 CR4: 0000000000002660
Jan 22 16:48:31 sh02 kernel: [18289999.046913] Stack:
Jan 22 16:48:31 sh02 kernel: [18289999.047767]  ffff88007c1249c0 ffff880079940d08 ffff88000f5d39c0 ffff880079940d08
Jan 22 16:48:31 sh02 kernel: [18289999.048645]  00006d77dcc9cfd9 ffff88007c1249c0 ffffc900462bbdf8 ffffffff8132c939
Jan 22 16:48:31 sh02 kernel: [18289999.049524]  00006d77dcc9cfd9 00006d77dcf607a0 ffffc900462bbdf8 ffff880079940d08
Jan 22 16:48:31 sh02 kernel: [18289999.050549] Call Trace:
Jan 22 16:48:31 sh02 kernel: [18289999.051540]  [] join_session_keyring+0x59/0x180
Jan 22 16:48:31 sh02 kernel: [18289999.052449]  [] keyctl_join_session_keyring+0x34/0x60
Jan 22 16:48:31 sh02 kernel: [18289999.053348]  [] SyS_keyctl+0x208/0x220
Jan 22 16:48:31 sh02 kernel: [18289999.054274]  [] system_call_fastpath+0x16/0x89
Jan 22 16:48:31 sh02 kernel: [18289999.055152] Code: 18 49 8b b7 e0 03 00 00 ba 08 00 00 00 48 89 df e8 41 25 00 00 85 c0 78 95 8b 13 85 d2 74 8f 89 d1 83 c1 01 71 05 83 e9 01 cd 04 <89> d0 f0 0f b1 0b 39 c2 75 4a e8 fc 31 dc ff 48 89 43 60 f0 81 


The second is Trusted Path Execution (TPE):

If you say Y here, you will be able to choose a gid to add to the
supplementary groups of users you want to mark as "untrusted."
These users will not be able to execute any files that are not in
root-owned directories writable only by root. If the sysctl option
is enabled, a sysctl option with name "tpe" is created.
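
Roughly, using this would look something like the sketch below; the group name and gid are placeholders, and the exact sysctl names depend on how the kernel was configured:

# create an "untrusted" group and add a user to it (names/gid are examples)
groupadd -g 1005 untrusted
usermod -a -G untrusted frank
# tell grsec which gid is untrusted and enable TPE
sysctl -w kernel.grsecurity.tpe_gid=1005
sysctl -w kernel.grsecurity.tpe=1

An attempt to execute the exploit from the world-writable /tmp then gets denied and logged: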

[Fri Jan 22 03:50:48 2016] grsec: From IP.ADDRESS: denied untrusted exec (due to being in untrusted group and file in world-writable directory) of /tmp/cve_2016_0728 by /tmp/cve_2016_0728[bash:2045] uid/euid:1017/1017 gid/egid:1017/1017, parent /bin/bash[bash:1513] uid/euid:1017/1017 gid/egid:1017/1017

So, here we have two ways to prevent this exploit with grsecurity. Thanks!

Pound 2.7 PPA released

In the search for a more recent version of Pound to install on my servers I came up with nothing.
I wanted the ability to turn off SSLv2 as well as SSLv3, due to the past security problems with these protocols, but did not find a packaged version, so I uploaded a package to my PPA on Launchpad. Please feel free to submit any bugs/changes regarding this package. The package can be found here:

https://launchpad.net/~frhnk/+archive/ubuntu/pound

The PPA can be added with:

~ » add-apt-repository ppa:frhnk/pound
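
Then it is just the usual update and install. To actually drop the old protocols, the HTTPS listener might look roughly like the excerpt below; this is a sketch, the certificate path is a placeholder, and the Disable directive should be checked against pound(8) for the version you end up with:

~ » apt-get update && apt-get install pound

# /etc/pound/pound.cfg (excerpt)
ListenHTTPS
    Address 0.0.0.0
    Port    443
    Cert    "/etc/pound/server.pem"
    Disable SSLv2
    Disable SSLv3
    ...
End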

Collectd and Kibana4 experiences

Lately I’ve been testing a way to display system metrics in Kibana 4. The reason for this is that, rather than having information and graphs from pure logs plus graphs in Graphite, I would like a centralized place where I can visualize logs, visualize metrics data from various sources (especially collectd), and search log entries.

When Kibana 4 was released, its most visible changes were in the way you can present and create graphs. Great.

My previous setup consisted of collectd and Graphite, storing the metrics in a time series database, so I was eager to know whether Elasticsearch is suited for storing time series data. Reading up on TSDBs and the various comments on forums, people tend to lean towards it not being a good idea, without giving a good explanation for why. The Elasticsearch developer I talked with (maybe unsurprisingly) told me it should not be a problem.

I started off by sending collectd data directly to Logstash, using the UDP input with the collectd codec on the Logstash server:

  udp {
    port => 25826
    buffer_size => 1452
    codec => collectd {
      authfile => "/usr/local/etc/collectd.auth"
      typesdb => "/etc/types.db"
      security_level => "Encrypt"
    }
    type => "collectd"
  }
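
On the client side, the collectd network plugin is pointed at the Logstash host with matching credentials. Roughly like this; the hostname, username and password are placeholders that have to match the authfile referenced above:

# /etc/collectd/collectd.conf (excerpt)
LoadPlugin network
<Plugin network>
  <Server "logstash.example.com" "25826">
    SecurityLevel Encrypt
    Username "collectd"
    Password "secret"
  </Server>
</Plugin>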

While testing I only included a single host to monitor, to see what it would look like.

An event is stored in Elasticsearch like this:

{
  "_index": "logstash-2015.08.27",
  "_type": "collectd",
  "_id": "AU9woJq6imCBkq4M2M2D",
  "_score": null,
  "_source": {
    "host": "mysql01",
    "@timestamp": "2015-08-27T19:12:31.720Z",
    "plugin": "mysql",
    "plugin_instance": "innodb",
    "type_instance": "file_writes",
    "collectd_type": "counter",
    "value": 955425,
    "@version": "1",
    "type": "collectd"
  },
  "fields": {
    "@timestamp": [
      1440702751720
    ]
  },
  "highlight": {
    "type": [
      "@kibana-highlighted-field@collectd@/kibana-highlighted-field@"
    ],
    "type.raw": [
      "@kibana-highlighted-field@collectd@/kibana-highlighted-field@"
    ]
  },
  "sort": [
    1440702751720
  ]
}

Great, the collectd clients successfully authenticate to the UDP input on Logstash, and the data is then shipped to Elasticsearch. Success!

Scaling up a bit:
I like to gather as much information as possible from each host and all of its services. My testing ran on a single node with a simple LAMP stack, where I gather most of the system-level details, almost every value in MySQL, and information from mod_status for Apache; about 60 metrics are stored for this host alone. I let collectd update every 30 seconds.
This works fine on a modest Xen-virtualized server with 8 GB RAM, 100 GB storage and 4 vCPUs.

Now I want to see if it handles a bit more, so I add some more hosts with various new services to monitor: HAProxy, nginx, PHP-FPM, MongoDB, beanstalkd, RabbitMQ and memcached.

I am now collecting about 11 000 metrics every 30 seconds without breaking a sweat. Elasticsearch seems to do the job well at this scale; I have yet to test with more data, but so far it looks good.

Below are various components of an application gathered in the dashboard view in Kibana 4.

[Screenshot: Kibana 4 dashboard]

Caveats to be aware of:

Storing metrics at this resolution, with updates every 30 seconds, creates a fair amount of data. With 11 000 metrics indexed every 30 seconds, the storage use is about 1.7 GB every 24 hours; not a big problem if you have a good amount of storage and appreciate the details.
A possible mitigation is to store the data in time-based indices and build a retention scheme on top of them.
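
A crude sketch of such a retention scheme, assuming the default daily logstash-YYYY.MM.DD indices and a local Elasticsearch; a tool like Curator does the same job more gracefully:

# cron job: drop the daily index from 30 days ago
curl -XDELETE "http://localhost:9200/logstash-$(date -d '30 days ago' +%Y.%m.%d)"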

Creating the graphs in Kibana 4 can be time consuming; there is no simple way to display a graph. However, once you get used to it, it goes fast, and you can create a sample dashboard displaying the common services on a Linux system: memory, CPU, disk, entropy and NTP information, sortable by host.

cgroups cgconfigparser and false error messages / cgroups gotchas

FYI:

cgconfigparser may send you the error message:

/usr/sbin/cgconfigparser; error loading /etc/cgconfig.conf: This kernel does not support this feature

However, your kernel probably does support cgroups; you may simply have a configuration or syntax error in cgconfig.conf.

An example is:


group webusers {
    perm {
        task {
            uid = root;
            gid = root;
        }
        admin {
            uid = root;
            gid = root;
        }
    }
    memory {
        memory.limit_in_bytes = 1G;
    }
    cpu {
        cpu.shares = UNDEF;
    }
}

Here the cpu.shares value is UNDEF, which is not valid; change it to, for example, 200, and you are good to go.

memory.memsw.limit_in_bytes missing/unable to set:

root@server:/boot# grep CONFIG_MEMCG_SWAP_ENABLED config-3.13.0-53-generic
# CONFIG_MEMCG_SWAP_ENABLED is not set

Rebuild your kernel, or, if CONFIG_MEMCG_SWAP itself is enabled, boot with the swapaccount=1 kernel parameter.

cgroups ubuntu 14.04

This is a summary of what you can do in order to get Cgroups working on Ubuntu 14.04.
Some init scripts have been modified in order to get the userland tools up and running.

This guide will have you install the dependencies, place the necessary configuration files, and make changes to the init scripts of the userland tools for cgroups.

Deps:

apt-get install cgroup-bin cgroup-lite libcgroup1

cgconfigparser is the program that needs to run in order to set up the configuration put in cgconfig.conf.

This program does not start at boot, but it should (create an init script or add it to the cgroup-lite init script; a sample is included later in this post).

The cgroup-lite service needs to be running for cgconfigparser to work correctly.

cgroup-lite is responsible for mounting the cgroups.

It is also possible to manage cgroups with cgexec(1) and mkdir(1); this article won't cover that, only configuration through cgconfig.conf and cgrules.conf.

service cgroup-lite start

root@vagrant:/etc# cgconfigparser -l /etc/cgconfig.conf

cgrulesengd needs to run in order to enforce the cgroups policy; it is not started automatically.

An init script for cgrulesengd should be created.

root@vagrant:/sys/fs/cgroup# cgrulesengd -d -f /tmp/hei.log


Configuration sample of cgrules.conf:

root@vagrant:/etc# cat /etc/cgrules.conf 
# /etc/cgrules.conf
#
#Each line describes a rule for a user in the forms:
#
#<user>				<controllers>	<destination>
#<user>:<process name>		<controllers>	<destination>
#
#Where:
# <user> can be:
#        - an user name
#        - a group name, with @group syntax
#        - the wildcard *, for any user or group.
#        - The %, which is equivalent to "ditto". This is useful for
#          multiline rules where different cgroups need to be specified
#          for various hierarchies for a single user.
#
# <process name> is optional and it can be:
#	 - a process name
#	 - a full command path of a process
#
# <controllers> can be:
# 	 - comma separated controller names (no spaces)
# 	 - * (for all mounted controllers)
#
# <destination> can be:
# 	 - path with-in the controller hierarchy (ex. pgrp1/gid1/uid1)
#
# Note:
# - It currently has rules based on uids, gids and process name.
#
# - Don't put overlapping rules. First rule which matches the criteria
#   will be executed.
#
# - Multiline rules can be specified for specifying different cgroups
#   for multiple hierarchies. In the example below, user "peter" has
#   specified 2 line rule. First line says put peter's task in test1/
#   dir for "cpu" controller and second line says put peter's tasks in
#   test2/ dir for memory controller. Make a note of "%" sign in second line.
#   This is an indication that it is continuation of previous rule.
#
#
#<user>		<controllers>	<destination>
#
#john          cpu		usergroup/faculty/john/
#john:cp       cpu		usergroup/faculty/john/cp
#@student      cpu,memory	usergroup/student/
#peter	       cpu		test1/
#%	       memory		test2/
#@root	    	*		admingroup/
#*		*		default/
# End of file
frank:/home/frank/mem-limit        memory           limitgroup/

# Can also be, for all commands run by the user:
frank        memory           limitgroup/

The cgconfig.conf:


root@vagrant:/etc# cat /etc/cgconfig.conf
group limitgroup {
    perm {
        admin {
            uid = root;
            gid = root;
        }
        task {
            uid = 1002;
            gid = 1002;
        }
    }
    cpu {
        cpu.shares = "768";
    }
    memory {
        memory.limit_in_bytes = "30000000";
    }
}

Start cgrulesengd with debug log to /tmp/hei.log:


root@vagrant:/etc# cgrulesengd -d -f /tmp/hei.log

tail /tmp/hei.log

CGroup Rules Engine Daemon log started
Current time: Thu May 7 13:16:35 2015

Opened log file: /tmp/hei.log, log facility: 0, log level: 7
Proceeding with PID 5036
Rule: frank:*
UID: 1002
GID: N/A
DEST: limitgroup/
CONTROLLERS:
*
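
Once the daemon is running, a quick sanity check might look like this (a sketch; the user and group names are the ones from the configs above):

# a shell started as the limited user should be placed in limitgroup
root@vagrant:/etc# su - frank -c 'grep memory /proc/self/cgroup'
# and the configured limit can be read back
root@vagrant:/etc# cgget -r memory.limit_in_bytes limitgroup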

The cgrulesengd upstart job, with changes from the original at http://www.filewatcher.com/p/libcgroup_0.37.1-1ubuntu10.debian.tar.gz.16867/debian/cgroup-bin.cgred.upstart.html:

root@vagrant:/etc/init# cat cgrulesengd.conf
# cgrulesengd

description "cgrulesengd"
author "Serge Hallyn <serge.hallyn@canonical.com>"

start on started cgroup-lite
stop on stopped cgroup-lite

pre-start script
test -x /usr/sbin/cgrulesengd || { stop; exit 0; }
end script

script
# get default options
OPTIONS=""
CGRED_CONF=/etc/cgrules.conf
if [ -r "/etc/default/cgrulesengd" ]; then
. /etc/default/cgrulesengd
fi

# Don't run if no configuration file
if [ ! -s "$CGRED_CONF" ]; then
echo "Cgred unconfigured"
stop
exit 0
fi

# Make sure the kernel supports cgroups
# This check is retained from the original sysvinit job, but should
# be superfluous since we depend on cgconfig running, which will
# have mounted this.
grep -q "^cgroup" /proc/mounts || { stop; exit 0; }

exec /usr/sbin/cgrulesengd --nodaemon $OPTIONS
end script

Changes to the cgroup-lite init script (added the cgconfigparser line):

root@vagrant:~# cat /etc/init/cgroup-lite.conf
description "mount available cgroup filesystems"
author "Serge Hallyn <serge.hallyn@canonical.com>"

start on mounted MOUNTPOINT=/sys/fs/cgroup

pre-start script
test -x /bin/cgroups-mount || { stop; exit 0; }
test -d /sys/fs/cgroup || { stop; exit 0; }
/bin/cgroups-mount
/usr/sbin/cgconfigparser -l /etc/cgconfig.conf
end script

post-stop script
if [ -x /bin/cgroups-umount ]
then
/bin/cgroups-umount
fi
end script

Resources and further reading:

http://linux.die.net/man/5/cgconfig.conf
http://linux.die.net/man/5/cgrules.conf
http://tuxion.com/2009/10/13/ubuntu-resource-managment-simple-example.html
https://www.kernel.org/doc/Documentation/cgroups/
https://sysadmincasts.com/episodes/14-introduction-to-linux-control-groups-cgroups
http://www.gen.cam.ac.uk/local/it/projects/ubuntu-cgroups-and-trying-to-stop-users-making-a-system-unusable
http://docs.oracle.com/cd/E37670_01/E37355/html/ol_use_cases_cgroups.html
http://devinhoward.ca/technology/2015/feb/implementing-cgroups-ubuntu-or-debian
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Resource_Management_Guide/sec-cpu.html
http://blog.hintcafe.com/post/60223405371/resource-limiting-using-cgroups

Vagrant and Puppet for testing

This one is pretty simple and gives a lot in return.
When doing big changes (and minor ones, really), I can quickly see how the changes will affect production hosts.

Generate a new host certificate on the puppet master or locally, sign it, and place it in the puppet_ssl directory.
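
On the puppet master this might look roughly like the following; the ssldir path is an assumption and depends on your Puppet version and distro:

puppet cert generate vagrant.frank.local
cp /var/lib/puppet/ssl/certs/ca.pem                          puppet_ssl/certs/
cp /var/lib/puppet/ssl/certs/vagrant.frank.local.pem         puppet_ssl/certs/
cp /var/lib/puppet/ssl/private_keys/vagrant.frank.local.pem  puppet_ssl/private_keys/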

Place your Vagrantfile, run vagrant up --provision, and voila.

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

config.vm.box = "precise64"
config.vm.hostname = "vagrant.frank.local"

config.vm.box_url = "http://files.vagrantup.com/precise64.box"

config.vm.synced_folder "./puppet_ssl", "/vagrant/puppet_ssl"

config.vm.provision "shell",
  inline: "
    mkdir -p /etc/puppet/ssl/certs;
    mkdir -p /etc/puppet/ssl/private_keys;
    cp /vagrant/puppet_ssl/certs/* /etc/puppet/ssl/certs;
    cp /vagrant/puppet_ssl/private_keys/* /etc/puppet/ssl/private_keys
    apt-get update &&
    apt-get -y install python-software-properties &&
    apt-add-repository -y ppa:brightbox/ruby-ng &&
    wget http://apt.puppetlabs.com/puppetlabs-release-precise.deb &&
    dpkg -i puppetlabs-release-precise.deb &&
    rm puppetlabs-release-precise.deb && 
    apt-get -qq update &&
    apt-get -y install ruby1.9.3 puppet libaugeas-ruby1.9.1"

config.vm.provision "puppet_server" do |puppet|
        puppet.puppet_server = "puppet.x.no"
        puppet.puppet_node = "vagrant.frank.local"
        puppet.options = "--no-daemonize --pluginsync --onetime --verbose --environment=refactor"

end
end

Sensuapp and Android notification mockups

For the past 6 months I’ve been playing around with Sensuapp. For those of you who don’t know, it’s a monitoring framework, much like Nagios/Icinga and Zabbix. I’ve been using Nagios/Icinga and Zabbix for quite a while, but I never really liked them, as the interfaces are not so good looking and the configuration tends to be hard to work with.

Welcome Sensuapp!

Sensuapp is written in Ruby and has a framework for creating plugins, handlers and so on. It uses AMQP to talk with the clients, and the alerts are in JSON.

Thanks to the summer vacation I have some time to hack on random projects, so I created a handler that sends notifications to Google Cloud Messaging, letting you receive alerts when a service is failing, a disk is full, memory is full or whatever you like.

The handler is quite simple and looks like this:

[ruby]

#!/usr/bin/env ruby
# A small script to send piped JSON events from Sensu to Google Cloud Messaging, Frank Solli <frank@frank2.net>
require 'rubygems'
require 'sensu-handler'
require 'json'
require 'gcm'

class GCMALERT < Sensu::Handler
  # client/check pair, used as the alert identifier
  def event_name
    @event['client']['name'] + '/' + @event['check']['name']
  end

  def handle
    gcm_apikey = settings['gcmalert']['apikey']
    gcm_regid  = settings['gcmalert']['registration_ids']
    puts gcm_regid

    gcm = GCM.new(gcm_apikey)
    registration_ids = [gcm_regid]

    if @event['action'].eql?("resolve")
      message = "RESOLVED - #{event_name} - #{@event['check']['output']}"
    else
      message = "ALERT - #{event_name} - #{@event['check']['output']}"
    end
    payload = { data: { info: "#{message}" } } # GCM requires a hash
    puts payload # debug
    response = gcm.send_notification(registration_ids, payload)
    puts response # debug
  end
end

[/ruby]

The configuration files in Sensu are in JSON; this is the configuration file for gcmalert:

{
  "gcmalert": {
    "apikey": "A",
    "registration_ids": "B"
  }
}
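
The handler itself is wired into Sensu with a standard pipe handler definition; a sketch, where the path to the script is an assumption:

{
  "handlers": {
    "gcmalert": {
      "type": "pipe",
      "command": "/etc/sensu/handlers/gcmalert.rb"
    }
  }
}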

A mockup app for Android showing a test alert:

[Screenshot: GCM test alert on Android]

I will be working to finish this project and hopefully have a fully working app for Sensu. Hang on!