One of my responsibilities is to help maintain the infrastructure at Computer Science House. We have an ever-growing number of servers, and we are always looking for ways to make managing them easier. Recently we have been running into problems keeping track of when our SSL certificates expire. Several other sysadmins at CSH have written little scripts to verify the status of our certificates, but there was no single solution that we all used. Well, now there is. Over the last week I've been diving into golang and figured it would be a good exercise to write up a tool to handle this job. Thus, sslcheck was born. It's a really simple project; it does three things:

  1. List the status of the certificates for the given services
  2. List only the expired certificates in the list of given services
  3. Send either of the previous reports out via an email

The usage for sslcheck is also simple:

$ sslcheck google.com:443 csh.rit.edu:443
$ sslcheck -warning google.com:443 csh.rit.edu:443
$ sslcheck -email fake@fake.com google.com:443 csh.rit.edu:443

Installing it is easy too!

$ go install github.com/rossdylan/sslcheck

As always comments/questions/contributions are welcome.

Last weekend I participated in the University Hacker Olympics, hosted by SignalFire. I joined forces with Ryan Brown and Sam Lucidi to create Mailicorn.

Mailicorn is a universal webmail client designed to be run out of AWS. It combines support for any IMAP-compatible email provider with real-time indexing of email. We designed mailicorn to address some gaps in the current email ecosystem. First, there are very few good universal mail clients. Thunderbird and mutt come to mind, but neither really makes the email experience special. Second, there is no single place to view, search, and apply rules to your email.

We built mailicorn to fix these issues. Out of the box mailicorn supports any IMAP server and has universal search, and a universal server-side rule engine is under development. Mailicorn is very much alpha software. There is no security model yet; during the University Hacker Olympics we debated for quite some time about the security measures required for people to trust it with their email. After talking with the awesome industry mentors available during the hackathon, we ultimately decided that we could not reasonably tackle good security during a 24 hour hackathon. However, we are dedicated to making an email system that will not be breached. If you want to contribute, go check out the GitHub repo.

I haven't posted in a while; that is mostly because I am working for a startup called Exablox until January. I've been busy doing a lot of work and hopefully I will be able to talk more about it soon. A little while back I had an interesting problem with btrfs on my laptop running Fedora 19/20. My Fedora install is a bit interesting since it runs from updates-testing and has a rawhide kernel revision. However, the problem I ran into was 100% btrfs. I use the yum plugin to create btrfs snapshots before updates. It provides a nice fallback if I totally fuck up my file system during an update. I haven't set up automatic pruning of snapshots, so if I don't clean them out my file system will eventually fill up. When this last happened I was unable to do anything about it. btrfs has a unique issue: once the file system is completely full it becomes very hard to clean it out. After my root partition filled up, one way to try to fix things is to rebalance the file system.

$ sudo btrfs balance start /

Sadly my filesystem was so full that the balance operation would just error out. After mucking around for a bit I finally figured out a solution. My main issue here was that I didn’t have enough space on / to delete any files or even balance the file system. So I decided to add some space.

$ sudo mkfs.btrfs /dev/sdx
$ sudo btrfs device add /dev/sdx /

In my case /dev/sdx was my friend Ethan's flash drive. Now that I had an extra 32GB of space, btrfs magically started to balance my data and metadata over to the new drive. Once it had finished moving things around I went in and removed all of the snapshots that were filling up my root partition.

$ sudo btrfs subvolume delete /yum_*

After that all I had to do was remove the flash drive from my root partition.

$ sudo btrfs device delete /dev/sdx /

That's my adventure with btrfs. Thanks for reading. NOTE: I might have gotten some of the syntax for the btrfs commands wrong since I am writing this post a few weeks later; please correct me if that is the case.

I am a sysadmin for Computer Science House, and recently we have been having some issues with our AC units. Since summer is here and we are running a skeleton crew in Rochester, I decided to try to make a monitoring system to keep track of how hot our servers are running. This way we can try to keep ahead of the game and predict which machines will be affected by thermal emergencies first. The first part of this project is called Heat. Heat is a small, pure-Python library for grabbing information from a computer's temperature sensors on Linux. At this point Heat is really simple. It gives you access to the temperature in Celsius, Fahrenheit, and the sensor's label. One of the neat things about Heat is that it supports both Python 2 and Python 3. Check it out on GitHub and PyPI.
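
As a rough sketch of what using it looks like (the function and attribute names here are illustrative and may not match the actual heat API exactly):

# Hypothetical usage sketch; names may differ from the actual heat API.
import heat

for sensor in heat.get_sensors():
    # Each sensor exposes its label plus readings in both units.
    print(sensor.label, sensor.celsius, sensor.fahrenheit)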

Two weeks ago I competed in the NASA Space Apps Challenge. Space Apps was a hackathon sponsored by NASA, which also provided a series of challenges that teams could work on. I teamed up with Ryan Brown, Sam Lucidi, and Greg Jurman. We decided to tackle the challenge of creating a system to mirror all of NASA's open source projects on GitHub. This was challenging for a couple of reasons. The first is the fact that NASA does not have a single location for all its code; NASA hosts code on GitHub, SourceForge, and their own sites. Another challenge is that NASA doesn't have a single revision control system for all of its projects. It uses git, svn, and even just tarballs. We decided to write a webapp that lets a user submit information for each project, and then the application downloads the source and pushes it to GitHub. This webapp is spacehub. A demo of spacehub is available here. For this project we decided to use pyramid with cornice on OpenShift. We chose OpenShift to reduce the time needed to set up hosting. Cornice is a neat layer on top of pyramid that makes it really easy to create RESTful APIs. On the front end we used Bootstrap and Ember. Currently spacehub only supports importing tarballs; we decided to focus on supporting the most difficult format first, and then work on the easier ones like svn and git. Overall it was an excellent hackathon. Spacehub came in first place in the software category at the Rochester event, and since we won in Rochester we will now move on to be judged at the national level.
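
If you haven't seen cornice before, here is a minimal sketch of what a cornice service looks like (the service and view names are illustrative, not spacehub's actual code):

# Minimal cornice sketch; names are illustrative, not spacehub's real code.
from cornice import Service

imports = Service(name='imports', path='/imports',
                  description='Submit a project to be mirrored')

@imports.post()
def submit_import(request):
    # A real view would queue up the submitted project info for import.
    return {'status': 'queued'}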

Recently I decided to try to convert my laptop from ext4 to btrfs. The conversion process is pretty simple, but since my laptop uses luks encryption and lvm there are a few extra steps. Here is a step-by-step walkthrough of what I had to do to get my laptop converted from ext4 to btrfs on Fedora. First off, make sure that your /etc/dracut.conf file has a line for btrfs:

filesystems+=" btrfs "

Then run this command:

# dracut --force

This will add the btrfs module to your initramfs so you can mount a btrfs filesystem on boot. Next boot into a fedora 18 live cd. It needs to be a live cd in order for it to have all the tools required for the conversion process. Since I am using both luks and lvm on my laptop we need to decrypt the lvm partition, and then make it accessible to the live os.

# cryptsetup luksOpen /dev/<partition with lvm> <some name>
# vgscan
# vgchange --available y

cryptsetup is used to decrypt the encrypted partitions, vgscan rescans /dev for lvm partitions, and vgchange is used to make those lvm partitions mountable. Now that we have access to the lvm partitions, it is time to convert them to btrfs.

# btrfs-convert /dev/mapper/<volume to convert>

This step will take a decent amount of time depending on how big your filesystems are. My laptop partitions are relatively small and are on an SSD, so it only took a minute or so to convert. Now that our partitions are converted to btrfs, we need to change a few settings in order to make sure we can boot properly. First we need to change /etc/fstab to use btrfs instead of ext4. Then we need to make it so selinux can relabel the filesystem on next boot.

# mkdir /mnt/root
# mount -t btrfs /dev/mapper/<root_lv> /mnt/root
# touch /mnt/root/.autorelabel

In /etc/fstab you need to change this

/dev/mapper/vg_<hostname>-lv_<volume name>  <mount point>  ext4  defaults  1 1

To this

/dev/mapper/vg_<hostname>-lv_<volume name>  <mount point>  btrfs  defaults  1 1

Note: The only thing that needs to be changed is the filesystem type.

To make sure you can boot far enough for selinux to actually relabel the filesystem, set selinux to permissive (SELINUX=permissive) in /etc/selinux/config. Once all this is complete you can reboot back into your main OS. You may not boot correctly the first time since there are probably selinux issues, but after the relabel is complete, reboot again and it should work fine.

A little while ago my friend David (oddshocks) created a tool called pythong, which creates a good default project layout for Python projects. After this had been under development for a little while I was chatting with another friend, Ryan (ryansb), and we decided that to accompany pythong there should be a tool to package and move around pythong projects. This tool is DaisyDukes. The current feature set of daisydukes includes:

  • Creating archives of a project
    • Supports tar.(gz,bz2,lzma,xz) and zip
  • Uploading your project somewhere
    • Currently supports pypi
    • Currently working on S3 and FTP uploads
  • Packaging your project as an RPM or DEB
    • This feature is still in the planning stage.

Right now the develop branch of daisydukes supports all the archive functions, as well as uploading to PyPI. Work on the other upload methods is underway. Now for some usage examples:

$ daisydukes archive --zip|gzip|bzip|lzma|xz --extrafiles files...

The archive command supports most common formats, and has a flag to allow for the addition of extra files to the archive.

$ daisydukes upload --pypi [--register]

The --register flag registers the project on PyPI before uploading an sdist. Daisydukes is definitely alpha software; there are bound to be rough edges. You can get the latest version on GitHub, and a relatively stable version on PyPI.

Recently I have been looking into functional languages such as Haskell and one of the things I really like about Haskell is function currying / partial application. Python has this as well.

from functools import partial
def func(x, y, z):
    return (x, y, z)

part = partial(func, 1)

However I find this syntax to be a bit ugly. I would prefer to just be able to call a function with only a portion of its arguments and have it generate a new function that takes in the other arguments.

def func(x, y, z):
    return (x, y, z)

part = func(1)

That syntax just seems a lot cleaner to me. In order to implement it I wrote a small decorator which can be placed on a function and adds this currying behavior.

curry decorator
from functools import partial

class curry(object):
    """
    Decorator which takes an int which is the maximum number of args the decorated function can take
    """
    def __init__(self, numArgs):
        self.numArgs = numArgs

    def __call__(self, func):
        if self.numArgs > 0:
            # Recursively wrap, one level per argument the function expects
            @curry(self.numArgs-1)
            def wrapper(*args, **kwargs):
                if len(args) < self.numArgs:
                    return partial(func, *args)
                else:
                    return partial(func, *args)(**kwargs)
            return wrapper
        else:
            return func

This decorator takes in the number of arguments for the function and then recursively wraps the function. Every time an argument is added to the function it recurses down another layer until all arguments have been added to the function. It then calls the original function with all its arguments. The final syntax looks like this:

@curry(3)
def func(x, y, z):
    return (x, y, z)

part1 = func(1)
part2 = part1(2)
result = part2(3)

This code and the code for the pipelines mentioned in the previous post are hosted on my github page.

One of my most recent projects was implementing an easy way to link functions together into pipelines, so that the output of one function is piped into the input of the next one and so on. I have been experimenting with different ways of creating pipelines, and currently I have three different types defined in pipelines.py. The first two are different implementations of the same type of pipeline. This type of pipeline takes in an arbitrary number of functions and then keyword arguments specifying the other arguments for the functions in the pipeline. The syntax for creating these pipelines looks like this:

pipeline = PipeLine(
    add1,
    subX,
    stringify,
    subX=(2,),
)
pipeline(10)

The first three arguments are functions in the pipeline. The last argument is used to pass an extra parameter to a function in the pipeline (here, subX gets 2 as its second argument), which gives the pipeline a bit more flexibility. To use the pipeline just call it with an initial value to be put through it.
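
The helper functions in that example aren't shown above. Illustrative definitions (mine may differ slightly) would look like this:

# Illustrative helpers for the example above; the real ones may differ slightly.
def add1(x):
    return x + 1

def subX(x, y):
    return x - y

def stringify(x):
    return str(x)

With these definitions, pipeline(10) works out to stringify(subX(add1(10), 2)), which is '9'.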

The first implementation uses a recursive method to generate the pipeline.

Recursive Pipelines
def PipeLine(*funcs, **kwargs):
    """
    Given an arbitrary number of functions we create a pipeline where the output
    is piped between functions. You can also specify a tuple of arguments that
    should be passed to functions in the pipeline. The first arg is always the
    output of the previous function.
    """
    def wrapper(*data):
        if len(funcs) == 1:
            combinedArgs = data + kwargs.get(funcs[-1].__name__, tuple())
            return funcs[-1](*combinedArgs)
        else:
            combinedArgs = kwargs.get(funcs[-1].__name__, tuple())
            if combinedArgs != ():
                del kwargs[funcs[-1].__name__]
            return funcs[-1](PipeLine(*funcs[:-1], **kwargs)(*data), *combinedArgs)
    return wrapper

The second one uses the reduce function in order to link the functions together while passing in their other arguments.

Reduce Based Pipelines
from functools import partial, reduce

def ReducePipeline(*funcs, **kwargs):
    """
    Given an arbitrary number of functions we create a pipeline where the output
    is piped between functions. You can also specify a tuple of arguments that
    should be passed to the functions in the pipeline. The first argument is
    always the output of the previous function. This version uses the reduce builtin
    instead of using recursion.
    """
    def accum(val, func):
        funcArgs = kwargs.get(func.__name__, tuple())
        if hasattr(val, "__call__"):
            return func(val(), *funcArgs)
        else:
            return func(val, *funcArgs)

    def wrapper(*data):
        newFuncs = (partial(funcs[0], *data),) + funcs[1:]
        return reduce(accum, newFuncs)
    return wrapper

The final pipeline was an experiment to see if I could link python functions using dot syntax. So the expected usage would be something like this:

1
2
pipeline = DotPipeline(initialvalue, globals())
result = pipeline.someFunction.anotherFunction.finalFunction()

The constructor for the pipeline is a little ugly since it needs the globals passed in; however, it is super cool that you can just string functions together and the output gets passed along. At some point I want to try to make DotPipeline not need the globals dict passed in. Here is the code for DotPipeline:

DotPipeline
class DotPipeline(object):
    """
    String together a series of functions using dot syntax.
    give DotPipeline's constructor the starting value and the globals dict,
    and then you can string functions together
    addOne = lambda x: x+1
    subTwo = lambda x: x-2
    p = DotPipeline(1,globals())
    p.addOne.subTwo() -> 0
    """
    def __init__(self, val, topGlobals):
        self.val = val
        self.topGlobals = topGlobals
    def __getattr__(self, name):
        self.topGlobals.update(globals())
        return DotPipeline(self.topGlobals[name](self.val), self.topGlobals)
    def __call__(self):
        return self.val

So those are some pipelines that I have been working on over the last week or so. They are not always super useful, but linking functions together using a reusable pipeline is a cool way to transform data.

I recently decided I no longer wanted to use my custom blogging engine GIBSY, so I switched to Octopress, which makes my life a good deal easier. The old GIBSY-based blog will still be available by going here. This site is probably going to change a bunch as I get used to using Octopress.