my musings 


Google Adding SDN (OpenFlow) to Android?

I just read this article on Light Reading and it piqued my interest. Essentially, there is a rumor that Google is adding the OpenFlow protocol to its Android (Linux) OS. For those who are not familiar with OpenFlow, it is a protocol/API that allows a controller to configure the hardware of a switch independently of the switch's implementation. This is part of Software Defined Networking, wherein you can put lots of proprietary logic in your controller based on what your network needs are.

In any regards, as a routing algorithms guy, this made me think what one could do if they combined this with a mesh routing protocol. Imagine a controller piece of software running on your phone which allows it to act as a mesh router. In areas with dense populations this would provide a reliable, non-centralized routing scheme where no one really controlled the infrastructure.

Now, I am not sure of all the OSI layer 1 attributes of a cell phone network, but I assume there is nothing preventing two phones from communicating directly instead of through a tower (other than how the hardware is configured by software) if they are close enough. If this is not the case I’d love an explanation from someone… Anyways, enough rambling. I thought this was just an interesting news article.

Quantum Networks

Today is another quick share. I hope everyone will enjoy this article on quantum networks… It certainly got me thinking about new protocols that could be used for this technology.

Interesting post on writing a simple OS

I thought I would just do a quick little share today. I found this little article that describes how to create a minimal “OS.” Enjoy the article.

Memory Management and Valgrind

I’ve been looking for a good topic to discuss and while I was programming this evening I thought it might be nice to share a few tricks I have learned along the way in regards to the tool valgrind. I’ll also give you a few thoughts on more general memory management. I’d love to hear some tricks you have learned and any thoughts you have.

For those of you who don’t know what valgrind is, it is “a suite of tools for debugging and profiling programs.” Generally when one says valgrind they are referring to its default tool: memcheck. valgrind is executed from the command line by typing valgrind, the options for valgrind, your program name, and then your program’s own options. For instance, if you were developing and testing the ps command this might look like:

valgrind -q ps aux

This command would launch the valgrind program, and then valgrind would launch your own program with a bunch of checks set in place to examine things about your program. As your program executes, valgrind takes notes about what is happening and tells you things you might want to know. For instance, if you malloc some memory, free it, and then access the freed memory, valgrind will tell you about that illegal access. Additionally, at the end of your program’s execution it will give you a summary of memory allocations. If you didn’t free memory it will let you know.
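To make that concrete, here is a sketch of a tiny use-after-free program and the valgrind run that catches it. The file name is mine, and the compile/run steps are guarded in case gcc or valgrind is missing:

```shell
# Write a deliberately buggy program: read memory after freeing it.
cat > uaf_demo.c << 'EOF'
#include <stdlib.h>

int main(void) {
    char *p = malloc(16);
    free(p);
    return p[0]; /* illegal read of freed memory */
}
EOF

# Compile and run under valgrind if the tools are available; valgrind
# flags the read above as an invalid access.
if command -v gcc > /dev/null && command -v valgrind > /dev/null; then
    gcc -g -o uaf_demo uaf_demo.c
    valgrind -q ./uaf_demo || true  # exit status comes from the buggy program
fi
```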

Now in my alias file I have a command xval which runs valgrind with three options:

alias xval='valgrind --track-origins=yes --leak-check=full --show-reachable=yes'

track-origins is a very useful option that tracks the origins of uninitialized values. This means when something bad happens you can see what caused it a lot more quickly. This is a very expensive check, however, and it will slow down your program significantly.

leak-check is another nice option which causes valgrind to print detailed information about any leaks it detects once a program exits.

The final option is show-reachable. This shows you reachable and indirectly lost memory blocks along with definitely lost and possibly lost blocks. As with the other options I’ve shown, this produces more output for you to go through, but I find it to be helpful when you are debugging problems.

For each of these options please check out the valgrind man page for a much more detailed explanation than what I’ve given here.

Those options alone are very useful, but one thing you will notice is they tend to produce a large amount of text. It gets really annoying when, while debugging your own program, you notice memory leaks in libraries you are using. To solve this, valgrind has a nice suppression file format that allows you to say, “if I get this error, ignore it.” This is done with the --suppressions=suppression_filename option.

This suppression file lets you detail what you want to ignore. You can specify multiple suppression files if you would like to. If you are lucky, the open source project you are interfacing with might publish a suppression file to hide well-known bugs in their code. More than likely though they won’t, and so you will have to generate one yourself. To do this you can add the option --gen-suppressions=yes, which will prompt you every time a problem is hit. This prompt lets you decide if you want to print the suppression for this problem. After saying yes, take this suppression and put it into your master suppression file. The syntax for this file will look something like this:



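Here is a sketch of what one of those blocks can look like; the libGL object pattern is illustrative, not copied from a real run:

```
{
   <insert_a_suppression_name_here>
   Memcheck:Leak
   ...
   obj:*/libGL.so*
}
```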
Each of these blocks represents a rule/a suppression. The first line gives a name (as you can see I didn’t bother to name this suppression). Next there is the type of error, in this case a memory leak. And finally there is a call stack. The ellipsis (...) signifies any call stack, while obj:... matches a specific frame in the call stack which references the libGL shared object given. You can also reference functions, such as:


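For instance, a block that matches on function names instead (the names here are hypothetical):

```
{
   gl_context_leak
   Memcheck:Leak
   fun:malloc
   ...
   fun:glXCreateContext
}
```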

When you generate a suppression it will give you an explicit call stack. I found it useful to take the call stack that gen-suppressions gives you and then to simplify it so it covers more cases with the ellipses. Just be careful that you do not suppress errors in your own code. You may also want to add the --demangle=no option so your suppressions work on C++ names. Once again, please read the man page to understand what that option does.

If you generate your suppression file correctly, then once you have fixed any problems in your code you should get a nice ending output like this:

==27059== HEAP SUMMARY:
==27059== in use at exit: 2,720 bytes in 71 blocks
==27059== total heap usage: 220 allocs, 149 frees, 563,927 bytes allocated
==27059== LEAK SUMMARY:
==27059== definitely lost: 0 bytes in 0 blocks
==27059== indirectly lost: 0 bytes in 0 blocks
==27059== possibly lost: 0 bytes in 0 blocks
==27059== still reachable: 0 bytes in 0 blocks
==27059== suppressed: 2,720 bytes in 71 blocks
==27059== For counts of detected and suppressed errors, rerun with: -v
==27059== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 2 from 2)

Another useful option you might want to try out is --db-attach=yes. This pauses valgrind each time a problem is hit, allowing you to see the problem and attach a debugger if you want.

I strongly believe that programmers should get into the habit of constantly running valgrind on their code throughout a project. If you do it from the start of the project it becomes easy to track down all your errors. Not to mention by using valgrind all the time you tend to find bugs before they hurt you too badly.

Now, in terms of memory management in general, I have noticed many projects take advantage of the operating system and never really have a set pattern for memory cleanup. Whenever I architect a project I am a strong believer in always having a shutdown/cleanup routine, even if you might never call it. (I normally connect it to a SIGINT or SIGTERM signal.) Yes, the kernel will free up your process’s memory once it goes away, but if you take the time to consider a shutdown routine you are far more likely to catch other bugs. Essentially, having these routines makes you go through a thought experiment where you consider the lifetime of all objects in your system. This leads to fewer memory leaks in general. Additionally, I find going through this exercise makes for more robust programs that can be extended to meet future needs, as you have a clean way to destroy any object in the system.
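As a small illustration of the idea in shell terms, here is a sketch of a cleanup routine wired to SIGINT/SIGTERM (and normal exit); the only “resource” is a temp file, and the messages are made up:

```shell
# Run a worker in a subshell so we can observe its traps from outside.
out=$(sh -c '
    tmp=$(mktemp)
    cleanup() {
        rm -f "$tmp"            # release everything we allocated
        echo "shutdown complete"
    }
    trap cleanup INT TERM EXIT  # Ctrl-C, kill, or a normal exit all clean up
    echo "working with $tmp"
')
echo "$out"
```

The same shape carries over to C: register a handler for SIGINT/SIGTERM that walks your objects and destroys them in reverse order of creation.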

With that do you use valgrind? Any more options you use a lot? I’d love to hear about them.

MAI: Mokon’s Artificial Intelligence Framework

A while back I wrote an artificial intelligence framework for some research I was doing at the time. I recently made this available on GitHub! If you wish to use the framework itself it may take a little bit of reading of the documentation, as the command line options are quite extensive. There is a batch mode which will run a bunch of algorithm tests at once in a non-GUI mode, and there is a demo mode that runs in a visualizer to show how the algorithm is running. But it’s well worth the time! I feel the visualizer was unlike anything I have seen in heuristic search due to its ability to visualize the underlying data structures. These give the algorithm designer great insight into their algorithm! In batch mode the program is also quite efficient. It’s written in C# and runs on Linux/Mono (where I tested it).

Here is a YouTube video of the framework running:

The video might seem a little confusing since in these videos I visualize a lot of the underlying data structures in the framework.

The paper can be viewed on my professor’s website:

Real-Time Search in Dynamic Worlds

You can also view my slides:

talk slides.

And some videos:

LSS-LRTA*
D* lite
Real-time D*
Real-time D* again

If you use the framework or paper in your research I would love to hear about it!

Personal Finance Excel Spreadsheet

I have had an Excel spreadsheet where I keep track of my personal finances for a number of years now. I decided to post it on my blog today as I hope it might be useful to some people out there. To use it you should probably have some understanding of Excel. It is an xlsm file since it has some VB functions to compute taxes. You will probably get some warning messages about it being a macro spreadsheet, since macro spreadsheets can carry viruses, but you can ignore these messages.

Download the file here PersonalFinance.

The file has five tabs:

  1. Budget: This tab lets you specify a bunch of things about your finances, whether it’s your monthly expenses, a mortgage, or your salary. This information is then used on the other tabs.
  2. 6 Month Outlook: This tab gives you a graph with a 6 month outlook on your liquid assets.
  3. 24 Month Outlook: This tab gives you a graph with a 24 month outlook on your liquid assets.
  4. Short Term Outlook: This tab is used to populate tabs 2 and 3. It contains a day-by-day balance for checking and savings. You need to manually move the table up a few times a month and add new data for the months down at the bottom at the same time (columns D-G). Starting at O38 is a template you can copy for new months.
  5. Long Term Outlook: This contains a year-by-year look at your current finances to help you plan for retirement.

Now, this spreadsheet will have to be customized a lot for your own situation, but hopefully it will help. Mine has many more tabs, but I deleted the ones that are more for me. As I said, there are VB functions to compute taxes. For federal taxes:

=Taxes(Salary,0, MaritalStatus,TRUE,1,Deductions)

For CA state taxes:

=CATaxes(Salary,0, MaritalStatus)

And for VA state taxes:

=VATaxes(Salary,0, MaritalStatus)

I forget which year the state taxes are for, but last I knew the federal taxes were up to date for 2012.

My solution for syncing dotfiles

For quite a while now I have run into a problem: I work on a number of different Linux boxes and it’s a pain to keep their environments in sync. For instance, at home I have the following alias:

alias xval="valgrind --track-origins=yes --leak-check=full --show-reachable=yes"

This simply puts valgrind into more of an “extreme mode” (hence xval) where it produces a lot more useful output than simply running valgrind. Now, it would be nice if I could have this alias at work as well, and of course I could simply copy my .bashrc file between each box, but that rarely happens with any consistency. This means I tend to have a different set of dotfiles on each of my systems.

My solution to this problem is to start hosting a special script on GitHub. GitHub is perfect for this for a number of reasons. First, assuming you have internet access, you can download it from anywhere with a simple wget. I don’t have to worry about hosting it on my own site, ensuring that it’s always backed up, etc. I just run wget, execute my script, and my bash environment is set up just the way I like it. I suggest using wget because even a busybox distribution of Linux will have this utility.

wget -N --no-check-certificate
chmod +x menv

A second advantage of GitHub is they have a nice feature to edit the file from the web. This means as long as I have a browser I can add new aliases anywhere.

So, what does this script do? It’s a bash script that looks at the name of the script that was called and does something based on that. When you run ./menv it sets up the environment. This includes creating menv_deploy, menv_cleanup, .bashrc, .bash_profile, and .vimrc. Save for the .vimrc file, each of these is a symbolic link to the main menv script. The deploy script gets the latest version of the script from GitHub. I am calling this in a cron job so my environment stays up to date. I can also just call it anytime after I edit the script on GitHub. Note, I never edit the script that I wget from GitHub, as those changes would never be synced to other computers if I did. The cleanup script uninstalls all the menv files. And finally the .bashrc and .bash_profile files are standard dotfiles.
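The dispatch-on-name trick can be sketched like this; toy_menv is a stand-in for the real script, with the branch bodies reduced to echoes:

```shell
# Write a toy script that branches on the name it was invoked as ($0).
cat > toy_menv << 'EOF'
#!/bin/sh
case "$(basename "$0")" in
    toy_menv_deploy) echo "deploy: fetch the latest script" ;;
    toy_menv)        echo "install: create symlinks and dotfiles" ;;
esac
EOF
chmod +x toy_menv

# One file, two names: a symlink picks the other branch.
ln -sf toy_menv toy_menv_deploy

./toy_menv
./toy_menv_deploy
```

In the real menv the deploy branch would re-run the wget and the install branch would create the symlinks, but the mechanism is this same check of the invoked name.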

Since sometimes there are things I only want on one computer, I allow for “local” files as well. For instance, in my work bashrc file I have some IP addresses and files that aren’t public. Therefore I have two files, ~/menv_local_nonlogin and ~/menv_local_login, which represent .bashrc and .bash_profile respectively. If these are found they are sourced by menv at the correct time.
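The local hook itself is just a guarded source. A minimal sketch, staged in the current directory for illustration (the MENV_GREETING variable is made up):

```shell
# Pretend this is a machine-local dotfile with private settings.
echo 'MENV_GREETING="hello from local config"' > menv_local_nonlogin

# Source the machine-local file only if it exists; skip silently otherwise.
if [ -f ./menv_local_nonlogin ]; then
    . ./menv_local_nonlogin
fi

echo "$MENV_GREETING"
```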

Finally, this script also sets up a message of the day (motd). This has some neat ASCII art and a bunch of useful information about the system.

If you want to see this script and edit it for yourself, feel free to. I need to update some of my .vimrc code. I want to install some vim plugins through this script as well.

Some useful vimrc settings

For those of you who are VIM users, I am curious what your .vimrc file looks like. For those of you who don’t know what this is, it’s a customization file, similar to a .profile or .bashrc, that is loaded each time you launch VIM.

My vimrc file is pretty simple compared to some I’ve seen but here are a few pieces of mine that I think you will find helpful.

First, everyone should have these settings in their vimrc:

set tabstop=2 shiftwidth=2 expandtab
autocmd FileType make setlocal noexpandtab

This sets your tabs/indents to two spaces wide and when you type a tab it converts it automatically to spaces. Since makefiles need tabs in them there is a simple exception for that.

This next piece of code should, in my opinion, be mandatory for all software engineers using VIM.

set colorcolumn=80
highlight ColorColumn ctermbg=lightgrey guibg=lightgrey
highlight OverLength ctermbg=red ctermfg=white guibg=#592929
match OverLength /\%81v.\+/

This creates a grey line at the 80th character column in your window, which gives you a nice visual reference for when you are coming up to the 80 column mark. Now, the OverLength part is the really nice portion. What this does is highlight any text over the 80 column line in an annoying red. When you turn this on you are sure to properly format your code such that it fits within 80 columns. Otherwise, if you are like me, the red highlighting will annoy you. Thus the reason why I think it should be mandatory to use.

Now, that highlighting code conflicts with the following code, so for now I have to use them separately.

highlight WhitespaceEOL ctermbg=red guibg=red
match WhitespaceEOL /\s\+$/

I am OCD. It annoys me when I have whitespace such as an extra space at the end of a line (think Object* o = new Object( ) ; _______). This code highlights any whitespace at the end of a line. You can combine that with the following function.

function! StripTrailingWhitespace()
let _s=@/
let l = line(".")
let c = col(".")
%s/\s\+$//e
let @/=_s
call cursor(l, c)
endfunction
nmap <silent> <leader><space> :call StripTrailingWhitespace()<CR>

This function adds a key combination to strip off any trailing whitespace on every line in the file. It’s an easy way to clean up files with lots of these errors highlighted.

So what do you have in your vimrc file? I’d love to hear suggestions.

How to make a minimal bootable Linux image for the Raspberry Pi

So I recently bought a Raspberry Pi board and I have been experimenting with it in my spare time. I didn’t like any of the pre-baked Linux distros that are out there that run on the RPI, so I decided to make my own. It took me a little bit to figure out how to do it manually, so here is a little walkthrough of what I did. I ran into one problem, which I will explain at the end, but in the meantime hopefully this helps you if you are trying to do a similar thing. Since I made this script I’ve been learning the OpenEmbedded framework and the Yocto Project. I hope to make a post about those in the future, but in the meantime, here is a way to make a minimal bootable Linux image for the RPI.

To start my script off I am going to declare a platform directory. This is a directory where I build the things which I will be including in my distro.

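A minimal sketch of that declaration; the exact path is an assumption, chosen to match the ../platform/install busybox path used later in this post:

```shell
# Where all the cross-compiled pieces (busybox, firmware, kernel) live.
PLATFORM_DIR=../platform
```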

Next, let’s set our image name and use dd to create a large empty file. This large empty file is going to be our flash card image.

IMG=rpiimage.img
dd if=/dev/zero of=$IMG bs=1MB count=1000

Now let’s use the losetup command to create a loop device out of that image file. The device will be available at ${DEV}, as we will see.

DEV=`losetup -f --show $IMG`

This next part gets a little tricky. We are going to use fdisk to create two partitions on that image. The first will be the boot partition; this is where the RPI boot loader will go along with the kernel image. The second will be the root file system for our Linux system. The weird syntax you see below is just us sending some commands to fdisk from the script.

fdisk $DEV << EOF
n
p
1

+64M
t
c
n
p
2


w
EOF

Our two partitions are now created, so let’s now detach that loop device we created for the image.

losetup -d $DEV

We now have an image with two blank partitions on it. Let's now start giving those partitions some structure. First we will format them. We will use a nice utility called kpartx to make device maps for the partitions, and then we will format each partition. The boot partition must be VFAT for the RPI, and for our Linux root file system we are going to choose ext4. It's a nice modern file system and will suit our needs just fine. Feel free to use whatever filesystem format you want here.

DEV=`kpartx -va $IMG | sed -E 's/.*(loop[0-9])p.*/\1/g' | head -1`
BOOTP=/dev/mapper/${DEV}p1
ROOTP=/dev/mapper/${DEV}p2

mkfs.vfat ${BOOTP}
mkfs.ext4 ${ROOTP}

We are now going to create a directory, rootfs, and a sub-directory of that, rootfs/boot, and this is where we will mount those two devices we just created.

ROOTFS=rootfs
BOOTFS=${ROOTFS}/boot

mkdir -p ${ROOTFS}
mount ${ROOTP} ${ROOTFS}

mkdir -p ${BOOTFS}
mount ${BOOTP} ${BOOTFS}

Now we are starting to get to the fun stuff. I am going to assume you have already cross-compiled a copy of busybox for the RPI. I have the install directory for my busybox at '../platform/install'. We are going to copy everything from that folder and put it in our root file system. We also need to chown it to root just to be safe. This creates a bin and sbin folder in our root file system along with a bunch of symbolic links and the busybox binary. For this example I statically compiled busybox.

rsync -a ${PLATFORM_DIR}/install/ ${ROOTFS}
chown -R root:root ${ROOTFS}

Next let's create the dev folder. We are going to create a console device and a null device.

mkdir ${ROOTFS}/dev
mknod ${ROOTFS}/dev/console c 5 1
mknod ${ROOTFS}/dev/null c 1 3

Now let's create an inittab. We are going to be using busybox as our init process, and busybox doesn't implement run levels.

mkdir ${ROOTFS}/etc
echo "::sysinit:/etc/init.d/rcS
::shutdown:/bin/umount -a -r" > ${ROOTFS}/etc/inittab

On system initialization, as you can see above, init calls a script named rcS. Let's create that script in our root file system now and populate it with some commands. We need to mount the /proc directory, and we set up the network connection. In the example below I hard code some addresses; it would be better to use DHCP. I also run a script for the message of the day. Make sure to set the correct permissions on rcS so it is executable.

mkdir ${ROOTFS}/etc/init.d

echo "#!/bin/sh
echo \"Running rcS\"
mount -t proc proc /proc
ifconfig eth0 up
ifconfig eth0 netmask broadcast
route add default gw
/bin/sh" > ${ROOTFS}/etc/init.d/rcS
chmod +x ${ROOTFS}/etc/init.d/rcS

I have a pre-made motd script. Let's copy that onto the root file system and set its permissions.

cp motd ${ROOTFS}/etc/motd
chmod +x ${ROOTFS}/etc/motd

Create a few more folders in the root file system.

mkdir ${ROOTFS}/proc
mkdir ${ROOTFS}/lib

And now go into the boot directory and set up the kernel boot command line parameters. We will also create an fstab file and a modules file. I am not entirely sure if these are needed; if anyone could clarify this, that would be helpful.

echo "dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 rootwait" > ${ROOTFS}/boot/cmdline.txt
echo "proc /proc proc defaults 0 0
/dev/mmcblk0p1 /boot vfat defaults 0 0
" > ${ROOTFS}/etc/fstab
echo "vchiq
" >> ${ROOTFS}/etc/modules

We are almost there! Up in our platform directory we have the RPI firmware we downloaded from the firmware git repository. Let's copy that into our boot partition.

cp ${PLATFORM_DIR}/platforms/rpi/firmware/boot/bootcode.bin ${BOOTFS}
cp ${PLATFORM_DIR}/platforms/rpi/firmware/boot/start.elf ${BOOTFS}

Now copy the Linux kernel image onto the boot partition. This kernel image should be the RPI branch of the Linux kernel. You can follow along on the wiki site for instructions on how to build and package that.

cp ${PLATFORM_DIR}/platforms/rpi/kernel.img ${BOOTFS}

Now, I have found that if you don't add a small sleep into the script you can't unmount your partitions. So let's sleep for five seconds.

sleep 5

Unmount the partitions.

umount $BOOTP
umount $ROOTP

Delete the partition mappings.

kpartx -d $IMG

And finally remove the mount point for the root file system.

rm -rf ${ROOTFS}

This leaves us with a file called rpiimage.img that we can flash to an SD card. In my case my SD card is /dev/sdb so I run the commands:

dd if=rpiimage.img of=/dev/sdb bs=1M
eject /dev/sdb

Now take that SD card over to your RPI and it should boot up. In my case everything works great save for the ethernet NIC configuration. It seems the ethernet card comes up after my rcS script is run, which means my ifconfig commands are failing. I can manually configure it later, so I am thinking about just putting in a cron job to do this for me.

Now, the problem with this system is that we need to cross-compile everything for it by hand. This is the pits if you have a project which makes use of a lot of libraries. I am now using the Yocto Project because of this. Perhaps at a later date I will talk about that.


So it has been quite a while, but I have finally brought my website back online. My old custom website was written in C#; however, I was getting very annoyed with my old web host, ASPNIX, and so I left them. I went back to my original web host, Bluehost. Unfortunately Bluehost doesn't support Windows hosting, but regardless, their service and uptime are orders of magnitude better than ASPNIX. Thanks Bluehost!