my musings 

Just a quick share for today. I just finished a little Perl script that you can run on your Git repositories; it produces a bunch of statistics about them. You can see it over on GitHub.

A Brief Introduction to the IETF’s BIER (Bit Indexed Explicit Replication) BOF

In November I participated in the IETF meeting in Honolulu, Hawaii. There are many reasons to attend the IETF, but on a personal level I love going to the meetings for the learning experience they provide. There are the working groups I always attend based on my areas of expertise: IDR (BGP), OSPF, PIM, IS-IS, and so on. But there are always times during the IETF when you look down at your schedule and aren’t quite sure what to go to. That is what I love most, because it pushes you into exciting unknown areas; let’s be honest, while nice, adding TLV extensions to OSPF isn’t the most exciting work! This year there was one such meeting, a BOF to be exact, that I found especially exciting: BIER, or Bit Indexed Explicit Replication.

To be fair, I had first seen this, I believe, on the PIM mailing list, which led me to briefly skim some of the architecture documents. So what is BIER? First, if you have worked with PIM you will know what I mean when I say it’s a complicated technology. As one of my colleagues eloquently put it, you can tell a multicast engineer from a unicast engineer by the way the multicast engineer refers to their code with personal pronouns, more like a mythical beast to be tamed than an inanimate object. BIER is a multicast routing protocol which may finally tame that beast.

The basic idea behind BIER is that each edge router is assigned an offset into a bit array. These assignments are flooded to everyone in the domain by an IGP. On ingress into the BIER domain, a multicast packet is assigned to a tuple. The packet is encapsulated with an MPLS label which can be indexed to recover this tuple. Following the MPLS label is a BIER header, which contains some metadata and a bit array whose length is some multiple of 32 bits.

The routers in the BIER domain use their IGP databases to calculate the shortest path to each edge router. For each neighbor, you OR together the bits of every edge router in the domain whose shortest path travels through that neighbor. This produces a table of bit masks and next hops called the Bit Forwarding Table.

Upon receiving a frame with this header, you AND the frame’s bit array with each entry in the Bit Forwarding Table. If the result is non-zero, you replicate the frame to that neighbor, setting its bit array to the AND’ed value. (Please take a look at the IETF presentations and drafts for diagrams and more detail.)

Since the number of edge routers in any domain is limited by the size of the bit array, BIER introduces sets. Each frame with a BIER header is assigned to a set based on its MPLS label, and each set indexes into a separate Bit Forwarding Table. For each set containing a desired egress node, the ingress node makes a duplicate of the frame, leading to N copies of the frame flowing down the multicast distribution tree rooted at the ingress node. Sets allow for many more egress nodes than the bit array length alone would permit, though scalability concerns still realistically limit the number of egress nodes. There are some other optimizations, such as the Bit Index Routing/Forwarding Tables, but I will leave those to the reader to research. Much of what has been said here is highly tentative and will almost certainly change many times before anything gets standardized.

I love this proposal because of its simplicity. It obviously has limits on where it can be deployed (if you have millions of edge routers, this might not be optimal for you), but for many cases it seems great. Unlike PIM it requires no per-flow multicast state, and it inherits features from unicast routing such as FRR and unicast’s convergence times. I have heard talk of deploying it in a hybrid model, with, say, PIM as a backbone and BIER running near the customer edge, thus segmenting and simplifying the multicast domain and PIM.

Another drawback some people point out is that current hardware isn’t optimized for this type of forwarding. Coming from a software point of view, this doesn’t faze me in the least. Expensive hardware is good for some things, but just as mainframes declined in the past, commodity, multipurpose hardware is the future of networking; special-purpose, expensive hardware is not.

And as always folks, the postings on this site are my own and don’t necessarily represent Brocade’s positions, strategies or opinions.

Regular Expressions

I’m by no means a regex expert, but if you’re a software engineer who doesn’t know about regular expressions, that should definitely be your next skill to add. A great website to start with is regular-expressions.info. For those of you who already know all about regular expressions, I just wanted to share some nice tools I use with them. If you don’t know about these tools, hopefully this helps you.

I’m a big believer in scripting tasks. (Check out menv to see some of my public automation scripts.) If you find yourself doing something a second time, you should consider automating it, as that will probably be well worth your time. Regular expressions help a lot in these automation tasks, but like I said, I’m still not an expert with them. I know the basics, and with a little bit of Google I can make one to do just about anything I need.

There are a number of online regex debuggers that make the process both easier and more enjoyable. I’m a big fan of regex101, which lets you run a regular expression on some sample text and see the matches in that text in real time. My favorite part is the explanation panel, which breaks your regular expression down into an English explanation; this is often helpful in figuring out just why your regex isn’t working how you expect. Another website I like is Debuggex, which gives a more visual representation of the regex. Hopefully you find these helpful. I just finished using them for some nice Perl regexes I needed for some Git hooks.

Bond Genealogy

Along with technology, history is another passion of mine. In particular I find family history and genealogy fascinating. I have been working on a book about some of my family history for a few years now, and I am excited to announce I have officially published my Bond genealogy book, “The Genealogy of Our Family”. It contains genealogy related to the surnames Bond, Blomquist, Brown, Fuller, Noyes, O’Kroy, Pettypool, Tilger, Wagner, Lizotte, Sherlock, Irons, & Given. You can purchase it on Lulu.

SSH Proxy

I recently found a really neat SSH configuration option for when you need to ssh into a box that is, say, behind a NAT. It lets you ssh into an intermediate box and then ssh through it onto another. Place the following in your ~/.ssh/config:

Host fakednsname
  User targetusername
  ProxyCommand ssh proxyusername@proxyurl nc %h %p 2> /dev/null
You then type “ssh fakednsname” and you ssh through the proxy. This is as if you did an “ssh proxyusername@proxyurl” followed by an “ssh targetusername@” (plus the target host) on the proxy. Of course, proxyurl can be any IP address or URL. This also works with scp and rsync.
Hope that helps!

mcommon library

I just pushed some updates to GitHub which I thought I’d explain here a bit. For most of my personal projects I use Autotools as my build system. I hate repeating code, and as I learn new things in Autotools I’d really like to update all my projects with the new functionality I’ve learned about. To do this, a while back I made a common library project on GitHub. The project contains two parts. First, in src you will find some very commonly used code, like a publisher/subscriber model; this compiles into a shared object file that can be linked against. Next, in the templates folder you will find various Autotools files, which can be symbolically linked or included so I don’t have to redo them each time. If you look at my mfit project you will see examples of this.

cache locality optimization extensions in gcc

Tonight I thought I’d make a quick post about a compiler optimization I ran across the other day. There are two GCC function attributes which allow you to take advantage of cache locality. For those not familiar with cache locality, consider 1024 bytes of memory with a page size of 64 bytes and room for only one page in the page table. Say you have a function at offset 0, another at offset 8, and a third at offset 512. If you call the function at offset 0 and then the one at offset 8, you do not have to fetch a new page between the two calls, since offsets 0 and 8 are in the same page. On the other hand, if you call the function at offset 0 and then the function at offset 512, a new page must be loaded into memory: a page fault. As you can see, this takes longer, so it is desirable to have functions that are used often close to one another.

Now GCC has two extensions which allow you to take advantage of this: __attribute__((hot)) and __attribute__((cold)). These attributes mark a function as being used often (hot) or infrequently (cold), allowing the compiler to place hot functions near each other in memory so as to cause fewer page faults. The attributes are overridden if you use -fprofile-use, which feeds profile statistics to GCC so it can make locality determinations automatically.

If you are interested in this check out the GCC documentation for more information. There are some other good attributes on that page as well. Have a great labor day weekend everyone!

zmap: mapping the internet

If you are not familiar with nmap, aka Network Mapper, it’s a command line tool which helps map a network through a variety of methods, including port scanning. Port scanning is where you send a request on a given port (TCP, UDP, etc.) to a given host (or range of hosts) to see whether a service (such as an HTTP web server) is running on that port. This is often used by hackers, but it also has more legitimate uses, such as mapping a network.

I was recently shown a new but similar mapping tool called zmap, which allows for “fast internet wide scanning”. Traditionally nmap scans the network in a synchronous fashion: sending out a request and waiting for the response. Through some kernel modifications and by sending out probe messages asynchronously, zmap is able to scan the entire internet in a very short period of time (45 minutes) with 98% coverage (of course, you do need sufficient bandwidth to run this).

To me this is just fascinating, as I’ve done a bit of network visualization work in the past myself. I’d love to see the network map generated from this. Unfortunately I don’t think one could measure things like round-trip latency with this (well, unless you’re using ICMP for your probe), so it might be hard to map any form of connectivity or distribution, but regardless you could make some great visualizations with this.

If this seems interesting to you here are a few links for more information. First, here is their slide deck which explains it much better than I have. Next here is the site for the open source implementation of zmap. Finally if you want to learn more about nmap check out this wiki page.

Twitter and menv update

I’ve been a little slow on updating this site but with good reason. I recently was hired by Vyatta (a Brocade company) and I am very excited about the team there. Just as always let me say the postings on this site are my own and don’t necessarily represent Brocade’s positions, strategies or opinions.

In other news I just freshened up my twitter account a bit. I’ve found that often I have quick little posts about a news article or the like which I want to post on here but which don’t really qualify for a full blog post. I’ll be putting those up on twitter from now on.

And of a more interesting nature, I pushed out a few updates to menv, my Linux environment setup on GitHub. Essentially this is a script that adds bash aliases and sets up vim just the way I like it, so when I load up a new computer I can get it up and running quickly. I still need to extend it a lot to do things like configure GNOME, but all in due time.

This latest revision has a number of cool additions. Let me go over a few of them:

The first addition is simple but very useful if you are in networking and need to reference RFCs a lot:

  menv_function rfc
  function rfc {
    if [ ! -d ~/rfc/ ]; then
      mkdir ~/rfc/
    fi
    if [ ! -f ~/rfc/$1.txt ]; then
      # URL assumed: the link in the original post was stripped.
      wget -q https://www.rfc-editor.org/rfc/rfc$1.txt -O ~/rfc/$1.txt
      if [[ $? != 0 ]]; then
        /bin/rm -f ~/rfc/$1.txt
      fi
    fi
    if [ -f ~/rfc/$1.txt ]; then
      if [[ $2 == "-l" ]]; then
        lynx ~/rfc/$1.txt
      else
        less ~/rfc/$1.txt
      fi
    else
      echo "RFC Not Found"
    fi
  }
As you can see, this function works from the command line, where you can type something like ‘rfc 6905’ or ‘rfc 6905 -l’. The first downloads a copy of the RFC, caches it, and pipes it into your pager. If you add the -l option, it opens the RFC in lynx (a command line web page viewer). In my opinion this is just easier than opening up your browser and googling for the RFC.

Next, I added a menv_remote_install script which you can execute as a remote bash script to install menv onto your system. This makes the install process one line rather than the multi-line process I had before with wget, chmod +x, etc. I haven’t actually had a need to execute this script yet, so there could still be some bugs in it.

Third, I previously had a great motd message, but I found I often wanted some of its information later in the terminal’s life. Thus there is now a ni/nodeinfo alias which displays that information. The motd was also improved to test for connectivity to Google and DNS.

Please go over to GitHub and check out the entire diff, and the entire menv repo if you have not seen it before. I would love to know what you have in your dotfiles.

Raspberry Pi Colocation

If you are a hardware or software person, it’s great to play around with the various single board computers out there. In particular, the Raspberry Pi has a low price point, which makes it perfect for little side projects, or even bigger projects where, for instance, you want to test some distributed computing ideas. In any regard, I found a group that is offering to host your Raspberry Pi for free in their data center. I have no idea how good they are, but for free I thought it was pretty cool. It might be nice for a small personal website such as my own, which doesn’t require much processing power. Check them out.