Wednesday, July 15, 2015

Installing NVM And Node.js To Your Home Directory

On Fedora I've found installing node.js and npm from the system packages to be less than useful for my needs; i.e., while they install and work, they want to put things into the system directories and not my home directory. And, as with Ruby on Rails, you get locked into whatever is the latest version packaged for Fedora itself, which isn't always what you want.

What would be nice is something akin to RVM for Ruby, which lets you install the specific version you want regardless of what's packaged for your release.

Enter NVM, the Node Version Manager. The name is even reminiscent of RVM! :D

To install this on my Fedora system, I first downloaded the nvm release script and executed it using:

 $ wget -qO- | bash

and then sourced my .bash_profile, which was updated to include NVM.

 $ . ~/.bash_profile

I then checked to see what versions of node.js were available to download and install

 $ nvm ls-remote

so I could be sure to install the latest version, which I did:

 $ nvm install v0.12.7
######################################################################## 100.0%

Then, to use it, you type:

 $ nvm use v0.12.7

and you should now see node in your path:

  $ which node
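To make that selection survive new shells, nvm's alias mechanism can be used; a small sketch, assuming v0.12.7 from above is the version you installed:

```shell
# make v0.12.7 the default version for every new shell
nvm alias default v0.12.7

# confirm node now resolves from under ~/.nvm
which node
node --version
```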

Wednesday, July 1, 2015

Converting VMWare Disk Images For Use With QEMU

At my current job we have a VM that was created by someone using VMware which hosts a scaled down version of our runtime environment. Being the new guy, and someone who's way more comfortable with Fedora Linux than Windows, I wanted to take the VM and run it on my favorite OS.

But I couldn't just copy the image over since the 60G disk was split up into 31 separate VMDK files, which cried out to be converted to a format usable by Linux.

So here's what I did:

 $ for vmdk in *.vmdk; do qemu-img convert -f vmdk -O raw "$vmdk" "$(basename -s .vmdk "$vmdk").img"; done
 $ cat *.img >> runtime_environment_vm.img

When it was all done I had a bootable disk. I created a new VM using that disk and was off to work in no time!
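As an aside, when the descriptor .vmdk is available, qemu-img can follow the split extents on its own, collapsing the convert-and-concatenate steps into one (the descriptor filename here is illustrative):

```shell
# qemu-img reads the descriptor and stitches the split extents together itself
qemu-img convert -f vmdk -O raw runtime_environment.vmdk runtime_environment_vm.img
```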

Wednesday, June 3, 2015

More Git Commit Recovery With RefLog

Yesterday I blogged about how I lost and then recovered a few commits in a small project I'm working on using git's cherry-pick command. Till Maas pointed out on G+ that you can get a list of the commits in your git repo with "git reflog".

The reflog command (short for "reference log", in case the contraction was lost on you) lets you look back into the repository's history beyond what makes up its current state. In our case, it can show past commit hashes so we can find and recover a commit. This would have come in handy had I not had the output of git log in my terminal yesterday.

So, for example, I could look back through the history of my project to find all of my previous commits with:

 $ git reflog show  | grep commit
b3c1cf2 HEAD@{0}: commit (amend): Created the initial application shell.
1fc7d5a HEAD@{1}: commit (amend): Created the initial application shell.
f8df984 HEAD@{2}: commit (amend): Created the initial application shell.
801787d HEAD@{9}: commit (amend): fixup! Created the initial application shell.
c23dde8 HEAD@{10}: commit (amend): Added a Glade described main window.
d16b829 HEAD@{18}: commit (amend): Got a basic running application.
c49a3cb HEAD@{19}: commit: Got a basic running application.
f43fe07 HEAD@{25}: commit (amend): Added a Glade described main window.
f02d3bd HEAD@{26}: commit (amend): Added a Glade described main window.
92a16aa HEAD@{30}: commit (amend): fixup! Created the initial application shell.
4c8b148 HEAD@{31}: commit: fixup! Created the initial application shell.
5db5426 HEAD@{32}: commit (amend): Added a Glade described main window.
11a6b02 HEAD@{33}: commit (amend): Added a Glade described main window.
df9f92e HEAD@{34}: commit (amend): Added a Glade described main window.
3deafab HEAD@{35}: commit: Added a Glade described main window.
6330fe5 HEAD@{36}: commit (amend): Created the initial application shell.
a5eda9c HEAD@{37}: commit (amend): Created the initial application shell.
fcb58eb HEAD@{38}: commit (amend): Created the initial application shell.
2a0d994 HEAD@{39}: commit (amend): Created the initial application shell.
32b8ea8 HEAD@{40}: commit (amend): Created the initial application shell.
165ae57 HEAD@{41}: commit (amend): Created the initial application shell.
33f9885 HEAD@{42}: commit: Created the initial application shell.
5ea907a HEAD@{43}: commit: Created an initial to-do list of tasks for the project.
40a2f7a HEAD@{44}: commit (amend): Updated the README file with a little more details.
7d32cb2 HEAD@{45}: commit (amend): Updated the README file with a little more details.
53984e2 HEAD@{46}: commit (amend): Updated the README file with a little more details.
360ea5c HEAD@{47}: commit (amend): Updated the README file with a little more details.

And from this list I would have been able to find the commit(s) I wanted to recover using git show, and then cherry-pick the appropriate one once found.

Thanks Till!

Tuesday, June 2, 2015

Recovering A Lost Commit With Git

Ever have one of those days where you make a bad decision?

No, I mean with writing code.

You do?

How about with source control? Things like deleting a commit on your work branch and then realizing you didn't mean to delete it?

Well, guess what? You're in luck! Git doesn't throw away your commits! And if, like me, you've accidentally.....okay, INTENTIONALLY deleted some work and then realized you want to recover it, you can!

A little background: I'm playing around with a tool to let me create and post a series of entries on the blog for my comic book podcast (The Comic Book Update). What I do now is manually schedule hourly posts of comic previews when the publishers send them to me. Which results in about two hours throughout the week (usually while drinking coffee and watching the news) of creating 60 posts (10 posts per day) and setting the scheduled time.

Tedious, I know. So I want to create a tool to do this.

Long story short, I started playing with a Ruby solution, a Python solution and a Java solution. But after deleting the Ruby one, I realized I in fact DO want to use it. It was the one I had explored the most, but dammit, I had already deleted it!

Except, in my terminal buffer, I could still get to the commit hashes for the Ruby version!

So what I did was check out a clean branch in my project repo. I then scrolled up and, one by one, did a git cherry-pick of each of those hashes.

Voila! Commits were recovered!

The reason why is that, even when you delete a commit with "git reset --hard HEAD~1" or doing an interactive rebase and deleting commits from the list to be included, git does NOT actually throw the commit away. Instead it leaves it in the set of objects in your .git directory until such a time as you do garbage collection.

If you go into a repo and peek in the .git/objects directory you'll see a series of directories named 00, 01, ..., fe and ff (or some subset of them). Each of those contains multiple files, each representing a different object (commits included). The parent directory is taken from the first two characters of the commit hash, and the filename from the remainder of the hash. So if your hash was f43fe07d9df08fdb6440c562639eb4ad4ce4c49e then you'll find that specific commit in .git/objects/f4/3fe07d9df08fdb6440c562639eb4ad4ce4c49e.

Such a nice turn of luck not having to rewrite that initial bit of code....thank you git!

Monday, June 1, 2015

Finding That Sweet Spot: Focus On Guake

One of the things I do quite often is go to a terminal, do some short command or two, then close the terminal. I do this away from my coding terminal as I don't want to contaminate bash history with those commands, mainly because I do a lot of repetitive things [1].

The application I use for such things is Guake, a great little popup terminal application that gives me exactly what I need: a simple terminal that I can show and hide without having to launch a new app. It's available on pretty much all Linux distros, so I don't think I need to tell you how to get it on your system.

The way I have Guake configured is to pop up as a small window (50% width, 50% height) at the center bottom of my current display (on my two-monitor desktop it follows the mouse pointer).

I like to have my desktop be visually appealing since I spend so much time in front of it, so I've played with various colors, layouts, etc. to get things just right. And for me, what I have now works well.

Placement puts the terminal on the currently active desktop (the one where my mouse is at the time), and it pops up on top and stays on top of any other application. I have the transparency of the window set to about 15% or so to allow me to see through the window: this way I can read something on my browser or editor, for example, without having to close and re-open Guake.

To open Guake, I have F12 as my hotkey, with F11 to make it full screen. These are the default and they make perfect sense to me. Though, on my keyboards when I'm docked, F12 and Print Screen are close enough together that I will hit the latter at least once per day by mistake. So every weekend it seems I'm deleting a half dozen screen shots.... :D

The color scheme I chose is called "Hipster Green" (yeah, I hate that name too). For me I find the monochrome-like colors more visually appealing, not to mention less harsh on the eyes if I'm on my laptop in a dimly lit room [2]. I use auto-colors in ls and the monochrome color scheme fits well with it.

The option I go back and forth on is showing the tab bar. On the one hand I like to be able to quickly jump to a tab I want if it's open. But on the other hand I don't normally have more than two or three open at a time. So I hate sacrificing even that small bit of real estate. So it depends on how I'm feeling at any point whether it's enabled or not.

To open URLs that appear in the window, I've checked the Enable Quick Open when Ctrl+clicking on a filename in the terminal option (verbose, right?) on the Quick Open tab.

All in all, Guake is a great, useful tool for me to perform quick tasks from a command line without polluting my primary command history with trivial little things. All in a tool that is easily launched, used, and dismissed quickly.

[1] - I will, for example, have the following commands in my coding terminal's history:

  $ git clean -xfd && cmake all
  $ [run some test script, example app or other]

My main workflow in those cases is to build the world, run whatever I'm working on, then scroll through the output and fix things in my editor. Then I go back to the coding terminal and hit Up Up Enter, wait a second, then Up Up Enter again.

[2] - I LOVE to write code during a rain storm. Some of my favorite times have been to sit on the couch in my office at home while it's pouring rain out and work on code to the soothing sounds of rainfall.

Friday, May 29, 2015

Upgrading Fedora From F21 To F22 With FedUp

In case you missed the announcement, Fedora 22 was released on Tuesday, 26 May. You can download it here if you want to install a fresh copy.

Or, if you're like me, you can upgrade your system in place using dnf and fedup. Here's a simple, step-by-step guide to upgrading your system.

Step 1: Upgrade Your F21 System

This goes without saying, but I'll say it anyway: make sure your system is up to date with F21. This will ensure an easier upgrade.

Using Yum To Do A Distribution Synchronization
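That step boils down to something like the following:

```shell
# sync all installed packages to the current F21 package set
sudo yum distro-sync
```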

Step 2: Install FedUp

FedUp is the tool that does the heavy lifting for the upgrade. What it does is download the packages needed to upgrade your system from its current state to the released packages in Fedora 22.

Installing The FedUp Package
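Installing the package is a one-liner:

```shell
# fedup lives in the standard Fedora repositories
sudo yum install fedup
```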

Step 3: Use FedUp To Download The Upgrade Packages

Now comes the first long task: downloading the upgrade packages. Depending on how much you have installed on your system, and the speed of your internet link, this could take a wee bit of time.

FedUp has a few options for how it retrieves the packages for the upgrade. You can download the ISO install image and use it as a source, you can point it at a mount point on a shared filesystem (a great way to upgrade multiple systems would be to download the upgrade packages to an NFS drive and mount it from the various machines), or you can point it at the network and tell it to download the packages from a configured repository. We'll be using this last option.

You'll also tell fedup to what version you want to upgrade your system. In this case, since we're going to Fedora 22, we'll tell it that with:
 $ fedup --network 22

Running FedUp Over The Network
After this finishes, you have the option of aborting the upgrade process with:
 $ fedup --resetbootloader
since FedUp will add a new entry to Grub's menu. But who's going to do that? We're here to upgrade! So now, reboot your system.

Step 4: FedUp Does The Upgrade

After rebooting your system, you'll see a new entry in Grub for the upgrade. You can still bypass the upgrade and choose one of your prior kernels. Or, by default, the system will select the upgrade path.

The FedUp Grub Menu Option

Now is the time to go have a cuppa, a pint or whatever you do to pass the time. It's going to be a while as the system is upgraded.

Step 5: Here's Your Fedora 22 System...Enjoy!

Once the install has finished, it will remove the upgrade option from Grub's menu and reboot your machine.

After The Reboot, No More Upgrade Option

You now have a fully up-to-date and ready-to-run Fedora 22 system!

Wednesday, February 18, 2015

Ruby Messaging Part 1: Know The Players, Know The Game

Hey, all. Here's a new series of blog posts I've been thinking about for a few weeks now. It's a series where I'm hoping to introduce you to a bit of work I've been doing lately on the Ruby messaging front with Qpid Proton. Specifically, I've been working on a low-level set of APIs dubbed the "engine APIs". They are a close analog to elements in the AMQP protocol specification, so a knowledge of one will help in understanding the other.

In this first post I'm going to introduce you to the main players in the game and how they each represent aspects of the AMQP messaging specification. Then I'll share a very small example application that accepts a connection from a remote container, but does nothing else for now. We can grow this application over the course of these posts until we have a fully functional application.

The Logical Components

Working from the outside in, we have the following pieces:

A container is either a producer, a consumer, both a producer and a consumer, or a queue. As their names imply, producers and consumers either produce (send) messages or consume (receive) messages, or both, as is the case with most messaging containers.

A container that primarily produces/consumes messages is going to be an application that uses messaging as a means of coordinating its efforts with other applications. An example, which I'll develop over the course of these blog posts, is a traffic light system designed to work with the local municipality's emergency needs.

A container that acts as a queue would be a broker, communications bridge or similar application. It's not the endpoint for the bulk of the messages it receives, but is really a conduit through which messages flow on their way to their true endpoint; i.e., it stores and then forwards messages. Examples would include the Qpid C++ broker and the Qpid Dispatch router.

Containers create connections to, or accept from, other containers. The connection is how data actually flows in between the two containers and is broken down into constituent pieces, like the layers of an onion. But the details of that flow are outside of the scope of this series of posts.

Within connections are sessions. A session will have a pair of channels for sending and receiving data.

Connections and sessions are thought of as endpoints which hold incoming messages and which hold the last known state information for outgoing messages.

A channel is a unidirectional (one way) means of sending messages. The reason a session contains a pair of channels is to allow for bidirectional (two way) communication.

The terminus maintains the state information for incoming and outgoing messages that flow over the link. A channel will have a source and a target terminus.

The link is where the actual protocol work is done, transmitting the message from source to target. It ties together the termini of a channel.

So, to summarize:
  • A container is an application that produces, consumes or queues messages.
  • A connection is how two containers communicate with each other.
  • A connection has one or more sessions, which pair up unidirectional channels to allow bidirectional communication.
  • A channel sends data from one endpoint to the other over a link unidirectionally.
  • A channel has a terminus which maintains the state information on each end of its link.

Translating To Ruby Classes

So now that we know the big-picture pieces from the specification, I'll translate that knowledge to the Proton Ruby engine work I've been doing.

There is no analog (at this time) in the library for a container.

The Qpid::Proton::Connection class represents the connection between containers. With it you can start and stop working with a remote container, create new sessions, retrieve the next session which has pending work and also the next link with pending work to process. You can also access the transport engine, which is itself the topic for another post.

The Qpid::Proton::Session class is the session analog. With it you can start and stop sessions with the remote container, create endpoints to send and receive messages, peek at how many incoming and outgoing bytes there are as well as access the parent connection.

The Qpid::Proton::Sender and Qpid::Proton::Receiver classes are used for sending and receiving instances of Qpid::Proton::Message, respectively.

Additional Classes

In the Ruby library there are additional classes that work with the specification analogs to tie things together: the Transport and the Collector.

The Qpid::Proton::Transport class is the protocol engine. It processes incoming and outgoing bytes for a connection, and publishes events as that connection's state evolves over time.

The Qpid::Proton::Collector class is a FIFO queue which holds the events fired by the transport. It can be queried by the container to find the oldest unprocessed event.

An Example Application: The Traffic Light Manager

Following is a simple Ruby application that accepts incoming connections and starts an AMQP dialog. It accepts a TCP connection, creates a connection and related objects, and then begins looking at events to process.

Bear in mind that this application is by no means the best example of how to write such an application; it's only meant to demonstrate how the pieces shown here fit together. A more detailed example will be developed in follow-up posts.

require 'qpid_proton'
require 'socket'

# accept connections on the default AMQP port (5672)
server =

# start an infinite loop
loop do
  client = server.accept

  # spawn a new thread for each incoming client
  Thread.new(client) do |client_socket|
    # Create a connection object, a collector for any
    # events, and a transport to process the data
    # flowing over the wire.
    conn =
    collector =
    transport =

    # details of moving data between the client socket
    # and the transport, and of wiring the collector to
    # the connection, are excluded here; instead, we'll
    # just jump right to processing events

    # get the next event
    event = collector.peek

    until event.nil?
      # do something here, a detail for a later post

      # remove the event we've processed and get the next
      collector.pop
      event = collector.peek
    end
  end
end