Friday, September 27, 2013

Happy 30th Birthday, GNU!

On 27 September 1983 Richard M. Stallman posted the following message to a newsgroup:

From CSvax:pur-ee:inuxc!ixn5c!ihnp4!houxm!mhuxi!eagle!mit-vax!mit-eddie!RMS@MIT-OZ
From: RMS%MIT-OZ@mit-eddie
Newsgroups: net.unix-wizards,net.usoft
Subject: new Unix implementation
Date: Tue, 27-Sep-83 12:35:59 EST
Organization: MIT AI Lab, Cambridge, MA

Free Unix!

Starting this Thanksgiving I am going to write a complete
Unix-compatible software system called GNU (for Gnu's Not Unix), and
give it away free(1) to everyone who can use it.
Contributions of time, money, programs and equipment are greatly needed.

To begin with, GNU will be a kernel plus all the utilities needed to
write and run C programs: editor, shell, C compiler, linker,
assembler, and a few other things.  After this we will add a text
formatter, a YACC, an Empire game, a spreadsheet, and hundreds of
other things.  We hope to supply, eventually, everything useful that
normally comes with a Unix system, and anything else useful, including
on-line and hardcopy documentation.

GNU will be able to run Unix programs, but will not be identical
to Unix.  We will make all improvements that are convenient, based
on our experience with other operating systems.  In particular,
we plan to have longer filenames, file version numbers, a crashproof
file system, filename completion perhaps, terminal-independent
display support, and eventually a Lisp-based window system through
which several Lisp programs and ordinary Unix programs can share a screen.
Both C and Lisp will be available as system programming languages.
We will have network software based on MIT's chaosnet protocol,
far superior to UUCP.  We may also have something compatible
with UUCP.

Who Am I?

I am Richard Stallman, inventor of the original much-imitated EMACS
editor, now at the Artificial Intelligence Lab at MIT.  I have worked
extensively on compilers, editors, debuggers, command interpreters, the
Incompatible Timesharing System and the Lisp Machine operating system.
I pioneered terminal-independent display support in ITS.  In addition I
have implemented one crashproof file system and two window systems for
Lisp machines.

Why I Must Write GNU

I consider that the golden rule requires that if I like a program I
must share it with other people who like it.  I cannot in good
conscience sign a nondisclosure agreement or a software license agreement.

So that I can continue to use computers without violating my principles,
I have decided to put together a sufficient body of free software so that
I will be able to get along without any software that is not free.

How You Can Contribute

I am asking computer manufacturers for donations of machines and money.
I'm asking individuals for donations of programs and work.

One computer manufacturer has already offered to provide a machine.  But
we could use more.  One consequence you can expect if you donate
machines is that GNU will run on them at an early date.  The machine had
better be able to operate in a residential area, and not require
sophisticated cooling or power.

Individual programmers can contribute by writing a compatible duplicate
of some Unix utility and giving it to me.  For most projects, such
part-time distributed work would be very hard to coordinate; the
independently-written parts would not work together.  But for the
particular task of replacing Unix, this problem is absent.  Most
interface specifications are fixed by Unix compatibility.  If each
contribution works with the rest of Unix, it will probably work
with the rest of GNU.

If I get donations of money, I may be able to hire a few people full or
part time.  The salary won't be high, but I'm looking for people for
whom knowing they are helping humanity is as important as money.  I view
this as a way of enabling dedicated people to devote their full energies to
working on GNU by sparing them the need to make a living in another way.

For more information, contact me.
Arpanet mail:


US Snail:
  Richard Stallman
  166 Prospect St
  Cambridge, MA 02139

Tuesday, September 10, 2013

Git/Subversion Error: "Index mismatch: [hash] != [hash]"

The Problem

Our source code is stored in a Subversion repository (it's an Apache project) and I use Git to clone that repo for work. This morning I attempted to update my local repo when the following happened:

^_^ [J:0/1003] mcpierce@mcpierce-laptop:Qpid (upstream) $ git svn rebase
Index mismatch: e5893616e68dba2bd8a730609d872f19517e0536 != 93c1bfda2ff4e72a1ae1fbc8374d42ebe107e1f5
rereading 695961e1cc7749916959f1f74db659e90e0d2dcc
 M qpid/cpp/src/qpid/broker/SessionState.h
 M qpid/cpp/src/qpid/broker/SemanticState.cpp
 M qpid/cpp/src/qpid/broker/SessionState.cpp
Author: pmoravec not defined in .git/authors.txt file

The user "pmoravec" was recently added as a committer to the project and his first commit apparently threw my git clone of the subversion repository off.

The Solution

For me the solution was simple: pull down the updated authors.txt file from Apache that includes pmoravec and replace the one in my git repo:

^_^ [J:0/1029] mcpierce@mcpierce-laptop:Qpid (master) $ wget

Done and done, now I'm back to work.
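For reference, here's a minimal sketch of that recovery; the committer's email is a placeholder (not the real Apache one), and the temp directory stands in for the actual clone:

```shell
# Sketch of the fix (placeholder email): add the missing committer to the
# authors file, then make sure git-svn knows where that file lives.
cd "$(mktemp -d)" && git init -q .           # stand-in for the real clone
echo 'pmoravec = pmoravec <pmoravec@example.org>' >> .git/authors.txt
git config svn.authorsfile .git/authors.txt  # git svn rebase can now resolve the author
```

The authors file maps each Subversion username to a Git-style identity, one `svnname = Full Name <email>` entry per line; any committer missing from it aborts the rebase exactly as shown in the transcript above.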

Thursday, August 22, 2013

Puppet: Defining A System, Adding A Package And Launching A Service

Previously I wrote a simple blog post about setting up a Puppet master and agent. In this post I'm going to write about how to add a simple service to your Puppet master that will be installed (if necessary) and started (if necessary) on your client systems.

Step 1: Define the system on your Puppet master

Before your Puppet master can do its work, it needs to first know what needs to be done and to whom.

In your /etc/puppet/manifests directory you'll want to create two files with the following content (remember: in my case the Puppet master's name is earth and the Puppet client's name is halo):


  # site.pp
  import 'nodes.pp'
  filebucket { main: server => "" }
  File { backup => main }
  Exec { path => "/usr/bin:/usr/sbin:/bin:/sbin" }


  # nodes.pp
  node default {
    include ntp
  }
  node '' inherits default {
  }


In this example the default system definition will install the sudo module. The definition for a server will inherit that definition and add to it the ntp module. And finally the definition for halo inherits the server definition and adds to it the bip module.

In later posts I'll expand on the above to do more involved server definitions, specifically add modules that include files. But let's not get ahead of ourselves.

Step 3: Add the module definition

Before Puppet can do anything on the client it has to have more details on, in this case, the ntp module.

To do that we first create the file /etc/puppet/modules/ntp/tests/init.pp with the following content:

  class { 'ntp': }

Next we'll create the file /etc/puppet/modules/ntp/manifests/init.pp with the following content:

  class ntp {
    package { 'ntp': ensure => installed }

    file { '/etc/ntp.conf':
      owner   => root,
      group   => root,
      mode    => '0640',
      require => Package['ntp'],
    }

    service { 'ntp':
      name      => 'ntpd',
      ensure    => running,
      enable    => true,
      subscribe => File['/etc/ntp.conf'],
    }
  }

In this configuration we ensure that the ntp package is installed, that the file /etc/ntp.conf is owned by user root and group root with the proper file mode, and (via the require parameter) that the package providing the file is installed before the file is managed.

The service stanza adds to the above a check for the actual service itself. If Puppet doesn't see a process named "ntpd" then it knows that the service isn't up and will launch it for you.
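As an aside, the same ordering and notification can be expressed with Puppet's resource chaining arrows (available since Puppet 2.6); this one-liner is equivalent to the require and subscribe parameters used above:

```
  Package['ntp'] -> File['/etc/ntp.conf'] ~> Service['ntp']
```

Here -> applies the left resource before the right one, and ~> additionally tells the right resource to refresh (restart ntpd, in this case) whenever the left one changes.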


That's it! If you then launch the puppet agent on your client machine, you should see it apply the above configuration by installing the package named ntp.

Sunday, August 18, 2013

A Simple Puppet 3.1 Setup

Recently I got the notion of setting up a puppetized configuration system for my computers at home. I have two machines:
  • earth - a Linux desktop, which will be the puppet master as well as a client
  • halo - a Linux server, which will be the puppet client

My biggest frustration, though, was finding a simple tutorial that would help me get Puppet up and running, the two machines talking, and a configuration pushing down onto the client.

So after pulling from a few separate sources, here's what I found works. In a future post I'll write about how, after more frustration, I was able to get configurations pushing down onto the systems and how I setup version control on the puppet configurations themselves.

Step 1: Install Puppet (Fedora 19)

Not the hardest part, but you do need to be aware of what packages are out there. The two packages are puppet and puppet-server. The former is what you need on any system that will act as a client or agent, the latter on any system that will be offering up puppetized data.

So, obviously, I installed puppet on halo, and puppet and puppet-server on earth.

Step 2: Configuring The Puppet Master

This is the first part that gave me headaches. Since I don't want to deal with external certificates, I just wanted something that would work for me in my private network.

What I did was to configure earth to be its own certificate authority with the following in /etc/puppet/puppet.conf:
    logdir = /var/log/puppet
    rundir = /var/run/puppet
    ssldir = $vardir/ssl

    # self-signing certificates
    server =
    certname =

    reports = store, http

    classfile = $vardir/classes.txt
    localconfig = $vardir/localconfig

Step 3: Exchanging SSL Certificates With The Agent

Here is where there was a decided lack of examples online. And the Puppet website didn't help at all, especially with recovering things after a failure occurred.

The steps to follow are:
  1. unless you have your own DNS, add the fully qualified hostname of each machine to the other's /etc/hosts file; i.e., for me I had to put earth's entry in halo's file and halo's entry in earth's, then
  2. open two terminals on your puppet master (in my case, on earth) and one on your puppet agent (in my case, on halo), then
  3. in one puppet master terminal, start up a master using the command line:
     puppet master --no-daemonize --verbose
  4. in the puppet agent terminal, start an agent using the command line:
     puppet agent -t --no-daemonize --verbose
  5. you'll see some messages about exchanging the SSL credentials and then a note that no certificate is waiting, at which point in the other puppet master terminal window you'll run:
     puppet cert sign
At this point your machines will have shared SSL credentials and will be able to talk to each other.

Step 4: How To Recover If Something Goes Wrong

If you start getting messages about the keys being wrong on the agent side, the easiest thing to do is to delete the /var/lib/puppet/ssl/ directories on BOTH machines. This way you no longer have any data about the other system and can start over with a clean slate.

Sunday, August 4, 2013

Grouping Channels In Weechat By Network

One of the things that's kept me from switching from Xchat to Weechat for IRC was that I couldn't keep my channels grouped together by network. Where I work we have internal IRC channels with the same name as public IRC channels, and it was confusing to look at the window and figure out which was internal or not.

Fortunately, I found a way to change that in Weechat and so have switched over to it exclusively. Here's how:

Install the buffers.pl plugin.

The buffers plugin lets you configure how the various buffers are displayed.

  /script install buffers.pl

Tell buffers.pl to show the IRC network name.

By default buffers.pl only shows the channel name (#name). To group channels together, you need to enable showing the fully qualified channel name (network#name):

  /set buffers.look.short_names off

Sort channels by name and not numbers.

This groups the channels together by their server name. Otherwise they're grouped by the order in which they were opened. So if you open a channel in Freenode, say, after opening one in OFTC then the channel in Freenode will fall after OFTC and not get put with the other Freenode channels.

  /set buffers.look.sort name

That's it! Now your channels are grouped together by network.

Tuesday, July 9, 2013

Fedora 19: Nouveau Drivers Are Fixed!

At work a few months ago I got a new laptop: a Lenovo T530 with the Optimus Technology display chipset:
For more taxing multimedia and gaming use, NVIDIA's line of NVS with Optimus graphics cards provide the graphical processing boost you need. Optimus technology automatically configures your system to provide for the most optimal experience with your games or applications while also maintaining battery life.
The problem I hit was that the Nouveau drivers would NOT play nicely with this video setup. If I disabled the nVidia hardware and ran only the Intel video chipset, things worked beautifully, but I couldn't use my docking station since the Intel chipset doesn't drive the dock's external displays. If I enabled only the nVidia hardware then I could drive my external monitors but couldn't go into standby mode, couldn't dock and then undock, and inevitably the display would get borked (BZ#948079).

To deal with the problem, I had no choice but to start using the closed-source nVidia drivers. They're free of charge, but being closed source just goes against the spirit of what I support. They also have the annoying habit of requiring me to run the nVidia configuration tool when I dock my laptop in order to enable the external monitors, rather than just switching to them automatically as the nouveau drivers do.

So when I upgraded my laptop with Fedora 19 (Schroedinger's Cat), I was happy to find that the bug had, at some point, been quietly fixed!

Wednesday, June 5, 2013

Omniscience Vs. Free Will: A VERY Old Web Page I Wrote

Years ago I wrote the following about the logical inconsistency between free will and omniscience. I hadn't thought about or read the page in a long time, but it came to mind today when someone sent an email asking to translate the website I had back in the early 2000s.

The Basic Premise

The concepts of omniscience and free will are mutually exclusive. If there exists omniscience, then no being is able to make choices other than those known by the omniscient being. If free will exists, then there can be no such thing as omniscience.

What Is Omniscience?

Omniscience: having infinite awareness, understanding and insight; possessed of universal or complete knowledge
A being with omniscience is one that has complete knowledge of the universe. There can be no knowledge that this being does not possess, and nothing for it to learn, since learning would require a form of limited knowledge in which information is absent in one state and then present in a later state.

What Is Free Will?

Free Will: freedom of humans to make choices that are not determined by prior causes or divine intervention
In order for a being to have free will, it must be able to choose any possible action whenever a decision is presented. Any choice that is possible must be allowed without external intervention or influence.

So, What's The Problem?

With omniscience, all knowledge is known, including the choice made by each being at every stage. It is not possible for a being to make a choice other than what is already known via omniscience, thereby precluding any chance for free will. A being cannot make any choice that would contradict omniscience, since omniscience by nature requires that all choices be predetermined.

So what?

What if the omniscient being is outside of our space-time?

Well, whether that being is confined to our universe or is somehow able to exist outside of our universe, if it has omniscience with regards to our universe then it directly affects the free will of beings within our universe. The contradiction comes from our choices being decided before they are made within our flow of time.

Can't the being outside of our universe view our choices to learn them? 

Sure. But, that is not omniscience: it is limited knowledge. Omniscience is, by definition, infinite knowledge. Since it is impossible to combine enough finite elements to reach infinity, a being cannot reach omniscience by learning.

An Example Of The Contradiction

I wake up and decide to have oatmeal for breakfast rather than eggs.

Have I made a choice of my own free will?

That depends. If there is a being with omniscience, then my choice to have oatmeal was dictated before the morning in question. I did not make a choice; instead, I acted out a script that only superficially looks like free will. The choice to have oatmeal was made by something other than myself and I could only follow along acting out a script and not making a choice of my own free will.

The problem is that omniscience requires 100% perfect knowledge. This leaves no room for variation. When each decision fork is reached, only one branch can be followed: the one previously decided via omniscience.

But, Don't We Have Free Will?

Apparently, yes, we do. But, that's not the issue. The contradiction does not mean that we do not have free will. Since we seem to have free will, the only way to resolve the contradiction is to admit that there cannot exist a being with omniscience.

Can Omniscience Be Learned?

No. The process of learning is based on having limited knowledge. Since omniscience is the state of having infinite knowledge, you cannot accumulate finite pieces of knowledge until you finally have infinite knowledge. The problem is that there is no such thing as infinity minus one: how can you be one unit of knowledge away from having infinite knowledge?

Can't A Being Have Foreknowledge Instead?

Sure! But foreknowledge is not the same as omniscience. Foreknowledge is merely the knowledge of something in advance, and is based on limited knowledge. A being can have foreknowledge without being omniscient.

You do not become omniscient by watching a movie backwards, for example. Omniscience would require a being to know the entire movie inside and out before ever actually observing the film.

Why Can't A Being Have Progressive Omniscience?

Progressive omniscience is a meaningless phrase. This would be knowledge gained, and is discussed above under the header Can Omniscience Be Learned?

Friday, May 17, 2013

Podcasting 101: Creating A Podcast Website

In this installment, I'll help you to understand how to create a single episode of your new podcast by taking you through the steps I follow. We will assume that you've already recorded, mixed and exported the audio.

First things first, though, you'll need to create a blog for your podcast. Then we will setup a Feedburner redirector for the podcast so that, should you ever need to move the podcast to a different hosting site, you won't have to force your listeners to manually change their feed location.

Creating The Podcast Blog Site (With Blogger)

I chose Blogger for no other reason than that I use it myself for both of my shows, though I'm sure the steps I describe will largely apply to other blogging sites as well.

If you have a Google account already, you're halfway to creating your blog. If not, register for one by signing up for Gmail.

Go to the Blogger home page. There you will see a button in the upper-left corner that says "New Blog".

Click it.

Hopefully you've chosen a decent name for your podcast, something catchy, creative and fresh. For now we won't worry about creating a podcast domain name, so in the field marked "Address" enter a unique, simplified version of your podcast's name.

Click "Create Blog!"

Now we have a way to share the episodes.

Adding Your Custom Domain

On the blog list screen, click on the blog's name, and you will see a page with options such as "Post", "Pages", "Earnings", etc. Towards the bottom of this list is "Settings".

Click it.

Under the "Publishing" heading you'll want to click "Add a custom domain". This will open an input field that will let you put in the hostname you've setup (instructions are in a future installment) for your podcast's blog site.

For example, at A Little Dead Podcast our blog has its own hostname, and that hostname is what I entered in this field. What this does is tell Blogger that it should make all pages and blog posts use a URL relative to the address entered.

If you don't understand what that means, it's okay. You can trust me on this.

Now you have an address you can share with listeners to tell them where to go to find your show.

Setting Up A Feed Redirector

Feedburner is a great, useful tool for podcasters. It lets you see how many times your episodes have been downloaded, from what countries the downloads have occurred, and more.

Here, though, we're going to set up the redirector itself. But there's a lot more that Feedburner can do for you, which we will explore in another installment.

In a separate tab on your browser, go to the Feedburner website.

You'll see the text "Burn a feed right now" along with an input field and a checkbox that says "I am a podcaster".

Mark the checkbox.

In the input field, enter the URL for your podcast. If you don't have a custom domain and used Blogger, then this is the name of your podcast followed by .blogspot.com.

So, for example, if you gave your blog the name "testcast2" in the earlier step, then here you would enter testcast2.blogspot.com.

Click "Next". If you entered the right information then you should be presented with two options under the title "Identify Feed Source":
  • an atom feed, and
  • an RSS feed
You want to select the RSS feed here, and click Next. On the next screen enter the title for your podcast redirector, and also pick an address for it. For this latter entry, pick something simple, like your podcast's abbreviated name on your host site. So, again, in this example, since I named the podcast "testcast2" on Blogger, I would enter "testcast2" as the feed address.

Click Next.

On the congratulations page you'll see your new feed URL at the top. Copy this address and go back to your blog's setup page. Under the Settings->Other->Site Feed section, click Add for the "Post Feed Redirect URL" and paste the URL into the input field shown.

Click the "Save Settings" button in the upper-right corner.

Back on the Feedburner page, click through all of the Next buttons until you're back to the main page.

There! Now you have a podcast blog as well as an active feed. To verify the feed is working, paste the Feedburner URL into your browser. If everything is set up correctly then you should see a page showing an empty feed.

Future Installments

In a future installment, I'll show you how to configure Feedburner to do things like alter your RSS feed for iTunes (inserting elements that Apple specifically wants present), how to register the podcast with iTunes, and how to change your feed to track statistics for episode entries separately from non-episode entries on your blog (we post news that I want to view separately when looking at the audience breakdown).

I'll also cover how to use Project Wonderful to make the podcast's website help pay for itself while growing your audience.

Till then, have fun with your new website!

Thursday, May 16, 2013

Podcasting 101: What Are The Things I Need?

The first question, and probably the most challenging one for the new podcast host, is what are the things I need to create and share my show?

What you (minimally) need is:

  • equipment to record and edit your episodes
  • a means of posting the episodes themselves online
  • a way of letting listeners/viewers know that the new episodes are available
Here's what I use for my shows. I'm not going to talk about video equipment, since video is a whole different world from audio when it comes to recording and editing. I'll talk about that in a future installment.

Recording Equipment

Software

To record episodes, you'll want something that allows you to record multiple tracks of audio.

For normal recordings I use Audacity. It's free, available on Linux, Windows and the Mac, and supports all standard audio formats. It uses its own format for locally storing a project, but you can export the audio to MP3, Ogg Vorbis, WAV, etc. For podcasts, though, I recommend MP3.

And during exports, you can add most ID3 meta data to the episode.

In a later installment I'll talk about how to use Audacity in more detail.

Microphone

Don't skimp on this piece, but do pick something in your budget. Remember: you get what you pay for, and buying a cheap microphone results in bad audio quality. You want something that produces a warm sound. A decent USB headset with built-in microphone (like my Microsoft Lifechat LX-3000) works great for this, and you can use it for other things such as recording interviews, which is another installment.

Other Equipment

If you want to use a standard studio microphone (which is what I do for normal episodes), you'll want to also get a USB mixer. A certain cohost of mine gave me a very nice microphone and mixer setup that matches what he uses for his segments. And I found that it provides the best audio quality to date for the show.

Posting Episodes Online

After you've produced your episode, you need to put it up online so listeners can get to it.

The cheapest way to do this is to post the episodes on the Internet Archive (archive.org). The only caveat is that you need to release your audio under some sort of free or unencumbered license. My shows are all released under Creative Commons Share-Alike Non-Commercial (CC-SA-NC) licenses. People are free to take the audio, cut it up and use bits of it, so long as it's for non-commercial purposes (they can't charge for the result), they give a reference back to us, and they allow others to take their work and do the same.

After you've uploaded your episode, you can then get a link directly to the MP3 file to put into your podcast feed. That way listeners can download the file. More on that in a later installment.

Notifying Listeners

To notify listeners that you have a new episode available, you have to create an RSS feed, which is simpler than it sounds. The first step is to create a blog, and the second is to add it to podcast aggregators like iTunes, which will also be a later installment.

Another thing to do is to set up a feed redirector, such as Feedburner, and tell listeners to subscribe to that. Setting that up is also a later installment.

The cheapest thing is to create a blog on a free site like Blogger.

After you've uploaded your MP3 file, you create a new blog entry on your site. In the lower-right corner where it says "Links" you copy the MP3's URL (being sure to change "https" to "http" if your URL has that) into both the "Title Link" and "Enclosure Link" fields, and set the MIME type to "audio/mp3".

Once you publish the new blog entry, anybody who is subscribed to the RSS feed for the blog will be notified that a new episode is available.
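Roughly speaking, those two fields end up as the link and enclosure of the feed item Blogger generates. The URL below is a made-up placeholder, but the shape of the entry is what podcast clients look for:

```
  <item>
    <title>Episode 1</title>
    <link>http://example.org/podcast/episode1.mp3</link>
    <enclosure url="http://example.org/podcast/episode1.mp3"
               length="12345678" type="audio/mp3"/>
  </item>
```

The enclosure element is the part that tells a podcast client the entry has downloadable audio attached, which is why getting those fields right matters.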


Setting up a podcast feed is pretty simple IF you have the right tools and services in place. With these first few steps you're now well on your way to being a podcaster.

Podcasting 101: How To Do A Podcast For Little To No Money

My two podcasts (A Little Dead Podcast and The Zombie Mob) recently joined a network of other podcasters called The 76th Street Network. And after talking with a few other podcasters, I found that not everybody is as familiar with the tools and services that are available.

So what I'm going to do is, over time, write up a series of blog entries here for how to take advantage of those free services and tools and also the little tips and tricks that I've developed over the past four years of hosting my own shows.

Keep an eye out here for new installments as they're posted.

Thursday, May 9, 2013

Flock To Fedora

From the website:
For eight years, Fedora users and developers have gathered at an event named for them, the Fedora Users and Developers conference (FUDCon). But we’ve grown, and it’s time for a new approach: Flock.
Flock is a brand new conference where Fedora contributors can come together, discuss new ideas, work to make those ideas a reality, and continue to promote the core values of the Fedora community: Freedom, Friends, Features, and First.
Fedora and the people who participate in the project encompass so much more than just an interest in Linux. Flock is where you meet with other members of the Fedora community who share whatever your interest is, whether that’s the kernel or the cloud, hardware or UX design. We also want to embrace and invite the growing open hardware community so that we can learn from one another and create better things together.
The first Flock will be held 09-12 August in Charleston, SC. For more information, click the link above. And if you want to present at the conference, please submit your topic through the conference site.

Thursday, May 2, 2013

Close Windows, Open Doors

Microsoft wants to keep you locked in to Windows so that it can take your money, your personal data, and your user freedom. They don't want you to know that you have a choice of better operating systems; operating systems that respect your freedom. There are tons of free "as in freedom" software operating systems that you can download and install at no cost. And when they're improved, you can choose whether or not you want to upgrade, without a corporation breathing down your neck.
It is time to upgrade your computer, but not to Windows 8. Pledge to free your computer today!
Close Windows, Open Doors

Tuesday, April 23, 2013

Fedora 19: Nouveau Test Day Is The 24th

Having just received a new laptop at work (Lenovo Thinkpad T530), I was eager to load it with Fedora and get to work. But, unfortunately, there's a known bug with the nVidia graphics hardware and the nouveau drivers. If I suspend/resume my laptop, the display most likely comes back unusable. Not locked up (I can still drop to a tty and run "init 3; init 5; exit" to get back to a workable desktop), but unusable.

Definitely not a Good Thing™ since I like to have the laptop suspend when I close the lid to change locations, etc. I don't want to go through the whole shutdown and then startup process each time.

Fortunately, tomorrow is the Fedora 19 Test Day for Nouveau and nVidia graphics hardware! So I downloaded the latest F19 x86_64 LiveCD image, put it on a thumbdrive, and will be doing some testing later today and reporting back my findings.

Not the least of which is this one bug that's got me partially missing my old laptop.

Monday, April 1, 2013

Overriding Global Methods In Perl

For some work I've done recently in Ruby, a coworker added some utility functions to help with working with Lists, Maps and Arrays. In Ruby an array appends elements using the << operator, so by adding such an operator to his helper class he was able to have the helper code be a drop-in replacement for Ruby Arrays.

Now working in Perl, I needed to do something similar for arrays. But in Perl arrays aren't instances of any class, while my replacement is going to be just that. In Perl if you want to append to an array you use the global function:

    push(@array, $scalar);

My goal was to supplant this method with my own and, when @array is an instance of my new class (qpid::proton::ArrayHelper) do one thing. If, however, it's just a plain old Perl array then I want to call the standard function from Perl.

Sound difficult? Nope, it's easy.


To replace the global function with your own, you can simply do this:

    BEGIN {
        sub qpid_proton_push(\@$) {
            # with the \@ prototype, the array arrives as a reference
            my ($array_ref, $value) = @_;

            # if this is my array type, then do one thing
            if (isProtonType($array_ref)) {
                # do one thing
            } else {
                # call the original function
                CORE::push(@{$array_ref}, $value);
            }
        }

        # assign push to my new function
        *CORE::GLOBAL::push = \&qpid_proton_push;
    }

The solution defines a new function and then points the global function slot at it. The reason I chose this approach rather than assigning an anonymous subroutine was that I needed to declare a prototype for qpid_proton_push: since the first argument is an array, a prototype is necessary to tell Perl how to handle the arguments.

Friday, March 29, 2013

Typing Accents In Fedora Linux

For my degree, I need to take at least two semesters of a foreign language. For maximum benefit I've decided Spanish is the way to go (though I'm still considering French), given that I'll have plenty of opportunities to use it in the United States, as well as in Spain when my wife and I go there some day.

But the first issue for me was: how do I enter the characters in Linux?

My day-to-day desktop is Gnome 3, so I looked up the key sequence to use on a US keyboard. To enter an accented character you hold down the Shift and Ctrl keys, press 'u', and then type the code for the letter you want. Remember, you hold down the Shift and Control keys through the entire keystroke sequence.

f3 → ó
f1 → ñ

Cuanta más sepa!

Thursday, March 28, 2013

Pandora Won't Load On Firefox And Linux...

I was having a bit of a problem getting Pandora to run on my work laptop lately. The symptom was simple: Pandora would just sit....

...and sit....

...and sit....

...before finally timing out and saying it can't load.

The problem was Flashblock, an extension I have installed to prevent Flash content from automatically loading on pages and killing performance on my laptop. I didn't even think about how Pandora's interface was a Flash app.

So adding Pandora to Flashblock's whitelist re-enabled the site for me.

Joining A New Open Source Project...

Lately I've been feeling the itch to participate in a project related to the Gnome desktop. In particular, I've been wanting to work on something that I would use on a regular basis. There are several projects I regularly depend on, such as gedit, but I wasn't sure what I wanted to do.

One thing that I definitely depend on to get my work done is background music. I usually put on my headphones, plug them into my phone, and listen to either Howard Stern, my podcasts or music while I get things done.

So last week I decided to check out the gnome-music project and see if I could help out. I joined the #gnome-music channel on freenode and asked how I could help.

The first challenge is going to be getting my laptop ready for development. I use Fedora 18, currently the latest release, as my desktop, and even that is a little behind the curve of what's needed for development. But not to worry! There are ways to address the problem and get what I need to do the work.

Over the next few weeks I'll, hopefully, have some tips and help to post about here as I get involved. My first task is to provide a way to access music stored on Google Music as well as on OwnDrive.

This is going to be fun!

Tuesday, March 19, 2013

It's Been A Long Night...

Over the past year or so my mother's health has been slowly deteriorating. At her age (late 80s) this was the inevitable result of the aging process. When I would call her she would sometimes forget that she had told me something, or she would call saying she hadn't heard from me in a long time even though it had only been a couple of days (or even a couple of hours).

In November she had a particularly hard episode, the latest in a long line due to her blood pressure and heart. She went into hospital and, after consulting with a heart specialist, they decided to install a pacemaker to keep her heart beating consistently. After the surgery she was sent to a physical therapy/rehab facility where she was supposed to be cared for and motivated to get up and move about after the surgery. Without going into it, that didn't happen and her health slipped more.

By February she had been in and out of hospital a few times. Christene and I had planned on my taking spring break this year in New Jersey to visit with her, so I went up with Ben to spend four days visiting. However, the day before we left my mom had another incident and had to go back into hospital, where she was for the whole time we visited.

When I first showed up she didn't recognize me. The woman who raised me by herself (my father passed away 48 years ago on 16 March just before I was born), who had yelled at me, hugged me, cared for me and kicked me out of the nest when it was time, didn't know me at first. It was a slow realization that crossed her face when she realized it was me, and her first words were, "My baby."

My mother would lapse in and out of consciousness and lucidity while we were there. She kept asking after Caleb and Rachel, and every once in a while the woman I knew growing up would shine through. Especially when, while discussing her living will, she said she just "wanted to cut through the bullshit" as I explained each part of it to her. It was painful to see her in so much pain that she could barely move her arm to sign the papers.

I held my mom's hand a lot that week. I would just stand at her bedside, or sit in a chair next to her and watch TV as she kept nodding off. She forgot where she was fairly often, and asked me more than once if we were flying somewhere.

On the last day we were in New Jersey, I stayed until the very last minute before Ben and I had to get back to the airport. My mother's lunch was served, and we joked about hospital food. She didn't have the strength to even feed herself, so I fed my mother. Just like she had done when I was little. And we talked. About nothing in particular, just talked.

When her nurse and the doctor came in, I knew it was time to go. But I didn't want to. I knew that was going to be the last time. I hugged my mother, I kissed her head and told her how much I loved her. I had a lot of resentment from my years that I let go that day. She's my mother, and I love her and wanted her to know that. I walked away and told her, "I'll see you later, Mom."

Last night my sister, Wanda, called around midnight. Our mother passed away in her sleep at home. She had put into her living will that she wanted to be kept comfortable until the end. She hadn't been lucid for a few days, and was sleeping in a weak, restless way.

She died in her home, in her own living room, with family there.

She's flying now.

I miss you, mom. I love you.

Wednesday, February 27, 2013

Another Reason To Buy A Kindle

Because this advertisement pisses off "One Million Moms", who want to boycott Amazon for being accepting of gays rather than homophobic like them.

Order one here.

Monday, February 25, 2013

Ruby 2.0.0 Officially Released

From the official Ruby language website:
We are pleased to announce the release of Ruby 2.0.0-p0.
Ruby 2.0.0 is the first stable release of the Ruby 2.0 series, with many new features and improvements in response to the increasingly diverse and expanding demands for Ruby.
Enjoy programming with Ruby 2.0.0!
Some of the features include:
  • Language core features
    • Keyword arguments, which give flexibility to API design
    • Module#prepend, which is a new way to extend a class
    • A literal %i, which creates an array of symbols easily
    • __dir__, which returns the dirname of the file currently being executed
    • The UTF-8 default encoding, which make many magic comments omissible
  • Built-in libraries
    • Enumerable#lazy and Enumerator::Lazy, for (possibly infinite) lazy stream
    • Enumerator#size and Range#size, for lazy size evaluation
    • #to_h, which is a new convention for conversion to Hash
    • Onigmo, which is a new regexp engine (a fork of Oniguruma)
    • Asynchronous exception handling API
  • Debug support
    • DTrace support, which enables run-time diagnosis in production
    • TracePoint, which is an improved tracing API
  • Performance improvements
    • GC optimization by bitmap marking
    • Kernel#require optimization which makes Rails startup very fast
    • VM optimization such as method dispatch
    • Float operation optimization
And it's promised that compatibility between 1.9 and 2.0 will be even better than it was between 1.8 and 1.9!
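To make the list above concrete, here's a quick sketch of a few of those features in action (the names are my own; note that in 2.0 keyword arguments still require default values):

```ruby
# Keyword arguments (with a default, as Ruby 2.0 requires)
def greet(name, greeting: "Hello")
  "#{greeting}, #{name}!"
end

# The %i literal builds an array of symbols
states = %i[new open closed]

# Enumerable#lazy can walk a (possibly infinite) stream
first_squares = (1..Float::INFINITY).lazy.map { |n| n * n }.first(3)

# #to_h as the conversion convention, e.g. on a Struct
Point = Struct.new(:x, :y)
coords = Point.new(1, 2).to_h
```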

Tuesday, February 12, 2013

Terminator - Multiple Panes With No Pain

One of the habits I've adopted over the years is to keep multiple terminals open on my development desktop: one for compiling, one for running tests, one for listing and searching through code.

Similarly, for accessing email I would open a separate terminal instance to run Mutt and another to run offlineimap.

When tabbed terminals started popping up, it made things easier to manage. I could have a single instance of a terminal for email, another for development, etc. This, however, had limits since I couldn't have everything easily visible at once.

Then, a while ago a friend at work turned me on to Terminator, a multi-paned terminal app. And I've easily incorporated that into my normal workflow.


The keys I commonly use for working with Terminator are ones to split the current pane vertically or horizontally, to maximize the current pane, and to switch back and forth between panes.

Splitting Panes

To split a pane vertically, creating two panes side by side, simply hit Shift+Ctrl+E. To split horizontally, creating two panes above and below, hit Shift+Ctrl+O. In both cases the newly created panes are equal in size.

Maximizing A Pane

To toggle maximizing the current pane, hit Shift+Ctrl+X. When a pane is maximized you cannot move to a different pane, though being able to do so is on my wish list.

Moving Between Panes

To move to the next pane, press Ctrl+Tab. To move to the previous pane, press Shift+Ctrl+Tab.

Note that the cycling order appears to be defined by the creation order of the panes and not by their actual position in the window. So creating a new pane between panes 1 and 2 doesn't guarantee that it will come right after pane 1 in the cycle.

Programming Setup

My normal setup for programming is to open an instance of Terminator, split it vertically once, and then split the right pane horizontally once. This gives me three panes: a tall coding pane on the left, with a compiling pane and a miscellaneous pane stacked on the right.
Additionally, for development I use emacs as my primary editor.

When I'm writing code I hit Meta+CursorRight to move emacs to the right half of my display. This covers the compiling and miscellaneous panes, leaving the coding pane visible underneath. This allows me to look at other pieces of code in other source modules without leaving my editor.

When I'm fixing a compiler error, I can hit Meta+CursorLeft to move emacs to the other side of the desktop. Then I can see the compiler error as I'm fixing the code or build environment.

Also, on the fly, I can easily create temporary panes by hitting one of the split keys and then immediately maximizing the new pane. I use such panes to ssh into a build machine or one of my other work boxes to, for example, test a change on a different platform (such as my new Chromebook for ARM development). Then, when I'm done, simply exiting the pane restores my normal Terminator layout.


All in all, a highly useful terminal application. It provides me with the ability to have more than one piece of information on the screen at once. It works well with other tools that I use to produce code, which makes Terminator indispensable.

Wednesday, February 6, 2013

Fedora 18 For ARM Now Available

In case you missed it, I posted an article yesterday on how to load Fedora onto a Samsung Chromebook.

Today the official announcement of the release of Fedora 18 for ARM hits the 'net.
The Fedora 18 for ARM release includes pre-built images for Versatile Express (QEMU), Trimslice (Tegra), Pandaboard (OMAP4), GuruPlug (Kirkwood), and Beagleboard (OMAP3) hardware platforms. Fedora 18 for ARM also includes an installation tree in the yum repository which may be used to PXE-boot a kickstart-based installation on systems that support this option, such as the Calxeda EnergyCore (HighBank).

Tuesday, February 5, 2013

Loading Fedora On A Samsung Chromebook

Overview And Goals

The goal is to describe how to boot and run Fedora Linux on a Samsung Chromebook. Lots of thanks go to Chris Hewitt for laying the foundation for this page.

Equipment & Software

Hardware: Samsung Chromebook XE303C12 (ARM Exynos 5 processor)
Storage: Sandisk 32G card, which identifies as /dev/mmcblk0 (substitute your own drive's device in the instructions below)
Fedora Image: The generic hardware floating point image from here (or click here to download the image now).

Preparing The Disk

The first thing to do is to prepare the SD card. What we want to do is create two partitions for the kernel images and a partition for holding the root filesystem for Fedora.

You don't need a lot of space for the kernel images, so two 16MB partitions are fine. You can leave the remaining space for the root file system or, if you want to keep /home separate to protect it in case you have to redo the rootfs, create a fourth partition for the home file system. The latter is how I do things.

Partition The Drive

To partition the drive, first create the GPT partition table:

sudo gdisk /dev/mmcblk0

o # create a new empty GPT partition table
w # write the new table to disk

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/mmcblk0.
The operation has completed successfully.

Next you need to create the individual partitions. As Chris did, I'll just post what you need to type to create the partitions since it can be VERY verbose otherwise. Here we configure the geometry for the disk:

sudo gdisk /dev/mmcblk0

x # go into extra functionality/expert mode
l # set the sector alignment value
8192 # align partitions on 8192-sector boundaries
m # return to the main menu

Now we create the partitions. We'll create two 16MB partitions to hold the kernels from the Chromebook, a 15G partition for the root file system and use the remaining space for a home filesystem. If you don't want to have a separate home filesystem, then follow the instructions inline:

n # creates a new partition
1 # partition 1
(ENTER) # accept the default starting sector
+16M # make partition 1 16MB
7f00 # make the partition type "ChromeOS kernel"

n # creates a new partition
2 # partition 2
(ENTER) # accept the default starting sector
+16M # make partition 2 16MB
7f00 # make the partition type "ChromeOS kernel"

If you want the root filesystem to use the rest of the disk, then do this:

n # another partition
3 # partition 3
(ENTER) # accept the default starting sector
(ENTER) # go to the end of the free space
(ENTER) # use the default file system type

What I did was this:

n # another partition
3 # partition 3
(ENTER) # accept the default starting sector
+15G # make the root filesystem 15G
(ENTER) # use the default file system type

n # another partition
4 # partition 4
(ENTER) # accept the default starting sector
(ENTER) # go to the end of the free space
(ENTER) # use the default file system type

When it's all done, you need to update the disk:
w # writes the changes to disk

Format Root And Home

Now format the filesystems. Again, if you didn't create a home filesystem, then don't format it. And BE CAREFUL when specifying the device to format!

For me /dev/mmcblk0p3 is the root file system, while /dev/mmcblk0p4 is the home file system.

sudo mkfs -t ext4 /dev/mmcblk0p3
sudo mkfs -t ext4 /dev/mmcblk0p4

Extracting The Fedora Image

Now that you have a prepared disk, you need to first mount the file system and then extract the Fedora root filesystem onto it.

First mount the root filesystem (on the machine you're using to prepare the card, where it shows up as /dev/mmcblk0):

sudo mount /dev/mmcblk0p3 /mnt

Now extract the file containing the image that you downloaded from Fedora:

sudo tar Jxvf Fedora-18-armhfp-rootfs.tar.xz -C /mnt

Fixing /etc/fstab In The Image

You'll now want to update the /etc/fstab file in the new image: comment out the existing entries and add a single one. (The device is mmcblk1p3 here because that's how the SD card will appear on the Chromebook itself.)

/dev/mmcblk1p3 / ext4 defaults 1 1

Using UUIDs Instead Of Device IDs

Thanks to Steve Falco for this.

To use the UUIDs, use the following command line:

dumpe2fs /dev/sdf3 | grep UUID

and then replace the UUIDs for each partition in /etc/fstab with those shown.

Installing The Chromebook Kernel

There is work being done to get a Fedora kernel to install on the Chromebook, but for now you have to use the one that came with the Chromebook itself. First, boot your Chromebook into developer mode: hold down both the Escape and Refresh keys and press the Power button. When the system reboots it will say "To turn OS verification OFF, press ENTER".

Do this.

The system will then tell you that OS verification is off. This puts your Chromebook into developer mode. Don't get worried when it says your local data is being cleared, this is normal. You'll be able to dual boot your system, booting either Fedora from the SD card or ChromeOS from the SSD. It will take about 15 minutes or so to wipe the data, so be patient.

Once the system has finished clearing your data and restarting, it will reboot and show you a screen that says, "OS verification is OFF". Press Ctrl+D to boot the system at this point.

Log into the system (you will need to recreate your account). Once into the system, type Ctrl+Alt+T to launch a crosh box. Then type:

crosh> shell

Then type:

chronos@localhost / $ sudo -s 

Now lets create some bootable images!

To create a bootable image, which we'll copy to our two boot partitions, type the following:

 cd /tmp 

echo "console=tty1 debug verbose root=/dev/mmcblk1p3 rootwait rw lsm.module_locking=0" > /tmp/config

vbutil_kernel --pack /tmp/newkern --keyblock /usr/share/vboot/devkeys/kernel.keyblock --version 1 --signprivate /usr/share/vboot/devkeys/kernel_data_key.vbprivk --config=/tmp/config --vmlinuz /boot/vmlinuz-3.4.0 --arch arm

 Now we need to copy that image onto our two boot partitions and enable booting from it. To do that, type the following:

dd if=/tmp/newkern of=/dev/mmcblk1p1 

dd if=/tmp/newkern of=/dev/mmcblk1p2 

crossystem dev_boot_usb=1

It's okay if you see the message "Unable to open FDT property nonvolatile-context-storage". Apparently this happens to everybody and is not a sign that something's gone wrong.

To mark the partitions as bootable, type:

cgpt add -i 1 -S 1 -T 5 -P 10 -l KERN-A /dev/mmcblk1 

cgpt add -i 2 -S 1 -T 5 -P 5 -l KERN-B /dev/mmcblk1

Copy the kernel firmware and libraries onto the new root filesystem.

Notice that I'm using "External Drive 3" here. That's because I'm using a separate filesystem for /home. If you went with just a single filesystem then you'll use "External Drive 2" here.

Again, thanks to Steve Falco for pointing out that in some cases these drive labels might be reversed or different. Be sure to use ls to check both External Drive mounts to see which contains the actual root file system for Fedora.

After copying the files, remount the partition and chroot into it to isolate the new Fedora root filesystem:

cp -rf /lib/modules/* /media/removable/External\ Drive\ 3/lib/modules/ 

cp -rf /lib/firmware/* /media/removable/External\ Drive\ 3/lib/firmware/ 

mount -o remount,suid,exec /media/removable/External\ Drive\ 3/ 

chroot /media/removable/External\ Drive\ 3/

Next we will set a password for root, and then create the GUEST account. Don't try to create a normal user account at this point:

passwd

adduser guest

Then leave the chroot and unmount the filesystem:

exit

umount /media/removable/External\ Drive\ 3/

Now you're ready to boot your Chromebook into Fedora!

Booting Your Chromebook Into Fedora

Reboot the Chromebook. On the screen that says OS verification is turned off, press Ctrl+U to boot from the SD card. You should see the standard Fedora booting output, which means you've successfully installed Fedora.

Once the system has booted, if you created a separate file system for /home, you can log in as guest, su to root, add the file system by UUID, and then create a proper user account.

If you have any feedback on these instructions, about how to fix problems that come up or how to make them more efficient, please send it to me or post it below as comments.

Friday, January 4, 2013

Shut Down An iPhone App Without Rebooting

One thing that gets me annoyed now and then is when the Sirius app gets stuck and refuses to start playing Howard Stern. With the multitasking capabilities of the phone, there's no explicit way to restart an app since, by pressing the home button, you only get to push the app into the background. So most of us have had to resort to rebooting the phone to force an app to quit.

Not any more.

There's a simple way to force an app to quit that's not documented but which does the trick. Having used it today when, yet again, the Sirius app got stuck and wouldn't play anything, I'll share the trick with you.

1. Press and hold the Sleep/Wake button (on top of your phone) for several seconds, until the screen pops up to let you power off the phone. DON'T POWER OFF!

2. Let go of the Sleep/Wake button and press and hold the Home button for several more seconds.

3. The screen will flash and drop you back to the application icons screen.

At this point your app is now shut down!