The year is 2011

Posted by Andrew Sat, 06 Aug 2011 05:12:38 GMT

I no longer use Windows as my primary OS. It's been a few years since I made the switch and I haven't really looked back. I'm not a zealot, so I realize that other people and businesses are on Windows and are perfectly happy being on Windows. More power to them! What I like about Ruby is being able to compile things to .exe's using rubyscript2exe and package up little utilities to give to my Windows clients and friends to do scripty things without having to go through the extreme pain of setting up mingw or cygwin or virtualbox. Everything is usually very happy.

Except today it wasn't. 

So I have this nice little app that will POST files to a site. It authenticates with OAuth and it's self-contained and pretty and nice. Built the little guy on OS X, made sure everything was In Its Right Place and switched over to my XP VM. Tried it out, everything seemed happy. Made my exe, tried it again, everything was as it Should Be. Now, here's the fun part. Transferred it to the target machine and the HTTP POST would just hang and eventually time out. No network traffic, no CPU, nothing.

Weird, you say!

Off to debug this one. So I remember reading something somewhere about how net/http(s) is a ghetto, so I figure there's some weirdness there. I check online and lo and behold there are some posts about timeouts. I try the usual remedies but it just doesn't work the way it should. Weird. I dig through the net/http code and it looks like the thing does get down to the socket, pushes to the server and then gives up after a while. Everything on the server is fine, and the most infuriating part is that on my XP VM it works. Ok, so swap out net/http for httpclient. Same thing. So it's not the client. The files are just text files so there's nothing weird there. Hmmm.

What about dropping the file size? If it's a timeout maybe it's just taking too long to upload. I drop the files down to half. Nada. Down to a few lines. Nada. Down to one line. It works. 


There's something weird going on. Two lines... it breaks. At this point I am getting grumpy and I decide to try just putting the entire file contents into a POST parameter and seeing if that works. Maybe it's a firewall issue or something? I put the POST parameter in and everything is peachy. I mean it fills up the log file with 2 MB of Base64 string but at least it works. I give up for the night and go home.

As I'm leaving the office a demon from my past emerges from the cavernous vacuum of my brain. It whispers to me of hours lost debugging on Windows because of one of the most infuriating design decisions someone in Redmond made long, long ago. Figure it out yet? 

The key is the data and the operating system.

I should've realized that the problem was the stupid line endings when ONE line of data worked but TWO didn't. I've been bitten by this stupidity before but it's been so long I completely forgot about it. Basically what happens is that if the file gets opened in text mode, the number of bytes will be wrong: Windows translates line endings in text mode, so the byte count reported for the file differs from the amount of data actually sent. That's why Apache was going nuts trying to figure out if the POST had finished, and why there were weird EOF errors in the logs that didn't make any sense at the time. What's most infuriating is that on my VM the same file was being opened in binary mode by default, whereas on the other machine it was opening in text mode. If that hadn't been the case I would've focused more on the OS differences.
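In Ruby terms, the fix is simply to open the file in binary mode ("rb") before handing it to the HTTP client, so the in-memory byte count always matches what's on disk. A minimal sketch (the file contents here are made up to stand in for the real upload):

```ruby
require "tempfile"

# A stand-in for the two-line upload, with Windows CRLF line endings.
file = Tempfile.new("upload")
file.binmode
file.write("line one\r\nline two\r\n")
file.close

# "rb" opens in binary mode, so no platform ever translates CRLF and the
# string's byte count matches the on-disk size. Plain "r" on Windows can
# translate CRLF to LF, shrinking the string and breaking Content-Length.
body = File.open(file.path, "rb") { |f| f.read }
body.bytesize == File.size(file.path)  # true on every platform
```

With text mode on Windows, `body.bytesize` comes up short by one byte per line, which is exactly the mismatch that left the server waiting for data that never arrives.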

The moral of this story is that in 2011 I'm being bitten by the same monstrous stupidity that I dealt with 10 years ago when I was hacking on Windows. Some things never change.

no trackbacks

Don't get caught by -c

Posted by Andrew Sat, 02 Jul 2011 23:37:45 GMT

Say you want to set an environment variable and then run a command. You could do something like

FOO=bar /usr/bin/env

You'd see FOO=bar in the output. If you're using something like capistrano, you'll run this command remotely wrapped as /bin/bash -c your command.

Can you spot what's wrong with this command?

bash -c FOO=bar /usr/bin/env

What do you expect it to output? The same as above? Well, it outputs nothing and returns immediately with a 0 (success) exit code. Ok, so what if you wrap it in quotes? That won't change anything, will it?

bash -c 'FOO=bar /usr/bin/env'

Nope: now it runs properly, just like at the beginning. Ok, what if you prepend it with another command?

bash -c ls && FOO=bar /usr/bin/env

Now they both run. What's going on is how bash -c parses its arguments: only the single argument immediately after -c is the command string, and anything after that just becomes the positional parameters ($0, $1, ...). So in the unquoted version the command string is the bare assignment FOO=bar, which runs nothing and exits 0, while /usr/bin/env is merely $0. And in the && version the outer shell splits the line itself: it runs bash -c ls, then runs FOO=bar /usr/bin/env directly.
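The three cases are easy to see side by side (remember: bash -c takes exactly one argument as its command string; everything after it becomes $0, $1, ...):

```shell
# Unquoted: the command string is just "FOO=bar" (a bare assignment that
# runs nothing and exits 0); /usr/bin/env merely becomes $0.
bash -c FOO=bar /usr/bin/env

# Quoted: the whole thing is the command string, so env runs with FOO set.
bash -c 'FOO=bar /usr/bin/env' | grep '^FOO='

# With &&, the *outer* shell splits the line: it runs `bash -c ls`, then
# runs `FOO=bar /usr/bin/env` itself, so both commands execute.
bash -c ls && FOO=bar /usr/bin/env > /dev/null
```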

no trackbacks

delayed_job gotcha

Posted by Andrew Mon, 17 Jan 2011 23:06:11 GMT

Found this little 'gem' (pun intended).

Here's the situation: you've got delayed_job (gem or plugin) and when you do rake jobs:work everything's fine. When you do script/delayed_job start, the thing says it ran but then it just dies. No output anywhere indicating something is the matter.

Bad delayed_job! No cookie!

Turns out delayed_job isn't at fault here. It's our old friend daemons! I swear, the amount of problems caused by daemonization makes me yearn for a simple solution. 

Moving right along, here's your solution:

config.gem "ghazel-daemons", :lib => "daemons", :source => ''

HTML just became stricter... well sort of

Posted by Andrew Tue, 11 Jan 2011 18:46:46 GMT

So now that it's 2011, the impossible has come to pass: IE is now more strict than Chrome. 

If your facebook button stopped showing up, here's the reason.

Backgrounding tasks via Capistrano... or, why I hate shells

Posted by Andrew Fri, 07 Jan 2011 22:08:21 GMT

Say you want to run a rake task in the background after a capistrano deploy. You'd think it'd be something like

run "rake blah &"

You would be wrong. Your capistrano task would never finish (if your rake task was supposed to just sit there and wait for something). If you thought back to your Operating Systems class you'd remember something about standard input, standard output and standard error, and you'd think: oh yeah, let's just close those.

run "rake blah > /dev/null 2>&1 < /dev/null &"

You'd still be wrong, because now even though it finishes, nothing's actually running on the server. What to do? Well, one of the most horrible parts of dealing with UNIX is dealing with signals. Signals are a really primitive method of interprocess communication: you send a PID a notification of some predefined type. Processes can even register to listen for those signals and take appropriate Action.

For example, say you wanted to make a server that you could tell to reload its configuration without having to bring down the server. The most common way of doing this is sending the server a HUP (hang-up) signal. Of course it's obvious which program you'd use to send signals, why, it's called "kill" (that's sarcasm for anyone paying attention). Ok fine, so you can signal your process by figuring out its PID and giving it a signal

kill -HUP 12345

In the process a signal handler would fire and you could Do Things. Of course there's some signals which you can't respond to like -KILL

kill -KILL 12345

kills process 12345 immediately without giving that process the ability to Do Things. This is a useful safety feature in case your handler gets stuck. Then there's INT, which is what your process gets when someone hits CONTROL-C. I could go on, but I'm getting bored.
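In Ruby, registering such a handler is a single Signal.trap call. A minimal sketch of the config-reload idea, faking the reload with a flag and signalling ourselves instead of running `kill` from another terminal:

```ruby
# Register a handler: on HUP, re-read config instead of dying.
reloaded = false
Signal.trap("HUP") { reloaded = true }

# Equivalent to running `kill -HUP <pid>` against our own process.
Process.kill("HUP", Process.pid)
sleep 0.1  # give the handler a moment to fire

reloaded  # => true
```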

So apparently when you disconnect from a session your processes get sent a HUP (hang up), which for most processes means death. So what you need to do is tell your rake task to ignore HUP signals. An easy way to do this is to launch rake using the aptly named (for real this time) nohup program, which will dutifully prevent hangup signals from reaching your beloved rake process. And VOILÀ:

run "nohup rake blah > /dev/null 2>&1 < /dev/null &"

Let's all pause and recognize that there are more characters telling the shell to run something in the background than there are characters invoking the actual thing being run. Now, I get it, UNIX has to be robust and so on, but really? This seems like a fairly common occurrence, and while there are daemonizing toolkits out there, I really don't want to have to deal with someone's toolkit in order to add these simple options around a task that I already know how to run.

What's the better solution? Well in my not-so-humble opinion I think that this kind of stuff should be part of POSIX (not LSB) and I should just be able to run

daemonize rake blah

On any system, and it does the above. Maybe have an option that says redirect output somewhere else. There's something similar in /etc/init.d/functions on SysV systems, but it's inconsistent between systems and is about 200 lines of terse Bash script to do something fairly simple.

The next step, of course, is being able to kill the long-running process from capistrano as well. Sure, you could figure out how fork works, and then do that little pattern of "AM I THE CHILD PROCESS OR THE PARENT PROCESS" which is oh-so-much-fun. 

Or you could just do this:

In whatever task you're running, spit out the PID to a file somewhere. When you want to kill the long-running process, just read the file and send it an ABRT. That easy. If you're invoking a command in the background using an ampersand (&) you can get the PID of the child process you just created by reading the $! (the ! means "wow! this is obvious!") special shell variable.


nohup something > /dev/null 2>&1 < /dev/null &

echo $! > path/to/pid

And to kill it do

kill $(cat path/to/pid)

Again, no Toolkit Required. Going back to my little daemonize utility, it'd be a nice feature to optionally tell it where you want the pid file to go.

daemonize -p path/to/pid -o path/to/log something 
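For what it's worth, that wished-for daemonize fits in a few lines of shell. This is only a sketch: the name and the -p/-o flags are hypothetical, mirroring the usage above.

```shell
# Hypothetical daemonize: run "$@" in the background, immune to HUP,
# optionally recording the PID (-p) and redirecting output (-o).
daemonize() {
  local pidfile=/dev/null logfile=/dev/null opt OPTIND=1
  while getopts "p:o:" opt; do
    case $opt in
      p) pidfile=$OPTARG ;;
      o) logfile=$OPTARG ;;
    esac
  done
  shift $((OPTIND - 1))
  nohup "$@" > "$logfile" 2>&1 < /dev/null &
  echo $! > "$pidfile"
}

# Usage: daemonize -p /tmp/worker.pid -o /tmp/worker.log rake blah
```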

And that's the news from EberTech. Join us next time when we discuss something less irritating. 

Another fun JRuby gotcha

Posted by Andrew Mon, 13 Dec 2010 07:58:46 GMT

Paperclip is a mainstay of any Rails application. The hell that was dealing with attachments in PHP was put to bed by this incredible gem/plugin. Alas, JRuby does not love paperclip the way I do. So, if you're getting

Errno::EACCES: Permission denied - Permission denied - /tmp/foo or /foo

Then you need to monkey patch FileUtils à la:

I18n 0.5.0 breaks Rails 2.3.8

Posted by Andrew Mon, 06 Dec 2010 20:38:06 GMT

So I'm working along on a crufty Rails 2.3.8 project and my validations come up as:

{{attribute}} {{error}}

A quick WTF check reveals it's late so I must be missing something obvious. No problem, I'll make a new project and see which gem has broken this. 

*5 minutes pass*

Brand new project, one model, one field, one error... same thing is happening. I'm not running a fever, so this has to be something else. After some frustrating debugging I wipe my gems and reinstall Ruby and my gems. I figured I just had way too many versions of too many gems and something was fighting with something else. Surprise, surprise: the problem is gone. The next day I'm getting the projects up and installed one by one (a nice sanity check anyway, to make sure all the gems are accounted for in the appropriate places) and suddenly the same thing happens. Rage.

So now we play the blame game. Some quick debugging into I18n shows that the interpolations aren't being done. Maybe it's an I18n thing? I wipe I18n and voila! Problem disappears. Turns out that the nice folks at I18n decided to change the interpolation syntax and the nice folks in Rails land decided to adjust.... problem is, this breaks old Rails projects instantly. For future reference make sure you're using a different I18n for old stuff.