The latest update, running the webserver with nohup, seems to have fixed my server issues. The server did crash once soon after I changed the script, and I couldn't track down the cause, but it's been running non-stop for a few days now.

Finally I can put this to rest. As a side benefit, I also got more efficient tracking, more detailed error messages in some instances, and less noisy ones in others (for missing files, for instance, it now just spits out 'Not found: ' and the file name instead of a whole stack trace).

Moment of Truth #3

After adding better logging to the webserver, I determined that no error was causing the server to close. I had been starting the service with node webserver &, but I think I still needed nohup so it would survive the shell session ending.

We shall see. My nohup'ed webserver is now running, and that's how I'll start it from now on, whether the random deaths continue or not. It's just the better way to start it.

Moment of Truth #2

After upgrading to Node.js 0.10.21 and implementing domains, I was still getting errors. Since then, I've added code to listen for the response 'close' event, as well as the server 'clientError' event, in an effort to track down the issue. I think I may be on to something. At worst, I'll be able to take the issue to Stack Overflow or GitHub in hopes that it gets fixed; at best, logging those events will lead me to the fix myself. Here's hoping.

The Moment of Truth

Every morning I have an email from Pingdom telling me my site went down overnight. I upgraded to node.js 0.10.20 a few weeks ago to take advantage of some other bug fixes and optimizations, and it shows in the Google Analytics page load times (they're all 0 seconds).

I have had no luck tracking down the cause, but I read about how to prevent a server error from bringing down the site. I've implemented the suggested solution using node.js domains, and we will see what happens tomorrow.

It might be the same thing, because when I log into the server, node is still running my webserver; it just appears the socket was destroyed... We shall see. Wish me luck!

Flickr Integration Complete!

It really didn't take too long! What I outlined in the previous post is exactly what it does:

  • Check Flickr for my photos tagged with "jtccom" and download the 2048-pixel-wide version of each
  • Write the JS + Node part of my website which shows, in a queue-like manner, an image that hasn't been processed
  • Clicking on the image shows a box around the point I clicked, indicating where it will be cropped according to the sizes defined for the header
  • Send a request to the server, which uses GraphicsMagick to crop it and then create differently scaled images based on the sizes defined for the site

Here are some screenshots:

At first, you are prompted with the next image which hasn't been processed

Next, you select the point where you want to crop. Sizes are pre-determined, so there's no dragging or resizing a bounding box; it knows all the target sizes and the size of the image, and just does it for you.
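The crop math itself is simple: center a fixed-size box on the click point and clamp it so it stays inside the image. A hypothetical version of that calculation (computeCropBox is an illustrative name, not the site's actual function):

```javascript
// Center a cropW x cropH box on the click point, clamped to the image bounds.
function computeCropBox(clickX, clickY, cropW, cropH, imgW, imgH){
    var x1 = Math.round(clickX - cropW / 2);
    var y1 = Math.round(clickY - cropH / 2);
    // clamp so the box never extends past an edge of the image
    x1 = Math.max(0, Math.min(x1, imgW - cropW));
    y1 = Math.max(0, Math.min(y1, imgH - cropH));
    return { x1: x1, y1: y1, x2: x1 + cropW, y2: y1 + cropH };
}
```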

Click the process button when you've made your crop selection

Wait a second or two while Node.js and gm (GraphicsMagick) process your photos.

And here's the GraphicsMagick code in node.js, which is really helpful; I was even able to get it to work on Windows:

this.handlePost = function(site, query, finishedCallback){
    var tmpdir = path.normalize(site.path + site.config.tempDownloadFolder);
    var processeddir = path.normalize(site.path + site.config.processedFolder);
    var form = query.form;
    var sizes = site.config.imageWidths;
    var heights = site.config.imageHeights;
    var filename = form.image.substring(form.image.lastIndexOf("/") + 1);
    var fileParts = filename.split(".");

    sizes.sort(function(a, b){ return b - a; });
    heights.sort(function(a, b){ return b - a; });

    var x1 = parseInt(form.x1, 10), x2 = parseInt(form.x2, 10),
        y1 = parseInt(form.y1, 10), y2 = parseInt(form.y2, 10);
    var w = x2 - x1, h = y2 - y1;

    // process first size, use that for base of resizes
    var croppedPath = processeddir + fileParts[0] + "-" + sizes[0] + "." + fileParts[1];
    gm(tmpdir + filename).crop(w, h, x1, y1).write(croppedPath, function(err){
        var sync = new SyncArray(sizes);
        sync.forEach(function(size, index, array, finishedOne){
            if (index > 0){
                var scaled = processeddir + fileParts[0] + "-" + size + "." + fileParts[1];
                gm(croppedPath).resize(size, heights[index]).write(scaled, function(err){
                    finishedOne();
                });
            }
            else finishedOne();
        }, function(){
            finishedCallback({
                content: JSON.stringify({ success: true }),
                headers: { "Content-Type": "application/json" }
            });
        });
    });
}

Again, it uses my custom-built webserver and the SyncArray object that I also wrote.
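The SyncArray source isn't shown in this post, but judging by how it's used above, a minimal stand-in might look like this (inferred from usage; the real implementation may differ): run an async worker over each item in order, waiting for finishedOne() before moving on, then call the done callback.

```javascript
// Minimal SyncArray-like helper, inferred from how it's used above.
function SyncArray(array){
    this.array = array;
}

// worker(item, index, array, finishedOne) is called for each item in order;
// the next item is not started until the worker calls finishedOne().
SyncArray.prototype.forEach = function(worker, done){
    var self = this, index = 0;
    function next(){
        if (index >= self.array.length) return done();
        var i = index++;
        worker(self.array[i], i, self.array, next);
    }
    next();
};
```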

The Flickr code was pretty simple too. Here it is, accessing the Flickr API (no auth required) with Node.js:

var http = require("http"),
    querystring = require("querystring"),
    SyncArray = require("syncarray").SyncArray;

this.getPhotosByTag = function(apiKey, user, tag, callback){
    var self = this;
    var method = "flickr.photos.search"; // searches by user_id + tags
    var qs = {
        method: method,
        api_key: apiKey,
        user_id: user,
        tags: tag,
        format: "json",
        nojsoncallback: 1
    };
    var req = { host: "api.flickr.com", path: "/services/rest/?" + querystring.stringify(qs) };
    http.get(req, function(res){
        var json = "";
        res.on("data", function(d){
            json += d;
        }).on("end", function(){
            var photos = JSON.parse(json);
            if (photos.length > 0){
                var sync = new SyncArray(photos);
                sync.forEach(function(photo, index, array, finishedOne){
                    self.getPhotoSizes(apiKey, photo.id, function(sizes){
                        photo.url = sizes.filter(function(s){
                            return s.label == "Large 2048";
                        })[0].source;
                        finishedOne();
                    });
                }, function(){
                    console.log("url = " + photos[0].url);
                    callback(photos);
                });
            }
            else callback([]);
        });
    });
}

this.getPhotoSizes = function(apiKey, photoId, callback){
    var method = "flickr.photos.getSizes";
    var qs = {
        method: method,
        api_key: apiKey,
        photo_id: photoId,
        format: "json",
        nojsoncallback: 1
    };
    var req = { host: "api.flickr.com", path: "/services/rest/?" + querystring.stringify(qs) };
    http.get(req, function(res){
        res.setEncoding("utf8");
        var json = "";
        res.on("data", function(d){
            json += d;
        }).on("end", function(){
            var sizes = JSON.parse(json).sizes.size;
            console.log(sizes.length);
            callback(sizes);
        });
    });
}

The next step is to update the front-end CSS to include all sizes of a given image, and switch between them using respond.js and media queries. That should be simple, but it's late and I'm going to bed!!

Enjoy! Leave a comment.

Prettier Site

I updated the site by incorporating a random picture that I've taken, pre-cropped, into the header.

As a developer and overall lazy person, finding and cropping 10 images to the size I want was way too much work. I will have to rectify this. I got a Flickr API key. Here are my plans:

  1. Obtain Flickr API Key - Done
  2. Take Pictures
  3. Upload them to flickr the normal way
  4. Add a tag to them specifying that they are suitable for the website, like jtccom
  5. Write a service that checks for new photos of mine with that tag, downloads them, and flags them as new
  6. Write an admin interface that shows new images and, for now, lets me click the important part of an image so it can crop to the size I need around that point. Sizes will be pre-determined (3 sizes for the 3 different breakpoints I have defined in my responsive design; not much to it)
  7. Continuously have an inflow of beautiful headers that will display on my web page

It shouldn't be that bad. Nothing I've mentioned above has me too concerned. It should be fun! For now, though, I have 10 canned images that don't populate directly from Flickr. They are random, so you may have to refresh more than just 10 times to see them all. Enjoy!

Tag List Added

I recently went about aggregating the tags used on my posts to create a sort of tag cloud. I never liked the display of tag clouds, so I just list them out in order of occurrence, with the most frequent showing first.

This should help me get some traffic. Node.js and MongoDB are super fast. It doesn't even stutter when loading the site, across 500+ posts. Actually, I have no idea how many there are.  Super fast.
Here's the code, which pretty much finishes in under 5 seconds:
var db = require("../db");

this.tagCloud = [];

this.initialize = function(site, callback){
    var self = this;
    while (self.tagCloud.pop()); // empty the array in place
    db.getPosts(site.db, {}, -1, -1, function(posts){
        var tags = {};
        posts.forEach(function(post){
            if (post == null) return;
            for (var i = 0; i < post.tags.length; i++){
                if (tags[post.tags[i]] == null)
                    tags[post.tags[i]] = { tag: post.tags[i], count: 0 };
                tags[post.tags[i]].count++;
            }
        });
        for (var tag in tags){
            if (tags[tag].count > 8) // arbitrary limit so we don't list like 200 tags with 1 post each
                self.tagCloud.push(tags[tag]);
        }
        self.tagCloud.sort(function(a, b){ return b.count - a.count; });
        callback();
    });
}

Google Keep is not fast enough

I installed Google Keep on all of my devices and as an extension in Chrome.  Between the time I come up with an idea and the time Chrome opens the application, I have lost the idea already. Which is funny, since it's not slow or anything. It just takes 2 seconds to click on the apps thing, click Keep, and start typing. I think the problem is my brilliant idea retention is not very long. I've clocked it at about 1.7 seconds. Google, you need to get this down to nothing flat or I'm going to lose a lot of great ideas! :)

Responsive Design

I updated the site a bit today to include a very minor subset of responsive design features. You can view it on your phone, tablet, or PC (or just open it on your PC and resize the width of the window to see what's going on!). I use modernizr and respond.js.


A Chat with a Coworker

Hilarious-to-me stuff bolded 

Mark Coworker: well, do they at least map to properties that have well written property names?
Me:: their property names are their querystring keys
Me:: QS.rdb = 1
Me:: javascript man, it's awesome :)
Mark Coworker: yeah... that's what i avoid with those strongly typed querystring objects of mine.
Mark Coworker: too many query string keys that don't make any sense.
Me:: strongly typed is weakly handwritten
Me:: :P
Me:: just tried to come up with something that you couldn't possibly have a comeback for, and which was cleverly punned
Mark Coworker: i don't understand how i'm the only person here who seems to have an issue with the hard-coding of non-sensical query string keys all over the place.
Me:: personally i depend on url rewriting so that the client doesn't see the querystring names... if the technology allows it easily
Me:: so i don't use querystring in my node.js web apps
Me:: i have a very nice helper method that will look for a unique key in the database... so if you passed it the text, "I dislike Mark Coworker's Strongly Typed Querystrings", with the table and the field (mongodb doesn't know of such things by those names), it will take the whole string, lowercase it, remove non-characters, replace spaces with hyphens, then look to see if that's unique. if not, it will have add an incremented value to the end and find

Mark Coworker: =P
Mark Coworker: sorry. ddin't see IM alert until the last message.
Mark Coworker: trying to get CLIENT_REPLACED build ready.
as the unique key to use in the URL for rewriting
Me:: heh
Mark Coworker: linky-no-worky
Me:: post 1253 was about wearing sweatshirts on 80 degree days  (EDITOR: side note, the day this chat took place, it was 80 degrees, early October, as we left for lunch and he had his sweatshirt on)
Me:: i mispelled your name in the url anyway
Mark Coworker: you did!
Mark Coworker: even after fixing it, the url still doesn't work.
Me:: yeah, it's down for maintenance, need more database space
Me:: too many posts
Mark Coworker: well, it's not his fault that there are some many things wrong with the world. such as the lack of database space on servers.
Mark Coworker: I'm through half my bottle of Purell
Me:: damn
Mark Coworker: it's a small traveller sized one though.
Me:: i've used half a bottle in my lifetime
Me:: post 1255, germophobe
Mark Coworker: post 1255: half of your office getting sick right before you're hosting a EVENT that took up TIME_SPAN of your life and DOLLAR_AMOUNT dollars to get ready for.
Me:: post 1256: wants editorial authority on site which talks badly about him

Note:  Things like TIME_SPAN and DOLLAR_AMOUNT were editorialized from the original chat so that stuff like that doesn't get public...