Trivial iPhone / iPod Touch delicious front end

I’ve been looking for an excuse to play with IUI for a while now, and I finally found one. I wanted a way of getting at the things that my friends and I have saved in delicious, and the native web interface isn’t very usable on the iPod touch. So let’s implement a delicious client using IUI! And let’s implement it in pure client-side JavaScript, so I don’t have to run a server anywhere!

So, here’s a trivial implementation. It’s hard-coded to look at my links and friends, but that’s not hard to fix – I just don’t care. The thing I’m happiest with is that it’s entirely client-side, pulling things from the delicious JSON API as it needs them. And it’s pretty small, too.
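The rendering side is mostly string-bashing. Here’s a minimal sketch of turning feed items into the `<li>` markup IUI styles as a touch list – the field names `u` (URL) and `d` (description) are assumptions based on the old delicious v2 JSON feed, not necessarily what my code actually uses:

```javascript
// Turn delicious JSON feed items into the <li><a ...> markup that IUI
// renders as a touch-friendly list. The field names u (url) and d
// (description) are assumptions based on the old delicious v2 JSON feed.
function itemsToListHTML(items) {
  return items.map(function (item) {
    return '<li><a href="' + item.u + '">' + item.d + '</a></li>';
  }).join('\n');
}
```

The resulting string can be shoved straight into an IUI list container’s innerHTML.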


Files associated with this page:

name size added

3.0 KB 2008-01-23 16:31

Optional catch in JavaScript

One of the things that’s been annoying me about JavaScript recently is the inability to catch only certain classes of exception, as you can in Java or Python, for instance. The try {..} catch(e) {..} block has always seemed too inclusive. But recently Ash found a crazy syntax in SpiderMonkey that lets us catch only certain exceptions.

try {
  // something that can throw
} catch (e if e.bar == 'foo') {
  // an error is only caught here if it has a 'bar' property of 'foo'
} catch (e if e.bar == 'baz') {
  // there can be different catches for different conditions
} catch (e) {
  // otherwise it's caught here. Without this block, the error would fall
  // through the 'try' and be re-thrown.
}
I haven’t bothered testing this in Internet Explorer (or any other web browser, for that matter), because I’m only interested in server-side JavaScript execution at the moment. It works in at least recent SpiderMonkey CVS and Rhino 1.6r6; I’m not sure about earlier versions.
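For engines without this extension, the portable equivalent is a single catch block that inspects the error and re-throws anything it doesn’t want to handle. A sketch, using the same hypothetical 'bar' property as above:

```javascript
// Standard-JavaScript equivalent of SpiderMonkey's conditional catch:
// one catch block that dispatches on the error, re-throwing anything
// it doesn't recognise (mimicking the fall-through behaviour).
function runGuarded(fn) {
  try {
    return fn();
  } catch (e) {
    if (e.bar === 'foo') return 'caught foo';
    if (e.bar === 'baz') return 'caught baz';
    throw e; // no matching condition: re-throw, as the extension would
  }
}
```

It works everywhere, but the conditional-catch syntax reads much more cleanly.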

Server-side JavaScript under Apache

Ash Berlin and I recently hacked together an Apache module that embeds the SpiderMonkey JavaScript engine and lets you run JavaScript code on the server as CGI scripts. We called it mod_js.

It was written as a proof-of-concept module at the last Fotango hack-day – Ash and I used my ancient mod_tt as a starting point and had a module capable of printing to the client in about 4 hours of work. Not bad. See the mod_js project page for the source code, examples and build instructions, if you want to have a look yourself.

Right now, mod_js lets you print output to the client and access the CGI parameters of the incoming request. There’s no filesystem or database access. This leads me to a dilemma. I feel there’s a use for a stand-alone server-side JavaScript programming environment, but heavier ones than this already exist. Faced with having to re-implement the entirety of the Perl DBI layer, TCP/IP networking, disk access, and so on – with all the associated security problems that would bring – I’m forced to consider mod_js somewhat of a dead end. It would probably be much more useful implemented as a mod_perl module using the Perl SpiderMonkey bindings to embed JavaScript, so I could use all the existing Perl infrastructure to support my scripts. It would be heavier, but more hosting companies offer mod_perl than are willing to compile new Apache modules, and it could be run as a stand-alone daemon more easily than something written in C.

Given all of this, mod_js is still a nice proof of concept, and could be turned into a real server-side programming language. I also quite enjoy working in C every so often. It reminds me why I don’t do it more.

Update: Ash gave it a real web page. Crazy.

Yet more E4X irritations

I’m finding E4X to be one of those weird technologies that is alternately utterly wonderful and incredibly irritating. The ability to treat XML data as any other JavaScript data structure allows very fast app development and messing around, but every so often I find myself amazed at how awful the syntax is.

Today’s irritation is about E4X attributes.

// a perfectly normal E4X object.
var myXML = <xml foo="bar">content</xml>;

// the attribute with value 'bar'
var myAttribute = myXML.@foo;

Easy. I love this stuff. Unfortunately, Zimki, my company’s product, uses uneval to store complex objects, and the myAttribute variable there would count as one. uneval won’t produce JSON, but it does produce a string that, when run through eval, will probably produce the original data structure, and we store that string in a database to persist objects.

Not so for E4X nodes. Playing in the SpiderMonkey JS console,

js> uneval( myXML );
<xml foo="bar">content</xml>

js> uneval( myAttribute );
bar

The first one is fine. That string will eval nicely back to the original E4X object. But the ‘bar’ there isn’t valid JavaScript – eval won’t restore the original object. In fact, had the original XML been something like:

<xml foo="delete_all_zimki_data()">bar</xml>

and we’d tried to use eval/uneval to store this XML, we’d have executed the attribute as JavaScript. Ick.

Unusually, Rhino handles this much better (normally I find Rhino lags in features..):

js> uneval( myAttribute );
"bar"

Not really an attribute node any more, but at least it’s valid JavaScript and won’t destroy my server.

I don’t even have a good solution for this. Right now I’m fudging E4X nodes in the storage engine, but I really feel that attribute nodes should uneval to something a little more sensible. Perhaps I’ll be able to produce a patch to SpiderMonkey, if I have time..
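One pragmatic dodge – a sketch, not what Zimki actually does – is to coerce attribute nodes to primitive strings before serializing, so the stored form is a quoted literal that evals back to an inert string. JSON.stringify is used here purely for illustration (an assumption: it wasn’t in 2007-era SpiderMonkey, so you’d need your own quoting there):

```javascript
// Coerce an attribute-like value to a primitive string before storing it,
// so the serialized form is a quoted, escaped literal rather than bare
// text that eval would execute. JSON.stringify handles quoting/escaping.
function safeAttributeSource(value) {
  return JSON.stringify(String(value));
}
```

With this, even an attribute holding `delete_all_zimki_data()` round-trips through eval as a harmless string.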

E4X – A native XML datatype for JavaScript

I gave a talk on E4X. In a just and decent world, I wouldn’t have to write a blog entry on this, because there would be a nice front page to this site that listed all the recent things I’ve done, with the option to subscribe to RSS (or whatever) feeds of various subsets. But I’ve been too lazy to write that so far, so I’ll just link to the talk here until I get django to do what I want.

E4X is a lovely extension to JS (well, compared to messing with the DOM – and it’s in core, so embedded users get it too), despite its crazy inconsistent syntax and annoying brokenness in Firefox. Fortunately, I don’t have to care about browser-based JS implementations, so I get to use it, and you don’t.

JavaScript strings – a followup

Having played around with the JavaScript string type some more, I think I understand why it acts as it does. I’m a Perl monkey normally, so I’m not used to the concept of immutable strings, but JavaScript strings are immutable. Playing with the === operator (approximately, ‘is this the same object’) gives:

js> "a" === "a";
true
js> "a" + "b" === "ab";
true
js> "ab".replace(/./, "c") === "cb";
true

js> new String("a") === new String("a");
false
If strings were to magically upgrade themselves to objects, they’d change behaviour – previously equivalent strings would suddenly not be equivalent. Likewise, suppose this worked:

var a = "string";
var b = "string";
a === b; // true
a.foo = 1;

Should a still be equivalent to b? If not, a clearly isn’t immutable, as we’ve changed it. But if it is, then we’ve changed b at a distance – it’s grown a foo attribute.

Still all very annoying, of course, but I understand why now.

JavaScript string weirdness

Recently, I mentioned a peculiar difference between uneval and toSource. Specifically (using the SpiderMonkey JS console):

js> uneval("");
""
js> "".toSource();
(new String(""))

"" and new String("") are different types of object. The first is the basic string type, and only really has a value. The second is a full Object that happens to have a value. However, it turns out that if you treat a basic string as an Object – say, by putting ‘.’ after it in an expression – the SpiderMonkey runtime will implicitly promote the string to a String. Hence, "".toSource() promotes the string, then calls toSource on the new String object.

Annoyingly, the String Object doesn’t hang around, it’ll get thrown away as soon as you’re done with it. This leads to the weird case that you can set attributes on a basic string type (because it’ll get promoted to an Object, and Objects have attributes) but they don’t stay set (because the Object you’ve set them on gets thrown away as soon as the set call finishes).

By the way, all of this applies very specifically to the current CVS trunk SpiderMonkey. I don’t know what most web browser engines do with strings, so don’t assume this applies in, say, Internet Explorer. But I’d be interested if someone wants to find out and tell me…

uneval() does not produce JSON

More playing with JSON and SpiderMonkey has revealed yet another incredibly annoying fact (I hate those guys). SpiderMonkey provides a lovely uneval() function that does the exact opposite of eval() – it turns JS objects into strings. It works on almost everything, and makes life very, very nice. There’s also Object.toSource(), which does something similar (but not the same – try uneval("") vs "".toSource()).

But the strings that uneval produces are not valid JSON, as I had been assuming. I’d been getting steadily more worked up at all the JSON parsers in the world, refusing to parse things that are clearly valid JavaScript, until eventually I went and looked at the spec, which doesn’t list ' as a valid string delimiter. And guess which delimiter uneval produces? Yay. So all the parsers are fine, and it’s just SpiderMonkey that’s broken.

Fortunately, MochiKit provides a nice serializeJSON() function.
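The distinction is easy to demonstrate with any strict JSON parser – here the JSON.parse built into modern engines, which is an assumption for 2007-era SpiderMonkey:

```javascript
// JSON requires double-quoted strings; the single-quoted form that
// SpiderMonkey's uneval emits is valid JavaScript but invalid JSON.
var ok = JSON.parse('"hi"');   // fine: double quotes
var singleQuotedIsJSON = true;
try {
  JSON.parse("'hi'");          // uneval-style single quotes
} catch (e) {
  singleQuotedIsJSON = false;  // rejected, per the JSON grammar
}
```

Both are perfectly good JavaScript string literals, which is exactly why the mismatch is so easy to miss.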

further notes on JSON

Off I go, making random unsubstantiated claims about the danger of using JSON with non-ASCII characters. This called for a Test. So I wrote one. Visit my JavaScript unicode test page and see how your browser interprets external JavaScript files – I serve an ‘é’ using JavaScript to the page via 3 methods and 2 character set encodings, and try to render them all.

My conclusions from some limited testing? Ouch. You can’t include a JavaScript file and expect the client to interpret it properly, unless you control both the server serving the JavaScript and the HTML page requesting it, and can make sure that they’re both in the same character set. Alternatively, you can escape all non-ASCII characters in your JavaScript files using the \xXX or \uXXXX notations, which seems to work everywhere I’ve tried, but also seems like a pathetic work-around. And needing a work-around only in the non-obvious case means that no-one will actually do it, because no-one ever seems to bother testing with non-ASCII (do you see any on the JSON examples page, for instance?).

However, requesting JSON using XMLHttpRequest seems to do the Right Thing in every browser I’ve tested, including those that interpret included JavaScript wrongly. So if you’re using JSON as an RPC transport instead of XML, for instance, it looks safe – from a character-set point of view, at least.


In the bad old days of web 1.7ish, the cool thing to do for dynamic web applications was to generate HTML snippets on the server-side, pull them into your app using XMLHTTPRequest, and shove them bodily into a DIV on the page somewhere. “generating DOMs is hard” was the excuse, “the server already has a nice templating language”, etc, etc. And this was Good.

Nowadays, of course, this is Evil. Pulling HTML across the wire? It’s inefficient! Even worse, we want to send XHTML to the client, because it’s cool, but then you can’t treat your HTML like a string, you really do have to mess with DOMs. Fortunately, cute tools like MochiKit make it really easy to create DOM nodes, and provide really nice tools for making ajax requests. We see the rise of JSON – serializing your raw data on the server into a JavaScript-evaluatable string and sending that across the network, then building the DOM on the client side based on that data. And this is Good.

Soon, I expect, people will run up against annoyances in JSON. For instance, I bet there are lurking character set issues. It’s also not very portable – if I go to all the trouble of writing and exposing interesting functions of my web application in machine-readable ways, I’d like to be able to access that data using things other than JavaScript. Sure, there are modules like JSON that will both create and parse these things, but throwing actual XML around seems much neater to me.

We have a strange mix of XML and JSON APIs at the moment, with some toolkits even making it easy to ask for either. But I consider XML far superior for this sort of thing, if only because the character set issues (my personal bugbear) are properly solved with XML. Its disadvantage is that the JavaScript tools for dealing with the DOM are very annoying, but once we have something like XML::Simple or xmltramp for JS, and reading incoming XML is almost as easy as reading JSON, we can drop JSON and use nice sensible RPC mechanisms. JSB is a lovely example of this, for instance. And this will be Good.