Friday, July 27, 2007

Glorious links

BoingBoing's been on a good tear recently. Better than usual. For those of you whose eyes have glazed over, go back and re-read these posts:

Tuesday, July 17, 2007

All the Templating You Need

function replicate(data, template) {
    var indices = {};
    var i = 0, item;

    // build a direct map of column names -> row index
    while ((item = data.columns[i])) indices[item] = i++;

    // fill the template's __${column}__ placeholders for each row
    return data.rows.map(function (row) {
        return template.replace(/__\$\{(.+?)\}__/g, function (str, keyword) {
            return (keyword in indices) ? row[indices[keyword]] : '';
        });
    }).join('\n');
}

var data = {
    'columns': ['adj', 'noun'],
    'rows': [
        ['main', 'man'],
        ['leading', 'lady'],
        ['green', 'dreams']
    ]
};

var template = '<p>__${adj}__ __${noun}__</p>';
replicate(data, template);
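// => '<p>main man</p>\n<p>leading lady</p>\n<p>green dreams</p>'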


Peter Michaux has some nice ideas about keeping the JSON format DRY, so that the data returned resembles something more like a list of Python tuples. (Python is probably the single language that has most helped me understand efficient JavaScript patterns.)

Client-side transforms - converting an XML or JSON response into HTML on the client, to save server bandwidth and processing time - are a key part of modern web apps, but I'm not sure about a transform system that embeds full-blown JavaScript logic in the templates. Branching and looping can be implemented easily in transforming functions, and several templates can be plugged into each other, allowing nested data structures in the response; a rough sketch follows. (Hopefully, time permitting, I'll get to demonstrate how that works properly soon.)
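Here's a minimal sketch of that plugging-in (not the promised demonstration, and the data is invented), using the replicate function above. One template's output simply becomes a row value for another:

var inner = { 'columns': ['noun'], 'rows': [['man'], ['lady']] };
var outer = {
    'columns': ['title', 'list'],
    'rows': [['Cast', replicate(inner, '<li>__${noun}__</li>')]]
};
replicate(outer, '<h2>__${title}__</h2><ul>__${list}__</ul>');
// => '<h2>Cast</h2><ul><li>man</li>\n<li>lady</li></ul>'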

innerHTML may not be part of any standard, but there's no reason why it shouldn't be. Sometimes we need to interact with the DOM as a tree; sometimes it's more useful to unleash JavaScript's string-parsing and regex power on it.
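For instance (assuming a hypothetical container element with id 'target'):

// string-based insertion: build the markup with replicate() and hand
// the whole string to the browser's HTML parser in one shot
document.getElementById('target').innerHTML = replicate(data, template);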

Monday, July 9, 2007

One-line CSS minifier

CSS minification in one line:

$ cat sourcefile.css | sed -e 's/^[ \t]*//g; s/[ \t]*$//g; s/\([:{;,]\) /\1/g; s/ {/{/g; s/\/\*.*\*\///g; /^$/d' | sed -e :a -e '$!N; s/\n\(.\)/\1/; ta' >target.css

With comments:

$ cat sourcefile.css | sed -e '
s/^[ \t]*//g;         # remove leading space
s/[ \t]*$//g;         # remove trailing space
s/\([:{;,]\) /\1/g;   # remove space after a colon, brace, semicolon, or comma
s/ {/{/g;             # remove space before an opening brace
s/\/\*.*\*\///g;      # remove comments
/^$/d                 # remove blank lines
' | sed -e :a -e '$!N; s/\n\(.\)/\1/; ta # remove all newlines
' > target.css

Using this script, I was able to chop about 29% (10K) off our master.css file. It assumes that lines which should end in semicolons actually do end in semicolons, and it may not play well with certain freakish outdated CSS hacks. Use at your own risk, and always test thoroughly before releasing into the wild.

Saturday, July 7, 2007

The Problem with SlickSpeed

For the past month or so, there's been a lot of noise about the SlickSpeed Selectors Test Suite. Since I'm in the market for a good selector engine for Zillow, and since it's a bit of a rite of passage (a front-end web dev's equivalent of compiler authoring?), I wrote my own, to see how well I could do and to see how it stacks up to the rest.

So of course, I modified the suite (under its MIT license) to test my little attempt as well. I was pleased with my initial results, but I found the test document that comes packaged with the suite a little simplistic: not enough variety or depth of nesting, so the resulting DOM structures don't really resemble what I look at on a daily basis at work. I wanted to measure performance in the wild, so I replaced Shakespeare's As You Like It with the Home Details Page for Zillow.com, perhaps the most complex page on the site. Among other things, it includes a photo gallery, a Flash map, a Live Local-based Bird's Eye View map, a chart widget, several ads, and tables.

You can see the results for yourself here.

As it turns out, according to SlickSpeed, my engine outperforms all but 2 of the other engines on Firefox, and is the best performer on IE7.

That shifted my misgivings from the nature of the document to the construction of the queries. The given queries perform a "breadth" pass, but they don't really provide a "depth" pass covering all manner of combinations of possible selectors, so I wrote my own addition to the suite that picks random elements from the DOM and generates a matching CSS1 selector for each; a sketch of the idea follows. You can see the dynamic test suite here.
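The gist of the generator, as a rough sketch (this isn't the suite's actual code): pick a random element, then build a tag/class/ID selector that's guaranteed to match it.

function randomSelector(doc) {
    // pick a random element anywhere in the document
    var all = doc.getElementsByTagName('*');
    var el = all[Math.floor(Math.random() * all.length)];

    // an ID is the cheapest unambiguous CSS1 selector
    if (el.id) return '#' + el.id;

    var selector = el.tagName.toLowerCase();
    var cls = el.className.replace(/^\s+|\s+$/g, '');
    if (cls) selector += '.' + cls.split(/\s+/)[0];

    // prepend one ancestor to exercise the descendant combinator
    var parent = el.parentNode;
    if (parent && parent.tagName) {
        selector = parent.tagName.toLowerCase() + ' ' + selector;
    }
    return selector;
}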

Now, my Element.select engine's performance is fair at best, and it's no longer the front-runner. Unless I can iron out the kinks, I might look into Ext's engine, especially since it fits nicely into the Yahoo! UI library we use at Zillow.

On the other hand, my Element.select engine is stand-alone: it has no dependencies and provides no other services. It weighs in at a whopping 6KB (minified), but I wouldn't recommend a CSS query engine for anything short of a full-scale web application anyway.

Some thoughts, though: for reasons that should be self-explanatory, it appears that all of the CSS engine makers are optimizing for Firefox. And once again, Opera's JavaScript engine (named Linear B) and DOM implementation beat out all the rest. Performance on IE looks to be poorer all around. The Internet Explorer team certainly has its work cut out for it, not only in improving its DOM and JScript performance and its developer tools (a decent profiler and a debugger that's not attached to Visual Studio would be nice), but also in winning over a hostile developer community. I guess that's what happens when the maker of the World's Number One Browser shuts down its development team for five years.

Prototype and MooTools appear to compile the CSS selectors into XPath statements for Firefox's (and Opera's) built-in XPath evaluator (too bad IE never allowed MSXML's XPath engine to apply to the HTML DOM). While the DOM performance of these XPath-based implementations is fantastic, they also underline the difference in end-user experience between browsers. Let's hope users notice how much faster the leading non-IE browsers are in comparison; it's hard to win users over on the basis of standards compliance alone.
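The technique looks something like this hedged sketch (not Prototype's or MooTools' actual code; it handles only bare tags and tag.class parts of a descendant selector):

function cssToXPath(selector) {
    // 'div.foo p' -> ".//div[contains(concat(' ', @class, ' '), ' foo ')]//p"
    return './/' + selector.split(/\s+/).map(function (part) {
        return part.replace(/^(\w+)\.([\w-]+)$/, function (s, tag, cls) {
            return tag + "[contains(concat(' ', @class, ' '), ' " + cls + " ')]";
        });
    }).join('//');
}

function select(selector, root) {
    // hand the compiled expression to the browser's native evaluator
    var results = [];
    var query = document.evaluate(cssToXPath(selector), root || document,
        null, XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
    for (var i = 0; i < query.snapshotLength; i++) {
        results.push(query.snapshotItem(i));
    }
    return results;
}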

If nothing else, I hope my modified SlickSpeed will help CSS query engine developers focus on what's important: CSS1 queries. The time scores at the bottom of the SlickSpeed test skew heavily toward obscure pseudo-class and attribute selectors, which I for one won't use most of the time. It's the meat-and-potatoes tag, class, and ID selectors that really count.

Sunday, July 1, 2007

Sicko

As we all know, Google has great power. It's power that comes from the masses: utilizing and channeling the activities, ideas and opinions of millions via the Web. All that information and trust capital can be a powerful tool for sustaining an environment that encourages democracy. Or not.

Update:

Apparently now, for some at Google, democracy is available to the highest bidder.

"But the more important point, since I doubt that too many people care about my personal opinion, is that advertising is an effective medium for handling challenges that a company or industry might have. You could even argue that it's especially appropriate for a public policy issue like healthcare. Whether the healthcare industry wants to rebut charges in Mr. Moore's movie, or whether Mr. Moore wants to challenge the healthcare industry, advertising is a very democratic and effective way to participate in a public dialogue."

That is Google's opinion....

Web-Native UX

I'd like to address something many in the User Experience community would rather avoid, since it can interfere with deploying the latest cool widget or Ajax technique that comes down the pike: User Experience consistency on the web. While standards bodies have come into being to coordinate the development of the cornerstone technologies from which we build interfaces, and while any web developer worth her salt pays attention to valid, semantic markup, the very latest in CSS techniques, and the newest developments in unobtrusive scripting, REST, and microformats, the pace of development in web-wide usability standards has been glacial at best.

I bring this up because I've been noticing a slower adoption rate of highly usable, widget-heavy, responsive, dynamic, configurable, powerful web applications. My evidence? Purely anecdotal and completely unscientific: friends, family, and even coworkers at Zillow express frustration and antipathy toward websites over even minor perceived flaws, while clunky interfaces on other, more primitive sites are tolerated and even preferred to their more elegant "web-application-y" counterparts. With the exception of certain Google and Yahoo! applications, many powerful, innovative web apps are being ignored. In the rush to push the browser to its limits, it's easy to lose sight of the end goal: making routine tasks easier for end users, in the most straightforward way possible.

Web developers are a sensitive bunch - a profession long disregarded in the eyes of "serious" programmers. Ajax was to change all of that. And with impressive things now being done in one of the most challenging software development environments, the front end of web development has finally been able to attract some formidable talent away from server-side, OS, and game development. For the better, I should think: the Web has been gaining ground not only as a place to exchange information, but as a valid, full-fledged platform for software development. That's the idea behind all those standards out there: eventually, if we clap our hands and work hard enough, the Web might supplant desktop-native applications for all but a few specialized purposes. Soon, your computer will connect into the World Wide Continuum; your data will mingle freely with the data of billions around the world on an indistinguishable platform of desktop/web-hybrid applications; the social utopia of the Web will supplant thick-client, rugged-individualist desktop computing; the Singularity will occur; and we will all live in happy harmony with the universe.

Fact of the matter is, not nearly enough consistency and code reuse is happening on the web. To an extent, that's good: I'd like to see the web remain a wild place that functions as a laboratory as much as a controlled platform. But too often, problems are approached as though they've never been addressed before.

Sure, there are attempts at usability standards out there. But web usability is complicated, and in spite of the best attempts of several JavaScript libraries, nobody, not even Google, is as slick and consistent as OS-native applications.

When I'm designing an interface, I try to take into consideration three primary concerns:

Familiarity.
If I haven't seen it before, I don't know what it does, I don't want to use it, and I may not even recognize it as part of the UI. This becomes a huge obstacle for innovative interface development - more on that later.
Consistency.
If a widget looks more than 70% similar to something else, I will expect them to behave the same. This has ramifications beyond your website (duh).
Ease of use.
This is a big umbrella, encompassing everything from accessibility to ergonomics to "enjoyment": does my slider have a big enough click target? Can I elect to use the keyboard to control this thingie, or am I stuck with the mouse? Do I find myself repeating the same action for common tasks? Is my path through common tasks streamlined and foolproof?

Back in the dark ages, before DHTML graduated from the shadows of image rollovers, web interfaces were largely built out of browser-native form elements and links to more pages. Usability was a minimal concern, because layouts were simpler and interaction models were much less ambitious. Form interfaces were easy to manipulate, since they were largely designed around OS-native widgets and behaved comparably to their desktop-app counterparts. Links were all blue and underlined, and they all took you to a new page. Now that we've graduated from a website-based to a web-application-based web, however, many users haven't followed along. Widgets that don't look like text-input boxes can be hard to spot; a recent usability study at Zillow found as much. Part of the problem may be that many users simply haven't been exposed to enough web applications to expect anything other than straightforward input controls to respond to input events. I'd like to think that's part of our responsibility as web developers: to challenge our users to explore, experiment, and discover. But it's also our responsibility to keep the guesswork out of our interfaces.

Web usability is a moving target. I don't have answers right now to many of these questions, but I'll be discussing them as they come up. This post was to survey the territory; I hope to be able to explore aspects of this issue in greater detail soon. I'll also be writing about the technical details of implementing a large-scale interface architecture that balances web standards with friendly, usable design. I believe in powerful, flexible user interfaces, but only inasmuch as they empower the user. Gratuitous lightboxes are not welcome!