The non-world non-wide non-web

I spent a day at the recent W3C workshop on web apps and compound documents. Due to vacation, that day was the second, so I missed the chance to hear JavaScript praised as the worst invention of all time.

The adolescent sniping and general irrelevance continued on the second day, however. The sad fact is that the W3C is not concerned with the world wide web as most users and authors actually experience it. Rather, its focus for a while now seems to be on vertical tool/plugin and service/cellphone markets, where interoperation is not a requirement, content authors are few and are paid by the vertical service provider, and new standards provide livelihoods and junkets for a relative handful of academics, standards-body employees, and big-company implementers.

Evidence of the vertical nature of the new standards? There are only a few hundred tests for the SVG W3C recommendation. That’s several decimal orders of magnitude short of what is required for even surface coverage of the spec. Lately, when Hixie hears a claim about an interesting SVG feature, he writes a testcase, and Adobe’s plugin too often fails it, although I am sure Adobe’s SVG tooling produces content that works with Adobe’s own plugin. Interoperation is a joke.
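
To make the coverage problem concrete, here is the sort of minimal testcase I have in mind (an illustrative sketch, not one of Hixie’s actual tests): exercise one claimed feature and make pass or fail visible at a glance.

    <svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
      <!-- Illustrative only: if linear gradients work per the spec,
           this square renders solid green; any other result is a FAIL. -->
      <defs>
        <linearGradient id="pass">
          <stop offset="0" stop-color="green"/>
          <stop offset="1" stop-color="green"/>
        </linearGradient>
      </defs>
      <rect width="100" height="100" fill="url(#pass)"/>
    </svg>

Multiply tests like this across every feature, attribute, and interaction the recommendation defines, and you see how far a few hundred tests fall short.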

Real browser vendors, who have to deal with the ugly web as it is, know better. The dream of a new web based on XHTML + SVG + SMIL + XForms is just that: a dream. It won’t come true no matter how many toy implementations there are (including Mozilla’s; we’ve supported XHTML for years). Long before the W3C gets compound documents working on paper (having missed the chance with SVG 1.0 and 1.1, which introduce ambiguities and conflicts with CSS), XAML and its kin will leak onto the public web.

What matters to web content authors is user-agent market share. The way to crack that nut is not to encourage a few government and big-company “easy marks” to go off on a new de jure standards bender. That will only add to the mix of formats hiding behind firewalls and threatening to leak onto the Internet.

The best way to help the Web is to incrementally improve the existing web standards, with compatibility shims provided for IE, so that web content authors can actually deploy new formats interoperably.

What has this to do with Mozilla’s roadmap? Not much, which is why, apart from the HTML, CSS, DOM, and SVG standards we support, you probably won’t hear much more about the W3C here. But Mozilla is joining with Opera and others to explore the sort of incremental improvements to HTML that we proposed at the workshop. I expect the resulting specs and implementations to play a significant part in the roadmap.

/be

Action and Reaction

Miguel nails the key threats in XAML/Avalon/whatever: fancy graphics, widgets, and layout; easier XML-based authoring; better “managed code” model for when you have to hack; and a web-like deployment model with sandboxing for security.

The deployment model is a huge advantage over conventional app development. Web browsers and the Flash Player have benefited from it, even as they’ve been held back by HTML and plugin stagnation. You can see Macromedia trying to escape the “plugin prison” now, and they’ve got a good chance of succeeding, thanks to the Flash Player’s ubiquity.

The challenge for Mozilla and other open source projects is not to “react to Microsoft”, any more than it is to “react to Macromedia”. MS and MM are reacting to the same fields of force that govern everybody. The prize we seek is a better way to develop common kinds of networked, graphical applications.

People are using web standards to build web apps, and running into limitations in the deployed standards and speedbumps in the standardization process. Other people are developing desktop apps using, e.g., Glade and GTK+, but wanting web-style deployment and even cross-platform portability. We should make it easy to do advanced, native-looking UI and fancy graphics in a web-like way, and portably.

This doesn’t require building an IDE, although who could object to one? The best-case success story for any open-source advanced layout/rendering/GUI-app platform would use the same network effects and low-entry-cost structure that helped the web explode to five billion or so public pages (but without the tag soup this time, please). People should be able to copy and mutate cool content. You should be able to edit by hand, even if an IDE generated the content, and have everything still work (maybe your pretty formatting would even be preserved; what a concept!).

To make a difference on the web requires distribution, ideally in the form of support for new standards in all browsers, including IE. That’s not going to happen with Mozilla code unless someone makes an ActiveX plugin out of Gecko and distributes it widely. Fortunately, we have such a plugin; distribution will be the hard part. But even without Mozilla, IE6’s behaviors (HTML Components, or HTCs) allow a lot of extensibility. What if the minority browsers started incrementally improving HTML, DOM, and the rest, with an emulation layer for modern IE thin enough to download on demand?
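
To sketch what such an emulation layer might look like (the file name, class name, and widget here are hypothetical, not an actual shim): the page binds a behavior to elements via CSS, and the HTC file upgrades them with script, in IE only.

    <!-- In the page's stylesheet: bind the (hypothetical) behavior
         to a class; non-IE browsers simply ignore this property. -->
    <style type="text/css">
      .collapsible { behavior: url(collapsible.htc); }
    </style>

    <!-- collapsible.htc: runs only in IE. -->
    <PUBLIC:COMPONENT>
    <PUBLIC:ATTACH EVENT="oncontentready" ONEVENT="init()" />
    <SCRIPT LANGUAGE="JScript">
    function init() {
      // "element" is the HTC global bound to the host element.
      element.attachEvent("onclick", toggle);
    }
    function toggle() {
      // Show or hide the element's first child on click.
      var body = element.children[0];
      body.style.display = (body.style.display == "none") ? "" : "none";
    }
    </SCRIPT>
    </PUBLIC:COMPONENT>

The point is not this particular widget but the pattern: the minority browsers ship the feature natively, and IE users pay only a small, one-time download for the behavior file.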

Another requirement for web-like deployment of rich apps is a sandbox security model that allows trust to be delegated only to critical sections in the app. You shouldn’t have to trust a big pile of compiled C++ *or* compiled/interpreted JS/JScript/C#. Object signing is not enough; what’s needed is a way to minimize the “trusted computing base”: the critical sections in the app that actually need privilege. Also, those sections should automatically downgrade on exit (e.g., on return from the privilege-enabling method).
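
Mozilla’s signed-script model already approximates the downgrade-on-exit part. Here is a sketch of the stack-frame-scoped pattern I mean (the function and its file-reading task are made up for illustration):

    // Hypothetical example; real use requires a signed script or an
    // explicit user grant. In Mozilla, privilege is enabled per stack
    // frame and reverts automatically when that frame exits.
    function readLocalConfig() {
      netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");
      var file = Components.classes["@mozilla.org/file/local;1"]
                           .createInstance(Components.interfaces.nsILocalFile);
      // ... do the minimal privileged work here ...
    } // privilege downgrades automatically on return

    readLocalConfig(); // the caller itself never holds the privilege

Only the small critical section holds the privilege, and only while it is on the stack; everything around it stays in the sandbox.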

I hope to blog more on these and related topics, as time allows.