Too many of the JS/DHTML toolkits have the “you must use our APIs for everything, including how you manipulate strings” disease. Some are cool, for example TIBET, which looks a lot like Smalltalk. Some have real value, e.g. Oddpost, which Yahoo! acquired perhaps as much for its DHTML toolkit as for the mail client built on that toolkit.
Yet no JS toolkit has taken off in a big way on the web, probably more on account of the costs of learning and bundling any given API, than because of the “you must use our APIs and only our APIs” problem. So people keep inventing their own toolkits.
Inventing toolkits and extension systems on top of JS is cool. I hoped that would happen, because during Netscape 2 and 3 days I was under great pressure to minimize JS-the-language, implement JS-the-DOM, and defer to Java for “real programming” (this was a mistake, but until Netscape hired more than temporary intern or loaner help, around the time Netscape 4 work began, I was the entire “JS team” — so delegating to Java seemed like a good idea at the time). Therefore in minimizing JS-the-language, I added explicit prototype-based delegation, allowing users to supplement built-in methods with their own in the same given single-prototype namespace.
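A minimal sketch of what that single-prototype extension model allows (capitalize is an illustrative method name, not a built-in):

```javascript
// Supplement a built-in prototype with a user-defined method.
// String.prototype.capitalize is an illustrative name, not a standard method.
String.prototype.capitalize = function () {
  return this.charAt(0).toUpperCase() + this.slice(1);
};

// Every string now delegates to the new method through its prototype,
// in the same namespace as the built-in methods.
console.log("hello".capitalize()); // prints Hello
```

Toolkit authors build whole libraries out of exactly this kind of supplementation, which is both the power and the peril of a shared single-prototype namespace.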
In listening to user feedback, participating in ECMA TG1 (back during Edition 1 days, and again recently for E4X and the revived Edition 4 work), and all the while watching how the several major “JS” implementors have maintained and evolved their implementations, I’ve come to some conclusions about what JS does and does not need.
One item on the list: people today must write setTimeout chains, and explicit control block state machines, instead of simply writing loops and similar constructs that can deliver results one by one, suspending after each delivery until called again.
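The kind of suspend-and-resume construct I mean can be sketched with generator-style syntax (hypothetical so far as ECMA-262 is concerned; the yield keyword marks each suspension point):

```javascript
// A sketch: a loop that delivers results one by one, suspending after
// each delivery until the consumer asks for the next value.
function* fibonacci(limit) {
  let a = 0, b = 1;
  while (a < limit) {
    yield a;              // deliver one result, then suspend here
    [a, b] = [b, a + b];
  }
}

// The consumer pulls values at its own pace; no setTimeout chain,
// no hand-rolled state machine tracking where the loop left off.
const results = [];
for (const n of fibonacci(10)) {
  results.push(n);
}
console.log(results); // [ 0, 1, 1, 2, 3, 5, 8 ]
```

The loop's local variables are the state machine; the language, not the programmer, keeps track of where to resume.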
That’s my “do and don’t” list for any future JS, and I will say more, with more specifics, about what to add to the language. What to fix is easier to identify, provided we can fix compatibly without making a mess of old and new.
Here are the three most-duplicated bug reports against core language design elements tracked by Mozilla’s bugzilla installation:
I argue that we ought to fix these, in backward-compatible fashion if possible, in a new Edition of ECMA-262. If we solve other real problems that have not racked up duplicate bug counts, but fail to fix these usability flaws, we have failed to listen to JS users. Let’s consider these one by one:
Regular expression literals are singletons: each evaluation of a literal yields the same shared object. Because of the g (global) flag and the lastIndex property, these singleton literals make for a pigeon-hole problem, and a gratuitous inconsistency with other kinds of “literals”. To fix this compatibly, we could add a new flag, although it would be good to pick a letter not used by Perl (or Perl 6, which fearlessly revamps Perl’s regular expression sub-language in ways that ECMA-262 will likely not follow).
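Even with a single regexp object shared deliberately, the lastIndex pigeon-hole makes for surprises; a small sketch:

```javascript
// A g-flagged regexp carries lastIndex state between calls.
const re = /\d+/g;

console.log(re.test("abc 42")); // true; re.lastIndex now points past "42"
console.log(re.test("abc 42")); // false! leftover lastIndex skips the match
```

Now imagine the two calls live in different functions that merely happen to share the same regexp literal, and you have the singleton pigeon-hole in its natural habitat.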
The Date.prototype.getYear method is a botch and a blight, the only Y2K bug in Mozilla-based browsers that still ships, for compatibility with too many web sites. This bug came directly from java.util.Date, which was deprecated long ago. I’d like to get rid of it, but in the mean time, perhaps we should throw in the towel and emulate IE’s non-ECMA behavior (ECMA-262 did standardize getYear in a non-normative annex).
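A quick illustration of the botch: the Annex B behavior returns the year minus 1900, while IE, as I recall, returns the full four-digit year for dates outside the 1900s. The portable answer is getFullYear:

```javascript
// Annex B getYear: year minus 1900, a Y2K trap inherited from java.util.Date.
const d = new Date(2005, 0, 1);

console.log(d.getYear());     // 105 in Annex B implementations, not 2005
console.log(d.getFullYear()); // 2005; the method everyone should use instead
```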
There are other bugs in JS1 to fix, particularly to do with Unicode in regular expressions, and even in source text (see the infamous ZWNJ and ZWJ should not be ignored bug). More on these too, shortly, but in a wiki, linked with informal discussion here.
This roadmap update has been much-delayed, as we have juggled priorities and sweated security releases on the AVIARY_1_0_1 branch. Sorry for the delay; I will keep the roadmap up to date much more frequently from now on.
The new roadmap restarts the document with as little repeating boilerplate as possible. Highlights:
We will construct a detailed schedule for the rest of the release. Until we have a more “real” schedule, the roadmap will be fuzzy about dates. Apart from the absolute priority that Firefox 1.1 be able to update itself in small background-downloaded increments, and that its security and quality be at least as high as Firefox 1.0.x, we have already enabled new platform features such as SVG and <canvas>. These new richer-graphics-for-the-web features are in usable shape, and they deserve testing and experimental usage in XUL and even HTML. We want developer feedback, which we will incorporate into future releases.
In order to help both our XUL platform and (more important) the open-standards-based web to compete with next-generation OSes and their proprietary frameworks, we are rearchitecting Gecko’s graphics subsystem. Here is a picture of Gecko emphasizing its graphics infrastructure as of the 1.8 milestone and Firefox 1.1:
Here is where we are headed in 1.9:
We are joining forces with the Cairo Graphics project (this will be no surprise to anyone following the project, in particular roc’s blog). Together, we can move faster and on more platforms, toward a hardware-accelerated 2D future, and beyond.
As with any large rearchitecture, there will be bumps along the way. But we are not going to rewrite the world at once (never again!). We aim to make changes in smaller increments that can land during the 1.9 alpha cycles. So the 1.9 schedule, which I won’t even bother to depict yet, will have a good number of alphas.
Anyway, this is a blog item’s worth of roadmap content, which will show up in a more polished form in the main roadmap soon. Your comments are welcome.
This happens to resemble an early Avalon demo (I can’t find a link to it, but I believe there was a video on the web some time after the 2003 Microsoft PDC), which just shows how the web can and will keep up with proprietary eye-candy — at least to a “good enough” degree. What’s good enough? Whatever it takes, including 3D graphics, in due course — but always incrementally developed and deployed, with web and XUL app/extension developers giving feedback and guidance.
Web incrementalism (feedback-guided “little bangs” instead of one grand-planned “big bang”) was the leitmotif of shaver’s keynote, and this meme reverberated throughout the conference. It seems even XHTML 2 is adapting to “mindshare” (similarity to the web-as-it-is, if not actual backward compatibility).
That’s a hopeful sign, but don’t hold your breath for XHTML 2 on the web any time soon. It was amusing to hear that one of the design aims was less scripting, because scripting is hard for authors and constrains user-agent choice — when all user-agents will need major revision to support XHTML 2, which includes XForms (and meanwhile, the DHTML/AJAX/whatever-you-want-to-call-it JS renaissance continues). In the near term, only Mozilla-based browsers come close to having all the integrated infrastructure needed by XHTML 2, and not all bundled by default. There is no sign of XHTML 2 support from Microsoft, Apple, and Opera.
Still, XML is making its way to the client side of the web, slowly but surely. To help handle XML in JS, I’ve implemented E4X for Firefox 1.1. It isn’t fully hooked up yet, but it will be soon. More in a bit, as I keep my renewed resolution to blog.
Thanks to Dawson Engler for helping get us connected to Coverity last Fall. Dan Veditz and I have done several scans of Mozilla sources (Firefox branch and trunk, Thunderbird with Calendar enabled) using Coverity’s SWAT static analysis toolset.
The good news is that our nominal error rates are respectable at first glance: as good as or better than other large open source projects. The results show many trivial redundant null checks, missing or inconsistent null checks, and the like. These numerous bugs will be filed and fixed in batches, and we haven’t classified them all yet, so the bug-filing won’t be immediate. These are low priority bugs.
Fewer of the more serious errors were caught, dead code among them, but the most worrisome problems visible to the static analyzers were not obviously bugs. Rather, they were cases where an index into an array would be out of bounds only if assertions (NS_ASSERTION, etc.) were not seen as fatal when botched.
The toolset can be taught that NS_ASSERTION is fatal (exits the control flow graph), just as <assert.h>’s assert macro is understood by default to be fatal. But thanks to some bad ancient history in the Mozilla project, for most builds, NS_ASSERTION is not fatal!
(The bad history dates from 1999 and involved various people, mostly Netscape employees, all “trying to get their jobs done” in the face of bogus assertions from some of their fellow hackers. Instead of forcing bad assertions to be fixed, at some opportunity/social/political cost to oneself, it was too easy to go along with the minority view that “assertions should just be warnings”.
To be fair, some wanted all assertions to be combined via newer macros with run-time checks that would not compile out of production builds, but that went too far: performance-critical code with a limited set of trusted callers should not have to run-time check all indices.
Anyway, I’m sorry I gave in and went along with the assertions-as-warnings flow. That was a decision made in haste and repented at leisure!)
So back to the present. Given assertions-as-warnings, the problem becomes: do we have enough test coverage with DEBUG builds to be sure that developers will actually botch assertions whose invariants are being violated by a rare bad caller or broken data dependency? And of those few developers, how many will actually notice the warning messages in the cluttered standard output or standard error of the Mozilla-based program they’re testing? And of those, how many will actually do anything about fixing the broken assertions?
Anyone committed to reality has to believe that the odds ratios multiply into too small a fraction for us to be confident that assertions are effective “path killers” for the static analyzers to respect.
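To make the multiplied odds concrete, here is a back-of-the-envelope sketch with purely invented probabilities (illustrative guesses, not measurements of anything):

```javascript
// Hypothetical, illustrative probabilities; the point is the multiplication,
// not the particular numbers.
const pDebugBuildExercisesPath = 0.2; // a rare bad caller hit under a DEBUG build
const pWarningNoticed = 0.1;          // the warning seen amid cluttered stderr
const pWarningActedOn = 0.5;          // the developer actually investigates

const pAssertionEffective =
  pDebugBuildExercisesPath * pWarningNoticed * pWarningActedOn;

console.log(pAssertionEffective); // roughly 0.01: one chance in a hundred
```

Even generous guesses for each factor leave a product far too small to treat a non-fatal assertion as a reliable path-killer.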
I go further: anyone committed to software quality should want fatal assertions. Assertions are proof-points, to oneself and to others reading one’s code. They are not idle wishes, or doubtful questions posed to the software gods. If your code has a bogus assertion, you should want to know why it botched, and how to fix it. You should want to keep it in a fixed form, rather than remove it, if at all possible.
Dan and I pored over the several dozen potential static and dynamic overrun errors that the Coverity tools found, and at least for the ones in Firefox, we convinced ourselves that the callers were well-behaved. So again, based on our four eyeballs and fallible brains, we believe that the tools found nothing overtly bad or exploitable.
But, we must fix this historic assertions-are-warnings botch.
Just making assertions fatal now will turn tinderboxes on Linux and Windows, at least, bright orange.
I call that a good start, and then we can close the tree, divide the labor among all hands on deck at irc.mozilla.org, and fix or remove the bogus assertions. We could do this right at the start of the 1.9 alpha cycle. You may well object that this will be too disruptive. If so, please comment here proposing a better plan that actually fixes the problem in a finite number of days. We shouldn’t put this off beyond the start of the 1.9 cycle.
Back in my February 2004 Developer Day slides, I promoted the idea of using Eclipse to create a XUL application builder, with direct-manipulation graphical layout construction and editing, project management wizards, etc.
Although a few people expressed interest and even did some hacking (the MozCreator project being the most conspicuous example, although not Eclipse-based), no one actually created an Eclipse project and built on its Graphical Editor Framework to realize a XUL app-builder.
So the thought occurs: why not patch the Eclipse IDE for Laszlo to support XUL as an alternative target language, and Firefox (or any new-style XUL app, soon enough unified via XULRunner) as the target runtime? Any takers?
The Greek poet Archilochus wrote “The fox knows many things, but the hedgehog knows one big thing.”
But what does the Firefox know? Both many things (tabbed browsing, live bookmarks, popup blocking, mouse gestures, extension architecture, download manager, small, fast . . .) and one immense thing: that the power of the Internet and the power of open source are two sides of one coin, minted by millions of people working together as never before. Firefox shows what can be done when people use the web to collaborate without any agenda other than a common vision of simplicity and ease of use, and with the freedom to extend that vision according to individual good taste in boundless directions through XUL extensions.
In the case of Firefox 1.0, those people include the dozens of top hackers on the Mozilla project, the project managers at the Foundation and among the key strategic partners, the hundreds of CVS committers, the thousands of daily build testers and advocates, and the millions of users. I’ll single out only four by name, without slighting any others in the least.
First, many thanks to ben, who took up the flag after 0.5, kept his cool and his great sense of design under pressure, and carried the ball into the end zone. Kudos also to blake and hyatt, who started it all and showed the world the way to a better mousetrap. Finally, thanks again, and always, to asa, for his tireless testing and release leadership.
Onward to Firefox 1.1 and Mozilla 2.0!
For the impending PR1 candidate builds (tomorrow’s, we hope):
Back to Mozilla roadmap topics in my next update, some time soon-ish.
A lot of folks in the Mozilla community share the reaction Boris had to some deeply mistaken, tentative and now-aborted plans to remove View / Source and other “developer” features from Firefox. I wanted to point out that these plans were not made with agreement from me or, as far as I can tell, from Ben. First, let me just say that there is no way Firefox would ship without View / Source or any other UI that goes back to Netscape 1, and is therefore part of the “body plan” of browsers. Not while I’m around and involved, at any rate.
People dive into HTML all the time, copying and pasting, hacking, cribbing. View / Source is indispensable for such learning, not to mention for the kind of trouble-shooting all too frequently done by “end users”. My wife uses View / Source, and so do millions of others, whether or not they are “web developers” ™. You don’t have to be a Gecko hacker or even a paid web content designer to appreciate View / Source — far from it.
The line between a “user” and a “developer” is soft and flexible on the web, and it should remain that way, lest some know-it-alls or business-suited sharpies lead us down an over-complicated, proprietary path.
Even in the early days of NCSA Mosaic, when there were ~40 servers in the world with content to care about breaking with incompatible browser changes, marca and ebina had good reason to tweak Mosaic’s layout engine to support known usage errors, some of which we now call “quirks”.
I cheerfully acknowledge that this is heresy, but their decision (insofar as it was a decision) was simply good economics, and it offered better usability or human factors design than a strict SGML purism would have afforded. Without tolerating human error of the sort tolerated in natural languages, I think it likely that the web would not have grown as it did.
Throughout the explosive growth of the web, View / Source has played a crucial role, hard to appreciate if you dumb down your user model based on myopic hindsight and a static analysis of the majority cohort of “end users”.
Anyway, I wanted to reassure everyone, from our top Gecko hackers to interested web developers to enthusiastic surfers, that Firefox is not about to implode into a bare-bones, ultra-minimalist browser that those important hackers, et al., can’t use. Firefox cannot be “all things to all people” without at least some people having to configure an extension or two, but the default features should support the crucial user bases.
I’m willing to see DOM Inspector moved to an extension, based on its relative novelty and complexity compared to View / Source.
I’m increasingly skeptical about the wisdom of the alternative style sheet UI removal decried by Daniel, and I’ll make sure that feedback from the preview release on this removal is heard and fairly evaluated.
About Firefox UI: we’re trying something with both SeaMonkey and Firefox (and Thunderbird) now that couldn’t be done in the old days, when Netscape paid most module owners along with a good number of professional UI (or UE, as it used to be called) designers: individually accountable product design leads.
Product design can’t be done well by committee, and SeaMonkey’s UI was always worse for the compromises, bloat, and confusion about its intended audience that resulted from past committees. No one was much or well empowered by any nominal share in such a committee or mob.
For SeaMonkey, which is moving into a sustaining engineering mode, and won’t be our premier product after Firefox 1.0, Neil Rashbrook leads the UI design and implementation.
For Firefox, Ben Goodger is the design lead.
As presented at dev-day, these slides nicely demonstrated support for Apple’s canvas tag, embedded in Mozilla as <xul:canvas> and implemented using Cairo (a static PNG of the clock and animated stars must stand in for now, in the published slides, but you can view source to see the starbar.js script and related source). Thanks go to vlad and stuart for their heroic efforts hacking up <canvas> support.
The WHAT Working Group is considering standardizing <canvas>, with the goal of interoperating implementations based on the standard. My hope is that this is done both well, and quickly, in keeping with the WHATWG charter.
People ask about how SVG in Mozilla and <canvas> relate. The short answer is that they don’t, except that both must work well in all dimensions (complete and fast alpha-blending, e.g.) on top of a common graphics substrate, which looks likely to be Cairo.
A longer answer could compare and contrast <canvas>’s non-persistent, procedural, PostScript-y rendering API with SVG’s declarative markup and persistent DOM. The upshot is that SVG and <canvas> complement one another, catering to fairly distinct requirements and authoring audiences.
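To make the contrast concrete, here is a sketch of the procedural style using a mock 2D context as a stand-in for a real getContext("2d") object (fillRect and arc come from the proposed canvas API; the mock merely records the calls so the sketch runs anywhere):

```javascript
// <canvas> rendering is procedural and non-persistent: you issue drawing
// calls and nothing survives but pixels. This mock context records the
// calls a real 2D context would execute.
function makeMockContext() {
  const calls = [];
  return {
    calls,
    fillRect(x, y, w, h) { calls.push(["fillRect", x, y, w, h]); },
    beginPath()          { calls.push(["beginPath"]); },
    arc(x, y, r, a0, a1) { calls.push(["arc", x, y, r, a0, a1]); },
    fill()               { calls.push(["fill"]); },
  };
}

// A one-shot imperative sequence, vaguely clock-face shaped; contrast this
// with SVG, where the circle would live on as an element in a DOM tree.
function drawClockFace(ctx) {
  ctx.fillRect(0, 0, 100, 100);
  ctx.beginPath();
  ctx.arc(50, 50, 40, 0, 2 * Math.PI);
  ctx.fill();
}

const ctx = makeMockContext();
drawClockFace(ctx);
console.log(ctx.calls.length); // 4 recorded drawing calls
```

Nothing about the circle persists after drawClockFace returns; to move it, you redraw. In SVG you would instead mutate a persistent element and let the renderer catch up.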
One crucial fact to keep in mind: <canvas> support is tiny compared to the implementation of any known profile of SVG, so it will be easy to justify adding <canvas> support to default builds of Mozilla products. SVG should be supported in the same way as XForms and other, bulkier implementations of standards not yet seen much on the web: as a one-click download-and-install package that extends Gecko. I’ve asked top hackers to look into generalized support for such Gecko extensions, based on XTF, with versioning and update management a la Firefox’s extensions.
I’ll blog separately about the other points of interest raised in these slides.