10 Jan 2013

TAG, You’re “It”

Congratulations to Marcos Caceres, Yehuda Katz, Alex Russell, and Anne van Kesteren on the news of their election to the W3C Technical Architecture Group (TAG).

This is great news: four out of the five reformers won.

Back-story: in late 2010, TBL invited me to join the TAG. I declined with thanks for two reasons:

  1. I was (at the time, especially) over-committed to standards (JS, mostly) and Mozilla work (e.g., setting up Mozilla Research).
  2. The TAG was not engaged “toothfully” (my word) on Web problems faced by developers; rather, it seemed focused on XML and Semantic Web matters, and therefore I would be odd-TAG-member-out.

Having both (a) more time and (b, the point of this story) three or four kindred-spirit candidates would have changed my mind. I hope my decline-with-thanks message helped in some small way to bring about today’s good news.

Kudos to the reformers for running, to the W3C Advisory Committee representatives who voted in the reformers, and to the W3C and TBL for being open to reform.

/be

12 Oct 2012

HTML5 Video Update

This is a follow-up to Video, Mobile, and the Open Web. As promised there, OS-based H.264 support for the HTML5 <video> element has already landed in Gecko, and it shipped just this week in Firefox Beta for Android. Firefox OS (B2G to the early adopters!) also supports H.264 from the HTML5 <video> element.

The challenge remains working through OS decoders on the various desktop OSes. Here’s where we are (thanks to roc, cdouble, and cpearce):

  • Bug 794282, to enable GStreamer in official Linux builds.
  • Bug 799315, to use Windows Media Foundation on Vista and newer Windows releases. This would provide H.264/AAC/MP3 support.
  • Tracking bug 799318 for the above two and the missing Mac OS X bug, plus the Windows XP solution described next.
  • The idea for Windows XP is to use Flash. According to roc, “we believe it may be possible to use Flash unmodified. Modern Flash has APIs to let us inject compressed data to the media engine without going through their networking layer, and we can recover rendered video frames.”

So, hard work still ahead of us, but nothing that we can’t overcome (knock on wood).

We are taking the fight for unencumbered formats to the next battlefront, WebRTC, also as promised. More on that front later.

As always, the dev-media list (mailman “subscribe” protocol; also a Google Group) is a fine place to discuss any of this.

/be

8 Oct 2012

Harmony of Dreams Come True

This blog focuses on portions of the new-in-ES6 stuff I presented in my Strange Loop 2012 closing keynote, which was well-received (reveal.js-based HTML slides, some from my Fluent 2012 keynote, many of those originally from Dave Herman’s Web Rebels 2012 talk [thanks!], can be viewed here; notes courtesy Jason Rudolph).

UPDATE: the Strange Loop keynote video is up.


I blogged early in 2011 about Harmony of My Dreams, to try to fit in one page some dream-sketches (if not sketchy dreams — the #-syntax ideas were sketchy) of what I thought were the crucial elements of ECMAScript Harmony, the name I coined for the standardized future of JavaScript.

Now this dream is coming true, not just in ES6 draft specs but in prototype implementations in top browsers. Here I’ll tout Firefox 15, which was released almost six weeks ago (yes, this means Firefox 16 is tomorrow, and Firefox 17 beta and 18 aurora too — these all have yet more new goodies in them — isn’t Rapid Release fun?). Per the MDN docs, the SpiderMonkey JS engine shipped in Firefox 15 sports the following new prototype-implemented draft ES6 features:

Default parameters

This extension (AKA “parameter default values”) is too sweet, and it will help put the arguments object out to pasture:


js> function f(a = 0, b = a*a, c = b*a) { return [a, b, c]; }
js> f()
[0, 0, 0]
js> f(2)
[2, 4, 8]
js> f(2, 3)
[2, 3, 6]
js> f(2, 3, 4)
[2, 3, 4]

Implementation credit goes to Benjamin Peterson for his work implementing default parameters, and to Jason Orendorff for his always-excellent code reviews. See this bug for followup work to track the latest ES6 agreement on how passing undefined (and only undefined) should trigger defaulting.

We have a few details to iron out still about scope, I suspect (based on this es-discuss message and its thread).
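For reference, that agreement did stick in final ES6; a minimal sketch with a toy function:

```javascript
// Final ES6 rule: passing undefined, and only undefined, triggers the
// default; null and other falsy values are passed through as-is.
function f(a = 0, b = a * a) { return [a, b]; }

console.log(f());             // [0, 0]
console.log(f(undefined, 5)); // [0, 5]  -- undefined triggers a's default
console.log(f(null, 5));      // [null, 5] -- null does not
```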

Rest parameters

Even sweeter than default parameters are rest parameters, and I bet they are ahead of default parameters in making arguments a bad memory some fine day:


js> function f(a, b, ...r) { print(Array.isArray(r)); return r.concat(a, b); }
js> f(1, 2)
true
[1, 2]
js> f(1, 2, 3)
true
[3, 1, 2]
js> f(1, 2, 3, 4, 5)
true
[3, 4, 5, 1, 2]

Again credit goes to Benjamin and Jason for their work.

Spread in array literals

The dual of rest is called “spread”, and it should work in call expressions as well as array literals. The latter is implemented in Firefox 16 (now in the beta channel):


js> a = [3, 4, 5]
[3, 4, 5]
js> b = [1, 2, ...a]
[1, 2, 3, 4, 5]

Thanks once again to Benjamin (a star Mozilla intern this summer) and Jason.

Spread in call expressions is not yet implemented:


js> function f(...r) { return r; }
js> function g(a) { return f(...a); }
typein:20:0 SyntaxError: syntax error:
typein:20:0 function g(a) { return f(...a); }
typein:20:0 .........................^

But I believe it is coming soon — bug 762363 is the one to watch, patch, and test.
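Once it lands, spread in call position should behave like this (a sketch of the draft semantics, runnable today in engines that shipped ES6):

```javascript
function f(...r) { return r; }
function g(a) { return f(...a); }  // spread a's elements as f's arguments

console.log(g([1, 2, 3])); // [1, 2, 3]

// Spread also mixes with ordinary positional arguments:
function h(x, ...rest) { return [x, rest]; }
console.log(h(0, ...[1, 2])); // [0, [1, 2]]
```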

for-of iteration

I blogged and spoke about for-of at TXJS 2011. The of contextual keyword, also in CoffeeScript, goes where in goes in for-in loops, in order to trigger the new iteration protocol (which is based on Python’s).


js> for (var v of [1, 2, 3]) print(v)
1
2
3

Arrays are iterable out of the box in ES6. This is a huge usability win! Unwary newcomers hoping for Pythonic value iteration can now avoid the trap of for-in on arrays iterating string-coerced keys rather than values.
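The trap, side by side, with toy variables:

```javascript
var arr = [10, 20, 30];

var keys = [];
for (var k in arr) keys.push(k);   // for-in walks enumerable keys...
console.log(keys);                 // ["0", "1", "2"] -- strings!

var vals = [];
for (var v of arr) vals.push(v);   // ...for-of walks values
console.log(vals);                 // [10, 20, 30]
```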

Objects are not iterable without the programmer opting in explicitly:


js> for (var [k, v] of {p: 3, q: 4, r: 5}) print(k, v)
typein:24:0 TypeError: ({p:3, q:4, r:5}) is not iterable

To opt in, call an iterator factory, that is, a function that returns a fresh iterator for its parameter. Or simply give your objects or their common prototype an iterator property whose value is an iterator factory method: a function that returns the desired fresh iterator given its this parameter.

We require opt-in to avoid future-hostility against custom iterators for collection objects. Such objects probably do not want any kind of general property iterator default, which if left on Object.prototype, might be object-detected and prevent installation of the correct custom iterator factory.
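For comparison, here is the opt-in as it eventually standardized: final ES6 named the hook with the well-known symbol Symbol.iterator, rather than the plain iterator property the 2012 prototype used. A minimal sketch:

```javascript
var range = {
  from: 1,
  to: 3,
  // The iterator factory method, keyed by Symbol.iterator in final ES6:
  [Symbol.iterator]() {
    var cur = this.from, last = this.to;
    return {
      next() {
        return cur <= last
          ? { value: cur++, done: false }
          : { value: undefined, done: true };
      }
    };
  }
};

console.log([...range]); // [1, 2, 3]
```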

The easiest way to create such an iterator factory is to write a generator function:


js> function items(o) { for (var k in o) yield [k, o[k]]; }
js> for (var [k, v] of items({p: 3, q: 4, r: 5})) print(k, v)
p 3
q 4
r 5

(This example uses destructuring, too.)

Note that SpiderMonkey has not yet implemented the ES6 generator function* syntax. We also haven’t added the ES6 features of delegating to a sub-generator via yield* and of returning a value from a generator (as in PEP 380). We’ll get to these soon.
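For reference, the eventual ES6 syntax looks like this; a sketch runnable in engines that later shipped it:

```javascript
function* inner() {
  yield 1;
  yield 2;
  return 3;               // a generator can return a value...
}

function* outer() {
  var r = yield* inner(); // ...which yield* delegation hands back
  yield r;
}

console.log([...outer()]); // [1, 2, 3]
```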

Map

Have you ever wanted to map from arbitrary keys to values, without having the keys be implicitly converted to strings and therefore possibly colliding? ES6 Map is for you:


js> var objkey1 = {toString: function(){return "objkey1"}}
js> var objkey2 = {toString: function(){return "objkey2"}}
js> var map = Map([[objkey1, 42], [objkey2, true]])
js> map.get(objkey1)
42
js> map.get(objkey2)
true

The Map constructor takes any iterable, not just an array, and iterates over its key/value array-pairs.

Of course you can update a Map entry’s value:


js> map.set(objkey1, 43)
js> map.get(objkey1)
43

And you can add new entries with arbitrary key and value types:


js> map.set("stringkey", "44!")
js> for (var [k, v] of map) print(k, v)
objkey1 43
objkey2 true
stringkey 44!
js> map.size()
3

You can even use a key as a value:


js> map.set("hi", 44)
js> map.set(objkey2, objkey1)
js> map.set(objkey1, objkey2)
js> for (var [k, v] of map) print(k, v)
objkey1 objkey2
objkey2 objkey1
hi 44
stringkey 44!

but now there’s a cycle between the objkey1 and objkey2 entries. This will tie up space in the table that must be manually released by breaking the cycle (or by dropping all references to the map):


js> map.delete(objkey1)
true
js> map.delete(objkey2)
true
js> for (var [k, v] of map) print(k, v)
hi 44
stringkey 44!

Setting the objkey1 and objkey2 variables to null is not enough to free the space in map tied up by the cycle. You must map.delete.

If your map is not exposed via an API by which arbitrary values could be passed as key and value to map.set, you won’t have to worry about cycles. And if the map itself becomes garbage soon (for sure), no worries. But for leak-proofing with arbitrary key/value cycles, see WeakMap, below.

Set

When you just want a set of arbitrary values, it’s a pain to have to use a map and burn code and memory on useless true values for the keys. So ES6 also offers Set:


js> var set = Set([1, true, "three"])
js> set.has(1)
true
js> set.has(2)
false
js> for (var e of set) print(e)
1
true
three
js> set.size()
3

As with Map, with a Set you can delete as well as add:


js> set.delete("three")
true
js> for (var e of set) print(e)
1
true
js> set.size()
2
js> set.add("three")
js> set.size()
3

An object element keyed by its identity works just as well as any other type of element.


js> var four = {toString: function(){return '4!'}}
js> set.add(four)
js> set.has(four)
true
js> for (var e of set) print(e)
1
true
three
4!

Unlike Map there is no cyclic leak hazard with arbitrary elements, although a WeakSet taking only object elements would still be helpful for automatic element removal when no other references to an element object remain. This idea has come up in connection with proxies and symbols, but I’ll save that for another post.
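For what it’s worth, WeakSet did make it into final ES6; a minimal sketch:

```javascript
var ws = new WeakSet();
var elem = {};

ws.add(elem);
console.log(ws.has(elem)); // true
console.log(ws.has({}));   // false -- membership is by identity

// Like WeakMap, a WeakSet cannot be enumerated, and dropping the last
// outside reference to elem lets the GC remove its entry automatically.
```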

WeakMap

As noted above, with Map, making a cycle among map keys and values can tie up space in the table, and in the heap in all objects linked along the cycle or reachable from those objects, even when no references outside of the table to the key objects still live. Non-object keys, which can always be recreated (forged) by writing literal expressions that compare equal, have no such hazard.

ES6 WeakMap rides to the rescue:


js> var wm = WeakMap()
js> wm.set(objkey1, objkey2)
js> wm.set(objkey2, objkey1)
js> wm.has(objkey1)
true
js> wm.get(objkey1)
({toString:(function (){return "objkey2"})})
js> wm.has(objkey2)
true
js> wm.get(objkey2)
({toString:(function () {return 'objkey1'})})

So far so good, wm has a cycle but the objkey1 and objkey2 variables still keep the objects alive. Let’s cut the external references and force garbage collection:


js> objkey1 = null
null
js> gc()
"before 286720, after 282720\n"
js> wm.get(objkey2)
({toString:(function () {return 'objkey1'})})
js> objkey2 = null
null
js> gc()
"before 286720, after 282624\n"

At this point wm is empty. We can’t tell, however: there’s no way to enumerate a WeakMap, as doing so could expose the GC schedule (in browsers, you can’t call gc() to force a collection). Nor can we use wm.has to probe for entries, since we have nulled our objkey references!

A WeakMap is therefore close friends with the JS garbage collector. The GC knows when no references to a key object survive, and can collect the entry for that key — and for any cyclic entries in the table tied in a knot by their values being keys of other entries.

This special GC handling adds overhead, which ordinary Map users should not have to suffer.

What’s more, WeakMap accepts only object keys to enforce the no-forged-key rule necessary for the GC to be able to collect entries whose keys no longer survive — otherwise when could you ever GC an entry for key "if", which is typically interned along with the other JS reserved identifiers forever?

An entry with a key such as 42 or "42!" might be GC’ed if no copies of the key’s primitive value exist, even though the value could be recreated at any time (primitive types have value identity, not reference identity).

Of course, the GC cannot keep count of live instances of 42 very efficiently — or at all — depending on the JS engine’s implementation details. Nor are strings observably shared via references, so they cannot be counted either (small ones may be copied, and are in many engines).

This is all a bit of a brain bender, and probably more than the average Map user needs to know, but the need for WeakMap compared to separate weak reference (on the ES7 radar!) and Map facilities is real. Smalltalkers discovered it decades ago, and called the weak key/value pair an Ephemeron (note: @awbjs, who witnessed the discovery, testified to me that the Wikipedia page’s credits are incomplete).
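A common pattern that motivates all this bookkeeping: using a WeakMap to attach private data to objects without keeping them alive. The names below (privates, Point) are illustrative only:

```javascript
var privates = new WeakMap();

function Point(x, y) {
  // Per-instance data, hidden from the object itself and GC-safe:
  privates.set(this, { x: x, y: y });
}
Point.prototype.getX = function () {
  return privates.get(this).x;
};

var p = new Point(3, 4);
console.log(p.getX()); // 3
// When p becomes garbage, its entry in privates can be collected too.
```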

Proxy

The draft ES6 spec has evolved since Proxies were first prototyped, but the good news is that the new Proxy spec can be implemented on the old one (which was prototyped in SpiderMonkey and V8) via Tom Van Cutsem’s harmony-reflect library. The even better news is that the built-in direct proxies implementation has just landed in SpiderMonkey.

Tom’s __noSuchMethod__ implementation using direct proxies:


js> var MethodSink = Proxy({}, {
  has: function(target, name) { return true; },
  get: function(target, name, receiver) {
    if (name in Object.prototype) {
      return Object.prototype[name];
    }
    return function(...args) {
      return receiver.__noSuchMethod__(name, args);
    }
  }
});
js> void Object.defineProperty(Object.prototype,
  '__noSuchMethod__',
  {configurable: true, writable: true, value: function(name, args) {
    throw new TypeError(name + " is not a function");
  }});
js> var obj = { foo: 1 };
js> obj.__proto__ = MethodSink;
({})
js> obj.__noSuchMethod__ = function(name, args) { return name; };
(function (name, args) { return name; })
js> obj.foo
1
js> obj.bar()
"bar"
js> obj.toString
function toString() {
    [native code]
}

With this approach, you have to insert MethodSink just before the end of the prototype chain of an object that wants __noSuchMethod__’s magic, using the __proto__ de facto standard that will be a de jure standard in ES6. The Object.prototype.__noSuchMethod__ backstop throws to catch bugs where the MethodSink was not on a receiver’s prototype chain.

This implementation does not just call the __noSuchMethod__ hook when a missing method is invoked, as shown after the obj.bar() line above. It also creates a thunk for any get of a property not in the target object and not in Object.prototype:


js> obj.bar
(function (...args) {
      return receiver.__noSuchMethod__(name, args);
    })
js> var thunk = obj.bar
js> thunk()
"bar"

I think this is an improvement on my original __noSuchMethod__ creation all those years ago in SpiderMonkey.

(Avid SpiderMonkey fans will cheer the switch to source recovery from decompilation evident in the result from Function.prototype.toString when evaluating obj.bar, thanks to Benjamin Peterson’s fine work in bug 761723.)

RegExp sticky (y) flag

This flag causes its regular expression to match in the target string starting from the index held in the lastIndex property of the regexp. Thus ^ can match at other than the first character in the target string. This avoids O(n²) complexity when lexing a string using a regexp, where without y one would have to take successive tail slices of the string and match at index 0.
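A quick sketch of y in action (runnable in engines that shipped it):

```javascript
var re = /\w+/y;           // sticky: match exactly at lastIndex
var src = "foo bar baz";

re.lastIndex = 4;
console.log(re.exec(src)[0]); // "bar"
console.log(re.lastIndex);    // 7 -- advanced past the match

re.lastIndex = 3;             // points at the space: no match there
console.log(re.exec(src));    // null
```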

String startsWith, endsWith, contains

These explain themselves by their names and they’re pretty simple, but also handier and more readable than the equivalent indexOf and lastIndexOf expansions.
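A minimal sketch, with the caveat that contains was later renamed includes in final ES6 for web-compatibility reasons:

```javascript
console.log("foobar".startsWith("foo")); // true
console.log("foobar".endsWith("bar"));   // true

// 'contains' was later renamed 'includes':
console.log("foobar".includes("oob"));   // true

// versus the old expansions:
console.log("foobar".indexOf("foo") === 0);                       // startsWith
console.log("foobar".lastIndexOf("bar") === "foobar".length - 3); // endsWith
```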

Number isNaN, isFinite, toInteger, isInteger

The first two are not super-exciting, but they are worthwhile for avoiding the implicit-conversion mistakes made in specifying the global isNaN and isFinite functions, which date from ES1 days:


js> Number.isNaN("foo")
false
js> isNaN("foo")
true

True fact: isNaN(" ") returns false because a string containing spaces converts (I was influenced by Perl; hey, it was the ’90s!) to the number 0, which sure enough is not a NaN. Dave Herman used this to good effect in the fun bonus segment of his BrazilJS talk.
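The quirk, spelled out with toy expressions:

```javascript
console.log(Number(" "));       // 0 -- whitespace-only strings convert to 0
console.log(isNaN(" "));        // false, because Number(" ") is 0
console.log(isNaN("4x"));       // true -- "4x" converts to NaN
console.log(Number.isNaN(" ")); // false -- no conversion; " " is not NaN
```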

The Integer static methods also avoid implicitly converting non-numeric arguments (e.g., "foo" to NaN). Their main purpose is to provide built-in IEEE-754 integral-double handling:


js> Math.pow(2,53)/3
3002399751580330.5
js> Number.isInteger(Math.pow(2,53)/3)
false
js> Math.pow(2,54)/3
6004799503160661
js> Number.isInteger(Math.pow(2,54)/3)
true

Notice how once you exceed the bits in IEEE double’s mantissa, Number.isInteger may return true for what you might wish were precise floating point results. Better luck in ES7 with value objects, which would enable new numeric types including IEEE-754r decimal.

Older prototype implementations

SpiderMonkey of course supports many Harmony proposals implemented based on ES4 or older drafts, e.g., const, let, generators, and destructuring. These will be brought up to spec as we hammer out ES6 on the anvil of Ecma TC39, heated in the forge of es-discuss, and user-tested in Firefox, Chrome, and other leading browsers. I hope to blog soon about other progress on the ES6 and ES7 “Harmony” fronts. A few clues can be found near the end of my Strange Loop slides.

/be

PS: I colorized the code examples in this post using the fun Prism.js library adapted lightly to ES6. Many thanks to Lea Verou, et al., for Prism.js!

18 Jun 2012

Recent talks: Fluent, TXJS 2012

I gave two talks recently, first at O’Reilly Media’s go-big-with-JavaScript FluentConf, and then at my favorite regional JS conference, the delightful TXJS (gorgeous site design), curated and stage-managed by Alex Sexton, Rebecca Murphey, and other usual suspects.

My Fluent video was up in record time, one achievement that the O’Reilly folks can brag about:

There I played the JS clip from Gary Bernhardt’s hilarious Wat lightning talk, and gave bleeding-edge demos that my TXJS talk updated (video link for TXJS to be posted as soon as it’s available).

At TXJS, my title perhaps referenced Larry Niven (but let’s hope not), and my content directly cited the work of the late Lynn Margulis, champion of endosymbiotic theory. If JS is a mitochondrion, what in our modern web world correspond to the eukarya? I suspect we are in the midst of finding out. Node is probably involved.

TXJS 2012 Talk.001

At TXJS I mixed new metaphors like mixed drinks, harkened back to my last year’s TXJS talk, and gave the latest demos.

TXJS 2012 Talk.002

The big-picture story is renewed humility in the face of community feedback. My goal is to help the JS standards body be the best it can be as one of several symbiotic critters in an evolving system.

TXJS 2012 Talk.003

In last year’s TXJS talk I elaborated on how Ecma TC39 works, and also malfunctions sometimes. The important point is the hermeneutic spiral.

The spiral lives, and it works — although newcomers to es-discuss sometimes think consensus has been achieved within one 16-message thread. Not so fast! But consensus on ES6 is being achieved.

TXJS 2012 Talk.004

Lots of learning and re-learning here:

  • Versioning is an anti-pattern on the web, wherefore 1JS.
  • Syntax as programming language user-interface should evolve, but unlike standard library additions, developers can’t fix it or abstract over it (no macros yet, sigh). I made JS’s object model mutable at the start so people could polyfill, and they continue to do so for good reason. New syntax has to be minimal, righteous, and user-tested to get into Harmony, and that’s the way it should be.
  • One new syntax-suite that many agree pays its way, maximally minimal classes, still isn’t in ES6. Working on it…
  • My old “dunder-proto” (LOL, @littlecalculist was inspired in this pronunciation of __proto__ by Dunder-Mifflin) vanquished triangle (ugly typography didn’t help triangle either).
  • People still rant about JS’s privileged client-side status, but it’s a curse more than a blessing (Python, Lua, Ruby all would have been frozen badly if wedged into Netscape 2 in 1995).

The only hope is mutualism in an evolutionary struggle toward something better than either TC39 or JS developers could achieve by themselves. I tend to believe that this struggle will end well, better than master-planned would-be alternatives that don’t have a prayer of catching on as native-VM-implemented-yet-cross-browser on the Web as currently constituted.

TXJS 2012 Talk.005

JSFixed represents both a cry from the heart by some in the JS developer community, and an attractor for bikeshedding and tons of noise. But with excellent curation by @angustweets, @valueof, @rwaldron, and @KitCambridge, it has produced a reasonable set of recommendations. Now it’s up to TC39 to engulf the recommendations that it can stomach, as if we were a hungry cell and they a primitive bacterium, and then for both sides to find mutual wins in the resulting ensemble.

TXJS 2012 Talk.006

These are straightforward, but I expanded on maximin classes in the next slide.

Only => (fat arrow) made it into ES6; thin arrow may be an arrow too far, but I will try again at the July TC39 meeting.
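Fat arrow as it landed in ES6; the lexical this binding, not just brevity, is the point. A toy sketch:

```javascript
var doubled = [1, 2, 3].map(x => x * 2);
console.log(doubled); // [2, 4, 6]

// Unlike function, => closes over the enclosing this:
var counter = {
  count: 0,
  incrAll(ns) { ns.forEach(n => { this.count += n; }); }
};
counter.incrAll([1, 2, 3]);
console.log(counter.count); // 6
```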

TXJS 2012 Talk.007

TXJS 2012 Talk.008

The existential operator, ?., is relatively straightforward. I will put it on the agenda for July. It could fit in ES6, IMHO, without breaking any budgets or agreements.
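As it turned out, ?. did not make ES6; it eventually shipped in ES2020 as optional chaining. A sketch of its behavior:

```javascript
var o = { a: { b: 1 } };

console.log(o.a?.b); // 1
console.log(o.x?.b); // undefined -- short-circuits instead of throwing
```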

I intend to write up Object.prototype.forEach as a strawman, based on JSFixed’s recommendation. It relies on Object.keys order, which in turn is for-in order, but no big deal. It’s a useful object method, shadowed by Array.prototype.forEach.

TXJS 2012 Talk.009

JS in 2gyr? More like at birth + 35yr. CoffeeScript forks off and rejoins with Ruby bacterial DNA engulfed. Is that Dart near the bottom? :-P

TXJS 2012 Talk.010

Evolution does sometimes paint a clade into a dead-end corner, or leave the evolved system with harsh dilemmas and “pick any two” trilemmas.

TXJS 2012 Talk.011

The quest for shorter function syntax runs afoul of this binding. As I said in my talk, if we are counting fairly, then => and -> are not a single short function syntax, they are two syntaxes grouped by being “arrows” or having two chars ending in >.

TXJS 2012 Talk.012

This slide refers to the default operator proposal for Harmony, which I’ve recently edited based on several rounds of es-discuss and twitter feedback. It is looking good, IMHO, with the only remaining issue (also open for parameter default values) of whether null as well as undefined should trigger defaulting.
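For the record, the default operator eventually shipped in ES2020 as ??, resolving the open question in favor of both: null and undefined trigger defaulting, other falsy values do not. A sketch:

```javascript
var missing;
console.log(missing ?? "fallback"); // "fallback" -- undefined triggers it
console.log(null ?? "fallback");    // "fallback" -- so does null
console.log(0 ?? "fallback");       // 0 -- other falsy values do not
```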

TXJS 2012 Talk.013

Somehow, @rmurphey knew I would be speaking about Unicorns.

TXJS 2012 Talk.014

TXJS 2012 Talk.015

ES6 is already partly implemented in top browsers, and it is coming to more browsers soon. Time to start experimenting with it and giving feedback.

TXJS 2012 Talk.016

My demos:

So yeah: 3D games, Flash, and C/C++ in JS. Can you dig it?

TXJS 2012 Talk.017

I close by praising Wesley Snipes again (this time with a jwz memory), and taunting those who doubt either JS or Passenger 57.

I do not taunt for the sake of JS in itself, which started life as good-not-great where it wasn’t just silly. Rather, for JS as an over-maligned, unique evolving system that somehow still unites a standards group (made of competing browser vendors) with faithful JS developers, working together for a better symbiosis.

No other language has the curse or the blessing of this fate. Let’s do our best. Thanks to the JSFixed crew for doing theirs.

/be

15 Apr 2012

The infernal semicolon

Most of the comments in this semicolons in JS exchange make me sad. The code in question:

  clearMenus()
  !isActive && $parent.toggleClass('open')

relies on Automatic Semicolon Insertion (ASI) and so cannot be minified except by parsing fully (including ASI), observing the significance of the newline after clearMenus(), and inserting a semicolon when stripping that newline.

Some argue that JSMin has a bug. Doug Crockford does not want to change JSMin, and that’s his choice.

FWIW, I agree with Doug’s canonically grumpy tone if not his substance; more below on the substance.

I also agree with @cramforce and @jedschmidt that the && line is an abusage, allowed due to JS’s C heritage by way of Java, but frowned upon by most JS hackers; and that an if statement would be much better style (and, I take it, help JSMin do right). But this particular criticism is too ad hoc to help resolve the general “Let me have my ASI freedom and still minify, dammit!” debate.

Doug goes on to say:

TC39 is considering the use of ! as an infix operator. This code will break in the future. Fix it now. Learn to use semicolons properly. ! is not intended to be a statement separator. ; is.

The !-as-infix-operator idea is proposed as syntactic sugar for promises, which may or may not make it into Harmony with that exact syntax, or with any syntactic sugar at all.

Doug’s right that ! is not a statement terminator or “initiator”. And (my point here), neither is newline.

But search for [nlth] in the proposed promises grammar and you’ll see something surprising about ASI and infix operators: we can add new infix operators in the future, whether new contextual keyword-operators (e.g., is and isnt — BTW these are in doubt) or retasked, existing unary-prefix operators, provided that we insist on [no LineTerminator here] immediately to the left of any such infix operator.

(In ECMA-262, [no LineTerminator here] is used in so-called “restricted productions” to make contextually-significant newlines, e.g., after return without any expression of the return value on the same line.)
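The classic restricted production is return itself:

```javascript
function f() {
  return      // restricted production: ASI inserts ';' right here
    42;       // unreachable expression statement
}
console.log(f()); // undefined, not 42
```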

This future-friendliness to new infix operators comes directly from ASI as a newline-sensitive error correction procedure, as the example at top demonstrates. Try other examples using a leading identifier on a well-formed second line and you’ll see the same effect. Removing the newline introduces an early error, which creates homesteading space for new infix operators in a later edition of ECMA-262. Examples:

let flag = x is y;  // no \n before 'is'!
x ! p = v;          // Q(x).put('p', v)

An aside on coding style: if we add new infix operators used in restricted productions, this gives weight to the JS coding style that puts infix operators in multiline expressions at the end of continued lines, rather than at the beginning of continuation lines.

So while I agree with Doug on those two lines of code from Bootstrap (an excellent JS library, BTW) exhibiting poor style, it is not the case that such code as written could break in the future, even if we were to adopt the !-as-infix-operator strawman. The first line terminator in that example is indeed significant.

The moral of this story: ASI is (formally speaking) a syntactic error correction procedure. If you start to code as if it were a universal significant-newline rule, you will get into trouble. A classic example from ECMA-262:

a = b + c
(d + e).print()

Similar hazards arise with [, /, and unary + and -. Remember, if there wasn’t an error, ASI does not apply.
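The [ hazard in runnable form, with toy variables:

```javascript
var arr = [1, 2, 3]
var first = arr
[0].toString()     // no error at the newline, so ASI does not apply:
                   // this parses as  var first = arr[0].toString()

console.log(first) // "1" -- a string, not the array
```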

This problem may seem minor, but JS file concatenation ups the ante. For this reason some style guides (Dojo, IIRC) advocate starting your reusable JS file with ;, but people don’t know and it’s easy to forget.
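The defensive pattern those guides suggest looks like this; the leading semicolon is the point, and libAnswer is purely illustrative:

```javascript
// reusable-lib.js -- the leading ';' terminates whatever unterminated
// expression a file concatenated before this one may have left open:
;(function () {
  var answer = 42;
  globalThis.libAnswer = answer; // illustrative export only
}());

console.log(globalThis.libAnswer); // 42
```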

I wish I had made newlines more significant in JS back in those ten days in May, 1995. Then instead of ASI, we would be cursing the need to use infix operators at the ends of continued lines, or perhaps brute-force parentheses, to force continuation onto a successive line. But that ship sailed almost 17 years ago.

The way systematic newline significance could come to JS is via an evolution of paren-free that makes it to Harmony status. I intend to work on this in the strawman, but not for ES6.

Some of the github issue comments are naive or idealistic to the point of being silly. Since when does any programming language not have syntax arguments? All living, practical languages that I know of, even those with indentation-based block structure and similar restrictions, have degrees of freedom of expression that allow abusage as well as good usage. Language designers can try to reduce degrees of freedom, but not eliminate them completely.

My two cents: be careful not to use ASI as if it gave JS significant newlines. And please don’t abuse && and || where the mighty if statement serves better.

I’ll also say that if it were up to me, in view of JS’s subtle and long history, I’d fix JSMin. But I would still log a grumpy comment or two first!

/be

5 Apr 2012

Community and Diversity

[I hope that it's obvious from what follows that this is a statement of personal opinion, not an official Mozilla document.]

Summary

Mitchell Baker recently wrote:

If we start to try to make “Mozilla” mean “those people who share not only the Mozilla mission but also my general political / social / religious / environmental view” we will fail. If we focus Mozilla on our shared consensus regarding the Mozilla mission and manifesto then the opportunities before us are enormous.

Mozilla’s diversity is a success condition. Our mission and our goal is truly global. Our mission taps into a shared desire for respect and control and user sovereignty that runs across cultures and across many other worldviews. We may even offend each other in some of our other views. Despite this, we share a commitment to the Mozilla mission. This is a remarkable achievement and important to our continued success.

I agree with every word of this, and I believe it applies to other communities of which I’m a member. If not, these communities will tend to splinter, and that is likely to be a net loss for everyone.

Background

A donation that I made in support of California Proposition 8 four years ago became public knowledge and sparked a firestorm of comments in the last few days, mostly on Twitter.

People in other countries or other U.S. states do not know why “Mozilla” was listed in the donation data. Donors above a certain amount are required by the State of California to disclose their employer. Mozilla had nothing to do with the donation.

I’m not going to discuss Prop 8 here or on Twitter. There is no point in talking with the people who are baiting, ranting, and hurling four-letter abuse. Personal hatred conveyed through curse words is neither rational nor charitable, and strong feelings on any side of an issue do not justify it.

In contrast, people expressing non-abusive anger, sadness, or disagreement, I understand, grieve, and humbly accept.

No Hate

Ignoring the abusive comments, I’m left with charges that I hate and I’m a bigot, based solely on the donation. Now “hate” and “bigot” are well-defined words. I say these charges are false and unjust.

First, I have been online for almost 30 years. I’ve led an open source project for 14 years. I speak regularly at conferences around the world, and socialize with members of the Mozilla, JavaScript, and other web developer communities. I challenge anyone to cite an incident where I displayed hatred, or ever treated someone less than respectfully because of group affinity or individual identity.

Second, the donation does not in itself constitute evidence of animosity. Those asserting this are not providing a reasoned argument, rather they are labeling dissenters to cast them out of polite society. To such assertions, I can only respond: “no”.

If we are acquainted, have good-faith assumptions, and circumstances allow it, we can discuss 1:1 in person. Online communication doesn’t seem to work very well for potentially divisive issues. Getting to know each other works better in my experience.

The Larger Point

There’s a larger point here, the one Mitchell made: people in any group or project of significant size and diversity will not agree on many crucial issues unrelated to the group or project.

I know people doing a startup who testify that even at fewer than 20 employees, they have to face this fact. It’s obviously true for much larger communities such as JS and Mozilla. Not only is insisting on ideological uniformity impractical, it is counter-productive.

So I do not insist that anyone agree with me on a great many things, including political issues, and I refrain from putting my personal beliefs in others’ way in all matters Mozilla, JS, and Web. I hope for the same in return.

/be

(Comments disabled on this one.)

18 Mar 2012

Video, Mobile, and the Open Web

[Also posted at hacks.mozilla.org.]

I wrote The Open Web and Its Adversaries just over five years ago, based on the first SXSW Browser Wars panel (we just had our fifth, it was great — thanks to all who came).

Some history

The little slideshow I presented is in part quaint. WPF/E and Adobe Apollo, remember those? (Either the code names, or the extant renamed products?) The Web has come a long way since 2007.

But other parts of my slideshow are still relevant, in particular the part where Mozilla and Opera committed to an unencumbered <video> element for HTML5:

  • Working with Opera via WHATWG on <video>
    • Unencumbered Ogg Theora decoder in all browsers
    • Ogg Vorbis for <audio>
    • Other formats possible
    • DHTML player controls

We did what we said we would. We fought against the odds. We carried the unencumbered HTML5 <video> torch even when it burned our hands.

We were called naive (no) idealists (yes). We were told that we were rolling a large stone up a tall hill (and how!). We were told that we could never overcome the momentum behind H.264 (possibly true, but Mozilla was not about to give up and pay off the patent rentiers).

Then in 2009 Google announced that it would acquire On2 (completed in 2010), and Opera and Mozilla had a White Knight.

At Google I/O in May 2010, Adobe announced that it would include VP8 (but not all of WebM?) support in an upcoming Flash release.

On January 11, 2011, Mike Jazayeri of Google blogged:

… we are changing Chrome’s HTML5 <video> support to make it consistent with the codecs already supported by the open Chromium project. Specifically, we are supporting the WebM (VP8) and Theora video codecs, and will consider adding support for other high-quality open codecs in the future. Though H.264 plays an important role in video, as our goal is to enable open innovation, support for the codec will be removed and our resources directed towards completely open codec technologies.

These changes will occur in the next couple months….

A followup post three days later confirmed that Chrome would rely on Flash fallback to play H.264 video.

Where we are today

It is now March 2012 and the changes promised by Google and Adobe have not been made.

What’s more, any such changes are irrelevant if made only on desktop Chrome — not on Google’s mobile browsers for Android — because authors typically do not encode twice (once in H.264, once in WebM); instead they write Flash fallback in an <object> tag nested inside the <video> tag. Here’s an example adapted from an Opera developer document:

<video controls poster="video.jpg" width="854" height="480">
 <source src="video.mp4" type="video/mp4">
 <object type="application/x-shockwave-flash" data="player.swf"
         width="854" height="504">
  <param name="allowfullscreen" value="true">
  <param name="allowscriptaccess" value="always">
  <param name="flashvars" value="file=video.mp4">
  <!--[if IE]><param name="movie" value="player.swf"><![endif]-->
  <img src="video.jpg" width="854" height="480" alt="Video">
  <p>Your browser can't play HTML5 video.
 </object>
</video>

The Opera doc nicely carried the unencumbered video torch by including

 <source src="video.webm" type="video/webm">

after the first <source> child in the <video> container (after the first, because of an iOS WebKit bug, the Opera doc said), but most authors do not encode twice and host two versions of their video (yes, you who do are to be commended; please don’t spam my blog with comments, you’re not typical — and YouTube is neither typical nor yet completely transcoded [1]).

Of course the ultimate fallback content could be a link to a video to download and view in a helper app, but that’s not “HTML5 video” and it is user-hostile (profoundly so on mobile). Flash fallback does manage to blend in with HTML5, modulo the loss of expressiveness afforded by DHTML playback controls.

Now, consider carefully where we are today.

Firefox supports only unencumbered formats from Gecko’s <video> implementation. We rely on Flash fallback that authors invariably write, as shown above. Let that sink in: we, Mozilla, rely on Flash to implement H.264 for Firefox users.

Adobe has announced that it will not develop Flash on mobile devices.

In spite of the early 2011 Google blog post, desktop Chrome still supports H.264 from <video>. Even if it were to drop that support, desktop Chrome has a custom patched Flash embedding, so the fallback shown above will work well for almost all users.

Mobile matters most

Android stock browsers (all Android versions), and Chrome on Android 4, all support H.264 from <video>. Given the devices that Android has targeted over its existence, where H.264 hardware decoding is by far the most power-efficient way to decode, how could this be otherwise? Google has to compete with Apple on mobile.

Steve Jobs may have dealt the death-blow to Flash on mobile, but he also championed and invested in H.264, and asserted that “[a]ll video codecs are covered by patents”. Apple sells a lot of H.264-supporting hardware. That hardware in general, and specifically in video playback quality, is the gold standard.

Google is in my opinion not going to ship mobile browsers this year or next that fail to play H.264 content that Apple plays perfectly. Whatever happens in the very long run, Mozilla can’t wait for such an event. Don’t ask Google why they bought On2 but failed to push WebM to the exclusion of H.264 on Android. The question answers itself.

So even if desktop Chrome drops H.264 support, Chrome users almost to a person won’t notice, thanks to Flash fallback. And Apple and Google, along with Microsoft and whomever else might try to gain mobile market share, will continue to ship H.264 support on all their mobile OSes and devices — hardware-implemented H.264, because that uses far less battery than alternative decoders.

Here is a chart of H.264 video in HTML5 content on the Web from MeFeedia:

MeFeedia.com, December 2011

And here are some charts showing the rise of mobile over desktop from The Economist:

The Economist, October 2011

These charts show Bell’s Law of Computer Classes in action. Bell’s Law predicts that roughly every decade a new, lower-cost class of computing devices emerges and eventually displaces the established classes.

In the face of this shift, Mozilla must advance its mission to serve users above all other agendas, and to keep the Web — including the “Mobile Web” — open, interoperable, and evolving.

What Mozilla is doing

We have successfully launched Boot to Gecko (B2G) and we’re preparing to release a new and improved Firefox for Android, to carry our mission to mobile users.

What should we do about H.264?

Andreas Gal proposes to use OS- and hardware-based H.264 decoding capabilities on Android and B2G. That thread has run to over 240 messages, and spawned some online media coverage.

Some say we should hold out longer for someone (Google? Adobe?) to change something to advance WebM over H.264.

MozillaMemes.tumblr.com/post/19415247873

Remember, dropping H.264 from <video> only on desktop and not on mobile doesn’t matter, because of Flash fallback.

Others say we should hold out indefinitely and by ourselves, rather than integrate OS decoders for encumbered video.

I’ve heard people blame software patents. I hate software patents too, but software isn’t even the issue on mobile. Fairly dedicated DSP hardware takes in bits and puts out pixels. H.264 decoding lives completely in hardware now.

Yes, some hardware also supports WebM decoding, or will soon. Too little, too late for HTML5 <video> as deployed and consumed this year or (for shipping devices) next.

As I wrote in the newsgroup thread, Mozilla has never ignored users or market share. We do not care only about market share, but ignoring usability and market share can easily lead to extinction. Without users our mission is meaningless and our ability to affect the evolution of open standards goes to zero.

Clearly we have principles that prohibit us from abusing users for any end (e.g., by putting ads in Firefox’s user interface to make money to sustain ourselves). But we have never rejected encumbered formats handled by plugins, and OS-dependent H.264 decoding is not different in kind from Flash-dependent H.264 decoding in my view.

We will not require anyone to pay for Firefox. We will not burden our downstream source redistributors with royalty fees. We may have to continue to fall back on Flash on some desktop OSes. I’ll write more when I know more about desktop H.264, specifically on Windows XP.

What I do know for certain is this: H.264 is absolutely required right now to compete on mobile. I do not believe that we can reject H.264 content in Firefox on Android or in B2G and survive the shift to mobile.

Losing a battle is a bitter experience. I won’t sugar-coat this pill. But we must swallow it if we are to succeed in our mobile initiatives. Failure on mobile is too likely to consign Mozilla to decline and irrelevance. So I am fully in favor of Andreas’s proposal.

Our mission continues

Our mission, to promote openness, innovation, and opportunity on the Web, matters more than ever. As I said at SXSW in 2007, it obligates us to develop and promote unencumbered video. We lost one battle, but the war goes on. We will always push for open, unencumbered standards first and foremost.

In particular we must fight to keep WebRTC unencumbered. Mozilla and Opera also lost the earlier skirmish to mandate an unencumbered default format for HTML5 <video>, but WebRTC is a new front in the long war for an open and unencumbered Web.

We are researching downloadable JS decoders via Broadway.js, but fully utilizing parallel and dedicated hardware from JS for battery-friendly decoding is a ways off.

Can we win the long war? I don’t know if we’ll see a final victory, but we must fight on. Patents expire (remember the LZW patent?). They can be invalidated. (Netscape paid to do this to certain obnoxious patents, based on prior art.) They can be worked around. And patent law can be reformed.

Mozilla is here for the long haul. We will never give up, never surrender.

/be

[1] Some points about WebM on YouTube vs. H.264:

  • Google has at best transcoded only about half the videos into WebM. E.g., this YouTube search for “cat” gives ~1.8M results, while the same one for WebM videos gives 704K results.
  • WebM on YouTube is presented only for videos that lack ads, which is a shrinking number on YouTube. Anything monetizable (i.e., popular) has ads and therefore is served as H.264.
  • All this is moot when you consider mobile, since there is no Flash on mobile, and as of yet no WebM hardware, and Apple’s market-leading position.

27 Feb 2012

Community-Prioritized Web Standards

Mozilla is happy to support Facebook in forming a Core Mobile Web Platform W3C Community Group in which to curate prioritized, tiered lists of emerging and de facto standards that browsers should support in order for the Web to compete with native application stacks on mobile devices.

The W3C Community Groups do not create normative specifications; their work is informative at most [UPDATED per Ian Jacobs’ comment]. However I believe they can add significant value, especially by helping developers make their priorities clear to the implementors who tend to control the normative specs (W3C Recommendations).

Standards-making like law-making is definitely sausage-making. How could it be otherwise, with intensely competitive companies trying to work together?

On top of this, consider how conflicted many standards bodies are by pay-to-play, however muted and tamed by “process”. Anyone can join with enough money, and inject a divergent agenda or random noise into the process.

One inevitable outcome of these conflicts is too many proposed and even finalized standards for all browsers possibly to implement correctly and completely. The nice thing about standards is….

Who is best situated to advise implementors (mainly browser vendors) on which standards to prototype and finalize first? In my view, developers. But of course developers at large do not answer with one voice. Developer communities must acclaim their own leaders, who then speak to standards bodies.

Last year, Facebook joined the W3C. I thought at the time “there is a company with skin in the Web content game, not only for pages but especially for apps.” Facebook relies heavily on HTML5, CSS, and JS. Facebook has no browser in the market to pull focus or inject asymmetric browser/service integration agendas.

And Facebook has hired long-time Open Web developers who have risen to be leaders in their communities: James Pearce and Tobie Langel.

So I encourage everyone interested in helping to join with James, Tobie and others in the new Core Mobile Web Platform community group. Together we can get the specs that Web developers deserve, completed in the right order with multiple interoperating implementations.

/be

22 Feb 2012

Mobile Web API Evolution

Ragavan Srinivasan’s post about the forthcoming Mozilla Marketplace for Open Web Apps inspired me to write about Mozilla’s surging Web and Device API standards work.

A bit of background. Mozilla has always contributed to web standards, going back to the start of the project. We co-founded the WHAT-WG to kick off HTML5. As readers of this blog know, we are a leader in JS standardization. We have some of the top CSS and layout experts in the world.

In the last eight months, our efforts to extend web standards to include the new APIs needed to build compelling apps and OS components on mobile devices have really caught fire. B2G and Open Web Apps are the fuel for this fire.

So I thought I would compile a list of emerging APIs to which we’ve contributed. In citing Mozillans I do not mean to minimize the efforts of standardization colleagues at Google, Microsoft, Nokia, Opera, the W3C and elsewhere. Standards are a multi-vendor effort (although, excluding WebGL [see UPDATE below], one shiny name is conspicuously absent from this list).

The Mozilla contributions are worth noting both to acknowledge the individuals involved, and to highlight how Mozilla is championing device APIs for the web without having a native application stack blessed with such APIs on offer. We see the Web as quickly evolving to match native stacks. We have no other agenda than improving the Web to improve its users’ lives, including Web developers’ lives — especially mobile users and developers.

As always, standards in progress are subject to change, yet require prototype implementation and user-testing. Mozilla remains committed to playing fairly by not forging de-facto standards out of prototypes, rather proposing before disposing and in the end tracking whatever is standardized.

Here is the list, starting with some 2011-era work:

  • Geolocation, with Google contributing the editor and Firefox (thanks to Jay Sullivan leading the charge) implementing early.
  • WebGL (UPDATE: Chris Marrin of Apple edited) and typed arrays.
  • Gamepad API. Co-editor: Ted Mielczarek. Mozillans are also contributing to Pointer Lock.
  • Screen Orientation. Editor: Mounir Lamouri.
  • navigator.getUserMedia. Co-editor: Anant Narayanan
  • Battery Status (in Last Call). From the Acknowledgements:

    Big thanks to the Mozilla WebAPI team for their invaluable feedback based on prototype implementations.

  • Media Capture. Fabrice Desré prototype-implemented in Gecko.
  • Network API. Editor: Mounir Lamouri.
  • Web Telephony. Ben Turner, Jonas Sicking, Philipp von Weitershausen.
  • Web SMS. Mounir Lamouri, Jonas Sicking.
  • Vibration. From the Acknowledgements:

    The group is deeply indebted to Mounir Lamouri, Jonas Sicking, and the Mozilla WebAPI team in general for providing the WebVibrator prototype as an initial input.

  • File API. Editors: Arun Ranganathan, Jonas Sicking.
  • IndexedDB. Editors include Jonas Sicking.

I did not list most of the HTML5 and Web API work aimed at Desktop Firefox, to focus on the new mobile-oriented additions. There’s more to say, including about bundled-permission follies and how to weave permission-granting (with memorization) into interactions, but not here.

One last note. The CSS vendor prefix brouhaha had, among many salutary effects, the benefit of shining light on an important requirement of competitive mobile web development: CSS style properties such as -webkit-animation-*, however you spell them, must have fast and beautiful implementations across devices for developers to find them usable: 60Hz, artifact-free rendering under touch control. This requires such work as off-main-thread compositing and GL layers.

This is a high technical bar, but we are in the process of meeting it in the latest Firefox for Android and B2G builds, thanks to hard work from many people, especially Patrick Walton, Robert O’Callahan, Chris Jones, and Andreas Gal. Onward!

/be

28 Oct 2011

JSConf.eu

JSConf.eu 2011 was terrific, bigger and juicier than last year, with a strong sense of community felt from the reject.js pre-conf through the opening and closing talks.

Chris Williams makes a moving plea for an end to negativity, meaning trolling, flaming, mocking, and hating in online media.

This sounds utopian, like “an end to history”. But it is good as an aspiration, a constant reminder, since we’ve all seen how many people tend to be more negative online than they are in person. This isn’t just a matter of isolated individual behavior, free of cultural feedback loops. The new media reinforce tribalism.

However, it is hard to be positive about some things. I will persevere….

JSConf.eu had too many awesome talks to cover without bi-locating. Mozillans were well-represented, including dmandelin and dvander on JavaScript JITs, Marijn Haverbeke on DOM implementation techniques, Chris Heilmann on Community JS reloaded – how to rock as a movement, and Andreas Gal on PDF.js. Janet Swisher led the MDC doc sprint in the Hacker Lounge.

I would like to single out Alon Zakai’s Emscripten talk. Emscripten is an LLVM-to-JS compiler, which means it enables compiling C, C++, and Objective-C (and other languages with LLVM front ends) to JS. What’s more, interpreters written in C for Python, Ruby, and Lua have been compiled and hosted on the web.

Alon’s results are impressive, with lots of room for more wins. At JSConf.eu, jaws dropped and eyes were opened.

For my talk, I reprised some CapitolJS material, including the RiverTrail demo, which won loud and enthusiastic applause when I clicked on the “Parallel” button.

(A few people asked afterward about whether the graphics was running on one of four cores. I’ll repeat the answer here: the particle system demo uses WebGL targeting the GPU for rendering, and the four CPUs’ vector units for n-body solving. All from deadlock-free, data-race-free, seemingly single-threaded JS.)

Here’s the video of my talk:

The amazing Anna Lena Schiller created infographics for all the talks, on the spot — a truly impressive display of concentration and stamina. Here’s the one she did for my talk:

JSConf.eu-2011-InfoGraphic

And here are the updated and new slides I presented, showing ES6 work-in-progress (none of it final, so don’t panic) and covering some current controversies.

JSLOL.007

From recent es-discuss messages, I’m afraid that classes are on their way out of ES6. This seems a shame, and avoidable. In hindsight, we did not have all class advocates working in concert on the hard issues last year and earlier this year. But we also do not agree on what’s required for ES6, and some on TC39 view minimizing as future-hostile.

To be blunt, we lost some “classes” advocates who work for Google to Dart. Others at Google on TC39 seem to want more out of ES6 classes than even Dart guarantees (see the future-hostile point above).

I’m not slamming Google as a company here, since it does still support people working on JS in TC39. I respect the people involved and believe they’re for the most part making their own choices. But Dart and other unrelated Google agenda items do impose clear and significant opportunity costs on Google’s standards activities.

To remain positive per “An End to Negativity”, I’ll simply conclude that we TC39ers should pay attention to Dart now that it is out, even though we’ve lost time and potential contributions.

The famous Tony Hoare quote that Bill Frantz cited, which argues for deferring classes, is this:

When any new language design project is nearing completion, there is always a mad rush to get new features added before standardization. The rush is mad indeed, because it leads into a trap from which there is no escape. A feature which is omitted can always be added later, when its design and its implications are well understood. A feature which is included before it is fully understood can never be removed later.
From C. A. R. Hoare’s 1980 ACM Turing Award Lecture

I agree with Erik Arvidsson that “[b]y not providing [class] syntax we are continuing to encourage a million incompatible ‘class’ libraries.” I’m with Erik: I would still like to see TC39 agree on minimal classes. But not at any cost.

Onward to new proposals with sometimes-tentative syntax. I’m continuing to “live in a fishbowl” by showing these proposals, even though doing so risks drive-by misinterpretation that we have finalized the sum of all proposals.

So, please don’t freak out. Not all of this will make it as proposed. We may also make cuts. But it’s important to address the use-cases motivating these proposals, take in the fullness of the problem space and potential solutions, and do the hermeneutic spiral.

JSLOL.008

Apart from font issues that make <| look lopsided or non-triangular, this proposal looks good. It replaces the main legitimate use-case for assigning to __proto__: presetting the prototype link in an object literal.
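For readers who haven’t followed the strawman: here is a small ES5 sketch (mine, not from the slides) of the use-case <| would sugar in object literals, namely presetting the prototype link at creation time rather than assigning to the non-standard __proto__.

```javascript
// The use-case <| targets: establish [[Prototype]] when the object
// is created, instead of mutating the non-standard __proto__ later.
var proto = {
  greet: function () { return "hello, " + this.name; }
};

// ES5's Object.create presets the prototype link up front.
var obj = Object.create(proto);
obj.name = "world";

console.log(obj.greet());                          // "hello, world"
console.log(Object.getPrototypeOf(obj) === proto); // true
```

The literal-syntax sugar matters because Object.create forces you to add own properties after the fact (or via verbose property descriptors), whereas <| keeps the literal’s declarative shape.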

JSLOL.009

Unlike Object.extend, .{ copies only “own” properties from its right-hand-side object literal, and (this is a crucial difference) it also copies properties with private name object keys (which are non-enumerable by definition). For example: base.{[privateName]: value, publicName: value2}, given a private name object bound to privateName in scope.
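To make the copying semantics concrete, here is a rough ES5 sketch (mine, not spec text) of the string-keyed part of .{ — private name objects have no ES5 analogue, so that crucial part is omitted here:

```javascript
// Sketch of .{ for string keys only: copy own properties (enumerable
// or not) from the right-hand side onto the base, and return the base
// so extensions chain.
function extendOwn(base, rhs) {
  Object.getOwnPropertyNames(rhs).forEach(function (key) {
    Object.defineProperty(base, key,
                          Object.getOwnPropertyDescriptor(rhs, key));
  });
  return base;
}

var base = { a: 1 };
var result = extendOwn(base, {
  b: 2,
  toString: function () { return "ext"; }
});

console.log(result.a + result.b); // 3
console.log(result === base);     // true: the base is the value
console.log(String(result));      // "ext"
```

Using getOwnPropertyNames rather than a for-in loop is what makes this “own”-only: inherited and non-enumerable keys behave correctly, which Object.extend-style for-in copying gets wrong.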

JSLOL.010

Design patterns point to programming language bugs. Nevertheless, this class pattern shows clever work by Allen Wirfs-Brock, decomposing classes-as-sugar into chained operator expressions. It’s still a bit verbose and error-prone in my opinion, and cries out for the ultimate sugar of minimal class syntax (if only we could agree on that).

JSLOL.011

Much of the Dart class syntax design looks good to me. Possibly TC39 can agree to adopt it, with necessary adjustments. It would still be sugar for constructors and prototypes.

JSLOL.012

Arrow function syntax faces an uphill battle due to the combination of two things: TC39’s agreement to future-proof by having an unambiguous LR(1) grammar (after ASI and with lookahead restrictions), and the comma expression, (a, b, c), which I copied into JS’s grammar straight from C (not from Java, which left it out, instead providing comma-separated special forms in a few contexts, e.g. for(;;) loop heads). You can’t have both, and we do not want to remove the comma expression in Harmony.
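A minimal illustration of the grammar tension, written with arrow syntax as eventually proposed (an assumption about the final spelling): a parser scanning "(a, b" cannot tell until it does or does not see "=>" whether it has a comma expression or a parameter list.

```javascript
// Comma expression: operands evaluate left to right; the value of
// the whole expression is the last operand, exactly as in C.
var x = (1 + 1, 2 + 2);
console.log(x); // 4

// Arrow parameter list: the prefix "(a, b" looks identical, but once
// "=>" appears it must be reinterpreted as formal parameters.
var add = (a, b) => a + b;
console.log(add(1, 2)); // 3
```

That reinterpretation is exactly what an LR(1) grammar cannot do with one token of lookahead, hence the uphill battle.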

JSLOL.013

JSLOL.014

I’m quite in favor of block-lambdas, and they meet formal approval from TC39’s strictest grammarian. Some still object to them as an alien DNA injection from Ruby and Smalltalk, both syntactically and (with Tennent Correspondence Principle conformance regarding return, break, continue, and this) semantically.

JSLOL.015

At this point, ES6 has no shorter function syntax. This seems like a loss, and fixable, to me. Your comments welcome, especially if they make novel distinctions that help forge consensus.

JSLOL.016

During the talk and Q&A, I recounted how the WHAT-WG was created to counteract a standards body gone wrong (the 2004-era W3C). I then raised the idea of a community-based group, a “JS-WG”, to augment the much healthier but still under-staffed Ecma TC39 committee.

Besides floating more ideas (really, the point is not to bikeshed endlessly or take in too many proposals to digest), a JS-WG worth organizing might actually develop draft specs and prototype implementation patches for JavaScriptCore, SpiderMonkey, and V8. The maintainers of those engines could use the help, and with patches and patched builds, we could scale up user testing beyond what’s in the cards now.

JSLOL.017

I know it’s hard to believe, but people are finally realizing that with V8 prototyping alongside SpiderMonkey, ES6 is happening. It’ll be prototyped in pieces. I hope many will be “on by default” (e.g., not under a flag in Chrome) well before the new edition is standardized (end of 2013). That’s how we roll in Firefox with SpiderMonkey.

/be