Cisco’s H.264 Good News

As I noted last year, one of the biggest challenges to open source software has been the patent status of video codecs. The most popular codec, H.264, is patent-encumbered and licensed by MPEG LA, under terms that prevent distributing it with open source products including Firefox. Cisco has announced today that they are going to release a gratis, high quality, open source H.264 implementation — along with gratis binary modules compiled from that source and hosted by Cisco for download. This move enables any open source project to incorporate Cisco’s H.264 module without paying MPEG LA license fees.
 
We are grateful for Cisco’s contribution, and we will add support for Cisco’s OpenH264 binary modules to Firefox soon. These modules will be usable by downstream distributions of Firefox, as well as by any other project. In addition, we will work with Cisco to put the OpenH264 project on a sound footing and to ensure that it is governed well. We have already been collaborating very closely with Cisco on our WebRTC implementation, and we are excited to see Cisco deepening their commitment to the Open Web. Or, as Jonathan Rosenberg, Cisco CTO for Collaboration, puts it:

Cisco has a long-standing history of supporting and integrating open standards, open formats and open source technologies as a model for delivering greater flexibility and interoperability to users. We look forward to collaborating with Mozilla to help bring H.264 to the Web and to the Internet.

Here’s a little more detail about how things are going to work: Cisco is going to release, under the BSD license, an H.264 stack, and build it into binary modules compiled for all popular or feasibly supportable platforms, which can be loaded into any application (including Firefox). The binary modules will be available for download from Cisco, and Cisco will pay for the patent license from the MPEG LA. Firefox will automatically download and install the appropriate binary module onto each user’s machine when needed, unless disabled in the user’s preferences.
 
Interoperability is critical on the Internet, and H.264 is the dominant video codec on the Web. The vast majority of HTML5 streaming video is encoded using H.264, and most softphones and videoconferencing systems use H.264. H.264 chipsets are widely available and can be found in most current smartphones, including many Firefox OS phones. Firefox already supports H.264 for the video element using platform codecs where they are available, but as noted in my last blog post on the topic, not all OSes ship with H.264 included. Provided we can get AAC audio decoders to match, using Cisco’s OpenH264 binary modules allows us to extend support to other platforms and uses of H.264.
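For web developers, the practical effect shows up in the standard HTML5 capability check. Here is a minimal sketch of detecting H.264 support from page script (ordinary DOM APIs, nothing OpenH264-specific):

```js
// Feature-detect H.264 <video> support the standard way.
var video = document.createElement("video");
var canPlayH264 = video.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');

// canPlayType returns "probably", "maybe", or "" (empty means unsupported).
// With the OpenH264 module (and a matching AAC decoder) in place, this check
// should stop coming back empty on platforms whose OS ships no H.264 codec.
console.log(canPlayH264 || "H.264 not supported here");
```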
 
While Cisco’s move helps add H.264 support to Firefox on all OSes, we will continue to support VP8, both for the HTML video element and for WebRTC. VP8 and H.264 are both good codecs for WebRTC, and we believe that at this point, users are best served by having both choices.
 
Of course, this is not a complete solution. In a perfect world, codecs, like other basic Internet technologies such as TCP/IP, HTTP, and HTML, would be fully open and free for anyone to modify, recompile, and redistribute without license agreements or fees. Mozilla is fully committed to working towards that better future. To that end, we are developing Daala, a fully open next generation codec. Daala is still under development, but our goal is to leapfrog H.265 and VP9, building a codec that will be both higher quality and free of encumbrances. Mozilla has assembled an engineering dream team to develop Daala, including Jean-Marc Valin, co-inventor of Opus, the new standard for audio encoding; Theora project lead Tim Terriberry; and, most recently, Xiph co-founders Jack Moffitt, author of Icecast, and Monty Montgomery, author of Ogg Vorbis.
 
Cullen Jennings, Cisco Fellow, Collaboration Group, says:

Cisco is very excited about the future of royalty free codecs. Daala is one of the most interesting ongoing technical developments in the codec space and we have been contributing to the project.

At Mozilla we always come back to the question of what’s good for the users and in this case that means interoperation of copious H.264 content across OSes and other browsers. We’ve already started looking at how to integrate the Cisco-hosted H.264 binary module, and we hope to have something ready for users in early 2014.
 
Watch this space for more exciting developments in WebRTC, Daala, and open web video.
 
/be

The Bridge of Khazad-DRM

To lighten the mood:

[images: Balrog, Gandalf, gandalf-sign]

But actually, I’m serious.

People are rightly concerned about what is going on in the W3C with DRM, as couched in the Encrypted Media Extensions (EME) proposal. Please read Henri Sivonen’s explanation of EME if you haven’t yet.

As usual for us here at Mozilla, we want to start by addressing what is best for individual users and therefore what’s best for the Open Web, which in turn depends in large part on many interoperating browsers, and on open source implementations accounting for a significant number of those browsers and a significant share of the market.

We see DRM in general as profoundly hostile to all three of: users, open source software, and browser vendors who aren’t also DRM vendors.

Currently, users can play content that is subject to DRM restrictions using Firefox if they install NPAPI plugins, really Flash and Silverlight at this point. While we are not in favor of DRM, we do hear from many users who want to watch streaming movies to which they rent access rather than “buy to own”. The conspicuous example is Netflix, which currently uses Silverlight, but plans to use EME in HTML5.

(UPDATE: Netflix is using EME already in IE11 on Windows 8.1 without Silverlight. And Chrome OS has deployed EME as well. Apple too, in Mavericks.)

What the W3C is entertaining, due to Netflix, Google, and Microsoft’s efforts, is the EME API, which introduces into HTML5 a new kind of non-standardized plugin, neither Silverlight nor Flash, called a Content Decryption Module (CDM for short). We see serious problems with this approach. One is that the W3C apparently will not specify the CDM, so each browser may end up having its own system.
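To make the shape of EME concrete, here is a rough sketch of how a page asks for a key system. It uses the promise-based API that later drafts settled on, so the details differ from the 2013 proposal, and the key system string is only an example:

```js
// Rough sketch: probe for a CDM via EME. Which CDMs exist, and what they do,
// is exactly the underspecified part discussed above.
var config = [{
  initDataTypes: ["cenc"],
  videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }]
}];

navigator.requestMediaKeySystemAccess("org.w3.clearkey", config)
  .then(function (access) { return access.createMediaKeys(); })
  .then(function (mediaKeys) {
    return document.querySelector("video").setMediaKeys(mediaKeys);
  })
  .catch(function () {
    console.log("No usable CDM for this key system");
  });
```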

We are working to get Mozilla and all our users on the right side of this proposed API. We are not just going to say that users cannot have access to streaming Hollywood movies, as that is a good way to lose market share and not have any product with which to uphold our mission.

Mozilla’s mission requires us to build products that users love — Firefox, Firefox for Android, Firefox OS, and Firefox Marketplace are four examples — with enough total share to influence developers, and therefore standards. Given the forces at play, we have to consider EME carefully, not reject it outright or embrace it in full.

Again, we have never categorically rejected plugins, including those with their own DRM subsystems.

However, the W3C willfully underspecifying DRM in HTML5 is quite a different matter from browsers having to support several legacy plugins. Here is a narrow bridge on which to stand and fight — and perhaps fall, but (like Gandalf) live again and prevail in the longer run. If we lose this battle, there will be others where the world needs Mozilla.

By now it should be clear why we view DRM as bad for users, open source, and alternative browser vendors:

  • Users: DRM is technically a contradiction, which leads directly to legal restraints against fair use and other user interests (e.g., accessibility).
  • Open source: Projects such as mozilla.org cannot implement a robust and Hollywood-compliant CDM black box inside the EME API container using open source software.
  • Alternative browser vendors: CDMs are analogous to ActiveX components from the bad old days: different for each OS and possibly even available only to the OS’s default browser.

I continue to collaborate with others, including some in Hollywood, on watermarking, not DRM. More on that in a future post.

/be

Firefox OS Launches

Just under two years ago, we started Firefox OS as the Boot to Gecko (B2G) project, with little more than a belief that the Web should be the only platform you need to build an open mobile device ecosystem. This vision was so compelling that we found ourselves on a rocket, joined by developers and partners around the world.

Today, I’m thrilled to report that Firefox OS phones go on sale in less than a day in Spain, with other launches to follow. See the Mozilla and Telefónica announcements. As Christian Heilmann says, “the fox is out of the bag.”

This is just the end of the beginning, a great first step into retail channels. Everyone should have the option of Open Web Devices for the benefits that come with them: owning your own stuff, innovating at the edges of the network, not having to ask permission to hack. The world needs the principles of the Web now more than ever, so please join us:

  • Mozilla Developer Network Firefox OS top-level page
  • How to contribute to Firefox OS (even if you’re not technical)

Thanks,

/be

The Cookie Clearinghouse

As you may recall from almost six weeks ago, we held the Safari-like third-party cookie patch, which blocks cookies set for domains you have not visited according to your browser’s cookie database, from progressing to Firefox Beta, because of two problems:

False positives. For example, say you visit a site named foo.com, which embeds cookie-setting content from a site named foocdn.com. With the patch, Firefox sets cookies from foo.com because you visited it, yet blocks cookies from foocdn.com because you never visited foocdn.com directly, even though there is actually just one company behind both sites.

False negatives. Meanwhile, in the other direction, just because you visit a site once does not mean you are ok with it tracking you all over the Internet on unrelated sites, forever more. Suppose you click on an ad by accident, for example. Or a site you trust directly starts setting third-party cookies you do not want.

Our challenge is to find a way to address these sorts of cases. We are looking for more granularity than deciding automatically and exclusively based upon whether you visit a site or not, although that is often a good place to start the decision process.

The logic driving us along the path to a better default third-party cookie policy looks like this:

  1. We want a third-party cookie policy that better protects privacy and encourages transparency.
  2. Naive visited-based blocking results in significant false negative and false positive errors.
  3. We need an exception management mechanism to refine the visited-based blocking verdicts.
  4. This exception mechanism cannot rely solely on the user in the loop, managing exceptions by hand. (When Safari users run into a false positive, they are advised to disable the block, and apparently many do so, permanently.)
  5. The only credible alternative is a centralized block-list (to cure false negatives) and allow-list (for false positives) service.
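Putting those five points together, the decision we have in mind looks roughly like this (an illustrative sketch, not Gecko code; the real policy and the CCH list formats are still being worked out):

```js
// Illustrative only: refine the naive visited-based verdict with centralized lists.
function acceptThirdPartyCookie(cookieDomain, visitedDomains, cchAllowList, cchBlockList) {
  if (cchBlockList.has(cookieDomain)) {
    return false;  // cures false negatives: visited once, but a known tracker
  }
  if (cchAllowList.has(cookieDomain)) {
    return true;   // cures false positives: e.g. foocdn.com owned by the foo.com operator
  }
  return visitedDomains.has(cookieDomain);  // the naive visited-based default
}
```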

I’m very pleased that Aleecia McDonald of the Center for Internet and Society at Stanford has launched just such a list-based exception mechanism, the Cookie Clearinghouse (CCH).

Today Mozilla is committing to work with Aleecia and the CCH Advisory Board, whose members include Opera Software, to develop the CCH so that browsers can use its lists to manage exceptions to a visited-based third-party cookie block.

The CCH proposal is at an early stage, so we crave feedback. This means we will hold the visited-based cookie-blocking patch in Firefox Aurora while we bring up CCH and its Firefox integration, and test them.

Of course, browsers would cache the block- and allow-lists, just as we do for safe browsing. I won’t try to anticipate or restate details here, since we’re just starting. Please see the CCH site for the latest.

We are planning a public “brown bag” event for July 2nd at Mozilla to provide an update on where things stand and to gather feedback. I’ll update this post with details as they become available (UPDATE: details are here), but I wanted to share the date ASAP.

/be

C is for Cookie

Mozilla is engaged in a broad, deep conversation about Internet privacy. We believe in putting users in control of their online experience, and we want a healthy, thriving web ecosystem — we do not see a contradiction. However, sometimes a crucial experiment is required to prove it.

To this end, we are testing a patch from Jonathan Mayer. Jonathan’s patch matches how Safari has worked for years, and does the following:

  • Allows cookies from sites you have already visited.
  • Blocks cookies from sites you have not visited yet.

The idea is that if you have not visited a site (including the one to which you are navigating currently) and it wants to put a cookie on your computer, the site is likely not one you have heard of or have any relationship with. But this is only likely, not always true. Two problems arise:

False positives. For example, say you visit a site named foo.com, which embeds cookie-setting content from a site named foocdn.com. With the patch, Firefox sets cookies from foo.com because you visited it, yet blocks cookies from foocdn.com because you never visited foocdn.com directly, even though there is actually just one company behind both sites.

False negatives. Meanwhile, in the other direction, just because you visit a site once does not mean you are ok with it tracking you all over the Internet on unrelated sites, forever more. Suppose you click on an ad by accident, for example. Or a site you trust directly starts setting third-party cookies you do not want.

Our challenge is to find a way to address these sorts of cases. We are looking for more granularity than deciding automatically and exclusively based upon whether you visit a site or not, although that is often a good place to start the decision process.

We plan to ship an evolution of the patch “on” by default, but we want to make refinements first. To make sure we get this right we need more data. Our next engineering task is to add privacy-preserving code to measure how the patch affects real websites. We will also ask some of our Aurora and Beta users to opt-in to a study with deeper data collection.

There are many conflicting claims about how this patch will affect the Internet. Why debate in theory what we can measure in practice? We are going to find out more and adjust course as needed. This is the essence of the release test cycle.

On Tuesday we did two things:

  1. The patch has progressed to the Beta release channel for Firefox 22, but it is not “on” by default there. This allows more people to test the patch via Firefox’s “preferences” (AKA “options”) user interface, and avoids an abrupt change for site owners while we work on handling the hard cases.
  2. The patch remains in the Aurora channel for Firefox, where it is “on” by default. This gives the patch better ongoing test coverage and facilitates A/B testing.

We have heard important feedback from concerned site owners. We are always committed to user privacy, and remain committed to shipping a version of the patch that is “on” by default. We are mindful that this is an important change; we always knew it would take a little longer than most patches as we put it through its paces.

For those who read this as Mozilla softening our stance on protecting privacy and putting users first, in a word: no. False positives break sites that users intentionally visit. (Fortunately, we haven’t seen too many such problems, but greater testing scale is needed.) False negatives enable tracking where it is not wanted. The patch as-is needs more work.

We look forward to continued dialog with colleagues, contributors, fans, and detractors. We will update all of you within six weeks so you can understand our thinking and how we will proceed. Comments welcome.

/be

P.S. Cookies (name history) were originally intended to be ephemeral (Windows 3.1 had so little usable memory with its 64K memory segments that Netscape’s founders had no choice). At first, they held only session state that could be recovered from the server by logging in again.

(Remind me to tell the story some day of Montulli’s aborted “twinkies” idea from the Netscape 2 era. UPDATE: Lou has published a new blog post about cookies.)

How far we have come in the amazing, living system that is the Web! No one planned for what actually happened, but with more work on the cookie policy in Firefox and (I hope) other browsers, I believe that we can evolve to a better space.

Today I Saw The Future

This morning, Mozilla and OTOY made an announcement:

Mozilla and OTOY deliver the power of native PC applications to the Web, unveil next generation JavaScript video codec for movies and cloud gaming

What this means:

ORBX.js, a downloadable HD codec written in JS and WebGL. The advantages are many. On the good-for-the-open-web side: no encumbered-format burden on web browsers; they are just IP-blind runtimes. Technical wins start with the ability to evolve and improve the codec over time, instead of taking ten years to specify and burn it into silicon.

After these come more wins: 25% better compression than H.264 for competitive quality, adaptive bit-rate while streaming, integer and (soon) floating point coding, better color depth, better intra-frame coding, a more parallelizable design — the list goes on.
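The “codec delivered as a download” idea is easier to see in code. Below is a toy sketch of the pattern ORBX.js (and Broadway.js before it) follows: fetch compressed data, decode it in JS/WebGL, and paint the frames yourself. The decoder call and stream URL are hypothetical stand-ins, since OTOY has not published ORBX.js as a drop-in function:

```js
// Toy sketch of a downloadable JS codec: the browser needs no native decoder,
// because decoding happens in ordinary page script (plus WebGL for speed).
// `orbxDecodeFrame` and "movie.orbx" are hypothetical placeholders, not real OTOY APIs.
var canvas = document.querySelector("canvas");
var ctx = canvas.getContext("2d");

fetch("movie.orbx")
  .then(function (response) { return response.arrayBuffer(); })
  .then(function (buffer) {
    function paint(timestamp) {
      var frame = orbxDecodeFrame(buffer, timestamp);  // hypothetical decoder call
      ctx.putImageData(new ImageData(frame.pixels, frame.width, frame.height), 0, 0);
      requestAnimationFrame(paint);
    }
    requestAnimationFrame(paint);
  });
```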

The GPU cloud has your back. Think of the amazing 3D games that we have on PCs, consoles, and handheld devices thanks to the GPU. Now think of hundreds of GPUs in the cloud, working for you to over-detail, ray/path-trace in realtime, encode video, do arbitrary (GPGPU) computation.

Or consider high-powered tools from Autodesk, Adobe, and others for 3D modeling and rendering:

Native apps from any popular OS, in the GPU cloud and on your browser. Yes, both: this is not just remote desktop tech, or X11 reborn via JS. Many local/remote hybrid computation schemes are at hand today, e.g. a game can do near-field computing in the browser on a beefy client while offloading lower LOD work to the GPU cloud.

OTOY’s CEO Jules Urbach demo’ed an entire Mac OS X desktop running in a cloud VM sandbox, rendering via ORBX.js to Firefox, but also showed a Windows homescreen running on his Mac — and the system tray, start menu, and app icons were all local HTML5/JS (apps were a mix ranging from mostly local to fully remoted, each in its own cloud sandbox).

Valve’s Steam was one such app:

Watermarking, not DRM. This could be huge. OTOY’s GPU cloud approach enables individually watermarking every intra-frame, and according to some of its Hollywood supporters including Ari Emanuel, this may be enough to eliminate the need for DRM.

We shall see; I am hopeful. This kind of per-user watermarking has been prohibitively expensive, but OTOY estimates the cost at pennies per movie with their approach.

Oculus Rift, Lightfield displays, Holodecks, and beyond. OTOY works with Paul Debevec of USC’s Institute for Creative Technologies. This is Tony Stark stuff, coming at us super-fast and soon to be delivered via JS, WebGL, and ORBX.js running in the browser.

I was thrilled to be included in today’s event, hosted at Autodesk’s fabulous San Francisco offices. I gave a demo of Epic Games’ Unreal Engine 3 (Unreal Tournament, “Sanctuary” level) running via Emscripten and asm.js at full frame-rate in Firefox Aurora, and spoke about how JS will continue to evolve “low-road” as well as “high-road” APIs and features to exploit parallel hardware.

As Jeff Kowalski, Autodesk’s CTO, pointed out, the benefits go beyond major cost reduction in CGI and similar processing work, to increase collaboration and innovation radically, by relieving creative people from having to sit at big workstations. The GPU cloud means many alternative ideas, camera angles, etc., can be tried without waiting hours for each rendering. Even from the beach, via your 4G-connected tablet. Teams around the world can collaborate closely as timezones permit, across the web.

We will continue to collaborate with OTOY; I’ll post updates on this topic. It’s hot, and moving very quickly. Kudos to OTOY for their brilliant innovations, and especially for porting them to JS and WebGL so quickly!

When we at Mozilla say the Web is the platform, we are not bluffing.

/be

P.S. Always bet on JS!

P.P.S. Hat tip to Andreas Gal for seeing far, with Broadway.js.

Mozilla at 15: Memories, and Thoughts on Mozilla Research

 

[air.mozilla.org video]
[slideshare.net link]

Disrupt any enterprise that requires new clothes.

Thoreau (abridged) adjusted for Mozilla by @lawnsea.

Mozilla Research Party Talk.022

I gave a brief talk last night at the Mozilla Research Party (first of a series), which happened to fall on the virtual (public, post-Easter-holiday) celebration of Mozilla’s 15th anniversary.

I was a last minute substitution for Andreas Gal, fellow mad scientist co-founder at Mozilla Research, so I added one slide at his expense. (This talk was cut down and updated lightly from one I gave at MSR Paris in 2011.) Thanks to Andreas for letting me use two of his facebook pics to show a sartorial pilgrim’s progress. Thanks also to Dave Herman and all the Mozilla Researchers.

Mozilla is 15. JavaScript is nearly 18. I am old. Lately I mostly just make rain and name things: Servo (now with Samsung on board) and asm.js. Doesn’t make up for not getting to name JS.

(Self-deprecating jokes aside, Dave Herman has been my naming-buddy, to good effect for Servo [MST3K lives on in our hearts, and will provide further names] and asm.js.)

Color commentary on the first set of slides:

Talk.003

I note that calling software an “art” (true by Knuth’s sensible definition) should not relieve us from advancing computer science, but I remain skeptical that software in the large can be other than a somewhat messy, social, human activity and artifact. But I could be wrong!

Talk.004

RAH‘s Waldo featured pantograph-based manipulators — technology that scaled over perhaps not quite ten orders of magnitude, if I recall correctly (my father collected copies of the golden age Astounding pulps as a teenager in the late 1940s).

No waldoes operating over this scale in reality yet, but per Dijkstra, our software has been up to the challenge for decades.

Talk.005

I like Ken‘s quote. It is deeply true of any given source file in an evolving codebase and society of coders. I added “you could almost say that code rusts.”

Talk.006

Here I would like to thank my co-founder and partner in Mozilla, Mitchell Baker. Mitchell and I have balanced each other out over the years, one of us yin to the other’s yang, in ways that are hard to put in writing. I can’t prove it, but I believe that until the modern Firefox era, if either of us had bailed on Mozilla, Mozilla would not be around now.

Talk.007

A near-total rewrite (SpiderMonkey and NSPR were conserved) is usually a big mistake when you already have a product in market. A paradox: this mistake hurt Netscape but helped Mozilla.

I lamented the way the Design Patterns book was waved around in the early Gecko (Raptor) days. Too much abstraction can be worse than too little. We took years digging out and deCOMtaminating Gecko.

As Peter Norvig argued, design patterns are bug reports against your programming language.

Still, the big gamble paid off for Mozilla, but it took a few more years.

Talk.008

Who remembers Netscape 6? At the time, a few managers with more ego than sense argued that “the team needs us to ship”, as if morale would fall if we held off till Mozilla 1.0. (I think they feared that an AOL axe would fall.) The rank and file were crying “Nooo!!!!!”

AOL kept decapitating VPs of the Netscape division until morale improved.

Talk.009

2001: Another year, another VP beheading, but this one triggered a layoff used as a pretext to eliminate Mitchell’s position. The new VP expected no mo’ Mitchell, and was flummoxed to find that on the next week’s project community-wide conference call, there was Mitchell, wrangling lizards! Open source roles are not determined solely or necessarily by employment.

At least (at some price) we did level the playing field and manage our way through a series of rapid-release-like milestones (“the trains will run more or less on time!”) to Mozilla 1.0.

Talk.010

Mozilla 1.0 didn’t suck.

It was funny to start as a pure open source project, where jwz argued only those with a compiler and the skill to use it should have a binary, and progress to the point where Mozilla’s “test builds” were more popular than Netscape’s product releases. That was an important clue, meaningful in conjunction with the nascent “mozilla/browser” focus on just the browser instead of a big ’90s-style app suite.

Talk.011

A lot of credit for the $2M from AOL to fund the Mozilla Foundation goes to Mitch Kapor. Thanks again, Mitch! This funding was crucial to get us to the launch-pad for Firefox 1.0.

We made a bit more money by running a Technical Advisory Board, or TAB, through which enterprise companies gave us advice that we proceeded to (mostly) ignore. The last TAB meeting was the biggest, and the one where Sergey Brin showed up representing Google.

Due to a back injury, Sergey stood a lot. This tended to intimidate some of the other TAB members, who were pretty clearly wondering “What’s going on? Should I stand too?” An accidental executive power move that I sometimes still employ.

Talk.028

Jump to today: here is Rust beating GCC on an n-body solver. Safety, speed, and concurrency are a good way to go through college!

Talk.029

As you can see, Mozilla Research is of modest size, yet laser-focused on the Web, and appropriately ambitious. We have more awesome projects coming, along with lots of industrial partners and a great research internship program. Join us!

/be

The Web is the Game Platform

This week, a number of Mozillians attended the annual Game Developers Conference in San Francisco to demonstrate how the Web is a competitive platform for gaming and game development.

We’ve worked very hard over the past couple of months on technologies used to speed up the Web for game development, including asm.js, a JavaScript optimization technology and pure-JS-subset target language for compilers; Emscripten, a C++ to JavaScript compiler; and OdinMonkey, the SpiderMonkey JavaScript engine’s compiler for asm.js source.
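For readers who have not seen asm.js yet: it is a strict subset of JavaScript in which type annotations are spelled as ordinary coercions, so it runs in any engine and can be compiled ahead of time where the engine recognizes it. A hand-written toy module looks like this (Emscripten output has the same shape, only much bigger):

```js
// A toy asm.js-style module: "use asm" plus coercion-based type annotations.
function MiniModule(stdlib) {
  "use asm";
  function square(x) {
    x = +x;            // annotate the parameter as a double
    return +(x * x);   // annotate the return value as a double
  }
  return { square: square };
}

// It is still plain JavaScript, so engines without asm.js optimizations run it too.
var square = MiniModule(this).square;
console.log(square(3));  // 9
```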

On Wednesday, we were happy to announce that we’ve teamed up with Epic Games to combine these technologies to bring Epic’s Unreal Engine 3 to the Web.

This is significant because it demonstrates to everyone that game developers and publishers can now take advantage of the reach and scale of the Web without the additional user-acquisition friction and even higher costs (infinite on iOS :-P) associated with third-party plugins. These technology advancements also mean improved performance and the ability to port lots of games to the Web on desktops, laptops, and mobile devices. (Note well that asm.js code runs pretty fast on modern browsers, and super-fast when optimized, as by OdinMonkey.)

We’ve also developed a new demo for Banana Bread that includes multi-player support and integrates parts of WebRTC for the data channel.
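The data-channel piece is a small API surface. Here is a minimal sketch using the unprefixed names (Firefox currently still prefixes the constructor as mozRTCPeerConnection); a real multi-player game also needs a signaling path to exchange the offer/answer and ICE candidates:

```js
// Minimal sketch: an unordered, no-retransmit channel suits frequent game-state updates.
var pc = new RTCPeerConnection();
var channel = pc.createDataChannel("game-state", {
  ordered: false,    // out-of-order delivery is fine for state snapshots
  maxRetransmits: 0  // do not retransmit stale state
});

channel.onopen = function () {
  channel.send(JSON.stringify({ type: "hello", player: "p1" }));
};
channel.onmessage = function (event) {
  console.log("peer says:", event.data);
};
```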

Big thanks to everyone on the gaming team and supporters throughout the Mozilla project for all the hard work to make this possible. I must single out Martin Best, Dave Herman, Vladimir Vukićević (WebGL creator!), Luke Wagner, and Alon Zakai (AKA @kripken).

I was personally thrilled to meet Tim Sweeney, CEO and founder of Epic Games, for breakfast before GDC Thursday. I’d been aware of Tim since his invited talk at POPL 2006. Tim and I were on very much the same page on many things, including JS’s ascendance as a safe target assembly-like language for C++ via asm.js and beyond.

/be

P.S. I’d like to thank Luke Wagner for taking on module ownership of SpiderMonkey, as Dave Mandelin has handed the baton on.

P.P.S. Hot rumor: WebGL in IE11? Source, judge for yourself. I have hopes.

UPDATE: withinwindows.com says WebGL is coming, disabled by default so far, and (look for “UPDATE3” run together at the original source) with a shader language different from GLSL ES: IESL, based on HLSL.

2nd UPDATE: From “UPDATE4” at François Remy’s blog post, GLSL is the default and you set a registry flag to get HLSL. Cool!

P.P.P.S. asm.js support in V8 bug on file.

MWC 2013, Firefox OS, and More Web API Evolution

Last week started with a bang, with Mozilla’s Firefox OS launch at Mobile World Congress 2013. We announced that Firefox OS had won the support of 18 carriers, four device manufacturers, and a major chipset company (Qualcomm) at a barn-burner of a press conference in Barcelona on Sunday night.

Pictures (the first shows the room less than half-full, pre-press-conference):

[photo]

Founders and executives:

[photo]

The Mozilla crew who helped at the press conference (others were in Barcelona doing equally important pre-MWC work):

[photo]

Here is our stunning, always-crowded booth in Hall 8 (“App Planet”):

[photo]

Two days later, the excitement ramped even higher with news of a fifth device maker, Sony.

Our success in attracting support from these partners is due in part to our ability to innovate and standardize the heretofore-missing APIs needed to build fully-capable smartphones purely from web standards. Therefore I will give an update below to my post from last MWC with details on API standardization progress over the last year.

I want not only to identify the good work being done, but also to reinforce that:

  • We are collaboratively building these APIs to fill real gaps in the standards.
  • We actively work to get them standardized across all operating systems and browsers, as part of Mozilla’s mission.
  • We have made terrific progress in the last year thanks to Firefox OS, and many of the APIs work in Firefox for Android too.
  • Coopetition for web developers has already caused Samsung to patch WebKit to support some of these APIs.
  • We will continue to update API specs and code as new devices and sensors come to market.

The missing APIs must be added to the web platform in order to enable the billions of new mobile users who will be coming online in the next few years to have affordable web-based phones, tablets, and apps. Emerging market consumers and developers generally cannot afford increasingly higher-end, native-app-advantaged smartphones from the two bigs.

The main reason that many developers historically have turned to native OS-specific app platforms and tools is that those were the only way to work with the different hardware elements and proprietary features (e.g., battery status, dialer, SMS, payments system) present on mobile devices. Those developers who bravely soldiered on, trying to use HTML5, were forced to wrap their code in compiled native-app wrappers such as those PhoneGap generates.

There was no technical reason for OS proprietors to disadvantage their web stacks, lock developers into their native stacks, or make their store an exclusive trust anchor. Yet the incumbents did hold the web back by withholding APIs.

Just as the first browsers’ creators exposed file pickers and alert dialogs to work with the underlying OS, we are finally filling in the web platform to include the missing smartphone APIs. This levels the web vs. native playing field. And by following our usual best practices of working in the open and submitting all of our work to the standardization process, we can be confident that the missing functionality will become available on all platforms eventually.
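To make “filling in the web platform” concrete, here is what one of the simplest of these APIs, Battery Status, looks like from page script. This sketch uses the promise-based shape the spec converged on; early implementations exposed a navigator.battery (or navigator.mozBattery) object instead, so treat the exact names as subject to change:

```js
// Query battery state and react to changes, with no native wrapper required.
navigator.getBattery().then(function (battery) {
  console.log("charging:", battery.charging,
              "level:", Math.round(battery.level * 100) + "%");

  battery.addEventListener("levelchange", function () {
    console.log("battery level is now", battery.level);
  });
});
```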

When these APIs are available on all web rendering engines — even desktops and laptops, where the APIs make sense — then developers can be sure that their code will run interoperably, no matter what the underlying operating system. Fragmentation, if any, could arise at the OS level, e.g., due to not updating the OS’s web engine (as happened in Android 2.3).

Barring OS-specific update failures, developers can be sure because they have consolidated cross-OS, cross-browser market power. In today’s competitive browser markets, web developers rule the roost; browser and web engine purveyors seek to win their favor. Developers will not bother with up-and-coming operating systems that deviate more than epsilon from the standards.

The example set by our good friends at Samsung in patching WebKit based on the new specifications validates our approach. It means that if Tizen devices become numerous, it is highly likely that mobile web applications that work on Firefox OS will work on Tizen.

Likewise, we are working with Microsoft on Pointer Events, since touch-enabled web apps must today unify mouse and touch events, and the previous W3C multi-touch work foundered on patent assertions.
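The payoff of Pointer Events is one event model instead of two. A minimal sketch using the standard event names (the element selector is just an example):

```js
// One listener covers mouse, touch, and pen; pointerId distinguishes simultaneous contacts.
var surface = document.querySelector("#game-surface");
surface.addEventListener("pointerdown", function (event) {
  console.log(event.pointerType, event.pointerId, event.clientX, event.clientY);
});
```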

The orders-of-magnitude increase in JavaScript performance across all engines in the last few years has been stunning. This boost helps to address another web vs. native “gap”. And just around the corner is asm.js — more on that in a future post!

Other performance enhancements include continued work on off main thread compositing and GL layers. Yes, it’s true: the modern mobile-focused web rendering model includes implicit threading in the form of the Compositor thread, and the GPU’s parallel hardware.

As always, standards in progress are subject to change, but they require prototype implementations and user-testing (“user” meaning both developer and consumer). Mozilla remains committed to playing fairly by not forging de-facto standards out of prototypes, rather proposing before disposing and in the end tracking whatever is standardized.

(This is why we are vendor-prefixing the less certain among our new APIs — but I think we should aim to unprefix quickly once we have emerging consensus among implementors.)

Here’s what we’ve achieved in the last year and what we’re working on:

  • Battery Status is now a W3C Candidate Recommendation (CR), and there was discussion recently about moving it to Proposed Recommendation (the last step before a REC, the status of an officially endorsed W3C recommended standard, indicating readiness for deployment within its problem domain).
  • WebTelephony. Mozillians: Ben Turner, Jonas Sicking, Philipp von Weitershausen. This is being worked on by Intel and Telefónica folks as the Telephony API in the SysApps Working Group.
  • WebSMS. Mozillians: Mounir Lamouri, Jonas Sicking. Being worked on as the Messaging API in the SysApps WG.
  • Pointer Lock: this API is largely stable and implemented in multiple browsers. Mozilla is participating in the W3C Pointer Events working group along with Microsoft, Google, Opera, jQuery, and others. Mozillian: Matt Brubeck, together with Microsoft’s Jacob Rossi.
  • Open WebApps has been proposed to W3C. We seem to have reached consensus to use the Mozilla proposal as a “First Public Working Draft” (FPWD) in the SysApps WG with Samsung as co-editor.
  • Alarm API is now a W3C working draft. Intel is helping edit, a great example of collaboration.
  • Web Activities, a simplified version of Web Intents. We focused on a simpler subset of Web Intents, which has proven to be a good idea. The feature set of Web Activities is working well, while Web Intents is currently going through a restart.
  • Push Notifications. This API is still in an early state. We have some prototypes but not yet shipping code. Getting this right is hard given that we want to create something secure, scaleable, and highly available.
  • WebFM API. We have an initial draft, but the W3C SysApps WG has decided to focus on a smaller set of specs first to build common ground. This API was deemed to be lower priority and so will have to wait a bit before it goes through standardization.
  • WebPayment, implemented by Fernando Jiménez Moreno of Telefónica. There is a W3C Community Group entirely focused on payments, and now a W3C Task Force. We will work in these groups to make a flexible API for payments part of the web standards.
  • Ambient light sensor, Mozillian Doug Turner is serving as editor.
  • Proximity sensor, note Doug Turner in acknowledgements.
  • UPDATE: Contacts Manager API, Eduardo Fullea and Jose M. Cantera of Telefónica are editing.

Developers interested in getting to know the new APIs better should look at this Mozilla Hacks post by Robert Nyman, which provides guidance on how to use them along with code examples.

So, Mozilla is leading the effort to complete the web platform on mobile devices. Our work is bearing fruit, as shown by the industry support for Firefox OS and the W3C’s efforts to standardize the new APIs. Even Andy Rubin had some kind words for us (thanks, Andy!).

The main prize for me remains the picture of smiling “mobile web developers” (read: “web developers”) that I can see just around the corner. Based on our MWC experience this year, the Web is in good shape to win that prize.

/be

Why Mozilla Matters

[This is an extended essay on the news out of Norway yesterday. See the closing for encouragement toward Opera and its fans, whatever the open source projects they choose to join, from me on behalf of Mozilla. /be]

Founder Flashback

I wrote about the founding of HTML5 in June, 2004, without dropping that acronym, mentioning only “Opera and others” as partners, because Apple was shy. Fragments of memory:

  • @t was then working for Microsoft (he’s at Mozilla now, it’s great to have him) and sitting in front of me during the second day of the workshop.
  • Vested interests touted XForms as inevitable (the UK Insurance Industry had standardized on it!).
  • Browser vendors were criticized as everything from uncooperative to backward in resisting the XML Utopia.
  • JavaScript took its lumps, including from Bert (whom I met years later at SXSW and got on fine with).

Hixie (pre-Google, already working on Web Forms 2) and Håkon of Opera joined David Hyatt of Apple, Mozilla’s @davidbaron and me at a San Jose pub afterward. There I uncorked and said something like “screw it, let’s do HTML5!”

We intended to do the work in the WHAT-WG. Håkon reminded us that the only likely path to Royalty-Free patent covenants remained the W3C, so we should aim to land specs there. He also correctly foresaw that Microsoft would not join our legal non-entity, rather would prefer the W3C.

So we drank a toast to HTML5.

Fast-Forward to 2013

In early January, I heard from an old friend, who wrote “Some major shift is happening at Opera, senior developers are being laid off, there’s a structural change to the company….” Other sources soon confirmed what became news yesterday: Opera was dropping Presto for WebKit (the chromium flavor).

One of my sources wrote about Mozilla and why it matters to him:

Smart people, and people I know and like, at that. Interesting work going on in [programming languages and operating systems]. Probably the last relatively independent web technology player (not something I cared about historically, but after leaving the Apple iOS fiefdom for the Google Android fiefdom, I find I’m just as locked in and feel just as monitored).

I hear this kind of comment more often lately. I think it’s signal, not noise.

Why Mozilla Matters

The January signal reminded me of mail I wrote to a colleague about “How Mozilla is different”:

  1. We innovate early, often, and in the open: both standards and source, draft spec and prototype implementation, playing fair and revising both to achieve consensus.
  2. Our global community cuts across search, device, social network businesses and agendas, and wants the end-to-end and intertwingled mix-and-match properties of the Web, even if people can’t always describe these technical properties precisely, or at all. What they want most is to not be “owned”. They seek user sovereignty.
  3. We restored browser competition by innovating ahead of standards in user-winning ways. We’re doing it again, and with allies pulling other platforms up: Samsung is patching WebKit to have the B2G-pioneered device APIs.
  4. Because we are relatively free of agenda, we can partner, federate, integrate horizontally instead of striving to capture users in yet another all-purpose vertical silo.
  5. We advance the vision of you owning your own experience and data, attached to your identity, with whatever services you choose integrated. Examples include the Social API and the new Firefox-sync project that outsources storage service via DropBox, Box, etc.

The last point is hugely important. It’s why we have had such good bizdev and partnering with Firefox OS. But it is also why we have a strongly trusted brand and community engagement: people expect us to be an independent voice, to fight the hard but necessary fight, to be willing to stand apart some times.

I see these as critical distinguishing factors. No competing outfit has all of these qualities.

Thoughts on WebKit

I read webkit-dev; some very smart people post there. Yet the full WebKit story is more complex and interesting than shallow “one WebKit” triumphalist commentary suggests.

First, there’s not just one WebKit. Web Developers dealing with Android 2.3 have learned this the hard way. But let’s generously assume that time (years of it) heals all fragments or allows them to become equivalent.

WebKit has eight build systems, which cause perverse behavior. Peel back the covers and you’ll find tension about past code forks: V8 vs. Apple’s JavaScriptCore (“Nitro”), iOS (in a secret repository at apple.com for years, finally landing), various graphics back ends, several network stacks, two multi-process models (Chrome’s, stealth-developed for two years; and then Apple’s “WebKit2” framework). Behind the technical clashes lie deep business conflicts.

When tensions run high the code sharing is not great. Kudos to key leaders from the competing companies for remaining diplomatic and trying to collaborate better.

Don’t get me wrong. I am not here to criticize WebKit. It’s not my community, and it has much to offer the world on top of its benefits to several big companies — starting with providing a high-quality open-source implementation of web standards alongside Mozilla’s. But it is not the promised land (neither is Mozilla).

If you are interested in more, see Eric Seidel’s WebKit wishes post for a heartfelt essay from someone who used to work at Apple and now works (long time on Chrome) at Google.

My point is that there’s no “One WebKit”. Distribution power matters (more below near the end of this essay). Conflicting business agendas hurt. And sharing is always hard. There are already many cooks.

In software, it’s easier in the short run to copy design than code, and easier to copy code than share it. In this spirit the open-source JS engines (two for WebKit) have learned from one another. Mozilla uses JSC’s YARR RegExp JIT, and we aim to do more such copying or sharing over time.

Web Engine Trade-offs

Some of you may still be asking: beyond copying design or sharing pieces of code, why won’t Mozilla switch to WebKit?

Answering this at the level of execution economics seems to entail listing excuses for not doing something “right”. I reject that implied premise. More below, but at all three levels of vision, strategy, and certainly execution, Mozilla has good reasons to keep evolving Gecko, and even to research a mostly-new engine.

But to answer the question “why not switch” directly: the switching costs for us, in terms of pure code work (never mind loss of standards body and community leverage), are way too high for “switching to WebKit” on any feasible, keep-your-users, current-product timeline.

XUL is one part of it, and a big part, but not the only large technical cost. And losing XUL means our users lose the benefits of the rich, broad and deep Firefox Add-ons ecosystem.

In other words, desktop Firefox cannot be a quickly rebadged chromium and still be Firefox. It needs XUL add-ons, the Awesome Bar, our own privacy and security UI and infrastructure, and many deep-platform pieces not in chromium code.

Opera, as a pure business without all of XUL, the Mozilla mission, and our community, has far lower technical switching costs. This is especially so with Opera’s desktop share so low, and with Opera Mini acting as a transcoding proxy that “lowers” full web content and so tends to isolate content authors from the “site is broken, blame the browser” user-retention problems that afflict Opera’s Presto-on-the-desktop engine and browser.

Nevertheless, my sources testify that the technical switching costs for Opera are non-trivial, in spite of being lower relative to Mozilla’s exorbitant (multi-year, product-breaking) costs. This shows that the pure “business case” prevailed: Opera will save engineering headcount and be even more of a follower (at least at first) in the standards bodies.

The Big Picture

At the Mozilla mission level, monoculture remains a problem that we must fight. The web needs multiple implementations of its evolving standards to keep them interoperable.

Hyatt said pretty much the last sentence (“the web needs more implementations of its evolving standards to keep them interoperable”) to me in 2008 when he and I talked about the costs of switching to WebKit, technical and non-technical. That statement remains true, especially as WebKit’s bones grow old while the two or three biggest companies sharing it struggle to evolve it quickly.

True, some already say “bring on the monoculture”, imagining a single dominant power will distribute and evergreen one ideal WebKit. Such people may not have lived under monopoly rule in the past. Or they don’t see far enough ahead, and figure “après moi, le déluge” and “in the long run, we are all dead.” These folks should read John Lilly’s cautionary tumblr.

@andreasgal notes that the W3C tries not to make a standard without two or more interoperating prototype implementations, and that this now favors independent Gecko and WebKit, since the only other major engine for WebKit to pair with is Microsoft’s Trident. We shall see, but of course sometimes everyone cooperates, and politics makes strange bedfellows.

I expect more web engines in the next ten years, not fewer, given hardware trends and the power wall problem. In this light, Mozilla is investing not only in Gecko now, we are also researching Servo, which focuses on the high-degree parallel (both multicore CPU, and the massively parallel GPU) hardware architectures that are coming fast.

If we at Mozilla ever were to lose the standards body leverage needed to uphold our mission, then I would wonder how many people would choose to work for or with us. If Servo also lacked research partners and good prospects due to changing technology trends, we would have more such retention troubles. In such a situation, I would be left questioning why I’m at Mozilla and whether Mozilla matters.

But I don’t question those things. Mozilla is not Opera. If we were a more conventional business, without enough desktop browser-market share, we would probably have to do what Opera has done. But we’re not just a business, and our desktop share seems to be holding or possibly rising — due in part to the short-term wins we have been able to build on Gecko.

Future Web Engines

So realistically, to switch to WebKit while not dropping out of the game for years, we would have to start a parallel stealth effort porting to WebKit. But we do not have the hacker community (paid and volunteer) for that. Nowhere near close, given XUL and the other dependencies I mentioned, including the FFOS ones. (Plus, we don’t do “stealth”.)

The truth is that Gecko has been good for us, when we invest in it appropriately. (There’s no free lunch with any engine, given the continuously evolving Web.) We could not have done Firefox OS or the modern Firefox for Android without Gecko. These projects continue to drive Gecko evolution.

And again, don’t forget Servo. The multicore/GPU future is not going to favor either WebKit or Gecko especially. The various companies investing in these engines, including us but of course Apple, Google, and others, will need to multi-thread as well as process-isolate their engines to scale better on “sea of processors” future hardware.

There’s more to it than threads: due to Amdahl’s Law both threads and so-called “Data Parallelism”, aka SIMD, are needed at fine grain in all the stages of the engine, not just in image and audio/video decoding. But threads in C++ mean more hard bugs and security exploits than otherwise.
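For readers who want the arithmetic behind that Amdahl’s Law point (the standard formula, nothing engine-specific): if a fraction p of the work can run in parallel on n cores, the best possible speedup is

$$ S(n) = \frac{1}{(1 - p) + \frac{p}{n}} $$

so even with p = 0.9 and an unlimited number of cores, the ceiling is 10×. That is why the parallelism has to reach into every stage of the engine, not just the media decoders.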

I learned at SGI, which dived into the deep end of the memory-unsafe SMP kernel pool in the late ’80s, to never say never. Apple and Google can and probably will multi-thread and even SIMD-parallelize more of their code, but it will take them a while, and there will be productivity and safety hits. Servo looks like a good bet to me technically because it is safer by design, as the main implementation language, Rust, focuses on safety as well as concurrency.

Other approaches to massively parallel hardware are conceivable, and the future hardware is not fully designed yet, so I think we should encourage web engine research.

Strategic Imperatives

Beyond these considerations, we perceive a new strategic imperative, both from where we sit and from many of our partners: the world needs a new, cleanish-slate, safer/parallel, open-source web engine project. Not just for multicore devices, but also — for motivating reasons that vary among us and our partners — to avoid depending on an engine that an incredibly well-funded and lock-in-prone competitor dominates, namely WebKit.

Such a safer/parallel engine or engines should then be distributed and feed back on web standards, pushing the standards in more declarative and implicitly parallelizable directions.

Indeed more is at stake than just switching costs, standards progress, and our mission or values.

One pure business angle not mentioned above is that using an engine like WebKit (or what is analogous in context, e.g., Trident wrappers in China) reduces your product to competing on distribution alone. Front end innovations are not generally sticky; they’re in any event easy to clone. Consider Dolphin on Android. Or observe how Maxthon dropped from 20+% market share to below 1% in a year or so in China. Observe also the Qihoo/360 browser, with not much front end innovation, zooming from 0 to 20% in the same time frame.

Deep platform innovations can be sticky, and in our experience, they move the web forward for everyone. JS performance is one example. Extending the web to include device APIs held back for privileged native app stacks is another. So long as these platform innovations are standards-based and standardized, with multiple and open source implementations interoperating, and the user owns vendor-independent assets, then “quality of implementation and innovation” stickiness is not objectionable in my view.

Back to Mozilla. We don’t have the distribution power on desktop of our competitors, yet we are doing well and poised to do even better. This is an amazing testimony to our users and the value to them of our mission and the products we build to uphold it. In addition to Firefox on the desktop, I believe that Firefox OS and Firefox for Android (especially as a web app runtime) will gain significant distribution on mobile over the next few years.

Take Heart

So take heart and persevere. It is sad to lose one of the few remaining independent web platforms, Presto, created by our co-founder of HTML5 and the unencumbered <video> element. I hope that Opera will keep fighting its good fight within WebKit. Opera fans are always welcome in Mozilla’s community, at all levels of contribution (standards, hacking, engagement).

We don’t know the future, but as Sarah Connor carved, “No fate but what we make.” Whatever happens, Mozilla endures.

/be
