ORBX.js and related news

[Image: ORBX vs. H.264 comparison]

[UPDATE: see Jim’s fair comment below. /be]

I’m pleased to report that OTOY today has announced good news about ORBX.js and the Amazon Web Services ORBX and OctaneCloud AMIs (Amazon Machine Images, pronounced “AHmees” — who knew?), based on terrific adoption and developer interest:

  • Free ORBX and OctaneCloud AMIs forever, not just for a trial period. OTOY will focus higher up the value chain.
  • ORBX.js to be open-sourced on github as soon as OTOY delivers on prior promises, I hope by next summer.
  • Two major studios have been evaluating ORBX for a watermarked, DRM-free Video-on-Demand service.
  • OTOY has an ORBX encoder (built using their own OpenCL compiler) that runs as a small native loopback server, so it can be addressed by browser apps using WebSockets (a rough sketch follows this list). This is a clever interim solution that avoids plugins and anticipates “ensafened” WebCL, or Rust on the GPU, or a better solution for writing a downloadable and memory-safe encoder — something Mozilla Research has on its agenda.
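
To make the loopback idea concrete, here is a hypothetical sketch of a web page talking to such a local encoder over WebSockets. The port, path, and message shapes below are my own placeholders, not OTOY's actual protocol.

```js
// Hypothetical sketch only: the port, path, and message formats are assumptions.
var encoder = new WebSocket("ws://127.0.0.1:9090/orbx-encode");
encoder.binaryType = "arraybuffer";

encoder.onopen = function () {
  // Tell the local encoder what kind of stream to expect (made-up parameters).
  encoder.send(JSON.stringify({ width: 1280, height: 720, fps: 30 }));
};

encoder.onmessage = function (event) {
  // Encoded ORBX chunks come back as binary; hand them to a decoder or a peer.
  handleEncodedChunk(new Uint8Array(event.data));
};

// Push a raw RGBA frame (e.g. captured from a <canvas>) to the local encoder.
function sendFrame(canvas) {
  var ctx = canvas.getContext("2d");
  var pixels = ctx.getImageData(0, 0, canvas.width, canvas.height);
  encoder.send(pixels.data.buffer);
}

function handleEncodedChunk(bytes) {
  // Application-specific: buffer it, decode it with ORBX.js, or relay it onward.
}
```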

The deeper meaning here, in my view: a great rift emerged between CPU and GPU in the ’90s, where serial old x86 instruction set compatibility seemed to matter (remember shrink-wrap software?). The need for speed with binary compatibility begot big, power-hungry, superscalar CPUs, while from the SGI diaspora, the GPU went massively parallel.

One consequence of the rift: the rise of ARM on mobile, where binary compatibility did not and does not matter, but power efficiency does.

This rift may yet be healed, and in a way that avoids too much custom hardware (or else we will have to rely on FPGA-on-a-chip).

With enough homogeneity and parallel processing power, always-evolving video codecs, 3D model asset streams, and undreamed-of combinations should be feasible to implement in downloadable, power-efficient, safe code. Perhaps we can even one day kill off some of the video codec patent monsters that are currently burned into silicon.

More to come in the new year; this is just another happy rolling thunder update.

/be

Today I saw the future (Update)

As noted at the Mozilla blog, OTOY and Amazon along with Autodesk and Mozilla have announced the next step in Amazon and OTOY’s GPU/cloud effort.

Demo videos:

This means developers can get started using ORBX.js with GPU-cloud encoding and downloadable decoding on all modern Web clients.

It also means that any of the Hollywood Six can start a streaming video service that reaches the most users across the Web (compared to any other purely Web-based service), using watermarking not DRM. More on this soon, if all goes as I hope.

Note that I’m an OTOY advisor. Not because of any compensation, but because I believe in their approach and their talent.

/be

Cisco’s H.264 Good News

As I noted last year, one of the biggest challenges to open source software has been the patent status of video codecs. The most popular codec, H.264, is patent-encumbered and licensed by MPEG LA, under terms that prevent distributing it with open source products including Firefox. Cisco has announced today that they are going to release a gratis, high quality, open source H.264 implementation — along with gratis binary modules compiled from that source and hosted by Cisco for download. This move enables any open source project to incorporate Cisco’s H.264 module without paying MPEG LA license fees.
 
We are grateful for Cisco’s contribution, and we will add support for Cisco’s OpenH264 binary modules to Firefox soon. These modules will be usable by downstream distributions of Firefox, as well as by any other project. In addition, we will work with Cisco to put the OpenH264 project on a sound footing and to ensure that it is governed well. We have already been collaborating very closely with Cisco on our WebRTC implementation, and we are excited to see Cisco deepening their commitment to the Open Web. Or, as Jonathan Rosenberg, Cisco CTO for Collaboration, puts it:

Cisco has a long-standing history of supporting and integrating open standards, open formats and open source technologies as a model for delivering greater flexibility and interoperability to users. We look forward to collaborating with Mozilla to help bring H.264 to the Web and to the Internet.

Here’s a little more detail about how things are going to work: Cisco is going to release, under the BSD license, an H.264 stack, and build it into binary modules compiled for all popular or feasibly supportable platforms, which can be loaded into any application (including Firefox). The binary modules will be available for download from Cisco, and Cisco will pay for the patent license from the MPEG LA. Firefox will automatically download and install the appropriate binary module onto each user’s machine when needed, unless disabled in the user’s preferences.
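
A page, by the way, does not need to know where the decoder comes from (a platform codec or Cisco's downloadable OpenH264 module); it can simply feature-detect H.264 support. A minimal sketch, with placeholder file names:

```js
// Feature-detect H.264/AAC playback; "movie.mp4" and "movie.webm" are placeholders.
var video = document.createElement("video");
var canPlayH264 = video.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');

if (canPlayH264) {           // "probably" or "maybe"
  video.src = "movie.mp4";   // H.264 + AAC
} else {
  video.src = "movie.webm";  // fall back to VP8 + Vorbis
}
document.body.appendChild(video);
```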
 
Interoperability is critical on the Internet, and H.264 is the dominant video codec on the Web. The vast majority of HTML5 streaming video is encoded using H.264, and most softphones and videoconferencing systems use H.264. H.264 chipsets are widely available and can be found in most current smartphones, including many Firefox OS phones. Firefox already supports H.264 for the video element using platform codecs where they are available, but as noted in my last blog post on the topic, not all OSes ship with H.264 included. Provided we can get AAC audio decoders to match, using Cisco’s OpenH264 binary modules allows us to extend support to other platforms and uses of H.264.
 
While Cisco’s move helps add H.264 support to Firefox on all OSes, we will continue to support VP8, both for the HTML video element and for WebRTC. VP8 and H.264 are both good codecs for WebRTC, and we believe that at this point, users are best served by having both choices.
 
Of course, this is not a complete solution. In a perfect world, codecs, like other basic Internet technologies such as TCP/IP, HTTP, and HTML, would be fully open and free for anyone to modify, recompile, and redistribute without license agreements or fees. Mozilla is fully committed to working towards that better future. To that end, we are developing Daala, a fully open next-generation codec. Daala is still under development, but our goal is to leapfrog H.265 and VP9, building a codec that will be both higher-quality and free of encumbrances. Mozilla has assembled an engineering dream team to develop Daala, including Jean-Marc Valin, co-inventor of Opus, the new standard for audio encoding; Theora project lead Tim Terriberry; and, recently, Xiph co-founders Jack Moffitt (author of Icecast) and Monty Montgomery (author of Ogg Vorbis).
 
Cullen Jennings, Cisco Fellow, Collaboration Group, says:

Cisco is very excited about the future of royalty free codecs. Daala is one of the most interesting ongoing technical developments in the codec space and we have been contributing to the project.

At Mozilla we always come back to the question of what’s good for the users and in this case that means interoperation of copious H.264 content across OSes and other browsers. We’ve already started looking at how to integrate the Cisco-hosted H.264 binary module, and we hope to have something ready for users in early 2014.
 
Watch this space for more exciting developments in WebRTC, Daala, and open web video.
 
/be

The Bridge of Khazad-DRM

To lighten the mood:

[Images: Gandalf facing the Balrog; Gandalf “you shall not pass” sign]

But actually, I’m serious.

People are rightly concerned about what is going on in the W3C with DRM, as couched in the Encrypted Media Extensions (EME) proposal. Please read Henri Sivonen’s explanation of EME if you haven’t yet.

As usual for us here at Mozilla, we want to start by addressing what is best for individual users and therefore what’s best for the Open Web, which in turn depends in large part on many interoperating browsers, and on open source implementations among them that are significant both in number and in combined market share.

We see DRM in general as profoundly hostile to all three of: users, open source software, and browser vendors who aren’t also DRM vendors.

Currently, users can play content that is subject to DRM restrictions using Firefox if they install NPAPI plugins, really Flash and Silverlight at this point. While we are not in favor of DRM, we do hear from many users who want to watch streaming movies to which they rent access rather than “buy to own”. The conspicuous example is Netflix, which currently uses Silverlight, but plans to use EME in HTML5.

(UPDATE: Netflix is using EME already in IE11 on Windows 8.1 without Silverlight. And Chrome OS has deployed EME as well. Apple too, in Mavericks.)

What the W3C is entertaining, due to Netflix, Google, and Microsoft’s efforts, is the EME API, which introduces into HTML5 a new kind of non-standard plugin, neither Silverlight nor Flash, called a Content Decryption Module (CDM for short). We see serious problems with this approach. One is that the W3C apparently will not specify the CDM, so each browser may end up having its own system.
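
For concreteness, here is a rough sketch of the EME flow from a page’s point of view, using the API shape as currently drafted (details are in flux, and the key system name below is a placeholder). All the interesting work, the license exchange and the decryption, happens inside the CDM, which is precisely the part being left unspecified.

```js
// Rough sketch of the EME API surface; "com.example.keysystem" is a placeholder.
// The real work happens inside the CDM black box behind the key system.
navigator.requestMediaKeySystemAccess("com.example.keysystem", [{
  initDataTypes: ["cenc"],
  videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }]
}]).then(function (access) {
  return access.createMediaKeys();
}).then(function (mediaKeys) {
  var video = document.querySelector("video");
  return video.setMediaKeys(mediaKeys);  // from here on, the CDM gates playback
});
```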

We are working to get Mozilla and all our users on the right side of this proposed API. We are not just going to say that users cannot have access to streaming Hollywood movies, as that is a good way to lose market share and not have any product with which to uphold our mission.

Mozilla’s mission requires us to build products that users love — Firefox, Firefox for Android, Firefox OS, and Firefox Marketplace are four examples — with enough total share to influence developers, and therefore standards. Given the forces at play, we have to consider EME carefully, not reject it outright or embrace it in full.

Again, we have never categorically rejected plugins, including those with their own DRM subsystems.

However, the W3C willfully underspecifying DRM in HTML5 is quite a different matter from browsers having to support several legacy plugins. Here is a narrow bridge on which to stand and fight — and perhaps fall, but (like Gandalf) live again and prevail in the longer run. If we lose this battle, there will be others where the world needs Mozilla.

By now it should be clear why we view DRM as bad for users, open source, and alternative browser vendors:

  • Users: DRM is technically a contradiction (it depends on keeping secrets from the very users whose devices must decrypt the content), which leads directly to legal restraints against fair use and other user interests (e.g., accessibility).
  • Open source: Projects such as mozilla.org cannot implement a robust and Hollywood-compliant CDM black box inside the EME API container using open source software.
  • Alternative browser vendors: CDMs are analogous to ActiveX components from the bad old days: different for each OS and possibly even available only to the OS’s default browser.

I continue to collaborate with others, including some in Hollywood, on watermarking, not DRM. More on that in a future post.

/be

Firefox OS Launches

Just under two years ago, we started Firefox OS as the Boot to Gecko (B2G) project, with little more than a belief that the Web should be the only platform you need to build an open mobile device ecosystem. This vision was so compelling that we found ourselves on a rocket, joined by developers and partners around the world.

Today, I’m thrilled to report that Firefox OS phones go on sale in less than a day in Spain, with other launches to follow. See the Mozilla and Telefónica announcements. As Christian Heilmann says, “the fox is out of the bag.”

This is just the end of the beginning, a great first step into retail channels. Everyone should have the option of Open Web Devices for the benefits that come with them: owning your own stuff, innovating at the edges of the network, not having to ask permission to hack. The world needs the principles of the Web now more than ever, so please join us:

Mozilla Developer Network Firefox OS top-level page
How to contribute to Firefox OS (even if you’re not technical)

Thanks,

/be

The Cookie Clearinghouse

As you may recall from almost six weeks ago, we held the Safari-like third-party cookie patch, which blocks cookies set for domains you have not visited (according to your browser’s cookie database), from progressing to Firefox Beta because of two problems:

False positives. For example, say you visit a site named foo.com, which embeds cookie-setting content from a site named foocdn.com. With the patch, Firefox sets cookies from foo.com because you visited it, yet blocks cookies from foocdn.com because you never visited foocdn.com directly, even though there is actually just one company behind both sites.

False negatives. Meanwhile, in the other direction, just because you visit a site once does not mean you are ok with it tracking you all over the Internet on unrelated sites, forever more. Suppose you click on an ad by accident, for example. Or a site you trust directly starts setting third-party cookies you do not want.

Our challenge is to find a way to address these sorts of cases. We are looking for more granularity than deciding automatically and exclusively based upon whether you visit a site or not, although that is often a good place to start the decision process.

The logic driving us along the path to a better default third-party cookie policy looks like this:

  1. We want a third-party cookie policy that better protects privacy and encourages transparency.
  2. Naive visited-based blocking results in significant false negative and false positive errors.
  3. We need an exception management mechanism to refine the visited-based blocking verdicts.
  4. This exception mechanism cannot rely solely on the user in the loop, managing exceptions by hand. (When Safari users run into a false positive, they are advised to disable the block, and apparently many do so, permanently.)
  5. The only credible alternative is a centralized block-list (to cure false negatives) and allow-list (for false positives) service (see the sketch just after this list).
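
To make the refinement concrete, here is a minimal sketch of the combined decision, assuming hypothetical list lookups (the CCH’s real data formats and APIs are still being worked out):

```js
// Minimal sketch, not shipping code: the Sets stand in for whatever lookup
// mechanism the Cookie Clearinghouse ends up providing.
function thirdPartyCookieVerdict(domain, visitedDomains, cchBlockList, cchAllowList) {
  if (cchBlockList.has(domain)) return "block";          // cures false negatives
  if (cchAllowList.has(domain)) return "allow";          // cures false positives
  return visitedDomains.has(domain) ? "allow" : "block"; // default visited-based rule
}

var verdict = thirdPartyCookieVerdict(
  "foocdn.com",
  new Set(["foo.com"]),      // sites the user has visited
  new Set(["tracker.com"]),  // CCH block-list
  new Set(["foocdn.com"])    // CCH allow-list (same-party CDN)
);
// verdict === "allow": the allow-list cures the false positive described above.
```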

I’m very pleased that Aleecia McDonald of the Center for Internet and Society at Stanford has launched just such a list-based exception mechanism, the Cookie Clearinghouse (CCH).

Today Mozilla is committing to work with Aleecia and the CCH Advisory Board, whose members include Opera Software, to develop the CCH so that browsers can use its lists to manage exceptions to a visited-based third-party cookie block.

The CCH proposal is at an early stage, so we crave feedback. This means we will hold the visited-based cookie-blocking patch in Firefox Aurora while we bring up CCH and its Firefox integration, and test them.

Of course, browsers would cache the block- and allow-lists, just as we do for safe browsing. I won’t try to anticipate or restate details here, since we’re just starting. Please see the CCH site for the latest.

We are planning a public “brown bag” event for July 2nd at Mozilla to provide an update on where things stand and to gather feedback. I’ll update this post with details as they become available (UPDATE: details are here), but I wanted to share the date ASAP.

/be

C is for Cookie

Mozilla is engaged in a broad, deep conversation about Internet privacy. We believe in putting users in control of their online experience, and we want a healthy, thriving web ecosystem — we do not see a contradiction. However, sometimes a crucial experiment is required to prove it.

To this end, we are testing a patch from Jonathan Mayer. Jonathan’s patch matches how Safari has worked for years, and does the following:

  • Allows cookies from sites you have already visited.
  • Blocks cookies from sites you have not visited yet.

The idea is that if you have not visited a site (including the one to which you are navigating currently) and it wants to put a cookie on your computer, the site is likely not one you have heard of or have any relationship with. But this is only likely, not always true. Two problems arise:

False positives. For example, say you visit a site named foo.com, which embeds cookie-setting content from a site named foocdn.com. With the patch, Firefox sets cookies from foo.com because you visited it, yet blocks cookies from foocdn.com because you never visited foocdn.com directly, even though there is actually just one company behind both sites.

False negatives. Meanwhile, in the other direction, just because you visit a site once does not mean you are ok with it tracking you all over the Internet on unrelated sites, forever more. Suppose you click on an ad by accident, for example. Or a site you trust directly starts setting third-party cookies you do not want.
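
In sketch form (an illustration of the rule, not Firefox’s actual implementation), the patch and its two failure modes look like this:

```js
// Sketch only: visitedDomains stands in for the browser's cookie database,
// keyed by base domain (eTLD+1).
function shouldAcceptThirdPartyCookie(cookieBaseDomain, visitedDomains) {
  return visitedDomains.has(cookieBaseDomain); // visited before? then allow
}

var visitedDomains = new Set(["foo.com"]);                  // user has visited foo.com only
shouldAcceptThirdPartyCookie("foo.com", visitedDomains);    // true
shouldAcceptThirdPartyCookie("foocdn.com", visitedDomains); // false: the false positive
// And one accidental visit to tracker.com would flip its answer to true,
// indefinitely: the false negative.
```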

Our challenge is to find a way to address these sorts of cases. We are looking for more granularity than deciding automatically and exclusively based upon whether you visit a site or not, although that is often a good place to start the decision process.

We plan to ship an evolution of the patch “on” by default, but we want to make refinements first. To make sure we get this right we need more data. Our next engineering task is to add privacy-preserving code to measure how the patch affects real websites. We will also ask some of our Aurora and Beta users to opt-in to a study with deeper data collection.

There are many conflicting claims about how this patch will affect the Internet. Why debate in theory what we can measure in practice? We are going to find out more and adjust course as needed. This is the essence of the release test cycle.

On Tuesday we did two things:

  1. The patch has progressed to the Beta release channel for Firefox 22, but it is not “on” by default there. This allows more people to test the patch via Firefox’s “preferences” (AKA “options”) user interface, and avoids an abrupt change for site owners while we work on handling the hard cases.
  2. The patch remains in the Aurora channel for Firefox, where it is “on” by default. This gives the patch better ongoing test coverage and facilitates A/B testing.

We have heard important feedback from concerned site owners. We are always committed to user privacy, and remain committed to shipping a version of the patch that is “on” by default. We are mindful that this is an important change; we always knew it would take a little longer than most patches as we put it through its paces.

For those who read this as Mozilla softening our stance on protecting privacy and putting users first, in a word: no. False positives break sites that users intentionally visit. (Fortunately, we haven’t seen too many such problems, but greater testing scale is needed.) False negatives enable tracking where it is not wanted. The patch as-is needs more work.

We look forward to continued dialog with colleagues, contributors, fans, and detractors. We will update all of you within six weeks so you can understand our thinking and how we will proceed. Comments welcome.

/be

P.S. Cookies (name history) were originally intended to be ephemeral (Windows 3.1 had so little usable memory with its 64K memory segments that Netscape’s founders had no choice). At first, they held only session state that could be recovered from the server by logging in again.

(Remind me to tell the story some day of Montulli’s aborted “twinkies” idea from the Netscape 2 era. UPDATE: Lou has published a new blog post about cookies.)

How far we have come in the amazing, living system that is the Web! No one planned for what actually happened, but with more work on the cookie policy in Firefox and (I hope) other browsers, I believe that we can evolve to a better space.

Today I Saw The Future

This morning, Mozilla and OTOY made an announcement:

Mozilla and OTOY deliver the power of native PC applications to the Web, unveil next generation JavaScript video codec for movies and cloud gaming

What this means:

ORBX.js, a downloadable HD codec written in JS and WebGL. The advantages are many. On the good-for-the-open-web side: no encumbered-format burden on web browsers, which are just IP-blind runtimes. Technical wins start with the ability to evolve and improve the codec over time, instead of taking ten years to specify it and burn it into silicon.

After these come more wins: 25% better compression than H.264 for competitive quality, adaptive bit-rate while streaming, integer and (soon) floating point coding, better color depth, better intra-frame coding, a more parallelizable design — the list goes on.
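
ORBX.js itself has not been published, so the constructor and method names in this sketch are invented; the point is just the shape of a downloadable JS/WebGL decoder painting into a canvas with no plugin or native codec in the path.

```js
// Illustrative only: OrbxDecoder and decodeFrame are made-up names, and the
// stream endpoint is a placeholder.
var canvas = document.getElementById("screen");
var gl = canvas.getContext("webgl");
var decoder = new OrbxDecoder(gl);  // hypothetical: decodes via JS + WebGL shaders

var stream = new WebSocket("wss://stream.example.com/session");
stream.binaryType = "arraybuffer";
stream.onmessage = function (event) {
  // Each message is one encoded chunk; the decoder paints straight into the canvas.
  decoder.decodeFrame(new Uint8Array(event.data));
};
```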

The GPU cloud has your back. Think of the amazing 3D games that we have on PCs, consoles, and handheld devices thanks to the GPU. Now think of hundreds of GPUs in the cloud, working for you to over-detail, ray/path-trace in realtime, encode video, do arbitrary (GPGPU) computation.

Or consider high-powered tools from Autodesk, Adobe, and others for 3D modeling and rendering:

Native apps from any popular OS, in the GPU cloud and on your browser. Yes, both: this is not just remote desktop tech, or X11 reborn via JS. Many local/remote hybrid computation schemes are at hand today, e.g. a game can do near-field computing in the browser on a beefy client while offloading lower LOD work to the GPU cloud.
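
As a toy illustration of that split (the cloud endpoint, the scene-object shape, and the renderer methods are all invented here):

```js
// Toy sketch of a local/remote split: near objects rendered locally at full LOD,
// everything else requested from a hypothetical GPU-cloud render service.
function renderScene(scene, camera, localRenderer) {
  var NEAR_FIELD = 100.0;
  var near = scene.objects.filter(function (o) { return o.distanceTo(camera) <  NEAR_FIELD; });
  var far  = scene.objects.filter(function (o) { return o.distanceTo(camera) >= NEAR_FIELD; });

  localRenderer.draw(near, camera); // beefy client handles the near field

  // Ask the cloud for a low-LOD backdrop of the rest, returned as an encoded
  // frame (e.g. ORBX) to composite behind the locally rendered objects.
  fetch("https://gpu-cloud.example/render", {
    method: "POST",
    body: JSON.stringify({ objects: far.map(function (o) { return o.id; }), camera: camera })
  }).then(function (response) { return response.arrayBuffer(); })
    .then(function (frame) { localRenderer.compositeBackdrop(frame); });
}
```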

OTOY’s CEO Jules Urbach demo’ed an entire Mac OS X desktop running in a cloud VM sandbox, rendering via ORBX.js to Firefox, but also showed a Windows homescreen running on his Mac — and the system tray, start menu, and app icons were all local HTML5/JS (apps were a mix ranging from mostly local to fully remoted, each in its own cloud sandbox).

Valve’s Steam was one such app:

Watermarking, not DRM. This could be huge. OTOY’s GPU cloud approach enables individually watermarking every intra-frame, and according to some of its Hollywood supporters including Ari Emanuel, this may be enough to eliminate the need for DRM.

We shall see; I am hopeful. This kind of per-user watermarking has been prohibitively expensive, but OTOY estimates the cost at pennies per movie with their approach.

Oculus Rift, Lightfield displays, Holodecks, and beyond. OTOY works with Paul Debevec of USC’s Institute for Creative Technologies. This is Tony Stark stuff, coming at us super-fast and soon to be delivered via JS, WebGL, and ORBX.js running in the browser.

I was thrilled to be included in today’s event, hosted at Autodesk’s fabulous San Francisco offices. I gave a demo of Epic Games’ Unreal Engine 3 (Unreal Tournament, “Sanctuary” level) running via Emscripten and asm.js at full frame-rate in Firefox Aurora, and spoke about how JS will continue to evolve “low-road” as well as “high-road” APIs and features to exploit parallel hardware.
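
The demo’s code is Emscripten output, far too big to quote, but a hand-written toy module shows the asm.js shape: a "use asm" prologue plus fully type-annotated arithmetic that a validating JIT can compile ahead of time, and that any other engine simply runs as ordinary JavaScript.

```js
function MiniModule(stdlib, foreign, heap) {
  "use asm";
  var imul = stdlib.Math.imul;

  function square(x) {
    x = x | 0;             // parameter annotated as int
    return imul(x, x) | 0; // result coerced back to int
  }

  return { square: square };
}

// Link against the real globals (in a browser). A validating engine compiles
// this ahead of time; any other engine just runs it as plain JS.
var mini = MiniModule(window, {}, new ArrayBuffer(0x10000));
console.log(mini.square(12)); // 144
```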

As Jeff Kowalski, Autodesk’s CTO, pointed out, the benefits go beyond major cost reduction in CGI and similar processing work, to increase collaboration and innovation radically, by freeing creative people from having to sit at big workstations. The GPU cloud means many alternative ideas, camera angles, etc., can be tried without waiting hours for each rendering. Even from the beach, via your 4G-connected tablet. Teams around the world can collaborate closely as timezones permit, across the web.

We will continue to collaborate with OTOY; I’ll post updates on this topic. It’s hot, and moving very quickly. Kudos to OTOY for their brilliant innovations, and especially for porting them to JS and WebGL so quickly!

When we at Mozilla say the Web is the platform, we are not bluffing.

/be

P.S. Always bet on JS!

P.P.S. Hat tip to Andreas Gal for seeing far, with Broadway.js.

Mozilla at 15: Memories, and Thoughts on Mozilla Research

 

[air.mozilla.org video]
[slideshare.net link]

Disrupt any enterprise that requires new clothes.

Thoreau (abridged) adjusted for Mozilla by @lawnsea.

Mozilla Research Party Talk.022

I gave a brief talk last night at the Mozilla Research Party (first of a series), which happened to fall on the virtual (public, post-Easter-holiday) celebration of Mozilla’s 15th anniversary.

I was a last minute substitution for Andreas Gal, fellow mad scientist co-founder at Mozilla Research, so I added one slide at his expense. (This talk was cut down and updated lightly from one I gave at MSR Paris in 2011.) Thanks to Andreas for letting me use two of his facebook pics to show a sartorial pilgrim’s progress. Thanks also to Dave Herman and all the Mozilla Researchers.

Mozilla is 15. JavaScript is nearly 18. I am old. Lately I mostly just make rain and name things: Servo (now with Samsung on board) and asm.js. Doesn’t make up for not getting to name JS.

(Self-deprecating jokes aside, Dave Herman has been my naming-buddy, to good effect for Servo [MST3K lives on in our hearts, and will provide further names] and asm.js.)

Color commentary on the first set of slides:

Talk.003

I note that calling software an “art” (true by Knuth’s sensible definition) should not relieve us from advancing computer science, but remain skeptical that software in the large can be other than a somewhat messy, social, human activity and artifact. But I could be wrong!

Talk.004

RAH‘s Waldo featured pantograph-based manipulators — technology that scaled over perhaps not quite ten orders of magnitude, if I recall correctly (my father collected copies of the golden age Astounding pulps as a teenager in the late 1940s).

No waldoes operating over this scale in reality yet, but per Dijkstra, our software has been up to the challenge for decades.

Talk.005

I like Ken‘s quote. It is deeply true of any given source file in an evolving codebase and society of coders. I added “you could almost say that code rusts.”

Talk.006

Here I would like to thank my co-founder and partner in Mozilla, Mitchell Baker. Mitchell and I have balanced each other out over the years, one of us yin to the other’s yang, in ways that are hard to put in writing. I can’t prove it, but I believe that until the modern Firefox era, if either of us had bailed on Mozilla, Mozilla would not be around now.

Talk.007

A near-total rewrite (SpiderMonkey and NSPR were conserved) is usually a big mistake when you already have a product in market. A paradox: this mistake hurt Netscape but helped Mozilla.

I lamented the way the Design Patterns book was waved around in the early Gecko (Raptor) days. Too much abstraction can be worse than too little. We took years digging out and deCOMtaminating Gecko.

As Peter Norvig argued, design patterns are bug reports against your programming language.

Still, the big gamble paid off for Mozilla, but it took a few more years.

Talk.008

Who remembers Netscape 6? At the time, a few managers with more ego than sense argued that “the team needs us to ship” as if morale would fall if we held off till Mozilla 1.0. (I think they feared that an AOL axe would fall.) The rank and file were crying “Nooo!!!!!”

AOL kept decapitating VPs of the Netscape division until morale improved.

Talk.009

2001: Another year, another VP beheading, but this one triggered a layoff used as a pretext to eliminate Mitchell’s position. The new VP expected no mo’ Mitchell, and was flummoxed to find that on the next week’s project community-wide conference call, there was Mitchell, wrangling lizards! Open source roles are not determined solely or necessarily by employment.

At least (at some price) we did level the playing field and manage our way through a series of rapid-release-like milestones (“the trains will run more or less on time!”) to Mozilla 1.0.

Talk.010

Mozilla 1.0 didn’t suck.

It was funny to start as a pure open source project, where jwz argued only those with a compiler and skill to use it should have a binary, and progress to the point where Mozilla’s “test builds” were more popular than Netscape’s product releases. An important clue, meaningful in conjunction with nascent “mozilla/browser” focus on just the browser instead of a big 90’s-style app-suite.

Talk.011

A lot of credit for the $2M from AOL to fund the Mozilla Foundation goes to Mitch Kapor. Thanks again, Mitch! This funding was crucial to get us to the launch-pad for Firefox 1.0.

We made a bit more money by running a Technical Advisory Board, or TAB, whose members, mostly big enterprise companies, gave us advice that we proceeded to (mostly) ignore. The last TAB meeting was the biggest, and the one where Sergey Brin showed up representing Google.

Due to a back injury, Sergey stood a lot. This tended to intimidate some of the other TAB members, who were pretty clearly wondering “What’s going on? Should I stand too?” An accidental executive power move that I sometimes still employ.

Talk.028

Jump to today: here is Rust beating GCC on an n-body solver. Safety, speed, and concurrency are a good way to go through college!

Talk.029

As you can see, Mozilla Research is of modest size, yet laser-focused on the Web, and appropriately ambitious. We have more awesome projects coming, along with lots of industrial partners and a great research internship program. Join us!

/be