It’s been a few days since Poul-Henning Kamp sent a somewhat ranty message to the IETF’s HTTP WG mailing list. If you are reading this post, odds are you know what I’m talking about. In a nutshell, he called for the HTTP 2.0 process to be abandoned for a number of fuzzy and, IMHO, highly debatable reasons.
To put it in context, the HTTP 2.0 spec is currently a work in progress. The current draft isn’t perfect, and odds are the final spec will not be perfect either. That’s expected, at least by me. Actually, aiming for perfection in a task like this is a foolish thing to do.
SPDY is a huge step forward from HTTP 1.1. I know this well because I’ve implemented both protocols. It’s especially encouraging that Google has been eating its own dog food for a fairly long time already, and thus has proven the protocol to be viable.
HTTP 2.0 is an evolution of SPDY, and as such it has inherited most of its improvements over previous HTTP versions. Needless to say, it isn’t perfect: it’s a much bigger and more complex protocol than its predecessors, and therefore it’ll be comparatively much more costly to implement and polish properly. Not to mention the security issues that its dependency on TLS brings to the table.
There are other potential issues with HTTP 2.0. As Poul-Henning Kamp stated, the semantics of the protocol remain unchanged. That means we get a fancy new transport layer, but many of the other flaws in HTTP remain there. The existence of cookies, for instance.
It’s my understanding that most of the people and organisations involved in defining Web infrastructure standards would like those semantics to be reviewed and updated. However, doing it now would actually be the fiasco that Poul-Henning referred to in his message. Notwithstanding the obvious differences between the cases, we’d be making the same mistakes that the PHP and Python folks made when they published incompatible versions of their respective programming languages. Fixing the semantics is most likely the right thing to do, but doing it along with the new transport implementation would be, in my humble opinion, a huge mistake.
There will always be a potential new protocol under discussion. It isn’t uncommon to hear ideas on how to improve and/or redesign the current ones. However, that doesn’t mean we have to stand still until the discussion is over. Technology has to evolve. We cannot wait forever for the mother of all protocols to be ready. Let me tell you something: the mighty protocol of legend does not exist. We have to be realistic and accept that no protocol will ever be perfect, neither HTTP 2.0 nor any future or alternative version.
The decision is actually quite simple. What would you rather do: keep working with HTTP/1.1 for the next 5 or 6 years, or start upgrading to HTTP 2.0 next year so you can take advantage of all its improvements? Which would provide a better experience to your users and customers? Bearing in mind that there will never be a perfect protocol, the choice looks obvious to me.
Leaving aside some issues, HTTP 2.0 is a step in the right direction. That’s why I believe we’d be globally better off finishing and adopting it rather than tossing it away for no good reason. It’d definitely be a terrible waste of time and effort to have to start the definition process all over again. At the end of the day, despite its imperfections, HTTP 2.0 addresses almost all the HTTP 1.1 issues it was supposed to.
All in all, done is better than perfect.
The upcoming eighth draft of the “HPACK - Header Compression for HTTP/2” introduces a number of changes over the previous version. Nothing too drastic, but enough to make libhpack's test bench fail loudly.
Some of libhpack’s code is auto-generated from the HPACK spec document. It wasn’t trivial to put together at first, but it has proven to be a great way of avoiding a lot of potential human mistakes.
Surprisingly, it’s also been quite handy for making sure the library actually complies with the latest bits of the, still WIP, standard. Since the QA bench is executed against the last available draft, it fails whenever incompatible changes are introduced. Though it is never pleasant to receive a CI report with a failure status, the almost instant notification of an incompatible update to the spec is quite valuable.
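The "fail fast on spec changes" idea can be sketched roughly like this (a hypothetical illustration, not libhpack's CI code): fingerprint the part of the draft the generator depends on, and compare it against the fingerprint recorded the last time the tables were regenerated. The excerpts and names below are made up for the example.

```python
# Sketch: detect that the draft changed since code was last generated.
import hashlib

def table_fingerprint(spec_text):
    """Hash only the table rows, so purely editorial edits don't trigger."""
    rows = [l.strip() for l in spec_text.splitlines() if l.strip().startswith("|")]
    return hashlib.sha256("\n".join(rows).encode()).hexdigest()

# Fingerprint recorded at generation time vs. a freshly fetched draft
# (both stand-ins here; a real setup would read the downloaded draft).
RECORDED = table_fingerprint("| 2 | :method | GET |")
CURRENT  = table_fingerprint("| 2 | :method | GET |\n| 3 | :method | POST |")

if RECORDED != CURRENT:
    print("HPACK draft changed since the tables were generated: regenerate!")
```

Running a check like this as part of the QA bench is what turns a spec update into an immediate red build instead of a latent incompatibility.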
Once the final version of the protocol specification is published, this approach won’t be as useful. However, for the time being, it’s been a great help in dealing with this sort of moving target.
Changes since draft-ietf-httpbis-header-compression-07:
I’m planning to update libhpack to support the changes above within a day or two. At first sight it looks like it won’t be tough to make our continuous integration system report a green light again.