
Do you know the feeling of watching somebody you respect make a blatant mistake? It is a mix of incomprehension and the thought “Why? You should know better!”. The more evident the mistake, the stronger the feeling.
That’s the sort of reaction I experience when I see the wrong kind of leadership being imposed on environments where Innovation is expected. More so when it is such a…
Open Source is the New Standard. How did it happen?

There is something extremely interesting about the times we are living in, especially for people like me who have built their careers around Open Source Software. After a fairly long process in which many organizations have slowly come to understand and adopt Open Source practices and technology, Open and Closed technology have effectively traded places. Today it’s fair to say that Open Source…

When I thought I had seen everything, I stumbled upon a blog post where Microsoft invites people to celebrate the release of Debian 8 with them. I must say, reading it was like watching in amazement how Hell froze over… in slow motion.
On the one hand, it’s fantastic the company finally got it. They have released some interesting code lately, which is a very good first step. Kudos for that!
On…
Relocation? No way! But please, keep requiring it.

It’s well known that technology companies struggle to hire great software engineers. They manage to find and hire capable software engineers, which, even if nontrivial, is doable. Finding great engineers, though, is a completely different matter.
Odds are if you have ever met or worked with one of these exceptional people you know what I’m talking about. There’s a huge difference between…
Daylight Saving Time sucks. Big time. Especially when you consider that it is a fairly recent invention that most of the people I know describe as utterly pointless.
DST (Daylight Saving Time) was introduced by the Germans around May 1916, during the First World War, with a sole purpose: to save coal during wartime. Allow me to emphasize that: DST was introduced for the sole reason of saving fuel during the war. As the war progressed, the rest of Europe adopted DST. In the United States, the plan was not formally adopted until 1918.
I must admit, the reason we’re still suffering the pesky thing is a mystery to me. Theoretically, and according to its supporters, DST helps to save energy. Unfortunately, science has proven this hypothesis plain wrong. During the oil crisis of the 1970s, the US Department of Transportation calculated a 1% saving in power usage thanks to DST. A whole one percent in the 1970s, huh? Wow.
There are many reasons why DST is an awful idea, actually. It messes with your metabolism; that’s obvious. It costs billions every year in many ways. And, last but not least, it makes millions of people’s lives much harder, especially if they work in international environments. As it couldn’t be otherwise, DST wasn’t introduced uniformly around the globe either, and that is a source of additional trouble.
Here in Spain, DST starts on the last Sunday of March and ends on the last Sunday of October. In the US, however, DST ends a week later (and starts a few weeks earlier). And there is also a whole lot of countries where, lucky them, DST isn’t applied at all: Russia, plus many countries across Asia, Central and South America, and Africa.

So, a kind reminder to all my North American fellows: scheduling meetings with European folks is going to be slightly tougher than usual this week. Please be patient until we re-sync again in a week’s time. Meanwhile, let’s enjoy the fact that we still follow a scientifically disproven practice that helped the Germans save coal during World War I.
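For the terminally curious, here’s a minimal sketch of how to check the damage yourself. It assumes a POSIX system with the IANA tz database installed and the glibc/BSD tm_gmtoff extension; it simply prints the current UTC offsets of Madrid and New York. Run it this week and again once we’ve re-synced, and you’ll see the usual six-hour gap temporarily shrink to five.

#define _GNU_SOURCE              /* for tm_gmtoff on glibc */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Return the UTC offset, in hours, of time zone `zone` at instant `t`. */
static double utc_offset_hours (const char *zone, time_t t)
{
    struct tm local;

    setenv ("TZ", zone, 1);   /* switch the process time zone...      */
    tzset ();                 /* ...and make the C library pick it up */
    localtime_r (&t, &local);

    return local.tm_gmtoff / 3600.0;
}

int main (void)
{
    time_t now = time (NULL);

    printf ("Europe/Madrid    UTC%+g\n", utc_offset_hours ("Europe/Madrid", now));
    printf ("America/New_York UTC%+g\n", utc_offset_hours ("America/New_York", now));
    return 0;
}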
I’ve just stumbled upon the following e-mail while going through my personal inbox. It came from OpenHub (previously known as Ohloh, an Open Source project analysis service), which caught my attention because they don’t usually spam you with updates or promotions.
Check it out:

I must say I almost literally ROTFL’d when I realized it was one of those scams where a single girl reaches out to you looking for fun.
I kinda pictured the scene in my mind. Imagine a cute girl sitting in front of her computer, checking out OpenHub profiles (??). Suddenly she freezes up. “More than 5700 commits to Open Source projects in the last 12 years??” she babbles nervously as a chill runs up her spine. “Oooh my god! That’s sooooo hot! I have to meet this guy. I’m dropping him a line”… and thus, the message. I’m positive that’s what happened.
The upcoming eighth draft of “HPACK - Header Compression for HTTP/2” introduces a number of changes over the previous version. Nothing too drastic, but enough to make libhpack’s test bench fail loudly.
Some of the libhpack code is auto-generated from the HPACK spec document. The generator wasn’t trivial to put together at first, but it has proven to be a great way of avoiding a lot of potential human mistakes.
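The static header table defined in the spec’s appendix is a good example of the kind of thing that lends itself to this: a small generator can parse the appendix and emit a constant C table along these lines. Bear in mind this is a hypothetical sketch with made-up names, not libhpack’s actual generated source.

#include <stddef.h>

/* First few entries of the HPACK static header table, as listed in the
 * appendix of recent drafts. In the real project this file would be
 * regenerated straight from the spec document. */
typedef struct {
    const char *name;
    const char *value;
} hpack_static_entry_t;

static const hpack_static_entry_t hpack_static_table[] = {
    { ":authority", ""            },  /* index 1 */
    { ":method",    "GET"         },  /* index 2 */
    { ":method",    "POST"        },  /* index 3 */
    { ":path",      "/"           },  /* index 4 */
    { ":path",      "/index.html" },  /* index 5 */
    { ":scheme",    "http"        },  /* index 6 */
    { ":scheme",    "https"       },  /* index 7 */
    { ":status",    "200"         },  /* index 8 */
    /* ...and so on for the rest of the table. */
};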
Surprisingly, this approach has also been quite handy for making sure the library actually complies with the latest bits of the - still WIP - standard. Since the QA bench is executed against the latest available draft, it fails whenever incompatible changes have recently been introduced. Though it is never pleasant to receive a CI report with a failure status, the almost instant notification of an incompatible update to the spec is quite valuable.
Once the final version of the protocol specification is published, this approach won’t be as useful. For the time being, however, it is a great help in dealing with this sort of moving target.
Changes since draft-ietf-httpbis-header-compression-07:
I’m planning to update libhpack to support the changes above within a day or two. At first sight, it looks like it won’t be tough to make our continuous integration system report a green light again.
It’s been a few days since Poul-Henning Kamp sent a somewhat rantish message to the IETF’s HTTP WG mailing list. If you are reading this post, odds are you know what I’m talking about. In a nutshell, he called for the HTTP 2.0 process to be abandoned for a number of fuzzy and, IMHO, very arguable reasons.
To put it in context, the HTTP 2.0 spec is currently a work in progress. The current draft isn’t perfect, and odds are the final spec will not be perfect either. That’s expected, at least by me. Actually, aiming for perfection in a task like this is a foolish thing to do.
SPDY is a huge step forward from HTTP 1.1. I know it well because I’ve implemented both protocols. It’s especially encouraging that Google has been eating its own dog food for a fairly long time already, and has thus proven the protocol to be viable.
HTTP 2.0 is an evolution of SPDY, and as such it has inherited most of its improvements over previous HTTP versions. Needless to say, it isn’t perfect: it’s a much bigger and more complex protocol than its predecessors, and therefore it’ll be comparatively much more costly to implement and polish properly. Not to mention the security issues that its dependency on TLS brought to the table.
There are other potential issues with HTTP 2.0. As Poul-Henning Kamp stated, the semantics of the protocol remain unchanged. That means we get a fancy new transport layer, but many of the other flaws in HTTP remain. The existence of cookies, for instance.
It’s my understanding that most of the people and organisations involved in defining Web infrastructure standards would like those semantics to be reviewed and updated. However, doing it now would actually cause the fiasco that Poul-Henning referred to in his message. Notwithstanding the obvious differences between the cases, we’d be making the same mistakes the PHP and Python folks made when they published incompatible versions of their respective programming languages. Fixing the semantics is most likely the right thing to do, but doing it along with the new transport layer would be, in my humble opinion, a huge mistake.
There will always be a potential new protocol being discussed. It isn’t uncommon to hear ideas on how to improve and/or redesign the current ones. However, that doesn’t mean we have to stand still until the discussion is over. Technology has to evolve. We cannot wait forever for the mother of all protocols to be ready. Let me tell you something: the mighty protocol of legend does not exist. We have to be realistic and accept that no protocol will ever be perfect, neither HTTP 2.0 nor any future or alternative version.

The decision is actually quite simple. What would you rather do: keep working with HTTP/1.1 for the next 5 or 6 years, or start upgrading to HTTP 2.0 next year so you can take advantage of all its improvements? Which would provide a better experience for your users and customers? Bearing in mind that there will never be a perfect protocol, the choice looks obvious to me.
Leaving aside some issues, HTTP 2.0 is a step in the right direction. That’s why I believe we’d be globally better off finishing and adopting it rather than tossing it away for no good reason. It’d definitely be a terrible waste of time and effort to have to start the definition process all over again. At the end of the day, despite its imperfections, HTTP 2.0 addresses almost all of the HTTP 1.1 issues it was supposed to.
All in all, done is better than perfect.
Yesterday, while working on the upcoming GNU MACChanger 1.7.0, I stumbled upon something unexpected and somewhat puzzling.
In a nutshell, the IEEE Registration Authority manages the assignment of 802-defined MAC addresses through OUIs (Organizationally Unique Identifiers). The MA-L (MAC Address Block Large) registry is the list of MAC address ranges assigned to different organizations; each entry fixes the first 3 octets of the address, leaving 3 octets free, which means 2^24 (roughly 16.7 million) addresses per block.

So, while going through the new entries in the MA-L, I found something surprising and completely unexpected. Check out who was granted ~16 million MAC addresses:
FC-D4-F2
The Coca Cola Company
One Coca Cola Plaza
Atlanta GA 30313
UNITED STATES
I must admit curiosity is tickling me. Why!? What would Coca Cola want a block of MAC addresses for? What are they doing (or planning to do) with it? I wanna know… so badly.
It’d have been even more perplexing if they had got the C0-CA-C0 or CA-C0-1A range instead of the one they got, though :-)
PS: Pranksters of the world, I know what you are thinking! macchanger --bia --mac=fc:d4:f2:xx:xx:xx $IF is going to be a lot of fun, huh?
It’s been a month already since FSL 2013, and I’ve just remembered that I hadn’t uploaded the slides from my talk yet.
If I recall correctly, it was the fifth time I participated in this conference. I must say it’s a great conference that keeps improving over time. The organisers did a fantastic job making it happen, and they should feel proud of the terrific work they’ve done with FSL.
No doubt Puerto Vallarta (Mexico) is a great venue for the conference, especially if you come from the Northern hemisphere, where November is already a cold month. Actually, I’d say that’s one of the coolest perks any conference can provide :-)

In this edition I keynoted along with Bruce Perens and Bdale Garbee. From a professional standpoint, it was a true honor. From a personal perspective, it was great meeting them again in the flesh.
All in all, FSL was a great event definitely worth attending, and I look forward to the next one! :-)
It’s amazing how fast time flies. I cannot believe it’s been almost two years already since I shifted gears and moved my focus away from the Cherokee project. Believe me when I say that, at the time, it was a tough thing for me to do.
Time has passed and I’ve got involved in a few other Open Source projects, including WebKit, and the always thrilling OpenStack project. Overall, I’m pretty happy about how things have unfolded.
This time has also helped me put things a little more in perspective, and to gain a better understanding of some of the upcoming technologies I’m interested in, especially HTTP/2.0 and OpenStack.
Something worth noticing is the incredibly hard time that the classic Web Server projects will have implementing HTTP/2.0. It applies to all of them: Apache, Nginx, Cherokee, etc. Why? Well, despite HTTPbis’ intention of not adding new functionality to HTTP/2.0, it introduces so many fundamental changes to the protocol that it’d be extremely difficult to implement properly on top of a “classic” HTTP/1 server. I know it well: I implemented SPDY in Cherokee, and then dropped the whole thing when I realised the approach was just flawed.

How HTTP/2 support on top of HTTP/1 servers feels
Picture the HTTP/1 server as the “noble steed”. It’s indeed noble and has served you well, but there is no way it will be able to handle its new rider. Do not bother trying to put a harness on it; it won’t work. They are two very different beasts.
Implementation details aside, there are so many fundamental changes in how the protocol works that getting it right would effectively require rewriting the whole server from scratch. Partial implementations can be achieved on top of current HTTP/1 servers, but they’d be quite limited, and therefore not what one would be looking for when moving Web resources over to HTTP/2.
I won’t go through all the new layers of complexity that an HTTP/1 to HTTP/2 server re-implementation would require, but there are quite a few, especially if the target is to keep both protocols working seamlessly alongside each other. It sounds like an interesting topic for a future post, though.
Bottom line, HTTP version 1.1 and the upcoming 2.0 version are like chalk and cheese, at least from a server logic point of view.
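To make that a bit more concrete, here’s a rough, purely illustrative sketch of the per-connection state an HTTP/2 server has to juggle. The names are hypothetical, not taken from any real server; the point is simply that none of this has a counterpart in a classic one-request-at-a-time HTTP/1 loop.

#include <stdint.h>

/* Illustrative only: the kind of state an HTTP/2 connection carries.
 * An HTTP/1 server gets away with little more than a socket and the
 * request it is currently parsing. */
typedef struct http2_stream {
    uint32_t             id;          /* odd IDs for client-initiated streams */
    int                  state;       /* idle, open, half-closed, closed      */
    int32_t              window_out;  /* per-stream flow-control window       */
    struct http2_stream *next;
} http2_stream_t;

typedef struct {
    int             socket_fd;
    http2_stream_t *streams;          /* many requests multiplexed at once     */
    int32_t         window_out;       /* connection-level flow-control window  */
    void           *hpack_decoder;    /* shared, stateful HPACK dynamic tables */
    void           *hpack_encoder;
    /* ... settings, stream priorities, queued frames, and so on ... */
} http2_connection_t;

Every one of those fields is shared, mutable state across concurrently multiplexed streams, which is precisely what the core of a classic server was never designed to handle.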
Even though it isn’t among the Top 5 most used Web Servers, Cherokee (HTTP/1.1) is currently running on hundreds of thousands of devices (all the GoPro cameras, Digi embedded products, etc.), plus in many critical and highly demanding environments (European Space Agency, US Department of Energy, etc.).

GoPro cameras run Cherokee
Quite a few mistakes were made in the Cherokee project, and I’m afraid I was ultimately responsible for most of them. Unfortunately, not making it onto Netcraft’s Top 5 most deployed Web Server list was the price we paid. I’d say the most common mistake was bad prioritisation, followed closely by an excessive focus on minor technical details and the misconception of having unlimited resources available (especially time). Truth be told, its license and the requirement to sign a contributor agreement in order to get code merged into the project did not help much either.
“Mistakes are always forgivable, if one has the courage to admit them.” — Bruce Lee
Good news, the lesson was well learned.
So, assuming all that… what if the experience and know-how gained building Cherokee were put to the purpose of building new infrastructure for the upcoming HTTP protocol? A decade-long journey certainly does teach you a lot of valuable things. A whole lotta ‘em.
I’ve certainly given that idea a lot of thought over the last couple of years. Somehow I assumed it would vanish once I dived deep into other FOSS projects, but I admit I’m still very attracted to the idea of putting this plan together. I can’t help myself.
All in all.. I’m doing it again. Oops!
I’m starting the development of a couple of new libraries to implement the complete upcoming HTTP/2.0 protocol.
I’m pushing the first few bits to my GitHub account; we’ll see how it evolves. Do not expect any fancy project website. All that eye candy will come later, if the project sticks and starts attracting attention.
For now, you can find the first few bits in the libhpack repository: a C-based, BSD-licensed implementation of the HPACK spec (“Header Compression”) required by HTTP/2.0.
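To give a flavour of what the library deals with, here’s a minimal sketch of HPACK’s prefixed-integer encoding as described in the draft. The function name and signature are made up for illustration; this isn’t libhpack’s actual API.

#include <stddef.h>
#include <stdint.h>

/* Encode `value` using HPACK's prefixed-integer representation with an
 * N-bit prefix. `first` carries any flag bits already set in the first
 * octet. Returns the number of bytes written to `buf`. */
static size_t hpack_int_encode (uint32_t value, uint8_t prefix_bits,
                                uint8_t first, uint8_t *buf)
{
    const uint32_t max_prefix = (1u << prefix_bits) - 1;
    size_t n = 0;

    if (value < max_prefix) {
        buf[n++] = first | (uint8_t) value;         /* fits in the prefix */
        return n;
    }

    buf[n++] = first | (uint8_t) max_prefix;        /* prefix saturated   */
    value -= max_prefix;

    while (value >= 128) {
        buf[n++] = (uint8_t) ((value % 128) + 128); /* continuation bit   */
        value /= 128;
    }
    buf[n++] = (uint8_t) value;

    return n;
}

For instance, encoding 1337 with a 5-bit prefix produces the three octets 0x1F 0x9A 0x0A.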
As it couldn’t be otherwise, I’d like to invite everybody to come by, give the project a try, and contribute: with code, ideas, thoughts, or simply by spreading the word about it.
Rock on!
Tomorrow I’ll be presenting at Big Data Spain 2013: “OpenStack, the birth of the Open Cloud”.

If you are attending, let me know and I’ll keep an eye out :-)
I’ve just been pointed to this jaw-dropping paper, “Stealthy Dopant-Level Hardware Trojans”, which shows how a complex integrated circuit can be maliciously compromised.
This new type of sub-transistor-level hardware Trojan only requires modification of the dopant masks. No additional transistors or gates are added, and no other layout mask needs to be modified.
Since only changes to the metal, polysilicon or active area can be reliably detected with optical inspection, these dopant Trojans are immune to optical inspection, one of the most important Trojan detection mechanisms.

As a proof of concept, they were able to make changes to several hundred gates of an Intel Ivy Bridge processor, sabotaging its Random Number Generator (RNG) instructions. The exploit works by reducing the amount of entropy the RNG normally provides from 128 bits to 32 bits. Any cryptographic keys generated by the compromised chip would be easy to crack. The hacked RNG was not detected by any of the “Built-In Self-Tests” mandated by the National Institute of Standards and Technology.
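To put those numbers in perspective, here’s a back-of-the-envelope sketch; the guess rate is an arbitrary assumption of mine, not a measured figure.

#include <stdio.h>
#include <math.h>

int main (void)
{
    const double rate   = 1e7;            /* assumed guesses per second */
    const double weak   = pow (2.0, 32);  /* sabotaged RNG              */
    const double strong = pow (2.0, 128); /* what it should have been   */

    printf ("2^32 keys at %.0e guesses/s: %.1f minutes\n",
            rate, weak / rate / 60.0);
    printf ("2^128 keys at %.0e guesses/s: %.2e years\n",
            rate, strong / rate / (3600.0 * 24.0 * 365.0));
    return 0;
}

Even at that modest rate, the sabotaged key space falls in minutes, while the intended one remains hopelessly out of reach. That is the whole point of the attack.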

All in all, it doesn’t matter how strong your cryptographic method is: it can easily be cracked if you don’t have a reliable source of entropy, and in this case, it seems somebody has cut right to the root of the “problem”. Be aware.