Sometimes the timing is just too perfect.
Yesterday I was trying to book a flight on Brussels Airlines, and when I tried to pay by credit card, they insisted on an on-the-fly enrollment in MasterCard SecureCode. I refused and booked via the AMEX Business Service.
Today a security analysis of the whole scheme was published by British scientists, confirming my reservations.
“Merchants who use it push liability for fraud back to banks, who in turn push it on to cardholders.”
“So this is yet another case where security economics trumps security engineering, but in a predatory way that leaves cardholders less secure.”
(Quick aside: one of their tricks to speed up resolution is to pre-fetch cache records that are about to expire. I proposed exactly the same to the BIND folks at the DNS-OARC meeting in Chicago in 2007.)
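As a hypothetical illustration of that prefetch trick (not BIND's actual implementation), a refresh-ahead cache can look like this: entries close to expiry are re-fetched while the old answer is still served, so a popular name never goes cold.

```python
import time

class PrefetchCache:
    """Toy refresh-ahead DNS cache: re-fetch entries shortly before expiry."""

    def __init__(self, fetch, prefetch_window=10):
        self.fetch = fetch              # function: name -> (value, ttl_seconds)
        self.window = prefetch_window   # seconds before expiry to refresh
        self.store = {}                 # name -> (value, expiry_timestamp)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(name)
        if entry is None or entry[1] <= now:
            # Miss or expired: fetch synchronously, as any cache would.
            value, ttl = self.fetch(name)
            self.store[name] = (value, now + ttl)
            return value
        value, expiry = entry
        if expiry - now <= self.window:
            # Still valid: answer from cache, but refresh already.
            # (Inline here for simplicity; a real resolver does this
            # asynchronously so the client never waits.)
            new_value, ttl = self.fetch(name)
            self.store[name] = (new_value, now + ttl)
        return value
```

The point is that the refresh cost is paid in the background, before any client has to block on an expired entry.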
On one mailing list, the question was raised of how widespread use of Google DNS would affect Content Distribution Networks (CDNs) like Akamai. After all, they take the source IP address of the DNS query as “close to the client, network-wise” and return the best CDN node for that address. If an Austrian user asks Google's DNS servers in the US, the CDN's nameserver will return the address of an American CDN node, a suboptimal choice.
That effect might become less pronounced (but will not go away) once Google deploys its DNS service on a massive anycast infrastructure. Akamai will then at least see the request coming from the same region as the end-user.
Actually, the best move Akamai could make is to start a rival DNS resolving infrastructure. If they run anycast recursors at each of their CDN nodes, that would really simplify their CDN algorithm, as the node that receives the DNS request is very likely also the optimal one for the actual content delivery.
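A toy sketch of that CDN-side selection logic (the prefix-to-node table and the node names are made up for illustration): the CDN only ever sees the resolver's address, so the choice is only as good as the resolver's location.

```python
import ipaddress

# Hypothetical mapping of network prefixes to CDN node locations.
REGIONS = {
    ipaddress.ip_network("193.0.0.0/8"): "vienna.cdn.example",
    ipaddress.ip_network("8.8.8.0/24"):  "us-east.cdn.example",
}
DEFAULT_NODE = "us-east.cdn.example"

def pick_cdn_node(resolver_ip: str) -> str:
    """Return the CDN node 'closest' to the querying resolver.

    The CDN never sees the end-user's address, only the resolver's,
    so an Austrian user behind a US-based resolver gets the US node.
    """
    addr = ipaddress.ip_address(resolver_ip)
    for net, node in REGIONS.items():
        if addr in net:
            return node
    return DEFAULT_NODE
```

With this logic, a query arriving from an Austrian ISP resolver picks the Vienna node, while the same user querying via Google DNS in the US is handed the US node.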
Recently, two mails from a conspiracy theorist sneaked past my spam filter. A pure flashback to the heyday of the good old Usenet kooks. Consider this quote:
The Jewish nazis also continued to send ‘messages’ and ‘feedback’ to me through the media and internet and through the EBL – Electronic Brain Link – whereby, among other things, they ‘invited’and sucked me in to directing my attention and using my amazing power on images in magazines, the internet, TV and other media
I mean, if that doesn’t trigger your kook-detector, nothing will.
CAcert has tried for some time to provide free X.509 certificates based on automatic checks and a web of trust. They have never managed to get their root certificate included in the default installation of the major browsers. As I read it, they have given up on Mozilla for now.
Aaron forwarded me a link to a blog post by StartCom announcing that their CA will soon be included in IE. As they are already recognized by Mozilla and Safari, their certs are pretty much as good as any other commercial X.509 server cert.
In that respect they are not unique: you can buy commercial-grade certs from various sources, the most popular being Thawte, Equifax, UserTrust, Comodo, and VeriSign.
What makes StartCom special is the fact that they give away free certificates, similar to what CAcert is doing. Their enrollment at http://www.startssl.com/ is pretty straightforward, and getting certificates (either by uploading a CSR or by letting them generate a key) is painless.
Furthermore, they impressed me by:
- Adding priv.at as a valid domain suffix within a few hours after I mailed them.
- Checking the server for which you requested a cert and giving you hints if you made a configuration mistake.
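The CSR route is the standard openssl workflow. Here is a small sketch that shells out to the openssl CLI (the file names and the CN are placeholders, and nothing here is StartCom-specific):

```python
import os
import subprocess

def make_csr(common_name: str, directory: str) -> str:
    """Generate a fresh RSA key and a CSR for `common_name` via the openssl CLI.

    Returns the path of the CSR file; the private key lands next to it.
    """
    key_path = os.path.join(directory, "server.key")
    csr_path = os.path.join(directory, "server.csr")
    subprocess.run(
        ["openssl", "req", "-new",
         "-newkey", "rsa:2048", "-nodes",   # new unencrypted 2048-bit key
         "-keyout", key_path,
         "-out", csr_path,
         "-subj", f"/CN={common_name}"],    # non-interactive subject
        check=True, capture_output=True,
    )
    return csr_path
```

The resulting `server.csr` is what you paste or upload to the CA; the key never leaves your machine, which is the whole point of the CSR path over letting the CA generate the key.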
To boost his X.509 certificate business, he puts out a press release, which is promptly picked up by Fuzo.
It appears to be about X.509 certificates for SMTP/STARTTLS, i.e. encrypting the transport path for outgoing mail.
What is wrong with all of that?
Continue reading Zeger rides again
In the wake of .org going signed, we finally have good data on what that means for the authoritative nameservers. Duane gave a good talk at the recent NANOG meeting showing the increase in TCP connections.
So what is the problem?
In a nutshell: packet sizes. DNS responses containing the DNSSEC-specific RRsets are larger, and setting the DO bit that triggers their inclusion is almost the default these days. So we are now routinely exceeding the 512 bytes that the original DNS spec allowed. Over the years, the IETF defined EDNS0, which allows clients to announce their support for larger responses over UDP. Now this is finally being put to the test, and we can see how much fallback to TCP we still observe.
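For illustration, here is what EDNS0 looks like on the wire: a pseudo-RR of type OPT in the additional section, whose class field carries the advertised UDP payload size and whose TTL field carries the DO bit (RFC 6891 layout). A hand-rolled query, no DNS library involved; the query ID and names are arbitrary:

```python
import struct

def build_query(name: str, qtype: int = 1, payload_size: int = 4096,
                dnssec_ok: bool = True) -> bytes:
    """Build a DNS query packet with an EDNS0 OPT record by hand."""
    header = struct.pack("!HHHHHH",
                         0x1234,  # query ID (arbitrary)
                         0x0100,  # flags: standard query, RD set
                         1,       # QDCOUNT
                         0,       # ANCOUNT
                         0,       # NSCOUNT
                         1)       # ARCOUNT: the OPT pseudo-RR below
    qname = b"".join(struct.pack("!B", len(label)) + label.encode()
                     for label in name.rstrip(".").split(".")) + b"\x00"
    question = qname + struct.pack("!HH", qtype, 1)  # QTYPE, QCLASS=IN
    # OPT RR: root name, TYPE=41, CLASS = advertised UDP payload size,
    # TTL = ext-RCODE/version/flags; the DO bit is the top bit of the
    # 16-bit flags half, i.e. bit 15 of the TTL field.
    ttl = 0x8000 if dnssec_ok else 0
    opt = b"\x00" + struct.pack("!HHIH", 41, payload_size, ttl, 0)
    return header + question + opt

query = build_query("org", qtype=48)  # 48 = DNSKEY, a typically large answer
```

Without the OPT record a server must assume the old 512-byte ceiling; with it, the server can send up to `payload_size` bytes over UDP before truncating and forcing the TCP retry.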
Continue reading DNSSEC and large packets
Almost whenever a security event involving Windows is featured on Slashdot or Heise, some Linux fanboys will invariably post their cocky “that would not have happened with Linux” messages.
I am starting to see the same thing with DNS incidents and DNSSEC.
This is just as childish and stupid, especially as the voices writing such notes are often established engineers and not your average adolescent geek.
In reality, most of the recent DNS hacks were not perpetrated by crafting forged DNS responses to poison caches, but were successful attacks against registrar/registrant interfaces. No, DNSSEC would not have helped in such cases.
The same is true for DNSSEC and the domain-based censorship just passed by the German government. DNSSEC will not help here; it is no panacea against meddling with DNS answers. The outcome depends on who does the validation and whether the offending domains are actually signed (not likely these days):
- DNSSEC validation is done at the ISP resolver:
DNSSEC doesn’t help the end-user here at all.
- DNSSEC validation in the client, ISP recursor is used:
If the domain is signed, the forged answer will fail validation, so the user gets a resolution error (typically a SERVFAIL) instead of the IP address of the STOP-sign website.
So the censoring still works; only the warning to the user (and the logging of STOP-sign accesses) does not.
- DNSSEC validation in the client, full recursion at the client:
Censorship is ineffective. Just the same as when the local recursor does no DNSSEC checking.
Remember: DNSSEC is not about the availability part of security, only about integrity. Censorship does not need to attack integrity; it is all about availability.
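The case distinction above can be condensed into a small decision function (a paraphrase of the list for illustration, not anyone's actual resolver logic):

```python
def censorship_outcome(validation: str, domain_signed: bool) -> str:
    """Outcome of DNS-based blocking for the three validation scenarios.

    validation: 'isp'            - the ISP resolver validates,
                'client-via-isp' - the client validates, ISP recursor in path,
                'client-full'    - the client runs its own full recursor.
    """
    if validation == "isp":
        # The ISP rewrites answers after (or instead of) validating,
        # so validation happens on the wrong side of the meddling.
        return "blocked: user sees the STOP-sign site"
    if validation == "client-via-isp":
        if domain_signed:
            # The forged answer fails validation: still blocked,
            # but the user sees an error, not the STOP sign.
            return "blocked: validation failure instead of STOP sign"
        return "blocked: user sees the STOP-sign site"
    if validation == "client-full":
        # The ISP resolver is bypassed entirely.
        return "not blocked: recursion bypasses the ISP resolver"
    raise ValueError(f"unknown scenario: {validation}")
```

Note that "not blocked" in the last case has nothing to do with DNSSEC at all: it is the full recursion, not the signature checking, that defeats the blocking.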
Date: Wed, 20 May 2009 14:05:42 +0000
Subject: Your free trial to Last.fm Radio is over. Did you enjoy it?
Your free trial to Last.fm Radio is about to end. If you’re enjoying it, why not
subscribe for only €3.00/month and continue listening to non-stop personalised
The Last.fm Team
Deny This, Last.fm
by Michael Arrington on May 22, 2009
A couple of months ago Erick Schonfeld wrote a post titled “Did Last.fm Just Hand Over User Listening Data To the RIAA?” based on a source that has proved to be very reliable in the past. All hell broke loose shortly thereafter.
I was inclined to pay them the €3, partly because I have listened a lot to one of their streams, but after this breach of their privacy agreement?
Sorry, no deal guys.
[Update: yes, I know that LastFM is disputing this story.]
The Internet Community thanks the RIPE staff for their dedicated work during the RIPE and OARC meeting:
Last week Carsten Schiefner talked about .tel at the nic.at Registrartag in Vienna. Now that .tel has finally launched, here are my thoughts on this new TLD:
Continue reading Some thoughts on .tel