Broadcom woes
My wife's company laptop (Windows XP) came with the Broadcom software for controlling the WiFi settings.
I'd already had so much trouble getting that box to talk WPA to my local WLAN at home (an OpenWrt Kamikaze running on an Alix box) that I switched back to WEP.
Last week I tried to get the Broadcom junk to talk WPA to the Linksys ADSL/WLAN CPE at my mother's place. No go. Just once, for a few seconds, it managed to get the TKIP key. Most of the time it failed to negotiate an AES key. Whatever.
I’m so glad I convinced her tech department to give us local admin rights. That way I finally just nuked that dysfunctional piece of sh*** and went back to the default Windows WLAN configuration tool.
That just worked.
Instantly. No hassle at all.
US social security numbers
Today, Slashdot features yet another article on the insecurity of the SSN as an authenticator. A good number of comments already discuss the stupidity of basing security on the secrecy of the SSN.
Actually, I think there is just one simple solution to keep companies from relying on the SSN as a way to authenticate people:
Publish them all.
In reality, everybody who actually uses SSNs to authenticate people needs access to a database of SSNs. Anybody who handles forms containing SSNs learns them. It's a shared secret. And it's used so widely that the circle of people who know them is so large that the secrecy is impossible to maintain.
They may be a secret, but they are a pretty open secret. That’s not security, that’s just a marginally plausible veneer of security.
In order to get something secure in place, you need to convince people that the current scheme is broken beyond repair. So just publish them. All 300 million of them. Get over it.
Rate-limit for swatch
At work, we installed swatch to have a look at our combined logfiles. (see techrepublic or linsec for a swatch intro.)
But contrary to most of the examples, we’re using swatch not to check for known events, but to look out for unexpected entries. So basically our config is “ignore the known, send mail for the rest”:
ignore=/…/
ignore=/…/
watchfor=/./
    mail=…
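The filtering logic behind that config can be sketched in a few lines of Python. This is just an illustration of the "ignore the known, alert on the rest" idea, not part of swatch itself, and the ignore patterns here are hypothetical placeholders:

```python
import re

# Hypothetical known-good patterns; a real list would mirror the
# ignore=/…/ lines in the swatch config.
IGNORE = [re.compile(p) for p in (r"^cron\[", r"sshd.*session closed")]

def unexpected(lines):
    """Yield only log lines matching none of the known-good patterns,
    i.e. the lines that the catch-all watchfor=/./ would mail out."""
    for line in lines:
        if not any(rx.search(line) for rx in IGNORE):
            yield line
```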
This has one severe drawback: every single unexpected line in a logfile will send one mail. This just doesn’t scale.
The threshold feature won’t really help us, as it rejects notifications over its limit, whereas for email notifications it’s better to collect more messages into a single email.
So I dived into the code and added a ratelimit feature for the mail Action.
Apply the patch in Actions.pm.diff and then you can write:
watchfor=/./
    mail=addresses=joe\@example.com,subject="swatch alert",ratelimit=600,ratetag=foo
and joe will get no more than one mail every 10 minutes, without missing a single message.
As written, this config has one problem: I need to flush the messages I held back once I'm allowed to send mail again. In theory, I should have added some sort of timer-based event handling to swatch, but I considered that overkill, especially if you have multiple mail statements with different rate limits. So I added another option to the mail Action that tells it just to flush spooled messages and do nothing more. You should trigger that option frequently, e.g. with a stanza like this at the top of your config file:
watchfor=/./
    mail=addresses=joe\@example.com,subject="swatch alert",ratelimit=600,ratetag=foo,rateflush=1
    continue
ignore=/…/
watchfor=/./
    mail=addresses=joe\@example.com,subject="swatch alert",ratelimit=600,ratetag=foo
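The spool-and-flush behaviour can be sketched like this. This is a minimal Python illustration of the semantics I wanted, not the actual Perl patch; `send_mail` is a hypothetical callback:

```python
import time

class RateLimitedMailer:
    """Sketch of the ratelimit/rateflush semantics described above."""

    def __init__(self, send_mail, ratelimit=600, now=time.monotonic):
        self.send_mail = send_mail   # hypothetical delivery callback
        self.ratelimit = ratelimit   # minimum seconds between mails
        self.now = now               # injectable clock, eases testing
        self.last_sent = None
        self.spool = []              # messages held back while limited

    def notify(self, message=None, rateflush=False):
        # rateflush=True: don't record this line, just try to flush.
        if not rateflush and message is not None:
            self.spool.append(message)
        if not self.spool:
            return
        t = self.now()
        if self.last_sent is None or t - self.last_sent >= self.ratelimit:
            # One mail carrying every held-back line, oldest first.
            self.send_mail("\n".join(self.spool))
            self.spool.clear()
            self.last_sent = t
```

The rateflush stanza at the top of the config maps to calling `notify(rateflush=True)` on every log line, which is what drains the spool once the rate limit expires.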
Yeah, right
Today at the local (small) supermarket:
Well, at some point in history, both Spain and (parts of) Italy were part of the Hapsburg Empire, but the rest? Give me a break.
A sequel to this post:
My aim here was to build a compact set of tracks.
Well, sometimes there is no time to build an elaborate set of tracks. Today, I just made a simple loop for Clemens to play with before I had to leave:
In the wake of .org going signed, we finally have good data on what that means for the authoritative nameservers. Duane gave a good talk at the recent NANOG meeting, showing the increase in TCP connections.
So what is the problem?
In a nutshell: packet sizes. DNS responses containing the DNSSEC-specific RRsets are larger, and setting the DO bit that triggers their inclusion is almost the default these days. So we're now routinely exceeding the 512-byte limit of the original DNS spec. Over the years, the IETF defined EDNS0, which allows clients to announce their support for larger responses via UDP. Now this is finally put to the test, and we can see how much fallback to TCP we still observe.
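The fallback path itself is simple: query over UDP first, and if the server sets the TC (truncated) bit because the answer didn't fit in the advertised buffer, retry over TCP. A minimal sketch, with the transports as hypothetical callbacks instead of real sockets:

```python
HEADER_TC = 0x0200  # TC (truncated) bit in the DNS header flags word

def resolve(send_udp, send_tcp, query, edns_bufsize=4096):
    """Sketch of UDP-first resolution with TCP fallback.

    send_udp / send_tcp are hypothetical transport callbacks; a real
    resolver would use sockets and build the EDNS0 OPT record itself.
    """
    response = send_udp(query, bufsize=edns_bufsize)
    flags = int.from_bytes(response[2:4], "big")  # bytes 2-3 of the header
    if flags & HEADER_TC:       # answer was truncated: server wants TCP
        return send_tcp(query)  # retry the same question over TCP
    return response
```

Every such retry is one of the extra TCP connections showing up in Duane's graphs.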