My online life in Nicaragua was limited by many factors, notably the five-times-daily power flickers, the twice-weekly 6+ hour outages, huge DNS latencies, a 128Kbit connection... and ultimately, the lack of any internet within 6km of my house. I got out of the habit of relying on any machine but the ones in my household, adopting things that worked well offline - git for my source code control, org-mode for my personal database of writing, hours and hours of internet radio captured with streamripper and rsynced around, podcasts for my news, and ipv6 to let me actually be a useful node on the web (I ran my personal email and netnews servers that way).
I personally find the relentless "centralisation of everything web related on a cluster in the USA" trendline worrying. Everything important seems to exist today in the cloud, where I thought (and still think) that a great deal of useful services belong on the edge, in the local home or business (email/jabber/sip servers as three examples, local filesharing as another - it blows my mind when someone hands me a usb stick with the file I need on it - and it really bugs me that I can't drag and drop files wirelessly to my Android - at present, anyway).
Left to publish, I have several long-delayed blog posts and a speech I gave in Australia on the centralization problem. I just found a printout of that speech - and can't find the file now. I felt the speech was a disaster at the time, but looking at the text now... it's not bad, actually.
One of the most compelling talks about lowering our overall dependency on the web was Jesse Vincent's talk, 2 years ago, about the "sd" based bug tracker, which attempts to do for bug tracking what git did for source code development. It's relevant to just about everybody that has problems doing their job when disconnected from the web for longer than a few minutes.
HOWEVER
ALL that said, the power of the web for collaborative efforts, when it is possible to be online nearly all the time, IS compelling, and now that I'm back in the States (in a place that has had ONE power outage in 5 years) I guess it would be best to stop swimming upstream and go for something more normal people can use and edit.
(3G is seemingly everywhere - which increases the number of places where I can get online well beyond the mere coffee shops and corporate offices it was when I left America. )
I have a great deal of ipv6 related information that I wish to publish.
Most of the online references regarding ipv6 are from another era: structured as single-author howtos, often out of date or inaccurate, and focused only on the most modest scenarios of usage, like "how to get a home router working". Data and instructions appropriate for "how to get a small business with multiple locations up on ipv6", or "how to build an ipv6 enabled ISP", don't exist.
I was up to something far more complex and hard to describe - a purely ipv6 enabled wireless isp - I called it wisp6 - and it's looking like getting the pleasures and pitfalls of that effort out to the rest of the world would be a boon for all.
Ipv6 is an increasingly hot topic. Comcast is making great strides towards a deployment in the near future, with experiments running on all the major modes of ipv6 deployment - 6RD, AFTR, and 6to4 tunneling. I've seen an explosion of ipv6 addresses in universities in South America, and China seems to be making progress too...
Something more dynamic, and multi-user, such as a wiki, would be more likely to stay up to date and grow beyond my personal boundaries of knowledge. I'd also like to have some blogging software that better matched my personal writing habits. Blogger bugs me.
Ideally I'd like to find some content management system that would be easy to manage on my Linux laptop - and run on my edge hardware - and be mirrored on the web - that I could sync bi-directionally with, whether online or off. I'd settle for a one-way system - hosted on the web, mirrorable on my laptop.
So, anyway, I'm fiddling with mediawiki (which is what wikipedia uses) and liking the features and default look and stuff like that, but not much caring for the resource requirements. I don't understand why a "live" connection to a sql database is needed, for example, when you should be able to just "publish" the entire thing directly from the source code repository.
Ikiwiki seems to have potential, and I'm fiddling with that, too. I REALLY want something that I can export cleanly from org-mode, but don't think worg in its current state is suitable.
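The workflow I'm picturing goes something like this (the paths, hostnames and wiki name below are made up, it's just a sketch of the idea):

git clone myserver:/srv/git/wiki.git ~/wiki
# compile the checkout to plain static html - no database, no live anything
ikiwiki --wikiname "my wiki" ~/wiki ~/public_html/wiki
# mirror the static output up to the public webserver
rsync -a ~/public_html/wiki/ myserver:/var/www/wiki/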
It may be futile to continue swimming upstream on these fronts, and I might just bite the bullet and adopt mediawiki + drupal, rather than explore more options than these.
Zoneminder/Facebook gateway and other random project notes
I woke up with a really good idea this morning that I need to write down but first an aside.
I am very happy that facebook supports jabber now (XMPP). That makes it the LAST of the chat protocols that I have to wedge into one multi-chat client (pidgin or emacs' erc).
( That is, if I ever get bitlbee 1.2.4 + otr + bitlbee skype plugin + znc working again. My last attempt at getting otr patched into bitlbee failed big time. )
I really hate how "chat" - merely talking to another person over the internet - has fragmented into 14+ major protocols including irc - which has itself, fragmented into 260+ irc networks. Perhaps it's an example of the human condition that we form tribes in this way, that the tower of babel lives on... (has anyone rigged a translation bot into chat with google translate yet?)
Anyway, that's the rant part of this morning, on to the interesting stuff.
One major subproject of mine has been trying to come up with a working security and sensor system for my home. The hardware side of it I've been calling the "pocobelle" project, and part of the software side is built around zoneminder, which is a motion detection/video capture/alert system.
A month or so ago, I got zoneminder to automagically push alerts ("Motion detected at the front door, see http://whatever/whatever for details") into jabber. It was just a couple libraries and a few dozen lines of perl to do this. (cool)
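The shape of the idea - rendered here with the sendxmpp command line tool rather than the perl I actually wrote, and with invented account details - is about this simple:

#!/bin/sh
# called from a zoneminder filter with the event url as $1 (sketch only)
MSG="Motion detected at the front door, see $1 for details"
echo "$MSG" | sendxmpp -t -u zonealert -j jabber.example.org -p secret dave@example.org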
To make it faster and more reliable for local alerting, I installed my own jabber server on the pocobelle box... and also tried to register a zoneminder specific account on the main jabber.org server.
Unfortunately jabber.org hasn't been taking new signups, which frustrated me because I'm trying NOT to use google (which also does jabber) for new stuff and wanted to demo the idea to a few people - and because the zoneminder box is behind NAT I can't use the nifty federation (s2s) feature that jabber has, which I could use to "link" my jabber server to another jabber server like jabber.org - AND because although jabber works great over IPv6, nobody seems to do s2s over it....
I woke up this morning with a new idea. Since facebook supports jabber now, I could set up a jabber account on IT, and have zoneminder send its alerts out via facebook. Not only that, but by using the facebook API/uploader I could have it automatically upload the captured video, and using the facebook friends system only allow certain people to be able to view it...
That's it for the new ideas on the zoneminder front. In other news... My mom came down last month and brought 4 of Ubiquiti's nanostation M5 5Ghz Wireless-n capable radios. I just got them configured - 2 with the default firmware, 1 with a stock openwrt build, and (this weekend) 1 with my own openwrt build...
Pluses - The hardware is great, it's low power, and rated to 70C, and it has a nifty POE passthrough feature for the second ethernet port. The factory firmware is dirt simple. The web based site survey tools are great. The power monitoring LEDs are useful. I can get two of the radios to talk at 108 megabits/sec, with a real transfer rate in the 80Mbit range. I'm told I can do better than that at longer ranges than within my house (shorter ranges overdrive the local receiver), but it's still basically double the theoretical performance of wireless-g and easily triple the real world performance I've observed with wireless-g...
Minuses - Lacking IPv6 and (especially) routing protocol support, the factory firmware is USELESS. Maybe I'm spoilt by dd-wrt and openwrt having support for ospf and olsr out of the box, but believe me, as soon as you add a couple routers to your network, having a routing protocol to automagically figure out how to get stuff from point A to points B,C,D is flat out necessary... and it's not there. (aside: the regular unavailability and lack of standardization of routing protocols is what made bridging so inevitable - there is ONE standard for bridging - stp - and everybody implements it - and it works well for most things)
Secondly, the factory firmware is based on an ancient version of linux - 2.6.15 - and that in itself scares me. That's over 4 years old.
So I installed openwrt, not with a little trepidation (you need to install the nano-m firmware via tftp from a recent build of openwrt trunk, then install a bunch of modules like ath9k, and yes, I'll write up more documentation)
oooh, luxury. 8MB of flash and 32MB of ram goes a LOT further than you'd think. Linux 2.6.32.9, even, less than 4 months behind the mainline linux kernel. I installed ipv6, babel, ahcpd, quagga, snmpd, ntp, and a few other things, and still had room to spare. The web interface is not all that great, but I don't really care, I want to manage these things via command line tools anyway. After some tweaking, I got openwrt talking to the ubiquiti firmware...
but so far, only at 54Mbit. I'm missing some configuration parameter (I hope). I have a 40Mhz channel configured, but... I'd like it very much if I could basically drive the radios at 100Mbit ethernet speeds, it would lower the need to shape the outgoing traffic as much, and anytime I can get something that is 5x better than an older technology (wireless 11b) I'm happier...
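For reference, the 40Mhz knob lives in /etc/config/wireless on openwrt; something like this (the channel and device names here are illustrative, not copied from my config):

config wifi-device 'radio0'
        option type    'mac80211'
        option hwmode  '11na'
        option channel '157'
        option htmode  'HT40+'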
I also haven't figured out how to override the power settings to the full range available, limited to 17dbm of gain instead of 27dbm...
Update: These two problems are related. I'm definitely overdriving the receivers. I can actually connect as high as 162 Mbits/sec between the two. I found this out when I accidentally pointed one of the radios at the ceiling... Ooh. 162Mbits.... NICE...
Now, since I envision a day where I have to build out a bunch more of these radios, and partially because I'm a glutton for punishment, I downloaded openwrt and built my own firmware. That worked... on the first try! (that's amazing in itself, but usually a bad sign indicating real trouble ahead, as I will always then proceed to dig my own hole). On my second try - after configuring a few more modules - it fails on building the kernel for some reason:
LD [M]  fs/autofs4/autofs4.o
mips-openwrt-linux-uclibc-ld: unrecognized option '-Wl,-rpath,/usr/local/lib'
mips-openwrt-linux-uclibc-ld: use the --help option for usage information
I'm stuck on that right now. I have a bunch of tests to do (reliability, traffic shaping, different routing protocols, etc) but I'd like the two different firmwares to at least be talking at the same rate.
But it is kind of neat to see babel "just working" to route between the routers. Hmm... maybe I can get vlans to work... I wonder if this thing has any pins I could hang a temp sensor off of...
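For anyone who wants to try the same thing, the quickest path I know to a babel test is something like this - package names are from memory and may have shifted between openwrt snapshots:

opkg install kmod-ipv6 babeld ahcpd quagga
# quick and dirty: run babeld by hand on the mesh-facing interfaces,
# then watch the routes show up in "ip -6 route"
babeld -d 1 wlan0 eth0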
Computationally, I've been limping along, flying on the shreds of one wing. My quad core box died in August, my laptop died in December, and I killed Pocobelle back in November.
The quad-core? Dead beyond repair. I tried replacing the fan, the cpu, and the memory. Pocobelle requires some finicky jtag work to repair and was at the end of its useful life anyway, so I replaced it with an open-rd box.
And, rather than fix the laptop, I ended up building a new machine (specific to a project I can't talk about), in x86 mode. "Buddy" is a nice machine, a dual core atom, and the zotac nvidia graphics interface is amazingly fast, even with driving two screens. I don't miss my old power sucking quad core box, except when compiling kernels and ardour. I should have made buddy be x86_64, however, so I could just clone the laptop...
Back in august, I switched over to using emacs for chat, email, news. I took major steps to try and integrate chat, in particular, into emacs - using bitlbee and znc to integrate skype, irc, and jabber into one interface. It worked great!
I do the majority of my writing and coding in emacs, already, so I ended up with two fullscreen windows that used every pixel I had available, sparingly.
I wasn't done - in particular, I'd started working on some new blogging software and got stuck on it - and then the machine I was developing it on, pocobelle, died too, and the memory stick it ran on is around here... Somewhere...
It took a LOT of work to set all that up, and I haven't replicated it on the new box. It would have been easy had I stuck with x86_64 mode instead of reverting to x86. Instead I switched to using the new thunderbird for mail, dropped netnews entirely, and went back to pidgin for chat, because it was easier to set those up quickly. And went from the keyboard driven window manager I like, back to gnome. A major mistake, I'm thinking all that was, but I did get a chance to try some new tools.
Thunderbird 3 is awesome, in particular the tabbed searching facility is to die for. But I haven't got around to sorting my mailboxes with the same level of filtering I had with emacs. Due to switching to using imap for the email backend, I'm not sure how hard it's going to be to use emacs for mail again, and I'd rather like to keep thunderbird around as an option.
I HATE the gnome window manager. It's so dumbed down as to make me feel like Harrison Bergeron.
Pidgin is good but...
The thing is, I can feel the productivity seeping out of me every time I switch from one glossy white (web,chat,news) window to another. I feel far less productive minute by minute, hour by hour, by using these tools not integrated into my main editing interface. My personal, searchable database of everything I do was all living in emacs, and it was letting me manage far more "stuff" than I can without these tools integrated.
Sigh. So I kind of need to reboot my life - repairing the dead laptop, resurrecting pocobelle, getting a new desk made, laying out all the stuff that needs repair or upgrades on a workbench... and some time all by myself to methodically go through it all.
Facebook has finally adopted a standards based chat system, built around XMPP. That's one timesink I can now fold into chat. There's also a fix for yahoo out there in bitlbee. My first attempt at integrating the yahoo and otr patches into bitlbee did not go well, I hope someone else has fixed it.
I built out my email systems using digital certificates. They are about to expire and the cert I had for the website to un-expire them is on the dead laptop....
Hopefully I'll find the time this month to rebuild my environment to where I feel productive again. What I really want to be able to do is wake up, and sit down, and work, without having to reboot my world every time I get up. Solving THAT problem is going to require I get a solar/battery/inverter system so that I can keep the core systems powered up, and/or that I simplify my environment a lot more... need money for the inverter system...
Maybe I can outsource some of the backlog. Need money for that. Guess I need to work some, first.
While all that stuff was going backwards I did make major forward progress with "jaco", the open-rd replacement for pocobelle. It's a darn nice box.
At one point I had high hopes I could actually use jaco for a desktop replacement. At 11 watts, it eats ~1/4th the power of the atom, and runs emacs just fine. Given that my display eats 20watts, I could halve my power consumption and double run time on battery. (Lest you think this is an obscure requirement, I frequently undergo day long power failures, and running the generator for long periods is both annoying and expensive, and my existing UPS lasts about 3 hours on the atom)
The video driver, however, is so horribly slow that I just can't stand using it as a desktop.
Jaco is great for everything else though. I have it running zoneminder, in particular, as part of my home security system, in addition to it being primary DNS, squid cache, web server, bittorrent client, ipv6 gateway, mesh networking, email/imap, znc, jabber server, music and I forget what else.
I have had so much fun developing stuff to work on the open-rd that I've been afraid to turn it to the production use I'd planned for it, namely replacing my router/firewall. I guess I need to get another one, or wait for the new "guruplug"s to arrive.
First up this morning - make an outline of everything I need to do this month. Importantly, this has to include revenue generating stuff, as I burned myself out back in December and need to get on the stick, but also importantly it has to get me to where I can "wake up and work"....
Second up this morning - fix the !@#@! Laptop. To do that, I need to find the pesky little screws for a new DVD drive, and burn a new cd, and that new DVD drive is going into the box I'm typing this on so I guess I'll have to power off in a sec before I can develop more of an outl
Hopefully, with some feedback, I could order the "right thing" by the end of next week.
I write stuff and post it because - although the average intelligence of the internet may be low - the cumulative intelligence is inconceivably high. Out there, there is always someone that has the answer to your problem...
I had pretty much settled on the TS7500 board as ideal for what I was looking to build. It was extremely low power, and had enough DIOs, host USB, flash, etc to do most of what I needed to do.
The only problem with it was that it didn't have enough ram to do all I needed to do, and was kind of slow. I was working out the implications for motion detection in zoneminder, trying to offload that function to other devices, and not liking what the shifts in costs were like - my camera costs went way, way up, and functionality down. Zoneminder's motion detect algorithm was very, very good, but also cpu intensive. I couldn't see my way clear to embedding anything even as close to as good in a FPGA, and doubted your typical camera could, either... and I was trying to keep the moving parts and power budget to a bare minimum.
Slashdot steered me at two really cool looking products - most notably the Sheeva Plug, and secondarily the Open RD...
Which - aside from slightly higher power consumption, still far less than a laptop or atom board - are pretty much perfect for most of what I need (except the DIO relays, which I can probably do via USB).
I kind of wish they'd posted on this topic before I burned the hours writing my spec, and for all I know there is some showstopper in one of these boards, but it hurts a lot less to read than to write, so I'm going to go read up on them now, and maybe place an order or two.
The sheeva plugs have 512MB of RAM, 512MB of flash, USB 2.0, eat 3w of power at idle, and cost 99 BUCKS! for that price someone can afford to buy a couple, just to play with!
I am sore tempted to order a touchbook, too. Life is getting a bit more interesting in the arm world.
It was a dark and stormy saturday night. Power and internet were offline. I'd got to where I was satisfied with the Linux kernel I was running. It was doing everything I needed to do, I'd loaded up all my apps and (with a 256MB of swap) hadn't had a crash in a week...
I wanted to focus on userspace, and make Pocobelle boot standalone so it didn't need to fetch, via tftp, its kernel anymore, and could stand completely alone...
(About the only bit of tuning I wanted to do was make it boot faster; I had had to put a rootdelay=10 into the Linux boot options so I didn't have to do any special magic (e.g. switchroot) to complete the boot off of the USB stick. But it booted in less than 2 minutes, and that was fine.)
I was *happy*. (Getting to where you have a stable kernel makes EVERYTHING else a thousand times easier)
Soooo... I decided to write the "stable" kernel to the on-board flash. I went into RedBoot and did a:
Not remembering how Pocobelle got this way, (I'd had it in a box for 4 years or so) I blithely assumed that this was all I needed. I kept scratching my head over how this layout failed to match what was in the kernel for the layout:
My assumption was basically that I had an advanced kernel with a preproduction redboot and that the partition table info here was incorrect...
WRONG.
I did an fis init -y to clear out the uselessly small Linux partition, hacked the TS7250 driver to take my new partition table, built a new kernel, allocated 4MB for the new Linux kernel at 0x60000000, erased that partition, and tftp loaded and wrote the Linux kernel there...
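Reconstructed from memory - the RAM load address and partition length here are illustrative - that session went something like:

RedBoot> fis init -y
RedBoot> load -r -v -b 0x00218000 zImage
RedBoot> fis create -b 0x00218000 -l 0x400000 linux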
I completely forgot that this device has a complex bootstrap process. The tiny on-board NOR flash has a tiny bootloader that then bootstraps another bootloader out on the NAND flash, which is then smart enough to load up Redboot, which in turn is smart enough to load up Linux.
Result: 1 brick. Pocobelle never gets to Redboot. It jumps right to Linux and blows up shortly thereafter.
I figure I overwrote the TS-BOOTROM, which, although it wasn't marked in the bogus fis table, was hiding at 0x60000000. The system has enough brains to jump to that spot, that's all....
Now, there may yet be something to do that will get me into rewriting the ts-bootloader into the NAND flash from the serial port. But I doubt it. (I'm writing this blog entry unattached from the internet, like I do a lot these days, so I can't google)
Post googling Update: Joy! There's a tool to boot from the serial port! Boo! I have to build redboot from scratch for it to work. I've been meaning to do that anyway, but...
There's no one to blame for this but myself. I should have dumped the contents of the flash there and looked at the strings in it to see what it meant before I overwrote it. Shoulda, but I didn't. I normally would have done that, back when I was doing this professionally. I would have puzzled and agonized for days over the mismatch between the flash info, the documentation, and the source code...
(Back then, the boards I was working on were usually very early prototypes, worth thousands or tens of thousands of dollars. My fear factor with a 145 dollar board is considerably less.)
It would be easy to fix if I had a jtag interface available, or was still living in the Silicon Valley, where I could visit any of a bunch of friends at companies that keep stuff like that lying around.
Sadly, this is the furthest I've been away from a jtag debugger in a decade. I keep meaning to get one, but they tend to be rather project specific and I've been unable to settle on the right boards for what I intend to be doing.
The nearest jtag debugger is probably 1200 miles away, in Florida, maybe further than that as Florida isn't exactly known as a hotbed of embedded development.
OK, so I rationalize to myself:
This was just an experiment, after all, to see what I could do that was new and different in the embedded world.
I've pretty much determined that the range of software I want to run is going to require at least 256MB of ram, and I might as well go for 512MB. I was kind of hoping to keep prototyping on the board (it was already quite useful as it was).
At the time that I killed Pocobelle... I was successfully running (all over ipv6):
Ipv6 tunnel
bind9, with views, dnssec, and TSIG keys, as my primary domain server
postfix as a mail router and spamkiller, with ca-cert certificates for security
pgp-enhanced-mailman
openssh
inn (yes, a full netnews server!)
cricket (for network monitoring)
youtube-dl for grabbing youtube offline
madplay for music
lighttpd and apache
My new, still in development, blogging software
rsync for backups
All on this 300mw power sipping machine. I'd transferred every function my old dedicated server (an old laptop) had been doing onto this box and was ready to turn it off...
The only things that were annoyingly slow were heavy duty disk writes (databases in particular) and the startup of interpreted programs, particularly on the web service side.
I had just switched (yesterday) from apache to lighttpd - which was much faster than apache and, as a bonus, gave me high speed .flv streaming - and was working on getting fastcgi to work with the web interfaces for dimp, cricket, zoneminder, and my new blogging software.
And I killed it...
The darn thing was getting useful. I was actually getting dependent on it... it was routing my /48 ipv6 network, and running DNS and mail for the whole house and was serving up a bunch of mp3s and videos...
I'll miss you, pocobelle. I'll get you fixed as soon as I can, I promise.
The beauty of this particular project is that I can retreat, for a while, into booting into a qemu emulator of the arm processor in it. I never needed actual hardware to run it in the first place. It WAS essential I prove the board and kernel reliable and get a feel for its performance. That's it.
All the binary code for it actually lives on a USB stick. I can just pop that stick over there and resume working.
(This is so much better than life in the old days, pre-usb sticks, pre USB, even... I have spent months of my life in a jtag debugger, just trying to get a freshly designed and probably buggy board to run the first 50 instructions...)
So I slammed the USB sticks into another machine, made a copy, converted the result into qcow2 format, and booted up pocobelle virtually via qemu. The out of the box Linux kernel I have for that emulator is the Versatile variant of arm, which needed a bunch of modules, so I booted up another copy of the emulator that accessed the original versatile system image and copied those over. I told inittab to use a slightly different serial port (ttyAMA0) and updated /etc/securetty to let me log in on that port.
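The commands involved were roughly these (file names invented; I'm going from memory):

qemu-img convert -O qcow2 pocobelle-stick.img pocobelle.qcow2
qemu-system-arm -M versatilepb -m 256 \
    -kernel vmlinuz-versatile -initrd initrd.img-versatile \
    -append "root=/dev/sda1 console=ttyAMA0" \
    -hda pocobelle.qcow2 -nographic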
And vPocobelle came to life once again! (I still need to figure out the tun interface to get the emulator on the net, however)
It's actually 3x faster to run out of the emulator in this case!! And the kernel I was using was fully baked! I was done. I didn't need to work on it anymore! And I'd intended to focus on userspace issues anyway and see what more memory did for me...
So (temporarily) losing the hardware is a setback, but only a minor one.
I really, really liked that it let me stay on the internet for a week without power. I'm writing this now, without power, or internet... (Note to self, get more gas for the generator monday)
I have no friggin idea how or when I'll get Pocobelle to boot again.
Maybe I'll find some california surfer dude visiting that can pick me up one in the Valley on the way down here....
Some news from the life and death and life of Pocobelle, the 300mw mail router
Briefly: what Pocobelle is about - is trying to create the internet I thought (in 1990) we'd have in 1996. Back then, I thought the internet would be a network of peers - not clients and servers - but peers. What we call P2P networking is a bitter joke compared to what could have been, what could have existed on the edge of the network, in every household, in every business, in the USA and in Timbuktu.
I expected email, netnews, DNS, radio services, web services, etc - to all exist - inside your home and not out in the "cloud". I expected to be playing concerts via the jamophone with my neighbor down the street and games with a buddy across town - with sub 1 ms latency...
Yes, the world changed. Mostly everyone became cyberserfs.
I didn't. My dream didn't die, it was just resting. Netnews isn't dead, it's still there, lots of people use it. It's easy to run your own email server, less easy to run your own DNS... Radio... Well, one day...
It IS impossible to be a true peer on the internet with IPv4 without a lot of expense. I'm doing it (mostly successfully) on the cheap with ipv6. And... I'm trying to do it, with a 140 dollar arm box with 64MB of memory that's 100% solid state and eats 300mw. That's milliwatts. Less than 1/3 of a watt.
The machine's called Pocobelle.sjds.teklibre.org.
I'm not going to bother posting much about it on my blog here, if you want to see what's going on with Pocobelle, get ipv6 working, and see for yourself.
This is an excerpt, from a lot of writing in progress:
Pocobelle has been acting as a backup email router for a few days now.
I get several hundred emails per day. It used to be thousands, but I switched to using netnews & gmane for my more high-traffic mailing lists, the lists that I mostly read and don't write to, such as lkml. Pocobelle successfully coped with the email I get in the dribs and drabs I get it in - I didn't have any complaints. It was transparent to me, and everybody. It did STARTTLS crypto without cracking 10% of cpu. My tests included sending a few dozen mails from a server outside my network, but that was it. Most of the time mail goes right to my laptop...
Today was the perfect day to try pocobelle out in a real world scenario.
I was without power or internet for 8 hours. My generator didn't work. (Most likely, I'm out of gas). My ice cream melted. Oh, well. I needed to defrost the chicken anyway.
The UPS that pocobelle was running on showed 95% of its battery available after that period. I was really happy about that. Assuming I have a week when the internet stays up and power doesn't, I should be able to have my email delivered without a problem, and periodically fire up the laptop to read it.
At 7PM, power and internet came back on simultaneously. I had previously turned off email to my laptop (and turned the laptop off, to save power). I booted up the laptop just to watch pocobelle do its stuff...
Pocobelle got on the Net... Got an ipv6 address... And started getting the backlog of mail...
The Bind9 DNS server rapidly got to 23MB in size... The cpu went to 100%... 93% of it, gone, waiting for disk access... the loadavg leapt past 5... available memory dropped to zero... My ssh session locked up...
Midway through the 15th email it bounced 3 messages, then it died.
Pocobelle ran completely out of memory and came to a screeching halt.
Sigh. The perils of engineering.
I hadn't thought deeply about the interaction between DNS services and email. Freshly booted, there is no DNS cache on the system.
I hadn't thought about a complete and utter cold start of absolutely everything pocobelle was connected to. There were no DNS caches anywhere it talked to that weren't "cold".
I'd got into a pathological situation, where the bandwidth being chewed up by all the mail being sent, and the time it took to "walk" DNS to verify it as "good" mail, competed and combined to bounce mails it couldn't do a reverse lookup on.
At the same time, the load on the system was such as to put it on the moon in short order.
As crashes go, it was not pretty. It brought back a flashback from 1995 where a CEO I knew, ecstatic with his new 26MB powerpoint presentation, emailed it to everyone in the company, and everyone he knew in the world, besides. That was an age, also, when a *good* mail server only had 64MB of ram....
1) Pocobelle only has 64MB of memory. (Pocobelle 2 will have 256MB or more.) An easy "cure" for the memory problem was to enable swapping. When running without swap, a Linux system frees up memory by discarding unused (read-only) program text pages and paging them back in from the filesystem when they get used again.
While there are a lot of binary pages you can do this to, it doesn't work on pages that have been modified by the linker, and it (especially) doesn't work on interpreted languages like python and perl. These languages often do have plenty of little-used pages, but they are *data* and can't get discarded because some day they MIGHT be modified further.
This arm build does not appear to have Jakub Jelinek's prelink utility installed, which would free up more memory by prelinking the various binaries. Prelink solved a few problems, but in the arm world most people (though not me) use a libc that isn't compatible with prelink. I'm still researching this...
So, anyway, there are plenty of things that can't get swapped out that could, if swap was enabled. So I added 128MB of swap on the flash. Linux doesn't require that you have swap on a raw partition (although it is a good idea), so I just did a:
dd if=/dev/zero bs=1024k of=/etc/swap count=128
mkswap /etc/swap
swapon /etc/swap
# and add to /etc/fstab
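The matching /etc/fstab entry is the standard one:

/etc/swap  none  swap  sw  0  0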
An even better cure for this would be to use a box with more memory but that's a problem reserved for pocobelle 2.
With swapping enabled, pocobelle grew decidedly less "chunky" in the general case. There is always a lot more free memory available for general use - for example, bind9 dropped from 23MB of ram down to 14MB. In normal use, 16MB is living on swap by default.
Whenever I get around to reformatting this USB key, I will put swap on a raw partition. I might put it on the built-in flash on the board, actually. We'll see.
2) Pocobelle was configured to use one DNS server - its own - and forward to several local servers attached to my wireless network, supplied by my ISP. While this is a decent config... one that a normal client would use... given that all of the servers it was connecting to were ALSO freshly booted and ALSO had to walk DNS, they ALL failed within the default DNS timeouts.
What I decided to do was establish a robust set of DNS servers (5), having pocobelle talk to itself twice - once at the beginning of the loop and again at the end. In the middle it talks to my main mail server, which, having already done the anti-spam checks in the first place, should already have a cached record of the remote server's origin.
It should effectively put a 10 second timeout on the DNS lookup instead of a 2 second one, AND get to at least one server that has a good, primed, cache; a server in the US that's impervious to power failures.
(Getting this to work was a little tricky in that I'm using bind views internally to give me a consistent picture of my network and routing configuration(s), but I'm not going to go into that here)
I hope this is sufficiently robust. I'm not going to purposely instigate another 8 hour delay on my email, at least, not in the near future.
Another answer to this is to cache more of the internet's DNS service at the start, before accepting mail. (My mail is mostly not random, but comes from a limited number of mailing list servers). I have a buddy that used to cache the entire DNS root zones back in 1995. Maybe that's still possible.
It would be good to have some sort of cache log that I could replay on name service startup (or at certain times of the day, for example, shortly before I wake up in the morning) to prime the cache(s). I can sort of do this by replaying the mail logs through the DNS system, but it would be cleaner if I could figure out a way to get my top 100 sites out of bind periodically.
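A rough sketch of what I mean - untested, and it assumes postfix-style "relay=host[addr]" lines in /var/log/mail.log - the point is just to warm bind's cache with yesterday's correspondents:

grep -o 'relay=[^[]*' /var/log/mail.log | cut -d= -f2 |
  sort | uniq -c | sort -rn | head -100 | awk '{print $2}' |
  while read host; do
    dig +short "$host" > /dev/null   # the answer doesn't matter, the caching does
  done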
3) Given that write speeds to the flash are so slow it would be best to always keep at least 512K reserved for disk buffers. Smaller writes than 128k at a time are *bad* with flash.
I used to know how to do that, but the interface to the Linux swapper has changed so much that I have to google to figure it out. (Most of this blog was written after the power failed again)
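My untested guess at the right knobs, for when I do get around to it (values pulled out of the air, not measured):

# /etc/sysctl.conf fragment - guesses, not a tested config
vm.min_free_kbytes = 512              # keep ~512K free for buffers
vm.dirty_writeback_centisecs = 1500   # flush less often, in bigger chunks
vm.dirty_expire_centisecs = 6000      # let dirty pages age a bit before they must be written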
4) Given that one of pocobelle's purposes is to be a mail router, and it lives on ipv6 which has little to no spam on it, it's somewhat pointless having even the minimal anti-spam services I have on it (like those reverse lookups that caused the bounces in the first place)
I'm not going to drop them, though - I actually want to make this into a system capable of the best anti-spam measures I can come up with, because spam is just never going to go away.
...
So, after fixing 1 and 2, I fired up rss2email on a new user on pocobelle. Rss2email is written in python. It took 12 seconds to start, and 16MB of ram, and was really going slow, so I decided that I didn't need to do that on pocobelle itself, but on my smart host elsewhere. Pocobelle just needs the mail itself, not the process that generates it.
Result: I got 25 messages as fast as they could be delivered.
It ran happily with 16MB of ram out on swap.
I'm happy with pocobelle today. I'm going to turn off my laptop tonight and see what happens.
AM Update: I turned the laptop on again, and got about 60 emails sent in rapid succession. The night before, I'd doubled the default number of connections to 12 in a burst of optimism.... Pocobelle handled the load, but I think I'm going to limit the number of inbound and outbound connections to 4. At 12, it ran at 93% of cpu and got down to very little memory during its burst of email. Pocobelle needs to remain responsive to DNS, in particular, as it's the main DNS server for the household, and has quite a few other things to do besides email.
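The postfix knobs for that are straightforward; something like this in main.cf (the values are what I intend to try, not what I've tested):

default_process_limit = 4                    # cap postfix's own process count
smtpd_client_connection_count_limit = 4      # simultaneous inbound connections per client
default_destination_concurrency_limit = 4    # parallel outbound deliveries per destination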
Now, I'm running full starttls (encrypted) email inside of my household, which probably accounts for some of the cpu usage, but I think the overhead was of startup and running all those processes, not the crypto.
Maybe I'll try rate-limiting the number of inbound connections via iptables, tarpitting them maybe, to keep the mail server on the other side happy once pocobelle gets past 3 connections, and to keep it from rescheduling the mail repeatedly. That will ensure a burst of email actually gets sent, albeit slowly. (This is also a good anti-spam measure.)
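Not a tarpit, but the simple-minded version of the cap looks like this as an iptables rule (a sketch - I'd need the ip6tables equivalent too, and I haven't tested how the far side reacts to the resets):

iptables -A INPUT -p tcp --syn --dport 25 -m connlimit --connlimit-above 3 \
         -j REJECT --reject-with tcp-reset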
A couple weeks back I started running most of the mail servers I am responsible for over ipv6. I posted a few notes to the postfix mailing list on that.
(My apologies for the excessively geeky contents of this blog recently, I have a few more "normal", real world blog entries in the queue...)
I posted this question to the postfix mailing list today. (I got some good responses; more info in the Updates sections below.)
I'm trying to wrap my head around a new problem - trying to have two postfix relays and a smart host co-exist where one of the relays is a tiny power sipping ARM based board... (Read on for details)
To recap, what I did was configure my in-house server (and the other servers I run) to only listen and send on ipv6, with a fallback for when that doesn't work: Postfix tries connecting directly to the given email addresses, which are usually ipv4, fails rapidly due to being bound to localhost only, then forwards to the smart host for the ipv4 hosts.
This handles the common case where people refuse mail delivered directly to them via ipv4 from invalid reverse dns, and hopefully works generically for those few sites (including my own) that exchange mail over ipv6.
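The sketch below shows the moving parts rather than my literal main.cf - the hostname is invented, and smtp_fallback_relay is my guess at the cleanest way to express the fallback:

inet_protocols = ipv4, ipv6
smtp_bind_address = 127.0.0.1                    # outbound ipv4 can't go anywhere, so it fails fast
smtp_fallback_relay = [smarthost.example.org]    # ...and the mail falls back to the well connected box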
That's been working pretty well. I'm not aware of having missed any mail at all since switching to this method. All the servers I control are exchanging email directly over ipv6 without the smarthost in the loop. I like it. Email is direct, secure, and as fast as instant messaging once again.
Now I'm trying to wrap my head around a new problem.
Recently I built a 300mw (that's milliwatt!) postfix mail router out of an old 64MB ram TS7250 ARM board I had lying around and a 4GB usb stick, running debian lenny.
It works pretty good in my testing so far. STARTTLS Crypto works, it runs at the speed of my internet link (24KB/sec) without any problem, and transfers on the internal net at ~500KB/sec (it's bound by the usb stick, actually). I have not abused it heavily yet - I need to see what happens when I send very large emails, for example. I will have to limit the number of inbound and outbound connections, to be sure.
(I live way out in the country, and have a (slow) wireless connection to the net. Power and/or internet frequently go out. Remember the bad old days, when email got transfered via dial up connection or via carrier pigeon? Technologically, I'm living there, admittedly with a splendid view of the ocean.
Running a 300mw mail server makes a lot of sense - I have enough battery power to run for days instead of hours sipping power like that (the wireless router uses about 5w) It beats running mail on my laptop, at 65w, by a country kilometer.)
So what I think I want to do is setup fallback relaying as follows:
MX 5  mylaptop.example.org      # if my laptop's up send mail there
MX 10 mytinyarmbox.example.org  # if not, try my arm box
MX 20 mysmarthost.example.org   # otherwise, default to my well connected host
Now, 99.9999% of the internet is NOT relaying mail over ipv6, so what happens in that case is my or your mail ends up at my smarthost, which then relays it for me.
Problem 1) I am under the impression from a foggy memory of reading some RFC or other, that at minimum, 2 MX records will be tried. So adding a third might introduce problems with some MTAs that ONLY do 2 MX records, in that far off day when more stuff speaks ipv6 directly, or when it fails to fallback to my third, primary smarthost.
Update: Wietse Venema quoted me chapter and verse of the related RFC2821, which states:
"the SMTP client SHOULD try at least two addresses". With three MX hosts you're operating outside the recommendation.
More on how I currently solve this is going to be the subject of another blog entry. Briefly, I implemented Bind9 "views" to present two MX records to the outside world, and 3 to my own network. Postfix exceeds the RFC in every respect, and does the right thing. Solved. I'm routing my own damn mail.
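The named.conf shape, with example.org standing in for my real zone:

view "internal" {
    match-clients { localnets; localhost; };
    zone "example.org" { type master; file "internal/db.example.org"; };  // 3 MX records
};
view "external" {
    match-clients { any; };
    zone "example.org" { type master; file "external/db.example.org"; };  // 2 MX records
};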
Problem 2) My smarthost is only smart enough to try sending to one other relay (I think).
Problem 3) Similarly mytinyarmbox is only smart enough to try sending to one smarthost. I'm afraid if I set it up to relay it will fail to reach my laptop, then relay mail back to the main smarthost which will relay it back to the arm box which will relay it back to the smarthost until the loop count is exceeded. I guess I'm looking for some "never use the smarthost relay for these domains" option in postfix... Obviously, after googling, I'm not phrasing the question right....
Update: It turned out that the smarthost lines in postfix "do the right thing". It will not try to send email to an MX host of mine that is less preferred than itself. I couldn't find an answer on google because smarthost does it right to begin with! Wietse, again:
If the machine sends mail to a less preferred MX host than itself, then it is badly borked. To pull that off with Postfix you would have to turn off DNS or override the routing with a transport map.
All I did was add another smarthost to my laptop, so when it can't get to my main server, it forwards the mail to myarmbox. In /etc/postfix/main.cf:
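Roughly this - the split between relayhost and smtp_fallback_relay is my guess at the cleanest way to write it down, and the hostnames are placeholders:

relayhost = [mysmarthost.example.org]              # normal path: straight to the main server
smtp_fallback_relay = [mytinyarmbox.example.org]   # if that's unreachable, hand the mail to the arm box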
Bing! I can turn my laptop off 10 seconds after sending the last mail, and know that my mail will eventually get to the Net without my further intervention.
Problem 4) My laptop/primary mail server is actually on a dynamic ipv6 address (I control what ipv6 tunnel it is running on and update its dns record with nsupdate when it changes), so that no matter where I am, I have an ipv6 connection, when I have a connection. It seems inefficient to route mail to my house and then back if I'm not there, especially when my house is off the net and I'm not there to fix it...
Update: The above problem is basically fixed by the dual smarthost line.
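For reference, the nsupdate dance that keeps the laptop's AAAA record current looks roughly like this (the zone, key file, and address are invented):

nsupdate -k /etc/bind/laptop.key << EOF
server ns.example.org
zone example.org
update delete laptop.example.org AAAA
update add laptop.example.org 300 AAAA 2001:db8:1::2
send
EOF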
I am painfully aware that there are other, less crazy ways to do all this (like fetchmail or offlineimap), but 1) I get a lot of mail (think: lkml), so getting email whenever possible, in the background, rather than via a cron job that eats my connection for minutes or hours at a time, is a good idea, 2) I have to run my own mail servers anyway, so why not skip that step? And 3) it's kind of fun.
If anyone would like to dink with this little arm box, email me privately, I'll set you up an account.
Last night I got an old box with an arm cpu mostly working. I used to use it as a dns and dhcp server back when wireless routers were lame. It's a TS7250 - a great little 200MHz ep9302 arm box, eating only 300mw of power (less with power saving on!), and now that I live deep in the boonies I figured I could retask it, maybe make it run email and squid, etc. Aside from floating point, it's probably about as powerful as a pentium II, and I used to hang dozens of email users off of one of those. I just need to support me and my roommate. 300mw sounds about right.
But it's gotta speak ipv6 to talk email in my world. So, first up was getting an ipv6 enabled and modernized 2.6.29 kernel... which I mostly have now, thanks to these patches for the ts-7250 which bring it up to date, enable 64MB of ram, and a host of other features on this board that I didn't even know existed.
Next was that all-important ipv6 enabled squid server. I've been building my own squid server with ipv6 support ever since the OLPC project started, and I'm kind of tired of it... and figuring out how to get it to cross compile for debian on the arm eabi I was NOT looking forward to.
This morning I downloaded the Squid 3.1 release and was preparing to get it built, looking over the bug reports in debian and in ubuntu and dreading having to build my own version for the arm box and...
I'd really like to see peer to peer gaming take off. Being able to play with your schoolmates in the neighborhood would ultimately lead to sub 10ms latencies and new models for high bandwidth local interaction that simply can't be matched with the "central server in timbuktu" model.
I haven't felt like posting much. Jellyfish can STING. Maybe I shouldn't have featured them in a song as foodstuff. They are striking back!
Everything is the same, only different, in Nicaragua. The power flickers off 5-6 times a day, but somehow the government is now avoiding rolling outages, even with oil at $117/barrel. I guess that relationship with Chavez is paying off. The local cable co has rolled out internet on a pay per month basis, no yearly contracts, which I may pursue. My (shared) DSL service's router is not pingable from the outside, so my ipv6 tunnel doesn't work. Ah, progress.
Over the past year, lobster has gone up in price from 200 cordoba to 255, and a slice of pizza rose from 18 to 22. People are telling me that the price of wheat is slated to double again over the course of this year. The exchange rate between the Cordoba and the dollar is still around 18.9 to 1. My rent's still the same. Haven't checked on electric. Conclusion: I can survive for a year on just the cash I have on hand.
I'm doing my best to ignore today's election in Pennsylvania. I didn't understand all the hoorah about Obama being "elitist", when I heard what he said, what I heard was a fragment of John Lennon's "Working Class Hero":
Keep you doped with Religion, Sex, and TV
Which rings true to me, far more true than the chattering of the military-media complex, but maybe it's playing well in Pennsylvania.
The recent bank bailouts have been so huge and so numerous that the media has run dry of its sex scandal reserve. Now, that's a problem! Forget oil, wheat, bank losses, and corporate fascism - there hasn't been a Spitzer-level T&A distraction in ages!
Oh, well, I'm going to focus on what's important.
I have enough projects to keep me busy for the rest of the year:
IPv6 - on platforms as diverse as the olpc and telephony project
Tranzport and Alphatrack Drivers for Ardour
Generic Query router to prove out
Songs to finish
Weight to lose
Low power "router of the future" to build
FPGA stuff to figure out
Spanish to learn
Book to write
a best practices document to write, maybe even a rfc
This isn't 9 months worth of work, it is years worth... and absolutely none of it has anything to do with America's troubles, unless my bank vanishes in a puff of smoke. It is silly and painful for me to pay the slightest bit of attention.
I want a president who not only freely discusses and shrugs off his or her loves and sexual desires and even affairs, but dares to enjoy sex and thrives because of it and makes his behavior a part of his perspective and attitude on life and love and leadership and the general sticky messy beautiful evolution of the human soul. Is that too much to ask?
If I didn't laugh at the world as much as I do I would have slipped off the deep end long ago. (Some would argue that I already have slipped off the deep end)
Problem with all that laughter was that it is 3AM, and I woke up my landlady...
Seeing how humans cope with the world, with machines, and with "progress", is like watching a cat trying to play jazz.
Ahh, multicast - the holy grail of distribution networks - is like wet paint. Once you decide that "hey, multicast would be the best way to do this", you are compelled to touch it. You are led down a twisty trail of rfcs, all different, and complex protocols like IGMP...
It's no wonder that skype and bittorrent went their own way, and adopted simpler protocols (udp,tcp) to achieve their purposes. Figuring out how to use multicast properly is a black art. The amount of open source code that actually uses it is limited to a few odd corners of the internet, and is very hard to understand.
The one major client side application of multicast - multicast DNS - is so badly broken that it makes me cringe to see the packets go by. The following is a dns scan taken from a public wireless access point (the names and mac addresses have been changed to protect the innocent), using mdns-scan:
At the time, the access point in question was saturated - maybe capable of 3KB/sec out to the internet. Now, I don't think multicast was at fault for that in this case, but given how 802.11 wireless works (coherent explanation to be typed in later), multicast is bad. BAD. BAD. It does not scale.
But it's not just multicast. The uselessness of some of the dns announcements above in mapping back to conventional DNS boggles the mind.
Nicolas Bob's computer.local: Great. Not only do we have spaces in this name, but punctuation! Try parsing that dns name with tools like grep - or sticking it into named - etc. You can't even type it into a web browser or ssh. What's the point?
[MA:C :AD:DR:ES:S ] as a part of the announcement. Cooool. Now I know who you are - forever....
mbears-mbp._smb._tcp.local: Thanks for letting me know you have windows filesharing turned on. I look forward to introducing your girlfriend to your wife.
pecutmac._ssh._tcp.local: I'm willing to bet that none of the people on this network broadcasting that they have ssh available have ever used it. Why should they tell people about its availability?
5 multicast announcements from this one machine alone...
Itunes: Why the heck does itunes have to have its very own announcement with a huge unique identifier?
Programmers coding for multicast ought not to be allowed to code for it until they can recite the relevant RFCs chapter and verse, the same goes for their QA people. Would it have been so hard for Apple to enforce a DNS compatible naming scheme with a single regex? IDN, even, would have been fine. Or did they put the same people that loosed filenames with spaces and punctuation on the world on the multicast DNS project?
I'm told that the linux.conf.au network basically suffered congestion collapse. Was it due to the 70+ olpcs merrily broadcasting their services under both IPv4 and IPv6? I don't know...
I saw that the latest iphones support ssh, via a mdns-scan, the other day. That was kind of cool... but seeing the public wifi airspaces of the world clogged with devices saying "Hi! I'm ME! I do this, this, and this! My owner is clueless! I am insecure! CRACK ME! ME! ME!" really gets to me.
People can wander around naked at home all they want, but I'd really like to see computer manufacturers implement a standard policy of clothing their hardware by default on unknown access points.
Although I just used multicast dns as a talking point, it is far from being the worst offender. As it is a relatively new protocol, the designers should have done better. Perhaps it can be fixed in the field before it becomes more pervasive.
The rest of the local network services are much worse.
I don't want to talk about all the other announcements like SSDP, and SMB - or the bittorrent traffic, worms, insecure IM exchanges, bogus DNS servers, dhcp announcements, and TCP retransmits, etc, I saw on this poor overloaded public access point. It depressed me. All I wanted to do was get my email, (via IMAPS, thank you very much) but I couldn't.
I turned off my laptop, had a long black coffee, and moved on. I don't know how to fix public internet access points. I just don't. We could use a unique frequency band per user and people would still screw it up. Maybe the amateur radio guys and the FCC have it right, that certification should be required in order to broadcast anything on any frequency band.
I now know what it must have been like for someone that understood germ theory during the black plague era, seeing all the rats scuttle around.
I got ipsec over ipv6 to work yesterday, on my laptop, server, and olpc. I couldn't get it to work using setkey or racoon, but racoon2 worked the first time.
Nowhere on the net could I find an example of a working ipv6 to ipv6 setup for linux ipsec - even though the protocol was designed for that scenario in the first place, and only painfully adapted to ipv4 scenarios over the course of years.
The setup:
OLPC -> laptop -> hurricane electric tunnel -> toutatis
2/48 -> 1/48 -> /64 -> ipv4 tunnel -> ipv6 at he -> toutatis.taht.net
Yea! A static ipv6 address! Now that I have squid (from cvs), jabber, dns, web, and ssh all running pure ipv6, going IPv4-free on the client is now feasible. I'm free of IPv4! With native encryption! Bwhahahahaha...
Naturally I started playing with ipsec. The initial key negotiation phase is painfully slow, especially over a 220ms internet RTT, but after that the ipsec vpn tunnel is completely transparent to the user (and opaque to the sniffer). It's pretty darn fast on local connections though. All kinds of things "just worked", but I got into a world of hurt dealing with NetworkManager on the laptop providing the tunnel. NetworkMangler arbitrarily takes your interface up and down to get an ipv4 address and wipes out your pre-existing ipv6 setup when it should just co-exist. !@#!@#!@
I still haven't figured out how to make NM do the right thing. If you add static ipv6 ips on the olpc, they also get flushed when NM does its thing. Shouldn't ipv6 on a given device just stay up and let RA (router advertisement) do its thing, most of the time? There must be some kind of RS (router solicitation) message that says - "I'm not sure if I'm still on the right net"... I'd like my ssh over ipv6 connections to stay running through a dhcpv4 change whenever possible... part of the point of ipv6 stateless autoconfiguration is that you don't need a sharecropper's lease anymore.
The latest firefox beta works great running on the olpc. Much more usable, at least for an adult, and having adblock+ running on the olpc is a real win. The olpc also does X11 client/server over ipv6 (once you turn -notcp off). It is a great X-terminal! I can think of lots of ways X could be used in this way - keeping an executable on the school server and just displaying it on the olpc would ease major software rollouts - and allow the use of more complex software that won't fit into the memory available on the olpc. On my wireless network you simply don't notice that firefox (for example) is running on a remote server, 'cept when you want to use flash. Startup time is vastly improved and scrolling is totally fast.... In poking about ipsec I noticed that the geode processor in the olpc has a hardware encryption block. A couple ipv6 network and ipsec benchmarks are in order, and I'm going to go fight with NetworkMangler some more...
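Two ways to drive the X-terminal trick, for the record (hostnames made up; the first assumes an "xhost +schoolserver" on the olpc, the second trades a little latency for crypto):

# on the school server, point the app straight at the olpc's display
DISPLAY=olpc.example.org:0 firefox &
# or tunnel it through ssh over ipv6
ssh -X -6 schoolserver.example.org firefox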
I've been reading rfcs lately, working on multiple levels - 1) How does a given stateless protocol work, and 2) How is it secured, how can it be subverted? I'm doing this because 3) I'm trying to design a new stateless protocol and 4) I'd like to get it secure-able eventually - and 5) avoid any potential patent/copyright issues at the outset. I'm confident I have the latter solved now, but it took a month of review, headache, fear and worry before I felt capable of coding again.
I wish everyone was born with a "get one great lawyer free" card, that they could use up when they most needed it. I found myself missing the structure of a major corporation, badly.
I also mostly convinced myself that achieving security and statelessness were almost impossible, that security requires state - but note my equivocation "almost" and "mostly" - darn adverbs - I'm still going to give the design as-is a go and see what happens next.
This morning, I looked up, and saw Jonathan Zittrain had written an excellent article on the future of computing. It is not a bright one, particularly for independently minded programmers:
To be sure, amateurs who do not have houses to lose to litigation can still contribute to free software projects—they are judgment proof. Others can contribute anonymously, evading any claims of patent infringement since they simply cannot be found. But this turns coding into a gray market activity, eliminating what otherwise could be a thriving middle class of contributing firms should patent warfare ratchet into high gear.
While that rang especially true for me given what I'd been doing all month, it was his proposals for solving the spam/bot problems by creating ever more limited, tethered, restricted (I'd call them "broken") computers that got to me. Instead of getting back to work, I started writing up a response... but then Richard Stallman fired back:
It is true that a general computer lets you run programs designed to spy on you, restrict you, or even let the developer attack you. Such programs include KaZaA, RealPlayer, Adobe Flash, Windows Media Player, Microsoft Windows, and MacOS. Windows Vista does all three of those things; it also lets Microsoft change the software without asking, or command it to permanently cease normal functioning.
But restricted computers are no help, because they have the same problem, for the same reason.
The iPhone is designed for remote attack by Apple. When Apple remotely destroys iPhones that users have unlocked to enable other uses, that is no better than when Microsoft remotely sabotages Vista. The TiVo is designed to enforce restrictions on access to the recordings you make, and reports what you watch. E-book readers such as the Amazon “Swindle” are designed to stop you from sharing and lending your books.
Very good debate in both articles. Both describe at 50 thousand feet what I'm trying to get done at ground level, in an itty, bitty, obscure corner of the internet. Today I wanted to talk about how the ntp protocol achieves a consistent and secure view of time itself.
I'm out of time now - that valuable yet non-material substance - I've got some coding to do.
IPv6 has a feature that was innovative and useful back when it was designed in the mid-90s, called stateless autoconfiguration. An IPv6 enabled machine can automatically create an IPv6 address for itself, get on a network, and figure out how to access the internet.
This solves a tough problem that occurs on just about every network I've ever run into - two devices will automatically configure themselves with the same static IPv4 address, disabling both devices (or at the very least, making access to them a flaky experience). Worse, usually that automatically configured device comes up on an IP address that is dedicated to crucial infrastructure - the main router for a home, for example, is usually on 192.168.0.1. Bring up another device intended to be a main router for that home (say you are adding a wireless access point), and it too comes up on 192.168.0.1. Unless you have a private ethernet port handy (which requires some knowledge of networking), you usually have to temporarily disable the main router - disconnecting your kids from their Second Life sessions - bring up the new one, change its default IPv4 address - find a static allocation for it, write it down, remember to use the right netmask (all of which requires some knowledge of networking) - save it, reboot both routers, and then proceed to configuring the new one.
On larger networks, someone randomly bringing up a machine on a static ip address can be catastrophic.
Now, DHCP has become the standard way to assign new "dynamic" IP addresses in a NATted IPv4 network. It's come quite a long way - from, at first, only being able to assign IP addresses, to now being able to specify dns servers, time servers, netbios servers, serve up boot files, statically assign some IP addresses, dynamically do others, etc, etc.
Early versions of the DHCP server code helped turn most of us into landless cyberserfs - there was no way to assign and route static ip addresses sanely back in the 90s - so most end users went dynamic early, as the end users could not be trusted to assign their own gear the right addresses, and statically assigning the wrong addresses could be catastrophic to the whole network. ISPs ended up authenticating via protocols like ppp, and assigning dynamic addresses, because it was easier, and safer - not because it was right - or better - using static addressing would have simplified billing and firewalling (and worm/virus control) considerably, then and now.
Dynamic addressing also made it much harder to create services at home. It moved the domain name system into the province of the ISP rather than into the home or small business. This has led to such steps backwards as not having DNS running at all on most small networks - meaning that printers can't get assigned sane names like "printer.hm.taht.net", ditto for your daughter's machine - instead, people manually remember and type in IP addresses whenever they want to talk to machines inside of their own network. (Solutions to this have appeared in NetBEUI, Bonjour and dnsmasq, but I'm already way off where I intended to go with this piece.)
Most people don't even know how to discover the IP addresses of machines in their own network - they are no longer connecting to each other, but to the internet...
I think the reason people dig "the Net" so much more than "their home" is that the net is so much easier to use than their own gear!
It's not just DHCP's fault - routing was also painful in a sometimes-connected, dial-up-modem internet environment, and CIDR was just getting started. There were no good solutions back then to these problems.
The cyberserfdom created by this flaw is reflected in the terminology used by DHCP itself. You get an IP "Lease", which you have to "Renew" periodically. Although the documentation uses "server" and "client" terminology, I translate that into "Master" and "Sharecropper" when I'm feeling depressed.
Static IP addressing, assigned via DHCP, appeared in later versions, keying off the MAC address of an ethernet card, much like IPv6 stateless autoconf works. Back in the 90s you used to have to manually enter your own name servers; now DHCP handles it for you. Getting on a "normal" ipv4 network today is as simple as enabling dhcp and turning on your machine - setting up a "normal" ipv4 network properly is another matter. I rarely see a well configured small dhcp network, with static addressing for printers, etc; usually people end up manually assigning the static ips rather than getting DHCP to do it. More often than I care to remember, they assign static ips in the dynamic lease range, leading to all sorts of oddities (that newer dhcp servers generally detect).
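For the record, handing the printer its "static" address via DHCP can be a one-liner. A sketch using dnsmasq (mentioned above), with a made-up MAC address:

# in /etc/dnsmasq.conf
dhcp-range=192.168.0.100,192.168.0.200,12h
# always give the printer 192.168.0.50, keyed off its MAC
dhcp-host=00:11:22:33:44:55,printer,192.168.0.50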
It's not that people are stupid. This is hard stuff to get right all the time.
At the lowest level of the protocol, you have to get that IP address right... and you need to be able to access a nameserver in order to do anything on the internet. It is helpful to also run your own dns at home... (DNS translates www.example.com into 127.0.0.2 on my machine)
Running DNS at home? I almost never ever see that... DNS servers take up a lot of memory and very few routers supported it until recently. Everybody seems to think that www.example.org is the only correct form of a computer name - most of the people with vanity names just use them for websites instead of managing their home network. People just plug in their airport and laptop and expect to be on the internet - and nowhere else.
Setting up good DNS is hard, for a lot of reasons - security, complexity, and the warts in a major program and protocol. It is still highly desirable to run your own DNS at home - it also speeds up web access - but solutions are emerging that make it less probable you will need to do so in the future.
I wish a lot more programmer energy had gone into making home/small business DNS services easier. I've been running my own DNS servers for 20 years and I STILL don't get it completely right, and never on the first time.
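To be fair, the lightweight end of this has gotten easier. A sketch of minimal local naming with dnsmasq, borrowing the hm.taht.net subdomain from above purely as an example:

# in /etc/dnsmasq.conf
domain=hm.taht.net
local=/hm.taht.net/
expand-hosts
# plain names from /etc/hosts (and dhcp clients that supply a hostname)
# now resolve as name.hm.taht.net for the whole house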
ANYWAY.
IPv6 solves the "get IP address" problem well. Actually, it oversolves it. It solves it so well that it introduces new problems. While a human being can remember "my printer is on 192.168.0.50", I find it impossible to remember or even type in the fact that my olpc is on [2001:470:806f:2d7:217:c4ff:fe10:b811] which is what its full IPv6 autoconfigured address looks like. It's GREAT that it's on the net... all by itself... routing - announcing its existence -
Theoretically.
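For the curious, that monster isn't random - stateless autoconf builds the bottom 64 bits out of the ethernet MAC (the EUI-64 trick). Working backwards from the address above:

MAC of the olpc's wireless interface:  00:17:c4:10:b8:11
stuff ff:fe into the middle:           00:17:c4:ff:fe:10:b8:11
flip the universal/local bit:          02:17:c4:ff:fe:10:b8:11
glue that onto the /64 prefix:         2001:470:806f:2d7:217:c4ff:fe10:b811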
Regrettably, one crucial piece made it into DHCP but didn't make it into IPv6 stateless autoconf - the ability to get a nameserver automatically. Thus the friendly internet of http://ipv6.google.com becomes http://[2001:4860:0:2001::68] (if you can remember that, you are a better person than I am) unless you do something about it.
There are a few solutions - DHCPv6, and RDNSS announcements in radvd (covered in RFC 4339 and RFC 5006) - which I'll get to in my next blog. (I'll fix up this one a bit later too.)
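Just to give a taste of the radvd side of it now - a sketch, assuming a documentation prefix and a radvd new enough to speak RFC 5006's RDNSS option:

# in /etc/radvd.conf
interface eth0 {
    AdvSendAdvert on;
    prefix 2001:db8:1::/64 {
        AdvOnLink on;
        AdvAutonomous on;
    };
    # advertise the local nameserver along with the prefix
    RDNSS 2001:db8:1::1 {
    };
};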
One nice thing about trying to explain a difficult thing is that you get to challenge your assumptions.
Yesterday, I was quite ironically amused at the idea that in order to talk about filtering out [REDACTED] content, you had to filter your own content just to talk about it. The problems in filtering out references to [REDACTED] and to advertising and other [REDACTED] content are manifold. The advantages of filtering are equally manifold. The strongest point that I can make about it, though, is that it is best that those filters be under your control - not the government's, or the ISP's, or your corporation's. Although - I know quite a few sysadmins that filter out advertising content transparently on behalf of their bandwidth needs, with the tacit approval of their IT manager if not necessarily the CEO....
It is possible to implement a pg-13 web, for yourself, if you so desire, and I'm totally cool with that - just don't force your filters down my throat. Same goes for google safesearch. I just turned it back on because the effects were minimal, and I know how to turn it off, and I'm already less horny for having done so.
I have been using privoxy for years and years now. Although privoxy just came out with a new release, it still doesn't do ipv6, and I decided to update to the latest adblock plus and go with that alone. I'm a little nervous about this, as I understand how privoxy works, and it arbitrarily drops potentially dangerous (worm/virus) content, as well as eliminates flash (which, aside from youtube, is mostly ads), but it does tend to be slow...
So I just updated to the latest adblock plus and it's doing a great job so far. Very fast, very clean... no ads....
I still haven't figured out how to make adblock drop flash images by default, nor malware, but I'm looking into it. Given the good reviews firefox 3.0 beta has been getting, I decided to try that yesterday, too.
I am a tabbed browsing addict. I will end up with hundreds of tabs over the course of a couple days. I still find it amazing that firefox 2 just runs and runs and runs under almost any load I give it. It's a far cry from 2000, when I'd be lucky to keep a browser running for more than an hour or three. Still, with lots of tabs open, I would end up with a gigabyte of memory in use, eventually, and a dramatic slowdown, after a couple days. Firefox is one of two reasons why I ended up updating my laptop from 1GB to 4GB of ram, actually - the other was ardour...
So I fired up the beta of firefox.
Wow. I have 30 tabs open, and I'm only using 101MB of ram. Gmail is lightning quick, seemingly faster than my IMAP connection(s). Blogger is zippy... Everything seems faster (partially due to dropping privoxy). Ipv6 just works. Wow. Wow. Wow.
I have some issues with font rendering (and flash isn't working, but I don't care - although I imagine that flash was causing a lot of my old bloat) but I want to give this release a good workout, so I'm planning to wait for a crash.
I can't wait for this code to get into my nokia 810... and I wonder what will happen if I incorporate jemalloc into ardour....
Yea! One down - six to go (if lwn.net, slashdot.org, technorati, andrewsullivan, blogger.com, and Doc Searls adopted ipv6 I'd handle 80% of my net needs). It's time to turn ipv6 up!!!
Just did a traceroute6 of ipv6.google.com FROM AUSTRALIA!! - 8 hops! (yea, there's at least one tunnel in there, skewing the statistics) and it seems faster than the normal ipv4 version (20-22 hops), perhaps due to the dramatically reduced number of hops, or that my provider isn't shaping ipv6 packets, or just delusion...
traceroute to ipv6.l.google.com (2001:4860:0:2001::68) from 2001:388:f000::9bb, 30 hops max, 16 byte packets
 1  2001:388:f000::9ba (2001:388:f000::9ba)  32.86 ms  30.1 ms  31.464 ms
 2  ge-1-0-0.bb1.a.syd.aarnet.net.au (2001:388:1:5001:212:1eff:fe8e:51fe)  31.081 ms  30.497 ms  32.046 ms
 3  pao-a-bb1.aarnet.net.au (2001:388:1::13)  187.837 ms  186.293 ms  187.517 ms
 4  paix.ipv6.he.net (2001:504:d::10)  188.402 ms  185.233 ms  186.553 ms
 5  10g-2-4.core1.ash1.ipv6.he.net (2001:470:0:35::2)  262.67 ms  269.186 ms  262.203 ms
 6  pr61.iad07.net.google.com (2001:504:0:2:0:1:5169:1)  276.096 ms  261.268 ms  261.907 ms
 7  * * *
 8  2001:4860:0:2001::68 (2001:4860:0:2001::68)  263.054 ms  263.585 ms  260.878 ms
Can anycast be far behind?
I sure hope turning IPv6 up becomes a trend this year! I'm off to do a little packet sniffing on my local net to see what's goin on under the covers... Ipv6 shouldn't seem this much faster to me...
Update: Packet loss seems to be less, on average, between ipv6.google.com and me in Oz, than between me and google over ipv4. So I'm seeing more re-transmits on ipv4, which slows the apparent speed down vis-a-vis ipv6... That's it. Aside from that, the packet sizes are slightly larger on ipv4....
I have continued my experimentation with IPv6. (I'm working on something that is shaping up to be pretty cool, actually, especially for sites far from google)
I've already seen IPv6 change my workflow in several minor, but significant ways. The biggest noticeable one was to be working on a git project and be able to pull AND push. What I mean by that is that I'll do most of my work offline, in git, on my laptop. When I'm ready to test it, I push it out to the remote server, and login to that via ssh.
Now - I'm in front of an xterm window, ok? Logged into that server. I build the project there, and start testing it. I mess up something. So I fix it - iterating over several versions - until I get it right. I commit the changes to that copy of the git project... AND push the changes back to the copy living on my laptop, from the server.
Now, by flipping a window, I could have pulled the copy back from the server, rather than pushed it, but I saved a mental context switch by pushing it... a subtle difference in workflow.
Similarly I can pull stuff from my laptop to the server (I forgot to scp a data file, for example) rather than push it....
In this case, the client becomes the server, and the server becomes the client. There's no longer a master/slave relationship between the server and me... I like that.
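Concretely, the server-side half of that looks something like this (names and paths are made up; pushing to a scratch branch avoids stomping on whatever's checked out on the laptop):

# on the server: the laptop is just another git remote, now that it's reachable
git remote add laptop dave@laptop.example.org:src/myproject
git push laptop master:from-server   # push my fixes back to the laptop
git fetch laptop                     # or pull things the other way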
Now, there are some pretty massive security implications in doing the work this way, but at the moment I mostly just care about my own productivity...
And, compared to seeing this darn message multiple times a day, I'd rather be responsible for my own problems....
My dream is a simple one... I'd like to be able to send and receive email on my laptop once again, no matter where I am, no matter what networks I'm behind, and send stuff from the command line while I'm offline. When you think about it, the pop protocol was one of the first steps towards the AOL-ization of the internet (stage 1), imap4 was a coherently designed successor (stage 2), and gmail is the end product of treating the browser as the OS (stage 3). Gmail has great ease of use. It's downright addictive - and it works everywhere - and it has lousy filtering for when you are on tons of mailing lists - and of course it requires that you be online to read it.
I just got off of gmail. I am tired of google having all my (base) mail and spitting ads in my direction. I also (incidentally) got off most of my mailing lists, and am re-reading up on procmail, to my dismay - server side filtering of your mailboxes makes sense, but can't we have invented a better syntax for it by now? :yuck:
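For the uninitiated, this is roughly what procmail's syntax looks like - a sketch of a ~/.procmailrc with a made-up list name, which should make the :yuck: self-explanatory:

MAILDIR=$HOME/Mail
# file anything from the (hypothetical) ipv6-ops list into its own folder
:0:
* ^List-Id:.*ipv6-ops
lists/ipv6-ops
# everything else falls through to the default mailbox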
Now there are all kinds of solutions to the email-on-the-laptop problem - fetchmail perhaps being the most common - but ideally - in an end to end connected world - I'd merely connect to the internet, update my ipv6 dns record, and mail would start to flow directly into my mailboxes - no imap or pop protocols required. My backup mx forwarder would temporarily store (and filter) my email and all I'd have to do would be to connect, and invisible, non-graphical daemons would take care of the rest.
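The "update my ipv6 dns record" step already exists, at least for zones that accept dynamic updates - a sketch with nsupdate, where the key file, names and address are all hypothetical:

# run on connect: repoint laptop.example.org at wherever I am now
nsupdate -k ~/.keys/Klaptop.+157+12345.private <<'EOF'
update delete laptop.example.org AAAA
update add laptop.example.org 300 AAAA 2001:db8:1::2
send
EOF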
Apache over ipv6 works, postgres looks like it would work... that's really about it for most of the services I run today. Irc works... Everything works... Maybe I should put up a git server...
So, anyway, I got imap and smtp auth working today so I can regress back to stage 2, and maybe, with a little more work, I can regress back to stage 0 - the way email was supposed to be - direct to my box.
imap works pretty well over ipv6, as does rsync. Thunderbird tries the ipv6 address first, fails in 30 sec or so (at present, due to the routing issues below), then rolls over transparently to the ipv4 address.
IPv6's various tunneling mechanisms are giving me heartache, though. Tunnels on hosts where I have control of the routers are no problem - ridiculously easy - but wedging protocol 41 out via 6in4 tunneling hasn't worked on most of the commercial nets I've tried, nor at the college where I'm staying. I tried to set up an alternate mechanism (miredo), but that didn't work on the one public server I was aware of in Australia - and running the miredo server requires two consecutive ipv4 addresses?! on the server - an insane requirement. I'm looking into another method. Failing that I might fall back on openvpn or a derivative...
That gets me to my next bitch of the day - I have two machines here, both with wireless networking, but no access point. I used to have a wireless card that could act as an access point, but no longer, and no matter what I've tried, ad-hoc mode seems to be broken on this card too, so I can't get my stinking handheld to talk to my laptop even with an air gap of a few centimeters. Dumb.
And now I'm grumpy about all the captcha I've had to look at today. If I can figure out a way to blog via git I'm all for it.
Well, I got my first ever encrypted starttls and smtpd over ipv6 to work early this morning.
I didn't quite get it done the way that I wanted, but so far it seems to work. I'm not going to write a howto on this, but roughly, what I did was:
Enabled ipv6 in postfix 2.3.X on both boxes. That's a two liner in /etc/postfix/main.cf:
inet_interfaces = all
inet_protocols = all
Added an AAAA record in dns for my default mail host (which is on an ipv6 native network) so it had both AAAA and A records. Added another pure-ipv6 domain that only got mail via an AAAA address...
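In zone-file terms it boils down to something like this (names and addresses are placeholders, not my real ones):

; the dual-stack mail host and the ipv6-only experiment
mail     IN A      192.0.2.25
mail     IN AAAA   2001:db8::25
v6only   IN AAAA   2001:db8::26
@        IN MX 10  mail.example.org.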
Got a tunnel for my ipv4 server from Hurricane Electric's tunnel broker. A couple clicks and one automagically made script and an entry in /etc/rc.local and I was done.
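That automagic script boils down to a handful of iproute2 commands - roughly the following, with placeholder addresses where the tunnel broker fills in real ones:

# 6in4 (protocol 41) tunnel to the broker's endpoint
ip tunnel add he-ipv6 mode sit remote 203.0.113.1 local 192.0.2.10 ttl 255
ip link set he-ipv6 up
# my end of the tunnel's /64, handed out by the broker
ip addr add 2001:db8:1234::2/64 dev he-ipv6
# send all ipv6 traffic down the tunnel
ip route add ::/0 dev he-ipv6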
Went through a whole lot of hell with SSL certs. I'm not going to go into that here.
Got a good buddy to bail me out of that jam, who also sneered at my ipv4 spam stopping setup and laid some seriously heavyweight anti-spam stuff all over it while I slept.
Testing so far has mostly been interactions with gmail and a few friends' mailers running postfix.
The good:
Email over ipv6 works! And because I ignore rbls, email gets through, encrypted, in seconds, rather than minutes. And I haven't seen a single spammer attempt to connect to the ipv6 address.
The bad:
Nobody cares! Nobody, but nobody, is routing mail over ipv6 but these two servers it seems. Gmail doesn't even try to use starttls, either.
The ugly:
Losing the rbl system is going to hurt.
Now, what I had wanted to do was set up an MX 10 pointing at the ipv6 address and an MX 20 pointing at the ipv4 address, but I ran into issues with the tls certs not working with the different names, and I gave up.
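For reference, the scheme I was after looks like this in the zone (placeholder names again) - ipv6-capable senders try the lower-numbered MX first, everyone else falls back to ipv4:

@   IN MX 10  mail6.example.org.   ; AAAA only
@   IN MX 20  mail.example.org.    ; A only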
The only thoughts I can think of at the moment (it has been a long night, filled with other geeky pleasures, like wrestling with and failing at 6in4 tunneling through an apparently stateful firewall) are:
1) greylisting - with only a weak reliance on the ip address - would probably help in the ipv6 case (see the sketch after this list)...
2) Man, dealing with your own email server almost requires a master's degree these days. This stuff used to be easy. Yes, I regarded configuring sendmail, back in the 90s, as EASY. It SHOULD be easy, especially if you are only dealing with 8 email addresses, as I am, but nooo...
3) I'd still like to relay from my laptop in the field, rather than run imap/etc. Get the graphical clients out of this... I broke sending email from clients somewhere along the way last night (I think I need to switch back from sasl to dovecot), which kind of makes all this moot - been sending test mails via the grungy old Mail utility....
3a) getting the voicemail to email gateway to work on the blackfin is looking really hard now.
4) Probably makes sense to "secure by obscurity" the mail server's ipv6 address, and only accept mail on those ipv6 addresses, and rotate them regularly.
5) For ipv6 email, a basic web of trust needs to get established now, with support for revocation of certs and a free crypto infrastructure (verisign charges 2600 dollars for an SSL cert! for a string of numbers! WTF!?) - and we should repeal the rfc language that makes requiring encryption a MUST NOT, and make it a MUST.
6) Securing other avenues of attack - e.g. via the web - is going to be no picnic either.
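On the greylisting idea from item 1: one low-effort way to wire it into postfix is a policy service like postgrey - a sketch, assuming postgrey is already running on its default port:

postconf -e 'smtpd_recipient_restrictions = permit_mynetworks, reject_unauth_destination, check_policy_service inet:127.0.0.1:10023'
postfix reload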