July 20, 2006

OpenVPN tunneling over SSH

The classy.dk kitchen server sits behind an ADSL router provided by my ISP. That router is sensibly almost closed, with only the FTP, HTTP, SMTP and DNS ports open by default, and none of these mapped to the NATted addresses the router hands out through DHCP. I'm fine with that even if it is stupid ISP control of my actions - fewer security threats to worry about, and I can actually turn on Windows on new machines without being owned by a virus after 5 seconds.
The only server I have set up to listen to inbound traffic is the old warhorse classy.dk web server (and yes, it is in fact located in my kitchen like it says on the blog).
Occasionally I'd like to access resources on the other machines on the home network though, and that just blows. The problem is that the other machines sometimes run Windows and most certainly shouldn't be listening for inbound traffic from the internet. I could use SSH tunneling via the webserver and then a terminal emulator to look at the hidden machines, but that's just annoying. I want full access with file browsing. The works.
A real VPN is needed - but which one, how do I set it up, and how do I pass it through the one open interface on the webserver?
Here's a way: OpenVPN with SSH tunneling.
Since I'm not talking about more than one machine at a time, I can just use the simple point-to-point setup with a static key. I want to modify the howto setup to work through an SSH tunnel.


  1. Modify the server VPN configuration file by adding the line proto tcp-server
  2. Start the VPN server
  3. Modify the client configuration by changing the first line to remote localhost and adding the line proto tcp-client
  4. Tunnel your local port 1194 (the port OpenVPN uses) to port 1194 on the machine you want to access - via the web server hosting the SSH daemon:
    ssh -L1194:vpnserver:1194 user@webserver
  5. Start the VPN client (a consolidated sketch of the two config files follows the list)
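
For reference, here's roughly what I end up with - a minimal sketch based on the static key howto, using its example tun addresses (10.8.0.1 and 10.8.0.2) and file names, so adjust to your own setup:

    # once, on the VPN server - then copy static.key to the client over a secure channel
    openvpn --genkey --secret static.key

    # server config (the machine behind the router)
    dev tun
    ifconfig 10.8.0.1 10.8.0.2
    secret static.key
    proto tcp-server

    # client config (the machine doing the SSH tunneling)
    remote localhost
    dev tun
    ifconfig 10.8.0.2 10.8.0.1
    secret static.key
    proto tcp-client

Steps 2 and 5 then amount to running openvpn --config with the relevant file on each end: server first, then the SSH tunnel, then the client.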

That's it. Great stuff I've been looking for. Now I can drop files to and from servers at home that are safely stashed away out of reach of the wild and dangerous internets.

Posted by Claus at 12:10 AM | Comments (9)

July 18, 2006

Symlinks on Windows

Who knew? Windows actually supports symlinks. It's directory-only - but that's just too bloody useful to hide away in a resource kit. Thank god we (at least for now) still have Sysinternals. Can't wait to see these guys stop making a difference as they try to work from inside the belly of the beast.
Anyways. Sysinternals provides the useful utility Junction to define... junctions - that's what directory symlinks are called on Windows.
This brings the well-known "versioned directories, live version symlinked" deployment technique to Windows.
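As a sketch of how that looks with Junction (the paths here are made up for illustration): each version gets its own directory, and a junction points at whichever one is live.

    junction C:\sites\classy C:\sites\classy-1.2
    rem deploy 1.3 next to it, test it, then flip the link:
    junction -d C:\sites\classy
    junction C:\sites\classy C:\sites\classy-1.3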

Thanks for the heads up to that other Claus.

Posted by Claus at 11:40 PM | Comments (0)

July 17, 2006

The UNIX quote

"I started keeping a list of these annoyances but it got too long and depressing so I just learned to live with them again. We really are using a 1970s era operating system well past its sell-by date. We get a lot done, and we have fun, but let's face it, the fundamental design of Unix is older than many of the readers of Slashdot, while lots of different, great ideas about computing and networks have been developed in the last 30 years. Using Unix is the computing equivalent of listening only to music by David Cassidy."

Rob Pike

He's right and he is wrong. I think it's entirely likely that we'll find, further down the road, that software works much like genetic development in nature. Nature never throws out old designs. In fact most of our basic human design is the same as the basic design in fish and plants and bacteria, and it hasn't changed in billions of years. However, the interest, the competitive edge, moves away from the old designs once they win and onto greater things. So I'm not sure we'll ever have new file systems or new anything, really. I find it entirely likely that inside the massively parallel billion-CPU-core machine of 2050 we'll find a million Linux 2.6 cores with ext3 filesystems...
I think we can already see this as OSes get commoditized and the interest moves from scaling up to scaling out. Scaling out is a developer's way of saying "I'm not going to fix the I/O DNA or the process DNA of computing, I'll just add sophistication on top".
The only real reason this isn't truly plausible on a 200-year scale is energy consumption. It's quite possible that in a truly parallelized world we'd much rather have a much simpler operating system, able to function on much less power but robust and distributable.

[UPDATE: I should have read the whole thing, and at least a minimum about Plan 9 - which answers some of the questions. But the failure of Plan 9 to catch on underscores the point, and it's clear from the interview that Pike is aware of this]

The question that then comes to mind: suppose we wanted to build the massively concurrent, internet-ready super machine of the future, programmed entirely in a fantastic functional language able to hide complexity and concurrency in an efficient way - what would we keep around?
Some ideas on design points:


  • Software will run on millions of mutually sandboxed cores. Cores are perishable and automatically restartable. Cores are simply glorified processes.
  • Cores maintain a distinction between interior and exterior and police their communication surface (think cells)
  • Cores are hardware independent, all software on a core relocates effortlessly to other cores
  • There is no "shared storage"; there are only the cores. The communication substrate between cores is the only shared medium, and it has no state
  • Any idea of privilege or trust other than sandboxes is just unmaintainable. The idea that we'll be running software that is hundreds of times more complex than what we have today (or run the same software on data scaled hundreds of times, which is really the same thing) and be able to think consciously about trust is probably not sound.
  • The coordination mechanisms between pieces of software can't come from a high enough level of abstraction.
    What that means is that any kind of coordination protocol or mechanism that is "bottom up" is really not useful. An example would be implementing component coordination within a sandbox but not supporting coordination between the sandboxes from above.
    What I'm thinking of here is once again the security and privilege mechanisms, but also something that might just be more of a pipe dream: the scripted ability to control any resource on any reachable machine - with sandboxing and privacy of course, but still. The point about it coming from above, not below, is that I don't want to have to go to a substrate below to accomplish my connectivity goal; it should just be a standard operating assumption about any layer that it naturally distributes and shares.
  • Unreliability is the norm, not the exception. I mean this in terms of hardware failure, software bugs and malware alike. As the world becomes more and more complex, there's just no way we will remain in conscious control of the quality of our systems. At best we can do some double computation and fact checking and that kind of thing.

(I think I need to start a blog specifically for spaced-out posts)

Posted by Claus at 2:31 AM | Comments (0)

July 16, 2006

Free lunch is over: Concurrency is the future

Eye-opening piece (well, I'm unsure if it really is eye-opening - the failure of acceleration it points out has been apparent for a while) on a fundamental sea change in computing technology, forced by the breakdown of the previously available "free lunch" of exponential hardware improvement.
Improvements in dealing with concurrency (functional programming offers tons of ways to do concurrency without thinking explicitly about threads) are definitely something to watch.
The benefits, by the way, are already appreciable, since concurrency is already a design problem to be reckoned with in distributed computing - and with everything moving to the web, who is not doing distributed computing projects?
For the ultimate in concurrency we need to go to quantum computing of course.

Posted by Claus at 6:07 PM | Comments (0)

July 11, 2006

Google indexes exe-files

Sweetness. Google actually does a bindump of exe files it finds in the world and indexes the resulting metadata. So you can basically search for executables online that use a particular DLL or expose a particular call.

Posted by Claus at 6:35 PM | Comments (0)

July 2, 2006

Dabble DB looks awesome

Just had a look at the Dabble DB demo and it looks awesome. Simple and highly useful. Definitely signing up for this. (Yes, the whole deal with letting other people handle your important data is still scary - but soon somebody will start The Hosted Data Backup Company that does backups of your GMail, Writely, Dabble, Flickr, yada yada yada data repositories, and they will in turn standardize the output formats of these apps, and that will in turn make desktop applications that back your data up easy to write. Markets at work. Lovely.)


Posted by Claus at 8:28 PM | Comments (0)