Computer viruses in general, and the technically impressive Stuxnet virus* in particular, provide important lessons interface designers should take to heart - instead of following the design guidelines of spammers.
The interface lesson of spam is as follows:
Throw a lot of shit out there, and the users will self-select: Spam is sent to a generous multiple of the users it actually works for, in the sense that they react to it. The reason this works is that the spammer does not pay for our attention.
The interface lesson of the virus is different:
Stay low, stay out of sight, until you have a high chance of success: If a virus is very obnoxious, uninterested users are quickly annoyed by it, and inoculation - in the form of signature files for antivirus software - quickly develops. Stuxnet is an extreme example of this, going to great lengths to stay out of the way except where the intended target - Iranian nuclear facilities controlled by specific Siemens industrial control software - was found.
Your software is better if it operates on a virus theory of interface design than on a spam theory of interface design. Err on the side of no interaction if you're not sure the interaction will be useful.
* it's a worm. I know. To the normals, everything is a "virus".
When I've been doing stuff with Morten, we've always had a basic dividing line in how we are interested in things, which has been very useful in dividing the work and making sure we covered the idea properly. Morten is intensely interested in use, and I'm intensely interested in capabilities and potentials.
Any good idea needs both kinds of attention, so it's good to work with someone else who can really challenge your own perspective. If only we had a third friend who was intensely interested in making money we could have really made something of it. It's never too late, I suppose.
Anyway, my part of the bargain is capabilities. Yesterday evening, and this morning, I added another year's worth of lifetime to my aging* Android phone, one of the original Google G1 development phones.
It's a slow, old device compared to current Android phones. Yesterday, however, by installing CyanogenMod on the phone, I upgraded to Android 2.2 - Froyo - and boy, that's a lot of capability I just added to the phone**.
First, about the lifetime: Froyo has a JIT - an accelerator, if you're not a technical person - which makes it possible for my aging phone to keep up with more demanding applications that expect better hardware.
Secondly, Froyo is supported by the Processing environment for experimental programming, so now I can build experimental touch interfaces in minutes, using the convenience of Processing. This makes both Processing and the phone infinitely more useful.
Thirdly, Froyo has a really nice "use your phone as an access point" app for sharing your 3G connection over WiFi***. I had a hacked one for Android 1.6 as well, and occasionally this is just a really nice appliance to have in a roomful of people bereft of internet.
Fourth, considering that Chrome to Phone is just an easy demo of the cloud services behind Froyo, it sure feels revolutionary. Can't wait to see people maxing out this particular capability.
Fifth - and it feels silly to have to mention this - Froyo is the first smartphone HW/SW combo where you can add storage in a reasonable way, i.e. move apps and everything else to basically unlimited, replaceable storage.
On top of all the conveniences of not being locked down - easy access to the file system, easy backup of text messages and call logs - this feels like a nice edition of Android to plateau on for a while. If the next year or so is more about hardware - new formats, like tablets - and just polishing the experience using all of these new capabilities, I think that'll work out great.
* (1.5 yrs old; yes it sucks that this is 'old'. We need to do better with equipment lifetimes)
** I'm going to put a couple of detailed howto posts on the hacks blog over the next couple of days, so you can do the same thing.
*** For Cyanogenmod, you need this.
I got into the Wave preview, and first things first: this really is a beta. The bugs. It is full of them. It is a dangerous move - lots of people aren't very good at looking at unfinished things - but I personally appreciate the opportunity.
Second thing: You can't really evaluate something that involves communication before you're actually trying it out for communication that you really want to be doing. Trying out Wave without a real need or purpose - because I can't invite the people I usually need to talk to - makes me question any and all user testing of software that isn't done with real paying customers.
That being said, even on the features, the experience seems a rewrite or two away from being generally useful. Which is not a problem: GMail, famously, was redone six times for experience. Wave still feels like a lot of the interface hasn't been sandpapered down to proper size. I'm even unsure about the basic interface metaphor.
After seeing the original Wave demo, and reading the whitepapers, I wrote about how the project hit all the right marks in terms of openness and technology, and I still think the underlying tech - choice of Problem, Platform and Policy - is incredibly right. It's just that it's right for a lot of different experiences, and the one we're looking at now isn't polished yet and doesn't really show off all the 3 Ps can handle.
So what could it do, what does it do, and what does and does not work?
What the Wave technology does underneath the experience is real-time, collaborative, federated, versioned editing of XML.
But of course it is not an experience.
The Wave UI seems to have stayed pretty close to the tech-description above. The experience is "structured, live, collaborative writing". Examples of what it isn't:
However, we could just have a couple of different experiences - much like I chat in Adium but love the archive in GMail. That integration is incomplete, in that I can't restart the chat later. Once I'm in GMail I have moved on. There's no Adium-friendly API into the chat archives. With Wave I could start a conversation with a reference to a previous conversation - which could be a great interface. Also, the ability to fork conversations could be made very nice as an experience.
When I started writing, I wanted to include my list of observations of odd things in Wave (the desktop metaphor seems wrong for this, wave/contact/tool organization takes up too much screen compared to content, muting vs. archiving is hard to understand, 99 changes in a wave are impossible to understand, bots vs. gadgets - what does what?, why is there a 'display' and an 'edit' mode?) as well as what seem like bugs - but I think I'll do that separately later, in a better format.
So, yesterday afternoon, Morten suggested it would be cool if there was a site that could score the days at Roskilde against personal preferences as expressed through Last.fm.
Indeed it would be, and since Morten does nice minimal interfaces and I do data gathering and mixing, we agreed to split the work and build the Best Day at Roskilde finder.
It's worthwhile to have a look at what infrastructure we have used for this and which situational hacks are involved. I didn't have to scrape the concert program myself, as Steffen had already done that, through Yahoo's YQL.
What I needed to do was mine Last.fm's API for relevancy scores for those bands, merge them with the user's favorite bands, present the result to Morten's website as simply as possible, and let Morten make a useful interface for the data.
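The mixing step can be sketched roughly like this - a minimal Python sketch where `day_score`, `best_day` and the similarity table are hypothetical names of mine, and where the real relevancy numbers would come from Last.fm's API rather than a dict:

```python
def day_score(day_lineup, favorites, similarity):
    # Score one festival day: for each band playing that day, add how
    # relevant the band is to the user's favorites. A direct favorite
    # counts fully; otherwise use the best similarity score available.
    score = 0.0
    for band in day_lineup:
        if band in favorites:
            score += 1.0
        else:
            score += max((similarity.get((band, fav), 0.0)
                          for fav in favorites), default=0.0)
    return score

def best_day(program, favorites, similarity):
    # program maps day -> list of bands; pick the highest-scoring day
    return max(program, key=lambda d: day_score(program[d],
                                                favorites, similarity))
```

So with a program like `{'thursday': ['A', 'B'], 'friday': ['C']}` and `'C'` among the user's favorites, `best_day` picks Friday.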
It doesn't quite end there, though. Morten had previously exploited live play information from Danish National Radio to create a radio station persona on Last.fm.
Through Spotify, using Spotify's Last.fm integration, he is also building a Roskilde Festival persona
- these will give more general than personal answers: "If you're the kind of person who listens to this radio station, you will like..."
It's interesting how much infrastructure is available - and useful - for a mashup like this.
We're using Yahoo, Last.fm, Danish Radio's website, Roskilde's website and Spotify as data sources/web services - and combining preexisting situational hacks from three people, on top of the obvious webservers and direct hacking.
These resources can be combined, and hidden away, in less than 10 hours to produce a coherent, simple and fun website.
Add instant distribution through Facebook and Twitter (Facebook wins) and there's a nice useful bit of mashup for an intended audience of 200-10000 people.
Netbooks, so far, haven't really been interesting. They are cheap - and of course that's interesting in and of itself - but they don't really change what you can do in the world. Their battery life, shape, weight and notably software have been much the same as expensive laptops, just with a little less in the value bundle. Which is perfectly fine for 90% of laptop uses.
That's set to change, though. New software - assuming the network, and consumer-packaged for simplicity, sociality and "cultural computing" more than "admin and creation" style computing - is just about to surface. Fitted with an app store and simplified, the netbook assumes an appliance role more than a general-purpose computing role.
The hardware vendors are adapting to that idea also; moving towards ultra low power consumption and enough battery life that you simply stop thinking about the battery.
Meanwhile, Microsoft is busy squandering this opportunity. They simply don't get this type of environment, apparently - and are intent on office-ifying and desktop-ifying the metaphor. Where Bill Gates' "a computer on every desk" was once a vision of not having computing only in corporations and server parks, it is now severely limiting. Why do I need a desk to have a computer?
I thought Bing vs. Wave made an interesting comparison. Bing is a rebranding of completely generic search; absolutely nothing new. Not a single feature in the presentation video does anything I don't already have. And yet it's presented in classic Microsoft form, as if it were something new and as if these unoriginal product ideas sprang from Microsoft by immaculate conception.
Contrast that to Google Wave, which - if it does something wrong - is overreaching more than underwhelming. And contrast also Wave's internet-born and internet-ready presentation and launch conditions. It's built on an open platform (XMPP aka Jabber). The Wave whitepapers gladly acknowledge the inspiration from research on collaborative creation elsewhere. The protocol is published. A reference implementation will be open sourced. The hosted Wave service will federate. It is a concern for Google (mentioned in presentations) to give third parties equal access to the plugin system - the company acknowledges that internally grown stuff has an initial advantage and is concerned with leveling the playing field.
Does Microsoft have the culture and the skills to make the same kind of move? I'm not suggesting that there's an evil vs nonevil thing here - obviously Google wins by owning important infrastructure - but just that the style of invention in Wave, based on other people's standards and given away so others can again innovate on top of it, seems completely at odds with Microsoft's notion of how you own the stuff you own.
So Wolfram Alpha - the much-talked-about Google killer - is out. It's not really a Google killer; it's more like an oversexed version of the Google Calculator - good at dealing with a curated set of questions.
The cooked examples on the site often look great, of course; there's stuff you would expect from Mathematica - maths and some physics - but my first hour or two with the service yielded very few answers corresponding to the tasks I set myself.
I figured that one of the strengths of the system was that it has data, not pages, so I started asking for population growth by country - did not work. Looking up "GDP Denmark historical" works, but presents meaningless statistics - like a bad college student with a calculator, averaging stuff that should not be averaged. A GDP time series is a growth curve. The mean is meaningless.
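A toy illustration of the point, with made-up numbers: averaging a growth curve produces a figure that describes no year at all, while the growth rate recovers the actual story.

```python
# Hypothetical GDP series growing a steady 3% a year (index, year 0 = 100)
gdp = [100 * 1.03 ** year for year in range(10)]

# The 'bad college student' statistic: one mean over the whole series
mean = sum(gdp) / len(gdp)

# The number that actually describes a growth curve: the annual rate
annual_growth = (gdp[-1] / gdp[0]) ** (1 / (len(gdp) - 1)) - 1
```

The mean lands somewhere between the first and last year's values and answers no question anyone asked; `annual_growth` comes back as the 3% the series was built from.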
Google needs an extra click to get there - but the end result is better.
I tried life expectancy; again I could only compare a few countries - and again, statistics I didn't ask for dominate.
Let's do a head-to-head with some stuff the Google Calculator was built for - unit conversion. "4 feet in meters" helpfully overshares and gives me the answer in "rack units" as well. Change the scale to 400 feet and you get the answer in multiples of Noah's Ark (!) plus a small compendium of facts from your physics textbook...
OK - enough with the time series and calculator stuff, let's try just one number lookup: rain in Sahara. Sadly, Wolfram has made a decision: Rain and Sahara are both movie titles, so this must be about movies. Let's compare with Google. This is one of those cases where people would look at the Google answer and conclude we need a real database. The Google page gives a relief organisation that uses "rain in Sahara" poetically, to mean relief - and a Swiss rock band - but as we saw, Wolfram sadly concluded that Rain + Sahara are movies, so no database help there.
I try to correct my search strategy to "how much rain in Sahara", which fails hilariously by informing me that no, the movie "Rain" is not part of the movie "Sahara". The same approach on Google works well.
I begin to see the problem. Wolfram Alpha seems locked in a genius trap, supposing that we are looking for The Answer and that there is one, and that the problem at hand is to deliver The Answer and nothing else. That model of knowledge is just wrong, as the Sahara case demonstrates.
The oversharing (length in Noah's Ark units) when The Answer is at hand doesn't help either, even if it is good nerdy entertainment.
Final task: major cities in Denmark. The answer: We don't know The Answer for that - we have "some answers" but not The Answer, so we're not going to tell you anything at all.
Very few questions are really formulas to compute an answer. And that's what Wolfram Alpha is: A calculator of Answers.
This project is so right. Replace Processing's own language with the equally intuitive, but very powerful, Scala and you have the immediacy of Processing with some really serious legs for later abstraction.
I even think Scala has standard Processing beat as far as intuition goes. No void. No brackets when there are no parameters, and so on. A better expression-to-ASCII ratio, quite simply.
Ubicomp is the old dream of computation in everything - and here is a really good slideshow discussing whether we haven't in fact already gotten it without noticing - in iPods and clever phones and unexpected remixes of real-world data with web data. It really gets the thoughts going.
When we built Imity - bluetooth autodetecting social network for your cell phone - we did - of course - get the occasional "big brother"-y comment about how we were building the surveillance society. We were always very careful to not frame the application as being about that, careful with the language, hoping to foster a culture that didn't approach the service on those terms. We never got the traction to see whether our cultural setup was sufficient to keep the use on the terms we wanted, but it was still important to have the right cultural idea about what the technology was for, to curb the most paranoid thinking about potentials.
It's simply not a reasonable thing to ask of new technology that it should be harm-proof. Nothing worthwhile is. Cars aren't. Knives aren't. Why would high-tech ever be? And just where in the narrative of some future disaster does the backtracking to find the harm end? Computers and the internet are routinely blamed for all kinds of wrongdoing, whereas the clothing, roads, vehicles and other pre-digital artifacts surrounding something bad routinely are not.
What matters is the culture of use around the technology - whether there is a culture of reasonable use or a culture of unreasonable use. And you simply cannot infer the culture from the technology. Culture does not grow from the technology. It just does not work that way.
I think a lot of the internet disbelief wrt. the Pirate Bay verdict comes from basically missing this point. "But then Google is infringing as well" floats around. But the important thing here is that the Pirate Bay is largely a culture of sharing illegally copied content, whereas Google is largely a culture of finding information.
I think it's important to keep culture in mind - because that in turn sets technology free to grow. We can't blame technology for any potential future harm; we'll just have to not do harm with it in the future - but the flip side of course is that responsibility remains with us.
I haven't read the verdict, but the post-verdict press conference focused squarely on the organization, behaviour and economics of what actually crossed the Pirate Bay search engine, which seems sound.
- that being said, copyright owners are still squandering the digital opportunity by not coming up with new ways of distribution better suited to the digital world. But the internet response wrt. The Pirate Bay - that they just couldn't be guilty, for technological reasons - does not really seem solid to me, if we are to reason in a healthy manner about technology and society at all.
The What-You-Want Web got a number of power boosts this week.
The What-You-Want Web is my just-coined phrase for the lock-in-free, non-value-bundled, disintermediated, highly competitive computation, API, and experience fabric one could hope the web is evolving towards. Twitter already lives there; nice to see some more people join.
The important thing about all of these announcements is that they forgo a number of options for making money off free/cheap: Lowering the friction towards zero means the services have to succeed on their own merits. If they fail to offer what I need or want, I can just leave. I don't have to buy into the platform promise of any of these tools, I can just get the stuff that has value to me.
I think in five years we will remember Twitter largely as the first radically open company on the web. Considering the highly available search and good APIs, there literally is no aspect of your life on Twitter that you can't take with you.
P.S. (Also, three cheers for Polarrose, launching flickr/facebook-face recognition today. A company adding decisive value with unique technology, born to take advantage of the WYW-Web.)
A pretty good overview of what's wrong with URL shorteners: they destroy the link space and add brittle infrastructure run by who knows who. We already know that the real value proposition is traffic measurement - i.e. selling your privacy short.
The problem of course is the obvious utility of shorteners.
This is all new stuff, the current state of the art is not how it is going to end.
Artificial intelligence is usually a matter of elegant search algorithms. You have a lot of data, want to know something about it, and so you use one of a range of elegant search algorithms; there are different approaches - brute force, optimal guessing, random guessing. At the core of such an algorithm sits a test that shows whether you have found what you were looking for.
That is simple enough to understand, once you look beyond the magical result. It gets more punk when the test at the core of the algorithm is carried out by a laboratory robot - that is, by a real, physical machine working with real, physical biological systems in the laboratory.
Such machines actually exist - at least one of them does. And it has just had a breakthrough, isolating a set of genes coding for an enzyme whose genetic source was unknown.
The Wired article has many more details, which makes it extra sad to know that Wired Online has just been cut back drastically by the budget axe.
Listen to this: as the frequency goes up, it splits into multiple tones, then turns into chaos, briefly reintegrates, and then turns back into chaos. You might also like this version, where I've simplified to pure semitones (i.e. the keys on a piano).
[UPDATE: New personal favourite - in C major - much more dramatic.]
The logistic map is probably the simplest and most celebrated math lab example of chaos.
It's a pretty simple function f with a control parameter r. When you take a number, say 0.5, and compute f(0.5), then f(f(0.5)) and so on, interesting things happen. When r is low, you quickly end up at a fixed value - some point p where f(p) = p - so the iteration just stays there. When you increase r, however, a lot of stuff happens: first a split, so the iteration flip-flops between two values, then that happens again into four values, and so on. Above a certain value of r you reach chaos. This famous image shows the fixed points and chaos of the iteration for values of r.
The image, however, is static - you don't get a feel for how the dynamics of the iteration hop around on it.
I was curious how that sounds, so I made this Pure Data patch and took a slow slide up the chaos scale. The result is above.
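For the curious, the map in question is f(x) = r·x·(1−x). The behaviour described above - fixed point, period doubling, chaos - can be reproduced in a few lines of Python (the function names are mine; the actual sonification was done in Pure Data):

```python
def logistic(r, x):
    # One step of the logistic map f(x) = r * x * (1 - x)
    return r * x * (1 - x)

def orbit(r, x0=0.5, warmup=1000, keep=16):
    # Iterate the map, throw away the transient, and return the
    # distinct values the iteration settles on (rounded so that
    # cycles are easy to spot)
    x = x0
    for _ in range(warmup):
        x = logistic(r, x)
    seen = []
    for _ in range(keep):
        x = logistic(r, x)
        seen.append(round(x, 6))
    return sorted(set(seen))

# r = 2.5: one fixed point, at p = 1 - 1/r = 0.6
# r = 3.2: the first split - the orbit flip-flops between two values
# r = 3.9: chaos - (almost) no value repeats
```

Sliding r upward while mapping the orbit values to frequencies gives exactly the tone-splitting you hear in the recording.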
Here is an elegant, technology-free product introduction. It is appetizing and welcoming - the problem is just that the promise this product makes is one I have heard, and seen broken, dozens of times. Everything from Microsoft's "Information at your fingertips" to tag clouds shares the promise. And it never quite holds. The best shots ever at keeping that promise are Google and Wikipedia.
The abstract friendliness simply gets in the way; a single concrete success in the video would probably have sold it better.
Sensing in the iPhone, Radiohead 3D data and a little hacking, and you have Thom Yorke doing his best Leia-Hologram impersonation in the air above an iPhone.
Polarrose has a new look and a face search engine. Here, for example, is Abraham Lincoln in many versions. The number above each face counts the many places where the same picture was found, so a nice feature is that you see, as simply as possible, different pictures of a person.
My old Amazon hack - collaboratively filter on amazon.com, but purchase within the EU from amazon.co.uk - had gone stale but is now fixed: Find it on amazon.co.uk (usage: drag to toolbar, click when on a single book page on Amazon).
Amazon has gone to "meaningful URLs" - except not so much for machines. Screenscraping, and its bookmarklet cousin, has always been brittle.
On the Second Life blog Jim Purbrick riffs on a session we had at EuroFOO about mixing the real world and Second Life. What Jim has done - and what was the onset of the discussion - and what we've done a little bit of at Imity - is prototyping data-enriched physical worlds - augmented reality - in Second Life where everything works and you don't have to mess about with the physical shortcomings of Bluetooth or RFID scans. We talked a little bit about this in the context of CO2 accounting - modeling high fidelity CO2 accounting inside Second Life, giving you a perfect CO2 history of every simulated object in SL. The logical conclusion - all the more relevant since the recent interest in Second Life ecology, was to do actual CO2 accounting for Second Life inside the simulated world. But this probably is some ways off.
By no means have we given up on the idea of doing nice Imity/Second Life crossovers by the way. There's just the problem of time.
Also, note how this very nice idea is completely immune to the SL hype discussion.
Through a thread here (in Danish) I found this T-shirt. I'm actually slightly red/green challenged, so while I could tell that there was a text there, I couldn't tell what it said. (I can tell the colors apart easily, but some of the tests designed to trip people like me up do trip me up.) Asking a colleague produced no result (more Danish) - well, actually it did, he's just telling the better story where it doesn't - so I did the reasonable thing and wrote a perl script to color-separate the image so I could tell what it said.
The color separation also shows how the trip-up actually works. Above are the red, green and blue components of the image, respectively. Notice how the blue component is just noise as far as the test goes. The red has more signal in the letters, and the green has less signal inside the letters. One can see how the obfuscation works when you have trouble distinguishing red from green: the reduced green signal is matched by an increased red signal, but if you can't tell them apart, this just cancels to noise.
Presumably, if the lowered green matched the heightened red exactly, I wouldn't even know I was missing out on something, if it weren't for the social clue in "this looks like one of those color tests".
The blue component has absolutely no effect on the readability of the text (for me, that is). This image, which is the red and green without the blue, is more or less as unreadable as the original (i.e. I can tell there's something to miss, but not what I'm missing).
The code I used for the separation can be found here.
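The original script was perl, but the cancellation mechanism itself fits in a few lines of Python - a minimal sketch where the function names and the pixel values are mine, invented for illustration:

```python
def channels(pixels):
    # Split a list of (r, g, b) pixels into the three separate
    # channel images discussed above
    return ([r for r, g, b in pixels],
            [g for r, g, b in pixels],
            [b for r, g, b in pixels])

def red_green_confused(pixels):
    # Crude model of red/green confusion: if you can't tell red and
    # green apart, all you perceive is their combined brightness
    return [(r + g) / 2 for r, g, b in pixels]

# A letter pixel raises red exactly as much as it lowers green:
background = (120, 120, 40)
letter     = (150,  90, 40)   # +30 red, -30 green
```

In the separated channels the letter still stands out (150 vs. 120 in red, 90 vs. 120 in green), but to red/green-confused eyes both pixels read as the same brightness - the text cancels to noise, just as described.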
Here's a totally obvious fact not much talked about: Of course virus writers test against popular anti-virus packages before 'releasing'. They want the virus to work after all.
Not exactly for the same reasons, but I assume the story is equally obvious if you replace 'virus writers' with 'bacteria' and 'popular anti-virus packages' with 'antibiotics commonly used in hospitals'.
Fun, but fundamentally broken, business idea: "Why don't we make an 'ideas for software' exchange where you can sell your idea without having to build the software!" You publish the idea, let other people run with it, and then they pay you back for that pivotal seed insight later on. Fun idea. But there's a reason Ben Hammersley isn't rolling in cash generated by Lazyweb, which is simply that ideas are dirt cheap and people have them all the time. Their value is really, really close to zero. Unless you have real paying customers with money in hand waiting for your idea, you're just not adding any significant value by merely having the idea. The problem in making money with ideas is in the words making and money, not the word ideas.
The idea that you can test the idea before actually building is novel - but unless there's a creative process there - a prebuild phase that adds value and diminishes development cost later - it's just a waste of valuable lead time.
As a more personal aside - I just don't believe markets build good software, not yet anyway - good software (if it is new in any interesting way) happens best with 3-4 guys in a room who know each other well and have the necessary skills. It's a refreshing idea to create an idea market for this kind of thing but the transaction costs in building software are just too high. They actually dwarf the price of the software itself. That's why you try to build teams small enough that you don't have transaction costs.
Ning is attacking the same kind of problem, which is of equally doubtful value, but at least they're changing the rules where it matters: The price/time to execute the idea.
Who knew? There's a cybernetic scientific principle to back up the "simple software" movement as promoted, in words and actions, by 37signals. The law says that if your system (here, software) is more flexible, it will be able to handle more usage scenarios. As discussed all over the place on Signal vs. Noise, simple software is flexible because, as you try to apply it to new situations, it doesn't have a lot of parts that get in the way.
I like the connection. It also rhymes with basic intuitions from physics and mathematics: The more constraints (i.e. features) you add to a system (of equations), the fewer solutions (i.e. uses) it has.
The Unix Way was bred from this understanding more than 30 years ago. It's saddening that the economics of selling software (mainly Windows + MS Office, but there are other culprits) has led us on such a detour.
So previously I was talking about some of the commonalities between good innovation practice in general and some of the well-tested methods of agile software design. For the agile software side of that equation I forgot this summary by Alistair Cockburn of what you're supposed to do. I particularly like that Cockburn also has a version of the "leave it alone" suggestion I made, in his personal safety point.
The good thing about the practices Cockburn highlights is that none of them cost serious money or time. All it is is healthy, direct communication that builds good knowledge - and good products.
I feel blessed to have experienced this kind of work environment over the last couple of years...
[UPDATE: Also Joi Ito chimes in from a completely different angle:
Some of the elements of a cool place is that there isn’t so much of an "authority" but there is a sense of safety.]
Sharing a quick note doesn't get much simpler than this.
Also - congratulations to shortText on just delivering service before asking me for anything.
It's obvious, isn't it? GMail's very nice compression of threads makes email as efficient and compact as IM.
It's secure. File sharing comes naturally right inside GMail. Images are inlined in GMail messages. 2.6 GB free file storage space. Permanent archive with efficient and simple search. I frequently conduct 30-50 email conversations in groups of 3-5 people in GMail. Feels very much like IM.
That being said, I think Campfire looks nice and fits into the "simple language, all-in-browser" category of replacers 37signals seem to like doing. If I didn't already have a superior free version, I might just buy it.
If you don't see flash and are using the Adblock extension in Firefox or if you uninstalled Adblock because you couldn't get flash to work with Adblock, you should use Adblock Plus. It works. The links pointed to on extensionmirror.nl don't work - but the install from the Adblock Plus website does. The original Adblock project has been dormant at least for a year.
I'm back from Amsterdam and the debut of EuroOSCON. I've had fun and broodje haring (which is a "pickled herring hot dog" with raw onion and pickles).
There were good talks and not so good talks and a lot of nice and interesting people.
As a perl hacker it was fun to see all (well, a lot of) the perl hackers from the mailing lists in the flesh, and there were plenty of good talks for my tastes anyway.
Overall the vibe was "sharing, simplicity, source" - all the projects were focused on moving on. Tons of projects were in a remake phase rather than a make phase, so they were focused on getting it right the second (or third or fourth or ...) time around.
Memorable performances in the "it's about the user" category were Ben Goodger's talk on the making of Firefox and Jeff Waugh's talk on Gnome.
What impressed me the most was the intense commitment to the social side of software. It's why open source works, of course, but it was also top of mind for everyone from perl 6 reimplementers to Linux desktop hackers. Or maybe I just went to the talks where this was top of mind. A conference is, after all, designed so that you miss 80% of the fun.
I might cover some of the talks on my infrequent hacking addendum (in particular I think I'll expand on the sadness that is perl 6), but here are at least some highlights: the best talk I saw was Autrijus Tang's talk on Pugs. I liked that it actually works, that there's heart and courage, vision, friendliness, openness, and a complete lack of perl (or any language) bigotry - and if you add a nice presentation and the ambition to go on a 10-year hacking pilgrimage a la Paul Erdős, I think we have a best-of-show winner. If I were a benevolent millionaire (I'm not), I would sponsor Autrijus' walkabout in a heartbeat.
The Maker Faire was fun, but as a previous Ars Electronica attendee I have to say I was distinctly underwhelmed by most of the hacks. Each year Ars E presents nothing but hacks that are way, way superior. That's all fine and dandy - this is supposed to be about grassrootsy, get-involved, hack-your-own-stuff MAKEing, it's not supposed to be perfect - but the secret, of course, is that so is Ars E to a large extent.
A couple of good things there, though. A couple of projects had vision and heart (not just hacking) in mind. The nodel project is a collection of tools for building an open, voluntary people/locations/events semantic web. Fernando Botelho was interested in building cheap computers for the blind from open source software. It's more of a call to action than an existing project, but there's scope and heart there.
Other than that, the most admired demo was Beth Goza's tour through Second Life - an MMORPG that has two to three unique enabling features: There's a free basic account if you want to look around. You can script the environment yourself. The interactions are social and not hack&slay. It's simply not about killing people.
I quite liked a demo that wasn't really part of the faire but just an impromptu demo by Liz Turner of her ICONAUT, a newsmeme analysis tool that slices and dices news keywords with isometric iconography. It's either informing and eye-opening or just eye-opening, but it's certainly that.
It seems Larry McVoy of BitMover is either a complete asshole or just your run-of-the-mill socially inept paranoid hacker. He is actively reaching out to customers to have their employees stop working on open source that competes with BitKeeper. These people aren't BitKeeper developers, they're end users. It's as if Microsoft tried to prevent Word users from contributing source to OpenOffice. Simply insane. Lesson learned: Avoid BitKeeper at all costs.
f(arg1, arg2, arg3, arg4) and defining the function f' as
f'(x) = f(x, arg2, arg3, arg4)
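The fragment above describes partial application (often discussed alongside currying): fixing some arguments of a function to derive a new one. A minimal Python sketch, where the function body and the fixed values are purely illustrative:

```python
from functools import partial

def f(a, b, c, d):
    # Stand-in for the four-argument function above.
    return (a, b, c, d)

arg2, arg3, arg4 = 2, 3, 4

# f_prime(x) == f(x, arg2, arg3, arg4), with the last three arguments fixed.
f_prime = lambda x: f(x, arg2, arg3, arg4)

# functools.partial expresses the same thing, here fixing the
# trailing arguments by keyword:
f_prime2 = partial(f, b=arg2, c=arg3, d=arg4)

print(f_prime(1))   # (1, 2, 3, 4)
print(f_prime2(1))  # (1, 2, 3, 4)
```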
I hadn't seen the Google Feed Reader before today; it started cropping up in my referrer log. GMail-like, with what appears to be too heavy an interface for its own good. It's quite slow at present and not immediately useful. But, like GMail, the choices made for how to browse feeds may make sense in the end.
(About not seeing it before: It's new.)
The otherwise excellent Firefox plugin Slogger has a serious flaw: logging a page that you got as the result of a POST request repeats the request. Thinking in RESTian terms, only the results of GET requests are supposed to be safe, repeatable and cacheable - not POST requests. If you're autologging something important you might get disappointing results after purchasing that book, deleting that DB record, and so on. This is a discussion similar to the one over Google Web Accelerator, but more serious, since this one involves POSTs and not just ill-considered GETs.
I'll update this post if someone posts a fix to the Slogger mailing list.
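In code, the RESTian rule Slogger violates amounts to a whitelist of "safe" methods that an auto-logger or cache may silently replay. A hypothetical helper (not Slogger's actual code; the method names are the safe methods from HTTP/1.1):

```python
# Only "safe" HTTP methods may be re-issued automatically without
# side effects (per HTTP/1.1: GET and HEAD).
SAFE_METHODS = {"GET", "HEAD"}

def may_replay(method):
    """Return True if an archiver/cache may silently repeat this request."""
    return method.upper() in SAFE_METHODS

assert may_replay("get")
assert not may_replay("POST")  # replaying a POST may re-purchase that book
```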
OK, now that we've established that surface matters and language matters, is it OK to dissect game-changer hopefuls like Ning?
It looks to me like a well done, well integrated, advanced web panel for a well equipped PHP hosting solution. What that means is: You use the web browser to configure and manage your hosted application. You can rely on certain pluggable elements that will fit right into your data (e.g. user login), but other than that you're writing PHP against a set of compatible elements accessing e.g. all the open social services out there.
A server with a full CPAN, a web interface to my CGI directory and something like Catalyst would seem to do almost the same trick. Or the equivalent based on Rails.
But of course execution is everything and ideas nothing in this particular case. Looking forward to testing this - even if I do have to learn PHP.
Over on Signal to Noise there's a discussion of the speed with which Writeboard fixed the obvious blunder of not locking documents.
The obvious discussion gets started: Was this held back to have some good news to offer real fast, or is this really a rapid release cycle? Some think so, some dismiss it as rampant conspiracy theory. It's impossible to tell, of course, but it's worth mentioning that Jason Fried suggested doing exactly that (holding back features) as a marketing strategy in his talk at Reboot. Of course he didn't suggest that you should tell people another story in public. Personally I'm not sure I like the suggestion either way.
Also: The whole "no public betas" discussion seems like religion over fact to me. In both schools (i.e. Google vs. 37Signals) products get released with plenty of flaws and shortcomings. In both cases your best bet is to hope that they're working hard to improve the product. As far as I'm concerned, I derive no comfort from having the beta sticker removed when I still have to expect bugs and feature changes/additions as a part of life with the product.
Since we're talking about wikis (even ones that won't admit they are secretly wikis - you'd think they were afraid of being thrown out of the country if they secretly admitted to being open community software), I looked around to see if it was really true that there weren't any good hosted wiki services around.
There are plenty of hosting plans that include wikis, but there's also Schtuff, which is free, simple, and has history (with differencing in the same style as Writeboard), access control, search, backups, and much more.
They could do with a fresher look and bigger fonts, but other than that it looks really nice. Bookmarked.
Other free options are here.
Good post by Rick Segal on Writeboard and the response from geeks like myself. Language matters, and "Yes. This is just a one page wiki with an edit history and a forced login to view/edit.", my description of Writeboard, works for me but does not work for everyone. So what's the invention in Writeboard? Certainly nothing in the functionality of the application, which really is plain old wiki with absolutely nothing new. It's not even simplified in any way. Even Ward Cunningham's ur-wiki had all the features of Writeboard, with the same stunning simplicity.
The language, however, is clean and maybe new. The product name and domain name are immediately descriptive and comprehensible, and the language used in the application avoids cutesy tech words (wiki) that mom wouldn't know the meaning of. I'll buy an argument that this really matters. (The notion of a new product which has no novelty except in language is intriguing. Can user centered design really be boiled down to pure language?)
The new language does, however, have a bad sound of 'consumer' ringing through my ears. My mom won't mind at all. She is a consumer when it comes to tech. For 2-way conversation types like myself it's kind of off-putting though, so I'll take my wiki'ing elsewhere.
37Signals' Writeboard is trying to derive some buzz from the recent interest in collaborative web editing, e.g. JotSpot Live or Writely. But Writeboard is in fact nothing new - simply a 37Signals-branded version of ... the wiki!
Yes. This is just a one page wiki with an edit history and a forced login to view/edit. There's no fancy rich text editing; it's plain old wiki-formatting once again. There are no collaboration features except what was already the case with a plain old wiki (reload the page to see other people's recent edits). It has fewer features than almost any wiki package I can think of. There are plenty of hosted wikis (or just look here).
Free is always nice, but this must be seen as a pure marketing effort to drive attention and interest to the products of 37Signals. 37Signals is of course "the Apple of simple web applications", so I'm sure they can manage to get product reviews out of this (oh wait, they have).
Been trying out the new JotSpot Live, a kind of SubEthaEdit in the browser with AJAX, and it works - most of the time. There's outlining, complete with eye-opening live drag and drop; you can see what the others are doing, including the outline drag and drop. This is true AJAX eye-candy, and I think it will be useful too, although I've been using it for too short a time to actually prove that point.
Caveat: It still seems to be a little buggy, with occasional disconnects and (urgh) loss of data - but if this is possible in the browser, desktop apps will really need to be good to beat Web 2.0 applications.
What's also cool about it is the story of how it came about:
"The idea from JotSpot Live came from those two lines of thought: fine grained editing and locking combined with live updates. I had a chance to try implementing the idea at our hackathon a little while later, and (surprisingly) it seemed to work! "
WSJ is running a story on a complete redesign of Windows, supposedly in Vista.
While Windows itself couldn't be a single module -- it had too many functions for that -- it could be designed so that Microsoft could easily plug in or pull out new features without disrupting the whole system. That was a cornerstone of a plan Messrs. Srivastava and Valentine proposed to their boss, Mr. Allchin.
It's not just AJAX library snippets in ASP.NET, now Microsoft is also including an MPI library with a BSD license with their compute server product. Rampant communism! This software not only has a communist license but turns entire armies of machines into hard working slaves!
(lest we forget: There's also the use of Lucene.NET in Lookout - but that was more of an accident)
(oh, and FTP - BSD licenses all - so nothing sinister going on, just a nice counterpoint to the insane rhetoric against open source)
The new Office 12 interface will be something else completely. What this means, of course, is that organizations will have to spend huge amounts on training users for the new interface. Why not spend that money training them to use open software like OpenOffice instead? Note to office integration vendors: Do you think your existing Office integration will work seamlessly with a completely renewed Office interface? No?
It's nice to see Microsoft embracing the enormous speedup in productivity inherent in code sharing and open source. That's what one thinks as Microsoft's Atlas framework for AJAX web development, while still unfinished, is already borrowing idiom and maybe even code from the already established open source solutions out there. Can't wait for the patent applications to start flowing from this "investment in innovation".
The license for the demo applications for Atlas carries a good clause that should be added to all open source licenses pronto:
(C) If you begin patent litigation against Microsoft over patents that you think may apply to the software (including a cross-claim or counterclaim in a lawsuit), your license to the software ends automatically.
Jon Udell posts about an experiment in cookie development, pitching cookie emulations of three common work paradigms in software against each other. To the glee of commercial software types, the traditional managed approach won. This is entirely unsurprising, but not for any of the reasons Udell gives.
From the description given, it's unclear whether staffing was the same on all projects, but even if it were, there's no surprise in the managed team coming out ahead. What they were doing was delivering one take on one product - no second generation to consider, no bugs to fix. I don't know of any argument for why this would be faster with open source.
Secondly, all the successful open source projects have a 'star manager' (in the parlance of Udell's post). Linux has Linus, Python has Guido, Perl has Larry, Firefox has Goodger and Ross. The list goes on. It's unclear whether this was the case for the cookie project, but if not, I can't think of any good reason why open source by committee would be any better than any other software designed by committee. I am reminded of Jimbo Wales' statement about Wikipedia at Reboot: It doesn't work because it's a reputation system, but because it's a reputation culture. There's always someone there to command natural respect.
The reasons given, while obviously not without merit in the case of Linux (there was plenty of inspiration around), fail completely for other categories of software. Apache was a semi-early entry in its category and was based on the equally open NCSA HTTP daemon. Not first, but still quite early. Python/Perl/Ruby had no obvious predecessors. In short, the reasoning that open source can't lead but only follow seems entirely bogus, based on examples defined by successful commercial software. Obviously, on those examples, open source can't win.
Best post in a while from Just: Your PC is a tamagotchi. And he's right. Who wants one of those, really?
You can do a lot on the server of course, but apparently, you can even test the UI itself. Automated. Cross platform (browser + OS). Nice.
The only thing I'm left thinking is this: Wouldn't the test itself tend to work exactly when the browser is well behaved, and therefore produce too many positive test results?
Just as I was prototyping my own web framework (frameworks are like CMSes - everybody wants their own) to make web programming simpler for personal projects, I hear I have to investigate Catalyst as an attempt at this. Let's hope it installs better than Maypole did* (never got that to run on Windows) and that it can gain some momentum.
So far I'm liking it - the template engine is the Template Toolkit, the model uses Class::DBI - just what I was considering up front.
I was going to call mine pails, as a joke and to indicate a tool to pump water off the sinking ship that is perl. Between Python, Ruby and PHP there's a lot of competition in the scripted world.
* [UPDATE: It did. Straight off CPAN, requiring no crazy modules. No default usable cgi script for use under mod_perl2 though]
He didn’t bother to tell me what it does, and remember, I only really started looking at Ruby last week, but it’s obvious
A parable: I recently returned from vacationing in Italy. I don't speak a word of Italian, but I was pretty much able to read all the signs (all $10 words come from Latin. I like $10 words). I still don't speak a word of Italian.
Tim Bray's example reminds me a little of this. I know and use a multitude of languages, and I realize Ruby isn't far from some of them. The example Bray quotes is really easy to make out. But it's almost as if the proximity of Ruby to e.g. perl is working against me, for much the same reason I never bothered to learn proper Swedish.
Excellent post on structure vs. data; the soundbite is:
Data First strategies have higher usability efficiency (all rest being equal) than Structure First strategies.
Which means nothing but the following: Structure is unnatural for us; it must be learned, and until we learn it, we are challenged by structure. Data, more naturally, is just language, and we've been wired for that for 40,000 years.
The only problem with the last assertion - and we're learning this during the current "remix everything" paradigm and the emergence of the hypercomplex society - is that we quite simply can't keep up, and that the structural efficiencies we're used to are too expensive to be valuable when the structures we apply them to are as volatile as structure is in our highly mutable digital society.
This is also the reason why microformats have been so hugely successful and why the semantic web, old style, is unlikely to succeed in the near future.
I'm torn on the Hitting The High Notes thing. Clearly good programmers are qualitatively better than mediocre programmers. They solve other problems than the mediocre ones and they're much better at solving the right problem.
It's got a lot to do with the "Never solve the problem as stated" rule. It is almost never the task to just do what you're told. That's what mediocre people do (if you're lucky), but the reality is that you need people who will help you solve entirely different problems than the one you stated.
On the other hand, I think the cult of excellence is frequently wrong. It's much more important to simply be than it is to be excellent. And I think most software proves that point on a daily basis. We're using what's there - we're not waiting for the very best fix to our particular problem. Being there simply rocks.
In the same week Google Earth is (re)launched as a completely free version of Keyhole, Google adapts to the intense interest in Google Maps and publishes an API. Bloody excellent.
It can't be fun competing with that.
It is interesting how much easier it is for Google to launch all this stuff without a public backlash, because there is absolutely no question of monopoly in search (nobody has one) and also because Google behaves so very well interop-wise (free and available APIs for anyone to tinker with). Yahoo also gets it. Amazon also gets it. Microsoft only partially gets it, and Apple, despite having an extremely hackable platform, doesn't get it at all in terms of communication and the data services Apple also offers.
It's no wonder, given the fact that the designers of Java completely botched the implementation of generics, that sentiment along the lines of Generics Considered Harmful begins to appear. But generics aren't harmful. They just need to be done properly, used properly and tooled properly. It is quite possible that no programmer has ever been in a situation that all of these preconditions were met.
The generics in C# and Java are just badly done. Pure and simple. Too much work is left to the developer, and not nearly enough reliance on the compiler. This takes away the breathtaking flexibility of C++ generics, but still adds the complexity.
That's the "done properly" part. Clearly, even C++ gets it partially wrong, but mostly in ways that you could tool your way out of. As for "used properly" - obviously you can do horrible things with templates. If you use them for their basic usefulness - which is to provide the equivalent of all the useful helper words and verb and noun modifiers from natural language - then generics work just great. And not having them leaves you with the alternative of a rich external build environment - you need code generators to not go insane.
Tooling is abysmal in all the cases I have seen. The compilers I've used are just too slow. They expose the complexity of template definitions to unwitting template users (a horrible, horrible problem - you should never have to know the insides of a template you use) and fail in other ways to tool templates properly. I've made more comprehensive notes here.
Microsoft plans to "extend" (that's MS-speak for break) the RSS standard, as reported here.
Just wanted to have a post to reference back to, when Microsoft's patent application for RSS appears in a few years.
The bright new world of weakly typed, hackable web services also holds new perils. Google just switched GMail from the domain gmail.google.com to mail.google.com - and at least the GMail power tweaks I'm using just blew up.
[UPDATE: What really breaks everything is in fact the path part of the URL: it switched from being rooted at /gmail to being rooted at /mail]
I'm betting tons of people's little compiled micro-applications just blew up too. That'll teach you to use static resources to bind to anything as dynamic as a web site.
I wonder how long it will take before companies start achieving notoriety for breaking 3rd party hacks of their websites. Clearly they could claim that their service wasn't intended for hacking, but according to Microsoft old-timers (and reportedly also according to the leaked source of some time ago), Microsoft, of compatibility-breaking notoriety, could actually claim the same thing. The main problem on Windows was always the use of undocumented calls. Again: This is according to reports.
When will Google start feeling Microsoft's pain?
It's time to do a little more after-the-fact rationalizing about Reboot than just summarizing the talks - and just to hold on to the globalized reality I'll do it in Danish, even though it makes no difference at all that it's in Danish.
In a few years it won't matter anyway. And hey, as everyone knows, you mostly blog for your own sake.
David Axmarks foredrag om open source havde en vigtig pointe:
Open Source is brilliant at getting the little things right
I've always found the 20-way drop-down with language pairs ("English to French, Portuguese to English" and so on) on web translation sites annoying. The proper thing for these services to do is to detect the language of the page you want translated on their own and just show it in English already - an expert interface could be a button away, but simple things ("this page to English") should be simple.
I've made a bookmarklet (and a perl script) that does exactly this. It loads a web page, tries to guess the language used, and if Google Translate supports that language, the page is translated and you're done.
Don't abuse it please, or I'll have to take it down.
The underlying language categorizer is TextCat. This service works no better than TextCat does.
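TextCat implements the Cavnar & Trenkle character n-gram "out of place" method: each language gets a ranked n-gram frequency profile, and a document is assigned to the language whose profile it deviates from least. A toy Python sketch of the idea (the training snippets are made up and far too small for real use):

```python
from collections import Counter

def ngram_profile(text, n_max=3, top=300):
    """Map each of the most frequent character n-grams to its rank."""
    counts = Counter()
    for n in range(1, n_max + 1):
        for i in range(len(text) - n + 1):
            counts[text[i:i + n]] += 1
    ranked = [gram for gram, _ in counts.most_common(top)]
    return {gram: rank for rank, gram in enumerate(ranked)}

def out_of_place(doc_profile, lang_profile):
    """Sum of rank differences; missing n-grams get the maximum penalty."""
    max_penalty = len(lang_profile)
    return sum(abs(rank - lang_profile.get(gram, max_penalty))
               for gram, rank in doc_profile.items())

def guess_language(text, profiles):
    doc = ngram_profile(text)
    return min(profiles, key=lambda lang: out_of_place(doc, profiles[lang]))

# Tiny made-up training texts stand in for real corpora:
profiles = {
    "english": ngram_profile("the quick brown fox jumps over the lazy dog and the cat"),
    "danish": ngram_profile("den hurtige brune raev hopper over den dovne hund og katten"),
}
print(guess_language("the dog and the fox", profiles))  # english
```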
[UPDATE: This actually works directly on Google]
[Update: Better version over here, that stores the searches serverside so this works from multiple machines]
God bless Greasemonkey. Now I can have stored searches in GMail, implemented as a Greasemonkey user script. Justeren's comment was on the nose: "Isn't that what filters and labels are for?". It is - but filters only apply to new email, and spam filtering overrides filters. I have consistent problems with specific kinds of ham that I actually have rules to pick up, but that end up filtered as spam.
Google has taken the '00s concept of open source code bounties to new heights. This summer Google will sponsor a large number of open source projects, offering a bounty of $4500 to each of up to 200 participating developers - if they complete their designated project.
Projects that take on developers will also receive a smaller sum to help in managing the summer coders.
Maxing out the developer limit, the program would amount to roughly a $1 million contribution to open source.
The companies that sponsor/employ the important members of the large open source projects probably contribute more money than this - but even if this is mainly good advertising, it's still a huge deal for the projects that benefit.
[Update: It's x86]
Can't say I knew before I read the InfoWorld story, but the huge fuss (also on Slashdot) over Apple's supposed switch to Intel seems completely unaware that OS X already runs on x86. There are obvious commercial reasons not to move to a highly clonable platform, but technically Apple seems to be well along the way already.
Sigh. The Firefoxians must really, really hate me. Not only did the install completely fuck up; they broke "open everything in tabs, not in new windows" again. I absolutely, really, really hate this. I've tried the usual settings but nothing works.
Could someone please remind me what secret hack I'm forgetting when I want nothing at all to open any new stinking windows, except my own personal self going to the File menu and asking to "Open a new window".
Danish continuation: The translators of Firefox are surely well-meaning, but good grief, what boring, badly writing geeks. In English the settings read, snappily, "Tabbed browsing". In Danish it says "Fanebladsbaseret internetsurfing". It doesn't help that the application is translated into Danish when the language it's translated into is completely dreadful. That's bad, bad "Jusabilligti".
I absolutely buy the part of Joe Kraus' Hackathon post that says that short, focused bursts with an emphasis on actually shipping are how the really good stuff gets done. The problem with hackathons is that they are not, in my experience, truly sustainable, reproducible events.
My personal experience with hackathons comes from doing maybe 10 of them, either alone or with 1-2 coworkers. This has been possible where I've worked thanks to trusting or hands-off management - either way works if you have geeks of quality on your staff.
Management of that kind is a prerequisite, but it is not how hackathons get done: You need an itch to scratch. Good ways to find itches are "stuff is cool", "stuff is really, really late and really, really necessary" - or just genuinely having an itch to scratch.
The problem is that if you try to run a hackathon as a "process" without that itch, you'll get nowhere fast. A day off the map with less than 100% motivation is just a day off the map. I have tried, and I have seen colleagues try, and falter, because the itch we felt wasn't genuine - which meant that the hackathon wasn't the focused energy boost it's supposed to be, but just a day with an undefined task and a tight deadline.
But when it's good, it's good. I think all the really essential core ideas in our product have come out of sessions like hackathons.
The search intensity coloring on Google search history is not graded for power users...
So the automation of the tag hack is done; tags are followed here. Suggestions were followed, so tags are now named tag_something, not deli_something. Updated daily (which might be too low a frequency for some of these tags). I'll clean this up, autodetect tags from RSS feed links and share the tracker script in a little while.
But just as I finished, I saw the error of my ways: I should have just made an XSLT of the RSS feed and let Google crawl the result of that. That would be autoupdated, and Google would scrape it for me. Must fix.
As previously mentioned I wasn't impressed by backpack. Other people are, mainly because of the email integration. To me that feature is much better when just emulated with GMail. What you get is a responsive AJAX interface, spam and virus filtering, lightning fast search, tons of storage and a price tag of $0 per year for as many pages as you like as long as you stay under 2GB of total storage.
Here's what you do - it's just 2 easy steps
"But GMail doesn't provide public permalinks to messages", I hear you cry.
This is why I'm working on GPack, a miniature email-backed wiki implementation. It supports Textile formatting and updates automatically when your GPack receives email. It'll be done real soon.
Following on from the previous post, you might enjoy Larry Wall, interviewed here, on perl as a glue language and perl people as glue people (no, it's not a glue-sniffing pun) - multi-paradigm-aware integrators of things.
As I thought about that it struck me how very apt the term 'glue language' is in the case of perl. Often when you're trying to stick things together with glue you find that the glue ends up sticking to you instead in a big mess. Perl does that sometimes.
While I can see the point of Basecamp ("the simplest thing that could possibly work" for project management), Backpack seems utterly pointless. It's an "almost wiki" where each wiki page consists of multiple data items, combined with a TODO list. Pages have access controls, and there's a simple email-to-wiki update feature where you can edit a wiki page by sending an email.
Augmented wikis are done better elsewhere.
Parts of the application simply shouldn't have been released, since they are clearly not done yet. The "email this page" feature is horribly implemented. First of all, to email a page you have to use a special "email key" as the address. Having a hard-to-remember email address to keep track of outside your personal organiser application kind of defeats the purpose of an organiser, no? There are simpler and better ways to handle the security issues of email, e.g. the way JotSpot does it. And when you do send an email to a page, what you get is an embedded subject line, which links to a page with this inviting look. Incidentally, I did not set up this page to be public. Access control just isn't working yet for embedded email.
There's no easy-to-find search for your data. I thought GMail had made that mandatory for this kind of application.
The most impressive thing is the hype/content ratio on this project - almost enterprise grade. The hypefest here really tested my gag reflex.
But the worst thing is actually this: I don't think the 37 signallers realize they've just created a "me too" product of the worst kind. There is nothing new here that isn't in several open source packages and/or one of the other social software products. No convincing extra simplicity, no fresh new UI ideas.
To check the Google tag hacking idea, I've created a script to generate tag search tokens from del.icio.us RSS feeds. I plan to run this nightly on many different tags, but for now I've just done one run, on the tag googlehacks. The generated page is here.
Feel free to mirror a copy and link to that and/or my copy. The more the merrier in getting some momentum for the tag search google bomb. When I get the full setup done with nightly updates, I'll share the script to generate the html, not that it's in any way complicated.
The tags I use consist of the prefix 'deli_' in front of the tag name. Tags with non-alphanumeric characters are mangled (all non-alphanumerics replaced by _).
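The mangling rule above is a one-liner; a sketch in Python (the regex approach is mine - only the prefix and the replace-with-underscore behavior come from the post):

```python
import re

def mangle_tag(tag, prefix="deli_"):
    """Build a search token: the prefix plus the tag with every
    non-alphanumeric character replaced by an underscore."""
    return prefix + re.sub(r"[^A-Za-z0-9]", "_", tag)

print(mangle_tag("googlehacks"))  # deli_googlehacks
print(mangle_tag("web2.0"))       # deli_web2_0
```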
I've been thinking about whether this might get me accused of link farming, but I wouldn't think so. This isn't farming. I actually don't want the pages of links themselves to be popular. I am only interested in injecting search tokens in the searchable text set for other pages. For people uninterested in these made up terms (currently leading to empty searches) this shouldn't be an issue. The search tokens won't crop up in result lists, won't taint any search for "real" search terms and I wouldn't expect them to affect page rank either.
The GMail filesystem - an actual, mountable Linux filesystem stored on GMail.
This is less surprising than it sounds, because good GMail libraries exist and good "file system on anything" libraries exist, but it's still a very cool hack, and it underlines some important points against software patents and for open source.
It's beginning to look like the only thing protecting the desktop from irrelevance is the lack of a widely deployed easy to use infrastructure for private URL spaces. What's changing the desktop from enabler to encumbrance is our poor ability to integrate our local data with the many web service providers that are quietly starting to dominate the attention landscape. The ability of data to integrate into these services is beginning to dominate all other uses of data.
This struck home with me as InfoWorld gave up on homegrown taxonomies and started just using del.icio.us instead. The thinking used to be that providing structure was what a content provider did, but now InfoWorld is turning that model on its head and just trusting the common taxonomy instead.
With any luck this will also mark the beginning of the end of the ridiculous notions of deep links and front pages. Clearly, when InfoWorld gives up owning the taxonomy, each InfoWorld page has to survive on its own merits, independently of any site map InfoWorld has canonized.
I am aware that this may seem like old news to many readers, and I agree that it is, conceptually. As a practical problem however, it is only now becoming a huge problem as the variety of services available to augment your data just keeps growing. It's also worth noting that so far SOAP has had nothing to do with this new services world.
Apartments listed on Google Maps
People have started annotating Google Maps satellite images - sadly not on Google Maps itself, but rather on Flickr using exported images.
myGmaps provides a direct Google Maps annotation system. I can't imagine this service will not be shut down as they are effectively stripping all Google content except the map itself.
Also, people are sightseeing on Google Maps.
GMail is being used as an online backup system.
Greasemonkey simplifies and automates end-user customizations of websites (for Mozilla users only). This is a good thing as we recall from the allmusic redesign scandal.
Yahoo's term extraction service has a nice interplay with Technorati.
Embracing user defined extensions of your service can be a powerful thing...
The Starter Edition is a simplified version of Windows XP, oriented at users who have never had a computer or have little computer experience. It can open only three programs at the same time, with a maximum of three windows for each program, and cannot connect to computer networks.
This crippled version of Windows is intended as a "third world Windows" to combat Linux adoption by third world government programs. So the poor are only entitled to crippled computer systems? That Microsoft would consider this a good option, and possibly good PR, is beyond me. Why any government would even consider this option is also beyond me. The restrictions are arbitrary and purely commercial and smack of discrimination - and from a purely technical point of view, I would guess that they limit the usability of the system to such an extent that a well configured Linux box would actually be a better buy for poor, computer-illiterate users.
Keep them flash files loading, Loading, loading loading - Rawhide!
Good find by Just
Utterly cool. Possibly too cool.
I am slightly unimpressed with A9's Opensearch. A standard protocol for publishing search interfaces is a good idea. Whether basing it on RSS 2.0 is a good idea remains to be seen - but at least something somebody calls RSS is widely deployed, and it is also extensible, so that a search engine may extend the metadata published in search results.
What is decidedly underwhelming is the Opensearch aggregator A9. Seeing the data overkill of a 5-way multi-search aggregated into an A9 user profile brings me back to pre-google days when everybody, mistakenly, thought the problem was finding the right data. That's not the problem. It's not finding all the junk that's hard. A9 is like a browser based version of those desktop super searchers that were popular back in the 90s. And like those tools it is quite simply solving the wrong problem.
The next question then is what the right problem for Opensearch is if it is not the Opensearch aggregator. Personally I think the Opensearch search profiles will be extended with some kind of search profile indicating the grammar of searchable assertions (e.g. a specification telling me that I can search a particular database for the address of post offices based on postal codes). My search for post offices will then lead to this search profile and I will be able to use it. It will be sort of a weakly linked version of the semantic web. The only version of the semantic web that could ever work.
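Such a search profile could plausibly build on the OpenSearch description document itself. A hedged sketch of what a post office lookup profile might look like - the endpoint URL, the parameter name, and everything beyond the core OpenSearch elements are my assumptions:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>Post office lookup</ShortName>
  <Description>Find post office addresses by postal code</Description>
  <!-- {searchTerms} is filled in with the user's query, here a postal code -->
  <Url type="application/rss+xml"
       template="http://example.org/postoffices?zip={searchTerms}&amp;format=rss"/>
</OpenSearchDescription>
```

A client that understood "the query term here is a postal code, the results are addresses" would be exactly the weakly linked semantic web described above.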
An overlooked aspect of trackbacks is that as a trackbacker you also get an indirect traffic measurement for the trackbackee. If we assume some kind of fixed clickthrough rate for the links you are able to plant elsewhere, then your referrer log gives you a good indication of the traffic to URLs you've trackbacked.
All this to say that, judging from my referrer log, Joe Kraus' long tail post and Adam Rifkin's microformats post are getting read a lot.
Kraus' post is brand new and interesting, so no wonder there, but Rifkin's post seems very consistent traffic wise.
It's no surprise that Microsoft loves Groove Networks and its product. It's closed source. It's closed data. It's locked to Windows. But all of these properties also mean that Groove is just so old fashioned. I'm not saying the product isn't viable - in fact the few times I've used it it's been pretty neat - but I can't help but wonder how many companies wouldn't rather just use Basecamp for data sharing and project tracking and Skype for voice along side it.
If you want more control there's something like Jot.
For a whole lot of people it's not really important for these applications to be integrated into one closed data universe. Rather, that's a huge disadvantage. And it is obviously easier to integrate an open enterprise (i.e. one where company, customers and subcontractors talk to each other across technical and corporate boundaries).
When Groove started, these easy to use almost-free collaborative spaces online didn't exist, but now they do and that makes it hard for me to see why Groove will remain interesting.
The google map hack still takes more effort than it should in the final product (I'm hoping that Google doesn't actively try to fend these annotations off but realizes the immense value in them), but engadget has a better writeup than any other I have seen.
The (currently) last comment on this post is good:
Reading around this it seems there is a move away from LAMP and towards Ruby on Rails rather than Perl or PHP; BSD rather than Linux and maybe lighttpd rather than apache. MySQL seems to be the constant.
I don't know why the direct translation BLMR (as in late) is so bad, though.
So rather than LAMP try LMBR, pronounced limber to coin a new acronym.
Earlier I wrote about a 1991 Microsoft memo
[...]this was back when printing was a difficulty[...]
The many GBrowser rumours just got a new shot of fuel, with the announcement that Mozilla Firefox lead developer Ben Goodger now works for Google.
According to item 4 of this Google seminar writeup, nobody uses the "I'm feeling lucky" button. Must be because they don't know how. "I'm feeling lucky" is absolutely essential for navigating websites with crummy site navigation and/or search facilities.
What you do is create an "I'm feeling lucky" search shortcut (in Mozilla Firefox obviously - you have switched, haven't you?) using the site:crummysearch.org search parameter (or alternatively the inurl parameter) to restrict to the poor site in question. Since PageRank works so much better than most website ranking algorithms this actually works much better than the search page on the website itself. You usually get there first go. Good cases where it works are (at least) Wikipedia, IMDB and allmusic.com.
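Concretely, the shortcut boils down to a bookmark like this - the btnI parameter is what I believe the "I'm feeling lucky" button submits, so treat the exact URL format as an assumption:

```
http://www.google.com/search?btnI=1&q=site:crummysearch.org+%s
```

Save that as a Firefox bookmark with a keyword, say `crummy`, and typing `crummy annual report` in the address bar drops you directly on the top hit for that site.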
Your average website (including this one) has surprisingly crummy search facilities. It's one thing that they're slow - but the second is that people have typically done a premature optimization and indexed the content in the database instead of indexing the shown pages. For this weblog, for instance, that means that page comments don't get indexed along with their posts. This is a very common phenomenon. It is particularly annoying when people misunderstand the web and do a full robot block via robots.txt (e.g. all Danish newspapers do this). Their own search facilities are no match for Google's - which means their sites have about 1/10th the usability they could have if they were Google indexed.
Meanwhile, we're waiting for "Google Tags" - useful stored public searches, available at URLs like this: http://tags.google.com/myspecifictag - Maybe I should just make a service like that here on classy.dk. Obviously, there are resources like the Google Hacking Database, but they don't have live queries...
Ha! The irony is thick here at classy.dk today. It was only a few hours ago that I wrote that I had yet to see a malicious Movable Type plugin, so that the lack of a security model in software like MT was not yet a problem. And now that I'm catching up on my K5 I learn that a free weblogging service was actually used maliciously just recently. It's not quite an attack via server side software, being a case of bad HTML filtering in comments instead, but it strikes awfully close.
If you develop anything for the web, or even if you're just a user at the geeky end of the user scale, you should read Adam Rifkin's aggregated take on weblications. Links from that post will keep you busy for a long time. Of particular interest is The Web Way, because it links to so many other cool places. Of more particular interest, this presentation on 'the lowercase semantic web', which is a new moniker to describe all those metadata enhancements that are gaining popularity because of the popularity of blogging. Blogging has created a market for smarter clients (sometimes just neat plugins for Firefox or similar) able to extract useful data from DHTML, meta tags and link rel attributes, and that in turn breeds these kinds of new micro standards.
You probably want to read this before enjoying the etech slide show.
Actually, what all this metadata tells me is that all of the new competing "closed source but free" desktop search engines coming out from all the "We're the platform"-contenders are failures in the making. There's so much metadata in the web you browse everyday and none of the desktop tools are ready to aggregate that metadata for you in a useful way. Nor will they ever be.
The possibilities in from-the-ground-up popular adoption of these new embedded metadata standards means that we need search that is also from the ground up, open and with a plugin architecture. There may be room for the homegrown information assistant, that I put on hold when I installed desktop google, yet.
Obviously the security fine print we need for plugins that work on the pages we browse is a little more involved than the security model we need on the data we're publishing on the web anyway. It may just be so involved that it's unfixable. But obviously, right now I am already trusting a vendor. I have yet to hear of MT plugins made by evil wrongdoers that trash your webserver instead of doing something useful, by the way.
The buzz over browser based applications, called weblications by some, is growing. The responsiveness of relatively feature rich applications like GMail is inspiring. The magic that goes on is really quite straightforward in principle and the dictionary lookup made by this guy is a nice example of the simplicity.
Source is included.
My reservations about weblications are the same as with all previous "rich internet app" frameworks like Flash and similar technologies: they invalidate the meaning of the URL space. I like REST. But even from that perspective, the structured approach to weblications inherent in using XmlHTTPRequest with standard dynamic HTML is a step up, since it is at least possible to reverse engineer the wire format used by XmlHTTPRequest, like people have done for GMail. The killer combination of Perl with LWP::UserAgent or WWW::Mechanize is not beaten yet.
Nice (and old) article on code as writing or as the title puts it, The Poetry of Programming.
From the always interesting Lambda the Ultimate programming languages weblog, an entry on RDF and databases - including some basic notes on possible basic applications of RDF.
Amen to this post on intertwingly. Sam Ruby notes that while the SOAP based web services specifications allow for all kinds of things, they don't allow for not doing all these things but keeping it simple instead. They really should. The desire to build something that lasts is a good one, but simple things have to always stay simple.
The standards stack around email is a good example both of why one tries to build something extensible (if you use every tweak of the standard you're sending around some very contorted messages with every possible type of encoding and character escape you can think of) and an example of how to build something that's extensible, but where the extensions don't hurt the simple implementations. If you don't use all kinds of fancy features of email the standards are really, really simple.
One of the few unsatisfactory features of shopping on Amazon.com is what happens in the period after you make your order and before the order ships.
I often find myself buying books that "usually ships in 8-9 days" as it says on the website. Most of the time this involves buying from amazon.co.uk books that were really published in the US market (i.e. they have a same day delivery status on Amazon.com), so they have to be moved to England first, which takes time.
Once you've submitted such an order you rarely get any information on the progress of those 8-9 days until the day the books ship.
There's an "expected shipping date", but right up until the actual shipping date that often seems to just be the current date + the known average delivery time (so for today it's October 27th for books shipping in 8-9 days)
What would be nice is some more structure to this information, so you could tell what kind of progress had been made in the flow that usually takes 8-9 days, and progress for each book in the order preferably.
This is especially interesting, since in my experience the variance on this 8-9 number is quite high - much higher than the variance in delivery time once the book is shipped and the postal service is in charge (even if that takes 3 days).
The variance in itself would be an interesting number to know btw. I wonder if the "8-9" is supposed to account for that or simply to represent an average of 8.5.
If you want to take my advice, and combine Google Desktop and Slogger then there are some good features and some bad features to consider.
First the good: Google uses file modified timestamps to order results. That works well with Slogger, since the time you browsed by is the time the Slogger cache is modified. You still don't get the original URL of the site. You can modify Slogger's behaviour to include the URL in the filename, but that gives funky filenames that may not be legal on your system.
Now the bad: Slogger caches everything and Google caches everything, so there's a bad interaction with the Slogger cache and the Google cache taking up tons of space. Also, if you just use the defaults, each lookup via Google in your slogger cache gets logged by Slogger! So each time you click the search history button, the number of hits grows by one. You need to disable the Google desktop search specifically (and probably google.com as well) from Slogger caching. Simply add 127.0.0.1 and www.google.com to your Slogger block filter (part of the Slogger settings accessible from the extensions menu in Firefox).
Undecided: Google Desktop assumes an extension of .html means that the file in question belongs to your "web history". That means that the Slogger files get listed as web history, but it also means that all kinds of Javadoc do too, I would guess. I don't know if that's annoying or not.
Or so it would seem. I absolutely hate new windows. I use tabbed browsing aggressively. When Firefox 0.9 came out, the Mozillans had changed the standard behaviour from allowing me to control what tab new URLs were opened in, to automatically opening a new window. Fortunately the Single Window plugin fixed that (almost). Now they've changed the behaviour again and broken the Single Window plugin. I really, really hate that - to the point of switching browsers.
Certainly the most entertaining one - it has cartoon foxes - is Why's (Poignant) Guide to Ruby. If you don't believe me check the sidebar on this page. It is a rather involved story on copyright, that ends up suggesting that we (the readers) could do a lot worse than copying the book and redistributing it. Why goes on to give an example of what we might do:
IDEA ONE: BIG TOBACCO
Buy a cigarette company. Use my cartoon foxes to fuel an aggressive ad campaign. Here’s a billboard for starters:
Make it obvious that you’re targeting children and the asthmatic. Then, once you’ve got everyone going, have the truth people do an expose on me and my farm of inky foxes.
Sensible Hipster Standing on Curb in Urban Wilderness: He calls himself the lucky stiff.
(Pulls aside curtain to reveal grey corpse on a gurney.)
Hipster: Some stiffs ain’t so lucky.
(Erratic zoom in. Superimposed cartoon foxes for subliminal Willy Wonka mind trip.)
Yo. Why you gotta dis Big Smokies like dat, Holmes?
(Why's main weblogging gig is good fun too (it has the motto "Hex-editing reality to give us infinite grenades!!"). Bookmarked.)
The least surprising recent development I can think of, is Google Desktop search. Nice idea, but this FAQ answer on Mozilla Firefox is a joke.
It says "we may choose to support Firefox later" where it should say "We'll have Firefox support in a few weeks". We're supposed to be thinking that Google are the good guys, and here they are as "Microsoft only" as you can get.
It's even easy to do. Using the Slogger plugin for Firefox I've been archiving my Firefox history for a couple of months now. I run the Swish-E indexer on that from time to time, and then a perl CGI script gives me in-browser indexed search history. If the indexer API of Google Desktop is hackable, then there you are - simply replace Swish-E with Google Desktop and you're done.
[Obviously just letting Desktop Google index Slogger's cache will get you halfway there - but you don't get the browsing history metadata]
[UPDATE: Continue here for some Google specific Slogger tips]
No it's not the software (or the computer) that's micro: It's the company and the company idea: Eric Sink builds a miniscule company as an MSDN feature.
Along the way there are a lot of notions along the same lines as gapingvoid's How to be creative - including (I guess that means it's important) "Don't quit your day job", a version of "Put the hours in" on persistence, and in fact also a version of all the discussions on making sure the creative stuff is fun, not duty. Sink simply calls it "I think this is a great way to fail", but it's almost the same thing: Don't make your creativity a death march project.
It turns out there's a buffer overflow in MS Word also, so Word documents are now to be considered possible carriers of vira. So it's not just script vira anymore: opening a Word document with scripting turned off is still unsafe.
This has got nothing to do with Outlook or IE security. No saving to disk first will protect you. It's simply Word being unsafe.
This must be the best news the Open Office people have had in years!
OK, they're not really consistent - but at least their BASIC error codes are.
In an ironic comment to Joel's comments on .NET - this 30 year backward compatibility of error codes is also gone in .NET...
Tim O'Reilly's Web 2.0 conference has brought a new entry in the race to be the most important instant, hosted, gamechanging revolution. From the description I like Jot as at least a nice try.
It's a wiki built out to basically be a write/read CMS. Templates allow for structured data, so that websites grow a data model in a natural "no code" way.
In addition it integrates with email, so that you can move all the 1-1 private conversations into open forums without a great hassle.
The "no code" moniker is unfortunately often another name for "no design" and I have doubts about how much value the data that ends up in this wiki in forms other than text really has.
I guess a test is in order to get an idea of that.
(as reported by Jon Udell. He also has a live recording as a flash movie at the end of that link.)
(Oh, and this by the way appears to be the stealth mode upstart Excite.com co-founder Joe Kraus is talking about on his quite good weblog Bnoopy)
Is the real attraction of GMail perhaps that it looks a lot like the bridge on a Star Trek Federation starship?
No nonsense. Data rich. Lots of rounded corners. And with an always-on super presence running behind the screens.
In fact it looks as if the interface was stolen from Joe Reiss' Spoiler-free Opinion Summary of TV shows based on the Star Trek universe.
Microsoft will no longer security update Internet Explorer for anyone who does not upgrade to Win XP service pack 2. I can't imagine they won't have to change that policy. With that move they're abandoning maintenance of what is probably the single most used application on most systems (the one they claimed in court was an integral part of the operating system) on 50% of their installed base.
It seems what they're really saying is
Don't use Internet Explorer
Demand of your e-bank and all the other web applications you use that they work properly in standards compliant browsers.
If IE is an integral part of Windows, as Microsoft claims, then they just said "we're no longer supporting Windows 2000". Microsoft's advice could also be seen as another statement: "Don't use Windows".
I think on the other hand that this article is right. It's not malice - just incompetence. Or, incompetence is probably too harsh - I know I wouldn't want my software to be given the scrutiny of 10000 security experts and evil hackers. So let's say "inability to retroactively deal with the lack of designed-in security"
I think this also fits into Joel's continuing story of a change in Microsoft's way of thinking from a customer focus to more of a tech focus.
I would have loved to hear about The building of Basecamp (not the Copenhagen restaurant space but the project management web application). Meanwhile, basecamp - the web app - does have a Copenhagen angle, since the lead developer lives here. Let's check his score: Still mid-twenties, check. Lead architect for web development platform in hot scripting language, check. Maker of world known web application, check. Works on a first name basis with the best web design companies, check.
I'm entirely with Jon Udell on the issue of old tech not getting replaced. I think the urge to reinvent instead of repurposing is sad. So much could be done by simply extending the way we use the workhorse protocols of the internet: The email protocol suite, http and various XML transports over http.
What we need, and don't have, is an email client that is as conducive to innovation as the browser is - but we don't have that and it's an open question whether or not the good hackers could beat the bad hackers in a war to control such a client.
I have to take issue with Joel's statement that,
I'll go out on a limb and say that there is not a single content-based website online that would gain even one dollar in revenue by improving usability, because content-based websites (by which I mean, websites that are not also applications) are already so damn usable.
The addition of new protocols to your system and binding of new apps to these protocols is so cool. It makes things like Feed Your Reader really simple to do - which is great.
Note to English speakers: FYR is a good name, being also one of the truly great words in the Danish language with a good number of unrelated and loosely related meanings:
That sloppy piece of code you wrote today could well be used by someone in 50 years, so maybe you should think twice about the sloppiness.
That's my take on Tim Bray's observation that each year over 5 Billion lines of COBOL gets written and added to the 200 Billion lines of code in production.
Reimplementation is a very costly affair - and it's not just the cost. It's risky too.
We've been told for ages that COBOL was dead. Few schools teach it. Nobody learns about it until they get hired into a team that still uses it. In 1999 we were told about the armies of retired COBOL-slingers that got hired back for one last shoot-out at the Y2K Corral because there weren't any active developers to take on the job.
But if the figure of 5 Billion lines is right, then we might try some math. Let's assume an average developer can produce 100 lines of code per day and works 250 days a year (COBOL developers just don't DO holidays - they're too mission critical). Then we arrive at a figure of 200,000 active COBOL developers. Since I think we were flattering the COBOL developers with both the 100 and the 250, it could easily be a million instead.
That's maybe not as many as there are Java developers, but it's not exactly a population size that one would consider threatened by extinction.
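The back-of-the-envelope arithmetic, spelled out:

```python
# 5 billion lines of COBOL written per year, divided by a (flattering)
# productivity assumption of 100 lines/day over 250 working days.
lines_per_year = 5_000_000_000
lines_per_dev_per_day = 100
work_days_per_year = 250

active_devs = lines_per_year // (lines_per_dev_per_day * work_days_per_year)
print(active_devs)  # 200000
```

Halve either assumption and the estimate doubles - which is how you get to a million.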
Found this wiki clone homepage and read a while down until I got to the following diary entry:
Rollback mystery solved: The search engines did it!
6. June, 2004
[UPDATE and sidenote: The correct answer to this problem is not the one chosen. Just use HTTP in proper RESTian fashion, by not doing mutations with GET but only with POST. Search engines use HTTP properly and don't follow POST links]
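The RESTian fix sketched as markup - the action URL and field names here are made up for illustration:

```html
<!-- A destructive operation behind POST: crawlers follow plain (GET)
     links but never submit forms, so the rollback is safe from them. -->
<form method="post" action="/wiki/rollback">
  <input type="hidden" name="page" value="FrontPage">
  <input type="submit" value="Roll back this page">
</form>
```

A plain `<a href="...?action=rollback">` link, by contrast, is an open invitation to every spider that comes by.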
Is Google's legendary server park overstretched? My GMail has gone very unresponsive lately - and my Google news alerts have been missing from Sep 16 until today.
Kottke speculates on the possibility of Google building a more deeply google-integrated Mozilla.
Amazon has A9, Microsoft is building up MSN as the brand for this kind of server integrated utilities, Yahoo is building who knows what.
Lots of guesses going around.
[UPDATE: A new Google UserAgent is currently crawling the web]
This password hack is just brilliant. Keep one secret password. Never use it in public, but use an MD5 hashed version of the master and the current site instead. With a little help from software it's as easy as using the password directly and you're not grossly vulnerable to attacks on website17 that you registered at 3 years ago.
A translucent response to the "Registration required" password hell around us.
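The scheme fits in a few lines of Python. Note that the exact way master and site are combined, and the truncation to 12 characters, are my assumptions for illustration - the actual tool's format may differ:

```python
import hashlib

def site_password(master: str, site: str) -> str:
    # One memorized master secret, hashed together with the site name,
    # yields a deterministic per-site password. A breach at one site
    # reveals nothing directly reusable at another.
    digest = hashlib.md5((master + ":" + site).encode("utf-8")).hexdigest()
    return digest[:12]

print(site_password("s3cret", "website17.example"))
print(site_password("s3cret", "bank.example"))
```

(Today one would reach for a deliberately slow hash rather than plain MD5, but the translucency idea is the same.)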
A couple of days ago I wrote that I thought the msnbot was the dumbest bot in town, since it was the only bot fooled by my ilizer service. But I was wrong. The msnbot is no dumber than the rest of the robots. Have a look at this google search. It is the world as seen through the eyes of the Bobby accessibility checker, and the googlebot really went for this one. I have no idea why the Bobby checker rewrites the URLs in hyperlinks so that they are also filtered through Bobby, though - I don't really see the use (comic or otherwise).
Next question: What would be a useful heuristic to identify bots like this? I doubt there really is one. Most likely a filter would just be a long list of known cases, and probably there are just too many filters around to make that worthwhile. Presumably most serious filters implement the robot exclusion standard to save bandwidth and clock cycles.
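On the publishing side, that exclusion is a one-file affair - a sketch, with the path purely illustrative:

```
# robots.txt at the root of the transformation service's host
User-agent: *
Disallow: /ilizer/
```

Well-behaved crawlers fetch robots.txt first and never descend into the disallowed prefix, so the transformed mirror of the web stays out of their indexes.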
According to a Netcraft news story, RSS traffic is causing traffic spikes every hour on the hour because newsreaders have hourly feed refresh built in and everybody is just doing it at the top of the hour.
The solution to the problem is really, really simple: Randomize the timing of the update to an odd minute count.
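A sketch of what a reader could do (the names are my own):

```python
import random

# Choose a per-client offset once, e.g. at first run, then refresh at
# that minute past every hour instead of everyone hitting :00 at once.
REFRESH_OFFSET_MINUTES = random.randint(0, 59)

def seconds_until_next_refresh(minute_now: int, second_now: int) -> int:
    # Seconds from the given clock position to the next refresh slot.
    delta = (REFRESH_OFFSET_MINUTES - minute_now) * 60 - second_now
    return delta % 3600

print(seconds_until_next_refresh(0, 0))
```

With offsets spread uniformly over the hour, the aggregate load on feed servers flattens out instead of spiking at the top of the hour.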
That completely reverses my opinion of the redesign. We probably wouldn't have the good links without it, meaning that the redesign was a good thing since it made Adrian Holovaty write the plugin.
I am switching to the Sage feed reading plugin for Mozilla Firefox from my previous feedreader Syndirella. The reason to switch was that Sage does even better what Syndirella also tried to do: Integrate feed reading with web browsing.
It's even better for the following reasons:
He said it on the Gillmor Gang, in a very listenable way. And he has written it down as well. Jonathan Schwartz has a hardware maker's approach to open source. Free software is good since it drives the adoption of open standards (free software has no incentive to not follow standards, as closed source does) and since standardization enables even more widespread adoption of technology, meaning a need for more hardware - and that's good business for Sun.
The software maker's part of the equation is that when platform costs dwindle you can spend more time doing the business specific stuff for client X, meaning again more tech adoption, meaning more business - if you have a service approach to software.
There's another reason it's good: When all the stuff we're used to becomes a commodity, software makers will finally have to go elsewhere and innovate instead of just living off the fat that is The Standard Office Desktop.
From the Pizza Party man page:
pizza_party [-o|--onions] [-g|--green-peppers] [-m|--mushrooms] [-v|--olives] [-t|--tomatoes] [-h|--pineapple] [-x|--extra-cheese] [-d|--cheddar-cheese] [-p|--pepperoni] [-s|--sausage] [-w|--ham] [-b|--bacon] [-e|--ground-beef] [-c|--grilled-chicken] [-z|--anchovies] [-u|--extra-sauce] [-U|--user= username] [-P|--password= pasword] [-I|--input-file= input-file] [-V|--verbose] [-Q|--quiet] [-F|--force] [QUANTITY] [SIZE] [CRUST]
The pizza_party program provides a text only command line interface for ordering DOMINOS pizza from the terminal. This program is intended to aid in the throwing of PIZZA PARTIES which are also sometimes known as ZA PARTIES
pizza_party -pmx 2 medium regular
Orders 2 medium regular crust pizzas with pepperoni, mushrooms, and extra-cheese.
Your HTML comments are propagated to the browser client.
Case in point, esselte.com:
<!--DONT RELEASE THIS TO LIVE WITHOUT CHANGING OVER THE HACK BELOW!!!!! STAGE.ESSELTE.COM >>> WWW.ESSELTE.COM - RAE -->
or: How a search engine in beta transformed the internet as we know it!
[UPDATE: The other bots are equally stupid]
In June a rather stupid service here on classy.dk was a surprise hit, and I have Microsoft to thank for the experience.
Some time ago I made a web page transformation engine that converted the text on pages ti lingiige liki this - i.e. replacing all vowels with the vowel i instead. This is inspired by a Danish children's song where you repeat the same verse once for each vowel, using first only a's, then only e's, and so on. As a nice (but fatal) touch the service also rewrites hyperlinks so they are also redirected through the service, si yii cin livi iiir intiri lifi briwsing inli thi wirld widi wib.
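The core of the transformation is tiny - a sketch covering just a, e, i, o, u and y; the real service also had to rewrite every hyperlink to keep you inside the transformed web:

```python
import re

def ilize(text: str) -> str:
    # Every vowel becomes i (or I when capitalized).
    return re.sub(r"[AEIOUY]", "I", re.sub(r"[aeiouy]", "i", text))

print(ilize("browse the world wide web"))  # briwsi thi wirld widi wib
```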
It is still going on. As of this writing msnbot has crawled some 65,000 URLs transformed through my service. And boy, has it gone far! The Wayback Machine, MIT, even competitor Giigli got a visit.
Naturally I had to check if Thi Intirnit had made it into MSN search's sandbox index. It had. A lot. And then some.
Google/MS battle round X: Microsoft buys Lookout, a "search your desktop" application that integrates with Outlook and has - to emphasize the Google fight - quirky, bouncing, colored double O's in the company logo.
This seems to try to be (half of) the required personal search space - searching files and email fast. I wonder what ranking system they have in place though. As mentioned, search is not enough.
[UPDATE: MS decided to keep making Lookout available: Here it is. That makes Microsoft an open source distributor - well, sort of. Lucene.Net is (as mentioned in comments) on an apache style license so you can legally embed it in other apps.]
As previously mentioned, our personal information space is a shambles compared to the published information space of the web. A good reason for this is the thousands and thousands of people working to augment the public information space with searches, meta-searches, meta*-searches etc etc etc. Another good reason is that you're all alone in metadata linking your personal data, whereas you have the help of millions in making sense of the public space. That means quite simply that the search engine companies have a lot more to go on when it comes to indexing public space than they do when indexing personal space.
Everybody wants that to change and everybody is waiting for the personal information killer app. Maybe MS Longhorn will be it, but personally I have to say I doubt that very much, since I think the latter problem is much larger than the former. The metadata quality is low.
By combining a local install of Apache, the Slogger firefox extension, the Swish-E indexer and a little homespun perl, I've been running a "Search my browsing history" on my desktop for about a month, and I'm already drowning in data. Difficulty in ranking and the poor quality of metadata (or just the difficulty of using the metadata there is) rapidly degrade the value of the index.
To be fair, I spent very little time on this version 0 of the utility, and with only a few enhancements I could solve the problem so the index would work properly for much more browsing than it does currently - but there's no way it would nicely handle e.g. my >1GB email collection without a major upgrade.
Probably the key enhancer would be linking all metadata situationally by keeping an accurate record of time with all recorded information (as also suggested by Jon Udell in references above), but my experience suggests that you will have much more limited situational recall than you expect. What you'll need is a situational equivalent of PageRank - some kind of indicator that a piece of information is actually among the <1% of the stuff you have read that stuck in your mind.
The open source .NET project Mono has gone 1.0, which presumably means that Mono now has full copies of all the major APIs included in the first .NET release, or equivalents. I wonder if the Novell acquisition of Ximian sped things along in a dramatic fashion or not.
It looks (from choice of screenshot samples) as if the GUI libraries for mono aren't really cross platform. I hope that's just me not paying attention.
Jon Udell thinks about a Google OS - what automated metadata generation and filtering can do for our data drowning desktops. His thinking is interesting, and relates to the famous "metadata is crap" slogan. We need to accumulate metadata as a transparent, tacit activity, not a chore. It's unclear if Longhorn and WinFS is Microsoft getting this message or missing this message.
Here's an example of the kind of thinking required to do software well as opposed to just doing it. As it turns out this particular example is also becoming quite fashionable as the XML backlash (aka the "XML as programming language" backlash) continues and terms like domain specific languages and little languages get thrown around more often.
The problem at hand is that of word stemming, and the solution to the problem is the Snowball language. Stemming is the act of truncating search words to a root (e.g. "words" -> "word"), which is useful in search queries. More than 20 years ago Martin Porter created the common standard algorithm for English language stemming, now known simply as The Porter Stemmer. Over the following years a number of implementations appeared, and most of them were in fact faulty. People simply weren't capable of implementing the stemming algorithm correctly. To solve this problem once and for all, Porter designed a little language specifically suited to the definition of stemming algorithms. Along with the language he wrote a Snowball to C compiler, so that the Snowball stemmers would be useful in common programming environments. This story is told in Porter's account of the creation of Snowball.
After the appearance of Snowball, stemmers have been submitted to the project for 11 additional languages. The brevity of the Snowball stemming algorithms is a testament to the usefulness of this particular little language, and the page describing the Snowball implementation of the Porter stemmer from the original algorithm is good evidence as well.
So what has this got to do with how software should be done as opposed to how it is done? Simply this: even relatively small, self-contained problems like word stemming take an enormous effort to do correctly. And note: by "correctly" I don't even mean "perfectly", since that is certainly not true of algorithmic word stemmers; I just mean "as intended by design". Only a very limited part of all software is written with that level of attention to detail, or that amount of upfront design, to guarantee a decent chance of success.
It also demonstrates quite exactly the promise of dynamic extensible languages: Good extensible languages afford the construction of little languages for specific tasks within their own programming environment, and little languages afford a clarity of implementation you can't get without domain specific languages.
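The little-language point can be shown in miniature: express the stemming logic as a declarative rule table rather than as code. To be clear, this is NOT the Porter algorithm or Snowball - just a toy sketch of the style that Snowball makes first-class (the three rules happen to resemble Porter's step 1a, but the real algorithm has conditions this toy omits).

```python
# A toy "little language" for stemming: the rules are data, not code.
# NOT the real Porter algorithm - just an illustration of the style.
RULES = [
    ("sses", "ss"),   # caresses -> caress
    ("ies",  "i"),    # ponies   -> poni
    ("s",    ""),     # cats     -> cat
]

def stem(word):
    # Apply the first matching suffix rule, like a Snowball rule cascade.
    for suffix, replacement in RULES:
        if word.endswith(suffix):
            return word[: len(word) - len(suffix)] + replacement
    return word

print(stem("caresses"))  # → caress
print(stem("ponies"))    # → poni
print(stem("cats"))      # → cat
```

The point is that adding a language's worth of stemming rules means editing the table, not the engine - which is exactly why Snowball stemmers are so short.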
The talk on open source that Tim O'Reilly has been giving all over the place for the last year or so - including at last year's Reboot event - has now turned into an essay.
The talk was excellent so I'm certainly looking forward to the essay.
I fully support the interpretation of the current perl 6 plans that perl is moving from an engineering art (i.e. a kind of physics) to weird unicode dependent alchemy (i.e. a kind of chemistry). Hence the periodic table of perl 6 operators.
Joelonsoftware takes issue with the .NET strategy. He's annoyed about the lack of an upgrade path (in his case from VB) and he basically thinks Microsoft is squandering their platform advantage by moving to new things all the time. There's no lock-in in new things.
I don't really buy it. It's not like the Windows API is going away, it's just that you can consider it frozen. If and when the new way (.NET) proves as productive as hoped that will finance the upgrade in and of itself. As far as I'm concerned the new stuff in .NET is mainly about increasing developer productivity and very little about enhancing user experience.
Obviously, if MS fails to deliver the gains in productivity they will have made a terrible strategic blunder.
Among the interesting stuff in this worthwhile essay is an assertion (borne out by the leaked Windows source code) that it is simply not true that Microsoft has deliberately broken third party products to make room for their own. On the contrary, a lot of effort has gone into keeping features around to ensure smooth upgrades.
[UPDATE: Loosely Coupled chimes in and agrees. Phil Wainewright summarizes his position in the title of a post: Avalon: Microsoft's microchannel, i.e. an attempt to redefine the PC industry that will fail (like IBM's microchannel architecture) because everyone is locked in to today's Windows. In an earlier post he has coupled this with the notion that XML is to Microsoft as the PC was to IBM, meaning that the embrace of open standards of information interchange automatically opens up the platform. It will either stay open and compatible or become irrelevant. To carry the analogies one step too far: if Microsoft is IBM in this story, then the network is Microsoft. Staying open and network compatible is more important (and cost efficient) than staying Microsoft compatible today - just as staying Microsoft compatible was better back when IBM introduced the PS/2-OS/2 combo]
The power of both arguments depends on whether or not Microsoft does in fact hold monopoly power. It will take that kind of power to keep customers locked in to Windows (and not have them switch to Linux, or at least OpenOffice).
Firefox 0.9 has changed the default behaviour when clicking URLs from reusing the focused tab to opening a new window. I've been playing around with config parameters to stop it - but nothing works (in particular, the *opentabfor* parameters don't do it).
Do you know how to fix this? Tell me please before I have to downgrade to 0.8.
[UPDATE: someone did know - thanks Claus!]
The title of this post is literally snakeman (a contortionist) in Danish, and that was the name of my personal version of snake, the computer game classic every kid hacked his own version of back when I had my Amstrad computer.
I didn't have many commercial games for my computer, so I mainly played games I wrote myself. Really simple ones as you could guess. This one I was particularly happy with. I wrote it in one hour, and I used pointers into a position array to move the head and tail of the snake, so that the game didn't slow down as the snake grew in the same way it did on the versions I had seen at the computer programming night school.
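The position-array trick described above - moving only the head and tail pointers so the game never slows down as the snake grows - is essentially a ring buffer. A modern sketch of the same idea, using Python's deque rather than the original Locomotive BASIC:

```python
# The O(1) snake-movement trick: only the head and tail are touched per
# move, so speed is independent of snake length.
from collections import deque

snake = deque([(5, 5), (5, 6), (5, 7)])  # tail .. head

def move(snake, direction, grew=False):
    """Advance the head one cell; drop the tail unless the snake just ate."""
    dx, dy = direction
    head_x, head_y = snake[-1]
    snake.append((head_x + dx, head_y + dy))  # new head
    if not grew:
        snake.popleft()                        # drop old tail

move(snake, (1, 0))             # move right
print(list(snake))              # → [(5, 6), (5, 7), (6, 7)]
move(snake, (0, 1), grew=True)  # eat a bonus square: keep the tail
print(len(snake))               # → 4
```

The naive versions from programming night school re-copied the whole body array every frame, which is why they slowed down as the snake grew.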
My brother and I played this game all the time. We had a deal: He had a lot of records, and he could only play my computer if I could listen to his records, so we spent many evenings reading, playing and listening to David Bowie.
I recently recreated the exact gameplay of the Amstrad version as a browser based game (works in IE and Mozilla/Netscape), with a Mondrian/De Stijl visual theme.
Use the arrow keys to move, pick up the green bonus squares. Don't hit the wall, yourself or the blue tail you drop in your wake every once in a while.
The distinguishing features of this version relative to other versions
Google has a new version of Google Groups in beta. It is still pretty beta-like: lots of test groups and some kinks to work out (among them, Danish characters are generally handled incorrectly), but there is one new feature that is truly great: all groups emit Atom feeds of recent changes. So now we have newsreader feeds for just about everything, and certainly for all of Usenet. If you look as an example at the feed overview for comp.lang.perl.misc, you'll notice a good effort to maintain the "don't be evil" value. Here it's the slogan "You don't have to come to us". They even recommend common newsreaders that are able to consume the Atom feeds. If you dig around you'll also find recommendations for what to do if you would rather have RSS, and again it's not a Google product plug but a referral to a feed conversion service.
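Consuming such a feed takes nothing but a standard-library XML parser. The feed snippet below is made up for illustration (real Google Groups feeds of the era were Atom 0.3 with a different namespace and many more fields):

```python
# Sketch: pull entry titles out of an Atom feed with the standard library.
import xml.etree.ElementTree as ET

ATOM = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>comp.lang.perl.misc</title>
  <entry><title>Stemming in perl?</title></entry>
  <entry><title>LWP question</title></entry>
</feed>"""

NS = {"atom": "http://www.w3.org/2005/Atom"}
root = ET.fromstring(ATOM)
titles = [e.findtext("atom:title", namespaces=NS)
          for e in root.findall("atom:entry", NS)]
print(titles)  # → ['Stemming in perl?', 'LWP question']
```

That is the whole "newsreader" core; everything else is presentation.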
The NY Times international weather page thinks it makes sense to
Once again the hacked treatment of downloaded resources (instead of just relying on the MIME type) carries with it a virus threat, as reported here.
Switch to Mozilla's Firefox. You'll like it.
Tim Bray finds an Irish web services standard and likes it. His description does sound good, rather RESTful and all, but Sam Ruby has good objections.
The points Sam Ruby makes illustrate with great clarity why much more care than you think must always be taken in interface design. The natural tendency, even for good engineers, is to start fresh, and then solve all the problems from your own ground up. As Ruby points out (and REST embodies), you can often use preexisting standards to great effect, if you use them carefully. You don't always need Yet Another Interface Layer. It also illustrates that other important aspect of good interfaces: clarity and precision are not enough. Naturalness wins them over. So when an XML based protocol suddenly disallows UTF-8, and decides HTTP GET is not fancy enough for professional use, then that's a serious design flaw, because clients have every right to ask why - and then, because they don't bother to ask, they just choose the one where they don't have to ask.
[UPDATE] Sean McGrath points out in a comment below that the specs Ruby commented on are outdated. On McGrath's blog we find this tasty view of the future (unrelated) as well as this Haskell hacker joke, which seems to have the hardest learning curve of any joke I have ever come across. Not usually considered a good quality in jokes, but then I also like JAPHs. Blog bookmarked.
Microsoft has released - at no cost - the Visual C++ Toolkit 2003, which includes the command line version of MS Visual C++, so at last the platform compiler for Windows is freely available. Let a thousand open source IDEs sprout.
I guess this was a logical consequence of the fact that the .NET SDK was free anyway and MS is going the CLR way. The release of this toolkit is great news for projects such as perl. At last everyone can build perl properly, which means that at last CPAN is a real opportunity for the common Windows user. IMHO, considering how well many perl modules build just from CPAN distributions, this makes ActiveState perl a lot less appealing, even if PPMs are still easier to use when available.
Open source projects on Windows just got a lot simpler to distribute. The next step up would be including the compiler with the Windows distribution, like all Unixes do.
[UPDATE] Bo points out that you might also want the debugger download to go with the compiler. That too is free as in beer.
ech`echo xiun|tr nu oc|sed 'sx\([sx]\)\([xoi]\)xo un\2\1 is xg'`ol
I failed to see the announcement of Apocalypse 12 on OO Perl 6. While the simplifications of class design look to be exactly on target, there's plenty to stoke your perl blasting furnace if you dislike perl line noise:
The parser will make use of whitespace at this point to decide some things. For instance
$obj.method + 1
is obviously a method with no arguments, while
$obj.method +1
is obviously a method with an argument.
It gets worse. Some horrible decisions seem to have been made along the way, among them the decision to use operators not currently easily available on my keyboard, namely "french quotes" « (decimal 171 in Latin-1) and » (decimal 187). What is this? APL? I don't know why I hadn't noticed this, but it stinks - in fact so much so that if this ends up in the language I will probably never use it. It's hard enough to understand languages. Why it should be hard even to type them is beyond me.
Why a conversation like this 77 message thread on systems compatibility wasn't a warning sign to just leave the bloody things out I don't know. Perhaps someone following the process can enlighten me. Is this some joke I'm missing, or just somebody actively trying to bury perl in favour of python?
The fun is heightened by a close review of the Google Groups thread. Larry sends a message with the text "Can you see these -> <- chars" - with some whitespace in between indicating to the viewer that this failed. Then somebody quotes Larry, and in their reply the missing characters show up! Come on, man. Give us a rest. I don't want to fight my bloody keyboard definition to use your programming language. If this stays I'm clearly gone, as will be so many others. They simply must be kidding. They must.
A very nice feature of my GMail account is that you can set the reply-to email address to be something other than GMail.
That solves the problem of data ownership since it is easy to maintain a conversation both on GMail and on my existing email system via the following system:
Home email forwards all mail to GMail and Office
Via reply-to I can respond via GMail and still have the email returned to my old email address
The forward to the office makes sure that the Windows "You've got mail" envelope works for my personal as well as my office email.
Obviously I would prefer email to work differently: ideally the email reply-to could indicate an entire route for the response to take, so that all the systems that archived the inbound mail also archived the response transparently and stayed consistent as a result - without me having to use more than one client for the different services.
Because you often find yourself doing stuff like this instead of actually solving problems. Sometimes software is living proof that turning the entire world into language is a noisy, painful and mostly meaningless experience.
On the other hand, sometimes you solve problems almost by accident, and then software writing does not stink at all.
Some new software services and applications quickly get such a following that they turn into a movement. Even more so when the applications are bold and try to change the way you do something that you do a lot. Just this week I had various experiences in that respect.
Observation 1: I uninstalled iTunes. It's a nice app and all, but in the end I just couldn't take the behavioural model of iTunes: the default setting of assuming complete control over all my music, organizing it so it makes sense for the software but not for me, and finally - and most obnoxiously - the default idea that music is wallpaper continuously running in the background - that what I'm after is a player that just keeps on rolling over my entire collection whenever you press play. I simply can't stand that. Whenever I click on some song on the disk, it won't stop at the end but just continues running over some other music elsewhere in my collection.
Observation 2: I got a GMail test account through a kind reader of classy.dk who had seen my personalized advertising rant. Initial impressions: the advertising is rather toned down, in fact more so than on the search pages. But the new way of organizing all email into threads by default is another one of those blue pill moments. A slightly nauseating loss of control sets in. I guess my ideal mail system is not really a mail system at all. Instead it is email stored as easily tooled text messages, and then actually having those easily made tools at hand. It would be the UNIX maildir if not for conveniences like recipient-defined metadata and indexing.
Bottom line is that it is impossible - without significant mail volume - to tell if the threaded dialogs really work for all email. I'm not sure I'm confident enough that they will to start using GMail as my main thing right away. That underscores the other well known problem of GMail: data ownership. I know I only use server based mail for junk at the moment, because I like to have my mail close. (Yes, I DO back it up.)
I'm not sure anyone other than one of the core Parrot hackers could pull this off, but Dan Sugalski has a piece on ONLamp about a real macho hacking expedition he took part in, in which a dated 4GL language was reimplemented in Perl targeting Parrot.
I can just imagine the meeting where the enthusiastic and confident Sugalski talked his boss into this. I am a firm believer in just trusting the hackers, especially in a situation like this where they chose to change as little as possible instead of reinventing everything. But Sugalski was really taking a bet on the parrot.
As recently commented on, Bruce Eckel is busy criticising the Java Generics proposal. In a recent post he takes a step back for a very good discussion on Static, Dynamic and Latent typing and how these various styles of typing serve to improve programmer productivity. Good stuff.
Along the way there's a reference to a short note by Martin Fowler on directing versus enabling approaches to development. These terms will soon become standard references in the software literature.
And through more fortuitous linkage this brings us to a very readable interview with Ward "Wiki" Cunningham.
[UPDATE II: Bruce Eckel adds criticism of the same type on the new Java Generics]
I would like to add some points to the famous Why I Am Not A Java Programmer. My number 1 objection to Java is that Java programming carries a basic philosophy of "nothing but Java", and all the Java projects I have ever had to use have carried that philosophy with them in the extended form "nothing but our stuff". Everybody is making "extensible frameworks", but they never use the other guy's frameworks. Case in point: the Eclipse IDE does not, on first install, provide something as simple as "File Open" in the File menu. If your resource was not created with Eclipse, it does not exist. I realize that of course you can open your files, but the fact that you have to think to do it is a serious alarm bell against any further use.
I simply cannot accept that I cannot casually use Java/Eclipse. Netbeans is better in this respect but only marginally so.
I like things that were created to work well "bottom up", an IDE starts as a source editor and then attaches functionality to that. It should never be the other way around.
[UPDATE: One more thing: for an environment where design patterns are second nature, there is a peculiar absence of facades. So many simple tasks can only be accomplished by composing a number of highly abstract components (if you know Java you know the examples, from basic file I/O via the XML APIs on to the various J2EE APIs). From an implementation standpoint that is the way to play, but the lack of facades makes everything look horribly complicated even for simple usage scenarios]
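The facade point can be shown language-neutrally. The class names below are made up to stand in for the Java-style "compose three abstract objects to read a file" dance; the facade is just one function hiding the common case:

```python
# Illustration of the missing-facade complaint. The three classes mimic
# the layered-abstraction style; read_text() is the facade Java APIs lack.
class ByteSource:
    def __init__(self, data): self.data = data

class Decoder:
    def __init__(self, source): self.source = source
    def decoded(self): return self.source.data.decode("utf-8")

class BufferedReader:
    def __init__(self, decoder): self.decoder = decoder
    def read_all(self): return self.decoder.decoded()

# Without a facade, every caller composes the stack by hand:
text = BufferedReader(Decoder(ByteSource(b"hello"))).read_all()

# With a facade, the composition is hidden behind the common case:
def read_text(data):
    return BufferedReader(Decoder(ByteSource(data))).read_all()

print(read_text(b"hello"))  # → hello
```

The layered design stays available for the unusual cases; the facade just stops the usual case from looking horribly complicated.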
I realize these complaints are not new - if they are being addressed by a new wave of Java APIs, do let me know.
Yes, I know there is Jython - but that is because of Python developers willing to adapt. Yes, I know there's BSF, but that's just another framework. Yes, I know there is support for hooking in native APIs, but come on, you know that you're guilty.
It's finally here - the Apache Foundation's CVS killer with the cool name. Actually I think that was an OK speed for moving from 0.0 to 1.0. It took 3.25 years from the very first milestone release.
For those of us following the languishing Perl 6 project that's pretty good going (although obviously and in true Perl 6 fashion Perl 6 is a much bigger undertaking)
[Update: Mikkel informs me that Subversion is not Apache Foundation proper - but it has Apache roots as far as history, people and codebase (the Apache Portable Runtime is the runtime) go]
Now you can pop rap your way to good user interfaces by rappin' along to the OK/Cancel HCI rap. Text sampler:
After that generate a lot of designs
run them by some users even just 2 at a time
iterate and iterate and soon you'll obliterate
any interfaces which are wack or inconsiderate
I couldn't agree more with this positive appraisal of code generation. Auto generated code is where the fun begins and the time savings explode. I know I have saved several man months of work through code generation.
However, code generation as an activity separate from the focus of "real" software development is an artifact of rigid old languages with a poor "meaning/code" ratio, i.e. languages that tend to contain too much code that does not really express design but needs to be around for the truly expressive code to run. The next step up is cutting out this boilerplate middleman entirely - which is why OCaml is on my reading list.
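A tiny sketch of the kind of generation meant above: emit the repetitive code from a declarative spec instead of writing it by hand. The field list and accessor shape are made-up examples, not from any real project:

```python
# Sketch of code generation: a declarative field list drives the emission
# of boilerplate accessor functions, which are then compiled with exec().
FIELDS = ["name", "email", "phone"]

TEMPLATE = """\
def get_{field}(record):
    return record["{field}"]
"""

generated = "\n".join(TEMPLATE.format(field=f) for f in FIELDS)
namespace = {}
exec(generated, namespace)   # compile the generated source into functions

record = {"name": "Claus", "email": "c@example.com", "phone": "555"}
print(namespace["get_name"](record))   # → Claus
print(namespace["get_email"](record))  # → c@example.com
```

In a language with a better meaning/code ratio, the boilerplate being generated here simply wouldn't exist - which is the "cutting out the middleman" point.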
The Pragmatic Programmers propose a rule that you should learn one new programming language each year to stay sharp. For this year my plans involve 3 languages: Parrot assembler, BPEL business process definitions and finally OCaml, which seems to be a very nice functional language with free (as in freedom) quality implementations available on all common platforms. Based on the ideas in Parrot and the uptake of Python and Ruby, I think it is safe to say that your standard programming language is undergoing fundamental change these days, adopting more and more of the proven but hitherto purely "academic" programming techniques. The interesting news is that the platforms (notably .NET) are keeping pace with this development as well.
There is of course a fork of interests, where platform vendors seem to favor tools and complexity and open sourcers seem to favor language invention and compactness. The languages I mention above are good examples. The BPEL spec is rough going, heavy on concepts and XML gunk, and it is almost impossible to think of an application made without heavy duty design software. The OCaml distribution on the other hand drops us back into the immediate console of yesteryear. Where Parrot is going it is too early to tell.
An OCaml tutorial may be found here.
Then, if you're using a Pentium 4 with hyperthreading, it might be a good idea to turn off hyperthreading at the BIOS level. I have had experiences with applications going invisible - WinAmp, w.bloggar, and some homegrown stuff would start and would work, but would not be drawn to the screen. I was unable to find any good reason for this until one of my coworkers suggested hyperthreading as the culprit, and lo and behold, it seems - at least on one of my systems - that if I turn off hyperthreading the problem is gone.
On K5, there's a piece on Why my Movable Type blog must die - well, not mine specifically but MT blogs in general. Some of it is Andrew Orlowski style blog bashing, pointlessness defined, but there are good points too. The best one is that MT is quite brittle and inflexible when your page demands start to grow or you desire some kind of CGI interaction. It is a problem with MT that the admin interface is not built from the same templating system as the MT blogs themselves. That is quite a nuisance when MT's data model becomes too constrained for your needs. There is even an unclean separation between blog and admin system: error messages on mailing errors are delivered by the admin system and not your blog, so you need to hack in an ugly, interface breaking way to do "branded" email from MT.
Among the features of Mozilla Firebird is the simplicity of the install: the Win32 binary download is a zip file, and you just extract that and run.
It is a testament to the sad state of consumer software that you involuntarily struggle with this simplicity. You find yourself looking for the SETUP.EXE file in the distribution, but there isn't one, because it isn't needed.
One feature they should add to the Win32 browser, though, is the ability to register the browser a little better with the registry, so that the Installshieldy plugins can go about their complicated ways. Of course some of us wish that all the Flash sites would just go away and die anyway, so this may be a feature and not a bug in the browser.
Just stumbled on mod_pubsub, a combination of various clients and a mod_perl application for enabling publish/subscribe two way interactions with browser clients over HTTP. The demo apps indicate that browser mileage will vary - but that is to be expected when using specs to their limits.
This open source project comes out of what looks like interesting work by Rohit Khare.
As Nikolaj informs me, I actually stumbled on mod_pubsub indirectly a few years back, when KnowNow (the company sponsoring mod_pubsub) was featured in Release 1.0. I remember reading the description of the technology and thinking 50% vaporware and 50% brittle browser hack. The brittleness I assume is still an issue, but no vapor is to be found.
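The publish/subscribe core of something like mod_pubsub, boiled down to an in-memory sketch (no HTTP, no browsers - just the topic routing idea; this is my illustration, not KnowNow's actual design):

```python
# Minimal pub/sub: subscribers register callbacks on topics; publishing
# to a topic fans the message out to every subscriber of that topic.
from collections import defaultdict

class PubSub:
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)

bus = PubSub()
received = []
bus.subscribe("news", received.append)
bus.publish("news", "hello")
bus.publish("sports", "ignored")  # no subscriber, so nothing is recorded
print(received)  # → ['hello']
```

The hard (and brittle) part mod_pubsub takes on is doing this fan-out over plain HTTP to browsers that were never designed for it.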
The Swedish social security number system claims to use a "modulus 10" algorithm to validate numbers. However, the algorithm in use is not what any math savvy person would expect from that description, namely "36 modulus 10 is the remainder after division by 10, i.e. 6", but rather "round up to the next higher multiple of 10, then subtract", so 36 modulus 10 in the Swedish system is 40 - 36 = 4, not 6. That is actually 10 minus modulus 10. How odd to have an elementary math error in a national standard.
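The two readings of "modulus 10" side by side, using the example above:

```python
# "Modulus 10" as a mathematician reads it, versus the Swedish standard's
# "round up to the next multiple of 10, then subtract" version.
def true_mod10(n):
    return n % 10                 # 36 -> 6

def swedish_mod10(n):
    return (10 - n % 10) % 10     # 36 -> 40 - 36 = 4

print(true_mod10(36))     # → 6
print(swedish_mod10(36))  # → 4
```

The outer `% 10` handles exact multiples of 10, where "round up and subtract" yields 0 rather than 10.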
A nice idea: build corporate collaborative environments from popular web based applications like wikis, chat rooms, blogs, etc. That is the concept behind Smart Meeting Design. The caveat, if one were interested in the company as an investment, is that the idea is very easily copied - but that just speaks to the strength of the idea and is not really a weakness.
The Linux desktop is much talked about, but not widely deployed. If I were the network manager for a huge office network of mostly boring browser-like data entry clients, I would consider MandrakeMove a game changer. On the fly hardware detection, i.e. the simplest install you could possibly think of, and with all the features of a modern desktop (internet, multimedia, MS Office compatible productivity apps).
The "Linux Lifeboat" run from CD concept is nothing new but the completeness and hardware discovery features are.
In an internal memo, IBM executives are encouraged to switch to Linux on the desktop by the end of 2005.
No less effort will do certainly, and it still sounds a bit weak to me.
I think it is useful to compare the effort to that other great desktop effort: Microsoft's Longhorn evangelism and pre-release program. The Longhorn promotion is being carried out with great force, demanding total commitment from everybody at Microsoft. It is clearly the focus for Microsoft. That's a considerable overperformance compared to a leaked internal executive-only memo. The results could very well be proportionally more impressive.
The Perl 6 project is suffering severely due to Larry Wall's protracted illness. Parrot is slogging along through subdecimal version numbers, and the Perl 6 language project has turned into what looks very much like a death march. This summer the project will be 4 years old and Still Not Shipping. Perl 6 is turning into The Great American Programming Language - a mythical programming substrate, immensely powerful and so full of right and bright ideas that it is downright scary. But mainly mythical.
The project makes it abundantly clear how impossible pure peerage is. In any group somebody ends up asserting leadership, and that somebody is consequently missed dearly when not available. No amount of community feeling or symbolic abdication can change that. Let's hope Wall gets well soon.
The many IE CSS breakage circumvention hacks make the classic "That's not a bug - it's a feature!" motto true in an odd way: You use actual bugs to get by.
Who would have thought it possible: among the other cool things in Longhorn is a reinvention of the command line based on XML streams instead of line streams, as blogged by Jon Udell. It's a long way away, but Longhorn is still looking rather good.
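The line-stream versus structured-stream contrast can be shown in miniature. The stages below are hypothetical and only loosely in the spirit of the XML command line idea - the point is that stages pass records, not text to be re-parsed:

```python
# A structured pipeline in miniature: each stage consumes and produces
# records (dicts), so no stage has to re-parse text or count columns.
processes = [
    {"name": "svchost", "mem": 12},
    {"name": "firefox", "mem": 210},
    {"name": "winamp",  "mem": 35},
]

def where(records, key, threshold):
    return [r for r in records if r[key] > threshold]

def sort_by(records, key):
    return sorted(records, key=lambda r: r[key], reverse=True)

# Compose stages like shell pipes, but on structured data:
hogs = sort_by(where(processes, "mem", 30), "mem")
print([p["name"] for p in hogs])  # → ['firefox', 'winamp']
```

A classic Unix pipeline would do the same with `ps | awk | sort`, with each tool guessing at column positions - exactly the fragility the XML-stream idea removes.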
So I left work early to combat a vicious cold. When I got home I scanned once again
1) the recent discussion with Justeren and BoSD on VeriSign's i-Nav plugin, and how the entire OS should just support unicode from the ground up. BoSD mentions
2) Joelonsoftware's Unicode rant. I had read that before, but now had time to read it again. I also had time to catch up on Joel's writing in general and found
3) a favorable review of
4) Eric S. Raymond's The Art of Unix Programming, which looks like a fun, if idiosyncratic, read. Part of that book is
5) the telling of the story of Plan 9, the planned replacement for Unix from "The Makers of Unix". Buried among the anecdotes was
6) the story of how UTF-8 came into existence as the native character set of Plan 9.
And thus we come full circle: Yes, all the clever guys agree (that group includes by the telling of this story at least Ken Thompson, Rob Pike and Dan Bernstein) that Unicode should not be a hack but just a basic fact of the operating system. They came to that conclusion years ago and here we are still fighting vendor and application specific plugins for DNS/Unicode integration.
How's that for fortuitous circular linkage?
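Two of the design properties that made the clever guys choose UTF-8 can be seen in a few lines (the Danish letters below are just a convenient example):

```python
# Two key UTF-8 design properties: ASCII passes through untouched, and
# continuation bytes are self-marking, so a parser can resynchronize.
assert "plan9".encode("utf-8") == b"plan9"   # pure ASCII is unchanged

encoded = "æøå".encode("utf-8")
print(len(encoded))   # → 6 (two bytes per Danish letter)

# Every continuation byte matches the bit pattern 10xxxxxx; here each
# character is a lead byte followed by one continuation byte.
print(all(b & 0b1100_0000 == 0b1000_0000 for b in encoded[1::2]))  # → True
```

ASCII transparency is what let UTF-8 slide into existing Unix plumbing without breaking it - which was precisely the Plan 9 requirement.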
If you use Internet Explorer and have installed VeriSign's i-Nav plugin to resolve internationalized domain names, you will have discovered this week that VeriSign's recent SiteFinder abomination is not unique, but rather typical of their behaviour.
Hidden somewhere in a EULA, and in their i-Nav FAQ, is the fact that the i-Nav plugin is "...automatically updated without you having to worry about it".
What is less clear from the FAQ is that in VeriSign's world that also means that they believe the company has the right to install/enable additional plugins. This week VeriSign added the i-Nav plugin to MS Outlook without asking me if I wanted it enabled (the timing is due to the fact that the migration from RACE to Punycode began last weekend). So the i-Nav plugin is actually a trojan on your system, and VeriSign believes it has the right to modify your applications as the company sees fit. Truly annoying.
I am unsure to what extent they just enabled the plugin and to what extent they actually installed it. The net effect is the same.
Learn more about the hidden connection between white beer (aka weissbier) and i-Nav in the comments.
Why on earth should I have to know so much tech stuff to use my DVD rewritable drive as a backup device? Where is the simple, easy to use "just use your DVD-RW drive as another disk drive" application that used to ship with my old CD-RW drive? Why are all the DVD applications stupid hacks in comparison? And finally, why the hell is the situation getting worse, not better? When will software and hardware vendors get a grip?
Suggestions for non-crappy apps are welcome. Comment spammers will be censored.
Don Box is doing a lot of work protecting XAML from misperception and ridicule. The latest installment informs us that suspicions like my own or that of luminaries like Sam Ruby are wrong: XAML can separate style and content as well as CSS/HTML can.
If you work right you don't need a debugger says Uncle Bob. He wants us to do uncompromising test driven development and Get It Right The First Time.
That's like saying that Aspirin is bad for you, since if you don't drink, do exercise, don't work too hard, do get your sleep and do eat healthy foods all the time you will never have a headache. Aspirin is the debugger of hangovers.
I am hung over. I happen to like the conversational style of development that relies on the debugger, and as I have written before I think it is the way of the future, but of course he has a valid point: when you make things simpler, you make it easier for people to accomplish their work mindlessly. You should not let idiotic programmers work mindlessly.
Via Sam Ruby
I think this page uses iframes very nicely. They do something like this:
Sample material for the article is included in iframes that are sized to the flow of the text. The content inside the frames is available for scrolling and further examination, but the flow of the article isn't broken.
Frames as part of page compositing (those ugly "homepage" menus) are just ugly and non functional, but here it's really just a visual resource that just happens to be a website. That's much better.
Unstruct.org is about "Unstructured Information Management", and has a tutorial on what that means (roughly "search, data mining, text analysis"). It's maybe a bit too business oriented (i.e. non technical), but it still sounds just like knowledge I would like. Bookmarked.
Groklaw - the SCO lawsuit debunking site - is expanding its debunking business. In the linked article Steve Ballmer's recent claims on Windows versus Linux security are debunked with vigor. Groklaw is great.
I'm not going to Blogforum.dk this Friday and Saturday (yes, I really should - but it's the last business day of the month, and in the evening I'm going to the annual reunion of alumni of "Regensen", and Saturday I will be hung over from Friday's reunion), but I gladly concede the point that Groklaw probably wouldn't exist without the recent easy access to community software like blogs.
Tim Bray has other figures but a recent survey of some of the sites close to Classy shows a consistent Mozilla market share of approx 10% for surfers visiting these sites. That's not too shabby. For classy.dk it is in fact a little bit higher than 10% (approx 14%) but traffic is pretty minimal also. The most interesting one among them is lodret.dk since that has by far the biggest traffic of the surveyed sites. This indicates that Mozilla has made it at least some way outside the pure geek demographic.
At work I am part of a team that develops a process engine, so this BPEL resource center is of immediate interest to me. BPEL is promising, and also - having a solid theoretical base (PDF) - interesting.
It is however - and this seems to be a big-iron XML theme - gunky. Check out the sample scripts for examples of this. E.g. this representation of "output=4/2".
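I can't reproduce the linked sample here, but a hypothetical fragment in the same spirit gives the flavor: something as small as output=4/2 becomes a little tree of elements.

```xml
<!-- hypothetical sketch of BPEL-style assignment gunk, not copied
     from the actual samples: output=4/2 spelled as an element tree -->
<assign>
  <copy>
    <from expression="4 div 2"/>
    <to variable="output"/>
  </copy>
</assign>
```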
I'm all for metadata and machinable information, but there are costs. A painfully obvious one is that it will take a high level of tooling to make this healthy for programmers to consume and produce.
I don't understand comments like "REST is hard to understand". As far as I can tell, all people like Tim Bray are saying is "Why invent an abstraction that convolutes requests for information when HTTP already has all the API you really need?". What's hard about that?
Dave Winer has invented this meme that REST is hazy philosophy, while all he is being is a practical programmer with code to ship. Are HTTP and XML parsers hazy philosophy? In Perl you can get a long way with LWP and XML::Simple.
People have different programming styles, but I have a feeling that this craving for toolkits is caused by the dizziness of realizing that there really isn't more to it than that. It feels like cheating.
Obviously, when the simplicity breaks - e.g. when a resource referenced by a URI isn't static enough for the URI to be useful in repeated GET requests - then you want to do something else.
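The LWP-plus-XML::Simple point translates to any language; here is a minimal sketch in Python (the helper names are mine, and the URL handed to fetch would be whatever resource you are after):

```python
# A REST-style client is just HTTP GET plus a parser - no extra
# abstraction layer needed. Sketch only; helper names are made up.
from urllib.request import urlopen
from xml.etree import ElementTree

def parse_resource(stream):
    # Parse an XML document from any file-like object.
    return ElementTree.parse(stream).getroot()

def fetch(url):
    # GET the resource named by the URI and parse the result.
    with urlopen(url) as response:
        return parse_resource(response)
```

That really is the whole "API": name the resource with a URI, GET it, parse what comes back.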
While I'm still working on the "There are way too many Jakob Nielsen bashers" rant, here's a list of the things not to like about Flash, as observed on Danish moblogging upstart albinogorilla.dk.
I'll stop being cranky now.
My friends know me as the undesigner. I can make anything look ugly. But even I was impressed with the simplicity of these super clean CSS menus. The menu content renders as "just a list", so the semantics survive - and the lynx display of this example looks perfectly clean and well organized (see for yourself). That's exactly the point of CSS. It is such a pity that MS XAML appears to have dropped the content/styling separation.
The menus drop some of their styling when transitioning from item to item, when I view them in IE, but that is probably fixable.
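A stripped-down sketch of the technique (my own minimal markup, not the linked example's actual stylesheet): the content stays a plain list, and everything visual lives in the CSS.

```html
<!-- the markup is just a list; lynx renders it as exactly that -->
<ul class="menu">
  <li><a href="/">Home</a></li>
  <li><a href="/archive">Archive</a></li>
</ul>

<style>
  /* all presentation lives here, none of it in the markup */
  .menu { list-style: none; margin: 0; padding: 0; }
  .menu li { display: inline; }
  .menu a { padding: 0.3em 1em; background: #eee; text-decoration: none; }
  .menu a:hover { background: #ccc; }
</style>
```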
Not only am I now slightly underwhelmed by the XAML "Hello world", I am also mystified why anybody would care about it in the face of the coolness of hassle-free XML embedded directly in C# source. That makes sense - and looks good on screen, and is conceivable as handwritten too. Easily the best syntax for rich embedded data since ... the Perl heterogeneous hash and array.
I've blogged about this syntax proposal before.
And here I've been using homespun perl scripts using LWP, HTTP::Request and HTML::Form to automate web interactions when I could have just used WWW::Mechanize which nicely combines the above modules into one convenient package for walking through websites.
It doesn't really do anything that isn't in the synopsis of the other three modules, but facades are nice.
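The facade point generalizes; here is a toy sketch in Python (a hypothetical class, not a port of WWW::Mechanize) that hides fetching and form scraping behind one object:

```python
# Toy facade over stdlib pieces, Mechanize-style: one object wraps
# fetching a page, scraping its form fields, and encoding a reply.
from html.parser import HTMLParser
from urllib.parse import urlencode
from urllib.request import urlopen

class _FormFields(HTMLParser):
    # Collect name/value pairs from <input> tags.
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        if tag == "input":
            a = dict(attrs)
            if "name" in a:
                self.fields[a["name"]] = a.get("value", "")

class Browser:
    def __init__(self):
        self.fields = {}

    def load(self, html):
        # Scrape form fields out of an HTML string.
        parser = _FormFields()
        parser.feed(html)
        self.fields = parser.fields
        return self

    def get(self, url):
        # Fetch a page and scrape it in one step.
        with urlopen(url) as resp:
            return self.load(resp.read().decode())

    def submit_body(self):
        # Encode the current fields as a form submission body.
        return urlencode(self.fields)
```

None of this is new functionality - it is exactly the convenience of not having to wire the three steps together yourself every time.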
A new piece by Clay Shirky on the semantic web is making the blog rounds.
His basic point: The notion of inference at the core of the semantic web effort is just plain wrong. I couldn't agree more, and in fact if you're an ardent classy.dk follower you should know this already from this, this or this post.
[UPDATED] ...see full text for details
The basic problem may be summarized:
Knowledge is bottom-up not top-down,
- meaning that knowledge is assembled from our sensory data by inferencing.
That in turn has as a consequence the following important observation:
Any interpretation of the information/knowledge that hits us is cool as long as it is useful. As an important example, consistency is an artifact of usefulness, not the other way around, and consistency is not a NECESSARY artifact in any way. Consistency is one of the ways we use to reduce the search space of useful ideas we have to examine before emitting information into the world again.
The quote above is from some unpublished personal notes of mine, so the blockquote is really just there for emphasis, not attribution.
Useful collection of commentary on Intertwingly. As people criticizing Shirky's essay highlight, it is a stretch to equate the attempts at rule-engine formalisms with the semantic web itself. There is no specific agenda as to how the semantic web data is to be constructed or queried; it's just that some of the backers are working on rule-engine formalisms.
Apart from the discussion on whether or not XAML is evil, there is the other question of what the point is exactly and there's a good thread on that on Intertwingly.
It seems the basic conclusion is that it is a format for specifying (i.e. serializing) CLR composites. Allowing for class definition, property setting, object instantiation and event registration (but 'event' is a datatype, so that's just special property setting) this is about as flexible as you would like. It's not new - it's just new for .NET.
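Read that way, a fragment like this (my own hypothetical sketch, not actual XAML syntax) is just instantiation, property setting and event registration spelled as XML:

```xml
<!-- hypothetical: tags name CLR classes, attributes set properties,
     nesting composes objects, Click registers an event handler -->
<Window Title="Hello">
  <Button Click="OnHello">Say hello</Button>
</Window>
```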
On reading Don Box' "Hello World" sample in XAML it seems that this generalized perception of XAML as an XML language for CLR composites also is a good starting point for sneaking intentional programming into .NET. Through the structure of XML one gets for free the ability to mix new meanings into the source code format and/or compositing the source from a repository of fragments.
The only sad thing is the gunkiness of Box's example. It's even worse than the generic Java "Hello World". Code in this format will not be generally consumable as text - only through some rich development environment.
David Weinberger writes about the future of blogging, and so does Dave Winer. The basic pitch: web-enabled discussion - the distance from email to blogs will shrink as more and more people blog. More and more blogs will be private. And finally, Weinberger notes what I consider obvious: many blogs won't be perceived as blogs at all, because their circulation makes them work completely differently from a "person to person" blog with limited circulation (e.g. their comment links will look more like Amazon book recommendations than threaded conversation).
I think both posts underline my earlier point.
Yes, every conceivable problem has made someone write "a little tool", and yes, there are people who use all of them. Impressive collection of tools for tweaking Windows and other stuff - especially useful if you're a Windows developer.
Of course some of us just use perl.
As far as I can tell from the description of AdaptiveMetricsContext (from the new Longhorn XAML specs), that formatting engine misses one of the qualities of HTML/CSS, namely that the styling information can be kept almost completely separate from the content. Once the logical model of a page is established (via e.g. nested div tags), no other styling information goes into the page. Only content.
This is a good separation of concerns, and seems to be missing from the new MS format. Since the new MS formats are all XML the separation may be implemented anyway using e.g. xslt's but it is not built in to the format as far as I can tell.
So while we were all thinking Microsoft had quietly decided that the browser is good enough as it is - now that the other browsers are all getting to be so much better than Microsoft's - they were in fact secretly at their old monopolistic tricks again, according to Jon Udell. MS is introducing new 'standards' for rendering of text and vector graphics, potentially breaking CSS and SVG - and XML Schema. Is Longhorn going to send us back to the sad world of "vendor backed standards" instead of real standards?
Innovation is perfectly legitimate, and in fact I think Udell is not entirely fair when he suggests that XML Schema is somehow the final word on structured metadata representation, but it's good to be on the alert from the very beginning.
Additional commentary may be found on ongoing.
Udell's post has prompted plenty of reactions from microsoftees, some of them recorded here. My take on the responses: as to a new schema language, I think the points made are valid, particularly the notion that it is better to define a new schema format to fit than to imply semantics for an existing format that aren't part of the standard in the first place. As to an SVG replacement, the arguments seem completely gratuitous and mainly a question of ownership. I suspect arguments in favor of Microsoft's CSS replacement will be equally vacuous. More fun can be had reading an excerpt of an interview on Mozilla XUL versus MS XAML.
While it is debatable whether XUL is anything resembling a standard, the arguments given to answer the question of why MS is rolling its own version of the same idea are completely gratuitous and amount to nothing more than "XUL is not owned by Microsoft". I quote:
Q. What's the difference between XUL and XAML?
(i.e. "XAML is the exact same thing, but made by us, and since we own the platform we can make it work for more applications than the Mozilla project could for XUL")
Q. Why did you create XAML instead of using XUL?
A. XAML gives developers richer control over the Longhorn user interface: its tags map directly into objects in the Avalon engine, and developers get a choice of programming language. In short, they're going to be able to build very rich Avalon UI with their existing programming language skills.
(i.e. "Because we will own XAML. Since we own the platform we can make it work for more applications than the Mozilla project could for XUL")
Better description of the differences here - obviously, leveraging .NET gives advantages, including bytecode compilation with much improved GUI performance as a consequence. Can't wait for the mono implementation.
Microsoft knows how to generate press. Longhorn has been a staple on all the MS drone blogs for a while now, and they are currently releasing betas for review even though the OS won't ship until 2006 - which of course goes into every tech news medium. The 2006 ship date is not to be criticized: I much prefer what appears to be actual news, but rare releases, to the recent microscopic updates to Windows.
The new filesystem sounds interesting, and the ambition to structure the filesystem namespace with levels of granularity between that of byte and file reminds me of some of the ambitions expressed by Hans Reiser for the ultimate file system. We'll have to wait and see how intuitive it is before we can tell whether it's just "Every OS ships with an SQL engine" or an actual integrated system for accessing data inside files.
Palladium is still there to dread, no matter how many times MS renames it. This is IP-rights darkness at the hardware level and really not a good idea - at least not if you consider all the monopolistic plays that can be made by not sharing the APIs needed to access the functionality and enable it in free OSes. If you thought Winmodems were evil for blurring the layering of device driver and OS, then this really places us at the very lowest circle of hell.
The look and feel seems a bit less annoying than XP, although the system still uses up screen real estate like the actual work we want to do didn't matter at all (UPDATE: There will be a new look, it's just not there yet), and finally there's the telling 88% CPU usage on a reasonable 900Mhz laptop with 256 MB RAM from running just the 'clock' application. How very Microsoft.
Even if they won't accept my (perfectly valid) credit card data, Amazon search inside is positively cool and bound to be indispensable. Really good coverage of the feature on Wired News - once again by Gary Wolf (link found via his blog, so there). It is hard not to be impressed, even if I can't get the actual pages to work. Just this new way to find books is worth it.
An online anti-spam service, Sneakemail, has reportedly been subject to a devious Googlehack: The Google user agent has been fed Sneakemail defamation, while everyone else has been seeing a product plug for competing company X. Or so the story goes. I have not been able to verify the claim, since the offending page no longer lists any defamation (nor does Google). How devious! Build your street cred, and then give Google and only Google some solid bait words to draw unrelated traffic to your site.
She's a BRICK house - she's mighty, mighty, letting it all hang out. Funky CSS can now render ugly buildings too. Wonderful. Add those table-free round boxes and we have full separation of content and graphical disaster.
The house renders OK in Explorer - the round cornered boxes don't.
When Steve Ballmer has to compare Windows Server 2003 (MS's current offering) to Red Hat 6 (Red Hat is shipping, or about to ship, version 10) to score points in a CERT security advisory shootout, you can smell the fuzzy math a mile away. If he could make the claim stick against more recent Red Hats (maybe the versions Red Hat is actually shipping today), wouldn't he have? Also, he says "4 to 5 times" the number of vulnerabilities for Red Hat compared to Windows. We'll assume it's 4, since he doesn't just give out a number, and that's not all that bad compared to Windows 2000's 17.
Ballmer may be referring to reports like this one from the Aberdeen group (full version requires free registration). If that's the case, then the number for open source is 16 - not "4 to 5 times Windows Server 2003" but exactly 4 times. And that's all of open source. I doubt very much that Windows Server 2003 can keep the number down to 4 with every conceivable Windows product installed. The Aberdeen report does quote a total of 7 advisories affecting MS products for the same period.
From Clay Shirky's In Praise of Evolvable Systems
Only solutions that produce partial results when partially implemented can succeed.
Sometimes you have to make up the partial results, but producing them is essential. They have to be real goals, not just "OK, I did this now" milestones. The space program of the 1960s is a good example: the diversity and number of systems built to accomplish the very indivisible goal of "man on moon" was enormous.
Interesting study on how to adapt OO language syntax to make a language support something similar to XML Schema natively, in this paper by Erik Meijer et al of Microsoft Research. The work is extremely concrete, being a suggestion for an actual extension of the C# language. Included is funky native query syntax. This is at least a nice idea, even if it turns out not to be the silver bullet the authors seem to feel it is.
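For a feel of what language-native querying buys you, Python comprehensions already have the flavor (toy data, and of course the paper's C# design goes much further):

```python
# Querying structured data with syntax the language understands
# natively - no separate query string to build. Data is made up.
books = [
    {"title": "A", "year": 2001},
    {"title": "B", "year": 1998},
    {"title": "C", "year": 2003},
]
recent = [b["title"] for b in books if b["year"] >= 2000]
```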
TODO : Write the perl source filter that supports these extensions for perl class definitions.
via Sam Ruby
Playing around with Moveable Type templating and customization for a joint project with Just. It's mostly nice and easy to use, if a little underdocumented (but when you have source, that's not a problem). Two annoyances:
The CMS is not really a Moveable Type application. It uses a different templating system than the weblogs, for reasons that aren't clear to me. It is also difficult to extend/modify the CMS, since it uses some huge subs to generate the pages in what looks to me like a very roundabout (if code-line efficient) way. A bit of a shame - MT could be even more interesting if the CMS could be modified easily too. The problem I was trying to solve was adding more fields to entry posts, and changing the way one specifies categories.
Jon Udell has a post on rich GUI's versus the browser, and while there is much to be said for the point that rich GUI's provide more efficient responsive work interfaces, Udell's final comment is for me the signal that the rich application model may be Just Wrong:
Trying to sort out a permissions problem with IIS 6, I clicked a Help button and landed on a Web page. The page could only describe the tree-navigation procedure required to find the tabbed dialog box where I could address the problem. It could not link to that dialog box. This is nuts when you stop and think about it. Documentation of GUI software needs pages of screenshots and text to describe procedures that, on the Web, are encapsulated in links that can be published, bookmarked, and e-mailed. A GUI that doesn't embrace linking can never be truly rich.

That is so very true. At my job we opted for a rich GUI as the centerpiece of the workflow system we have built. That was a good idea because of the responsiveness, and also because the GUI builder tools available on the desktop by far exceed the tools available to build browser based applications. The complexity you can accomplish with very few days of work using an efficient GUI builder (at work that's Delphi - but I'm sure Visual Studio .NET would also prove the point) cannot be matched with browser based development.
Udell's post is in reference to comments by a number of microsoftees, notably Robert Scoble, and follows Udell's earlier post on rich GUIs versus the browser (the discussion is subsequently continued by, among others, Tim Bray). People like Bray believe that the irony of this sad state of affairs is that people are getting so used to open resource linking that they will come to expect it everywhere.
My feelings are mixed. Resource-centric, linkable apps are essential, but my personal counterpoint to this story is that I started editing this post using Moveable Type's 'Blog This' browser popup, but mid-edit I switched the text to w.bloggar - my go-to rich GUI blog post editor, which has faster and more responsive preview capability, and a ton of shortcut keys to simplify editing.
Today is your last chance to sign up for Google Code Jam. The company running this contest, Topcoder, holds plenty of them, but since I hadn't heard of them before now, you can tell what the Google name does for their business. Clearly Google's technical street cred is hard to beat, so a Google-branded test of your skills is just what you need to motivate yourself.
My goodness, one forgets easily. The C++ part of my brain has atrophied. The C# part has not yet been built. The VB.Net part I cut out myself using a teaspoon, and the Java part is this bulbous appendage on the back of my head, doing very little and generally being in the way when it does. Must study or fail miserably...
Jay Allen has been away during the weekend on a deep hacking expedition, working out a Moveable Type comment spam fighter plugin, MT-Blacklist. After 40 hours of coding he has now gone to sleep after releasing a beta version, but sadly the plugin won't run out of the box on my machine. It seems Allen has been a bit careless in assuming that the MoveableType installation is in general using the CGI module. I use mod_perl, which basically breaks the plugin as is. With a little hacking I can make the plugin display work, but not yet the functionality... More work to do tomorrow - but it's about time I got into MT plugin creation anyway.
Further evidence that computer viruses have gone seriously commercial. Wired News reports on a Polish group of crackers who claim to control more than 450,000 trojaned computers, which they use to route spam through routes so temporary and cloaked that it is impossible to find the source of the spam.
This is similar to the theory of the SoBig mail relays - viruses are no longer 'just' malicious but carry commercial payloads. Ironically, that would probably make them less obvious to the owners of infected machines, since machines are mostly more powerful than needed, and a little mail relaying isn't necessarily that taxing on a system.
More and more applications stealthily access the internet as any user of a personal firewall will gladly attest. When installing ZoneAlarm for example, the first few weeks after installation every working hour is interrupted by warnings that "Application X is trying to connect to service Y using privilege Z". The difficulty lies in determining which of the many hard to understand internet access attempts are legitimate and which are malicious. Too few of the access attempts are immediately understandable.
I'm sure an ineffective but monopoly enhancing "Microsoft safe socket" add-on to Windows isn't far off, where applications trying to acquire remote sockets have to sign their attempts to do so and register them somehow with an authoritative socket request registry.
When you don't actually distribute the software you write, platform choice becomes that much simpler. A new Tucows blogging tool is written in Ruby and uses PostgreSQL as its database. Paul Graham wrote (parts of) Yahoo Store in Lisp. Neither Ruby nor Lisp can claim to be "the established thing", but as everybody has been saying for a while, that just doesn't matter as long as you don't need to distribute the software. What a great chance to make things interesting.
I was considering augmenting the previously mentioned MS Word blogging add-in with hands-free operation using speech recognition. A simple first test has changed my mind. The following sample was not quite the drunken nonsense when I said it that it became when the computer interpreted it:
I'm currently using dictation to ensure a word document without typing. It almost works. Not everyone is picked up correctly and speeches prefer right now quash the words on misunderstood but maybe that is to be effective from Microsoft product. Contra Asian is not the job by the technician on the action expected to be
I saw Susan
Because the previous one simply had too many errors this one doesn't appear to be doing much better stuff all we needed to run some more training tests before I use the system full series rising on the other hand the results are entering text line is fine choir from .
I don't know who Susan is.
Jon Udell finds an interesting blunder in the calendaring of times. Calendars rarely allow for the possibility that an event happens in another time zone than the one in which it is recorded. I.e. I might like to publish some event that will occur on my next trip to New York, in New York local time, but that will look positively crazy in my Danish time zone calendar.
The initial thought on how to fix it would be "publish in ISO format, using the time zone offset of the place where the event occurs" - and wait for a better client to be smart about it.
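That fix is mechanical; a sketch in Python (dates and offsets made up for illustration):

```python
from datetime import datetime, timedelta, timezone

# An event in New York, recorded with New York's own offset (EST, UTC-5).
new_york = timezone(timedelta(hours=-5))
event = datetime(2003, 11, 20, 19, 0, tzinfo=new_york)

# Publish in ISO format, carrying the offset of the place it happens...
published = event.isoformat()

# ...so a smarter client can render it in the reader's zone (CET, UTC+1).
local_view = event.astimezone(timezone(timedelta(hours=1)))
```

The published string is unambiguous on its own; the crazy-looking conversion only happens at display time, where it belongs.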
Udell's observation may seem plain to Americans, but we Danes live in a country bound to one time zone and don't experience this kind of problem much. I am reminded of somebody's observation on some other blog about how the term syndication (as in RSS) has no intuition attached to it in many European countries, since many of them are so small (or so old-world/big-government) that the major media aren't really served by networks, but by single companies with total regional reach.
When preparing my Danish language site Håndtegnsguiden I found out that many browsers aren't ready for Internationalized Domain Names. Internet Explorer is mostly set up for RACE-encoded names, which is not the proposed standard everybody is adopting, and if your Mozilla is a little old (like half a year or so) then it too fails to handle IDN names properly. To help people debug the problem I set up an error page that will load if they have the wrong (version of their) browser. I was sad to be unable to give suggestions on how to handle IDNs in Lynx - the world's favourite text-only browser. My colleague Bo was of the same mind, and unlike me he still hacks C (I miss it in a twisted masochistic way. Simplicity is so cool). So this is a preannouncement of Bo's IDN hack for Lynx. See the screenshots of håndtegn.classy.dk loaded in lynx! A patch will be made available on classy.dk once it's a little more stable.
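For reference, the proposed standard encoding is punycode, signalled by the xn-- prefix (the older RACE scheme used a different ASCII prefix); Python's idna codec shows the ASCII-compatible form directly:

```python
# Encode an internationalized label to its ASCII-compatible (ACE) form.
# Standard IDNA uses punycode, recognizable by the "xn--" prefix.
label = "håndtegn"
ace = label.encode("idna")
```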
Sun has a new bet for the end user desktop: Java Desktop System. Interestingly, since this is Sun, the product is based on Linux instead of Sun's own Unix variant. I have a hard time seeing this succeed, though. It appears to offer nothing not already on offer from other vendors, except maybe some Java gunk thrown on. The particular combination of development tools doesn't look like something anybody but Sun would want to target, and beyond that this is just Sun's own distribution of Linux with a heavy Java add-on.
Michael Feathers thinks about frameworks. They're supposed to be nice and reusable, but more often than not they're hard to use and the abstraction that was supposed to get you reuse gets you headaches instead. Sad examples include the Java IO system, and e.g. the Xerces parser - and these are only small, local, limited libraries. The really big, hairy ones (e.g. J2EE) are much worse than that and basically inaccessible without framework specific tooling to handle the heavy lifting.
Of the things I have used and worked on, a couple of rules apply
Actually, point no. 2 comes with a caveat, which probably says more about my particular style of thought than about the Principles of Good Software: when faced with a bulky but simple one-off development task, I find that I prefer to 'design' my way out of the problem rather than just doing the work in a straightforward but possibly tedious fashion. I prefer to write a use-once framework that solves the problem indirectly. This has advantages and disadvantages. The main advantages are a) that I'm not bored - and bored people tend to write worse software - and b) that when my solution fails it does so in an 'interesting', easily spottable way. I find that I can have greater confidence in the final solution once there are no obvious flaws.
The main disadvantage is when my intuition on the cost of the use-once framework is wrong and I find myself heavily delayed, doing work of little immediate value and no lasting value.
This is some old news I hadn't noticed: Novell has bought Ximian. This is good news, in that the rather impressive mono project now has a solid corporate sponsor.
While open source diehards will probably lament the fact that Microsoft is the driver behind the .NET architecture (and we can all dread an MS led SCO like lawsuit against mono) mono is an exciting cross platform development opportunity.
Jon Udell makes some points about side effects of the current Blaster worm, and how it affects non-Microsoft customers. Since the worm generates such intense traffic, ISPs that you've never heard of have taken to blocking port 135, on which it attacks. That means that the routing layer - normally completely transparent to you - suddenly fails to route traffic on this port, even between networks that are not vulnerable to the Blaster virus, simply to protect the ISPs' networks against virus-induced overload. This may or may not be necessary, but if it spreads then the concept of free traffic on the internet could soon be over again.
Rule 1. Break their links.
Rule 2. Make sure they don't notice the links were broken.
As an example of how to do it perfectly, take a look at Microdoc-News' old feed. No 'We have moved' post in the discontinued feed. No HTTP permanent redirect. Just a dumb 'human consumption only' HTML redirect.
My newsreader of course didn't interpret this correctly, so I was actually just assuming that the feed had gone dead - until I checked and found the new feed elsewhere.
Not a very clued move by a guy who's trying to make a living through micro publishing.
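For contrast, the clued way to move a feed is a real HTTP 301, so machines get the message too; in Apache config terms (paths hypothetical):

```apache
# Permanent redirect: newsreaders and crawlers see HTTP 301
# and can update their stored feed URL automatically.
Redirect permanent /old/index.xml http://www.example.com/feeds/index.xml
```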
The reason I bothered is of course the good copy on Microdoc News. For example, this instruction on accessing specialized Google sub-searches is useful, although I'm not sure Google's implementation of these smart words is as admirable as the PageRank itself.
The Linux source code SCO claims is stolen is finally being shown (to make people pony up the money required by SCO's recently announced ransom scheme, I suppose). Bruce Perens shoots down the ownership claims. Elsewhere, the developers of Samba criticize SCO's hypocritical stance on the GPL in general. Apparently Darl McBride, CEO of SCO, has picked up a page from the Microsoft playbook, namely the one in which open source is evil and un-American. This hasn't stopped SCO from taking advantage of the GPL (just as it hasn't stopped Microsoft from doing so).
Let's hope SCO burns up fast.
Once again Microsoft is busy shafting consumers. According to 10 pieces of identical spam in my hotmail inbox (which is my spam-only email address anyway, so not a problem), MSN Messenger "as part of Microsoft's initiative for trustworthy computing" (translated from Danish - wording may not accurately reflect the English version of this message) will be updated in a non-backwards compatible fashion. The trustworthy computing line is as believable as the now standard 9/11 defence for the Bush administration's policies on any issue. Reports in the news indicate that it is not a security update, but just an attempt to throw all the MSN-capable IM clones off the MSN network.
In short, they're at it again - and inspired by the "take no prisoners, answer no questions, money is power" style of the Bush administration we can expect a lot more of incidents like this in the near future. Attacks on Google. Tying the optional MS Office add-on to Windows even closer to the OS. More use of MSN from the OS.
If only "avoid their products like the plague" was a real option.
As an aside, the Danish arm of Microsoft translates "trustworthy computing" as "pålidelig beregning" - which means something more along the lines of "dependable computation" ("beregning" in Danish being unmistakable as anything but adding numbers together).
Now that the Blaster worm is spreading, it is once again time to think about biologically inspired anti-virus defences. The absolutely coolest way to combat a virus like Blaster is to spread the security update that protects against Blaster by USING the very security hole the patch is to fix - i.e. spreading a benign worm.
If Microsoft did this there would be a public outcry, of course. The two problems involved are the possible invasion of privacy (pirated Windows versions would NOT have automatic Windows Update enabled and would therefore be particularly susceptible to the worm) and of course the difficulty in distinguishing benign viruses from non-benign ones. Viruses that pose as security updates are already out there. And last but not least, a benign virus would be a colossal liability for the spreader. People might not agree on what constitutes benign, after all.
From what I understand this is a problem with spam also: spam-blocking software by its very nature may block what some consider legitimate email, which could both cause legal problems for the blocker and of course raise freedom of speech issues. With paper mail I think it is the law (at least in Denmark) that a sender has the right to insist on delivery, regardless of the opinion of the intended recipient.
This is only now being limited wrt mass email.
580 people attended a conference on spam (well, on stopping it actually - it was not an email marketing conference). That's quite a number.
Among the speakers was a Microsoft researcher - Joshua Goodman (warning: PowerPoint at end of link). Among his points was one of those lines you've been hearing a lot lately about almost anything - namely that open source software helps evil spammers learn how to get past anti-spam filters. To his credit, he knows people will attack this point, but one has to wonder: do they teach you to say this in Microsoft School, or is it just something PowerPoint inserts automatically when it sees the words open source?
No empirical evidence shows open source anything to be more vulnerable to security threats.
As to gold-standard spam filters (like SpamAssassin) being the thing to beat: a gold standard is the thing to beat whether it is open source or not. With spam it is very simple: keep writing spam until you're not filtered.
Spam filtering seems to be the latest craze for computer scientists looking for interesting jobs. You get to apply classical CS and a little math. There's plenty of test material to train on. The only drawback to this job option is that the available free solutions are already very good - but of course, for another year or so, not everybody will know this...
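The "classical CS and a little math" in question is mostly statistics; here is a toy Bayesian-flavored word score (nothing like a production filter, training counts invented):

```python
import math

def spam_score(words, spam_counts, ham_counts):
    # Sum log-likelihood ratios for each word: how much more often
    # it shows up in spam than in ham. Positive means spammier.
    # Add-one smoothing keeps unseen words from blowing up the ratio.
    score = 0.0
    for w in words:
        score += math.log((spam_counts.get(w, 0) + 1) /
                          (ham_counts.get(w, 0) + 1))
    return score
```

The real engineering - tokenization, corpus hygiene, keeping up with spammers probing the filter - is where the interesting jobs live.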
And what did I mean exactly with the "hearing a lot" quip above: it seems in this economic-crisis, post-9/11 world that people find the argument "Freedom X also applies to malicious agent Y leading to consequence Z - therefore freedom X is bad" compelling regardless of what X, Y and Z are. The opposite is true, of course. It takes very particular qualification of X, Y and Z to make the statement true.
SD Times has done a reader survey on choice of development platform.
Non-committal surveys on these things are notoriously bogus - everybody is eager to say they are using the hot technologies of the moment, and nobody is saying that they are stepping up their investment in COBOL.
The example from this survey: it says we're supposed to see a drop in the use of SQL- and HTML-based technologies in the next year. As if.
Furthermore: If you're bored with the daily grind of your object oriented programming job, there's also a fun read on what it takes to solve problems that are actually interesting.
The problem is airline route optimisation, not for the airline but for you, the lowly customer:
If you want to do a simple round-trip from BOS to LAX in two weeks, coming back in three, willing to entertain a 24 hour departure window for both parts, then limiting to "reasonable" routes (at most 3 flights and at most 10 hours or so) you have about 5,000 ways to get there and 5,000 ways to get back. Listing them is a mostly trivial graph-search [...] The real challenge is that a single fixed itinerary (a fixed set of flights from BOS to LAX and a fixed set back) with only two flights in each direction may have more than 10,000 possible combinations of applicable "fares", each fare with complex restrictions that must be checked against the flights and the other fares. That means that the search space for this simple trip is of the order 5000 x 5000 x 10000, and a naive program would need to do a _lot_ of computation just to validate each of these possibilities.
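A back-of-the-envelope check of the numbers in the quote - the three factors are taken straight from the text above:

```python
# Rough size of the search space described in the quote: 5,000 outbound
# routes, 5,000 return routes, and ~10,000 applicable fare combinations
# per fixed itinerary.
outbound, inbound, fares = 5000, 5000, 10000
candidates = outbound * inbound * fares
print(candidates)  # -> 250000000000 possibilities for a naive program to validate
```

A quarter of a trillion candidates for a single round trip, which is why brute force is off the table.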
That is SO cool...
What's even cooler is that the company that makes this stuff only accepts programmer job applications in the form of actual running code to solve famous algorithmic problems. No "Must know Java, must have 20 years experience, must be 25 years old" there.
Some more company info
Paul Graham - who writes great articles about software and software languages, is the designer of the Arc language and also the author of good resources on spam filtering - has written a piece on Why Arc Isn't Especially Object-Oriented.
The funniest reason he gives (and a good one too) is this:
Object-oriented programming generates a lot of what looks like work. Back in the days of fanfold, there was a type of programmer who would only put five or ten lines of code on a page, preceded by twenty lines of elaborately formatted comments. Object-oriented programming is like crack for these people: it lets you incorporate all this scaffolding right into your source code. Something that a Lisp hacker might handle by pushing a symbol onto a list becomes a whole file of classes and methods. So it is a good tool if you want to convince yourself, or someone else, that you are doing a lot of work.
It is so true. Objects shouldn't be everywhere. Except of course when you do it in the style of the best scripting languages, where you have all the other ways of writing software, but also, implicitly, all of the nice metadata that object oriented techniques can make such powerful use of.
Of course there is a movement to reduce the cost of defining classes and objects - it is happening in most languages, through either new modes of writing (modern IDE's, a la my description of software pragmatics) or through a "multiple language strategies" approach.
The latter is the TIMTOWTDI principle of perl: Programming languages should be as flexible in letting you aggregate meaning as natural language is. Suppose you have some objects already and you want to work with stuff like that. Then the most obvious way to arrive at your data is to be able to construct new objects from prototypes, so your language should allow for that. Suppose you know the data in your object. Then the most obvious thing is list constructors as in lisp. There are many other styles of expression and they should all be able to blend.
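The prototype style mentioned above can be sketched in a few lines of Python; the object shape and field names here are invented purely for illustration:

```python
import copy

# Prototype-style construction: clone an existing object and tweak it,
# rather than routing everything through a class definition.
prototype = {'type': 'window', 'width': 400, 'height': 300}

def from_prototype(proto, **overrides):
    # Deep-copy so the new object shares no mutable state with the prototype.
    obj = copy.deepcopy(proto)
    obj.update(overrides)
    return obj

dialog = from_prototype(prototype, type='dialog', width=200)
print(dialog['type'], dialog['width'], dialog['height'])  # dialog 200 300
```

The point is exactly the TIMTOWTDI one: "start from an existing thing" and "spell out the data literally" are both natural ways to say what you mean, and a language can offer both.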
When your language forces you to design through one particular strategy then the fun that was design quickly becomes the tedium that is programming.
Following up on the disruptive abilities of web services, a brief recap of what the disruption is all about. It can be paraphrased like this: "Web services are unix-style 'simple tools' for business processes".
The "simple tools" philosophy of the unix system environment is the style of development that carefully avoids to build monolithic systems to solve specific tasks, but rather solve all problems by ad-hoc assembly of a long list of simple tools that "do one thing well" through the shared metaphors of the unix shell, files and pipes. The simple tools philosphy is in contrast to the idea of the 'Integrated Environment' - invariably huge, comparatively closed, 'total' systems with an answer for everything. While there are many good reasons to work inside huge monolithic apps (the "simple tools" style has never been able to make sense of GUI's for instance) the simple tools philosphy is remarkably powerful for many problems, as anyone who knows the awesome power of the command line will gladly confirm.
The portability and end user simplicity of (good) open source build processes are evidence of the remarkable power of the simple tools philosophy.
The economics of simple tools comes from the network effect of integration. The total value of 'grep' comes about as a sum of (part of) the value of all the simple toolchains 'grep' is used in.
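That network effect shows up in even the smallest toolchain. A toy example - four single-purpose tools composed through a pipe, none of which knows anything about "logs" or "status codes" (the log lines are made up):

```shell
# Count the distinct HTTP status codes in a log, most frequent first.
# Each tool does one thing: extract a column, sort, count runs, rank.
printf 'GET /a 404\nGET /b 200\nGET /c 404\nGET /d 500\n' \
  | awk '{print $3}' \
  | sort \
  | uniq -c \
  | sort -rn
```

The value of `awk`, `sort` and `uniq` here is exactly a slice of the value of this pipeline - and of every other pipeline they appear in.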
The way this particular web services pitch goes, traditional business software is entirely about expensive, closed source, monolithic, hard-to-integrate apps. The value of their constituent parts comes about as a fraction of the single toolchain (the monolithic app) they appear in.
Web services are to these apps as simple tools are to complex IDE's.
The latest buzzword that attempts to do for web services what the shared metaphors of the unix environment (files, pipes, processes) do for the simple tools is the Enterprise Service Bus.
Or we can just let Jon Udell explain the whole thing: The ESB, the quintessential simple tool in this context - the Active Intermediary, and finally let him wrap up with a toy example of what this kind of open, ad-hoc, transparent integration can do.
The essential concept for the simple tools is that each tool along the chain is really transparent to the next tool. Only the data in the pipe matters. That is why web services need to focus not on API's and programming interfaces but on data representations. And this is why SOAP is already under fire from REST interfaces. In SOAP the data interface is tied into the API - the action interface - and that's just not very transparent. Hmm, I think I just started another very lengthy post by accident.
Loosely Coupled sees a disruption in Siebel's future (and in many other futures, by the way):
The really bad news for Siebel - along with Oracle, PeopleSoft, SAP and every other packaged enterprise applications vendor - is that people don't want to buy software at all. They don't even want to pay for it - and with hosted services, they don't have to.
It's tough to manage, tough to maintain, tough to install, and expensive to boot, so who can blame them. The alternative to packaged software is still being fleshed out, but the core of the new world of software is already taking shape:
The nasty business practice of SCO continues apace as "SCO readies new Linux licensing program".
The SCO Group is preparing a new Linux licensing program that it claims will allow users of the open-source operating system to run Linux without fear of litigation.
In short, it's nothing but a protection racket trying to suck a little money out of Linux users trading on the fear that IBM might lose the lawsuit over Linux. It would be honorable after winning, but right now SCO is just trading on fear. Any gangster would be proud of their ingenuity.
And while Echo/Necho/Atom/Pie has found yet another name, Dave Winer realizes he has lost this one, and stops wielding Userland's copyright like a blunt weapon, as posted to Sam Ruby's blog. Ruby's posting comes with a 'maybe we can get along' hopeful message, but two comments down we learn from Bill Kearney that
Atom isn't about RSS. Atom is a lot more than just syndication.
That has to be a new record in forgetfulness - or if not, it is exactly why Winer, and other reinvention critics, are right about the Echo/Necho/Atom/Pie* project.
* I would like your comments on a related matter:
Does Echo/Necho/Atom/Pie sound more like
Personally I liked Necho for 'the RSS that can't be called RSS because Dave will be angry', but the new name is Atom. The old one was cool because it sounded almost like 'necro' or 'nether' and therefore had a cool ring of death or illicitness to it. Of course Atom is already too crowded and overused a name, so let's see how long it sticks. I would like to propose a new name: ATGNWT (at newt) - which just means All The Good Names Were Taken.
Another interesting point about ECHO is the discussion on escaping content. Some people argue for doing it with an old-school layered approach, where the content is just binary data. But clearly, if mixing XML vocabularies in something as relatively simple as a weblog is too hard to do, then that really takes a lot of bite out of XML.
Jakob Nielsen's "PDF: Unfit for Human Consumption" rant is being blogged heavily. Of course no one blogs the implied statement: PDF's are great for digital distribution of paper documents. That is a true and valuable revolution, and people who don't recognize it are forgetting how hellish printing material retrieved from the net used to be - if you could retrieve it at all. I love PDF's for that.
For online use PDF's are terrible (just like Flash, but in contrast to that abomination PDF's weren't made for content originating online), and Jakob Nielsen's alertbox is completely to the point of course.
Furthermore we all know (from the Sklyarov case if nothing else) how detestable Adobe's position on right of use and copyright is.
TrackBack auto-discovery is disabled for intra-blog links. Probably a good idea, but that doesn't mean that intra-blog updating should not take place. It should take the form of automatic discovery of references to self, and of forward links. Pages with backwards or forward references would make a lot of sense for building coherency of blog entries, bliki style.
They should have their own template tag, since obviously this is something other than other people's TrackBacks. Nonetheless, a link to that page would look great right next to the comment(n) and trackback(n) post notes on the default template.
Google is evidence that this feature would beat topic based coherency features.
Tim Bray is organizing an anti-Internet Explorer campaign, complete with campaign graphics, much like the ongoing (no relation to Bray, pun intended) "No to Warnock's Dilemma" campaign here on classy.dk. But it is a pity that the page does not do browser detection. I've looked at it with Mozilla Firebird, and not only was the anti-IE warning not removed, the site actually looked just the same. Sure, the fact that font scaling actually works is a definite plus, but still - from a client perspective one has to ask of the campaign, 'Where's the beef?'. It might be there, but they're not dishing it out.
I am completely on board with the Longhorn FUD campaign however. Let's give them some of their own medicine for a while. (Scoble tells us that Google does work on Longhorn. Good to know that it did when it suddenly doesn't)
I have intentionally caricatured the worse-is-better philosophy to convince you that it is obviously a bad philosophy and that the New Jersey approach is a bad approach. However, I believe that worse-is-better [...] has better survival characteristics than the-right-thing.
Via Bram Cohen - who looks to be a very clever guy. I should probably get into Python like the rest of them (or Ruby, like some of the other rest of them). Perl6, on the other hand, does look like it will be great, even if only for us braindamaged perl users.
Which is not to say that the browser is the right answer for everything. Here's an overgeneralization which I think works. All computer applications fall into one of three baskets: information retrieval, database interaction, and content creation. History shows that the Web browser, or something like it, is the right way to do the first two. Which leaves content creation. [...] The browser makes a lousy funnel through which to pour your soul into a computer, and I don't see any reason to expect that to change.
I agree. And I also like the 'sharecropper' analogy used by Bray for developing on closed platforms.
What he forgets to mention is that due to the lameness of the current US government's notion of monopoly control, even browser-based services can be co-opted by the platform owner. Or they can try, at least - as MS is about to start doing with Google - complete with all the old monopolistic tricks. Search will be MS search unless you go out of your way to avoid it. I'll wager that the Google toolbar is likely to suddenly stop working in future versions of Explorer.
Then comes the interesting question: Has platform lock-in moved inside the browser? Will there be a mass-exodus from IE because it doesn't work with Google just like there was a mass-exodus from IBM hardware and OS/2 because it wasn't Windows?
Have we completed the move away from hardware to the degree that data (aka content) is finally king?
David Weinberger is checking out life
as a Linux novice, and he is struggling. Of course. Weinberger's expectations for how things are supposed to work are failing in ways he does not even begin to capture in the post.
In short Linux is still rather geeky stuff.
From Don Park's Blog:
AdventCode now uses just 5 HTML tags and CSS to control the XML/XSL output
And then Tim Bray goes on to tell the story of an ancient (well, 1990) Search UI
The idea was that any time you clicked on something, the software tried to figure out a reasonable way to combine what you'd clicked on with your most recent result
That is, code completion on tree-structured data - essentially dynamic XPath generation. I can only say that I really think this is the way to play. Only the structuring must be fast and nonrestrictive. Language does not follow hierarchies strictly. That's why we like it so much. When logic is just too hard to come by, we say what we want to say in slightly incongruous ways anyway. This information is still useful and should be searchable.
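A toy version of that click-to-query idea, sketched with Python's standard-library ElementTree; the document structure and element names are invented for illustration:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    '<library><book><title>Ficciones</title></book>'
    '<book><title>Labyrinths</title></book></library>')

# Each "click" on a node extends the path; the query expression is
# regenerated from the click trail every time - dynamic XPath generation.
clicks = ['book', 'title']
xpath = './' + '/'.join(clicks)
titles = [t.text for t in doc.findall(xpath)]
print(titles)  # ['Ficciones', 'Labyrinths']
```

ElementTree only supports a subset of XPath, but it is enough to show the mechanism: the user never writes a query, the UI accretes one.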
It is a fact universally acknowledged that any new momentum-gathering idea will be in need of a standard. Such is the conclusion one draws from a discussion on Tim Bray's ongoing, following a thread on intertwingly, on Yet Another Protocol. This time, publishing to the web is being reinvented. My question: what is wrong with WebDAV? It's there, it does sorta what is wanted, it's metadata-savvy and extensible. It does what it does well - with enough versioning that the Apache project is basing the new Subversion versioning tool on it.
Through the Apache project it is universally available as open source on almost any toaster.
Why is a new thing really required?
The unofficial resource for data on Google's DNS scheme and other stuff (like how the toolbar obtains pageranks) and of course data on that internet weather phenomenon known as the
We need more than pointers to schemas, we need catalogues of schemas, explaining which ones work where and which ones are up-and-coming. We need online documentation, with helpful tips from early adopters, like PHP's online manual. And we need use cases, lots of use cases, so we can 'view source' and see how others have done it.
The other thing we need is approachable tools, and we need them now. Simple, reliable, adaptable tools for reading and writing valid XML, and which allow us to add or delete our own schema elements, so we can start playing around with all those killer-app capabilities of semantically tagged content.
Sam Ruby and others (a collection of people on Sam Ruby's blogroll. Following intertwingly will get you there) are doing interesting things in fixing and empowering RSS feeds. Most notable accomplishment: True xhtml support for posts.
Not many will have noticed that many XML parsers (notably Microsoft's) think my feed is broken because I live in a city called København in my native language. That of course is immediately fixed by mod'ing the feed to emit the new funky RSS with xhtml, so that I can have properly defined Latin-1 entities.
I think that alone is a good enough reason to score a point for Sam Ruby and careful, well-done XML use instead of just colloquial XML ('stuff in tags'). When I look at XML-capable open source tools, the XML handling is always terrible. Parsers routinely require certain namespace prefixes to be used for certain elements they are interested in, instead of locating and processing the namespace declarations and prefixes in use in the current document. They never validate, and the layering is always bad. The information model of the data presented in XML is rarely made explicit. Data is picked from raw XML with great bluntness instead.
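For contrast, prefix-independent namespace handling is not hard. A small Python sketch with the standard library's ElementTree - the namespace URI here is made up for illustration:

```python
import xml.etree.ElementTree as ET

# Two documents using the same vocabulary under different prefixes.
doc_a = '<a:item xmlns:a="http://example.org/ns">hi</a:item>'
doc_b = '<z:item xmlns:z="http://example.org/ns">hi</z:item>'

NS = 'http://example.org/ns'
for doc in (doc_a, doc_b):
    root = ET.fromstring(doc)
    # The parser resolves prefixes to URIs, so both documents yield the
    # same qualified name - the prefix itself is irrelevant.
    assert root.tag == '{%s}item' % NS
print('prefixes do not matter, URIs do')
```

A parser that keys on the prefix instead of the URI breaks on `doc_b`, which is exactly the bug described above.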
And finally, as a perl hacker I must admit I find the profusion of simple but incomplete tools available on CPAN to handle XML confusing and limiting. It would be great to have a complete, accepted, perl-like standard set of tools doing ALL the XML core features: raw XML, schemas, namespaces, XPath, XSLT, etc.
The O'Reilly network is starting a series on Perl Design Patterns
...many of the problems the GoF is trying to solve are better solved in Perl-specific ways, using techniques not open to Java developers or those C developers who insist on using only objects.
That is so true. In particular doing the structural patterns (and associated behavioral ones) in pure GoF style is completely wrong when a dynamic language with open class implementations is available.
I remember reading about double dispatch problems and realizing that contrived calls through two class hierarchies are irrelevant due to perl's dynamic inheritance model: There is no static type, so methods are dispatched to the dynamic type of an object even if that type was unknown to the calling context.
If you need more control use Conway's multimethods.
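A sketch, in Python standing in for perl, of what dispatch on the runtime types of both arguments can look like when there are no static types - class names and handlers are invented for illustration:

```python
# Double dispatch without a static visitor hierarchy: look up a handler
# keyed by the runtime types of BOTH arguments.
class Circle: pass
class Square: pass

HANDLERS = {
    ('Circle', 'Square'): lambda a, b: 'circle-square',
    ('Square', 'Circle'): lambda a, b: 'square-circle',
}

def collide(a, b):
    # type() is consulted at call time, so the caller needs no knowledge
    # of the concrete classes involved.
    handler = HANDLERS[(type(a).__name__, type(b).__name__)]
    return handler(a, b)

print(collide(Circle(), Square()))  # circle-square
```

The two-hierarchy contortions of the GoF visitor pattern exist to simulate exactly this lookup in statically typed languages.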
Loosely Coupled mentions an interview with a KP VC on disruption in software. The claim: 'startups have the advantage'. Somebody's been reading their 'Innovator's Dilemma'. While it is true that the upstart can successfully feed off much smaller revenue streams than a big company, there's an important caveat wrt. web services. A technology is only disruptive if the incumbents don't get it. So far I haven't seen any significant reason to think that the BigCo's are any less tuned to web services than startups are.
Just - inventor of the sadly dormant relate-a-zon game - would like this Java app: TouchGraph AmazonBrowser V1.01:
The TouchGraph AmazonBrowser allows one to examine the graph of similar items at Amazon.com - and in a beautiful Java applet graph at that.
The web is infested with hyperbole. The dotcom bubble of course was hyperbole through and through; nothing but me-too companies and ridiculous 'bleeding-edge' e-business portals, platforms and technologies. But just because we're in a downturn, that doesn't mean the hyperbole is gone. In particular, the celebrity bloggers have shifted into much too high a gear when it comes to evaluating the quality of blog journalism and the depth of blogging technology. It's as if the act of publishing on the web turns nice ideas into Deep Intellectual Property.
A case in point: Blog Post Analysis. The simple insight: most blogs display multiple unrelated posts on their main page (that is sorta the concept), so we have to extract individual posts from RSS permalinks or through some kind of parsing heuristic. Nice idea. It is even well executed. What is annoying about this simple concept and the simple tools built to support it is that it is then turned into Blog Post Analysis (tm probably pending), aka 'BPA technology'. Give us a break, would you?
The auto RSS maker is very nicely done, but 'BPA Technology'?
To check how well done the script was I wrote a simple version myself using perl's standard toolkit for the job:
Classy's nonfunctional three-hour Blog Post Analyser. Source of said analyser. Some text gets lost and there are plenty of other kinks.
It does reasonably well on my own log. A previous version did much better on Doc Searls' weblog than the current one. Somewhere along the way my heuristic for associating text with links went haywire.
The next thing to do about this is to break it up: first write one or two parsers that condense webpages into a very simplified xml markup, which only preserves the tree structure of the original page.
This should then be processed, not by a perl script but an XSL transform.
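Stage one of that split can be surprisingly small. A Python sketch using only the standard library - the tag whitelist is a guess at what a post parser would care about:

```python
from html.parser import HTMLParser

# Condense a page to a skeletal tree: keep only structure-bearing tags,
# drop all attributes and unlisted markup. Stage two (the transform over
# the simplified markup) would live in a separate XSL sheet.
KEEP = {'div', 'p', 'a', 'h1', 'h2', 'h3'}

class Condenser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []
    def handle_starttag(self, tag, attrs):
        if tag in KEEP:
            self.out.append('<%s>' % tag)
    def handle_endtag(self, tag):
        if tag in KEEP:
            self.out.append('</%s>' % tag)
    def handle_data(self, data):
        if data.strip():
            self.out.append(data.strip())

c = Condenser()
c.feed('<div class="post"><h2>Title</h2><p>Body text</p></div>')
condensed = ''.join(c.out)
print(condensed)  # <div><h2>Title</h2><p>Body text</p></div>
```

Keeping the heuristics out of the condenser and in the transform is exactly the layering point: the perl (or Python) part stays dumb, and the interesting rules become declarative.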
The direct opposite of this overstating of results and ideas may be found in the world of mathematics that I was trained in. There, the goal is to present much while saying very little, so results are more likely to be understated than overstated (at least in work by good mathematicians).
Developer macho sentiment: there is an informal mountaineering rule that you never help incompetents up a mountain, to be killed higher up - only down, to get out alive. Wannabe SOAP developers should be told to work through Bruce's TIJ book, TCP/IP Illustrated, RFC2516, Hunter's Servlets, and Box and Skonnard on XML, and then maybe write a first web service.
I believe that writing (even code) is expression, so wannabes should spend less time wanna'ing and a little more time being. But other than that, I wholeheartedly endorse the sentiment that knowing what you're doing is your own responsibility. If you fail, you're to blame too. People who don't care about their knowledge should never be trusted with important work.
Loosely Coupled thinks business software vendors should start looking for another way to make money. With standardised, rich service offerings available to companies - offered by SOA's (which are rehashed web-service-based ASP's) - business analysts will be able to do most of the aggregation work of combining services by themselves.
Some slides presenting SOA architectures. My favourite quote: Code doesn't travel. Amen to that. All good successful standards are based on located service and portable data (email, html, - even tcp/ip itself). (Bad news: Slides are in PowerPoint. Urgh. What happened to platform independence?)
Abstract and interesting essay on languages, and also, indirectly, the why of languages - what makes a particular language worthwhile and lasting.
The remarks on descendants of languages and Java as a dead end is right on the money (and no, C# is not a java descendant - if it is anything it is Delphi with C syntax). Nobody's going to want to bring Java philosophy along to their next language. The libraries are downright hideous, and all the interesting stuff is done better elsewhere.
This link also discovered through the marvelous Ongoing.
There's an important lesson here, which is that as dull as mainstream programming environments may seem, they are actually undergoing a massive change inspired by features of the classic dynamic languages (i.e. lisp), with reflection and genericity as the key things.
Onjava has a brilliant article on aspect-oriented programming. Well, actually it's the comments at the end that are brilliant. An intelligent reader discusses the concept of AOP and its practical usefulness at great length with one of the authors. Much of interest is said. One of the things is a discussion along the lines of "is OOP not a better match for describing the real world?". I have a strong opinion on that issue. The answer to the question is 'Absolutely Not'. Approx 20 minutes into your first course in object-oriented design, you get to the problem of multiple inheritance, and the text will present you with something like the 'salary example' ("part time teachers" are both "part time employees" and "teachers"; in either case they are "people" and have a "name" - so which is it? The example is contrived, but textbook examples usually are). Some kind of fix is proposed (outlawing multiple inheritance, using only interfaces) but the fixes rarely address the real problem, which is that in either case you are constructing nothing but a theory about the world you are modeling.
Using interfaces may look like the answer, but isn't really - at least not in strongly, statically typed languages like C++ or Java. Claiming some property of a real-world object is inherently dynamic. That dynamic aspect of language is poorly captured by most OO languages. You end up constructing actual objects with the desired properties and implementing some kind of reference to your original objects. This is worse than useless. Your model now contains synthetic objects with a rather abstract correspondence to your problem domain. Do this enough times and your code becomes a nightmare.
All kinds of bad design come from this need to construct objects mirroring the original model objects to add new typing, like the dreaded 'class hierarchy replication' (where you have your main model class hierarchy essentially replicated in another class hierarchy, to add your new feature in a type-dependent way).
Any technology that allows me to introduce the new look on old classes dynamically, by actual reinterpretation of the original class instead of by introducing additional classes, is most welcome - and that is what (at least JBoss) AOP looks like doing.
Soundbite: In theory, I like SQL a lot. In practice it revolts me, and I'm not sure why.
For my money it is because the stuff I think of as static declarative matter becomes dynamic (all the selecting and inserting and updating) and all the dynamic stuff I want to do with data is either completely hideous to do or I have to do it in some syntax with all the beauty of QBasic.
As far as I'm concerned, for SQL to be pretty I should be able to present all the relations (including the ones I need on the fly and don't allow for in the native schema) as declarative matter, and then these relations should be malleable in the way one has come to expect from modern languages with high-quality generic data containers.
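The contrast, sketched in Python - the ad-hoc relation is just another container, and the "query" is ordinary code in the host language (the data is invented for illustration):

```python
# A relation as a plain in-language container of records.
employees = [
    {'name': 'Ann', 'dept': 'math'},
    {'name': 'Bo',  'dept': 'cs'},
    {'name': 'Eva', 'dept': 'cs'},
]

# The moral equivalent of: SELECT name FROM employees WHERE dept = 'cs'
# No second language, no impedance mismatch, and the result is itself a
# malleable container ready for the next operation.
cs_names = [e['name'] for e in employees if e['dept'] == 'cs']
print(cs_names)  # ['Bo', 'Eva']
```

What this sketch does not give you, of course, is transactionality or persistence - which is the point of the next paragraph: those should be separable services, not features welded to the query language.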
Then there's the problem of transactions and of persistence. It is a shame that these two features of data that are totally unrelated to schema modeling are integrated in the same package. You would really like your modern procedural, dynamic, OO language with functional language features and the expressiveness of Lisp to offer transactionality of updates unrelated to the relational model and you would like the lifetime of data to be unrelated to both.
Much talk is made of the Object/Relational impedance mismatch, but I think that is a bad description of the problem. The impedance mismatch is between the entire set of 'data server' language and services and the data and services of the rest of your programming system. It means you have to abuse the dataserver language for tasks your application host language is better suited for in order to get the other qualities also. You end up persisting objects to relational stores to have transactional copies of them.
Simple rediscoveries like object prevalence prove the point. That people need to 'discover' persistence and journaling filesystems (which is what the prevalence model is) proves how damaging the SQL-based confusion of transactionality, persistence and relationality is.
Oh, and it doesn't help that SQL idiom requires you to SHOUT ALL THE TIME.
Tim Bray's ongoing is providing tons of inspiration to me. I just found a brilliant observation on software writing: Writing the Hard Line of Code. Amen to that. This is exactly what happens 9 out of 10 times. Of course, the reason one does this kind of thing is that last 1 case out of 10, where the flow gets you - and that is sufficient for wonderful things to happen.
If - like me - you have an excessively verbal inner life, the same kind of thing happens in 'plain old writing'.
You're arrested by something and get the feeling there's a decent point to be made, and you sort of know what it is, but it is tied in with too much internalized knowledge to get onto paper in the next hour or so - my usual limit for stretches of continuous good prose.
So you start to express some of these internal prerequisites hoping the flow will get you to that point you were looking to make. And you then often find that the point either does not carry as well as you hoped, or you just don't get to it - caught up in all the marshalling.
The 'verbal inner life' thing comes into play because, in this mode of thinking you're always adding story threads to a dense forest of other threads. Nothing is just an observation.
Thank god we have blogging and hypertext to deliver us from this mess of setting up your stories...
Joel on Software - Friday, May 16, 2003 comments that software prototypes are almost never worth the effort. This has to be part of developer 'folk knowledge' by now.
Even worse, when prototypes succeed they run the risk of being used in production systems, with all the consequences of the prototype solution not being properly architected to fit into a full production deliverable.
I think The Pragmatic Programmer does an admirable job in laying out the Good Rules.
Do blackboard or paper prototyping. When building incomplete software, use a 'tracer bullets' approach. Tracer bullets are developed by sticking to the full architecture but keeping all elements not required to display functionality 'constant' - e.g. letting functions return constant values instead of actually computing anything.
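A tracer bullet might look like this Python sketch: the full call chain is wired end to end, and the parts not yet implemented return constants (the function names and the exchange rate are made up for illustration):

```python
# Tracer-bullet style: real architecture, constant placeholders.
def fetch_rate(currency):
    # TODO: real rate service call. The constant keeps the whole
    # pipeline runnable and testable end to end in the meantime.
    return 7.45

def convert(amount, currency):
    return amount * fetch_rate(currency)

def render_invoice(amount, currency):
    return 'Total: %.2f DKK' % convert(amount, currency)

print(render_invoice(100, 'EUR'))  # Total: 745.00 DKK
```

Unlike a throwaway prototype, every layer here is the layer that ships; only the bodies get filled in.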
That this comes out from Bray's point of view as less intrusive than cookies is beyond me. I use Spamhole religiously when asked for an email address. The important step here is that the friction of the signup will make the URL for the RSS feed (or other HTTP headers as suggested) precious to you, so that you will keep using the very same one. And if friction is high enough you will even make sure to transfer the URL to other machines as required.
I've been intrigued for a long time about the notion of precious URLs. Good permalinks are precious. Google cache links to vanished pages are precious.
I think precious URLs would form a great way to introduce micro-economy to online publishing.
The easy way to think about them is Bray's way: If advertising is supposed to matter to the RSS publishers, then subscriber counts must be measurable and precious URLs is the way to go there.
But one can easily imagine a client side version of precious URLs also. It would rely on a new enriched client. When loading up a website the client would negotiate in the background the licensing terms for the site through site metadata. After negotiating terms the site would then publish to your personalized precious URL's - e.g. by proxying the precious content through the licensing client. The licensing client would implement strict observance of URL expiration time, ensuring that your money is not wasted by continued reloading. Client side, you may stipulate how much money you're willing to spend in a given amount of time, how much without being asked, and how much at a specific site.
While the interaction is designed to provide transactional security, the individual page load occurs in the background and is not viewed as an economic transaction. The model is 'metered surfing': you're simply charged a bulk amount per month, and conversely the website does not enter into a transaction with you, but is reimbursed for the number of served pages.
It's just another micropayment scheme - but I think the key point is the introduction of a kind of friction sufficiently harmless that we can accept it happening in the background.
The New York Times costs approximately $1. One would expect to surf the website for some fraction of that per day, which means that a particular page should cost no more than a penny or two.
I don't know of anyone using micropayments as minute as that on a regular basis, but maybe iTunes will change that. If permanent ownership of a song is only $1, surely you would expect to be able to buy other media for that kind of money too.
While we're touching on the subject of giving out email addresses: I couldn't be bothered to keep giving out the same address. Sometimes the disinformation of changing addresses is even intentional.
Ironically, Spamhole has changed their website so that the friction of creating a spamhole has gone up a lot. You now have to sit around and wait for confirmation emails with confusing validation instructions. That is not just bad. It is unusable.
Just references yet another story predicting Google doom because of blogs. Supposedly the link structure of blogs is destroying the quality of Google's rankings. I don't know. I still find what I'm looking for just fine, thank you. And furthermore, I think it is important to note that almost all legitimate observations of blogs polluting the rankings are examples of vanity surfing. Scanning for your own name is likely to bring up a lot of polluting links to your weblog.
On the other hand, I just spent hours finding out who Robert Scoble is - only to learn that he's just a blogger turned Microsoft flack. Being a blogger, page after page of links to his name are blogrefs. Unfortunately his new job shows, and he's proudly plugging just about everything with an enthusiasm fitting for his job. Very low grade. Can be avoided without loss of insight.
Found an old PC Forum link on Jeremy Allaire's Radio.
Esther has a theory that what will create the semantic web are large-scale data-centric applications driven by large corporations and government, forcing the creation of standards and platforms.
Counterexamples: HTML, HTTP, trackback, RSS, ...
I think the conclusion is true but the argument wrong. The only really important thing to know is that it all starts with data. Somebody will make something available in a useful format. People will write tools to access that. Other people will make their data 'tool compatible'.
I think it is a good question whether RDF will make it at all. If it does, it will likely make it either as 'Open Schema' - a data description standard tailored to be made collaboratively instead of by some central publishing house - or as 'Poor man's Prolog' - a net-ready substrate for rule engines to run on.
The Open Schema approach could survive grassroots-style OR corporate-style. The rule engine substrate is unlikely without some heavy lifters (making it unlikely altogether).
RDF as 'data ready hyperlinks' might be viable.
Nikolaj recently reminded me of tinyurl.com - the tongue-in-cheek URL forwarding service that cuts overlong CGI permalinks down to size.
It's a nice idea - in part because of the nice execution and in part because of the problems it highlights.
Entire URL spaces work OK too, so you can abbreviate parts of your URL space to http://tinyurl.com/myspace/mysuburl etc.
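The forwarding itself is trivial - which is part of the charm. A toy Python version, using a made-up domain and ignoring persistence and collisions, might look like this, including the abbreviate-a-whole-URL-space trick:

```python
class TinyUrl:
    # Toy sketch of a TinyURL-style forwarder. The domain
    # tinyurl.example is a stand-in, not the real service.
    def __init__(self):
        self.table = {}
        self.counter = 0

    def shorten(self, long_url):
        key = format(self.counter, "x")  # short, meaningless key
        self.counter += 1
        self.table[key] = long_url
        return f"http://tinyurl.example/{key}"

    def shorten_space(self, name, prefix):
        # Map a named space to a long prefix, so sub-URLs resolve too.
        self.table[name] = prefix

    def resolve(self, short_url):
        path = short_url.split("http://tinyurl.example/", 1)[1]
        head, _, rest = path.partition("/")
        target = self.table[head]
        return target + ("/" + rest if rest else "")
```

Note that everything interesting about the service - the trust problems below - lives in that `table` lookup happening on someone else's machine.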
The problems highlighted are at least threefold:
How could TinyURL fix the privacy problems: the implicit trust in an intermediary, and the opening up of private namespaces?
Avoiding the implicit trust of the intermediary requires a two way encrypted communication between the ultimate client and the server tinyurl is a proxy for.
That sounds like a Public Key Infrastructure application. A public registry of certificates avoids the need for the client to have a relationship with every server - while allowing intermediaries like tinyurl.
But if the intermediary is to add any value, the data can only be partially clouded. In particular the target urls must be known to the intermediary. So some kind of partial trust protocol needs to be layered on top of the certificate technique. One imagines a mixed payload with encrypted and unencrypted parts.
While PKI and all of the upcoming digital signature concepts provide a cryptographic basis for this kind of work, they do not provide the real value giver, namely a protocol allowing mixed trust without network effects. In fact, I think I'm following Bruce Schneier's train of thought when pointing out that the real source of trust is the entire relationship between the truster and the trustee. It is an open question whether they really need centralized crypto to build or maintain that trust.
The second aspect is reminiscent of the first: a lot of interesting resources for intermediation are on private networks because of security concerns. Adding crypto to these services before allowing intermediaries access can be difficult, and the current certificate infrastructure is such that it is expensive to add and manage certificates.
The second problem is that the certs don't offer fine-grained control in this situation either. With public crypto registries I can easily trust any counterpart - but if I want monitored, audited access only, crypto isn't it (without a lot of expensive middleware).
In short - an open, dynamic architecture for trust and permissions is needed. That was (some of) what digital identities were supposed to do - but they don't seem to have made any impact just yet.
The big ideas (Project Liberty et al.) at last glance just seemed to be a new cross-platform 'closed identity' solution.
An unrelated thought:
The meaningless TinyURL space makes a point about language. Sometimes it is easier to remember a complex, grammatically sound statement than it is to remember a short meaningless one. In fact I'll bet good money that statement was easier to remember than the number 49205710362881947293, even though that is only one digit per word of the previous sentence. But that was not really the main point, so I left it for the MORE block.
What that means is that carefully designed URL spaces are easier to remember than TinyURLs.
In an act of IEEE cross-browsing I found a nice little piece on misapplied organizational practices - debating bad process-heavy work environments vs bad work-heavy work environments (aptly characterized here as commitment driven management). I think I've tried both - and it is impossible to stress the key points of the op-ed too much:
Reading Jon Udell on Rule engines and rule languages.
Aren't aspects compile-time rule engines (just like C++ templating is compile-time functional programming)? And isn't it a natural idea to implement a mixed-mode rule/procedural approach with aspects?
Full-on rules-based programming is, IMO, as Udell puts it, 'another mess'.
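Python has decorators rather than aspects, but they are enough to sketch the mixed-mode idea: declarative rules woven around a plain procedural core. The rules and the domain below are hypothetical.

```python
def with_rules(*rules):
    # Decorator as a poor man's aspect: evaluate declarative rules
    # as advice around an otherwise procedural function.
    def wrap(fn):
        def inner(record):
            for check, message in rules:
                if not check(record):
                    raise ValueError(message)
            return fn(record)
        return inner
    return wrap

# Invented domain-registration rules, in the spirit of the post.
@with_rules(
    (lambda r: r["name"].isalnum(), "domain label must be alphanumeric"),
    (lambda r: len(r["name"]) <= 63, "domain label too long"),
)
def register_domain(record):
    # Plain procedural core, uncluttered by validation logic.
    return f"registered {record['name']}"
```

The rules live in one declarative place, the procedure in another - which is the whole appeal, and also exactly why debugging the combination in a data-rich environment gets hard.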
In my daily work writing software for a lowly domain registration engine, we encounter the procedural/rule disconnect all the time, like all other business programmers. The greatest problem we have with our rule setup is that you tend to apply rules in very data-rich environments (that's where they're handy), and that makes testing and debugging the rules a nightmarish and very slow process - to the great disappointment of management, who heard us praising the rules approach when we were struggling with some hideous, monolithic legacy of procedural code.
Why exactly is it, that with all the usability features in MS Outlook, they still haven't figured out that 'Reply to All' should not include the sender?
The next version of Windows is being leaked continuously to the internet (you'd think it was a marketing effort, so consistent is the process). Sadly, in the next version of Windows, MS has decided to use even more screen space for their own purposes. A screenshot accompanies ZDNet's story on the new Windows, and a truly depressing amount of space is taken up by Windows itself. The terrible Office toolbar has now made it to the Windows release itself.
A very nice micro-application of web services: the Danish concert ticket seller Billetmaskinen (a rival to the local Ticketmaster subsidiary) has found a nice way around annoying address data entry. You simply enter your phone number and they snap up your data from the directory listing. The privacy concerns are obvious, and it would be preferable to have a URL to something like Ascio's digital identity, so you could give out the address you would like to give out. But it still saves me a good deal of trouble, using only data about me that I publish already. I haven't seen this elsewhere and I'm currently wondering why. Why don't the credit card companies offer something similar? Obviously not keyed on the actual card number (that would kill any semblance of security), but an ID number would be simple to add - and if you're going to pay anyway, or accept delivery, they WILL know who you are.
I have to take issue with my esteemed colleague Just's opinion on the new design of berlingske.dk. Just likes it. I don't. My parameters for judgment: 1. No interesting news in the content. 2. The new site is S*L*O*W compared to the old one. And due to the way it's done, the page hasn't loaded properly until ALL the crap is loaded, so you can't start news browsing until you have waited for a very long time.
I haven't seen such a poor relaunch in terms of performance since krak.dk decided to destroy their website a while back, by making it a ten-click instead of a one-click operation to locate a map of some location. Among other much-used but very sucky services: TDC's phone directory. First of all - even when you're explicitly looking for the directory service, you are redirected to the yellow pages. THAT SUCKS VERY MUCH! It's just about as annoying as spam. Second of all, the page has the same problems as berlingske.dk. The embedded scripts necessary to navigate from the yellow pages to the directory listing fail to load if you don't wait for all possible crap to load, and again you have to wait a lot. Sucks, sucks, sucks, sucks, sucks!
BUT, I know that all you graphics ponies like the information I don't use, conveyed in the 'nice look' of these sites. You take issue with Jakob Nielsen's strict 'the text is only the words on the page' view of webpages, and you all feel that the look conveys information as well. Of course you have a point (heck, even Google is using the rendered form of hyperlinks to enhance its page ranking algorithms, by increasing the score for boldfaced links that stand out on the page). It's just that there is absolutely no way anything but the name and address entry fields of the phone directory are helping me find that phone number in any way, shape or form. That particular action is information redux - a pure memory prosthetic. Just the digits, please.
Google - as we all know - gets the idea of the memory prosthetic. Speed and simplicity of application are of the essence, because we use Google to look up a lot of stuff we actually already know. Since Google runs so fast, it is a viable replacement for keeping your own favorite links around; in some cases it is even a viable alternative to DNS lookups. The operative concept here is Michael Polanyi's tacit knowledge. You really don't want to spend time thinking about how you recover information. Latency in recovery completely kills the value of the information, since information is hardly ever the end but always the means towards an end, and if you have high latency along some information recovery path, you're just not going to use that path.
A study has been made on the influence of spell checkers on writing quality. The result - a little unsurprising I think - is that using spelling and grammar checking can actually impede performance.
The way this probably works is that you change your work mode from a 'creative' mode to a rules-based fact checking mode. This has two problems: First of all, the spell checking software is far from perfect. Generally speaking the checker will not catch all errors, so the rules you're checking against to see if you're done are incomplete.
Second, you rarely want to be in a rules-based mode during writing. Working rules-based (with present-day technology at least) invariably means that you're working from closed-world, severely bounded models of the problem - i.e. the proverbial hammer that nails your English to the floor. A new book that's coming out just now makes the same point as the experiment, only about child creativity and interactive computer games as opposed to old-fashioned creative self-made games.
Writing is more of a search problem in that respect. You're scanning your memory and the situation for appropriate phrases to apply when continuing the text. While I am an AI optimist, computers are unlikely to sensibly support that process in any known near-term future.
p.s. I know that classy.dk is a living counterexample to the experiment discussed. Browsers are possibly the worst text entry interface possible.
At long last there's a new perl Apocalypse out. What a wait. But what an Apocalypse! It is long (a 64-page printout) and dense, as you would expect, but it outlines some of the most needed features for programming perl in the large, which is exactly what perl needs, since programming in the large is definitely the thing that is most difficult in perl. There's a solid type system (optional of course), function prototypes (optional of course) and a general consolidation of the model of ALL structure, which at the same time cleans up the many ways in which control can flow - and then promptly fills up the conceptual space made available by the cleanup with yet more arcana.
I stand by earlier statements that the redesign is doing more harm than good by the enormous addition of features, but clearly there are vast improvements also.
The syntax in this apocalypse is presented using spiffy new Apocalypse 5 regexes, and they are beautiful and very readable. The new type system and prototypes look very promising indeed.
Also, a programming style replacing line noise with method calls on builtin abstract data types and a few universal operators is emerging in the examples, which is very welcome. I find the many discussions on super powerful operators sad and completely beside the point when the point isn't writing one page programs doing amazing stuff.
In short, there is hope still for a simple-to-use but extremely powerful new perl. The pace of development is highly unpredictable, however. This Apocalypse has been 9 months in the making. From the mailing lists I gather there have been plenty of external problems delaying it, and the subject of course promises to be the second-to-last whopping big one (the last one being objects). With a little luck we're halfway there, so we'll have perl6 around 2005.
There's a separate problem with the runtime though. It seems to be bogged down in some pretty arcane discussions without solid use-case discussions, but I'm just a bystander in the process, so I might just be missing some solid thinking.
Oh goody, the tried and true 'fast but inflexible' idea of using just serialized objects instead of relational datastores has a new name: Object Prevalence. To be fair, there's a consistent API to define the way the serialization is used, but it IS just serialization.
The tradeoffs are the usual ones: no cross-object queries. No real transaction support (well, you can use a transactional datastore that is non-relational - i.e. what people do when using Berkeley DB as an object store from perl, via one of the many 'persist to hash' modules on CPAN). And the two-tier nature of this way of hacking storage means that it is good and fast if you don't need to scale it, or query it in as yet unthought-of ways.
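A toy sketch of the prevalence idea in Python, with pickle as the serializer (the command log that real prevalence layers keep for replay is only hinted at; all names are made up): the live object graph IS the database, and persistence is whole-graph serialization.

```python
import pickle

class PrevalentStore:
    # Toy 'object prevalence' sketch: all state lives in an
    # in-memory object graph; persistence is just serialization.
    def __init__(self):
        self.root = {"accounts": {}}

    def execute(self, command, *args):
        # Commands mutate the in-memory graph. A real prevalence
        # layer would also append them to a log for replay.
        return command(self.root, *args)

    def snapshot(self) -> bytes:
        return pickle.dumps(self.root)

    @classmethod
    def restore(cls, blob: bytes):
        store = cls()
        store.root = pickle.loads(blob)
        return store

def deposit(root, name, amount):
    # An example command: module-level so the graph stays picklable.
    root["accounts"][name] = root["accounts"].get(name, 0) + amount
    return root["accounts"][name]
```

The tradeoffs above fall straight out of the sketch: `execute` can touch anything (so it's fast), but there is no query language over `root`, and "transactions" are whatever snapshot discipline you impose yourself.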
What WOULD be nice would be an upgrade path to relational data and replicated objects and transactions - but wait, that's what EJB's are supposed to do, isn't it?
Reading the discussions about the design of perl 6 - which seem to have come to a screeching halt, bogged down by arcane discussions of excessively promoted new features - one is reminded of the old slogan "worse is better", meaning 'usable' is better than 'perfect'.
I would however like to add a personal sentiment: It is not that much better. A language has to emerge as an end result. If that language is more difficult than perl to explain or - even worse - to understand, then it doesn't really matter what nifty capabilities it has. It will not be used. Simplicity wins.
Thinking about this, people's fondness for php and python, a previous post about - among other things - average developer skills, and having just read The Innovator's Dilemma, I think it is safe to say that perl is experiencing a disruption. And even worse, the perl community is reacting exactly like an incumbent champion of industry would, adding features - and cost - and spending endless amounts of time on sophistication and 'getting it right' to tweak the mileage the language offers.
Perl in itself was disruptive when it appeared. It is a remarkable unifying improvement on the unix toolchain, replacing shell scripts, awk, grep, etc. with a unified extensible tool. This made new things possible - like building a lot of the web - and was arguably the start of the rise of scripting languages as first class citizens of the software world.
The disruption perl is facing is the attack of the average programmer. People think perl is hard. They're probably right. So they turn to tools which may not enjoy the advantages of CPAN - with which you can do literally everything - and the best build system in the world (CPAN again, and the completely standard modules), but which get the job done. And more people can learn how to use them, so there is no question perl is losing mind-share.
Reading Lisp discussion lists will give you a sense of what I'm talking about. And reading Peter Norvig's 1999 summary of the state of Lisp is a lot like reading the 'State of the Onion' that set perl 6 in motion. Incidentally, as far as I understand the examples in the 3rd edition of Norvig's AI book which used to be in Lisp, are now written in Python.
The SWIG C/C++ wrapper generator no longer comes with a Visual C makefile. Here's one. Place it in the Win subdirectory of the SWIG distribution and run using nmake. Why bother when there's a binary to be had? Well, I was thinking about doing 'autoswig' - a perl interface to swig itself, built using swig of course. The idea would be to access the swig parser from perl. I'm not sure about using the generation stage from perl, but robust C/C++ parsing has tons of uses. The XML output format exposes the parse result relatively directly, so using that and a command line is a slower, but maybe less work-intensive, way to access just the parser. The output looks rather ugly though.
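As a sketch of what 'using just the parser via the XML output' could look like - the element and attribute names below are invented stand-ins, not the real SWIG XML schema; the walking code is the point:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment standing in for a parser's XML dump.
# The real SWIG XML output uses its own (uglier) element names.
SAMPLE = """
<module name="example">
  <function name="add" type="int">
    <parm name="a" type="int"/>
    <parm name="b" type="int"/>
  </function>
</module>
"""

def list_functions(xml_text):
    # Walk the dump and recover function signatures:
    # (name, return type, parameter types).
    root = ET.fromstring(xml_text)
    out = []
    for fn in root.iter("function"):
        params = [p.get("type") for p in fn.iter("parm")]
        out.append((fn.get("name"), fn.get("type"), params))
    return out
```

Once the signatures are in a plain data structure like this, generating wrappers - or anything else - is ordinary scripting.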
For my own reference mainly, a few notes on the Lightweight Languages Workshop 1. In particular you should follow the link to the beautiful thoughts about language design at Paul Graham's site. His description of the goals of his new language would be arrogant if they weren't so very well written up.
Dr. Dobbs carried a story about the workshop which was much better than their usual material.
A while back there was a story here at classy.dk about how Developers are more important than development environment, based on an informal study of different programmers doing solutions in different languages of the same problem.
I found another programmer who took the bait. This time it's a C++ programmer inspired by Peter Norvig's Lisp arrogance. He manages to demonstrate that the developers of the templating system had a bad case of abstraction envy when they designed the template mechanism, and is able to basically copy the Lisp version into templated C++.
The template mechanism is wonderful except for two facts:
First, the C++ compiler - under heavy use of templates - is basically the slowest runtime environment for ideas I can think of. Clearly a new approach to the use of clock cycles is needed to make the C++ compile cycle acceptable. I'm sure any heavy user of templates can understand that sentiment, even if he doesn't agree.
I think basically what is needed is to stop thinking about the compile cycle as an offline activity - which is inherent in C/C++ think with the use of preprocessing, makefiles etc. Things like 'precompiled headers' are just hacks to work around the underlying issue that the model for how C and C++ generates code needs an overhaul.
The second fact is the more serious: templates violate the fundamental principles of good productivity enhancers. Debugging template-rich code is terrible, since the names of things that you use as shorthand to make your code legible have vanished in the debugger, i.e. you cannot debug in your thought medium. In fact it can be hard enough to create even facades for your template-rich libraries that let you stop worrying about the templates themselves. STL and some of the 'gold standard' libraries around make a very good effort in this respect, but it is punishingly difficult for the average programmer. And it is not made easier by the poor interactive qualities of the template processor. (Basically you want to single-step compilation - which is what I mean when I say the C++ compiler is really a runtime environment.)
Generics are poised to enter both C# and Java. For J2EE programmers, the obscurity of templating techniques should provide a welcome respite from the arcane work they have to do writing millions of classes (not to mention deployment descriptors) to provide J2EE services. But one would hope that the Java and C# language designers come up with a better way to open up the template processor for debugging etc., while hiding the results of running it from client programmers.
A brilliant idea: PLEAC - Programming Language Examples Alike Cookbook - tries to implement a number of standard programming constructs in as many languages as possible. Each language implementation is done in 'idiomatic' fashion, i.e. in the style programmers native to the language would use (modulo personal style considerations, of course).
This is so much more interesting than the usual 'Why I hate language XXX' articles about scripting languages found all over the net. They usually follow the template 'I completely gave up on language XXX when I tried to implement task T', where task T is something for which the writer's favourite language YYY has native support, whereas the implementation in XXX is usually badly done, and certainly requires the use of some arcane library.
A nice collection of strange - sometimes stupid - questions asked to and answered by Linus Torvalds.
Derogatory name found in The Register - for the Microsoft anti-trust settlement recently upheld in US courts, largely favouring Microsoft. The classy.dk opinion on the matter is - as has been previously mentioned - that MS is very guilty indeed.
Think about it: if Windows NT were a patented drug, most of the core would have been in the public domain a long time ago. In drug treatments, openness is forced on drug companies because nobody would use a drug if they didn't know exactly what the substance used is. So the only protections available to the drug companies are 'process secrets' - on how to manufacture the drug - and the patent system. So one tends to come out in favour of the patent system for drugs. Not so with software. Secrecy works as a protection in itself.
NOBODY enjoys the monopoly power as much as Microsoft - and nobody uses it more liberally.
There should be a time limit on how long this kind of knowledge can remain a trade secret. Evolution in software is actively hindered by the secrecy. If the core of the operating system were made public after some period of time, that would force companies into actual aggressive invention - instead of just the introduction of more and more bloat, so the same basic functionality can be repackaged again and again to accrue more income from old work.
OK, I know it is getting a bit tedious, all this talk about language, but Joel Spolsky's Law of Leaky Abstractions is another argument why the final programming technology will be loose-knit, open and language-like. The law simply says that all abstractions come to an end - sooner or later you have to abandon the abstraction and look at the substrate it abstracts from.
That's like an inverse Gödel theorem: Instead of the idea that sooner or later you have to make reference to some meta-level to correctly describe your world, this law says that sooner or later you need to de-meta. So this is a pragmatic dualism to the idealistic notion of formal methods in programming design.
If that is the case, why not make technology that by definition covers the whole range of possible meta-levels?
Being a mathematician I have often encountered a purely mental version of leaky abstractions.
Mathematics can be a delightful play with words. A mental game, where the only thing required of you is to come up with a consistent set of utterances that are somehow interesting and meaningful in the end. This 'no rules' aspect of mathematics is a driver towards more and more abstraction. Mathematicians are always abstracting to meta-levels. The meta-level then becomes the real substrate for a new discipline of mathematics, and this new discipline in turn feeds the creation of new meta-levels of knowledge with its own group of specialists.
This process may sound ridiculous and unproductive when described from this rather tremendous distance, but in fact it is important and highly productive. The constant redefinition and refinement of mathematical concepts makes the work of geniuses commonplace.
An interesting example of this is the subject of linear algebra and convex analysis. The historically inclined mathematician will find the original sources for material in these fields hard to read and almost incomprehensible. Generation after generation of reformulation of the knowledge in the field has shaped it into an efficient - if sometimes boring - body of knowledge. The work that was hard for the discoverers/inventors of linear algebra is now taught to university freshmen as an easy way into the basic notions of proof in mathematics.
What has this got to do with leaky abstractions? When you're doing mathematics, trying to prove something about mathematical objects, you tend to set aside the knowledge you have in principle that these objects, and the models they fit into, are really abstractions - that they are not really objects at all, but rather just specific features that something may have, and that you are at present recognizing this something by that property. If you always have to second-guess your primal use of language - namely the presentation of information about concrete physical things - you tend to get lost really fast. So you suspend your knowledge that what you're talking about is an abstraction, and talk about it as if it were a concrete thing. This works very well, if you have a good power of imagination at least. Because language is multilayered and doesn't look different when you access a meta-level of information, it is efficient and convenient to dispense with the knowledge of abstraction.
I've never really had major difficulties in 'going meta' - in accessing the next level of abstraction. For me the problem was always going the other way. Once you're deep into the abstraction, you may suddenly arrive at some new object that you constructed on the meta-level, but that does of course have a less abstract value (i.e., in the context, a real value). The very talented know how to step back from the abstraction and access the 'real' world beneath. I have always had trouble with that, and I really think that is why I am not a mathematician today.
This is less of a problem when you have 'perfect' abstractions. But unfortunately, the 'perfect' abstractions are the very old ones, the polished ones. The new ones - and the ones so new you're making them up as you go along, tend to be more imprecise and leak a lot. When that happens, that's when you need to be good at stepping back from the abstraction to some level of knowledge that doesn't leak. A lower level, where the information you've produced makes sense. When you need to do that the very flatness, and non-layeredness of the language you use becomes the problem. You find it hard to distinguish between information about the abstract layer and information about the concrete layer. And when that happens, you know you are lost.
So in short, using language to model is no panacea either. It's just - in my opinion - the least leaky abstraction we have of knowledge itself.
Found an interesting and lengthy Object Orientation backlash. With the view on what an OO advocate is supposed to think presented in the article, the author has an easy case to make, and the basic claim that OO is not the optimal language for ALL problems is obvious.
When that is said and done, I think the author would have a better case in recognizing the cases where object orientation DOES make all the sense in the world, and furthermore in recognizing the importance that OO is capable of having when programming in the large. An initiative like .NET is not as easy to conceive of without a good object environment. Well yes, you can - it's called C, Unix, scripting languages and a compiler - but objects are eminently practical constructs if you want to hide traversal of a process or machine boundary from your local programming environment.
Furthermore, the author - correctly IMO - argues from the assumption that the true productivity-enhancing feature of a programming system is how good the system is at emulating the features of our built-in natural language processing, and the accompanying world modeling. The true measure of programming environment sophistication should be how many of the abstraction constructs we live by it is able to support, and how well it supports them. The parody of OO (dogmatic "OO analysis and design" in the spirit of 300-man teams) that is criticized sacrifices all the flexibility of the basic OO ideas in this regard by enforcing very strict 'rules of speech', in the form of lengthy and complex development guidelines, and that is just not very liberating.
It is not really efficient debunking of anything to remark that there are other routes to flexibility than OO. This is hardly surprising. I have the same feeling the author has about OO about the use of UML to describe a lot of things. I like a few of the diagrams for specific descriptive tasks and use them for that, but I would hate to build a system entirely from UML, or indeed to model solely with UML.
However arguing against the power of OO to flexibly interpret the verb part of sentences based on the types and model theory of the noun parts of the sentence seems ludicrous. The claim that plain old procedural languages do this just as well is just not true. The notion that verb parts of sentences should not be typed (there's a comment about 'a + b' not being a method on either a or b) is absurd. Clearly the type of a and b matters a great deal in the interpretation of 'a + b' and this is not JUST a matter of typing, since the interpretation of verb parts of sentences -also in natural languages - can rely both on type and instance data. The 'framing problem' in the semantics of natural language is all about type and instance dependencies in natural language.
Archiving, categorizing, run-everywhere perl blogware in 91 lines of code at Blosxom. I must admit I post a lot from work, so web-based post editing is good to me, which is why I use Movable Type. But yes. HTML textarea fields suck, as editors go.
The implicit .NET notion that there is no network latency - and that communication models which work well on the desktop automatically distribute - is wrong.
Think of web services as queues that consume and generate XML Schema-typed messages. That will force your design to be coarse-grained and loosely coupled. Hallelujah!
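The queue view is easy to prototype. A Python sketch, with JSON standing in for XML Schema-typed messages: the only coupling between producer and consumer is the serialized message format, never shared objects or call stacks.

```python
import json
from collections import deque

class MessageQueue:
    # Coarse-grained, loosely-coupled sketch: both sides agree on a
    # message schema and nothing else.
    def __init__(self):
        self.q = deque()

    def send(self, message: dict):
        # Serializing at the boundary is what enforces the loose
        # coupling - no live references can sneak across.
        self.q.append(json.dumps(message))

    def receive(self) -> dict:
        return json.loads(self.q.popleft())
```

Because every hop pays the serialization toll, fine-grained chatty interfaces hurt immediately - which is exactly the design pressure the quote is celebrating.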
Every hacker watching The Matrix would know this: While the greenish glyphs streaming down the screen in the hacker submarine look really cool they do not represent in any significant way the use of visual information when hacking.
The reason: Our perception of visual information is geared for an enormous ability to orchestrate information spatially and this is done at the cost of a very poor visual resolution for temporal information.
We all know from the cinema the approximate maximal temporal resolution of visual information: approx. 24 Hz, the display rate of standard film. If our resolution were better, movies would not look like fluent motion to us.
Our shape recognition ability on the other hand is almost unlimited, and the brain even has some amazing computing-related tricks where we have very high spatial resolution in the focus area of vision, which comes at the expense of general sensitivity (amateur's guess: since you need a certain number of photons for a difference over space to register, you need a higher level of lighting to realize good spatial resolution). Our peripheral vision on the other hand is extremely sensitive, but has less resolution.
So a better way to construct a new age visual hacking device would be to keep the complicated glyphs - which we can easily learn to recognize - for focal vision and add peripheral information that is important but only as background information that may require us to shift our attention.
An idea for debugging could be glyphs representing various levels of function from the highest to the lowest - all visible at the same time - using the peripheral field for auxiliary windows. In the case of a debugger you could have variable watches etc. in the peripheral view, and they would only flicker if some unexpected value was met.
I think complex glyphs would be a workable model for representing aspect oriented programming. In linguistic terms we would be moving from the standard Indo-European model of language form to some of the standard cases of completely different grammars (insert technical term here) where meanings that are entire sentences in Indo-European languages are represented as complex words through a complicated system of prefixing, postfixing and inflection. Matrix-like complex glyphs would be good carriers for this model of language.
Aspect oriented programming is reminiscent of this way of thinking of meaning, in that you add other aspects of meaning and interpretation of programming as modifiers to the standard imperative flow of programming languages. Design By Contract is another case in point. Every direct complex statement has a prefix and a postfix of contract material.
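The prefix/postfix idea can be sketched in a few lines: a minimal Design by Contract decorator (the names `contract`, `pre` and `post` are my own invention, not any particular DbC library) that wraps every call to the "complex statement" in its contract material.

```python
# Every call gets a precondition prefix and a postcondition postfix,
# much like an inflected word carries its grammatical modifiers.
def contract(pre, post):
    def wrap(fn):
        def inner(*args):
            assert pre(*args), "precondition violated"
            result = fn(*args)
            assert post(result), "postcondition violated"
            return result
        return inner
    return wrap

@contract(pre=lambda x: x >= 0, post=lambda r: r >= 0)
def square_root(x):
    return x ** 0.5

print(square_root(9.0))  # 3.0
```

The imperative core stays one line; the contract aspects are attached as modifiers around it.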
What would still be missing from the debugging process would be some sense of purpose of the code. And that's where the temporal aspects of hacking that the glyph flows in The Matrix represent come into play. A group of scientists have experimented with turning code into music. The ear, in contrast to the eye, has excellent temporal resolution in particular for temporal patterns, i.e. music. That's a nice concept. You want your code to have a certain flow. You want nested parentheses for instance and that could easily be represented as notes on a scale. While you need to adopt coding conventions to absorb this visually, failure to return to the base of the scale would be very clear to a human listener.
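A tiny sketch of the nesting-to-scale idea (the scale and mapping are arbitrary choices of mine): each parenthesis sounds a note at its nesting depth, and a listener hears at once whether the "melody" returns to the tonic, i.e. whether the brackets balance.

```python
# Map nesting depth of parentheses onto notes of a scale.
SCALE = ["C", "D", "E", "F", "G", "A", "B"]

def melody(code):
    depth, notes = 0, []
    for ch in code:
        if ch == "(":
            depth += 1
            notes.append(SCALE[depth % len(SCALE)])
        elif ch == ")":
            notes.append(SCALE[depth % len(SCALE)])
            depth -= 1
    return notes, depth == 0  # did we come back to the tonic?

notes, balanced = melody("(f (g x))")
print(notes, balanced)  # ['D', 'E', 'E', 'D'] True
```

An unbalanced expression never resolves back to the base of the scale, which is exactly the kind of unresolved tension a human ear picks up without effort.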
In fact, while our visual senses can consume a lot more information than our aural senses, the aural senses are much more emotional and through that emotion - known to us everyday in e.g. musical tension, the aural senses can be much more goal oriented than the visual. This would be a beautiful vision for sound as a programming resource.
They should make some changes in The Matrix Reloaded. The perfect futurist hackers workbench would consist of a largish number of screens. The center screens would present relatively static, slowly changing, beautiful complex images representing the state of the computing system at present. The periphery would have images more resembling static noise, with specific color flares representing notable changes in state away from the immediate focus. I.e. changes that require us to shift our attention.
While working, this code-immersed hacker would listen to delicate code-induced electronica and the development and tension in the code/music would of course be the tension in the film as well, and this then would tie the emotions of the hacker as observer of The Matrix - i.e. the software world within the world of the film - neatly to the emotions of the moviegoer.
Just found the very nice Perl Oasis POD browser. A watering hole indeed. Simple access to all your PODs. The application cheats, since Perl's own module reflection properties aren't yet up to snuff (hopefully Perl 6 will fix this), but the cheats employed work well enough to be useful in a lot of cases. The explorer view of my @INC and the PODs contained therein is a lifesaver. Missing: perldoc integration.
Some of the most interesting ideas in generative programming are converging. Charles Simonyi's idea of Intentional programming now has its own research company. He is joined by the father of aspect oriented programming.
The connection is obvious - at least from what little I know - and I like the initiative since it ties in with some of my own thoughts on software pragmatics.
I am a little worried about the wholesale approach implicit in intentional programming.
Simonyi talks of 'lifting' source code into the intentional world. If he is unwilling to lift it back down, then I think the initiative will fail - as long as it is not a significant platform initiative from a major vendor, like .NET is. (In many ways .NET is what Microsoft did instead of intentional programming).
I like program generation, and I like the idea of structure editing instead of text editing, but program generation has to abide by very strict rules previously discussed, and text editing has some very important advantages over structure editing.
The rules that program generation must follow to be successful have been discussed previously: the total analysis of software - from design to debugging to redesign to debugging once again - must be possible in the edited medium rather than in some generated medium.
A very slow compiler is not efficient, since you cannot reasonably rewrite the code and then re-debug, so you cannot analyze the edited medium (i.e. the source code).
Similarly if the edited medium vanishes (e.g. a wizard dialog) and you need to reengineer you are lost with generated code.
Together, the leveraging of natural language inherent in artificial languages, the ability to express and store incomplete ideas, and the continued access to the ideas in the medium you expressed them in throughout the ENTIRE lifespan of the idea, including design and reengineering, are very important for a successful pragmatic software solution.
What is required in addition to these features?
Two things mainly: automation of everything that is not editing in the design medium, and then one thing that is a little hard to explain other than as all the qualities of natural languages missing from artificial languages. I should point out that here the phrase 'natural language' means 'the complete set of utterances of human speakers', not some specific language with a specific grammar.
The first is a matter of a good setup. Consummate pros have complete automation of tasks once the tasks leave the design medium. Continuous integration is the standard buzzword for this, and it is available in most environments if you are serious about it.
The second is where intentional programming makes its move, but also where it is at least partially ill-conceived IMO. What do natural languages have that artificial languages don't?
An alternative phrasing would be that natural language is open by default - anything goes, as long as the person you are communicating with 'gets it': insertion of other languages, incomplete expression, new words.
The perfect system will be as open as natural language. Personally I think the best way to define something as open as natural language is simply to use natural language as the model.
Intentional programming addresses a lot of these points: the failure of any particular artificial grammar to catch it all, the need to view all utterances about a particular piece of software as part of the software. But if it fails on the most important one, namely the requirement to be 'open by default', the rest will matter a lot less.
Anything goes is the most important quality of natural language.
The always clear-sighted Jon Udell has an article on the future of .NET server libraries, and on when servers will expose their functionality via managed code for ultimate extensibility.
This is the good old world where everybody just used the C libraries and good integration languages were available in source or at least in super integrable form.
Typically the C-library wrappers in the integration language (like mod_perl) define the 'managed code' layer.
The notion that server software is mainly a series of libraries that developers can then extend is reminiscent of the way something like Perl fits inside Apache.
Now if Parrot can just get some leverage, and PostgreSQL, qmail and Apache open their APIs to Parrot, we have the same thing, only open and naturally reflective because of Perl/Python/Ruby power.
That said, it is only further evidence that .NET really is an attempt to do something better than Microsoft ever did before, even if the same kind of functionality has been available on Linux for years.
I've been thinking of getting a new machine for classy.dk, and while I'm thinking I've been updating the configurations of my email server and my web server for a low-RAM environment. Since I rarely have more than one or two visitors on my sites, that should help the pace of classy.dk, which has been abysmal (at least in CGI interactions).
Initial results are encouraging for some parts of the interaction, but storing pages is excruciatingly slow - slower than seems reasonable, even, since server load never rises much during interactions.
So in short - if you have some 72-pin RAM to spare in reasonable sizes (>8 MB), donations are accepted.
While I DO realize that Perl is not the right tool for everything, I DO think it can be used for anything. In particular, object persistence can be as simple as SPOPS -- Simple Perl Object Persistence with Security.
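This is not SPOPS itself, but a hedged sketch (in Python, with a made-up `Store` class) of the underlying point: in a flexible language, serviceable object persistence fits in a handful of lines.

```python
import sqlite3
import pickle

# Minimal object persistence: pickle the object, key it in SQLite.
class Store:
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS obj (id TEXT PRIMARY KEY, data BLOB)"
        )

    def save(self, key, obj):
        self.db.execute(
            "REPLACE INTO obj VALUES (?, ?)", (key, pickle.dumps(obj))
        )

    def load(self, key):
        row = self.db.execute(
            "SELECT data FROM obj WHERE id = ?", (key,)
        ).fetchone()
        return pickle.loads(row[0]) if row else None

store = Store()
store.save("post:1", {"title": "SPOPS", "tags": ["perl", "persistence"]})
print(store.load("post:1")["title"])  # SPOPS
```

SPOPS adds security and datasource abstraction on top of the same basic idea.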
In short, you can do more with less code. Some of the 'stiffer' languages simply cannot conveniently express the same ideas as flexible languages like Perl. The downside (in the case of Perl 5 it is obvious) is the rather contorted design. Each chapter of Programming Perl basically has to mention one or more arcane exceptions to the rule, to make the language just a little bit more malleable.
This detracts from the language from a maintenance point of view, since the behaviour of program text becomes extremely dependent on the local situation it is presented in.
The great thing about Perl 6 from this point of view is that Larry Wall is trying to find an overall behavioural characteristic of the language with fewer exceptions. This new look of the language even manages to add more abilities within the same better-managed functionality.
In particular, OO will look better, as will a lot of the rather happenstance functional programming features in Perl 5.
The danger in the rewrite is that the unfashionable (but to my mind - and every other Perl mind around - very practical) idea that program code is often more readable when brief will be abandoned in favor of the cleaner behaviour.
The common practice seems to be that the benefits of explicit design outweigh the disadvantages implied by this - namely verbose code. J2EE is an extreme case of this principle. All the thoughts are explicitly there at every moment. Personally I think it hurts.
I'm still not using it, and the J2EE SIG at my company died for lack of time and interest, but the pace and scope of JBoss development is admirable.
Version 3.0 includes everything you need - including an Apache Axis plugin SOAP implementation.
Found a nice article on The Seven Habits of Highly Defective Developers. An anti-patterns route to good development habits. Most of them may be implied from good practice guides like my very own favourite The pragmatic programmer, but sometimes it is nice to see the bad thing you are about to do discouraged rather than seeing some good idea you didn't think of recommended.
Somewhat related to the last post on the magical coincidences engineered by Just another perl hackers, if you take the number
4856... Lots of digits removed, see the MORE section for all of them ...9443
you will notice some interesting facts.
Coincidence? I think not!
Not satisfied? Don't have a C-compiler? Then take another prime, say
4931... Even more digits removed, see the MORE section for all of them ...3537
Given the same treatment, this is an actual linux executable, that RUNS DeCSS !!!
The full gzip prime
The full executable prime
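The trick is less magical than it sounds: any file is just a very large number. Here is a small-scale illustration (with a stand-in string, not the actual DeCSS source or either of the real primes) of reading a gzip stream as one big integer and back.

```python
import gzip

# Compress a stand-in snippet; mtime=0 keeps the output deterministic.
data = gzip.compress(b"#define m(i)(x[i]^s[i+84])\n", mtime=0)

# The gzip file, read as a single large integer. Find a prime with
# exactly these digits and you have a "prime that gunzips to code".
number = int.from_bytes(data, "big")

# Reversing the treatment recovers the original file:
recovered = number.to_bytes((number.bit_length() + 7) // 8, "big")
print(gzip.decompress(recovered))
```

The real feat in the linked primes is padding the byte stream (gzip ignores trailing garbage) until the resulting integer happens to be prime.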
Just found an extremely interesting thread about DTDs, W3C schemas and RELAX NG. The subject, which may at first appear rather esoteric, concerns the nature of type systems and how typing relates to XML; ultimately this seems to be important for the role of XML as open and productivity-enhancing, rather than just a new, inefficient means of consuming bandwidth and clock cycles.
The thread starts out as a discussion of the relative merits of the schema language RELAX NG vs. XML Schema.
Proponents of RELAX NG claim a number of advantages for RELAX NG over XML Schema.
A corollary to the above: RELAX NG does not stipulate any information about the data other than what is present in the document; in particular there is no explicit type information. Typing is reduced to "data shape", i.e. constraints verifiable on the data through processing, but not through static querying of type information.
In contrast the key Schema proponent of the conversation, Don Box, claims explicit named typing as an advantage of XML-Schema over RELAX NG. In short, a war of religion is looming over XML typing.
There are at least three notions of type to consider in order to form an opinion on this issue (btw. I am not an expert, or even a computer science graduate, so if these distinctions are at an odd angle with standard descriptions, let me know in a comment).
There is a notion of strong or weak typing, meaning whether assertions of type about data are implicitly enforced, requiring explicit action by the programmer for type reinterpretation to be allowed (this is strong typing - as in C++).
There is a notion of static or dynamic typing, which is somewhat similar to strong/weak typing, but concerns whether type assertions are enforced before (static) or at (dynamic) run time.
Finally there is the notion of explicit/implicit typing, i.e. basically whether the type name is part of the type signature, or whether it is just the concrete interface serviced by the type that matters. In the latter case, only the ability to access the interface counts; whether the interface was available for the right reasons (i.e. through the proper type) is not important.
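The implicit end of that last axis is easy to illustrate (class names invented here): in a structurally typed call, no type name is ever consulted, only the ability to service the interface.

```python
# Implicit ("duck") typing: the shape of the interface is all that counts.
class Duck:
    def quack(self):
        return "quack"

class Modem:
    def quack(self):          # same shape, completely unrelated type
        return "beep"

def provoke(thing):
    return thing.quack()      # no type name is ever checked

print(provoke(Duck()), provoke(Modem()))  # quack beep
```

An explicitly typed language would reject `provoke(Modem())` unless `Modem` declared the right named type, which is precisely the coupling the RELAX NG side wants to avoid.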
A language can make choices along each of these axes independently of the others. But since all of the above properties address the balance between constraints on algorithms and processing instructions (i.e. between predicative and imperative aspects of an algorithm), the bias of a language usually tends toward either the predicative (strong/static/explicit) or the imperative (weak/dynamic/implicit).
Note however that a well-thought-out language need not sacrifice any predicative accuracy by going weak/dynamic/explicit. There is a sacrifice in processing time in doing so, since the satisfaction of constraints must be computed at runtime, but it is very possible, if not very common, to do heavily constrained programming in highly dynamic languages.
What makes the XML thread interesting in this context is James Clark's speculation about the use of named types:
However, I still have my doubts that named typing is appropriate for XML. I would speculate that named typing is part of what makes use of DCOM and CORBA lead to the kind of relatively tight coupling that is exactly what I thought we were all trying to avoid by moving to XML (from this post).
I think this is a very valid point, and one that is even more apt when it comes to SOAP and WSDL, which have a particularly bad structure in the way type information is mixed with other service data.
In bad SOAP implementations (like the one available in Borland's Delphi environment) this means that the client side of the SOAP request is bound at compile time to the server implementation. The client interface is in fact published by the server, so the server metadata is used only once, at compile time.
So instead of requiring a particular input data set, and accepting whatever the server sends that happens to match the requirements, there is now an assumption about what exactly the server sends.
I think this is in contrast to the usual protocol design maxims about specifying only what one end of an interface must accept, which usually state that non-accepted content must either be passed through for later processing or ignored.
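A minimal sketch of that maxim (field names invented, JSON standing in for the wire format): the receiver requires only the fields it needs and silently ignores anything extra, so the sender is free to evolve.

```python
import json

# Require only what we need; ignore whatever else the peer sends.
REQUIRED = {"id", "amount"}

def handle(message):
    data = json.loads(message)
    missing = REQUIRED - data.keys()
    if missing:
        raise ValueError(f"missing fields: {missing}")
    # Only the required fields are used; extras pass through untouched.
    return data["id"], data["amount"]

print(handle('{"id": 7, "amount": 100, "new_field": "ignored"}'))  # (7, 100)
```

Contrast this with a compile-time binding to the server's full published type, where the same `new_field` would break the client.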
The flip side of the coin is whether there is a viable alternative to named types if XML is to be used predominantly as a data-centric language.
Clearly the possibility of naming types is economical, as is the possibility of default interpretation. And on the other hand, the true need for openness is often in question.
(TO BE CONTINUED)
In an interview about the progress of .Net, Bill Gates talks about how Hotmail is an example of "software as a service" - and he's not talking about Hailstorm.
If software-as-a-service is going to be anything like Hotmail, I don't want it. Hotmail has fewer features, more ads, more intrusive Microsoft commercials, and is of course slower than a 'real' email service. This is a good example of the whole "let's force everything to port 80"/"we can just use the browser as the interface" fallacy. What we need is a truly new renegotiation of the network/terminal interface. SOAP isn't it.
Not to sound too much like a Linux whiner, but I think the X terminal - with access to local data and processing resources - is really a better place to start than the opposite. Or one could study the Groove architecture to see if that cuts the cake. At least they have a notion of a networked working space - enhanced with local resources to display and process data, AND with integration for server-based services via a business integration server.
Well, actually the battle of the would-be web service Titans.
Oracle has rebutted Microsoft's claims about the speed of .NET relative to J2EE. It turns out (of course) that the Microsoft version of the famous Java Pet Store demo was performance-optimised. The original version was written for clarity in the use of J2EE APIs for beginners, and tried to exercise various design guidelines that prove important and practical for scalable systems (n-tier design, abstracted database logic). Applying the same optimisations to the J2EE version, Oracle more than ties the game.
Not that I'm surprised the Microsoft stats were bogus, but the story has two good points:
So much for the would-be Titans. One of the real ones has also commented.
While the comments by Dietrich Ayala (the quote is "Obviously I screwed up somewhere, but I've only been using Perl since 1993, so I'm not an expert") do present a valid point, the case of Perl vs. the rest of the world is almost always overstated in favor of 'the rest of the world'. It would appear Perl is very hateable. Personally I think we NEED to move beyond C-like languages (and no, I'm not talking about garbage collection either) to see real productivity gains. The points on Software Pragmatism still stand, however, so a language capable of complicated interpretation should expose that interpretation to the programmer - in the debugger, I guess.
Something like SQL EXPLAIN: what did you do to arrive at the interpretation of my questions that you arrived at?
Maybe Perl 6 - with the grammar represented IN the language - will be able to do something like this.
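Python already hints at what such an EXPLAIN might look like: the standard-library `dis` module shows the interpretation the runtime arrived at for a piece of source. A grammar-in-the-language system could of course go much further than bytecode.

```python
import dis
import io

def f(a, b):
    return a + b

# Ask the runtime how it interpreted the source of f.
buf = io.StringIO()
dis.dis(f, file=buf)
print(buf.getvalue())  # bytecode listing, e.g. LOAD_FAST a / LOAD_FAST b / ...
```

It answers "what instructions did you compile this to", which is a crude but real ancestor of "how did you interpret my question".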
The Wired article Deep Link Foes Get Another Win comments on the sad, ridiculous outcome of a lawsuit by Danish newspapers against a link-digest service called Newsbooster. The predictable, but still idiotic, claim of the newspapers is that forwarding openly available links to openly available content on their webpages is somehow a violation of their copyright. Nonsense! If the articles were excerpted, so you could read the news without visiting the webpages, that would be something, but the idea that you HAVE to arrive at a page through link navigation from a banner page is ridiculous. And the claims made by newspaper spokespersons that they are not trying to limit the availability of deep linking are of course absurd - since the only thing Newsbooster is guilty of is deep linking.
What's even more ridiculous is that the newspapers could stop the deep linking by changing the way they implement their websites. If they are so intent on offering links to only one page - which of course reduces the value of their service to very little - this is completely possible by serving only dynamic page references, changed on an hourly timescale.
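A hedged sketch of what that could look like (the secret, the URL scheme and the function names are all invented for illustration): article URLs keyed to the current hour, so yesterday's deep links simply stop resolving.

```python
import hashlib
import time

SECRET = b"newsroom-secret"  # hypothetical server-side secret

def page_url(article_id, hour=None):
    # Key the link to the current hour; old links go stale automatically.
    hour = int(time.time() // 3600) if hour is None else hour
    token = hashlib.sha256(
        SECRET + f"{article_id}:{hour}".encode()
    ).hexdigest()[:12]
    return f"/news/{token}/{article_id}"

url_now = page_url(4711)
url_later = page_url(4711, hour=12345)
print(url_now != url_later)  # different hours give different links: True
```

No lawsuit required - just an implementation choice, albeit one that would cripple their own site's linkability along with Newsbooster's.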
The proper solution for the newspapers is to get with the program and turn their sites into true hypertext, where every page is a valid and compelling entry point to the entire website. Reworking newspaper sites in this fashion works with the hypertext publishing model instead of against it. Think Amazon: all of their book pages serve as an excellent introduction to further Amazon inventory.
With a proper implementation, Newsbooster adds value to the newspapers instead of draining it.
In fact I think that even with badly made news sites this is true. People simply don't use their back button that much, but continue through the newsflow after scanning pages.
It would be interesting to hear comments on this from someone like Jakob Nielsen.
A radical approach to Humanistic Intelligence for software development is Charles Simonyi's Intentional Programming, a Microsoft technology which seems to languish in the research labs, if it has not been abandoned. The references to Simonyi on the Microsoft Research website seem to have vanished. If you have any information on this technology more recent than the quoted article or what appears in the book Generative Programming, then please tell me about it.
It's an amazing book. Reading it makes you feel that
The first experience is of course unfair to the later literature. But still, the second experience does indicate that researchers in the field of software project management have failed to make any significant progress.
My reading of the book was well timed with my reading of the cover story of the latest issue of Technology Review, about the poor quality of software today - and how it appears to have gotten only worse lately. There are two things in particular that interest me in this connection:
Back to the first point: Why did Object Orientation (OO) fail as a silver bullet for software design? Answer: it did not fail. It just succeeded in disguise, as component based development.
In OO design books the pervasive practice of component based development is often ridiculed as being only object based instead of fully object oriented. What we should learn from this is that the key component of object orientation (pun intended) is encapsulation, and the localization and naming of complicated things implied by encapsulation, not the entire bestiary of what you might call ontological linguistic devices available in full OO development environments. By referring to OO as ontological I mean the focus on things and their properties and making statements about them.
As it turns out, the usefulness of active philosophizing (i.e. working explicitly with the ontological aspects of things by doing OO modeling) is thoroughly dominated by the usefulness of simply having names and language for the concrete objects themselves, i.e. components. I'm not sure it is surprising that there is more expansive demand for object users than for object makers. The object maker ends up in a role similar to the language designer and the tool maker, and the trade of object making is therefore a much more select profession than that of the object user.
This development is related to another interesting development in software design, namely the birth of Software Pragmatics. This is important enough to merit capitalization.
To explain the grand statement: in linguistics, pragmatics refers to the study of the relationship between language and context-of-use (this definition is quoted from Speech and Language Processing). Among the topics of linguistic pragmatics are discourse analysis, and how language is used to model the world (semiotics). Software Pragmatics does the same thing for software development. The key discipline in Software Pragmatics is the pattern movement inspired by the well-known Design Patterns book. And of course there's the very influential book called The Pragmatic Programmer, which is almost against theory, simply emphasizing everyday pragmatic thinking. I still think its authors manage to make a great many very valid and general points, even though the scope of their book is not as grand as that of the patterns movement. Thirdly there are the new situation- and conversation-oriented project methodologies such as Extreme Programming, which also fit the mold: orienting the development of the craft towards elevating the quality of the work process and communication, as opposed to the more theoretically rigorous invention of new technique which is the classical model for development of the trade.
The collection of books just mentioned has had a remarkable influence. I believe that the familiarization of huge numbers of programmers with component based development on the Windows platform is sort of the illegitimate father of this success. It certainly wasn't the heavy Smalltalk bias of the original Design Patterns and Extreme Programming inventors. The component based development approach is exactly focused on the reapplication of effective work processes, as opposed to a more invention-like approach.
This brings us back to the original question of why Brooks' points on software development still apply. As pointed out by Brooks, there have been numerous grand schemes to change the face of software development: automatic programming, generative programming, expert systems, etc. Why the success of component based development? And why does this work where the grand schemes fail? When thinking about this I am reminded of Steve Mann's concept of Humanistic Intelligence (HI). Humanistic Intelligence is a concept for intelligent signal processing that emphasizes the human brain as the intelligent processor of signals, as opposed to other concepts of machine intelligence, where the effort goes into adding intelligence to the machine. The idea behind HI is that it is orders of magnitude simpler to enhance the sensory experience of the human brain to include signals normally processed by machines than to add intelligent processing to the machines receiving the signals:
Rather than trying to emulate human intelligence, HI recognizes that the human brain is perhaps the best neural network of its kind, and that there are many new signal processing applications, within the domain of personal technologies, that can make use of this excellent but often overlooked processor
Software Pragmatics can be seen as an example of Humanistic Intelligence. The developer stays at the center of the development process; only his sensory experience is enhanced, through better conceptualization via components and patterns, and through better tooling. This also explains some of the more significant demographic shifts in the world of software developers. The rise of modern reflective languages like Java and Microsoft's .NET platform is an indication that the most effective machine assistance in development is conversational/discursive in nature. Standard modern tools that programmers live by, like symbol completion and symbol browsing, are examples of this. The developer is left in control, and automated programming is deemphasized (I consider the curse of the one-way wizard a step down in productivity).
There's a great promotional video on Cray's new line of supercomputers here. The video is very interesting - among the facts is the heat produced by the system: 45 watts per square centimeter! That is the processor heat - so for a kilo-processor system (i.e. one with 1000+ processors) we're talking tremendous power consumption. The computational power of the machine is enormous: they mention a figure of 10 GFlops per processor (or was it per board of 4?) and a total system capability of 1024 of these boards, meaning a total machine capability of 10 (or 40) TeraFlops. But of course THAT machine will consume insane amounts of power (0.5 MW? - sounds absurd, but 1000 times 0.5 kW and you're there). Interestingly, they're able to fit 16 four-processor boards in a single cabinet, so the largest machines will be 'only' 64 cabinets, which, apart from the need for heat dissipation, shouldn't consume too much space.
Need we mention that the estimated computational power of the brain is somewhere between 10^13 and 10^16 operations per second - so possibly 1000 times that of this new machine. And the brain consumes about 25 watts of power, so the energy efficiency of the brain compared to our present technology is a factor of 10^6 better or so. Room for improvement.
But - if we can keep inventing at Moore's Law pace, the brain-quality machine is only 10 years away. Add another 10 years to get decent power features for a single-brain machine (and a high-powered 'brainpower of a university' machine in research centers), and the world could be a very changed place, as previously discussed by Bill Joy and Ray Kurzweil among others.
UPDATE: February 2005: Updated the bookmark, so it works with current amazon link format. At least for books. At least in Firefox.
UPDATE: March 2008: Fixed again
No, you shouldn't even trust a nice guy like me.
Yes, an examination of the source will reveal that the link does no harm to you or your browser.
An interesting study of the effect of social factors (i.e. who does the coding) as opposed to technical factors (i.e. implementation language) in software implementation is rounded off with a final LISP example in this article.
The conclusion: social factors dominate when it comes to efficiency. Unfortunately developers are ineffective at evaluating their own quality of work, so metrics must be applied instead of subjective judgment (or rather, metrics and subjective judgment are uncorrelated; which is right is of course another issue).
This may not be surprising, but it only strengthens the case for scripting languages and newer, more reflective environments. Microsoft's .NET strategy is of course a development in favor of this way of thinking.
Interesting, of course, is the fact that LISP is fast to write as well as to execute, and has been around for ages. The only problem for LISP is the lack of integrability with many 'real world' environments. It's funny how old-school ideas like the UNIX shell and LISP still cannot be beaten on fundamentals.
Of late, Microsoft and others have taken to using September 11 as a marketing device. The tactic is invariably the same as in a recent report of dubious independence, namely the claim that open source software is less secure simply because of the source availability. This claim is of course blatantly wrong. The idea that security by obscurity - i.e. the lack of publicly available information about a security flaw - protects against exploitation has been discredited so many times that it is hard to find room to mention them all: the innumerable flaws in IIS and Internet Explorer, the DeCSS story, the PDF/Dmitry Sklyarov story. The Enigma machine is an early example.
So we can only repeat once again that open discussion about security is the best means of security there is.
I found a quote somewhere from Kevin Werbach of Release 1.0 to the effect that weblogs, web services and wireless internet are the next World Wide Web. The excitement is supposedly back. Then he loses it by stating that You heard it here first. Is he kidding? It's been going on for a while now.
It is interesting though. To me, something like Radio Userland is mainly interesting because it challenges some of the design paradigms we've gotten used to, by efficiently moving control to the edges, using the shared space of the internet itself mainly for storage and to enable discovery.
This new edge controlled network is in the very early stages of formation, and resembles the WWW of 1992-1994: All content is essentially static, since the edge network - the control - is not generally available 24/7 or visible at all.
What kinds of dynamic content are possible on the edge-controlled, sometimes-on network? Is this finally the emergence of a real live architecture of software agents? Why should it be? Well, the edge network needs some technology to move control about, and specifically to move control into some visible, available space. That moveable control would be some kind of software agent.
What other language calls design documents apocalypses?
And in what other language would these design documents contain marvelous paragraphs like the following:
Let's face it, in the culture of computing, regex languages are mostly considered second-class citizens, or worse. "Real" languages like C and C++ will exploit regexes, but only through a strict policy of apartheid. Regular expressions are our servants or slaves; we tell them what to do, they go and do it, and then they come back to say whether they succeeded or not.
At the other extreme, we have languages like Prolog or Snobol where the pattern matching is built into the very control structure of the language. These languages don't succeed in the long run because thinking about that kind of control structure is rather difficult in actual fact, and one gets tired of doing it constantly. The path to freedom is not to make everyone a slave.
However, I would like to think that there is some happy medium between those two extremes. Coming from a C background, Perl has historically treated regexes as servants. True, Perl has treated them as trusted servants, letting them move about in Perl society better than any other C-like language to date. Nevertheless, if we emancipate regexes to serve as co-equal control structures, and if we can rid ourselves of the regexist attitudes that many of us secretly harbor, we'll have a much more productive society than we currently do. We need to empower regexes with a sense of control (structure). It needs to be just as easy for a regex to call Perl code as it is for Perl code to call a regex.
You've got to love it. Even if you don't want to use it, you've got to love it!
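For a small present-day taste of a pattern calling host code, rather than the other way around, here is a sketch using Python's re.sub with a function as the replacement - not Larry's eventual design, just the flavor of it (the function name is mine, purely for illustration):

```python
import re

# Sketch: the pattern drives the control flow, and the host code
# runs as its subordinate -- re.sub calls the function once per match.
def shout(match):
    return match.group(0).upper()

text = "regexes are servants"
result = re.sub(r"\b\w+s\b", shout, text)
print(result)  # -> "REGEXES are SERVANTS"
```

It's a far cry from regexes as co-equal control structures, but it already inverts the usual master/servant arrangement for the duration of the match.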
Finally my DI.pm - Digital Identity Perl - module has come to life. I've had a hard time finding time to write up this rather minimalist, but eminently practical module. I'm using DI login on my test server at last. Publication of version 0.01 of the toolkit some time this weekend on F9S
If you're transmitting information (even stochastic information) over a channel with noise (i.e. a random distortion of the data), there are good theorems and algorithms for recovering the original variable from the distorted signal, provided you have a good model of the distortion. This is used ingeniously by IBM to protect privacy while still collecting customer information over the internet. Customer data is collected, but passed through a distorting filter. The filter safely eliminates any meaningful individual value of the original customer data, but the distribution of the original data can be recovered with good accuracy. Think for a second about applying this to online voting. It would be theoretically sound and would protect online voters from any possibility of political pressure or abuse, but it would be very hard to explain to the public.
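The voting idea can be sketched with randomized response, a simpler cousin of the kind of distorting filter described above (the function names, the 75% honesty parameter and the 60% yes-rate are mine, purely for illustration):

```python
import random

def randomize(vote, p=0.75):
    """Report the true vote with probability p, otherwise a coin flip.
    No individual report reveals the voter's actual choice."""
    return vote if random.random() < p else random.random() < 0.5

def estimate_yes_rate(reports, p=0.75):
    """Invert the known distortion: observed = p*true + (1-p)*0.5."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p) * 0.5) / p

random.seed(1)
true_votes = [i < 6000 for i in range(10000)]  # a 60% yes electorate
reports = [randomize(v) for v in true_votes]
print(round(estimate_yes_rate(reports), 2))  # close to 0.6
```

Each individual report is deniable - it might just be the coin talking - yet the aggregate distribution comes back with good accuracy. Which is exactly the property you'd want for voting, and exactly the part that's hard to explain to the public.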
How appropriate that the CEO of a company with a new strategy of Open Source But Closed Binaries - threatening (in part) the open source movement by alienating non-technical users - should be called Ransom Love...
Non-technical quote: "Well, according to Ransom Love, CEO of Caldera Systems".
What's all this then: an xml.com guide to a primer to a format for describing other things located at URLs - themselves references. How much indirection can you take!
XML.com: Go Tell It On the Mountain [May. 15, 2002]
Warcraft 2 hacking can now go above and beyond the design of scenarios. The FreeCraft open source game engine is stable and available. Now if I could just find the time to write a brand new world for it.
The browser battle is on again. The official version of Mozilla 1.0 from Netscape is almost there. As reported on news.com:
Netscape 7.0 rekindles browser battles - Tech News - CNET.com