The immensely popular and simply fantastic Google is a cultural phenomenon of enormous proportions. We've all heard of Googlewhacking, i.e. the sport of finding two words that - when searched for on Google - return exactly one hit. My esteemed colleague Jasper was a googlewhack until I posted this story.
Now there's a new sport in town, namely the intentional posting to newsgroups of material that - when found on Google and highlighted in yellow - presents a nice graphic.
I hereby challenge all mail order merchants:
Beat this in your mail order catalog.
The whole idea is somewhat reminiscent of the age-old sport of Just another perl hacker email tags: the art of tagging your email with four lines of dense Perl code that run only to print the words "Just another perl hacker".
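For the uninitiated, the trick can be sketched in a few lines of Python (the classics were Perl one-liners, so this is only an analogy): code that looks like noise but, once run, decodes to the tag line.

```python
# A minimal Python take on the JAPH idea: source that reads as noise
# but decodes to the signature line when executed.
print("".join(chr(c) for c in
    [74, 117, 115, 116, 32, 97, 110, 111, 116, 104, 101, 114, 32,
     112, 101, 114, 108, 32, 104, 97, 99, 107, 101, 114]))
```

Real JAPHs are far denser, of course; the sport is in how obliquely the string is produced.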
Hmmm, if you write to dkhostmaster with a question, you get the following screed in return:
This receipt assures you that we have received your mail.
So please refrain from mailing, calling or writing about the subject again. That
will only delay processing further.
That would never have flown in America.
No, this is not an article about a failed 2M Invest company... Actual legislation is being proposed in the US Congress to allow any copyright holder to hack the hackers, as reported on K5. In short, the proposed bill provides immunity for a number of possible liabilities caused by interfering with another party's computer, if the intent was explicitly - and upfront - to foil illegal use of copyrighted material.
This is the old "If guns are outlawed only outlaws will have guns" idea. Let the good guys give the bad guys a taste of their own medicine. Only, in the virtual world, where boundaries of location (especially in a P2P world) are abstract and hard to define, it seems to me that this bill extends the right to self-defence and the right to protect the sanctity of the home into actually allowing aggressive vigilante incursions on other people's property, when the other people are accused of copyright infringement.
It goes right to the core of current intellectual property debates, and raises in a very clear way the civil-rights issues involved in the constant and rapidly increasing attempts at limiting right-of-use for lawfully purchased intellectual property. Whose property IS intellectual property anyway?
In the olden days - when intellectual property was securely tied to some kind of totem, a physical stand-in for the intellectual property in the form of the carrier of the information, i.e. a book or an LP or similar - there was a simple way to settle the issue. Possession of the totem constituted an unlimited right of use of the intellectual property. The only intellectual property available on a per-use basis was the movies. Live performance does not count in this regard, since live performance is tied to the presence of the performer, and the consumption of live performance is therefore not a transfer of intellectual property to the consumer, in that it is neither copyable, transferable, nor repeatable.
It is of course the gestural similarity with live performance that has led to the rental model for film.
As the importance of the totem began to degrade, so began the attacks on the physical interpretation of intellectual property. We have seen these attacks and reinterpretations of purchase through the introduction of cassette tapes, video tape, paper copiers, copyable CD-ROM media, and now just the pure digital file.
At each of these turning points attempts are made to limit the right-of-use to film-like terms. Use of intellectual property is really just witnessing of a performance. So you pay per impression, and not per possession.
What is interesting of late, and in relation to the lawsuit, is first the question of whether this 'artistic' pricing model is slowly being extended from the entertainment culture to all cultural interaction. Modern software licenses are moving towards a service model with annual subscription fees. This could be seen as a step towards pure per-use fees for all consumable culture - an idea that is at least metaphorically consistent with the notion of the information grid. Information service (including the ability to interact) is an infrastructure service of modern society, provided by information utilities and priced in the same way as electrical power.
In practice you do not own the utility endpoints in your home - the gas meter and the electrical power connection to the grid. And ownership of any power-carrying or power-consuming device does not constitute ownership of the power/energy carried or consumed. This is how the content companies would have us think of hardware. And Microsoft would like you to think of Windows as content in this respect.
Secondly, there is the important question of how this interpretation of information and culture relates copyright to civil rights.
The sanctity of physical space (i.e. the right of property) is a very clear and therefore very practical measure of freedom. Actions within the physical space are automatically protected through the protection of the physical space. There are very real and important differences between what is legal in the commons and what is legal in private space. And of course the most important additional freedom is the basic premise of total behavioural and mental freedom.
The content company view of intellectual property is a challenge to this basic notion of freedom. There is a fundamental distinction between the clear-cut sanctity of a certain physical space and the blurry concept of "use".
The act of use itself can be difficult to define, as property debates over "deep-linking" make clear.
In more practical terms, any use of digital data involves numerous acts of copying of the data. Which ones are the ones that are purchased, and which ones are merely technical circumstances of use? The legislation proposed enters this debate at the extreme content-provider-biased end of the scale. Ownership of anything other than the intellectual rights to content is of lesser importance than the intellectual ownership.
The difficulty of these questions compromises the notion of single use and use-based pricing. And ultimately - as evidenced by the deep-link discussions - the later behaviour of the property user is also impacted by purchase of intellectual property, according to the content sellers. This is a fundamental and important difference between the electrical grid and live performance on one hand, and intellectual property on the other. Intellectual property simply is not perishable, and, as if by magic, it appears when you talk about it.
Interestingly, a person with a semiotics background would probably be able to make the concept of "use" seem even more dubious, since the act of comprehension of any text or other intellectual content is in fact a long-running, never-ending and many-faceted process. In the simplest form, you would skirt an issue such as this and go with something simple like "hours of direct personal exposure to content via some digital device". That works for simple kinds of use, but not for complicated use. And it should be clear from endless "fair use" discussions that content owners are very aware of the presence of ideas made available in their content in later acts of expression.
A wild, far-fetched guess would be that as we digitize our personal space more and more, expression will be carried to a greater and greater extent over digital devices, so that the act of thought is actually external, published and visible (witness the weblog phenomenon). In such a world, the notion that reference is use becomes quite oppressive.
Ultimately the concept of free thought and free expression is challenged by these notions of property. It is basically impossible to have free thought and free expression without free reference or at least some freedom of use of intellectual materials.
At [smoothbeats.com] there is a nice always-on hiphop radio station (broadcasting in SHOUTcast format). Listen.
Just found an extremely interesting thread about DTDs, W3C schemas and RELAX NG. The subject, which may at first appear rather esoteric, concerns the nature of type systems and how typing relates to XML; ultimately this seems to be important for the role of XML as open and productivity-enhancing, rather than just as a new, inefficient means of consuming bandwidth and clock cycles.
The thread starts out as a discussion of the relative merits of the schema language RELAX NG vs. XML Schema.
Proponents of RELAX NG claim a number of advantages for RELAX NG over XML Schema.
A corollary to those claims: RELAX NG does not stipulate any information about the data other than what is present in the document; in particular, there is no explicit type information. Typing is reduced to "data shape", i.e. constraints verifiable on the data through processing, but not through static querying of information type.
In contrast, the key Schema proponent of the conversation, Don Box, claims explicit named typing as an advantage of XML Schema over RELAX NG. In short, a war of religion is looming over XML typing.
There are at least three notions of type to consider in order to form an opinion on this issue (btw. I am not an expert or even a computer science graduate, so if these distinctions are at an odd angle with standard descriptions, let me know in the comments).
There is a notion of strong or weak typing, meaning whether assertions of type about data are implicitly enforced, requiring explicit action by the programmer for type reinterpretation to be allowed (this is strong typing, as in C++).
There is a notion of static or dynamic typing, which is somewhat similar to strong/weak typing, but concerns whether type assertions are enforced before (static) or at (dynamic) run time.
Finally there is the notion of explicit/implicit typing, i.e. basically whether the type name is part of the type signature, or whether only the concrete interface served by the type counts. In the latter case, only the ability to access the interface matters; whether the interface was available for the right reasons (i.e. through the proper type) is not important.
A language can make choices along each one of these axes independently of the others. But since all of the above properties address the balance between constraints on algorithms and processing instructions (i.e. between predicative and imperative aspects of an algorithm), usually the bias of a language tends toward either the predicative (strong/static/explicit) or the imperative (weak/dynamic/implicit).
Note however that a well-thought-out language need not sacrifice any predicative accuracy by going (weak/dynamic/implicit). There is a sacrifice in processing time by doing so, since the satisfaction of constraints must be computed at runtime, but it is very possible, if not very common, to do heavily constrained programming in highly dynamic languages.
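The explicit/implicit axis is easier to see in a concrete language. Here is a minimal sketch in Python, a dynamic language that supports both styles; all class names are invented for illustration, with `Quacks` standing in for a "data shape" constraint à la RELAX NG and `NamedQuacker` for an explicit named type à la XML Schema.

```python
# A sketch of explicit/named vs implicit/structural typing, using
# Python's Protocol for the "data shape" side. All names are invented.
from typing import Protocol, runtime_checkable

@runtime_checkable
class Quacks(Protocol):
    """A structural type: anything with a quack() method matches."""
    def quack(self) -> str: ...

class NamedQuacker:
    """A nominal type: matching requires declaring this name."""
    def quack(self) -> str:
        return "quack"

class Mallard(NamedQuacker):     # explicitly claims the name
    pass

class Duck:                      # right shape, but never claims the name
    def quack(self) -> str:
        return "quack"

# Implicit/structural check: only the interface counts.
assert isinstance(Duck(), Quacks) and isinstance(Mallard(), Quacks)

# Explicit/named check: Duck fails despite having the right shape.
assert isinstance(Mallard(), NamedQuacker)
assert not isinstance(Duck(), NamedQuacker)
```

The structural check is exactly "constraints verifiable on the data through processing": nothing about `Duck` announces a type, yet it satisfies the shape.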
What makes the XML thread interesting in this context is James Clark's speculation about the use of named types:
However, I still have my doubts that named typing is appropriate for XML. I would speculate that named typing is part of what makes use of DCOM and CORBA lead to the kind of relatively tight coupling that is exactly what I thought we were all trying to avoid by moving to XML (from this post).
I think this is a very valid point, and one that is even more to the point when it comes to SOAP and WSDL, which have a particularly bad structure in the way type information is mixed with other service data.
In bad SOAP implementations (like e.g. the one available in Borland's Delphi environment) this means that the client side of the SOAP request is bound at compile time to the server implementation. The client interface is in fact published by the server, so the server metadata is used once, at compile time.
So instead of requiring a particular input data set, and accepting whatever the server sends that happens to match the requirements, there is now an assumption about what exactly the server sends.
I think this is in contrast to some of the usual protocol design maxims, about specifying only what one end of an interface must accept, and usually stating that non-accepted content must either be passed through for later processing or ignored.
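That maxim - require only what you need, ignore the rest - can be sketched as a tolerant reader. A minimal Python sketch; the payload and field names are invented for illustration:

```python
# A sketch of the "accept what you need, ignore the rest" protocol
# maxim: the client extracts only its required fields, so the server
# can add elements later without breaking the binding. The XML payload
# and field names here are invented for illustration.
import xml.etree.ElementTree as ET

def read_order(xml_text):
    """Extract only the fields this client requires; unknown
    elements are simply ignored rather than rejected."""
    root = ET.fromstring(xml_text)
    required = {}
    for field in ("id", "amount"):
        node = root.find(field)
        if node is None:
            raise ValueError("missing required field: " + field)
        required[field] = node.text
    return required

# The server may evolve and send extra data; the client keeps working.
response = "<order><id>42</id><amount>9.95</amount><promo>x</promo></order>"
assert read_order(response) == {"id": "42", "amount": "9.95"}
```

Contrast this with the compile-time bound client above, which assumes it knows exactly what the server sends and breaks the moment that changes.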
The flip side of the coin is whether there is a viable alternative to named types if XML is to be used predominantly as a data-centric language.
Clearly the possibility of naming types is economical, as is the possibly of default interpretation. And on the other hand the true need for openness is often in question.
(TO BE CONTINUED)
Hmmm, the Danish fuel-cell advocacy website brintbiler.dk thinks that the keyword 'management' was a good one to purchase on Google AdWords. I found out trying to find a link for
Either they weren't thinking when they purchased, or Google is not in proper working order, or there are hitherto unknown connections between component programming and green high-tech energy.
In an interview about the progress of .Net, Bill Gates talks about how Hotmail is an example of "software as a service" - and he's not talking about Hailstorm.
If software-as-a-service is going to be anything like Hotmail, I don't want it. Hotmail has fewer features, more ads, more intrusive Microsoft commercials and is of course slower than a 'real' email service. This is a good example of the whole "Let's force everything to port 80"/"We can just use the browser as interface" fallacy. What we need is a true renegotiation of the network/terminal interface. SOAP isn't it.
Not to sound too much like a Linux whiner, but I think the X-Terminal - with access to local data and processing resources is really a better way to start, than the opposite. Or one could study the Groove architecture to see if that cuts the cake. At least they have a notion of networked working space - enhanced with local resources to display and process data, AND with integration for server based services via a business integration server.
The best and most tasteless one I've found is on the recent Israeli strike against a militant Hamas leader - leading to the death of 15 others: Stevo suggests that if only the Israeli forces communicated with MSN Messenger and Exchange Server, surely the attacking plane would have been informed about the many civilians who died, and their deaths could have been avoided.
Man - as previously mentioned - could degrade into biomass for information processing, if the pressure of divided attention cannot be tamed.
Listening to the radio program This American Life | Give the People What They Want, it becomes clear that we are biomass for information - and happily so - as long as we're consuming and processing social information. The story is about a home for Alzheimer's patients, where they stage fake weddings to please the residents. The weddings are fake: bride and groom were hired to play bride and groom. It's like Tony n' Tina's Wedding without the pretense of any excitement, except that of a social situation in which you participate.
This helps the Alzheimer's patients by placing them in a social situation they can understand and consume mentally, perhaps remembering similar situations from their own lives. However, since they are Alzheimer's patients, the memory of the event lasts only a few hours. The home could stage the wedding again the next day, and the attendees would attend as if yesterday had never happened.
I am reminded of a novel by the Danish novelist Svend Aage Madsen called Se Dagens Lys (literally, "See the Light of Day") about a man who wakes up each morning in a new world, with a new wife and new neighbours, to happily live through the social gestures of the day, and then wakes up the next day with no emotional history, just more social gestures and a new but similar setting (and I am of course also reminded of "Once in a Lifetime", "Brave New World", "1984" and every other fictionalization of the modern emotionally disengaged life).
I want to join the stock slide too! My old workplace SimCorp - a 2M company - has now halved its market value since its introduction on the Copenhagen Stock Exchange. Fortunately I am not allowed to sell my employee shares, so there is no reason to be trigger happy down here at the bottom. I did prefer the triple price they almost reached for a brief while.
Btw. the company is not in any danger whatsoever, and is very well cushioned.
The University at Odense (in Denmark) just set up a new supercomputing cluster built entirely on commodity hardware.
It is worthwhile to compare this to the supercomputers built with 'rocket-grade' technology, i.e. the fast Crays and the IBM peta-flop machine.
The price/performance ratio of the commodity system cannot be beat, of course. But The Horseshoe, as the commodity system is dubbed, is close to the maximum conceivable on commodity hardware, with 512 PCs in parallel.
By comparison, the top Crays will be 20 times as fast (servicing 10K PCs would not be fun) and the IBM machine 50 times faster again (servicing 500K PCs is decidedly non-funny). As for power consumption, the PC solution is on a level with the Crays: based on the reported price of power, consumption for the full cluster is approximately 73 KW (1.7 MDKK for three years of continued use at 1.2 DKK per KWh). The IBM solution comes in at much lower consumption rates (2 MW per PetaFlop, i.e. 4 KW for 2 TeraFlops). The heat from the PCs will take quite a cooling system, and scaling that cooling system would be hard without doing something like the Cray or IBM solution.
So the IBM technology would save 0.5 MDKK per year on power alone.
In this article, a member of the Danish system team declares the supercomputer "dead" - but given the industrial-scale environment you would need to invest in to scale the commodity solution, I'm guessing that if you really need top speed, they are far from dead.
Supercomputers are also clustered systems, but the power use and sheer physical size of a large scale clustered system should ensure a place for custom hardware one would guess.
A recent visit to top500.org and links found there gives some nice price and performance quotes for fast systems.
First off, the possibility of HorseShoe actually reaching 2 TeraFlops on real problems is hypothetical. This performance is the pure processor speed; memory bandwidth and the price of parallelization are not considered. From the design of the HorseShoe (commodity networking - fast ethernet only), one would guess that you would need an algorithm that works well with very coarse-grained parallelism to achieve any performance near the theoretical top speed. Indeed, only the top 23 systems on the worldwide list at top500 are rated > 1 TFlops. But that rating tests only LINPACK, so for search problems or simulations it may not be very significant.
Secondly, pricing: ASCI White reportedly cost 110 M$ two years ago. Assuming it could be done at half that price today, we're still way above the price of a commodity system. On the other hand, IBM plans to build Blue Gene (or at least Blue Gene/L) for 100 M$ by 2004. This machine will be rated at 200 TFlops, so the price per TFlop will be 0.5 M$. Assuming price/performance also drops by a factor of 2 over the next couple of years, this is still comparable to the price of a commodity system today. And the x100 scale factor will be hard to do for the commodity cluster.
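The arithmetic behind those two claims, using only the figures quoted in the post:

```python
# Back-of-the-envelope check of the price/performance figures above.
blue_gene_cost_musd = 100    # projected Blue Gene/L cost, M$
blue_gene_tflops = 200       # projected Blue Gene/L rating, TFlops
horseshoe_tflops = 2         # theoretical peak of the PC cluster

# Price per TFlop for the IBM machine: 0.5 M$.
assert blue_gene_cost_musd / blue_gene_tflops == 0.5

# The "x100 scale factor" separating Blue Gene/L from HorseShoe.
assert blue_gene_tflops / horseshoe_tflops == 100
```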
So commodity systems can't beat custom systems, even if the processors of almost all the supercomputers are commodity processors.
Cray is developing a 50 GFlop computer (the SV2 mentioned earlier) using a more traditional supercomputing approach. There are some price announcements for government orders for two of these systems, but it is unclear whether the reported 19 M$ includes a full-sized system.
Well, actually the battle of the would-be webservice Titans.
Oracle has rebutted Microsoft's claims about the speed of .NET relative to the speed of J2EE. Turns out (of course) the Microsoft version of the famous Java Petstore demo was performance-optimised. The original version was written for clarity in the use of J2EE APIs for beginners, and tried to exercise various design guidelines that prove important and practical for scalable systems (n-tier design, abstract database logic). Applying the same optimisations to J2EE, Oracle more than ties the game.
Not that I'm surprised the Microsoft stats were bogus, but the story has two good points:
So much for the would-be Titans. One of the real ones has also commented.
As reported in Computerworld Online, money can still be made in the consulting business - i.e. among IT people who work with business development and bill by the hour; in other words, businesses where the IT consultant can be regarded as just another advisor with just another specialty. In all the sectors where software REALLY ought to be profitable - i.e. software production businesses with a gross margin close to 100% (because all development is a fixed cost) - there is deep crisis.
It is of course hardly surprising that a business without flexible expenses loses money when the market is bad. But it is still a shame that every single one of them is doing so badly. What is wrong with this, of course, is that as an IT person you then also lack the fabulous opportunity to gear your personal effort by converting your productivity into tradable intellectual property - that is, software.
Instead you are reduced to a job type like the doctor's or the lawyer's. Well-paid consulting work, sure, but tied to your physical presence and the hours of work you put in.
As reported in Børsen, I-Data's CEO has written his own I-Data shares down to DKK 0 in the latest accounts of his personal holding company TIC A/S. I am happy to be able to say that about I-Data in particular I have been of the opinion, from the company's very beginning, that it was dead in the water.
The company's product palette has been ominous from day 1: a company with expertise in printers for declining IBM networks. Probably a viable business, but with definite growth limits. Then the acquisition of a modem manufacturer (in the middle of the boom, while everyone is moving to new platforms?) with a strength in ISDN (while everyone is moving to ADSL). The company also has a business in X.25 networks (the packet-switching standard that isn't TCP/IP). So we now have a company with expertise in dead or dying technology that wants to go public as a growth company - and therefore the shop is launched as a company with lots of cool things in the pipeline. But the entire revenue is tied to the dying technologies, as far as I understand. A recipe for disaster. Then come the countless announcements of big contracts won, which turn out to be wrong not just time after time, but every time.
There was one good investment (Exbit), completely unrelated to the company's own technology business, and with that sold off, only the legacy platforms remain. And they are all dying.
This is great! There's a guy who has actually purchased a sponsored link on Google for the keyword Hell.
There's a largeish site and a book about The Terrors of Hell being advertised.
Oddly enough - considering the web demographic - the sites found are not satanist information sites but religious ones. Biblehelp also bought heaven; oddly enough, no porn sites did.
Reading about GM's Billion-Dollar Bet there are several points of good news about fuel cells:
First off - fuel cells are no longer found only in German research labs (and of course, as we're proud to mention, at Risø, the Danish research institution founded by Niels Bohr and at the forefront of modern energy research (wind power and fuel cells)).
Secondly, products ARE beginning to appear. As per the usual technology adoption path, fuel cells are being tested for real applications, first where price does not matter as much as the inherent benefit of fuel cells: the highest energy density available.
Elsewhere (link TBA) it has been reported that battery sized fuel cells are reaching the market right about now, for remote hard-to-service devices where chemical batteries or other power sources are less than practical.
Now that some of the more radical computer-controlled hybrid approaches to power are beginning to appear on non-weird vehicles, maybe the fuel cell could actually be next.
While the comments by Dietrich Ayala (The quote is "Obviously I screwed up somewhere, but I've only been using Perl since 1993, so I'm not an expert") do present a valid point, the case for perl vs the rest of the world is almost always overstated in favor of 'the rest of the world'. It would appear perl is very hateable. Personally I think that we NEED to move beyond c-like languages (and no, I'm not talking about garbage collection either) to see real productivity gains. The points on Software Pragmatism still stand however, so a language capable of complicated interpretation should expose the complicated interpretation to the programmer - in the debugger, I guess.
Something like SQL EXPLAIN: what did you do to arrive at the interpretation of my question that you arrived at?
Maybe Perl 6 - with the grammar represented IN the language - will be able to do something like this.
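Some dynamic languages already expose a crude version of this. A sketch in Python (not Perl, so only an analogy): the stdlib `dis` module shows which instructions the interpreter compiled your source into - its "interpretation" of your question.

```python
# A rough analogue of SQL EXPLAIN for code: Python's `dis` module
# shows the bytecode the interpreter arrived at for a function.
import dis

def add_one(x):
    return x + 1

# Prints the instruction listing (exact opcode names vary by
# Python version).
dis.dis(add_one)

# The same data is available programmatically:
ops = [ins.opname for ins in dis.get_instructions(add_one)]
assert "RETURN_VALUE" in ops
```

It is not a debugger-level account of *why* an interpretation was chosen, but it is the same spirit: make the machine's reading of your code inspectable.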
This should have been the headline of an article in Berlingske Tidende. The real one was "Loophole in Danish law to be closed". It concerns the disposal of the fundamental legal protection known as double criminality, which says that you cannot be convicted in Denmark for an act committed in another country if it was not punishable where it was committed, i.e. at the moment of the act.
It is a horrible idea, and a capitulation to the basest revenge motives in the criminal justice system. The intent of the law is clear from its justification, where the argument is that one wants to convict paedophile sex tourists for crimes abroad, or to put a stop to Somali immigrants travelling to their home country to have their children circumcised - which is illegal in Denmark. I don't like either of these things, but if nasty and morally repulsive people can find a place where it is considered OK, there is only one thing to say: 'Those are the breaks'. In a state of law, the higher and broader principle (and especially fundamental civil liberties, which this really is about) should take precedence over the narrower concern.
It will be interesting to hear whether a lower, but still principled, barrier can be found that protects the individual's right to travel freely abroad under the law in force there. If something like this is left to discretionary judgment of principles, things have gone badly wrong.
One can think of complex crimes, e.g. of a financial nature, which are international and where it is legitimate to claim that a concept of crime scene makes no sense. Here prosecution makes sense.
Likewise, there is an exception regime for crimes so grave that they transcend national borders - namely crimes against human rights, which are also prosecuted internationally. That too makes sense, though not without political controversy.
But the aim of the intended limitation of legal protection is apparently precisely to stretch Danish law further.
The last thing to say is that the argument being used can be applied to all civil liberties that are not trivial, i.e. rights that are actually challenged to the limit by real events and actions. Take, for example, the fundamental double protection in requiring that the jury and the professional judges in a jury trial agree on the question of guilt before a conviction can be handed down. One can already hear Lone Espersen: "It offends the public's sense of justice that the democratically elected jurors do not have the right to decide questions of guilt. We do not need judicial arbiters of taste."
The Wired article Deep Link Foes Get Another Win comments on the sad, ridiculous outcome of a lawsuit by Danish newspapers against a link-digest service called Newsbooster. The predictable, but still idiotic, claim of the newspapers is that the forwarding of openly available links to openly available content on their webpages is somehow a violation of their copyright. Nonsense! If the articles were excerpted, so you could read the news without visiting the webpages, that would be something; but the idea that you HAVE to arrive at a page through link navigation from a banner page is ridiculous, and the claims made by newspaper spokespersons that they are not trying to limit the availability of deep linking are of course absurd - since the only thing Newsbooster is guilty of is deep linking.
What's even more ridiculous is that the newspapers could stop the deep linking by changing the way they implement their websites. If they are so intent on only offering links to one page - which of course reduces the value of their service to very little - this is completely possible by serving only dynamic page references, modified on an hourly timescale.
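One way such hourly-rotating page references could be implemented is to sign the article path with a secret keyed on the current hour, so that yesterday's deep link simply stops resolving. A minimal Python sketch; the key, URL scheme and function names are all invented for illustration:

```python
# Sketch of hourly-rotating page references: the link embeds an HMAC
# of (path, current hour), so a saved deep link expires within an
# hour. Key, URL scheme and names are invented for illustration.
import hashlib
import hmac
import time

SECRET = b"site-secret"  # hypothetical server-side key

def _token(path, hour):
    msg = (path + ":" + str(hour)).encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:12]

def hourly_link(path, now=None):
    """Mint a link for an article path, valid for the current hour."""
    hour = int((now if now is not None else time.time()) // 3600)
    return "/r/" + _token(path, hour) + path

def is_valid(url, now=None):
    """Check that a minted link belongs to the current hour."""
    _, _, rest = url.partition("/r/")
    token, path = rest[:12], rest[12:]
    hour = int((now if now is not None else time.time()) // 3600)
    return hmac.compare_digest(token, _token(path, hour))

# A link minted this hour validates; the same link later does not.
link = hourly_link("/news/article-17", now=1000 * 3600)
assert is_valid(link, now=1000 * 3600)
assert not is_valid(link, now=1001 * 3600)
```

Which is exactly why the lawsuit is absurd: the "protection" the newspapers sued for is an afternoon of server-side work.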
The proper solution for the newspapers is to get with the program and turn their site into true hypertext where every page is a valid and compelling entry point to the entire website. Reworking newspaper sites in this fashion works with the hypertext publishing model instead of against it. Think Amazon. All of their book pages serve as an excellent introduction to further Amazon inventory.
With a proper implementation, Newsbooster adds value to the newspapers instead of drawing value away.
In fact, I think this is true even for badly made news sites. People simply don't use their back button that much, but continue through the newsflow after scanning pages.
It would be interesting to hear comments on this from someone like Jakob Nielsen.
The other day I commented on an article in Weekendavisen, but apparently you should know the context before opening your mouth (an unpleasant realization). The book the article was taken from made exactly my point. Combined with the statistic that enrollment in science programmes in Western Europe (I am still sceptical about whether such statistics also cover the US) is catastrophically low.
Whether it is the times that are simply against science - or whether it is the era of "The Great Lifts" that is over - is perhaps hard to say. Has science, like that other magnificent lift - social democracy and the labour movement - won itself to death, by disappearing into the background as an omnipresent and therefore invisible component of society, so that the great lifts are only visible through their spectacular failures? For social democracy: cronyism and groundless general strikes that are not about rights and welfare but just about political power; for science: spectacular failures like holes in the ozone layer, DDT-induced natural disasters, Exxon Valdez, etc.
And if "The Great Lifts" (yep, I know it sounds like a nasty Maoist campaign programme) have failed as attractive goals for society, what happens then?
Some indicators: there is a growing understanding that the privatization of research and development (i.e. science not driven by Great Programmes) has some sad and mediaeval qualities. Many point out that extensive patenting threatens to strangle the health sciences. The same tendency has long been visible in information technology - with the extensive negative effects of monopoly power, etc.
Is this development an inevitable consequence of the breakdown of a scientific public sphere sponsored by governments?