Having just seen a brilliant documentary about The Band on Swedish national television, I went to the record store to catch up with this music and bought Rock of Ages - a two-CD live recording from New Year's Eve in New York in 1971. It is absolutely brilliant.
The sheer musicianship is amazing - it was impressive even in the TV documentary, where that usually doesn't come across very well - and the music, which for lack of a better word I'll call well-rooted (in American traditional genres, that is), is really soulful.
A BBC news story tells us something we've known for a long time: the hi-tech workplace is no better than the factory floor. It involves long hours, an insecure job outlook, and a poor working environment.
This fits right into the interesting dividing line of modern society, with an educated class - who work a lot and identify primarily with their work - and a not so educated class - who identify with their home and leisure activities.
An interesting turnaround from earlier days, when the term 'leisure class' referred to the non-working rich.
And there's also an article in The Wall Street Journal about profiling gone wrong.
The examples from that article all seem to be about cultural overfitting in the profiling models. Some random selection by a purchaser is used as a high-value proximity generator in search space, and suggestions for like-minded but fatally off-target titles are the result.
One has to wonder why this happens to these people. Granted, the stickiness of a particular search session on Amazon can be annoying (browse one unrelated item and idiotic links will be added to the conversational state of the current search for the duration of the search). In general, though, I don't experience this kind of thing very much. But then I don't have a TiVo, of course - and would really much rather have ReplayTV if I had the choice.
Among the possible reasons are
I think the first and second explanations are the most probable. The third only matters inasmuch as when I'm browsing for technical literature there are very few strong personal feelings attached to the individual book selection; furthermore, technical books tend to cover a subject more evenly, so any failure to match true preferences isn't nearly as intrusive as it would be to receive a suggestion to buy a ridiculous band like the now defunct Guns'n'Roses just because I happen to like Iggy Pop.
That said, the problem of modeling around overfitting shouldn't be discounted at all.
A pure Bayesian 'most likely secondary purchase' model on a per-book basis would probably fail miserably.
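To make the failure mode concrete, here is a minimal sketch (with invented toy baskets and titles - nothing here is Amazon's actual method) of such a per-book 'most likely secondary purchase' model. With counts this sparse, a single noisy basket weighs as much as a real preference:

```python
from collections import Counter

# Toy purchase baskets (invented data); each basket is a set of titles.
baskets = [
    {"iggy_pop_lust_for_life", "stooges_raw_power"},
    {"iggy_pop_lust_for_life", "gnr_appetite"},      # a single noisy co-purchase
    {"windows_ui_guidelines", "code_complete"},
    {"windows_ui_guidelines", "code_complete", "pragmatic_programmer"},
]

def secondary_purchase(title):
    """Most likely co-purchase: argmax of P(other | title), estimated from
    raw co-occurrence counts. No smoothing, so in sparse data one noisy
    basket can swing the recommendation completely."""
    co = Counter()
    n = 0
    for basket in baskets:
        if title in basket:
            n += 1
            co.update(basket - {title})
    if not co:
        return None
    best, count = co.most_common(1)[0]
    return best, count / n

print(secondary_purchase("windows_ui_guidelines"))   # ('code_complete', 1.0)
```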
Due to the relative sparseness of the purchase space considering the true number of dimensions (3 million books in print), you definitely need to employ some technique to not fit the noise. A principal components analysis of the search space is probably a good idea, and good incremental algorithms exist to compute one.
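As a toy illustration of the principal-components idea (using an invented miniature purchase matrix, not real data), a low-rank reconstruction via SVD shrinks an isolated noisy co-purchase toward the structure the customers share:

```python
import numpy as np

# Invented customer-by-title purchase matrix: shared rank-1 structure
# (everyone buys the three tech books) plus one noisy entry in column 4.
X = np.array([
    [1, 1, 1, 0],
    [1, 1, 1, 0],
    [1, 1, 1, 1],   # the lone purchase in the last column is noise
    [1, 1, 1, 0],
], dtype=float)

# Keep only the top principal component; the rank-1 reconstruction
# pulls the noisy entry toward the shared structure instead of fitting it.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X1 = s[0] * np.outer(U[:, 0], Vt[0])

print(np.round(X1, 2))
```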
It would be interesting to know how people deal with overfitting when the only observations that really make any sense are the successes. It would make sense to assume a general decay of success for a particular association of titles, and then let the successes reinforce the probabilities that aren't failures.
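That decay-plus-reinforcement idea might be sketched like this (invented association weights and titles; the decay rate and boost are arbitrary choices, and a 'success' means an accepted recommendation):

```python
# Invented association weights between titles. Failures are never observed
# directly; an unconfirmed association simply fades away under decay.
DECAY = 0.9
BOOST = 1.0

weights = {("iggy_pop", "stooges"): 2.0, ("iggy_pop", "gnr"): 2.0}

def step(successes):
    for pair in weights:
        weights[pair] *= DECAY                           # general decay
    for pair in successes:
        weights[pair] = weights.get(pair, 0.0) + BOOST   # reinforce successes

# Only the Stooges association keeps being confirmed.
for _ in range(20):
    step([("iggy_pop", "stooges")])

print(weights[("iggy_pop", "stooges")] > weights[("iggy_pop", "gnr")])   # True
```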
Found the Survey of the State of the Art in Human Language Technology. Looking forward to reading it.
November 22nd marks the six-month anniversary of the notes from Classy's kitchen. To celebrate this milestone event, the staff here at Classy's is pleased to announce the general availability of two new email services. They are internet classics, but with a Classy touch.
firstname.lastname@example.org - will spew insults back at the sender for general amusement. Many sites have an abuse email address, but in general they receive abuse. Personally I think that is unfair, so the classy.dk service generates abuse.
In a similar vein, email@example.com will bitch and moan about stuff that's just no good.
Coming soon: language-sensitive insensitivity, whereby the service will abuse you in your own language.
The actual abuse generator is on its way as a web service too - by popular demand - and is in general due for a ton of extensions and improvements.
If you'd like to know, I can tell you that we're reaching record numbers of sites (~600), they are downloading record numbers of pages a month (~6000), and the pages downloaded have record size.
The daily average of postings for the six months is approx 0.75 - which isn't all that bad.
In this installment: free web services with no incentive to pay any money to use them will be very unlikely to stay alive. The very nice Backflip online favourites service has now been down for 9 full days due to a database crash. (Memo to users: if you care about your data, make sure they are stored in a safe place, i.e. in more than one place.) The reason for the lengthy downtime seems to be mainly financial. There's simply not enough money to guarantee against downtime for this service.
Update 20021125: Backflip did come back up. Keep fighting!
A brilliant idea: PLEAC - Programming Language Examples Alike Cookbook - tries to implement a number of standard programming constructs in as many languages as possible. Each language implementation is done in 'idiomatic' fashion, i.e. in the style programmers native to the language would use (modulo personal style considerations, of course).
This is so much more interesting than the usual 'Why I hate language XXX' articles about scripting languages found all over the net. They usually follow the template 'I completely gave up on language XXX when I tried to implement task T'. Task T is then something for which the writer's favourite language, YYY, has native support, whereas the implementation in XXX is usually badly done, and certainly requires the use of some arcane library.
I've been reading a little about aspect-oriented programming - which is reminiscent of intentional programming, as evidenced by the recent company formation by intentional pioneer Simonyi and aspect pioneer Kiczales.
Other than being a cool concept it also fits very well into the pragmatic vein being almost precisely a technology for making good use of the discursive state of language.
The basic notion is overloading the program flow with 'stacked' pre-method and post-method calls packaged as aspects. The typing comes about by specifying target predicates for the method signatures a particular aspect modifies.
Once that idea has settled in as defined on top of an OO language like Java, the extension to a pure aspect language where ALL function calls are basically invoked by predicate comes to mind. Clearly the execution order of very large sets of predicated computations becomes an issue - especially since aspects explicitly allow side effects (a pet example of an aspect is 'design by contract') - but the final execution model has other advantages, like an implicit simple language for multitasking of operations. It is easy to conceive of aspect invocation events - certainly if you are used to the Delphi class libraries, where implementation by delegate uses a pre- and post-method pattern all the time.
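For the flavour of the pre-/post-method pattern - this is a Python sketch, not AspectJ, and all class and method names are invented - an 'aspect' below is a predicate over method names plus advice woven around every matching method:

```python
import functools

# An aspect = a predicate over method names + optional before/after advice.
def aspect(predicate, before=None, after=None):
    def weave(cls):
        for name, fn in list(vars(cls).items()):
            if callable(fn) and predicate(name):
                def make(fn):
                    @functools.wraps(fn)
                    def wrapped(self, *args, **kwargs):
                        if before:
                            before(fn.__name__, args)    # pre-method call
                        result = fn(self, *args, **kwargs)
                        if after:
                            after(fn.__name__, result)   # post-method call
                        return result
                    return wrapped
                setattr(cls, name, make(fn))
        return cls
    return weave

log = []

# 'Design by contract' flavoured advice on every method named set_*
@aspect(lambda name: name.startswith("set_"),
        before=lambda name, args: log.append(("pre", name)),
        after=lambda name, result: log.append(("post", name)))
class Account:
    def __init__(self):
        self.balance = 0
    def set_balance(self, amount):
        assert amount >= 0          # the contract itself
        self.balance = amount

Account().set_balance(10)
print(log)   # [('pre', 'set_balance'), ('post', 'set_balance')]
```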
Continuing that thinking you end up with a notion reminiscent of Linda tuple-space, only the tuples are now method signatures where program state has been reified (fancy word for 'stored as data' - with some theoretical sauce added), so that the dynamic state of a particular computation is available inside the tuple space to any available processor. I'm not too sure about the reification part though. Methinks I should hack something like that together. Of course the real beauty of something like AspectJ is that all the decisions on 'pattern matching', i.e. typing, make sense at compile time, so that dynamic complexity and type-inferencing complexity are not multiplied. The fully dynamic model does NOT do this, but clearly it doesn't have to be that bad at all. If desired, one could dispense with the ability to compute predicates dynamically and do the same compile-time optimization for each definable pre- and post-state available.
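A toy version of the Linda-style idea, with the pending computation reified as a plain tuple that any matching worker can take (a single-threaded sketch only - real tuple spaces are concurrent, and the operation names are invented):

```python
# A toy Linda-style tuple space: pending computations are reified as
# tuples; take() matches against a pattern where None is a wildcard.
space = []

def put(tup):
    space.append(tup)

def take(pattern):
    """Remove and return the first tuple matching the pattern."""
    for tup in space:
        if len(tup) == len(pattern) and all(
                p is None or p == field for p, field in zip(pattern, tup)):
            space.remove(tup)
            return tup
    return None

# Reify a pending call as data ...
put(("add", 2, 3))

# ... and let whichever processor understands 'add' pick it up.
op, a, b = take(("add", None, None))
put(("result", a + b))

print(take(("result", None)))   # ('result', 5)
```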
Coming up with a viable language to express such a complicated flow of execution is certainly an admirable achievement, and the formulation given in AspectJ seems very elegant. The equivalence with pure Java is a nice selling point. As long as you're in doubt you have a perfect code generator at your disposal and can work in generated code. The original code is a good design medium though - and the developers of AspectJ even took pains to make aspects debuggable. That is very close to being the complete list of requirements for good tooling.
It seems that during the current depression the 'natural laws' of magical price/performance gains for technology still hold true.
News about IBM's supercomputers quotes a price of 290 mio$ for two machines - one 100-teraflop machine and one 360-teraflop machine. The previous mention of BlueGene/L had it listed as a 200-teraflop machine, not 360 teraflops, but the price was then listed as 100 mio$. Assuming that the relative timing of the two machines means that the cost for BlueGene is only two thirds of the full price, that's roughly 200 mio$ for 360 teraflops, or 0.55 mio$ per teraflop when delivered. This isn't too far from the estimate of a year ago, although that estimate had an earlier delivery date.
The employees at Orange who aren't being fired are apparently wanted out the door as quickly as possible. At any rate, the company is now sending a clear signal that it doesn't trust its employees, by firing some of them without being entirely sure they have done anything wrong.
You could call that management by fear!
A nice collection of strange - sometimes stupid - questions asked to and answered by Linus Torvalds.
Amazon apparently realized how ridiculous the apparel links looked on book searches and changed the recommendation text to 'Customers who wear clothes also shopped for' instead of claiming a connection with the book searches.
Further evidence favouring openness (and even the throwaway certificates I mentioned below) can be found in a long and entertaining interview with Bruce Schneier, one of the world's leading cryptography experts.
His contention is that even for that very important function of verifying identity there are no safe measures deployed, and any and all of the grand schemes to do so will fail very often. This tells us two things: first, that schemes that don't have to be grand are better. They too eventually enjoy a network effect, but they don't require everybody to be plugged in to work. Secondly, interfaces will be compromised, so you had better prepare for it somehow, by limiting the consequences per breach.
His point of view is directly related to thoughts about digital identity and comes out in favor of loose-knit reputation systems and throwaway identification in specific cases, to guard against the consequences when (not if) your identification point itself becomes compromised (either because you were careless or for systemic reasons).
However, one can't help but feel that even though the computation of identification is the most powerful computation there is, the points should apply to all the other computations too. So in a way I think the Schneier interview comes out - indirectly - in favour of the openness of design efforts as well. Any idea that we can keep our world closed through the application of technology is flawed, so we might as well build it open from the start. Open, with anti-intrusion measures, that is.
At Loosely Coupled a story is made of some naive quotes on emergent intelligence in the distributed information infrastructure of the web. It is well known that all claims to date about AI have been wrong, certainly when it comes to establishing a timeline, but I think it is important to point out that the analogy with Pasteur's nihil ex nihilo experiment - which established the importance of the cell - is flawed. Pasteur's findings are well established; it's just the analogy that is wrong. The author, Phil Wainewright, is forgetting that intelligence actually did spring from nothing - it is just not an everyday occurrence but took all of our evolutionary history to happen. The real question therefore becomes whether Phil Wainewright is actually a creationist.
So: no, wiring together all of today's computers will not - not even by duplicating them a million times - produce intelligence. But seeing as 1) we're actually capable of injecting design into the process of building the future network, and 2) knowing also that intelligence and other very advanced structural organization of information (like e.g. the cell) can in fact arise without a conscious designer, I think the score comes out two points in favour of realising actual intelligence in a distributed computer environment in the future.
Just a note to myself - now that I'm in web services, open computation etc. etc. mode: link farms, i.e. networks of circular referrers who try to boost site relevancy in search engines, are simply viruses attempting to piggyback information onto your machine using the open interface of HTTP.
When all of these ideas about openness take off, the grand-scheme notions of security and identification will all fail and we will have to fall back on a security model that is open and experience- and reputation-based. It will probably employ huge masses of throwaway certificates manufactured for specific computations/validations but kept around for reputation purposes.
The certificates themselves will, sooner or later, enter the address space, so that addresses are essentially anonymous. Navigation for addresses will be based on dynamic content-based services like Google, not static services like DNS, and the whole system will end up looking more like a biosystem.
The notion of the semantic web and the schema efforts to enable it are worthwhile, and by their openness on the right track, but since language and hence knowledge is a game of incompleteness and ambiguity, the schema efforts are likely to fail due to their grand-scheme nature. People will not comply. Reading some comments about XML support in the most common client data tools on the planet (apart from the browser), namely MS Office, it is comforting to know that they are at least getting some of it right, working up from data instead of down from metadata. This is the only thing that could possibly work. And this is the reason why Google is such a huge success. The inherent contradiction is that the story is about the XML-enabling of Office (i.e. a huge push down from metadata, not up from data).
From a client perspective, however, there is no question that the direction I indicate is the important one. The really interesting thing is that one will expect to be able to import old non-standard data into XML (proprietary, of course - they are still not the good guys, just less shady).
Next step up: RDF actively deployed and used by e.g. Google. The first application of this is already out there, of course - being the many interlinked weblogs about web services and their many cross-subscriptions and structured cross-linkage.
Derogatory name found in The Register - for the Microsoft anti-trust settlement recently upheld in US courts, largely favouring Microsoft. The classy.dk opinion on the matter is - as has been previously mentioned - that MS is very guilty indeed.
Think about it: if Windows NT were a patented drug, most of the core would have been in the public domain a long time ago. In drug treatments, openness is forced on drug companies because nobody would use a drug if they didn't know exactly what the substance used is. So the only protections available to the drug companies are 'process secrets' - on how to manufacture the drug - and the patent system. So one tends to come out in favour of the patent system for drugs. Not so with software. Secrecy works as a protection in itself.
NOBODY enjoys the monopoly power as much as Microsoft - and nobody uses it more liberally.
There should be a time limit on how long this kind of knowledge can remain a trade secret. Evolution in software is actively hindered by the secrecy. If the core of the operating system were made public after some period of time, that would force companies into actual aggressive invention - instead of just the introduction of more and more bloat, so the same basic functionality can be repackaged again and again to accrue more income from old work.
OK, I know it is getting a bit tedious, all this talk about language, but Joel Spolsky's Law of Leaky Abstractions is another argument why the final programming technology will be loose-knit, open and language like. The law simply says that all abstractions come to an end - and sooner or later you have to abandon the abstraction and look at the substrate it abstracts from.
That's like an inverse Gödel theorem: instead of the idea that sooner or later you have to make reference to some meta-level to correctly describe your world, this law says that sooner or later you need to de-meta. So this is a pragmatic dualism to the idealistic notion of formal methods in program design.
If that is the case, why not make technology that by definition covers the whole range of possible meta-levels?
Being a mathematician I have often encountered a purely mental version of leaky abstractions.
Mathematics can be a delightful play with words. A mental game, where the only thing required of you is to come up with a consistent set of utterances that are somehow interesting and meaningful in the end. This 'no rules' aspect of mathematics is a driver towards more and more abstraction. Mathematicians are always abstracting to meta-levels. The meta-level then becomes the real substrate for a new discipline of mathematics, and this new discipline in turn feeds the creation of new meta-levels of knowledge with its own group of specialists.
This process may sound ridiculous and unproductive when described from this rather tremendous distance, but in fact it is important and highly productive. The constant redefinition and refining of mathematical concepts makes the work of geniuses commonplace.
An interesting example of this is the subject of linear algebra and convex analysis. The historically inclined mathematician will find the original sources for material in these fields hard to read and almost incomprehensible. Generation after generation of reformulation of the knowledge in the field has shaped it into an efficient - if sometimes boring - body of knowledge. The work that was hard for the discoverers/inventors of linear algebra is now taught to university freshmen as an easy way into the basic notions of proof in mathematics.
What has this got to do with leaky abstractions? When you're doing mathematics, trying to prove something about mathematical objects, you tend to set aside the knowledge you have in principle that these objects and the models they fit into are really abstractions - that they are not really objects at all, but rather just specific features that something may have, and that you are at present recognizing this something by that property. If you always have to second-guess your primal use of language - namely the presentation of information about concrete physical things - you tend to get lost really fast. So you suspend your knowledge that what you're talking about is an abstraction and talk about it as if it were a concrete thing. This works very well - if you have a good power of imagination, at least. Because language is multilayered and doesn't look different when you access a meta-level of information, it is efficient and convenient to dispense with the knowledge of abstraction.
I've never really had major difficulties in 'going meta' - in accessing the next level of abstraction. For me the problem was always going the other way. Once you're deeply into the abstraction, you may suddenly arrive at some new object that you constructed on the meta-level but that of course has a less abstract value (i.e., in the context, a real value). The very talented know how to step back from the abstraction and access the 'real' world beneath. I have always had trouble with that, and I really think that is why I am not a mathematician today.
This is less of a problem when you have 'perfect' abstractions. But unfortunately, the 'perfect' abstractions are the very old ones, the polished ones. The new ones - and especially the ones so new you're making them up as you go along - tend to be more imprecise and leak a lot. When that happens, that's when you need to be good at stepping back from the abstraction to some level of knowledge that doesn't leak: a lower level, where the information you've produced makes sense. When you need to do that, the very flatness and non-layeredness of the language you use becomes the problem. You find it hard to distinguish between information about the abstract layer and information about the concrete layer. And when that happens, you know you are lost.
So in short, using language to model is no panacea either. It's just - in my opinion - the least leaky abstraction we have of knowledge itself.
Amazon.com is about to lose its long-standing good-guy status when it comes to usable and information-rich websites. The many shops, the decision to complicate life for us book-buyers by making book search a two-click operation instead of a one-click operation, and the terrible "Your Gold Box" idea were all nails in the coffin for the usability of the site - whereas "The page you made" and 'See Inside' vindicated it. The latest attempt at upsell linkage is particularly lame and insensitive to what the user is actually interested in. You're now getting suggestions from the apparel store when you're browsing for books. Seen many underwear/book stores on your local high street lately? Didn't think so (although I guess FNAC actually does this with some success). An example of how stupid this looks: do they really think that the average buyer of The Windows Interface Guidelines for Software Design: An Application Design Guide would want to buy pink cheetah print slippers in girl sizes - as was suggested to me when I looked up the book? No. Misdirected efforts at tailor-made information make for really intrusive advertising ploys - not much better than spam. Intrusive advertising makes enemies - at least one, anyway.
Previously I made a comment about the link between digital technologies, intellectual property, and personal freedom. It still seems to me that the digitization of our personal space means that we will, to a greater and greater extent, extrovert our thoughts into some technological device (e.g. classy.dk) - and when we do that we are suddenly publishers, and intellectual property rights owners think they have some rights over our expression. The conflict between the principles of the information economy and the principles of individual liberty becomes very important and visible. The grandfather of digital risks to personal freedom is of course the fear of the universal personal ID.
It is very hard to have a reasonable and workable opinion about these issues. Digitization presents remarkable opportunities for prosperity and a good life, but making these technologies mandatory (by the way, as a left-leaning Scandinavian I have been happily centrally registered for ages) makes them dangerous.
Our notions of society simply don't cover the networked society, and our network does not really support any notions of society at all (except a naively open one). Using current technology the digital life is essentially a public life.
Clearly 'freedom of speech' must be augmented with a constitutional principle of 'fair use' - since we will be users of so many technical and semi-technical interfaces, and since text and other forms of 'speech' more and more become something we use, i.e. more and more functionally active as opposed to purely being expression - but I'm not sure that really covers what the notion of Networked Man should be.
This, btw, relates to the very way we build our technology. Only open, adaptable technology makes it possible for the individual to choose NOT to join the collective.
Some further thoughts on language, models, and objects related to yesterday's post. One thing that seems ancient about dogmatic OO is the 'closed world' obsession with complete models, and the idea that you can capture everything in one model. This is clearly wrong; as I have remarked previously, the key advantage of natural language over artificial languages is the 'open by default', fully reflective nature of the models embodied in natural language. The very real and frequently occurring inconsistencies and ambiguities this allows can be handled by an intelligent processor.
A pragmatic modern approach to OO that dynamically accepts and adapts to foreign service interfaces is much more in tune with the true power of natural language than old closed world OO.
Found an interesting and lengthy Object Orientation backlash. With the view on what an OO advocate is supposed to think presented in the article, the author has an easy case to make, and the basic claim that OO is not the optimal language for ALL problems is obvious.
When that is said and done, I think the author would have a better case if he recognized the cases where object orientation DOES make all the sense in the world, and furthermore the importance that OO is capable of having when programming in the large. An initiative like .NET is not as easy to conceive of without a good object environment. Well yes, you can - it's called C, Unix, scripting languages and a compiler - but objects are eminently practical constructs if you want to hide traversal of a process or machine boundary from your local programming environment.
Furthermore, the author - correctly, IMO - argues from the assumption that the true productivity-enhancing feature of a programming system is how good it is at emulating the features of our built-in natural language processing system and the accompanying world modeling. The true measure of programming environment sophistication should be how many of the abstraction constructs we live by it is able to support, and how well it supports them. The parody of OO (dogmatic "OO analysis and design" in the spirit of 300-man teams) that is criticized sacrifices all the flexibility of the basic OO ideas in this regard by enforcing very strict 'rules of speech' in the form of lengthy and complex development guidelines, and that is just not very liberating.
It is not really efficient debunking of anything to remark that there are other routes to flexibility than OO. This is hardly surprising. I have the same feeling about the use of UML to describe a lot of things that the author has about OO. I like a few of the diagrams for specific descriptive tasks and use them for that, but I would hate to build a system entirely from UML, or indeed to model solely with UML.
However, arguing against the power of OO to flexibly interpret the verb part of sentences based on the types and model theory of the noun parts seems ludicrous. The claim that plain old procedural languages do this just as well is simply not true. The notion that verb parts of sentences should not be typed (there's a comment about 'a + b' not being a method on either a or b) is absurd. Clearly the types of a and b matter a great deal in the interpretation of 'a + b', and this is not JUST a matter of typing, since the interpretation of verb parts of sentences - also in natural languages - can rely both on type and instance data. The 'framing problem' in the semantics of natural language is all about type and instance dependencies in natural language.
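The point about 'a + b' is easy to illustrate - here in Python, with an invented Money class - where the interpretation of the verb depends on both the types and the instance data of the nouns:

```python
# The meaning of 'a + b' dispatches on the operand types; inside Money,
# instance data (the currency) matters too, not just the static type.
class Money:
    def __init__(self, amount, currency):
        self.amount, self.currency = amount, currency
    def __add__(self, other):
        if self.currency != other.currency:
            raise ValueError("refusing implicit cross-currency addition")
        return Money(self.amount + other.amount, self.currency)

print((Money(3, "DKK") + Money(4, "DKK")).amount)   # 7
print("a" + "b")   # ab - same verb, a different interpretation for strings
```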
Danish science writer Tor Nørretranders (of "The User Illusion" ("Mærk Verden" in Danish) fame) has written a new book on evolution, cooperation, the gift economy, and sex. The basic premise: everything we do, we do basically for sex. Sex is the ONLY major driver of human endeavour.
Not having read it yet, it sounds like this book fits into the same mould as his previous books. The basic premise can be inferred from general principles (the laws of evolution in this case) without writing a 330-page book. Scientific ideas used to sell the story are oversold as controversial, and sometimes they are even oversold as new. A case in point for this book seems to be 'the economics of cooperation' as analyzed using the Prisoner's Dilemma. This is one of the oldest ideas in 'experimental mathematics', and newspapers were covering this story at least 10 years ago. Thirdly - once the creative (or other) juices start flowing, the story tends to get ahead of the basic premise, and some wild claim is added to the mix. In this case, the merging of the 'gift economy' (e.g. Open Source) and the whole sex thing.
It's a shame - the basic story is interesting in and of itself. The setup to make all this magical and exciting makes the story less appealing to me. I'd like just the facts - without the mystery. The notion that there has to be a mystery around for a phenomenon to be exciting is dangerous in my opinion.
While the domain business is 50% porn (and no, that is not a stupidly generalized, exaggerated number, it is just a fact) I can assure you - talking from work, while updating the servers on a Sunday afternoon, a huge cold coming on, hung over from yesterday's birthday bash - that there is absolutely no sex involved in the actual work here at Ascio.
Very few people will have noticed the downtime on classy.dk this Saturday evening.
We had a power outage here at Classy's, caused by my niece, who accidentally turned off the power while turning off the light in my office, where she had been looking for some crayons.
She found a ballpoint pen instead, and made some nice cutout figures - and power was restored to classy.dk
It is still too early to say whether Dagen has found a place in the media landscape, and also still too early to say whether one can be bothered to read it on a daily basis. The many school-paper articles mar it ('background stories' so obvious that the other papers decline to run them, or at best run them at a suitable quarter of the length of Dagen's version), and now and then the business coverage reads best as a special supplement to Markedsføring, which seems a touch navel-gazing. (Incidentally, when I see a full-page article about how Denmark is 'branded' abroad, I release the safety catch on my pistol.)
Conversely, the somewhat overabundant cleverness of the culture coverage in section B strikes me as absurd alongside the far more down-to-earth section A (Wow, how cool - a theme page on Laudrup's start at Brøndby a full 90 days after the other papers ran the story about how hard it will be to be Laudrup on a rainy autumn day after an unavoidable defeat, and a whole week before TV3 runs a big feature on the same subject presented by Paul Gazan!).
On the other hand, Dagen was on the spot with a hard-hitting verdict on the completely visionless budget bill, which got a slightly more discreet thumbs-down in Berlingeren. And the story about the equally visionless state of Københavns Fondsbørs with its countless small-cap stocks was also on the mark - and rare reading in the other papers.
In other words, there is both hope and a good deal of work in sight.
Traffic is picking up on classy.dk - 25% more visitors loading 60% more content. Which means that the average visitor is loading more - a nice statistic in itself. And this was even a month where I was away from my own machine for a third of the month, i.e. the usual self-observation was suspended for much of it. All in all I'm quite pleased with everything but my PageRank - but as is well known, for a low-traffic site like mine this mostly depends on being part of a link farm of a webring of weblogs. Since logs do a lot of self-reference they tend to look like hypertext condensation points, i.e. important pages.
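The condensation-point effect can be seen in a plain power-iteration PageRank on a toy graph (the pages are invented; the damping factor is the commonly cited 0.85): a small ring of mutually linking weblogs keeps recirculating rank mass that ordinary pages leak away.

```python
# Power-iteration PageRank on an invented toy graph: three weblogs link
# round-robin to each other; two ordinary pages only link outward.
DAMPING = 0.85
links = {
    "blogA": ["blogB"], "blogB": ["blogC"], "blogC": ["blogA"],
    "page1": ["blogA"], "page2": ["blogA"],
}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):
    new = {p: (1 - DAMPING) / len(pages) for p in pages}
    for p, outs in links.items():
        for q in outs:
            new[q] += DAMPING * rank[p] / len(outs)
    rank = new

# The ring recirculates its rank mass; the leaf pages keep only the
# teleport share, so every weblog outranks every ordinary page.
print(sorted(rank, key=rank.get, reverse=True))
```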
Most pleasing search finding my pages : 'neil young helpless meaning' - I don't know what the searcher was looking for, but I know what the song means to me!
Interestingly, Google was recently sued over the PageRank algorithm. There was some prima facie merit to the lawsuit, in that a rank of 0 was assigned to a page, which smacks of manual intervention, but clearly not everyone can be in Google's top ten for a particular word, so there is a fundamental problem here. It goes to the very heart of knowledge and democracy. Not everyone can be the best, so if it makes sense for everyone to go with the best, most of us are in trouble. (No, I'm not the best either, and if you think you are, I would like to remind you of a poll done AFTER an exam at Copenhagen Business School, AFTER people had gotten their grades, so they KNEW how they were doing OBJECTIVELY (well, as objectively as possible). I forget the exact numbers, but it was something like 80% thinking they were in the top half, and 50% thinking they were in the top 10%. Of course you might just be clever, so why not risk it? Don't take my word for your mediocrity. Don't trust the numbers. Optimism is, after all (in the words of Noam Chomsky), a strategy for a better future.)
As luck would have it, friction and the importance of our distribution in the physical world mean even us slowpokes have a shot at being first in getting to a particular point in success space, so there. Everyone gets their space.
Incidentally, what the intellectual property owners would really like is a tax on intellectual friction.
Tight-lipped Danes are wont to say that the enormous friendliness of Americans does not run very deep, whereas the Danes, underneath their reserved exterior, are really very nice people once you get to know them.
The claim is of course bogus. It's just that friendliness is part of the American idiom ('Hi, how are you') and not of the Danish. Clearly the use of idiomatic phrases should not be taken as an actual intrusion on your privacy, nor should it be interpreted as genuine concern. That doesn't mean that there aren't phrases that mean the same thing to an American that 'how are you' means to a Dane.
This is all obvious and well known. Traveling in America, it takes a day or two to get your own level of friendliness and approachability up to regional standards, but even we tight-lipped people do get it eventually. At least if you're just the least bit culturally sensitive.
The more interesting question is how to actually be as friendly as an American once you're back in Denmark. You can't do a direct translation of American phrases - people would basically try to avoid you if you did - and it is not really obvious how to be as friendly in Danish, except of course that you can smile and talk a lot.