Really from the 1980s, when we got scalable computing for the first time, technology has been a bit of a fetish. We got obsessed with it. I think in the last three to four years we've started to take it for granted, which is a much better place to be, because then it's a useful tool. So what you're starting to see is technology becoming pervasive, which means you can get both high-level design, highly structured things, and you can get self-organization.
So if you look at the way social computing works, it can assemble very sophisticated systems from free tools within an open architecture. Even ten years ago that would have been a million-dollar-plus project. So I think the key difference is not the way you pose the question; it's more the fact that technology is now pervasive. I think we're also starting to realize the limits of technology, and starting to realize that it's there to augment human intelligence, not to replace it. And that comes as a post-BPR, post-Six Sigma phase, but it's early days, so that's a less justifiable prediction.
I think, as you know, technology co-evolves with us. Co-evolution has been part of our story from when we picked up the first branch and hit an animal with it; we've been co-evolving with our tools ever since. So the tools modify themselves to work better with human social systems, and humans learn better how to use the tools. It is a co-evolutionary process rather than an adoption process. And you also get a lot of exaptation in that: technology allows us to produce what Gould called punctuated equilibrium, sudden, rapid, unexpected changes, because we see potential in technology which it wasn't designed for. Twitter is a classic case with hashtags. Nobody designed that in; it just got used. So that's exaptation within the co-evolutionary environment rather than adaptation.