SXSW: The Web That Wasn’t

by danhon


Room 19AB
Monday, March 10th
11:30 am – 12:30 pm

Alex Wright, Information Architect, The New York Times

AW: Going to talk about a lot of early precursors to the web – different versions of hypertext, early thinking that went on around how networked information systems might work. It’s interesting not just historically, but also relevant to the web today – if you look at the history of technology, it’s clear that the best tech doesn’t always win; it usually doesn’t win. Look at Windows: when Windows first rose to prominence there were other OSes that were far more advanced, e.g. Mac OS, OS/2, UNIX – by any reasonable objective standard, Windows was not the most sophisticated thing out there. And also VHS/Betamax.

The same argument can be made about the web – it’s a very simplistic technology, which is why it’s become so successful. So: interactive media, and ways of linking information together. Going to walk through examples of early technology and relate them to current limitations of the web.

So in researching this topic, looked for the earliest example of someone describing something that looked like the web – had to go quite a ways back. First was Charles Cutter, 1883. Very few people have probably heard of him – a notable librarian in the 19th century, a contemporary of Dewey. The Dewey decimal system was the Windows of library catalogs; Cutter had a better system that failed to gain traction in the marketplace, a more sophisticated way of organising books. He was a big thinker on organising information, spaces, metadata. In 1883 he wrote an essay for Library Journal called “The Buffalo Public Library in 1983.” They barely had the telegraph then – the first patent for the typewriter had been filed just a few years before – and he imagined a device at a desk, with a keyboard, connected by a wire; the idea was that people could type into that keyboard and pull up a book stored somewhere else, and over time these devices could be networked together, so you could get books from different places without having people physically retrieve them. For 1883 that was pretty good. And that was all theoretical.

A few years later, H.G. Wells, the SF writer, wrote an essay called “World Brain,” where he describes a vision for the way electronic information might take shape. This was in the early days of radio, glimpses of TV; he began to have this idea that the emerging electronic infrastructure would enable people to create documents and interact in new ways – they could create an encyclopaedia, and a vast network of information would emerge from this vast data warehouse; that network would come to life, acquire intelligence. His essay was quite influential and inspired people to think about what that would look like. This is an era where there’s no such thing as a microchip and no such thing as a screen – this was a very analogue mind.

A few years later, Teilhard de Chardin, a French Jesuit priest, had a similar idea. He wrote essays – radio really starting to take hold, TV out there – where he saw the emerging possibility of a new kind of networked information space rising out of the ether as people collect and share information. De Chardin’s writing was controversial; the church banned him from publishing – it was considered heretical, a threat to the orthodox view of the Catholic church. He went further, saying that people could approach divine consciousness by sharing information. He was considered a heretic, but acquired a loyal following among other Jesuits, who circulated his papers privately among an informal network – a real cult following inside the Catholic church and outside it. Marshall McLuhan took that as his foundation.

All of this was theoretical – nothing was being built.

There was another guy called Paul Otlet, who worked around the turn of the 20th century and did a lot of deep thinking about information and ways of collecting and organising it. A fascinating figure – kind of a librarian, founded the documentalism movement, a theory of information, wrote prolifically about his whole theory of how information could be organised. His core insight was that librarians had it wrong – they were fixated on books, but information was trapped in books. If you could get beyond the physical artifact of the book, you could remix that information in interesting ways and create new kinds of documents and interfaces for sorting and classifying it, and over time you could have whole new types of documents and experiences – if you could just find a way to get the information out of the books and into a different framework.

This is a diagram – his books had these great little diagrams. His basic idea: you might have some kind of classification system that would enable you to organise information stored on index cards, the state-of-the-art storage media of the day. The info on those cards would be extracted from books; those books are full of ideas, which come from people (drawn with lucky charms popping out of their heads). The book was just one layer in a continuum of producing and working with information.

Otlet wanted to – and did – build this. He got funding from the Belgian government in the 1930s, during the time of the League of Nations, selling it by tying it to a utopian ideal of world peace: a way of sharing information globally, a single storehouse of the world’s knowledge. So he got some prime real estate, hired people, and went through tens of thousands of books to extract and file information onto index cards, filed in his system, the Universal Decimal Classification – fairly sophisticated as a way to make the information accessible and usable.

There was a documentary made about him a few years ago; showing a quick excerpt from that.

[shows documentary clip]

So this was 1934. So pretty good.

So why has no one heard of all that? He did all his work in Brussels, and we know what happened – the Nazis came marching in, took it over, gutted it, threw everything away and made it an exhibition of Third Reich art. Otlet died penniless a few years later. Then the Belgians had other things on their minds, and his whole legacy was pretty much forgotten – until 25-30 years later, when a young library science grad student uncovered the paper trail and made a pilgrimage to Belgium to look for remnants. He found an old office – cluttered desk, papers everywhere, manuscripts, whole plans for the thing – wrote an article and a book, and has slowly started to resurrect Otlet’s reputation; a few years later, they rebuilt it as a legacy. Otlet clearly thought up the idea of the web in 1934, but in the post-war period most of computing technology was driven by the Anglo-American industrial complex, and the people doing this – writing in English – came out of the defence department.

So these are some more diagrams. A scheme that explains the classification work – how the UDC works.

Way more sophisticated than the web today – you could have a top-down classification of knowledge determined by librarians, but also a bottom-up, user-driven way, and those things could coexist and complement each other. There was a framework for capturing documents that users suggest – he had this term, the “social space” of a document: understanding a book in relation to other books and the people who read it.

He also had this idea, quite relevant, that the links between documents could mean something. Think of the web today: a hyperlink is a dumb thing, it doesn’t tell you anything about meaning – a link is a link is a link. But in Otlet’s world, links could communicate things: this document agrees or disagrees with this other document. A sophisticated idea, even though the entire thing ran on paper.

So what if things had been different? What if Otlet’s work had turned out to be the foundation? We’d have more of a top-down layer – a world where some sort of ontology at the top level would coexist with a bottom-up social classification. And this idea of links that contain meaning is interesting. So let’s look at the web today. There are examples of people starting to approach these ideas – this is a demo called FaceTag (facet + tag):

It was developed by some guys in Italy – an experiment in merging top-down with bottom-up classification. Users can come up with links and describe them within a set vocabulary – there’s some high-level framework, but the contents of the classification are driven by users; it doesn’t predetermine the possible categories. As people go through the process of tagging, those tags are exposed and suggested, and then everything fits into a larger, filterable space – a compromise between top-down and bottom-up classification.

Another example is Facetmap, developed by Travis Wilson – an example using wine, creating a faceted classification for describing different kinds of wine: regions, varietals, prices, and so forth. If you set up a framework for doing that, you can mix and sort and filter data rather than navigating a simplistic top-down hierarchy.
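To make the faceted idea concrete, here’s a minimal sketch in Python (not Facetmap’s actual code – the wine data, facet names, and `filter_by_facets` helper are invented for illustration): each item is described along independent facets, and any combination of facet values can be used as a filter.

```python
# Minimal faceted-classification sketch with invented sample data.
# Each wine is described along independent facets (region, varietal, price);
# a query supplies any subset of facet values and filters the collection.

wines = [
    {"region": "Bordeaux", "varietal": "Merlot",      "price": "under $20"},
    {"region": "Napa",     "varietal": "Cabernet",    "price": "over $50"},
    {"region": "Rioja",    "varietal": "Tempranillo", "price": "under $20"},
]

def filter_by_facets(items, **facets):
    """Return the items matching every requested facet value."""
    return [item for item in items
            if all(item.get(k) == v for k, v in facets.items())]

# Mix and match facets instead of walking a fixed top-down hierarchy:
cheap = filter_by_facets(wines, price="under $20")
print([w["region"] for w in cheap])
```

The point of the design is that no facet is privileged: price-first, region-first, and varietal-first views all fall out of the same flat description, which is the contrast with a single top-down hierarchy.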

And there’s also a proposal floating around the W3C for vote-links – an addition to the href tag that would describe what a link means, whether it agrees or disagrees with what it points to. An acknowledgement that you could do something more with the hyperlink rather than it just being the one-way street it is now.

The person who tends to get more of the credit, deservedly so, is Vannevar Bush. Two things: his name is pronounced “Van-eever,” and he is no relation to the current President. He was an advisor to FDR, a prolific inventor in his own right, ran his own lab, etc.; later became president of the Carnegie Institution, a key figure in the era of the Manhattan Project. Today most of those accomplishments have been forgotten – he’s now remembered for “As We May Think,” published in 1945 in the Atlantic and reprinted in Life magazine. In this essay he proposed the idea of the memex. He saw it as a tool for scholars: the user would sit at a desk and pull up a document on a screen – the docs were stored on microfilm; this was a pre-digital era – and you could pull up one doc on one screen and any other doc on the screen next to it, and relate the docs to each other. He coined a term for that relationship: he called it a link. Users could also make comments about that relationship and contribute to a store of information that would get recorded – the user at the desk would take two documents, mix them, add something, and that would become part of a permanent record, and another user could come along later and actually see that trail. He also added other things – a camera attached to the forehead, etc. The big insight was that scholars are involved in creating levels of meaning that are not necessarily captured in reading; he saw that over time a new type of document would emerge out of that. Those documents could be portable, too.

A couple of things to point out: even though Bush was an accomplished inventor, he had no intention of building this thing – it was vapourware from the start, on purpose. He’d tried to build it and it had been totally frustrating; the tech didn’t work the way he envisioned. So he described the thing as he’d like it to be – the point is that it was a concept car, and that’s what he’s best remembered for.

What would Bush’s web look like? Two-way links, visible trails – and microfilm! Links that work in both directions: today, if a page links to another, there’s no record on the target of where the link comes from; Bush’s idea was that you could pull up a doc and see both what it links out to and what links in to it. And the idea that trails could be made visible in some way – behaviour and experience becoming part of a larger record – is a key idea the web just doesn’t support well. That said, there are people trying to approximate these features. One is trackbacks: inbound links to a particular document – in principle that’s the idea, there’s value in seeing a document’s inbound links. Another example of exposing trails is social bookmarking sites – it’s interesting to see what other people are reading; simplistic, but you can create that experience. We’ll come back to Bush later.
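The two-way-link idea above can be sketched in a few lines of Python (a toy model, not how trackbacks are actually implemented – the page names are invented): given the one-way links the web records, build the reverse index Bush wanted, so every document knows its inbound links too.

```python
# Toy model of two-way linking: the web stores only src -> dst links;
# a reverse index recovers the dst -> src direction Bush imagined.
from collections import defaultdict

links = [                     # (source page, target page) - invented examples
    ("a.html", "b.html"),
    ("a.html", "c.html"),
    ("c.html", "b.html"),
]

outbound = defaultdict(set)   # what each page links out to
inbound = defaultdict(set)    # what links in to each page

for src, dst in links:
    outbound[src].add(dst)
    inbound[dst].add(src)     # the direction the web itself never records

print(sorted(inbound["b.html"]))   # everything pointing at b.html
```

Trackbacks and blog-search inbound-link queries are, in effect, attempts to rebuild this index after the fact, since the link itself carries no back-pointer.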

Eugene Garfield – a kind of librarian, an information scientist anyway. He was interested in the way people organise scientific journals, and in how most indexes to that information were limited: they involved someone manually reading an article and deciding what it’s about. A better way, he thought, was to look at who cites the article. He didn’t care about the subject matter; he cared about the footnotes – follow the footnote trail and you’d see the web of influence an article has: who’s citing it, and how often the citing articles have themselves been cited. A ranking: this article has a high rank, so if it points to that document, that means more, etc. Does this sound familiar? It’s PageRank.

The citation index is now a core resource.

Garfield’s web: it’s pretty simple [screenshot of Google]. In fact, when Brin and Page published their paper about PageRank, Garfield was the first citation.
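A toy version of the idea can be written in a few lines (this is a simplified PageRank-style iteration, not Google’s or Garfield’s actual implementation; the citation graph is invented): a paper’s score depends not just on how many papers cite it, but on the scores of those citing papers.

```python
# Simplified citation ranking in the spirit of Garfield / PageRank.
# Scores flow along citations; being cited by a high-scoring paper
# is worth more than being cited by an obscure one.

citations = {          # paper -> papers it cites (invented example graph)
    "A": ["B", "C"],
    "B": ["C"],
    "C": [],
}

def rank(graph, iterations=20, damping=0.85):
    papers = list(graph)
    n = len(papers)
    score = {p: 1.0 / n for p in papers}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in papers}
        for src, targets in graph.items():
            if targets:
                share = damping * score[src] / len(targets)
                for dst in targets:
                    new[dst] += share
            else:           # a paper citing nothing spreads its score evenly
                for p in papers:
                    new[p] += damping * score[src] / n
        score = new
    return score

scores = rank(citations)
# C is cited by both A and B, so it comes out with the highest rank.
print(sorted(scores, key=scores.get, reverse=True))
```

The damping factor plays the same role as in PageRank: a small share of score is redistributed uniformly, so no paper’s rank ever falls to zero.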

So, moving on.

Doug Engelbart. Engelbart was a disciple of Bush. He was stationed with the navy in the late forties, after WW2, when he got a copy of that issue of Life magazine – he said it changed his life. He started to pursue this and became an early pioneer in computer science. Today he’s known as the inventor of the mouse, but that’s the least of his accomplishments. He spent most of his time at SRI working on the vision of the oNLine System (NLS), a system for organising networked information; his work was funded by the defence department, part of the whole project that led up to the internet. He wrote a paper, “Augmenting Human Intellect,” describing this, and gave a demo in SF – the mother of all demos. He really saw the system as a way to support groups of people working together: an OS where people could collaborate, and create and share tools and information. I’m going to try to show a brief excerpt from the demo.

[realplayer link]

The whole demo is an hour and a half long; the whole thing’s on YouTube. He has a live video-conferencing app in there, a word processor – it’s an elaborate system. So what would Engelbart’s web look like?

Very focused on enabling small groups to collaborate – a process hierarchy that would let people create layers of meaning on top of things, plus built-in multimedia tools. One shortcoming of the web today is that it doesn’t support collaboration well: it’s designed for an individual user and doesn’t let you collaborate within the browser. Engelbart’s web would’ve supported collaboration as a two-way street. So we find workarounds – wikis are a great example, and there are other groupware applications out there, but none of them work that well because the browser’s so limited; it doesn’t support identity management, so we have halfway-realised solutions that work within the limitations. Another interesting example is HyperScope: in recent years, Engelbart has tried to create an approximation of NLS in a web browser, trying to embed these functions – a very hyperlinked doc that opens and closes. It’s very beta, but interesting to look at. Anyway, a lot of his collaborators went to PARC. Engelbart’s still around, still active, and he has a vision that goes far beyond what the web turned out to be.

So Ted Nelson.

If it wasn’t for him, probably none of us would be in this room right now. Nelson is an interesting figure – I gave a talk a few months ago where I called Nelson half-crazy, and I got an email from him a few minutes later; he didn’t agree with me at all. Nelson is sometimes characterised as half-crazy, but he is at the same time an absolutely brilliant guy, a formative figure – he deserves all the credit in the world. A really inspiring figure, and he basically spent his career working outside the computer industry; most of his important work was done on the fringes of respectability. He started as a sociology grad student at Harvard – a real humanist – but hung out in the computer lab, worked with scientists and mathematicians, and was fascinated by the potential of computers. He felt they were being treated as mere calculating machines, and thought there was a tendency among computer scientists to create a priesthood that locked out others who might benefit. His was a humanist vision of computing: anyone should be able to use computers; the real intention was to empower people, to enable expression, to let people connect with each other directly. So he was a fringe pioneer for a long time. He wrote an influential essay in 1965 where he coined the term “hypertext,” then went on to write books that he self-published, and they have a real revolutionary vibe to them. If you’re going to read one book, read Literary Machines, where he talks about Xanadu – the name comes from the Coleridge poem. This was his idea for what became the web: an open collaborative space where people can create and share information in new ways. His books are full of these great diagrams where he explains aspects of hypertext no one else had thought of. Some of it is hard to make sense of.

Nelson-isms: zippered lists, windows sandwiches. Transclusion: the idea that a document embeds information from another document in realtime – you could take a piece of one document and put it in another, and it would be a live connection. Some of his other ideas – hyperbooks, anthologies: rather than one-size-fits-all, different experiences for different kinds of information; new ways of presenting it.

Nelson’s web would not look like the W3. Transclusion, two-way linking, and intellectual-property controls built in. An example of transclusion today: OLE.
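Here’s a minimal sketch of the transclusion idea in Python (the document store and the `{{include:...}}` syntax are invented for illustration – Xanadu’s actual design is far richer): a document embeds a *reference* to another document rather than a copy, so the embedded text is resolved live at read time.

```python
# Toy transclusion: documents reference each other and are resolved
# recursively when rendered, so an edit to the source document shows up
# in every document that transcludes it.
import re

store = {
    "quote": "Everything is deeply intertwingled.",
    "essay": "Nelson wrote: {{include:quote}} And he meant it.",
}

def render(doc_id):
    """Resolve {{include:...}} references recursively at read time."""
    return re.sub(r"\{\{include:(\w+)\}\}",
                  lambda m: render(m.group(1)),
                  store[doc_id])

print(render("essay"))                            # quote pulled in live
store["quote"] = "Intertwingularity is everywhere."
print(render("essay"))                            # edit propagates instantly
```

The contrast with the web’s copy-and-paste culture is the point: a quotation here stays a live connection to its source, which is also what makes Nelson-style royalty and attribution schemes conceivable.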

Andries van Dam – an early collaborator with Nelson; together they built the Hypertext Editing System, which ran on greenscreen terminals. They used it as an experiment in classrooms, letting students collaborate with each other and network in interesting ways. They did this while Engelbart was inventing the mouse, so they had a lightpen and a foot pedal – a point-and-kick interface. This evolved in the 70s and 80s into Intermedia, a hypermedia learning environment. They had a lot of tools built into the desktop for multimedia authoring, writing papers, creating links to original material – an interesting experiment running in a walled garden, with interesting learning coming out of it. They had the idea of network functionality embedded in the OS: instead of a browser they had multimedia editors, tools particular to the media you were working with – not a browser as a separate application; the whole thing was a browser. It was a closed system, but they were able to realise that vision. Really a pretty interesting experiment.

Xerox PARC: they built NoteCards – information on a card that could link out to other cards, nodes of information taking shape around it, with different ways of browsing. They had a browser card you could use to look at other cards, and a filebox where you could file things. Interesting – I haven’t used it, I’ve only read about it, but it’s an interesting precursor.

Another one I should mention: at CMU they did ZOG around this time, another hypermedia application that I don’t have any experience with, or screenshots of.

Apple HyperCard – which I’m sure some of you saw in the 80s and 90s, a limited way of having a hypertext-like experience.

Tim Berners-Lee. One of his early desktops.

In search of the web that wasn’t. Some themes emerge: different kinds of classification, top-down and bottom-up, working together; links that work both ways; links that mean something; visible pathways; identity and reputation – a more stable framework for managing identity, giving people controls around privacy; and treating users as creators – most of these systems treat users as creators, but the web tends to treat users as consumers. If we want to create things, we have to go to other websites; the browser doesn’t let us do it in a useful way. What does that point to?

The next generation – something like Facebook tries to do a lot of this kind of stuff. I know we’re tired of hearing about it now, but it does have identity management, links with meaning, links that work in both directions – features missing from the web. It does provide a framework for a lot of this functionality.

Q: Isn’t the lowest common denominator (LCD) what you want?

It can still be easy to use and open and still address these issues. Browser development has come to a grinding halt – there’s been such a monopoly.

Q: On the education side of IA – given the history of the library system, is it better to study library science for IA, or to do something else?

At the time, there wasn’t much web education going on; it’s been slightly relevant, but it’s starting to change. At UC Berkeley, the old library school is now the School of Information – they merged it with the HCI school.

Q: How do you imagine the infrastructure for this next-gen, evolved web, with respect to ownership? Do you see it feasibly built out in a private/commercial context, or under public/collective ownership?

I don’t know! I think it’s going to take some combination of OS evolution and browser vendors – everyone working together, creating a better infrastructure at the client layer. The big problem is there’s such a mass of legacy content that you can’t start over; you have to build on top of it. There are frameworks for moving forward, but it’s a messy problem.

[punctuated equilibrium]