Tracing the Evolution of Social Software


The term 'social software', now used to describe software that supports group interaction, has become popular only within the last two years or so. However, the core ideas of social software have a much longer history, reaching back to Vannevar Bush's ideas about the 'memex' in 1945 and traveling through terms such as Augmentation, Groupware, and CSCW in the 1960s, 70s, 80s, and 90s.

By examining the many terms used to describe today's 'social software', we can also explore the origins of social software itself, and see that there is a very real life cycle in the use of technical terminology.

1940s — Memex

The earliest reference that I can find to people using computers to collaborate with one another is from the 1940s.

Near the end of World War II, in 1945, Vannevar Bush wrote a seminal article on the future of computing, As We May Think. In it, he conceived of a device he called the 'memex', which today we might call a personal computer:


"A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory."

Later on, the article discusses the memex's further benefits to groups:

"And his trails do not fade. Several years later, his talk with a friend turns to the queer ways in which a people resist innovations, even of vital interest. He has an example, in the fact that the outraged Europeans still failed to adopt the Turkish bow. In fact he has a trail on it. A touch brings up the code book. Tapping a few keys projects the head of the trail. A lever runs through it at will, stopping at interesting items, going off on side excursions. It is an interesting trail, pertinent to the discussion. So he sets a reproducer in action, photographs the whole trail out, and passes it to his friend for insertion in his own memex, there to be linked into the more general trail.

Wholly new forms of encyclopedias will appear, ready-made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified. The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities. The patent attorney has on call the millions of issued patents, with familiar trails to every point of his client's interest. The physician, puzzled by a patient's reactions, strikes the trail established in studying an earlier similar case, and runs rapidly through analogous case histories, with side references to the classics for the pertinent anatomy and histology. The chemist, struggling with the synthesis of an organic compound, has all the chemical literature before him in his laboratory, with trails following the analogies of compounds, and side trails to their physical and chemical behavior."

As far as I can tell, this is also the first mention in the literature of what would eventually be called hypertext. However, the term 'memex' never caught on -- Bush's ideas were well ahead of their time.

1960s — ARPA and Licklider

It wasn't until the early 1960s that the idea of using computers to collaborate came up again.

As a response to the USSR launching Sputnik, the US formed the Advanced Research Projects Agency (ARPA) in 1958. Within 18 months, ARPA had developed the first successful US satellite. In 1962, Dr. J.C.R. Licklider was appointed to head ARPA's Information Processing Techniques Office, and he changed ARPA's direction to offer more research grants to universities. In fact, it was due to his efforts that universities offered their first Ph.D.'s in computer science. It was this research that led to ARPANET, to commercial time-sharing systems, and ultimately to the Internet.

Licklider wrote in 1968 in The Computer as a Communication Device:

"To appreciate the importance the new computer-aided communication can have, one must consider the dynamics of 'critical mass,' as it applies to cooperation in creative endeavor. Take any problem worthy of the name, and you find only a few people who can contribute effectively to its solution. Those people must be brought into close intellectual partnership so that their ideas can come into contact with one another. But bring these people together physically in one place to form a team, and you have trouble, for the most creative people are often not the best team players, and there are not enough top positions in a single organization to keep them all happy. Let them go their separate ways, and each creates his own empire, large or small, and devotes more time to the role of emperor than to the role of problem solver. The principals still get together at meetings. They still visit one another. But the time scale of their communication stretches out, and the correlations among mental models degenerate between meetings so that it may take a year to do a week’s communicating. There has to be some way of facilitating communication among people without bringing them together in one place."

Here you see Licklider speaking of more than just communication: he also describes methods of collaboration and how people function in groups.

1960s — Augmentation


One of the early ARPA research projects was at SRI, where Doug Engelbart, inspired by Vannevar Bush's vision, set up a research lab that created an elaborate hypermedia system called NLS (oNLine System). This was the first successful implementation of hypertext (though that term had not yet been invented), and it was here that the mouse was invented, as well as the first on-screen video teleconference.

Engelbart's seminal work was Augmenting Human Intellect: A Conceptual Framework, which he wrote in 1962. In it he set out his basic idea of augmentation:

"By 'augmenting human intellect' we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems. Increased capability in this respect is taken to mean a mixture of the following: more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that before seemed insoluble. And by 'complex situations' we include the professional problems of diplomats, executives, social scientists, life scientists, physical scientists, attorneys, designers—whether the problem situation exists for twenty minutes or twenty years. We do not speak of isolated clever tricks that help in particular situations. We refer to a way of life in an integrated domain where hunches, cut-and-try, intangibles, and the human 'feel for a situation' usefully co-exist with powerful concepts, streamlined terminology and notation, sophisticated methods, and high-powered electronic aids."

He was also among the first to say that in order to design such tools, we must:

"integrate psychology and organizational development with all of these advances in computing technology."

Over time this term evolved into 'office augmentation' -- Engelbart preferred the term 'augmentation' almost anywhere that 'automation' was used, as automation seemed de-personalizing. However, Engelbart's work was ultimately sold by SRI to Tymshare, which commercialized it under a newly formed "Office Automation Division". Thus it appears that the term 'automation' won out over 'augmentation', and Engelbart's ideas of integrating psychology and organizational development were lost.

1970s — Office Automation

In the 1960s, IBM coined the term 'word processing', which originally encompassed all business equipment -- including manually operated typewriters -- concerned with the handling of text, as opposed to data. By the 70s, IBM was attempting to broaden the scope of its products to all aspects of the office, so it coined the term 'office automation'.

The use of this term swiftly became quite generic, and it was used by all the major computer companies of the time. However, any ideas of collaboration became lost in the ideas of process and automation.

1970s — Electronic Information Exchange System (EIES)


Yet the number of successful product lines bearing the tag 'office automation' did mean that there was increased research money for creating new tools. One of the most important projects was the Electronic Information Exchange System (EIES), which had funding from for-profit companies like IBM and AT&T, non-profit foundations like the Annenberg Trust, and governmental agencies like the NSF and the New Jersey Commission on Science and Technology.

EIES was the first major implementation of collaborative software. In a paper from 1972, EIES founder Murray Turoff describes an early version of EIES:

"Basically the Delphi Conference appears to have utility when one or more of the following conditions were met:

  • the group cannot meet often enough in committee to give adequate timely consideration to the topic because of time or distance constraints
  • there is a specific reason to preserve the anonymity of the conferees (e.g., refereeing of position papers or a free exchange among different levels in an organizational structure)
  • the group is too large for an effective conference telephone call or committee exchange
  • the group is interdisciplinary to the extent that a structured or refereed communication mode as opposed to a committee or panel approach is more desirable in promoting an efficient exchange of information
  • telephone and letter communications, on a one-to-one basis, are insufficient or too cumbersome to augment the particular committee activity
  • disagreements among members of the group are too severe for a meaningful committee or face-to-face process for the exchange of views and information.
    You can see from this list that EIES pioneered many of the concepts of BBS-style community software that we see today. Ultimately EIES featured threaded replies, anonymous messages, polling, and more. Also, note the early importance of trying to understand groups so that you can optimize for the best group process.
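
    To make the feature list above concrete, here is a minimal sketch of how threaded replies and polling can be modeled in software. This is my own illustration in modern Python, not EIES's actual design; all class and method names are hypothetical.

```python
# Illustrative sketch (not EIES's real design) of two features the
# article attributes to EIES: threaded replies and polling.
from dataclasses import dataclass, field

@dataclass
class Message:
    author: str          # may be a pseudonym, supporting anonymous posts
    body: str
    replies: list["Message"] = field(default_factory=list)

    def reply(self, author: str, body: str) -> "Message":
        """Attach a child message, forming a tree of threaded replies."""
        child = Message(author, body)
        self.replies.append(child)
        return child

@dataclass
class Poll:
    question: str
    votes: dict = field(default_factory=dict)   # member -> choice

    def vote(self, member: str, choice: str) -> None:
        self.votes[member] = choice             # one vote per member

    def tally(self) -> dict:
        counts: dict = {}
        for choice in self.votes.values():
            counts[choice] = counts.get(choice, 0) + 1
        return counts

root = Message("anonymous", "Position paper: ...")
root.reply("referee-1", "Comment on section 2")
poll = Poll("Accept the position paper?")
poll.vote("a", "yes"); poll.vote("b", "yes"); poll.vote("c", "no")
# poll.tally() == {"yes": 2, "no": 1}
```

    Even this toy version shows why the Turoff list matters for design: anonymity, structured exchange, and vote tallying are properties of the data model, not afterthoughts.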

    However, as a generic term, EIES was too cumbersome. I see references from that period to terms like 'decision support system', 'computer-mediated communications', and 'collective intelligence', but none of these were broadly adopted either.

    1980s — Groupware (Part 1)


    Peter and Trudy Johnson-Lenz are credited by many as coining the term 'groupware' in 1978, after experiencing EIES for the first time. They defined groupware as:

    "intentional group processes plus software to support them."

    I have long preferred this definition for two reasons -- first, the word intentional implies conscious design. Second, this definition also contains the important distinction that group processes come before the software. I felt that this definition properly excluded multi-user databases and electronic mail that are not designed specifically to enhance the group process. (I wrote about this in a 1990 article called Definitions of Groupware.)

    There were a number of other definitions during that period:

  • Doug Engelbart -- "A co-evolving human-tool system."
  • David Coleman -- "Computer-mediated collaboration that increases the productivity or functionality of person-to-person processes."
  • C.A. 'Skip' Ellis -- "Computer-based systems that support groups of people engaged in a common task (or goal) and that provide an interface to a shared environment."

    Very swiftly, the term 'groupware' was adopted by the EIES community, as well as by the developers of much of the spinoff software created in the early 80s. However, the term was not broadly adopted outside of this community for some time.

    1980s — Computer-Supported Collaborative (or sometimes Cooperative) Work (CSCW)

    Meanwhile, the academic community was not happy with either 'office automation' or 'groupware' as a term for research into how groups use computers to collaborate.

    After the failure of an ACM conference on Office Automation, MIT's Irene Greif and DEC's Paul Cashman coined the term CSCW for a workshop held in 1984, which was followed by the first CSCW conference in 1986. There is still an annual CSCW Conference, which this year is being held in Chicago on November 6-10th.


    The people initially involved with this conference came from either a Human-Computer Interaction (HCI) background or an Information Systems (IS) background, thus the different definitions for the second 'C' in 'CSCW': the HCI people preferred the small-team-focused 'cooperative', whereas the IS people chose the broader 'collaborative'. Scott Schopieray has a nice diagram about this on the right.

    The Digital Media Laboratory defines CSCW as:

    "a multidisciplinary research field including computer science, economics, sociology, and psychology. CSCW research focuses on developing new theories and technologies for coordination of groups of people who work together."

    Brian Wilson defined it as:

    "CSCW is a generic term which combines the understanding of the way people work in groups with the enabling technologies of computer networking, and associated hardware, software, services and techniques."

    However, most definitions I've seen compare it to groupware:

  • The Applied Informatics and Distributed Systems Group at Technische Universität München -- "While Groupware refers to the real computer-based systems, the notion CSCW means the study of tools and techniques of Groupware as well as their psychological, social and organizational effects."
  • Tom Brink -- "Groupware is often used to specifically denote the technology that people use to work together, whereas CSCW refers to the field that studies the use of that technology."

    This term never really was adopted by anyone except the academic community, and even now there are many who prefer different terms, such as 'social computing' or 'coordination science'. danah boyd offered this comment to me in an email:

    "In the Computer Supported Cooperative Work (CSCW) space (those who address this area in academia), one switch has been to 'social computing' and the move was far more intentional. CSCW comes out of Human-Computer Interaction (HCI) paradigm.  By and large, HCI has been strongly associated with quantitative psychology in terms of methods.


    Groupware and collaborative software have a heavy handed connotation of 'work' (deeply connected to HCI's emphasis on activity theory).  While CSCW has the term work directly embedded in it, there's a strong push towards other aspects of social life.  Qualitative approaches have been infused into HCI and HCI practitioners are drawing heavily from sociology and anthropology, focusing directly on everyday social life. [This move may also be a purposeful move towards Marx, but maybe i'm reading too far into things.]"

    1990s — Groupware (Part 2)


    Meanwhile, the term 'groupware' hit the mainstream in 1988, when Robert Johansen wrote the best-selling business book Groupware: Computer Support for Business Teams. One unique contribution that Johansen's book offered was a distinction between time and place for different types of collaboration. See the diagram on the right for some detail.

    Unfortunately, this success was also the downfall of the term 'groupware', for it was co-opted by marketing. Initially the co-opting was done by Lotus Notes, which I personally didn't feel deserved to be called groupware: it was really a multi-user database that could be used to build groupware, not groupware itself. Then Microsoft further corrupted the term when it released Microsoft Exchange Server and Outlook, with calendaring features, to compete with Lotus Notes, and called those groupware as well.

    Chip Morningstar, an early pioneer in collaboration software and virtual worlds, comments:

    "(in the 1990's) I know that we (the Xanadu/AMIX community) hated the term 'groupware', as would anyone who has any respect for the English language. Also, at the time, the term was generally applied to things like Lotus Notes, which we felt was in a category distinct from what we were doing."

    Currently Wikipedia defines groupware as:

    "software that integrates work on a single project by several concurrent users at separated workstations"

    Thus today almost any software that supports multiple users can somewhat legitimately claim to be 'groupware'.

    1990s — Origin of Social Software

    While the term 'groupware' was slowly losing its meaning, a new phrase, 'social software', was beginning to come into vogue. However, for the first 15 years of its existence, mostly in the 1990s, the term was rarely used outside of very specialized groups.

    One of the best ways I've found to see how words, terms, phrases, and memes spread through culture is to look through Google's marvelous archive of Usenet newsgroups. Searching by date, the earliest reference I can find to the term 'social software' is a posting from 1990 -- in this newsgroup posting it isn't really clear what the definition of social software is, only that it is associated with 'open hypertext' and a committee in Japan studying 'social-hyper computing'. The next mention of social software is in 1992, when Ted Nelson's Xanadu and Phil Salin's AMIX are called social software.

    Over the next couple of years, usage of the term 'social software' appears largely associated with the nanotechnology community and those influenced by it, or with the diaspora of people who left Xanadu and AMIX when both were closed by Autodesk. Given these references, my first guess was that the term originated within the Xanadu/AMIX communities, as they had close connections with many of the people involved with nanotechnology.

    K. Eric Drexler

    However, after contacting some of my colleagues who used to work for Xanadu or AMIX, I learned that the term probably came from K. Eric Drexler, founder of the Foresight Institute. Drexler is best known for coining the term 'nanotechnology', and his interest in hypertext and group augmentation comes from his desire to make sure that we think critically about technology before we develop it.

    The earliest reference that I can find to the term 'social software' in his writings is in Hypertext Publishing and the Evolution of Knowledge (originally published at the Hypertext '87 Conference, but updated online through 1997), where the term is used three times, in the following contexts:

    Filtered vs. bare hypertext: A system that shows users all local links (no matter how numerous or irrelevant) is bare hypertext. A system that enables users to automatically display some links and hide others (based on user-selected criteria) is filtered hypertext. This implies support for what may be termed social software, including voting and evaluation schemes that provide criteria for later filtering.
    Agents can also implement social software functions - for example, applying voting-and-rating algorithms to sets of reader evaluations and publishing the results.
    A hypertext publishing medium will have abilities beyond supporting improved critical discussion. Since it is computer-based, it can naturally support software for collaborative development of modeling games and simulations [29] (and enable effective criticism of published model structures and parameters). Social software could facilitate group commitment and action: individuals could take unpublicized positions of the form I will publicly commit to X if Y other people do so at the same time. Once Y people take a compatible position, everyone's commitment (to making a statement, forming a group, making a contribution, etc.) could be automatically published. The possibilities for hypertext-based social software seem broad.
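
    Drexler's third passage describes a concrete mechanism: a pledge of the form "I will publicly commit to X if Y other people do so" stays private until enough compatible pledges exist, and then all of them are published at once. Here is a minimal sketch of that idea; the function and field names are my own, it assumes all pledgers share the same threshold, and it counts the pledger among the Y rather than requiring Y others.

```python
# Hypothetical sketch of the conditional-commitment mechanism Drexler
# describes: pledges stay unpublished until the threshold is met,
# then the whole compatible set is published together.

def add_pledge(pledges: list, position: str, member: str, threshold: int) -> list:
    """Record a private pledge; return the list of members whose
    commitments get published (empty until the threshold is reached)."""
    pledges.append({"position": position, "member": member,
                    "published": False})
    matching = [p for p in pledges
                if p["position"] == position and not p["published"]]
    # Publish only once enough people have taken a compatible position.
    if len(matching) >= threshold:
        for p in matching:
            p["published"] = True
        return [p["member"] for p in matching]
    return []

pledges: list = []
add_pledge(pledges, "form working group", "alice", 3)   # returns []
add_pledge(pledges, "form working group", "bob", 3)     # returns []
names = add_pledge(pledges, "form working group", "carol", 3)
# names == ["alice", "bob", "carol"]: all three published at once
```

    In economics this pattern is now known as an assurance contract; sites like PledgeBank later built services around exactly this logic.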

    I wrote Eric to find out why he used the term, and this was his reply:

    "I don't recall when I began using it (it wasn't in Engines of Creation), but it does seem to me, as best I can recall, that I coined it.

    I used the term 'social software' because I am concerned with communication and collaboration on all scales, including the whole of society. Thus, I see media at the scale of the World Wide Web as forms of social software.

    We need to make society-scale conflict -- not just group-scale cooperation -- more productive. Better media, better social software, can help.

    I'd rather see an emphasis on this dimension of social software than on the origin of the name itself."

    Drexler's term 'social software' didn't initially take off. There do not appear to be many consistent mentions of the term in the late 90s. There continue to be references that I attribute to the term spreading out from the nanotechnology community, but I also see references to the 'social software' of the brain. There also seem to have been a few consulting companies named Social Software, but none appear to have had much success. The wiki was invented in 1995, but I don't see it, or any of the subsequent wikis, defining themselves as social software for a number of years.

    2000s — Evolution of Social Software


    It wasn't until late 2002 that the term 'social software' came into more common usage, probably due to the efforts of Clay Shirky, who organized a "Social Software Summit" in November of 2002. He recalls his first usage of the term to be from approximately April of 2002.

    I asked Clay if it was the loss of meaning of the term 'groupware' that made him choose the term 'social software', and he replied:

    "I was looking for something that gathered together all uses of software that supported interacting groups, even if the interaction was offline, e.g. Meetup, nTag, etc. Groupware was the obvious choice, but had become horribly polluted by enterprise groupware work."

    I asked him why he didn't use the term 'collaborative software' and he commented:

    "...because that seems a sub-set of groupware, leaving out other kinds of group processes such as discussion, mutual advice or favors, and play.

    The broader issue is that there was no word or phrase that grouped the CSCW and online community currents together without also including a lot of non-group oriented stuff. CMC (Computer-Mediated Communication) for example, includes broadcast outlets like C|Net, two-person email exchanges, and spam -- much too broad. There was also no word or phrase that called attention to the explosion of interesting software for group activities that fell outside online communities and CSCW, things like Bass-Station (which is for offline community) or "Uncle Roy is All Around You" (which is computer-supported collaborative play.)"

    Clay also offered me some interesting commentary on the term 'social computing':

    "On a side note, there is in the research community a similar phrase, 'social computing', that both MSFT and IBM use. I think this phrase also doesn't fit the domain well. It seems to suffer from Shannon Envy, where the researchers interested in social effects are trying to convince their colleagues that what they are working on is also computing.

    I think the 'social computing' phrase is a shame for two reasons. First, there's no need to apologize for studying social effects by pretending that they are a form of computing (the old argument about computers as computing vs communicating devices goes back to Licklider in the early 60s, and its disheartening to see the communications people agonizing over it 40 years on).

    Second, the phrase social computing could describe a really interesting domain, where groups are used to find approximately optimal solutions to hard combinatorial problems."

    2000s — Changing Definitions of Social Software


    An early definition of social software from Clay was:

    "1. Social software treats triads of people differently than pairs.
    2. Social software treats groups as first-class objects in the system."

    However, Clay more recently prefers the simpler:

    "software that supports group interaction"

    I note that this is quite a bit closer to the Johnson-Lenz definition of groupware.

    One of the areas of disagreement about the definition of social software is scope. Sunir Shah, host of MeatballWiki, conceives of social software as being mainly about support for online communities, whereas Clay Shirky wants it to also:

    "explicitly try to include online support for both lightweight social value (e.g. and offline interaction (e.g. Dodgeball, PacManhattan) in the definition."

    There is a long discussion on Meatball Wiki at SocialSoftware on this topic. One of the more interesting definitions comes from Tom Coates:

    "augmentation of human's socializing and networking abilities by software, complete with ways of compensating for the overloads this might engender"

    I asked Adina Levin, of SocialText, about her thoughts on why we are now using the term 'social software' and not some other term. She said:

    "I believe that there are new things in social software, but we're also compelled to use new words instead of building on the old ones because of the way the discourse works.

    When you talk about 'groupware', people think of the hard-to-use, under-adopted Lotus Notes category. In the mid-90s, groupware and knowledge management were used for software that represented the taylorization of knowledge work – the idea that you can automate knowledge work into pre-defined workflows and "capture the assets" in people's brains. Often, the ideas sound good to managers, but the tools did little for the people using them. There were fancy schemes intended to "incent participation" because the tools didn't do much for the people using them without lollipops. Email had massive adoption, and more complex tools often gathered dust.

    Meanwhile, the term 'virtual community' became associated with discredited ideas about cyberspace as an independent polity, and failed dotcom ideas about assembling community in the shadow of a mass-market brand such as forums on the Coca Cola site.

    Several years ago, in the depths of the tech recession, there were signs of creative life in weblog and journal communities, conversation discovery with daypop and then technorati, the growth curve of wikipedia, mobile games, photo and playlist sharing. The liveliness was about the communities, and also about the culture of tool mix'n'match bricolage. Many of the attributes of social software -– hyperlinks for naming and reference, weblog conversation discovery, standards-based aggregation -– build on older forms. But the difference in scale, standardization, simplicity, and social incentives provided by web access turn a difference in degree to a difference in kind.

    These forms grew without any forced discussion "how to incent participation". People are compelled to write blogs and journals to show off and to share, to contribute to wikipedia and open source software projects for the joy of building things with other people. There are some lessons about social patterns and social affordances that this generation of social software communities and tools get right, are worth understanding and building on.

    We might be better off as a culture if we used rhetorical techniques from traditional cultures to appropriate the words of the previous generation, but deepen them with new insights from the current generation. But we're children of the enlightenment, we want progress, and in order to get the (deserved) attention for new generations of real innovation, we need to use new terms."

    2010 — Future Thoughts on Social Software


    In examining the origins of 'social software' we can see the terminology for the field has moved through a sort of life cycle. There have been many terms for this type of software, some of which have taken off, and some of which have not.

    Typically, a visionary originates a term, and a community around that visionary may (or may not) adopt it. The diaspora of the term from that point can be slow, with 10 or 15 years passing before a term is more generally adopted. Once a term is more broadly adopted, it faces the risk of becoming a marketing term, corrupted into differentiating products rather than explaining ideas.

    Is 'social software', which is just now gaining wide acceptance, destined for the same trash heap of uselessness as 'groupware'? And if so, what impact does the changing of this terminology have on the field of social software itself? Only the future holds those answers ...

    Posted on October 13, 2004 at 11:40 PM in Social Software, Web/Tech, Wiki | Permalink



    Social Software and the Politics of Groups

    First published March 9, 2003 on the "Networks, Economics, and Culture" mailing list.

    Social software, software that supports group communications, includes everything from the simple CC: line in email to vast 3D game worlds like EverQuest, and it can be as undirected as a chat room, or as task-oriented as a wiki (a collaborative workspace). Because there are so many patterns of group interaction, social software is a much larger category than things like groupware or online communities -- though it includes those things, not all group communication is business-focused or communal. One of the few commonalities in this big category is that social software is unique to the internet in a way that software for broadcast or personal communications is not.

    Prior to the Web, we had hundreds of years of experience with broadcast media, from printing presses to radio and TV. Prior to email, we had hundreds of years of experience with personal media -- the telegraph, the telephone. But outside the internet, we had almost nothing that supported conversation among many people at once. Conference calling was the best it got -- cumbersome, expensive, real-time only, and useless for large groups. The social tools of the internet, lightweight though most of them are, have a kind of fluidity and ease of use that the conference call never attained: compare the effortlessness of CC:ing half a dozen friends to decide on a movie, versus trying to set up a conference call to accomplish the same task.

    The radical change was de-coupling groups in space and time. To get a conversation going around a conference table or campfire, you need to gather everyone in the same place at the same moment. By undoing those restrictions, the internet has ushered in a host of new social patterns, from the mailing list to the chat room to the weblog.

    The thing that makes social software behave differently than other communications tools is that groups are entities in their own right. A group of people interacting with one another will exhibit behaviors that cannot be predicted by examining the individuals in isolation, peculiarly social effects like flaming and trolling or concerns about trust and reputation. This means that designing software for group-as-user is a problem that can't be attacked in the same way as designing a word processor or a graphics tool.

    Our centuries of experience with printing presses and telegraphs have not prepared us for the design problems we face here. We have had real social software for less than forty years (dating from the PLATO system), with less than a decade of general availability. We are still learning how to build and use the software-defined conference tables and campfires we're gathering around.

    Old Systems, Old Assumptions

    When the internet was strange and new, we concentrated on its strange new effects. Earlier generations of social software, from mailing lists to MUDs, were created when the network's population could be measured in the tens of thousands, not the hundreds of millions, and the users were mostly young, male, and technologically savvy. In those days, we convinced ourselves that immersive 3D environments and changing our personalities as often as we changed socks would be the norm.

    That period, which ended with the rise of the Web in the early 1990s, was the last time the internet was a global village, and the software built for this environment typically made three assumptions about groups: they could be of any size; anyone should be able to join them; and the freedom of the individual is more important than the goals of the community.

    The network is now a global metropolis, vast and heterogeneous, and in this environment groups need protection from too-rapid growth and from being hijacked by anything from off-topic conversations to spam. The communities that thrive in this metropolitan environment violate most or all of the earlier assumptions. Instead of unlimited growth, membership, and freedom, many of the communities that have done well have bounded size or strong limits to growth, non-trivial barriers to joining or becoming a member in good standing, and enforceable community norms that constrain individual freedoms. Forums that lack any mechanism for ejecting or controlling hostile users, especially those convened around contentious topics, have often broken down under the weight of users hostile to the conversation (viz. Usenet groups like soc.culture.african.american).

    Social Software Encodes Political Bargains

    Social interaction creates a tension between the individual and the group. This is true of all social interaction, not just online. Consider, from your own life, that moment where you become bored with a dinner party or other gathering. You lose interest in the event, and then, having decided it is not for you, a remarkable thing happens: you don't leave. For whatever reason, usually having to do with not wanting to be rude, your dedication to group norms overrides your particular boredom or frustration. This kind of tension between personal goals and group norms arises at some point in most groups.

    Any system that supports groups addresses this tension by enacting a simple constitution -- a set of rules governing the relationship between individuals and the group. These constitutions usually work by encouraging or requiring certain kinds of interaction, and discouraging or forbidding others. Even the most anarchic environments, where "Do as thou wilt" is the whole of the law, are making a constitutional statement. Social software is political science in executable form.

    Different constitutions encode different bargains. Slashdot's core principle, for example, is "No censorship"; anyone should be able to comment in any way on any article. Slashdot's constitution (though it is not called that) specifies only three mechanisms for handling the tension between individual freedom to post irrelevant or offensive material, and the group's desire to be able to find the interesting comments. The first is moderation, a way of convening a jury pool of members in good standing, whose function is to rank those posts by quality. The second is meta-moderation, a way of checking those moderators for bias, as a solution to the "Who will watch the watchers?" problem. And the third is karma, a way of defining who is a member in good standing. These three political concepts, lightweight as they are, allow Slashdot to grow without becoming unusable.
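    These three mechanisms are political concepts, not code, but their interlocking can be sketched in a few lines. The class names, karma threshold, and point values below are invented for illustration and bear no relation to Slashdot's actual implementation; the sketch only shows how moderation, meta-moderation, and karma feed into one another.

```python
from dataclasses import dataclass, field

KARMA_THRESHOLD = 5  # hypothetical cutoff for "member in good standing"

@dataclass
class Post:
    author: str
    text: str
    score: int = 0  # net moderation score

@dataclass
class Community:
    karma: dict = field(default_factory=dict)  # user -> karma points

    def in_good_standing(self, user: str) -> bool:
        # karma defines who may join the jury pool of moderators
        return self.karma.get(user, 0) >= KARMA_THRESHOLD

    def moderate(self, moderator: str, post: Post, delta: int) -> None:
        # moderation: only members in good standing rank posts by quality
        if not self.in_good_standing(moderator):
            raise PermissionError(f"{moderator} may not moderate")
        post.score += delta
        # well-rated posts feed back into the author's karma
        self.karma[post.author] = self.karma.get(post.author, 0) + delta

    def meta_moderate(self, moderator: str, fair: bool) -> None:
        # meta-moderation: biased moderators lose the karma that
        # qualified them to moderate in the first place
        self.karma[moderator] = self.karma.get(moderator, 0) + (1 if fair else -2)
```

    The point of the sketch is the feedback loop: karma gates moderation, moderation shapes karma, and meta-moderation polices the moderators, which is what lets the whole arrangement scale without a central censor.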

    The network abounds with different political strategies. Kuro5hin's distributed editorial function, LiveJournal's invitation codes, MetaFilter's closing off of user signups during population surges, Joel Spolsky's design principles for the Joel on Software forum, and the historical reactions of earlier social spaces like LambdaMOO or Habitat to constitutional crises are all ways of responding to the fantastically complex behavior of groups. The variables include different effects at different scales (imagine the conversation at a dinner for 6, 60, and 600), the engagement of the users, and the degree to which participants feel themselves to be members of a group with formal goals.

    Further complicating all of this are the feedback loops created when a group changes its behavior in response to changes in software. Because of these effects, designers of social software have more in common with economists or political scientists than they do with designers of single-user software, and operators of communal resources have more in common with politicians or landlords than with operators of ordinary web sites.

    Testing Group Experience

    Social software has progressed far less quickly than single-user software, in part because we have a much better idea of how to improve user experience than group experience, and a much better idea of how to design interfaces than constitutions. While word processors and graphics editors have gotten significantly better over the years, the features of mailing list software are not that different from those of the original LISTSERV program of 1985. In fact, most of the work on mailing list software has been around making it easier to set up and administer, rather than making it easier for the group using the software to accomplish anything.

    We have lots of interesting examples of social software, from the original SF-LOVERS mailing list, which first appeared in the 1970s and outlived all the hardware of the network it launched on, to the Wikipedia, a giant community-created encyclopedia. Despite a wealth of examples, however, we don't have many principles derived from those examples other than "No matter how much the administrators say it's 'for work', people will bend communications tools to social uses" or "It sure is weird that the Wikipedia works." We have historically overestimated the value of network access to computers, and underestimated the value of network access to other people, so we have spent much more time on the technical rather than social problems of software used by groups.

    One fruitful question might be "How can we test good group experience?" Over the last several years, the importance of user experience, user testing, and user feedback have become obvious, but we have very little sense of group experience, group testing, or group feedback. If a group uses software that encourages constant forking of topics, so that conversations become endless and any given conversation peters out rather than being finished, each participant might enjoy the conversation, but the software may be harming the group goal by encouraging tangents rather than focus.

    If a group has a goal, how can we understand the way the software supports that goal? This is a complicated question, not least because the conditions that foster good group work, such as a clear decision-making process, may well upset some of the individual participants. Most of our methods for soliciting user feedback assume, usually implicitly, that the individual's reaction to the software is the critical factor. This tilts software and interface design towards single-user assumptions, even when the software's most important user is a group.


    Another critical question: "What kind of barriers work best?" Most groups have some sort of barrier to group membership, which can be thought of as a membrane separating the group from the rest of the world. Sometimes it is as simple as the energy required to join a mailing list. Sometimes it is as complicated as getting a sponsor within the group, or acquiring a password or key. Sometimes the membrane is binary and at the edge of the group -- you're on the mailing list or not. Sometimes it's gradated and internal, as with user identity and karma on Slashdot. Given the rich history we have with such social membranes, can we draw any general conclusions about their use by analyzing successes (or failures) in existing social software?
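    One way to make the membrane metaphor concrete is to model each barrier type as a simple admission rule. The helper names and data shapes below are invented for illustration; the point is only that a mailing-list membrane, a karma membrane, and a sponsorship membrane all reduce to different predicates over the same question, "what may this person do here?"

```python
from typing import Callable, Dict, Optional, Set

# A membrane is a rule deciding whether a would-be participant gets in.
Membrane = Callable[[str], bool]

def mailing_list(subscribers: Set[str]) -> Membrane:
    # binary membrane at the edge of the group: you're on the list or not
    return lambda user: user in subscribers

def karma_gate(karma: Dict[str, int], threshold: int) -> Membrane:
    # gradated, internal membrane: standing grows with reputation
    return lambda user: karma.get(user, 0) >= threshold

def sponsored(sponsors: Dict[str, Optional[str]]) -> Membrane:
    # joining requires a sponsor who is already inside the group
    return lambda user: sponsors.get(user) is not None
```

    Framing membranes this way makes them comparable: the design question becomes which predicate, applied where, best protects the group without sealing it off.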

    There are thousands of other questions. Can we produce diagrams of social networks in real time, so the participants in a large group can be aware of conversational clusters as they are forming? What kind of feedback loops will this create? Will software that lets groups form with a pre-set dissolution date ("This conversation good until 08/01/2003.") help groups focus? Can we do anything to improve the online environment for brainstorming? Negotiation? Decision making? Can Paypal be integrated into group software, so that groups can raise and disperse funds in order to pursue their goals? (Even Boy Scouts do this in the real world, but it's almost unheard of online.) And so on.

    The last time there was this much ferment around the idea of software to be used by groups was in the late 70s, when usenet, group chat, and MUDs were all invented in the space of 18 months. Now we've got blogs, wikis, RSS feeds, Trackback, XML-over-IM and all sorts of IM- and mail-bots. We've also got a network population that's large, heterogeneous, and still growing rapidly. The conversations we can have about social software can be advanced by asking ourselves the right questions about both the software and the political bargains between users and the group that software will encode or enforce.


    The future of facts (and the rise of fact servers)

    The Wikipedia had to freeze the George W. Bush entry a few weeks ago because people were altering it to suit their political viewpoints at an alarming rate. So, the editors pared the page down to the non-controversial "core" of facts. There was still a lot of information there — much more than merely "He was born, he drank, he became president" — and occasional acknowledgements of controversies, such as whether Bush satisfactorily completed his National Guard service.

    But, most interesting to me, towards the top, on the right, the Wikipedia ran one of the staples of its biographical entries: A fact box.

    [Fact box from the Wikipedia entry on George W. Bush]

    I find this two-tiered view of facts, quite common in reference works, fascinating. And in the context of a bottom-up work such as the Wikipedia, in the midst of a dust-up over what constitutes a factual account of the life of W, you have to ask: What's happening to facts?

    I don't like facts and I never have. Psychologically, metaphysically and sociologically, I'm uncomfortable in their stern, disapproving, Cheney-like presence.

    Psychologically, I freeze when I have to recite one. They are, for me, simply opportunities to be wrong in public. My hesitation is noticeable, leading people to think I must be struggling to make up the fact, which actually is frequently the case. That's why JOHO has been 100% fact free since its inception. That's my pledge to you.

    I also have a metaphysical problem with facts. Of course I understand that there's a real world that existed before I was born and into which I will be buried (or smudged, depending on the cause of my demise). But facts aren't the same thing as reality. They are one way reality — the way the world is apart from our awareness of it — shows itself to us. Without us, the universe would carry on fine, but facts wouldn't emerge from the darkness. Because experience is cultural, facts are cultural artifacts: They're expressed in language, they have a grammar, they are deeply contextual. Facts don't like us saying that, but it's true: "The Titanic sank in 1912" is only a fact because of a context that implicitly includes an understanding of how names stand for things, a decision to mark time by trips around the sun, a convention that numbers years from the birth of a guy I don't care much about, and a historical-cultural context that says that the sinking of a large ship is worth making an explicit proposition about.

    Now, you probably snort at that line of thought because you think I'm running from the pure, brutal "Look, it happened!" that facts express. But I'm not. It was sad when the great ship went down (down to the bottom of the...), and it happened on a date we agree on. But facts are not context-free meteors that slam into our planet unbidden. They are instead a way of conjuring up the world in one of its infinite facets. They are a way of speaking, a form of rhetoric, and thus should not be treated as if they are the end-all of thought and discussion. But, sociologically, that's often how they're used: They are the knuckle sandwich of rhetoric. Facts are, of course, peculiarly important, but they are not the only peculiar and important things we say to one another. And they are not quite as reality-based, muscular and manly as they pretend. Inside every fact is a value struggling to get out.

    I Love Facts

    To forestall rants about how I don't believe in facts and think that, for example, the date the Titanic went down is subject to debate, let me state for the record: The Titanic sank on April 15, 1912. We should reject any explanation of facts that lets someone claim that the date of its sinking is up for grabs, relative or unknowable. Facts are crucial in disciplines I care a lot about, including science and journalism. Nevertheless, facts are a form of understanding and a form of rhetoric, and thus they are always infected with slimy humanity.

    So, when the Web started heating up the Internet, I was among those who thought that we were going to see a merging of voice and facts, and, more particularly, voice and objectivity. (Objectivity is the mood in which we get all factual.) To a greater extent than I'd hoped, that's happening: Just read your 50 favorite blogs. Many Big-Time Journalists go to absurd lengths to hide their political sympathies — one editor boasts he doesn't even vote — but it's reversed on blogs: If we don't know who you're voting for, how can we trust what you write?

    And yet...There are classes of facts I don't want wrapped in voice. If I post a question about the battery life of a laptop, I'll trust the people who write in response more than I trust the computer company's site, but I trust the company site more for the dimensions of the machine. The company is liable for its answer in a way that a random blogger isn't; if I have to buy a new carrying case because the number was wrong, the blogger can say, "Sorry, dude, I misread the measuring tape," whereas I'll expect the company to compensate me one way or another.

    Similarly, I count on mainstream newspapers to provide fact-based stories that "cover" an event: I don't expect in the foreseeable future to be counting on webloggers to tell me how many troops attacked Samarra, how this was coordinated with other simultaneous battles, or how many civilians were killed. Of course I expect bloggers to fact check the media's ass but good, which implies that I don't have full confidence in the media's ability to deliver the facts. (PS: there's no such thing as "the" facts, because which facts are relevant is not itself a matter of fact.) But covering events seems to require the type of centralization that only a news bureau can provide. (Hint: Any sentence of mine of the form "only a _____ can provide" is likely to turn false particularly quickly.) Further, news organizations stand behind their stories in a way that someone talking over the virtual back fence doesn't have to. (Of course, sometimes the news media stand behind their stories Rather longer than they should.)

    The role of facts in discourse may look immutable, but it is exactly the sort of thing that can change; I've been reading Foucault recently and it's startling how such deep structures can transform rapidly. (It's also startling how unbelievably brilliant Foucault was.) I don't know what will happen, but my hunch is that we are heading towards commoditizing facts, driving down their value so that they don't provide differentiating value. For example, take the table of Bush facts at the Wikipedia. With the right API, the Wikipedia could become a Fact Server that delivers the undisputed facts about any of its 1,000,000+ topics to any application that asks politely, making facts cheaper than popcorn.

    Now, it would be irresponsible for a fact server to serve up dubious or putative facts, but if it only serves the commoditized facts, it won't have all that much value. So, perhaps fact servers will deliver facts along with metadata about how reliable the facts are: It's 0.99 certain that Bush was born in 1946 but it's 0.4 that he completed his National Guard duty. Will this sharpen the line between the two tiers of facts — the reliability of lower-class facts will always be subject to argument while 0.99s are beyond serious dispute — or will it tar all facts with the welcome brush of human fallibility?
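    A reliability-tagged fact server of this sort is easy to imagine in code. The sketch below is purely hypothetical: the storage shape, function name, and confidence numbers are invented to match the examples in the text, not drawn from any real Wikipedia interface.

```python
# Hypothetical store of facts with reliability metadata attached.
FACTS = {
    "George_W._Bush": [
        {"predicate": "year_of_birth", "value": 1946, "confidence": 0.99},
        {"predicate": "completed_national_guard_duty", "value": True, "confidence": 0.4},
    ],
}

def get_facts(topic, min_confidence=0.0):
    """Serve every stored fact about a topic at or above a confidence floor."""
    return [f for f in FACTS.get(topic, []) if f["confidence"] >= min_confidence]

# A client that wants only the commoditized, beyond-dispute core:
core = get_facts("George_W._Bush", min_confidence=0.9)
# A client willing to argue over the lower tier asks for everything:
everything = get_facts("George_W._Bush")
```

    The confidence floor is where the politics re-enter: whoever sets the threshold, server or client, decides where the line between the two tiers of facts gets drawn.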

    There are bunches of other questions, many of which take on an Hegelian cast. For example, the Wikipedia fact box gives Bush's date of birth but not his race. That's because our culture does not count race as relevant (haha!), and, no, you can't always tell from the photo. The Wikipedia fact box also does not state who W's parents are, yet in some cultures knowing your parentage is as important as knowing the year you were born. But, if Wikipedia acts as a fact server, it won't have to decide which 0.99 facts to include in the fact box. It will simply serve up all facts the requesting app wants. Thus, Bush's date of birth, race and parentage will show up as equal; if your culture values parentage, your app will make a big deal of that. If some other culture considers listing the date of birth to be a type of ageism, its apps will ignore that datum. Undoubtedly, some app will find intense value in the 0.99 fact that Bush is white. So, the commoditization of facts may result in the formation of cultural fact boxes that divide us on the basis of a consensus core of 0.99s that we all agree on: Cultures united in a core of commoditized facts from which they select the fact boxes that divide us. Weird. Or is it the way the world has always implicitly worked?

    The delivery of facts with probabilities as part of them could lead to unpredictable consequences. Building doubt into facts could transform their rhetorical and social role. Will we recognize facts as being as perpetually subject to argument as are opinions? Will their source of authority become an integral part of them, as opposed to being an outside reference? Will the recognition that they're socially conditioned degrade them so that all facts are equal, no matter how contradictory or stupid — appending a huge "Whatever!" to all factual discussions? Are we heading towards a more sophisticated, nuanced way of thinking that will put facts in their place, or towards a new age of stupidity and obstinacy? And in the new world of facts, what will be the sound of voices conversing and voices testifying?

    I believe we are currently inventing a new and important life for facts. We just don't yet know what it will be.

    The end of data?

    Here's an idea for the book I am perpetually working on working on. (No, that's not a typo. I've been working for over a year on a proposal that would enable me to work on the book.)

    There used to be a difference between data and metadata. Data was the suitcase and metadata was the name tag on it. Data was the folder and metadata was its label. Data was the contents of the book and metadata was the Dewey Decimal number on its spine. But, in the Third Age of Order (see the previous issue), everything is becoming metadata.

    For example, imagine you're at a large corporation doing a Third Order treatment of its digital library of research articles. Instead of (or, in addition to) designing a large, complex, hierarchical taxonomy, you focus on adding enough metadata to each article so that people will be able to sort and classify them any which way they want. If someone wants to find all the articles that talk about hydrocarbons written in Italian in 1965 and that have more than 30 footnotes, they'll be able to. If someone wants to make a browsable hierarchy based not on topic but on gender or on the number of co-authors, they'll be able to. You build enriched objects first so your users can forever after taxonomize the way they want to, instead of the way you think they'll want to.

    Now take a closer look at these information objects. They look like contents tagged with lots of metadata, but in fact they're all metadata. If I'm looking for an article about hydrocarbons written by Barbara Rodriguez, then the article's topic ("hydrocarbons") and author's name ("Rodriguez, Barbara") are metadata, and the content is the data. But, I could just as well be trying to remember the name of the author who wrote an article that included the phrase "Hydrocarbons are the burros of the cosmos" sometime in the 1960s, in which case the content and date are metadata and the author's name is the data. What's data and what's metadata depends on the person doing the asking.
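    The symmetry is easy to demonstrate: given enriched objects, one lookup function answers both questions, with the roles of "data" and "metadata" swapping depending on which fields the asker supplies. The records and field names below are invented to match the examples in the text.

```python
# Hypothetical enriched article records: every field is queryable.
articles = [
    {
        "author": "Rodriguez, Barbara",
        "topic": "hydrocarbons",
        "language": "Italian",
        "year": 1965,
        "footnotes": 34,
        "text": "Hydrocarbons are the burros of the cosmos, as every refinery knows.",
    },
]

def find(collection, **criteria):
    """Whatever fields you already know act as metadata (handles);
    whatever fields you retrieve act as data. No field is privileged."""
    def matches(obj):
        for key, want in criteria.items():
            have = obj.get(key)
            if callable(want):                     # predicate, e.g. footnotes > 30
                if not want(have):
                    return False
            elif isinstance(want, str) and isinstance(have, str):
                if want.lower() not in have.lower():  # substring match for text fields
                    return False
            elif have != want:
                return False
        return True
    return [obj for obj in collection if matches(obj)]

# Query by topic/language/footnotes to recover the article...
hits = find(articles, topic="hydrocarbons", language="Italian",
            footnotes=lambda n: n is not None and n > 30)
# ...or query by a half-remembered phrase to recover the author.
by_phrase = find(articles, text="burros of the cosmos")
```

    The same record answers both queries; which field counts as the "label" and which as the "contents" is decided entirely by the person doing the asking.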

    So, in the Third Age of Order, all data is metadata. Contents are labels. Data is all surface and no insides. It's all handles and no suitcase. It's a folder whose content is just another label. It's all sticker and no bumper.

    Why does this matter? It changes the primary job of information architects. It makes stores of information more useful to users. It enables research that otherwise would be difficult, thus making our culture smarter overall. But, most interestingly (at least to me), this does the ol' Einsteinian reverse flip to Aristotle. Aristotle assumed that of the 10 categories by which one could understand a thing, one must be primary: Where that thing fits into the tree of knowledge. So, you could say that Alcibiades is made of flesh or lived in Greece, but if you really want to understand him, you have to say that he is an animal of a particular kind. But, now that everything is metadata, no particular way of understanding something is any more inherently valuable than any other; it all depends on what you're trying to do. The old framework of knowledge — and authority — is getting a pretty good shake.

    Right? Wrong? Old? Obvious? Pointless? Stop me before I make a fool of myself to someone not as nice as you...

    My friend Robert Morris, who teaches computer science at U. Mass Boston and has always been unnecessarily generous to me with what he knows, says that the above is pretty much old news:

    The short answer is that in the business, nobody anymore contends there is a difference between data and metadata other than in a context such as you mention, namely the metadata is usually that part which helps you locate and use the other part and which you can often ignore if you already know those things.

    Bob points to Life Science IDs (LSIDs) as an example of a standard that does sort of distinguish data from metadata.

    An LSID is an immutable, permanent, globally unique key to a piece of information. The LSID spec requires that getData always return the same bytes for the entire future of the universe, whereas getMetadata may return things about the information that could change.

    LSIDs are being supported by the Interoperable Informatics Infrastructure Consortium (I3C). An LSID server sits in front of your database or application so you can continue to use your existing infrastructure.
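    The getData/getMetadata split can be sketched as a tiny in-memory resolver. The class, the publish/update methods, and the storage are inventions for illustration; only the contract itself, immutable data versus mutable metadata behind a permanent identifier, comes from the LSID description above.

```python
class LsidResolver:
    """Sketch of the LSID contract: the bytes behind an identifier never
    change, while descriptive metadata about them may evolve. The dicts
    here stand in for whatever database sits behind a real LSID server."""

    def __init__(self):
        self._data = {}       # lsid -> frozen bytes
        self._metadata = {}   # lsid -> mutable descriptive record

    def publish(self, lsid: str, payload: bytes, metadata: dict) -> None:
        if lsid in self._data:
            # Immutability promise: never rebind an existing LSID.
            raise ValueError(f"{lsid} already bound; mint a new LSID instead")
        self._data[lsid] = payload
        self._metadata[lsid] = dict(metadata)

    def get_data(self, lsid: str) -> bytes:
        # Must return the same bytes for the entire future of the universe.
        return self._data[lsid]

    def get_metadata(self, lsid: str) -> dict:
        # May return things about the information that change over time.
        return dict(self._metadata[lsid])

    def update_metadata(self, lsid: str, **changes) -> None:
        self._metadata[lsid].update(changes)
```

    Notice that the resolver never needs to know what the bytes mean: the data/metadata boundary is enforced purely by which operations are allowed to mutate what.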

    Sounds like the architecture for a life sciences fact server...



    The Creative Act

    by Marcel Duchamp

        Let us consider two important factors, the two poles of the creation of art: the artist on the one hand, and on the other the spectator who later becomes the posterity.

        To all appearances, the artist acts like a mediumistic being who, from the labyrinth beyond time and space, seeks his way out to a clearing.

        If we give the attributes of a medium to the artist, we must then deny him the state of consciousness on the esthetic plane about what he is doing or why he is doing it. All his decisions in the artistic execution of the work rest with pure intuition and cannot be translated into a self-analysis, spoken or written, or even thought out.

        T.S. Eliot, in his essay on "Tradition and Individual Talent", writes: "The more perfect the artist, the more completely separate in him will be the man who suffers and the mind which creates; the more perfectly will the mind digest and transmute the passions which are its material."

        Millions of artists create; only a few thousands are discussed or accepted by the spectator and many less again are consecrated by posterity.

        In the last analysis, the artist may shout from all the rooftops that he is a genius: he will have to wait for the verdict of the spectator in order that his declarations take a social value and that, finally, posterity includes him in the primers of Artist History.

        I know that this statement will not meet with the approval of many artists who refuse this mediumistic role and insist on the validity of their awareness in the creative act - yet, art history has consistently decided upon the virtues of a work of art through considerations completely divorced from the rationalized explanations of the artist.

        If the artist, as a human being, full of the best intentions toward himself and the whole world, plays no role at all in the judgment of his own work, how can one describe the phenomenon which prompts the spectator to react critically to the work of art? In other words, how does this reaction come about?

        This phenomenon is comparable to a transference from the artist to the spectator in the form of an esthetic osmosis taking place through the inert matter, such as pigment, piano or marble.

        But before we go further, I want to clarify our understanding of the word 'art' - to be sure, without any attempt at a definition.

        What I have in mind is that art may be bad, good or indifferent, but, whatever adjective is used, we must call it art, and bad art is still art in the same way that a bad emotion is still an emotion.

        Therefore, when I refer to 'art coefficient', it will be understood that I refer not only to great art, but I am trying to describe the subjective mechanism which produces art in the raw state... à l'état brut - bad, good or indifferent.

        In the creative act, the artist goes from intention to realization through a chain of totally subjective reactions. His struggle toward the realization is a series of efforts, pains, satisfaction, refusals, decisions, which also cannot and must not be fully self-conscious, at least on the esthetic plane.

        The result of this struggle is a difference between the intention and its realization, a difference which the artist is not aware of. Consequently, in the chain of reactions accompanying the creative act, a link is missing. This gap, representing the inability of the artist to express fully his intention, this difference between what he intended to realize and did realize, is the personal 'art coefficient' contained in the work.

        In other words, the personal 'art coefficient' is like an arithmetical relation between the unexpressed but intended and the unintentionally expressed.

        To avoid a misunderstanding, we must remember that this 'art coefficient' is a personal expression of art à l'état brut, that is, still in a raw state, which must be 'refined' as pure sugar from molasses by the spectator; the digit of this coefficient has no bearing whatsoever on his verdict. The creative act takes another aspect when the spectator experiences the phenomenon of transmutation: through the change from inert matter into a work of art, an actual transubstantiation has taken place, and the role of the spectator is to determine the weight of the work on the esthetic scale.

        All in all, the creative act is not performed by the artist alone; the spectator brings the work in contact with the external world by deciphering and interpreting its inner qualification and thus adds his contribution to the creative act. This becomes even more obvious when posterity gives a final verdict and sometimes rehabilitates forgotten artists.

        (From Session on the Creative Act, Convention of the American Federation of Arts, Houston, Texas, April 1957)


    1. Take into account that great love and great achievements involve great risk.

    2. When you lose, don't lose the lesson.

    3. Follow the 3 Rs:
        Respect for self,
        Respect for others,
        Responsibility for all your actions.

    4. Remember that not getting what you want is sometimes a wonderful stroke of luck.

    5. Learn the rules so you know how to break them properly.

    6. Don't let a little dispute injure a great relationship.

    7. When you realize you’ve made a mistake, take immediate steps to correct it.

    8. Spend some time alone every day.

    9. Open your arms to change, but don't let go of your values.

    10. Remember that silence is sometimes the best answer.

    11. Live a good, honorable life. Then when you get older and think back, you'll be able to enjoy it a second time.

    12. A loving atmosphere in your home is the foundation for your life.

    13. In disagreements with loved ones, deal only with the current situation. Don't bring up the past.

    14. Share your knowledge. It's a way to achieve immortality.

    15. Be gentle with the earth.

    16. Once a year, go someplace you've never been before.

    17. Remember that the best relationship is one in which your love for each other exceeds your need for each other.

    18. Judge your success by what you had to give up in order to get it.

    19. Approach love and cooking with reckless abandon.



    Life 2008. 4. 17. 00:30

    Fragments of half-formed thoughts lie scattered all over the place.

    On my Naver blog...
    On my Daum blog...
    And here...

    How on earth am I supposed to organize all this???;;;
    It feels like having a stack of ten-year-old magazines around: too much trouble to sort through, but too much of a waste to throw out.

    1. snowwon 2008.05.21 15:53

      Heh, I was thinking the same thing~


    Life 2008. 4. 13. 00:07
    The mechanism of miracles...
    is probably not some supernatural, super-scientific phenomenon beyond our understanding, like the ones in fairy tales and legends... but something that operates at a more everyday level... within the range of things we flatter ourselves we can explain scientifically...

    Meeting someone, coming to work on something... a chance opportunity arriving...
    I suspect that small, everyday things like these all happen inside the mechanism of miracles. Inside a very finely constructed matrix... that mechanism of miracles seems to be alive and running.

    Which is why I feel that grounding a religion in supernatural phenomena is quite a stretch.

    As Grissom once said on CSI...
    "I believe God is wise enough not to meddle in what people do."
    So do I.

    on datamatics

    YCAM/YCAM_05_Ikeda 2008. 2. 27. 15:12
    [datamatics = C4I + data.series]

    datamatics is a project that explores aesthetic potentials of data by using data itself - from its transparency to materiality, from its ultra-speed to hyper diffusion.

    The tactics of datamatics are to derive the hidden constants of data-ness from the vast data ocean that ranges from DNA and the everyday world to the universe and pure mathematics.

    datamatics consists of C4I and data.series.
    The basic unit of the project is 'data'; sub-projects are each named as ''.

    C4I is an integration of the 7 key 'data.' parts of datamatics.
    The characteristics of C4I lie in the continuum of a hyper-dense split second. Its function is performed as an audiovisual concert piece and is updated constantly. It touches the threshold of one's perception and recognition, and is experienced at selected theatre spaces across the world.

    data.series is a differentiation of datamatics, which comprises various 'data.' parts: installations, CDs, publications, and possibly film, DVD, and scientific research.
    The characteristics of each part are optimized to the medium used, according to the primal aim of datamatics.

    from IDEA, 2005.7.

    Wim Delvoye

    Art/Artists 2008. 1. 21. 19:20
    This little critic went to market
    Last Updated: 12:01am GMT 27/11/2005

    The film-maker Ben Lewis and a herd of Chinese pigs have one thing in common – a tattoo by the conceptual artist Wim Delvoye. So how much are they – and his shoulder – worth?

    Earlier this year I acquired a tattoo. It's on the back of my right shoulder and it shows Mickey Mouse on a crucifix with Minnie weeping at the base. It's signed 'Wim Delvoye'. Somewhere on a tiny farmstead on the outskirts of Beijing in China there's a pig running around with exactly the same tattoo, also signed 'Wim Delvoye'. That's because the pig and I got tattooed at the same time, and in the same room, by the same artist.


    A tattooed Ben Lewis

    I had been making a short film about Delvoye for my BBC4 series, Art Safari. The tattoo was the climax, but I first met the artist at his studio in his home town of Gent, where he was putting the finishing touches to his notorious Cloaca, a reproduction of the human digestive system. It consists of a waste disposal unit at one end (the mouth), a washing machine in the middle full of digestion enzymes (the stomach), and a kind of drying unit at the other end (the sphincter). You put food in at the top (the day that I dined with the machine we ate tuna sashimi, and Belgian chocolates, washed down with a light Chablis) … and guess what comes out of the bottom? The 'products' are sun-dried and sold as works of art in Perspex boxes, at about £2,000 per poop.

    This is, of course, silly art: Delvoye's work satirises the art world, with its inflated prices and daft intellectual cul-de-sacs. Cloaca makes the ultimate criticism of modern art - that most of it is crap; that the art world has finally disappeared up its own backside. 'When I was going to art school, all my family said I was wasting my time, and now I have made a work of art about waste,' he told me happily.

Wim Delvoye is a 40-year-old Belgian artist who has been on the margins of the contemporary art premier league for about a decade. He began with a set of gas canisters, meticulously reproduced in the style of white-and-blue Delft porcelain. He then established relations with craftsmen from Indonesia, whom he persuaded to sculpt ornate wooden concrete mixers, whose baroque foliage was finished off with gold leaf. Such ambiguous artworks are both a way of ennobling the everyday and a merciless mockery of the values of art.

    Delvoye drove me from his studio to see his latest large-scale work in progress. We parked up by a big shed on an industrial estate. Inside, two workmen were laser-cutting metal panels for model bulldozers and tipper trucks in the style of high European gothic. Delvoye had photographed numerous stained-glass windows, then worked the images into a model of a construction vehicle, in which every panel was decorated with Gothic motifs. It's a very contemporary way of working, using available objects and styles, colliding past and present, technology and craftsmen, that has a much simpler British counterpart in Marc Quinn's sculpture of Alison Lapper in Trafalgar Square.

    Wim and I gazed at his new work. 'It's social realism, a tribute to the working class,' Wim told me. 'First we had the Iron Age, then the Bronze Age and now we are coming to the end of the Concrete Age. It's like saying goodbye to the 20th century.'

    These trucks are scale models. Delvoye will one day make life-size working vehicles in this style. That's important. The artist is no longer likely to paint from, copy, interpret or be critical of the world around him - something that links Michelangelo, Picasso and Warhol. In the future, the work of art will enter into the real world.

    This is where the pigs come in. Delvoye has a project in China that puts this 'real world' idea into practice: he invited me to visit a small farmstead outside Beijing where he keeps 24 prime pigs, tended carefully by local villagers. Every animal is tattooed. Once a week he puts them under a mild anaesthetic and etches into their skin anything from Russian prison tattoos to Disney princesses to the Louis Vuitton logo. When one of these works of art is finished, the pig is slaughtered, the skin is preserved, stretched and sold for around £35,000. It's another wonderfully ambiguous work of art which appears to save pigs from the anonymity and industrialised death of factory farming, only to replace it with a new kind of sadistic, artistic cruelty. It's difficult to tell if the message is one that ridicules human beings for the coolness of body decoration, or appeals to us to stop seeing animals as a food source.

    I sought the answer to that question in my programme. I detest the way most art programmes take a fawning approach to their subject, when most contemporary art is so weak. I will go to any lengths to find out if art means something. Just talking to the artist and looking at the work is never enough. The artists are usually inarticulate, or English is their second language, or they're just not very bright. None of these criticisms was true of Delvoye - but his art was so ambiguous it was impossible to work out what it meant. Was it raising up the lowly, or humbling the mighty? Was it optimistic or cynical? Even the artist couldn't decide. He told me that 'the act of tattooing reveals the vanity of human beings. You're imagining yourself as a rock 'n' roll star so you want to express that on your body, then you put it on a pig and your belief system becomes ridiculous.'

    I have, with most contemporary art, a lack of confidence in my own judgment. I am always worried that art that I think is good, like Delvoye's, might not be, and so I like to think up a means of testing the work. That's why I got tattooed.

    Did I like it enough to acquire it permanently in my collection? Yes! For two-and-a-half hours I lay side by side with a pig, while we both received the same tattoo. It's a beautiful drawing with highlights in white and red. As a Jew, I delighted in the ambiguous cultural clash this image of Mickey Mouse on a crucifix represented - it was a Jewish joke about Christianity and at the same time an unforgivable contravention of Jewish law, since Jews with tattoos don't go to heaven. But, looking at it a week later in the bathroom mirror, when the scars had healed, I still couldn't make up my mind whether Delvoye was satirising human vanity, or aestheticising pigs.

    I booked an appointment for a valuation with Francis Outred, a specialist in contemporary art at Sotheby's. There I found my answer. The auctioneer refused my request to auction a deed to my skin, a sort of futures option on a work of art. He said that he couldn't tell how much wear and tear my artwork would suffer in my lifetime. He also said, disappointingly, that he couldn't put a value on my artwork. At any rate, I was worth much less than Delvoye's taxidermised pigs, one of which had once made £20,000 at auction.

    That's when I finally understood what Delvoye's work really meant: he had made a pig more valuable than a TV presenter. That's what I call great art.