Friday, October 10, 2008

No Museum is an Island - Picking at the Safe Haven Fallacy

Kurt Stuchell is interviewed here about his proposed new social networking site for museums called Museum and Educational Social Network (MESN). As stated in the article, the intent is "to create and maintain a safe place for young people to socially interact with museums and professionals." It appears to be a return to the safe haven concept.

It will be interesting to see how this experiment plays out. I wish Kurt success, but personally I am extremely skeptical. By removing Museums from the mainstream of social media onto their own island, you risk making them a backwater. Refining this metaphor, you reduce the likelihood that casual browsers and the merely curious will stumble across Museum content (and then become future seekers) in their everyday ramblings. I think people are more surfers than seekers, initially going to one of a few trusted sources (their e-mail, their banking info, the front page or funny pages of their hometown newspaper) and then letting their curiosity or social proof ("Hey, what's interesting all *those* people?") lead them onward. We appreciate well-crafted linearity, but learn and explore associatively, from one tangentially-related distraction to the next.

The safe havens established and promoted by the government and briefly championed in the early 2000s by the Smithsonian's Center for Education and Museum Studies (Smithsonian Education) haven't been resoundingly successful, which is perhaps not a surprise. Even little kids realize that you sell more lemonade from a street corner than from a cul de sac -- the advantages of increased foot traffic overwhelm the appearance of safety. What's more, it's a false dichotomy, since Museums, like other users, aren't limited to pursuing a single outreach methodology, except to the extent of their staff's limited resources. Why should museum staff invest duplicate effort? I've come to think that Facebook's power lies in its unlimited and free access to all, its enforced simplicity (despite its less customizable new design), and chiefly, its reinventability (the thing that makes Museum apps like Artshare work). Plus it has already grown its own audience.

Seems to me that offering a new business model is by itself insufficient to achieve the goal of promoting museum content in new ways on the web. Anyone wanting to establish a Facebook competitor will have to offer materially distinct functionality (say, shared web tools a la GMU's Omeka, or the kind of annotated blending of media attempted once upon a time by Smithsonian Folkways' Synchrotext, about which more here and here, and, in parallel invention, as used by the New York Times and, better still, in the Washington Post's Debate Decoder). Even at that, substantial investment would be required to seed content and establish value. Users cannot (or perhaps better, should not) be obtained through fiat, but by the consistent presentation of valued, superior content with the fewest barriers to entry (even where those barriers are merely those of direct navigation).

Content providers like Museums succeed where they coexist with, invite the participation of, and facilitate feedback from an unrestricted audience and wither when they establish ivory towers. Like I said, should be an interesting experiment. As they always do, truth and talent will [win] out.

Monday, August 11, 2008

Building A Free Society: Toward A More Perfect Copyright Law

According to Matt Mason in The Pirate’s Dilemma, “Copyright laws are encroaching on the public domain, but if the history of pirates is anything to go by, such laws are not often observed, become impossible to enforce, and eventually change.” (p. 99)

Okay, folks, get ready for a rather lengthy review of The Pirate’s Dilemma, or rather, a brief review of Matt Mason’s book, followed by a more extended discussion of some of the ideas contained therein as they relate to two recent DC Bar-sponsored programs, in conjunction with my current thinking about copyright law. For my fellow pedants, this essay is a logical partner to the April 3 note I called A Digital Needle in the Haystack: Finding the Good Stuff Online, which considered problems of online plagiarism and provenance, as well as to the part from my 2/11/08 Kickoff to an odd-thoughts blog, a relevant paragraph of which could easily serve as the synopsis of The Pirate’s Dilemma (and this very post), to wit:
So here's my rule in our great goldfish-bowl of a world. If you like anything you see badly enough that you feel it should be copied and spread like gospel (or even perhaps smeared like cream cheese)... go ahead. Take it. Do with it what you will. Just please be sure to credit your original source (that would be me, I believe, as the author here). I make no claims to originality, except in the copyright sense. All my thoughts and work are surely derivative of whatever I've consumed (and the more recent, the more influence on the regurgitation), but at least it's been processed through this man's wetware.
My point here is less to quote myself than to indicate the emergence of a new zeitgeist from a mere two data points. First, the brief review. Matt Mason’s book is a quick read that offers glib patter (e.g., "DJ Fezzy is getting ready for his set. It’s a cold, dark Christmas Eve in his studio, and the time is coming up to 9:00 p.m. Fezzy has come prepared for a crazy-hot show, packing an arsenal of scripted material, instruments, and records, set to deliver a sonic blast of talk radio and live music. Then he’ll throw down on the wheels of steel," p. 39, referencing the first radio show broadcast in 1906 by Reginald Fessenden), lots of annoying internal hype (e.g., "That is… perhaps the most important economic and cultural question of the twenty-first century," p. 4; “The game has changed,” p. 236), a bunch of fascinating anecdotal examples of “piracy,” and a game theory-inspired model for contemporary business, sans analysis or conclusion. The anecdotes and the initial definition of the dilemma (compete with or try to suppress piracy?) are the book’s strength, and worthy of a couple hours’ browse. The book’s weaknesses preclude the need for a cover-to-cover read, however, given that much of the text is given over to filibustering platitudes and inconsistent (and therefore largely meaningless) application of the concepts “piracy” (used here to cover a gamut ranging from any crime that can be construed as social protest to any unregulated activity that has market potential, such as the first broadcasts that emerged with the discovery of radio transmission), “punk capitalism” (which ranges from idealistic kids working for love rather than money to do-it-yourself entrepreneurship), and “hip-hop culture” (the vaguest term of all, which Mason applies to everything from “youth culture” or “youth movements” as a whole dating back to the mid-to-late ‘80s to anything involving the combination of pre-existing elements that Mason likes to call remixing irrespective of context, as he uses it freely to reference collage, architectural influence/homage, and music sampling, and extrapolates it as well to grade school papers derived from traditional secondary source material).

Let's forget about Mason's book now and deal simply with its eponymous dilemma -- whether it is more effective to defeat pirates indirectly by competition or directly by force. In that vein, I recently attended the first in an anticipated series of symposia whose overarching theme is, "Creative Industries in Transition." On this day, the topic was "The Continuing Vitality of Music Performance Rights Organizations," featuring a talk by UC-Berkeley Law Professor Robert Merges (hosted by rights organization BMI), the big take-home being (surprise!) that so-called music PROs like ASCAP, BMI, and SESAC are needed now more than ever to serve as clearing-houses and collective bargainers for rights holders because through economies of scale, they help minimize transaction costs.

All of this I think begs the question: what is copyright for? To channel Lawrence Lessig for a moment, why do we bother with it? Originally, the idea behind our copyright law was threefold: (1) allow creators to control the way in which their works could be exploited (the concept of 'droit moral,' moral rights) so that, by virtue of this control, society could (2) provide creators a means of making a living, so that (3) society would benefit from a constant influx of new, creative ideas. No control, no money. No money, no (or at least insufficient) new ideas. In other words, bestowing and limiting copyright protection basically came down to incentivizing production and creating a framework for negotiation that enables distribution for the benefit of the public and ideally assures the livelihood of sufficiently popular creators. Understand that for these purposes, we don't care about the guy who sings for his shower-head or the gal who writes for her desk drawer. From a public policy standpoint, if a tree falls in a forest and no one is around to broadcast a sound recording, there is no sound.

Accepting the logic of copyright's original premises as a platform for negotiation between creator (owner) and would-be user (audience/owner) raises a surfeit of interesting questions about the nature of control we impose by law on those who would enjoy creative works (essentially, anyone with an iPod) or allow to be imposed by creators as a contingent requirement of further use or enjoyment (essentially, freedom from theft, distortion, and plagiarism). How much protection is needed to administer and enforce creators' rights to get paid and manage the exploitation of their work? What kind of controls or barriers to access should we allow (technological, legal, etc.), and how do we balance the transaction costs of seeking & granting permission against the societal benefits of free use? Should we allow a distinction between authorship and ownership, and if so, when the values of growing (or circulating) communal wealth and growing (or circulating) communal knowledge are in opposition, which should prevail?

If you buy that copyright was established primarily as an economic regime to lubricate the cogs of creativity (as I do), then I'd argue that the best way of addressing these questions and likewise the piracy 'problem' is by maintaining a close relationship between the actual cost to create, copy, adapt, and/or distribute creative works and the amount of control we give to creators or copyright owners. Your average sculptor, photographer, or designer of drapes needs no incentive to express themselves, only the wherewithal to spend their time doing so and still afford the oatmeal needed to fuel their waking existence. Application of the current copyright protection regime must ape market behavior, and if we want to see the emergence of vastly expensive shows, we must find a way for producers to reap a return on their collective investment. There's no guaranteed return from copyright protection to assure creation: the point of monopoly is less to minimize the monopolist's risk than to give them sole power to manage it, so fair punishment to the fools who brought us "Heaven's Gate."

Now consider the flip-side to the 'cost = control' premise, namely that the cheaper production and distribution are, the fewer controls we should impose. "Cheaper production? Cheaper distribution?" asks the guy typing these thoughts on a workstation for instantaneous upload to a blog and worldwide publication. Welcome to the age of digital democracy, in which the costs of production and distribution are for all intents and purposes universally low. Under this economic analysis, we must relinquish the ideal of allowing copyright owners control of works of authorship in today's digital world. Take movies again, for example, which in their Hollywood blockbuster incarnation are notoriously expensive to produce and distribute. If it doesn't (or needn't) cost much of anything to shoot a decent video and post it on YouTube (or digitally transmit to theaters), we don't need to grant so much as a limited monopoly to assure the producer breaks even. We can move the point of risk assessment back from the point of exploitation (what's the best way to maximize my profit?) to the point of production (how can I best afford to make something right now?).

You see where this is leading. Is (the need for) copyright protection obsolete? Should we keep fiddling under the hood or is it time to send the car to the scrapheap? I'm almost there, but have one more service station to visit -- Ethos.

The foregoing discussion has conveniently ignored the social justice component(s) of copyright policy. In so doing, I do not mean to gainsay the value of an author's moral rights. I think it would be a shame to allow the willy-nilly destruction of a creator's art or reputation simply because the quid pro quo of creation renders control irrelevant. Expedience should not dictate our ethics, but to the extent that enforcement costs resources (time, money, and effort), I do think that practical considerations force us to become more precise about the protections we afford. Even in the context of the droit moral, the digital world challenges the traditional paradigm of copyright control.

Increasingly, we are choosing (or forced) to sacrifice privacy for convenience. Cell phones invite eavesdropping, electronic banking invites identity theft, and social apps make us exhibitionists in a virtual, parasocial community. Data mining and semantic data association facilitate targeted communication (and observation) in a way that threatens even the inherent protection of anonymity. Our world is beginning to resemble a giant terrarium, such that it is becoming impossible for a tree to fall in a forest without being seen by somebody. In such a context, attempts to control (in this case meaning "prevent") exploitations of creative works are futile. If you don't want anyone to read your thoughts (or hear your music, see your drawing, taste your recipe, etc.), you'd best keep them to yourself. Therefore, society must relinquish the concept of control as a moral foundation of copyright. This level of copyright protection is now available only to those who can afford to pay for enforcement, and is not viable in any case.

Isn’t that where we've come to with orphan works? "Good users" who purport to be public stewards of knowledge and ideas (museums, documentarians) have been hamstrung by copyright protection in cases where the legitimate owner of a work could not be readily identified. Authorship/ownership is left ambiguous for many works (fonts, field recordings, collective efforts), perversely chilling exploitation even by users who would be only too happy to pay a reasonable usage fee or whose usage might otherwise be encouraged and permitted gratis. A recent DC Bar panel at Arent Fox called, "Will Orphan Works Finally Find a Home?" established that new laws resolving this issue are imminent. You can get into the nitty-gritty of this issue (as well as the nuts-and-bolts of recently passed legislation) here.

As you can see, each special interest group and pending bill articulates the details of their orphan works solution slightly differently, but the commonalities are these. If you make a good-faith attempt to identify and notify the legitimate owner of a work ahead of time and come up blank, you're free to make use of the work however you like at no cost. If and when the legitimate owner emerges, you either negotiate a reasonable use fee or stop using the work. Of course, it's not so simple, since the greater your investment in use, the greater the leverage of the revealed owner. For this reason, each proposed legislative solution tries to find a way of defining "reasonable compensation" or a transactional mechanism for establishing one. At the panel I attended, representatives of the Copyright Office recommended against imposing a compulsory royalty scheme such as the one that exists for digital rights in sound recordings and music publishing, claiming that the (transaction) cost of the bureaucracy needed for oversight and enforcement was too clunky and expensive. Still, a formal orphan works resolution is imminent, even if the business model takes a while to fine tune.
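
For the procedurally minded, the skeleton common to these proposals can be sketched as a trivial decision procedure. This is a paraphrase of the shared outline above, not any bill's actual text, and the function and parameter names are my own invention:

```python
# A minimal sketch of the orphan-works procedure common to the pending
# proposals, as summarized above. Names and steps are illustrative only,
# not drawn from any actual legislative text.

def orphan_work_use(diligent_search_found_owner: bool,
                    owner_later_emerges: bool,
                    agrees_to_reasonable_fee: bool) -> str:
    if diligent_search_found_owner:
        return "negotiate a license up front"
    # A good-faith search came up blank: use proceeds at no cost...
    if not owner_later_emerges:
        return "continue use, gratis"
    # ...until the legitimate owner surfaces, at which point:
    if agrees_to_reasonable_fee:
        return "pay reasonable compensation and continue use"
    return "cease use"
```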

Ugh! Those pernicious transaction costs, the friction that impedes the free exchange of ideas and trade! Well, wait a minute, let's recast this in light of what we know about our digital universe. Taking time to identify the legitimate owner has a cost -- if nothing else, then as a judgment call. Negotiating with the owner costs at least the value of time. Enforcing one's entitlements in the courts has an absolute cost, but that's arguably the penalty of living outside the Badlands. It seems to me that what we have in orphan works is a system whereby we say to good actors, "Go ahead and use whatever you like however you please until you get caught. Then pay for it." If we agree that some kind of regime is necessary to regulate the payment part of things, why apply this only to orphan works? To my mind, we're still trying to fix a copyright protection scheme that our would-be frictionless digital environment has rendered irretrievably broken.

WHO-OAH, THERE'S A SOLUTION...

(Thank you, Steve Miller.)

In keeping with my "Digital Needle" take-homes:
Here, then, are a few things that museums should do to assure the continued purity and vitality of the marketplace of ideas in an increasingly-polluted digital world:

1. Be an authority:
  • seek out authors and remain vigilant about properly attributing all sources;
  • keep primary source material alive and digital so that it can be referenced;
  • build semantic widgets to accurately and efficiently tag their "good stuff;"
2. Be a filter:
  • dedicate resources to portal activity to identify others' "good stuff;" and
3. Be a good citizen:
  • participate in discussions to create statutory royalty reservoirs
I think the foundation of copyright law needs to change from its obsolete "control" paradigm to a purely moral foundation of attribution and transparency. The pirate's dilemma as defined by Mason disappears when pirates are legally recognized as legitimate entrepreneurs as opposed to parasites. Acknowledging a situation that already exists, once a creator makes a work, it's "out there," and has to be considered fair game for anyone and everyone to exploit. In the digital environment, if we want to continue to incentivize creativity, then I think the way to do so is to be sure that creators receive the credit they are due. By recognizing legitimate authorship through enforced attribution, we allow the public to directly engage and support creators while at the same time protecting them from later distortions for which they are not responsible. By requiring transparency of authorship we assure the provenance of creative works that is so critical to the preservation of their communicative value. For those exploiting the works of others for fun/profit (by re-publication, broadcast, or other distribution; by sampling or adaptation; by display or performance; etc.), mandating transparency of cost/compensation allows the public to distinguish among those exploitations that they feel are fair in an otherwise crowded forum.

In one sweeping move, we eliminate piracy and the concept of the public domain. All uses are legitimate, all uses are fair, and copyright protection subsists for as long as a creator's estate is on hand to stand up for the right to be counted. Maximizing income from exploitation is a business problem, not a creative one, and the marketplace should be allowed to take care of itself (albeit, as I think will happen, on the backs of the orphan works' compensatory solution, more on which below). As the Lynn Ahrens song says, we impose law to "establish justice, insure domestic tranquility, provide for the common defense, promote the general welfare, and secure the blessings of liberty to ourselves and our posterity," and I think these are best accomplished by restricting copyright protection to promoting the ethical values of fair attribution and fair competition (in the form of mandating a transparent marketplace).

Who pays for all this free content? As Trey Parker and Matt Stone so eloquently put it in "Team America," freedom isn't free (although as they would have it, it costs $1.05, which is right now higher than the cost of the average download). It's easy to be dismissively glib here, but it has been pointed out to me by my intellectual superiors that economic incentive is a founding problem in the recognition and promotion of intellectual property. Absent a legal framework (control) to force negotiation between creators and subsequent users, the only way to assure compensation for creators is via the establishment of a compulsory licensing scheme. This, in turn, opens the door to endless fighting over rates, constant lobbying for re-regulation or legislative changes that take forever to implement, and incessant fiddling in the market and on Capitol Hill to set rates at levels that protect those with the least power or the greatest volume of exploitation, all of which will inevitably reshape business structures if people are still to enjoy the production of movies, operas, and other complex or high-investment creative undertakings. Got that?

Not to repeat myself, but isn't that what we're coming to with orphan works? If we're setting up a scheme to regulate fair compensation to re-discovered copyright owners for otherwise grandfathered (meaning uncontrolled) exploitations, and we're groping toward a mechanism which nonetheless favors exploitation (that is, remains affordable), then economies of scale would favor extension of this model across the full spectrum of creative endeavor, from architecture to zither recordings. Understand that as I conveniently gloss over the issue of monetization, I do not mean to imply that I regard this as a trivial problem. Establishing the playing field that will allow for a reasonable quid-pro-quo structure is arguably the lynchpin holding together the existing copyright structure. However, I do think these issues are resolvable and that the time is ripe to take them on. Heck, I think it's essential we do. If this blog could act as a call to arms in service of initiating this dialogue, so much the better. You'll excuse me for suggesting that the parameters yielding a new, practical, and fair payment regime for creative works will likely require much lengthier consideration, analysis, and surely heated debate than can be afforded by this short essay.

This note began with Matt Mason, and so it seems fitting to end there as well. As he writes on p. 159 of The Pirate's Dilemma:
"This new democracy looks a lot like the model used by the music business in China. A total of 95 percent of all CDs sold there are pirate copies. This is because there are such tight restrictions on the legiti¬mate sale of foreign media, and also because in Chinese society, the idea of paying for downloading music is, by and large, considered ridiculous. Recorded music is effectively a public good, free at the point of consumption. Yet a large middle class of artists make a living there, primarily from live performances. As columnist Kevin Maney wrote for USA Today, “Chinese rock stars aren’t getting as wealthy as, say, Michael Jackson, but . . . why should they? Only a relatively few American rockers ever sell enough CDs to get fabulously rich. Should society care if rockers can’t afford to build their own backyard amusement parks?”
I say no, but society should care if rockers can't stand up and demand recognition for rightful authorship, and if commercial exploiters can hide or camouflage the means by which they exact ROI from their investment. Free speech and sunshine are the cornerstones of a strong democracy. They lead to an informed citizenry, or at least to a cacophony of voices that vent the public boiler continuously enough to keep it from exploding (or if not, to give we-the-people sufficient warning signs to hopefully proactively, positively intervene before an explosion can take place). And while ideas and expression are freed from artificial constraints, popular creators can still commoditize themselves by selling access to their performances, appearances, participation in new projects, and commissioning of new works.

The digital revolution invalidates the traditional copyright paradigm, but presents a tremendous opportunity for social progress. Embracing change rather than fighting it is the best (or at worst, least disruptive) way to move forward. We must therefore retool the law to accommodate works whose circulation and evolution cannot practically be controlled and which it is legal folly to persist in trying to prevent.

Talk amongst yourselves.

Thursday, April 3, 2008

A Digital Needle in the Haystack: Finding the Good Stuff Online

"Wikipedia does not publish original research (OR) or original thought. This includes unpublished facts, arguments, speculation, and ideas; and any unpublished analysis or synthesis of published material that serves to advance a position…. Citing sources and avoiding original research are inextricably linked: to demonstrate that you are not presenting original research, you must cite reliable sources that provide information directly related to the topic of the article, and that directly support the information as it is presented."
-- (From Wikipedia's NOR article)

Wikipedia, an authority considered by a study published in Nature to be as (or more) accurate than the Encyclopedia Britannica, and one which now dwarfs it in volume of entries, aims to be a repository for established (if not common) knowledge. As the above quote indicates, Wikipedia's chief weapon in this pursuit is reliance on citations to reliable sources. But in these heady digital days in which original ideas are promulgated at light-speed alongside substandard copies and half-baked iterations (guilty as charged?), how do users identify those sources that are reliable? Taken in a museum context, this question could invite a book's worth of consideration, but for the sake of this blog entry I'll just aim for a thumbnail sketch and touch on specific concerns of plagiarism, of museum authority, and of orphan works.


An Information Theory Approach to Plagiarism

To judge from the complaints I've heard, teachers, professors, and editors are being driven to distraction now more than ever by a generation which does not seem to understand the importance of properly attributing its source material. When not dealing with ethically-challenged sloths who prefer to submit third party-drafted essays as their own homework, the watchdogs of the new recognize a more insidious copy-and-paste dilemma fomented by the internet, one in which paraphrasing and proper sourcing are increasingly (and nonetheless erroneously) viewed as passé. However, plagiarism represents a bigger threat to scholarship than simple laziness or academic fraud would make it appear.

My reader(s) presumably will accept my argument that proper attribution is, like provenance, crucial to the credibility of content (to say nothing of the underlying author's ego and pocketbook, which surely are entitled to at least minor limning as a means of encouraging/making possible future contributions to the marketplace of ideas). However, the further down the road we get with digital publishing, the fewer the obstacles that impede the proliferation of unintended plagiarism. Left unchecked, popularity and ease-of-indexing become more the arbiters of influence and ready identification than do originality and authenticity. What makes this so insidious is that as the amount of content on the internet increases exponentially, so does the ratio of noise to signal. Citations to identical articles, or to excerpts and paraphrases of such articles erroneously attributed to different authors, will only grow more prevalent in the infinite plane of hypertext. Therefore, museums and academic publishers must not only remain vigilant about properly attributing original authorship, but develop, identify, and take advantage of new, user-friendly means of assigning accurate credit.
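
To make "new, user-friendly means of assigning accurate credit" a little more concrete, here is a minimal sketch of one well-worn technique, document fingerprinting by hashed word shingles, that a museum publisher could use to flag unattributed near-copies. The function names, shingle size, and threshold are illustrative assumptions of mine, not a description of any existing museum system:

```python
# A sketch of near-duplicate detection via hashed n-word "shingles":
# two texts that share most of their shingles are likely derivative.

import hashlib

def shingles(text: str, n: int = 5) -> set[str]:
    """Return the set of hashed n-word shingles for a text."""
    words = text.lower().split()
    return {
        hashlib.md5(" ".join(words[i:i + n]).encode()).hexdigest()
        for i in range(max(len(words) - n + 1, 1))
    }

def resemblance(a: str, b: str) -> float:
    """Jaccard similarity of two texts' shingle sets (0.0 to 1.0)."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / max(len(sa | sb), 1)

original = "The banjo arrived in America with enslaved Africans ..."
suspect  = "The banjo arrived in America with enslaved Africans and ..."
if resemblance(original, suspect) > 0.8:  # threshold is a judgment call
    print("Likely derivative; check attribution.")
```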

Thanks to Claude Shannon, the father of information theory, we have a means of comparatively quantifying the new, and can therefore deal with the paradox of intellectual relativism posed by the Universal Library -- that fictional infinite repository which contains every volume from A to ZZZZZ… including not only the complete works of Shakespeare (and translations to as-yet-uninvented languages and every binary-encoded video incarnation of performances of these), but somewhat less helpfully the complete works of Shakespeare less the second-to-last lowercase letter 'r.' Pull a "video" off the Universal Library's shelf and you are astronomically more likely to see snow than anything that passes for a performance of Macbeth. (For all you lay science readers out there, I highly recommend William Poundstone's books, "The Recursive Universe: Cosmic Complexity and the Limits of Scientific Knowledge," "Labyrinths of Reason: Paradox, Puzzles, and the Frailty of Knowledge," and especially, "Fortune's Formula: The Untold Story of the Scientific Betting System That Beat the Casinos and Wall Street," which provides more insight on Shannon specifically.) Further, most source material lacks the authorial power of celebrity that Shakespeare's works enjoy. Apart even from issues of accurate attribution, for the ideas subsumed in these original works to retain their resonance and value, there must be a way to distinguish inaccurate copies beyond authorial brand recognition.
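
For the curious, the Shannon-flavored intuition above can be demonstrated in a few lines: meaningful text is statistically redundant, so its measured entropy per character sits well below that of random "snow." This is a toy illustration with placeholder strings, not a test of any real corpus:

```python
# Empirical per-character entropy: redundant (meaningful) text scores
# far lower than uniformly random characters.

import math
import random
import string
from collections import Counter

def entropy_per_char(text: str) -> float:
    """Empirical entropy in bits per character."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

macbeth = "tomorrow and tomorrow and tomorrow creeps in this petty pace" * 20
snow = "".join(random.choices(string.printable, k=len(macbeth)))

print(f"Shakespeare-ish: {entropy_per_char(macbeth):.2f} bits/char")
print(f"Random snow:     {entropy_per_char(snow):.2f} bits/char")
```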


Thanks, But I Prefer Museum-Brand Filters

To combat this ever-rising tide of online ignorance, I think museums must serve at least two functions. First, they must establish themselves as a filter, an online brand that represents the "accurate," "well-researched," and true. Above and beyond accuracy, a museum should never publish or republish any content for which it cannot verify provenance and authorship. More than this, and in lieu of presenting themselves as an exclusive vehicle for "the good stuff," museums should dedicate at least part of their online outreach efforts to portal activity by linking to or otherwise calling attention to this "good stuff." As Jim points out in his post on federated authentication, there's a movement afoot to share or network login approvals among the respective staff of museums and cultural organizations (much the way that banks' respective ATMs acknowledge their customers' respective cards and account information). Where A trusts B and B trusts C, so should A be able to trust C. The public should remain confident that what it finds on or via museum websites will lead quickly and easily back to original and respected sources.

Second, museums must act as an authority on what people should take seriously with their explicit content (exhibits, articles, and research) and especially their implicit content (metadata, taxonomical standards, and search tools). This is something that makes museum involvement in the theoretical semantic web so important. Museums as much as other authoritative content providers owe it to their public to lead them to what they regard as "relevant" and "right." But can the data itself help users reach such conclusions?

Consider the attitude of a lay user with an interest in banjo music. At this moment, a search for "banjo" on Smithsonian Global Sound yields 316 pages and 3158 results. That's certainly better than starting the same search on Google, which produces 232,000 results for "banjo bluegrass" and 628,000 results for "banjo blues" (all results presumably dated as of the publication of this blog posting), but still off-putting to someone who just wants quick access to the "good stuff." What to make of any of this? Curated mediation by initial article/item selection (i.e., that which has been included in the SGS database), cross-referencing, context, and related articles will sometimes point confused users in the direction of "favorites" and icons of virtuosity. However, it's beginning to look as though the semantic web offers the possibility that a straightforward set of algorithms can sort this overwhelming offering of material by "relevance" and "rightness" -- for example by telling users which results are both most distinctive (using unique attributes as a measure of originality) and most frequently referenced (as a synonym for determining influence on later work). In this semantic utopia, users should then be able to follow the trail of influence from an original "root" of authorship (say, such forebears as existed in 18th and 19th century broadsheet ballads or slave songs) to its further branches of influence (say, Brownie McGhee, Pete Seeger, and Bruce Springsteen). My personal ignorance of "true" banjo-based blues progenitors notwithstanding, my point is that the data here may be seen as containing the DNA of its own provenance.
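
As a thought experiment only, the "most distinctive and most frequently referenced" sorting imagined above might look something like this. The data model, the rarity-based distinctiveness measure, and the equal weighting are all assumptions of mine, not a real SGS or semantic-web algorithm:

```python
# A toy ranking: score each catalog item by the rarity of its attributes
# across the corpus (distinctiveness) plus how often later works cite it
# (influence), then sort descending.

from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Item:
    title: str
    attributes: set[str]                              # e.g., {"gourd banjo"}
    cited_by: set[str] = field(default_factory=set)   # titles of later works

def rank(items: list[Item]) -> list[tuple[str, float]]:
    freq = Counter(a for it in items for a in it.attributes)
    def score(it: Item) -> float:
        distinctiveness = sum(1 / freq[a] for a in it.attributes)
        influence = len(it.cited_by)
        return distinctiveness + influence    # equal weighting: an assumption
    return sorted(((it.title, score(it)) for it in items),
                  key=lambda t: t[1], reverse=True)

corpus = [
    Item("Broadsheet ballad", {"gourd banjo", "call-and-response"},
         cited_by={"Seeger set", "McGhee sides"}),
    Item("Seeger set", {"five-string", "long-neck"}),
]
print(rank(corpus))   # the oft-cited "root" work scores highest
```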


The Fallacy of the Orphan Works Dilemma

For academic and edu-cultural organizations to fulfill their proper role as both filter and authority, they must be able to act on content (meaning exploit and further promulgate as a component of research and diffusion) whose authorship and/or ownership may be a bit on the cloudy side. In some cases, this may be considered usage that exceeds "fair use" under the Copyright Act. Certainly, given that copyright law leaves the ultimate definition of "fair use" to the courts on a case-by-case basis, every museum use inherently involves some exposure to claims of infringement, and this often has an impact in determining which images are included in exhibition catalogs, books, and websites, and like decisions that straddle the traditional worlds of commerce (chiefly publishing) and education, reportage, and critical commentary (traditionally considered well within the boundaries of copyright's "fair use" defense). Museum staffers enjoy opportunities to engage authors/artists in discussions about the potential use of their work (perhaps less so authors'/artists' estates), but spend lots of frustrating time stymied by so-called "orphan works." (I have a colleague in legal who has had to hold up museum use of a bunch of sound recordings for over a year while chasing down sound recording ownership issues.)

According to the US Copyright Office, “orphan works” are those works within the term of copyright protection whose owner(s) cannot be identified and located. Note that “orphans” are not public domain; they remain fully protected, which is precisely what makes their exploitation so fraught. (See this white paper, published in 2006.) Thanks to the European Union and the Sonny Bono copyright extension law, the term of the majority of works under copyright now lasts for the life of the author plus 70 years. This isn’t the place to go into details over the intricacies of copyright law, but suffice it to say that considering the statutory duration exceeds that of any single human life, we can make a few simple assumptions. First, this is a heck of a long time, as by definition the term of copyright protection in any work will outlive its author (a work by an author who died in 1950, for example, remains protected through 2020). Therefore, it follows that at some point in a work’s copyright life, a would-be licensee will have to deal with the author’s estate, if such there be (the author being dead and gone). Further, if such readily identifiable estates there be not (and things can get pretty murky some 50 years after anyone’s death, notwithstanding the presence of estate lawyers), would-be licensees may well be considering use of an “orphan.”

This issue has been explored in better fora than this (in an April 24, 2007 DC Bar panel program, for example), but in a nutshell the debate centers on how we can assure that lawful copyright owners receive the compensation and protection to which the law entitles them without unnecessarily removing a large volume of relatively contemporary work from circulation just because a lawful owner has yet to be identified. Let’s remember as well that orphan works include not just those “abandoned” by an artist’s death, but those whose initial attribution may not have been well-identified to begin with (stolen or grey-market wallpaper designs, papers authored by a collective long since disbanded, traditional “folk” art, sound recordings of naïve performers, etc.). Though I summarize the problem in breezy fashion, so-called “orphan works” present a potentially serious dilemma for cultural organizations inasmuch as they set in opposition two mainstays of museum credibility regarding cultural and historical materials: use/publication/distribution and sensitive treatment. The flimsy solution floated to solve the problem requires would-be users to exercise due diligence before considering a work to be “orphaned,” and upon notification by a legitimate owner to promptly cease use or else pay up for continued use. It’s the niggling details of what levels of effort should constitute “due diligence” and be sufficient to recant (or pay for) the sin of use that make the solution a rather flimsy one.

Why bother with a solution at all? Perhaps copyright use prohibitions ought to be struck in favor of a new regime that promotes clear attribution of original authorship while establishing statutory licensing fees across the board (as is already the case for cover artists re-recording yesterday’s new releases). Setting principle aside, the digital environment is not one which lends itself to authorial control. As I pointed out above in my observations about online plagiarism, the creator(s) foolish enough to publish a work today will see it self-replicate, mutate, and disseminate the moment a binary-source facsimile is produced. The only way to keep the virtual cat in the bag is for the cat not to exist at all, and I think most creators would find that somewhat self-defeating.

If the world wide web renders copyright enforcement difficult, if not impossible, perhaps the presence of a uniform, published billing structure could increase the likelihood of authors receiving compensation while assuring authorial recognition. Would "open-sourcing" works chill distribution and minimize compensation by depriving authors/owners of the commercial benefits afforded by monopolistic control? It’s doubtful. The success of online micropayment vehicles like iTunes and PayPal pretty clearly demonstrates that enough people prefer to pay for affordable, desirable content to allow for a valid business model. “Open-sourcing” works works. The argument that authors should not be required to relinquish commercial control of their work simply because the internet makes it easier to co-opt or copy works is, I think, a moot one. Reality is an amoral (as opposed to immoral) place; we must adapt our social structures to deal with what life throws at us.

Viewed from the Wikipedia perspective rather than that of current international law, the orphan works problem is misstated. The more the marketplace of ideas fills with noise, the more critical it becomes for us to be able to identify a good signal. It is therefore far more important that original works of authorship be recognizable and reliably recognized. Yes, authors of all stripes should be able to be fairly compensated (and therefore hopefully incentivized) for their creative production. We continue to need innovative, low transaction cost mechanisms for collecting and distributing money (and for fair enforcement of same). However, the focus on orphan works should prioritize the need for accurate source attribution, something which, as stated above, must be considered central to the museum’s “brand.” In an age of mass information consumption, it is imperative that the contents of our firehose not be filled with empty calories.


Endpaper - The Talking Points

Here, then, are a few things that museums should do to assure the continued purity and vitality of the marketplace of ideas in an increasingly-polluted digital world:

1. Be an authority:

  • seek out authors and remain vigilant about properly attributing all sources;
  • keep primary source material alive and digital so that it can be referenced;
  • build semantic widgets to accurately and efficiently tag their "good stuff;"*
2. Be a filter:
  • dedicate resources to portal activity to identify others' "good stuff;" and
3. Be a good citizen:
  • participate in discussions to create statutory royalty reservoirs

* [The Powerhouse Museum, a lead participant in the www.steve.museum project, may be among the first to take aggressive advantage of this, see this article.]

Tuesday, April 1, 2008

Virtual Worlds - Real Experiences

There's a lot of hype about virtual worlds. Everyone is excited about SecondLife, Whyville and other virtual environments. Many organizations are rushing in to mount a virtual museum in one of these simulated environments. But why are we doing it? How can virtual environments provide real learning experiences?

This blog posting is intended to get you thinking about the vast potential of virtual environments. So let's look at an example.

Art Conservation Training

As most of you know, hands-on training is the best way to learn. There is nothing like direct observation and interaction with objects. Virtual environments provide some of these same advantages. Let's discuss a scenario where a museum might want to teach museum visitors about conservation and restoration.

The museum might set up a gallery in a virtual environment. The gallery might be stunning in design. Vaulted ceilings. Windows that provide natural lighting. Perhaps even a small fountain above a koi pond. But a closer look can show problems that threaten the irreplaceable objects in the collection.

A painting might be poorly situated so that the sun in the afternoon falls upon it. A collection of wood and hide drums may be placed near the fountain. One of the drums may show signs of biological damage caused by humidity.

Visitors and students could be asked to walk through the gallery and look for problems. Once a problem is found, the visitor could click on it to get more in-depth information. A museum educator could be present so that visitors and students can ask questions. Students could team up to identify problems. Finally, everyone could gather together at the end of the session and a museum curator could join the group and give a short talk on the museum's conservation and restoration efforts.

What makes this a useful experience? How could your organization use a virtual environment to teach? Why might this kind of experience be especially useful when instructing children?

Please use the comment feature on this blog to post your comments and responses. I'll follow up in a few days with some of my own answers.

Friday, March 28, 2008

Federated Authentication: Creating Identity over the Web

I'm fairly certain that most of you have received an email that piqued your interest about a new Web service or application. It might be something that your bank is offering or perhaps a way to track your physical fitness routine online. So you visit the site. And what's the first thing you're asked to do?

Create a unique user name and password.

Personally, I need a pretty strong incentive to do this. I'm already managing this kind of information for scores of different Web sites and services. Why must I do it yet again?

Well, you shouldn't have to. And times are changing fast. Very fast. And museums need to be aware of what's happening and prepare. Today.

If you're as old as I am, you remember when your bank made its first ATM available. You could withdraw cash from a machine placed outside the bank! Within a very short time, you found that you could withdraw cash from an ATM at any of your bank's branches. And before long, banks were establishing trusted networks so that you could withdraw cash from nearly any bank worldwide. This is what is happening on the Web. Organizations are getting together, working out standards and establishing trusted relationships.

Before too long, you will have a single identity on the Web and will be recognized no matter what service you access.

The National Institutes of Health (NIH) is leading one such effort. Last fall, they held a "Federated Authentication" Town Hall meeting. Federated authentication allows staff to collaborate with colleagues from diverse universities and organizations across the world.

In simple terms, it works like this. John Doe from the NIH may wish to collaborate with Mary Buck at the Centers for Disease Control and Prevention (CDC). The NIH and the CDC have both joined a "trusted" network. That is, the NIH and the CDC have each agreed to trust the other organization to authenticate its own staff. This means that if Mary wants to access resources on John's network, John simply needs to let his network know which resources Mary is being given permission to access. When Mary tries to access those resources, John's network asks Mary's network to authenticate her. Once authenticated, John's network then opens up access to the permitted resources.

The key point is that Mary doesn't need a separate userid and password to access John's network. She uses her CDC credentials. The CDC authenticates her and the NIH gives her access to permitted resources.
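
For readers who like to see the moving parts, here is a minimal sketch of that flow, with the CDC as Mary's home identity provider and the NIH as the resource host. The classes and method names are invented for illustration; production systems delegate this to standards such as SAML rather than hand-rolled code:

```python
# Toy model of federated authentication: the service provider defers
# authentication to the user's home organization, then checks its own
# permission grants.

class IdentityProvider:
    def __init__(self, name, staff):
        self.name = name
        self._staff = staff                      # username -> password

    def authenticate(self, user, password):
        return self._staff.get(user) == password

class ServiceProvider:
    def __init__(self, name):
        self.name = name
        self.trusted = {}                        # org name -> IdentityProvider
        self.permissions = {}                    # (org, user) -> resources

    def federate(self, idp):                     # join the "trusted" network
        self.trusted[idp.name] = idp

    def grant(self, org, user, resources):       # John authorizes Mary
        self.permissions[(org, user)] = set(resources)

    def access(self, org, user, password, resource):
        idp = self.trusted.get(org)
        # Defer authentication to the user's home organization.
        if idp and idp.authenticate(user, password):
            return resource in self.permissions.get((org, user), set())
        return False

cdc = IdentityProvider("CDC", {"mbuck": "s3cret"})
nih = ServiceProvider("NIH")
nih.federate(cdc)
nih.grant("CDC", "mbuck", ["genome-db"])
print(nih.access("CDC", "mbuck", "s3cret", "genome-db"))  # True
```

Note that Mary's password never has to be known to, or stored by, the NIH; that separation is the whole point of the federation.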

This example illustrates the start of a very, very important trend.

Someday, you will always be connected to the resources that help you do your job and lead your life. Your mobile phone will likely be the device that facilitates this initially. You will need to move seamlessly from one network to another. You'll need access to the varied resources that exist on different networks. Each of these networks and resources will know who you are and what your preferences are. You'll have access to the tools you need to do your job and you'll have access to information that helps you manage your life. Looking for a restaurant in a strange city? The local networks will recognize you and your preferences. They will know where you're located and alert you to the location of your favorite eateries as well as a little cafe that your mother ate at last month...

Facilitating collaboration between research staff at two different museums or with a university is the start. For more information on one approach, please visit the InCommon Federation. If you're aware of similar collaborations in the museum community, please post a comment here!

Wednesday, March 26, 2008

Is there a divide between Web 2.0 and Web 3.0?

Fair warning... this is a straw man topic. I began by making the overly-haughty provocation on a web 3.0 (I guess?) enabled space, "Before we, the self-appointed 'committee of the whole,' get too unwieldy in scope and participants to actually communally-generate useful, shared tools, can we establish a representative subcommittee who can recommend initiatives back to the group at large and potentially commit the resources of our respective institutions to a central pool?" In response, I was asked how I would recommend bridging the Web 2.0 and Web 3.0 communities. Implicit in this question, I think, was the assumption that the Web 3.0 community was a more appropriate sphere for cultural organizations (the "authorities") than Web 2.0 (everybody else, including authorities in mufti).

So, to attack the straw man. If Web 2.0 is defined as social/wiki websites on which users participate in generating site content, and Web 3.0 as the semantic web, an interlinking of reliable, bot-filterable metatags (with their respective communities being the authors/participants in said content generation), I'm not sure there's really a divide. To the extent that the semantic web is reliant on reliable, consistent metadata and shared taxonomies (I dislike the term 'ontology,' which seems like a malapropism to me), it would seem that Web 2.0 is the place where these will begin to be built and continually tested. Having read Cory Doctorow's 2001 "Metacrap" article today, I think I have a more immediate understanding of the potential pitfalls of unguided folksonomy and unmoderated wiki (or vice-versa), but I likewise think that the types of techniques Luis Von Ahn has proposed and continues to propose are one way around them [go here for more on Luis]. Furthermore, top-down metadata development, like adherence to Latin naming conventions, Dublin Core, and the kind of metadata entry required by SIRIS and the Library of Congress, which are already of use to the semantic web-building of academia (and by extension, lay researchers with "serious" purpose accessing collections), is likewise promoted through shared workspaces such as this and social networking environments (such as this, for those coming to this blog via my Facebook page).
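
For concreteness, here is a sketch of the kind of top-down metadata record referred to above, using a handful of genuine Dublin Core element names; the record's content and identifier are invented placeholders:

```python
# A hypothetical collection record using real Dublin Core element names
# (dc:title, dc:creator, etc.). The values are made up for illustration.

record = {
    "dc:title": "Gourd banjo, mid-19th century",
    "dc:creator": "Unknown maker",
    "dc:subject": ["banjo", "folk instruments", "African American music"],
    "dc:date": "ca. 1850",
    "dc:identifier": "example.museum/objects/0001",   # placeholder URI
    "dc:rights": "Copyright status undetermined (possible orphan work)",
}

# Consistent element names like these are what let semantic-web crawlers
# aggregate records across institutions without human mediation.
for element, value in record.items():
    print(f"{element}: {value}")
```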

We absolutely need fora for communities to gather and spitball/play out ideas. But that still leaves action to chance. It would be nice for the community to empower a subgroup, responsive and accountable to the community at large, that is capable of action. The empowering resources required are money (whether tithed or derived from third-party funding sources), server/storage access (the workshop for development and testing), and time (for example, via commitment of relevantly skilled staff to patronized projects). I'm assuming that the online community can serve as its own extranet regarding reporting requirements, publication (of white papers and/or code), and distribution.

Wuxtry! Wuxtry! Step right up and donate your money to build the semantic web!

I was coincidentally invited by Mark Baltzegar to join a social networking group called "Museum Futures" on the Twine beta site (www.twine.com) at the same time that Jim so generously offered me a forum here. On Twine, a would-be angel posted some provocative questions. Speaking as one who has worked in museums for a while (without speaking in an official capacity), I've tried to address these questions informally. I'm a lawyer, not a programmer, so I invite anyone to format and add to my response(s) as they see fit.

One of my most pressing concerns right now, as a funder, is in understanding better the technology capacity of the museum community. What skills are embedded in the museums themselves, v. their vendors?

My experience is that this varies from museum to museum, even within a larger complex like that of the Smithsonian Institution. Museums, like the communities of nonprofits and especially those of arts/cultural organizations which they inhabit, are chronically underfunded. I've seen different strategies adopted for dealing with this. Everyone needs a base infrastructure, whether begged, borrowed, or bought, and I've seen acquisition of enterprise-level CMS, DAM, e-mail, and ERP apps for IT staff to maintain and parcel out to the boots on the ground (and, more often, workaround apps which lack the same capacity to scale but which are easier/cheaper to use and support, until the independent, redundant silos overwhelm them).

For activity which has direct impact on research or public education on the web, I've seen two approaches. Some hire significant staff (with varying levels of HTML, Flash, KML, and other design and programming skills) in hopes of internally managing Museum work. This represents a significant and continuous commitment to web development, but has a few significant potential drawbacks: (1) it restricts concurrent development of multiple projects to the bandwidth of available human resources; (2) it limits institutional knowledge and capacity to the capabilities of those willing to accept a $30-60K annual paycheck as supervised by those willing to accept $90K or less; and (3) it is at the mercy of often frequent attrition. Others throw money at ad hoc development projects as they succeed in soliciting funding. Given the golden rule ("S/he who has the gold…"), this likewise presents significant drawbacks, some of which include: (1) it can skew a museum's agenda by prioritizing development arbitrarily on the basis of that which can be successfully pitched or which otherwise captures donors' (or development officers') interests, instead of as dictated by institutional mission or strategic goals; (2) it often limits the control and scope of development to the budget and cash flow secured for the project; and (3) it risks kudzu-like deployment of incomplete apps and functions in an environment where information-sharing throughout the museum is frequently less than perfect (ever try to harmonize the intent of education staff focused on primary-school outreach with curator-historian-academics with exhibition developers… not even considering the pressures imposed by administrative entities?).

Worse, from my 30,000-foot view, is the opportunity cost engendered by the ongoing commitment of scarce museum resources on a piecemeal, museum-by-museum basis. Were cultural organizations capable of sharing development, building upon, and improving a core toolset, all would be significantly more productive and better off, leveraging economies of scale not only on the costs of independent development but also saving funds which otherwise would be expended on one-off, discrete online exhibition development. From this vantage point, projects like GMU's Omeka or the IMLS-funded Project Steve (www.omeka.org and www.steve.museum) are extremely welcome and need to be encouraged, promoted, and funded (hint, hint), inasmuch as they carry the potential to prevent museums from dropping between $50,000 and $125,000 on a siloed URL site which, while nice on its own, is ultimately incapable of growth. The irony of this is that in terms of outreach, these costs represent a small fraction of the investment inherent in traditional museum means of communication (physical exhibition development, traveling exhibits, audiovisual productions, and hard copy publication) while carrying the potential to reach much greater audiences. I am a staunch believer that museums get their greatest ROI (in terms of audience outreach) from their investment in new media/internet projects.

How are the skills distributed among institutions, by size, category, or any other salient dimension? Most of our work involves funding collaborations--we seldom fund a single institution to do anything--so I'm particularly interested in collaborative nexi or loci (present or nascent) for museums and technology.

Rather than waiting for a coalition to emerge for funding purposes (look to a temporarily derailed NOAA/NMNH Ocean Portal project as an example of good intentions that appear to have been groupthought -- among other factors -- into hopefully transitory submission), I think you're better off finding an initiative leader with a promising project, even (especially?) if self-nominated. The project funding should be split appropriately (not necessarily evenly) between the amount required for development and the amount required for promotion to and adoption by like organizations. Diffusion theory represents an area of significant academic and empirical study (see Everett M. Rogers "Diffusion of Innovations") as well as getting some recent lay attention (most notably via Malcolm Gladwell's "The Tipping Point"). According to Rogers, large-scale adoption is often dependent upon a leader who can provide a model for successful adoption, offer a reasonable or low barrier to entry, and provide a means for "reinvention" of the product to suit the specific needs of the adopter. This is a primary reason why I'm such an advocate (if nonetheless naïve) of joint tool development with broad application. The emphasis must be on broad, since museums (committees in and of themselves) act inherently modularly.

We're also committed to the sustainability of the projects that we build, so my definition of 'capacity' extends to include what museum leaders understand about designing, creating, and managing technology solutions. I know that most leaders don't understand the bits and bytes--no reason why they should. But do they understand how to plan strategically to maximize the ROI on their tech investments? Do they know how to structure organizational relationships within their institutions to produce the most gain and the least pain? Are they prepared to consider innovative business models around technology, such as "community source" solutions to shared problems, in order to reduce the risk and cost of technology development?

In my painful experience, these assets are extremely hard to come by and, when obtained, become more strained the longer that project development stretches and the less control the sponsor organization has over initial funding. In a vacuum of funds, I have been a strong proponent of "funding" multimedia/software development projects by allowing the commercial developers who were ultimately responsible for production a reasonable opportunity to capitalize commercially on the tools they had developed. Let the museums serve as guinea pigs and form the impetus for the development of these tools, let them provide the model for successful deployment, and let the commercial providers who capitalize the risk reap equivalent reward. However, the longer and more complex the project "funded" in this way, the more frequently the model has failed (I'm being coy here, as I'm not quite ready to name names). The carrot of museums' prestigious imprimatur, incessant wheedling, and -- as a last resort -- threats of default are insufficient to guarantee a successful outcome unless the developer(s) remain committed to bona fide execution of museum intent and a true spirit of partnership. All of which brings us back to the venal version of the golden rule, as expressed above.

However, if it is impossible to give a selected museum messiah the power of the purse, another way of giving museums some 'oomph' to put behind these types of "in-kind" deals would be shared physical possession and ongoing publicity. As a means of possession, consider an agreement requiring that fully annotated code, tested and verified by a mutually acceptable independent party, be placed in escrow or a copy provided to the museum at each major development milestone, with the museum free to divulge this information or make it available to any other developer upon material breach or protracted dispute. For publicity (yay, sunshine!), share everything with an open-source community under a strict GNU license (e.g., the GPL) as it is being built. Invest in ongoing promotion (the cheap version of which is an e-mailed newsletter) to build a coalition of interested participants/adopters ready to alpha-test, beta-test, and ultimately use the product.

Cultural organizations are not about the apps themselves; these are merely a means to the end of promulgating ideas: information of and about the collections, cultures, and the world we live in, and preserving same for future generations. Museums' primary focus will (should?) always be on funding this activity first and foremost (well, after staying afloat, that is). We crave external technology funders with the willingness, resources, and courage to help us develop the apps we're not even aware we need to succeed.

We also must find a way to stimulate meaningful sharing and codevelopment of these apps to save ourselves precious time and still more precious money.

Folksonomies, Taxonomies and User-centered Design

By their nature, blog postings are informal and personal. So why am I going to start this article with a series of definitions? Because, as the writers of the American Declaration of Independence said, "We hold these truths to be self-evident" -- I foresee that from the definitions you will rapidly discern the point I plan to make.

"Taxonomy is the practice and science of classification. Taxonomies are composed of taxonomic units or 'kinds of things' that are arranged frequently in a hierarchical structure..."
Wikipedia, March 26, 2008

"Folksonomy is the practice and method of collaboratively creating and managing tags to annotate and categorize content. In contrast to traditional subject indexing, metadata is not only generated by experts but also by creators and consumers of the content. Usually, freely chosen keywords are used instead of a controlled vocabulary."
Wikipedia, March 26, 2008

"User-centered design is a design philosophy and process in which the needs, wants and limitations of the end user of an interface or document are given extensive attention at each stage of the design process. The chief difference from other interface design philosophies is that user-centered design tries to optimize the user interface around how people can, want, or need to work, rather than forcing the users to change how they work to accommodate the system or function."
Wikipedia, March 26, 2008

If taxonomy has a citadel, then it is surely a museum. The term taxonomy was coined by naturalists in an effort to categorize and organize the Earth's diverse life forms and is now used by museums around the world to bring order and meaning to their collections. So one might expect a little resistance to the idea of folksonomies where visitors essentially create their own informal taxonomies for objects.

User-centered design principles ask that the needs of the ultimate "users" or consumers of a product be accounted for during the design phase. Museums have embraced this philosophy and incorporate user-centered design principles in the creation of exhibits and other interpretive materials. By the same token, museums should embrace folksonomies, or user-tagging, because they organize information in ways that users and visitors understand.

We don't want to give up our controlled vocabularies or formal classification schemes, but if we want to fully engage the visitor, then we should embrace folksonomies and other developments coming out of the social media movement.
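
To make the idea concrete, here is a minimal sketch in Python of a folksonomy living alongside a formal taxonomy. The object IDs, classification strings, and tags are invented examples, not any museum's actual data:

    from collections import defaultdict

    # Curator-assigned controlled vocabulary (the formal taxonomy).
    taxonomy = {
        "obj-001": "Lepidoptera > Nymphalidae > Danaus plexippus",
        "obj-002": "Lepidoptera > Papilionidae > Papilio glaucus",
    }

    # Visitor-assigned tags (the folksonomy), accumulated freely.
    folksonomy = defaultdict(set)

    def tag(object_id, user_tag):
        """Record a visitor's freely chosen keyword for an object."""
        folksonomy[user_tag.lower()].add(object_id)

    def find(user_tag):
        """Retrieve objects by visitor vocabulary, not curator vocabulary."""
        return folksonomy.get(user_tag.lower(), set())

    tag("obj-001", "butterfly")
    tag("obj-001", "orange")
    tag("obj-002", "butterfly")

    print(find("butterfly"))    # both objects: {'obj-001', 'obj-002'}
    print(taxonomy["obj-001"])  # the formal record is untouched

The point of the sketch: the visitor's "butterfly" and the curator's "Danaus plexippus" coexist, each serving its own audience.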

Tuesday, March 25, 2008

Retrieving Exchange Email on Your iPhone

I've had a number of people ask me how I managed to set up my iPhone to send and receive mail from my work account. My work uses Microsoft's Exchange server to manage email. For security reasons, they don't allow folks to send mail unless they use either the official MS Web client or they're inside the firewall. They do, however, allow you to retrieve your email. This is the key that makes it possible to use your iPhone when you don't have official technical support.

You can set up a new account on your iPhone. For the incoming mail server, use your work settings. I prefer "IMAP" rather than "POP" because IMAP will synchronize with your Exchange account and will also let you see any subfolders you may have created.

As for the outgoing mail server, you need to set up a free Gmail account. Once that account is set up, you use Gmail's outgoing ("SMTP") server. Gmail will allow you to set the "reply-to" address to your work address, so to all appearances, when you send email, it comes from your work account.

I haven't included the details here, but this is the basic idea.
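
For the technically inclined, here is a rough sketch of the same pattern using Python's standard library -- read via the work server's IMAP interface, send via Gmail's SMTP server with a work "reply-to" -- assuming IMAP access is enabled on the Exchange server. All server names, addresses, and passwords below are placeholders:

    import imaplib
    import smtplib
    from email.mime.text import MIMEText

    # Incoming: the work Exchange server's IMAP interface (placeholder host).
    imap = imaplib.IMAP4_SSL("mail.example.org")
    imap.login("work_username", "work_password")
    imap.select("INBOX")
    status, data = imap.search(None, "UNSEEN")
    print("Unread messages:", len(data[0].split()))
    imap.logout()

    # Outgoing: Gmail's SMTP server, with Reply-To pointed at work.
    msg = MIMEText("Sent from outside the firewall.")
    msg["From"] = "yourname@gmail.com"
    msg["Reply-To"] = "yourname@example.org"
    msg["To"] = "recipient@example.org"
    msg["Subject"] = "Test"

    smtp = smtplib.SMTP_SSL("smtp.gmail.com", 465)
    smtp.login("yourname@gmail.com", "gmail_password")
    smtp.sendmail(msg["From"], [msg["To"]], msg.as_string())
    smtp.quit()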

Good luck!

Thursday, March 6, 2008

Web 2.0 & Brainstorming the Possibilities at CAM

On February 26, 2008 at the California Association of Museums annual conference in Fresno, California, I presented two sessions and promised to make the slides available here.

What's all the Buzz?
Using New Technologies to Educate and Increase Participation
Session Description
Slides (PDF, 26.6 MB)

Web 2.0: Brainstorming the Possibilities
Session Description
Slides (PDF, 6.7 MB)

I will be posting the results of the brainstorming session on this blog as I have the opportunity to write up the ideas. Stay tuned!

Jim Angus

Wednesday, February 20, 2008

Online Curriculum and Creating Community

I met with several staff yesterday from the National Institutes of Health Office of Science Education. Like many offices and organizations, they would like to increase the usefulness of their online materials. This office is amazing. Over the years, they've written no fewer than 11 major curricula that can be used both by teachers and by folks who provide training to teachers. All of their materials are available on the Web, and many have been written with Web parts that provide interactivity.

So what's their problem? Success is their problem. Their curricula are in use by thousands across the country. They'd like to provide more help to the users and would really like to help teachers understand how entire units can be used to teach biology. But like many offices, they don't have the resources to redevelop the existing materials in any kind of reasonable time frame.

So what's the solution? The answer lies within the Web 2.0 revolution. What is Web 2.0? It is excitement and energy. It is innovation. It is what we saw in the 90s when everyone had to have a Web site. Most importantly, though, Web 2.0 is community. Community is what makes social animals different from solitary hunters that live out most of their lives alone. The behavior of social animals is complex and driven in part by what is happening to their fellows. In 1994, Web pages were essentially solitary animals. Yes, they could link to other pages, but their behavior wasn't affected by those links. Enter the twenty-first century. Web pages are no longer alone. They interact with other pages and are changed by those interactions. This is called "social media," and it is the essence of the Web 2.0 revolution. It is the difference between a solitary insect such as a praying mantis and a social insect like a honey bee.

Have I digressed? One solution is to build a community where the members can help each other. This will reduce the burden on OSE resources while promoting successful self-help. Social media can be used to create an environment where users will find what they need quickly, share what they've learned, and connect with others who have similar interests and goals.

The essential idea is to create a Web site where members can easily post questions or information. Over time, users will establish their own folksonomy for the Web site. Folksonomies are informal taxonomies in which community members assign their own tags or keywords to information. The system uses those tags to create connections with other related information. Users can quickly follow those keywords to information and answers that are likely to help them.
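
Here is a minimal sketch of that "related information" idea in Python (the post IDs and tags are invented for illustration): items that share more tags with a given post rank as more related.

    from collections import Counter

    # Invented example data: each post carries member-assigned tags.
    post_tags = {
        "post-1": {"genetics", "high-school", "lab-activity"},
        "post-2": {"genetics", "assessment"},
        "post-3": {"cell-biology", "lab-activity", "high-school"},
    }

    def related(post_id):
        """Rank other posts by the number of tags they share with this one."""
        mine = post_tags[post_id]
        overlap = Counter()
        for other, tags in post_tags.items():
            if other != post_id:
                overlap[other] = len(mine & tags)
        return [p for p, n in overlap.most_common() if n > 0]

    print(related("post-1"))  # ['post-3', 'post-2']: two shared tags, then one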

Members will also be able to set up profiles and share information about themselves and their aspirations. Profiles allow members to be found by colleagues and other like-minded folks. More importantly, members will be able to view the profiles of the "friends of their friends," so to speak, which allows them to rapidly expand their circle of like-minded contacts.
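
A small sketch of that "friends of friends" lookup in Python (the member names are invented): find members reachable in exactly two hops, excluding direct friends and the member themselves.

    # Invented example data: each member's set of direct friends.
    friends = {
        "ana":   {"ben", "carla"},
        "ben":   {"ana", "dev"},
        "carla": {"ana", "ella"},
        "dev":   {"ben"},
        "ella":  {"carla"},
    }

    def friends_of_friends(member):
        """Members two hops away, excluding direct friends and the member."""
        direct = friends.get(member, set())
        second = set()
        for friend in direct:
            second |= friends.get(friend, set())
        return second - direct - {member}

    print(friends_of_friends("ana"))  # {'dev', 'ella'}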

The success of this approach to Web site design is proved by the rapid proliferation of social media sites such as LinkedIn, Flickr, YouTube, MySpace and Del.icio.us.

The office is only in the planning stages now. But stay tuned, I think the drive and creativity of this group will produce a site that will challenge the public's view of a "government" Web site.

Jim Angus