Over on the Aviva Directory site, you’ll find a well-researched compendium, “12 Important U.S. Laws Every Blogger Needs to Know.” Beneath topics like “Whether to Disclose Paid Posts,” “The Legal Use of Images and Thumbnails,” and “Laws that Protect You from Stolen Content,” the article offers tips and pointers to deeper information. Given that ignorance of the law is not a defense, this is a good page to bookmark.
Arguments about the advantages and disadvantages of web-based applications are raging across the net. If the topic interests you, the discussion going on over at Read/Write Web is well worth a read. On that site yesterday, Ebrahim Ezzy posted an article titled “Webified Desktop Apps vs Browser-based Apps.” In it Ezzy cites downsides to the new web-based apps, including being at the mercy of the network and server load, issues with authentication, security, privacy, and reliability, as well as questions about backward compatibility as these new apps evolve. In a post titled “Discussion: Webified Desktop Apps,” Richard MacManus highlights the main points being raised by other bloggers. Those favoring web-based solutions counter Ezzy by noting that apps and databases accessed via browsers have the advantage of being available from any connected computer, are platform agnostic, and are well suited to collaborative projects. MacManus, the man behind the Read/Write Web blog, wisely cautions that we don’t need to think in either/or terms. Still, it pays to understand the rationale behind both sides of this important question as we negotiate increasingly complex content waters.
In this interview with Dispatches from Blogistan, science fiction author, online rights activist, and longtime blogger Cory Doctorow covers the gamut. Cory touches on the history of Boing Boing, provides background and context for questions of fair use and copyright, explains why open source is to our mutual benefit, discusses the value of tagging as a means to discovery, and talks about how blogging and associated social networks may facilitate healthier democracies. It’s a big bill but Cory fills it well. Read the interview here.
n. Ongoing project of the World Wide Web Consortium (W3C), under the direction of Tim Berners-Lee, that is working on a semantic framework for web content that will allow software to determine the type of HTML page—blog, catalog, or glossary, for instance—promising to improve search relevance ranking.
I know bloggers aren’t supposed to apologize, but I’m just about to upload a few dozen entries all at once. I realize this will cause a bit of a stir for those subscribing to the newsfeed and for the nice people at the WELL.com blog aggregator. The entries are interlinked glossary items from the book. Each item has its own comment field and I welcome any feedback. I madly coded all the links during a couple of late-night sessions, so do let me know if you find typos or anything breaks and I’ll fix it. I hope a few people find the glossary to be of use.
In one of his On Language columns for the New York Times, William Safire attempts a light piece touching on some of the new words and phrases growing out of the blogging phenomenon. The article bears his usual erudite stamp, but sadly, he gets several things wrong.
The word ping is generally considered to be an acronym for Packet INternet Groper (52,100 Google hits), not Packet INternet Gopher (1,290 Google hits), as Safire would have it. If he really wanted to have fun, Safire could have pointed out that ping is one of many backronyms, acronyms constructed from words that match the letters of a word already in use.
As for his definition of meme, surely Safire must be familiar with Richard Dawkins and his 1976 book, The Selfish Gene. In that seminal tome, Dawkins defines a meme as a cultural entity–a song, a phrase, or an idea, for instance–that humans replicate by passing from one to another. It’s a perfect term for describing the way trends ripple across blogdom and so it is, indeed, often used by bloggers. Here is Safire’s description:
“A meme is a type of online chain letter,” explains Teli Adlam, a glossarian at blogossary.com, “where bloggers answer questions designed to give a quick overview of the blogger’s personality.” The author is then supposed to tag — that is, to induce — other bloggers to participate by answering the same questions. Tag, as a noun, is a descriptive label applied to an individual post.
He’s almost right about tags, although tagging may be applied to online photographs, videos, and podcasts, as well as blog posts. In touching on tagging and the remarkably popular social bookmarking site, del.icio.us, Safire oddly writes:
“Many bloggers strive to make it onto the del.icio.us front page (otherwise known as being popular).” This has led to the verbal noun or gerund deliciousing.
Deliciousing? Really? I must not run in the right blogging circles because I’ve never heard the term. Safire admits he collected most of what he reports in the column while blegging, or begging bloggers for information. His reporting won’t do much to remedy the perception of blogging as a poor source of information, but perhaps he had a bit of fun. For more fun, may we suggest that he try the Wikipedia and a few Web searches next time around.
In 2004, the Merriam-Webster dictionary named “blog” Word of the Year. This year, the New Oxford American Dictionary named “podcast” Word of the Year. That’s remarkable when you consider that most people have no idea what a blog or podcast actually is. This is changing as mainstream media and old-school reference tomes tout the new technologies. How long until “wiki” makes it to Word of the Year?
I couldn’t help it. The interface was so clean. A single entry box, a la Google. I typed in the URL for this blog, hit the return key, and the Web 2.0 Validator promptly returned a score. This blog gets points for being in public beta, referring to Flickr, del.icio.us, mash-ups and, of course, Web 2.0. I got dinged for not being built on Ruby on Rails and for not integrating Google Maps API or a Bloglines blogroll. Also, no mentions of VCs, RDF, or Semantic Web.
The validator has tongue at least partially in cheek, but the mechanisms behind the 30 Second Rule effort are interesting. For one thing, the rules against which sites are gauged change daily as new criteria are pulled at random from user submissions. To submit your Web 2.0 rule, simply bookmark the validator URL and type your entry into the “notes” field. That, in itself, is a pretty cool implementation.
For the record, dispatchesfromblogistan.com scored 6 out of 16. I may have to try again tomorrow. No telling if the above mention of venture capital will count toward anything then, but that’s pretty much always a turkey shoot.
With the second annual Web 2.0 Conference convening in San Francisco on Wednesday, it’s little wonder that the blog world is full of discussion on the topic. Still, it’s a bit of a challenge to get a handle on just what is at the heart of the movement. Among those who lay claim to the Web 2.0 mantle, there are as many definitions as there are definers, but that’s in the spirit of the “radical decentralization” and “architecture of participation” that distinguish the movement.
Advocates are generally in agreement when pointing to examples of Web 2.0 success stories: the active community that’s grown up around photo-upload site Flickr; the ingenious and addictive bookmarking database at del.icio.us; the real-time delivery of video and other large files across BitTorrent’s decentralized network in which every client is a server; the breadth and utility of the user-written Wikipedia project; and, of course, the phenomenal growth of blogs and syndicated feeds.
What distinguishes all these efforts is the fact that they have adopted the web as a platform, delivering services across networks and devices rather than distributing software artifacts. They are all free and, as such, can afford to be in perpetual beta release. Users happily provide testing, feedback, and suggestions. Some even take advantage of the open source code beneath the surface, contributing valuable variations and enhancements.
By design and default, individual users evangelize and populate the databases. The more people use Web 2.0 services, the more robust and useful they become, and all at very little additional cost. Because much of the contributed content is niche, users are rewarded with an unprecedented array of choices.
Indeed, a whole new argot is growing up around the movement. Folksonomy, a portmanteau of folks and taxonomy, describes the popular use of tagging, or freely chosen keywords, to categorize content from the bottom up rather than through the top-down hierarchies employed by traditional systems. The Long Tail, with its L-shaped distribution curve, illustrates the cost-effective access to niche content allowed by Web 2.0 mechanisms. Remixing is stolen from the pop culture world and is used to describe hybrids like the many homegrown Google Map applications.
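The bottom-up character of a folksonomy is easy to see in miniature. Here’s a small sketch (the tag lists are made-up examples) of how freely chosen tags from independent users aggregate into a popularity ranking, the way sites like del.icio.us and Technorati surface their most active tags:

```python
from collections import Counter

# Each user tags the same bookmark with whatever words make sense to
# them; there is no central vocabulary. The aggregate counts ARE the
# folksonomy. These tag lists are hypothetical examples.
user_tags = [
    ["web2.0", "tagging", "folksonomy"],
    ["tagging", "delicious", "bookmarks"],
    ["folksonomy", "tagging", "metadata"],
]

# Flatten all users' tags and count how often each was chosen.
tag_counts = Counter(tag for tags in user_tags for tag in tags)

# The most common tags float to the top: "tagging" leads because
# three independent users each happened to choose it.
print(tag_counts.most_common(3))
```

No one decided in advance that “tagging” was the right category; it simply emerged from overlapping individual choices, which is the self-correcting quality folksonomy advocates point to.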
Whether these grand experiments in “radical trust” and the “wisdom of crowds” survive and evolve viable business plans is yet to be seen, but the next few days should provide us with some intriguing prognostications from those on the frontlines.
Searching blogs is still a bit of a turkey shoot. General web search engines like Google, Yahoo, and MSN depend on spider bots that crawl websites at intervals too far apart to suit most bloggers. Search engines like Technorati, Feedster, Ice Rocket, and PubSub that are designed to work specifically with blogs and web feeds solve the time lag problem by registering pings from blogs each time one is updated or by indexing RSS and Atom web feeds, but the sheer number of blogs and blog posts makes scaling a challenge for these newer search companies. Both types of search engines favor blogs with the greatest number of inbound links, a point of contention among bloggers who address smaller, more niche audiences. More robust blog search capabilities have been on most bloggers’ wish lists for some time now.
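For the curious, the ping mechanism mentioned above is a tiny XML-RPC call: when a blog publishes a post, its software notifies a ping server, which tells the search engines to come re-index right away rather than waiting for the next crawl. A minimal sketch, assuming the common weblogUpdates.ping convention (the blog name and URL below are just examples):

```python
import xmlrpc.client

# Build the standard weblogUpdates.ping request: two string
# parameters, the blog's title and its URL. dumps() produces the
# XML body a blog engine would POST to a ping server.
payload = xmlrpc.client.dumps(
    ("Dispatches from Blogistan", "http://dispatchesfromblogistan.com/"),
    methodname="weblogUpdates.ping",
)

# Actually sending it is a plain HTTP POST of this XML to a ping
# server such as rpc.weblogs.com; commented out so the sketch stays
# offline.
#
# import urllib.request
# req = urllib.request.Request(
#     "http://rpc.weblogs.com/RPC2",
#     data=payload.encode("utf-8"),
#     headers={"Content-Type": "text/xml"},
# )
# urllib.request.urlopen(req)

print(payload)
```

One lightweight POST per update is what lets blog-specific engines index a post within minutes instead of days.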
The release today of Google Blog Search (GBS) raises the hopes of many that they will be better able to find blog entries of interest and that others may discover their own blogs more easily. Initial response to the new service is mixed. The new beta service appears to gather its data by indexing RSS or Atom feeds published by many bloggers. How well this solves the latency issue with traditional Google web searches is yet to be seen, but a good number of bloggers are already testing and critiquing the service.
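Indexing feeds rather than crawling HTML is less exotic than it sounds: an Atom or RSS feed is just structured XML, so a search engine can pull titles, links, and timestamps directly. A minimal sketch using a made-up sample feed (the entry shown is hypothetical):

```python
import xml.etree.ElementTree as ET

# Atom elements live in this namespace, per the Atom 1.0 spec.
ATOM = "{http://www.w3.org/2005/Atom}"

# A tiny hand-written sample feed standing in for one a blog engine
# would actually publish.
feed_xml = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Dispatches from Blogistan</title>
  <entry>
    <title>Google Blog Search launches</title>
    <link href="http://dispatchesfromblogistan.com/gbs"/>
    <updated>2005-09-14T12:00:00Z</updated>
  </entry>
</feed>"""

root = ET.fromstring(feed_xml)

# Each <entry> yields a (title, link) pair ready for an index,
# no HTML scraping required.
entries = [
    (e.findtext(f"{ATOM}title"), e.find(f"{ATOM}link").get("href"))
    for e in root.findall(f"{ATOM}entry")
]
print(entries)
```

Because the feed carries an explicit update timestamp, an indexer also knows exactly when a post appeared, which is what makes near-real-time blog search feasible.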
For a bit of context, the Wall Street Journal Online’s Vauhini Vara compiles a useful list of the major blog-specific search engines, calling out their pluses and minuses.
Today, SearchEngineWatch’s Gary Price lists Google Blog Search features that he would like to see in the future, including the ability to screen out blogs that merely scrape headlines, to cluster “related blog” posts, and to search by location for entries that contain a geotag. Price also asks for a clearer explanation of how Google Blog Search differs from Google News searches.
The Blog Herald posts initial thoughts about Google’s new search. The Herald particularly liked the ability to place any search term into a web feed and found the split results between “related blogs” and a general index to be useful, although they echo the complaint of many that the latter seems to merely mimic Google News results. The overall size of the index is the major problem (under 9 million blogs searched as compared to Technorati’s more than 17 million, for instance), but this may improve over time.
Microsoft’s Robert Scoble reports that the search speed is excellent and the results contain less spam and fewer duplicates than other search results. Technorati still has more up-to-date results, but at this point, Scoble is inclined to favor Google over the other engines.
We’ll have to watch and see how Google measures up and which of the other search giants decide to enter the ring. It was heartening that the new service indexed this blog, which is still relatively small and recent. Stay tuned and we’ll report back on how well Google’s new search suits the overall needs of an ever-growing blog population.
The phenomenon of folksonomy, the bottom-up categorization of content that lies at the heart of social bookmarking site del.icio.us and photo sharing site Flickr, is catching on all across the web. Earlier this year, blog search giant Technorati began tracking user-generated tags, and its tag page offers a constantly evolving topography of the categories of greatest current interest to participants. The ease of use and self-correcting nature of folksonomies underlie the success of the phenomenon, but inevitably debates arise. Today, Peter Merholz responds to an earlier article by Clay Shirky. Both authors see value in user-generated metadata, but Peter takes Clay to task for an ideological bias, claiming Shirky denigrates classic hierarchical organization schemes in favor of folksonomy. Peter argues for an integrated approach. Reading both articles goes a long way toward helping the rest of us make sense of this new and fascinating territory.
Well, in that single second, one new blog launched. Or so writes David Sifry, founder of blog search engine Technorati. That’s more than 80,000 new blogs a day. No wonder it’s so hard to keep up.
Some doubt Technorati’s claims that these numbers exclude spam blogs, but few question the fact that the overall number of blogs is doubling every five or six months. If the trend holds steady, that would mean 20 million blogs by the end of the year.