Further Reading: Google Ripples

This post came across my Google+ stream today, and I wanted to share it:

Reshare this post so we can test the new Google+ Ripples!

This post is a request to share another post and an invitation to check out Google+ Ripples.  Google+ Ripples is a Google+ feature that visualizes a post’s activity.  It will show you a post’s broadcast potential by visualizing who has shared it:

Visualized Shares in Google Ripples

This is how Google Ripples has visualized the way the original post has been shared
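
For the technically curious, here is a minimal sketch of the idea behind the visualization – not Google's actual implementation, and all of the names and reshare edges below are hypothetical. It just models reshares as a tree rooted at the original post and measures how far the post has "rippled" outward:

```python
# A minimal sketch (not Google's implementation) of the idea behind Ripples:
# model reshares as a tree rooted at the original post, then measure how far
# the post has "rippled" outward. All names and edges here are hypothetical.

from collections import defaultdict

# hypothetical reshare edges: (person shared from, person who reshared)
reshares = [
    ("original_poster", "alice"),
    ("original_poster", "bob"),
    ("alice", "carol"),
    ("carol", "dave"),
]

children = defaultdict(list)
for parent, child in reshares:
    children[parent].append(child)

def ripple_stats(root):
    """Return total reach (number of reshares) and maximum depth of the tree."""
    reach, max_depth = 0, 0
    stack = [(root, 0)]
    while stack:
        node, depth = stack.pop()
        max_depth = max(max_depth, depth)
        for child in children[node]:
            reach += 1
            stack.append((child, depth + 1))
    return reach, max_depth

print(ripple_stats("original_poster"))  # -> (4, 3): four reshares, three hops deep
```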

Brad Matthies does a good job showing you how to view a Ripple for any G+ post – check out his blog for more information.

I don’t have much to say yet on Google Ripples – it’s still very new and novel, and I think people are playing with it more than they are thinking about what it does, how it does it, and what it might mean (if it means anything).

One thing did cross my mind, though, as people in my own Google+ stream started to reshare my share of the post. I’m curious to see how Google+ Ripples will turn out. It may only be visualizing and making public the links we all make to one post on the Goog, but I’m interested to know whether there might be a backlash against it.

This is interesting because people, myself included, often become uncomfortable and vocal when the relationships between ourselves and the information that connects us are laid bare on the Internet, so I’m almost expecting a public pushback against Google+ Ripples (even though its graphs put the information, not the people carrying it, at the centre).   But once the Info.Corps back down and put the cover back over our social graphs, we stop worrying and carry on with our day on the Internet – business as usual.  The thing that gets to me, though, is that we don’t really think twice about the fact that this information about us has already been collected and, on some social media networks, is being shared with third parties without our knowledge.

Don’t take that to mean I’m being alarmist – I’m not suggesting we shut down all of our accounts immediately. I’m only observing the way we sometimes object to the public display of our social graphs but don’t seem to worry that the information is being collected in the first place.

P.S. I think the + in Google+ is a little silly at this point.  I really want to just call it “Google Ripples.”

Muppet News Flash: Digital Libraries cost a lot of money (but don’t tell the general public)

A recent piece in the New York Times is reminding me why people don’t understand the enormous costs (let alone the time and effort) associated with digitizing a nation’s culture.  Natasha Singer’s January 8 article does a great job of helping the public imagine the possibility of a Great American Digital Library, and she even quotes Benjamin Franklin to lend her argument the kind of weight that comes from tying ideas to the nation’s forefathers.  What the piece is really light on, however, is financial figures.  Check it out and see.

The article neatly summarizes the digitization efforts of certain national governments, compares them to the Library of Congress’s American Memory project, and then turns to the Google Books project.  We learn that the LoC project has no formal connections to any public library projects and that several leading figures and organizations would like to collaborate on one giant digitization venture.  Wouldn’t it be great if we could coordinate our efforts, standardize systems and processes, and make it all accessible to archivists, researchers, and the public?  Yes, Virginia, there is a Santa Claus.

My problem lies with the way the article gives short shrift to the costs of such an effort. After telling us that Harvard University’s Berkman Center for Internet & Society would like to develop a “digital public library of America,” Singer tells her readers that:

Of course, practical matters — like cost, copyright issues and technology — would need to be resolved first.

“The crucial question in many ways is, ‘How do you find a common technical infrastructure that yields interoperability for the scholar, the casual inquirer or the K-12 student?’” Dr. Billington says.

The New York Times does precious little in this article to break down the costs associated with a digitization project, let alone one of the magnitude to which it alludes.  What we’ve got listed are “copyright issues” and “technology”, which don’t touch the human capital required to develop and then maintain this digital archive.  And when Singer says “technology”, I don’t know if she means hardware, software, maintenance, preservation, or all of these things together.  The piece also says nothing of the physical plant required to house the servers, because even computers must be stored somewhere.  A digital library on any scale is expensive, but this article doesn’t explain why.
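
To give a sense of why, here is a purely illustrative back-of-envelope calculation. Every figure in it is a hypothetical placeholder I have made up for the sake of the arithmetic – none of these numbers come from the article or from any real project – but it shows how quickly per-page and per-volume costs compound once staffing and storage enter the picture:

```python
# Purely illustrative back-of-envelope arithmetic. Every number below is a
# hypothetical placeholder, not a real figure from the article or elsewhere.

volumes = 1_000_000                    # hypothetical collection size
pages_per_volume = 300
cost_per_page_scanned = 0.10           # hypothetical scanning + QA cost, in dollars
metadata_cost_per_volume = 5.00        # hypothetical cataloguing/description cost
annual_storage_and_staff = 2_000_000   # hypothetical yearly hosting + personnel

one_time = volumes * (pages_per_volume * cost_per_page_scanned
                      + metadata_cost_per_volume)
ten_year_total = one_time + 10 * annual_storage_and_staff

print(f"one-time digitization: ${one_time:,.0f}")        # $35,000,000
print(f"ten-year total:        ${ten_year_total:,.0f}")  # $55,000,000
```

Even with made-up numbers, the shape of the problem is clear: the scanning is only the down payment, and the ongoing human and physical infrastructure keeps billing you year after year.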

Now consider this second quotation, which weighs the value of the Google Books project and then notes that, in the future, copyright costs may have to be settled by universities, research facilities, and other PSEs (post-secondary institutions):

People can read out-of-print items at no cost on Google Books, if those works are no longer subject to copyright protection. But if a judge approves a settlement between Google and copyright holders, subscription fees to access scans of out-of-print books still covered by copyright will have to be paid by universities and other institutions.

An American digital public library would serve as a nonprofit institutional alternative to Google Books, Professor Darnton says.

Now we have an example that raises the spectre of “subscription fees” without explaining the burden these fees place on universities.  I have no doubt that when most people read “subscription fees . . . will have to be paid by universities”, they don’t have a sense of the business models and financing at play; for a lot of people, what will really matter is that the buck stops somewhere, but thankfully not with them.  As librarians, we know that our electronic resources, as valuable and cost-effective as they are, eat up a large part of our (largely taxpayer-funded) budgets.  These “subscription fees” are not at all like the fees people pay for cable TV or an internet connection at home.  PSEs and their libraries pay through the nose to large, for-profit organizations for electronic access to materials that are often funded by the PSEs themselves.  And even in the case of non-profit organizations like JSTOR, the fees remain costly.  So a “non-profit institutional alternative” that seeks to facilitate digitization of and access to a nation’s cultural heritage at reasonable rates could still leave a collections librarian bruised.

Digital preservation is a labor of love: this machine cost Penn State nothing, and this man is happy to volunteer his time and expertise.

We need to get real when we talk about digitization projects with the public, especially when we talk about huge mega-projects like a mass digitization of American cultural history.  Articles like this New York Times piece do nothing to explain the real costs involved in digitization, collections, and electronic access.  And cost is where it counts.  Too often at the reference desk I find myself explaining to students that material found on the Internet is not free, and that the dollars they pay for access on their smartphones or at home cover the cost of transmission, not the cost of “content.”   We need to start educating people so they understand that a monthly data plan or Internet bill pays only for the pipes through which content is downloaded to their devices, not for the actual development and upkeep of the content they are retrieving, let alone the infrastructure (human and physical) required to maintain it.

I apologize if I sound like a cranky curmudgeon here.  Like most librarians, I fully believe that information wants to be free.  But that’s only a desire.  Information may want to be free, but right now it isn’t.  And it’s up to people like us to explain to the world the real costs associated with our information landscape.


Google v. Blekko v. The Librarian. (The librarian wins.)

In the past week I’ve heard three different librarians say something like, “We lost to Google years ago.”  We know that this sort of statement isn’t complete hyperbole.  When it comes to discovering or verifying quick facts, people turn to search engines faster than they ever turned to an encyclopedia at home or a reference collection at the library.  While there are many things librarians can do better than Google – like helping people find the needle in the information haystack, or teaching people how to make wise, informed decisions when researching – when it comes to ready reference, most of the time Google has us beat.

The big thing librarians still have over Google, though, is criticism and control.  We not only know how to quickly manipulate Google’s search engine (and other companies’ engines) to discover decent results, but we are pretty good at separating the wheat from the chaff.  I notice this especially with government documents and government data on the web: people who visit me at the reference desk looking for government data have a hard time finding information and then verifying its authority.  There are no second readers on the web – people have to rely on their own experience and understanding of information organization and information architecture to locate documents, and then be willing to use them with confidence.  Librarians, however, can help people locate information sources, draw relationships between items, and determine the value of this knowledge to their own work.  For these reasons alone, we’re kind of a big deal and shouldn’t be afraid to say so.

Click through for a great example of why Google is *not* a good search engine.


Especially in this so-called digital age, our ability to help people choose information sources makes us essential to information management and research services.  For all of our complaints about people’s reliance on the Google search engine and index, we can at least take comfort in knowing that our “editorial” function vis-à-vis the Internet is still necessary and valued.  What’s a curator but a selector of items of value?  I’m not saying that librarians curate the web, but on the whole, we certainly have a broad understanding of the tools and resources needed to help you find the data you’re looking for or to take your work to the next level.

But now, Internet, Inc. has developed the latest, greatest search engine that apparently should leave us shaking in our boots: Blekko.  Blekko is receiving a lot of new-startup PR this month because it is doing what librarians have done for ages (and what Google doesn’t bother to do) – it separates the good from the downright ugly on the Internet.  Although Blekko has indexed over 3 billion webpages, it lists only the top results in order to cut down on website “pollution” from content farms and plain dirty spam.  I’ll let the New York Times take over from here:

People who search for a topic in one of seven categories that Blekko considers to be polluted with spamlike search results — health, recipes, autos, hotels, song lyrics, personal finance and colleges — automatically see edited results.

And here is their comparative example:

In some cases, Blekko’s top results are different from Google’s and more useful. Search “pregnancy tips,” for instance, and only one of the top 10 results, cdc.gov, is the same on each site. Blekko’s top results showed government sites, a nonprofit group and well-known parenting sites while Google’s included OfficialDatingResource.com.

“Google has a hard time telling whether two articles on the same topic are written by Demand Media, which paid 50 cents for it, or whether a doctor wrote it,” said Tim Connors, founder of PivotNorth Capital and an investor in Blekko. “Humans are pretty good at that.”

Blekko's logo - featuring a real live person (a librarian, no doubt)

Blekko’s founders are basically looking Google in the eye and saying the Internet isn’t going to be a wild west any more, that editorial control (if not authority control, too?) is required to organize all the information available to anyone ready to jack in to the web.
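
To make the “editorial control” idea concrete, here is a minimal sketch of how curation along these lines could work. To be clear, this is not Blekko’s actual system; the category, the trusted-domain list, and the sample results below are all hypothetical, chosen only to illustrate the idea of filtering results against a human-curated allowlist:

```python
# A minimal sketch of editorial curation of search results. This is NOT how
# Blekko actually works; the category, trusted domains, and results below are
# hypothetical, chosen only to illustrate a human-curated allowlist.

from urllib.parse import urlparse

# hypothetical human-curated allowlist for a "health"-style category
TRUSTED_HEALTH_DOMAINS = {"cdc.gov", "nih.gov", "mayoclinic.org"}

def curate(results, trusted_domains):
    """Keep only results whose host matches or falls under a trusted domain."""
    curated = []
    for url in results:
        host = urlparse(url).netloc.lower()
        if any(host == d or host.endswith("." + d) for d in trusted_domains):
            curated.append(url)
    return curated

raw_results = [
    "https://www.cdc.gov/pregnancy/index.html",
    "https://www.contentfarm-example.com/10-pregnancy-tips",
    "https://www.nih.gov/health-information",
]

print(curate(raw_results, TRUSTED_HEALTH_DOMAINS))
# -> ['https://www.cdc.gov/pregnancy/index.html',
#     'https://www.nih.gov/health-information']
```

The design choice is the interesting part: somebody – a human editor – has to decide what belongs on that list, which is exactly the kind of judgment librarians have been exercising all along.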

Blekko’s approach is verging on librarians’ territory.  Should we be concerned?  I don’t think so.  If Blekko succeeds at helping the entire world discern what is valuable and critical from what is a bottle of plonk on the Internet, then I think we’ve got a problem, but given that information is synthesized into knowledge at the local level, I think we still have something on these apparent new masters of search.  And I don’t feel like I’m sticking my head in the sand by saying that, either.  Sure, the Internet can give us a run for our money at times, but if anything it has made the work we do all the more important to the people we serve.  With so much information available to people since the development of the web, it’s useful to have other people (i.e., us) close at hand to help them determine their particular information needs and then help them meet those needs.

Librarians - We are electronic performers (apologies to Air)

Blekko won’t know, for instance, what titles our local public library holds, and neither will it know which electronic databases our local universities subscribe to.  And I can pretty much guarantee it won’t have any Canadian socio-economic data (long form or no long form) and will have very few government documents.  This is where the person on the ground – the librarian – can step in and act as an intermediary between our patrons and what the Internet has to offer.

Funny.  I nearly called the Internet an “Interblob” just now.  Because that’s what it is – a big doughy blob of information.  But because I’m a librarian, I can help you find what you’re looking for on it – Google or no Google, Blekko or no Blekko.

Google.cn and “Don’t be evil”

Here’s a little something to think about and e-mail to all of your friends and neighbours over the next few days. June 2009, you will recall, has the unfortunate distinction of marking the twentieth anniversary of the Tiananmen Square Uprising. Many people were hurt, killed, and repressed. But, as a good friend noted when he forwarded me this article, many more today don’t even realize it happened or consider it only a tiny blip on the way to domestic prosperity. Mirroring this sentiment is a post by a user at LIS News that links to Frontline‘s 2006 exposé on Google’s self-censorship of keyword searches on Google.cn.  That page is a stirring reminder that Google’s mantra of “Don’t be evil” is often lost in translation or subordinated to a footnote in a corporate annual report.

Looking at the screenshots that Frontline made in 2006 is a good way to put our love for Google in check. So many of us (and I count myself in this bunch) are inextricably tied to various Google products and apps, from its powerful index and search engine to its free e-mail service and RSS reader. Google makes the Internet so easy to work with that we hardly notice the tidy profit it makes from our love for its wares. Yet, when we are given a chance to take a serious look at what Google does to its own index of the Internet so that it may increase its market share in foreign or closed markets, we can see clearly that Google and “Don’t be evil” aren’t joined at the hip the way we like to believe they are.

I’m not going to cut-and-paste Frontline’s images into this post because I think it’s important that you visit the Frontline website for yourself – continued traffic will hopefully give them a reason to keep the page up for a long time to come. When you do make it there, however, I’m confident you’ll be at least a tiny bit shocked to see firsthand what Google sacrificed in order to enter the domestic Chinese market. Self-censorship has become key to the Google business model in China. Searching for “Tiananmen Square” in Google’s image database from a café in Chengdu won’t retrieve links to the ubiquitous photo of the anonymous student who confronted a column of tanks, as it would in Toronto, London, or Rome, but will instead show warm photos of the tourist site. Meanwhile, searching for “Falun Dafa” will not return a single hit. In order to maintain a place in the Chinese market, Google restricts access to its own index and sees higher ad rates and click-throughs (and presumably a healthier bottom line) in the long run.

That’s the problem with Google and “Don’t be evil.” Sitting in our offices and cozy living rooms and dens and bedrooms here in the West, we don’t notice that Google has self-censored and altered access to its index. We hear about this from time to time, but it hardly affects us, so we tend to forget about it. Google China‘s practices aren’t doing any harm to you or me, and they’re certainly not doing any harm to shareholders’ ROI, but they are actively restricting the kind of information that can be retrieved by millions of Internet users on the other side of the world.

This is perhaps one of the largest reminders we have that corporate interests are not always aligned with the interests of the people. Think for a second about who controls the way you access information on the Internet, from your local service providers to search engines to backbone consortia. The Internet isn’t free and Google isn’t benevolent. Remember that the next time you look for something controversial in an index.