Category Archives: Information Society

The Beastie Boys Library Sabotage Mashup

While I generally don’t get too geared up for zany library funtimes (because I want people to take me seriously since I do serious work), this video is definitely worth watching.  It mashes up one of the greatest bands of the late 20th century with a little LIS:

http://vimeo.com/66169135

Happy Monday!  Now everyone go listen to your favourite Beastie Boys album on Rdio.

 

Thoughts on The Globe and Mail’s Paywall Announcement

Here are some quick thoughts on The Globe and Mail’s announcement that it will put a paywall around its online content on October 22, 2012. On the one hand, I’m okay with this. Creators ought to be compensated for their work, and though The Globe has very, very deep pockets, I believe it’s important that consumers begin to remember that content isn’t free, that labour has been invested in the journalism they read, and that there is a cost to what they want to read and view online.  (Of course, let’s hope the price stays fair. Journalism may have value, but no one wants to be gouged a few years down the line.)

My professional perspective, though, is marked by a somewhat cynical reaction against change:

[blackbirdpie url="https://twitter.com/steeleworthy/status/258020126866165760"]

Putting my librarian’s hat on, my first thought is that the Globe’s paywall will alter how students find, access, and use news and current events in their assignments. Showing our users the obvious benefits of using scholarly databases (i.e., do you want to find high-quality material, or do you want instant yet sub-par results?) can already be a trying experience, and I wonder how many people we might lose – people who won’t want to turn to our Factiva and Lexis subscriptions to get the Globe’s perspective – after the paywall goes up. Why bother loading up Yet Another Database when there are so many other free newspapers to consider, or even just Google News?

Of course, it isn’t that bad.  Once students are told that scholarly databases are a requisite for their research, and once they’re given an opportunity to test and learn them, many do use them, and many do use them well enough.  So when next term’s assignment calls for two or three newspaper sources alongside the scholarly resources, we’ll show them how to use Factiva and Lexis, and many of our users will navigate these databases effectively. Come down off the cliff, librarian. Things are going to be okay. 

But I still have a concern about this paywall announcement, and it lies in the everyday consumption of news. Honestly, I quickly got over the Librarian’s Dilemma I noted above; I sincerely do think that things will be okay (or at least “okay enough”). But I’m thinking about the day when most reliable journalism sources will have instituted a paywall, and how the constraints that this subscription model places on our ability to read top-flight information will affect our culture. While I want a balance between access to information and fair compensation for information creators and producers, I also want to see easy, widespread access to high-quality information.  I don’t ever want to see an Internet that has priced the best reporting, art, opinion, and literature out of the hands of the majority of internet users, and I hope that a “paywalled” web will not inhibit our society’s ability to access and consume the best content out there.

Let me put it another way.  I’m not really a fan of opera (even though I’d like to be) and I don’t go to the opera, but I appreciate that opera is no longer an exclusive night out for the highbrow elite among us. Today, opera is accessible and affordable, and anyone who can at least get to a stage can catch a performance if they want to see one.  Let’s hope that the coming age of Internet paywalls doesn’t price most of us out of the best art and news and content that this medium can deliver.  So yes, I think a paywall is acceptable, but let’s just make sure we will all still be able to get to the goods the same way that many of us can see legitimate theatre when we want to.

Adapting to a changed world

A funny thing happened while Mark Lamster was writing about how libraries are more popular than ever for Metropolis Magazine.  Well, two funny things happened.  First, he didn’t really give enough evidence to prove his claim.  And second, he rhymed off the different ways that libraries have manifested themselves through the ages.

(I’m not going to fault Lamster for not sufficiently explaining his popularity claim since I assume that an editor wrote the headline. Besides, this is from a news site that is focused on architecture as opposed to information science. His concern lies elsewhere.)

I think all librarians need to read this article.  And I don’t mean that everyone should skim it between e-mails. I mean everyone should sit down and read it.  It won’t take long to do, I promise, so you don’t have to give me the tl;dr line. What’s so important about this article is that an outside observer is speaking to other outside observers about what libraries have looked like in the ages-old past, in the recent past and present, and what they will look like in the future. Lamster’s article inadvertently explains that the library as so many of us like to think of it – as a wondrous cathedral of knowledge and public reading room – is only a recent understanding of the term. Rather, libraries have been closed buildings with closed stacks and difficult-to-use technologies for centuries. Libraries have been loci of state power far longer than they’ve been the democratic, open, free spaces we think of them as today. But as society shifted to become freer and more egalitarian, so too did the library, becoming what we know it to be today. And libraries will change again, and again, to meet the needs and opportunities that future users bring.

I’m linking to Lamster’s article because I want to stress that libraries are not defined wholly by their collections. I don’t think that libraries must be filled with books in order to have any use to the world. Libraries with books are great. Libraries with books are beautiful, wondrous things, actually. But there is no need to fear the library that has fewer books than it did x years ago. Libraries with fewer books can still be information hubs and community centres. A library that gives up stack space for meeting rooms and research space, that moves books off-site to make way for local business support centres or for information portals such as geospatial data centres, language labs, digitization centres, local collections, archives, etc., is still a useful space that serves the public good.

There’s a good chance some of you will object or reserve judgment since the jury is still out on the long-term viability of collection models that prioritize access to licensed material over purchasing content outright. Some of the people Lamster interviewed also expressed consternation that libraries are adapting to the digital world in ways that don’t suit them. That’s okay: we can continue the Whither Print? debate another time.  My argument is not about whether collecting print is good or bad.  My argument is that we must honestly come to terms, as a profession, with the effects that advances in information technology have on our services and spaces. The world around us has changed, full stop. Online information repositories have increased in number and become easier to access, and our users now keep Internet access devices (read: smartphones) in their pockets wherever they go, so libraries must continue to adapt.

Don’t fear change, fellow librarians, and don’t fear technology.  Embrace it, because your users have done so already. It’s better for us to be part of the online access movement and be able to guide its direction than to react to it. Let’s build and be present in the networks that link people, the places they inhabit (physical and digital), and the information they seek in their lives.

Part of me is cringing since I’m typing this today, in 2012.  And another part of me wonders if writing this on a blog will only preach to the converted.  That may be so.  But I feel a need to continue flying the flag and to declare what our role can be in this online environment.  I hope you do, too.


Halifax population changes, 2006 to 2011

A few years ago, I designed a few rudimentary Google maps of Halifax from StatCan data.  This was before I really knew anything about stats and data (n.b. I still don’t think I know much more than “some things” about stats and data), licenses, and how to properly interpret them. One map that I created showed Halifax’s population change, tract by tract, from 2001 to 2006. I’m giving myself embarrassment cringes by linking to it, but all the same: view it here.

StatCan has produced PDF images that show tract-by-tract population changes from 2006 to 2011 for all census metropolitan areas (CMAs), including Halifax. Click here to see Halifax’s population change table per tract.

Halifax Population Change from Census Year 2006 to 2011, Statistics Canada

Of note: the suburbs clearly rule the roost when it comes to Halifax’s population changes from 2006 to 2011. The only tract on the peninsula showing a significant increase (i.e., over 11.9%) is Tract 2050019.00, in the middle of the peninsula.  The increase in this tract is due, I’m certain, to the Gladstone redevelopment, the first major phases of which were completed – if memory serves me correctly – in 2007 or 2008.
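
For anyone curious, the arithmetic behind these tract-by-tract figures is simple percentage change. Here is a minimal sketch in Python; the file name and column headings (halifax_tracts.csv, tract_id, pop_2006, pop_2011) are hypothetical stand-ins for a reshaped StatCan profile table, not anything StatCan actually distributes.

    import csv

    # Hypothetical input: one row per census tract with its 2006 and 2011 populations.
    with open("halifax_tracts.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    for row in rows:
        pop_2006 = float(row["pop_2006"])
        pop_2011 = float(row["pop_2011"])
        change = (pop_2011 - pop_2006) / pop_2006 * 100  # percent change, 2006 to 2011

        # The threshold mentioned above for a "significant increase" is 11.9%.
        flag = "  <-- significant increase" if change > 11.9 else ""
        print(f"Tract {row['tract_id']}: {change:+.1f}%{flag}")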

For what it’s worth, I’m not sure if I’m going to build a Google map from 2011 census tract data. The work is time-consuming and there are other people in my field who have the expertise and software to do a much better job than I can. (And besides, my own hobby at the moment has more to do with plotting historic maps with Google Earth!) My work finding socio-economic data, making the odd remark here and there, and helping others make sense of it, is enough work – and fun – for one person.  :)

Finally, here are a few outbound links to keep you interested:


Discussing data.gc.ca, Nesstar, and Open Data in Canada with Tony Clement

On Friday, June 8th, 2012, I had the opportunity to discuss Open Data and Open Government with Tony Clement, President of the Treasury Board and the minister responsible for the Treasury Board Secretariat (TBS), as well as other MPs and members of the Laurier Institute for the Study of Public Opinion and Policy (LISPOP).  Our time with the Minister was short and the agenda was tight, but we managed to have a great discussion on data.gc.ca, the Treasury Board’s pilot programme to build a single, unified government data and statistics portal for Canadians.

When you get a chance to sit with industry, government, and academic leaders, you’d better come prepared.  I knew data.gc.ca from recent experience and had already compared it to other Government of Canada (GC) data portals, as well as some other national portals. I also printed recommendations to distribute to the room, which I believe helped focus the meeting on a few key points. Given the people we were speaking to, i.e., high-level ministers, aides, and MPs, I focused my talking points on managerial and strategic concerns they could address in their own planning and visioning exercises. What’s important here is understanding what your audience wants to learn, and then communicating it in a manner that will help them visualize the idea for themselves. The Minister’s chief concerns probably lie not so much in which research community uses which programme for statistical analysis as in developing the means for people from all backgrounds to access government-collected data, regardless of platform. In moments like this, it’s best to keep your focus on broad, easily communicated strategies that can produce short-term and long-term results.  No conversation is without a persuasive element, and if I had a chance to speak, I wanted to be sure I was understood.

data.gc.ca is a pilot project, so we shouldn’t be quick to judge its merits and deficiencies. The Minister and his aides emphasized that the site is a work in progress and that he was looking for advice on how to improve its design and the end-user’s ability to access government data and statistics through it. In short, he wanted to meet us so that we could share our opinions and make recommendations to improve the service now and in the future.

The site itself represents the Treasury Board’s attempt to build a “union catalogue” of Government of Canada datasets and statistical tables. This is a great start at an issue that the LIS community has been grappling with for hundreds of years, i.e., developing one catalogue for all sorts of stuff from all sorts of places, and I think we should commend them on their efforts. Given the government’s current fiscal restraint, I’m impressed that this project was started at all.

information dump: data.gc.ca search results are wordy

In my mind, the main issue with data.gc.ca has to do with convincing the managers at the very top of the project (i.e., the Minister and his aides, who give direction but don’t necessarily have day-to-day input on implementation) that data management and access issues must be resolved if this site is to be a long-term success.  The site has content, but it lacks organization, has access problems, and may suffer from a lack of long-term data management planning with other government departments.  This is not so much a data issue as it is a data management issue.  And thankfully, there is a solution (more on that later).

Here’s a short list of data.gc.ca‘s benefits and challenges that were addressed in the meeting:

  • Benefits
    • Offers a single portal to access government statistics and data
    • Offers easy to use search and browse functions
    • Maintains the regular Government of Canada (GC) website look-and-feel
    • Contains geospatial data
    • Offers data in the common .CSV format for Excel
  • Challenges
    • The portal does not always conform to “plain language principles.”  -  Some pages are wordy, and others are littered with government or data jargon that is not always explained and therefore may irritate the user.
    • Search and Browse functions are easy to use, but their results are not useful.  -  There are few search refinement opportunities, inconsistent keyword indexing, and few search result and survey descriptions. This creates an information dump for novice and expert users alike.
    • Records are inconsistent.  -  Sometimes, there are duplicate records for the same French and English data; other times, there are two unmarked links in one record.
    • Records are incomplete.  -  Most records contain more empty fields than complete fields. The novice user will be confused; the expert user will be annoyed.
    • Retrieved records are difficult to browse.  -  GC look-and-feel web standards are designed for text pages and not for database search results; this makes the website cumbersome.
    • Data is offered only in CSV and XML.  -  In some ways, this is a “good thing” since it reduces the number of files and file versions (or links) that the DB maintainers must worry about. On the other hand, Treasury Board and GC departments may want to consider encoding their datasets into other formats for their users in order to reduce the chance of file corruption, programming error, or user misinterpretation (a rough sketch of what that re-encoding could look like follows this list).
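
To make the formats point concrete, here is a minimal sketch of the kind of re-encoding a department could automate. The file names are hypothetical, and this is just one possible approach (pandas plus the third-party pyreadstat package), not a description of anything data.gc.ca actually does.

    import pandas as pd
    import pyreadstat  # third-party package for SPSS/SAS/Stata file formats

    # Hypothetical dataset name; any CSV release would work the same way.
    df = pd.read_csv("some_gc_dataset.csv")

    # Re-encode the same table in formats that statistical packages read natively,
    # so end users are not left converting files by hand (and possibly mangling them).
    df.to_stata("some_gc_dataset.dta", write_index=False)  # Stata
    pyreadstat.write_sav(df, "some_gc_dataset.sav")        # SPSS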

This list isn’t exhaustive, and in the portal’s current form, there are more challenges than benefits. But I would argue that it’s better to consider the quality or weight of all these items rather than their number, since the portal is still a pilot programme.  Many things must be improved (namely data organization, management, access, and data usefulness for both novice and expert users), but I think that Treasury Board may be on the right track here.  I argued that TBS must take three actions to accelerate their forward momentum:

  1. Standardize data management principles across departments.  This is a case of herding cats, I know, but if TBS truly wants data.gc.ca to become a premier stats/data engine, then it must coordinate with other departments to ensure that the records in its database are consistent and complete.  In its current form, too many records have too many incomplete fields, so expert users will likely turn directly to the departmental websites, and novice users will either learn to do the same or turn away completely.
  2. Consider acquiring a license for Nesstar, which would improve data presentation, manipulation, and access. Nesstar can facilitate cross-tabulations for novice users or people with immediate statistical needs (a toy example of what a cross-tabulation looks like follows this list), and, if the government chooses, it can also support a repository for large datasets in a variety of formats (e.g., .CSV, SPSS, SAS, STATA, etc.). This would give TBS and other government departments the assurance that the data they release “into the wild” is accurate and cannot be corrupted or misinterpreted when exported from one format to another.
    • (2a:) I also noted that several GC departments, provincial governments, other national governments, and the Canadian research community are already experienced users of Nesstar. I suggested that TBS discuss implementation and management plans with OCUL, which has been using Nesstar for several years now for <odesi> through ScholarsPortal.
  3. Consult with Statistics Canada regarding license development and outreach. StatCan is a world leader in data collection and dissemination; not involving it fully would be a lost opportunity for TBS and a disservice to that agency’s expertise.
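
Nesstar would handle cross-tabulation through its own web interface; purely to illustrate what a simple cross-tabulation is, here is a rough sketch in Python with pandas. The file and column names are invented for the example.

    import pandas as pd

    # Hypothetical microdata file and column names, for illustration only.
    df = pd.read_csv("some_gc_survey.csv")

    # The kind of quick cross-tabulation a novice user might want:
    # respondent counts broken down by province and age group.
    table = pd.crosstab(df["province"], df["age_group"], margins=True)
    print(table)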

 

<odesi> screenshot

A screen capture of <odesi>, the OCUL data repository

I can’t say exactly how well the Minister and his aides appreciated the information we gave, but it seemed to have been well-received.  For my part, I intentionally used plain language to focus on resource management and policy development since these areas are most directly under their control and are things they can work on today, if they’re so willing. Hopefully, they walked away with a better understanding of the need for data management and access, and can now contextualize what areas they must work on to transform their big-picture thinking exercises into a successful public data portal.

On a personal note, this was one of the most productive hours I’ve had in a long time. My knowledge drove the discussion at times, and I walked back to my office feeling as though I successfully informed people who can implement programmes and make change in Canada. I’m pretty sure I raised the profile of data librarianship, of repositories like <odesi>, and of the need to pay more attention to data management today and in the future. I’m certainly no expert when it comes to data librarianship and data management, but for a moment, I was able to “speak truth to power,” which was a good reminder that yes, we know things in our field, and yes, I may know a few things, too.  And that felt good.  (I told you this paragraph was a personal note!)

 


Budget cuts to libraries, archives, and information centres jeopardize access to Canadian government information

This has not been a good spring for Canadian librarians and archivists, especially those who work at federal libraries and archives, which are being de-funded and dismantled by federal budget cuts. These information centres sustain government and public research capacity. Their ability to create, preserve, and provide access to public information in our country is at risk.

These cuts, and the centres and programmes in jeopardy, include:

I’m missing some announcements since I was away when so many of these cuts were announced, but this list nonetheless clarifies the seriousness of the situation. In the space of a few weeks, the federal government has severely hampered the nation’s ability to gather, document, use, and disseminate government and cultural information.

You can learn what many of these cuts mean in clear, practical terms by reading this post written by my archivist friend, Creighton Barrett, at Dalhousie University’s Archives and Special Collections.  Creighton explains how these cuts negatively affect the university’s ability to collect and maintain the records used by scholars and citizens in one community alone, and rightly notes that they are a “devastating” blow to information access in Canada. Now, consider how Creighton’s list grows when you add to it the ways in which these same cuts affect the libraries and archives in your own community, and then all other libraries and archives in Canada. And we haven’t even touched what these broader cuts mean for LAC’s programming and resources, StatCan programming, and the research capacity of federal departments and agencies. “Devastating” may well be an understatement in the long run.

These budget cuts are a knock-out punch to how public information is accessed and used across the country. The cuts don’t just affect the library community and your civil-servant friend down the road; they will affect the manner in which our society is able to find and use public information.  If public data is no longer collected (see StatCan), preserved (see LAC, NADP, CCA), disseminated and used (see PDS/DSP and cuts at departmental libraries), then does the information even exist in the first place? There will be less government and public information, fewer means to access this information, and fewer opportunities to do so.

Take a moment and recall the freedom you have been afforded to speak freely in this nation.  The utility of that freedom is dependent on your ability to access the information you use to learn, to criticize, to praise, or to condemn.  If knowledge is power, then a public whose national information centres and access points are ill-funded is a weakling. Libraries and archives provide Canadians with direct access to key government information, and for that very reason, they should be funded to the hilt.

This is where I get to my point: We are now facing a situation in Canada where government information has suddenly become far more difficult to collect, to access, and to use. The funding cuts that Canada’s libraries and archives face are an affront to the proper functioning of a contemporary democratic society. These cuts will impede the country’s ability to access public and government information, which will make it difficult for Canadians to criticize government practices, past and present.

I mentioned on Twitter that these cuts show us that the work of librarians and archivists is crucial to the nation’s interest. We are not mere record keepers, and neither do we spend our days merely dusting cobwebs off of old books. We are the people who maintain collections of public information, and we are the people who provide and nurture access to information. Many of us see ourselves as guardians of the public’s right to access information.  If we take on that guardianship, then we must defend and protect these collections and access points. I’m not talking about a Monday-to-Friday, 9-to-5 job. I’m talking about advocacy, which doesn’t have an on/off switch. Either you do it or you don’t.

So, what should you do? Get informed, speak up, and act.  Write letters to the editor. Write to your professional associations and other like-minded organizations; lend them your support, and when needed, tell them to add force to their own statements. Write to your MPs, to other MPs (especially to MPs who sit on government benches), to cabinet members, and to the PMO. When you’re socializing with friends who aren’t librarians and archivists, mention how our work affects their work and their personal lives. Massive cuts to the nation’s libraries and archives do not serve the public good. These cuts may help balance the financial books, but they create an information deficit that inhibits research, stymies dialogue and criticism, and makes government more distant from the people.


Why we must be apprehensive about DRM and digital locks

This little piece of news hasn’t yet got much coverage in the popular press, but it should. It shows why Canadians (and everyone, really) must be concerned about digital locks.  Librarians and lawyers are the ones taking note of it right now, but it’s an issue we should all worry about:

[blackbirdpie url="http://twitter.com/#!/librarybazaar/status/139008379023667201"]

 

Yes, that’s right – as Michael Geist reports, if you are Canadian and have ever purchased music through Napster Canada, then you run the risk of losing access to content you have paid for:

These downloads are DRM-encoded WMA files and can be backed up by burning them to audio CDs. Doing this will allow you access to your music on any CD player and generally have a maintenance free permanent copy. If you do not back up your purchased Napster music downloads by burning them to CD and you later change or reinstall your computer’s operating system, have a system failure or experience DRM corruption, then the downloads will stop playing and you will permanently lose access to them.

(Source: Napster Canada PR via Geist’s blog)

Let’s put this into perspective:

  • Customers have purchased items (music, objects, widgets, whatever) from a company with the assurance that these items can be accessed.  But the use of these music files is limited by a lock that the company will no longer support now that it has pulled out of the market and been bought by a competitor.
  • Customers have been advised by the company to effectively circumvent their digital locks if they want to continue listening to their music.  

I suppose that Napster Canada/Rhapsody is acting in good faith when they explain to Canadian customers how to ensure that the content they have already purchased will always be accessible. Napster/Rhapsody has informed customers that all they need to do is copy the data to audio CDs to ensure that the music can be played even if the digital lock on the file is ever corrupted. But does anyone else find it a tiny bit illogical that a company that normally espouses the use of digital locks is now effectively telling its customers to break the law and circumvent the lock in order to make sure they will always be able to access this music?

Digital Rights Management is something we must be wary of.  DRM limits the consumer’s rights to the content he or she has purchased; it “manages” rights by taking them away from the consumer. This is of particular concern in Canada, where so many organizations are subsidiaries of larger companies located elsewhere. If Napster pulls out of the Canadian market, will the digital locks that limit access to the content you purchased still be supported? It seems not. If Amazon were ever to pull out of the Canadian market (which is an unlikely scenario, but a worthy point to make), would its digital locks that limit access to the content you purchased still be supported? That would be up to Amazon to decide.  Digital locks keep your purchases at the mercy of the vendor, which is reason enough to oppose them.

Copyright is a mess, especially in Canada.  The law is antiquated and it does need an overhaul to actually work in our digital landscape.  But DRM and digital locks place an undue burden and risk on consumers (be they individuals, families, or libraries), most of whom are law-abiding citizens who respect intellectual property rights and do not copy content.

 

Postscript: Am I suggesting we back out of all e-content on account of DRM?  No, I’m not. What I’m trying to show, like so many others, is that the system is out of balance right now and will remain so in the future.  Advocacy is required to fix this.


Contemplations: Marginalia, Texts, and Analog Trails

This morning, I was playing around on Google Books while doing some research on the documentary history of the Province of Nova Scotia. Google Books is not my first choice as a resource since it’s such a difficult beast to break in spite of all its great historical content, but I was curious to see what might be digitized on the subject. That’s when I came across these pages in the front of The Documentary History of the State of Maine (1869), Vol. 1:

The Documentary History of the State of Maine (1869), Vol. 1, Scanned by "David" for Google Books

It seems that in the act of digitizing the text, David, our digitizer, has scanned his hand right into the book.  David, his ring, and his tiny-finger gloves have become digitized marginalia. Like a student’s note in the margins or a phone number quickly inscribed in the front matter, David’s fingers are now part of the text, forever.¹

Marginalia has always fascinated me. I owe this to a distinguished professor who held court in one of my undergraduate seminars many years ago.  He once explained to us the pleasure he found when discovering his students’ notes in the margins of texts in the university library.  As he was an older professor and had taught at the school for many years, he knew the library’s collection and his students’ use of texts in his field quite well. He enjoyed discovering hand-written notes in his assigned texts or in books that were pertinent to his subject matter since these notes became “analog trails” (my term, and a pun on “digital trails”, of course) that led back to the discussions held in his seminars and to the knowledge developed in them.

I’ve since come to look upon marginalia as tiny clues that show how a text has linked different people and ideas together. I often wonder, in a nostalgic way, how these bonds will change when ebooks become ubiquitous. We can append and share notes in digital texts, of course. But these notes, which were at one time inscribed in the book or on a piece of paper and left to be discovered by another reader, have been transformed by common fonts and encoding that might link and share thoughts but don’t show significance or meaning in quite the same way. In the e-book cloud and on our social websites, readers and the value of notes are flattened, which, I think, affects the importance and allure of this marginalia.²

It goes without saying that the e-book has altered our relationship to the text and to knowledge. No longer do we have a one-to-one relationship with the physical object in front of us. Now we can potentially have a one-to-many relationship with all of the text’s readers. There are clear benefits to be gained from this – don’t think that I’m a Luddite who wants to turn his back on the new communities of readers that are developing thanks to e-book innovations.  But my thoughts today (and what this post is only scratching the surface of) are focused on how the physical manifestation of a text – i.e., the book – affects our relationship with its content. A book’s marginalia often represents one person’s relationship with a particular copy of a text rather than one’s relationship with a community of fellow readers. Reading marginalia is almost like reading a diary since one is reading notes and thoughts left primarily for personal consumption. When we encounter marginalia, we are discovering secrets and clues left behind by other readers – clues that can alter our interpretations of the text, but only in the copy we are holding in our hands.

Marginalia also individualizes or “makes unique” texts that are published in large volumes. Just as violinists treasure their violin’s lineage from one musician to another, many readers treasure the signs of a book’s “borrowing history”: the notes on the pages left behind by previous readers, the dog-eared corners, the discoloured, yellowed pages which signify its age and, in some ways, its value to the collection. All these marks, notes, dents, and scribbles create a “lineage” of readers for the text. They show the would-be reader the value that others have found in the text, and the added value he or she may acquire upon reading it.  These scribbles and folds haunt a physical book; they create a history of reading, marked in time and place by the thoughts of its previous readers.

We are shifting away from a centuries-old period where the content and its container were inseparable – where the content was signified by the container, and where the container gave the reader clues about the content’s worth. Although it hasn’t been difficult for our culture to make the transition to our new digital period, where the container’s role has been diminished, I wonder if we should be paying more attention to how our interaction with texts – whether it is writing marginalia or selecting ebooks from a virtual shelf – affects our understanding of knowledge and the development of “collective wisdom.” That’s not to say that things are worse (or better) off today compared to the “time before e-books” so much as it is to suggest that, when our interaction with knowledge has for so long been focused on reading the written word with a pen and paper close at hand, it may be a useful exercise to study how our new tools and technologies affect the ways we think and learn.

I’ll leave these theoretical and literary implications alone for another day when I have the courage to transform these meandering thoughts into a well-sourced argument that might provide understanding. And I’ll end by acknowledging the irony found in writing these thoughts in digital form for a larger community of readers.

1.  For the record, David caught his mistake and re-scanned the page.  The next scans in the Google Books scroll of images are clean digital images of these pages.  Also, my research on early Nova Scotian documents continues.

2.  I am not suggesting that no extra meaning or significance can be found in e-book notes or on social reading websites. Social sites actually do an incredible job at adding meaning to a text, but they do this in different ways, e.g., crowd-sourced discussions and reviews.

Google Reader Takes a Bow as Google Plus Takes the Stage: the death of critical reading on the Internet

The new Google Reader was released this week.  Its UI changes have streamlined its sharing functions in order to integrate it as easily as possible with Google Plus.  At first, I didn’t mind the changes, mostly because the clean white interface keeps distractions to a minimum in the Reader interface.

But I’ve now changed my mind. I’m not sure I like the Google Reader changes. The new interface’s clean lines mean that readability has stayed the same, if not improved, i.e., its look and feel seem to promote the act of reading over skimming. But the changes to its sharing function really are an issue. Sure, I can share things to G+, but clicking on the +1 button is akin to shouting into the din of the Internet.  I can share and share and share as much as I like, but I don’t know if people are sharing their own Google Reader items back into my G+ stream. Furthermore, I don’t know what kind of content they’re sharing anymore.  Are the shared items in my Google+ stream coming from a valued Google Reader store?  Or are these items just clicks and pages found while surfing the net?  At best, the former Google Reader dialogue is now feigned (it’s now a monologue on G+), and the quality of those shared pages on G+ is indeterminable.  I’m now looking for options.

The changes we’ve seen to Google Reader have got me thinking again about the nature of reading, skimming, and sharing on the Internet. What made Google Reader so great (aside from the emphasis on reading, see above) was the assurance of quality that came with its shared items.  Shared items on Google Reader were posts that came from blogs and websites that people believed were important enough to read regularly, as opposed to mere posts and pages found while they or their friends surfed – and skimmed – the Internet.  People who used Google Reader had a better assurance that the content they found in their shared folder was carefully chosen, was fit for consumption, and required some of their time and attention to synthesize.

The fact that blog posts and shared items in Google Reader sat in a folder until the user actually read them shows the importance of the items’ content.  Google Reader’s interface – like all RSS interfaces – demanded that the user actually read the content he or she saved to the system: content did not disappear until you at least saw that it had arrived for you to read. This premise behind Google Reader, i.e., that posts are to be saved for later reading, meant that its users selected content that was not merely ephemeral.  By its very nature, Google Reader asked the user to choose only the best content on the web and to store it in a separate space to read at a later time.  By and large, shared items on Google Reader had a quality assurance label stuck to them: these posts were determined to be distinct from the general “of the moment” nature of the web and therefore should be treated with care. Anything shared on Google Reader required special attention because someone said, “This content came from a valued source and ought to be read, and it is not going away until you at least see that I’ve shared it with you.”

Google Plus, Twitter, Facebook, and so many other social sites do not do this.  Social networks promote connections above all else, so the content is almost always “of the moment” (that’s the second time I’ve said that).  Content on social sites is pinned to a moment in time, but the conversation always moves forward. If you log in to Facebook at 2pm, you only see the conversations happening at 2pm; you must look carefully for what your friends shared earlier in the day.  That shared content may have been valuable, but there is no easy way to flag that value on social networks since content is subordinated to relationships and connections as they exist the moment you are online.

I like Google Plus, I really do. But like other social media sites, Google Plus emphasizes shared connections and the constant stream of chatter that arrives on your screen.  Of course, we can stop that stream at any time, click on a link, and fully consume what has been offered to us, but a social site’s design promotes social conversations over thought and analysis. I still believe that the “Internet Age” is an age of skimming. We are living in a time where thorough, critical analysis has been subordinated to the conversation. I’d like to see a balance restored between the two. So long as we aren’t reading well – so long as we aren’t taking the time to think critically about what we visit and read online – we are preventing those conversations from reaching their full potential.

Internet 2.0! Now skim faster and shallower!

Further Reading: Google Ripples

This post came down my Google+ stream today, which I wanted to share:

"Reshare this post so we can test the new Google+ Ripples!"

This post is a request to share another post and an invitation to check out Google+ Ripples.  Google+ Ripples is a Google+ feature that visualizes a post’s activity.  It will show you a post’s broadcast potential by visualizing who has shared it:

Visualized Shares in Google Ripples

This is how Google Ripples has visualized the way the original post has been shared

Brad Matthies does a good job showing you how to view a Ripple for any G+ post – check out his blog for more information.

I don’t have much to say yet on Google Ripples – it’s still very new and novel, and I think people are playing with it more than they are thinking about what it does, how it does it, and what it might mean (if it means anything).

One thing did cross my mind as people in my own Google+ stream started to share my own share of the post, though. I’m curious to see how Google+ Ripples will turn out. It may only be visualizing and making public the links we all make to one post on the Goog, but I’m interested to know if there might be a backlash against it.

This is interesting because people, including myself, often become uncomfortable and vocal when the relationships between ourselves and the information that connects us are revealed on the Internet, so I’m almost expecting a public pushback against Google+ Ripples (even though its graphs centre on the information rather than the carrier).  But once the Info.Corps back down and put the cover back over our social graphs, we stop worrying and carry on with our day on the Internet – business as usual.  The thing that gets to me, though, is that we don’t really think twice about the fact that this information about us has already been collected and, on some social media networks, is being shared with third parties without our knowledge.

Don’t take that to mean that I’m being alarmist – I’m not suggesting we shut down all of our accounts immediately. I’m only observing the way we sometimes object to the public display of our social graphs but don’t seem to worry that the information is being collected in the first place.

P.S. I think the + in Google+ is a little silly at this point.  I really want to just call it “Google Ripples.”