Christina's LIS Rant

Sunday morning I was all set to do another essay – just had to pick a
question source and question – when my mother-in-law called to say she
would be stopping by at about the same time I would be finishing up the
2 hour window, leaving no time for emergency house cleaning (no, I
haven’t grown out of that yet despite being married for >10
years). So here are a few readings on “community” which I’ll drop like
a hot potato and then run to clean the house.

Both Wellman and Rheingold dispute the idea that we’re all “Bowling
Alone” (http://www.worldcat.org/oclc/43599073)
and assert that virtual communities appearing in computer-mediated
communication are
real communities, but what does “community” look like online?
 Is the implementation of a “community” software tool enough?
 We’re in a second wave of all sorts of vendors offering their
own online communities – this was also done in the 90s.  Are
these communities?  Only when they succeed?  Never?
It depends?  On what?  At the same time, there are
lots of articles coming out in the physics literature on mathematical
ways to identify cohesive subgroups in networks and they
call this process identifying communities.  Are they
identifying communities or only cohesive subgroups? Could you develop
an algorithm to locate a community?  How would you test what
you found to see if it’s really a community (or maybe it’s a group of
people all disputing a knowledge claim, what Collins called a core
set)?  Is a binary yes or no enough or do we need to know what
participants feel and why?
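As a toy illustration of what those network papers do (my own sketch, not an algorithm from any paper mentioned here), one simple heuristic for finding cohesive subgroups is to cut “bridge” edges whose endpoints share no common neighbors and keep the connected pieces that remain. Whether the pieces are *communities* in any felt sense is exactly the open question above.

```python
from collections import deque

def cohesive_subgroups(edges, min_shared=1):
    """Toy heuristic: drop 'bridge' edges whose endpoints share fewer
    than min_shared common neighbors, then return the connected
    components that remain as candidate cohesive subgroups."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    # Keep only embedded edges (endpoints have neighbors in common).
    strong = {n: set() for n in adj}
    for u, v in edges:
        if len(adj[u] & adj[v]) >= min_shared:
            strong[u].add(v)
            strong[v].add(u)
    # Connected components over the remaining strong edges.
    seen, groups = set(), []
    for start in sorted(adj):
        if start in seen:
            continue
        queue, group = deque([start]), set()
        while queue:
            n = queue.popleft()
            if n in group:
                continue
            group.add(n)
            queue.extend(strong[n] - group)
        seen |= group
        groups.append(sorted(group))
    return groups

# Two triangles joined by one bridge edge (2, 3): the bridge's
# endpoints share no neighbors, so it is cut and two groups remain.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
print(cohesive_subgroups(edges))  # → [[0, 1, 2], [3, 4, 5]]
```

Note that this answers only the structural half of the question: it locates dense subgroups, and says nothing about whether members feel membership, influence, or connection.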

Blanchard, A. L., & Horan, T. (1998). Virtual Communities and Social
Capital. Social Science Computer Review, 16(3), 293-307.

This article is more or less a direct response to Putnam’s Bowling
Alone.  Putnam’s thesis was that increasing online activity led to
decreasing community participation and civic engagement, and that this
low participation hurts the community as a whole.  They look
at three possible outcomes of online communities: 1) that online
communities enhance f2f
communities, 2) that online communities detract
from f2f communities, or 3) that they are unrelated. Since this was
written, social capital has been defined (and operationalized) at an
individual level, a group level, and then a societal level.
Putnam really looks at the societal level. They quote him
describing it as “the features of social organization such as networks,
norms, and social trust that facilitate coordination and cooperation
for mutual benefit.”  When they define virtual communities,
they differentiate between online places for physical communities (my
neighborhood has a Yahoo! Group) and online-only communities.

Networks in virtual communities might be larger and more geographically
dispersed.  They might also encourage participation by some
who might not participate in f2f.  Norms in communities
include reciprocity – doing favors and having favors returned.
 The idea in this article is that generalized reciprocity (not
direct, where Mary does a favor for Bob and Bob returns it, but
indirect: Mary does a favor for Bob, Sue sees it, and Sue later does a
favor for Mary) is increased in virtual
communities because helping acts are visible (see, however, Wasko
& Faraj, discussed on my old blog – they found that reciprocity
didn’t really explain any variance in contribution to a professional
virtual community).  Blanchard and Horan also discuss lurking
as a negative social norm, akin to free riding (see, however, various
discussions by Nonnecke and Preece as well as those by Lave and Wenger
on legitimate peripheral participation).  With respect to
trust, it might be increased by increased social identity in virtual
groups and decreased social cues (less stereotyping by physical
attributes), but it will be decreased by flaming, trolls, and deception.

Blanchard, A. L. (2004). Blogs as Virtual Communities: Identifying a
Sense of Community in the Julie/Julia Project. Into the Blogosphere:
Rhetoric, Community, and Culture of Weblogs. Retrieved from
http://blog.lib.umn.edu/blogosphere/blogs_as_virtual.html

When I talk about blogs as communities, I mean like between blogs, or
collections of blogs, or bloggers linking to each other and commenting
on each others’ blogs.  In this paper, Blanchard looks at a
community that formed within the comments of a single blog (one that
became a book, and isn’t there a movie coming out?).  The comments in
this blog were like a forum and sometimes wandered from the topic of
the post and had a life of their own.  She asks the question
whether this is truly a community or only a virtual settlement.
 Virtual settlement comes from a paper in JCMC by Jones. It is
defined as when there is “a) a minimal number of b) public interactions
c) with a variety of communicators in which  d) there is a
minimal level of sustained membership over a period of time.”
A community, on the other hand, has a sense of community, which includes
a) feelings of membership, b) feelings of influence, c) integration and
fulfillment of needs, and d) shared emotional connection.
 This “sense of community” comes from f2f research on
communities (the next article discusses measuring it in virtual
situations). She did a survey of the commenters after the blog had been
around for 11 months.  Some respondents who commented
frequently felt strongly that it was a community while others who kind
of read it like they would a newspaper, thought not (oh, really? :) )
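Jones’s settlement conditions are concrete enough to check mechanically against a comment log, unlike “sense of community,” which needs the survey. A minimal sketch (the numeric thresholds are my placeholders; Jones names the conditions, not the cutoffs):

```python
from datetime import date, timedelta

def is_virtual_settlement(posts, min_posts=20, min_authors=3, min_days=30):
    """Rough check of Jones's conditions for a 'virtual settlement'.
    posts: list of (author, date) pairs for public messages.
    Thresholds are arbitrary placeholders, not from Jones."""
    if not posts:
        return False
    authors = {a for a, _ in posts}
    days = [d for _, d in posts]
    return (len(posts) >= min_posts              # (a) minimal number of
            # (b) public interactions: assumed, since we only see the log
            and len(authors) >= min_authors      # (c) variety of communicators
            and (max(days) - min(days)).days >= min_days)  # (d) sustained

# A comment feed with 24 posts from 4 people over ~7 weeks qualifies:
posts = [(f"user{i % 4}", date(2009, 1, 1) + timedelta(days=2 * i))
         for i in range(24)]
print(is_virtual_settlement(posts))  # → True
```

The check is deliberately binary, which circles back to the question above: is a yes/no enough, or do we need to know what the participants feel?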

Blanchard, A. L. (2007). Developing a Sense of Virtual Community
Measure. CyberPsychology & Behavior, 10(6), 827-830. DOI:
10.1089/cpb.2007.9947

This one was done a few years later (obviously) and she was trying to
develop a valid and repeatable sense of community measure for virtual
communities. In previous work, people pretty much just adapted the f2f
sense of community, but it turns out that community might feel
different in virtual settings than f2f. This measure was developed like
others – f2f scales were modified, and new questions were added to
address things that are different in virtual settings. There was a
pilot, and then it was tested with other groups (total n=256, 7 usenet
groups and listservs).  She ran factor analysis with maximum
likelihood factoring and a promax rotation.  Once things were dropped
that didn’t load where they were supposed to, the internal reliability
coefficient for the SOVC scale was 0.93.  Tested with the
groups, it explained 53% of the variation while the standard sense of
community only explained 46% (better, but eh.)