
                     Current Cites
   
              Volume 11, no. 2, February 2000
                Edited by Teri Andrews Rinne
  
   The Library, University of California, Berkeley, 94720
                       ISSN: 1060-2356
   http://sunsite.berkeley.edu/CurrentCites/2000/cc00.11.2.html
   
   Contributors: Terry Huwe, Michael Levy, Leslie Myrick, Margaret
   Phillips, Jim Ronningen, Lisa Rowlison, Roy Tennant
   
   Atkins, Helen, _et al._ "Reference Linking with DOIs: A Case Study"
   D-Lib Magazine 6(2) (February 2000)
   (http://www.dlib.org/dlib/february00/02risher.html) - Digital Object
   Identifiers (DOIs, see http://www.doi.org/) were developed in 1997 by
   the Association of American Publishers as a persistent identifier for
   digital objects. A DOI can be considered roughly analogous to an ISBN
   in that each is a unique ID for a specific work, but a DOI is also
   more complicated since it can identify article-level objects. To make DOIs
   work in a web environment, there must be a way to take the unique
   identifier and resolve it into a pointer to that item wherever it may
   exist. Therefore, a key piece of the infrastructure to support DOIs is
   some sort of resolution service, which this article outlines. The
   present DOI resolution service is a prototype metadata database system
   dubbed DOI-X. It is based on XML and the CNRI Handle System (see
   http://www.handle.net/ for more information). For anyone interested
   in persistent linking to digital objects, this work is well worth
   watching. - RT
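
   As a rough, hypothetical illustration of what resolution looks like
   from the client side (the article itself describes the DOI-X
   prototype, not this), the sketch below asks the public DOI proxy at
   http://dx.doi.org/ where a given identifier currently points. The
   sample DOI string is invented for the example.

      # A minimal sketch of client-side DOI resolution, assuming the
      # public proxy at http://dx.doi.org/ redirects a DOI to its
      # current location. The DOI below is a made-up placeholder.
      import urllib.request

      def resolve_doi(doi):
          """Ask the DOI proxy where this identifier currently points."""
          with urllib.request.urlopen("http://dx.doi.org/" + doi) as resp:
              # urlopen follows the proxy's redirect; the final URL is
              # the location currently registered for the object.
              return resp.geturl()

      print(resolve_doi("10.1000/example-doi"))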
   
   Bambrick, Jane. "Dreams of the Perfect Database" EContent 23(1)
   (February 2000): p. 21-24. - How could I resist citing an article that
   begins: "Last night I dreamt of the 'perfect database' again"? This
   essay offers, in the form of a dream vision, a primer of good database
   and interface design. As someone who entertains plenty of dreams about
   right-on databases and nightmares about recalcitrant or ill-designed
   ones, I was happily drawn into Bambrick's vision of precisely what
   features might add up to the perfect database to accommodate the needs
   of students, faculty and librarians, and even those outside of
   academia. Needless to say, the Ur-database she envisions may be "the
   stuff of dreams," but any combination of her desiderata would make for
   a solid start. EContent is soliciting your dreams, too. - LM
   
   "The Digital Divide" Intellectual Capital 5(6) ( February 10-17, 2000)
   (http://www.intellectualcapital.com/issues/issue345/main345.asp) This
   special issue of Intellectual Capital addresses The Digital Divide,
   recently designated by Al Gore as "the number one civil and economic issue." 
   In the wake of the catastrophic picture painted by the Commerce Department's
   report "Falling Through the Net," Clinton's budgetary reaction has
   been to earmark some 2 billion for the development of community- and
   educational-based technology training in low-income rural and
   inner-city areas. This being Intellectual Capital, the concern is
   primarily centered on e-commerce and e-business (as well as e-labor),
   and most of the articles explore what the balance should be between
   depending on government subsidies to overcome the divide and letting
   the market offer its own solutions, with heavy emphasis on the latter.
   Keith Fulton, in "On the Road to Fat Pipes," examines how business
   strategists are beginning to make a connection between fat pipes for
   data and fat pipes for human capital development. Citing Ford
   Corporation's recent move to provide its labor force with home
   computers and training, he lauds the vision of some corporate leaders
   to find a solution to the labor divide from within. Maureen Sinhal, in
   "A New War on Poverty," examines the dangers inherent in trusting
   government subsidies alone to redress the problem. Wary of the
   statistical analyses of reports such as "Falling Through the Net,"
   she, too, points to a market-based solution and calls for an
   assessment of what, precisely, the outcome of lack of computer and web
   access means - will those on the wrong side of the divide be merely
   inconvenienced, or left behind? Lee Hubbard, in "A Disingenuous
   Divide," offers plenty of statistical studies that show how middle
   class African Americans are availing themselves of the web in
   ever-increasing numbers. He cites the efforts of Jesse Jackson and
   websites such as OneNetNow to make relevant content available for a
   burgeoning African American market, suggesting that measuring the
   digital divide along strictly racial (vs. economic) lines is
   disingenuous. - LM
   
   Durrance, Joan C. and Karen E. Pettigrew. "Community Information: the
   Technological Touch" Library Journal (http://www.ljdigital.com/)
   125(2) (February 1, 2000): 44-46. - The public library's role as a
   center for community information has grown with the advent of
   electronic access. The authors are currently conducting a study of the
   ways in which this function is being performed, and describe their
   findings to date. (The URL for their project site is
   http://www.si.umich.edu/helpseek/). In the first phase of their
   research, hundreds of libraries were surveyed, and the 227 which were
   identified as heavily involved in community information were sent
   follow-up surveys; 136 responded. The authors are currently conducting
   intensive case studies of three public library systems. Their
   narrative here highlights notable development histories and outreach
   efforts, with the emphasis on the use of information technology, and
   links are given wherever relevant. The article and associated links
   comprise a wonderful gateway to resources on the subject. - JR
   
   Guernsey, Lisa. "Suddenly, Everybody's an Expert" New York Times
   (February 3, 2000): Section G, p.1. - Guernsey describes the
   phenomenon of online experts and web sites, often called expert sites
   or knowledge networks. In a twist on traditional library reference
   service, Internet users are using real people to answer information
   requests - but for a fee. Such expert advice can either be seen as the
   "democratization of expertise" or "psuedoresearch." Some sites
   generate income by charging fees for their experts, while others make
   commissions off goods purchased as a result of expert recommendations.
   This opens up a number of issues crucial to information professionals,
   including the authenticity and accuracy of information, the
   credentials and background of the experts, and their objectivity
   given potential relationships with commercial enterprises. In particular,
   these issues assume critical proportions when the advice being sought
   is medical. One expert on nutrition described her credentials in the
   following way: "It's not my experience that you care about. It's your
   problem." Some sites have disclaimers about their responsibility for
   harmful advice, others have rating systems similar to online auction
   sites in order to build credibility. - ML
   
   Junion-Metz, Gail. Coaching Kids for the Internet: A Guide for
   Librarians, Teachers, and Parents. Internet Workshop Series Number 9.
   Berkeley, California: Library Solutions Press, 2000. ISBN
   1-882208-29-3 (http://www.library-solutions.com/coaching.html). - As a
   newly-christened children's librarian (and parent), who works with
   teachers on a daily basis, I can wholeheartedly endorse this book on
   behalf of each of its intended audiences. Designed as a sequel to K-12
   Resources on the Internet, the book focuses on the adult as an
   Internet coach guiding the child. Not only are basic instructional
   information and practice exercises included, but also administrative
   guidance in planning and acquiring Internet access in schools and
   libraries. An accompanying disk includes links to Internet resources
   for kids (grouped by subject matter and age ranges), and reference and
   instructional resources for teachers, librarians, and parents. An
   added bonus is that these links are kept up to date on the author's
   web site, which the user has access to via a link on the disk. - TR
   
   Kenney, Anne R. and Oya Y. Rieger. Moving Theory into Practice:
   Digital Imaging for Libraries and Archives. Mountain View, CA:
   Research Libraries Group, 2000
   (http://www.rlg.org/preserv/mtip2000.html). - One of the difficulties
   of digital library work has been the dearth of solid, practical
   information on what to do and how to do it. Lately some very useful
   papers, articles, and guidelines have appeared, but so far few books
   of any practical use. One of the few that has been useful to digital
   library developers was Kenney's earlier work with Stephen Chapman,
   Digital Imaging for Libraries and Archives. Now Kenney has teamed up
   with Oya Rieger to produce this latest work that moves what was
   largely theory into production, with all of the lessons such a move
   entails. In doing so, Kenney and Rieger highlight the knowledge and
   experience of dozens of the most experienced and authoritative digital
   imaging practitioners. Here you will find down-to-earth practical
   advice and proven strategies. The people who have contributed chapters
   or sidebars to this book have been through it, and are telling you
   what they learned so that you can share their success or avoid their
   failure. Don't let the rather steep price put you off -- this book is
   worth every penny and should be in the hands of any librarian or
   archivist tackling a digital imaging project. - RT
   
   Pace, Andrew L. "Digital Preservation: Everything New is Old Again"
   Computers in Libraries (February 2000): p. 55-58.
   (http://www.infotoday.com/cilmag/feb00/pace.htm) - The February issue
   of CIL focuses on "Archiving Considerations for a Digital Age," from
   which I will single out Andrew Pace's maiden article in the Coming
   Full Circle column as more or less paradigmatic of the discussion that
   may be found there. As a sidenote, this issue also contains an
   interesting and well-illustrated article by the principal
   investigators of the Digital Atheneum project, for which I cited a
   different article last month. To paraphrase the callouts for Pace's
   Coming Full Circle article, at issue is the preservation of digital
   materials as material artifacts. The rhetorical question of the day
   has to be: "Are digital materials to be seen as artifacts or simply
   intellectual content?" Pace postulates that the vision of the digital
   library has so favored a self-concept as an "accessible repository"
   that the longevity of the digital object (not to mention any interface
   to it) often seems to have taken a back seat to issues of
   accessibility. In the end, paradoxically, our capacity to store
   digital data is increasing in inverse proportion to the longevity of
   the media at hand. He outlines a handful of digital preservation
   strategies: 1) refreshing the physical medium (e.g. from floppy to CD
   to DVD); 2) migration to new software formats; 3) preservation of
   outdated technology (e.g. keeping an old Commodore 64 handy, perhaps
   down in Special Collections); 4) digital archaeology; 5) emulation -
   retaining information about the process of digital creation and
   access, so that future generations can recreate it using their archaic
   Pentium III PCs; and 6) preservation through redundancy - letting
   surrogates stand for the originals. - LM
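
   As a small, hypothetical illustration of the first strategy, the
   sketch below copies a file from old media to new media and verifies
   a checksum on both sides - the kind of integrity check a refresh
   workflow typically relies on. The file paths are invented.

      # A minimal sketch of "refreshing the physical medium": copy a
      # file to new storage and confirm the bits survived the move by
      # comparing checksums. Paths are placeholders.
      import hashlib
      import shutil

      def checksum(path):
          """Return the MD5 digest of a file, read in chunks."""
          digest = hashlib.md5()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(65536), b""):
                  digest.update(chunk)
          return digest.hexdigest()

      def refresh(old_path, new_path):
          """Copy a file to new media and verify the copy is identical."""
          shutil.copyfile(old_path, new_path)
          if checksum(old_path) != checksum(new_path):
              raise IOError("refresh failed: checksums differ")

      refresh("/mnt/old_floppy/report.txt", "/mnt/new_cd/report.txt")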
   
   Pritcher, Lynn. "Ad*Access: Seeking Copyright Permissions for a
   Digital Age" D-Lib Magazine 6(2) (February 2000)
   (http://www.dlib.org/dlib/february00/pritcher/02pritcher.html). - Fear
   of copyright infringement and the possibility of lawsuit must be one
   of the most common companions of library digitization projects. Unless
   a project is focused solely on public domain material, permission to
   digitize must often be obtained from the rights holder before work
   begins. In this interesting piece, Pritcher describes how the Digital
   Scriptorium at Duke University went about obtaining permission to
   digitize more than 7,000 advertisements from newspapers and magazines
   published mainly in the U.S. between 1911 and 1955. Their decisions,
   and the reasons for them, are quite interesting and may be useful to
   others wishing to do similar projects. The result of their effort can
   be seen at the Ad*Access site
   (http://scriptorium.lib.duke.edu/adaccess/). - RT
   
   Silberman, Steve. "The Quest for Meaning" Wired (February 2000):
   p.173-179. (http://www.wired.com/wired/archive/8.02/autonomy.html) - A
   British startup company called Autonomy is using the mathematical
   theories of 18th century Presbyterian minister Thomas Bayes as a basis
   for creating sophisticated information retrieval tools. Bayes helped
   shape modern probability theory with his method of statistical
   inference - using mathematics to predict the outcome of events.
   Basically, Bayes' theorem takes into account previously held knowledge
   as well as new observations to infer the probable occurrence of an
   event. In the modern era this Bayesian model allows a computer to
   incorporate prior knowledge of millions of events and then build a
   base of prior probabilities which can be factored into current
   decision-making. In the article Silberman gives the example of the
   word "penguin," which might refer to the bird or the hockey team. If
   the word is clustered near words such as "ice" and "South Pole," then
   the system will infer that the text is talking about the bird. In
   fact, even if the word "penguin" is not explicitly mentioned, the
   software can still recognize that the text is about a penguin given
   the clustering of words. For retrieval tools, prior knowledge of what
   most users are
   trying to locate can be incorporated into retrieval strategies, with
   the Bayesian system "teaching" the computer about relationships
   between words. Various software programs are being developed that
   would create custom-tailored pages based on past preferences and
   searching, predicting how a user would react to a new information
   source. The hope is that such sophisticated software will allow us to
   navigate through the morass of information sources. - ML
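
   A toy sketch of the idea follows (not Autonomy's actual software,
   whose internals the article does not spell out): treat the two
   senses of "penguin" as hypotheses, start from assumed priors, and
   let nearby words shift the probabilities via Bayes' theorem. Every
   number below is invented for illustration.

      # A toy, hypothetical illustration of Bayesian sense
      # disambiguation: prior beliefs about what "penguin" means are
      # updated by the words that appear near it. All numbers invented.

      # Assumed priors: how often "penguin" refers to each sense.
      priors = {"bird": 0.5, "hockey team": 0.5}

      # Assumed likelihoods: P(nearby word | sense). Unlisted words get
      # a small default so unseen evidence never zeroes out a sense.
      likelihoods = {
          "bird":        {"ice": 0.30, "south pole": 0.20, "goal": 0.01},
          "hockey team": {"ice": 0.25, "south pole": 0.001, "goal": 0.15},
      }
      DEFAULT = 0.01

      def posterior(nearby_words):
          """Multiply each prior by the likelihood of the observed
          context words, then normalize so the senses sum to one."""
          scores = {}
          for sense, prior in priors.items():
              p = prior
              for word in nearby_words:
                  p *= likelihoods[sense].get(word, DEFAULT)
              scores[sense] = p
          total = sum(scores.values())
          return {sense: p / total for sense, p in scores.items()}

      # Context suggesting the bird: "ice" and "south pole" nearby.
      print(posterior(["ice", "south pole"]))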
   
   Van de Sompel, Herbert and Carl Lagoze. "The Santa Fe Convention of
   the Open Archives Initiative" D-Lib Magazine 6(2) (February 2000)
   (http://www.dlib.org/dlib/february00/vandesompel-oai/02vandesompel-oai
   .html). - The Open Archives initiative is a collaboration among
   several successful electronic preprint (e-print) archives to develop
   an interoperable technical infrastructure to allow a user at any one
   e-print archive to transparently query another e-print archive. The
   Santa Fe Convention (http://www.openarchives.org/sfc/sfc_entry.htm)
   defines a set of agreements that form the essential organizational and
   technical infrastructure to achieve interoperability. Pieces of this
   infrastructure include the Open Archives Metadata Set (OAMS, see
   http://www.openarchives.org/sfc/sfc_oams.htm), a set of nine metadata
   elements to assist in resource discovery, the Open Archives Dienst
   Subset (a subset of the full Dienst protocol developed by the NCSTRL
   project, http://www.ncstrl.org/, see
   http://www.cs.cornell.edu/cdlrg/dienst/protocols/OpenArchivesDienst.ht
   m), and an organizational framework. The Open Archives initiative is
   an important development, and one that bears watching. - RT
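
   As a rough, hypothetical sketch of the metadata side of such an
   agreement, the snippet below serializes a small e-print record as
   XML. The element names are illustrative stand-ins, not the actual
   nine OAMS elements, which are defined at the URL above.

      # A hypothetical sketch of a minimal e-print metadata record of
      # the kind an interoperable archive might expose for harvesting.
      # Element names are stand-ins, not the official OAMS set.
      from xml.sax.saxutils import escape

      def make_record(title, authors, date, identifier):
          """Serialize a small metadata record as XML."""
          lines = ["<record>"]
          lines.append("    <title>%s</title>" % escape(title))
          for author in authors:
              lines.append("    <author>%s</author>" % escape(author))
          lines.append("    <date>%s</date>" % escape(date))
          lines.append("    <identifier>%s</identifier>" % escape(identifier))
          lines.append("</record>")
          return "\n".join(lines)

      print(make_record("An Example E-Print", ["A. Author", "B. Author"],
                        "2000-02-01", "oai:example-archive:0001"))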
   
   Van de Sompel, Herbert, _et al._ "The UPS Prototype: An Experimental
   End-User Service across E-Print Archives" D-Lib Magazine 6(2)
   (February 2000)
   (http://www.dlib.org/dlib/february00/vandesompel-ups/02vandesompel-ups
   .html). - The Universal Preprint Service Prototype (UPS, see
   http://ups.cs.odu.edu/) was developed to demonstrate interoperability
   between disparate archives of electronic preprints (e-prints),
   specifically for a meeting of e-print archive developers in Santa Fe
   in October 1999 (see the citation for the Santa Fe Convention in this
   issue of Current Cites). The prototype gathered nearly 200,000 records
   for e-prints from several different archives and made them available
   for searching through the same interface. The experience gained from
   this project was fed directly into the deliberations of the attendees
   to the Santa Fe meeting, which no doubt contributed to a more useful
   and realistic result from that meeting. - RT
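
   A toy, hypothetical sketch of the core idea - pooling records
   harvested from several archives into one structure that a single
   search interface can query - appears below. The archive names,
   records, and fields are all invented.

      # A toy sketch of cross-archive searching: records harvested from
      # several e-print archives are pooled into one inverted index
      # that a single interface can query. All data is invented.
      harvested = {
          "archive-a": [{"id": "a:1", "title": "Binary Pulsar Timing"}],
          "archive-b": [{"id": "b:7", "title": "Binary Search Trees"}],
      }

      def build_index(sources):
          """Map each title word to the (archive, id) pairs holding it."""
          index = {}
          for archive, records in sources.items():
              for record in records:
                  for word in record["title"].lower().split():
                      index.setdefault(word, []).append((archive, record["id"]))
          return index

      def search(index, word):
          """Return every record, from any archive, matching the word."""
          return index.get(word.lower(), [])

      index = build_index(harvested)
      print(search(index, "binary"))  # hits from both archives
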
     _________________________________________________________________
   
   Current Cites 11(2) (February 2000) ISSN: 1060-2356
   Copyright (c) 2000 by the Library, University of California,
   Berkeley. _All rights reserved._
   
   Copying is permitted for noncommercial use by computerized bulletin
   board/conference systems, individual scholars, and libraries.
   Libraries are authorized to add the journal to their collections at no
   cost. This message must appear on copied material. All commercial use
   requires permission from the editor. All product names are trademarks
   or registered trade marks of their respective holders. Mention of a
   product in this publication does not necessarily imply endorsement of
   the product. To subscribe to the Current Cites distribution list, send
   the message "sub cites [your name]" to listserv@library.berkeley.edu,
   replacing "[your name]" with your name. To unsubscribe, send the
   message "unsub cites" to the same address. Editor: Teri Andrews Rinne,
   trinne@library.berkeley.edu.