QR code (abbreviated from Quick Response Code) is the trademark for a type of matrix barcode (or two-dimensional barcode) first designed for the automotive industry in Japan. A barcode is a machine-readable optical label that contains information about the item to which it is attached. A QR code uses four standardized encoding modes (numeric, alphanumeric, byte/binary, and kanji) to efficiently store data; extensions may also be used. (Source: https://en.wikipedia.org/wiki/QR_code) Accessed on 31/10/17
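To make the standardized modes concrete, the sketch below picks the densest mode that can represent a given string. The numeric and alphanumeric character sets match the QR specification; the function itself is illustrative and not taken from any real QR library.

```python
# Sketch: choosing the most compact standard QR encoding mode for a string.
# Character sets follow the QR specification; the function is illustrative.
NUMERIC = set("0123456789")
ALPHANUMERIC = set("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ $%*+-./:")

def qr_mode(data: str) -> str:
    """Return the densest standard mode able to encode `data`."""
    if all(c in NUMERIC for c in data):
        return "numeric"        # 10 bits per 3 digits
    if all(c in ALPHANUMERIC for c in data):
        return "alphanumeric"   # 11 bits per 2 characters
    return "byte"               # 8 bits per character (kanji handled separately)
```

Note that the alphanumeric mode supports only uppercase letters, so mixed-case text falls back to byte mode.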
Open data is data that anyone can access, use and share. Governments, businesses and individuals can use open data to bring about social, economic and environmental benefits. Open data becomes usable when made available in a common, machine-readable format. Open data must be licensed, and its licence must permit people to use the data in any way they want, including transforming, combining and sharing it with others, even commercially.
Big data is characterized by three Vs: Volume, Velocity, and Variety. The first V, volume, is the easiest to understand. Big data differs from regular data in that the data sets are huge. How huge? That depends on the industry or discipline, but big data is loosely defined as data that cannot be stored or analyzed by conventional hardware and software. Traditional software can handle megabyte- and kilobyte-sized data sets, while big data tools can handle terabyte- and petabyte-sized data sets. The second V, velocity, covers the speed at which data is created. Think of the speed at which someone can create a single tweet on Twitter or post to Facebook, or how quickly thousands of remote sensors constantly measure and report on changing seawater temperatures. The third V, variety, makes big data sets more challenging to organize and analyze. Traditionally, the type of data collected by businesses and researchers was strictly controlled and structured, such as data entered into a spreadsheet with specific rows and columns, nice and clean. Big data sets can contain unstructured data such as email messages, photographs, postings on internet forums, and even phone transcripts.
Assistive software, also called adaptive software, refers to computer programs designed for specialized hardware used by physically challenged people. Examples include programs for screen magnification, screen reading, speech recognition, text-to-speech, Braille printers, Braille scanners, touch screen displays, oversized mice, and oversized joysticks. Assistive software and associated hardware are the computer-related components of a larger category of products known as assistive technology or adaptive technology. (Source: http://whatis.techtarget.com/definition/assistive-software-adaptive-software)
The Dublin Core Schema is a small set of vocabulary terms that can be used to describe web resources (video, images, web pages, etc.), as well as physical resources such as books or CDs, and objects like artworks. The full set of Dublin Core metadata terms can be found on the Dublin Core Metadata Initiative (DCMI) website. The original set of 15 classic metadata terms, known as the Dublin Core Metadata Element Set, consists of: Title, Creator, Subject, Description, Publisher, Contributor, Date, Type, Format, Identifier, Source, Language, Relation, Coverage and Rights.
(Source: https://en.wikipedia.org/wiki/Dublin_Core) Accessed on 31/10/17
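A minimal sketch of a Dublin Core record, using five of the 15 elements. The namespace URI is the standard DCMI elements namespace; the book details are made-up examples.

```python
import xml.etree.ElementTree as ET

# Standard DCMI elements namespace; the record contents are invented.
DC_NS = "http://purl.org/dc/elements/1.1/"

record = {
    "title": "A Tale of Two Cities",
    "creator": "Dickens, Charles",
    "date": "1859",
    "language": "en",
    "type": "Text",
}

root = ET.Element("metadata")
for element, value in record.items():
    ET.SubElement(root, f"{{{DC_NS}}}{element}").text = value

xml_out = ET.tostring(root, encoding="unicode")
```

The same 15 elements can also be embedded in HTML meta tags or expressed as RDF; the XML form above is only one common serialization.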
A Green Library is designed to minimize negative impact on the natural environment and maximize indoor environmental quality by means of careful site selection, use of natural construction materials and biodegradable products, conservation of resources (water, energy, paper), and responsible waste disposal (recycling, etc.). In new construction and library renovation, sustainability is increasingly achieved through Leadership in Energy and Environmental Design (LEED) certification, a rating system developed and administered by the U.S. Green Building Council (USGBC).
(Source: https://en.wikipedia.org/wiki/Green_library) Accessed on 31/10/17
A Digital Library is a special library with a collection of digital objects that can include text, visual material, audio material and video material, stored in electronic media formats (as opposed to print or other media), along with means for organizing, storing, and retrieving the files and media contained in the library collection. Digital libraries can vary immensely in size and scope, and can be maintained by individuals or organizations, or be affiliated with established physical library buildings, institutions, or academic institutions. The digital content may be stored locally, or accessed remotely via computer networks. An electronic library is a type of information retrieval system.
(Source: https://en.wikipedia.org/wiki/Digital_library) Accessed on 31/10/17
Hybrid libraries are mixes of traditional print material, such as books and magazines, and electronic material, such as downloadable audiobooks, electronic journals, and e-books. Hybrid libraries are the new norm in most public and academic libraries. The term “hybrid library” appears to have been first coined in 1998 by Chris Rusbridge in an article for D-Lib Magazine.
(Source: https://en.wikipedia.org/wiki/Hybrid_library) Accessed on 31/10/17
Open Library Environment (OLE) – An active community of academic and research libraries collaborating to build open source, extensible, and service-driven library management tools. The OLE Partners share a common vision to empower librarians and libraries by pooling our resources and directing our expertise and insights. The OLE Community formed in 2008 and has worked in the Kuali Community to build an open source library management system, Kuali OLE, released in 2014 and implemented at three of our Partner sites. OLE is grateful for generous funding from the Andrew W Mellon Foundation, and to the many dedicated and experienced staff of our Partners who have worked to make OLE a success. Today, OLE is excited to be joining in and working with the FOLIO Community along with EBSCO Information Services and Index Data. FOLIO builds on, and continues, the OLE vision of deep collaboration among librarians, developers, strategists, service providers, and vendors.
(Source: https://www.openlibraryenvironment.org/) Accessed on 31/10/17
Bibliometrics is the study or measurement of formal aspects of texts, documents, books and information. Scientometrics analyses the quantitative aspects of the production, dissemination and use of scientific information with the aim of achieving a better understanding of the mechanisms of scientific research as a social activity. Informetrics is a subdiscipline of information science and is defined as the application of mathematical methods to the content of information science. Webometrics is the application of informetric methods to the World Wide Web (WWW).
MARC (MAchine-Readable Cataloging) standards are a set of digital formats for the description of items catalogued by libraries, such as books. Working with the Library of Congress, American computer scientist Henriette Avram developed MARC in the 1960s to create records that could be read by computers and shared among libraries. By 1971, MARC formats had become the US national standard for dissemination of bibliographic data. Two years later, they became the international standard. There are several versions of MARC in use around the world, the most predominant being MARC 21, created in 1999 as a result of the harmonization of the U.S. and Canadian MARC formats (USMARC and CAN/MARC), and UNIMARC, widely used in Europe. The MARC 21 family of standards now includes formats for authority records, holdings records, classification schedules, and community information, in addition to the format for bibliographic records. MARC 21 is based on the ANSI/NISO standard Z39.2, which allows users of different software products to communicate with each other and to exchange data. (Source: https://en.wikipedia.org/wiki/MARC_standards) Accessed on 31/10/17
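To make the record structure concrete, the sketch below models a single MARC 21 data field (a three-digit tag, two one-character indicators, and subfields keyed by single-letter codes) in plain Python. Tag 245 and subfield codes $a/$c follow MARC 21 conventions; the formatting function itself is purely illustrative, not a real MARC serializer.

```python
# Illustrative model of one MARC 21 data field (not a real MARC serializer).
# Tag 245 is the title statement; $a is the title proper and $c the
# statement of responsibility, as in MARC 21.
field_245 = {
    "tag": "245",
    "indicators": ("1", "0"),
    "subfields": [("a", "A tale of two cities /"), ("c", "Charles Dickens.")],
}

def format_field(field):
    """Render a data field roughly the way catalogue displays show it."""
    subs = " ".join(f"${code} {value}" for code, value in field["subfields"])
    return f"{field['tag']} {''.join(field['indicators'])} {subs}"
```

A full MARC record adds a fixed-length leader and control fields (tags 001–009) before the data fields modeled here.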
Metadata is “data [information] that provides information about other data”. Three distinct types of metadata exist: descriptive metadata, structural metadata, and administrative metadata. (Source: https://en.wikipedia.org/wiki/Metadata)
- Descriptive metadata describes a resource for purposes such as discovery and identification. It can include elements such as title, abstract, author, and keywords.
- Structural metadata is metadata about containers of data and indicates how compound objects are put together, for example, how pages are ordered to form chapters. It describes the types, versions, relationships and other characteristics of digital materials.
- Administrative metadata provides information to help manage a resource, such as when and how it was created, file type and other technical information, and who can access it. (Source: https://en.wikipedia.org/wiki/Metadata)
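The three types can be illustrated for a single digitized document. The field names below are made up for illustration, not drawn from any particular schema.

```python
# Illustrative metadata for one scanned report, split into the three types.
# Field names are invented; no specific metadata schema is implied.
metadata = {
    "descriptive": {          # supports discovery and identification
        "title": "Annual Report 2016",
        "creator": "Example University Library",
        "keywords": ["annual report", "statistics"],
    },
    "structural": {           # how the compound object is put together
        "page_order": ["page-001.tif", "page-002.tif", "page-003.tif"],
    },
    "administrative": {       # management, technical and access information
        "created": "2017-03-01",
        "file_type": "image/tiff",
        "access": "staff only",
    },
}
```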
Linked Data is about using the Web to connect related data that wasn’t previously linked, or using the Web to lower the barriers to linking data currently linked using other methods. More specifically, Wikipedia defines Linked Data as “a term used to describe a recommended best practice for exposing, sharing, and connecting pieces of data, information, and knowledge on the Semantic Web using URIs and RDF.”
(Source : http://linkeddata.org/)
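The core idea of RDF, on which Linked Data rests, is the statement as a subject-predicate-object triple whose terms are URIs. In the sketch below, the predicate is the real Dublin Core “creator” property URI; the subject and object URIs are made-up examples.

```python
# One RDF statement as a subject-predicate-object triple of URIs.
# The predicate URI is the standard Dublin Core "creator" property;
# the subject and object are illustrative example.org URIs.
triple = (
    "http://example.org/book/moby-dick",
    "http://purl.org/dc/elements/1.1/creator",
    "http://example.org/person/herman-melville",
)

def to_ntriples(subject, predicate, obj):
    """Serialize one triple in N-Triples syntax."""
    return f"<{subject}> <{predicate}> <{obj}> ."
```

Because every term is a URI, anyone can publish further triples about the same subject, which is what lets independently published data sets link together.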
A learning management system (LMS) is a software application for the administration, documentation, tracking, reporting and delivery of educational courses or training programs. They help the instructor deliver material to the students, administer tests and other assignments, track student progress, and manage record-keeping. LMSs are focused on online learning delivery but support a range of uses, acting as a platform for fully online courses, as well as several hybrid forms, such as blended learning and flipped classrooms. LMSs can be complemented by other learning technologies such as a training management system to manage instructor-led training or a Learning Record Store to store and track learning data. (Source: https://en.wikipedia.org/wiki/Learning_management_system)
Blended learning is an education program (formal or non-formal) that combines online digital media with traditional classroom methods. It requires the physical presence of both teacher and student, with some element of student control over time, place, path, or pace. While students still attend “brick-and-mortar” schools with a teacher present, face-to-face classroom practices are combined with computer-mediated activities regarding content and delivery. Blended learning is also used in professional development and training settings. A lack of consensus on a definition of blended learning has led to difficulties in research on its effectiveness in the classroom. Blended learning is also highly context-dependent and therefore a universal conception of it is hard to come by.
“Blended learning” is sometimes used in the same breath as “personalized learning” and “differentiated instruction”.
A CMS is essentially a software package that lets you create and edit website content — including text, pictures, menus, and more — without having to know how to write code. A well-designed, up-to-date website is critical for a library of any size. Your patrons rely on your website for basic information about your library, such as directions to a branch or upcoming events. They also may go to your website hoping to search an online public access catalog (OPAC), download an e-book, or browse an online exhibit. A Content Management System, or CMS, can help you provide these services and manage them effectively, whether you have a volunteer managing your site or an entire department doing so. (Source: http://www.techsoupforlibraries.org/blog/content-management-systems-for-library-websites)
Lucene Core, our flagship sub-project, provides Java-based indexing and search technology, as well as spellchecking, hit highlighting and advanced analysis/tokenization capabilities. (Source: https://lucene.apache.org/)
Solr™ is a high performance search server built using Lucene Core, with XML/HTTP and JSON/Python/Ruby APIs, hit highlighting, faceted search, caching, replication, and a web admin interface. (Source: https://lucene.apache.org/)
FOAF is a descriptive vocabulary expressed using the Resource Description Framework (RDF) and the Web Ontology Language (OWL). Computers may use these FOAF profiles to find, for example, all people living in Europe, or to list all people both you and a friend of yours know. This is accomplished by defining relationships between people. Each profile has a unique identifier (such as the person’s e-mail addresses, international telephone number, Facebook account name, a Jabber ID, or a URI of the homepage or weblog of the person), which is used when defining these relationships. (Source: https://en.wikipedia.org/wiki/FOAF_(ontology) )
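A hedged sketch of how FOAF expresses “person A knows person B” as triples. foaf:name, foaf:mbox, and foaf:knows are real FOAF properties; the people and mailbox are invented, and the `_:` identifiers stand in for RDF blank nodes.

```python
# FOAF "knows" relationships as simple triples. The property URIs are real
# FOAF terms; the people are made up for illustration.
FOAF = "http://xmlns.com/foaf/0.1/"

triples = [
    ("_:alice", FOAF + "name",  "Alice"),
    ("_:alice", FOAF + "mbox",  "mailto:alice@example.org"),
    ("_:alice", FOAF + "knows", "_:bob"),
    ("_:bob",   FOAF + "name",  "Bob"),
]

def people_known_by(person):
    """List everyone `person` is stated to know."""
    return [o for s, p, o in triples if s == person and p == FOAF + "knows"]
```

Queries like “all people both you and a friend know”, mentioned above, amount to following chains of such foaf:knows triples.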
Functional Requirements for Authority Data (FRAD), formerly known as Functional Requirements for Authority Records (FRAR), is a conceptual entity-relationship model developed by the International Federation of Library Associations and Institutions (IFLA) for relating the data that are recorded in library authority records to the needs of the users of those records, and for facilitating the sharing of that data.
The draft was presented in 2004 at the 70th IFLA General Conference and Council in Buenos Aires by Glenn Patton. It is an extension and expansion to the FRBR model, adding numerous entities and attributes.
The conceptual work and future implementations are aimed at supporting four tasks, frequently executed by users in a library context — either the library patrons (the first three tasks), or the librarians themselves (all four tasks):
- Find: Find an entity or set of entities corresponding to stated criteria;
- Identify: Identify an entity;
- Contextualize: Place a person, corporate body, work, etc. in context;
- Justify: Document the authority record creator’s reason for choosing the name or form of name on which an access point is based.
CORAL is an Electronic Resources Management System consisting of interoperable modules designed around the core components of managing electronic resources. It is made available as a free, open source program. (Source: http://coral-erm.org/)
SUSHI stands for Standardized Usage Statistics Harvesting Initiative. It is a standard protocol (ANSI/NISO Z39.93) that can be used by electronic resource management (ERM) systems (and other systems) to automate the transport of COUNTER-formatted usage statistics. It can also be used to retrieve non-COUNTER reports that meet the specified requirements for retrieval by SUSHI. The SUSHI protocol is a standard client/server web service utilizing a SOAP request/response to retrieve the XML version of a COUNTER or COUNTER-like report.
COUNTER (Counting Online Usage of NeTworked Electronic Resources) is a not-for-profit organization formed in 2002 to develop standardized methods and reports for measuring the use of electronic resources. COUNTER created Codes of Practice, which define how to count and record usage, and the fields, formats, schedule of reports and protocols for combining usage reports from direct use and from use via intermediaries. COUNTER currently provides two Codes of Practice, one for Journals and Databases and one for Books and Reference Works. Release 3 of the Code of Practice for Journals and Databases went into effect in August 2008 (the deadline for vendors’ compliance was July 31, 2009). The current release for Books and Reference Works is Release 1. (Source: http://www.niso.org/workrooms/sushi/faq/general )
Relationship between SUSHI and COUNTER
In the context of SUSHI, the COUNTER reports formatted in XML are the payload which is requested and delivered using the SUSHI protocol. Delivery of COUNTER reports via the SUSHI protocol is included as a requirement in Release 3 of the COUNTER Code of Practice. The implementation of the XML-based SUSHI protocol by vendors will allow the automated retrieval of the COUNTER usage reports into local systems, making this process much less time consuming for the librarian or library consortium administrator.
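The information a SUSHI ReportRequest carries can be sketched as below. The element names (Requestor, CustomerReference, ReportDefinition) follow the SUSHI schema, but the identifiers and dates are invented, and a real request would wrap these fields in a SOAP envelope.

```python
# Sketch of a SUSHI ReportRequest's content. Top-level element names follow
# the SUSHI schema; the IDs and date range are made-up examples.
report_request = {
    "Requestor": {"ID": "urn:example:consortium"},       # who is asking
    "CustomerReference": {"ID": "library-001"},          # whose usage data
    "ReportDefinition": {
        "Name": "JR1",        # COUNTER Journal Report 1 (full-text requests)
        "Release": "3",
        "Filters": {
            "UsageDateRange": {"Begin": "2017-01-01", "End": "2017-01-31"},
        },
    },
}
```

The server's response carries the matching COUNTER report in XML as the payload, which the ERM system can then load without manual downloading.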
The FRBR (Functional Requirements for Bibliographic Records) Final Report was first published in print in 1998 by K.G. Saur as volume 19 of UBCIM publications, new series, as well as in PDF and HTML files on the IFLA Web site.
The Contents of Periodicals in Science and Technology (COPSAT) is a current awareness service provided by INFLIBNET in collaboration with National Centre for Science Information (NCSI), Indian Institute of Science, Bangalore. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, UNIT-14 Library and Information Network in India)
Document Delivery Service: INFLIBNET has established 6 centers as Document Delivery Centres (DDC). This service is provided on a ‘no profit, no loss’ basis. The 6 centers are:
- Banaras Hindu University, Varanasi;
- University of Hyderabad, Hyderabad;
- Indian Institute of Science, Bangalore;
- Jawaharlal Nehru University, New Delhi;
- Punjab University, Chandigarh;
- Tata Institute of Social Science, Mumbai;
(Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, UNIT-14 Library and Information Network in India)
COPAC (http://copac.ac.uk/): Copac is a union catalogue that provides free access to the merged online catalogues of members of CURL (the Consortium of University Research Libraries). There are some 30 million records on Copac, representing the merged holdings of 26 CURL member institutions, including the British Library and the National Library of Scotland, plus special collections from a small number of non-CURL libraries. The remaining CURL libraries’ catalogues are also being loaded. The Copac web site contains service information and support materials. Copac is funded by JISC. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, UNIT-12 Library and Information Networks)
Archives Hub (http://www.archiveshub.ac.uk/): The Archives Hub is a collaborative service, which provides a single point of access to descriptions of archive collections held in universities and colleges throughout the United Kingdom. Over 60 institutions are contributing high-quality information to the Hub, which covers over 20,000 archives. The website is free to use and contains information relevant to a wide range of research areas. The service is funded by the Joint Information Systems Committee (JISC) and is overseen by CURL. MIMAS runs the service at the University of Manchester and development work on the Archives Hub software is undertaken by the Cheshire Development Team at the University of Liverpool. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, UNIT-12 Library and Information Networks)
Britain in Print (http://www.britaininprint.net/): The Britain in Print project, funded by the Heritage Lottery Fund, is a collaborative venture led by Edinburgh University Library that involves the participation of ten CURL libraries, including the Edinburgh Royal College of Physicians and the Mitchell Library in Glasgow. All ten libraries have significant collections of pre-1700 British books which are not yet catalogued in electronic form. Launched in January 2003, the Britain in Print project will provide free access to information about the rich collections of early British books that are held in twenty-one of the nation’s most important libraries. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, UNIT-12 Library and Information Networks)
CURL-CoFoR (http://www.cocorees.ac.uk/): CoFoR (Collaboration For Research) is a new CURL initiative, set up to provide its members and other research libraries with practical tools (templates, guidelines and recommendations) for collaborative acquisition and retention. It will also give special attention to techniques for serial de-duplication and to the mapping of relationships between research activity and library provision. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, UNIT-12 Library and Information Networks)
IT Enabled Services: The ITES (formerly INDONET) was engineered and commissioned as India’s first data network in 1986 by the CMC Limited for the computer user community in India. The ITES offers different services integrated in a single delivery mechanism to end-users. It has been used for a number of well-known projects dealing with education, examinations, libraries and electoral cards. It is a powerful Internet service provider focused on providing business-to-business (B2B) eCommerce solutions, specifically in the area of electronic data interchange (EDI). (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, UNIT-12 Library and Information Networks)
INET was commissioned by the Department of Telecommunication (DoT) during 1991 and paved the way for highly reliable, cost-effective and flexible national data transfer and information access. INET is now managed by Bharat Sanchar Nigam Ltd. Packet switching enables error-free transmission with dynamic rerouting of calls and provides interconnection between computers/terminals operating at different speeds and protocols. In its first phase, INET comprised nodes at New Delhi, Mumbai, Calcutta, Chennai, Bangalore, Hyderabad, Pune, Kanpur and Ahmedabad, connected through 9.6 kbps and 64 kbps links. In subsequent phases, this facility was extended to 88 other cities throughout the country. INET is now available in 102 cities in India, grouped on the basis of business activity and demand. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, UNIT-12 Library and Information Networks)
SIRNET (Scientific and Industrial Research Network), an initiative of NISCAIR (formerly INSDOC) aimed at networking all 40 CSIR laboratories, was made operational in December 1989. SIRNET provided an electronic mail facility as its first application service from the SIRNET servers to a number of user nodes. To transmit a message, a user had to deposit the message at one of the SIRNET mail service nodes, situated at NISCAIR (earlier INSDOC), Delhi, and at its regional centre at Bangalore, from where it was transmitted to its destination, which could be any CSIR laboratory linked to the mail node. SIRNET, in turn, was connected to a larger network, ERNET (Educational and Research Network), which is connected to the international network UUNET (Unix User Network), through which other international networks like BITNET, CSNET and JANET are accessible. SIRNET’s mail node at NISCAIR also acted as a gateway to ERNET and, through ERNET, to other networks. Connections between the various laboratories of CSIR were established using dial-up telephone lines, while SIRNET was directly connected to the BSNL Mail server. However, SIRNET has since been shelved. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, UNIT-12 Library and Information Networks)
BTISNET (http://www.btisnet.nic.in/) Recognizing the importance of information technology for pursuing advanced research in modern biology and biotechnology, a bioinformatics programme, envisaged as a distributed database and network organisation, was launched during 1986-87. The programme has become a very successful vehicle for transfer and exchange of information, scientific knowledge, technology packages, and references in the country involving 10-12 thousand scientific personnel. Ten Distributed Information Centers and an Apex Centre at the Department of Biotechnology, and 44 Sub-Distributed Information Centers, located in universities and research institutes of national importance, are fully engaged in this task. Six national facilities have been set up for interactive graphics based molecular modeling and other bio computational needs. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, Unit-12 Library and Information Networks)
International Federation of Library Associations and Institutions (IFLA): The International Federation of Library Associations and Institutions (IFLA) was founded in 1927 in Edinburgh, Scotland, with the goal of promoting international contacts among library associations and librarians. It is a non-governmental professional organisation, and presently one of the leading international bodies representing the interests of library and information services and their users. It is considered to be the global voice of the library and information profession. In 1971, IFLA set up a permanent secretariat in The Hague, Netherlands. (Source: IGNOU MLISc 101 Information Sources, Systems and Services, Block-2 Information Sources, Systems and Programmes, Unit 9 National and International Information Organisations)
The Core Programmes
IFLA presently has six core programmes of major importance to the library and information profession. They are called Core Programmes because they intersect the interests and concerns of all libraries and their users, wherever located. IFLA’s three earlier Core Programmes, namely Universal Bibliographic Control and International MARC (UBCIM), Universal Availability of Publications (UAP) and Universal Dataflow and Telecommunications (UDT), made tremendous contributions to the profession up to the end of the 20th century. Keeping in view current developments in the library and information profession and in modern-day technologies, many new core programmes have been added, and the three core programmes mentioned above have been merged with other core programmes. These programmes are:
1) Action for Development through Libraries Programme (ALP): The Advancement of Librarianship referred to as ALP Programme was launched in 1984 and the name was changed in 2004 to “Action for Development through Libraries Programme”, however, the acronym still remains as “ALP”. The goal of ALP is to further the library profession, library institutions and library and information services in the developing countries of Africa, Asia and Oceania and Latin America and the Caribbean. Within the special ALP areas the goals are to assist in continuing education and training; to support the development of library associations; to promote the establishment and development of library and information services to the general public, including the promotion of literacy; and to introduce new technology into library services. ALP also functions as a catalyst within IFLA for the organisation’s activities in Third World countries. (Source: IGNOU MLISc 101 Information Sources, Systems and Services, Block-2 Information Sources, Systems and Programmes , Unit 9 National and International Information Organisations)
2) Committee on Copyright and other Legal Matters (CLM): IFLA’s concerns in the area of copyright are taken care of by the Committee on Copyright and other Legal Matters (CLM). CLM raises the voice of the international library community in copyright matters. The Committee consists of elected members who represent their own country or wider region, together with expert resource persons who have particular knowledge valuable to CLM. While copyright and intellectual property remain CLM’s main area of interest, the Committee is also concerned with other legal matters, such as economic and trade barriers to rendering effective library services. CLM takes account of the activities of the World Trade Organisation, especially GATS, and has represented IFLA at key WTO meetings. CLM works closely with other regional library organisations with shared interests and is also concerned with other legal issues, for example, licensing and the relationship between copyright law and contract law, disputed claims of ownership of library material and its repatriation, and the difficult technical area of anti-circumvention technology. (Source: IGNOU MLISc 101 Information Sources, Systems and Services, Block-2 Information Sources, Systems and Programmes, Unit 9 National and International Information Organisations)
3) Committee on Free Access to Information and Freedom of Expression (FAIFE): IFLA/FAIFE is a core programme of IFLA to defend and promote the basic human rights defined in Article 19 of the United Nations Universal Declaration of Human Rights. The IFLA/FAIFE Committee and Office support free access to information and freedom of expression in all aspects, directly or indirectly, related to libraries and librarianship. The programme also monitors the state of intellectual freedom within the library community worldwide, supports IFLA policy development and co-operation with other international human rights organisations, and responds to violations of free access to information and freedom of expression. (Source: IGNOU MLISc 101 Information Sources, Systems and Services, Block-2 Information Sources, Systems and Programmes, Unit 9 National and International Information Organisations)
4) IFLA-CDNL Alliance for Bibliographic Standards (ICABS): The IFLA Universal Bibliographic Control and International MARC (UBCIM) Core Activity which was established thirty years ago came to an end in 2003. The objective of UBCIM was “to coordinate activities aimed at the development of systems and standards for bibliographic control at the national level and the international exchange of bibliographic data, including the support for professional activities of appropriate IFLA Sections and Divisions”. UBCIM coordinated the development of the UNIMARC format and ensured publication of reports on projects related to international bibliographic and format standards and proceedings of relevant meetings and seminars.
The British Library hosted UBCIM from 1973 to 1989 and later Die Deutsche Bibliothek from 1990 to the beginning of 2003. The Biblioteca Nacional de Portugal took over the responsibility for both UNIMARC and ICBC (International Cataloguing and Bibliographic Control, a quarterly journal of IFLA, formerly of UBCIM).
Another component of ICABS is the programme of the former Universal Dataflow and Telecommunications (UDT) Core Activity. UDT supported analysis and promotion of technologies and standards for application to the digital environment in the areas of networked resource discovery, information retrieval, digitisation, and metadata. It was hosted at the National Library of Canada (NLC) from its inception in the late 1980s to 2001. UDT developed and maintained IFLA’s primary communications tool, IFLANET, which was hosted for many years at NLC. IFLANET was moved to Institut de l’Information Scientifique et Technique (INIST) in France in 2001 and is no longer a part of the ICABS activity.
Besides this, the Conference of Directors of National Libraries (CDNL) has provided support and funding for core activities. It recently established the CDNL Committee on Digital Issues (CDI) to monitor digital library developments. The Committee’s work on bibliographic standards and digital preservation has been added to the ICABS goals. The National Library of Australia will take care of the committee’s work on deposit agreements.
The National Library of Australia, the Library of Congress, the British Library, the Koninklijke Bibliotheek and Die Deutsche Bibliothek have agreed to collaborate in a joint alliance, together with the Biblioteca Nacional de Portugal, IFLA and CDNL, for ongoing coordination, communication and support for key activities in the areas of bibliographic and resource control for all types of resources and related format and protocol standards. This new alliance is known as the “IFLA-CDNL Alliance for Bibliographic Standards (ICABS)”.
The main focus of the alliance is to offer a practical way to improve international coordination and to enhance developments in these key areas. The alliance aims to ‘‘maintain, promote, and harmonize existing standards and concepts related to bibliographic and resource control, to develop strategies for bibliographic and resource control, and to advance understanding of issues related to long-term archiving of electronic resources, including the promotion of new and recommended conventions for such archiving.’’ (Source: IGNOU MLISc 101 Information Sources, Systems and Services, Block-2 Information Sources, Systems and Programmes , Unit 9 National and International Information Organisations)
5) Preservation and Conservation (PAC): IFLA Core Activity on Preservation and Conservation (PAC) was started in 1984. PAC focuses efforts on issues of preservation and initiates worldwide cooperation for the preservation of library materials. PAC has been conceived in a ‘‘decentralized way where a Focal Point implements the global strategy and Regional Centres manage activities in their specific regions’’. The Focal Point which is the International Centre has been hosted by the Bibliothèque nationale de France in Paris since 1992 and there are Regional Centres all over the globe. (Source: IGNOU MLISc 101 Information Sources, Systems and Services, Block-2 Information Sources, Systems and Programmes , Unit 9 National and International Information Organisations)
6) IFLA UNIMARC (UNIMARC): The IFLA UNIMARC Core Activity succeeds the earlier UBCIM Core Programme in the part related to International MARC. The aim of the UNIMARC Core Activity is to ‘‘coordinate activities aimed at the development, maintenance and promotion of the Universal MARC format (UNIMARC), originally created by IFLA to facilitate the international exchange of bibliographic data. Maintenance and update of UNIMARC, now a set of four formats – Bibliographic, Authorities, Classification and Holdings – is the responsibility of the Permanent UNIMARC Committee’’. The UNIMARC Core Activity collaborates with IFLA’s Bibliographic Control Division and with ICABS – the IFLA/CDNL Alliance for Bibliographic Standards – and also liaises with other international organisations such as ISO TC46, the ISBN and ISSN International Agencies, ICA/CDS – Committee on Descriptive Standards and the Consortium of European Research Libraries (CERL). (Source: IGNOU MLISc 101 Information Sources, Systems and Services, Block-2 Information Sources, Systems and Programmes, Unit 9 National and International Information Organisations)
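To make the idea of a MARC-style exchange record concrete, the sketch below builds a minimal UNIMARC-like bibliographic record as tagged fields with subfields. The tags shown (010 for ISBN, 200 for title, 700 for a personal-name author) follow common UNIMARC usage, but the record is deliberately simplified: a real record also carries a leader, a directory and field indicators, so this is an illustration rather than a standards-conformant record.

```python
# A minimal, illustrative UNIMARC-style bibliographic record.
# Each field is a numeric tag mapped to a list of (subfield, value) pairs.
# Real UNIMARC records also include a leader, directory and indicators.

record = {
    "010": [("a", "978-0-00-000000-0")],          # ISBN (invented for illustration)
    "200": [("a", "An Introduction to UNIMARC"),  # title proper
            ("f", "by A. Cataloguer")],           # statement of responsibility
    "700": [("a", "Cataloguer"), ("b", "A.")],    # personal name, primary responsibility
}

def format_field(tag, subfields):
    """Render one field in the conventional $-delimited display form."""
    return tag + " " + "".join(f"${code}{value}" for code, value in subfields)

for tag in sorted(record):
    print(format_field(tag, record[tag]))
```

Because every field is just a tag plus delimited subfields, two libraries that agree on the tag semantics can exchange such records without sharing any software, which is the point of a common exchange format.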
IFLANET: In the year 1993 the Universal Dataflow and Telecommunications (UDT) Core Programme initiated IFLA’s network. It was hosted by the National Library of Canada. IFLANET and its services have been developed ‘‘for improving communication within IFLA and its various organs and to provide a virtual presence for the organization all the time, that is, all 7 days of the week and 24 hours of the day.’’ IFLANET takes care of general administration, frames policy on centralisation and independent websites, provides procedures and guidelines for preparing documents for submission and assists in the creation of IFLA-sponsored mailing lists. Presently IFLANET is administered by the IFLA HQ and hosted by the Institut de l’Information Scientifique et Technique (INIST), France. (Source: IGNOU MLISc 101 Information Sources, Systems and Services, Block-2 Information Sources, Systems and Programmes, Unit 9 National and International Information Organisations)
IFLA Publications: IFLA offers several publications free of charge to its members. These publications are: IFLA Journal (Quarterly), IFLA Annual, IFLA Trends (Biennial Report), IFLA Medium Term Programme, IFLA Statutes and Rules of Procedure, Divisional and Sectional Newsletters. (Source: IGNOU MLISc 101 Information Sources, Systems and Services, Block-2 Information Sources, Systems and Programmes , Unit 9 National and International Information Organisations)
IFLA launched its Universal Bibliographic Control (UBC) programme in 1974. The purpose of Universal Bibliographic Control was to coordinate activities aimed at the development of systems and standards for bibliographic control at the national level and the international exchange of bibliographic data, including the support for professional activities of appropriate IFLA Sections and Divisions. Although UBC was concerned with bibliographic data and not with the exchange of documents themselves, it helped standardize bibliographic records, which is essential for the creation of standardized databases and the interchange of bibliographic information. The Universal Bibliographic Control and International MARC Core Activity (UBCIM), which had been hosted by Die Deutsche Bibliothek since 1990, was closed with effect from March 2003. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, UNIT-12 Library and Information Networks)
IFLA also launched the Universal Availability of Publications (UAP) programme in 1976 with the objective of achieving the widest availability of published material to users, wherever and whenever they need it and in the format required. Published materials include not only printed materials, including so-called “grey literature”, but also audio-visual materials and publications recorded in electronic (digital or analogue) form. To work towards this objective, the programme aimed to improve availability at all levels, from the local to the international, and at all stages, from the publication of new material to the retention of last copies, both by positive action and by the removal of barriers. UAP aimed to ensure that improved access to information on publications is matched by improved access to the publications themselves. The Universal Availability of Publications Core Activity (UAP) and the Office for International Lending (OIL), which had been hosted by the British Library at Boston Spa, United Kingdom since the late 1970s, were closed with effect from 31st March 2003. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, UNIT-12 Library and Information Networks)
Institute for Scientific Information (ISI): The ISI was set up in 1960. It has been serving the scientific, academic and business communities as an information provider. It provides direct and easy access to the bibliographic data, cited references and abstracts contained in the world’s most important scientific, technical and scholarly publications. ISI was taken over by Thomson Scientific, a segment of the Thomson Corporation, and is now referred to as Thomson ISI. The goal of ISI is to ‘‘increase the impact of research by providing researchers integrated information solutions delivered by the most innovative technologies’’. A recent development of ISI is the ISI Web of Knowledge, a single window from which researchers can access, analyse, and manage information. ISI Web of Knowledge enables users to locate high-quality information with help from evaluation tools and bibliographic management products. It also provides innovative search tools for cross-content and web document searching. It is equipped with a sophisticated linking gateway, as the ISI Web of Knowledge content is multidisciplinary, and supports research conducted at academic, corporate, government, and not-for-profit organisations the world over. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, UNIT-12 Library and Information Networks)
International Federation for Information and Documentation (FID): FID was the international professional association for documentalists, information scientists and other specialists in information management. The International Federation for Information and Documentation (the word information was added to the name in 1986, but the acronym FID continued) was founded in 1895 by Paul Otlet and Henri La Fontaine as the Institut International de Bibliographie (IIB) and was renamed in 1930 as International Federation for Documentation. At the time of its inception, the main aim was the ‘‘creation and maintenance of a comprehensive world repertory of knowledge and development of the Universal Decimal Classification (UDC) from the Dewey Decimal Classification for providing order and access to the bibliographic entries in the world repertory’’. The Universal Bibliographic Repertory
Project had failed, but the IIB left legacies of great value in providing a nucleus for the evolution of FID and in the development of UDC, which is presently a major scheme of classification. FID became a federation in the year 1924 and got its legal status in 1959 as an international non-governmental organisation under the Belgian act granting incorporation to international non-profit associations pursuing a scientific, artistic or educational goal. In 1928, its headquarters shifted to The Hague. After World War II, its membership increased and its activities expanded. The activities of FID, however, have ceased since the year 2002 due to paucity of funds, but the organisation still exists in name. As FID had played a major role in various activities related to libraries and information centres, a brief account is given below of how it came into existence and of its major activities till the time its offices were shut down. It has to be noted here that although FID is no longer functional, its Universal Decimal Classification (UDC) activity is still active, as UDC became an independent consortium in the 1990s. (Source: IGNOU MLISc 101 Information Sources, Systems and Services, Block-2 Information Sources, Systems and Programmes, Unit 9 National and International Information Organisations)
International Council for Science (ICSU): ICSU – the International Council for Science – is a non-governmental organisation. It was started in 1931 as the International Council of Scientific Unions. It was set up to act as a forum for the exchange of ideas, the communication of scientific information and the development of standards in methodology, nomenclature and units. Another main aim of ICSU was to ‘‘encourage international scientific activity for the benefit of mankind’’. (Source: IGNOU MLISc 101 Information Sources, Systems and Services, Block-2 Information Sources, Systems and Programmes, Unit 9 National and International Information Organisations)
Committee on Data for Science and Technology (CODATA): CODATA, the Committee on Data for Science and Technology, was established in 1972 as an interdisciplinary Scientific Committee of the International Council for Science (ICSU) with the aim to ‘‘promote and encourage, on a worldwide basis, the compilation, evaluation and dissemination of reliable numerical data of importance to science and technology’’. The goal of CODATA is to improve the quality, reliability, management and accessibility of data of importance to all fields of science and technology. CODATA is, therefore, a resource that provides scientists and engineers with access to international data activities for increased awareness, direct cooperation and new knowledge. Presently, it has 23 countries as members, and 24 national member delegates and committees. (Source: IGNOU MLISc 101 Information Sources, Systems and Services, Block-2 Information Sources, Systems and Programmes, Unit 9 National and International Information Organisations)
Committee on Data for Science and Technology Publications
CODATA has several publications, which include CODATA Newsletter, Conference Proceedings, Books and Monographs, Special Reports on CODATA activities and CODATA Bulletins in various subject areas. The CODATA secretariat is presently located at 51, Boulevard de Montmorency, 75016 Paris, France. (Source: IGNOU MLISc 101 Information Sources, Systems and Services, Block-2 Information Sources, Systems and Programmes , Unit 9 National and International Information Organisations)
United Nations Educational, Scientific and Cultural Organisation (UNESCO): UNESCO was established in 1946. It is a specialised agency of the United Nations system concerned with information matters. Networks of UNESCO
- MEDLIB – Internet-based Virtual Library Network
- APIN – The Asia and Pacific Information Network (APIN) is a network formed by merging the Regional Network for the Exchange of Information and Experiences (ASTINFO), the Regional Informatics Network for Southeast Asia and the Pacific (RINSEAP) and the Regional Informatics Network for South and Central Asia (RINSCA). APIN is loosely linked with UNESCO’s Information for All Programme (IFAP). It promotes ICT literacy and application, information and knowledge networking, sharing of information resources, and use of international standards and best practices in communication, information and informatics. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, UNIT-12 Library and Information Networks; or IGNOU MLISc 101 Information Sources, Systems and Services, Block-2 Information Sources, Systems and Programmes, Unit 9 National and International Information Organisations)
- RINAF – Regional Information Society Network for Africa
- JOURNET – Global Network for Education in Journalism (JOURNET): The Global Network for Education in Journalism and the Media was launched in 1999. As its mission, JOURNET seeks to expand and improve journalism and media practices worldwide through better professional education in this field, and to do so by linking educational institutions, training centres, associations, networks and organizations that share the ideals of UNESCO in a network that will catalyze their cooperation and sharing. Taking full advantage of new communication technologies, the network operates principally through electronic databases and interconnections linking the major institutions. The African Council for Communication Education, based at the University of Nairobi, Kenya, was nominated as the Network Coordinator. The Network implements a programme of activities including training, curricular design and assistance in equipping needy training centres. Aside from the founding members, Network membership is open to all journalism schools and institutions that adhere to the basic ideals of freedom of expression and the principles laid down in the statutes. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, UNIT-12 Library and Information Networks)
- UNAL – UNESCO Network of Associated Libraries
- INFOYOUTH – International Information and Data Exchange Network on Youth
- ACCESS-net – Association of Computer Centres for Exploiting Sustainable Synergy
- HeritageNet – The Electronic Network of Cultural Institutions in Central Asia
- INFOLAC – Information Society Programme for Latin America and the Caribbean: INFOLAC was established in 1986 as an inter-governmental forum for the exchange of expertise and experiences for the development of the Information Society in Latin America and the Caribbean. It is open to all public, private or professional institutions, and communicates through its quarterly journal, the INFOLAC Newsletter, and its website http://infolac.ucol.mx. INFOLAC’s membership is open to all Latin American and Caribbean States which are members of UNESCO. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, UNIT-12 Library and Information Networks)
- ORBICOM – Orbicom is an international network that links communications leaders from academic, media, corporate and government circles with a view to providing for the exchange of information and the development of shared projects. Orbicom is supported by internationally-based institutions, media, governments and corporations. Orbicom’s mandate derives from UNESCO’s New Communications Strategy, unanimously adopted at the 1989 General Conference. This Conference foresaw that new communications technologies would have a significant impact upon the complex processes shaping economies, the environment, social justice, democracy, and peace. Jointly created in 1994 by UNESCO and the Université du Québec à Montréal (UQAM), ORBICOM will ultimately embody a network of 300 associate members and 27 UNESCO Chairs in Communications from around the world. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, UNIT-12 Library and Information Networks)
- UNESCO Chairs/UNITWIN – The International Network of UNESCO Chairs in Communications.
(Source: IGNOU MLISc 101 Information Sources, Systems and Services, Block-2 Information Sources, Systems and Programmes , Unit 9 National and International Information Organisations)
UNESCO Network of Associated Libraries (UNAL): UNAL was established in 1990 to promote co-operation among public libraries to build international understanding and to establish contacts between libraries of the North and of the South. UNAL’s principal objective is to encourage libraries that are open to the public to undertake activities in UNESCO’s fields such as the promotion of human rights and peace, cultural dialogue, protection of the environment, fight against illiteracy, etc. Over 500 libraries around the world are members of the Network. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, UNIT-12 Library and Information Networks) or (IGNOU MLISc 101 Information Sources, Systems and Services, Block-2 Information Sources, Systems and Programmes , Unit 9 National and International Information Organisations)
Information for All Programme (IFAP): UNESCO’s Information for All Programme provides a platform for international policy discussions and guidelines for action on:
- Preservation of information and universal access to information,
- Participation of all in the emerging global information society, and
- Ethical, legal and societal consequences of ICT developments.
The Information for All Programme promotes a framework for international cooperation and international and regional partnerships. It supports the development of common strategies, methods and tools for building a just and free information society, and aims to narrow the gap between the information rich and the information poor. IFAP is the most important element in the fulfilment of UNESCO’s goals of “education for all”, “free exchange of ideas and knowledge” and “increasing the means of communication between peoples”. (Source: IGNOU MLISc 101 Information Sources, Systems and Services, Block-2 Information Sources, Systems and Programmes, Unit 9 National and International Information Organisations)
International Programme for the Development of Communication (IPDC): The International Programme for the Development of Communication (IPDC) ‘‘promotes free and pluralistic media in developing countries and the countries in transition’’. Through media development, IPDC strengthens the communicative and analytical skills of people and increases their participation in democratic governance. IPDC gives priority to projects promoting press freedom and media pluralism, development of community media, enhancing professional capacity and building partnerships for media improvements. We all know that the media (newspapers, radio or television) are ways of informing people and prompting them to interact. ‘‘Free and pluralistic media result in good and honest governments and make development investments fruitful.’’ All types of media are essential for the construction of democratic societies, as they are crucial for economic growth and nurturing the democratic process. ‘‘Media pluralism alone can guarantee every community the opportunity to express its concerns without exclusion or discrimination’’. Inadequacies of the media in many countries prevent people from voicing their democratic aspirations, from sharing and accessing information, and from making life-saving decisions. ‘‘UNESCO created the International Programme for the Development of Communication (IPDC) in 1980 to address these needs and to accelerate media development’’. (Source: IGNOU MLISc 101 Information Sources, Systems and Services, Block-2 Information Sources, Systems and Programmes, Unit 9 National and International Information Organisations)
UNISIST (World Science Information System): The launch of the UNISIST programme in 1973 marked a new phase in UNESCO’s work in the library, documentation and information field. UNISIST was a conceptual framework with emphasis on scientific and technological information. ‘‘UNISIST was planned as a continuing, flexible programme to coordinate existing trends towards cooperation and to act as a catalyst for the necessary development in scientific information’’. The main goal was the establishment of a flexible and loosely connected network of information systems and services based on voluntary cooperation. (Source: IGNOU MLISc 101 Information Sources, Systems and Services, Block-2 Information Sources, Systems and Programmes, Unit 9 National and International Information Organisations)
PGI— General Information Programme: The General Information Programme (PGI) was created in 1976 by merging UNISIST with a programme concerned with the development of documentation, libraries and archives. An Intergovernmental Council having 30 Member States replaced the former UNISIST Steering Committee and guided the planning and implementation of PGI. During UNISIST II Conference (1979), it was felt that the creation of PGI had brought a number of benefits, for example, it reduced the number of inconsistencies in UNESCO’s dealings with Member States on matters relating to information transfer, infrastructure development, education and training and provided an integrated approach to information systems planning and development. (Source: IGNOU MLISc 101 Information Sources, Systems and Services, Block-2 Information Sources, Systems and Programmes , Unit 9 National and International Information Organisations)
CIP (Cataloguing-in-Publication): The Library of Congress pioneered cataloguing-in-publication in 1971, and the British Library followed suit in 1975. A Cataloguing-in-Publication record, or CIP data, is a bibliographic record prepared by the national library of a country (or any other central agency) for books that have not yet been published. When the book is published, the publisher includes the CIP data on the copyright page, thereby facilitating book processing for libraries and book dealers. The aim of the programme is to provide bibliographic data for new books in advance. It depends heavily on the voluntary cooperation of publishers. Records are compiled from the information supplied by publishers on a standard datasheet submitted to the Library of Congress, the British Library Bibliographic Services or the national library of the respective country. In the UK, a UK MARC CIP entry appears in the printed British National Bibliography and on BLAISE Online, and after receipt of the published book in the Copyright Receipt Office an amended entry may appear. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, Unit 13 Bibliographic Utility)
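The workflow described above — the publisher supplies a datasheet before publication, the national library returns a catalogue record, and the publisher prints it on the copyright page — can be sketched as a simple transformation from datasheet to printed block. The field names and the layout below are hypothetical placeholders for illustration; they are not the actual LC or BL datasheet or CIP format.

```python
# Hypothetical sketch: turn a publisher's pre-publication datasheet into a
# CIP-style block for the copyright page. Field names and layout are
# invented for illustration, not the real Library of Congress format.

def make_cip_block(datasheet):
    """Render a simplified CIP-style block from a datasheet dictionary."""
    lines = [
        f"{datasheet['author']}.",
        f"  {datasheet['title']} / {datasheet['author']}.",
        f"  ISBN {datasheet['isbn']}",
        "  I. Title.",
    ]
    if datasheet.get("has_bibliography"):
        lines.insert(2, "  Includes bibliographical references and index.")
    return "\n".join(lines)

datasheet = {
    "author": "Doe, Jane",
    "title": "Cataloguing in practice",
    "isbn": "978-0-00-000000-0",
    "has_bibliography": True,
}
print(make_cip_block(datasheet))
```

The key property of the real programme is the same as in this sketch: the record is derived entirely from publisher-supplied data before the book exists, which is why an amended entry may be needed once the published copy is received.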
Cataloguing Service Bulletin (CSB): The Library of Congress publishes a quarterly bulletin that contains articles on current, new, and revised information about LC cataloguing and classification practices and policies. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, Unit 13 Bibliographic Utility)
National Union Catalogue (NUC): The Union Catalogue Division of the Library of Congress started work on the National Union Catalogue (NUC) project in the year 1909. The union catalogue contains holdings data for the collections of the Library of Congress and other participating libraries. Several sequences have been printed since 1956, when the first set of 167 volumes covering pre-1942 material was printed. Music, recordings and motion pictures are also included in separate volumes. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, Unit 13 Bibliographic Utility)
Kinetica: Australia’s Library Network (http://www.nla.gov.au/kinetica/): Launched in March 1999, Kinetica extended the Australian Bibliographic Network (ABN), which was created in 1981 to foster resource sharing by Australian libraries. Kinetica is a modern Internet-based service for Australian libraries and their users. It provides access to the national database of material held in Australian libraries, known as the National Bibliographic Database. A user can search for any item and locate which library in Australia holds it. Gateways to other major library databases are also provided. In addition, Kinetica supports cooperation and resource sharing within the Australian library community through the delivery of MARC records and the provision of a document delivery service. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, Unit 13 Bibliographic Utility)
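The core service described here — search a national database for an item and see which libraries hold it — is the essence of a union catalogue: one bibliographic record per item plus a list of holding libraries. A minimal sketch of that data structure follows; the identifiers and library names are invented for illustration and do not reflect the actual National Bibliographic Database schema.

```python
# Minimal union-catalogue sketch: bibliographic records keyed by an
# identifier, each carrying the list of participating libraries that hold
# the item. Identifiers and library names are invented for illustration.

union_catalogue = {
    "isbn:978-0-00-000000-0": {
        "title": "A History of Australian Libraries",
        "holdings": ["National Library of Australia", "State Library of Victoria"],
    },
    "isbn:978-0-00-000000-1": {
        "title": "Introductory Cataloguing",
        "holdings": ["State Library of New South Wales"],
    },
}

def locate(catalogue, identifier):
    """Return the libraries holding the item, or an empty list if unknown."""
    entry = catalogue.get(identifier)
    return entry["holdings"] if entry else []

print(locate(union_catalogue, "isbn:978-0-00-000000-0"))
```

A document-delivery or interlibrary-loan request then amounts to picking one library from the returned holdings list, which is why the union catalogue is the natural foundation for the resource-sharing services the text describes.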
NCCP (National Cooperative Cataloguing Project): The National Cooperative Cataloguing Project began in the late 1980s with an aim to improve on the standard model of shared cataloguing in a bibliographic utility by building a national database in which all the records are of high quality and can be accepted into a library’s local database without modification. The programme allowed ten selected libraries to catalogue into a single database, into which the records they contribute would be integrated in a consistent manner. The essential features that distinguished “coordinated cataloguing” from “shared cataloguing” were agreement to follow LC procedures and rule interpretations, submission of new names and subjects for inclusion in the national name and subject authority files, operation via the LSP, and distribution of records in LC’s file. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, Unit 13 Bibliographic Utility)
Program for Cooperative Cataloguing (PCC) (http://www.loc.gov/catdir/pcc/) The Program for Cooperative Cataloguing, coordinated by the Library of Congress (LC), is an international cooperative effort involving cooperation among several hundred libraries with an aim to create and update cataloguing records jointly. Although the PCC began in 1994, some of its components date back more than 20 years under different names. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, Unit 13 Bibliographic Utility)
NACO (Name Authority Cooperative Program): The NACO program began in 1977. Through this program, participants contribute authority records for names, uniform titles, and series to the national authority file. NACO participants may contribute new name authority records and make changes to existing records within certain parameters. In addition, participants may contribute series and uniform title authority records. An individual institution may join this program, or a group of libraries with a common interest may form a funnel project to contribute records via a coordinator. The underlying principle of the NACO authorities project is that participants agree to follow a common set of standards and guidelines when creating or changing authority records, in order to maintain the integrity of a large shared authority file. Participants of NACO are required to undergo a training programme either in their home institutions or at the LC. During the training, guidelines are discussed and expanded upon with an ever-growing awareness of the need to streamline cataloguing efforts while building a consistent and predictable file. This file helps the global library community work more efficiently and effectively, allowing it to maximize its resources. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, Unit 13 Bibliographic Utility)
SACO (Subject Authority Cooperative Program): SACO evolved from the Cooperative Subject Cataloguing Project that began in 1983. The Subject Authority Cooperative Program (SACO) was established to provide a means for libraries to submit subject headings and classification numbers to the Library of Congress via the Program for Cooperative Cataloguing (PCC). Changes in subject headings are proposed for inclusion in the Library of Congress Subject Headings (LCSH), and changes in classification numbers are proposed for inclusion in the Library of Congress Classification (LCC) schedules. To prevent duplication of effort, participants have access to the online Library of Congress authority files, both the name authorities and the subject authorities, for searching purposes. To train participants in the program, SACO workshops are offered by the PCC either in conjunction with library-related meetings or conferences, or as part of the jointly-developed training programme on subject cataloguing. To facilitate participation in the SACO programme, instructional materials and forms have been developed that enable a contributor to submit proposals and changes to the Library of Congress. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, Unit 13 Bibliographic Utility)
BIBCO (Bibliographic Cooperative Program): BIBCO, the bibliographic record component program of the PCC, evolved in 1995 from the National Coordinated Cataloguing Programme that began in 1988. BIBCO enables participants to contribute monograph records to a central database using mutually agreed-upon standards. The PCC BIBCO Project is meant to create new bibliographic records and modify existing bibliographic records in the OCLC database in accordance with national-level standards, and to code these records in such a way that they will be identifiable to the library community as PCC records. Through this program, participants that are already NACO members contribute bibliographic records to the national databases. BIBCO members are responsible for contributing full or core level bibliographic records. These records are identified as PCC records and are notable for their complete authority work (both descriptive and subject), a national-level call number (such as an LC or NLM classification number), and at least one subject access point drawn from nationally recognized thesauri such as LCSH, MeSH, etc., as appropriate. Participating librarians are required to attend a training session, designed specifically for their needs, held at their own institution. The course focuses on the core bibliographic record and the values and decision-making skills necessary for cataloguers to produce quality cataloguing data. Expert staff from PCC libraries provide the training. As members of BIBCO, participants contribute bibliographic records for monographs in all formats to the national databases and participate in the development of standards. An individual institution may join this program, or a group of libraries with a common interest may form a funnel project to contribute via a coordinator who will represent the funnel participants.
(Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, Unit 13 Bibliographic Utility)
CONSER (Cooperative Online Serials Program): CONSER began in 1973 as a project to convert manual serial cataloguing into machine readable records, with the aim of creating and maintaining high quality bibliographic records for serials. In keeping with its evolution, the name was changed in 1986 from the CONSER (CONversion of SERials) Project to the CONSER (Cooperative ONline SERials) Program. In October 1997, CONSER became a bibliographic component of the Program for Cooperative Cataloguing. The CONSER database resides within the OCLC Online Union Catalogue. CONSER members input, authenticate, and modify serial cataloguing records on OCLC, or contribute original records via FTP. The process of authentication involves approving the bibliographic elements in the record and providing for the record’s availability through distribution services and bibliographic products. The members of the CONSER program include the national libraries of the United States and Canada; selected university, government, research, special, and public libraries; participants in the United States Newspaper Program (USNP); selected library associations and subscription agencies; and abstracting and indexing services. The need for CONSER stems from the dynamic nature of serial publications: unlike most monographs, serials are constantly changing in a variety of ways, and modifications to CONSER records accommodate the changes in the serials themselves and in the rules for their cataloguing. Through the CONSER Program, members are given the authority to modify the master serial records in the OCLC database. To ensure uniformity, the participants agree to follow the policies and procedures documented in the CONSER Editing Guide and the CONSER Cataloguing Manual. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, Unit 13 Bibliographic Utility)
Linked Systems Project (LSP): The Library of Congress initiated the Linked Systems Project (LSP) in 1975 with the aim of linking together the major bibliographic utilities in the US, including the Library of Congress itself. The National Commission on Libraries and Information Science (NCLIS) funded this project for the planning, development and implementation of a nationwide bibliographic utility network. Several independent library networks had emerged, and substantial effort was required to allow them to communicate with one another and with the Library of Congress (LC), which was the largest repository of bibliographic data in the U.S. By 1980, the Library of Congress, the Research Libraries Information Network (RLIN), and the Western Library Network (WLN) had agreed to form the Linked Systems Project (LSP) to concentrate on developing and implementing the message delivery system. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, Unit 13 Bibliographic Utility)
ARL Retrospective Conversion Project: In 1983, the US Council on Library Resources funded a project on the retrospective conversion of bibliographic records, under which research libraries would participate in a coordinated programme to convert research collections in a cooperative mode using a subject-based approach, with ARL assuming responsibility for managing the project. During 1985, the ARL undertook the planning study and subsequently initiated a two-year pilot project to begin implementation of a programme to coordinate the systematic conversion of 6 to 7 million bibliographic records for monographs. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, Unit 13 Bibliographic Utility)
Indexing Language/Methods of Vocabulary Control
Semantic Structure: Semantics refers to the aspects of meaning. In the context of an indexing language, two kinds of relationships between concepts – hierarchical and non-hierarchical – can be identified. Hierarchical relationships may be Genus-Species or Whole-Part relationships; non-hierarchical relationships may be Equivalence or Associative relationships.
Hierarchical Relationships: These are permanent relationships.
- Genus-Species (Example: Telephone is always a kind of Telecommunication)
- Whole-Part (Example: Human Body – Respiratory system)
- Instance (Example: Television – Philips TV).
Non-Hierarchical Relationships: These may be of two kinds – Equivalence and Associative.
- Synonym (Example: Defects – Flaws; Elevator – Lift): Synonyms lead to the same concept being denoted by different terms. For control, one of the terms is accepted as the preferred term and the rest are linked to it through a connecting device such as See, Use, or Used For (UF). For example, Child Medicine and Pediatrics are synonyms. If Child Medicine were the accepted term, then under Pediatrics there would be a link (See Child Medicine, or Use Child Medicine), and under Child Medicine a reciprocal link (UF Pediatrics).
- Homonym/Homograph (Example: Fatigue (of metals) – Fatigue (of humans); Plant (a living organism) – Plant (a factory)): Homonyms arise where the same term may have different meanings, causing a problem in understanding the concept behind a term. Vocabulary control is achieved by providing the context in brackets along with the term, e.g., Bridge (Road), Bridge (Game).
- Associative: This refers to relationships in which concepts are semantically related but do not necessarily belong to the same hierarchy (e.g., Weaving and Cloth).
Syntactic Structure: As you know, the word syntax refers to grammar. In the context of an indexing language, syntax governs the sequence of occurrence of terms in a subject heading, e.g., for the title ‘Export of Iron’, the heading may be Iron, Export or Export, Iron.
Syndetic Structure: To show the relationships described under semantic structure, a syndetic structure is built into the indexing language (viz., See, See also; Use, Used For). The syndetic structure of an indexing language aims to link related concepts that would otherwise be scattered, and helps to collocate them. It guides the indexer in formulating index entries and the searcher in searching for information. (Source: IGNOU MLISc, information retrieval, unit 2 indexing languages – part i: concepts and types, subject headings lists and thesauri)
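The equivalence and syndetic devices described above can be modelled as a small data structure. The sketch below is illustrative only (the terms and relationship lists are examples, not taken from any published thesaurus); it stores UF/BT/NT/RT links under each accepted term, derives the Use links automatically, and resolves any entry term to its accepted form.

```python
# A minimal thesaurus sketch. Each accepted (preferred) term carries
# its relationships; non-preferred synonyms are listed under UF.
thesaurus = {
    "Child Medicine": {
        "UF": ["Pediatrics"],       # Used For: non-preferred synonyms
        "BT": ["Medicine"],         # Broader Term (genus)
        "NT": ["Neonatology"],      # Narrower Term (species)
        "RT": ["Child Welfare"],    # Related Term (associative)
    },
}

# Build the Use links (non-preferred -> preferred) from the UF lists.
use = {np: pref for pref, rels in thesaurus.items() for np in rels["UF"]}

def accepted_term(term):
    """Resolve any entry term to the accepted (preferred) term."""
    return use.get(term, term)

print(accepted_term("Pediatrics"))      # Child Medicine
print(accepted_term("Child Medicine"))  # Child Medicine
```

A searcher entering the non-preferred term Pediatrics is thus redirected to the accepted heading, exactly as a See/Use reference does in a printed list.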
Sears List Of Subject Headings (SLSH): This List owes its name to its originator, Minnie Earl Sears, who produced its first edition in 1923 as the List of Subject Headings for Small Libraries, based on the lists of subject headings used by nine small, well-catalogued libraries. She edited the List until 1933, when it was in its 3rd edition. The list has carried its present name since the 6th edition: ‘Small Libraries’ was removed from the title as medium sized libraries also started using it, and ‘Sears’ was added in recognition of the contribution of Minnie Earl Sears. It is now in its 18th edition, published in 2004. SLSH has had a new face since the 15th edition, published in 1994; since then editions have appeared quite regularly, viz., the 16th in 1997, the 17th in 2000, and the 18th in 2004. The new face is due to a change in format that follows the NISO standards for thesauri: the earlier references See, See also, x, and xx have been replaced with USE, NT, BT, RT and UF. (Source: IGNOU MLISc, information retrieval, unit 2 indexing languages – part i: concepts and types, subject headings lists and thesauri)
Library Of Congress List Of Subject Headings: The Library of Congress has been providing subject headings in its catalogue since 1898. Libraries using L.C. cards requested it to publish these headings for other libraries to use, and the list was published for the first time as “Subject Headings used in the Dictionary Catalogues of the LC” between 1909 and 1914. Supplements were published later, followed by a second edition issued in 1919. (SLSH, designed for small and medium sized libraries, is based on LCSH.) The list is at present in its 26th edition, published in 2003 in five volumes. The present editor of the list is Ronald A. Gowdreas. The list is generated from a database accumulated since its inception. An idea of the size of the list can be had from the fact that it has 2.7 lakh records, compared to 2.63 lakh records in LCSH 25. (Source: IGNOU MLISc, information retrieval, unit 2 indexing languages – part i: concepts and types, subject headings lists and thesauri)
Headings in LCSH vs. SLSH
- LCSH: Library education – India | SLSH: Library education – India
- LCSH: Library orientation – India | SLSH: Bibliographic Instruction
- LCSH: Blood – Cancer – Treatment | SLSH: Blood Cancer – Treatment
- LCSH: English language – Dictionaries | SLSH: English language – Dictionaries
- LCSH: India – Foreign relations – Pakistan | SLSH: India-Foreign relations – Pakistan
(Source: IGNOU MLISc, information retrieval, unit 2 indexing languages – part i: concepts and types, subject headings lists and thesauri)
Cutter’s Rules for Dictionary Catalogue: It was Charles Ammi Cutter who first gave a generalised set of rules for subject indexing in his Rules for a Dictionary Catalogue (RDC) published in 1876. Cutter never used the term ‘indexing’; rather he used the term ‘cataloguing’. (Source: IGNOU MLISc, information retrieval, unit 4 Indexing systems and techniques techniques)
Kaiser’s Systematic Indexing: Julius Otto Kaiser systematised alphabetical subject heading practice by developing the principles behind Cutter’s rules so as to form a consistent grammatical logic. Kaiser was the first person to apply Cutter’s ideas to the indexing of micro documents, in the library of the Tariff Commission, where he served as librarian; he started from the point where Cutter left off. In his “Systematic Indexing”, published in 1911, Kaiser pointed out that compound subjects might be analysed by determining the relative significance of the different component terms of a compound subject through a classificatory approach. He categorised the component terms into two fundamental categories: (1) Concrete and (2) Process. According to Kaiser,
Concrete refers to
- Things, places and abstract terms not signifying any action or process; e.g., Gold, India, Physics, etc.
Process refers to
- Mode of treatment of the subject by the author; e.g. Evaluation of IR system, Critical analysis of a drama.
- An action or process described in the document; e.g. Indexing of web documents.
- An adjective related to the concrete as component of the subject; e.g. Strength of metal.
(Source: IGNOU MLISc, information retrieval, unit 4 Indexing systems and techniques techniques)
Relational Indexing: J. E. L. Farradane devised a scheme of pre-coordinate indexing known as Relational Indexing. This indexing system was developed first in the early 1950s and has been modified since then. The latest change may be noted from Farradane’s own paper in 1980. The basic principle of Farradane’s Relational Indexing is to identify the relationship between each pair of terms of a given subject and to represent those relations by relational operators suggested by him and thus creating ‘Analets’. (Source: IGNOU MLISc, information retrieval, unit 4 Indexing systems and techniques techniques)
Coates’s Subject Indexing: The ideas of E. J. Coates are not considered original in nature. From the contributions of Cutter, Kaiser and Ranganathan he drew the concept of term significance, and from the contribution of Farradane the concept of term relationship; his contribution was to synthesise these two concepts. It was advantageous for Coates that he could apply his ideas to the British Technology Index (now Current Technology Index), of which he was the editor from its inception in 1963 until his retirement in 1976.
The most significant term in a compound subject heading is the one that is most readily available to the memory of the enquirer. From this, Coates developed the idea of Thing and Action, akin to Kaiser’s Concrete and Process.
A ‘Thing’ is whatever can be thought of as a static image – the image that comes straight into our mind, i.e., that which we can visualise first. It includes not only the names of physical objects but also systems and organisations of a mental kind, e.g., Democracy [system].
An ‘Action’ refers to anything in action, or a process, denoted by a term/word. Example: Heat treatment of aluminium is entered as
ALUMINIUM [Thing] / Heat treatment [Action]
(Source: IGNOU MLISc, information retrieval, unit 4 Indexing systems and techniques techniques)
PRECIS: Derek Austin developed PRECIS, the PREserved Context Index System, in 1974 as an alternative procedure for deriving subject headings and generating index entries for the British National Bibliography (BNB), which since 1952 had been following Chain Indexing. Two important factors played a significant role in the search for an alternative method, ultimately resulting in the development of PRECIS: i) the idea of replacing the chain indexing technique of BNB; and ii) the decision of the British Library to produce a computer-generated BNB with all its indexes. (Source: IGNOU MLISc, Information Retrieval, Unit 4 Indexing systems and techniques)
COMPASS: In 1990, it was decided to revise UKMARC and to replace PRECIS with a simpler system of subject indexing. As a result, the Computer Aided Subject System (COMPASS) was introduced for BNB from 1991, using the same basic principles as PRECIS, and PRECIS was dropped. PRECIS had been designed for the specific purpose of generating a coextensive subject statement at each entry point in a form suitable for a printed bibliography; this was not necessarily the best format for online searching. (IGNOU MLISc, Information Retrieval, Unit 4 Indexing systems and techniques)
Term Entry System: Here, index entries for a document are made under each of the appropriate subject headings, and these entries are filed in alphabetical order. Under this system, the number of index entries for a document depends on the number of component terms associated with the thought content of the document. Here, terms are posted on the item. This system requires the searching of two files (a term profile and a document profile). Examples: Uniterm, Peek-a-boo, etc. (Source: IGNOU MLISc, Information Retrieval, Unit 4 Indexing systems and techniques)
Item Entry System: It takes the opposite approach to the term entry system and prepares a single entry for each document (item), using a physical form that permits access to the entry from all appropriate headings. Here, items are posted on the term. The item entry system involves the searching of one file (i.e., the term profile) only. Example: the edge-notched card. (Source: IGNOU MLISc, Information Retrieval, Unit 4 Indexing systems and techniques)
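The contrast between the two approaches can be illustrated with two inverse data structures (the document numbers and terms below are invented for illustration): a document profile posts terms on each item, while a term profile posts items under each term, and one can be derived from the other by inversion.

```python
# Document profile: terms posted on each item (the item entry view).
document_profile = {
    "D1": {"indexing", "retrieval"},
    "D2": {"indexing", "cataloguing"},
}

# Term profile: items posted under each term (the term entry view),
# obtained by inverting the document profile.
term_profile = {}
for doc, terms in document_profile.items():
    for term in terms:
        term_profile.setdefault(term, set()).add(doc)

print(sorted(term_profile["indexing"]))  # ['D1', 'D2']
```

Searching the term profile answers “which documents carry this term?” directly, which is why a term-entry system such as Uniterm keeps this file at its centre.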
Social Science Citation Index (SSCI): ISI brought out SSCI in 1973. It is a multidisciplinary index to the literature published in the world’s 1,400 leading journals in the social sciences, and to selected relevant items from 3,100 science journals. SSCI covers almost all the social science subjects, such as Anthropology, Business, Communication, Criminology, Education, Geography, History, Information Science, Library Science, Law, Linguistics, Philosophy, Sociology, etc. It appears three times a year and has a calendar year index. The online version of SSCI, known as SOCIAL SCISEARCH, enables speedy and accurate searches of the social science literature. The compact disc version of SSCI is available individually for the years 1986 to 1991, along with a 5-year cumulation covering 1981–1985. (Source: IGNOU MLISc, Information Retrieval, Unit 4 Indexing systems and techniques)
Arts & Humanities Citation Index (A&HCI): ISI brought out the A&HCI in 1978 after an intensive two-year market research. A&HCI covers more than 25 arts and humanities disciplines, including Archaeology, Architecture, Arts, Classics, Dance, Films, Folklore, History, Language, Literature, Music, Philosophy, Religious Studies, Television & Radio, Theatre, Theology, etc. It is published three times a year. The structure, format and search of the A&HCI are the same as those of SCI and SSCI. Like SCI and SSCI, A&HCI consists of a Citation Index, Source Index, Permuterm Subject Index, and Corporate Index. (Source: IGNOU MLISc, Information Retrieval, Unit 4 Indexing systems and techniques)
Uniform Resource Identifier (URI): A URI is an identifier used to identify objects in a space. The most popularly used URI is the URL; URLs are a subset of URIs, and the two are not synonymous. A URI can be anything from the name of a person to an email address, a URL, or any other literal. The URI specification defines a generic syntax and covers a generic set of schemes that identify any document/resource, such as the URL, URN (Uniform Resource Name), URC (Uniform Resource Characteristic), etc. (Source: IGNOU MLISc, Information Retrieval, Unit 4 Indexing systems and techniques)
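The generic URI syntax (scheme, authority, path, query, fragment) can be seen by parsing an example URL with Python’s standard urllib.parse module; the URL itself is invented for illustration.

```python
from urllib.parse import urlparse

# A URL is one kind of URI; urlparse splits it into the generic parts.
uri = urlparse("https://example.org/catalogue/search?q=qr+code#results")

print(uri.scheme)    # https
print(uri.netloc)    # example.org  (the authority)
print(uri.path)      # /catalogue/search
print(uri.query)     # q=qr+code
print(uri.fragment)  # results
```

The same parser handles other schemes (mailto:, urn:, ftp:), which is what makes the URI syntax generic.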
ISBDs: An international meeting of cataloguing experts was held in Copenhagen (Denmark) in 1969 to find a standard for the description of bibliographic items. The International Standard Bibliographic Description (ISBD) has endured for nearly 35 years and has proved to be the most successful effort in the area of bibliographic description. The first ISBD appeared in 1971 as the ISBD for Monographic Publications [ISBD(M)]. ISBD(M) was followed by the development of a series of ISBDs for serials, non-book materials, cartographic materials, rare books, printed music, electronic resources, etc. ISBD(G) (1977) has provided a framework to which all ISBDs conform. The existing ISBDs underwent major editorial review twice (in the 1980s and 1990s) with three major objectives: to harmonise provisions and achieve increased consistency; to improve examples; and to make the provisions more applicable to cataloguers working with materials published in non-roman scripts. In the first review project all the existing ISBDs were thoroughly considered and re-published as ‘revised editions’. A second general revision project has been initiated to ensure conformity between the provisions of the ISBDs and FRBR (Functional Requirements for Bibliographic Records). (IGNOU MLISc, Information Retrieval, Unit 6 Principles and evolution of bibliographic description)
Dublin Core Metadata Schema (DCMS) [http://dublincore.org]: It was developed in 1995 as a simple and concise scheme to describe web-based documents. The original objective of the Dublin Core was to define a set of elements that could be used by authors to describe their own resources. Dublin Core is a set of 15 main elements that fall into three groups – Content, Intellectual Property and Instantiation. (Source: IGNOU MLISc, Information Retrieval, Unit 6 Principles and evolution of bibliographic description)
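As a sketch, a Dublin Core description can be recorded as simple element–value pairs. The element names below are the 15 Dublin Core elements; the assignment to the three groups follows the usual DCMI presentation (treat the exact grouping as an assumption), and the sample record values are invented.

```python
# The 15 Dublin Core elements, grouped as in the text.
dublin_core_groups = {
    "Content": ["Title", "Subject", "Description", "Type",
                "Source", "Relation", "Coverage"],
    "Intellectual Property": ["Creator", "Publisher",
                              "Contributor", "Rights"],
    "Instantiation": ["Date", "Format", "Identifier", "Language"],
}

# Flatten the groups to confirm there are 15 elements in all.
all_elements = [e for group in dublin_core_groups.values() for e in group]
print(len(all_elements))  # 15

# An invented sample record using a few of the elements.
record = {
    "Title": "An Introduction to Bibliographic Description",
    "Creator": "A. N. Author",
    "Date": "2004",
    "Format": "text/html",
}
print(sorted(record))
```

In practice such a record is serialised in HTML meta tags or XML/RDF, but the element–value structure stays the same.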
Global Information Locator Service (GILS) [http://www.usgs.gov/gils/index.html]: GILS grew out of a U.S. government requirement for public access to government information, both digital and non-digital. The U.S. National Archives and Records Administration has defined the core elements of GILS. GILS specifies a profile of the Z39.50 protocol for search and retrieval. GILS records are intended to describe aggregates such as catalogues, publishing services and databases. (Source: IGNOU MLISc, Information Retrieval, Unit 6 Principles and evolution of bibliographic description)
Text Encoding Initiative (TEI) [http://www.tei-c.org]: It is a scheme for marking up electronic text. It also specifies a header portion to accommodate metadata about the object to be described. TEI headers can be used to record bibliographic information of both electronic and non-electronic sources. The TEI header can be mapped to and from MARC. (Source: IGNOU MLISc, Information Retrieval, Unit 6 Principles and evolution of bibliographic description)
Online Information Exchange International (ONIX) [http://www.editeur.org/onix.html]: ONIX is an XML-based metadata schema developed for the publishing industry in response to the enormous growth in online book sales. It records basic bibliographic data along with trade data and promotional information. ONIX may play a major role in the creation of provisional/order-level bibliographic records for library processing, much like CIP data. Mappings between ONIX and both USMARC and UNIMARC exist. (Source: IGNOU MLISc, Information Retrieval, Unit 6 Principles and evolution of bibliographic description)
Functional Requirements for Bibliographic Records (FRBR): FRBR is an entity-relationship model framed by IFLA in 1998. The model represents a generalised view of the bibliographic universe. The FRBR model:
- Identifies the bibliographic entities and defines their nature and scope;
- Analyses the attributes associated with each of the entities;
- Provides a comprehensive listing of individual data elements associated with each attribute;
- Delineates the nature of relationships that operate at a generalized level and between specific instances of entities;
- Maps the attributes and relationships associated with each entity to four generic user tasks (find, identify, select, obtain); and
- Recommends basic data requirements for national bibliographic records.
(Source: IGNOU MLISc, Information Retrieval, Unit 6 Principles and evolution of bibliographic description)
UKOLN’s Analytical Model of Collections and their Catalogues: This model was developed in 2000 by the United Kingdom Office for Library and Information Networking (UKOLN) under the Research Support Libraries Programme (RSLP). It is applicable to physical and digital collections of all kinds, including library, art and museum materials. The model identifies three main entities and associated attributes — Objects (Content, Item, Collection, Location, Content-Component, Item-Component); Agents (Creator, Producer, Collector, Owner, Administrator); and Indirect-Agents (Creator’s Assignee, Producer’s Assignee). It also prescribes two types of relationships — internal relationships (relationships among the entities in a Collection Description) and external relationships (relationships among Collection Descriptions themselves). The model tries to clarify the points at which rights and conditions of access and use become operable, and attempts to act as a bridge linking collections and their users. (Source: IGNOU MLISc, Information Retrieval, Unit 6 Principles and evolution of bibliographic description)
XML Organic Bibliographic Information Schema (XOBIS): XOBIS attempts to restructure bibliographic and authority data in a consistent and unified manner using Extensible Markup Language (XML). It has been developed at Lane Medical Library, Stanford University under the Medlane Project. The preliminary version (alpha version) of XOBIS appeared in September 2002. XOBIS prescribes a tripartite record element based structure in which each record consists of three required components. These are Control Data (contains metadata about record), Principal Elements (10 categories of data that provide bibliographic access and authority control to a wide variety of resources) and Relationships (element that accommodates links between any pair of principal elements). (Source: IGNOU MLISc, Information Retrieval, Unit 6 Principles and evolution of bibliographic description)
Machine-Readable Record Format: The Library of Congress (LC) was the first to design and experiment with a MAchine-Readable Cataloguing (MARC) record format for the purpose of communicating bibliographic information to a large number of libraries. Planning began in 1965, and the pilot project known as MARC I commenced in 1966 at LC, with the main aim of creating and distributing machine-readable cataloguing data to other libraries with LC as the distributing point. At that time there were no established MARC formats available, and library professionals had reached no consensus as to what access points were required for taking full advantage of an automated catalogue. MARC I dealt only with books. The development of MARC II started in 1968; it was planned to cover all types of materials, including books and monographs. During 1970–1973 documentation was issued for other materials: film records in 1972; serials, maps and French books in 1973; and, by 1975, records for German, Spanish, and Portuguese material [Simmons and Hopkinson, 1988]. In 1999, USMARC and CAN/MARC were harmonized and named MARC21 [McCallum 1989]. The MARC21 bibliographic format, as well as all official MARC21 documentation, is maintained by the Library of Congress and the National Library of Canada [MARBI, 1996]. UKMARC has also since been merged with MARC21, and the British Library has been shifting from UKMARC to MARC21. The Library of Congress and the National Library of Canada serve as the maintenance agencies for the MARC21 formats for bibliographic, authority, holdings, classification, and community information data. (Source: IGNOU MLISc, Information Retrieval, Unit 8 Standards for bibliographic record formats: ISBD, MARC21, CCF structure)
ASCII (American Standard Code for Information Interchange) is one such standard; it was first published by an American standards committee in 1963 and revised in 1967. But it could hardly address the problem of representing more than one script, and thus failed to become a global character encoding system. ASCII is a one-byte code for character representation: a byte can represent 256 (0–255) characters in its extended form, but standard ASCII uses only 7 bits, which means only 128 characters are accommodated. On the lines of ASCII, other countries drafted standards for their individual language scripts. On the same guidelines, to address the problem in India, the Bureau of Indian Standards (BIS) developed ISCII (Indian Standard Code for Information Interchange). (Source: IGNOU MLISc, Information Retrieval, Unit 13 multilingual content development (using UNICODE))
Graphics and Intelligence based Scripting Technology (GIST), developed by C-DAC (India), supports the major Indic languages. GIST uses ASCII codes for character representation, drawing on the extended ASCII table, and builds on underlying standards such as ISCII, ISFOC (for font representation on screen) and INSCRIPT (a common keyboard layout for Indian scripts). The characters of the Indic scripts can be divided into two groups – consonants and vowels – and the script layout of all the Indic languages is the same. GIST is built on an 8-bit encoding system, i.e., it is an extended ASCII system, and it uses the same code for the same vowel or consonant across the different Indic languages. Transliteration thus becomes very handy: it is only required to change the language mode (or switch). Indian-language (multilingual) word-processing software such as iLeap, also developed by C-DAC, works on the same principle and enables transliteration. (Source: IGNOU MLISc, Information Retrieval, Unit 13 multilingual content development (using UNICODE))
ISCII (Indian Standard Code for Information Interchange) is an extended ASCII. Within the 0–255 range provided by 8-bit ASCII, it uses the last 128 character positions for the representation of characters in Indic scripts: ASCII characters are placed in the lower half (0–127) of the 8-bit code table, while Indian script characters occupy the upper half (160–255). It uses the same keyboard as ASCII, with the 10 Indian scripts inscribed over it, and uses the control characters SO (Shift Out) and SI (Shift In) for the selection of ASCII and ISCII respectively. ISCII caters to the following 10 Indian scripts – Devanagari, Gujarati, Punjabi, Bengali, Assamese, Oriya, Telugu, Tamil, Malayalam and Kannada – and the ISCII code table is a superset of all the characters required for these scripts. The first version was released in 1983 and, after revisions in 1986 and 1988, was adopted by the Bureau of Indian Standards (BIS) in 1991. (Source: IGNOU MLISc, Information Retrieval, Unit 13 multilingual content development (using UNICODE))
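The limits of 7-bit ASCII, and why Indic scripts need either an 8-bit extension such as ISCII or a universal code such as Unicode, can be seen directly in Python (the characters chosen are arbitrary examples):

```python
# 7-bit ASCII covers code points 0-127.
print(ord("A"))                 # 65, fits comfortably in 7 bits

# A Devanagari letter lies far outside the ASCII range ...
ka = "\u0915"                   # DEVANAGARI LETTER KA
print(ord(ka))                  # 2325

# ... so it cannot be encoded as ASCII at all,
try:
    ka.encode("ascii")
except UnicodeEncodeError:
    print("not representable in ASCII")

# but UTF-8 (a Unicode encoding) represents it, using three bytes.
print(len(ka.encode("utf-8")))  # 3
```

ISCII solved the same problem differently: it kept the character in a single byte (in the 160–255 range) but required a mode switch to know which script that byte belongs to, whereas Unicode gives every character of every script its own code point.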
Text formats can be of the following types:
- Simple text, such as texts using ASCII, ISCII, and UNICODE character codes.
- Structured text formats, such as Standard Generalized Markup Language (SGML), Hypertext Markup Language (HTML), Extensible Markup Language (XML), etc.
- Page description languages, such as PostScript, Portable Document Format (PDF), TeX, etc.
Image formats can be of following types:
Graphics Interchange Format (GIF), Joint Photographic Experts Group (JPEG/ JPG/ JIF), Targa File Format (TGA), Tagged Image File Format (TIFF), Bitmap Image (BMP), etc.
Video file formats can be of following types:
Audio/Video Interleaved (AVI), Movie (MOV), Moving Picture Experts Group (MPEG), etc.
Audio file formats can be of following types:
Audio Interchange File Format (AIF), Moving Picture Experts Group audio (MPEG/ MP3), Musical Instrument Digital Interface (MIDI), VOC format, WAV format, etc. (Source: IGNOU MLISc, Information Retrieval, Unit 15 compatibility of ISAR Systems)
Truncation Search: Truncation is a search facility whereby a search can be conducted for all the different forms of a word having the same common root. As an example, the truncated word COMPUT* will retrieve items on COMPUTER, COMPUTING, COMPUTATION, COMPUTE, etc. A number of different options are available for truncation, viz., right truncation, left truncation, and masking of letters in the middle of the word. Left truncation retrieves all words having the same characters at the right hand part, e.g., ‘*hyl’ will retrieve words like ‘methyl’, ‘ethyl’, etc. Similarly, middle truncation retrieves all words having the same characters at the left and right hand part. For example, a middle truncated search term ‘col*r’ will retrieve both the terms ‘colour’ and ‘color’. A ‘wild card’ is used to allow any letter to appear in a specific location within a word. Right truncation and character masking or wild card are the most common truncation search facilities available in search systems. Operators used for truncation search vary from one information retrieval system to another; the most commonly used truncation operators include: *, $, !, and ?. (Source: IGNOU MLISc, Information Retrieval, Unit Search strategies, processes and techniques)
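Right, left, and middle truncation can each be sketched as a regular expression applied to a word list. The mapping of the * operator to a regex is an illustration only, since real systems differ in their operators; the word list is invented.

```python
import re

words = ["computer", "computing", "computation",
         "methyl", "ethyl", "colour", "color"]

def truncate(pattern):
    """Interpret * as 'any run of letters', anchored to the whole word."""
    regex = "^" + pattern.replace("*", "[a-z]*") + "$"
    return [w for w in words if re.match(regex, w)]

print(truncate("comput*"))  # right truncation
print(truncate("*hyl"))     # left truncation
print(truncate("col*r"))    # middle truncation
```

Replacing `[a-z]*` with a single `[a-z]` would give single-character masking (a wild card) instead of open-ended truncation.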
Proximity Search: This search facility allows the user to specify (1) whether two search terms should occur adjacent to each other, (2) whether one or more words may occur in between the search terms, (3) whether the search terms should occur in the same paragraph irrespective of the intervening words, and so on. The operators used for proximity search, and their meanings, differ from one search system to another. A proximity search is as good as a Boolean AND search in the sense that it searches for the occurrence of two or more search terms in the documents; however, it adds more constraints by specifying the distance between the search terms, and therefore the search output becomes more specific. The following are examples of proximity expressions:
- sun within 4 words after moon
- sun within 3 words before moon
- sun within 2 words of moon
- ice within 2 words after fire
- ice within 4 words before fire
- fire within 5 words of ice
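A “within N words” test like the expressions above can be sketched over tokenised text. The function below (an illustration; real systems implement this inside their inverted indexes) checks both ordered (“before”/“after”) and unordered (“of”) proximity by comparing word positions.

```python
def within(text, a, b, n, ordered=False):
    """True if words a and b occur within n words of each other.
    If ordered=True, a must come before b."""
    tokens = text.lower().split()
    pos_a = [i for i, t in enumerate(tokens) if t == a]
    pos_b = [i for i, t in enumerate(tokens) if t == b]
    for i in pos_a:
        for j in pos_b:
            gap = j - i if ordered else abs(j - i)
            if 0 < gap <= n:
                return True
    return False

text = "the sun rose before the moon had set"
print(within(text, "sun", "moon", 4, ordered=True))  # True  (gap is 4)
print(within(text, "sun", "moon", 2))                # False (gap exceeds 2)
```

Tightening n towards 1 approaches an adjacency (phrase) search; relaxing it towards the paragraph length approaches a plain Boolean AND.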
Field-Specific Search: A search can be conducted on all the fields in a database, or it may be restricted to one or more chosen fields to produce more specific results. The specific fields and codes vary according to the search system and database. The following examples show some valid DIALOG searches that have been restricted to specific fields. The general format for using suffix codes is: SELECT /xx,xx … where xx is a Basic Index field code.
- Select computer?/TI – terms searched in the Title (/TI) field only.
- S (information OR communication)/DE,ID – terms searched in either the Descriptor (/DE) or Identifier (/ID) field.
- S S12/TI,AB – restricts set S12 to either the Title (/TI) or Abstract (/AB) field.
In some cases one can use prefix codes to restrict a search to a specific field. For example, in DIALOG one can enter the following search expression to restrict the search to the author or corporate source field:
Select AU= Chowdhury,G
(Source: IGNOU MLISc, Information Retrieval, Unit Search strategies, processes and techniques)
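The idea of restricting a search to chosen fields can be illustrated over simple records. The records, field names and helper function below are hypothetical, merely echoing DIALOG’s /TI and /DE,ID-style suffix codes:

```python
# Hypothetical bibliographic records keyed by field code
records = [
    {"TI": "Computer networks", "AB": "An introduction to data communication",
     "AU": "Chowdhury,G"},
    {"TI": "Information retrieval", "AB": "Search strategies and computer techniques",
     "AU": "Smith,J"},
]

def field_search(records, term, fields):
    """Return records where `term` occurs in any of the named fields
    (cf. DIALOG suffix codes such as /TI or /TI,AB)."""
    term = term.lower()
    return [r for r in records
            if any(term in r.get(f, "").lower() for f in fields)]

# Title only vs. title-or-abstract: the wider field list retrieves more
print([r["AU"] for r in field_search(records, "computer", ["TI"])])
print([r["AU"] for r in field_search(records, "computer", ["TI", "AB"])])
```

Restricting to the title field retrieves only the first record; widening to title or abstract retrieves both, which is why field restriction makes results more specific.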
Limiting Search: Sometimes a user may want to limit a given search by using certain criteria, such as language, year of publication, type of information sources, and so on. These are called limiting searches. Parameters that can be used to limit a search are decided by the database concerned. The following are some examples of limiting searches in DIALOG:
The limit restrictions below show the qualifier and an example for each:
- English-language documents (/ENG): SELECT URBAN(S)CRIME?/ENG
- Patents (/PAT): S TRANSISTOR?/PAT
(Source: IGNOU MLISc, Information Retrieval, Unit Search strategies, processes and techniques)
Range Search: Range search is very useful with numerical information. It is important in selecting records within certain data ranges. The following options are usually available for range searching, though the exact number of operators, their meaning etc., differ from one search system to another:
- Greater than (>)
- Less than (<)
- Not equal to (!= or <>)
- Greater than or equal to (>=)
- Less than or equal to (<=)
The following examples of range search are from DIALOG:
- Publication year (/yyyy): S Internet/2004
- Publication year range (/yyyy:yyyy): S Internet/2003:2004
(Source: IGNOU MLISc, Information Retrieval, Unit 19 Search strategies, processes and techniques)
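The range operators listed above can be sketched over a small set of hypothetical records with a publication-year field:

```python
# Hypothetical records with a numeric field suitable for range searching
records = [
    {"title": "Internet basics", "year": 2003},
    {"title": "Internet services", "year": 2004},
    {"title": "Web search", "year": 2006},
]

# The comparison operators named in the text, mapped to predicates
OPS = {
    ">":  lambda a, b: a > b,
    "<":  lambda a, b: a < b,
    ">=": lambda a, b: a >= b,
    "<=": lambda a, b: a <= b,
    "<>": lambda a, b: a != b,
}

def range_search(records, field, op, value):
    """Select records whose numeric field satisfies the comparison."""
    return [r for r in records if OPS[op](r[field], value)]

# Records published in or after 2004 (cf. the S Internet/2003:2004 style limit)
print([r["title"] for r in range_search(records, "year", ">=", 2004)])
```

A year range such as 2003:2004 is simply the conjunction of a `>=` and a `<=` condition on the same field.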
Expert systems are sophisticated computer programs that manipulate knowledge to solve problems efficiently and effectively in a narrow problem area. Knowledge-based systems enhance the value of expert knowledge by making it readily and widely accessible. Like human experts, these systems use symbolic logic and heuristics to find solutions. They are also capable of learning from experience through an inferencing mechanism. According to Liebowitz, ‘the role of expert systems is to better understand how humans think, reason and learn’ [Liebowitz, 1990].
AI programs that achieve expert-level competence in solving problems in task areas through the knowledge about specific tasks are called knowledge-based or expert systems. Often, the term expert systems is reserved for programs whose knowledge base contains the knowledge used by human experts, in contrast to knowledge gathered from textbooks or non-experts. More often than not, the two terms, expert systems (ES) and knowledge-based systems (KBS) are used synonymously. Taken together, they represent the most widespread type of AI application. The area of human intellectual endeavor to be captured in an expert system is called the task domain. Task refers to some goal-oriented, problem solving activity. Domain refers to the area within which the task is being performed. Typical tasks are diagnosis, planning, scheduling, configuration and design.
Building an expert system is known as knowledge engineering and its practitioners are called knowledge engineers. The knowledge engineer must make sure that the computer has all the knowledge needed to solve a problem. The knowledge engineer must choose one or more forms in which to represent the required knowledge as symbol patterns in the memory of the computer – that is, he (or she) must choose a knowledge representation. He must also ensure that the computer can use the knowledge efficiently by selecting from a handful of reasoning methods.
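One common reasoning method such systems use is forward chaining over if-then rules: the inference engine fires any rule whose premises are all known facts, and repeats until nothing new can be derived. The rules and facts below are hypothetical toy examples, not drawn from any real system:

```python
# Each rule is (set of premises, conclusion) — a tiny declarative knowledge base
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_isolation"),
]

def forward_chain(facts, rules):
    """Fire rules whose premises are all satisfied until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"has_fever", "has_rash"}, rules)))
```

Note how the second rule fires only because the first has already added `suspect_measles` to the fact base; this chaining of intermediate conclusions is what lets an expert system cover a task domain from a compact set of rules.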
Expert Systems for Information Processing and Retrieval
A few expert systems in the field of information processing, information analysis and retrieval are discussed below.
COMIT: COMIT is an early AI language developed at MIT (Massachusetts Institute of Technology). The program was designed for information retrieval: it matches the contents of a highly structured query to highly structured descriptions of bibliographical sources [Weil, 1968]. A categorization scheme is used as a basis for inferencing, and the system follows a pattern-matching procedure. Reference sources are described in functional rather than bibliographic terms. The hits, or successful matches, are ranked according to the degree of certainty that they could answer a query. A declarative knowledge representation scheme is employed in COMIT.
REFSEARCH: REFSEARCH was developed by Joseph Meredith and his team, and attempted a representation of the entire universe of reference works. The program was developed as an instructional tool for teaching librarians the basic principles of reference work. The system aims to study the “principles that would apply to the collection as a whole, to the sum of the data contained in the collection, and to networks of paths leading to the data” [Meredith, 1971]. REFSEARCH describes the various reference sources in terms of the functions they perform. The focus is on describing the reference works as specific types of tools likely to resolve particular types of information problems. The schema for knowledge representation in REFSEARCH is quite similar to the methods adopted in present-day expert systems.
RESEDA: RESEDA, developed by Zarri, is a system based on restructuring the knowledge encoded in printed texts, built on the principles of linguistics. The purpose of RESEDA is to record information found in standard printed works on medieval French biography and history. The goal of RESEDA is to use the knowledge base derived from the texts to answer natural language queries about the domain [Zarri, 1985].
CANSEARCH: Pollitt [1986, 1987] developed CANSEARCH, an expert system for access to cancer therapy literature in the Medline database. CANSEARCH is a hybrid system using rules and frames for knowledge representation. Controlled vocabulary terms and the hierarchical relationships between concepts expressed in the Medical Subject Headings are used to guide the search process. Queries are formulated based on knowledge of typical searches, concepts within the domain, and the Medline query language.
IR-NLI (Information Retrieval – Natural Language Interface): IR-NLI translates the user’s initial query into a formal problem statement. It then models the intermediary’s behaviour by consulting a knowledge base of expert intermediary knowledge. The knowledge base contains rules relating to the tactics, strategies, and approaches used by searchers. The domain knowledge supplied is terminological knowledge about the subject domain of the target database. A formalizer module then generates formal, syntactically correct search strategies [Brajnik and Tasso, 1986]. (Source: IGNOU MLISc, Information Retrieval, Unit 16 Intelligent Information Retrieval Systems)
LOCAS: In the late 1970s the British Library introduced what is known as LOCAS (Local Catalogue Service). A subfile is maintained for each subscribing library. The data selected from UK MARC files and the local data supplied by the subscribing libraries are maintained in the subfile. The data are amended to local requirements by adding data such as locations or special indexes, and stripped of details not needed. The file is processed once a month and the updated catalogue is supplied to the local library, mostly in microform. The system is extremely flexible and can be adapted to the type and form of catalogue required by the local library. OCLC in the USA provides a similar service. (Source: IGNOU MLISc, Library Automation, Unit 9 Automation of Cataloguing)
Division of Labour: Specialisation in the nature of work leads to division of labour. This results in efficiency in the use of labour.
Unity of Command: This means that employees should receive orders from one superior authority only, i.e., accountability to one authority only. This authority is distributed among various levels in the hierarchy of positions in the organisation.
Scalar Chain: Positions in an organisation follow a “chain of superiors” from the highest to the lowest rank. Authority flows through the chain. This chain should not be short-circuited unless following it is detrimental to the organisation. Such cases are not normal.
Esprit de Corps: This fosters brotherhood among employees and forms a key factor in raising employees’ stake in the growth of an organisation. This is an extension of the principle of unity of command. (Source: IGNOU MLISc, Management, Unit 1 concepts and schools of management thought)
Human-Relations School: Elton Mayo is considered the father of the human relations movement, which later became organisational behaviour. The other two important co-researchers of this school are F.J. Roethlisberger and William J. Dickson. (Source: IGNOU MLISc, Management, Unit 1 concepts and schools of management thought)
Per Capita Method: In this method, a minimum amount per head is fixed which is considered essential for providing standard library services. The educational and cultural standards of a community, the expectations of its future needs, the per capita income of the society, the average cost of published reading material, and the salary levels of the library staff are the common factors that go to determine the per capita library finance in public and academic libraries. The per capita estimate can be based on the number either of literate persons or of adults. However, the safest method is to calculate library finance per head of population. The University Grants Commission Library Committee recommended that a university should provide Rs. 15 per student and Rs. 200 per teacher for acquiring reading material for its library. The Kothari Education Commission (1964-66), however, recommended that as a norm, a university should spend each year about Rs. 25 for each student and Rs. 300 per teacher. Ranganathan suggested that per capita expenditure on university and college libraries should be Rs. 20 per student and Rs. 300 per teacher, or Rs. 50 per student. In schools, a per-student appropriation at the rate of Rs. 10 should be made available for the library. For public libraries, Ranganathan suggested a 50 paise per capita expenditure way back in 1950. Now, the per capita figure must be much higher, and this is the inherent limitation of the method: it does not provide for inflation and devaluation. (Source: IGNOU MLISc, Management, Unit 16 Library finance)
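The per capita calculation itself is simple arithmetic. The sketch below applies the UGC Library Committee rates quoted above (Rs. 15 per student, Rs. 200 per teacher); the enrolment figures are hypothetical:

```python
# Rates from the UGC Library Committee recommendation quoted in the text
RATE_PER_STUDENT = 15   # Rs. per student
RATE_PER_TEACHER = 200  # Rs. per teacher

def per_capita_budget(students: int, teachers: int) -> int:
    """Estimate the reading-material budget under the per capita method."""
    return students * RATE_PER_STUDENT + teachers * RATE_PER_TEACHER

# A hypothetical university with 5,000 students and 300 teachers
print(per_capita_budget(5000, 300))  # 75,000 + 60,000 = Rs. 135,000
```

The limitation noted above is visible in the code: the rates are fixed constants, so the estimate cannot track inflation unless the rates themselves are revised.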
Proportional Method: This method presupposes that the authorities provide adequate finances to the library out of their regular budget, and that a particular minimum limit is fixed. A generally used measure of adequate support is the percentage of the institutional budget allocated for library purposes. Various standards have been recommended for deciding this limit in India. The University Education Commission recommended that 6.5 per cent of a university’s budget would be a reasonable expenditure on its library. The Commission suggested that “this expenditure could vary from 6.5 to 10 per cent, depending on the stage of development of each university library”. In practice, the majority of universities in India hardly spend three per cent of their total budget on their libraries. It is generally agreed by most authorities that a college should allocate four to five per cent of its total expenditure to the library. Ranganathan suggested that either 10 per cent or 6 per cent of the total budget should be earmarked for public library purposes. This method is likely to lead to high disparity in the case of special libraries, as the budgets of high-technology and capital-intensive organisations are much larger than the budgets of pure research, social science and humanities institutions. (Source: IGNOU MLISc, Management, Unit 16 Library finance)
Method of Details: According to this method, all items of expenditure of a library are accounted for while preparing the financial estimates. These are of two types, viz., i) recurring or current expenditure and ii) non-recurring or capital expenditure. For estimating public library finances, Ranganathan suggested the calculation of recurring/current expenditure and non-recurring/capital expenditure. The Advisory Committee for Libraries, Government of India, followed almost a similar method for estimating the financial requirements for establishing a countrywide public library system. The UGC Library Committee in its report suggested a staff formula for finding out the quantum of library staff members of various categories required for college and university libraries. It also laid down their respective pay scales. The total amount required for meeting the cost of the staff can be calculated by this formula. For the cost of books and other reading materials, the Committee suggested a per capita expenditure formula. Lastly, a suitable combination of the above methods may be ideal in some situations. (Source: IGNOU MLISc, Management, Unit 16 Library finance)
Some Document Supplying Agencies
- i) ISI Document Solutions (IDS) (http://www.isinet.com/products/dosdelivery/ids): This is a document delivery service offered by the Institute for Scientific Information (ISI), Pittsburgh, USA. IDS provides full text articles from over 16,000 titles covered in the ISI database, which covers scholarly journals in the disciplines of science and technology, social sciences, arts and humanities. Articles can be ordered via OCLC ILL, DataStar, OVID, Dialog, through the basic indexes of ISI and the ISI websites, as well as through email, fax, telephone and mail. The charges include a standard processing fee and a variable copyright fee, and the copies are copyright-cleared. These are delivered to users via fax, mail, or courier. Users can open a deposit account with a sum of US$500, and each time ISI will send a statement of the balance amount to the user. (IGNOU MLISc, MLII-104 Information Communication Technologies – Applications, Block 3 Library and Information Services, Unit 10 Document Delivery Services)
- ii) Canadian Institute for Scientific and Technical Information (CISTI) (http://cisti-icist.nrc-cnrc.gc.ca/dosdel/delivery): CISTI is one of the world’s major sources of information in all areas of science, technology, engineering and medicine. One can access the CISTI collection as well as the collection of the Canadian Agriculture Library (CAL). Articles, conference proceedings, books, etc. can be ordered from CISTI via OCLC ILL, DOCLINE, the OVID Web Gateway, email, fax, telephone or mail. Articles are delivered via fax, Ariel, or courier. Books are delivered via courier only. (IGNOU MLISc, MLII-104 Information Communication Technologies – Applications, Block 3 Library and Information Services, Unit 10 Document Delivery Services)
iii) British Library Document Supply Centre (BLDSC) (http://www.bl.uk/services/document/articles.html): The British Library Document Supply Centre (BLDSC) is the world’s foremost document supply service. It receives over 4 million requests each year (including over 1 million from outside the UK) for all categories of literature. BLDSC is one of the best-known centres in the field of document supply. It provides a rapid and comprehensive document delivery and interlibrary loan service to researchers and scholars in all kinds of libraries and organisations, and is regarded as the central organisation for interlibrary loans in the UK. Though particularly strong in science, technology and medicine, the BLDSC collects materials in all subject areas of human knowledge in many languages. The Centre holds journals, books, conference proceedings, reports, theses, official publications, grey literature of all kinds, music, patents, etc. It supplies documents from its own collection held at Boston Spa and other parts of the British Library. It satisfies almost 83% of requests from its own stock, including 45,000 currently subscribed titles and 2,50,000 back-issue journal titles. Documents of any length can be ordered from BLDSC via OCLC/ILL, Dialog, DataStar, ARTTel, ARTEmail, British Library Automated Information Service (BLAISE)/LINE, telephone, fax and mail. Documents supplied are copyright-cleared because BLDSC has direct agreements with publishers and copyright clearing agencies in the United Kingdom. Articles are delivered via fax, Ariel or mail; books are delivered via airmail. Billing can be through the NELINET monthly statement and/or the Interlibrary Loan Fee Management (IFM) option. BLDSC provides two ways of passing requests: organisations or libraries can request documents on loan, or they can order documents through the Article Direct Service of BLDSC.
Each user organisation must register with the BLDSC and establish an account before using the Library Privilege Photocopy Service (formerly known as the International Photocopy Service). An individual user also needs to open a customer account with BLDSC with a minimum deposit of Pound 100 or US$160. Payment can also be made by credit card, through the Web Order Form, or against a pro forma invoice sent by fax or email. For one article, regardless of length, the airmail delivery cost comes to Pound 7.25 or US$11.50. If a document is taken on loan, the present charge for airmail delivery is Pound 13.25 or US$21.25. (IGNOU MLISc, MLII-104 Information Communication Technologies – Applications, Block 3 Library and Information Services, Unit 10 Document Delivery Services)
- iv) National Library of Australia (http://www.nla.gov.au/webpac/): The National Library of Australia’s Document Supply Service (NLADSS) is Australia’s largest document delivery service. The catalogue of the NLA provides information on its holdings. The majority of materials held at the library are available for loan. Some of the materials, however, are part of a special collection and are not available for loan. Individuals who wish to use library materials need to contact their institution library or local public library to arrange to borrow the materials on their behalf. Since June 1, 2000 NLADSS has ceased accepting vouchers as payment for interlibrary loans and document supply. In place of this, libraries using the NLADSS can choose between the following methods of payment:
- Kinetic Document Delivery (KDD) payment service.
- Monthly account (payable by cheque or credit card). (IGNOU MLISc, MLII-104 Information Communication Technologies – Applications, Block 3 Library and Information Services, Unit 10 Document Delivery Services)
- v) Inforetrieve – The Article Store (http://www.inforetrieve.com): Inforetrieve has assembled one of the world’s largest libraries of articles and journal contents through ongoing partnerships and alliances with publishers and content producers. It provides copies of published documents in accordance with copyright law, using a network of libraries, electronic resources, publishers and other primary document suppliers to deliver the documents. The service claims to fulfil 95% of users’ document demands. Articles can be ordered from Inforetrieve via OCLC ILL, the web, email, fax, and telephone. Articles are copyright-cleared and delivered to the user via Ariel, fax, mail, or courier. Ariel is a commercial software package which sends and receives documents as TIFF or PDF files over the Internet; it must be installed on the requester’s workstation in order to use this delivery method. (IGNOU MLISc, MLII-104 Information Communication Technologies – Applications, Block 3 Library and Information Services, Unit 10 Document Delivery Services)
- vi) Ingenta Journals (http://www.ingenta.com/): Ingenta Journals was launched in May 1998 and now offers a single point of free access to the abstracts of over 9,00,000 full text articles from over 2,800 academic and professional journals from over 35 leading publishers. It provides documents to more than 3 million users annually. It allows anyone anywhere in the world to browse and search the database of articles free of charge and to view tables of contents, bibliographic information and abstracts. More than 8,000 academic, research and corporate libraries, institutions and consortia from all over the world are currently using this service. Subscribers can view full text articles free of cost, whereas non-subscribers have to pay for them. Users can order documents for immediate electronic delivery, and payment can be made through credit cards on screen. (IGNOU MLISc, MLII-104 Information Communication Technologies – Applications, Block 3 Library and Information Services, Unit 10 Document Delivery Services)
vii) AskIEEE (http://www.ieee.org/services/askieee/): The AskIEEE service is the document delivery service of the Institute of Electrical and Electronics Engineers, Inc., which provides photocopies of articles published by the IEEE. It has different levels of delivery options, e.g., first-class mail, facsimile, etc. All requests are fulfilled from its own collections. (IGNOU MLISc, MLII-104 Information Communication Technologies – Applications, Block 3 Library and Information Services, Unit 10 Document Delivery Services)
viii) Chemical Abstracts Service/Document Delivery Service (CASDDS) (http://www.cas.org/support/dds.html) The CASDDS supplies most of the documents cited in Chemical Abstracts. It provides photocopies for non-copyrighted publications, publications registered with Copyright Clearance Centre, American Chemical Society publications, and publications from organisations with whom CAS has right-to-copy agreements. It also provides loan service to international customers for a period of 28 days. Documents are supplied through normal post, airmail, fax, and courier. It does not supply multiple copies of a single document. (IGNOU MLISc, MLII-104 Information Communication Technologies – Applications, Block 3 Library and Information Services , Unit 10 Document Delivery Services)
- ix) OCLC Full Text Option Program: Using the OCLC/ILL Full Text Option, libraries can request ASCII text documents using OCLC/ILL procedures and workflows. The documents are delivered within minutes of the receipt of the request in the body of e-mail messages. Requesters can use the OCLC/ILL Fee Management service to pay for the supplied documents. (IGNOU MLISc, MLII-104 Information Communication Technologies – Applications, Block 3 Library and Information Services , Unit 10 Document Delivery Services)
- x) PEAK: Pricing Electronic Access to Knowledge (http://www.umdl.umich.edu/peak): PEAK is a research project of the University of Michigan in which Elsevier Science and the University of Michigan are working together to create and manage a host service for all 1,110 journals published by Elsevier, including North Holland, Butterworth and Pergamon Press. In this service, a user can have unlimited access to a specific article for a fixed price. Users can also select and purchase a fixed number of articles for use. (IGNOU MLISc, MLII-104 Information Communication Technologies – Applications, Block 3 Library and Information Services, Unit 10 Document Delivery Services)
- xi) Science Direct (http://www.sciencedirect.com): Science Direct is an online host facility for scientific and technical information. It offers libraries and their users desktop access to the remotely stored full text of journals published by Elsevier Science and other participating publishers. Users can have single-user or multi-user licences for one or more years of subscription. This service is very popular in science and technology libraries all over the world. (IGNOU MLISc, MLII-104 Information Communication Technologies – Applications, Block 3 Library and Information Services, Unit 10 Document Delivery Services)
xii) Uncover Desktop Image Delivery (http://www.uncweb.carl.org/uncover/imgfaq.html): Uncover offers full text articles from its database of over 2,500 journals. Nearly 300 scholarly, university and trade publishers have granted permission to deliver articles from their publications. It allows users to order and download articles, including images, graphics, photographs, etc. Articles are delivered by Uncover after charging a handling fee. (IGNOU MLISc, MLII-104 Information Communication Technologies – Applications, Block 3 Library and Information Services, Unit 10 Document Delivery Services)
xiii) National Technical Information Service (NTIS) (http://supprot.dialog.com/publications/docdelivery): The NTIS is the largest single resource for government-funded scientific, technical, engineering and business-related information available today. It provides access to over 2 million publications covering over 35 subject areas brought out during the past 50 years. Users can make their payment through credit cards, or they can open deposit accounts with NTIS. Documents are delivered by the NTIS through airmail or surface mail. (IGNOU MLISc, MLII-104 Information Communication Technologies – Applications, Block 3 Library and Information Services, Unit 10 Document Delivery Services)
xiv) University Microfilms International (UMI) Dissertations Services (http://www.umi.com): University Microfilms International offers a comprehensive full text document delivery service for dissertations and master’s theses in a variety of formats. Documents can be ordered by credit card, or users can open a deposit account with UMI. Copies can be had in microfilm, microfiche, soft-cover paper or hard-cover paper. UMI charges different prices for different countries and different types of users, e.g., academic and non-academic. (IGNOU MLISc, MLII-104 Information Communication Technologies – Applications, Block 3 Library and Information Services, Unit 10 Document Delivery Services)
- xv) United States Patent and Trademark Office (USPTO) (http://www.uspto.gov/patft/): United States Patents Full Text and Full Image Database contains two parts:
1) Patents Grant Database which includes full text of patents granted since 1976 and full images since 1790;
2) Patent Application Database: Users can search a particular patent document using Quick Search option, Advance Search option and Application Number Search option. They can display the search result on computer monitor or download it or place an order for printed copies to the USPTO.
xvi) Chicago Public Library: The Chicago Public Library is one of the Patents and Trademarks Depository Libraries in the United States. The library provides copies of U.S. patents as well as foreign patents. The library holds an almost complete collection of British patents from 1617 to mid-1994 and German patents from 1912 to 1938. The copies can be obtained through the Xerox Copy Centre. (IGNOU MLISc, MLII-104 Information Communication Technologies – Applications, Block 3 Library and Information Services, Unit 10 Document Delivery Services)
xvii) DERWENT Publications Ltd.: The Patents Supply Division of Derwent Publications Ltd offers a comprehensive patent delivery service with very rapid turnaround at low prices. Patents published anywhere in the world and cited in the World Patents Index may be ordered from Derwent. The delivery options available are: normal service, express service, fax service, or translated documents from other languages. Orders can be placed through any mode, and users can open a deposit account with Derwent. (IGNOU MLISc, MLII-104 Information Communication Technologies – Applications, Block 3 Library and Information Services, Unit 10 Document Delivery Services)
xviii) Patent Information System (PIS): To provide the technological information contained in patents or patent-related literature through a publication service, search service and patent copy supply service, the Government of India established the Patent Information System (PIS) at Nagpur in 1980. PIS operates a subscriber advance payment scheme. Under this scheme, a user interested in availing of the regular service for the procurement of patent documents may remit an amount of not less than Rs. 1000/-, or multiples thereof, and open an account in his name. On receipt of payment, the user is allotted a Subscriber Account Number (SAN) by the office of PIS. The charges for the information or services provided are debited to the account, and debit/credit statements are sent. (IGNOU MLISc, MLII-104 Information Communication Technologies – Applications, Block 3 Library and Information Services, Unit 10 Document Delivery Services)
- xix) Other Document Delivery Agencies: Besides these agencies, there are some other national and international document delivery services which provide documents in various disciplines. Some of these are:
- Information Quest (http://www.eiq.com/)
- ACM Digital Library (http://www.acm.org/dl/)
- Articles in Physics (http://www.ojps.aip.org/)
- Bioline Publications (http://www.bioline.org.br/)
- BioMedNet: Internet Community for Biological and Medical Researchers (http://www.biomednet.com/)
- ChemPort (http://www.chemport.org/html/english/about.html)
- ChemWeb : The World Wide Club for the Chemical Community (http://www.chemweb.com/)
- News Library (http://www.newslibrary.infi.net/noframe2.htm)
- Docdeliver (http://www.docdeliver.com/)
- The Electric Library (http://www.elibrary.com/)
Firewalls: Firewalls are the first line of defence for an institutional network. A firewall is a combination of hardware and software that separates a local area network (LAN) into two or more parts for security purposes. It is a set of related programs, located at a network gateway server, that protects the resources of a private network from users of other networks. All network connections to and from the institutional network pass through the firewall, which acts as a gatekeeper: it gives access to valid requests and blocks invalid and unauthorized requests and transmissions. An enterprise with an intranet that allows its users to access the Internet installs a firewall to prevent outsiders from accessing its own private data resources, and to control what outside resources its own users have access to. In short, a firewall is a security interface or gateway between a closed system or network and the outside Internet that blocks or manages communications in and out of the system.

Basically, a firewall, working closely with a router program, examines each network packet to determine whether to forward it towards its destination. A firewall may also include or work with a proxy server that makes network requests on behalf of workstation users. The firewall is often installed on a specially designated computer, separate from the rest of the network, so that no incoming request can get directly at private network resources. A number of companies make firewall products. Firewalls typically offer features such as logging and reporting, automatic alarms at given thresholds of attack, and a graphical user interface for controlling the firewall. The following are popular firewall software products:
- Personal computer firewalls: BlackICE Agent, eSafe Desktop, McAfee Internet Guard Dog, Norton Internet Security and ZoneAlarm.
- Office firewalls: D-Link Residential Gateway, Linksys EtherFast Cable/DSL router, Netgear and SonicWall.
- Corporate firewalls: Check Point, Cisco Secure PIX Firewall, eSoft Interceptor and SonicWall Pro. (IGNOU MLISc, MLII-104 Information Communication Technologies – Applications, Block 4 Internet Resources and Services, Unit 12 Basics of Internet)
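The gatekeeping behaviour described above can be sketched as an ordered list of packet-filtering rules, where the first matching rule decides a packet's fate. This is a toy illustration; real firewalls inspect many more packet attributes (addresses, protocol state, payload) and the rules shown are hypothetical:

```python
# Hypothetical rule table: first matching rule wins; port=None matches any port
RULES = [
    {"direction": "in",  "port": 80,   "action": "allow"},   # inbound web traffic
    {"direction": "in",  "port": 22,   "action": "allow"},   # inbound ssh
    {"direction": "in",  "port": None, "action": "block"},   # default: deny inbound
    {"direction": "out", "port": None, "action": "allow"},   # allow all outbound
]

def filter_packet(direction: str, port: int) -> str:
    """Return the action of the first rule matching the packet."""
    for rule in RULES:
        if rule["direction"] == direction and rule["port"] in (None, port):
            return rule["action"]
    return "block"  # nothing matched: fail closed

print(filter_packet("in", 80))    # allow: valid request passes the gatekeeper
print(filter_packet("in", 5900))  # block: unauthorized inbound traffic
```

The "default deny inbound, allow outbound" ordering mirrors the gatekeeper role described in the text: valid requests pass, everything else is blocked.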
Proxy Servers: In an enterprise that uses the Internet, a proxy server is a server that acts as an intermediary between a workstation user and the Internet so that the enterprise can ensure security, administrative control and caching services. A proxy server is associated with, or is part of, a gateway server that separates the enterprise network from outside intrusion. A proxy server receives a request for an Internet service (such as a web page request) from a user. If the request passes the filtering requirements, the proxy server, assuming it is also a cache server, looks in its local cache of previously downloaded web pages. If it finds the page, it returns it to the user without forwarding the request to the Internet. If the page is not in the cache, the proxy server, acting as a client on behalf of the user, uses one of its own IP addresses to request the page from the server out on the Internet. When the page is returned, the proxy server relates it to the original request and forwards it on to the user. When a firewall stops company workers from accessing the Internet directly, a proxy server is used to provide access. It also acts as a security device by providing a buffer between inside and outside (Internet) computers. The steps in the functioning of a typical proxy server are given below:
- i) A request for a file from a client is sent to the proxy server;
- ii) The proxy server contacts the web server to get the file;
- iii) The proxy server keeps a copy of the file in its cache; and
- iv) The proxy server forwards a copy of the file to the client.
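The four steps above can be sketched as a toy caching proxy. This is an illustrative sketch only; `fetch_from_origin` is a hypothetical stand-in for a real HTTP request to the web server.

```python
# Toy caching proxy illustrating steps i)-iv): check the cache, fetch on a
# miss, keep a copy, and forward it to the client.
cache: dict[str, str] = {}

def fetch_from_origin(url: str) -> str:
    """Hypothetical stand-in for contacting the web server (step ii)."""
    return f"<html>content of {url}</html>"

def proxy_request(url: str) -> tuple[str, bool]:
    """Serve a URL, returning (content, was_cache_hit)."""
    if url in cache:                  # page downloaded earlier -> serve locally
        return cache[url], True
    content = fetch_from_origin(url)  # step ii: contact the web server
    cache[url] = content              # step iii: keep a copy in the cache
    return content, False             # step iv: forward the copy to the client

page, hit = proxy_request("http://example.org/")
page2, hit2 = proxy_request("http://example.org/")
print(hit, hit2)  # first request misses the cache, the second is a hit
```

This is why a cache-enabled proxy reduces both response time and outbound traffic: repeated requests never leave the local network.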
The functions of proxy server, firewall and caching can be separated into individual programs or combined in a single package, and these programs can be hosted on different servers or on a single server. For example, a proxy server may run on the same machine as a firewall server, or it may run on a separate server and forward requests through the firewall. To the user, the proxy server is invisible; all Internet requests and returned responses appear to come directly from the Internet server. (IGNOU MLISc, MLII-104 Information Communication Technologies – Applications, Block 4 Internet Resources and Services, Unit 12 Basics of Internet)
Intrusion Detection System (IDS): Specialised software termed an intrusion detection system can be deployed to monitor the organisation's network and detect any malicious activity on it. There are two types of intrusion detection systems currently in use: scanners and monitors. Both can be deployed on a network or on individual computers. Scanners are static IDS systems that keep watch over a network, like a security guard. Scanners check for things like wrong passwords, security holes and misconfigured computers. Some scanners take a snapshot of the current state of the network and compare it with a snapshot taken at a later date to see whether any unwarranted changes have occurred in the system. If the scanner senses such changes, it may sound an alarm or act proactively by replacing changed files with clean copies. Monitor IDS, on the other hand, are dynamic systems that look for attacks on the network while they are in progress. Monitor IDS are also known as threat monitors. (IGNOU MLISc, MLII-104 Information Communication Technologies – Applications, Block 4 Internet Resources and Services, Unit 12 Basics of Internet)
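The snapshot-and-compare technique used by scanner-type IDS can be sketched with file fingerprints. This is an illustrative sketch; the file paths and contents below are hypothetical examples, and a real scanner would walk the filesystem rather than take an in-memory dictionary.

```python
# Sketch of a "scanner"-style IDS: fingerprint monitored files in a snapshot,
# re-scan later, and report any unwarranted changes.
import hashlib

def snapshot(files: dict[str, bytes]) -> dict[str, str]:
    """Record a SHA-256 fingerprint of each monitored file."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}

def compare(old: dict[str, str], new: dict[str, str]) -> list[str]:
    """Return the names of files that were added, removed, or modified."""
    changed = [n for n in new if old.get(n) != new[n]]
    removed = [n for n in old if n not in new]
    return sorted(changed + removed)

before = snapshot({"/etc/passwd": b"root:x:0:0", "/bin/login": b"\x7fELF..."})
after  = snapshot({"/etc/passwd": b"root:x:0:0", "/bin/login": b"TROJAN"})
print(compare(before, after))  # the tampered binary is flagged
```

On detecting a change, a scanner of the kind described above could raise an alarm or restore the flagged files from known-clean copies.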
Virtual Private Network (VPN): A Virtual Private Network (VPN) can be used by organisations as a means of protection from outside attack. A VPN is a private data network that uses public networks (such as the Internet), tunnelling protocols and security procedures to tunnel data from one network to another. The data sent over a VPN is usually encrypted before it is transmitted on the net. (Source: IGNOU MLISc, MLII-104 Information Communication Technologies – Applications, Block 4 Internet Resources and Services, Unit 12 Basics of Internet)
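The encrypt-then-tunnel idea can be illustrated with a toy example. This is purely a sketch: the XOR one-time pad below is not real VPN cryptography (real VPNs use protocols such as IPsec or TLS), and the gateway name `vpn-gw.example.org` is hypothetical.

```python
# Toy illustration of the VPN idea: encrypt the payload, wrap it in an outer
# "tunnel" packet addressed to the remote gateway, and decrypt on arrival.
# The XOR one-time pad is illustrative only; real VPNs use IPsec, TLS, etc.
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

payload = b"internal LAN data"
key = secrets.token_bytes(len(payload))   # assumed shared by the two gateways

tunnel_packet = {
    "outer_dst": "vpn-gw.example.org",    # the only address visible publicly
    "ciphertext": encrypt(payload, key),  # private data travels encrypted
}

received = decrypt(tunnel_packet["ciphertext"], key)
print(received == payload)  # True: the far gateway recovers the data
```

The point of the sketch is the structure: an eavesdropper on the public network sees only the outer packet and ciphertext, never the private payload or internal addresses.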
Information scouting consists of keeping track of who has what information and where it is available, and referring requests to the appropriate persons and/or facilities within the organisation or outside it, or both. Knowledge of such information may become available through personal interviews with the originator by a member of the staff, or the originator may notify the organisation that his/her research efforts will be reported at a conference or a meeting. To glean such information, information centres should be staffed with well-informed and well-connected persons. (Source: IGNOU MLISc, MLII-101 Information Sources, Systems and Services, Block-1 Information Institutions, Unit 2 Information Centres: Types and their organisation)
National Institute of Science Communication and Information Resources (NISCAIR), CSIR, New Delhi: NISCAIR was formed in 2002 by the merger of the Indian National Scientific Documentation Centre (INSDOC) and the National Institute of Science Communication (NISCOM). INSDOC was established in 1952 by the Council of Scientific and Industrial Research (CSIR) with UNESCO's technical assistance to serve the scientists and technologists in industry, government, universities and research institutes by providing documentation services. NISCOM had been involved in the publication of scientific literature, especially journals. NISCAIR maintains the National Science Library and provides various services/activities that include an electronic library, document procurement, organisation and dissemination of S&T information, translation of S&T documents, networking, online bibliographic information, training programmes for library/information professionals and preparation of databases. NISCAIR has several publications that include primary and secondary journals. The major initiatives presently handled by NISCAIR are the Traditional Knowledge Digital Library (TKDL), e-journals consortia and the National Science Digital Library. (Source: IGNOU MLISc, MLII-101 Information Sources, Systems and Services, Block-1 Information Institutions, Unit 2 Information Centres: Types and their organisation)
National Institute of Science Communication and Information Resources (NISCAIR): The Indian National Scientific Documentation Centre (INSDOC), now NISCAIR, has been offering a document delivery service at the national level since 1952. The service is provided utilising the entire country's resources, including those of the National Science Library and the Pilot Electronic Library of NISCAIR. Requests for document delivery are received by mail, fax, telex and e-mail from scientific and technical libraries located across the country. The location of a required document is identified using the computerised National Union Catalogue of Scientific Serials in India (NUCSSI). Other forms of document delivery service offered by NISCAIR are the Contents, Abstracts and Photocopies Service (CAPS) and the Full Text Journal Service (FTJS). (Source: IGNOU MLISc, MLII-104 Information Communication Technologies – Applications, Block 3 Library and Information Services, Unit 10 Document Delivery Services)
The erstwhile INSDOC introduced Centralised Acquisition of Periodicals (CAPS) for the libraries of CSIR. However, the service has since been discontinued. (Source: IGNOU MLISc 101 Fundamentals of Information Communication Technologies, Block-4 Resource Sharing Networks, Unit 13 Bibliographic Utility)
Journals Published by NISCAIR
NISCAIR is bringing out 17 primary journals in various subject fields related to science and technology. These are:
1) Journal of Scientific and Industrial Research (monthly)
2) Indian Journal of Chemistry A (monthly)
3) Indian Journal of Chemistry B (monthly)
4) Indian Journal of Experimental Biology (monthly)
5) Indian Journal of Pure & Applied Physics (monthly)
6) Indian Journal of Biochemistry & Biophysics (bi-monthly)
7) Indian Journal of Engineering & Materials Sciences (bi-monthly)
8) Indian Journal of Chemical Technology (bi-monthly)
9) Indian Journal of Radio & Space Physics (bi-monthly)
10) Journal of Intellectual Property Rights (bi-monthly)
11) Indian Journal of Marine Sciences (quarterly)
12) Indian Journal of Fibre & Textile Research (quarterly)
13) Natural Product Radiance (bi-monthly)
14) Indian Journal of Biotechnology (quarterly)
15) Indian Journal of Traditional Knowledge (quarterly)
16) Annals of Library and Information Studies (quarterly)
17) Bhartiya Vaigyanik evam Audyogik Anusandhan Patrika (Hindi) (half-yearly)
Besides the primary journals, NISCAIR also publishes two abstracting journals:
1) Medicinal and Aromatic Plants Abstracts (bi-monthly)
2) Indian Science Abstracts (fortnightly)
(Source: IGNOU MLISc 101 Information Sources, Systems and Services, Block-2 Information Sources, Systems and Programmes , Unit 9 National and International Information Organisations)
Human Resource Development: Development of human resources in library, documentation and information science has been a major activity of erstwhile INSDOC since 1964. (Source: IGNOU MLISc 101 Information Sources, Systems and Services, Block-2 Information Sources, Systems and Programmes , Unit 9 National and International Information Organisations)
National Science Library (NSL): NSL was established in 1964. It aims to acquire all important S&T publications published in the country and to strengthen its resource base of foreign periodicals by acquiring journals on CD-ROM or in other electronic forms. (Source: IGNOU MLISc 101 Information Sources, Systems and Services, Block-2 Information Sources, Systems and Programmes, Unit 9 National and International Information Organisations)
Information Services by NISCAIR
NISCAIR offers a number of information services, some of which have continued since the inception of the erstwhile INSDOC.
- Medicinal and Aromatic Plants Information Services (MAPIS) based on the Wealth of India and MAPA databases
- Contents, Abstracts and Photocopies Service (CAPS) is a highly personalised service that provides contents information from journals on a regular basis.
- Literature Search Service is offered by providing access to over 6000 international databases.
- NISCAIR is the National Centre of the ISSN International Centre, assigning ISSN numbers to serials published in India.
- NISCAIR provides S&T translation services from major foreign languages such as Japanese, German, French, Spanish, Chinese and Russian into English.
- Bibliometrics Services: NISCAIR renders bibliometrics services on specialised subjects for studying the growth, development and spread of any area of research.
(Source: IGNOU MLISc 101 Information Sources, Systems and Services, Block-2 Information Sources, Systems and Programmes , Unit 9 National and International Information Organisations)
ii) National Social Science Documentation Centre (NASSDOC), ICSSR, New Delhi: The National Social Science Documentation Centre (NASSDOC) was established in 1969 as a Division of the ICSSR with the chief objective of providing library and information support services in the field of the social sciences. Services are available to researchers in the social sciences: those working in academic institutions, autonomous research organisations, policy-making, planning and research units of government departments, business and industry, etc. in India. NASSDOC also provides guidance to the libraries of ICSSR Regional Centres and ICSSR-maintained Research Institutes. Facilities available at NASSDOC include library and reference services; a collection of unpublished doctoral dissertations, research project reports, and current and old volumes of selected social science journals of Indian and foreign origin; literature search services from databases; compilation of bibliographies; and a document delivery service. NASSDOC is also involved in a union catalogue project and in conducting short-term training courses. (IGNOU MLISc, MLII-101 Information Sources, Systems and Services, Block-1 Information Institutions, Unit 2 Information Centres: Types and their organisation)
iii) Defence Scientific Information and Documentation Centre (DESIDOC), DRDO, New Delhi: DESIDOC was established in 1958 under the Defence Research and Development Organisation (DRDO) and is the central information agency for the collection, processing and dissemination of scientific and technical information of interest to DRDO laboratories, establishments and other agencies of the Ministry of Defence. The facilities of DESIDOC include the Defence Science Library (DSL), translation services, information network and communication facilities, and a multimedia facility. DESIDOC brings out a few periodicals and some ad hoc publications, and the centre also acts as the publication wing of DRDO. In order to provide an efficient information service to DRDO scientists by sharing the resources of all the DRDO labs/establishments, DESIDOC has provided e-mail and Internet facilities to all of them. (Source: IGNOU MLISc, MLII-101 Information Sources, Systems and Services, Block-1 Information Institutions, Unit 2 Information Centres: Types and their organisation)
DESIDOC began functioning in 1958 as the Scientific Information Bureau (SIB), a division of the Defence Science Laboratory (DSL), presently the Defence Science Centre. The Defence Research and Development Organisation (DRDO) library, established in 1948, became a division of SIB in 1959. In 1967 SIB was reorganised and renamed the Defence Scientific Information and Documentation Centre (DESIDOC), continuing to function under the administrative control of DSL. In 1970, DESIDOC became an independent unit and one of the laboratories of DRDO. The Centre initially functioned in the main building of Metcalfe House, a national monument, and in 1988 moved to a new building in the same complex. Since becoming an independent and self-accounting unit, DESIDOC has functioned as a central information resource for DRDO. It provides S&T information, based on its library and other information resources, to the DRDO headquarters and its various laboratories located all over India. (Source: IGNOU MLISc 101 Information Sources, Systems and Services, Block-2 Information Sources, Systems and Programmes, Unit 9 National and International Information Organisations)
- iv) Small Enterprises National Documentation Centre (SENDOC): The Small Enterprises National Documentation Centre (SENDOC) was established in 1970 at the National Small Industry Extension Training Institute (NISIET) as a unique knowledge resource centre to cater to the information needs of Small and Medium Enterprises (SMEs). The centre has a rich collection of knowledge resources such as books, journals, reports, and business and industrial information in print and electronic media. SENDOC operates several online databases, and its activities include: lending of books, inter-library lending of documents, reference, preparation of bibliographies, literature search, newspaper clippings and a technical enquiry service. Equipped with rich knowledge resources, expertise and facilities, the centre has been conducting national and international programmes in information science and other user-centric programmes since 1971 and has trained library and information professionals from more than 108 developing countries. The centre also possesses state-of-the-art IT facilities and provides a variety of documentation and information services through its division, the Business Information Bureau (BIB). (IGNOU MLISc, MLII-101 Information Sources, Systems and Services, Block-1 Information Institutions, Unit 2 Information Centres: Types and their organisation)
- v) Wildlife Institute of India – Library and Documentation Centre, Dehradun: Set up in 1986 with only four library services, the centre now offers nearly 22 services to its users in the area of wildlife. Besides the library, Information Analysis and Documentation is a separate unit within the library, responsible for providing information and bibliographic services. Major activities include a monthly Current Content Service, preparation of topical bibliographies, the Indian Wildlife Abstracting Service, a newspaper clipping service and maintenance of specialised in-house and CD-ROM databases concerning wildlife. (IGNOU MLISc, MLII-101 Information Sources, Systems and Services, Block-1 Information Institutions, Unit 2 Information Centres: Types and their organisation)
- vi) BARC Scientific Information Resource Division – Central Library, Depository Library and Divisional Libraries: The Bhabha Atomic Research Centre (BARC) has a Scientific Information Resource Division taking care of library and documentation activities of the centre. All the activities are in turn covered by the Central Library, Depository Library and Divisional Libraries. The Central Library at BARC has got the necessary infrastructure and facilities including electronic information resources to meet the information requirements of the scientists and engineers in BARC and other institutions under the Department of Atomic Energy. The Depository Library (DL) is the most comprehensive library in the country for scientific and technical reports in the field of nuclear science and technology. There are over 5,50,000 reports. A majority of them are available in microfilm (microcard/microfiche) and some on CD-ROM. The reports are loaned to scientists on request. In addition to the Central Library many important scientific divisions of BARC have Divisional Libraries containing books and other publications which are needed for day-to-day reference. However, the Central Library is responsible for procurement and processing of materials required by the Divisional Libraries. (IGNOU MLISc, MLII-101 Information Sources, Systems and Services, Block-1 Information Institutions , Unit 2 Information Centres: Types and their organisation)
vii) Bioinformatics Centre, National Institute of Oceanography, Goa: The Bioinformatics Centre was established at the National Institute of Oceanography (CSIR), Dona Paula, Goa in January 1990 under the Biotechnology Information System (BTIS) programme, recognising the importance and necessity of developing the emerging area of Bioinformatics and its application in the Marine Sciences. The Bioinformatics Centre at NIO has been engaged in the collection and dissemination of data/information on marine biodiversity, building databases in the subject, developing networking and communication, and providing training to potential users. (Source: IGNOU MLISc, MLII-101 Information Sources, Systems and Services, Block-1 Information Institutions, Unit 2 Information Centres: Types and their organisation)
viii) Agricultural Research Information Centre, ICAR, New Delhi: The Indian Council of Agricultural Research (ICAR) has been running the Agricultural Research Information Centre (ARIC) as the central source of information on all research projects and schemes financed by the ICAR since 1967. The centre maintains databases on agricultural projects, deputation reports and the research projects of institutions, and the updating of these databases has been computerised. The ARIC is the national input centre for the AGRIS and CARIS agricultural databases of the FAO, the largest information system of its kind in the world. The ARIC is also the National Focal Point for the SAARC Agricultural Information Centre (SAIC). (IGNOU MLISc, MLII-101 Information Sources, Systems and Services, Block-1 Information Institutions, Unit 2 Information Centres: Types and their organisation)
- i) Japan Information Centre of Science and Technology – JICST, Japan: JICST covers almost all vital information in science and technology worldwide, and its collection is the world's most comprehensive in the field of science and technology, especially for literature published in Japan. The collection includes a large number of serial titles, not only domestic but also from over 60 nations, along with difficult-to-obtain literature such as technical reports and proceedings from within and outside Japan. These are deposited in the JICST libraries for reading and photocopying services. JICST offers computerised information through JOIS and STN as well as other customised services, including speedy photocopying, translation services and manual search services. (IGNOU MLISc, MLII-101 Information Sources, Systems and Services, Block-1 Information Institutions, Unit 2 Information Centres: Types and their organisation)
- ii) Bangladesh National Scientific and Technical Documentation Centre (BANSDOC), Dhaka, Bangladesh: The Bangladesh National Scientific and Technical Documentation Centre (BANSDOC), established in 1972 as a unit of the Bangladesh Council of Scientific and Industrial Research (BCSIR), is the premier science and technology (S&T) information organisation in Bangladesh. In 1987, BANSDOC and the National Science Library (NSL) were amalgamated, and the newly created organisation retained the name BANSDOC, with the status of premier national organisation and apex body in the country, as recommended by the National Science and Technology Policy of 1987. BANSDOC's mandate is to provide scientists, technologists, technicians, industrialists, planners and policy-makers with scientific and technological information, and thereby to contribute to the socio-economic development of the country. To that end, BANSDOC collects, processes and stores information and data on scientific research and experimental development in all branches of science and technology and disseminates such information to researchers irrespective of their affiliations, whether they are engaged in research or academic institutions, planning organisations, policy-making bodies, or nationalised and private sector industries. (IGNOU MLISc, MLII-101 Information Sources, Systems and Services, Block-1 Information Institutions, Unit 2 Information Centres: Types and their organisation)
iii) Institute for Scientific and Technical Information (INIST), National Centre for Scientific Research (CNRS), France (Institut de l'Information Scientifique et Technique (INIST) – Centre National de la Recherche Scientifique (CNRS), France): INIST, the centre for scientific and technical information of the CNRS, France, is one of the world's major scientific and technological information facilities. It has a huge library that makes large document collections available to the public through document delivery services; indexing services covering the literature of science, technology, medicine, the humanities and the social sciences, designed to contribute to bibliographic databases; and information services available online or on a variety of electronic media. Its document holdings cover the core international scientific and technical literature. The collection of INIST includes journals, scientific reports, conference proceedings, books, doctoral dissertations and grey literature. INIST is not a lending library, but it supplies photocopies of documents not only from its own collection but from anywhere in the world. (IGNOU MLISc, MLII-101 Information Sources, Systems and Services, Block-1 Information Institutions, Unit 2 Information Centres: Types and their organisation)
- iv) Pakistan Scientific and Technological Information Centre (PASTIC), Pakistan: PASTIC, a unit of the Pakistan Science Foundation, is a premier organisation in the field of information dissemination, serving researchers in Pakistan who require scientific and technological information. PASTIC procures, processes and disseminates scientific and technological information to researchers, and also provides bibliographic and translation services. PASTIC develops inter-library cooperation and resource sharing at the national level, and develops and strengthens the National Sciences Reference Library. Its services include bibliography compilation, indexing and abstracting, technical translations, current awareness services, reference, patent information and duplication of information on request. PASTIC is also the national focal point for various international organisations, including the SAARC Documentation Centre. (IGNOU MLISc, MLII-101 Information Sources, Systems and Services, Block-1 Information Institutions, Unit 2 Information Centres: Types and their organisation)
- v) SAARC Documentation Centre (SDC), New Delhi, India SDC, a regional centre of the South Asian Association for Regional Cooperation (SAARC), was set up in 1994 in New Delhi, India to enable exchange of information in the field of science, technology, industry, trade, commerce and development matters amongst SAARC Member States. The centre aims to meet the information needs of scholars in the SAARC region, provides timely access to relevant and accurate information, develops human resources in the area of library/information science/ information technology and is involved in developing a SAARC Traditional Knowledge Digital Library (TKDL) for protecting traditional knowledge, genetic resources, etc. in the SAARC Region. SDC acts as a repository of documents/reports in the region and on the region and offers information services for scholars in the SAARC region. (IGNOU MLISc, MLII-101 Information Sources, Systems and Services, Block-1 Information Institutions , Unit 2 Information Centres: Types and their organisation)
- vi) Coconut Information Centre (CIC), Sri Lanka: CIC was established in 1979 to acquire, classify and disseminate scientific and technological information about coconuts. It is one of the best institutions in the world in the field of coconut research and information. The centre was set up, and efforts were made, to capture and record coconut documentation and to release information products. Presently, the centre is involved in consolidating and augmenting CIC services in order to provide researchers with easier access to current literature. The centre collects information products related to the area of coconuts and has also been instrumental in publishing the COCONIS Newsletter and bringing out a Directory of Coconut Research Workers. (IGNOU MLISc, MLII-101 Information Sources, Systems and Services, Block-1 Information Institutions, Unit 2 Information Centres: Types and their organisation)
vii) All-Russian Scientific and Technical Information Institute of the Russian Academy of Sciences – Vserossiisky Institut Nauchnoi i Tekhnicheskoi Informatsii (VINITI), Moscow, Russia: VINITI, the leading information centre in Russia, has been supplying the world community with scientific and technical information since 1952. VINITI was established, and is financially supported, by the Russian Academy of Sciences and the Russian Ministry of Industry, Science and Technologies. The main task of VINITI is to provide information support to scientists and specialists in Russia in the natural and technical sciences. VINITI produces information products in printed and electronic form and also publishes journals. (IGNOU MLISc, MLII-101 Information Sources, Systems and Services, Block-1 Information Institutions, Unit 2 Information Centres: Types and their organisation)
viii) Canada Institute for Scientific and Technical Information (CISTI), National Research Council, Canada: CISTI, the Canada Institute for Scientific and Technical Information, is one of the world’s major sources of information in all areas of science, technology, engineering and medicine. CISTI started over 75 years ago as the library of the National Research Council of Canada, the leading agency for R&D in Canada, and became the National Science Library in 1957. The change to CISTI came in 1974 to reflect the wider scope of services provided and the increasing role in the development of electronic information products and services for the scientific and technical community. CISTI’s headquarters in Ottawa houses one of the world’s most comprehensive collections of publications in science, technology and medicine. NRC Research Press is CISTI’s publishing arm, with 15 international journals of research plus several books and conference proceedings. CISTI can provide any information a user needs, whether an article from a journal or an in-depth literature search, or a referral to an expert. The institute has easy-to-use electronic information tools that enable users to remain up-to-date with new developments in their field. (IGNOU MLISc, MLII-101 Information Sources, Systems and Services, Block-1 Information Institutions , Unit 2 Information Centres: Types and their organisation)
‘Information Broker’ refers to those individuals or firms who provide information services for a fee. A great impetus for the growth of information brokers has come from the recognition that knowledge is a business and information is a commodity. Many of the users of today’s Information Brokers are special libraries. However, most of the clients of Information Brokers are firms and individuals who do not have their own special library. Information Brokers are basically information clearing houses, acting as a link between the information that is available and those who need it. Information Brokers generally perform two types of activity relating to information services. One is handling clients’ day-to-day information needs – usually questions requiring less than two hours of search time. The other is handling more complex assignments requiring attention at greater depth. Apart from these two basic types, they also provide a range of other services: quick telephone surveys, translations, information consulting, current awareness services, literature searching, compilation of bibliographies, abstracting, report writing, retrieval of specific facts or statistics from any non-classified source, provision of photocopies of published material, provision of copies of published material from electronic sources, etc. The services of Information Brokers are availed of by a wide variety of organisations, e.g., corporate organisations, different categories of industries, advertising agencies, publishers, media houses, etc. Information Brokers should possess certain characteristics to perform their roles effectively and efficiently. Such characteristics, according to Alice Johnson (1991), are:
- Understanding the power of information;
- Ability to understand the actual needs of the client, not necessarily those which are stated;
- Skills in interviewing, listening, communicating;
- Adaptability to new situations;
- Ability to organise concepts as well as objects or things;
- Ability to analyse, interpret, synthesise and repackage information;
- Ability to train and work with non-library-oriented staff;
- Administrative ability and business expertise;
- Research experience;
- Ability to interact with databases and Internet/ Intranet resources;
- Ability to work independently;
- Entrepreneurship skills.
Categories of Information Brokers
Information Brokers are generally categorised into two types:
1) Independent information brokers, whose main source of income is the sale of information services. Many information entrepreneurs, information consultants and freelance librarians can be included in this group.
2) Fee-based services attached to an organisation or institution, usually in the public sector or a non-profit institution. Some of the services provided are free of charge; only certain services, such as online searching, compilation of bibliographies, document delivery or photocopying, are charged for, at times at subsidised rates. For example, the information services of NISCAIR (National Institute of Science Communication and Information Resources), New Delhi, may fall under this category.
Scientists, research workers and management experts seek information through formal channels of communication. In the sphere of research there are also informal channels of information, such as personal contacts and peer group exchanges, which are usually approached for vital information not normally accessible through formal channels. One of these channels is the Technological Gatekeeper. Technological Gatekeepers are those who are usually consulted by their colleagues for information and who act as a link between the internal users of an organisation or institution and external sources of information. This consultation takes place as a first step, though other avenues are also available to the enquirer. Allen (1968) named them ‘Gatekeepers’ as they open the gate of information for others. This group makes exhaustive use of information services and has well-developed outside contacts. Information immediacy is increasing, i.e., users need more timely access to information. Moreover, there is a trend towards diversification of branches of knowledge, increasing the number of narrow fields of interest. All this has resulted in an exponential growth of information. The function of Technological Gatekeepers is to gather and repackage information and supply it to the researchers of the organisation. The gatekeeper’s function is, however, not limited to the inflow of information. They are also concerned with the outflow of information generated within the organisation itself in the form of research output. The characteristics of Gatekeepers are that they are well informed and have specific and recent information in their fields. Technological Gatekeepers are different from Invisible Colleges. Invisible colleges are concerned with the flow of information among individuals in different institutions, whereas gatekeepers are concerned with the flow of information within an institution.
The role of the gatekeeper is getting enhanced in this era of Internet working and gateways and also with the radical changes in sphere of information. It is not possible for any R&D organisation to acquire all the information required by the researchers working there. Thus, a gatekeeper is appointed who works as a liaison officer between the sources of information and the organisation.
Invisible Colleges are yet another form of non-profit information intermediary. An invisible college is a group of persons who informally come together to exchange information and ideas. It is a small body of individuals who dominate and set the tone and agenda for their societal sector. Such elite groups are found universally in government, agriculture, manufacturing, and commercial and trade enterprises, etc. An invisible college may be an informal or a formal group, in which member communities share their expertise and subject knowledge. They are also found in the various disciplines making up the scholarly community. Invisible colleges are very helpful as information intermediaries, as they are well aware of new developments and vistas of research. These groups sometimes communicate with their member community through a listserv or another Internet-based mechanism. (Source: IGNOU MLISc 101 Information Sources, Systems and Services, Block-4 Information Intermediaries, Unit 15 Information Intermediaries)