Internet research


Internet research is the practice of using Internet information, especially free information on the World Wide Web, in research. It:

  • is focused and purposeful (so not recreational browsing),
  • uses Internet information or Internet-based resources (such as Internet discussion forums),
  • tends towards the immediate (drawing answers from information that can be accessed without delay), and
  • tends to access information without a purchase price.

Internet research has had a profound impact on the way ideas are formed and knowledge is created. Common applications of Internet research include personal research on a particular subject (something mentioned on the news, a health problem, etc.), students doing research for academic projects and papers, and journalists and other writers researching stories.

Research is a broad term. Here, it is used to mean "looking something up (on the Web)". It includes any activity where a topic is identified, and an effort is made to actively gather information for the purpose of furthering understanding. It may include some post-collection analysis like a concern for quality or synthesis.

For example, on the Net, the Web can be searched and typically hundreds or thousands of pages can be found with some relation to the topic, within seconds. In addition, email (including mailing lists), online discussion forums (aka message boards, BBS's), and other personal communication facilities (instant messaging, IRC, newsgroups, etc.) can provide direct access to experts and other individuals with relevant interests and knowledge.

So defined, Internet research is distinct from library research (focusing on library-bound resources) and commercial database research (focusing on commercial databases). While many commercial databases are delivered through the Internet, and some libraries purchase access to library databases on behalf of their patrons, searching such databases is generally not considered part of “Internet research”. It should also be distinguished from scientific research (research following a defined and rigorous process) carried out on the Internet, from straightforward retrieving of details like a name or phone number, and from research about the Internet.

Internet research has strengths and weaknesses. Strengths include speed, immediacy, and a complete disregard for physical distance. The quality of Internet research can exceed that of other forms of research, but often does not. Weaknesses include unrecognized bias, difficulty in verifying a writer's credentials (and therefore the accuracy or pertinence of the information obtained), and the question of whether the searcher has sufficient skill to draw meaningful results from the abundance of material typically available.[1] The first resources retrieved may not be the most suitable ones for answering a particular question. For example, popularity is often a factor in structuring Internet search results, but popular information is not always the most correct or the most representative of the breadth of knowledge and opinion on a topic.

While commercial research fosters a deep concern with costs, and library research fosters a concern with access, Internet research fosters a deep concern for quality, for managing the abundance of information, and for avoiding unintended bias. This is partly because Internet research occurs in a less mature information environment: one with less sophisticated, poorly communicated search skills and much less effort invested in organizing information. Library and commercial research offer many search tactics and strategies unavailable on the Internet, and the library and commercial environments invest more deeply in organizing and vetting their information.

Search tools

The most popular search tools for finding information on the Internet include Web search engines, meta search engines, Web directories, and specialty search services. A Web search engine uses software known as a Web crawler to follow the hyperlinks connecting the pages of the World Wide Web. The information on these Web pages is indexed and stored by the search engine. To access this information, a user enters keywords in a search form, and the search engine applies its ranking algorithms, which take into consideration the location and frequency of keywords on a Web page, along with the quality and number of external hyperlinks pointing at the page.
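The indexing step described above can be sketched as an inverted index mapping each keyword to the pages that contain it. The sketch below uses a toy in-memory "web" in place of a real crawler; all URLs and page text are hypothetical.

```python
# Minimal sketch of search-engine indexing: build an inverted index
# (keyword -> pages containing it), then answer keyword queries with it.
# The pages dict stands in for content a real crawler would fetch.
from collections import defaultdict

pages = {
    "example.org/a": "internet research uses web search engines",
    "example.org/b": "a web crawler follows hyperlinks between pages",
    "example.org/c": "search engines index keywords from web pages",
}

# Inverted index: keyword -> set of page URLs containing that keyword.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.split():
        index[word].add(url)

def search(query):
    """Return the pages that contain every keyword in the query."""
    words = query.lower().split()
    results = [index[w] for w in words if w in index]
    if len(results) < len(words):  # some keyword matched nothing
        return set()
    return set.intersection(*results)

print(sorted(search("web search")))  # ['example.org/a', 'example.org/c']
```

A real engine would additionally rank the matching pages, for instance by keyword frequency and by the hyperlinks pointing at each page, as described above.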

A meta search engine enables users to enter a search query once and have it run against multiple search engines simultaneously, producing an aggregated list of results. Since no single search engine covers the entire Web, a meta search engine can produce a more comprehensive search of it. Most meta search engines automatically eliminate duplicate search results. However, meta search engines have a significant limitation: the most popular search engines, such as Google, are often excluded for legal reasons.
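The fan-out-and-merge behavior of a meta search engine can be sketched as below. The two stub "engines" returning canned result lists are placeholders for real search back ends; duplicates are dropped while the first-seen rank order is preserved.

```python
# Sketch of a meta search engine: send one query to several engines
# (stubbed here) and merge their result lists, removing duplicate URLs.

def engine_a(query):
    return ["example.org/1", "example.org/2", "example.org/3"]

def engine_b(query):
    return ["example.org/2", "example.org/4"]

def meta_search(query, engines):
    seen = set()
    merged = []
    for engine in engines:
        for url in engine(query):
            if url not in seen:      # automatic duplicate elimination
                seen.add(url)
                merged.append(url)
    return merged

print(meta_search("internet research", [engine_a, engine_b]))
# ['example.org/1', 'example.org/2', 'example.org/3', 'example.org/4']
```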

A Web directory organizes subjects in a hierarchical fashion that lets users investigate the breadth of a specific topic and drill down to find relevant links and content. Web directories can be assembled automatically by algorithms or handcrafted. Human-edited Web directories have the distinct advantage of higher quality and reliability, while those produced by algorithms can offer more comprehensive coverage. The scope of Web directories is generally broad, with directories such as DMOZ, Yahoo! and The WWW Virtual Library covering a wide range of subjects, while others focus on specific topics.
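The hierarchical drill-down a directory offers can be modeled as a tree of categories whose leaves are link lists. The sketch below uses a nested dict; all category names and URLs are illustrative, not real directory entries.

```python
# Sketch of a Web directory as a category tree: broad subjects at the top,
# narrower subcategories below, link lists at the leaves.

directory = {
    "Science": {
        "Biology": ["example.org/cells", "example.org/genetics"],
        "Physics": ["example.org/optics"],
    },
    "Arts": {
        "Music": ["example.org/theory"],
    },
}

def drill_down(path):
    """Follow a list of category names down to the links (or
    subcategories) beneath the last one."""
    node = directory
    for category in path:
        node = node[category]
    return node

print(drill_down(["Science", "Biology"]))
# ['example.org/cells', 'example.org/genetics']
```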

Specialty search tools enable users to find information that conventional search engines and meta search engines cannot access because the content is stored in databases. In fact, the vast majority of information on the web is stored in databases that require users to go to a specific site and access it through a search form. Often, the content is generated dynamically. As a consequence, Web crawlers are unable to index this information. In a sense, this content is "hidden" from search engines, leading to the term invisible or deep Web. Specialty search tools have evolved to provide users with the means to quickly and easily find deep Web content. These specialty tools rely on advanced bot and intelligent agent technologies to search the deep Web and automatically generate specialty Web directories, such as the Virtual Private Library.
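The distinction between crawlable pages and deep-Web content can be illustrated with a simulated site: a crawler only sees the static pages it can reach by URL, while the database content appears only in response to a form query. Both the "site" and its database here are in-memory stand-ins.

```python
# Sketch of why crawlers miss deep-Web content. Static pages have fixed
# URLs a crawler can index; database records surface only through a
# search form, whose result pages are generated dynamically on demand.

static_pages = {"example.org/home": "welcome - use the search form below"}

database = {
    "optics": "records about optics",
    "genetics": "records about genetics",
}

def crawl(pages):
    # A crawler follows links to fixed URLs; it never submits the form,
    # so it sees only the static pages.
    return set(pages)

def form_query(term):
    # Dynamically generated result page -- invisible to crawl() above.
    return database.get(term, "no records found")

print(crawl(static_pages))   # {'example.org/home'}
print(form_query("optics"))  # records about optics
```

Specialty search tools bridge this gap by submitting queries to such forms directly, rather than waiting for the content to appear at a crawlable URL.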

Website authorship

When using the Internet for research, countless websites appear for whatever search query is entered. Each of these sites has one or more authors or associated organizations. Who authored or sponsored a website is very important to the accuracy and reliability of the information presented on the website.

Determining the authorship of every website visited during Internet research is important: who authored or sponsored a website bears directly on the accuracy and reliability of its information, its potential bias, and web safety. For example, a website about civil rights authored by a member of an extremist group will most likely not contain accurate or unbiased information.

The author or sponsoring organization of a website may be found in several ways. Sometimes the author or organization is listed at the bottom of the website's home page. Another way is to look in the 'Contact Us' section of the website, where it may be directly listed, inferred from an email address, or obtained by emailing and asking. If the author's name or sponsoring organization cannot be determined, one should question the trustworthiness of the website. If it is found, a simple Internet search can provide information for judging whether the website is reliable and unbiased.
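One place authorship is sometimes declared machine-readably is a page's `<meta name="author">` tag. The sketch below checks for that tag with Python's standard-library HTML parser; the HTML snippet and author name are hypothetical, and many real sites omit the tag entirely, in which case the manual checks described above still apply.

```python
# Sketch: look for an <meta name="author"> tag in a page's HTML as one
# quick authorship check. The snippet parsed here is illustrative only.
from html.parser import HTMLParser

class AuthorFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.author = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name") == "author":
            self.author = a.get("content")

html = '<html><head><meta name="author" content="Example Society"></head></html>'
finder = AuthorFinder()
finder.feed(html)
print(finder.author)  # Example Society
```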

Internet research software

Internet research software enables you to capture information you find while performing Internet research. This information can then be organized in various ways, including tagging and hierarchical trees. The goal is to collect information relevant to a specific research project in one place, so that it can be found and accessed again quickly.

These tools also allow captured content to be edited and annotated, and some can export it to other formats. Other features common to outliners include full-text search, which aids in quickly locating information, and filters that let you drill down to see only the information relevant to a specific query.
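The core of such a tool, capturing clips with tags and then filtering by tag or by full-text keyword, can be sketched as follows. The storage scheme (a list of dicts) and all captured items are illustrative assumptions, not the design of any particular product.

```python
# Sketch of an Internet-research capture store: clipped text is saved
# with its source URL and tags, then retrieved by tag or keyword.

captures = [
    {"url": "example.org/a", "text": "notes on search engines", "tags": {"search"}},
    {"url": "example.org/b", "text": "deep web databases", "tags": {"deep-web"}},
    {"url": "example.org/c", "text": "engines that crawl the web", "tags": {"search"}},
]

def by_tag(tag):
    """Filter: only captures carrying the given tag."""
    return [c["url"] for c in captures if tag in c["tags"]]

def full_text(keyword):
    """Full-text search across all captured text."""
    return [c["url"] for c in captures if keyword in c["text"]]

print(by_tag("search"))      # ['example.org/a', 'example.org/c']
print(full_text("engines"))  # ['example.org/a', 'example.org/c']
```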

By capturing and keeping information, you don't have to worry about web pages or whole sites disappearing or becoming inaccessible. Internet research software thus enhances Internet research by enabling you to build up knowledge and reuse it. Popular software in this category includes Surfulater and WebResearch Professional for Windows, Evernote (multiple platforms), DEVONthink (Mac OS X), Springpad and Diigo (web-based), and Scrapbook, a Firefox extension, as well as WebSurfer Magellan 2.0, released by Internet Applications in 2013.

References

  1. Lua error in package.lua at line 80: module 'strict' not found.

