Screen scraping

From Wikipedia, the free encyclopedia

Screen scraping is a technique in which a computer program extracts data from the display output of another program. The program doing the scraping is called a screen scraper. The key element that distinguishes screen scraping from regular parsing is that the output being scraped was intended for final display to a human user, rather than as input to another program, and is therefore usually neither documented nor structured for convenient parsing. Screen scraping often involves ignoring binary data (usually images or multimedia data) and formatting elements that obscure the essential, desired text data. Optical character recognition software is a kind of visual scraper.

Screen scraping has a number of synonyms, including data scraping, data extraction, web scraping, page scraping, web page wrapping, and HTML scraping (the last four being specific to scraping web pages).

Description

Normally, data transfer between programs is accomplished using data structures suited for automated processing by computers, not people. Such interchange formats and protocols are typically rigidly structured, well-documented, easily parsed, compact, and keep ambiguity and duplication to a minimum. Very often, these transmissions are not human-readable at all.

In contrast, output intended to be human-readable is often the antithesis of this, with display formatting, redundant labels, superfluous commentary, and other information which is either irrelevant or inimical to automated processing. However, when the only output available is such a human-oriented display, screen scraping becomes the only automated way of accomplishing a data transfer.

Originally, screen scraping referred to the practice of reading text data from a computer display terminal's screen. This was generally done by reading the terminal's memory through its auxiliary port, or by connecting the terminal output port of one computer system to an input port on another. By analogy, screen scraping has also come to mean computerized parsing of the HTML text in web pages. In all cases, the screen scraper has to be programmed to not only process the text data of interest, but also to recognize and discard unwanted data, images, and display formatting.

Screen scraping is most often done to either (1) interface to a legacy system which has no other mechanism which is compatible with current hardware, or (2) interface to a third-party system which does not provide a more convenient API. In the second case, the operator of the third-party system may even see screen scraping as unwanted, due to reasons such as increased system load, the loss of advertisement revenue, or the loss of control of the information content.

Screen scraping is generally considered an ad-hoc, inelegant technique, often used only as a "last resort" when no other mechanism is available. Aside from the higher programming and processing overhead, output displays intended for human consumption often change structure frequently. Humans can cope with this easily, but computer programs will often crash or produce incorrect results.

Screen scraping generally requires intensive text parsing algorithms. Computer languages that have strong support for regular expressions and other text processing are thus a popular choice for writing screen scraping programs.
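
As a minimal, hedged sketch of the regular-expression approach, the following Python fragment pulls account, item, and price fields out of a block of display-oriented text; the sample screen text and the pattern are invented for the example.

import re

# Display-oriented output captured from another program (invented sample).
screen_text = """
ACCT 0012  WIDGET, LARGE   $ 14.95   IN STOCK
ACCT 0013  WIDGET, SMALL   $  4.50   BACKORDERED
"""

# One pattern per record: account number, item description, price, status.
record = re.compile(r"ACCT\s+(\d+)\s+(.+?)\s+\$\s*([\d.]+)\s+(.+)")

for acct, name, price, status in record.findall(screen_text):
    print(acct, name, float(price), status.strip())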

Web scraping

Main article: Web scraping

Web pages are built using text-based mark-up languages (HTML and XHTML), and frequently contain a wealth of useful data in text form. However, most web pages are designed for human consumption, and frequently mix content with presentation. Thus, screen scrapers were reborn in the web era to extract machine-friendly data from HTML and other markup. Even general-purpose search engines and other web crawlers use many techniques in the same vein as web scraping.
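
As a rough sketch of this kind of scraping, the following Python fragment uses only the standard library to pull link targets and link text out of a page's HTML; the URL is a placeholder, and real pages typically need more tolerant handling than this.

import urllib.request
from html.parser import HTMLParser

class LinkScraper(HTMLParser):
    """Collect (href, text) pairs from anchor tags, ignoring all other markup."""

    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

# Placeholder URL: substitute the page actually being scraped.
html = urllib.request.urlopen("http://example.com/").read().decode("utf-8", "replace")
scraper = LinkScraper()
scraper.feed(html)
for href, text in scraper.links:
    print(href, text)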

Scraping by design: towards the Semantic Web

The emergence of XML and web services has led to the creation of technologies that improve the process of extracting machine-friendly data from web pages. Indeed, an explicit goal of the Semantic Web project is to enable the creation of documents which are easily read by both humans and machines. While this is seen as less efficient in terms of computer resources, it is argued that computer technology has advanced to the point where such efficiency concerns are no longer primary.

Extracting data from a web page or service explicitly designed to be machine-readable differs somewhat from the traditional meaning of screen scraping, which implies a preferred mechanism is not available. However, the techniques used in traditional web scraping are so similar that the same tools are often usable in both situations.

Screen scraping has thus recently taken on a new dimension with tools such as Piggy Bank, part of SIMILE, a joint project of the W3C and MIT. The purpose of such technologies is to give the Internet community tools to increase the interoperability of disparate digital resources by adding a new semantic layer to online information. Some of these tools rely on user-designed scrapers; others analyze the data structure of Web pages and store those structures and annotations as metadata, sometimes publishing them back online as shared repositories that link to the original sources.

Tools like Kapow RoboMaker and web-based Dapper enable wrappers to be created for all kinds of web sites, meaning data can be harvested from web sites and converted to XML. More advanced tools like EasyWrap Mashup Studio automate the creation of web wrappers and even allow the creation of RESTful APIs for accessing web sites programmatically.
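
The essential step such wrappers automate is mapping fields scraped out of a page onto a structured, machine-friendly format. As an illustrative sketch (not tied to any of the products named above), the following Python fragment turns a list of scraped records, here invented sample data, into an XML document using the standard library.

import xml.etree.ElementTree as ET

# Records as they might come out of a scraper (invented sample data).
records = [
    {"title": "Example item", "price": "14.95", "url": "http://example.com/item/1"},
    {"title": "Another item", "price": "4.50", "url": "http://example.com/item/2"},
]

# Wrap each record in an <item> element under a common <items> root.
root = ET.Element("items")
for rec in records:
    item = ET.SubElement(root, "item")
    for field, value in rec.items():
        ET.SubElement(item, field).text = value

# Serialize the result; a real wrapper might serve this over HTTP instead.
print(ET.tostring(root, encoding="unicode"))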

Technical measures to stop scraping

With the prevalence of web scraping, many website owners have begun using anti-scraping techniques. See the main Web scraping article for details.

Examples

As a concrete example of a classic screen scraper, consider a hypothetical legacy system dating from the 1960s, the dawn of computerized data processing. User interfaces from that era were often simple text-based dumb terminals which were not much more than virtual teleprinters. (Such systems are still in use today, for various reasons.) The desire to interface such a system to more modern systems is common. An elegant solution will often require things that are no longer available, such as source code, system documentation, APIs, or programmers with experience in a 45-year-old computer system. In such cases, the only feasible solution may be to write a screen scraper which "pretends" to be a user at a terminal. The screen scraper might connect to the legacy system via Telnet, emulate the keystrokes needed to navigate the old user interface, process the resulting display output, extract the desired data, and pass it on to the modern system.
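
As a hedged sketch of that terminal-emulation approach, the following Python fragment uses the telnetlib module (long part of the standard library, though removed from the newest Python releases) to log in to a hypothetical legacy host, navigate one menu, and scrape the resulting screen; the host name, prompts, credentials, and menu choices are all invented for the example.

import telnetlib

HOST = "legacy.example.com"  # hypothetical legacy system

tn = telnetlib.Telnet(HOST)

# Emulate the keystrokes a human operator would type at the terminal.
tn.read_until(b"login: ")
tn.write(b"batchuser\n")
tn.read_until(b"Password: ")
tn.write(b"secret\n")

# Navigate the menu-driven interface to the desired report screen.
tn.read_until(b"MAIN MENU")
tn.write(b"3\n")

# Capture the screen text and extract only the lines of interest.
screen = tn.read_until(b"PRESS ENTER TO CONTINUE").decode("ascii", "replace")
for line in screen.splitlines():
    if line.startswith("ACCT"):
        print(line.split())

tn.close()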

Modern web scrapers are much easier to find. For example, there are numerous programs and utilities which query commercial web sites (e.g., Google Product Search) to get product information and display it out of the context of the commercial service. Such usage is also an example of why some web-site operators see web scraping as undesirable. A popular method to protect a site from being web scraped is the use of CAPTCHA, which attempts to block automated access to a website.

Implementations

The Perl language, and modules from the Comprehensive Perl Archive Network, contain many features suitable for screen scraping, some purpose-built for it.

Microsoft has built into its implementation of web services the ability to create a web service which extracts its data from a web page with the help of an extension to the WSDL standard and the use of regular expressions.

The PHP programming language has also developed features suited to creating web scraping applications. The release of PHP 5 included many new XML and DOM additions, including functions that parse badly formed HTML documents into DOM trees and operate on them as if they were well-formed XML.

Java offers support for web scraping techniques by leveraging the W3C's XQuery specification.

Python and Ruby also have libraries for web scraping.

Scroogle is a screen scraping proxy that allows users to perform Google searches without receiving Google advertisements.

Many Greasemonkey or Opera user scripts work by interpreting and adapting website code.

The Outwit platform is a Web Collection Engine and development platform for Web automation. A library of recognition and extraction functions (OutWit Kernel) is available as a Firefox extension, to be used in specific collection applications.

In Unix-like environments, one can render a page's formatted output as plain text with, for example:

$ lynx -dump URL   # render the page at URL as plain text
$ w3m -dump URL    # the same, using the w3m browser
