Class SiteCapturer
java.lang.Object
org.htmlparser.parserapplications.SiteCapturer
- Direct Known Subclasses:
WikiCapturer
Save a web site locally.
Illustrative program to save a web site's contents locally.
It was created to demonstrate URL rewriting in its simplest form.
It uses customized tags in the NodeFactory to alter the URLs.
This program has a number of limitations:
- it doesn't capture forms; that would involve too many assumptions
- it doesn't capture script references, so funky onMouseOver and other non-static content will not be faithfully reproduced
- it doesn't handle style sheets
- it doesn't dig into attributes that might reference resources, so, for example, background images won't necessarily be captured
- worst of all, it gets confused when a URL both has content and is the prefix for other content, i.e. http://whatever.com/top and http://whatever.com/top/sub.html both yield content, since this cannot be faithfully replicated to a static directory structure (this happens a lot with servlet-based sites)
Field Summary
Fields (Modifier and Type, Field, Description):
- protected boolean mCaptureResources: If true, save resources locally too; otherwise, leave resource links pointing to the original page.
- protected HashSet mCopied: The set of resources already copied.
- protected NodeFilter mFilter: The filter to apply to the nodes retrieved.
- protected HashSet mFinished: The set of pages already captured.
- protected ArrayList mImages: The list of resources to copy.
- protected ArrayList mPages: The list of pages to capture.
- protected Parser mParser: The parser to use for processing.
- protected String mSource: The web site to capture.
- protected String mTarget: The local directory to capture to.
- protected final int TRANSFER_SIZE: Copy buffer size. -
Constructor Summary
Constructors:
- SiteCapturer(): Create a web site capturer. -
Method Summary
Methods (Modifier and Type, Method, Description):
- void capture(): Perform the capture.
- protected void copy(): Copy a resource (image) locally.
- protected String decode(String raw): Unescape a URL to form a file name.
- boolean getCaptureResources(): Getter for property captureResources.
- NodeFilter getFilter(): Getter for property filter.
- String getSource(): Getter for property source.
- String getTarget(): Getter for property target.
- protected boolean isHtml(String link): Returns true if the link contains text/html content.
- protected boolean isToBeCaptured(String link): Returns true if the link is one we are interested in.
- static void main(String[] args): Mainline to capture a web site locally.
- protected String makeLocalLink(String link, String current): Converts a link to local.
- protected void process(NodeFilter filter): Process a single page.
- void setCaptureResources(boolean capture): Setter for property captureResources.
- void setFilter(NodeFilter filter): Setter for property filter.
- void setSource(String source): Setter for property source.
- void setTarget(String target): Setter for property target.
-
Field Details
-
mSource
protected String mSource
The web site to capture. This is used as the base URL in deciding whether to adjust a link and whether to capture a page or not. -
mTarget
protected String mTarget
The local directory to capture to. This is used as a base prefix for files saved locally. -
mPages
protected ArrayList mPages
The list of pages to capture. Links are added to this list as they are discovered, and removed in sequential order (FIFO queue), leading to a breadth-first traversal of the web site space. -
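The FIFO bookkeeping described for mPages and mFinished can be sketched with plain JDK collections. The class name, method name, and the link map below are hypothetical stand-ins for the parsing machinery, not part of SiteCapturer's API:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of the breadth-first bookkeeping: newly discovered links go on the
// end of the list, pages are processed from the front, and a "finished" set
// prevents re-capturing the same page.
public class CrawlOrderSketch {
    // Returns pages in the order they would be captured, given a map from
    // each page to the links discovered on it (a stand-in for parsing).
    public static List<String> crawlOrder(String start, Map<String, List<String>> links) {
        List<String> order = new ArrayList<>();
        Deque<String> pages = new ArrayDeque<>(); // the "to be captured" FIFO
        Set<String> finished = new HashSet<>();   // the "already captured" set
        pages.add(start);
        finished.add(start);
        while (!pages.isEmpty()) {
            String page = pages.removeFirst();    // sequential (FIFO) removal
            order.add(page);
            for (String link : links.getOrDefault(page, List.of()))
                if (finished.add(link))           // skip pages already seen
                    pages.addLast(link);          // discovered links go on the end
        }
        return order;
    }
}
```

Because removal is from the front and insertion is at the back, every page one link from the start is captured before any page two links away, i.e. a breadth-first traversal.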
mFinished
protected HashSet mFinished
The set of pages already captured. Used to avoid repeated acquisition of the same page. -
mImages
protected ArrayList mImages
The list of resources to copy. Images and other resources are added to this list as they are discovered. -
mCopied
protected HashSet mCopied
The set of resources already copied. Used to avoid repeated acquisition of the same images and other resources. -
mParser
protected Parser mParser
The parser to use for processing. -
mCaptureResources
protected boolean mCaptureResources
If true, save resources locally too; otherwise, leave resource links pointing to the original page. -
mFilter
protected NodeFilter mFilter
The filter to apply to the nodes retrieved. -
TRANSFER_SIZE
protected final int TRANSFER_SIZE
Copy buffer size. Resources are moved to disk in chunks this size or less.
-
-
Constructor Details
-
SiteCapturer
public SiteCapturer()
Create a web site capturer.
-
-
Method Details
-
getSource
public String getSource()
Getter for property source.
- Returns:
- Value of property source.
-
setSource
public void setSource(String source)
Setter for property source. This is the base URL to capture. URLs that don't start with this prefix are ignored (left as is), while the ones with this URL as a base are re-homed to the local target.
- Parameters:
source
- New value of property source.
-
getTarget
public String getTarget()
Getter for property target.
- Returns:
- Value of property target.
-
setTarget
public void setTarget(String target)
Setter for property target. This is the local directory under which to save the site's pages.
- Parameters:
target
- New value of property target.
-
getCaptureResources
public boolean getCaptureResources()
Getter for property captureResources. If true, the images and other resources referenced by the site and within the base URL tree are also copied locally to the target directory. If false, the image links are left 'as is', still referring to the original site.
- Returns:
- Value of property captureResources.
-
setCaptureResources
public void setCaptureResources(boolean capture)
Setter for property captureResources.
- Parameters:
capture
- New value of property captureResources.
-
getFilter
public NodeFilter getFilter()
Getter for property filter.
- Returns:
- Value of property filter.
-
setFilter
public void setFilter(NodeFilter filter)
Setter for property filter.
- Parameters:
filter
- New value of property filter.
-
isToBeCaptured
protected boolean isToBeCaptured(String link)
Returns true if the link is one we are interested in.
- Parameters:
link - The link to be checked.
- Returns:
true if the link has the source URL as a prefix and doesn't contain '?' or '#'; the former because we won't be able to handle server-side queries in the static target directory structure, and the latter because presumably the full page with that reference has already been captured previously. This performs a case-insensitive comparison, which is cheating really, but it's cheap.
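A standalone sketch of that test, assuming the two rules above are the whole story; the real method reads the source property from the SiteCapturer instance, and the class name here is hypothetical:

```java
// Sketch of the capture test: the link must have the source URL as a prefix
// (compared case-insensitively) and must not contain '?' (a server-side
// query) or '#' (a fragment of a page presumably captured already).
public class CaptureTestSketch {
    public static boolean isToBeCaptured(String link, String source) {
        return link.toLowerCase().startsWith(source.toLowerCase())
            && link.indexOf('?') < 0
            && link.indexOf('#') < 0;
    }
}
```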
-
isHtml
protected boolean isHtml(String link) throws ParserException
Returns true if the link contains text/html content.
- Parameters:
link - The URL to check for content type.
- Returns:
true if the HTTP header indicates the type is "text/html".
- Throws:
ParserException - If the supplied URL can't be read from.
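The decision itself is just a content-type comparison; a sketch using only the JDK (the real method presumably obtains the header through the parser's connection, and both helper names here are hypothetical):

```java
import java.io.IOException;
import java.net.URL;
import java.net.URLConnection;

public class ContentTypeSketch {
    // The decision on a Content-Type header value: "text/html", possibly
    // followed by parameters such as a charset.
    public static boolean isHtmlType(String contentType) {
        return contentType != null
            && contentType.toLowerCase().startsWith("text/html");
    }

    // Fetching the header for a live URL might look like this:
    public static boolean isHtml(String link) throws IOException {
        URLConnection connection = new URL(link).openConnection();
        return isHtmlType(connection.getContentType());
    }
}
```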
-
makeLocalLink
protected String makeLocalLink(String link, String current)
Converts a link to local. A relative link can be used to construct both a URL and a file name. Basically, the operation is to strip off the base URL, if any, and then prepend as many dot-dots as necessary to make it relative to the current page. A bit of a kludge handles the root page specially by calling it index.html, even though that probably isn't its real file name. This isn't pretty, but it works for me.
- Parameters:
link - The link to make relative.
current - The current page URL, or empty if it's an absolute URL that needs to be converted.
- Returns:
- The URL relative to the current page.
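The strip-and-prepend operation can be sketched without the library. Here `base` plays the role of the source property, the class and method names are hypothetical, and this simplified version ignores edge cases the real method must handle:

```java
public class LocalLinkSketch {
    // Strip the base URL off the link, then prepend one "../" for each
    // directory level of the current page, so the result is relative to it.
    // The root page is renamed index.html, per the kludge described above.
    public static String makeLocal(String link, String current, String base) {
        if (!link.startsWith(base))
            return link;                        // foreign links are left as is
        String local = link.substring(base.length());
        if (local.isEmpty() || local.equals("/"))
            local = "index.html";               // the root-page kludge
        else if (local.startsWith("/"))
            local = local.substring(1);
        int ups = 0;                            // directory depth of current page
        if (current.startsWith(base)) {
            String path = current.substring(base.length());
            for (int i = 0; i < path.length(); i++)
                if (path.charAt(i) == '/')
                    ups++;
            ups = Math.max(ups - 1, 0);         // last segment is the file itself
        }
        StringBuilder result = new StringBuilder();
        for (int i = 0; i < ups; i++)
            result.append("../");
        return result.append(local).toString();
    }
}
```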
-
decode
protected String decode(String raw)
Unescape a URL to form a file name. Very crude.
- Parameters:
raw
- The escaped URI.- Returns:
- The native URI.
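The JDK already provides the unescaping; a one-line stand-in (the original is described only as "very crude", so its details may differ, and the class name here is hypothetical):

```java
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

public class DecodeSketch {
    // Turn an escaped URI such as "My%20Page.html" back into a usable file
    // name. Note URLDecoder also maps '+' to a space (form encoding), which
    // may or may not match the original's behavior.
    public static String decode(String raw) {
        return URLDecoder.decode(raw, StandardCharsets.UTF_8);
    }
}
```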
-
copy
protected void copy()
Copy a resource (image) locally. Removes one element from the 'to be copied' list and saves the resource it points to locally as a file. -
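The chunked transfer that TRANSFER_SIZE governs looks roughly like this. The streams are parameters here so the sketch stays self-contained, whereas the real method opens the resource URL and a local file; the buffer size value is an assumption:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CopySketch {
    static final int TRANSFER_SIZE = 4096; // assumed value; the real constant may differ

    // Move bytes from the resource to disk in chunks of TRANSFER_SIZE or
    // less, returning the number of bytes copied.
    public static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[TRANSFER_SIZE];
        long total = 0;
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
            total += read;
        }
        return total;
    }
}
```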
process
protected void process(NodeFilter filter) throws ParserException
Process a single page.
- Parameters:
filter
- The filter to apply to the collected nodes.- Throws:
ParserException
- If a parse error occurs.
-
capture
public void capture()
Perform the capture. -
main
public static void main(String[] args) throws MalformedURLException, IOException
Mainline to capture a web site locally.
- Parameters:
args - The command line arguments. There are three arguments: the web site to capture, the local directory to save it to, and a flag (true or false) indicating whether resources such as images and video are to be captured as well. These are requested via dialog boxes if not supplied.
- Throws:
MalformedURLException - If the supplied URL is invalid.
IOException - If an error occurs reading the page or resources.
-