www-commits

From: Karl Berry
Subject: www/software/wget .symlinks wget.html wgetdev.h...
Date: Thu, 21 Jun 2007 17:10:32 +0000

CVSROOT:        /web/www
Module name:    www
Changes by:     Karl Berry <karl>       07/06/21 17:10:31

Removed files:
        software/wget  : .symlinks wget.html wgetdev.html 
        software/wget/manual: .symlinks 
        software/wget/manual/wget: .symlinks 
        software/wget/manual/wget-1.8.1: .symlinks wget.html 
        software/wget/manual/wget-1.8.1/dvi: wget.dvi.gz 
        software/wget/manual/wget-1.8.1/html_chapter: 
                                                      wget.texi_html_chapter.tar.gz 
                                                      wget_1.html 
                                                      wget_10.html 
                                                      wget_11.html 
                                                      wget_2.html 
                                                      wget_3.html 
                                                      wget_4.html 
                                                      wget_5.html 
                                                      wget_6.html 
                                                      wget_7.html 
                                                      wget_8.html 
                                                      wget_9.html 
                                                      wget_foot.html 
                                                      wget_toc.html 
        software/wget/manual/wget-1.8.1/html_mono: wget.html 
                                                   wget.html.gz 
        software/wget/manual/wget-1.8.1/html_node: 
                                                   wget.texi_html_node.tar.gz 
                                                   wget_1.html 
                                                   wget_10.html 
                                                   wget_11.html 
                                                   wget_12.html 
                                                   wget_13.html 
                                                   wget_14.html 
                                                   wget_15.html 
                                                   wget_16.html 
                                                   wget_17.html 
                                                   wget_18.html 
                                                   wget_19.html 
                                                   wget_2.html 
                                                   wget_20.html 
                                                   wget_21.html 
                                                   wget_22.html 
                                                   wget_23.html 
                                                   wget_24.html 
                                                   wget_25.html 
                                                   wget_26.html 
                                                   wget_27.html 
                                                   wget_28.html 
                                                   wget_29.html 
                                                   wget_3.html 
                                                   wget_30.html 
                                                   wget_31.html 
                                                   wget_32.html 
                                                   wget_33.html 
                                                   wget_34.html 
                                                   wget_35.html 
                                                   wget_36.html 
                                                   wget_37.html 
                                                   wget_38.html 
                                                   wget_39.html 
                                                   wget_4.html 
                                                   wget_40.html 
                                                   wget_41.html 
                                                   wget_42.html 
                                                   wget_43.html 
                                                   wget_44.html 
                                                   wget_45.html 
                                                   wget_46.html 
                                                   wget_47.html 
                                                   wget_5.html 
                                                   wget_6.html 
                                                   wget_7.html 
                                                   wget_8.html 
                                                   wget_9.html 
                                                   wget_foot.html 
                                                   wget_toc.html 
        software/wget/manual/wget-1.8.1/info: wget-info.tar.gz 
        software/wget/manual/wget-1.8.1/ps: wget.ps.gz 
        software/wget/manual/wget-1.8.1/texi: wget.texi.tar.gz 
        software/wget/manual/wget-1.8.1/text: wget.txt wget.txt.gz 

Log message:
        wget is its own savannah project

CVSWeb URLs:
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/.symlinks?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/wget.html?cvsroot=www&r1=1.19&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/wgetdev.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/.symlinks?cvsroot=www&r1=1.2&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget/.symlinks?cvsroot=www&r1=1.2&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/.symlinks?cvsroot=www&r1=1.6&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/wget.html?cvsroot=www&r1=1.2&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/dvi/wget.dvi.gz?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_chapter/wget.texi_html_chapter.tar.gz?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_chapter/wget_1.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_chapter/wget_10.html?cvsroot=www&r1=1.2&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_chapter/wget_11.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_chapter/wget_2.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_chapter/wget_3.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_chapter/wget_4.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_chapter/wget_5.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_chapter/wget_6.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_chapter/wget_7.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_chapter/wget_8.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_chapter/wget_9.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_chapter/wget_foot.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_chapter/wget_toc.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_mono/wget.html?cvsroot=www&r1=1.2&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_mono/wget.html.gz?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget.texi_html_node.tar.gz?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_1.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_10.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_11.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_12.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_13.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_14.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_15.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_16.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_17.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_18.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_19.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_2.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_20.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_21.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_22.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_23.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_24.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_25.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_26.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_27.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_28.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_29.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_3.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_30.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_31.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_32.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_33.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_34.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_35.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_36.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_37.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_38.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_39.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_4.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_40.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_41.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_42.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_43.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_44.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_45.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_46.html?cvsroot=www&r1=1.2&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_47.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_5.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_6.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_7.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_8.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_9.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_foot.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/html_node/wget_toc.html?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/info/wget-info.tar.gz?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/ps/wget.ps.gz?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/texi/wget.texi.tar.gz?cvsroot=www&r1=1.1&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/text/wget.txt?cvsroot=www&r1=1.2&r2=0
http://web.cvs.savannah.gnu.org/viewcvs/www/software/wget/manual/wget-1.8.1/text/wget.txt.gz?cvsroot=www&r1=1.1&r2=0

Patches:
Index: .symlinks
===================================================================
RCS file: .symlinks
diff -N .symlinks
--- .symlinks   28 Feb 2001 00:19:34 -0000      1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,2 +0,0 @@
-wget.html index.html
-wget.html wget.es.html

Index: wget.html
===================================================================
RCS file: wget.html
diff -N wget.html
--- wget.html   19 Jun 2007 18:22:11 -0000      1.19
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,297 +0,0 @@
-<!DOCTYPE html PUBLIC "-//IETF//DTD HTML 2.0//EN">
-<html>
-<head>
-<title>GNU wget - GNU Project - Free Software Foundation (FSF)</title>
-<link rev="made" href="mailto:address@hidden";>
-</head>
-<body bgcolor="#FFFFFF" text="#000000" link="#1F00FF" alink="#FF0000"
-vlink="#9900DD">
-<h3>GNU wget</h3>
-<a href="/graphics/agnuhead.html"><img src="/graphics/gnu-head-sm.jpg"
-alt=" [image of the Head of a GNU] " width="129" height="122"></a> [ 
-<!-- Please keep this list alphabetical -->
-<!-- PLEASE UPDATE THE LIST AT THE BOTTOM (OR TOP) OF THE PAGE TOO! -->
- <a href="/software/wget/wget.html">English</a> 
-<!-- | A HREF="/wget.LG.html" LANGUAGE /A  -->
-<!-- Please keep this list alphabetical -->
-<!-- PLEASE UPDATE THE LIST AT THE BOTTOM (OR TOP) OF THE PAGE TOO! -->
-] 
-
-<p>GNU wget is looking for a maintainer!  Time, inclination, and some C
-network programming knowledge are all important.  The TODO list in the
-distribution gives an overview of some open problems; organizing
-releases and updating infrastructure files are also essential tasks.</p>
-
-<p>If you are interested, please write address@hidden</p>
-
-<h4>Table of Contents</h4>
-
-<ul>
-<li><a href="#introduction" name="TOCintroduction">Introduction to GNU
-wget</a></li>
-
-<li><a href="#news" name="TOCnews">News</a></li>
-
-<li><a href="#downloading" name="TOCdownloading">Downloading GNU
-wget</a></li>
-
-<li><a href="#documentation" name=
-"TOCdocumentation">Documentation</a></li>
-
-<li><a href="#mailinglists" name="TOCmailinglists">Mailing
-lists</a></li>
-
-<li><a href="#addons" name="TOCaddons">Add-ons</a></li>
-
-<li><a href="wgetdev.html#development" name="TOCdevelopment">Development of
-GNU wget</a></li>
-</ul>
-
-<hr>
-<p><!-- Introduction -->
-</p>
-
-<h4><a href="#TOCintroduction" name="introduction">Introduction to GNU
-wget</a></h4>
-
-<p>GNU Wget is a <a href=
-"http://www.gnu.org/philosophy/free-sw.html";>free software</a> package
-for retrieving files using HTTP, HTTPS and FTP, the most widely-used
-Internet protocols. It is a non-interactive commandline tool, so it may
-easily be called from scripts, <tt>cron</tt> jobs, terminals without
-X support, etc.</p>
-
-<p>Wget has many features to make retrieving large files or mirroring
-entire web or FTP sites easy, including:</p>
-
-<ul>
-<li>Can resume aborted downloads, using <tt>REST</tt> and
-<tt>RANGE</tt></li>
-
-<li>Can use filename wild cards and recursively mirror directories</li>
-
-<li>NLS-based message files for many different languages</li>
-
-<li>Optionally converts absolute links in downloaded documents to
-relative, so that downloaded documents may link to each other
-locally</li>
-
-<li>Runs on most Unix-like operating systems as well as Microsoft
-Windows</li>
-
-<li>Supports HTTP and SOCKS proxies</li>
-
-<li>Supports HTTP cookies</li>
-
-<li>Supports persistent HTTP connections</li>
-
-<li>Unattended / background operation</li>
-
-<li>Uses local file timestamps to determine whether documents need to
-be re-downloaded when mirroring</li>
-
-<li>GNU wget is distributed under the <a href=
-"http://www.gnu.org/copyleft/gpl.html";>GNU General Public
-License</a>.</li>
-</ul>
-
-<!-- News -->
-<h4><a href="#TOCnews" name="news">News</a></h4>
-
-<p>The latest stable version of Wget is 1.8.  This release introduces
-several important new features since 1.7.1; see
-<a href="http://cvs.sunsite.dk/viewcvs.cgi/wget/NEWS?rev=WGET_1_8&content-type=text/plain";>NEWS</a>
-for details.
-
-<!-- Downloading -->
-<h4><a href="#TOCdownloading" name="downloading">Downloading GNU
-wget</a></h4>
-
-<p>The main distribution point for Wget is the GNU software archive.
-Please choose a <a href="http://www.gnu.org/order/ftp.html";>mirror
-site</a> close to you. The master directory is <a href=
-"http://ftp.gnu.org/pub/gnu/wget/";>http://ftp.gnu.org/pub/gnu/wget/</a>.</p>
-
-<p>Microsoft Windows binaries are available from SunSITE FTP server at <a
-href="ftp://sunsite.dk/projects/wget/windows/";
->ftp://sunsite.dk/projects/wget/windows/</a> or <a href=
-"http://space.tin.it/computer/hherold/";
->http://space.tin.it/computer/hherold/</a> and have been kindly provided by
-Heiko Herold. An MS-DOS binary designed to be used
-under plain DOS with a packet driver has been made available by Doug
-Kaufman. It is available from <a href= "http://www.rahul.net/dkaufman/";
->http://www.rahul.net/dkaufman/</a>. <a
-href="http://www.antinode.org/";>Antinode.org</a> offers a <a
-href="http://www.antinode.org/dec/sw/wget.html";>VMS port</a> of Wget.</p>
-
-<p>
-The latest <i>development source</i> for Wget is always available via <a href=
-"wgetdev.html#development">anonymous CVS</a>. The source and binary versions
-of the current development sources patched for compilation in the MS Windows
-environment are made available by Heiko Herold as well and are available from
-the URLs mentioned above.</p>
-
-<!-- Documentation -->
-<h4>
-<a href="#TOCdocumentation" name="documentation">Documentation</a>
-</h4>
-
-<p>
-The <a href="/software/wget/manual/">
-documentation for Wget 1.9</a> is now available in all common 
-formats.  For other manuals, please see 
-<a href="/manual">http://www.gnu.org/manual</a>.
-</p>
-
-<p><a href="ftp://sunsite.dk/projects/wget/wgethelp.zip";>Documentation
-for Wget 1.5.2 in Windows help format</a> is available from <a href=
-"ftp://sunsite.dk/";>SunSITE Denmark's FTP server</a> along with a copy
-of <a href="ftp://sunsite.dk/projects/wget/makeinfo.zip";>makeinfo for
-Win32</a>, which is used to build the Windows help file.</p>
-
-<!-- Mailing lists -->
-<h4><a href="#TOCmailinglists" name="mailinglists">Mailing
-lists</a></h4>
-
-The Wget mailing lists are kindly hosted at <a href=
-"http://sunsite.dk/";>SunSITE Denmark</a> thanks to <a href=
-"mailto:address@hidden";>Karsten Thygesen</a>.  To subscribe to
-them, send an email to
-<tt><var>list_name</var>address@hidden</tt>. For instance, to
-subscribe to the <tt>wget-patches</tt> list send an empty mail to:
-
-<blockquote><a href=
-"mailto:address@hidden";>address@hidden</a></blockquote>
-
-<p>To post to a list, send your email to
-<tt><var>list_name</var>@sunsite.dk</tt>. The lists are open for
-posting by non-subscribers. If you send a post to the list and you're
-not subscribed, please include a note that you'd like to be cc'd in
-replies to your post. Otherwise people will assume you are subscribed
-and will delete your address when replying so as not to send you two
-copies of their posts.</p>
-
-<p>For back posts, send an email to
-<tt><var>list_name</var>-get.<var>first</var>_<var>last</var>@sunsite.dk</tt>.
-The mail server limits the number of returned back posts to 100, so
-requesting more will actually return you only posts in range
-<var>first</var> - <var>first+99</var>. For instance, sending an email
-to the following address will retrieve posts 3500 to 3517:</p>
-
-<blockquote><a href=
-"mailto:address@hidden";>address@hidden</a></blockquote>
-
-<p>To unsubscribe to a list, send an email to
-<tt><var>list_name</var>address@hidden</tt>. For more
-information on list commands, send an email to
-<tt><var>list_name</var>address@hidden</tt>.</p>
-
-<p>
-The following mailing lists that deal with Wget are run at SunSITE:
-
-<blockquote>
-<dl>
-<dt><i>wget</i></dt>
-<dd>This is the main Wget discussion list. Currently the Wget bug
-reporting address, <a href=
-"mailto:address@hidden";>address@hidden</a>, simply forwards to this
-list.</p>
-
-<p>There are archives of the main Wget list at:</p>
-
-<ul>
-<li><a href=
-"http://fly.cc.fer.hr/archive/wget";>http://fly.cc.fer.hr/archive/wget</a></li>
-
-<li><a href=
-"http://www.mail-archive.com/wget%40sunsite.dk/";>http://www.mail-archive.com/wget%40sunsite.dk/</a></li>
-
-<li><a href=
-"http://www.geocrawler.com/archives/3/409/";>http://www.geocrawler.com/archives/3/409/</a></li>
-</ul>
-
-<p>The <tt>wget</tt> list is bidirectionally gatewayed to <a href=
-"news://sunsite.dk/sunsite.wget";>sunsite.wget</a>, a local newsgroup on
-<tt>sunsite.dk</tt>.
-</dd>
-
-<dt><i>wget-patches</i></dt>
-<dd>If you're submitting a patch to Wget to fix a bug or add a feature,
-please send it to this list. For more information, please read the Wget
-<a href="wgetdev.html#development">Development</a> section.</dd>
-
-<dt><i>wget-cvs</i></dt>
-<dd>Posts to this list are automatically generated. Each time a
-developer commits CVS sources, an email is sent to this list with the
-files, versions, and the comment used when committing. People very
-interested in the development of Wget can get a detailed play-by-play
-by subscribing to this list.</dd>
-
-<dt><i>wget-website-cvs</i></dt>
-<dd>This is just like the <tt>wget-cvs</tt> list, but for the
-<tt>wget-website</tt> CVS project rather than the <tt>wget</tt>
-project.</dd>
-
-</dl>
-</blockquote>
-
-<!-- Addons -->
-<h4><a href="#TOCaddons" name="addons">Add-ons</a></h4>
-
-<p>Several people have contributed to Wget and maintain their own web
-pages:</p>
-
-<ul>
-<li><a href="mailto:address@hidden";>Antonio Rosella</a> has
-written a <a href=
-"http://www.mit.edu/afs/sipb/machine/charon2/src/wget-1.5.3/util/download.html";>Wget
-Gateway</a>, to allow usage of a socksified Wget behind a
-firewall.</li>
-
-<li>Lachlan Cranswick has a <a href=
-"http://www.ccp14.ac.uk/mirror/wget.htm";>very nice page</a> with
-a lot of Wget tips and goodies, especially <a href=
-"http://www.ccp14.ac.uk/mirror/wget.htm#script";>shell
-scripts</a> available for setting up Wget and auto mirroring.</li>
-
-<li>A graphical user interface for Wget on MS Windows has been
-developed by Jens Roesner and is available from <a
-href="http://www.jensroesner.de/wgetgui/";
->http://www.jensroesner.de/wgetgui/</a>.</li>
-</ul>
-
-<hr>
-[ <!-- Please keep this list alphabetical -->
-<!-- PLEASE UPDATE THE LIST AT THE BOTTOM (OR TOP) OF THE PAGE TOO! -->
- <a href="/wget.html">English</a> 
-<!-- | A HREF="/wget.LG.html" LANGUAGE /A  -->
-<!-- Please keep this list alphabetical -->
-<!-- PLEASE UPDATE THE LIST AT THE BOTTOM (OR TOP) OF THE PAGE TOO! -->
-] 
-
-<hr>
-<p>Return to <a href="http://www.gnu.org/home.html";>GNU's home page</a>.</p>
-
-<p>Please send FSF &amp; GNU inquiries &amp; questions to <a href=
-"mailto:address@hidden";><em>address@hidden</em></a>. There are also <a href=
-"http://www.gnu.org/home.html#ContactInfo";>other ways to contact</a> the FSF.</p>
-
-<p>Please send comments on these web pages to <a href=
-"mailto:address@hidden";><em>address@hidden</em></a>, send other
-questions to <a href="mailto:address@hidden";><em>address@hidden</em></a>.</p>
-
-<p>Copyright 2001, 2007 Free Software Foundation, Inc., 59 Temple Place -
-Suite 330, Boston, MA 02111, USA</p>
-
-<p>Verbatim copying and distribution of this entire article is
-permitted in any medium, provided this notice is preserved.</p>
-
-<p>Updated: 
-<!-- hhmts start -->
-$Date: 2007/06/19 18:22:11 $ $Author: karl $
-<!-- hhmts end -->
-</p>
-
-<hr>
-</body>
-</html>

Index: wgetdev.html
===================================================================
RCS file: wgetdev.html
diff -N wgetdev.html
--- wgetdev.html        17 Jan 2002 22:59:17 -0000      1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,297 +0,0 @@
-<!DOCTYPE html PUBLIC "-//IETF//DTD HTML 2.0//EN">
-<html>
-<head>
-<title>GNU wget - GNU Project - Free Software Foundation (FSF)</title>
-<link rev="made" href="mailto:address@hidden";>
-</head>
-<body bgcolor="#FFFFFF" text="#000000" link="#1F00FF" alink="#FF0000"
-vlink="#9900DD">
-<h3>GNU wget</h3>
-<a href="http://www.gnu.org/graphics/agnuhead.html";><img src="http://www.gnu.org/graphics/gnu-head-sm.jpg";
-alt=" [image of the Head of a GNU] " width="129" height="122"></a> [ 
-<!-- Please keep this list alphabetical -->
-<!-- PLEASE UPDATE THE LIST AT THE BOTTOM (OR TOP) OF THE PAGE TOO! -->
- <a href="/wget.html">English</a> 
-<!-- | A HREF="/wget.LG.html" LANGUAGE /A  -->
-<!-- Please keep this list alphabetical -->
-<!-- PLEASE UPDATE THE LIST AT THE BOTTOM (OR TOP) OF THE PAGE TOO! -->
-] 
-
-<p>
-</p>
-
-<h4>Table of Contents</h4>
-
-<ul>
-<li><a href="wget.html#introduction" name="TOCintroduction">Introduction to GNU
-wget</a></li>
-
-<li><a href="wget.html#news" name="TOCnews">News</a></li>
-
-<li><a href="wget.html#downloading" name="TOCdownloading">Downloading GNU
-wget</a></li>
-
-<li><a href="wget.html#documentation" name=
-"TOCdocumentation">Documentation</a></li>
-
-<li><a href="wget.html#mailinglists" name="TOCmailinglists">Mailing
-lists</a></li>
-
-<li><a href="wget.html#addons" name="TOCaddons">Add-ons</a></li>
-
-<li><a href="wgetdev.html#development" name="TOCdevelopment">Development of GNU
-wget</a></li>
-</ul>
-
-<hr>
-<!-- Development -->
-<h4><a href="#TOCdevelopment" name="development">Development</a></h4>
-
-<p>
-If you would just like to browse the CVS source, you can use the <a
-href="http://sunsite.dk/cvsweb/wget/";>cvsweb</a> interface. It's convenient
-for viewing file commit histories, particular file versions, and diffs between
-one version and another.
-</p>
-
-<p>
-If you would like to help with development of Wget, be sure you have
-subscribed to the <a href="wget.html#mailinglists">Wget mailing list</a>. It's
-generally best to use the latest CVS source as a base for your development,
-rather than the latest stable release version. To check out the latest
-source:
-</p>
-
-<ol>
-<li>If you don't already have it, get <a href="http://www.cvshome.org/";>the
-CVS software</a>.</li>
-
-<li>Make a directory where your CVS copy of Wget will reside -- let's
-call it <code>CVS</code>. Change to this directory:<br>
- <samp>%&nbsp;</samp> <kbd>cd CVS</kbd></li>
-
-<li>Log in to the CVS server:<br>
- <samp>%&nbsp;</samp> <kbd>cvs -d :pserver:address@hidden:/pack/anoncvs
-login</kbd><br>
- <samp>CVS&nbsp;password:&nbsp;</samp> <kbd>cvs</kbd></li>
-
-<li>Get a copy of the source:<br>
- <samp>%&nbsp;</samp> <kbd>cvs -d :pserver:address@hidden:/pack/anoncvs
-checkout wget</kbd><br>
-<br>
- In the unusual case where the Wget development tree has been split
-into multiple branches in order to allow a feature-frozen pre-release
-version to be "burned in" while development continues unabated on the
-main branch, you can check out the separate branch using the
-<kbd>-r</kbd> parameter to <kbd>checkout</kbd> or <kbd>update</kbd>.
-For example, you can check out the branch that was split off
-anticipating the 1.6 release using:<br>
- <samp>%&nbsp;</samp> <kbd>cvs -d :pserver:address@hidden:/pack/anoncvs
-checkout -r release-1_6 wget</kbd></li>
-</ol>
-
-<p>To reduce bandwidth and needless updates, the CVS tree does not
-contain automatically-generated files, even when these are normally
-present in the distribution tarballs.</p>
-
-<p>Therefore, to build Wget from the CVS sources, you need to have <a
-href="http://www.gnu.org/";>GNU</a> <a href=
-"http://www.gnu.org/software/autoconf/";>autoconf</a>, <a href=
-"http://www.gnu.org/software/gettext/";>gettext</a>, and <a href=
-"http://www.gnu.org/software/texinfo/";>texinfo</a> installed on your
-machine to do the automatic generation.</p>
-
-<p>For those who might be confused as to what to do once they check out
-the CVS source, considering <tt>configure</tt> and <tt>Makefile</tt> do
-not yet exist at that point, a file called <tt>Makefile.cvs</tt> has
-been provided, and may be called using <kbd>make -f Makefile.cvs</kbd>.
-Currently, that makefile simply calls <tt>autoconf</tt>, after which
-you're ready to build Wget in the normal fashion, with
-<kbd>configure</kbd> and <kbd>make</kbd>.</p>
-
-<p>So, to sum up, after checking out the CVS sources as described
-above, you may proceed as follows:</p>
-
-<ol>
-<li>Change to the topmost Wget directory:<br>
- <samp>%&nbsp;</samp> <kbd>cd wget</kbd></li>
-
-<li>Generate all the automatically-generated files required prior to
-configuring the package:<br>
- <samp>%&nbsp;</samp> <kbd>make -f Makefile.cvs</kbd></li>
-
-<li>Configure the package and compile it:<br>
- <samp>%&nbsp;</samp> <kbd>./configure
-[<var>some_parameters</var>]</kbd><br>
- <samp>%&nbsp;</samp> <kbd>make</kbd></li>
-
-<li>Hack, compile, test, hack, compile, test...</li>
-</ol>
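[Editorial note on the quoted build steps: the `make -f Makefile.cvs` bootstrap described above can be illustrated without a Wget checkout. This is a hedged sketch; the makefile and its marker file are stand-ins invented here (the real `Makefile.cvs` runs `autoconf`), not anything from the Wget tree:]

```shell
set -e
demo=$(mktemp -d)
cd "$demo"
# Stand-in for Wget's Makefile.cvs: the real one invokes autoconf to
# generate ./configure; this one just writes a marker file so the
# `make -f <file>` invocation itself can be demonstrated.
printf 'all:\n\techo "bootstrap step ran" > bootstrap.log\n' > Makefile.cvs
make -f Makefile.cvs
cat bootstrap.log
```

[The point being demonstrated is only that `-f` tells make to read a file other than the default `Makefile`, which is why the checked-out tree can be bootstrapped before `configure` exists.]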
-
-<a name="patches"></a> 
-
-<h4>Patches</h4>
-
-<p>Patches to Wget should be mailed to <a href=
-"mailto:address@hidden";>address@hidden</a>. Each
-patch will be reviewed by the developers, and will be acknowledged and
-added to the distribution, or rejected with an explanation.</p>
-
-<p>To increase the chances of your patch being accepted, please make
-sure it applies successfully (and correctly!) to the latest CVS source.
-To update to the latest:</p>
-
-<ol>
-<li>Change to the directory where your CVS version of Wget resides:<br>
- <samp>%&nbsp;</samp> <kbd>cd CVS/wget</kbd></li>
-
-<li>Tell CVS to update your sources:<br>
- <samp>%&nbsp;</samp> <kbd>cvs update -d</kbd></li>
-</ol>
-
-<p>If you have changed files in your source tree, the update will not
-write over them, so you might need to move your changed versions to a
-new name while you update, or else check out a separate copy of the
-source tree.</p>
-
-<p>There are two ways of generating a patch (current working directory
-is still <code>CVS/wget</code>):</p>
-
-<ol>
-<li>Store your original <code>file.c</code> as <code>file.c.orig</code>
-and generate patch using the "unified diff" or "context diff" option of
-the <code>diff</code> program (using ordinary, context-free diffs is
-notoriously prone to error, as line numbers tend to change when others
-make changes to the same source file). Create the patch in the top
-level of the Wget source directory (<code>CVS/wget</code> in this
-example) in the following way:<br>
- <samp>%&nbsp;</samp> <kbd>diff -u <var>path/to/file.c</var>.orig
-<var>path/to/file.c</var> &gt; <var>file</var>.patch</kbd><br>
- or, if your <tt>diff</tt> does not support the <code>-u</code>
-option,<br>
- <samp>%&nbsp;</samp> <kbd>diff -c <var>path/to/file.c</var>.orig
-<var>path/to/file.c</var> &gt; <var>file</var>.patch</kbd><br>
-</li>
-
-<li>An alternative, and generally easier way, is to use CVS's
-<tt>diff</tt> command:<br>
- <samp>%&nbsp;</samp> <kbd>cvs diff -u &gt;
-<var>name</var>.patch</kbd><br>
- and again, if that doesn't work:<br>
- <samp>%&nbsp;</samp> <kbd>cvs diff -c &gt;
-<var>name</var>.patch</kbd><br>
-</li>
-</ol>
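Before mailing a patch, the unified-diff workflow from option 1 can be sanity-checked on a scratch copy. The sketch below uses invented file names: it fabricates an "original" and an edited file, then produces the patch the same way:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
# Illustrative files; a real patch would use paths under CVS/wget.
printf 'int main(void)\n{\n    return 0;\n}\n' > file.c.orig
printf 'int main(void)\n{\n    return 1;\n}\n' > file.c
# diff exits with status 1 when the files differ, which 'set -e' would
# otherwise treat as a failure, hence the guard.
diff -u file.c.orig file.c > file.patch || true
cat file.patch
```

The resulting `file.patch` shows the removed line prefixed with `-` and the added line with `+`, plus `---`/`+++` headers naming both files.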
-
-<p>If your mail program or gateway is inclined to munge patches, e.g.
-by line-wrapping them, send them as an attachment. Otherwise, patches
-simply inserted into an email message are fine.</p>
-
-<p>Each patch should be accompanied by an update to the appropriate
-<code>ChangeLog</code> file, but please don't mail <em>patches</em> to
-<code>ChangeLog</code> itself because they have an extremely high rate
-of failure. Just mail us the new part of the <code>ChangeLog</code> you
-added. Guidelines for writing <code>ChangeLog</code> entries are
-governed by the <a href=
-"http://www.gnu.org/prep/standards_toc.html";>GNU coding
-standards</a>.</p>
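The GNU coding standards referenced above prescribe a specific ChangeLog shape: a date/author header, a blank line, then tab-indented entries naming each changed file and function. A hypothetical entry (names invented for illustration):

```text
2002-01-17  J. Random Hacker  <jrh@example.org>

	* src/retr.c (fd_read_body): Retry once on transient read errors.
	* doc/wget.texi (Download Options): Document the new behavior.
```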
-
-<p>If you make feature additions or changes, please also make sure to
-provide diffs to the <tt>wget.texi</tt> documentation file. Finally,
-all significant user-visible changes need to be mentioned in the
-top-level NEWS file.</p>
-
-<p>If you're interested, here is a more detailed description of how the
-Wget patching process works:</p>
-
-<ol>
-<li>All semantically visible changes to the source go through the patch
-list, including changes written by the Wget developers with
-write-access to the CVS repository. All interested parties can
-subscribe to the patch list by mailing <a href=
-"mailto:address@hidden";><code>address@hidden</code></a>,
-and are invited to comment on the patches. General discussion of Wget
-implementation should still be directed to the general Wget mailing
-list (<a href=
-"mailto:address@hidden";><code>address@hidden</code></a>),
-however.</li>
-
-<li>All dedicated developers have CVS write access, and can apply
-patches as soon as they mail them to the patch list. If other
-developers strongly disagree with the patch, it can be taken out
-later.</li>
-
-<li>Outside contributors will simply mail patches to the patch list.
-Once there, a patch needs to be approved by at least one person before
-it is applied to CVS. The person who applies the patch doesn't need to
-be the same person who reviews/approves it.</li>
-</ol>
-
-<p>People who want a detailed play-by-play of CVS commits should
-subscribe to the <a href="wget.html#mailinglists">wget-cvs</a> mailing list.</p>
-
-<h4>Translation</h4>
-
-<p>If you are able to provide translations of the message files to a
-language other than English, please check out the <a href=
-"http://www2.iro.umontreal.ca/~pinard/po/registry.cgi?domain=wget";>Translation
-Project page for Wget</a>, where coordination of such efforts is
-done.</p>
-
-<h4>Website CVS project</h4>
-
-<p>To ease maintenance of the Wget website, a separate CVS project
-exists for it, with write access for the same people who have write
-access to the main project. The CVS project name is, unsurprisingly,
-<tt>wget-website</tt>. As with the main <tt>wget</tt> project, there is
-a <a href="../cvsweb/wget-website/">cvsweb interface to the
-<tt>wget-website</tt> project</a>.</p>
-
-<p>People without CVS write access who want to submit changes to the
-website should submit them as patches to <a href=
-"mailto:address@hidden";><code>address@hidden</code></a>,
-as with changes to the main project. 
-
-<!--
-  * If needed, change the copyright block at the bottom. In general, all pages
-    on the GNU web server should have the section about verbatim
-    copying.  Please do NOT remove this without talking with the webmasters
-    first.
--->
-</p>
-
-<hr>
-[ <!-- Please keep this list alphabetical -->
-<!-- PLEASE UPDATE THE LIST AT THE BOTTOM (OR TOP) OF THE PAGE TOO! -->
- <a href="/wget.html">English</a> 
-<!-- | A HREF="/wget.LG.html" LANGUAGE /A  -->
-<!-- Please keep this list alphabetical -->
-<!-- PLEASE UPDATE THE LIST AT THE BOTTOM (OR TOP) OF THE PAGE TOO! -->
-] 
-
-<hr>
-<p>Return to <a href="http://www.gnu.org/home.html";>GNU's home page</a>.</p>
-
-<p>Please send FSF &amp; GNU inquiries &amp; questions to <a href=
-"mailto:address@hidden";><em>address@hidden</em></a>. There are also <a href=
"http://www.gnu.org/home.html#ContactInfo";>other ways to contact</a> the FSF.</p>
-
-<p>Please send comments on these web pages to <a href=
-"mailto:address@hidden";><em>address@hidden</em></a>, send other
-questions to <a href="mailto:address@hidden";><em>address@hidden</em></a>.</p>
-
-<p>Copyright (C) 2001 Free Software Foundation, Inc., 59 Temple Place -
-Suite 330, Boston, MA 02111, USA</p>
-
-<p>Verbatim copying and distribution of this entire article is
-permitted in any medium, provided this notice is preserved.</p>
-
-<p>Updated: 
-<!-- hhmts start -->
-    $Id: wgetdev.html,v 1.1 2002/01/17 22:59:17 yrp001 Exp $
-<!-- hhmts end -->
-</p>
-
-<hr>
-</body>
-</html>

Index: manual/.symlinks
===================================================================
RCS file: manual/.symlinks
diff -N manual/.symlinks
--- manual/.symlinks    20 Oct 2003 05:11:43 -0000      1.2
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,2 +0,0 @@
-manual.html index.html
-/software/wget/manual/wget-1.8.1/wget.html manual.html

Index: manual/wget/.symlinks
===================================================================
RCS file: manual/wget/.symlinks
diff -N manual/wget/.symlinks
--- manual/wget/.symlinks       20 Oct 2003 05:11:43 -0000      1.2
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,2 +0,0 @@
-wget.html index.html
-/software/wget/manual/wget-1.8.1/wget.html wget.html

Index: manual/wget-1.8.1/.symlinks
===================================================================
RCS file: manual/wget-1.8.1/.symlinks
diff -N manual/wget-1.8.1/.symlinks
--- manual/wget-1.8.1/.symlinks 29 Dec 2004 19:04:55 -0000      1.6
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1 +0,0 @@
-index.html wget.html

Index: manual/wget-1.8.1/wget.html
===================================================================
RCS file: manual/wget-1.8.1/wget.html
diff -N manual/wget-1.8.1/wget.html
--- manual/wget-1.8.1/wget.html 13 Oct 2006 19:24:22 -0000      1.2
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,90 +0,0 @@
-<!DOCTYPE html PUBLIC "-//IETF//DTD HTML 2.0//EN">
-<HTML>
-<HEAD>
-<TITLE>GNU Wget Manual - Table of Contents - GNU Project - Free Software Foundation (FSF)</TITLE>
-<LINK REV="made" HREF="mailto:address@hidden";>
-</HEAD>
-<BODY BGCOLOR="#FFFFFF" TEXT="#000000" LINK="#1F00FF" ALINK="#FF0000" VLINK="#9900DD">
-<H1>GNU Wget Manual - Table of Contents</H1>
-<ADDRESS>Free Software Foundation</ADDRESS>
-<ADDRESS>last updated January 17, 2002</ADDRESS>
-<P>
-<A HREF="/graphics/gnu-head-sm.jpg"><IMG SRC="/graphics/gnu-head-sm.jpg"
-   ALT=" [image of the Head of a GNU] "
-   WIDTH="129" HEIGHT="122">&#32;(jpeg 7k)</A>
-<A HREF="/graphics/gnu-head.jpg">(jpeg 21k)</A>
-
-<P>
-<P>
-<P><HR><P>
-<P>
-This manual is available in the following formats:
-<P>
-<UL>
-  <LI>formatted in <A HREF="html_mono/wget.html">HTML 
-      (161K characters)</A> entirely on one web page.
-  <P>
-  <LI>formatted in <A HREF="html_mono/wget.html.gz">HTML 
-      (49K gzipped characters)</A> entirely on 
-      one web page.
-  <P>
-  <LI> formatted in <a href="html_chapter/wget_toc.html">HTML</a> 
-       with one web page per chapter.
-  <p>
-  <LI> formatted in <a href="html_chapter/wget.texi_html_chapter.tar.gz">HTML
-       (51K gzipped tar file)</a> 
-       with one web page per chapter.
-  <p>
-  <LI> formatted in <a href="html_node/wget_toc.html">HTML</a> 
-       with one web page per node.
-  <p>
-  <LI> formatted in <a href="html_node/wget.texi_html_node.tar.gz">HTML
-       (54K gzipped tar file)</a> 
-       with one web page per node.
-  <p>
-  <LI>formatted as an
-      <A HREF="info/wget-info.tar.gz">Info document (50K characters
-      gzipped tar file)</A>.
-  <P>
-  <LI>formatted as
-      <A HREF="text/wget.txt">ASCII text (142K characters)</A>.
-  <P>
-  <LI>formatted as
-      <A HREF="text/wget.txt.gz">ASCII text 
-      (46K gzipped characters)</A>.
-  <P>
-  <LI>formatted as
-      <A HREF="dvi/wget.dvi.gz">a TeX dvi file (73K characters
-      gzipped)</A>.
-  <P>
-  <li>formatted as
-      <A href="ps/wget.ps.gz">a PostScript file (134K characters
-      gzipped)</a>.
-  <p>
-  <LI>the original 
-      <A HREF="texi/wget.texi.tar.gz">Texinfo source (96K characters
-      gzipped tar file)</A>
-  <P>
-</UL>
-<P>
-
-<HR>
-
-Return to <A HREF="/home.html">GNU's home page</A>.
-<P>
-FSF &amp; GNU inquiries &amp; questions to
-<A HREF="mailto:address@hidden";><EM>address@hidden</EM></A>.
-Other <A HREF="/home.html#ContactInfo">ways to contact</A> the FSF.
-<P>
-Comments on these web pages to
-<A HREF="mailto:address@hidden";><EM>address@hidden</EM></A>,
-send other questions to
-<A HREF="mailto:address@hidden";><EM>address@hidden</EM></A>.
-<P>
-Copyright (C) 1997, 1998 Free Software Foundation, Inc.,
-59 Temple Place - Suite 330, Boston, MA  02111,  USA
-<P>
-Verbatim copying and distribution of this entire article is
-permitted in any medium, provided this notice is preserved.<HR>
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/dvi/wget.dvi.gz
===================================================================
RCS file: manual/wget-1.8.1/dvi/wget.dvi.gz
diff -N manual/wget-1.8.1/dvi/wget.dvi.gz
Binary files /tmp/cvsmumIGv and /dev/null differ

Index: manual/wget-1.8.1/html_chapter/wget.texi_html_chapter.tar.gz
===================================================================
RCS file: manual/wget-1.8.1/html_chapter/wget.texi_html_chapter.tar.gz
diff -N manual/wget-1.8.1/html_chapter/wget.texi_html_chapter.tar.gz
Binary files /tmp/cvsn2fvUy and /dev/null differ

Index: manual/wget-1.8.1/html_chapter/wget_1.html
===================================================================
RCS file: manual/wget-1.8.1/html_chapter/wget_1.html
diff -N manual/wget-1.8.1/html_chapter/wget_1.html
--- manual/wget-1.8.1/html_chapter/wget_1.html  19 Oct 2003 23:07:43 -0000      1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,124 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Overview</TITLE>
-</HEAD>
-<BODY>
-Go to the first, previous, <A HREF="wget_2.html">next</A>, <A HREF="wget_11.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-<P><HR><P>
-
-<P>
address@hidden Net Utilities
address@hidden World Wide Web
-* Wget: (wget).         The non-interactive network downloader.
-
-
-<P>
-Copyright (C) 1996, 1997, 1998, 2000, 2001 Free Software
-Foundation, Inc.
-
-
-<P>
-Permission is granted to copy, distribute and/or modify this document
-under the terms of the GNU Free Documentation License, Version 1.1 or
-any later version published by the Free Software Foundation; with the
-Invariant Sections being "GNU General Public License" and "GNU Free
-Documentation License", with no Front-Cover Texts, and with no
-Back-Cover Texts.  A copy of the license is included in the section
-entitled "GNU Free Documentation License".
-
-
-
-
-<H1><A NAME="SEC1" HREF="wget_toc.html#TOC1">Overview</A></H1>
-<P>
-<A NAME="IDX1"></A>
-<A NAME="IDX2"></A>
-
-
-<P>
-GNU Wget is a free utility for non-interactive download of files from
-the Web.  It supports HTTP, HTTPS, and FTP protocols, as
-well as retrieval through HTTP proxies.
-
-
-<P>
-This chapter is a partial overview of Wget's features.
-
-
-
-<UL>
-<LI>
-
-Wget is non-interactive, meaning that it can work in the background,
-while the user is not logged on.  This allows you to start a retrieval
-and disconnect from the system, letting Wget finish the work.  By
-contrast, most Web browsers require the user's constant presence,
-which can be a great hindrance when transferring a lot of data.
-
-<LI>
-
-Wget can follow links in HTML pages and create local versions of
-remote web sites, fully recreating the directory structure of the
-original site.  This is sometimes referred to as "recursive
-downloading."  While doing that, Wget respects the Robot Exclusion
-Standard (<TT>`/robots.txt'</TT>).  Wget can be instructed to convert the
-links in downloaded HTML files to the local files for offline
-viewing.
-
-<LI>
-
-File name wildcard matching and recursive mirroring of directories are
-available when retrieving via FTP.  Wget can read the time-stamp
-information given by both HTTP and FTP servers, and store it
-locally.  Thus Wget can see if the remote file has changed since last
-retrieval, and automatically retrieve the new version if it has.  This
-makes Wget suitable for mirroring of FTP sites, as well as home
-pages.
-
-<LI>
-
-Wget has been designed for robustness over slow or unstable network
-connections; if a download fails due to a network problem, it will
-keep retrying until the whole file has been retrieved.  If the server
-supports regetting, it will instruct the server to continue the
-download from where it left off.
-
-<LI>
-
-Wget supports proxy servers, which can lighten the network load, speed
-up retrieval and provide access behind firewalls.  However, if you are
-behind a firewall that requires that you use a socks style gateway, you
-can get the socks library and build Wget with support for socks.  Wget
-also supports passive FTP downloading as an option.
-
-<LI>
-
-Builtin features offer mechanisms to tune which links you wish to follow
-(see section <A HREF="wget_4.html#SEC14">Following Links</A>).
-
-<LI>
-
-The retrieval is conveniently traced with printing dots, each dot
-representing a fixed amount of data received (1KB by default).  These
-representations can be customized to your preferences.
-
-<LI>
-
-Most of the features are fully configurable, either through command line
-options, or via the initialization file <TT>`.wgetrc'</TT> (see section <A HREF="wget_6.html#SEC24">Startup File</A>).  Wget allows you to define <EM>global</EM> startup files
-(<TT>`/usr/local/etc/wgetrc'</TT> by default) for site settings.
-
-<LI>
-
-Finally, GNU Wget is free software.  This means that everyone may use
-it, redistribute it and/or modify it under the terms of the GNU General
-Public License, as published by the Free Software Foundation
-(see section <A HREF="wget_10.html#SEC44">Copying</A>).
-</UL>
-
-<P><HR><P>
-Go to the first, previous, <A HREF="wget_2.html">next</A>, <A HREF="wget_11.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_chapter/wget_10.html
===================================================================
RCS file: manual/wget-1.8.1/html_chapter/wget_10.html
diff -N manual/wget-1.8.1/html_chapter/wget_10.html
--- manual/wget-1.8.1/html_chapter/wget_10.html 29 Jun 2005 21:04:13 -0000      1.2
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,923 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Copying</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_9.html">previous</A>, <A HREF="wget_11.html">next</A>, <A HREF="wget_11.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-<P><HR><P>
-
-
-<H1><A NAME="SEC44" HREF="wget_toc.html#TOC44">Copying</A></H1>
-<P>
-<A NAME="IDX150"></A>
-<A NAME="IDX151"></A>
-<A NAME="IDX152"></A>
-<A NAME="IDX153"></A>
-
-
-<P>
-GNU Wget is licensed under the GNU GPL, which makes it <EM>free
-software</EM>.
-
-
-<P>
-Please note that "free" in "free software" refers to liberty, not
-price.  As some GNU project advocates like to point out, think of "free
-speech" rather than "free beer".  The exact and legally binding
-distribution terms are spelled out below; in short, you have the right
-(freedom) to run and change Wget and distribute it to other people, and
-even--if you want--charge money for doing either.  The important
-restriction is that you have to grant your recipients the same rights
-and impose the same restrictions.
-
-
-<P>
-This method of licensing software is also known as <EM>open source</EM>
-because, among other things, it makes sure that all recipients will
-receive the source code along with the program, and be able to improve
-it.  The GNU project prefers the term "free software" for reasons
-outlined at
-<A HREF="http://www.gnu.org/philosophy/free-software-for-freedom.html";>http://www.gnu.org/philosophy/free-software-for-freedom.html</A>.
-
-
-<P>
-The exact license terms are defined by this paragraph and the GNU
-General Public License it refers to:
-
-
-
-<BLOCKQUOTE>
-<P>
-GNU Wget is free software; you can redistribute it and/or modify it
-under the terms of the GNU General Public License as published by the
-Free Software Foundation; either version 2 of the License, or (at your
-option) any later version.
-
-
-<P>
-GNU Wget is distributed in the hope that it will be useful, but WITHOUT
-ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
-FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
-for more details.
-
-
-<P>
-A copy of the GNU General Public License is included as part of this
-manual; if you did not receive it, write to the Free Software
-Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
-</BLOCKQUOTE>
-
-<P>
-In addition to this, this manual is free in the same sense:
-
-
-
-<BLOCKQUOTE>
-<P>
-Permission is granted to copy, distribute and/or modify this document
-under the terms of the GNU Free Documentation License, Version 1.1 or
-any later version published by the Free Software Foundation; with the
-Invariant Sections being "GNU General Public License" and "GNU Free
-Documentation License", with no Front-Cover Texts, and with no
-Back-Cover Texts.  A copy of the license is included in the section
-entitled "GNU Free Documentation License".
-</BLOCKQUOTE>
-
-<P>
-The full texts of the GNU General Public License and of the GNU Free
-Documentation License are available below.
-
-
-
-
-<H2><A NAME="SEC45" HREF="wget_toc.html#TOC45">GNU General Public 
License</A></H2>
-<P>
-Version 2, June 1991
-
-
-
-<PRE>
-Copyright (C) 1989, 1991 Free Software Foundation, Inc.
-675 Mass Ave, Cambridge, MA 02139, USA
-
-Everyone is permitted to copy and distribute verbatim copies
-of this license document, but changing it is not allowed.
-</PRE>
-
-
-
-<H2><A NAME="SEC46" HREF="wget_toc.html#TOC46">Preamble</A></H2>
-
-<P>
-  The licenses for most software are designed to take away your
-freedom to share and change it.  By contrast, the GNU General Public
-License is intended to guarantee your freedom to share and change free
-software--to make sure the software is free for all its users.  This
-General Public License applies to most of the Free Software
-Foundation's software and to any other program whose authors commit to
-using it.  (Some other Free Software Foundation software is covered by
-the GNU Library General Public License instead.)  You can apply it to
-your programs, too.
-
-
-<P>
-  When we speak of free software, we are referring to freedom, not
-price.  Our General Public Licenses are designed to make sure that you
-have the freedom to distribute copies of free software (and charge for
-this service if you wish), that you receive source code or can get it
-if you want it, that you can change the software or use pieces of it
-in new free programs; and that you know you can do these things.
-
-
-<P>
-  To protect your rights, we need to make restrictions that forbid
-anyone to deny you these rights or to ask you to surrender the rights.
-These restrictions translate to certain responsibilities for you if you
-distribute copies of the software, or if you modify it.
-
-
-<P>
-  For example, if you distribute copies of such a program, whether
-gratis or for a fee, you must give the recipients all the rights that
-you have.  You must make sure that they, too, receive or can get the
-source code.  And you must show them these terms so they know their
-rights.
-
-
-<P>
-  We protect your rights with two steps: (1) copyright the software, and
-(2) offer you this license which gives you legal permission to copy,
-distribute and/or modify the software.
-
-
-<P>
-  Also, for each author's protection and ours, we want to make certain
-that everyone understands that there is no warranty for this free
-software.  If the software is modified by someone else and passed on, we
-want its recipients to know that what they have is not the original, so
-that any problems introduced by others will not reflect on the original
-authors' reputations.
-
-
-<P>
-  Finally, any free program is threatened constantly by software
-patents.  We wish to avoid the danger that redistributors of a free
-program will individually obtain patent licenses, in effect making the
-program proprietary.  To prevent this, we have made it clear that any
-patent must be licensed for everyone's free use or not licensed at all.
-
-
-<P>
-  The precise terms and conditions for copying, distribution and
-modification follow.
-
-
-
-
-<H2><A NAME="SEC47" HREF="wget_toc.html#TOC47">TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION</A></H2>
-
-
-<OL>
-<LI>
-
-This License applies to any program or other work which contains
-a notice placed by the copyright holder saying it may be distributed
-under the terms of this General Public License.  The "Program", below,
-refers to any such program or work, and a "work based on the Program"
-means either the Program or any derivative work under copyright law:
-that is to say, a work containing the Program or a portion of it,
-either verbatim or with modifications and/or translated into another
-language.  (Hereinafter, translation is included without limitation in
-the term "modification".)  Each licensee is addressed as "you".
-
-Activities other than copying, distribution and modification are not
-covered by this License; they are outside its scope.  The act of
-running the Program is not restricted, and the output from the Program
-is covered only if its contents constitute a work based on the
-Program (independent of having been made by running the Program).
-Whether that is true depends on what the Program does.
-
-<LI>
-
-You may copy and distribute verbatim copies of the Program's
-source code as you receive it, in any medium, provided that you
-conspicuously and appropriately publish on each copy an appropriate
-copyright notice and disclaimer of warranty; keep intact all the
-notices that refer to this License and to the absence of any warranty;
-and give any other recipients of the Program a copy of this License
-along with the Program.
-
-You may charge a fee for the physical act of transferring a copy, and
-you may at your option offer warranty protection in exchange for a fee.
-
-<LI>
-
-You may modify your copy or copies of the Program or any portion
-of it, thus forming a work based on the Program, and copy and
-distribute such modifications or work under the terms of Section 1
-above, provided that you also meet all of these conditions:
-
-
-<OL>
-<LI>
-
-You must cause the modified files to carry prominent notices
-stating that you changed the files and the date of any change.
-
-<LI>
-
-You must cause any work that you distribute or publish, that in
-whole or in part contains or is derived from the Program or any
-part thereof, to be licensed as a whole at no charge to all third
-parties under the terms of this License.
-
-<LI>
-
-If the modified program normally reads commands interactively
-when run, you must cause it, when started running for such
-interactive use in the most ordinary way, to print or display an
-announcement including an appropriate copyright notice and a
-notice that there is no warranty (or else, saying that you provide
-a warranty) and that users may redistribute the program under
-these conditions, and telling the user how to view a copy of this
-License.  (Exception: if the Program itself is interactive but
-does not normally print such an announcement, your work based on
-the Program is not required to print an announcement.)
-</OL>
-
-These requirements apply to the modified work as a whole.  If
-identifiable sections of that work are not derived from the Program,
-and can be reasonably considered independent and separate works in
-themselves, then this License, and its terms, do not apply to those
-sections when you distribute them as separate works.  But when you
-distribute the same sections as part of a whole which is a work based
-on the Program, the distribution of the whole must be on the terms of
-this License, whose permissions for other licensees extend to the
-entire whole, and thus to each and every part regardless of who wrote it.
-
-Thus, it is not the intent of this section to claim rights or contest
-your rights to work written entirely by you; rather, the intent is to
-exercise the right to control the distribution of derivative or
-collective works based on the Program.
-
-In addition, mere aggregation of another work not based on the Program
-with the Program (or with a work based on the Program) on a volume of
-a storage or distribution medium does not bring the other work under
-the scope of this License.
-
-<LI>
-
-You may copy and distribute the Program (or a work based on it,
-under Section 2) in object code or executable form under the terms of
-Sections 1 and 2 above provided that you also do one of the following:
-
-
-<OL>
-<LI>
-
-Accompany it with the complete corresponding machine-readable
-source code, which must be distributed under the terms of Sections
-1 and 2 above on a medium customarily used for software interchange; or,
-
-<LI>
-
-Accompany it with a written offer, valid for at least three
-years, to give any third party, for a charge no more than your
-cost of physically performing source distribution, a complete
-machine-readable copy of the corresponding source code, to be
-distributed under the terms of Sections 1 and 2 above on a medium
-customarily used for software interchange; or,
-
-<LI>
-
-Accompany it with the information you received as to the offer
-to distribute corresponding source code.  (This alternative is
-allowed only for noncommercial distribution and only if you
-received the program in object code or executable form with such
-an offer, in accord with Subsection b above.)
-</OL>
-
-The source code for a work means the preferred form of the work for
-making modifications to it.  For an executable work, complete source
-code means all the source code for all modules it contains, plus any
-associated interface definition files, plus the scripts used to
-control compilation and installation of the executable.  However, as a
-special exception, the source code distributed need not include
-anything that is normally distributed (in either source or binary
-form) with the major components (compiler, kernel, and so on) of the
-operating system on which the executable runs, unless that component
-itself accompanies the executable.
-
-If distribution of executable or object code is made by offering
-access to copy from a designated place, then offering equivalent
-access to copy the source code from the same place counts as
-distribution of the source code, even though third parties are not
-compelled to copy the source along with the object code.
-
-<LI>
-
-You may not copy, modify, sublicense, or distribute the Program
-except as expressly provided under this License.  Any attempt
-otherwise to copy, modify, sublicense or distribute the Program is
-void, and will automatically terminate your rights under this License.
-However, parties who have received copies, or rights, from you under
-this License will not have their licenses terminated so long as such
-parties remain in full compliance.
-
-<LI>
-
-You are not required to accept this License, since you have not
-signed it.  However, nothing else grants you permission to modify or
-distribute the Program or its derivative works.  These actions are
-prohibited by law if you do not accept this License.  Therefore, by
-modifying or distributing the Program (or any work based on the
-Program), you indicate your acceptance of this License to do so, and
-all its terms and conditions for copying, distributing or modifying
-the Program or works based on it.
-
-<LI>
-
-Each time you redistribute the Program (or any work based on the
-Program), the recipient automatically receives a license from the
-original licensor to copy, distribute or modify the Program subject to
-these terms and conditions.  You may not impose any further
-restrictions on the recipients' exercise of the rights granted herein.
-You are not responsible for enforcing compliance by third parties to
-this License.
-
-<LI>
-
-If, as a consequence of a court judgment or allegation of patent
-infringement or for any other reason (not limited to patent issues),
-conditions are imposed on you (whether by court order, agreement or
-otherwise) that contradict the conditions of this License, they do not
-excuse you from the conditions of this License.  If you cannot
-distribute so as to satisfy simultaneously your obligations under this
-License and any other pertinent obligations, then as a consequence you
-may not distribute the Program at all.  For example, if a patent
-license would not permit royalty-free redistribution of the Program by
-all those who receive copies directly or indirectly through you, then
-the only way you could satisfy both it and this License would be to
-refrain entirely from distribution of the Program.
-
-If any portion of this section is held invalid or unenforceable under
-any particular circumstance, the balance of the section is intended to
-apply and the section as a whole is intended to apply in other
-circumstances.
-
-It is not the purpose of this section to induce you to infringe any
-patents or other property right claims or to contest validity of any
-such claims; this section has the sole purpose of protecting the
-integrity of the free software distribution system, which is
-implemented by public license practices.  Many people have made
-generous contributions to the wide range of software distributed
-through that system in reliance on consistent application of that
-system; it is up to the author/donor to decide if he or she is willing
-to distribute software through any other system and a licensee cannot
-impose that choice.
-
-This section is intended to make thoroughly clear what is believed to
-be a consequence of the rest of this License.
-
-<LI>
-
-If the distribution and/or use of the Program is restricted in
-certain countries either by patents or by copyrighted interfaces, the
-original copyright holder who places the Program under this License
-may add an explicit geographical distribution limitation excluding
-those countries, so that distribution is permitted only in or among
-countries not thus excluded.  In such case, this License incorporates
-the limitation as if written in the body of this License.
-
-<LI>
-
-The Free Software Foundation may publish revised and/or new versions
-of the General Public License from time to time.  Such new versions will
-be similar in spirit to the present version, but may differ in detail to
-address new problems or concerns.
-
-Each version is given a distinguishing version number.  If the Program
-specifies a version number of this License which applies to it and "any
-later version", you have the option of following the terms and conditions
-either of that version or of any later version published by the Free
-Software Foundation.  If the Program does not specify a version number of
-this License, you may choose any version ever published by the Free Software
-Foundation.
-
-<LI>
-
-If you wish to incorporate parts of the Program into other free
-programs whose distribution conditions are different, write to the author
-to ask for permission.  For software which is copyrighted by the Free
-Software Foundation, write to the Free Software Foundation; we sometimes
-make exceptions for this.  Our decision will be guided by the two goals
-of preserving the free status of all derivatives of our free software and
-of promoting the sharing and reuse of software generally.
-
-
-
-<P><STRONG>NO WARRANTY</STRONG>
-<A NAME="IDX154"></A>
-
-<LI>
-
-BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
-FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
-OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
-PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
-OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
-MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
-TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
-PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
-REPAIR OR CORRECTION.
-
-<LI>
-
-IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
-WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
-REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
-INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
-OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
-TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
-YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
-PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
-POSSIBILITY OF SUCH DAMAGES.
-</OL>
-
-
-<H2>END OF TERMS AND CONDITIONS</H2>
-
-
-
-<H2><A NAME="SEC48" HREF="wget_toc.html#TOC48">How to Apply These Terms to Your New Programs</A></H2>
-
-<P>
-  If you develop a new program, and you want it to be of the greatest
-possible use to the public, the best way to achieve this is to make it
-free software which everyone can redistribute and change under these terms.
-
-
-<P>
-  To do so, attach the following notices to the program.  It is safest
-to attach them to the start of each source file to most effectively
-convey the exclusion of warranty; and each file should have at least
-the "copyright" line and a pointer to where the full notice is found.
-
-
-
-<PRE>
-<VAR>one line to give the program's name and an idea of what it does.</VAR>
-Copyright (C) 19<VAR>yy</VAR>  <VAR>name of author</VAR>
-
-This program is free software; you can redistribute it and/or
-modify it under the terms of the GNU General Public License
-as published by the Free Software Foundation; either version 2
-of the License, or (at your option) any later version.
-
-This program is distributed in the hope that it will be useful,
-but WITHOUT ANY WARRANTY; without even the implied warranty of
-MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-GNU General Public License for more details.
-
-You should have received a copy of the GNU General Public License
-along with this program; if not, write to the Free Software
-Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
-</PRE>
-
-<P>
-Also add information on how to contact you by electronic and paper mail.
-
-
-<P>
-If the program is interactive, make it output a short notice like this
-when it starts in an interactive mode:
-
-
-
-<PRE>
-Gnomovision version 69, Copyright (C) 19<VAR>yy</VAR> <VAR>name of author</VAR>
-Gnomovision comes with ABSOLUTELY NO WARRANTY; for details
-type `show w'.  This is free software, and you are welcome
-to redistribute it under certain conditions; type `show c'
-for details.
-</PRE>
-
-<P>
-The hypothetical commands <SAMP>`show w'</SAMP> and <SAMP>`show c'</SAMP> should show
-the appropriate parts of the General Public License.  Of course, the
-commands you use may be called something other than <SAMP>`show w'</SAMP> and
-<SAMP>`show c'</SAMP>; they could even be mouse-clicks or menu items--whatever
-suits your program.
-
-
-<P>
-You should also get your employer (if you work as a programmer) or your
-school, if any, to sign a "copyright disclaimer" for the program, if
-necessary.  Here is a sample; alter the names:
-
-
-
-<PRE>
-Yoyodyne, Inc., hereby disclaims all copyright
-interest in the program `Gnomovision'
-(which makes passes at compilers) written
-by James Hacker.
-
-<VAR>signature of Ty Coon</VAR>, 1 April 1989
-Ty Coon, President of Vice
-</PRE>
-
-<P>
-This General Public License does not permit incorporating your program into
-proprietary programs.  If your program is a subroutine library, you may
-consider it more useful to permit linking proprietary applications with the
-library.  If this is what you want to do, use the GNU Library General
-Public License instead of this License.
-
-
-
-
-<H2><A NAME="SEC49" HREF="wget_toc.html#TOC49">GNU Free Documentation License</A></H2>
-<P>
-Version 1.1, March 2000
-
-
-
-<PRE>
-Copyright (C) 2000  Free Software Foundation, Inc.
-51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA
-
-Everyone is permitted to copy and distribute verbatim copies
-of this license document, but changing it is not allowed.
-</PRE>
-
-
-<OL>
-<LI>
-
-PREAMBLE
-
-The purpose of this License is to make a manual, textbook, or other
-written document "free" in the sense of freedom: to assure everyone
-the effective freedom to copy and redistribute it, with or without
-modifying it, either commercially or noncommercially.  Secondarily,
-this License preserves for the author and publisher a way to get
-credit for their work, while not being considered responsible for
-modifications made by others.
-
-This License is a kind of "copyleft", which means that derivative
-works of the document must themselves be free in the same sense.  It
-complements the GNU General Public License, which is a copyleft
-license designed for free software.
-
-We have designed this License in order to use it for manuals for free
-software, because free software needs free documentation: a free
-program should come with manuals providing the same freedoms that the
-software does.  But this License is not limited to software manuals;
-it can be used for any textual work, regardless of subject matter or
-whether it is published as a printed book.  We recommend this License
-principally for works whose purpose is instruction or reference.
-
-<LI>
-
-APPLICABILITY AND DEFINITIONS
-
-This License applies to any manual or other work that contains a
-notice placed by the copyright holder saying it can be distributed
-under the terms of this License.  The "Document", below, refers to any
-such manual or work.  Any member of the public is a licensee, and is
-addressed as "you".
-
-A "Modified Version" of the Document means any work containing the
-Document or a portion of it, either copied verbatim, or with
-modifications and/or translated into another language.
-
-A "Secondary Section" is a named appendix or a front-matter section of
-the Document that deals exclusively with the relationship of the
-publishers or authors of the Document to the Document's overall subject
-(or to related matters) and contains nothing that could fall directly
-within that overall subject.  (For example, if the Document is in part a
-textbook of mathematics, a Secondary Section may not explain any
-mathematics.)  The relationship could be a matter of historical
-connection with the subject or with related matters, or of legal,
-commercial, philosophical, ethical or political position regarding
-them.
-
-The "Invariant Sections" are certain Secondary Sections whose titles
-are designated, as being those of Invariant Sections, in the notice
-that says that the Document is released under this License.
-
-The "Cover Texts" are certain short passages of text that are listed,
-as Front-Cover Texts or Back-Cover Texts, in the notice that says that
-the Document is released under this License.
-
-A "Transparent" copy of the Document means a machine-readable copy,
-represented in a format whose specification is available to the
-general public, whose contents can be viewed and edited directly and
-straightforwardly with generic text editors or (for images composed of
-pixels) generic paint programs or (for drawings) some widely available
-drawing editor, and that is suitable for input to text formatters or
-for automatic translation to a variety of formats suitable for input
-to text formatters.  A copy made in an otherwise Transparent file
-format whose markup has been designed to thwart or discourage
-subsequent modification by readers is not Transparent.  A copy that is
-not "Transparent" is called "Opaque".
-
-Examples of suitable formats for Transparent copies include plain
-ASCII without markup, Texinfo input format, LaTeX input format, SGML
-or XML using a publicly available DTD, and standard-conforming simple
-HTML designed for human modification.  Opaque formats include
-PostScript, PDF, proprietary formats that can be read and edited only
-by proprietary word processors, SGML or XML for which the DTD and/or
-processing tools are not generally available, and the
-machine-generated HTML produced by some word processors for output
-purposes only.
-
-The "Title Page" means, for a printed book, the title page itself,
-plus such following pages as are needed to hold, legibly, the material
-this License requires to appear in the title page.  For works in
-formats which do not have any title page as such, "Title Page" means
-the text near the most prominent appearance of the work's title,
-preceding the beginning of the body of the text.
-<LI>
-
-VERBATIM COPYING
-
-You may copy and distribute the Document in any medium, either
-commercially or noncommercially, provided that this License, the
-copyright notices, and the license notice saying this License applies
-to the Document are reproduced in all copies, and that you add no other
-conditions whatsoever to those of this License.  You may not use
-technical measures to obstruct or control the reading or further
-copying of the copies you make or distribute.  However, you may accept
-compensation in exchange for copies.  If you distribute a large enough
-number of copies you must also follow the conditions in section 3.
-
-You may also lend copies, under the same conditions stated above, and
-you may publicly display copies.
-<LI>
-
-COPYING IN QUANTITY
-
-If you publish printed copies of the Document numbering more than 100,
-and the Document's license notice requires Cover Texts, you must enclose
-the copies in covers that carry, clearly and legibly, all these Cover
-Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on
-the back cover.  Both covers must also clearly and legibly identify
-you as the publisher of these copies.  The front cover must present
-the full title with all words of the title equally prominent and
-visible.  You may add other material on the covers in addition.
-Copying with changes limited to the covers, as long as they preserve
-the title of the Document and satisfy these conditions, can be treated
-as verbatim copying in other respects.
-
-If the required texts for either cover are too voluminous to fit
-legibly, you should put the first ones listed (as many as fit
-reasonably) on the actual cover, and continue the rest onto adjacent
-pages.
-
-If you publish or distribute Opaque copies of the Document numbering
-more than 100, you must either include a machine-readable Transparent
-copy along with each Opaque copy, or state in or with each Opaque copy
-a publicly-accessible computer-network location containing a complete
-Transparent copy of the Document, free of added material, which the
-general network-using public has access to download anonymously at no
-charge using public-standard network protocols.  If you use the latter
-option, you must take reasonably prudent steps, when you begin
-distribution of Opaque copies in quantity, to ensure that this
-Transparent copy will remain thus accessible at the stated location
-until at least one year after the last time you distribute an Opaque
-copy (directly or through your agents or retailers) of that edition to
-the public.
-
-It is requested, but not required, that you contact the authors of the
-Document well before redistributing any large number of copies, to give
-them a chance to provide you with an updated version of the Document.
-<LI>
-
-MODIFICATIONS
-
-You may copy and distribute a Modified Version of the Document under
-the conditions of sections 2 and 3 above, provided that you release
-the Modified Version under precisely this License, with the Modified
-Version filling the role of the Document, thus licensing distribution
-and modification of the Modified Version to whoever possesses a copy
-of it.  In addition, you must do these things in the Modified Version:
-
-A. Use in the Title Page (and on the covers, if any) a title distinct
-   from that of the Document, and from those of previous versions
-   (which should, if there were any, be listed in the History section
-   of the Document).  You may use the same title as a previous version
-   if the original publisher of that version gives permission.<BR>
-B. List on the Title Page, as authors, one or more persons or entities
-   responsible for authorship of the modifications in the Modified
-   Version, together with at least five of the principal authors of the
-   Document (all of its principal authors, if it has less than five).<BR>
-C. State on the Title page the name of the publisher of the
-   Modified Version, as the publisher.<BR>
-D. Preserve all the copyright notices of the Document.<BR>
-E. Add an appropriate copyright notice for your modifications
-   adjacent to the other copyright notices.<BR>
-F. Include, immediately after the copyright notices, a license notice
-   giving the public permission to use the Modified Version under the
-   terms of this License, in the form shown in the Addendum below.<BR>
-G. Preserve in that license notice the full lists of Invariant Sections
-   and required Cover Texts given in the Document's license notice.<BR>
-H. Include an unaltered copy of this License.<BR>
-I. Preserve the section entitled "History", and its title, and add to
-   it an item stating at least the title, year, new authors, and
-   publisher of the Modified Version as given on the Title Page.  If
-   there is no section entitled "History" in the Document, create one
-   stating the title, year, authors, and publisher of the Document as
-   given on its Title Page, then add an item describing the Modified
-   Version as stated in the previous sentence.<BR>
-J. Preserve the network location, if any, given in the Document for
-   public access to a Transparent copy of the Document, and likewise
-   the network locations given in the Document for previous versions
-   it was based on.  These may be placed in the "History" section.
-   You may omit a network location for a work that was published at
-   least four years before the Document itself, or if the original
-   publisher of the version it refers to gives permission.<BR>
-K. In any section entitled "Acknowledgements" or "Dedications",
-   preserve the section's title, and preserve in the section all the
-   substance and tone of each of the contributor acknowledgements
-   and/or dedications given therein.<BR>
-L. Preserve all the Invariant Sections of the Document,
-   unaltered in their text and in their titles.  Section numbers
-   or the equivalent are not considered part of the section titles.<BR>
-M. Delete any section entitled "Endorsements".  Such a section
-   may not be included in the Modified Version.<BR>
-N. Do not retitle any existing section as "Endorsements"
-   or to conflict in title with any Invariant Section.<BR>
-If the Modified Version includes new front-matter sections or
-appendices that qualify as Secondary Sections and contain no material
-copied from the Document, you may at your option designate some or all
-of these sections as invariant.  To do this, add their titles to the
-list of Invariant Sections in the Modified Version's license notice.
-These titles must be distinct from any other section titles.
-
-You may add a section entitled "Endorsements", provided it contains
-nothing but endorsements of your Modified Version by various
-parties--for example, statements of peer review or that the text has
-been approved by an organization as the authoritative definition of a
-standard.
-
-You may add a passage of up to five words as a Front-Cover Text, and a
-passage of up to 25 words as a Back-Cover Text, to the end of the list
-of Cover Texts in the Modified Version.  Only one passage of
-Front-Cover Text and one of Back-Cover Text may be added by (or
-through arrangements made by) any one entity.  If the Document already
-includes a cover text for the same cover, previously added by you or
-by arrangement made by the same entity you are acting on behalf of,
-you may not add another; but you may replace the old one, on explicit
-permission from the previous publisher that added the old one.
-
-The author(s) and publisher(s) of the Document do not by this License
-give permission to use their names for publicity for or to assert or
-imply endorsement of any Modified Version.
-<LI>
-
-COMBINING DOCUMENTS
-
-You may combine the Document with other documents released under this
-License, under the terms defined in section 4 above for modified
-versions, provided that you include in the combination all of the
-Invariant Sections of all of the original documents, unmodified, and
-list them all as Invariant Sections of your combined work in its
-license notice.
-
-The combined work need only contain one copy of this License, and
-multiple identical Invariant Sections may be replaced with a single
-copy.  If there are multiple Invariant Sections with the same name but
-different contents, make the title of each such section unique by
-adding at the end of it, in parentheses, the name of the original
-author or publisher of that section if known, or else a unique number.
-Make the same adjustment to the section titles in the list of
-Invariant Sections in the license notice of the combined work.
-
-In the combination, you must combine any sections entitled "History"
-in the various original documents, forming one section entitled
-"History"; likewise combine any sections entitled "Acknowledgements",
-and any sections entitled "Dedications".  You must delete all sections
-entitled "Endorsements."
-<LI>
-
-COLLECTIONS OF DOCUMENTS
-
-You may make a collection consisting of the Document and other documents
-released under this License, and replace the individual copies of this
-License in the various documents with a single copy that is included in
-the collection, provided that you follow the rules of this License for
-verbatim copying of each of the documents in all other respects.
-
-You may extract a single document from such a collection, and distribute
-it individually under this License, provided you insert a copy of this
-License into the extracted document, and follow this License in all
-other respects regarding verbatim copying of that document.
-<LI>
-
-AGGREGATION WITH INDEPENDENT WORKS
-
-A compilation of the Document or its derivatives with other separate
-and independent documents or works, in or on a volume of a storage or
-distribution medium, does not as a whole count as a Modified Version
-of the Document, provided no compilation copyright is claimed for the
-compilation.  Such a compilation is called an "aggregate", and this
-License does not apply to the other self-contained works thus compiled
-with the Document, on account of their being thus compiled, if they
-are not themselves derivative works of the Document.
-
-If the Cover Text requirement of section 3 is applicable to these
-copies of the Document, then if the Document is less than one quarter
-of the entire aggregate, the Document's Cover Texts may be placed on
-covers that surround only the Document within the aggregate.
-Otherwise they must appear on covers around the whole aggregate.
-<LI>
-
-TRANSLATION
-
-Translation is considered a kind of modification, so you may
-distribute translations of the Document under the terms of section 4.
-Replacing Invariant Sections with translations requires special
-permission from their copyright holders, but you may include
-translations of some or all Invariant Sections in addition to the
-original versions of these Invariant Sections.  You may include a
-translation of this License provided that you also include the
-original English version of this License.  In case of a disagreement
-between the translation and the original English version of this
-License, the original English version will prevail.
-<LI>
-
-TERMINATION
-
-You may not copy, modify, sublicense, or distribute the Document except
-as expressly provided for under this License.  Any other attempt to
-copy, modify, sublicense or distribute the Document is void, and will
-automatically terminate your rights under this License.  However,
-parties who have received copies, or rights, from you under this
-License will not have their licenses terminated so long as such
-parties remain in full compliance.
-<LI>
-
-FUTURE REVISIONS OF THIS LICENSE
-
-The Free Software Foundation may publish new, revised versions
-of the GNU Free Documentation License from time to time.  Such new
-versions will be similar in spirit to the present version, but may
-differ in detail to address new problems or concerns.  See
-http://www.gnu.org/copyleft/.
-
-Each version of the License is given a distinguishing version number.
-If the Document specifies that a particular numbered version of this
-License "or any later version" applies to it, you have the option of
-following the terms and conditions either of that specified version or
-of any later version that has been published (not as a draft) by the
-Free Software Foundation.  If the Document does not specify a version
-number of this License, you may choose any version ever published (not
-as a draft) by the Free Software Foundation.
-
-</OL>
-
-
-
-<H2><A NAME="SEC50" HREF="wget_toc.html#TOC50">ADDENDUM: How to use this License for your documents</A></H2>
-
-<P>
-To use this License in a document you have written, include a copy of
-the License in the document and put the following copyright and
-license notices just after the title page:
-
-
-
-<PRE>
-
-  Copyright (C)  <VAR>year</VAR>  <VAR>your name</VAR>.
-  Permission is granted to copy, distribute and/or modify this document
-  under the terms of the GNU Free Documentation License, Version 1.1
-  or any later version published by the Free Software Foundation;
-  with the Invariant Sections being <VAR>list their titles</VAR>, with the
-  Front-Cover Texts being <VAR>list</VAR>, and with the Back-Cover Texts being <VAR>list</VAR>.
-  A copy of the license is included in the section entitled ``GNU
-  Free Documentation License''.
-</PRE>
-
-<P>
-If you have no Invariant Sections, write "with no Invariant Sections"
-instead of saying which ones are invariant.  If you have no
-Front-Cover Texts, write "no Front-Cover Texts" instead of
-"Front-Cover Texts being <VAR>list</VAR>"; likewise for Back-Cover Texts.
-
-
-<P>
-If your document contains nontrivial examples of program code, we
-recommend releasing these examples in parallel under your choice of
-free software license, such as the GNU General Public License,
-to permit their use in free software.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_9.html">previous</A>, <A HREF="wget_11.html">next</A>, <A HREF="wget_11.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_chapter/wget_11.html
===================================================================
RCS file: manual/wget-1.8.1/html_chapter/wget_11.html
diff -N manual/wget-1.8.1/html_chapter/wget_11.html
--- manual/wget-1.8.1/html_chapter/wget_11.html 19 Oct 2003 23:07:43 -0000      1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,283 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Concept Index</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_10.html">previous</A>, next, last section, <A HREF="wget_toc.html">table of contents</A>.
-<P><HR><P>
-
-
-<H1><A NAME="SEC51" HREF="wget_toc.html#TOC51">Concept Index</A></H1>
-<P>
-Jump to:
-<A HREF="#cindex_.">.</A>
--
-<A HREF="#cindex_a">a</A>
--
-<A HREF="#cindex_b">b</A>
--
-<A HREF="#cindex_c">c</A>
--
-<A HREF="#cindex_d">d</A>
--
-<A HREF="#cindex_e">e</A>
--
-<A HREF="#cindex_f">f</A>
--
-<A HREF="#cindex_g">g</A>
--
-<A HREF="#cindex_h">h</A>
--
-<A HREF="#cindex_i">i</A>
--
-<A HREF="#cindex_l">l</A>
--
-<A HREF="#cindex_m">m</A>
--
-<A HREF="#cindex_n">n</A>
--
-<A HREF="#cindex_o">o</A>
--
-<A HREF="#cindex_p">p</A>
--
-<A HREF="#cindex_q">q</A>
--
-<A HREF="#cindex_r">r</A>
--
-<A HREF="#cindex_s">s</A>
--
-<A HREF="#cindex_t">t</A>
--
-<A HREF="#cindex_u">u</A>
--
-<A HREF="#cindex_v">v</A>
--
-<A HREF="#cindex_w">w</A>
-<P>
-<H2><A NAME="cindex_.">.</A></H2>
-<DIR>
-<LI><A HREF="wget_2.html#IDX49">.html extension</A>
-<LI><A HREF="wget_2.html#IDX70">.listing files, removing</A>
-<LI><A HREF="wget_6.html#IDX123">.netrc</A>
-<LI><A HREF="wget_6.html#IDX121">.wgetrc</A>
-</DIR>
-<H2><A NAME="cindex_a">a</A></H2>
-<DIR>
-<LI><A HREF="wget_4.html#IDX104">accept directories</A>
-<LI><A HREF="wget_4.html#IDX93">accept suffixes</A>
-<LI><A HREF="wget_4.html#IDX92">accept wildcards</A>
-<LI><A HREF="wget_2.html#IDX14">append to log</A>
-<LI><A HREF="wget_2.html#IDX5">arguments</A>
-<LI><A HREF="wget_2.html#IDX52">authentication</A>
-</DIR>
-<H2><A NAME="cindex_b">b</A></H2>
-<DIR>
-<LI><A HREF="wget_2.html#IDX79">backing up converted files</A>
-<LI><A HREF="wget_2.html#IDX20">base for relative links in input file</A>
-<LI><A HREF="wget_2.html#IDX21">bind() address</A>
-<LI><A HREF="wget_8.html#IDX140">bug reports</A>
-<LI><A HREF="wget_8.html#IDX138">bugs</A>
-</DIR>
-<H2><A NAME="cindex_c">c</A></H2>
-<DIR>
-<LI><A HREF="wget_2.html#IDX54">cache</A>
-<LI><A HREF="wget_2.html#IDX22">client IP address</A>
-<LI><A HREF="wget_2.html#IDX27">clobbering, file</A>
-<LI><A HREF="wget_2.html#IDX4">command line</A>
-<LI><A HREF="wget_2.html#IDX60">Content-Length, ignore</A>
-<LI><A HREF="wget_2.html#IDX30">continue retrieval</A>
-<LI><A HREF="wget_9.html#IDX149">contributors</A>
-<LI><A HREF="wget_2.html#IDX77">conversion of links</A>
-<LI><A HREF="wget_2.html#IDX55">cookies</A>
-<LI><A HREF="wget_2.html#IDX57">cookies, loading</A>
-<LI><A HREF="wget_2.html#IDX59">cookies, saving</A>
-<LI><A HREF="wget_10.html#IDX150">copying</A>
-<LI><A HREF="wget_2.html#IDX47">cut directories</A>
-</DIR>
-<H2><A NAME="cindex_d">d</A></H2>
-<DIR>
-<LI><A HREF="wget_2.html#IDX15">debug</A>
-<LI><A HREF="wget_2.html#IDX75">delete after retrieval</A>
-<LI><A HREF="wget_4.html#IDX100">directories</A>
-<LI><A HREF="wget_4.html#IDX105">directories, exclude</A>
-<LI><A HREF="wget_4.html#IDX102">directories, include</A>
-<LI><A HREF="wget_4.html#IDX101">directory limits</A>
-<LI><A HREF="wget_2.html#IDX48">directory prefix</A>
-<LI><A HREF="wget_2.html#IDX34">dot style</A>
-<LI><A HREF="wget_2.html#IDX28">downloading multiple times</A>
-</DIR>
-<H2><A NAME="cindex_e">e</A></H2>
-<DIR>
-<LI><A HREF="wget_7.html#IDX130">examples</A>
-<LI><A HREF="wget_4.html#IDX106">exclude directories</A>
-<LI><A HREF="wget_2.html#IDX11">execute wgetrc command</A>
-</DIR>
-<H2><A NAME="cindex_f">f</A></H2>
-<DIR>
-<LI><A HREF="wget_1.html#IDX2">features</A>
-<LI><A HREF="wget_2.html#IDX76">filling proxy cache</A>
-<LI><A HREF="wget_2.html#IDX82">follow FTP links</A>
-<LI><A HREF="wget_4.html#IDX110">following ftp links</A>
-<LI><A HREF="wget_4.html#IDX88">following links</A>
-<LI><A HREF="wget_2.html#IDX19">force html</A>
-<LI><A HREF="wget_10.html#IDX153">free software</A>
-<LI><A HREF="wget_5.html#IDX118">ftp time-stamping</A>
-</DIR>
-<H2><A NAME="cindex_g">g</A></H2>
-<DIR>
-<LI><A HREF="wget_10.html#IDX152">GFDL</A>
-<LI><A HREF="wget_2.html#IDX71">globbing, toggle</A>
-<LI><A HREF="wget_10.html#IDX151">GPL</A>
-</DIR>
-<H2><A NAME="cindex_h">h</A></H2>
-<DIR>
-<LI><A HREF="wget_8.html#IDX144">hangup</A>
-<LI><A HREF="wget_2.html#IDX62">header, add</A>
-<LI><A HREF="wget_4.html#IDX90">hosts, spanning</A>
-<LI><A HREF="wget_2.html#IDX51">http password</A>
-<LI><A HREF="wget_2.html#IDX66">http referer</A>
-<LI><A HREF="wget_5.html#IDX117">http time-stamping</A>
-<LI><A HREF="wget_2.html#IDX50">http user</A>
-</DIR>
-<H2><A NAME="cindex_i">i</A></H2>
-<DIR>
-<LI><A HREF="wget_2.html#IDX61">ignore length</A>
-<LI><A HREF="wget_4.html#IDX103">include directories</A>
-<LI><A HREF="wget_2.html#IDX31">incomplete downloads</A>
-<LI><A HREF="wget_5.html#IDX114">incremental updating</A>
-<LI><A HREF="wget_2.html#IDX18">input-file</A>
-<LI><A HREF="wget_2.html#IDX3">invoking</A>
-<LI><A HREF="wget_2.html#IDX23">IP address, client</A>
-</DIR>
-<H2><A NAME="cindex_l">l</A></H2>
-<DIR>
-<LI><A HREF="wget_8.html#IDX135">latest version</A>
-<LI><A HREF="wget_2.html#IDX78">link conversion</A>
-<LI><A HREF="wget_4.html#IDX87">links</A>
-<LI><A HREF="wget_8.html#IDX137">list</A>
-<LI><A HREF="wget_2.html#IDX56">loading cookies</A>
-<LI><A HREF="wget_6.html#IDX125">location of wgetrc</A>
-<LI><A HREF="wget_2.html#IDX13">log file</A>
-</DIR>
-<H2><A NAME="cindex_m">m</A></H2>
-<DIR>
-<LI><A HREF="wget_8.html#IDX136">mailing list</A>
-<LI><A HREF="wget_7.html#IDX132">mirroring</A>
-</DIR>
-<H2><A NAME="cindex_n">n</A></H2>
-<DIR>
-<LI><A HREF="wget_4.html#IDX108">no parent</A>
-<LI><A HREF="wget_10.html#IDX154">no warranty</A>
-<LI><A HREF="wget_2.html#IDX29">no-clobber</A>
-<LI><A HREF="wget_2.html#IDX6">nohup</A>
-<LI><A HREF="wget_2.html#IDX26">number of retries</A>
-</DIR>
-<H2><A NAME="cindex_o">o</A></H2>
-<DIR>
-<LI><A HREF="wget_8.html#IDX142">operating systems</A>
-<LI><A HREF="wget_2.html#IDX9">option syntax</A>
-<LI><A HREF="wget_2.html#IDX12">output file</A>
-<LI><A HREF="wget_1.html#IDX1">overview</A>
-</DIR>
-<H2><A NAME="cindex_p">p</A></H2>
-<DIR>
-<LI><A HREF="wget_2.html#IDX80">page requisites</A>
-<LI><A HREF="wget_2.html#IDX72">passive ftp</A>
-<LI><A HREF="wget_2.html#IDX39">pause</A>
-<LI><A HREF="wget_8.html#IDX141">portability</A>
-<LI><A HREF="wget_2.html#IDX33">progress indicator</A>
-<LI><A HREF="wget_8.html#IDX134">proxies</A>
-<LI><A HREF="wget_2.html#IDX45">proxy</A>, <A HREF="wget_2.html#IDX53">proxy</A>
-<LI><A HREF="wget_2.html#IDX65">proxy authentication</A>
-<LI><A HREF="wget_2.html#IDX74">proxy filling</A>
-<LI><A HREF="wget_2.html#IDX64">proxy password</A>
-<LI><A HREF="wget_2.html#IDX63">proxy user</A>
-</DIR>
-<H2><A NAME="cindex_q">q</A></H2>
-<DIR>
-<LI><A HREF="wget_2.html#IDX16">quiet</A>
-<LI><A HREF="wget_2.html#IDX46">quota</A>
-</DIR>
-<H2><A NAME="cindex_r">r</A></H2>
-<DIR>
-<LI><A HREF="wget_2.html#IDX44">random wait</A>
-<LI><A HREF="wget_3.html#IDX84">recursion</A>
-<LI><A HREF="wget_3.html#IDX86">recursive retrieval</A>
-<LI><A HREF="wget_7.html#IDX131">redirecting output</A>
-<LI><A HREF="wget_2.html#IDX67">referer, http</A>
-<LI><A HREF="wget_4.html#IDX107">reject directories</A>
-<LI><A HREF="wget_4.html#IDX97">reject suffixes</A>
-<LI><A HREF="wget_4.html#IDX96">reject wildcards</A>
-<LI><A HREF="wget_4.html#IDX109">relative links</A>
-<LI><A HREF="wget_8.html#IDX139">reporting bugs</A>
-<LI><A HREF="wget_2.html#IDX81">required images, downloading</A>
-<LI><A HREF="wget_2.html#IDX32">resume download</A>
-<LI><A HREF="wget_2.html#IDX24">retries</A>
-<LI><A HREF="wget_2.html#IDX41">retries, waiting between</A>
-<LI><A HREF="wget_3.html#IDX85">retrieving</A>
-<LI><A HREF="wget_9.html#IDX145">robots</A>
-<LI><A HREF="wget_9.html#IDX146">robots.txt</A>
-</DIR>
-<H2><A NAME="cindex_s">s</A></H2>
-<DIR>
-<LI><A HREF="wget_6.html#IDX129">sample wgetrc</A>
-<LI><A HREF="wget_2.html#IDX58">saving cookies</A>
-<LI><A HREF="wget_9.html#IDX148">security</A>
-<LI><A HREF="wget_9.html#IDX147">server maintenance</A>
-<LI><A HREF="wget_2.html#IDX35">server response, print</A>
-<LI><A HREF="wget_2.html#IDX68">server response, save</A>
-<LI><A HREF="wget_8.html#IDX143">signal handling</A>
-<LI><A HREF="wget_4.html#IDX89">spanning hosts</A>
-<LI><A HREF="wget_2.html#IDX37">spider</A>
-<LI><A HREF="wget_6.html#IDX122">startup</A>
-<LI><A HREF="wget_6.html#IDX119">startup file</A>
-<LI><A HREF="wget_4.html#IDX95">suffixes, accept</A>
-<LI><A HREF="wget_4.html#IDX99">suffixes, reject</A>
-<LI><A HREF="wget_2.html#IDX73">symbolic links, retrieving</A>
-<LI><A HREF="wget_2.html#IDX10">syntax of options</A>
-<LI><A HREF="wget_6.html#IDX127">syntax of wgetrc</A>
-</DIR>
-<H2><A NAME="cindex_t">t</A></H2>
-<DIR>
-<LI><A HREF="wget_2.html#IDX83">tag-based recursive pruning</A>
-<LI><A HREF="wget_5.html#IDX111">time-stamping</A>
-<LI><A HREF="wget_5.html#IDX115">time-stamping usage</A>
-<LI><A HREF="wget_2.html#IDX38">timeout</A>
-<LI><A HREF="wget_5.html#IDX112">timestamping</A>
-<LI><A HREF="wget_2.html#IDX25">tries</A>
-<LI><A HREF="wget_4.html#IDX91">types of files</A>
-</DIR>
-<H2><A NAME="cindex_u">u</A></H2>
-<DIR>
-<LI><A HREF="wget_5.html#IDX113">updating the archives</A>
-<LI><A HREF="wget_2.html#IDX7">URL</A>
-<LI><A HREF="wget_2.html#IDX8">URL syntax</A>
-<LI><A HREF="wget_5.html#IDX116">usage, time-stamping</A>
-<LI><A HREF="wget_2.html#IDX69">user-agent</A>
-</DIR>
-<H2><A NAME="cindex_v">v</A></H2>
-<DIR>
-<LI><A HREF="wget_8.html#IDX133">various</A>
-<LI><A HREF="wget_2.html#IDX17">verbose</A>
-</DIR>
-<H2><A NAME="cindex_w">w</A></H2>
-<DIR>
-<LI><A HREF="wget_2.html#IDX40">wait</A>
-<LI><A HREF="wget_2.html#IDX43">wait, random</A>
-<LI><A HREF="wget_2.html#IDX42">waiting between retries</A>
-<LI><A HREF="wget_2.html#IDX36">Wget as spider</A>
-<LI><A HREF="wget_6.html#IDX120">wgetrc</A>
-<LI><A HREF="wget_6.html#IDX128">wgetrc commands</A>
-<LI><A HREF="wget_6.html#IDX124">wgetrc location</A>
-<LI><A HREF="wget_6.html#IDX126">wgetrc syntax</A>
-<LI><A HREF="wget_4.html#IDX94">wildcards, accept</A>
-<LI><A HREF="wget_4.html#IDX98">wildcards, reject</A>
-</DIR>
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_10.html">previous</A>, next, last section, <A HREF="wget_toc.html">table of contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_chapter/wget_2.html
===================================================================
RCS file: manual/wget-1.8.1/html_chapter/wget_2.html
diff -N manual/wget-1.8.1/html_chapter/wget_2.html
--- manual/wget-1.8.1/html_chapter/wget_2.html  19 Oct 2003 23:07:43 -0000      1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,1324 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Invoking</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_1.html">previous</A>, <A HREF="wget_3.html">next</A>, <A HREF="wget_11.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-<P><HR><P>
-
-
-<H1><A NAME="SEC2" HREF="wget_toc.html#TOC2">Invoking</A></H1>
-<P>
-<A NAME="IDX3"></A>
-<A NAME="IDX4"></A>
-<A NAME="IDX5"></A>
-<A NAME="IDX6"></A>
-
-
-<P>
-By default, Wget is very simple to invoke.  The basic syntax is:
-
-
-
-<PRE>
-wget [<VAR>option</VAR>]... [<VAR>URL</VAR>]...
-</PRE>
-
-<P>
-Wget will simply download all the URLs specified on the command
-line.  <VAR>URL</VAR> is a <EM>Uniform Resource Locator</EM>, as defined below.
-
-
-<P>
-However, you may wish to change some of the default parameters of
-Wget.  You can do it two ways: permanently, adding the appropriate
-command to <TT>`.wgetrc'</TT> (see section <A HREF="wget_6.html#SEC24">Startup File</A>), or specifying it on
-the command line.
-
-
-
-
-<H2><A NAME="SEC3" HREF="wget_toc.html#TOC3">URL Format</A></H2>
-<P>
-<A NAME="IDX7"></A>
-<A NAME="IDX8"></A>
-
-
-<P>
-<EM>URL</EM> is an acronym for Uniform Resource Locator.  A uniform
-resource locator is a compact string representation for a resource
-available via the Internet.  Wget recognizes the URL syntax as per
-RFC1738.  This is the most widely used form (square brackets denote
-optional parts):
-
-
-
-<PRE>
-http://host[:port]/directory/file
-ftp://host[:port]/directory/file
-</PRE>
-
-<P>
-You can also encode your username and password within a URL:
-
-
-
-<PRE>
-ftp://user:password@host/path
-http://user:password@host/path
-</PRE>
-
-<P>
-Either <VAR>user</VAR> or <VAR>password</VAR>, or both, may be left out.  If you
-leave out either the HTTP username or password, no authentication
-will be sent.  If you leave out the FTP username, <SAMP>`anonymous'</SAMP>
-will be used.  If you leave out the FTP password, your email
-address will be supplied as a default password.<A NAME="DOCF1" HREF="wget_foot.html#FOOT1">(1)</A>
-
-
-<P>
-You can encode unsafe characters in a URL as <SAMP>`%xy'</SAMP>, <CODE>xy</CODE>
-being the hexadecimal representation of the character's ASCII
-value.  Some common unsafe characters include <SAMP>`%'</SAMP> (quoted as
-<SAMP>`%25'</SAMP>), <SAMP>`:'</SAMP> (quoted as <SAMP>`%3A'</SAMP>), and <SAMP>`@'</SAMP> (quoted as
-<SAMP>`%40'</SAMP>).  Refer to RFC1738 for a comprehensive list of unsafe
-characters.
-
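As a quick illustration of the `%xy' quoting described above (an editorial sketch, not part of the original manual; it relies only on POSIX shell printf, which converts a leading-quote character operand to its numeric code):

```shell
# Percent-encode a single ASCII character as %XY, where XY is the
# hexadecimal value of the character's code -- the scheme described above.
pct() { printf '%%%02X' "'$1"; }

pct '@'   # prints %40
pct ':'   # prints %3A
pct '%'   # prints %25
```

So a password containing `@' would be written into a URL as, e.g., `p%40ss'.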
-
-<P>
-Wget also supports the <CODE>type</CODE> feature for FTP URLs.  By
-default, FTP documents are retrieved in the binary mode (type
-<SAMP>`i'</SAMP>), which means that they are downloaded unchanged.  Another
-useful mode is the <SAMP>`a'</SAMP> (<EM>ASCII</EM>) mode, which converts the line
-delimiters between the different operating systems, and is thus useful
-for text files.  Here is an example:
-
-
-
-<PRE>
-ftp://host/directory/file;type=a
-</PRE>
-
-<P>
-Two alternative variants of URL specification are also supported,
-because of historical (hysterical?) reasons and their widespread use.
-
-
-<P>
-FTP-only syntax (supported by <CODE>NcFTP</CODE>):
-
-<PRE>
-host:/dir/file
-</PRE>
-
-<P>
-HTTP-only syntax (introduced by <CODE>Netscape</CODE>):
-
-<PRE>
-host[:port]/dir/file
-</PRE>
-
-<P>
-These two alternative forms are deprecated, and may cease being
-supported in the future.
-
-
-<P>
-If you do not understand the difference between these notations, or do
-not know which one to use, just use the plain ordinary format you use
-with your favorite browser, like <CODE>Lynx</CODE> or <CODE>Netscape</CODE>.
-
-
-
-
-<H2><A NAME="SEC4" HREF="wget_toc.html#TOC4">Option Syntax</A></H2>
-<P>
-<A NAME="IDX9"></A>
-<A NAME="IDX10"></A>
-
-
-<P>
-Since Wget uses GNU getopt to process its arguments, every option has a
-short form and a long form.  Long options are more convenient to
-remember, but take time to type.  You may freely mix different option
-styles, or specify options after the command-line arguments.  Thus you
-may write:
-
-
-
-<PRE>
-wget -r --tries=10 http://fly.srk.fer.hr/ -o log
-</PRE>
-
-<P>
-The space between the option accepting an argument and the argument may
-be omitted.  Instead of <SAMP>`-o log'</SAMP> you can write <SAMP>`-olog'</SAMP>.
-
-
-<P>
-You may put several options that do not require arguments together,
-like:
-
-
-
-<PRE>
-wget -drc <VAR>URL</VAR>
-</PRE>
-
-<P>
-This is completely equivalent to:
-
-
-
-<PRE>
-wget -d -r -c <VAR>URL</VAR>
-</PRE>
-
-<P>
-Since the options can be specified after the arguments, you may
-terminate them with <SAMP>`--'</SAMP>.  So the following will try to download
-URL <SAMP>`-x'</SAMP>, reporting failure to <TT>`log'</TT>:
-
-
-
-<PRE>
-wget -o log -- -x
-</PRE>
-
-<P>
-The options that accept comma-separated lists all respect the convention
-that specifying an empty list clears its value.  This can be useful to
-clear the <TT>`.wgetrc'</TT> settings.  For instance, if your <TT>`.wgetrc'</TT>
-sets <CODE>exclude_directories</CODE> to <TT>`/cgi-bin'</TT>, the following
-example will first reset it, and then set it to exclude <TT>`/~nobody'</TT>
-and <TT>`/~somebody'</TT>.  You can also clear the lists in <TT>`.wgetrc'</TT>
-(see section <A HREF="wget_6.html#SEC26">Wgetrc Syntax</A>).
-
-
-
-<PRE>
-wget -X '' -X /~nobody,/~somebody
-</PRE>
-
-
-
-<H2><A NAME="SEC5" HREF="wget_toc.html#TOC5">Basic Startup Options</A></H2>
-
-<DL COMPACT>
-
-<DT><SAMP>`-V'</SAMP>
-<DD>
-<DT><SAMP>`--version'</SAMP>
-<DD>
-Display the version of Wget.
-
-<DT><SAMP>`-h'</SAMP>
-<DD>
-<DT><SAMP>`--help'</SAMP>
-<DD>
-Print a help message describing all of Wget's command-line options.
-
-<DT><SAMP>`-b'</SAMP>
-<DD>
-<DT><SAMP>`--background'</SAMP>
-<DD>
-Go to background immediately after startup.  If no output file is
-specified via the <SAMP>`-o'</SAMP> option, output is redirected to <TT>`wget-log'</TT>.
-
-<A NAME="IDX11"></A>
-<DT><SAMP>`-e <VAR>command</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--execute <VAR>command</VAR>'</SAMP>
-<DD>
-Execute <VAR>command</VAR> as if it were a part of <TT>`.wgetrc'</TT>
-(see section <A HREF="wget_6.html#SEC24">Startup File</A>).  A command thus invoked will be executed
-<EM>after</EM> the commands in <TT>`.wgetrc'</TT>, thus taking precedence over
-them.
-</DL>
-
-
-
-<H2><A NAME="SEC6" HREF="wget_toc.html#TOC6">Logging and Input File Options</A></H2>
-
-<DL COMPACT>
-
-<DT><SAMP>`-o <VAR>logfile</VAR>'</SAMP>
-<DD>
-<A NAME="IDX12"></A>
- <A NAME="IDX13"></A>
- 
-<DT><SAMP>`--output-file=<VAR>logfile</VAR>'</SAMP>
-<DD>
-Log all messages to <VAR>logfile</VAR>.  The messages are normally reported
-to standard error.
-
-<A NAME="IDX14"></A>
-<DT><SAMP>`-a <VAR>logfile</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--append-output=<VAR>logfile</VAR>'</SAMP>
-<DD>
-Append to <VAR>logfile</VAR>.  This is the same as <SAMP>`-o'</SAMP>, only it appends
-to <VAR>logfile</VAR> instead of overwriting the old log file.  If
-<VAR>logfile</VAR> does not exist, a new file is created.
-
-<A NAME="IDX15"></A>
-<DT><SAMP>`-d'</SAMP>
-<DD>
-<DT><SAMP>`--debug'</SAMP>
-<DD>
-Turn on debug output, meaning various information important to the
-developers of Wget if it does not work properly.  Your system
-administrator may have chosen to compile Wget without debug support, in
-which case <SAMP>`-d'</SAMP> will not work.  Please note that compiling with
-debug support is always safe--Wget compiled with the debug support will
-<EM>not</EM> print any debug info unless requested with <SAMP>`-d'</SAMP>.
-See section <A HREF="wget_8.html#SEC37">Reporting Bugs</A>, for more information on how to use <SAMP>`-d'</SAMP> for
-sending bug reports.
-
-<A NAME="IDX16"></A>
-<DT><SAMP>`-q'</SAMP>
-<DD>
-<DT><SAMP>`--quiet'</SAMP>
-<DD>
-Turn off Wget's output.
-
-<A NAME="IDX17"></A>
-<DT><SAMP>`-v'</SAMP>
-<DD>
-<DT><SAMP>`--verbose'</SAMP>
-<DD>
-Turn on verbose output, with all the available data.  The default output
-is verbose.
-
-<DT><SAMP>`-nv'</SAMP>
-<DD>
-<DT><SAMP>`--non-verbose'</SAMP>
-<DD>
-Non-verbose output--turn off verbose without being completely quiet
-(use <SAMP>`-q'</SAMP> for that), which means that error messages and basic
-information still get printed.
-
-<A NAME="IDX18"></A>
-<DT><SAMP>`-i <VAR>file</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--input-file=<VAR>file</VAR>'</SAMP>
-<DD>
-Read URLs from <VAR>file</VAR>, in which case no URLs need to be on
-the command line.  If there are URLs both on the command line and
-in an input file, those on the command line will be the first ones to
-be retrieved.  The <VAR>file</VAR> need not be an HTML document (but no
-harm if it is)---it is enough if the URLs are just listed
-sequentially.
-
-However, if you specify <SAMP>`--force-html'</SAMP>, the document will be
-regarded as <SAMP>`html'</SAMP>.  In that case you may have problems with
-relative links, which you can solve either by adding <CODE>&#60;base href="<VAR>url</VAR>"&#62;</CODE> to the documents or by specifying
-<SAMP>`--base=<VAR>url</VAR>'</SAMP> on the command line.
-
-<A NAME="IDX19"></A>
-<DT><SAMP>`-F'</SAMP>
-<DD>
-<DT><SAMP>`--force-html'</SAMP>
-<DD>
-When input is read from a file, force it to be treated as an HTML
-file.  This enables you to retrieve relative links from existing
-HTML files on your local disk, by adding <CODE>&#60;base href="<VAR>url</VAR>"&#62;</CODE> to HTML, or using the <SAMP>`--base'</SAMP> command-line
-option.
-
-<A NAME="IDX20"></A>
-<DT><SAMP>`-B <VAR>URL</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--base=<VAR>URL</VAR>'</SAMP>
-<DD>
-When used in conjunction with <SAMP>`-F'</SAMP>, prepends <VAR>URL</VAR> to relative
-links in the file specified by <SAMP>`-i'</SAMP>.
-</DL>
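A minimal sketch of the `-i' input file described above (editorial addition; the `example.com' URLs are placeholders, so the actual wget invocation is left commented out). The file is just URLs listed one per line; no HTML markup is needed:

```shell
# Build a plain-text URL list and hand it to Wget with -i.
cat > urls.txt <<'EOF'
http://example.com/a.html
http://example.com/b.html
EOF

# wget -i urls.txt        # placeholder URLs, so not actually run here
wc -l < urls.txt          # count the URLs in the list
rm -f urls.txt
```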
-
-
-
-<H2><A NAME="SEC7" HREF="wget_toc.html#TOC7">Download Options</A></H2>
-
-<DL COMPACT>
-
-<DT><SAMP>`--bind-address=<VAR>ADDRESS</VAR>'</SAMP>
-<DD>
-<A NAME="IDX21"></A>
- <A NAME="IDX22"></A>
- <A NAME="IDX23"></A>
- 
-When making client TCP/IP connections, <CODE>bind()</CODE> to <VAR>ADDRESS</VAR> on
-the local machine.  <VAR>ADDRESS</VAR> may be specified as a hostname or IP
-address.  This option can be useful if your machine is bound to multiple
-IPs.
-
-<A NAME="IDX24"></A>
-<A NAME="IDX25"></A>
-<A NAME="IDX26"></A>
-<DT><SAMP>`-t <VAR>number</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--tries=<VAR>number</VAR>'</SAMP>
-<DD>
-Set number of retries to <VAR>number</VAR>.  Specify 0 or <SAMP>`inf'</SAMP> for
-infinite retrying.
-
-<DT><SAMP>`-O <VAR>file</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--output-document=<VAR>file</VAR>'</SAMP>
-<DD>
-The documents will not be written to the appropriate files, but all will
-be concatenated together and written to <VAR>file</VAR>.  If <VAR>file</VAR>
-already exists, it will be overwritten.  If the <VAR>file</VAR> is <SAMP>`-'</SAMP>,
-the documents will be written to standard output.  Including this option
-automatically sets the number of tries to 1.
-
-<A NAME="IDX27"></A>
-<A NAME="IDX28"></A>
-<A NAME="IDX29"></A>
-<DT><SAMP>`-nc'</SAMP>
-<DD>
-<DT><SAMP>`--no-clobber'</SAMP>
-<DD>
-If a file is downloaded more than once in the same directory, Wget's
-behavior depends on a few options, including <SAMP>`-nc'</SAMP>.  In certain
-cases, the local file will be <EM>clobbered</EM>, or overwritten, upon
-repeated download.  In other cases it will be preserved.
-
-When running Wget without <SAMP>`-N'</SAMP>, <SAMP>`-nc'</SAMP>, or <SAMP>`-r'</SAMP>,
-downloading the same file in the same directory will result in the
-original copy of <VAR>file</VAR> being preserved and the second copy being
-named <SAMP>`<VAR>file</VAR>.1'</SAMP>.  If that file is downloaded yet again, the
-third copy will be named <SAMP>`<VAR>file</VAR>.2'</SAMP>, and so on.  When
-<SAMP>`-nc'</SAMP> is specified, this behavior is suppressed, and Wget will
-refuse to download newer copies of <SAMP>`<VAR>file</VAR>'</SAMP>.  Therefore,
-"<CODE>no-clobber</CODE>" is actually a misnomer in this mode--it's not
-clobbering that's prevented (as the numeric suffixes were already
-preventing clobbering), but rather the multiple version saving that's
-prevented.
-
-When running Wget with <SAMP>`-r'</SAMP>, but without <SAMP>`-N'</SAMP> or <SAMP>`-nc'</SAMP>,
-re-downloading a file will result in the new copy simply overwriting the
-old.  Adding <SAMP>`-nc'</SAMP> will prevent this behavior, instead causing the
-original version to be preserved and any newer copies on the server to
-be ignored.
-
-When running Wget with <SAMP>`-N'</SAMP>, with or without <SAMP>`-r'</SAMP>, the
-decision as to whether or not to download a newer copy of a file depends
-on the local and remote timestamp and size of the file
-(see section <A HREF="wget_5.html#SEC20">Time-Stamping</A>).  <SAMP>`-nc'</SAMP> may not be specified at the same
-time as <SAMP>`-N'</SAMP>.
-
-Note that when <SAMP>`-nc'</SAMP> is specified, files with the suffixes
-<SAMP>`.html'</SAMP> or (yuck) <SAMP>`.htm'</SAMP> will be loaded from the local disk
-and parsed as if they had been retrieved from the Web.
-
-<A NAME="IDX30"></A>
-<A NAME="IDX31"></A>
-<A NAME="IDX32"></A>
-<DT><SAMP>`-c'</SAMP>
-<DD>
-<DT><SAMP>`--continue'</SAMP>
-<DD>
-Continue getting a partially-downloaded file.  This is useful when you
-want to finish up a download started by a previous instance of Wget, or
-by another program.  For instance:
-
-
-<PRE>
-wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
-</PRE>
-
-If there is a file named <TT>`ls-lR.Z'</TT> in the current directory, Wget
-will assume that it is the first portion of the remote file, and will
-ask the server to continue the retrieval from an offset equal to the
-length of the local file.
-
-Note that you don't need to specify this option if you just want the
-current invocation of Wget to retry downloading a file should the
-connection be lost midway through.  This is the default behavior.
-<SAMP>`-c'</SAMP> only affects resumption of downloads started <EM>prior</EM> to
-this invocation of Wget, and whose local files are still sitting around.
-
-Without <SAMP>`-c'</SAMP>, the previous example would just download the remote
-file to <TT>`ls-lR.Z.1'</TT>, leaving the truncated <TT>`ls-lR.Z'</TT> file
-alone.
-
-Beginning with Wget 1.7, if you use <SAMP>`-c'</SAMP> on a non-empty file, and
-it turns out that the server does not support continued downloading,
-Wget will refuse to start the download from scratch, which would
-effectively ruin existing contents.  If you really want the download to
-start from scratch, remove the file.
-
-Also beginning with Wget 1.7, if you use <SAMP>`-c'</SAMP> on a file which is of
-equal size as the one on the server, Wget will refuse to download the
-file and print an explanatory message.  The same happens when the file
-is smaller on the server than locally (presumably because it was changed
-on the server since your last download attempt)---because "continuing"
-is not meaningful, no download occurs.
-
-On the other side of the coin, while using <SAMP>`-c'</SAMP>, any file that's
-bigger on the server than locally will be considered an incomplete
-download and only <CODE>(length(remote) - length(local))</CODE> bytes will be
-downloaded and tacked onto the end of the local file.  This behavior can
-be desirable in certain cases--for instance, you can use <SAMP>`wget -c'</SAMP>
-to download just the new portion that's been appended to a data
-collection or log file.
-
-However, if the file is bigger on the server because it's been
-<EM>changed</EM>, as opposed to just <EM>appended</EM> to, you'll end up
-with a garbled file.  Wget has no way of verifying that the local file
-is really a valid prefix of the remote file.  You need to be especially
-careful of this when using <SAMP>`-c'</SAMP> in conjunction with <SAMP>`-r'</SAMP>,
-since every file will be considered as an "incomplete download" candidate.
-
-Another instance where you'll get a garbled file if you try to use
-<SAMP>`-c'</SAMP> is if you have a lame HTTP proxy that inserts a
-"transfer interrupted" string into the local file.  In the future a
-"rollback" option may be added to deal with this case.
-
-Note that <SAMP>`-c'</SAMP> only works with FTP servers and with HTTP
-servers that support the <CODE>Range</CODE> header.
-
-<A NAME="IDX33"></A>
-<A NAME="IDX34"></A>
-<DT><SAMP>`--progress=<VAR>type</VAR>'</SAMP>
-<DD>
-Select the type of the progress indicator you wish to use.  Legal
-indicators are "dot" and "bar".
-
-The "dot" indicator is used by default.  It traces the retrieval by
-printing dots on the screen, each dot representing a fixed amount of
-downloaded data.
-
-When using the dotted retrieval, you may also set the <EM>style</EM> by
-specifying the type as <SAMP>`dot:<VAR>style</VAR>'</SAMP>.  Different styles assign
-different meaning to one dot.  With the <CODE>default</CODE> style each dot
-represents 1K, there are ten dots in a cluster and 50 dots in a line.
-The <CODE>binary</CODE> style has a more "computer"-like orientation--8K
-dots, 16-dots clusters and 48 dots per line (so each line represents
-384K).
-files--each dot represents 64K retrieved, there are eight dots in a
-cluster, and 48 dots on each line (so each line contains 3M).
-
-Specifying <SAMP>`--progress=bar'</SAMP> will draw a nice ASCII progress bar
-graphic (a.k.a. the "thermometer" display) to indicate retrieval.  If the
-output is not a TTY, this option will be ignored, and Wget will revert
-to the dot indicator.  If you want to force the bar indicator, use
-<SAMP>`--progress=bar:force'</SAMP>.
-
-<DT><SAMP>`-N'</SAMP>
-<DD>
-<DT><SAMP>`--timestamping'</SAMP>
-<DD>
-Turn on time-stamping.  See section <A HREF="wget_5.html#SEC20">Time-Stamping</A>, for details.
-
-<A NAME="IDX35"></A>
-<DT><SAMP>`-S'</SAMP>
-<DD>
-<DT><SAMP>`--server-response'</SAMP>
-<DD>
-Print the headers sent by HTTP servers and responses sent by
-FTP servers.
-
-<A NAME="IDX36"></A>
-<A NAME="IDX37"></A>
-<DT><SAMP>`--spider'</SAMP>
-<DD>
-When invoked with this option, Wget will behave as a Web <EM>spider</EM>,
-which means that it will not download the pages, just check that they
-are there.  You can use it to check your bookmarks, e.g. with:
-
-
-<PRE>
-wget --spider --force-html -i bookmarks.html
-</PRE>
-
-This feature needs much more work for Wget to get close to the
-functionality of real WWW spiders.
-
-<A NAME="IDX38"></A>
-<DT><SAMP>`-T <VAR>seconds</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--timeout=<VAR>seconds</VAR>'</SAMP>
-<DD>
-Set the read timeout to <VAR>seconds</VAR> seconds.  Whenever a network read
-is issued, the file descriptor is checked for a timeout, which could
-otherwise leave a pending connection (uninterrupted read).  The default
-timeout is 900 seconds (fifteen minutes).  Setting timeout to 0 will
-disable checking for timeouts.
-
-Please do not lower the default timeout value with this option unless
-you know what you are doing.
-
-<A NAME="IDX39"></A>
-<A NAME="IDX40"></A>
-<DT><SAMP>`-w <VAR>seconds</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--wait=<VAR>seconds</VAR>'</SAMP>
-<DD>
-Wait the specified number of seconds between the retrievals.  Use of
-this option is recommended, as it lightens the server load by making the
-requests less frequent.  Instead of in seconds, the time can be
-specified in minutes using the <CODE>m</CODE> suffix, in hours using <CODE>h</CODE>
-suffix, or in days using <CODE>d</CODE> suffix.
-
-Specifying a large value for this option is useful if the network or the
-destination host is down, so that Wget can wait long enough to
-reasonably expect the network error to be fixed before the retry.
-
-<A NAME="IDX41"></A>
-<A NAME="IDX42"></A>
-<DT><SAMP>`--waitretry=<VAR>seconds</VAR>'</SAMP>
-<DD>
-If you don't want Wget to wait between <EM>every</EM> retrieval, but only
-between retries of failed downloads, you can use this option.  Wget will
-use <EM>linear backoff</EM>, waiting 1 second after the first failure on a
-given file, then waiting 2 seconds after the second failure on that
-file, up to the maximum number of <VAR>seconds</VAR> you specify.  Therefore,
-a value of 10 will actually make Wget wait up to (1 + 2 + ... + 10) = 55
-seconds per file.
-
-Note that this option is turned on by default in the global
-<TT>`wgetrc'</TT> file.
-
-<A NAME="IDX43"></A>
-<A NAME="IDX44"></A>
-<DT><SAMP>`--random-wait'</SAMP>
-<DD>
-Some web sites may perform log analysis to identify retrieval programs
-such as Wget by looking for statistically significant similarities in
-the time between requests. This option causes the time between requests
-to vary between 0 and 2 * <VAR>wait</VAR> seconds, where <VAR>wait</VAR> was
-specified using the <SAMP>`-w'</SAMP> or <SAMP>`--wait'</SAMP> options, in order to mask
-Wget's presence from such analysis.
-
-A recent article in a publication devoted to development on a popular
-consumer platform provided code to perform this analysis on the fly.
-Its author suggested blocking at the class C address level to ensure
-automated retrieval programs were blocked despite changing DHCP-supplied
-addresses.
-
-The <SAMP>`--random-wait'</SAMP> option was inspired by this ill-advised
-recommendation to block many unrelated users from a web site due to the
-actions of one.
-
-<A NAME="IDX45"></A>
-<DT><SAMP>`-Y on/off'</SAMP>
-<DD>
-<DT><SAMP>`--proxy=on/off'</SAMP>
-<DD>
-Turn proxy support on or off.  The proxy is on by default if the
-appropriate environmental variable is defined.
-
-<A NAME="IDX46"></A>
-<DT><SAMP>`-Q <VAR>quota</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--quota=<VAR>quota</VAR>'</SAMP>
-<DD>
-Specify download quota for automatic retrievals.  The value can be
-specified in bytes (default), kilobytes (with <SAMP>`k'</SAMP> suffix), or
-megabytes (with <SAMP>`m'</SAMP> suffix).
-
-Note that quota will never affect downloading a single file.  So if you
-specify <SAMP>`wget -Q10k ftp://wuarchive.wustl.edu/ls-lR.gz'</SAMP>, all of the
-<TT>`ls-lR.gz'</TT> will be downloaded.  The same goes even when several
-URLs are specified on the command-line.  However, quota is
-respected when retrieving either recursively, or from an input file.
-Thus you may safely type <SAMP>`wget -Q2m -i sites'</SAMP>---download will be
-aborted when the quota is exceeded.
-
-Setting quota to 0 or to <SAMP>`inf'</SAMP> unlimits the download quota.
-</DL>
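The linear backoff that `--waitretry' uses (described above) can be checked with a little shell arithmetic (editorial sketch; plain POSIX shell):

```shell
# Worst-case total wait for --waitretry=N is 1 + 2 + ... + N seconds,
# since Wget waits 1s after the first failure, 2s after the second, etc.
waitretry=10
total=0
i=1
while [ "$i" -le "$waitretry" ]; do
  total=$((total + i))
  i=$((i + 1))
done
echo "$total"   # prints 55, matching the manual's (1 + 2 + ... + 10) = 55
```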
-
-
-
-<H2><A NAME="SEC8" HREF="wget_toc.html#TOC8">Directory Options</A></H2>
-
-<DL COMPACT>
-
-<DT><SAMP>`-nd'</SAMP>
-<DD>
-<DT><SAMP>`--no-directories'</SAMP>
-<DD>
-Do not create a hierarchy of directories when retrieving recursively.
-With this option turned on, all files will get saved to the current
-directory, without clobbering (if a name shows up more than once, the
-filenames will get extensions <SAMP>`.n'</SAMP>).
-
-<DT><SAMP>`-x'</SAMP>
-<DD>
-<DT><SAMP>`--force-directories'</SAMP>
-<DD>
-The opposite of <SAMP>`-nd'</SAMP>---create a hierarchy of directories, even if
-one would not have been created otherwise.  E.g. <SAMP>`wget -x http://fly.srk.fer.hr/robots.txt'</SAMP> will save the downloaded file to
-<TT>`fly.srk.fer.hr/robots.txt'</TT>.
-
-<DT><SAMP>`-nH'</SAMP>
-<DD>
-<DT><SAMP>`--no-host-directories'</SAMP>
-<DD>
-Disable generation of host-prefixed directories.  By default, invoking
-Wget with <SAMP>`-r http://fly.srk.fer.hr/'</SAMP> will create a structure of
-directories beginning with <TT>`fly.srk.fer.hr/'</TT>.  This option disables
-such behavior.
-
-<A NAME="IDX47"></A>
-<DT><SAMP>`--cut-dirs=<VAR>number</VAR>'</SAMP>
-<DD>
-Ignore <VAR>number</VAR> directory components.  This is useful for getting a
-fine-grained control over the directory where recursive retrieval will
-be saved.
-
-Take, for example, the directory at
-<SAMP>`ftp://ftp.xemacs.org/pub/xemacs/'</SAMP>.  If you retrieve it with
-<SAMP>`-r'</SAMP>, it will be saved locally under
-<TT>`ftp.xemacs.org/pub/xemacs/'</TT>.  While the <SAMP>`-nH'</SAMP> option can
-remove the <TT>`ftp.xemacs.org/'</TT> part, you are still stuck with
-<TT>`pub/xemacs'</TT>.  This is where <SAMP>`--cut-dirs'</SAMP> comes in handy; it
-makes Wget not "see" <VAR>number</VAR> remote directory components.  Here
-are several examples of how the <SAMP>`--cut-dirs'</SAMP> option works.
-
-
-<PRE>
-No options        -&#62; ftp.xemacs.org/pub/xemacs/
--nH               -&#62; pub/xemacs/
--nH --cut-dirs=1  -&#62; xemacs/
--nH --cut-dirs=2  -&#62; .
-
---cut-dirs=1      -&#62; ftp.xemacs.org/xemacs/
-...
-</PRE>
-
-If you just want to get rid of the directory structure, this option is
-similar to a combination of <SAMP>`-nd'</SAMP> and <SAMP>`-P'</SAMP>.  
However, unlike
-<SAMP>`-nd'</SAMP>, <SAMP>`--cut-dirs'</SAMP> does not lose subdirectories--for
-instance, with <SAMP>`-nH --cut-dirs=1'</SAMP>, a <TT>`beta/'</TT> subdirectory will
-be placed in <TT>`xemacs/beta'</TT>, as one would expect.
-
-<A NAME="IDX48"></A>
-<DT><SAMP>`-P <VAR>prefix</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--directory-prefix=<VAR>prefix</VAR>'</SAMP>
-<DD>
-Set directory prefix to <VAR>prefix</VAR>.  The <EM>directory prefix</EM> is the
-directory where all other files and subdirectories will be saved to,
-i.e. the top of the retrieval tree.  The default is <SAMP>`.'</SAMP> (the
-current directory).
-</DL>
-
-
-
-<H2><A NAME="SEC9" HREF="wget_toc.html#TOC9">HTTP Options</A></H2>
-
-<DL COMPACT>
-
-<DT><SAMP>`-E'</SAMP>
-<DD>
-<A NAME="IDX49"></A>
- 
-<DT><SAMP>`--html-extension'</SAMP>
-<DD>
-If a file of type <SAMP>`text/html'</SAMP> is downloaded and the URL does not
-end with the regexp <SAMP>`\.[Hh][Tt][Mm][Ll]?'</SAMP>, this option will cause
-the suffix <SAMP>`.html'</SAMP> to be appended to the local filename.  This is
-useful, for instance, when you're mirroring a remote site that uses
-<SAMP>`.asp'</SAMP> pages, but you want the mirrored pages to be viewable on
-your stock Apache server.  Another good use for this is when you're
-downloading the output of CGIs.  A URL like
-<SAMP>`http://site.com/article.cgi?25'</SAMP> will be saved as
-<TT>`article.cgi?25.html'</TT>.
-
-Note that filenames changed in this way will be re-downloaded every time
-you re-mirror a site, because Wget can't tell that the local
-<TT>`<VAR>X</VAR>.html'</TT> file corresponds to remote URL <SAMP>`<VAR>X</VAR>'</SAMP> (since
-it doesn't yet know that the URL produces output of type
-<SAMP>`text/html'</SAMP>).  To prevent this re-downloading, you must use
-<SAMP>`-k'</SAMP> and <SAMP>`-K'</SAMP> so that the original version of the file will be
-saved as <TT>`<VAR>X</VAR>.orig'</TT> (see section <A HREF="wget_2.html#SEC11">Recursive Retrieval Options</A>).
-
-<A NAME="IDX50"></A>
-<A NAME="IDX51"></A>
-<A NAME="IDX52"></A>
-<DT><SAMP>`--http-user=<VAR>user</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--http-passwd=<VAR>password</VAR>'</SAMP>
-<DD>
-Specify the username <VAR>user</VAR> and password <VAR>password</VAR> on an
-HTTP server.  According to the type of the challenge, Wget will
-encode them using either the <CODE>basic</CODE> (insecure) or the
-<CODE>digest</CODE> authentication scheme.
-
-Another way to specify username and password is in the URL itself
-(see section <A HREF="wget_2.html#SEC3">URL Format</A>).  For more information
-about security issues with Wget, see section <A HREF="wget_9.html#SEC42">Security Considerations</A>.
-
-<A NAME="IDX53"></A>
-<A NAME="IDX54"></A>
-<DT><SAMP>`-C on/off'</SAMP>
-<DD>
-<DT><SAMP>`--cache=on/off'</SAMP>
-<DD>
-When set to off, disable server-side cache.  In this case, Wget will
-send the remote server an appropriate directive (<SAMP>`Pragma: no-cache'</SAMP>) to get the file from the remote service, rather than
-returning the cached version.  This is especially useful for retrieving
-and flushing out-of-date documents on proxy servers.
-
-Caching is allowed by default.
-
-<A NAME="IDX55"></A>
-<DT><SAMP>`--cookies=on/off'</SAMP>
-<DD>
-When set to off, disable the use of cookies.  Cookies are a mechanism
-for maintaining server-side state.  The server sends the client a cookie
-using the <CODE>Set-Cookie</CODE> header, and the client responds with the
-same cookie upon further requests.  Since cookies allow the server
-owners to keep track of visitors and for sites to exchange this
-information, some consider them a breach of privacy.  The default is to
-use cookies; however, <EM>storing</EM> cookies is not on by default.
-
-<A NAME="IDX56"></A>
-<A NAME="IDX57"></A>
-<DT><SAMP>`--load-cookies <VAR>file</VAR>'</SAMP>
-<DD>
-Load cookies from <VAR>file</VAR> before the first HTTP retrieval.
-<VAR>file</VAR> is a textual file in the format originally used by Netscape's
-<TT>`cookies.txt'</TT> file.
-
-You will typically use this option when mirroring sites that require
-that you be logged in to access some or all of their content.  The login
-process typically works by the web server issuing an HTTP cookie
-upon receiving and verifying your credentials.  The cookie is then
-resent by the browser when accessing that part of the site, and so
-proves your identity.
-
-Mirroring such a site requires Wget to send the same cookies your
-browser sends when communicating with the site.  This is achieved by
-<SAMP>`--load-cookies'</SAMP>---simply point Wget to the location of the
-<TT>`cookies.txt'</TT> file, and it will send the same cookies your browser
-would send in the same situation.  Different browsers keep textual
-cookie files in different locations:
-
-<DL COMPACT>
-
-<DT>Netscape 4.x.
-<DD>
-The cookies are in <TT>`~/.netscape/cookies.txt'</TT>.
-
-<DT>Mozilla and Netscape 6.x.
-<DD>
-Mozilla's cookie file is also named <TT>`cookies.txt'</TT>, located
-somewhere under <TT>`~/.mozilla'</TT>, in the directory of your profile.
-The full path usually ends up looking somewhat like
-<TT>`~/.mozilla/default/<VAR>some-weird-string</VAR>/cookies.txt'</TT>.
-
-<DT>Internet Explorer.
-<DD>
-You can produce a cookie file Wget can use by using the File menu,
-Import and Export, Export Cookies.  This has been tested with Internet
-Explorer 5; it is not guaranteed to work with earlier versions.
-
-<DT>Other browsers.
-<DD>
-If you are using a different browser to create your cookies,
-<SAMP>`--load-cookies'</SAMP> will only work if you can locate or produce a
-cookie file in the Netscape format that Wget expects.
-</DL>
-
-If you cannot use <SAMP>`--load-cookies'</SAMP>, there might still be an
-alternative.  If your browser supports a "cookie manager", you can use
-it to view the cookies used when accessing the site you're mirroring.
-Write down the name and value of the cookie, and manually instruct Wget
-to send those cookies, bypassing the "official" cookie support:
-
-
-<PRE>
-wget --cookies=off --header "Cookie: <VAR>name</VAR>=<VAR>value</VAR>"
-</PRE>
-
-<A NAME="IDX58"></A>
-<A NAME="IDX59"></A>
-<DT><SAMP>`--save-cookies <VAR>file</VAR>'</SAMP>
-<DD>
-Save cookies to <VAR>file</VAR> at the end of the session.  Cookies whose
-expiry time is not specified, or those that have already expired, are
-not saved.
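A cookie file in the Netscape format that `--load-cookies' expects can also be produced by hand. The sketch below builds a minimal one; the domain, cookie name, and value are hypothetical, and the fields are the standard tab-separated Netscape layout.

```shell
# Build a minimal Netscape-format cookie file by hand.  Fields are
# tab-separated: domain, include-subdomains flag, path, secure flag,
# expiry (Unix time), name, value.  All values here are made up.
printf '# Netscape HTTP Cookie File\n' > cookies.txt
printf '.example.com\tTRUE\t/\tFALSE\t2147483647\tsession_id\tabc123\n' >> cookies.txt
cat cookies.txt
# A mirroring session could then reuse and update the jar with:
#   wget --load-cookies cookies.txt --save-cookies cookies.txt http://example.com/
```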
-
-<A NAME="IDX60"></A>
-<A NAME="IDX61"></A>
-<DT><SAMP>`--ignore-length'</SAMP>
-<DD>
-Unfortunately, some HTTP servers (CGI programs, to be more
-precise) send out bogus <CODE>Content-Length</CODE> headers, which makes Wget
-go wild, as it thinks not all the document was retrieved.  You can spot
-this syndrome if Wget retries getting the same document again and again,
-each time claiming that the (otherwise normal) connection has closed on
-the very same byte.
-
-With this option, Wget will ignore the <CODE>Content-Length</CODE> header--as
-if it never existed.
-
-<A NAME="IDX62"></A>
-<DT><SAMP>`--header=<VAR>additional-header</VAR>'</SAMP>
-<DD>
-Define an <VAR>additional-header</VAR> to be passed to the HTTP servers.
-Headers must contain a <SAMP>`:'</SAMP> preceded by one or more non-blank
-characters, and must not contain newlines.
-
-You may define more than one additional header by specifying
-<SAMP>`--header'</SAMP> more than once.
-
-
-<PRE>
-wget --header='Accept-Charset: iso-8859-2' \
-     --header='Accept-Language: hr'        \
-       http://fly.srk.fer.hr/
-</PRE>
-
-Specification of an empty string as the header value will clear all
-previous user-defined headers.
-
-<A NAME="IDX63"></A>
-<A NAME="IDX64"></A>
-<A NAME="IDX65"></A>
-<DT><SAMP>`--proxy-user=<VAR>user</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--proxy-passwd=<VAR>password</VAR>'</SAMP>
-<DD>
-Specify the username <VAR>user</VAR> and password <VAR>password</VAR> for
-authentication on a proxy server.  Wget will encode them using the
-<CODE>basic</CODE> authentication scheme.
-
-<A NAME="IDX66"></A>
-<A NAME="IDX67"></A>
-<DT><SAMP>`--referer=<VAR>url</VAR>'</SAMP>
-<DD>
-Include `Referer: <VAR>url</VAR>' header in HTTP request.  Useful for
-retrieving documents with server-side processing that assume they are
-always being retrieved by interactive web browsers and only come out
-properly when Referer is set to one of the pages that point to them.
-
-<A NAME="IDX68"></A>
-<DT><SAMP>`-s'</SAMP>
-<DD>
-<DT><SAMP>`--save-headers'</SAMP>
-<DD>
-Save the headers sent by the HTTP server to the file, preceding the
-actual contents, with an empty line as the separator.
-
-<A NAME="IDX69"></A>
-<DT><SAMP>`-U <VAR>agent-string</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--user-agent=<VAR>agent-string</VAR>'</SAMP>
-<DD>
-Identify as <VAR>agent-string</VAR> to the HTTP server.
-
-The HTTP protocol allows the clients to identify themselves using a
-<CODE>User-Agent</CODE> header field.  This enables distinguishing the
-WWW software, usually for statistical purposes or for tracing of
-protocol violations.  Wget normally identifies as
-<SAMP>`Wget/<VAR>version</VAR>'</SAMP>, <VAR>version</VAR> being the current version
-number of Wget.
-
-However, some sites have been known to impose the policy of tailoring
-the output according to the <CODE>User-Agent</CODE>-supplied information.
-While conceptually this is not such a bad idea, it has been abused by
-servers denying information to clients other than <CODE>Mozilla</CODE> or
-Microsoft <CODE>Internet Explorer</CODE>.  This option allows you to change
-the <CODE>User-Agent</CODE> line issued by Wget.  Use of this option is
-discouraged, unless you really know what you are doing.
-</DL>
-
-
-
-<H2><A NAME="SEC10" HREF="wget_toc.html#TOC10">FTP Options</A></H2>
-
-<DL COMPACT>
-
-<DT><SAMP>`-nr'</SAMP>
-<DD>
-<A NAME="IDX70"></A>
- 
-<DT><SAMP>`--dont-remove-listing'</SAMP>
-<DD>
-Don't remove the temporary <TT>`.listing'</TT> files generated by FTP
-retrievals.  Normally, these files contain the raw directory listings
-received from FTP servers.  Not removing them can be useful for
-debugging purposes, or when you want to be able to easily check on the
-contents of remote server directories (e.g. to verify that a mirror
-you're running is complete).
-
-Note that even though Wget writes to a known filename for this file,
-this is not a security hole in the scenario of a user making
-<TT>`.listing'</TT> a symbolic link to <TT>`/etc/passwd'</TT> or something and
-asking <CODE>root</CODE> to run Wget in his or her directory.  Depending on
-the options used, either Wget will refuse to write to <TT>`.listing'</TT>,
-making the globbing/recursion/time-stamping operation fail, or the
-symbolic link will be deleted and replaced with the actual
-<TT>`.listing'</TT> file, or the listing will be written to a
-<TT>`.listing.<VAR>number</VAR>'</TT> file.
-
-Even so, <CODE>root</CODE> should
-never run Wget in a non-trusted user's directory.  A user could do
-something as simple as linking <TT>`index.html'</TT> to <TT>`/etc/passwd'</TT>
-and asking <CODE>root</CODE> to run Wget with <SAMP>`-N'</SAMP> or <SAMP>`-r'</SAMP> so the file
-will be overwritten.
-
-<A NAME="IDX71"></A>
-<DT><SAMP>`-g on/off'</SAMP>
-<DD>
-<DT><SAMP>`--glob=on/off'</SAMP>
-<DD>
-Turn FTP globbing on or off.  Globbing means you may use the
-shell-like special characters (<EM>wildcards</EM>), like <SAMP>`*'</SAMP>,
-<SAMP>`?'</SAMP>, <SAMP>`['</SAMP> and <SAMP>`]'</SAMP> to retrieve more than one file from the
-same directory at once, like:
-
-
-<PRE>
-wget ftp://gnjilux.srk.fer.hr/*.msg
-</PRE>
-
-By default, globbing will be turned on if the URL contains a
-globbing character.  This option may be used to turn globbing on or off
-permanently.
-
-You may have to quote the URL to protect it from being expanded by
-your shell.  Globbing makes Wget look for a directory listing, which is
-system-specific.  This is why it currently works only with Unix FTP
-servers (and the ones emulating Unix <CODE>ls</CODE> output).
-
-<A NAME="IDX72"></A>
-<DT><SAMP>`--passive-ftp'</SAMP>
-<DD>
-Use the <EM>passive</EM> FTP retrieval scheme, in which the client
-initiates the data connection.  This is sometimes required for FTP
-to work behind firewalls.
-
-<A NAME="IDX73"></A>
-<DT><SAMP>`--retr-symlinks'</SAMP>
-<DD>
-Usually, when retrieving FTP directories recursively and a symbolic
-link is encountered, the linked-to file is not downloaded.  Instead, a
-matching symbolic link is created on the local filesystem.  The
-pointed-to file will not be downloaded unless this recursive retrieval
-would have encountered it separately and downloaded it anyway.
-
-When <SAMP>`--retr-symlinks'</SAMP> is specified, however, symbolic links are
-traversed and the pointed-to files are retrieved.  At this time, this
-option does not cause Wget to traverse symlinks to directories and
-recurse through them, but in the future it should be enhanced to do
-this.
-
-Note that when retrieving a file (not a directory) because it was
-specified on the commandline, rather than because it was recursed to,
-this option has no effect.  Symbolic links are always traversed in this
-case.
-</DL>
-
-
-
-<H2><A NAME="SEC11" HREF="wget_toc.html#TOC11">Recursive Retrieval Options</A></H2>
-
-<DL COMPACT>
-
-<DT><SAMP>`-r'</SAMP>
-<DD>
-<DT><SAMP>`--recursive'</SAMP>
-<DD>
-Turn on recursive retrieving.  See section <A HREF="wget_3.html#SEC13">Recursive Retrieval</A>, for more
-details.
-
-<DT><SAMP>`-l <VAR>depth</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--level=<VAR>depth</VAR>'</SAMP>
-<DD>
-Specify recursion maximum depth level <VAR>depth</VAR> (see section <A HREF="wget_3.html#SEC13">Recursive Retrieval</A>).  The default maximum depth is 5.
-
-<A NAME="IDX74"></A>
-<A NAME="IDX75"></A>
-<A NAME="IDX76"></A>
-<DT><SAMP>`--delete-after'</SAMP>
-<DD>
-This option tells Wget to delete every single file it downloads,
-<EM>after</EM> having done so.  It is useful for pre-fetching popular
-pages through a proxy, e.g.:
-
-
-<PRE>
-wget -r -nd --delete-after http://whatever.com/~popular/page/
-</PRE>
-
-The <SAMP>`-r'</SAMP> option is to retrieve recursively, and <SAMP>`-nd'</SAMP> to not
-create directories.
-
-Note that <SAMP>`--delete-after'</SAMP> deletes files on the local machine.  It
-does not issue the <SAMP>`DELE'</SAMP> command to remote FTP sites, for
-instance.  Also note that when <SAMP>`--delete-after'</SAMP> is specified,
-<SAMP>`--convert-links'</SAMP> is ignored, so <SAMP>`.orig'</SAMP> files are simply not
-created in the first place.
-
-<A NAME="IDX77"></A>
-<A NAME="IDX78"></A>
-<DT><SAMP>`-k'</SAMP>
-<DD>
-<DT><SAMP>`--convert-links'</SAMP>
-<DD>
-After the download is complete, convert the links in the document to
-make them suitable for local viewing.  This affects not only the visible
-hyperlinks, but any part of the document that links to external content,
-such as embedded images, links to style sheets, hyperlinks to non-HTML
-content, etc.
-
-Each link will be changed in one of two ways:
-
-
-<UL>
-<LI>
-
-The links to files that have been downloaded by Wget will be changed to
-refer to the file they point to as a relative link.
-
-Example: if the downloaded file <TT>`/foo/doc.html'</TT> links to
-<TT>`/bar/img.gif'</TT>, also downloaded, then the link in <TT>`doc.html'</TT>
-will be modified to point to <SAMP>`../bar/img.gif'</SAMP>.  This kind of
-transformation works reliably for arbitrary combinations of directories.
-
-<LI>
-
-The links to files that have not been downloaded by Wget will be changed
-to include host name and absolute path of the location they point to.
-
-Example: if the downloaded file <TT>`/foo/doc.html'</TT> links to
-<TT>`/bar/img.gif'</TT> (or to <TT>`../bar/img.gif'</TT>), then the link in
-<TT>`doc.html'</TT> will be modified to point to
-<TT>`http://<VAR>hostname</VAR>/bar/img.gif'</TT>.
-</UL>
-
-Because of this, local browsing works reliably: if a linked file was
-downloaded, the link will refer to its local name; if it was not
-downloaded, the link will refer to its full Internet address rather than
-presenting a broken link.  The fact that the former links are converted
-to relative links ensures that you can move the downloaded hierarchy to
-another directory.
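The first kind of rewrite above is plain relative-path arithmetic, which can be checked with GNU realpath. This is purely for illustration: Wget computes the path internally and does not invoke realpath.

```shell
# Reproduce the manual's example: a link in /foo/doc.html pointing at
# /bar/img.gif becomes a path relative to the /foo directory.
# `-m' lets realpath operate on paths that need not exist on disk.
realpath -m --relative-to=/foo /bar/img.gif
```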
-
-Note that only at the end of the download can Wget know which links have
-been downloaded.  Because of that, the work done by <SAMP>`-k'</SAMP> will be
-performed at the end of all the downloads.
-
-<A NAME="IDX79"></A>
-<DT><SAMP>`-K'</SAMP>
-<DD>
-<DT><SAMP>`--backup-converted'</SAMP>
-<DD>
-When converting a file, back up the original version with a <SAMP>`.orig'</SAMP>
-suffix.  Affects the behavior of <SAMP>`-N'</SAMP> (see section <A HREF="wget_5.html#SEC22">HTTP Time-Stamping Internals</A>).
-
-<DT><SAMP>`-m'</SAMP>
-<DD>
-<DT><SAMP>`--mirror'</SAMP>
-<DD>
-Turn on options suitable for mirroring.  This option turns on recursion
-and time-stamping, sets infinite recursion depth and keeps FTP
-directory listings.  It is currently equivalent to
-<SAMP>`-r -N -l inf -nr'</SAMP>.
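Since the text gives the exact expansion of `--mirror', the equivalence can be spelled out explicitly; the URL below is a placeholder and nothing is downloaded.

```shell
# `-m' / `--mirror' is shorthand for the option combination stated
# above.  We only show the expanded command line here.
mirror_opts="-r -N -l inf -nr"
echo "wget $mirror_opts http://example.com/"
```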
-
-<A NAME="IDX80"></A>
-<A NAME="IDX81"></A>
-<DT><SAMP>`-p'</SAMP>
-<DD>
-<DT><SAMP>`--page-requisites'</SAMP>
-<DD>
-This option causes Wget to download all the files that are necessary to
-properly display a given HTML page.  This includes such things as
-inlined images, sounds, and referenced stylesheets.
-
-Ordinarily, when downloading a single HTML page, any requisite documents
-that may be needed to display it properly are not downloaded.  Using
-<SAMP>`-r'</SAMP> together with <SAMP>`-l'</SAMP> can help, but since Wget does not
-ordinarily distinguish between external and inlined documents, one is
-generally left with "leaf documents" that are missing their
-requisites.
-
-For instance, say document <TT>`1.html'</TT> contains an <CODE>&#60;IMG&#62;</CODE> tag
-referencing <TT>`1.gif'</TT> and an <CODE>&#60;A&#62;</CODE> tag pointing to external
-document <TT>`2.html'</TT>.  Say that <TT>`2.html'</TT> is similar but that its
-image is <TT>`2.gif'</TT> and it links to <TT>`3.html'</TT>.  Say this
-continues up to some arbitrarily high number.
-
-If one executes the command:
-
-
-<PRE>
-wget -r -l 2 http://<VAR>site</VAR>/1.html
-</PRE>
-
-then <TT>`1.html'</TT>, <TT>`1.gif'</TT>, <TT>`2.html'</TT>, <TT>`2.gif'</TT>, and
-<TT>`3.html'</TT> will be downloaded.  As you can see, <TT>`3.html'</TT> is
-without its requisite <TT>`3.gif'</TT> because Wget is simply counting the
-number of hops (up to 2) away from <TT>`1.html'</TT> in order to determine
-where to stop the recursion.  However, with this command:
-
-
-<PRE>
-wget -r -l 2 -p http://<VAR>site</VAR>/1.html
-</PRE>
-
-all the above files <EM>and</EM> <TT>`3.html'</TT>'s requisite <TT>`3.gif'</TT>
-will be downloaded.  Similarly,
-
-
-<PRE>
-wget -r -l 1 -p http://<VAR>site</VAR>/1.html
-</PRE>
-
-will cause <TT>`1.html'</TT>, <TT>`1.gif'</TT>, <TT>`2.html'</TT>, and <TT>`2.gif'</TT>
-to be downloaded.  One might think that:
-
-
-<PRE>
-wget -r -l 0 -p http://<VAR>site</VAR>/1.html
-</PRE>
-
-would download just <TT>`1.html'</TT> and <TT>`1.gif'</TT>, but unfortunately
-this is not the case, because <SAMP>`-l 0'</SAMP> is equivalent to
-<SAMP>`-l inf'</SAMP>---that is, infinite recursion.  To download a single HTML
-page (or a handful of them, all specified on the commandline or in a
-<SAMP>`-i'</SAMP> URL input file) and its (or their) requisites, simply leave off
-<SAMP>`-r'</SAMP> and <SAMP>`-l'</SAMP>:
-
-
-<PRE>
-wget -p http://<VAR>site</VAR>/1.html
-</PRE>
-
-Note that Wget will behave as if <SAMP>`-r'</SAMP> had been specified, but only
-that single page and its requisites will be downloaded.  Links from that
-page to external documents will not be followed.  Actually, to download
-a single page and all its requisites (even if they exist on separate
-websites), and make sure the lot displays properly locally, this author
-likes to use a few options in addition to <SAMP>`-p'</SAMP>:
-
-
-<PRE>
-wget -E -H -k -K -p http://<VAR>site</VAR>/<VAR>document</VAR>
-</PRE>
-
-To finish off this topic, it's worth knowing that Wget's idea of an
-external document link is any URL specified in an <CODE>&#60;A&#62;</CODE> tag, an
-<CODE>&#60;AREA&#62;</CODE> tag, or a <CODE>&#60;LINK&#62;</CODE> tag other than <CODE>&#60;LINK
-REL="stylesheet"&#62;</CODE>.
-</DL>
-
-
-
-<H2><A NAME="SEC12" HREF="wget_toc.html#TOC12">Recursive Accept/Reject Options</A></H2>
-
-<DL COMPACT>
-
-<DT><SAMP>`-A <VAR>acclist</VAR> --accept <VAR>acclist</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`-R <VAR>rejlist</VAR> --reject <VAR>rejlist</VAR>'</SAMP>
-<DD>
-Specify comma-separated lists of file name suffixes or patterns to
-accept or reject (see section <A HREF="wget_4.html#SEC16">Types of Files</A> for more details).
-
-<DT><SAMP>`-D <VAR>domain-list</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--domains=<VAR>domain-list</VAR>'</SAMP>
-<DD>
-Set domains to be followed.  <VAR>domain-list</VAR> is a comma-separated list
-of domains.  Note that it does <EM>not</EM> turn on <SAMP>`-H'</SAMP>.
-
-<DT><SAMP>`--exclude-domains <VAR>domain-list</VAR>'</SAMP>
-<DD>
-Specify the domains that are <EM>not</EM> to be followed.
-(see section <A HREF="wget_4.html#SEC15">Spanning Hosts</A>).
-
-<A NAME="IDX82"></A>
-<DT><SAMP>`--follow-ftp'</SAMP>
-<DD>
-Follow FTP links from HTML documents.  Without this option,
-Wget will ignore all the FTP links.
-
-<A NAME="IDX83"></A>
-<DT><SAMP>`--follow-tags=<VAR>list</VAR>'</SAMP>
-<DD>
-Wget has an internal table of HTML tag / attribute pairs that it
-considers when looking for linked documents during a recursive
-retrieval.  If a user wants only a subset of those tags to be
-considered, however, such tags should be specified in a
-comma-separated <VAR>list</VAR> with this option.
-
-<DT><SAMP>`-G <VAR>list</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--ignore-tags=<VAR>list</VAR>'</SAMP>
-<DD>
-This is the opposite of the <SAMP>`--follow-tags'</SAMP> option.  To skip
-certain HTML tags when recursively looking for documents to download,
-specify them in a comma-separated <VAR>list</VAR>.  
-
-In the past, the <SAMP>`-G'</SAMP> option was the best bet for downloading a
-single page and its requisites, using a commandline like:
-
-
-<PRE>
-wget -Ga,area -H -k -K -r http://<VAR>site</VAR>/<VAR>document</VAR>
-</PRE>
-
-However, the author of this option came across a page with tags like
-<CODE>&#60;LINK REL="home" HREF="/"&#62;</CODE> and came to the realization that
-<SAMP>`-G'</SAMP> was not enough.  One can't just tell Wget to ignore
-<CODE>&#60;LINK&#62;</CODE>, because then stylesheets will not be downloaded.  Now the
-best bet for downloading a single page and its requisites is the
-dedicated <SAMP>`--page-requisites'</SAMP> option.
-
-<DT><SAMP>`-H'</SAMP>
-<DD>
-<DT><SAMP>`--span-hosts'</SAMP>
-<DD>
-Enable spanning across hosts when doing recursive retrieving
-(see section <A HREF="wget_4.html#SEC15">Spanning Hosts</A>).
-
-<DT><SAMP>`-L'</SAMP>
-<DD>
-<DT><SAMP>`--relative'</SAMP>
-<DD>
-Follow relative links only.  Useful for retrieving a specific home page
-without any distractions, not even those from the same hosts
-(see section <A HREF="wget_4.html#SEC18">Relative Links</A>).
-
-<DT><SAMP>`-I <VAR>list</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--include-directories=<VAR>list</VAR>'</SAMP>
-<DD>
-Specify a comma-separated list of directories you wish to follow when
-downloading (see section <A HREF="wget_4.html#SEC17">Directory-Based Limits</A> for more details.)  Elements
-of <VAR>list</VAR> may contain wildcards.
-
-<DT><SAMP>`-X <VAR>list</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--exclude-directories=<VAR>list</VAR>'</SAMP>
-<DD>
-Specify a comma-separated list of directories you wish to exclude from
-download (see section <A HREF="wget_4.html#SEC17">Directory-Based Limits</A> for more details.)  Elements of
-<VAR>list</VAR> may contain wildcards.
-
-<DT><SAMP>`-np'</SAMP>
-<DD>
-<DT><SAMP>`--no-parent'</SAMP>
-<DD>
-Do not ever ascend to the parent directory when retrieving recursively.
-This is a useful option, since it guarantees that only the files
-<EM>below</EM> a certain hierarchy will be downloaded.
-See section <A HREF="wget_4.html#SEC17">Directory-Based Limits</A>, for more details.
-</DL>
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_1.html">previous</A>, <A HREF="wget_3.html">next</A>, <A HREF="wget_11.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_chapter/wget_3.html
===================================================================
RCS file: manual/wget-1.8.1/html_chapter/wget_3.html
diff -N manual/wget-1.8.1/html_chapter/wget_3.html
--- manual/wget-1.8.1/html_chapter/wget_3.html  19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,104 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Recursive Retrieval</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_2.html">previous</A>, <A HREF="wget_4.html">next</A>, <A HREF="wget_11.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-<P><HR><P>
-
-
-<H1><A NAME="SEC13" HREF="wget_toc.html#TOC13">Recursive Retrieval</A></H1>
-<P>
-<A NAME="IDX84"></A>
-<A NAME="IDX85"></A>
-<A NAME="IDX86"></A>
-
-
-<P>
-GNU Wget is capable of traversing parts of the Web (or a single
-HTTP or FTP server), following links and directory structure.
-We refer to this as to <EM>recursive retrieving</EM>, or <EM>recursion</EM>.
-
-
-<P>
-With HTTP URLs, Wget retrieves and parses the HTML document at
-the given URL, retrieving the files that document refers to,
-through markup like <CODE>href</CODE> or
-<CODE>src</CODE>.  If the freshly downloaded file is also of type
-<CODE>text/html</CODE>, it will be parsed and followed further.
-
-
-<P>
-Recursive retrieval of HTTP and HTML content is
-<EM>breadth-first</EM>.  This means that Wget first downloads the requested
-HTML document, then the documents linked from that document, then the
-documents linked by them, and so on.  In other words, Wget first
-downloads the documents at depth 1, then those at depth 2, and so on
-until the specified maximum depth.
-
-
-<P>
-The maximum <EM>depth</EM> to which the retrieval may descend is specified
-with the <SAMP>`-l'</SAMP> option.  The default maximum depth is five layers.
-
-
-<P>
-When retrieving an FTP URL recursively, Wget will retrieve all
-the data from the given directory tree (including the subdirectories up
-to the specified depth) on the remote server, creating its mirror image
-locally.  FTP retrieval is also limited by the <CODE>depth</CODE>
-parameter.  Unlike HTTP recursion, FTP recursion is performed
-depth-first.
-
-
-<P>
-By default, Wget will create a local directory tree, corresponding to
-the one found on the remote server.
-
-
-<P>
-Recursive retrieving has a number of applications, the most
-important of which is mirroring.  It is also useful for WWW
-presentations, and any other situation where a slow network
-connection should be bypassed by storing the files locally.
-
-
-<P>
-You should be warned that recursive downloads can overload the remote
-servers.  Because of that, many administrators frown upon them and may
-ban access from your site if they detect very fast downloads of big
-amounts of content.  When downloading from Internet servers, consider
-using the <SAMP>`-w'</SAMP> option to introduce a delay between accesses to the
-server.  The download will take a while longer, but the server
-administrator will not be alarmed by your rudeness.
-
-
-<P>
-Of course, recursive download may cause problems on your machine.  If
-left to run unchecked, it can easily fill up the disk.  If downloading
-from a local network, it can also consume considerable bandwidth, as well as
-memory and CPU.
-
-
-<P>
-Try to specify the criteria that match the kind of download you are
-trying to achieve.  If you want to download only one page, use
-<SAMP>`--page-requisites'</SAMP> without any additional recursion.  If you want
-to download things under one directory, use <SAMP>`-np'</SAMP> to avoid
-downloading things from other directories.  If you want to download all
-the files from one directory, use <SAMP>`-l 1'</SAMP> to make sure the 
recursion
-depth never exceeds one.  See section <A HREF="wget_4.html#SEC14">Following Links</A>, for more information
-about this.
-
-
-<P>
-Recursive retrieval should be used with care.  Don't say you were not
-warned.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_2.html">previous</A>, <A HREF="wget_4.html">next</A>, <A HREF="wget_11.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_chapter/wget_4.html
===================================================================
RCS file: manual/wget-1.8.1/html_chapter/wget_4.html
diff -N manual/wget-1.8.1/html_chapter/wget_4.html
--- manual/wget-1.8.1/html_chapter/wget_4.html  19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,355 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Following Links</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_3.html">previous</A>, <A HREF="wget_5.html">next</A>, <A HREF="wget_11.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-<P><HR><P>
-
-
-<H1><A NAME="SEC14" HREF="wget_toc.html#TOC14">Following Links</A></H1>
-<P>
-<A NAME="IDX87"></A>
-<A NAME="IDX88"></A>
-
-
-<P>
-When retrieving recursively, one does not wish to retrieve loads of
-unnecessary data.  Most of the time users know exactly what
-they want to download, and want Wget to follow only specific links.
-
-
-<P>
-For example, if you wish to download the music archive from
-<SAMP>`fly.srk.fer.hr'</SAMP>, you will not want to download all the home pages
-that happen to be referenced by an obscure part of the archive.
-
-
-<P>
-Wget possesses several mechanisms that allow you to fine-tune which
-links it will follow.
-
-
-
-
-<H2><A NAME="SEC15" HREF="wget_toc.html#TOC15">Spanning Hosts</A></H2>
-<P>
-<A NAME="IDX89"></A>
-<A NAME="IDX90"></A>
-
-
-<P>
-Wget's recursive retrieval normally refuses to visit hosts different
-than the one you specified on the command line.  This is a reasonable
-default; without it, every retrieval would have the potential to turn
-your Wget into a small version of Google.
-
-
-<P>
-However, visiting different hosts, or <EM>host spanning,</EM> is sometimes
-a useful option.  Maybe the images are served from a different server.
-Maybe you're mirroring a site that consists of pages interlinked between
-three servers.  Maybe the server has two equivalent names, and the HTML
-pages refer to both interchangeably.
-
-
-<DL COMPACT>
-
-<DT>Span to any host---<SAMP>`-H'</SAMP>
-<DD>
-The <SAMP>`-H'</SAMP> option turns on host spanning, thus allowing Wget's
-recursive run to visit any host referenced by a link.  Unless sufficient
-recursion-limiting criteria are applied, these foreign hosts will
-typically link to yet more hosts, and so on until Wget ends up sucking
-up much more data than you have intended.
-
-<DT>Limit spanning to certain domains---<SAMP>`-D'</SAMP>
-<DD>
-The <SAMP>`-D'</SAMP> option allows you to specify the domains that will be
-followed, thus limiting the recursion only to the hosts that belong to
-these domains.  Obviously, this makes sense only in conjunction with
-<SAMP>`-H'</SAMP>.  A typical example would be downloading the contents of
-<SAMP>`www.server.com'</SAMP>, but allowing downloads from
-<SAMP>`images.server.com'</SAMP>, etc.:
-
-
-<PRE>
-wget -rH -Dserver.com http://www.server.com/
-</PRE>
-
-You can specify more than one address by separating them with a comma,
-e.g. <SAMP>`-Ddomain1.com,domain2.com'</SAMP>.
-
-<DT>Keep download off certain domains---<SAMP>`--exclude-domains'</SAMP>
-<DD>
-If there are domains you want to exclude specifically, you can do it
-with <SAMP>`--exclude-domains'</SAMP>, which accepts the same type of arguments
-as <SAMP>`-D'</SAMP>, but will <EM>exclude</EM> all the listed domains.  For
-example, if you want to download all the hosts from the <SAMP>`foo.edu'</SAMP>
-domain, with the exception of <SAMP>`sunsite.foo.edu'</SAMP>, you can do it like
-this:
-
-
-<PRE>
-wget -rH -Dfoo.edu --exclude-domains sunsite.foo.edu \
-    http://www.foo.edu/
-</PRE>
-
-</DL>
-
-
-
-<H2><A NAME="SEC16" HREF="wget_toc.html#TOC16">Types of Files</A></H2>
-<P>
-<A NAME="IDX91"></A>
-
-
-<P>
-When downloading material from the web, you will often want to restrict
-the retrieval to only certain file types.  For example, if you are
-interested in downloading GIFs, you will not be overjoyed to get
-loads of PostScript documents, and vice versa.
-
-
-<P>
-Wget offers two options to deal with this problem.  Each option
-description lists a short name, a long name, and the equivalent command
-in <TT>`.wgetrc'</TT>.
-
-
-<P>
-<A NAME="IDX92"></A>
-<A NAME="IDX93"></A>
-<A NAME="IDX94"></A>
-<A NAME="IDX95"></A>
-<DL COMPACT>
-
-<DT><SAMP>`-A <VAR>acclist</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--accept <VAR>acclist</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`accept = <VAR>acclist</VAR>'</SAMP>
-<DD>
-The argument to the <SAMP>`--accept'</SAMP> option is a list of file suffixes or
-patterns that Wget will download during recursive retrieval.  A suffix
-is the ending part of a file name, and consists of "normal" letters,
-e.g. <SAMP>`gif'</SAMP> or <SAMP>`.jpg'</SAMP>.  A matching pattern contains shell-like
-wildcards, e.g. <SAMP>`books*'</SAMP> or <SAMP>`zelazny*196[0-9]*'</SAMP>.
-
-So, specifying <SAMP>`wget -A gif,jpg'</SAMP> will make Wget download only the
-files ending with <SAMP>`gif'</SAMP> or <SAMP>`jpg'</SAMP>, i.e. GIFs and
-JPEGs.  On the other hand, <SAMP>`wget -A "zelazny*196[0-9]*"'</SAMP> will
-download only files beginning with <SAMP>`zelazny'</SAMP> and containing numbers
-from 1960 to 1969 anywhere within.  Look up the manual of your shell for
-a description of how pattern matching works.
-
-Of course, any number of suffixes and patterns can be combined into a
-comma-separated list, and given as an argument to <SAMP>`-A'</SAMP>.
-
-<A NAME="IDX96"></A>
-<A NAME="IDX97"></A>
-<A NAME="IDX98"></A>
-<A NAME="IDX99"></A>
-<DT><SAMP>`-R <VAR>rejlist</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--reject <VAR>rejlist</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`reject = <VAR>rejlist</VAR>'</SAMP>
-<DD>
-The <SAMP>`--reject'</SAMP> option works the same way as 
<SAMP>`--accept'</SAMP>, only
-its logic is the reverse; Wget will download all files <EM>except</EM> the
-ones matching the suffixes (or patterns) in the list.
-
-So, if you want to download a whole page except for the cumbersome
-MPEGs and .AU files, you can use <SAMP>`wget -R mpg,mpeg,au'</SAMP>.
-Analogously, to download all files except the ones beginning with
-<SAMP>`bjork'</SAMP>, use <SAMP>`wget -R "bjork*"'</SAMP>.  The quotes are to
-prevent expansion by the shell.
-</DL>
-
-<P>
-The <SAMP>`-A'</SAMP> and <SAMP>`-R'</SAMP> options may be combined for even
-finer control over which files to retrieve.  E.g. <SAMP>`wget -A
-"*zelazny*" -R .ps'</SAMP> will download all the files having
-<SAMP>`zelazny'</SAMP> as a part of their name, but <EM>not</EM> the
-PostScript files.
-
-
-<P>
-Note that these two options do not affect the downloading of HTML
-files; Wget must load all the HTML files to know where to go at
-all--recursive retrieval would make no sense otherwise.
-
-
-
-
-<H2><A NAME="SEC17" HREF="wget_toc.html#TOC17">Directory-Based Limits</A></H2>
-<P>
-<A NAME="IDX100"></A>
-<A NAME="IDX101"></A>
-
-
-<P>
-Regardless of other link-following facilities, it is often useful to
-restrict which files to retrieve based on the directories those files
-reside in.  There can be many reasons for this--the home pages may be
-organized in a reasonable directory structure; or some directories may
-contain useless information, e.g. the <TT>`/cgi-bin'</TT> or
-<TT>`/dev'</TT> directories.
-
-
-<P>
-Wget offers three different options to deal with this requirement.  Each
-option description lists a short name, a long name, and the equivalent
-command in <TT>`.wgetrc'</TT>.
-
-
-<P>
-<A NAME="IDX102"></A>
-<A NAME="IDX103"></A>
-<A NAME="IDX104"></A>
-<DL COMPACT>
-
-<DT><SAMP>`-I <VAR>list</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--include <VAR>list</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`include_directories = <VAR>list</VAR>'</SAMP>
-<DD>
-The <SAMP>`-I'</SAMP> option accepts a comma-separated list of directories to
-be included in the retrieval.  Any other directories will simply be
-ignored.  The directories are absolute paths.
-
-So, if you wish to download from <SAMP>`http://host/people/bozo/'</SAMP>
-following only links to bozo's colleagues in the <TT>`/people'</TT>
-directory and the bogus scripts in <TT>`/cgi-bin'</TT>, you can specify:
-
-
-<PRE>
-wget -I /people,/cgi-bin http://host/people/bozo/
-</PRE>
-
-<A NAME="IDX105"></A>
-<A NAME="IDX106"></A>
-<A NAME="IDX107"></A>
-<DT><SAMP>`-X <VAR>list</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--exclude <VAR>list</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`exclude_directories = <VAR>list</VAR>'</SAMP>
-<DD>
-The <SAMP>`-X'</SAMP> option is exactly the reverse of <SAMP>`-I'</SAMP>---this
-is a list of directories <EM>excluded</EM> from the download.  E.g. if you
-do not want Wget to download things from the <TT>`/cgi-bin'</TT> directory,
-specify <SAMP>`-X /cgi-bin'</SAMP> on the command line.
-
-As with <SAMP>`-A'</SAMP>/<SAMP>`-R'</SAMP>, these two options can be combined
-for finer control over which subdirectories to download.  E.g. if you
-want to load all the files from the <TT>`/pub'</TT> hierarchy except for
-<TT>`/pub/worthless'</TT>, specify <SAMP>`-I/pub -X/pub/worthless'</SAMP>.
-
-<A NAME="IDX108"></A>
-<DT><SAMP>`-np'</SAMP>
-<DD>
-<DT><SAMP>`--no-parent'</SAMP>
-<DD>
-<DT><SAMP>`no_parent = on'</SAMP>
-<DD>
-The simplest, and often very useful, way of limiting directories is
-disallowing retrieval of the links that refer to the hierarchy
-<EM>above</EM> the beginning directory, i.e. disallowing ascent to the
-parent directory/directories.
-
-The <SAMP>`--no-parent'</SAMP> option (short <SAMP>`-np'</SAMP>) is useful
-in this case.  Using it guarantees that you will never leave the existing
-hierarchy.  Suppose you issue Wget with:
-
-
-<PRE>
-wget -r --no-parent http://somehost/~luzer/my-archive/
-</PRE>
-
-You may rest assured that none of the references to
-<TT>`/~his-girls-homepage/'</TT> or <TT>`/~luzer/all-my-mpegs/'</TT> will be
-followed.  Only the archive you are interested in will be downloaded.
-Essentially, <SAMP>`--no-parent'</SAMP> is similar to
-<SAMP>`-I/~luzer/my-archive'</SAMP>, only it handles redirections in a more
-intelligent fashion.
-</DL>
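The directory limits described above can also be made permanent in <TT>`.wgetrc'</TT>.  A hypothetical fragment; the paths are examples only:

```
# Hypothetical .wgetrc fragment: restrict recursion to /pub, skip
# /pub/worthless, and never ascend past the starting directory.
include_directories = /pub
exclude_directories = /pub/worthless
no_parent = on
```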
-
-
-
-<H2><A NAME="SEC18" HREF="wget_toc.html#TOC18">Relative Links</A></H2>
-<P>
-<A NAME="IDX109"></A>
-
-
-<P>
-When <SAMP>`-L'</SAMP> is turned on, only relative links are ever followed.
-Relative links are here defined as those that do not refer to the web
-server root.  For example, these links are relative:
-
-
-
-<PRE>
-&#60;a href="foo.gif"&#62;
-&#60;a href="foo/bar.gif"&#62;
-&#60;a href="../foo/bar.gif"&#62;
-</PRE>
-
-<P>
-These links are not relative:
-
-
-
-<PRE>
-&#60;a href="/foo.gif"&#62;
-&#60;a href="/foo/bar.gif"&#62;
-&#60;a href="http://www.server.com/foo/bar.gif"&#62;
-</PRE>
-
-<P>
-Using this option guarantees that recursive retrieval will not span
-hosts, even without <SAMP>`-H'</SAMP>.  In simple cases it also allows
-downloads to "just work" without having to convert links.
-
-
-<P>
-This option is probably not very useful and might be removed in a future
-release.
-
-
-
-
-<H2><A NAME="SEC19" HREF="wget_toc.html#TOC19">Following FTP Links</A></H2>
-<P>
-<A NAME="IDX110"></A>
-
-
-<P>
-The rules for FTP are somewhat specific, as they need to be.  FTP
-links in HTML documents are often included for purposes of reference,
-and it is often inconvenient to download them by default.
-
-
-<P>
-To have FTP links followed from HTML documents, you need to
-specify the <SAMP>`--follow-ftp'</SAMP> option.  Having done that, FTP
-links will span hosts regardless of the <SAMP>`-H'</SAMP> setting.  This is
-logical, as FTP links rarely point to the same host where the HTTP
-server resides.  For similar reasons, the <SAMP>`-L'</SAMP> option has no
-effect on such downloads.  On the other hand, domain acceptance
-(<SAMP>`-D'</SAMP>) and suffix rules (<SAMP>`-A'</SAMP> and <SAMP>`-R'</SAMP>)
-apply normally.
-
-
-<P>
-Also note that links to FTP directories that are followed will not
-themselves be retrieved recursively.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_3.html">previous</A>, <A HREF="wget_5.html">next</A>, <A HREF="wget_11.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_chapter/wget_5.html
===================================================================
RCS file: manual/wget-1.8.1/html_chapter/wget_5.html
diff -N manual/wget-1.8.1/html_chapter/wget_5.html
--- manual/wget-1.8.1/html_chapter/wget_5.html  19 Oct 2003 23:07:43 -0000      1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,240 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Time-Stamping</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_4.html">previous</A>, <A HREF="wget_6.html">next</A>, <A HREF="wget_11.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-<P><HR><P>
-
-
-<H1><A NAME="SEC20" HREF="wget_toc.html#TOC20">Time-Stamping</A></H1>
-<P>
-<A NAME="IDX111"></A>
-<A NAME="IDX112"></A>
-<A NAME="IDX113"></A>
-<A NAME="IDX114"></A>
-
-
-<P>
-One of the most important aspects of mirroring information from the
-Internet is updating your archives.
-
-
-<P>
-Downloading the whole archive again and again, just to replace a few
-changed files, is expensive, both in terms of wasted bandwidth and money
-and in the time needed to do the update.  This is why all the mirroring
-tools offer the option of incremental updating.
-
-
-<P>
-Such an updating mechanism means that the remote server is scanned in
-search of <EM>new</EM> files.  Only those new files will be downloaded in
-the place of the old ones.
-
-
-<P>
-A file is considered new if one of these two conditions is met:
-
-
-
-<OL>
-<LI>
-
-A file of that name does not already exist locally.
-
-<LI>
-
-A file of that name does exist, but the remote file was modified more
-recently than the local file.
-</OL>
-
-<P>
-To implement this, the program needs to be aware of the time of last
-modification of both local and remote files.  We call this information the
-<EM>time-stamp</EM> of a file.
-
-
-<P>
-Time-stamping in GNU Wget is turned on using the <SAMP>`--timestamping'</SAMP>
-(<SAMP>`-N'</SAMP>) option, or through the <CODE>timestamping = on</CODE>
-directive in <TT>`.wgetrc'</TT>.  With this option, for each file it intends
-to download, Wget will check whether a local file of the same name exists.
-If it does, and the remote file is older, Wget will not download it.
-
-
-<P>
-If the local file does not exist, or the sizes of the files do not
-match, Wget will download the remote file no matter what the time-stamps
-say.
-
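The decision just described can be sketched as a tiny shell predicate.  This is an illustrative sketch only, not Wget's actual implementation, and it leaves out the size comparison mentioned above:

```shell
# Hypothetical sketch of the time-stamping decision: fetch when the
# local copy is missing or older than the remote timestamp.
# Arguments: local mtime (epoch seconds, empty if absent), remote mtime.
should_download() {
  local_mtime=$1 remote_mtime=$2
  if [ -z "$local_mtime" ] || [ "$remote_mtime" -gt "$local_mtime" ]; then
    echo yes
  else
    echo no
  fi
}
should_download ""         1000000000   # no local file
should_download 999999999  1000000000   # remote is newer
should_download 1000000000 999999999    # local is up to date
```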
-
-
-
-<H2><A NAME="SEC21" HREF="wget_toc.html#TOC21">Time-Stamping Usage</A></H2>
-<P>
-<A NAME="IDX115"></A>
-<A NAME="IDX116"></A>
-
-
-<P>
-The usage of time-stamping is simple.  Say you would like to download a
-file so that it keeps its date of modification.
-
-
-
-<PRE>
-wget -S http://www.gnu.ai.mit.edu/
-</PRE>
-
-<P>
-A simple <CODE>ls -l</CODE> shows that the time stamp on the local file
-matches the <CODE>Last-Modified</CODE> header returned by the server.
-As you can see, the time-stamping info is preserved locally, even
-without <SAMP>`-N'</SAMP> (at least for HTTP).
-
-
-<P>
-Several days later, you would like Wget to check if the remote file has
-changed, and download it if it has.
-
-
-
-<PRE>
-wget -N http://www.gnu.ai.mit.edu/
-</PRE>
-
-<P>
-Wget will ask the server for the last-modified date.  If the local file
-has the same timestamp as the server, or a newer one, the remote file
-will not be re-fetched.  However, if the remote file is more recent,
-Wget will proceed to fetch it.
-
-
-<P>
-The same goes for FTP.  For example:
-
-
-
-<PRE>
-wget "ftp://ftp.ifi.uio.no/pub/emacs/gnus/*"
-</PRE>
-
-<P>
-(The quotes around that URL are to prevent the shell from trying to
-interpret the <SAMP>`*'</SAMP>.)
-
-
-<P>
-After download, a local directory listing will show that the timestamps
-match those on the remote server.  Reissuing the command with <SAMP>`-N'</SAMP>
-will make Wget re-fetch <EM>only</EM> the files that have been modified
-since the last download.
-
-
-<P>
-If you wished to mirror the GNU archive every week, you would run a
-command like the following once a week:
-
-
-
-<PRE>
-wget --timestamping -r ftp://ftp.gnu.org/pub/gnu/
-</PRE>
-
-<P>
-Note that time-stamping will only work for files for which the server
-gives a timestamp.  For HTTP, this depends on getting a
-<CODE>Last-Modified</CODE> header.  For FTP, this depends on getting a
-directory listing with dates in a format that Wget can parse
-(see section <A HREF="wget_5.html#SEC23">FTP Time-Stamping Internals</A>).
-
-
-
-
-<H2><A NAME="SEC22" HREF="wget_toc.html#TOC22">HTTP Time-Stamping Internals</A></H2>
-<P>
-<A NAME="IDX117"></A>
-
-
-<P>
-Time-stamping in HTTP is implemented by checking the
-<CODE>Last-Modified</CODE> header.  If you wish to retrieve the file
-<TT>`foo.html'</TT> through HTTP, Wget will check whether
-<TT>`foo.html'</TT> exists locally.  If it doesn't, <TT>`foo.html'</TT> will be
-retrieved unconditionally.
-
-
-<P>
-If the file does exist locally, Wget will first check its local
-time-stamp (similar to the way <CODE>ls -l</CODE> checks it), and then send a
-<CODE>HEAD</CODE> request to the remote server, requesting information about
-the remote file.
-
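Such a check might look roughly like the following HTTP exchange; the host, size, and date are invented for illustration:

```
HEAD /foo.html HTTP/1.0
Host: www.example.com

HTTP/1.0 200 OK
Content-Length: 5406
Last-Modified: Thu, 17 Jan 2002 10:00:00 GMT
```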
-
-<P>
-The <CODE>Last-Modified</CODE> header is examined to find which file was
-modified more recently (which makes it "newer").  If the remote file
-is newer, it will be downloaded; if it is older, Wget will give
-up.<A NAME="DOCF2" HREF="wget_foot.html#FOOT2">(2)</A>
-
-
-<P>
-When <SAMP>`--backup-converted'</SAMP> (<SAMP>`-K'</SAMP>) is specified in
-conjunction with <SAMP>`-N'</SAMP>, server file <SAMP>`<VAR>X</VAR>'</SAMP> is
-compared to local file <SAMP>`<VAR>X</VAR>.orig'</SAMP>, if extant, rather
-than to local file <SAMP>`<VAR>X</VAR>'</SAMP>, which will always differ if it
-has been converted by <SAMP>`--convert-links'</SAMP> (<SAMP>`-k'</SAMP>).
-
-
-<P>
-Arguably, HTTP time-stamping should be implemented using the
-<CODE>If-Modified-Since</CODE> request.
-
-
-
-
-<H2><A NAME="SEC23" HREF="wget_toc.html#TOC23">FTP Time-Stamping Internals</A></H2>
-<P>
-<A NAME="IDX118"></A>
-
-
-<P>
-In theory, FTP time-stamping works much the same as for HTTP, except
-that FTP has no headers--time-stamps must be ferreted out of directory
-listings.
-
-
-<P>
-If an FTP download is recursive or uses globbing, Wget will use the
-FTP <CODE>LIST</CODE> command to get a file listing for the directory
-containing the desired file(s).  It will try to analyze the listing,
-treating it like Unix <CODE>ls -l</CODE> output, extracting the time-stamps.
-The rest is exactly the same as for HTTP.  Note that when
-retrieving individual files from an FTP server without using
-globbing or recursion, listing files will not be downloaded (and thus
-files will not be time-stamped) unless <SAMP>`-N'</SAMP> is specified.
-
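A Unix-style listing line of the kind Wget tries to parse looks like the following; the entry is invented for illustration, and only the date and name fields matter here:

```
-rw-r--r--   1 ftp      ftp        104857 Jan 17  2002 wget-1.8.1.tar.gz
```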
-
-<P>
-The assumption that every directory listing is a Unix-style listing may
-sound extremely constraining, but in practice it is not, as many
-non-Unix FTP servers use the Unixoid listing format because most
-(all?) of the clients understand it.  Bear in mind that RFC 959
-defines no standard way to get a file list, let alone the time-stamps.
-We can only hope that a future standard will define this.
-
-
-<P>
-Another non-standard solution is the <CODE>MDTM</CODE> command, supported
-by some FTP servers (including the popular <CODE>wu-ftpd</CODE>), which
-returns the exact modification time of the specified file.  Wget may
-support this command in the future.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_4.html">previous</A>, <A HREF="wget_6.html">next</A>, <A HREF="wget_11.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_chapter/wget_6.html
===================================================================
RCS file: manual/wget-1.8.1/html_chapter/wget_6.html
diff -N manual/wget-1.8.1/html_chapter/wget_6.html
--- manual/wget-1.8.1/html_chapter/wget_6.html  19 Oct 2003 23:07:43 -0000      1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,612 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Startup File</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_5.html">previous</A>, <A HREF="wget_7.html">next</A>, <A HREF="wget_11.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-<P><HR><P>
-
-
-<H1><A NAME="SEC24" HREF="wget_toc.html#TOC24">Startup File</A></H1>
-<P>
-<A NAME="IDX119"></A>
-<A NAME="IDX120"></A>
-<A NAME="IDX121"></A>
-<A NAME="IDX122"></A>
-<A NAME="IDX123"></A>
-
-
-<P>
-Once you know how to change default settings of Wget through command
-line arguments, you may wish to make some of those settings permanent.
-You can do that in a convenient way by creating the Wget startup
-file---<TT>`.wgetrc'</TT>.
-
-
-<P>
-Although <TT>`.wgetrc'</TT> is the "main" initialization file, it is
-convenient to have a special facility for storing passwords.  Thus Wget
-reads and interprets the contents of <TT>`$HOME/.netrc'</TT>, if it finds
-it.  The <TT>`.netrc'</TT> format is described in your system manuals.
-
-
-<P>
-Wget reads <TT>`.wgetrc'</TT> upon startup, recognizing a limited set of
-commands.
-
-
-
-
-<H2><A NAME="SEC25" HREF="wget_toc.html#TOC25">Wgetrc Location</A></H2>
-<P>
-<A NAME="IDX124"></A>
-<A NAME="IDX125"></A>
-
-
-<P>
-When initializing, Wget will look for a <EM>global</EM> startup file,
-<TT>`/usr/local/etc/wgetrc'</TT> by default (or some prefix other than
-<TT>`/usr/local'</TT>, if Wget was not installed there) and read commands
-from there, if it exists.
-
-
-<P>
-Then it will look for the user's file.  If the environment variable
-<CODE>WGETRC</CODE> is set, Wget will try to load that file.  Failing that, no
-further attempts will be made.
-
-
-<P>
-If <CODE>WGETRC</CODE> is not set, Wget will try to load <TT>`$HOME/.wgetrc'</TT>.
-
-
-<P>
-The fact that the user's settings are loaded after the system-wide ones
-means that in case of collision the user's wgetrc <EM>overrides</EM> the
-system-wide wgetrc (in <TT>`/usr/local/etc/wgetrc'</TT> by default).
-Fascist admins, away!
-
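The lookup order for the user's file can be sketched in shell.  This is an illustrative sketch of the precedence rules only, not Wget's code; the global file is read first in either case:

```shell
# Hypothetical sketch of where Wget looks for the user's startup file:
# $WGETRC wins if set; otherwise $HOME/.wgetrc is tried.
wgetrc_path() {
  if [ -n "$WGETRC" ]; then
    echo "$WGETRC"
  else
    echo "$HOME/.wgetrc"
  fi
}
WGETRC=/tmp/example-wgetrc
wgetrc_path
```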
-
-
-
-<H2><A NAME="SEC26" HREF="wget_toc.html#TOC26">Wgetrc Syntax</A></H2>
-<P>
-<A NAME="IDX126"></A>
-<A NAME="IDX127"></A>
-
-
-<P>
-The syntax of a wgetrc command is simple:
-
-
-
-<PRE>
-variable = value
-</PRE>
-
-<P>
-The <EM>variable</EM> will also be called <EM>command</EM>.  Valid
-<EM>values</EM> are different for different commands.
-
-
-<P>
-The commands are case-insensitive and underscore-insensitive.  Thus
-<SAMP>`DIr__PrefiX'</SAMP> is the same as <SAMP>`dirprefix'</SAMP>.  Empty
-lines, lines beginning with <SAMP>`#'</SAMP>, and lines containing only
-white-space are discarded.
-
-
-<P>
-Commands that expect a comma-separated list will clear the list on an
-empty command.  So, if you wish to reset the rejection list specified in
-the global <TT>`wgetrc'</TT>, you can do it with:
-
-
-
-<PRE>
-reject =
-</PRE>
-
-
-
-<H2><A NAME="SEC27" HREF="wget_toc.html#TOC27">Wgetrc Commands</A></H2>
-<P>
-<A NAME="IDX128"></A>
-
-
-<P>
-The complete set of commands is listed below.  Legal values are listed
-after the <SAMP>`='</SAMP>.  Simple Boolean values can be set or unset using
-<SAMP>`on'</SAMP> and <SAMP>`off'</SAMP> or <SAMP>`1'</SAMP> and
-<SAMP>`0'</SAMP>.  A fancier kind of Boolean allowed in some cases is the
-<EM>lockable Boolean</EM>, which may be set to <SAMP>`on'</SAMP>,
-<SAMP>`off'</SAMP>, <SAMP>`always'</SAMP>, or <SAMP>`never'</SAMP>.  If an
-option is set to <SAMP>`always'</SAMP> or <SAMP>`never'</SAMP>, that value
-will be locked in for the duration of the Wget invocation--commandline
-options will not override it.
-
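For instance, a user whose firewall requires passive FTP could lock the value so that command-line options cannot change it.  A hypothetical <TT>`.wgetrc'</TT> fragment:

```
# `always' locks the setting for the whole invocation; a conflicting
# command-line option cannot override it.
passive_ftp = always
```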
-
-<P>
-Some commands take pseudo-arbitrary values.  <VAR>address</VAR> values can be
-hostnames or dotted-quad IP addresses.  <VAR>n</VAR> can be any positive
-integer, or <SAMP>`inf'</SAMP> for infinity, where appropriate.
-<VAR>string</VAR> values can be any non-empty string.
-
-
-<P>
-Most of these commands have commandline equivalents (see section
-<A HREF="wget_2.html#SEC2">Invoking</A>), though some of the more obscure
-or rarely used ones do not.
-
-
-<DL COMPACT>
-
-<DT>accept/reject = <VAR>string</VAR>
-<DD>
-Same as <SAMP>`-A'</SAMP>/<SAMP>`-R'</SAMP> (see section
-<A HREF="wget_4.html#SEC16">Types of Files</A>).
-
-<DT>add_hostdir = on/off
-<DD>
-Enable/disable host-prefixed file names.  <SAMP>`-nH'</SAMP> disables it.
-
-<DT>continue = on/off
-<DD>
-If set to on, force continuation of preexistent partially retrieved
-files.  See <SAMP>`-c'</SAMP> before setting it.
-
-<DT>background = on/off
-<DD>
-Enable/disable going to background--the same as <SAMP>`-b'</SAMP> (which
-enables it).
-
-<DT>backup_converted = on/off
-<DD>
-Enable/disable saving pre-converted files with the suffix
-<SAMP>`.orig'</SAMP>---the same as <SAMP>`-K'</SAMP> (which enables it).
-
-<DT>base = <VAR>string</VAR>
-<DD>
-Consider relative URLs in URL input files forced to be
-interpreted as HTML as being relative to <VAR>string</VAR>---the same as
-<SAMP>`-B'</SAMP>.
-
-<DT>bind_address = <VAR>address</VAR>
-<DD>
-Bind to <VAR>address</VAR>, like the <SAMP>`--bind-address'</SAMP> option.
-
-<DT>cache = on/off
-<DD>
-When set to off, disallow server-caching.  See the <SAMP>`-C'</SAMP> option.
-
-<DT>convert_links = on/off
-<DD>
-Convert non-relative links locally.  The same as <SAMP>`-k'</SAMP>.
-
-<DT>cookies = on/off
-<DD>
-When set to off, disallow cookies.  See the <SAMP>`--cookies'</SAMP> option.
-
-<DT>load_cookies = <VAR>file</VAR>
-<DD>
-Load cookies from <VAR>file</VAR>.  See <SAMP>`--load-cookies'</SAMP>.
-
-<DT>save_cookies = <VAR>file</VAR>
-<DD>
-Save cookies to <VAR>file</VAR>.  See <SAMP>`--save-cookies'</SAMP>.
-
-<DT>cut_dirs = <VAR>n</VAR>
-<DD>
-Ignore <VAR>n</VAR> remote directory components.
-
-<DT>debug = on/off
-<DD>
-Debug mode, same as <SAMP>`-d'</SAMP>.
-
-<DT>delete_after = on/off
-<DD>
-Delete after download--the same as <SAMP>`--delete-after'</SAMP>.
-
-<DT>dir_prefix = <VAR>string</VAR>
-<DD>
-Top of directory tree--the same as <SAMP>`-P'</SAMP>.
-
-<DT>dirstruct = on/off
-<DD>
-Turning dirstruct on or off--the same as <SAMP>`-x'</SAMP> or
-<SAMP>`-nd'</SAMP>, respectively.
-
-<DT>domains = <VAR>string</VAR>
-<DD>
-Same as <SAMP>`-D'</SAMP> (see section
-<A HREF="wget_4.html#SEC15">Spanning Hosts</A>).
-
-<DT>dot_bytes = <VAR>n</VAR>
-<DD>
-Specify the number of bytes "contained" in a dot, as seen throughout
-the retrieval (1024 by default).  You can postfix the value with
-<SAMP>`k'</SAMP> or <SAMP>`m'</SAMP>, representing kilobytes and megabytes,
-respectively.  With dot settings you can tailor the dot retrieval to
-suit your needs, or you can use the predefined <EM>styles</EM>
-(see section <A HREF="wget_2.html#SEC7">Download Options</A>).
-
-<DT>dots_in_line = <VAR>n</VAR>
-<DD>
-Specify the number of dots that will be printed in each line throughout
-the retrieval (50 by default).
-
-<DT>dot_spacing = <VAR>n</VAR>
-<DD>
-Specify the number of dots in a single cluster (10 by default).
-
-<DT>exclude_directories = <VAR>string</VAR>
-<DD>
-Specify a comma-separated list of directories you wish to exclude from
-download--the same as <SAMP>`-X'</SAMP> (see section
-<A HREF="wget_4.html#SEC17">Directory-Based Limits</A>).
-
-<DT>exclude_domains = <VAR>string</VAR>
-<DD>
-Same as <SAMP>`--exclude-domains'</SAMP> (see section
-<A HREF="wget_4.html#SEC15">Spanning Hosts</A>).
-
-<DT>follow_ftp = on/off
-<DD>
-Follow FTP links from HTML documents--the same as
-<SAMP>`--follow-ftp'</SAMP>.
-
-<DT>follow_tags = <VAR>string</VAR>
-<DD>
-Only follow certain HTML tags when doing a recursive retrieval, just like
-<SAMP>`--follow-tags'</SAMP>.
-
-<DT>force_html = on/off
-<DD>
-If set to on, force the input filename to be regarded as an HTML
-document--the same as <SAMP>`-F'</SAMP>.
-
-<DT>ftp_proxy = <VAR>string</VAR>
-<DD>
-Use <VAR>string</VAR> as FTP proxy, instead of the one specified in
-environment.
-
-<DT>glob = on/off
-<DD>
-Turn globbing on/off--the same as <SAMP>`-g'</SAMP>.
-
-<DT>header = <VAR>string</VAR>
-<DD>
-Define an additional header, like <SAMP>`--header'</SAMP>.
-
-<DT>html_extension = on/off
-<DD>
-Add a <SAMP>`.html'</SAMP> extension to <SAMP>`text/html'</SAMP> files without 
it, like
-<SAMP>`-E'</SAMP>.
-
-<DT>http_passwd = <VAR>string</VAR>
-<DD>
-Set HTTP password.
-
-<DT>http_proxy = <VAR>string</VAR>
-<DD>
-Use <VAR>string</VAR> as HTTP proxy, instead of the one specified in
-environment.
-
-<DT>http_user = <VAR>string</VAR>
-<DD>
-Set HTTP user to <VAR>string</VAR>.
-
-<DT>ignore_length = on/off
-<DD>
-When set to on, ignore <CODE>Content-Length</CODE> header; the same as
-<SAMP>`--ignore-length'</SAMP>.
-
-<DT>ignore_tags = <VAR>string</VAR>
-<DD>
-Ignore certain HTML tags when doing a recursive retrieval, just like
-<SAMP>`-G'</SAMP> / <SAMP>`--ignore-tags'</SAMP>.
-
-<DT>include_directories = <VAR>string</VAR>
-<DD>
-Specify a comma-separated list of directories you wish to follow when
-downloading--the same as <SAMP>`-I'</SAMP>.
-
-<DT>input = <VAR>string</VAR>
-<DD>
-Read the URLs from <VAR>string</VAR>, like <SAMP>`-i'</SAMP>.
-
-<DT>kill_longer = on/off
-<DD>
-Consider data received beyond the length specified in the
-<CODE>Content-Length</CODE> header as invalid (and retry getting it).  The
-default behaviour is to save as much data as there is, provided the amount
-is greater than or equal to the value in <CODE>Content-Length</CODE>.
-
-<DT>logfile = <VAR>string</VAR>
-<DD>
-Set logfile--the same as <SAMP>`-o'</SAMP>.
-
-<DT>login = <VAR>string</VAR>
-<DD>
-Your user name on the remote machine, for FTP.  Defaults to
-<SAMP>`anonymous'</SAMP>.
-
-<DT>mirror = on/off
-<DD>
-Turn mirroring on/off.  The same as <SAMP>`-m'</SAMP>.
-
-<DT>netrc = on/off
-<DD>
-Turn reading netrc on or off.
-
-<DT>noclobber = on/off
-<DD>
-Same as <SAMP>`-nc'</SAMP>.
-
-<DT>no_parent = on/off
-<DD>
-Disallow retrieving outside the directory hierarchy, like
-<SAMP>`--no-parent'</SAMP> (see section
-<A HREF="wget_4.html#SEC17">Directory-Based Limits</A>).
-
-<DT>no_proxy = <VAR>string</VAR>
-<DD>
-Use <VAR>string</VAR> as the comma-separated list of domains to avoid in
-proxy loading, instead of the one specified in environment.
-
-<DT>output_document = <VAR>string</VAR>
-<DD>
-Set the output filename--the same as <SAMP>`-O'</SAMP>.
-
-<DT>page_requisites = on/off
-<DD>
-Download all ancillary documents necessary for a single HTML page to
-display properly--the same as <SAMP>`-p'</SAMP>.
-
-<DT>passive_ftp = on/off/always/never
-<DD>
-Set passive FTP---the same as <SAMP>`--passive-ftp'</SAMP>.  Some scripts
-and <SAMP>`.pm'</SAMP> (Perl module) files download files using <SAMP>`wget
---passive-ftp'</SAMP>.  If your firewall does not allow this, you can set
-<SAMP>`passive_ftp = never'</SAMP> to override the commandline.
-
-<DT>passwd = <VAR>string</VAR>
-<DD>
-Set your FTP password to <VAR>string</VAR>.  Without this setting, the
-password defaults to <SAMP>address@hidden'</SAMP>.
-
-<DT>progress = <VAR>string</VAR>
-<DD>
-Set the type of the progress indicator.  Legal types are "dot" and
-"bar".
-
-<DT>proxy_user = <VAR>string</VAR>
-<DD>
-Set proxy authentication user name to <VAR>string</VAR>, like
-<SAMP>`--proxy-user'</SAMP>.
-
-<DT>proxy_passwd = <VAR>string</VAR>
-<DD>
-Set proxy authentication password to <VAR>string</VAR>, like
-<SAMP>`--proxy-passwd'</SAMP>.
-
-<DT>referer = <VAR>string</VAR>
-<DD>
-Set HTTP <SAMP>`Referer:'</SAMP> header just like <SAMP>`--referer'</SAMP>.  
(Note it
-was the folks who wrote the HTTP spec who got the spelling of
-"referrer" wrong.)
-
-<DT>quiet = on/off
-<DD>
-Quiet mode--the same as <SAMP>`-q'</SAMP>.
-
-<DT>quota = <VAR>quota</VAR>
-<DD>
-Specify the download quota, which is useful to put in the global
-<TT>`wgetrc'</TT>.  When a download quota is specified, Wget will stop
-retrieving after the download total exceeds the quota.  The quota can be
-specified in bytes (default), kbytes (<SAMP>`k'</SAMP> appended) or mbytes
-(<SAMP>`m'</SAMP> appended).  Thus <SAMP>`quota = 5m'</SAMP> will set the
-quota to 5 mbytes.  Note that the user's startup file overrides system
-settings.
-
-<DT>reclevel = <VAR>n</VAR>
-<DD>
-Recursion level--the same as <SAMP>`-l'</SAMP>.
-
-<DT>recursive = on/off
-<DD>
-Recursive on/off--the same as <SAMP>`-r'</SAMP>.
-
-<DT>relative_only = on/off
-<DD>
-Follow only relative links--the same as <SAMP>`-L'</SAMP> (see section
-<A HREF="wget_4.html#SEC18">Relative Links</A>).
-
-<DT>remove_listing = on/off
-<DD>
-If set to on, remove FTP listings downloaded by Wget.  Setting it
-to off is the same as <SAMP>`-nr'</SAMP>.
-
-<DT>retr_symlinks = on/off
-<DD>
-When set to on, retrieve symbolic links as if they were plain files; the
-same as <SAMP>`--retr-symlinks'</SAMP>.
-
-<DT>robots = on/off
-<DD>
-Use (or not) the <TT>`/robots.txt'</TT> file (see section
-<A HREF="wget_9.html#SEC41">Robots</A>).  Be sure you know what you are
-doing before changing the default (which is <SAMP>`on'</SAMP>).
-
-<DT>server_response = on/off
-<DD>
-Choose whether or not to print the HTTP and FTP server
-responses--the same as <SAMP>`-S'</SAMP>.
-
-<DT>span_hosts = on/off
-<DD>
-Same as <SAMP>`-H'</SAMP>.
-
-<DT>timeout = <VAR>n</VAR>
-<DD>
-Set timeout value--the same as <SAMP>`-T'</SAMP>.
-
-<DT>timestamping = on/off
-<DD>
-Turn timestamping on/off.  The same as <SAMP>`-N'</SAMP> (see section
-<A HREF="wget_5.html#SEC20">Time-Stamping</A>).
-
-<DT>tries = <VAR>n</VAR>
-<DD>
-Set number of retries per URL---the same as <SAMP>`-t'</SAMP>.
-
-<DT>use_proxy = on/off
-<DD>
-Turn proxy support on/off.  The same as <SAMP>`-Y'</SAMP>.
-
-<DT>verbose = on/off
-<DD>
-Turn verbose on/off--the same as <SAMP>`-v'</SAMP>/<SAMP>`-nv'</SAMP>.
-
-<DT>wait = <VAR>n</VAR>
-<DD>
-Wait <VAR>n</VAR> seconds between retrievals--the same as <SAMP>`-w'</SAMP>.
-
-<DT>waitretry = <VAR>n</VAR>
-<DD>
-Wait up to <VAR>n</VAR> seconds between retries of failed retrievals
-only--the same as <SAMP>`--waitretry'</SAMP>.  Note that this is turned on by
-default in the global <TT>`wgetrc'</TT>.
-
-<DT>randomwait = on/off
-<DD>
-Turn random between-request wait times on or off. The same as 
-<SAMP>`--random-wait'</SAMP>.
-</DL>
-
-
-
-<H2><A NAME="SEC28" HREF="wget_toc.html#TOC28">Sample Wgetrc</A></H2>
-<P>
-<A NAME="IDX129"></A>
-
-
-<P>
-This is the sample initialization file, as given in the distribution.
-It is divided into two sections--one for global usage (suitable for the
-global startup file), and one for local usage (suitable for
-<TT>`$HOME/.wgetrc'</TT>).  Be careful about the things you change.
-
-
-<P>
-Note that almost all the lines are commented out.  For a command to have
-any effect, you must remove the <SAMP>`#'</SAMP> character at the beginning of
-its line.
-
-
-
-<PRE>
-###
-### Sample Wget initialization file .wgetrc
-###
-
-## You can use this file to change the default behaviour of wget or to
-## avoid having to type many many command-line options. This file does
-## not contain a comprehensive list of commands -- look at the manual
-## to find out what you can put into this file.
-## 
-## Wget initialization file can reside in /usr/local/etc/wgetrc
-## (global, for all users) or $HOME/.wgetrc (for a single user).
-##
-## To use the settings in this file, you will have to uncomment them,
-## as well as change them, in most cases, as the values on the
-## commented-out lines are the default values (e.g. "off").
-
-##
-## Global settings (useful for setting up in /usr/local/etc/wgetrc).
-## Think well before you change them, since they may reduce wget's
-## functionality, and make it behave contrary to the documentation:
-##
-
-# You can set retrieve quota for beginners by specifying a value
-# optionally followed by 'K' (kilobytes) or 'M' (megabytes).  The
-# default quota is unlimited.
-#quota = inf
-
-# You can lower (or raise) the default number of retries when
-# downloading a file (default is 20).
-#tries = 20
-
-# Lowering the maximum depth of the recursive retrieval is handy to
-# prevent newbies from going too "deep" when they unwittingly start
-# the recursive retrieval.  The default is 5.
-#reclevel = 5
-
-# Many sites are behind firewalls that do not allow initiation of
-# connections from the outside.  On these sites you have to use the
-# `passive' feature of FTP.  If you are behind such a firewall, you
-# can turn this on to make Wget use passive FTP by default.
-#passive_ftp = off
-
-# The "wait" command below makes Wget wait between every connection.
-# If, instead, you want Wget to wait only between retries of failed
-# downloads, set waitretry to maximum number of seconds to wait (Wget
-# will use "linear backoff", waiting 1 second after the first failure
-# on a file, 2 seconds after the second failure, etc. up to this max).
-waitretry = 10
-
-##
-## Local settings (for a user to set in his $HOME/.wgetrc).  It is
-## *highly* undesirable to put these settings in the global file, since
-## they are potentially dangerous to "normal" users.
-##
-## Even when setting up your own ~/.wgetrc, you should know what you
-## are doing before doing so.
-##
-
-# Set this to on to use timestamping by default:
-#timestamping = off
-
-# It is a good idea to make Wget send your email address in a `From:'
-# header with your request (so that server administrators can contact
-# you in case of errors).  Wget does *not* send `From:' by default.
-#header = From: Your Name &#60;address@hidden&#62;
-
-# You can set up other headers, like Accept-Language.  Accept-Language
-# is *not* sent by default.
-#header = Accept-Language: en
-
-# You can set the default proxies for Wget to use for http and ftp.
-# They will override the value in the environment.
-#http_proxy = http://proxy.yoyodyne.com:18023/
-#ftp_proxy = http://proxy.yoyodyne.com:18023/
-
-# If you do not want to use proxy at all, set this to off.
-#use_proxy = on
-
-# You can customize the retrieval outlook.  Valid options are default,
-# binary, mega and micro.
-#dot_style = default
-
-# Setting this to off makes Wget not download /robots.txt.  Be sure to
-# know *exactly* what /robots.txt is and how it is used before changing
-# the default!
-#robots = on
-
-# It can be useful to make Wget wait between connections.  Set this to
-# the number of seconds you want Wget to wait.
-#wait = 0
-
-# You can force creation of the directory structure, even if a single
-# file is being retrieved, by setting this to on.
-#dirstruct = off
-
-# You can turn on recursive retrieving by default (don't do this if
-# you are not sure you know what it means) by setting this to on.
-#recursive = off
-
-# To always back up file X as X.orig before converting its links (due
-# to -k / --convert-links / convert_links = on having been specified),
-# set this variable to on:
-#backup_converted = off
-
-# To have Wget follow FTP links from HTML files by default, set this
-# to on:
-#follow_ftp = off
-</PRE>
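The linear backoff described for `waitretry' in the sample above can be sketched in plain shell. This is an illustration only, not Wget's actual retry code; the failure count is made up.

```shell
# Sketch of the linear backoff described for "waitretry": wait 1 second
# after the first failure on a file, 2 after the second, and so on,
# capped at the configured maximum.  Twelve failures are assumed here.
waitretry=10
total=0
for failure in 1 2 3 4 5 6 7 8 9 10 11 12; do
  wait=$(( failure < waitretry ? failure : waitretry ))  # cap at waitretry
  total=$(( total + wait ))
done
echo "total seconds waited: $total"
```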
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_5.html">previous</A>, 
<A HREF="wget_7.html">next</A>, <A HREF="wget_11.html">last</A> section, <A 
HREF="wget_toc.html">table of contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_chapter/wget_7.html
===================================================================
RCS file: manual/wget-1.8.1/html_chapter/wget_7.html
diff -N manual/wget-1.8.1/html_chapter/wget_7.html
--- manual/wget-1.8.1/html_chapter/wget_7.html  19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,310 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Examples</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_6.html">previous</A>, 
<A HREF="wget_8.html">next</A>, <A HREF="wget_11.html">last</A> section, <A 
HREF="wget_toc.html">table of contents</A>.
-<P><HR><P>
-
-
-<H1><A NAME="SEC29" HREF="wget_toc.html#TOC29">Examples</A></H1>
-<P>
-<A NAME="IDX130"></A>
-
-
-<P>
-The examples are divided into three sections loosely based on their
-complexity.
-
-
-
-
-<H2><A NAME="SEC30" HREF="wget_toc.html#TOC30">Simple Usage</A></H2>
-
-
-<UL>
-<LI>
-
-Say you want to download a URL.  Just type:
-
-
-<PRE>
-wget http://fly.srk.fer.hr/
-</PRE>
-
-<LI>
-
-But what will happen if the connection is slow, and the file is lengthy?
-The connection will probably fail before the whole file is retrieved,
-perhaps more than once.  In this case, Wget will try getting the file until it
-either gets the whole of it, or exceeds the default number of retries
-(this being 20).  It is easy to change the number of tries to 45, to
-ensure that the whole file will arrive safely:
-
-
-<PRE>
-wget --tries=45 http://fly.srk.fer.hr/jpg/flyweb.jpg
-</PRE>
-
-<LI>
-
-Now let's leave Wget to work in the background, and write its progress
-to log file <TT>`log'</TT>.  It is tiring to type <SAMP>`--tries'</SAMP>, so we
-shall use <SAMP>`-t'</SAMP>.
-
-
-<PRE>
-wget -t 45 -o log http://fly.srk.fer.hr/jpg/flyweb.jpg &#38;
-</PRE>
-
-The ampersand at the end of the line makes sure that Wget works in the
-background.  To remove the limit on the number of retries, use <SAMP>`-t inf'</SAMP>.
-
-<LI>
-
-Using FTP is just as simple.  Wget will take care of the login and
-password.
-
-
-<PRE>
-wget ftp://gnjilux.srk.fer.hr/welcome.msg
-</PRE>
-
-<LI>
-
-If you specify a directory, Wget will retrieve the directory listing,
-parse it and convert it to HTML.  Try:
-
-
-<PRE>
-wget ftp://prep.ai.mit.edu/pub/gnu/
-links index.html
-</PRE>
-
-</UL>
-
-
-
-<H2><A NAME="SEC31" HREF="wget_toc.html#TOC31">Advanced Usage</A></H2>
-
-
-<UL>
-<LI>
-
-You have a file that contains the URLs you want to download?  Use the
-<SAMP>`-i'</SAMP> switch:
-
-
-<PRE>
-wget -i <VAR>file</VAR>
-</PRE>
-
-If you specify <SAMP>`-'</SAMP> as the file name, the URLs will be read from
-standard input.
-
-<LI>
-
-Create a five levels deep mirror image of the GNU web site, with the
-same directory structure the original has, with only one try per
-document, saving the log of the activities to <TT>`gnulog'</TT>:
-
-
-<PRE>
-wget -r http://www.gnu.org/ -o gnulog
-</PRE>
-
-<LI>
-
-The same as the above, but convert the links in the HTML files to
-point to local files, so you can view the documents off-line:
-
-
-<PRE>
-wget --convert-links -r http://www.gnu.org/ -o gnulog
-</PRE>
-
-<LI>
-
-Retrieve only one HTML page, but make sure that all the elements needed
-for the page to be displayed, such as inline images and external style
-sheets, are also downloaded.  Also make sure the downloaded page
-references the downloaded links.
-
-
-<PRE>
-wget -p --convert-links http://www.server.com/dir/page.html
-</PRE>
-
-The HTML page will be saved to <TT>`www.server.com/dir/page.html'</TT>, and
-the images, stylesheets, etc., somewhere under <TT>`www.server.com/'</TT>,
-depending on where they were on the remote server.
-
-<LI>
-
-The same as the above, but without the <TT>`www.server.com/'</TT> directory.
-In fact, I don't want to have all those random server directories
-anyway--just save <EM>all</EM> those files under a <TT>`download/'</TT>
-subdirectory of the current directory.
-
-
-<PRE>
-wget -p --convert-links -nH -nd -Pdownload \
-     http://www.server.com/dir/page.html
-</PRE>
-
-<LI>
-
-Retrieve the index.html of <SAMP>`www.lycos.com'</SAMP>, showing the original
-server headers:
-
-
-<PRE>
-wget -S http://www.lycos.com/
-</PRE>
-
-<LI>
-
-Save the server headers with the file, perhaps for post-processing.
-
-
-<PRE>
-wget -s http://www.lycos.com/
-more index.html
-</PRE>
-
-<LI>
-
-Retrieve the first two levels of <SAMP>`wuarchive.wustl.edu'</SAMP>, saving 
them
-to <TT>`/tmp'</TT>.
-
-
-<PRE>
-wget -r -l2 -P/tmp ftp://wuarchive.wustl.edu/
-</PRE>
-
-<LI>
-
-You want to download all the GIFs from a directory on an HTTP
-server.  You tried <SAMP>`wget http://www.server.com/dir/*.gif'</SAMP>, but 
that
-didn't work because HTTP retrieval does not support globbing.  In
-that case, use:
-
-
-<PRE>
-wget -r -l1 --no-parent -A.gif http://www.server.com/dir/
-</PRE>
-
-More verbose, but the effect is the same.  <SAMP>`-r -l1'</SAMP> means to
-retrieve recursively (see section <A HREF="wget_3.html#SEC13">Recursive 
Retrieval</A>), with maximum depth
-of 1.  <SAMP>`--no-parent'</SAMP> means that references to the parent directory
-are ignored (see section <A HREF="wget_4.html#SEC17">Directory-Based 
Limits</A>), and <SAMP>`-A.gif'</SAMP> means to
-download only the GIF files.  <SAMP>`-A "*.gif"'</SAMP> would have worked
-too.
-
-<LI>
-
-Suppose you were in the middle of downloading, when Wget was
-interrupted.  Now you do not want to clobber the files already present.
-The command for that would be:
-
-
-<PRE>
-wget -nc -r http://www.gnu.org/
-</PRE>
-
-<LI>
-
-If you want to encode your own username and password for HTTP or
-FTP, use the appropriate URL syntax (see section <A 
HREF="wget_2.html#SEC3">URL Format</A>).
-
-
-<PRE>
-wget ftp://hniksic:address@hidden/.emacs
-</PRE>
-
-<A NAME="IDX131"></A>
-<LI>
-
-You would like the output documents to go to standard output instead of
-to files?
-
-
-<PRE>
-wget -O - http://jagor.srce.hr/ http://www.srce.hr/
-</PRE>
-
-You can also combine the two options and make pipelines to retrieve the
-documents from remote hotlists:
-
-
-<PRE>
-wget -O - http://cool.list.com/ | wget --force-html -i -
-</PRE>
-
-</UL>
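The suffix matching that `-A.gif' performs in the example above can be approximated in a few lines of shell. This is a sketch, not Wget's actual accept-list code, and the file names are invented:

```shell
# Keep only names whose suffix matches the accepted extension, the way an
# accept list such as -A.gif filters candidate files during recursion.
accept=".gif"
kept=""
for name in logo.gif index.html banner.gif style.css; do
  case "$name" in
    *"$accept") kept="$kept$name " ;;  # suffix match, like -A.gif
  esac
done
echo "$kept"
```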
-
-
-
-<H2><A NAME="SEC32" HREF="wget_toc.html#TOC32">Very Advanced Usage</A></H2>
-
-<P>
-<A NAME="IDX132"></A>
-
-<UL>
-<LI>
-
-If you wish Wget to keep a mirror of a page (or FTP
-subdirectories), use <SAMP>`--mirror'</SAMP> (<SAMP>`-m'</SAMP>), which is the 
shorthand
-for <SAMP>`-r -l inf -N'</SAMP>.  You can put Wget in the crontab file asking 
it
-to recheck a site each Sunday:
-
-
-<PRE>
-crontab
-0 0 * * 0 wget --mirror http://www.gnu.org/ -o /home/me/weeklog
-</PRE>
-
-<LI>
-
-In addition to the above, you want the links to be converted for local
-viewing.  But, after having read this manual, you know that link
-conversion doesn't play well with timestamping, so you also want Wget to
-back up the original HTML files before the conversion.  Wget invocation
-would look like this:
-
-
-<PRE>
-wget --mirror --convert-links --backup-converted  \
-     http://www.gnu.org/ -o /home/me/weeklog
-</PRE>
-
-<LI>
-
-But you've also noticed that local viewing doesn't work all that well
-when HTML files are saved under extensions other than <SAMP>`.html'</SAMP>,
-perhaps because they were served as <TT>`index.cgi'</TT>.  So you'd like
-Wget to rename all the files served with content-type <SAMP>`text/html'</SAMP>
-to <TT>`<VAR>name</VAR>.html'</TT>.
-
-
-<PRE>
-wget --mirror --convert-links --backup-converted \
-     --html-extension -o /home/me/weeklog        \
-     http://www.gnu.org/
-</PRE>
-
-Or, with less typing:
-
-
-<PRE>
-wget -m -k -K -E http://www.gnu.org/ -o /home/me/weeklog
-</PRE>
-
-</UL>
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_6.html">previous</A>, 
<A HREF="wget_8.html">next</A>, <A HREF="wget_11.html">last</A> section, <A 
HREF="wget_toc.html">table of contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_chapter/wget_8.html
===================================================================
RCS file: manual/wget-1.8.1/html_chapter/wget_8.html
diff -N manual/wget-1.8.1/html_chapter/wget_8.html
--- manual/wget-1.8.1/html_chapter/wget_8.html  19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,290 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Various</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_7.html">previous</A>, 
<A HREF="wget_9.html">next</A>, <A HREF="wget_11.html">last</A> section, <A 
HREF="wget_toc.html">table of contents</A>.
-<P><HR><P>
-
-
-<H1><A NAME="SEC33" HREF="wget_toc.html#TOC33">Various</A></H1>
-<P>
-<A NAME="IDX133"></A>
-
-
-<P>
-This chapter contains all the stuff that could not fit anywhere else.
-
-
-
-
-<H2><A NAME="SEC34" HREF="wget_toc.html#TOC34">Proxies</A></H2>
-<P>
-<A NAME="IDX134"></A>
-
-
-<P>
-<EM>Proxies</EM> are special-purpose HTTP servers designed to transfer
-data from remote servers to local clients.  One typical use of proxies
-is lightening network load for users behind a slow connection.  This is
-achieved by channeling all HTTP and FTP requests through the
-proxy which caches the transferred data.  When a cached resource is
-requested again, the proxy will return the data from its cache.  Another use
-for proxies is for companies that separate (for security reasons) their
-internal networks from the rest of the Internet.  In order to obtain
-information from the Web, their users connect and retrieve remote data
-using an authorized proxy.
-
-
-<P>
-Wget supports proxies for both HTTP and FTP retrievals.  The
-standard way to specify proxy location, which Wget recognizes, is using
-the following environment variables:
-
-
-<DL COMPACT>
-
-<DT><CODE>http_proxy</CODE>
-<DD>
-This variable should contain the URL of the proxy for HTTP
-connections.
-
-<DT><CODE>ftp_proxy</CODE>
-<DD>
-This variable should contain the URL of the proxy for FTP
-connections.  It is quite common that HTTP_PROXY and FTP_PROXY
-are set to the same URL.
-
-<DT><CODE>no_proxy</CODE>
-<DD>
-This variable should contain a comma-separated list of domain extensions
-that the proxy should <EM>not</EM> be used for.  For instance, if the value of
-<CODE>no_proxy</CODE> is <SAMP>`.mit.edu'</SAMP>, the proxy will not be used to 
retrieve
-documents from MIT.
-</DL>
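The comma-separated <CODE>no_proxy</CODE> matching described above can be modeled as a domain-suffix check. A minimal sketch under assumed semantics; the domains and host are examples only:

```shell
# Decide whether a proxy should be used for a host, given a no_proxy list
# of domain suffixes (simplified model of the behaviour described above).
no_proxy=".mit.edu,.example.org"
host="www.mit.edu"
use_proxy=yes
IFS=','
for domain in $no_proxy; do
  case "$host" in
    *"$domain") use_proxy=no ;;  # host ends in an excluded domain
  esac
done
unset IFS
echo "use_proxy=$use_proxy"
```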
-
-<P>
-In addition to the environment variables, proxy location and settings
-may be specified from within Wget itself.
-
-
-<DL COMPACT>
-
-<DT><SAMP>`-Y on/off'</SAMP>
-<DD>
-<DT><SAMP>`--proxy=on/off'</SAMP>
-<DD>
-<DT><SAMP>`proxy = on/off'</SAMP>
-<DD>
-This option may be used to turn the proxy support on or off.  Proxy
-support is on by default, provided that the appropriate environment
-variables are set.
-
-<DT><SAMP>`http_proxy = <VAR>URL</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`ftp_proxy = <VAR>URL</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`no_proxy = <VAR>string</VAR>'</SAMP>
-<DD>
-These startup file variables allow you to override the proxy settings
-specified by the environment.
-</DL>
-
-<P>
-Some proxy servers require authorization to enable you to use them.  The
-authorization consists of a <EM>username</EM> and <EM>password</EM>, which must
-be sent by Wget.  As with HTTP authorization, several
-authentication schemes exist.  For proxy authorization only the
-<CODE>Basic</CODE> authentication scheme is currently implemented.
-
-
-<P>
-You may specify your username and password either through the proxy
-URL or through the command-line options.  Assuming that the
-company's proxy is located at <SAMP>`proxy.company.com'</SAMP> at port 8001, a
-proxy URL location containing authorization data might look like
-this:
-
-
-
-<PRE>
-http://hniksic:address@hidden:8001/
-</PRE>
-
-<P>
-Alternatively, you may use the <SAMP>`proxy-user'</SAMP> and
-<SAMP>`proxy-password'</SAMP> options, and the equivalent <TT>`.wgetrc'</TT>
-settings <CODE>proxy_user</CODE> and <CODE>proxy_passwd</CODE> to set the proxy
-username and password.
-
-
-
-
-<H2><A NAME="SEC35" HREF="wget_toc.html#TOC35">Distribution</A></H2>
-<P>
-<A NAME="IDX135"></A>
-
-
-<P>
-Like all GNU utilities, the latest version of Wget can be found at the
-master GNU archive site prep.ai.mit.edu, and its mirrors.  For example,
-Wget 1.8.1 can be found at
-<A 
HREF="ftp://prep.ai.mit.edu/gnu/wget/wget-1.8.1.tar.gz";>ftp://prep.ai.mit.edu/gnu/wget/wget-1.8.1.tar.gz</A>
-
-
-
-
-<H2><A NAME="SEC36" HREF="wget_toc.html#TOC36">Mailing List</A></H2>
-<P>
-<A NAME="IDX136"></A>
-<A NAME="IDX137"></A>
-
-
-<P>
-Wget has its own mailing list at <A 
HREF="mailto:address@hidden";>address@hidden</A>, thanks
-to Karsten Thygesen.  The mailing list is for discussion of Wget
-features and web, reporting Wget bugs (those that you think may be of
-interest to the public) and mailing announcements.  You are welcome to
-subscribe.  The more people on the list, the better!
-
-
-<P>
-To subscribe, send mail to <A HREF="mailto:address@hidden";>address@hidden</A> with
-the magic word <SAMP>`subscribe'</SAMP> in the subject line.  Unsubscribe by
-mailing to <A HREF="mailto:address@hidden";>address@hidden</A>.
-
-
-<P>
-The mailing list is archived at <A 
HREF="http://fly.srk.fer.hr/archive/wget";>http://fly.srk.fer.hr/archive/wget</A>.
-An alternative archive is available at
-<A 
HREF="http://www.mail-archive.com/wget%40sunsite.auc.dk/";>http://www.mail-archive.com/wget%40sunsite.auc.dk/</A>.
- 
-
-
-<H2><A NAME="SEC37" HREF="wget_toc.html#TOC37">Reporting Bugs</A></H2>
-<P>
-<A NAME="IDX138"></A>
-<A NAME="IDX139"></A>
-<A NAME="IDX140"></A>
-
-
-<P>
-You are welcome to send bug reports about GNU Wget to
-<A HREF="mailto:address@hidden";>address@hidden</A>.
-
-
-<P>
-Before actually submitting a bug report, please try to follow a few
-simple guidelines.
-
-
-
-<OL>
-<LI>
-
-Please try to ascertain that the behaviour you see really is a bug.  If
-Wget crashes, it's a bug.  If Wget does not behave as documented,
-it's a bug.  If things behave strangely, but you are not sure about the way
-they are supposed to work, it might well be a bug.
-
-<LI>
-
-Try to repeat the bug in as simple circumstances as possible.  E.g. if
-Wget crashes while downloading <SAMP>`wget -rl0 -kKE -t5 -Y0
-http://yoyodyne.com -o /tmp/log'</SAMP>, you should try to see if the crash is
-repeatable, and if it will occur with a simpler set of options.  You might
-even try to start the download at the page where the crash occurred to
-see if that page somehow triggered the crash.
-
-Also, while I will probably be interested to know the contents of your
-<TT>`.wgetrc'</TT> file, just dumping it into the debug message is probably
-a bad idea.  Instead, you should first try to see if the bug repeats
-with <TT>`.wgetrc'</TT> moved out of the way.  Only if it turns out that
-<TT>`.wgetrc'</TT> settings affect the bug, mail me the relevant parts of
-the file.
-
-<LI>
-
-Please start Wget with <SAMP>`-d'</SAMP> option and send the log (or the
-relevant parts of it).  If Wget was compiled without debug support,
-recompile it.  It is <EM>much</EM> easier to trace bugs with debug support
-on.
-
-<LI>
-
-If Wget has crashed, try to run it in a debugger, e.g. <CODE>gdb `which
-wget` core</CODE> and type <CODE>where</CODE> to get the backtrace.
-</OL>
-
-
-
-<H2><A NAME="SEC38" HREF="wget_toc.html#TOC38">Portability</A></H2>
-<P>
-<A NAME="IDX141"></A>
-<A NAME="IDX142"></A>
-
-
-<P>
-Since Wget uses GNU Autoconf for building and configuring, and avoids
-using "special" ultra--mega--cool features of any particular Unix, it
-should compile (and work) on all common Unix flavors.
-
-
-<P>
-Various Wget versions have been compiled and tested under many kinds of
-Unix systems, including Solaris, Linux, SunOS, OSF (aka Digital Unix),
-Ultrix, *BSD, IRIX, and others; refer to the file <TT>`MACHINES'</TT> in the
-distribution directory for a comprehensive list.  If you compile it on
-an architecture not listed there, please let me know so I can update it.
-
-
-<P>
-Wget should also compile on other Unix systems not listed in
-<TT>`MACHINES'</TT>.  If it doesn't, please let me know.
-
-
-<P>
-Thanks to kind contributors, this version of Wget compiles and works on
-Microsoft Windows 95 and Windows NT platforms.  It has been compiled
-successfully using MS Visual C++ 4.0, Watcom, and Borland C compilers,
-with Winsock as the networking software.  Naturally, it lacks some
-features available on Unix, but it should work as a substitute for
-people stuck with Windows.  Note that the Windows port is
-<STRONG>neither tested nor maintained</STRONG> by me--all questions and
-problems should be reported to the Wget mailing list at
-<A HREF="mailto:address@hidden";>address@hidden</A> where the maintainers will 
look at them.
-
-
-
-
-<H2><A NAME="SEC39" HREF="wget_toc.html#TOC39">Signals</A></H2>
-<P>
-<A NAME="IDX143"></A>
-<A NAME="IDX144"></A>
-
-
-<P>
-Since the purpose of Wget is background work, it catches the hangup
-signal (<CODE>SIGHUP</CODE>) and ignores it.  If the output was on standard
-output, it will be redirected to a file named <TT>`wget-log'</TT>.
-Otherwise, <CODE>SIGHUP</CODE> is ignored.  This is convenient when you wish
-to redirect the output of Wget after having started it.
-
-
-
-<PRE>
-$ wget http://www.ifi.uio.no/~larsi/gnus.tar.gz &#38;
-$ kill -HUP %%     # Redirect the output to wget-log
-</PRE>
-
-<P>
-Other than that, Wget will not try to interfere with signals in any way.
-<KBD>C-c</KBD>, <CODE>kill -TERM</CODE> and <CODE>kill -KILL</CODE> should 
kill it alike.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_7.html">previous</A>, 
<A HREF="wget_9.html">next</A>, <A HREF="wget_11.html">last</A> section, <A 
HREF="wget_toc.html">table of contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_chapter/wget_9.html
===================================================================
RCS file: manual/wget-1.8.1/html_chapter/wget_9.html
diff -N manual/wget-1.8.1/html_chapter/wget_9.html
--- manual/wget-1.8.1/html_chapter/wget_9.html  19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,333 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Appendices</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_8.html">previous</A>, 
<A HREF="wget_10.html">next</A>, <A HREF="wget_11.html">last</A> section, <A 
HREF="wget_toc.html">table of contents</A>.
-<P><HR><P>
-
-
-<H1><A NAME="SEC40" HREF="wget_toc.html#TOC40">Appendices</A></H1>
-
-<P>
-This chapter contains some references I consider useful.
-
-
-
-
-<H2><A NAME="SEC41" HREF="wget_toc.html#TOC41">Robots</A></H2>
-<P>
-<A NAME="IDX145"></A>
-<A NAME="IDX146"></A>
-<A NAME="IDX147"></A>
-
-
-<P>
-It is extremely easy to make Wget wander aimlessly around a web site,
-sucking up all the available data in the process.  <SAMP>`wget -r 
<VAR>site</VAR>'</SAMP>,
-and you're set.  Great?  Not for the server admin.
-
-
-<P>
-While Wget is retrieving static pages, there's not much of a problem.
-But for Wget, there is no real difference between a static page and the
-most demanding CGI.  For instance, a site I know has a section handled
-by an, uh, <EM>bitchin'</EM> CGI script that converts all the Info files to
-HTML.  The script can and does bring the machine to its knees without
-providing anything useful to the downloader.
-
-
-<P>
-For such and similar cases various robot exclusion schemes have been
-devised as a means for the server administrators and document authors to
-protect chosen portions of their sites from the wandering of robots.
-
-
-<P>
-The more popular mechanism is the <EM>Robots Exclusion Standard</EM>, or
-RES, written by Martijn Koster et al. in 1994.  It specifies the
-format of a text file containing directives that instruct the robots
-which URL paths to avoid.  To be found by the robots, the specifications
-must be placed in <TT>`/robots.txt'</TT> in the server root, which the
-robots are supposed to download and parse.
-
-
-<P>
-Wget supports RES when downloading recursively.  So, when you
-issue:
-
-
-
-<PRE>
-wget -r http://www.server.com/
-</PRE>
-
-<P>
-First the index of <SAMP>`www.server.com'</SAMP> will be downloaded.  If Wget
-finds that it wants to download more documents from that server, it will
-request <SAMP>`http://www.server.com/robots.txt'</SAMP> and, if found, use it
-for further downloads.  <TT>`robots.txt'</TT> is loaded only once per
-server.
-
-
-<P>
-Until version 1.8, Wget supported the first version of the standard,
-written by Martijn Koster in 1994 and available at
-<A 
HREF="http://www.robotstxt.org/wc/norobots.html";>http://www.robotstxt.org/wc/norobots.html</A>.
  As of version 1.8,
-Wget has supported the additional directives specified in the internet
-draft <SAMP>`&#60;draft-koster-robots-00.txt&#62;'</SAMP> titled "A Method for 
Web
-Robots Control".  The draft, which as far as I know never made it to
-an RFC, is available at
-<A 
HREF="http://www.robotstxt.org/wc/norobots-rfc.txt";>http://www.robotstxt.org/wc/norobots-rfc.txt</A>.
-
-
-<P>
-This manual no longer includes the text of the Robot Exclusion Standard.
-
-
-<P>
-The second, less widely known mechanism enables the author of an individual
-document to specify whether they want the links from the file to be
-followed by a robot.  This is achieved using the <CODE>META</CODE> tag, like
-this:
-
-
-
-<PRE>
-&#60;meta name="robots" content="nofollow"&#62;
-</PRE>
-
-<P>
-This is explained in some detail at
-<A 
HREF="http://www.robotstxt.org/wc/meta-user.html";>http://www.robotstxt.org/wc/meta-user.html</A>.
  Wget supports this
-method of robot exclusion in addition to the usual <TT>`/robots.txt'</TT>
-exclusion.
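A crude check for the <CODE>nofollow</CODE> directive described above might look like the following. This is a sketch only; Wget's real HTML parsing is more careful, and the sample markup is invented:

```shell
# Detect the robots "nofollow" directive in a page, deciding whether a
# crawler should follow the links it contains (simplified illustration).
html='<html><head><meta name="robots" content="nofollow"></head><body></body></html>'
case "$html" in
  *'name="robots"'*'content="nofollow"'*) follow=no ;;
  *) follow=yes ;;
esac
echo "follow links: $follow"
```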
-
-
-
-
-<H2><A NAME="SEC42" HREF="wget_toc.html#TOC42">Security Considerations</A></H2>
-<P>
-<A NAME="IDX148"></A>
-
-
-<P>
-When using Wget, you must be aware that it sends unencrypted passwords
-through the network, which may present a security problem.  Here are the
-main issues, and some solutions.
-
-
-
-<OL>
-<LI>
-
-The passwords on the command line are visible using <CODE>ps</CODE>.  If this
-is a problem, avoid passing passwords on the command line--e.g. you
-can use <TT>`.netrc'</TT> for this.
-
-<LI>
-
-Using the insecure <EM>basic</EM> authentication scheme, unencrypted
-passwords are transmitted through the network routers and gateways.
-
-<LI>
-
-The FTP passwords are also in no way encrypted.  There is no good
-solution for this at the moment.
-
-<LI>
-
-Although the "normal" output of Wget tries to hide the passwords,
-debugging logs show them, in all forms.  This problem is avoided by
-being careful when you send debug logs (yes, even when you send them to
-me).
-</OL>
-
-
-
-<H2><A NAME="SEC43" HREF="wget_toc.html#TOC43">Contributors</A></H2>
-<P>
-<A NAME="IDX149"></A>
-
-
-<P>
-GNU Wget was written by Hrvoje address@hidden'{c} <A 
HREF="mailto:address@hidden";>address@hidden</A>.
-However, its development could never have gone as far as it has, were it
-not for the help of many people, either with bug reports, feature
-proposals, patches, or letters saying "Thanks!".
-
-
-<P>
-Special thanks go to the following people (in no particular order):
-
-
-
-<UL>
-<LI>
-
-Karsten Thygesen--donated system resources such as the mailing list,
-web space, and FTP space, along with a lot of time to make these
-actually work.
-
-<LI>
-
-Shawn McHorse--bug reports and patches.
-
-<LI>
-
-Kaveh R. Ghazi--on-the-fly <CODE>ansi2knr</CODE>-ization.  Lots of
-portability fixes.
-
-<LI>
-
-Gordon Matzigkeit---<TT>`.netrc'</TT> support.
-
-<LI>
-
-Zlatko @address@hidden'{c}, Tomislav Vujec and address@hidden
address@hidden suggestions and "philosophical" discussions.
-
-<LI>
-
-Darko Budor--initial port to Windows.
-
-<LI>
-
-Antonio Rosella--help and suggestions, plus the Italian translation.
-
-<LI>
-
-Tomislav Petrovi'{c}, Mario address@hidden'{c}---many bug reports and
-suggestions.
-
-<LI>
-
-Fran@,{c}ois Pinard--many thorough bug reports and discussions.
-
-<LI>
-
-Karl Eichwalder--lots of help with internationalization and other
-things.
-
-<LI>
-
-Junio Hamano--donated support for Opie and HTTP <CODE>Digest</CODE>
-authentication.
-
-<LI>
-
-The people who provided donations for development, including Brian
-Gough.
-</UL>
-
-<P>
-The following people have provided patches, bug/build reports, useful
-suggestions, beta testing services, fan mail and all the other things
-that make maintenance so much fun:
-
-
-<P>
-Ian Abbott,
-Tim Adam,
-Adrian Aichner,
-Martin Baehr,
-Dieter Baron,
-Roger Beeman,
-Dan Berger,
-T. Bharath,
-Paul Bludov,
-Daniel Bodea,
-Mark Boyns,
-John Burden,
-Wanderlei Cavassin,
-Gilles Cedoc,
-Tim Charron,
-Noel Cragg,
-Kristijan @address@hidden,
-John Daily,
-Andrew Davison,
-Andrew Deryabin,
-Ulrich Drepper,
-Marc Duponcheel,
-Damir address@hidden,
-Alan Eldridge,
-Aleksandar Erkalovi'{c},
-Andy Eskilsson,
-Christian Fraenkel,
-Masashi Fujita,
-Howard Gayle,
-Marcel Gerrits,
-Lemble Gregory,
-Hans Grobler,
-Mathieu Guillaume,
-Dan Harkless,
-Herold Heiko,
-Jochen Hein,
-Karl Heuer,
-HIROSE Masaaki,
-Gregor Hoffleit,
-Erik Magnus Hulthen,
-Richard Huveneers,
-Jonas Jensen,
-Simon Josefsson,
-Mario Juri'{c},
-Hack address@hidden rn,
-Const Kaplinsky,
-Goran Kezunovi'{c},
-Robert Kleine,
-KOJIMA Haime,
-Fila Kolodny,
-Alexander Kourakos,
-Martin Kraemer,
-Hrvoje Lacko,
-Daniel S. Lewart,
-Nicol'{a}s Lichtmeier,
-Dave Love,
-Alexander V. Lukyanov,
-Jordan Mendelson,
-Lin Zhe Min,
-Tim Mooney,
-Simon Munton,
-Charlie Negyesi,
-R. K. Owen,
-Andrew Pollock,
-Steve Pothier,
-Jan address@hidden,
-Marin Purgar,
-Csaba R'{a}duly,
-Keith Refson,
-Tyler Riddle,
-Tobias Ringstrom,
-Edward J. Sabol,
-Heinz Salzmann,
-Robert Schmidt,
-Andreas Schwab,
-Chris Seawood,
-Toomas Soome,
-Tage Stabell-Kulo,
-Sven Sternberger,
-Markus Strasser,
-John Summerfield,
-Szakacsits Szabolcs,
-Mike Thomas,
-Philipp Thomas,
-Dave Turner,
-Russell Vincent,
-Charles G Waldman,
-Douglas E. Wegscheid,
-Jasmin Zainul,
-Bojan @v{Z}drnja,
-Kristijan Zimmer.
-
-
-<P>
-Apologies to all whom I accidentally left out, and many thanks to all the
-subscribers of the Wget mailing list.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_8.html">previous</A>, 
<A HREF="wget_10.html">next</A>, <A HREF="wget_11.html">last</A> section, <A 
HREF="wget_toc.html">table of contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_chapter/wget_foot.html
===================================================================
RCS file: manual/wget-1.8.1/html_chapter/wget_foot.html
diff -N manual/wget-1.8.1/html_chapter/wget_foot.html
--- manual/wget-1.8.1/html_chapter/wget_foot.html       19 Oct 2003 23:07:43 
-0000      1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,27 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Footnotes</TITLE>
-</HEAD>
-<BODY>
-<H1>GNU Wget</H1>
-<H2>The noninteractive downloading utility</H2>
-<H2>Updated for Wget 1.8.1, December 2001</H2>
-<ADDRESS>by Hrvoje <A HREF="mailto:address@hidden";>address@hidden</A>{s}i'{c} 
and the developers</ADDRESS>
-<P>
-<P><HR><P>
-<H3><A NAME="FOOT1" HREF="wget_2.html#DOCF1">(1)</A></H3>
-<P>If you have a
-<TT>`.netrc'</TT> file in your home directory, password will also be
-searched for there.
-<H3><A NAME="FOOT2" HREF="wget_5.html#DOCF2">(2)</A></H3>
-<P>As an additional check, Wget will look at the
-<CODE>Content-Length</CODE> header, and compare the sizes; if they are not the
-same, the remote file will be downloaded no matter what the time-stamp
-says.
-<P><HR><P>
-This document was generated on 17 January 2002 using
-<A HREF="http://wwwinfo.cern.ch/dis/texi2html/";>texi2html</A>&nbsp;1.56k.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_chapter/wget_toc.html
===================================================================
RCS file: manual/wget-1.8.1/html_chapter/wget_toc.html
diff -N manual/wget-1.8.1/html_chapter/wget_toc.html
--- manual/wget-1.8.1/html_chapter/wget_toc.html        19 Oct 2003 23:07:43 
-0000      1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,87 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Table of Contents</TITLE>
-</HEAD>
-<BODY>
-<H1>GNU Wget</H1>
-<H2>The noninteractive downloading utility</H2>
-<H2>Updated for Wget 1.8.1, December 2001</H2>
-<ADDRESS>by Hrvoje <A HREF="mailto:address@hidden";>address@hidden</A>{s}i'{c} 
and the developers</ADDRESS>
-<P>
-<P><HR><P>
-<UL>
-<LI><A NAME="TOC1" HREF="wget_1.html#SEC1">Overview</A>
-<LI><A NAME="TOC2" HREF="wget_2.html#SEC2">Invoking</A>
-<UL>
-<LI><A NAME="TOC3" HREF="wget_2.html#SEC3">URL Format</A>
-<LI><A NAME="TOC4" HREF="wget_2.html#SEC4">Option Syntax</A>
-<LI><A NAME="TOC5" HREF="wget_2.html#SEC5">Basic Startup Options</A>
-<LI><A NAME="TOC6" HREF="wget_2.html#SEC6">Logging and Input File Options</A>
-<LI><A NAME="TOC7" HREF="wget_2.html#SEC7">Download Options</A>
-<LI><A NAME="TOC8" HREF="wget_2.html#SEC8">Directory Options</A>
-<LI><A NAME="TOC9" HREF="wget_2.html#SEC9">HTTP Options</A>
-<LI><A NAME="TOC10" HREF="wget_2.html#SEC10">FTP Options</A>
-<LI><A NAME="TOC11" HREF="wget_2.html#SEC11">Recursive Retrieval Options</A>
-<LI><A NAME="TOC12" HREF="wget_2.html#SEC12">Recursive Accept/Reject 
Options</A>
-</UL>
-<LI><A NAME="TOC13" HREF="wget_3.html#SEC13">Recursive Retrieval</A>
-<LI><A NAME="TOC14" HREF="wget_4.html#SEC14">Following Links</A>
-<UL>
-<LI><A NAME="TOC15" HREF="wget_4.html#SEC15">Spanning Hosts</A>
-<LI><A NAME="TOC16" HREF="wget_4.html#SEC16">Types of Files</A>
-<LI><A NAME="TOC17" HREF="wget_4.html#SEC17">Directory-Based Limits</A>
-<LI><A NAME="TOC18" HREF="wget_4.html#SEC18">Relative Links</A>
-<LI><A NAME="TOC19" HREF="wget_4.html#SEC19">Following FTP Links</A>
-</UL>
-<LI><A NAME="TOC20" HREF="wget_5.html#SEC20">Time-Stamping</A>
-<UL>
-<LI><A NAME="TOC21" HREF="wget_5.html#SEC21">Time-Stamping Usage</A>
-<LI><A NAME="TOC22" HREF="wget_5.html#SEC22">HTTP Time-Stamping Internals</A>
-<LI><A NAME="TOC23" HREF="wget_5.html#SEC23">FTP Time-Stamping Internals</A>
-</UL>
-<LI><A NAME="TOC24" HREF="wget_6.html#SEC24">Startup File</A>
-<UL>
-<LI><A NAME="TOC25" HREF="wget_6.html#SEC25">Wgetrc Location</A>
-<LI><A NAME="TOC26" HREF="wget_6.html#SEC26">Wgetrc Syntax</A>
-<LI><A NAME="TOC27" HREF="wget_6.html#SEC27">Wgetrc Commands</A>
-<LI><A NAME="TOC28" HREF="wget_6.html#SEC28">Sample Wgetrc</A>
-</UL>
-<LI><A NAME="TOC29" HREF="wget_7.html#SEC29">Examples</A>
-<UL>
-<LI><A NAME="TOC30" HREF="wget_7.html#SEC30">Simple Usage</A>
-<LI><A NAME="TOC31" HREF="wget_7.html#SEC31">Advanced Usage</A>
-<LI><A NAME="TOC32" HREF="wget_7.html#SEC32">Very Advanced Usage</A>
-</UL>
-<LI><A NAME="TOC33" HREF="wget_8.html#SEC33">Various</A>
-<UL>
-<LI><A NAME="TOC34" HREF="wget_8.html#SEC34">Proxies</A>
-<LI><A NAME="TOC35" HREF="wget_8.html#SEC35">Distribution</A>
-<LI><A NAME="TOC36" HREF="wget_8.html#SEC36">Mailing List</A>
-<LI><A NAME="TOC37" HREF="wget_8.html#SEC37">Reporting Bugs</A>
-<LI><A NAME="TOC38" HREF="wget_8.html#SEC38">Portability</A>
-<LI><A NAME="TOC39" HREF="wget_8.html#SEC39">Signals</A>
-</UL>
-<LI><A NAME="TOC40" HREF="wget_9.html#SEC40">Appendices</A>
-<UL>
-<LI><A NAME="TOC41" HREF="wget_9.html#SEC41">Robots</A>
-<LI><A NAME="TOC42" HREF="wget_9.html#SEC42">Security Considerations</A>
-<LI><A NAME="TOC43" HREF="wget_9.html#SEC43">Contributors</A>
-</UL>
-<LI><A NAME="TOC44" HREF="wget_10.html#SEC44">Copying</A>
-<UL>
-<LI><A NAME="TOC45" HREF="wget_10.html#SEC45">GNU General Public License</A>
-<LI><A NAME="TOC46" HREF="wget_10.html#SEC46">Preamble</A>
-<LI><A NAME="TOC47" HREF="wget_10.html#SEC47">TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION</A>
-<LI><A NAME="TOC48" HREF="wget_10.html#SEC48">How to Apply These Terms to Your New Programs</A>
-<LI><A NAME="TOC49" HREF="wget_10.html#SEC49">GNU Free Documentation License</A>
-<LI><A NAME="TOC50" HREF="wget_10.html#SEC50">ADDENDUM: How to use this License for your documents</A>
-</UL>
-<LI><A NAME="TOC51" HREF="wget_11.html#SEC51">Concept Index</A>
-</UL>
-<P><HR><P>
-This document was generated on 17 January 2002 using
-<A HREF="http://wwwinfo.cern.ch/dis/texi2html/">texi2html</A>&nbsp;1.56k.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_mono/wget.html
===================================================================
RCS file: manual/wget-1.8.1/html_mono/wget.html
diff -N manual/wget-1.8.1/html_mono/wget.html
--- manual/wget-1.8.1/html_mono/wget.html       29 Jun 2005 21:04:13 -0000      1.2
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,4855 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual</TITLE>
-</HEAD>
-<BODY>
-<H1>GNU Wget</H1>
-<H2>The noninteractive downloading utility</H2>
-<H2>Updated for Wget 1.8.1, December 2001</H2>
-<ADDRESS>by Hrvoje Nikšić and the developers</ADDRESS>
-<P>
-<P><HR><P>
-<H1>Table of Contents</H1>
-<UL>
-<LI><A NAME="TOC1" HREF="wget.html#SEC1">Overview</A>
-<LI><A NAME="TOC2" HREF="wget.html#SEC2">Invoking</A>
-<UL>
-<LI><A NAME="TOC3" HREF="wget.html#SEC3">URL Format</A>
-<LI><A NAME="TOC4" HREF="wget.html#SEC4">Option Syntax</A>
-<LI><A NAME="TOC5" HREF="wget.html#SEC5">Basic Startup Options</A>
-<LI><A NAME="TOC6" HREF="wget.html#SEC6">Logging and Input File Options</A>
-<LI><A NAME="TOC7" HREF="wget.html#SEC7">Download Options</A>
-<LI><A NAME="TOC8" HREF="wget.html#SEC8">Directory Options</A>
-<LI><A NAME="TOC9" HREF="wget.html#SEC9">HTTP Options</A>
-<LI><A NAME="TOC10" HREF="wget.html#SEC10">FTP Options</A>
-<LI><A NAME="TOC11" HREF="wget.html#SEC11">Recursive Retrieval Options</A>
-<LI><A NAME="TOC12" HREF="wget.html#SEC12">Recursive Accept/Reject Options</A>
-</UL>
-<LI><A NAME="TOC13" HREF="wget.html#SEC13">Recursive Retrieval</A>
-<LI><A NAME="TOC14" HREF="wget.html#SEC14">Following Links</A>
-<UL>
-<LI><A NAME="TOC15" HREF="wget.html#SEC15">Spanning Hosts</A>
-<LI><A NAME="TOC16" HREF="wget.html#SEC16">Types of Files</A>
-<LI><A NAME="TOC17" HREF="wget.html#SEC17">Directory-Based Limits</A>
-<LI><A NAME="TOC18" HREF="wget.html#SEC18">Relative Links</A>
-<LI><A NAME="TOC19" HREF="wget.html#SEC19">Following FTP Links</A>
-</UL>
-<LI><A NAME="TOC20" HREF="wget.html#SEC20">Time-Stamping</A>
-<UL>
-<LI><A NAME="TOC21" HREF="wget.html#SEC21">Time-Stamping Usage</A>
-<LI><A NAME="TOC22" HREF="wget.html#SEC22">HTTP Time-Stamping Internals</A>
-<LI><A NAME="TOC23" HREF="wget.html#SEC23">FTP Time-Stamping Internals</A>
-</UL>
-<LI><A NAME="TOC24" HREF="wget.html#SEC24">Startup File</A>
-<UL>
-<LI><A NAME="TOC25" HREF="wget.html#SEC25">Wgetrc Location</A>
-<LI><A NAME="TOC26" HREF="wget.html#SEC26">Wgetrc Syntax</A>
-<LI><A NAME="TOC27" HREF="wget.html#SEC27">Wgetrc Commands</A>
-<LI><A NAME="TOC28" HREF="wget.html#SEC28">Sample Wgetrc</A>
-</UL>
-<LI><A NAME="TOC29" HREF="wget.html#SEC29">Examples</A>
-<UL>
-<LI><A NAME="TOC30" HREF="wget.html#SEC30">Simple Usage</A>
-<LI><A NAME="TOC31" HREF="wget.html#SEC31">Advanced Usage</A>
-<LI><A NAME="TOC32" HREF="wget.html#SEC32">Very Advanced Usage</A>
-</UL>
-<LI><A NAME="TOC33" HREF="wget.html#SEC33">Various</A>
-<UL>
-<LI><A NAME="TOC34" HREF="wget.html#SEC34">Proxies</A>
-<LI><A NAME="TOC35" HREF="wget.html#SEC35">Distribution</A>
-<LI><A NAME="TOC36" HREF="wget.html#SEC36">Mailing List</A>
-<LI><A NAME="TOC37" HREF="wget.html#SEC37">Reporting Bugs</A>
-<LI><A NAME="TOC38" HREF="wget.html#SEC38">Portability</A>
-<LI><A NAME="TOC39" HREF="wget.html#SEC39">Signals</A>
-</UL>
-<LI><A NAME="TOC40" HREF="wget.html#SEC40">Appendices</A>
-<UL>
-<LI><A NAME="TOC41" HREF="wget.html#SEC41">Robots</A>
-<LI><A NAME="TOC42" HREF="wget.html#SEC42">Security Considerations</A>
-<LI><A NAME="TOC43" HREF="wget.html#SEC43">Contributors</A>
-</UL>
-<LI><A NAME="TOC44" HREF="wget.html#SEC44">Copying</A>
-<UL>
-<LI><A NAME="TOC45" HREF="wget.html#SEC45">GNU General Public License</A>
-<LI><A NAME="TOC46" HREF="wget.html#SEC46">Preamble</A>
-<LI><A NAME="TOC47" HREF="wget.html#SEC47">TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION</A>
-<LI><A NAME="TOC48" HREF="wget.html#SEC48">How to Apply These Terms to Your New Programs</A>
-<LI><A NAME="TOC49" HREF="wget.html#SEC49">GNU Free Documentation License</A>
-<LI><A NAME="TOC50" HREF="wget.html#SEC50">ADDENDUM: How to use this License for your documents</A>
-</UL>
-<LI><A NAME="TOC51" HREF="wget.html#SEC51">Concept Index</A>
-</UL>
-<P><HR><P>
-
-<P>
-@dircategory Net Utilities
-@dircategory World Wide Web
-* Wget: (wget).         The non-interactive network downloader.
-
-
-<P>
-Copyright (C) 1996, 1997, 1998, 2000, 2001 Free Software
-Foundation, Inc.
-
-
-<P>
-Permission is granted to copy, distribute and/or modify this document
-under the terms of the GNU Free Documentation License, Version 1.1 or
-any later version published by the Free Software Foundation; with the
-Invariant Sections being "GNU General Public License" and "GNU Free
-Documentation License", with no Front-Cover Texts, and with no
-Back-Cover Texts.  A copy of the license is included in the section
-entitled "GNU Free Documentation License".
-
-
-
-
-<H1><A NAME="SEC1" HREF="wget.html#TOC1">Overview</A></H1>
-<P>
-<A NAME="IDX1"></A>
-<A NAME="IDX2"></A>
-
-
-<P>
-GNU Wget is a free utility for non-interactive download of files from
-the Web.  It supports HTTP, HTTPS, and FTP protocols, as
-well as retrieval through HTTP proxies.
-
-
-<P>
-This chapter is a partial overview of Wget's features.
-
-
-
-<UL>
-<LI>
-
-Wget is non-interactive, meaning that it can work in the background,
-while the user is not logged on.  This allows you to start a retrieval
-and disconnect from the system, letting Wget finish the work.  By
-contrast, most Web browsers require the user's constant presence,
-which can be a great hindrance when transferring a lot of data.
-
-<LI>
-
-Wget can follow links in HTML pages and create local versions of
-remote web sites, fully recreating the directory structure of the
-original site.  This is sometimes referred to as "recursive
-downloading."  While doing that, Wget respects the Robot Exclusion
-Standard (<TT>`/robots.txt'</TT>).  Wget can be instructed to convert the
-links in downloaded HTML files to the local files for offline
-viewing.
-
-<LI>
-
-File name wildcard matching and recursive mirroring of directories are
-available when retrieving via FTP.  Wget can read the time-stamp
-information given by both HTTP and FTP servers, and store it
-locally.  Thus Wget can see if the remote file has changed since last
-retrieval, and automatically retrieve the new version if it has.  This
-makes Wget suitable for mirroring of FTP sites, as well as home
-pages.
-
-<LI>
-
-Wget has been designed for robustness over slow or unstable network
-connections; if a download fails due to a network problem, it will
-keep retrying until the whole file has been retrieved.  If the server
-supports regetting, it will instruct the server to continue the
-download from where it left off.
-
-<LI>
-
-Wget supports proxy servers, which can lighten the network load, speed
-up retrieval and provide access behind firewalls.  However, if you are
-behind a firewall that requires a SOCKS-style gateway, you can get the
-SOCKS library and build Wget with SOCKS support.  Wget also supports
-passive FTP downloading as an option.
-
-<LI>
-
-Builtin features offer mechanisms to tune which links you wish to follow
-(see section <A HREF="wget.html#SEC14">Following Links</A>).
-
-<LI>
-
-The retrieval is conveniently traced by printing dots, each dot
-representing a fixed amount of data received (1KB by default).  These
-representations can be customized to your preferences.
-
-<LI>
-
-Most of the features are fully configurable, either through command line
-options, or via the initialization file <TT>`.wgetrc'</TT> (see section
-<A HREF="wget.html#SEC24">Startup File</A>).  Wget allows you to define
-<EM>global</EM> startup files
-(<TT>`/usr/local/etc/wgetrc'</TT> by default) for site settings.
-
-<LI>
-
-Finally, GNU Wget is free software.  This means that everyone may use
-it, redistribute it and/or modify it under the terms of the GNU General
-Public License, as published by the Free Software Foundation
-(see section <A HREF="wget.html#SEC44">Copying</A>).
-</UL>
-
-
-
-<H1><A NAME="SEC2" HREF="wget.html#TOC2">Invoking</A></H1>
-<P>
-<A NAME="IDX3"></A>
-<A NAME="IDX4"></A>
-<A NAME="IDX5"></A>
-<A NAME="IDX6"></A>
-
-
-<P>
-By default, Wget is very simple to invoke.  The basic syntax is:
-
-
-
-<PRE>
-wget [<VAR>option</VAR>]... [<VAR>URL</VAR>]...
-</PRE>
-
-<P>
-Wget will simply download all the URLs specified on the command
-line.  <VAR>URL</VAR> is a <EM>Uniform Resource Locator</EM>, as defined below.
-
-
-<P>
-However, you may wish to change some of the default parameters of
-Wget.  You can do it in two ways: permanently, by adding the appropriate
-command to <TT>`.wgetrc'</TT> (see section
-<A HREF="wget.html#SEC24">Startup File</A>), or by specifying it on
-the command line.
-
-
-
-
-<H2><A NAME="SEC3" HREF="wget.html#TOC3">URL Format</A></H2>
-<P>
-<A NAME="IDX7"></A>
-<A NAME="IDX8"></A>
-
-
-<P>
-<EM>URL</EM> is an acronym for Uniform Resource Locator.  A uniform
-resource locator is a compact string representation for a resource
-available via the Internet.  Wget recognizes the URL syntax as per
-RFC1738.  This is the most widely used form (square brackets denote
-optional parts):
-
-
-
-<PRE>
-http://host[:port]/directory/file
-ftp://host[:port]/directory/file
-</PRE>
-
-<P>
-You can also encode your username and password within a URL:
-
-
-
-<PRE>
-ftp://user:password@host/path
-http://user:password@host/path
-</PRE>
-
-<P>
-Either <VAR>user</VAR> or <VAR>password</VAR>, or both, may be left out.  If you
-leave out either the HTTP username or password, no authentication
-will be sent.  If you leave out the FTP username, <SAMP>`anonymous'</SAMP>
-will be used.  If you leave out the FTP password, your email
-address will be supplied as a default password.<A NAME="DOCF1" HREF="wget.html#FOOT1">(1)</A>
-
-
-<P>
-You can encode unsafe characters in a URL as <SAMP>`%xy'</SAMP>,
-<CODE>xy</CODE> being the hexadecimal representation of the character's
-ASCII value.  Some common unsafe characters include <SAMP>`%'</SAMP>
-(quoted as <SAMP>`%25'</SAMP>), <SAMP>`:'</SAMP> (quoted as <SAMP>`%3A'</SAMP>),
-and <SAMP>`@'</SAMP> (quoted as <SAMP>`%40'</SAMP>).  Refer to RFC1738 for a
-comprehensive list of unsafe characters.
-
-
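The `%xy' quoting rule above can be reproduced with a short POSIX-shell sketch (the helper name encode_char is ours for illustration, not a Wget facility; Wget does this encoding internally):

```shell
# Percent-encode one character per the %xy rule: print '%' followed by
# the character's ASCII value as two hexadecimal digits.  In POSIX
# printf, a leading quote in a numeric argument ("'@") yields the
# character's code.
encode_char() {
    printf '%%%02X' "'$1"
}

encode_char '%'; echo   # %25
encode_char ':'; echo   # %3A
encode_char '@'; echo   # %40
```

The same table of quoted characters appears in the paragraph above; the helper just makes the hexadecimal arithmetic explicit.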
-<P>
-Wget also supports the <CODE>type</CODE> feature for FTP URLs.  By
-default, FTP documents are retrieved in binary mode (type
-<SAMP>`i'</SAMP>), which means that they are downloaded unchanged.  Another
-useful mode is the <SAMP>`a'</SAMP> (<EM>ASCII</EM>) mode, which converts
-the line delimiters between different operating systems, and is thus useful
-for text files.  Here is an example:
-
-
-
-<PRE>
-ftp://host/directory/file;type=a
-</PRE>
-
-<P>
-Two alternative variants of URL specification are also supported,
-because of historical (hysterical?) reasons and their widespread use.
-
-
-<P>
-FTP-only syntax (supported by <CODE>NcFTP</CODE>):
-
-<PRE>
-host:/dir/file
-</PRE>
-
-<P>
-HTTP-only syntax (introduced by <CODE>Netscape</CODE>):
-
-<PRE>
-host[:port]/dir/file
-</PRE>
-
-<P>
-These two alternative forms are deprecated, and may cease being
-supported in the future.
-
-
-<P>
-If you do not understand the difference between these notations, or do
-not know which one to use, just use the plain ordinary format you use
-with your favorite browser, like <CODE>Lynx</CODE> or <CODE>Netscape</CODE>.
-
-
-
-
-<H2><A NAME="SEC4" HREF="wget.html#TOC4">Option Syntax</A></H2>
-<P>
-<A NAME="IDX9"></A>
-<A NAME="IDX10"></A>
-
-
-<P>
-Since Wget uses GNU getopt to process its arguments, every option has a
-short form and a long form.  Long options are more convenient to
-remember, but take time to type.  You may freely mix different option
-styles, or specify options after the command-line arguments.  Thus you
-may write:
-
-
-
-<PRE>
-wget -r --tries=10 http://fly.srk.fer.hr/ -o log
-</PRE>
-
-<P>
-The space between an option accepting an argument and the argument may
-be omitted.  Instead of <SAMP>`-o log'</SAMP> you can write <SAMP>`-olog'</SAMP>.
-
-
-<P>
-You may put several options that do not require arguments together,
-like:
-
-
-
-<PRE>
-wget -drc <VAR>URL</VAR>
-</PRE>
-
-<P>
-This is a complete equivalent of:
-
-
-
-<PRE>
-wget -d -r -c <VAR>URL</VAR>
-</PRE>
-
-<P>
-Since the options can be specified after the arguments, you may
-terminate them with <SAMP>`--'</SAMP>.  So the following will try to download
-URL <SAMP>`-x'</SAMP>, reporting failure to <TT>`log'</TT>:
-
-
-
-<PRE>
-wget -o log -- -x
-</PRE>
-
-<P>
-The options that accept comma-separated lists all respect the convention
-that specifying an empty list clears its value.  This can be useful to
-clear the <TT>`.wgetrc'</TT> settings.  For instance, if your <TT>`.wgetrc'</TT>
-sets <CODE>exclude_directories</CODE> to <TT>`/cgi-bin'</TT>, the following
-example will first reset it, and then set it to exclude <TT>`/~nobody'</TT>
-and <TT>`/~somebody'</TT>.  You can also clear the lists in <TT>`.wgetrc'</TT>
-(see section <A HREF="wget.html#SEC26">Wgetrc Syntax</A>).
-
-
-
-<PRE>
-wget -X '' -X /~nobody,/~somebody
-</PRE>
-
-
-
-<H2><A NAME="SEC5" HREF="wget.html#TOC5">Basic Startup Options</A></H2>
-
-<DL COMPACT>
-
-<DT><SAMP>`-V'</SAMP>
-<DD>
-<DT><SAMP>`--version'</SAMP>
-<DD>
-Display the version of Wget.
-
-<DT><SAMP>`-h'</SAMP>
-<DD>
-<DT><SAMP>`--help'</SAMP>
-<DD>
-Print a help message describing all of Wget's command-line options.
-
-<DT><SAMP>`-b'</SAMP>
-<DD>
-<DT><SAMP>`--background'</SAMP>
-<DD>
-Go to background immediately after startup.  If no output file is
-specified via <SAMP>`-o'</SAMP>, output is redirected to <TT>`wget-log'</TT>.
-
-<A NAME="IDX11"></A>
-<DT><SAMP>`-e <VAR>command</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--execute <VAR>command</VAR>'</SAMP>
-<DD>
-Execute <VAR>command</VAR> as if it were a part of <TT>`.wgetrc'</TT>
-(see section <A HREF="wget.html#SEC24">Startup File</A>).  A command thus invoked will be executed
-<EM>after</EM> the commands in <TT>`.wgetrc'</TT>, thus taking precedence over
-them.
-</DL>
-
-
-
-<H2><A NAME="SEC6" HREF="wget.html#TOC6">Logging and Input File Options</A></H2>
-
-<DL COMPACT>
-
-<DT><SAMP>`-o <VAR>logfile</VAR>'</SAMP>
-<DD>
-<A NAME="IDX12"></A>
- <A NAME="IDX13"></A>
- 
-<DT><SAMP>`--output-file=<VAR>logfile</VAR>'</SAMP>
-<DD>
-Log all messages to <VAR>logfile</VAR>.  The messages are normally reported
-to standard error.
-
-<A NAME="IDX14"></A>
-<DT><SAMP>`-a <VAR>logfile</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--append-output=<VAR>logfile</VAR>'</SAMP>
-<DD>
-Append to <VAR>logfile</VAR>.  This is the same as <SAMP>`-o'</SAMP>, only it appends
-to <VAR>logfile</VAR> instead of overwriting the old log file.  If
-<VAR>logfile</VAR> does not exist, a new file is created.
-
-<A NAME="IDX15"></A>
-<DT><SAMP>`-d'</SAMP>
-<DD>
-<DT><SAMP>`--debug'</SAMP>
-<DD>
-Turn on debug output, printing various information important to the
-developers of Wget if it does not work properly.  Your system
-administrator may have chosen to compile Wget without debug support, in
-which case <SAMP>`-d'</SAMP> will not work.  Please note that compiling with
-debug support is always safe--Wget compiled with debug support will
-<EM>not</EM> print any debug info unless requested with <SAMP>`-d'</SAMP>.
-See section <A HREF="wget.html#SEC37">Reporting Bugs</A>, for more
-information on how to use <SAMP>`-d'</SAMP> for sending bug reports.
-
-<A NAME="IDX16"></A>
-<DT><SAMP>`-q'</SAMP>
-<DD>
-<DT><SAMP>`--quiet'</SAMP>
-<DD>
-Turn off Wget's output.
-
-<A NAME="IDX17"></A>
-<DT><SAMP>`-v'</SAMP>
-<DD>
-<DT><SAMP>`--verbose'</SAMP>
-<DD>
-Turn on verbose output, with all the available data.  The default output
-is verbose.
-
-<DT><SAMP>`-nv'</SAMP>
-<DD>
-<DT><SAMP>`--non-verbose'</SAMP>
-<DD>
-Non-verbose output--turn off verbose without being completely quiet
-(use <SAMP>`-q'</SAMP> for that), which means that error messages and basic
-information still get printed.
-
-<A NAME="IDX18"></A>
-<DT><SAMP>`-i <VAR>file</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--input-file=<VAR>file</VAR>'</SAMP>
-<DD>
-Read URLs from <VAR>file</VAR>, in which case no URLs need to be on
-the command line.  If there are URLs both on the command line and
-in an input file, those on the command line will be the first ones to
-be retrieved.  The <VAR>file</VAR> need not be an HTML document (but no
-harm if it is)---it is enough if the URLs are just listed
-sequentially.
-
-However, if you specify <SAMP>`--force-html'</SAMP>, the document will be
-regarded as <SAMP>`html'</SAMP>.  In that case you may have problems with
-relative links, which you can solve either by adding <CODE>&#60;base
-href="<VAR>url</VAR>"&#62;</CODE> to the documents or by specifying
-<SAMP>`--base=<VAR>url</VAR>'</SAMP> on the command line.
-
-<A NAME="IDX19"></A>
-<DT><SAMP>`-F'</SAMP>
-<DD>
-<DT><SAMP>`--force-html'</SAMP>
-<DD>
-When input is read from a file, force it to be treated as an HTML
-file.  This enables you to retrieve relative links from existing
-HTML files on your local disk, by adding <CODE>&#60;base
-href="<VAR>url</VAR>"&#62;</CODE> to HTML, or using the <SAMP>`--base'</SAMP> 
command-line
-option.
-
-<A NAME="IDX20"></A>
-<DT><SAMP>`-B <VAR>URL</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--base=<VAR>URL</VAR>'</SAMP>
-<DD>
-When used in conjunction with <SAMP>`-F'</SAMP>, prepends <VAR>URL</VAR> to relative
-links in the file specified by <SAMP>`-i'</SAMP>.
-</DL>
-
-
-
-<H2><A NAME="SEC7" HREF="wget.html#TOC7">Download Options</A></H2>
-
-<DL COMPACT>
-
-<DT><SAMP>`--bind-address=<VAR>ADDRESS</VAR>'</SAMP>
-<DD>
-<A NAME="IDX21"></A>
- <A NAME="IDX22"></A>
- <A NAME="IDX23"></A>
- 
-When making client TCP/IP connections, <CODE>bind()</CODE> to 
<VAR>ADDRESS</VAR> on
-the local machine.  <VAR>ADDRESS</VAR> may be specified as a hostname or IP
-address.  This option can be useful if your machine is bound to multiple
-IPs.
-
-<A NAME="IDX24"></A>
-<A NAME="IDX25"></A>
-<A NAME="IDX26"></A>
-<DT><SAMP>`-t <VAR>number</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--tries=<VAR>number</VAR>'</SAMP>
-<DD>
-Set number of retries to <VAR>number</VAR>.  Specify 0 or <SAMP>`inf'</SAMP> 
for
-infinite retrying.
-
-<DT><SAMP>`-O <VAR>file</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--output-document=<VAR>file</VAR>'</SAMP>
-<DD>
-The documents will not be written to the appropriate files, but all will
-be concatenated together and written to <VAR>file</VAR>.  If <VAR>file</VAR>
-already exists, it will be overwritten.  If the <VAR>file</VAR> is <SAMP>`-'</SAMP>,
-the documents will be written to standard output.  Including this option
-automatically sets the number of tries to 1.
-
-<A NAME="IDX27"></A>
-<A NAME="IDX28"></A>
-<A NAME="IDX29"></A>
-<DT><SAMP>`-nc'</SAMP>
-<DD>
-<DT><SAMP>`--no-clobber'</SAMP>
-<DD>
-If a file is downloaded more than once in the same directory, Wget's
-behavior depends on a few options, including <SAMP>`-nc'</SAMP>.  In certain
-cases, the local file will be <EM>clobbered</EM>, or overwritten, upon
-repeated download.  In other cases it will be preserved.
-
-When running Wget without <SAMP>`-N'</SAMP>, <SAMP>`-nc'</SAMP>, or <SAMP>`-r'</SAMP>,
-downloading the same file in the same directory will result in the
-original copy of <VAR>file</VAR> being preserved and the second copy being
-named <SAMP>`<VAR>file</VAR>.1'</SAMP>.  If that file is downloaded yet again, the
-third copy will be named <SAMP>`<VAR>file</VAR>.2'</SAMP>, and so on.  When
-<SAMP>`-nc'</SAMP> is specified, this behavior is suppressed, and Wget will
-refuse to download newer copies of <SAMP>`<VAR>file</VAR>'</SAMP>.  Therefore,
-"<CODE>no-clobber</CODE>" is actually a misnomer in this mode--it's not
-clobbering that's prevented (as the numeric suffixes were already
-preventing clobbering), but rather the multiple version saving that's
-prevented.
-
-When running Wget with <SAMP>`-r'</SAMP>, but without <SAMP>`-N'</SAMP> or <SAMP>`-nc'</SAMP>,
-re-downloading a file will result in the new copy simply overwriting the
-old.  Adding <SAMP>`-nc'</SAMP> will prevent this behavior, instead causing the
-original version to be preserved and any newer copies on the server to
-be ignored.
-
-When running Wget with <SAMP>`-N'</SAMP>, with or without <SAMP>`-r'</SAMP>, the
-decision as to whether or not to download a newer copy of a file depends
-on the local and remote timestamp and size of the file
-(see section <A HREF="wget.html#SEC20">Time-Stamping</A>).  <SAMP>`-nc'</SAMP> may not be specified at the same
-time as <SAMP>`-N'</SAMP>.
-
-Note that when <SAMP>`-nc'</SAMP> is specified, files with the suffixes
-<SAMP>`.html'</SAMP> or (yuck) <SAMP>`.htm'</SAMP> will be loaded from the local disk
-and parsed as if they had been retrieved from the Web.
-
-<A NAME="IDX30"></A>
-<A NAME="IDX31"></A>
-<A NAME="IDX32"></A>
-<DT><SAMP>`-c'</SAMP>
-<DD>
-<DT><SAMP>`--continue'</SAMP>
-<DD>
-Continue getting a partially-downloaded file.  This is useful when you
-want to finish up a download started by a previous instance of Wget, or
-by another program.  For instance:
-
-
-<PRE>
-wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
-</PRE>
-
-If there is a file named <TT>`ls-lR.Z'</TT> in the current directory, Wget
-will assume that it is the first portion of the remote file, and will
-ask the server to continue the retrieval from an offset equal to the
-length of the local file.
-
-Note that you don't need to specify this option if you just want the
-current invocation of Wget to retry downloading a file should the
-connection be lost midway through.  This is the default behavior.
-<SAMP>`-c'</SAMP> only affects resumption of downloads started <EM>prior</EM> to
-this invocation of Wget, and whose local files are still sitting around.
-
-Without <SAMP>`-c'</SAMP>, the previous example would just download the remote
-file to <TT>`ls-lR.Z.1'</TT>, leaving the truncated <TT>`ls-lR.Z'</TT> file
-alone.
-
-Beginning with Wget 1.7, if you use <SAMP>`-c'</SAMP> on a non-empty file, and
-it turns out that the server does not support continued downloading,
-Wget will refuse to start the download from scratch, which would
-effectively ruin existing contents.  If you really want the download to
-start from scratch, remove the file.
-
-Also beginning with Wget 1.7, if you use <SAMP>`-c'</SAMP> on a file that is
-the same size as the one on the server, Wget will refuse to download the
-equal size as the one on the server, Wget will refuse to download the
-file and print an explanatory message.  The same happens when the file
-is smaller on the server than locally (presumably because it was changed
-on the server since your last download attempt)---because "continuing"
-is not meaningful, no download occurs.
-
-On the other side of the coin, while using <SAMP>`-c'</SAMP>, any file that's
-bigger on the server than locally will be considered an incomplete
-download and only <CODE>(length(remote) - length(local))</CODE> bytes will be
-downloaded and tacked onto the end of the local file.  This behavior can
-be desirable in certain cases--for instance, you can use <SAMP>`wget -c'</SAMP>
-to download just the new portion that's been appended to a data
-collection or log file.
-
-However, if the file is bigger on the server because it's been
-<EM>changed</EM>, as opposed to just <EM>appended</EM> to, you'll end up
-with a garbled file.  Wget has no way of verifying that the local file
-is really a valid prefix of the remote file.  You need to be especially
-careful of this when using <SAMP>`-c'</SAMP> in conjunction with <SAMP>`-r'</SAMP>,
-since every file will be considered as an "incomplete download" candidate.
-
-Another instance where you'll get a garbled file if you try to use
-<SAMP>`-c'</SAMP> is if you have a lame HTTP proxy that inserts a
-"transfer interrupted" string into the local file.  In the future a
-"rollback" option may be added to deal with this case.
-
-Note that <SAMP>`-c'</SAMP> only works with FTP servers and with HTTP
-servers that support the <CODE>Range</CODE> header.
-
-<A NAME="IDX33"></A>
-<A NAME="IDX34"></A>
-<DT><SAMP>`--progress=<VAR>type</VAR>'</SAMP>
-<DD>
-Select the type of the progress indicator you wish to use.  Legal
-indicators are "dot" and "bar".
-
-The "dot" indicator is used by default.  It traces the retrieval by
-printing dots on the screen, each dot representing a fixed amount of
-downloaded data.
-
-When using the dotted retrieval, you may also set the <EM>style</EM> by
-specifying the type as <SAMP>`dot:<VAR>style</VAR>'</SAMP>.  Different styles assign
-different meaning to one dot.  With the <CODE>default</CODE> style each dot
-represents 1K, there are ten dots in a cluster and 50 dots in a line.
-The <CODE>binary</CODE> style has a more "computer"-like orientation--8K
-dots, 16-dot clusters and 48 dots per line (which makes for 384K per
-line).  The <CODE>mega</CODE> style is suitable for downloading very large
-files--each dot represents 64K retrieved, there are eight dots in a
-cluster, and 48 dots on each line (so each line contains 3M).
-
-Specifying <SAMP>`--progress=bar'</SAMP> will draw a nice ASCII progress-bar
-graphic (a.k.a. the "thermometer" display) to indicate retrieval.  If the
-output is not a TTY, this option will be ignored, and Wget will revert
-to the dot indicator.  If you want to force the bar indicator, use
-<SAMP>`--progress=bar:force'</SAMP>.
-
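The per-line figures quoted above are just dots-per-line times kilobytes-per-dot; a quick arithmetic check in shell (the numbers are taken from the paragraph above):

```shell
# Kilobytes represented by one full line of dots in each style:
#   default: 50 dots x 1K,  binary: 48 dots x 8K,  mega: 48 dots x 64K
echo "default: $((50 * 1))K"    # 50K
echo "binary:  $((48 * 8))K"    # 384K
echo "mega:    $((48 * 64))K"   # 3072K, i.e. 3M
```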
-<DT><SAMP>`-N'</SAMP>
-<DD>
-<DT><SAMP>`--timestamping'</SAMP>
-<DD>
-Turn on time-stamping.  See section <A HREF="wget.html#SEC20">Time-Stamping</A>, for details.
-
-<A NAME="IDX35"></A>
-<DT><SAMP>`-S'</SAMP>
-<DD>
-<DT><SAMP>`--server-response'</SAMP>
-<DD>
-Print the headers sent by HTTP servers and responses sent by
-FTP servers.
-
-<A NAME="IDX36"></A>
-<A NAME="IDX37"></A>
-<DT><SAMP>`--spider'</SAMP>
-<DD>
-When invoked with this option, Wget will behave as a Web <EM>spider</EM>,
-which means that it will not download the pages, just check that they
-are there.  You can use it to check your bookmarks, e.g. with:
-
-
-<PRE>
-wget --spider --force-html -i bookmarks.html
-</PRE>
-
-This feature needs much more work for Wget to get close to the
-functionality of real WWW spiders.
-
-<A NAME="IDX38"></A>
-<DT><SAMP>`-T <VAR>seconds</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--timeout=<VAR>seconds</VAR>'</SAMP>
-<DD>
-Set the read timeout to <VAR>seconds</VAR> seconds.  Whenever a network read
-is issued, the file descriptor is checked for a timeout, which could
-otherwise leave a pending connection (uninterrupted read).  The default
-timeout is 900 seconds (fifteen minutes).  Setting timeout to 0 will
-disable checking for timeouts.
-
-Please do not lower the default timeout value with this option unless
-you know what you are doing.
-
-<A NAME="IDX39"></A>
-<A NAME="IDX40"></A>
-<DT><SAMP>`-w <VAR>seconds</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--wait=<VAR>seconds</VAR>'</SAMP>
-<DD>
-Wait the specified number of seconds between retrievals.  Use of
-this option is recommended, as it lightens the server load by making the
-requests less frequent.  Instead of in seconds, the time can be
-specified in minutes using the <CODE>m</CODE> suffix, in hours using the
-<CODE>h</CODE> suffix, or in days using the <CODE>d</CODE> suffix.
-
-Specifying a large value for this option is useful if the network or the
-destination host is down, so that Wget can wait long enough to
-reasonably expect the network error to be fixed before the retry.
-
-<A NAME="IDX41"></A>
-<A NAME="IDX42"></A>
-<DT><SAMP>`--waitretry=<VAR>seconds</VAR>'</SAMP>
-<DD>
-If you don't want Wget to wait between <EM>every</EM> retrieval, but only
-between retries of failed downloads, you can use this option.  Wget will
-use <EM>linear backoff</EM>, waiting 1 second after the first failure on a
-given file, then waiting 2 seconds after the second failure on that
-file, up to the maximum number of <VAR>seconds</VAR> you specify.  Therefore,
-a value of 10 will actually make Wget wait up to (1 + 2 + ... + 10) = 55
-seconds per file.
-
-Note that this option is turned on by default in the global
-<TT>`wgetrc'</TT> file.
-
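The 55-second worst case quoted above is simply the sum of the linear backoff steps; sketched in shell:

```shell
# Linear backoff: 1s after the first failure, 2s after the second,
# ... up to the --waitretry ceiling.  Worst-case total for a value
# of 10 is 1 + 2 + ... + 10.
max=10
total=0
i=1
while [ "$i" -le "$max" ]; do
    total=$((total + i))
    i=$((i + 1))
done
echo "$total seconds"
```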
-<A NAME="IDX43"></A>
-<A NAME="IDX44"></A>
-<DT><SAMP>`--random-wait'</SAMP>
-<DD>
-Some web sites may perform log analysis to identify retrieval programs
-such as Wget by looking for statistically significant similarities in
-the time between requests. This option causes the time between requests
-to vary between 0 and 2 * <VAR>wait</VAR> seconds, where <VAR>wait</VAR> was
-specified using the <SAMP>`-w'</SAMP> or <SAMP>`--wait'</SAMP> options, in order to mask
-Wget's presence from such analysis.
-
-A recent article in a publication devoted to development on a popular
-consumer platform provided code to perform this analysis on the fly.
-Its author suggested blocking at the class C address level to ensure
-automated retrieval programs were blocked despite changing DHCP-supplied
-addresses.
-
-The <SAMP>`--random-wait'</SAMP> option was inspired by this ill-advised
-recommendation to block many unrelated users from a web site due to the
-actions of one.
-
-<A NAME="IDX45"></A>
-<DT><SAMP>`-Y on/off'</SAMP>
-<DD>
-<DT><SAMP>`--proxy=on/off'</SAMP>
-<DD>
-Turn proxy support on or off.  The proxy is on by default if the
-appropriate environment variable is defined.
-
-<A NAME="IDX46"></A>
-<DT><SAMP>`-Q <VAR>quota</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--quota=<VAR>quota</VAR>'</SAMP>
-<DD>
-Specify download quota for automatic retrievals.  The value can be
-specified in bytes (default), kilobytes (with <SAMP>`k'</SAMP> suffix), or
-megabytes (with <SAMP>`m'</SAMP> suffix).
-
-Note that quota will never affect downloading a single file.  So if you
-specify <SAMP>`wget -Q10k ftp://wuarchive.wustl.edu/ls-lR.gz'</SAMP>, all of
-the <TT>`ls-lR.gz'</TT> will be downloaded.  The same goes even when several
-URLs are specified on the command line.  However, quota is
-respected when retrieving either recursively, or from an input file.
-Thus you may safely type <SAMP>`wget -Q2m -i sites'</SAMP>---download will be
-aborted when the quota is exceeded.
-
-Setting quota to 0 or to <SAMP>`inf'</SAMP> unlimits the download quota.
-</DL>
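As a sanity check on the suffixes above, `k' and `m' are binary multipliers (assuming the 1K = 1024 bytes convention Wget uses), so the quotas in the examples work out to:

```shell
# Byte values behind the quota suffixes used above, assuming
# binary multipliers (1K = 1024 bytes):
q10k=$((10 * 1024))          # -Q10k
q2m=$((2 * 1024 * 1024))     # -Q2m
echo "$q10k"   # 10240
echo "$q2m"    # 2097152
```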
-
-
-
-<H2><A NAME="SEC8" HREF="wget.html#TOC8">Directory Options</A></H2>
-
-<DL COMPACT>
-
-<DT><SAMP>`-nd'</SAMP>
-<DD>
-<DT><SAMP>`--no-directories'</SAMP>
-<DD>
-Do not create a hierarchy of directories when retrieving recursively.
-With this option turned on, all files will get saved to the current
-directory, without clobbering (if a name shows up more than once, the
-filenames will get extensions <SAMP>`.n'</SAMP>).
-
-<DT><SAMP>`-x'</SAMP>
-<DD>
-<DT><SAMP>`--force-directories'</SAMP>
-<DD>
-The opposite of <SAMP>`-nd'</SAMP>---create a hierarchy of directories, even if
-one would not have been created otherwise.  E.g. <SAMP>`wget -x
-http://fly.srk.fer.hr/robots.txt'</SAMP> will save the downloaded file to
-<TT>`fly.srk.fer.hr/robots.txt'</TT>.
-
-<DT><SAMP>`-nH'</SAMP>
-<DD>
-<DT><SAMP>`--no-host-directories'</SAMP>
-<DD>
-Disable generation of host-prefixed directories.  By default, invoking
-Wget with <SAMP>`-r http://fly.srk.fer.hr/'</SAMP> will create a structure of
-directories beginning with <TT>`fly.srk.fer.hr/'</TT>.  This option disables
-such behavior.
-
-<A NAME="IDX47"></A>
-<DT><SAMP>`--cut-dirs=<VAR>number</VAR>'</SAMP>
-<DD>
-Ignore <VAR>number</VAR> directory components.  This is useful for getting a
-fine-grained control over the directory where recursive retrieval will
-be saved.
-
-Take, for example, the directory at
-<SAMP>`ftp://ftp.xemacs.org/pub/xemacs/'</SAMP>.  If you retrieve it with
-<SAMP>`-r'</SAMP>, it will be saved locally under
-<TT>`ftp.xemacs.org/pub/xemacs/'</TT>.  While the <SAMP>`-nH'</SAMP> option can
-remove the <TT>`ftp.xemacs.org/'</TT> part, you are still stuck with
-<TT>`pub/xemacs'</TT>.  This is where <SAMP>`--cut-dirs'</SAMP> comes in 
handy; it
-makes Wget not "see" <VAR>number</VAR> remote directory components.  Here
-are several examples of how the <SAMP>`--cut-dirs'</SAMP> option works.
-
-
-<PRE>
-No options        -&#62; ftp.xemacs.org/pub/xemacs/
--nH               -&#62; pub/xemacs/
--nH --cut-dirs=1  -&#62; xemacs/
--nH --cut-dirs=2  -&#62; .
-
---cut-dirs=1      -&#62; ftp.xemacs.org/xemacs/
-...
-</PRE>
-
-If you just want to get rid of the directory structure, this option is
-similar to a combination of <SAMP>`-nd'</SAMP> and <SAMP>`-P'</SAMP>.  
However, unlike
-<SAMP>`-nd'</SAMP>, <SAMP>`--cut-dirs'</SAMP> does not lose track of
-subdirectories--for instance, with <SAMP>`-nH --cut-dirs=1'</SAMP>, a
-<TT>`beta/'</TT> subdirectory will be placed in <TT>`xemacs/beta'</TT>, as
-one would expect.
-
-<A NAME="IDX48"></A>
-<DT><SAMP>`-P <VAR>prefix</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--directory-prefix=<VAR>prefix</VAR>'</SAMP>
-<DD>
-Set directory prefix to <VAR>prefix</VAR>.  The <EM>directory prefix</EM> is 
the
-directory where all other files and subdirectories will be saved to,
-i.e. the top of the retrieval tree.  The default is <SAMP>`.'</SAMP> (the
-current directory).
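-
-For example, building on the XEmacs archive above, the following
-saves the retrieval tree under <TT>`/tmp'</TT> rather than the current
-directory:
-
-
-<PRE>
-wget -r -P/tmp ftp://ftp.xemacs.org/pub/xemacs/
-</PRE>
-
-The files end up under <TT>`/tmp/ftp.xemacs.org/pub/xemacs/'</TT>.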
-</DL>
-
-
-
-<H2><A NAME="SEC9" HREF="wget.html#TOC9">HTTP Options</A></H2>
-
-<DL COMPACT>
-
-<DT><SAMP>`-E'</SAMP>
-<DD>
-<A NAME="IDX49"></A>
- 
-<DT><SAMP>`--html-extension'</SAMP>
-<DD>
-If a file of type <SAMP>`text/html'</SAMP> is downloaded and the URL does not
-end with the regexp <SAMP>`\.[Hh][Tt][Mm][Ll]?'</SAMP>, this option will cause
-the suffix <SAMP>`.html'</SAMP> to be appended to the local filename.  This is
-useful, for instance, when you're mirroring a remote site that uses
-<SAMP>`.asp'</SAMP> pages, but you want the mirrored pages to be viewable on
-your stock Apache server.  Another good use for this is when you're
-downloading the output of CGIs.  A URL like
-<SAMP>`http://site.com/article.cgi?25'</SAMP> will be saved as
-<TT>`article.cgi?25.html'</TT>.
-
-Note that filenames changed in this way will be re-downloaded every time
-you re-mirror a site, because Wget can't tell that the local
-<TT>`<VAR>X</VAR>.html'</TT> file corresponds to remote URL 
<SAMP>`<VAR>X</VAR>'</SAMP> (since
-it doesn't yet know that the URL produces output of type
-<SAMP>`text/html'</SAMP>).  To prevent this re-downloading, you must use
-<SAMP>`-k'</SAMP> and <SAMP>`-K'</SAMP> so that the original version of the 
file will be
-saved as <TT>`<VAR>X</VAR>.orig'</TT> (see section <A 
HREF="wget.html#SEC11">Recursive Retrieval Options</A>).
-
-<A NAME="IDX50"></A>
-<A NAME="IDX51"></A>
-<A NAME="IDX52"></A>
-<DT><SAMP>`--http-user=<VAR>user</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--http-passwd=<VAR>password</VAR>'</SAMP>
-<DD>
-Specify the username <VAR>user</VAR> and password <VAR>password</VAR> on an
-HTTP server.  According to the type of the challenge, Wget will
-encode them using either the <CODE>basic</CODE> (insecure) or the
-<CODE>digest</CODE> authentication scheme.
-
-Another way to specify username and password is in the URL itself
-(see section <A HREF="wget.html#SEC3">URL Format</A>).  For more information 
about security issues with
-Wget, see section <A HREF="wget.html#SEC42">Security Considerations</A>.
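-
-For example, the following two invocations are equivalent ways of
-supplying illustrative credentials:
-
-
-<PRE>
-wget --http-user=jan --http-passwd=secret http://www.example.com/private/
-wget http://jan:secret@www.example.com/private/
-</PRE>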
-
-<A NAME="IDX53"></A>
-<A NAME="IDX54"></A>
-<DT><SAMP>`-C on/off'</SAMP>
-<DD>
-<DT><SAMP>`--cache=on/off'</SAMP>
-<DD>
-When set to off, disable server-side caching.  In this case, Wget will
-send the remote server an appropriate directive (<SAMP>`Pragma:
-no-cache'</SAMP>) to get the file from the remote service, rather than
-returning the cached version.  This is especially useful for retrieving
-and flushing out-of-date documents on proxy servers.
-
-Caching is allowed by default.
-
-<A NAME="IDX55"></A>
-<DT><SAMP>`--cookies=on/off'</SAMP>
-<DD>
-When set to off, disable the use of cookies.  Cookies are a mechanism
-for maintaining server-side state.  The server sends the client a cookie
-using the <CODE>Set-Cookie</CODE> header, and the client responds with the
-same cookie upon further requests.  Since cookies allow the server
-owners to keep track of visitors and for sites to exchange this
-information, some consider them a breach of privacy.  The default is to
-use cookies; however, <EM>storing</EM> cookies is not on by default.
-
-<A NAME="IDX56"></A>
-<A NAME="IDX57"></A>
-<DT><SAMP>`--load-cookies <VAR>file</VAR>'</SAMP>
-<DD>
-Load cookies from <VAR>file</VAR> before the first HTTP retrieval.
-<VAR>file</VAR> is a textual file in the format originally used by Netscape's
-<TT>`cookies.txt'</TT> file.
-
-You will typically use this option when mirroring sites that require
-that you be logged in to access some or all of their content.  The login
-process typically works by the web server issuing an HTTP cookie
-upon receiving and verifying your credentials.  The cookie is then
-resent by the browser when accessing that part of the site, and so
-proves your identity.
-
-Mirroring such a site requires Wget to send the same cookies your
-browser sends when communicating with the site.  This is achieved by
-<SAMP>`--load-cookies'</SAMP>---simply point Wget to the location of the
-<TT>`cookies.txt'</TT> file, and it will send the same cookies your browser
-would send in the same situation.  Different browsers keep textual
-cookie files in different locations:
-
-<DL COMPACT>
-
-<DT>Netscape 4.x.
-<DD>
-The cookies are in <TT>`~/.netscape/cookies.txt'</TT>.
-
-<DT>Mozilla and Netscape 6.x.
-<DD>
-Mozilla's cookie file is also named <TT>`cookies.txt'</TT>, located
-somewhere under <TT>`~/.mozilla'</TT>, in the directory of your profile.
-The full path usually ends up looking somewhat like
-<TT>`~/.mozilla/default/<VAR>some-weird-string</VAR>/cookies.txt'</TT>.
-
-<DT>Internet Explorer.
-<DD>
-You can produce a cookie file Wget can use by using the File menu,
-Import and Export, Export Cookies.  This has been tested with Internet
-Explorer 5; it is not guaranteed to work with earlier versions.
-
-<DT>Other browsers.
-<DD>
-If you are using a different browser to create your cookies,
-<SAMP>`--load-cookies'</SAMP> will only work if you can locate or produce a
-cookie file in the Netscape format that Wget expects.
-</DL>
-
-If you cannot use <SAMP>`--load-cookies'</SAMP>, there might still be an
-alternative.  If your browser supports a "cookie manager", you can use
-it to view the cookies used when accessing the site you're mirroring.
-Write down the name and value of the cookie, and manually instruct Wget
-to send those cookies, bypassing the "official" cookie support:
-
-
-<PRE>
-wget --cookies=off --header "Cookie: <VAR>name</VAR>=<VAR>value</VAR>"
-</PRE>
-
-<A NAME="IDX58"></A>
-<A NAME="IDX59"></A>
-<DT><SAMP>`--save-cookies <VAR>file</VAR>'</SAMP>
-<DD>
-Save cookies to <VAR>file</VAR> at the end of the session.  Cookies whose
-expiry time is not specified, or those that have already expired, are
-not saved.
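-
-A typical use is to pair it with <SAMP>`--load-cookies'</SAMP> across two
-invocations, assuming the first request causes the server to issue a
-session cookie (URLs illustrative):
-
-
-<PRE>
-wget --save-cookies cookies.txt http://www.example.com/login
-wget --load-cookies cookies.txt -r http://www.example.com/members/
-</PRE>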
-
-<A NAME="IDX60"></A>
-<A NAME="IDX61"></A>
-<DT><SAMP>`--ignore-length'</SAMP>
-<DD>
-Unfortunately, some HTTP servers (CGI programs, to be more
-precise) send out bogus <CODE>Content-Length</CODE> headers, which makes Wget
-go wild, as it thinks not all the document was retrieved.  You can spot
-this syndrome if Wget retries getting the same document again and again,
-each time claiming that the (otherwise normal) connection has closed on
-the very same byte.
-
-With this option, Wget will ignore the <CODE>Content-Length</CODE> header--as
-if it never existed.
-
-<A NAME="IDX62"></A>
-<DT><SAMP>`--header=<VAR>additional-header</VAR>'</SAMP>
-<DD>
-Define an <VAR>additional-header</VAR> to be passed to the HTTP servers.
-Headers must contain a <SAMP>`:'</SAMP> preceded by one or more non-blank
-characters, and must not contain newlines.
-
-You may define more than one additional header by specifying
-<SAMP>`--header'</SAMP> more than once.
-
-
-<PRE>
-wget --header='Accept-Charset: iso-8859-2' \
-     --header='Accept-Language: hr'        \
-       http://fly.srk.fer.hr/
-</PRE>
-
-Specification of an empty string as the header value will clear all
-previous user-defined headers.
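-
-For instance, in the following invocation only the
-<SAMP>`Accept-Charset'</SAMP> header is sent, because the empty
-<SAMP>`--header'</SAMP> clears the header defined before it:
-
-
-<PRE>
-wget --header='Accept-Language: hr' --header='' \
-     --header='Accept-Charset: iso-8859-2' http://fly.srk.fer.hr/
-</PRE>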
-
-<A NAME="IDX63"></A>
-<A NAME="IDX64"></A>
-<A NAME="IDX65"></A>
-<DT><SAMP>`--proxy-user=<VAR>user</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--proxy-passwd=<VAR>password</VAR>'</SAMP>
-<DD>
-Specify the username <VAR>user</VAR> and password <VAR>password</VAR> for
-authentication on a proxy server.  Wget will encode them using the
-<CODE>basic</CODE> authentication scheme.
-
-<A NAME="IDX66"></A>
-<A NAME="IDX67"></A>
-<DT><SAMP>`--referer=<VAR>url</VAR>'</SAMP>
-<DD>
-Include `Referer: <VAR>url</VAR>' header in HTTP request.  Useful for
-retrieving documents with server-side processing that assume they are
-always being retrieved by interactive web browsers and only come out
-properly when Referer is set to one of the pages that point to them.
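-
-For example (URLs illustrative):
-
-
-<PRE>
-wget --referer=http://www.example.com/gallery.html \
-     http://www.example.com/images/photo.jpg
-</PRE>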
-
-<A NAME="IDX68"></A>
-<DT><SAMP>`-s'</SAMP>
-<DD>
-<DT><SAMP>`--save-headers'</SAMP>
-<DD>
-Save the headers sent by the HTTP server to the file, preceding the
-actual contents, with an empty line as the separator.
-
-<A NAME="IDX69"></A>
-<DT><SAMP>`-U <VAR>agent-string</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--user-agent=<VAR>agent-string</VAR>'</SAMP>
-<DD>
-Identify as <VAR>agent-string</VAR> to the HTTP server.
-
-The HTTP protocol allows clients to identify themselves using a
-<CODE>User-Agent</CODE> header field.  This enables distinguishing the
-WWW software, usually for statistical purposes or for tracing of
-protocol violations.  Wget normally identifies as
-<SAMP>`Wget/<VAR>version</VAR>'</SAMP>, <VAR>version</VAR> being the current 
version
-number of Wget.
-
-However, some sites have been known to impose the policy of tailoring
-the output according to the <CODE>User-Agent</CODE>-supplied information.
-While conceptually this is not such a bad idea, it has been abused by
-servers denying information to clients other than <CODE>Mozilla</CODE> or
-Microsoft <CODE>Internet Explorer</CODE>.  This option allows you to change
-the <CODE>User-Agent</CODE> line issued by Wget.  Use of this option is
-discouraged, unless you really know what you are doing.
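-
-If you must, identification as an illustrative browser string looks
-like this:
-
-
-<PRE>
-wget -U 'Mozilla/4.0 (compatible)' http://www.example.com/
-</PRE>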
-</DL>
-
-
-
-<H2><A NAME="SEC10" HREF="wget.html#TOC10">FTP Options</A></H2>
-
-<DL COMPACT>
-
-<DT><SAMP>`-nr'</SAMP>
-<DD>
-<A NAME="IDX70"></A>
- 
-<DT><SAMP>`--dont-remove-listing'</SAMP>
-<DD>
-Don't remove the temporary <TT>`.listing'</TT> files generated by FTP
-retrievals.  Normally, these files contain the raw directory listings
-received from FTP servers.  Not removing them can be useful for
-debugging purposes, or when you want to be able to easily check on the
-contents of remote server directories (e.g. to verify that a mirror
-you're running is complete).
-
-Note that even though Wget writes to a known filename for this file,
-this is not a security hole in the scenario of a user making
-<TT>`.listing'</TT> a symbolic link to <TT>`/etc/passwd'</TT> or something and
-asking <CODE>root</CODE> to run Wget in his or her directory.  Depending on
-the options used, either Wget will refuse to write to <TT>`.listing'</TT>,
-making the globbing/recursion/time-stamping operation fail, or the
-symbolic link will be deleted and replaced with the actual
-<TT>`.listing'</TT> file, or the listing will be written to a
-<TT>`.listing.<VAR>number</VAR>'</TT> file.
-
-Even though this situation isn't a problem, <CODE>root</CODE> should
-never run Wget in a non-trusted user's directory.  A user could do
-something as simple as linking <TT>`index.html'</TT> to <TT>`/etc/passwd'</TT>
-and asking <CODE>root</CODE> to run Wget with <SAMP>`-N'</SAMP> or 
<SAMP>`-r'</SAMP> so the file
-will be overwritten.
-
-<A NAME="IDX71"></A>
-<DT><SAMP>`-g on/off'</SAMP>
-<DD>
-<DT><SAMP>`--glob=on/off'</SAMP>
-<DD>
-Turn FTP globbing on or off.  Globbing means you may use the
-shell-like special characters (<EM>wildcards</EM>), like <SAMP>`*'</SAMP>,
-<SAMP>`?'</SAMP>, <SAMP>`['</SAMP> and <SAMP>`]'</SAMP> to retrieve more than 
one file from the
-same directory at once, like:
-
-
-<PRE>
-wget ftp://gnjilux.srk.fer.hr/*.msg
-</PRE>
-
-By default, globbing will be turned on if the URL contains a
-globbing character.  This option may be used to turn globbing on or off
-permanently.
-
-You may have to quote the URL to protect it from being expanded by
-your shell.  Globbing makes Wget look for a directory listing, which is
-system-specific.  This is why it currently works only with Unix FTP
-servers (and the ones emulating Unix <CODE>ls</CODE> output).
-
-<A NAME="IDX72"></A>
-<DT><SAMP>`--passive-ftp'</SAMP>
-<DD>
-Use the <EM>passive</EM> FTP retrieval scheme, in which the client
-initiates the data connection.  This is sometimes required for FTP
-to work behind firewalls.
-
-<A NAME="IDX73"></A>
-<DT><SAMP>`--retr-symlinks'</SAMP>
-<DD>
-Usually, when retrieving FTP directories recursively and a symbolic
-link is encountered, the linked-to file is not downloaded.  Instead, a
-matching symbolic link is created on the local filesystem.  The
-pointed-to file will not be downloaded unless this recursive retrieval
-would have encountered it separately and downloaded it anyway.
-
-When <SAMP>`--retr-symlinks'</SAMP> is specified, however, symbolic links are
-traversed and the pointed-to files are retrieved.  At this time, this
-option does not cause Wget to traverse symlinks to directories and
-recurse through them, but in the future it should be enhanced to do
-this.
-
-Note that when retrieving a file (not a directory) because it was
-specified on the commandline, rather than because it was recursed to,
-this option has no effect.  Symbolic links are always traversed in this
-case.
-</DL>
-
-
-
-<H2><A NAME="SEC11" HREF="wget.html#TOC11">Recursive Retrieval Options</A></H2>
-
-<DL COMPACT>
-
-<DT><SAMP>`-r'</SAMP>
-<DD>
-<DT><SAMP>`--recursive'</SAMP>
-<DD>
-Turn on recursive retrieving.  See section <A HREF="wget.html#SEC13">Recursive 
Retrieval</A>, for more
-details.
-
-<DT><SAMP>`-l <VAR>depth</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--level=<VAR>depth</VAR>'</SAMP>
-<DD>
-Specify recursion maximum depth level <VAR>depth</VAR> (see section <A 
HREF="wget.html#SEC13">Recursive Retrieval</A>).  The default maximum depth is 
5.
-
-<A NAME="IDX74"></A>
-<A NAME="IDX75"></A>
-<A NAME="IDX76"></A>
-<DT><SAMP>`--delete-after'</SAMP>
-<DD>
-This option tells Wget to delete every single file it downloads,
-<EM>after</EM> having done so.  It is useful for pre-fetching popular
-pages through a proxy, e.g.:
-
-
-<PRE>
-wget -r -nd --delete-after http://whatever.com/~popular/page/
-</PRE>
-
-The <SAMP>`-r'</SAMP> option is to retrieve recursively, and 
<SAMP>`-nd'</SAMP> to not
-create directories.  
-
-Note that <SAMP>`--delete-after'</SAMP> deletes files on the local machine.  It
-does not issue the <SAMP>`DELE'</SAMP> command to remote FTP sites, for
-instance.  Also note that when <SAMP>`--delete-after'</SAMP> is specified,
-<SAMP>`--convert-links'</SAMP> is ignored, so <SAMP>`.orig'</SAMP> files are 
simply not
-created in the first place.
-
-<A NAME="IDX77"></A>
-<A NAME="IDX78"></A>
-<DT><SAMP>`-k'</SAMP>
-<DD>
-<DT><SAMP>`--convert-links'</SAMP>
-<DD>
-After the download is complete, convert the links in the document to
-make them suitable for local viewing.  This affects not only the visible
-hyperlinks, but any part of the document that links to external content,
-such as embedded images, links to style sheets, hyperlinks to non-HTML
-content, etc.
-
-Each link will be changed in one of two ways:
-
-
-<UL>
-<LI>
-
-The links to files that have been downloaded by Wget will be changed to
-refer to the file they point to as a relative link.
-
-Example: if the downloaded file <TT>`/foo/doc.html'</TT> links to
-<TT>`/bar/img.gif'</TT>, also downloaded, then the link in <TT>`doc.html'</TT>
-will be modified to point to <SAMP>`../bar/img.gif'</SAMP>.  This kind of
-transformation works reliably for arbitrary combinations of directories.
-
-<LI>
-
-The links to files that have not been downloaded by Wget will be changed
-to include host name and absolute path of the location they point to.
-
-Example: if the downloaded file <TT>`/foo/doc.html'</TT> links to
-<TT>`/bar/img.gif'</TT> (or to <TT>`../bar/img.gif'</TT>), then the link in
-<TT>`doc.html'</TT> will be modified to point to
-<TT>`http://<VAR>hostname</VAR>/bar/img.gif'</TT>.
-</UL>
-
-Because of this, local browsing works reliably: if a linked file was
-downloaded, the link will refer to its local name; if it was not
-downloaded, the link will refer to its full Internet address rather than
-presenting a broken link.  The fact that the former links are converted
-to relative links ensures that you can move the downloaded hierarchy to
-another directory.
-
-Note that only at the end of the download can Wget know which links have
-been downloaded.  Because of that, the work done by <SAMP>`-k'</SAMP> will be
-performed at the end of all the downloads.
-
-<A NAME="IDX79"></A>
-<DT><SAMP>`-K'</SAMP>
-<DD>
-<DT><SAMP>`--backup-converted'</SAMP>
-<DD>
-When converting a file, back up the original version with a 
<SAMP>`.orig'</SAMP>
-suffix.  Affects the behavior of <SAMP>`-N'</SAMP> (see section <A 
HREF="wget.html#SEC22">HTTP Time-Stamping Internals</A>).
-
-<DT><SAMP>`-m'</SAMP>
-<DD>
-<DT><SAMP>`--mirror'</SAMP>
-<DD>
-Turn on options suitable for mirroring.  This option turns on recursion
-and time-stamping, sets infinite recursion depth and keeps FTP
-directory listings.  It is currently equivalent to
-<SAMP>`-r -N -l inf -nr'</SAMP>.
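-
-In other words, the two commands below are currently interchangeable
-(URL illustrative):
-
-
-<PRE>
-wget -m http://www.example.com/
-wget -r -N -l inf -nr http://www.example.com/
-</PRE>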
-
-<A NAME="IDX80"></A>
-<A NAME="IDX81"></A>
-<DT><SAMP>`-p'</SAMP>
-<DD>
-<DT><SAMP>`--page-requisites'</SAMP>
-<DD>
-This option causes Wget to download all the files that are necessary to
-properly display a given HTML page.  This includes such things as
-inlined images, sounds, and referenced stylesheets.
-
-Ordinarily, when downloading a single HTML page, any requisite documents
-that may be needed to display it properly are not downloaded.  Using
-<SAMP>`-r'</SAMP> together with <SAMP>`-l'</SAMP> can help, but since Wget 
does not
-ordinarily distinguish between external and inlined documents, one is
-generally left with "leaf documents" that are missing their
-requisites.
-
-For instance, say document <TT>`1.html'</TT> contains an 
<CODE>&#60;IMG&#62;</CODE> tag
-referencing <TT>`1.gif'</TT> and an <CODE>&#60;A&#62;</CODE> tag pointing to 
external
-document <TT>`2.html'</TT>.  Say that <TT>`2.html'</TT> is similar but that its
-image is <TT>`2.gif'</TT> and it links to <TT>`3.html'</TT>.  Say this
-continues up to some arbitrarily high number.
-
-If one executes the command:
-
-
-<PRE>
-wget -r -l 2 http://<VAR>site</VAR>/1.html
-</PRE>
-
-then <TT>`1.html'</TT>, <TT>`1.gif'</TT>, <TT>`2.html'</TT>, <TT>`2.gif'</TT>, 
and
-<TT>`3.html'</TT> will be downloaded.  As you can see, <TT>`3.html'</TT> is
-without its requisite <TT>`3.gif'</TT> because Wget is simply counting the
-number of hops (up to 2) away from <TT>`1.html'</TT> in order to determine
-where to stop the recursion.  However, with this command:
-
-
-<PRE>
-wget -r -l 2 -p http://<VAR>site</VAR>/1.html
-</PRE>
-
-all the above files <EM>and</EM> <TT>`3.html'</TT>'s requisite <TT>`3.gif'</TT>
-will be downloaded.  Similarly,
-
-
-<PRE>
-wget -r -l 1 -p http://<VAR>site</VAR>/1.html
-</PRE>
-
-will cause <TT>`1.html'</TT>, <TT>`1.gif'</TT>, <TT>`2.html'</TT>, and 
<TT>`2.gif'</TT>
-to be downloaded.  One might think that:
-
-
-<PRE>
-wget -r -l 0 -p http://<VAR>site</VAR>/1.html
-</PRE>
-
-would download just <TT>`1.html'</TT> and <TT>`1.gif'</TT>, but unfortunately
-this is not the case, because <SAMP>`-l 0'</SAMP> is equivalent to
-<SAMP>`-l inf'</SAMP>---that is, infinite recursion.  To download a single HTML
-page (or a handful of them, all specified on the commandline or in a
-<SAMP>`-i'</SAMP> URL input file) and its (or their) requisites, simply leave 
off
-<SAMP>`-r'</SAMP> and <SAMP>`-l'</SAMP>:
-
-
-<PRE>
-wget -p http://<VAR>site</VAR>/1.html
-</PRE>
-
-Note that Wget will behave as if <SAMP>`-r'</SAMP> had been specified, but only
-that single page and its requisites will be downloaded.  Links from that
-page to external documents will not be followed.  Actually, to download
-a single page and all its requisites (even if they exist on separate
-websites), and make sure the lot displays properly locally, this author
-likes to use a few options in addition to <SAMP>`-p'</SAMP>:
-
-
-<PRE>
-wget -E -H -k -K -p http://<VAR>site</VAR>/<VAR>document</VAR>
-</PRE>
-
-To finish off this topic, it's worth knowing that Wget's idea of an
-external document link is any URL specified in an <CODE>&#60;A&#62;</CODE> 
tag, an
-<CODE>&#60;AREA&#62;</CODE> tag, or a <CODE>&#60;LINK&#62;</CODE> tag other 
than <CODE>&#60;LINK
-REL="stylesheet"&#62;</CODE>.
-</DL>
-
-
-
-<H2><A NAME="SEC12" HREF="wget.html#TOC12">Recursive Accept/Reject 
Options</A></H2>
-
-<DL COMPACT>
-
-<DT><SAMP>`-A <VAR>acclist</VAR> --accept <VAR>acclist</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`-R <VAR>rejlist</VAR> --reject <VAR>rejlist</VAR>'</SAMP>
-<DD>
-Specify comma-separated lists of file name suffixes or patterns to
-accept or reject (see section <A HREF="wget.html#SEC16">Types of Files</A> for 
more details).
-
-<DT><SAMP>`-D <VAR>domain-list</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--domains=<VAR>domain-list</VAR>'</SAMP>
-<DD>
-Set domains to be followed.  <VAR>domain-list</VAR> is a comma-separated list
-of domains.  Note that it does <EM>not</EM> turn on <SAMP>`-H'</SAMP>.
-
-<DT><SAMP>`--exclude-domains <VAR>domain-list</VAR>'</SAMP>
-<DD>
-Specify the domains that are <EM>not</EM> to be followed.
-(see section <A HREF="wget.html#SEC15">Spanning Hosts</A>).
-
-<A NAME="IDX82"></A>
-<DT><SAMP>`--follow-ftp'</SAMP>
-<DD>
-Follow FTP links from HTML documents.  Without this option,
-Wget will ignore all the FTP links.
-
-<A NAME="IDX83"></A>
-<DT><SAMP>`--follow-tags=<VAR>list</VAR>'</SAMP>
-<DD>
-Wget has an internal table of HTML tag / attribute pairs that it
-considers when looking for linked documents during a recursive
-retrieval.  If a user wants only a subset of those tags to be
-considered, however, he or she should specify such tags in a
-comma-separated <VAR>list</VAR> with this option.
-
-<DT><SAMP>`-G <VAR>list</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--ignore-tags=<VAR>list</VAR>'</SAMP>
-<DD>
-This is the opposite of the <SAMP>`--follow-tags'</SAMP> option.  To skip
-certain HTML tags when recursively looking for documents to download,
-specify them in a comma-separated <VAR>list</VAR>.  
-
-In the past, the <SAMP>`-G'</SAMP> option was the best bet for downloading a
-single page and its requisites, using a commandline like:
-
-
-<PRE>
-wget -Ga,area -H -k -K -r http://<VAR>site</VAR>/<VAR>document</VAR>
-</PRE>
-
-However, the author of this option came across a page with tags like
-<CODE>&#60;LINK REL="home" HREF="/"&#62;</CODE> and came to the realization 
that
-<SAMP>`-G'</SAMP> was not enough.  One can't just tell Wget to ignore
-<CODE>&#60;LINK&#62;</CODE>, because then stylesheets will not be downloaded.  
Now the
-best bet for downloading a single page and its requisites is the
-dedicated <SAMP>`--page-requisites'</SAMP> option.
-
-<DT><SAMP>`-H'</SAMP>
-<DD>
-<DT><SAMP>`--span-hosts'</SAMP>
-<DD>
-Enable spanning across hosts when doing recursive retrieving
-(see section <A HREF="wget.html#SEC15">Spanning Hosts</A>).
-
-<DT><SAMP>`-L'</SAMP>
-<DD>
-<DT><SAMP>`--relative'</SAMP>
-<DD>
-Follow relative links only.  Useful for retrieving a specific home page
-without any distractions, not even those from the same hosts
-(see section <A HREF="wget.html#SEC18">Relative Links</A>).
-
-<DT><SAMP>`-I <VAR>list</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--include-directories=<VAR>list</VAR>'</SAMP>
-<DD>
-Specify a comma-separated list of directories you wish to follow when
-downloading (see section <A HREF="wget.html#SEC17">Directory-Based Limits</A> 
for more details.)  Elements
-of <VAR>list</VAR> may contain wildcards.
-
-<DT><SAMP>`-X <VAR>list</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--exclude-directories=<VAR>list</VAR>'</SAMP>
-<DD>
-Specify a comma-separated list of directories you wish to exclude from
-download (see section <A HREF="wget.html#SEC17">Directory-Based Limits</A> for 
more details.)  Elements of
-<VAR>list</VAR> may contain wildcards.
-
-<DT><SAMP>`-np'</SAMP>
-<DD>
-<DT><SAMP>`--no-parent'</SAMP>
-<DD>
-Do not ever ascend to the parent directory when retrieving recursively.
-This is a useful option, since it guarantees that only the files
-<EM>below</EM> a certain hierarchy will be downloaded.
-See section <A HREF="wget.html#SEC17">Directory-Based Limits</A>, for more 
details.
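-
-For example, the following (URL illustrative) retrieves everything
-under <TT>`~jan/'</TT> but never ascends to the rest of the server:
-
-
-<PRE>
-wget -r -np http://www.example.com/~jan/
-</PRE>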
-</DL>
-
-
-
-<H1><A NAME="SEC13" HREF="wget.html#TOC13">Recursive Retrieval</A></H1>
-<P>
-<A NAME="IDX84"></A>
-<A NAME="IDX85"></A>
-<A NAME="IDX86"></A>
-
-
-<P>
-GNU Wget is capable of traversing parts of the Web (or a single
-HTTP or FTP server), following links and directory structure.
-We refer to this as to <EM>recursive retrieving</EM>, or <EM>recursion</EM>.
-
-
-<P>
-With HTTP URLs, Wget retrieves and parses the HTML document at
-the given URL, retrieving the files that document refers to,
-through markup like <CODE>href</CODE> or
-<CODE>src</CODE>.  If the freshly downloaded file is also of type
-<CODE>text/html</CODE>, it will be parsed and followed further.
-
-
-<P>
-Recursive retrieval of HTTP and HTML content is
-<EM>breadth-first</EM>.  This means that Wget first downloads the requested
-HTML document, then the documents linked from that document, then the
-documents linked by them, and so on.  In other words, Wget first
-downloads the documents at depth 1, then those at depth 2, and so on
-until the specified maximum depth.
-
-
-<P>
-The maximum <EM>depth</EM> to which the retrieval may descend is specified
-with the <SAMP>`-l'</SAMP> option.  The default maximum depth is five layers.
-
-
-<P>
-When retrieving an FTP URL recursively, Wget will retrieve all
-the data from the given directory tree (including the subdirectories up
-to the specified depth) on the remote server, creating its mirror image
-locally.  FTP retrieval is also limited by the <CODE>depth</CODE>
-parameter.  Unlike HTTP recursion, FTP recursion is performed
-depth-first.
-
-
-<P>
-By default, Wget will create a local directory tree, corresponding to
-the one found on the remote server.
-
-
-<P>
-Recursive retrieval has a number of applications, the most
-important of which is mirroring.  It is also useful for WWW
-presentations, and any other situation where a slow network
-connection should be bypassed by storing the files locally.
-
-
-<P>
-You should be warned that recursive downloads can overload the remote
-servers.  Because of that, many administrators frown upon them and may
-ban access from your site if they detect very fast downloads of large
-amounts of content.  When downloading from Internet servers, consider
-using the <SAMP>`-w'</SAMP> option to introduce a delay between accesses to the
-server.  The download will take a while longer, but the server
-administrator will not be alarmed by your rudeness.
-
-
-<P>
-Of course, recursive download may cause problems on your machine.  If
-left to run unchecked, it can easily fill up the disk.  If downloading
-from a local network, it can also consume bandwidth on the system, as
-well as memory and CPU.
-
-
-<P>
-Try to specify the criteria that match the kind of download you are
-trying to achieve.  If you want to download only one page, use
-<SAMP>`--page-requisites'</SAMP> without any additional recursion.  If you want
-to download things under one directory, use <SAMP>`-np'</SAMP> to avoid
-downloading things from other directories.  If you want to download all
-the files from one directory, use <SAMP>`-l 1'</SAMP> to make sure the 
recursion
-depth never exceeds one.  See section <A HREF="wget.html#SEC14">Following 
Links</A>, for more information
-about this.
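-
-Putting these guidelines together, a polite, tightly-scoped download
-might look like this (URL illustrative):
-
-
-<PRE>
-wget -r -l 1 -np -w 2 http://www.example.com/docs/
-</PRE>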
-
-
-<P>
-Recursive retrieval should be used with care.  Don't say you were not
-warned.
-
-
-
-
-<H1><A NAME="SEC14" HREF="wget.html#TOC14">Following Links</A></H1>
-<P>
-<A NAME="IDX87"></A>
-<A NAME="IDX88"></A>
-
-
-<P>
-When retrieving recursively, one does not wish to retrieve loads of
-unnecessary data.  Most of the time the users bear in mind exactly what
-they want to download, and want Wget to follow only specific links.
-
-
-<P>
-For example, if you wish to download the music archive from
-<SAMP>`fly.srk.fer.hr'</SAMP>, you will not want to download all the home pages
-that happen to be referenced by an obscure part of the archive.
-
-
-<P>
-Wget possesses several mechanisms that allow you to fine-tune which
-links it will follow.
-
-
-
-
-<H2><A NAME="SEC15" HREF="wget.html#TOC15">Spanning Hosts</A></H2>
-<P>
-<A NAME="IDX89"></A>
-<A NAME="IDX90"></A>
-
-
-<P>
-Wget's recursive retrieval normally refuses to visit hosts different
-than the one you specified on the command line.  This is a reasonable
-default; without it, every retrieval would have the potential to turn
-your Wget into a small version of Google.
-
-
-<P>
-However, visiting different hosts, or <EM>host spanning,</EM> is sometimes
-a useful option.  Maybe the images are served from a different server.
-Maybe you're mirroring a site that consists of pages interlinked between
-three servers.  Maybe the server has two equivalent names, and the HTML
-pages refer to both interchangeably.
-
-
-<DL COMPACT>
-
-<DT>Span to any host---<SAMP>`-H'</SAMP>
-<DD>
-The <SAMP>`-H'</SAMP> option turns on host spanning, thus allowing Wget's
-recursive run to visit any host referenced by a link.  Unless sufficient
-recursion-limiting criteria are applied, these foreign hosts will
-typically link to yet more hosts, and so on until Wget ends up sucking
-up much more data than you have intended.
-
-<DT>Limit spanning to certain domains---<SAMP>`-D'</SAMP>
-<DD>
-The <SAMP>`-D'</SAMP> option allows you to specify the domains that will be
-followed, thus limiting the recursion only to the hosts that belong to
-these domains.  Obviously, this makes sense only in conjunction with
-<SAMP>`-H'</SAMP>.  A typical example would be downloading the contents of
-<SAMP>`www.server.com'</SAMP>, but allowing downloads from
-<SAMP>`images.server.com'</SAMP>, etc.:
-
-
-<PRE>
-wget -rH -Dserver.com http://www.server.com/
-</PRE>
-
-You can specify more than one address by separating them with a comma,
-e.g. <SAMP>`-Ddomain1.com,domain2.com'</SAMP>.
-
-<DT>Keep download off certain domains---<SAMP>`--exclude-domains'</SAMP>
-<DD>
-If there are domains you want to exclude specifically, you can do it
-with <SAMP>`--exclude-domains'</SAMP>, which accepts the same type of
-arguments as <SAMP>`-D'</SAMP>, but will <EM>exclude</EM> all the listed
-domains.  For example, if you want to download all the hosts from the
-<SAMP>`foo.edu'</SAMP> domain, with the exception of
-<SAMP>`sunsite.foo.edu'</SAMP>, you can do it like this:
-
-
-<PRE>
-wget -rH -Dfoo.edu --exclude-domains sunsite.foo.edu \
-    http://www.foo.edu/
-</PRE>
-
-</DL>
-
-
-
-<H2><A NAME="SEC16" HREF="wget.html#TOC16">Types of Files</A></H2>
-<P>
-<A NAME="IDX91"></A>
-
-
-<P>
-When downloading material from the web, you will often want to restrict
-the retrieval to only certain file types.  For example, if you are
-interested in downloading GIFs, you will not be overjoyed to get
-loads of PostScript documents, and vice versa.
-
-
-<P>
-Wget offers two options to deal with this problem.  Each option
-description lists a short name, a long name, and the equivalent command
-in <TT>`.wgetrc'</TT>.
-
-
-<P>
-<A NAME="IDX92"></A>
-<A NAME="IDX93"></A>
-<A NAME="IDX94"></A>
-<A NAME="IDX95"></A>
-<DL COMPACT>
-
-<DT><SAMP>`-A <VAR>acclist</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--accept <VAR>acclist</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`accept = <VAR>acclist</VAR>'</SAMP>
-<DD>
-The argument to the <SAMP>`--accept'</SAMP> option is a list of file suffixes
-or patterns that Wget will download during recursive retrieval.  A suffix
-is the ending part of a file name, and consists of "normal" letters,
-e.g. <SAMP>`gif'</SAMP> or <SAMP>`.jpg'</SAMP>.  A matching pattern contains
-shell-like wildcards, e.g. <SAMP>`books*'</SAMP> or
-<SAMP>`zelazny*196[0-9]*'</SAMP>.
-
-So, specifying <SAMP>`wget -A gif,jpg'</SAMP> will make Wget download only the
-files ending with <SAMP>`gif'</SAMP> or <SAMP>`jpg'</SAMP>, i.e. GIFs and
-JPEGs.  On the other hand, <SAMP>`wget -A "zelazny*196[0-9]*"'</SAMP> will
-download only files beginning with <SAMP>`zelazny'</SAMP> and containing
-numbers from 1960 to 1969 anywhere within their names.  Consult your
-shell's manual for a description of how pattern matching works.
-
-Of course, any number of suffixes and patterns can be combined into a
-comma-separated list, and given as an argument to <SAMP>`-A'</SAMP>.
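-
-For example, a single <SAMP>`-A'</SAMP> argument can mix suffixes and
-patterns (the host and directory here are hypothetical):
-
-
-<PRE>
-wget -r -A .gif,.jpg,"zelazny*196[0-9]*" http://host/directory/
-</PRE>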
-
-<A NAME="IDX96"></A>
-<A NAME="IDX97"></A>
-<A NAME="IDX98"></A>
-<A NAME="IDX99"></A>
-<DT><SAMP>`-R <VAR>rejlist</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--reject <VAR>rejlist</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`reject = <VAR>rejlist</VAR>'</SAMP>
-<DD>
-The <SAMP>`--reject'</SAMP> option works the same way as
-<SAMP>`--accept'</SAMP>, only its logic is the reverse; Wget will download
-all files <EM>except</EM> the ones matching the suffixes (or patterns) in
-the list.
-
-So, if you want to download a whole page except for the cumbersome
-MPEGs and .AU files, you can use <SAMP>`wget -R mpg,mpeg,au'</SAMP>.
-Analogously, to download all files except the ones beginning with
-<SAMP>`bjork'</SAMP>, use <SAMP>`wget -R "bjork*"'</SAMP>.  The quotes are to
-prevent expansion by the shell.
-</DL>
-
-<P>
-The <SAMP>`-A'</SAMP> and <SAMP>`-R'</SAMP> options may be combined to achieve
-even better fine-tuning of which files to retrieve.  E.g. <SAMP>`wget -A
-"*zelazny*" -R .ps'</SAMP> will download all the files having
-<SAMP>`zelazny'</SAMP> as a part of their name, but <EM>not</EM> the
-PostScript files.
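-
-
-<P>
-Spelled out as a full command against a hypothetical host, that
-combination looks like this:
-
-
-<PRE>
-wget -r -A "*zelazny*" -R .ps http://host/
-</PRE>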
-
-
-<P>
-Note that these two options do not affect the downloading of HTML
-files; Wget must load all the HTMLs to know where to go at
-all--recursive retrieval would make no sense otherwise.
-
-
-
-
-<H2><A NAME="SEC17" HREF="wget.html#TOC17">Directory-Based Limits</A></H2>
-<P>
-<A NAME="IDX100"></A>
-<A NAME="IDX101"></A>
-
-
-<P>
-Regardless of other link-following facilities, it is often useful to
-restrict which files are retrieved based on the directories those
-files are placed in.  There can be many reasons for this--the
-home pages may be organized in a reasonable directory structure; or some
-directories may contain useless information, e.g. the <TT>`/cgi-bin'</TT> or
-<TT>`/dev'</TT> directories.
-
-
-<P>
-Wget offers three different options to deal with this requirement.  Each
-option description lists a short name, a long name, and the equivalent
-command in <TT>`.wgetrc'</TT>.
-
-
-<P>
-<A NAME="IDX102"></A>
-<A NAME="IDX103"></A>
-<A NAME="IDX104"></A>
-<DL COMPACT>
-
-<DT><SAMP>`-I <VAR>list</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--include <VAR>list</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`include_directories = <VAR>list</VAR>'</SAMP>
-<DD>
-The <SAMP>`-I'</SAMP> option accepts a comma-separated list of directories to
-be included in the retrieval.  Any other directories will simply be
-ignored.  The directories are absolute paths.
-
-So, if you wish to download from <SAMP>`http://host/people/bozo/'</SAMP>
-following only links to bozo's colleagues in the <TT>`/people'</TT>
-directory and the bogus scripts in <TT>`/cgi-bin'</TT>, you can specify:
-
-
-<PRE>
-wget -I /people,/cgi-bin http://host/people/bozo/
-</PRE>
-
-<A NAME="IDX105"></A>
-<A NAME="IDX106"></A>
-<A NAME="IDX107"></A>
-<DT><SAMP>`-X <VAR>list</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--exclude <VAR>list</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`exclude_directories = <VAR>list</VAR>'</SAMP>
-<DD>
-The <SAMP>`-X'</SAMP> option is exactly the reverse of <SAMP>`-I'</SAMP>---it
-specifies a list of directories <EM>excluded</EM> from the download.
-E.g. if you do not want Wget to download things from the
-<TT>`/cgi-bin'</TT> directory, specify <SAMP>`-X /cgi-bin'</SAMP> on the
-command line.
-
-As with <SAMP>`-A'</SAMP>/<SAMP>`-R'</SAMP>, these two options can be combined
-to get even finer control over which subdirectories are downloaded.
-E.g. if you want to load all the files from the <TT>`/pub'</TT> hierarchy
-except for <TT>`/pub/worthless'</TT>, specify
-<SAMP>`-I/pub -X/pub/worthless'</SAMP>.
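-
-As a complete command line (against a hypothetical host), that would be:
-
-
-<PRE>
-wget -r -I/pub -X/pub/worthless http://host/pub/
-</PRE>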
-
-<A NAME="IDX108"></A>
-<DT><SAMP>`-np'</SAMP>
-<DD>
-<DT><SAMP>`--no-parent'</SAMP>
-<DD>
-<DT><SAMP>`no_parent = on'</SAMP>
-<DD>
-The simplest, and often very useful, way of limiting directories is
-disallowing retrieval of links that refer to the hierarchy
-<EM>above</EM> the beginning directory, i.e. disallowing ascent to the
-parent directory/directories.
-
-The <SAMP>`--no-parent'</SAMP> option (short <SAMP>`-np'</SAMP>) is useful in
-this case.  Using it guarantees that you will never leave the existing
-hierarchy.  Suppose you issue Wget with:
-
-
-<PRE>
-wget -r --no-parent http://somehost/~luzer/my-archive/
-</PRE>
-
-You may rest assured that none of the references to
-<TT>`/~his-girls-homepage/'</TT> or <TT>`/~luzer/all-my-mpegs/'</TT> will be
-followed.  Only the archive you are interested in will be downloaded.
-Essentially, <SAMP>`--no-parent'</SAMP> is similar to
-<SAMP>`-I/~luzer/my-archive'</SAMP>, only it handles redirections in a more
-intelligent fashion.
-</DL>
-
-
-
-<H2><A NAME="SEC18" HREF="wget.html#TOC18">Relative Links</A></H2>
-<P>
-<A NAME="IDX109"></A>
-
-
-<P>
-When <SAMP>`-L'</SAMP> is turned on, only the relative links are ever followed.
-Relative links are here defined as those that do not refer to the web
-server root.  For example, these links are relative:
-
-
-
-<PRE>
-&#60;a href="foo.gif"&#62;
-&#60;a href="foo/bar.gif"&#62;
-&#60;a href="../foo/bar.gif"&#62;
-</PRE>
-
-<P>
-These links are not relative:
-
-
-
-<PRE>
-&#60;a href="/foo.gif"&#62;
-&#60;a href="/foo/bar.gif"&#62;
-&#60;a href="http://www.server.com/foo/bar.gif"&#62;
-</PRE>
-
-<P>
-Using this option guarantees that recursive retrieval will not span
-hosts, even without <SAMP>`-H'</SAMP>.  In simple cases it also allows
-downloads to "just work" without having to convert links.
-
-
-<P>
-This option is probably not very useful and might be removed in a future
-release.
-
-
-
-
-<H2><A NAME="SEC19" HREF="wget.html#TOC19">Following FTP Links</A></H2>
-<P>
-<A NAME="IDX110"></A>
-
-
-<P>
-The rules for FTP are somewhat specific, as it is necessary for
-them to be.  FTP links in HTML documents are often included
-for purposes of reference, and it is often inconvenient to download them
-by default.
-
-
-<P>
-To have FTP links followed from HTML documents, you need to
-specify the <SAMP>`--follow-ftp'</SAMP> option.  Having done that, FTP
-links will span hosts regardless of the <SAMP>`-H'</SAMP> setting.  This is
-logical, as FTP links rarely point to the same host where the HTTP
-server resides.  For similar reasons, the <SAMP>`-L'</SAMP> option has no
-effect on such downloads.  On the other hand, domain acceptance
-(<SAMP>`-D'</SAMP>) and suffix rules (<SAMP>`-A'</SAMP> and <SAMP>`-R'</SAMP>)
-apply normally.
-
-
-<P>
-Also note that followed links to FTP directories will not be
-retrieved recursively further.
-
-
-
-
-<H1><A NAME="SEC20" HREF="wget.html#TOC20">Time-Stamping</A></H1>
-<P>
-<A NAME="IDX111"></A>
-<A NAME="IDX112"></A>
-<A NAME="IDX113"></A>
-<A NAME="IDX114"></A>
-
-
-<P>
-One of the most important aspects of mirroring information from the
-Internet is updating your archives.
-
-
-<P>
-Downloading the whole archive again and again, just to replace a few
-changed files is expensive, both in terms of wasted bandwidth and money,
-and the time to do the update.  This is why all the mirroring tools
-offer the option of incremental updating.
-
-
-<P>
-Such an updating mechanism means that the remote server is scanned in
-search of <EM>new</EM> files.  Only those new files will be downloaded in
-the place of the old ones.
-
-
-<P>
-A file is considered new if one of these two conditions is met:
-
-
-
-<OL>
-<LI>
-
-A file of that name does not already exist locally.
-
-<LI>
-
-A file of that name does exist, but the remote file was modified more
-recently than the local file.
-</OL>
-
-<P>
-To implement this, the program needs to be aware of the time of last
-modification of both local and remote files.  We call this information the
-<EM>time-stamp</EM> of a file.
-
-
-<P>
-The time-stamping in GNU Wget is turned on using the
-<SAMP>`--timestamping'</SAMP> (<SAMP>`-N'</SAMP>) option, or through the
-<CODE>timestamping = on</CODE> directive in <TT>`.wgetrc'</TT>.  With this
-option, for each file it intends to download, Wget will check whether a
-local file of the same name exists.  If it does, and the remote file is
-not newer, Wget will not download it.
-
-
-<P>
-If the local file does not exist, or the sizes of the files do not
-match, Wget will download the remote file no matter what the time-stamps
-say.
-
-
-
-
-<H2><A NAME="SEC21" HREF="wget.html#TOC21">Time-Stamping Usage</A></H2>
-<P>
-<A NAME="IDX115"></A>
-<A NAME="IDX116"></A>
-
-
-<P>
-The usage of time-stamping is simple.  Say you would like to download a
-file so that it keeps its date of modification.
-
-
-
-<PRE>
-wget -S http://www.gnu.ai.mit.edu/
-</PRE>
-
-<P>
-A simple <CODE>ls -l</CODE> shows that the time stamp on the local file equals
-the state of the <CODE>Last-Modified</CODE> header, as returned by the server.
-As you can see, the time-stamping info is preserved locally, even
-without <SAMP>`-N'</SAMP> (at least for HTTP).
-
-
-<P>
-Several days later, you would like Wget to check if the remote file has
-changed, and download it if it has.
-
-
-
-<PRE>
-wget -N http://www.gnu.ai.mit.edu/
-</PRE>
-
-<P>
-Wget will ask the server for the last-modified date.  If the local file
-has the same timestamp as the remote file, or a newer one, the remote
-file will not be re-fetched.  However, if the remote file is more
-recent, Wget will proceed to fetch it.
-
-
-<P>
-The same goes for FTP.  For example:
-
-
-
-<PRE>
-wget "ftp://ftp.ifi.uio.no/pub/emacs/gnus/*"
-</PRE>
-
-<P>
-(The quotes around that URL are to prevent the shell from trying to
-interpret the <SAMP>`*'</SAMP>.)
-
-
-<P>
-After download, a local directory listing will show that the timestamps
-match those on the remote server.  Reissuing the command with <SAMP>`-N'</SAMP>
-will make Wget re-fetch <EM>only</EM> the files that have been modified
-since the last download.
-
-
-<P>
-If you wished to mirror the GNU archive every week, you would use a
-command like the following, weekly:
-
-
-
-<PRE>
-wget --timestamping -r ftp://ftp.gnu.org/pub/gnu/
-</PRE>
-
-<P>
-Note that time-stamping will only work for files for which the server
-gives a timestamp.  For HTTP, this depends on getting a
-<CODE>Last-Modified</CODE> header.  For FTP, this depends on getting a
-directory listing with dates in a format that Wget can parse
-(see section <A HREF="wget.html#SEC23">FTP Time-Stamping Internals</A>).
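-
-
-<P>
-You can check whether a server supplies <CODE>Last-Modified</CODE> by
-inspecting the response headers with <SAMP>`-S'</SAMP> (the host name here
-is hypothetical):
-
-
-<PRE>
-wget -S --spider http://host/file.html
-</PRE>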
-
-
-
-
-<H2><A NAME="SEC22" HREF="wget.html#TOC22">HTTP Time-Stamping Internals</A></H2>
-<P>
-<A NAME="IDX117"></A>
-
-
-<P>
-Time-stamping in HTTP is implemented by checking the
-<CODE>Last-Modified</CODE> header.  If you wish to retrieve the file
-<TT>`foo.html'</TT> through HTTP, Wget will check whether
-<TT>`foo.html'</TT> exists locally.  If it doesn't, <TT>`foo.html'</TT> will be
-retrieved unconditionally.
-
-
-<P>
-If the file does exist locally, Wget will first check its local
-time-stamp (similar to the way <CODE>ls -l</CODE> checks it), and then send a
-<CODE>HEAD</CODE> request to the remote server, demanding the information on
-the remote file.
-
-
-<P>
-The <CODE>Last-Modified</CODE> header is examined to find which file was
-modified more recently (which makes it "newer").  If the remote file
-is newer, it will be downloaded; if it is older, Wget will give
-up.<A NAME="DOCF2" HREF="wget.html#FOOT2">(2)</A>
-
-
-<P>
-When <SAMP>`--backup-converted'</SAMP> (<SAMP>`-K'</SAMP>) is specified in
-conjunction with <SAMP>`-N'</SAMP>, server file <SAMP>`<VAR>X</VAR>'</SAMP> is
-compared to local file <SAMP>`<VAR>X</VAR>.orig'</SAMP>, if extant, rather
-than being compared to local file <SAMP>`<VAR>X</VAR>'</SAMP>, which will
-always differ if it's been converted by <SAMP>`--convert-links'</SAMP>
-(<SAMP>`-k'</SAMP>).
-
-
-<P>
-Arguably, HTTP time-stamping should be implemented using the
-<CODE>If-Modified-Since</CODE> request.
-
-
-
-
-<H2><A NAME="SEC23" HREF="wget.html#TOC23">FTP Time-Stamping Internals</A></H2>
-<P>
-<A NAME="IDX118"></A>
-
-
-<P>
-In theory, FTP time-stamping works much the same as HTTP, only
-FTP has no headers--time-stamps must be ferreted out of directory
-listings.
-
-
-<P>
-If an FTP download is recursive or uses globbing, Wget will use the
-FTP <CODE>LIST</CODE> command to get a file listing for the directory
-containing the desired file(s).  It will try to analyze the listing,
-treating it like Unix <CODE>ls -l</CODE> output, extracting the time-stamps.
-The rest is exactly the same as for HTTP.  Note that when
-retrieving individual files from an FTP server without using
-globbing or recursion, listing files will not be downloaded (and thus
-files will not be time-stamped) unless <SAMP>`-N'</SAMP> is specified.
-
-
-<P>
-The assumption that every directory listing is a Unix-style listing may
-sound extremely constraining, but in practice it is not, as many
-non-Unix FTP servers use the Unixoid listing format because most
-(all?) of the clients understand it.  Bear in mind that RFC 959
-defines no standard way to get a file list, let alone the time-stamps.
-We can only hope that a future standard will define this.
-
-
-<P>
-Another non-standard solution is the <CODE>MDTM</CODE> command,
-supported by some FTP servers (including the popular
-<CODE>wu-ftpd</CODE>), which returns the exact time of the specified file.
-Wget may support this command in the future.
-
-
-
-
-<H1><A NAME="SEC24" HREF="wget.html#TOC24">Startup File</A></H1>
-<P>
-<A NAME="IDX119"></A>
-<A NAME="IDX120"></A>
-<A NAME="IDX121"></A>
-<A NAME="IDX122"></A>
-<A NAME="IDX123"></A>
-
-
-<P>
-Once you know how to change default settings of Wget through command
-line arguments, you may wish to make some of those settings permanent.
-You can do that in a convenient way by creating the Wget startup
-file---<TT>`.wgetrc'</TT>.
-
-
-<P>
-While <TT>`.wgetrc'</TT> is the "main" initialization file, it is
-convenient to have a special facility for storing passwords.  Thus Wget
-reads and interprets the contents of <TT>`$HOME/.netrc'</TT>, if it finds
-it.  The <TT>`.netrc'</TT> format is described in your system manuals.
-
-
-<P>
-Wget reads <TT>`.wgetrc'</TT> upon startup, recognizing a limited set of
-commands.
-
-
-
-
-<H2><A NAME="SEC25" HREF="wget.html#TOC25">Wgetrc Location</A></H2>
-<P>
-<A NAME="IDX124"></A>
-<A NAME="IDX125"></A>
-
-
-<P>
-When initializing, Wget will look for a <EM>global</EM> startup file,
-<TT>`/usr/local/etc/wgetrc'</TT> by default (or some prefix other than
-<TT>`/usr/local'</TT>, if Wget was not installed there) and read commands
-from there, if it exists.
-
-
-<P>
-Then it will look for the user's file.  If the environment variable
-<CODE>WGETRC</CODE> is set, Wget will try to load that file.  Failing that, no
-further attempts will be made.
-
-
-<P>
-If <CODE>WGETRC</CODE> is not set, Wget will try to load <TT>`$HOME/.wgetrc'</TT>.
-
-
-<P>
-The fact that the user's settings are loaded after the system-wide ones
-means that in case of collision the user's wgetrc <EM>overrides</EM> the
-system-wide wgetrc (in <TT>`/usr/local/etc/wgetrc'</TT> by default).
-Fascist admins, away!
-
-
-
-
-<H2><A NAME="SEC26" HREF="wget.html#TOC26">Wgetrc Syntax</A></H2>
-<P>
-<A NAME="IDX126"></A>
-<A NAME="IDX127"></A>
-
-
-<P>
-The syntax of a wgetrc command is simple:
-
-
-
-<PRE>
-variable = value
-</PRE>
-
-<P>
-The <EM>variable</EM> will also be called <EM>command</EM>.  Valid
-<EM>values</EM> are different for different commands.
-
-
-<P>
-The commands are case-insensitive and underscore-insensitive.  Thus
-<SAMP>`DIr__PrefiX'</SAMP> is the same as <SAMP>`dirprefix'</SAMP>.  Empty
-lines, lines beginning with <SAMP>`#'</SAMP> and lines containing
-white-space only are discarded.
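-
-
-<P>
-For example, all of the following lines set the same command (only one
-of them should appear in a real <TT>`.wgetrc'</TT>; the value is a
-hypothetical directory):
-
-
-<PRE>
-dir_prefix = /tmp/downloads
-dirprefix = /tmp/downloads
-DIr__PrefiX = /tmp/downloads
-</PRE>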
-
-
-<P>
-Commands that expect a comma-separated list will clear the list on an
-empty command.  So, if you wish to reset the rejection list specified in
-the global <TT>`wgetrc'</TT>, you can do it with:
-
-
-
-<PRE>
-reject =
-</PRE>
-
-
-
-<H2><A NAME="SEC27" HREF="wget.html#TOC27">Wgetrc Commands</A></H2>
-<P>
-<A NAME="IDX128"></A>
-
-
-<P>
-The complete set of commands is listed below.  Legal values are listed
-after the <SAMP>`='</SAMP>.  Simple Boolean values can be set or unset using
-<SAMP>`on'</SAMP> and <SAMP>`off'</SAMP> or <SAMP>`1'</SAMP> and <SAMP>`0'</SAMP>.
-A fancier kind of Boolean allowed in some cases is the <EM>lockable
-Boolean</EM>, which may be set to <SAMP>`on'</SAMP>, <SAMP>`off'</SAMP>,
-<SAMP>`always'</SAMP>, or <SAMP>`never'</SAMP>.  If an option is set to
-<SAMP>`always'</SAMP> or <SAMP>`never'</SAMP>, that value will be locked in
-for the duration of the Wget invocation--commandline options will not
-override it.
-
-
-<P>
-Some commands take pseudo-arbitrary values.  <VAR>address</VAR> values can be
-hostnames or dotted-quad IP addresses.  <VAR>n</VAR> can be any positive
-integer, or <SAMP>`inf'</SAMP> for infinity, where appropriate.
-<VAR>string</VAR> values can be any non-empty string.
-
-
-<P>
-Most of these commands have commandline equivalents (see section
-<A HREF="wget.html#SEC2">Invoking</A>), though some of the more obscure
-or rarely used ones do not.
-
-
-<DL COMPACT>
-
-<DT>accept/reject = <VAR>string</VAR>
-<DD>
-Same as <SAMP>`-A'</SAMP>/<SAMP>`-R'</SAMP> (see section
-<A HREF="wget.html#SEC16">Types of Files</A>).
-
-<DT>add_hostdir = on/off
-<DD>
-Enable/disable host-prefixed file names.  <SAMP>`-nH'</SAMP> disables it.
-
-<DT>continue = on/off
-<DD>
-If set to on, force continuation of preexistent partially retrieved
-files.  See <SAMP>`-c'</SAMP> before setting it.
-
-<DT>background = on/off
-<DD>
-Enable/disable going to background--the same as <SAMP>`-b'</SAMP> (which
-enables it).
-
-<DT>backup_converted = on/off
-<DD>
-Enable/disable saving pre-converted files with the suffix
-<SAMP>`.orig'</SAMP>---the same as <SAMP>`-K'</SAMP> (which enables it).
-
-<DT>base = <VAR>string</VAR>
-<DD>
-When input URLs read from a file are forced to be interpreted as HTML
-(like <SAMP>`-F'</SAMP>), consider relative URLs as being relative to
-<VAR>string</VAR>---the same as <SAMP>`-B'</SAMP>.
-
-<DT>bind_address = <VAR>address</VAR>
-<DD>
-Bind to <VAR>address</VAR>, like the <SAMP>`--bind-address'</SAMP> option.
-
-<DT>cache = on/off
-<DD>
-When set to off, disallow server-caching.  See the <SAMP>`-C'</SAMP> option.
-
-<DT>convert_links = on/off
-<DD>
-Convert non-relative links locally.  The same as <SAMP>`-k'</SAMP>.
-
-<DT>cookies = on/off
-<DD>
-When set to off, disallow cookies.  See the <SAMP>`--cookies'</SAMP> option.
-
-<DT>load_cookies = <VAR>file</VAR>
-<DD>
-Load cookies from <VAR>file</VAR>.  See <SAMP>`--load-cookies'</SAMP>.
-
-<DT>save_cookies = <VAR>file</VAR>
-<DD>
-Save cookies to <VAR>file</VAR>.  See <SAMP>`--save-cookies'</SAMP>.
-
-<DT>cut_dirs = <VAR>n</VAR>
-<DD>
-Ignore <VAR>n</VAR> remote directory components.
-
-<DT>debug = on/off
-<DD>
-Debug mode, same as <SAMP>`-d'</SAMP>.
-
-<DT>delete_after = on/off
-<DD>
-Delete after download--the same as <SAMP>`--delete-after'</SAMP>.
-
-<DT>dir_prefix = <VAR>string</VAR>
-<DD>
-Top of directory tree--the same as <SAMP>`-P'</SAMP>.
-
-<DT>dirstruct = on/off
-<DD>
-Turning dirstruct on or off--the same as <SAMP>`-x'</SAMP> or 
<SAMP>`-nd'</SAMP>,
-respectively.
-
-<DT>domains = <VAR>string</VAR>
-<DD>
-Same as <SAMP>`-D'</SAMP> (see section
-<A HREF="wget.html#SEC15">Spanning Hosts</A>).
-
-<DT>dot_bytes = <VAR>n</VAR>
-<DD>
-Specify the number of bytes "contained" in a dot, as seen throughout
-the retrieval (1024 by default).  You can postfix the value with
-<SAMP>`k'</SAMP> or <SAMP>`m'</SAMP>, representing kilobytes and megabytes,
-respectively.  With dot settings you can tailor the dot retrieval to
-suit your needs, or you can use the predefined <EM>styles</EM>
-(see section <A HREF="wget.html#SEC7">Download Options</A>).
-
-<DT>dots_in_line = <VAR>n</VAR>
-<DD>
-Specify the number of dots that will be printed in each line throughout
-the retrieval (50 by default).
-
-<DT>dot_spacing = <VAR>n</VAR>
-<DD>
-Specify the number of dots in a single cluster (10 by default).
-
-<DT>exclude_directories = <VAR>string</VAR>
-<DD>
-Specify a comma-separated list of directories you wish to exclude from
-download--the same as <SAMP>`-X'</SAMP> (see section
-<A HREF="wget.html#SEC17">Directory-Based Limits</A>).
-
-<DT>exclude_domains = <VAR>string</VAR>
-<DD>
-Same as <SAMP>`--exclude-domains'</SAMP> (see section
-<A HREF="wget.html#SEC15">Spanning Hosts</A>).
-
-<DT>follow_ftp = on/off
-<DD>
-Follow FTP links from HTML documents--the same as
-<SAMP>`--follow-ftp'</SAMP>.
-
-<DT>follow_tags = <VAR>string</VAR>
-<DD>
-Only follow certain HTML tags when doing a recursive retrieval, just like
-<SAMP>`--follow-tags'</SAMP>.
-
-<DT>force_html = on/off
-<DD>
-If set to on, force the input filename to be regarded as an HTML
-document--the same as <SAMP>`-F'</SAMP>.
-
-<DT>ftp_proxy = <VAR>string</VAR>
-<DD>
-Use <VAR>string</VAR> as FTP proxy, instead of the one specified in
-environment.
-
-<DT>glob = on/off
-<DD>
-Turn globbing on/off--the same as <SAMP>`-g'</SAMP>.
-
-<DT>header = <VAR>string</VAR>
-<DD>
-Define an additional header, like <SAMP>`--header'</SAMP>.
-
-<DT>html_extension = on/off
-<DD>
-Add a <SAMP>`.html'</SAMP> extension to <SAMP>`text/html'</SAMP> files that
-lack one, like <SAMP>`-E'</SAMP>.
-
-<DT>http_passwd = <VAR>string</VAR>
-<DD>
-Set HTTP password.
-
-<DT>http_proxy = <VAR>string</VAR>
-<DD>
-Use <VAR>string</VAR> as HTTP proxy, instead of the one specified in
-environment.
-
-<DT>http_user = <VAR>string</VAR>
-<DD>
-Set HTTP user to <VAR>string</VAR>.
-
-<DT>ignore_length = on/off
-<DD>
-When set to on, ignore <CODE>Content-Length</CODE> header; the same as
-<SAMP>`--ignore-length'</SAMP>.
-
-<DT>ignore_tags = <VAR>string</VAR>
-<DD>
-Ignore certain HTML tags when doing a recursive retrieval, just like
-<SAMP>`-G'</SAMP> / <SAMP>`--ignore-tags'</SAMP>.
-
-<DT>include_directories = <VAR>string</VAR>
-<DD>
-Specify a comma-separated list of directories you wish to follow when
-downloading--the same as <SAMP>`-I'</SAMP>.
-
-<DT>input = <VAR>string</VAR>
-<DD>
-Read the URLs from <VAR>string</VAR>, like <SAMP>`-i'</SAMP>.
-
-<DT>kill_longer = on/off
-<DD>
-Consider data longer than specified in the <CODE>Content-Length</CODE>
-header as invalid (and retry getting it).  The default behaviour is to
-save as much data as there is, provided the amount is at least equal to
-the value in <CODE>Content-Length</CODE>.
-
-<DT>logfile = <VAR>string</VAR>
-<DD>
-Set logfile--the same as <SAMP>`-o'</SAMP>.
-
-<DT>login = <VAR>string</VAR>
-<DD>
-Your user name on the remote machine, for FTP.  Defaults to
-<SAMP>`anonymous'</SAMP>.
-
-<DT>mirror = on/off
-<DD>
-Turn mirroring on/off.  The same as <SAMP>`-m'</SAMP>.
-
-<DT>netrc = on/off
-<DD>
-Turn reading netrc on or off.
-
-<DT>noclobber = on/off
-<DD>
-Same as <SAMP>`-nc'</SAMP>.
-
-<DT>no_parent = on/off
-<DD>
-Disallow retrieving outside the directory hierarchy, like
-<SAMP>`--no-parent'</SAMP> (see section
-<A HREF="wget.html#SEC17">Directory-Based Limits</A>).
-
-<DT>no_proxy = <VAR>string</VAR>
-<DD>
-Use <VAR>string</VAR> as the comma-separated list of domains to avoid in
-proxy loading, instead of the one specified in environment.
-
-<DT>output_document = <VAR>string</VAR>
-<DD>
-Set the output filename--the same as <SAMP>`-O'</SAMP>.
-
-<DT>page_requisites = on/off
-<DD>
-Download all ancillary documents necessary for a single HTML page to
-display properly--the same as <SAMP>`-p'</SAMP>.
-
-<DT>passive_ftp = on/off/always/never
-<DD>
-Set passive FTP---the same as <SAMP>`--passive-ftp'</SAMP>.  Some scripts
-and <SAMP>`.pm'</SAMP> (Perl module) files download files using
-<SAMP>`wget --passive-ftp'</SAMP>.  If your firewall does not allow this,
-you can set <SAMP>`passive_ftp = never'</SAMP> to override the commandline.
-
-<DT>passwd = <VAR>string</VAR>
-<DD>
-Set your FTP password to <VAR>string</VAR>.  Without this setting, the
-password defaults to <SAMP>address@hidden'</SAMP>.
-
-<DT>progress = <VAR>string</VAR>
-<DD>
-Set the type of the progress indicator.  Legal types are "dot" and
-"bar".
-
-<DT>proxy_user = <VAR>string</VAR>
-<DD>
-Set proxy authentication user name to <VAR>string</VAR>, like 
<SAMP>`--proxy-user'</SAMP>.
-
-<DT>proxy_passwd = <VAR>string</VAR>
-<DD>
-Set proxy authentication password to <VAR>string</VAR>, like 
<SAMP>`--proxy-passwd'</SAMP>.
-
-<DT>referer = <VAR>string</VAR>
-<DD>
-Set the HTTP <SAMP>`Referer:'</SAMP> header just like <SAMP>`--referer'</SAMP>.
-(Note it was the folks who wrote the HTTP spec who got the spelling of
-"referrer" wrong.)
-
-<DT>quiet = on/off
-<DD>
-Quiet mode--the same as <SAMP>`-q'</SAMP>.
-
-<DT>quota = <VAR>quota</VAR>
-<DD>
-Specify the download quota, which is useful to put in the global
-<TT>`wgetrc'</TT>.  When a download quota is specified, Wget will stop
-retrieving after the download sum has become greater than the quota.  The
-quota can be specified in bytes (default), kbytes (<SAMP>`k'</SAMP>
-appended) or mbytes (<SAMP>`m'</SAMP> appended).  Thus
-<SAMP>`quota = 5m'</SAMP> will set the quota to 5 mbytes.  Note that the
-user's startup file overrides system settings.
-
-<DT>reclevel = <VAR>n</VAR>
-<DD>
-Recursion level--the same as <SAMP>`-l'</SAMP>.
-
-<DT>recursive = on/off
-<DD>
-Recursive on/off--the same as <SAMP>`-r'</SAMP>.
-
-<DT>relative_only = on/off
-<DD>
-Follow only relative links--the same as <SAMP>`-L'</SAMP> (see section
-<A HREF="wget.html#SEC18">Relative Links</A>).
-
-<DT>remove_listing = on/off
-<DD>
-If set to on, remove FTP listings downloaded by Wget.  Setting it
-to off is the same as <SAMP>`-nr'</SAMP>.
-
-<DT>retr_symlinks = on/off
-<DD>
-When set to on, retrieve symbolic links as if they were plain files; the
-same as <SAMP>`--retr-symlinks'</SAMP>.
-
-<DT>robots = on/off
-<DD>
-Use (or not) the <TT>`/robots.txt'</TT> file (see section
-<A HREF="wget.html#SEC41">Robots</A>).  Be sure you know what you are
-doing before changing the default (which is <SAMP>`on'</SAMP>).
-
-<DT>server_response = on/off
-<DD>
-Choose whether or not to print the HTTP and FTP server
-responses--the same as <SAMP>`-S'</SAMP>.
-
-<DT>span_hosts = on/off
-<DD>
-Same as <SAMP>`-H'</SAMP>.
-
-<DT>timeout = <VAR>n</VAR>
-<DD>
-Set timeout value--the same as <SAMP>`-T'</SAMP>.
-
-<DT>timestamping = on/off
-<DD>
-Turn timestamping on/off.  The same as <SAMP>`-N'</SAMP> (see section
-<A HREF="wget.html#SEC20">Time-Stamping</A>).
-
-<DT>tries = <VAR>n</VAR>
-<DD>
-Set number of retries per URL---the same as <SAMP>`-t'</SAMP>.
-
-<DT>use_proxy = on/off
-<DD>
-Turn proxy support on/off.  The same as <SAMP>`-Y'</SAMP>.
-
-<DT>verbose = on/off
-<DD>
-Turn verbose on/off--the same as <SAMP>`-v'</SAMP>/<SAMP>`-nv'</SAMP>.
-
-<DT>wait = <VAR>n</VAR>
-<DD>
-Wait <VAR>n</VAR> seconds between retrievals--the same as <SAMP>`-w'</SAMP>.
-
-<DT>waitretry = <VAR>n</VAR>
-<DD>
-Wait up to <VAR>n</VAR> seconds between retries of failed retrievals
-only--the same as <SAMP>`--waitretry'</SAMP>.  Note that this is turned on by
-default in the global <TT>`wgetrc'</TT>.
-
-<DT>randomwait = on/off
-<DD>
-Turn random between-request wait times on or off.  The same as
-<SAMP>`--random-wait'</SAMP>.
-</DL>
-
-
-
-<H2><A NAME="SEC28" HREF="wget.html#TOC28">Sample Wgetrc</A></H2>
-<P>
-<A NAME="IDX129"></A>
-
-
-<P>
-This is the sample initialization file, as given in the distribution.
-It is divided in two sections--one for global usage (suitable for the
-global startup file), and one for local usage (suitable for
-<TT>`$HOME/.wgetrc'</TT>).  Be careful about the things you change.
-
-
-<P>
-Note that almost all the lines are commented out.  For a command to have
-any effect, you must remove the <SAMP>`#'</SAMP> character at the beginning of
-its line.
-
-
-
-<PRE>
-###
-### Sample Wget initialization file .wgetrc
-###
-
-## You can use this file to change the default behaviour of wget or to
-## avoid having to type many many command-line options. This file does
-## not contain a comprehensive list of commands -- look at the manual
-## to find out what you can put into this file.
-## 
-## Wget initialization file can reside in /usr/local/etc/wgetrc
-## (global, for all users) or $HOME/.wgetrc (for a single user).
-##
-## To use the settings in this file, you will have to uncomment them,
-## as well as change them, in most cases, as the values on the
-## commented-out lines are the default values (e.g. "off").
-
-##
-## Global settings (useful for setting up in /usr/local/etc/wgetrc).
-## Think well before you change them, since they may reduce wget's
-## functionality, and make it behave contrary to the documentation:
-##
-
-# You can set retrieve quota for beginners by specifying a value
-# optionally followed by 'K' (kilobytes) or 'M' (megabytes).  The
-# default quota is unlimited.
-#quota = inf
-
-# You can lower (or raise) the default number of retries when
-# downloading a file (default is 20).
-#tries = 20
-
-# Lowering the maximum depth of the recursive retrieval is handy to
-# prevent newbies from going too "deep" when they unwittingly start
-# the recursive retrieval.  The default is 5.
-#reclevel = 5
-
-# Many sites are behind firewalls that do not allow initiation of
-# connections from the outside.  On these sites you have to use the
-# `passive' feature of FTP.  If you are behind such a firewall, you
-# can turn this on to make Wget use passive FTP by default.
-#passive_ftp = off
-
-# The "wait" command below makes Wget wait between every connection.
-# If, instead, you want Wget to wait only between retries of failed
-# downloads, set waitretry to maximum number of seconds to wait (Wget
-# will use "linear backoff", waiting 1 second after the first failure
-# on a file, 2 seconds after the second failure, etc. up to this max).
-waitretry = 10
-
-##
-## Local settings (for a user to set in his $HOME/.wgetrc).  It is
-## *highly* undesirable to put these settings in the global file, since
-## they are potentially dangerous to "normal" users.
-##
-## Even when setting up your own ~/.wgetrc, you should know what you
-## are doing before doing so.
-##
-
-# Set this to on to use timestamping by default:
-#timestamping = off
-
-# It is a good idea to make Wget send your email address in a `From:'
-# header with your request (so that server administrators can contact
-# you in case of errors).  Wget does *not* send `From:' by default.
-#header = From: Your Name &#60;address@hidden&#62;
-
-# You can set up other headers, like Accept-Language.  Accept-Language
-# is *not* sent by default.
-#header = Accept-Language: en
-
-# You can set the default proxies for Wget to use for http and ftp.
-# They will override the value in the environment.
-#http_proxy = http://proxy.yoyodyne.com:18023/
-#ftp_proxy = http://proxy.yoyodyne.com:18023/
-
-# If you do not want to use proxy at all, set this to off.
-#use_proxy = on
-
-# You can customize how retrieval progress is displayed.  Valid options
-# are default, binary, mega and micro.
-#dot_style = default
-
-# Setting this to off makes Wget not download /robots.txt.  Be sure to
-# know *exactly* what /robots.txt is and how it is used before changing
-# the default!
-#robots = on
-
-# It can be useful to make Wget wait between connections.  Set this to
-# the number of seconds you want Wget to wait.
-#wait = 0
-
-# You can force creating directory structure, even if a single file is
-# being retrieved, by setting this to on.
-#dirstruct = off
-
-# You can turn on recursive retrieving by default (don't do this if
-# you are not sure you know what it means) by setting this to on.
-#recursive = off
-
-# To always back up file X as X.orig before converting its links (due
-# to -k / --convert-links / convert_links = on having been specified),
-# set this variable to on:
-#backup_converted = off
-
-# To have Wget follow FTP links from HTML files by default, set this
-# to on:
-#follow_ftp = off
-</PRE>
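The <SAMP>`waitretry'</SAMP> "linear backoff" described in the comments above is easy to model.  The following shell sketch (illustrative only, not Wget source) adds up the time Wget would sleep across twelve failed retries of one file with <SAMP>`waitretry = 10'</SAMP>:

```shell
# Linear backoff: sleep 1 second after the first failure, 2 after the
# second, and so on, with each individual wait capped at $waitretry.
waitretry=10
total=0
for failure in 1 2 3 4 5 6 7 8 9 10 11 12; do
  wait=$failure
  if [ "$wait" -gt "$waitretry" ]; then
    wait=$waitretry          # cap reached: every later retry waits this long
  fi
  total=$((total + wait))
done
echo "$total"                # total seconds spent waiting
```

The waits grow 1, 2, ..., 10 and then stay at 10, so early failures cost little while persistent failures back off substantially.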
-
-
-
-<H1><A NAME="SEC29" HREF="wget.html#TOC29">Examples</A></H1>
-<P>
-<A NAME="IDX130"></A>
-
-
-<P>
-The examples are divided into three sections loosely based on their
-complexity.
-
-
-
-
-<H2><A NAME="SEC30" HREF="wget.html#TOC30">Simple Usage</A></H2>
-
-
-<UL>
-<LI>
-
-Say you want to download a URL.  Just type:
-
-
-<PRE>
-wget http://fly.srk.fer.hr/
-</PRE>
-
-<LI>
-
-But what will happen if the connection is slow, and the file is lengthy?
-The connection will probably fail before the whole file is retrieved,
-perhaps more than once.  In this case, Wget will keep trying to get the
-file until it either retrieves all of it or exceeds the default number
-of retries (this being 20).  It is easy to change the number of tries to
-45, to ensure that the whole file will arrive safely:
-
-
-<PRE>
-wget --tries=45 http://fly.srk.fer.hr/jpg/flyweb.jpg
-</PRE>
-
-<LI>
-
-Now let's leave Wget to work in the background, and write its progress
-to log file <TT>`log'</TT>.  It is tiring to type <SAMP>`--tries'</SAMP>, so we
-shall use <SAMP>`-t'</SAMP>.
-
-
-<PRE>
-wget -t 45 -o log http://fly.srk.fer.hr/jpg/flyweb.jpg &#38;
-</PRE>
-
-The ampersand at the end of the line makes sure that Wget works in the
-background.  To unlimit the number of retries, use <SAMP>`-t inf'</SAMP>.
-
-<LI>
-
-Using FTP is just as simple.  Wget will take care of the login and
-password.
-
-
-<PRE>
-wget ftp://gnjilux.srk.fer.hr/welcome.msg
-</PRE>
-
-<LI>
-
-If you specify a directory, Wget will retrieve the directory listing,
-parse it and convert it to HTML.  Try:
-
-
-<PRE>
-wget ftp://prep.ai.mit.edu/pub/gnu/
-links index.html
-</PRE>
-
-</UL>
-
-
-
-<H2><A NAME="SEC31" HREF="wget.html#TOC31">Advanced Usage</A></H2>
-
-
-<UL>
-<LI>
-
-You have a file that contains the URLs you want to download?  Use the
-<SAMP>`-i'</SAMP> switch:
-
-
-<PRE>
-wget -i <VAR>file</VAR>
-</PRE>
-
-If you specify <SAMP>`-'</SAMP> as the file name, the URLs will be read from
-standard input.
-
-<LI>
-
-Create a five levels deep mirror image of the GNU web site, with the
-same directory structure the original has, with only one try per
-document, saving the log of the activities to <TT>`gnulog'</TT>:
-
-
-<PRE>
-wget -r http://www.gnu.org/ -o gnulog
-</PRE>
-
-<LI>
-
-The same as the above, but convert the links in the HTML files to
-point to local files, so you can view the documents off-line:
-
-
-<PRE>
-wget --convert-links -r http://www.gnu.org/ -o gnulog
-</PRE>
-
-<LI>
-
-Retrieve only one HTML page, but make sure that all the elements needed
-for the page to be displayed, such as inline images and external style
-sheets, are also downloaded.  Also make sure the downloaded page
-references the downloaded links.
-
-
-<PRE>
-wget -p --convert-links http://www.server.com/dir/page.html
-</PRE>
-
-The HTML page will be saved to <TT>`www.server.com/dir/page.html'</TT>, and
-the images, stylesheets, etc., somewhere under <TT>`www.server.com/'</TT>,
-depending on where they were on the remote server.
-
-<LI>
-
-The same as the above, but without the <TT>`www.server.com/'</TT> directory.
-In fact, I don't want to have all those random server directories
-anyway--just save <EM>all</EM> those files under a <TT>`download/'</TT>
-subdirectory of the current directory.
-
-
-<PRE>
-wget -p --convert-links -nH -nd -Pdownload \
-     http://www.server.com/dir/page.html
-</PRE>
-
-<LI>
-
-Retrieve the index.html of <SAMP>`www.lycos.com'</SAMP>, showing the original
-server headers:
-
-
-<PRE>
-wget -S http://www.lycos.com/
-</PRE>
-
-<LI>
-
-Save the server headers with the file, perhaps for post-processing.
-
-
-<PRE>
-wget -s http://www.lycos.com/
-more index.html
-</PRE>
-
-<LI>
-
-Retrieve the first two levels of <SAMP>`wuarchive.wustl.edu'</SAMP>, saving them
-to <TT>`/tmp'</TT>.
-
-
-<PRE>
-wget -r -l2 -P/tmp ftp://wuarchive.wustl.edu/
-</PRE>
-
-<LI>
-
-You want to download all the GIFs from a directory on an HTTP
-server.  You tried <SAMP>`wget http://www.server.com/dir/*.gif'</SAMP>, but that
-didn't work because HTTP retrieval does not support globbing.  In
-that case, use:
-
-
-<PRE>
-wget -r -l1 --no-parent -A.gif http://www.server.com/dir/
-</PRE>
-
-More verbose, but the effect is the same.  <SAMP>`-r -l1'</SAMP> means to
-retrieve recursively (see section <A HREF="wget.html#SEC13">Recursive Retrieval</A>), with a maximum depth
-of 1.  <SAMP>`--no-parent'</SAMP> means that references to the parent directory
-are ignored (see section <A HREF="wget.html#SEC17">Directory-Based Limits</A>), and <SAMP>`-A.gif'</SAMP> means to
-download only the GIF files.  <SAMP>`-A "*.gif"'</SAMP> would have worked
-too.
-
-<LI>
-
-Suppose you were in the middle of downloading when Wget was
-interrupted.  Now you do not want to clobber the files already present.
-The command would be:
-
-
-<PRE>
-wget -nc -r http://www.gnu.org/
-</PRE>
-
-<LI>
-
-If you want to supply your own username and password for HTTP or
-FTP, use the appropriate URL syntax (see section <A HREF="wget.html#SEC3">URL Format</A>).
-
-
-<PRE>
-wget ftp://hniksic:address@hidden/.emacs
-</PRE>
-
-<A NAME="IDX131"></A>
-<LI>
-
-You would like the output documents to go to standard output instead of
-to files?
-
-
-<PRE>
-wget -O - http://jagor.srce.hr/ http://www.srce.hr/
-</PRE>
-
-You can also combine the two options and make pipelines to retrieve the
-documents from remote hotlists:
-
-
-<PRE>
-wget -O - http://cool.list.com/ | wget --force-html -i -
-</PRE>
-
-</UL>
-
-
-
-<H2><A NAME="SEC32" HREF="wget.html#TOC32">Very Advanced Usage</A></H2>
-
-<P>
-<A NAME="IDX132"></A>
-
-<UL>
-<LI>
-
-If you wish Wget to keep a mirror of a page (or FTP
-subdirectories), use <SAMP>`--mirror'</SAMP> (<SAMP>`-m'</SAMP>), which is the shorthand
-for <SAMP>`-r -l inf -N'</SAMP>.  You can put Wget in the crontab file asking it
-to recheck a site each Sunday:
-
-
-<PRE>
-crontab
-0 0 * * 0 wget --mirror http://www.gnu.org/ -o /home/me/weeklog
-</PRE>
-
-<LI>
-
-In addition to the above, you want the links to be converted for local
-viewing.  But, after having read this manual, you know that link
-conversion doesn't play well with timestamping, so you also want Wget to
-back up the original HTML files before the conversion.  Wget invocation
-would look like this:
-
-
-<PRE>
-wget --mirror --convert-links --backup-converted  \
-     http://www.gnu.org/ -o /home/me/weeklog
-</PRE>
-
-<LI>
-
-But you've also noticed that local viewing doesn't work all that well
-when HTML files are saved under extensions other than <SAMP>`.html'</SAMP>,
-perhaps because they were served as <TT>`index.cgi'</TT>.  So you'd like
-Wget to rename all the files served with content-type <SAMP>`text/html'</SAMP>
-to <TT>`<VAR>name</VAR>.html'</TT>.
-
-
-<PRE>
-wget --mirror --convert-links --backup-converted \
-     --html-extension -o /home/me/weeklog        \
-     http://www.gnu.org/
-</PRE>
-
-Or, with less typing:
-
-
-<PRE>
-wget -m -k -K -E http://www.gnu.org/ -o /home/me/weeklog
-</PRE>
-
-</UL>
-
-
-
-<H1><A NAME="SEC33" HREF="wget.html#TOC33">Various</A></H1>
-<P>
-<A NAME="IDX133"></A>
-
-
-<P>
-This chapter contains all the stuff that could not fit anywhere else.
-
-
-
-
-<H2><A NAME="SEC34" HREF="wget.html#TOC34">Proxies</A></H2>
-<P>
-<A NAME="IDX134"></A>
-
-
-<P>
-<EM>Proxies</EM> are special-purpose HTTP servers designed to transfer
-data from remote servers to local clients.  One typical use of proxies
-is lightening network load for users behind a slow connection.  This is
-achieved by channeling all HTTP and FTP requests through the
-proxy, which caches the transferred data.  When a cached resource is
-requested again, the proxy returns the data from its cache.  Another use
-for proxies is in companies that separate (for security reasons) their
-internal networks from the rest of the Internet.  To obtain
-information from the Web, their users connect and retrieve remote data
-through an authorized proxy.
-
-
-<P>
-Wget supports proxies for both HTTP and FTP retrievals.  The
-standard way to specify proxy location, which Wget recognizes, is using
-the following environment variables:
-
-
-<DL COMPACT>
-
-<DT><CODE>http_proxy</CODE>
-<DD>
-This variable should contain the URL of the proxy for HTTP
-connections.
-
-<DT><CODE>ftp_proxy</CODE>
-<DD>
-This variable should contain the URL of the proxy for FTP
-connections.  It is quite common that <CODE>http_proxy</CODE> and <CODE>ftp_proxy</CODE>
-are set to the same URL.
-
-<DT><CODE>no_proxy</CODE>
-<DD>
-This variable should contain a comma-separated list of domain extensions
-that the proxy should <EM>not</EM> be used for.  For instance, if the value of
-<CODE>no_proxy</CODE> is <SAMP>`.mit.edu'</SAMP>, the proxy will not be used to retrieve
-documents from MIT.
-</DL>
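For example, a user behind a company proxy might export these variables before running Wget (the host, port, and domain below are placeholders, not real values):

```shell
# Route HTTP and FTP retrievals through one proxy; contact internal
# hosts directly.  Replace host, port and domain with your site's values.
export http_proxy=http://proxy.example.com:8001/
export ftp_proxy="$http_proxy"    # commonly the same URL as http_proxy
export no_proxy=.example.com      # comma-separated domain suffixes to skip
echo "$http_proxy"
```

Any subsequent <SAMP>`wget http://www.gnu.org/'</SAMP> in that shell would then go through the proxy, while hosts under <SAMP>`.example.com'</SAMP> would be contacted directly.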
-
-<P>
-In addition to the environment variables, proxy location and settings
-may be specified from within Wget itself.
-
-
-<DL COMPACT>
-
-<DT><SAMP>`-Y on/off'</SAMP>
-<DD>
-<DT><SAMP>`--proxy=on/off'</SAMP>
-<DD>
-<DT><SAMP>`proxy = on/off'</SAMP>
-<DD>
-This option may be used to turn the proxy support on or off.  Proxy
-support is on by default, provided that the appropriate environment
-variables are set.
-
-<DT><SAMP>`http_proxy = <VAR>URL</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`ftp_proxy = <VAR>URL</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`no_proxy = <VAR>string</VAR>'</SAMP>
-<DD>
-These startup file variables allow you to override the proxy settings
-specified by the environment.
-</DL>
-
-<P>
-Some proxy servers require authorization to enable you to use them.  The
-authorization consists of <EM>username</EM> and <EM>password</EM>, which must
-be sent by Wget.  As with HTTP authorization, several
-authentication schemes exist.  For proxy authorization only the
-<CODE>Basic</CODE> authentication scheme is currently implemented.
-
-
-<P>
-You may specify your username and password either through the proxy
-URL or through the command-line options.  Assuming that the
-company's proxy is located at <SAMP>`proxy.company.com'</SAMP> at port 8001, a
-proxy URL location containing authorization data might look like
-this:
-
-
-
-<PRE>
-http://hniksic:address@hidden:8001/
-</PRE>
-
-<P>
-Alternatively, you may use the <SAMP>`proxy-user'</SAMP> and
-<SAMP>`proxy-password'</SAMP> options, and the equivalent <TT>`.wgetrc'</TT>
-settings <CODE>proxy_user</CODE> and <CODE>proxy_passwd</CODE> to set the proxy
-username and password.
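As a sketch, the <TT>`.wgetrc'</TT> variant might look like this (the credentials are placeholders, not real values):

```
# ~/.wgetrc fragment -- placeholder credentials
proxy_user = myname
proxy_passwd = mypassword
```

Like passwords given on the command line, these are stored in plain text, so the file should not be readable by other users.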
-
-
-
-
-<H2><A NAME="SEC35" HREF="wget.html#TOC35">Distribution</A></H2>
-<P>
-<A NAME="IDX135"></A>
-
-
-<P>
-Like all GNU utilities, the latest version of Wget can be found at the
-master GNU archive site prep.ai.mit.edu, and its mirrors.  For example,
-Wget 1.8.1 can be found at
-<A HREF="ftp://prep.ai.mit.edu/gnu/wget/wget-1.8.1.tar.gz">ftp://prep.ai.mit.edu/gnu/wget/wget-1.8.1.tar.gz</A>.
-
-
-
-
-<H2><A NAME="SEC36" HREF="wget.html#TOC36">Mailing List</A></H2>
-<P>
-<A NAME="IDX136"></A>
-<A NAME="IDX137"></A>
-
-
-<P>
-Wget has its own mailing list at <A HREF="mailto:address@hidden">address@hidden</A>, thanks
-to Karsten Thygesen.  The mailing list is for discussion of Wget
-features and the web, for reporting Wget bugs (those that you think may be of
-interest to the public), and for mailing announcements.  You are welcome to
-subscribe.  The more people on the list, the better!
-
-
-<P>
-To subscribe, send mail to <A HREF="mailto:address@hidden">address@hidden</A> with
-the magic word <SAMP>`subscribe'</SAMP> in the subject line.  Unsubscribe by
-mailing to <A HREF="mailto:address@hidden">address@hidden</A>.
-
-
-<P>
-The mailing list is archived at <A HREF="http://fly.srk.fer.hr/archive/wget">http://fly.srk.fer.hr/archive/wget</A>.
-An alternative archive is available at
-<A HREF="http://www.mail-archive.com/wget%40sunsite.auc.dk/">http://www.mail-archive.com/wget%40sunsite.auc.dk/</A>.
- 
-
-
-<H2><A NAME="SEC37" HREF="wget.html#TOC37">Reporting Bugs</A></H2>
-<P>
-<A NAME="IDX138"></A>
-<A NAME="IDX139"></A>
-<A NAME="IDX140"></A>
-
-
-<P>
-You are welcome to send bug reports about GNU Wget to
-<A HREF="mailto:address@hidden">address@hidden</A>.
-
-
-<P>
-Before actually submitting a bug report, please try to follow a few
-simple guidelines.
-
-
-
-<OL>
-<LI>
-
-Please try to ascertain that the behaviour you see really is a bug.  If
-Wget crashes, it's a bug.  If Wget does not behave as documented,
-it's a bug.  If things work strangely, but you are not sure about the way
-they are supposed to work, it might well be a bug.
-
-<LI>
-
-Try to repeat the bug in as simple circumstances as possible.  E.g. if
-Wget crashes while running <SAMP>`wget -rl0 -kKE -t5 -Y0
-http://yoyodyne.com -o /tmp/log'</SAMP>, you should try to see if the crash is
-repeatable, and if it will occur with a simpler set of options.  You might
-even try to start the download at the page where the crash occurred to
-see if that page somehow triggered the crash.
-
-Also, while I will probably be interested to know the contents of your
-<TT>`.wgetrc'</TT> file, just dumping it into the debug message is probably
-a bad idea.  Instead, you should first try to see if the bug repeats
-with <TT>`.wgetrc'</TT> moved out of the way.  Only if it turns out that
-<TT>`.wgetrc'</TT> settings affect the bug, mail me the relevant parts of
-the file.
-
-<LI>
-
-Please start Wget with <SAMP>`-d'</SAMP> option and send the log (or the
-relevant parts of it).  If Wget was compiled without debug support,
-recompile it.  It is <EM>much</EM> easier to trace bugs with debug support
-on.
-
-<LI>
-
-If Wget has crashed, try to run it in a debugger, e.g. <CODE>gdb `which
-wget` core</CODE> and type <CODE>where</CODE> to get the backtrace.
-</OL>
-
-
-
-<H2><A NAME="SEC38" HREF="wget.html#TOC38">Portability</A></H2>
-<P>
-<A NAME="IDX141"></A>
-<A NAME="IDX142"></A>
-
-
-<P>
-Since Wget uses GNU Autoconf for building and configuring, and avoids
-using "special" ultra--mega--cool features of any particular Unix, it
-should compile (and work) on all common Unix flavors.
-
-
-<P>
-Various Wget versions have been compiled and tested under many kinds of
-Unix systems, including Solaris, Linux, SunOS, OSF (aka Digital Unix),
-Ultrix, *BSD, IRIX, and others; refer to the file <TT>`MACHINES'</TT> in the
-distribution directory for a comprehensive list.  If you compile it on
-an architecture not listed there, please let me know so I can update it.
-
-
-<P>
-Wget should also compile on other Unix systems not listed in
-<TT>`MACHINES'</TT>.  If it doesn't, please let me know.
-
-
-<P>
-Thanks to kind contributors, this version of Wget compiles and works on
-Microsoft Windows 95 and Windows NT platforms.  It has been compiled
-successfully using MS Visual C++ 4.0, Watcom, and Borland C compilers,
-with Winsock as networking software.  Naturally, it lacks some of the
-features available on Unix, but it should work as a substitute for
-people stuck with Windows.  Note that the Windows port is
-<STRONG>neither tested nor maintained</STRONG> by me--all questions and
-problems should be reported to the Wget mailing list at
-<A HREF="mailto:address@hidden">address@hidden</A>, where the maintainers will look at them.
-
-
-
-
-<H2><A NAME="SEC39" HREF="wget.html#TOC39">Signals</A></H2>
-<P>
-<A NAME="IDX143"></A>
-<A NAME="IDX144"></A>
-
-
-<P>
-Since the purpose of Wget is background work, it catches the hangup
-signal (<CODE>SIGHUP</CODE>) and ignores it.  If the output was on standard
-output, it will be redirected to a file named <TT>`wget-log'</TT>.
-Otherwise, <CODE>SIGHUP</CODE> is ignored.  This is convenient when you wish
-to redirect the output of Wget after having started it.
-
-
-
-<PRE>
-$ wget http://www.ifi.uio.no/~larsi/gnus.tar.gz &#38;
-$ kill -HUP %%     # Redirect the output to wget-log
-</PRE>
-
-<P>
-Other than that, Wget will not try to interfere with signals in any way.
-<KBD>C-c</KBD>, <CODE>kill -TERM</CODE> and <CODE>kill -KILL</CODE> should 
kill it alike.
-
-
-
-
-<H1><A NAME="SEC40" HREF="wget.html#TOC40">Appendices</A></H1>
-
-<P>
-This chapter contains some references I consider useful.
-
-
-
-
-<H2><A NAME="SEC41" HREF="wget.html#TOC41">Robots</A></H2>
-<P>
-<A NAME="IDX145"></A>
-<A NAME="IDX146"></A>
-<A NAME="IDX147"></A>
-
-
-<P>
-It is extremely easy to make Wget wander aimlessly around a web site,
-sucking all the available data in the process.  <SAMP>`wget -r <VAR>site</VAR>'</SAMP>,
-and you're set.  Great?  Not for the server admin.
-
-
-<P>
-While Wget is retrieving static pages, there's not much of a problem.
-But for Wget, there is no real difference between a static page and the
-most demanding CGI.  For instance, a site I know has a section handled
-by an, uh, <EM>bitchin'</EM> CGI script that converts all the Info files to
-HTML.  The script can and does bring the machine to its knees without
-providing anything useful to the downloader.
-
-
-<P>
-For such and similar cases various robot exclusion schemes have been
-devised as a means for the server administrators and document authors to
-protect chosen portions of their sites from the wandering of robots.
-
-
-<P>
-The more popular mechanism is the <EM>Robots Exclusion Standard</EM>, or
-RES, written by Martijn Koster et al. in 1994.  It specifies the
-format of a text file containing directives that instruct the robots
-which URL paths to avoid.  To be found by the robots, the specifications
-must be placed in <TT>`/robots.txt'</TT> in the server root, which the
-robots are supposed to download and parse.
-
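A minimal <TT>`/robots.txt'</TT> in this format might look like the following sketch (the disallowed paths are hypothetical):

```
User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
```

Here <SAMP>`User-agent: *'</SAMP> addresses all robots, and each <SAMP>`Disallow'</SAMP> line names a URL path prefix that robots should avoid.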
-
-<P>
-Wget supports RES when downloading recursively.  So, when you
-issue:
-
-
-
-<PRE>
-wget -r http://www.server.com/
-</PRE>
-
-<P>
-First the index of <SAMP>`www.server.com'</SAMP> will be downloaded.  If Wget
-finds that it wants to download more documents from that server, it will
-request <SAMP>`http://www.server.com/robots.txt'</SAMP> and, if found, use it
-for further downloads.  <TT>`robots.txt'</TT> is loaded only once per each
-server.
-
-
-<P>
-Until version 1.8, Wget supported the first version of the standard,
-written by Martijn Koster in 1994 and available at
-<A HREF="http://www.robotstxt.org/wc/norobots.html">http://www.robotstxt.org/wc/norobots.html</A>.  As of version 1.8,
-Wget has supported the additional directives specified in the internet
-draft <SAMP>`&#60;draft-koster-robots-00.txt&#62;'</SAMP> titled "A Method for Web
-Robots Control".  The draft, which has, as far as I know, never made it
-to an RFC, is available at
-<A HREF="http://www.robotstxt.org/wc/norobots-rfc.txt">http://www.robotstxt.org/wc/norobots-rfc.txt</A>.
-
-
-<P>
-This manual no longer includes the text of the Robot Exclusion Standard.
-
-
-<P>
-The second, lesser-known mechanism enables the author of an individual
-document to specify whether they want the links from the file to be
-followed by a robot.  This is achieved using the <CODE>META</CODE> tag, like
-this:
-
-
-
-<PRE>
-&#60;meta name="robots" content="nofollow"&#62;
-</PRE>
-
-<P>
-This is explained in some detail at
-<A HREF="http://www.robotstxt.org/wc/meta-user.html">http://www.robotstxt.org/wc/meta-user.html</A>.  Wget supports this
-method of robot exclusion in addition to the usual <TT>`/robots.txt'</TT>
-exclusion.
-
-
-
-
-<H2><A NAME="SEC42" HREF="wget.html#TOC42">Security Considerations</A></H2>
-<P>
-<A NAME="IDX148"></A>
-
-
-<P>
-When using Wget, you must be aware that it sends unencrypted passwords
-through the network, which may present a security problem.  Here are the
-main issues, and some solutions.
-
-
-
-<OL>
-<LI>
-
-The passwords on the command line are visible using <CODE>ps</CODE>.  If this
-is a problem, avoid passing passwords on the command line--e.g. you
-can use <TT>`.netrc'</TT> for this.
-
-<LI>
-
-Using the insecure <EM>basic</EM> authentication scheme, unencrypted
-passwords are transmitted through the network routers and gateways.
-
-<LI>
-
-The FTP passwords are also in no way encrypted.  There is no good
-solution for this at the moment.
-
-<LI>
-
-Although the "normal" output of Wget tries to hide the passwords,
-debugging logs show them, in all forms.  This problem is avoided by
-being careful when you send debug logs (yes, even when you send them to
-me).
-</OL>
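For the first point above, a <TT>`.netrc'</TT> entry looks like this sketch (the host and credentials are placeholders; keep the file readable only by you, e.g. mode 600):

```
machine ftp.example.com
login myname
password mypassword
```

With this in place, <SAMP>`wget ftp://ftp.example.com/file'</SAMP> can pick up the credentials without exposing them to <CODE>ps</CODE>.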
-
-
-
-<H2><A NAME="SEC43" HREF="wget.html#TOC43">Contributors</A></H2>
-<P>
-<A NAME="IDX149"></A>
-
-
-<P>
-GNU Wget was written by Hrvoje Nikšić <A HREF="mailto:address@hidden">address@hidden</A>.
-However, its development could never have gone as far as it has, were it
-not for the help of many people, either with bug reports, feature
-proposals, patches, or letters saying "Thanks!".
-
-
-<P>
-Special thanks go to the following people (in no particular order):
-
-
-
-<UL>
-<LI>
-
-Karsten Thygesen--donated system resources such as the mailing list,
-web space, and FTP space, along with a lot of time to make these
-actually work.
-
-<LI>
-
-Shawn McHorse--bug reports and patches.
-
-<LI>
-
-Kaveh R. Ghazi--on-the-fly <CODE>ansi2knr</CODE>-ization.  Lots of
-portability fixes.
-
-<LI>
-
-Gordon Matzigkeit---<TT>`.netrc'</TT> support.
-
-<LI>
-
-Zlatko Čalušić, Tomislav Vujec and address@hidden
-address@hidden suggestions and "philosophical" discussions.
-
-<LI>
-
-Darko Budor--initial port to Windows.
-
-<LI>
-
-Antonio Rosella--help and suggestions, plus the Italian translation.
-
-<LI>
-
-Tomislav Petrović, Mario address@hidden'{c}---many bug reports and
-suggestions.
-
-<LI>
-
-François Pinard--many thorough bug reports and discussions.
-
-<LI>
-
-Karl Eichwalder--lots of help with internationalization and other
-things.
-
-<LI>
-
-Junio Hamano--donated support for Opie and HTTP <CODE>Digest</CODE>
-authentication.
-
-<LI>
-
-The people who provided donations for development, including Brian
-Gough.
-</UL>
-
-<P>
-The following people have provided patches, bug/build reports, useful
-suggestions, beta testing services, fan mail and all the other things
-that make maintenance so much fun:
-
-
-<P>
-Ian Abbott,
-Tim Adam,
-Adrian Aichner,
-Martin Baehr,
-Dieter Baron,
-Roger Beeman,
-Dan Berger,
-T. Bharath,
-Paul Bludov,
-Daniel Bodea,
-Mark Boyns,
-John Burden,
-Wanderlei Cavassin,
-Gilles Cedoc,
-Tim Charron,
-Noel Cragg,
-Kristijan @address@hidden,
-John Daily,
-Andrew Davison,
-Andrew Deryabin,
-Ulrich Drepper,
-Marc Duponcheel,
-Damir address@hidden,
-Alan Eldridge,
-Aleksandar Erkalović,
-Andy Eskilsson,
-Christian Fraenkel,
-Masashi Fujita,
-Howard Gayle,
-Marcel Gerrits,
-Lemble Gregory,
-Hans Grobler,
-Mathieu Guillaume,
-Dan Harkless,
-Herold Heiko,
-Jochen Hein,
-Karl Heuer,
-HIROSE Masaaki,
-Gregor Hoffleit,
-Erik Magnus Hulthen,
-Richard Huveneers,
-Jonas Jensen,
-Simon Josefsson,
-Mario Jurić,
-Hack Kampbjørn,
-Const Kaplinsky,
-Goran Kezunović,
-Robert Kleine,
-KOJIMA Haime,
-Fila Kolodny,
-Alexander Kourakos,
-Martin Kraemer,
-Hrvoje Lacko,
-Daniel S. Lewart,
-Nicolás Lichtmeier,
-Dave Love,
-Alexander V. Lukyanov,
-Jordan Mendelson,
-Lin Zhe Min,
-Tim Mooney,
-Simon Munton,
-Charlie Negyesi,
-R. K. Owen,
-Andrew Pollock,
-Steve Pothier,
-Jan address@hidden,
-Marin Purgar,
-Csaba Ráduly,
-Keith Refson,
-Tyler Riddle,
-Tobias Ringstrom,
-Edward J. Sabol,
-Heinz Salzmann,
-Robert Schmidt,
-Andreas Schwab,
-Chris Seawood,
-Toomas Soome,
-Tage Stabell-Kulo,
-Sven Sternberger,
-Markus Strasser,
-John Summerfield,
-Szakacsits Szabolcs,
-Mike Thomas,
-Philipp Thomas,
-Dave Turner,
-Russell Vincent,
-Charles G Waldman,
-Douglas E. Wegscheid,
-Jasmin Zainul,
-Bojan Ždrnja,
-Kristijan Zimmer.
-
-
-<P>
-Apologies to all who I accidentally left out, and many thanks to all the
-subscribers of the Wget mailing list.
-
-
-
-
-<H1><A NAME="SEC44" HREF="wget.html#TOC44">Copying</A></H1>
-<P>
-<A NAME="IDX150"></A>
-<A NAME="IDX151"></A>
-<A NAME="IDX152"></A>
-<A NAME="IDX153"></A>
-
-
-<P>
-GNU Wget is licensed under the GNU GPL, which makes it <EM>free
-software</EM>.
-
-
-<P>
-Please note that "free" in "free software" refers to liberty, not
-price.  As some GNU project advocates like to point out, think of "free
-speech" rather than "free beer".  The exact and legally binding
-distribution terms are spelled out below; in short, you have the right
-(freedom) to run and change Wget and distribute it to other people, and
-even--if you want--charge money for doing either.  The important
-restriction is that you have to grant your recipients the same rights
-and impose the same restrictions.
-
-
-<P>
-This method of licensing software is also known as <EM>open source</EM>
-because, among other things, it makes sure that all recipients will
-receive the source code along with the program, and be able to improve
-it.  The GNU project prefers the term "free software" for reasons
-outlined at
-<A HREF="http://www.gnu.org/philosophy/free-software-for-freedom.html">http://www.gnu.org/philosophy/free-software-for-freedom.html</A>.
-
-
-<P>
-The exact license terms are defined by this paragraph and the GNU
-General Public License it refers to:
-
-
-
-<BLOCKQUOTE>
-<P>
-GNU Wget is free software; you can redistribute it and/or modify it
-under the terms of the GNU General Public License as published by the
-Free Software Foundation; either version 2 of the License, or (at your
-option) any later version.
-
-
-<P>
-GNU Wget is distributed in the hope that it will be useful, but WITHOUT
-ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
-FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
-for more details.
-
-
-<P>
-A copy of the GNU General Public License is included as part of this
-manual; if you did not receive it, write to the Free Software
-Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
-</BLOCKQUOTE>
-
-<P>
-In addition to this, this manual is free in the same sense:
-
-
-
-<BLOCKQUOTE>
-<P>
-Permission is granted to copy, distribute and/or modify this document
-under the terms of the GNU Free Documentation License, Version 1.1 or
-any later version published by the Free Software Foundation; with the
-Invariant Sections being "GNU General Public License" and "GNU Free
-Documentation License", with no Front-Cover Texts, and with no
-Back-Cover Texts.  A copy of the license is included in the section
-entitled "GNU Free Documentation License".
-</BLOCKQUOTE>
-
-<P>
-The full texts of the GNU General Public License and of the GNU Free
-Documentation License are available below.
-
-
-
-
-<H2><A NAME="SEC45" HREF="wget.html#TOC45">GNU General Public License</A></H2>
-<P>
-Version 2, June 1991
-
-
-
-<PRE>
-Copyright (C) 1989, 1991 Free Software Foundation, Inc.
-675 Mass Ave, Cambridge, MA 02139, USA
-
-Everyone is permitted to copy and distribute verbatim copies
-of this license document, but changing it is not allowed.
-</PRE>
-
-
-
-<H2><A NAME="SEC46" HREF="wget.html#TOC46">Preamble</A></H2>
-
-<P>
-  The licenses for most software are designed to take away your
-freedom to share and change it.  By contrast, the GNU General Public
-License is intended to guarantee your freedom to share and change free
-software--to make sure the software is free for all its users.  This
-General Public License applies to most of the Free Software
-Foundation's software and to any other program whose authors commit to
-using it.  (Some other Free Software Foundation software is covered by
-the GNU Library General Public License instead.)  You can apply it to
-your programs, too.
-
-
-<P>
-  When we speak of free software, we are referring to freedom, not
-price.  Our General Public Licenses are designed to make sure that you
-have the freedom to distribute copies of free software (and charge for
-this service if you wish), that you receive source code or can get it
-if you want it, that you can change the software or use pieces of it
-in new free programs; and that you know you can do these things.
-
-
-<P>
-  To protect your rights, we need to make restrictions that forbid
-anyone to deny you these rights or to ask you to surrender the rights.
-These restrictions translate to certain responsibilities for you if you
-distribute copies of the software, or if you modify it.
-
-
-<P>
-  For example, if you distribute copies of such a program, whether
-gratis or for a fee, you must give the recipients all the rights that
-you have.  You must make sure that they, too, receive or can get the
-source code.  And you must show them these terms so they know their
-rights.
-
-
-<P>
-  We protect your rights with two steps: (1) copyright the software, and
-(2) offer you this license which gives you legal permission to copy,
-distribute and/or modify the software.
-
-
-<P>
-  Also, for each author's protection and ours, we want to make certain
-that everyone understands that there is no warranty for this free
-software.  If the software is modified by someone else and passed on, we
-want its recipients to know that what they have is not the original, so
-that any problems introduced by others will not reflect on the original
-authors' reputations.
-
-
-<P>
-  Finally, any free program is threatened constantly by software
-patents.  We wish to avoid the danger that redistributors of a free
-program will individually obtain patent licenses, in effect making the
-program proprietary.  To prevent this, we have made it clear that any
-patent must be licensed for everyone's free use or not licensed at all.
-
-
-<P>
-  The precise terms and conditions for copying, distribution and
-modification follow.
-
-
-
-
-<H2><A NAME="SEC47" HREF="wget.html#TOC47">TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION</A></H2>
-
-
-<OL>
-<LI>
-
-This License applies to any program or other work which contains
-a notice placed by the copyright holder saying it may be distributed
-under the terms of this General Public License.  The "Program", below,
-refers to any such program or work, and a "work based on the Program"
-means either the Program or any derivative work under copyright law:
-that is to say, a work containing the Program or a portion of it,
-either verbatim or with modifications and/or translated into another
-language.  (Hereinafter, translation is included without limitation in
-the term "modification".)  Each licensee is addressed as "you".
-
-Activities other than copying, distribution and modification are not
-covered by this License; they are outside its scope.  The act of
-running the Program is not restricted, and the output from the Program
-is covered only if its contents constitute a work based on the
-Program (independent of having been made by running the Program).
-Whether that is true depends on what the Program does.
-
-<LI>
-
-You may copy and distribute verbatim copies of the Program's
-source code as you receive it, in any medium, provided that you
-conspicuously and appropriately publish on each copy an appropriate
-copyright notice and disclaimer of warranty; keep intact all the
-notices that refer to this License and to the absence of any warranty;
-and give any other recipients of the Program a copy of this License
-along with the Program.
-
-You may charge a fee for the physical act of transferring a copy, and
-you may at your option offer warranty protection in exchange for a fee.
-
-<LI>
-
-You may modify your copy or copies of the Program or any portion
-of it, thus forming a work based on the Program, and copy and
-distribute such modifications or work under the terms of Section 1
-above, provided that you also meet all of these conditions:
-
-
-<OL>
-<LI>
-
-You must cause the modified files to carry prominent notices
-stating that you changed the files and the date of any change.
-
-<LI>
-
-You must cause any work that you distribute or publish, that in
-whole or in part contains or is derived from the Program or any
-part thereof, to be licensed as a whole at no charge to all third
-parties under the terms of this License.
-
-<LI>
-
-If the modified program normally reads commands interactively
-when run, you must cause it, when started running for such
-interactive use in the most ordinary way, to print or display an
-announcement including an appropriate copyright notice and a
-notice that there is no warranty (or else, saying that you provide
-a warranty) and that users may redistribute the program under
-these conditions, and telling the user how to view a copy of this
-License.  (Exception: if the Program itself is interactive but
-does not normally print such an announcement, your work based on
-the Program is not required to print an announcement.)
-</OL>
-
-These requirements apply to the modified work as a whole.  If
-identifiable sections of that work are not derived from the Program,
-and can be reasonably considered independent and separate works in
-themselves, then this License, and its terms, do not apply to those
-sections when you distribute them as separate works.  But when you
-distribute the same sections as part of a whole which is a work based
-on the Program, the distribution of the whole must be on the terms of
-this License, whose permissions for other licensees extend to the
-entire whole, and thus to each and every part regardless of who wrote it.
-
-Thus, it is not the intent of this section to claim rights or contest
-your rights to work written entirely by you; rather, the intent is to
-exercise the right to control the distribution of derivative or
-collective works based on the Program.
-
-In addition, mere aggregation of another work not based on the Program
-with the Program (or with a work based on the Program) on a volume of
-a storage or distribution medium does not bring the other work under
-the scope of this License.
-
-<LI>
-
-You may copy and distribute the Program (or a work based on it,
-under Section 2) in object code or executable form under the terms of
-Sections 1 and 2 above provided that you also do one of the following:
-
-
-<OL>
-<LI>
-
-Accompany it with the complete corresponding machine-readable
-source code, which must be distributed under the terms of Sections
-1 and 2 above on a medium customarily used for software interchange; or,
-
-<LI>
-
-Accompany it with a written offer, valid for at least three
-years, to give any third party, for a charge no more than your
-cost of physically performing source distribution, a complete
-machine-readable copy of the corresponding source code, to be
-distributed under the terms of Sections 1 and 2 above on a medium
-customarily used for software interchange; or,
-
-<LI>
-
-Accompany it with the information you received as to the offer
-to distribute corresponding source code.  (This alternative is
-allowed only for noncommercial distribution and only if you
-received the program in object code or executable form with such
-an offer, in accord with Subsection b above.)
-</OL>
-
-The source code for a work means the preferred form of the work for
-making modifications to it.  For an executable work, complete source
-code means all the source code for all modules it contains, plus any
-associated interface definition files, plus the scripts used to
-control compilation and installation of the executable.  However, as a
-special exception, the source code distributed need not include
-anything that is normally distributed (in either source or binary
-form) with the major components (compiler, kernel, and so on) of the
-operating system on which the executable runs, unless that component
-itself accompanies the executable.
-
-If distribution of executable or object code is made by offering
-access to copy from a designated place, then offering equivalent
-access to copy the source code from the same place counts as
-distribution of the source code, even though third parties are not
-compelled to copy the source along with the object code.
-
-<LI>
-
-You may not copy, modify, sublicense, or distribute the Program
-except as expressly provided under this License.  Any attempt
-otherwise to copy, modify, sublicense or distribute the Program is
-void, and will automatically terminate your rights under this License.
-However, parties who have received copies, or rights, from you under
-this License will not have their licenses terminated so long as such
-parties remain in full compliance.
-
-<LI>
-
-You are not required to accept this License, since you have not
-signed it.  However, nothing else grants you permission to modify or
-distribute the Program or its derivative works.  These actions are
-prohibited by law if you do not accept this License.  Therefore, by
-modifying or distributing the Program (or any work based on the
-Program), you indicate your acceptance of this License to do so, and
-all its terms and conditions for copying, distributing or modifying
-the Program or works based on it.
-
-<LI>
-
-Each time you redistribute the Program (or any work based on the
-Program), the recipient automatically receives a license from the
-original licensor to copy, distribute or modify the Program subject to
-these terms and conditions.  You may not impose any further
-restrictions on the recipients' exercise of the rights granted herein.
-You are not responsible for enforcing compliance by third parties to
-this License.
-
-<LI>
-
-If, as a consequence of a court judgment or allegation of patent
-infringement or for any other reason (not limited to patent issues),
-conditions are imposed on you (whether by court order, agreement or
-otherwise) that contradict the conditions of this License, they do not
-excuse you from the conditions of this License.  If you cannot
-distribute so as to satisfy simultaneously your obligations under this
-License and any other pertinent obligations, then as a consequence you
-may not distribute the Program at all.  For example, if a patent
-license would not permit royalty-free redistribution of the Program by
-all those who receive copies directly or indirectly through you, then
-the only way you could satisfy both it and this License would be to
-refrain entirely from distribution of the Program.
-
-If any portion of this section is held invalid or unenforceable under
-any particular circumstance, the balance of the section is intended to
-apply and the section as a whole is intended to apply in other
-circumstances.
-
-It is not the purpose of this section to induce you to infringe any
-patents or other property right claims or to contest validity of any
-such claims; this section has the sole purpose of protecting the
-integrity of the free software distribution system, which is
-implemented by public license practices.  Many people have made
-generous contributions to the wide range of software distributed
-through that system in reliance on consistent application of that
-system; it is up to the author/donor to decide if he or she is willing
-to distribute software through any other system and a licensee cannot
-impose that choice.
-
-This section is intended to make thoroughly clear what is believed to
-be a consequence of the rest of this License.
-
-<LI>
-
-If the distribution and/or use of the Program is restricted in
-certain countries either by patents or by copyrighted interfaces, the
-original copyright holder who places the Program under this License
-may add an explicit geographical distribution limitation excluding
-those countries, so that distribution is permitted only in or among
-countries not thus excluded.  In such case, this License incorporates
-the limitation as if written in the body of this License.
-
-<LI>
-
-The Free Software Foundation may publish revised and/or new versions
-of the General Public License from time to time.  Such new versions will
-be similar in spirit to the present version, but may differ in detail to
-address new problems or concerns.
-
-Each version is given a distinguishing version number.  If the Program
-specifies a version number of this License which applies to it and "any
-later version", you have the option of following the terms and conditions
-either of that version or of any later version published by the Free
-Software Foundation.  If the Program does not specify a version number of
-this License, you may choose any version ever published by the Free Software
-Foundation.
-
-<LI>
-
-If you wish to incorporate parts of the Program into other free
-programs whose distribution conditions are different, write to the author
-to ask for permission.  For software which is copyrighted by the Free
-Software Foundation, write to the Free Software Foundation; we sometimes
-make exceptions for this.  Our decision will be guided by the two goals
-of preserving the free status of all derivatives of our free software and
-of promoting the sharing and reuse of software generally.
-
-
-
-<P><STRONG>NO WARRANTY</STRONG>
-<A NAME="IDX154"></A>
-
-<LI>
-
-BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
-FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
-OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
-PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
-OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
-MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
-TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
-PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
-REPAIR OR CORRECTION.
-
-<LI>
-
-IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
-WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
-REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
-INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
-OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
-TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
-YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
-PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
-POSSIBILITY OF SUCH DAMAGES.
-</OL>
-
-
-<H2>END OF TERMS AND CONDITIONS</H2>
-
-
-
-<H2><A NAME="SEC48" HREF="wget.html#TOC48">How to Apply These Terms to Your New Programs</A></H2>
-
-<P>
-  If you develop a new program, and you want it to be of the greatest
-possible use to the public, the best way to achieve this is to make it
-free software which everyone can redistribute and change under these terms.
-
-
-<P>
-  To do so, attach the following notices to the program.  It is safest
-to attach them to the start of each source file to most effectively
-convey the exclusion of warranty; and each file should have at least
-the "copyright" line and a pointer to where the full notice is found.
-
-
-
-<PRE>
-<VAR>one line to give the program's name and an idea of what it does.</VAR>
-Copyright (C) 19<VAR>yy</VAR>  <VAR>name of author</VAR>
-
-This program is free software; you can redistribute it and/or
-modify it under the terms of the GNU General Public License
-as published by the Free Software Foundation; either version 2
-of the License, or (at your option) any later version.
-
-This program is distributed in the hope that it will be useful,
-but WITHOUT ANY WARRANTY; without even the implied warranty of
-MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-GNU General Public License for more details.
-
-You should have received a copy of the GNU General Public License
-along with this program; if not, write to the Free Software
-Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
-</PRE>
-
-<P>
-Also add information on how to contact you by electronic and paper mail.
-
-
-<P>
-If the program is interactive, make it output a short notice like this
-when it starts in an interactive mode:
-
-
-
-<PRE>
-Gnomovision version 69, Copyright (C) 19<VAR>yy</VAR> <VAR>name of author</VAR>
-Gnomovision comes with ABSOLUTELY NO WARRANTY; for details
-type `show w'.  This is free software, and you are welcome
-to redistribute it under certain conditions; type `show c'
-for details.
-</PRE>
-
-<P>
-The hypothetical commands <SAMP>`show w'</SAMP> and <SAMP>`show c'</SAMP> should show
-the appropriate parts of the General Public License.  Of course, the
-commands you use may be called something other than <SAMP>`show w'</SAMP> and
-<SAMP>`show c'</SAMP>; they could even be mouse-clicks or menu items--whatever
-suits your program.
-
-
-<P>
-You should also get your employer (if you work as a programmer) or your
-school, if any, to sign a "copyright disclaimer" for the program, if
-necessary.  Here is a sample; alter the names:
-
-
-
-<PRE>
-Yoyodyne, Inc., hereby disclaims all copyright
-interest in the program `Gnomovision'
-(which makes passes at compilers) written
-by James Hacker.
-
-<VAR>signature of Ty Coon</VAR>, 1 April 1989
-Ty Coon, President of Vice
-</PRE>
-
-<P>
-This General Public License does not permit incorporating your program into
-proprietary programs.  If your program is a subroutine library, you may
-consider it more useful to permit linking proprietary applications with the
-library.  If this is what you want to do, use the GNU Library General
-Public License instead of this License.
-
-
-
-
-<H2><A NAME="SEC49" HREF="wget.html#TOC49">GNU Free Documentation License</A></H2>
-<P>
-Version 1.1, March 2000
-
-
-
-<PRE>
-Copyright (C) 2000  Free Software Foundation, Inc.
-51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA
-
-Everyone is permitted to copy and distribute verbatim copies
-of this license document, but changing it is not allowed.
-</PRE>
-
-
-<OL>
-<LI>
-
-PREAMBLE
-
-The purpose of this License is to make a manual, textbook, or other
-written document "free" in the sense of freedom: to assure everyone
-the effective freedom to copy and redistribute it, with or without
-modifying it, either commercially or noncommercially.  Secondarily,
-this License preserves for the author and publisher a way to get
-credit for their work, while not being considered responsible for
-modifications made by others.
-
-This License is a kind of "copyleft", which means that derivative
-works of the document must themselves be free in the same sense.  It
-complements the GNU General Public License, which is a copyleft
-license designed for free software.
-
-We have designed this License in order to use it for manuals for free
-software, because free software needs free documentation: a free
-program should come with manuals providing the same freedoms that the
-software does.  But this License is not limited to software manuals;
-it can be used for any textual work, regardless of subject matter or
-whether it is published as a printed book.  We recommend this License
-principally for works whose purpose is instruction or reference.
-
-<LI>
-
-APPLICABILITY AND DEFINITIONS
-
-This License applies to any manual or other work that contains a
-notice placed by the copyright holder saying it can be distributed
-under the terms of this License.  The "Document", below, refers to any
-such manual or work.  Any member of the public is a licensee, and is
-addressed as "you".
-
-A "Modified Version" of the Document means any work containing the
-Document or a portion of it, either copied verbatim, or with
-modifications and/or translated into another language.
-
-A "Secondary Section" is a named appendix or a front-matter section of
-the Document that deals exclusively with the relationship of the
-publishers or authors of the Document to the Document's overall subject
-(or to related matters) and contains nothing that could fall directly
-within that overall subject.  (For example, if the Document is in part a
-textbook of mathematics, a Secondary Section may not explain any
-mathematics.)  The relationship could be a matter of historical
-connection with the subject or with related matters, or of legal,
-commercial, philosophical, ethical or political position regarding
-them.
-
-The "Invariant Sections" are certain Secondary Sections whose titles
-are designated, as being those of Invariant Sections, in the notice
-that says that the Document is released under this License.
-
-The "Cover Texts" are certain short passages of text that are listed,
-as Front-Cover Texts or Back-Cover Texts, in the notice that says that
-the Document is released under this License.
-
-A "Transparent" copy of the Document means a machine-readable copy,
-represented in a format whose specification is available to the
-general public, whose contents can be viewed and edited directly and
-straightforwardly with generic text editors or (for images composed of
-pixels) generic paint programs or (for drawings) some widely available
-drawing editor, and that is suitable for input to text formatters or
-for automatic translation to a variety of formats suitable for input
-to text formatters.  A copy made in an otherwise Transparent file
-format whose markup has been designed to thwart or discourage
-subsequent modification by readers is not Transparent.  A copy that is
-not "Transparent" is called "Opaque".
-
-Examples of suitable formats for Transparent copies include plain
-ASCII without markup, Texinfo input format, LaTeX input format, SGML
-or XML using a publicly available DTD, and standard-conforming simple
-HTML designed for human modification.  Opaque formats include
-PostScript, PDF, proprietary formats that can be read and edited only
-by proprietary word processors, SGML or XML for which the DTD and/or
-processing tools are not generally available, and the
-machine-generated HTML produced by some word processors for output
-purposes only.
-
-The "Title Page" means, for a printed book, the title page itself,
-plus such following pages as are needed to hold, legibly, the material
-this License requires to appear in the title page.  For works in
-formats which do not have any title page as such, "Title Page" means
-the text near the most prominent appearance of the work's title,
-preceding the beginning of the body of the text.
-<LI>
-
-VERBATIM COPYING
-
-You may copy and distribute the Document in any medium, either
-commercially or noncommercially, provided that this License, the
-copyright notices, and the license notice saying this License applies
-to the Document are reproduced in all copies, and that you add no other
-conditions whatsoever to those of this License.  You may not use
-technical measures to obstruct or control the reading or further
-copying of the copies you make or distribute.  However, you may accept
-compensation in exchange for copies.  If you distribute a large enough
-number of copies you must also follow the conditions in section 3.
-
-You may also lend copies, under the same conditions stated above, and
-you may publicly display copies.
-<LI>
-
-COPYING IN QUANTITY
-
-If you publish printed copies of the Document numbering more than 100,
-and the Document's license notice requires Cover Texts, you must enclose
-the copies in covers that carry, clearly and legibly, all these Cover
-Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on
-the back cover.  Both covers must also clearly and legibly identify
-you as the publisher of these copies.  The front cover must present
-the full title with all words of the title equally prominent and
-visible.  You may add other material on the covers in addition.
-Copying with changes limited to the covers, as long as they preserve
-the title of the Document and satisfy these conditions, can be treated
-as verbatim copying in other respects.
-
-If the required texts for either cover are too voluminous to fit
-legibly, you should put the first ones listed (as many as fit
-reasonably) on the actual cover, and continue the rest onto adjacent
-pages.
-
-If you publish or distribute Opaque copies of the Document numbering
-more than 100, you must either include a machine-readable Transparent
-copy along with each Opaque copy, or state in or with each Opaque copy
-a publicly-accessible computer-network location containing a complete
-Transparent copy of the Document, free of added material, which the
-general network-using public has access to download anonymously at no
-charge using public-standard network protocols.  If you use the latter
-option, you must take reasonably prudent steps, when you begin
-distribution of Opaque copies in quantity, to ensure that this
-Transparent copy will remain thus accessible at the stated location
-until at least one year after the last time you distribute an Opaque
-copy (directly or through your agents or retailers) of that edition to
-the public.
-
-It is requested, but not required, that you contact the authors of the
-Document well before redistributing any large number of copies, to give
-them a chance to provide you with an updated version of the Document.
-<LI>
-
-MODIFICATIONS
-
-You may copy and distribute a Modified Version of the Document under
-the conditions of sections 2 and 3 above, provided that you release
-the Modified Version under precisely this License, with the Modified
-Version filling the role of the Document, thus licensing distribution
-and modification of the Modified Version to whoever possesses a copy
-of it.  In addition, you must do these things in the Modified Version:
-
-A. Use in the Title Page (and on the covers, if any) a title distinct
-   from that of the Document, and from those of previous versions
-   (which should, if there were any, be listed in the History section
-   of the Document).  You may use the same title as a previous version
-   if the original publisher of that version gives permission.<BR>
-B. List on the Title Page, as authors, one or more persons or entities
-   responsible for authorship of the modifications in the Modified
-   Version, together with at least five of the principal authors of the
-   Document (all of its principal authors, if it has less than five).<BR>
-C. State on the Title page the name of the publisher of the
-   Modified Version, as the publisher.<BR>
-D. Preserve all the copyright notices of the Document.<BR>
-E. Add an appropriate copyright notice for your modifications
-   adjacent to the other copyright notices.<BR>
-F. Include, immediately after the copyright notices, a license notice
-   giving the public permission to use the Modified Version under the
-   terms of this License, in the form shown in the Addendum below.<BR>
-G. Preserve in that license notice the full lists of Invariant Sections
-   and required Cover Texts given in the Document's license notice.<BR>
-H. Include an unaltered copy of this License.<BR>
-I. Preserve the section entitled "History", and its title, and add to
-   it an item stating at least the title, year, new authors, and
-   publisher of the Modified Version as given on the Title Page.  If
-   there is no section entitled "History" in the Document, create one
-   stating the title, year, authors, and publisher of the Document as
-   given on its Title Page, then add an item describing the Modified
-   Version as stated in the previous sentence.<BR>
-J. Preserve the network location, if any, given in the Document for
-   public access to a Transparent copy of the Document, and likewise
-   the network locations given in the Document for previous versions
-   it was based on.  These may be placed in the "History" section.
-   You may omit a network location for a work that was published at
-   least four years before the Document itself, or if the original
-   publisher of the version it refers to gives permission.<BR>
-K. In any section entitled "Acknowledgements" or "Dedications",
-   preserve the section's title, and preserve in the section all the
-   substance and tone of each of the contributor acknowledgements
-   and/or dedications given therein.<BR>
-L. Preserve all the Invariant Sections of the Document,
-   unaltered in their text and in their titles.  Section numbers
-   or the equivalent are not considered part of the section titles.<BR>
-M. Delete any section entitled "Endorsements".  Such a section
-   may not be included in the Modified Version.<BR>
-N. Do not retitle any existing section as "Endorsements"
-   or to conflict in title with any Invariant Section.<BR>
-If the Modified Version includes new front-matter sections or
-appendices that qualify as Secondary Sections and contain no material
-copied from the Document, you may at your option designate some or all
-of these sections as invariant.  To do this, add their titles to the
-list of Invariant Sections in the Modified Version's license notice.
-These titles must be distinct from any other section titles.
-
-You may add a section entitled "Endorsements", provided it contains
-nothing but endorsements of your Modified Version by various
-parties--for example, statements of peer review or that the text has
-been approved by an organization as the authoritative definition of a
-standard.
-
-You may add a passage of up to five words as a Front-Cover Text, and a
-passage of up to 25 words as a Back-Cover Text, to the end of the list
-of Cover Texts in the Modified Version.  Only one passage of
-Front-Cover Text and one of Back-Cover Text may be added by (or
-through arrangements made by) any one entity.  If the Document already
-includes a cover text for the same cover, previously added by you or
-by arrangement made by the same entity you are acting on behalf of,
-you may not add another; but you may replace the old one, on explicit
-permission from the previous publisher that added the old one.
-
-The author(s) and publisher(s) of the Document do not by this License
-give permission to use their names for publicity for or to assert or
-imply endorsement of any Modified Version.
-<LI>
-
-COMBINING DOCUMENTS
-
-You may combine the Document with other documents released under this
-License, under the terms defined in section 4 above for modified
-versions, provided that you include in the combination all of the
-Invariant Sections of all of the original documents, unmodified, and
-list them all as Invariant Sections of your combined work in its
-license notice.
-
-The combined work need only contain one copy of this License, and
-multiple identical Invariant Sections may be replaced with a single
-copy.  If there are multiple Invariant Sections with the same name but
-different contents, make the title of each such section unique by
-adding at the end of it, in parentheses, the name of the original
-author or publisher of that section if known, or else a unique number.
-Make the same adjustment to the section titles in the list of
-Invariant Sections in the license notice of the combined work.
-
-In the combination, you must combine any sections entitled "History"
-in the various original documents, forming one section entitled
-"History"; likewise combine any sections entitled "Acknowledgements",
-and any sections entitled "Dedications".  You must delete all sections
-entitled "Endorsements."
-<LI>
-
-COLLECTIONS OF DOCUMENTS
-
-You may make a collection consisting of the Document and other documents
-released under this License, and replace the individual copies of this
-License in the various documents with a single copy that is included in
-the collection, provided that you follow the rules of this License for
-verbatim copying of each of the documents in all other respects.
-
-You may extract a single document from such a collection, and distribute
-it individually under this License, provided you insert a copy of this
-License into the extracted document, and follow this License in all
-other respects regarding verbatim copying of that document.
-<LI>
-
-AGGREGATION WITH INDEPENDENT WORKS
-
-A compilation of the Document or its derivatives with other separate
-and independent documents or works, in or on a volume of a storage or
-distribution medium, does not as a whole count as a Modified Version
-of the Document, provided no compilation copyright is claimed for the
-compilation.  Such a compilation is called an "aggregate", and this
-License does not apply to the other self-contained works thus compiled
-with the Document, on account of their being thus compiled, if they
-are not themselves derivative works of the Document.
-
-If the Cover Text requirement of section 3 is applicable to these
-copies of the Document, then if the Document is less than one quarter
-of the entire aggregate, the Document's Cover Texts may be placed on
-covers that surround only the Document within the aggregate.
-Otherwise they must appear on covers around the whole aggregate.
-<LI>
-
-TRANSLATION
-
-Translation is considered a kind of modification, so you may
-distribute translations of the Document under the terms of section 4.
-Replacing Invariant Sections with translations requires special
-permission from their copyright holders, but you may include
-translations of some or all Invariant Sections in addition to the
-original versions of these Invariant Sections.  You may include a
-translation of this License provided that you also include the
-original English version of this License.  In case of a disagreement
-between the translation and the original English version of this
-License, the original English version will prevail.
-<LI>
-
-TERMINATION
-
-You may not copy, modify, sublicense, or distribute the Document except
-as expressly provided for under this License.  Any other attempt to
-copy, modify, sublicense or distribute the Document is void, and will
-automatically terminate your rights under this License.  However,
-parties who have received copies, or rights, from you under this
-License will not have their licenses terminated so long as such
-parties remain in full compliance.
-<LI>
-
-FUTURE REVISIONS OF THIS LICENSE
-
-The Free Software Foundation may publish new, revised versions
-of the GNU Free Documentation License from time to time.  Such new
-versions will be similar in spirit to the present version, but may
-differ in detail to address new problems or concerns.  See
-http://www.gnu.org/copyleft/.
-
-Each version of the License is given a distinguishing version number.
-If the Document specifies that a particular numbered version of this
-License "or any later version" applies to it, you have the option of
-following the terms and conditions either of that specified version or
-of any later version that has been published (not as a draft) by the
-Free Software Foundation.  If the Document does not specify a version
-number of this License, you may choose any version ever published (not
-as a draft) by the Free Software Foundation.
-
-</OL>
-
-
-
-<H2><A NAME="SEC50" HREF="wget.html#TOC50">ADDENDUM: How to use this License for your documents</A></H2>
-
-<P>
-To use this License in a document you have written, include a copy of
-the License in the document and put the following copyright and
-license notices just after the title page:
-
-
-
-<PRE>
-
-  Copyright (C)  <VAR>year</VAR>  <VAR>your name</VAR>.
-  Permission is granted to copy, distribute and/or modify this document
-  under the terms of the GNU Free Documentation License, Version 1.1
-  or any later version published by the Free Software Foundation;
-  with the Invariant Sections being <VAR>list their titles</VAR>, with the
-  Front-Cover Texts being <VAR>list</VAR>, and with the Back-Cover Texts being <VAR>list</VAR>.
-  A copy of the license is included in the section entitled ``GNU
-  Free Documentation License''.
-</PRE>
-
-<P>
-If you have no Invariant Sections, write "with no Invariant Sections"
-instead of saying which ones are invariant.  If you have no
-Front-Cover Texts, write "no Front-Cover Texts" instead of
-"Front-Cover Texts being <VAR>list</VAR>"; likewise for Back-Cover Texts.
-
-
-<P>
-If your document contains nontrivial examples of program code, we
-recommend releasing these examples in parallel under your choice of
-free software license, such as the GNU General Public License,
-to permit their use in free software.
-
-
-
-
-<H1><A NAME="SEC51" HREF="wget.html#TOC51">Concept Index</A></H1>
-<P>
-Jump to:
-<A HREF="#cindex_.">.</A>
--
-<A HREF="#cindex_a">a</A>
--
-<A HREF="#cindex_b">b</A>
--
-<A HREF="#cindex_c">c</A>
--
-<A HREF="#cindex_d">d</A>
--
-<A HREF="#cindex_e">e</A>
--
-<A HREF="#cindex_f">f</A>
--
-<A HREF="#cindex_g">g</A>
--
-<A HREF="#cindex_h">h</A>
--
-<A HREF="#cindex_i">i</A>
--
-<A HREF="#cindex_l">l</A>
--
-<A HREF="#cindex_m">m</A>
--
-<A HREF="#cindex_n">n</A>
--
-<A HREF="#cindex_o">o</A>
--
-<A HREF="#cindex_p">p</A>
--
-<A HREF="#cindex_q">q</A>
--
-<A HREF="#cindex_r">r</A>
--
-<A HREF="#cindex_s">s</A>
--
-<A HREF="#cindex_t">t</A>
--
-<A HREF="#cindex_u">u</A>
--
-<A HREF="#cindex_v">v</A>
--
-<A HREF="#cindex_w">w</A>
-<P>
-<H2><A NAME="cindex_.">.</A></H2>
-<DIR>
-<LI><A HREF="wget.html#IDX49">.html extension</A>
-<LI><A HREF="wget.html#IDX70">.listing files, removing</A>
-<LI><A HREF="wget.html#IDX123">.netrc</A>
-<LI><A HREF="wget.html#IDX121">.wgetrc</A>
-</DIR>
-<H2><A NAME="cindex_a">a</A></H2>
-<DIR>
-<LI><A HREF="wget.html#IDX104">accept directories</A>
-<LI><A HREF="wget.html#IDX93">accept suffixes</A>
-<LI><A HREF="wget.html#IDX92">accept wildcards</A>
-<LI><A HREF="wget.html#IDX14">append to log</A>
-<LI><A HREF="wget.html#IDX5">arguments</A>
-<LI><A HREF="wget.html#IDX52">authentication</A>
-</DIR>
-<H2><A NAME="cindex_b">b</A></H2>
-<DIR>
-<LI><A HREF="wget.html#IDX79">backing up converted files</A>
-<LI><A HREF="wget.html#IDX20">base for relative links in input file</A>
-<LI><A HREF="wget.html#IDX21">bind() address</A>
-<LI><A HREF="wget.html#IDX140">bug reports</A>
-<LI><A HREF="wget.html#IDX138">bugs</A>
-</DIR>
-<H2><A NAME="cindex_c">c</A></H2>
-<DIR>
-<LI><A HREF="wget.html#IDX54">cache</A>
-<LI><A HREF="wget.html#IDX22">client IP address</A>
-<LI><A HREF="wget.html#IDX27">clobbering, file</A>
-<LI><A HREF="wget.html#IDX4">command line</A>
-<LI><A HREF="wget.html#IDX60">Content-Length, ignore</A>
-<LI><A HREF="wget.html#IDX30">continue retrieval</A>
-<LI><A HREF="wget.html#IDX149">contributors</A>
-<LI><A HREF="wget.html#IDX77">conversion of links</A>
-<LI><A HREF="wget.html#IDX55">cookies</A>
-<LI><A HREF="wget.html#IDX57">cookies, loading</A>
-<LI><A HREF="wget.html#IDX59">cookies, saving</A>
-<LI><A HREF="wget.html#IDX150">copying</A>
-<LI><A HREF="wget.html#IDX47">cut directories</A>
-</DIR>
-<H2><A NAME="cindex_d">d</A></H2>
-<DIR>
-<LI><A HREF="wget.html#IDX15">debug</A>
-<LI><A HREF="wget.html#IDX75">delete after retrieval</A>
-<LI><A HREF="wget.html#IDX100">directories</A>
-<LI><A HREF="wget.html#IDX105">directories, exclude</A>
-<LI><A HREF="wget.html#IDX102">directories, include</A>
-<LI><A HREF="wget.html#IDX101">directory limits</A>
-<LI><A HREF="wget.html#IDX48">directory prefix</A>
-<LI><A HREF="wget.html#IDX34">dot style</A>
-<LI><A HREF="wget.html#IDX28">downloading multiple times</A>
-</DIR>
-<H2><A NAME="cindex_e">e</A></H2>
-<DIR>
-<LI><A HREF="wget.html#IDX130">examples</A>
-<LI><A HREF="wget.html#IDX106">exclude directories</A>
-<LI><A HREF="wget.html#IDX11">execute wgetrc command</A>
-</DIR>
-<H2><A NAME="cindex_f">f</A></H2>
-<DIR>
-<LI><A HREF="wget.html#IDX2">features</A>
-<LI><A HREF="wget.html#IDX76">filling proxy cache</A>
-<LI><A HREF="wget.html#IDX82">follow FTP links</A>
-<LI><A HREF="wget.html#IDX110">following ftp links</A>
-<LI><A HREF="wget.html#IDX88">following links</A>
-<LI><A HREF="wget.html#IDX19">force html</A>
-<LI><A HREF="wget.html#IDX153">free software</A>
-<LI><A HREF="wget.html#IDX118">ftp time-stamping</A>
-</DIR>
-<H2><A NAME="cindex_g">g</A></H2>
-<DIR>
-<LI><A HREF="wget.html#IDX152">GFDL</A>
-<LI><A HREF="wget.html#IDX71">globbing, toggle</A>
-<LI><A HREF="wget.html#IDX151">GPL</A>
-</DIR>
-<H2><A NAME="cindex_h">h</A></H2>
-<DIR>
-<LI><A HREF="wget.html#IDX144">hangup</A>
-<LI><A HREF="wget.html#IDX62">header, add</A>
-<LI><A HREF="wget.html#IDX90">hosts, spanning</A>
-<LI><A HREF="wget.html#IDX51">http password</A>
-<LI><A HREF="wget.html#IDX66">http referer</A>
-<LI><A HREF="wget.html#IDX117">http time-stamping</A>
-<LI><A HREF="wget.html#IDX50">http user</A>
-</DIR>
-<H2><A NAME="cindex_i">i</A></H2>
-<DIR>
-<LI><A HREF="wget.html#IDX61">ignore length</A>
-<LI><A HREF="wget.html#IDX103">include directories</A>
-<LI><A HREF="wget.html#IDX31">incomplete downloads</A>
-<LI><A HREF="wget.html#IDX114">incremental updating</A>
-<LI><A HREF="wget.html#IDX18">input-file</A>
-<LI><A HREF="wget.html#IDX3">invoking</A>
-<LI><A HREF="wget.html#IDX23">IP address, client</A>
-</DIR>
-<H2><A NAME="cindex_l">l</A></H2>
-<DIR>
-<LI><A HREF="wget.html#IDX135">latest version</A>
-<LI><A HREF="wget.html#IDX78">link conversion</A>
-<LI><A HREF="wget.html#IDX87">links</A>
-<LI><A HREF="wget.html#IDX137">list</A>
-<LI><A HREF="wget.html#IDX56">loading cookies</A>
-<LI><A HREF="wget.html#IDX125">location of wgetrc</A>
-<LI><A HREF="wget.html#IDX13">log file</A>
-</DIR>
-<H2><A NAME="cindex_m">m</A></H2>
-<DIR>
-<LI><A HREF="wget.html#IDX136">mailing list</A>
-<LI><A HREF="wget.html#IDX132">mirroring</A>
-</DIR>
-<H2><A NAME="cindex_n">n</A></H2>
-<DIR>
-<LI><A HREF="wget.html#IDX108">no parent</A>
-<LI><A HREF="wget.html#IDX154">no warranty</A>
-<LI><A HREF="wget.html#IDX29">no-clobber</A>
-<LI><A HREF="wget.html#IDX6">nohup</A>
-<LI><A HREF="wget.html#IDX26">number of retries</A>
-</DIR>
-<H2><A NAME="cindex_o">o</A></H2>
-<DIR>
-<LI><A HREF="wget.html#IDX142">operating systems</A>
-<LI><A HREF="wget.html#IDX9">option syntax</A>
-<LI><A HREF="wget.html#IDX12">output file</A>
-<LI><A HREF="wget.html#IDX1">overview</A>
-</DIR>
-<H2><A NAME="cindex_p">p</A></H2>
-<DIR>
-<LI><A HREF="wget.html#IDX80">page requisites</A>
-<LI><A HREF="wget.html#IDX72">passive ftp</A>
-<LI><A HREF="wget.html#IDX39">pause</A>
-<LI><A HREF="wget.html#IDX141">portability</A>
-<LI><A HREF="wget.html#IDX33">progress indicator</A>
-<LI><A HREF="wget.html#IDX134">proxies</A>
-<LI><A HREF="wget.html#IDX45">proxy</A>, <A HREF="wget.html#IDX53">proxy</A>
-<LI><A HREF="wget.html#IDX65">proxy authentication</A>
-<LI><A HREF="wget.html#IDX74">proxy filling</A>
-<LI><A HREF="wget.html#IDX64">proxy password</A>
-<LI><A HREF="wget.html#IDX63">proxy user</A>
-</DIR>
-<H2><A NAME="cindex_q">q</A></H2>
-<DIR>
-<LI><A HREF="wget.html#IDX16">quiet</A>
-<LI><A HREF="wget.html#IDX46">quota</A>
-</DIR>
-<H2><A NAME="cindex_r">r</A></H2>
-<DIR>
-<LI><A HREF="wget.html#IDX44">random wait</A>
-<LI><A HREF="wget.html#IDX84">recursion</A>
-<LI><A HREF="wget.html#IDX86">recursive retrieval</A>
-<LI><A HREF="wget.html#IDX131">redirecting output</A>
-<LI><A HREF="wget.html#IDX67">referer, http</A>
-<LI><A HREF="wget.html#IDX107">reject directories</A>
-<LI><A HREF="wget.html#IDX97">reject suffixes</A>
-<LI><A HREF="wget.html#IDX96">reject wildcards</A>
-<LI><A HREF="wget.html#IDX109">relative links</A>
-<LI><A HREF="wget.html#IDX139">reporting bugs</A>
-<LI><A HREF="wget.html#IDX81">required images, downloading</A>
-<LI><A HREF="wget.html#IDX32">resume download</A>
-<LI><A HREF="wget.html#IDX24">retries</A>
-<LI><A HREF="wget.html#IDX41">retries, waiting between</A>
-<LI><A HREF="wget.html#IDX85">retrieving</A>
-<LI><A HREF="wget.html#IDX145">robots</A>
-<LI><A HREF="wget.html#IDX146">robots.txt</A>
-</DIR>
-<H2><A NAME="cindex_s">s</A></H2>
-<DIR>
-<LI><A HREF="wget.html#IDX129">sample wgetrc</A>
-<LI><A HREF="wget.html#IDX58">saving cookies</A>
-<LI><A HREF="wget.html#IDX148">security</A>
-<LI><A HREF="wget.html#IDX147">server maintenance</A>
-<LI><A HREF="wget.html#IDX35">server response, print</A>
-<LI><A HREF="wget.html#IDX68">server response, save</A>
-<LI><A HREF="wget.html#IDX143">signal handling</A>
-<LI><A HREF="wget.html#IDX89">spanning hosts</A>
-<LI><A HREF="wget.html#IDX37">spider</A>
-<LI><A HREF="wget.html#IDX122">startup</A>
-<LI><A HREF="wget.html#IDX119">startup file</A>
-<LI><A HREF="wget.html#IDX95">suffixes, accept</A>
-<LI><A HREF="wget.html#IDX99">suffixes, reject</A>
-<LI><A HREF="wget.html#IDX73">symbolic links, retrieving</A>
-<LI><A HREF="wget.html#IDX10">syntax of options</A>
-<LI><A HREF="wget.html#IDX127">syntax of wgetrc</A>
-</DIR>
-<H2><A NAME="cindex_t">t</A></H2>
-<DIR>
-<LI><A HREF="wget.html#IDX83">tag-based recursive pruning</A>
-<LI><A HREF="wget.html#IDX111">time-stamping</A>
-<LI><A HREF="wget.html#IDX115">time-stamping usage</A>
-<LI><A HREF="wget.html#IDX38">timeout</A>
-<LI><A HREF="wget.html#IDX112">timestamping</A>
-<LI><A HREF="wget.html#IDX25">tries</A>
-<LI><A HREF="wget.html#IDX91">types of files</A>
-</DIR>
-<H2><A NAME="cindex_u">u</A></H2>
-<DIR>
-<LI><A HREF="wget.html#IDX113">updating the archives</A>
-<LI><A HREF="wget.html#IDX7">URL</A>
-<LI><A HREF="wget.html#IDX8">URL syntax</A>
-<LI><A HREF="wget.html#IDX116">usage, time-stamping</A>
-<LI><A HREF="wget.html#IDX69">user-agent</A>
-</DIR>
-<H2><A NAME="cindex_v">v</A></H2>
-<DIR>
-<LI><A HREF="wget.html#IDX133">various</A>
-<LI><A HREF="wget.html#IDX17">verbose</A>
-</DIR>
-<H2><A NAME="cindex_w">w</A></H2>
-<DIR>
-<LI><A HREF="wget.html#IDX40">wait</A>
-<LI><A HREF="wget.html#IDX43">wait, random</A>
-<LI><A HREF="wget.html#IDX42">waiting between retries</A>
-<LI><A HREF="wget.html#IDX36">Wget as spider</A>
-<LI><A HREF="wget.html#IDX120">wgetrc</A>
-<LI><A HREF="wget.html#IDX128">wgetrc commands</A>
-<LI><A HREF="wget.html#IDX124">wgetrc location</A>
-<LI><A HREF="wget.html#IDX126">wgetrc syntax</A>
-<LI><A HREF="wget.html#IDX94">wildcards, accept</A>
-<LI><A HREF="wget.html#IDX98">wildcards, reject</A>
-</DIR>
-
-
-<P><HR><P>
-<H1>Footnotes</H1>
-<H3><A NAME="FOOT1" HREF="wget.html#DOCF1">(1)</A></H3>
-<P>If you have a
-<TT>`.netrc'</TT> file in your home directory, the password will also be
-searched for there.
-<H3><A NAME="FOOT2" HREF="wget.html#DOCF2">(2)</A></H3>
-<P>As an additional check, Wget will look at the
-<CODE>Content-Length</CODE> header, and compare the sizes; if they are not the
-same, the remote file will be downloaded no matter what the time-stamp
-says.
-<P><HR><P>
-This document was generated on 17 January 2002 using
-<A HREF="http://wwwinfo.cern.ch/dis/texi2html/">texi2html</A>&nbsp;1.56k.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_mono/wget.html.gz
===================================================================
RCS file: manual/wget-1.8.1/html_mono/wget.html.gz
diff -N manual/wget-1.8.1/html_mono/wget.html.gz
Binary files /tmp/cvs8qNpYB and /dev/null differ

Index: manual/wget-1.8.1/html_node/wget.texi_html_node.tar.gz
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget.texi_html_node.tar.gz
diff -N manual/wget-1.8.1/html_node/wget.texi_html_node.tar.gz
Binary files /tmp/cvstHZVvC and /dev/null differ

Index: manual/wget-1.8.1/html_node/wget_1.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_1.html
diff -N manual/wget-1.8.1/html_node/wget_1.html
--- manual/wget-1.8.1/html_node/wget_1.html     19 Oct 2003 23:07:43 -0000      1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,124 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Overview</TITLE>
-</HEAD>
-<BODY>
-Go to the first, previous, <A HREF="wget_2.html">next</A>, <A HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-<P><HR><P>
-
-<P>
-@dircategory Net Utilities
-@dircategory World Wide Web
-* Wget: (wget).         The non-interactive network downloader.
-
-
-<P>
-Copyright (C) 1996, 1997, 1998, 2000, 2001 Free Software
-Foundation, Inc.
-
-
-<P>
-Permission is granted to copy, distribute and/or modify this document
-under the terms of the GNU Free Documentation License, Version 1.1 or
-any later version published by the Free Software Foundation; with the
-Invariant Sections being "GNU General Public License" and "GNU Free
-Documentation License", with no Front-Cover Texts, and with no
-Back-Cover Texts.  A copy of the license is included in the section
-entitled "GNU Free Documentation License".
-
-
-
-
-<H1><A NAME="SEC1" HREF="wget_toc.html#TOC1">Overview</A></H1>
-<P>
-<A NAME="IDX1"></A>
-<A NAME="IDX2"></A>
-
-
-<P>
-GNU Wget is a free utility for non-interactive download of files from
-the Web.  It supports HTTP, HTTPS, and FTP protocols, as
-well as retrieval through HTTP proxies.
-
-
-<P>
-This chapter is a partial overview of Wget's features.
-
-
-
-<UL>
-<LI>
-
-Wget is non-interactive, meaning that it can work in the background,
-while the user is not logged on.  This allows you to start a retrieval
-and disconnect from the system, letting Wget finish the work.  By
-contrast, most Web browsers require the user's constant presence,
-which can be a great hindrance when transferring a lot of data.
-
-<LI>
-
-Wget can follow links in HTML pages and create local versions of
-remote web sites, fully recreating the directory structure of the
-original site.  This is sometimes referred to as "recursive
-downloading."  While doing that, Wget respects the Robot Exclusion
-Standard (<TT>`/robots.txt'</TT>).  Wget can be instructed to convert the
-links in downloaded HTML files to the local files for offline
-viewing.
-
-<LI>
-
-File name wildcard matching and recursive mirroring of directories are
-available when retrieving via FTP.  Wget can read the time-stamp
-information given by both HTTP and FTP servers, and store it
-locally.  Thus Wget can see if the remote file has changed since last
-retrieval, and automatically retrieve the new version if it has.  This
-makes Wget suitable for mirroring of FTP sites, as well as home
-pages.
-
-<LI>
-
-Wget has been designed for robustness over slow or unstable network
-connections; if a download fails due to a network problem, it will
-keep retrying until the whole file has been retrieved.  If the server
-supports regetting, it will instruct the server to continue the
-download from where it left off.
-
-<LI>
-
-Wget supports proxy servers, which can lighten the network load, speed
-up retrieval and provide access behind firewalls.  However, if you are
-behind a firewall that requires that you use a socks style gateway, you
-can get the socks library and build Wget with support for socks.  Wget
-also supports passive FTP downloading as an option.
-
-<LI>
-
-Builtin features offer mechanisms to tune which links you wish to follow
-(see section <A HREF="wget_14.html#SEC14">Following Links</A>).
-
-<LI>
-
-The retrieval is conveniently traced with printing dots, each dot
-representing a fixed amount of data received (1KB by default).  These
-representations can be customized to your preferences.
-
-<LI>
-
-Most of the features are fully configurable, either through command line
-options, or via the initialization file <TT>`.wgetrc'</TT> (see section <A HREF="wget_24.html#SEC24">Startup File</A>).  Wget allows you to define <EM>global</EM> startup files
-(<TT>`/usr/local/etc/wgetrc'</TT> by default) for site settings.
-
-<LI>
-
-Finally, GNU Wget is free software.  This means that everyone may use
-it, redistribute it and/or modify it under the terms of the GNU General
-Public License, as published by the Free Software Foundation
-(see section <A HREF="wget_44.html#SEC44">Copying</A>).
-</UL>
-
-<P><HR><P>
-Go to the first, previous, <A HREF="wget_2.html">next</A>, <A HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_10.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_10.html
diff -N manual/wget-1.8.1/html_node/wget_10.html
--- manual/wget-1.8.1/html_node/wget_10.html    19 Oct 2003 23:07:43 -0000      1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,100 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - FTP Options</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_9.html">previous</A>, <A HREF="wget_11.html">next</A>, <A HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC10" HREF="wget_toc.html#TOC10">FTP Options</A></H2>
-
-<DL COMPACT>
-
-<DT><SAMP>`-nr'</SAMP>
-<DD>
-<A NAME="IDX70"></A>
- 
-<DT><SAMP>`--dont-remove-listing'</SAMP>
-<DD>
-Don't remove the temporary <TT>`.listing'</TT> files generated by FTP
-retrievals.  Normally, these files contain the raw directory listings
-received from FTP servers.  Not removing them can be useful for
-debugging purposes, or when you want to be able to easily check on the
-contents of remote server directories (e.g. to verify that a mirror
-you're running is complete).
-
-Note that even though Wget writes to a known filename for this file,
-this is not a security hole in the scenario of a user making
-<TT>`.listing'</TT> a symbolic link to <TT>`/etc/passwd'</TT> or something and
-asking <CODE>root</CODE> to run Wget in his or her directory.  Depending on
-the options used, either Wget will refuse to write to <TT>`.listing'</TT>,
-making the globbing/recursion/time-stamping operation fail, or the
-symbolic link will be deleted and replaced with the actual
-<TT>`.listing'</TT> file, or the listing will be written to a
-<TT>`.listing.<VAR>number</VAR>'</TT> file.
-
-Even though this situation isn't a problem, <CODE>root</CODE> should
-never run Wget in a non-trusted user's directory.  A user could do
-something as simple as linking <TT>`index.html'</TT> to <TT>`/etc/passwd'</TT>
-and asking <CODE>root</CODE> to run Wget with <SAMP>`-N'</SAMP> or <SAMP>`-r'</SAMP> so the file
-will be overwritten.
-
-<A NAME="IDX71"></A>
-<DT><SAMP>`-g on/off'</SAMP>
-<DD>
-<DT><SAMP>`--glob=on/off'</SAMP>
-<DD>
-Turn FTP globbing on or off.  Globbing means you may use the
-shell-like special characters (<EM>wildcards</EM>), like <SAMP>`*'</SAMP>,
-<SAMP>`?'</SAMP>, <SAMP>`['</SAMP> and <SAMP>`]'</SAMP> to retrieve more than one file from the
-same directory at once, like:
-
-
-<PRE>
-wget ftp://gnjilux.srk.fer.hr/*.msg
-</PRE>
-
-By default, globbing will be turned on if the URL contains a
-globbing character.  This option may be used to turn globbing on or off
-permanently.
-
-You may have to quote the URL to protect it from being expanded by
-your shell.  Globbing makes Wget look for a directory listing, which is
-system-specific.  This is why it currently works only with Unix FTP
-servers (and the ones emulating Unix <CODE>ls</CODE> output).
-
-<A NAME="IDX72"></A>
-<DT><SAMP>`--passive-ftp'</SAMP>
-<DD>
-Use the <EM>passive</EM> FTP retrieval scheme, in which the client
-initiates the data connection.  This is sometimes required for FTP
-to work behind firewalls.
-
-<A NAME="IDX73"></A>
-<DT><SAMP>`--retr-symlinks'</SAMP>
-<DD>
-Usually, when retrieving FTP directories recursively and a symbolic
-link is encountered, the linked-to file is not downloaded.  Instead, a
-matching symbolic link is created on the local filesystem.  The
-pointed-to file will not be downloaded unless this recursive retrieval
-would have encountered it separately and downloaded it anyway.
-
-When <SAMP>`--retr-symlinks'</SAMP> is specified, however, symbolic links are
-traversed and the pointed-to files are retrieved.  At this time, this
-option does not cause Wget to traverse symlinks to directories and
-recurse through them, but in the future it should be enhanced to do
-this.
-
-Note that when retrieving a file (not a directory) because it was
-specified on the commandline, rather than because it was recursed to,
-this option has no effect.  Symbolic links are always traversed in this
-case.
-</DL>
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_9.html">previous</A>, <A HREF="wget_11.html">next</A>, <A HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-</BODY>
-</HTML>

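[Editor's note: the deleted FTP-options page above warns that a wildcard URL may need quoting so the shell does not expand it before wget sees it. A minimal POSIX-shell sketch of that advice, using the manual's own example host; nothing is downloaded here, the command line is only printed.]

```shell
#!/bin/sh
# Single quotes keep `*' literal: the shell passes the URL through
# untouched, and wget itself matches the wildcard against the remote
# FTP directory listing, as the deleted text describes.
url='ftp://gnjilux.srk.fer.hr/*.msg'
printf 'wget "%s"\n' "$url"
```

Unquoted, the same pattern would first be matched by the shell against local file names, which is exactly why the manual recommends quoting.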
Index: manual/wget-1.8.1/html_node/wget_11.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_11.html
diff -N manual/wget-1.8.1/html_node/wget_11.html
--- manual/wget-1.8.1/html_node/wget_11.html    19 Oct 2003 23:07:43 -0000      1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,207 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Recursive Retrieval Options</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_10.html">previous</A>, <A HREF="wget_12.html">next</A>, <A HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC11" HREF="wget_toc.html#TOC11">Recursive Retrieval Options</A></H2>
-
-<DL COMPACT>
-
-<DT><SAMP>`-r'</SAMP>
-<DD>
-<DT><SAMP>`--recursive'</SAMP>
-<DD>
-Turn on recursive retrieving.  See section <A HREF="wget_13.html#SEC13">Recursive Retrieval</A>, for more
-details.
-
-<DT><SAMP>`-l <VAR>depth</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--level=<VAR>depth</VAR>'</SAMP>
-<DD>
-Specify recursion maximum depth level <VAR>depth</VAR> (see section <A HREF="wget_13.html#SEC13">Recursive Retrieval</A>).  The default maximum depth is 5.
-
-<A NAME="IDX74"></A>
-<A NAME="IDX75"></A>
-<A NAME="IDX76"></A>
-<DT><SAMP>`--delete-after'</SAMP>
-<DD>
-This option tells Wget to delete every single file it downloads,
-<EM>after</EM> having done so.  It is useful for pre-fetching popular
-pages through a proxy, e.g.:
-
-
-<PRE>
-wget -r -nd --delete-after http://whatever.com/~popular/page/
-</PRE>
-
-The <SAMP>`-r'</SAMP> option is to retrieve recursively, and <SAMP>`-nd'</SAMP> to not
-create directories.  
-
-Note that <SAMP>`--delete-after'</SAMP> deletes files on the local machine.  It
-does not issue the <SAMP>`DELE'</SAMP> command to remote FTP sites, for
-instance.  Also note that when <SAMP>`--delete-after'</SAMP> is specified,
-<SAMP>`--convert-links'</SAMP> is ignored, so <SAMP>`.orig'</SAMP> files are simply not
-created in the first place.
-
-<A NAME="IDX77"></A>
-<A NAME="IDX78"></A>
-<DT><SAMP>`-k'</SAMP>
-<DD>
-<DT><SAMP>`--convert-links'</SAMP>
-<DD>
-After the download is complete, convert the links in the document to
-make them suitable for local viewing.  This affects not only the visible
-hyperlinks, but any part of the document that links to external content,
-such as embedded images, links to style sheets, hyperlinks to non-HTML
-content, etc.
-
-Each link will be changed in one of two ways:
-
-
-<UL>
-<LI>
-
-The links to files that have been downloaded by Wget will be changed to
-refer to the file they point to as a relative link.
-
-Example: if the downloaded file <TT>`/foo/doc.html'</TT> links to
-<TT>`/bar/img.gif'</TT>, also downloaded, then the link in <TT>`doc.html'</TT>
-will be modified to point to <SAMP>`../bar/img.gif'</SAMP>.  This kind of
-transformation works reliably for arbitrary combinations of directories.
-
-<LI>
-
-The links to files that have not been downloaded by Wget will be changed
-to include host name and absolute path of the location they point to.
-
-Example: if the downloaded file <TT>`/foo/doc.html'</TT> links to
-<TT>`/bar/img.gif'</TT> (or to <TT>`../bar/img.gif'</TT>), then the link in
-<TT>`doc.html'</TT> will be modified to point to
-<TT>`http://<VAR>hostname</VAR>/bar/img.gif'</TT>.
-</UL>
-
-Because of this, local browsing works reliably: if a linked file was
-downloaded, the link will refer to its local name; if it was not
-downloaded, the link will refer to its full Internet address rather than
-presenting a broken link.  The fact that the former links are converted
-to relative links ensures that you can move the downloaded hierarchy to
-another directory.
-
-Note that only at the end of the download can Wget know which links have
-been downloaded.  Because of that, the work done by <SAMP>`-k'</SAMP> will be
-performed at the end of all the downloads.
-
-<A NAME="IDX79"></A>
-<DT><SAMP>`-K'</SAMP>
-<DD>
-<DT><SAMP>`--backup-converted'</SAMP>
-<DD>
-When converting a file, back up the original version with a <SAMP>`.orig'</SAMP>
-suffix.  Affects the behavior of <SAMP>`-N'</SAMP> (see section <A HREF="wget_22.html#SEC22">HTTP Time-Stamping Internals</A>).
-
-<DT><SAMP>`-m'</SAMP>
-<DD>
-<DT><SAMP>`--mirror'</SAMP>
-<DD>
-Turn on options suitable for mirroring.  This option turns on recursion
-and time-stamping, sets infinite recursion depth and keeps FTP
-directory listings.  It is currently equivalent to
-<SAMP>`-r -N -l inf -nr'</SAMP>.
-
-<A NAME="IDX80"></A>
-<A NAME="IDX81"></A>
-<DT><SAMP>`-p'</SAMP>
-<DD>
-<DT><SAMP>`--page-requisites'</SAMP>
-<DD>
-This option causes Wget to download all the files that are necessary to
-properly display a given HTML page.  This includes such things as
-inlined images, sounds, and referenced stylesheets.
-
-Ordinarily, when downloading a single HTML page, any requisite documents
-that may be needed to display it properly are not downloaded.  Using
-<SAMP>`-r'</SAMP> together with <SAMP>`-l'</SAMP> can help, but since Wget 
does not
-ordinarily distinguish between external and inlined documents, one is
-generally left with "leaf documents" that are missing their
-requisites.
-
-For instance, say document <TT>`1.html'</TT> contains an <CODE>&#60;IMG&#62;</CODE> tag
-referencing <TT>`1.gif'</TT> and an <CODE>&#60;A&#62;</CODE> tag pointing to external
-document <TT>`2.html'</TT>.  Say that <TT>`2.html'</TT> is similar but that its
-image is <TT>`2.gif'</TT> and it links to <TT>`3.html'</TT>.  Say this
-continues up to some arbitrarily high number.
-
-If one executes the command:
-
-
-<PRE>
-wget -r -l 2 http://<VAR>site</VAR>/1.html
-</PRE>
-
-then <TT>`1.html'</TT>, <TT>`1.gif'</TT>, <TT>`2.html'</TT>, <TT>`2.gif'</TT>, and
-<TT>`3.html'</TT> will be downloaded.  As you can see, <TT>`3.html'</TT> is
-without its requisite <TT>`3.gif'</TT> because Wget is simply counting the
-number of hops (up to 2) away from <TT>`1.html'</TT> in order to determine
-where to stop the recursion.  However, with this command:
-
-
-<PRE>
-wget -r -l 2 -p http://<VAR>site</VAR>/1.html
-</PRE>
-
-all the above files <EM>and</EM> <TT>`3.html'</TT>'s requisite <TT>`3.gif'</TT>
-will be downloaded.  Similarly,
-
-
-<PRE>
-wget -r -l 1 -p http://<VAR>site</VAR>/1.html
-</PRE>
-
-will cause <TT>`1.html'</TT>, <TT>`1.gif'</TT>, <TT>`2.html'</TT>, and <TT>`2.gif'</TT>
-to be downloaded.  One might think that:
-
-
-<PRE>
-wget -r -l 0 -p http://<VAR>site</VAR>/1.html
-</PRE>
-
-would download just <TT>`1.html'</TT> and <TT>`1.gif'</TT>, but unfortunately
-this is not the case, because <SAMP>`-l 0'</SAMP> is equivalent to
-<SAMP>`-l inf'</SAMP>---that is, infinite recursion.  To download a single HTML
-page (or a handful of them, all specified on the commandline or in a
-<SAMP>`-i'</SAMP> URL input file) and its (or their) requisites, simply leave off
-<SAMP>`-r'</SAMP> and <SAMP>`-l'</SAMP>:
-
-
-<PRE>
-wget -p http://<VAR>site</VAR>/1.html
-</PRE>
-
-Note that Wget will behave as if <SAMP>`-r'</SAMP> had been specified, but only
-that single page and its requisites will be downloaded.  Links from that
-page to external documents will not be followed.  Actually, to download
-a single page and all its requisites (even if they exist on separate
-websites), and make sure the lot displays properly locally, this author
-likes to use a few options in addition to <SAMP>`-p'</SAMP>:
-
-
-<PRE>
-wget -E -H -k -K -p http://<VAR>site</VAR>/<VAR>document</VAR>
-</PRE>
-
-To finish off this topic, it's worth knowing that Wget's idea of an
-external document link is any URL specified in an <CODE>&#60;A&#62;</CODE> tag, an
-<CODE>&#60;AREA&#62;</CODE> tag, or a <CODE>&#60;LINK&#62;</CODE> tag other than <CODE>&#60;LINK
-REL="stylesheet"&#62;</CODE>.
-</DL>
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_10.html">previous</A>, <A HREF="wget_12.html">next</A>, <A HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-</BODY>
-</HTML>

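[Editor's note: the deleted page above recommends `wget -E -H -k -K -p' for fetching a single page together with everything needed to display it. The sketch below only assembles and prints that invocation; `site' and `document' are hypothetical stand-ins for the manual's <VAR> placeholders, not real targets.]

```shell
#!/bin/sh
# Flags as documented in the deleted text:
#   -E  save HTML documents under an .html extension
#   -H  span hosts (requisites may live on other servers)
#   -k  convert links for local viewing
#   -K  keep a .orig backup of each converted file
#   -p  download page requisites (images, stylesheets, ...)
site='www.example.org'    # hypothetical host
document='index.html'     # hypothetical page
cmd="wget -E -H -k -K -p http://$site/$document"
printf '%s\n' "$cmd"
```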
Index: manual/wget-1.8.1/html_node/wget_12.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_12.html
diff -N manual/wget-1.8.1/html_node/wget_12.html
--- manual/wget-1.8.1/html_node/wget_12.html    19 Oct 2003 23:07:43 -0000      1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,117 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Recursive Accept/Reject Options</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_11.html">previous</A>, <A HREF="wget_13.html">next</A>, <A HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC12" HREF="wget_toc.html#TOC12">Recursive Accept/Reject Options</A></H2>
-
-<DL COMPACT>
-
-<DT><SAMP>`-A <VAR>acclist</VAR> --accept <VAR>acclist</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`-R <VAR>rejlist</VAR> --reject <VAR>rejlist</VAR>'</SAMP>
-<DD>
-Specify comma-separated lists of file name suffixes or patterns to
-accept or reject (see section <A HREF="wget_16.html#SEC16">Types of Files</A> for more details).
-
-<DT><SAMP>`-D <VAR>domain-list</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--domains=<VAR>domain-list</VAR>'</SAMP>
-<DD>
-Set domains to be followed.  <VAR>domain-list</VAR> is a comma-separated list
-of domains.  Note that it does <EM>not</EM> turn on <SAMP>`-H'</SAMP>.
-
-<DT><SAMP>`--exclude-domains <VAR>domain-list</VAR>'</SAMP>
-<DD>
-Specify the domains that are <EM>not</EM> to be followed.
-(see section <A HREF="wget_15.html#SEC15">Spanning Hosts</A>).
-
-<A NAME="IDX82"></A>
-<DT><SAMP>`--follow-ftp'</SAMP>
-<DD>
-Follow FTP links from HTML documents.  Without this option,
-Wget will ignore all the FTP links.
-
-<A NAME="IDX83"></A>
-<DT><SAMP>`--follow-tags=<VAR>list</VAR>'</SAMP>
-<DD>
-Wget has an internal table of HTML tag / attribute pairs that it
-considers when looking for linked documents during a recursive
-retrieval.  If a user wants only a subset of those tags to be
-considered, however, he or she should specify such tags in a
-comma-separated <VAR>list</VAR> with this option.
-
-<DT><SAMP>`-G <VAR>list</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--ignore-tags=<VAR>list</VAR>'</SAMP>
-<DD>
-This is the opposite of the <SAMP>`--follow-tags'</SAMP> option.  To skip
-certain HTML tags when recursively looking for documents to download,
-specify them in a comma-separated <VAR>list</VAR>.  
-
-In the past, the <SAMP>`-G'</SAMP> option was the best bet for downloading a
-single page and its requisites, using a commandline like:
-
-
-<PRE>
-wget -Ga,area -H -k -K -r http://<VAR>site</VAR>/<VAR>document</VAR>
-</PRE>
-
-However, the author of this option came across a page with tags like
-<CODE>&#60;LINK REL="home" HREF="/"&#62;</CODE> and came to the realization 
that
-<SAMP>`-G'</SAMP> was not enough.  One can't just tell Wget to ignore
-<CODE>&#60;LINK&#62;</CODE>, because then stylesheets will not be downloaded.  
Now the
-best bet for downloading a single page and its requisites is the
-dedicated <SAMP>`--page-requisites'</SAMP> option.
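The modern counterpart of the earlier command line (with the same hypothetical site and document) is therefore simply:

```shell
# Hypothetical URL, as in the example above.  -p (--page-requisites)
# fetches everything needed to display the page; -k (--convert-links)
# rewrites the links for local viewing.
wget -p -k http://site/document
```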
-
-<DT><SAMP>`-H'</SAMP>
-<DD>
-<DT><SAMP>`--span-hosts'</SAMP>
-<DD>
-Enable spanning across hosts when doing recursive retrieving
-(see section <A HREF="wget_15.html#SEC15">Spanning Hosts</A>).
-
-<DT><SAMP>`-L'</SAMP>
-<DD>
-<DT><SAMP>`--relative'</SAMP>
-<DD>
-Follow relative links only.  Useful for retrieving a specific home page
-without any distractions, not even those from the same hosts
-(see section <A HREF="wget_18.html#SEC18">Relative Links</A>).
-
-<DT><SAMP>`-I <VAR>list</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--include-directories=<VAR>list</VAR>'</SAMP>
-<DD>
-Specify a comma-separated list of directories you wish to follow when
-downloading (see section <A HREF="wget_17.html#SEC17">Directory-Based 
Limits</A> for more details.)  Elements
-of <VAR>list</VAR> may contain wildcards.
-
-<DT><SAMP>`-X <VAR>list</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--exclude-directories=<VAR>list</VAR>'</SAMP>
-<DD>
-Specify a comma-separated list of directories you wish to exclude from
-download (see section <A HREF="wget_17.html#SEC17">Directory-Based Limits</A> 
for more details.)  Elements of
-<VAR>list</VAR> may contain wildcards.
-
-<DT><SAMP>`-np'</SAMP>
-<DD>
-<DT><SAMP>`--no-parent'</SAMP>
-<DD>
-Do not ever ascend to the parent directory when retrieving recursively.
-This is a useful option, since it guarantees that only the files
-<EM>below</EM> a certain hierarchy will be downloaded.
-See section <A HREF="wget_17.html#SEC17">Directory-Based Limits</A>, for more 
details.
-</DL>
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_11.html">previous</A>, <A HREF="wget_13.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_13.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_13.html
diff -N manual/wget-1.8.1/html_node/wget_13.html
--- manual/wget-1.8.1/html_node/wget_13.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,104 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Recursive Retrieval</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_12.html">previous</A>, <A HREF="wget_14.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H1><A NAME="SEC13" HREF="wget_toc.html#TOC13">Recursive Retrieval</A></H1>
-<P>
-<A NAME="IDX84"></A>
-<A NAME="IDX85"></A>
-<A NAME="IDX86"></A>
-
-
-<P>
-GNU Wget is capable of traversing parts of the Web (or a single
-HTTP or FTP server), following links and directory structure.
-We refer to this as to <EM>recursive retrieving</EM>, or <EM>recursion</EM>.
-
-
-<P>
-With HTTP URLs, Wget retrieves and parses the HTML document at
-the given URL, then retrieves the files that document
-refers to, through markup such as <CODE>href</CODE> or
-<CODE>src</CODE>.  If a freshly downloaded file is also of type
-<CODE>text/html</CODE>, it will be parsed and followed further.
-
-
-<P>
-Recursive retrieval of HTTP and HTML content is
-<EM>breadth-first</EM>.  This means that Wget first downloads the requested
-HTML document, then the documents linked from that document, then the
-documents linked by them, and so on.  In other words, Wget first
-downloads the documents at depth 1, then those at depth 2, and so on
-until the specified maximum depth.
-
-
-<P>
-The maximum <EM>depth</EM> to which the retrieval may descend is specified
-with the <SAMP>`-l'</SAMP> option.  The default maximum depth is five layers.
-
-
-<P>
-When retrieving an FTP URL recursively, Wget will retrieve all
-the data from the given directory tree (including the subdirectories up
-to the specified depth) on the remote server, creating its mirror image
-locally.  FTP retrieval is also limited by the <CODE>depth</CODE>
-parameter.  Unlike HTTP recursion, FTP recursion is performed
-depth-first.
-
-
-<P>
-By default, Wget will create a local directory tree, corresponding to
-the one found on the remote server.
-
-
-<P>
-Recursive retrieval has a number of applications, the most
-important of which is mirroring.  It is also useful for WWW
-presentations, and any other situations where slow network
-connections should be bypassed by storing the files locally.
-
-
-<P>
-You should be warned that recursive downloads can overload the remote
-servers.  Because of that, many administrators frown upon them and may
-ban access from your site if they detect very fast downloads of large
-amounts of content.  When downloading from Internet servers, consider
-using the <SAMP>`-w'</SAMP> option to introduce a delay between accesses to the
-server.  The download will take a while longer, but the server
-administrator will not be alarmed by your rudeness.
-
-
-<P>
-Of course, recursive download may cause problems on your machine.  If
-left to run unchecked, it can easily fill up the disk.  If downloading
-from a local network, it can also take up bandwidth, as well as
-consume memory and CPU.
-
-
-<P>
-Try to specify the criteria that match the kind of download you are
-trying to achieve.  If you want to download only one page, use
-<SAMP>`--page-requisites'</SAMP> without any additional recursion.  If you want
-to download things under one directory, use <SAMP>`-np'</SAMP> to avoid
-downloading things from other directories.  If you want to download all
-the files from one directory, use <SAMP>`-l 1'</SAMP> to make sure the 
recursion
-depth never exceeds one.  See section <A HREF="wget_14.html#SEC14">Following 
Links</A>, for more information
-about this.
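As a rough summary (with hypothetical URLs), the three cases above correspond to command lines like:

```shell
# Hypothetical URLs; one command line per scenario described above.
wget --page-requisites http://host/page.html   # a single page, with its requisites
wget -r -np http://host/dir/                   # only things below one directory
wget -r -l 1 http://host/dir/                  # one directory, recursion depth 1
```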
-
-
-<P>
-Recursive retrieval should be used with care.  Don't say you were not
-warned.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_12.html">previous</A>, <A HREF="wget_14.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_14.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_14.html
diff -N manual/wget-1.8.1/html_node/wget_14.html
--- manual/wget-1.8.1/html_node/wget_14.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,38 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Following Links</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_13.html">previous</A>, <A HREF="wget_15.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H1><A NAME="SEC14" HREF="wget_toc.html#TOC14">Following Links</A></H1>
-<P>
-<A NAME="IDX87"></A>
-<A NAME="IDX88"></A>
-
-
-<P>
-When retrieving recursively, one does not wish to retrieve loads of
-unnecessary data.  Most of the time users know exactly what
-they want to download, and want Wget to follow only specific links.
-
-
-<P>
-For example, if you wish to download the music archive from
-<SAMP>`fly.srk.fer.hr'</SAMP>, you will not want to download all the home pages
-that happen to be referenced by an obscure part of the archive.
-
-
-<P>
-Wget possesses several mechanisms that allow you to fine-tune which
-links it will follow.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_13.html">previous</A>, <A HREF="wget_15.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_15.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_15.html
diff -N manual/wget-1.8.1/html_node/wget_15.html
--- manual/wget-1.8.1/html_node/wget_15.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,80 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Spanning Hosts</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_14.html">previous</A>, <A HREF="wget_16.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC15" HREF="wget_toc.html#TOC15">Spanning Hosts</A></H2>
-<P>
-<A NAME="IDX89"></A>
-<A NAME="IDX90"></A>
-
-
-<P>
-Wget's recursive retrieval normally refuses to visit hosts other
-than the one you specified on the command line.  This is a reasonable
-default; without it, every retrieval would have the potential to turn
-your Wget into a small version of Google.
-
-
-<P>
-However, visiting different hosts, or <EM>host spanning,</EM> is sometimes
-a useful option.  Maybe the images are served from a different server.
-Maybe you're mirroring a site that consists of pages interlinked between
-three servers.  Maybe the server has two equivalent names, and the HTML
-pages refer to both interchangeably.
-
-
-<DL COMPACT>
-
-<DT>Span to any host---<SAMP>`-H'</SAMP>
-<DD>
-The <SAMP>`-H'</SAMP> option turns on host spanning, thus allowing Wget's
-recursive run to visit any host referenced by a link.  Unless sufficient
-recursion-limiting criteria are applied, these foreign hosts will
-typically link to yet more hosts, and so on until Wget ends up sucking
-up much more data than you have intended.
-
-<DT>Limit spanning to certain domains---<SAMP>`-D'</SAMP>
-<DD>
-The <SAMP>`-D'</SAMP> option allows you to specify the domains that will be
-followed, thus limiting the recursion only to the hosts that belong to
-these domains.  Obviously, this makes sense only in conjunction with
-<SAMP>`-H'</SAMP>.  A typical example would be downloading the contents of
-<SAMP>`www.server.com'</SAMP>, but allowing downloads from
-<SAMP>`images.server.com'</SAMP>, etc.:
-
-
-<PRE>
-wget -rH -Dserver.com http://www.server.com/
-</PRE>
-
-You can specify more than one address by separating them with a comma,
-e.g. <SAMP>`-Ddomain1.com,domain2.com'</SAMP>.
-
-<DT>Keep download off certain domains---<SAMP>`--exclude-domains'</SAMP>
-<DD>
-If there are domains you want to exclude specifically, you can do it
-with <SAMP>`--exclude-domains'</SAMP>, which accepts the same type of arguments
-of <SAMP>`-D'</SAMP>, but will <EM>exclude</EM> all the listed domains.  For
-example, if you want to download all the hosts from <SAMP>`foo.edu'</SAMP>
-domain, with the exception of <SAMP>`sunsite.foo.edu'</SAMP>, you can do it 
like
-this:
-
-
-<PRE>
-wget -rH -Dfoo.edu --exclude-domains sunsite.foo.edu \
-    http://www.foo.edu/
-</PRE>
-
-</DL>
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_14.html">previous</A>, <A HREF="wget_16.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_16.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_16.html
diff -N manual/wget-1.8.1/html_node/wget_16.html
--- manual/wget-1.8.1/html_node/wget_16.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,96 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Types of Files</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_15.html">previous</A>, <A HREF="wget_17.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC16" HREF="wget_toc.html#TOC16">Types of Files</A></H2>
-<P>
-<A NAME="IDX91"></A>
-
-
-<P>
-When downloading material from the web, you will often want to restrict
-the retrieval to only certain file types.  For example, if you are
-interested in downloading GIFs, you will not be overjoyed to get
-loads of PostScript documents, and vice versa.
-
-
-<P>
-Wget offers two options to deal with this problem.  Each option
-description lists a short name, a long name, and the equivalent command
-in <TT>`.wgetrc'</TT>.
-
-
-<P>
-<A NAME="IDX92"></A>
-<A NAME="IDX93"></A>
-<A NAME="IDX94"></A>
-<A NAME="IDX95"></A>
-<DL COMPACT>
-
-<DT><SAMP>`-A <VAR>acclist</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--accept <VAR>acclist</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`accept = <VAR>acclist</VAR>'</SAMP>
-<DD>
-The argument to the <SAMP>`--accept'</SAMP> option is a list of file suffixes or
-patterns that Wget will download during recursive retrieval.  A suffix
-is the ending part of a file, and consists of "normal" letters,
-e.g. <SAMP>`gif'</SAMP> or <SAMP>`.jpg'</SAMP>.  A matching pattern contains 
shell-like
-wildcards, e.g. <SAMP>`books*'</SAMP> or <SAMP>`zelazny*196[0-9]*'</SAMP>.
-
-So, specifying <SAMP>`wget -A gif,jpg'</SAMP> will make Wget download only the
-files ending with <SAMP>`gif'</SAMP> or <SAMP>`jpg'</SAMP>, i.e. GIFs and
-JPEGs.  On the other hand, <SAMP>`wget -A "zelazny*196[0-9]*"'</SAMP> will
-download only files beginning with <SAMP>`zelazny'</SAMP> and containing 
numbers
-from 1960 to 1969 anywhere within.  Look up the manual of your shell for
-a description of how pattern matching works.
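As a rough illustration (plain shell, not Wget itself), the kind of matching these patterns perform can be tried out with a <CODE>case</CODE> statement; the file names below are made up:

```shell
#!/bin/sh
# Mimics, in plain shell, how a shell-style pattern such as
# `zelazny*196[0-9]*' classifies file names (illustration only,
# not Wget code).
classify() {
  case "$1" in
    zelazny*196[0-9]*) echo "accept" ;;
    *)                 echo "reject" ;;
  esac
}

classify "zelazny_lord_of_light_1967.txt"   # accept: starts with zelazny, contains 1967
classify "zelazny_1975_interview.txt"       # reject: no number from 1960 to 1969
```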
-
-Of course, any number of suffixes and patterns can be combined into a
-comma-separated list, and given as an argument to <SAMP>`-A'</SAMP>.
-
-<A NAME="IDX96"></A>
-<A NAME="IDX97"></A>
-<A NAME="IDX98"></A>
-<A NAME="IDX99"></A>
-<DT><SAMP>`-R <VAR>rejlist</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--reject <VAR>rejlist</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`reject = <VAR>rejlist</VAR>'</SAMP>
-<DD>
-The <SAMP>`--reject'</SAMP> option works the same way as 
<SAMP>`--accept'</SAMP>, only
-its logic is the reverse; Wget will download all files <EM>except</EM> the
-ones matching the suffixes (or patterns) in the list.
-
-So, if you want to download a whole page except for the cumbersome
-MPEGs and .AU files, you can use <SAMP>`wget -R mpg,mpeg,au'</SAMP>.
-Analogously, to download all files except the ones beginning with
-<SAMP>`bjork'</SAMP>, use <SAMP>`wget -R "bjork*"'</SAMP>.  The quotes are to 
prevent
-expansion by the shell.
-</DL>
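The effect of those quotes is easy to see with plain <CODE>echo</CODE> in a scratch directory; this is shell behaviour, nothing Wget-specific:

```shell
#!/bin/sh
# Shows why the pattern must be quoted: the shell expands an unquoted
# glob before the command (here echo, standing in for wget) sees it.
demo_dir=$(mktemp -d)
cd "$demo_dir"
touch bjork1.mp3 bjork2.mp3

echo bjork*      # expanded by the shell: bjork1.mp3 bjork2.mp3
echo "bjork*"    # quoted: the literal pattern is passed through
```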
-
-<P>
-The <SAMP>`-A'</SAMP> and <SAMP>`-R'</SAMP> options may be combined to achieve 
even
-better fine-tuning of which files to retrieve.  E.g. <SAMP>`wget -A
-"*zelazny*" -R .ps'</SAMP> will download all the files having 
<SAMP>`zelazny'</SAMP> as
-a part of their name, but <EM>not</EM> the PostScript files.
-
-
-<P>
-Note that these two options do not affect the downloading of HTML
-files; Wget must load all the HTMLs to know where to go at
-all--recursive retrieval would make no sense otherwise.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_15.html">previous</A>, <A HREF="wget_17.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_17.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_17.html
diff -N manual/wget-1.8.1/html_node/wget_17.html
--- manual/wget-1.8.1/html_node/wget_17.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,109 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Directory-Based Limits</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_16.html">previous</A>, <A HREF="wget_18.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC17" HREF="wget_toc.html#TOC17">Directory-Based Limits</A></H2>
-<P>
-<A NAME="IDX100"></A>
-<A NAME="IDX101"></A>
-
-
-<P>
-Regardless of other link-following facilities, it is often useful to
-restrict which files are retrieved based on the directories
-those files are placed in.  There can be many reasons for this--the
-home pages may be organized in a reasonable directory structure; or some
-directories may contain useless information, e.g. <TT>`/cgi-bin'</TT> or
-<TT>`/dev'</TT> directories.
-
-
-<P>
-Wget offers three different options to deal with this requirement.  Each
-option description lists a short name, a long name, and the equivalent
-command in <TT>`.wgetrc'</TT>.
-
-
-<P>
-<A NAME="IDX102"></A>
-<A NAME="IDX103"></A>
-<A NAME="IDX104"></A>
-<DL COMPACT>
-
-<DT><SAMP>`-I <VAR>list</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--include <VAR>list</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`include_directories = <VAR>list</VAR>'</SAMP>
-<DD>
-The <SAMP>`-I'</SAMP> option accepts a comma-separated list of directories included
-in the retrieval.  Any other directories will simply be ignored.  The
-directories are absolute paths.
-
-So, if you wish to download from <SAMP>`http://host/people/bozo/'</SAMP>
-following only links to bozo's colleagues in the <TT>`/people'</TT>
-directory and the bogus scripts in <TT>`/cgi-bin'</TT>, you can specify:
-
-
-<PRE>
-wget -I /people,/cgi-bin http://host/people/bozo/
-</PRE>
-
-<A NAME="IDX105"></A>
-<A NAME="IDX106"></A>
-<A NAME="IDX107"></A>
-<DT><SAMP>`-X <VAR>list</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--exclude <VAR>list</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`exclude_directories = <VAR>list</VAR>'</SAMP>
-<DD>
-The <SAMP>`-X'</SAMP> option is exactly the reverse of <SAMP>`-I'</SAMP>---this is 
a list of
-directories <EM>excluded</EM> from the download.  E.g. if you do not want
-Wget to download things from <TT>`/cgi-bin'</TT> directory, specify <SAMP>`-X
-/cgi-bin'</SAMP> on the command line.
-
-As with <SAMP>`-A'</SAMP>/<SAMP>`-R'</SAMP>, these two options can be combined
-to fine-tune which subdirectories are downloaded.  E.g. if you
-want to load all the files from <TT>`/pub'</TT> hierarchy except for
-<TT>`/pub/worthless'</TT>, specify <SAMP>`-I/pub -X/pub/worthless'</SAMP>.
-
-<A NAME="IDX108"></A>
-<DT><SAMP>`-np'</SAMP>
-<DD>
-<DT><SAMP>`--no-parent'</SAMP>
-<DD>
-<DT><SAMP>`no_parent = on'</SAMP>
-<DD>
-The simplest, and often very useful, way of limiting directories is
-disallowing retrieval of the links that refer to the hierarchy
-<EM>above</EM> the beginning directory, i.e. disallowing ascent to the
-parent directory/directories.
-
-The <SAMP>`--no-parent'</SAMP> option (short <SAMP>`-np'</SAMP>) is useful in 
this case.
-Using it guarantees that you will never leave the existing hierarchy.
-Suppose you invoke Wget with:
-
-
-<PRE>
-wget -r --no-parent http://somehost/~luzer/my-archive/
-</PRE>
-
-You may rest assured that none of the references to
-<TT>`/~his-girls-homepage/'</TT> or <TT>`/~luzer/all-my-mpegs/'</TT> will be
-followed.  Only the archive you are interested in will be downloaded.
-Essentially, <SAMP>`--no-parent'</SAMP> is similar to
-<SAMP>`-I/~luzer/my-archive'</SAMP>, only it handles redirections in a more
-intelligent fashion.
-</DL>
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_16.html">previous</A>, <A HREF="wget_18.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_18.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_18.html
diff -N manual/wget-1.8.1/html_node/wget_18.html
--- manual/wget-1.8.1/html_node/wget_18.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,55 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Relative Links</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_17.html">previous</A>, <A HREF="wget_19.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC18" HREF="wget_toc.html#TOC18">Relative Links</A></H2>
-<P>
-<A NAME="IDX109"></A>
-
-
-<P>
-When <SAMP>`-L'</SAMP> is turned on, only the relative links are ever followed.
-Relative links are here defined as those that do not refer to the web
-server root.  For example, these links are relative:
-
-
-
-<PRE>
-&#60;a href="foo.gif"&#62;
-&#60;a href="foo/bar.gif"&#62;
-&#60;a href="../foo/bar.gif"&#62;
-</PRE>
-
-<P>
-These links are not relative:
-
-
-
-<PRE>
-&#60;a href="/foo.gif"&#62;
-&#60;a href="/foo/bar.gif"&#62;
-&#60;a href="http://www.server.com/foo/bar.gif"&#62;
-</PRE>
-
-<P>
-Using this option guarantees that recursive retrieval will not span
-hosts, even without <SAMP>`-H'</SAMP>.  In simple cases it also allows 
downloads
-to "just work" without having to convert links.
-
-
-<P>
-This option is probably not very useful and might be removed in a future
-release.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_17.html">previous</A>, <A HREF="wget_19.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_19.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_19.html
diff -N manual/wget-1.8.1/html_node/wget_19.html
--- manual/wget-1.8.1/html_node/wget_19.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,42 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - FTP Links</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_18.html">previous</A>, <A HREF="wget_20.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC19" HREF="wget_toc.html#TOC19">Following FTP Links</A></H2>
-<P>
-<A NAME="IDX110"></A>
-
-
-<P>
-The rules for FTP are somewhat specific, as they need
-to be.  FTP links in HTML documents are often included
-for purposes of reference, and it is often inconvenient to download them
-by default.
-
-
-<P>
-To have FTP links followed from HTML documents, you need to
-specify the <SAMP>`--follow-ftp'</SAMP> option.  Having done that, FTP
-links will span hosts regardless of <SAMP>`-H'</SAMP> setting.  This is 
logical,
-as FTP links rarely point to the same host where the HTTP
-server resides.  For similar reasons, the <SAMP>`-L'</SAMP> option has no
-effect on such downloads.  On the other hand, domain acceptance
-(<SAMP>`-D'</SAMP>) and suffix rules (<SAMP>`-A'</SAMP> and <SAMP>`-R'</SAMP>) 
apply normally.
-
-
-<P>
-Also note that links to FTP directories that are followed will not
-themselves be retrieved recursively.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_18.html">previous</A>, <A HREF="wget_20.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_2.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_2.html
diff -N manual/wget-1.8.1/html_node/wget_2.html
--- manual/wget-1.8.1/html_node/wget_2.html     19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,44 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Invoking</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_1.html">previous</A>, 
<A HREF="wget_3.html">next</A>, <A HREF="wget_47.html">last</A> section, <A 
HREF="wget_toc.html">table of contents</A>.
-<P><HR><P>
-
-
-<H1><A NAME="SEC2" HREF="wget_toc.html#TOC2">Invoking</A></H1>
-<P>
-<A NAME="IDX3"></A>
-<A NAME="IDX4"></A>
-<A NAME="IDX5"></A>
-<A NAME="IDX6"></A>
-
-
-<P>
-By default, Wget is very simple to invoke.  The basic syntax is:
-
-
-
-<PRE>
-wget [<VAR>option</VAR>]... [<VAR>URL</VAR>]...
-</PRE>
-
-<P>
-Wget will simply download all the URLs specified on the command
-line.  <VAR>URL</VAR> is a <EM>Uniform Resource Locator</EM>, as defined below.
-
-
-<P>
-However, you may wish to change some of the default parameters of
-Wget.  You can do it two ways: permanently, adding the appropriate
-command to <TT>`.wgetrc'</TT> (see section <A 
HREF="wget_24.html#SEC24">Startup File</A>), or specifying it on
-the command line.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_1.html">previous</A>, 
<A HREF="wget_3.html">next</A>, <A HREF="wget_47.html">last</A> section, <A 
HREF="wget_toc.html">table of contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_20.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_20.html
diff -N manual/wget-1.8.1/html_node/wget_20.html
--- manual/wget-1.8.1/html_node/wget_20.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,77 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Time-Stamping</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_19.html">previous</A>, <A HREF="wget_21.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H1><A NAME="SEC20" HREF="wget_toc.html#TOC20">Time-Stamping</A></H1>
-<P>
-<A NAME="IDX111"></A>
-<A NAME="IDX112"></A>
-<A NAME="IDX113"></A>
-<A NAME="IDX114"></A>
-
-
-<P>
-One of the most important aspects of mirroring information from the
-Internet is updating your archives.
-
-
-<P>
-Downloading the whole archive again and again, just to replace a few
-changed files, is expensive in terms of wasted bandwidth, money,
-and the time needed to do the update.  This is why all the mirroring tools
-offer the option of incremental updating.
-
-
-<P>
-Such an updating mechanism means that the remote server is scanned in
-search of <EM>new</EM> files.  Only those new files will be downloaded in
-the place of the old ones.
-
-
-<P>
-A file is considered new if one of these two conditions is met:
-
-
-
-<OL>
-<LI>
-
-A file of that name does not already exist locally.
-
-<LI>
-
-A file of that name does exist, but the remote file was modified more
-recently than the local file.
-</OL>
-
-<P>
-To implement this, the program needs to be aware of the time of last
-modification of both local and remote files.  We call this information the
-<EM>time-stamp</EM> of a file.
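The "newer than" comparison itself can be sketched with the shell's <CODE>-nt</CODE> file test on two scratch files; this illustrates the idea, not Wget's actual implementation:

```shell
#!/bin/sh
# Two scratch files standing in for the local copy and the remote copy;
# -nt compares modification times, as Wget's -N logic does conceptually.
work=$(mktemp -d)
touch -t 202001010000 "$work/local.html"     # older local copy
touch -t 202106010000 "$work/remote.html"    # more recently modified "remote" copy

if [ "$work/remote.html" -nt "$work/local.html" ]; then
  echo "remote is newer: would download"
else
  echo "local is up to date: would skip"
fi
```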
-
-
-<P>
-Time-stamping in GNU Wget is turned on using the <SAMP>`--timestamping'</SAMP>
-(<SAMP>`-N'</SAMP>) option, or through <CODE>timestamping = on</CODE> 
directive in
-<TT>`.wgetrc'</TT>.  With this option, for each file it intends to download,
-Wget will check whether a local file of the same name exists.  If it
-does, and the remote file is older, Wget will not download it.
-
-
-<P>
-If the local file does not exist, or the sizes of the files do not
-match, Wget will download the remote file no matter what the time-stamps
-say.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_19.html">previous</A>, <A HREF="wget_21.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_21.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_21.html
diff -N manual/wget-1.8.1/html_node/wget_21.html
--- manual/wget-1.8.1/html_node/wget_21.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,94 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Time-Stamping Usage</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_20.html">previous</A>, <A HREF="wget_22.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC21" HREF="wget_toc.html#TOC21">Time-Stamping Usage</A></H2>
-<P>
-<A NAME="IDX115"></A>
-<A NAME="IDX116"></A>
-
-
-<P>
-The usage of time-stamping is simple.  Say you would like to download a
-file so that it keeps its date of modification.
-
-
-
-<PRE>
-wget -S http://www.gnu.ai.mit.edu/
-</PRE>
-
-<P>
-A simple <CODE>ls -l</CODE> shows that the time stamp on the local file matches
-the <CODE>Last-Modified</CODE> header, as returned by the server.
-As you can see, the time-stamping info is preserved locally, even
-without <SAMP>`-N'</SAMP> (at least for HTTP).
-
-
-<P>
-Several days later, you would like Wget to check if the remote file has
-changed, and download it if it has.
-
-
-
-<PRE>
-wget -N http://www.gnu.ai.mit.edu/
-</PRE>
-
-<P>
-Wget will ask the server for the last-modified date.  If the local file
-has the same timestamp as the server, or a newer one, the remote file
-will not be re-fetched.  However, if the remote file is more recent,
-Wget will proceed to fetch it.
-
-
-<P>
-The same goes for FTP.  For example:
-
-
-
-<PRE>
-wget "ftp://ftp.ifi.uio.no/pub/emacs/gnus/*"
-</PRE>
-
-<P>
-(The quotes around that URL are to prevent the shell from trying to
-interpret the <SAMP>`*'</SAMP>.)
-
-
-<P>
-After download, a local directory listing will show that the timestamps
-match those on the remote server.  Reissuing the command with <SAMP>`-N'</SAMP>
-will make Wget re-fetch <EM>only</EM> the files that have been modified
-since the last download.
-
-
-<P>
-If you wished to mirror the GNU archive every week, you would run a
-command like the following:
-
-
-
-<PRE>
-wget --timestamping -r ftp://ftp.gnu.org/pub/gnu/
-</PRE>
-
-<P>
-Note that time-stamping will only work for files for which the server
-gives a timestamp.  For HTTP, this depends on getting a
-<CODE>Last-Modified</CODE> header.  For FTP, this depends on getting a
-directory listing with dates in a format that Wget can parse
-(see section <A HREF="wget_23.html#SEC23">FTP Time-Stamping Internals</A>).
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_20.html">previous</A>, <A HREF="wget_22.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_22.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_22.html
diff -N manual/wget-1.8.1/html_node/wget_22.html
--- manual/wget-1.8.1/html_node/wget_22.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,55 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - HTTP Time-Stamping Internals</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_21.html">previous</A>, <A HREF="wget_23.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC22" HREF="wget_toc.html#TOC22">HTTP Time-Stamping 
Internals</A></H2>
-<P>
-<A NAME="IDX117"></A>
-
-
-<P>
-Time-stamping in HTTP is implemented by checking the
-<CODE>Last-Modified</CODE> header.  If you wish to retrieve the file
-<TT>`foo.html'</TT> through HTTP, Wget will check whether
-<TT>`foo.html'</TT> exists locally.  If it doesn't, <TT>`foo.html'</TT> will be
-retrieved unconditionally.
-
-
-<P>
-If the file does exist locally, Wget will first check its local
-time-stamp (similar to the way <CODE>ls -l</CODE> checks it), and then send a
-<CODE>HEAD</CODE> request to the remote server, demanding the information on
-the remote file.
-
-
-<P>
-The <CODE>Last-Modified</CODE> header is examined to find which file was
-modified more recently (which makes it "newer").  If the remote file
-is newer, it will be downloaded; if it is older, Wget will give
-up.<A NAME="DOCF2" HREF="wget_foot.html#FOOT2">(2)</A>
-
-
-<P>
-When <SAMP>`--backup-converted'</SAMP> (<SAMP>`-K'</SAMP>) is specified in 
conjunction
-with <SAMP>`-N'</SAMP>, server file <SAMP>`<VAR>X</VAR>'</SAMP> is compared to 
local file
-<SAMP>`<VAR>X</VAR>.orig'</SAMP>, if extant, rather than being compared to 
local file
-<SAMP>`<VAR>X</VAR>'</SAMP>, which will always differ if it's been converted by
-<SAMP>`--convert-links'</SAMP> (<SAMP>`-k'</SAMP>).
-
-
-<P>
-Arguably, HTTP time-stamping should be implemented using the
-<CODE>If-Modified-Since</CODE> request.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_21.html">previous</A>, <A HREF="wget_23.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>
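The HEAD-based decision above can be sketched in shell, with the `Last-Modified` value hard-coded rather than taken from a real server response; the header value and file name are hypothetical, and GNU `date` and `stat` are assumed.

```shell
# Hypothetical header value, as a HEAD request would return it.
last_modified='Thu, 17 Jan 2002 00:00:00 GMT'
remote_epoch=$(date -d "$last_modified" +%s)

# Hypothetical local file, deliberately older than the header date.
touch -d '2002-01-16 00:00 GMT' foo.html
local_epoch=$(stat -c %Y foo.html)

# Remote file newer: download it; otherwise give up.
if [ "$remote_epoch" -gt "$local_epoch" ]; then
  echo "download"
else
  echo "give up"
fi
```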

Index: manual/wget-1.8.1/html_node/wget_23.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_23.html
diff -N manual/wget-1.8.1/html_node/wget_23.html
--- manual/wget-1.8.1/html_node/wget_23.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,53 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - FTP Time-Stamping Internals</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_22.html">previous</A>, <A HREF="wget_24.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC23" HREF="wget_toc.html#TOC23">FTP Time-Stamping 
Internals</A></H2>
-<P>
-<A NAME="IDX118"></A>
-
-
-<P>
-In theory, FTP time-stamping works much the same as HTTP, only
-FTP has no headers--time-stamps must be ferreted out of directory
-listings.
-
-
-<P>
-If an FTP download is recursive or uses globbing, Wget will use the
-FTP <CODE>LIST</CODE> command to get a file listing for the directory
-containing the desired file(s).  It will try to analyze the listing,
-treating it like Unix <CODE>ls -l</CODE> output, extracting the time-stamps.
-The rest is exactly the same as for HTTP.  Note that when
-retrieving individual files from an FTP server without using
-globbing or recursion, listing files will not be downloaded (and thus
-files will not be time-stamped) unless <SAMP>`-N'</SAMP> is specified.
-
-
-<P>
-The assumption that every directory listing is a Unix-style listing may
-sound extremely constraining, but in practice it is not, as many
-non-Unix FTP servers use the Unix-style listing format because most
-(all?) of the clients understand it.  Bear in mind that RFC959
-defines no standard way to get a file list, let alone the time-stamps.
-We can only hope that a future standard will define this.
-
-
-<P>
-Another non-standard solution is the <CODE>MDTM</CODE> command,
-supported by some FTP servers (including the popular
-<CODE>wu-ftpd</CODE>), which returns the exact time of the specified file.
-Wget may support this command in the future.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_22.html">previous</A>, <A HREF="wget_24.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>
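Ferreting a timestamp out of a Unix-style <CODE>LIST</CODE> line can be sketched with the shell's word splitting; the line below is a made-up example, not real server output.

```shell
# A made-up LIST line in Unix `ls -l` format.
line='-rw-r--r--   1 ftp      ftp         10240 Jan 17  2002 gnus.tar.gz'

# Word-split the line; fields 6-8 carry the date, field 9 the name.
set -- $line
echo "name=$9 date=$6 $7 $8"
```

Real clients must also cope with listings that are not in this format, which is exactly why the manual calls the lack of a standard listing format a problem.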

Index: manual/wget-1.8.1/html_node/wget_24.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_24.html
diff -N manual/wget-1.8.1/html_node/wget_24.html
--- manual/wget-1.8.1/html_node/wget_24.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,43 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Startup File</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_23.html">previous</A>, <A HREF="wget_25.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H1><A NAME="SEC24" HREF="wget_toc.html#TOC24">Startup File</A></H1>
-<P>
-<A NAME="IDX119"></A>
-<A NAME="IDX120"></A>
-<A NAME="IDX121"></A>
-<A NAME="IDX122"></A>
-<A NAME="IDX123"></A>
-
-
-<P>
-Once you know how to change default settings of Wget through command
-line arguments, you may wish to make some of those settings permanent.
-You can do that in a convenient way by creating the Wget startup
-file---<TT>`.wgetrc'</TT>.
-
-
-<P>
-While <TT>`.wgetrc'</TT> is the "main" initialization file, it is
-convenient to have a special facility for storing passwords.  Thus Wget
-reads and interprets the contents of <TT>`$HOME/.netrc'</TT>, if it finds
-it.  The <TT>`.netrc'</TT> format is described in your system manuals.
-
-
-<P>
-Wget reads <TT>`.wgetrc'</TT> upon startup, recognizing a limited set of
-commands.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_23.html">previous</A>, <A HREF="wget_25.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_25.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_25.html
diff -N manual/wget-1.8.1/html_node/wget_25.html
--- manual/wget-1.8.1/html_node/wget_25.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,45 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Wgetrc Location</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_24.html">previous</A>, <A HREF="wget_26.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC25" HREF="wget_toc.html#TOC25">Wgetrc Location</A></H2>
-<P>
-<A NAME="IDX124"></A>
-<A NAME="IDX125"></A>
-
-
-<P>
-When initializing, Wget will look for a <EM>global</EM> startup file,
-<TT>`/usr/local/etc/wgetrc'</TT> by default (or some prefix other than
-<TT>`/usr/local'</TT>, if Wget was not installed there) and read commands
-from there, if it exists.
-
-
-<P>
-Then it will look for the user's file.  If the environment variable
-<CODE>WGETRC</CODE> is set, Wget will try to load that file.  Failing that, no
-further attempts will be made.
-
-
-<P>
-If <CODE>WGETRC</CODE> is not set, Wget will try to load 
<TT>`$HOME/.wgetrc'</TT>.
-
-
-<P>
-The fact that the user's settings are loaded after the system-wide ones
-means that, in case of a collision, the user's wgetrc <EM>overrides</EM> the
-system-wide wgetrc (in <TT>`/usr/local/etc/wgetrc'</TT> by default).
-Fascist admins, away!
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_24.html">previous</A>, <A HREF="wget_26.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>
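The lookup order above can be sketched as a small shell function: the global file is always read first, and the user file is <CODE>WGETRC</CODE> if set, otherwise <TT>`$HOME/.wgetrc'</TT>. The path below is illustrative, and this is a sketch of the documented behaviour, not Wget's actual code.

```shell
# Pick the user startup file the way the manual describes.
user_wgetrc() {
  if [ -n "$WGETRC" ]; then
    echo "$WGETRC"          # explicit override wins
  else
    echo "$HOME/.wgetrc"    # default user file
  fi
}

WGETRC=/tmp/custom.wgetrc   # hypothetical override
user_wgetrc
```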

Index: manual/wget-1.8.1/html_node/wget_26.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_26.html
diff -N manual/wget-1.8.1/html_node/wget_26.html
--- manual/wget-1.8.1/html_node/wget_26.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,53 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Wgetrc Syntax</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_25.html">previous</A>, <A HREF="wget_27.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC26" HREF="wget_toc.html#TOC26">Wgetrc Syntax</A></H2>
-<P>
-<A NAME="IDX126"></A>
-<A NAME="IDX127"></A>
-
-
-<P>
-The syntax of a wgetrc command is simple:
-
-
-
-<PRE>
-variable = value
-</PRE>
-
-<P>
-The <EM>variable</EM> will also be called <EM>command</EM>.  Valid
-<EM>values</EM> are different for different commands.
-
-
-<P>
-The commands are case-insensitive and underscore-insensitive.  Thus
-<SAMP>`DIr__PrefiX'</SAMP> is the same as <SAMP>`dirprefix'</SAMP>.  Empty 
lines, lines
-beginning with <SAMP>`#'</SAMP> and lines containing white-space only are
-discarded.
-
-
-<P>
-Commands that expect a comma-separated list will clear the list on an
-empty command.  So, if you wish to reset the rejection list specified in
-global <TT>`wgetrc'</TT>, you can do it with:
-
-
-
-<PRE>
-reject =
-</PRE>
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_25.html">previous</A>, <A HREF="wget_27.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>
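The case- and underscore-insensitivity of wgetrc commands can be sketched as a normalization step; this is a guess at the spirit of the parser, not Wget's actual implementation.

```shell
# Strip underscores and lowercase the command name before matching.
normalize() { printf '%s\n' "$1" | tr -d '_' | tr 'A-Z' 'a-z'; }

normalize 'DIr__PrefiX'   # both spellings map to the same command
normalize 'dirprefix'
```

Both calls print `dirprefix`, matching the manual's example that `DIr__PrefiX` and `dirprefix` name the same command.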

Index: manual/wget-1.8.1/html_node/wget_27.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_27.html
diff -N manual/wget-1.8.1/html_node/wget_27.html
--- manual/wget-1.8.1/html_node/wget_27.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,379 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Wgetrc Commands</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_26.html">previous</A>, <A HREF="wget_28.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC27" HREF="wget_toc.html#TOC27">Wgetrc Commands</A></H2>
-<P>
-<A NAME="IDX128"></A>
-
-
-<P>
-The complete set of commands is listed below.  Legal values are listed
-after the <SAMP>`='</SAMP>.  Simple Boolean values can be set or unset using
-<SAMP>`on'</SAMP> and <SAMP>`off'</SAMP> or <SAMP>`1'</SAMP> and 
<SAMP>`0'</SAMP>.  A fancier kind of
-Boolean allowed in some cases is the <EM>lockable Boolean</EM>, which may
-be set to <SAMP>`on'</SAMP>, <SAMP>`off'</SAMP>, <SAMP>`always'</SAMP>, or 
<SAMP>`never'</SAMP>.  If an
-option is set to <SAMP>`always'</SAMP> or <SAMP>`never'</SAMP>, that value 
will be
-locked in for the duration of the Wget invocation--commandline options
-will not override.
-
-
-<P>
-Some commands take pseudo-arbitrary values.  <VAR>address</VAR> values can be
-hostnames or dotted-quad IP addresses.  <VAR>n</VAR> can be any positive
-integer, or <SAMP>`inf'</SAMP> for infinity, where appropriate.  
<VAR>string</VAR>
-values can be any non-empty string.
-
-
-<P>
-Most of these commands have commandline equivalents (see section <A 
HREF="wget_2.html#SEC2">Invoking</A>),
-though some of the more obscure or rarely used ones do not.
-
-
-<DL COMPACT>
-
-<DT>accept/reject = <VAR>string</VAR>
-<DD>
-Same as <SAMP>`-A'</SAMP>/<SAMP>`-R'</SAMP> (see section <A 
HREF="wget_16.html#SEC16">Types of Files</A>).
-
-<DT>add_hostdir = on/off
-<DD>
-Enable/disable host-prefixed file names.  <SAMP>`-nH'</SAMP> disables it.
-
-<DT>continue = on/off
-<DD>
-If set to on, force continuation of preexistent partially retrieved
-files.  See <SAMP>`-c'</SAMP> before setting it.
-
-<DT>background = on/off
-<DD>
-Enable/disable going to background--the same as <SAMP>`-b'</SAMP> (which
-enables it).
-
-<DT>backup_converted = on/off
-<DD>
-Enable/disable saving pre-converted files with the suffix
-<SAMP>`.orig'</SAMP>---the same as <SAMP>`-K'</SAMP> (which enables it).
-
-<DT>base = <VAR>string</VAR>
-<DD>
-Consider relative URLs in URL input files (forced to be
-interpreted as HTML) as being relative to <VAR>string</VAR>---the same as
-<SAMP>`-B'</SAMP>.
-
-<DT>bind_address = <VAR>address</VAR>
-<DD>
-Bind to <VAR>address</VAR>, like the <SAMP>`--bind-address'</SAMP> option.
-
-<DT>cache = on/off
-<DD>
-When set to off, disallow server-caching.  See the <SAMP>`-C'</SAMP> option.
-
-<DT>convert_links = on/off
-<DD>
-Convert non-relative links locally.  The same as <SAMP>`-k'</SAMP>.
-
-<DT>cookies = on/off
-<DD>
-When set to off, disallow cookies.  See the <SAMP>`--cookies'</SAMP> option.
-
-<DT>load_cookies = <VAR>file</VAR>
-<DD>
-Load cookies from <VAR>file</VAR>.  See <SAMP>`--load-cookies'</SAMP>.
-
-<DT>save_cookies = <VAR>file</VAR>
-<DD>
-Save cookies to <VAR>file</VAR>.  See <SAMP>`--save-cookies'</SAMP>.
-
-<DT>cut_dirs = <VAR>n</VAR>
-<DD>
-Ignore <VAR>n</VAR> remote directory components.
-
-<DT>debug = on/off
-<DD>
-Debug mode, same as <SAMP>`-d'</SAMP>.
-
-<DT>delete_after = on/off
-<DD>
-Delete after download--the same as <SAMP>`--delete-after'</SAMP>.
-
-<DT>dir_prefix = <VAR>string</VAR>
-<DD>
-Top of directory tree--the same as <SAMP>`-P'</SAMP>.
-
-<DT>dirstruct = on/off
-<DD>
-Turning dirstruct on or off--the same as <SAMP>`-x'</SAMP> or 
<SAMP>`-nd'</SAMP>,
-respectively.
-
-<DT>domains = <VAR>string</VAR>
-<DD>
-Same as <SAMP>`-D'</SAMP> (see section <A HREF="wget_15.html#SEC15">Spanning 
Hosts</A>).
-
-<DT>dot_bytes = <VAR>n</VAR>
-<DD>
-Specify the number of bytes "contained" in a dot, as seen throughout
-the retrieval (1024 by default).  You can postfix the value with
-<SAMP>`k'</SAMP> or <SAMP>`m'</SAMP>, representing kilobytes and megabytes,
-respectively.  With dot settings you can tailor the dot retrieval to
-suit your needs, or you can use the predefined <EM>styles</EM>
-(see section <A HREF="wget_7.html#SEC7">Download Options</A>).
-
-<DT>dots_in_line = <VAR>n</VAR>
-<DD>
-Specify the number of dots that will be printed in each line throughout
-the retrieval (50 by default).
-
-<DT>dot_spacing = <VAR>n</VAR>
-<DD>
-Specify the number of dots in a single cluster (10 by default).
-
-<DT>exclude_directories = <VAR>string</VAR>
-<DD>
-Specify a comma-separated list of directories you wish to exclude from
-download--the same as <SAMP>`-X'</SAMP> (see section <A 
HREF="wget_17.html#SEC17">Directory-Based Limits</A>).
-
-<DT>exclude_domains = <VAR>string</VAR>
-<DD>
-Same as <SAMP>`--exclude-domains'</SAMP> (see section <A 
HREF="wget_15.html#SEC15">Spanning Hosts</A>).
-
-<DT>follow_ftp = on/off
-<DD>
-Follow FTP links from HTML documents--the same as
-<SAMP>`--follow-ftp'</SAMP>.
-
-<DT>follow_tags = <VAR>string</VAR>
-<DD>
-Only follow certain HTML tags when doing a recursive retrieval, just like
-<SAMP>`--follow-tags'</SAMP>.
-
-<DT>force_html = on/off
-<DD>
-If set to on, force the input filename to be regarded as an HTML
-document--the same as <SAMP>`-F'</SAMP>.
-
-<DT>ftp_proxy = <VAR>string</VAR>
-<DD>
-Use <VAR>string</VAR> as FTP proxy, instead of the one specified in
-the environment.
-
-<DT>glob = on/off
-<DD>
-Turn globbing on/off--the same as <SAMP>`-g'</SAMP>.
-
-<DT>header = <VAR>string</VAR>
-<DD>
-Define an additional header, like <SAMP>`--header'</SAMP>.
-
-<DT>html_extension = on/off
-<DD>
-Add a <SAMP>`.html'</SAMP> extension to <SAMP>`text/html'</SAMP> files without 
it, like
-<SAMP>`-E'</SAMP>.
-
-<DT>http_passwd = <VAR>string</VAR>
-<DD>
-Set HTTP password.
-
-<DT>http_proxy = <VAR>string</VAR>
-<DD>
-Use <VAR>string</VAR> as HTTP proxy, instead of the one specified in
-the environment.
-
-<DT>http_user = <VAR>string</VAR>
-<DD>
-Set HTTP user to <VAR>string</VAR>.
-
-<DT>ignore_length = on/off
-<DD>
-When set to on, ignore <CODE>Content-Length</CODE> header; the same as
-<SAMP>`--ignore-length'</SAMP>.
-
-<DT>ignore_tags = <VAR>string</VAR>
-<DD>
-Ignore certain HTML tags when doing a recursive retrieval, just like
-<SAMP>`-G'</SAMP> / <SAMP>`--ignore-tags'</SAMP>.
-
-<DT>include_directories = <VAR>string</VAR>
-<DD>
-Specify a comma-separated list of directories you wish to follow when
-downloading--the same as <SAMP>`-I'</SAMP>.
-
-<DT>input = <VAR>string</VAR>
-<DD>
-Read the URLs from <VAR>string</VAR>, like <SAMP>`-i'</SAMP>.
-
-<DT>kill_longer = on/off
-<DD>
-Consider data longer than specified in the <CODE>Content-Length</CODE> header as invalid
-(and retry getting it).  The default behaviour is to save as much data
-as there is, provided there is more than or equal to the value in
-<CODE>Content-Length</CODE>.
-
-<DT>logfile = <VAR>string</VAR>
-<DD>
-Set logfile--the same as <SAMP>`-o'</SAMP>.
-
-<DT>login = <VAR>string</VAR>
-<DD>
-Your user name on the remote machine, for FTP.  Defaults to
-<SAMP>`anonymous'</SAMP>.
-
-<DT>mirror = on/off
-<DD>
-Turn mirroring on/off.  The same as <SAMP>`-m'</SAMP>.
-
-<DT>netrc = on/off
-<DD>
-Turn reading netrc on or off.
-
-<DT>noclobber = on/off
-<DD>
-Same as <SAMP>`-nc'</SAMP>.
-
-<DT>no_parent = on/off
-<DD>
-Disallow retrieving outside the directory hierarchy, like
-<SAMP>`--no-parent'</SAMP> (see section <A 
HREF="wget_17.html#SEC17">Directory-Based Limits</A>).
-
-<DT>no_proxy = <VAR>string</VAR>
-<DD>
-Use <VAR>string</VAR> as the comma-separated list of domains to avoid in
-proxy loading, instead of the one specified in the environment.
-
-<DT>output_document = <VAR>string</VAR>
-<DD>
-Set the output filename--the same as <SAMP>`-O'</SAMP>.
-
-<DT>page_requisites = on/off
-<DD>
-Download all ancillary documents necessary for a single HTML page to
-display properly--the same as <SAMP>`-p'</SAMP>.
-
-<DT>passive_ftp = on/off/always/never
-<DD>
-Set passive FTP---the same as <SAMP>`--passive-ftp'</SAMP>.  Some scripts
-and <SAMP>`.pm'</SAMP> (Perl module) files download files using <SAMP>`wget
---passive-ftp'</SAMP>.  If your firewall does not allow this, you can set
-<SAMP>`passive_ftp = never'</SAMP> to override the commandline.
-
-<DT>passwd = <VAR>string</VAR>
-<DD>
-Set your FTP password to <VAR>string</VAR>.  Without this setting, the
-password defaults to <SAMP>address@hidden'</SAMP>.
-
-<DT>progress = <VAR>string</VAR>
-<DD>
-Set the type of the progress indicator.  Legal types are "dot" and
-"bar".
-
-<DT>proxy_user = <VAR>string</VAR>
-<DD>
-Set proxy authentication user name to <VAR>string</VAR>, like 
<SAMP>`--proxy-user'</SAMP>.
-
-<DT>proxy_passwd = <VAR>string</VAR>
-<DD>
-Set proxy authentication password to <VAR>string</VAR>, like 
<SAMP>`--proxy-passwd'</SAMP>.
-
-<DT>referer = <VAR>string</VAR>
-<DD>
-Set HTTP <SAMP>`Referer:'</SAMP> header just like <SAMP>`--referer'</SAMP>.  
(Note it
-was the folks who wrote the HTTP spec who got the spelling of
-"referrer" wrong.)
-
-<DT>quiet = on/off
-<DD>
-Quiet mode--the same as <SAMP>`-q'</SAMP>.
-
-<DT>quota = <VAR>quota</VAR>
-<DD>
-Specify the download quota, which is useful to put in the global
-<TT>`wgetrc'</TT>.  When download quota is specified, Wget will stop
-retrieving after the download sum has become greater than quota.  The
-quota can be specified in bytes (default), kbytes (<SAMP>`k'</SAMP> appended) or
-mbytes (<SAMP>`m'</SAMP> appended).  Thus <SAMP>`quota = 5m'</SAMP> will set 
the quota
-to 5 mbytes.  Note that the user's startup file overrides system
-settings.
-
-<DT>reclevel = <VAR>n</VAR>
-<DD>
-Recursion level--the same as <SAMP>`-l'</SAMP>.
-
-<DT>recursive = on/off
-<DD>
-Recursive on/off--the same as <SAMP>`-r'</SAMP>.
-
-<DT>relative_only = on/off
-<DD>
-Follow only relative links--the same as <SAMP>`-L'</SAMP> (see section <A 
HREF="wget_18.html#SEC18">Relative Links</A>).
-
-<DT>remove_listing = on/off
-<DD>
-If set to on, remove FTP listings downloaded by Wget.  Setting it
-to off is the same as <SAMP>`-nr'</SAMP>.
-
-<DT>retr_symlinks = on/off
-<DD>
-When set to on, retrieve symbolic links as if they were plain files; the
-same as <SAMP>`--retr-symlinks'</SAMP>.
-
-<DT>robots = on/off
-<DD>
-Use (or not) <TT>`/robots.txt'</TT> file (see section <A 
HREF="wget_41.html#SEC41">Robots</A>).  Be sure to know
-what you are doing before changing the default (which is <SAMP>`on'</SAMP>).
-
-<DT>server_response = on/off
-<DD>
-Choose whether or not to print the HTTP and FTP server
-responses--the same as <SAMP>`-S'</SAMP>.
-
-<DT>span_hosts = on/off
-<DD>
-Same as <SAMP>`-H'</SAMP>.
-
-<DT>timeout = <VAR>n</VAR>
-<DD>
-Set timeout value--the same as <SAMP>`-T'</SAMP>.
-
-<DT>timestamping = on/off
-<DD>
-Turn timestamping on/off.  The same as <SAMP>`-N'</SAMP> (see section <A 
HREF="wget_20.html#SEC20">Time-Stamping</A>).
-
-<DT>tries = <VAR>n</VAR>
-<DD>
-Set number of retries per URL---the same as <SAMP>`-t'</SAMP>.
-
-<DT>use_proxy = on/off
-<DD>
-Turn proxy support on/off.  The same as <SAMP>`-Y'</SAMP>.
-
-<DT>verbose = on/off
-<DD>
-Turn verbose on/off--the same as <SAMP>`-v'</SAMP>/<SAMP>`-nv'</SAMP>.
-
-<DT>wait = <VAR>n</VAR>
-<DD>
-Wait <VAR>n</VAR> seconds between retrievals--the same as <SAMP>`-w'</SAMP>.
-
-<DT>waitretry = <VAR>n</VAR>
-<DD>
-Wait up to <VAR>n</VAR> seconds between retries of failed retrievals
-only--the same as <SAMP>`--waitretry'</SAMP>.  Note that this is turned on by
-default in the global <TT>`wgetrc'</TT>.
-
-<DT>randomwait = on/off
-<DD>
-Turn random between-request wait times on or off. The same as 
-<SAMP>`--random-wait'</SAMP>.
-</DL>
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_26.html">previous</A>, <A HREF="wget_28.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_28.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_28.html
diff -N manual/wget-1.8.1/html_node/wget_28.html
--- manual/wget-1.8.1/html_node/wget_28.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,144 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Sample Wgetrc</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_27.html">previous</A>, <A HREF="wget_29.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC28" HREF="wget_toc.html#TOC28">Sample Wgetrc</A></H2>
-<P>
-<A NAME="IDX129"></A>
-
-
-<P>
-This is the sample initialization file, as given in the distribution.
-It is divided in two sections--one for global usage (suitable for a
-global startup file), and one for local usage (suitable for
-<TT>`$HOME/.wgetrc'</TT>).  Be careful about the things you change.
-
-
-<P>
-Note that almost all the lines are commented out.  For a command to have
-any effect, you must remove the <SAMP>`#'</SAMP> character at the beginning of
-its line.
-
-
-
-<PRE>
-###
-### Sample Wget initialization file .wgetrc
-###
-
-## You can use this file to change the default behaviour of wget or to
-## avoid having to type many many command-line options. This file does
-## not contain a comprehensive list of commands -- look at the manual
-## to find out what you can put into this file.
-## 
-## Wget initialization file can reside in /usr/local/etc/wgetrc
-## (global, for all users) or $HOME/.wgetrc (for a single user).
-##
-## To use the settings in this file, you will have to uncomment them,
-## as well as change them, in most cases, as the values on the
-## commented-out lines are the default values (e.g. "off").
-
-##
-## Global settings (useful for setting up in /usr/local/etc/wgetrc).
-## Think well before you change them, since they may reduce wget's
-## functionality, and make it behave contrary to the documentation:
-##
-
-# You can set retrieve quota for beginners by specifying a value
-# optionally followed by 'K' (kilobytes) or 'M' (megabytes).  The
-# default quota is unlimited.
-#quota = inf
-
-# You can lower (or raise) the default number of retries when
-# downloading a file (default is 20).
-#tries = 20
-
-# Lowering the maximum depth of the recursive retrieval is handy to
-# prevent newbies from going too "deep" when they unwittingly start
-# the recursive retrieval.  The default is 5.
-#reclevel = 5
-
-# Many sites are behind firewalls that do not allow initiation of
-# connections from the outside.  On these sites you have to use the
-# `passive' feature of FTP.  If you are behind such a firewall, you
-# can turn this on to make Wget use passive FTP by default.
-#passive_ftp = off
-
-# The "wait" command below makes Wget wait between every connection.
-# If, instead, you want Wget to wait only between retries of failed
-# downloads, set waitretry to maximum number of seconds to wait (Wget
-# will use "linear backoff", waiting 1 second after the first failure
-# on a file, 2 seconds after the second failure, etc. up to this max).
-waitretry = 10
-
-##
-## Local settings (for a user to set in his $HOME/.wgetrc).  It is
-## *highly* undesirable to put these settings in the global file, since
-## they are potentially dangerous to "normal" users.
-##
-## Even when setting up your own ~/.wgetrc, you should know what you
-## are doing before doing so.
-##
-
-# Set this to on to use timestamping by default:
-#timestamping = off
-
-# It is a good idea to make Wget send your email address in a `From:'
-# header with your request (so that server administrators can contact
-# you in case of errors).  Wget does *not* send `From:' by default.
-#header = From: Your Name &#60;address@hidden&#62;
-
-# You can set up other headers, like Accept-Language.  Accept-Language
-# is *not* sent by default.
-#header = Accept-Language: en
-
-# You can set the default proxies for Wget to use for http and ftp.
-# They will override the value in the environment.
-#http_proxy = http://proxy.yoyodyne.com:18023/
-#ftp_proxy = http://proxy.yoyodyne.com:18023/
-
-# If you do not want to use proxy at all, set this to off.
-#use_proxy = on
-
-# You can customize the retrieval outlook.  Valid options are default,
-# binary, mega and micro.
-#dot_style = default
-
-# Setting this to off makes Wget not download /robots.txt.  Be sure to
-# know *exactly* what /robots.txt is and how it is used before changing
-# the default!
-#robots = on
-
-# It can be useful to make Wget wait between connections.  Set this to
-# the number of seconds you want Wget to wait.
-#wait = 0
-
-# You can force creating directory structure, even if a single file is
-# being retrieved, by setting this to on.
-#dirstruct = off
-
-# You can turn on recursive retrieving by default (don't do this if
-# you are not sure you know what it means) by setting this to on.
-#recursive = off
-
-# To always back up file X as X.orig before converting its links (due
-# to -k / --convert-links / convert_links = on having been specified),
-# set this variable to on:
-#backup_converted = off
-
-# To have Wget follow FTP links from HTML files by default, set this
-# to on:
-#follow_ftp = off
-</PRE>
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_27.html">previous</A>, <A HREF="wget_29.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_29.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_29.html
diff -N manual/wget-1.8.1/html_node/wget_29.html
--- manual/wget-1.8.1/html_node/wget_29.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,25 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Examples</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_28.html">previous</A>, <A HREF="wget_30.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H1><A NAME="SEC29" HREF="wget_toc.html#TOC29">Examples</A></H1>
-<P>
-<A NAME="IDX130"></A>
-
-
-<P>
-The examples are divided into three sections loosely based on their
-complexity.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_28.html">previous</A>, <A HREF="wget_30.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_3.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_3.html
diff -N manual/wget-1.8.1/html_node/wget_3.html
--- manual/wget-1.8.1/html_node/wget_3.html     19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,106 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - URL Format</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_2.html">previous</A>, 
<A HREF="wget_4.html">next</A>, <A HREF="wget_47.html">last</A> section, <A 
HREF="wget_toc.html">table of contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC3" HREF="wget_toc.html#TOC3">URL Format</A></H2>
-<P>
-<A NAME="IDX7"></A>
-<A NAME="IDX8"></A>
-
-
-<P>
-<EM>URL</EM> is an acronym for Uniform Resource Locator.  A uniform
-resource locator is a compact string representation for a resource
-available via the Internet.  Wget recognizes the URL syntax as per
-RFC1738.  This is the most widely used form (square brackets denote
-optional parts):
-
-
-
-<PRE>
-http://host[:port]/directory/file
-ftp://host[:port]/directory/file
-</PRE>
-
-<P>
-You can also encode your username and password within a URL:
-
-
-
-<PRE>
-ftp://user:address@hidden/path
-http://user:address@hidden/path
-</PRE>
-
-<P>
-Either <VAR>user</VAR> or <VAR>password</VAR>, or both, may be left out.  If 
you
-leave out either the HTTP username or password, no authentication
-will be sent.  If you leave out the FTP username, <SAMP>`anonymous'</SAMP>
-will be used.  If you leave out the FTP password, your email
-address will be supplied as a default password.<A NAME="DOCF1" 
HREF="wget_foot.html#FOOT1">(1)</A>
-
-
-<P>
-You can encode unsafe characters in a URL as <SAMP>`%xy'</SAMP>, 
<CODE>xy</CODE>
-being the hexadecimal representation of the character's ASCII
-value.  Some common unsafe characters include <SAMP>`%'</SAMP> (quoted as
-<SAMP>`%25'</SAMP>), <SAMP>`:'</SAMP> (quoted as <SAMP>`%3A'</SAMP>), and 
<SAMP>`@'</SAMP> (quoted as
-<SAMP>`%40'</SAMP>).  Refer to RFC1738 for a comprehensive list of unsafe
-characters.
-
-
-<P>
-Wget also supports the <CODE>type</CODE> feature for FTP URLs.  By
-default, FTP documents are retrieved in the binary mode (type
-<SAMP>`i'</SAMP>), which means that they are downloaded unchanged.  Another
-useful mode is the <SAMP>`a'</SAMP> (<EM>ASCII</EM>) mode, which converts the 
line
-delimiters between the different operating systems, and is thus useful
-for text files.  Here is an example:
-
-
-
-<PRE>
-ftp://host/directory/file;type=a
-</PRE>
-
-<P>
-Two alternative variants of URL specification are also supported,
-for historical (hysterical?) reasons and their widespread use.
-
-
-<P>
-FTP-only syntax (supported by <CODE>NcFTP</CODE>):
-
-<PRE>
-host:/dir/file
-</PRE>
-
-<P>
-HTTP-only syntax (introduced by <CODE>Netscape</CODE>):
-
-<PRE>
-host[:port]/dir/file
-</PRE>
-
-<P>
-These two alternative forms are deprecated, and may cease being
-supported in the future.
-
-
-<P>
-If you do not understand the difference between these notations, or do
-not know which one to use, just use the plain ordinary format you use
-with your favorite browser, like <CODE>Lynx</CODE> or <CODE>Netscape</CODE>.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_2.html">previous</A>, 
<A HREF="wget_4.html">next</A>, <A HREF="wget_47.html">last</A> section, <A 
HREF="wget_toc.html">table of contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_30.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_30.html
diff -N manual/wget-1.8.1/html_node/wget_30.html
--- manual/wget-1.8.1/html_node/wget_30.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,79 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Simple Usage</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_29.html">previous</A>, <A HREF="wget_31.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC30" HREF="wget_toc.html#TOC30">Simple Usage</A></H2>
-
-
-<UL>
-<LI>
-
-Say you want to download a URL.  Just type:
-
-
-<PRE>
-wget http://fly.srk.fer.hr/
-</PRE>
-
-<LI>
-
-But what will happen if the connection is slow, and the file is lengthy?
-The connection will probably fail before the whole file is retrieved,
-more than once.  In this case, Wget will try getting the file until it
-either gets the whole of it, or exceeds the default number of retries
-(this being 20).  It is easy to change the number of tries to 45, to
-ensure that the whole file will arrive safely:
-
-
-<PRE>
-wget --tries=45 http://fly.srk.fer.hr/jpg/flyweb.jpg
-</PRE>
-
-<LI>
-
-Now let's leave Wget to work in the background, and write its progress
-to log file <TT>`log'</TT>.  It is tiring to type <SAMP>`--tries'</SAMP>, so we
-shall use <SAMP>`-t'</SAMP>.
-
-
-<PRE>
-wget -t 45 -o log http://fly.srk.fer.hr/jpg/flyweb.jpg &#38;
-</PRE>
-
-The ampersand at the end of the line makes sure that Wget works in the
-background.  To unlimit the number of retries, use <SAMP>`-t inf'</SAMP>.
-
-<LI>
-
-Using FTP is just as simple.  Wget will take care of the login and
-password.
-
-
-<PRE>
-wget ftp://gnjilux.srk.fer.hr/welcome.msg
-</PRE>
-
-<LI>
-
-If you specify a directory, Wget will retrieve the directory listing,
-parse it and convert it to HTML.  Try:
-
-
-<PRE>
-wget ftp://prep.ai.mit.edu/pub/gnu/
-links index.html
-</PRE>
-
-</UL>
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_29.html">previous</A>, <A HREF="wget_31.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_31.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_31.html
diff -N manual/wget-1.8.1/html_node/wget_31.html
--- manual/wget-1.8.1/html_node/wget_31.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,173 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Advanced Usage</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_30.html">previous</A>, <A HREF="wget_32.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC31" HREF="wget_toc.html#TOC31">Advanced Usage</A></H2>
-
-
-<UL>
-<LI>
-
-You have a file that contains the URLs you want to download?  Use the
-<SAMP>`-i'</SAMP> switch:
-
-
-<PRE>
-wget -i <VAR>file</VAR>
-</PRE>
-
-If you specify <SAMP>`-'</SAMP> as file name, the URLs will be read from
-standard input.
-
-<LI>
-
-Create a five levels deep mirror image of the GNU web site, with the
-same directory structure the original has, with only one try per
-document, saving the log of the activities to <TT>`gnulog'</TT>:
-
-
-<PRE>
-wget -r http://www.gnu.org/ -o gnulog
-</PRE>
-
-<LI>
-
-The same as the above, but convert the links in the HTML files to
-point to local files, so you can view the documents off-line:
-
-
-<PRE>
-wget --convert-links -r http://www.gnu.org/ -o gnulog
-</PRE>
-
-<LI>
-
-Retrieve only one HTML page, but make sure that all the elements needed
-for the page to be displayed, such as inline images and external style
-sheets, are also downloaded.  Also make sure the downloaded page
-references the downloaded links.
-
-
-<PRE>
-wget -p --convert-links http://www.server.com/dir/page.html
-</PRE>
-
-The HTML page will be saved to <TT>`www.server.com/dir/page.html'</TT>, and
-the images, stylesheets, etc., somewhere under <TT>`www.server.com/'</TT>,
-depending on where they were on the remote server.
-
-<LI>
-
-The same as the above, but without the <TT>`www.server.com/'</TT> directory.
-In fact, I don't want to have all those random server directories
-anyway--just save <EM>all</EM> those files under a <TT>`download/'</TT>
-subdirectory of the current directory.
-
-
-<PRE>
-wget -p --convert-links -nH -nd -Pdownload \
-     http://www.server.com/dir/page.html
-</PRE>
-
-<LI>
-
-Retrieve the index.html of <SAMP>`www.lycos.com'</SAMP>, showing the original
-server headers:
-
-
-<PRE>
-wget -S http://www.lycos.com/
-</PRE>
-
-<LI>
-
-Save the server headers with the file, perhaps for post-processing.
-
-
-<PRE>
-wget -s http://www.lycos.com/
-more index.html
-</PRE>
-
-<LI>
-
-Retrieve the first two levels of <SAMP>`wuarchive.wustl.edu'</SAMP>, saving 
them
-to <TT>`/tmp'</TT>.
-
-
-<PRE>
-wget -r -l2 -P/tmp ftp://wuarchive.wustl.edu/
-</PRE>
-
-<LI>
-
-You want to download all the GIFs from a directory on an HTTP
-server.  You tried <SAMP>`wget http://www.server.com/dir/*.gif'</SAMP>, but 
that
-didn't work because HTTP retrieval does not support globbing.  In
-that case, use:
-
-
-<PRE>
-wget -r -l1 --no-parent -A.gif http://www.server.com/dir/
-</PRE>
-
-More verbose, but the effect is the same.  <SAMP>`-r -l1'</SAMP> means to
-retrieve recursively (see section <A HREF="wget_13.html#SEC13">Recursive 
Retrieval</A>), with maximum depth
-of 1.  <SAMP>`--no-parent'</SAMP> means that references to the parent directory
-are ignored (see section <A HREF="wget_17.html#SEC17">Directory-Based 
Limits</A>), and <SAMP>`-A.gif'</SAMP> means to
-download only the GIF files.  <SAMP>`-A "*.gif"'</SAMP> would have worked
-too.
-
-<LI>
-
-Suppose you were in the middle of downloading, when Wget was
-interrupted.  Now you do not want to clobber the files already present.
-In that case, use:
-
-
-<PRE>
-wget -nc -r http://www.gnu.org/
-</PRE>
-
-<LI>
-
-If you want to encode your own username and password to HTTP or
-FTP, use the appropriate URL syntax (see section <A 
HREF="wget_3.html#SEC3">URL Format</A>).
-
-
-<PRE>
-wget ftp://hniksic:address@hidden/.emacs
-</PRE>
-
-<A NAME="IDX131"></A>
-<LI>
-
-You would like the output documents to go to standard output instead of
-to files?
-
-
-<PRE>
-wget -O - http://jagor.srce.hr/ http://www.srce.hr/
-</PRE>
-
-You can also combine the two options and make pipelines to retrieve the
-documents from remote hotlists:
-
-
-<PRE>
-wget -O - http://cool.list.com/ | wget --force-html -i -
-</PRE>
-
-</UL>
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_30.html">previous</A>, <A HREF="wget_32.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_32.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_32.html
diff -N manual/wget-1.8.1/html_node/wget_32.html
--- manual/wget-1.8.1/html_node/wget_32.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,72 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Very Advanced Usage</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_31.html">previous</A>, <A HREF="wget_33.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC32" HREF="wget_toc.html#TOC32">Very Advanced Usage</A></H2>
-
-<P>
-<A NAME="IDX132"></A>
-
-<UL>
-<LI>
-
-If you wish Wget to keep a mirror of a page (or FTP
-subdirectories), use <SAMP>`--mirror'</SAMP> (<SAMP>`-m'</SAMP>), which is the 
shorthand
-for <SAMP>`-r -l inf -N'</SAMP>.  You can put Wget in the crontab file asking 
it
-to recheck a site each Sunday:
-
-
-<PRE>
-crontab
-0 0 * * 0 wget --mirror http://www.gnu.org/ -o /home/me/weeklog
-</PRE>
-
-<LI>
-
-In addition to the above, you want the links to be converted for local
-viewing.  But, after having read this manual, you know that link
-conversion doesn't play well with timestamping, so you also want Wget to
-back up the original HTML files before the conversion.  Wget invocation
-would look like this:
-
-
-<PRE>
-wget --mirror --convert-links --backup-converted  \
-     http://www.gnu.org/ -o /home/me/weeklog
-</PRE>
-
-<LI>
-
-But you've also noticed that local viewing doesn't work all that well
-when HTML files are saved under extensions other than <SAMP>`.html'</SAMP>,
-perhaps because they were served as <TT>`index.cgi'</TT>.  So you'd like
-Wget to rename all the files served with content-type <SAMP>`text/html'</SAMP>
-to <TT>`<VAR>name</VAR>.html'</TT>.
-
-
-<PRE>
-wget --mirror --convert-links --backup-converted \
-     --html-extension -o /home/me/weeklog        \
-     http://www.gnu.org/
-</PRE>
-
-Or, with less typing:
-
-
-<PRE>
-wget -m -k -K -E http://www.gnu.org/ -o /home/me/weeklog
-</PRE>
-
-</UL>
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_31.html">previous</A>, <A HREF="wget_33.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_33.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_33.html
diff -N manual/wget-1.8.1/html_node/wget_33.html
--- manual/wget-1.8.1/html_node/wget_33.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,24 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Various</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_32.html">previous</A>, <A HREF="wget_34.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H1><A NAME="SEC33" HREF="wget_toc.html#TOC33">Various</A></H1>
-<P>
-<A NAME="IDX133"></A>
-
-
-<P>
-This chapter contains all the stuff that could not fit anywhere else.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_32.html">previous</A>, <A HREF="wget_34.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_34.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_34.html
diff -N manual/wget-1.8.1/html_node/wget_34.html
--- manual/wget-1.8.1/html_node/wget_34.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,115 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Proxies</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_33.html">previous</A>, <A HREF="wget_35.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC34" HREF="wget_toc.html#TOC34">Proxies</A></H2>
-<P>
-<A NAME="IDX134"></A>
-
-
-<P>
-<EM>Proxies</EM> are special-purpose HTTP servers designed to transfer
-data from remote servers to local clients.  One typical use of proxies
-is lightening network load for users behind a slow connection.  This is
-achieved by channeling all HTTP and FTP requests through the
-proxy, which caches the transferred data.  When a cached resource is
-requested again, the proxy will return the data from its cache.  Another
-use for proxies is for companies that separate (for security reasons)
-their internal networks from the rest of the Internet.  In order to obtain
-information from the Web, their users connect and retrieve remote data
-using an authorized proxy.
-
-
-<P>
-Wget supports proxies for both HTTP and FTP retrievals.  The
-standard way to specify proxy location, which Wget recognizes, is using
-the following environment variables:
-
-
-<DL COMPACT>
-
-<DT><CODE>http_proxy</CODE>
-<DD>
-This variable should contain the URL of the proxy for HTTP
-connections.
-
-<DT><CODE>ftp_proxy</CODE>
-<DD>
-This variable should contain the URL of the proxy for FTP
-connections.  It is quite common that <CODE>http_proxy</CODE> and
-<CODE>ftp_proxy</CODE> are set to the same URL.
-
-<DT><CODE>no_proxy</CODE>
-<DD>
-This variable should contain a comma-separated list of domain extensions
-for which the proxy should <EM>not</EM> be used.  For instance, if the
-value of <CODE>no_proxy</CODE> is <SAMP>`.mit.edu'</SAMP>, the proxy will
-not be used to retrieve documents from MIT.
-</DL>
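Setting the three variables from a shell startup file might look like this (a hypothetical setup: the proxy host, port, and exempt domains are placeholders for your own network's values):

```shell
# Route HTTP and FTP requests through an assumed proxy; domains listed
# in no_proxy are fetched directly, bypassing it.
http_proxy=http://proxy.example.com:8001/
ftp_proxy=$http_proxy                 # commonly the same URL for both
no_proxy=.mit.edu,.example.com
export http_proxy ftp_proxy no_proxy
```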
-
-<P>
-In addition to the environment variables, proxy location and settings
-may be specified from within Wget itself.
-
-
-<DL COMPACT>
-
-<DT><SAMP>`-Y on/off'</SAMP>
-<DD>
-<DT><SAMP>`--proxy=on/off'</SAMP>
-<DD>
-<DT><SAMP>`proxy = on/off'</SAMP>
-<DD>
-This option may be used to turn the proxy support on or off.  Proxy
-support is on by default, provided that the appropriate environment
-variables are set.
-
-<DT><SAMP>`http_proxy = <VAR>URL</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`ftp_proxy = <VAR>URL</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`no_proxy = <VAR>string</VAR>'</SAMP>
-<DD>
-These startup file variables allow you to override the proxy settings
-specified by the environment.
-</DL>
-
-<P>
-Some proxy servers require authorization to enable you to use them.  The
-authorization consists of <EM>username</EM> and <EM>password</EM>, which must
-be sent by Wget.  As with HTTP authorization, several
-authentication schemes exist.  For proxy authorization only the
-<CODE>Basic</CODE> authentication scheme is currently implemented.
-
-
-<P>
-You may specify your username and password either through the proxy
-URL or through the command-line options.  Assuming that the
-company's proxy is located at <SAMP>`proxy.company.com'</SAMP> at port 8001, a
-proxy URL location containing authorization data might look like
-this:
-
-
-
-<PRE>
-http://hniksic:address@hidden:8001/
-</PRE>
-
-<P>
-Alternatively, you may use the <SAMP>`proxy-user'</SAMP> and
-<SAMP>`proxy-password'</SAMP> options, and the equivalent <TT>`.wgetrc'</TT>
-settings <CODE>proxy_user</CODE> and <CODE>proxy_passwd</CODE> to set the proxy
-username and password.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_33.html">previous</A>, <A HREF="wget_35.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_35.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_35.html
diff -N manual/wget-1.8.1/html_node/wget_35.html
--- manual/wget-1.8.1/html_node/wget_35.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,27 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Distribution</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_34.html">previous</A>, <A HREF="wget_36.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC35" HREF="wget_toc.html#TOC35">Distribution</A></H2>
-<P>
-<A NAME="IDX135"></A>
-
-
-<P>
-Like all GNU utilities, the latest version of Wget can be found at the
-master GNU archive site prep.ai.mit.edu, and its mirrors.  For example,
-Wget 1.8.1 can be found at
-<A 
HREF="ftp://prep.ai.mit.edu/gnu/wget/wget-1.8.1.tar.gz";>ftp://prep.ai.mit.edu/gnu/wget/wget-1.8.1.tar.gz</A>
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_34.html">previous</A>, <A HREF="wget_36.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_36.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_36.html
diff -N manual/wget-1.8.1/html_node/wget_36.html
--- manual/wget-1.8.1/html_node/wget_36.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,40 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Mailing List</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_35.html">previous</A>, <A HREF="wget_37.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC36" HREF="wget_toc.html#TOC36">Mailing List</A></H2>
-<P>
-<A NAME="IDX136"></A>
-<A NAME="IDX137"></A>
-
-
-<P>
-Wget has its own mailing list at <A 
HREF="mailto:address@hidden";>address@hidden</A>, thanks
-to Karsten Thygesen.  The mailing list is for discussion of Wget
-features and the web, for reporting Wget bugs (those that you think may
-be of interest to the public), and for announcements.  You are welcome to
-subscribe.  The more people on the list, the better!
-
-
-<P>
-To subscribe, send mail to <A HREF="mailto:address@hidden";>address@hidden</A> with
-the magic word <SAMP>`subscribe'</SAMP> in the subject line.  Unsubscribe by
-mailing to <A HREF="mailto:address@hidden";>address@hidden</A>.
-
-
-<P>
-The mailing list is archived at <A 
HREF="http://fly.srk.fer.hr/archive/wget";>http://fly.srk.fer.hr/archive/wget</A>.
-An alternative archive is available at
-<A 
HREF="http://www.mail-archive.com/wget%40sunsite.auc.dk/";>http://www.mail-archive.com/wget%40sunsite.auc.dk/</A>.
- 
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_35.html">previous</A>, <A HREF="wget_37.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_37.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_37.html
diff -N manual/wget-1.8.1/html_node/wget_37.html
--- manual/wget-1.8.1/html_node/wget_37.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,70 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Reporting Bugs</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_36.html">previous</A>, <A HREF="wget_38.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC37" HREF="wget_toc.html#TOC37">Reporting Bugs</A></H2>
-<P>
-<A NAME="IDX138"></A>
-<A NAME="IDX139"></A>
-<A NAME="IDX140"></A>
-
-
-<P>
-You are welcome to send bug reports about GNU Wget to
-<A HREF="mailto:address@hidden";>address@hidden</A>.
-
-
-<P>
-Before actually submitting a bug report, please try to follow a few
-simple guidelines.
-
-
-
-<OL>
-<LI>
-
-Please try to ascertain that the behaviour you see really is a bug.  If
-Wget crashes, it's a bug.  If Wget does not behave as documented,
-it's a bug.  If things behave strangely, but you are not sure about the way
-they are supposed to work, it might well be a bug.
-
-<LI>
-
-Try to repeat the bug in as simple circumstances as possible.  E.g. if
-Wget crashes while downloading <SAMP>`wget -rl0 -kKE -t5 -Y0
-http://yoyodyne.com -o /tmp/log'</SAMP>, you should try to see if the crash is
-repeatable, and if it will occur with a simpler set of options.  You might
-even try to start the download at the page where the crash occurred to
-see if that page somehow triggered the crash.
-
-Also, while I will probably be interested to know the contents of your
-<TT>`.wgetrc'</TT> file, just dumping it into the debug message is probably
-a bad idea.  Instead, you should first try to see if the bug repeats
-with <TT>`.wgetrc'</TT> moved out of the way.  Only if it turns out that
-<TT>`.wgetrc'</TT> settings affect the bug, mail me the relevant parts of
-the file.
-
-<LI>
-
-Please start Wget with <SAMP>`-d'</SAMP> option and send the log (or the
-relevant parts of it).  If Wget was compiled without debug support,
-recompile it.  It is <EM>much</EM> easier to trace bugs with debug support
-on.
-
-<LI>
-
-If Wget has crashed, try to run it in a debugger, e.g. <CODE>gdb `which
-wget` core</CODE> and type <CODE>where</CODE> to get the backtrace.
-</OL>
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_36.html">previous</A>, <A HREF="wget_38.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_38.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_38.html
diff -N manual/wget-1.8.1/html_node/wget_38.html
--- manual/wget-1.8.1/html_node/wget_38.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,52 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Portability</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_37.html">previous</A>, <A HREF="wget_39.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC38" HREF="wget_toc.html#TOC38">Portability</A></H2>
-<P>
-<A NAME="IDX141"></A>
-<A NAME="IDX142"></A>
-
-
-<P>
-Since Wget uses GNU Autoconf for building and configuring, and avoids
-using "special" ultra--mega--cool features of any particular Unix, it
-should compile (and work) on all common Unix flavors.
-
-
-<P>
-Various Wget versions have been compiled and tested under many kinds of
-Unix systems, including Solaris, Linux, SunOS, OSF (aka Digital Unix),
-Ultrix, *BSD, IRIX, and others; refer to the file <TT>`MACHINES'</TT> in the
-distribution directory for a comprehensive list.  If you compile it on
-an architecture not listed there, please let me know so I can update it.
-
-
-<P>
-Wget should also compile on other Unix systems not listed in
-<TT>`MACHINES'</TT>.  If it doesn't, please let me know.
-
-
-<P>
-Thanks to kind contributors, this version of Wget compiles and works on
-Microsoft Windows 95 and Windows NT platforms.  It has been compiled
-successfully using MS Visual C++ 4.0, Watcom, and Borland C compilers,
-with Winsock as networking software.  Naturally, it lacks some of the
-features available on Unix, but it should work as a substitute for
-people stuck with Windows.  Note that the Windows port is
-<STRONG>neither tested nor maintained</STRONG> by me--all questions and
-problems should be reported to the Wget mailing list at
-<A HREF="mailto:address@hidden";>address@hidden</A> where the maintainers will 
look at them.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_37.html">previous</A>, <A HREF="wget_39.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_39.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_39.html
diff -N manual/wget-1.8.1/html_node/wget_39.html
--- manual/wget-1.8.1/html_node/wget_39.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,40 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Signals</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_38.html">previous</A>, <A HREF="wget_40.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC39" HREF="wget_toc.html#TOC39">Signals</A></H2>
-<P>
-<A NAME="IDX143"></A>
-<A NAME="IDX144"></A>
-
-
-<P>
-Since the purpose of Wget is background work, it catches the hangup
-signal (<CODE>SIGHUP</CODE>) and ignores it.  If the output was on standard
-output, it will be redirected to a file named <TT>`wget-log'</TT>.
-Otherwise, <CODE>SIGHUP</CODE> is ignored.  This is convenient when you wish
-to redirect the output of Wget after having started it.
-
-
-
-<PRE>
-$ wget http://www.ifi.uio.no/~larsi/gnus.tar.gz &#38;
-$ kill -HUP %%     # Redirect the output to wget-log
-</PRE>
-
-<P>
-Other than that, Wget will not try to interfere with signals in any way.
-<KBD>C-c</KBD>, <CODE>kill -TERM</CODE> and <CODE>kill -KILL</CODE> should 
kill it alike.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_38.html">previous</A>, <A HREF="wget_40.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_4.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_4.html
diff -N manual/wget-1.8.1/html_node/wget_4.html
--- manual/wget-1.8.1/html_node/wget_4.html     19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,84 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Option Syntax</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_3.html">previous</A>, 
<A HREF="wget_5.html">next</A>, <A HREF="wget_47.html">last</A> section, <A 
HREF="wget_toc.html">table of contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC4" HREF="wget_toc.html#TOC4">Option Syntax</A></H2>
-<P>
-<A NAME="IDX9"></A>
-<A NAME="IDX10"></A>
-
-
-<P>
-Since Wget uses GNU getopt to process its arguments, every option has a
-short form and a long form.  Long options are more convenient to
-remember, but take time to type.  You may freely mix different option
-styles, or specify options after the command-line arguments.  Thus you
-may write:
-
-
-
-<PRE>
-wget -r --tries=10 http://fly.srk.fer.hr/ -o log
-</PRE>
-
-<P>
-The space between the option accepting an argument and the argument may
-be omitted.  Instead of <SAMP>`-o log'</SAMP> you can write <SAMP>`-olog'</SAMP>.
-
-
-<P>
-You may put several options that do not require arguments together,
-like:
-
-
-
-<PRE>
-wget -drc <VAR>URL</VAR>
-</PRE>
-
-<P>
-This is completely equivalent to:
-
-
-
-<PRE>
-wget -d -r -c <VAR>URL</VAR>
-</PRE>
-
-<P>
-Since the options can be specified after the arguments, you may
-terminate them with <SAMP>`--'</SAMP>.  So the following will try to download
-URL <SAMP>`-x'</SAMP>, reporting failure to <TT>`log'</TT>:
-
-
-
-<PRE>
-wget -o log -- -x
-</PRE>
-
-<P>
-The options that accept comma-separated lists all respect the convention
-that specifying an empty list clears its value.  This can be useful to
-clear the <TT>`.wgetrc'</TT> settings.  For instance, if your 
<TT>`.wgetrc'</TT>
-sets <CODE>exclude_directories</CODE> to <TT>`/cgi-bin'</TT>, the following
-example will first reset it, and then set it to exclude <TT>`/~nobody'</TT>
-and <TT>`/~somebody'</TT>.  You can also clear the lists in <TT>`.wgetrc'</TT>
-(see section <A HREF="wget_26.html#SEC26">Wgetrc Syntax</A>).
-
-
-
-<PRE>
-wget -X '' -X /~nobody,/~somebody
-</PRE>
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_3.html">previous</A>, 
<A HREF="wget_5.html">next</A>, <A HREF="wget_47.html">last</A> section, <A 
HREF="wget_toc.html">table of contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_40.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_40.html
diff -N manual/wget-1.8.1/html_node/wget_40.html
--- manual/wget-1.8.1/html_node/wget_40.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,21 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Appendices</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_39.html">previous</A>, <A HREF="wget_41.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H1><A NAME="SEC40" HREF="wget_toc.html#TOC40">Appendices</A></H1>
-
-<P>
-This chapter contains some references I consider useful.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_39.html">previous</A>, <A HREF="wget_41.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_41.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_41.html
diff -N manual/wget-1.8.1/html_node/wget_41.html
--- manual/wget-1.8.1/html_node/wget_41.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,104 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Robots</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_40.html">previous</A>, <A HREF="wget_42.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC41" HREF="wget_toc.html#TOC41">Robots</A></H2>
-<P>
-<A NAME="IDX145"></A>
-<A NAME="IDX146"></A>
-<A NAME="IDX147"></A>
-
-
-<P>
-It is extremely easy to make Wget wander aimlessly around a web site,
-sucking all the available data in the process.  <SAMP>`wget -r <VAR>site</VAR>'</SAMP>,
-and you're set.  Great?  Not for the server admin.
-
-
-<P>
-While Wget is retrieving static pages, there's not much of a problem.
-But for Wget, there is no real difference between a static page and the
-most demanding CGI.  For instance, a site I know has a section handled
-by an, uh, <EM>bitchin'</EM> CGI script that converts all the Info files to
-HTML.  The script can and does bring the machine to its knees without
-providing anything useful to the downloader.
-
-
-<P>
-For such and similar cases various robot exclusion schemes have been
-devised as a means for the server administrators and document authors to
-protect chosen portions of their sites from the wandering of robots.
-
-
-<P>
-The more popular mechanism is the <EM>Robots Exclusion Standard</EM>, or
-RES, written by Martijn Koster et al. in 1994.  It specifies the
-format of a text file containing directives that instruct the robots
-which URL paths to avoid.  To be found by the robots, the specifications
-must be placed in <TT>`/robots.txt'</TT> in the server root, which the
-robots are supposed to download and parse.
-
-
-<P>
-Wget supports RES when downloading recursively.  So, when you
-issue:
-
-
-
-<PRE>
-wget -r http://www.server.com/
-</PRE>
-
-<P>
-First the index of <SAMP>`www.server.com'</SAMP> will be downloaded.  If Wget
-finds that it wants to download more documents from that server, it will
-request <SAMP>`http://www.server.com/robots.txt'</SAMP> and, if found, use it
-for further downloads.  <TT>`robots.txt'</TT> is loaded only once per
-server.
-
-
-<P>
-Until version 1.8, Wget supported the first version of the standard,
-written by Martijn Koster in 1994 and available at
-<A 
HREF="http://www.robotstxt.org/wc/norobots.html";>http://www.robotstxt.org/wc/norobots.html</A>.
  As of version 1.8,
-Wget has supported the additional directives specified in the internet
-draft <SAMP>`&#60;draft-koster-robots-00.txt&#62;'</SAMP> titled "A Method for 
Web
-Robots Control".  The draft, which as far as I know has never made it to
-an RFC, is available at
-<A 
HREF="http://www.robotstxt.org/wc/norobots-rfc.txt";>http://www.robotstxt.org/wc/norobots-rfc.txt</A>.
-
-
-<P>
-This manual no longer includes the text of the Robot Exclusion Standard.
-
-
-<P>
-The second, lesser-known mechanism enables the author of an individual
-document to specify whether they want the links from the file to be
-followed by a robot.  This is achieved using the <CODE>META</CODE> tag, like
-this:
-
-
-
-<PRE>
-&#60;meta name="robots" content="nofollow"&#62;
-</PRE>
-
-<P>
-This is explained in some detail at
-<A 
HREF="http://www.robotstxt.org/wc/meta-user.html";>http://www.robotstxt.org/wc/meta-user.html</A>.
  Wget supports this
-method of robot exclusion in addition to the usual <TT>`/robots.txt'</TT>
-exclusion.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_40.html">previous</A>, <A HREF="wget_42.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>
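The removed Robots node above describes how a recursive Wget fetches `/robots.txt` once per server and consults it before further downloads. The same check can be reproduced, purely as an illustrative sketch (Wget's own implementation is in C and does not use this module), with Python's standard urllib.robotparser; the server name and paths below are hypothetical:

```python
# Sketch of the Robots Exclusion Standard behavior described above:
# a recursive client parses /robots.txt and consults it before each
# further download from that server.
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt, as an admin might publish it to keep
# crawlers away from an expensive CGI section.
robots_txt = """\
User-agent: *
Disallow: /cgi-bin/
Disallow: /info2html/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A well-behaved recursive retriever checks every candidate URL path:
print(rp.can_fetch("Wget", "http://www.server.com/index.html"))      # True
print(rp.can_fetch("Wget", "http://www.server.com/cgi-bin/convert")) # False
```

The `can_fetch` call mirrors the manual's description: the file is parsed once, then each URL the retriever wants is tested against the directives before being requested.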

Index: manual/wget-1.8.1/html_node/wget_42.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_42.html
diff -N manual/wget-1.8.1/html_node/wget_42.html
--- manual/wget-1.8.1/html_node/wget_42.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,52 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Security Considerations</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_41.html">previous</A>, <A HREF="wget_43.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC42" HREF="wget_toc.html#TOC42">Security Considerations</A></H2>
-<P>
-<A NAME="IDX148"></A>
-
-
-<P>
-When using Wget, you must be aware that it sends unencrypted passwords
-through the network, which may present a security problem.  Here are the
-main issues, and some solutions.
-
-
-
-<OL>
-<LI>
-
-The passwords on the command line are visible using <CODE>ps</CODE>.  If this
-is a problem, avoid passing passwords on the command line--e.g. you
-can use <TT>`.netrc'</TT> instead.
-
-<LI>
-
-With the insecure <EM>basic</EM> authentication scheme, unencrypted
-passwords are transmitted through the network routers and gateways.
-
-<LI>
-
-The FTP passwords are also in no way encrypted.  There is no good
-solution for this at the moment.
-
-<LI>
-
-Although the "normal" output of Wget tries to hide the passwords,
-debugging logs show them, in all forms.  This problem is avoided by
-being careful when you send debug logs (yes, even when you send them to
-me).
-</OL>
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_41.html">previous</A>, <A HREF="wget_43.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>
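The removed Security Considerations node above suggests a `.netrc` file as the workaround for passwords being visible via `ps`. As a hedged illustration (the host and credentials below are hypothetical), Python's standard netrc module shows the file format and how a client reads it:

```python
# Sketch of the `.netrc` workaround mentioned above: credentials live
# in a file instead of on the command line, where `ps` would expose
# them.  Host and credentials here are hypothetical.
import netrc
import os
import tempfile

netrc_text = "machine ftp.example.org login anonymous password user@host\n"

# Write a throwaway netrc file so the example is self-contained; in
# practice this would be ~/.netrc with restrictive permissions.
with tempfile.NamedTemporaryFile("w", suffix=".netrc", delete=False) as f:
    f.write(netrc_text)
    path = f.name

try:
    auth = netrc.netrc(path)
    login, account, password = auth.authenticators("ftp.example.org")
    print(login)  # anonymous
finally:
    os.unlink(path)
```

Note that a real `~/.netrc` containing passwords should be readable only by its owner, which addresses the first issue in the list while leaving the on-the-wire exposure of basic-auth and FTP passwords untouched.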

Index: manual/wget-1.8.1/html_node/wget_43.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_43.html
diff -N manual/wget-1.8.1/html_node/wget_43.html
--- manual/wget-1.8.1/html_node/wget_43.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,195 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Contributors</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_42.html">previous</A>, <A HREF="wget_44.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC43" HREF="wget_toc.html#TOC43">Contributors</A></H2>
-<P>
-<A NAME="IDX149"></A>
-
-
-<P>
-GNU Wget was written by Hrvoje Nikšić <A HREF="mailto:address@hidden";>address@hidden</A>.
-However, its development could never have gone as far as it has, were it
-not for the help of many people, either with bug reports, feature
-proposals, patches, or letters saying "Thanks!".
-
-
-<P>
-Special thanks go to the following people (in no particular order):
-
-
-
-<UL>
-<LI>
-
-Karsten Thygesen--donated system resources such as the mailing list,
-web space, and FTP space, along with a lot of time to make these
-actually work.
-
-<LI>
-
-Shawn McHorse--bug reports and patches.
-
-<LI>
-
-Kaveh R. Ghazi--on-the-fly <CODE>ansi2knr</CODE>-ization.  Lots of
-portability fixes.
-
-<LI>
-
-Gordon Matzigkeit---<TT>`.netrc'</TT> support.
-
-<LI>
-
-Zlatko @address@hidden'{c}, Tomislav Vujec and address@hidden
address@hidden suggestions and "philosophical" discussions.
-
-<LI>
-
-Darko Budor--initial port to Windows.
-
-<LI>
-
-Antonio Rosella--help and suggestions, plus the Italian translation.
-
-<LI>
-
-Tomislav Petrović, Mario address@hidden'{c}---many bug reports and
-suggestions.
-
-<LI>
-
-François Pinard--many thorough bug reports and discussions.
-
-<LI>
-
-Karl Eichwalder--lots of help with internationalization and other
-things.
-
-<LI>
-
-Junio Hamano--donated support for Opie and HTTP <CODE>Digest</CODE>
-authentication.
-
-<LI>
-
-The people who provided donations for development, including Brian
-Gough.
-</UL>
-
-<P>
-The following people have provided patches, bug/build reports, useful
-suggestions, beta testing services, fan mail and all the other things
-that make maintenance so much fun:
-
-
-<P>
-Ian Abbott
-Tim Adam,
-Adrian Aichner,
-Martin Baehr,
-Dieter Baron,
-Roger Beeman,
-Dan Berger,
-T. Bharath,
-Paul Bludov,
-Daniel Bodea,
-Mark Boyns,
-John Burden,
-Wanderlei Cavassin,
-Gilles Cedoc,
-Tim Charron,
-Noel Cragg,
-Kristijan @address@hidden,
-John Daily,
-Andrew Davison,
-Andrew Deryabin,
-Ulrich Drepper,
-Marc Duponcheel,
-Damir address@hidden,
-Alan Eldridge,
-Aleksandar Erkalović,
-Andy Eskilsson,
-Christian Fraenkel,
-Masashi Fujita,
-Howard Gayle,
-Marcel Gerrits,
-Lemble Gregory,
-Hans Grobler,
-Mathieu Guillaume,
-Dan Harkless,
-Herold Heiko,
-Jochen Hein,
-Karl Heuer,
-HIROSE Masaaki,
-Gregor Hoffleit,
-Erik Magnus Hulthen,
-Richard Huveneers,
-Jonas Jensen,
-Simon Josefsson,
-Mario Jurić,
-Hack Kampbjørn,
-Const Kaplinsky,
-Goran Kezunović,
-Robert Kleine,
-KOJIMA Haime,
-Fila Kolodny,
-Alexander Kourakos,
-Martin Kraemer,
-Hrvoje Lacko,
-Daniel S. Lewart,
-Nicolás Lichtmeier,
-Dave Love,
-Alexander V. Lukyanov,
-Jordan Mendelson,
-Lin Zhe Min,
-Tim Mooney,
-Simon Munton,
-Charlie Negyesi,
-R. K. Owen,
-Andrew Pollock,
-Steve Pothier,
-Jan address@hidden,
-Marin Purgar,
-Csaba Ráduly,
-Keith Refson,
-Tyler Riddle,
-Tobias Ringstrom,
-Edward J. Sabol,
-Heinz Salzmann,
-Robert Schmidt,
-Andreas Schwab,
-Chris Seawood,
-Toomas Soome,
-Tage Stabell-Kulo,
-Sven Sternberger,
-Markus Strasser,
-John Summerfield,
-Szakacsits Szabolcs,
-Mike Thomas,
-Philipp Thomas,
-Dave Turner,
-Russell Vincent,
-Charles G Waldman,
-Douglas E. Wegscheid,
-Jasmin Zainul,
-Bojan Ždrnja,
-Kristijan Zimmer.
-
-
-<P>
-Apologies to all whom I accidentally left out, and many thanks to all the
-subscribers of the Wget mailing list.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_42.html">previous</A>, <A HREF="wget_44.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_44.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_44.html
diff -N manual/wget-1.8.1/html_node/wget_44.html
--- manual/wget-1.8.1/html_node/wget_44.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,96 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Copying</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_43.html">previous</A>, <A HREF="wget_45.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H1><A NAME="SEC44" HREF="wget_toc.html#TOC44">Copying</A></H1>
-<P>
-<A NAME="IDX150"></A>
-<A NAME="IDX151"></A>
-<A NAME="IDX152"></A>
-<A NAME="IDX153"></A>
-
-
-<P>
-GNU Wget is licensed under the GNU GPL, which makes it <EM>free
-software</EM>.
-
-
-<P>
-Please note that "free" in "free software" refers to liberty, not
-price.  As some GNU project advocates like to point out, think of "free
-speech" rather than "free beer".  The exact and legally binding
-distribution terms are spelled out below; in short, you have the right
-(freedom) to run and change Wget and distribute it to other people, and
-even--if you want--charge money for doing either.  The important
-restriction is that you have to grant your recipients the same rights
-and impose the same restrictions.
-
-
-<P>
-This method of licensing software is also known as <EM>open source</EM>
-because, among other things, it makes sure that all recipients will
-receive the source code along with the program, and be able to improve
-it.  The GNU project prefers the term "free software" for reasons
-outlined at
-<A 
HREF="http://www.gnu.org/philosophy/free-software-for-freedom.html";>http://www.gnu.org/philosophy/free-software-for-freedom.html</A>.
-
-
-<P>
-The exact license terms are defined by this paragraph and the GNU
-General Public License it refers to:
-
-
-
-<BLOCKQUOTE>
-<P>
-GNU Wget is free software; you can redistribute it and/or modify it
-under the terms of the GNU General Public License as published by the
-Free Software Foundation; either version 2 of the License, or (at your
-option) any later version.
-
-
-<P>
-GNU Wget is distributed in the hope that it will be useful, but WITHOUT
-ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
-FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
-for more details.
-
-
-<P>
-A copy of the GNU General Public License is included as part of this
-manual; if you did not receive it, write to the Free Software
-Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
-</BLOCKQUOTE>
-
-<P>
-In addition to this, this manual is free in the same sense:
-
-
-
-<BLOCKQUOTE>
-<P>
-Permission is granted to copy, distribute and/or modify this document
-under the terms of the GNU Free Documentation License, Version 1.1 or
-any later version published by the Free Software Foundation; with the
-Invariant Sections being "GNU General Public License" and "GNU Free
-Documentation License", with no Front-Cover Texts, and with no
-Back-Cover Texts.  A copy of the license is included in the section
-entitled "GNU Free Documentation License".
-</BLOCKQUOTE>
-
-<P>
-The full texts of the GNU General Public License and of the GNU Free
-Documentation License are available below.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_43.html">previous</A>, <A HREF="wget_45.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_45.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_45.html
diff -N manual/wget-1.8.1/html_node/wget_45.html
--- manual/wget-1.8.1/html_node/wget_45.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,460 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - GNU General Public License</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_44.html">previous</A>, <A HREF="wget_46.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC45" HREF="wget_toc.html#TOC45">GNU General Public 
License</A></H2>
-<P>
-Version 2, June 1991
-
-
-
-<PRE>
-Copyright (C) 1989, 1991 Free Software Foundation, Inc.
-675 Mass Ave, Cambridge, MA 02139, USA
-
-Everyone is permitted to copy and distribute verbatim copies
-of this license document, but changing it is not allowed.
-</PRE>
-
-
-
-<H2><A NAME="SEC46" HREF="wget_toc.html#TOC46">Preamble</A></H2>
-
-<P>
-  The licenses for most software are designed to take away your
-freedom to share and change it.  By contrast, the GNU General Public
-License is intended to guarantee your freedom to share and change free
-software--to make sure the software is free for all its users.  This
-General Public License applies to most of the Free Software
-Foundation's software and to any other program whose authors commit to
-using it.  (Some other Free Software Foundation software is covered by
-the GNU Library General Public License instead.)  You can apply it to
-your programs, too.
-
-
-<P>
-  When we speak of free software, we are referring to freedom, not
-price.  Our General Public Licenses are designed to make sure that you
-have the freedom to distribute copies of free software (and charge for
-this service if you wish), that you receive source code or can get it
-if you want it, that you can change the software or use pieces of it
-in new free programs; and that you know you can do these things.
-
-
-<P>
-  To protect your rights, we need to make restrictions that forbid
-anyone to deny you these rights or to ask you to surrender the rights.
-These restrictions translate to certain responsibilities for you if you
-distribute copies of the software, or if you modify it.
-
-
-<P>
-  For example, if you distribute copies of such a program, whether
-gratis or for a fee, you must give the recipients all the rights that
-you have.  You must make sure that they, too, receive or can get the
-source code.  And you must show them these terms so they know their
-rights.
-
-
-<P>
-  We protect your rights with two steps: (1) copyright the software, and
-(2) offer you this license which gives you legal permission to copy,
-distribute and/or modify the software.
-
-
-<P>
-  Also, for each author's protection and ours, we want to make certain
-that everyone understands that there is no warranty for this free
-software.  If the software is modified by someone else and passed on, we
-want its recipients to know that what they have is not the original, so
-that any problems introduced by others will not reflect on the original
-authors' reputations.
-
-
-<P>
-  Finally, any free program is threatened constantly by software
-patents.  We wish to avoid the danger that redistributors of a free
-program will individually obtain patent licenses, in effect making the
-program proprietary.  To prevent this, we have made it clear that any
-patent must be licensed for everyone's free use or not licensed at all.
-
-
-<P>
-  The precise terms and conditions for copying, distribution and
-modification follow.
-
-
-
-
-<H2><A NAME="SEC47" HREF="wget_toc.html#TOC47">TERMS AND CONDITIONS FOR 
COPYING, DISTRIBUTION AND MODIFICATION</A></H2>
-
-
-<OL>
-<LI>
-
-This License applies to any program or other work which contains
-a notice placed by the copyright holder saying it may be distributed
-under the terms of this General Public License.  The "Program", below,
-refers to any such program or work, and a "work based on the Program"
-means either the Program or any derivative work under copyright law:
-that is to say, a work containing the Program or a portion of it,
-either verbatim or with modifications and/or translated into another
-language.  (Hereinafter, translation is included without limitation in
-the term "modification".)  Each licensee is addressed as "you".
-
-Activities other than copying, distribution and modification are not
-covered by this License; they are outside its scope.  The act of
-running the Program is not restricted, and the output from the Program
-is covered only if its contents constitute a work based on the
-Program (independent of having been made by running the Program).
-Whether that is true depends on what the Program does.
-
-<LI>
-
-You may copy and distribute verbatim copies of the Program's
-source code as you receive it, in any medium, provided that you
-conspicuously and appropriately publish on each copy an appropriate
-copyright notice and disclaimer of warranty; keep intact all the
-notices that refer to this License and to the absence of any warranty;
-and give any other recipients of the Program a copy of this License
-along with the Program.
-
-You may charge a fee for the physical act of transferring a copy, and
-you may at your option offer warranty protection in exchange for a fee.
-
-<LI>
-
-You may modify your copy or copies of the Program or any portion
-of it, thus forming a work based on the Program, and copy and
-distribute such modifications or work under the terms of Section 1
-above, provided that you also meet all of these conditions:
-
-
-<OL>
-<LI>
-
-You must cause the modified files to carry prominent notices
-stating that you changed the files and the date of any change.
-
-<LI>
-
-You must cause any work that you distribute or publish, that in
-whole or in part contains or is derived from the Program or any
-part thereof, to be licensed as a whole at no charge to all third
-parties under the terms of this License.
-
-<LI>
-
-If the modified program normally reads commands interactively
-when run, you must cause it, when started running for such
-interactive use in the most ordinary way, to print or display an
-announcement including an appropriate copyright notice and a
-notice that there is no warranty (or else, saying that you provide
-a warranty) and that users may redistribute the program under
-these conditions, and telling the user how to view a copy of this
-License.  (Exception: if the Program itself is interactive but
-does not normally print such an announcement, your work based on
-the Program is not required to print an announcement.)
-</OL>
-
-These requirements apply to the modified work as a whole.  If
-identifiable sections of that work are not derived from the Program,
-and can be reasonably considered independent and separate works in
-themselves, then this License, and its terms, do not apply to those
-sections when you distribute them as separate works.  But when you
-distribute the same sections as part of a whole which is a work based
-on the Program, the distribution of the whole must be on the terms of
-this License, whose permissions for other licensees extend to the
-entire whole, and thus to each and every part regardless of who wrote it.
-
-Thus, it is not the intent of this section to claim rights or contest
-your rights to work written entirely by you; rather, the intent is to
-exercise the right to control the distribution of derivative or
-collective works based on the Program.
-
-In addition, mere aggregation of another work not based on the Program
-with the Program (or with a work based on the Program) on a volume of
-a storage or distribution medium does not bring the other work under
-the scope of this License.
-
-<LI>
-
-You may copy and distribute the Program (or a work based on it,
-under Section 2) in object code or executable form under the terms of
-Sections 1 and 2 above provided that you also do one of the following:
-
-
-<OL>
-<LI>
-
-Accompany it with the complete corresponding machine-readable
-source code, which must be distributed under the terms of Sections
-1 and 2 above on a medium customarily used for software interchange; or,
-
-<LI>
-
-Accompany it with a written offer, valid for at least three
-years, to give any third party, for a charge no more than your
-cost of physically performing source distribution, a complete
-machine-readable copy of the corresponding source code, to be
-distributed under the terms of Sections 1 and 2 above on a medium
-customarily used for software interchange; or,
-
-<LI>
-
-Accompany it with the information you received as to the offer
-to distribute corresponding source code.  (This alternative is
-allowed only for noncommercial distribution and only if you
-received the program in object code or executable form with such
-an offer, in accord with Subsection b above.)
-</OL>
-
-The source code for a work means the preferred form of the work for
-making modifications to it.  For an executable work, complete source
-code means all the source code for all modules it contains, plus any
-associated interface definition files, plus the scripts used to
-control compilation and installation of the executable.  However, as a
-special exception, the source code distributed need not include
-anything that is normally distributed (in either source or binary
-form) with the major components (compiler, kernel, and so on) of the
-operating system on which the executable runs, unless that component
-itself accompanies the executable.
-
-If distribution of executable or object code is made by offering
-access to copy from a designated place, then offering equivalent
-access to copy the source code from the same place counts as
-distribution of the source code, even though third parties are not
-compelled to copy the source along with the object code.
-
-<LI>
-
-You may not copy, modify, sublicense, or distribute the Program
-except as expressly provided under this License.  Any attempt
-otherwise to copy, modify, sublicense or distribute the Program is
-void, and will automatically terminate your rights under this License.
-However, parties who have received copies, or rights, from you under
-this License will not have their licenses terminated so long as such
-parties remain in full compliance.
-
-<LI>
-
-You are not required to accept this License, since you have not
-signed it.  However, nothing else grants you permission to modify or
-distribute the Program or its derivative works.  These actions are
-prohibited by law if you do not accept this License.  Therefore, by
-modifying or distributing the Program (or any work based on the
-Program), you indicate your acceptance of this License to do so, and
-all its terms and conditions for copying, distributing or modifying
-the Program or works based on it.
-
-<LI>
-
-Each time you redistribute the Program (or any work based on the
-Program), the recipient automatically receives a license from the
-original licensor to copy, distribute or modify the Program subject to
-these terms and conditions.  You may not impose any further
-restrictions on the recipients' exercise of the rights granted herein.
-You are not responsible for enforcing compliance by third parties to
-this License.
-
-<LI>
-
-If, as a consequence of a court judgment or allegation of patent
-infringement or for any other reason (not limited to patent issues),
-conditions are imposed on you (whether by court order, agreement or
-otherwise) that contradict the conditions of this License, they do not
-excuse you from the conditions of this License.  If you cannot
-distribute so as to satisfy simultaneously your obligations under this
-License and any other pertinent obligations, then as a consequence you
-may not distribute the Program at all.  For example, if a patent
-license would not permit royalty-free redistribution of the Program by
-all those who receive copies directly or indirectly through you, then
-the only way you could satisfy both it and this License would be to
-refrain entirely from distribution of the Program.
-
-If any portion of this section is held invalid or unenforceable under
-any particular circumstance, the balance of the section is intended to
-apply and the section as a whole is intended to apply in other
-circumstances.
-
-It is not the purpose of this section to induce you to infringe any
-patents or other property right claims or to contest validity of any
-such claims; this section has the sole purpose of protecting the
-integrity of the free software distribution system, which is
-implemented by public license practices.  Many people have made
-generous contributions to the wide range of software distributed
-through that system in reliance on consistent application of that
-system; it is up to the author/donor to decide if he or she is willing
-to distribute software through any other system and a licensee cannot
-impose that choice.
-
-This section is intended to make thoroughly clear what is believed to
-be a consequence of the rest of this License.
-
-<LI>
-
-If the distribution and/or use of the Program is restricted in
-certain countries either by patents or by copyrighted interfaces, the
-original copyright holder who places the Program under this License
-may add an explicit geographical distribution limitation excluding
-those countries, so that distribution is permitted only in or among
-countries not thus excluded.  In such case, this License incorporates
-the limitation as if written in the body of this License.
-
-<LI>
-
-The Free Software Foundation may publish revised and/or new versions
-of the General Public License from time to time.  Such new versions will
-be similar in spirit to the present version, but may differ in detail to
-address new problems or concerns.
-
-Each version is given a distinguishing version number.  If the Program
-specifies a version number of this License which applies to it and "any
-later version", you have the option of following the terms and conditions
-either of that version or of any later version published by the Free
-Software Foundation.  If the Program does not specify a version number of
-this License, you may choose any version ever published by the Free Software
-Foundation.
-
-<LI>
-
-If you wish to incorporate parts of the Program into other free
-programs whose distribution conditions are different, write to the author
-to ask for permission.  For software which is copyrighted by the Free
-Software Foundation, write to the Free Software Foundation; we sometimes
-make exceptions for this.  Our decision will be guided by the two goals
-of preserving the free status of all derivatives of our free software and
-of promoting the sharing and reuse of software generally.
-
-
-
-<P><STRONG>NO WARRANTY</STRONG>
-<A NAME="IDX154"></A>
-
-<LI>
-
-BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
-FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN
-OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
-PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
-OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
-MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS
-TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
-PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
-REPAIR OR CORRECTION.
-
-<LI>
-
-IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
-WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
-REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
-INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
-OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
-TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
-YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
-PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
-POSSIBILITY OF SUCH DAMAGES.
-</OL>
-
-
-<H2>END OF TERMS AND CONDITIONS</H2>
-
-
-
-<H2><A NAME="SEC48" HREF="wget_toc.html#TOC48">How to Apply These Terms to 
Your New Programs</A></H2>
-
-<P>
-  If you develop a new program, and you want it to be of the greatest
-possible use to the public, the best way to achieve this is to make it
-free software which everyone can redistribute and change under these terms.
-
-
-<P>
-  To do so, attach the following notices to the program.  It is safest
-to attach them to the start of each source file to most effectively
-convey the exclusion of warranty; and each file should have at least
-the "copyright" line and a pointer to where the full notice is found.
-
-
-
-<PRE>
-<VAR>one line to give the program's name and an idea of what it does.</VAR>
-Copyright (C) 19<VAR>yy</VAR>  <VAR>name of author</VAR>
-
-This program is free software; you can redistribute it and/or
-modify it under the terms of the GNU General Public License
-as published by the Free Software Foundation; either version 2
-of the License, or (at your option) any later version.
-
-This program is distributed in the hope that it will be useful,
-but WITHOUT ANY WARRANTY; without even the implied warranty of
-MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-GNU General Public License for more details.
-
-You should have received a copy of the GNU General Public License
-along with this program; if not, write to the Free Software
-Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
-</PRE>
-
-<P>
-Also add information on how to contact you by electronic and paper mail.
-
-
-<P>
-If the program is interactive, make it output a short notice like this
-when it starts in an interactive mode:
-
-
-
-<PRE>
-Gnomovision version 69, Copyright (C) 19<VAR>yy</VAR> <VAR>name of author</VAR>
-Gnomovision comes with ABSOLUTELY NO WARRANTY; for details
-type `show w'.  This is free software, and you are welcome
-to redistribute it under certain conditions; type `show c'
-for details.
-</PRE>
-
-<P>
-The hypothetical commands <SAMP>`show w'</SAMP> and <SAMP>`show c'</SAMP> 
should show
-the appropriate parts of the General Public License.  Of course, the
-commands you use may be called something other than <SAMP>`show w'</SAMP> and
-<SAMP>`show c'</SAMP>; they could even be mouse-clicks or menu items--whatever
-suits your program.
-
-
-<P>
-You should also get your employer (if you work as a programmer) or your
-school, if any, to sign a "copyright disclaimer" for the program, if
-necessary.  Here is a sample; alter the names:
-
-
-
-<PRE>
-Yoyodyne, Inc., hereby disclaims all copyright
-interest in the program `Gnomovision'
-(which makes passes at compilers) written
-by James Hacker.
-
-<VAR>signature of Ty Coon</VAR>, 1 April 1989
-Ty Coon, President of Vice
-</PRE>
-
-<P>
-This General Public License does not permit incorporating your program into
-proprietary programs.  If your program is a subroutine library, you may
-consider it more useful to permit linking proprietary applications with the
-library.  If this is what you want to do, use the GNU Library General
-Public License instead of this License.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_44.html">previous</A>, <A HREF="wget_46.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_46.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_46.html
diff -N manual/wget-1.8.1/html_node/wget_46.html
--- manual/wget-1.8.1/html_node/wget_46.html    29 Jun 2005 21:04:15 -0000      
1.2
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,393 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - GNU Free Documentation License</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_45.html">previous</A>, <A HREF="wget_47.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC49" HREF="wget_toc.html#TOC49">GNU Free Documentation 
License</A></H2>
-<P>
-Version 1.1, March 2000
-
-
-
-<PRE>
-Copyright (C) 2000  Free Software Foundation, Inc.
-51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA
-
-Everyone is permitted to copy and distribute verbatim copies
-of this license document, but changing it is not allowed.
-</PRE>
-
-
-<OL>
-<LI>
-
-PREAMBLE
-
-The purpose of this License is to make a manual, textbook, or other
-written document "free" in the sense of freedom: to assure everyone
-the effective freedom to copy and redistribute it, with or without
-modifying it, either commercially or noncommercially.  Secondarily,
-this License preserves for the author and publisher a way to get
-credit for their work, while not being considered responsible for
-modifications made by others.
-
-This License is a kind of "copyleft", which means that derivative
-works of the document must themselves be free in the same sense.  It
-complements the GNU General Public License, which is a copyleft
-license designed for free software.
-
-We have designed this License in order to use it for manuals for free
-software, because free software needs free documentation: a free
-program should come with manuals providing the same freedoms that the
-software does.  But this License is not limited to software manuals;
-it can be used for any textual work, regardless of subject matter or
-whether it is published as a printed book.  We recommend this License
-principally for works whose purpose is instruction or reference.
-
-<LI>
-
-APPLICABILITY AND DEFINITIONS
-
-This License applies to any manual or other work that contains a
-notice placed by the copyright holder saying it can be distributed
-under the terms of this License.  The "Document", below, refers to any
-such manual or work.  Any member of the public is a licensee, and is
-addressed as "you".
-
-A "Modified Version" of the Document means any work containing the
-Document or a portion of it, either copied verbatim, or with
-modifications and/or translated into another language.
-
-A "Secondary Section" is a named appendix or a front-matter section of
-the Document that deals exclusively with the relationship of the
-publishers or authors of the Document to the Document's overall subject
-(or to related matters) and contains nothing that could fall directly
-within that overall subject.  (For example, if the Document is in part a
-textbook of mathematics, a Secondary Section may not explain any
-mathematics.)  The relationship could be a matter of historical
-connection with the subject or with related matters, or of legal,
-commercial, philosophical, ethical or political position regarding
-them.
-
-The "Invariant Sections" are certain Secondary Sections whose titles
-are designated, as being those of Invariant Sections, in the notice
-that says that the Document is released under this License.
-
-The "Cover Texts" are certain short passages of text that are listed,
-as Front-Cover Texts or Back-Cover Texts, in the notice that says that
-the Document is released under this License.
-
-A "Transparent" copy of the Document means a machine-readable copy,
-represented in a format whose specification is available to the
-general public, whose contents can be viewed and edited directly and
-straightforwardly with generic text editors or (for images composed of
-pixels) generic paint programs or (for drawings) some widely available
-drawing editor, and that is suitable for input to text formatters or
-for automatic translation to a variety of formats suitable for input
-to text formatters.  A copy made in an otherwise Transparent file
-format whose markup has been designed to thwart or discourage
-subsequent modification by readers is not Transparent.  A copy that is
-not "Transparent" is called "Opaque".
-
-Examples of suitable formats for Transparent copies include plain
-ASCII without markup, Texinfo input format, LaTeX input format, SGML
-or XML using a publicly available DTD, and standard-conforming simple
-HTML designed for human modification.  Opaque formats include
-PostScript, PDF, proprietary formats that can be read and edited only
-by proprietary word processors, SGML or XML for which the DTD and/or
-processing tools are not generally available, and the
-machine-generated HTML produced by some word processors for output
-purposes only.
-
-The "Title Page" means, for a printed book, the title page itself,
-plus such following pages as are needed to hold, legibly, the material
-this License requires to appear in the title page.  For works in
-formats which do not have any title page as such, "Title Page" means
-the text near the most prominent appearance of the work's title,
-preceding the beginning of the body of the text.
-<LI>
-
-VERBATIM COPYING
-
-You may copy and distribute the Document in any medium, either
-commercially or noncommercially, provided that this License, the
-copyright notices, and the license notice saying this License applies
-to the Document are reproduced in all copies, and that you add no other
-conditions whatsoever to those of this License.  You may not use
-technical measures to obstruct or control the reading or further
-copying of the copies you make or distribute.  However, you may accept
-compensation in exchange for copies.  If you distribute a large enough
-number of copies you must also follow the conditions in section 3.
-
-You may also lend copies, under the same conditions stated above, and
-you may publicly display copies.
-<LI>
-
-COPYING IN QUANTITY
-
-If you publish printed copies of the Document numbering more than 100,
-and the Document's license notice requires Cover Texts, you must enclose
-the copies in covers that carry, clearly and legibly, all these Cover
-Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on
-the back cover.  Both covers must also clearly and legibly identify
-you as the publisher of these copies.  The front cover must present
-the full title with all words of the title equally prominent and
-visible.  You may add other material on the covers in addition.
-Copying with changes limited to the covers, as long as they preserve
-the title of the Document and satisfy these conditions, can be treated
-as verbatim copying in other respects.
-
-If the required texts for either cover are too voluminous to fit
-legibly, you should put the first ones listed (as many as fit
-reasonably) on the actual cover, and continue the rest onto adjacent
-pages.
-
-If you publish or distribute Opaque copies of the Document numbering
-more than 100, you must either include a machine-readable Transparent
-copy along with each Opaque copy, or state in or with each Opaque copy
-a publicly-accessible computer-network location containing a complete
-Transparent copy of the Document, free of added material, which the
-general network-using public has access to download anonymously at no
-charge using public-standard network protocols.  If you use the latter
-option, you must take reasonably prudent steps, when you begin
-distribution of Opaque copies in quantity, to ensure that this
-Transparent copy will remain thus accessible at the stated location
-until at least one year after the last time you distribute an Opaque
-copy (directly or through your agents or retailers) of that edition to
-the public.
-
-It is requested, but not required, that you contact the authors of the
-Document well before redistributing any large number of copies, to give
-them a chance to provide you with an updated version of the Document.
-<LI>
-
-MODIFICATIONS
-
-You may copy and distribute a Modified Version of the Document under
-the conditions of sections 2 and 3 above, provided that you release
-the Modified Version under precisely this License, with the Modified
-Version filling the role of the Document, thus licensing distribution
-and modification of the Modified Version to whoever possesses a copy
-of it.  In addition, you must do these things in the Modified Version:
-
-A. Use in the Title Page (and on the covers, if any) a title distinct
-   from that of the Document, and from those of previous versions
-   (which should, if there were any, be listed in the History section
-   of the Document).  You may use the same title as a previous version
-   if the original publisher of that version gives permission.<BR>
-B. List on the Title Page, as authors, one or more persons or entities
-   responsible for authorship of the modifications in the Modified
-   Version, together with at least five of the principal authors of the
-   Document (all of its principal authors, if it has less than five).<BR>
-C. State on the Title page the name of the publisher of the
-   Modified Version, as the publisher.<BR>
-D. Preserve all the copyright notices of the Document.<BR>
-E. Add an appropriate copyright notice for your modifications
-   adjacent to the other copyright notices.<BR>
-F. Include, immediately after the copyright notices, a license notice
-   giving the public permission to use the Modified Version under the
-   terms of this License, in the form shown in the Addendum below.<BR>
-G. Preserve in that license notice the full lists of Invariant Sections
-   and required Cover Texts given in the Document's license notice.<BR>
-H. Include an unaltered copy of this License.<BR>
-I. Preserve the section entitled "History", and its title, and add to
-   it an item stating at least the title, year, new authors, and
-   publisher of the Modified Version as given on the Title Page.  If
-   there is no section entitled "History" in the Document, create one
-   stating the title, year, authors, and publisher of the Document as
-   given on its Title Page, then add an item describing the Modified
-   Version as stated in the previous sentence.<BR>
-J. Preserve the network location, if any, given in the Document for
-   public access to a Transparent copy of the Document, and likewise
-   the network locations given in the Document for previous versions
-   it was based on.  These may be placed in the "History" section.
-   You may omit a network location for a work that was published at
-   least four years before the Document itself, or if the original
-   publisher of the version it refers to gives permission.<BR>
-K. In any section entitled "Acknowledgements" or "Dedications",
-   preserve the section's title, and preserve in the section all the
-   substance and tone of each of the contributor acknowledgements
-   and/or dedications given therein.<BR>
-L. Preserve all the Invariant Sections of the Document,
-   unaltered in their text and in their titles.  Section numbers
-   or the equivalent are not considered part of the section titles.<BR>
-M. Delete any section entitled "Endorsements".  Such a section
-   may not be included in the Modified Version.<BR>
-N. Do not retitle any existing section as "Endorsements"
-   or to conflict in title with any Invariant Section.<BR>
-If the Modified Version includes new front-matter sections or
-appendices that qualify as Secondary Sections and contain no material
-copied from the Document, you may at your option designate some or all
-of these sections as invariant.  To do this, add their titles to the
-list of Invariant Sections in the Modified Version's license notice.
-These titles must be distinct from any other section titles.
-
-You may add a section entitled "Endorsements", provided it contains
-nothing but endorsements of your Modified Version by various
-parties--for example, statements of peer review or that the text has
-been approved by an organization as the authoritative definition of a
-standard.
-
-You may add a passage of up to five words as a Front-Cover Text, and a
-passage of up to 25 words as a Back-Cover Text, to the end of the list
-of Cover Texts in the Modified Version.  Only one passage of
-Front-Cover Text and one of Back-Cover Text may be added by (or
-through arrangements made by) any one entity.  If the Document already
-includes a cover text for the same cover, previously added by you or
-by arrangement made by the same entity you are acting on behalf of,
-you may not add another; but you may replace the old one, on explicit
-permission from the previous publisher that added the old one.
-
-The author(s) and publisher(s) of the Document do not by this License
-give permission to use their names for publicity for or to assert or
-imply endorsement of any Modified Version.
-<LI>
-
-COMBINING DOCUMENTS
-
-You may combine the Document with other documents released under this
-License, under the terms defined in section 4 above for modified
-versions, provided that you include in the combination all of the
-Invariant Sections of all of the original documents, unmodified, and
-list them all as Invariant Sections of your combined work in its
-license notice.
-
-The combined work need only contain one copy of this License, and
-multiple identical Invariant Sections may be replaced with a single
-copy.  If there are multiple Invariant Sections with the same name but
-different contents, make the title of each such section unique by
-adding at the end of it, in parentheses, the name of the original
-author or publisher of that section if known, or else a unique number.
-Make the same adjustment to the section titles in the list of
-Invariant Sections in the license notice of the combined work.
-
-In the combination, you must combine any sections entitled "History"
-in the various original documents, forming one section entitled
-"History"; likewise combine any sections entitled "Acknowledgements",
-and any sections entitled "Dedications".  You must delete all sections
-entitled "Endorsements."
-<LI>
-
-COLLECTIONS OF DOCUMENTS
-
-You may make a collection consisting of the Document and other documents
-released under this License, and replace the individual copies of this
-License in the various documents with a single copy that is included in
-the collection, provided that you follow the rules of this License for
-verbatim copying of each of the documents in all other respects.
-
-You may extract a single document from such a collection, and distribute
-it individually under this License, provided you insert a copy of this
-License into the extracted document, and follow this License in all
-other respects regarding verbatim copying of that document.
-<LI>
-
-AGGREGATION WITH INDEPENDENT WORKS
-
-A compilation of the Document or its derivatives with other separate
-and independent documents or works, in or on a volume of a storage or
-distribution medium, does not as a whole count as a Modified Version
-of the Document, provided no compilation copyright is claimed for the
-compilation.  Such a compilation is called an "aggregate", and this
-License does not apply to the other self-contained works thus compiled
-with the Document, on account of their being thus compiled, if they
-are not themselves derivative works of the Document.
-
-If the Cover Text requirement of section 3 is applicable to these
-copies of the Document, then if the Document is less than one quarter
-of the entire aggregate, the Document's Cover Texts may be placed on
-covers that surround only the Document within the aggregate.
-Otherwise they must appear on covers around the whole aggregate.
-<LI>
-
-TRANSLATION
-
-Translation is considered a kind of modification, so you may
-distribute translations of the Document under the terms of section 4.
-Replacing Invariant Sections with translations requires special
-permission from their copyright holders, but you may include
-translations of some or all Invariant Sections in addition to the
-original versions of these Invariant Sections.  You may include a
-translation of this License provided that you also include the
-original English version of this License.  In case of a disagreement
-between the translation and the original English version of this
-License, the original English version will prevail.
-<LI>
-
-TERMINATION
-
-You may not copy, modify, sublicense, or distribute the Document except
-as expressly provided for under this License.  Any other attempt to
-copy, modify, sublicense or distribute the Document is void, and will
-automatically terminate your rights under this License.  However,
-parties who have received copies, or rights, from you under this
-License will not have their licenses terminated so long as such
-parties remain in full compliance.
-<LI>
-
-FUTURE REVISIONS OF THIS LICENSE
-
-The Free Software Foundation may publish new, revised versions
-of the GNU Free Documentation License from time to time.  Such new
-versions will be similar in spirit to the present version, but may
-differ in detail to address new problems or concerns.  See
-http://www.gnu.org/copyleft/.
-
-Each version of the License is given a distinguishing version number.
-If the Document specifies that a particular numbered version of this
-License "or any later version" applies to it, you have the option of
-following the terms and conditions either of that specified version or
-of any later version that has been published (not as a draft) by the
-Free Software Foundation.  If the Document does not specify a version
-number of this License, you may choose any version ever published (not
-as a draft) by the Free Software Foundation.
-
-</OL>
-
-
-
-<H2><A NAME="SEC50" HREF="wget_toc.html#TOC50">ADDENDUM: How to use this 
License for your documents</A></H2>
-
-<P>
-To use this License in a document you have written, include a copy of
-the License in the document and put the following copyright and
-license notices just after the title page:
-
-
-
-<PRE>
-
-  Copyright (C)  <VAR>year</VAR>  <VAR>your name</VAR>.
-  Permission is granted to copy, distribute and/or modify this document
-  under the terms of the GNU Free Documentation License, Version 1.1
-  or any later version published by the Free Software Foundation;
-  with the Invariant Sections being <VAR>list their titles</VAR>, with the
-  Front-Cover Texts being <VAR>list</VAR>, and with the Back-Cover Texts being 
<VAR>list</VAR>.
-  A copy of the license is included in the section entitled ``GNU
-  Free Documentation License''.
-</PRE>
-
-<P>
-If you have no Invariant Sections, write "with no Invariant Sections"
-instead of saying which ones are invariant.  If you have no
-Front-Cover Texts, write "no Front-Cover Texts" instead of
-"Front-Cover Texts being <VAR>list</VAR>"; likewise for Back-Cover Texts.
-
-
-<P>
-If your document contains nontrivial examples of program code, we
-recommend releasing these examples in parallel under your choice of
-free software license, such as the GNU General Public License,
-to permit their use in free software.
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_45.html">previous</A>, <A HREF="wget_47.html">next</A>, <A 
HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of 
contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_47.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_47.html
diff -N manual/wget-1.8.1/html_node/wget_47.html
--- manual/wget-1.8.1/html_node/wget_47.html    19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,283 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Concept Index</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_46.html">previous</A>, next, last section, <A 
HREF="wget_toc.html">table of contents</A>.
-<P><HR><P>
-
-
-<H1><A NAME="SEC51" HREF="wget_toc.html#TOC51">Concept Index</A></H1>
-<P>
-Jump to:
-<A HREF="#cindex_.">.</A>
--
-<A HREF="#cindex_a">a</A>
--
-<A HREF="#cindex_b">b</A>
--
-<A HREF="#cindex_c">c</A>
--
-<A HREF="#cindex_d">d</A>
--
-<A HREF="#cindex_e">e</A>
--
-<A HREF="#cindex_f">f</A>
--
-<A HREF="#cindex_g">g</A>
--
-<A HREF="#cindex_h">h</A>
--
-<A HREF="#cindex_i">i</A>
--
-<A HREF="#cindex_l">l</A>
--
-<A HREF="#cindex_m">m</A>
--
-<A HREF="#cindex_n">n</A>
--
-<A HREF="#cindex_o">o</A>
--
-<A HREF="#cindex_p">p</A>
--
-<A HREF="#cindex_q">q</A>
--
-<A HREF="#cindex_r">r</A>
--
-<A HREF="#cindex_s">s</A>
--
-<A HREF="#cindex_t">t</A>
--
-<A HREF="#cindex_u">u</A>
--
-<A HREF="#cindex_v">v</A>
--
-<A HREF="#cindex_w">w</A>
-<P>
-<H2><A NAME="cindex_.">.</A></H2>
-<DIR>
-<LI><A HREF="wget_9.html#IDX49">.html extension</A>
-<LI><A HREF="wget_10.html#IDX70">.listing files, removing</A>
-<LI><A HREF="wget_24.html#IDX123">.netrc</A>
-<LI><A HREF="wget_24.html#IDX121">.wgetrc</A>
-</DIR>
-<H2><A NAME="cindex_a">a</A></H2>
-<DIR>
-<LI><A HREF="wget_17.html#IDX104">accept directories</A>
-<LI><A HREF="wget_16.html#IDX93">accept suffixes</A>
-<LI><A HREF="wget_16.html#IDX92">accept wildcards</A>
-<LI><A HREF="wget_6.html#IDX14">append to log</A>
-<LI><A HREF="wget_2.html#IDX5">arguments</A>
-<LI><A HREF="wget_9.html#IDX52">authentication</A>
-</DIR>
-<H2><A NAME="cindex_b">b</A></H2>
-<DIR>
-<LI><A HREF="wget_11.html#IDX79">backing up converted files</A>
-<LI><A HREF="wget_6.html#IDX20">base for relative links in input file</A>
-<LI><A HREF="wget_7.html#IDX21">bind() address</A>
-<LI><A HREF="wget_37.html#IDX140">bug reports</A>
-<LI><A HREF="wget_37.html#IDX138">bugs</A>
-</DIR>
-<H2><A NAME="cindex_c">c</A></H2>
-<DIR>
-<LI><A HREF="wget_9.html#IDX54">cache</A>
-<LI><A HREF="wget_7.html#IDX22">client IP address</A>
-<LI><A HREF="wget_7.html#IDX27">clobbering, file</A>
-<LI><A HREF="wget_2.html#IDX4">command line</A>
-<LI><A HREF="wget_9.html#IDX60">Content-Length, ignore</A>
-<LI><A HREF="wget_7.html#IDX30">continue retrieval</A>
-<LI><A HREF="wget_43.html#IDX149">contributors</A>
-<LI><A HREF="wget_11.html#IDX77">conversion of links</A>
-<LI><A HREF="wget_9.html#IDX55">cookies</A>
-<LI><A HREF="wget_9.html#IDX57">cookies, loading</A>
-<LI><A HREF="wget_9.html#IDX59">cookies, saving</A>
-<LI><A HREF="wget_44.html#IDX150">copying</A>
-<LI><A HREF="wget_8.html#IDX47">cut directories</A>
-</DIR>
-<H2><A NAME="cindex_d">d</A></H2>
-<DIR>
-<LI><A HREF="wget_6.html#IDX15">debug</A>
-<LI><A HREF="wget_11.html#IDX75">delete after retrieval</A>
-<LI><A HREF="wget_17.html#IDX100">directories</A>
-<LI><A HREF="wget_17.html#IDX105">directories, exclude</A>
-<LI><A HREF="wget_17.html#IDX102">directories, include</A>
-<LI><A HREF="wget_17.html#IDX101">directory limits</A>
-<LI><A HREF="wget_8.html#IDX48">directory prefix</A>
-<LI><A HREF="wget_7.html#IDX34">dot style</A>
-<LI><A HREF="wget_7.html#IDX28">downloading multiple times</A>
-</DIR>
-<H2><A NAME="cindex_e">e</A></H2>
-<DIR>
-<LI><A HREF="wget_29.html#IDX130">examples</A>
-<LI><A HREF="wget_17.html#IDX106">exclude directories</A>
-<LI><A HREF="wget_5.html#IDX11">execute wgetrc command</A>
-</DIR>
-<H2><A NAME="cindex_f">f</A></H2>
-<DIR>
-<LI><A HREF="wget_1.html#IDX2">features</A>
-<LI><A HREF="wget_11.html#IDX76">filling proxy cache</A>
-<LI><A HREF="wget_12.html#IDX82">follow FTP links</A>
-<LI><A HREF="wget_19.html#IDX110">following ftp links</A>
-<LI><A HREF="wget_14.html#IDX88">following links</A>
-<LI><A HREF="wget_6.html#IDX19">force html</A>
-<LI><A HREF="wget_44.html#IDX153">free software</A>
-<LI><A HREF="wget_23.html#IDX118">ftp time-stamping</A>
-</DIR>
-<H2><A NAME="cindex_g">g</A></H2>
-<DIR>
-<LI><A HREF="wget_44.html#IDX152">GFDL</A>
-<LI><A HREF="wget_10.html#IDX71">globbing, toggle</A>
-<LI><A HREF="wget_44.html#IDX151">GPL</A>
-</DIR>
-<H2><A NAME="cindex_h">h</A></H2>
-<DIR>
-<LI><A HREF="wget_39.html#IDX144">hangup</A>
-<LI><A HREF="wget_9.html#IDX62">header, add</A>
-<LI><A HREF="wget_15.html#IDX90">hosts, spanning</A>
-<LI><A HREF="wget_9.html#IDX51">http password</A>
-<LI><A HREF="wget_9.html#IDX66">http referer</A>
-<LI><A HREF="wget_22.html#IDX117">http time-stamping</A>
-<LI><A HREF="wget_9.html#IDX50">http user</A>
-</DIR>
-<H2><A NAME="cindex_i">i</A></H2>
-<DIR>
-<LI><A HREF="wget_9.html#IDX61">ignore length</A>
-<LI><A HREF="wget_17.html#IDX103">include directories</A>
-<LI><A HREF="wget_7.html#IDX31">incomplete downloads</A>
-<LI><A HREF="wget_20.html#IDX114">incremental updating</A>
-<LI><A HREF="wget_6.html#IDX18">input-file</A>
-<LI><A HREF="wget_2.html#IDX3">invoking</A>
-<LI><A HREF="wget_7.html#IDX23">IP address, client</A>
-</DIR>
-<H2><A NAME="cindex_l">l</A></H2>
-<DIR>
-<LI><A HREF="wget_35.html#IDX135">latest version</A>
-<LI><A HREF="wget_11.html#IDX78">link conversion</A>
-<LI><A HREF="wget_14.html#IDX87">links</A>
-<LI><A HREF="wget_36.html#IDX137">list</A>
-<LI><A HREF="wget_9.html#IDX56">loading cookies</A>
-<LI><A HREF="wget_25.html#IDX125">location of wgetrc</A>
-<LI><A HREF="wget_6.html#IDX13">log file</A>
-</DIR>
-<H2><A NAME="cindex_m">m</A></H2>
-<DIR>
-<LI><A HREF="wget_36.html#IDX136">mailing list</A>
-<LI><A HREF="wget_32.html#IDX132">mirroring</A>
-</DIR>
-<H2><A NAME="cindex_n">n</A></H2>
-<DIR>
-<LI><A HREF="wget_17.html#IDX108">no parent</A>
-<LI><A HREF="wget_45.html#IDX154">no warranty</A>
-<LI><A HREF="wget_7.html#IDX29">no-clobber</A>
-<LI><A HREF="wget_2.html#IDX6">nohup</A>
-<LI><A HREF="wget_7.html#IDX26">number of retries</A>
-</DIR>
-<H2><A NAME="cindex_o">o</A></H2>
-<DIR>
-<LI><A HREF="wget_38.html#IDX142">operating systems</A>
-<LI><A HREF="wget_4.html#IDX9">option syntax</A>
-<LI><A HREF="wget_6.html#IDX12">output file</A>
-<LI><A HREF="wget_1.html#IDX1">overview</A>
-</DIR>
-<H2><A NAME="cindex_p">p</A></H2>
-<DIR>
-<LI><A HREF="wget_11.html#IDX80">page requisites</A>
-<LI><A HREF="wget_10.html#IDX72">passive ftp</A>
-<LI><A HREF="wget_7.html#IDX39">pause</A>
-<LI><A HREF="wget_38.html#IDX141">portability</A>
-<LI><A HREF="wget_7.html#IDX33">progress indicator</A>
-<LI><A HREF="wget_34.html#IDX134">proxies</A>
-<LI><A HREF="wget_7.html#IDX45">proxy</A>, <A 
HREF="wget_9.html#IDX53">proxy</A>
-<LI><A HREF="wget_9.html#IDX65">proxy authentication</A>
-<LI><A HREF="wget_11.html#IDX74">proxy filling</A>
-<LI><A HREF="wget_9.html#IDX64">proxy password</A>
-<LI><A HREF="wget_9.html#IDX63">proxy user</A>
-</DIR>
-<H2><A NAME="cindex_q">q</A></H2>
-<DIR>
-<LI><A HREF="wget_6.html#IDX16">quiet</A>
-<LI><A HREF="wget_7.html#IDX46">quota</A>
-</DIR>
-<H2><A NAME="cindex_r">r</A></H2>
-<DIR>
-<LI><A HREF="wget_7.html#IDX44">random wait</A>
-<LI><A HREF="wget_13.html#IDX84">recursion</A>
-<LI><A HREF="wget_13.html#IDX86">recursive retrieval</A>
-<LI><A HREF="wget_31.html#IDX131">redirecting output</A>
-<LI><A HREF="wget_9.html#IDX67">referer, http</A>
-<LI><A HREF="wget_17.html#IDX107">reject directories</A>
-<LI><A HREF="wget_16.html#IDX97">reject suffixes</A>
-<LI><A HREF="wget_16.html#IDX96">reject wildcards</A>
-<LI><A HREF="wget_18.html#IDX109">relative links</A>
-<LI><A HREF="wget_37.html#IDX139">reporting bugs</A>
-<LI><A HREF="wget_11.html#IDX81">required images, downloading</A>
-<LI><A HREF="wget_7.html#IDX32">resume download</A>
-<LI><A HREF="wget_7.html#IDX24">retries</A>
-<LI><A HREF="wget_7.html#IDX41">retries, waiting between</A>
-<LI><A HREF="wget_13.html#IDX85">retrieving</A>
-<LI><A HREF="wget_41.html#IDX145">robots</A>
-<LI><A HREF="wget_41.html#IDX146">robots.txt</A>
-</DIR>
-<H2><A NAME="cindex_s">s</A></H2>
-<DIR>
-<LI><A HREF="wget_28.html#IDX129">sample wgetrc</A>
-<LI><A HREF="wget_9.html#IDX58">saving cookies</A>
-<LI><A HREF="wget_42.html#IDX148">security</A>
-<LI><A HREF="wget_41.html#IDX147">server maintenance</A>
-<LI><A HREF="wget_7.html#IDX35">server response, print</A>
-<LI><A HREF="wget_9.html#IDX68">server response, save</A>
-<LI><A HREF="wget_39.html#IDX143">signal handling</A>
-<LI><A HREF="wget_15.html#IDX89">spanning hosts</A>
-<LI><A HREF="wget_7.html#IDX37">spider</A>
-<LI><A HREF="wget_24.html#IDX122">startup</A>
-<LI><A HREF="wget_24.html#IDX119">startup file</A>
-<LI><A HREF="wget_16.html#IDX95">suffixes, accept</A>
-<LI><A HREF="wget_16.html#IDX99">suffixes, reject</A>
-<LI><A HREF="wget_10.html#IDX73">symbolic links, retrieving</A>
-<LI><A HREF="wget_4.html#IDX10">syntax of options</A>
-<LI><A HREF="wget_26.html#IDX127">syntax of wgetrc</A>
-</DIR>
-<H2><A NAME="cindex_t">t</A></H2>
-<DIR>
-<LI><A HREF="wget_12.html#IDX83">tag-based recursive pruning</A>
-<LI><A HREF="wget_20.html#IDX111">time-stamping</A>
-<LI><A HREF="wget_21.html#IDX115">time-stamping usage</A>
-<LI><A HREF="wget_7.html#IDX38">timeout</A>
-<LI><A HREF="wget_20.html#IDX112">timestamping</A>
-<LI><A HREF="wget_7.html#IDX25">tries</A>
-<LI><A HREF="wget_16.html#IDX91">types of files</A>
-</DIR>
-<H2><A NAME="cindex_u">u</A></H2>
-<DIR>
-<LI><A HREF="wget_20.html#IDX113">updating the archives</A>
-<LI><A HREF="wget_3.html#IDX7">URL</A>
-<LI><A HREF="wget_3.html#IDX8">URL syntax</A>
-<LI><A HREF="wget_21.html#IDX116">usage, time-stamping</A>
-<LI><A HREF="wget_9.html#IDX69">user-agent</A>
-</DIR>
-<H2><A NAME="cindex_v">v</A></H2>
-<DIR>
-<LI><A HREF="wget_33.html#IDX133">various</A>
-<LI><A HREF="wget_6.html#IDX17">verbose</A>
-</DIR>
-<H2><A NAME="cindex_w">w</A></H2>
-<DIR>
-<LI><A HREF="wget_7.html#IDX40">wait</A>
-<LI><A HREF="wget_7.html#IDX43">wait, random</A>
-<LI><A HREF="wget_7.html#IDX42">waiting between retries</A>
-<LI><A HREF="wget_7.html#IDX36">Wget as spider</A>
-<LI><A HREF="wget_24.html#IDX120">wgetrc</A>
-<LI><A HREF="wget_27.html#IDX128">wgetrc commands</A>
-<LI><A HREF="wget_25.html#IDX124">wgetrc location</A>
-<LI><A HREF="wget_26.html#IDX126">wgetrc syntax</A>
-<LI><A HREF="wget_16.html#IDX94">wildcards, accept</A>
-<LI><A HREF="wget_16.html#IDX98">wildcards, reject</A>
-</DIR>
-
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A 
HREF="wget_46.html">previous</A>, next, last section, <A 
HREF="wget_toc.html">table of contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_5.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_5.html
diff -N manual/wget-1.8.1/html_node/wget_5.html
--- manual/wget-1.8.1/html_node/wget_5.html     19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,49 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Basic Startup Options</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_4.html">previous</A>, 
<A HREF="wget_6.html">next</A>, <A HREF="wget_47.html">last</A> section, <A 
HREF="wget_toc.html">table of contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC5" HREF="wget_toc.html#TOC5">Basic Startup Options</A></H2>
-
-<DL COMPACT>
-
-<DT><SAMP>`-V'</SAMP>
-<DD>
-<DT><SAMP>`--version'</SAMP>
-<DD>
-Display the version of Wget.
-
-<DT><SAMP>`-h'</SAMP>
-<DD>
-<DT><SAMP>`--help'</SAMP>
-<DD>
-Print a help message describing all of Wget's command-line options.
-
-<DT><SAMP>`-b'</SAMP>
-<DD>
-<DT><SAMP>`--background'</SAMP>
-<DD>
-Go to background immediately after startup.  If no output file is
-specified via the <SAMP>`-o'</SAMP>, output is redirected to 
<TT>`wget-log'</TT>.
-
-<A NAME="IDX11"></A>
-<DT><SAMP>`-e <VAR>command</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--execute <VAR>command</VAR>'</SAMP>
-<DD>
-Execute <VAR>command</VAR> as if it were a part of <TT>`.wgetrc'</TT>
-(see section <A HREF="wget_24.html#SEC24">Startup File</A>).  A command thus 
invoked will be executed
-<EM>after</EM> the commands in <TT>`.wgetrc'</TT>, thus taking precedence over
-them.
-</DL>
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_4.html">previous</A>, <A HREF="wget_6.html">next</A>, <A HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-</BODY>
-</HTML>
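The basic startup options in the removed wget_5.html above (`-V', `-h', `-b', `-e') compose on a single command line. A minimal sketch, assuming a placeholder URL, that only assembles the invocation; the documented default log name `wget-log' applies when `-o' is absent:

```shell
# Sketch only: example.org is a placeholder, and the command is echoed
# rather than run, since a real download needs wget and network access.
url="http://example.org/"
logfile="wget-log"   # documented default when no -o logfile is given
echo "wget -b -o $logfile -e robots=off $url"
```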

Index: manual/wget-1.8.1/html_node/wget_6.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_6.html
diff -N manual/wget-1.8.1/html_node/wget_6.html
--- manual/wget-1.8.1/html_node/wget_6.html     19 Oct 2003 23:07:43 -0000      1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,113 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Logging and Input File Options</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_5.html">previous</A>, <A HREF="wget_7.html">next</A>, <A HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC6" HREF="wget_toc.html#TOC6">Logging and Input File Options</A></H2>
-
-<DL COMPACT>
-
-<DT><SAMP>`-o <VAR>logfile</VAR>'</SAMP>
-<DD>
-<A NAME="IDX12"></A>
- <A NAME="IDX13"></A>
- 
-<DT><SAMP>`--output-file=<VAR>logfile</VAR>'</SAMP>
-<DD>
-Log all messages to <VAR>logfile</VAR>.  The messages are normally reported
-to standard error.
-
-<A NAME="IDX14"></A>
-<DT><SAMP>`-a <VAR>logfile</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--append-output=<VAR>logfile</VAR>'</SAMP>
-<DD>
-Append to <VAR>logfile</VAR>.  This is the same as <SAMP>`-o'</SAMP>, only it appends
-to <VAR>logfile</VAR> instead of overwriting the old log file.  If
-<VAR>logfile</VAR> does not exist, a new file is created.
-
-<A NAME="IDX15"></A>
-<DT><SAMP>`-d'</SAMP>
-<DD>
-<DT><SAMP>`--debug'</SAMP>
-<DD>
-Turn on debug output, meaning various information important to the
-developers of Wget if it does not work properly.  Your system
-administrator may have chosen to compile Wget without debug support, in
-which case <SAMP>`-d'</SAMP> will not work.  Please note that compiling with
-debug support is always safe--Wget compiled with the debug support will
-<EM>not</EM> print any debug info unless requested with <SAMP>`-d'</SAMP>.
-See section <A HREF="wget_37.html#SEC37">Reporting Bugs</A>, for more information on how to use <SAMP>`-d'</SAMP> for
-sending bug reports.
-
-<A NAME="IDX16"></A>
-<DT><SAMP>`-q'</SAMP>
-<DD>
-<DT><SAMP>`--quiet'</SAMP>
-<DD>
-Turn off Wget's output.
-
-<A NAME="IDX17"></A>
-<DT><SAMP>`-v'</SAMP>
-<DD>
-<DT><SAMP>`--verbose'</SAMP>
-<DD>
-Turn on verbose output, with all the available data.  The default output
-is verbose.
-
-<DT><SAMP>`-nv'</SAMP>
-<DD>
-<DT><SAMP>`--non-verbose'</SAMP>
-<DD>
-Non-verbose output--turn off verbose without being completely quiet
-(use <SAMP>`-q'</SAMP> for that), which means that error messages and basic
-information still get printed.
-
-<A NAME="IDX18"></A>
-<DT><SAMP>`-i <VAR>file</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--input-file=<VAR>file</VAR>'</SAMP>
-<DD>
-Read URLs from <VAR>file</VAR>, in which case no URLs need to be on
-the command line.  If there are URLs both on the command line and
-in an input file, those on the command line will be the first ones to
-be retrieved.  The <VAR>file</VAR> need not be an HTML document (but no
-harm if it is)---it is enough if the URLs are just listed
-sequentially.
-
-However, if you specify <SAMP>`--force-html'</SAMP>, the document will be
-regarded as <SAMP>`html'</SAMP>.  In that case you may have problems with
-relative links, which you can solve either by adding <CODE>&#60;base
-href="<VAR>url</VAR>"&#62;</CODE> to the documents or by specifying
-<SAMP>`--base=<VAR>url</VAR>'</SAMP> on the command line.
-
-<A NAME="IDX19"></A>
-<DT><SAMP>`-F'</SAMP>
-<DD>
-<DT><SAMP>`--force-html'</SAMP>
-<DD>
-When input is read from a file, force it to be treated as an HTML
-file.  This enables you to retrieve relative links from existing
-HTML files on your local disk, by adding <CODE>&#60;base
-href="<VAR>url</VAR>"&#62;</CODE> to HTML, or using the <SAMP>`--base'</SAMP> 
command-line
-option.
-
-<A NAME="IDX20"></A>
-<DT><SAMP>`-B <VAR>URL</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--base=<VAR>URL</VAR>'</SAMP>
-<DD>
-When used in conjunction with <SAMP>`-F'</SAMP>, prepends <VAR>URL</VAR> to relative
-links in the file specified by <SAMP>`-i'</SAMP>.
-</DL>
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_5.html">previous</A>, <A HREF="wget_7.html">next</A>, <A HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-</BODY>
-</HTML>
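As a hedged illustration of the `-i'/`--input-file' behavior documented in the removed wget_6.html above: URLs are simply listed one per line, and the file need not be HTML. The hosts below are placeholders, and the actual wget call is left commented out since it would need network access:

```shell
# Build a plain-text URL list suitable for -i; one URL per line is enough.
printf '%s\n' 'http://example.org/a' 'http://example.org/b' > urls.txt
wc -l < urls.txt
# wget -i urls.txt -a fetch.log   # would retrieve both, appending to the log
```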

Index: manual/wget-1.8.1/html_node/wget_7.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_7.html
diff -N manual/wget-1.8.1/html_node/wget_7.html
--- manual/wget-1.8.1/html_node/wget_7.html     19 Oct 2003 23:07:43 -0000      1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,308 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Download Options</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_6.html">previous</A>, <A HREF="wget_8.html">next</A>, <A HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC7" HREF="wget_toc.html#TOC7">Download Options</A></H2>
-
-<DL COMPACT>
-
-<DT><SAMP>`--bind-address=<VAR>ADDRESS</VAR>'</SAMP>
-<DD>
-<A NAME="IDX21"></A>
- <A NAME="IDX22"></A>
- <A NAME="IDX23"></A>
- 
-When making client TCP/IP connections, <CODE>bind()</CODE> to <VAR>ADDRESS</VAR> on
-the local machine.  <VAR>ADDRESS</VAR> may be specified as a hostname or IP
-address.  This option can be useful if your machine is bound to multiple
-IPs.
-
-<A NAME="IDX24"></A>
-<A NAME="IDX25"></A>
-<A NAME="IDX26"></A>
-<DT><SAMP>`-t <VAR>number</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--tries=<VAR>number</VAR>'</SAMP>
-<DD>
-Set number of retries to <VAR>number</VAR>.  Specify 0 or <SAMP>`inf'</SAMP> for
-infinite retrying.
-
-<DT><SAMP>`-O <VAR>file</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--output-document=<VAR>file</VAR>'</SAMP>
-<DD>
-The documents will not be written to the appropriate files, but all will
-be concatenated together and written to <VAR>file</VAR>.  If <VAR>file</VAR>
-already exists, it will be overwritten.  If the <VAR>file</VAR> is <SAMP>`-'</SAMP>,
-the documents will be written to standard output.  Including this option
-automatically sets the number of tries to 1.
-
-<A NAME="IDX27"></A>
-<A NAME="IDX28"></A>
-<A NAME="IDX29"></A>
-<DT><SAMP>`-nc'</SAMP>
-<DD>
-<DT><SAMP>`--no-clobber'</SAMP>
-<DD>
-If a file is downloaded more than once in the same directory, Wget's
-behavior depends on a few options, including <SAMP>`-nc'</SAMP>.  In certain
-cases, the local file will be <EM>clobbered</EM>, or overwritten, upon
-repeated download.  In other cases it will be preserved.
-
-When running Wget without <SAMP>`-N'</SAMP>, <SAMP>`-nc'</SAMP>, or <SAMP>`-r'</SAMP>,
-downloading the same file in the same directory will result in the
-original copy of <VAR>file</VAR> being preserved and the second copy being
-named <SAMP>`<VAR>file</VAR>.1'</SAMP>.  If that file is downloaded yet again, the
-third copy will be named <SAMP>`<VAR>file</VAR>.2'</SAMP>, and so on.  When
-<SAMP>`-nc'</SAMP> is specified, this behavior is suppressed, and Wget will
-refuse to download newer copies of <SAMP>`<VAR>file</VAR>'</SAMP>.  Therefore,
-"<CODE>no-clobber</CODE>" is actually a misnomer in this mode--it's not
-clobbering that's prevented (as the numeric suffixes were already
-preventing clobbering), but rather the multiple version saving that's
-prevented.
-
-When running Wget with <SAMP>`-r'</SAMP>, but without <SAMP>`-N'</SAMP> or <SAMP>`-nc'</SAMP>,
-re-downloading a file will result in the new copy simply overwriting the
-old.  Adding <SAMP>`-nc'</SAMP> will prevent this behavior, instead causing the
-original version to be preserved and any newer copies on the server to
-be ignored.
-
-When running Wget with <SAMP>`-N'</SAMP>, with or without <SAMP>`-r'</SAMP>, the
-decision as to whether or not to download a newer copy of a file depends
-on the local and remote timestamp and size of the file
-(see section <A HREF="wget_20.html#SEC20">Time-Stamping</A>).  <SAMP>`-nc'</SAMP> may not be specified at the same
-time as <SAMP>`-N'</SAMP>.
-
-Note that when <SAMP>`-nc'</SAMP> is specified, files with the suffixes
-<SAMP>`.html'</SAMP> or (yuck) <SAMP>`.htm'</SAMP> will be loaded from the local disk
-and parsed as if they had been retrieved from the Web.
-
-<A NAME="IDX30"></A>
-<A NAME="IDX31"></A>
-<A NAME="IDX32"></A>
-<DT><SAMP>`-c'</SAMP>
-<DD>
-<DT><SAMP>`--continue'</SAMP>
-<DD>
-Continue getting a partially-downloaded file.  This is useful when you
-want to finish up a download started by a previous instance of Wget, or
-by another program.  For instance:
-
-
-<PRE>
-wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
-</PRE>
-
-If there is a file named <TT>`ls-lR.Z'</TT> in the current directory, Wget
-will assume that it is the first portion of the remote file, and will
-ask the server to continue the retrieval from an offset equal to the
-length of the local file.
-
-Note that you don't need to specify this option if you just want the
-current invocation of Wget to retry downloading a file should the
-connection be lost midway through.  This is the default behavior.
-<SAMP>`-c'</SAMP> only affects resumption of downloads started <EM>prior</EM> to
-this invocation of Wget, and whose local files are still sitting around.
-
-Without <SAMP>`-c'</SAMP>, the previous example would just download the remote
-file to <TT>`ls-lR.Z.1'</TT>, leaving the truncated <TT>`ls-lR.Z'</TT> file
-alone.
-
-Beginning with Wget 1.7, if you use <SAMP>`-c'</SAMP> on a non-empty file, and
-it turns out that the server does not support continued downloading,
-Wget will refuse to start the download from scratch, which would
-effectively ruin existing contents.  If you really want the download to
-start from scratch, remove the file.
-
-Also beginning with Wget 1.7, if you use <SAMP>`-c'</SAMP> on a file which is of
-equal size as the one on the server, Wget will refuse to download the
-file and print an explanatory message.  The same happens when the file
-is smaller on the server than locally (presumably because it was changed
-on the server since your last download attempt)---because "continuing"
-is not meaningful, no download occurs.
-
-On the other side of the coin, while using <SAMP>`-c'</SAMP>, any file that's
-bigger on the server than locally will be considered an incomplete
-download and only <CODE>(length(remote) - length(local))</CODE> bytes will be
-downloaded and tacked onto the end of the local file.  This behavior can
-be desirable in certain cases--for instance, you can use <SAMP>`wget -c'</SAMP>
-to download just the new portion that's been appended to a data
-collection or log file.
-
-However, if the file is bigger on the server because it's been
-<EM>changed</EM>, as opposed to just <EM>appended</EM> to, you'll end up
-with a garbled file.  Wget has no way of verifying that the local file
-is really a valid prefix of the remote file.  You need to be especially
-careful of this when using <SAMP>`-c'</SAMP> in conjunction with <SAMP>`-r'</SAMP>,
-since every file will be considered as an "incomplete download" candidate.
-
-Another instance where you'll get a garbled file if you try to use
-<SAMP>`-c'</SAMP> is if you have a lame HTTP proxy that inserts a
-"transfer interrupted" string into the local file.  In the future a
-"rollback" option may be added to deal with this case.
-
-Note that <SAMP>`-c'</SAMP> only works with FTP servers and with HTTP
-servers that support the <CODE>Range</CODE> header.
-
-<A NAME="IDX33"></A>
-<A NAME="IDX34"></A>
-<DT><SAMP>`--progress=<VAR>type</VAR>'</SAMP>
-<DD>
-Select the type of the progress indicator you wish to use.  Legal
-indicators are "dot" and "bar".
-
-The "dot" indicator is used by default.  It traces the retrieval by
-printing dots on the screen, each dot representing a fixed amount of
-downloaded data.
-
-When using the dotted retrieval, you may also set the <EM>style</EM> by
-specifying the type as <SAMP>`dot:<VAR>style</VAR>'</SAMP>.  Different styles assign
-different meaning to one dot.  With the <CODE>default</CODE> style each dot
-represents 1K, there are ten dots in a cluster and 50 dots in a line.
-The <CODE>binary</CODE> style has a more "computer"-like orientation--8K
-dots, 16-dots clusters and 48 dots per line (which makes for 384K
-lines).  The <CODE>mega</CODE> style is suitable for downloading very large
-files--each dot represents 64K retrieved, there are eight dots in a
-cluster, and 48 dots on each line (so each line contains 3M).
-
-Specifying <SAMP>`--progress=bar'</SAMP> will draw a nice ASCII progress bar
-graphic (a.k.a. "thermometer" display) to indicate retrieval.  If the
-output is not a TTY, this option will be ignored, and Wget will revert
-to the dot indicator.  If you want to force the bar indicator, use
-<SAMP>`--progress=bar:force'</SAMP>.
-
-<DT><SAMP>`-N'</SAMP>
-<DD>
-<DT><SAMP>`--timestamping'</SAMP>
-<DD>
-Turn on time-stamping.  See section <A HREF="wget_20.html#SEC20">Time-Stamping</A>, for details.
-
-<A NAME="IDX35"></A>
-<DT><SAMP>`-S'</SAMP>
-<DD>
-<DT><SAMP>`--server-response'</SAMP>
-<DD>
-Print the headers sent by HTTP servers and responses sent by
-FTP servers.
-
-<A NAME="IDX36"></A>
-<A NAME="IDX37"></A>
-<DT><SAMP>`--spider'</SAMP>
-<DD>
-When invoked with this option, Wget will behave as a Web <EM>spider</EM>,
-which means that it will not download the pages, just check that they
-are there.  You can use it to check your bookmarks, e.g. with:
-
-
-<PRE>
-wget --spider --force-html -i bookmarks.html
-</PRE>
-
-This feature needs much more work for Wget to get close to the
-functionality of real WWW spiders.
-
-<A NAME="IDX38"></A>
-<DT><SAMP>`-T <VAR>seconds</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--timeout=<VAR>seconds</VAR>'</SAMP>
-<DD>
-Set the read timeout to <VAR>seconds</VAR> seconds.  Whenever a network read
-is issued, the file descriptor is checked for a timeout, which could
-otherwise leave a pending connection (uninterrupted read).  The default
-timeout is 900 seconds (fifteen minutes).  Setting timeout to 0 will
-disable checking for timeouts.
-
-Please do not lower the default timeout value with this option unless
-you know what you are doing.
-
-<A NAME="IDX39"></A>
-<A NAME="IDX40"></A>
-<DT><SAMP>`-w <VAR>seconds</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--wait=<VAR>seconds</VAR>'</SAMP>
-<DD>
-Wait the specified number of seconds between the retrievals.  Use of
-this option is recommended, as it lightens the server load by making the
-requests less frequent.  Instead of in seconds, the time can be
-specified in minutes using the <CODE>m</CODE> suffix, in hours using <CODE>h</CODE>
-suffix, or in days using <CODE>d</CODE> suffix.
-
-Specifying a large value for this option is useful if the network or the
-destination host is down, so that Wget can wait long enough to
-reasonably expect the network error to be fixed before the retry.
-
-<A NAME="IDX41"></A>
-<A NAME="IDX42"></A>
-<DT><SAMP>`--waitretry=<VAR>seconds</VAR>'</SAMP>
-<DD>
-If you don't want Wget to wait between <EM>every</EM> retrieval, but only
-between retries of failed downloads, you can use this option.  Wget will
-use <EM>linear backoff</EM>, waiting 1 second after the first failure on a
-given file, then waiting 2 seconds after the second failure on that
-file, up to the maximum number of <VAR>seconds</VAR> you specify.  Therefore,
-a value of 10 will actually make Wget wait up to (1 + 2 + ... + 10) = 55
-seconds per file.
-
-Note that this option is turned on by default in the global
-<TT>`wgetrc'</TT> file.
-
-<A NAME="IDX43"></A>
-<A NAME="IDX44"></A>
-<DT><SAMP>`--random-wait'</SAMP>
-<DD>
-Some web sites may perform log analysis to identify retrieval programs
-such as Wget by looking for statistically significant similarities in
-the time between requests. This option causes the time between requests
-to vary between 0 and 2 * <VAR>wait</VAR> seconds, where <VAR>wait</VAR> was
-specified using the <SAMP>`-w'</SAMP> or <SAMP>`--wait'</SAMP> options, in order to mask
-Wget's presence from such analysis.
-
-A recent article in a publication devoted to development on a popular
-consumer platform provided code to perform this analysis on the fly.
-Its author suggested blocking at the class C address level to ensure
-automated retrieval programs were blocked despite changing DHCP-supplied
-addresses.
-
-The <SAMP>`--random-wait'</SAMP> option was inspired by this ill-advised
-recommendation to block many unrelated users from a web site due to the
-actions of one.
-
-<A NAME="IDX45"></A>
-<DT><SAMP>`-Y on/off'</SAMP>
-<DD>
-<DT><SAMP>`--proxy=on/off'</SAMP>
-<DD>
-Turn proxy support on or off.  The proxy is on by default if the
-appropriate environmental variable is defined.
-
-<A NAME="IDX46"></A>
-<DT><SAMP>`-Q <VAR>quota</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--quota=<VAR>quota</VAR>'</SAMP>
-<DD>
-Specify download quota for automatic retrievals.  The value can be
-specified in bytes (default), kilobytes (with <SAMP>`k'</SAMP> suffix), or
-megabytes (with <SAMP>`m'</SAMP> suffix).
-
-Note that quota will never affect downloading a single file.  So if you
-specify <SAMP>`wget -Q10k ftp://wuarchive.wustl.edu/ls-lR.gz'</SAMP>, all of the
-<TT>`ls-lR.gz'</TT> will be downloaded.  The same goes even when several
-URLs are specified on the command-line.  However, quota is
-respected when retrieving either recursively, or from an input file.
-Thus you may safely type <SAMP>`wget -Q2m -i sites'</SAMP>---download will be
-aborted when the quota is exceeded.
-
-Setting quota to 0 or to <SAMP>`inf'</SAMP> unlimits the download quota.
-</DL>
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_6.html">previous</A>, <A HREF="wget_8.html">next</A>, <A HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-</BODY>
-</HTML>
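The `--waitretry' linear backoff described in the removed wget_7.html above (1 second after the first failure on a file, 2 seconds after the second, up to the ceiling) yields the quoted 55-second worst case for a value of 10. A quick sketch of that arithmetic:

```shell
# Sum the linear backoff waits for --waitretry=10: 1 + 2 + ... + 10.
ceiling=10
total=0
s=1
while [ "$s" -le "$ceiling" ]; do
  total=$((total + s))
  s=$((s + 1))
done
echo "$total"   # 55, matching the manual's (1 + 2 + ... + 10) = 55 figure
```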

Index: manual/wget-1.8.1/html_node/wget_8.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_8.html
diff -N manual/wget-1.8.1/html_node/wget_8.html
--- manual/wget-1.8.1/html_node/wget_8.html     19 Oct 2003 23:07:43 -0000      1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,90 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Directory Options</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_7.html">previous</A>, <A HREF="wget_9.html">next</A>, <A HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC8" HREF="wget_toc.html#TOC8">Directory Options</A></H2>
-
-<DL COMPACT>
-
-<DT><SAMP>`-nd'</SAMP>
-<DD>
-<DT><SAMP>`--no-directories'</SAMP>
-<DD>
-Do not create a hierarchy of directories when retrieving recursively.
-With this option turned on, all files will get saved to the current
-directory, without clobbering (if a name shows up more than once, the
-filenames will get extensions <SAMP>`.n'</SAMP>).
-
-<DT><SAMP>`-x'</SAMP>
-<DD>
-<DT><SAMP>`--force-directories'</SAMP>
-<DD>
-The opposite of <SAMP>`-nd'</SAMP>---create a hierarchy of directories, even if
-one would not have been created otherwise.  E.g. <SAMP>`wget -x
-http://fly.srk.fer.hr/robots.txt'</SAMP> will save the downloaded file to
-<TT>`fly.srk.fer.hr/robots.txt'</TT>.
-
-<DT><SAMP>`-nH'</SAMP>
-<DD>
-<DT><SAMP>`--no-host-directories'</SAMP>
-<DD>
-Disable generation of host-prefixed directories.  By default, invoking
-Wget with <SAMP>`-r http://fly.srk.fer.hr/'</SAMP> will create a structure of
-directories beginning with <TT>`fly.srk.fer.hr/'</TT>.  This option disables
-such behavior.
-
-<A NAME="IDX47"></A>
-<DT><SAMP>`--cut-dirs=<VAR>number</VAR>'</SAMP>
-<DD>
-Ignore <VAR>number</VAR> directory components.  This is useful for getting a
-fine-grained control over the directory where recursive retrieval will
-be saved.
-
-Take, for example, the directory at
-<SAMP>`ftp://ftp.xemacs.org/pub/xemacs/'</SAMP>.  If you retrieve it with
-<SAMP>`-r'</SAMP>, it will be saved locally under
-<TT>`ftp.xemacs.org/pub/xemacs/'</TT>.  While the <SAMP>`-nH'</SAMP> option can
-remove the <TT>`ftp.xemacs.org/'</TT> part, you are still stuck with
-<TT>`pub/xemacs'</TT>.  This is where <SAMP>`--cut-dirs'</SAMP> comes in handy; it
-makes Wget not "see" <VAR>number</VAR> remote directory components.  Here
-are several examples of how the <SAMP>`--cut-dirs'</SAMP> option works.
-
-
-<PRE>
-No options        -&#62; ftp.xemacs.org/pub/xemacs/
--nH               -&#62; pub/xemacs/
--nH --cut-dirs=1  -&#62; xemacs/
--nH --cut-dirs=2  -&#62; .
-
---cut-dirs=1      -&#62; ftp.xemacs.org/xemacs/
-...
-</PRE>
-
-If you just want to get rid of the directory structure, this option is
-similar to a combination of <SAMP>`-nd'</SAMP> and <SAMP>`-P'</SAMP>.  However, unlike
-<SAMP>`-nd'</SAMP>, <SAMP>`--cut-dirs'</SAMP> does not lose with subdirectories--for
-instance, with <SAMP>`-nH --cut-dirs=1'</SAMP>, a <TT>`beta/'</TT> subdirectory will
-be placed to <TT>`xemacs/beta'</TT>, as one would expect.
-
-<A NAME="IDX48"></A>
-<DT><SAMP>`-P <VAR>prefix</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--directory-prefix=<VAR>prefix</VAR>'</SAMP>
-<DD>
-Set directory prefix to <VAR>prefix</VAR>.  The <EM>directory prefix</EM> is the
-directory where all other files and subdirectories will be saved to,
-i.e. the top of the retrieval tree.  The default is <SAMP>`.'</SAMP> (the
-current directory).
-</DL>
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_7.html">previous</A>, <A HREF="wget_9.html">next</A>, <A HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-</BODY>
-</HTML>
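The `--cut-dirs' table in the removed wget_8.html above maps remote directory components onto local paths; the same stripping can be sketched with POSIX parameter expansion (the ftp.xemacs.org path is the manual's own example):

```shell
remote="ftp.xemacs.org/pub/xemacs"
no_host=${remote#*/}    # what -nH leaves behind: pub/xemacs
cut1=${no_host#*/}      # -nH --cut-dirs=1 leaves: xemacs
echo "$no_host"
echo "$cut1"
```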

Index: manual/wget-1.8.1/html_node/wget_9.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_9.html
diff -N manual/wget-1.8.1/html_node/wget_9.html
--- manual/wget-1.8.1/html_node/wget_9.html     19 Oct 2003 23:07:43 -0000      1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,236 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - HTTP Options</TITLE>
-</HEAD>
-<BODY>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_8.html">previous</A>, <A HREF="wget_10.html">next</A>, <A HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-<P><HR><P>
-
-
-<H2><A NAME="SEC9" HREF="wget_toc.html#TOC9">HTTP Options</A></H2>
-
-<DL COMPACT>
-
-<DT><SAMP>`-E'</SAMP>
-<DD>
-<A NAME="IDX49"></A>
- 
-<DT><SAMP>`--html-extension'</SAMP>
-<DD>
-If a file of type <SAMP>`text/html'</SAMP> is downloaded and the URL does not
-end with the regexp <SAMP>`\.[Hh][Tt][Mm][Ll]?'</SAMP>, this option will cause
-the suffix <SAMP>`.html'</SAMP> to be appended to the local filename.  This is
-useful, for instance, when you're mirroring a remote site that uses
-<SAMP>`.asp'</SAMP> pages, but you want the mirrored pages to be viewable on
-your stock Apache server.  Another good use for this is when you're
-downloading the output of CGIs.  A URL like
-<SAMP>`http://site.com/article.cgi?25'</SAMP> will be saved as
-<TT>`article.cgi?25.html'</TT>.
-
-Note that filenames changed in this way will be re-downloaded every time
-you re-mirror a site, because Wget can't tell that the local
-<TT>`<VAR>X</VAR>.html'</TT> file corresponds to remote URL <SAMP>`<VAR>X</VAR>'</SAMP> (since
-it doesn't yet know that the URL produces output of type
-<SAMP>`text/html'</SAMP>).  To prevent this re-downloading, you must use
-<SAMP>`-k'</SAMP> and <SAMP>`-K'</SAMP> so that the original version of the 
file will be
-saved as <TT>`<VAR>X</VAR>.orig'</TT> (see section <A HREF="wget_11.html#SEC11">Recursive Retrieval Options</A>).
-
-<A NAME="IDX50"></A>
-<A NAME="IDX51"></A>
-<A NAME="IDX52"></A>
-<DT><SAMP>`--http-user=<VAR>user</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--http-passwd=<VAR>password</VAR>'</SAMP>
-<DD>
-Specify the username <VAR>user</VAR> and password <VAR>password</VAR> on an
-HTTP server.  According to the type of the challenge, Wget will
-encode them using either the <CODE>basic</CODE> (insecure) or the
-<CODE>digest</CODE> authentication scheme.
-
-Another way to specify username and password is in the URL itself
-(see section <A HREF="wget_3.html#SEC3">URL Format</A>).  For more information about security issues with
-Wget, see section <A HREF="wget_42.html#SEC42">Security Considerations</A>.
-
-<A NAME="IDX53"></A>
-<A NAME="IDX54"></A>
-<DT><SAMP>`-C on/off'</SAMP>
-<DD>
-<DT><SAMP>`--cache=on/off'</SAMP>
-<DD>
-When set to off, disable server-side cache.  In this case, Wget will
-send the remote server an appropriate directive (<SAMP>`Pragma:
-no-cache'</SAMP>) to get the file from the remote service, rather than
-returning the cached version.  This is especially useful for retrieving
-and flushing out-of-date documents on proxy servers.
-
-Caching is allowed by default.
-
-<A NAME="IDX55"></A>
-<DT><SAMP>`--cookies=on/off'</SAMP>
-<DD>
-When set to off, disable the use of cookies.  Cookies are a mechanism
-for maintaining server-side state.  The server sends the client a cookie
-using the <CODE>Set-Cookie</CODE> header, and the client responds with the
-same cookie upon further requests.  Since cookies allow the server
-owners to keep track of visitors and for sites to exchange this
-information, some consider them a breach of privacy.  The default is to
-use cookies; however, <EM>storing</EM> cookies is not on by default.
-
-<A NAME="IDX56"></A>
-<A NAME="IDX57"></A>
-<DT><SAMP>`--load-cookies <VAR>file</VAR>'</SAMP>
-<DD>
-Load cookies from <VAR>file</VAR> before the first HTTP retrieval.
-<VAR>file</VAR> is a textual file in the format originally used by Netscape's
-<TT>`cookies.txt'</TT> file.
-
-You will typically use this option when mirroring sites that require
-that you be logged in to access some or all of their content.  The login
-process typically works by the web server issuing an HTTP cookie
-upon receiving and verifying your credentials.  The cookie is then
-resent by the browser when accessing that part of the site, and so
-proves your identity.
-
-Mirroring such a site requires Wget to send the same cookies your
-browser sends when communicating with the site.  This is achieved by
-<SAMP>`--load-cookies'</SAMP>---simply point Wget to the location of the
-<TT>`cookies.txt'</TT> file, and it will send the same cookies your browser
-would send in the same situation.  Different browsers keep textual
-cookie files in different locations:
-
-<DL COMPACT>
-
-<DT>Netscape 4.x.
-<DD>
-The cookies are in <TT>`~/.netscape/cookies.txt'</TT>.
-
-<DT>Mozilla and Netscape 6.x.
-<DD>
-Mozilla's cookie file is also named <TT>`cookies.txt'</TT>, located
-somewhere under <TT>`~/.mozilla'</TT>, in the directory of your profile.
-The full path usually ends up looking somewhat like
-<TT>`~/.mozilla/default/<VAR>some-weird-string</VAR>/cookies.txt'</TT>.
-
-<DT>Internet Explorer.
-<DD>
-You can produce a cookie file Wget can use by using the File menu,
-Import and Export, Export Cookies.  This has been tested with Internet
-Explorer 5; it is not guaranteed to work with earlier versions.
-
-<DT>Other browsers.
-<DD>
-If you are using a different browser to create your cookies,
-<SAMP>`--load-cookies'</SAMP> will only work if you can locate or produce a
-cookie file in the Netscape format that Wget expects.
-</DL>
-
-If you cannot use <SAMP>`--load-cookies'</SAMP>, there might still be an
-alternative.  If your browser supports a "cookie manager", you can use
-it to view the cookies used when accessing the site you're mirroring.
-Write down the name and value of the cookie, and manually instruct Wget
-to send those cookies, bypassing the "official" cookie support:
-
-
-<PRE>
-wget --cookies=off --header "Cookie: <VAR>name</VAR>=<VAR>value</VAR>"
-</PRE>
-
-<A NAME="IDX58"></A>
-<A NAME="IDX59"></A>
-<DT><SAMP>`--save-cookies <VAR>file</VAR>'</SAMP>
-<DD>
-Save cookies to <VAR>file</VAR> at the end of the session.  Cookies whose
-expiry time is not specified, or those that have already expired, are
-not saved.
-
-<A NAME="IDX60"></A>
-<A NAME="IDX61"></A>
-<DT><SAMP>`--ignore-length'</SAMP>
-<DD>
-Unfortunately, some HTTP servers (CGI programs, to be more
-precise) send out bogus <CODE>Content-Length</CODE> headers, which makes Wget
-go wild, as it thinks not all the document was retrieved.  You can spot
-this syndrome if Wget retries getting the same document again and again,
-each time claiming that the (otherwise normal) connection has closed on
-the very same byte.
-
-With this option, Wget will ignore the <CODE>Content-Length</CODE> header--as
-if it never existed.
-
-<A NAME="IDX62"></A>
-<DT><SAMP>`--header=<VAR>additional-header</VAR>'</SAMP>
-<DD>
-Define an <VAR>additional-header</VAR> to be passed to the HTTP servers.
-Headers must contain a <SAMP>`:'</SAMP> preceded by one or more non-blank
-characters, and must not contain newlines.
-
-You may define more than one additional header by specifying
-<SAMP>`--header'</SAMP> more than once.
-
-
-<PRE>
-wget --header='Accept-Charset: iso-8859-2' \
-     --header='Accept-Language: hr'        \
-       http://fly.srk.fer.hr/
-</PRE>
-
-Specification of an empty string as the header value will clear all
-previous user-defined headers.
-
-<A NAME="IDX63"></A>
-<A NAME="IDX64"></A>
-<A NAME="IDX65"></A>
-<DT><SAMP>`--proxy-user=<VAR>user</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--proxy-passwd=<VAR>password</VAR>'</SAMP>
-<DD>
-Specify the username <VAR>user</VAR> and password <VAR>password</VAR> for
-authentication on a proxy server.  Wget will encode them using the
-<CODE>basic</CODE> authentication scheme.
-
-<A NAME="IDX66"></A>
-<A NAME="IDX67"></A>
-<DT><SAMP>`--referer=<VAR>url</VAR>'</SAMP>
-<DD>
-Include `Referer: <VAR>url</VAR>' header in HTTP request.  Useful for
-retrieving documents with server-side processing that assume they are
-always being retrieved by interactive web browsers and only come out
-properly when Referer is set to one of the pages that point to them.
-
-<A NAME="IDX68"></A>
-<DT><SAMP>`-s'</SAMP>
-<DD>
-<DT><SAMP>`--save-headers'</SAMP>
-<DD>
-Save the headers sent by the HTTP server to the file, preceding the
-actual contents, with an empty line as the separator.
-
-<A NAME="IDX69"></A>
-<DT><SAMP>`-U <VAR>agent-string</VAR>'</SAMP>
-<DD>
-<DT><SAMP>`--user-agent=<VAR>agent-string</VAR>'</SAMP>
-<DD>
-Identify as <VAR>agent-string</VAR> to the HTTP server.
-
-The HTTP protocol allows the clients to identify themselves using a
-<CODE>User-Agent</CODE> header field.  This enables distinguishing the
-WWW software, usually for statistical purposes or for tracing of
-protocol violations.  Wget normally identifies as
-<SAMP>`Wget/<VAR>version</VAR>'</SAMP>, <VAR>version</VAR> being the current version
-number of Wget.
-
-However, some sites have been known to impose the policy of tailoring
-the output according to the <CODE>User-Agent</CODE>-supplied information.
-While conceptually this is not such a bad idea, it has been abused by
-servers denying information to clients other than <CODE>Mozilla</CODE> or
-Microsoft <CODE>Internet Explorer</CODE>.  This option allows you to change
-the <CODE>User-Agent</CODE> line issued by Wget.  Use of this option is
-discouraged, unless you really know what you are doing.
-</DL>
-
-<P><HR><P>
-Go to the <A HREF="wget_1.html">first</A>, <A HREF="wget_8.html">previous</A>, <A HREF="wget_10.html">next</A>, <A HREF="wget_47.html">last</A> section, <A HREF="wget_toc.html">table of contents</A>.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_foot.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_foot.html
diff -N manual/wget-1.8.1/html_node/wget_foot.html
--- manual/wget-1.8.1/html_node/wget_foot.html  19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,27 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Footnotes</TITLE>
-</HEAD>
-<BODY>
-<H1>GNU Wget</H1>
-<H2>The noninteractive downloading utility</H2>
-<H2>Updated for Wget 1.8.1, December 2001</H2>
-<ADDRESS>by Hrvoje <A HREF="mailto:address@hidden";>address@hidden</A>{s}i'{c} 
and the developers</ADDRESS>
-<P>
-<P><HR><P>
-<H3><A NAME="FOOT1" HREF="wget_3.html#DOCF1">(1)</A></H3>
-<P>If you have a
-<TT>`.netrc'</TT> file in your home directory, password will also be
-searched for there.
-<H3><A NAME="FOOT2" HREF="wget_22.html#DOCF2">(2)</A></H3>
-<P>As an additional check, Wget will look at the
-<CODE>Content-Length</CODE> header, and compare the sizes; if they are not the
-same, the remote file will be downloaded no matter what the time-stamp
-says.
-<P><HR><P>
-This document was generated on 17 January 2002 using
-<A HREF="http://wwwinfo.cern.ch/dis/texi2html/";>texi2html</A>&nbsp;1.56k.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/html_node/wget_toc.html
===================================================================
RCS file: manual/wget-1.8.1/html_node/wget_toc.html
diff -N manual/wget-1.8.1/html_node/wget_toc.html
--- manual/wget-1.8.1/html_node/wget_toc.html   19 Oct 2003 23:07:43 -0000      
1.1
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,87 +0,0 @@
-<HTML>
-<HEAD>
-<!-- Created by texi2html 1.56k from ../texi/wget.texi on 17 January 2002 -->
-
-<TITLE>GNU Wget Manual - Table of Contents</TITLE>
-</HEAD>
-<BODY>
-<H1>GNU Wget</H1>
-<H2>The noninteractive downloading utility</H2>
-<H2>Updated for Wget 1.8.1, December 2001</H2>
-<ADDRESS>by Hrvoje <A HREF="mailto:address@hidden";>address@hidden</A>{s}i'{c} 
and the developers</ADDRESS>
-<P>
-<P><HR><P>
-<UL>
-<LI><A NAME="TOC1" HREF="wget_1.html#SEC1">Overview</A>
-<LI><A NAME="TOC2" HREF="wget_2.html#SEC2">Invoking</A>
-<UL>
-<LI><A NAME="TOC3" HREF="wget_3.html#SEC3">URL Format</A>
-<LI><A NAME="TOC4" HREF="wget_4.html#SEC4">Option Syntax</A>
-<LI><A NAME="TOC5" HREF="wget_5.html#SEC5">Basic Startup Options</A>
-<LI><A NAME="TOC6" HREF="wget_6.html#SEC6">Logging and Input File Options</A>
-<LI><A NAME="TOC7" HREF="wget_7.html#SEC7">Download Options</A>
-<LI><A NAME="TOC8" HREF="wget_8.html#SEC8">Directory Options</A>
-<LI><A NAME="TOC9" HREF="wget_9.html#SEC9">HTTP Options</A>
-<LI><A NAME="TOC10" HREF="wget_10.html#SEC10">FTP Options</A>
-<LI><A NAME="TOC11" HREF="wget_11.html#SEC11">Recursive Retrieval Options</A>
-<LI><A NAME="TOC12" HREF="wget_12.html#SEC12">Recursive Accept/Reject 
Options</A>
-</UL>
-<LI><A NAME="TOC13" HREF="wget_13.html#SEC13">Recursive Retrieval</A>
-<LI><A NAME="TOC14" HREF="wget_14.html#SEC14">Following Links</A>
-<UL>
-<LI><A NAME="TOC15" HREF="wget_15.html#SEC15">Spanning Hosts</A>
-<LI><A NAME="TOC16" HREF="wget_16.html#SEC16">Types of Files</A>
-<LI><A NAME="TOC17" HREF="wget_17.html#SEC17">Directory-Based Limits</A>
-<LI><A NAME="TOC18" HREF="wget_18.html#SEC18">Relative Links</A>
-<LI><A NAME="TOC19" HREF="wget_19.html#SEC19">Following FTP Links</A>
-</UL>
-<LI><A NAME="TOC20" HREF="wget_20.html#SEC20">Time-Stamping</A>
-<UL>
-<LI><A NAME="TOC21" HREF="wget_21.html#SEC21">Time-Stamping Usage</A>
-<LI><A NAME="TOC22" HREF="wget_22.html#SEC22">HTTP Time-Stamping Internals</A>
-<LI><A NAME="TOC23" HREF="wget_23.html#SEC23">FTP Time-Stamping Internals</A>
-</UL>
-<LI><A NAME="TOC24" HREF="wget_24.html#SEC24">Startup File</A>
-<UL>
-<LI><A NAME="TOC25" HREF="wget_25.html#SEC25">Wgetrc Location</A>
-<LI><A NAME="TOC26" HREF="wget_26.html#SEC26">Wgetrc Syntax</A>
-<LI><A NAME="TOC27" HREF="wget_27.html#SEC27">Wgetrc Commands</A>
-<LI><A NAME="TOC28" HREF="wget_28.html#SEC28">Sample Wgetrc</A>
-</UL>
-<LI><A NAME="TOC29" HREF="wget_29.html#SEC29">Examples</A>
-<UL>
-<LI><A NAME="TOC30" HREF="wget_30.html#SEC30">Simple Usage</A>
-<LI><A NAME="TOC31" HREF="wget_31.html#SEC31">Advanced Usage</A>
-<LI><A NAME="TOC32" HREF="wget_32.html#SEC32">Very Advanced Usage</A>
-</UL>
-<LI><A NAME="TOC33" HREF="wget_33.html#SEC33">Various</A>
-<UL>
-<LI><A NAME="TOC34" HREF="wget_34.html#SEC34">Proxies</A>
-<LI><A NAME="TOC35" HREF="wget_35.html#SEC35">Distribution</A>
-<LI><A NAME="TOC36" HREF="wget_36.html#SEC36">Mailing List</A>
-<LI><A NAME="TOC37" HREF="wget_37.html#SEC37">Reporting Bugs</A>
-<LI><A NAME="TOC38" HREF="wget_38.html#SEC38">Portability</A>
-<LI><A NAME="TOC39" HREF="wget_39.html#SEC39">Signals</A>
-</UL>
-<LI><A NAME="TOC40" HREF="wget_40.html#SEC40">Appendices</A>
-<UL>
-<LI><A NAME="TOC41" HREF="wget_41.html#SEC41">Robots</A>
-<LI><A NAME="TOC42" HREF="wget_42.html#SEC42">Security Considerations</A>
-<LI><A NAME="TOC43" HREF="wget_43.html#SEC43">Contributors</A>
-</UL>
-<LI><A NAME="TOC44" HREF="wget_44.html#SEC44">Copying</A>
-<UL>
-<LI><A NAME="TOC45" HREF="wget_45.html#SEC45">GNU General Public License</A>
-<LI><A NAME="TOC46" HREF="wget_45.html#SEC46">Preamble</A>
-<LI><A NAME="TOC47" HREF="wget_45.html#SEC47">TERMS AND CONDITIONS FOR 
COPYING, DISTRIBUTION AND MODIFICATION</A>
-<LI><A NAME="TOC48" HREF="wget_45.html#SEC48">How to Apply These Terms to Your 
New Programs</A>
-<LI><A NAME="TOC49" HREF="wget_46.html#SEC49">GNU Free Documentation 
License</A>
-<LI><A NAME="TOC50" HREF="wget_46.html#SEC50">ADDENDUM: How to use this 
License for your documents</A>
-</UL>
-<LI><A NAME="TOC51" HREF="wget_47.html#SEC51">Concept Index</A>
-</UL>
-<P><HR><P>
-This document was generated on 17 January 2002 using
-<A HREF="http://wwwinfo.cern.ch/dis/texi2html/";>texi2html</A>&nbsp;1.56k.
-</BODY>
-</HTML>

Index: manual/wget-1.8.1/info/wget-info.tar.gz
===================================================================
RCS file: manual/wget-1.8.1/info/wget-info.tar.gz
diff -N manual/wget-1.8.1/info/wget-info.tar.gz
Binary files /tmp/cvsvPrCMk and /dev/null differ

Index: manual/wget-1.8.1/ps/wget.ps.gz
===================================================================
RCS file: manual/wget-1.8.1/ps/wget.ps.gz
diff -N manual/wget-1.8.1/ps/wget.ps.gz
Binary files /tmp/cvs8ItIbl and /dev/null differ

Index: manual/wget-1.8.1/texi/wget.texi.tar.gz
===================================================================
RCS file: manual/wget-1.8.1/texi/wget.texi.tar.gz
diff -N manual/wget-1.8.1/texi/wget.texi.tar.gz
Binary files /tmp/cvsJRpCAj and /dev/null differ

Index: manual/wget-1.8.1/text/wget.txt
===================================================================
RCS file: manual/wget-1.8.1/text/wget.txt
diff -N manual/wget-1.8.1/text/wget.txt
--- manual/wget-1.8.1/text/wget.txt     29 Jun 2005 21:04:15 -0000      1.2
+++ /dev/null   1 Jan 1970 00:00:00 -0000
@@ -1,3538 +0,0 @@
-This file documents the GNU Wget utility for downloading network
-data.
-
-   Copyright (C) 1996, 1997, 1998, 2000, 2001 Free Software Foundation,
-Inc.
-
-   Permission is granted to make and distribute verbatim copies of this
-manual provided the copyright notice and this permission notice are
-preserved on all copies.
-
-   Permission is granted to copy, distribute and/or modify this document
-under the terms of the GNU Free Documentation License, Version 1.1 or
-any later version published by the Free Software Foundation; with the
-Invariant Sections being "GNU General Public License" and "GNU Free
-Documentation License", with no Front-Cover Texts, and with no
-Back-Cover Texts.  A copy of the license is included in the section
-entitled "GNU Free Documentation License".
-
-Wget 1.8.1
-**********
-
-   This manual documents version 1.8.1 of GNU Wget, the freely
-available utility for network download.
-
-   Copyright (C) 1996, 1997, 1998, 2000, 2001 Free Software Foundation,
-Inc.
-
-Overview
-********
-
-   GNU Wget is a free utility for non-interactive download of files from
-the Web.  It supports HTTP, HTTPS, and FTP protocols, as well as
-retrieval through HTTP proxies.
-
-   This chapter is a partial overview of Wget's features.
-
-   * Wget is non-interactive, meaning that it can work in the
-     background, while the user is not logged on.  This allows you to
-     start a retrieval and disconnect from the system, letting Wget
-     finish the work.  By contrast, most of the Web browsers require
-     constant user's presence, which can be a great hindrance when
-     transferring a lot of data.
-
-
-   * Wget can follow links in HTML pages and create local versions of
-     remote web sites, fully recreating the directory structure of the
-     original site.  This is sometimes referred to as "recursive
-     downloading."  While doing that, Wget respects the Robot Exclusion
-     Standard (`/robots.txt').  Wget can be instructed to convert the
-     links in downloaded HTML files to the local files for offline
-     viewing.
-
-
-   * File name wildcard matching and recursive mirroring of directories
-     are available when retrieving via FTP.  Wget can read the
-     time-stamp information given by both HTTP and FTP servers, and
-     store it locally.  Thus Wget can see if the remote file has
-     changed since last retrieval, and automatically retrieve the new
-     version if it has.  This makes Wget suitable for mirroring of FTP
-     sites, as well as home pages.
-
-
-   * Wget has been designed for robustness over slow or unstable network
-     connections; if a download fails due to a network problem, it will
-     keep retrying until the whole file has been retrieved.  If the
-     server supports regetting, it will instruct the server to continue
-     the download from where it left off.
-
-
-   * Wget supports proxy servers, which can lighten the network load,
-     speed up retrieval and provide access behind firewalls.  However,
-     if you are behind a firewall that requires that you use a socks
-     style gateway, you can get the socks library and build Wget with
-     support for socks.  Wget also supports passive FTP downloading
-     as an option.
-
-
-   * Builtin features offer mechanisms to tune which links you wish to
-     follow (*note Following Links::).
-
-
-   * The retrieval is conveniently traced by printing dots, each dot
-     representing a fixed amount of data received (1KB by default).
-     These representations can be customized to your preferences.
-
-
-   * Most of the features are fully configurable, either through
-     command line options, or via the initialization file `.wgetrc'
-     (*note Startup File::).  Wget allows you to define "global"
-     startup files (`/usr/local/etc/wgetrc' by default) for site
-     settings.
-
-
-   * Finally, GNU Wget is free software.  This means that everyone may
-     use it, redistribute it and/or modify it under the terms of the
-     GNU General Public License, as published by the Free Software
-     Foundation (*note Copying::).
-
-Invoking
-********
-
-   By default, Wget is very simple to invoke.  The basic syntax is:
-
-     wget [OPTION]... [URL]...
-
-   Wget will simply download all the URLs specified on the command
-line.  URL is a "Uniform Resource Locator", as defined below.
-
-   However, you may wish to change some of the default parameters of
-Wget.  You can do it two ways: permanently, adding the appropriate
-command to `.wgetrc' (*note Startup File::), or specifying it on the
-command line.
-
-URL Format
-==========
-
-   "URL" is an acronym for Uniform Resource Locator.  A uniform
-resource locator is a compact string representation for a resource
-available via the Internet.  Wget recognizes the URL syntax as per
-RFC1738.  This is the most widely used form (square brackets denote
-optional parts):
-
-     http://host[:port]/directory/file
-     ftp://host[:port]/directory/file
-
-   You can also encode your username and password within a URL:
-
-     ftp://user:address@hidden/path
-     http://user:address@hidden/path
-
-   Either USER or PASSWORD, or both, may be left out.  If you leave out
-either the HTTP username or password, no authentication will be sent.
-If you leave out the FTP username, `anonymous' will be used.  If you
-leave out the FTP password, your email address will be supplied as a
-default password.(1)
-
-   You can encode unsafe characters in a URL as `%xy', `xy' being the
-hexadecimal representation of the character's ASCII value.  Some common
-unsafe characters include `%' (quoted as `%25'), `:' (quoted as `%3A'),
-and `@' (quoted as `%40').  Refer to RFC1738 for a comprehensive list
-of unsafe characters.
-
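The `%xy' quoting described above can be checked with a short sketch.  This is purely illustrative and uses Python's `urllib.parse.quote', not Wget's own C encoder:

```python
from urllib.parse import quote

# Percent-encode a single unsafe character as `%xy'.
# safe='' makes quote() encode ':' as well, matching the
# examples given in the text above.
def percent_encode(ch):
    return quote(ch, safe='')

print(percent_encode('%'), percent_encode(':'), percent_encode('@'))
# prints: %25 %3A %40
```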
-   Wget also supports the `type' feature for FTP URLs.  By default, FTP
-documents are retrieved in the binary mode (type `i'), which means that
-they are downloaded unchanged.  Another useful mode is the `a'
-("ASCII") mode, which converts the line delimiters between the
-different operating systems, and is thus useful for text files.  Here
-is an example:
-
-     ftp://host/directory/file;type=a
-
-   Two alternative variants of URL specification are also supported
-for historical (hysterical?) reasons and because of their widespread use.
-
-   FTP-only syntax (supported by `NcFTP'):
-     host:/dir/file
-
-   HTTP-only syntax (introduced by `Netscape'):
-     host[:port]/dir/file
-
-   These two alternative forms are deprecated, and may cease being
-supported in the future.
-
-   If you do not understand the difference between these notations, or
-do not know which one to use, just use the plain ordinary format you use
-with your favorite browser, like `Lynx' or `Netscape'.
-
-   ---------- Footnotes ----------
-
-   (1) If you have a `.netrc' file in your home directory, password
-will also be searched for there.
-
-Option Syntax
-=============
-
-   Since Wget uses GNU getopts to process its arguments, every option
-has a short form and a long form.  Long options are more convenient to
-remember, but take time to type.  You may freely mix different option
-styles, or specify options after the command-line arguments.  Thus you
-may write:
-
-     wget -r --tries=10 http://fly.srk.fer.hr/ -o log
-
-   The space between an option accepting an argument and the argument
-may be omitted.  Instead of `-o log' you can write `-olog'.
-
-   You may put several options that do not require arguments together,
-like:
-
-     wget -drc URL
-
-   This is a complete equivalent of:
-
-     wget -d -r -c URL
-
-   Since the options can be specified after the arguments, you may
-terminate them with `--'.  So the following will try to download URL
-`-x', reporting failure to `log':
-
-     wget -o log -- -x
-
-   The options that accept comma-separated lists all respect the
-convention that specifying an empty list clears its value.  This can be
-useful to clear the `.wgetrc' settings.  For instance, if your `.wgetrc'
-sets `exclude_directories' to `/cgi-bin', the following example will
-first reset it, and then set it to exclude `/~nobody' and `/~somebody'.
-You can also clear the lists in `.wgetrc' (*note Wgetrc Syntax::).
-
-     wget -X '' -X /~nobody,/~somebody
-
-Basic Startup Options
-=====================
-
-`-V'
-`--version'
-     Display the version of Wget.
-
-`-h'
-`--help'
-     Print a help message describing all of Wget's command-line options.
-
-`-b'
-`--background'
-     Go to background immediately after startup.  If no output file is
-     specified via the `-o' option, output is redirected to `wget-log'.
-
-`-e COMMAND'
-`--execute COMMAND'
-     Execute COMMAND as if it were a part of `.wgetrc' (*note Startup
-     File::).  A command thus invoked will be executed _after_ the
-     commands in `.wgetrc', thus taking precedence over them.
-
-Logging and Input File Options
-==============================
-
-`-o LOGFILE'
-`--output-file=LOGFILE'
-     Log all messages to LOGFILE.  The messages are normally reported
-     to standard error.
-
-`-a LOGFILE'
-`--append-output=LOGFILE'
-     Append to LOGFILE.  This is the same as `-o', only it appends to
-     LOGFILE instead of overwriting the old log file.  If LOGFILE does
-     not exist, a new file is created.
-
-`-d'
-`--debug'
-     Turn on debug output, meaning various information important to the
-     developers of Wget if it does not work properly.  Your system
-     administrator may have chosen to compile Wget without debug
-     support, in which case `-d' will not work.  Please note that
-     compiling with debug support is always safe--Wget compiled with
-     the debug support will _not_ print any debug info unless requested
-     with `-d'.  *Note Reporting Bugs::, for more information on how to
-     use `-d' for sending bug reports.
-
-`-q'
-`--quiet'
-     Turn off Wget's output.
-
-`-v'
-`--verbose'
-     Turn on verbose output, with all the available data.  The default
-     output is verbose.
-
-`-nv'
-`--non-verbose'
-     Non-verbose output--turn off verbose without being completely quiet
-     (use `-q' for that), which means that error messages and basic
-     information still get printed.
-
-`-i FILE'
-`--input-file=FILE'
-     Read URLs from FILE, in which case no URLs need to be on the
-     command line.  If there are URLs both on the command line and in
-     an input file, those on the command line will be the first ones to
-     be retrieved.  The FILE need not be an HTML document (but no harm
-     if it is)--it is enough if the URLs are just listed sequentially.
-
-     However, if you specify `--force-html', the document will be
-     regarded as `html'.  In that case you may have problems with
-     relative links, which you can solve either by adding `<base
-     href="URL">' to the documents or by specifying `--base=URL' on the
-     command line.
-
-`-F'
-`--force-html'
-     When input is read from a file, force it to be treated as an HTML
-     file.  This enables you to retrieve relative links from existing
-     HTML files on your local disk, by adding `<base href="URL">' to
-     HTML, or using the `--base' command-line option.
-
-`-B URL'
-`--base=URL'
-     When used in conjunction with `-F', prepends URL to relative links
-     in the file specified by `-i'.
-
-Download Options
-================
-
-`--bind-address=ADDRESS'
-     When making client TCP/IP connections, `bind()' to ADDRESS on the
-     local machine.  ADDRESS may be specified as a hostname or IP
-     address.  This option can be useful if your machine is bound to
-     multiple IPs.
-
-`-t NUMBER'
-`--tries=NUMBER'
-     Set number of retries to NUMBER.  Specify 0 or `inf' for infinite
-     retrying.
-
-`-O FILE'
-`--output-document=FILE'
-     The documents will not be written to the appropriate files, but
-     all will be concatenated together and written to FILE.  If FILE
-     already exists, it will be overwritten.  If the FILE is `-', the
-     documents will be written to standard output.  Including this
-     option automatically sets the number of tries to 1.
-
-`-nc'
-`--no-clobber'
-     If a file is downloaded more than once in the same directory,
-     Wget's behavior depends on a few options, including `-nc'.  In
-     certain cases, the local file will be "clobbered", or overwritten,
-     upon repeated download.  In other cases it will be preserved.
-
-     When running Wget without `-N', `-nc', or `-r', downloading the
-     same file in the same directory will result in the original copy
-     of FILE being preserved and the second copy being named `FILE.1'.
-     If that file is downloaded yet again, the third copy will be named
-     `FILE.2', and so on.  When `-nc' is specified, this behavior is
-     suppressed, and Wget will refuse to download newer copies of
-     `FILE'.  Therefore, "`no-clobber'" is actually a misnomer in this
-     mode--it's not clobbering that's prevented (as the numeric
-     suffixes were already preventing clobbering), but rather the
-     multiple version saving that's prevented.
-
-     When running Wget with `-r', but without `-N' or `-nc',
-     re-downloading a file will result in the new copy simply
-     overwriting the old.  Adding `-nc' will prevent this behavior,
-     instead causing the original version to be preserved and any newer
-     copies on the server to be ignored.
-
-     When running Wget with `-N', with or without `-r', the decision as
-     to whether or not to download a newer copy of a file depends on
-     the local and remote timestamp and size of the file (*note
-     Time-Stamping::).  `-nc' may not be specified at the same time as
-     `-N'.
-
-     Note that when `-nc' is specified, files with the suffixes `.html'
-     or (yuck) `.htm' will be loaded from the local disk and parsed as
-     if they had been retrieved from the Web.
-
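The default `FILE', `FILE.1', `FILE.2' numbering described above can be sketched as follows.  This is a hypothetical helper for illustration, not Wget's actual code:

```python
import os

def unique_name(filename):
    """Mimic Wget's default numbering: keep the original copy
    and name later downloads FILE.1, FILE.2, and so on."""
    if not os.path.exists(filename):
        return filename
    n = 1
    while os.path.exists('%s.%d' % (filename, n)):
        n += 1
    return '%s.%d' % (filename, n)
```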
-`-c'
-`--continue'
-     Continue getting a partially-downloaded file.  This is useful when
-     you want to finish up a download started by a previous instance of
-     Wget, or by another program.  For instance:
-
-          wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
-
-     If there is a file named `ls-lR.Z' in the current directory, Wget
-     will assume that it is the first portion of the remote file, and
-     will ask the server to continue the retrieval from an offset equal
-     to the length of the local file.
-
-     Note that you don't need to specify this option if you just want
-     the current invocation of Wget to retry downloading a file should
-     the connection be lost midway through.  This is the default
-     behavior.  `-c' only affects resumption of downloads started
-     _prior_ to this invocation of Wget, and whose local files are
-     still sitting around.
-
-     Without `-c', the previous example would just download the remote
-     file to `ls-lR.Z.1', leaving the truncated `ls-lR.Z' file alone.
-
-     Beginning with Wget 1.7, if you use `-c' on a non-empty file, and
-     it turns out that the server does not support continued
-     downloading, Wget will refuse to start the download from scratch,
-     which would effectively ruin existing contents.  If you really
-     want the download to start from scratch, remove the file.
-
-     Also beginning with Wget 1.7, if you use `-c' on a file which is of
-     equal size as the one on the server, Wget will refuse to download
-     the file and print an explanatory message.  The same happens when
-     the file is smaller on the server than locally (presumably because
-     it was changed on the server since your last download
-     attempt)--because "continuing" is not meaningful, no download
-     occurs.
-
-     On the other side of the coin, while using `-c', any file that's
-     bigger on the server than locally will be considered an incomplete
-     download and only `(length(remote) - length(local))' bytes will be
-     downloaded and tacked onto the end of the local file.  This
-     behavior can be desirable in certain cases--for instance, you can
-     use `wget -c' to download just the new portion that's been
-     appended to a data collection or log file.
-
-     However, if the file is bigger on the server because it's been
-     _changed_, as opposed to just _appended_ to, you'll end up with a
-     garbled file.  Wget has no way of verifying that the local file is
-     really a valid prefix of the remote file.  You need to be
-     especially careful of this when using `-c' in conjunction with
-     `-r', since every file will be considered as an "incomplete
-     download" candidate.
-
-     Another instance where you'll get a garbled file if you try to use
-     `-c' is if you have a lame HTTP proxy that inserts a "transfer
-     interrupted" string into the local file.  In the future a
-     "rollback" option may be added to deal with this case.
-
-     Note that `-c' only works with FTP servers and with HTTP servers
-     that support the `Range' header.
-
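The `Range'-based resumption can be sketched as follows: the offset requested from the server is simply the size of the truncated local file.  This is a simplified illustration of the header involved, not Wget's implementation:

```python
import os

def resume_headers(local_path):
    """Build the HTTP header for continuing a partial download:
    ask the server to start at the byte offset equal to the size
    of the truncated local file.  An empty dict means a fresh
    download from offset zero."""
    if os.path.exists(local_path):
        offset = os.path.getsize(local_path)
    else:
        offset = 0
    return {'Range': 'bytes=%d-' % offset} if offset else {}
```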
-`--progress=TYPE'
-     Select the type of the progress indicator you wish to use.  Legal
-     indicators are "dot" and "bar".
-
-     The "dot" indicator is used by default.  It traces the retrieval by
-     printing dots on the screen, each dot representing a fixed amount
-     of downloaded data.
-
-     When using the dotted retrieval, you may also set the "style" by
-     specifying the type as `dot:STYLE'.  Different styles assign
-     different meaning to one dot.  With the `default' style each dot
-     represents 1K, there are ten dots in a cluster and 50 dots in a
-     line.  The `binary' style has a more "computer"-like
-     orientation--8K dots, 16-dots clusters and 48 dots per line (which
-     makes for 384K lines).  The `mega' style is suitable for
-     downloading very large files--each dot represents 64K retrieved,
-     there are eight dots in a cluster, and 48 dots on each line (so
-     each line contains 3M).
-
-     Specifying `--progress=bar' will draw a nice ASCII progress bar
-     graphic (a.k.a. "thermometer" display) to indicate retrieval.  If
-     the output is not a TTY, this option will be ignored, and Wget
-     will revert to the dot indicator.  If you want to force the bar
-     indicator, use `--progress=bar:force'.
-
-`-N'
-`--timestamping'
-     Turn on time-stamping.  *Note Time-Stamping::, for details.
-
-`-S'
-`--server-response'
-     Print the headers sent by HTTP servers and responses sent by FTP
-     servers.
-
-`--spider'
-     When invoked with this option, Wget will behave as a Web "spider",
-     which means that it will not download the pages, just check that
-     they are there.  You can use it to check your bookmarks, e.g. with:
-
-          wget --spider --force-html -i bookmarks.html
-
-     This feature needs much more work for Wget to get close to the
-     functionality of real WWW spiders.
-
-`-T seconds'
-`--timeout=SECONDS'
-     Set the read timeout to SECONDS seconds.  Whenever a network read
-     is issued, the file descriptor is checked for a timeout, which
-     could otherwise leave a pending connection (uninterrupted read).
-     The default timeout is 900 seconds (fifteen minutes).  Setting
-     timeout to 0 will disable checking for timeouts.
-
-     Please do not lower the default timeout value with this option
-     unless you know what you are doing.
-
-`-w SECONDS'
-`--wait=SECONDS'
-     Wait the specified number of seconds between the retrievals.  Use
-     of this option is recommended, as it lightens the server load by
-     making the requests less frequent.  Instead of in seconds, the
-     time can be specified in minutes using the `m' suffix, in hours
-     using `h' suffix, or in days using `d' suffix.
-
-     Specifying a large value for this option is useful if the network
-     or the destination host is down, so that Wget can wait long enough
-     to reasonably expect the network error to be fixed before the
-     retry.
-
-`--waitretry=SECONDS'
-     If you don't want Wget to wait between _every_ retrieval, but only
-     between retries of failed downloads, you can use this option.
-     Wget will use "linear backoff", waiting 1 second after the first
-     failure on a given file, then waiting 2 seconds after the second
-     failure on that file, up to the maximum number of SECONDS you
-     specify.  Therefore, a value of 10 will actually make Wget wait up
-     to (1 + 2 + ... + 10) = 55 seconds per file.
-
-     Note that this option is turned on by default in the global
-     `wgetrc' file.
-
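The linear-backoff arithmetic above (1 + 2 + ... + 10 = 55) can be checked with a tiny sketch, shown here only to make the worst-case wait concrete:

```python
def total_backoff(waitretry):
    """Sum the linear backoff delays: 1 second after the first
    failure, 2 after the second, ..., up to waitretry seconds."""
    return sum(range(1, waitretry + 1))

print(total_backoff(10))
# prints: 55
```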
-`--random-wait'
-     Some web sites may perform log analysis to identify retrieval
-     programs such as Wget by looking for statistically significant
-     similarities in the time between requests. This option causes the
-     time between requests to vary between 0 and 2 * WAIT seconds,
-     where WAIT was specified using the `-w' or `--wait' options, in
-     order to mask Wget's presence from such analysis.
-
-     A recent article in a publication devoted to development on a
-     popular consumer platform provided code to perform this analysis
-     on the fly.  Its author suggested blocking at the class C address
-     level to ensure automated retrieval programs were blocked despite
-     changing DHCP-supplied addresses.
-
-     The `--random-wait' option was inspired by this ill-advised
-     recommendation to block many unrelated users from a web site due
-     to the actions of one.
-
-`-Y on/off'
-`--proxy=on/off'
-     Turn proxy support on or off.  The proxy is on by default if the
-     appropriate environmental variable is defined.
-
-`-Q QUOTA'
-`--quota=QUOTA'
-     Specify download quota for automatic retrievals.  The value can be
-     specified in bytes (default), kilobytes (with `k' suffix), or
-     megabytes (with `m' suffix).
-
-     Note that quota will never affect downloading a single file.  So
-     if you specify `wget -Q10k ftp://wuarchive.wustl.edu/ls-lR.gz',
-     all of the `ls-lR.gz' will be downloaded.  The same goes even when
-     several URLs are specified on the command-line.  However, quota is
-     respected when retrieving either recursively, or from an input
-     file.  Thus you may safely type `wget -Q2m -i sites'--download
-     will be aborted when the quota is exceeded.
-
-     Setting quota to 0 or to `inf' unlimits the download quota.
-
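The `k' and `m' quota suffixes can be parsed as in this sketch (a hypothetical helper; Wget's own parser lives in C and handles further details):

```python
def parse_quota(value):
    """Parse a quota like '10k' or '2m' into bytes.
    '0' or 'inf' means unlimited, returned here as 0."""
    value = value.strip().lower()
    if value in ('0', 'inf'):
        return 0
    if value.endswith('k'):
        return int(value[:-1]) * 1024
    if value.endswith('m'):
        return int(value[:-1]) * 1024 * 1024
    return int(value)
```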
-Directory Options
-=================
-
-`-nd'
-`--no-directories'
-     Do not create a hierarchy of directories when retrieving
-     recursively.  With this option turned on, all files will get saved
-     to the current directory, without clobbering (if a name shows up
-     more than once, the filenames will get extensions `.n').
-
-`-x'
-`--force-directories'
-     The opposite of `-nd'--create a hierarchy of directories, even if
-     one would not have been created otherwise.  E.g. `wget -x
-     http://fly.srk.fer.hr/robots.txt' will save the downloaded file to
-     `fly.srk.fer.hr/robots.txt'.
-
-`-nH'
-`--no-host-directories'
-     Disable generation of host-prefixed directories.  By default,
-     invoking Wget with `-r http://fly.srk.fer.hr/' will create a
-     structure of directories beginning with `fly.srk.fer.hr/'.  This
-     option disables such behavior.
-
-`--cut-dirs=NUMBER'
-     Ignore NUMBER directory components.  This is useful for getting a
-     fine-grained control over the directory where recursive retrieval
-     will be saved.
-
-     Take, for example, the directory at
-     `ftp://ftp.xemacs.org/pub/xemacs/'.  If you retrieve it with `-r',
-     it will be saved locally under `ftp.xemacs.org/pub/xemacs/'.
-     While the `-nH' option can remove the `ftp.xemacs.org/' part, you
-     are still stuck with `pub/xemacs'.  This is where `--cut-dirs'
-     comes in handy; it makes Wget not "see" NUMBER remote directory
-     components.  Here are several examples of how `--cut-dirs' option
-     works.
-
-          No options        -> ftp.xemacs.org/pub/xemacs/
-          -nH               -> pub/xemacs/
-          -nH --cut-dirs=1  -> xemacs/
-          -nH --cut-dirs=2  -> .
-          
-          --cut-dirs=1      -> ftp.xemacs.org/xemacs/
-          ...
-
-     If you just want to get rid of the directory structure, this
-     option is similar to a combination of `-nd' and `-P'.  However,
-     unlike `-nd', `--cut-dirs' does not lose subdirectories--for
-     instance, with `-nH --cut-dirs=1', a `beta/' subdirectory will be
-     placed to `xemacs/beta', as one would expect.
-
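The table of `-nH' and `--cut-dirs' combinations above can be reproduced with a small sketch.  This is an illustrative model (applied here to a file path rather than a bare directory), not Wget's actual path logic:

```python
def local_path(url_path, cut_dirs=0, no_host=False):
    """Map a remote path like 'ftp.xemacs.org/pub/xemacs/file'
    to a local one, optionally dropping the host component (-nH)
    and the first cut_dirs directory components (--cut-dirs=N)."""
    parts = url_path.split('/')
    host, rest = parts[0], parts[1:]
    rest = rest[cut_dirs:]
    if no_host:
        return '/'.join(rest)
    return '/'.join([host] + rest)
```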
-`-P PREFIX'
-`--directory-prefix=PREFIX'
-     Set directory prefix to PREFIX.  The "directory prefix" is the
-     directory where all other files and subdirectories will be saved
-     to, i.e. the top of the retrieval tree.  The default is `.' (the
-     current directory).
-
-HTTP Options
-============
-
-`-E'
-`--html-extension'
-     If a file of type `text/html' is downloaded and the URL does not
-     end with the regexp `\.[Hh][Tt][Mm][Ll]?', this option will cause
-     the suffix `.html' to be appended to the local filename.  This is
-     useful, for instance, when you're mirroring a remote site that uses
-     `.asp' pages, but you want the mirrored pages to be viewable on
-     your stock Apache server.  Another good use for this is when you're
-     downloading the output of CGIs.  A URL like
-     `http://site.com/article.cgi?25' will be saved as
-     `article.cgi?25.html'.
-
-     Note that filenames changed in this way will be re-downloaded
-     every time you re-mirror a site, because Wget can't tell that the
-     local `X.html' file corresponds to remote URL `X' (since it
-     doesn't yet know that the URL produces output of type `text/html').
-     To prevent this re-downloading, you must use `-k' and `-K' so
-     that the original version of the file will be saved as `X.orig'
-     (*note Recursive Retrieval Options::).
-
-`--http-user=USER'
-`--http-passwd=PASSWORD'
-     Specify the username USER and password PASSWORD on an HTTP server.
-     According to the type of the challenge, Wget will encode them
-     using either the `basic' (insecure) or the `digest' authentication
-     scheme.
-
-     Another way to specify username and password is in the URL itself
-     (*note URL Format::).  For more information about security issues
-     with Wget, *Note Security Considerations::.
-
-`-C on/off'
-`--cache=on/off'
-     When set to off, disable server-side caching.  In this case, Wget
-     will send the remote server an appropriate directive (`Pragma:
-     no-cache') to get the file from the remote service, rather than
-     its cached version.  This is especially useful for
-     retrieving and flushing out-of-date documents on proxy servers.
-
-     Caching is allowed by default.
-
-`--cookies=on/off'
-     When set to off, disable the use of cookies.  Cookies are a
-     mechanism for maintaining server-side state.  The server sends the
-     client a cookie using the `Set-Cookie' header, and the client
-     responds with the same cookie upon further requests.  Since
-     cookies allow the server owners to keep track of visitors and for
-     sites to exchange this information, some consider them a breach of
-     privacy.  The default is to use cookies; however, _storing_
-     cookies is not on by default.
-
-`--load-cookies FILE'
-     Load cookies from FILE before the first HTTP retrieval.  FILE is a
-     textual file in the format originally used by Netscape's
-     `cookies.txt' file.
-
-     You will typically use this option when mirroring sites that
-     require that you be logged in to access some or all of their
-     content.  The login process typically works by the web server
-     issuing an HTTP cookie upon receiving and verifying your
-     credentials.  The cookie is then resent by the browser when
-     accessing that part of the site, and so proves your identity.
-
-     Mirroring such a site requires Wget to send the same cookies your
-     browser sends when communicating with the site.  This is achieved
-     by `--load-cookies'--simply point Wget to the location of the
-     `cookies.txt' file, and it will send the same cookies your browser
-     would send in the same situation.  Different browsers keep textual
-     cookie files in different locations:
-
-    Netscape 4.x.
-          The cookies are in `~/.netscape/cookies.txt'.
-
-    Mozilla and Netscape 6.x.
-          Mozilla's cookie file is also named `cookies.txt', located
-          somewhere under `~/.mozilla', in the directory of your
-          profile.  The full path usually ends up looking somewhat like
-          `~/.mozilla/default/SOME-WEIRD-STRING/cookies.txt'.
-
-    Internet Explorer.
-          You can produce a cookie file Wget can use by using the File
-          menu, Import and Export, Export Cookies.  This has been
-          tested with Internet Explorer 5; it is not guaranteed to work
-          with earlier versions.
-
-    Other browsers.
-          If you are using a different browser to create your cookies,
-          `--load-cookies' will only work if you can locate or produce a
-          cookie file in the Netscape format that Wget expects.
-
-     If you cannot use `--load-cookies', there might still be an
-     alternative.  If your browser supports a "cookie manager", you can
-     use it to view the cookies used when accessing the site you're
-     mirroring.  Write down the name and value of the cookie, and
-     manually instruct Wget to send those cookies, bypassing the
-     "official" cookie support:
-
-          wget --cookies=off --header "Cookie: NAME=VALUE"
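The Netscape-format cookie file mentioned above is a tab-separated text file. As a rough sketch of how such a file maps to a `Cookie' header (hypothetical helpers, assuming well-formed input; a real parser must also honor expiry, the path, and the secure flag):

```python
# Sketch of reading a Netscape-format cookies.txt and building the
# Cookie header sent to a host.  `load_cookies' and `cookie_header'
# are illustrative helpers, not Wget's actual parser.
def load_cookies(text):
    """Yield (domain, path, name, value) tuples from cookies.txt text."""
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue                     # skip comments and blank lines
        fields = line.split("\t")
        if len(fields) != 7:
            continue                     # malformed line
        domain, _flag, path, _secure, _expiry, name, value = fields
        yield domain, path, name, value

def cookie_header(text, host):
    """Build a 'NAME=VALUE; ...' string for cookies matching HOST."""
    pairs = [f"{n}={v}" for d, _p, n, v in load_cookies(text)
             if host.endswith(d.lstrip("."))]
    return "; ".join(pairs)

sample = ".fly.srk.fer.hr\tTRUE\t/\tFALSE\t2147483647\tid\tabc123"
print(cookie_header(sample, "fly.srk.fer.hr"))   # id=abc123
```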
-
-`--save-cookies FILE'
-     Save cookies to FILE at the end of the session.  Cookies whose
-     expiry time is not specified, or those that have already expired,
-     are not saved.
-
-`--ignore-length'
-     Unfortunately, some HTTP servers (CGI programs, to be more
-     precise) send out bogus `Content-Length' headers, which makes Wget
-     go wild, as it thinks not all the document was retrieved.  You can
-     spot this syndrome if Wget retries getting the same document again
-     and again, each time claiming that the (otherwise normal)
-     connection has closed on the very same byte.
-
-     With this option, Wget will ignore the `Content-Length' header--as
-     if it never existed.
-
-`--header=ADDITIONAL-HEADER'
-     Define an ADDITIONAL-HEADER to be passed to the HTTP servers.
-     Headers must contain a `:' preceded by one or more non-blank
-     characters, and must not contain newlines.
-
-     You may define more than one additional header by specifying
-     `--header' more than once.
-
-          wget --header='Accept-Charset: iso-8859-2' \
-               --header='Accept-Language: hr'        \
-                 http://fly.srk.fer.hr/
-
-     Specification of an empty string as the header value will clear all
-     previous user-defined headers.
-
-`--proxy-user=USER'
-`--proxy-passwd=PASSWORD'
-     Specify the username USER and password PASSWORD for authentication
-     on a proxy server.  Wget will encode them using the `basic'
-     authentication scheme.
-
-`--referer=URL'
-     Include `Referer: URL' header in HTTP request.  Useful for
-     retrieving documents with server-side processing that assume they
-     are always being retrieved by interactive web browsers and only
-     come out properly when Referer is set to one of the pages that
-     point to them.
-
-`-s'
-`--save-headers'
-     Save the headers sent by the HTTP server to the file, preceding the
-     actual contents, with an empty line as the separator.
-
-`-U AGENT-STRING'
-`--user-agent=AGENT-STRING'
-     Identify as AGENT-STRING to the HTTP server.
-
-     The HTTP protocol allows the clients to identify themselves using a
-     `User-Agent' header field.  This enables distinguishing the WWW
-     software, usually for statistical purposes or for tracing of
-     protocol violations.  Wget normally identifies as `Wget/VERSION',
-     VERSION being the current version number of Wget.
-
-     However, some sites have been known to impose the policy of
-     tailoring the output according to the `User-Agent'-supplied
-     information.  While conceptually this is not such a bad idea, it
-     has been abused by servers denying information to clients other
-     than `Mozilla' or Microsoft `Internet Explorer'.  This option
-     allows you to change the `User-Agent' line issued by Wget.  Use of
-     this option is discouraged, unless you really know what you are
-     doing.
-
-FTP Options
-===========
-
-`-nr'
-`--dont-remove-listing'
-     Don't remove the temporary `.listing' files generated by FTP
-     retrievals.  Normally, these files contain the raw directory
-     listings received from FTP servers.  Not removing them can be
-     useful for debugging purposes, or when you want to be able to
-     easily check on the contents of remote server directories (e.g. to
-     verify that a mirror you're running is complete).
-
-     Note that even though Wget writes to a known filename for this
-     file, this is not a security hole in the scenario of a user making
-     `.listing' a symbolic link to `/etc/passwd' or something and
-     asking `root' to run Wget in his or her directory.  Depending on
-     the options used, either Wget will refuse to write to `.listing',
-     making the globbing/recursion/time-stamping operation fail, or the
-     symbolic link will be deleted and replaced with the actual
-     `.listing' file, or the listing will be written to a
-     `.listing.NUMBER' file.
-
-     Even though this situation isn't a problem, `root' should
-     never run Wget in a non-trusted user's directory.  A user could do
-     something as simple as linking `index.html' to `/etc/passwd' and
-     asking `root' to run Wget with `-N' or `-r' so the file will be
-     overwritten.
-
-`-g on/off'
-`--glob=on/off'
-     Turn FTP globbing on or off.  Globbing means you may use the
-     shell-like special characters ("wildcards"), like `*', `?', `['
-     and `]' to retrieve more than one file from the same directory at
-     once, like:
-
-          wget ftp://gnjilux.srk.fer.hr/*.msg
-
-     By default, globbing will be turned on if the URL contains a
-     globbing character.  This option may be used to turn globbing on
-     or off permanently.
-
-     You may have to quote the URL to protect it from being expanded by
-     your shell.  Globbing makes Wget look for a directory listing,
-     which is system-specific.  This is why it currently works only
-     with Unix FTP servers (and the ones emulating Unix `ls' output).
-
-`--passive-ftp'
-     Use the "passive" FTP retrieval scheme, in which the client
-     initiates the data connection.  This is sometimes required for FTP
-     to work behind firewalls.
-
-`--retr-symlinks'
-     Usually, when retrieving FTP directories recursively and a symbolic
-     link is encountered, the linked-to file is not downloaded.
-     Instead, a matching symbolic link is created on the local
-     filesystem.  The pointed-to file will not be downloaded unless
-     this recursive retrieval would have encountered it separately and
-     downloaded it anyway.
-
-     When `--retr-symlinks' is specified, however, symbolic links are
-     traversed and the pointed-to files are retrieved.  At this time,
-     this option does not cause Wget to traverse symlinks to
-     directories and recurse through them, but in the future it should
-     be enhanced to do this.
-
-     Note that when retrieving a file (not a directory) because it was
-     specified on the commandline, rather than because it was recursed
-     to, this option has no effect.  Symbolic links are always
-     traversed in this case.
-
-Recursive Retrieval Options
-===========================
-
-`-r'
-`--recursive'
-     Turn on recursive retrieving.  *Note Recursive Retrieval::, for
-     more details.
-
-`-l DEPTH'
-`--level=DEPTH'
-     Specify recursion maximum depth level DEPTH (*note Recursive
-     Retrieval::).  The default maximum depth is 5.
-
-`--delete-after'
-     This option tells Wget to delete every single file it downloads,
-     _after_ having done so.  It is useful for pre-fetching popular
-     pages through a proxy, e.g.:
-
-          wget -r -nd --delete-after http://whatever.com/~popular/page/
-
-     The `-r' option is to retrieve recursively, and `-nd' to not
-     create directories.
-
-     Note that `--delete-after' deletes files on the local machine.  It
-     does not issue the `DELE' command to remote FTP sites, for
-     instance.  Also note that when `--delete-after' is specified,
-     `--convert-links' is ignored, so `.orig' files are simply not
-     created in the first place.
-
-`-k'
-`--convert-links'
-     After the download is complete, convert the links in the document
-     to make them suitable for local viewing.  This affects not only
-     the visible hyperlinks, but any part of the document that links to
-     external content, such as embedded images, links to style sheets,
-     hyperlinks to non-HTML content, etc.
-
-     Each link will be changed in one of two ways:
-
-        * The links to files that have been downloaded by Wget will be
-          changed to refer to the file they point to as a relative link.
-
-          Example: if the downloaded file `/foo/doc.html' links to
-          `/bar/img.gif', also downloaded, then the link in `doc.html'
-          will be modified to point to `../bar/img.gif'.  This kind of
-          transformation works reliably for arbitrary combinations of
-          directories.
-
-        * The links to files that have not been downloaded by Wget will
-          be changed to include host name and absolute path of the
-          location they point to.
-
-          Example: if the downloaded file `/foo/doc.html' links to
-          `/bar/img.gif' (or to `../bar/img.gif'), then the link in
-          `doc.html' will be modified to point to
-          `http://HOSTNAME/bar/img.gif'.
-
-     Because of this, local browsing works reliably: if a linked file
-     was downloaded, the link will refer to its local name; if it was
-     not downloaded, the link will refer to its full Internet address
-     rather than presenting a broken link.  The fact that the former
-     links are converted to relative links ensures that you can move
-     the downloaded hierarchy to another directory.
-
-     Note that only at the end of the download can Wget know which
-     links have been downloaded.  Because of that, the work done by
-     `-k' will be performed at the end of all the downloads.
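The two rewrites described above can be sketched as follows. The `convert_link' helper is hypothetical (not Wget's implementation): downloaded targets become relative links, anything else becomes a full absolute URL.

```python
# Sketch of the two link rewrites -k performs: relative links for
# files Wget downloaded, absolute URLs for everything else.
# `convert_link' is an illustrative helper, not Wget's source.
import posixpath

def convert_link(doc_path, target_path, hostname, downloaded):
    """Rewrite a link found in doc_path that points at target_path."""
    if target_path in downloaded:
        # Relative link from the document's directory to the target.
        return posixpath.relpath(target_path, posixpath.dirname(doc_path))
    # Not fetched: point at its full Internet address instead.
    return f"http://{hostname}{target_path}"

downloaded = {"/foo/doc.html", "/bar/img.gif"}
print(convert_link("/foo/doc.html", "/bar/img.gif", "example.org", downloaded))
# ../bar/img.gif
print(convert_link("/foo/doc.html", "/baz/other.gif", "example.org", downloaded))
# http://example.org/baz/other.gif
```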
-
-`-K'
-`--backup-converted'
-     When converting a file, back up the original version with a `.orig'
-     suffix.  Affects the behavior of `-N' (*note HTTP Time-Stamping
-     Internals::).
-
-`-m'
-`--mirror'
-     Turn on options suitable for mirroring.  This option turns on
-     recursion and time-stamping, sets infinite recursion depth and
-     keeps FTP directory listings.  It is currently equivalent to `-r
-     -N -l inf -nr'.
-
-`-p'
-`--page-requisites'
-     This option causes Wget to download all the files that are
-     necessary to properly display a given HTML page.  This includes
-     such things as inlined images, sounds, and referenced stylesheets.
-
-     Ordinarily, when downloading a single HTML page, any requisite
-     documents that may be needed to display it properly are not
-     downloaded.  Using `-r' together with `-l' can help, but since
-     Wget does not ordinarily distinguish between external and inlined
-     documents, one is generally left with "leaf documents" that are
-     missing their requisites.
-
-     For instance, say document `1.html' contains an `<IMG>' tag
-     referencing `1.gif' and an `<A>' tag pointing to external document
-     `2.html'.  Say that `2.html' is similar but that its image is
-     `2.gif' and it links to `3.html'.  Say this continues up to some
-     arbitrarily high number.
-
-     If one executes the command:
-
-          wget -r -l 2 http://SITE/1.html
-
-     then `1.html', `1.gif', `2.html', `2.gif', and `3.html' will be
-     downloaded.  As you can see, `3.html' is without its requisite
-     `3.gif' because Wget is simply counting the number of hops (up to
-     2) away from `1.html' in order to determine where to stop the
-     recursion.  However, with this command:
-
-          wget -r -l 2 -p http://SITE/1.html
-
-     all the above files _and_ `3.html''s requisite `3.gif' will be
-     downloaded.  Similarly,
-
-          wget -r -l 1 -p http://SITE/1.html
-
-     will cause `1.html', `1.gif', `2.html', and `2.gif' to be
-     downloaded.  One might think that:
-
-          wget -r -l 0 -p http://SITE/1.html
-
-     would download just `1.html' and `1.gif', but unfortunately this
-     is not the case, because `-l 0' is equivalent to `-l inf'--that
-     is, infinite recursion.  To download a single HTML page (or a
-     handful of them, all specified on the commandline or in a `-i' URL
-     input file) and its (or their) requisites, simply leave off `-r'
-     and `-l':
-
-          wget -p http://SITE/1.html
-
-     Note that Wget will behave as if `-r' had been specified, but only
-     that single page and its requisites will be downloaded.  Links
-     from that page to external documents will not be followed.
-     Actually, to download a single page and all its requisites (even
-     if they exist on separate websites), and make sure the lot
-     displays properly locally, this author likes to use a few options
-     in addition to `-p':
-
-          wget -E -H -k -K -p http://SITE/DOCUMENT
-
-     To finish off this topic, it's worth knowing that Wget's idea of an
-     external document link is any URL specified in an `<A>' tag, an
-     `<AREA>' tag, or a `<LINK>' tag other than `<LINK
-     REL="stylesheet">'.
-
-Recursive Accept/Reject Options
-===============================
-
-`-A ACCLIST --accept ACCLIST'
-`-R REJLIST --reject REJLIST'
-     Specify comma-separated lists of file name suffixes or patterns to
-     accept or reject (*note Types of Files:: for more details).
-
-`-D DOMAIN-LIST'
-`--domains=DOMAIN-LIST'
-     Set domains to be followed.  DOMAIN-LIST is a comma-separated list
-     of domains.  Note that it does _not_ turn on `-H'.
-
-`--exclude-domains DOMAIN-LIST'
-     Specify the domains that are _not_ to be followed.  (*note
-     Spanning Hosts::).
-
-`--follow-ftp'
-     Follow FTP links from HTML documents.  Without this option, Wget
-     will ignore all the FTP links.
-
-`--follow-tags=LIST'
-     Wget has an internal table of HTML tag / attribute pairs that it
-     considers when looking for linked documents during a recursive
-     retrieval.  If a user wants only a subset of those tags to be
-     considered, however, he or she should specify such tags in a
-     comma-separated LIST with this option.
-
-`-G LIST'
-`--ignore-tags=LIST'
-     This is the opposite of the `--follow-tags' option.  To skip
-     certain HTML tags when recursively looking for documents to
-     download, specify them in a comma-separated LIST.
-
-     In the past, the `-G' option was the best bet for downloading a
-     single page and its requisites, using a commandline like:
-
-          wget -Ga,area -H -k -K -r http://SITE/DOCUMENT
-
-     However, the author of this option came across a page with tags
-     like `<LINK REL="home" HREF="/">' and came to the realization that
-     `-G' was not enough.  One can't just tell Wget to ignore `<LINK>',
-     because then stylesheets will not be downloaded.  Now the best bet
-     for downloading a single page and its requisites is the dedicated
-     `--page-requisites' option.
-
-`-H'
-`--span-hosts'
-     Enable spanning across hosts when doing recursive retrieving
-     (*note Spanning Hosts::).
-
-`-L'
-`--relative'
-     Follow relative links only.  Useful for retrieving a specific home
-     page without any distractions, not even those from the same hosts
-     (*note Relative Links::).
-
-`-I LIST'
-`--include-directories=LIST'
-     Specify a comma-separated list of directories you wish to follow
-     when downloading (*note Directory-Based Limits:: for more
-     details.)  Elements of LIST may contain wildcards.
-
-`-X LIST'
-`--exclude-directories=LIST'
-     Specify a comma-separated list of directories you wish to exclude
-     from download (*note Directory-Based Limits:: for more details.)
-     Elements of LIST may contain wildcards.
-
-`-np'
-
-`--no-parent'
-     Do not ever ascend to the parent directory when retrieving
-     recursively.  This is a useful option, since it guarantees that
-     only the files _below_ a certain hierarchy will be downloaded.
-     *Note Directory-Based Limits::, for more details.
-
-Recursive Retrieval
-*******************
-
-   GNU Wget is capable of traversing parts of the Web (or a single HTTP
-or FTP server), following links and directory structure.  We refer to
-this as to "recursive retrieving", or "recursion".
-
-   With HTTP URLs, Wget retrieves and parses the HTML document at the
-given URL, retrieving the files it refers to through markup such as
-`href' or `src'.  If the freshly downloaded
-file is also of type `text/html', it will be parsed and followed
-further.
-
-   Recursive retrieval of HTTP and HTML content is "breadth-first".
-This means that Wget first downloads the requested HTML document, then
-the documents linked from that document, then the documents linked by
-them, and so on.  In other words, Wget first downloads the documents at
-depth 1, then those at depth 2, and so on until the specified maximum
-depth.
-
-   The maximum "depth" to which the retrieval may descend is specified
-with the `-l' option.  The default maximum depth is five layers.
-
-   When retrieving an FTP URL recursively, Wget will retrieve all the
-data from the given directory tree (including the subdirectories up to
-the specified depth) on the remote server, creating its mirror image
-locally.  FTP retrieval is also limited by the `depth' parameter.
-Unlike HTTP recursion, FTP recursion is performed depth-first.
-
-   By default, Wget will create a local directory tree, corresponding to
-the one found on the remote server.
-
-   Recursive retrieving can find a number of applications, the most
-important of which is mirroring.  It is also useful for WWW
-presentations, and any other opportunities where slow network
-connections should be bypassed by storing the files locally.
-
-   You should be warned that recursive downloads can overload the remote
-servers.  Because of that, many administrators frown upon them and may
-ban access from your site if they detect very fast downloads of big
-amounts of content.  When downloading from Internet servers, consider
-using the `-w' option to introduce a delay between accesses to the
-server.  The download will take a while longer, but the server
-administrator will not be alarmed by your rudeness.
-
-   Of course, recursive download may cause problems on your machine.  If
-left to run unchecked, it can easily fill up the disk.  If downloading
-from a local network, it can also take up bandwidth on the system, as
-well as consume memory and CPU.
-
-   Try to specify the criteria that match the kind of download you are
-trying to achieve.  If you want to download only one page, use
-`--page-requisites' without any additional recursion.  If you want to
-download things under one directory, use `-np' to avoid downloading
-things from other directories.  If you want to download all the files
-from one directory, use `-l 1' to make sure the recursion depth never
-exceeds one.  *Note Following Links::, for more information about this.
-
-   Recursive retrieval should be used with care.  Don't say you were not
-warned.
-
-Following Links
-***************
-
-   When retrieving recursively, one does not wish to retrieve loads of
-unnecessary data.  Most of the time the users bear in mind exactly what
-they want to download, and want Wget to follow only specific links.
-
-   For example, if you wish to download the music archive from
-`fly.srk.fer.hr', you will not want to download all the home pages that
-happen to be referenced by an obscure part of the archive.
-
-   Wget possesses several mechanisms that allow you to fine-tune which
-links it will follow.
-
-Spanning Hosts
-==============
-
-   Wget's recursive retrieval normally refuses to visit hosts different
-than the one you specified on the command line.  This is a reasonable
-default; without it, every retrieval would have the potential to turn
-your Wget into a small version of Google.
-
-   However, visiting different hosts, or "host spanning," is sometimes
-a useful option.  Maybe the images are served from a different server.
-Maybe you're mirroring a site that consists of pages interlinked between
-three servers.  Maybe the server has two equivalent names, and the HTML
-pages refer to both interchangeably.
-
-Span to any host--`-H'
-     The `-H' option turns on host spanning, thus allowing Wget's
-     recursive run to visit any host referenced by a link.  Unless
-     sufficient recursion-limiting criteria are applied, these
-     foreign hosts will typically link to yet more hosts, and so on
-     until Wget ends up sucking up much more data than you have
-     intended.
-
-Limit spanning to certain domains--`-D'
-     The `-D' option allows you to specify the domains that will be
-     followed, thus limiting the recursion only to the hosts that
-     belong to these domains.  Obviously, this makes sense only in
-     conjunction with `-H'.  A typical example would be downloading the
-     contents of `www.server.com', but allowing downloads from
-     `images.server.com', etc.:
-
-          wget -rH -Dserver.com http://www.server.com/
-
-     You can specify more than one address by separating them with a
-     comma, e.g. `-Ddomain1.com,domain2.com'.
-
-Keep download off certain domains--`--exclude-domains'
-     If there are domains you want to exclude specifically, you can do
-     it with `--exclude-domains', which accepts the same type of
-     arguments as `-D', but will _exclude_ all the listed domains.  For
-     example, if you want to download all the hosts from `foo.edu'
-     domain, with the exception of `sunsite.foo.edu', you can do it like
-     this:
-
-          wget -rH -Dfoo.edu --exclude-domains sunsite.foo.edu \
-              http://www.foo.edu/
-
-Types of Files
-==============
-
-   When downloading material from the web, you will often want to
-restrict the retrieval to only certain file types.  For example, if you
-are interested in downloading GIFs, you will not be overjoyed to get
-loads of PostScript documents, and vice versa.
-
-   Wget offers two options to deal with this problem.  Each option
-description lists a short name, a long name, and the equivalent command
-in `.wgetrc'.
-
-`-A ACCLIST'
-`--accept ACCLIST'
-`accept = ACCLIST'
-     The argument to `--accept' option is a list of file suffixes or
-     patterns that Wget will download during recursive retrieval.  A
-     suffix is the ending part of a file name, and consists of "normal"
-     letters, e.g. `gif' or `.jpg'.  A matching pattern contains
-     shell-like wildcards, e.g. `books*' or `zelazny*196[0-9]*'.
-
-     So, specifying `wget -A gif,jpg' will make Wget download only the
-     files ending with `gif' or `jpg', i.e. GIFs and JPEGs.  On the
-     other hand, `wget -A "zelazny*196[0-9]*"' will download only files
-     beginning with `zelazny' and containing numbers from 1960 to 1969
-     anywhere within.  Look up the manual of your shell for a
-     description of how pattern matching works.
-
-     Of course, any number of suffixes and patterns can be combined
-     into a comma-separated list, and given as an argument to `-A'.
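The suffix-versus-pattern distinction above can be sketched in Python using shell-style matching. The `accepted' helper is hypothetical; it assumes, as the manual describes, that an item containing a wildcard character is treated as a pattern and anything else as a plain suffix.

```python
# Sketch of -A matching: items with shell wildcards are matched as
# patterns, others as suffixes.  `accepted' is an illustrative
# helper, not Wget's actual matching code.
import fnmatch

def accepted(filename, acclist):
    """True if FILENAME matches any suffix or pattern in ACCLIST."""
    for item in acclist.split(","):
        if any(ch in item for ch in "*?["):
            if fnmatch.fnmatch(filename, item):   # wildcard pattern
                return True
        elif filename.endswith(item):             # plain suffix
            return True
    return False

print(accepted("logo.gif", "gif,jpg"))                          # True
print(accepted("paper.ps", "gif,jpg"))                          # False
print(accepted("zelazny_1967_novel.txt", "zelazny*196[0-9]*"))  # True
```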
-
-`-R REJLIST'
-`--reject REJLIST'
-`reject = REJLIST'
-     The `--reject' option works the same way as `--accept', only its
-     logic is the reverse; Wget will download all files _except_ the
-     ones matching the suffixes (or patterns) in the list.
-
-     So, if you want to download a whole page except for the cumbersome
-     MPEGs and .AU files, you can use `wget -R mpg,mpeg,au'.
-     Analogously, to download all files except the ones beginning with
-     `bjork', use `wget -R "bjork*"'.  The quotes are to prevent
-     expansion by the shell.
-
-   The `-A' and `-R' options may be combined to achieve even better
-fine-tuning of which files to retrieve.  E.g. `wget -A "*zelazny*" -R
-.ps' will download all the files having `zelazny' as a part of their
-name, but _not_ the PostScript files.
-
-   Note that these two options do not affect the downloading of HTML
-files; Wget must load all the HTMLs to know where to go at
-all--recursive retrieval would make no sense otherwise.
-
-Directory-Based Limits
-======================
-
-   Regardless of other link-following facilities, it is often useful to
-place the restriction of what files to retrieve based on the directories
-those files are placed in.  There can be many reasons for this--the
-home pages may be organized in a reasonable directory structure; or some
-directories may contain useless information, e.g. `/cgi-bin' or `/dev'
-directories.
-
-   Wget offers three different options to deal with this requirement.
-Each option description lists a short name, a long name, and the
-equivalent command in `.wgetrc'.
-
-`-I LIST'
-`--include LIST'
-`include_directories = LIST'
-     The `-I' option accepts a comma-separated list of directories to
-     include in the retrieval.  Any other directories will simply be
-     ignored.  The directories are absolute paths.
-
-     So, if you wish to download from `http://host/people/bozo/'
-     following only links to bozo's colleagues in the `/people'
-     directory and the bogus scripts in `/cgi-bin', you can specify:
-
-          wget -I /people,/cgi-bin http://host/people/bozo/
-
-`-X LIST'
-`--exclude LIST'
-`exclude_directories = LIST'
-     The `-X' option is exactly the reverse of `-I'--this is a list of
-     directories _excluded_ from the download.  E.g. if you do not want
-     Wget to download things from `/cgi-bin' directory, specify `-X
-     /cgi-bin' on the command line.
-
-     The same as with `-A'/`-R', these two options can be combined to
-     get a better fine-tuning of downloading subdirectories.  E.g. if
-     you want to load all the files from `/pub' hierarchy except for
-     `/pub/worthless', specify `-I/pub -X/pub/worthless'.
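The combined `-I'/`-X' decision can be sketched as a simple prefix check. The `dir_allowed' helper is hypothetical, not Wget's code; it assumes a path passes when it lies under some included directory and under no excluded one, as in the `-I/pub -X/pub/worthless' example above.

```python
# Sketch of combining -I and -X directory limits.  `dir_allowed'
# is an illustrative helper, not Wget's implementation.
def dir_allowed(path, include=None, exclude=()):
    """Decide whether PATH passes the -I/-X directory limits."""
    def under(p, d):
        # PATH is the directory itself or something beneath it.
        return p == d or p.startswith(d.rstrip("/") + "/")
    if include is not None and not any(under(path, d) for d in include):
        return False          # not inside any included directory
    return not any(under(path, d) for d in exclude)

print(dir_allowed("/pub/gnu/wget.tar.gz", include=["/pub"],
                  exclude=["/pub/worthless"]))     # True
print(dir_allowed("/pub/worthless/junk", include=["/pub"],
                  exclude=["/pub/worthless"]))     # False
```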
-
-`-np'
-`--no-parent'
-`no_parent = on'
-     The simplest, and often very useful, way of limiting directories
-     is disallowing retrieval of the links that refer to the hierarchy
-     "above" the beginning directory, i.e. disallowing ascent to
-     the parent directory/directories.
-
-     The `--no-parent' option (short `-np') is useful in this case.
-     Using it guarantees that you will never leave the existing
-     hierarchy.  Supposing you issue Wget with:
-
-          wget -r --no-parent http://somehost/~luzer/my-archive/
-
-     You may rest assured that none of the references to
-     `/~his-girls-homepage/' or `/~luzer/all-my-mpegs/' will be
-     followed.  Only the archive you are interested in will be
-     downloaded.  Essentially, `--no-parent' is similar to
-     `-I/~luzer/my-archive', only it handles redirections in a more
-     intelligent fashion.
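The `--no-parent' rule amounts to a check that a candidate path stays at or below the starting directory. A minimal sketch, using a hypothetical `below_start' helper (Wget's real check also handles redirections, as noted above):

```python
# Sketch of the --no-parent test: follow a link only if its path
# stays under the starting URL's directory.  `below_start' is an
# illustrative helper, not Wget's source.
import posixpath

def below_start(start_path, candidate_path):
    """True if CANDIDATE_PATH does not ascend above START_PATH's directory."""
    base = posixpath.dirname(start_path)       # e.g. /~luzer/my-archive
    norm = posixpath.normpath(candidate_path)  # resolve ../ segments
    return norm == base or norm.startswith(base + "/")

start = "/~luzer/my-archive/"
print(below_start(start, "/~luzer/my-archive/song.mp3"))    # True
print(below_start(start, "/~luzer/all-my-mpegs/clip.mpg"))  # False
```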
-
-Relative Links
-==============
-
-   When `-L' is turned on, only the relative links are ever followed.
-Relative links are here defined as those that do not refer to the web
-server root.  For example, these links are relative:
-
-     <a href="foo.gif">
-     <a href="foo/bar.gif">
-     <a href="../foo/bar.gif">
-
-   These links are not relative:
-
-     <a href="/foo.gif">
-     <a href="/foo/bar.gif">
-     <a href="http://www.server.com/foo/bar.gif">
-
-   Using this option guarantees that recursive retrieval will not span
-hosts, even without `-H'.  In simple cases it also allows downloads to
-"just work" without having to convert links.
-
-   This option is probably not very useful and might be removed in a
-future release.
-
-Following FTP Links
-===================
-
-   The rules for FTP are somewhat specific, as it is necessary for them
-to be.  FTP links in HTML documents are often included for purposes of
-reference, and it is often inconvenient to download them by default.
-
-   To have FTP links followed from HTML documents, you need to specify
-the `--follow-ftp' option.  Having done that, FTP links will span hosts
-regardless of `-H' setting.  This is logical, as FTP links rarely point
-to the same host where the HTTP server resides.  For similar reasons,
-the `-L' option has no effect on such downloads.  On the other hand,
-domain acceptance (`-D') and suffix rules (`-A' and `-R') apply
-normally.
-
-   Also note that followed links to FTP directories will not be
-retrieved recursively further.
-
-Time-Stamping
-*************
-
-   One of the most important aspects of mirroring information from the
-Internet is updating your archives.
-
-   Downloading the whole archive again and again, just to replace a few
-changed files is expensive, both in terms of wasted bandwidth and money,
-and the time to do the update.  This is why all the mirroring tools
-offer the option of incremental updating.
-
-   Such an updating mechanism means that the remote server is scanned in
-search of "new" files.  Only those new files will be downloaded in the
-place of the old ones.
-
-   A file is considered new if one of these two conditions is met:
-
-  1. A file of that name does not already exist locally.
-
-  2. A file of that name does exist, but the remote file was modified
-     more recently than the local file.
-
-   To implement this, the program needs to be aware of the time of last
-modification of both local and remote files.  We call this information
-the "time-stamp" of a file.
-
-   Time-stamping in GNU Wget is turned on using the `--timestamping'
-(`-N') option, or through the `timestamping = on' directive in
-`.wgetrc'.  With this option, for each file it intends to download,
-Wget will check whether a local file of the same name exists.  If it
-does, and the remote file is older, Wget will not download it.
-
-   If the local file does not exist, or the sizes of the files do not
-match, Wget will download the remote file no matter what the time-stamps
-say.
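
The decision rule above can be sketched in a few lines of Python.  This
is an illustrative model of the documented behaviour, not Wget's actual
implementation; the function name and argument layout are invented for
the sketch.

```python
import os

def should_download(local_path, remote_mtime, remote_size):
    """Illustrative model of Wget's `-N' decision rule.

    Download when the local file is missing, when the sizes differ,
    or when the remote file is newer than the local one.
    """
    if not os.path.exists(local_path):
        return True                       # no local file of that name
    st = os.stat(local_path)
    if st.st_size != remote_size:
        return True                       # size mismatch overrides time-stamps
    return remote_mtime > st.st_mtime     # remote modified more recently
```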
-
-Time-Stamping Usage
-===================
-
-   The usage of time-stamping is simple.  Say you would like to
-download a file so that it keeps its date of modification.
-
-     wget -S http://www.gnu.ai.mit.edu/
-
-   A simple `ls -l' shows that the time stamp on the local file equals
-the state of the `Last-Modified' header, as returned by the server.  As
-you can see, the time-stamping info is preserved locally, even without
-`-N' (at least for HTTP).
-
-   Several days later, you would like Wget to check if the remote file
-has changed, and download it if it has.
-
-     wget -N http://www.gnu.ai.mit.edu/
-
-   Wget will ask the server for the last-modified date.  If the local
-file has the same timestamp as the server, or a newer one, the remote
-file will not be re-fetched.  However, if the remote file is more
-recent, Wget will proceed to fetch it.
-
-   The same goes for FTP.  For example:
-
-     wget "ftp://ftp.ifi.uio.no/pub/emacs/gnus/*"
-
-   (The quotes around that URL are to prevent the shell from trying to
-interpret the `*'.)
-
-   After download, a local directory listing will show that the
-timestamps match those on the remote server.  Reissuing the command
-with `-N' will make Wget re-fetch _only_ the files that have been
-modified since the last download.
-
-   If you wished to mirror the GNU archive every week, you would use a
-command like the following, weekly:
-
-     wget --timestamping -r ftp://ftp.gnu.org/pub/gnu/
-
-   Note that time-stamping will only work for files for which the server
-gives a timestamp.  For HTTP, this depends on getting a `Last-Modified'
-header.  For FTP, this depends on getting a directory listing with
-dates in a format that Wget can parse (*note FTP Time-Stamping
-Internals::).
-
-HTTP Time-Stamping Internals
-============================
-
-   Time-stamping in HTTP is implemented by checking the
-`Last-Modified' header.  If you wish to retrieve the file `foo.html'
-through HTTP, Wget will check whether `foo.html' exists locally.  If it
-doesn't, `foo.html' will be retrieved unconditionally.
-
-   If the file does exist locally, Wget will first check its local
-time-stamp (similar to the way `ls -l' checks it), and then send a
-`HEAD' request to the remote server, asking for information about the
-remote file.
-
-   The `Last-Modified' header is examined to find which file was
-modified more recently (which makes it "newer").  If the remote file is
-newer, it will be downloaded; if it is older, Wget will give up.(1)
-
-   When `--backup-converted' (`-K') is specified in conjunction with
-`-N', server file `X' is compared to local file `X.orig', if extant,
-rather than being compared to local file `X', which will always differ
-if it's been converted by `--convert-links' (`-k').
-
-   Arguably, HTTP time-stamping should be implemented using the
-`If-Modified-Since' request.
-
-   ---------- Footnotes ----------
-
-   (1) As an additional check, Wget will look at the `Content-Length'
-header, and compare the sizes; if they are not the same, the remote
-file will be downloaded no matter what the time-stamp says.
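
As an illustration of the comparison (not Wget's C code), the
`Last-Modified' date from a `HEAD' response can be parsed and compared
against the local time-stamp with Python's standard library:

```python
from email.utils import parsedate_to_datetime

def remote_is_newer(last_modified, local_mtime):
    """Compare an HTTP `Last-Modified' header value against a local
    modification time given as a Unix epoch timestamp."""
    remote_epoch = parsedate_to_datetime(last_modified).timestamp()
    return remote_epoch > local_mtime
```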
-
-FTP Time-Stamping Internals
-===========================
-
-   In theory, FTP time-stamping works much the same as HTTP, only FTP
-has no headers--time-stamps must be ferreted out of directory listings.
-
-   If an FTP download is recursive or uses globbing, Wget will use the
-FTP `LIST' command to get a file listing for the directory containing
-the desired file(s).  It will try to analyze the listing, treating it
-like Unix `ls -l' output, extracting the time-stamps.  The rest is
-exactly the same as for HTTP.  Note that when retrieving individual
-files from an FTP server without using globbing or recursion, listing
-files will not be downloaded (and thus files will not be time-stamped)
-unless `-N' is specified.
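
For illustration, extracting a time-stamp from a typical Unix-style
`LIST' line might look like the sketch below.  This is a deliberate
simplification: real listing parsers must handle many more layouts,
and the `Mon DD HH:MM' form shown here (used for files modified within
the last six months) omits the year, which the caller must supply.

```python
import datetime

def parse_list_mtime(line, year=2002):
    """Pull the month/day/time fields out of a Unix `ls -l' style
    listing line, assuming the `Mon DD HH:MM' date form."""
    fields = line.split()
    month, day, clock = fields[5], fields[6], fields[7]
    hour, minute = (int(x) for x in clock.split(":"))
    month_num = datetime.datetime.strptime(month, "%b").month
    return datetime.datetime(year, month_num, int(day), hour, minute)
```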
-
-   The assumption that every directory listing is Unix-style may
-sound extremely constraining, but in practice it is not, as many
-non-Unix FTP servers use the Unixoid listing format because most (all?)
-of the clients understand it.  Bear in mind that RFC959 defines no
-standard way to get a file list, let alone the time-stamps.  We can
-only hope that a future standard will define this.
-
-   Another non-standard solution involves the use of the `MDTM'
-command, which is supported by some FTP servers (including the popular
-`wu-ftpd') and returns the exact time of the specified file.  Wget
-may support this command in the future.
-
-Startup File
-************
-
-   Once you know how to change default settings of Wget through command
-line arguments, you may wish to make some of those settings permanent.
-You can do that in a convenient way by creating the Wget startup
-file--`.wgetrc'.
-
-   While `.wgetrc' is the "main" initialization file, it is
-convenient to have a special facility for storing passwords.  Thus Wget
-reads and interprets the contents of `$HOME/.netrc', if it finds it.
-You can find the `.netrc' format described in your system manuals.
-
-   Wget reads `.wgetrc' upon startup, recognizing a limited set of
-commands.
-
-Wgetrc Location
-===============
-
-   When initializing, Wget will look for a "global" startup file,
-`/usr/local/etc/wgetrc' by default (or some prefix other than
-`/usr/local', if Wget was not installed there) and read commands from
-there, if it exists.
-
-   Then it will look for the user's file.  If the environment variable
-`WGETRC' is set, Wget will try to load that file.  Failing that, no
-further attempts will be made.
-
-   If `WGETRC' is not set, Wget will try to load `$HOME/.wgetrc'.
-
-   The fact that the user's settings are loaded after the system-wide
-ones means that in case of collision the user's wgetrc _overrides_ the
-system-wide wgetrc (in `/usr/local/etc/wgetrc' by default).  Fascist
-admins, away!
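
The lookup order described above can be modelled as follows.  The
function name and the environment handling are simplifications invented
for the sketch, not Wget's exact code:

```python
import os

def wgetrc_files(system_wgetrc="/usr/local/etc/wgetrc"):
    """Return the startup files Wget would read, in load order.

    The system file is read first, so a user file loaded later
    overrides any colliding settings.
    """
    files = []
    if os.path.exists(system_wgetrc):
        files.append(system_wgetrc)
    user = os.environ.get("WGETRC")
    if user is not None:
        files.append(user)               # WGETRC wins; no further fallback
    elif "HOME" in os.environ:
        files.append(os.path.join(os.environ["HOME"], ".wgetrc"))
    return files
```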
-
-Wgetrc Syntax
-=============
-
-   The syntax of a wgetrc command is simple:
-
-     variable = value
-
-   The "variable" will also be called "command".  Valid "values" are
-different for different commands.
-
-   The commands are case-insensitive and underscore-insensitive.  Thus
-`DIr__PrefiX' is the same as `dirprefix'.  Empty lines, lines beginning
-with `#' and lines containing white-space only are discarded.
-
-   Commands that expect a comma-separated list will clear the list on an
-empty command.  So, if you wish to reset the rejection list specified in
-global `wgetrc', you can do it with:
-
-     reject =
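
The normalization rules above (case- and underscore-insensitive command
names, comments and blank lines discarded) can be sketched like this;
the function is a hypothetical illustration, not Wget's parser:

```python
def parse_wgetrc_line(line):
    """Parse one wgetrc line into (command, value), or None for
    blank lines and `#' comments.  Command names are lowercased and
    stripped of underscores, so `DIr__PrefiX' equals `dirprefix'."""
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    name, _, value = line.partition("=")
    command = name.strip().lower().replace("_", "")
    return command, value.strip()
```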
-
-Wgetrc Commands
-===============
-
-   The complete set of commands is listed below.  Legal values are
-listed after the `='.  Simple Boolean values can be set or unset using
-`on' and `off' or `1' and `0'.  A fancier kind of Boolean allowed in
-some cases is the "lockable Boolean", which may be set to `on', `off',
-`always', or `never'.  If an option is set to `always' or `never', that
-value will be locked in for the duration of the Wget
-invocation--commandline options will not override.
-
-   Some commands take pseudo-arbitrary values.  ADDRESS values can be
-hostnames or dotted-quad IP addresses.  N can be any positive integer,
-or `inf' for infinity, where appropriate.  STRING values can be any
-non-empty string.
-
-   Most of these commands have commandline equivalents (*note
-Invoking::), though some of the more obscure or rarely used ones do not.
-
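The precedence of a "lockable Boolean" can be modelled as below; this
is an invented illustration of the rule, not code taken from Wget:

```python
def effective_value(wgetrc_value, commandline_value=None):
    """Resolve a lockable Boolean: `always' and `never' lock the
    value for the whole invocation; otherwise a command-line option
    overrides the wgetrc setting."""
    if wgetrc_value == "always":
        return True
    if wgetrc_value == "never":
        return False
    if commandline_value is not None:
        return commandline_value
    return wgetrc_value == "on"
```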
-accept/reject = STRING
-     Same as `-A'/`-R' (*note Types of Files::).
-
-add_hostdir = on/off
-     Enable/disable host-prefixed file names.  `-nH' disables it.
-
-continue = on/off
-     If set to on, force continuation of preexistent partially retrieved
-     files.  See `-c' before setting it.
-
-background = on/off
-     Enable/disable going to background--the same as `-b' (which
-     enables it).
-
-backup_converted = on/off
-     Enable/disable saving pre-converted files with the suffix
-     `.orig'--the same as `-K' (which enables it).
-
-base = STRING
-     Consider relative URLs in input files (forced to be interpreted
-     as HTML) as being relative to STRING--the same as `-B'.
-
-bind_address = ADDRESS
-     Bind to ADDRESS, like the `--bind-address' option.
-
-cache = on/off
-     When set to off, disallow server-caching.  See the `-C' option.
-
-convert_links = on/off
-     Convert non-relative links locally.  The same as `-k'.
-
-cookies = on/off
-     When set to off, disallow cookies.  See the `--cookies' option.
-
-load_cookies = FILE
-     Load cookies from FILE.  See `--load-cookies'.
-
-save_cookies = FILE
-     Save cookies to FILE.  See `--save-cookies'.
-
-cut_dirs = N
-     Ignore N remote directory components.
-
-debug = on/off
-     Debug mode, same as `-d'.
-
-delete_after = on/off
-     Delete after download--the same as `--delete-after'.
-
-dir_prefix = STRING
-     Top of directory tree--the same as `-P'.
-
-dirstruct = on/off
-     Turning dirstruct on or off--the same as `-x' or `-nd',
-     respectively.
-
-domains = STRING
-     Same as `-D' (*note Spanning Hosts::).
-
-dot_bytes = N
-     Specify the number of bytes "contained" in a dot, as seen
-     throughout the retrieval (1024 by default).  You can postfix the
-     value with `k' or `m', representing kilobytes and megabytes,
-     respectively.  With dot settings you can tailor the dot retrieval
-     to suit your needs, or you can use the predefined "styles" (*note
-     Download Options::).
-
-dots_in_line = N
-     Specify the number of dots that will be printed in each line
-     throughout the retrieval (50 by default).
-
-dot_spacing = N
-     Specify the number of dots in a single cluster (10 by default).
-
-exclude_directories = STRING
-     Specify a comma-separated list of directories you wish to exclude
-     from download--the same as `-X' (*note Directory-Based Limits::).
-
-exclude_domains = STRING
-     Same as `--exclude-domains' (*note Spanning Hosts::).
-
-follow_ftp = on/off
-     Follow FTP links from HTML documents--the same as `--follow-ftp'.
-
-follow_tags = STRING
-     Only follow certain HTML tags when doing a recursive retrieval,
-     just like `--follow-tags'.
-
-force_html = on/off
-     If set to on, force the input filename to be regarded as an HTML
-     document--the same as `-F'.
-
-ftp_proxy = STRING
-     Use STRING as FTP proxy, instead of the one specified in
-     environment.
-
-glob = on/off
-     Turn globbing on/off--the same as `-g'.
-
-header = STRING
-     Define an additional header, like `--header'.
-
-html_extension = on/off
-     Add a `.html' extension to `text/html' files without it, like `-E'.
-
-http_passwd = STRING
-     Set HTTP password.
-
-http_proxy = STRING
-     Use STRING as HTTP proxy, instead of the one specified in
-     environment.
-
-http_user = STRING
-     Set HTTP user to STRING.
-
-ignore_length = on/off
-     When set to on, ignore `Content-Length' header; the same as
-     `--ignore-length'.
-
-ignore_tags = STRING
-     Ignore certain HTML tags when doing a recursive retrieval, just
-     like `-G' / `--ignore-tags'.
-
-include_directories = STRING
-     Specify a comma-separated list of directories you wish to follow
-     when downloading--the same as `-I'.
-
-input = STRING
-     Read the URLs from STRING, like `-i'.
-
-kill_longer = on/off
-     Consider data longer than specified in the `Content-Length'
-     header as invalid (and retry getting it).  The default behaviour
-     is to save as much data as there is, provided it amounts to at
-     least the value in `Content-Length'.
-
-logfile = STRING
-     Set logfile--the same as `-o'.
-
-login = STRING
-     Your user name on the remote machine, for FTP.  Defaults to
-     `anonymous'.
-
-mirror = on/off
-     Turn mirroring on/off.  The same as `-m'.
-
-netrc = on/off
-     Turn reading netrc on or off.
-
-noclobber = on/off
-     Same as `-nc'.
-
-no_parent = on/off
-     Disallow retrieving outside the directory hierarchy, like
-     `--no-parent' (*note Directory-Based Limits::).
-
-no_proxy = STRING
-     Use STRING as the comma-separated list of domains to avoid in
-     proxy loading, instead of the one specified in environment.
-
-output_document = STRING
-     Set the output filename--the same as `-O'.
-
-page_requisites = on/off
-     Download all ancillary documents necessary for a single HTML page
-     to display properly--the same as `-p'.
-
-passive_ftp = on/off/always/never
-     Set passive FTP--the same as `--passive-ftp'.  Some scripts and
-     `.pm' (Perl module) files download files using `wget
-     --passive-ftp'.  If your firewall does not allow this, you can set
-     `passive_ftp = never' to override the commandline.
-
-passwd = STRING
-     Set your FTP password to STRING.  Without this setting, the
-     password defaults to address@hidden'.
-
-progress = STRING
-     Set the type of the progress indicator.  Legal types are "dot" and
-     "bar".
-
-proxy_user = STRING
-     Set proxy authentication user name to STRING, like `--proxy-user'.
-
-proxy_passwd = STRING
-     Set proxy authentication password to STRING, like `--proxy-passwd'.
-
-referer = STRING
-     Set HTTP `Referer:' header just like `--referer'.  (Note it was
-     the folks who wrote the HTTP spec who got the spelling of
-     "referrer" wrong.)
-
-quiet = on/off
-     Quiet mode--the same as `-q'.
-
-quota = QUOTA
-     Specify the download quota, which is useful to put in the global
-     `wgetrc'.  When download quota is specified, Wget will stop
-     retrieving after the download sum has become greater than quota.
-     The quota can be specified in bytes (default), kbytes (`k'
-     appended) or mbytes (`m' appended).  Thus `quota = 5m' will set
-     the quota to 5 mbytes.  Note that the user's startup file
-     overrides system settings.
-
-reclevel = N
-     Recursion level--the same as `-l'.
-
-recursive = on/off
-     Recursive on/off--the same as `-r'.
-
-relative_only = on/off
-     Follow only relative links--the same as `-L' (*note Relative
-     Links::).
-
-remove_listing = on/off
-     If set to on, remove FTP listings downloaded by Wget.  Setting it
-     to off is the same as `-nr'.
-
-retr_symlinks = on/off
-     When set to on, retrieve symbolic links as if they were plain
-     files; the same as `--retr-symlinks'.
-
-robots = on/off
-     Use (or not) `/robots.txt' file (*note Robots::).  Be sure to know
-     what you are doing before changing the default (which is `on').
-
-server_response = on/off
-     Choose whether or not to print the HTTP and FTP server
-     responses--the same as `-S'.
-
-span_hosts = on/off
-     Same as `-H'.
-
-timeout = N
-     Set timeout value--the same as `-T'.
-
-timestamping = on/off
-     Turn timestamping on/off.  The same as `-N' (*note
-     Time-Stamping::).
-
-tries = N
-     Set number of retries per URL--the same as `-t'.
-
-use_proxy = on/off
-     Turn proxy support on/off.  The same as `-Y'.
-
-verbose = on/off
-     Turn verbose on/off--the same as `-v'/`-nv'.
-
-wait = N
-     Wait N seconds between retrievals--the same as `-w'.
-
-waitretry = N
-     Wait up to N seconds between retries of failed retrievals
-     only--the same as `--waitretry'.  Note that this is turned on by
-     default in the global `wgetrc'.
-
-randomwait = on/off
-     Turn random between-request wait times on or off. The same as
-     `--random-wait'.
-
-Sample Wgetrc
-=============
-
-   This is the sample initialization file, as given in the distribution.
-It is divided in two sections--one for global usage (suitable for the
-global startup file), and one for local usage (suitable for
-`$HOME/.wgetrc').  Be careful about the things you change.
-
-   Note that almost all the lines are commented out.  For a command to
-have any effect, you must remove the `#' character at the beginning of
-its line.
-
-     ###
-     ### Sample Wget initialization file .wgetrc
-     ###
-     
-     ## You can use this file to change the default behaviour of wget or to
-     ## avoid having to type many many command-line options. This file does
-     ## not contain a comprehensive list of commands -- look at the manual
-     ## to find out what you can put into this file.
-     ##
-     ## Wget initialization file can reside in /usr/local/etc/wgetrc
-     ## (global, for all users) or $HOME/.wgetrc (for a single user).
-     ##
-     ## To use the settings in this file, you will have to uncomment them,
-     ## as well as change them, in most cases, as the values on the
-     ## commented-out lines are the default values (e.g. "off").
-     
-     
-     ##
-     ## Global settings (useful for setting up in /usr/local/etc/wgetrc).
-     ## Think well before you change them, since they may reduce wget's
-     ## functionality, and make it behave contrary to the documentation:
-     ##
-     
-     # You can set retrieve quota for beginners by specifying a value
-     # optionally followed by 'K' (kilobytes) or 'M' (megabytes).  The
-     # default quota is unlimited.
-     #quota = inf
-     
-     # You can lower (or raise) the default number of retries when
-     # downloading a file (default is 20).
-     #tries = 20
-     
-     # Lowering the maximum depth of the recursive retrieval is handy to
-     # prevent newbies from going too "deep" when they unwittingly start
-     # the recursive retrieval.  The default is 5.
-     #reclevel = 5
-     
-     # Many sites are behind firewalls that do not allow initiation of
-     # connections from the outside.  On these sites you have to use the
-     # `passive' feature of FTP.  If you are behind such a firewall, you
-     # can turn this on to make Wget use passive FTP by default.
-     #passive_ftp = off
-     
-     # The "wait" command below makes Wget wait between every connection.
-     # If, instead, you want Wget to wait only between retries of failed
-     # downloads, set waitretry to maximum number of seconds to wait (Wget
-     # will use "linear backoff", waiting 1 second after the first failure
-     # on a file, 2 seconds after the second failure, etc. up to this max).
-     waitretry = 10
-     
-     
-     ##
-     ## Local settings (for a user to set in his $HOME/.wgetrc).  It is
-     ## *highly* undesirable to put these settings in the global file, since
-     ## they are potentially dangerous to "normal" users.
-     ##
-     ## Even when setting up your own ~/.wgetrc, you should know what you
-     ## are doing before doing so.
-     ##
-     
-     # Set this to on to use timestamping by default:
-     #timestamping = off
-     
-     # It is a good idea to make Wget send your email address in a `From:'
-     # header with your request (so that server administrators can contact
-     # you in case of errors).  Wget does *not* send `From:' by default.
-     #header = From: Your Name <address@hidden>
-     
-     # You can set up other headers, like Accept-Language.  Accept-Language
-     # is *not* sent by default.
-     #header = Accept-Language: en
-     
-     # You can set the default proxies for Wget to use for http and ftp.
-     # They will override the value in the environment.
-     #http_proxy = http://proxy.yoyodyne.com:18023/
-     #ftp_proxy = http://proxy.yoyodyne.com:18023/
-     
-     # If you do not want to use proxy at all, set this to off.
-     #use_proxy = on
-     
-     # You can customize the retrieval outlook.  Valid options are default,
-     # binary, mega and micro.
-     #dot_style = default
-     
-     # Setting this to off makes Wget not download /robots.txt.  Be sure to
-     # know *exactly* what /robots.txt is and how it is used before changing
-     # the default!
-     #robots = on
-     
-     # It can be useful to make Wget wait between connections.  Set this to
-     # the number of seconds you want Wget to wait.
-     #wait = 0
-     
-     # You can force creating directory structure, even if a single file
-     # is being retrieved, by setting this to on.
-     #dirstruct = off
-     
-     # You can turn on recursive retrieving by default (don't do this if
-     # you are not sure you know what it means) by setting this to on.
-     #recursive = off
-     
-     # To always back up file X as X.orig before converting its links (due
-     # to -k / --convert-links / convert_links = on having been specified),
-     # set this variable to on:
-     #backup_converted = off
-     
-     # To have Wget follow FTP links from HTML files by default, set this
-     # to on:
-     #follow_ftp = off
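
The linear-backoff behaviour described in the `waitretry' comment of
the sample file can be sketched as a one-liner; the function name is
invented for the illustration:

```python
def retry_wait(failure_count, waitretry=10):
    """Seconds to wait before the next retry under linear backoff:
    1s after the first failure, 2s after the second, and so on,
    capped at the waitretry maximum."""
    return min(failure_count, waitretry)
```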
-
-Examples
-********
-
-   The examples are divided into three sections loosely based on their
-complexity.
-
-Simple Usage
-============
-
-   * Say you want to download a URL.  Just type:
-
-          wget http://fly.srk.fer.hr/
-
-   * But what will happen if the connection is slow, and the file is
-     lengthy?  The connection will probably fail before the whole file
-     is retrieved, more than once.  In this case, Wget will try getting
-     the file until it either gets the whole of it, or exceeds the
-     default number of retries (this being 20).  It is easy to change
-     the number of tries to 45, to ensure that the whole file will
-     arrive safely:
-
-          wget --tries=45 http://fly.srk.fer.hr/jpg/flyweb.jpg
-
-   * Now let's leave Wget to work in the background, and write its
-     progress to log file `log'.  It is tiring to type `--tries', so we
-     shall use `-t'.
-
-          wget -t 45 -o log http://fly.srk.fer.hr/jpg/flyweb.jpg &
-
-     The ampersand at the end of the line makes sure that Wget works in
-     the background.  To unlimit the number of retries, use `-t inf'.
-
-   * The usage of FTP is just as simple.  Wget will take care of the
-     login and password.
-
-          wget ftp://gnjilux.srk.fer.hr/welcome.msg
-
-   * If you specify a directory, Wget will retrieve the directory
-     listing, parse it and convert it to HTML.  Try:
-
-          wget ftp://prep.ai.mit.edu/pub/gnu/
-          links index.html
-
-Advanced Usage
-==============
-
-   * You have a file that contains the URLs you want to download?  Use
-     the `-i' switch:
-
-          wget -i FILE
-
-     If you specify `-' as file name, the URLs will be read from
-     standard input.
-
-   * Create a five levels deep mirror image of the GNU web site, with
-     the same directory structure the original has, with only one try
-     per document, saving the log of the activities to `gnulog':
-
-          wget -r http://www.gnu.org/ -o gnulog
-
-   * The same as the above, but convert the links in the HTML files to
-     point to local files, so you can view the documents off-line:
-
-          wget --convert-links -r http://www.gnu.org/ -o gnulog
-
-   * Retrieve only one HTML page, but make sure that all the elements
-     needed for the page to be displayed, such as inline images and
-     external style sheets, are also downloaded.  Also make sure the
-     downloaded page references the downloaded links.
-
-          wget -p --convert-links http://www.server.com/dir/page.html
-
-     The HTML page will be saved to `www.server.com/dir/page.html', and
-     the images, stylesheets, etc., somewhere under `www.server.com/',
-     depending on where they were on the remote server.
-
-   * The same as the above, but without the `www.server.com/' directory.
-     In fact, I don't want to have all those random server directories
-     anyway--just save _all_ those files under a `download/'
-     subdirectory of the current directory.
-
-          wget -p --convert-links -nH -nd -Pdownload \
-               http://www.server.com/dir/page.html
-
-   * Retrieve the index.html of `www.lycos.com', showing the original
-     server headers:
-
-          wget -S http://www.lycos.com/
-
-   * Save the server headers with the file, perhaps for post-processing.
-
-          wget -s http://www.lycos.com/
-          more index.html
-
-   * Retrieve the first two levels of `wuarchive.wustl.edu', saving them
-     to `/tmp'.
-
-          wget -r -l2 -P/tmp ftp://wuarchive.wustl.edu/
-
-   * You want to download all the GIFs from a directory on an HTTP
-     server.  You tried `wget http://www.server.com/dir/*.gif', but that
-     didn't work because HTTP retrieval does not support globbing.  In
-     that case, use:
-
-          wget -r -l1 --no-parent -A.gif http://www.server.com/dir/
-
-     More verbose, but the effect is the same.  `-r -l1' means to
-     retrieve recursively (*note Recursive Retrieval::), with maximum
-     depth of 1.  `--no-parent' means that references to the parent
-     directory are ignored (*note Directory-Based Limits::), and
-     `-A.gif' means to download only the GIF files.  `-A "*.gif"' would
-     have worked too.
-
-   * Suppose you were in the middle of downloading, when Wget was
-     interrupted.  Now you do not want to clobber the files already
-     present.  It would be:
-
-          wget -nc -r http://www.gnu.org/
-
-   * If you want to encode your own username and password to HTTP or
-     FTP, use the appropriate URL syntax (*note URL Format::).
-
-          wget ftp://hniksic:address@hidden/.emacs
-
-   * You would like the output documents to go to standard output
-     instead of to files?
-
-          wget -O - http://jagor.srce.hr/ http://www.srce.hr/
-
-     You can also combine the two options and make pipelines to
-     retrieve the documents from remote hotlists:
-
-          wget -O - http://cool.list.com/ | wget --force-html -i -
-
-Very Advanced Usage
-===================
-
-   * If you wish Wget to keep a mirror of a page (or FTP
-     subdirectories), use `--mirror' (`-m'), which is the shorthand for
-     `-r -l inf -N'.  You can put Wget in the crontab file asking it to
-     recheck a site each Sunday:
-
-          crontab
-          0 0 * * 0 wget --mirror http://www.gnu.org/ -o /home/me/weeklog
-
-   * In addition to the above, you want the links to be converted for
-     local viewing.  But, after having read this manual, you know that
-     link conversion doesn't play well with timestamping, so you also
-     want Wget to back up the original HTML files before the
-     conversion.  Wget invocation would look like this:
-
-          wget --mirror --convert-links --backup-converted  \
-               http://www.gnu.org/ -o /home/me/weeklog
-
-   * But you've also noticed that local viewing doesn't work all that
-     well when HTML files are saved under extensions other than `.html',
-     perhaps because they were served as `index.cgi'.  So you'd like
-     Wget to rename all the files served with content-type `text/html'
-     to `NAME.html'.
-
-          wget --mirror --convert-links --backup-converted \
-               --html-extension -o /home/me/weeklog        \
-               http://www.gnu.org/
-
-     Or, with less typing:
-
-          wget -m -k -K -E http://www.gnu.org/ -o /home/me/weeklog
-
-Various
-*******
-
-   This chapter contains all the stuff that could not fit anywhere else.
-
-Proxies
-=======
-
-   "Proxies" are special-purpose HTTP servers designed to transfer data
-from remote servers to local clients.  One typical use of proxies is
-lightening network load for users behind a slow connection.  This is
-achieved by channeling all HTTP and FTP requests through the proxy
-which caches the transferred data.  When a cached resource is requested
-again, the proxy will return the data from its cache.  Another use for
-proxies is for companies that separate (for security reasons) their
-internal networks from the rest of the Internet.  In order to obtain
-information from the Web, their users connect and retrieve remote data
-using an authorized proxy.
-
-   Wget supports proxies for both HTTP and FTP retrievals.  The
-standard way to specify proxy location, which Wget recognizes, is using
-the following environment variables:
-
-`http_proxy'
-     This variable should contain the URL of the proxy for HTTP
-     connections.
-
-`ftp_proxy'
-     This variable should contain the URL of the proxy for FTP
-     connections.  It is quite common that HTTP_PROXY and FTP_PROXY are
-     set to the same URL.
-
-`no_proxy'
-     This variable should contain a comma-separated list of domain
-     extensions the proxy should _not_ be used for.  For instance, if
-     the value of `no_proxy' is `.mit.edu', the proxy will not be used
-     to retrieve documents from MIT.
-
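A sketch of how a `no_proxy' list of domain extensions is conventionally
matched against a host name; this illustrates the convention, not Wget's
exact matching code:

```python
import os

def use_proxy_for(host, no_proxy=None):
    """Return False when host matches one of the comma-separated
    domain extensions in no_proxy (e.g. ".mit.edu")."""
    if no_proxy is None:
        no_proxy = os.environ.get("no_proxy", "")
    for ext in filter(None, (e.strip() for e in no_proxy.split(","))):
        if host.endswith(ext):
            return False
    return True
```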
-   In addition to the environment variables, proxy location and settings
-may be specified from within Wget itself.
-
-`-Y on/off'
-`--proxy=on/off'
-`proxy = on/off'
-     This option may be used to turn the proxy support on or off.  Proxy
-     support is on by default, provided that the appropriate environment
-     variables are set.
-
-`http_proxy = URL'
-`ftp_proxy = URL'
-`no_proxy = STRING'
-     These startup file variables allow you to override the proxy
-     settings specified by the environment.
-
-   Some proxy servers require authorization to enable you to use them.
-The authorization consists of "username" and "password", which must be
-sent by Wget.  As with HTTP authorization, several authentication
-schemes exist.  For proxy authorization only the `Basic' authentication
-scheme is currently implemented.
-
-   You may specify your username and password either through the proxy
-URL or through the command-line options.  Assuming that the company's
-proxy is located at `proxy.company.com' at port 8001, a proxy URL
-location containing authorization data might look like this:
-
-     http://hniksic:address@hidden:8001/
-
-   Alternatively, you may use the `proxy-user' and `proxy-password'
-options, and the equivalent `.wgetrc' settings `proxy_user' and
-`proxy_passwd' to set the proxy username and password.
-
-Distribution
-============
-
-   Like all GNU utilities, the latest version of Wget can be found at
-the master GNU archive site prep.ai.mit.edu, and its mirrors.  For
-example, Wget 1.8.1 can be found at
-<ftp://prep.ai.mit.edu/gnu/wget/wget-1.8.1.tar.gz>
-
-Mailing List
-============
-
-   Wget has its own mailing list at <address@hidden>, thanks to
-Karsten Thygesen.  The mailing list is for discussion of Wget features
-and the web, reporting Wget bugs (those that you think may be of
-interest to the public) and mailing announcements.  You are welcome to
-subscribe.  The more people on the list, the better!
-
-   To subscribe, send mail to <address@hidden> with the magic
-word `subscribe' in the subject line.  Unsubscribe by mailing to
-<address@hidden>.
-
-   The mailing list is archived at <http://fly.srk.fer.hr/archive/wget>.
-An alternative archive is available at
-<http://www.mail-archive.com/wget%40sunsite.auc.dk/>.
-
-Reporting Bugs
-==============
-
-   You are welcome to send bug reports about GNU Wget to
-<address@hidden>.
-
-   Before actually submitting a bug report, please try to follow a few
-simple guidelines.
-
-  1. Please try to ascertain that the behaviour you see really is a
-     bug.  If Wget crashes, it's a bug.  If Wget does not behave as
-     documented, it's a bug.  If things work strangely, but you are not
-     sure about the way they are supposed to work, it might well be a
-     bug.
-
-  2. Try to repeat the bug in as simple circumstances as possible.
-     E.g. if Wget crashes while downloading `wget -rl0 -kKE -t5 -Y0
-     http://yoyodyne.com -o /tmp/log', you should try to see if the
-     crash is repeatable, and if it occurs with a simpler set of
-     options.  You might even try to start the download at the page
-     where the crash occurred to see if that page somehow triggered the
-     crash.
-
-     Also, while I will probably be interested to know the contents of
-     your `.wgetrc' file, just dumping it into the debug message is
-     probably a bad idea.  Instead, you should first try to see if the
-     bug repeats with `.wgetrc' moved out of the way.  Only if it turns
-     out that `.wgetrc' settings affect the bug, mail me the relevant
-     parts of the file.
-
-  3. Please start Wget with `-d' option and send the log (or the
-     relevant parts of it).  If Wget was compiled without debug support,
-     recompile it.  It is _much_ easier to trace bugs with debug support
-     on.
-
-  4. If Wget has crashed, try to run it in a debugger, e.g. `gdb `which
-     wget` core' and type `where' to get the backtrace.
-
-Portability
-===========
-
-   Since Wget uses GNU Autoconf for building and configuring, and avoids
-using "special" ultra-mega-cool features of any particular Unix, it
-should compile (and work) on all common Unix flavors.
-
-   Various Wget versions have been compiled and tested under many kinds
-of Unix systems, including Solaris, Linux, SunOS, OSF (aka Digital
-Unix), Ultrix, *BSD, IRIX, and others; refer to the file `MACHINES' in
-the distribution directory for a comprehensive list.  If you compile it
-on an architecture not listed there, please let me know so I can update
-it.
-
-   Wget should also compile on other Unix systems not listed in
-`MACHINES'.  If it doesn't, please let me know.
-
-   Thanks to kind contributors, this version of Wget compiles and works
-on Microsoft Windows 95 and Windows NT platforms.  It has been compiled
-successfully using MS Visual C++ 4.0, Watcom, and Borland C compilers,
-with Winsock as networking software.  Naturally, it lacks some of the
-features available on Unix, but it should work as a substitute for
-people stuck with Windows.  Note that the Windows port is *neither
-tested nor maintained* by me--all questions and problems should be
-reported to the Wget mailing list at <address@hidden> where the
-maintainers will look at them.
-
-Signals
-=======
-
-   Since the purpose of Wget is background work, it catches the hangup
-signal (`SIGHUP') and ignores it.  If the output was on standard
-output, it will be redirected to a file named `wget-log'.  Otherwise,
-`SIGHUP' is ignored.  This is convenient when you wish to redirect the
-output of Wget after having started it.
-
-     $ wget http://www.ifi.uio.no/~larsi/gnus.tar.gz &
-     $ kill -HUP %%     # Redirect the output to wget-log
-
-   Other than that, Wget will not try to interfere with signals in any
-way.  `C-c', `kill -TERM' and `kill -KILL' should kill it alike.
-
-Appendices
-**********
-
-   This chapter contains some references I consider useful.
-
-Robots
-======
-
-   It is extremely easy to make Wget wander aimlessly around a web site,
-sucking up all the available data in the process.  `wget -r SITE', and
-you're set.  Great?  Not for the server admin.
-
-   While Wget is retrieving static pages, there's not much of a problem.
-But for Wget, there is no real difference between a static page and the
-most demanding CGI.  For instance, a site I know has a section handled
-by an, uh, "bitchin'" CGI script that converts all the Info files to
-HTML.  The script can and does bring the machine to its knees without
-providing anything useful to the downloader.
-
-   For such and similar cases various robot exclusion schemes have been
-devised as a means for the server administrators and document authors to
-protect chosen portions of their sites from the wandering of robots.
-
-   The more popular mechanism is the "Robots Exclusion Standard", or
-RES, written by Martijn Koster et al. in 1994.  It specifies the format
-of a text file containing directives that instruct the robots which URL
-paths to avoid.  To be found by the robots, the specifications must be
-placed in `/robots.txt' in the server root, which the robots are
-supposed to download and parse.
-
-   Wget supports RES when downloading recursively.  So, when you issue:
-
-     wget -r http://www.server.com/
-
-   the index of `www.server.com' will be downloaded first.  If Wget
-finds that it wants to download more documents from that server, it will
-request `http://www.server.com/robots.txt' and, if found, use it for
-further downloads.  `robots.txt' is loaded only once per server.
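   As a hypothetical illustration (not taken from any real site), a
`/robots.txt' file keeping all robots out of two directories would
look like this:

```
# Applies to every robot (User-agent: *).
User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
```

   A recursive Wget run against such a server would skip every URL
whose path begins with `/cgi-bin/' or `/tmp/'.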
-
-   Until version 1.8, Wget supported the first version of the standard,
-written by Martijn Koster in 1994 and available at
-<http://www.robotstxt.org/wc/norobots.html>.  As of version 1.8, Wget
-has supported the additional directives specified in the internet draft
-`<draft-koster-robots-00.txt>' titled "A Method for Web Robots
-Control".  The draft, which as far as I know never made it to an RFC,
-is available at <http://www.robotstxt.org/wc/norobots-rfc.txt>.
-
-   This manual no longer includes the text of the Robot Exclusion
-Standard.
-
-   The second, lesser-known mechanism enables the author of an individual
-document to specify whether they want the links from the file to be
-followed by a robot.  This is achieved using the `META' tag, like this:
-
-     <meta name="robots" content="nofollow">
-
-   This is explained in some detail at
-<http://www.robotstxt.org/wc/meta-user.html>.  Wget supports this
-method of robot exclusion in addition to the usual `/robots.txt'
-exclusion.
-
-Security Considerations
-=======================
-
-   When using Wget, you must be aware that it sends unencrypted
-passwords through the network, which may present a security problem.
-Here are the main issues, and some solutions.
-
-  1. The passwords on the command line are visible using `ps'.  If this
-     is a problem, avoid putting passwords on the command line--e.g.
-     you can use `.netrc' for this.
-
-  2. Using the insecure "basic" authentication scheme, unencrypted
-     passwords are transmitted through the network routers and gateways.
-
-  3. The FTP passwords are also in no way encrypted.  There is no good
-     solution for this at the moment.
-
-  4. Although the "normal" output of Wget tries to hide the passwords,
-     debugging logs show them, in all forms.  This problem is avoided by
-     being careful when you send debug logs (yes, even when you send
-     them to me).
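   The `.netrc' approach mentioned in item 1 can be sketched as
follows; the host name and credentials are placeholders, and the file
must not be readable by other users (e.g. `chmod 600 ~/.netrc'):

```
# ~/.netrc -- consulted by Wget for FTP logins.
machine ftp.example.com
login myname
password mypassword
```

   With such an entry in place, `wget ftp://ftp.example.com/file' can
log in without the password ever appearing on the command line.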
-
-Contributors
-============
-
-   GNU Wget was written by Hrvoje Niksic <address@hidden>.
-However, its development could never have gone as far as it has, were it
-not for the help of many people, either with bug reports, feature
-proposals, patches, or letters saying "Thanks!".
-
-   Special thanks goes to the following people (no particular order):
-
-   * Karsten Thygesen--donated system resources such as the mailing
-     list, web space, and FTP space, along with a lot of time to make
-     these actually work.
-
-   * Shawn McHorse--bug reports and patches.
-
-   * Kaveh R. Ghazi--on-the-fly `ansi2knr'-ization.  Lots of
-     portability fixes.
-
-   * Gordon Matzigkeit--`.netrc' support.
-
-   * Zlatko Calusic, Tomislav Vujec and Drazen Kacar--feature
-     suggestions and "philosophical" discussions.
-
-   * Darko Budor--initial port to Windows.
-
-   * Antonio Rosella--help and suggestions, plus the Italian
-     translation.
-
-   * Tomislav Petrovic, Mario Mikocevic--many bug reports and
-     suggestions.
-
-   * Francois Pinard--many thorough bug reports and discussions.
-
-   * Karl Eichwalder--lots of help with internationalization and other
-     things.
-
-   * Junio Hamano--donated support for Opie and HTTP `Digest'
-     authentication.
-
-   * The people who provided donations for development, including Brian
-     Gough.
-
-   The following people have provided patches, bug/build reports, useful
-suggestions, beta testing services, fan mail and all the other things
-that make maintenance so much fun:
-
-   Ian Abbott, Tim Adam, Adrian Aichner, Martin Baehr, Dieter Baron,
-Roger Beeman, Dan Berger, T. Bharath, Paul Bludov, Daniel Bodea, Mark
-Boyns, John Burden, Wanderlei Cavassin, Gilles Cedoc, Tim Charron, Noel
-Cragg, Kristijan Conkas, John Daily, Andrew Davison, Andrew Deryabin,
-Ulrich Drepper, Marc Duponcheel, Damir Dzeko, Alan Eldridge, Aleksandar
-Erkalovic, Andy Eskilsson, Christian Fraenkel, Masashi Fujita, Howard
-Gayle, Marcel Gerrits, Lemble Gregory, Hans Grobler, Mathieu Guillaume,
-Dan Harkless, Herold Heiko, Jochen Hein, Karl Heuer, HIROSE Masaaki,
-Gregor Hoffleit, Erik Magnus Hulthen, Richard Huveneers, Jonas Jensen,
-Simon Josefsson, Mario Juric, Hack Kampbjorn, Const Kaplinsky, Goran
-Kezunovic, Robert Kleine, KOJIMA Haime, Fila Kolodny, Alexander
-Kourakos, Martin Kraemer, Simos KSenitellis, Hrvoje Lacko, Daniel S.
-Lewart, Nicolas Lichtmeier, Dave Love, Alexander V. Lukyanov, Jordan
-Mendelson, Lin Zhe Min, Tim Mooney, Simon Munton, Charlie Negyesi, R.
-K. Owen, Andrew Pollock, Steve Pothier, Jan Prikryl, Marin Purgar,
-Csaba Raduly, Keith Refson, Tyler Riddle, Tobias Ringstrom, Juan Jose
-Rodrigues, Edward J. Sabol, Heinz Salzmann, Robert Schmidt, Andreas
-Schwab, Chris Seawood, Toomas Soome, Tage Stabell-Kulo, Sven
-Sternberger, Markus Strasser, John Summerfield, Szakacsits Szabolcs,
-Mike Thomas, Philipp Thomas, Dave Turner, Russell Vincent, Charles G
-Waldman, Douglas E. Wegscheid, Jasmin Zainul, Bojan Zdrnja, Kristijan
-Zimmer.
-
-   Apologies to all who I accidentally left out, and many thanks to all
-the subscribers of the Wget mailing list.
-
-Copying
-*******
-
-   GNU Wget is licensed under the GNU GPL, which makes it "free
-software".
-
-   Please note that "free" in "free software" refers to liberty, not
-price.  As some GNU project advocates like to point out, think of "free
-speech" rather than "free beer".  The exact and legally binding
-distribution terms are spelled out below; in short, you have the right
-(freedom) to run and change Wget and distribute it to other people, and
-even--if you want--charge money for doing either.  The important
-restriction is that you have to grant your recipients the same rights
-and impose the same restrictions.
-
-   This method of licensing software is also known as "open source"
-because, among other things, it makes sure that all recipients will
-receive the source code along with the program, and be able to improve
-it.  The GNU project prefers the term "free software" for reasons
-outlined at
-<http://www.gnu.org/philosophy/free-software-for-freedom.html>.
-
-   The exact license terms are defined by this paragraph and the GNU
-General Public License it refers to:
-
-     GNU Wget is free software; you can redistribute it and/or modify it
-     under the terms of the GNU General Public License as published by
-     the Free Software Foundation; either version 2 of the License, or
-     (at your option) any later version.
-
-     GNU Wget is distributed in the hope that it will be useful, but
-     WITHOUT ANY WARRANTY; without even the implied warranty of
-     MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-     General Public License for more details.
-
-     A copy of the GNU General Public License is included as part of
-     this manual; if you did not receive it, write to the Free Software
-     Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
-
-   In addition to this, this manual is free in the same sense:
-
-     Permission is granted to copy, distribute and/or modify this
-     document under the terms of the GNU Free Documentation License,
-     Version 1.1 or any later version published by the Free Software
-     Foundation; with the Invariant Sections being "GNU General Public
-     License" and "GNU Free Documentation License", with no Front-Cover
-     Texts, and with no Back-Cover Texts.  A copy of the license is
-     included in the section entitled "GNU Free Documentation License".
-
-   The full texts of the GNU General Public License and of the GNU Free
-Documentation License are available below.
-
-GNU General Public License
-==========================
-
-                         Version 2, June 1991
-
-     Copyright (C) 1989, 1991 Free Software Foundation, Inc.
-     675 Mass Ave, Cambridge, MA 02139, USA
-     
-     Everyone is permitted to copy and distribute verbatim copies
-     of this license document, but changing it is not allowed.
-
-Preamble
-========
-
-   The licenses for most software are designed to take away your
-freedom to share and change it.  By contrast, the GNU General Public
-License is intended to guarantee your freedom to share and change free
-software--to make sure the software is free for all its users.  This
-General Public License applies to most of the Free Software
-Foundation's software and to any other program whose authors commit to
-using it.  (Some other Free Software Foundation software is covered by
-the GNU Library General Public License instead.)  You can apply it to
-your programs, too.
-
-   When we speak of free software, we are referring to freedom, not
-price.  Our General Public Licenses are designed to make sure that you
-have the freedom to distribute copies of free software (and charge for
-this service if you wish), that you receive source code or can get it
-if you want it, that you can change the software or use pieces of it in
-new free programs; and that you know you can do these things.
-
-   To protect your rights, we need to make restrictions that forbid
-anyone to deny you these rights or to ask you to surrender the rights.
-These restrictions translate to certain responsibilities for you if you
-distribute copies of the software, or if you modify it.
-
-   For example, if you distribute copies of such a program, whether
-gratis or for a fee, you must give the recipients all the rights that
-you have.  You must make sure that they, too, receive or can get the
-source code.  And you must show them these terms so they know their
-rights.
-
-   We protect your rights with two steps: (1) copyright the software,
-and (2) offer you this license which gives you legal permission to copy,
-distribute and/or modify the software.
-
-   Also, for each author's protection and ours, we want to make certain
-that everyone understands that there is no warranty for this free
-software.  If the software is modified by someone else and passed on, we
-want its recipients to know that what they have is not the original, so
-that any problems introduced by others will not reflect on the original
-authors' reputations.
-
-   Finally, any free program is threatened constantly by software
-patents.  We wish to avoid the danger that redistributors of a free
-program will individually obtain patent licenses, in effect making the
-program proprietary.  To prevent this, we have made it clear that any
-patent must be licensed for everyone's free use or not licensed at all.
-
-   The precise terms and conditions for copying, distribution and
-modification follow.
-
-    TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
-
-  1. This License applies to any program or other work which contains a
-     notice placed by the copyright holder saying it may be distributed
-     under the terms of this General Public License.  The "Program",
-     below, refers to any such program or work, and a "work based on
-     the Program" means either the Program or any derivative work under
-     copyright law: that is to say, a work containing the Program or a
-     portion of it, either verbatim or with modifications and/or
-     translated into another language.  (Hereinafter, translation is
-     included without limitation in the term "modification".)  Each
-     licensee is addressed as "you".
-
-     Activities other than copying, distribution and modification are
-     not covered by this License; they are outside its scope.  The act
-     of running the Program is not restricted, and the output from the
-     Program is covered only if its contents constitute a work based on
-     the Program (independent of having been made by running the
-     Program).  Whether that is true depends on what the Program does.
-
-  2. You may copy and distribute verbatim copies of the Program's
-     source code as you receive it, in any medium, provided that you
-     conspicuously and appropriately publish on each copy an appropriate
-     copyright notice and disclaimer of warranty; keep intact all the
-     notices that refer to this License and to the absence of any
-     warranty; and give any other recipients of the Program a copy of
-     this License along with the Program.
-
-     You may charge a fee for the physical act of transferring a copy,
-     and you may at your option offer warranty protection in exchange
-     for a fee.
-
-  3. You may modify your copy or copies of the Program or any portion
-     of it, thus forming a work based on the Program, and copy and
-     distribute such modifications or work under the terms of Section 1
-     above, provided that you also meet all of these conditions:
-
-       a. You must cause the modified files to carry prominent notices
-          stating that you changed the files and the date of any change.
-
-       b. You must cause any work that you distribute or publish, that
-          in whole or in part contains or is derived from the Program
-          or any part thereof, to be licensed as a whole at no charge
-          to all third parties under the terms of this License.
-
-       c. If the modified program normally reads commands interactively
-          when run, you must cause it, when started running for such
-          interactive use in the most ordinary way, to print or display
-          an announcement including an appropriate copyright notice and
-          a notice that there is no warranty (or else, saying that you
-          provide a warranty) and that users may redistribute the
-          program under these conditions, and telling the user how to
-          view a copy of this License.  (Exception: if the Program
-          itself is interactive but does not normally print such an
-          announcement, your work based on the Program is not required
-          to print an announcement.)
-
-     These requirements apply to the modified work as a whole.  If
-     identifiable sections of that work are not derived from the
-     Program, and can be reasonably considered independent and separate
-     works in themselves, then this License, and its terms, do not
-     apply to those sections when you distribute them as separate
-     works.  But when you distribute the same sections as part of a
-     whole which is a work based on the Program, the distribution of
-     the whole must be on the terms of this License, whose permissions
-     for other licensees extend to the entire whole, and thus to each
-     and every part regardless of who wrote it.
-
-     Thus, it is not the intent of this section to claim rights or
-     contest your rights to work written entirely by you; rather, the
-     intent is to exercise the right to control the distribution of
-     derivative or collective works based on the Program.
-
-     In addition, mere aggregation of another work not based on the
-     Program with the Program (or with a work based on the Program) on
-     a volume of a storage or distribution medium does not bring the
-     other work under the scope of this License.
-
-  4. You may copy and distribute the Program (or a work based on it,
-     under Section 2) in object code or executable form under the terms
-     of Sections 1 and 2 above provided that you also do one of the
-     following:
-
-       a. Accompany it with the complete corresponding machine-readable
-          source code, which must be distributed under the terms of
-          Sections 1 and 2 above on a medium customarily used for
-          software interchange; or,
-
-       b. Accompany it with a written offer, valid for at least three
-          years, to give any third party, for a charge no more than your
-          cost of physically performing source distribution, a complete
-          machine-readable copy of the corresponding source code, to be
-          distributed under the terms of Sections 1 and 2 above on a
-          medium customarily used for software interchange; or,
-
-       c. Accompany it with the information you received as to the offer
-          to distribute corresponding source code.  (This alternative is
-          allowed only for noncommercial distribution and only if you
-          received the program in object code or executable form with
-          such an offer, in accord with Subsection b above.)
-
-     The source code for a work means the preferred form of the work for
-     making modifications to it.  For an executable work, complete
-     source code means all the source code for all modules it contains,
-     plus any associated interface definition files, plus the scripts
-     used to control compilation and installation of the executable.
-     However, as a special exception, the source code distributed need
-     not include anything that is normally distributed (in either
-     source or binary form) with the major components (compiler,
-     kernel, and so on) of the operating system on which the executable
-     runs, unless that component itself accompanies the executable.
-
-     If distribution of executable or object code is made by offering
-     access to copy from a designated place, then offering equivalent
-     access to copy the source code from the same place counts as
-     distribution of the source code, even though third parties are not
-     compelled to copy the source along with the object code.
-
-  5. You may not copy, modify, sublicense, or distribute the Program
-     except as expressly provided under this License.  Any attempt
-     otherwise to copy, modify, sublicense or distribute the Program is
-     void, and will automatically terminate your rights under this
-     License.  However, parties who have received copies, or rights,
-     from you under this License will not have their licenses
-     terminated so long as such parties remain in full compliance.
-
-  6. You are not required to accept this License, since you have not
-     signed it.  However, nothing else grants you permission to modify
-     or distribute the Program or its derivative works.  These actions
-     are prohibited by law if you do not accept this License.
-     Therefore, by modifying or distributing the Program (or any work
-     based on the Program), you indicate your acceptance of this
-     License to do so, and all its terms and conditions for copying,
-     distributing or modifying the Program or works based on it.
-
-  7. Each time you redistribute the Program (or any work based on the
-     Program), the recipient automatically receives a license from the
-     original licensor to copy, distribute or modify the Program
-     subject to these terms and conditions.  You may not impose any
-     further restrictions on the recipients' exercise of the rights
-     granted herein.  You are not responsible for enforcing compliance
-     by third parties to this License.
-
-  8. If, as a consequence of a court judgment or allegation of patent
-     infringement or for any other reason (not limited to patent
-     issues), conditions are imposed on you (whether by court order,
-     agreement or otherwise) that contradict the conditions of this
-     License, they do not excuse you from the conditions of this
-     License.  If you cannot distribute so as to satisfy simultaneously
-     your obligations under this License and any other pertinent
-     obligations, then as a consequence you may not distribute the
-     Program at all.  For example, if a patent license would not permit
-     royalty-free redistribution of the Program by all those who
-     receive copies directly or indirectly through you, then the only
-     way you could satisfy both it and this License would be to refrain
-     entirely from distribution of the Program.
-
-     If any portion of this section is held invalid or unenforceable
-     under any particular circumstance, the balance of the section is
-     intended to apply and the section as a whole is intended to apply
-     in other circumstances.
-
-     It is not the purpose of this section to induce you to infringe any
-     patents or other property right claims or to contest validity of
-     any such claims; this section has the sole purpose of protecting
-     the integrity of the free software distribution system, which is
-     implemented by public license practices.  Many people have made
-     generous contributions to the wide range of software distributed
-     through that system in reliance on consistent application of that
-     system; it is up to the author/donor to decide if he or she is
-     willing to distribute software through any other system and a
-     licensee cannot impose that choice.
-
-     This section is intended to make thoroughly clear what is believed
-     to be a consequence of the rest of this License.
-
-  9. If the distribution and/or use of the Program is restricted in
-     certain countries either by patents or by copyrighted interfaces,
-     the original copyright holder who places the Program under this
-     License may add an explicit geographical distribution limitation
-     excluding those countries, so that distribution is permitted only
-     in or among countries not thus excluded.  In such case, this
-     License incorporates the limitation as if written in the body of
-     this License.
-
- 10. The Free Software Foundation may publish revised and/or new
-     versions of the General Public License from time to time.  Such
-     new versions will be similar in spirit to the present version, but
-     may differ in detail to address new problems or concerns.
-
-     Each version is given a distinguishing version number.  If the
-     Program specifies a version number of this License which applies
-     to it and "any later version", you have the option of following
-     the terms and conditions either of that version or of any later
-     version published by the Free Software Foundation.  If the Program
-     does not specify a version number of this License, you may choose
-     any version ever published by the Free Software Foundation.
-
- 11. If you wish to incorporate parts of the Program into other free
-     programs whose distribution conditions are different, write to the
-     author to ask for permission.  For software which is copyrighted
-     by the Free Software Foundation, write to the Free Software
-     Foundation; we sometimes make exceptions for this.  Our decision
-     will be guided by the two goals of preserving the free status of
-     all derivatives of our free software and of promoting the sharing
-     and reuse of software generally.
-
-                                NO WARRANTY
-
- 12. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO
-     WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE
-     LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
-     HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT
-     WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT
-     NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
-     FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS TO THE
-     QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE
-     PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY
-     SERVICING, REPAIR OR CORRECTION.
-
- 13. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
-     WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY
-     MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE
-     LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL,
-     INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR
-     INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
-     DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU
-     OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY
-     OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN
-     ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
-
-                      END OF TERMS AND CONDITIONS
-
-How to Apply These Terms to Your New Programs
-=============================================
-
-   If you develop a new program, and you want it to be of the greatest
-possible use to the public, the best way to achieve this is to make it
-free software which everyone can redistribute and change under these
-terms.
-
-   To do so, attach the following notices to the program.  It is safest
-to attach them to the start of each source file to most effectively
-convey the exclusion of warranty; and each file should have at least
-the "copyright" line and a pointer to where the full notice is found.
-
-     ONE LINE TO GIVE THE PROGRAM'S NAME AND AN IDEA OF WHAT IT DOES.
-     Copyright (C) 19YY  NAME OF AUTHOR
-     
-     This program is free software; you can redistribute it and/or
-     modify it under the terms of the GNU General Public License
-     as published by the Free Software Foundation; either version 2
-     of the License, or (at your option) any later version.
-     
-     This program is distributed in the hope that it will be useful,
-     but WITHOUT ANY WARRANTY; without even the implied warranty of
-     MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-     GNU General Public License for more details.
-     
-     You should have received a copy of the GNU General Public License
-     along with this program; if not, write to the Free Software
-     Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
-
-   Also add information on how to contact you by electronic and paper
-mail.
-
-   If the program is interactive, make it output a short notice like
-this when it starts in an interactive mode:
-
-     Gnomovision version 69, Copyright (C) 19YY NAME OF AUTHOR
-     Gnomovision comes with ABSOLUTELY NO WARRANTY; for details
-     type `show w'.  This is free software, and you are welcome
-     to redistribute it under certain conditions; type `show c'
-     for details.
-
-   The hypothetical commands `show w' and `show c' should show the
-appropriate parts of the General Public License.  Of course, the
-commands you use may be called something other than `show w' and `show
-c'; they could even be mouse-clicks or menu items--whatever suits your
-program.
-
-   You should also get your employer (if you work as a programmer) or
-your school, if any, to sign a "copyright disclaimer" for the program,
-if necessary.  Here is a sample; alter the names:
-
-     Yoyodyne, Inc., hereby disclaims all copyright
-     interest in the program `Gnomovision'
-     (which makes passes at compilers) written
-     by James Hacker.
-     
-     SIGNATURE OF TY COON, 1 April 1989
-     Ty Coon, President of Vice
-
-   This General Public License does not permit incorporating your
-program into proprietary programs.  If your program is a subroutine
-library, you may consider it more useful to permit linking proprietary
-applications with the library.  If this is what you want to do, use the
-GNU Library General Public License instead of this License.
-
-GNU Free Documentation License
-==============================
-
-                        Version 1.1, March 2000
-
-     Copyright (C) 2000  Free Software Foundation, Inc.
-     51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA
-     
-     Everyone is permitted to copy and distribute verbatim copies
-     of this license document, but changing it is not allowed.
-
-
-
-  0. PREAMBLE
-
-     The purpose of this License is to make a manual, textbook, or other
-     written document "free" in the sense of freedom: to assure everyone
-     the effective freedom to copy and redistribute it, with or without
-     modifying it, either commercially or noncommercially.  Secondarily,
-     this License preserves for the author and publisher a way to get
-     credit for their work, while not being considered responsible for
-     modifications made by others.
-
-     This License is a kind of "copyleft", which means that derivative
-     works of the document must themselves be free in the same sense.
-     It complements the GNU General Public License, which is a copyleft
-     license designed for free software.
-
-     We have designed this License in order to use it for manuals for
-     free software, because free software needs free documentation: a
-     free program should come with manuals providing the same freedoms
-     that the software does.  But this License is not limited to
-     software manuals; it can be used for any textual work, regardless
-     of subject matter or whether it is published as a printed book.
-     We recommend this License principally for works whose purpose is
-     instruction or reference.
-
-
-  1. APPLICABILITY AND DEFINITIONS
-
-     This License applies to any manual or other work that contains a
-     notice placed by the copyright holder saying it can be distributed
-     under the terms of this License.  The "Document", below, refers to
-     any such manual or work.  Any member of the public is a licensee,
-     and is addressed as "you".
-
-     A "Modified Version" of the Document means any work containing the
-     Document or a portion of it, either copied verbatim, or with
-     modifications and/or translated into another language.
-
-     A "Secondary Section" is a named appendix or a front-matter
-     section of the Document that deals exclusively with the
-     relationship of the publishers or authors of the Document to the
-     Document's overall subject (or to related matters) and contains
-     nothing that could fall directly within that overall subject.
-     (For example, if the Document is in part a textbook of
-     mathematics, a Secondary Section may not explain any mathematics.)
-     The relationship could be a matter of historical connection with
-     the subject or with related matters, or of legal, commercial,
-     philosophical, ethical or political position regarding them.
-
-     The "Invariant Sections" are certain Secondary Sections whose
-     titles are designated, as being those of Invariant Sections, in
-     the notice that says that the Document is released under this
-     License.
-
-     The "Cover Texts" are certain short passages of text that are
-     listed, as Front-Cover Texts or Back-Cover Texts, in the notice
-     that says that the Document is released under this License.
-
-     A "Transparent" copy of the Document means a machine-readable copy,
-     represented in a format whose specification is available to the
-     general public, whose contents can be viewed and edited directly
-     and straightforwardly with generic text editors or (for images
-     composed of pixels) generic paint programs or (for drawings) some
-     widely available drawing editor, and that is suitable for input to
-     text formatters or for automatic translation to a variety of
-     formats suitable for input to text formatters.  A copy made in an
-     otherwise Transparent file format whose markup has been designed
-     to thwart or discourage subsequent modification by readers is not
-     Transparent.  A copy that is not "Transparent" is called "Opaque".
-
-     Examples of suitable formats for Transparent copies include plain
-     ASCII without markup, Texinfo input format, LaTeX input format,
-     SGML or XML using a publicly available DTD, and
-     standard-conforming simple HTML designed for human modification.
-     Opaque formats include PostScript, PDF, proprietary formats that
-     can be read and edited only by proprietary word processors, SGML
-     or XML for which the DTD and/or processing tools are not generally
-     available, and the machine-generated HTML produced by some word
-     processors for output purposes only.
-
-     The "Title Page" means, for a printed book, the title page itself,
-     plus such following pages as are needed to hold, legibly, the
-     material this License requires to appear in the title page.  For
-     works in formats which do not have any title page as such, "Title
-     Page" means the text near the most prominent appearance of the
-     work's title, preceding the beginning of the body of the text.
-
-
-  2. VERBATIM COPYING
-
-     You may copy and distribute the Document in any medium, either
-     commercially or noncommercially, provided that this License, the
-     copyright notices, and the license notice saying this License
-     applies to the Document are reproduced in all copies, and that you
-     add no other conditions whatsoever to those of this License.  You
-     may not use technical measures to obstruct or control the reading
-     or further copying of the copies you make or distribute.  However,
-     you may accept compensation in exchange for copies.  If you
-     distribute a large enough number of copies you must also follow
-     the conditions in section 3.
-
-     You may also lend copies, under the same conditions stated above,
-     and you may publicly display copies.
-
-
-  3. COPYING IN QUANTITY
-
-     If you publish printed copies of the Document numbering more than
-     100, and the Document's license notice requires Cover Texts, you
-     must enclose the copies in covers that carry, clearly and legibly,
-     all these Cover Texts: Front-Cover Texts on the front cover, and
-     Back-Cover Texts on the back cover.  Both covers must also clearly
-     and legibly identify you as the publisher of these copies.  The
-     front cover must present the full title with all words of the
-     title equally prominent and visible.  You may add other material
-     on the covers in addition.  Copying with changes limited to the
-     covers, as long as they preserve the title of the Document and
-     satisfy these conditions, can be treated as verbatim copying in
-     other respects.
-
-     If the required texts for either cover are too voluminous to fit
-     legibly, you should put the first ones listed (as many as fit
-     reasonably) on the actual cover, and continue the rest onto
-     adjacent pages.
-
-     If you publish or distribute Opaque copies of the Document
-     numbering more than 100, you must either include a
-     machine-readable Transparent copy along with each Opaque copy, or
-     state in or with each Opaque copy a publicly-accessible
-     computer-network location containing a complete Transparent copy
-     of the Document, free of added material, which the general
-     network-using public has access to download anonymously at no
-     charge using public-standard network protocols.  If you use the
-     latter option, you must take reasonably prudent steps, when you
-     begin distribution of Opaque copies in quantity, to ensure that
-     this Transparent copy will remain thus accessible at the stated
-     location until at least one year after the last time you
-     distribute an Opaque copy (directly or through your agents or
-     retailers) of that edition to the public.
-
-     It is requested, but not required, that you contact the authors of
-     the Document well before redistributing any large number of
-     copies, to give them a chance to provide you with an updated
-     version of the Document.
-
-
-  4. MODIFICATIONS
-
-     You may copy and distribute a Modified Version of the Document
-     under the conditions of sections 2 and 3 above, provided that you
-     release the Modified Version under precisely this License, with
-     the Modified Version filling the role of the Document, thus
-     licensing distribution and modification of the Modified Version to
-     whoever possesses a copy of it.  In addition, you must do these
-     things in the Modified Version:
-
-     A. Use in the Title Page (and on the covers, if any) a title
-     distinct from that of the Document, and from those of previous
-     versions (which should, if there were any, be listed in the
-     History section of the Document).  You may use the same title as
-     a previous version if the original publisher of that version
-     gives permission.
-     B. List on the Title Page, as authors, one or more persons or
-     entities responsible for authorship of the modifications in the
-     Modified Version, together with at least five of the principal
-     authors of the Document (all of its principal authors, if it has
-     less than five).
-     C. State on the Title page the name of the publisher of the
-     Modified Version, as the publisher.
-     D. Preserve all the copyright notices of the Document.
-     E. Add an appropriate copyright notice for your modifications
-     adjacent to the other copyright notices.
-     F. Include, immediately after the copyright notices, a license
-     notice giving the public permission to use the Modified Version
-     under the terms of this License, in the form shown in the
-     Addendum below.
-     G. Preserve in that license notice the full lists of Invariant
-     Sections and required Cover Texts given in the Document's
-     license notice.
-     H. Include an unaltered copy of this License.
-     I. Preserve the section entitled "History", and its title, and
-     add to it an item stating at least the title, year, new authors,
-     and publisher of the Modified Version as given on the Title
-     Page.  If there is no section entitled "History" in the
-     Document, create one stating the title, year, authors, and
-     publisher of the Document as given on its Title Page, then add
-     an item describing the Modified Version as stated in the
-     previous sentence.
-     J. Preserve the network location, if any, given in the Document
-     for public access to a Transparent copy of the Document, and
-     likewise the network locations given in the Document for
-     previous versions it was based on.  These may be placed in the
-     "History" section.  You may omit a network location for a work
-     that was published at least four years before the Document
-     itself, or if the original publisher of the version it refers to
-     gives permission.
-     K. In any section entitled "Acknowledgements" or "Dedications",
-     preserve the section's title, and preserve in the section all
-     the substance and tone of each of the contributor
-     acknowledgements and/or dedications given therein.
-     L. Preserve all the Invariant Sections of the Document,
-     unaltered in their text and in their titles.  Section numbers or
-     the equivalent are not considered part of the section titles.
-     M. Delete any section entitled "Endorsements".  Such a section
-     may not be included in the Modified Version.
-     N. Do not retitle any existing section as "Endorsements" or to
-     conflict in title with any Invariant Section.
-
-     If the Modified Version includes new front-matter sections or
-     appendices that qualify as Secondary Sections and contain no
-     material copied from the Document, you may at your option
-     designate some or all of these sections as invariant.  To do this,
-     add their titles to the list of Invariant Sections in the Modified
-     Version's license notice.  These titles must be distinct from any
-     other section titles.
-
-     You may add a section entitled "Endorsements", provided it contains
-     nothing but endorsements of your Modified Version by various
-     parties--for example, statements of peer review or that the text has
-     been approved by an organization as the authoritative definition
-     of a standard.
-
-     You may add a passage of up to five words as a Front-Cover Text,
-     and a passage of up to 25 words as a Back-Cover Text, to the end
-     of the list of Cover Texts in the Modified Version.  Only one
-     passage of Front-Cover Text and one of Back-Cover Text may be
-     added by (or through arrangements made by) any one entity.  If the
-     Document already includes a cover text for the same cover,
-     previously added by you or by arrangement made by the same entity
-     you are acting on behalf of, you may not add another; but you may
-     replace the old one, on explicit permission from the previous
-     publisher that added the old one.
-
-     The author(s) and publisher(s) of the Document do not by this
-     License give permission to use their names for publicity for or to
-     assert or imply endorsement of any Modified Version.
-
-
-  5. COMBINING DOCUMENTS
-
-     You may combine the Document with other documents released under
-     this License, under the terms defined in section 4 above for
-     modified versions, provided that you include in the combination
-     all of the Invariant Sections of all of the original documents,
-     unmodified, and list them all as Invariant Sections of your
-     combined work in its license notice.
-
-     The combined work need only contain one copy of this License, and
-     multiple identical Invariant Sections may be replaced with a single
-     copy.  If there are multiple Invariant Sections with the same name
-     but different contents, make the title of each such section unique
-     by adding at the end of it, in parentheses, the name of the
-     original author or publisher of that section if known, or else a
-     unique number.  Make the same adjustment to the section titles in
-     the list of Invariant Sections in the license notice of the
-     combined work.
-
-     In the combination, you must combine any sections entitled
-     "History" in the various original documents, forming one section
-     entitled "History"; likewise combine any sections entitled
-     "Acknowledgements", and any sections entitled "Dedications".  You
-     must delete all sections entitled "Endorsements."
-
-
-  6. COLLECTIONS OF DOCUMENTS
-
-     You may make a collection consisting of the Document and other
-     documents released under this License, and replace the individual
-     copies of this License in the various documents with a single copy
-     that is included in the collection, provided that you follow the
-     rules of this License for verbatim copying of each of the
-     documents in all other respects.
-
-     You may extract a single document from such a collection, and
-     distribute it individually under this License, provided you insert
-     a copy of this License into the extracted document, and follow
-     this License in all other respects regarding verbatim copying of
-     that document.
-
-
-  7. AGGREGATION WITH INDEPENDENT WORKS
-
-     A compilation of the Document or its derivatives with other
-     separate and independent documents or works, in or on a volume of
-     a storage or distribution medium, does not as a whole count as a
-     Modified Version of the Document, provided no compilation
-     copyright is claimed for the compilation.  Such a compilation is
-     called an "aggregate", and this License does not apply to the
-     other self-contained works thus compiled with the Document, on
-     account of their being thus compiled, if they are not themselves
-     derivative works of the Document.
-
-     If the Cover Text requirement of section 3 is applicable to these
-     copies of the Document, then if the Document is less than one
-     quarter of the entire aggregate, the Document's Cover Texts may be
-     placed on covers that surround only the Document within the
-     aggregate.  Otherwise they must appear on covers around the whole
-     aggregate.
-
-
-  8. TRANSLATION
-
-     Translation is considered a kind of modification, so you may
-     distribute translations of the Document under the terms of section
-     4.  Replacing Invariant Sections with translations requires special
-     permission from their copyright holders, but you may include
-     translations of some or all Invariant Sections in addition to the
-     original versions of these Invariant Sections.  You may include a
-     translation of this License provided that you also include the
-     original English version of this License.  In case of a
-     disagreement between the translation and the original English
-     version of this License, the original English version will prevail.
-
-
-  9. TERMINATION
-
-     You may not copy, modify, sublicense, or distribute the Document
-     except as expressly provided for under this License.  Any other
-     attempt to copy, modify, sublicense or distribute the Document is
-     void, and will automatically terminate your rights under this
-     License.  However, parties who have received copies, or rights,
-     from you under this License will not have their licenses
-     terminated so long as such parties remain in full compliance.
-
-
- 10. FUTURE REVISIONS OF THIS LICENSE
-
-     The Free Software Foundation may publish new, revised versions of
-     the GNU Free Documentation License from time to time.  Such new
-     versions will be similar in spirit to the present version, but may
-     differ in detail to address new problems or concerns.  See
-     http://www.gnu.org/copyleft/.
-
-     Each version of the License is given a distinguishing version
-     number.  If the Document specifies that a particular numbered
-     version of this License "or any later version" applies to it, you
-     have the option of following the terms and conditions either of
-     that specified version or of any later version that has been
-     published (not as a draft) by the Free Software Foundation.  If
-     the Document does not specify a version number of this License,
-     you may choose any version ever published (not as a draft) by the
-     Free Software Foundation.
-
-
-ADDENDUM: How to use this License for your documents
-====================================================
-
-   To use this License in a document you have written, include a copy of
-the License in the document and put the following copyright and license
-notices just after the title page:
-
-
-       Copyright (C)  YEAR  YOUR NAME.
-       Permission is granted to copy, distribute and/or modify this document
-       under the terms of the GNU Free Documentation License, Version 1.1
-       or any later version published by the Free Software Foundation;
-       with the Invariant Sections being LIST THEIR TITLES, with the
-       Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.
-       A copy of the license is included in the section entitled ``GNU
-       Free Documentation License''.
-If you have no Invariant Sections, write "with no Invariant
-Sections" instead of saying which ones are invariant.  If you have no
-Front-Cover Texts, write "no Front-Cover Texts" instead of "Front-Cover
-Texts being LIST"; likewise for Back-Cover Texts.
-
-   If your document contains nontrivial examples of program code, we
-recommend releasing these examples in parallel under your choice of
-free software license, such as the GNU General Public License, to
-permit their use in free software.
-
-Concept Index
-*************
-
-.html extension:
-          See ``HTTP Options''.
-.listing files, removing:
-          See ``FTP Options''.
-.netrc:
-          See ``Startup File''.
-.wgetrc:
-          See ``Startup File''.
-accept directories:
-          See ``Directory-Based Limits''.
-accept suffixes:
-          See ``Types of Files''.
-accept wildcards:
-          See ``Types of Files''.
-append to log:
-          See ``Logging and Input File Options''.
-arguments:
-          See ``Invoking''.
-authentication:
-          See ``HTTP Options''.
-backing up converted files:
-          See ``Recursive Retrieval Options''.
-base for relative links in input file:
-          See ``Logging and Input File Options''.
-bind() address:
-          See ``Download Options''.
-bug reports:
-          See ``Reporting Bugs''.
-bugs:
-          See ``Reporting Bugs''.
-cache:
-          See ``HTTP Options''.
-client IP address:
-          See ``Download Options''.
-clobbering, file:
-          See ``Download Options''.
-command line:
-          See ``Invoking''.
-Content-Length, ignore:
-          See ``HTTP Options''.
-continue retrieval:
-          See ``Download Options''.
-contributors:
-          See ``Contributors''.
-conversion of links:
-          See ``Recursive Retrieval Options''.
-cookies:
-          See ``HTTP Options''.
-cookies, loading:
-          See ``HTTP Options''.
-cookies, saving:
-          See ``HTTP Options''.
-copying:
-          See ``Copying''.
-cut directories:
-          See ``Directory Options''.
-debug:
-          See ``Logging and Input File Options''.
-delete after retrieval:
-          See ``Recursive Retrieval Options''.
-directories:
-          See ``Directory-Based Limits''.
-directories, exclude:
-          See ``Directory-Based Limits''.
-directories, include:
-          See ``Directory-Based Limits''.
-directory limits:
-          See ``Directory-Based Limits''.
-directory prefix:
-          See ``Directory Options''.
-dot style:
-          See ``Download Options''.
-downloading multiple times:
-          See ``Download Options''.
-examples:
-          See ``Examples''.
-exclude directories:
-          See ``Directory-Based Limits''.
-execute wgetrc command:
-          See ``Basic Startup Options''.
-features:
-          See ``Overview''.
-filling proxy cache:
-          See ``Recursive Retrieval Options''.
-follow FTP links:
-          See ``Recursive Accept/Reject Options''.
-following ftp links:
-          See ``Following FTP Links''.
-following links:
-          See ``Following Links''.
-force html:
-          See ``Logging and Input File Options''.
-free software:
-          See ``Copying''.
-ftp time-stamping:
-          See ``FTP Time-Stamping Internals''.
-GFDL:
-          See ``Copying''.
-globbing, toggle:
-          See ``FTP Options''.
-GPL:
-          See ``Copying''.
-hangup:
-          See ``Signals''.
-header, add:
-          See ``HTTP Options''.
-hosts, spanning:
-          See ``Spanning Hosts''.
-http password:
-          See ``HTTP Options''.
-http referer:
-          See ``HTTP Options''.
-http time-stamping:
-          See ``HTTP Time-Stamping Internals''.
-http user:
-          See ``HTTP Options''.
-ignore length:
-          See ``HTTP Options''.
-include directories:
-          See ``Directory-Based Limits''.
-incomplete downloads:
-          See ``Download Options''.
-incremental updating:
-          See ``Time-Stamping''.
-input-file:
-          See ``Logging and Input File Options''.
-invoking:
-          See ``Invoking''.
-IP address, client:
-          See ``Download Options''.
-latest version:
-          See ``Distribution''.
-link conversion:
-          See ``Recursive Retrieval Options''.
-links:
-          See ``Following Links''.
-list:
-          See ``Mailing List''.
-loading cookies:
-          See ``HTTP Options''.
-location of wgetrc:
-          See ``Wgetrc Location''.
-log file:
-          See ``Logging and Input File Options''.
-mailing list:
-          See ``Mailing List''.
-mirroring:
-          See ``Very Advanced Usage''.
-no parent:
-          See ``Directory-Based Limits''.
-no warranty:
-          See ``GNU General Public License''.
-no-clobber:
-          See ``Download Options''.
-nohup:
-          See ``Invoking''.
-number of retries:
-          See ``Download Options''.
-operating systems:
-          See ``Portability''.
-option syntax:
-          See ``Option Syntax''.
-output file:
-          See ``Logging and Input File Options''.
-overview:
-          See ``Overview''.
-page requisites:
-          See ``Recursive Retrieval Options''.
-passive ftp:
-          See ``FTP Options''.
-pause:
-          See ``Download Options''.
-portability:
-          See ``Portability''.
-progress indicator:
-          See ``Download Options''.
-proxies:
-          See ``Proxies''.
-proxy <1>:
-          See ``HTTP Options''.
-proxy:
-          See ``Download Options''.
-proxy authentication:
-          See ``HTTP Options''.
-proxy filling:
-          See ``Recursive Retrieval Options''.
-proxy password:
-          See ``HTTP Options''.
-proxy user:
-          See ``HTTP Options''.
-quiet:
-          See ``Logging and Input File Options''.
-quota:
-          See ``Download Options''.
-random wait:
-          See ``Download Options''.
-recursion:
-          See ``Recursive Retrieval''.
-recursive retrieval:
-          See ``Recursive Retrieval''.
-redirecting output:
-          See ``Advanced Usage''.
-referer, http:
-          See ``HTTP Options''.
-reject directories:
-          See ``Directory-Based Limits''.
-reject suffixes:
-          See ``Types of Files''.
-reject wildcards:
-          See ``Types of Files''.
-relative links:
-          See ``Relative Links''.
-reporting bugs:
-          See ``Reporting Bugs''.
-required images, downloading:
-          See ``Recursive Retrieval Options''.
-resume download:
-          See ``Download Options''.
-retries:
-          See ``Download Options''.
-retries, waiting between:
-          See ``Download Options''.
-retrieving:
-          See ``Recursive Retrieval''.
-robots:
-          See ``Robots''.
-robots.txt:
-          See ``Robots''.
-sample wgetrc:
-          See ``Sample Wgetrc''.
-saving cookies:
-          See ``HTTP Options''.
-security:
-          See ``Security Considerations''.
-server maintenance:
-          See ``Robots''.
-server response, print:
-          See ``Download Options''.
-server response, save:
-          See ``HTTP Options''.
-signal handling:
-          See ``Signals''.
-spanning hosts:
-          See ``Spanning Hosts''.
-spider:
-          See ``Download Options''.
-startup:
-          See ``Startup File''.
-startup file:
-          See ``Startup File''.
-suffixes, accept:
-          See ``Types of Files''.
-suffixes, reject:
-          See ``Types of Files''.
-symbolic links, retrieving:
-          See ``FTP Options''.
-syntax of options:
-          See ``Option Syntax''.
-syntax of wgetrc:
-          See ``Wgetrc Syntax''.
-tag-based recursive pruning:
-          See ``Recursive Accept/Reject Options''.
-time-stamping:
-          See ``Time-Stamping''.
-time-stamping usage:
-          See ``Time-Stamping Usage''.
-timeout:
-          See ``Download Options''.
-timestamping:
-          See ``Time-Stamping''.
-tries:
-          See ``Download Options''.
-types of files:
-          See ``Types of Files''.
-updating the archives:
-          See ``Time-Stamping''.
-URL:
-          See ``URL Format''.
-URL syntax:
-          See ``URL Format''.
-usage, time-stamping:
-          See ``Time-Stamping Usage''.
-user-agent:
-          See ``HTTP Options''.
-various:
-          See ``Various''.
-verbose:
-          See ``Logging and Input File Options''.
-wait:
-          See ``Download Options''.
-wait, random:
-          See ``Download Options''.
-waiting between retries:
-          See ``Download Options''.
-Wget as spider:
-          See ``Download Options''.
-wgetrc:
-          See ``Startup File''.
-wgetrc commands:
-          See ``Wgetrc Commands''.
-wgetrc location:
-          See ``Wgetrc Location''.
-wgetrc syntax:
-          See ``Wgetrc Syntax''.
-wildcards, accept:
-          See ``Types of Files''.
-wildcards, reject:
-          See ``Types of Files''.
-
-...Table of Contents...

Index: manual/wget-1.8.1/text/wget.txt.gz
===================================================================
RCS file: manual/wget-1.8.1/text/wget.txt.gz
diff -N manual/wget-1.8.1/text/wget.txt.gz
Binary files /tmp/cvs3QVPYl and /dev/null differ



