Errors-To: listmaster@www0.cern.ch
Date: Sun, 19 Jun 1994 20:17:18 +0200
Message-id: <5B063E07F2@cicei.ulpgc.es>
Reply-To: GONZALO@cicei.ulpgc.es
Originator: www-talk@info.cern.ch
Sender: www-talk@www0.cern.ch
Precedence: bulk
From: "Luis Gonzalo Aller Arias" <GONZALO@cicei.ulpgc.es>
To: Multiple recipients of list <www-talk@www0.cern.ch>
Subject: mirror for www?
X-Listprocessor-Version: 6.0c -- ListProcessor by Anastasios Kotsikonas
X-Mailer: Pegasus Mail v3.1 (R1a)
Priority: normal
Content-Type: text/plain; charset=US-ASCII
Mime-Version: 1.0
Organization: Universidad de Las Palmas de G.C.

Hello!
I'm setting up a www server here. I have successfully
installed an ISMAP and I'm almost ready to say "hello world" 8).
It's really amazing what this system is capable of doing!

I have read about the bots, spiders, and other programs that
can be used to retrieve information through the web; I'm really
interested!

I now have a little problem. When I see an HTML page of interest,
I would like to retrieve it and read it locally, to avoid connecting
many times. If the page doesn't have too many links it can be
done by hand, but if it's very leafy (with local links) I think that
surely there is a Perl (or C, ...) program that can retrieve the HTML
page and all its local links. Or, even better, one that works like mirror,
using the dates to make updates, etc. ... Am I right? If not, I'll
try to write the program myself 8).
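As a rough illustration of what such a program would do, here is a minimal sketch in modern Python (not the Perl or C of the time; the function names and the breadth-first approach are my own illustration, not any existing tool): fetch a page, extract its anchor links, and recursively fetch only the links that stay on the same host.

```python
# Sketch of a "retrieve a page plus its local links" tool.
# Assumptions: links live in <a href="..."> tags, and "local"
# means "same host as the starting page".
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def local_links(base_url, html_text):
    """Return absolute URLs of links that stay on the same host."""
    parser = LinkExtractor()
    parser.feed(html_text)
    base_host = urlparse(base_url).netloc
    return [urljoin(base_url, h) for h in parser.links
            if urlparse(urljoin(base_url, h)).netloc == base_host]


def mirror(start_url, fetch=lambda u: urlopen(u).read().decode("latin-1")):
    """Breadth-first fetch of a page and every local page it links to.

    Returns a dict mapping each visited URL to its HTML text; the
    `fetch` parameter is injectable so the crawl can be tested offline.
    """
    seen, queue, pages = set(), [start_url], {}
    while queue:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        text = fetch(url)
        pages[url] = text
        queue.extend(local_links(url, text))
    return pages
```

A real mirroring tool would additionally compare modification dates (e.g. via an HTTP HEAD request) before re-fetching a page, which is the "updates" part of the question; that is left out here for brevity.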
Regards:
_, _ _ _ _ | _
(_|(_)| |<(_||(_)
(_)