web page problem

Linux for blind general discussion blinux-list at redhat.com
Sun Jan 23 05:04:56 UTC 2022


When a file on a web page gets updated to a new version, is it possible
to get the URLs of the new versions so they can be downloaded?  HTML
doesn't support wildcards, so this can't be done with wget alone.  I'd
like to be able to do this with a script if at all possible.  I know
Perl handles wildcards (regular expressions) well, but I don't know
whether Perl is up to a job like this.
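One possible approach, assuming the page lists its downloads as
ordinary href links: fetch the page with wget and filter the link list
with a regular expression.  The page URL and the foo-*.tar.gz filename
pattern below are made-up placeholders.

# Hypothetical page URL; substitute the real one.
PAGE=https://example.com/downloads/
# Fetch the page to stdout, pull out the href values, and keep only
# links that look like versioned tarballs.
wget -qO- "$PAGE" \
  | grep -oE 'href="[^"]+"' \
  | sed 's/^href="//; s/"$//' \
  | grep -E 'foo-[0-9.]+\.tar\.gz$'

Note that relative links come out as-is, so you may need to prepend the
page URL before handing them back to wget.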
Whether a file exists at a given URL can be checked by running wget
with the --spider option followed by the URL.  Since a successful check
exits with status 0, you can follow it with && and a wget -bc URL to
download the file only if it exists.
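Put together, with a made-up URL as a placeholder, that looks like the
sketch below (-b backgrounds the download and writes its progress to
wget-log; -c resumes a partial file):

# Hypothetical URL; substitute the real one.
URL=https://example.com/downloads/foo-1.2.3.tar.gz
# --spider checks existence without downloading; on success (exit 0),
# && starts the real download in the background.
wget --spider "$URL" && wget -bc "$URL"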
Once the file is downloading, the command

wc -l wget-log && grep -i saved wget-log && rm wget-log

run every so often shows the growing size of wget-log, and at the end
shows the saved file name and then removes wget-log.  The magic is in
&&, which only runs the next command if the previous one succeeded, so
rm only fires once grep actually finds "saved" in the log.
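If you'd rather not rerun that by hand, a small polling loop can watch
the log until the download finishes.  A minimal sketch; the 30-second
interval is arbitrary:

# Poll wget-log until wget reports the file was saved, then clean up.
while sleep 30; do
    wc -l wget-log                     # show the log growing
    if grep -i saved wget-log; then    # wget prints "... saved ..." when done
        rm wget-log
        break
    fi
done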



