command line fan fiction program?

Karen Lewellen klewellen at shellworld.net
Thu Mar 23 14:42:56 UTC 2017


Interesting ideas.
I appreciate the education.
Could you give an example of a mass downloader included with a Linux shell?
I want to test this, but am unsure of what tool to use.
The editing is not a problem; I am far from picky about it, having, when 
the work was smaller, used the m. edition of the site to secure things one 
at a time.
Being able to grab all the chapters at once would be a fine solution, 
certainly for the works I cannot get in EPUB.
RoboBraille converts EPUB to RTF, and when I use Gmail in basic HTML, the 
work is automatically converted to plain text for me.
Thanks again,
Karen


On Thu, 23 Mar 2017, Jeffery Mewtamer wrote:

> A few thoughts:
>
> Changing www to m in an FF.net URL gives you the mobile version of the
> page. For story chapters, this greatly reduces the cruft at the top of
> the page and somewhat reduces the cruft at the bottom.
>
> The format for story page URLs is
> https://m.fanfiction.net/s/[storyID]/[Chapter#]/[storyTitle]
>
> So, if you know the story ID, title, and number of chapters, you can
> use any generic mass downloader to download all pages in the range
> chapter#=1 to chapter#=total chapters.
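>
> For example, a minimal sketch of such a download loop using wget (the
> story ID, chapter count, and title below are placeholders, not a real
> story; curl -o would work the same way):
>
> for n in $(seq 1 42); do
>     wget -O "chapter_$(printf '%03d' "$n").html" \
>         "https://m.fanfiction.net/s/1234567/$n/Story-Title"
> done
>
> The printf '%03d' zero-pads the file names, which matters later when
> merging the chapters in order.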
>
> If your mass downloader doesn't have built-in support for converting
> to plain text, html2text can do this for you, and you can then read it
> in your text editor of choice (this also allows you to correct typos the
> author missed if you plan to reread at a future date).
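>
> As a sketch, assuming the html2text utility is installed and the files
> are named as in the hypothetical loop above, you could convert every
> downloaded chapter in one pass:
>
> for f in chapter_*.html; do
>     html2text "$f" > "${f%.html}.txt"
> done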
>
> Thanks to downloading the mobile versions, there is less cruft to
> remove from the files if you just want the story text. Reduced enough
> that manually removing the cruft isn't much trouble, though if you
> plan to do this for entire bookshelves worth of content, you'll
> probably want a tool to automate the process. The split command can
> divide a file into files of x lines each, which can be useful for
> removing large headers and small footers from text files, but can
> require remerging with cat if the split produces more than two files.
> The head and tail commands can return the first/last n lines of a
> file, which might also be useful.
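>
> As a rough example, if each file had a 12-line header and a 5-line
> footer (counts you would need to verify yourself), GNU head and tail
> can strip both in one pipeline:
>
> head -n -5 chapter_001.txt | tail -n +13 > clean_001.txt
>
> Here head -n -5 prints all but the last 5 lines (a GNU extension),
> and tail -n +13 starts output at line 13, skipping the 12-line header.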
>
> If you really want all the chapters in a single file (personally, I
> prefer the convenience of each chapter in its own file), the cat
> command can merge the files for you, provided they're named so they
> sort in the correct order when you do ls -1. I believe the syntax you
> want is:
> cat *.txt > book.txt
> Again, it's important that the input files are named properly. For
> example, probably the easiest mistake to make is omitting leading
> zeros on chapters 1-9, which will result in chapters 10-19 sorting
> between chapters 1 and 2, chapters 20-29 between chapters 2 and 3,
> and so on in the combined file; a lack of leading zeros can cause
> even more chaos with 100+ chapters.
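>
> One way to fix that after the fact, sketched here for files named
> chapter_1.txt through chapter_99.txt (a hypothetical naming scheme),
> is to rename them with zero-padded numbers before merging:
>
> for n in $(seq 1 99); do
>     [ -f "chapter_$n.txt" ] && \
>         mv "chapter_$n.txt" "chapter_$(printf '%03d' "$n").txt"
> done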
>
> You could probably combine these tips to make a bash script that takes
> story ID, title, and number of chapters as input and produces a
> plain-text eBook as its final output, but that's a bit advanced for my
> scripting skills and I just read online, so I have little incentive.
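>
> A rough, untested sketch of such a script (it assumes wget and
> html2text as in the earlier examples, and the name ffget.sh is made
> up; error handling is left to the reader) might look like:
>
> #!/bin/bash
> # ffget.sh - fetch an FF.net story as a plain-text eBook
> # usage: ffget.sh storyID storyTitle chapterCount
> id="$1"; title="$2"; count="$3"
> for n in $(seq 1 "$count"); do
>     pad=$(printf '%03d' "$n")
>     wget -q -O "ch_$pad.html" "https://m.fanfiction.net/s/$id/$n/$title"
>     html2text "ch_$pad.html" > "ch_$pad.txt"
> done
> cat ch_*.txt > "$title.txt"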
>
> -- 
> Sincerely,
>
> Jeffery Wright
> President Emeritus, Nu Nu Chapter, Phi Theta Kappa.
> Former Secretary, Student Government Association, College of the Albemarle.
>



