Sonar GNU/Linux merges with Vinux

Linux for blind general discussion blinux-list at redhat.com
Thu Apr 27 14:51:15 UTC 2017


I don't understand the advantage an ASCII speech synthesizer has over
a Unicode speech synthesizer, or the advantage of having an
intermediary between synthesizer and screen reader. Maybe I'm missing
something, but I would think a hypothetical espeak-unicode that could
work directly with Orca would work better than keeping espeak ignorant
of Unicode and requiring speech-dispatcher to translate Unicode into
something espeak understands. Honestly, having an intermediary handle
Unicode support sounds like the computer equivalent of telling someone
they shouldn't learn a foreign language because they can just use
Google Translate.
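To illustrate what that kind of translation layer amounts to, here is a minimal sketch of one way an intermediary could feed an ASCII-only synthesizer: replace each non-ASCII character with its spoken Unicode name. This is purely hypothetical and not the actual speech-dispatcher API; the function name is made up.

```python
import unicodedata

def to_ascii_speech(text):
    """Replace each non-ASCII character with its Unicode character name,
    so an ASCII-only synthesizer has something pronounceable to say."""
    out = []
    for ch in text:
        if ord(ch) < 128:
            out.append(ch)
        else:
            # Fall back to a generic label for unnamed characters.
            name = unicodedata.name(ch, "unknown character").lower()
            out.append(" " + name + " ")
    return "".join(out)

print(to_ascii_speech("café"))
```

The obvious cost, as argued above, is that the synthesizer can never read the character as part of a word; it can only spell out a description of it.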

Anyway, I personally think that stringing Greek, Hebrew, Arabic, etc.
into words instead of reading them as individual characters, and
actually being able to identify individual kanji and kana, are more
important as far as Unicode support is concerned. Not that I know
enough Hebrew or Arabic for a proper reading to tell me anything, but
I stumble upon enough text in those alphabets that Orca's slowdown to
letter-by-letter reading is annoying, and it would be nice if I could
make use of what little I remember from taking Japanese in high
school.
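For what it's worth, grouping text into same-script runs (so a whole Greek or Hebrew word could be handed to the synthesizer at once, rather than spelled character by character) is not hard in principle. Here's a rough sketch; the script detection via Unicode character-name prefixes is a crude approximation of my own, not anything Orca or speech-dispatcher actually does.

```python
import unicodedata

def script_of(ch):
    """Crudely guess a character's script from the first word of its
    Unicode name, e.g. 'GREEK SMALL LETTER GAMMA' -> 'GREEK'."""
    name = unicodedata.name(ch, "")
    return name.split(" ")[0] if name else "UNKNOWN"

def script_runs(text):
    """Split text into maximal runs of consecutive same-script characters."""
    runs = []
    for ch in text:
        s = script_of(ch)
        if runs and runs[-1][0] == s:
            runs[-1] = (s, runs[-1][1] + ch)
        else:
            runs.append((s, ch))
    return runs

print(script_runs("abγδ"))
```

A screen reader with runs like these could at least pass each non-Latin run to the synthesizer as a unit instead of pausing on every character.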

-- 
Sincerely,

Jeffery Wright
President Emeritus, Nu Nu Chapter, Phi Theta Kappa.
Former Secretary, Student Government Association, College of the Albemarle.

