

Hello Richard, and hello everyone,

First of all, thank you Richard for coming to our mailing list to tell
us about the GNU Speech project. This project is really very interesting
and I hope we will manage to move it forward. It turns out, however,
that there is a great deal of work to do: if I have understood
correctly, the first step is to port the software to GNU/Linux, and only
after that could we tackle the implementation of new languages!
To shed light on the content of the project, and so give you an idea of
what would need to be done, I am copying below the contents of a message
from David R. Hill, PEng, whom Richard put us in touch with
(thank you Richard):
I think it would be good to publicise the existence of this project as
widely as possible, in order to find the skills needed to move it
forward.
What can we do? How? Your opinions?


----- Forwarded message from david <david@firethorne.com> -----
Date: Wed, 28 Aug 2002 22:23:11 -0700 (PDT)
From: david <david@firethorne.com>
Subject: Re: RMS recommended me to contact you!
To: <info@brlspeak.net>
Dear Osvalda La Rosa,

Many thanks for your email concerning the GnuSpeech project and your
interest in it for use as an aid to the blind.

GnuSpeech is intended as Free Software (i.e. under a GPL).  You can find
out more information by visiting my university web site:

	http://www.cpsc.ucalgary.ca/~hill

and following the menu selection for papers.  There's a paper I gave to
AVIOS 95 in San Jose which summarises the speech work, and there is also a
manual for the MONET system that was used to create some of the databases
used for the system that was, at one time, marketed by Trillium Sound
Research Inc -- a company we set up to develop the software commercially,
but which has been dissolved, following our decision to make it Free
Software.  The software was called the TextToSpeech Kit and came in
three forms: a User Kit, a Developer Kit, and a Researcher Kit.

The GnuSpeech software includes all the software for these, plus some
in-house tools we created and used for things like dictionary development.

The MONET system was central to the development of the English database,
and could be used to set up the articulatory database for other arbitrary
languages.

There is a 70,000 word English dictionary.  The system is
already partially adapted for French, though so far we have defined
only the five nasal vowels, not a uvular "r" sound.  It would be necessary to
define a dictionary of French words, providing pronunciations, plus a set
of French letter-to-sound rules for words not found in the French
dictionary.  It would also be necessary to define additional articulatory
rules for sounds special to the French language, though the existing
English rule set would provide an excellent starting point.
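To make the division of labour concrete, here is a minimal sketch of the dictionary-plus-rules scheme described above: look a word up in an exception dictionary first, and only fall back to ordered letter-to-sound rules when it is missing. The dictionary entries, rule list, and phoneme symbols are purely illustrative and are not taken from the real GnuSpeech databases.

```python
# Illustrative sketch of dictionary lookup with a letter-to-sound fallback.
# All data below is made up for the example, not real GnuSpeech data.

# A tiny exception dictionary: word -> phonetic transcription.
FRENCH_DICT = {
    "bonjour": "b on j u r",
    "merci": "m e r s i",
}

# Ordered letter-to-sound rules: (grapheme, phoneme).
# Longer graphemes come first so they win over single letters.
LTS_RULES = [
    ("eau", "o"),
    ("ou", "u"),
    ("ch", "sh"),
    ("r", "R"),   # stand-in symbol for the uvular "r" mentioned above
    ("a", "a"),
    ("e", "e"),
    ("i", "i"),
    ("o", "o"),
    ("u", "y"),
    ("b", "b"),
    ("j", "j"),
    ("m", "m"),
    ("n", "n"),
    ("s", "s"),
]

def to_phonemes(word: str) -> str:
    """Return the dictionary pronunciation, or fall back to the rules."""
    word = word.lower()
    if word in FRENCH_DICT:
        return FRENCH_DICT[word]
    phonemes = []
    i = 0
    while i < len(word):
        for grapheme, phoneme in LTS_RULES:
            if word.startswith(grapheme, i):
                phonemes.append(phoneme)
                i += len(grapheme)
                break
        else:
            i += 1  # no rule covers this letter; skip it
    return " ".join(phonemes)
```

For example, "merci" comes straight from the dictionary, while a word such as "chou" is built from the rules ("ch" then "ou"). A real system would of course need far richer, context-sensitive rules.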

The software was originally developed for the NeXT computer, and makes
considerable use of the NeXTStep AppKit and Interface builder facilities,
which are currently being ported to Gnu/Linux as GnuStep.

The main obstacle to using the software now is that no-one is doing
anything to get the software working under Gnu/Linux (though I have some
vague promises of help).

There are two possible routes to getting the software working under
Gnu/Linux: (a) the high road -- port the existing software to GnuStep, the
main problem being the immaturity of GnuStep; (b) the low road -- rebuild
the system using C, GTK and things like that to produce a new version based
on the old version which can run without the aid of GnuStep facilities,
the main problem being that getting the full system running would be a
fair amount of work, especially if the intention was to get a French
version fully working, because that would require MONET and the
interactive tube model interface to be ported, both of which involve
significant graphical interface programming (which would be much easier
under GnuStep -- I've had reports of NeXTStep applications being ported
with virtually no changes needed; the difficulties apparently lie in
getting GnuStep installed and running).

Another research project I worked on was to provide computer access for
the blind, using the TextToSpeech software.  The results are written up
in the IEEE Transactions on Systems, Man and Cybernetics Vol. 18 No. 2,
March/April 1988, "Substitution for a Restricted Visual Channel in
Multimodal Computer-Human Dialogue", David R. Hill & Christiane Grieb.
The work is briefly covered in my paper "Give us the tools: a personal
view of multi-modal computer-human dialogue" which has just appeared in
the book edited by Taylor MM, Neel F, and Bouwhuis DG "The Structure of
Multimodal Dialogue II" (John Benjamins Publishing Co.:
Philadelphia/Amsterdam, 2000) pages 25 to 62.  The system was called
"TouchNTalk"

If you find it hard to obtain a copy of either of the papers on
TouchNTalk, I'd be glad to mail copies to you.  Accessing the web papers
should present no problems.

I look forward to hearing from you, because I am sure that we can be of
mutual assistance.

Please don't apologise for your English.  I am sure it is better than my
French!!

All good wishes.

david
---
David R. Hill, PEng,
Professor Emeritus,
University of Calgary
---


--
Nath




---------------------------------------------------------------------
To unsubscribe, e-mail: biglux-unsubscribe@savage.iut-blagnac.fr
For additional commands, e-mail: biglux-help@savage.iut-blagnac.fr