[darcs-users] Re: Emacs / Vim / etc

Eric S. Johansson esj at harvee.org
Tue May 25 15:45:19 UTC 2004

David Roundy wrote:

> I've wondered about this (back when my wrist was bad enough to keep me off
> the computer at home--and cause trouble eating)... is there a language that
> does support speech recognition? How does it work? At the time, I was
> thinking about how I would implement such a thing in haskell--it seems like
> an interesting challenge creating a translator from haskell to
> "human-speakable haskell"... and of course back again.

The trick is not creating a language that is speech-recognition friendly per se, but creating an environment that lets you work with other humans who are clueless about the needs of speech-recognition-dependent people.

The VoiceCoder project is a start in the right direction.  They do many 
things right, and then they do some things which I consider not 
forward-thinking enough.

For example, all languages have features that can be spoken to. 
Variable names, functions, methods, classes, and include files are 
examples of symbols you can use as part of grammars for navigation, 
code creation, or editing.  You also need the smarts in the editor to 
tell you what's going on in your context.  Are you in comment mode, so 
that you use ordinary English dictation and a different set of commands 
for editing and navigation?  Are you in program-creation mode, and if 
so, what is the scope of the names your grammar can understand (i.e. 
local variables, methods of the current class, other classes, methods 
of other classes, etc.)?  Then there are unnamed or relative features 
such as argument positions, predicates, and block beginnings and 
endings.  These features are also extremely useful for navigation when 
editing.
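To make the context idea concrete, here is a minimal sketch of what a mode-aware grammar builder might look like.  All the names here (the scope dictionary, build_grammar) are illustrative assumptions, not part of any real speech-recognition toolkit:

```python
# A sketch of a context-aware grammar: the set of speakable symbols
# depends on whether the cursor is in a comment or in code, and in
# code it is limited to names visible in the current scope.

def build_grammar(scope):
    """Build the speakable vocabulary for the current edit context."""
    if scope["mode"] == "comment":
        # Ordinary English dictation plus a few editing commands.
        return {"mode": "dictation", "commands": ["end comment", "new line"]}
    # Program-creation mode: only names visible right now should be
    # recognizable, which keeps the grammar small and accurate.
    speakable = (scope["local_variables"]
                 + scope["class_methods"]
                 + scope["visible_classes"])
    return {"mode": "code", "symbols": sorted(speakable)}

scope = {
    "mode": "code",
    "local_variables": ["count", "line"],
    "class_methods": ["parse", "emit"],
    "visible_classes": ["Lexer"],
}
print(build_grammar(scope))
```

The point of the sketch is that the grammar is rebuilt from the editor's knowledge of scope, not hard-coded.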

How I cope today: Python, for the most part, lets me just dictate a 
fair amount of text, and I've added some macros to do things like match 
parentheses, braces, etc.  I could probably do more, but I haven't 
quite figured out what yet.
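The editor-side half of a "match parentheses" macro is just bracket counting.  This is a sketch of that logic (hypothetical, not the author's actual macro, which would live in NaturallySpeaking):

```python
# Find the bracket matching the one under the cursor by counting
# nesting depth.  A dictation macro would call something like this
# and move the cursor to the returned index.

PAIRS = {"(": ")", "[": "]", "{": "}"}

def find_match(text, pos):
    """Return the index of the bracket matching the opener at `pos`."""
    open_ch = text[pos]
    close_ch = PAIRS[open_ch]
    depth = 0
    for i in range(pos, len(text)):
        if text[i] == open_ch:
            depth += 1
        elif text[i] == close_ch:
            depth -= 1
            if depth == 0:
                return i
    return -1  # unbalanced

print(find_match("f(a, (b))", 1))  # -> 8, the ')' that closes 'f('
```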

Another thing is to minimize or eliminate case changes.  I do 
everything lowercase unless NaturallySpeaking insists on creating the 
word differently.  A common bugaboo is fussy punctuation spacing. 
English spacing and computer-language spacing don't always match, and 
any time they don't, I need to create another macro to force the right 
action.  A classic for this is self.xxx, which frequently comes out as 
self-taught (dot).  So I am about ready to create something like 
"joiner" to force the items together.
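A "joiner" could be as simple as stripping the English-style spaces the recognizer inserts around punctuation.  A minimal sketch, with made-up spacing rules rather than NaturallySpeaking's actual output:

```python
# Collapse English-style spacing around punctuation into code-style
# spacing, so dictated text like "self . name" becomes "self.name".

import re

def joiner(dictated):
    """Remove spaces around '.' and before '(' in dictated text."""
    out = re.sub(r"\s*\.\s*", ".", dictated)   # "self . name" -> "self.name"
    out = re.sub(r"\s+\(", "(", out)           # "foo (" -> "foo("
    return out

print(joiner("self . name"))     # -> self.name
print(joiner("parse (line)"))    # -> parse(line)
```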

This last example also highlights where a smart environment would 
really help.  For example, if I said "self", most people would expect 
some sort of class-specific name to come next, joined to the self by a 
".".  So the grammar would be

self <class_names>

and class_names would be a list of the current class's symbols, current 
as of that moment.  The trick is extracting that list of names from an 
editor on one machine and sending it to the machine with the speech 
recognition grammar engine, across X security perimeters.
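The editor-side extraction is doable with the standard library; here is a sketch that pulls a class's method names out of a buffer with Python's `ast` module, producing the <class_names> word list the grammar would load.  (Shipping the list across the security boundary to the recognizer is the hard part and is left out; the buffer and names below are illustrative.)

```python
# Extract the method names a class defines, so a speech grammar can
# load them as the speakable completions for "self".

import ast

def class_symbols(source, class_name):
    """Return the method names defined by `class_name` in `source`."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef) and node.name == class_name:
            return [item.name for item in node.body
                    if isinstance(item, ast.FunctionDef)]
    return []

buffer = """
class Account:
    def deposit(self, amount): pass
    def withdraw(self, amount): pass
"""
print(class_symbols(buffer, "Account"))  # -> ['deposit', 'withdraw']
```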

And that's just the start of the kind of things I want to do.

The important thing to do with any speech user interface is to minimize 
the load on the voice.  Just as frequent use damages hands, wrists, 
arms, etc., frequent use will damage the voice, and it's far more 
fragile than your arms.  Screw up your voice and you are well and truly 
hosed.


More information about the darcs-users mailing list