Feb 11

iThink, therefore … iPad

As a person who can’t speak, I use many other strategies to communicate. Mostly, this involves technology. I love technology, and my lack of voice has given me an excuse to try everything I can get my hands on over the last few years. I’ve been meaning to write up a summary for ages. So here it is…

The tool I use most is just the word processing package Pages on the iPad. While it obviously isn’t designed as any sort of therapeutic tool, it works so well I now use it every day. In most situations, I simply show the person I’m speaking to the text I’ve typed. In large groups, or in meetings, I usually nominate someone as my ‘reader’ and hand the device to them when I want to make a contribution. (Though it’s worth saying, the job of reader takes some skill. Some people insist on ‘interpreting’ what I type. And it’s a rare reader who can pick up the emphasis I want — no matter how much italics, bold or how many asterisks I use.) Previously I used Word on the laptop in the same way.

The choice between devices is interesting. Personally, I prefer the iPad to any other device I’ve used, though the others had advantages too. My laptop was good when I ran Word 2003, because I could either show the text or use Word’s built-in text-to-speech function: you just pressed play and it read the words aloud. That was a very nice option to have, and the iPad has no such function. (Then again, Word 2007 dropped that function too. Go figure.) There was also an Australian-accented voice for the notebook, whereas with the iPad you can only choose US or British accents so far. The beauty of the iPad is that you can operate it entirely with one finger. There’s no mouse to master, there are no combination keys to press and, best of all, it is ‘instant on’ (that is, you just press the home button and it’s ready. No slow start-up). The iPad is also easy to pass around and of course the battery lasts all day.

There are a number of tools that actually speak the words for you, and they break down into two groups – text-to-speech (where you type in the exact words to say) and symbol-based tools (where you use pre-set symbols to make up sentences, or define your own commonly-used phrases). Personally, I’ve always preferred the text-to-speech tools because they allow more nuanced conversation. Symbol-based tools are quite good for making requests, but less so for conversations. On a laptop, the best text-to-speech tools are TextAloud and the more sophisticated NextUp Talker. Both come with an incredible range of natural voices, including the Australian accent I mentioned above, which is called Lee. Both are Windows-only. For the Mac, the best I’ve seen is called GhostReader.

For the iPad, text-to-speech options are quite limited. There is an expensive app called EZSpeechPro ($229), but the main one I use is called Speak it! Both have only American and/or British voices. The best of the symbol-based tools I’ve found for the iPad is called Auto Verbal. Again, it only has an American accent, but it’s simple, cheap and usable. You can also save phrases you regularly use, and they are available with a single tap. There are some very expensive and sophisticated symbol-based packages for both the iPad and laptop (an example is Proloquo2Go – $239) but, to me, they don’t offer more than the combination of a simple tool and a word processor.

I’ve also found that text-to-speech works well in some situations and less well in others. In formal meetings and presentations, and especially on conference calls and phone calls, I find text-to-speech a great tool. For example, when I need to speak to a bank on the phone, they won’t accept someone speaking for you. But they will accept text-to-speech. The thing is, you need to plan in advance, or at least warn the listener that they will need to wait while you type your answer. Text-to-speech tools read exactly what you type, so if you make typos, that’s what they read; whereas when I’m just showing the words I’ve typed, I can type away flat out and let the typos go uncorrected.

My feeling is that, on balance, the iPad currently offers the best range of functions for a user. For anyone with limited hand function especially, it is just so simple to use. Because most apps are so cheap, you can afford to buy a number of the different text-to-speech and symbol-based apps and just try them out; on a laptop, the same experiment would cost several hundred dollars.

