Steno 101: How to Do It
Steno 101: Lesson Zero
Steno 101: Lesson One
Steno 101: Lesson Two
Steno 101: Lesson Three
Steno 101: Lesson Four
It's been a while since the last installment, but here without further ado is Steno 101, Lesson Two. Now is probably a good time to review Lesson Zero and Lesson One.
Before we get started, I have to address an issue I inexcusably neglected in the previous two segments. If you look at the steno keyboard, you'll notice that the left hand side has four columns of keys before it hits the asterisk, but the right hand side has five columns after the asterisk.
So unless you're Count Rugen from The Princess Bride, you might be wondering how you're supposed to handle that extra column of keys. The answer is that the right pinky operates both of the two rightmost columns, even though it rests on the left TS column rather than the right DZ column when it's not in use. I prefer steno machines with wide keys that let me hit all four keys with one pinky if I want to, but different stenographers have different preferences. The main thing to remember is that the pointer fingers should always be on the columns adjacent to the asterisks, with the rest of the fingers following naturally from there. I've recently started tutoring a beginner in steno, and her fingers kept wanting to drift over to the right, which meant she had to either stretch or shift her whole hand left whenever she wanted to press the FR keys. That's really inefficient, considering how much more often "FR" is used in steno than "DZ".
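For reference, here's the finger-to-column assignment I've just described, written out as a little Python table (my own illustrative sketch, using the standard convention of a hyphen to mark left-bank versus right-bank keys):

    # Illustrative finger-to-column map for the standard steno layout.
    # "S-" is the left-bank S column; "-F" is the right-bank F key, and so on.
    FINGER_COLUMNS = {
        "left pinky":    ["S-"],
        "left ring":     ["T-", "K-"],
        "left middle":   ["P-", "W-"],
        "left pointer":  ["H-", "R-"],   # always adjacent to the asterisk
        "left thumb":    ["A-", "O-"],
        "right thumb":   ["-E", "-U"],
        "right pointer": ["-F", "-R"],   # always adjacent to the asterisk
        "right middle":  ["-P", "-B"],
        "right ring":    ["-L", "-G"],
        "right pinky":   ["-T", "-S", "-D", "-Z"],  # rests on -T/-S, reaches over for -D/-Z
    }

    for finger, keys in FINGER_COLUMNS.items():
        print(finger.ljust(14), " ".join(keys))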
So you've learned S, T, P, and R on both sides of the keyboard, and you've memorized all the various vowel combinations and diphthongs you can get out of the four vowel keys: A, O, E, and U. What's left? First, the other consonants represented by individual keys:
You can see there are more on the right hand than on the left, but they should all be pretty easy to remember, since they're just straight-up letters rather than chords. The tricky part comes in here:
These are all the letters represented by chords. This time there are more on the left hand side than on the right. In fact, in most steno theories, including mine, only the left hand side has a complete alphabet, and it's the only side used to spell words out letter by letter when they aren't defined strokewise in the steno dictionary.
All this is a lot to memorize, but I hope that breaking it down in this way will make the process easier. Feel free to print out these charts and keep them on hand for reference. You might first try memorizing the individual keys on both sides, then memorize the complete alphabet on the left hand side, incorporating both chords and individual keys, and finally add the four chords on the right hand side. Then try putting the letter keys and letter chords together with the vowel chart from the previous lesson. Once you've got all that under your belt, you'll be able to write almost any English word phonetically, and you'll be able to use the left-hand spelling alphabet (which I write using the letter or chord plus the asterisk key for lowercase letters; I'll get to uppercase letters and other alphabets in subsequent lessons) to spell out words that aren't in the dictionary. Next lesson we'll learn what to do when a word can't be written strictly phonetically, then work on a few principles for briefing prefixes, suffixes, and other common word components.
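To make the spelling-alphabet idea concrete, here's a tiny sketch of what a few of those left-hand fingerspelling entries might look like as a Python dictionary. The chord values follow standard steno theory, but the exact entries are my own illustration, not necessarily what any particular dictionary (including Plover's) contains:

    # Hypothetical fingerspelling entries: left-hand letter or letter chord
    # plus the asterisk translates to a single lowercase letter.
    SPELLING = {
        "A*":  "a",
        "PW*": "b",   # B has no key of its own, so it's the P+W chord
        "KR*": "c",
        "TK*": "d",
        "S*":  "s",
        "T*":  "t",
    }

    def fingerspell(strokes):
        """Translate a sequence of asterisked strokes into spelled-out letters."""
        return "".join(SPELLING[stroke] for stroke in strokes)

    print(fingerspell(["KR*", "A*", "T*"]))  # -> "cat"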
Monday, August 30, 2010
Wednesday, August 18, 2010
Funding
So I just had a good session with my Python tutor, and it seems that Plover has reached a turning point. The essential steno-to-English structure of the program is working well. The next biggest priority is getting it to work as a keyboard emulator, and it looks like the most efficient and robust way to do that is to plug Plover into a program with flexible, relatively low-level control over the OS: namely, Autokey in Linux and AutoHotKey in Windows. We're starting with Autokey, because it's written in Python (AutoHotKey is written in C++, though fortunately it's also open source) and my Python tutor runs Ubuntu, so it's the most convenient one to try first. We dug through the program a bit today, and it looks like it's going to involve a fair amount of intricate rerouting, testing, and kludging. The other issue is that Plover is written in Python 3 and Autokey is written in Python 2.6, so we'll have to negotiate how to bridge that gap as well (a rough sketch of one possible bridging approach appears after the list below).

What it comes down to is that an hour a week or every other week is not going to be enough to get this moving in the near future. Plover development has been funded entirely out of my pocket so far, to the tune of nearly $2,000. I can only afford to pay about $300 a month, which has kept development at a relative snail's pace. My tutor is excited about the project and would like to put more time into it, but he's got a business to run just like I do, and can't afford to work at a loss. It looks like our options are:
* Keep developing Plover at the same rate we've been moving, about three hours per month.
* Find more funding from an external source, either from individual donations, a FLOSS grant, or some kind of seed money from a person or organization interested in contributing to the project. The trouble there is that Plover is not something that can be monetized directly. The whole point is that it's free for anyone to download and/or modify. Donations might help a little, but its user base is currently pretty small, and not likely to get much bigger until it's capable of being used as a fully functioning keyboard emulator.
* Figure out ways to make money adjacent to Plover.
- My Python tutor is also a hardware hacker, so he's interested in trying to put together some kind of wearable steno keyboard with Plover built in, so it can just be plugged into a computer and work without any software configuration or Autokey-style workarounds. That can only be profitable if it's sufficiently cheap to make and there's sufficient demand from wearable geeks willing to pay $100 - $200 to triple their typing speed. Whether either of those conditions can be met is hard to predict.
- Alternately, we could work on making a standalone AAC device, using Plover and open source text-to-speech software. The advantage there is that AAC devices are often funded by health insurance or governmental agencies, and can cost thousands of dollars even for non-realtime speech synthesis. If we could act as a vendor to people with good fine motor control who are unable to speak, as outlined in my How To Speak With Your Fingers article, there might well be a bigger pool of money available and more motivated users willing to pay for and learn how to use such a thing.
- On the other hand, I could try to make money on the pedagogical side of things, offering Plover and the Steno 101 series (next installment coming soon!) for free, but charging for personal steno tutoring or classes or something of that sort, either in person or online. I'm certainly willing to put in the time, but I wonder how easy it will be to find people who want to take steno classes on software that's still in development, and how much they'll be willing to pay for the privilege.
- Another way to pay for development would be to keep funding Plover out of my own pocket until it hits keyboard emulation, then try to build it into a video game hosted on the web. If that takes off, we can fund development via ads and donation requests on the website. (A Tetris/Guitar Hero hybrid has been proposed, and its skeleton is already sketched out here, just waiting for code and graphics to bring it to life.) Again, lots of uncertainty, but I feel like the video game route is both the best way to learn steno and the best way to bring it to the attention of the general public. If we could get it working on a mobile/wearable system, all the better. But we can't even start on the video game until Plover is emulating qwerty output, and that's looking to be a considerable distance away.
* And, of course, I want to improve my own Python skills so that I can start contributing code rather than just money, enthusiasm, and steno expertise to the project. That's easier said than done, but I'm going to try to keep making headway in that direction. Currently Plover isn't in the best shape for collaboration, but it's definitely a future priority to structure and document it so that people can contribute code as well as or instead of money to help get it off the ground.
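To give a flavor of the interpreter-bridging problem mentioned above, here's a rough sketch (my own illustration, not Plover's actual code) of one way a Python 3 process could hand translated text to a Python 2 helper over a pipe. The "python2" interpreter name and the helper script are assumptions for the sake of the example:

    # Sketch: the Python 3 side streams translated text, one line at a time,
    # to a hypothetical Python 2.6 helper that could pass it to Autokey for
    # key emulation.
    import subprocess

    bridge = subprocess.Popen(
        ["python2", "autokey_bridge.py"],  # hypothetical helper script
        stdin=subprocess.PIPE,
    )

    def emit(translation):
        """Send one translated chunk across the version boundary."""
        bridge.stdin.write((translation + "\n").encode("utf-8"))
        bridge.stdin.flush()

    emit("the stoat sat at the top step")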
Any thoughts and suggestions are very welcome. As long as I have money to spare, Plover development will continue to go forward. For now, I think I'm going to keep spending that $300 a month; I'm going to work on making a dedicated Plover page with a FAQ, donation button, and links to relevant posts from the blog; I'm going to do a bit of outreach in the mobile/wearable and open source communities to see if I can figure out a hypothetical price for functioning plug-and-play steno hardware; and I'm going to keep blogging the Steno 101 series so that anyone who wants to teach themselves steno can do so, using the current not-quite-a-word-processor version of Plover. It's definitely a start, and we'll just have to see where it goes.
Monday, August 16, 2010
Update
I gave my first beginning steno lesson to the auction winner last week, using a new SideWinder and the latest version of Plover. She did remarkably well for only an hour's worth of steno training! I taught her how to write "the stoat sat at the top step" (old steno school habits die hard) and gave her some exercises on memorizing the vowels and diphthongs with S, T, P, and R. Hopefully this week we'll move on to some of the other consonants, and she'll be off and running. I'm also supposed to have another Python lesson tomorrow, in which we're hoping to get qwerty emulation operational in the Linux version of Plover using Autokey. I'll keep y'all posted.
Thursday, August 5, 2010
StenoKnight Blog?
Lately I feel like I'm bubbling over with things to say about my business, StenoKnight CART Services. I get some of my thoughts out on Twitter, but the character restriction is starting to feel really limiting. I have an articles page, which contains both things that I wrote for the Plover Blog and things that I wrote on various other forums over the last few years. But I kind of want to have a space for blog posts on my daily CART work, freelancing, promoting a business, time management, Deaf/HoH issues, and all that sort of thing. The Plover blog doesn't seem to be an appropriate place for it, so I'm toying with the idea of creating a StenoKnight-specific blog. But, on the other hand, I've had the StenoKnight Facebook page for a while now and it's been largely neglected. Maybe that's just because I don't really enjoy the Facebook interface, but I'm worried that starting another blog will quickly feel like a burden and dilute my professional web presence even more. So maybe I should just write the things I want to write offline, polish them up, and post them on my articles page rather than having a dedicated top-of-the-head blogging space. I don't know. I know this isn't Plover-related, but since a fair number of this blog's readers are aspiring court reporters or CART providers, I guess I'm wondering which you guys would rather read: polished articles on various topics that are published pretty infrequently on my website, or a new space for less carefully edited but more frequent posts on running a CART business in New York City?
Monday, August 2, 2010
Steno Versus Qwerty Versus Automatic SR
First of all, have some lovely Egyptian Plovers from the Bronx Zoo, taken by Abigail on a recent photo jaunt.
Thanks, Abigail!
Second, a low-cost wearable computer rig that'll make you look significantly less dorky than your average gargoyle.
I'm a little concerned about overheating brought on by lack of ventilation, but even so it's a good start. This guy uses a foldable qwerty keyboard. You'll remember from my article on mobile and wearable computing that I think steno is just dying to be turned into an efficient wearable text input system. I think that two flat multitouch panels with keypads for haptic feedback tied snugly around each thigh and connected to the rest of the rig via Bluetooth would be ideal. Still not sure how feasible it would be to manufacture, but as Plover inches ever closer to full keyboard emulation (we're a session or two away from getting it working in Linux, and then hopefully we'll be able to port that bit into the Windows version without much trouble), I'd love to try it out in a mobile context, even if that means sawing open a SideWinder, cutting off the steno-irrelevant sections, snapping the circuitboard in half, and rewiring everything so that it can be attached to a wearable substrate like jeans, a hoodie, or a small bag.
Third, I'm going to check out the exhibit hall of SpeechTek2010 tomorrow. I'll let you know what I think, but I doubt I'll be out of a job just yet. And, anyway, even positing true AI and human-equivalent automated speech recognition, there are many contexts in which text input via fingers is more private and more convenient than dictating, which means that steno will always have a place in the game -- assuming I can get enough people aware of its many benefits.
Speaking of automated speech recognition, I made a short video the other day, and YouTube's autocaption transcript just kicked in today, so I decided to post it.
I stitched together several snippets of timed audio, dictated at speeds between 40 and 185 words per minute, and made screencasts of myself transcribing them first using the qwerty keyboard (top portion) and then the steno keyboard (bottom portion). First, notice how much more work I was doing on the qwerty keyboard than on the steno keyboard. In What Is Steno Good For: Raw Speed, I discuss all the wasted energy and effort required to type out each individual letter, backspacing and retyping if I inadvertently type them in the wrong order, and the inherent limitations on speed imposed by this method. My fingers are moving much more quickly and working much harder in the qwerty window than in the steno window. As you can see by the end of the video, I was losing huge chunks of the audio, my accuracy was dismal, and I was positively sweating bullets between 160 and 185 WPM in the qwerty window. When I got to use my steno machine, 185 felt like a stroll in the daisies. I usually don't start feeling like my fingers are really going full blast until I get up to 240 or more on my steno machine, which this video doesn't show; I chose to stop at the point where my qwerty typing completely broke down. The difference in terms of ease, comfort, and ergonomics is hard to miss.
If you want to see how YouTube's automated speech recognition did at the same task, turn the captions on, then select "transcribe audio". You'll notice that even though the audio is crystal clear and each word is perfectly enunciated, it makes so many mistakes that it's almost impossible to understand what's being said by reading the autocaptions alone. Later on the quality improves a bit, though there are still significant errors, because the 185 WPM section of the transcript is a judge's jury charge, which contains lots of boilerplate big words such as "testimony" and "circumstances". Speech recognition has a much easier time with multisyllabic words, especially in dictation where there's a clear space inserted between each word (as in these samples, since they're intended for dictation, though they sound quite unnatural to most ears), because there are comparatively fewer possibilities to rule out than in words of one or two syllables, which automated SR always has a great deal of trouble with. The problem is that small words are often the most important words in the sentence (especially if they're negatory words such as "no" or "not"), and you can see, if you look at the autocaptions, that even making an error in 1 word out of 20 (a 95% accuracy rate) can cause great confusion.
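To put that error rate in concrete terms, here's a quick back-of-the-envelope calculation (my own illustration):

    # How many misrecognized words per minute a given word-level accuracy
    # produces at various dictation speeds.
    def errors_per_minute(wpm, accuracy):
        return wpm * (1 - accuracy)

    for wpm in (40, 160, 185):
        print("%d WPM at 95%% accuracy: about %.0f errors per minute"
              % (wpm, errors_per_minute(wpm, 0.95)))
    # 185 WPM works out to roughly nine misrecognized words every minute.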
The most successful and least error-ridden passage in the transcription was the phrase "the convergence of resources and bolstering of partnerships to support a coherent program for science and mathematics for all students". Because there were so many multisyllabic words in that passage, the autocaptioning system got them all right except for the phrase "for all", which it translated as "football". This is an error that a human transcriber would never make, because "football" and "for all" are stressed entirely differently in English, and syntactically it makes no sense to insert the word "football" between "mathematics" and "students". But you can see how that one tiny error brings the rest of the sentence into doubt, and can steer a Deaf or hard of hearing person off on entirely the wrong train of thought. It's this complete lack of ability to understand speech in context and to correct errors semantically that makes automatic speech recognition ultimately unreliable without extensive babysitting by human editors.