Vote to Greenlight Steno Arcade on Steam Here!
Anyone who's played video games on their PC no doubt knows about Steam, the comprehensive platform that supports a dizzying array of both commercial and indie games. Since I'm currently on winter break, I've spent a ton of time this week with my Steam Link and Steam Controller, playing Undertale and Botanicula on my six-foot-wide projector screen, and I love the idea of being able to have Steno Arcade parties on it someday as well. Just add steno machine!
For All To Play, the studio that's been developing Steno Arcade, was able to get Grail To The Thief, their previous screen reader accessible game, up on Steam, and now we're trying again with the Steno Hero demo via the Steam Greenlight system. If you have a Steam account or know anyone who does, please vote for us! It would be amazing to have such a massive platform to help get the word out about gamified steno.
Vote Here!
Wednesday, December 23, 2015
Tuesday, December 22, 2015
Steno Arcade Demo Is Here!
First: If you want to be notified about the launch of the Steno Arcade crowdfunding campaign, enter your email address here!
Second: The demo is complete! We currently offer four Jonathan Coulton songs, with permission:
I'm Your Moon
Mandelbrot Set
That Spells DNA
I Feel Fantastic
In the full game we'll offer many more Creative Commons-licensed songs, as well as a song editor to allow you to make levels out of your own music library! We'll also make the graphics a little bit splashier (and screen reader accessible!), and of course, the more money we raise, the more games we're going to be able to develop. We're hoping to launch the campaign in early January, so stay tuned!
Download the Demo Here!
(Demo link is Windows only, but I believe it will be cross-platform via Steam very soon.)
And one more time: If you want to be notified about the launch of the Steno Arcade crowdfunding campaign, enter your email address here!
Monday, December 21, 2015
Learn Plover in Paperback
Learn Plover, Zachary Brown's online steno textbook, will always be available on the website for free. But several people requested a print version, so it's now available for purchase!
Buy Learn Plover in paperback from Amazon here. It will also be available as a Kindle ebook in the near future.
He's also been working on a new Velotype-style orthographic chording system, called Kinglet, which he thinks might have an easier learning curve than Plover. It's not yet compatible with any software, so he doesn't have videos of it in action, but it should be pretty simple to implement. If this sounds like your kind of text entry system, go check it out!
Wednesday, December 16, 2015
Stenosaurus Progress Shots!
Over on The Stenosaurus Blog, things are happening!
Josh says:
I'm carving out a lot of the next two weeks to work on this, so there should be more news soon.
Check out these pictures of current Stenosaurus components!
So excited to see what's coming next!
Monday, December 14, 2015
Introducing Aloft From Stan Sakai and The Open Steno Project!
I am thrilled to pieces to introduce readers of The Plover Blog to the Open Steno Project's newest software application: Aloft, a collaborative realtime streaming app and automatic transcript repository. The amazing Stan Sakai, who taught himself steno and got up to professional speeds in less than a year, recently taught himself how to code and has created a realtime streaming app that blows every other option out of the water.
Aloft's repository on Github
Crossposted from Stan's post on The Stanographer:
First off, apologies for the long radio silence. It’s been far too long since I’ve made any updates! But I just had to share a recent project that I’m currently probably the most excited about.
[ Scroll down for TL;DR ]
To begin, a little background. For the past several months, I’ve been captioning an intensive web design class at General Assembly, a coding academy in New York City. Our class in particular utilized multiple forms of accommodation with four students using realtime captioning or sign interpreters depending on the context, as well as one student who used screen sharing to magnify and better view content projected in the classroom. A big shout-out to GA for stepping up and making a11y a priority, no questions asked.
On the realtime captioning front, it initially proved to be somewhat of a logistical challenge mostly because the system my colleague and I were using to deliver captions is commercial deposition software, designed primarily with judicial reporting in mind. But the system proved to be less than ideal for this specific context for a number of reasons.
Commercial realtime viewing software either fails to address the needs of deaf and hard of hearing individuals who rely on realtime transcriptions for communication access or addresses them half-assedly as an afterthought.
The UI is still clunky and appears antiquated. Limited in its ability to change font sizes, line spacing, and colors, it makes it unnecessarily difficult to access options that are frequently crucial when working with populations with diverse sensory needs. Options are sequestered behind tiny menus and buttons that are hard to hit on a tablet. One of the most glaring issues was the inability to scroll with a flick of the finger. Having not been updated since the widespread popularization of touch interfaces, there is practically no optimization for this format. To scroll, the user must drag the tiny scroll bar on the far edge of the screen with high precision. Menus and buttons clutter the screen and take up too much space and everything just looks ugly.
Though the software supports sending text via the Internet or locally via Wi-Fi, most institutional Wi-Fi is either not consistently strong enough, or too encumbered by security restrictions to send captions reliably to external devices.
Essentially, unless the captioner brought his or her own portable router, text would be unacceptably slow or the connection would drop. Additionally, unless the captioner either has access to an available ethernet port into which to plug said router or has a hotspot with a cellular subscription, this could mean the captioner is without Internet during the entire job.
Connection drops are handled ungracefully. Say I were to briefly switch to the room Wi-Fi to google a term or check my email. When I switch back to my router and start writing, only about half of the four tablets usually survive and continue receiving text. The connection is very fragile, so you pretty much have to set it and leave it alone.
Makers of both steno translation software and realtime viewing software alike still bake in lag time between when a stroke is hit by the stenographer and when it shows up as translated text.
A topic on which Mirabai has weighed in extensively: most modern commercial steno software runs on time-based translation (i.e. translated text is sent out only after a timer of several milliseconds to a second runs out). This is less than ideal for a deaf or hard of hearing person relying on captions, as it creates a somewhat awkward delay, one that a hearing person merely watching a transcript for confirmation, as in a legal setting, would not notice.
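To put a number on that complaint, here's a toy simulation (not any vendor's actual code; the 500 ms hold timer and stroke timings are made-up figures) of what a hold-and-release timer does to display latency:

```javascript
// Illustrative only: simulates the latency a hold-and-release timer adds.
// Strokes arrive at the given times (ms); time-based software only emits a
// translation once `holdMs` have elapsed with no further strokes.
function emissionTimes(strokeTimes, holdMs) {
  return strokeTimes.map((t, i) => {
    const next = strokeTimes[i + 1];
    // A stroke is held until the timer expires; if a later stroke arrives
    // first, its text is folded into that later emission (shown as null).
    return (next !== undefined && next - t < holdMs) ? null : t + holdMs;
  });
}

// A writer stroking every 150 ms with a 500 ms hold timer: nothing appears
// on screen until 500 ms after the *last* stroke of the burst.
console.log(emissionTimes([0, 150, 300], 500)); // [ null, null, 800 ]
console.log(emissionTimes([0], 500));           // [ 500 ]
```

Even a lone stroke sits invisible for the full timer, which is exactly the lag a caption reader feels.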
Subscription-based captioning solutions that send realtime over the Internet are ugly and add an additional layer of lag built-in due to their reliance on Ajax requests that ping for new text on a timed interval basis.
Rather than utilizing truly realtime technologies, which push out changes to all clients immediately as the server receives them, most subscription-based captioning services rely on the outdated tactic of burdening the client machine with repeatedly pinging the server to check for changes in the realtime transcript as it is written. Though not obviously detrimental to performance, the obstinate culture of “not fixing what ain’t broke” continues to prevail in stenographic technology. Additionally, the commercial equivalents to Aloft are cluttered with too many on-screen options without a way to hide the controls and, again, everything just looks clunky and outdated.
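The difference is easy to sketch. This toy example (purely illustrative; no real captioning service works exactly like this) contrasts push delivery with interval polling:

```javascript
// Push model: every subscriber is notified the instant text arrives.
class PushFeed {
  constructor() { this.subscribers = []; }
  subscribe(fn) { this.subscribers.push(fn); }
  publish(text) { this.subscribers.forEach(fn => fn(text)); } // immediate
}

// Polling model: text arriving at `arrivalMs` is invisible until the
// client's next scheduled check, every `intervalMs`.
function pollingDelay(arrivalMs, intervalMs) {
  const nextPoll = Math.ceil(arrivalMs / intervalMs) * intervalMs;
  return nextPoll - arrivalMs;
}

const feed = new PushFeed();
const received = [];
feed.subscribe(t => received.push(t));
feed.publish('HELLO WORLD');           // delivered immediately
console.log(received);                 // [ 'HELLO WORLD' ]
console.log(pollingDelay(2100, 3000)); // 900 — ms wasted waiting on a 3 s poller
```

That worst-case polling gap is pure, avoidable lag stacked on top of everything else in the pipeline.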
Proprietary captioning solutions do not allow for collaborative captioning.
At Kickstarter, I started providing transcriptions of company meetings using a combo of Plover and Google Docs. It was especially helpful to have subject matter experts (other employees) be able to correct things in the transcript and add speaker identifications for people I hadn’t met yet. More crucially, I felt an overall higher sense of integration with the company and the people working there as more and more people got involved, lending a hand in the effort. But Google Docs is not designed for captioning. At times, it would freeze up and disable editing when text came in too quickly. Also, the user would have to constantly scroll manually to see the most recently-added text.
With all these frustrations in mind, and with the guidance of the GA instructors in the class, I set off on building my own solution to these problems with the realtime captioning use case specifically in mind. I wanted a platform that would display text instantaneously with virtually no lag, was OS and device agnostic, touch interface compatible, extremely stable in that it had a rock solid connection that could handle disconnects and drops on both the client and provider side without dying a miserable death, and last but not least, delivered everything on a clean, intuitive interface to boot.
How I Went About It
While expressing my frustrations during the class, one of the instructors informed me of the fairly straightforward nature of solving the problem, using the technologies the class had been covering all semester. After that discussion, I was determined to set out to create my own realtime text delivery solution, hoping some of what I had been transcribing every day for two months had actually stuck.
Well, it turned out not much did. I spent almost a full week learning the basics the rest of the class had covered weeks ago, re-watching videos I had captioned, putting together little bits of code before trying to pull them together into something larger. I originally tried to use a library called CodeMirror, which is essentially a web-based collaborative editing program, using Socket.io as its realtime interface. It worked at first, but I found it incapable of handling the volume and speed of text produced by a stenographer, constantly crashing when I transcribed faster than 200 WPM. I later discovered another potential problem: Socket.io doesn’t guarantee order of delivery. In other words, if certain data were received out of order, the discrepancy between what was sent to the server versus what a client received would cause the program to freak out. There was no logic that would prioritize and handle concurrent changes.
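For anyone curious what handling out-of-order delivery involves, here's a toy sketch (not Aloft's actual code) of the kind of sequence-number buffering that was missing: updates that arrive early are held back until the gap before them is filled.

```javascript
// Illustrative only: tag each update with a sequence number and buffer
// anything that arrives early until every earlier update has been applied.
class OrderedReceiver {
  constructor(apply) {
    this.apply = apply;        // callback invoked with updates, in order
    this.expected = 0;         // next sequence number we can apply
    this.pending = new Map();  // early arrivals, keyed by sequence number
  }
  receive(seq, update) {
    this.pending.set(seq, update);
    // Drain the buffer as long as the next expected update is available.
    while (this.pending.has(this.expected)) {
      this.apply(this.pending.get(this.expected));
      this.pending.delete(this.expected);
      this.expected++;
    }
  }
}

const out = [];
const rx = new OrderedReceiver(u => out.push(u));
rx.receive(1, 'world'); // arrives early: buffered, nothing applied yet
rx.receive(0, 'hello'); // fills the gap: both apply, in order
console.log(out); // [ 'hello', 'world' ]
```

Operational-transform libraries like the one Aloft ended up using handle this (and true concurrent edits) far more thoroughly, but this is the basic idea.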
I showed Mirabai my initial working prototype and she was instantly all for it. After much fuss around naming, Mirabai and I settled on Aloft as it’s easy to spell, easy to pronounce, and maintains the characteristic avian theme of Open Steno Project.
I decided to build Aloft for the web so that any device with a modern browser could run it without complications. The core server is written in Node. I used Express and EJS to handle routing and layouts, and jQuery for the dynamic front-end bits.
Screenshot of server code
I incorporated the ShareJS library to handle realtime communication, using browserchannel as its WebSockets-like interface. Additionally, I wrapped ShareJS with Primus for more robust handling of disconnects and dissemination of updated content if/when a dropped machine comes back online. Transcript data is stored in a Mongo database via a wrapper, livedb-mongo, which allows ShareJS to easily store the documents as JSON objects into Mongo collections. On the front end, I used Bootstrap as the primary framework with the Flat UI theme. Aloft is currently deployed on DigitalOcean.
Current Features
- Fast AF™ text delivery that is as close to realtime as possible given your connection speed.
- OS agnostic! Runs on Mac, Windows, Android, iOS, Linux, anything you can run a modern Internet browser on.
- User login, which allows the captioner to create new events as well as view all past events by author and event title.
- Captioner can delete, modify, or reopen previous sessions, and view raw text with a click of a link or button.
- Option to make a session collaborative, if you want to let your viewers directly edit your transcription.
- Ability for the viewer to easily change font face, font size, invert colors, increase line spacing, save the transcription as .txt, and hide menu options.
- Easy toggle button to let viewers turn autoscrolling on and off, so they can review past text but quickly snap back down to what is currently being written.
- Ability to run Aloft either over the Internet or as a local instance for cases in which a reliable Internet connection is not available (daemonize using pm2 and access via your machine’s IP addy).
Aloft homepage appearance if you are a captioner
In the Works
- Plugin that allows captioners using commercial steno software, which typically outputs text to local TCP ports, to send text to Aloft without having to focus their cursor on the editing window in the browser. Right now, Aloft is mostly ideal for stenographers using Plover.
- Ability for users on commercial steno software to make changes in the transcript, with changes reflected instantly on Aloft.
- Ability to execute Vim-like commands in the Aloft editor window.
- Angularize front-end elements that are currently accomplished via somewhat clunky scripts.
- “Minimal Mode,” which would allow the captioner to send links for a completely stripped-down, nothing-but-the-text page that can be modified via parameters passed in the URL (e.g. aloft.nu/stanley/columbia?min&fg=white&bg=black&size=20 would render a page that contains text from job name columbia with 20px white text on a black background).
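Parsing those URL parameters is straightforward in Node or the browser. A quick sketch of how Minimal Mode options might be read (the URL shape comes from the example above; the default colors and size are my guesses, not settled Aloft behavior):

```javascript
// Hypothetical Minimal Mode option parsing. A bare `min` key (no value)
// turns the mode on; fg, bg, and size override assumed defaults.
function parseMinimalOptions(url) {
  const params = new URL(url).searchParams;
  return {
    min: params.has('min'),
    fg: params.get('fg') || 'black',               // assumed default
    bg: params.get('bg') || 'white',               // assumed default
    size: parseInt(params.get('size') || '16', 10), // px, assumed default
  };
}

const opts = parseMinimalOptions(
  'https://aloft.nu/stanley/columbia?min&fg=white&bg=black&size=20');
console.log(opts); // { min: true, fg: 'white', bg: 'black', size: 20 }
```

The nice part of this scheme is that a captioner can hand a viewer one link that fully encodes their preferred display settings.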
That’s all I have so far but for the short amount of time Aloft has been in existence, I’ve been extremely satisfied with it. I haven’t used my commercial steno software at all, in favor of using Aloft with Plover exclusively for about two weeks now. Mirabai has also begun to use it in her work. I’m confident that once I get the add-on working to get users of commercial stenography software on board, it’ll really take off.
Using Aloft on the Job on my Macbook Pro and iPad
TL;DR: Try it out for yourself: Aloft.nu: Simple Realtime Text Delivery
I was captioning a web development course when I realized how unsatisfied I was with every commercial realtime system currently available. I consulted the instructors and used what I had learned to build a custom solution designed to overcome all the things I hate about what’s already out there and called it Aloft because it maintains the avian theme of the Open Steno Project.
Special thanks to: Matt Huntington, Matthew Short, Kristyn Bryan, Greg Dunn, Mirabai Knight, Ted Morin
Tuesday, December 8, 2015
Sneak Peek At A New 3D-Printed Steno Machine
Yesterday, on The Plover Google Group, we got an exciting announcement out of nowhere: there's another low-cost steno writer on the horizon!
Scott writes:
I have been using a Stenoboard for a while now, and I feel like I'm reaching a point where I am better off getting rid of the somewhat-difficult clicky microswitches. Don't get me wrong, the Stenoboard is an amazing project and is affordable enough to use as a learning tool. I would probably not be around here right now if it hadn't been available to me. But it isn't up to par with what I would enjoy typing with longer-term. Despite milled aluminium keycaps and a wood case sounding delicious on the Stenosaurus, like Robert Fontaine says, "I'd prefer a less sexy plastic machine ... with less price and less wait ;)" So, I introduce to you all a less sexy plastic machine with less price and (perhaps a little) less wait, the SOFT/HRUF!
The Stenoboard's extremely shallow travel is an issue for me too, so this is very exciting. Scott was inspired by the ethos of the Humble Bundle, where people who have more money at their disposal choose to pay a bit more to help supplement the amount paid by people who can't afford to pay full price. Definitely a great complement to the principles of open source, and indeed, Scott says the SOFT/HRUF will be open source:
I started work on this project only three or four days ago with no idea how to do anything. Between then and now, I've learned how to model in OpenSCAD, use a 3D printer, (re)write keyboard firmware, and solder matrices, and I burned through nearly twenty iterations of keycap styles before finding a decent setup that could fit the entire keyboard on a single build plate. That work is just starting to pay off into something usable, so I figured I'd share a quick photo to show where it has come to and to gauge potential interest. Hopefully in the next couple of weeks I'll get the first version finished, get some keyboards picked up, and some pennies thrown at me to offset the cost of this printer. Of course, I'll be fully releasing the source so you all can build upon and improve it as soon as everything is in a usable state. I will also be setting up a storefront with a pay-what-you-will option like I mentioned in the title. The way it will work is that there is a minimum set price that is equivalent to the direct cost of the parts (with no charge for assembly), so you'll pay exactly what I pay to acquire the parts. But if you are feeling like you want to be wallet-friendly, you can choose any greater price you think is fair for my time and effort.
The recommended default price ends up being in the range of $120-$145, which is almost as affordable as the low to medium range of NKRO keyboards. Pretty impressive! I'm guessing that the machine's name is pronounced "Soft Love", but maybe it's pronounced "Soft Luff" or "Soft Hruf" or "Soft Hruv", or... I dunno! You'll have to ask Scott! There are already a bunch of questions and answers on the thread, so feel free to weigh in over there. Meanwhile, I'll be holding my breath and waiting to see what comes out of this ambitious and stylish attempt at a new ultra-accessible steno machine!