Yesterday I watched a bit of the Apple announcement and remembered a project I was working on in my last year of college. Nowadays not just VR/AR but smartphones and other devices can listen to spoken commands and accurately recognize the words, but in 2009 speech recognition was not good, especially for non-native English speakers or people with non-standard accents.
Visually impaired users had a couple of options: mobile devices with physical keyboards (but BlackBerry was going the other way, toward all-screen devices) or alternative input methods. I was developing a braille-based software keyboard for touchscreen devices, but of course there were many approaches. Having the braille layout on the screen and tapping on it as if it were a chorded keyboard was one way.
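If that sounds abstract, here's a tiny sketch of the chorded idea (written today for illustration, not code from any of those projects): each braille cell is six dots, a tap chord is the set of dots touched at once, and that set maps to a character. The table below only covers a few letters.

```python
# Illustrative sketch of the chorded-tap approach, not original project code.
# Each braille cell has six dots; a chord is the set of dots touched at once.
BRAILLE_TO_CHAR = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
    # ...a real table would cover the full alphabet, numbers and punctuation
}

def decode_chord(touched_dots):
    """Map the set of simultaneously touched dots (numbered 1-6) to a character."""
    return BRAILLE_TO_CHAR.get(frozenset(touched_dots), "?")

print(decode_chord({1, 2}))  # -> "b"
```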
My approach was to use the standard analog way of writing braille (imitating the movements one would make with a slate and stylus), but reimagined as a Graffiti-like motion on the screen (those with Palm PDAs will remember). There were two major caveats: you needed to know how to write every braille character the analog way in order to reproduce the motion on the screen of the device, and you needed to remember the state of the keyboard app (or be given a constant audible reminder that the input method was active) in order to switch between input operation and app operation (like switching modes in Vim).
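To make that second caveat concrete, here's a minimal sketch (again written today, not the original code) of the modal part: the keyboard tracks whether a gesture is text input or an app command and speaks the state out loud so you never have to guess. The announce function is just a stand-in for whatever text-to-speech the platform would offer.

```python
from enum import Enum

class Mode(Enum):
    INPUT = "input"      # gestures become braille characters
    COMMAND = "command"  # gestures operate the app (open, send, delete...)

def announce(message: str) -> None:
    """Stand-in for a platform text-to-speech call."""
    print(f"[spoken] {message}")

class GestureKeyboardState:
    def __init__(self) -> None:
        self.mode = Mode.INPUT

    def toggle_mode(self) -> None:
        self.mode = Mode.COMMAND if self.mode is Mode.INPUT else Mode.INPUT
        announce(f"{self.mode.value} mode")  # the constant audible reminder

    def handle_gesture(self, gesture: str) -> None:
        if self.mode is Mode.INPUT:
            announce(f"typed {gesture}")
        else:
            announce(f"command {gesture}")

state = GestureKeyboardState()
state.handle_gesture("d")     # [spoken] typed d
state.toggle_mode()           # [spoken] command mode
state.handle_gesture("send")  # [spoken] command send
```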
Eventually I tried to pivot from a general keyboard app to an email-only client that could be operated by ear (as in, you didn't need to look at the screen), but that reduced the utility of my project. As an exploration of alternative input methods, I think it was an interesting problem to work on, but the advent of better speech recognition and many web accessibility features made my project obsolete, so I canned the thing. I'm glad the technology got to a level where my workaround was no longer necessary.