Thursday, September 7, 2017

Ears or eyes

I find it interesting that humans developed speech, and listening to speech, long before inventing writing and reading.  So the ears handled communication in a special way, a way that delivered detail and heavy loads of information far earlier than the eyes did.  I realize that hearing is omnidirectional: we can detect and use sound, and therefore talk, regardless of the direction of its source relative to us.  If you speak beside me, or in front of me, or behind me, I can still hear you and grasp what you are saying.  With writing, I need to be able to see what is written, so writing that is behind me, out of sight, or too small can't be decoded and absorbed.


I am confident that a screen, and probably the letters on it, can be made so bright or so dim that I can't read them.  Similarly, talk can be too loud for me to stand, or too soft for me to grasp.  In psychophysics, we learned that all our senses have comfort zones, and outside those zones a signal is too strong to tolerate or too weak to detect.


You probably know that the electromagnetic spectrum is much broader than the narrow band of visible light our eyes can detect.  Some creatures can see types of light that we cannot.  The novel "All the Light We Cannot See" is basically about radio broadcasting and communication during World War II.  Sound, that is, vibration, can also occur outside the range that humans can detect.  In the same psychophysics class, I read that humans can hear from 20 to 20,000 vibrations per second, although even then, when I was much younger, I personally could not hear anything at either end of that range.


It is also interesting that babies develop hearing in the womb and are said to be sensitive to sound by about 24 weeks, roughly halfway to being born.  The exciting book "The Universal Sense" by Seth Horowitz says that some creatures on Earth have no way of detecting light and are completely blind, but that no creature is completely insensitive to sound.


This topic arose from Alexa and Google Home.  There are also Cortana and the original voice assistant, Siri, on Apple products.  I was taken with the notion of speaking and having what I say typed out, but the couple of times I tried it, with Dragon NaturallySpeaking software, it did not work well.  I suspect that I am habituated to composing on a keyboard but not so much to composing by talking.  I think that, in general, it is easier for software to mishear than it is to mistype.
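
For anyone curious what dictation by software looks like today, here is a rough sketch using the open-source Python SpeechRecognition package and its free Google web recognizer.  This is not the NaturallySpeaking program I tried, just one common, freely available setup.

    # A minimal dictation sketch: record one utterance from the microphone
    # and print the recognizer's best guess at what was said.
    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:              # needs the PyAudio package
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)        # record until a pause

    try:
        print(recognizer.recognize_google(audio))  # free web recognizer
    except sr.UnknownValueError:
        print("The software misheard: it could not make out the words.")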


It was a big change when getting a computer to do something shifted from typing a command such as "Save" to clicking on an icon of a disk.  It is another change to endow devices with the ability to hear speech and react to it.  However, I just read today that some Chinese researchers tried "speaking" to a smartphone using frequencies so high that a human could not detect them.  The devices reacted as though the commands had been spoken at ordinary, audible levels.  Such commands could be used to make the devices misbehave.
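
One simple way to push a recorded command above the range of human hearing is to modulate it onto an ultrasonic carrier.  I can't say this is exactly what the researchers did, but the little sketch below, using NumPy and SciPy with made-up file names, shows the general idea.

    # Hypothetical illustration: amplitude-modulate a spoken command onto
    # a 25 kHz carrier, above the roughly 20 kHz ceiling of human hearing.
    # Assumes a mono recording at 96 kHz so the carrier stays under the
    # Nyquist limit; "command.wav" and "ultrasonic.wav" are made-up names.
    import numpy as np
    from scipy.io import wavfile

    rate, command = wavfile.read("command.wav")
    command = command.astype(np.float64)
    command /= np.max(np.abs(command))           # scale samples to [-1, 1]

    carrier_hz = 25_000                          # inaudible to people
    t = np.arange(len(command)) / rate
    carrier = np.cos(2 * np.pi * carrier_hz * t)

    modulated = (0.5 + 0.5 * command) * carrier  # simple amplitude modulation
    wavfile.write("ultrasonic.wav", rate, (modulated * 32767).astype(np.int16))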
