Martin Geddes

Did we attend the same meeting?

The single biggest problem in communication is the illusion that it has taken place.
— George Bernard Shaw

We have observed in a previous article that Hypervoice technology helps us to actively listen by removing the burden of memorising what is being said. There is a little sting in the tail of this benefit, however. Once we start to become active listeners, we are forced to confront an unwelcome truth: we are not, in general, very good at it.

The anxiety of memory goes away, only to be replaced by the fear of not being understood. Once you understand in your bones how hard it is to listen, you realise that your words are just as subject to misinterpretation as theirs were by you. It is easy to assume that shared understanding has been achieved when in reality it has not.

Indeed, the situation is even worse than it first seems. Your natural and unchecked assumption is that you are not only a good listener, but the best listener in the room! We believe this because, until now, it has been impossible to listen as someone else. But just as genetic studies have revealed that people taste food differently—salty is not salty to everyone—using Hypervoice has revealed that people don't listen the same way.

In particular, group note-taking reveals what people are truly keying into. It doesn't take long for the feedback loop to shake you awake. People are hearing different things! You are hearing agreement, while a colleague is hearing hesitation. Who is right? After enough sessions, your high opinion of your own listening will be exposed for the unexamined, embedded assumption that it is.

Although inline language translation is coming fast, the need to help native speakers understand one another is barely recognised as the profound communication challenge it is. The next frontier of computer-assisted communications is to help us comprehend each other at the cognitive level, not merely at the audio or syntactical level.

Sensors will capture all that we say, along with our facial microexpressions, head movements, and body gestures. These will be “translated” for different individuals, cultures and physical environments, and re-presented at the other end to maximise cognitive understanding. One early example is this project working on a non-verbal, multicultural translator.

In many ways, we will be having “different” meetings at the same time. Only now Hypervoice technologies will be working with our differences, not against them.