Talking Turkey: Where Does Speech Make Sense?
Sunday, 23 January 2011 | Just prior to AVIOS Mobile Voice Conference 2011 | 1 – 7 PM
San Jose, California
Join us for the 2011 workshop on voice interaction design, Talking Turkey: Where Does Speech Make Sense? The AVIxD workshop is a hands-on session in which voice user interface practitioners come together to debate a topic of interest to the VUI community. The workshop is an opportunity to meet with your peers and delve deeply into a single topic. As in 2010, we will be publishing our papers on www.avixd.org. Please visit our website for more details on the purpose of the organization and how you can be part of it.
To be considered for the workshop, individuals must submit a position paper of approximately 500 words on this workshop’s topic:
During a recent AVIxD workshop, one of our esteemed colleagues asked a fundamental question of the group: Where does speech make sense? The question is not simple to answer. We have seen touchtone prove itself preferable to speech in certain IVR contexts, and we have seen people happily dictate documents but then grow frustrated when editing the same documents by voice. Speech in the car seems to make perfect sense, but how should it be implemented in that environment so that it does not present safety risks?
The era of “irrational exuberance” around speech technology has passed, and fewer voices now suggest that speech be employed everywhere, replacing all other UI modes (keyboards, mice, buttons, etc.). With our often painfully learned lessons, we designers are now better able to take a critical look and ask: How do we really take hold of the speech modality and use it where it makes the most sense? How do we look beyond blanket replacements and apply speech strategically and tactically within a variety of user experiences?
These questions lead us to obvious follow-ups. For example:
- Where has speech been successful/unsuccessful so far? In what cases were those expected/unexpected?
- What areas of human-computer interaction might be intriguing new frontiers for speech?
- If one is going to promote speech at one conversational turn within a system or device (i.e., “where it makes sense”) but not the next, will that switch in modality be experienced as inconsistent and might it trip up the user? Or will the user respond intuitively, almost as if they already knew when to try each modality? What makes the difference?
- As designers, do we have a responsibility to identify optimal applications of speech interaction, avoiding the temptation to sell speech “everywhere”? Or do we employ speech “everywhere” and let users tell us where it’s working for them?
As usual, specific examples of applications you’ve worked on are always most valuable to the UX community.
Participation is free for AVIxD members; non-members will be charged $40, though the fee may be applied toward AVIxD membership at the workshop. Please submit your papers via email no later than Tuesday, December 10, 2010 to firstname.lastname@example.org. Letters of acceptance will be sent by Tuesday, January 4, 2011.
We look forward to seeing you in San Jose! Contact either of the co-chairs with questions: Jonathan Bloom (email@example.com) or Phillip Hunter (firstname.lastname@example.org).