"I did not understand you" was yesterday


A new test procedure developed by Deutsche Telekom Laboratories is to finally teach voice recognition computers some manners.

When computers talk to people, communication chaos occasionally ensues. Anyone who has come across a speech dialog system on a customer hotline has a tale to tell about it. Researchers at Deutsche Telekom Laboratories at the Technical University of Berlin have now developed a method for testing and training speech dialog systems that will make people love them, not hate them.

Breaking the deadlock

Everyone has been there: You call your insurer, bank, or telephone company and are invited by the friendly voice of a computer to explain your business. Sometimes the computer doesn't understand you, or the conversation takes longer than necessary because you don't know which answers the system expects. In the worst case, you get stuck in a deadlock because the course of the dialog was insufficiently tested.

Automatic learning process

That is all set to change. In the "SpeechEval" project, researchers at the Quality and Usability Lab (QU Lab) of Deutsche Telekom Laboratories at the Technical University of Berlin and the German Research Center for Artificial Intelligence (DFKI) have developed a process for automatically testing speech dialog systems. Over the last two and a half years, the researchers, led by Professor Sebastian Möller, have analyzed the behavior of real users based on data from well over 100 different systems and used it to train automatic learning processes.

That saves money ...

The product of this work is the "SpeechEval" workbench. Like a simulation lab, it can generate as many realistic dialogs as desired in order to test systems extensively and make them more user-friendly while still in development. The automatically generated dialogs can alert developers to inadequacies, such as prompts that are too long or too limited a vocabulary in the speech recognizer. The technology behind "SpeechEval" is itself essentially a dialog system: the user simulation is given a task to carry out, for instance checking a bank balance. The simulation identifies the statements of the dialog system under test and generates an answer based on the learned behavior of real users.

... and leaves customers satisfied

"The findings we have compiled in 'SpeechEval' not only help industry to save money in developing telephone customer portals, they also help to better meet the needs of the users of these systems and to put more customer-friendly systems in place," explained project manager Sebastian Möller. "The next step will be to transfer these findings, for instance, to mobile voice control, which we will increasingly see on smartphones." The project will be sponsored in part by the European Regional Development Fund (ERDF) through Investitionsbank Berlin (IBB).
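To illustrate the idea of a user simulation that answers a dialog system based on learned behavior, here is a minimal sketch in Python. All names, intents, and the toy behavior model are hypothetical: the article does not describe SpeechEval's actual interfaces, so this only shows the general pattern of classifying a system prompt and sampling a user reply from observed answer frequencies.

```python
import random

# Toy "learned" user behavior: for each recognized type of system prompt,
# candidate user answers with frequencies observed from real users.
# (Hypothetical data, not from the SpeechEval project.)
USER_MODEL = {
    "greeting": [("check my account balance", 0.8), ("agent, please", 0.2)],
    "ask_account_number": [("one two three four", 0.9), ("I don't know", 0.1)],
    "confirm": [("yes", 0.7), ("no", 0.3)],
}

def classify_prompt(system_utterance: str) -> str:
    """Crude keyword-based detection of the tested system's prompt type."""
    text = system_utterance.lower()
    if "account number" in text:
        return "ask_account_number"
    if "is that correct" in text or "confirm" in text:
        return "confirm"
    return "greeting"

def simulate_turn(system_utterance: str, rng: random.Random) -> str:
    """Sample a user answer according to the learned behavior distribution."""
    intent = classify_prompt(system_utterance)
    answers, weights = zip(*USER_MODEL[intent])
    return rng.choices(answers, weights=weights, k=1)[0]

if __name__ == "__main__":
    rng = random.Random(0)
    print(simulate_turn("Welcome! How can I help you?", rng))
    print(simulate_turn("Please say your account number.", rng))
```

Running such a simulation against a dialog system many times, with varied tasks and sampled answers, is what lets a workbench of this kind expose overly long prompts or gaps in the recognizer's vocabulary before real customers ever hit them.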