From Your Mouth to Your Screen, Transcribing Takes the Next Step

The Rev system allows the customer to choose whether they want more accuracy or a quicker turnaround at lower cost, said Jason Chicola, the company’s founder and chief executive. Increasingly, his customers will correct machine-generated texts rather than transcribing from scratch. He said that while Rev had 40,000 human transcribers, he did not believe that automated transcription would decimate his work force. “Humans and machines will work together for the foreseeable future,” he said.

Nevertheless, speech technologies are having an undeniable impact on the structure of corporations.

“We have chatbots that are running live in production, and they are deflecting a lot of service cases,” said Richard Socher, the chief scientist at Salesforce, a cloud-based software company. “In large service organizations, with thousands of people, if you can automate 5 percent of password reset requests, it’s a big impact on that organization.”

In the medical field, automated transcription is being used to change the way doctors take notes. In recent years, electronic health record systems became part of a routine office visit, and doctors were criticized for looking at their screens and typing rather than maintaining eye contact with patients. Now, several health start-ups are offering transcription services that capture text and potentially video in the examining room and use a remote human transcriber, or scribe, to edit the automated text and produce a “structured” set of notes from the patient visit.

One of the companies, Robin Healthcare, based in Berkeley, Calif., records office visits with an automated speech transcription system; the resulting transcript is then annotated by a staff of human “scribes” who work in the United States, according to Noah Auerhahn, the company’s chief executive. Most of the scribes are pre-med students who listen to the doctor’s conversation, then produce a finished record within two hours of the patient’s visit. The Robin Healthcare system is being used at the University of California, San Francisco, and at Duke University.

A competitor, DeepScribe, also based in Berkeley, takes a more automated approach to generating electronic health records. The firm uses several speech engines from large technology companies like Google and IBM to transcribe the conversation, then creates a summary of the examination that is checked by a human. By relying more on speech automation, DeepScribe is able to offer a less expensive service, said Akilesh Bapu, the company’s chief executive.

In the past, human speech transcription was largely limited to the legal and medical fields. This year, the cost of automated transcription has collapsed as rival start-up firms have competed for a rapidly growing market. Companies such as Otter.ai and Descript, a rival San Francisco-based start-up created by the Groupon founder Andrew Mason, are giving away basic transcription services and focusing on charging for subscriptions that offer enhanced features.

