13.5 azspeech transcribe
The transcribe command will listen for an utterance from the computer microphone for up to 15 seconds and then transcribe it (convert to text) to standard output. The command can also be used to transcribe speech from an audio file (wav only).
$ ml transcribe azspeech -f <file.wav>
$ ml transcribe azspeech --file=<file.wav>
By default the audio input is from the computer’s microphone:
$ ml transcribe azspeech
The machine learning hub is useful for demonstrating capability of models as well as providing command line tools.
We can pipe the output on to other commands, for example to analyse the sentiment of the spoken words. Try saying happy days for the first command below and sad days for the second.
$ ml transcribe azspeech | ml sentiment aztext
0.96
$ ml transcribe azspeech | ml sentiment aztext
0.07
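Assuming the sentiment command prints a single score between 0 and 1 to standard output, a small shell filter can turn the score into a label. This is a sketch only: classify is a hypothetical helper, not part of MLHub, and the 0.5 cutoff is arbitrary.

```shell
# classify: read a sentiment score on stdin and print a label.
# Hypothetical helper; the 0.5 cutoff is an arbitrary choice.
classify() {
  awk '{ if ($1 >= 0.5) print "positive"; else print "negative" }'
}

# Intended use (requires the azspeech and aztext services):
#   ml transcribe azspeech | ml sentiment aztext | classify
echo 0.96 | classify   # positive
echo 0.07 | classify   # negative
```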
The transcribe command can take an audio (wav) file and transcribe it to standard output. For large audio files this will take extra time. Currently only wav files are supported through the command line (though the service also supports mp3, ogg, and flac).
$ wget https://github.com/realpython/python-speech-recognition/raw/master/audio_files/harvard.wav
$ ml transcribe azspeech --file=harvard.wav
The stale smell of old beer lingers it takes heat to bring out the odor. A cold dip restore's health and Zest, a salt pickle taste fine with Ham tacos, Al Pastore are my favorite a zestful food is the hot cross bun.
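Since only wav files are accepted on the command line, audio in other formats needs converting first. One possible approach is sketched below, assuming ffmpeg is installed; ensure_wav is a hypothetical helper, not part of MLHub, and the 16kHz mono settings are a common choice for speech input rather than a documented requirement.

```shell
# ensure_wav: print the name of a wav version of the given audio file,
# converting with ffmpeg if the file is not already a wav (RIFF) file.
# Hypothetical helper; assumes ffmpeg is available for the conversion.
ensure_wav() {
  f=$1
  if [ "$(head -c 4 "$f")" = "RIFF" ]; then
    printf '%s\n' "$f"                       # already a wav file
  else
    out="${f%.*}.wav"
    ffmpeg -loglevel error -y -i "$f" -ar 16000 -ac 1 "$out"
    printf '%s\n' "$out"
  fi
}

# Intended use:
#   ml transcribe azspeech --file="$(ensure_wav recording.mp3)"
```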
Spoken English Input to Spoken French Output
$ ml transcribe azspeech | ml translate aztranslate --to=fr | cut -d',' -f4- | ml synthesize azspeech --voice=fr-FR-HortenseRUS
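The cut -d',' -f4- stage picks out the text to synthesise, on the assumption that ml translate aztranslate writes comma-separated output with the translation from the fourth field onward. The extraction itself can be checked on a made-up line in that assumed format:

```shell
# A made-up line in the assumed aztranslate output format
# (score,detected-language,original,translation) for illustration only.
line='0.99,en,hello world,bonjour le monde'

# -f4- keeps everything from the fourth comma-separated field onward,
# so commas within the translation itself are also preserved.
echo "$line" | cut -d',' -f4-   # bonjour le monde
```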
Copyright © 1995-2021 Graham.Williams@togaware.com Creative Commons Attribution-ShareAlike 4.0.