Uncloudy voice assistant

Revision as of 13:59, 21 March 2023

How can automatic subtitling be accessed in an interview, conference, gathering or conversation in a counter-cloud way?

For example, in the case of counter-cloud video conferencing such as BBB (BigBlueButton): what happens when somebody complains about the lack of seamlessness?
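One counter-cloud approach is to run speech recognition locally and turn its timed hypotheses into a standard subtitle file that a player or conferencing setup can load. A minimal sketch, assuming the recogniser already yields hypothetical (start, end, text) segments in seconds (the segment values below are made up for illustration):

```python
def srt_timestamp(seconds):
    """Format seconds as an SRT timestamp, e.g. 3.5 -> '00:00:03,500'."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3600000)
    m, ms = divmod(ms, 60000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(segments):
    """Render an iterable of (start, end, text) segments as one SRT string."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, 1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n")
    return "\n".join(blocks)

# Two hypothetical segments, as a local recogniser might produce them:
print(to_srt([(0.0, 2.4, "hello everyone"),
              (2.4, 5.0, "welcome to the gathering")]))
```

The output can be written to a `.srt` file and loaded next to a recording, keeping the whole subtitling chain on local infrastructure.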


Some thoughts on machine listening:

Machine listening has been technically supported and developed through legal, state and military cases and programmes; clouds and voice databases are also rooted in these developments.


We can unpack the processes of speech recognition (interviewing, transcribing, listening, training, listening for patterns, predicting) and follow hybrid ways of automated subtitling, some computational and some not.

Some algorithms

Speech recognition tools

from pocketsphinx import LiveSpeech

# LiveSpeech opens the default microphone and yields a decoded
# hypothesis each time PocketSphinx detects a pause in the speech.
for phrase in LiveSpeech():
    # how to pause the mic occasionally so the live speech print appears often
    # check timer thread
    print(phrase)
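The two comments above ask how to pause the stream periodically so the printed hypotheses keep appearing. One sketch of the timer-thread idea: a background `threading.Timer` flips a listening flag on and off, and phrases arriving during the off phase are dropped. It is written against any iterator of phrases, so the microphone-specific parts stay out of it; the interval values are assumptions:

```python
import threading

def throttled(phrases, listen_seconds=5.0, pause_seconds=1.0):
    """Yield phrases only while a listening flag is set.

    A chain of timer threads toggles the flag, forcing regular short
    gaps during which the printed output can catch up.
    """
    listening = threading.Event()
    listening.set()

    def flip():
        # Toggle the flag and schedule the next flip.
        if listening.is_set():
            listening.clear()
            delay = pause_seconds
        else:
            listening.set()
            delay = listen_seconds
        t = threading.Timer(delay, flip)
        t.daemon = True
        t.start()

    first = threading.Timer(listen_seconds, flip)
    first.daemon = True
    first.start()

    for phrase in phrases:
        if listening.is_set():
            yield phrase

# Usage with the loop above might then look like:
# for phrase in throttled(LiveSpeech()):
#     print(phrase)
```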

Experiments

* Gossip Booth: recognise words and keep only the ones with no meaning, like vocal expressions.
* Transcribe only phonemes with coqui STT and CMU Sphinx
* [https://gitlab.com/nglk/radioactive Radioactive Monstrosities]: voice upload and vocal transformation in the browser using the Web Audio API
* [https://github.com/jreus/chorusworkshop Remnants of future voices]: voice-cloning scripts with an interface
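The Gossip Booth idea of keeping only recognised tokens with no dictionary meaning can be sketched as a filter against a word list. The vocabulary below is a tiny stand-in invented for illustration; a real run might load CMUdict or a spell-checker's dictionary instead:

```python
# Tiny stand-in vocabulary; a real run might load CMUdict instead.
VOCABULARY = {"the", "weather", "is", "nice", "today", "i", "think"}

def vocal_expressions(tokens, vocabulary=VOCABULARY):
    """Keep only the tokens that are NOT dictionary words:
    hesitations, laughter, hums and other vocal expressions."""
    return [t for t in tokens if t.lower().strip(".,!?") not in vocabulary]

print(vocal_expressions("hmm the weather uhm is nice haha".split()))
# → ['hmm', 'uhm', 'haha']
```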



Older threads:


references: