Evaluate otter.ai for automated Zoom call transcription and speaker identification
We tested otter.ai for automatic transcription and speaker identification on a Zoom call.
@plafoucriere acted as the scribe (as we generally do at GitLab). Notes here: https://docs.google.com/document/d/1qCwZfoo1A-FihE2ifzd4ZT_Mpz-xFzZvAPJ7pJvWCEY/edit#heading=h.3fn91sng0foi
The meeting was recorded and posted to Unfiltered: https://www.youtube.com/watch?v=YQsWsoT8pDU
@sam.white, @mparuszewski, @zmartins, @plafoucriere, and @whaber participated live.
Several people who sometimes attend (@lkerr, @thiagocsf, @aevstifeev, @andyvolpe) did not attend live, and they may therefore find the recording/notes useful.
@whaber spent less than 5 minutes training Otter.ai to identify speakers (it gets better the more it is trained) and correcting frequently used but uncommon terms for the 30-minute call (for example, "Cilium" transcribed as "psilocybin"). The transcription is available here: https://docs.google.com/document/d/12_pWvVc5ZBACE7ZzNz7arflylaqAUv1PbQvDpwjVywM/edit
The next questions are:
- Did the attendees find the transcription useful?
- Did those who were not able to attend find the transcription useful?
- Would the scribe have been able to spend more time participating and less time typing if we used a solution like this?
Please comment with your thoughts if you were tagged in this issue (or if you weren't tagged but have thoughts on this).