Sunday 23 October 2022 at 18:07 (UTC)

Hello, folks! Here's the weekly update on what's happening backstage for EmacsConf 2022 in case you notice something that you want to help out with. =)

  • We've e-mailed the speakers instructions for uploading their files through either a web browser or an FTP client, and three speakers have already done so! Those talks are now available in the backstage area (https://media.emacsconf.org/2022/backstage/), along with the first set of edited captions (thanks Jai Vetrivelan!). If you don't have the username and password for the backstage area and you would like to access it, please e-mail me and I'll send you the details.
  • We've created a BBB room for each speaker's live Q&A session. The URLs are in conf.org in the private repository if you need them.
  • We've drafted some documentation for different volunteer roles. If you'd like to volunteer as a captioner, check-in person (hmm, reception?), Etherpad scribe, IRC monitor, or host, please check out the appropriate link and let me know if I need to add anything to the docs.
  • Thanks to David O'Toole for signing up for some IRC shifts! If you would like to volunteer for a shift, check out https://emacsconf.org/2022/organizers-notebook/#shifts .
  • We've updated our streaming configuration for the General and Development tracks, and have started testing the streams using mpv and the watch pages. Videos aren't streaming yet, but you can already check out the layout of the watch pages. (There's a small mpv sketch after this list if you'd like to tune in from Emacs once we're broadcasting.)
  • There are now Q&A waiting rooms with friendly URLs so that it's easier for people to join the live Q&A when the host decides it's okay to let everyone in. They're linked on the watch pages (along with the pads) and they'll be linked from the talk pages once we're ready to share them.
  • zaeph has been busy tweaking the ffmpeg workflow for reencoding and normalizing videos (there's a rough sketch of the normalization idea after this list). Thanks to Ry P. for sharing the res.emacsconf.org server with us - we've been using it for all the processing that our laptops can't handle.
  • We experimented with using the OpenAI Whisper speech-to-text toolkit to create the auto-generated captions that captioning volunteers can edit. Looks promising! If you'd like to compare the performance of the small, medium, and large models, you can look at the VTT files for the sqlite talk in the backstage area. I've also added support for tab-separated values (like Audacity label exports) and a subed-convert command to subed.el, which might give us a more concise format to work with. I'll work on getting word-level timing data so that our captioning workflow can be even easier. (There's a small sketch of the tab-separated format after this list, too.)
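
If you're curious about watching the test streams with mpv, here's a minimal Emacs Lisp sketch. The function name is made up and you'd need one of the real stream URLs (they're not public yet), so treat it as an illustration rather than part of our setup:

    (defun my/emacsconf-watch-stream (url)
      "Open the stream at URL in an external mpv process."
      (interactive "sStream URL: ")
      ;; mpv takes care of buffering and playback; we just hand it the URL.
      (start-process "emacsconf-stream" nil "mpv" url))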
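
For the curious, here's roughly what an audio normalization step can look like. This is just an illustration using ffmpeg's loudnorm filter with placeholder target values; it's not zaeph's actual pipeline:

    (defun my/normalize-audio (input output)
      "Run ffmpeg's loudnorm filter on INPUT and write OUTPUT, copying video as-is."
      (call-process "ffmpeg" nil "*ffmpeg*" nil
                    "-y" "-i" input
                    ;; Reencode only the audio, normalizing loudness;
                    ;; the video stream is copied unchanged.
                    "-af" "loudnorm=I=-16:LRA=11:TP=-1.5"
                    "-c:v" "copy"
                    output))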
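
And since the tab-separated caption format is so simple (start time, end time, and text on each line, like an Audacity label export), it's easy to slurp into Lisp. This little reader is just a sketch and not part of subed.el:

    (defun my/read-caption-tsv (file)
      "Read FILE with start<TAB>end<TAB>text lines into a list of (START END TEXT)."
      (with-temp-buffer
        (insert-file-contents file)
        (mapcar (lambda (line)
                  ;; Each line has three tab-separated fields.
                  (pcase-let ((`(,start ,end ,text) (split-string line "\t")))
                    (list (string-to-number start)
                          (string-to-number end)
                          text)))
                (split-string (buffer-string) "\n" t))))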

Next week, we hope to:

  • improve the prerec and captioning workflows
  • get more captions underway

Lots of good stuff happening!

Sacha