A digital game keeps track of the rules (29 July 2022)
Brandel commented on games and how in the real world we have to keep track of our own rules. Roughly 1:38 into the discussion.
In terms of rules, I guess the ‘ideal’ would be a fantastic computer-game-type environment for information views and connections, which are clear, but where we can always, Matrix style, exit the system to change the rules. Make sense?
Mac Meeting Recording app
Here I am in my Journal thinking about how I’d like to use transcripts from recorded meetings.
I’d like to have a local app which records audio locally and transcribes.
It would be very useful if it could differentiate between audio which comes from the computer and audio which does not, so that it knows when I am speaking and when others are speaking. It would be very good if it could record on multiple computers and sync over iCloud, so that there is no question who said what and all speech is labeled correctly and without user effort. At the end of a recording, the user can click Stop and also Share, which works by adding a URL once, provided by the host, after which they can choose to share to that URL. There will need to be some management for this of course, with multiple places to share to and a way to edit them.
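The labelling step above could be sketched minimally, assuming the recorder tags each transcript segment with its audio source (‘mic’ for the local microphone, ‘system’ for audio coming out of the computer); the `Segment` record and `label_speakers` function are hypothetical names for illustration, not part of Author:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float   # seconds from session start
    text: str
    source: str    # "mic" = local microphone, "system" = computer audio

def label_speakers(segments, my_name="Me"):
    """Label each segment: mic audio is me; system audio is 'Others'
    until iCloud sync supplies the remote speaker's real name."""
    labeled = []
    for seg in sorted(segments, key=lambda s: s.start):
        speaker = my_name if seg.source == "mic" else "Others"
        labeled.append((speaker, seg.text))
    return labeled
```

With iCloud sync, the ‘Others’ label would be replaced by the name attached to each remote machine's own mic channel, which is what removes any question of who said what.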
It will provide transcripts live in a window which does not need to be seen all the time so that it can be used for short term recall but not as a distraction.
Finding Specific Phrases
There will be a search function for phrases which can be filtered by speaker, time of day and date range, with a very simple interface for setting these parameters: a keyword search box visible at all times, a list of the speakers in the system as buttons at the bottom to mute any of them, and a reveal of the rest for more advanced use.
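The filter logic behind that interface could be sketched like this, assuming each transcript entry carries a timestamp, a speaker name, and the spoken text; `search_transcript` and its parameter names are illustrative only:

```python
def search_transcript(entries, keyword, muted_speakers=(), start=None, end=None):
    """entries: list of (when, speaker, text) tuples, where `when` is any
    comparable timestamp. Returns entries matching the keyword, skipping
    muted speakers and anything outside the optional date range."""
    hits = []
    for when, speaker, text in entries:
        if speaker in muted_speakers:
            continue
        if start is not None and when < start:
            continue
        if end is not None and when > end:
            continue
        if keyword.lower() in text.lower():
            hits.append((when, speaker, text))
    return hits
```

Muting a speaker button just adds that name to the muted set; the keyword box and date pickers feed the other parameters.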
The presentation of the chat is a regular Author document so that the user can fold and find and highlight and so on.
This is where it gets tricky and I have to think some more.
Having gone through this basic exercise I think the human part needs to be augmented rather than automated, to use that phrase. When Brandel started talking about games I should be able to write a note on this, as I did in the section above, but then, through some mechanism, have it linked to the video when it is later uploaded. The connective mechanism is time. If Author knows what time I wrote something, because I chose to insert a timestamp, then I simply need a way for Author to know later, when I have the URL to the YouTube video, what time the YouTube video started recording. It does not have to be to the millisecond, since meetings tend to start on the hour or half past. Imagine this simple scenario: I write a note, then timestamp it (cmd-t, where there is now an option to ‘Insert Timestamp’, along with ‘Search Books’ and ‘Use YouTube URL’). This simply adds a character, much like the character for an endnote, which turns into a date and time stamp on export to PDF. More importantly, when the user clicks on it, the user gets a dialog with the options to:
- Enter YouTube URL of recording.
- Specify time of day the recording started.
- Click a button to turn it into a plain-text date and time.
If the user chooses to paste the YouTube URL of the recording of that meeting, the user can also choose to specify the time of day it started, to know what the offset would be. This will be remembered by the system, for the next time, based on the current document (since there may be different documents for different meetings).
Once the user clicks ‘OK’, the system converts the YouTube URL into a link inside the video and goes online and retrieves the transcribed text… ok, stop, this is getting messy again.
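The time-offset part of the scenario, at least, is simple arithmetic: subtract the recording's start time of day from the note's timestamp and append the result to the YouTube URL using YouTube's standard `t=` start parameter. A sketch, with a hypothetical function name:

```python
def youtube_link_for_note(note_time, recording_start, video_url):
    """Given the datetime a note was timestamped and the datetime the
    recording started, build a YouTube link that jumps to that moment
    using the standard t= start parameter."""
    offset = int((note_time - recording_start).total_seconds())
    offset = max(offset, 0)  # notes written before the recording point to 0:00
    sep = "&" if "?" in video_url else "?"
    return f"{video_url}{sep}t={offset}s"
```

Since the offset only needs to be roughly right, remembering the start time per document (as described above) is enough; the user can nudge it once and reuse it.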
Simple Author Recording System
Let’s try a new version: I am in Author, I choose to ‘Record’ and while I do so, I can choose to insert a key moment, so I might type a keyword, such as ‘Brandel said…’ or simply do cmd-[, and the recording is marked at that point and the marker characters are entered at the cursor position.
Any time later I can click on it and it opens into a dialog where the transcribed text is shown, including 15 seconds before I clicked the button and one minute after. I then choose the in and out points and click ‘OK’. And here is the neat thing: this acts as stretch text in regular view, so I can trim which transcribed text is shown by moving the [ and the ] on the page. When double-clicked, I get the full edit window where I can edit the text, including selecting sections and then clicking on a person’s name (shown in a button at the bottom of the dialog, built up over time, easy to add by clicking a + button) and so on.
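The default edit window above (15 seconds before the mark, one minute after) could be as simple as a time filter over the transcript segments; a sketch, assuming segments are (start-time, text) pairs and `excerpt` is an invented name:

```python
def excerpt(segments, mark, before=15.0, after=60.0):
    """segments: list of (start_seconds, text) pairs. Return the segments
    falling inside [mark - before, mark + after], the default window shown
    when the user opens a key-moment marker for trimming."""
    lo, hi = mark - before, mark + after
    return [(t, text) for t, text in segments if lo <= t <= hi]
```

Moving the [ and the ] on the page would then just adjust `before` and `after` for that one marker.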
The recording is also uploaded to YouTube automatically after each section (with a card image, since YouTube does not accept audio only), with the system knowing the URL to the recording.
On export the marker is turned into “quotes” with a URL to the YouTube audio section right after.
There will also be a dialog for searching all recordings; this might be in the Journal or a new space called ‘Recordings’, where the user can interact with the recording transcript like a normal Author document to get all the search and view capabilities. Doug thought it important to always have all the affordances in every view. How audio and text will be shown is an issue to be worked on. The primary use for this is initially to capture useful moments at the time. A secondary use will be to go through the recording later for keywords and so on. I can imagine though, in the ‘Recordings’ view, that there is a play bar at the top of the screen, over the transcribed text, where the user can set whether it covers only one session, all of them, or a date range.
If I forget to stop the recording and there are five minutes with no speaking, the system stops recording on its own.
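The auto-stop rule might be sketched as a small state machine that remembers when speech was last detected; the class name is hypothetical, and the five-minute timeout follows the note above:

```python
class AutoStopRecorder:
    """Stop recording after `timeout` seconds with no detected speech."""

    def __init__(self, timeout=300.0):  # five minutes of silence
        self.timeout = timeout
        self.last_speech = None
        self.recording = False

    def start(self, now):
        self.recording = True
        self.last_speech = now

    def on_audio(self, now, is_speech):
        """Called for each chunk of audio; `is_speech` comes from whatever
        voice-activity detection the recorder uses."""
        if not self.recording:
            return
        if is_speech:
            self.last_speech = now
        elif now - self.last_speech >= self.timeout:
            self.recording = False  # sustained silence: stop on my behalf
```

The same last-speech clock could also drive the live transcript window, greying it out when nothing has been said for a while.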
The recordings are kept outside the document, in a clearly marked folder in the Author documents folder.