I’m spending most of my time at the moment recording lectures for the autumn term. It is taking me roughly two full days to record an hour’s video. I could just record myself talking over PowerPoint, but that would be boring, both for me and for our students. So I’m trying to take advantage of the medium, and record something closer to mini-documentaries. So far, I’ve recorded 11 videos; in this post, I’ll explain what I’ve found works and what doesn’t.
Mostly I’m recording in my study, as it’s the only place where I’m not in everyone’s way. It makes a good backdrop, but it’s not a large space – only two metres by four or five. It was already pretty full, and there was too much stuff behind the original desk. To get the cameras far enough back, and to get the more interesting side in the background, I built a new narrow desk for my laptop along what used to be the back wall, and then wall-mounted a 27-inch curved screen there.
My recording is done on a 2018 13″ MacBook Pro. It’s not ideal for this, as the fan spins up like crazy when it is working hard, so there’s a constant battle to keep background noise down on recordings. The built-in camera and microphone are no use here – the camera is really noisy, the built-in microphone mostly picks up the laptop fan, and if you do use it, typing is deafening on the recordings.
I’m using a range of external cameras. The most expensive is a Logitech C920. This is a pretty good camera if the light is good, but I find the image very harsh and the colours ugly whenever I have to use artificial light. It’s an autofocus camera, and the autofocus is a pain if you gesticulate, hunting in and out constantly, so I usually end up using webcam settings to disable autofocus.
The main camera I use is a Logitubo 1080p webcam. This is actually the cheapest camera I own, but it’s got a wide field of view which is just right to get me in shot while fitting a terminal window overlaid on screen at the same time. The automatic exposure settings on this camera are a bit rubbish, but with careful manual tuning of exposure time and gain, again using webcam settings, I can usually get a pretty reasonable image, even in pretty low light when I’m recording at night. This camera has manual focus, but as it’s fairly wide angle, you can usually set it and forget it. If it’s sometimes slightly soft focus, that also serves to reduce my wrinkles.
I’m also using an SJ4000 GoPro clone. This is not intended as a webcam at all, but works fine and has a very wide field of view, which is useful sometimes in a small room. It also works surprisingly well in low light, given it really is intended to be an action cam.
Finally, I’m sometimes using my phone (nothing fancy; a Moto G7), either using Iriun webcam to link it to the MacBook as a webcam, or just recording directly on the phone. The phone camera is actually at least as good as the webcams.
A pair of Velbon mini tripods mean I can easily move the cameras around and get the height just right at somewhere near eye level. This also allows quick changes of view, just to add some variety.
Cameras need light; during the daytime there’s good light from the window – too good if it is sunny in the morning, but I don’t really do mornings, so that’s not such a problem. I do find I need some fill light to reduce shadows though, and I’m using a cheap ring light for that. It is especially useful to be able to change the colour temperature of the lighting depending on the weather outside. I also have a small spotlamp aimed at the corner of the wall off to one side in front of me, to cast some soft light. Lighting is hard, and I still get more reflections in my glasses than I would prefer. I run the laptop in dark mode and close all bright windows on screen before starting recording, at least when I remember, to reduce reflections a little.
For just a few scenes, such as when I was kayaking while talking to camera, I record directly on the phone. The built-in microphone is no use for this, so I’m using a cheap wired lapel mike. The built-in Android camera app, for some reason best known to Google, cannot be configured to use an external microphone. Instead, I’m using Cinema FV-5. Its UI is a bit weird, but it does work. Yes, it can use the front camera, but you’ll never figure it out without help!
Back in my study, I’m using a TONOR USB condenser microphone to capture audio. It is probably overkill, but looks the business, the sound quality is good, and it is directional enough to reduce the laptop fan noise appreciably. My one criticism is that it is slightly quiet, even on the maximum input gain setting, but I fix that in OBS advanced audio settings.
Speaking of OBS, this is what I use for most of my recording, and it is truly brilliant. I use OBS in several different ways. First, for simple scenes, I just use it to record talking to camera. Second, I often overlay a window on top of a video stream of me, with my image placed well off centre. This allows me to alternate between talking to camera and interacting with the content in the window while appearing to look at the window.
Next, I use it to overlay pieces of PowerPoint slides over the video of me explaining them. To do this, I get OBS to capture the PowerPoint window, and then use either chromakey or lumakey filters to remove everything in the PowerPoint window except the pieces I want.
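A lumakey, in essence, turns pixels transparent based on brightness: anything darker than a threshold becomes see-through, leaving only the bright slide content composited over the camera feed. Here’s a minimal sketch of the idea in Python with NumPy – the function name and threshold value are mine for illustration, not OBS’s, and OBS’s real filter also smooths the key edge rather than cutting it hard:

```python
import numpy as np

def lumakey(rgb, threshold=40):
    """Turn dark pixels transparent, mimicking a luma key filter.

    rgb: uint8 array of shape (H, W, 3). Returns an RGBA array with
    alpha 0 wherever the pixel's luma falls below `threshold`.
    """
    # Rec. 709 luma weights, as used for HD video
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])
    alpha = np.where(luma < threshold, 0, 255).astype(np.uint8)
    return np.dstack([rgb, alpha])

# A tiny 1x2 "frame": one black pixel, one white pixel
frame = np.array([[[0, 0, 0], [255, 255, 255]]], dtype=np.uint8)
keyed = lumakey(frame)
print(keyed[0, 0, 3], keyed[0, 1, 3])  # → 0 255 (black keyed out, white kept)
```

The same structure with a colour-distance test instead of a luma test gives you a chromakey.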
OBS also allows the audio to be tweaked. I’m running the audio stream from the TONOR mike through an OBS noise filter to remove the remaining background hiss from the laptop fan. Finally, the different cameras have very slightly different lag when recording – enough that the lip sync is off ever so slightly. To correct this, I use OBS to add a little audio delay to the microphone stream – something like 50-100 ms for the Logitubo and 200 ms for the SJCAM.
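The sync offset itself is conceptually simple: it just shifts the audio later in time by the equivalent of prepending silence. A toy sketch in Python with NumPy (the helper function is mine, not part of OBS; 48 kHz sample rate assumed):

```python
import numpy as np

SAMPLE_RATE = 48000  # Hz

def delay_audio(samples, delay_ms):
    """Delay an audio stream by prepending silence, which is
    effectively what a positive sync offset does to a source."""
    pad = np.zeros(int(SAMPLE_RATE * delay_ms / 1000), dtype=samples.dtype)
    return np.concatenate([pad, samples])

mic = np.ones(48000, dtype=np.float32)  # one second of dummy audio
delayed = delay_audio(mic, 100)         # e.g. 100 ms for the Logitubo
print(len(delayed) - len(mic))          # → 4800 samples of silence
```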
Occasionally I’ve done two-camera recording, where I want to cut between two different views with a continuous soundtrack. It is possible to do this in OBS – that’s what OBS is designed for – but I found I couldn’t pay attention to talking to camera, switching cameras in OBS, and what I was saying, all at the same time. You’d really need someone to drive OBS for you, and I’m recording solo. Instead I recorded the second camera completely separately, on the camera itself using either my phone or my old Lumix FZ72, and then re-synced the video streams later in editing. That way I could decide after the fact when to cut between views. As both camera streams record audio, it is fairly easy to align them afterwards in the editor by aligning their audio, then just disable whichever audio you don’t want. This is all more time consuming than it is worth though, unless you really do need two views.
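If you wanted to automate that alignment step, cross-correlating the two audio tracks finds the sample offset where they best match. A sketch in Python with NumPy, using random noise as a stand-in for the two cameras’ audio tracks (the function name is mine):

```python
import numpy as np

def find_offset(a, b):
    """Estimate how many samples stream b lags behind stream a,
    by locating the peak of their cross-correlation."""
    corr = np.correlate(a, b, mode="full")
    return (len(b) - 1) - int(np.argmax(corr))

# Pretend the shared audio appears 960 samples (20 ms at 48 kHz)
# later in camera 2's recording than in camera 1's.
rng = np.random.default_rng(0)
cam1 = rng.standard_normal(4800).astype(np.float32)
cam2 = np.concatenate([np.zeros(960, dtype=np.float32), cam1])
print(find_offset(cam1, cam2))  # → 960
```

Editors that offer “synchronise clips by audio” are doing essentially this under the hood.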
Sometimes you just need to get creative with the hardware. In the picture below I’ve one camera on the tripod behind the ladder, and my phone on a long selfie-stick directly above the table, trying to keep it out of view of the first camera. My wife’s reaction: “no comment”. She’s used to me by now.
I have an XP-PEN drawing tablet, which is remarkably good value for money and works really well for drawing over static images, but less well for drawing over video as an OBS overlay. I thought I would use it a lot, but haven’t really.
My favorite tool for recording is my home-made lightboard, which you can read about in this post. It’s great for drawing figures live, but it is a bit limited in drawing area. I’d love to build a larger one, but I don’t have the space.
I bought a greenscreen so I could overlay myself over slides, pictures, etc, but it is a royal pain to greenscreen in a small space. You really really need to get even lighting on the greenscreen and good lighting on yourself, and it is nearly impossible to do without some space between you and the screen. I just don’t have that space in my study; so far I’ve not used it.
For editing, there is a great deal of choice of video editing software available, but I’m actually just using iMovie. It is fairly limited, but as I’m doing most of the fancy stuff in OBS, iMovie is adequate for editing all the recorded clips together, adding transitions, a few titles, and some extra sound. It also comes free with macOS, which is good. I keep wondering if I should switch to something more capable, but so far I’ve not really found a need. Better the tool you know.
So what about content? All the technology in the world is no good if you can’t present the material well. I started out trying to script the videos, and then, for lack of an autocue, putting the camera in front of a screen with the text on it. Turns out I can actually scroll text with one hand, gesticulate wildly with the other, read text, and talk at the same time, all while sounding not too scripted. But in the end, writing a script takes a lot of time, and I can’t read a script while live-coding anyway.
After the first video, I stopped scripting and simply ad-libbed to the camera. I find that I usually mess something up in a big way the first few takes, but the content rapidly improves, and I usually get a take I can use on the fourth or fifth attempt. This is of course something of a pain when you’re doing a ten-minute take, so I try to keep continuous takes to no more than five minutes, and deal with linkage in the editor.
It’s probably not a good idea to overdo the background music in lectures, but if you need a source of royalty free music, the YouTube music library is a useful resource.
What’s with the blue shirts? Most of the shirts in my wardrobe had patterns that didn’t work well on camera, so I took a lesson from David Attenborough and bought a job lot of plain blue ones. Attenborough always wears the same type of blue shirt when filming. It means you never need to worry about continuity when editing clips together shot on different days.
So far, so good. It’s a lot more work than I expected, but I’m having fun, and I hope it will make this strange year slightly more engaging for my students.