Methods of work: Testing the design and automation of lower thirds.

From the nightly news (does anyone watch that anymore?) to mockumentaries, directors have found it necessary to identify the person speaking on camera. The typical device used to do this is the “lower third,” so named for its position: it occupies the lower third of the frame, below the image of the speaker.

For TransGeek Movie, I wanted to design a lower third that was visually interesting, distinctive, and easy to reproduce. It should not, however, disrupt the viewer’s experience of the voice of the person on camera.

17 (16) segment display

I have not settled on a final design yet, but I think I will be using an animated lower third that emulates an old-school 17-segment alphanumeric display. It is easy to read, and it adds a bit of geeky eye candy.

There will be many lower thirds throughout the film. Composing and animating each one manually would be far too much work, and prone to error. Automation is a lovely thing.

I designed a template in Inkscape, an open source vector editing package. I then wrote a bash script that modifies the template with the appropriate text, outputs the individual customized frames using Inkscape’s command-line interface, and strings it all together into the final animation using avconv. ImageMagick is used for creating the key channel.
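To give a flavor of how such a pipeline fits together, here is a minimal sketch. This is not my actual script: the template file, the @NAME@ placeholder, and the frame names are all invented for illustration, and the render and encode steps are simply skipped when Inkscape or avconv is not installed.

```shell
#!/bin/sh
# Minimal sketch of the lower-third pipeline described above.
# The @NAME@ placeholder, file names, and settings are invented.
set -eu

NAME="Mattie Brice"
TEMPLATE="lower_third_template.svg"
OUTDIR="frames"
mkdir -p "$OUTDIR"

# Stand-in template (the real one would be designed in Inkscape).
[ -f "$TEMPLATE" ] || cat > "$TEMPLATE" <<'EOF'
<svg xmlns="http://www.w3.org/2000/svg" width="1920" height="1080">
  <text x="100" y="980" font-size="64">@NAME@</text>
</svg>
EOF

# Substitute the speaker's name into a copy of the template.
sed "s/@NAME@/$NAME/" "$TEMPLATE" > "$OUTDIR/frame_0001.svg"

# Render the frame with Inkscape's CLI, then stitch the sequence with
# avconv; both steps are skipped if the tool is not installed.
# (--export-png is the Inkscape 0.x flag; 1.x uses --export-filename.)
if command -v inkscape >/dev/null 2>&1; then
  inkscape --export-png="$OUTDIR/frame_0001.png" "$OUTDIR/frame_0001.svg" || true
fi
if command -v avconv >/dev/null 2>&1; then
  avconv -y -r 30 -i "$OUTDIR/frame_%04d.png" -c:v libx264 lower_third.mp4 || true
fi
```

An animated display would generate one SVG per frame (revealing segments step by step) before handing the whole sequence to avconv, and the key channel can be pulled from the rendered PNGs’ transparency with ImageMagick (e.g. `convert frame.png -alpha extract key.png`).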
I am currently using Kdenlive as my editor and compositor.

The result is satisfactory. At least until I get a real graphic artist and editor on board.

It doesn’t hurt that my test clip features Mattie Brice sharing some wise insights.

Edited 16 Dec. 2014 for clarity.

New(ish) Post-production rig.


Up to this point, all of the post-production of Trans*Geek Movie has taken place on my venerable ThinkPad T61P, running Ubuntu Studio 12.04 LTS. This continues to be my main machine, but I find myself in need of more horsepower and flexibility to pick up the pace of production.

There are two issues I keep bumping up against with the laptop: first, render times for video are painfully long; second, the few tools that I must run in a Windows environment are not very happy in the VirtualBox installation of Windows 7 that I run.

Before and after: SparkServer to 8-core Xeon

For this reason, I have repurposed a retired 8-core 2.0 GHz Xeon server mainboard as my new post-production box. It takes up residence in a repurposed SparkServer chassis. (I know, this is IT sacrilege.) I have configured it as a dual-boot machine: Windows 7 and Ubuntu Studio 13.10. At this point it has no sound card and only plain vanilla VGA, but the reasonable power of the CPUs means that PluralEyes runs smoothly, and I can offload rendering from Kdenlive while continuing to edit.

I will be adding an HDMI capture board, a reasonable GPU, and sound support in the future, but this is a good start.

The right tool for the job.

In a previous post, I noted that I have a strong prejudice for using Open Source/Free Software tools. However, when a proprietary tool is the only one that does the job, I will use it. I also believe that when I find a great tool, I should call it out.

PluralEyes is one of those great tools.

Anyone who is familiar with motion picture post-production knows that one of the most odious tasks an editor faces is syncing audio and video. For quality reasons, picture and sound are often recorded on separate devices, necessitating their reintegration during the editing process.

Traditionally, this synchronization is done by identifying a common feature in both the recorded audio and video that can be matched up. Everyone is familiar with the archetypical movie slate.


The idea is very simple: when you slate a scene, you create a simultaneous visual and auditory event that you can use later to synchronize sound and picture. There are also other methods based on a shared timecode in the camera and audio recorder, but these tend to be expensive and complicated.

The slate has one major drawback when shooting documentaries, and especially interviews: it is distracting as hell. If an interview spans a camera or audio-recorder stop, you really don’t want to get up in the face of your subject with a slate in the middle of a conversation just to get a good sync mark.

That’s where PluralEyes comes into play. So long as your camera footage has audio, PluralEyes will automagically synchronize your external audio with picture. All you have to do is drop the audio and video clips into the GUI in the correct order.

This is not a perfect fit with my tool chain. First, PluralEyes does not come in a native Linux version, so I have to run it in a Windows VM (I haven’t tried WINE yet). Second, it does not support an edit decision list format that is compatible with the editing software I am using. It does output Final Cut Pro XML Interchange Format, which is well documented, and I should be able to write a script to parse it into a form that Kdenlive can use; in the meantime, I have a “good enough” workaround using PluralEyes’ media export function. What is undeniable is the great gain in productivity compared to manual synchronization.
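For what it’s worth, the first step of such a converter need not be elaborate. Below is a rough, purely illustrative sketch: the sample xmeml file is invented and far simpler than a real PluralEyes export, and a production script should use a proper XML parser rather than grep and sed; but it shows the kind of data (clip names and start frames) a Kdenlive-side converter would work from.

```shell
#!/bin/sh
# Rough sketch: pull clip names and start frames from an FCP XML
# (xmeml) file. The sample document below is an invented, minimal
# stand-in for a real PluralEyes export.
set -eu

XML="pluraleyes_export.xml"

[ -f "$XML" ] || cat > "$XML" <<'EOF'
<xmeml version="4">
  <sequence>
    <media><video><track>
      <clipitem><name>interview_cam.mov</name><start>0</start><end>4500</end></clipitem>
      <clipitem><name>interview_audio.wav</name><start>12</start><end>4512</end></clipitem>
    </track></video></media>
  </sequence>
</xmeml>
EOF

# List each clip with its start frame, one per line.
grep -o '<clipitem>.*</clipitem>' "$XML" |
  sed 's:<clipitem><name>\([^<]*\)</name><start>\([0-9]*\)</start>.*:\1 starts at frame \2:' \
  > clip_offsets.txt

cat clip_offsets.txt
```

From a listing like this, the offsets could be rewritten into whatever project format the editor expects.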

Finally, I am making headway toward putting out some teasers and a Kickstarter video.

Thanks to Lars Fuchs for turning me on to this tool.

Tool Chain

Post-production begins! I am only halfway through the interviews I plan to do, but with nearly 20 hours of interviews in the can, the time has come to get serious about post-production.

I have been syncing audio and picture, transcribing interviews, and bookmarking material for the trailer, promos, and Kickstarter video. (There, the cat is out of the bag!)

Anna Anthropy at Dorkbot CHI via D5100

I have a preliminary tool chain in place:

I am using Akimbo for bookmarking the interview audio. This allows me to listen to MP3s of each interview on an Android device, bookmark them, and take brief notes. I can then reconcile this with the footage when I am at my editing station.

I am an open source kinda guy. As far as possible, I am working with free (as in speech) tools. I am not fanatical about this, however. If I need to purchase software to get a job done, I will; but I will always favor an open source solution for my needs.

Right now the suite of tools I am using consists of: Kdenlive for video editing; Audacity and Ardour for audio mixing/cleanup; Gimp for image manipulation and color correction; Gimp and Inkscape for graphics; and many, many other little utilities for various tasks.

The wonderful thing about working with Linux is that so much of the work I need to do can be automated with scripts. I have written scripts to wrap around ffmpeg to do transcoding and resolution shifting of the video, extract MP3s for transcription, and even do background rendering. I use the Ubuntu Studio distribution. The low-latency kernel makes working with audio and video painless. As I have gotten older, I appreciate the value of stability over having the most bleeding-edge releases. As a result, in the last year or so, I stopped chasing the latest release and settled on Ubuntu’s Long Term Support (LTS) release (12.04 Precise Pangolin) as my OS of choice.
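To give a flavor of what these wrapper scripts look like, here is a hedged sketch of a batch proxy-and-transcription script. The directory names and encoding settings are invented for illustration; it does nothing when there is no footage, and both ffmpeg invocations use only standard options (the scale filter, libx264, libmp3lame).

```shell
#!/bin/sh
# Sketch of a batch wrapper around ffmpeg: make half-resolution
# proxies for editing and MP3s for transcription. Directory names
# and settings are invented for illustration.
set -eu

SRC="footage"
PROXY="proxies"
AUDIO="transcripts"
mkdir -p "$SRC" "$PROXY" "$AUDIO"

for clip in "$SRC"/*.MOV; do
  [ -e "$clip" ] || continue   # empty glob: nothing to do
  base=$(basename "$clip" .MOV)
  # Half-resolution H.264 proxy; the audio stream is copied untouched.
  ffmpeg -y -i "$clip" -vf scale=960:540 -c:v libx264 -crf 23 \
         -c:a copy "$PROXY/$base.mp4"
  # Audio-only MP3 for listening and note-taking.
  ffmpeg -y -i "$clip" -vn -c:a libmp3lame -q:a 4 "$AUDIO/$base.mp3"
done
```

Dropped into cron or run before bed, a script like this keeps the slow transcodes off the editing machine’s clock.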

I may find in the future that I need to accommodate the workflow of others, or that I just must have some feature only available in After Effects or Final Cut Pro. But for the time being, this tool chain suits me well.



I learned today of the passing of Stefan Kudelski, inventor of the Nagra tape recorder, at age 83. For those who don’t immediately recognize the name Nagra, I can assure you that Mr. Kudelski profoundly changed the way we all hear our world. He invented the first high-quality, portable reel-to-reel audio recorder. The machines were relatively compact, beautifully engineered, and rugged. They revolutionized motion picture production, radio and TV news gathering, and even ethnomusicology. I will not eulogize Mr. Kudelski here. Others have done a better job than I could hope to.

When I was last regularly involved in filmmaking, the tools of choice for independent producers were often a 16mm Arriflex camera and a Nagra III. This allowed one to shoot synchronous sound and picture in just about any location. In the early 1960s, equipment like this allowed François Truffaut and Jean-Luc Godard to make the films of the French New Wave. In the late ’60s, the technology facilitated Film News Gathering (the precursor to modern Electronic News Gathering; see the excellent film Medium Cool) and Cinéma vérité. The Nagra was a disruptive technology, putting affordable means of production into the hands of independent filmmakers and documentarians.

Affordable is a relative term. A Nagra, even used, was comparable in price to a small car. The cost of a motion picture camera was similar. Then there was the cost of film, tape, etc. “Low budget” production was a costly undertaking.

In contrast, when I outfitted the production of TransGeek Movie, all the kit (camera, digital audio recorder, lights, stands, tripod, recording media) cost me slightly more than the price of just a used Nagra in 1986.

Now one can argue the relative quality of 1080p video vs. 16mm film, or 24-bit 44.1 kHz digital audio vs. analog tape; but I think my point stands: the digital production tools we have available today are making it possible for more people than ever to tell their stories.

Thank you Mr. Kudelski.