Discussion of Digital Video and Related Topics

Conducted and Edited by Alex Pournelle







I’d been having a quick online discussion with several people, and soon realized that it was becoming quite interesting. Thus, I am formalizing it as a webpage, which you’re reading right now. The mailing list discussion continues, but it will probably be folded into this version over time.

This discussion started out with a simple question from Mr. Mark Minasi about hooking up his spankin’ new DV camera to a PC. Poor Mark didn’t realize what he was getting into, and so I started throwing some answers back. (If, for some reason, you don’t know, Mark is the author of many fine books on Windows NT and fixing your PC. He’s also a Babylon 5 fan and a good friend of mine.)

As of this writing, the text consists of several parts, corresponding to several mailings which I sent out. They, in turn, contain multiple comments and commentaries from various sources. They appear in chronological order for the most part, instead of being digested, to give a flavor of the ongoing discussion.

My inline comments appear in this font; comments from others in Times.

--All best,


Other Sites to Visit

If you want a few places to find out more about these subjects, here’s a quick set:

Codec Central, a place to learn about software and hardware codecs. (Codecs are coder/decoders, and are explained below.)

There’s a freeform discussion of DV in general and equipment for sale in particular at VideoGuys.

VideoNexus, a discussion about production and other video editing topics, from the makers of Speed Razor, an NT-based editing package.

Notes on Terms

Throughout, "DV" is used to mean "MiniDV". There are at least five Digital Video formats, but only MiniDV is being discussed here and now. A note from Peter Glaskowsky about other formats appears as Part III.

Future Topics for Discussion

Some of the burning questions I have are below. This is by no means complete! Please add your own.

  • What are the common means of data interchange between editing and post products? Are numbered Targas the only/best way? What about their huge cumulative size?
  • Why don’t companies with hardware codecs release software-only versions for people who want to read their files?
  • What is the future of MiniDV? Will it stay around? Is it going to be this little island of semi-pro production?
  • Why is this stuff so hard to do? Or is it just me?


Discussion of Digital Video and NT

Mark Minasi originally wrote, in the note which started it all:

Hey, Alex, do you know of any Firewire/IEEE 1394 boards for the PC? I've just gotten one of these incredibly neat (and tiny) cameras with an IEEE 1394 output and want to hack around with attaching it to the PC. As The Graphics Guy, I imagine you'll have an idea. Thanks.

See, Mark isn’t only a great guy, he recognizes my superior talents... or, at least, he’s great at flattery.


My reply to Mark was:

Let's see, yes, I do know a bit about these things. David Em and I have been researching them for the book.

There are only about three companies actually making chips for IEEE-1394 I/O; Adaptec's are the most popular. Let's see; the most popular packages are Pinnacle/Miro's, Truevision's, and DPS's. Pinnacle is probably the best of the bunch. Among other things, the Pinnacle solution will go back and pick up dropped frames if any are lost. Very cool.

There are also other manufacturers I’d left out, notably Radius’s Moto DV and Canopus. Read on for more.

If you merely want to pick up OK video, you can use the sloppy analog I/O methods, like S-Video capture cards. You will probably get better quality from a DV camera even if you use S-Video, because the storage is digital--less lossy. I’d bought a closeout Sharp MiniDV ViewCam from Fry’s a few months ago, which we tested by plugging it into the Targa 2000RTX which Intergraph had so kindly lent us. (We have since returned the Sharp ViewCam.) The results from it were clearly superior to those from either my Sony Hi-8 camera or David’s Sharp ViewCam Hi-8—even though all three were being grabbed as analog captures, the digital storage and superior three-chip pickup made a huge difference. We will discuss analog vs. digital capture in detail later. {And I’d welcome your input.}

If you merely want to use your DV camera as a still capture device, your best bet is still a Snappy from Play. For well under $200, you can grab frames at up to 1500 x 1200. (Obviously, this is well above mere NTSC resolution. The extra resolution is achieved by image oversampling and deinterlacing, a subject for another time.)

Many new W98 boxes have 1394 built in, notably the new Compaq Presarios. I have some really nasty words to say about W98 and power management which are in another page.

In any case, if you actually want to edit DV, Digital Video, you'll need scads of disk space of decent speed. At 3.6 MBytes/second, the speed of MiniDV, it adds up quickly: a bit under five minutes per gigabyte.
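That disk math can be sketched in a few lines; the 3.6 MByte/second figure is the one quoted above, and should be treated as approximate:

```python
# Rough disk-space arithmetic for MiniDV capture, using the ~3.6 MB/s
# stream rate quoted above (treat the exact figure as approximate).
DV_RATE_MB_PER_S = 3.6

def minutes_per_gigabyte(rate_mb_s=DV_RATE_MB_PER_S, gb=1.0):
    """How many minutes of MiniDV fit in a given number of gigabytes."""
    return (gb * 1024) / rate_mb_s / 60

def gigabytes_for_minutes(minutes, rate_mb_s=DV_RATE_MB_PER_S):
    """Disk space (GB) needed to hold a capture of the given length."""
    return minutes * 60 * rate_mb_s / 1024

print(round(minutes_per_gigabyte(), 1))     # a bit under 5 minutes per GB
print(round(gigabytes_for_minutes(40), 1))  # ~8.4 GB for a 40-minute tape
```

(Peter Flynn's "10 Gig for 40 minutes," quoted later, is this same arithmetic with some working headroom on top.)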

Apropos of this subject, Mark Minasi wrote:

Thanks. I'd found the Pinnacle/Miro DV300 {Low-cost DV capture card} on the web and it seems to go for around $750 without capture software. The Truevision Bravado (not clear if it's a real product yet) was offered for $500 with a copy of Premiere 4.2 included, which sounded like quite a good deal. OTOH, the DV300 claims to have NT drivers (!) and NT is probably better suited to handling big gobs of memory. As I write that, however, it occurs to me that perhaps I'm missing a central point about bandwidth, to wit:

Perhaps someone can comment on the reality of the Truevision card.

A digital video/audio stream (does FireWire transmit audio as well?) without compression would be a fearsome data rate indeed -- I figure 640 x 480 x 24-bit color, 30 frames per second would be 27,648,000 bytes per second, with audio not included. Clearly no standard hard disk could handle that input rate. I'd always assumed that one of the beauties of a DV camcorder would be that I could go camera-PC-camera without loss, unlike the dread Digital-Analog-Digital implied by a Y/C connector. Am I incorrect there?

My reply:

Yes, FireWire/1394 does indeed transmit audio as well. As Peter Flynn notes below, DV audio uses a 32 kHz sample rate, not the more standard 44.1 kHz. We’ll get into that in a moment.

Mark’s back-of-the-envelope calculation of speeds would be correct if video were sent uncompressed. However, MiniDV uses a lossy compression algorithm which provides a continuous data stream of 3.6 Mbytes/second. It’s a completely digital format; there is no more data loss from Analog -> Digital -> Analog steps.
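The two rates Mark and I are juggling can be checked side by side; the implied ratio below is back-of-the-envelope arithmetic, not a spec figure:

```python
# Mark's uncompressed figure: 640 x 480 pixels, 3 bytes (24 bits) of
# color per pixel, 30 frames per second -- video only, no audio.
uncompressed = 640 * 480 * 3 * 30      # bytes per second
print(uncompressed)                    # 27,648,000 -- matches Mark's math

# The MiniDV stream rate quoted in this discussion, in bytes per second.
dv_stream = 3.6 * 1024 * 1024

# Implied overall reduction (back-of-the-envelope, not a spec number).
print(round(uncompressed / dv_stream, 1))  # roughly 7:1
```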

This data rate can, in fact, be handled by a single hard drive; witness Peter Flynn’s new Presario, which has a UDMA2 IDE drive that keeps up (if just barely) with this rate. Professional video editing systems use multiple hard drives in pairs or threes or more, usually striped as RAID 0. The more drives, the higher the sustained transfer rate. Since video can ill afford dropping even a bit, this becomes important.

There are, in fact, uncompressed video systems, notably SoftImage|Digital Studio and Play’s Trinity, but they will be discussed another time. Nearly all computer-based production of video is done with lossy compression. This lowers the data rate and means you don’t run out of disk space in ten minutes. Of course, there’s an inverse tradeoff between quality and the compression ratio.

There is also a third parameter under NT: file size limitations. Currently, NT only supports files up to 2GB in size. This quickly limits the size of clips which may be grabbed at once. Several homegrown solutions are being cooked up for this. In-Sync, makers of Speed Razor, transparently links files as they’re captured. DPS solves this by using their own drive array, slaved to their own I/O card. Other people have production boxes which handle all the I/O their own bad selves; this is the Trinity approach.

This 2Gbyte limitation isn’t always dealt with intelligently. The Truevision native capture/playback software, for instance, will merrily capture a file far larger than can be saved; so will Avid’s MCXpress.
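The 2 GB ceiling translates directly into a longest-possible clip per file; a quick sketch, again using the ~3.6 MB/s figure from above:

```python
# NT's 2 GB file-size limit, expressed as a maximum single-file clip
# length at the ~3.6 MB/s MiniDV rate discussed above.
FILE_LIMIT_MB = 2 * 1024
DV_RATE_MB_PER_S = 3.6

max_clip_minutes = FILE_LIMIT_MB / DV_RATE_MB_PER_S / 60
print(round(max_clip_minutes, 1))  # about 9.5 minutes before the file fills up
```

Which is why capture software that doesn't watch the limit runs into trouble well before the end of a tape.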



There’s another problem with MiniDV: its compression method is not compatible with that used by analog capture systems. MiniDV uses its own DCT-based DV compression format, while most analog capture cards (e.g., Truevision) use a Motion JPEG (MJPEG) format. In order to work with the two together, you must transcode one or the other into a common format.

And you have to transcode to something lower bit-rate with attendant ghastly loss if you want to store more. Oh, and you can't mix DV with analog without transcoding to a compatible standard.... I could go on and on.

Transcoding is also the only way to reduce the data requirements for MiniDV, because it always runs at 3.6 MBps. This inflexible data rate is unsuitable for low-res work, though of course as a source for web-destined video (or other low-res work) it’s just fine. Transcoding can take significant amounts of processing time on systems without hardware assistance.

There are new chips coming out which will change all this. C-Cube’s new DVx chips (there are two, at differing data rates) will make it possible both to mix MiniDV and MJPEG streams and to transcode between them in realtime. These chips will be less than $175 in quantity for the more consumer-oriented version. Peter Glaskowsky says that there are two others competing in this market space; I think it’s safe to state that competition is heating up quickly.


Machine Control and 1394

1394 is, of course, a fully-featured, high speed bus. It can conduct audio, video and control information—anything which can be encoded digitally. So there is no engineering reason why you can’t have complete control over a miniDV camera from a computer.

But the software is another issue. The deck and camcorder manufacturers have been very sticky about actually letting the machine control information out of the bag. Pinnacle claims to have very good rapport with the mfrs, and that they can back up and grab frames that were missed automatically. I have no direct experience on this.

This logjam is in part because the video mfrs are trying to protect their own "prosumer" lines; after all, if you can do all of this with a $4,000 camera, why buy a $10,000 one? Canon, which doesn’t make any camcorders except their new miniDV ones, doesn’t have a prosumer line to protect and thus may break this logjam yet.

One very instructive observation: at WinHEC in 1995, Bill Gates used an early Sony VX1000 camcorder connected to an early FireWire board to capture digital video. At WinHEC 1998, the same demo was done with an analog capture card.

It’s also instructive that Play still has not released their 1394 I/O boards for the Trinity. I guess this stuff is really hard after all.



I’d said to Mark:

You will need a 1394 cable for your camera. There are no real standards for these cables yet, or at least they are not in good supply. One good source is Computability.


Peter Flynn said:

I ordered the cable off the web. I don't know if it is the same as i-Link or not. If it is, I could have gone to Fry’s {Local mega-warehouse computer/electronics store} and saved myself a lot of trouble -- the folks on the phone at Fry’s certainly don't have a clue. The cable has a 4-pin connector that goes into the Sony, and a much larger 6-pin connector that goes into the Compaq.

i-Link is Sony’s offshoot of 1394/FireWire. I have yet to see any details on how it differs from "regular" 1394. I’d welcome any info anyone might have.

Actually Editing The Stuff

Peter Flynn said:

I did a bit of editing last night with Premiere 4.2. Works pretty well, but I'm in bad need of a mass (many Gig) fast and reliable back-up storage. 10 Gig for 40 minutes of video. Yikes! Currently I'll back dump to the camera.

One advantage to DV! As long as you’re doing cuts-only, you can dump it back to MiniDV with no further loss in quality. It’s only if you do fades, dissolves, effects, etc., that further loss will occur.

I need to get an update for Premiere so that it can properly handle the 32Ksample DV audio.

Darnell Gadberry, the man behind the company behind, points out that programs like Sound Forge will transcode with little or no audible quality loss, but if you wish to stay in full DV mode, then you need a program which will natively handle its audio format.
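To illustrate what's involved, here is a naive sample-rate conversion, done with plain linear interpolation in NumPy. This is my own sketch, not how Sound Forge does it; the missing low-pass filtering in a naive conversion like this is exactly what produces audible aliasing:

```python
import numpy as np

# Naive 32 kHz -> 44.1 kHz conversion by plain linear interpolation.
# My own sketch for illustration -- NOT how Sound Forge works. Proper
# resamplers filter the signal; skipping that step causes aliasing.
SRC_RATE, DST_RATE = 32000, 44100

n_src = 320                              # 10 ms of audio at 32 kHz
t_src = np.arange(n_src) / SRC_RATE
tone = np.sin(2 * np.pi * 1000 * t_src)  # a 1 kHz test tone

n_dst = 441                              # the same 10 ms at 44.1 kHz
t_dst = np.arange(n_dst) / DST_RATE
resampled = np.interp(t_dst, t_src, tone)

print(len(tone), "->", len(resampled))   # 320 source samples -> 441 output samples
```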

As it is now it has slight aliasing when converting from 32 to 44. I guess I'll have to buy Premiere 5.0. The results in 4.2 do look good. I did have one sudden system crash. I think the software was making a statement about the quality of the last couple of cuts I had made (they were pretty ugly -- and were lost during the reboot -- but the project was unharmed overall).

I think there are more FireWire cards out there. I've been trying to see if I can buy the Pinnacle DV transfer utility by itself -- and I still don't know if it will work with the Presario FireWire. I tested the motoDV product, and it seems NOT to work with the Presario. The Adaptec transfer utility that comes with the Presario {which does SoftDV encode/decode} leaves much to be desired. I guess it's time to e-mail Pinnacle and the moto people with questions.

I replied:

Premiere 5 still has some serious problems with DV so you might want to stick with Ulead's product (which is bundled with the Compaq), or just use Premiere 4.2 till they fix 5.

Peter Flynn commented:

What's the problem with Premiere 5 and DV? So far, for me, 5 has been more stable than 4, but then I haven't had much time to do much work with it.

To which I said:

Premiere 5 has some general problems, like crashing way more often than it should. It wasn't quite done when it came out. Thus spaketh the newsgroups, and David Em has had some problems, and Mary Wehmeier has also heard these rumors. By sheerest coincidence, all of these people are now on the mailing list.

Part Two Section One:

Mark Minasi's Followup Questions, and My Answers

At 05:03 AM 9/24/98 -0400, Mark M. wrote:

My heavens, a whole thread just for li'l old me. I am indebted ... but of course my thirst for knowledge has not been entirely slaked. Permit me to ask further:

Mark has been asking about doing DV production with his new lil' MiniDV camcorder. My answers, in the main, have been aimed at producing a videotape at the end, whether another MiniDV, VHS, or whatever.

There's an undertone to this discussion, of course; all us computer geeks want to keep everything as digital as possible, and are offended by the idea of going through analog conversions, or anything else which causes data loss. As discussed here, you can do tapes (if they're cuts-only; read on) without further data loss. If you wish to send your masterpiece to someone to be viewed on their own computer, the situation is more complex, as is addressed in part II of Mark's Q&A.

Mark's questions were numbered, and the numbering is preserved here:

So my 27 MB/s calculation {see above} is overstated because of lossiness. Hmmm...

  1. Back in the bad old days of analog/digital conversion, I lost resolution when I went camera->PC. No extra loss whilst editing, as I can simply tell the video program NOT to do any more compression, so I can do all the fussing I like and introduce no new loss. I also lost resolution with the final PC->camera transfer. Sounds like this happens as well with DV cameras and FireWire -- loss when transferring TO the PC, and loss when transferring FROM the PC?

Ahh, but you forget, grasshopper, that digital is digital. As I said in the last message, but must have not stressed: If you're doing cuts-only, you stay digital the whole way through. You can pass it back and forth from camera to computer all day and not lose a bit--presuming, of course, that you don't lose any bits, either to tape dropouts or to hard disk hiccoughs.

It's only if you do anything which requires a decompression --> compression cycle that you will lose more quality. The most likely cause for this would be doing effects, like dissolves, titling, fades, layering. In this case, the streams would have to be decompressed, then the fiddling done, then recompressed to the MiniDV format. Thus, generational loss.

Otherwise, the video isn't decompressed. Oh, of course it was when you played back on the screen, but the original material wasn't affected; it just sat there on disk or on tape.

  2. If this is true, why bother? (And as long as you're busy impressing the masses here, please don't think you can get away with "there's less loss this way." How much less loss?) Is the DV camera doing the lossy compression, or is the capture board doing it? Getting back to my 27 MB transfer figure, it sounds like the FireWire is transmitting the 27 MB and some hardware codec on the compression board is doing the compression, yes?


Nein! It's compressed all the way. The ~3.5 Mbyte/second stream (the actual rate is under debate; see below) is recorded on the tape compressed, it's sent via FireWire compressed, it's sent back to the camera at this rate. Otherwise, yes, it'd have to be compressed upon hitting the camera, and poof!, generational loss (albeit totally digital).

Also, your original calculations of 27 MByte/second were incorrect, simply because there are not 24 bits of color being recorded or even captured. NTSC isn't capable of that much color fidelity, in any case. Also, lesser formats like MiniDV are throwing color info away before they even record it. It's a "4:1:1" format, which is a term I wish I understood fully. Essentially, though, MiniDV throws away much of the chroma (color) info before the stream is even compressed.
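For what it's worth, here is my understanding of what "4:1:1" costs, in rough numbers. The 720 x 480 NTSC DV frame size, one byte per sample, and quarter-horizontal-resolution chroma are my assumptions for illustration, not anything from a spec sheet:

```python
# Rough cost of 4:1:1 chroma subsampling, before any DCT compression.
# Assumptions (mine): a 720 x 480 NTSC DV frame, one byte per sample,
# chroma (Cb, Cr) kept at one-quarter horizontal resolution.
W, H = 720, 480

full_444 = W * H * 3               # luma and both chroma at full resolution
luma = W * H                       # 4:1:1 keeps luma at full resolution
chroma = 2 * (W // 4) * H          # Cb and Cr at 1/4 horizontal resolution
sub_411 = luma + chroma

print(full_444, sub_411, full_444 / sub_411)  # a clean 2:1 before compression even starts
```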

Remember, no further loss happens just because it's sent down the FireWire to the computer, or back; it's only when doing effects or working in post packages which don't understand the MiniDV format natively when you'll have further loss.

Most FireWire/1394 I/O boards for a PC are just that: an I/O chip on a PCI board. They pull in that (fully digital, now) data stream and pass it to the disk. Some (notably Pinnacle) have an onboard SCSI port to avoid going to the PCI bus and back.

Now, this scenario does present a problem: you cannot play back any images you've captured without a hardware codec. That is, you can capture it, because capturing MiniDV just means copying the ~3.5 MBps data stream to disk. You cannot play it back, because the data stream is compressed. There are five solutions to this problem:

  • Hook up your camcorder/deck to the FireWire port, and view the picture on your NTSC monitor (No extra expense, but you must have the camera hooked up)
  • Buy a FireWire board that *also* has a DV codec, such as the Fast or Canopus solutions, which then gets hooked to a monitor (About $3,000 at last count)
  • Buy an internal MiniDV deck, such as the Sony, which sits in your computer and acts as both a deck and a codec (About $2,000)
  • Use a program which transcodes the captured data to a thumbnail which can be viewed realtime on your computer monitor, and then view that. (No extra expense beyond the program, but transcoding can take hours.)
  • Use a SoftDV codec, that is, an all-software solution.

Peter Flynn's Compaq seems to manage this trick with no hardware assist, using the Adaptec SoftDV player. I have to wonder whether there isn't some other onboard hardware they're using to assist, like a versatile MPEG codec. But this may just be my suspicions climbing in where they're not warranted. There's certainly no reason that a software-only solution cannot be made to work at least for playback in realtime, with a sufficiently smart programmer and a sufficiently fast computer. But is a P/II-400 Good Enough?

Now, one "however" about losses just due to transmission between camera and computer. There is a case where I believe such losses happen, and that would be using any tape format but MiniDV and any other player than a standard MiniDV player. Anything faster than the ~25 Mbps (compressed) data stream of MiniDV--that is, DVCPro, and of course DVCPRO50 and DVCPro100--uses the Serial Digital Interface (SDI), not FireWire. SDI is fully digital, but it is my impression that it's a full-speed, uncompressed format.

Therefore, I believe, you would have a conversion loss, because the data must be recompressed at the computer, unless that same PC could handle full uncompressed video data. Comments, anyone?

  3. Just to be 100 percent sure I'm not missing the boat, this implies that if I were to go camera-PC-camera-PC-camera-PC-camera (as a result of a lot of editing) I'd have a pretty lame picture, right? If so, that's a shame. I'd sort of hoped I could store interim projects on mini DV cartridges without image penalty.

As I hope I've made clear now, if you're doing cuts-only, you can lay back to tape all day without any further image penalty.

  4. Roughly how DOES this lossy compression work? I gather it's not a temporal compression algorithm like MPEG, but instead some kind of sacrifice-chrominance-but-not-luminance approach like YUV?

I'd like to know that, too. Messrs. Glaskowsky and Rosen, could you comment?

  5. Why do I care about transcoding? Do I only care if I decide ultimately to distribute the resultant video file on a CD as MPEG? Is the idea here that whatever compression the capture board does (assuming I guessed right in (2)) doesn't correspond to any known software codec, thus meaning that no one with a regular old PC could play back an AVI created by the (for instance) Pinnacle? Good grief, that would be staggeringly stupid -- although not unthinkable, this IS the computer biz -- and I suppose I could always write out RGB, although the file could only be played back by the fellow with the new Presario.


Now we get to another whole area of contention. You want to do something besides take the video, edit it, and put it back on tape? Now you must transcode it to something that can be read by your target audience. MPEG might be a good choice as MPEG decode hardware becomes ubiquitous, but of course requires hardware. What about a software-only solution?

There's hardly any unanimity on this subject; some people like Real Media because there are players for PCs and Macs, some people like QuickTime because there are good players for UNIX, too.

There are replacements for AVI a-comin' along. Microsoft has two initiatives, AAF and ASF, which we will probably discuss in future messages.

Now, back to production. You must also transcode if you have an existing stock of video captured in some other format--e.g., Hi-8 captured on a Truevision card. This is of paramount importance if you want to mix work from multiple sources, as most videographers will want the option to do. Once created, of course, you then face the same problems mentioned above if you wish to send it out to the Net, or on a CD-ROM, or in any way other than videotape.

Thanks for the time, Alex. I didn't mean to make this a big production. In my youthful optimism I'd thought it just a simple matter of plug and play. Ah well. Thanks again.


Ahh, that it was! But then that's why David and I are writing this whackin' great big book on the whole subject. Your questions have just sparked a discussion I was itching to have anyway.

Part Two Section Two: A Quick Explanation of Compression

Mark Minasi had asked:

4) Roughly how DOES this lossy compression work? I gather it's not a temporal compression algorithm like MPEG, but instead some kind of sacrifice-chrominance-but-not-luminance approach like YUV?

Peter Glaskowsky replied:

Here's a very short and somewhat oversimplified description of the process:

JPEG and MiniDV use similar algorithms based on discrete cosine transforms (DCT). Starting with separate 8x8-pixel blocks for luminance and chrominance (the luminance blocks have the same resolution as the source material; the chrominance blocks are typically lower resolution), the DCT algorithm converts the pixel data from the spatial domain to the frequency domain, then throws away the higher-frequency content to achieve the desired bit rate.

What you end up with is the DC component (the average intensity of the whole block) and a series of components of increasing frequency up to some cutoff. Decompression just reverses the process, but since some of the high-frequency content is lost, you get errors.

This is why JPEG causes "ringing" around sharp-edged things. By removing the high-frequency components that permit sharp edges, you create slower edges and ringing. JPEG can be configured for lossless compression by retaining all the high-frequency content, but since most scenes aren't all that highly detailed you can still get decent compression ratios.

Sony chose to make MiniDV sufficiently different from JPEG so that you can't use any pre-existing JPEG engine to handle MiniDV-- but some new hardware codecs can do both, and of course it's quite possible to do either one in software. It's just more work to do it at full resolution and in real time than most CPUs can handle.
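Peter's description can be tried in a few lines of code. This is a generic orthonormal 8x8 DCT of my own construction--no actual DV or JPEG quantization tables--just the transform-truncate-invert idea he outlines:

```python
import numpy as np

# Generic 8x8 DCT sketch of the process Peter describes: transform a
# pixel block to the frequency domain, discard the high-frequency
# coefficients, invert, and see how small the error is on smooth
# content. NOT the real DV/JPEG pipeline -- just the core idea.
N = 8
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = np.sqrt(2 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0, :] = np.sqrt(1 / N)             # orthonormal DCT-II basis matrix

block = np.arange(64, dtype=float).reshape(N, N)  # a smooth ramp "image"
coeffs = C @ block @ C.T             # 2-D DCT (rows, then columns)

# Keep only the low-frequency coefficients; zero everything else.
i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
coeffs[i + j >= 4] = 0.0

restored = C.T @ coeffs @ C          # inverse DCT
error = float(np.abs(block - restored).max())
print(round(error, 2))               # small, relative to pixel values of 0..63
```

On smooth content the error stays small even after throwing most of the coefficients away; a sharp-edged block would show the "ringing" Peter describes.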


I'd also said:

There are replacements for AVI a-comin' along. Microsoft has two initiatives, AAF and ASF, which we will probably discuss in future messages.

Peter replied:

Since the MPEG-4 people have decided to accept QuickTime as the MPEG-4 native file format, there's some hope that Apple's QuickTime people will be able to persuade Microsoft to accept the QuickTime format as well. Though I think this would be a good thing, I have to admit it's unlikely.

--Peter Glaskowsky


As a followup to the above, I’d like to know if anyone has experience in using QuickTime to distribute audio on CDs for either PCs or Macs. Does Sound Forge do an adequate job of transcoding .WAVs to QT format?


DV From Someone Who Does It

Courtesy Chris Hartt, who passed this on, another county heard from: Jim Seavall, who seems to know a thing or two from the production side.


OK. I've seen enough of this thread where I have to jump in at some point (grin). A couple of points:

The DV formats all translate, at least in capture card terms, to 3.5 MB per second. Actually a shade over. This is because in capture card terms, the data is translated/transferred/captured after compression occurs in the camcorder. That's right, the compression occurs in the camcorder. So we have a data stream of ~3.5 MB per second, or 210 MB per minute, so a gig goes by every 5 minutes. Needless to say, that's why I have 30 GB of fast/wide storage.


This also presumes that your captured cuts are all under 10 minutes or so, since there is a 2 Gbyte file size limit on both PCs and Macs, currently, unless something happened while I wasn’t looking.

Regarding effects, etc. There are two ways of handling the compression loss:

A) If you capture the datastream in P1394 (FireWire) form, it's already digital, hence no further loss of video info. When you apply effects, cross dissolves, 3D, 2D, whatever, it's the effects (transitions, filters, etc.) designed by the manufacturer--how well they analyze and interpret the video stream--that determine the quality of the resultant effected video. This also affects how the footage looks after it's been run through their hardware/software codecs and effects for editing programs, such as Adobe Premiere.


I believe what Jim is saying is that the native MiniDV format isn’t handled directly by packages like After Effects (AE), so the data has to be transcoded into a format they can read before any other processing happens. Any decode/re-encode cycle is going to add compression loss, of course.

B) Most people I work with (directors, producers, MTV video editors) handle this as follows:

Regardless of the capture card, which could be Media 100 (my personal favorite), Avid/Truevision (Truevision makes Avid's cards for video capture) or Radius... they transfer their video to Adobe After Effects (for special effects) with an Animation compressor, because it's lossless (or as close as you can be). The video card manufacturers sometimes write special transition effects (like cross dissolves, etc.) specifically designed for their cards. Now, of course, we get into the YUV colorspace of their cards and the subjective quantization effects of how they captured the video data stream to begin with.

The Truevision card is very good (they make Avid's, remember) but the Media 100 has a slightly better interpretation of the raw video stream. An important point is that the Truevision and Radius cards translate the captured video into an RGB data stream after capture, the Media 100 leaves it in YUV colorspace.


I have read elsewhere that M100 does this; however, AE and sundry other products don’t do YUV, so all data must be transcoded before it’s used.

Raw uncompressed rates are usually as follows:

Video translated through the RGB process, 18.6 MB per second

Video untranslated through RGB is around 27-plus MB per second. Almost all video is translated if it appears on your RGB monitor, hence it's in the lower data rate format. That's why video card manufacturers consider 2:1 compressed video to be 300 KB per frame, or 9 MB per second.


It’s not clear to me whether he’s talking about video that’s transcoded for use in Premiere, edit* and the like, or if he means the native data rate inside the computer, when you’re watching it on your computer’s monitor.
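Either way, Jim's per-frame arithmetic is internally consistent; a quick check, with NTSC rounded to 30 frames/second and 1,000 KB to the MB (both my simplifications):

```python
# Checking Jim's per-frame figures, NTSC rounded to 30 frames/second.
FPS = 30

rate_2to1 = 300 * FPS / 1000  # "2:1 compressed video" at 300 KB per frame
rate_cuts = 150 * FPS / 1000  # his cuts-only figure of 150 KB per frame

print(rate_2to1, rate_cuts)   # 9.0 and 4.5 MB per second, as he quotes
```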

There is always the consideration that pure animation (not video generated, but computer generated) can be set at a higher data rate than anything else, since it hasn't been translated from another format. Of course, then there's the consideration of, is it in 24 bits of color (18.6mb data rate, usually) or 24 bits plus alpha channel, which adds another 8 bits to the equation. In any event, you have to understand the capabilities of your capture device's compression scheme to understand how the captured video is being handled.


But you can make the animation be rendered at the target resolution so as not to waste extra disk space. How intelligently your average 3D program deals with the target resolution, codecs, interlace vs. non-interlace, etc., is a subject I’d like to know more about.

As for data rates, "knowing" what a program wants or produces can be a poser. It presumes you can find out those characteristics, which are sometimes difficult to ferret out; most programs keep that information to themselves.

FireWire stays at 3.5 MB/s all the time; video hardware codecs from different manufacturers vary in their interpretation of the YUV color space. Keep in mind that there are resultant differences in how the video was handled {meaning how it was output}. There is the component signal, which separates the Red, Green and Blue colors before capture with the aid of 3-CCD cameras; then there is S-Video, which the thread covered pretty well; and of course, composite.

The important thing is this: can you tell the difference in the video with the naked eye? Are you that good? The reality is this: when you are doing special effects, the higher the data rate, the better, because of the added effect. However, if it's straight video, cuts only, a data rate of 150 KB per frame, or 4.5 MB per second, with a BetaCam SP or higher-end {mini}DV cam (like the Canon XL-1) as the input for raw video is considered broadcast quality... or at least CBS, ESPN, and MTV think so.


Are these truly broadcast quality? The audio isn’t captured at the standard 44.1 kHz rate, and must be upsampled from the native 32 kHz in order to work with it. In addition, the consumer camcorders don’t have XLR inputs. Are commercial producers actually using the XL-1 or Sony’s VX1000 for production? I’ve also read that the XL-1’s audio has a serious hum problem, but this may have been solved.

Just some random thoughts on the discussion you were having.


Jim Seavall


Notes From Mark Randall of Play

One of the other people I sent this discussion to was Mark Randall, VP of Marketing for Play, who sends this food for thought. Play, for those of you who came in late, sells Trinity, the production-system-in-a-box that’s hosted on NT. Minor edits provided by me, otherwise straight off his fingers.

Hi Alex,

Wow... Quite a can a cheese you've opened with the DV page (to mangle a metaphor).

As to the difference between I-Link and IEEE1394... Some Japanese friends from Sony came up to me at IBC (these guys sort of run the Mini-DV stuff there) and asked us to refer to things as I-Link in our presentations. Now, there was some broken English involved here (a little of it on my part), so I'm not sure I got the whole thing straight, but it seems that I-Link is exactly IEEE 1394/Mini-DV. The reason for the new name has to do with trademarks/marketing, etc., and not technology. Anyway, reading between the lines, that's what I thought they meant (reading between their lines is hard because they all go up and down and you have to turn your head sideways, but that's another story).

Anyway, my personal $.02 on the whole DV/MPEG2 thing is that DV/MPEG2 is a great acquisition format but not an ideal editing format. While it's true that DV can be edited cuts-only without decompressing it, I've never seen a video production without a dissolve, title, fade to/from black, graphic, or something. Therefore, we must realize that in practice all productions will end up decompressing and recompressing their video. Most productions will end up decomp/recomping multiple times. To get around these problems, I believe that the following steps should be taken:

    1. Design systems that perform all effects, titles and graphic insertions in the same pass, to minimize decomp/recomp.
    2. All future architectures should support non-compressed as well as compressed video seamlessly. There will always be situations where deep multi-layering is required or absolute pristine quality is demanded. If an architecture relies on some form of compression to move data around then it is not very 'future-proof'. While 22 MB/s seems like a lot of data now, in a few years it will be trivial.
    3. More specifically, make sure that the architecture has no problem moving non-compressed streams (with alpha) around in real-time. If it is bandwidth-limited to 3.6 MB a second (or whatever), it will be a problem someday soon.

    4. Future architectures should be compression algorithm agnostic. If there's one thing we know for sure, it's that the state-of-the-art in compression is moving fast. Also, keep in mind that DCT-based compression algorithm artifacts are additive. M-JPEG is DCT. Almost all NL editing systems use M-JPEG to compress video during editing. DBS Satellite, digital cable and DVD are all now very common ways to receive video. They all use MPEG2. MPEG2 is DCT-based, too.
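Mark's bandwidth point is easy to quantify. A sketch of how many uncompressed streams a given bus could carry in real time; the 132 MB/s bus figure (32-bit PCI peak), the 720x486 frame, and the 4-byte RGB-plus-alpha depth are my own illustrative assumptions, not figures from the discussion:

```python
def streams_supported(bus_mb_per_s, width=720, height=486,
                      bytes_per_pixel=4, fps=29.97):
    # Per-stream rate for uncompressed RGB + alpha, in decimal MB/s,
    # and how many such streams fit within the bus bandwidth.
    per_stream = width * height * bytes_per_pixel * fps / 1_000_000
    return int(bus_mb_per_s // per_stream), per_stream

count, per_stream = streams_supported(132)  # hypothetical 132 MB/s bus
print(count)                  # 3
print(round(per_stream, 1))   # 41.9
```

Three layers is not "deep multi-layering," which is exactly why a bandwidth-limited architecture stops being future-proof so quickly.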


But of course, slightly different DCT algorithms, gentle readers. Which leads to this problem.

One broadcast network told me about the day they pulled all the M-JPEG systems off-line because of the 'invisible' quality problem. What this means is that the video looked fine when it was M-JPEG compressed, edited, and then decompressed and shipped out to the universe. The problem was when it hit DirecTV, Dish Network, digital cable, etc. It got compressed again by another (slightly different) DCT algorithm and then uncompressed. Oops! Blocks on blocks. Bad news. Of course, they never knew it until the network President tuned in at home on his new 18" dish.

Of course, the ideal answer is to not compress at all in production, but that's going to be an expensive solution for a little while longer. In the meantime, since you acquire in DV (DCT) and have to at least account for possible distribution in MPEG2 (DCT), then produce using a non-DCT algorithm.

Compression is a fact of life. Acquiring, producing and distributing video all involve compression. It's a cascade process, and as your video moves through each step you must make the proper compression choices based on your knowledge of where the video has been and where it's going. Companies have asset management programs, so I guess it would be proper to think of this as a compression management strategy (CMS). Allowing users to choose the best CMS for their current projects/budgets/equipment and respond to changing conditions is why production architectures should be independent of any compression algorithm or the need to use compression at all.
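The "blocks on blocks" effect can be mimicked with a toy model: requantizing with a mismatched step size adds error, while requantizing with a matching step does not. This is only an illustration of the principle, not actual M-JPEG or MPEG2 math:

```python
def quantize(value, step):
    # Round to the nearest multiple of `step` -- a crude stand-in
    # for the coefficient quantization a DCT codec performs.
    return round(value / step) * step

coeff = 50                          # a hypothetical DCT coefficient
edited = quantize(coeff, 7)         # first codec (the edit suite)
rebroadcast = quantize(edited, 8)   # second codec with a different step
same_codec = quantize(edited, 7)    # recompressing with a matching step

print(abs(edited - coeff))       # 1: error after one generation
print(abs(rebroadcast - coeff))  # 2: mismatched steps compound the error
print(abs(same_codec - coeff))   # 1: a matching quantizer adds nothing new
```

That is the cascade problem in miniature: each codec with a different quantization grid can shift the data again, and the errors accumulate.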

Now, before you run off thinking that I'm being biased towards Trinity's design philosophy, let me point out that I helped (along with many others) to design Trinity. Therefore, in actuality, it is Trinity that is designed to fit this philosophy, not the other way around. (A subtle distinction, to be sure.)


--- Mark

