Intermediate Codecs in Premiere Pro
Many different codecs can be used in the editing process, and there is a significant advantage to using popular, widely supported ones. Check out the table.
By "lossiness" I mean the amount of data that is retained by the codec, only some of which you can see. The question is: if I had an uncompressed image and then compressed it with this codec, how similar would the new image be to the old one?
How much information is lost in the transcode? If the two images are very similar, then the codec is not very lossy. Lossiness is a combination of the techniques the particular codec uses and its bitrate. A more lossy codec is not necessarily bad; using one can be a really smart move because of how much space it saves.
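As a toy illustration of that question (not a real codec), you can simulate a lossy round trip by quantizing pixel values and measuring how far the restored image drifts from the original. The pixel values and quantization steps below are made up for the example:

```python
# Toy illustration of lossiness: "compress" by quantizing pixel values,
# then measure how similar the round-tripped image is to the original.
# Coarser quantization ~ lower bitrate ~ more loss.
original = [12, 57, 103, 148, 201, 230]  # made-up 8-bit pixel values

def compress_decompress(pixels, step):
    # Quantize each value to the nearest multiple of `step`.
    return [round(p / step) * step for p in pixels]

for step in (4, 16, 64):  # smaller step ~ less lossy
    restored = compress_decompress(original, step)
    error = sum(abs(a - b) for a, b in zip(original, restored)) / len(original)
    print(f"step={step:2d} restored={restored} mean error={error:.1f}")
```

At step 4 the restored values are nearly identical to the originals; at step 64 they have drifted badly, which is the per-pixel version of "the compressed image falls apart."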
You should care because you may want to change the image: if you are doing any sort of color correction, you will be changing it. Compare an uncompressed image with a heavily compressed H.264 version. The visual quality is about the same; the H.264 file looks just about as good to the eye, and it is much easier to upload to the internet. The trouble with the H.264 file starts when you manipulate it. What if you wanted to increase the exposure? Now we can see where the highly-compressed image falls apart.
Her hair and shirt look terrible in the H.264 version. This is why you really want a high-quality codec when you capture the image. Every project starts with the codec you capture in the camera and ends with a delivery codec that you export and hand to your client or upload to the web. In the simplest case, you do all of your editing and color correction right on the camera files, but most of the time it gets a little more complicated.
You might transcode to a different codec for editing, potentially for color correction, and definitely for VFX. But it all starts with… Generally speaking, you should aim for the highest-quality codec that your camera or your budget can capture. That means less-lossy codecs: less compression, higher bit depth, and less chroma subsampling. The more information you have at capture, the more flexibility you will have later.
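To see why bit depth and chroma subsampling govern "how much information you have," here is some back-of-the-envelope arithmetic for an uncompressed 1920x1080 frame. The three combinations shown are just common examples, not a claim about any specific camera:

```python
# Uncompressed bits per frame at different bit depths and subsamplings.
# Chroma fraction is the number of chroma samples relative to luma:
# 4:4:4 keeps full chroma (1.0), 4:2:2 halves it (0.5), 4:2:0 quarters it (0.25).
def bits_per_frame(width, height, bit_depth, chroma_fraction):
    luma = width * height * bit_depth
    chroma = 2 * width * height * chroma_fraction * bit_depth  # two chroma planes
    return luma + chroma

w, h = 1920, 1080
for label, depth, frac in [("8-bit 4:2:0", 8, 0.25),
                           ("10-bit 4:2:2", 10, 0.5),
                           ("12-bit 4:4:4", 12, 1.0)]:
    mbits = bits_per_frame(w, h, depth, frac) / 1e6
    print(f"{label}: {mbits:.1f} Mbit per uncompressed frame")
```

The 12-bit 4:4:4 frame carries roughly three times the data of the 8-bit 4:2:0 frame before any compression is applied, which is exactly the extra latitude you feel when grading.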
Of course, you also have to consider a lot of other practical factors in this decision; otherwise we would always be shooting 8K raw, right? The first consideration is obviously cost: generally speaking, the more expensive the camera, the higher-quality the codecs available on it.
Tip: Better Codecs with External Recorders
One way to capture higher-quality codecs on cheaper cameras is to use an external recorder.
These devices, many of which can double as external monitors, take an uncompressed signal from the camera via HDMI or SDI and compress it separately. So you end up with two copies of your footage: one heavily compressed on the camera, and a second, lightly compressed copy on the external recorder. The key thing here is that the camera sends the signal out to the recorder before compressing it. One important note: many cheaper cameras only output an 8-bit signal. An external recorder might be able to compress to a higher-bit-depth codec.
But if the camera is only sending 8 bits, the recorder can only record 8 bits. The second factor to consider is storage space: high-quality codecs tend to have higher bitrates, which means larger files, and you may also have to upgrade your memory cards to be able to record the high-bitrate data. Another factor is how much color correction and VFX (collectively referred to as "finishing") you plan to do. The last factor to consider is your editing machine.
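The storage-space factor above is simple bitrate arithmetic, and the same numbers tell you what sustained speed your cards and drives must handle. The bitrates below are hypothetical round numbers for illustration, not official codec specs:

```python
# Rough storage math: file size grows linearly with bitrate.
# The bitrates below are hypothetical examples, not codec specs.
def gigabytes_per_hour(mbps):
    # megabits/s * 3600 s/hour / 8 bits per byte / 1000 MB per GB
    return mbps * 3600 / 8 / 1000

for name, mbps in [("long-GOP camera codec", 100),
                   ("intermediate edit codec", 150),
                   ("lightly compressed capture", 400)]:
    gb = gigabytes_per_hour(mbps)
    mb_per_s = mbps / 8  # sustained speed your card/drive must keep up with
    print(f"{name} ({mbps} Mb/s): ~{gb:.0f} GB/hour, needs ~{mb_per_s:.0f} MB/s")
```

Doubling the bitrate doubles both the storage bill and the write speed your media has to sustain, which is why high-bitrate capture often forces a memory-card upgrade too.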
Most capture codecs are not well suited to editing without a high-performance computer, and very-high-bitrate codecs may require high-speed hard drives or data servers. Pretty much all of the major software packages can now edit any codec that your camera creates, so should you edit the camera files natively or transcode into another format? Transcoding itself can take time. Well, it depends. There are two main factors you need to consider when choosing your edit codec: compression type and bitrate. I explain which codecs are best for editing in the next section.
Most lower- to mid-range cameras record with codecs that use temporal compression, also known as long-GOP compression. The simple explanation of long-GOP is that, for each frame, the codec only captures what has changed between this frame and the previous frame.
The difference between this frame and the last frame is just a few pixels, so all you need to store is a few pixels. The issue, however, is that these codecs tend only to work well when played forward. Editing means playing backward, jumping around, and skimming, and it takes a lot more processing power to do those things quickly with a long-GOP codec.
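The long-GOP idea can be sketched in a few lines. This is a toy model of the concept, not a real codec: store the first frame in full, then store only the changed pixels for each following frame.

```python
# Toy sketch of long-GOP compression. Frames are flat lists of pixel values.
frames = [
    [10, 10, 10, 10],   # keyframe (I-frame): stored completely
    [10, 10, 99, 10],   # one pixel changed
    [10, 10, 99, 12],   # one more pixel changed
]

def encode(frames):
    stored = [("I", list(frames[0]))]
    for prev, cur in zip(frames, frames[1:]):
        # Store only (index, new value) pairs that differ from the previous frame.
        delta = [(i, v) for i, (p, v) in enumerate(zip(prev, cur)) if p != v]
        stored.append(("P", delta))
    return stored

def decode(stored):
    out = [list(stored[0][1])]
    for kind, delta in stored[1:]:
        frame = list(out[-1])        # start from the previous frame...
        for i, v in delta:           # ...and patch in the changes
            frame[i] = v
        out.append(frame)
    return out

encoded = encode(frames)
print(encoded)          # tiny deltas instead of whole frames
print(decode(encoded))  # forward playback reconstructs everything
```

Notice that decoding frame N means replaying every delta since the last keyframe. That is cheap when playing forward, but jumping to an arbitrary frame or scrubbing backward forces the decoder to do that replay over and over.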
A high-end computer might have no trouble, but even a mid-range computer will lag and stutter when you skim through long-GOP footage quickly or jump around. Intraframe codecs, by contrast, compress every frame independently, so even a mid-range computer can skip around very smoothly. The other thing that can cause playback issues is raw video: raw footage needs to be converted before it can be displayed, sort of like a codec does.
Ironically, both the low-end cameras and the highest-end cameras produce files that are hard to edit! The good news is that hard drives are getting faster every day, though the average external hard drive is only just barely fast enough to play back high-bitrate footage. Here are some rough guidelines for common data storage speeds; there will always be certain models that underperform or overperform.
Shooting in log is a way of preserving as much of your dynamic range as possible. This lets you capture a scene that has bright highlights and dark shadows without blowing out the highlights or crushing the blacks.
Blown-out highlights are a particularly nasty side effect of shooting on video instead of film, so shooting in log can help make your footage feel more cinematic. Log footage looks flat and washed out straight out of the camera, though, so it needs to be converted back to normal contrast and color. The most common way to do that is to apply a LUT to your footage.
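Mechanically, a LUT is just a lookup table that maps each input code value to an output value. Real LUTs are usually 3D (RGB in, RGB out) and come from the camera maker; the sketch below uses a made-up 1D gamma curve on 8-bit values purely to show the mechanism:

```python
# Minimal sketch of a 1D LUT: every possible 8-bit input value is
# precomputed to an output value, then applied by simple indexing.
# The 0.6 gamma curve here is an arbitrary stand-in for a real LUT.
lut = [round(255 * (v / 255) ** 0.6) for v in range(256)]

def apply_lut(pixels, lut):
    return [lut[p] for p in pixels]

flat_log_pixels = [30, 60, 120, 180]   # hypothetical flat-looking log values
print(apply_lut(flat_log_pixels, lut)) # lifted toward normal contrast
```

Because the table is precomputed, applying it is a cheap per-pixel lookup, which is why NLEs can apply LUTs in real time, though stacking them on every clip still adds up.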
This means that your editor will need to apply the appropriate LUT to all of the clips when editing. This can be annoying to manage, and it can also slow down the computer a bit.
There's a reason why color grading was usually done last. Here's a suggested workflow. First, make a "one-light" grade of your raw footage, where you correct any obvious problems such as white balance and exposure, and apply any necessary LUTs if your camera uses a profile that needs correction. It's called "one light" because you don't grade every shot; you just set a ballpark grade for each scene and apply it to everything.
This shouldn't take too much of your time; you just want the footage to be usable for your edit. Now export the output to Premiere using an intermediate codec, and (importantly) export as individual clips with the same file names as the originals, but in a different location, obviously.
This intermediate doesn't have to be at the highest settings. This speeds up the edit process: the files will be smaller, so the edit will be more responsive, with less rendering time, etc. Next, do your cutting, lock your picture, and export the timeline as XML.
You can move or delete your intermediate files now, so that when you go back to Resolve to do the grade you relink to your camera originals at full quality. Now you do the proper grade, where you create the look you want, balance between shots, do any secondary grading, etc. This way you get the best possible quality without spending time grading footage that you end up not using in the edit.
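The reason same-named intermediates relink cleanly is that the conform step matches clips by file name, so the same timeline can point at either folder. A hedged sketch of that idea, with made-up paths and clip names:

```python
# Sketch of name-based relinking (the paths and clip names are invented).
# A timeline references clips by name; relinking just means finding a
# file with that name in whichever media folder you point it at.
from pathlib import PurePath

timeline_clips = ["A001_C003.mov", "A002_C011.mov"]  # names referenced by the XML

def relink(clip_names, media_files):
    # Map each clip name to the matching file in the chosen media folder.
    by_name = {PurePath(f).name: f for f in media_files}
    return {name: by_name.get(name) for name in clip_names}

originals = ["/media/camera/A001_C003.mov", "/media/camera/A002_C011.mov"]
proxies   = ["/media/proxy/A001_C003.mov",  "/media/proxy/A002_C011.mov"]

print(relink(timeline_clips, proxies))    # cut against the intermediates
print(relink(timeline_clips, originals))  # grade against the camera originals
```

This is why keeping the file names identical (and only the location different) is the one non-negotiable rule of the round trip: change a name and the conform has nothing to match on.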
First, your original video files stay on your source drive, while proxies can be created on the SSD, which further speeds up your editing.
Second, when you are finished with your project and move it off the SSD and archive it elsewhere, you can safely delete the proxies without breaking your project. The project will still keep the data link to the original source files since they have never moved. You can then, in the future, open up your project and re-edit using the original files, or even recreate proxies if you need to do major re-edits.
As we discussed, using proxies allows you to edit with an intermediate codec such as ProRes. If your original files are in MP4, you will see improved performance scrubbing through your timeline. This is HUGE! I recently found this giant advantage while travelling for work. I was in the middle of an edit on my desktop PC when I had to travel, and since I usually travel with a Surface Pro 3, I thought it would be great to continue editing on the road.
But how do you take the project with you without breaking everything? Without proxies, you would need to copy the project file and original source files to your laptop, then relink all the media using the file structure on the laptop. When you get home, you would have to relink again using your PC file structure. With proxies, you only need to bring the project and proxy files with you, leaving the originals at home.
No data links get broken in the process. Once the proxy files are on the laptop, you only need to copy the project file back and forth as you edit. This also has the advantage of editing with ProRes on the laptop. Since laptops usually have less powerful CPUs than your desktop, this helps tremendously.
OK, so enough discussion, just make the proxies already and get back to work! This is super easy, we will start from scratch and show the process I use.