8-bit codec
The problem is not with how the clips were recorded but how they are being exported. It seems that you are exporting using the YouTube destination, and that will probably create an H.264 file. I am guessing that this is where the 8-bit limitation is showing. Dec 17. YouTube now supports HDR. I have tried both doing this through Compressor directly to YouTube, which is where the warning shows, and sharing as a master file at ProRes HQ, which is surely 10-bit.
Dec 30, in response to W: This seems like a glitch, or a constant warning that appears regardless of the bit depth that is used. I get the same thing using 10-bit HLG footage. How in the world can that be considered an 8-bit video file? Dec 30, in response to patrick: The color profile shows as Rec. 2020. Those files are 10-bit HLG from the GH5. I should not be getting an 8-bit file warning. Dec 31.
Also, is 12-bit better than 10-bit, or are the rounding errors insignificant at that point? Well, 10-bit has 4 times the resolution of 8-bit, so yes, rounding errors will be smaller.
Assuming you are encoding for end use, the benefit will depend on the source material and the display medium. The same is true for 12-bit, in principle, but the extra resolution over 10-bit shouldn't be significant for normal end use.
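To put rough numbers on the resolution point, here is a quick sketch (plain Python; the figures are just the code-value counts and quantisation step sizes implied by each bit depth):

```python
# Compare how finely 8-, 10- and 12-bit encoding can represent a 0.0-1.0 signal.
for bits in (8, 10, 12):
    levels = 2 ** bits              # number of distinct code values
    step = 1.0 / (levels - 1)       # size of one quantisation step (full scale = 1.0)
    print(f"{bits:>2}-bit: {levels:>5} levels, step = {step:.6f} of full scale")

# Prints roughly:
#  8-bit:   256 levels, step = 0.003922 of full scale
# 10-bit:  1024 levels, step = 0.000978 of full scale
# 12-bit:  4096 levels, step = 0.000244 of full scale
```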
If storage and drive speed are not an issue, then obviously throwing more data at it will make the encoding better, but the difference in quality might not match the increase in size.
I use 12-bit footage in production. It's huge, but it has the advantage that the video has waaay more latitude for grading: you can pull down over-exposed shots and bring up details in shadows without it all turning to mush, like 8-bit footage does if you push it too hard. However, if you aren't going to process the video any more, you don't need the extra latitude, especially if the extra information didn't exist in the first place.
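A rough way to see that latitude difference in numbers is to quantise a dark ramp at each bit depth and then push it in post. This is a simplified sketch assuming a plain linear quantiser and a 2-stop push; it is not a model of any real camera or log curve:

```python
import numpy as np

def distinct_levels_after_push(bits: int, stops: float = 2.0) -> int:
    """Quantise a smooth shadow ramp, push it up by `stops` stops,
    and count how many distinct values are left to describe it."""
    ramp = np.linspace(0.0, 0.05, 10000)              # the darkest 5% of the signal
    maxcode = 2 ** bits - 1
    quantised = np.round(ramp * maxcode) / maxcode    # simulate recording at this bit depth
    pushed = np.clip(quantised * (2 ** stops), 0, 1)  # brighten in post
    return len(np.unique(pushed))

for bits in (8, 10, 12):
    print(f"{bits}-bit shadows pushed 2 stops: {distinct_levels_after_push(bits)} distinct levels")

# Prints 14, 52 and 206 distinct levels for 8-, 10- and 12-bit respectively:
# the fewer levels there are to stretch, the sooner the pushed shadows band or "turn to mush".
```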
Sure, 10 or 12 bits will mean less quantisation error, but quantisation error is trivial compared to the losses you get in a lossy codec. Because raw anything is huge.

Why recording LOG with an 8-bit codec is most probably going to get you in trouble.
Don Kotlos, posted July 27: Kudos to Canon. Do yourself a favor when you are using any log profile and record with a 10-bit codec.
Julian, posted July 27: My personal opinion: most people are crap at grading. I think this is a bigger issue than 8-bit vs 10-bit. (Neuroscientist, studying how our brain processes visual input.) Could be true too.

Guest Ebrahim Saadawi, posted July 28: I've de-logged and heavily graded some 8-bit log images without issues (C) and de-logged and pushed 12-bit log images with lots of issues (BM4K). There's more than the codec to the final grade-ability of the image, starting from the lens, the image sensor performance (noise performance, cleanliness, FPN, DR, colour dye, resolution) and the processor tricks (NR, colour science, WB, exposure, gain, gamma curve) happening before compression; all of these directly affect how much the image can be pushed and de-logged with or without issues, not just 8-bit vs 10-bit.
Don Kotlos, posted July 28: When V-Log gets released it will be interesting to redo that test.
The sections that follow present some methods for converting 4:2:0 and 4:2:2 formats to 4:4:4. This section describes an example method for performing the vertical upconversion from 4:2:0 to 4:2:2. The method assumes that the video pictures are progressive scan. The 4:2:0-to-4:2:2 interlaced scan conversion process presents atypical problems and is difficult to implement.
This article does not address the issue of converting interlaced scan from 4:2:0 to 4:2:2. Let each vertical line of input chroma samples be an array Cin[] that ranges from 0 to N - 1. The corresponding vertical line on the output image will be an array Cout[] that ranges from 0 to 2N - 1. To convert each vertical line, copy each input sample to its even output position and interpolate each odd output position from the four nearest input samples, clamping the neighbour indices at the edges of the picture; a sketch of the process is given below. The equations for handling the edges can be mathematically simplified; they are written out with explicit clamping to illustrate the clamping effect at the edges of the picture. In effect, this method calculates each missing value by interpolating the curve over the four adjacent pixels, weighted toward the values of the two nearest pixels. The specific interpolation method used in this example generates missing samples at half-integer positions using a well-known method called Catmull-Rom interpolation, also known as cubic convolution interpolation.
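A minimal sketch of the per-line process just described, assuming 8-bit chroma samples; the function names and the clip-to-0..255 helper are my own, and the (-1, 9, 9, -1)/16 weights are simply the Catmull-Rom kernel evaluated at the half-sample position:

```python
def clip8(x: int) -> int:
    """Clamp to the 0-255 range of an 8-bit sample."""
    return max(0, min(255, x))

def upsample_line_2x(cin: list[int]) -> list[int]:
    """Vertically upsample one chroma line: copy each input sample to the
    even output positions and interpolate the odd positions with the
    Catmull-Rom half-sample kernel (-1, 9, 9, -1) / 16.  Neighbour indices
    are clamped so the edge samples repeat, which is the clamping effect
    described in the text."""
    n = len(cin)
    cout = [0] * (2 * n)
    for i in range(n):
        cout[2 * i] = cin[i]                 # original sample, unchanged
        p0 = cin[max(i - 1, 0)]              # clamp at the top edge
        p1 = cin[i]
        p2 = cin[min(i + 1, n - 1)]          # clamp at the bottom edge
        p3 = cin[min(i + 2, n - 1)]
        cout[2 * i + 1] = clip8((9 * (p1 + p2) - (p0 + p3) + 8) >> 4)
    return cout

# Example: upsample a short line of chroma samples from N = 4 to 2N = 8 values.
print(upsample_line_2x([100, 120, 200, 50]))
```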
In signal processing terms, the vertical upconversion should ideally include a phase shift compensation to account for the half-pixel vertical offset (relative to the output sampling grid) between the locations of the 4:2:0 sample lines and the location of every other 4:2:2 sample line. However, introducing this offset would increase the amount of processing required to generate the samples, and it would make it impossible to reconstruct the original 4:2:0 samples from the upsampled image.
It would also make it impossible to decode 4:2:0 video directly into 4:2:2 surfaces and then use those surfaces as reference pictures for decoding subsequent pictures in the stream. Therefore, the method provided here does not take into account the precise vertical alignment of the samples. Doing so is probably not visually harmful at reasonably high picture resolutions. If you start with 4:2:0 video that uses the sampling grid defined in H.261, H.263, or MPEG-1, the chroma samples also carry a half-pixel horizontal offset relative to the luma sampling grid; the more common MPEG-2 style of 4:2:0 sampling does not have this horizontal offset. Moreover, the distinction is probably not visually harmful at reasonably high picture resolutions.
Trying to correct for this problem would create the same sort of problems discussed for the vertical phase offset. The method described previously for vertical upconversion can also be applied to horizontal upconversion. To go from 4:2:0 to 4:4:4, convert the 4:2:0 image to 4:2:2, and then convert the 4:2:2 image to 4:4:4. You can also switch the order of the two upconversion processes, as the order of operation does not really matter to the visual quality of the result.
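A sketch of the full 4:2:0-to-4:4:4 chain under the same assumptions (NumPy, my own function names), applying the half-sample interpolation first down the columns (4:2:0 to 4:2:2) and then across the rows (4:2:2 to 4:4:4); swapping the two calls gives the same visual result, as noted above:

```python
import numpy as np

def upsample_axis_2x(chroma: np.ndarray, axis: int) -> np.ndarray:
    """Double the resolution of a chroma plane along one axis by copying
    the original samples and interpolating the in-between samples with the
    Catmull-Rom half-sample kernel (-1, 9, 9, -1) / 16."""
    c = np.moveaxis(chroma.astype(np.int32), axis, 0)
    # Clamp-pad one sample before and two after, so the edge samples repeat.
    padded = np.concatenate([c[:1], c, c[-1:], c[-1:]], axis=0)
    p0, p1, p2, p3 = padded[:-3], padded[1:-2], padded[2:-1], padded[3:]
    half = np.clip((9 * (p1 + p2) - (p0 + p3) + 8) >> 4, 0, 255)
    out = np.empty((2 * c.shape[0],) + c.shape[1:], dtype=np.uint8)
    out[0::2] = c          # even positions: original samples
    out[1::2] = half       # odd positions: interpolated samples
    return np.moveaxis(out, 0, axis)

def chroma_420_to_444(chroma: np.ndarray) -> np.ndarray:
    """4:2:0 -> 4:2:2 (vertical pass), then 4:2:2 -> 4:4:4 (horizontal pass)."""
    return upsample_axis_2x(upsample_axis_2x(chroma, axis=0), axis=1)

# Example: upsample an 8x8 quarter-resolution chroma plane to full 16x16 resolution.
cb = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
print(chroma_420_to_444(cb).shape)   # (16, 16)
```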