You're trying to stack already downsampled/compressed video. You need to be stacking pure 1:1 frames, like you get from an actual planetary camera.

I get your point, but I'm not sure it really applies. The Nikon P1000 has a 16-megapixel sensor, and I usually record 4K video at around 1.5x digital zoom, so the pixels on the sensor correlate closely with the pixels in the final image. There might be some slight upsampling, i.e. a sensor pixel (element) might contribute to more than one pixel in the final image, but I think the main point is to get the data off all of the relevant sensor pixels and not to leave any of them out.

As for compression, I'm not sure that applies either. I record at the highest possible settings, but because 99.99% of each image comprises just black pixels, the compressor comes nowhere near using its allocated bandwidth for the video file. In that case, a compression algorithm like H.264 should be using that bandwidth to store the more interesting pixels at the highest possible quality.

All that said, you definitely raised some very interesting points that I would like to investigate further.
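As a rough sanity check on the "99.99% black pixels" figure, here is a minimal NumPy sketch. The brightness threshold (8 on a 0-255 scale) and the synthetic frame geometry are assumptions for illustration, not measurements from the actual footage:

```python
import numpy as np

def black_fraction(frame: np.ndarray, threshold: int = 8) -> float:
    """Fraction of pixels whose brightest channel is below `threshold` (0-255)."""
    return float(np.mean(frame.max(axis=-1) < threshold))

# Synthetic 4K frame: black sky with a small bright "planet" disc
# roughly 80 pixels across, similar in scale to a planetary target.
frame = np.zeros((2160, 3840, 3), dtype=np.uint8)
yy, xx = np.ogrid[:2160, :3840]
disc = (yy - 1080) ** 2 + (xx - 1920) ** 2 < 40 ** 2
frame[disc] = 200

print(f"black fraction: {black_fraction(frame):.4%}")
```

On a frame like this the dark fraction comes out above 99.9%, which is consistent with the idea that a rate-controlled encoder has far more bitrate budget per interesting pixel than the headline bitrate suggests.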