
Validated efficient image compression for quantitative and AI applications

Tuesday, September 22, 2020
4:46 PM - 5:10 PM

Speaker

Attendee31
Dotphoton AG


Abstract Submission

An increasing proportion of Earth observation applications use quantitative and AI algorithms. In this paper we consider the requirements that such algorithms impose on image compression and how to validate them. We then present our approach to raw optical image compression, which achieves a compression ratio in the range 5:1–10:1 with an SNR loss of 1.25 dB at up to 200 Mpix per second in both software and FPGA.

Historically, lossless image compression has been used to handle raw data, as it allows for accurate post-processing, easy archival and translation across different lossless formats. This comes at the price of a low compression ratio, typically 1.5:1 and rarely above 2:1. Higher compression ratios can be achieved at the expense of image information loss and limited flexibility, as chaining lossy compression algorithms may generate unforeseen interactions and artefacts. For consumer AI applications this type of lossy compression is acceptable; however, to achieve their high compression ratios, such algorithms often sacrifice the fine information that allows AI algorithms to perform image enhancements such as sharpening, super-resolution, denoising, segmentation and many others in a quantitatively accurate way. To ensure that a compression algorithm is suitable for both foreseen and unforeseen applications, a general design and testing approach can be taken that yields a clear specification of the suitability of images at a given compression ratio. The approach we present here relies on a physical model of the sensor with which the image was taken: it generates a compressed raw image that is statistically consistent with an image arising from another (or the same) sensor of given specifications and then being losslessly compressed, and which is therefore suitable for post-processing, archival or format translation.
In particular, we demonstrate how these concepts can yield an algorithm able to provide the high performance stated above within a constrained power and FPGA footprint.
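
The talk does not disclose the algorithm itself, but the idea of compressing relative to a physical sensor noise model can be illustrated with a common related technique: a variance-stabilizing transform followed by coarse quantization and lossless coding. The sketch below is an assumption-laden illustration, not Dotphoton's method; the Poisson-Gaussian parameters GAIN, READ_NOISE and OFFSET are hypothetical, and zlib merely stands in for a production lossless coder.

```python
import numpy as np
import zlib  # stand-in for a production lossless coder

# Hypothetical sensor parameters (assumed, not from the talk):
GAIN, READ_NOISE, OFFSET = 2.0, 3.0, 100.0  # e-/DN, read noise in e-, DN offset

def to_electrons(dn):
    """Map raw digital numbers to estimated photo-electrons."""
    return np.maximum(dn - OFFSET, 0.0) * GAIN

def generalized_anscombe(e, sigma=READ_NOISE):
    """Variance-stabilizing transform for Poisson-Gaussian noise:
    after the transform the noise std is approximately 1 everywhere."""
    return 2.0 * np.sqrt(e + 3.0 / 8.0 + sigma**2)

def compress(raw, step=0.5):
    """Quantize in the stabilized domain with a step that is a fixed
    fraction of the (unit) noise std, then entropy-code losslessly."""
    t = generalized_anscombe(to_electrons(raw.astype(np.float64)))
    q = np.round(t / step).astype(np.uint16)
    return zlib.compress(q.tobytes(), level=9), q.shape

def decompress(blob, shape, step=0.5):
    q = np.frombuffer(zlib.decompress(blob), dtype=np.uint16).reshape(shape)
    e = (q * step / 2.0) ** 2 - 3.0 / 8.0 - READ_NOISE**2  # invert the transform
    return e / GAIN + OFFSET                               # back to digital numbers

raw = np.random.poisson(500, size=(512, 512)).astype(np.uint16) + 100
blob, shape = compress(raw)
print(f"compression ratio ~ {raw.nbytes / len(blob):.1f}:1")
```

Because the quantization step is tied to the (stabilized) photon and read noise, the added error stays a fixed, small fraction of the sensor's own noise at every signal level, which is what makes the output statistically indistinguishable from a slightly noisier losslessly compressed capture.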

We then discuss the advantages of embedding the physical sensor model within the data in the context of AI. A first use is data normalization: mapping several sensor models to a single output model, thereby enhancing training. A second example is the reverse: data augmentation by simulating the statistics arising from different sensors. A third example is Monte Carlo uncertainty propagation, which gives a rapid overview of the uncertainty of even a complex AI algorithm.
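
To make the third example concrete, a minimal sketch of Monte Carlo uncertainty propagation driven by an embedded sensor model might look as follows. The Poisson-Gaussian model, the SENSOR fields and the sharpen placeholder standing in for an AI algorithm are all assumptions for illustration; treating the measured image as the true mean signal is a further simplification.

```python
import numpy as np

# Hypothetical sensor model stored alongside the image (assumed fields).
SENSOR = {"gain": 2.0, "read_noise": 3.0, "offset": 100.0}

def resample(image_dn, model, rng):
    """Draw a new noise realization consistent with the embedded
    Poisson-Gaussian sensor model (the measured image is used as the
    mean signal, a simplification)."""
    electrons = np.maximum(image_dn - model["offset"], 0.0) * model["gain"]
    noisy = rng.poisson(electrons) + rng.normal(0.0, model["read_noise"], electrons.shape)
    return noisy / model["gain"] + model["offset"]

def propagate_uncertainty(image_dn, algorithm, model=SENSOR, n_samples=32, seed=0):
    """Run the (possibly black-box) algorithm on many resampled inputs
    and report the per-pixel mean and spread of its output."""
    rng = np.random.default_rng(seed)
    outputs = np.stack([algorithm(resample(image_dn, model, rng))
                        for _ in range(n_samples)])
    return outputs.mean(axis=0), outputs.std(axis=0)

# Usage with a placeholder "AI" step (a simple sharpening filter here):
def sharpen(x):
    blur = (np.roll(x, 1, 0) + np.roll(x, -1, 0) + np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 4
    return x + 0.5 * (x - blur)

img = np.random.poisson(400, (256, 256)).astype(float) + 100
mean_out, std_out = propagate_uncertainty(img, sharpen)
print("median per-pixel output uncertainty:", np.median(std_out))
```

The same resampling routine, with different target parameters, also serves the first two uses: renormalizing data toward a single reference sensor model, or augmenting a training set with realizations from other sensors.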

We give an outlook by imagining what a future AI image-processing pipeline could look like from the data management point of view, taking into account metrological accuracy, performance, cost and practicality for the end user.
