Smart payloads: image analysis by deep learning on-board
Wednesday, September 23, 2020 | 4:30 PM - 4:55 PM
Speaker: Attendee162 (Agenium Space)
Abstract Submission
This paper presents first implementations of high-performance deep neural networks (DNNs) embedded in FPGA hardware representative of that available on New Space missions. On-board DNN inference is made possible by the definition and development of highly efficient simplification methods that allow best-in-class DNNs with hundreds of millions of parameters to run within the limited processing resources available on-board.
The expansion of small satellite platforms increases the need to simplify payloads and to optimize downlink data capacity. A promising solution is to enhance on-board software so that decisions can be taken early and automatically. However, the most efficient methods for data analysis are generally large DNNs, oversized for loading and execution within the limited hardware capacity of small satellites. To use them, we must reduce the size of the DNN while preserving efficiency in terms of both accuracy and inference cost. In this paper, we present a distillation method that reduces the size of an image segmentation DNN so that it fits into on-board processors. The method is illustrated through a ship detection example comparing accuracy and inference cost for several networks.
Distillation provides a way to extract the truly meaningful parts of large, complex DNNs into a reduced model. This extraction is mainly performed by transferring the knowledge of a large teacher network into a smaller DNN, by training the small DNN to predict the output of the teacher model. It shall not bring significant loss in terms of precision and reliability. This size reduction, at no performance cost, also simplifies the inference code required to execute the distilled DNN on FPGA hardware. The approach is complementary to existing techniques for reducing DNN memory footprints (e.g. those implemented in the EUCLID deep-space mission and the PhiSat-1 mission), is compatible with precision reduction techniques for inference, and can reuse generic VHDL (Very High Speed Integrated Circuit Hardware Description Language) code generation for running DNNs on SoC FPGA hardware. Our approach reduces the cost of fitting state-of-the-art DNNs onto imaging payload hardware.
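To make the knowledge-transfer step concrete, the following sketch shows a typical teacher-student distillation loop for a binary segmentation network. It is not the paper's implementation: the models, data format, and loss weighting (alpha) are assumptions for illustration only; the student is simply trained to match the teacher's soft output maps in addition to the annotated ship masks.

```python
# Minimal knowledge-distillation sketch for a binary segmentation student.
# Hypothetical teacher/student modules and float mask tensors; the loss
# weighting (alpha) is an assumption, not the paper's setting.
import torch
import torch.nn.functional as F

def distill_step(teacher, student, images, masks, optimizer, alpha=0.5):
    """One training step: the student mimics the teacher and fits ground truth."""
    teacher.eval()
    with torch.no_grad():
        teacher_logits = teacher(images)   # soft targets from the large DNN

    student_logits = student(images)

    # Supervised term: match the annotated ship masks (float tensor in {0, 1}).
    hard_loss = F.binary_cross_entropy_with_logits(student_logits, masks)

    # Distillation term: match the teacher's per-pixel probabilities.
    soft_loss = F.binary_cross_entropy_with_logits(
        student_logits, torch.sigmoid(teacher_logits))

    loss = alpha * soft_loss + (1.0 - alpha) * hard_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this kind of setup the teacher stays frozen and only the small student is optimized, which is what allows the distilled model to be exported afterwards to a compact inference implementation.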
Several DNN architectures have been implemented and simplified for ship segmentation and detection in very-high-resolution optical images. The simplified DNNs were ported (their inference code adapted) and executed on mid-range FPGA hardware (Xilinx ZCU102) without significant performance losses (< 10% F1-score). The implemented solutions and the results obtained (on-board execution time, frames per second processed, resource usage, and power consumption) will be presented.
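As an illustration of the comparison metric mentioned above, the sketch below computes a pixel-wise F1-score and a relative F1 drop between a teacher and a distilled network. The function names, boolean-mask inputs, and the reading of the "< 10% F1-score" criterion as a relative drop are assumptions, not the paper's exact evaluation protocol.

```python
# Sketch of a pixel-wise F1 comparison between teacher and distilled student;
# masks are assumed to be boolean NumPy arrays of identical shape.
import numpy as np

def f1_score(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Pixel-wise F1 between a binary prediction and the ground-truth ship mask."""
    tp = np.logical_and(pred_mask, gt_mask).sum()
    fp = np.logical_and(pred_mask, ~gt_mask).sum()
    fn = np.logical_and(~pred_mask, gt_mask).sum()
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def relative_f1_drop(f1_teacher: float, f1_student: float) -> float:
    """Fraction of F1 lost by the distilled model relative to the teacher."""
    return (f1_teacher - f1_student) / f1_teacher
```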
The work presented in this article was performed by AGENIUM Space and CNES in the framework of research and development contracts and internal activities aimed at the implementation of intelligent payloads.