On-board image processing using AI to reduce data transmission: the example of OpsSat cloud detection
Wednesday, September 23, 2020 |
2:55 PM - 3:20 PM |
Speaker
Attendee4
Institute of Research & Technology
On-board image processing using AI to reduce data transmission: the example of OpsSat cloud detection
Abstract Submission
Pending the in-flight results of OpsSat, we propose to present the long process of deploying Neural Networks on an FPGA.
The first step, often underestimated with respect to its impact on the final performance, consists of collecting and labelling images for NN training. The dataset must contain images of sufficient quality to train the NN parameters (weights and biases) and enough diversity to avoid overfitting. The OpsSat camera has no flight heritage, so we plan to update the NN once the first pictures become available, refining the parameters on the basis of "real" in-space images from the sensor; this is why the capability to upload a new NN is necessary. It can also be useful for changing or adding a detection class during the mission life cycle, depending on user needs.
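As a purely illustrative sketch (the actual labelling pipeline, file layout, patch size, and cloud-mask convention are not described in this abstract and are assumed here), a patch-based labelling step could look like the following, where each image comes with a hand-made binary cloud mask:

```python
# Hypothetical sketch: building a labelled patch dataset for cloud detection.
# Directory names, patch size, and the cloud/clear labelling rule are
# assumptions, not the authors' actual pipeline.
import os
import numpy as np
from PIL import Image

PATCH = 28  # small patches suit a LeNet-5-style network

def extract_patches(image_path, mask_path):
    """Cut an image and its hand-made cloud mask into (patch, label) pairs."""
    img = np.asarray(Image.open(image_path).convert("L"), dtype=np.float32) / 255.0
    mask = np.asarray(Image.open(mask_path).convert("L")) > 127  # True = cloud
    h, w = img.shape
    for y in range(0, h - PATCH + 1, PATCH):
        for x in range(0, w - PATCH + 1, PATCH):
            patch = img[y:y + PATCH, x:x + PATCH]
            # Label the patch "cloud" if most of its mask pixels are cloudy.
            label = int(mask[y:y + PATCH, x:x + PATCH].mean() > 0.5)
            yield patch, label

patches, labels = [], []
for name in sorted(os.listdir("images")):
    for p, l in extract_patches(os.path.join("images", name),
                                os.path.join("masks", name)):
        patches.append(p)
        labels.append(l)
X, y = np.stack(patches), np.asarray(labels)
```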
The second step then consists of selecting the NN architecture and size, depending on the quality and quantity of the training dataset and on the capability of the execution target (integrated circuit). Considering the Cyclone V performance and the selected application, cloud detection, we chose a Convolutional NN based on the LeNet-5 architecture with some specific improvements. We also implement a very innovative approach based on "spiking" layers in order to reduce inference power consumption.
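The specific improvements and the spiking layers are the authors' own work and are not reproduced here; as a hedged baseline only, a plain LeNet-5-style CNN for two-class (cloud / clear) patch classification could look like this, assuming 28x28 grey-level patches:

```python
# A minimal LeNet-5-style CNN for cloud/clear patch classification.
# This is an assumed baseline of the architecture family named in the
# abstract, not the network actually flown.
import torch
import torch.nn as nn

class TinyLeNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # 28x28 -> 24x24
            nn.ReLU(),
            nn.AvgPool2d(2),                  # -> 12x12
            nn.Conv2d(6, 16, kernel_size=5),  # -> 8x8
            nn.ReLU(),
            nn.AvgPool2d(2),                  # -> 4x4
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 4 * 4, 84),
            nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = TinyLeNet()
out = model(torch.randn(1, 1, 28, 28))  # one 28x28 grey-level patch
```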
Finally, the third step is the deployment of the CNN on the Cyclone V FPGA. This last development step requires specific RTL coding skills, because no commercial solution for deploying NNs on the Cyclone V is compatible with such a "small" FPGA. We apply a "pipelined" approach to optimize the execution time, since the "tiny" CNN architecture fits within the limited number of logic cells.
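One step commonly needed when hand-coding a CNN in RTL for a small FPGA is converting the trained floating-point weights to fixed point. The sketch below illustrates this generic step only; the 8-bit Q1.7 format is an assumption, not the word format actually used on the Cyclone V:

```python
# Generic illustration of fixed-point weight quantization for an FPGA
# deployment. The Q1.7 format (8-bit signed, 7 fractional bits) is an
# assumption, not the authors' actual choice.
import numpy as np

FRAC_BITS = 7  # Q1.7

def to_fixed(w, frac_bits=FRAC_BITS):
    """Quantize float weights to signed 8-bit fixed point, with saturation."""
    scaled = np.round(w * (1 << frac_bits))
    return np.clip(scaled, -128, 127).astype(np.int8)

def to_float(q, frac_bits=FRAC_BITS):
    """Map fixed-point words back to floats, e.g. to measure accuracy loss."""
    return q.astype(np.float32) / (1 << frac_bits)

weights = np.random.uniform(-1, 1, size=(6, 1, 5, 5)).astype(np.float32)
q = to_fixed(weights)
print("max quantization error:", np.abs(to_float(q) - weights).max())
```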
Thanks to this process, which we will detail in the final abstract, on-ground tests on the MitySOM board demonstrate that a full OpsSat image (close to 2 Mpixels) is inferred in only 150 ms.
We hope to be able, by the time of the 7th OBPDC conference, to present inferences performed in flight!