NASA and NOAA use space-borne sensors to support science and operational weather decision-making. Measured satellite data can be used to calculate a myriad of useful data products, including aerosols (e.g., smoke and dust), clouds (e.g., layers, height, particle sizes, temperatures, imagery), land temperature and albedo, rainfall rate, reflected shortwave radiation, sea/lake ice concentration and motion, snow cover, fire hot spot characterization, sea temperature, and others. These missions increasingly include lightning sensors, e.g., GOES-R (Geostationary Lightning Mapper, GLM), TRMM and the ISS (Lightning Imaging Sensor, LIS), and GeoXO (Lightning Mapper). There is therefore a need for autonomous on-board processing of lightning data to control the CMOS Image Sensor (CIS) Regions of Interest (ROI) and resolution, and to adjust the attitude of the sensor so that the storm remains in view for the maximum possible duration.
To enable deployment of an autonomous lightning storm detection and tracking software pipeline on a space-borne platform, efficient and reliable computing platforms are paramount. The entire processing pipeline must target radiation-hardened/tolerant architectures that fit within the low SWaP (Size, Weight, and Power) budget available.
The primary product to be developed during this project is a software framework that ingests LIS event data and generates a storm ROI capturing the lightning events expected over a given future time period. This ROI will be translated into an ROI bounding box for the CMOS Image Sensor on the LIS and into attitude control information for the LIS satellite and sensor. The project focuses on developing a neural network processing pipeline that performs the required lightning detection and tracking at high accuracy and with reduced computational needs. Computational needs are managed by (1) evaluating the required frame rate (e.g., the duration of aggregation and the time delta in the nowcasting), (2) determining what input resolution (i.e., ground separation distance) is required to successfully detect, predict, and track lightning (i.e., whether the input frame can be coarsened), (3) sparsifying and pruning the neural networks (e.g., reducing kernel size, number of channels, number of layers) to reduce the compute operations required while evaluating the impact on accuracy, and (4) leveraging the sparsity of the input where possible (e.g., only processing non-empty input activations).
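The translation of predicted lightning events into a sensor ROI bounding box can be sketched as follows. This is a minimal illustration, not the project's actual interface: the function name, the margin parameter, and the assumption that predicted events arrive as pixel coordinates are all hypothetical.

```python
def events_to_roi(events, margin, width, height):
    """Derive an ROI bounding box from predicted lightning event pixel
    coordinates, padded by a margin and clamped to the detector extent.

    events: list of (x, y) pixel coordinates of predicted lightning events.
    Returns (x_min, y_min, x_max, y_max) clamped to the sensor frame.
    """
    xs = [x for x, _ in events]
    ys = [y for _, y in events]
    # Pad the tight bounding box by `margin` pixels so the storm stays
    # inside the ROI as it moves, then clamp to valid sensor coordinates.
    x_min = max(min(xs) - margin, 0)
    y_min = max(min(ys) - margin, 0)
    x_max = min(max(xs) + margin, width - 1)
    y_max = min(max(ys) + margin, height - 1)
    return (x_min, y_min, x_max, y_max)
```

In practice the margin would be chosen from the nowcast horizon and expected storm motion, so the ROI anticipates where events will occur rather than merely enclosing where they have occurred.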
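The input-coarsening and sparsity strategies above can be illustrated together: LIS events are binned into a coarse grid, and because only occupied cells are stored, downstream processing can skip empty regions entirely. This is a hedged sketch under assumed inputs (pixel-coordinate events, a uniform square cell size); the real pipeline's aggregation scheme may differ.

```python
from collections import Counter

def aggregate_events(events, cell_size):
    """Bin (x, y) lightning events into coarse grid cells.

    Returns a Counter mapping (cell_x, cell_y) -> event count. Only
    non-empty cells appear, so iterating over the result processes
    only the sparse set of active input activations.
    """
    return Counter((x // cell_size, y // cell_size) for x, y in events)
```

Iterating over the returned Counter touches only cells that contain events, which is the essence of exploiting input sparsity: compute scales with the number of active cells rather than the full frame size.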