
Invited Speakers


Dr. Swati Mohan (NASA/JPL)
Mars 2020 Terrain Relative Navigation

Bio
Dr. Swati Mohan first joined NASA's Jet Propulsion Laboratory in 2004, after completing her B.S. in Mechanical & Aerospace Engineering at Cornell University. After working as a systems engineer on Cassini during Saturn Orbit Insertion and the Huygens Probe release, she returned to graduate school at MIT in 2005. Dr. Mohan received her M.S. and Ph.D. in Aeronautics/Astronautics from the MIT Space Systems Laboratory. Since returning to JPL in 2010, she has worked on multiple missions, such as GRAIL and OCO-3, and she co-founded and manages the Small Satellite Dynamics Testbed. For the past eight years she has been the Lead Guidance, Navigation, and Controls Systems Engineer for the Mars 2020 Perseverance rover, focusing on Cruise and EDL, and for the last year she served as the Mars 2020 Guidance, Navigation, and Controls Operations Lead. Dr. Mohan was the mission commentator for the landing of the Perseverance rover on February 18, 2021. She is currently the supervisor of the Guidance, Navigation, and Controls Systems Engineering group at NASA's Jet Propulsion Laboratory.

Abstract
Terrain Relative Navigation (TRN) was a critical Entry, Descent, and Landing (EDL) technology that enabled the Mars 2020 Perseverance rover to land at Jezero crater. TRN provides real-time, autonomous, map-relative position determination and selects a landing target based on a priori knowledge of hazards. The requirement was for TRN to land within 60 m of the selected target; post-landing assessment of the landed point versus the targeted point showed an as-flown performance of 5 m. This talk will describe the rationale for including Terrain Relative Navigation on Mars 2020, describe the new sensor (the Lander Vision System) that performed the on-board real-time processing to generate the map-relative position, and show in-flight video and performance data from the landing on February 18, 2021.


Anko Börner (DLR)
Sensor AI – fusing two worlds for a better data processing

Bio
Anko Börner was born in Berlin, Germany, and studied electrical engineering at the Technical University Ilmenau. He joined the German Aerospace Center (DLR) in 1996 for his Ph.D. studies on onboard data processing for satellites. After receiving his doctorate in 1999, he took a postdoctoral position at Zurich University, Switzerland, and returned to Berlin in 2000 as a scientific researcher at DLR. Since 2003 he has been a head of department in several institutes. His research interests include the modelling and simulation of optical systems, computer vision, and sensor artificial intelligence. He has been involved in several space missions, e.g. NASA's InSight mission to Mars and ESA's PLATO.

Abstract
In the last decade, artificial intelligence methods and technologies have demonstrated outstanding capabilities in a huge number of applications. Deep and convolutional neural networks have revolutionized image classification, explainability approaches tackle the black-box problem of neural networks, and spiking neural networks more closely mimic natural neural networks, to mention just a few examples.

Nevertheless, we are still far from applying these new technologies in systems with high reliability and safety requirements, which particularly applies to space systems and components. This is because we do not yet understand well enough how and why decisions are made in complex AI systems. One new approach to addressing this challenge is to include knowledge about the environment and about the sensors that serve as the interfaces to the data processing unit, instead of relying on raw sensor data alone for decision making. Combining physical models with data-based models is crucial for improving the quality of AI systems, for learning more efficiently from a limited amount of data, and for designing architectures that can be implemented close to the sensor. This will make it possible to build a new generation of sensor systems for space missions.



David Evans (ESA)
OPS-SAT: How the increased and more flexible onboard processing has changed the way we operate our satellite
Bio
David Evans is the Advanced Operations and In-Orbit Validation Project Manager at the European Space Agency, where he manages ESA's innovative OPS-SAT project. Dave started his career at the European Space Operations Centre (ESOC) in 1992, working as a mission planner, flight control engineer, and simulation officer on EURECA, ERS-2, and CLUSTER-1. In 1997 he joined EUTELSAT as their satellite control centre manager, where he oversaw a period of intense expansion of the fleet, with 20 launches, 5 re-orbitings, and the company's privatisation. In 2007 he returned to ESOC, specializing in small spacecraft missions and advanced technology. He holds several patents on housekeeping telemetry compression and is the author of the popular “Ladybird Guide to Spacecraft Operations” lecture courses.

Abstract
Dave's talk will describe the events of the OPS-SAT LEOP and commissioning. Many challenges had to be overcome, and commissioning took ten months to complete compared to the initially planned three. Problems started on the first pass, when no packets were received from the spacecraft, and poor communications plagued the mission for many months. However, a great deal of progress was made during this time thanks to the ingenuity of the mission control team and the availability of a powerful, flexible processor on board. A framework evolved whereby commissioning was carried out using the experimenter infrastructure (centred around this processor) rather than the traditional flight control infrastructure. The experience acted as a catalyst, and soon the team was deploying on-board AI algorithms to solve some classic mission control problems.