Session 8A1: Modulation, Coding and DSP
Tracks
H-I
Thursday, September 26, 2019 | 2:00 PM - 3:20 PM
Speaker
Attendee116
BAE Systems Applied Intelligence
Implementation of uplink Doppler presteering for TTCP ranging
2:00 PM - 2:20 PM | Abstract Submission
The TTCP is ESA's primary deep-space ground station signal processing system. It is a very powerful system capable of processing wideband (600 MHz) signals with very high precision. The TTCP implements four channels of closed-loop radiometrics and telemetry (TM), and up to 16 subchannels of open-loop recording anywhere within its input sample bandwidth. The TTCP also generates an uplink for telecommand (TC) and ranging.
Recently the algorithms in the TTCP have been upgraded to allow high-precision radiometrics (both Doppler and ranging) to be obtained whilst presteering the uplink carrier and ranging clock. The presteering allows the carrier and range clock to arrive at the spacecraft at their nominal rest frequencies, so ranging can be performed at very low receive SNRs on board, since narrow loops can be used. This is critical for missions such as BepiColombo, where two-way range precisions of the order of 10 cm are required with full presteering enabled.
The presteering makes the ranging and integrated Doppler measurements considerably more complex. This paper describes the algorithms and methods adopted to implement this function in the TTCP, illustrated with recent test results.
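The presteering principle can be sketched with a first-order, one-way Doppler model (an illustrative simplification with assumed numbers; the TTCP works from full light-time predictions):

```python
# Sketch of first-order uplink Doppler presteering: choose the transmit
# frequency so the carrier arrives at the spacecraft at its rest frequency.
# Illustrative only; not the TTCP's actual (relativistic) formulation.
C = 299_792_458.0  # speed of light, m/s

def presteered_uplink(f_rest_hz: float, range_rate_mps: float) -> float:
    """Transmit frequency such that, after one-way Doppler, the carrier is
    received at f_rest_hz. Positive range rate = receding spacecraft."""
    return f_rest_hz / (1.0 - range_rate_mps / C)

# Example (assumed values): X-band uplink, spacecraft receding at 30 km/s
f_tx = presteered_uplink(7.2e9, 30_000.0)
offset = f_tx - 7.2e9  # presteering offset, roughly +720 kHz here
```

The same idea applies to the ranging clock: its chip rate is prescaled by the same factor so that it, too, arrives at its nominal rest frequency.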
Attendee64
European Space Agency
A fast method for optimizing the amplifier output back-off by means of the total degradation
2:20 PM - 2:40 PM | Abstract Submission
The design of satellite transmitters using high-order, non-constant-envelope modulations often requires trading off amplifier nonlinear distortion against transmitted radio-frequency (RF) power. The trade-off is carried out by optimizing the output back-off (OBO), i.e., by finding the operating point that minimizes the total degradation (TD) [RD1], defined as
TD = (Eb/N0 + OBO)_NL - (Eb/N0)_AWGN
where (Eb/N0 + OBO)_NL is the sum of the signal-to-noise ratio and the OBO value required to obtain a specific target frame error rate (FER) on the nonlinear channel, and (Eb/N0)_AWGN is the signal-to-noise ratio required to achieve the same target FER on the AWGN channel.
With this definition, TD provides a useful representation of the overall losses experienced by the link, both in terms of distortion as well as reduced available power (due to the back-off).
However, the main drawback of such an approach is that a very long FER testing campaign is required. For instance, if we consider a Sentinel-like mission with a target FER of 10^-7, transfer frame length 15296 bits, and bitrate ~525 Mbps (before encoding), the computation of a single FER point (at the required target) needs at least 10^8 frames to be transmitted, i.e. about 1.5x10^12 bits or 48.5 minutes of transmission time. Hence, considering that a TD curve can require at least 4 to 5 FER points (if not more), the OBO optimisation for a single modulation and coding format (ModCod) can take as long as 3-4 hours of testing.
To tackle this problem, in this paper we propose a different approach: the TD curve is computed by means of the Monte Carlo information-theoretical approach described in [RD2], instead of using the FER.
In particular, we adopt an achievable information rate estimator, which can be easily implemented in electrical ground support equipment (EGSE) or a software simulator, without any change to the receiving down-conversion and synchronization chain. The proposed estimator requires at most 10^6 transmitted channel symbols [RD2]; hence, with this technique, the number of bits to be transmitted for a single TD point does not exceed 10^7 (worst case), resulting, for the aforementioned example, in a reduction of the testing time by a factor of about 10^{12-7} = 10^5.
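The testing-time figures above can be reproduced with a quick back-of-envelope calculation (parameter values taken from the text; the rule of ten observed error events per FER point is an assumed convention):

```python
# Back-of-envelope check of the testing-time figures quoted in the abstract.
# Assumed inputs: FER target 1e-7, frame length 15296 bits, ~525 Mbps raw
# bitrate, and ~10 frame errors observed per reliable FER point.
frame_len_bits = 15_296
bitrate_bps = 525e6
fer_target = 1e-7

frames_per_point = 10 / fer_target            # 1e8 frames per FER point
bits_fer = frames_per_point * frame_len_bits  # ~1.5e12 bits
minutes_fer = bits_fer / bitrate_bps / 60     # ~48.5 minutes per point

bits_air = 1e7                                # worst-case bits for the AIR estimator
speedup = bits_fer / bits_air                 # testing-time reduction, ~1.5e5
```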
To prove the effectiveness of our technique, we will consider as the main case study a link implementing CCSDS 131.2 (usually known as SCCC in the TT&C community), which includes amplitude phase-shift keying (APSK) modulations of order up to 64-APSK. For this case study, we will show that our technique provides exactly the same results as the classical approach and that, due to the larger number of ModCods considered in CCSDS 131.2, the testing-time reduction factor can be increased even further.
[RD1] Casini et al., "DVB-S2 modem algorithms design and performance over typical satellite channels," 2004.
[RD2] Arnold et al., "Simulation-Based Computation of Information Rates for Channels With Memory," 2006.
Attendee118
Deimos Engenharia
NEXTRACK - Next Generation ESTRACK Uplink Services
2:40 PM - 3:00 PM | Abstract Submission
The Consultative Committee for Space Data Systems (CCSDS) has recently updated its recommendation for uplink communications, to cope with new requirements for telecommand (TC) and modern uplink profiles and applications. Two short Low-Density Parity-Check (LDPC) codes have been included to improve the link performance.
The ESA study NEXTRACK - Next Generation ESTRACK (ESA Tracking Stations) Uplink Services aims at assessing the impact of these codes in the TC Coding and Synchronization (C&S) sublayer, as well as the interactions with the Mission Control System, and at gaining practical experience in the implementation of the transmission critical parts.
The main objectives of the study are to:
• Analyze and design the critical modules for the TC C&S sublayer, including LDPC encoding, Command Link Transmission Unit (CLTU) generation and randomization, as well as to develop an off-line simulation tool.
• Select a suitable platform from those already used in the Telemetry Tracking and Command Processor (TTCP), ESTRACK's ground station modem, and prototype the critical modules for real-time implementation.
• Test the prototype and validate the results by comparison with the off-line simulation tool.
To maintain backward compatibility, and taking into account that it remains a valid option, the standard Bose-Chaudhuri-Hocquenghem (BCH) code is considered alongside the new LDPC codes.
As a first step, we have studied in detail the design of the short LDPC encoders. It is well known that LDPC codes, although characterized by sparse parity-check matrices, may have non-negligible encoder complexity because their generator matrix is not sparse. This makes the classic encoding algorithm, based on direct multiplication of the information vector by the generator matrix, relatively inefficient. The CCSDS short LDPC codes were designed starting from protographs and are characterized by block-circulant parity-check and generator matrices, which allows alternative, efficient encoding techniques to be applied. Two of these procedures are analyzed in depth in the NEXTRACK project and reported in this paper: the first based on a Shift Register Adder Accumulator (SRAA) and the second based on Winograd convolution. These three encoding methods (classic, SRAA-based, and Winograd-based) are under consideration and compared in view of the subsequent real-time implementation.
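The SRAA idea can be sketched for a toy quasi-cyclic code: a register holds the first row of each generator circulant and is cyclically shifted once per information bit, while an accumulator XORs in the register whenever the bit is 1 (illustrative names and toy sizes; the CCSDS short LDPC circulants are far larger):

```python
# Sketch of Shift Register Adder Accumulator (SRAA) encoding for a
# quasi-cyclic code whose generator circulants are given by their first
# rows. Toy parameters; illustrative, not the CCSDS encoder itself.
def sraa_parity(info_bits, first_rows, b):
    """info_bits: 0/1 list, length a multiple of circulant size b.
    first_rows[j]: first row (length b) of the circulant that multiplies
    information block j. Returns one parity block of length b."""
    acc = [0] * b
    for j in range(len(info_bits) // b):
        reg = list(first_rows[j])           # load the circulant's first row
        for bit in info_bits[j * b:(j + 1) * b]:
            if bit:                         # add (XOR) register into accumulator
                acc = [a ^ r for a, r in zip(acc, reg)]
            reg = [reg[-1]] + reg[:-1]      # cyclic shift -> next circulant row
    return acc
```

Because only the first row of each circulant is stored and the rest are generated by shifting, the dense generator matrix never has to be held in memory, which is the source of the efficiency gain over direct matrix multiplication.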
As for the selection of the platform, three different platforms available at TTCP are considered: they rely on CPU, ARM-based FPGA, and FPGA solutions, respectively.
After a careful investigation in the first phase of the study, it has been observed that both hardware platforms, i) Altera Cyclone V (ARM-based FPGA) and ii) Altera Stratix V (FPGA), offer higher performance than the pure software approach. However, the most challenging requirement of the project, i.e., the output target data rate of 2.048 Mbps, is also met by the CPU platform. In this preliminary evaluation, the classic encoder (matrix multiplication) for the three codes was implemented in C/C++ on a CPU similar to the one available in the TTCP, namely the Intel® Xeon® CPU E5-1620 v2 with a 3.70 GHz clock. Furthermore, the CPU performance might be even better if at least one of the alternative encoding techniques identified for the LDPC encoders is implemented.
Taking these results into account, and considering that future portability towards an operational platform is easier for a software approach, we have currently selected the CPU platform, based on the Intel® Xeon® CPU E5-2637 v4 with a 3.50 GHz clock: a multi-core processor built on 14 nm process technology, characterized by low power and high performance, and designed for a platform consisting of a processor and the Platform Controller Hub (PCH), supporting up to 46 bits of physical address space and 48 bits of virtual address space.
Additionally, to allow the execution of long tests, a test tool was designed to evaluate the real-time transmitter performance in terms of speed and timing.
Besides the real-time transmitter to be implemented in the selected platform, we are also developing the off-line transmitter and receiver to support the validation of the real-time implementation and provide performance statistics.
Both transmitters, real-time and off-line, implement:
• Pseudo-randomization, using the CCSDS Linear-Feedback Shift Register (LFSR)-based randomizer;
• Channel encoding, for both the BCH and the short LDPC codes (three different algorithms);
• CLTU generation, by proper segmentation and start sequence/tail sequence insertion, when necessary.
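A minimal sketch of the LFSR-based pseudo-randomization step, with an illustrative polynomial and seed (assumptions for the example; the exact tap positions and initial state are fixed by the CCSDS recommendation and must be taken from there):

```python
# Sketch of an LFSR-based pseudo-randomizer of the kind used on CLTU data.
# Taps (8, 6, 4, 3, 2, 1) and seed 0xFF are illustrative assumptions, not
# a statement of the normative CCSDS values.
def lfsr_sequence(n_bits, taps=(8, 6, 4, 3, 2, 1), state=0xFF, width=8):
    """Generate n_bits of a Fibonacci LFSR keystream (MSB out)."""
    out = []
    for _ in range(n_bits):
        out.append((state >> (width - 1)) & 1)       # output the MSB
        fb = 0
        for t in taps:                               # XOR of tapped stages
            fb ^= (state >> (width - t)) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)
    return out

def randomize(bits, **kw):
    """XOR data bits with the keystream; applying it twice restores the data,
    so the same routine serves as the de-randomizer."""
    ks = lfsr_sequence(len(bits), **kw)
    return [b ^ k for b, k in zip(bits, ks)]
```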
The off-line receiver implements:
• CLTU synchronization;
• Channel decoding, where for the LDPC codes different algorithms are considered, including the Sum-Product Algorithm (SPA), Min-Sum (MS), and Normalized Min-Sum (NMS) iterative algorithms, as well as the non-iterative Most Reliable Basis (MRB) algorithm, which can further increase the performance of the LDPC decoder, either alone or in a hybrid configuration with iterative schemes;
• Data de-randomization.
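As an illustration of the iterative decoders listed above, a compact Normalized Min-Sum sketch over a toy parity-check matrix (illustrative code with an assumed normalization factor; the CCSDS short LDPC codes use much larger block-circulant H matrices):

```python
# Compact Normalized Min-Sum (NMS) LDPC decoder sketch, flooding schedule.
# Toy-scale and unoptimized; for illustration of the algorithm only.
def nms_decode(H, llr, iters=20, alpha=0.8):
    """H: parity-check matrix as list of 0/1 rows; llr: channel LLRs
    (positive = bit 0 more likely). Returns hard-decision bits."""
    m, n = len(H), len(llr)
    c2v = [[0.0] * n for _ in range(m)]      # check-to-variable messages
    hard = [0] * n
    for _ in range(iters):
        # a-posteriori LLR = channel LLR + all incoming check messages
        total = [llr[v] + sum(c2v[c][v] for c in range(m) if H[c][v])
                 for v in range(n)]
        hard = [0 if t >= 0 else 1 for t in total]
        # early stop once every parity check is satisfied
        if all(sum(hard[v] for v in range(n) if H[c][v]) % 2 == 0
               for c in range(m)):
            return hard
        for c in range(m):
            vs = [v for v in range(n) if H[c][v]]
            ext = {v: total[v] - c2v[c][v] for v in vs}   # extrinsic inputs
            for v in vs:
                others = [ext[u] for u in vs if u != v]
                sign = -1.0 if sum(x < 0 for x in others) % 2 else 1.0
                mag = min(abs(x) for x in others)
                c2v[c][v] = alpha * sign * mag            # normalized update
    return hard
```

NMS replaces the SPA's tanh-domain check update with a minimum of magnitudes scaled by alpha, trading a small performance loss for much lower implementation complexity.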
In this paper we will present the NEXTRACK architecture, focusing on the results of the study of the different LDPC encoder algorithms and their complexity, on the real-time platform selection, and on the simulation results obtained with the off-line simulation tool.
Attendee90
EmTroniX
Architecture Definition, DSP Algorithms Design and FPGA Implementation of an Orbiter SDR Autonomous UHF Transceiver
3:00 PM - 3:20 PM | Abstract Submission
A large number of planetary missions including Mars rover and lander elements are planned for the coming years by space agencies including ESA. These landed elements usually transmit their data back to Earth via relay orbiters that are designed for a long lifetime, creating an infrastructure to support data return to Earth. Since the communications link between the elements on the surface and the orbiter uses the Proximity-1 protocol, the transceiver equipment is traditionally designed to comply with the current CCSDS Proximity-1 specification and offers limited parameter tuning. However, the orbiter UHF equipment shall support missions from different agencies over its design lifetime, and it is unlikely that future missions will use the same modulation schemes, data rates, and error-correcting codes, or that the protocol will not be improved. To overcome these limitations, the orbiter UHF transceiver unit needs to be based on software-defined radio (SDR), and to be autonomous and able to recognize features of incoming signals in order to adapt automatically to different requirements. The advantage of an autonomous reconfigurable unit is that it allows the orbiter to communicate with any current or future landed element automatically: it autonomously recognizes the parameters of the received signal and adapts to them, offering the required flexibility and adaptability and avoiding the need to upload new software from Earth to reconfigure the unit.
In this work, the detailed architecture and DSP algorithms of an orbiter SDR autonomous UHF transceiver are designed in view of a future flight-qualified unit, implemented on an FPGA-based digital platform, and verified through analysis, simulation, and testing for concept validation and performance evaluation, while taking into account the constraints of such missions.
First, the conventional transceiver architecture is studied and the changes needed to cope with all identified new requirements are analysed. The transceiver unit hardware architecture is then defined, including the frequency plan, the main performance characteristics of the RF front end, the transceiver interfaces and the components for unit implementation, the analogue/digital apportionment, and the DSP partitioning between firmware and software.
Extensive investigations were then carried out to design all DSP algorithms needed for the autonomous receiver. The investigated and proposed receiver algorithms include DSP and digital communications techniques focusing on the classification and/or estimation of the unknown parameters of the received signal, including modulation type classification, data rate classification, modulation index classification, data format classification, coding type identification, SNR estimation, coarse symbol timing estimation, coarse carrier frequency estimation, carrier frequency and phase acquisition and tracking, symbol synchronization, demodulation, and data recovery. Doppler and Doppler-rate profiles and SNR variation during orbital passes were considered as critical design constraints.
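As an example of one blind-estimation step of this kind, SNR can be estimated with the classical moment-based M2M4 estimator (an assumed illustrative choice, valid for constant-envelope signals in complex AWGN; the unit's actual estimator is not specified in the abstract):

```python
import math
import random

# Sketch of the classical M2M4 moment-based SNR estimator: for a
# constant-modulus signal in complex AWGN, S^2 = 2*M2^2 - M4 and
# N = M2 - S, so the SNR follows from two sample moments alone.
def m2m4_snr(samples):
    m2 = sum(abs(y) ** 2 for y in samples) / len(samples)
    m4 = sum(abs(y) ** 4 for y in samples) / len(samples)
    s = math.sqrt(max(2 * m2 * m2 - m4, 0.0))  # signal-power estimate
    n = max(m2 - s, 1e-12)                     # noise-power estimate
    return s / n                               # linear SNR

# Example with assumed parameters: BPSK at a true SNR of 10 dB
random.seed(0)
snr_lin = 10.0
sigma = math.sqrt(1.0 / snr_lin / 2)  # noise std per real dimension
rx = [(1 if random.random() < 0.5 else -1)
      + complex(random.gauss(0, sigma), random.gauss(0, sigma))
      for _ in range(20000)]
est_db = 10 * math.log10(m2m4_snr(rx))  # close to 10 dB
```

The estimator is blind in the sense used above: it needs no knowledge of the transmitted symbols, only that the modulation has (approximately) constant envelope.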
The proposed digital algorithms and architecture were then analysed through simulation under channel conditions representative of the dynamics expected during Mars orbital passes. They were verified to enable the autonomous receiver, without a priori knowledge of the received signal, to identify its attributes and automatically reconfigure itself accordingly by selecting the best-performing loops and their parameters, without explicit pre-configuration or reprogramming of its functions.
A detailed architecture of the digital section and an FPGA implementation of the orbiter SDR autonomous transceiver unit were achieved and integrated into a demonstration test bed to allow validation and performance evaluation of the transceiver algorithms on the selected digital platform.
The transceiver test results are largely excellent and in compliance with the target requirements. In particular, the evaluation of the classification performance of the autonomous receiver in terms of detection time, misclassification probability, and estimation accuracy proves its high efficiency and reliability, and consolidates the results predicted analytically and by simulation. The autonomous receiver is verified to autonomously detect the received signal, identify its attributes (including modulation type and data rate), and perform its overall configuration within less than 5 s from signal application, achieving a misclassification probability far below the required 1E-3. Furthermore, for all data rates from 1 kbps to 4096 kbps and all supported modulation types, the autonomous receiver is demonstrated to accept an RF input signal within its specified acquisition dynamic range, successfully acquire and maintain time and carrier lock as long as the received signal is present and within its tracking range, and reacquire carrier and time lock after link outages when the received signal is above its specified acquisition thresholds, while supporting Doppler shifts up to ±16 kHz and Doppler rates up to 200 Hz/s. In addition, the autonomous receiver is verified to achieve bit error rates on the order of 1E-6 and lower at the specified input signal power levels and under the worst expected Doppler dynamics, for every supported modulation type and data rate, meeting the corresponding requirements in most tested cases and approaching the theoretical limits.