Pau Closas, Monica Navarro, Jordi Vila-Valls, and Massimo Bertinelli, Centre Tecnològic de Telecomunicacions de Catalunya (CTTC, Spain), Northeastern University (Boston, USA), and European Space Agency (ESA, Noordwijk, The Netherlands)
Deep space missions keep pushing new frontiers affecting a wide spectrum of disciplines. To support the scientific achievements expected from new missions, communication technology is being pushed towards its limits [1]. The need to increase communication link data rates, as well as to lower the operational signal-to-noise ratio (SNR), has been identified. The adoption of advanced coding schemes such as turbo codes and low-density parity-check (LDPC) codes (e.g., Consultative Committee for Space Data Systems (CCSDS) standards) allows receivers to operate at lower SNRs. However, in order to exploit the full potential of the coding gain, the receiver must be able to acquire and track a signal with an SNR much lower than expected in nominal conditions of state-of-the-art systems. The target operating point is given by the candidate LDPC codes [2], where the codeword error rate is set to WER ≤ 10⁻⁵, achieved at a bit energy to noise density ratio Eb/N0 ≥ 5.2 dB and ≥ 3.6 dB for LDPC(128,64) and LDPC(256,128), respectively. In [3], the first receiver bottleneck was identified, related to frame synchronization, a functionality required prior to channel decoding. Even though frame synchronization enhancements beyond standard correlation techniques were proposed [3], [4], [1], it was recommended to increase the synchronization word length in order to achieve the target performance, a recommendation recently adopted by the CCSDS. In this work, the focus lies on the receiver synchronization stages, i.e., acquisition and tracking. Not only from a research standpoint, but also for the design of next-generation Telemetry, Tracking & Command (TT&C) transponders, it is of paramount importance to understand the performance limitations of state-of-the-art deep space communication architectures, clearly identifying possible bottlenecks and the synchronization stages to be improved. Digital carrier and timing synchronization have been an active research field for the past three decades in applications such as satellite-based positioning and terrestrial wireless communication systems. In those scenarios, the limitations of standard delay-locked loop (DLL), frequency-locked loop (FLL), and phase-locked loop (PLL) architectures have been clearly overcome by Kalman filter (KF) based solutions [5], which provide an inherent adaptive bandwidth, robustness, flexibility, and an optimal design methodology. Despite the advances in the field, the synchronization architectures implemented in current TT&C transponders for deep space communication links still rely on well-known conventional architectures, which may be insufficient when limits are pushed to extremely low SNR or harsh propagation conditions. With the advent of powerful software-defined radio receivers and new system design rules, it is now possible to adopt new robust architectures that may go beyond the performance and reliability provided by legacy solutions.
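To make the contrast with conventional tracking loops concrete, the following is a minimal, self-contained Python sketch of a KF-based carrier phase/frequency tracker driven by an arctangent phase discriminator. It illustrates only the generic technique referenced above, not the architecture studied in this work; the update rate, simulated frequency offset, noise levels, and process-noise tuning are all assumed values chosen for the example.

```python
import numpy as np

# Simulated residual carrier with a constant frequency offset (all values are
# illustrative assumptions, not parameters of the referenced system).
rng = np.random.default_rng(0)
n = 5000                       # number of updates
T = 1e-3                       # update period [s]
f0 = 12.0                      # true residual frequency offset [Hz]
sigma = 0.3                    # noise std per quadrature component (unit-amplitude carrier)
true_phase = 2 * np.pi * f0 * T * np.arange(n)
r = np.exp(1j * true_phase) + sigma * (rng.normal(size=n) + 1j * rng.normal(size=n))

# KF-based phase/frequency tracker: state x = [phase (rad), frequency (Hz)].
F = np.array([[1.0, 2 * np.pi * T],   # phase advances by 2*pi*f*T per update
              [0.0, 1.0]])            # frequency modeled as (nearly) constant
Q = np.diag([1e-6, 1e-4])             # process noise (tuning assumption)
R = sigma**2                          # discriminator noise variance (small-error approximation)
H = np.array([1.0, 0.0])              # the discriminator observes the phase error only
x = np.zeros(2)                       # initial state estimate
P = np.diag([1.0, 100.0])             # initial state covariance

for k in range(n):
    # Prediction step
    x = F @ x
    P = F @ P @ F.T + Q
    # Arctangent discriminator: innovation = measured phase error w.r.t. prediction
    innov = np.angle(r[k] * np.exp(-1j * x[0]))
    # Update step: the Kalman gain adapts the effective loop bandwidth over time,
    # unlike a fixed-bandwidth PLL/FLL.
    S = H @ P @ H + R
    K = P @ H / S
    x = x + K * innov
    P = (np.eye(2) - np.outer(K, H)) @ P

print(f"estimated frequency offset: {x[1]:.2f} Hz (true value: {f0} Hz)")
```

Because the gain K is recomputed from the evolving covariance P, the loop starts with a wide effective bandwidth to pull in the frequency offset and then narrows automatically, which is the inherent adaptive bandwidth property mentioned above.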