Multicomponent signals (MCSs) are ubiquitous in real life: for instance, audio (music, speech), medical (electrocardiogram (ECG), phonocardiogram (PCG), electroencephalogram (EEG)), astronomical (gravitational waves), or echolocation (bats, marine mammals) signals can be modeled as the superimposition of amplitude- and frequency-modulated (AM/FM) modes. Identifying and separating these constituent modes are challenging tasks due to the variety of MCSs encountered. In this regard, the ANR-ASTRES project focused on the design of advanced, data-adaptive signal and image processing techniques to decompose complex nonstationary signals into physically meaningful modes. To this aim, several techniques were investigated, based either on a revisit of the reassignment principle through the concept of the synchrosqueezing transform (SST), on optimization techniques related to the notion of sparsity, or on empirical mode decomposition. Different extensions of the reassignment techniques, mainly based on a finer analysis of the reassignment operators, have also proven beneficial, improving the original SST by adapting it to modes with strong frequency modulation or fast-oscillating phases. Demodulation algorithms were also used in conjunction with SST to improve mode retrieval, and extensions of these approaches will be discussed in the present project.
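To fix ideas, the frequency-reassignment step at the heart of SST can be sketched in a few lines: an STFT computed with a Gaussian window and with its derivative yields a local instantaneous-frequency estimate, onto which the STFT coefficients are then "squeezed". The function below is an illustrative, minimal sketch (its name and parameters are our own choices, not the ASTRES-Toolbox implementation):

```python
import numpy as np

def sst(x, fs, sigma=0.02, nfft=512, hop=8):
    """Minimal synchrosqueezing sketch (illustrative, not optimized).

    Computes a Gaussian-window STFT V and an auxiliary STFT Vd taken
    with the window's derivative, estimates the instantaneous frequency
    at each TF point from Im(Vd/V), then reassigns ("squeezes") every
    STFT coefficient to its estimated frequency bin.
    """
    n = len(x)
    tg = np.arange(-nfft // 2, nfft // 2) / fs
    g = np.exp(-0.5 * (tg / sigma) ** 2)       # Gaussian window
    dg = -tg / sigma**2 * g                    # its derivative
    freqs = np.fft.rfftfreq(nfft, 1 / fs)
    nf = len(freqs)
    frames = range(0, n, hop)
    V = np.zeros((nf, len(frames)), complex)
    Vd = np.zeros_like(V)
    xp = np.pad(x, nfft // 2)
    for j, m in enumerate(frames):
        seg = xp[m:m + nfft]
        # fftshift moves the window centre to index 0, so the STFT phase
        # is referenced to the window centre (needed for coherent squeezing)
        V[:, j] = np.fft.rfft(np.fft.fftshift(seg * g))
        Vd[:, j] = np.fft.rfft(np.fft.fftshift(seg * dg))
    eps = 1e-8 * np.abs(V).max()
    # reassignment operator: local instantaneous-frequency estimate (Hz)
    omega = freqs[:, None] - np.imag(Vd / (V + eps)) / (2 * np.pi)
    T = np.zeros_like(V)
    df = freqs[1] - freqs[0]
    for j in range(V.shape[1]):
        for i in range(nf):
            k = int(round(omega[i, j] / df))
            if 0 <= k < nf:
                T[k, j] += V[i, j]   # squeeze energy onto the estimate
    return T, freqs
```

On a pure tone, the energy spread across neighboring STFT bins collapses onto a single frequency bin, which is precisely the sharpening effect that SST provides over the plain spectrogram.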
In spite of these achievements, the behavior of the synchrosqueezing operators in a noisy environment still needs to be better understood. Since we will also discuss the extension of SST to bivariate signals, with emphasis on noisy cases, connections will be established between the monovariate and bivariate cases, in particular with respect to noise treatment. Furthermore, SST, even in its most recent variants, suffers from several intrinsic limitations: first, it assumes that the modes of the MCS are separated in the time-frequency (TF) plane, which makes it unsuitable for the study of colliding modes; second, it assumes that the modes have regular instantaneous phase and amplitude, which precludes the study of modes with finite duration. We propose to address these issues in the present project.
In addition, when SST is used for mode retrieval, the recovery process relies on a basic ridge extractor that has seldom been discussed and that we propose to revisit in the present project. As we will see, mode retrieval in that context is also greatly influenced by the time and frequency resolutions; in this regard, we will investigate how to reconstruct the modes from a downsampled SST, a problem which has not been addressed so far.
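For reference, the kind of basic ridge extractor alluded to above is typically a penalized dynamic-programming search over the TFR magnitude: the extracted ridge maximizes the accumulated TF energy minus a penalty on frequency jumps. The sketch below is an illustrative baseline of this classical scheme (the function name and penalty form are our own choices), not the extractor we intend to develop:

```python
import numpy as np

def extract_ridge(tfr_mag, penalty=1.0):
    """Penalized ridge extraction by dynamic programming (sketch).

    Finds the frequency-bin path k(t) maximizing
        sum_t log|T[k(t), t]| - penalty * (k(t) - k(t-1))**2
    over a TFR magnitude of shape (n_freq, n_time).
    """
    nf, nt = tfr_mag.shape
    energy = np.log(tfr_mag + 1e-12)
    cost = np.empty((nf, nt))
    back = np.zeros((nf, nt), int)     # best predecessor bin per step
    cost[:, 0] = energy[:, 0]
    bins = np.arange(nf)
    for t in range(1, nt):
        # score of reaching bin i at time t from every bin j at t-1
        trans = cost[:, t - 1][None, :] - penalty * (bins[:, None] - bins[None, :]) ** 2
        back[:, t] = np.argmax(trans, axis=1)
        cost[:, t] = trans[bins, back[:, t]] + energy[:, t]
    ridge = np.zeros(nt, int)          # backtrack the optimal path
    ridge[-1] = np.argmax(cost[:, -1])
    for t in range(nt - 1, 0, -1):
        ridge[t - 1] = back[ridge[t], t]
    return ridge
```

The quadratic jump penalty is one possible regularity prior; its weight directly trades off robustness to noise against the ability to follow strong frequency modulations, which is exactly the tension a revisited extractor must resolve.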
Another way to deal with intrinsic limitations of SST, such as the case of overlapping components, is to use machine learning approaches like deep neural networks (DNNs). We will investigate how to optimize the filter parameters and the resolution of time-frequency representations (TFRs), as well as how to extract the components of an MCS using DNNs.
The study of MCSs can be seen from another angle, which relates to the concept of source separation, for which nonnegative matrix factorization (NMF) has been extensively used. In the present project, we propose to investigate how NMF can be used in conjunction with SST to improve mode extraction. In this regard, since NMF is performed on the magnitude of a TFR, recovering the modes will also require investigating phase retrieval.
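As a reminder of how such a factorization operates on a TFR magnitude, a minimal Euclidean NMF with Lee-Seung multiplicative updates might look as follows; this is a generic sketch (function name and parameters are illustrative), and a complete separation pipeline would still need a phase-retrieval step to resynthesize each mode from its factorized magnitude:

```python
import numpy as np

def nmf(S, rank, n_iter=500, seed=0):
    """Euclidean NMF via Lee-Seung multiplicative updates (sketch).

    Factorizes a nonnegative TFR magnitude S (freq x time) as W @ H:
    columns of W act as spectral templates, rows of H as their
    activations over time.
    """
    rng = np.random.default_rng(seed)
    nf, nt = S.shape
    W = rng.random((nf, rank)) + 1e-3   # nonnegative random init
    H = rng.random((rank, nt)) + 1e-3
    for _ in range(n_iter):
        # multiplicative updates keep W and H nonnegative and
        # monotonically decrease the Frobenius reconstruction error
        H *= (W.T @ S) / (W.T @ W @ H + 1e-12)
        W *= (S @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H
```

Each rank-one term W[:, k] @ H[k, :] is a candidate component magnitude; combining such masks with the sharpened TF localization of SST is the direction we propose to explore.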
Finally, we will also address specific applications of SST. In particular, we will study how SST applies to the context of audio source separation, and how recent extensions of SST to the multivariate setting may be used on EEG recordings for the study of emotional states, and then on ECG and PCG signals for the monitoring of fetal cardiac activity. Note that, in all the developed applications, and in terms of programming, we will aim to remain in line with the ASTRES-Toolbox.