publications
Publications in peer-reviewed journals and conferences
2023
- [Signal Processing] Online rank-revealing block-term tensor decomposition. Athanasios A. Rontogiannis, Eleftherios Kofidis, and Paris V. Giampouras. Signal Processing, 2023.
- [ICCV Workshop] Clustering-based domain-incremental learning. Christiaan Lamers, René Vidal, Nabil Belbachir, and 3 more authors. 2023.
- [arXiv] A Linearly Convergent GAN Inversion-based Algorithm for Reverse Engineering of Deceptions. Darshan Thaker, Paris Giampouras, and René Vidal. arXiv preprint arXiv:2306.04756, 2023.
- [ICML] The ideal continual learner: An agent that never forgets. Liangzu Peng, Paris Giampouras, and René Vidal. 2023.
2022
- [EUSIPCO] A Projected Newton-type Algorithm for Rank-Revealing Nonnegative Block-Term Tensor Decomposition. Eleftherios Kofidis, Paris Giampouras, and Athanasios A. Rontogiannis. 30th European Signal Processing Conference (EUSIPCO), 2022.
The block-term tensor decomposition (BTD) model has been receiving increasing attention as a quite flexible way to capture the structure of 3-dimensional data that can be naturally viewed as the superposition of R block terms of multilinear rank (L_r, L_r, 1), r = 1, 2, ..., R. Versions with nonnegativity constraints, especially relevant in applications like blind source separation, have only recently been proposed, and they all require a priori knowledge of the number of block terms, R, and their individual ranks, L_r. Clearly, the latter requirement may severely limit their practical applicability. Building upon our earlier work on unconstrained BTD model selection and computation, we develop in this paper, for the first time, a method for nonnegative BTD approximation that is also rank-revealing. The idea is to impose column sparsity jointly on the factors and successively estimate the ranks as the numbers of factor columns of non-negligible magnitude. This is effected with the aid of nonnegative alternating iteratively reweighted least squares, implemented via projected Newton updates for increased convergence rate and accuracy. Simulation results are reported that demonstrate the effectiveness of our method in accurately estimating both the ranks and the factors of the nonnegative least-squares BTD approximation.
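As a minimal illustration of the rank-revealing step described above (not the projected-Newton solver itself), the ranks can be read off by counting factor columns whose joint energy is non-negligible. The function name and threshold below are hypothetical:

```python
import numpy as np

def reveal_rank(A, B, tol=1e-3):
    """Count factor columns of non-negligible joint magnitude.

    A, B: estimated BTD factor matrices with matching column counts.
    Illustrative post-processing only; the paper's method drives
    superfluous columns to (near) zero via reweighted least squares first.
    """
    energies = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=0)
    return int(np.sum(energies > tol * energies.max()))
```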
- [ICLR] Implicit Bias of Projected Subgradient Method Gives Provable Robust Recovery of Subspaces of Unknown Codimension. Paris Giampouras, Benjamin Haeffele, and René Vidal. International Conference on Learning Representations, 2022.
Robust subspace recovery (RSR) is the problem of learning a subspace from sample data points corrupted by outliers. Dual Principal Component Pursuit (DPCP) is a robust subspace recovery method that aims to find a basis for the orthogonal complement of the subspace by minimizing the sum of the distances of the points to the subspace, subject to orthogonality constraints on the basis. Prior work has shown that DPCP can provably recover the correct subspace in the presence of outliers as long as the true dimension of the subspace is known. In this paper, we show that if the orthogonality constraints adopted in previous DPCP formulations are relaxed and random initialization is used instead of a spectral one, DPCP can provably recover a subspace of unknown dimension. Specifically, we propose a very simple algorithm based on running multiple instances of a projected subgradient descent method (PSGM), with each problem instance seeking to find one vector in the null space of the subspace. We theoretically prove that under mild conditions this approach succeeds with high probability. In particular, we show that 1) all of the problem instances will converge to a vector in the null space of the subspace and 2) the ensemble of problem-instance solutions will be sufficiently diverse to fully span the null space of the subspace, thus also revealing its true unknown codimension. We provide empirical results that corroborate our theoretical findings and showcase the remarkable implicit rank-regularization behavior of the PSGM algorithm, which allows us to perform RSR without knowing the subspace dimension.
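For intuition, a single problem instance of the projected subgradient method for the DPCP objective min over unit vectors b of the sum of |x_jᵀb| can be sketched as below; the fixed step size and iteration count are simplifications (the paper analyzes the step-size schedule and random initialization carefully):

```python
import numpy as np

def psgm_instance(X, mu=1e-2, iters=500, seed=None):
    """One PSGM instance seeking a unit vector in the null space of
    the inlier subspace. X is a (D, N) matrix of inliers plus outliers.
    Minimal sketch with a fixed step size."""
    rng = np.random.default_rng(seed)
    b = rng.standard_normal(X.shape[0])   # random, not spectral, init
    b /= np.linalg.norm(b)
    for _ in range(iters):
        g = X @ np.sign(X.T @ b)          # subgradient of sum_j |x_j^T b|
        b = b - mu * g
        b /= np.linalg.norm(b)            # project back onto the sphere
    return b
```

Running several such instances from independent random initializations and collecting the resulting vectors is what, per the paper, spans the full null space and reveals the codimension.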
- [IEEE TSP] Block-Term Tensor Decomposition Model Selection and Computation: The Bayesian Way. Paris Giampouras, Athanasios A. Rontogiannis, and Eleftherios Kofidis. IEEE Transactions on Signal Processing, 2022.
The so-called block-term decomposition (BTD) tensor model, especially in its rank-(L_r, L_r, 1) version, has been recently receiving increasing attention due to its enhanced ability to represent systems and signals that are composed of blocks of rank higher than one, a scenario encountered in numerous and diverse applications. Uniqueness conditions and fitting methods have thus been thoroughly studied. Nevertheless, the challenging problem of estimating the BTD model structure, namely the number of block terms, R, and their individual ranks, L_r, has only recently started to attract significant attention, mainly through regularization-based approaches, which entail the need to tune the regularization parameter(s). In this work, we build on ideas of sparse Bayesian learning (SBL) and put forward a fully automated Bayesian approach. Through a suitably crafted multi-level hierarchical probabilistic model, which gives rise to heavy-tailed prior distributions for the BTD factors, structured sparsity is jointly imposed. Ranks are then estimated from the numbers of blocks (R) and columns (L_r) of non-negligible energy. Approximate posterior inference is implemented within the variational inference framework. The resulting iterative algorithm completely avoids hyperparameter tuning, which is a significant defect of regularization-based methods. Alternative probabilistic models are also explored, and the connections with their regularization-based counterparts are brought to light with the aid of the associated maximum a posteriori (MAP) estimators. We report simulation results with both synthetic and real-world data, which demonstrate the merits of the proposed method in terms of both rank estimation and model fitting as compared to state-of-the-art relevant methods.
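Schematically, the kind of sparse-Bayesian hierarchy described can be written as below for a single factor column; the paper's actual model has further levels coupling the block (R) and column (L_r) structure, so this is only an illustrative sketch:

```latex
% Generic SBL-style hierarchy (illustrative, not the paper's exact model):
% a Gaussian prior per factor column with a Gamma hyperprior on its
% precision; integrating out the precision yields a heavy-tailed
% (Student-t) marginal that drives unneeded columns to zero.
\begin{align}
  \mathbf{a}_i \mid \gamma_i &\sim \mathcal{N}\big(\mathbf{0},\, \gamma_i^{-1}\mathbf{I}\big),\\
  \gamma_i &\sim \mathrm{Gamma}(\alpha,\beta),\\
  p(\mathbf{a}_i) &= \int p(\mathbf{a}_i \mid \gamma_i)\, p(\gamma_i)\, \mathrm{d}\gamma_i
  \quad \text{(heavy-tailed)}.
\end{align}
```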
- [ICML] Reverse Engineering ℓ_p Attacks: A Block-Sparse Optimization Approach with Recovery Guarantees. Darshan Thaker*, Paris Giampouras*, and René Vidal (* equal contribution). 39th International Conference on Machine Learning, 2022.
Deep neural network-based classifiers have been shown to be vulnerable to imperceptible perturbations to their input, such as ℓ_p norm-bounded adversarial attacks. This has motivated the development of many defense methods, which are then broken by new attacks, and so on. This paper focuses on a different but related problem of reverse engineering adversarial attacks. Specifically, given an attacked signal, we study conditions under which one can determine the type of attack (ℓ_1, ℓ_2, or ℓ_∞) and recover the clean signal. We pose this problem as a block-sparse recovery problem, where both the signal and the attack are assumed to lie in a union of subspaces that includes one subspace per class and one subspace per attack type. We derive geometric conditions on the subspaces under which any attacked signal can be decomposed as the sum of a clean signal plus an attack. In addition, by determining the subspaces that contain the signal and the attack, we can also classify the signal and determine the attack type. Experiments on digit and face classification demonstrate the effectiveness of the proposed approach.
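In symbols, the decomposition described can be sketched as the following block-sparse program, where D stacks one subdictionary per class and B one subdictionary per attack type; the notation and the penalty weighting are mine, not the paper's:

```latex
% Schematic block-sparse decomposition of an attacked signal y:
% block sparsity of c selects the class, block sparsity of a selects
% the attack type (illustrative formulation).
\min_{\mathbf{c},\,\mathbf{a}}\;
  \sum_{k=1}^{K} \big\|\mathbf{c}_{[k]}\big\|_2
  \;+\; \lambda \sum_{m=1}^{M} \big\|\mathbf{a}_{[m]}\big\|_2
\quad \text{s.t.} \quad
  \mathbf{y} = \mathbf{D}\mathbf{c} + \mathbf{B}\mathbf{a}.
```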
2021
- [ICASSP] Rank-Revealing Block-Term Decomposition for Tensor Completion. Athanasios A. Rontogiannis, Paris V. Giampouras, and Eleftherios Kofidis. 2021.
- [Asilomar] Online rank-revealing block-term tensor decomposition. Athanasios A. Rontogiannis, Eleftherios Kofidis, and Paris Giampouras. 55th Asilomar Conference on Signals, Systems, and Computers, 2021.
The so-called block-term decomposition (BTD) tensor model, especially in its rank-(L_r, L_r, 1) version, has been recently receiving increasing attention due to its enhanced ability to represent systems and signals that are composed of block components of rank higher than one, a scenario encountered in numerous and diverse applications. Its uniqueness and approximation have thus been thoroughly studied. The challenging problem of estimating the BTD model structure, namely the number of block terms (rank) and their individual (block) ranks, is of crucial importance in practice and has only recently started to attract significant attention. In data-streaming scenarios and/or big-data applications, where the tensor dimension in one of its modes grows in time or can only be processed incrementally, it is essential to be able to perform model selection and computation in a recursive (incremental/online) manner. To date, there is only one such work in the literature, concerning the general rank-(L, M, N) BTD model, which proposes an incremental method, however with the BTD rank and block ranks assumed to be a priori known and time-invariant. In this paper, a novel approach to rank-(L_r, L_r, 1) BTD model selection and tracking is proposed, based on the idea of imposing column sparsity jointly on the factors and estimating the ranks as the numbers of factor columns of non-negligible magnitude. An online method of the alternating reweighted least squares (RLS) type is developed and shown to be computationally efficient and fast-converging, also allowing the model ranks to change in time. Its time and memory efficiency are evaluated and favorably compared with those of the batch approach. Simulation results are reported that demonstrate the effectiveness of the proposed scheme in both selecting and tracking the correct BTD model.
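As a toy illustration of the rank-tracking idea (not the paper's RLS recursions), one can exponentially smooth the factor-column energies over time and re-count the non-negligible ones at each step; the forgetting factor lam and threshold tol are hypothetical:

```python
import numpy as np

def track_rank(e, A_t, lam=0.95, tol=1e-3):
    """Update smoothed column energies e with the current factor
    estimate A_t and re-estimate the rank at time t (toy sketch;
    lets the estimated rank change as the data evolve)."""
    e = lam * e + (1.0 - lam) * np.sum(A_t**2, axis=0)
    return e, int(np.sum(e > tol * e.max()))
```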
- [IEEE JSTSP] Block-Term Tensor Decomposition: Model Selection and Computation. Athanasios A. Rontogiannis, Eleftherios Kofidis, and Paris Giampouras. IEEE Journal of Selected Topics in Signal Processing, 2021.
The so-called block-term decomposition (BTD) tensor model has been recently receiving increasing attention due to its enhanced ability to represent systems and signals that are composed of blocks of rank higher than one, a scenario encountered in numerous and diverse applications. Its uniqueness and approximation have thus been thoroughly studied. Nevertheless, the challenging problem of estimating the BTD model structure, namely the number of block terms and their individual ranks, has only recently started to attract significant attention. In this paper, a novel method of BTD model selection and computation is proposed, based on the idea of imposing column sparsity jointly on the factors in a hierarchical manner and estimating the ranks as the numbers of factor columns of non-negligible magnitude. Adopting a block successive upper bound minimization (BSUM) approach to the resulting optimization problem is shown to yield an alternating hierarchical iteratively reweighted least squares (HIRLS) algorithm, which is fast-converging and enjoys high computational efficiency, as it relies in its iterations on small-sized sub-problems with closed-form solutions. Simulation results for both synthetic examples and a hyperspectral image denoising application are reported, which demonstrate the superiority of the proposed scheme over the state-of-the-art in terms of success rate in rank estimation as well as computation time and rate of convergence, while attaining comparable tensor approximation performance.
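Schematically, imposing column sparsity jointly and in a hierarchical manner can be pictured as a two-level group penalty on the per-block factors A_r = [a_{r,1}, ..., a_{r,L}]: an outer term that can zero out whole blocks (revealing R) and an inner term that can zero out columns within surviving blocks (revealing L_r). The form below is my paraphrase, not the paper's exact regularizer:

```latex
% Two-level (hierarchical) column-sparsity penalty, schematically:
% the outer sum prunes entire block terms, the inner norms prune
% individual columns within each block (paraphrase, not verbatim).
\Omega\big(\{\mathbf{A}_r\}\big) \;=\;
  \sum_{r=1}^{R} \Big( \sum_{l=1}^{L} \|\mathbf{a}_{r,l}\|_2^2 \Big)^{1/2}
  \;+\; \mu \sum_{r=1}^{R} \sum_{l=1}^{L} \|\mathbf{a}_{r,l}\|_2 .
```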
2020
- [NeurIPS] A novel variational form of the Schatten-p quasi-norm. Paris Giampouras, René Vidal, Athanasios Rontogiannis, and 1 more author. 2020.
The Schatten-p quasi-norm with p ∈ (0, 1) has recently gained considerable attention in various low-rank matrix estimation problems, offering significant benefits over convex heuristics such as the nuclear norm. However, due to the nonconvexity of the Schatten-p quasi-norm, its minimization suffers from two major drawbacks: 1) the lack of theoretical guarantees and 2) the high computational cost that the minimization task demands, even for modest goals such as finding stationary points. In an attempt to reduce the high computational cost induced by Schatten-p quasi-norm minimization, variational forms, which are defined over smaller-size matrix factors whose product equals the original matrix, have been proposed. Here, we propose and analyze a novel variational form of the Schatten-p quasi-norm which, for the first time in the literature, is defined for any continuous value of p ∈ (0, 1] and decouples along the columns of the factorized matrices. The proposed form can be considered the natural generalization of the well-known variational form of the nuclear norm to the nonconvex case, i.e., for p ∈ (0, 1). Notably, low-rankness is now imposed via a group-sparsity-promoting regularizer. The resulting formulation lends itself to SVD-free algorithms, thus offering lower computational complexity than that induced by the original definition of the Schatten-p quasi-norm. A local optimality analysis is provided, which shows that we can arrive at a local minimum of the original Schatten-p quasi-norm problem by reaching a local minimum of the matrix-factorization-based surrogate problem. In addition, for the case of the squared Frobenius loss with linear operators obeying the restricted isometry property (RIP), a rank-one update scheme is proposed, which offers a way to escape poor local minima. Finally, the efficiency of our approach is empirically shown on a matrix completion problem.
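For context, the well-known variational form of the nuclear norm decouples over the columns of the factors, and a column-decoupled Schatten-p analogue can be written as below (my schematic rendering; see the paper for the precise form proposed and analyzed there):

```latex
% Column-wise variational form of the nuclear norm (p = 1):
\|\mathbf{X}\|_{*} \;=\; \min_{\mathbf{U}\mathbf{V}^{\top}=\mathbf{X}}\;
  \sum_{i} \|\mathbf{u}_i\|_2\,\|\mathbf{v}_i\|_2 ,
% and a Schatten-p analogue for p in (0, 1], decoupling over columns:
\|\mathbf{X}\|_{S_p}^{p} \;=\; \min_{\mathbf{U}\mathbf{V}^{\top}=\mathbf{X}}\;
  \sum_{i} \big(\|\mathbf{u}_i\|_2\,\|\mathbf{v}_i\|_2\big)^{p}.
```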
- [IEEE SPL] Online Reweighted Least Squares Robust PCA. Athanasios A. Rontogiannis, Paris Giampouras, and Konstantinos D. Koutroumbas. IEEE Signal Processing Letters, 2020.
This letter deals with the problem known as robust principal component analysis (RPCA), that is, the decomposition of a data matrix into the sum of a low-rank matrix component and a sparse matrix component. After expressing the low-rank matrix component in factorized form, we develop a novel online RPCA algorithm that is based entirely on reweighted least squares recursions and is appropriate for sequential data processing. The proposed algorithm is fast, memory-optimal and, as corroborated by indicative empirical results on simulated data and a video processing application, competitive with the state-of-the-art in terms of estimation performance.
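A per-column caricature of the factorized low-rank-plus-sparse split (the letter's algorithm uses reweighted least squares recursions throughout; this sketch replaces them with a one-shot least-squares fit and soft-thresholding, and the names are hypothetical):

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def rpca_step(U, y, lam=0.1):
    """Split one incoming data column y into a component in the
    current low-rank factor span U and a sparse outlier component."""
    v, *_ = np.linalg.lstsq(U, y, rcond=None)  # low-rank coefficients
    s = soft(y - U @ v, lam)                   # sparse component
    return v, s
```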
2019
- [ICASSP] A Projected Newton-type Algorithm for Nonnegative Matrix Factorization with Model Order Selection. Paris V. Giampouras, Athanasios A. Rontogiannis, and Konstantinos D. Koutroumbas. 2019.
- [IEEE TSP] Alternating Iteratively Reweighted Least Squares Minimization for Low-Rank Matrix Factorization. Paris Giampouras, Athanasios A. Rontogiannis, and Konstantinos D. Koutroumbas. IEEE Transactions on Signal Processing, 2019.
Nowadays, the availability of large-scale data in disparate application domains urges the deployment of sophisticated tools for extracting valuable knowledge out of this huge bulk of information. In that vein, low-rank representations (LRRs), which seek low-dimensional embeddings of data, have naturally appeared. In an effort to reduce computational complexity and improve estimation performance, LRR has been viewed from a matrix factorization (MF) perspective. Recently, low-rank MF (LRMF) approaches have been proposed to tackle the inherent weakness of MF, i.e., its unawareness of the dimension of the low-dimensional space where the data reside. Herein, inspired by the merits of iterative reweighted schemes for sparse recovery and rank minimization, we come up with a generic low-rank-promoting regularization function. Then, focusing on a specific instance of it, we propose a regularizer that imposes column sparsity jointly on the two matrix factors that result from MF, thus promoting low-rankness in the optimization problem. The low-rank-promoting properties of the resulting regularization term are brought to light by mathematically showing that it is actually a tight upper bound of a specific version of the weighted nuclear norm. The problems of denoising and matrix completion are redefined according to the new LRMF formulation and solved via efficient alternating iteratively reweighted least squares type algorithms. Theoretical analysis of the algorithms regarding convergence and rates of convergence to stationary points is provided. The effectiveness of the proposed algorithms is verified in diverse simulated and real-data experiments.
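A minimal sketch of an alternating IRLS scheme of the kind described, with a joint column-sparsity weight shared by the two factors; the closed-form ridge-type updates below are standard, but the exact regularizer and weight rule in the paper differ in detail:

```python
import numpy as np

def airls(Y, d, lam=0.1, eta=1e-6, iters=100, seed=0):
    """Alternating iteratively reweighted least squares sketch for
    min ||Y - U V^T||_F^2 + lam * sum_i w_i (||u_i||^2 + ||v_i||^2),
    with w_i recomputed from the current joint column norms so that
    superfluous columns of U and V shrink toward zero together."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    U = rng.standard_normal((m, d))
    V = rng.standard_normal((n, d))
    for _ in range(iters):
        # reweighting: small joint columns get large weights, and shrink
        w = 1.0 / np.sqrt(np.sum(U**2, 0) + np.sum(V**2, 0) + eta**2)
        U = Y @ V @ np.linalg.inv(V.T @ V + lam * np.diag(w))
        V = Y.T @ U @ np.linalg.inv(U.T @ U + lam * np.diag(w))
    return U, V
```

The effective rank is then read off as the number of columns whose joint norm stays above a small threshold.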
2018
- [IEEE SPL] A Computationally Efficient Tensor Completion Algorithm. Ioannis C. Tsaknakis, Paris V. Giampouras, Athanasios A. Rontogiannis, and 1 more author. IEEE Signal Processing Letters, 2018.
2017
- [Elsevier SP] Online sparse and low-rank subspace learning from incomplete data: A Bayesian view. Paris Giampouras, Athanasios A. Rontogiannis, Konstantinos E. Themelis, and 1 more author. Signal Processing, 2017.
Extracting the underlying low-dimensional space where high-dimensional signals often reside has been at the center of numerous algorithms in the signal processing and machine learning literature during the past few decades. Moreover, working with incomplete, large-scale datasets has recently become commonplace for diverse reasons. The so-called big data era we are currently living in calls for devising online subspace learning algorithms that can suitably handle incomplete data. Their goal is to recursively estimate the unknown subspace by processing streaming data sequentially, thus reducing computational complexity. In this paper, an online variational Bayes subspace learning algorithm from partial observations is presented. To account for the unawareness of the true rank of the subspace, commonly met in practice, low-rankness is explicitly imposed on the sought subspace data matrix by exploiting sparse Bayesian learning principles. Sparsity, simultaneously with low-rankness, is favored on the subspace matrix by the sophisticated hierarchical Bayesian scheme that is adopted. The proposed algorithm is thus adept at dealing with applications whereby the underlying subspace may also be sparse. The new subspace tracking scheme outperforms its state-of-the-art counterparts in terms of estimation accuracy in a variety of experiments conducted on both simulated and real data.
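For intuition only, a deterministic caricature of one step of online subspace tracking from incomplete data; the paper's algorithm is variational Bayes with sparsity- and low-rankness-promoting hierarchical priors, whereas this sketch is a plain least-squares analogue with hypothetical names:

```python
import numpy as np

def subspace_step(U, y, mask, mu=0.1):
    """Process one partially observed vector y (observed where mask
    is True): fit coefficients on the observed entries, then nudge
    the basis U toward the residual (rank-one update)."""
    Uo = U[mask]                                     # visible rows of U
    w, *_ = np.linalg.lstsq(Uo, y[mask], rcond=None)
    r = np.zeros(len(y))
    r[mask] = y[mask] - Uo @ w                       # observed residual
    return U + mu * np.outer(r, w)
```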
2016
- [IEEE TGRS] Simultaneously Sparse and Low-Rank Abundance Matrix Estimation for Hyperspectral Image Unmixing. Paris Giampouras, Konstantinos E. Themelis, Athanasios A. Rontogiannis, and 1 more author. IEEE Transactions on Geoscience and Remote Sensing, 2016.
In a plethora of applications dealing with inverse problems, e.g., image processing, social networks, compressive sensing, and biological data processing, the signal of interest is known to be structured in several ways at the same time. This premise has recently guided research into the innovative and meaningful idea of imposing multiple constraints on the unknown parameters involved in the problem under study. For instance, when dealing with problems whose unknown parameters form sparse and low-rank matrices, the adoption of suitably combined constraints imposing sparsity and low-rankness is expected to yield substantially enhanced estimation results. In this paper, we address the spectral unmixing problem in hyperspectral images. Specifically, two novel unmixing algorithms are introduced in an attempt to exploit both spatial correlation and sparse representation of pixels lying in the homogeneous regions of hyperspectral images. To this end, a novel mixed penalty term is first defined, consisting of the sum of the weighted ℓ_1 norm and the weighted nuclear norm of the abundance matrix corresponding to a small area of the image determined by a sliding square window. This penalty term is then used to regularize a conventional quadratic cost function and impose simultaneous sparsity and low-rankness on the abundance matrix. The resulting regularized cost function is minimized by: 1) an incremental proximal sparse and low-rank unmixing algorithm; and 2) an algorithm based on the alternating direction method of multipliers. The effectiveness of the proposed algorithms is illustrated in experiments conducted both on simulated and real data.
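In symbols, the mixed penalty described amounts to regularizing the quadratic unmixing cost for the abundance matrix A of a sliding window with a weighted ℓ_1 term and a weighted nuclear-norm term (notation mine):

```latex
% Simultaneously sparse and low-rank abundance estimation, schematically:
% Y holds the window's pixels, Phi the endmember signatures, W a weight
% matrix for the l1 term, and ||.||_{w,*} a weighted nuclear norm.
\min_{\mathbf{A}}\;
  \tfrac{1}{2}\big\|\mathbf{Y} - \boldsymbol{\Phi}\mathbf{A}\big\|_F^2
  \;+\; \lambda \big\|\mathbf{W} \odot \mathbf{A}\big\|_1
  \;+\; \tau \big\|\mathbf{A}\big\|_{w,*} .
```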
2015
- [CoSeRa] A sparse reduced-rank regression approach for hyperspectral image unmixing. Paris V. Giampouras, Athanasios A. Rontogiannis, Konstantinos D. Koutroumbas, and 1 more author. 2015.
2014
- [IEEE WHISPERS] A variational Bayes algorithm for joint-sparse abundance estimation. Paris Giampouras, Konstantinos E. Themelis, Athanasios A. Rontogiannis, and 1 more author. 2014.
This paper presents a variational Bayesian scheme for semi-supervised unmixing of hyperspectral images that exploits the inherent spatial correlation between neighboring pixels. More specifically, a hierarchical Bayesian model that promotes a joint-sparse profile on the abundance vectors of adjacent pixels is developed, and a computationally efficient variational Bayes algorithm is employed to perform Bayesian inference. The benefits of the proposed joint-sparse model are demonstrated via simulations on both synthetic and real data.
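The deterministic analogue of the joint-sparsity prior may help fix ideas: stacking the abundance vectors of adjacent pixels as columns of a matrix A, a joint-sparse profile means A has few nonzero rows, whose usual convex surrogate is the row-wise ℓ_{2,1} norm (illustrative; the paper's model is hierarchical Bayesian, not a penalized fit):

```latex
\|\mathbf{A}\|_{2,1} \;=\; \sum_{i} \big\|\mathbf{A}_{i,:}\big\|_2 .
```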