Revealing stable and unstable modes of denoisers through nonlinear eigenvalue analysis

Ester Hait-Fraenkel, Guy Gilboa, Revealing stable and unstable modes of denoisers through nonlinear eigenvalue analysis. J. Vis. Commun. Image Represent. 75: 103041 (2021)

Ester Hait-Fraenkel, Guy Gilboa, arXiv

In this paper, we propose to analyze stable and unstable modes of generic image denoisers through nonlinear eigenvalue analysis. We attempt to find input images for which the output of a black-box denoiser is proportional to the input, treating this as a nonlinear eigenvalue problem. This has potentially wide implications, since most image processing algorithms can be viewed as generic nonlinear operators. We introduce a generalized nonlinear power method to solve eigenproblems for such black-box operators and use it to reveal stable modes of nonlinear denoisers. These modes are optimal inputs for the denoiser, achieving superior PSNR in noise removal. Analogously to the linear case (a low-pass filter), such stable modes are eigenfunctions corresponding to large eigenvalues, characterized by large piecewise-smooth structures. We also provide a method to generate the complementary, most unstable modes, which the denoiser suppresses strongly; these modes are textures corresponding to small eigenvalues. We validate the method using total variation (TV) and demonstrate it on the EPLL denoiser (Zoran-Weiss). Finally, we suggest an encryption-decryption application.
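
To make the procedure concrete, here is a minimal sketch of a nonlinear power iteration applied to a black-box denoiser; the function name, zero-mean normalization and stopping rule are illustrative assumptions, not necessarily the exact update rule of the paper.

```python
import numpy as np

def nonlinear_power_iteration(denoise, u0, n_iter=100, tol=1e-6):
    """Illustrative nonlinear power iteration for a black-box denoiser.

    Repeatedly applies the operator and re-normalizes, aiming at an input u for
    which denoise(u) is approximately proportional to u, i.e. an eigenpair
    T(u) = lambda * u. `denoise` can be any callable mapping an image array to
    an image array; only forward evaluations are required.
    """
    u = u0 - u0.mean()                 # work with zero-mean, unit-norm signals
    u = u / np.linalg.norm(u)
    lam = 0.0
    for _ in range(n_iter):
        Tu = denoise(u)
        Tu = Tu - Tu.mean()
        lam = np.dot(Tu.ravel(), u.ravel())           # Rayleigh-quotient estimate of lambda
        u_next = Tu / (np.linalg.norm(Tu) + 1e-12)    # re-normalize the iterate
        if np.linalg.norm(u_next - u) < tol:
            u = u_next
            break
        u = u_next
    return u, lam
```

Since only forward evaluations are used, any denoiser, classical or learned, can be plugged in as `denoise`.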

 

Experts with Lower-Bounded Loss Feedback: A Unifying Framework

Eyal Gofer, Guy Gilboa, arXiv preprint

The most prominent feedback models for the best expert problem are the full information and bandit models. In this work we consider a simple feedback model that generalizes both, where on every round, in addition to the bandit feedback, the adversary provides a lower bound on the loss of each expert. Such lower bounds may be obtained in various scenarios, for instance, in stock trading or in assessing the errors of certain measurement devices. For this model we prove optimal regret bounds (up to logarithmic factors) for modified versions of Exp3, generalizing algorithms and bounds both for the bandit and the full-information settings. Our second-order unified regret analysis simulates a two-step loss update and highlights three Hessian or Hessian-like expressions, which map to the full-information regret, the bandit regret, and a hybrid of both. Our results intersect with those for bandits with graph-structured feedback, in that both settings can accommodate feedback from an arbitrary subset of experts on each round. However, our model also accommodates partial feedback at the single-expert level, by allowing non-trivial lower bounds on each loss.
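
As a rough illustration of the feedback model, the sketch below runs a standard Exp3-style multiplicative-weights learner and uses the per-expert lower bounds as baselines inside an importance-weighted loss estimate. The baseline construction and the `get_feedback` interface are illustrative assumptions, not the modified Exp3 algorithms analyzed in the paper.

```python
import numpy as np

def exp3_with_lower_bounds(K, T, eta, get_feedback, rng=None):
    """Sketch of an Exp3-style learner that also exploits loss lower bounds.

    get_feedback(t, arm) is assumed to return (loss_of_chosen_arm, lower_bounds),
    where lower_bounds is a length-K array satisfying lower_bounds[i] <= loss of
    expert i on round t. Hypothetical interface for illustration only.
    """
    rng = rng or np.random.default_rng()
    log_w = np.zeros(K)                          # log-weights for numerical stability
    total_loss = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()
        arm = rng.choice(K, p=p)
        loss, lb = get_feedback(t, arm)          # bandit loss + vector of lower bounds
        total_loss += loss
        # Unbiased loss estimate: lower bound as a baseline, importance-weighted residual.
        est = np.asarray(lb, dtype=float).copy()
        est[arm] += (loss - lb[arm]) / p[arm]
        log_w -= eta * est                       # multiplicative-weights update on losses
    return total_loss
```

Note that the estimate is unbiased for every expert: unseen experts get their lower bound plus, in expectation, the residual above it.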

 

Iterative Methods for Computing Eigenvectors of Nonlinear Operators

Guy Gilboa, arXiv preprint 

A chapter to appear in Handbook of Mathematical Models and Algorithms in Computer Vision and Imaging.

 

In this chapter we examine several iterative methods for solving nonlinear eigenvalue problems. These arise in variational image processing, graph partitioning and classification, nonlinear physics and more. The canonical eigenproblem we solve is $T(u)=\lambda u$, where $T:\mathbb{R}^n\to \mathbb{R}^n$ is some bounded nonlinear operator. Other variations of eigenvalue problems are also discussed. We present a progression of five algorithms, coauthored in recent years by the author and colleagues. Each algorithm attempts to solve a unique problem or to improve the theoretical foundations. The algorithms can be understood as nonlinear PDEs which converge to an eigenfunction in the continuous time domain. This allows a unique view and understanding of the discrete iterative process. Finally, it is shown how to numerically evaluate the results, along with some examples and insights related to priors of nonlinear denoisers, both classical algorithms and ones based on deep networks.
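
As an example of the kind of numerical evaluation mentioned above, one simple criterion measures how close a candidate $u$ is to an eigenfunction of $T$ via the angle between $T(u)$ and $u$. The sketch below is illustrative; the chapter may use a related but differently normalized measure.

```python
import numpy as np

def eigenfunction_affinity(T, u):
    """Numerical test of how close u is to an eigenfunction of a nonlinear operator T.

    Returns the cosine of the angle between T(u) and u (1.0 means T(u) is exactly
    proportional to u), together with the induced eigenvalue estimate
    lambda = <T(u), u> / <u, u>.
    """
    Tu = np.asarray(T(u), dtype=float).ravel()
    v = np.asarray(u, dtype=float).ravel()
    cos_angle = np.dot(Tu, v) / (np.linalg.norm(Tu) * np.linalg.norm(v) + 1e-12)
    lam = np.dot(Tu, v) / (np.dot(v, v) + 1e-12)
    return cos_angle, lam
```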

NeurIPS 2020: Deeply Learned Spectral Total Variation Decomposition

Tamara G. Grossmann, Yury Korolev, Guy Gilboa, Carola-Bibiane Schönlieb, arXiv 2020

Accepted for NeurIPS 2020.

Non-linear spectral decompositions of images based on one-homogeneous functionals such as total variation have gained considerable attention in the last few years. Due to their ability to extract spectral components corresponding to objects of different size and contrast, such decompositions enable filtering, feature transfer, image fusion and other applications. However, obtaining this decomposition involves solving multiple non-smooth optimisation problems and is therefore computationally highly intensive. In this paper, we present a neural network approximation of a non-linear spectral decomposition. We report up to four orders of magnitude (×10,000) speedup in processing of mega-pixel size images, compared to classical GPU implementations. Our proposed network, TVSpecNET, is able to implicitly learn the underlying PDE and, despite being entirely data-driven, inherits invariances of the model-based transform. To the best of our knowledge, this is the first approach towards learning a non-linear spectral decomposition of images. Not only do we gain a staggering computational advantage, but this approach can also be seen as a step towards studying neural networks that can decompose an image into spectral components defined by a user rather than a handcrafted functional.
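
For reference, the model-based transform that TVSpecNET approximates can be written, in the standard spectral TV notation from the literature (which may differ slightly from the paper's notation), as

\[
\partial_t u(t) \in -\partial J_{TV}\big(u(t)\big), \quad u(0)=f, \qquad
\phi(t;x) = t\,\partial_{tt} u(t;x), \qquad
f(x) = \int_0^\infty \phi(t;x)\,dt + \bar f,
\]

where $u(t)$ is the TV gradient flow initialized with the image $f$, $\phi(t)$ are the spectral components (bands) at scale $t$, and $\bar f$ is the mean of $f$. Filtering amounts to re-weighting the bands before integrating them back into an image.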

 

Super-Pixel Sampler – a Data-driven Approach for Depth Sampling and Reconstruction

Adam Wolff, Shachar Praisler, Ilya Tcenov and Guy Gilboa, “Super-Pixel Sampler – a Data-driven Approach for Depth Sampling and Reconstruction”, accepted to ICRA (Int. Conf. on Robotics and Automation) 2020.

Paper

See the video of our mechanical prototype

Abstract

Depth acquisition, based on active illumination, is essential for autonomous and robotic navigation. LiDARs (Light Detection And Ranging) with mechanical, fixed sampling templates are commonly used in today’s autonomous vehicles. An emerging technology, based on solid-state depth sensors with no mechanical parts, allows fast and adaptive scans.

In this paper, we propose an adaptive, image-driven, fast sampling and reconstruction strategy. First, we formulate a piece-wise planar depth model and estimate its validity for indoor and outdoor scenes. Our model and experiments predict that, in the optimal case, adaptive sampling strategies with about 20-60 piece-wise planar structures can approximate a depth map well. This translates to requiring a single depth sample for every 1200 RGB samples, providing strong motivation to investigate an adaptive framework. We propose a simple, generic sampling and reconstruction algorithm based on super-pixels. Our sampling consistently improves over grid and random sampling for a wide variety of reconstruction methods. We also propose an extremely simple and fast reconstruction for our sampler. It achieves state-of-the-art results compared to complex image-guided depth completion algorithms, reducing the required sampling rate by a factor of 3-4.
A single-pixel depth camera built in our lab illustrates the concept.
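
The following sketch conveys the super-pixel sampling idea in a few lines: segment the RGB image into super-pixels, probe the depth sensor once per segment, and fill each segment with its sample. The `sample_depth(row, col)` sensor interface and the use of SLIC for segmentation are assumptions made for this example; the paper's actual pipeline may differ.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_sample_and_reconstruct(rgb, sample_depth, n_segments=600):
    """Illustrative superpixel-guided depth sampling and reconstruction.

    rgb:          HxWx3 image, used only to compute the sampling pattern.
    sample_depth: callable (row, col) -> depth value, standing in for the sensor
                  (hypothetical interface for this sketch).
    """
    labels = slic(rgb, n_segments=n_segments, compactness=10, start_label=0)
    depth = np.zeros(labels.shape, dtype=float)
    for s in np.unique(labels):
        rows, cols = np.nonzero(labels == s)
        r, c = int(rows.mean()), int(cols.mean())      # centroid of the superpixel
        depth[labels == s] = sample_depth(r, c)        # one depth sample per segment
    return depth
```

Filling each segment with a constant is the crudest reconstruction consistent with the piece-wise planar model; per-segment plane fits would be the natural next refinement.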

Self-Supervised Unconstrained Illumination Invariant Representation

Damian Kaliroff, Guy Gilboa, arXiv

We propose a new and completely data-driven approach for generating an unconstrained illumination invariant representation of images. Our method trains a neural network with a specialized triplet loss designed to emphasize actual scene changes while downplaying changes in illumination. For this purpose we use the BigTime image dataset, which contains static scenes acquired at different times. We analyze the attributes of our representation, and show that it improves patch matching and rigid registration over state-of-the-art illumination invariant representations. We point out that the utility of our method is not restricted to handling illumination invariance, and that it may be applied for generating representations which are invariant to other general types of undesired, nuisance image variations.
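
A minimal sketch of the kind of triplet loss described above is given here, assuming the anchor and positive are the same static scene under different illumination (e.g. two frames of a BigTime time-lapse) and the negative is a different scene; the exact loss, sampling strategy and embedding architecture of the paper may differ.

```python
import torch
import torch.nn.functional as F

def illumination_invariance_triplet_loss(embed, anchor, positive, negative, margin=0.2):
    """Sketch of a triplet loss for illumination-invariant representations.

    embed: network mapping a batch of images to embedding vectors.
    anchor/positive: same scene under different illumination -> should map close.
    negative: a different scene -> should map far.
    """
    za, zp, zn = embed(anchor), embed(positive), embed(negative)
    za, zp, zn = (F.normalize(z, dim=1) for z in (za, zp, zn))
    d_pos = (za - zp).pow(2).sum(dim=1)      # illumination change only: keep small
    d_neg = (za - zn).pow(2).sum(dim=1)      # genuine scene change: keep large
    return F.relu(d_pos - d_neg + margin).mean()
```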