# Flows Generating Nonlinear Eigenfunctions

*Image: eigenfunction of TGV found by the proposed flow.*

Abstract:

Nonlinear variational methods have become very powerful tools for many image processing tasks. Recently a new line of research has emerged, dealing with nonlinear eigenfunctions induced by convex functionals. This has provided new insights and a better theoretical understanding of convex regularization, and has introduced new processing methods. However, the theory of nonlinear eigenvalue problems is still in its infancy. We present a new flow that can generate nonlinear eigenfunctions of the form $T(u)=\lambda u$, where $T(u)$ is a nonlinear operator and $\lambda \in \mathbb{R}$ is the eigenvalue. We develop the theory for the case where $T(u)$ is a subgradient element of a regularizing one-homogeneous functional, such as total-variation (TV) or total-generalized-variation (TGV). We introduce two flows: a forward flow and an inverse flow, for which the steady-state solution is a nonlinear eigenfunction. The forward flow monotonically smooths the solution (with respect to the regularizer) and simultaneously increases the $L^2$ norm. The inverse flow has the opposite characteristics. For both flows, the steady state depends on the initial condition, thus different initial conditions yield different eigenfunctions. This enables a deeper investigation into the space of nonlinear eigenfunctions, allowing us to numerically produce diverse examples, some of which may not yet be known. In addition, we suggest an indicator to measure the affinity of a function to an eigenfunction and relate it to pseudo-eigenfunctions in the linear case.
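As a minimal numerical illustration of the eigenvalue relation $T(u)=\lambda u$ and of an eigenfunction-affinity indicator, the sketch below works in the linear case, with a 1-D discrete Laplacian standing in for the operator $T$. The cosine-based affinity and all function names are illustrative choices, not the paper's exact indicator:

```python
import numpy as np

def eigen_affinity(T_u, u):
    """Cosine similarity between T(u) and u: its magnitude is 1
    exactly when T(u) = lambda * u for some scalar lambda."""
    return np.dot(T_u, u) / (np.linalg.norm(T_u) * np.linalg.norm(u))

def rayleigh_quotient(T_u, u):
    """Eigenvalue estimate lambda = <T(u), u> / <u, u>."""
    return np.dot(T_u, u) / np.dot(u, u)

# Linear sanity check: the 1-D discrete Laplacian with Dirichlet
# boundaries has sine eigenvectors, so the affinity magnitude is 1.
n = 64
L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
u = np.sin(np.pi * np.arange(1, n + 1) / (n + 1))   # first eigenvector
print(eigen_affinity(L @ u, u))      # -1: eigenvalue is negative
print(rayleigh_quotient(L @ u, u))   # 2*cos(pi/(n+1)) - 2
```

For a function far from any eigenfunction, the affinity magnitude drops well below 1, which is what makes it usable as a stopping or quality indicator for a flow.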

# Blind Facial Image Quality Enhancement using Non-Rigid Semantic Patches

### Ester Hait, Guy Gilboa, “Blind Facial Image Quality Enhancement using Non-Rigid Semantic Patches”, accepted to IEEE Trans. Image Processing, 2017.

Abstract:

We propose to combine semantic data and registration algorithms to solve various image processing problems such as denoising, super-resolution and color-correction.
It is shown how such new techniques can achieve significant quality enhancement, both visually and quantitatively, in the case of facial image enhancement. Our model assumes prior high-quality data of the person to be processed, but no knowledge of the degradation model.
We try to overcome classical processing limits by using semantically-aware patches as building blocks: regions of coherent structure and context, with adaptive size and location. The method is demonstrated on the problem of cellular-photography enhancement of dark facial images for different identities, expressions and poses.

# Semi-Inner-Products for Convex Functionals and Their Use in Image Decomposition

### Guy Gilboa, Journal of Mathematical Imaging and Vision (JMIV), Vol. 57, No. 1, pp. 26-42, 2017.

Abstract:

Semi-inner-products in the sense of Lumer are extended to convex functionals. This yields a Hilbert-space-like structure for convex functionals in Banach spaces. In particular, a general expression for semi-inner-products with respect to one-homogeneous functionals is given. Thus one can use the new operator for the analysis of total variation and higher-order functionals like total-generalized-variation (TGV). Given a semi-inner-product, an angle between functions can be defined in a straightforward manner. It is shown that in the one-homogeneous case the Bregman distance can be expressed in terms of this newly defined angle. In addition, properties of the semi-inner-product of nonlinear eigenfunctions induced by the functional are derived. We use this construction to state a sufficient condition for a perfect decomposition of two signals and suggest numerical measures which indicate when those conditions are approximately met.
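To make the notion of an angle between functions concrete, here is a minimal sketch in the standard Hilbert-space setting, where the semi-inner-product reduces to the usual $L^2$ inner product; the general one-homogeneous case requires the construction from the paper, so this is only the linear special case:

```python
import numpy as np

def angle_between(u, v):
    """Angle (radians) between two discrete signals under the standard
    L2 inner product -- the linear special case of the semi-inner-product."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

# sin and cos over a full period are orthogonal, so the angle is pi/2;
# an angle of pi/2 is the ideal condition for a perfect two-signal
# decomposition in this linear setting.
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u, v = np.sin(t), np.cos(t)
theta = angle_between(u, v)
print(theta)   # ~ pi/2
```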

# Nonlinear Spectral Image Fusion

### M Benning, M. Moeller, R. Nossek, M. Burger, D. Cremers, G. Gilboa, C. Schoenlieb, “Nonlinear Spectral Image Fusion”, Proc. Scale-Space and Variational Methods (SSVM), 2017.

Abstract:

In this paper we demonstrate that the framework of nonlinear spectral decompositions based on total variation (TV) regularization is very well suited for image fusion, as well as more general image manipulation tasks. The well-localized and edge-preserving spectral TV decomposition allows one to select frequencies of a certain image to transfer particular features, such as wrinkles in a face, from one image to another. We illustrate the effectiveness of the proposed approach in several numerical experiments, including a comparison to the competing techniques of Poisson image editing, linear osmosis, wavelet fusion and Laplacian pyramid fusion. We conclude that the proposed spectral TV image decomposition framework is a valuable tool for semi- and fully-automatic image editing and fusion.
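As a rough, purely linear stand-in for the spectral band transfer described above, the sketch below swaps a Fourier-frequency band between two images. The actual method uses the edge-preserving spectral TV transform rather than the FFT, so this is only an analogy for the "select a band, transfer it" idea:

```python
import numpy as np

def band_transfer(a, b, lo, hi):
    """Replace the Fourier band [lo, hi) (cycles per image) of image a
    with the corresponding band of image b.  Linear analogy only; the
    paper's method uses the spectral TV decomposition instead."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    fy = np.fft.fftfreq(a.shape[0]) * a.shape[0]
    fx = np.fft.fftfreq(a.shape[1]) * a.shape[1]
    r = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))  # radial frequency
    band = (r >= lo) & (r < hi)
    A[band] = B[band]                 # swap the selected band
    return np.fft.ifft2(A).real      # mask is symmetric, so result is real

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64))
b = rng.standard_normal((64, 64))
fused = band_transfer(a, b, 8, 16)   # mid-band features of b placed in a
```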

# In Situ Target-Less Calibration of Turbid Media

### O. Spier, T. Treibitz, G. Gilboa, “In situ target-less calibration of turbid media”, Int. Conf. on Computational Photography (ICCP), Stanford Univ., 2017.

Abstract:

The color of an object imaged in a turbid medium varies with distance and medium properties, making color an unstable source of information. As estimating 3D scene structure has become relatively easy, the main challenge in color recovery is calibrating the medium properties in situ, at the time of acquisition. Existing attenuation calibration methods use either color charts, external hardware, or multiple images of an object. Here we show that none of these is needed for calibration. We suggest a method for estimating the medium properties (both attenuation and scattering) using only images of backscattered light from the system’s light sources. This is advantageous in turbid media, where the object signal is noisy, and it also alleviates the need for correspondence matching, which can be difficult in high turbidity. We demonstrate the advantages of our method through simulations and in a real-life experiment at sea.
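For intuition on what attenuation calibration involves, here is a generic sketch that recovers an attenuation coefficient from intensity-versus-distance samples under a single-exponential (Beer-Lambert) falloff. This is not the paper's backscatter-based estimator, and all names are illustrative:

```python
import numpy as np

def fit_attenuation(z, intensity):
    """Least-squares fit of I(z) = I0 * exp(-c * z) in log space.
    Returns (I0, c).  Generic Beer-Lambert illustration only."""
    A = np.vstack([np.ones_like(z), -z]).T          # columns: [1, -z]
    coeffs, *_ = np.linalg.lstsq(A, np.log(intensity), rcond=None)
    return np.exp(coeffs[0]), coeffs[1]

# Synthetic check: recover a known attenuation coefficient.
z = np.linspace(0.5, 5.0, 50)
I0_true, c_true = 2.0, 0.4
I = I0_true * np.exp(-c_true * z)
I0, c = fit_attenuation(z, I)
print(I0, c)   # ~2.0, ~0.4
```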

# Robust Recovery of Heavily Degraded Depth Measurements

### G. Drozdov, Y. Shapiro, G. Gilboa, “Robust recovery of heavily degraded depth measurements”, Int. Conf. On 3D Vision (3DV), Stanford University, 2016.

Abstract:

The revolution of RGB-D sensors is advancing towards mobile platforms for robotics, autonomous vehicles and consumer hand-held devices. Strong pressures on power consumption and system price require new powerful algorithms that can robustly handle very low quality raw data.
In this paper we demonstrate the ability to reliably recover depth measurements from a variety of highly degraded depth modalities, coupled with standard RGB imagery. The method is based on a regularizer which fuses super-pixel information with the total-generalized-variation (TGV) functional.
We examine our algorithm on several different degradations, including Intel's new RealSense hand-held device, LiDAR-type data and ultra-sparse random sampling. In all heavily degraded modalities, our robust algorithm achieves superior performance over the state of the art. Additionally, a robust error measure based on Tukey's biweight metric is suggested, which is better at ranking algorithm performance since it does not reward blurry, non-physical depth results.
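Tukey's biweight is a standard robust loss: quadratic-like near zero and saturating for large residuals, so gross outliers and blurry "hedged" depth values add only a bounded penalty. A minimal sketch of an error measure built on it (the clipping scale `c` below is an illustrative choice, not the paper's) might look like:

```python
import numpy as np

def tukey_biweight(err, c=1.0):
    """Tukey's biweight loss: ~quadratic for |err| << c, and constant
    at c**2/6 for |err| >= c, so outliers have bounded influence."""
    e = np.asarray(err, dtype=float)
    inlier = np.abs(e) <= c
    rho = np.full_like(e, c * c / 6.0)               # saturated value
    rho[inlier] = (c * c / 6.0) * (1.0 - (1.0 - (e[inlier] / c) ** 2) ** 3)
    return rho

def robust_depth_error(pred, gt, c=0.1):
    """Mean Tukey-biweight error between two depth maps; the scale c
    is an assumed value for illustration."""
    return float(np.mean(tukey_biweight(pred - gt, c)))

errs = np.array([0.0, 0.05, 2.0])   # metres; 2.0 is a gross outlier
print(tukey_biweight(errs, c=0.1))  # the outlier saturates at c**2/6
```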

# A Depth Restoration Occlusionless Temporal Dataset

### D. Rotman, G. Gilboa, “A depth restoration occlusionless temporal dataset”, Int. Conf. On 3D Vision (3DV), Stanford University, 2016.

Abstract:

Depth restoration, the task of correcting depth noise and artifacts, has recently risen in popularity due to the increase in commodity depth cameras. When assessing the quality of existing methods, most researchers resort to the popular Middlebury dataset; however, this dataset was not created for depth enhancement, and therefore lacks the option of comparing genuine low-quality depth images with their high-quality, ground-truth counterparts. To address this shortcoming, we present the Depth Restoration Occlusionless Temporal (DROT) dataset. This dataset offers real depth-sensor input coupled with registered pixel-to-pixel color images and the ground-truth depth to which we wish to compare. Our dataset includes not only Kinect 1 and Kinect 2 data, but also an Intel R200 sensor intended for integration into hand-held devices. Beyond this, we present a new temporal depth-restoration method. Utilizing multiple frames, we create a number of possibilities for an initial degraded depth map, which allows us to arrive at a more educated decision when refining depth images. Evaluating this method on our dataset shows significant benefits, particularly in overcoming real sensor-noise artifacts.
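A much-simplified stand-in for the multi-frame initialization described above is a per-pixel temporal median over valid measurements; the zero-means-missing convention and the median choice are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def temporal_init(depth_frames):
    """Initial depth estimate from several degraded frames: per-pixel
    median over valid (non-zero) measurements.  Simplified stand-in
    for a multi-frame initialization."""
    stack = np.stack(depth_frames).astype(float)
    stack[stack == 0] = np.nan            # zero marks a missing measurement
    init = np.nanmedian(stack, axis=0)    # robust to per-frame outliers
    return np.nan_to_num(init, nan=0.0)   # pixels missing in every frame

frames = [np.array([[1.0, 0.0]]),         # second pixel missing here
          np.array([[1.0, 2.0]]),
          np.array([[5.0, 2.0]])]         # first pixel is an outlier here
print(temporal_init(frames))              # [[1. 2.]]
```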

# Spectral Decompositions using One-Homogeneous Functionals

### Martin Burger, Guy Gilboa, Michael Moeller, Lina Eckardt, Daniel Cremers “Spectral Decompositions using One-Homogeneous Functionals”, SIAM Journal on Imaging Sciences, Vol. 9, No. 3, pp. 1374-1408, 2016.

Abstract:

This paper discusses the use of absolutely one-homogeneous regularization functionals in a variational, scale space, and inverse scale space setting to define a nonlinear spectral decomposition of input data. We present several theoretical results that explain the relation between the different definitions. Additionally, results on the orthogonality of the decomposition, a Parseval-type identity, and the notion of generalized (nonlinear) eigenvectors closely link our nonlinear multiscale decompositions to the well-known linear filtering theory. Numerical results are used to illustrate our findings.

# Separation Surfaces in the Spectral TV Domain for Texture Decomposition

### Dikla Horesh and Guy Gilboa, “Separation Surfaces in the Spectral TV Domain for Texture Decomposition”, IEEE Trans. Image Processing, Vol. 25, No. 9, pp. 4260 – 4270, 2016.

Abstract:

In this paper we introduce a novel notion of separation surfaces for image decomposition. A surface is embedded in the three-dimensional spectral total-variation (TV) domain and encodes a spatially varying separation scale. The method allows good separation of textures with gradually varying pattern size, pattern contrast or illumination. The recently proposed total-variation spectral framework is used to decompose the image into a continuum of textural scales. A desired texture, within a scale range, is found by fitting a surface to the local maximal responses in the spectral domain. A band above and below the surface, referred to as the Texture Stratum, defines for each pixel the adaptive scale range of the texture. Based on the decomposition, an application is proposed which can attenuate or enhance textures in the image in a very natural and visually convincing manner.

# Nonlinear Spectral Analysis via One-homogeneous Functionals – Overview and Future Prospects

### Guy Gilboa, Michael Moeller, Martin Burger, “Nonlinear Spectral Analysis via One-homogeneous Functionals – Overview and Future Prospects”, accepted to Journal of Mathematical Imaging and Vision (JMIV), 2016.

Abstract:

We present in this paper the motivation and theory of nonlinear spectral representations, based on convex regularizing functionals. Some comparisons and analogies are drawn to the fields of signal processing, harmonic analysis and sparse representations. The basic approach, main results and initial applications are shown. A discussion of open problems and future directions concludes this work.