
In Situ Target-Less Calibration of Turbid Media

O. Spier, T. Treibitz, G. Gilboa, “In situ target-less calibration of turbid media”, Int. Conf. on Computational Photography (ICCP), Stanford University, 2017.

Abstract:

The color of an object imaged in a turbid medium varies with distance and medium properties, rendering color an unstable source of information. Since 3D scene structure has become relatively easy to estimate, the main challenge in color recovery is calibrating the medium properties in situ, at the time of acquisition. Existing attenuation calibration methods use either color charts, external hardware, or multiple images of an object. Here we show that none of these is needed for calibration. We suggest a method for estimating the medium properties (both attenuation and scattering) using only images of backscattered light from the system’s light sources. This is advantageous in turbid media, where the object signal is noisy, and also alleviates the need for correspondence matching, which can be difficult in high turbidity. We demonstrate the advantages of our method through simulations and in a real-life experiment at sea.
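The physical setting behind this abstract is usually written with the standard single-scattering image-formation model for turbid media: the object signal is attenuated with distance while veiling light accumulates. A minimal sketch of that model (symbols and values are illustrative, not the paper’s notation):

```python
import numpy as np

def simulate_turbid(J, dist, beta, veil):
    """Single-scattering turbid-medium model, per color channel:
    I = J * exp(-beta * d) + B_inf * (1 - exp(-beta * d)).
    J: (H, W, 3) object radiance, dist: (H, W) range map,
    beta: (3,) attenuation coefficients, veil: (3,) veiling light B_inf."""
    t = np.exp(-beta * dist[..., None])   # transmission per channel
    return J * t + veil * (1.0 - t)       # attenuated signal + backscatter
```

At d = 0 the model returns the true object color, and as d grows every pixel converges to the veiling light, which is why backscattered light from the system’s own sources carries information about the medium parameters.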


Robust Recovery of Heavily Degraded Depth Measurements

G. Drozdov, Y. Shapiro, G. Gilboa, “Robust recovery of heavily degraded depth measurements”, Int. Conf. on 3D Vision (3DV), Stanford University, 2016.

Abstract:

The revolution of RGB-D sensors is advancing towards mobile platforms for robotics, autonomous vehicles and consumer hand-held devices. Strong pressures on power consumption and system price require new powerful algorithms that can robustly handle very low quality raw data.
In this paper we demonstrate the ability to reliably recover depth measurements from a variety of highly degraded depth modalities, coupled with standard RGB imagery. The method is based on a regularizer which fuses super-pixel information with the total-generalized-variation (TGV) functional.
We examine our algorithm on several different degradations, including Intel’s new RealSense hand-held device, LiDAR-type data, and ultra-sparse random sampling. In all heavily degraded modalities, our robust algorithm achieves superior performance over the state of the art. Additionally, a robust error measure based on Tukey’s biweight metric is suggested, which is better at ranking algorithm performance since it does not reward blurry, non-physical depth results.
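Tukey’s biweight is a standard robust loss: quadratic for small residuals but saturating for large ones, so gross outliers such as blurred depth edges do not dominate the score. A small sketch of an error measure in its spirit (the constant c = 4.685 is the common tuning value, not necessarily the paper’s choice):

```python
import numpy as np

def tukey_biweight(err, c=4.685):
    """Mean Tukey biweight (bisquare) loss over a residual array.
    rho(e) = (c^2/6) * (1 - (1 - (e/c)^2)^3) for |e| < c,
    and saturates at c^2/6 for |e| >= c."""
    r = np.abs(np.asarray(err, dtype=float)) / c
    rho = np.where(r < 1.0, (c**2 / 6.0) * (1.0 - (1.0 - r**2) ** 3), c**2 / 6.0)
    return rho.mean()
```

For small residuals this behaves like the usual squared error (rho ≈ e²/2), while arbitrarily large depth errors contribute at most c²/6 each, which is what makes the ranking insensitive to a few wildly wrong pixels.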


A Depth Restoration Occlusionless Temporal Dataset

D. Rotman, G. Gilboa, “A depth restoration occlusionless temporal dataset”, Int. Conf. on 3D Vision (3DV), Stanford University, 2016.

Abstract:

Depth restoration, the task of correcting depth noise and artifacts, has recently risen in popularity due to the increase in commodity depth cameras. When assessing the quality of existing methods, most researchers resort to the popular Middlebury dataset; however, this dataset was not created for depth enhancement, and therefore lacks the option of comparing genuine low-quality depth images with their high-quality, ground-truth counterparts. To address this shortcoming, we present the Depth Restoration Occlusionless Temporal (DROT) dataset. This dataset offers real depth sensor input coupled with registered pixel-to-pixel color images, and the ground-truth depth to which we wish to compare. Our dataset includes not only Kinect 1 and Kinect 2 data, but also data from an Intel R200 sensor intended for integration into hand-held devices. Beyond this, we present a new temporal depth-restoration method. Utilizing multiple frames, we create a number of possibilities for an initial degraded depth map, which allows us to arrive at a more educated decision when refining depth images. Evaluating this method with our dataset shows significant benefits, particularly for overcoming real sensor-noise artifacts.


Spectral Decompositions using One-Homogeneous Functionals

Martin Burger, Guy Gilboa, Michael Moeller, Lina Eckardt, Daniel Cremers, “Spectral Decompositions using One-Homogeneous Functionals”, SIAM Journal on Imaging Sciences, Vol. 9, No. 3, pp. 1374-1408, 2016.

Abstract:

This paper discusses the use of absolutely one-homogeneous regularization functionals in a variational, scale space, and inverse scale space setting to define a nonlinear spectral decomposition of input data. We present several theoretical results that explain the relation between the different definitions. Additionally, results on the orthogonality of the decomposition, a Parseval-type identity, and the notion of generalized (nonlinear) eigenvectors closely link our nonlinear multiscale decompositions to the well-known linear filtering theory. Numerical results are used to illustrate our findings.
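The generalized-eigenvector notion can be illustrated with a known analytic fact: if f satisfies λf ∈ ∂J(f) for a one-homogeneous J, the gradient flow shrinks f linearly, u(t) = (1 − λt)₊ f, so the spectral response φ(t) = t·∂²u/∂t² concentrates at the single scale t = 1/λ. A minimal numerical check of this property (the discretization and stand-in eigenfunction are illustrative choices of mine):

```python
import numpy as np

lam = 2.0                                   # eigenvalue: lam * f in dJ(f)
f = np.array([0.0, 1.0, 1.0, 0.0])          # stand-in eigenfunction
ts = np.linspace(0.0, 1.0, 2001)
dt = ts[1] - ts[0]

# Gradient-flow solution for an eigenfunction: linear decay to zero.
u = np.maximum(1.0 - lam * ts, 0.0)[:, None] * f

# Spectral response phi(t) = t * u_tt (second finite difference in t);
# its mass should localize at the kink t = 1/lam.
u_tt = np.diff(u, n=2, axis=0) / dt**2
phi = ts[1:-1, None] * u_tt
energy = np.abs(phi).sum(axis=1)
t_peak = ts[1:-1][np.argmax(energy)]
print(round(t_peak, 3))                     # peak scale, expected near 1/lam = 0.5
```

This is the nonlinear analogue of a pure frequency producing a single spectral line in linear filtering theory, which is the link the abstract refers to.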



Separation Surfaces in the Spectral TV Domain for Texture Decomposition

Dikla Horesh and Guy Gilboa, “Separation Surfaces in the Spectral TV Domain for Texture Decomposition”, IEEE Trans. Image Processing, Vol. 25, No. 9, pp. 4260-4270, 2016.

Abstract:

In this paper we introduce a novel notion of separation surfaces for image decomposition. A surface is embedded in the three-dimensional spectral total-variation (TV) domain and encodes a spatially varying separation scale. The method allows good separation of textures with gradually varying pattern size, pattern contrast, or illumination. The recently proposed total-variation spectral framework is used to decompose the image into a continuum of textural scales. A desired texture, within a scale range, is found by fitting a surface to the local maximal responses in the spectral domain. A band above and below the surface, referred to as the Texture Stratum, defines for each pixel the adaptive scale range of the texture. Based on the decomposition, an application is proposed which can attenuate or enhance textures in the image in a very natural and visually convincing manner.
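Once the spectral TV responses φ(t, y, x) are available (from the spectral framework cited in the abstract), extracting the texture between two per-pixel scale bounds is a band-pass integration in t. A sketch of that step only — the function name and the assumption that φ is sampled on a uniform scale grid are mine:

```python
import numpy as np

def extract_band(phi, ts, t_lo, t_hi):
    """Integrate spectral responses phi(t, y, x) over a per-pixel scale
    band [t_lo, t_hi] (the stratum around a separation surface).
    t_lo and t_hi may be scalars or (H, W) arrays for a spatially
    varying separation scale."""
    t = ts[:, None, None]                  # scale axis, broadcast over pixels
    mask = (t >= t_lo) & (t <= t_hi)       # in-band scales, per pixel
    dt = ts[1] - ts[0]                     # uniform scale-grid spacing
    return (phi * mask).sum(axis=0) * dt   # band-passed image
```

Passing (H, W) arrays for the bounds is exactly what a fitted separation surface provides: each pixel integrates over its own adaptive scale range rather than a single global cutoff.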