
Sparse Steered Mixture-of-Experts (SMoE) Regression Networks for Image and Video Compression

Kernel regression has proven successful for image denoising, deblocking, and reconstruction. These techniques lay the foundation for new image coding opportunities. We introduce a novel compression scheme: the sparse Steered Mixture-of-Experts (SMoE) regression network for coding image and video data.

Code: https://github.com/bochinski/tf-smoe
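
For illustration, the core regression step can be sketched in a few lines of Python/NumPy: Gaussian kernels gate between linear ("steered") experts, and each pixel is reconstructed as the gate-weighted sum of the expert predictions. This is a minimal sketch with assumed parameter shapes, not the reference implementation (see the repository above):

    import numpy as np

    def smoe_decode(coords, centers, precisions, slopes, offsets):
        # coords (N,2): pixel positions; centers (K,2) and precisions
        # (K,2,2, inverse covariances) define the Gaussian gating kernels;
        # slopes (K,2) and offsets (K,) define the linear experts.
        d = coords[:, None, :] - centers[None, :, :]            # (N, K, 2)
        mahal = np.einsum('nki,kij,nkj->nk', d, precisions, d)  # sq. Mahalanobis
        log_g = -0.5 * mahal
        gates = np.exp(log_g - log_g.max(axis=1, keepdims=True))
        gates /= gates.sum(axis=1, keepdims=True)               # soft-max gating
        experts = coords @ slopes.T + offsets                   # each expert: a plane
        return (gates * experts).sum(axis=1)                    # reconstructed values

    # Example: reconstruct a 64x64 grayscale image on a unit coordinate grid.
    K = 8
    ys, xs = np.mgrid[0:64, 0:64] / 64.0
    grid = np.stack([ys.ravel(), xs.ravel()], axis=1)           # (4096, 2)
    img = smoe_decode(grid,
                      centers=np.random.rand(K, 2),
                      precisions=np.stack([np.eye(2) * 200.0] * K),
                      slopes=np.zeros((K, 2)),
                      offsets=np.random.rand(K)).reshape(64, 64)

The sparsity of the representation comes from the small number K of kernels: the decoder only needs their parameters rather than per-pixel data.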

Quantized and Regularized Optimization for Coding Images Using Steered Mixtures-of-Experts


Compression algorithms that employ Mixtures-of-Experts depart drastically from the standard hybrid block-based transform domain approaches of JPEG and MPEG coders. In previous works we introduced the concept of Steered Mixtures-of-Experts (SMoEs) to arrive at sparse representations of signals. SMoEs are gating networks trained in a machine learning approach that allow individual experts to explain and harvest directional long-range correlation in the N-dimensional signal space. Previous results showed excellent potential for compression of images and videos, but the reconstruction quality was mainly limited to low and medium image quality. In this paper we provide evidence that SMoEs can compete with JPEG2000 at mid- and high-range bit-rates. To this end we introduce a SMoE approach for compression of color images with specialized gates and steering experts. A novel machine learning approach is introduced that optimizes the RD-performance of quantized SMoEs towards SSIM using fake quantization. We drastically improve on our previous results and outperform JPEG by up to 42%.

Publication: Jongebloed, R., Bochinski, E., Lange, L., and Sikora, T. Quantized and Regularized Optimization for Coding Images Using Steered Mixtures-of-Experts, DCC 2019
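
The "fake quantization" mentioned above can be read as a straight-through estimator: parameters are snapped to the quantization grid in the forward pass, while the non-differentiable rounding is bypassed in the backward pass, so the optimizer accounts for quantization error without losing gradients. A minimal TensorFlow sketch, assuming a uniform quantizer and an SSIM-based distortion term; decode() is a hypothetical stand-in for the SMoE reconstruction:

    import tensorflow as tf

    def fake_quantize(params, step=1.0 / 255.0):
        # Forward: snap to the quantization grid; backward: identity
        # (straight-through estimator), so gradients still reach `params`.
        q = tf.round(params / step) * step
        return params + tf.stop_gradient(q - params)

    def ssim_distortion(original, reconstruction, max_val=1.0):
        # SSIM-based distortion: minimizing this maximizes the mean SSIM.
        # Both tensors have shape (batch, height, width, channels).
        return 1.0 - tf.reduce_mean(tf.image.ssim(original, reconstruction, max_val))

    # Inside a training step (sketch): quantize the SMoE parameters before
    # reconstructing, so the loss "sees" the quantization error.
    #   recon = decode(fake_quantize(params))   # decode() is hypothetical
    #   loss = ssim_distortion(target, recon)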

Regularized Gradient Descent Training of Steered Mixture of Experts for Sparse Image Representation


The Steered Mixture-of-Experts (SMoE) framework targets a sparse, space-continuous representation for images, videos, and light fields, enabling processing tasks such as approximation, denoising, and coding. The underlying stochastic processes are represented by a Gaussian Mixture Model, traditionally trained by the Expectation-Maximization (EM) algorithm. We instead propose to use the MSE of the regressed imagery as the primary objective for gradient descent training. We further extend this approach with regularization terms that enforce desirable properties such as model sparsity and noise robustness of the training process. Experimental evaluations show that our approach consistently outperforms the state of the art by 1.5 dB to 6.1 dB PSNR for image representation.

Publication: Bochinski, E., Jongebloed, R., Tok, M., and Sikora, T. Regularized Gradient Descent Training of Steered Mixture of Experts for Sparse Image Representation, ICIP 2018
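
A self-contained TensorFlow sketch of such a training loop, assuming diagonal precisions, linear experts, and an L1 penalty on the mixing weights as one plausible sparsity regularizer (shapes, names, and hyper-parameters are illustrative, not the paper's exact formulation):

    import tensorflow as tf

    H = W = 64
    K = 16                                    # number of experts
    ys, xs = tf.meshgrid(tf.range(H, dtype=tf.float32) / H,
                         tf.range(W, dtype=tf.float32) / W, indexing='ij')
    coords = tf.stack([tf.reshape(ys, [-1]), tf.reshape(xs, [-1])], axis=1)
    target = tf.random.uniform((H * W,))      # stand-in for a grayscale image

    centers = tf.Variable(tf.random.uniform((K, 2)))
    log_prec = tf.Variable(tf.zeros((K, 2)))  # diagonal precisions, log-domain
    slopes = tf.Variable(tf.zeros((K, 2)))    # linear "steering" experts
    offsets = tf.Variable(tf.random.uniform((K,)))
    mix = tf.Variable(tf.ones((K,)))          # per-expert mixing weights

    def reconstruct():
        d = coords[:, None, :] - centers[None, :, :]               # (N, K, 2)
        log_gates = -0.5 * tf.reduce_sum(tf.exp(log_prec) * d * d, -1)
        gates = tf.nn.softmax(log_gates + tf.math.log(tf.abs(mix) + 1e-8), axis=1)
        experts = tf.reduce_sum(d * slopes, -1) + offsets          # (N, K)
        return tf.reduce_sum(gates * experts, axis=1)              # (N,)

    lam = 1e-3                                # sparsity weight (assumed value)
    opt = tf.keras.optimizers.Adam(1e-2)
    variables = [centers, log_prec, slopes, offsets, mix]
    for step in range(2000):
        with tf.GradientTape() as tape:
            mse = tf.reduce_mean(tf.square(target - reconstruct()))
            loss = mse + lam * tf.reduce_sum(tf.abs(mix))  # L1 pushes experts to zero
        opt.apply_gradients(zip(tape.gradient(loss, variables), variables))

Experts whose mixing weight is driven to zero can be pruned after training, which is one way such a regularizer yields a sparser model.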
