yurun00/twitter_compression


Image encoding and decoding network

Company: Tucodec

This project is based on the Twitter paper "Lossy Image Compression with Compressive Autoencoders" (Theis et al., 2017).

Prerequisites

  • A computer with an Nvidia GPU
  • Python 3.5
  • Tensorflow 1.3

Introduction

Basic concept

Loss function

The loss is a weighted tradeoff between the estimated entropy of the quantized code (the rate) and the reconstruction distortion.
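The tradeoff can be sketched as follows (a minimal illustration, not the repo's exact code: the function name, the use of MSE as distortion, and `beta` as the weight are assumptions):

```python
import numpy as np

def rate_distortion_loss(x, x_hat, rate_bits, beta):
    # Weighted sum of reconstruction distortion (MSE here) and the
    # estimated entropy of the code; beta sets the tradeoff point.
    distortion = np.mean((np.asarray(x, float) - np.asarray(x_hat, float)) ** 2)
    return distortion + beta * rate_bits
```

A larger `beta` penalizes rate more heavily, yielding smaller codes at the cost of higher distortion.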

Quantization

Quantization uses different functions for forward and backward propagation: hard rounding in the forward pass and the identity gradient in the backward pass, implemented with the TensorFlow `stop_gradient` op.

Entropy estimation

A parameterized 2-D Gaussian mixture model (GMM) fitted to each channel of the code, i.e. the output of the encoder.
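The estimated rate for a channel is the negative log-likelihood of its code values under the mixture. A sketch in 1-D for brevity (the project fits a 2-D GMM per channel; the parameter names are illustrative):

```python
import numpy as np

def channel_rate_bits(z, weights, means, variances):
    # Estimated code length in bits for one channel: negative
    # log2-likelihood of the code values z under a Gaussian mixture
    # with per-component weights, means, and variances.
    z = np.asarray(z, float)[:, None]                          # (N, 1)
    densities = (weights / np.sqrt(2 * np.pi * variances)
                 * np.exp(-(z - means) ** 2 / (2 * variances)))  # (N, K)
    return -np.sum(np.log2(densities.sum(axis=1)))
```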

Architecture

The neural network described in the paper is as follows:

(figure: network architecture from the paper)

The residual connections in the network help cope with vanishing gradients. Convolutional layers are the standard building block for image processing tasks, while depth-to-space (subpixel) and deconvolutional layers are powerful tools for super-resolution. Filters with stride greater than 1 are placed near the ends of the encoder and decoder to reduce the hidden layer sizes when the program runs. Rounding is used for quantization, and a GSM (Gaussian scale mixture) estimates the entropy, i.e. the code length.

Differences include:

  • The mirror padding layer is removed.
  • The GSM model for entropy estimation is replaced by a two-dimensional Gaussian mixture model (GMM).
  • Subpixel layers are replaced by the TensorFlow `depth_to_space` op, since the two implementations appear to be equivalent.
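What `depth_to_space` computes can be reproduced in a few lines of NumPy (NHWC layout, matching TensorFlow's default): each group of `block * block` channels is rearranged into a `block × block` spatial neighborhood, upscaling height and width.

```python
import numpy as np

def depth_to_space(x, block):
    # NumPy equivalent of tf.depth_to_space for NHWC tensors.
    n, h, w, c = x.shape
    out_c = c // (block * block)
    x = x.reshape(n, h, w, block, block, out_c)
    x = x.transpose(0, 1, 3, 2, 4, 5)      # interleave block rows/cols
    return x.reshape(n, h * block, w * block, out_c)
```

For example, a 1×1 feature map with 4 channels and `block=2` becomes a 2×2 map with 1 channel.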

Implementation

  • TensorFlow input pipeline with two queues.
  • TensorFlow distributed training with in-graph replication on 4 Titan X GPUs.

Dataset

  • Training: RAISE-6K; crop patches of 128 x 128, 256 x 256, 512 x 512, and 1024 x 1024, then resize each to 128 x 128.
  • Testing: the Kodak dataset; images are cropped into patches and the reconstructions concatenated back together.
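The training-patch preparation can be sketched as follows (the function name is illustrative, and block averaging stands in for whatever resize the repo actually uses; `crop` must be a multiple of 128 in this sketch):

```python
import numpy as np

def random_patch_128(img, crop, rng=np.random):
    # Crop a crop x crop patch at a random position, then downscale it
    # to 128 x 128 by block averaging (stand-in for the resize step).
    h, w = img.shape[:2]
    top = rng.randint(0, h - crop + 1)
    left = rng.randint(0, w - crop + 1)
    patch = img[top:top + crop, left:left + crop].astype(float)
    f = crop // 128
    return patch.reshape(128, f, 128, f, -1).mean(axis=(1, 3))
```

Sampling several crop sizes and resizing them to a common 128 x 128 exposes the network to multiple scales while keeping a fixed input shape.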

Training

Choose different values of \beta to train models at different points on the entropy-distortion tradeoff.

Entropy coding

The quantized code must be further compressed with an entropy coding algorithm; here I chose arithmetic coding, implemented in MATLAB. The coding model differs slightly from the entropy estimate used during training, so the measured bitrates are close to the estimates, but not exactly equal.
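The gap between estimate and measurement can be illustrated with ideal code lengths: an arithmetic coder approaches `-sum(log2 p(s))` bits for its model `p`, so coding with a model that differs from the training-time estimate yields a slightly different length (function name and numbers are illustrative):

```python
import numpy as np

def ideal_code_length_bits(symbols, model_probs):
    # Length in bits an ideal arithmetic coder approaches when coding
    # `symbols` under probability model `model_probs`. If the coder's
    # model differs from the training-time entropy estimate, the real
    # bitstream length differs from the estimated rate.
    return -sum(np.log2(model_probs[s]) for s in symbols)
```

For instance, 8 symbols coded under a uniform 4-symbol model cost exactly 16 bits, while a mismatched model over the same symbols costs a different amount.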

Result

RD curve

(figure: rate-distortion curve)
