[visionlist] CFP: CVPR Workshop on New Trends in Image Restoration and Enhancement (DEADLINE: April 17, 2017)

Apologies for cross-posting
*******************************
CALL FOR PAPERS
NTIRE: New Trends in Image Restoration and Enhancement workshop and challenge on image super-resolution 2017
In conjunction with CVPR 2017, July 21, Hawaii.
Website: http://ift.tt/2obsrjf
Contact: radu.timofte@vision.ee.ethz.ch
SCOPE

Image restoration and image enhancement are key computer vision
tasks, aiming at the restoration of degraded image content or the
filling in of missing information. Recent years have witnessed an
increased interest from the vision and graphics communities in these
fundamental topics of research. Not only has there been a constantly
growing flow of related papers, but also substantial progress has been
achieved.
Each step forward makes images more useful to people and computers in
further tasks, with image restoration or enhancement serving as an
important front end. Not surprisingly, there is an
ever-growing range of applications in fields such as surveillance, the
automotive industry, electronics, remote sensing, or medical image
analysis. The emergence and ubiquitous use of mobile and wearable
devices offer another fertile ground for additional applications and
faster methods.
This workshop aims to provide an overview of the new trends and
advances in these areas. Moreover, it will offer an opportunity for
academic and industrial attendees to interact and explore
collaborations.
TOPICS

Papers addressing topics related to image restoration and enhancement are invited. The topics include, but are not limited to:
● Image inpainting
● Image deblurring
● Image denoising
● Image upsampling and super-resolution
● Image filtering
● Image dehazing
● Demosaicing
● Image enhancement: brightening, color adjustment, sharpening, etc.
● Style transfer
● Image generation and image hallucination
● Image-quality assessment
● Video restoration and enhancement
● Hyperspectral imaging
● Methods robust to changing weather conditions
● Studies and applications of the above.
SUBMISSION

Paper submissions must be in English, in PDF format, and at most 8
pages (excluding references) in CVPR style. The paper format must
follow the same guidelines as all CVPR submissions:
http://ift.tt/2ohyAXP
The review process is double blind: authors do not know the names of
the chairs/reviewers handling their papers, and reviewers do not know
the names of the authors.
Dual submission is allowed with the CVPR main conference only. If a
paper is also submitted to CVPR and accepted there, it cannot be
published at both CVPR and the workshop.
For paper submission, please use the online submission site:
http://ift.tt/2obycxp
Accepted and presented papers will be published after the conference
in the CVPR Workshops Proceedings by IEEE (http://www.ieee.org) and
the Computer Vision Foundation (http://ift.tt/1dA6WiF).
The author kit provides a LaTeX2e template for paper submissions.
Please refer to the example paper for detailed formatting instructions.
If you use a different document processing system, see the CVPR author
instructions page.
Author Kit: http://ift.tt/2obauBc
WORKSHOP DATES

● Submission Deadline: April 17, 2017
● Decisions: May 08, 2017
● Camera Ready Deadline: May 18, 2017
CHALLENGE on Example-based Single-Image Super-Resolution

In order to gauge the current state of the art in example-based
single-image super-resolution, and to compare and promote different
solutions, we are organizing an NTIRE challenge in conjunction with the
CVPR 2017 conference. For this purpose we propose DIV2K, a large
dataset of DIVerse 2K resolution images.
The challenge has 2 tracks:
● Track 1: bicubic. Uses bicubic downscaling (Matlab's imresize),
one of the most common settings in the recent single-image
super-resolution literature; see the sketch after this list.
● Track 2: unknown. Assumes that the explicit forms of the
degradation operators are unknown; only training pairs of low- and
high-resolution images are available.
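
For readers unfamiliar with this setup, the Track 1 low-resolution
inputs are obtained by bicubic downscaling of the high-resolution
images. Below is a minimal Python sketch of how such a training pair
could be generated, assuming the Pillow library; the file names and
scale factor are illustrative, and Pillow's bicubic filter only
approximates Matlab's imresize (the two differ, e.g., in antialiasing).

    # generate_lr.py: derive a low-resolution training input from a
    # high-resolution image via bicubic downscaling.
    from PIL import Image

    SCALE = 4  # illustrative factor; the challenge defines its own

    # File name is illustrative.
    hr = Image.open("0001.png")

    # Pillow's BICUBIC filter approximates, but is not identical to,
    # Matlab's imresize (antialiasing behavior differs).
    lr = hr.resize((hr.width // SCALE, hr.height // SCALE),
                   resample=Image.BICUBIC)

    lr.save("0001_lr.png")  # LR input paired with the HR target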
To learn more about the challenge, to participate in the challenge,
and to access the newly collected DIV2K dataset with DIVerse 2K
resolution images, everybody is invited to register via the links at:
http://ift.tt/2lf5yXH

CHALLENGE DATES

● Release of training data: February 14, 2017
● Validation server online: February 25, 2017
● Competition ends: April 16, 2017 (extended!)

ORGANIZERS

● Radu Timofte, ETH Zurich, Switzerland (radu.timofte@vision.ee.ethz.ch)
● Ming-Hsuan Yang, University of California at Merced, US (mhyang@ucmerced.edu)
● Eirikur Agustsson, ETH Zurich, Switzerland (eirikur.agustsson@vision.ee.ethz.ch)
● Lei Zhang, The Hong Kong Polytechnic University (cslzhang@polyu.edu.hk)
● Luc Van Gool, KU Leuven, Belgium and ETH Zurich, Switzerland (vangool@vision.ee.ethz.ch)

PROGRAM COMMITTEE

Cosmin Ancuti, Université catholique de Louvain (UCL), Belgium

Michael S. Brown, York University, Canada

Subhasis Chaudhuri, IIT Bombay, India

Sunghyun Cho, Samsung

Oliver Cossairt, Northwestern University, US

Chao Dong, SenseTime

Weisheng Dong, Xidian University, China

Alexey Dosovitskiy, Intel Labs

Touradj Ebrahimi, EPFL, Switzerland

Michael Elad, Technion, Israel

Corneliu Florea, University Politehnica of Bucharest, Romania

Alessandro Foi, Tampere University of Technology, Finland

Bastian Goldluecke, University of Konstanz, Germany

Luc Van Gool, ETH Zürich and KU Leuven, Belgium

Peter Gehler, University of Tübingen and MPI Intelligent Systems, Germany

Hiroto Honda, DeNA Co., Japan

Michal Irani, Weizmann Institute, Israel

Phillip Isola, UC Berkeley, US

Zhe Hu, Light.co

Sing Bing Kang, Microsoft Research, US

Vivek Kwatra, Google

Kyoung Mu Lee, Seoul National University, South Korea

Seungyong Lee, POSTECH, South Korea

Stephen Lin, Microsoft Research Asia

Chen Change Loy, Chinese University of Hong Kong

Vladimir Lukin, National Aerospace University, Ukraine

Kai-Kuang Ma, Nanyang Technological University, Singapore

Vasile Manta, Technical University of Iasi, Romania

Yasuyuki Matsushita, Osaka University, Japan

Peyman Milanfar, Google and UCSC, US

Rafael Molina Soriano, University of Granada, Spain

Yusuke Monno, Tokyo Institute of Technology, Japan

Hajime Nagahara, Kyushu University, Japan

Vinay P. Namboodiri, IIT Kanpur, India

Sebastian Nowozin, Microsoft Research Cambridge, UK

Aleksandra Pizurica, Ghent University, Belgium

Fatih Porikli, Australian National University, NICTA, Australia

Hayder Radha, Michigan State University, US

Stefan Roth, TU Darmstadt, Germany

Aline Roumy, INRIA, France

Jordi Salvador, Amazon, US

Yoichi Sato, University of Tokyo, Japan

Samuel Schulter, NEC Labs America

Nicu Sebe, University of Trento, Italy

Boxin Shi, National Institute of Advanced Industrial Science and Technology (AIST), Japan

Wenzhe Shi, Twitter Inc.

Alexander Sorkine-Hornung, Disney Research

Sabine Süsstrunk, EPFL, Switzerland

Yu-Wing Tai, SenseTime

Hugues Talbot, Université Paris Est, France

Robby T. Tan, Yale-NUS College, Singapore

Masayuki Tanaka, Tokyo Institute of Technology, Japan

Jean-Philippe Tarel, IFSTTAR, France

Radu Timofte, ETH Zürich, Switzerland

Ashok Veeraraghavan, Rice University, US

Jue Wang, Megvii Research, US

Chih-Yuan Yang, UC Merced, US

Ming-Hsuan Yang, University of California at Merced, US

Qingxiong Yang, Didi Chuxing, China

Lei Zhang, The Hong Kong Polytechnic University

Wangmeng Zuo, Harbin Institute of Technology, China

SPEAKERS

Alexei Efros, UC Berkeley, US

Jan Kautz, NVIDIA

Liang Lin, SenseTime and Sun Yat-Sen University, China

Peyman Milanfar, Google and UC Santa Cruz, US

Eli Shechtman, Adobe

Wenzhe Shi, Twitter Inc.

Sabine Süsstrunk, EPFL, Switzerland

SPONSORS

NVIDIA

SenseTime

Twitter Inc

Google

Contact: radu.timofte@vision.ee.ethz.ch
Website: http://ift.tt/2obsrjf


