[visionlist] VSS 2018 | Announcing the 2018 Member-Initiated Symposia

[visionlist] Position Posting for Listserv


[visionlist] Post-doc in Neurophysiology of Attention at UCLA


[visionlist] Assistant professor position in Physiology at the University of Arizona

The Department of Physiology in the College of Medicine at the University of Arizona invites applications for a tenure-track faculty position at the Assistant Professor level. We seek outstanding candidates who have demonstrated exceptional productivity and creativity in research, broadly in the cardiovascular and/or neuroscience disciplines or in other areas that complement and integrate with ongoing strengths in the department. Applicants must hold a PhD, MD, or equivalent degree and are expected to establish or continue active, extramurally funded research programs; collaborate with other investigators in the Department of Physiology and at the University of Arizona; and participate in teaching and mentoring medical, graduate, and/or undergraduate students. Additional information about the department and its faculty is available at http://ift.tt/2t3ogct.

Faculty research programs within the Department of Physiology are funded by agencies that include the NIH, NSF, DoD, AHA, and many other national and international funding agencies. The Department of Physiology is also part of a diverse and interdisciplinary research community at the University of Arizona. This includes a T32 training grant for Interdisciplinary Training in Cardiovascular Research, now in its 40th year, and a T32 for Computational and Mathematical Modeling of Biomedical Systems. The University of Arizona also has multiple interdisciplinary graduate programs in which Department of Physiology faculty participate, including the Physiological Sciences (http://ift.tt/2FbQeHQ), Neuroscience (http://ift.tt/2sZuJoR), and Biomedical Engineering (http://ift.tt/2CqNdT9) graduate programs. Investigators are supported by state-of-the-art core facilities, including imaging, small-animal physiology, genomics, and proteomics, among others (http://ift.tt/2t0jlsL). Opportunities are also available for interaction with the University of Arizona BIO5 Institute (http://ift.tt/2F6sEw7) and the Center for Innovation in Brain Science (http://ift.tt/2t35e5O).

Applicants should submit a University of Arizona faculty application form and the following documents to UACareer Track posting number F21305 at

http://ift.tt/2F6DphX

Applicants will be asked to include the following with their application:

Letter of interest

Curriculum vitae, including full list of publications

Statements of research and teaching interests

Three letters of recommendation


[visionlist] VSS 2018 | Next Monday Is the Last Day to Submit Your BOD Nomination


[visionlist] CFP: New Trends in Restoration and Enhancement workshop and challenges @ CVPR 2018 – DEADLINE EXTENSION!

Apologies for cross-posting
******************************

TWO WEEKS DEADLINE EXTENSION!

CALL FOR PAPERS & CALL FOR PARTICIPANTS IN 3 CHALLENGES

NTIRE: 3rd New Trends in Image Restoration and Enhancement workshop and challenges on image super-resolution, dehazing, and spectral reconstruction in conjunction with CVPR 2018, June 18, Salt Lake City, USA.
Website: http://ift.tt/2obsrjf

SUBMISSION
Paper submissions must be written in English, in PDF format, and at
most 8 pages (excluding references) in CVPR style. The paper format
must follow the same guidelines as all CVPR submissions.
http://ift.tt/2Bax0zT

CHALLENGE on IMAGE DEHAZING (ongoing!)

A novel dataset of real hazy images, captured in outdoor and indoor
environments together with ground truth, is introduced with the
challenge. It is the first online image dehazing challenge.

Track 1: Indoor – the goal is to restore the visibility in images with haze generated
in a controlled indoor environment.

Track 2: Outdoor – the goal is to restore the visibility in outdoor images with haze
generated using a professional haze/fog generator.

CHALLENGE on SPECTRAL RECONSTRUCTION (ongoing!)

The largest dataset of its kind to date will be introduced with the
challenge. It is the first online challenge on spectral
reconstruction from RGB images.

Track 1: Clean – recovering hyperspectral data from uncompressed
8-bit RGB images created by applying a known response function to
ground-truth hyperspectral information.

Track 2: Real World – recovering hyperspectral data from
JPEG-compressed 8-bit RGB images created by applying an unknown
response function to ground-truth hyperspectral information.
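
As a concrete illustration (a minimal sketch, not official challenge
code), the Clean track's inputs can be simulated by projecting each
pixel's ground-truth spectrum through a known camera response
function to form an uncompressed 8-bit RGB image. The band count,
image size, and random response matrix below are illustrative
assumptions.

import numpy as np

# Illustrative sizes: a 64x64 image with 31 spectral bands (assumed).
H, W, BANDS = 64, 64, 31
hsi = np.random.rand(H, W, BANDS)  # stand-in for ground-truth hyperspectral data

# Assumed "known response function": a 3 x BANDS matrix whose rows are
# the R, G, and B channel sensitivities, normalized per channel.
response = np.random.rand(3, BANDS)
response /= response.sum(axis=1, keepdims=True)

# Each RGB value is the spectrum integrated against a sensitivity curve.
rgb = np.tensordot(hsi, response, axes=([2], [1]))        # H x W x 3, float
rgb_8bit = np.clip(rgb * 255.0, 0, 255).astype(np.uint8)  # 8-bit RGB input
print(rgb_8bit.shape)  # (64, 64, 3)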

To learn more about the challenges, to participate, and to access the
data, everyone is invited to visit the NTIRE webpage:
http://ift.tt/2obsrjf

● Release of train data: January 10, 2018
● Competition ends: March 22, 2018 (extended!)

ORGANIZERS

● Radu Timofte, ETH Zurich, Switzerland
● Ming-Hsuan Yang, University of California at Merced, US
● Shuhang Gu, ETH Zurich, Switzerland
● Jiqing Wu, ETH Zurich, Switzerland
● Lei Zhang, The Hong Kong Polytechnic University
● Luc Van Gool, KU Leuven, Belgium and ETH Zurich, Switzerland
● Cosmin Ancuti, Université catholique de Louvain (UCL), Belgium
● Codruta O. Ancuti, University Politehnica Timisoara, Romania
● Boaz Arad, Ben-Gurion University, Israel
● Ohad Ben-Shahar, Ben-Gurion University, Israel

PROGRAM COMMITTEE (to be updated)

Cosmin Ancuti, Université catholique de Louvain (UCL), Belgium

Nick Barnes, Data61, Australia

Michael S. Brown, York University, Canada

Subhasis Chaudhuri, IIT Bombay, India

Sunghyun Cho, Samsung

Chao Dong, SenseTime

Weisheng Dong, Xidian University, China

Alexey Dosovitskiy, Intel Labs

Touradj Ebrahimi, EPFL, Switzerland

Michael Elad, Technion, Israel

Corneliu Florea, University Politehnica of Bucharest, Romania

Alessandro Foi, Tampere University of Technology, Finland

Peter Gehler, University of Tübingen, MPI Intelligent Systems, Amazon, Germany

Bastian Goldluecke, University of Konstanz, Germany

Luc Van Gool, ETH Zürich and KU Leuven, Belgium

Shuhang Gu, ETH Zürich, Switzerland

Michael Hirsch, Amazon

Hiroto Honda, DeNA Co., Japan

Jia-Bin Huang, Virginia Tech, US

Michal Irani, Weizmann Institute, Israel

Phillip Isola, UC Berkeley, US

Zhe Hu, Light.co

Sing Bing Kang, Microsoft Research, US

Jan Kautz, NVIDIA Research, US

Seon Joo Kim, Yonsei University, Korea

Vivek Kwatra, Google

In So Kweon, KAIST, Korea

Christian Ledig, Twitter Inc.

Kyoung Mu Lee, Seoul National University, South Korea

Seungyong Lee, POSTECH, South Korea

Stephen Lin, Microsoft Research Asia

Chen Change Loy, Chinese University of Hong Kong

Vladimir Lukin, National Aerospace University, Ukraine

Kai-Kuang Ma, Nanyang Technological University, Singapore

Vasile Manta, Technical University of Iasi, Romania

Yasuyuki Matsushita, Osaka University, Japan

Peyman Milanfar, Google and UCSC, US

Rafael Molina Soriano, University of Granada, Spain

Yusuke Monno, Tokyo Institute of Technology, Japan

Hajime Nagahara, Osaka University, Japan

Vinay P. Namboodiri, IIT Kanpur, India

Sebastian Nowozin, Microsoft Research Cambridge, UK

Federico Perazzi, Disney Research

Aleksandra Pizurica, Ghent University, Belgium

Sylvain Paris, Adobe

Fatih Porikli, Australian National University, NICTA, Australia

Hayder Radha, Michigan State University, US

Tobias Ritschel, University College London, UK

Antonio Robles-Kelly, CSIRO, Australia

Stefan Roth, TU Darmstadt, Germany

Aline Roumy, INRIA, France

Jordi Salvador, Amazon, US

Yoichi Sato, University of Tokyo, Japan

Konrad Schindler, ETH Zurich, Switzerland

Samuel Schulter, NEC Labs America

Nicu Sebe, University of Trento, Italy

Eli Shechtman, Adobe Research, US

Boxin Shi, National Institute of Advanced Industrial Science and Technology (AIST), Japan

Wenzhe Shi, Twitter Inc.

Alexander Sorkine-Hornung, Disney Research

Sabine Süsstrunk, EPFL, Switzerland

Yu-Wing Tai, Tencent Youtu

Hugues Talbot, Université Paris Est, France

Robby T. Tan, Yale-NUS College, Singapore

Masayuki Tanaka, Tokyo Institute of Technology, Japan

Jean-Philippe Tarel, IFSTTAR, France

Radu Timofte, ETH Zürich, Switzerland

George Toderici, Google, US

Ashok Veeraraghavan, Rice University, US

Jue Wang, Megvii Research, US

Chih-Yuan Yang, UC Merced, US

Jianchao Yang, Snapchat

Ming-Hsuan Yang, University of California at Merced, US

Qingxiong Yang, Didi Chuxing, China

Jong Chul Ye, KAIST, Korea

Jason Yosinski, Uber AI Labs, US

Wenjun Zeng, Microsoft Research

Lei Zhang, The Hong Kong Polytechnic University

Wangmeng Zuo, Harbin Institute of Technology, China

SPEAKERS (to be updated)

William T. Freeman (MIT, Google)
Ming-Yu Liu (NVIDIA)
Graham Finlayson (University of East Anglia, Spectral Edge Ltd, Simon Fraser University)
Liang Lin (SenseTime, Sun Yat-sen University)

SPONSORS (to be updated)

Alibaba (https://www.alibaba.com/)
NVIDIA (http://www.nvidia.com/)
SenseTime (http://www.sensetime.com/)
CodeOcean (https://codeocean.com/)
Google (https://www.google.com/)
Disney Research (https://www.disneyresearch.com/)
Amazon (https://www.amazon.com/)

Contact: radu.timofte@vision.ee.ethz.ch
Website: http://ift.tt/2obsrjf


[visionlist] CFP: Workshop and Challenge on Learned Image Compression (CLIC) @ CVPR 2018

Apologies for cross-posting
******************************

CALL FOR PARTICIPANTS & PAPERS

CLIC: Workshop and Challenge on Learned Image Compression 2018
in conjunction with CVPR 2018, June 18, Salt Lake City, USA.
Website: http://ift.tt/2D0WQsj

Motivation

The domain of image compression has traditionally used approaches
discussed in forums such as ICASSP, ICIP and other very specialized
venues like PCS, DCC, and the ITU/MPEG expert groups. This workshop
and challenge will be the first computer-vision event to focus
explicitly on these fields. Many techniques discussed at
computer-vision meetings are relevant to lossy compression. For
example, super-resolution and artifact removal can be viewed as
special cases of the lossy compression problem in which the encoder
is fixed and only the decoder is trained. Inpainting, colorization,
optical flow, generative adversarial networks, and other
probabilistic models have also been used as parts of lossy
compression pipelines. Lossy compression is therefore a topic that
can benefit greatly from the expertise of a large portion of the
CVPR community.

Recent advances in machine learning have led to increased interest
in applying neural networks to the problem of compression. At CVPR
2017, for example, one of the oral presentations discussed
compression using recurrent convolutional networks. To foster more
growth in this area, this workshop will not only encourage further
development but also establish baselines, educate, and propose a
common benchmark and evaluation protocol. This is crucial: without a
benchmark and a common way to compare methods, it will be very
difficult to measure progress.

We propose hosting an image-compression challenge that specifically
targets methods which have traditionally been overlooked, with a
focus on neural networks (traditional approaches are also welcome).
Such methods typically consist of an encoder subsystem that takes
images and produces representations which are more easily compressed
than the pixel representation (e.g., a stack of convolutions
producing an integer feature map), followed by an arithmetic coder.
The arithmetic coder uses a probabilistic model of the integer codes
to generate a compressed bit stream, which makes up the file to be
stored or transmitted. To decompress this bit stream, two additional
steps are needed: first, an arithmetic decoder, which shares a
probability model with the encoder, losslessly reconstructs the
integers produced by the encoder; then a second decoder produces a
reconstruction of the original image from those integers.
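
As a rough illustration, the sketch below mirrors that pipeline shape
in Python. It is a minimal toy, not a description of any particular
entry: the encoder is a fixed downsample-and-quantize transform, and
zlib stands in for the learned probability model and arithmetic
coder.

import zlib
import numpy as np

def encode(image):
    """Encoder: map pixels to easily compressible integer codes."""
    codes = (image[::2, ::2] // 16).astype(np.uint8)  # toy analysis transform + quantizer
    return zlib.compress(codes.tobytes())             # entropy-coding stand-in

def decode(bitstream, shape):
    """Decoder: losslessly recover the codes, then reconstruct the image."""
    h, w = shape
    codes = np.frombuffer(zlib.decompress(bitstream), dtype=np.uint8)
    codes = codes.reshape(h // 2, w // 2) * 16 + 8    # dequantize
    return codes.repeat(2, axis=0).repeat(2, axis=1)  # toy synthesis transform

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
bits = encode(img)
recon = decode(bits, img.shape)
print(len(bits), "bytes;", recon.shape)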

In the computer-vision community many authors will be familiar with
a multitude of configurations which can act as either the encoder or
the decoder, but probably few are familiar with the implementation
of an arithmetic coder/decoder. As part of our challenge, we will
therefore release a reference arithmetic coder/decoder so that
researchers can focus on the parts of the system in which they are
experts.
While having a compression algorithm is an interesting feat by
itself, it does not mean much unless its results compare well
against other similar algorithms and established baselines on
realistic benchmarks. To ensure realism, we have collected a set of
images that represents the types of images widely available today
much better than the well-established benchmarks, which rely on
images from the Kodak PhotoCD (with a resolution of 768×512) or
Tecnick (with images of around 1.44 megapixels). We will also
provide performance results from current state-of-the-art
compression systems, such as WebP and BPG, as baselines.

Challenge Tasks

We provide two datasets: Dataset P (“professional”) and Dataset M
(“mobile”). The datasets were collected to be representative of
images commonly found in the wild and together contain thousands of
images.

The challenge will allow participants to train neural networks or
other methods on any amount of data (it should be possible to train
on the data we provide, but we expect participants to have access to
additional data, such as ImageNet).

Participants will need to submit a decoder executable that can run
in the provided Docker environment and is capable of decompressing
the submission files. We will impose reasonable compute and memory
limits on the decoder executable.

We will rank participants (and baseline image compression methods:
WebP, JPEG 2000, and BPG) based on multiple criteria: (a) decoding
speed; (b) a proxy perceptual metric (e.g., MS-SSIM on the Y
channel); and (c) scores provided by human raters. The overall
winner will be decided by a panel whose goal is to determine the
best compromise between runtime performance and bitrate savings.
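
As an illustration of criterion (b), the sketch below computes a
multi-scale SSIM proxy on the luma (Y) channel. The organizers'
exact metric configuration is not specified here, so the use of
TensorFlow's built-in MS-SSIM and a BT.601 RGB-to-Y conversion are
assumptions.

import numpy as np
import tensorflow as tf

def msssim_y(ref_rgb, test_rgb):
    """MS-SSIM between the Y (luma) channels of two uint8 RGB images."""
    def to_y(rgb):
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y = 0.299 * r + 0.587 * g + 0.114 * b  # BT.601 luma (float64)
        return tf.convert_to_tensor(y[..., np.newaxis], dtype=tf.float32)
    return float(tf.image.ssim_multiscale(to_y(ref_rgb), to_y(test_rgb),
                                          max_val=255.0))

# Toy usage: compare an image against a slightly perturbed copy.
ref = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)
noisy = np.clip(ref.astype(int) + np.random.randint(-5, 6, ref.shape),
                0, 255).astype(np.uint8)
print(msssim_y(ref, noisy))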

Regular Paper Track

We will have a short (4-page) regular paper track that allows
participants to share research ideas related to image compression.
In addition to the papers, we will host a poster session during
which authors can discuss their work in more detail.
We encourage exploratory research which shows promising results in:
● Lossy image compression
● Quantization (learning to quantize; dealing with quantization in optimization)
● Entropy minimization
● Image super-resolution for compression
● Deblurring
● Compression artifact removal
● Inpainting (and compression by inpainting)
● Generative adversarial networks
● Perceptual metrics optimization and their applications to compression
And in particular, how these topics can improve image compression.

Challenge Paper Track

Challenge participants are asked to submit materials detailing the
algorithms they entered in the challenge. Furthermore, they are
invited to submit a paper detailing their approach to the challenge.

Important Dates

● December 24th, 2017 Challenge announcement and the training part of the dataset released
● January 15th, 2018 The validation part of the dataset released, online validation server is made available
● April 15th, 2018 The test set is released
● April 22nd, 2018 The competition closes; participants are expected
to have submitted their Docker image along with the compressed
versions of the test set
● April 26th, 2018 Deadline for factsheets
● May 29th, 2018 Results are released to the participants
● June 4th, 2018 Deadline for paper submission

Forum

Please check the discussion forum of the challenge for announcements
and discussions related to the challenge: http://ift.tt/1dSEnEW

Speakers

Ramin Zabih (Google)
Oren Rippel (WaveOne)
Jim Bankoski (Google)
Jens Ohm (RWTH Aachen)

Organizers

William T. Freeman (MIT)
George Toderici (Google)
Michele Covell (Google)
Wenzhe Shi (Twitter)
Radu Timofte (ETH Zurich)
Lucas Theis (Twitter)
Johannes Ballé (Google)
Eirikur Agustsson (ETH Zurich)
Nick Johnston (Google)

Sponsors

Google

Twitter

Disney

Amazon

Website: http://ift.tt/2D0WQsj