[visionlist] [CFP] CLIC: Workshop and Challenge on Learned Image Compression @ CVPR 2019

Apologies for cross-posting
*******************************
CALL FOR PARTICIPANTS & PAPERS
CLIC: Workshop and Challenge on Learned Image Compression 2019
in conjunction with CVPR 2019, June 17, Long Beach, USA.
Website: https://ift.tt/2D0WQsj


The domain of image compression has traditionally used approaches
discussed in forums such as ICASSP, ICIP and other very specialized
venues like PCS, DCC, and ITU/MPEG expert groups. This workshop and
challenge will be the first computer-vision event to explicitly focus on
these fields. Many techniques discussed at computer-vision meetings
have relevance for lossy compression. For example, super-resolution and
artifact removal can be viewed as special cases of the lossy compression
problem where the encoder is fixed and only the decoder is trained.
Inpainting, colorization, optical flow, generative adversarial networks,
and other probabilistic models have also been used as parts of lossy
compression pipelines. Lossy compression is therefore a topic that
stands to benefit greatly from the expertise of the CVPR community.

Challenge Tasks

We will be running two tracks in the challenge: low-rate
compression, to be judged on quality, and “transparent” compression, to
be judged on bit rate. For the low-rate compression track, there
will be a bitrate threshold that must be met. For the transparent track,
there will be several quality thresholds that must be met. In all
cases, submissions will be judged on the aggregate results
across the test set: the test set will be treated as if it were a single
‘target’, instead of (for example) evaluating bpp or PSNR on each image
individually.

For the low-rate compression track, the requirement will be that the
compression achieves less than 0.15 bpp across the full test set. The
maximum total size of all submitted files will be released with the test set.
In addition, a decoder executable has to be submitted that can run in
the provided Docker environment and is capable of decompressing the
submitted files. We will impose reasonable limitations for compute and
memory of the decoder executable. The submissions in this track that are
at or below that bitrate threshold will then be evaluated for best
PSNR, best MS-SSIM, and best MOS from human raters.
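For concreteness, the aggregate bitrate criterion can be sketched as follows. This is an illustrative interpretation, not the official evaluation code; `aggregate_bpp` and its arguments are hypothetical names.

```python
def aggregate_bpp(byte_sizes, image_sizes):
    """Bits per pixel computed over the whole test set at once:
    total compressed bits divided by total pixel count, rather
    than averaging per-image bpp values."""
    total_bits = sum(n * 8 for n in byte_sizes)          # bytes -> bits
    total_pixels = sum(w * h for (w, h) in image_sizes)  # (width, height) per image
    return total_bits / total_pixels

def meets_low_rate_threshold(byte_sizes, image_sizes, threshold=0.15):
    """True if the submission satisfies the aggregate bpp limit."""
    return aggregate_bpp(byte_sizes, image_sizes) <= threshold
```

Note that under this pooling a submission may exceed 0.15 bpp on individual images as long as the test set as a whole stays under the limit.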

For the transparent compression track, the requirement will be that
the compression quality is at least 40 dB (aggregated) PSNR; at least
0.993 (aggregated) MS-SSIM; and a reasonable quality level using the
Butteraugli measure (final values will be announced later). The
submissions in this track that are at or better than these quality
thresholds will then be evaluated for lowest total bitrate.
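Aggregating PSNR over the whole test set (rather than averaging per-image PSNRs) amounts to pooling the squared error first. A minimal sketch of that idea, assuming pixel-count weighting; the organizers' exact pooling may differ:

```python
import math

def aggregated_psnr(mses, pixel_counts, max_val=255.0):
    """PSNR from the MSE pooled over all images, weighted by
    pixel count -- not the mean of per-image PSNR values."""
    total_sq_error = sum(m * n for m, n in zip(mses, pixel_counts))
    pooled_mse = total_sq_error / sum(pixel_counts)
    return 10.0 * math.log10(max_val ** 2 / pooled_mse)
```

Under this reading, a submission meets the transparent-track PSNR requirement when the pooled value is at least 40 dB.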

We provide the same two training datasets as last year:
Dataset P (“professional”) and Dataset M (“mobile”). The datasets were
collected to be representative of images commonly found in the wild and
contain around two thousand images. The challenge allows
participants to train neural networks or other methods on any amount of
data (it should be possible to train on the data we provide, but we
expect some participants to have access to additional data as well).
Participants will need to submit a file for each test image.

Substantial prizes will be given to the winners of the challenges. This is possible thanks to the sponsors.

To ensure that the decoder is not optimized for the test set, we
will require the teams to use one of the decoders submitted in the
validation phase of the challenge.

Regular Paper Track

We will have a regular paper track for short papers (up to 4 pages), which
allows participants to share research ideas related to image compression. In
addition to the paper presentations, we will host a poster session during which
authors can discuss their work in more detail.
We encourage exploratory research which shows promising results in:
● Lossy image compression
● Quantization (learning to quantize; dealing with quantization in optimization)
● Entropy minimization
● Image super-resolution for compression
● Deblurring
● Compression artifact removal
● Inpainting (and compression by inpainting)
● Generative adversarial networks
● Perceptual metrics optimization and their applications to compression
And in particular, how these topics can improve image compression.

Challenge Paper Track

The challenge task participants are asked to submit a short paper
(up to 4 pages) detailing the algorithms which they submitted as part of
the challenge.

Important Dates

All deadlines are 23:59:59 PST.

December 17th, 2018 Challenge announcement and the training part of the dataset released

January 8th, 2019 The validation part of the dataset released, online validation server is made available.

April 8th, 2019 Deadline for regular paper submission.

April 17th, 2019 The test set is released.

April 17th, 2019 Regular paper decision notification.

April 24th, 2019 The competition closes and participants are
expected to have submitted their solutions along with the compressed
versions of the test set.

May 8th, 2019 Deadline for challenge paper submission and factsheets.

May 15th, 2019 Results are released to the participants.

May 22nd, 2019 Challenge paper decision notification.

May 30th, 2019 Camera ready deadline (all papers)


Anne Aaron (Netflix)
Aaron Van Den Oord (Deepmind)
Jyrki Alakuijala (Google)


George Toderici (Google)
Michele Covell (Google)
Wenzhe Shi (Twitter)
Radu Timofte (ETH Zurich)
Lucas Theis (Twitter)
Johannes Ballé (Google)
Eirikur Agustsson (Google / ETH Zurich)
Nick Johnston (Google)
Fabian Mentzer (ETH Zurich)



[visionlist] SMI2019 – Call for Papers


Shape Modelling International (SMI) 2019

Vancouver, Canada, June 19-21, 2019

Shape Modeling International (SMI 2019), which this year is part of the International Geometry Summit 2019, provides an international forum for the dissemination of new mathematical theories and computational techniques for modeling, simulating and processing digital representations of shapes and their properties, serving a community of researchers, developers, students, and practitioners across a wide range of fields. Conference proceedings (long and short papers) will be published in a Special Issue of the Computers & Graphics journal (Elsevier). Papers presenting original research are sought in all areas of shape modeling and its applications.

SMI 2019 will be co-located with the Symposium on Solid and Physical Modeling (SPM 2019), the SIAM Conference on Computational Geometric Design (SIAM/GD 2019), and the International Conference on Geometric Modeling and Processing (GMP 2019), as part of the Geometry Summit 2019.

The Fabrication and Sculpting Event (FASE 2019) will be organized in co-location with SMI 2019. FASE 2019 will present original research at the intersection of theory and practice in shape modeling, fabrication and sculpting. SMI 2019 also participates in the Replicability Stamp Initiative, an additional recognition for authors who are willing to go one step further and, in addition to publishing the paper, provide a complete open-source implementation. For more details, check the SMI 2019 website.

We invite submissions related to, but not limited to, the following topics:

* Acquisition & reconstruction

* Behavior and animation models

* Compression & streaming

* Computational topology

* Correspondence & registration

* Curves & surfaces

* Deep Learning Techniques for Shape Processing

* Digital Fabrication & 3D Printing

* Exploration of shape collections

* Feature extraction and classification

* Healing & resampling

* Implicit surfaces

* Interactive modeling, design & editing

* Medial and skeletal representations

* Parametric & procedural models

* Segmentation

* Semantics of shapes

* Shape Analysis and Retrieval

* Shape Modeling applications (Biomedical, GIS, Artistic/cultural and others)

* Shape statistics

* Shape transformation, bending & deformation

* Simulation

* Sketching & 3D input modalities

* Triangle and polygonal meshes

* Shape modelling for 3D printing and fabrication

* Biomedical applications

* Artistic and cultural applications


1. Paper Format

Submissions should be formatted according to the style guidelines for the Computers & Graphics journal and should not exceed 12 pages, including figures and references. We strongly recommend using the LaTeX template to format your paper; papers formatted in MS Word according to the Computers & Graphics style guidelines are also accepted. The file must be exported to PDF for the first round of submission. For format details, please refer to the Computers & Graphics Guide for Authors.


Double-blind review

The SMI 2019 conference will use a double-blind review process. Consequently, all submissions must be anonymous. All papers should be submitted directly via the journal online submission system of Computers & Graphics: https://ift.tt/2OOaxNa. When submitting your paper to SMI 2019, please make sure that the type of article is specified as “SI: SMI 2019”.


Full paper submission: March 11

First review notification: April 15

Revised papers: May 2

Second review notification: May 15

Camera ready: May 20

Conference: June, 19-21


Loic Barthe (University of Toulouse, France)

Ye Duan (University of Missouri, USA)

KangKang Yin (Simon Fraser University, Canada)


Raphaëlle Chaine (University of Lyon, France)

Giuseppe Patanè (CNR, Italy)

[visionlist] [CfP] CVPR’19 workshop on Fashion and Subjective Search (with Paper Awards)

Key Dates

March 7th — Workshop paper submission deadline

April 3rd — Author notification

April 10th — Camera-ready


Naver Labs Europe sponsors a $600 Best Paper Award and a $400 Runner-up Paper Award.


We use the same formatting template as CVPR 2019 and solicit two kinds of submissions (through https://ift.tt/2NhOr5x):

Full papers of new contributions (8 pages NOT including references)

Short papers describing incremental/preliminary work (2 pages NOT including references)


The workshop we propose for CVPR 2019 has a specific focus on Fashion and Subjective Search (hence the name FFSS-USAD). Indeed, fashion [1,2] is influenced by subjective perceptions as well as societal trends, and thus encompasses many of the subjective attributes (both individual and collective) mentioned on the USAD project page (https://ift.tt/2GDnWae). Fashion is therefore a very relevant application for research on subjective understanding of data, and at the same time has great economic and societal impact. Moreover, one of the hardest associated tasks is how to perform retrieval (and thus search) of visual content based on subjective attributes of data [3-5].

The automatic analysis and understanding of fashion in computer vision has attracted growing interest, with direct applications in marketing and advertisement, but also as a social phenomenon, in relation to social media and trends. Exemplar tasks include, for instance, the creation of capsule wardrobes [6]. More fundamental studies address the design of unsupervised techniques to learn a visual embedding guided by fashion style [7]. The task of fashion artifact/landmark localization has also been addressed [8], jointly with the creation of a large-scale dataset. Another research line consists of learning visual representations for visual fashion search [9]. The effect of social media tags on the training of deep architectures for image search and retrieval has also been investigated [10].

We seek contributions on the following points:

Collecting large-scale datasets annotated with fashion (subjective) criteria.

Learning visual representations specifically tailored for fashion and exploitable for subjective search.

Reliably evaluating the accuracy of detectors/classifiers of subjective properties.

Translating (social) psychology theories into computational approaches to understand the perception of fashion, and its social dimension.


References

Compare and Contrast: Learning Prominent Visual Differences. S. Chen and K. Grauman. In CVPR 2018.

Semantic Jitter: Dense Supervision for Visual Comparisons via Synthetic Images. A. Yu and K. Grauman. In ICCV 2017.

Deep image retrieval: Learning global representations for image search. A. Gordo, J. Almazán, J. Revaud, D. Larlus. In ECCV 2016.

End-to-end learning of deep visual representations for image retrieval. A. Gordo, J. Almazan, J. Revaud, D. Larlus. IJCV 2017.

Beyond instance-level image retrieval: Leveraging captions to learn a global visual representation for semantic retrieval. A. Gordo, D. Larlus. In CVPR 2017.

Creating capsule wardrobes from fashion images. W.-L. Hsiao, K. Grauman. In CVPR 2018.

Learning the latent “look”: unsupervised discovery of a style-coherent embedding from fashion images. W.-L. Hsiao, K. Grauman. In ICCV 2017.

Runway to realway: Visual analysis of fashion. Vittayakorn, S., Yamaguchi, K., Berg, A. C., & Berg, T. L. In WACV 2015.

Learning Attribute Representations with Localization for Flexible Fashion Search. Ak, K. E., Kassim, A. A., Lim, J. H., & Tham, J. Y. In CVPR 2018.

Weakly supervised deep metric learning for community-contributed image retrieval. Li, Z., & Tang, J. IEEE TMM 2015.

Xavi, Miriam, Diane, Kristen, Nicu and Shih-Fu

[visionlist] Call for papers CAIP, 2-5 September 2019, Salerno (Italy)

International Conference on Computer Analysis of Images and Patterns (CAIP 2019)
2-5 September 2019, Salerno, Italy

CAIP 2019 is the 18th edition of the series of biennial international conferences devoted to all aspects of computer vision, image analysis and processing, pattern recognition, and related fields. CAIP 2019 is organized by the University of Salerno and will be held at the Grand Hotel Salerno, Italy.

CAIP 2019 welcomes submissions on, but not limited to, the
following topics:

Deep learning

3D Vision

Biomedical image and pattern analysis


Brain-inspired methods

Document analysis

Face and gestures

Feature extraction

Graph-based methods

High-dimensional topology methods

Human pose estimation

Image/video indexing & retrieval

Image restoration

Keypoint detection

Machine learning for image and pattern analysis

Mobile multimedia

Model-based vision

Motion and tracking

Object recognition


Shape representation and analysis

Static and dynamic scene analysis

Statistical models


Vision for robotics


Important Dates
Tutorial/workshop proposal submission: March 1, 2019
Paper submission: April 1, 2019


CAIP 2019 proceedings will be published in the Springer Lecture Notes in Computer Science (LNCS) series.

Please find the Call for Papers at https://ift.tt/2Vm3LAM

Visit the CAIP 2019 website for more information.

with kind regards,
Mario Vento
Gennaro Percannella

CAIP2019 co-chairs

[visionlist] Festschrift in Honor of Robert Rafal (3/22/19, Berkeley, CA)

Attention and Awareness II: A Festschrift in Honor of Robert Rafal

Friday, March 22, 2019

2121 Berkeley Way, Psychology Building,
University of California, Berkeley, CA


This series of talks will cover historical and contemporary perspectives on the neural basis of attention and awareness. Speakers will pay special tribute to the contributions and influence of Professor Robert Rafal, a cognitive and behavioral neurologist whose career has focused on better understanding the neural underpinnings of attention, consciousness, eye movements, and perception. Over his highly influential research career, Professor Rafal has elucidated important and distinct roles of the superior colliculus, thalamus, and temporo-parietal junction in the orienting of attention, and has also demonstrated unconscious processing in blindsight and hemispatial neglect.


Speakers include Bob Knight, V.S. Ramachandran, Chris Rorden, Anne Sereno, and Patrik Vuilleumier, among others.


For further information and to register for the event, please visit: https://ift.tt/2EzbsOw


Sponsored by the National Science Foundation and the Institute of Cognitive and Brain Sciences at the University of California, Berkeley.


[visionlist] [CFP] New Trends in Image Restoration and Enhancement workshop and challenges @ CVPR 2019

Apologies for cross-posting
*******************************
CALL FOR PAPERS & CALL FOR PARTICIPANTS IN 11 COMPETITIONS

NTIRE: 4th New Trends in Image Restoration and Enhancement workshop and challenges on super-resolution, dehazing, enhancement, colorization, and deblurring
In conjunction with CVPR 2019, June 17, Long Beach, USA.
Website: https://ift.tt/2H2YQAS
Contact: radu.timofte@vision.ee.ethz.ch


Papers addressing topics related to image restoration and enhancement are invited. The topics include, but are not limited to:

Image/video inpainting

Image/video deblurring

Image/video denoising

Image/video upsampling and super-resolution

Image/video filtering

Image/video dehazing


Image/video compression

Artifact removal

Image/video enhancement: brightening, color adjustment, sharpening, etc.

Style transfer

Image/video generation and hallucination

Image/video quality assessment

Hyperspectral imaging

Underwater imaging

Aerial and satellite imaging

Methods robust to changing weather conditions / adverse outdoor conditions

Perceptual enhancement

Studies and applications of the above.


A paper submission has to be in English, in PDF format, and at most 8
pages (excluding references) in CVPR style: https://ift.tt/2QR961d
The review process is double blind.

Accepted and presented papers will be published after the conference
in the CVPR Workshops Proceedings by IEEE (http://www.ieee.org) and the
Computer Vision Foundation (www.cv-foundation.org).

Author Kit: https://ift.tt/2H2YRoq

Submission site: https://ift.tt/2tF9jum


● Submission Deadline: March 15, 2019
● Decisions: April 5, 2019
● Camera Ready Deadline: April 10, 2019



Denoising: Track 1 rawRGB

Denoising: Track 2 sRGB



Colorization: Track 1 no guidance

Colorization: Track 2 with guidance


Deblurring: Track 1 clean

Deblurring: Track 2 compression artifacts

Super-Resolution: Track 1 clean

Super-Resolution: Track 2 blur

To learn more about the challenges, to participate in the challenges,
and to access the data everybody is invited to check the NTIRE webpage:https://ift.tt/2H5JED5


● Release of train data: January 10, 2019
● Competitions end: March 19, 2019


● Radu Timofte, ETH Zurich, Switzerland (radu.timofte@vision.ee.ethz.ch)

● Shuhang Gu, ETH Zurich, Switzerland

● Ming-Hsuan Yang, University of California at Merced, US / Google AI
● Lei Zhang, The Hong Kong Polytechnic University
● Luc Van Gool, KU Leuven, Belgium and ETH Zurich, Switzerland
● Cosmin Ancuti, Université catholique de Louvain (UCL), Belgium
● Codruta O. Ancuti, University Politehnica Timisoara, Romania
● Kyoung Mu Lee, Seoul National University, Korea

● Michael S. Brown, York University, Canada

● Eli Shechtman, Adobe Research

● Ming-Yu Liu, NVIDIA Research

● Zhiwu Huang, ETH Zurich, Switzerland

● Jianrui Cai, The Hong Kong Polytechnic University

● Seungjun Nah, Seoul National University, Korea

● Richard Zhang, Adobe Research

● Abdelrahman Abdelhamed, York University, Canada

SPONSORS (to be updated)

[visionlist] Deadline 15th March: PhD Studentship in Psychology at Heriot-Watt

We are currently advertising for 2 PhD students in the Department of Psychology at Heriot-Watt University, Edinburgh.  Please note that the deadline is 15th March.

We welcome applications from candidates whose interests overlap with our three research themes of ‘Work, Society and Environment’, ‘Lifespan, Health and Wellbeing’ and ‘Cognition, Brain and Behaviour’.

The Department of Psychology is offering up to two PhD scholarships starting in the academic year 2019-20. The term of the scholarships is three years. Successful candidates will receive an annual maintenance allowance (currently set at £14,777 for 2018/19) and a research support allowance of £2,250 over the registered period of study.

For more information please see 


Best wishes,