[visionlist] University of Melbourne – LECTURER (5 Positions; Female Applicants Only)


[visionlist] Funded Doctoral Position in Montreal: Deep Learning for Video-Based Expression Recognition

Applications are invited for a funded PhD position in deep learning for spatio-temporal expression recognition from facial and vocal modalities. The candidate will work under the supervision of Prof. Eric Granger at the Laboratory of Imaging, Vision and Artificial Intelligence (LIVIA), ÉTS Montréal (University of Québec), and Prof. Simon Bacon at Concordia University and the CIUSSS-NIM's Montreal Behavioural Medicine Centre. The position is available immediately once the candidate meets ÉTS admission requirements, for a maximum duration of 3-4 years.

We are looking for a highly motivated doctoral student who is interested in performing cutting-edge research in machine learning algorithms applied to expression recognition based on facial and vocal traits captured in videos, with a particular focus on deep learning (e.g., convolutional and recurrent neural network) architectures, domain adaptation, and weakly-supervised learning. Prospective applicants should have:

 – A strong academic record with an excellent M.Sc. degree in computer science, applied mathematics, or electrical engineering, preferably with expertise in one or more of the following areas: affective computing, machine learning, neural networks, computer vision, face and speech processing, and pattern recognition;

 – A good mathematical background;

 – Good programming skills in languages such as C, C++, Python, and/or MATLAB;

 – Knowledge of deep learning frameworks (a plus);

 – A prior publication in a major computer vision or machine learning conference or journal (not required, but highly desirable).

Application process: For consideration, please send a resume, names and contact details of two references, transcripts for undergraduate and graduate studies, and a link to a Master's thesis (as well as relevant publications, if any) to Eric.Granger@etsmtl.ca.

Further information:

Eric Granger

http://ift.tt/2FIRaAc

Simon Bacon

http://ift.tt/2FyPygA

[visionlist] MathPsych/ICCM 2018 Deadline extended!

University of Wisconsin, Madison

July 21st-24th, 2018
(July 21st is for Workshops, Tutorials & Opening reception)


Submission deadline extended: March 22nd, 11:59pm CDT.

Details: http://ift.tt/2GV1YLQ

Details: http://ift.tt/2oCSvC8


We invite you to MathPsych/ICCM
2018, the joint gathering of the 51st Annual Meeting of the Society
for Mathematical Psychology and the 16th International Conference on
Cognitive Modeling (ICCM): the premier conference for research on computational
models and computation-based theories of human cognition.  Following our success in 2017, ICCM has again
joined forces with the Society for Mathematical Psychology to create a
conference in which all sessions are open to all attendees, and cross-talk is
highly encouraged.


MathPsych/ICCM 2018 is a forum for
presenting and discussing the complete spectrum of cognitive modeling
approaches, including connectionism, symbolic modeling, dynamical systems,
Bayesian modeling, and cognitive architectures. Research topics can range from
low-level perception to high-level reasoning. We welcome papers presenting
modeling concepts that are supported by empirical data. We also welcome
contributions that use computational models to better understand
neuroscientific data. Members of the Common Model of Cognition group and other
contributors interested in architectural issues are encouraged to use the
keywords “Common model of cognition.”


We are pleased to announce four world-class invited speakers:

Angela Yu (University of California, San Diego)

Naomi Feldman (University of Maryland)

Estes Early Career Award: Jennifer Trueblood (Vanderbilt University)

FABBS Early Career Award: Leslie Blaha (Pacific Northwest National Laboratory, sponsored by Springer)


We will have the following invited symposia:

–       Should statistics determine the
practice of science, or science determine the practice of statistics?

(Organizers:  Rich Shiffrin & Joachim Vandekerckhove)

–       Probabilistic Specification and Quantitative Testing of Decision Theories

(Organizers:  Michel Regenwetter & Clintin Stober)

–       Computational Brain & Behavior, a new journal sponsored by the Society for Mathematical Psychology

            (Organizer:  Scott Brown)


We have separate submissions for the MathPsych parallel tracks and the
ICCM single track. For MathPsych, submissions are brief 250-word
abstracts to be considered for both talks and posters. For ICCM,
submissions are 6-page full papers to be considered for talks, and
2-page poster abstracts. We are working with TopiCS to create a special
issue based on the best full ICCM papers. Submissions may be made by
researchers, faculty, post-docs, graduate students, and undergraduate
students. Any one person may present only one paper, but may also be a
co-author of other papers (if you are the presenting author of a
MathPsych paper, you cannot also be the presenting author of an ICCM
paper, and vice versa). We also welcome pre-conference
workshop/tutorial submissions that are not specific to MathPsych or
ICCM. All types of submissions are due on March 22nd, 11:59pm CDT.


On the evening of July 21st, there will be a
celebration of the launch of the new journal, Computational Brain &
Behavior. The event is sponsored by the journal’s publisher, Springer.


Registration fees will be $200
(faculty/professionals) and $80 (students). Registration will open in May.


We hope to see you in Madison!

Ion Juvina, Joseph Houpt, and Christopher Myers (ICCM co-chairs)

Joseph Austerweil and Joseph Houpt (MathPsych co-chairs)

[visionlist] New Job Posting from Apple: Scientist: 3D Imaging/Sensing Calibration

Scientist: 3D Imaging/Sensing Calibration

Job Description

The people here at Apple don’t just create products, they craft the kind of wonder that
has revolutionized entire industries. It’s the diversity of those people and their ideas
that inspires the innovation that runs through everything we do. Join Apple, and help us
leave the world better than we found it. Are you up for the challenge?

The Camera & Depth Sensing Instrumentation Group is looking for a scientist/architect.
In this role, you will independently research, design, and develop systems that
carry out 3D camera/depth sensing calibration and test for Apple's new product
introduction (NPI). You will work collaboratively with a creative and diverse group of
engineers from multiple teams, including camera/sensor design, algorithms, and
product design, to develop state-of-the-art measurement systems for Apple.

Job Responsibilities

Independently research and actively collaborate with HW designers, algorithm
developers, and instrumentation engineers to define and develop 3D fusion
calibration and test methodology for multi-camera and depth sensing.

Collaborate in the development and optimization of image processing and
calibration algorithms in Matlab/C/C++/ObjectiveC.

Analyze large quantities of data, and formulate reports throughout the NPI cycle.

Work with fixture, equipment, optical component vendors across the globe to
develop and deploy platforms from scratch.

Key Qualifications

PhD, or Master's with 2 years of related experience, in Computer Vision, Controls,
EE, ECE, CS, Optics, or Imaging.

Expertise in defining the methodology – HW and SW solution for multi-camera/
sensor calibration, stereo vision or 3D depth registration.

Strong background in 3D Imaging, multi-view geometry, point cloud registration
and nonlinear optimization (bundle adjustment, pose optimization) techniques.

Knowledge of 3D depth sensing technology – structured light, ToF.

Knowledge of optics fundamentals – lens characteristic, camera models,
distortion estimation and correction.

Experience in mechatronics, robotics is a plus.

Programming skills: C/C++/Objective-C and Matlab/Python.

Experience with OpenCV, Eigen, SLAM, is a plus.

Ability to collaborate effectively and actively with cross-functional development teams.

Hands-on in prototyping lab setup.

Apple is an Equal Opportunity Employer that is committed to inclusion and values
diversity. We do not discriminate on the basis of race, religion, color, national origin,
gender, sexual orientation, age, marital status, veteran status, or disability status. We
also take affirmative action to offer employment and advancement opportunities to all
applicants, including minorities, women, protected veterans, and individuals with
disabilities.

Please reply to anna_liu@apple.com to be considered.

Anna Liu 

Apple Corporate Recruiting | Product Integrity Team



[visionlist] VSS 2018 | Last Call for Demo Night Submissions

[visionlist] CFP IJCCI 2018 – 10th Int.l Joint Conf. on Computational Intelligence (Seville/Spain)


10th International Joint Conference on Computational Intelligence

Submission Deadline: May 2, 2018


September 18 – 20, 2018
Seville, Spain.

IJCCI is organized in 4 major tracks:

– Evolutionary Computation
– Fuzzy Computation
– Neural Computation
– Cognitive and Hybrid Systems

A short list of presented papers will be selected for publication of revised and extended versions by Springer.

All papers presented at the congress venue will also be available at the SCITEPRESS Digital Library (http://ift.tt/1iohX1V).

Should you have any questions, please don't hesitate to contact me.

Kind regards,
IJCCI Secretariat

Address: Av. D. Manuel I, 27A, 2º esq.
2910-595 Setubal, Portugal
Tel: +351 265 100 033
Fax: +351 265 520 186
Web: http://www.ijcci.org/
e-mail: ijcci.secretariat@insticc.org

[visionlist] CFP: Workshop and Challenge on Learned Image Compression (CLIC) @ CVPR 2018

Apologies for cross-posting

CALL FOR PARTICIPANTS & PAPERS

CLIC: Workshop and Challenge on Learned Image Compression 2018
in conjunction with CVPR 2018, June 18, Salt Lake City, USA.
Website: http://ift.tt/2D0WQsj


The domain of image compression has traditionally used approaches
discussed in forums such as ICASSP, ICIP and other very specialized
venues like PCS, DCC, and ITU/MPEG expert groups. This workshop and
challenge will be the first computer-vision event to explicitly focus on
these fields. Many techniques discussed at computer-vision meetings
have relevance for lossy compression. For example, super-resolution and
artifact removal can be viewed as special cases of the lossy compression
problem where the encoder is fixed and only the decoder is trained.
Inpainting, colorization, optical flow, generative adversarial
networks, and other probabilistic models have also been used as part of
lossy compression pipelines. Lossy compression is therefore a topic
that stands to benefit greatly from the expertise of a large portion of
the CVPR community.

Recent advances in machine learning have led to an increased
interest in applying neural networks to the problem of compression. At
CVPR 2017, for example, one of the oral presentations discussed
compression using recurrent convolutional networks. In order to foster
more growth in this area, this workshop will not only encourage
further development but also establish baselines, educate, and propose a
common benchmark and protocol for evaluation. This is crucial: without
a benchmark and a common way to compare methods, it is very difficult
to measure progress.

We propose hosting an image-compression challenge that specifically
targets methods which have traditionally been overlooked, with a focus
on neural networks (though traditional approaches are also welcome). Such
methods typically consist of an encoder subsystem, taking images and
producing representations which are more easily compressed than the
pixel representation (e.g., it could be a stack of convolutions,
producing an integer feature map), which is then followed by an
arithmetic coder. The arithmetic coder uses a probabilistic model of
integer codes in order to generate a compressed bit stream. The
compressed bit stream makes up the file to be stored or transmitted. In
order to decompress this bit stream, two additional steps are needed:
first, an arithmetic decoder, which has a shared probability model with
the encoder. This reconstructs (losslessly) the integers produced by the
encoder. The last step consists of another decoder producing a
reconstruction of the original image.
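As a rough illustration of this encoder → arithmetic coder → decoder structure, here is a minimal, hypothetical Python sketch (not challenge code). A uniform scalar quantizer stands in for a learned analysis/synthesis transform, and the empirical entropy of the integer code map stands in for the bit cost that an arithmetic coder with a matched probability model would approach; all function names are illustrative.

```python
import numpy as np

def encoder(image, step=16):
    # Stand-in "analysis transform": map pixels to an integer code map.
    # A learned encoder (e.g., a stack of convolutions) plays this role
    # in the systems described above.
    return np.round(image / step).astype(np.int32)

def decoder(codes, step=16):
    # Stand-in "synthesis transform": reconstruct pixels from the codes.
    return (codes * step).astype(np.float64)

def estimated_bits(codes):
    # An arithmetic coder with a probability model matched to the code
    # distribution spends about -log2 p(c) bits per symbol; the empirical
    # entropy below approximates that total cost.
    _, counts = np.unique(codes, return_counts=True)
    p = counts / codes.size
    return codes.size * -(p * np.log2(p)).sum()

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64)).astype(np.float64)

codes = encoder(image)          # lossy step: information is discarded here
recon = decoder(codes)          # lossless round-trip through the coder assumed
bpp = estimated_bits(codes) / image.size
mse = ((image - recon) ** 2).mean()
print(f"rate ~{bpp:.2f} bpp, distortion (MSE) {mse:.1f}")
```

Note that only the quantization step loses information; the arithmetic encode/decode pair is lossless, which is why the sketch can model it purely as a bit cost.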

In the computer vision community many authors will be familiar with a
multitude of configurations which can act as either the encoder and the
decoder, but probably few are familiar with the implementation of an
arithmetic coder/decoder. As part of our challenge, we therefore will
release a reference arithmetic coder/decoder in order to allow the
researchers to focus on the parts of the system for which they are
best equipped.

While having a compression algorithm is an interesting feat by
itself, it does not mean much unless the results it produces compare
well against other similar algorithms and established baselines on
realistic benchmarks. In order to ensure realism, we have collected a
set of images representing a much more realistic view of the types of
images that are widely available (unlike the well-established
benchmarks that rely on images from the Kodak PhotoCD, at a
resolution of 768×512, or Tecnick, with images of around 1.44
megapixels). We will also provide the performance results from current
state-of-the-art compression systems as baselines, like WebP and BPG.

Challenge Tasks

We provide two datasets: Dataset P ("professional") and Dataset M
("mobile"). The datasets were collected to be representative of images
commonly found in the wild, and contain thousands of images.

The challenge will allow participants to train neural networks or
other methods on any amount of data (it should be possible to train on
the data we provide, but we expect participants to have access to
additional data, such as ImageNet).
Participants will need to submit a decoder executable that can run
in the provided docker environment and be capable of decompressing the
submission files. We will impose reasonable limitations for compute and
memory of the decoder executable.

We will rank participants (and baseline image compression methods –
WebP, JPEG 2000, and BPG) based on multiple criteria: (a) decoding
speed; (b) a proxy perceptual metric (e.g., MS-SSIM on the Y channel);
and (c) scores provided by human raters. The overall winner will be
decided by a panel whose goal is to determine the best compromise
between runtime performance and bitrate savings.
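For context on how rate/quality numbers of this kind are computed: rate is typically reported as bits per pixel derived from the compressed file size, and quality via a full-reference metric. The sketch below is a hypothetical illustration only; it uses PSNR for brevity, whereas the challenge's stated proxy is MS-SSIM on the Y channel.

```python
import numpy as np

def bits_per_pixel(compressed_bytes, width, height):
    # Rate: total bits in the compressed file divided by the pixel count.
    return 8.0 * compressed_bytes / (width * height)

def psnr(reference, reconstruction, peak=255.0):
    # A simple full-reference quality score (stand-in for MS-SSIM Y).
    ref = reference.astype(np.float64)
    rec = reconstruction.astype(np.float64)
    mse = np.mean((ref - rec) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical example: a 768x512 image compressed to 24,000 bytes.
rate = bits_per_pixel(24_000, 768, 512)
print(f"{rate:.3f} bpp")
```

Methods are then compared at matched rates: a scheme is better if, at the same bpp, it yields a higher quality score (or the same quality at fewer bits).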

Regular Paper Track

We will have a short (4-page) regular paper track, which allows
participants to share research ideas related to image compression. In
addition to the paper, we will host a poster session during which
authors will be able to discuss their work in more detail.
We encourage exploratory research which shows promising results in:
● Lossy image compression
● Quantization (learning to quantize; dealing with quantization in optimization)
● Entropy minimization
● Image super-resolution for compression
● Deblurring
● Compression artifact removal
● Inpainting (and compression by inpainting)
● Generative adversarial networks
● Perceptual metrics optimization and their applications to compression
And in particular, how these topics can improve image compression.

Challenge Paper Track

Challenge participants are asked to submit materials detailing the
algorithms they used in the challenge, and are invited to submit a
paper describing their approach.

Important Dates

December 22nd, 2017

Challenge announcement and the training part of the dataset released

January 15th, 2018

The validation part of the dataset released, online validation server is made available

April 15th, 2018

The test set is released

April 22nd, 2018

The competition closes and participants are expected to have submitted their decoder and compressed images

April 26th, 2018

Deadline for paper submission

May 29th, 2018

Release of paper reviews and challenge results

Forum

Please check out the discussion forum of the challenge for announcements and discussions related to the challenge: http://ift.tt/1dSEnEW

Ramin Zabih (Google)
Oren Rippel (WaveOne)
Jim Bankoski (Google)
Jens Ohm (RWTH Aachen)


William T. Freeman (MIT / Google)
George Toderici (Google)
Michele Covell (Google)
Wenzhe Shi (Twitter)
Radu Timofte (ETH Zurich)
Lucas Theis (Twitter)
Johannes Ballé (Google)
Eirikur Agustsson (ETH Zurich)
Nick Johnston (Google)






