[visionlist] Call for Papers: Multimedia Tools and Applications SI: Few-shot Learning for Multimedia Content Understanding

Dear all:

  The submission deadline is 31 August 2017.

Multimedia Tools and Applications

Special Issue on Few-shot Learning for Multimedia Content Understanding

http://ift.tt/2ugl6lp

Overview

The multimedia analysis and machine learning communities have long attempted to build models for understanding real-world applications. Driven by innovations in deep convolutional neural network (CNN) architectures, tremendous improvements in object recognition and visual understanding have been achieved in the past few years. However, it should be noted that the success of current systems relies heavily on large amounts of manually labeled, noise-free training data, typically several thousand examples for each object class to be learned, as in ImageNet. Although it is feasible to build learning systems this way for common categories, recognizing objects “in the wild” remains very challenging. In reality, many objects follow a long-tailed distribution: unlike common objects, they do not occur frequently enough to collect and label a large set of representative exemplars. For example, in anomalous object detection for video surveillance, it is difficult to collect sufficient positive samples because anomalies are, by definition, rare; and in fine-grained object recognition, annotating fine-grained labels requires expertise, making labeling prohibitively costly.

The expense of labeling motivates researchers to develop learning techniques that require only a few noise-free labeled examples for model training. Recently, several few-shot learning approaches, including the most challenging variant, zero-shot learning, have been proposed to reduce the number of necessary labeled samples by transferring knowledge from related data sources. In view of the promising results reported by these works, it is widely believed that few-shot learning has strong potential to achieve performance comparable to fully supervised techniques while significantly reducing labeling effort. Nevertheless, important problems remain: a general theoretical framework for few-shot learning has not been established; generalized few-shot learning, which recognizes common and uncommon objects simultaneously, is not well investigated; and how to perform online few-shot learning is also an open issue.

The primary goal of this special issue is to invite original contributions reporting the latest advances in few-shot learning for multimedia (e.g., text, video, and audio) content understanding towards addressing these challenges, and to provide an opportunity for researchers and product developers to discuss the state of the art and trends in few-shot learning for building intelligent systems.

Topics

The topics of interest include, but are not limited to:

·           Few-shot/zero-shot learning theory;

·           Novel machine learning techniques for few-shot/zero-shot learning;

·           Generalized few-shot/zero-shot learning;

·           Online few-shot/zero-shot learning;

·           Few-shot/zero-shot learning with deep CNN;

·           Few-shot/zero-shot learning with transfer learning;

·           Few-shot/zero-shot learning with noisy data;

·           Few-shot learning with active data annotation (active learning);

·           Few-shot/zero-shot learning for fine-grained object recognition;

·           Few-shot/zero-shot learning for anomaly detection;

·           Few-shot/zero-shot learning for visual feature extraction;

·           Applications in object recognition and visual understanding with few-shot learning;

Important Dates

·           Manuscript submission deadline: 31 August 2017

·           Notification of acceptance: 30 November 2017

·           Submission of final revised manuscript: 31 December 2017

·           Publication of special issue: TBD

Submission Procedure

All papers should be full journal-length versions and follow the guidelines set out by Multimedia Tools and Applications (http://ift.tt/2eX02b4).

Manuscripts should be submitted online at http://mtap., choosing “1079 – Few-Shot Learning for MM Content Understanding” as the article type, no later than 31 August 2017. All papers will be peer-reviewed following the MTAP reviewing procedures.

Guest Editors

Dr. Guiguang Ding

E-mail: dinggg@tsinghua.edu.cn

Affiliation: Tsinghua University, China

Dr. Jungong Han

E-mail: jungong.han@. ac.uk

Affiliation: Northumbria University at Newcastle, UK

Dr. Eric Pauwels

E-mail: eric.pauwels@cwi.nl

Affiliation: Centrum Wiskunde & Informatica (CWI), Netherlands


[visionlist] Vision programmer position in Facebook’s Building 8

We are hiring for several positions in our group within Facebook’s Building 8 (http://ift.tt/2ixNsxV). This is an opportunity to work on an exciting applied computer vision problem with some top CV researchers. The project is just starting out, so you will see it through from start to completion. The positions are relatively short-term (~1 year) contract positions. The full description is below. If interested, send a cover letter and CV to me (Daniel Huber) at dhuber@fb.com.


[visionlist] Postdoc in neuroimaging of space, time and number processing


[visionlist] AMFG 2017: ICCV Workshop on Analysis and Modeling of Faces and Gestures

AMFG 2017 – 7th IEEE International Workshop on Analysis and Modeling of Faces and Gestures (AMFG)

28 October 2017

Venice, Italy

Embracing Face and Gesture Analysis in Social Media with Deep Learning

This one-day serial workshop (AMFG2017) will provide a forum for researchers to review recent progress in the recognition, analysis, and modeling of faces, gestures, and bodies, and to embrace the most advanced deep learning systems for face and gesture analysis, particularly in unconstrained environments such as social media. The workshop will consist of one or two invited talks, one from industry, together with peer-reviewed regular papers (oral and poster). Original high-quality contributions are solicited on the following topics:

1. Deep learning methodology, theory, and its applications to social media analytics;

2. Novel deep learning model, deep learning survey, or comparative study for face/gesture recognition;

3. Deep learning for internet-scale soft biometrics and profiling: age, gender, ethnicity, personality, kinship, occupation, beauty, and fashion classification by facial and/or body descriptor;

4. Face, gait, and action recognition in low-quality (blurred for instance), or low-resolution video from fixed or mobile cameras;

5. Novel mathematical modeling and algorithms, sensors and modalities for face & body gesture/action representation, analysis and recognition for cross-domain social media;

6. Deep learning for detection and recognition of faces and bodies in the wild with large 3D rotation, illumination change, partial occlusion, unknown/changing backgrounds, and aging, especially face and gesture recognition robust to large 3D rotation;

7. Motion analysis, tracking and extraction of face and body models from image sequences captured by mobile devices;

8. Social/Psychological studies that can assist in understanding computational modeling and building better automated face and gesture systems for interaction purposes;

9. Novel social applications based on the robust detection, tracking and recognition of face, body, and action;

10. Face and gesture analysis for sentiment analysis in social media;

11. Other applications of face and gesture analysis in social media content understanding.

****Submission****

http://ift.tt/2v6Rbvn

****Important Dates****

Submission Deadline: Jul. 31, 2017 [Extended]

Notification: Aug. 15, 2017

Camera Ready: Aug. 20, 2017

Workshop Date: Oct. 28, 2017 (Full day)

****Invited Speakers****

Lei Zhang, Microsoft Research

Tim K. Marks, MERL

Xiaoming Liu, MSU

****Honorary General Chair****

Thomas S. Huang, University of Illinois at Urbana-Champaign, USA

****General Chairs****

Dimitris N. Metaxas, Rutgers, The State University of New Jersey, USA

Yun Raymond Fu, Northeastern University, Boston, USA

****Workshop Co-Chairs****

Mohammad Soleymani, Swiss Center for Affective Sciences, Switzerland

Ming Shao, University of Massachusetts Dartmouth, USA

Zhangyang (Atlas) Wang, Texas A&M University, USA

****Web and Publicity Co-Chairs****

Zhengming Ding, Northeastern University, Boston, USA

Sheng Li, Northeastern University, Boston, USA


[visionlist] CFP VISIGRAPP 2018 – 13th Int.l Joint Conf. on Computer Vision, Imaging and Computer Graphics Theory and Applications (Funchal, Madeira/Portugal)

SUBMISSION DEADLINE

13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications

Submission Deadline: July 31, 2017

http://ift.tt/1EHL1Rp

January 27 – 29, 2018
Funchal, Madeira, Portugal.

In Cooperation with AFIG, Eurographics.

With the presence of internationally distinguished keynote speakers:
Carol O’Sullivan, Trinity College Dublin, Ireland
Alexander Bronshtein, Israel Institute of Technology, Tel Aviv University and Intel Corporation, Israel
Falk Schreiber, University of Konstanz, Germany and Monash University Melbourne, Australia
Catherine Pelachaud, CNRS/University of Pierre and Marie Curie, France

A short list of presented papers will be selected, and revised and extended versions of these papers will be published by Springer.

All papers presented at the congress venue will also be available at the SCITEPRESS Digital Library (http://ift.tt/1iohX1V).

Should you have any questions, please don’t hesitate to contact me.

Kind regards,
VISIGRAPP Secretariat

Address: Av. D. Manuel I, 27A, 2º esq.
2910-595 Setubal, Portugal
Tel: +351 265 520 184
Fax: +351 265 520 186
Web: http://ift.tt/1EHL1Rp
e-mail: visigrapp.secretariat@insticc.org


[visionlist] Postdoc in Image Processing/Forensics

I am looking for a postdoc to join my group in the Computer Science Department at Dartmouth College starting in the fall of 2017. The postdoc will be involved in our research in image and video forensics. The candidate must have completed a Ph.D. in Computer Science, Computer Engineering, or Electrical Engineering; have significant experience with image processing and analysis; have strong mathematical and computing skills; and ideally have some experience in the field of digital forensics.

Dartmouth College is an Ivy League university with graduate programs in the sciences, engineering, medicine, and business. It is located in Hanover, New Hampshire (on the Vermont border) and has a beautiful, historic campus in a scenic area on the Connecticut River.

Applicants should send a CV and the names of three references (letters will be collected at a later date) to: Professor Hany Farid (farid@dartmouth.edu).


[visionlist] Fall Vision Meeting–Abstract Deadline is tomorrow (July 27th)

The abstract deadline for the 2017 OSA Fall Vision Meeting is tomorrow, July 27th, anywhere on Earth.

Please submit abstracts here:

http://ift.tt/2vSzBIM

Remember to add ‘YIA Candidate’ at the end if you want to be considered for the Young Investigator Award. 

A tentative schedule and description of the invited talk sessions can be found here:

http://ift.tt/2uis05p

More information about the meeting is described below. 

Hope to see you in DC,

 

Arthur Shapiro (American University)

Bei Xiao (American University) 

 

******************************

 

The 17th Annual Optical Society Vision Meeting is scheduled to take place at the Katzen Arts Center at American University in Washington, DC, from the 13th to the 15th of October 2017. 

http://ift.tt/2uis05p

Please note that the abstract submission deadline is Thursday, July 27, 2017. The deadline will not be extended again.

 

The online registration is here: 

http://ift.tt/2uis05p

 

The early-bird registration deadline is September 5th, 2017. Information about registration and hotels is available on the meeting website.

This year’s meeting includes five invited sessions, three contributed talk sessions, and a variety of contributed poster presentations. We are also pleased to announce two special events this year: Prof. Ken Nakayama from Harvard University will be presented with the 2017 Tillyer Award for distinguished work in the field of vision, and Prof. David H. Brainard from the University of Pennsylvania will present the annual Boynton Lecture.

 

Attendees of the Fall Vision Meeting can attend the OSA Frontiers in Optics (FiO) meeting (Sept 17-21 in DC) complimentary for one day, and vice versa: FiO attendees can attend the Fall Vision Meeting complimentary for one day.

 

 

The invited talk sessions include: (1) Applications of High-Resolution Retinal Imaging; (2) Myopia Development; (3) Material Perception; (4) From Retina to Extrastriate Cortex: Forward Models of Visual Input; and (5) Lighting, Color Rendering, and Color Vision.

 

Besides the keynote speakers, the invited speakers include:

 

Andrew Pucker, University of Alabama Birmingham

Anya Hurlbert, Newcastle University

Bei Xiao, American University

Brian Wandell, Stanford University

David Huang, Oregon Health and Science University 

David Troilo, SUNY College of Optometry

Don Miller, Indiana University

Greg Schwartz, Northwestern University

Ione Fine, University of Washington 

Kendrick Kay, University of Minnesota

Lorne Whitehead, University of British Columbia

Machelle Pardue, Georgia Tech

Manuel Spitschan, Stanford University

Mark Fairchild, Rochester Institute of Technology

Michael Landy, New York University

Noah Benson, New York University

Qasim Zaidi, SUNY College of Optometry

Rigmor Baraas, University of Southeast Norway

Shin’ya Nishida, NTT Japan

Stephen Burns, Indiana University


[visionlist] Open Position No. 17/Sa19 in Robot Vision

Open Position No. 17/Sa19 in Robot Vision
at the Ernst-Moritz-Arndt Universität Greifswald, Germany

One PhD position in Robot Vision (research assistant No. 17/Sa19) at the
University of Greifswald, Germany, is currently open. Funding is available for 3 years.
The successful applicant is expected to start between 15 October 2017
and 1 January 2018.
Applicants should have a very good Master’s degree in computer science,
physics, mathematics or electrical engineering.
The successful applicant will focus primarily on research but will also teach two hours per week.
Applicants should have a drive for research and should have knowledge in one or more of
the following research areas: Computer Vision, Machine Learning, Computational
Intelligence, Computer Graphics, Robotics, Computer Games, Genetic Programming or
Evolutionary Algorithms. Applicants must be fluent in English and must have excellent
programming skills (C++). Knowledge of OpenCV, OpenGLSL, CUDA, GPGPU, Unix/Linux,
script languages, or physics engines is a plus. Applications (including a CV, transcripts,
Master’s thesis, and, if available, a list of publications as PDFs) for this position should be
sent to marc.ebner@uni-greifswald.de. The deadline for applications is 26 August 2017.


[visionlist] Research Assistant Professor Position at the University of Nebraska – Lincoln and Center for Brain, Biology, & Behavior (CB3)

Hi, could you please include this job posting in the next visionlist digest?

Research Assistant Professor

Center for Brain, Biology & Behavior

University of Nebraska-Lincoln

 

The University of Nebraska-Lincoln (UNL) Center for Brain, Biology and Behavior (CB3) is seeking applicants with expertise in fMRI research design and data analysis to engage in co-investigation and consultation with faculty and student researchers conducting fMRI research. CB3 is an interdisciplinary, research-dedicated center that engages a broad spectrum of investigators across disciplines, including basic and applied scientists, clinicians, and engineers, and is actively involved in a unique research collaboration with University Athletics. The centerpiece of the 30,000-square-foot facility is a Siemens Skyra 3 Tesla scanner equipped with an MR-compatible 256-electrode high-density EEG system and an eye tracker. The center also features specialized laboratories for behavioral genetics, eye tracking, high-density EEG/ERP, NIRS, and psychophysiology, as well as a salivary bioscience core facility. The center’s state-of-the-art facilities and interdisciplinary environment enable diverse studies to expand understanding of brain function and its effects on human behavior.

 

The successful candidate will assist experienced fMRI researchers with project development, experimental design, and data analysis, as well as provide consultation and assistance in fMRI research design and analysis to researchers less experienced with fMRI. As such, the position includes opportunities for grant co-investigation and co-authorship in dissemination activities. Required qualifications include a Ph.D. in Psychology, Neuroscience, or a related field; at least three years of experience conducting fMRI research, including years of education; demonstrated training and expertise in fMRI research design, including programming of fMRI stimulus delivery paradigms, as well as training and expertise in fMRI data processing and analysis software (e.g., AFNI, FSL, SPM); co-authorship of scholarly work involving fMRI; and excellent communication skills and experience contributing to a productive fMRI research group. Preferred qualifications include five years of experience conducting fMRI research, including years of education and postdoctoral experience; demonstrated training and expertise in the integration of fMRI research design and analysis with other complementary methodologies, such as structural MRI/DTI, EEG/ERP, eye tracking, and/or psychophysiology; scripting and programming skills (e.g., R); and evidence of independence in fMRI research, grant writing, and publication.

 

Review of applications will begin October 1, 2017 and continue until the position is filled. To be considered for the position, please go to http://ift.tt/1pJ9tLq, requisition F_170067, and click “Apply to this job”. Candidates should attach a letter of application, curriculum vitae, and contact information for three references. The University of Nebraska-Lincoln is committed to a pluralistic campus community through affirmative action, equal opportunity, work-life balance, and dual careers. See http://ift.tt/1NTnRg3.

 

http://cb3.unl.edu/

 


[visionlist] Postdoc in visual cognitive neuroscience at The Donders Institute, The Netherlands

A postdoctoral position is available at the Donders Institute (Nijmegen, The Netherlands). In this project, we will examine the neural basis of factors that contribute to efficient naturalistic vision, such as object grouping, perceptual learning, context-based predictions, and top-down attentional set. The project will be embedded in the Donders research theme ‘Perception, Action and Control’ (Dr Marius Peelen’s group, http://ift.tt/2uYPU9Z). You will be expected to take the lead in developing and performing neuroimaging experiments that characterise the neural basis of naturalistic vision. Facilities to support this project include fMRI and MEG, available at the Donders Institute.

The Donders Institute for Brain, Cognition and Behaviour focuses on state-of-the-art cognitive neuroscience using a multidisciplinary approach, and offers excellent lab and neuroimaging facilities, courses, and technical support.

Preference will be given to candidates with a PhD in cognitive neuroscience or a related field who have a strong motivation to pursue a scientific career. Experience with neuroimaging techniques (fMRI or MEG/EEG) and strong programming skills are highly desirable, as is a background in high-level vision and/or attention research.

The application deadline is 1 August 2017.

For informal inquiries, please email m.peelen@donders.ru.nl

To apply, please visit: http://ift.tt/2uAXe9m