The Tufts University Center for Applied Brain and Cognitive Sciences is looking to hire a postdoctoral researcher with broad interests to study fundamental questions in visual attention, but with an eye toward real-world problems.
The primary project aims to understand how humans perceive and act upon groups of objects, with the goal of developing broad, psychologically grounded principles for interfaces that would allow one human to control a swarm of many robots in real time. Interest
in ensemble perception, perception for action, gestalt psychology, visual attention/memory in interesting circumstances, human factors, interface design, or other related topics is ideal.
The candidate would also be expected to contribute to other Center projects, so would ideally have an interest or background in one or more of: virtual reality, augmented reality, electrophysiology, cognitive control, navigation, stress and cognition,
exercise and cognition, human-robot interaction, marksmanship, or other topics discussed on our website:
In addition to relevant theoretical background, the ideal candidate would have strong skills in behavioral experiment design, programming, and analysis. Experience with eye tracking, physiological, or electrophysiological recording and analysis is a plus.
The position starts on 1 September, or as soon after as is practical, and is for one year, with option to renew for a second year. Persons typically underrepresented in psychology or robotics research are encouraged to apply.
Please contact Matthew S. Cain for more information, or apply with a cover letter and CV.
Unconventional Sensing and Processing for Robotic Visual Perception, IROS 2018 Workshop
* The technical program, schedule, and speaker information are now available at:
* Event dates:
October 5, 2018, 14:30 – 19:00
Madrid Municipal Conference Centre, Madrid, Spain
* We are proud to confirm our excellent speakers:
François Berry Institute Pascal-CNRS, France
Luca Benini ETH Zurich, Switzerland
Jörg Conradt KTH, Sweden
Margarita Chli ETH Zurich, Switzerland
Guido de Croon TU Delft, The Netherlands
Tobi Delbruck ETH Zurich, Switzerland
Piotr Dudek University of Manchester, United Kingdom
Andrew Davison Imperial College London, United Kingdom
Justus Piater University of Innsbruck, Austria
Davide Scaramuzza University of Zurich, Switzerland
The deadline for abstract submission is Aug. 1; however, we will accept short abstracts (300 words) for poster and demo presentations until Aug. 20.
Abstracts can be submitted via EasyChair: https://ift.tt/2As8ICN
Please, find more information on the abstract submission and topics on the Workshop webpage:
* The registration site of the main conference (“Workshops and Tutorials”) can be used to register for the workshop:
Julien Martel (ETH Zurich)
Yulia Sandamirskaya (University of Zurich & ETH Zürich)
Jörg Conradt (KTH, Stockholm)
We look forward to welcoming you at this event!
Dr. Yulia Sandamirskaya
Institute of Neuroinformatics (INI)
Neuroscience Center Zurich (ZNZ)
University and ETH Zurich
Winterthurerstrasse 190, 8057 Zurich
Tel: +41 77 97 46613, +41 44 63 53045
Please distribute to potentially interested parties.
Apologies for multiple postings.
Franco Pestilli, PhD
Assistant Professor in Psychology, Neuroscience and Cognitive Science
Adjunct Professor in Engineering, Computer Science, and Optometry
Indiana Network Science Institute, Indiana University, Bloomington
francopestilli.com | brainlife.io
email@example.com | +1 (812) 856 9967
PhD position: natural haptic interactions for augmented and virtual reality
We are seeking a highly motivated PhD candidate to work in the Perceptual Intelligence Labs, Department of Industrial Design Engineering, at TU Delft, within a multidisciplinary team of psychologists, engineers, physicists, and designers. The PhD position is part of a Future and Emerging Technologies (FET) Horizon 2020 project (H-Reality) focused on creating contact and contactless haptic virtual reality experiences. The position involves conducting research on haptic interactions in everyday environments, identifying their salient features, and, in collaboration with the partners in the project, translating these into virtual reality. The work will primarily involve designing paradigms and conducting experiments to measure and evaluate haptic experiences. The PhD candidate will be encouraged to spend time with both the academic and industry partners at different stages of the project.
We are interested in candidates with expertise in one or more of the following areas:
– Experimental psychology
– Signal processing
– Probabilistic and computational models of behaviour
– Mechanical engineering
Consideration will be given to candidates with strong quantitative skills who have recently completed a master's (or equivalent) in a related field (e.g., haptic or auditory science, experimental psychology, computer science, mechanical engineering, neuroscience, physics). An interest in virtual or augmented reality systems, perception, psychophysics, and data analysis is desirable, as is the ability to communicate scientific ideas in both written and oral form and an ability to learn new skills.
The TU Delft
Delft University of Technology in the Netherlands is a modern university with a rich tradition. Its eight faculties and over 40 English-language Master programmes are at the forefront of technological development, contributing to scientific advancement in the interests of society. Ranked among the top universities of technology in the world (14th in the 2015 QS world ranking for Engineering & Technology), TU Delft’s excellent research and education standards are supported by outstanding facilities and research institutes. TU Delft maintains close links with national and international industry, a strategic alliance contributing to the relevance of its academic programmes and career prospects for its graduates. TU Delft is also a member of the IDEA League of five leading engineering universities in Europe, as well as CESAER, the association of European schools of technology and engineering.
The Perceptual Intelligence lab (π-lab) is a multidisciplinary collaboration of 7 staff members, headed by Prof. Pont. The lab is unique for its applied cross-disciplinary approach to real-world perception problems, the ecological optics of light, and the exploration of natural haptic interactions and materials. Facilities include unique, partly custom-built, state-of-the-art photographic and photometric equipment, a goniometer, a photographic studio, several custom-built lighting setups, stereoscopic imaging, and mechanical and electrical workshops with full-time staff. In addition, we have custom-built, highly deformable PDMS thimble sensors for haptic interaction measurements and two Futek load cells (FUTEK L2357+JM-2A, 45 N max), among a range of other sensors, as well as 3D scanning and printing facilities.
The i-Touch haptics group was established in 2017 as part of the Perceptual Intelligence lab, supported by a Delft Technology Fellowship awarded to Dr. Hartcher-O’Brien. Dr. Hartcher-O’Brien has built her international reputation working on haptic perception, sensory augmentation and computational touch, as well as mechanisms and models of multisensory integration.
Conditions of employment
The TU Delft offers a customisable compensation package, a discount for health insurance and sport memberships, and a monthly work costs contribution. Flexible work schedules can be arranged. An International Children’s Centre offers childcare and an international primary school. Dual Career Services offers support to accompanying partners. Salary and benefits are in accordance with the Collective Labour Agreement for Dutch Universities. As a PhD candidate you will be enrolled in the TU Delft Graduate School. The TU Delft Graduate School provides an inspiring research environment; an excellent team of supervisors, academic staff and a mentor; and a Doctoral Education Programme aimed at developing your transferable, discipline-related and research skills. Please visit www.graduateschool.tudelft.nl for more information.
Information and application
For more information about this position, please feel free to contact Dr. Jess Hartcher-O’Brien, e-mail: firstname.lastname@example.org. To apply, send a letter of application, a detailed CV, one recent academic publication (or master’s thesis), a statement of research interests (concerning this project), and the affiliations and contact information of two people who could furnish a letter of reference. Please e-mail your application by 28 September 2018 to: afdelingID-personeelszaken-IO@tudelft.nl.
[visionlist] IAPR Summer School on ‘Machine and Vision Intelligence’ – Vico Equense, Italy – August 27-31, 2018. Deadline 6th August and Elsevier SI
Posted: July 30, 2018
*** Apologies for multiple copies ***
IAPR International Summer School on
Machine and Vision Intelligence
Vico Equense, Naples (Italy)
August 27-31, 2018
Endorsed by: IAPR TC-03, TC-12 and Italian society CVPL
Registration: August 6, 2018
Poster submission: August 10, 2018
School: August 27-31, 2018
The aim of the IAPR Summer School on Machine and Vision
Intelligence (VISMAC 2018) is to provide in-depth courses on
state-of-the-art research in Artificial Intelligence, and
specifically Deep Learning, for Computer Vision, with a global
scope aimed at updating participants on the fundamentals and
most recent advances of different deep learning architectures,
which will be explained through three types of mainstream
applications: image processing, pattern recognition, and
CVPRLab, Department of Science and Technologies, University of
The school is endorsed by the ‘Computer Vision, Pattern
Recognition and Machine Learning Italian Association’ (CVPL) and
by the International Association for Pattern Recognition (IAPR),
under the Technical Committees
– IAPR-TC03 – Neural Networks & Computational Intelligence
– IAPR-TC12 – Multimedia and Visual Information Systems
The school will be hosted by Hotel Oriente in Vico Equense on
the Sorrento Coast (https://ift.tt/2MXKXog),
near Sorrento, Pompeii, Capri, and Naples, one of the most
beautiful 4-star hotels of the Sorrento coast, perfectly
combined with the history of the ancient Etruscan town.
The structure is reminiscent of the architecture of a fishing
village, with rooms connected by staircases and terraces, fully
reviving the typical atmosphere of the Sorrento Peninsula.
The courses will be delivered by world-renowned experts in the
field and will cover both theoretical and practical aspects of
Artificial Intelligence in Computer Vision problems together with
advanced homework sessions.
A (preliminary) list of the topics covered by the lectures is the following:
Artificial Intelligence and Deep Learning in Vision
Interpreting and Explaining Deep Models in Computer Vision
From task-specific to task-agnostic deep learning in computer vision
Robust Scene Analysis
Multi-Class Multi-Instance Model Fitting
Deep Learning for Affective Computing
Automatic Selection, Configuration and Composition of ML
Challenges and Opportunities in Machine Vision
Background Subtraction and Initialization Models
Tracking and Detection
3D Reconstruction and Global Averaging
The labs will include TensorFlow, Caffe, Theano, and Torch sessions for
computer vision, along with homework sessions on well-known
challenges and benchmarks such as, among others, ChaLearn Looking at
People, Visual Object Tracking (VOT), Change Detection (CDnet), and
Emotion Recognition in the Wild (EmotiW).
The list of (provisional) speakers is the following:
Wojciech Samek, Fraunhofer HHI, Germany
Iasonas Kokkinos, University College London, UK
Stefan Roth, Technische Universität Darmstadt, Germany
Jiri Matas, Czech Technical University, Czech Republic
Björn Schuller, University of Augsburg, Germany
Hugo Jair Escalante, NIAOE, México
Silvio Savarese, Stanford University, US
Alberto Del Bimbo, University of Florence, Italy
Thierry Bouwmans, Université de La Rochelle, France
Benjamin Laugraud, University of Liège, Belgium
Luka Cehovin Zajc, University of Ljubljana, Slovenia
Srinivasa Venu Madhav Govindu, Indian Institute of Science, India
The registration fees include: Access to VISMAC 2018, Welcome kit,
Coffee breaks, Lunches, Social Dinner, Guided tour and Certificate
€ 400 for GIRPR/IAPR members (without accommodation)
€ 800* for GIRPR/IAPR members with accommodation in a shared double
room at the school venue from August 29 (arrival) to September 1
€ 500 for non-GIRPR/IAPR members (without accommodation)
€ 900* for non-GIRPR/IAPR members with accommodation in a shared
double room at the school venue from August 29 (arrival) to September 1
*(an extra fee of 300 € is due for single room occupancy)
The school offers a limited number of partial scholarships for
students requesting financial support. The scholarships will
cover part of the fees (which include the full-board
accommodation and the school courses). A completed application form
must be sent to Prof. Alfredo Petrosino, together with a
curriculum vitae, via e-mail to: email@example.com
Lectures, together with selected school participant works, will be
published in printed and electronic versions in Springer’s
Lecture Notes in Computer Science (LNCS) series. Selected works
will have the chance of publication in Elsevier
journal special issues (Pattern Recognition Letters and
– PhD Forum: students present their research work to leading
researchers. Submission of a poster is required.
– Reading Group: students will be asked to study one or more
research papers and discuss their perspective on them.
– Hackathon: in 24 hours, participants will develop a
prototype for Interaction (making it easier to share ideas and
information), Sustainability (supporting eco-sustainable
development), or Quality of Life (contributing to improved
lifestyle).
– Guided Tour: a guided tour of the excavations of Pompeii will be
scheduled on one afternoon during the school.
Awards and prizes will be offered. Details on the school website.
– Prestigious international university
– Wide range of employee benefits
– Opportunity at the Australian Institute for Machine Learning (AIML)
– Opportunity to be a part of a thriving space and defence ecosystem
[visionlist] Call-for-Participation – Data Released: MediaEval 2018 Recommending Movies using Content: Which Content is Key? Task
Posted: July 28, 2018
[Apologies for cross-postings]
2nd CALL FOR PARTICIPATION – DATA RELEASED
Recommending Movies using Content: Which Content is Key? Task
2018 MediaEval Benchmarking Initiative for Multimedia Evaluation
Register here: https://ift.tt/2KXgjKS
The task addresses the question of which kinds of content are most
helpful for predicting the reception that a movie will receive from its
audience, as reflected in its ratings. There are two aspects to this
question: (1) which parts of the movie or trailer are most important
(e.g., type of scene; beginning, middle, or end) and (2) which aspects of
the content are important (e.g., what is depicted, how it is edited).
Because trailers and movie clips are different, we expect that it will
be most productive to take their differences into account in this
task. For example, movie clips are usually made with a few long shots
focusing on a particular scene, while trailers use many short
shots summarizing the entire movie.
An important challenge of the task is addressing the fact that user
ratings on movies are atomic (i.e., users assign them to the movie as
a whole), and it is not clear to what extent we can assume that different
parts of the movie or trailer contribute compositionally to the
rating. This task explores the idea that it is productive to look for
short segments that are predictive of the rating, and that it is not
necessary to process the full-length movie for successful rating
prediction. The advantages of a system that uses short segments are
twofold: first of all, short segments allow for a dramatic reduction
in computational time, and, second, short segments are more readily
available than full movies.
The overall goal of the task is to use content-based features to
predict how a movie is received by its viewers. Task participants must
create an automatic system that can predict the average ratings that
users assign to movies (representing the global appreciation of the
movie by the audience) and also the rating variance (representing the
agreement/disagreements between user ratings). The input to the system
is a set of audio, visual and text features derived from trailers and
selected movie scenes (movie clips).
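The call does not prescribe any particular method. Purely as an illustration, the sketch below shows the kind of minimal baseline a participant might start from: a closed-form ridge regression mapping one precomputed feature vector per movie to the two targets (average rating and rating variance). All names, array shapes, and the toy data are hypothetical stand-ins for the released audio, visual, and text descriptors.

```python
import numpy as np

def fit_ridge(X, Y, lam=1.0):
    """Closed-form ridge regression: W = (X^T X + lam I)^{-1} X^T Y.

    X: (n_movies, n_features) feature matrix.
    Y: (n_movies, 2) targets, columns = [mean rating, rating variance].
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def predict(X, W):
    """Predict [mean rating, rating variance] for each movie."""
    return X @ W

# Toy data: random "features" standing in for the real per-movie
# descriptors (shapes are hypothetical, not from the released dataset).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 16))
true_W = rng.normal(size=(16, 2))
Y_train = X_train @ true_W + 0.01 * rng.normal(size=(100, 2))

W = fit_ridge(X_train, Y_train, lam=0.1)
Y_hat = predict(X_train, W)
```

In practice one would concatenate or fuse the provided audio, visual, and text features, cross-validate the regularisation strength, and evaluate against held-out MovieLens ratings; stronger non-linear models are of course equally admissible.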
Researchers will find this task interesting if they work in the
research areas of multimedia processing, personalization and
recommender systems, machine learning, and information retrieval.
Participants are supplied with audio, visual and text features
computed from trailers and clips corresponding to about 800 unique
movies in the well-known MovieLens 20M dataset, which makes it
possible to use the associated user ratings and tags (keywords).
Each movie is accompanied by a set of links (mainly on YouTube) to
different samples of movie clips, each focusing on a particular
scene and its semantics.
Participants in the task are invited to present their results during
the annual MediaEval Workshop, which will be held 29-31 October 2018
at EURECOM, Sophia Antipolis, France. Working notes proceedings are to
appear with CEUR Workshop Proceedings (ceur-ws.org).
Important dates (tentative)
Development data release: 20 July
Test data release: 15 August (tentative)
Runs due: 25 September
Working notes papers due: 17 October
MediaEval Workshop, Sophia Antipolis, France: 29-31 October
Yashar Deldjoo, Politecnico di Milano, Italy
Thanasis Dritsas, TU Delft, Netherlands
Mihai Gabriel Constantin, University Politehnica of Bucharest, Romania
Anuva Agarwal, Carnegie Mellon University, USA
Bogdan Ionescu, University Politehnica of Bucharest, Romania
Markus Schedl, Johannes Kepler University Linz, Austria
On behalf of the organizers,
Prof. Bogdan IONESCU
ETTI – University Politehnica of Bucharest