[visionlist] [CFP] Workshop on Embedded Vision @IEEE AVSS 2017

Apologies for multiple postings

******************************************************************************************************************************

CALL FOR PAPERS

International Workshop on Software Architectures for Embedded Vision (SAV2017)

hosted by IEEE AVSS 2017 Conference 

(Lecce, Italy – August 29th, 2017)

******************************************************************************************************************************

Motivation

The growing number of surveillance cameras deployed across the territory, together with the increasing number of mobile devices able to acquire images, video streams, or simply speed and position, such as smartphones and wearable devices, has led over the last decades to growing interest from the scientific community in solutions able to automatically analyse the scene in which a person is moving so as to identify events of interest. This interest has been accompanied by a strong push from the market, increasingly interested in the possibility of running such algorithms directly on board cameras, smartphones and wearable devices, or more generally on low-cost embedded architectures.

Within this context, the scientific problem addressed by this workshop is to find the best possible trade-off between the accuracy required by the particular application at hand and the computational burden needed to achieve it. The combination of computer vision algorithms and embedded systems is typically referred to as “Embedded Vision”.

******************************************************************************************************************************

Topics include (but are not limited to):

Embedded Vision for Unmanned Ground Vehicles (UGVs)

Embedded Vision for Unmanned Air Vehicles (UAVs)

Deep learning algorithms optimized for embedded architectures

Parallel algorithms for embedded architectures (such as GPUs)

Computer vision algorithms optimized for programmable or reconfigurable platforms (such as DSPs, FPGAs, SoCs, and so on)

Medical applications over embedded architectures

Video surveillance algorithms over embedded platforms

Applications for digital health over embedded systems

Detection and tracking algorithms over embedded platforms

Gesture, Action, Activity recognition algorithms for embedded platforms

Industrial applications optimized for embedded architectures

Distributed architectures based on embedded platforms

Tools and languages for embedded vision

******************************************************************************************************************************

Organizers 

Luca Greco, University of Salerno, Italy

Brian C. Lovell, The University of Queensland, Australia

Alessia Saggese, University of Salerno, Italy

Mario Vento, University of Salerno, Italy

******************************************************************************************************************************

Important Dates

Paper submission – May 12, 2017

Notification of acceptance – June 1, 2017

Camera Ready – June 12, 2017

Early registration – June 26, 2017

Workshop – August 29, 2017

******************************************************************************************************************************

Submission:

Manuscripts can be submitted through Easychair:

http://ift.tt/2nFZyIa

The final papers are limited to 6 pages in double-column IEEE format, including figures. All manuscripts will be published as IEEE proceedings.

******************************************************************************************************************************

More information

Email: mivia@unisa.it

Website: http://ift.tt/2noq4ZB


[visionlist] Postdoctoral position in “Cognitive representations of space through multisensory inputs”

An 18-month postdoctoral position is available at the Istituto Italiano di Tecnologia, Genoa, Italy, in the Robotics, Brain and Cognitive Science lab, starting from July 2017.

Recent findings suggest that attentional deficits can be reduced by properly modulating multisensory sources. However, there is limited knowledge of how to perform such modulations in non-isolating contexts (Söderlund and Jobs, 2016). The candidate will study how environmental information modulates the perception of non-visual inputs when a person performs non-isolating activities such as speech understanding, listening to sound sources, or moving in unfamiliar spaces. The candidate will carry out a complete study in which behavioral variables are combined with neurophysiological correlates (EEG). The results of this research will be applicable in the field of assistive technologies for sensory and cognitive impairments and in clinical setups.

The research will be carried out in collaboration with a
team of engineers, psychologists and experts in the field of
sensory disabilities.

The ideal candidate should hold a Ph.D. degree in Psychology, Cognitive Science or Neuroscience.

He/she should have experience in at least one of the following fields: Psychophysics, Neurophysiology, Cognitive Neuroscience.

He/she should also have published in the related field. Experience with statistical software suites (R, SPSS, Matlab, Python) is a plus.

He/she will work in the fully equipped “Spatial Awareness and Multisensory Perception” lab of the Robotics, Brain and Cognitive Science Research Line.

Interested applicants should submit a CV, a list of publications, the names of 2 referees and a statement of research interest to jobpost.73218@rbcs.iit.it, quoting “Postdoctoral position in ‘Cognitive representations of space through multisensory inputs’ – BC: 73218” in the subject, by May 15, 2017.

For information, please contact: luca.brayda@iit.it


[visionlist] CFP: Electronic Imaging 2018: IS&T International Symposium on Electronic Imaging

Call for Papers

Electronic Imaging 2018: IS&T International Symposium on Electronic Imaging

Conference Dates: January 28 – February 1, 2018

Location: Burlingame, CA, USA

Website: http://ift.tt/2qfCXDM

Paper Submission Deadline: August 15, 2017 (Early Decision Submission Deadline: June 30, 2017)

Imaging is integral to the human experience—from personal photographs taken every day with mobile devices to autonomous imaging algorithms in self-driving cars to the mixed reality technology that underlies new forms of entertainment. The Electronic Imaging Symposium features 19 technical conferences covering all aspects of electronic imaging, including Human Vision and Electronic Imaging (HVEI), Color Imaging, Image Quality and System Performance, Intelligent Robotics and Industrial Applications using Computer Vision, and Visual Information Processing and Communication.

For the full list of individual conferences and topics, see http://ift.tt/1quIxkc.

Join us at Electronic Imaging 2018, where leading researchers, developers, and entrepreneurs from around the world discuss, learn about, and share the latest imaging developments from industry and academia.
*******************

Jennifer Henderson

Society for Imaging Science & Technology (IS&T)

7003 Kilworth Lane

Springfield, VA 22151

USA

jhenderson@imaging.org


[visionlist] Last weeks to register for HCP Course 2017

It’s not too late to register for the 2017 HCP Course: “Exploring the Human Connectome”, to be held June 19-23 at the Djavad Mowafagian Centre for Brain Health at the University of British Columbia (UBC) in Vancouver, BC, Canada! Spaces for the course are limited and registration is on a first come, first served basis.

May 17, 2017 is the deadline to reserve discounted UBC accommodations within walking distance of the course venue: http://ift.tt/2qjBcVs

The 5-day intensive HCP course is a great opportunity to learn directly from HCP investigators and is designed for those interested in:

using data collected and distributed from the HCP young adult study

acquiring and analyzing HCP-style imaging and behavioral data at your own institution

processing your own non-HCP data using HCP pipelines and methods

using Connectome Workbench tools and sharing data using the BALSA imaging database

learning HCP multimodal neuroimaging analysis methods, including those that combine MEG and MRI data

exploring the HCP MMP 1.0 multimodal parcellation brain map and learning about how it can be used in your analyses

positioning yourself to capitalize on HCP-style data being distributed by the Connectome Coordinating Facility (CCF) from the HCP Development (healthy subjects ages 5-21), HCP Aging (healthy subjects ages 35-90+), and Connectomes Related to Human Disease projects

See http://ift.tt/2pqkFSf for more info. If you have any questions, please contact us at: hcpcourse@humanconnectome.org

We look forward to seeing you in Vancouver!

Best,

2017 HCP Course Staff


[visionlist] CFP: Special issue @ IEEE Transactions on Cognitive and Developmental Systems (TCDS)

Apologies for multiple postings

**********************************************************************************************************************************

CALL FOR PAPERS

IEEE Transactions on Cognitive and Developmental Systems (TCDS)

Special Issue on “A sense of interaction in humans and robots: from visual perception to social cognition”

http://ift.tt/2jb1sRY

Guest Editors 

Alessandra Sciutti (alessandra.sciutti@iit.it)

Nicoletta Noceti (nicoletta.noceti@unige.it)

**********************************************************************************************************************************

Important Dates

30 April 2017 – Deadline for title and abstract submission

30 June 2017 – Deadline for manuscript submission 

15 September 2017 – Notification to authors

15 October 2017- Deadline for submission of revised manuscripts 

15 November 2017 – Final decisions

Winter 2017 – Special Issue Publication in IEEE TCDS

Aim and Scope

Since early infancy, the human ability to interact with others is substantially strengthened by vision, with several visual processes tuned to support prosocial behaviour. For instance, a natural predisposition to look at human faces or to detect biological motion is present at birth. More refined abilities – such as the understanding and anticipation of others’ actions and intentions – progressively develop with age, leading, in a few years, to a full capability of interaction based on mutual understanding, joint coordination and collaboration.

A key challenge of robotics research nowadays is to provide artificial agents with similarly advanced visual perception skills, with the ultimate goal of designing machines able to recognise and interpret both explicit and implicit communication cues embedded in human behaviours. These achievements pave the way for the large-scale use of Human-Robot Interaction applications in a variety of contexts, ranging from the design of personal robots to physical, social and cognitive rehabilitation.

This special issue is aimed at gathering contributions from different research communities, including Robotics, Computer Vision, Cognitive Science, Psychology and Neuroscience, to create a comprehensive perspective on the topic of social interaction in humans and robots, with specific reference to the role of human and machine perceptual abilities in supporting interactive skills. Contributions may focus on human visual perception for interaction on the one hand, and on the implementation of machine vision methods aimed at improving human-human or human-machine interaction on the other. This multidisciplinary effort is expected to bring innovations in fields such as social robotics and human-machine interaction, but also in domains like developmental psychology and cognitive rehabilitation.

Themes

Understanding how efficient and seamless collaboration can be achieved among human partners, and which explicit and implicit cues are intuitively adopted in human cooperation, would provide key insights into how to model similar abilities in future interactive machines.

This special issue aims to address these questions both from the perspective of the study of human perception for interaction and from the implementation perspective, considering new algorithms and modelling efforts brought forward to improve current robotics.

Topics include (but are not limited to) the following

* Computational Models of Visual Perception for Interaction 

* Perception of Intentions and Actions 

* Vision for Robotics and Artificial intelligence in Social Contexts 

* Neuroscientific bases of Interaction 

* Development of Social Cognition in Humans 

* Social Signals Recognition and Analysis 

* Human-Robot Interaction 

* Emotion Recognition for Interaction 

* Machine Learning for Visual Perception

Submission:

Manuscripts should be prepared according to the “Information for Authors” of the journal found at:

http://ift.tt/2nvY5GE

Submissions should be made through the IEEE TCDS Manuscript Center at http://ift.tt/2ocwNTH, selecting the category “SI: Human Robot Interaction”.

Prospective authors are kindly asked to contact the guest editors by e-mail at alessandra.sciutti@iit.it and nicoletta.noceti@unige.it, providing a tentative title and abstract of the contribution by 30 April, 2017.

**********************************************************************************************************************************

Dr. Nicoletta Noceti
DIBRIS – Dept. of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genova
via Dodecaneso 35, 16146 Genova-IT
Room 226
Tel. +39 010 3536626
Fax +39 010 3536699
https://sites.google.com/site/nicolettanoceti/
http://www.slipguru.unige.it


[visionlist] VSS 2017 | VSS Smartphone App Now Available


[visionlist] ECVP 2017 in Berlin: Final call for abstracts (Deadline: 29 April 2017)

++++ Apologies for cross-posting ++++

Dear friends and colleagues,

This is our final Call for Abstract submissions for the 40th European Conference on Visual Perception – ECVP 2017 (Berlin | Germany, 27–31 August 2017).

Please submit your contributions by Saturday, 29 April 2017 (3 days left!)

Please find the link to our online abstract submission here. 

(The website will show the updated deadline as of tonight.)

Here is our latest news:

1. Keynote speakers

We will have three Plenaries featuring four speakers: 

Nava Rubin (Perception Lecture)

Merav Ahissar & Brian Scholl (Keynote Dialogue)

Shin’ya Nishida (Rank Lecture)

2. Preliminary program (including selected symposia and speakers)

On our website, you will find the preliminary schedule of the conference, including the selected symposia.

3. Pre-conference tutorials — Register now!

Register now for one of the seven pre-conference tutorials. There are only 20 spots for each tutorial, and they are booked on a first-come, first-served basis.

4. Conference Party Dinner & Night of Light

At ECVP 2017 in Berlin, the Conference dinner will be a party at one of Berlin’s most original venues, the Kulturbrauerei. It is all inclusive. The party will also feature the Night of Light, a place to exhibit your most stunning vision demos – apply by Monday, 15 May 2017.

5. Kids welcome!

We are planning to offer child care during ECVP 2017, free of charge. If you would like to use this service during the conference, please sign up here by Monday, 12 June 2017.

We are looking forward to your contributions.

We hope to see you all in Berlin!

 

Guido Hesselmann, Marianne Maertens, Florian Ostendorf, Martin Rolfs, & Philipp Sterzer

[The Organizing team of ECVP 2017 in Berlin]