Dear Visionlist users:
The Perception Action Cognition (PAC) Lab is looking for new graduate students with a starting date of August 2013.
Please check out the lab website for more details: http://ocean.otr.usm.edu/~w785427/lab.html
Current research topics include
I am looking for a dedicated, highly motivated graduate student with excellent quantitative and behavioral statistics skills. I would prefer someone who is knowledgeable in a computer programming language such as C/C++, Matlab, R or similar; minimally, I require candidates to be willing to learn some of these computing tools.
The successful applicant will receive a fully paid assistantship (with a tuition waiver) that is renewable every year for 4 years. The deadline for applications is early January 2013.
We will be celebrating seventeen years of AVA Christmas Meetings with
a meeting in London on Tuesday 18th December.
This is a one-day meeting to be held in the Wilkins Building of
University College London.
This year’s keynote talks will be:
Johan Wagemans (Experimental Psychology, KU Leuven) on
“The encoding of parts and wholes in the visual cortical hierarchy”
David Perrett (School of Psychology, University of St Andrews) on
“The role of shape and colour in face perception”
Peter Neri, 2012 Marr Medal winner (Institute of Medical Sciences,
University of Aberdeen) on
“Deep structure of human sensory processing”
Abstracts (max length: 250 words) should be submitted by November 6th.
The first step of the submission process is at the bottom of the “call for abstracts” page.
Abstracts will be peer-reviewed & published in the journal Perception
(so long as presenting authors attend the meeting) and should cover
previously unreported research on any aspect of vision in humans,
animals and machines. Abstracts must be in the standard format for
ECVP/Perception; examples can be seen at:
http://www.perceptionweb.com/P.html
References should be given in full in the body of the abstract, but
without the title, e.g. (Rayner et al, 2001, Vis Res, 41, 943-954).
Unless otherwise stated (at the end of the abstract), it will be
assumed that the first author will be the presenter.
Please submit your abstract online:
Speakers should use their own laptop or bring a PowerPoint
presentation on a memory stick.
The organizers will try to accommodate preferences for a talk or
poster but the number of submissions that this meeting now attracts
means that this is not always possible.
We look forward to seeing you on the 18th of December.
[visionlist] SOFTWARE DEVELOPER at the INSTRUCT Image Processing Center in Microscopy, CNB-CSIC, Madrid
Posted: September 28, 2012
Start Date: 26/09/2012 – End Date: 31/10/2012
Description: We are looking for a technically oriented candidate
(engineer, physicist, mathematician, computer scientist…) to work on
the development of a Java-based platform for image processing.
Background: INSTRUCT is the European Strategic Initiative in the area
of Structural Biology. It is organized around 5 Core Centers and a
similar number of Associated Centers. During 2009 the Biocomputing
Unit (BCU) of the CNB-CSIC won the international competition to become
the “INSTRUCT Center for Image Processing in Biology” (I2PC), starting
its operation during 2010 and providing the context for this opening.
The BCU is well known in the area of three-dimensional electron
microscopy, with over 150 publications in the area, including
public-domain suites of programs and very popular image processing
methods. The I2PC is focused on the development of a large range of
services for the structural-biology community, such as
3DEMbenchmark, Scipion or Pepper.
For further information, see the Biocomp, XMIPP and INSTRUCT Image
Processing Center websites.
Requirements:
BS in Computer Science, Engineering, Physics, Mathematics or a related field
Sound knowledge of Java and J2EE
Java Swing knowledge.
Experience in Python, Shell scripting or similar languages.
Experience with GNU/Linux
Experience with XML-based technologies, including XML Schema (XSD)
Experience working with web services (REST and SOAP)
Medium level of proficiency in written and spoken English
The following skills are considered a “plus”, but are not essential:
Experience in JPA, Hibernate or Ibatis
Ontologies and the Semantic Web
Technical experience with workflow engines (BPMN, BPEL, etc.)
General background on electron microscopy
Interested candidates should send their CVs and a letter of interest
to email@example.com before 31 October 2012.
http://www.dtic.upf.edu/~mbertalmio/ERC/recruitvision.html
http://ec.europa.eu/euraxess/index.cfm/jobs/jobDetails/33798131

The Information and Communication Technology department at Universitat
Pompeu Fabra in Barcelona, Spain, invites applications for a research
position in visual perception. This position is associated with the
ERC Starting Grant “Image processing for enhanced cinematography”, led
by Marcelo Bertalmío, http://www.dtic.upf.edu/~mbertalmio/, which is
described below.

Candidates should hold a Ph.D. in vision neuroscience, psychology of
vision or a related discipline, with at least 3 years of post-doctoral
experience and a solid background in visual perception and perceptual
modelling, and be proficient in spoken and written English.

The position is funded by the European Research Council and the
appointment is for up to 5 years. Salary will be commensurate with
experience, in the bracket of 40,000 to 55,000 EUR/year.

Contact: please send an application with CV, publication list, contact
information for three references, and a summary of research
accomplishments to: marcelo DOT bertalmio AT upf DOT edu. The position
will remain open until filled.

Universitat Pompeu Fabra (UPF, http://www.upf.edu/en/) is a public
university located in Barcelona. It is the best Spanish university
according to the 2011 Times Higher Education index, and the number one
Spanish university in number of ERC grants. The Information and
Communication Technology department (ICT, http://www.upf.edu/dtic/en/)
is the best in Computer Science in Spain, according to the 2009
Shanghai index.

Description of the project
The objective of this ERC Starting Grant project is to develop image
processing algorithms for cinema that allow people watching a movie on
a screen to see the same details and colors as people at the shooting
location can. Camera and display limitations are the reason the
shooting location and the images on the screen are perceived very
differently. We want to be able to use common cameras and displays (as
opposed to highly expensive hardware systems) and work solely on
processing the video so that our perception of the scene and of the
images on the screen match, without having to add artificial lights
when shooting (other than for artistic purposes) or manually correct
the colors to adapt to a particular display device. In terms of
sensing capabilities cameras are in many regards better than human
photoreceptors, but human vision performs better processing, carried
out in the retina and visual cortex. Therefore, rather than working on
the hardware, improving lenses and sensors, we will instead use,
whenever possible, existing knowledge in visual neuroscience and
models of visual perception to develop software methods mimicking
neural processes in the human visual system, and apply these methods
to images captured with a regular camera. We will also use variational
methods coupled with perceptual metrics to optimize the final outputs.
From a technological standpoint, reaching our goal will be a
remarkable achievement that will impact not only how movies are made
(in less time, with less equipment, with smaller crews, with more
artistic freedom) but also which movies are made (since
good-visual-quality productions will become more affordable). We also
anticipate a considerable technological impact in the realm of
consumer video. From a scientific standpoint, this will imply finding
solutions to several challenging open problems in image processing and
computer vision, but it also has strong potential to bring
methodological advances to other domains such as experimental
psychology and visual neuroscience.
http://www.dtic.upf.edu/~mbertalmio/ERC/overview.html
Social Interactions Debate:
The Meaning of Mirror Neurons
Date: 25th October from 12:00 – 17:30
Venue: Senate Room, Gilbert Scott Building, University of Glasgow
Registration link: www.ccni.gla.ac.uk/registration
(Register your interest as this event is free and spaces are limited)
Ever since the discovery of mirror neurons in monkeys, which were found to activate during action
control as well as during action observation, it has been debated if and how their existence and
function translate to humans. Large parts of research and discussion have been devoted to the
identification and measurement of actual mirror neurons, a mirror system, or a mirror mechanism
in humans. While these discussions are of course important, here we aim to discuss the wider
implications of such a capacity, assuming that it exists in some form or another. We therefore
dedicate this debate to “The Meaning of Mirror Neurons” and aim to discuss their
potential role in development, evolution, mental simulation, social interaction and theory of mind.
Klaus Kessler and Simon Garrod
Institute of Neuroscience and Psychology, University of Glasgow
The University of Glasgow, charity number SC004401
IROS 2012 Workshop on Cognitive Assistive Systems (CAS2012): Closing the Action-Perception Loop
Sunday October 7th 2012
Vilamoura, Algarve, Portugal
We gladly invite you to attend the CAS2012 workshop…
It is becoming increasingly clear that future robotic systems will need to exhibit sophisticated assistive capabilities, highly tuned and responsive to the needs of human users. Whether on autonomous platforms or within personal computing systems, awareness of human intentions and requirements will be an essential attribute of any robotic system aiming to be genuinely useful. In essence, they will need to be capable of empathising with human behaviour if they are to be truly assistive in the fullest sense of the word.
Realising such cognitive assistive systems (CAS) will require advances along the complete processing pipeline, from sensing through to learning and interaction. For instance, sensing will need to be proactive, anticipating user actions and environment changes to optimise data capture; whilst learning will need to exploit knowledge gained from observation of past actions and behaviours to predict likely human responses and reactions under different scenarios. During task-performance, assistive systems need to predict the perceptual changes that result as a consequence of human actions. These are challenging tasks which are likely to require step changes in current state of the art capability if they are to be addressed.
The aim of this workshop is to bring together researchers from relevant disciplines to exchange ideas and results on these and related tasks, as well as on the form of existing and future cognitive assistive systems. This will include those working in sensing, such as speech and vision, machine learning and AI, human computer interaction, biomechanics, and on systems and applications, including autonomous platforms, sensor networks and wearable computing, for example. One area in which CA systems are likely to have significant impact is in industrial manufacturing and training, and applications in this area will be of particular interest for this workshop.
08:40 Invited Speaker: Prof. Danica Kragic on Extracting and representing relevant information from high-dimensional data
09:15 Invited Speaker: Dr. Jeremy Wyatt on Active sensing and prediction in cognitive robots
09:50 Active Perception of Objects for Robot Grasping. Joao Bimbo, Xiaojing Song, Hongbin Liu, Lakmal Seneviratne and Kaspar Althoefer, King’s College London
10:10 A Preliminary Account of 3D Visual Search. Fiora Pirri, Matia Pizzoli, and Arnab Sinha, Sapienza Università di Roma
10:30 coffee break
11:00 Invited Speaker: Prof. Yiannis Aloimonos on Cognitive Robots with a minimalist action grammar: Theory and Applications
11:35 Decoupling behavior, control and perception in affordance-based manipulation. Tucker Hermans, Jim Rehg and Aaron Bobick, Georgia Tech
11:55 Case Study: COGNITO – Cognitive Workflow Capturing and Rendering with On-Body Sensor Networks. Gabriele Bleser and Ardhendu Behera
12:25 lunch break
14:00 Invited Speaker: Prof. Tamim Asfour on Combining Active Vision and Active Touch for Grasping Unknown Objects
14:35 Invited Speaker: Dr. Claude Androit on Immersive Virtual Manufacturing and Training with Haptic Feedback and Virtual Manikins
15:10 Symbiotic-Autonomous Service Robots for User-Requested Tasks in a Multi-Floor Building. Manuela Veloso, Joydeep Biswas, Brian Coltin, Stephanie Rosenthal, Susana Brandao, Tekin Mericli and Rodrigo Ventura, Carnegie Mellon University
15:30 Multi-scale cortical keypoints for realtime hand tracking and gesture recognition. Miguel Farrajota, Mário Saleiro, Kasim Terzic, João Rodrigues and Hans Du Buf, University of the Algarve
15:50 coffee break
16:30 Case Study: Closing The Action-Perception Loop at KTH. Lazaros Nalpantidis, Geert Kootstra and Renaud Detry
16:50 Poster Session
Dima Damen, University of Bristol, UK
Gabriele Bleser, DFKI, Germany
Lazaros Nalpantidis, KTH, Sweden
Gert Kootstra, KTH, Sweden
Renaud Detry, KTH, Sweden
Ardhendu Behera, University of Leeds, UK
Luis Almeida, Centre for Computer Graphics, Portugal
Applications are invited for a Postdoctoral Research Associate in
Scientific Computing/Numerical Modelling to work with Dr T. Betcke
(UCL Mathematics) and Prof S. Arridge (UCL Centre for Medical Image
Computing) on fast PDE solvers and inverse problems for image
reconstruction with applications to medical imaging. This is part of a
large interdisciplinary group researching methods for Electrical
Impedance Tomography, Ultrasound and Optical Tomography.
The Research Associate will contribute to the development of forward
and inverse solvers and their numerical implementation for large-scale
problems. A strong background in mathematics, scientific computing or
related areas is required. In particular, candidates should have
experience with finite and/or boundary element methods and inverse
problems. Software development experience in C++ is essential.
This post is available from 1 November 2012 or as soon as possible
thereafter, and is funded by the MRC from 1 November 2012 to 6 October
2014 in the first instance.
For further information and an online application form see
Informal enquiries may be addressed to Dr Timo Betcke, tel: +44 (0)20
3108 4068, email: firstname.lastname@example.org.
The closing date for the post is 3 October 2012.