[visionlist] Postdoctoral Position in Theoretical and Computational Neuroscience at Yale

[visionlist] Deadline Extension: Second International Workshop on Context Based Affect Recognition CBAR13

*****Deadline for paper submission postponed to 10th of May 2013*****

***************************************************************************************
CBAR 2013: CALL FOR PAPERS
Second International Workshop on CONTEXT BASED AFFECT RECOGNITION (CBAR 2013)
http://cbar2013.blogspot.com/
Submission Deadline: May 10, 2013
***************************************************************************************

The second international workshop on “Context Based Affect Recognition” CBAR13 (http://cbar2013.blogspot.com/) will be held in conjunction with the 2013 Affective Computing and Intelligent Interaction conference ACII2013, 2-5 September 2013, Geneva, Switzerland (http://www.acii2013.org/).

For details concerning the workshop program, paper submission guidelines, etc., please visit our workshop website at: http://cbar2013.blogspot.com/

Best regards,
Zakia Hammal

Zakia Hammal, PhD
The Robotics Institute, Carnegie Mellon University
http://www.ri.cmu.edu/

Interaction, Facial Expression Recognition, Visual Perception
http://www.pitt.edu/~emotion/ZakiaHammal.html

[visionlist] (faculty) Biostatistics Research Faculty position at University of Houston

[visionlist] Postdoctoral position in Bayesian modeling for eye-writing at LPNC, Grenoble, France

Postdoctoral position: Bayesian modeling of on-line character recognition in an eye-writing software application

Candidates are invited to apply for a 12-month postdoctoral position (to start in September or October, 2013) to study and develop a Bayesian computational model for on-line character recognition. Research will take place in the Laboratory of Psychology and NeuroCognition (LPNC; CNRS and Grenoble University) in Grenoble, France, under the supervision of Dr. Julien Diard.

The context of this project is the computational study of “eye writing” and the development of character-recognition software. We use a novel apparatus, based on a particular visual display and the reverse-phi illusory motion, that enables users to generate smooth-pursuit eye movements at will and in the direction of their choice. Coupled with eye tracking, the system allows participants to “write” cursive letters with their eyes. The application of the system to disabled, motor-impaired patients is central to the project.

Our objectives are two-fold. The first concerns adapting a previous model of cursive character recognition and production (Gilet et al., 2011) to the task of eye writing (Lorenceau, 2012). Our main goal here is to provide rapid character recognition in an on-line manner, that is to say, as the character is being traced, that is robust to the signal characteristics specific to eye writing (e.g., spurious saccades). The second objective is to expand and adapt the model, for instance toward disability assessment and toward code convergence for easier man-machine interaction (i.e., adapting the system’s vocabulary to symbols that are convenient for the patient to produce).

Applicants must have recently obtained a PhD in computational cognitive modeling, artificial intelligence, signal processing, or a closely related field. The PhD must have been obtained no more than 24 months before the starting date of the postdoc (e.g., to start the postdoc on September 1st, 2013, you must have defended your PhD after September 1st, 2011). The applicant must not have previously held any position at the LPNC. The postdoc will be supported by the EOL (Eye On-Line) project, funded by the French National Research Agency (ANR), in collaboration with Dr. Jean Lorenceau. Gross salary will be 2,500 € / month.

Required skills include software development, signal processing, and probabilistic (Bayesian) modeling. The ability to communicate scientific ideas both orally and in writing is essential, and an interest in cognitive science and experimental psychology is desirable. Publications at an excellent level will be expected during the postdoc. Support with administrative procedures for international candidates is provided by the ISSO of Grenoble University.

Please send your application to Dr. Julien Diard. Only applications received before 12:00 midday on June 14, 2013 can be considered (with a tentative decision date of June 28). You will be required to provide a covering letter, a CV including a publication list, and the names and contact information of two references, with a brief description of your relationship to each reference. Applications can be written in French or English. The working language will likewise be either French or English. Informal enquiries before submitting a full application are welcome.

Contact Person: Dr. Julien Diard
Contact Phone: (+33) 4 76 82 58 93
Closing Date: 14 June 2013
Contact Email: julien.diard@upmf-grenoble.fr

Online resources for more information:
– Julien Diard’s website: http://diard.wordpress.com
– LPNC website: http://web.upmf-grenoble.fr/LPNC/
– More about the EOL project, from the general press:

Gilet, E., Diard, J. and Bessière, P. (2011) Bayesian action–perception computational model: Interaction of production and recognition of cursive letters. PLoS ONE, 6(6):e20387. http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0020387
Lorenceau, J. (2012) Cursive writing with smooth pursuit eye movements. Current Biology, 22:1506–1509. http://www.sciencedirect.com/science/article/pii/S0960982212006719

Re: [visionlist] #Question: 3D point cloud HDR

Dear Olivier,

I could offer some professional 3D scans with color and reflectance.
Please have a look at our YouTube channel to see some data sets, e.g.,
http://youtu.be/CLwXeo2m83Y (http://www.youtube.com/AutomationAtJacobs)
A few data sets are available via the robotic 3D scan repository

Please drop me a note if you are interested.

Very best,

[visionlist] 2013 VSS Meeting Information

[visionlist] Research Assistant Position in the McDermott Lab at MIT

Dear colleagues,

I am hoping to hire an RA in the near future. I’d be grateful if you could bring it to the attention of any promising undergraduates you may know who are about to graduate.

thanks,
Josh

POSITION OPENING: Technical Assistant in the McDermott Lab, Department of Brain and Cognitive Sciences, MIT, to assist with all aspects of research on human audition.

RESPONSIBILITIES: Designing, programming, and conducting behavioral experiments; analyzing data; participant outreach and recruitment; implementation and maintenance of analysis software and computational models; technical support for lab personnel; participation in reading groups and research seminars; and some basic administrative duties, including ordering equipment, tracking supplies, etc. The assistant will be encouraged to take an active role in scientific research. The position is ideal for individuals considering future graduate study in cognitive science or neuroscience.

REQUIREMENTS: A bachelor’s degree in cognitive science, neuroscience, computer science, engineering, physics, or math; strong math, statistics, and computer skills (e.g., MATLAB, Python, shell scripting); substantial programming experience, including experience using HTML/CSS/JavaScript and experience implementing simple web sites; Macintosh and Windows troubleshooting skills and comfort in a Unix environment; good people skills; and evidence of serious interest in a career in cognitive science or neuroscience. We seek an organized, self-motivated, independent, and reliable individual who is able to multitask efficiently in a fast-paced environment. Must be able to work as part of a team. Research experience in cognitive science or neuroscience would be helpful, especially experience conducting behavioral experiments in human subjects.

To apply, please follow the instructions at this link: http://sh.webhire.com/servlet/av/jd?ai=631&ji=2681090&sn=I

[visionlist] Postdoc – Johns Hopkins – Object organization in visual cortex

I have a postdoctoral position available in my laboratory at
the Johns Hopkins University. I am looking for a collaborator in research aimed
at understanding the neural processes underlying the transition from image to
object coding in the visual cortex. My lab uses multiple microelectrode
recording in behaving macaques combined with computational modeling. The main
requirement is an interest in understanding the process of vision; the candidate
should also have a strong quantitative background and experience in single-cell
recording. The position could be for 3 years.

The Laboratory is in the Mind/Brain Institute which is a
center for systems neuroscience. The Johns Hopkins University has an
exceptional concentration of research into the neural processes underlying
perception, cognition and the control of action, including the visual, auditory
and tactile modalities. Collaborative research in the Mind/Brain Institute
includes the Departments of Neuroscience, Biomedical Engineering, and
Psychological and Brain Sciences. My lab has close collaboration with the
Computational Neuroscience Lab (Prof. Ernst Niebur) in the same Institute. For further
information about my lab see http://vlab.mb.jhu.edu.

The Mind/Brain Institute is located on the beautiful
Homewood Campus of the Johns Hopkins University. 

The Johns Hopkins University is an Equal
Opportunity/Affirmative Action Employer.

If you are interested, send me (Rudiger von der Heydt,
von.der.heydt@jhu.edu) a statement of interests, CV, and names of references. I
am attending the Vision Sciences Society 2013 meeting and will be available to
talk about the position.

[visionlist] #Question: 3D point cloud HDR

Dear All,

I have a query (note that I have used a hashtag; it is up to you to decide
whether that was helpful before you delete this message):

In my research team we have conceived a method for High Dynamic Range (HDR)
imaging display that is highly competitive with existing methods.
In addition, it can be used for ANY data (that is, not necessarily data
living on a Euclidean grid, as with images; for instance, I would like to
consider point clouds or meshes).

However, it seems very difficult to find such 3D HDR data on the
Internet. What do I mean by such “3D HDR data”? Ideally, I would like
to have a data set where, for each point, I have a set of XYZ
coordinates and a set of RGB values (an HDR vector for each point).
Please note that this has nothing to do with 3D stereo HDR imaging.

Before I try to obtain such data with tailored experiments in our labs,
I would like to know if any of you could provide me with such
available data.

I would appreciate it if anyone could answer before we enter into a long and
quite challenging process of generating such 3D data.


O. Lezoray

Re: [visionlist] Motion capture real-time system latency?

Hi Gabriel,

We once tested both Vicon and Qualisys optical motion capture systems
for how real their real time was. We did it by mounting a few markers on
the table of a record player. We would motion capture them, run them
through our rendering pipeline and project the image back onto the
record player with an LCD data projector. We aligned the image of the
markers with the markers themselves while the turntable was still. Then
we set the turntable in motion at 45 rpm and measured the lag in terms of
the angle between the markers and their image.

Qualisys was clearly the winner. I think I recall that we also used
a 60 Hz projector. Mocap was at 360 fps. My recollection is that we were
operating at around 20 ms latency. It’s still a lot for some
applications. We are now working with 120 Hz projectors, but we did not
take reliable measurements again. I am keen to hear what others have to say.
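For a rough sense of how the angular lag converts to latency at a given turntable speed, here is a small illustrative sketch (this was not part of the original measurement; the angle value is an example only):

```python
# Convert an observed angular lag on a spinning turntable into latency.
# At 45 rpm the turntable covers 45 * 360 / 60 = 270 degrees per second,
# so a ~20 ms latency corresponds to an angular lag of about 5.4 degrees.

def latency_from_angle(angle_deg: float, rpm: float) -> float:
    """Return latency in seconds for a given angular lag (degrees)."""
    degrees_per_second = rpm * 360.0 / 60.0
    return angle_deg / degrees_per_second

# Example: a 5.4-degree lag at 45 rpm corresponds to about 0.020 s (20 ms).
print(latency_from_angle(5.4, 45.0))
```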



On 29/04/2013 4:07 PM, Gabriel Diaz wrote:
> I am currently scouting for a full-body motion capture system. My goal
> is to represent the data visually in real-time in an interactive
> simulation, and I am considering purchasing options. I do not trust
> the numbers provided by the manufacturers, and wanted to ask the group
> if they have any insight into the available options.
> We tested a Phasespace system running at 240 Hz on a very fast
> workstation with high-end graphics cards and an /extremely/ minimal
> virtual environment. We found 33-50 ms of system delay (the screen
> updated to reflect movement 3 frames later @ 60 Hz).
> To test, we compared two measures of the time at which a physical
> object collided with the ground. Both measures were represented on an
> oscilloscope for comparison. To get a measure of Phasespace delay,
> the motion-tracked object would bring about a global change in screen
> luminance that was detected by a photometer, and represented on the
> oscilloscope. The object was also fitted with a 9 V battery and a
> circuit-breaking switch that would trigger an instantaneous
> step-change in voltage upon collision with the ground. Lag was equal
> to the difference between the step-change in voltage and the change in
> screen luminance.
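In software terms, the oscilloscope comparison described in the quoted message amounts to finding the first threshold crossing in each trace and differencing the crossing times. A minimal sketch under that assumption, with made-up sample data (the original measurement used an oscilloscope, not code; sample rate, thresholds, and traces here are hypothetical):

```python
# Estimate end-to-end lag as the time between a voltage step (ground
# contact) and the resulting change in screen luminance, given two
# sampled traces and their common sample rate.

def first_crossing(trace, threshold):
    """Index of the first sample exceeding threshold."""
    for i, v in enumerate(trace):
        if v > threshold:
            return i
    raise ValueError("threshold never crossed")

def lag_seconds(voltage, luminance, sample_rate_hz, v_thresh, l_thresh):
    """Lag between the voltage step and the luminance change, in seconds."""
    i_volt = first_crossing(voltage, v_thresh)
    i_lum = first_crossing(luminance, l_thresh)
    return (i_lum - i_volt) / sample_rate_hz

# Toy traces sampled at 1 kHz: the voltage steps at sample 100,
# the luminance changes at sample 140, giving a 40 ms lag.
voltage = [0.0] * 100 + [9.0] * 100
luminance = [0.0] * 140 + [1.0] * 60
print(lag_seconds(voltage, luminance, 1000, 4.5, 0.5))
```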