[visionlist] Special issue: Sight restoration: prosthetics, optogenetics and gene therapy

[visionlist] Seeking Part-Time Research Assistant in NYC


working with Edward A. Vessel, PhD

TOPIC: Sensory stimulation augmentation

We are seeking a research assistant to help with literature searches, writing summaries, project management, and other general research support. The project seeks to characterize the effects of inadequate sensory stimulation in isolated environments and to review the potential for existing technologies to ameliorate such effects through sensory augmentation. This position is an invaluable opportunity to work with a leading neuroaesthetics lab on a cutting-edge, NASA-funded project.


RESPONSIBILITIES:

– performing literature searches

– reading literature and writing summaries

– data entry

– analyzing qualitative (interview) data

– creating summary charts

– gathering references

– formatting documents

– assisting in writing reports

– managing paperwork and emails

– managing employment forms, timesheets, and tax documents


QUALIFICATIONS:

– Bachelor's degree or higher (MA/PhD) in Psychology, Cognitive Science, Neuroscience, or a related field; background in perception and sensation preferred

– Excellent research skills and familiarity with a range of journal databases (PubMed, ISI, etc.)

– Excellent scientific writing skills

– Self-motivated and able to work independently


This is not a salaried position with regular hours. The research assistant will be paid $30/hour as an independent contractor for completing specific projects. The position starts immediately (Jan 1, 2014), is based in the New York City area, and will involve between 2 and 10 hours per week until July 2014. The total workload is estimated at approximately 80 hours. No additional benefits are included.


Interested applicants should submit a CV/résumé and cover letter describing relevant experience to edvessel@gmail.com as soon as possible, but no later than January 6, 2014.

Re: [visionlist] ABSTRACT DEADLINE NOW 1/2/2014 (11:59 pm EST): Conference & Bard Ermentrout’s 60th Birthday: Nonlinear Dynamics and Stochastic Methods

Nonlinear dynamics and stochastic methods: from neuroscience to other biological applications

March 10-12, 2014

Pittsburgh, PA


The conference on nonlinear dynamics and stochastic methods will bring together a mix of senior and junior scientists to report on theoretical methods that have proved successful in mathematical neuroscience, and to encourage their dissemination and application to modeling in computational medicine and other biological fields. The conference will coincide with a celebration of G. Bard Ermentrout’s sixtieth birthday. The invited speakers will present on mathematical topics such as dynamical systems, multi-scale modeling, phase resetting curves, pattern formation, and statistical methods. These mathematical tools will be demonstrated in the context of the following main topics: i) rhythms in biological systems; ii) the geometry of systems with multiple time scales; iii) pattern formation in biological systems; iv) stochastic models: statistical methods and mean-field approximations.

The conference runs from March 10-12, 2014 at the University of Pittsburgh, Pittsburgh, PA. Travel support may become available for young investigators. Currently, the conference is partially funded by the Mathematical Biosciences Institute and the University of Pittsburgh.




Important Dates: December 22, 2013: Deadline for travel award application and abstract submission.

SPONSORS:
Department of Mathematics, University of Pittsburgh
Mathematical Biosciences Institute
National Science Foundation (pending)

CONTACT: rodica-curtu@uiowa.edu or areynolds2@vcu.edu

Confirmed Speakers:

Paul Bressloff (University of Utah)
Carson Chow (National Institutes of Health)
Sharon Crook (Arizona State University)
Jack Cowan (University of Chicago)
Jonathan Drover (Cornell Medical College, NYC)
Leah Edelstein-Keshet (University of British Columbia, Vancouver, Canada)
Roberto Fernandez Galan (Case Western Reserve University)
Pranay Goel (Indian Institute of Science Education and Research, Pune, India)
Boris Gutkin (Ecole Normale Superieure/ENS, Paris, France)
Zachary Kilpatrick (University of Houston)
Nancy Kopell (Boston University)
Cheng Ly (Virginia Commonwealth University)
Remus Osan (Georgia State University)
George Oster (University of California, Berkeley)
John Rinzel (New York University)
Jonathan Rubin (University of Pittsburgh)
Daniel Simons (University of Pittsburgh)
David Terman (Ohio State University)

If you have questions, please contact one of the organizers: Angela Reynolds (areynolds2@vcu.edu) or Rodica Curtu (rodica-curtu@uiowa.edu).

Virginia Commonwealth University

Department of Mathematics and Applied Mathematics

Assistant Professor



Grace Harris Hall 4176

[visionlist] CFP – Emerging Spatial Competences: From Machine Perception to Sensorimotor Intelligence

[Imageworld] Computer Vision and Image Understanding Volume 118, Pages 1-224 (January 2014)


– Automatic extraction of relevant video shots of specific actions
exploiting Web data, Pages 2-15, Do Hang Nga, Keiji Yanai
– ObjectPatchNet: Towards scalable and semantic image annotation and
retrieval, Pages 16-29, Shiliang Zhang, Qi Tian, Gang Hua, Qingming
Huang, Wen Gao
– Social-oriented visual image search, Pages 30-39, Shaowei Liu, Peng
Cui, Huanbo Luan, Wenwu Zhu, Shiqiang Yang, Qi Tian
– Tag-Saliency: Combining bottom-up and top-down information for saliency
detection, Pages 40-49, Guokang Zhu, Qi Wang, Yuan Yuan
– Multiview Hessian discriminative sparse coding for image annotation,
Pages 50-60, Weifeng Liu, Dacheng Tao, Jun Cheng, Yuanyan Tang
– Combining histogram-wise and pixel-wise matchings for kernel tracking
through constrained optimization, Pages 61-70, Hong Seok Choi, In Su
Kim, Jin Young Choi
– Spatio-temporal weighting in local patches for direct estimation of
camera motion in video stabilization, Pages 71-83, Soo Wan Kim, Shimin
Yin, Kimin Yun, Jin Young Choi
– Bayesian perspective for the registration of multiple 3D views, Pages
84-96, X. Mateo, X. Orriols, X. Binefa
– Efficient keyframe-based real-time camera tracking, Pages 97-110,
Zilong Dong, Guofeng Zhang, Jiaya Jia, Hujun Bao
– Detecting, segmenting and tracking unknown objects using multi-label
MRF inference, Pages 111-127, Mårten Björkman, Niklas Bergström, Danica Kragic
– Object tracking using learned feature manifolds, Pages 128-139, Yanwen
Guo, Ye Chen, Feng Tang, Ang Li, Weitao Luo, Mingming Liu
– Coarse-to-fine skeleton extraction for high resolution 3D meshes,
Pages 140-152, Luca Rossi, Andrea Torsello
– Face recognition for web-scale datasets, Pages 153-170, Enrique G.
Ortiz, Brian C. Becker
– A visualization framework for team sports captured using multiple
static cameras, Pages 171-183, Raffay Hamid, Ramkrishan Kumar, Jessica
Hodgins, Irfan Essa
– Adaptive estimation of visual smoke detection parameters based on
spatial data and fire risk index, Pages 184-196, Marin Bugarić, Toni
Jakovčević, Darko Stipaničev
– ChESS – Quick and robust detection of chess-board features, Pages
197-210, Stuart Bennett, Joan Lasenby
– Joint view-identity manifold for infrared target tracking and
recognition, Pages 211-224, Jiulu Gong, Guoliang Fan, Liangjiang Yu,
Joseph P. Havlicek, Derong Chen, Ningjun Fan

[Imageworld] 2 Research Grants in Computer Vision for Medical Endoscopy at ISR-Coimbra, Portugal

The Institute of Systems and Robotics at the University of Coimbra in Portugal offers two research grants to work on the project “New Technologies to Support Health and Quality of Life, Project A-Surgery and Diagnosis Assisted by Computer Using Images”, funded by the program QREN-MaisCentro under the framework of Call nr. Centro-SCT-2011-01.

The selected candidates will join an R&D team whose goal is to apply computer vision techniques to endoscopic images and videos, with the aim of developing systems that assist physicians during surgical procedures. It is a multi-disciplinary team with strong experience in the topic and the capacity to transfer results to industry.

GRANT1: The R&D activities will focus on improving visualisation in medical endoscopy, either by using software to improve image quality in monocular video or by generating 3D video based on stereoscopy. The work plan for the first 6 months comprises the following steps:
– Review of literature concerning the correction of “vignetting”, white balance, and use of “deconvolution” for deblurring and denoising;
– Development and implementation of algorithms to improve the quality of monocular endoscopic video;
– Review of literature on 3D video: stereoscopic perception, proper creation of disparities, stereoscopic displays, 3D video protocols;
– Implementation of a test platform able to create a 3D video signal using two independent cameras.
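For the vignetting-correction step above, the simplest common baseline is flat-field division. The following sketch illustrates that idea only; it is not the project's method, and assumes a calibration image of a uniform white target is available (the function name is hypothetical):

```python
import numpy as np

def flat_field_correct(img, flat, eps=1e-6):
    """Correct vignetting by dividing out a normalized flat-field gain.

    img:  observed float image, modeled as scene * vignetting falloff
    flat: image of a uniform white target, which captures the falloff
    """
    gain = flat / max(float(flat.mean()), eps)   # normalize so mean gain is 1
    return np.clip(img / np.maximum(gain, eps), 0.0, 1.0)
```

In practice the flat-field image would be smoothed or fitted with a radial polynomial before division, since raw calibration frames are noisy.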

GRANT2: In the particular case of this grant, the R&D activities focus on the development of navigation systems based on augmented reality. The endoscopic camera will be instrumented with optical and/or electromagnetic markers that allow its position to be known at all times. This information will be used to overlay relevant information on the video for the surgeon, such as the position of anatomical points or cuts in depth acquired with another sensory modality. The work plan for the first 6 months comprises the following steps:
– Development of an algorithm to solve the “hand-eye” calibration problem, i.e., estimating the rigid transformation between the optical markers and the camera reference frame. This problem has already been studied by the team in the past; however, the goal is now to explore a new approach.
– Development of an algorithm to solve the “hand-eye” calibration problem in the case of ultrasound.
– Implementation of the outcomes in a real demonstration system.
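The hand-eye problem mentioned in the steps above is classically posed as solving AX = XB over pairs of relative motions (A: camera motions, B: marker/tracker motions). Purely for illustration — this is a generic Tsai-Lenz-style baseline, not the new approach the grant targets — a minimal solver can be sketched as:

```python
import numpy as np

def rot_log(R):
    """Axis-angle (rotation) vector of a 3x3 rotation matrix."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-9:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta * w / (2.0 * np.sin(theta))

def solve_hand_eye(As, Bs):
    """Solve A_i X = X B_i for the 4x4 rigid transform X.

    Rotation: the axes satisfy alpha_i = R_X beta_i, so R_X is found by
    Kabsch alignment (SVD). Translation: (R_Ai - I) t_X = R_X t_Bi - t_Ai,
    stacked over all motion pairs and solved by least squares.
    """
    alphas = [rot_log(A[:3, :3]) for A in As]
    betas = [rot_log(B[:3, :3]) for B in Bs]
    H = sum(np.outer(b, a) for a, b in zip(alphas, betas))
    U, _, Vt = np.linalg.svd(H)
    Rx = Vt.T @ U.T
    if np.linalg.det(Rx) < 0:                 # guard against a reflection
        Vt = Vt.copy(); Vt[-1] *= -1
        Rx = Vt.T @ U.T
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.concatenate([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t, *_ = np.linalg.lstsq(C, d, rcond=None)
    X = np.eye(4); X[:3, :3] = Rx; X[:3, 3] = t
    return X
```

At least two motion pairs with non-parallel rotation axes are needed for a unique solution; real implementations add outlier rejection and joint refinement.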

The application period ends on 10 January 2014. Interested candidates must send a motivation letter and detailed CV by e-mail to Joao P Barreto (jpbar(at)isr.uc.pt). They must also state which grant they are applying for (GRANT1 or GRANT2) and, if applying to both grants, give their order of preference.

For further details, please consult the English versions at the links below:
GRANT1: http://www.eracareers.pt/opportunities/index.aspx?task=showAnuncioOportunities&jobId=41478&idc=1
GRANT2: http://www.eracareers.pt/opportunities/index.aspx?task=showAnuncioOportunities&jobId=41479&idc=1
or send an e-mail to jpbar(at)isr.uc.pt.

Joao P. Barreto
Tenured Assistant Professor
Dept. de Engenharia Electrotécnica e Computadores
Pólo II – Universidade de Coimbra
3030 Coimbra, Portugal

e-mail: jpbar@deec.uc.pt
phone: +351 938469263
web: http://www.deec.uc.pt/~jpbar

[Imageworld] re: Post-doc/Research assistant in Computer Vision/Machine Learning


Qatar Computing Research Institute (QCRI) is seeking candidates for Postdoctoral Researcher and Research Assistant positions in the area of Computer Vision and Machine Learning. The successful candidate will contribute to the Sports Analytics project in the Distributed Systems Group. This will include developing models, algorithms, and software implementations to discover new information related to on-field performance. Soccer is the current focus of research, with the possibility of extending to other sports in the near future. Research efforts are to target top-tier publications and patent applications.


Postdoctoral Researcher

– PhD in Electrical Engineering or Computer Science

– Strong publication record

– Knowledge of Computer Vision and Digital Video Processing

– Strong programming skills (e.g., C++, MATLAB, OpenCV)

– Knowledge of Machine Learning and/or Pattern Recognition is a plus

– Experience in parallel programming is a plus

Research Assistant

– BSc or MSc in Electrical Engineering or Computer Science from a top-tier institution

– Strong academic record; publications are a plus

– Knowledge of Computer Vision and Digital Video Processing

– Strong programming skills (e.g., C++, MATLAB, OpenCV)

– Knowledge of Machine Learning and/or Pattern Recognition is a plus

– Experience in parallel programming is a plus



A highly competitive compensation package is offered, including an attractive tax-free salary and additional benefits such as furnished accommodation, excellent medical insurance, local transportation, and more.

Research at QCRI:

QCRI is a national research institute conducting world-class
applied computing research. QCRI offers a collaborative, multidisciplinary team
environment, endowed with a comprehensive support infrastructure. QCRI is a proud
member of Qatar Foundation for Education, Science and Community Development.

To apply, please send CV and names of two referees to melgharib@qf.org.qa

[Imageworld] Overlapping Cervical Cytology Image Segmentation Challenge – ISBI 2014

The automated detection and segmentation of overlapping cells in microscopic images obtained from a Pap smear is one of the major hurdles for robust automatic analysis of cervical cells. The Pap smear is a screening test used to detect pre-cancerous and cancerous processes, in which a sample of cells collected from the cervix is smeared onto a glass slide and examined under a microscope. The main factors affecting the sensitivity of the Pap smear test are the number of cells sampled, the overlap among these cells, the poor contrast of the cell cytoplasm, and the presence of mucus, blood, and inflammatory cells. Automated slide analysis techniques attempt to improve both sensitivity and specificity by automatically detecting, segmenting, and classifying the cells present on a slide.

In this challenge, the target is to extract the boundaries of individual cytoplasms and nuclei from overlapping cervical cytology images.

The First Segmentation of Overlapping Cervical Cells from Extended Depth of Field Cytology Image Challenge is held under the auspices of the IEEE International Symposium on Biomedical Imaging (ISBI 2014), held in Beijing, China on April 28th – May 2nd, 2014.

For more details, please visit http://cs.adelaide.edu.au/~carneiro/isbi14_challenge/index.html.

Important Dates:
January 15th, 2014: Release of 8 EDF real cytology images and of training and partial testing synthetic images.
February 15th, 2014: Release of the remaining testing synthetic images and 8 EDF real cervical cytology images.
February 15th to March 30th, 2014: Period for submission of quantitative & qualitative results and method abstract (2 pages maximum, ISBI format).
April 15th, 2014: Release of results (ranking of participating teams, notification of acceptance/presentation type).
April 28th, 2014: Presentation at the challenge workshop (ISBI 2014 conference).
July 2014: Joint submission (with the authors of all submissions) of a paper with all methodologies to a top-tier medical imaging journal.

Organizers:
Andrew Bradley, University of Queensland, Australia
Gustavo Carneiro, University of Adelaide, Australia
Zhi Lu, City University of Hong Kong, China

[Imageworld] CFP: CVIU Special Issue on Generative Models in Computer Vision



Generative models have been applied to a broad array of complex low- and high-level computer vision tasks, demonstrating the versatile and principled nature of the probabilistic, Bayesian approach to vision. Despite their intuitive appeal, generative models pose computational challenges during parameter learning due to the presence of hidden variables, which are commonly aggravated by the combinatorial problems encountered during inference.

Results developed over the last few years have shown that it is possible to exploit the representational power of generative models while at the same time taming their computational complexity. The learning task has been shown to profit from recent advances in optimization such as Accelerated Gradient/Momentum methods, Proximal Operators, and more broadly Convex Optimization, while the inference task can be accelerated using combinatorial optimization techniques such as Branch-and-Bound, learning-based methods such as Regression/Voting/Cascades, or rapid sampling-based techniques such as Perturb-and-MAP and Swendsen-Wang cuts. On the representation side, the established dictionary-based models used in Sparse Coding have been extended and enhanced with Hierarchical, Grammatical representations for images and objects, and Structured models for Holistic Scene Understanding.


For this special issue, authors are invited to submit original research papers and high-quality overview and survey articles on topics including, but not limited to:

– Methodology

* Generative models

* Dictionary Learning

* Manifold Learning

* Grammars and Hierarchical Models

* Efficient algorithms for inference and unsupervised learning

* Theoretical characterizations of good representations and models

– Applications

* Object Detection / Object Segmentation

* Object / Face Recognition and Verification

* Image reconstruction / Image denoising

* Shape analysis / Registration / Statistical modeling

Full papers can be submitted via the online submission system for CVIU (http://ees.elsevier.com/cviu/). To encourage reproducible research, preference will be given to submissions accompanied by software that generates the results claimed in the manuscript.

Further details and updates can be found here:



– Submission Deadline: April 15, 2014

– First Round Decisions: July 15, 2014

– Revisions Deadline: November 15, 2014

– Final Round Decisions: January 15, 2015

– Online Publication: 2015

*Special Issue Guest Editors*

Adrian Barbu

Associate Professor, Florida State University



Stephen Gould

Senior Lecturer, College of Engineering and Computer Science, Australian National University



Iasonas Kokkinos

Assistant Professor, Center for Visual Computing, Ecole Centrale Paris



Ying Nian Wu

Professor, UCLA



Alan Yuille

Professor, UCLA



[Imageworld] CfP: Special Issue on Activity Monitoring Using Multimodal Data in Journal of Electrical and Computer Engineering

Dear Imageworld members,

Here is a Call for Papers for:


Journal of Electrical and Computer Engineering

Special Issue: Activity Monitoring Using Multimodal Data


With the availability of affordable smart sensors, multimodal sensor data is expected to play an increasingly significant and central role in healthcare by leveraging sophisticated data analysis technologies geared especially towards behavioral disease monitoring and diagnosis. Sensors exist for a wide range of modalities and environments, from wearable physiological devices to pressure and contact sensors and ambient, audiovisual, or other environmental sensors. Accurate and effective remote monitoring to facilitate independent living of the elderly is becoming increasingly available as smart home technology matures. As such, the semantic interpretation of multimodal data, as well as issues revolving around content-based access to such data, is fast becoming a relevant research area for smart homes, remote monitoring, and elderly care technologies.

Multimodal data analysis, information retrieval, and indexing are central to this research agenda, in that the semantic processing of the large-scale and heterogeneous data produced by smart sensors is fundamental to providing diagnostic assistance and feedback to people with medical conditions or in need of monitoring. Research challenges include the processing of dynamic, real-time, heterogeneous data, which must take into account the spatiotemporal context of each sensor and the individual characteristics of the person being monitored. Fusion and higher-level interpretation techniques are needed in order to provide concrete and useful information to clinicians through appropriate user interfaces.

Potential topics include, but are not limited to:

– Multimedia indexing and retrieval methods for healthcare monitoring

– Recognition of activities of daily living

– Detection of abnormal behaviors/event detection for health monitoring and remote care

– Video & voice-based analytics for monitoring and assessment

– Semantic complex event processing, reasoning, knowledge structures of high–level concepts, and data fusion

– Personalized data analysis, event detection, and profile building from multimodal data

– Wearable and pervasive computing

– Sensor networks, sensor correlation, and fusion

– Context modeling and contextual reasoning in ambient intelligence

Before submission authors should carefully read over the journal’s Author Guidelines, which are located at http://www.hindawi.com/journals/jece/guidelines/. Prospective authors should submit an electronic copy of their complete manuscript through the journal Manuscript Tracking System at http://mts.hindawi.com/submit/journals/jece/signal.processing/mum/ according to the following timetable:

Manuscript Due: Friday, 6 June 2014

First Round of Reviews: Friday, 29 August 2014

Publication Date: Friday, 24 October 2014

Lead Guest Editor

Aytul Ercil, Faculty of Engineering and Natural Sciences, Sabancı University, Orhanlı, Istanbul, Turkey

Guest Editors

Eamonn Newman, Insight Centre for Data Analytics, Dublin City University, Dublin 9, Ireland

Ceyhun Burak Akgul, Vistek ISRA Vision A.S., ITU Ayazaga Kampusu, Teknokent Arı 1 Binası, No. 24/3, Maslak, Istanbul, Turkey

Yiannis Kompatsiaris, Information Technologies Institute, Centre for Research and Technology Hellas, Thermi, Thessaloniki, Greece

Ceyhun Burak Akgul, PhD
R&D Manager | Vistek-ISRA Vision
cbakgul@vistek-isravision.com
www.vistek-isravision.com | http://www.cba-research.com

Ceyhun Burak Akgul, PhD
Part-time Faculty | Bogazici University, Dept. of Electrical-Electronics Eng.
www.cba-research.com