[visionlist] Probabilities and Optimal Inference to understand the Brain

Dear colleagues,

You are invited to participate in a 2-day workshop on “Probabilities and Optimal Inference to understand the Brain”, April 5-6, 2018, at the Institute of Neurosciences Timone in Marseille, in the south of France.

The workshop will bring together experimentalists and theoreticians working in different fields of neuroscience (perception, motor control, learning and decision making, and cognitive disorders) and at different levels of investigation (neuronal physiology, oscillations in cortical activity, behavior…), but all developing models and interpretations of experimental data within the Bayesian theoretical framework.

The main ambition of the workshop is not only to provide a phenomenological description of current applications of Bayesian optimal inference theory to the neurosciences, but also to open a critical analysis of the advantages and limitations of this type of approach. In particular, all speakers will be invited to address the question: “In what aspects does the Bayesian or active inference framework push forward our understanding of the brain beyond other theoretical models?”
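
For readers less familiar with the framework, here is a minimal Python sketch (our illustration, not the organizers') of the textbook case of optimal perceptual inference: a Gaussian prior combined with a noisy Gaussian measurement yields a precision-weighted Gaussian posterior.

    # Illustrative sketch only: Bayesian cue combination with a Gaussian
    # prior and likelihood (conjugate update); all numbers are hypothetical.
    prior_mean, prior_var = 0.0, 4.0   # prior belief about a stimulus attribute
    meas, meas_var = 2.0, 1.0          # noisy sensory measurement and its variance

    # Precisions (inverse variances) add; the posterior mean is the
    # precision-weighted average of the prior mean and the measurement.
    post_var = 1.0 / (1.0 / prior_var + 1.0 / meas_var)
    post_mean = post_var * (prior_mean / prior_var + meas / meas_var)

    print(f"posterior mean = {post_mean:.2f}, variance = {post_var:.2f}")
    # posterior mean = 1.60, variance = 0.80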

The list of confirmed speakers is:

– Dora Angelaki (Baylor College of Medicine)
– Rafal Bogacz (University of Oxford)
– Frédéric Crevecoeur (Université Catholique de Louvain)
– Kelly Diederen (University of Cambridge)
– Emmanuel Daucé (INS, Marseille)
– Opher Donchin (Ben-Gurion University of the Negev)
– Pascal Mamassian (ENS Paris)
– Laurent Perrinet (INT, Marseille)
– Lionel Rigoux (Max Planck Institute)
– David Thura (University of Montreal)
– Simone Vossel (Forschungszentrum Jülich & University of Cologne)

The full program is attached and also available at 

    http://ift.tt/2DHeA9i.

In addition, we are accepting contributions such as posters, as well as a limited number of short oral presentations by young researchers of the Marseille neuroscience community on topics related to the workshop.

Registration at http://ift.tt/2DfwIpB is free but mandatory.

Sincerely,

The organizing committee: Paul Apicella, Frederic Danion, Nicole Malfait, Anna Montagnini and Laurent Perrinet

E-mail: opt-infer-brain@sciencesconf.org


[visionlist] AVA travel awards for PhD and postdoctoral researchers – applications now open

Dear all,

AVA Travel Awards provide funding (up to £750) for attendance costs at any conference or meeting at which the awardee will be presenting a paper or poster. Admissible costs are transport, registration, accommodation and subsistence at a conference.

The awards are open to students registered for a PhD and to postdocs within 5 years of completing their PhD. Applicants must be AVA members and have attended at least one AVA meeting in the 18 months preceding the application deadline. Conferences must be held in the period 1st April 2018 – 1st April 2019.

Two awards will be made: one (the CRS award) made possible by a donation from our corporate member Cambridge Research Systems, and the other (the Geoffrey J Burton award) made possible by the Vision Scientists Memorial Fund. Recipients of awards are required to write a conference report for the AVA and CRS websites.

The closing date for applications is 1st April, 2018.

Application forms can be found on the AVA website:

http://ift.tt/2khaSth

best,

Isabelle

Isabelle Mareschal,
Deputy Head of Department,
Senior Lecturer in Experimental Psychology,
Queen Mary University of London
Mile End Road, London
i.mareschal@qmul.ac.uk
02078826505


[visionlist] CFP: New Trends in Image Restoration and Enhancement workshop and challenges @ CVPR 2018

Apologies for cross-posting
******************************
CALL FOR PAPERS & CALL FOR PARTICIPANTS IN 3 CHALLENGES

NTIRE: 3rd New Trends in Image Restoration and Enhancement workshop and challenges on image super-resolution, dehazing, and spectral reconstruction. In conjunction with CVPR 2018, June 18, Salt Lake City, USA.
Website: http://ift.tt/2obsrjf

CHALLENGE on IMAGE DEHAZING (started!)

A novel dataset of real hazy images, obtained in outdoor and indoor environments with ground truth, is introduced with the challenge. It is the first online image dehazing challenge.

Track 1: Indoor – the goal is to restore the visibility in images with haze generated in a controlled indoor environment.

Track 2: Outdoor – the goal is to restore the visibility in outdoor images with haze generated using a professional haze/fog generator.

For data and more details: http://ift.tt/2obsrjf
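
As background (our illustration, not part of the challenge specification), most dehazing work assumes the atmospheric scattering model I(x) = J(x) t(x) + A (1 - t(x)), where I is the hazy image, J the haze-free radiance, t the transmission map and A the global airlight. A toy Python sketch:

    import numpy as np

    # Toy illustration of the standard haze model; t and A are given here,
    # whereas real dehazing methods must estimate them from the image.
    def add_haze(J, t, A):
        """Synthesize a hazy image from clean radiance J (H x W x 3, in [0, 1])."""
        return J * t[..., None] + A * (1.0 - t[..., None])

    def dehaze(I, t, A, t_min=0.1):
        """Invert the model when t and A are known; t is clamped for stability."""
        return (I - A) / np.maximum(t[..., None], t_min) + A

    J = np.random.rand(4, 4, 3)     # toy clean image
    t = np.full((4, 4), 0.6)        # toy constant transmission map
    I = add_haze(J, t, A=0.9)
    print(np.allclose(dehaze(I, t, A=0.9), J))   # True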

CHALLENGE on SPECTRAL RECONSTRUCTION (started!)

The largest dataset to date will be introduced with the challenge. It is the first online challenge on spectral reconstruction from RGB images.

Track 1: “Clean” – recovering hyperspectral data from uncompressed 8-bit RGB images created by applying a known response function to ground-truth hyperspectral information.

Track 2: “Real World” – recovering hyperspectral data from JPEG-compressed 8-bit RGB images created by applying an unknown response function to ground-truth hyperspectral information.

For data and more details: http://ift.tt/2obsrjf
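
The “Clean” track's forward model (a known response function applied to ground-truth spectra) reduces to a per-pixel matrix product. The sketch below, with a hypothetical band count and random data, is our illustration rather than official challenge code:

    import numpy as np

    # Hypothetical forward model: project a hyperspectral cube to RGB
    # through a known spectral response function, then quantize to 8 bits.
    bands = 31                          # assumed number of spectral bands
    H = np.random.rand(8, 8, bands)     # toy hyperspectral cube (rows x cols x bands)
    R = np.random.rand(bands, 3)        # toy camera response function (bands x 3)

    rgb = H @ R                                   # (8, 8, 3) simulated RGB image
    rgb_8bit = np.clip(rgb / rgb.max() * 255, 0, 255).astype(np.uint8)
    print(rgb_8bit.shape)                         # (8, 8, 3)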

CHALLENGES DATES

● Release of train data: January 10, 2018
● Competition ends: March 01, 2018
ORGANIZERS
● Radu Timofte, ETH Zurich, Switzerland (radu.timofte [at] vision.ee.ethz.ch)
● Ming-Hsuan Yang, University of California at Merced, US (mhyang [at] ucmerced.edu)
● Jiqing Wu, ETH Zurich, Switzerland (Jiqing.wu [at] vision.ee.ethz.ch)
● Lei Zhang, The Hong Kong Polytechnic University (cslzhang [at] polyu.edu.hk)
● Luc Van Gool, KU Leuven, Belgium and ETH Zurich, Switzerland (vangool [at] vision.ee.ethz.ch)
● Cosmin Ancuti, Université catholique de Louvain (UCL), Belgium
● Codruta O. Ancuti, University Politehnica Timisoara, Romania
● Boaz Arad, Ben-Gurion University, Israel
● Ohad Ben-Shahar, Ben-Gurion University, Israel

PROGRAM COMMITTEE (to be updated)

    Cosmin Ancuti, Université catholique de Louvain (UCL), Belgium
    Michael S. Brown, York University, Canada
    Subhasis Chaudhuri, IIT Bombay, India
    Sunghyun Cho, Samsung
    Oliver Cossairt, Northwestern University, US
    Chao Dong, SenseTime
    Weisheng Dong, Xidian University, China
    Alexey Dosovitskiy, Intel Labs
    Touradj Ebrahimi, EPFL, Switzerland
    Michael Elad, Technion, Israel
    Corneliu Florea, University Politehnica of Bucharest, Romania
    Alessandro Foi, Tampere University of Technology, Finland
    Bastian Goldluecke, University of Konstanz, Germany
    Luc Van Gool, ETH Zürich and KU Leuven, Belgium
    Peter Gehler, University of Tübingen and MPI Intelligent Systems, Germany
    Hiroto Honda, DeNA Co., Japan
    Michal Irani, Weizmann Institute, Israel
    Phillip Isola, UC Berkeley, US
    Zhe Hu, Light.co
    Sing Bing Kang, Microsoft Research, US
    Vivek Kwatra, Google
    Christian Ledig, Twitter, UK
    Kyoung Mu Lee, Seoul National University, South Korea
    Seungyong Lee, POSTECH, South Korea
    Stephen Lin, Microsoft Research Asia
    Chen Change Loy, Chinese University of Hong Kong
    Vladimir Lukin, National Aerospace University, Ukraine
    Kai-Kuang Ma, Nanyang Technological University, Singapore
    Vasile Manta, Technical University of Iasi, Romania
    Yasuyuki Matsushita, Osaka University, Japan
    Peyman Milanfar, Google and UCSC, US
    Rafael Molina Soriano, University of Granada, Spain
    Yusuke Monno, Tokyo Institute of Technology, Japan
    Hajime Nagahara, Kyushu University, Japan
    Vinay P. Namboodiri, IIT Kanpur, India
    Sebastian Nowozin, Microsoft Research Cambridge, UK
    Aleksandra Pizurica, Ghent University, Belgium
    Fatih Porikli, Australian National University, NICTA, Australia
    Hayder Radha, Michigan State University, US
    Stefan Roth, TU Darmstadt, Germany
    Aline Roumy, INRIA, France
    Jordi Salvador, Amazon, US
    Yoichi Sato, University of Tokyo, Japan
    Samuel Schulter, NEC Labs America
    Nicu Sebe, University of Trento, Italy
    Boxin Shi, National Institute of Advanced Industrial Science and Technology (AIST), Japan
    Wenzhe Shi, Twitter Inc.
    Alexander Sorkine-Hornung, Disney Research
    Sabine Süsstrunk, EPFL, Switzerland
    Yu-Wing Tai, Tencent Youtu
    Hugues Talbot, Université Paris Est, France
    Robby T. Tan, Yale-NUS College, Singapore
    Masayuki Tanaka, Tokyo Institute of Technology, Japan
    Jean-Philippe Tarel, IFSTTAR, France
    Radu Timofte, ETH Zürich, Switzerland
    Ashok Veeraraghavan, Rice University, US
    Jue Wang, Megvii Research, US
    Chih-Yuan Yang, UC Merced, US
    Ming-Hsuan Yang, University of California at Merced, US
    Qingxiong Yang, Didi Chuxing, China
    Lei Zhang, The Hong Kong Polytechnic University
    Wangmeng Zuo, Harbin Institute of Technology, China

SPEAKERS (to be announced)

SPONSORS (to be updated)

NVIDIA

SenseTime

Twitter Inc

Google

Contact: radu.timofte@vision.ee.ethz.ch
Website: http://ift.tt/2obsrjf


[visionlist] Schallek Lab Opening

Postdoctoral position: Adaptive optics imaging of retinal blood flow

A postdoctoral position is available in the laboratory of Jesse Schallek at the University of Rochester (link: http://ift.tt/2muHCBP). The position will deploy advanced imaging technology, including adaptive optics, to study capillary blood flow and hemodynamic regulation in the living retina. By imaging single blood cells in the living eye, our research seeks to better understand blood flow in the smallest vessels of the body and how this flow is disrupted in human diseases such as diabetic retinopathy, glaucoma, Alzheimer’s disease and others. Research will be conducted in human patients and animal models of disease.

The postdoctoral fellow will work in the collaborative environment of the Advanced Retinal Imaging Alliance of the University of Rochester. This group includes the research efforts of David Williams, Bill Merigan, Jennifer Hunter, Mina Chung and Jesse Schallek. The postdoctoral trainee will benefit from collaborative interactions in this group toward novel ophthalmic imaging design and the study of retinal function.

Candidates will be recruited from two areas of focus: 1) those with strong backgrounds in blood flow studies of the brain or other tissues, or 2) those with exemplary technical backgrounds in brain or retina imaging technologies. We encourage applicants from outside the field of ophthalmology.

Outstanding applicants will have training in one or more of the following fields: retinal imaging technology (including OCT and adaptive optics), BOLD fMRI mechanisms, intrinsic signal optical imaging, the study of diabetic retinopathy, or the study of microvascular disease (in other tissues). Candidates with either a biology or an engineering focus will be suitable for this position.

Applicants should have a PhD, MD or equivalent training. Senior graduate students nearing completion of their degree are strongly encouraged to apply. Openings are available immediately, and the position will remain open until a qualified individual is identified. The appointment is for 1 year and may be extended up to 3 years depending on progress and review.

Interested applicants should send a CV and the names and contact information of references to jschall3@ur.rochester.edu. Please include a cover letter detailing your current research activities, your expertise, and the reasons for your interest in the position.

More information about research projects in the lab can be found at:

http://ift.tt/2muHCBP

and

http://ift.tt/1ulA3tw

email: jschall3@ur.rochester.edu

Jesse Schallek, PhD
Assistant Professor of Ophthalmology,
Neuroscience and the Center for Visual Science
Flaum Eye Institute
University of Rochester
601 Elmwood Ave, Box 319
Rochester, NY 14620

Phone: (585) 273-4848
Email: jschall3 at ur.rochester.edu
Office Location: G-4113


[visionlist] LandRate toolbox: a new eye movement analysis tool

**apologies for possible cross-posting**

Dear colleagues,

I am glad to inform you about the development of a new MATLAB toolbox, called LandRate, suitable for full post-experimental eye movement analysis. LandRate is an extension of the EyeMMV toolbox (Krassanakis et al., 2014) and supports the generation, in one simple step, of a full analysis report based on experimental data collected through eye tracking. Additionally, the toolbox supports the computation of a new aggregated index (LRI) suitable for performing landscape rating procedures. This index combines quantitative eye tracking metrics with experts’ opinions and can be easily adapted to similar fields. Please note that LandRate requires MATLAB R2017a or later.
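
The announcement does not give the LRI formula, so the following Python sketch is purely hypothetical: it only illustrates the stated idea of aggregating normalized eye tracking metrics and expert ratings into a single weighted index.

    # Purely hypothetical sketch of an aggregated rating index; the real
    # LRI definition is in the paper cited below, not reproduced here.
    def aggregated_index(scores, weights):
        """Weighted sum of component scores, each pre-normalized to [0, 1]."""
        return sum(weights[k] * scores[k] for k in scores)

    # Hypothetical components: eye tracking metrics plus an expert rating.
    scores = {"fixation_density": 0.7, "dwell_time": 0.5, "expert_rating": 0.8}
    weights = {"fixation_density": 0.4, "dwell_time": 0.3, "expert_rating": 0.3}
    print(round(aggregated_index(scores, weights), 2))   # 0.67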

You can find a full description of the LandRate toolbox in the article below:

Krassanakis V., Misthos M. L., Menegaki M. (2018). LandRate toolbox: an adaptable tool for eye movement analysis and landscape rating. In: P. Kiefer et al. (eds.), Proceedings of the 3rd International Workshop on Eye Tracking for Spatial Research, Zurich, Switzerland, pp. 40-45.

Additionally, you can find another description of the LandRate toolbox in this presentation.

The LandRate toolbox is freely distributed (under the GPLv3 license) to the scientific community through the GitHub platform at this link.

Please do not hesitate to contact me to share your feedback.

Kind regards,
Vassilios Krassanakis


[visionlist] Join us for HCP Course 2018 in Oxford, UK June 25-29!

We are pleased to announce the 2018 HCP Course: “Exploring the Human Connectome”, to be held June 25 – 29, 2018 at the Blavatnik School of Government, University of Oxford, Oxford, UK.

This 5-day intensive course will provide training in the acquisition, processing, analysis and visualization of neuroimaging and behavioral data using methods and tools developed by the WU-Minn-Oxford Human Connectome Project (HCP) consortium.

The course is designed for investigators interested in:

using HCP-style data distributed by the Connectome Coordinating Facility (CCF) from the young adult (original) HCP and forthcoming projects

acquiring and analyzing HCP-style imaging and behavioral data at your own institution

processing your own non-HCP data using HCP pipelines and methods

using Connectome Workbench tools and sharing data using the BALSA imaging database

learning HCP multimodal neuroimaging analysis methods, including those that combine MEG and MRI data

positioning yourself to capitalize on HCP-style data forthcoming from large-scale projects currently collecting data (e.g., the Lifespan HCP Development and Aging projects and the Connectomes Related to Human Disease projects)

Participants will learn how to acquire, analyze, visualize, and interpret data from four major MR modalities (structural MR, resting-state fMRI, diffusion imaging, task-evoked fMRI) plus magnetoencephalography (MEG) and extensive behavioral data. Lectures and labs will provide grounding in neurobiological as well as methodological issues involved in interpreting multimodal data, and will span the range from single-voxel/vertex to brain network analysis approaches.
The course is open to students, postdocs, faculty, and industry participants. It is aimed at both new and current users of HCP data, methods, and tools, and will cover both basic and advanced topics. Prior experience in human neuroimaging or in computational analysis of brain networks is desirable, preferably including some familiarity with FSL and FreeSurfer software.

For more info and to register, visit the HCP Course 2018 website. New this year is the opportunity to add 6 nights of bed and breakfast accommodation (Sun June 24 – Fri June 29) at nearby Worcester College to your registration at a group rate, taxes included.

If you have any questions, please contact us at: hcpcourse@humanconnectome.org

We look forward to seeing you in Oxford!

Best,
The 2018 HCP Course Organizers


[visionlist] CFP: IEEE FG18 Workshop on Large scale Emotion Recognition and Analysis (LERA)

[apologies if you received multiple copies]