Re: [visionlist] Plea for a new criterion for evaluating candidates for appointment, promotion, and grants (Was: Re: [cvnet] Open access publishing charges in vision)


Niko,

I apparently wasn’t very clear, because you’ve reiterated exactly the point I was trying to make (i.e., that it’s not a debate of pre vs. post, but of how we ensure that post-publication review works well).

However, the point I disagree with you on is that signed reviews solve the bad-review problem. The reputation mechanism works, but only for people who belong to the relevant social circle and therefore care about their reputation within it.

Let me provide a more concrete example this time:

If Bob the herbalist publishes a paper claiming that his supplement increases night vision by 80%, and he convinces five of his pseudo-scientist friends to write nice reviews and sign them, it will be apparent to all of us in the vision community that something is fishy. But to outsiders, such as consumers and less-diligent venture capitalists, there will be no obvious difference between Bob’s post-reviewed paper and Niko’s post-reviewed paper.

This is a problem because when the supplement makes your eyes fall out, the trust between consumers and science is damaged. You might argue that someone will catch this paper and call it out, but that assumes there are enough people with free time to police the literature. It works in physics, but psychology is a much larger discipline with a lower bar to entry.

I completely agree that open science is a good thing, but it is not a magic wand that makes all problems disappear without careful planning and implementation. We should borrow lessons from others who have used similar open solutions at a grand scale (such as Amazon). More specifically, there needs to be some way to track reputation that is easily understood by outsiders. Simply signing one’s name is not sufficient, because false credentials are easy to manufacture.
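
To make that concrete: here is a minimal, purely hypothetical sketch of what an outsider-legible reputation score could look like. The fields, weights, and verification rule below are invented for illustration; they do not describe any existing system or a specific proposal from this thread.

# Hypothetical sketch of an outsider-legible reviewer-reputation score.
# All names, fields, and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Reviewer:
    name: str
    verified_affiliation: bool   # e.g. an institutional identity has been confirmed
    reviews_signed: int = 0      # signed reviews on record
    reviews_upheld: int = 0      # reviews later judged sound by independent readers

    def reputation(self) -> float:
        """Score in [0, 1] that a non-expert could read at a glance."""
        if self.reviews_signed == 0:
            return 0.0
        track_record = self.reviews_upheld / self.reviews_signed
        # Unverified identities are heavily discounted, so a ring of freshly
        # created accounts cannot manufacture credibility by praising each other.
        return track_record * (1.0 if self.verified_affiliation else 0.2)

def paper_trust_badge(reviewers):
    """Average reviewer reputation, shown next to a paper as a single number."""
    return 0.0 if not reviewers else sum(r.reputation() for r in reviewers) / len(reviewers)

The only point of the sketch is that the signal has to be computed from verifiable history, not from the signature alone.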

-Brad

 

On Fri, Jul 29, 2016 at 5:48 PM, Nikolaus Kriegeskorte wrote:

pre-publication peer review only makes sense in the absence of internet technology.

 

the issue is complex and much has been written about it.

but to see why post-publication peer review is necessary, consider these points:

·       science requires transparency. transparency is the antidote to corruption.

·       pre-publication peer review can, by definition, never be transparent: transparency requires that the community can read the reviews as they come in. to understand the reviews, we need to have access to the paper. thus, the paper must be publicly accessible, i.e. published, for peer review to be transparent.

·       the problem brad describes, of associates praising a bad paper, is addressed by signed open reviews. when scientists sign their reviews, they bet their reputation on their good judgement. this incentivises judgments that hold up in historical retrospect – which is exactly what we need.

 

so the question is not pre- or post-publication peer review, but how the future system of post-publication peer review should work and how we can best transition to it.

 

niko

 

 

 

From: visionlist-bounces@visionscience.com [mailto:visionlist-bounces@visionscience.com] On Behalf Of Brad Wyble
Sent: 29 July 2016 17:05
To: Hoover Chan
Cc: Robert O’Shea; cvnet; Bill Stell; Alex Holcombe; visionlist@visionscience.com
Subject: Re: [visionlist] Plea for a new criterion for evaluating candidates for appointment, promotion, and grants (Was: Re: [cvnet] Open access publishing charges in vision)

 

With regard to Hoover’s question about the ultimate in peer review, all systems have their advantages and disadvantages. The disadvantages of peer review have been well documented and are many (bad reviews, biased reviews, inscrutable editorial decisions, triage processes that place enormous power in the hands of a select few with little oversight, emphasis on flashy results, a strong incentive for HARKing).

 

However, the disadvantages of the post-publication review model must also be noted. Below I link a recent post about some unfortunate problems at F1000, in which scientists recruit their friends to do post-publication review on plagiarized articles, which are then given a gold star by said friends. The potential for abuse here is obvious: while an insider may recognize when a completely dodgy set of reviewers has been dispatched to “astroturf” an article, the process will look completely legitimate to outsiders who might be consulting the literature for important uses. The current pre-publication peer-review process, with all its flaws, at least provides some baseline level of vigilance that cannot be so easily abused.

 

The question is not pre- or post-publication review, but rather how we can improve post-publication review to avoid these problems. Perhaps we need some kind of vetting process for reviewers, so that someone from the outside has a way of telling whether the reviewers are legitimate. We might look to companies like Amazon, which have faced similar challenges and use various kinds of automated solutions.
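
As a rough sketch of the sort of automated check such a system might run, one could flag papers whose signed reviewers have mostly reviewed one another in the past. The data structures, names, and threshold below are invented for illustration; this is not how F1000 or any existing platform works.

# Hypothetical "astroturf" heuristic: how tightly do a paper's signed
# reviewers form a mutually reviewing clique?
from itertools import combinations

def reciprocal_review_fraction(paper_reviewers, review_history):
    """
    paper_reviewers: names of the signed reviewers on one paper.
    review_history: dict mapping (reviewer, author) -> count of past reviews.
    Returns the fraction of reviewer pairs who have previously reviewed each
    other's work; values near 1.0 suggest a closed clique worth inspecting.
    """
    pairs = list(combinations(paper_reviewers, 2))
    if not pairs:
        return 0.0
    reciprocal = sum(
        1 for a, b in pairs
        if review_history.get((a, b), 0) > 0 and review_history.get((b, a), 0) > 0
    )
    return reciprocal / len(pairs)

# Toy usage with made-up names:
history = {("alice", "bob"): 2, ("bob", "alice"): 3,
           ("carol", "alice"): 1, ("alice", "carol"): 1}
print(reciprocal_review_fraction(["alice", "bob", "carol"], history))  # ~0.67, worth a closer look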

 

Anyway, here’s the blog post, written by an advocate of open science.

 

http://ift.tt/2aAxLFx

 

On Fri, Jul 29, 2016 at 11:39 AM, Hoover Chan wrote:

Is there a critical mass of vision research on either of these archives?
A really interesting idea and similar to something that I’ve been thinking about for a very long time. At that time, it was discouraged because of the importance of the review process. However, in some respects, isn’t this kind of the ultimate “peer” review?
Count me in to help out if there’s an interest in expanding into this territory.

From: “Alex Holcombe”
To: “Bosco S Tjan”
Cc: “Bart Farell”, “Raphael.Maree.”, Wstell@ucalgary.ca, “Robert O’Shea”, cvnet@mail.ewind.com, visionlist@visionscience.com
Sent: Friday, July 29, 2016 3:13:32 AM
Subject: Re: [visionlist] Plea for a new criterion for evaluating candidates for appointment, promotion, and grants (Was: Re: [cvnet] Open access publishing charges in vision)

Six months after Bart wrote his message, Elsevier bought SSRN. SSRN previously served as the kind of open repository you envisioned, but for the social sciences, facilitating the rapid dissemination of research, allowing faster progress than can be achieved via journals alone.

 

As a result of the Elsevier purchase and the ominous changes (http://ift.tt/2abaLxP) Elsevier have (unsurprisingly, given their poor track record) introduced at SSRN, researchers have started a new preprint server, SocArXiv. The Elsevier decision to purchase SSRN, in combination with the recent announcement of a new non-profit, engineering-oriented, open-access preprint server (http://ift.tt/2a2C2Cd), makes it clear that change is in the air – in other fields preprints are becoming important. Will we be left behind?

 

While there still isn’t a preprint server that fits vision research perfectly, bioRxiv would likely take our work, as Niko mentioned in another thread (part of the recommendations in his earlier email is pasted below). If a few societies related to vision research (ARVO, VSS, and/or OSA?) were to endorse the use of these servers, it could shift our culture towards that of more advanced sciences, such as many areas of physics, that already use preprint archives.

 

Alex

 

(1) publish all papers on “preprint” servers like arXiv and bioRxiv as soon as we feel comfortable doing so

in my lab, we tend to do this around the time of initial submission to a journal. at that point, getting scooped is unlikely, and posting enables others to respond to, and even cite, the work early. feedback improves the final journal version. early availability hastens the paper’s integration into the literature through citations. many don’t know that all the major journals (including nature, science, nat neurosci, neuron and most others) are fine with the posting of preprints (which can benefit their IF).

 

(2) cite preprints

by citing preprints of any highly relevant research that is not yet in any journal, we give due credit, accelerate science, encourage early posting of preprints, and contribute to having the entire scientific literature openly available, even if journals still also offer the papers in their own layout for $30. (published means publicly available. a paper behind a paywall is not a proper publication.)

 

 

On Fri, Jul 29, 2016 at 10:53 AM, Bosco S Tjan wrote:

Double Like!

 

On Jan 16, 2016, at 9:53 AM, Bart Farell wrote:

 

Like

****************************

Bart Farell, Ph.D.

Institute for Sensory Research

Department of Biomedical & Chemical Engineering

Syracuse University

621 Skytop Road

Syracuse NY  13244-5290

–and–

Departments of Ophthalmology & Neuroscience and Physiology

SUNY Upstate Medical University

750 East Adams Street

Syracuse NY 13210-2375

 

e-Mail:
bfarell@syr.edu

****************************

 

 

On Sat, Jan 16, 2016 at 8:22 AM, Raphael.Maree. wrote:

Thanks Robert for this move.
I would suggest to even go further. As suggested by someone else,
a single evaluation score should be designed.
Furthermore, the score should be visible on a kind of hat with an
electronic screen that we should all wear during working hours. I read
this brillant idea in a book of James C. Scott (Yale University).
Like a taximeter, the score would be constantly updated through
satellite so that students and colleagues of a peer would know instantly
if they are listening to or talking with a valuable scientist or not.
For example, by using an electronic voting system installed in our
class rooms, students would then be able to claim for a new teacher if
his/her evaluation score is not increasing rapidly enough. This strategy
would then maximize their own future score (because the score would be
somehow recursive).
Similarly, funding agencies would not need anymore to rely on subjective
reviews based on dumb qualitative criteria (like those based on reading
project proposal and papers !) that previously lead to suboptimal
research outputs.
Editors and publishers of for-profit scientific journals (the only
valuable ones) would probably be also interested by author’s score.
Please note that the whole scoring infrastructure should probably be
designed and promoted by a private company (e.g. Thomson Reuters) as a
state subcontractor in order to help the declining private sector hence
the whole society. In that respect, the scoring algorithm might be
closed-source.
Overall, this competitive and meritocratic world would be perfect. There
would be no mistake anymore (and so no need for appeal procedure nor
personal considerations), and we would not waste time at conferences and
at coffee time by talking with researchers with a low score (= who have
useless ideas).
Anecdotally, a few researchers with old-fashioned collaborative hopes
would criticize these objective principles in favor of more diversity
and long-term thoughts. Fortunately, these utopists would then be
assimilated to parasites who do not want to work and nobody will care.
Or… Wait a second… Maybe we should read this again ?http://ift.tt/1QyA7BO
Best regards,
Raphaël.


Raphaël Marée (PhD)
http://ift.tt/2anvA4D
Tel.: +32 4 366 26 44 | Fax: +32 4 366 29 84

On Fri, 15 Jan 2016 16:50:19 -0700, the following was written:

> Hallelujah, Brother!
>
>
> > Dear Colleague,
> >
> > I have followed the discussion about open access with great interest. I
> > know that when evaluating candidates for appointment, promotion, and
> > grants, reviewers use various criteria, such as number of published papers
> > per year, citation statistics, and amount of grant money raised. As others
> > have commented, these statistics mean that the rich get richer and the
> > poor get poorer.
> >
> > If you are such a reviewer, I would like to suggest adding a criterion
> > assessing someone’s value for money (if you are not doing so already).
> > This could be as simple as dividing an individual’s grant monies by the
> > number of published papers to calculate a cost per publication. With this
> > criterion, small cost per publication would be better than large costs,
> > allowing the poor and productive to get richer.
> >
> > In using this criterion, one would need to keep in mind that some fields
> > of research are more costly than others.
> >
> > Fortin and Currie (2013) have shown that when considering cost per paper,
> > the association between grant money and scientific impact is considerably
> > diminished if not reversed.
> >
> > What do you think?
> >
> > Reference
> >
> > Fortin, J.-M., & Currie, D. J. (2013). Big science vs. little science: How
> > scientific impact scales with funding. PLoS ONE, 8(6), e65263.
> > doi:10.1371/journal.pone.0065263
> >
> > Cheers,
> > Robert.
> >
> > Robert P. O’Shea
> > Senior Research Fellow, School of Psychology and Exercise Science, Murdoch
> > University
> > Adjunct Professor, School of Health and Human Sciences, Southern Cross
> > University
> > Location: Social Sciences (440) 2.023, 90 South Street, Murdoch WA 6150,
> > Australia
> > Phone: +61 8 9360 7284
> > e-mail: r.oshea@murdoch.edu.au
> > Web:
http://ift.tt/SVQVJz
> > Blog:
http://ift.tt/1jmfl3H
> > The content of this e-mail is intended for the addressee only and may not
> > be forwarded to a third party without my written permission.
> >
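
For what it’s worth, the proposed criterion is simple arithmetic; a minimal sketch, with entirely made-up figures used purely for illustration, might look like this:

def cost_per_publication(total_grant_money, published_papers):
    """Grant money divided by number of papers; lower is better under this criterion."""
    if published_papers == 0:
        return float("inf")
    return total_grant_money / published_papers

# Two hypothetical candidates: the second has raised far less money but
# produces papers at a third of the cost per paper.
print(cost_per_publication(1_200_000, 20))  # 60000.0 per paper
print(cost_per_publication(300_000, 15))    # 20000.0 per paper

Any real use would, of course, need the adjustment for field-specific costs that the message mentions.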


Re: [visionlist] Plea for a new criterion for evaluating candidates for appointment, promotion, and grants (Was: Re: [cvnet] Open access publishing charges in vision)

Robert –

Thanks for the Flexner essay. I too enjoyed it a lot.
I wouldn’t fault him, though, for selecting the eminent scholars whom he
did; who would take note of also-rans (like me, perhaps) who also failed
to translate their banal thoughts into practical applications?

In fact, the piece is only SADLY relevant today – a plea for
curiosity-driven inquiry in a time when university departments, promotion
committees, granting agencies, and editorial boards demand knowledge
transfer and applications.

I will sleep well, but thoughtfully, tonight.

Bill Stell


[visionlist] Deadline Extension: PRISM6 Conference on Illumination, Shape and Materials: 19-23 Oct 2016

SIXTH CONFERENCE ON THE PERCEPTUAL REPRESENTATION OF ILLUMINATION, SHAPE AND MATERIALS (PRISM6)
19th-23rd October 2016, Rauischholzhausen Castle, Germany

DEADLINE EXTENDED TO 9th AUG!

Applicants are invited to submit an abstract to take part in the 6th (and final) Conference on the Perceptual Representation of Illumination, Shape and Materials.

The goal of the meetings is to bring together researchers from different fields to present their latest research related to the perception of shape, shading and materials, and more broadly to discuss important emerging areas at the intersection between psychology, neuroscience, machine learning, computer graphics, industry and design.

CONFIRMED PRESENTERS:

KEYNOTE: Hidehiko Komatsu (NIPS)

Bart Anderson (Sydney)

Kavita Bala (Cornell)

Pascal Barla (INRIA Bordeaux)

Johannes Burge (UPenn)

Alexei Efros (UC Berkeley)

Jiri Filip (Czech Academy of Sciences)

Bill Geisler (UT Austin)

Julie Harris (St Andrews)

Anya Hurlbert (Newcastle)

Peter Janssen (KU Leuven)

Bobby Klatzky (Carnegie Mellon)

Shin’ya Nishida (NTT Research)

Gaël Obein (LNE-CNAM)

Flip Phillips (Skidmore)

Sylvia Pont (TU Delft)

Holly Rushmeier (Yale)

Ohad Ben Shahar (Ben Gurion)

Romain Vergne (Grenoble)

Greg Ward (Anyhere Software)

Andrew Welchman (Cambridge)

John Winawer (NYU)

Nathan Witthoft (Stanford)

Plus industrial perspectives from:

Thomas Dauser (Audi)

Bill Eibon (PPG)

Frank Maile (Schlenk)

LOCATION / TRAVEL:

Rauischholzhausen Castle (http://ift.tt/28S1TZk) is a splendid residence, owned by the University of Giessen, with scenic gardens, creaky spiral staircases and a nice beer cellar.  Full board and lodging are included in the price.  The closest airport is Frankfurt (FRA); the closest train station is Marburg.  We’ll arrange some shuttles directly from the airport, but if you are coming in from Marburg, we recommend just catching a taxi.  Further travel details will be given to participants.  On Saturday afternoon, there will be a sightseeing trip for visitors.

APPLICATION PROCEDURE:

Participation, which includes full board and lodging, costs 750 Euros.  To apply, please submit a one-page (maximum) abstract as a PDF or Word file to roland.w.fleming@psychol.uni-giessen.de AND to matteo.valsecchi@gmail.com.  The authors of the top two abstracts will be asked to present a 15-minute talk; the remaining accepted abstracts will be presented as posters (size A0).  The deadline has been extended to 9th August 2016, but please note that space is extremely limited and we may not be able to accept everyone, so it is better to submit sooner rather than later.

Additional information will be posted in the coming weeks at: http://ift.tt/28Rqd1a


[visionlist] Register NOW for the 2016 HCP Course (Boston, 28 August – September 1)!

The 2016 HCP Course: “Exploring the Human Connectome”  starts in one month: August 28-September 1 in Boston! 

There are still some open slots, but we encourage you to register today! The HCP Course is the best place to learn directly from HCP investigators and to explore HCP data and methods. This year’s course will provide hands-on experience in working with the new multi-modal human cortical parcellation (Glasser et al., Nature, July 20, http://ift.tt/2a0E8nr) and with the “HCP-Style” paradigm for data acquisition, analysis, and sharing (Glasser et al., Nature Neuroscience, in press).

 

The 2016 HCP Course curriculum (see the full Course Schedule) includes over 20 lectures and 15 practicals, will be presented by 19 HCP investigators and staff, and will be held over five full days: August 28-September 1 (Sunday-Thursday) at the Joseph B. Martin Conference Center at Harvard Medical School in Boston, Massachusetts, USA.

If you have any questions, please contact us at hcpcourse@humanconnectome.org
Hope to see you in Boston!

 

Best,

2016 HCP Course Staff

Jennifer Elam, Ph.D.
Scientific Outreach, Human Connectome Project
Washington University School of Medicine
Department of Neuroscience, Box 8108
660 South Euclid Avenue
St. Louis, MO 63110
314-362-9387
elam@wustl.edu
www.humanconnectome.org