Re: [visionlist] simple light sensing for vision science experiments


On Jun 22, 2018, at 10:50 AM, James Mazer wrote:

Hi Daniel,

We use these around the lab: https://ift.tt/2K52uNx.

It’s a small photodiode sensor board with an amp and some minimal logic. It puts out both an analog (continuous) signal and a thresholded level (set by an on-board pot). Hard to beat for ~$5 assembled. You can cut the traces powering the LEDs if they cause problems with the experiment, but they’re useful for debugging.

best,

/jamie
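
To make the idea concrete: if the board's analog output is digitized on a spare recording channel, transition times can be recovered in software much the way the on-board comparator does it in hardware. The sketch below is a hypothetical Python/numpy illustration only; the sampling rate, threshold, and signal are placeholders, not values from the board's documentation.

import numpy as np

fs = 1000.0                                 # assumed sampling rate of the digitized trace (Hz)
photodiode = np.random.rand(10 * int(fs))   # placeholder for the recorded analog output

# Software analogue of the on-board comparator: binarize around a threshold
# (the pot sets this level in hardware).
threshold = 0.5                             # assumed level; tune to your stimulus luminances
digital = photodiode > threshold

# Samples where the thresholded level changes state, i.e. candidate stimulus transitions.
transitions = np.flatnonzero(np.diff(digital.astype(int)) != 0) + 1
transition_times_s = transitions / fs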

On Wed, Jun 20, 2018 at 10:50 AM Daniel Seshu Joyce wrote:

Hi all,

I am looking to build a simple light sensor to detect transitions between stimuli of different intensities. This will be recorded by an EEG channel and used to segment the EEG data based on the timing of the stimulus transitions. I would be grateful if you could point me in the direction of any resources/schematics that you have used with success – I recall there may have been some discussion on this topic on cvnet/visionlist some years ago, and maybe even a tutorial paper mentioned.

Many thanks,

Daniel

Daniel S. Joyce, PhD

Postdoctoral Research Fellow

VA Palo Alto Health Care System & Department of Psychiatry and Behavioral Sciences | School of Medicine | Stanford University | dsjoyce@stanford.edu
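
Once transition sample indices are available from the photodiode channel (e.g., via a thresholding step like the one sketched above), segmenting the EEG into fixed-length epochs around them takes only a few lines of numpy. This is an illustrative sketch, not a prescribed pipeline; the channel count, sampling rate, and epoch window are assumptions rather than details from the thread.

import numpy as np

fs = 1000                                      # assumed shared sampling rate for EEG and photodiode (Hz)
n_channels, n_samples = 32, 60 * fs
eeg = np.random.randn(n_channels, n_samples)   # placeholder EEG recording
onsets = np.array([5000, 7000, 9000])          # transition samples from the photodiode step

# Fixed-length epochs around each transition, e.g. -200 ms to +800 ms.
pre, post = int(0.2 * fs), int(0.8 * fs)
epochs = np.stack([eeg[:, t - pre:t + post]
                   for t in onsets
                   if t - pre >= 0 and t + post <= n_samples])
print(epochs.shape)                            # (n_epochs, n_channels, pre + post samples)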


[visionlist] POST-DOC ON TACTILE PERCEPTION AND SENSORIMOTOR CONTROL AT GIESSEN UNIVERSITY


[visionlist] ECCV Satellite event and TPAMI Special Issue on Inpainting and Denoising in Looking at People

ChaLearn Satellite Workshop on Image and Video Inpainting @ECCV18


[visionlist] ECCV Workshop on Transferring and Adapting Source Knowledge & VISDA Challenge

======================================================
                 2nd Call for Papers
======================================================
ECCV TASK-CV Workshop on Transferring and Adapting Source Knowledge in Computer Vision & VisDA Challenge
Munich, September 14th 2018
Workshop site: https://ift.tt/2KeBg2X
Challenge site: https://ift.tt/2K1UhcJ

Dates

Paper Track
Submission: July 2nd, 2018
Notification: July 15th, 2018
Camera Ready: July 25th, 2018

Challenge
Registration: April 21st, 2018
Train and validation data release: May 16th, 2018
Test data release: August 1st, 2018
Notification of winners: September 1st, 2018

Workshop Topics

A key ingredient of the recent successes in computer vision has been the availability of visual data with annotations, both for training and testing, and well-established protocols for evaluating the results. However, this traditional supervised learning framework is limited when it comes to deployment on new tasks and/or operating in new domains. In order to scale to such situations, we must find mechanisms to reuse the available annotations or the models learned from them and generalize to new domains and tasks.

Accordingly, TASK-CV aims to bring together research in transfer learning and domain adaptation for computer vision and invites the submission of research contributions on the following topics:

* TL/DA focusing on specific computer vision tasks (e.g., image classification, object detection, semantic segmentation, recognition, retrieval, tracking, etc.) and applications (biomedical, robotics, multimedia, autonomous driving, etc.)
* TL/DA focusing on specific visual features, models, or learning algorithms for challenging paradigms like unsupervised, reinforcement, or online learning
* TL/DA in the era of convolutional neural networks (CNNs): adaptation effects of fine-tuning, regularization techniques, transfer of architectures and weights, etc.
* Comparative studies of different TL/DA methods and transferring part representations between categories and 2D/3D modalities
* Working frameworks with appropriate CV-oriented datasets and evaluation protocols to assess TL/DA

This is not a closed list; we welcome other related research for TASK-CV.

VisDA Challenge

The VisDA challenge aims to test domain adaptation methods’ ability to transfer source knowledge and adapt it to novel target domains.

Organizers

Tatiana Tommasi, IIT Milan, Italy
David Vázquez, Element AI
Kate Saenko, Boston University
Ben Usman, Boston University
Xingchao Peng, Boston University
Judy Hoffman, UC Berkeley
Neela Kaushik, Boston University
Kuniaki Saito, Boston University
Antonio M. López, UAB/CVC
Wen Li, ETH Zurich
Francesco Orabona, Boston University



Re: [visionlist] simple light sensing for vision science experiments

Dear Daniel,

You might want to try the TEMT6000 sensor board from SparkFun. Whether you need to amplify the output signal (using a non-inverting op-amp) will depend on the sensitivity of your EEG recording device's auxiliary inputs. For a strong signal the sensor should of course be as close as possible to the stimulus source.

Theo
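
For reference, the gain of the non-inverting op-amp stage Theo mentions is set by two resistors: Vout = Vin * (1 + Rf/Rg). The toy calculation below, in Python, only shows how one might check the amplified swing against an EEG auxiliary input range; the resistor values and sensor output level are purely illustrative, not a recommended design.

# Non-inverting op-amp gain: Vout = Vin * (1 + Rf / Rg).
# Resistor values below are illustrative only.
Rf, Rg = 10_000.0, 1_000.0          # feedback and ground-leg resistors (ohms)
gain = 1.0 + Rf / Rg                # -> 11x

sensor_peak_v = 0.2                 # assumed peak sensor output (V)
print(f"amplified peak: {sensor_peak_v * gain:.2f} V")   # compare with the aux input range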

On 06/20/2018 04:30 PM, Daniel Seshu Joyce wrote:
> Hi all,
>
>
> I am looking to build a simple light sensor to detect transitions between stimuli of
> different intensities. This will be recorded by an EEG channel and used to segment the EEG
> data based on the timing of the stimulus transitions. I would be grateful if you could
> point me in the direction of any resources/schematics that you have used with success – I
> recall there may have been some discussion on this topic on cvnet/visionlist some years
> ago, and maybe even a tutorial paper mentioned.
>
>
> Many thanks,
>
>
> Daniel
>
>
> Daniel S. Joyce, PhD
> Postdoctoral Research Fellow
> VA Palo Alto Health Care System & Department of Psychiatry and Behavioral Sciences |
> School of Medicine | Stanford University | dsjoyce@stanford.edu


[visionlist] 6th VOT2018 Visual Object Tracking Challenge Workshop

*** Apologies for multiple copies ***

********************************************************
ECCV Visual Object Tracking Challenge Workshop, VOT2018
September 14th 2018, Munich, Germany

Half day workshop in conjunction with ECCV 2018
Web: https://ift.tt/2KdRklp
********************************************************

___________________________
LAST CALL FOR PARTICIPATION
___________________________

We are happy to announce that the 6th Visual Object Tracking Challenge VOT2018 will be held in conjunction with ECCV2018.

Participate by:
1. Sending a full-length paper

2. Sending your rejected ECCV paper

The VOT committee solicits full-length papers describing:
* Original or improved trackers, as well as papers giving new insights into existing trackers or classes of trackers.
* Novel ways of using and extending the VOT framework for tracker performance analysis.

For additional information please see the VOT2018 homepage (https://ift.tt/2JXHlVv).

___________________________
IMPORTANT NEXT DATES
___________________________

Paper Submission: June 29th 2018

ECCV tracking papers that were not accepted: July 27th 2018

Notification of Acceptance: August 1st 2018

Camera-Ready Paper Due: August 24th 2018

Workshop: September 14th 2018 (Half day)

___________________________

CONTACTS
___________________________
Homepage: https://ift.tt/2JXHlVv
Stay informed by subscribing to the VOT mailing list: https://ift.tt/2FINEpK

Best regards,
The VOT2018 team