

The visual cortex has a prominent role in the processing of visual information by the brain. Previous work has segmented the mouse visual cortex into different areas based on the organization of retinotopic maps. Here, we collect responses of the visual cortex to various types of stimuli and ask whether unique clusters can be discovered from this dataset using machine learning methods. We show our results on two datasets: one collected by the authors using wide-field imaging, and a publicly available dataset collected using two-photon imaging. The retinotopy-based area borders are used as ground truth to evaluate the performance of our clustering algorithms. With the wide-field dataset, clustering neuronal responses using a constrained semi-supervised classifier showed graceful degradation of accuracy. Furthermore, in both datasets, the classifiers were able to model the boundaries of visual areas using resting-state cortical responses obtained without any overt stimulus. These responses likely reflect unique circuits within each area that give rise to activity with stronger intra-areal than inter-areal correlations, and responses to controlled visual stimuli across trials drive higher areal classification accuracy than resting-state responses. The results suggest that responses from visual cortical areas can be classified effectively using data-driven models.
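As a rough illustration of the kind of evaluation described above, the sketch below scores a semi-supervised classifier against ground-truth labels. Everything here is an assumption for illustration: the data are synthetic blobs standing in for pixel-wise response features, the six classes stand in for six visual areas, and scikit-learn's `LabelSpreading` is a generic stand-in for the constrained semi-supervised classifier, not the authors' actual method.

```python
# Hypothetical sketch, not the authors' code: semi-supervised area
# classification scored against "retinotopy" ground-truth labels.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.semi_supervised import LabelSpreading
from sklearn.metrics import accuracy_score

# Stand-in for pixel-wise response features: 6 "areas", 60-dim responses.
X, y_true = make_blobs(n_samples=600, centers=6, n_features=60, random_state=0)

# Pretend retinotopy labels are known for only ~10% of pixels;
# scikit-learn marks unlabeled samples with -1.
rng = np.random.default_rng(0)
y_partial = y_true.copy()
y_partial[rng.random(len(y_true)) > 0.1] = -1

# Propagate the sparse labels through a k-nearest-neighbour graph.
model = LabelSpreading(kernel="knn", n_neighbors=10).fit(X, y_partial)
acc = accuracy_score(y_true, model.transduction_)
print(round(acc, 2))
```

On well-separated synthetic clusters the propagated labels match the ground truth almost perfectly; the interesting behaviour on real data is how this accuracy degrades as the labeled fraction shrinks.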

The visual cortex of the mouse brain can be divided into ten or more areas that each contain complete or partial retinotopic maps of the contralateral visual field. It is generally assumed that these areas represent discrete processing regions. In contrast to the conventional input-output characterizations of neuronal responses to standard visual stimuli, here we asked whether six of the core visual areas have responses that are functionally distinct from each other for a given visual stimulus set, by applying machine learning techniques to distinguish the areas based on their activity patterns. Visual areas defined by retinotopic mapping were examined using supervised classifiers applied to responses elicited by a range of stimuli. Using two distinct datasets obtained with wide-field and two-photon imaging, we show that the area labels predicted by the classifiers were highly consistent with the labels obtained using retinotopy.

#Use pmouse processing how to#

Building on the first post about using Processing.py (shown in the table below), this post will demonstrate the basic structure of a Processing.py program, how to draw lines, and some other basics of the Processing.py programming language.

For the previous entry in the Processing.py series, see this page:

Drawing a line is as simple as providing two x, y points, one for each end of the line. For example: line(15, 25, 70, 90)

Create a static sketch that contains a line. In this example, we also set the size of the output window using size(), the color of the background using background(), and the color of the line using stroke().

Animate a single line, where the top of the line is fixed but the bottom of the line moves with the mouse. Use the setup() and draw() functions to animate the line. This is done by moving the background() call from the setup() function to the draw() function. This change means that every time the mouse moves, the background is refreshed.

Create a new function based on the mousePressed() event, so that the background changes only after the mouse button is pressed. The result is that the sketch starts out with a grey background and then turns orange when the mouse button is pressed.
