Multichannel Intan recording (for multichannel analysis)

Hardware

When we need access to several analog input channels, we usually acquire this information on a separate data acquisition board, such as the Intan Demo Board. (You may ask why we use a separate card for this purpose: the reason is that the CED micro1401/Spike2 system that we use to acquire stimulus timing information and up to 4 analog inputs cannot be inexpensively expanded to handle 32 or 64 channels at high sampling rates.)

Synchronization

Because we are using 2 acquisition devices (micro1401/Spike2 for stimulus timing information and a few analog inputs, and Intan for several analog inputs), we need to learn the time conversion between these devices. This means we need to identify the time shift between when the 2 devices were started (that is, what time on the Intan system corresponds to time 0 on the Spike2 system?) as well as the time scaling between the 2 clocks (because the clocks on the 2 devices advance at slightly different rates, and this difference matters over the course of a several-minute recording).

To do this, we typically record the stimulus trigger signal on an input channel of the Intan board (usually the first digital input channel). Because the CED micro1401/Spike2 system is also acquiring these trigger times, we can compare the trigger times recorded by the 2 devices to learn the exact mapping in shift and scale.
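For example, if the same triggers appear at times t_intan on the Intan board and t_spike2 on the Spike2 system, the shift and scale can be estimated with a linear fit. This is just a minimal sketch to illustrate the idea (the variable names are hypothetical):

P = polyfit(t_intan(:), t_spike2(:), 1); % straight-line fit: t_spike2 is approximately P(1)*t_intan + P(2)

scale = P(1); % clock scaling between the 2 devices (close to, but not exactly, 1)

shift = P(2); % Spike2 time corresponding to Intan time 0

t_converted = scale * some_intan_time + shift; % convert any Intan time (hypothetical variable) to Spike2 time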

Acquisition software

The acquisition software we use is from Intan Technologies.

[INSTRUCTIONS HERE]

Output of the acquisition program: It writes a single file, named from the base file name you choose plus the date and time, with the extension .rhd. If you want to read the file directly to look at its contents, you can use our functions read_Intan_RHD2000_datafile to read the data and read_Intan_RHD2000_header to read the header information. (See help read_Intan_RHD2000_datafile or help read_Intan_RHD2000_header to learn more; these are part of vhlab-toolbox-matlab.)
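For example, to peek at a recording from Matlab (a minimal sketch; the file name is hypothetical, and the exact argument list of read_Intan_RHD2000_datafile should be checked against its help text):

h = read_Intan_RHD2000_header('intan_210401_120000.rhd'); % header: channel information, sampling rate, etc.

% to pull out raw samples from the same file, pass it (along with the channel selection you want)
% to read_Intan_RHD2000_datafile; see help read_Intan_RHD2000_datafile for the exact arguments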

Each recording epoch should have its own directory (t00001, t00002, etc). All of the files that are related to that epoch (excepting some 2-photon files) should be in that directory. This normally includes our spike2data.smr file, an Intan file with extension .rhd, a stimulus information file stims.mat, and some text files.

These folders should also contain the files vhintan_filtermap.txt, which describes how the channels should be grouped for filtering (see help vhintan_filtermap in Matlab), and vhintan_channelgrouping.txt, which describes how channels should be grouped for spike extraction and spike sorting (see help vhintan_channelgrouping in Matlab).

Note: At present, these files have to be created manually. Follow the example in Step 0 below to do this.

How to extract and cluster spikes (spike sorting)

Step 0: Making sure all of the description files are correct

In the base directory (20NN-XX-YY), make sure there is a file called subject.txt that contains the subject identifier, usually of the form ferret_NNN.YYYY@vhlab.org.

Before starting, please go through your experiment directory to find all of the directories (t00001, t00002, etc) that contain multichannel data, and make sure the reference.txt, vhintan_filtermap.txt, and vhintan_channelgrouping.txt files are specified correctly. If you see the files vhlv_filtermap.txt and vhlv_channelgroup.txt, they should be renamed to the corresponding vhintan_* names. If you acquired 32 channels and want the electrode channels to be combined, you should use:

reference.txt:

name<tab>ref<tab>type

leftcortex<tab>1<tab>ntrode

vhintan_filtermap.txt:

channel_list

[1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32]

vhintan_channelgrouping.txt:

name<tab>ref<tab>channel_list

leftcortex<tab>1<tab>[1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32]
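If you prefer to create these files from Matlab instead of a text editor, here is a minimal sketch that writes the 32-channel example above into the current epoch directory (adjust the name and channel list for your recording):

chan_str = mat2str(1:32); % produces '[1 2 3 ... 32]'

fid = fopen('reference.txt','wt');
fprintf(fid,'name\tref\ttype\nleftcortex\t1\tntrode\n');
fclose(fid);

fid = fopen('vhintan_filtermap.txt','wt');
fprintf(fid,'channel_list\n%s\n',chan_str);
fclose(fid);

fid = fopen('vhintan_channelgrouping.txt','wt');
fprintf(fid,'name\tref\tchannel_list\nleftcortex\t1\t%s\n',chan_str);
fclose(fid);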

Step 1: Open in NDI

In Matlab:

S = ndi.setup.vhlab('20yy-mm-dd',[FULLPATH])
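For example (the date and path here are hypothetical; substitute your own experiment directory):

S = ndi.setup.vhlab('2021-04-01','/Volumes/MyData/2021-04-01')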

Step 2: Prepare to perform spike sorting with JRCLUST (use the version http://github.com/VH-Lab/JRCLUST)

p = S.getprobes('type','n-trode'); % load the probes

numel(p) % see how many probes there are

% if the result is 0, try running the command vhintan_sync2spike2(pwd) in each epoch folder, then S.cache.clear() outside of

% the epoch folders, and then retry p = S.getprobes('type','n-trode');
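% if you have many epoch folders, a minimal sketch for running the sync step over all of them
% (this assumes vhintan_sync2spike2 takes the epoch directory path as its argument, as with pwd above):

d = dir('t0*');

for i=1:numel(d),
    if d(i).isdir, vhintan_sync2spike2(fullfile(pwd,d(i).name)); end;
end;

S.cache.clear(); % then clear the cache and retry S.getprobes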

%for each probe, p{1}, p{2}, etc:

jrc('bootstrap','ndi',S,p{1}) % this makes a parameter file and opens it for editing

Now you have to modify the parameter file that comes up in the editor window (these settings are edited in the file, NOT typed at the command line).

useGPU = 0; % unless you have a compatible GPU, but you probably don't

maxSecLoad = [500]; % sets the number of seconds of data to load at a time; choose something that doesn't overload your computer

For probe parameters, assuming you have a Plexon 32 channel electrode custom made like we make it:

probePad = [10, 10]; % (formerly vrSiteHW) Recording contact pad size (in µm) (height x width)

shankMap = [1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1; 1]; % (formerly viShank_site) Shank ID of each site

depths_sl = 0:50:(16-1)*50; % 16 depth values, 0 to 750, in 50 µm steps

siteLoc = [ depths_sl(floor(1+(0:0.5:15.5)))' depths_sl(1+mod(0:31,2))']; % site coordinates (in µm): the first column repeats each depth twice and the second alternates 0 and 50, giving 2 columns of 16 sites spaced 50 µm apart

siteMap = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32];

If you have channels to ignore, do that on this line:

ignoreChans = [1:8 25:32]; % (formerly viChanZero) Channel numbers to ignore manually, if some channels weren't working, for example

Make sure nSiteDir is empty (it isn't by default):

nSiteDir = []; % (formerly maxSite) Number of neighboring sites to group in either direction (nSitesEvt is set to 1 + 2*nSiteDir - nSitesExcl)

And set the system to group channels that are closer than 75 microns (or use 800 to group them all):

evtGroupRad = 75; % (formerly maxDist_site_spk_um) Maximum distance (in µm) for extracting spike waveforms
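If you want to double-check the probe geometry before sorting, you can evaluate the siteLoc lines above in Matlab and plot the site positions. This is a minimal sketch purely for inspection; it is not part of the parameter file:

depths_sl = 0:50:(16-1)*50;
siteLoc = [ depths_sl(floor(1+(0:0.5:15.5)))' depths_sl(1+mod(0:31,2))'];
figure;
plot(siteLoc(:,1),siteLoc(:,2),'o'); % one marker per recording site
text(siteLoc(:,1)+2,siteLoc(:,2),cellstr(num2str((1:32)'))); % label each site with its index
xlabel('siteLoc column 1 (µm)'); ylabel('siteLoc column 2 (µm)');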

Step 3: Perform spike detection and initial sorting with JRCLUST

jrc detect path/to/your/parameter/file

jrc sort path/to/your/parameter/file

It's recommended that you check out the traces to make sure you selected the right channels:

jrc traces path/to/your/parameter/file

% for example:

jrc traces YourDrive/2021-04-01/.JRCLUST/Rightcortex_|_1/jrclust.prm

Step 4: Examine and modify the spike clusters with JRCLUST

Use the code:

jrc manual path/to/your/parameter/file

For example:

jrc manual YourDrive/2021-04-01/.JRCLUST/Rightcortex_|_1/jrclust.prm

This launches the program for curating the spikes.

Step 5: Write the output neurons to NDI

Use the code:

hCfg = jrclust.Config('path/to/your/parameter/file')

jrclust.export.ndi(hCfg,'forceReplace',1)

For example:

hCfg = jrclust.Config('YourDrive/2021-04-01/.JRCLUST/Rightcortex_|_1/jrclust.prm')

jrclust.export.ndi(hCfg,'forceReplace',1)
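To check that the export worked, you can ask NDI for the spike elements it now contains. This is a minimal sketch based on the NDI tutorials; the query below is an assumption, so consult the NDI documentation if it returns nothing:

e = S.getelements('element.type','spikes'); % spike elements created by jrclust.export.ndi (assumed query)

numel(e) % should be roughly the number of units you exported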