Postprocessing and Curation with SpikeInterface
Authors
Setup
Import Modules
Import the modules required for this notebook
import numpy as np
from matplotlib import pyplot as plt
import spikeinterface.full as si
import probeinterface as pi
from probeinterface.plotting import plot_probe
%matplotlib inline
Download the data for this notebook (this may take a while).
import os
import zipfile
import requests
fnames = [
    "openephys_raw",
    "openephys_preprocessed",
    "results_KS4",
    "results_SPC",
    "results_TDC",
]
urls = [
    "https://uni-bonn.sciebo.de/s/x2mNZ3pm3hAHKQa",
    "https://uni-bonn.sciebo.de/s/2LbWWs3VPVrQkLO",
    "https://uni-bonn.sciebo.de/s/26X6qhAAYq1uJfZ",
    "https://uni-bonn.sciebo.de/s/FnRPj7EQSYXFIEg",
    "https://uni-bonn.sciebo.de/s/EkdnDNhvF68HC9D",
]
for url, fname in zip(urls, fnames):
    print("downloading", fname)
    response = requests.get(f"{url}/download")
    with open(f"{fname}.zip", "wb") as file:
        file.write(response.content)
    with zipfile.ZipFile(f"{fname}.zip", "r") as zip_ref:
        zip_ref.extractall(".")
        extracted_name = zip_ref.namelist()[0].split("/")[0]
    if extracted_name != fname:
        os.rename(extracted_name, fname)
    os.remove(f"{fname}.zip")
print("Done!")
downloading openephys_raw
downloading openephys_preprocessed
downloading results_KS4
downloading results_SPC
downloading results_TDC
Done!
Section 1: Extracting Waveforms and Computing Templates
After spike sorting, we have the times when spikes occurred, but to validate and understand the sorted units, we need to examine their shapes. This process involves extracting the raw voltage “snippets” (waveforms) around each detected spike time. SpikeInterface streamlines this with the SortingAnalyzer object, a central hub that links a sorting result with its corresponding recording. In this section, you will learn how to create an analyzer and compute “extensions” like spike waveforms and average templates, which are the basis for all subsequent quality control and analysis.
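Before working through the exercises, here is a compact sketch of that whole chain in a single cell. It only re-uses the calls listed in the table below and the folder names downloaded above; the parameter values (300 spikes per unit, 2 ms before and 3 ms after each spike) are illustrative, not recommendations.
rec = si.load("openephys_preprocessed")          # preprocessed recording
sorting = si.read_sorter_folder("results_SPC")   # one set of sorting results
analyzer = si.create_sorting_analyzer(sorting, rec)
# Extensions build on each other: sample spikes, cut out their waveforms,
# then average each unit's waveforms into a template.
analyzer.compute("random_spikes", method="uniform", max_spikes_per_unit=300)
analyzer.compute("waveforms", ms_before=2, ms_after=3)
analyzer.compute("templates", operators=["average"])
# Every computed extension can be retrieved again from the analyzer.
template_3 = analyzer.get_extension("templates").get_unit_template(unit_id=3, operator="average")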
| Code | Description |
|---|---|
| rec = si.load_extractor("mydir") | Load a recording extractor stored in the folder "mydir". |
| sorting = si.read_sorter_folder("results_SPC") | Load spike sorting results from a folder. |
| analyzer = si.create_sorting_analyzer(sorting, rec) | Create an analyzer by pairing sorting results with the recording. |
| analyzer.compute("random_spikes", **kwargs) | Randomly sample spikes from each unit for efficient processing. |
| analyzer.compute("waveforms", **kwargs) | Extract spike waveforms for the sampled spikes. |
| analyzer.compute("templates", **kwargs) | Compute average templates from the extracted waveforms. |
| ext = analyzer.get_extension("extension_name") | Get a computed extension (e.g., “waveforms”) from the analyzer. |
| ext.get_data() | Get the raw data from an extension object. |
| ext.get_waveforms_one_unit(unit_id) | Get the waveforms for a single specified unit. |
| ext.get_unit_template(unit_id, operator) | Get the template for a single unit (e.g., using “average” or “median”). |
| plt.plot(template) | Plot the average template waveform. |
Load the recording extractor stored in the directory "openephys_preprocessed" and the sorting results stored in the directory "results_SPC", then create a sorting analyzer.
Exercises
rec = si.load("openephys_preprocessed")
sort = si.read_sorter_folder("results_SPC")
analyzer = si.create_sorting_analyzer(sort, rec)
Example: Randomly sample up to 300 spikes from every unit.
analyzer.compute("random_spikes", method="uniform", max_spikes_per_unit=300)
Example: Get the "random_spikes" extension from the analyzer and print the total number of spikes sampled.
ext = analyzer.get_extension("random_spikes")
spike_indices = ext.get_data()
print("N_spikes = ", len(spike_indices))
N_spikes = 34096
Exercise: Randomly sample up to 500 spikes per unit and print the total number of spikes sampled.
Solution
analyzer.compute("random_spikes", method="uniform", max_spikes_per_unit=500)
ext = analyzer.get_extension("random_spikes")
spike_indices = ext.get_data()
print("N_spikes = ", len(spike_indices))
N_spikes = 52646
Exercise: Randomly sample spikes and use method="all" to get all of the spikes (omit the max_spikes_per_unit argument). Then, print the total number of spikes sampled.
Solution
analyzer.compute("random_spikes", method="all")
ext = analyzer.get_extension("random_spikes")
spike_indices = ext.get_data()
print("N_spikes = ", len(spike_indices))
N_spikes = 145296
Example: Compute the "waveforms" for all of the spikes, extracting ms_before=2 milliseconds before and ms_after=3 milliseconds after each spike.
analyzer.compute("waveforms", ms_before=2, ms_after=3)
Example: Get the "waveforms" extension from the analyzer, then get the array of waveforms for unit 3 and print its .shape (the dimensions represent spikes, samples, and channels).
ext = analyzer.get_extension("waveforms")
wfs = ext.get_waveforms_one_unit(unit_id=3)
wfs.shape
(3331, 150, 13)
Exercise: Compute the "waveforms" for all of the spikes, extracting ms_before=1 millisecond before and ms_after=2 milliseconds after each spike. Then, get the waveforms for unit 3 and print their .shape.
Solution
analyzer.compute("waveforms", ms_before=1, ms_after=2)
ext = analyzer.get_extension("waveforms")
wfs = ext.get_waveforms_one_unit(unit_id=3)
wfs.shape
(3331, 90, 13)
Example: Compute the "templates" for the extracted "waveforms" using the "average" operator.
analyzer.compute("templates", operators=["average"])
Example: Get the "average" template for unit 3 and plot it.
ext = analyzer.get_extension("templates")
template = ext.get_unit_template(unit_id=3, operator="average")
plt.plot(template);
Exercise: Compute the "templates" for the extracted "waveforms" using the "median" operator.
Solution
analyzer.compute("templates", operators=["median"])
Exercise: Get the "median" template for unit 3 and plot it.
Solution
ext = analyzer.get_extension("templates")
template = ext.get_unit_template(unit_id=3, operator="median")
plt.plot(template);
Section 2: Curating Units with Quality Metrics
Spike sorting algorithms are not perfect; they often produce units that represent noise, artifacts, or a mix of multiple neurons (multi-unit activity). Therefore, a critical step in the workflow is to compute quality metrics to automatically curate the results and identify high-quality, well-isolated single units. SpikeInterface provides many metrics, including signal-to-noise ratio (SNR) and Inter-Spike Interval (ISI) violations. In this section, you will learn how to compute these metrics and use them to filter the sorting output, keeping only the units that meet your quality criteria.
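The exercises below filter on one metric at a time; in practice you usually combine several criteria. The cell below is a minimal sketch of such a combined curation step, re-using the analyzer and the sorting (sort) from Section 1. The thresholds and the output folder name are illustrative choices, not recommendations.
analyzer.compute("noise_levels")                 # required before SNR can be computed
analyzer.compute("quality_metrics", metric_names=["snr", "isi_violation"])
df = analyzer.get_extension("quality_metrics").get_data()   # one row per unit
# Boolean masks combine with & to require that all criteria are met.
mask = (df["snr"] > 5) & (df["isi_violations_ratio"] < 4)
good_unit_ids = analyzer.unit_ids[mask]
curated_units = sort.select_units(unit_ids=good_unit_ids)
curated_units.save(folder="sorting_curated_combined", overwrite=True)   # illustrative folder name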
| Code | Description |
|---|---|
| analyzer.compute("noise_levels") | Compute the noise level on each channel, a prerequisite for SNR. |
| analyzer.compute("quality_metrics", metric_names=[...]) | Compute specified quality metrics for all units. |
| ext = analyzer.get_extension("quality_metrics") | Get the computed quality metrics from the analyzer. |
| df = ext.get_data() | Get the quality metrics as a pandas DataFrame. |
| plt.hist(df["metric_name"]) | Plot a histogram of a specific quality metric’s distribution. |
| unit_ids = analyzer.unit_ids[df["metric_name"] > value] | Select unit IDs that satisfy a condition based on a quality metric. |
| curated_units = sort.select_units(unit_ids=unit_ids) | Create a new, curated sorting object containing only the selected units. |
| curated_units.save(folder="...", overwrite=True) | Save the curated sorting results to a new folder. |
Exercises
Example: Compute the "noise_levels" and "quality_metrics" extensions with the metrics "snr" and "num_spikes". Then, get the extension data and print the data frame.
analyzer.compute("noise_levels")
analyzer.compute("quality_metrics", metric_names=["snr", "num_spikes"])
ext = analyzer.get_extension("quality_metrics")
df = ext.get_data()
df
| unit_id | snr | num_spikes |
|---|---|---|
| 0 | 1.991351 | 1424 |
| 1 | 3.063185 | 899 |
| 2 | 5.274205 | 3523 |
| 3 | 3.983339 | 3331 |
| 4 | 4.593647 | 1561 |
| ... | ... | ... |
| 122 | 5.552776 | 576 |
| 123 | 5.575174 | 618 |
| 124 | 4.770040 | 2375 |
| 125 | 4.454178 | 2483 |
| 126 | 5.408644 | 1884 |
127 rows × 2 columns
Exercise: Plot the distribution of "snr" in a histogram.
Solution
plt.hist(df["snr"])
Exercise: Compute the "quality_metrics" extension with the "isi_violation" metric, get the extension data and print the data frame.
Solution
analyzer.compute("quality_metrics", metric_names=["isi_violation"])
ext = analyzer.get_extension("quality_metrics")
df = ext.get_data()
df
| unit_id | isi_violations_ratio | isi_violations_count | snr | num_spikes |
|---|---|---|---|---|
| 0 | 8.827405 | 179.0 | 1.991351 | 1424 |
| 1 | 5.444190 | 44.0 | 3.063185 | 899 |
| 2 | 2.489621 | 309.0 | 5.274205 | 3523 |
| 3 | 1.288804 | 143.0 | 3.983339 | 3331 |
| 4 | 3.652449 | 89.0 | 4.593647 | 1561 |
| ... | ... | ... | ... | ... |
| 122 | 1.507041 | 5.0 | 5.552776 | 576 |
| 123 | 0.261832 | 1.0 | 5.575174 | 618 |
| 124 | 0.319114 | 18.0 | 4.770040 | 2375 |
| 125 | 0.129759 | 8.0 | 4.454178 | 2483 |
| 126 | 0.225387 | 8.0 | 5.408644 | 1884 |
127 rows × 4 columns
Exercise: Plot the distribution of "isi_violations_count" in a histogram.
Solution
plt.hist(df["isi_violations_count"])
Example: Get the units with an SNR above 10.
unit_ids = analyzer.unit_ids[df["snr"]>10]
unit_ids
array([ 11, 15, 17, 18, 19, 22, 23, 26, 27, 28, 30, 31, 33,
34, 38, 39, 40, 41, 43, 44, 45, 47, 50, 52, 53, 57,
59, 60, 61, 63, 65, 68, 72, 75, 76, 78, 81, 82, 84,
87, 90, 92, 95, 97, 99, 104, 111, 112, 114, 115, 116, 117])
Exercise: Get the units with an "isi_violations_ratio" below 4.
Solution
unit_ids = analyzer.unit_ids[(df["isi_violations_ratio"]<4)]
unit_ids
array([ 2, 3, 4, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
21, 22, 23, 24, 25, 26, 27, 28, 29, 31, 32, 33, 34,
35, 36, 37, 38, 39, 41, 42, 43, 44, 45, 46, 47, 48,
49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61,
63, 64, 65, 66, 68, 70, 71, 73, 74, 75, 76, 77, 79,
82, 83, 84, 85, 86, 87, 88, 89, 90, 92, 93, 94, 95,
96, 97, 98, 99, 100, 101, 102, 103, 105, 106, 107, 109, 110,
111, 112, 113, 115, 117, 118, 119, 120, 121, 122, 123, 124, 125,
126])
Exercise: Use sort.select_units to select the filtered unit_ids and return the curated_units. Then use curated_units.save() to save them.
Solution
curated_units = sort.select_units(unit_ids=unit_ids)
curated_units.save(folder="sorting_curated", overwrite=True)
si.load("sorting_curated")
Unit IDs
- [ 2 3 4 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
26 27 28 29 31 32 33 34 35 36 37 38 39 41 42 43 44 45
46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 63 64
65 66 68 70 71 73 74 75 76 77 79 82 83 84 85 86 87 88
89 90 92 93 94 95 96 97 98 99 100 101 102 103 105 106 107 109
110 111 112 113 115 117 118 119 120 121 122 123 124 125 126]
Annotations
- __sorting_info__ : {'recording': {...}, 'params': {'sorter_name': 'spykingcircus2', ...}, 'log': {'sorter_name': 'spykingcircus2', 'sorter_version': '2.0', ...}}
Properties
Section 3: Compare Different Sorters
Different spike sorters use different algorithms and may produce different results on the same dataset. By comparing the outputs of multiple sorters, we can identify a “consensus” set of units found by all (or most) of them, which increases our confidence in their validity. This comparison also highlights where sorters disagree, pointing to units that may be difficult to sort or require manual inspection. SpikeInterface provides convenient tools to compare sorters and visualize their agreement. In this section, you will learn how to compare sorters and interpret the resulting agreement matrices and summary plots.
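As a compact sketch of what the exercises below do, the cell here compares two sorters pairwise and then all three at once, re-using the sorting objects loaded below; the display names passed to the comparison functions are arbitrary labels. The last line, which extracts a consensus sorting of units found by at least two sorters, uses get_agreement_sorting; treat that call as an assumption and check it against your SpikeInterface version.
comp = si.compare_two_sorters(sorting_SPC, sorting_KS4, "SPC", "KS4", agreement_method="count")
si.plot_agreement_matrix(comp, unit_ticks=False)   # matched units line up on the diagonal
multi_comp = si.compare_multiple_sorters(
    [sorting_SPC, sorting_KS4, sorting_TDC],
    ["SPC", "KS4", "TDC"],
    agreement_method="count",
)
si.plot_multicomparison_agreement(multi_comp)      # pie chart: how many sorters found each unit
# Consensus units detected by at least two of the three sorters (assumed API, see note above):
consensus = multi_comp.get_agreement_sorting(minimum_agreement_count=2)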
| Code | Description |
|---|---|
| comp = si.compare_two_sorters(...) | Compare two sorters, returning a comparison object. |
| si.plot_agreement_matrix(comp, **kwargs) | Plot the agreement matrix for a pairwise sorter comparison. |
| multi_comp = si.compare_multiple_sorters(...) | Compare a list of multiple sorters, returning a multi-comparison object. |
| si.plot_multicomparison_agreement(multi_comp) | Plot a pie chart summarizing the agreement between multiple sorters. |
| si.plot_multicomparison_agreement_by_sorter(multi_comp) | Plot agreement summaries broken down by each individual sorter. |
Load the sorting results from Spykingcircus, Kilosort4 and Tridesclous.
Exercises
sorting_SPC = si.read_sorter_folder("results_SPC")
sorting_KS4 = si.read_sorter_folder("results_KS4")
sorting_TDC = si.read_sorter_folder("results_TDC")
Example: Compare the results from the Spykingcircus and Kilosort4 sorters using the "count" agreement method and plot the agreement matrix. A clearly visible diagonal indicates that the sorters agree on a given unit.
comp = si.compare_two_sorters(
    sorting_SPC, sorting_KS4, "Spikingcircus", "Kilosort4", agreement_method="count"
)
si.plot_agreement_matrix(comp, unit_ticks=False)
Exercise: Compare the results from the Spykingcircus and Kilosort4 sorters using the "distance" agreement method and plot the agreement matrix.
Solution
comp = si.compare_two_sorters(
    sorting_SPC, sorting_KS4, "SPC", "KS4", agreement_method="distance"
)
si.plot_agreement_matrix(comp, unit_ticks=False)
Exercise: Load the results from the Tridesclous sorter and compare them to the results from Spykingcircus by plotting the agreement matrix.
Solution
sorting_TDC = si.read_sorter_folder("results_TDC")
comp = si.compare_two_sorters(
    sorting_SPC, sorting_TDC, "Spikingcircus", "Tridesclous", agreement_method="count"
)
si.plot_agreement_matrix(comp, unit_ticks=False)
Example: Compare the results from all three sorters using the "count" agreement method.
multi_comp = si.compare_multiple_sorters(
    [sorting_SPC, sorting_KS4, sorting_TDC],
    ["Spikingcircus", "Kilosort4", "Tridesclous"],
    agreement_method="count",
)
Exercise: Use si.plot_multicomparison_agreement to plot the multi_comp.
Solution
si.plot_multicomparison_agreement(multi_comp)
Exercise: Compare the results from all three sorters using the "distance" agreement method.
Solution
multi_comp = si.compare_multiple_sorters(
    [sorting_SPC, sorting_KS4, sorting_TDC],
    ["Spikingcircus", "Kilosort4", "Tridesclous"],
    agreement_method="distance",
)
Exercise: Plot the multicomparison agreement.
Solution
si.plot_multicomparison_agreement(multi_comp)
Exercise: Plot the multicomparison agreement separately for each sorter.
Solution
si.plot_multicomparison_agreement_by_sorter(multi_comp)
Section 4: Localizing Detected Units
Knowing where a unit is physically located on the probe is crucial for validating sorting results (e.g., a single neuron should be localized in space) and for any spatial analysis of neural activity. SpikeInterface can estimate the position of each unit by analyzing the amplitude of its average waveform across the different electrode channels. In this section we will explore computing unit locations using various methods, such as “center of mass” and “monopolar triangulation,” and visualizing these locations in 2D and 3D space relative to the probe geometry.
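As a quick orientation before the exercises, the cell below sketches the basic pattern: estimate a location per unit from the analyzer, then draw those locations on top of the probe layout. It re-uses the analyzer and rec from Section 1; the axis limits are the illustrative values used later in this notebook. Note that "center_of_mass" returns two coordinates per unit, while "monopolar_triangulation" and "grid_convolution" (used in the exercises) also return a third (z) coordinate, so their output has shape (n_units, 3).
unit_locations = si.compute_unit_locations(analyzer, method="center_of_mass")  # shape (n_units, 2)
probe = rec.get_probe()
plot_probe(probe)                                          # draw the probe contacts and contour
plt.scatter(unit_locations[:, 0], unit_locations[:, 1])    # overlay the estimated unit positions
plt.xlim(-400, 400)
plt.ylim(0, 800)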
| Code | Description |
|---|---|
| si.plot_unit_waveforms(analyzer, **kwargs) | Create an interactive plot showing unit waveforms on the probe. |
| unit_locations = si.compute_unit_locations(analyzer, method) | Compute unit locations using a specified method (e.g., “center_of_mass”). |
| probe = rec.get_probe() | Retrieve the probe information from a recording object. |
| plot_probe(probe) | Plot the geometry of the probe. |
| plt.scatter(x, y) | Create a 2D scatter plot of x and y coordinates. |
| ax = plt.subplot(projection='3d') | Create a 3-dimensional plot axis. |
| ax.scatter(x, y, z) | Create a 3D scatter plot of x, y, and z coordinates. |
| ax.set(xlim=..., ylim=...) | Set the x and y axis limits for a plot. |
Run the cell below to create an interactive widget that shows, for each unit, the waveforms and the electrode locations where they were recorded (this may take a while).
Exercises
%matplotlib widget
si.plot_unit_waveforms(analyzer, backend="ipywidgets")
%matplotlib inline
Example: Compute the unit locations using the "center_of_mass" method and print the shape of the returned unit_locations.
unit_locations = si.compute_unit_locations(analyzer, method="center_of_mass")
unit_locations.shape
(127, 2)
Example: Get the probe from rec and plot it together with the x and y unit_locations.
probe = rec.get_probe()
plot_probe(probe)
x = unit_locations[:, 0]
y = unit_locations[:, 1]
plt.scatter(x, y)
plt.xlim(-400, 400)
plt.ylim(0, 800)
Exercise: Compute the unit locations using the "monopolar_triangulation" method and print the shape of the returned unit_locations.
Solution
unit_locations = si.compute_unit_locations(analyzer, method="monopolar_triangulation")
unit_locations.shape
(127, 3)
Exercise: Plot the probe together with the x and y unit_locations.
Solution
probe = rec.get_probe()
plot_probe(probe)
x = unit_locations[:, 0]
y = unit_locations[:, 1]
plt.scatter(x, y)
plt.xlim(-400, 400)
plt.ylim(0, 800)
Example: Create a 3D plot of the x, y and z unit_locations.
x = unit_locations[:, 0]
y = unit_locations[:, 1]
z = unit_locations[:, 2]
ax = plt.subplot(projection="3d")
ax.scatter(x, y, z)
ax.set(xlim=(-400, 400), ylim=(0, 800))
Exercise: Compute the unit locations using the "grid_convolution" method and print the shape of the returned unit_locations.
Solution
unit_locations = si.compute_unit_locations(analyzer, method="grid_convolution")
unit_locations.shape
(127, 3)
Exercise: Create a 3D plot of the x, y and z unit_locations.
Solution
x = unit_locations[:, 0]
y = unit_locations[:, 1]
z = unit_locations[:, 2]
ax = plt.subplot(projection="3d")
ax.scatter(x, y, z)
ax.set(xlim=(-400, 400), ylim=(0, 800))