
BRIEF RESEARCH REPORT

Front. Hum. Neurosci., 19 September 2023
Sec. Brain-Computer Interfaces
This article is part of the Research Topic "AI and Machine Learning Application for Neurological Disorders and Diagnosis."

CLET: Computation of Latencies in Event-related potential Triggers using photodiode on virtual reality apparatuses

Piyush Swami1,2,3*, Klaus Gramann4, Elise Klæbo Vonstad1, Beatrix Vereijken5, Alexander Holt1, Tomas Holt1, Grethe Sandstrak1, Jan Harald Nilsen1 and Xiaomeng Su1*
  • 1Motion Capture and Visualization Laboratory, Applied Information Technology Group, Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway
  • 2Section for Visual Computing, Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kongens Lyngby, Denmark
  • 3Biomedical Engineering Techies, Broendby, Denmark
  • 4Biological Psychology and Neuroergonomics, Technical University of Berlin, Berlin, Germany
  • 5Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology, Trondheim, Norway

To investigate event-related activity in human brain dynamics as measured with EEG, triggers must be incorporated to indicate the onset of events in the experimental protocol. Such triggers allow for the extraction of ERPs, i.e., systematic electrophysiological responses to internal or external stimuli that must be extracted from the ongoing oscillatory activity by averaging several trials containing similar events. Due to the technical setup, with separate hardware sending and recording triggers, the recorded data commonly involve latency differences between the transmitted and received triggers. The computation of these latencies is critical for shifting the epochs with respect to the triggers sent; otherwise, timing differences can lead to misinterpretation of the resulting ERPs. This study presents a methodical approach for CLET using a photodiode on a non-immersive VR apparatus (an LED screen) and an immersive VR apparatus (an HMD). Two sets of algorithms are proposed to analyze the photodiode data. The experiment designed for this study involved the synchronization of EEG, EMG, PPG, and photodiode sensors and ten 3D MoCap cameras with a VR presentation platform (Unity). The average latency computed from the LED screen data for the white and black stimuli sets was 121.98 ± 8.71 ms and 121.66 ± 8.80 ms, respectively. In contrast, the average latency computed from the HMD data for the white and black stimuli sets was 82.80 ± 7.63 ms and 69.82 ± 5.52 ms, respectively. The codes for CLET and analysis, along with the datasets, tables, and a tutorial video for using the codes, have been made publicly available.

1. Introduction

1.1. Motivation

Many applications of electroencephalography (EEG) and event-related potentials (ERP) (Luck, 2012; Nidal and Malik, 2014) require triggers (or tagging) (Wang et al., 2016; Cattan et al., 2018) to indicate the exact onset of presented stimuli (mostly visual or auditory events) (Miyakoshi et al., 2021; Ignatious et al., 2023) so that the recorded physiological data can be synchronized. However, variability across hardware and software typically leads to differences in latency between the transmission and reception of triggers (Wang et al., 2016; Cattan et al., 2021; Iwama et al., 2022). Importantly, these trigger latencies should not be confused with the latencies of neural markers, such as the N170, P250, and N400 (Luck, 2012; Cattan et al., 2021; Miyakoshi et al., 2021; Abreu et al., 2023). Although several studies (Hoormann et al., 1998; Kiesel et al., 2008; Wu et al., 2013; Liesefeld, 2018; Ignatious et al., 2023) showcase the computation of latencies in neural markers and their association with brain activity, the focus of the current work is on the computation of latencies in event-related potential TRIGGERS, a critical pre-processing step for any brain-computer interface (BCI) study. The existing literature (discussed in the next section) lacks a methodical approach, with datasets, for computing trigger latencies, especially for immersive virtual reality (VR) apparatuses. Overcoming this knowledge gap formed the main motivation of this work.

1.2. Literature survey

While past studies (Cattan et al., 2018; Iwama et al., 2022) have highlighted shifting ERP epochs based on computed average latencies, other studies detail how to set up triggers so as to overcome latency differences (Wang et al., 2016; Cattan et al., 2018). To the best of our knowledge, only a few studies describe the importance of, and considerations for, computing trigger latencies using immersive VR systems (Wang et al., 2016; Cattan et al., 2021; Iwama et al., 2022). In the literature, the Rapid Serial Visual Presentation (RSVP) paradigm (Wang et al., 2016; Lees et al., 2018) is one of the common, simple, yet effective approaches for sending triggers. Lab Streaming Layer (LSL) (Stenner et al., 2023) is another preferred choice in many studies (Wang et al., 2016; Iwama et al., 2022). The availability of open-source resources like the Simulation and Neuroscience Application Platform (SNAP) (Kothe, 2023), developed in Python to ease stimulus presentation, also favors using LSL. However, LSL can mandate a setup with more memory and displays with higher refresh rates compared to the RSVP approach (Wang et al., 2016). For efficient hardware-software synchronization, the LSL approach has also required extra hardware such as a light-dependent resistor comparator circuit (LDRCC) (Wang et al., 2016). Hence, the RSVP approach with C# programming was followed in this work.

Most of the existing literature (Lopez-Calderon and Luck, 2014; Cattan et al., 2018, 2021; Lees et al., 2018; Williams et al., 2021; Huang et al., 2022; Iwama et al., 2022), which provides at most scattered details and some considerations for setting up triggers, is based on using only EEG sensors or, at most, a few auxiliary (AUX) sensors. Although synchronization with other modalities like motion capture (MoCap) has been achieved (Miyakoshi et al., 2021), knowledge about setting up its triggers and computing latencies during VR experiments is lacking.

1.3. Objectives

The present work contributes to overcoming the existing knowledge gaps in the literature through the following objectives: (1) to demonstrate the Computation of Latencies in Event-related potential Triggers (CLET) as a tool for measuring latencies in multimodal experiments, especially when VR apparatuses are used; and (2) to provide open access to novel datasets, codes, tables, and a tutorial video1 to ensure transparency and reproducibility of results, as well as to enable future improvements of the algorithms.

1.4. Brief outline of the next sections

The work was performed to test the synchronization of triggers in a multimodal experiment designed to monitor biomechanics (specifically gait patterns) and physiological signals. This article is limited to the illustration of the CLET approach. The experimentation is explained in the next section, and the proposed method is detailed in the subsequent section. The computed latencies and their distributions are described in the results section, and the method's advantages compared to the state of the art are covered in the discussion section. Finally, the conclusions, limitations, and future scope for improvement are described in the last section.

2. Methods

2.1. Experimentation

The experimental setup with the rapid serial visual presentation (RSVP) paradigm is shown in Figure 1A. The apparatus included one desktop personal computer (PC1), with Unity and Qualisys Track Manager (QTM) software installed, and one laptop (PC2) with EEG recording software (Brain Vision Recorder) installed. The biomechanics monitoring setup included 3D MoCap cameras (nine Qualisys high-speed cameras and one Miqus camera), and the physiological monitoring setup included a Brain Products LiveAmp with 64-channel wireless EEG, dual-channel EMG, and one PPG sensor. A photodiode was connected as an auxiliary (AUX) sensor to either the LED screen or the HMD at a time. The LED screen was a Sony TV (KDL-75W855C) with dimensions of 167.7 × 96.9 × 7.9 cm and a refresh rate of 100 Hz. The HMD was an HTC VIVE Pro Eye with a field of view of 110° and a refresh rate of 90 Hz.

FIGURE 1

Figure 1. (A) Block diagram showing the experimental setup. Procedure to place and cover the photodiode on (B) the Light Emitting Diode (LED) screen, and (C) the left eyepiece of the head-mounted display (HMD). For both displays, step 1 is to cover the photodiode with black tape. Step 2 is to cover the tape with a piece of black cloth and secure the cloth. Step 3 is to repeat the last step. Although the experiment was conducted in a dark room, these steps ensured that any ambient light from displays or other electronics did not affect the photodiode signals.

As shown in Figure 1A, PC1 was used to control the MoCap apparatus through a wired connection with the QTM software. The same computer was also used for stimulus presentation through the Unity software, which sent triggers to the EEG amplifier (amp.) unit via a USB connection to the wireless trigger box. Both displays were also connected to PC1. Data recorded using the amplifier were sent via Bluetooth to the USB1 and USB2 dongles connected to PC2. The stimuli consisted of ∼100 black and white images. The neutral image was gray with a red cross at the center. These images represented ON (white screen) and OFF (black screen) input signals to the photodiode placed on the display, in line with the protocol described in Cattan et al. (2021). The inter-stimulus interval (ISI) was randomly varied between 1.0 and 1.5 s with a fixed stimulus duration of 0.3 s. The experimental paradigm was written in C# inside the Unity software. The white and black stimuli were assigned the "S1" and "S2" triggers, respectively, in the EEG recordings; the start and end of the recordings were assigned the "S7" and "S8" triggers.
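To make the timing logic concrete, the following minimal sketch reproduces the stimulus schedule in MATLAB (the actual paradigm was written in C# inside Unity and is not reproduced here); the strict white/black alternation and all variable names are illustrative assumptions rather than the released code.

```matlab
% Minimal sketch (MATLAB, not the original Unity C#) of the stimulus timing:
% black/white images shown for 0.3 s, separated by a neutral gray image for
% a random 1.0-1.5 s inter-stimulus interval (ISI).
nStim   = 100;                          % approximate number of stimuli
stimDur = 0.3;                          % stimulus duration (s)
isi     = 1.0 + 0.5 * rand(nStim, 1);   % ISI drawn uniformly from [1.0, 1.5] s

% Assumed strict alternation of "S1" (white) and "S2" (black) triggers;
% the actual presentation order in the experiment may have been randomized.
labels = repmat(["S1"; "S2"], nStim/2, 1);
onsets = cumsum([0; stimDur + isi(1:end-1)]);   % onset of each stimulus (s)
schedule = table(labels, onsets, 'VariableNames', {'Trigger', 'Onset_s'});
disp(schedule(1:5, :))                  % preview the first few scheduled events
```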

The photodiode was placed on the LED screen and covered with black tape and then a black cloth to prevent disturbance from any external light source. The procedure to place and cover the photodiode for both displays is shown in Figure 1B. The entire experiment was performed in a dark room. After the calibration of the Qualisys markers, each test involved starting the QTM recording, then the Brain Vision Recorder (BV Rec.), followed by Unity. When the test was complete, a text notification appeared in the Unity console window. The operator then stopped the software in the reverse order, i.e., Unity, then BV Rec., followed by QTM. The data were synchronized through the triggers and the time points noted in the log files. A similar process was repeated with the photodiode placed on the HMD and covered with black tape and a black cloth; the procedure for placing the photodiode sensor is shown in Figure 1C. Each test lasted ∼5 min. The photodiode data recorded from each display are shown in Figures 2, 3. For better visualization of the signal shape, a 5 s section is shown in Figures 2B, 3B. The methods developed to analyze the recorded photodiode data (see next section) have subtle variations for each display due to the differences in these shapes.

FIGURE 2

Figure 2. Photodiode data recorded from Light Emitting Diode (LED) screen. Panel (A) is scaled to an instance of 10 s data, and panel (B) is scaled to an instance of 5 s data, to show changes in the shape of the signal after the onset of each type of stimulus.

FIGURE 3

Figure 3. Photodiode data recorded from the head-mounted display (HMD). Panel (A) is scaled to an instance of 10 s data, and panel (B) is scaled to an instance of 5 s data, to show changes in the shape of the signal after the onset of each type of stimulus.

2.2. Data analysis

2.2.1. Prerequisites

The algorithm was developed in MATLAB R2021a using built-in functions, except for pop_fileio() (available in the open-source EEGLAB library), which was used to load the data.
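As a minimal sketch of this prerequisite, the photodiode channel might be extracted as follows; the file name and the channel label 'AUX_photodiode' are hypothetical placeholders, and EEGLAB (with its FileIO plugin) is assumed to be installed and on the MATLAB path.

```matlab
% Load a BrainVision recording and pull out the photodiode (AUX) channel.
eeglab nogui;                               % initialize EEGLAB without the GUI
EEG = pop_fileio('recording.eeg');          % hypothetical file name

% 'AUX_photodiode' is a placeholder label; match it to your own montage.
photoIdx  = find(strcmpi({EEG.chanlocs.labels}, 'AUX_photodiode'));
DataPhoto = double(EEG.data(photoIdx, :));  % photodiode signal as a row vector
fs        = EEG.srate;                      % sampling rate (Hz)
t         = (0:numel(DataPhoto) - 1) / fs;  % time axis (s)
```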

2.2.2. Algorithm for CLET using data recorded from the LED screen

Notations: Let triggers sent for the first type (white image) and the second type (black image) of stimuli be S1 and S2, respectively. Let triggers detected for the first type (white image) and the second type (black image) of stimuli be D1 and D2, respectively.

Inputs: directory path, file name, the lower limit of the inter-stimulus interval (Lisi, i.e., 1 s in this study), and thresholds ThD1 and ThD2 for detecting triggers for the first and second types of stimuli, respectively. For the LED screen, ThD1 is the amplitude above which the transient rise in the signal (Figure 2B) is detected as D1. Similarly, ThD2 is the amplitude below which the transient drop in the signal (Figure 2B) is detected as D2. The code first plots the photodiode data extracted from the .eeg file. The user is then required to visually inspect the plot and define approximate amplitude levels for D1 and D2, based on any one of the ∼100 peaks of each type; these values do not need to be precise. For the data shown in Figure 2, ThD1 = 60000 and ThD2 = 20000.
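The visual inspection could, for instance, be implemented with MATLAB's ginput(), as in the sketch below; the released code may handle this interaction differently.

```matlab
% Plot the photodiode trace and read two approximate threshold levels.
figure; plot(t, DataPhoto);
xlabel('Time (s)'); ylabel('Amplitude');
title('Click once near a positive peak (ThD1), once near a negative dip (ThD2)');
[~, yClicks] = ginput(2);   % only the clicked amplitudes are needed
ThD1 = yClicks(1);          % e.g., ~60000 for the LED data in Figure 2
ThD2 = yClicks(2);          % e.g., ~20000 for the LED data in Figure 2
```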

Outputs: arrays of onset times (s) when triggers were sent, tS1 and tS2; arrays of onset times (s) when triggers were detected, tD1 and tD2; an array of latencies between D1 and S1, i.e., LatD1S1 (ms); and an array of latencies between D2 and S2, i.e., LatD2S2 (ms).

I. Steps:

1. Load photodiode data.

2. Compute the gap between the indices of the detected triggers: indxGap = fs × Lisi, where fs is the sampling rate.

3. To compute LatD1S1:

3.1. Define matrix PosPhoto containing 1's and 0's, where 1's represent positions of positive peaks: PosPhoto = (DataPhoto ≥ ThD1)

3.2. For PosPhoto = P1, P2, …, Pn, find the difference between consecutive terms, i.e., PosDiff = (P2 − P1), (P3 − P2), …, (Pn − Pn−1)

3.3. Pad a 0 at the beginning: PosDiff = [0, PosDiff]

3.4. Indices for D1: indxD1 = find(PosDiff == 1)

3.5. Onset samples for D1: onsetsam_D1(1) = indxD1(1)

3.6. For n = 2 : (length of indxD1 − 1):

If indxD1(n) > (indxD1(n − 1) + indxGap), then onsetsam_D1(n) = indxD1(n)

3.7. onsetsam_D1 = (values of onsetsam_D1 ≠ 0)

3.8. tD1 = Time points in DataPhoto corresponding to onsetsam_D1

4. LatD1S1 = (tD1 − tS1) × 1000, in ms.

5. To compute LatD2S2:

5.1. Define matrix NegPhoto containing 1's and 0's, where 1's represent positions of negative peaks: NegPhoto = (DataPhoto ≤ ThD2)

5.2. Substitute [S2/S1] and [D2/D1] in steps 3.2 to 4. A condensed MATLAB sketch of steps I.1 to I.5 is given below.
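The sketch assumes DataPhoto, t, fs, ThD1, ThD2, and Lisi from the preceding steps, plus tS1, the onset times of the sent S1 triggers read from the EEG event markers (not shown here), and equal counts of sent and detected triggers.

```matlab
% Steps I.2-I.4 for the white stimuli (D1 vs. S1) on the LED screen.
indxGap  = round(fs * Lisi);              % step 2: minimum gap in samples
PosPhoto = double(DataPhoto >= ThD1);     % step 3.1: positions of positive peaks
PosDiff  = [0, diff(PosPhoto)];           % steps 3.2-3.3: rising edges, zero-padded
indxD1   = find(PosDiff == 1);            % step 3.4: candidate onset indices

onsetsam_D1 = indxD1(1);                  % step 3.5: keep the first onset
for n = 2:numel(indxD1)                   % steps 3.6-3.7: keep only onsets
    if indxD1(n) > indxD1(n - 1) + indxGap    % separated by at least one ISI
        onsetsam_D1(end + 1) = indxD1(n); %#ok<AGROW> (appending replaces the
    end                                   % zero-removal of step 3.7)
end
tD1     = t(onsetsam_D1);                 % step 3.8: onset times (s)
LatD1S1 = (tD1(:) - tS1(:)) * 1000;       % step 4: latencies in ms

% Step 5: for the black stimuli, threshold the negative deflections instead
% (NegPhoto = DataPhoto <= ThD2) and rerun steps 3.2-4 with tS2 to get LatD2S2.
```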

2.2.3. Algorithm for CLET using data recorded from the HMD

The notations for the HMD data are the same as for the LED screen. The inputs are also defined similarly, except for ThD2, which is the threshold of the smaller peaks between the gaps, as observed in Figure 3B. In this case, ThD1 = 180000 and ThD2 = 8000.

II. Steps:

The algorithm to compute LatD1S1 remains the same as steps 1 to 4 of I in the section "Algorithm for CLET using data recorded from the LED screen."

5. To compute LatD2S2:

5.1. Find the ordinates of all peaks in DataPhoto.

5.2. Define matrix xVals containing 1's and 0's, where 1's represent the positions of peaks.

5.3. xVals = replace the positions of 1's with the abscissas of DataPhoto.

5.4. For n = 1 : (length of signal − indxGap):

If the peaks at xVals(n), xVals(n + 1), …, xVals(n + 10) are all < ThD2,

then yVals = the ordinates of those peaks, with the rest of the values set to 0.

5.5. Position of small peaks: PosSmPhoto = replace the values of yVals ≠ 0 with 1's.

5.6. Substitute [S2/S1], [D2/D1], and [PosSmPhoto/PosPhoto] in steps 3.2 to 4, as in the sketch below.
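In the following sketch of step II.5, findpeaks() from MATLAB's Signal Processing Toolbox stands in for steps 5.1 to 5.3; DataPhoto and ThD2 here refer to the HMD recording, and the run length of 10 peaks follows the text.

```matlab
% Step II.5: black stimuli on the HMD appear as runs of small peaks
% (Figure 3B), so look for ~10 consecutive peaks all below ThD2.
[pks, locs] = findpeaks(DataPhoto);        % 5.1-5.3: peak ordinates/abscissas

PosSmPhoto = zeros(size(DataPhoto));       % 5.5: binary positions of small peaks
for n = 1:numel(pks) - 10                  % 5.4: scan runs of 11 peaks
    if all(pks(n:n + 10) < ThD2)
        PosSmPhoto(locs(n)) = 1;           % mark a candidate small-peak onset
    end
end
% Within one stimulus, several n satisfy the test; the indxGap check of step
% 3.6 later keeps only the first onset of each cluster.
% 5.6: rerun steps 3.2-4 with PosSmPhoto in place of PosPhoto (and S2, D2)
% to obtain LatD2S2.
```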

The outputs LatD1S1 and LatD2S2 computed from steps I and II (tables available via the GitHub link in the Data availability statement) were used to plot distributions for the stimuli sets (see Figure 4).
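For reference, distributions like those in Figure 4 can be plotted from the computed latency arrays as sketched below; the bin width and labels are illustrative choices rather than settings from the released code.

```matlab
% Overlaid latency histograms, one per stimulus type (cf. Figure 4).
figure; hold on;
histogram(LatD1S1, 'BinWidth', 1, 'FaceAlpha', 0.5);   % white stimuli
histogram(LatD2S2, 'BinWidth', 1, 'FaceAlpha', 0.5);   % black stimuli
xlabel('Latency (ms)'); ylabel('Count');
legend('LatD1S1 (white)', 'LatD2S2 (black)');

% Summary statistics as reported in the Results section.
fprintf('White: %.2f +/- %.2f ms\n', mean(LatD1S1), std(LatD1S1));
fprintf('Black: %.2f +/- %.2f ms\n', mean(LatD2S2), std(LatD2S2));
```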

FIGURE 4

Figure 4. Latency distributions for the complete photodiode data recorded from the (A) Light Emitting Diode (LED) screen and (B) head-mounted display (HMD). Blue and pink reflect LatD1S1 and LatD2S2, respectively, while purple reflects the overlap between LatD1S1 and LatD2S2.

3. Results

The Computation of Latencies in Event-related potential Triggers (CLET) method was successfully implemented and evaluated on two distinct virtual reality (VR) apparatuses: a non-immersive setup with an LED screen and an immersive setup with a head-mounted display (HMD). The results from both setups demonstrate the efficacy of the proposed CLET approach for accurately aligning EEG/ERP triggers with the presentation of stimuli, thus enabling precise extraction and analysis of the data.

3.1. Latency computation for LED screen

In the non-immersive VR environment with the LED screen (Figure 4A), the CLET method efficiently computed the latencies for the sets of white and black stimuli. For the white stimuli, the average latency was 121.98 ± 8.71 ms; for the black stimuli, it was 121.66 ± 8.80 ms. The distribution peaks in the 120–125 ms range (Figure 4A). The consistency of the latencies between the two stimulus sets indicates the robustness of the CLET approach in this VR configuration.

3.2. Latency computation for HMD

In the immersive VR environment with the HMD (Figure 4B), the average latencies were 82.80 ± 7.63 ms (mainly distributed between 80 and 85 ms) and 69.82 ± 5.52 ms (mainly distributed between 67 and 77 ms) for the white and black stimuli sets, respectively. The lower latencies observed with the HMD setup compared to the LED screen setup suggest faster temporal dynamics for the immersive VR apparatus (Iwama et al., 2022). Although this observation is consistent with findings in the literature (Cattan et al., 2021), variations are particularly subject to protocols that rely on sending triggers for synchronization (Wang et al., 2016; Lees et al., 2018) and to the specifications of the hardware apparatuses (Rebenitsch and Owen, 2017; Cattan et al., 2018; Andreev et al., 2019; Wilson, 2023).

The results from both VR setups (Figure 4) also highlight the importance of considering latency distributions, along with precision and accuracy (Williams et al., 2021), when computing latencies for aligning epochs, to avoid timing discrepancies that would otherwise lead to misinterpretation of the data (Cattan et al., 2021; Iwama et al., 2022).

4. Discussion

One of the primary strengths of this study lies in the rigorous experimental setup involving the synchronization of various sensors, including EEG, EMG, PPG, a photodiode, and nine 3D plus one 2D (Miqus) MoCap cameras. This multimodal approach enabled a comprehensive investigation of the developed algorithms for computing trigger latencies in VR. A direct comparison of this study with state-of-the-art event-latency computation approaches (Cattan et al., 2018, 2021; Iwama et al., 2022), which were based on a lower number of recorded modalities, would therefore be biased. Nevertheless, Figure 4 demonstrates latencies on par with the cited literature. The novelty lies in the two sets of algorithms proposed for the CLET method to accurately detect triggers and compute latencies for both the LED screen and HMD data. The adaptability of the algorithms to the subtle variations in photodiode signal shape for each display type further highlights their versatility and robustness.

The shorter latencies observed in the HMD setup compared to the LED screen setup can likely be attributed to the hardware characteristics of the display technology, which could facilitate faster triggering and data transmission (Cattan et al., 2018; Andreev et al., 2019).

5. Conclusion, limitations, and future scope

In conclusion, the results of this research successfully demonstrate the effectiveness of the CLET method for accurately computing latencies in event-related potential (ERP) triggers on two different virtual reality (VR) apparatuses. Efficient synchronization of the different sensors and apparatuses also contributed to the validity and applicability of the CLET method in real-world scenarios. The rapid serial visual presentation (RSVP) paradigm used in this study has been recommended for its simplicity and high temporal accuracy in achieving low trigger latencies (Wang et al., 2016).

A limitation of the proposed algorithms is their semi-automated nature. However, open access to the developed CLET codes, along with the novel datasets, tables, and a tutorial video, ensures transparency and reproducibility. This encourages the wider scientific community to adopt and validate the method in their own ERP studies, thereby also fostering improvements to the algorithms. One approach could be to use artificial intelligence (AI) or machine learning (ML)-based clustering method(s) to capture transient changes in the shared datasets, followed by automated thresholding to perform the remaining computation steps of CLET. Additionally, factors related to the photodiode's placement and sensor positioning on each display could have influenced the latency measurements; a separate study could address the best positioning of the photodiode depending on the type of stimuli and display apparatus. It should also be stressed that the HMD used in this study consisted of dual displays, one per eyepiece; averaging latencies calculated for each lens separately could therefore improve accuracy (Cattan et al., 2021). It would be interesting to see the application of CLET in brain-computer interfaces (BCI) extended to other neuroimaging modalities, such as functional magnetic resonance imaging (fMRI) (Levitt et al., 2023) and magnetoencephalography (MEG) (Liesefeld, 2018), to enable further multimodal investigations of brain activity during virtual reality (VR) or augmented reality (AR) experiences. This study was limited to the onset of visual stimuli and offline analysis. With rapid progress in VR/AR and haptic technologies (Lm, 2023), accurate computation of trigger latencies will become even more critical in real-time BCI feedback systems (Putze et al., 2020; Wen et al., 2021), as well as in transcranial magnetic stimulation (TMS)-based neurorehabilitation (Hernandez-Pavon et al., 2023), where inaccurate triggers could have a serious impact on the course of rehabilitation.

Data availability statement

The datasets, codes, supplementary tables, and a tutorial video are freely available at: github.com/BiomedicalEngineeringTechies/CLET.git.

Author contributions

PS designed research, conducted experiments, developed the algorithm, analyzed the data, wrote the manuscript, and contributed to procuring funds to buy instruments. KG validated the results and co-wrote the manuscript. EV helped to set up the experiment. BV, JN, and XS designed the research and co-wrote the manuscript. AH, TH, and GS contributed to setting up the experiment. GS, JN, and XS contributed to the procurement of funds. XS supervised the study. All authors contributed to the article and approved the submitted version.

Abbreviations

CLET, computation of latencies in event-related potential triggers; EEG, electroencephalography; ERP, event-related potential; EMG, electromyography; PPG, photoplethysmography; AUX, auxiliary; VR, virtual reality; LED, light emitting diode; HMD, head-mounted display; MoCap, motion capture; RSVP, rapid serial visual presentation; PC, personal computer; QTM, Qualisys Track Manager; BV Rec, Brain Vision Recorder; BCI, brain-computer interface.

Funding

This research was funded by the European Research Consortium for Informatics and Mathematics and the Norwegian University of Science and Technology, Trondheim, Norway, from 2019 to 2021.

Acknowledgments

We would like to acknowledge the funding organizations for this work.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Footnotes

  1. ^ github.com/BiomedicalEngineeringTechies/CLET.git

References

Abreu, A. L., Fernández-Aguilar, L., Ferreira-Santos, F., and Fernandes, C. (2023). Increased N250 elicited by facial familiarity: an ERP study including the face inversion effect and facial emotion processing. Neuropsychologia 188:108623. doi: 10.1016/j.neuropsychologia.2023.108623

Andreev, A., Cattan, G., and Congedo, M. (2019). Engineering study on the use of head-mounted display for brain-computer interface. arXiv [Preprint]. doi: 10.48550/arXiv.1906.12251

Cattan, G., Andreev, A., Maureille, B., and Congedo, M. (2018). Analysis of tagging latency when comparing event-related potentials. arXiv [Preprint]. doi: 10.48550/arXiv.1812.03066

Cattan, G. H., Andreev, A., Mendoza, C., and Congedo, M. (2021). A comparison of mobile VR display running on an ordinary smartphone with standard PC display for P300-BCI stimulus presentation. IEEE Trans. Games 13, 68–77.

Hernandez-Pavon, J. C., Veniero, D., Bergmann, T. O., Belardinelli, P., Bortoletto, M., Casarotto, S., et al. (2023). TMS combined with EEG: recommendations and open issues for data collection and analysis. Brain Stimul. 16, 567–593. doi: 10.1016/j.brs.2023.02.009

Hoormann, J., Falkenstein, M., Schwarzenau, P., and Hohnsbein, J. (1998). Methods for the quantification and statistical testing of ERP differences across conditions. Behav. Res. Methods Instru. Comput. 30, 103–109.

Huang, J., Yang, P., Xiong, B., Wan, B., Su, K., and Zhang, Z. Q. (2022). Latency aligning task-related component analysis using wave propagation for enhancing SSVEP-based BCIs. IEEE Trans. Neural Syst. Rehabil. Eng. 30, 851–859. doi: 10.1109/TNSRE.2022.3162029

Ignatious, E., Azam, S., Jonkman, M., and De Boer, F. (2023). Frequency and time domain analysis of eeg based auditory evoked potentials to detect binaural hearing in noise. J. Clin. Med. 12:4487. doi: 10.3390/jcm12134487

Iwama, S., Takemi, M., Eguchi, R., Hirose, R., Morishige, M., and Ushiba, J. (2022). Two common issues in synchronized multimodal recordings with EEG: jitter and latency. bioRxiv [Preprint]. doi: 10.1101/2022.11.30.518625

Kiesel, A., Miller, J., Jolicæur, P., and Brisson, B. (2008). Measurement of ERP latency differences: A comparison of single-participant and jackknife-based scoring methods. Psychophysiology 45, 250–274. doi: 10.1111/j.1469-8986.2007.00618.x

Kothe, C. (2023). Simulation and Neuroscience Application Platform (SNAP). San Francisco, CA: Github.

Lees, S., Dayan, N., Cecotti, H., McCullagh, P., Maguire, L., Lotte, F., et al. (2018). A review of rapid serial visual presentation-based brain-computer interfaces. J. Neural Eng. 15:021001. doi: 10.1088/1741-2552/aa9817

Levitt, J., Yang, Z., Williams, S. D., Lütschg Espinosa, S. E., Garcia-Casal, A., and Lewis, L. D. (2023). EEG-LLAMAS: a low-latency neurofeedback platform for artifact reduction in EEG-fMRI. Neuroimage 273:120092. doi: 10.1016/j.neuroimage.2023.120092

Liesefeld, H. R. (2018). Estimating the timing of cognitive operations with MEG/EEG latency measures: a primer, a brief tutorial, and an implementation of various methods. Front. Neurosci. 12:765. doi: 10.3389/fnins.2018.00765

Lm, T. Y. (2023). A touch of virtual reality. Nat. Mach. Intell. 5:557.

Lopez-Calderon, J., and Luck, S. J. (2014). ERPLAB: an open-source toolbox for the analysis of event-related potentials. Front. Hum. Neurosci. 8:213. doi: 10.3389/fnhum.2014.00213

Luck, S. J. (2012). “Event-related potentials,” in APA’s handbook of research methods in psychology: foundations, planning, measures, and psychometrics, Vol. 1, eds H. Cooper, P. M. Camic, D. L. Long, A. T. Panter, D. Rindskopf, and K. J. Sher (American Psychological Association), 523–546.

Miyakoshi, M., Gehrke, L., Gramann, K., Makeig, S., and Iversen, J. (2021). The AudioMaze: An EEG and motion capture study of human spatial navigation in sparse augmented reality. Eur. J. Neurosci. 54, 8283–8307. doi: 10.1111/ejn.15131

Nidal, K., and Malik, A. S. (eds) (2014). EEG/ERP analysis: methods and applications. Boca Raton, FL: CRC Press.

Putze, F., Vourvopoulos, A., Lécuyer, A., Krusienski, D., Bermúdez, I., Badia, S., et al. (2020). Editorial: Brain-Computer Interfaces and augmented/virtual reality. Front. Hum. Neurosci. 14:144. doi: 10.3389/fnhum.2020.00144

Rebenitsch, L., and Owen, C. (2017). Evaluating factors affecting virtual reality display. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Berlin: Springer Verlag.

Stenner, T., Boulay, C., Grivich, M., Medine, D., Kothe, C., Tobiasherzke, G., et al. (2023). Lab streaming layer. San Francisco, CA: Github.

Wang, Z., Healy, G., Smeaton, A. F., and Ward, T. E. (2016). “An investigation of triggering approaches for the rapid serial visual presentation paradigm in brain computer interfacing,” in Proceedings of the 27th Irish Signals and Systems Conference (ISSC), Manhattan, NY.

Wen, D., Fan, Y., Hsu, S. H., Xu, J., Zhou, Y., Tao, J., et al. (2021). Combining brain–computer interface and virtual reality for rehabilitation in neurological diseases: a narrative review. Ann. Phys. Rehabil. Med. 64:101404.

Williams, N. S., McArthur, G. M., and Badcock, N. A. (2021). It’s all about time: precision and accuracy of Emotiv event-marking for ERP research. PeerJ 9:e10700. doi: 10.7717/peerj.10700

Wilson, D. (2023). AnandTech. Exploring input lag inside and out [Internet]. North Carolina: AnandTech.

Wu, C., Wu, W., and Gao, X. (2013). “Measuring ERP latency shifts across experimental conditions using spatial filtering,” in Proceedings of the International IEEE/EMBS Conference on Neural Engineering (NER), Manhattan, NY. doi: 10.1109/NER.2013.6696202

Keywords: motion capture (MoCap), latencies, electroencephalography (EEG), event-related potential (ERP), interface

Citation: Swami P, Gramann K, Vonstad EK, Vereijken B, Holt A, Holt T, Sandstrak G, Nilsen JH and Su X (2023) CLET: Computation of Latencies in Event-related potential Triggers using photodiode on virtual reality apparatuses. Front. Hum. Neurosci. 17:1223774. doi: 10.3389/fnhum.2023.1223774

Received: 16 May 2023; Accepted: 31 August 2023;
Published: 19 September 2023.

Edited by:

Sunil Kumar Telagamsetti, KU Leuven, Belgium

Reviewed by:

Kandala N. V. P. S. Rajesh, VIT-AP University, India
Shishir Maheshwari, Thapar Institute of Engineering and Technology, India

Copyright © 2023 Swami, Gramann, Vonstad, Vereijken, Holt, Holt, Sandstrak, Nilsen and Su. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Piyush Swami, piyushswami@ieee.org; Xiaomeng Su, xiaomeng.su@ntnu.no
