The nested hierarchy of overt, mouthed, and imagined speech activity evident in intracranial recordings

Neuroimage. 2023 Apr 1;269:119913. doi: 10.1016/j.neuroimage.2023.119913. Epub 2023 Jan 31.

Abstract

Recent studies have demonstrated that it is possible to decode and synthesize various aspects of acoustic speech directly from intracranial measurements of electrophysiological brain activity. To continue progressing toward a practical speech neuroprosthesis for individuals with speech impairments, a better understanding and modeling of imagined speech processes are required. The present study uses intracranial brain recordings from participants who performed a speaking task with trials consisting of overt, mouthed, and imagined speech modes, representing decreasing degrees of behavioral output. Speech activity detection models are constructed from spatial, spectral, and temporal brain activity features, and the features and model performances are characterized and compared across the three degrees of behavioral output. The results indicate a hierarchy in which the channels relevant to the lower behavioral output modes form nested subsets of the channels relevant to the higher behavioral output modes. This provides important insights toward the elusive goal of developing imagined speech decoding models that are as effective as their better-established overt speech counterparts.
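The nested-channel finding described above can be illustrated with a toy sketch. This is not the paper's method or data: the band-power features are synthetic, the nearest-class-mean detector and the univariate channel ranking are stand-in assumptions, and the active channel sets for each mode are chosen by hand to mimic the reported imagined ⊂ mouthed ⊂ overt hierarchy.

```python
import numpy as np

rng = np.random.default_rng(0)
N_CHANNELS = 16  # illustrative electrode count

def simulate_mode(active, n_trials=100):
    """Synthetic per-trial band-power features (e.g. high-gamma).
    Speech trials carry extra power on the mode's active channels."""
    X = rng.normal(size=(2 * n_trials, N_CHANNELS))
    y = np.repeat([1, 0], n_trials)      # 1 = speech, 0 = silence
    X[:n_trials][:, active] += 2.0       # add signal to speech trials
    return X, y

def relevant_channels(X, y, threshold=1.0):
    """Flag channels whose speech-minus-silence mean power
    difference exceeds a threshold (a simple univariate ranking)."""
    diff = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
    return set(np.flatnonzero(diff > threshold).tolist())

def detection_accuracy(X, y):
    """Nearest-class-mean speech activity detector, split-half evaluated."""
    idx = rng.permutation(len(y))
    train, test = idx[: len(y) // 2], idx[len(y) // 2 :]
    mu1 = X[train][y[train] == 1].mean(axis=0)
    mu0 = X[train][y[train] == 0].mean(axis=0)
    d1 = ((X[test] - mu1) ** 2).sum(axis=1)
    d0 = ((X[test] - mu0) ** 2).sum(axis=1)
    return ((d1 < d0).astype(int) == y[test]).mean()

# Hand-chosen nested active sets mirroring decreasing behavioral output.
modes = {"overt": range(6), "mouthed": range(4), "imagined": range(2)}
channels, accs = {}, {}
for name, active in modes.items():
    X, y = simulate_mode(list(active))
    channels[name] = relevant_channels(X, y)
    accs[name] = detection_accuracy(X, y)
    print(f"{name}: accuracy={accs[name]:.2f}, channels={sorted(channels[name])}")

# The recovered relevant channels form nested subsets across modes.
assert channels["imagined"] <= channels["mouthed"] <= channels["overt"]
```

Under these assumptions, the channel ranking recovers each mode's active set, and the subset check at the end reproduces the nesting in miniature; the real study establishes this structure from measured sEEG activity rather than constructing it.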

Keywords: Brain-computer interface (BCI); Imagined speech; Speech activity detection; Speech decoding; Stereotactic electroencephalography (sEEG).

Publication types

  • Research Support, Non-U.S. Gov't
  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Brain / physiology
  • Brain-Computer Interfaces*
  • Electroencephalography / methods
  • Face
  • Humans
  • Mouth
  • Speech* / physiology