Abstract
Activity-dependent changes in membrane excitability are observed in neurons across brain areas and represent a cell-autonomous form of plasticity (intrinsic plasticity; IP) that does not in itself involve alterations in synaptic strength (synaptic plasticity; SP). Non-homeostatic IP may play an essential role in learning, e.g. by changing the action potential threshold near the soma. A computational problem, however, arises from the implication that such amplification does not discriminate between synaptic inputs and may therefore reduce the resolution of input representation. Here, we investigate the consequences of IP for the performance of an artificial neural network in (a) the discrimination of unknown input patterns and (b) the recognition of known/learned patterns. While threshold reductions in the output layer indeed reduce its ability to discriminate patterns, they benefit the recognition of known but incompletely presented patterns. An analysis of thresholds and IP-induced threshold changes in published sets of physiological data obtained from whole-cell patch-clamp recordings from L2/3 pyramidal neurons in (a) the primary visual cortex (V1) of awake macaques and (b) the primary somatosensory cortex (S1) of mice in vitro reveals a difference between resting and threshold potentials of ∼15 mV for V1 and ∼25 mV for S1, and a total plasticity range of ∼10 mV (S1). The most efficient activity pattern for lowering the threshold is paired cholinergic and electrical activation. Our findings show that threshold reduction promotes a shift in neural coding strategies from faithful representation to interpretative assignment of input patterns to learned object categories.
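The trade-off described above (a lowered spike threshold aiding recognition of incomplete learned patterns while degrading discrimination) can be illustrated with a minimal toy model. This is a hypothetical sketch, not the network used in the study: a single threshold output unit with Hebbian-style weights matching one stored binary pattern, tested with a strict threshold versus an IP-lowered one. All patterns, weights, and threshold values here are illustrative assumptions.

```python
import numpy as np

n = 100
stored = np.zeros(n)
stored[:50] = 1.0                 # learned pattern: first 50 inputs active
w = stored.copy()                 # Hebbian-style weights match the stored pattern

def responds(x, threshold):
    """Output unit fires when the summed synaptic drive crosses the spike threshold."""
    return float(w @ x) >= threshold

partial = np.zeros(n)
partial[:25] = 1.0                # incomplete presentation: half the pattern

novel = np.zeros(n)
novel[28:68] = 1.0                # unrelated pattern with partial overlap (drive = 22)

high_thr = 45.0                   # strict threshold: faithful representation
low_thr = 20.0                    # IP-lowered threshold: interpretative recognition

# Strict threshold discriminates: only the complete learned pattern fires.
print(responds(stored, high_thr), responds(partial, high_thr), responds(novel, high_thr))
# → True False False

# Lowered threshold recognizes the incomplete pattern, but the overlapping
# novel pattern now also fires: discrimination is reduced.
print(responds(stored, low_thr), responds(partial, low_thr), responds(novel, low_thr))
# → True True True
```

With the high threshold the unit responds only to an exact match (accurate representation); lowering the threshold lets the unit complete the learned pattern from a partial cue, at the cost of also accepting sufficiently overlapping non-learned input, mirroring the shift from faithful representation to category assignment.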