Finally, I read something about the similarities between the neural auditory mechanisms of birds and humans, and how birdsong can somehow trigger processes within the bird's own brain that mirror its neural reactions to an action or experience (most likely a pleasurable one). The bird might thus be thought to be auto-stimulating, or simulating certain experiences, simply by producing certain sound patterns.
Per Wikipedia's entry for 'Bird vocalization':
"Because mirror neurons exhibit both sensory and motor activity, some researchers have suggested that mirror neurons may serve to map sensory experience onto motor structures."
This made me wonder about the work of neuroscientists and engineers in psychoacoustics, and perhaps in music: are they trying to emulate or model those mechanisms, or to synthesize sounds that artificially trigger human emotions, thoughts, or even physiological reflexes, based on these avian auto-stimulus mechanisms? For example, one of the Neuron's parameter sets includes the "Funny" parameter. Since the NAS Engine exploits psychoacoustic principles, I wondered whether more directly targeting particular neural aesthetic and emotional perceptions, using sound characteristics clinically determined to elicit certain responses, is an area of active research. The subject seems especially suited to sound synthesis when one considers just how vivid and lush the songs of some bird species sound even to our human ears, which (ostensibly) are not the natural targets of the birds' amorous chirps.
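Just to make the idea of "creating certain sound patterns" concrete: a single birdsong syllable is often a rapid frequency sweep with an amplitude envelope, and that much is easy to sketch in a few lines. This is purely a hypothetical illustration (plain Python, no relation to the Neuron's actual NAS Engine); the function name `chirp_syllable` and all its parameter values are my own invention.

```python
import math

def chirp_syllable(f_start, f_end, duration, sample_rate=44100):
    """Generate one crude 'birdsong syllable': a linear frequency sweep
    from f_start to f_end (Hz) with a smooth amplitude envelope.
    Returns a list of floats in [-1.0, 1.0]."""
    n = int(duration * sample_rate)
    samples = []
    phase = 0.0
    for i in range(n):
        t = i / sample_rate
        # instantaneous frequency sweeps linearly across the syllable
        f = f_start + (f_end - f_start) * (t / duration)
        phase += 2 * math.pi * f / sample_rate
        # half-sine envelope: fade in, fade out
        env = math.sin(math.pi * i / n)
        samples.append(env * math.sin(phase))
    return samples

# one short upward sweep, roughly in a songbird's range
syllable = chirp_syllable(2000.0, 4000.0, 0.1)
```

Stringing a few of these sweeps together with varied start/end frequencies already starts to sound "bird-like," which hints at why these patterns are such an attractive target for synthesis.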
Of course, it's fun to tease researchers and engineers about "meddling" in the field of bio-engineering, so I might continue my tasteless Mengele allusions. But shucks, all I know is that there's something weird goin' on in my Neuron (VS).