This system would use artificial intelligence to map not only the external stimuli a person is exposed to (sounds, colors, images, and the surrounding environment, as captured by cameras and microphones) but also the activity across the entire brain that those stimuli evoke.
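As a rough illustration of what pairing external stimulus streams with recorded brain activity could look like in software, the sketch below timestamps readings from camera, microphone, and EEG sources into a single time-aligned record. The reader callables (read_camera_frame, read_audio_chunk, read_eeg_sample) are hypothetical stand-ins for real device drivers, not details given in the text.

```python
import time
from dataclasses import dataclass

@dataclass
class AlignedSample:
    """One time-aligned record of external stimuli and brain activity."""
    timestamp: float
    camera_frame: bytes        # raw image data from the camera
    audio_chunk: bytes         # raw audio data from the microphone
    eeg_channels: list[float]  # one voltage reading per EEG electrode

def capture_aligned(read_camera_frame, read_audio_chunk, read_eeg_sample,
                    duration_s: float, rate_hz: float) -> list[AlignedSample]:
    """Poll all three sources at a fixed rate so that each stimulus
    snapshot is stored alongside the brain activity it evoked."""
    samples = []
    interval = 1.0 / rate_hz
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        samples.append(AlignedSample(
            timestamp=time.time(),
            camera_frame=read_camera_frame(),
            audio_chunk=read_audio_chunk(),
            eeg_channels=read_eeg_sample(),
        ))
        time.sleep(interval)
    return samples
```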
The purpose of this system is to develop a comprehensive communication channel between the brain and the computing system it interacts with. Once the calibration cycle is complete, the AI system can decode any type of brainwave activity and associate it with specific information about what that activity represents, such as what the person is seeing or hearing, their sense of touch, smell, or pain, a muscle movement, or an emotion. This processing is possible because of the calibrated mapping of brainwave activity, combined with a repository of previously mapped EEG (or similar) recordings. Together these allow the AI system to learn not only which types of brainwave activity correspond to which types of mental activity, but also how a given user's activity compares to that of past subjects.
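One plausible way to frame such a calibration cycle is as supervised learning over labeled EEG epochs recorded while the user is exposed to known stimuli. The minimal sketch below assumes pre-extracted feature vectors (for example, band power per electrode) and uses a logistic-regression decoder; both the feature representation and the choice of classifier are illustrative assumptions, not details specified in the text.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical percept categories drawn from those named in the text.
LABELS = ["sight", "sound", "touch", "movement", "smell", "pain", "emotion"]

def calibrate(eeg_features: np.ndarray, percept_labels: np.ndarray):
    """Fit a decoder mapping EEG feature vectors to percept categories.

    eeg_features:   (n_epochs, n_features) array, e.g. band power per channel
    percept_labels: (n_epochs,) array of indices into LABELS, recorded while
                    the user was exposed to known, controlled stimuli
    """
    x_train, x_test, y_train, y_test = train_test_split(
        eeg_features, percept_labels, test_size=0.2, random_state=0)
    decoder = LogisticRegression(max_iter=1000).fit(x_train, y_train)
    # Held-out accuracy gives a rough check that calibration succeeded.
    print(f"calibration accuracy: {decoder.score(x_test, y_test):.2f}")
    return decoder

# After calibration, any new epoch can be decoded into a percept label:
# percept = LABELS[decoder.predict(new_epoch_features.reshape(1, -1))[0]]
```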
This would involve a very large data set, and current computing technology cannot store the required volume of information on a single small device suitable for a CBI/BCI; the mapped EEG (or similar) stimulus intelligence would likely fill entire data centers. Instead of storing all the data locally on the device, a second AI system controls the fetching of the various types of data needed to build a custom brainwave model for each user of the technology. This allows the system to map specific regions, and even the epicenters of specific types of brainwave activity, on the fly, without having to carry multiple data centers around in a backpack.
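The split between a small on-device component and bulk remote storage could be sketched as a fetch-and-cache layer: the device requests only the reference data relevant to the brain regions currently being mapped and combines it with the user's own calibration samples. The service URL, the per-region endpoint layout, and the shape of the returned data below are all assumptions made for illustration.

```python
import json
import urllib.request
from functools import lru_cache

# Hypothetical remote service holding the bulk EEG reference data that
# would otherwise require carrying "data centers in a backpack".
REFERENCE_SERVICE = "https://example.com/brainmap/v1"

@lru_cache(maxsize=64)
def fetch_region_reference(region: str) -> dict:
    """Fetch previously mapped reference data for one brain region,
    caching it on-device so repeated lookups avoid the network."""
    url = f"{REFERENCE_SERVICE}/regions/{region}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def build_user_model(active_regions: list[str], user_samples: dict) -> dict:
    """Assemble a per-user model from only the regions currently needed,
    pairing remote reference maps with this user's calibration data."""
    return {
        region: {
            "reference": fetch_region_reference(region),
            "user_calibration": user_samples.get(region),
        }
        for region in active_regions
    }
```

Caching only the regions in active use is what lets the on-device footprint stay small while the full mapped corpus remains in the data center.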