Cochlear implants (CIs) are sensory neuroprostheses that partially restore hearing by electrically stimulating the auditory nerve to mimic normal hearing. Despite their success and ongoing advances in both hardware and software, CI patients can still struggle to understand speech, most notably in complex auditory scenes with competing talkers, a challenge often referred to as the cocktail party problem. Efforts to develop new CI algorithms to overcome this challenge rely on CI simulators and vocoders for testing with normal hearing (NH) listeners. However, recent studies have suggested that these tools fail to faithfully reproduce the stimuli as perceived by CI patients. It is therefore critical to develop tools capable of producing more accurate representations of CI percepts. To this end, this work proposes a framework that incorporates physiological models of the peripheral auditory nerve. Using these models, the framework generates electrical stimulation patterns that elicit auditory nerve responses closer to those observed under NH conditions. The stimulation patterns generated by the framework were evaluated with a vowel identification task, performed not by CI patients but by a classifier trained using deep learning techniques. The results give insight into how the framework could be applied to the development and validation of CI stimulation strategies.
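To make the evaluation setup concrete, the sketch below illustrates the idea of replacing a human listener with a classifier in a vowel identification task. Everything here is hypothetical and greatly simplified: the "neural responses" are toy two-dimensional formant-like features with Gaussian noise, and a nearest-centroid rule stands in for the deep-learning classifier described in the paper; the actual auditory nerve models, features, and network architecture are not shown.

```python
import math
import random

random.seed(0)

# Toy formant-like feature centroids (F1, F2 in Hz) for three vowels.
# These values are illustrative, not taken from the paper.
CENTROIDS = {
    "a": (800.0, 1200.0),
    "i": (300.0, 2300.0),
    "u": (350.0, 800.0),
}

def simulate_response(vowel, noise=50.0):
    """Stand-in for the auditory nerve model's response to one stimulus:
    the vowel's centroid perturbed by Gaussian noise."""
    f1, f2 = CENTROIDS[vowel]
    return (f1 + random.gauss(0.0, noise), f2 + random.gauss(0.0, noise))

def classify(features):
    """Nearest-centroid rule playing the role of the trained classifier."""
    def dist(vowel):
        c = CENTROIDS[vowel]
        return math.hypot(features[0] - c[0], features[1] - c[1])
    return min(CENTROIDS, key=dist)

def identification_accuracy(n_trials=300):
    """Run repeated identification trials and report the fraction correct,
    mirroring how a stimulation strategy could be scored objectively."""
    vowels = list(CENTROIDS)
    correct = sum(
        classify(simulate_response(vowels[i % len(vowels)])) == vowels[i % len(vowels)]
        for i in range(n_trials)
    )
    return correct / n_trials

print(f"vowel identification accuracy: {identification_accuracy():.2f}")
```

The appeal of this design is that the classifier gives a repeatable, quantitative score for each candidate stimulation strategy without requiring listening experiments with CI patients at every iteration.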