Emerging haptic devices have given individuals who are blind the ability to explore images in real time, which has long been a challenge for them. However, when only haptic interaction is available and no visual feedback is given, image comprehension demands substantial time and cognitive resources. This research developed an approach to improve blind users' exploration performance by providing assisting strategies, delivered through various sensory modalities, when particular exploratory behaviors are detected. The approach comprises three fundamental components: the user model, the assistance model, and the user interface. The user model recognizes users' image exploration procedures; a learning framework based on a spike-timing neural network was developed to classify the most frequently applied procedures. The assistance model provides different assisting strategies depending on which exploration procedure is being performed. User studies were conducted to understand the goal of each exploration procedure, and assisting strategies were designed around the identified goals; these strategies give users hints about object locations and spatial relationships. The user interface then determines the optimal sensory modality for delivering each assisting strategy. Within-participant experiments were performed to compare three sensory modalities (vibration, sound, and virtual magnetic force) for each assisting strategy. A complete computer-aided system was developed by integrating all of the validated assisting strategies, and experiments were conducted to evaluate this system with each strategy delivered through its optimal modality. Evaluation metrics included task performance and workload assessment.
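As an illustrative aside, the sketch below shows, in broad strokes, how exploration procedures might be recognized from haptic trajectory data: trajectory features (speed and turning rate) are encoded with simple leaky integrate-and-fire neurons, and the resulting spike counts are classified with a nearest-centroid rule. This is a minimal sketch under assumed function names, parameters, and procedure labels; it is not the spike-timing neural network framework developed in this work.

```python
# Illustrative only: spike-based encoding of a 2-D exploration trajectory
# plus a simple nearest-centroid classifier. All names and parameters here
# are assumptions for demonstration, not the authors' implementation.
import numpy as np

def lif_spike_train(signal, threshold=1.0, leak=0.9):
    """Convert a non-negative 1-D signal into a binary spike train with a
    leaky integrate-and-fire neuron."""
    v, spikes = 0.0, []
    for x in signal:
        v = leak * v + x          # integrate input with leak
        if v >= threshold:        # fire and reset membrane potential
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return np.array(spikes)

def trajectory_features(xy, dt=0.01):
    """Encode speed and turning rate of an (N, 2) trajectory as spike counts."""
    speed = np.linalg.norm(np.diff(xy, axis=0), axis=1) / dt
    heading = np.arctan2(np.diff(xy[:, 1]), np.diff(xy[:, 0]))
    turn = np.abs(np.diff(heading))
    return np.array([
        lif_spike_train(speed / (speed.max() + 1e-9)).sum(),
        lif_spike_train(turn / (turn.max() + 1e-9)).sum(),
    ], dtype=float)

class NearestCentroid:
    """Classify feature vectors by distance to per-class mean features."""
    def fit(self, X, y):
        self.labels_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.labels_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.labels_[d.argmin(axis=1)]

# Example with synthetic trajectories for two hypothetical procedures.
rng = np.random.default_rng(0)
contour = np.cumsum(rng.normal(0.0, 0.5, (200, 2)), axis=0)  # slow, smooth motion
scan = np.cumsum(rng.normal(0.0, 2.0, (200, 2)), axis=0)     # fast, jittery motion
X = np.array([trajectory_features(contour), trajectory_features(scan)])
y = np.array(["contour_following", "scanning"])
clf = NearestCentroid().fit(X, y)
print(clf.predict(X))  # recovers the two training labels
```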