Because this project is a two-player co-operative puzzle game in which one player is visually impaired but has normal hearing, we believe voice prompts will be one of the most crucial channels of information transfer between the two players.

To meet this design requirement, I developed a voice-trigger system script in Unity, so that the sighted player can press the up, down, left and right arrow keys on the keyboard to play voice commands with different meanings (e.g. 'forward', 'backward', 'left', 'right', 'danger', 'need help', 'find the exit'). These voice lines also carry different emotional colours (e.g. nervousness, encouragement, confirmation), so the messages are not only clear instructions but also carry a certain emotional impact, enhancing immersion and the sense of character. Finally, after exporting the SoundBank from Wwise and integrating it into Unity, the game's voice system can be triggered directly in Unity.
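A minimal sketch of what such a trigger script could look like, assuming the Wwise Unity integration is installed and a SoundBank has been generated; the event names ("Play_Voice_Forward" and so on) and the class name are hypothetical placeholders, not the actual names used in the project:

```csharp
using UnityEngine;

// Hypothetical voice-trigger sketch: each arrow key posts one Wwise voice
// event. Assumes the corresponding SoundBank is already loaded at runtime.
public class VoiceTrigger : MonoBehaviour
{
    void Update()
    {
        // Map arrow keys to voice events (event names are assumptions).
        if (Input.GetKeyDown(KeyCode.UpArrow))
            AkSoundEngine.PostEvent("Play_Voice_Forward", gameObject);
        else if (Input.GetKeyDown(KeyCode.DownArrow))
            AkSoundEngine.PostEvent("Play_Voice_Backward", gameObject);
        else if (Input.GetKeyDown(KeyCode.LeftArrow))
            AkSoundEngine.PostEvent("Play_Voice_Left", gameObject);
        else if (Input.GetKeyDown(KeyCode.RightArrow))
            AkSoundEngine.PostEvent("Play_Voice_Right", gameObject);
    }
}
```

Posting events by name like this keeps the script decoupled from the audio content: the emotional colour of each line (nervous, encouraging, confirming) lives entirely in the Wwise project and can be re-authored without touching the Unity code.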
In the end, the system successfully realises a closed interaction loop of key input → voice event playback → command prompt. The visually impaired player can judge environmental information by hearing alone, which deepens the collaboration between players and improves the game's playability.


