In this phase of the project, I am exploring how to remotely control effect parameters in Logic Pro using Max/MSP. Rather than processing sound inside Max/MSP itself, the goal is to use Max/MSP as an interactive controller, transmitting parameter data via MIDI CC (Control Change) messages to enable real-time adjustment of effects in Logic Pro.
Current Progress: Designing Interaction in Max/MSP (Using MIDI CC for Transmission)
I have now decided to use MIDI CC as the data transmission method between Max/MSP and Logic Pro, because it maps directly onto Logic Pro's effect parameters while providing a smooth, continuous control experience.
Key Interaction Design Considerations:
- How will the audience interact?
- Should interaction be gesture-based (e.g., Leap Motion, Kinect)?
- Should it rely on external MIDI controllers (e.g., faders, knobs)?
- Can spatial tracking (e.g., Kinect) allow the audience’s movement to influence sound effects?
- Which effect parameters should be controlled?
- The initial design allows the audience to influence reverb, filtering, delay, and other key effects to create a dynamic sonic experience.
- Each parameter will be mapped to a specific MIDI CC number, allowing Logic Pro to receive real-time control data from Max/MSP and apply it to the corresponding sound effect.
- Interaction methods can be continuous (smooth transitions) or discrete (preset switching), with further testing to refine the response.
- Mapping MIDI CC Between Max/MSP and Logic Pro
- Max/MSP generates MIDI CC data, which is then sent to Logic Pro via a MIDI port.
- In Logic Pro, each CC message is mapped to different effect parameters (e.g., CC#1 controls reverb depth, CC#2 adjusts delay time, etc.).
- To ensure smooth parameter transitions, data filtering and smoothing will be applied so that effect changes do not jump abruptly (see the sketch after this list).
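The mapping and smoothing logic will live inside the Max/MSP patch (objects such as [scale], [slide], and [ctlout] cover this role), but as a rough illustration of the same idea, here is a minimal Python sketch using the mido library. The port name "IAC Driver Bus 1", the CC assignments, and the smoothing coefficient are assumptions for demonstration only; in practice Logic Pro's side of the mapping is set up with MIDI Learn and the smoothing amount is tuned by ear.

```python
# Minimal sketch (Python + mido) of the CC mapping idea described above.
# Assumptions: a virtual MIDI port named "IAC Driver Bus 1" exists (e.g. the macOS IAC bus
# that Logic Pro listens to), and CC#1 = reverb depth / CC#2 = delay time as in the text.
import mido

PORT_NAME = "IAC Driver Bus 1"   # assumed virtual port name
CC_REVERB_DEPTH = 1              # assumed CC numbers; Logic's mapping is set via MIDI Learn
CC_DELAY_TIME = 2

class SmoothedCC:
    """Exponentially smooths a 0.0-1.0 control value before sending it as a MIDI CC."""
    def __init__(self, port, cc_number, channel=0, alpha=0.2):
        self.port = port
        self.cc_number = cc_number
        self.channel = channel
        self.alpha = alpha        # smaller alpha = smoother, slower response
        self.state = 0.0
        self.last_sent = None

    def update(self, value):
        # Exponential moving average to avoid abrupt jumps in the effect parameter.
        self.state += self.alpha * (value - self.state)
        cc_value = max(0, min(127, int(round(self.state * 127))))
        # Only send when the quantized value actually changes, to avoid flooding the port.
        if cc_value != self.last_sent:
            self.port.send(mido.Message("control_change",
                                        channel=self.channel,
                                        control=self.cc_number,
                                        value=cc_value))
            self.last_sent = cc_value

if __name__ == "__main__":
    with mido.open_output(PORT_NAME) as port:
        reverb = SmoothedCC(port, CC_REVERB_DEPTH)
        # Feed in a raw sensor/gesture value (normalized to 0.0-1.0); here a hard jump
        # from 0.0 to 1.0 is smoothed into a gradual ramp of CC messages.
        for _ in range(50):
            reverb.update(1.0)
```

In the Max/MSP patch itself, the [slide] object can play a similar role to the exponential smoothing shown here.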
Next Steps: Expanding Functionality
The next phase of development will focus on integrating sound selection and playback mechanisms to enhance the system’s flexibility and dynamism.
- How should different audio tracks be selected and triggered based on interaction?
- Should pre-recorded sound files be used, or should effects be applied to real-time external audio input?
- How can smooth transitions between different effect states be achieved instead of abrupt changes?
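One possible answer to the last question, sketched under the same assumptions as above: store each effect state as a set of target CC values and ramp between states over a fixed duration instead of switching instantly. The state definitions and the two-second ramp time below are illustrative choices, not final design decisions.

```python
# Sketch of ramping between two effect "states" (sets of CC values) instead of switching abruptly.
# Assumptions: same virtual port and CC numbers as in the previous sketch.
import time
import mido

PORT_NAME = "IAC Driver Bus 1"
STATE_A = {1: 20, 2: 40}    # CC number -> value (e.g. light reverb, short delay)
STATE_B = {1: 110, 2: 90}   # e.g. deep reverb, long delay

def ramp_between(port, start, target, duration=2.0, steps=60):
    """Linearly interpolate every CC value from start to target over `duration` seconds."""
    for i in range(1, steps + 1):
        t = i / steps
        for cc, start_val in start.items():
            value = int(round(start_val + t * (target[cc] - start_val)))
            port.send(mido.Message("control_change", control=cc, value=value))
        time.sleep(duration / steps)

if __name__ == "__main__":
    with mido.open_output(PORT_NAME) as port:
        ramp_between(port, STATE_A, STATE_B)   # e.g. triggered by a gesture or preset button
```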
Once these questions are addressed, I will further refine the interaction model in Max/MSP, ensuring that user gestures, controllers, or spatial movement can intuitively and smoothly influence Logic Pro’s sound processing, creating a more dynamic and immersive experience.
This approach ensures clear interaction logic, an immersive auditory experience, and intuitive control, ultimately building an interactive sound environment that seamlessly connects real-world actions with digital sound processing. More updates will follow as the system evolves!
Written by Jingxian Li (s2706245) & Tianhua Yang (s2700229)

