Wednesday, April 2, 2025

A Model of Virtuosity: Jordan Rudess Jams with AI at MIT

A captivated audience assembled at the MIT Media Lab in September to witness a real-time collaboration between renowned musician Jordan Rudess and two collaborators. One, violinist and vocalist Camilla Bäckman, has performed with Rudess on previous occasions. The other, an artificial intelligence model informally dubbed the jam_bot, which Rudess developed with a team at MIT over the preceding months, made its public debut as a work in progress.

As they jammed, Rudess and Bäckman traded the familiar signals of seasoned performers: interweaving lines, locked eyes, the shared understanding that comes from years of honing a craft. Rudess' interactions with the jam_bot, by contrast, suggested a new and unfamiliar kind of exchange. During a Bach-inspired duet, Rudess alternated between playing a few measures himself and letting the AI continue in the same baroque vein. As the model responded, Rudess' expression shifted through bewilderment, concentration, and curiosity. At the piece's end, he candidly told the audience, "It's a blend of immense enjoyment and genuine difficulty."

Voted the greatest keyboardist of all time in a Music Radar poll, Rudess is best known for his work with the platinum-selling, Grammy-winning progressive metal band Dream Theater, which marks its 40th anniversary with a tour this autumn. He is also a solo artist whose latest album was released in September; an educator who shares his skills through extensive online tutorials; and the founder of the software company Wizdom Music. His classical training, which began at The Juilliard School when he was nine years old, underpins a mastery of traditional technique combined with a gift for improvisation and an appetite for creative exploration.

Last spring, Rudess served as a visiting artist at MIT's Center for Art, Science and Technology (CAST), collaborating with researchers from the Media Lab's Responsive Environments group on new AI-powered music technology. His key collaborators on the project were Media Lab graduate students Lancelot Blanchard, who researches musical applications of generative AI informed by his own training in classical piano, and Perry Naseck, an artist and engineer specializing in interactive, kinetic, light- and time-based media. Overseeing the effort is Professor Joseph Paradiso, head of the Responsive Environments group and a longtime admirer of Rudess' music. Paradiso arrived at the Media Lab in 1994 with a background in physics and engineering and a parallel career designing and building synthesizers to explore avant-garde music. His group has a tradition of investigating musical frontiers through novel interfaces, sensor networks, and unconventional data sources.

The researchers set out to develop a machine learning model that channels Rudess' distinctive musical style and technique. In a paper published online by MIT Press in September, co-authored with Eran Egozy, a professor of the practice in music technology at MIT, they outline their vision for "symbiotic virtuosity": humans and computers duetting in real time, learning from each duet as it happens, and making performance-worthy new music in front of a live audience.

Blanchard took the lead on training the AI model, with Rudess providing consistent feedback and guidance, while Naseck explored ways to make the system's activity visible to the audience.

"Audiences are accustomed to immersive experiences at concerts, with elaborate lighting, graphics, and scenic elements, so we needed a platform to let the AI build its own relationship with the audience," Naseck says. In early demos, this took the form of a sculptural installation whose lighting shifted whenever the AI changed chords. At the concert on September 21, a bank of petal-shaped panels suspended behind Rudess sprang to life through choreography driven both by the structure of the performance and by what the AI model was generating.

"When you watch a jazz performance, the eye contact and nods between musicians build anticipation for the audience," Naseck adds. "The AI is effectively generating the score and then playing it. How do we show what's coming next and communicate that?"

Naseck designed and programmed the sculpture from the ground up at the Media Lab, working with Brian Mayton (mechanical design) and Carlo Mandolini (fabrication), and drawing inspiration from a sculpture by visiting student Madhav Lavakare that maps music to elements moving in space. Spinning and tilting its petals at speeds ranging from subtle to dramatic, the kinetic sculpture distinguished the AI's contributions during the performance from those of the human musicians while conveying the emotion and energy of its output: swaying gently when Rudess took the lead, or unfurling and refurling like a blossom as the AI model generated majestic chords for an improvised adagio. The latter was one of Naseck's favorite moments of the show.

"At the finale, Jordan and Camilla left the stage, giving the AI full autonomy to chart its own path," Naseck recalls. The sculpture's impact was striking: by keeping the stage dynamic, it amplified the grandeur of the AI-generated chords, and the audience sat on the edge of their seats.
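As a rough illustration of the kind of mapping the sculpture performs (its actual control code is not described in detail, so the numbers and names below are hypothetical stand-ins), one can imagine louder, more AI-driven passages translating into faster, wider petal motion:

```python
import math

def petal_angles(num_petals: int, intensity: float, ai_share: float,
                 t: float) -> list[float]:
    """Hypothetical mapping from music to per-petal tilt angles (degrees).

    intensity -- 0..1, loudness/density of the current passage
    ai_share  -- 0..1, fraction of the material coming from the AI
    t         -- time in seconds
    """
    spread = 15 + 60 * intensity      # gentle sway up to a full unfurl
    speed = 0.5 + 2.5 * ai_share      # AI-led passages animate faster
    phase = 2 * math.pi / num_petals  # stagger petals into a traveling wave
    return [spread * math.sin(speed * t + i * phase)
            for i in range(num_petals)]

# A quiet human-led moment versus an AI-generated climax
print(petal_angles(12, intensity=0.2, ai_share=0.0, t=1.0))
print(petal_angles(12, intensity=0.9, ai_share=1.0, t=1.0))
```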

Rudess emphasizes that the goal is to showcase musical mastery: "I want to demonstrate what's possible and push the boundaries."

Blanchard started with the Music Transformer, an open-source neural network architecture developed by MIT Assistant Professor Anna Huang SM '08.

"Music transformers work in a similar way to large language models," Blanchard explains. "In the same way that ChatGPT generates the most probable next word, the model predicts the most probable next notes."
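To make that analogy concrete, here is a minimal sketch of the next-note loop in Python. The project's actual tokenizer and network are not public, so the vocabulary, the stub model, and every function name below are illustrative, not the team's code:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB_SIZE = 128  # hypothetical: one token per MIDI pitch, much simplified

def model_logits(context: list[int]) -> np.ndarray:
    """Stand-in for a trained music transformer scoring every next note."""
    # A real transformer would attend over the whole context; this stub
    # returns random scores so the sampling loop below can run.
    return rng.normal(size=VOCAB_SIZE)

def sample_next_note(context: list[int], temperature: float = 1.0) -> int:
    """Pick the next note the way an LLM picks its next word."""
    logits = model_logits(context) / temperature
    probs = np.exp(logits - logits.max())  # softmax, numerically stable
    probs /= probs.sum()
    return int(rng.choice(VOCAB_SIZE, p=probs))

phrase = [60, 64, 67]  # C-E-G as MIDI pitches
phrase.append(sample_next_note(phrase))
print(phrase)  # the phrase plus a sampled continuation
```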

To capture Rudess' voice, Blanchard fine-tuned the model on a curated set of bass lines, chords, and melodies that Rudess recorded in his New York studio. Blanchard also prioritized agility, programming the system to respond quickly and in real time to Rudess' improvisations.
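In outline, that fine-tuning step is ordinary continued training on the new recordings. Here is a hedged PyTorch sketch, assuming the studio sessions have already been tokenized into note-ID sequences; the model, data, and hyperparameters are placeholders rather than the project's actual setup:

```python
import torch
import torch.nn.functional as F

def fine_tune(model: torch.nn.Module,
              sequences: list[torch.Tensor],  # tokenized recordings, 1-D LongTensors
              epochs: int = 3,
              lr: float = 1e-5) -> None:
    """Continue training a pretrained next-note model on an artist's material.

    A small learning rate nudges the model toward the new style without
    erasing what it learned during pretraining.
    """
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for seq in sequences:
            inputs, targets = seq[:-1], seq[1:]  # predict each next token
            logits = model(inputs.unsqueeze(0))  # (1, T, vocab_size)
            loss = F.cross_entropy(logits.squeeze(0), targets)
            opt.zero_grad()
            loss.backward()
            opt.step()
```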

Blanchard says the team came to reframe the project around anticipation: the model would propose possible musical futures, which were only actualized based on Jordan's decisions in the moment.

"How can an AI respond? How can I engage in a dialogue with it?" Rudess asks. "That's the most innovative aspect of our work."

Generative AI and music already had precedents, notably startups like Suno and Udio that generate music solely from text prompts. "Those tools are really interesting, but their lack of control is a significant limitation," Blanchard remarks. For Jordan, what mattered was the ability to anticipate what the model was about to do. "If he could see the AI was about to make a decision he didn't want, he could restart the generation or hit a kill switch to take back control."

Beyond giving Rudess a real-time visual preview of the model's musical choices, Blanchard built in interactive options that let the musician shape the performance, from prompting the AI to generate chords or melodic motifs to initiating a call-and-response sequence.
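A speculative sketch of that control loop follows; every mode, name, and threshold here is hypothetical, meant only to show how a proposal can be previewed, rejected, or cut off before it ever sounds:

```python
import random
from dataclasses import dataclass
from enum import Enum, auto

class Mode(Enum):
    CHORDS = auto()             # ask the AI for harmonic support
    MELODY = auto()             # ask the AI for melodic motifs
    CALL_AND_RESPONSE = auto()  # trade phrases back and forth

@dataclass
class Proposal:
    notes: list[int]  # candidate continuation, previewed by the performer

def propose(context: list[int], mode: Mode) -> Proposal:
    """Stand-in for the fine-tuned model proposing a musical future."""
    return Proposal(notes=random.choices(range(128), k=4))

def next_notes(context: list[int], mode: Mode,
               kill_switch: bool, accept) -> list[int]:
    """Actualize the AI's proposal only if the performer allows it."""
    if kill_switch:
        return []  # performer takes back full control
    proposal = propose(context, mode)
    while not accept(proposal):            # rejected futures never sound;
        proposal = propose(context, mode)  # restart the generation instead
    return proposal.notes

# Example: accept only proposals that stay in a comfortable register
played = next_notes([60, 64, 67], Mode.MELODY, kill_switch=False,
                    accept=lambda p: all(48 <= n <= 84 for n in p.notes))
print(played)
```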

"Jordan is the driving force behind everything that's happening," Blanchard says.

Though the residency has concluded, the collaborators see many avenues for continuing the research. Naseck wants to experiment with ways the sculpture could respond to Rudess directly, perhaps through techniques like capacitive sensing. "We'd like to collaborate with him more directly, responding to a range of nuanced gestures and body language," he says.

While the MIT collaboration focused on using the technology in Rudess' own performances, it's easy to imagine other applications. Paradiso vividly recalls his first encounter with the technology: he played a chord progression, and the model generated the lead lines, Rudess-style keyboard runs unfurling over the foundation he laid down. "You'll soon be able to incorporate AI plugins of your favorite musicians into your own work, and customize and manipulate their sounds to suit your creative vision," he says. "This project is pioneering a realm that's yet to be explored."

Rudess, for his part, is keen to explore how his craft might be applied in an academic setting. Because the samples he recorded to train the model resemble the ear-training exercises he uses with students, he envisions the model itself serving as an educational tool. "This project has enormous potential beyond mere entertainment value," he remarks.

The venture into artificial intelligence is the latest stage of Rudess' ongoing exploration of music technology. "This is the next step," he asserts. Not all of his peers share his enthusiasm, and he understands why. "I can have empathy for a musician who feels threatened; I understand their perspective," he concedes. "My goal is to join forces with others in harnessing this technology for the greater good."

"At the Media Lab, it's crucial to think about how humans and AI work together for the benefit of all," says Paradiso. "How is AI going to lift us all up? Ideally, it will bring us to a new frontier where we're more empowered and capable."

"Jordan is ahead of the pack," Paradiso adds. "Once the connection has been made, people will start to notice."

Rudess' interest in the Media Lab predates his residency: he had been intrigued by the Knitted Keyboard developed by Irmandy Wicaksono PhD '24, a textile researcher in the Responsive Environments group. Alongside his ties to Berklee College of Music, he says, it has been a revelation to discover the cutting-edge music work happening at MIT.

During two visits to Cambridge last spring, accompanied by his wife, theatrical and musical producer Danielle Rudess, Rudess reviewed final projects in Paradiso's course on digital music controllers, whose curriculum included videos of his own past performances. He brought Osmose, a gesture-driven synthesizer, to an interactive music systems course taught by Egozy, co-creator of the video game "Guitar Hero." He also shared his thoughts on improvisation with a composition class; played GeoShred, a touchscreen instrument he co-developed with Stanford University researchers, alongside student musicians in MIT's Laptop Ensemble and Arts Scholars program; and explored immersive audio in MIT's Spatial Sound Lab. During his September trip to campus, he taught a masterclass for pianists in MIT's Emerson/Harris Program, which provides 67 students and fellows with support for conservatory-level music instruction.

"With each visit I feel an undeniable thrill," Rudess explains. "I'm struck by how my diverse musical inspirations and creative pursuits have converged here in a truly remarkable way."
