Although the sensitivity and performance of microphones have improved considerably since Alexander Graham Bell first patented one, they still have a major disadvantage that researchers at Carnegie Mellon University may have finally overcome using two ordinary video cameras.
Put a mic in a room with some musicians and, while you'll capture every last note and nuance of their individual performances, you're left with a single recording that has it all mixed together. To make the performance sound its best, you ideally want to record each instrument and each musician separately, so that a sound engineer with an experienced ear can remix each part afterward.
Software tools have been developed to extract individual tones from an audio recording, but the results just aren't as good as recording each sound source with its own microphone. That's why mixing consoles are often so gigantic and elaborate: countless microphones with limited pickup patterns must be set up to properly capture every component of a musical performance, from vocals to instruments.
There's really no way to redesign microphones to distinguish individual sources among the sound vibrations traveling through the air, which is why researchers at Carnegie Mellon University's Robotics Institute, part of its School of Computer Science, have turned to video cameras instead. Strum the strings of a guitar and not only does it create sound waves traveling through the air, but the body of the guitar itself vibrates too. With the right equipment, those vibrations can be visualized and analyzed to recreate the sounds produced, even when no audio is recorded.
Optical microphones, as these camera systems are called, are not a new idea, but what the CMU researchers came up with, and shared in a recent paper describing a dual-shutter optical vibration sensor, is a way to get them working with inexpensive camera gear.
The new system shines a bright laser onto a vibrating surface, like the body of a guitar, and captures the movement of the resulting speckled light pattern. Because humans can perceive sounds oscillating at up to 20,000 times per second, optical microphones have typically relied on expensive high-speed cameras to capture physical vibrations just as quickly. But the new CMU system uses cameras running at just 63 frames per second, which would appear far too slow to catch a vibration that occurs 20,000 times per second.
The clever breakthrough here is the simultaneous use of two different camera types: one with a global shutter that captures entire video frames at once, resulting in distinct speckle patterns, and one with a rolling shutter that captures frames line by line from the top of the sensor to the bottom, resulting in distorted speckle patterns that actually contain more information about how the patterns are changing over time.
Using a custom algorithm, the frames captured by each camera can be compared to pinpoint the movements of the vibrating laser speckle patterns at an effective rate of up to 63,000 samples per second, as fast as an expensive high-speed camera could manage.
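The arithmetic behind that 63,000 figure can be sketched simply: because a rolling shutter exposes each sensor row at a slightly different moment, every frame contributes one vibration sample per row rather than one sample per frame. This is a minimal illustration, assuming a sensor with 1,000 rows (the article doesn't specify the actual sensor dimensions):

```python
def effective_sample_rate(frame_rate_hz: float, sensor_rows: int) -> float:
    """Effective vibration sampling rate of a rolling-shutter camera.

    Each row of a rolling-shutter frame is exposed at a slightly
    different time, so a frame yields one sample per row instead of
    one sample per frame.
    """
    return frame_rate_hz * sensor_rows

# Assumed values for illustration: 63 fps camera, 1,000-row sensor.
rate = effective_sample_rate(63, 1_000)
print(rate)  # 63000.0 samples per second
```

That effective rate comfortably exceeds the 40,000 samples per second that, per the Nyquist criterion, would be needed to capture the full 20,000 Hz range of human hearing.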
The approach allows audio to be extracted individually from multiple sources in a single video, for example from multiple musicians each playing their own guitar, or even from multiple loudspeakers each playing different music.
The extracted audio isn't as clear or high-fidelity as what a traditional mic can pick up, but the optical mic could offer mixing engineers an easy way to monitor individual instruments during a live performance, and there's little doubt that the quality of the extracted audio will improve further over time. The system has other interesting applications outside of music, too. A video camera monitoring all the machinery on a factory floor, or aimed at the engine of a running car, could detect when individual parts or components are making unusual noises, indicating that service may be required before a problem actually becomes a problem.