Jonathan's Site

Audio / Video / Cars / Gadgets

Audio predictions for the next 20 years
(2009 - 2029)

So we had some fun discussing the progress that has occurred over the past 20 years; how about a bit of speculation about where we could go in the next 20?

 My crystal ball is pretty good, and definitely long-range, so I’ll take a shot at some topics I believe will be transformational in the field of audio.

True vector-encoded surround formats

 This is something that has been desired for a long time, and certain attempts have been made in the past (Ambisonics being a notable one I admire). But whatever the tech, a new formalized format for encoding a full 3D acoustic space that is totally abstracted from the speaker configuration will arrive and be adopted.

Vector encoding is critical to truly preserving the spatial positioning intent of the artist/producer, much like a vector-encoded image reproduces well at low resolution or at 10x that resolution (say, when printing). Think of the difference between an Illustrator drawing and a bitmap: the bitmap can be scaled, but looks ragged and blocky as you increase magnification 10x, while the Illustrator rendering is perfect at any selected output size.

In contrast to the current discrete mess of 2, 6, 8 or more channels, vector-encoded audio will finally split the responsibilities where they belong: spatial positioning in the artist's hands, and adaptive, environment- and equipment-specific rendering in the hands of the consumer.

Imaging and resolution of placement are then controlled by the playback system and its dynamic configuration.

This paradigm is adaptable to binaural (headphones), 2ch or multi-speaker arrays.
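To make the "abstracted from the speaker configuration" idea concrete, here is a toy sketch of how a renderer might map a positional source onto whatever emitters happen to exist. The 2D layout, speaker names, and simple dot-product panning are my own illustrative assumptions; a real format would use full 3D rendering along the lines of VBAP or Ambisonic decoding:

```python
import math

# Hypothetical speaker layout: unit direction vectors (x, y) per emitter.
# The renderer, not the content, knows this layout.
SPEAKERS = {
    "front_left":  (-0.707, 0.707),
    "front_right": (0.707, 0.707),
    "rear_left":   (-0.707, -0.707),
    "rear_right":  (0.707, -0.707),
}

def render_gains(source_dir):
    """Map an abstract source direction onto the available speakers.

    Gain is proportional to the (clamped) dot product between the
    source direction and each speaker direction, then normalized so
    overall level stays constant regardless of layout.
    """
    raw = {}
    for name, (sx, sy) in SPEAKERS.items():
        dot = source_dir[0] * sx + source_dir[1] * sy
        raw[name] = max(0.0, dot)
    norm = math.sqrt(sum(g * g for g in raw.values()))
    if norm == 0.0:
        return raw
    return {name: g / norm for name, g in raw.items()}

# A source straight ahead splits equally between the two front speakers,
# whether the room has 4 emitters or 400.
gains = render_gains((0.0, 1.0))
```

Swap in a different `SPEAKERS` table (headphone HRTF directions, a 2-channel pair, a hundred-emitter array) and the same encoded content renders without remixing.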

Get ready to see systems with dozens, even hundreds of uniquely addressable sound reproduction emitters. Mine will definitely be one of those ;)

Since a vector-encoded sound source could be more compact than high-res multichannel, expect to see delivery via all channels, even the network. Platforms such as Blu-ray that support extensible audio container formats will likely be among the first to provide this on physical media.

Sound system to room interfaces

Over the past 20 years, we have seen huge strides in spectral management and the introduction of usable temporal solutions (e.g. Audyssey), and these will continue to evolve.

In the future, we will see amplitude-variant adjustments to both temporal and spectral profiles.

Like an engine's RPM-based fuel map, the system will react to many input variables and adaptively select the appropriate profile, determining which parameters to adjust given the conditions.

So when you play your system at an average level of 70 dB, a certain profile is in place to handle the slight imbalance between woofer and panel (let's say the woofer is louder). Then you crank it up to get into that song you love, and at a 90 dB average the panel is imbalanced relative to the woofer and getting shrill in the room. An amplitude-variant map would apply the appropriate frequency (spectral) adjustment to bring the two back in line, as well as a unique temporal adjustment to minimize a high-frequency resonance node that forms at high volumes.

To deliver this, the power envelopes of the system will be tested and modeled to achieve as accurate a reproduction as possible at any volume, including limiters to maintain a target maximum THD.
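As a rough sketch of what an amplitude-variant map could look like in software. The profile levels and adjustment values below are invented for illustration; a real system would store full spectral and temporal filter sets measured at each playback level:

```python
# Toy amplitude-variant correction map: measured average SPL -> adjustments.
# Values are made up; think of each entry as a full correction profile.
PROFILES = {
    70: {"panel_trim_db": 1.5, "notch_3khz_db": 0.0},   # quiet listening
    90: {"panel_trim_db": -2.0, "notch_3khz_db": -4.0},  # loud: tame the shrillness
}

def correction_for(avg_spl_db):
    """Interpolate linearly between the nearest measured-level profiles."""
    levels = sorted(PROFILES)
    if avg_spl_db <= levels[0]:
        return dict(PROFILES[levels[0]])
    if avg_spl_db >= levels[-1]:
        return dict(PROFILES[levels[-1]])
    lo = max(l for l in levels if l <= avg_spl_db)
    hi = min(l for l in levels if l >= avg_spl_db)
    t = (avg_spl_db - lo) / (hi - lo)
    return {k: PROFILES[lo][k] + t * (PROFILES[hi][k] - PROFILES[lo][k])
            for k in PROFILES[lo]}

# Midway between the two measured levels, the corrections blend smoothly.
halfway = correction_for(80)
```

The fuel-map analogy holds: the controller continuously looks up (and blends between) calibrated operating points rather than applying one static correction.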

An increase in vertical integration of speaker systems

I foresee that more and more speaker houses will go to tighter and tighter integration of drivers, cabinet systems, active crossovers and amplification. All reasonable designs will have topologies similar to a Meridian DSP speaker (Meridian has always been ahead of the curve).

Most respected solutions will have:

  • Digital delivery of content and command data (vol, room correction, steering, etc.)

  •  Highly customized DSP with driver-specific correction curves (or dynamically adaptable algorithms)

  • A dedicated, optimal-match amplifier able to meet all SPL and THD goals for the ‘system’

  • Implementation of, and integration with, high-function room correction solutions

 The more advanced designs will be implementing ‘steerable’ imaging, where users can select optimizations for movies vs. music. This feature will also be leveraged by vector encoded positional audio.

 Speaker tech

 Imaging arrays will be implemented using Hypersonic Sound Systems emitters to provide pinpoint sound sources with minimal room effects.

The steerable many-small-driver array designs pioneered by Yamaha in their "Digital Sound Projector" series will see greater adoption by vendors and wider deployment as individual speakers that collaborate under a regime like the one illustrated above in 'Vertical integration'.

Air-velocity-based infra-woofers (0–35 Hz) will be perfected and mass-marketed. The current representative, the Thigpen Rotary Woofer, will evolve and become a standard high-performance-installation 'must-have' at a 'reasonable' sub-$5K price point.

But new implementations, leveraging smart materials and ever-increasing computational and DSP power, will deliver more conventional solutions that don't require an infinite-baffle-sized rear-wave room. They might not get into the single digits, but they could do 10 Hz pretty clean and loud.

On ESLs I have a ton of ideas, but some are patent-worthy, so sorry, the crystal ball has a 'non-disclosure' cloud over it there ;)

Radical doesn’t even begin to describe some of them.

But here's an obvious direction for MartinLogan and other ESL vendors. As outlined in 'Vertical integration', and evidenced by speakers like the Source and the powered centers, we will see active crossovers with DSP speaker corrections, fully dedicated and matched amplification, and DSP-implemented amplitude management (SPL limiters, etc.). Basically, crank it as much as you want and it will always sound good, and it won't go out in a puff of smoke either :)

[Sidebar] This is a bit of a marketing conundrum for many vendors, as for some reason a lot of audiophiles think they are better at picking a good amplification match for their speakers.

It's as if Corvettes were sold without an engine because their purchasers believed they could better select an engine for it afterwards. I've seen the audio equivalent of a Corvette with a 4-cylinder Chevette engine, a noisy diesel, a steam engine, a tank turbine, a 2 kW electric, oh, and maybe the occasional LT-1 ;)[/Sidebar]

 Room tuning

The science of room tuning will evolve with the introduction of highly automated spatial measurement systems (from the analysis of pictures, they can create a 3D model of the space). Using a combination of actual room-correction measurement data plus the 3D model, such a system will run full acoustical modeling and analysis, resulting in recommendations for equipment location and treatment quantity, type and placement.

Used iteratively, it will allow the advanced user to tune their system with a precision that only the top people in the field can barely achieve today.

With the rise of highly integrated speaker array solutions, along with the sophisticated room-correction systems outlined above, we will see the introduction of integrated active room correction solutions.

Aiming specifically at the low frequencies, we could see panels and other smart-material-based devices implement a combination of sound reinforcement and room-mode mitigation in one active element.

The process of measuring room acoustics will become vastly simpler and fool-proof. A fixed-dimension multi-mic array on a mechanized stand is placed at the ‘prime’ position. Then a fully automated measurement process physically moves the array while issuing the measurement tones through the actual speaker system.

Instead of requiring a big dedicated acoustic measurement computer, or co-opting your PC or laptop, this measurement system will be a basic (reasonable-cost) box connected to the network. It will relay all its captured data to a cloud-based compute array running sophisticated modeling software that can crunch through hundreds of individual measurements and model the acoustic space with incredible precision in all three axes mentioned previously (spectral, temporal and dynamic). Using cool graphical UIs presented in a web browser, the user can then see exactly what the models look like and optionally 'tune' the corrections and limits to taste.

The imaging would look like the fluid-dynamics modeling of automotive wind-tunnel testing, along with the stress-analysis visualizations applied in materials science.

Once the correction maps are computed, they become accessible to the preamp or speaker processors involved in the user's system. Users will be able to store and recall many profiles to align with various scenarios (curtains closed, room full of people, just me, etc.).
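A minimal sketch of that store-and-recall behavior. The scenario names and the single-parameter "correction maps" are placeholders for what would really be full correction data sets:

```python
# Hypothetical profile store living in a preamp or speaker processor.
class ProfileStore:
    def __init__(self):
        self._profiles = {}

    def store(self, name, correction_map):
        # Copy on store so later edits to the source dict don't leak in.
        self._profiles[name] = dict(correction_map)

    def recall(self, name):
        # Copy on recall so the active profile can be tweaked safely.
        return dict(self._profiles[name])

store = ProfileStore()
store.store("curtains_closed", {"bass_trim_db": -1.0})
store.store("room_full", {"bass_trim_db": 2.0})

# Guests arrive: recall the profile measured for a full, damped room.
active = store.recall("room_full")
```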

 Smarter source to rendering management

Content will start to include much more metadata about the options contained in the content delivery. A Blu-ray disc, for example, could carry info on which soundtrack is the highest resolution / best fidelity. It would contain specific information about aspect ratio (this is already there; it just needs to be propagated and used further downstream), and many more things about the content.
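As an illustration of how a downstream component might use such metadata to pick the best soundtrack automatically. The track entries and the simple quality ranking are assumptions of mine, not any real disc format:

```python
# Hypothetical per-title metadata: each soundtrack advertises its specs.
TRACKS = [
    {"codec": "dts",    "channels": 6, "sample_rate": 48000, "bits": 16},
    {"codec": "truehd", "channels": 8, "sample_rate": 96000, "bits": 24},
    {"codec": "ac3",    "channels": 6, "sample_rate": 48000, "bits": 16},
]

def best_track(tracks):
    """Pick the highest-fidelity soundtrack by a simple quality key:
    bit depth first, then sample rate, then channel count."""
    return max(tracks, key=lambda t: (t["bits"], t["sample_rate"], t["channels"]))

choice = best_track(TRACKS)  # the 24-bit / 96 kHz / 8-channel track
```

Today the user digs through disc menus to make this choice; with propagated metadata, the player or preamp could make it (and announce it) on the user's behalf.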

Propagating this information through the rendering chain will allow all system components along the way to orchestrate their configurations for an optimal viewing/listening experience.

This means preamps, source playback systems and video rendering systems all have to communicate bi-directionally (using HDMI CEC and other protocols to-be-developed) and self-configure.

The user interfaces for all this will move away from being NASA control centers understood only by the indoctrinated geek-hood, to truly end-user, result-oriented interactions ("I want to watch a movie, but it's late at night").

The visibility of actual in-home data and constructive feedback loops

In this highly connected era, and with all the data-gathering and compute horsepower in modern A/V systems, we will see a dramatic increase in solutions that implement bi-directional information exchange between the A/V systems deployed in homes and the manufacturers or service companies that pop up to help users get the most out of their investments.

With all the aggregated data from the installed base, manufacturers or service organizations will be able to refine future product offerings to minimize confusion, address acoustical, placement or configuration needs with great certainty of outcome based on the depth and breadth of the data set.

Solutions that allow vendors to perform remote diagnostics and config changes will continue to propagate (BTW, my Denon AVP preamp already has this).

  

So there you have a few of my thoughts on where this is going over the next couple of decades, and if you think any of this is too radical, believe me, I held back ;-)