
Will the next round of Zeppelin releases use "de-mixing" (and maybe AI tech)?


ArmsofAtlas1977

Recommended Posts

Hello everyone

It's been four years since The Beatles' Hollywood Bowl re-release came out, where they "de-mixed" the tracks to improve the sound quality. Obviously, with all the Zeppelin bootlegs out there, the potential to do this with Zeppelin too is huge. So has there been any word of Jimmy Page commenting on this, or hinting at it? Hollywood Bowl was recorded on three tracks; most Zeppelin soundboards are just two.

I'm also wondering if we're going to see any AI upscaling. It would be a fairly straightforward process at this point to take soundboards and use them as a guide to generate simulated sounds/instruments.

It may even be possible to restore lower-quality bootlegs like Providence '73!
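To give a concrete idea of what the "de-mixing" step looks like in practice, here's a minimal sketch using the open-source Spleeter library to split a soundboard transfer into stems. The file and folder names are just placeholders, and how usable the stems come out obviously depends on the source tape.

# Minimal sketch: split a soundboard transfer into stems with Spleeter.
# 'soundboard.wav' and 'stems/' are placeholder names.
from spleeter.separator import Separator

# The pretrained 4-stem model returns vocals, drums, bass, and 'other'
# (guitars and keys end up in 'other').
separator = Separator('spleeter:4stems')

# Writes stems/soundboard/vocals.wav, drums.wav, bass.wav, and other.wav
separator.separate_to_file('soundboard.wav', 'stems')

Whatever Page's team might use officially would presumably be far more sophisticated, but that's the basic idea.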


I have not heard anything about the band using the "de-mix" technology. I've used it to do stereo remasters of 9/29/71, 3/14/69, and 1/22/73, with 9/28/71 and 5/25/75 to come shortly. Here's a thread on this site where I go into details and provide examples of some of the projects I've worked on. As far as I know, no one else is using this technology in the Zeppelin remaster community. 
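To give a rough idea of the kind of thing involved -- this is a simplified sketch, not my actual chain, and the stem names and pan positions are only examples -- the re-panning step that turns separated stems into a stereo image can be as simple as:

# Rough sketch of re-panning separated stems into a stereo image.
# Stem file names and pan positions are placeholders, not real settings.
import numpy as np
import soundfile as sf

def pan(mono, position):
    # position: -1.0 = hard left, 0.0 = center, +1.0 = hard right
    angle = (position + 1) * np.pi / 4   # constant-power pan law
    return np.column_stack([mono * np.cos(angle), mono * np.sin(angle)])

stems = {                                # stems from one separation, so equal length
    'vocals.wav': 0.0,
    'drums.wav': 0.1,
    'bass.wav': -0.2,
    'other.wav': 0.4,                    # guitars/keys
}

mix = None
for name, position in stems.items():
    audio, rate = sf.read(name)
    if audio.ndim > 1:                   # fold any stereo stem down to mono first
        audio = audio.mean(axis=1)
    placed = pan(audio, position)
    mix = placed if mix is None else mix + placed

mix /= np.max(np.abs(mix))               # normalize so the sum doesn't clip
sf.write('stereo_remaster.wav', mix, rate)

The constant-power pan law just keeps the overall level steady as a stem moves off-center; cleaning up separation artifacts is a separate job entirely.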

 

Edited by SteveZ98

  • 4 months later...

AI tech and de-mixing are really interesting and will keep evolving over the next few years. Imagine taking a good-quality soundboard and splitting it into multi-tracks; you could work that into good official material.

It increases the chances of more live albums, like a best-of compilation from the 1977 or 1980 tours, or more material along the lines of Japan '71/'72 or Australia/New Zealand '72. Ideally, one record for each tour (LZ made 30 tours in their career).


22 minutes ago, zeppelin_starship said:

AI tech and de-mixing are really interesting and will keep evolving over the next few years. Imagine taking a good-quality soundboard and splitting it into multi-tracks; you could work that into good official material.

It increases the chances of more live albums, like a best-of compilation from the 1977 or 1980 tours, or more material along the lines of Japan '71/'72 or Australia/New Zealand '72. Ideally, one record for each tour (LZ made 30 tours in their career).

It would not surprise me one bit if in five years' time the tech has evolved to where you can take a vocal of Robert's from, say, Montreal 1975, run it through a processor that has Robert's voice samples from '70-'71, and come back with Montreal 1975 having a '71-era Robert vocal applied to the contemporary songs. Maybe cheating, but it would be cool as hell.


  • 4 weeks later...
On 4/19/2021 at 11:39 AM, zeppelin_starship said:

AI tech and de-mixing are really interesting and will keep evolving over the next few years. Imagine taking a good-quality soundboard and splitting it into multi-tracks; you could work that into good official material.

It increases the chances of more live albums, like a best-of compilation from the 1977 or 1980 tours, or more material along the lines of Japan '71/'72 or Australia/New Zealand '72. Ideally, one record for each tour (LZ made 30 tours in their career).

You'll enjoy seeing what is possible with AI technology for degraded Zeppelin boots. Stay tuned.


This is such a fascinating topic. One of my top picks would be the April 27, 1969 Fillmore show. Being able to upscale it and fix some of the cuts (most notably Killing Floor) has always been a daydream of mine. It's a fantastic candidate: a great recording and a killer performance. Plus there is an AUD to use as a guide for the missing portions, and in addition, you have good to excellent recordings of the same setlist from that stretch of shows, including the April 24 SBD, which has a crystal clear capture of Jones's bass tone in the left channel.
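Just to illustrate the patching idea (not anything that has actually been done to the 4/27/69 tape), once an AUD patch has been level-matched, EQ'd, and time-aligned by hand, the splice itself is only a crossfade. The file names, splice point, and fade length below are invented:

# Illustrative crossfade splice of an AUD patch into an SBD cut.
# File names, splice point, and fade length are hypothetical.
import numpy as np
import soundfile as sf

sbd, rate = sf.read('killing_floor_sbd.wav')      # recording with the cut
aud, _ = sf.read('killing_floor_aud_patch.wav')   # aligned patch, same channel count/rate assumed

cut_at = int(215.0 * rate)   # where the SBD tape cuts (seconds -> samples)
fade = int(0.5 * rate)       # half-second crossfade

# Equal-power crossfade curves
t = np.linspace(0, np.pi / 2, fade)
fade_out = np.cos(t)
fade_in = np.sin(t)
if sbd.ndim > 1:             # apply per channel if the files are stereo
    fade_out = fade_out[:, None]
    fade_in = fade_in[:, None]

head = sbd[:cut_at - fade]
overlap = sbd[cut_at - fade:cut_at] * fade_out + aud[:fade] * fade_in
tail = aud[fade:]

sf.write('killing_floor_patched.wav', np.concatenate([head, overlap, tail]), rate)

The hard part is everything upstream of the splice: matching pitch, speed, and tone between two very different sources.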


27 minutes ago, Thalassophile said:

This is such a fascinating topic. One of my top picks would be the April 27, 1969 Fillmore show. Being able to upscale it and fix some of the cuts (most notably Killing Floor) has always been a daydream of mine. It's a fantastic candidate: a great recording and a killer performance. Plus there is an AUD to use as a guide for the missing portions, and in addition, you have good to excellent recordings of the same setlist from that stretch of shows, including the April 24 SBD, which has a crystal clear capture of Jones's bass tone in the left channel.

Here's a start on Killing Floor from 4/27/69, but Robert's voice is too far in the background in a lot of the song to do much with.
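In case anyone wants to experiment along the same lines, the basic rebalance move is to boost the separated vocal stem before summing it back with the accompaniment. This is only a sketch with placeholder file names and gain, and on a capture where the vocal is this buried the separation artifacts may make it not worth doing:

# Sketch of a simple vocal rebalance: lift the separated vocal stem,
# then sum it back with the accompaniment. Values are placeholders.
import numpy as np
import soundfile as sf

vocals, rate = sf.read('vocals.wav')       # stems from a two-stem separation
backing, _ = sf.read('accompaniment.wav')

boost_db = 4.0                             # how far to lift the vocal
gain = 10 ** (boost_db / 20)               # dB -> linear gain

n = min(len(vocals), len(backing))         # guard against off-by-one lengths
mix = vocals[:n] * gain + backing[:n]
mix /= max(1.0, np.max(np.abs(mix)))       # only scale down if it would clip
sf.write('rebalanced.wav', mix, rate)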

 

Edited by SteveZ98

Sounds awesome! I think what people are anticipating in the near-ish future is an AI program that will take everything you have been doing to the next level, using other recordings as a reference. Sort of an audio version of the face-mapping techniques used in film. Example: recreating a young Luke Skywalker in the final episode of Season 2 of The Mandalorian.

Edited by Thalassophile

14 hours ago, Thalassophile said:

Sounds awesome! I think what people are anticipating in the near-ish future is an AI program that will take everything you have been doing to the next level, using other recordings as a reference. Sort of an audio version of the face-mapping techniques used in film. Example: recreating a young Luke Skywalker in the final episode of Season 2 of The Mandalorian.

The technology already exists, sort of -- see https://medium.com/the-research-nest/voice-cloning-using-deep-learning-166f1b8d8595 -- but it's still in its infancy and largely calibrated for *spoken* audio. I'd reckon in about a year or two it'll be able to do what you're describing.

There's a huge emphasis on AI in audio restoration right now, with iZotope leading the pack. Spleeter, too, works extremely well, and in many cases outperforms iZotope's rebalancing tool, so I wouldn't be surprised if a lot of the innovation comes from outside the industry giants. There's also definitely been a lot of chatter about developing machine learning tools to reconstruct degraded audio, particularly where non-degraded audio from the same (or a similar) track can be used as a reference.
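If anyone wants to try the Spleeter route, the two-stem model is the closest analogue to a rebalance pass: it hands you a vocal stem and an everything-else stem that you can recombine at whatever levels you like. A minimal example, with placeholder input/output names:

# Minimal Spleeter two-stem split, roughly equivalent to a rebalance pass.
# 'bootleg.flac' and 'separated/' are placeholder names.
from spleeter.separator import Separator

separator = Separator('spleeter:2stems')   # vocals + accompaniment
separator.separate_to_file('bootleg.flac', 'separated')
# Produces separated/bootleg/vocals.wav and accompaniment.wav,
# which can then be remixed at new levels.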

 

