How to Load an Audio File Using Fetch
Introduction
In this article, let's learn how to use the Fetch API in JavaScript to bring in an audio file we can work with when using the Web Audio API.
Once we fetch our audio file, we can process it as a node and do all sorts of cool manipulations. For example, we can change its playback speed, apply filters, reverb and more.
Creating an Audio Context
The first step when working with the Web Audio API is to create a new audio context and assign it to a variable.
const ctx = new AudioContext();
Now we have an AudioContext object on which we can call Web Audio API methods.
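Depending on the browsers you need to support, note that older versions of Safari only expose a prefixed constructor. A common fallback pattern (a sketch, assuming you need that legacy support) looks like this:

// Fall back to the prefixed constructor for older Safari versions.
const ctx = new (window.AudioContext || window.webkitAudioContext)();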
Next, we'll declare a variable called audio, but we won't assign it just yet.
const ctx = new AudioContext();
let audio;
Fetching an Audio File
Now, let's call fetch and pass it the path of the audio file we want to use. In this case, we will point to an mp3 which lives in a folder I've called "sounds".
const ctx = new AudioContext();
let audio;
fetch("./sounds/phantom-cities.mp3");
Putting the Audio Data Into a Buffer
Since fetch returns a Promise that resolves to a Response object, we can call the then method on it. What we're doing is taking the response data from the mp3 and reading it into a buffer, using the response's arrayBuffer method in the body of the callback.
const ctx = new AudioContext();
let audio;
fetch("./sounds/phantom-cities.mp3")
  .then(data => data.arrayBuffer());
The reason for putting the data into a buffer is so we can process it as an audio node without latency. (This also lets us apply a wide variety of additional processing to it.)
Decoding the Audio Data
But before we can actually work with the data as an audio node, we'll need to decode the data currently in the buffer.
So we can chain another then and call the audio context's decodeAudioData method, passing in the arrayBuffer as its argument.
const ctx = new AudioContext();
let audio;
fetch("./sounds/phantom-cities.mp3")
  .then(data => data.arrayBuffer())
  .then(arrayBuffer => ctx.decodeAudioData(arrayBuffer));
Finally, we assign the decoded audio to our audio variable.
const ctx = new AudioContext();
let audio;
fetch("./sounds/phantom-cities.mp3")
  .then(data => data.arrayBuffer())
  .then(arrayBuffer => ctx.decodeAudioData(arrayBuffer))
  .then(decodedAudio => {
    audio = decodedAudio;
  });
And that's the basic process:
- fetch the audio file
- get it into a buffer
- decode it in order to work with it as an audio node.
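As a side note, the same pipeline can be written with async/await, which also gives us a natural place to catch network or decoding errors. Here's a minimal sketch of that alternative (the loadAudio function name and the try/catch handling are illustrative additions, not part of the code above):

async function loadAudio(url) {
  try {
    // Fetch the file, read its body into an ArrayBuffer, then decode it.
    const response = await fetch(url);
    const arrayBuffer = await response.arrayBuffer();
    audio = await ctx.decodeAudioData(arrayBuffer);
  } catch (err) {
    console.error("Failed to load or decode audio:", err);
  }
}

loadAudio("./sounds/phantom-cities.mp3");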
Playing the Sound
To play this audio, I've declared a function called playback. In order to work with the decoded audio, we call createBufferSource on the audio context and assign the resulting node to a const called playSound.
function playback() {
  const playSound = ctx.createBufferSource();
}
Now that playSound is a buffer source, we assign our audio variable to its buffer property. Then, just as if we were working with the Web Audio API's built-in oscillator waveforms, we need to connect playSound to the audio context's destination and start the sound itself.
function playback() {
  const playSound = ctx.createBufferSource();
  playSound.buffer = audio;
  playSound.connect(ctx.destination);
  playSound.start(ctx.currentTime);
}
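This is also where the manipulations mentioned in the introduction come in. As a sketch of what's possible (the specific rate and filter values here are just example numbers), we could slow the playback down and route the source through a lowpass filter before it reaches the destination:

function playbackFiltered() {
  const playSound = ctx.createBufferSource();
  playSound.buffer = audio;

  // Play at half speed.
  playSound.playbackRate.value = 0.5;

  // Route the source through a lowpass filter before the destination.
  const filter = ctx.createBiquadFilter();
  filter.type = "lowpass";
  filter.frequency.value = 800;

  playSound.connect(filter);
  filter.connect(ctx.destination);
  playSound.start(ctx.currentTime);
}

Because playSound is an ordinary audio node, any chain of Web Audio nodes can sit between it and ctx.destination.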
Finally, I'm simply adding an event listener for the mousedown event on the window and passing in that playback function as the callback.
window.addEventListener("mousedown", playback);
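One caveat worth knowing: most browsers start an AudioContext in a suspended state until the user interacts with the page. Since playback already runs inside a mousedown handler, resuming the context there is a reasonable safeguard; here's a sketch of that variation:

window.addEventListener("mousedown", () => {
  // Browsers may suspend the context until a user gesture occurs.
  if (ctx.state === "suspended") {
    ctx.resume();
  }
  playback();
});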
If you're more of a visual learner, check out the video version of this article.