Java: Combining 4 separate audio byte arrays into a single WAV audio file

Question

I'm trying to combine 4 separate byte arrays into a single file, but I'm only getting null pointer exceptions and I'm not sure why. My audio format is 16-bit signed PCM, and I know I should be using short instead of byte, but honestly I'm quite lost in all of this.

private short[] mixByteBuffers(byte[] bufferA, byte[] bufferB) {
    short[] first_array = new short[bufferA.length/2];
    short[] second_array = new short [bufferB.length/2];
    short[] final_array = null;

    if(first_array.length > second_array.length) {
        short[] temp_array = new short[bufferA.length];

        for (int i = 0; i < temp_array.length; i++) {
            int mixed=(int)first_array[i] + (int)second_array[i];
            if (mixed > 32767) mixed = 32767;
            if (mixed < -32768) mixed = -32768;
            temp_array[i] = (short)mixed;
            final_array = temp_array;
        }
    }
    else {
        short[] temp_array = new short[bufferB.length];

        for (int i = 0; i < temp_array.length; i++) {
            int mixed=(int)first_array[i] + (int)second_array[i];
            if (mixed > 32767) mixed = 32767;
            if (mixed < -32768) mixed = -32768;
            temp_array[i] = (short)mixed;
            final_array = temp_array;
        }        
    }
    return final_array;
}

This is what I'm trying at the moment, but it returns `java.lang.ArrayIndexOutOfBoundsException: 0` at this line:

int mixed = (int)first_array[i] + (int)second_array[i];

My arrays aren't all the same length. This is how I call the function:

public void combineAudio() {
    short[] combinationOne = mixByteBuffers(tempByteArray1, tempByteArray2);
    short[] combinationTwo = mixByteBuffers(tempByteArray3, tempByteArray4);
    short[] channelsCombinedAll = mixShortBuffers(combinationOne, combinationTwo);
    byte[] bytesCombined = new byte[channelsCombinedAll.length * 2];
    ByteBuffer.wrap(bytesCombined).order(ByteOrder.LITTLE_ENDIAN)
        .asShortBuffer().put(channelsCombinedAll);
    
    mixedByteArray = bytesCombined;
}

There's got to be a better way than what I'm doing currently; it's driving me absolutely crazy.

Answer 1

Score: 2

The temp_array.length value in the for loop of the else clause is bufferB.length, but the value in the if clause is bufferA.length/2. Did you overlook dividing by 2 in the else clause?
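For reference, a corrected sketch of the asker's method in that spirit (a sketch only, not code from the answer: it assumes little-endian data, actually decodes the byte buffers into shorts, which the original never did, and zero-pads the shorter source instead of branching on length):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class MixFixSketch {

    // Corrected variant: decode both byte buffers into shorts,
    // size the result from the LONGER short array (length / 2, not length),
    // and treat the shorter source as 0 once it is exhausted.
    static short[] mixByteBuffers(byte[] bufferA, byte[] bufferB) {
        short[] first = new short[bufferA.length / 2];
        short[] second = new short[bufferB.length / 2];
        // actually fill the short arrays -- the original code left them all zeros
        ByteBuffer.wrap(bufferA).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(first);
        ByteBuffer.wrap(bufferB).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(second);

        short[] result = new short[Math.max(first.length, second.length)];
        for (int i = 0; i < result.length; i++) {
            int mixed = (i < first.length ? first[i] : 0)
                      + (i < second.length ? second[i] : 0);
            // clamp to the 16-bit range to avoid wrap-around distortion
            if (mixed > 32767) mixed = 32767;
            if (mixed < -32768) mixed = -32768;
            result[i] = (short) mixed;
        }
        return result;
    }
}
```

Iterating over the longer array while substituting 0 for the exhausted source removes both the ArrayIndexOutOfBoundsException and the if/else duplication.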

Regardless, it's usual to process audio data (signals) as streams. With each line open, grab a predefined buffer's worth of byte values from each, enough to get the same number of PCM values from each line. If one line runs out before the others, you can fill that line out with 0 values.

Unless there is a really strong reason for adding arrays of unequal length, I think it's best to avoid doing so. Instead, use pointers (if you are drawing from arrays) or progressive read() methods (if from AudioInput lines) to get a fixed number of PCM values on each loop iteration. Otherwise I think you are asking for trouble and needlessly complicating things.

I've seen workable solutions that process anywhere from one PCM value per source at a time up to more, such as 1000, or even a full half second's worth (22,050 if at 44,100 fps). The main thing is to get the same number of PCM values from each source on each iteration, and to fill in with 0s when a source runs out of data.
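The progressive-read approach described above might be sketched like this (an illustration only, with an assumed little-endian 16-bit format and an arbitrary chunk size; it is not code from the answer):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

public class StreamMixSketch {

    // Read the same fixed-size chunk from both streams on each iteration,
    // zero-fill whichever stream runs out first, and mix sample by sample.
    public static byte[] mixStreams(InputStream a, InputStream b, int chunkBytes) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] bufA = new byte[chunkBytes];
        byte[] bufB = new byte[chunkBytes];
        while (true) {
            int readA = fill(a, bufA);
            int readB = fill(b, bufB);
            int read = Math.max(readA, readB);
            if (read <= 0) break;
            // pad the tail of the shorter read with 0 values
            Arrays.fill(bufA, readA, chunkBytes, (byte) 0);
            Arrays.fill(bufB, readB, chunkBytes, (byte) 0);
            for (int i = 0; i + 1 < read; i += 2) {
                // decode little-endian 16-bit samples, add, clamp, re-encode
                int sA = (short) ((bufA[i] & 0xff) | (bufA[i + 1] << 8));
                int sB = (short) ((bufB[i] & 0xff) | (bufB[i + 1] << 8));
                int sum = Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sA + sB));
                out.write(sum & 0xff);
                out.write((sum >> 8) & 0xff);
            }
        }
        return out.toByteArray();
    }

    // Keep reading until the buffer is full or the stream ends.
    private static int fill(InputStream in, byte[] buf) throws IOException {
        int total = 0;
        while (total < buf.length) {
            int n = in.read(buf, total, buf.length - total);
            if (n < 0) break;
            total += n;
        }
        return total;
    }
}
```

Because both sources always contribute the same number of PCM values per iteration, unequal lengths never cause an out-of-bounds access.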


Answer 2

Score: 2

To mix two byte arrays containing 16-bit sound samples, you should first convert those arrays to int arrays, i.e., sample-based arrays, then add them (to mix), and then convert back to byte arrays. When converting from byte array to int array, you need to make sure you use the correct byte order (endianness).

Here's some code that lets you mix two arrays. At the end there is some sample code (using sine waves) that demonstrates the approach. Note that this may not be the ideal way of coding it, but it is a working example that demonstrates the concept. Using streams or lines, as Phil recommends, is probably the smarter overall approach.

Good luck!

import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.IOException;

public class MixDemo {

    public static byte[] mix(final byte[] a, final byte[] b, final boolean bigEndian) {
        final byte[] aa;
        final byte[] bb;
        final int length = Math.max(a.length, b.length);
        // ensure same lengths
        if (a.length != b.length) {
            aa = new byte[length];
            bb = new byte[length];
            System.arraycopy(a, 0, aa, 0, a.length);
            System.arraycopy(b, 0, bb, 0, b.length);
        } else {
            aa = a;
            bb = b;
        }
        // convert to samples
        final int[] aSamples = toSamples(aa, bigEndian);
        final int[] bSamples = toSamples(bb, bigEndian);
        // mix by adding
        final int[] mix = new int[aSamples.length];
        for (int i = 0; i < mix.length; i++) {
            mix[i] = aSamples[i] + bSamples[i];
            // enforce min and max (may introduce clipping)
            mix[i] = Math.min(Short.MAX_VALUE, mix[i]);
            mix[i] = Math.max(Short.MIN_VALUE, mix[i]);
        }
        // convert back to bytes
        return toBytes(mix, bigEndian);
    }

    private static int[] toSamples(final byte[] byteSamples, final boolean bigEndian) {
        final int bytesPerChannel = 2;
        final int length = byteSamples.length / bytesPerChannel;
        if ((length % 2) != 0) throw new IllegalArgumentException("For 16 bit audio, length must be even: " + length);
        final int[] samples = new int[length];
        for (int sampleNumber = 0; sampleNumber < length; sampleNumber++) {
            final int sampleOffset = sampleNumber * bytesPerChannel;
            final int sample = bigEndian
                    ? byteToIntBigEndian(byteSamples, sampleOffset, bytesPerChannel)
                    : byteToIntLittleEndian(byteSamples, sampleOffset, bytesPerChannel);
            samples[sampleNumber] = sample;
        }
        return samples;
    }

    private static byte[] toBytes(final int[] intSamples, final boolean bigEndian) {
        final int bytesPerChannel = 2;
        final int length = intSamples.length * bytesPerChannel;
        final byte[] bytes = new byte[length];
        for (int sampleNumber = 0; sampleNumber < intSamples.length; sampleNumber++) {
            final byte[] b = bigEndian
                    ? intToByteBigEndian(intSamples[sampleNumber], bytesPerChannel)
                    : intToByteLittleEndian(intSamples[sampleNumber], bytesPerChannel);
            System.arraycopy(b, 0, bytes, sampleNumber * bytesPerChannel, bytesPerChannel);
        }
        return bytes;
    }

    // from https://github.com/hendriks73/jipes/blob/master/src/main/java/com/tagtraum/jipes/audio/AudioSignalSource.java#L238
    private static int byteToIntLittleEndian(final byte[] buf, final int offset, final int bytesPerSample) {
        int sample = 0;
        for (int byteIndex = 0; byteIndex < bytesPerSample; byteIndex++) {
            final int aByte = buf[offset + byteIndex] & 0xff;
            sample += aByte << 8 * (byteIndex);
        }
        return (short) sample;
    }

    // from https://github.com/hendriks73/jipes/blob/master/src/main/java/com/tagtraum/jipes/audio/AudioSignalSource.java#L247
    private static int byteToIntBigEndian(final byte[] buf, final int offset, final int bytesPerSample) {
        int sample = 0;
        for (int byteIndex = 0; byteIndex < bytesPerSample; byteIndex++) {
            final int aByte = buf[offset + byteIndex] & 0xff;
            sample += aByte << (8 * (bytesPerSample - byteIndex - 1));
        }
        return (short) sample;
    }

    private static byte[] intToByteLittleEndian(final int sample, final int bytesPerSample) {
        byte[] buf = new byte[bytesPerSample];
        for (int byteIndex = 0; byteIndex < bytesPerSample; byteIndex++) {
            buf[byteIndex] = (byte) ((sample >>> (8 * byteIndex)) & 0xFF);
        }
        return buf;
    }

    private static byte[] intToByteBigEndian(final int sample, final int bytesPerSample) {
        byte[] buf = new byte[bytesPerSample];
        for (int byteIndex = 0; byteIndex < bytesPerSample; byteIndex++) {
            buf[byteIndex] = (byte) ((sample >>> (8 * (bytesPerSample - byteIndex - 1))) & 0xFF);
        }
        return buf;
    }

    public static void main(final String[] args) throws IOException {
        final int sampleRate = 44100;
        final boolean bigEndian = true;
        final int sampleSizeInBits = 16;
        final int channels = 1;
        final boolean signed = true;
        final AudioFormat targetAudioFormat = new AudioFormat(sampleRate, sampleSizeInBits, channels, signed, bigEndian);
        final byte[] a = new byte[sampleRate * 10];
        final byte[] b = new byte[sampleRate * 5];
        // create sine waves
        for (int i = 0; i < a.length / 2; i++) {
            System.arraycopy(intToByteBigEndian((int) (30000 * Math.sin(i * 0.5)), 2), 0, a, i * 2, 2);
        }
        for (int i = 0; i < b.length / 2; i++) {
            System.arraycopy(intToByteBigEndian((int) (30000 * Math.sin(i * 0.1)), 2), 0, b, i * 2, 2);
        }
        final File aFile = new File("a.wav");
        AudioSystem.write(new AudioInputStream(new ByteArrayInputStream(a), targetAudioFormat, a.length),
                AudioFileFormat.Type.WAVE, aFile);
        final File bFile = new File("b.wav");
        AudioSystem.write(new AudioInputStream(new ByteArrayInputStream(b), targetAudioFormat, b.length),
                AudioFileFormat.Type.WAVE, bFile);
        // mix a and b
        final byte[] mixed = mix(a, b, bigEndian);
        final File outFile = new File("out.wav");
        AudioSystem.write(new AudioInputStream(new ByteArrayInputStream(mixed), targetAudioFormat, mixed.length),
                AudioFileFormat.Type.WAVE, outFile);
    }
}



huangapple
  • Published on 2020-03-15 09:40:18
  • Please retain this link when reposting: https://go.coder-hub.com/60688956.html