Creating a Microphone for Windows in Java with Existing Sound Input
Question
I wanted to create a virtual microphone from sound input into an actual microphone in Java. For example:
Laptop Actual Mic Input --> Converted to whatever Java object microphone input is (realtime) --> Virtual Mic.
I want to be able to use the virtual mic as an actual microphone in external applications. For example, say I set this Java microphone as the input device in Zoom to enhance the original microphone's audio quality. I apologize if this isn't the best way to phrase it.
Please keep in mind that this is a personal project I was interested in doing rather than something I'm concerned about the use case for. Just to clarify again, I want to create a virtual microphone that takes input from the actual microphone or other audio source, and can be seen as a microphone by other applications.
Answer 1
Score: 0
Seems to me there are three distinct areas to solve:
- reading the sound,
- storing in some sort of buffer,
- providing a connection for the other program to obtain the audio data from that buffer.
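A minimal sketch of the first step, reading from the default microphone with the standard `javax.sound.sampled` API. The class name and the format values (44.1 kHz, 16-bit, mono, signed, little-endian) are my own assumptions, not something from the original answer:

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.TargetDataLine;

public class MicReader {
    // 44.1 kHz, 16-bit, mono, signed, little-endian -- assumed values
    public static final AudioFormat FORMAT =
            new AudioFormat(44100f, 16, 1, true, false);

    public static void main(String[] args) throws LineUnavailableException {
        DataLine.Info info = new DataLine.Info(TargetDataLine.class, FORMAT);
        TargetDataLine mic = (TargetDataLine) AudioSystem.getLine(info);
        mic.open(FORMAT);
        mic.start();
        byte[] buf = new byte[1024]; // raw bytes; 2 bytes per 16-bit sample
        int n = mic.read(buf, 0, buf.length); // blocks until data arrives
        System.out.println("Read " + n + " bytes from the microphone");
        mic.stop();
        mic.close();
    }
}
```

In a real capture loop, the `read` call would run repeatedly on its own thread, handing each chunk off to the buffer described below rather than printing.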
Purely DIY, I found some tutorials on Java mic reading, and adapted one of them so that I was packaging the incoming audio data into a buffer instead of sending it directly on to the speakers or saving it to a file. For the container class of the buffer, I chose a `ConcurrentLinkedQueue`, as this painlessly provides the `add` method to put buffer packets in and the `poll` method for an entirely independent thread to take packets out, without requiring any explicit synchronization code to be written.
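The add/poll scheme above can be sketched like this, with a synthetic producer thread standing in for the microphone-reading thread (the class and method names are mine, not from the answer):

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class AudioBufferDemo {
    // Each element is one packet of PCM samples (512 values in this answer).
    static final ConcurrentLinkedQueue<short[]> queue = new ConcurrentLinkedQueue<>();

    // Consumer side: poll() returns null when the queue is momentarily empty,
    // so the consuming thread never blocks waiting on the producer.
    static int drainAndCount() {
        int samples = 0;
        short[] packet;
        while ((packet = queue.poll()) != null) {
            samples += packet.length;
        }
        return samples;
    }

    public static void main(String[] args) throws InterruptedException {
        // Producer: stands in for the mic-reading thread; adds 10 packets.
        Thread producer = new Thread(() -> {
            for (int p = 0; p < 10; p++) {
                short[] packet = new short[512];
                for (int i = 0; i < packet.length; i++) packet[i] = (short) i;
                queue.add(packet); // lock-free enqueue, no synchronized blocks
            }
        });
        producer.start();
        producer.join();
        System.out.println(drainAndCount() + " samples drained");
    }
}
```

Because `ConcurrentLinkedQueue` is lock-free, neither side ever holds a lock the other is waiting on, which is exactly the "no explicit synchronization code" property mentioned above.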
The reason some sort of intermediate buffer was needed is that audio processing tends to go in unpredictable fits and starts, and the bursts of activity in reading/adding on one thread and polling on the other don't necessarily coincide. I'm not sure, but it seemed to me that if the two ends are directly connected, the throughput is impacted, as either line can slow the signal transmission down, and this greatly increases the likelihood of dropouts. With a buffer, both lines can flex somewhat without blocking each other.
I fiddled with the `ConcurrentLinkedQueue` to get a reasonable packet size, and I'm not confident I have the optimal setup, currently set to 512 PCM values. But I do have a functional, if slightly laggy, setup that can ship the microphone data to a digital delay, which in turn outputs echoes as a track on an audio mixer that I wrote.
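One way to form those 512-value packets is to convert the raw little-endian 16-bit bytes from the line into `short` samples and cut them into fixed-size chunks. This is only a sketch under that assumed byte layout; the class and method names are hypothetical, and a real capture loop would carry a trailing partial packet over to the next read instead of dropping it:

```java
import java.util.ArrayList;
import java.util.List;

public class Packetizer {
    public static final int PACKET_SIZE = 512; // PCM values per packet

    // Converts raw little-endian 16-bit bytes into full packets of PCM samples.
    // A trailing partial packet is dropped here for simplicity.
    public static List<short[]> toPackets(byte[] raw, int length) {
        List<short[]> packets = new ArrayList<>();
        int samples = length / 2;           // 2 bytes per 16-bit sample
        int full = samples / PACKET_SIZE;   // number of complete packets
        for (int p = 0; p < full; p++) {
            short[] packet = new short[PACKET_SIZE];
            for (int i = 0; i < PACKET_SIZE; i++) {
                int b = (p * PACKET_SIZE + i) * 2;
                int lo = raw[b] & 0xFF;     // low byte, unsigned
                int hi = raw[b + 1];        // high byte carries the sign
                packet[i] = (short) ((hi << 8) | lo);
            }
            packets.add(packet);
        }
        return packets;
    }
}
```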
Seems like you could just as readily have the polling be handled by some sort of socket or connection to the outside world. But connecting to the external world gets beyond my personal experience.
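To illustrate that last idea, the polling end could push each packet's bytes down a TCP socket with simple length-prefixed framing. This is only a toy loopback round trip under that assumption; it demonstrates the framing, not a virtual audio device, and the names here are mine:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class PacketServer {
    // Sends one packet's raw bytes to the first client that connects,
    // then returns what a loopback client read back (for demo purposes).
    public static byte[] roundTrip(byte[] packet) {
        try (ServerSocket server = new ServerSocket(0)) { // any free port
            Thread sender = new Thread(() -> {
                try (Socket s = server.accept();
                     DataOutputStream out = new DataOutputStream(s.getOutputStream())) {
                    out.writeInt(packet.length); // length-prefixed framing
                    out.write(packet);
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            sender.start();
            try (Socket client = new Socket("localhost", server.getLocalPort());
                 DataInputStream in = new DataInputStream(client.getInputStream())) {
                byte[] received = new byte[in.readInt()];
                in.readFully(received);
                sender.join();
                return received;
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Note that a socket alone won't make the stream show up as a microphone in other applications; on Windows that last step needs a virtual audio driver, which is outside what plain Java can do.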
Comments