Write blocking javascript
Question
I have a tiny server I'm trying to fetch a bunch of files from; ~100 to fetch. I'm using JS to fetch the files with a forEach loop. Unfortunately, the ~100 simultaneous requests knock over the server.
links.forEach((link) => {
const name = link.split("/")[5];
const file = fs.createWriteStream(name);
const request = http.get(link, (res) => {
res.pipe(file);
file.on("finish", () => {
file.close();
});
});
});
I'm trying to write synchronous, blocking JavaScript; I want the file writing to complete before beginning the next fetch. I've been experimenting with generators, while loops, and fs.writeFileSync. I've also been using setTimeout to emulate a slow network call (just so I don't have to reset the server after knocking it over each time).
It seems like I'm missing some concept. Here's my very naive experiment that I thought should take 3 seconds, but only takes 1. After thinking about it, it's clear all the timeouts are happening at once.
function writeSlow(path) {
setTimeout(function () {
console.log("write now");
fs.writeFileSync(path, "lorem");
}, 1000);
}
writeSlow("junk/one");
writeSlow("junk/two");
writeSlow("junk/three");
I did some reading and was convinced using Promises was the way to go, but this doesn't appear to work either:
function sleep(ms) {
return new Promise((resolve) => setTimeout(resolve, ms));
}
async function run() {
arr.forEach(async (str) => {
await sleep(1000);
fs.writeFileSync("junk/" + str + ".txt", "lorem");
});
}
My expectation, or what I'm trying to get to with the experimental code, is a point where I can watch the filesystem and see a new file appear every second.
(edit)
The actual end result I'm looking for is for the next http request to only fire once the last one completes.
Answer 1
Score: 1
You could write an asynchronous loop:
function loop(links, i) {
if (i >= links.length) return; // all done
const link = links[i];
const name = link.split("/")[5];
const file = fs.createWriteStream(name);
const request = http.get(link, (res) => {
res.pipe(file);
file.on("finish", () => {
file.close();
loop(links, i+1); // Continue with next
});
});
}
// Start the asynchronous loop:
loop(links, 0);
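Note that although loop calls itself, the recursive call happens from inside the "finish" event callback, after the original call stack has unwound, so there is no risk of a stack overflow even with many links.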
Alternatively, you could switch to promise-based libraries, or else promisify the callback-based functions you have, like so (not tested):
// Promisify your callback-based functions
const asyncWrite = (name, res) => new Promise((resolve) => {
const file = fs.createWriteStream(name);
res.pipe(file);
file.on("finish", () => {
file.close();
resolve();
});
});
const asyncHttpGet = (link) => new Promise((resolve) =>
http.get(link, resolve)
);
// ... and use them:
async function writeAllLinks(links) {
for (const link of links) {
await asyncWrite(link.split("/")[5], await asyncHttpGet(link));
}
}
// Start the asynchronous loop:
writeAllLinks(links).then( /* .... */ );
You would probably want to add error handling...
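For instance, a minimal sketch of what that error handling might look like (my addition, not part of the original answer): reject each promise on failure and catch at the call site:

// Reject on write-stream errors so failures propagate
const asyncWrite = (name, res) => new Promise((resolve, reject) => {
  const file = fs.createWriteStream(name);
  file.on("error", reject);   // surface write failures
  file.on("finish", () => {
    file.close();
    resolve();
  });
  res.pipe(file);
});

// Reject on network errors (http.get returns the request, an EventEmitter)
const asyncHttpGet = (link) => new Promise((resolve, reject) =>
  http.get(link, resolve).on("error", reject)
);

writeAllLinks(links).catch((err) => console.error("Download failed:", err));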
Answer 2
Score: 0
forEach doesn't behave the way you might expect with async/await. Take a look at this post on SO for more information: Using async/await with a forEach loop
Instead, in your case (as you will see in that post), you can use a standard for loop:
function sleep(ms, fileName) {
return new Promise(resolve => setTimeout(() => resolve(fileName), ms))
}
const files = ["file1", "file2", "file3"];
async function run() {
for (let i = 0; i < files.length; i++) {
const fileName = await sleep(1000, files[i])
console.log(fileName, new Date().toLocaleTimeString())
}
}
run();
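Applying the same sequential-loop idea to the original download problem might look roughly like this (my sketch, not part of the answer; it assumes Node's built-in http and fs modules from the question, plus the promise-based pipeline from Node's stream/promises):

const http = require("http");
const fs = require("fs");
const { pipeline } = require("stream/promises");

// Wrap http.get so the response can be awaited.
const get = (link) => new Promise((resolve, reject) =>
  http.get(link, resolve).on("error", reject)
);

async function downloadAll(links) {
  for (const link of links) {
    const res = await get(link);                     // wait for the response
    const name = link.split("/")[5];                 // same naming as in the question
    await pipeline(res, fs.createWriteStream(name)); // wait for the file to finish writing
  }
}

downloadAll(links).catch(console.error);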