How to run a CodePen project locally


Question

I want to run a CodePen project locally, but when I copy and paste the files into a local project on my computer, it doesn't work. The project consists of three files (HTML, CSS, and JS/TS). How can I fix it and make it work? The CodePen project I want to run locally is: https://codepen.io/mediapipe-preview/pen/OJBVQJm

I copied the files into a local project, but the JS file doesn't seem to run. I even linked the JS and CSS files into the HTML file, but it still doesn't work. I also exported the whole project from CodePen, and that still doesn't work for me.

Answer 1

Score: 0


Heya! Hope you're doing well. Here's a working version of the CodePen you provided, adapted to run locally; I've made the necessary changes for you. Hope it helps.
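The main reason a straight copy-paste fails is that the pen's JS panel is written in TypeScript, which browsers can't execute directly (CodePen compiles it before running it). As a minimal sketch of the kind of change that's needed, here are three declarations from the pen with their TypeScript annotations (shown as comments) converted to plain JavaScript:

```js
// As written in the CodePen TypeScript panel (a browser chokes on the annotations):
//   let runningMode: "IMAGE" | "VIDEO" = "IMAGE";
//   let enableWebcamButton: HTMLButtonElement;
//   let webcamRunning: boolean = false;

// Plain-JavaScript equivalents that run as-is:
let runningMode = "IMAGE";
let enableWebcamButton;
let webcamRunning = false;
```

The full page below has all such annotations stripped, with the CSS and the compiled script inlined into a single HTML file: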

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <meta http-equiv="Cache-control" content="no-cache, no-store, must-revalidate">
  <meta http-equiv="Pragma" content="no-cache">
  <meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no">
  <title>Face Landmarker</title>
  <style>
    /* The pen's CSS panel is SCSS and starts with `@use "@material";`, which is
       invalid in a plain <style> block, so it is dropped here; the Material
       styles come from the <link> tag below instead. */
    body {
      font-family: helvetica, arial, sans-serif;
      margin: 2em;
      color: #3d3d3d;
      --mdc-theme-primary: #007f8b;
      --mdc-theme-on-primary: #f1f3f4;
    }
    h1 {
      font-style: italic;
      color: #ff6f00;
      color: #007f8b;
    }
    h2 {
      clear: both;
    }
    em {
      font-weight: bold;
    }
    video {
      clear: both;
      display: block;
      transform: rotateY(180deg);
      -webkit-transform: rotateY(180deg);
      -moz-transform: rotateY(180deg);
    }
    section {
      opacity: 1;
      transition: opacity 500ms ease-in-out;
    }
    header,
    footer {
      clear: both;
    }
    .removed {
      display: none;
    }
    .invisible {
      opacity: 0.2;
    }
    .note {
      font-style: italic;
      font-size: 130%;
    }
    .videoView,
    .detectOnClick,
    .blend-shapes {
      position: relative;
      float: left;
      width: 48%;
      margin: 2% 1%;
      cursor: pointer;
    }
    .videoView p,
    .detectOnClick p {
      position: absolute;
      padding: 5px;
      background-color: #007f8b;
      color: #fff;
      border: 1px dashed rgba(255, 255, 255, 0.7);
      z-index: 2;
      font-size: 12px;
      margin: 0;
    }
    .highlighter {
      background: rgba(0, 255, 0, 0.25);
      border: 1px dashed #fff;
      z-index: 1;
      position: absolute;
    }
    .canvas {
      z-index: 1;
      position: absolute;
      pointer-events: none;
    }
    .output_canvas {
      transform: rotateY(180deg);
      -webkit-transform: rotateY(180deg);
      -moz-transform: rotateY(180deg);
    }
    .detectOnClick {
      z-index: 0;
    }
    .detectOnClick img {
      width: 100%;
    }
    .blend-shapes-item {
      display: flex;
      align-items: center;
      height: 20px;
    }
    .blend-shapes-label {
      display: flex;
      width: 120px;
      justify-content: flex-end;
      align-items: center;
      margin-right: 4px;
    }
    .blend-shapes-value {
      display: flex;
      height: 16px;
      align-items: center;
      background-color: #007f8b;
    }
  </style>
  <link href="https://unpkg.com/material-components-web@latest/dist/material-components-web.min.css" rel="stylesheet">
  <script src="https://unpkg.com/material-components-web@latest/dist/material-components-web.min.js"></script>
</head>
<body>
  <h1>Face landmark detection using the MediaPipe FaceLandmarker task</h1>
  <section id="demos" class="invisible">
    <h2>Demo: Detecting Images</h2>
    <p><b>Click on an image below</b> to see the key landmarks of the face.</p>
    <div class="detectOnClick">
      <img src="https://storage.googleapis.com/mediapipe-assets/portrait.jpg" width="100%" crossorigin="anonymous" title="Click to get detection!" />
    </div>
    <div class="blend-shapes">
      <ul class="blend-shapes-list" id="image-blend-shapes"></ul>
    </div>
    <h2>Demo: Webcam continuous face landmarks detection</h2>
    <p>Hold your face in front of your webcam to get real-time face landmarker detection.<br>Click <b>enable webcam</b> below and grant access to the webcam if prompted.</p>
    <div id="liveView" class="videoView">
      <button id="webcamButton" class="mdc-button mdc-button--raised">
        <span class="mdc-button__ripple"></span>
        <span class="mdc-button__label">ENABLE WEBCAM</span>
      </button>
      <div style="position: relative;">
        <!-- The pen has a truncated inline style ("position: abso") on the video;
             it is invalid CSS with no effect, so it is dropped here. The video
             stays in normal flow and the canvas is absolutely positioned over it. -->
        <video id="webcam" autoplay playsinline></video>
        <canvas class="output_canvas" id="output_canvas" style="position: absolute; left: 0px; top: 0px;"></canvas>
      </div>
    </div>
    <div class="blend-shapes">
      <ul class="blend-shapes-list" id="video-blend-shapes"></ul>
    </div>
  </section>
  <script type="module">
    // ES-module import from a CDN; this (and webcam access) is why the page
    // should be served from http://localhost rather than opened via file://.
    import vision from "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@0.10.0";
    const { FaceLandmarker, FilesetResolver, DrawingUtils } = vision;

    const demosSection = document.getElementById("demos");
    const imageBlendShapes = document.getElementById("image-blend-shapes");
    const videoBlendShapes = document.getElementById("video-blend-shapes");

    let faceLandmarker;
    let runningMode = "IMAGE"; // switches to "VIDEO" for the webcam demo
    let enableWebcamButton;
    let webcamRunning = false;
    const videoWidth = 480;

    // Before we can use the FaceLandmarker class we must wait for it to finish
    // loading. Machine Learning models can be large and take a moment to
    // get everything needed to run.
    async function runDemo() {
      // The wasm assets are fetched from the jsDelivr CDN; no bundler setup is
      // needed for this local version.
      const filesetResolver = await FilesetResolver.forVisionTasks(
        "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@0.10.0/wasm"
      );
      faceLandmarker = await FaceLandmarker.createFromOptions(filesetResolver, {
        baseOptions: {
          modelAssetPath: `https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task`,
          delegate: "GPU"
        },
        outputFaceBlendshapes: true,
        runningMode,
        numFaces: 1
      });
      demosSection.classList.remove("invisible");
    }
    runDemo();

    /********************************************************************
    // Demo 1: Grab a bunch of images from the page and run detection on
    // them upon click.
    ********************************************************************/

    // In this demo, we have put all our clickable images in divs with the
    // CSS class 'detectOnClick'. Let's get all the elements that have
    // this class.
    const imageContainers = document.getElementsByClassName("detectOnClick");

    // Now let's go through all of these and add a click event listener.
    for (let i = 0; i < imageContainers.length; i++) {
      // Add event listener to the child element, which is the img element.
      imageContainers[i].children[0].addEventListener("click", handleClick);
    }
    // When an image is clicked, let's detect it and display results!
    async function handleClick(event) {
      if (!faceLandmarker) {
        console.log("Wait for faceLandmarker to load before clicking!");
        return;
      }

      if (runningMode === "VIDEO") {
        runningMode = "IMAGE";
        await faceLandmarker.setOptions({ runningMode });
      }

      // Remove all landmarks drawn before.
      const allCanvas = event.target.parentNode.getElementsByClassName("canvas");
      for (var i = allCanvas.length - 1; i >= 0; i--) {
        const n = allCanvas[i];
        n.parentNode.removeChild(n);
      }

      // We can call faceLandmarker.detect as many times as we like with
      // different image data each time. This returns a promise
      // which we wait to complete and then call a function to
      // print out the results of the prediction.
      const faceLandmarkerResult = faceLandmarker.detect(event.target);

      // Overlay a canvas on the clicked image and draw the landmarks on it.
      const canvas = document.createElement("canvas");
      canvas.setAttribute("class", "canvas");
      canvas.setAttribute("width", event.target.naturalWidth + "px");
      canvas.setAttribute("height", event.target.naturalHeight + "px");
      canvas.style.left = "0px";
      canvas.style.top = "0px";
      canvas.style.width = `${event.target.width}px`;
      canvas.style.height = `${event.target.height}px`;
      event.target.parentNode.appendChild(canvas);
      const ctx = canvas.getContext("2d");
      const drawingUtils = new DrawingUtils(ctx);
      for (const landmarks of faceLandmarkerResult.faceLandmarks) {
        drawingUtils.drawConnectors(
          landmarks,
          FaceLandmarker.FACE_LANDMARKS_TESSELATION,
          { color: "#C0C0C070", lineWidth: 1 }
        );
        drawingUtils.drawConnectors(
          landmarks,
          FaceLandmarker.FACE_LANDMARKS_RIGHT_EYE,
          { color: "#FF3030" }
        );
        drawingUtils.drawConnectors(
          landmarks,
          FaceLandmarker.FACE_LANDMARKS_RIGHT_EYEBROW,
          { color: "#FF3030" }
        );
        drawingUtils.drawConnectors(
          landmarks,
          FaceLandmarker.FACE_LANDMARKS_LEFT_EYE,
          { color: "#30FF30" }
        );
        drawingUtils.drawConnectors(
          landmarks,
          FaceLandmarker.FACE_LANDMARKS_LEFT_EYEBROW,
          { color: "#30FF30" }
        );
        drawingUtils.drawConnectors(
          landmarks,
          FaceLandmarker.FACE_LANDMARKS_FACE_OVAL,
          { color: "#E0E0E0" }
        );
        drawingUtils.drawConnectors(
          landmarks,
          FaceLandmarker.FACE_LANDMARKS_LIPS,
          { color: "#E0E0E0" }
        );
        drawingUtils.drawConnectors(
          landmarks,
          FaceLandmarker.FACE_LANDMARKS_RIGHT_IRIS,
          { color: "#FF3030" }
        );
        drawingUtils.drawConnectors(
          landmarks,
          FaceLandmarker.FACE_LANDMARKS_LEFT_IRIS,
          { color: "#30FF30" }
        );
      }
      drawBlendShapes(imageBlendShapes, faceLandmarkerResult.faceBlendshapes);
    }
    /********************************************************************
    // Demo 2: Continuously grab image from webcam stream and detect it.
    ********************************************************************/

    const video = document.getElementById("webcam");
    const canvasElement = document.getElementById("output_canvas");
    const canvasCtx = canvasElement.getContext("2d");

    // Check if webcam access is supported.
    function hasGetUserMedia() {
      return !!(navigator.mediaDevices && navigator.mediaDevices.getUserMedia);
    }

    // If webcam supported, add event listener to button for when user
    // wants to activate it.
    if (hasGetUserMedia()) {
      enableWebcamButton = document.getElementById("webcamButton");
      enableWebcamButton.addEventListener("click", enableCam);
    } else {
      console.warn("getUserMedia() is not supported by your browser");
    }

    // Enable the live webcam view and start detection.
    function enableCam(event) {
      if (!faceLandmarker) {
        console.log("Wait! faceLandmarker not loaded yet.");
        return;
      }

      if (webcamRunning === true) {
        webcamRunning = false;
        enableWebcamButton.innerText = "ENABLE PREDICTIONS";
      } else {
        webcamRunning = true;
        enableWebcamButton.innerText = "DISABLE PREDICTIONS";
      }

      // getUserMedia parameters.
      const constraints = {
        video: true
      };

      // Activate the webcam stream.
      navigator.mediaDevices.getUserMedia(constraints).then(function (stream) {
        video.srcObject = stream;
        video.addEventListener("loadeddata", predictWebcam);
      });
    }
    let lastVideoTime = -1;
    let results = undefined;
    const drawingUtils = new DrawingUtils(canvasCtx);

    async function predictWebcam() {
      // Size the video and canvas to a fixed width while preserving the
      // stream's aspect ratio.
      const ratio = video.videoHeight / video.videoWidth;
      video.style.width = videoWidth + "px";
      video.style.height = videoWidth * ratio + "px";
      canvasElement.style.width = videoWidth + "px";
      canvasElement.style.height = videoWidth * ratio + "px";
      canvasElement.width = video.videoWidth;
      canvasElement.height = video.videoHeight;

      // Now let's start detecting the stream.
      if (runningMode === "IMAGE") {
        runningMode = "VIDEO";
        await faceLandmarker.setOptions({ runningMode: runningMode });
      }
      let nowInMs = Date.now();
      if (lastVideoTime !== video.currentTime) {
        lastVideoTime = video.currentTime;
        results = faceLandmarker.detectForVideo(video, nowInMs);
      }
      if (results.faceLandmarks) {
        for (const landmarks of results.faceLandmarks) {
          drawingUtils.drawConnectors(
            landmarks,
            FaceLandmarker.FACE_LANDMARKS_TESSELATION,
            { color: "#C0C0C070", lineWidth: 1 }
          );
          drawingUtils.drawConnectors(
            landmarks,
            FaceLandmarker.FACE_LANDMARKS_RIGHT_EYE,
            { color: "#FF3030" }
          );
          drawingUtils.drawConnectors(
            landmarks,
            FaceLandmarker.FACE_LANDMARKS_RIGHT_EYEBROW,
            { color: "#FF3030" }
          );
          drawingUtils.drawConnectors(
            landmarks,
            FaceLandmarker.FACE_LANDMARKS_LEFT_EYE,
            { color: "#30FF30" }
          );
          drawingUtils.drawConnectors(
            landmarks,
            FaceLandmarker.FACE_LANDMARKS_LEFT_EYEBROW,
            { color: "#30FF30" }
          );
          drawingUtils.drawConnectors(
            landmarks,
            FaceLandmarker.FACE_LANDMARKS_FACE_OVAL,
            { color: "#E0E0E0" }
          );
          drawingUtils.drawConnectors(
            landmarks,
            FaceLandmarker.FACE_LANDMARKS_LIPS,
            { color: "#E0E0E0" }
          );
          drawingUtils.drawConnectors(
            landmarks,
            FaceLandmarker.FACE_LANDMARKS_RIGHT_IRIS,
            { color: "#FF3030" }
          );
          drawingUtils.drawConnectors(
            landmarks,
            FaceLandmarker.FACE_LANDMARKS_LEFT_IRIS,
            { color: "#30FF30" }
          );
        }
      }
      drawBlendShapes(videoBlendShapes, results.faceBlendshapes);

      // Call this function again to keep predicting when the browser is ready.
      if (webcamRunning === true) {
        window.requestAnimationFrame(predictWebcam);
      }
    }
    // Render the blend-shape scores as a list of labeled horizontal bars.
    function drawBlendShapes(el, blendShapes) {
      if (!blendShapes.length) {
        return;
      }

      let htmlMaker = "";
      blendShapes[0].categories.map((shape) => {
        htmlMaker += `
          <li class="blend-shapes-item">
            <span class="blend-shapes-label">${
              shape.displayName || shape.categoryName
            }</span>
            <span class="blend-shapes-value" style="width: calc(${
              +shape.score * 100
            }% - 120px)">${(+shape.score).toFixed(4)}</span>
          </li>
        `;
      });
      el.innerHTML = htmlMaker;
    }
  </script>
</body>
</html>

```
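To run it, save the listing above as a single file (for example `index.html`) and serve the folder from a local HTTP server rather than opening the file directly; some browsers restrict module scripts and webcam (`getUserMedia`) access on `file://` URLs, while `http://localhost` is treated as a secure context. Something like `python3 -m http.server 8000` or `npx serve` run in that folder is enough, and note that the model, the wasm files, and the Material assets are all fetched from CDNs, so you need to be online. If you'd rather keep CodePen's three-file layout, here is a minimal sketch of the wiring, assuming the hypothetical file names `style.css` and `script.js`:

```html
<!-- index.html: move the <style> contents into style.css and the module
     script body into script.js, then reference them like this. -->
<link rel="stylesheet" href="style.css">
<script type="module" src="script.js"></script>
```

The `type="module"` attribute is the detail that usually gets lost: without it, the `import` statement at the top of the script is a syntax error, which is the most common reason the JS file "is not running" after a copy-paste.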

huangapple
  • Posted on June 13, 2023 18:02:32
  • Please keep this link when reposting: https://go.coder-hub.com/76463760.html