Getting Error: CredentialsError: Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1 in production nextjs app
Question
I have the following code to upload files received by my API (sent using a formidable form), which works perfectly fine in my dev environment:
const product = {
  post: async (req, res) => {
    await dbConnect()
    // Parse the incoming multipart form with formidable
    const form = new formidable.IncomingForm({
      multiples: true,
      keepExtensions: true,
    })
    const s3 = new S3({
      accessIdKey: process.env.ACCESS_KEY_AWS,
      secretAccessKey: process.env.SECRET_KEY_AWS,
    })
    form.parse(req, async (error, fields, data) => {
      if (error) {
        return res.status(500).json({ success: false })
      }
      const { files } = data
      // Normalize to an array whether one or many files were sent
      const filesToUpload = files instanceof Array
        ? files
        : [files]
      let filesToSaveOnDb = []
      async function uploadFile(filesToUpload) {
        for (let file of filesToUpload) {
          try {
            // Build a unique object key from a timestamp, a random number
            // and the original file extension
            const timestamp = Date.now()
            const random = Math.floor(Math.random() * 999999999) + 1
            const extension = path.extname(file.name)
            const Key = `${timestamp}_${random}${extension}`
            const fileToUpload = fs.readFileSync(file.path)
            const uploadedImage = await s3.upload({
              Bucket: process.env.BUCKET_NAME,
              Key,
              Body: fileToUpload,
              ContentType: "image/*"
            }).promise()
            filesToSaveOnDb.push({
              name: Key,
              path: `${uploadedImage.Location}`,
            })
            // [...rest of the code...]
          } catch (err) {
            console.error(err)
          }
        }
      }
      // [...rest of the code...]
    })
  }
}
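For context, a detail the snippet does not show: in a pages-router API route, Next.js' built-in body parser typically has to be disabled so formidable can read the raw multipart stream itself. A minimal sketch of that route-level config, assuming the handler above lives in a pages/api file (this export is an assumption, not part of the original code):

// Hypothetical route config for the API file containing the handler above.
// Disabling the default body parser lets formidable consume the raw request.
export const config = {
  api: {
    bodyParser: false,
  },
}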
My code is hosted on AWS Amplify.
As I stated, this code works as intended when running with "npm run dev" on my local machine.
In production, however, the product is saved but the images are not uploaded to S3. In the CloudWatch logs, the following error is thrown: Error: CredentialsError: Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
What I already tried and checked:
- Environment variables are correctly set up on Amplify.
- The build settings on Amplify include the following line to pass the env vars to production (a build-spec sketch follows this list):
  env | grep -e MONGODB_URI -e APP_URL -e NEXTAUTH_URL -e NEXTAUTH_SECRET -e SECRET_KEY_AWS -e BUCKET_NAME -e ACCESS_KEY_AWS >> .env.production
- Debugged with console.log: the contents of the environment variables show up in the CloudWatch logs, which means the code is able to access the env variables.
- The S3 bucket is set to public access.
- The IAM user (holder of the access key and secret key) has the "AmazonS3FullAccess" permission.
- Used the S3 JS SDK v3: a misleading error is shown which, from my research, also refers to the credentials not being present (a v3 sketch follows this list).
- Tried setting the AWS configuration inline, with no success (see the credential note after this list):
  AWS.config.update({
    accessIdKey: process.env.ACCESS_KEY_AWS,
    secretAccessKey: process.env.SECRET_KEY_AWS,
    region: "sa-east-1",
  })
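For reference, a sketch of where that grep line typically sits in the Amplify build spec; the surrounding amplify.yml structure below is an assumption for illustration, not taken from the question:

# Hypothetical amplify.yml excerpt: the grep line appends the variables to
# .env.production during the build so the Next.js server bundle can read them.
version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        - env | grep -e MONGODB_URI -e APP_URL -e NEXTAUTH_URL -e NEXTAUTH_SECRET -e SECRET_KEY_AWS -e BUCKET_NAME -e ACCESS_KEY_AWS >> .env.production
        - npm run build
  artifacts:
    baseDirectory: .next
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*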
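For reference, a minimal sketch of what the v3 attempt mentioned above usually looks like, assuming @aws-sdk/client-s3 and reusing the question's variable and env-var names; this is an illustration, not the code that was actually run:

// Hypothetical v3 equivalent of the v2 s3.upload(...) call above.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3"

const client = new S3Client({
  region: "sa-east-1",
  credentials: {
    accessKeyId: process.env.ACCESS_KEY_AWS,
    secretAccessKey: process.env.SECRET_KEY_AWS,
  },
})

// send() already returns a promise, so there is no .promise() in v3. Unlike
// v2's upload(), PutObjectCommand does not return a Location; the public URL
// would have to be assembled from bucket, region and Key if it is needed.
await client.send(new PutObjectCommand({
  Bucket: process.env.BUCKET_NAME,
  Key,                  // same generated key as in the handler above
  Body: fileToUpload,   // same buffer read with fs.readFileSync
  ContentType: "image/*",
}))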
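One aside worth noting: the AWS SDK for JavaScript v2 expects the credential property to be called accessKeyId, while the snippets above spell it accessIdKey, which the SDK does not recognize. A minimal sketch of explicit v2 configuration with the documented property names, reusing the region and env-var names from the question:

// For illustration only: explicit v2 credentials use accessKeyId, not accessIdKey.
AWS.config.update({
  accessKeyId: process.env.ACCESS_KEY_AWS,
  secretAccessKey: process.env.SECRET_KEY_AWS,
  region: "sa-east-1",
})

const s3 = new S3({
  accessKeyId: process.env.ACCESS_KEY_AWS,
  secretAccessKey: process.env.SECRET_KEY_AWS,
})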
I'm really lost on what the problem may be.
Answer 1
Score: 0
So, I realized that it does not matter whether I use AWS.config.update or pass my credentials when instantiating the S3 class. The credentials used (at least when hosting my app with Amplify) will be the ones I used to set up the Amplify app (amplify init).
This way, I can get rid of the environment variables.
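A minimal sketch of what that looks like, assuming the same v2 client as in the question: with no explicit keys passed in, the SDK's default credential provider chain resolves whatever credentials the Amplify-managed environment exposes (the role whose policy is adjusted below).

// No accessKeyId/secretAccessKey here: the v2 SDK falls back to its default
// credential provider chain, so the role attached to the environment is used.
const s3 = new S3({ region: "sa-east-1" })

const uploadedImage = await s3.upload({
  Bucket: process.env.BUCKET_NAME,
  Key,                  // same generated key as in the question's handler
  Body: fileToUpload,
  ContentType: "image/*",
}).promise()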
To solve the "Missing credentials" issue, which is somewhat misleading, I had to open the IAM Management Console, click on "Roles", select the Amplify Unauthenticated Role (as I'm not uploading with Cognito authentication), and create an Inline Policy with the following:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::{BUCKET_NAME}/public/*"
      ],
      "Effect": "Allow"
    }
  ]
}
In the bucket's permission settings, I unchecked "Block public access" and set the following bucket policy:
{
  "Version": "2012-10-17",
  "Id": "ExamplePolicy01",
  "Statement": [
    {
      "Sid": "ExampleStatement01",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::{BUCKET_NAME}/*",
        "arn:aws:s3:::{BUCKET_NAME}"
      ]
    }
  ]
}
This solved the issue in production. I still don't know, however, why the same code already worked on my local machine.
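Side note: the bucket policy above lets any principal run any S3 action on the bucket. If public access is only needed for reading the uploaded images, a narrower variant such as the following sketch limits the public grant to s3:GetObject (shown for illustration, not part of the original answer):

{
  "Version": "2012-10-17",
  "Id": "PublicReadOnly",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::{BUCKET_NAME}/*"
    }
  ]
}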