lakeFS Docker build fails


I am trying to get started with a "local" data processing ecosystem which includes Presto, Spark, Hive, lakeFS, and a few others.

My docker-compose.yml looks like this:

    version: "3.5"
    services:
      lakefs:
        image: treeverse/lakefs:latest
        container_name: lakefs
        depends_on:
          - minio-setup
        ports:
          - "8000:8000"
        environment:
          - LAKEFS_DATABASE_TYPE=local
          - LAKEFS_BLOCKSTORE_TYPE=s3
          - LAKEFS_BLOCKSTORE_S3_FORCE_PATH_STYLE=true
          - LAKEFS_BLOCKSTORE_S3_ENDPOINT=http://minio:9000
          - LAKEFS_BLOCKSTORE_S3_CREDENTIALS_ACCESS_KEY_ID=minioadmin
          - LAKEFS_BLOCKSTORE_S3_CREDENTIALS_SECRET_ACCESS_KEY=minioadmin
          - LAKEFS_AUTH_ENCRYPT_SECRET_KEY=some random secret string
          - LAKEFS_STATS_ENABLED
          - LAKEFS_LOGGING_LEVEL
          - LAKECTL_CREDENTIALS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
          - LAKECTL_CREDENTIALS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
          - LAKECTL_SERVER_ENDPOINT_URL=http://localhost:8000
        entrypoint: ["/bin/sh", "-c"]
        command:
          - |
            lakefs setup --local-settings --user-name docker --access-key-id AKIAIOSFODNN7EXAMPLE --secret-access-key wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY || true
            lakefs run --local-settings &
            wait-for -t 60 lakefs:8000 -- lakectl repo create lakefs://example s3://example || true
            wait
      minio-setup:
        image: minio/mc
        container_name: minio-setup
        environment:
          - MC_HOST_lakefs=http://minioadmin:minioadmin@minio:9000
        depends_on:
          - minio
        command: ["mb", "lakefs/example"]
      minio:
        image: minio/minio
        container_name: minio
        ports:
          - "9000:9000"
          - "9001:9001"
        entrypoint: ["minio", "server", "/data", "--console-address", ":9001"]
      mariadb:
        image: mariadb:10
        container_name: mariadb
        environment:
          MYSQL_ROOT_PASSWORD: admin
          MYSQL_USER: admin
          MYSQL_PASSWORD: admin
          MYSQL_DATABASE: metastore_db
      hive-metastore:
        build: hive
        container_name: hive
        depends_on:
          - mariadb
        ports:
          - "9083:9083"
        environment:
          - DB_URI=mariadb:3306
        volumes:
          - ./etc/hive-site.xml:/opt/apache-hive-bin/conf/hive-site.xml
        ulimits:
          nofile:
            soft: 65536
            hard: 65536
      hive-server:
        build: hive
        container_name: hiveserver2
        ports:
          - "10001:10000"
        depends_on:
          - hive-metastore
        environment:
          - DB_URI=mariadb:3306
        volumes:
          - ./etc/hive-site.xml:/opt/apache-hive-bin/conf/hive-site.xml
        ulimits:
          nofile:
            soft: 65536
            hard: 65536
        entrypoint: [
          "wait-for-it", "-t", "60", "hive:9083", "--",
          "hive", "--service", "hiveserver2", "--hiveconf", "hive.root.logger=INFO,console"]
      hive-client:
        build: hive
        profiles: ["client"]
        entrypoint: ["beeline", "-u", "jdbc:hive2://hiveserver2:10000"]
      trino:
        image: trinodb/trino:358
        container_name: trino
        volumes:
          - ./etc/s3.properties:/etc/trino/catalog/s3.properties
        ports:
          - "48080:8080"
      trino-client:
        image: trinodb/trino:358
        profiles: ["client"]
        entrypoint: ["trino", "--server", "trino:8080", "--catalog", "s3", "--schema", "default"]
      spark:
        image: docker.io/bitnami/spark:3
        container_name: spark
        environment:
          - SPARK_MODE=master
          - SPARK_MASTER_HOST=spark
          - SPARK_RPC_AUTHENTICATION_ENABLED=no
          - SPARK_RPC_ENCRYPTION_ENABLED=no
          - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
          - SPARK_SSL_ENABLED=no
        ports:
          - "18080:8080"
        volumes:
          - ./etc/hive-site.xml:/opt/bitnami/spark/conf/hive-site.xml
      spark-worker:
        image: docker.io/bitnami/spark:3
        ports:
          - "8081"
        environment:
          - SPARK_MODE=worker
          - SPARK_MASTER_URL=spark://spark:7077
          - SPARK_WORKER_MEMORY=1G
          - SPARK_WORKER_CORES=1
          - SPARK_RPC_AUTHENTICATION_ENABLED=no
          - SPARK_RPC_ENCRYPTION_ENABLED=no
          - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
          - SPARK_SSL_ENABLED=no
        deploy:
          replicas: 3
        volumes:
          - ./etc/hive-site.xml:/opt/bitnami/spark/conf/hive-site.xml
      spark-submit:
        image: docker.io/bitnami/spark:3
        profiles: ["client"]
        entrypoint: /opt/bitnami/spark/bin/spark-submit
        environment:
          - SPARK_MODE=worker
          - SPARK_MASTER_URL=spark://spark:7077
          - SPARK_WORKER_MEMORY=1G
          - SPARK_WORKER_CORES=1
          - SPARK_RPC_AUTHENTICATION_ENABLED=no
          - SPARK_RPC_ENCRYPTION_ENABLED=no
          - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
          - SPARK_SSL_ENABLED=no
        volumes:
          - ./:/local
          - ./etc/hive-site.xml:/opt/bitnami/spark/conf/hive-site.xml
      spark-sql:
        image: docker.io/bitnami/spark:3
        profiles: ["client"]
        environment:
          - SPARK_MODE=worker
          - SPARK_MASTER_URL=spark://spark:7077
          - SPARK_WORKER_MEMORY=1G
          - SPARK_WORKER_CORES=1
          - SPARK_RPC_AUTHENTICATION_ENABLED=no
          - SPARK_RPC_ENCRYPTION_ENABLED=no
          - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
          - SPARK_SSL_ENABLED=no
        volumes:
          - ./:/local
          - ./etc/hive-site.xml:/opt/bitnami/spark/conf/hive-site.xml
        command: ["spark-sql", "--master", "spark://spark:7077"]
      spark-thrift:
        image: docker.io/bitnami/spark:3
        container_name: spark-thrift
        command: ["bash", "-c", "/opt/bitnami/entrypoint.sh"]
        depends_on:
          - spark
        environment:
          - SPARK_MODE=master
          - SPARK_MASTER_URL=spark://spark:7077
          - SPARK_RPC_AUTHENTICATION_ENABLED=no
          - SPARK_RPC_ENCRYPTION_ENABLED=no
          - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
          - SPARK_MODE=worker
        volumes:
          - ./etc/spark-thrift-entrypoint.sh:/opt/bitnami/entrypoint.sh
          - ./etc/hive-site.xml:/opt/bitnami/spark/conf/hive-site.xml
      create-dbt-schema-main:
        image: trinodb/trino:358
        profiles: ["client"]
        entrypoint: ["trino", "--server", "trino:8080", "--catalog", "s3", "--execute", "drop schema if exists dbt_main ;create schema dbt_main with (location = 's3://example/main/dbt' )"]
      dbt:
        build: dbt
        profiles: ["client"]
        volumes:
          - ./dbt/dbt-project:/usr/app
          - ./dbt/profiles.yml:/root/.dbt/profiles.yml
        entrypoint: dbt
      notebook:
        # To login to jupyter notebook, use password:lakefs
        build: jupyter
        container_name: notebook
        ports:
          - 8888:8888
        volumes:
          - ./etc/jupyter_notebook_config.py:/home/jovyan/.jupyter/jupyter_notebook_config.py
          - ./etc/hive-site.xml:/usr/local/spark/conf/hive-site.xml
    networks:
      default:
        name: bagel

When I run "docker compose up", I get this error:

    => ERROR [build 7/8] RUN --mount=type=cache,target=/root/.cache/go-build --mount=type=cache,target=/go/pkg GOOS=linux GOARCH=amd64    0.4s
    => CACHED [lakefs 2/8] RUN apk add -U --no-cache ca-certificates    0.0s
    => CACHED [lakefs 3/8] RUN apk add netcat-openbsd    0.0s
    => CACHED [lakefs 4/8] WORKDIR /app    0.0s
    => CACHED [lakefs 5/8] COPY ./scripts/wait-for ./    0.0s
    ------
    > [build 7/8] RUN --mount=type=cache,target=/root/.cache/go-build --mount=type=cache,target=/go/pkg GOOS=linux GOARCH=amd64 go build -ldflags "-X github.com/treeverse/lakefs/pkg/version.Version=dev" -o lakefs ./cmd/lakefs:
    #0 0.407 webui/content.go:7:12: pattern dist: no matching files found
    ------
    failed to solve: executor failed running [/bin/sh -c GOOS=$TARGETOS GOARCH=$TARGETARCH go build -ldflags "-X github.com/treeverse/lakefs/pkg/version.Version=${VERSION}" -o lakefs ./cmd/lakefs]: exit code: 1

My OS is:

    Linux B460MDS3HACY1 5.15.0-58-generic #64~20.04.1-Ubuntu SMP Fri Jan 6 16:42:31 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

My Go version is:

    go version go1.16.7 linux/amd64

What should I do to overcome this error?

Answer 1

Score: 3

Strange - the docker-compose uses an image, so it should just pull it and not try to build a Docker image.
Can you verify that the working directory holds your docker-compose file?
You can also verify that you are using the latest images by running docker-compose pull before calling docker-compose up.
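For context on why a local build fails at all: the failing step in the error log is a Go compile error from a //go:embed directive in webui/content.go, which expects the lakeFS web UI to have been built into a dist directory before the Go binary is compiled. The sketch below is a standalone illustration of that compile-time mechanism, not lakeFS code; the pattern here ("*.go") matches the sketch's own source file, so it builds, whereas a pattern like "dist" fails with exactly the error shown when no such directory exists yet.

```go
// Minimal illustration of the mechanism behind
// "webui/content.go:7:12: pattern dist: no matching files found".
package main

import (
	"embed"
	"fmt"
)

// //go:embed resolves its pattern at compile time; the matched files
// must already exist on disk. "*.go" matches this source file itself,
// so this program compiles. Replacing it with a pattern that matches
// nothing (e.g. a missing "dist" directory) aborts the build with the
// "no matching files found" error from the question.
//
//go:embed *.go
var src embed.FS

func main() {
	entries, err := src.ReadDir(".")
	if err != nil {
		panic(err)
	}
	fmt.Println("embedded files:", len(entries))
}
```

This is why pulling the published treeverse/lakefs image (whose build already includes the web UI) avoids the problem, while compiling the Go module from a bare checkout does not.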

huangapple
  • Posted on 2023-02-10 01:26:41
  • Please keep the original link when reposting: https://go.coder-hub.com/75402281.html