What is the entry point of a BigQuery Cloud Function?
Question
I successfully updated a BigQuery table using an external API and a Cloud Function. The entry point of the code below is hello_pubsub, but I do not know what those two parameters are. I never pass event and context to the function, so how does it still run without errors? I do know the code inside the function provides all the information needed for the load job.
import pandas as pd
import requests
from google.cloud import bigquery

def hello_pubsub(event, context):
    PROJECT = "test-391108"
    client = bigquery.Client(project=PROJECT)
    table_id = "test-391108.TEAM.MEMBER"

    # Fetch team data from the Fantasy Premier League API
    API_ENDPOINT = 'https://fantasy.premierleague.com/api/bootstrap-static/'
    response = requests.get(API_ENDPOINT, timeout=5)
    response_json = response.json()

    # Keep only the first six columns of the teams data
    df = pd.DataFrame(response_json['teams'])
    df = df.iloc[:, :6]

    # Overwrite the destination table with the fresh data
    job_config = bigquery.LoadJobConfig(write_disposition="WRITE_TRUNCATE")
    job = client.load_table_from_dataframe(df, table_id, job_config=job_config)
    job.result()  # wait for the load job to finish
Is there also another way I can schedule my code without using a Cloud Function and load data into a BigQuery table from an external API?
Answer 1
Score: 1
Posting @Mike Karp's comment as an answer for better visibility for the community:

> The two variables event and context are what is expected when you set up a Pub/Sub trigger.

For reference, the variables are defined here:
https://cloud.google.com/functions/docs/writing/write-event-driven-functions#background-functions
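To illustrate the comment above: the Cloud Functions runtime supplies event (the Pub/Sub message) and context (event metadata) automatically when the trigger fires. Since hello_pubsub never reads them, any values work. A minimal local sketch, using a simplified stand-in for the function body and a hypothetical message payload:

```python
import base64

def hello_pubsub(event, context):
    # The runtime passes `event` (a dict with a base64-encoded "data" field)
    # and `context` (event metadata). A function that ignores them, like the
    # one in the question, runs fine with whatever the trigger delivers.
    return base64.b64decode(event["data"]).decode("utf-8")

# Simulate what a Pub/Sub trigger would deliver (hypothetical payload):
event = {"data": base64.b64encode(b"run-load-job").decode("utf-8")}
print(hello_pubsub(event, None))  # -> run-load-job
```

In production the message content is usually irrelevant for a scheduled job like this one; the Pub/Sub message is just the signal to run.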
Comments