What is the entry point of a BigQuery Cloud Function?


Question

I successfully updated a BigQuery table using an external API and a Cloud Function. The entry point of the code below is hello_pubsub, but I do not know what those two parameters are. I didn't pass event and context to the function, so how can it still run my code without errors? I know the code inside the function provides all the information needed to do the load job.

import pandas as pd
import requests
from google.cloud import bigquery

def hello_pubsub(event, context):

    PROJECT = "test-391108"

    client = bigquery.Client(project=PROJECT)

    table_id = "test-391108.TEAM.MEMBER"

    API_ENDPOINT = 'https://fantasy.premierleague.com/api/bootstrap-static/'

    # Fetch the teams data from the external API.
    response = requests.get(API_ENDPOINT, timeout=5)
    response_json = response.json()
    df = pd.DataFrame(response_json['teams'])
    df = df.iloc[:, :6]  # keep only the first six columns

    # Overwrite the table with the fresh data.
    job_config = bigquery.LoadJobConfig(write_disposition="WRITE_TRUNCATE")
    job = client.load_table_from_dataframe(df, table_id, job_config=job_config)
    job.result()  # wait for the load job to finish before the function returns

Is there also another way I can schedule my code to load data from an external API into a BigQuery table without using a Cloud Function?


Answer 1

Score: 1

Posting @Mike Karp's comment as an answer for better visibility for the community.

>The two variables event and context are what is expected when you set up a PubSub trigger.

For reference, a definition of the variables is given here:
https://cloud.google.com/functions/docs/writing/write-event-driven-functions#background-functions
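To make the answer concrete: the Cloud Functions runtime itself supplies both arguments on every Pub/Sub-triggered invocation, which is why the function runs without the author ever passing them. The sketch below simulates locally what the runtime does; the payload string refresh-teams and the fake_event dict are made-up illustrations, not part of the original question.

```python
import base64

# Same signature as the question's hello_pubsub. For a background
# (Pub/Sub-triggered) function, the runtime always passes:
#   event   - a dict describing the Pub/Sub message; the message body
#             arrives base64-encoded under the 'data' key
#   context - metadata about the event (event_id, timestamp, ...)
def hello_pubsub(event, context):
    if 'data' in event:
        return base64.b64decode(event['data']).decode('utf-8')
    return '(no data)'

# Simulating the runtime's call with a hypothetical message payload:
fake_event = {'data': base64.b64encode(b'refresh-teams').decode('utf-8')}
print(hello_pubsub(fake_event, None))  # prints "refresh-teams"
```

The original function simply ignores event and context, which is fine: they are positional arguments the runtime hands over, and nothing requires the body to read them.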

huangapple
  • Posted on 2023-06-29 17:16:41
  • Please keep this link when republishing: https://go.coder-hub.com/76579703.html