Parsing JSON response from API using DataFlow


How do I parse a JSON response from an API as CSV?
Currently the API response is stored as a JSON file on Blob Storage, and I now want to parse it and save it as a CSV file.

Here's a sample subset of the JSON response:

{
  "reports": [
    {
      "columnHeader": {
        "dimensions": [
          "ga:dimension2",
          "ga:dimension1",
          "ga:dimension4",
          "ga:dimension6"
        ],
        "metricHeader": {
          "metricHeaderEntries": [
            {
              "name": "ga:newUsers",
              "type": "INTEGER"
            }
          ]
        }
      },
      "data": {
        "rows": [
          {
            "dimensions": [
              "Australia",
              "Hong Kong",
              "HKNG",
              "not set"
            ],
            "metrics": [
              {
                "values": [
                  "1"
                ]
              }
            ]
          },
          {
            "dimensions": [
              "Australia",
              "Malaysia",
              "KL",
              "not set"
            ],
            "metrics": [
              {
                "values": [
                  "1"
                ]
              }
            ]
          }
        ]
      }
    }
  ]
}

The desired output.csv file should have column headers taken from the data key of the JSON file.

The column headers would ideally look like this:
+-----------+------------+------------+------------+-------------+
|dimension1 | dimension2 | dimension4 | dimension6 | metric_value|
+-----------+------------+------------+------------+-------------+
| Australia | Hong Kong  | HKNG       | not set    | 1           |
| ...       | ...        | ....       | ...        | ...         |
+-----------+------------+------------+------------+-------------+

Answer 1

Score: 1


To parse a JSON response from an API and save it as a CSV file in Azure Data Factory (ADF) Data Flow, you can use the following steps:

  • Add the Source transformation in the data flow and use the JSON file from Blob Storage as the source dataset. Set the documentForm option to arrayOfDocuments. This indicates that the JSON file contains an array of objects.

  • Add a Select transformation and map the data field of the first object in the reports array (array indexes in Data Flow expressions are 1-based) to a new column called data.

data = reports[1].data


  • Add the Flatten transformation and unroll the values array in the metrics field of the data column. Map the dimensions and values fields to new columns.

  • Then add a Derived Column transformation to create new columns based on the dimensions and values columns. New columns called dimensions2, dimensions1, dimensions4, dimensions6, and metric_values are created here.


  • Add a Select transformation to keep only the dimensions2, dimensions1, dimensions4, dimensions6, and metric_values fields.

  • Then add a Sink transformation with a CSV dataset.
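As a sketch, the Derived Column expressions for this sample could look like the following (assuming the Flatten output keeps dimensions as an array column; the column names mirror the ones above, and Data Flow array indexes are 1-based):

```
dimensions2   : dimensions[1]
dimensions1   : dimensions[2]
dimensions4   : dimensions[3]
dimensions6   : dimensions[4]
metric_values : values[1]
```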

Output:


When the pipeline with this data flow is run, the JSON data is copied to the CSV file in the required format.
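For a quick sanity check outside ADF, the same flattening can be sketched in plain Python. Note that the headers come out in the order listed under columnHeader.dimensions, so dimension2 is first for this sample:

```python
import csv
import io
import json

# Sample JSON response (subset, matching the structure in the question)
response = json.loads("""
{
  "reports": [
    {
      "columnHeader": {
        "dimensions": ["ga:dimension2", "ga:dimension1", "ga:dimension4", "ga:dimension6"],
        "metricHeader": {
          "metricHeaderEntries": [{"name": "ga:newUsers", "type": "INTEGER"}]
        }
      },
      "data": {
        "rows": [
          {"dimensions": ["Australia", "Hong Kong", "HKNG", "not set"],
           "metrics": [{"values": ["1"]}]},
          {"dimensions": ["Australia", "Malaysia", "KL", "not set"],
           "metrics": [{"values": ["1"]}]}
        ]
      }
    }
  ]
}
""")

report = response["reports"][0]

# Column names come from columnHeader.dimensions, with the "ga:" prefix stripped
headers = [d.split(":", 1)[1] for d in report["columnHeader"]["dimensions"]]
headers.append("metric_value")

out = io.StringIO()
writer = csv.writer(out)
writer.writerow(headers)
for row in report["data"]["rows"]:
    # This sample has one metric entry with a single value per row
    writer.writerow(row["dimensions"] + [row["metrics"][0]["values"][0]])

print(out.getvalue())
```

This is only an illustration of the target shape, not a replacement for the data flow; a real script would read from and write back to Blob Storage.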

Posted by huangapple on 2023-06-29 13:07:34. Please retain this link when reposting: https://go.coder-hub.com/76578190.html