DataFrame object is not callable when sharing a variable between functions


Question


This line is causing an error: SKU_Metrics = SKU_Details(SAS_full_dataset).
The error I get is 'DataFrame' object is not callable. Basically, I want a pattern where I call the SAS_full_dataset() function inside other functions, store the result as a variable, and do transformations on that variable within those functions. Notice that I pass the function as a parameter to the functions that are going to call it. I think this is a sensible approach. What also puzzles me is that the code doesn't error out in some places, like SAS_full_dataset = SAS_full_dataset().
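
To show the pattern in isolation, here is a minimal sketch with made-up names (not my real datasets): a builder function is passed in as a parameter and called inside the consumer.

import pandas as pd

def build_dataset():
    # stand-in for SAS_full_dataset(): builds and returns a DataFrame
    return pd.DataFrame({'SKU': ['A-1', 'B-2'], 'Ext. Sell Price': [10.0, 20.0]})

def consume(build_dataset):
    df = build_dataset()                       # works: the argument is still the function
    return df.loc[df['Ext. Sell Price'] > 15]

result = consume(build_dataset)                # pass the function object, not build_dataset()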

I can't figure out why I am getting this error. In the code below, some file-import assignments were left out for length's sake.

def data_cleaning_and_filtering(SAS_current_period,SAS_prior_period,primary_vendor):
    ## filtering out OT,LT Verticals
    SAS_current_period = SAS_current_period.loc[(SAS_current_period['Vertical Code'] != 'LT') & (SAS_current_period['Vertical Code'] != 'OT' )]
    SAS_prior_period = SAS_prior_period.loc[(SAS_prior_period['Vertical Code'] != 'LT') & (SAS_prior_period['Vertical Code'] != 'OT' )]

    ## filtering out CustTypes UPS, S&S Latam, S&S Canada
    SAS_current_period = SAS_current_period.loc[(SAS_current_period['Customer Type Code'] != 'NT') & (SAS_current_period['Customer Type Code'] != 'GH' ) & (SAS_current_period['Customer Type Code'] != 'NJ' )]
    SAS_prior_period = SAS_prior_period.loc[(SAS_prior_period['Customer Type Code'] != 'NT') & (SAS_prior_period['Customer Type Code'] != 'GH' ) & (SAS_prior_period['Customer Type Code'] != 'NJ' )]

    ## filtering out negative quantities and Revenue
    SAS_current_period = SAS_current_period.loc[(SAS_current_period['Qty Shipped'] > 0) & (SAS_current_period['Ext. Sell Price'] > 0)]
    SAS_prior_period = SAS_prior_period.loc[(SAS_prior_period['Qty Shipped'] > 0) & (SAS_prior_period['Ext. Sell Price'] > 0)]

    ## filtering to shipBR = 01
    SAS_current = SAS_current_period.loc[(SAS_current_period['Ship Br.'] == 1)]
    SAS_prior = SAS_prior_period.loc[(SAS_prior_period['Ship Br.'] == 1)]

    ## dropping/Excluding SKU's where primary vendor = 1708 (Midwest)
    primary_vendor['SKU'] = primary_vendor['PRDLIN'] + '-' + primary_vendor['PRODNO']
    exclude_list = primary_vendor.loc[(primary_vendor['VENDNO'] == 1708),['SKU']]
    #SAS_current = SAS_current_period.drop(SAS_current_period[SAS_current_period.SKU.isin(exclude_list[:])].index)
    #SAS_prior = SAS_prior_period.drop(SAS_prior_period[SAS_prior_period.SKU.isin(exclude_list[:])].index)

    return SAS_current,SAS_prior

The overall goal of the next handful of functions is to build sections of the sp1 file in blocks, then join them in a final function at the end to return the full sp1 adjustment file.

This is a function to join the base and current period invoice datasets, because we only want to make recommendations on SKUs that have been bought in the last year.

def SAS_full_dataset():
    SAS_current,SAS_prior = data_cleaning_and_filtering(SAS_current_period,SAS_prior_period,primary_vendor)
    SAS_full_dataset = pd.concat([SAS_current, SAS_prior])
    SAS_full_dataset['Invoiced Date'] = pd.to_datetime(SAS_full_dataset['Invoiced Date']).dt.strftime('%Y-%m-%d')
    SAS_full_dataset = SAS_full_dataset.sort_values(by='Invoiced Date',ascending=False)
    return SAS_full_dataset

Here we are putting together the SKU Details tab of the adjustment table.

def SKU_Details(SAS_full_dataset):
    SAS_full_dataset = SAS_full_dataset()
    SAS_full_dataset = pd.merge(SAS_full_dataset,AVG_cost[['SKU','inventory code']],how='left',on='SKU')
    SKU_Details=SAS_full_dataset
    SKU_Details = pd.merge(SKU_Details,AVG_cost[['SKU','inventory code']],how='left',on='SKU')
    SKU_Details = SAS_full_dataset[['Tracking Ln','PCAT','SKU','inventory code']]
    SKU_Details['Tracking Ln|PCAT'] = SKU_Details['Tracking Ln']+'|'+SKU_Details['PCAT']
    SKU_Details = SKU_Details.drop_duplicates(subset='SKU')
    return SKU_Details
# this function returns the dataframe associated with the 'sku metrics' tab of
# the sp1 adjustment file
def SKU_Metrics(SAS_full_dataset):
    # doing aggregate calculation on base dataset. keep in mind this dataset has been filtered
    SAS_full_dataset = SAS_full_dataset()
    SAS_full_dataset['Ext. Sell Price'].astype(float).inplace=True
    SAS_full_dataset['TTM']=SAS_full_dataset.groupby(['SKU'])['Ext. Sell Price'].transform('sum')
    
    # creating SKU metrics tabs
    SKU_Metrics = SKU_Details(SAS_full_dataset)
    SKU_Metrics = pd.merge(SKU_Metrics,SAS_full_dataset[['SKU','TTM Revenue']],how='left',on='SKU')
    
    Historical_SP1['Effective Date'] = pd.to_datetime(Historical_SP1['Effective Date']).dt.strftime('%Y-%m-%d')
    Historical_SP1.sort_values(['Effective Date'],ascending=False).groupby('SKU').inplace=True
    current_period_sp1 = Historical_SP1.drop_duplicates(subset='SKU',keep='first')
    base_period_sp1 = np.where(Historical_SP1['Effective Date'].isin(base_period),base_period_sp1)
    return SKU_Metrics
    
SKU_Metrics = SKU_Metrics(SAS_full_dataset)

The traceback is the following:

TypeError                                 Traceback (most recent call last)
c:\Users\mstevens\python_test _Copy_for_pricing\python_sns\pricing_test_docs\Sp1 Adjustment June23.py in line 141
    145     base_period_sp1 = np.where(Historical_SP1['Effective Date'].isin(base_period),base_period_sp1)
    146     return SKU_Metrics
--> 148 SKU_Metrics = SKU_Metrics(SAS_full_dataset)

c:\Users\mstevens\python_test _Copy_for_pricing\python_sns\pricing_test_docs\Sp1 Adjustment June23.py in line 133, in SKU_Metrics(SAS_full_dataset)
    138 SAS_full_dataset['TTM']=SAS_full_dataset.groupby(['SKU'])['Ext. Sell Price'].transform('sum')
    139 # creating SKU metrics tabs
--> 140 SKU_Metrics = SKU_Details(SAS_full_dataset)
    141 SKU_Metrics = pd.merge(SKU_Metrics,SAS_full_dataset[['SKU','TTM Revenue']],how='left',on='SKU')
    142 Historical_SP1['Effective Date'] = pd.to_datetime(Historical_SP1['Effective Date']).dt.strftime('%Y-%m-%d')

c:\Users\mstevens\python_test _Copy_for_pricing\python_sns\pricing_test_docs\Sp1 Adjustment June23.py in line 115, in SKU_Details(SAS_full_dataset)
    121 def SKU_Details(SAS_full_dataset):
--> 122     SAS_full_dataset = SAS_full_dataset()
    123     SAS_full_dataset = pd.merge(SAS_full_dataset,AVG_cost[['SKU','inventory code']],how='left',on='SKU')
    124     SKU_Details=SAS_full_dataset

TypeError: 'DataFrame' object is not callable

Answer 1

Score: 1

SKU_Details=SKU_Details(SAS_full_dataset)

After executing this line, SKU_Details isn't a function anymore.

You've reassigned the name SKU_Details to be the value that was returned from the function call, in this case a DataFrame object.
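
The same rebinding is what actually triggers the traceback: inside SKU_Metrics, SAS_full_dataset = SAS_full_dataset() rebinds that name to a DataFrame, and that DataFrame is then passed into SKU_Details, whose first line tries to call it again. One way to avoid it, as a sketch only, assuming the rest of the script (pd, AVG_cost, data_cleaning_and_filtering and its inputs) stays as in the question, is to give the builder function and its result different names and pass the DataFrame itself around. The names build_sas_full_dataset, sas_full_df and sku_details_df below are illustrative renamings, not names from the original code.

def build_sas_full_dataset():
    # renamed from SAS_full_dataset() so its result never shadows the function
    SAS_current, SAS_prior = data_cleaning_and_filtering(SAS_current_period, SAS_prior_period, primary_vendor)
    full = pd.concat([SAS_current, SAS_prior])
    full['Invoiced Date'] = pd.to_datetime(full['Invoiced Date']).dt.strftime('%Y-%m-%d')
    return full.sort_values(by='Invoiced Date', ascending=False)

def SKU_Details(sas_full_df):
    # receives the already-built DataFrame and never tries to call it
    details = pd.merge(sas_full_df, AVG_cost[['SKU', 'inventory code']], how='left', on='SKU')
    details = details[['Tracking Ln', 'PCAT', 'SKU', 'inventory code']]
    details['Tracking Ln|PCAT'] = details['Tracking Ln'] + '|' + details['PCAT']
    return details.drop_duplicates(subset='SKU')

sas_full_df = build_sas_full_dataset()       # call the builder exactly once
sku_details_df = SKU_Details(sas_full_df)    # store results under new names so SKU_Details
                                             # (and, likewise, SKU_Metrics) remain functions

The same renaming applies to the SKU_Metrics function and the final SKU_Metrics = SKU_Metrics(SAS_full_dataset) line, which otherwise rebinds SKU_Metrics to a DataFrame as well.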
