Error resulting from running code in Colab

Question

I'm running the code in parts in Colab, and an error appears in this part. Please tell me what is wrong and, if possible, how to fix it.

    model.fit(x, y, epochs=epochs, batch_size=batch_size, callbacks=callbacks)

Error:

    Epoch 1/2
    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    <ipython-input-171-3925766564e3> in <cell line: 1>()
    ----> 1 model.fit(x, y, epochs=epochs, batch_size=batch_size, callbacks=callbacks)

    1 frames
    /usr/local/lib/python3.10/dist-packages/keras/engine/training.py in tf__train_function(iterator)
         13   try:
         14     do_return = True
    ---> 15     retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
         16   except:
         17     do_return = False

    ValueError: in user code:

        File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1284, in train_function  *
            return step_function(self, iterator)
        File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1268, in step_function  **
            outputs = model.distribute_strategy.run(run_step, args=(data,))
        File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1249, in run_step  **
            outputs = model.train_step(data)
        File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1051, in train_step
            loss = self.compute_loss(x, y, y_pred, sample_weight)
        File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1109, in compute_loss
            return self.compiled_loss(
        File "/usr/local/lib/python3.10/dist-packages/keras/engine/compile_utils.py", line 240, in __call__
            self.build(y_pred)
        File "/usr/local/lib/python3.10/dist-packages/keras/engine/compile_utils.py", line 182, in build
            self._losses = tf.nest.map_structure(
        File "/usr/local/lib/python3.10/dist-packages/keras/engine/compile_utils.py", line 353, in _get_loss_object
            loss = losses_mod.get(loss)
        File "/usr/local/lib/python3.10/dist-packages/keras/losses.py", line 2653, in get
            return deserialize(identifier, use_legacy_format=use_legacy_format)
        File "/usr/local/lib/python3.10/dist-packages/keras/losses.py", line 2600, in deserialize
            return legacy_serialization.deserialize_keras_object(
        File "/usr/local/lib/python3.10/dist-packages/keras/saving/legacy/serialization.py", line 543, in deserialize_keras_object
            raise ValueError(

        ValueError: Unknown loss function: 'nse'. Please ensure you are using a `keras.utils.custom_object_scope` and that this object is included in the scope. See https://www.tensorflow.org/guide/keras/save_and_serialize#registering_the_custom_object for details.

This is the code:

    # TensorFlow and tf.keras
    import tensorflow as tf
    # Helper libraries.
    import numpy as np
    import matplotlib.pyplot as plt
    import pandas as pd

    # daily prices from github
    filename = pd.read_csv('/content/SBER_100101_230510 (1) (1).csv')
    print(df.shape, df.columns)
    df.rename(columns={"<DATE>": "Date", "<TIME>": "Time", "<OPEN>": "Open", "<HIGH>": "High", "<LOW>": "Low", "<CLOSE>": "Close", "<VOL>": "Volume"}, inplace=True)
    print(df.shape, df.columns)
    df["Date"] = pd.to_datetime(df["Date"], format="%d/%m/%y")
    print(df.shape, df.columns)

    # Drop today's date - if the data was downloaded today
    from datetime import datetime
    today = datetime.today().strftime('%Y-%m-%d')
    last_day = df.at[df.shape[0] - 1, 'Date'].strftime('%Y-%m-%d')
    if today == last_day: df = df[:-1]
    df

    split = 0.85
    i_split = int(len(df) * split)
    cols = ["Close", "Volume"]
    data_train = df.get(cols).values[:i_split]
    data_test = df.get(cols).values[i_split:]
    len_train = len(data_train)
    len_test = len(data_test)
    print(len(df), len_train, len_test)
    data_train, len_train, data_test, len_test
    data_train.shape, data_test.shape

    sequence_length = 50
    input_dim = 2
    batch_size = 32
    epochs = 2

    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(100, input_shape=(sequence_length-1, input_dim), return_sequences=True),
        tf.keras.layers.Dropout(.2),
        tf.keras.layers.LSTM(100, return_sequences=True),
        tf.keras.layers.LSTM(100, return_sequences=False),
        tf.keras.layers.Dropout(.2),
        tf.keras.layers.Dense(1, activation='linear')
    ])
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(50, input_shape=(sequence_length-1, input_dim), return_sequences=True),
        tf.keras.layers.Dropout(.2),
        tf.keras.layers.LSTM(50, return_sequences=False),
        tf.keras.layers.Dropout(.2),
        tf.keras.layers.Dense(1, activation='linear')
    ])
    model.summary()
    tf.keras.utils.plot_model(model, "multi_input_and_output_model.png", show_shapes=True)

    model.compile(optimizer='adam', loss=tf.keras.losses.MeanSquaredError(), metrics=['accuracy'])
    model.compile(optimizer='adam', loss='nse')
    model.compile(optimizer='adam', loss='nse', metrics=['accuracy'])

    def normalise_windows(window_data, single_window=False):
        normalised_data = []
        window_data = [window_data] if single_window else window_data
        for window in window_data:
            normalised_window = []
            for col_i in range(window.shape[1]):
                normalised_col = [((float(p) / float(window[0, col_i])) - 1) for p in window[:, col_i]]
                normalised_window.append(normalised_col)
            normalised_window = np.array(normalised_window).T  # reshape and transpose array back into original multidimensional format
            normalised_data.append(normalised_window)
        return np.array(normalised_data)

    def _next_window(i, seq_len, normalise):
        window = data_train[i:i+seq_len]
        window = normalise_windows(window, single_window=True)[0] if normalise else window
        x = window[:-1]
        y = window[-1, [0]]
        return x, y

    def get_train_data(seq_len, normalise):
        data_x = []
        data_y = []
        for i in range(len_train - seq_len + 1):
            x, y = _next_window(i, seq_len, normalise)
            data_x.append(x)
            data_y.append(y)
        return np.array(data_x), np.array(data_y)

    x, y = get_train_data(seq_len=sequence_length, normalise=True)
    print(x, y, x.shape, y.shape)

    def get_train_data2(seq_len, normalise):
        data_windows = []
        for i in range(len_train - seq_len + 1):
            data_windows.append(data_train[i:i+seq_len])
        data_windows = np.array(data_windows).astype(float)
        data_windows = normalise_windows(data_windows, single_window=False) if normalise else data_windows
        x = data_windows[:, :-1]
        y = data_windows[:, -1, [0]]
        return x, y

    x2, y2 = get_train_data2(
        seq_len=sequence_length,
        normalise=True
    )
    print("train data shapes: ", x.shape, y.shape)
    print("train data shapes: ", x2.shape, y2.shape)

    import math
    steps_per_epoch = math.ceil((len_train - sequence_length) / batch_size)
    print(steps_per_epoch)
    batch_size = 32

    from keras.callbacks import EarlyStopping
    callbacks = [EarlyStopping(monitor='val_loss', patience=2)]
    callbacks = [EarlyStopping(monitor='accuracy', patience=2)]
    model.fit(x, y, epochs=epochs, batch_size=batch_size, callbacks=callbacks)
    model.fit(x, y, epochs=epochs)

    def get_test_data():
        data_windows = []
        for i in range(len_test - seq_len):
            data_windows.append(data_test[i:i+seq_len])
        data_windows = np.array(data_windows).astype(float)
        data_windows = normalise_windows(data_windows, single_window=False) if normalise else data_windows
        x = data_windows[:, :-1]
        y = data_windows[:, -1, [0]]
        return x, y

    x_test, y_test = get_test_data(
        seq_len=sequence_length,
        normalise=True
    )
    print("test data shapes:", x_test.shape, y_test.shape)
    model.evaluate(x_test, y_test, verbose=2)

    def get_last_data(seq_len, normalise):
        last_data = data_test[seq_len:]
        data_windows = np.array(last_data).astype(float)
        data_windows = normalise_windows(data_windows, single_window=True) if normalise else data_windows
        return data_windows

    last_data_2_predict_prices = get_last_data(-(sequence_lenght-1), False)
    last_data_2_predict_prices_1st_prise = last_data_2_predict_prices[0][0]
    last_data_2_predict = get_last_data(-(sequence_lenght-1), True)
    print("*** ", -(sequence_lenght-1), last_data_2_predict.size, "***")
    last_data_2_predict
    last_data_2_predict_prices
    last_data_2_predict_prices_1st_prise

    predictions2 = model.predict(last_data_2_predict)
    print(predictions2, predictions2[0][0])

    def de_normalise_predicted(price_1st, _data):
        return (_data + 1) * price_1st

    predicted_price = de_normalise_predicted(last_data_2_predict_prices_1st_price, predictions2[0][0])
    predicted_price

Answer 1

Score: 1

The error is due to two typos here:

    model.compile(optimizer='adam', loss='nse')
    model.compile(optimizer='adam', loss='nse', metrics=['accuracy'])

which should use 'mse' (mean squared error) instead of 'nse':

    model.compile(optimizer='adam', loss='mse')
    model.compile(optimizer='adam', loss='mse', metrics=['accuracy'])
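
Alternatively, if 'nse' was actually meant as a custom loss (for example the Nash-Sutcliffe efficiency, common in hydrology) rather than a typo for 'mse', you can sidestep the string lookup by defining the function and passing the callable to compile(). This is a minimal sketch under that assumption; the question never defines 'nse':

    import tensorflow as tf

    def nse(y_true, y_pred):
        # Hypothetical Nash-Sutcliffe efficiency loss: minimising
        # sum((y_true - y_pred)^2) / sum((y_true - mean(y_true))^2)
        # is equivalent to minimising 1 - NSE (NSE = 1 at a perfect fit).
        numerator = tf.reduce_sum(tf.square(y_true - y_pred))
        denominator = tf.reduce_sum(tf.square(y_true - tf.reduce_mean(y_true)))
        return numerator / (denominator + tf.keras.backend.epsilon())

    model.compile(optimizer='adam', loss=nse, metrics=['accuracy'])

Passing a callable (or registering the object with keras.utils.custom_object_scope, as the error message suggests) avoids the "Unknown loss function" lookup entirely.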

I also had to modify this line:

    df["Date"] = pd.to_datetime(df["Date"], format="%Y%m%d")  # instead of "%d/%m/%y"
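
Presumably the `<DATE>` column stores compact YYYYMMDD values (an assumption inferred from the filename SBER_100101_230510, which ends in two such dates), which "%Y%m%d" parses and "%d/%m/%y" does not:

    import pandas as pd

    # Assuming dates arrive as YYYYMMDD strings/integers, e.g. 20230510:
    print(pd.to_datetime("20230510", format="%Y%m%d"))  # 2023-05-10 00:00:00
    # format="%d/%m/%y" raises a ValueError on the same input.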

Once the model is trained, I get an error from the get_test_data() function (TypeError: get_test_data() got an unexpected keyword argument 'seq_len'), but I guess that's your work in progress. Let me know if you need more help!

Answer 2

Score: 0

Try this; hopefully it will help:

    def get_test_data(seq_len, normalise):
        data_windows = []
        for i in range(len_test - seq_len):
            data_windows.append(data_test[i:i+seq_len])
        data_windows = np.array(data_windows).astype(float)
        data_windows = normalise_windows(data_windows, single_window=False) if normalise else data_windows
        x = data_windows[:, :-1]
        y = data_windows[:, -1, [0]]
        return x, y
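
With seq_len and normalise declared as parameters, the keyword-argument call from the question then works as written:

    x_test, y_test = get_test_data(seq_len=sequence_length, normalise=True)
    print("test data shapes:", x_test.shape, y_test.shape)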
