Error resulting from running code in Colab
I'm running the code in parts in Colab, and an error appears at this step. Please tell me what is wrong and, if possible, how to fix it.
model.fit(x, y, epochs=epochs, batch_size=batch_size, callbacks=callbacks)
Error:
Epoch 1/2
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-171-3925766564e3> in <cell line: 1>()
----> 1 model.fit(x, y, epochs=epochs, batch_size=batch_size, callbacks=callbacks)
1 frames
/usr/local/lib/python3.10/dist-packages/keras/engine/training.py in tf__train_function(iterator)
13 try:
14 do_return = True
---> 15 retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
16 except:
17 do_return = False
ValueError: in user code:
File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1284, in train_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1268, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1249, in run_step **
outputs = model.train_step(data)
File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1051, in train_step
loss = self.compute_loss(x, y, y_pred, sample_weight)
File "/usr/local/lib/python3.10/dist-packages/keras/engine/training.py", line 1109, in compute_loss
return self.compiled_loss(
File "/usr/local/lib/python3.10/dist-packages/keras/engine/compile_utils.py", line 240, in __call__
self.build(y_pred)
File "/usr/local/lib/python3.10/dist-packages/keras/engine/compile_utils.py", line 182, in build
self._losses = tf.nest.map_structure(
File "/usr/local/lib/python3.10/dist-packages/keras/engine/compile_utils.py", line 353, in _get_loss_object
loss = losses_mod.get(loss)
File "/usr/local/lib/python3.10/dist-packages/keras/losses.py", line 2653, in get
return deserialize(identifier, use_legacy_format=use_legacy_format)
File "/usr/local/lib/python3.10/dist-packages/keras/losses.py", line 2600, in deserialize
return legacy_serialization.deserialize_keras_object(
File "/usr/local/lib/python3.10/dist-packages/keras/saving/legacy/serialization.py", line 543, in deserialize_keras_object
raise ValueError(
ValueError: Unknown loss function: 'nse'. Please ensure you are using a `keras.utils.custom_object_scope` and that this object is included in the scope. See https://www.tensorflow.org/guide/keras/save_and_serialize#registering_the_custom_object for details.
This is the code:
# TensorFlow and tf.keras
import tensorflow as tf
# Helper libraries.
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# daily prices from github
df = pd.read_csv('/content/SBER_100101_230510 (1) (1).csv')
print(df.shape, df.columns)
df.rename(columns={"<DATE>": "Date", "<TIME>": "Time", "<OPEN>": "Open", "<HIGH>": "High", "<LOW>": "Low", "<CLOSE>": "Close", "<VOL>": "Volume"}, inplace=True)
print(df.shape, df.columns)
df["Date"] = pd.to_datetime(df["Date"], format="%d/%m/%y")
print(df.shape, df.columns)
# Drop today's date from the data, in case it was downloaded today
from datetime import datetime
today = datetime.today().strftime('%Y-%m-%d')
last_day = df.at[df.shape[0] - 1, 'Date'].strftime('%Y-%m-%d')
if today == last_day: df = df[:-1]
df
split = 0.85
i_split = int(len(df) * split)
cols = ["Close", "Volume"]
data_train = df.get(cols).values[:i_split]
data_test = df.get(cols).values[i_split:]
len_train = len(data_train)
len_test = len(data_test)
print(len(df), len_train, len_test)
data_train, len_train, data_test, len_test
data_train.shape, data_test.shape
sequence_length = 50
input_dim = 2
batch_size = 32
epochs = 2
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(100, input_shape=(sequence_length-1, input_dim), return_sequences=True),
    tf.keras.layers.Dropout(.2),
    tf.keras.layers.LSTM(100, return_sequences=True),
    tf.keras.layers.LSTM(100, return_sequences=False),
    tf.keras.layers.Dropout(.2),
    tf.keras.layers.Dense(1, activation='linear')
])
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(50, input_shape=(sequence_length-1, input_dim), return_sequences=True),
    tf.keras.layers.Dropout(.2),
    tf.keras.layers.LSTM(50, return_sequences=False),
    tf.keras.layers.Dropout(.2),
    tf.keras.layers.Dense(1, activation='linear')
])
model.summary()
tf.keras.utils.plot_model(model, "multi_input_and_output_model.png", show_shapes=True)
model.compile(optimizer='adam', loss=tf.keras.losses.MeanSquaredError(), metrics=['accuracy'])
model.compile(optimizer='adam', loss='nse')
model.compile(optimizer='adam', loss='nse', metrics=['accuracy'])
def normalise_windows(window_data, single_window=False):
    normalised_data = []
    window_data = [window_data] if single_window else window_data
    for window in window_data:
        normalised_window = []
        for col_i in range(window.shape[1]):
            normalised_col = [((float(p) / float(window[0, col_i])) - 1) for p in window[:, col_i]]
            normalised_window.append(normalised_col)
        normalised_window = np.array(normalised_window).T  # reshape and transpose array back into original multidimensional format
        normalised_data.append(normalised_window)
    return np.array(normalised_data)
def _next_window(i, seq_len, normalise):
    window = data_train[i:i+seq_len]
    window = normalise_windows(window, single_window=True)[0] if normalise else window
    x = window[:-1]
    y = window[-1, [0]]
    return x, y
def get_train_data(seq_len, normalise):
    data_x = []
    data_y = []
    for i in range(len_train - seq_len + 1):
        x, y = _next_window(i, seq_len, normalise)
        data_x.append(x)
        data_y.append(y)
    return np.array(data_x), np.array(data_y)
x, y = get_train_data(seq_len=sequence_length,normalise=True)
print(x, y, x.shape, y.shape)
def get_train_data2(seq_len, normalise):
    data_windows = []
    for i in range(len_train - seq_len + 1):
        data_windows.append(data_train[i:i+seq_len])
    data_windows = np.array(data_windows).astype(float)
    data_windows = normalise_windows(data_windows, single_window=False) if normalise else data_windows
    x = data_windows[:, :-1]
    y = data_windows[:, -1, [0]]
    return x, y
x2, y2 = get_train_data2(
seq_len=sequence_length,
normalise=True
)
print("train data shapes: ", x.shape, y.shape)
print("train data shapes: ", x2.shape, y2.shape)
import math
steps_per_epoch=math.ceil((len_train - sequence_length) / batch_size)
print(steps_per_epoch)
batch_size=32
from keras.callbacks import EarlyStopping
callbacks = [EarlyStopping(monitor='val_loss', patience=2)]
callbacks = [EarlyStopping(monitor='accuracy', patience=2)]
model.fit(x, y, epochs = epochs, batch_size = batch_size, callbacks = callbacks)
model.fit(x,y,epochs=epochs)
def get_test_data()
data_windows=[]
for i in range(len_test - seq_len):
data_windows.append(data_test[i:i+seq_len])
data_windows = np.array(data_windows).astype(float)
data_windows = normalise_windows(data_windows, single_window=False) if normalise else data_windows
x = data_windows[:, :-1]
y = data_windows[:, -1, [0]]
return x,y
x_test, y_test = get_test_data(
seq_len=sequence_length,
normalise=True
)
print("test data shapes:", x_test.shape, y_test.shape)
model.evaluate(x_test, y_test, verbose=2)
def get_last_data(seq_len, normalise):
    last_data = data_test[seq_len:]
    data_windows = np.array(last_data).astype(float)
    data_windows = normalise_windows(data_windows, single_window=True) if normalise else data_windows
    return data_windows
last_data_2_predict_prices = get_last_data(-(sequence_lenght-1), False)
last_data_2_predict_prices_1st_prise = last_data_2_predict_prices[0][0]
last_data_2_predict = get_last_data(-(sequence_lenght-1), True)
print("*** ", - (-(sequence_lenght-1), last_data_2_predict.size, "***"))
last_data_2_predict
last_data_2_predict_prices
last_data_2_predict_prices_1st_prise
predictions2=model.predict(last_data_2_predict)
print(predictions2, predictions2[0][0])
def de_normalise_predicted(price_1st, _data):
    return (_data + 1) * price_1st
predicted_price = de_normalise_predicted(last_data_2_predict_prices_1st_price, predictions2[0][0])
predicted_price
Answer 1
Score: 1
The error is due to two typos here:
model.compile(optimizer='adam', loss='nse')
model.compile(optimizer='adam', loss='nse', metrics=['accuracy'])
which should use mse (mean squared error) instead of nse:
model.compile(optimizer='adam', loss='mse')
model.compile(optimizer='adam', loss='mse', metrics=['accuracy'])
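(As a quick sanity check of what the 'mse' identifier computes, independent of Keras, here is a minimal NumPy sketch; the target and prediction values are made up just for illustration:)

```python
import numpy as np

# Hypothetical targets and predictions, purely illustrative.
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 2.0, 2.0])

# Mean squared error: the average of the squared differences.
mse = np.mean((y_true - y_pred) ** 2)
print(mse)  # (0.25 + 0.0 + 1.0) / 3
```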
I also had to modify this line here:
df["Date"] = pd.to_datetime(df["Date"], format="%Y%m%d") # instead of "%d/%m/%y"
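(To see why the format string matters — assuming the &lt;DATE&gt; column holds values like 20230510, i.e. YYYYMMDD, which is what "%Y%m%d" expects — a quick check with the standard library:)

```python
from datetime import datetime

# Assuming a <DATE> value like "20230510" (YYYYMMDD).
d = datetime.strptime("20230510", "%Y%m%d")
print(d.date())  # 2023-05-10

# The original "%d/%m/%y" format cannot parse such a value:
try:
    datetime.strptime("20230510", "%d/%m/%y")
except ValueError as e:
    print("ValueError:", e)
```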
Once the model is trained I get an error with the get_test_data() function (TypeError: get_test_data() got an unexpected keyword argument 'seq_len'), but I guess that's your work-in-progress. Let me know if you need more help!
Answer 2
Score: 0
Try this:
Hopefully it will help.
def get_test_data(seq_len, normalise):
    data_windows = []
    for i in range(len_test - seq_len):
        data_windows.append(data_test[i:i+seq_len])
    data_windows = np.array(data_windows).astype(float)
    data_windows = normalise_windows(data_windows, single_window=False) if normalise else data_windows
    x = data_windows[:, :-1]
    y = data_windows[:, -1, [0]]
    return x, y
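To illustrate what this function returns, here is the same windowing logic on a tiny synthetic stand-in for data_test (the real data has two columns, Close and Volume; the numbers here are arbitrary):

```python
import numpy as np

# Toy stand-in for data_test: 10 rows, 2 columns (e.g. Close, Volume).
data_test = np.arange(20, dtype=float).reshape(10, 2)
len_test = len(data_test)
seq_len = 5

# Build overlapping windows of length seq_len.
data_windows = []
for i in range(len_test - seq_len):
    data_windows.append(data_test[i:i+seq_len])
data_windows = np.array(data_windows)

# x: every step of each window except the last;
# y: the first column (Close) of each window's last step.
x = data_windows[:, :-1]
y = data_windows[:, -1, [0]]
print(x.shape, y.shape)  # (5, 4, 2) (5, 1)
```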