How to use K-Fold cross validation with DenseNet121 model


Question

I am working on classifying breast cancer images using the pretrained DenseNet121 model. I split the dataset into training, testing, and validation sets, and I want to apply k-fold cross-validation. I used cross_validation from the sklearn library, but I get the error below when I run the code. I tried to solve it, but nothing worked. Does anyone have an idea how to fix this?

    import tensorflow as tf
    from tensorflow.keras.layers import Dense, Flatten
    from tensorflow.keras.models import Model

    # Frozen DenseNet121 backbone with a small dense classification head
    in_model = tf.keras.applications.DenseNet121(input_shape=(224, 224, 3),
                                                 include_top=False,
                                                 weights='imagenet', classes=2)
    in_model.trainable = False
    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = in_model(inputs)
    flat = Flatten()(x)
    dense_1 = Dense(1024, activation='relu')(flat)
    dense_2 = Dense(1024, activation='relu')(dense_1)
    prediction = Dense(2, activation='softmax')(dense_2)
    in_pred = Model(inputs=inputs, outputs=prediction)
    validation_data = (valid_data, valid_labels)

    in_pred.summary()
    in_pred.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.0002),
                    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=False),
                    metrics=['accuracy'])
    history = in_pred.fit(train_data, train_labels, epochs=3, batch_size=32,
                          validation_data=validation_data)
    model_result = cross_validation(in_pred, train_data, train_labels, 5)  # this call raises the TypeError shown below

The error:

    TypeError: Cannot clone object '<keras.engine.functional.Functional object at 0x000001F82E17E3A0>'
    (type <class 'keras.engine.functional.Functional'>):
    it does not seem to be a scikit-learn estimator as it does not implement a 'get_params' method.

Answer 1

Score: 2

Since your model is not a scikit-learn estimator, you won't be able to use sklearn's built-in cross_validate method.
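
For context, scikit-learn's cross-validation utilities copy the estimator for every fold via sklearn.base.clone(), and clone() requires a get_params() method that a Keras Model does not provide; that is exactly what the TypeError above is complaining about. Here is a minimal sketch that reproduces the check, assuming in_pred is the compiled model from the question:

    from sklearn.base import clone

    # clone() is what cross_validate/cross_val_score use internally to copy the
    # estimator for each fold; a Keras Model lacks get_params(), so it fails.
    try:
        clone(in_pred)
    except TypeError as err:
        print(err)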

You can, however, use KFold to split your data into k folds and compute the metrics for each fold. You can use TF's built-in model.evaluate here, or sklearn's metrics, too.

    import tensorflow as tf
    from tensorflow.keras.layers import Dense, Flatten
    from tensorflow.keras.models import Model
    from sklearn.model_selection import KFold

    # Build the same frozen DenseNet121 backbone with a dense classification head
    in_model = tf.keras.applications.DenseNet121(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet", classes=2
    )
    in_model.trainable = False
    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = in_model(inputs)
    flat = Flatten()(x)
    dense_1 = Dense(1024, activation="relu")(flat)
    dense_2 = Dense(1024, activation="relu")(dense_1)
    prediction = Dense(2, activation="softmax")(dense_2)
    in_pred = Model(inputs=inputs, outputs=prediction)
    validation_data = (valid_data, valid_labels)

    in_pred.summary()
    in_pred.compile(
        optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.0002),
        loss=tf.keras.losses.CategoricalCrossentropy(from_logits=False),
        metrics=["accuracy"],
    )

    # Split the training data into k folds, then train and evaluate on each fold
    kf = KFold(n_splits=10)
    kf.get_n_splits(train_data)
    for i, (fold_train_index, fold_test_index) in enumerate(kf.split(train_data)):
        print(f"Fold {i}:")
        print(f"  Train: index={fold_train_index}")
        print(f"  Test:  index={fold_test_index}")
        history = in_pred.fit(
            train_data[fold_train_index],
            train_labels[fold_train_index],
            epochs=3,
            batch_size=32,
            validation_data=validation_data,
        )
        # Metrics on the held-out fold
        in_pred.evaluate(train_data[fold_test_index], train_labels[fold_test_index])
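
A possible refinement, sketched here rather than taken from the original answer, and assuming train_data and train_labels are NumPy arrays: rebuild a fresh model for each fold so the folds do not share trained weights, and average the accuracy returned by model.evaluate across folds. The build_model helper below is hypothetical shorthand that simply repeats the model definition from the answer.

    import numpy as np
    from sklearn.model_selection import KFold

    def build_model():
        # Hypothetical helper: rebuilds and recompiles the model so every fold
        # starts from the pretrained backbone with a freshly initialized head.
        base = tf.keras.applications.DenseNet121(
            input_shape=(224, 224, 3), include_top=False, weights="imagenet")
        base.trainable = False
        inputs = tf.keras.Input(shape=(224, 224, 3))
        x = Flatten()(base(inputs))
        x = Dense(1024, activation="relu")(x)
        x = Dense(1024, activation="relu")(x)
        outputs = Dense(2, activation="softmax")(x)
        model = Model(inputs, outputs)
        model.compile(
            optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.0002),
            loss=tf.keras.losses.CategoricalCrossentropy(),
            metrics=["accuracy"],
        )
        return model

    fold_scores = []
    kf = KFold(n_splits=5, shuffle=True, random_state=42)
    for i, (tr_idx, te_idx) in enumerate(kf.split(train_data)):
        model = build_model()  # fresh classification head for each fold
        model.fit(train_data[tr_idx], train_labels[tr_idx],
                  epochs=3, batch_size=32, verbose=0)
        loss, acc = model.evaluate(train_data[te_idx], train_labels[te_idx], verbose=0)
        fold_scores.append(acc)
        print(f"Fold {i}: accuracy={acc:.4f}")

    print(f"Mean CV accuracy: {np.mean(fold_scores):.4f} +/- {np.std(fold_scores):.4f}")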

