MultiClass Classification Confusion Matrix
Question
I'm building a CNN classification model with the classes [Pneumonia, Healthy, TB]. I've already written the code to build the model, and training went pretty well: the accuracy on the testing data is around 85%. But the confusion matrix looks a little weird to me. I use this code to create it:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

def confussion_matrix(test_true, test_pred, test_class):
    cm = confusion_matrix(test_true, test_pred)
    disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=test_class)
    fig, ax = plt.subplots(figsize=(15, 15))
    disp.plot(ax=ax, cmap=plt.cm.Blues)
    plt.show()

# Predict on the test generator and take the most likely class per image
testing_pred_raw = model.predict(testing_generator)
testing_pred = np.argmax(testing_pred_raw, axis=1)
testing_true = testing_generator.classes
testing_class = ['Pneumonia', 'Sehat', 'TB']

confussion_matrix(testing_true, testing_pred, testing_class)
```

Note: Sehat = Healthy.

I got the confusion matrix, but the spread of values is pretty weird (it looks like nowhere near 85% accuracy). Here's the result:

Is this result correct (maybe I'm just reading the confusion matrix incorrectly), or is there something in my code that needs to be modified?
Answer 1

Score: 0
Your confusion matrix shows an accuracy of about 32%: there are 15 correct classifications and 15 + 32 = 47 classifications in total, and 15/47 is ~32%.

I can't explain where your accuracy of 85% came from; it is not from this dataset. Is there any chance that your model scored 85% on the training set, and this 32% performance is on a test set?

Ah, thank you for the additional info:

> oh yeah.. i split the data for train, validation, and testing data.
> The ratio is 0.7, 0.2, 0.1. So the train set has 314 images, validation has
> 89 images, and testing has 47 images. So the testing only uses the 47
> images.

So now we know that the accuracy is ~85% on the training set, ~81% on the validation set, and ~32% on the test set. That is why we have a test set: to make sure we haven't deluded ourselves!

If you want to check whether this interpretation is correct, feed the training data or the validation data into the confusion matrix plot. You should see much more agreement along the main diagonal.
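As a quick sanity check, the accuracy implied by any confusion matrix is the sum of its main diagonal divided by the total count. A minimal sketch (the labels here are made up for illustration, not taken from the question's dataset):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical labels for a 3-class problem (0=Pneumonia, 1=Sehat, 2=TB)
y_true = [0, 0, 0, 1, 1, 1, 2, 2, 2, 2]
y_pred = [0, 1, 2, 1, 1, 0, 2, 2, 0, 1]

cm = confusion_matrix(y_true, y_pred)

# Accuracy = correct predictions (main diagonal) / all predictions
accuracy = np.trace(cm) / cm.sum()
print(cm)
print(f"accuracy = {accuracy:.2f}")  # 5 correct out of 10 -> 0.50
```

Running the same two lines on the matrix in the question is an easy way to see whether it really agrees with the accuracy your evaluation reported.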
Answer 2

Score: 0
So it seems the problem was not in the data or the CNN model. My problem was fixed by simply setting shuffle to False on my test and validation generators:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def data_generators(TRAINING_DIR, VALIDATION_DIR, TESTING_DIR):
    training_datagen = ImageDataGenerator(rescale=1./255,
                                          horizontal_flip=True)
    training_generator = training_datagen.flow_from_directory(TRAINING_DIR,
                                                              batch_size=32,
                                                              class_mode='categorical',
                                                              target_size=(128, 128))

    validation_datagen = ImageDataGenerator(rescale=1./255,
                                            horizontal_flip=True)
    validation_generator = validation_datagen.flow_from_directory(VALIDATION_DIR,
                                                                  batch_size=32,
                                                                  class_mode='categorical',
                                                                  shuffle=False,
                                                                  target_size=(128, 128))

    testing_datagen = ImageDataGenerator(rescale=1./255)
    testing_generator = testing_datagen.flow_from_directory(TESTING_DIR,
                                                            batch_size=32,
                                                            class_mode='categorical',
                                                            shuffle=False,
                                                            target_size=(128, 128))

    return training_generator, validation_generator, testing_generator
```

But I don't know why the predictions still look good if I set shuffle=True on the validation data. The important part is setting shuffle=False on the testing data.
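The reason shuffle=False matters at prediction time: `testing_generator.classes` always reports the labels in directory order, while a shuffled generator yields the images in a permuted order, so `model.predict` output and `.classes` no longer line up row by row. A small numpy sketch of the misalignment (no Keras needed; the "model" here is a hypothetical perfect oracle, and the 47-image split is taken from the question):

```python
import numpy as np

rng = np.random.default_rng(0)

# Labels in directory order, as flow_from_directory's .classes reports them
labels_in_dir_order = np.array([0]*16 + [1]*16 + [2]*15)  # 47 test images

# With shuffle=True the generator yields images in a permuted order,
# so even a perfect model's predictions come back permuted.
permutation = rng.permutation(len(labels_in_dir_order))
predictions = labels_in_dir_order[permutation]  # perfect, but shuffled

# Comparing shuffled predictions against directory-order labels
# collapses accuracy toward chance (~1/3 for three balanced classes).
misaligned_acc = (predictions == labels_in_dir_order).mean()

# With shuffle=False the orders match and the same model scores 100%.
aligned_acc = (labels_in_dir_order == labels_in_dir_order).mean()

print(f"shuffle=True  -> {misaligned_acc:.2f}")
print(f"shuffle=False -> {aligned_acc:.2f}")
```

This also suggests why the shuffled validation generator did not hurt training metrics: during `model.fit`, each batch carries its own labels, so images and labels stay paired no matter the order. The misalignment only appears when you pair `model.predict` output with `generator.classes` yourself.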