STEM image analysis using OpenCV

Question

I am analyzing a TEM/STEM scanned image. The white circles are the atoms, whereas the black is the background. The captured image is noisy, and the circle boundaries are not clear.

[image: the captured STEM image]

Is there any way to enhance the image to show the circle boundary?

I ran the following code:

    # Python code
    import numpy as np
    import cv2
    import matplotlib.pyplot as plt
    from skimage import feature
    from scipy.optimize import curve_fit

    img = cv2.imread('input_image.tif', cv2.IMREAD_GRAYSCALE)
    img_noise_removed = cv2.medianBlur(img, 3)                 # 3x3 median filter to suppress noise
    mask = cv2.GaussianBlur(img_noise_removed, (101, 101), 0)  # large-kernel blur as a background estimate
    img_subtracted = cv2.absdiff(img_noise_removed, mask)      # subtract the background
    edges = feature.canny(img_subtracted, sigma=1)
    cv2.imwrite('noise_removed_image.tif', img_noise_removed)

But it is not resolving the atom boundary.
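One thing that is not in the code above but is sometimes tried for this kind of "boundary not visible" problem is local contrast enhancement (CLAHE) before the edge detector. The snippet below is only a sketch along those lines; the clipLimit and tileGridSize values are assumptions that would need tuning on the actual images.

    # Sketch only: boost local contrast before Canny; parameters are assumptions.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img_enhanced = clahe.apply(img_noise_removed)          # reuses the median-filtered image from above
    edges_enhanced = feature.canny(img_enhanced, sigma=1)
    cv2.imwrite('contrast_enhanced.tif', img_enhanced)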

Second part of the code:

    import cv2
    import numpy as np
    from skimage.feature import peak_local_max
    from skimage.filters import threshold_otsu

    img = cv2.imread('test2.tif', cv2.IMREAD_GRAYSCALE)
    # median filter and a Laplacian filter
    img_median = cv2.medianBlur(img, 3)
    img_laplacian = cv2.Laplacian(img_median, cv2.CV_64F, ksize=3)
    # Threshold the image using Otsu's method
    thresh = threshold_otsu(img_laplacian)
    binary = img_laplacian > thresh
    # Finding the coordinates of the local maxima in the image
    coords = peak_local_max(binary, min_distance=5, threshold_abs=0.3)
    # Writing the coordinates to a text file
    with open('coordinates.txt', 'w') as f:
        for coord in coords:
            f.write('{} {}\n'.format(coord[1], coord[0]))
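As a quick visual check of what peak_local_max returns (a sketch only, reusing img_median and coords from the snippet above), the detected peaks can be overlaid on the image:

    # Sketch only: overlay the detected peaks on the median-filtered image.
    import matplotlib.pyplot as plt

    plt.imshow(img_median, cmap='gray')
    plt.scatter(coords[:, 1], coords[:, 0], s=10, c='red', marker='+')  # coords are (row, col)
    plt.title('peak_local_max detections')
    plt.show()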

Answer 1

Score: 3

The other two answers attempt to extract an outline for each atom, then find the centroid of those outlines. I think this is the wrong approach, you want to use the gray values in the image for more than finding an outline. By computing the gray-weighted first order moment (centroid of the gray-scale blob, rather than the centroid of the outline) you can get a much more precise result. Also, you can get this result without filtering the image first.
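To make that concrete: the gray-weighted centroid of a blob is sum(I * position) / sum(I) over the blob's pixels. The snippet below is only an illustration of that formula with scipy.ndimage, using a plain connected-components labelling as a stand-in for the watershed labelling in the answer's code further down; the file name comes from the question and the threshold of 20 from the code below, neither is prescribed.

    # Sketch only: gray-weighted centroids, i.e. sum(I * position) / sum(I) per blob.
    import cv2
    from scipy import ndimage

    gray = cv2.imread('input_image.tif', cv2.IMREAD_GRAYSCALE)        # file name as in the question
    blobs, n = ndimage.label(gray > 20)                               # crude foreground labelling
    centroids = ndimage.center_of_mass(gray, blobs, range(1, n + 1))  # (row, col), i.e. (y, x)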

I am assuming the example image is comparable to the actual images you deal with. If the actual images are more noisy, you might need to adjust some parameters to the watershed function for it to be robust against that noise.

I'm using DIPlib [disclaimer: I'm an author] because I'm more familiar with it than OpenCV, and because DIPlib is meant for precise measurements, unlike OpenCV.

    import diplib as dip
    # Read in the image
    img = dip.ImageRead("5V6nl.jpg", 'bioformats')
    img = img(0)  # The JPEG has 3 channels, though it's a gray-scale image
    # We want to measure the position of the atoms in pixels. If there is pixel
    # size information in the input image, it will be attached to the image,
    # and the measurement will be in physical units. To avoid this, we remove
    # the pixel size information. But you can keep it if you need it!
    img.SetPixelSize([])
    # The watershed of the inverse image gives a label for each atom, we'll be
    # measuring inside each label independently
    # (the "high first" flag is like inverting the image)
    mask = img > 20
    labels = dip.Watershed(img, mask, flags={"labels", "high first"})
    # The "Gravity" feature is the gray-weighted first order moment
    msr = dip.MeasurementTool.Measure(labels, img, ["Gravity"])
    # Iterate over the resulting centroids.
    # Note that there is no specific order to them.
    gravity = msr["Gravity"]
    for o in msr.Objects():
        values = gravity[o]  # this is a list with two elements (x, y)
        print(f"Object {o}: ({values[0]:.4f}, {values[1]:.4f})")

This prints out a list with 325 items:

    Object 1: (137.8478, 151.9975)
    Object 2: (24.9894, 89.0565)
    Object 3: (49.9969, 89.0618)
    Object 4: (25.0534, 151.9687)
    Object 5: (275.0785, 130.1821)
    Object 6: (125.3453, 152.0229)
    ...
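If the centroids should end up in a text file like the coordinates.txt in the question, a minimal sketch of that step (reusing msr and gravity from the code above):

    # Sketch only: write the measured centroids in the same "x y" format as the question.
    with open('coordinates.txt', 'w') as f:
        for o in msr.Objects():
            x, y = gravity[o]
            f.write(f'{x:.4f} {y:.4f}\n')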

Note that the atoms at the edge of the image will have wrong centroids. I would suggest ignoring them in these measurements, for example by discarding centroids that are too close to any of the image boundaries.
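A minimal sketch of that filtering, assuming a fixed pixel margin (the value 10 is a placeholder, roughly an atom radius) and DIPlib's (x, y) ordering of the image sizes:

    # Sketch only: skip centroids that lie within `margin` pixels of the border.
    margin = 10                  # placeholder; choose based on the atom spacing
    width, height = img.Sizes()  # DIPlib image sizes are ordered (x, y)
    for o in msr.Objects():
        x, y = gravity[o]
        if margin <= x < width - margin and margin <= y < height - margin:
            print(f"Object {o}: ({x:.4f}, {y:.4f})")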


The labels image looks like this:

[image: the labels image]

Notice how the regions in which we measure are quite loose. The only goal is to contain the full blob for each atom, so that the centroid measurement works correctly. We don't care about the exact extent of these regions; any darker pixels they contain will not affect the result very much.

If you examine your intermediate labels image and notice multiple regions for one atom, it means you have more noise in your image than we have in the example image here. In that case you need to adjust the maxDepth parameter to dip.Watershed(). This parameter controls merging of regions. Increasing that parameter (the default is 1) will result in fewer regions. You will have to tweak it until you see exactly one region per atom.

    labels = dip.Watershed(img, mask, maxDepth=10, flags={"labels", "high first"})

Answer 2

Score: 1

Here is one way to find the centroids of the atoms in Python/OpenCV.

  • Read the input
  • Convert to grayscale
  • Threshold so as to separate the atoms
  • Get contours
  • For each contour, get the image moments and compute the centroids.
  • Print each centroid
  • Draw a small circle at the centroid on a copy of the input
  • Save the results

Input:

[image: input]

    import cv2
    import numpy as np
    # read image
    img = cv2.imread('STEM.jpg')
    # convert to gray
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # threshold
    thresh = cv2.threshold(gray, 164, 255, cv2.THRESH_BINARY)[1]
    # get contours
    result = img.copy()
    contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = contours[0] if len(contours) == 2 else contours[1]
    index = 1
    for cntr in contours:
        M = cv2.moments(cntr)
        cx = int(M["m10"] / M["m00"])
        cy = int(M["m01"] / M["m00"])
        print(index, cx, cy)
        cv2.circle(result, (cx, cy), 2, (0, 0, 255), -1)
        index = index + 1
    # save results
    cv2.imwrite('STEM_centroids.jpg', result)
    # show results
    cv2.imshow('thresh', thresh)
    cv2.imshow('result', result)
    cv2.waitKey(0)
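If the fixed threshold of 164 does not carry over to other images, Otsu's method (already used in the question's second snippet) can pick the threshold automatically; a sketch of the one-line change:

    # Sketch only: automatic threshold instead of the hand-picked 164.
    thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]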

Threshold:

[image: thresholded image]

Result:

[image: centroids drawn on the input]

  1. Index Centroid(x y)
  2. 1 302 254
  3. 2 290 254
  4. 3 277 254
  5. 4 265 254
  6. 5 252 255
  7. 6 239 254
  8. 7 227 254
  9. 8 214 255
  10. 9 202 254
  11. 10 189 255
  12. 11 176 254
  13. 12 164 254
  14. 13 151 255
  15. 14 138 254
  16. 15 126 254
  17. 16 113 254
  18. 17 101 254
  19. 18 88 255
  20. 19 75 254
  21. 20 63 254
  22. 21 50 255
  23. 22 37 254
  24. 23 25 254
  25. 24 12 255
  26. 25 1 254
  27. 26 302 234
  28. 27 289 234
  29. 28 277 234
  30. 29 264 234
  31. 30 251 234
  32. 31 239 234
  33. 32 226 234
  34. 33 214 234
  35. 34 201 234
  36. 35 189 234
  37. 36 176 234
  38. 37 163 234
  39. 38 151 234
  40. 39 138 234
  41. 40 126 234
  42. 41 113 234
  43. 42 100 234
  44. 43 88 234
  45. 44 75 234
  46. 45 63 234
  47. 46 50 234
  48. 47 37 234
  49. 48 25 234
  50. 49 12 234
  51. 50 1 234
  52. 51 301 214
  53. 52 289 213
  54. 53 276 213
  55. 54 264 213
  56. 55 251 213
  57. 56 238 213
  58. 57 226 213
  59. 58 213 213
  60. 59 201 213
  61. 60 188 213
  62. 61 176 213
  63. 62 163 213
  64. 63 151 213
  65. 64 138 213
  66. 65 125 213
  67. 66 113 213
  68. 67 100 213
  69. 68 88 213
  70. 69 75 214
  71. 70 62 213
  72. 71 50 213
  73. 72 37 213
  74. 73 25 213
  75. 74 12 213
  76. 75 1 213
  77. 76 301 193
  78. 77 288 193
  79. 78 276 193
  80. 79 263 193
  81. 80 251 193
  82. 81 238 193
  83. 82 226 193
  84. 83 213 193
  85. 84 200 193
  86. 85 188 193
  87. 86 175 193
  88. 87 163 193
  89. 88 150 193
  90. 89 138 193
  91. 90 125 193
  92. 91 113 193
  93. 92 100 193
  94. 93 87 193
  95. 94 75 193
  96. 95 62 193
  97. 96 50 193
  98. 97 37 193
  99. 98 25 193
  100. 99 12 193
  101. 100 1 193
  102. 101 275 172
  103. 102 263 172
  104. 103 250 172
  105. 104 238 172
  106. 105 225 172
  107. 106 213 172
  108. 107 200 172
  109. 108 188 172
  110. 109 175 172
  111. 110 163 172
  112. 111 150 172
  113. 112 138 172
  114. 113 125 172
  115. 114 112 172
  116. 115 100 172
  117. 116 87 172
  118. 117 75 172
  119. 118 62 172
  120. 119 50 172
  121. 120 37 172
  122. 121 25 172
  123. 122 12 172
  124. 123 1 172
  125. 124 300 171
  126. 125 288 171
  127. 126 175 152
  128. 127 162 152
  129. 128 150 152
  130. 129 137 152
  131. 130 125 152
  132. 131 112 151
  133. 132 100 151
  134. 133 87 152
  135. 134 75 152
  136. 135 62 152
  137. 136 50 151
  138. 137 37 152
  139. 138 25 151
  140. 139 12 152
  141. 140 1 151
  142. 141 300 150
  143. 142 287 150
  144. 143 275 150
  145. 144 262 150
  146. 145 250 150
  147. 146 237 150
  148. 147 225 150
  149. 148 212 150
  150. 149 200 150
  151. 150 187 150
  152. 151 299 130
  153. 152 287 130
  154. 153 275 130
  155. 154 262 130
  156. 155 250 130
  157. 156 237 130
  158. 157 225 130
  159. 158 212 130
  160. 159 200 130
  161. 160 187 130
  162. 161 175 130
  163. 162 162 130
  164. 163 150 130
  165. 164 137 130
  166. 165 125 130
  167. 166 112 130
  168. 167 100 130
  169. 168 87 130
  170. 169 75 130
  171. 170 62 130
  172. 171 50 130
  173. 172 37 130
  174. 173 25 130
  175. 174 12 130
  176. 175 1 130
  177. 176 287 109
  178. 177 262 109
  179. 178 249 109
  180. 179 237 109
  181. 180 212 109
  182. 181 187 109
  183. 182 162 109
  184. 183 137 109
  185. 184 112 109
  186. 185 87 109
  187. 186 62 109
  188. 187 37 109
  189. 188 12 109
  190. 189 299 109
  191. 190 274 109
  192. 191 225 109
  193. 192 200 109
  194. 193 175 109
  195. 194 150 109
  196. 195 125 109
  197. 196 100 109
  198. 197 75 109
  199. 198 50 109
  200. 199 25 109
  201. 200 1 109
  202. 201 299 89
  203. 202 287 89
  204. 203 274 89
  205. 204 262 89
  206. 205 249 89
  207. 206 237 89
  208. 207 224 88
  209. 208 212 89
  210. 209 199 89
  211. 210 187 89
  212. 211 174 89
  213. 212 162 89
  214. 213 149 89
  215. 214 137 89
  216. 215 124 89
  217. 216 112 89
  218. 217 99 89
  219. 218 87 89
  220. 219 74 89
  221. 220 62 89
  222. 221 49 89
  223. 222 37 89
  224. 223 24 89
  225. 224 12 89
  226. 225 1 89
  227. 226 274 68
  228. 227 262 68
  229. 228 249 68
  230. 229 237 68
  231. 230 224 68
  232. 231 212 68
  233. 232 199 68
  234. 233 187 68
  235. 234 174 68
  236. 235 162 68
  237. 236 149 68
  238. 237 137 68
  239. 238 124 68
  240. 239 112 68
  241. 240 99 68
  242. 241 87 68
  243. 242 74 68
  244. 243 62 68
  245. 244 49 68
  246. 245 37 68
  247. 246 25 68
  248. 247 12 68
  249. 248 1 68
  250. 249 299 67
  251. 250 287 67
  252. 251 112 47
  253. 252 99 47
  254. 253 87 48
  255. 254 74 47
  256. 255 62 48
  257. 256 49 47
  258. 257 37 48
  259. 258 24 47
  260. 259 12 48
  261. 260 1 47
  262. 261 299 46
  263. 262 286 46
  264. 263 274 46
  265. 264 262 46
  266. 265 249 46
  267. 266 237 46
  268. 267 224 46
  269. 268 212 46
  270. 269 199 46
  271. 270 187 46
  272. 271 174 46
  273. 272 162 46
  274. 273 149 46
  275. 274 137 46
  276. 275 124 46
  277. 276 299 26
  278. 277 286 26
  279. 278 274 26
  280. 279 261 26
  281. 280 249 26
  282. 281 237 26
  283. 282 224 26
  284. 283 212 26
  285. 284 199 26
  286. 285 187 26
  287. 286 174 26
  288. 287 162 26
  289. 288 149 26
  290. 289 137 26
  291. 290 124 26
  292. 291 112 26
  293. 292 99 26
  294. 293 87 26
  295. 294 74 26
  296. 295 62 26
  297. 296 49 26
  298. 297 37 26
  299. 298 24 26
  300. 299 12 26
  301. 300 1 26
  302. 301 299 5
  303. 302 286 5
  304. 303 274 5
  305. 304 261 5
  306. 305 249 5
  307. 306 236 5
  308. 307 224 5
  309. 308 212 5
  310. 309 199 5
  311. 310 187 5
  312. 311 174 5
  313. 312 162 5
  314. 313 149 5
  315. 314 137 5
  316. 315 124 5
  317. 316 112 5
  318. 317 99 5
  319. 318 87 5
  320. 319 74 5
  321. 320 62 5
  322. 321 49 5
  323. 322 37 5
  324. 323 24 5
  325. 324 12 5
  326. 325 1 5

Answer 3

Score: 0

Your question is similar to https://stackoverflow.com/a/17116465/1510289, and I believe a similar solution would apply in your case, namely to apply a threshold after the blur until you generate satisfactory contours, i.e. boundaries.

Below is the code (mentioned in the link), but with parameters customized for your specific input image to yield accurate contours.

    import cv2
    image = cv2.imread('input.jpg')
    image2 = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    image2 = cv2.GaussianBlur(image2, ksize=(11, 11), sigmaX=1, sigmaY=1)
    cv2.imwrite('blurred.png', image2)
    hello, image2 = cv2.threshold(image2, thresh=140, maxval=255, type=cv2.THRESH_BINARY)
    cv2.imwrite('thresholded.png', image2)
    contours, hier = cv2.findContours(
        image2,
        mode=cv2.RETR_EXTERNAL,
        method=cv2.CHAIN_APPROX_NONE)
    print(f'Number of contours: {len(contours)}, hit any key to continue')
    cv2.drawContours(
        image,
        contours=contours,
        contourIdx=-1,
        color=(0, 255, 0),
        thickness=1)
    cv2.imwrite('augmented.png', image)
    cv2.imshow('hello', image)
    cv2.waitKey(-1)
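If atom coordinates are wanted in addition to the drawn boundaries, the contours found above can be reduced to centroids with cv2.moments, as in the second answer; a minimal sketch (the zero-area guard is an addition, not part of either answer):

    # Sketch only: centroid of each contour found above.
    for i, cntr in enumerate(contours, start=1):
        M = cv2.moments(cntr)
        if M["m00"] == 0:   # skip degenerate contours (added guard)
            continue
        print(i, int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))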

blurred.png:

[image]

thresholded.png:

[image]

augmented.png:

[image]
