Afternoon,
I am doing some deep learning with whole slide images (biomedical images of physical samples). I'm using OpenSlide to tile the whole slide image (about 5 GB) into smaller patches and then convert each patch to the RGB color space (read_region returns the patch as an RGBA PIL image object). At 5x magnification, when I save a patch down I see strange noise: many of the pixels look fluorescent. This does not happen at higher magnifications. Does anyone know why OpenSlide's read_region is producing this, or how I could post-process the patches to remove the artifact? I have tried two ways of converting the image, see below.
1. Converting the RGBA patch to RGB with cv2.cvtColor:
for w in range(boundaries[0][0], boundaries[0][1], 500):
    for h in range(boundaries[1][0], boundaries[1][1], 500):
        # read_region returns an RGBA PIL image at level 3
        patch = scan.read_region((w-2000, h-2000), 3, (500, 500))
        # matching 4000x4000 region from the mask array (img)
        img2 = img[h-2000:h+2000, w-2000:w+2000, :]
        patch = np.asarray(patch)
        # drop the alpha channel with OpenCV
        patchRGB = cv2.cvtColor(patch, cv2.COLOR_RGBA2RGB)
        print(np.mean(patchRGB), img2.shape)
        # only keep patches whose mask region has the expected shape
        # and whose mean intensity is between 50 and 200
        if (img2.shape == (4000, 4000, 3) and np.mean(patchRGB) < 200) and np.mean(patchRGB) > 50:
            cv2.imwrite('output/test/images/' + os.path.basename(ndpi)[:-5] + '_' + str(w) + '_' + str(h) + '.png', patchRGB)
            cv2.imwrite('output/test/masks/' + os.path.basename(ndpi)[:-5] + '_' + str(w) + '_' + str(h) + '_masks.png', img2)
2. Converting with PIL's convert('RGB'):
for w in range(boundaries[0][0], boundaries[0][1], 500):
    for h in range(boundaries[1][0], boundaries[1][1], 500):
        patch = scan.read_region((w-2000, h-2000), 3, (500, 500))
        img2 = img[h-2000:h+2000, w-2000:w+2000, :]
        # drop the alpha channel with PIL instead of OpenCV
        patchRGB = patch.convert('RGB')
        print(np.mean(patchRGB), img2.shape)
        if (img2.shape == (4000, 4000, 3) and np.mean(patchRGB) < 200) and np.mean(patchRGB) > 50:
            patchRGB.save('output/5x/images/' + os.path.basename(ndpi)[:-5] + '_' + str(w) + '_' + str(h) + '.png')
            cv2.imwrite('output/5x/masks/' + os.path.basename(ndpi)[:-5] + '_' + str(w) + '_' + str(h) + '_masks.png', img2)
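One thing I have been wondering about but have not tested: as far as I understand, read_region returns RGBA, and areas that fall outside the slide (or empty background at the lower-resolution levels) come back as transparent pixels. Could the "fluorescent" noise simply be whatever RGB values happen to sit under alpha = 0, which both of my conversions keep because they just discard the alpha channel? If so, would compositing the patch onto a white background before dropping the alpha be a sensible post-processing step? Something like this rough sketch (untested; the white fill value is just my assumption):

from PIL import Image

def rgba_to_rgb_on_white(patch):
    # patch: the RGBA PIL image returned by read_region.
    # Paste it onto an opaque white canvas, using the alpha band as the
    # paste mask, so transparent pixels become white instead of keeping
    # whatever RGB values lie underneath them.
    background = Image.new('RGB', patch.size, (255, 255, 255))
    background.paste(patch, mask=patch.split()[3])
    return background

Is that a reasonable approach, or is something else going on inside read_region at 5x?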