Cropping a grayscale image and removing borders and other details

Could you please help me get only the grayscale image, without any borders and without any words or letters added by the device? I want a general code to do this, because I have other images with different border thicknesses and different annotations, and I want only the entire gray image.

 Accepted Answer

This example relies on the images being clean TIFF files of the expected size. It won't work on JPGs, and it won't work on the downscaled PNG thumbnails. The template images are attached.
unzip UltraS_tiff_images.zip
% the image
inpict = imread('US_tiff_images/tech_029.tiff');
inpict = im2gray(inpict); % needs to be single-channel
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% a template image of the mfr logo
logo = imread('logo.png');
% find the logo
corrmap = normxcorr2(logo,inpict);
[row col] = find(corrmap == max(corrmap(:)));
szl = size(logo,1:2);
nwcorner = [row col] - szl + 1;
% fill the logo with a known value so that it can be inpainted later
% if it doesn't get cropped out
inpict(nwcorner(1)+(0:szl(1)-1),nwcorner(2)+(0:szl(2)-1)) = 234;
% crop to the logo position
% this immediately gets rid of the top and most LH annotations
inpict = inpict(nwcorner(1):end,nwcorner(2):end);
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% a template image of the color bar
cbar = imread('cbar.png');
% find the colorbar
corrmap = normxcorr2(cbar,inpict);
[row col] = find(corrmap == max(corrmap(:)));
szl = size(cbar,1:2);
nwcorner = [row col] - szl + 1;
% fill the colorbar with a known value so that it can be inpainted later
% if it doesn't get cropped out
inpict(nwcorner(1)+(0:szl(1)-1),nwcorner(2)+(0:szl(2)-1)) = 234;
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% try to get rid of annotations by doing inpainting
% this won't always work, and the masking relies on an exact value
mask = ismember(inpict,233:234);
inpict = regionfill(inpict,mask);
% try to find the ultrasound image region
% based on the value distribution in row/col vectors
ent = entropyfilt(inpict);
xb = median(ent,1) > 2;
yb = median(ent,2) > 2;
mask = bwareafilt(xb & yb,1);
% crop again (crop2box() is from MIMT on the File Exchange)
[~,rows,cols] = crop2box(mask);
outpict = inpict(rows,cols);
% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
imshow(outpict)
If there are large bands within the ultrasound image where there is little variation (e.g. at the top or bottom of the scan), those regions may get cropped off. If there is such a band which splits the image, only the largest part will be returned.
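Note that crop2box() is not a stock toolbox function; it comes from MIMT. If you don't have MIMT installed, a minimal stand-in using only base MATLAB might look like this (a sketch; it assumes `mask` is the logical matrix produced above):

```matlab
% stand-in for MIMT crop2box(): get the row/col extents of the mask
[r,c] = find(mask);
rows = min(r):max(r);
cols = min(c):max(c);
outpict = inpict(rows,cols);
```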

5 Comments

Thank you for your great help.
Can you suggest filter types to use to enhance these images? Is it better to use smoothing or sharpening filters before feeding them into a pretrained network, @DGM?
Unfortunately, I don't know enough about preparing images for machine learning to give you good advice. Processing or analyzing ultrasound images specifically isn't something I'm accustomed to doing, so I'd have a hard time figuring out what features should be improved.
@Dina are you going to use transfer learning on a pre-built network? (This can speed training compared to building your own.) If so, what is the size of images it expects? It would be good to crop out the image alone so that you have the highest resolution when you resize for the network. Don't worry about the white letters overlaying the image. If they're in the same place all the time (like a logo or something) the network will learn to ignore those pixels.
And what are you trying to figure out about these images? Do you want to find a region (like find the dark region)? Do you want to classify them as something (like normal or abnormal)? Do you want to get a number from them (like 4.8 or 23.7 or whatever)?
Yes, I am going to use transfer learning. The expected image size is 227*227*3 or 224*224*3. The logo and white text are not in the same place for all images. I will classify the images as normal or abnormal. Could you help me with the preprocessing steps I need before using transfer learning (filtering, normalization, etc.), depending on the nature of the images? @Image Analyst
@Dina it's far too much information for me to spell out here so I suggest you go through the Deep Learning Onramp where they will teach you how to do transfer learning: https://matlabacademy.mathworks.com/details/deep-learning-onramp/deeplearning
Basically you need to make up two folders called Normal and Abnormal where you have your ground truth images (already cropped, white pixels filled in, and resized). Then you call a function called something like trainnet to train/create the neural network model. Then you can call a function like classify or minibatchpredict to predict the class on new images.
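That folder-based workflow can be sketched roughly as below. This is only a sketch: the exact function names vary by MATLAB release (this assumes an R2024a-era Deep Learning Toolbox), and the folder name `dataset` and file `newimage.png` are hypothetical placeholders.

```matlab
% folder layout (hypothetical): dataset/Normal/*.png, dataset/Abnormal/*.png
imds = imageDatastore('dataset','IncludeSubfolders',true,'LabelSource','foldernames');
[trainds,valds] = splitEachLabel(imds,0.8,'randomized');
% load a pretrained network sized for 227x227x3 inputs, with a new 2-class head
net = imagePretrainedNetwork('squeezenet',NumClasses=2);
opts = trainingOptions('adam',ValidationData=valds,MaxEpochs=5);
net = trainnet(trainds,net,'crossentropy',opts);
% predict on one new image (already cropped and resized to 227x227x3)
scores = minibatchpredict(net,single(imread('newimage.png')));
label = scores2label(scores,categories(imds.Labels));
```

The Onramp course linked above covers the same steps with release-appropriate functions, so treat it as the authoritative version.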


More Answers (3)

Read about imcrop.
[I,rect] = imcrop(myimage) ;

6 Comments

I don't want to do this by specifying the dimensions to crop; I want it done automatically, depending on the picture, so that I get only the gray image and remove words or any other writing added by the device.
inpict = imread('https://www.mathworks.com/matlabcentral/answers/uploaded_files/1745341/image.jpeg');
inpict = imcrop(inpict,[120.51 63.51 679.98 455.98]);
imshow(inpict)
If you have other images which differ significantly in size such that a fixed rectangular crop won't work, then in what other ways do they differ? Are the backgrounds always black? What about the text, ticks and annotations within the region of interest? Are those to also be addressed, or is it okay to leave them? If they need to be removed, do they vary between images?
I want the code to crop the gray part, which is the part that contains the required image, and then remove any letters, words, or shapes that do not belong to the color distribution of the cropped image, as far as possible.
The color distribution of the ultrasound image and the annotations, border, etc. are not mutually exclusive.
% the image
inpict = imread('https://www.mathworks.com/matlabcentral/answers/uploaded_files/1745341/image.jpeg');
% subdivide the image regions
north = imcrop(inpict,[0.51 0.51 799.98 57.98]);
west = imcrop(inpict,[0.51 0.51 111.98 599.98]);
roi = imcrop(inpict,[120.51 63.51 679.98 455.98]);
mask = imread('mk.png');
% pixels from the ultrasound image with no annotations
uspix = roi(~mask);
subplot(3,1,1)
imhist(uspix)
% pixels from anywhere else
annpix = [roi(mask); north(:); west(:)];
subplot(3,1,2)
imhist(annpix)
% find the pixel values common to both regions
compix = intersect(unique(uspix),unique(annpix));
commask = ismember(inpict,compix);
compix = inpict(commask);
subplot(3,1,3)
imhist(compix)
The distribution of values common to both regions spans the dynamic range of the datatype. Since the image is a JPG, and the non-ROI regions are full of high-contrast edges, this shouldn't be surprising. Let's say we just take the peaks of the histogram and select those pixels that are at least in the region which predominantly represents the non-ROI content:
% get counts and threshold
[count,cen] = imhist(annpix);
mk = count>1000;
figure
mask = ismember(inpict,cen(mk));
imshow(~mask)
That's still not terribly good by itself, especially at the edges.
If we can't simply use pixel value alone to globally discriminate between the area of interest and other parts of the image, we have to use some other information. How did I find the ROI? I used the marker annotations. Are those easy to find by graylevel alone? No, but if there is some aspect of their geometry or some other aspect of the image presentation which is consistent between images, then maybe we can use that.
For this image, it might suffice to use row/column means to find the location of the ROI, but if the annotation text on the left side extends further down, our assumptions may fail. If the average brightness of the ROI changes, the process may become unreliable.
It helps to have a more concrete understanding of the possible range of inputs.
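For what it's worth, the row/column-mean idea mentioned above might look like this (a sketch only; the threshold of 10 is arbitrary and assumes a near-black background around a brighter grayscale ROI):

```matlab
% estimate the ROI from average brightness along each axis
colmean = mean(inpict,1);   % 1 x width
rowmean = mean(inpict,2);   % height x 1
cols = find(colmean > 10);  % columns that aren't mostly black
rows = find(rowmean > 10);  % rows that aren't mostly black
outpict = inpict(min(rows):max(rows),min(cols):max(cols));
```

Again, this fails as soon as annotations outside the ROI push a row or column mean above the threshold, which is why the assumptions about the inputs matter.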
I attached other images; the original extension of the images is .tiff, but I converted them to display here.
So, you mean removing redundant objects or letters cannot be done?
What about cropping only the gray part, without specifying dimensions?


Not sure of the source of this image. It looks like a screenshot exported by your ultrasound program. That means the original grayscale image is probably contained in another file, like a DICOM file, along with some metadata, some of which gets imprinted on the screen as text. Also, in the PNG images you attached I can see different cropping: in one you included the words on the left and in another you didn't, which makes it harder to simply use a fixed ROI. So I think the best approach is to use the original-format image and extract the image from that alone, rather than a screenshot exported by some other program. If you have a DICOM image, attach it.
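If the original DICOM file is available, reading the pixel data directly avoids most of this screenshot cleanup. A minimal sketch (assumes the Image Processing Toolbox; the filename is hypothetical):

```matlab
info = dicominfo('us_scan.dcm');   % metadata; device/patient text often lives here
inpict = dicomread(info);          % raw pixel data, without the exported-screenshot border
inpict = mat2gray(inpict);         % scale to [0,1] for display
imshow(inpict)
```

Be aware that some ultrasound devices still burn annotations into the pixel data itself, in which case cropping is still needed.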

6 Comments

Need to login. Just zip up the files and attach the zip file here with the paperclip icon.
I attached samples of my images.
These are the annotated images. Is that all you have? Is your instrument not able to give you the file with the image data and the meta data separated?
Sorry about all the moving. Page shifted when I went to move the comment-as-answer above yours and I didn't catch it.


Try this:
% Demo by Image Analyst
% Initialization steps:
clc; % Clear the command window.
close all; % Close all figures (except those of imtool.)
clear; % Erase all existing variables. Or clearvars if you want.
workspace; % Make sure the workspace panel is showing.
format long g;
format compact;
fontSize = 16;
%--------------------------------------------------------------------------------------------------------
% READ IN TEST IMAGE
folder = fullfile(pwd, 'US_tiff_images');
baseFileName = "tech_004.tiff";
fullFileName = fullfile(folder, baseFileName);
% Check if file exists.
if ~isfile(fullFileName)
    % The file doesn't exist -- didn't find it there in that folder.
    % Check the entire search path (other folders) for the file by stripping off the folder.
    fullFileNameOnSearchPath = baseFileName; % No path this time.
    if ~exist(fullFileNameOnSearchPath, 'file')
        % Still didn't find it. Alert user.
        errorMessage = sprintf('Error: %s does not exist in the search path folders.', fullFileName);
        uiwait(warndlg(errorMessage));
        return;
    end
end
% Read in image file.
rgbImage = imread(fullFileName);
% Get size
[rows, columns, numberOfColorChannels] = size(rgbImage)
% Get gray scale version of it.
if numberOfColorChannels == 3
    grayImage = rgbImage(:, :, 2); % Take green channel.
else
    grayImage = rgbImage;
end
% Display the image.
subplot(2, 2, 1);
imshow(grayImage);
axis('on', 'image');
impixelinfo;
title('Original Image', 'FontSize', fontSize, 'Interpreter', 'None');
% Maximize window.
g = gcf;
g.WindowState = 'maximized';
g.Name = 'Demo by Image Analyst';
g.NumberTitle = 'off';
drawnow;
%--------------------------------------------------------------------------------------------------------
% CREATE MASK.
% Threshold. The surround is pure black, so anything above 0 is kept.
thresholdValue = 0;
% Create initial mask
imageMask = grayImage > thresholdValue;
subplot(2, 2, 2);
imshow(imageMask);
axis('on', 'image');
impixelinfo;
title('Initial Mask Image', 'FontSize', fontSize, 'Interpreter', 'None');
drawnow;
%--------------------------------------------------------------------------------------------------------
% Take convex hulls, extract the largest blob, crop, and resize to 227x227.
imageMask = bwconvhull(imageMask, 'objects');
% Take largest blob which should be the image.
imageMask = bwareafilt(imageMask, 1);
% Do an opening (erosion followed by dilation) to get rid of little
% "tendrils" caused by text sticking out of the image.
imageMask = imopen(imageMask, true(3));
subplot(2, 2, 3);
imshow(imageMask);
axis('on', 'image');
impixelinfo;
title('Final Mask Image', 'FontSize', fontSize, 'Interpreter', 'None');
drawnow;
%--------------------------------------------------------------------------------------------------------
% Get the bounding box and crop out the image from the full sized image.
props = regionprops(imageMask, 'BoundingBox');
croppedImage = imcrop(grayImage, props.BoundingBox);
croppedImage = imresize(croppedImage, [227, 227]);
subplot(2, 2, 4);
imshow(croppedImage);
axis('on', 'image');
impixelinfo;
title('Final Cropped, Resized Image', 'FontSize', fontSize, 'Interpreter', 'None');
drawnow;
%--------------------------------------------------------------------------------------------------------
% Save the image in an output folder.
outputFolder = fullfile(folder, 'output');
if ~isfolder(outputFolder)
    % Create folder if it does not exist yet.
    fprintf('Creating new folder for output images: "%s".\n', outputFolder);
    mkdir(outputFolder);
end
fullOutputFileName = fullfile(outputFolder, baseFileName);
fprintf('Saving output file "%s".\n', fullOutputFileName);
imwrite(croppedImage, fullOutputFileName);
I would not worry about the small remnants of the text that remain in the image. It's likely that they will be ignored and not cause any difference in the predicted output.
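One caveat: the networks mentioned earlier expect 227x227x3 or 224x224x3 inputs, and the demo above produces a single-channel crop. A common workaround (a sketch, assuming `croppedImage` from the demo above) is to replicate the gray channel before training or saving:

```matlab
% replicate the single gray channel to get the 3-channel input the network expects
rgbFake = repmat(croppedImage, [1 1 3]);   % 227 x 227 x 3
```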

Asked: 2 Aug 2024
Answered: 3 Aug 2024
