How do I integrate countEachLabel(pxds) into my code?

11 views (last 30 days)
Hudson on 21 Nov 2024 at 20:09
Edited: Walter Roberson on 21 Nov 2024 at 21:43
I have this code which generates a segmented image from an input image using a deep learning model. I need to calculate the area of the pixels in the newly formed segmented images for each label. I know countEachLabel(pxds) can do this, but I am unsure how to integrate it back into my function so that it works for the newly generated images. I have tried doing it by hand and with ChatGPT and can't figure it out; my code is below. Any suggestions for how to do this? Examples of the images are also attached below: the colored one is a results image, while the white one is one of the images I used to make some of the ground truths.
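For reference, here is a rough, untested sketch of the kind of per-label pixel count I am after. C stands for the categorical mask returned by semanticseg, and the pixel-area value is just a placeholder for whatever the real pixel size would be:
% Per-label pixel counts for one segmented image (C is the categorical
% mask returned by semanticseg); pixelArea is a placeholder factor.
labelNames  = categories(C);       % class names the network was trained with
pixelCounts = countcats(C(:));     % number of pixels assigned to each label
pixelArea   = 1;                   % e.g. mm^2 per pixel, if known
labelAreas  = pixelCounts * pixelArea;
areaTable   = table(labelNames, pixelCounts, labelAreas)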
function trainedNet = trainDeepLabModel(gTruth, numClasses, networkType, downsamplingFactor, maxEpochs, miniBatchSize)
% Function to train a DeepLab v3+ model with specified parameters.
% Uses a fixed image size of [256 256 3].
% Parameters:
%   gTruth             - Ground truth dataset object for training
%   numClasses         - Number of segmentation classes (e.g., 2 for binary mask)
%   networkType        - Base network type, e.g., 'resnet18'
%   downsamplingFactor - Downsampling factor for DeepLab, e.g., 8 or 16
%   maxEpochs          - Maximum number of training epochs
%   miniBatchSize      - Size of each training mini-batch

    % Extract image and label data from the gTruth object
    [imds, pxds] = pixelLabelTrainingData(gTruth);

    % Use a fixed, manageable image size
    imageSize = [256 256 3]; % Adjusted image size for stability

    % Combine image and label datastores
    cds = combine(imds, pxds);

    % Apply preprocessing to resize images and labels
    tds = transform(cds, @(data) preprocessTrainingData(data, imageSize));

    % Create the DeepLab v3+ network as a layer graph
    lgraph = deeplabv3plusLayers(imageSize, numClasses, networkType, ...
        'DownsamplingFactor', downsamplingFactor);

    % Training options with reduced MiniBatchSize
    opts = trainingOptions('sgdm', ...
        'MiniBatchSize', miniBatchSize, ...
        'MaxEpochs', maxEpochs, ...
        'Verbose', true, ...
        'Plots', 'training-progress');

    % Clear GPU memory before training (if using GPU)
    if canUseGPU
        reset(gpuDevice);
    end

    % Train the network
    trainedNet = trainNetwork(tds, lgraph, opts);
    disp('Training completed successfully!');
end
function data = preprocessTrainingData(data, imageSize)
% Resize image and pixel label data to match imageSize
    data{1} = imresize(data{1}, imageSize(1:2));
    data{2} = imresize(data{2}, imageSize(1:2));
    % Convert grayscale to RGB if required by the network
    if size(data{1}, 3) == 1
        data{1} = repmat(data{1}, 1, 1, 3);
    end
end
function results = applyTrainedModel(trainedNet, testImagesDir, outputDir, imageSize)
% Function to apply the trained model to new images and save results.
% Parameters:
%   trainedNet    - Trained DeepLab v3+ network
%   testImagesDir - Directory of test images
%   outputDir     - Directory to save the segmented results
%   imageSize     - Size of images expected by the network [height, width, channels]

    % Create an imageDatastore for the test images
    testImds = imageDatastore(testImagesDir);

    % Initialize results structure
    results = struct();

    % Loop through each test image
    for i = 1:numel(testImds.Files)
        % Read the test image
        img = readimage(testImds, i);

        % Check if the image needs to be resized
        if size(img, 1) ~= imageSize(1) || size(img, 2) ~= imageSize(2)
            imgResized = imresize(img, imageSize(1:2)); % Resize only width and height
        else
            imgResized = img; % Keep original size if it matches
        end

        % Perform semantic segmentation
        C = semanticseg(imgResized, trainedNet);

        % Overlay the segmentation mask on the original image
        B = labeloverlay(imgResized, C);

        % Save the segmented image
        if ~isempty(outputDir)
            [~, fileName, ext] = fileparts(testImds.Files{i});
            outputFileName = fullfile(outputDir, strcat(fileName, '_segmented', ext));
            imwrite(B, outputFileName);
            disp(['Saved segmented image to: ' outputFileName]);
        end

        % Store results in structure
        results(i).OriginalImage = img;
        results(i).SegmentedOverlay = B;
        results(i).SegmentationMask = C;
    end
end
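If it helps to show what I mean by integrating the counting back into the function: one idea I have considered (untested) is adding something like the following inside the loop of applyTrainedModel, right after the semanticseg call, so each entry of results also carries its per-label counts:
% Untested sketch of extra lines inside the loop of applyTrainedModel,
% placed right after C = semanticseg(imgResized, trainedNet):
results(i).LabelNames  = categories(C);     % label names for this mask
results(i).PixelCounts = countcats(C(:));   % pixels per label in this mask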
% Define parameters for training
numClasses = 7;          % Adjust according to your dataset
networkType = 'resnet18';
downsamplingFactor = 16;
maxEpochs = 1;
miniBatchSize = 1;       % Smaller batch size to reduce memory load

% Train the model
trainedNet = trainDeepLabModel(gTruth, numClasses, networkType, downsamplingFactor, maxEpochs, miniBatchSize);

% Define the paths to the "Binary mask" dataset and the output folder
testImagesDir = fullfile(getenv('HOME'), 'Downloads', 'Training');
outputDir = fullfile(getenv('HOME'), 'Downloads', 'Training Results');

% Apply the trained model to the "Binary mask" dataset
results = applyTrainedModel(trainedNet, testImagesDir, outputDir, [256 256 3]); % Use imageSize from training
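I have also wondered whether countEachLabel could be used on the new results directly by running semanticseg over the whole test datastore, since it then returns a pixelLabelDatastore. Something along these lines (untested; the resize step and the pixel-area value are assumptions, and grayscale test images would additionally need the channel replication used in preprocessTrainingData):
% Untested sketch: per-label pixel counts over all test images at once.
testImds    = imageDatastore(testImagesDir);
resizedImds = transform(testImds, @(img) imresize(img, [256 256]));
% With a datastore input, semanticseg writes label images to disk and
% returns a pixelLabelDatastore that countEachLabel can summarize.
pxdsResults = semanticseg(resizedImds, trainedNet, 'WriteLocation', tempdir);
tbl = countEachLabel(pxdsResults)    % Name, PixelCount, ImagePixelCount
% Convert pixel counts to area if the physical pixel size is known
pixelArea = 1;                       % placeholder, e.g. mm^2 per pixel
tbl.Area  = tbl.PixelCount * pixelArea;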

Answers (0)

Release

R2024b
