
Is it possible to automate image scaling?

Jordan Schroeder on 14 Apr 2022
Answered: Shreshth on 5 Feb 2024
Can the Deep Learning Toolbox or basic edge detection be used to find the bottom edge of the taillight housing and apply its known dimension of 24.125" to the measured pixel count? Deep learning, or some other method, would then need to identify the width of the car (green line in OP), get its pixel count, and do the math to scale and find the dimension. Is there a way to do that? The following code does exactly what I need, but all the inputs are manual.
%%
clc
clear
TAtaillight=imread('blank DSE Tail Panel.jpg');
sz = size(TAtaillight);
myData.Units = 'pixels';
myData.MaxValue = hypot(sz(1),sz(2));
myData.Colormap = hot;
myData.ScaleFactor = 1;
hIm = imshow(TAtaillight);
hIm.ButtonDownFcn = @(~,~) startDrawing(hIm.Parent,myData);
function startDrawing(hAx,myData)
% Create a line ROI object. Specify the initial color of the line and
% store the |myData| structure in the |UserData| property of the ROI.
h = images.roi.Line('color',[0, 0, 0.5625],'UserData',myData);
% Set up a listener for movement of the line ROI. When the line ROI moves,
% the |updateLabel| callback updates the text in the line ROI label and
% changes the color of the line, based on its length.
addlistener(h,'MovingROI',@updateLabel);
% Set up a listener for clicks on the line ROI. When you click on the line
% ROI, the |updateUnits| callback opens a GUI that lets you specify the
% known distance in real-world units, such as inches or meters.
addlistener(h,'ROIClicked',@updateUnits);
% Get the current mouse location from the |CurrentPoint| property of the
% axes and extract the _x_ and _y_ coordinates.
cp = hAx.CurrentPoint;
cp = [cp(1,1) cp(1,2)];
% Begin drawing the ROI from the current mouse location. Using the
% |beginDrawingFromPoint| method, you can draw multiple ROIs.
h.beginDrawingFromPoint(cp);
% Add a custom option to the line ROI context menu to delete all existing
% line ROIs.
c = h.UIContextMenu;
uimenu(c,'Label','Delete All','Callback',@deleteAll);
end
function updateLabel(src,evt)
% Get the current line position.
pos = evt.Source.Position;
% Determine the length of the line.
diffPos = diff(pos);
mag = hypot(diffPos(1),diffPos(2));
% Choose a color from the colormap based on the length of the line. The
% line changes color as it gets longer or shorter.
colorIndex = max(1,ceil(64*(mag/src.UserData.MaxValue)));
color = src.UserData.Colormap(colorIndex,:);
% Apply the scale factor to line length to calibrate the measurements.
mag = mag*src.UserData.ScaleFactor;
% Update the label.
set(src,'Label',[num2str(mag,'%30.1f') ' ' src.UserData.Units],'Color',color);
end
function updateUnits(src,evt)
% When you double-click the ROI label, the example opens a popup dialog box
% to get information about the actual distance. Use this information to
% scale all line ROI measurements.
if strcmp(evt.SelectionType,'double') && strcmp(evt.SelectedPart,'label')
% Display the popup dialog box.
answer = inputdlg({'Known distance','Distance units'},...
'Specify known distance',[1 20],{'24.125','inches'});
% Determine the scale factor based on the inputs.
num = str2double(answer{1});
% Get the length of the current line ROI.
pos = src.Position;
diffPos = diff(pos);
mag = hypot(diffPos(1),diffPos(2));
% Calculate the scale factor by dividing the known length value by the
% current length, measured in pixels.
scale = num/mag;
% Store the scale factor and the units information in the |myData|
% structure.
myData.Units = answer{2};
myData.MaxValue = src.UserData.MaxValue;
myData.Colormap = src.UserData.Colormap;
myData.ScaleFactor = scale;
% Reset the data stored in the |UserData| property of all existing line
% ROI objects. Use |findobj| to find all line ROI objects in the axes.
hAx = src.Parent;
hROIs = findobj(hAx,'Type','images.roi.Line');
set(hROIs,'UserData',myData);
% Update the label in each line ROI object, based on the information
% collected in the input dialog.
for i = 1:numel(hROIs)
pos = hROIs(i).Position;
diffPos = diff(pos);
mag = hypot(diffPos(1),diffPos(2));
set(hROIs(i),'Label',[num2str(mag*scale,'%30.1f') ' ' answer{2}]);
end
% Reset the |ButtonDownFcn| callback function with the current |myData|
% value.
hIm = findobj(hAx,'Type','image');
hIm.ButtonDownFcn = @(~,~) startDrawing(hAx,myData);
end
end
function deleteAll(src,~)
hFig = ancestor(src,'figure');
hROIs = findobj(hFig,'Type','images.roi.Line');
delete(hROIs)
end

Answers (1)

Shreshth on 5 Feb 2024
Hello Jordan,
Your question suggests that you want to use a MATLAB toolbox to automate applying a predefined dimension to the bottom edge of the taillight and determining the width of the vehicle.
I am assuming that the image data set you will use to train the network is consistent in size and resolution, like the example attached to the question. MATLAB's Computer Vision Toolbox can be used to automate this process. Before applying deep learning, you will need to complete the following steps in MATLAB:
  • Read the image using ‘imread’.
  • Preprocess the image for edge detection (convert to grayscale and apply a Gaussian filter).
  • Use the Canny method to detect edges and find the contours using ‘bwboundaries’.
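The preprocessing and edge-detection steps above can be sketched as follows. All function names are from the Image Processing Toolbox; the filename and the Gaussian sigma are assumptions you would tune for your own images:

```matlab
% Read the image and convert to grayscale (assumed filename from the question).
I = imread('blank DSE Tail Panel.jpg');
gray = rgb2gray(I);

% Smooth with a Gaussian filter to suppress noise before edge detection
% (sigma = 2 is an assumed value; adjust for your image resolution).
smoothed = imgaussfilt(gray,2);

% Detect edges with the Canny method.
BW = edge(smoothed,'Canny');

% Trace the boundaries (contours) of the connected edge regions.
boundaries = bwboundaries(BW,'noholes');

% Visualize the detected contours over the original image.
imshow(I); hold on
for k = 1:numel(boundaries)
    b = boundaries{k};
    plot(b(:,2),b(:,1),'g','LineWidth',1);
end
hold off
```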
The next step is to identify the contour corresponding to the taillight housing. A convolutional neural network (CNN) is well suited to image and pattern recognition; Mask R-CNN is one way to detect the taillight contours because it provides pixel-level precision.
For more information about using Mask R-CNN in MATLAB, refer to this MathWorks link: https://in.mathworks.com/help/vision/ug/getting-started-with-mask-r-cnn-for-instance-segmentation.html
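As a rough sketch of the inference side: MATLAB provides a pretrained Mask R-CNN through the Computer Vision Toolbox Model for Mask R-CNN support package. The pretrained network only knows COCO classes, so you would still need to fine-tune it on labelled taillight images, which the linked documentation covers:

```matlab
% Load the pretrained Mask R-CNN (requires the Computer Vision Toolbox
% Model for Mask R-CNN support package).
net = maskrcnn("resnet50-coco");

% Run instance segmentation on the image (assumed filename from the question).
I = imread('blank DSE Tail Panel.jpg');
[masks,labels,scores] = segmentObjects(net,I);

% Overlay the predicted masks. A network fine-tuned on taillight images
% would return a taillight class instead of the COCO classes shown here.
overlay = insertObjectMask(I,masks);
imshow(overlay)
```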
Once you have a model trained to identify the contour, you can measure its pixel count and divide the known dimension of the taillight's bottom edge (24.125 inches) by that count to get the scale in inches per pixel.
Using MATLAB's Computer Vision Toolbox, you can then apply a CNN, trained on a labelled dataset, to locate the car's outer edges.
The final step is to measure the pixel count between the car's edges and multiply it by the scale factor to get the real-world width.
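The scaling arithmetic in the last two steps is straightforward; here is a minimal sketch, where the pixel counts are placeholder values standing in for the measurements your model would produce:

```matlab
% Known real-world length of the taillight housing's bottom edge.
knownLengthInches = 24.125;

% Pixel lengths obtained from the segmentation results
% (placeholder values, not real measurements).
taillightPixels = 482;    % pixel length of the taillight bottom edge
carWidthPixels  = 1440;   % pixel distance between the car's outer edges

% Scale factor in inches per pixel.
scale = knownLengthInches / taillightPixels;

% Convert the car's pixel width to a real-world width.
carWidthInches = carWidthPixels * scale;
fprintf('Car width: %.1f inches\n', carWidthInches);
```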
The Mask R-CNN documentation linked above should help you work through the process comfortably.
Hope this helps.
Regards,
Shubham Shreshth.

Release: R2021b