

Image Analyst

Ask Me Anything about image analysis or the Mathworks community

Image Analyst on 18 Jun 2024 (Edited on 19 Jun 2024)

Hello, everyone! I’m Mark Hayworth, but you might know me better in the community as Image Analyst. I've been using MATLAB since 2006 (18 years). My background spans a rich career as a former senior scientist and inventor at The Procter & Gamble Company (HQ in Cincinnati). I hold both master’s & Ph.D. degrees in optical sciences from the College of Optical Sciences at the University of Arizona, specializing in imaging, image processing, and image analysis. I have 40+ years of military, academic, and industrial experience with image analysis programming and algorithm development. I have experience designing custom light booths and other imaging systems. I also work with color and monochrome imaging, video analysis, thermal, ultraviolet, hyperspectral, CT, MRI, radiography, profilometry, microscopy, NIR, and Raman spectroscopy, etc. on a huge variety of subjects.
I'm thrilled to participate in MATLAB Central's Ask Me Anything (AMA) session, a fantastic platform for knowledge sharing and community engagement. Following Adam Danz’s insightful AMA on staff contributors in the Answers forum, I’d like to discuss topics in the area of image analysis and processing. I invite you to ask me anything related to this field, whether you're seeking recommendations on tools, tips and tricks, details about my background, or career development advice. Additionally, I'm more than willing to share insights from my experiences in the MATLAB Answers community, File Exchange, and my role as a member of the Community Advisory Board. If you have questions related to your specific images or your custom MATLAB code, though, I'd invite you to ask those in the Answers forum. It's a more appropriate forum for those kinds of questions, plus you get the benefit of other experts offering their solutions in addition to mine.
For the coming weeks, I'll be here to engage with your questions and help shed light on any topics you're curious about.
Cesar Nieves
Cesar Nieves about 5 hours ago
Hi,
I am trying to simulate the motion of ions under an electric field using Poisson's equation, the drift and diffusion equations, and the general mass continuity equation. I am also using the Einstein relation to relate electrical mobility to diffusion. However, my code doesn't seem to work well: I am not able to see changes in the concentration, potential, or electric field profiles with time. I can only see the results at time 0 s.
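A minimal 1-D sketch of that scheme, with illustrative (not the original) parameters, might look like the following; note that the potential, field, and flux must be recomputed inside the time loop, otherwise the profiles never change from their t = 0 values:
N = 201; L = 1e-4;                          % grid points, domain length (m)
x = linspace(0, L, N).'; dx = x(2) - x(1);
q = 1.602e-19; kB = 1.381e-23; T = 300; eps0 = 8.854e-12;
mu = 1e-8;                                  % ion mobility (m^2/(V*s)), assumed value
D = mu * kB * T / q;                        % Einstein relation
c = 1e16 * exp(-((x - L/2) / (L/10)).^2);   % initial ion concentration (1/m^3)
dt = 0.4 * dx^2 / (2 * D);                  % diffusion-limited explicit step
A = gallery('tridiag', N, 1, -2, 1) / dx^2; % 1-D Laplacian, zero Dirichlet BCs
for it = 1 : 2000
    phi = A \ (-q * c / eps0);              % Poisson: phi'' = -q*c/eps0
    E = -gradient(phi, dx);                 % electric field
    J = mu * c .* E - D * gradient(c, dx);  % drift + diffusion flux
    c = c - dt * gradient(J, dx);           % continuity: dc/dt = -dJ/dx
    c([1 end]) = 0;                         % simple absorbing boundaries
end
plot(x, c), xlabel('x (m)'), ylabel('c (1/m^3)')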
saleel
saleel on 21 Jul 2024 at 2:38
Hi
Why does this error appear during training in the YOLO object detection algorithm?
It was working and I got results, but when I increased the dataset, this error started appearing during training!
Error using collateMiniBatch
Unable to apply function specified by 'MiniBatchFcn' value.
Error in minibatchqueue>@(x)collateMiniBatch(x,options.InternalMiniBatchFcn,options.IsDefault.MiniBatchFcn,options.NumInputsMiniBatchFcn,options.NumOutputs) (line 291)
collateFcn = @(x)collateMiniBatch(x,options.InternalMiniBatchFcn,...
Error in nnet.internal.cnn.DataLoader/nextBatch (line 85)
miniBatch = this.CollateFcn(miniBatch);
Error in deep.internal.data.DatastoreDispatcher/readNextMiniBatchIntoCache (line 153)
[this.CachedMiniBatch, this.CachedMiniBatchSize] = nextBatch(this.Dataloader);
Error in deep.internal.data.DatastoreDispatcher/next (line 59)
readNextMiniBatchIntoCache(this);
Error in deep.internal.data.BackgroundDispatcher>iCallNextOnPoolConstant (line 571)
[miniBatch, nextMiniBatchSize] = constantObject.Value.next();
Caused by:
Error using cat
Dimensions of arrays being concatenated are not consistent.
Image Analyst
Image Analyst on 21 Jul 2024 at 2:51
Sorry, I don't really know. You should try the Answers forum, or call tech support.
Frank
Frank on 18 Jul 2024 at 13:45
Hi Mark,
I have been writing MATLAB apps that visualize field experimental data involving movements of instrumented people and physical objects, tracking of released vapor clouds, data from referee instrumentation, and data from CFD simulations. The apps typically involve time-sequenced and synchronized plotting of points and lines in uiaxes, contourf plotting of vapor concentrations, and synchronized time-lapse photos or sometimes videos frame by frame. They typically wind up having between 5 and 10 axes containing plots or images which are updated with data synchronized in time on a second-by-second basis for a few thousand seconds.
I often need to present the visualizations to groups of people, and for that I need the app to run as quickly as it can to maintain their attention and still present the important details. I try to minimize numerical calculations in the app by pre-processing and synchronizing all the data and imagery beforehand, so that the app largely involves just managing the display of the graphical objects. I found that even simple graphical displays of the data (such as calls to scatter, plot, patch, contourf, etc.), which I do for a few dozen sets of data every second for the thousands of seconds covered, progressively slow the real-time display as more and more data sets are involved, even with generous application of drawnow calls.
I have been trying to generate the displays as arrays of graphical objects and then sequence the display by adjusting object properties like visibility, alpha, etc., instead of calling the scatter, plot, patch, etc. functions. That allows me to control the speed of the display much better, but generating the graphical object arrays introduces a delay while the app is doing it. Sorry for the long-winded description, but my question is fairly simple, I think.
Instead of calling scatter at each second for all the different data streams i have been doing something like this example (the x and y here are just nominal examples of a dataset):
x = (1:50);
y = x.^1.5;
i = (1:50);
ax = axes;
xlim(ax,[0 50]);
ylim(ax,[0 360]);
hold(ax,"on");
H = arrayfun(@(i) scatter(ax,x(i),y(i),"filled","black","Visible","off"),i);
[Then I control which points are displayed when by something like the following]
for j=1:50
H(j).Visible = "on";
pause(0.05);
end
I can make points appear as a sequence of points (or other objects), or appear as a single point moving by toggling the visibility properties or alphas if needed.
Finally, my question. The multiple arrayfun calls for arrays of a few thousand elements take a bit of time during the real-time running of the app, but after they are done things are very fast. I think I would like to preprocess the graphical objects (e.g. the H above), store them in a .mat file, and simply load them when the app starts up. I have been able to save them (e.g. H above) into a .mat file and then load them back into the workspace, but then how do I add them to the axes that I want to add them to? For example, if I try
save('H.mat','H');
then delete H by hand with a right click (for some reason delete(H) doesn't work at the moment) and create new axes as above, how do I add the just-loaded H to the newly created axes?
[I have another question perhaps for later about generating a few hundred contourf plots in advance as graphical objects and then storing them, loading them, and displaying them without calling contourf. And maybe storing them as images and displaying them that way so I can adjust the alpha]
Thanks in advance.
Image Analyst
Image Analyst on 19 Jul 2024 at 14:40
Can you use a movie showing "old" data instead of showing it "live" with data as it's being collected? I'm attaching a demo of how to make a movie from figures.
But I think you're on the right track. Constantly calling drawnow forces the screen to repaint and, while it will show you data "live" as it comes in, it can slow things down, so you might continue to try to hide the figures (set visibility off) until all of the plots are ready, then, in a burst, set all of the visibilities on to show them all at once.
Frank
Frank on 19 Jul 2024 at 17:09 (Edited on 19 Jul 2024 at 17:12)
Thanks much. I asked the same question on the Answers board and Voss pointed me to the copyobj command as the basic answer to the question. Typically I do make movies (basically recorded screenshots) of the overall visualization once it runs and then edit the speed of the video to suit the patience of the particular audience. Speeding it up crams data and activities that evolve over hour-long time frames into a minute or so, and things can look a little like busy bees, but it gets the point across. A main reason I want to speed up the execution of the app is the time it takes to write and debug the apps, which usually takes a week or so. The apps often take ten or more minutes to run, and fixing a bug (often a mistype of mine) that occurs after midway in the execution means I have to let it run until the point at which the bug happens. Some folks have asked for real-time visualization, but I am unsure if that is going to happen. There is just a lot of pre-processing of data, particularly synchronizing multiple data streams, that I don't see a workable way to do in real time without very exacting control of the setup and conduct of the experiments, which is hard because it involves multiple researchers. I do have a couple of graduate students working with a couple of the virtual reality engines, which I want to somehow incorporate, but I am still unsure how well that is going to work. It seems those things take a lot of computer memory and power and may not be geared to real-time interactivity.
Thanks for the surf movie demo. I can use that in the classes I teach that incorporate MATLAB work by my students. Videos are an entirely different problem. The videos I use in the experiment visualizations typically don't come with datetime stamps to synch the video with other instrumental data streams, and synching the video displays with the display of other data is a pain, currently trial and error by hand, frame by frame. The frame rates are also a pain because they are approximately 30, 60, or sometimes more frames per second, but not exactly, and they seem to vary with time, which means that mapping frame numbers to datetime increments doesn't work well. Oh well.
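For reference, a minimal sketch of that save/load/copyobj round trip (assuming H is the scatter-object array from the earlier example; copyobj parents copies of the loaded objects into the new axes):
save('H.mat', 'H');                 % graphics objects can be saved to a MAT-file
S = load('H.mat');                  % recreates the saved objects on load
ax2 = axes;                         % the axes the objects should end up in
xlim(ax2, [0 50]); ylim(ax2, [0 360]); hold(ax2, "on");
H2 = copyobj(S.H, ax2);             % copies of the loaded objects, parented to ax2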
Xiaofeng
Xiaofeng on 18 Jul 2024 at 8:30
Hello,
I'm developing a project with MATLAB Simulink based on the S32K3 MCU series, and I hope to combine the Simulink model with C code. Currently, I have installed MinGW-w64 version 8.1 for R2023b, and it works normally; the Simulink model works normally. But I can't download code to my MCU, because Simulink reports the error "Error(s) encountered while building "xxx"". How can I fix this?
Thanks!
Image Analyst
Image Analyst on 19 Jul 2024 at 14:34
Sounds like a question for the Answers forum, especially since I don't have Simulink and don't know anything about it.
Jared
Jared on 15 Jul 2024 at 23:19
Hello,
I'm a benthic marine ecologist working with sea floor images. I am currently using the Single Camera Calibrator app while trying to 'relearn' MATLAB after a few years' hiatus. I am able to calibrate and export cameraParams and estimationErrors from ~20 checkerboard images, but I'm struggling with the next steps.
1) Conducting EDA and visualizing the normalized pixel coordinates for each camera (n = 2). Attached is a .mat result file as an example. The app itself corrects the distortion automatically with a GUI button click, but I'd like to understand the output matrices and develop a distortion plot of raw vs. normalized points. Can you help?
2) Conceptually (and in practice) integrating the camera-specific radial distortion correction (i.e. normalization) into our data collection capabilities and SOP. What might be the best way to automate corrections in our image analyses? Such as developing generalized cut-offs for edge distortions or trying to correct every image prior to data collection.
3) Does MATLAB have any toolboxes or developments in AI/ML techniques to classify and train for object detection (lines, boxes, polygons) or point-based quantification (e.g. point-contact estimators for habitat types)? I am not up to speed on MATLAB offerings.
Thank you for any professional suggestions or coding aids.
Jared
Image Analyst
Image Analyst on 19 Jul 2024 at 14:33 (Edited on 19 Jul 2024 at 14:33)
Sounds like a good question for the Answers forum. I haven't done a lot with spatial transforms/warping, but you might look at imwarp and Steve's blog: Spatial transformations: Defining and applying custom transforms, on Steve on Image Processing.
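For the distortion-plot part specifically, a minimal sketch with Computer Vision Toolbox functions might look like this (the image file name is illustrative; cameraParams is the calibration exported from the app):
I = imread('seafloor001.jpg');              % hypothetical image name
J = undistortImage(I, cameraParams);        % distortion-corrected image
imshowpair(I, J, 'montage')
pts = detectCheckerboardPoints(I);          % raw detected points
uPts = undistortPoints(pts, cameraParams);  % their undistorted positions
figure, plot(pts(:,1), pts(:,2), 'ro', uPts(:,1), uPts(:,2), 'g+')
axis ij equal, legend('raw', 'undistorted')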
Axis Friday
Axis Friday on 15 Jul 2024 at 7:39
Hello,
I am an undergrad in engineering. Irrelevant to my career and degree, I am deeply interested in color science. Where would you recommend I begin, whether with texts or otherwise, on learning and applying color science, albeit at an elementary level? I want to learn more about color spaces and how to define them, transform between them, and implement CIE color systems in my own applications.
Image Analyst
Image Analyst on 19 Jul 2024 at 14:30
@Axis Friday Color science is a difficult topic, mostly because it's not straightforward like spectroscopy -- it must necessarily include information about the human visual system, which varies from person to person and is difficult to model and quantify. It's the kind of thing where you need to see it over and over again. A little bit more sinks in each time you see it. I already had a firm grasp of spectroscopy and optics due to my Ph.D., but color science was a whole other beast. I started out taking courses offered by the main spectrophotometer manufacturers: HunterLabs, Datacolor, X-Rite, BYK-Gardner, Konica Minolta, PerkinElmer, Thermo Scientific, StellarNet, etc. Look on their web sites for things like education or seminars. You can start with reading on their web sites, but to really understand you need to attend instructor-led, in-person seminars so you can ask questions. Some, but not all, of the companies offer in-person seminars occasionally. I took several seminars before it started to sink in and I felt I began to understand it. Then, to really understand it, you need to teach it. I began offering day-long color science seminars at P&G for our scientists. Even then, each time I taught it, I understood it a little better.
Another good resource is Wikipedia. There are numerous articles there on all aspects of color science.
If you really want to get into color science then you need to get a graduate degree in color science from the Munsell Color Science Lab at the Rochester Institute of Technology in New York state. https://www.rit.edu/science/munsell-color-lab
It's probably the top color science research center in the world.
Also, see my attached seminar on color calibration.
There is also a Color & Imaging Conference that I used to attend:
If you can go to that, you will learn a lot, since all of the world's top color researchers go there. You might be able to browse through old proceedings and read papers that interest you.
You can also search VisionBib for articles on color:
This is a database of nearly all image processing articles, and is maintained and updated by the University of Southern California (you know, where the very famous "Lena" picture came from https://www.wired.com/story/finding-lena-the-patron-saint-of-jpegs/ )
It can take a long time to learn, even a lifetime, which is why people are still studying it and holding conferences and publishing papers on it. I've talked to the world's top color scientists (like Mark Fairchild, Ronnie Luo, and Stephen Westland of The University of Leeds) and they all admit that even they don't know everything and are still learning. Dr. Westland has a color toolbox he's uploaded to the File Exchange:
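If you want to start experimenting hands-on in MATLAB itself, here is a minimal sketch using Image Processing Toolbox functions (the +5 L* shift is just an arbitrary perturbation for illustration):
rgb = im2double(imread('peppers.png'));               % a demo image that ships with MATLAB
lab = rgb2lab(rgb);                                   % sRGB -> CIE 1976 L*a*b* (D65 white point)
xyz = rgb2xyz(rgb);                                   % sRGB -> CIE 1931 XYZ
lab2 = lab; lab2(:,:,1) = min(lab2(:,:,1) + 5, 100);  % lighten by 5 L* units
dE = deltaE(lab, lab2, 'isInputLab', true);           % CIE76 color difference
imshow(dE, [])                                        % visualize the Delta E map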
Gokul Kiran
Gokul Kiran on 12 Jul 2024 at 6:30
Hello sir, can I get an answer for this?
Display all 2-digit numbers in such a way that there are consecutive pulses for each digit at intervals of 0.25 seconds and the interval between the digits is 0.5 seconds. The interval between two numbers should be 1 second.
Ignore the numbers that have a 0 in them.
Image Analyst
Image Analyst on 12 Jul 2024 at 14:16
Yes, you can. Just ask in the Answers forum.
This looks like a homework problem. If you have any questions ask your instructor or read the link below to get started:
Obviously, if it is homework, we can't give you the full solution because you're not allowed to turn in our code as your own. So after you read the link, post your homework there and you'll get hints. This is the kind of question that should be posted there rather than here.
In the meantime, take a look at repmat and this link
Ahmed
Ahmed on 10 Jul 2024 at 10:21
Hello sir, I'm trying to create MATLAB code that matches the (4000-patch) RGB values of two cameras in a mesh-like way and generates a 3D cube LUT. I have used scatteredInterpolant (method: linear and extrapolation: boundary). This approach is limiting my gamut and does not interpolate the points outside the convex hull in a smooth way. I tried adding plenty of points around the convex hull, but I didn't like the result because the edge became non-mesh-like.
Is there a better way to extrapolate the points outside the convex hull smoothly?
Image Analyst
Image Analyst on 12 Jul 2024 at 14:21
I'd ask this in Answers and supply your patch data.
scatteredInterpolant seems like a reasonable approach. After you get the interpolant, you should run through all 16 million color possibilities; of course, some of them will be outside your training gamut, but I don't think that should matter. I think it should still give you values. What did you get when you gave it out-of-gamut values?
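For what it's worth, a rough sketch of that idea, with one interpolant per output channel and linear extrapolation (rgbA and rgbB stand for your Nx3 matched patch values):
F = cell(1, 3);
for k = 1 : 3   % one interpolant per output channel
    F{k} = scatteredInterpolant(rgbA(:,1), rgbA(:,2), rgbA(:,3), rgbB(:,k), 'linear', 'linear');
end
n = 33;                                               % a common cube LUT size
[r, g, b] = ndgrid(linspace(0, 1, n));                % the full RGB cube
lut = cat(4, F{1}(r,g,b), F{2}(r,g,b), F{3}(r,g,b));  % n x n x n x 3
lut = min(max(lut, 0), 1);                            % clamp out-of-gamut extrapolations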
Daniel
Daniel on 8 Jul 2024 at 11:02
I had to reinstall MATLAB and the process deleted an app.
I have the contents in a text file; if I rename it with a .mlapp extension, it doesn't load.
I would paste the code into a new app in App Designer, but of course the code is grayed out and can't be edited.
Do you know a way around this?
Image Analyst
Image Analyst on 8 Jul 2024 at 12:00
Just try pasting in the individual button contents one at a time. You might have to recreate all your controls on the GUI first, and then paste in just the button function contents, not any of the protected/gray code.
Patrizia
Patrizia on 8 Jul 2024 at 8:17 (Edited on 8 Jul 2024 at 8:18)
Dear Sir
I am looking for help revising code I wrote with the aim of detecting plastic bottles in images. There are probably errors in the code, causing an error in the final function call 'metrics = evaluateObjectDetection(detectionResults, testData)'.
I would need help completing the code so that it can return a final result, even if it is inaccurate. Can you help me? I look forward to your kind feedback. Thank you in advance!
Patrizia
Image Analyst
Image Analyst on 8 Jul 2024 at 19:20
I understand. Perhaps you can hire a local university or a consulting firm like SRI International that will sign your confidential disclosure agreement and work on your confidential project.
Patrizia
Patrizia on 8 Jul 2024 at 12:29
Unfortunately, the contents are confidential...
Image Analyst
Image Analyst on 8 Jul 2024 at 11:58
Post the image and code in Answers and we'll deal with the question there.
Viren
Viren on 7 Jul 2024 at 20:19
Dear Sir,
How do I find the formulas behind the analog and digital filter designs, as well as DSP, in MATLAB? Is there any documentation for the digital filter designs and digital signal processing in MATLAB? Some of the formulas MATLAB uses for the above-mentioned topics are not available in DSP books or advanced communication books.
Viren
Image Analyst
Image Analyst on 7 Jul 2024 at 23:50
Some of the functions will have a reference to an article that describes the algorithm that the MathWorks developers implemented in MATLAB. If you don't find what you want there, then tell me a specific function and I might be able to find out more from my MathWorks contacts.
idris
idris on 7 Jul 2024 at 18:00 (Edited on 7 Jul 2024 at 18:00)
Dear Sir,
I was wondering if you could share some materials for understanding image processing for wireless access in vehicular environments.
Image Analyst
Image Analyst on 7 Jul 2024 at 18:16 (Edited on 7 Jul 2024 at 18:16)
I don't really know anything about image processing in vehicles - I've never had to deal with that situation. To understand MATLAB in general, try this: Best way(s) to master MATLAB? - MATLAB Answers - MATLAB Central
In addition, Steve Eddins, former image processing team leader at Mathworks, wrote a book about it, so you can look for that book.
cheng
cheng on 6 Jul 2024 at 7:31

Hello dear teacher, my name is Chris. I studied and recently qualified as a higher-level technician in industrial automation and control. I know that MATLAB is a widely used tool for industrial automation and control processes. It is used to develop control algorithms, simulate control systems, design artificial intelligence algorithms for predictive maintenance, optimize operations, and generate real-time code for PLCs and industrial controllers. My question is: what do I have to learn so that the software can help me create matrices and vectors, and how can I learn to use them in the field at an industrial level? I am also interested in learning Autodesk Inventor Professional.

Thank you very much for your attention and information.

Greetings from a distance.

Image Analyst
Image Analyst on 7 Jul 2024 at 18:21
cheng
cheng on 7 Jul 2024 at 18:37

Thank you very much for your guidance and support.

Greetings.

Muhammad
Muhammad on 5 Jul 2024 at 18:01
Hello sir. Nice to hear about your background. I need to ask you: where should I start to master MATLAB? Looking forward to hearing from you. Thank you.
Image Analyst
Image Analyst on 7 Jul 2024 at 18:17 (Edited on 7 Jul 2024 at 18:18)
@Muhammad See this link for a pretty comprehensive list of resources:
Zoumin
Zoumin on 4 Jul 2024 at 5:10
Hi Mark, I have an annoying problem when I try to find the defects in diff.bmp.
I tried to segment the defect by removing the contour artifacts with filtering methods and morphological operations, but it didn't work well, as shown in myresult.bmp.
Is there any good way to accurately delineate the defects in the red box in defection.bmp? Or how can I improve the effect of the morphological operations?
Thanks for your generous contribution to the community.
Image Analyst
Image Analyst on 7 Jul 2024 at 18:23
I'd try normalized cross-correlation. It's basically like template matching. See the attached demo.
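The bare bones of that approach look something like this sketch (the template file name is hypothetical; you would crop it from a known defect region):
T = im2gray(imread('template.bmp'));     % a cropped defect template (hypothetical file)
A = im2gray(imread('diff.bmp'));         % the search image
C = normxcorr2(T, A);                    % correlation surface
[~, idx] = max(C(:));
[yPeak, xPeak] = ind2sub(size(C), idx);  % peak location in C
xMatch = xPeak - size(T, 2) + 1;         % top-left corner of the match in A
yMatch = yPeak - size(T, 1) + 1;
imshow(A), drawrectangle('Position', [xMatch, yMatch, size(T,2), size(T,1)])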
Baraa
Baraa on 3 Jul 2024 at 14:44
Hello
My name is Baraa and I am currently developing code to measure the contact angle of different liquids.
I managed to finish the code in the question linked below:
However, the code has to be manually edited every time the volume of the droplet changes: I have to change the y-coordinate value in the generated mask to detect the edges of the droplet. How can I modify the code so that I don't have to manually define the x and y coordinates of the mask for each droplet?
Thanks in advance for your help.
Regards.
Image Analyst
Image Analyst on 3 Jul 2024 at 16:36
I've answered your post in Answers. I'm not sure how I missed it. But in the meantime, attached is my contact angle demo.
Jose
Jose on 3 Jul 2024 at 9:16
Hello,
I'm seeking assistance with an image processing task in MATLAB. I have a .jpg image from an SEM. I would like to quantify the morphology and shapes of crystallized hydrated products.
Image Analyst
Image Analyst on 3 Jul 2024 at 11:46
@Jose, this looks like a question for Answers. Attach your image there and say exactly what kind of measurements you want, like intensity, area, area fraction, circularity, or whatever.
Jose
Jose on 4 Jul 2024 at 1:10
Thank you for your reply to my inquiries. Here are some details and the image.
  1. Needle-like Shape (Ettringite, Ett): Identified by its needle-like, elongated crystal structure.
  2. Plate-like Crystalline Structure (Calcium Hydroxide, CH): Appears as plate-like shapes with more defined, flat surfaces.
  3. Fibrous Formation (Calcium Silicate Hydrate, CSH): Appears as fibrous or gel-like structures under a scanning electron microscope (SEM). These fibers can be loosely packed or densely interwoven, creating a complex network.
Image Analyst
Image Analyst on 4 Jul 2024 at 3:41
I still don't know what you want in terms of measurements. But the image looks extraordinarily difficult, at least for me, in terms of saying whether a given pixel or small region belongs to one of those three classes you listed. I think this may be a case for deep learning, but you'd have to manually identify, like by painting over the image, which pixels are from which class. Then do this for about a hundred images, where each class spans the range from almost none of that class present to the most that will ever be present. So you'll have 3 numbers (say, based on area fraction or your human estimate of area fraction) for each image. Then you can build a model to predict the three numbers for any new image. Another possibility would be for you to just hand-draw outlines or distances with the drawing tools and make measurements that way, so it's human-assisted rather than fully automatic.
Jose
Jose on 4 Jul 2024 at 5:34
I really appreciate your feedback. I will try the human-assisted approach rather than the fully automatic one.
Thank you
Nurul
Nurul on 3 Jul 2024 at 4:26
Hi Mark,
Thanks for your generous contribution to the community.
I'm seeking assistance with an image processing task in MATLAB. I have an image, F1.jpg, where I want to create a binary image (F2.jpg) that isolates the region below a blue line. Ideally, the pixels within the line should be white (1) and those outside the line black (0), or vice versa.
I've tried various thresholding techniques (included in the attached script, BinaryImage.m), but the resulting image (F1_BW.png) shows unwanted gray values in the black region. Thresholding doesn't seem to be fully removing these intermediate values.
My question is: how can I effectively extract the area within the blue line and convert it to a clean binary image (F1_BW.png) with only black and white pixels?
Image Analyst
Image Analyst on 4 Jul 2024 at 3:46
I'd just threshold, fill the region(s), then take the largest region (if you want one and only one). I'm not sure what you're doing, but I don't see any reason that an extremely smoothed boundary/outline would be any better or more accurate than the actual boundary you get from thresholding.
If you need more help, post in Answers and I, and others, can give you some code.
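The threshold / fill / keep-largest recipe in code form, as a minimal sketch (assuming the region of interest in F1.jpg binarizes cleanly):
rgb = imread('F1.jpg');
gray = im2gray(rgb);
bw = imbinarize(gray);      % global Otsu threshold; use ~imbinarize(gray) if the region is dark
bw = imfill(bw, 'holes');   % fill the enclosed region(s)
bw = bwareafilt(bw, 1);     % keep only the largest region
imwrite(bw, 'F1_BW.png');   % purely black-and-white output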
Nurul
Nurul on 5 Jul 2024 at 21:39
Thanks for your reply.
I'm actually dealing with a batch of 1920 RGB images. I'm trying to isolate the floodplain area, but thresholding picks up both the river and the ocean since they appear similar in the images.
Does anyone have suggestions for separating the floodplain (including the river) from the ocean in RGB images? I'm open to exploring other methods!
May I ask, by saying "Answers", where do you want me to post this? I am new to asking questions in the MATLAB forum.
Antonio
Antonio on 3 Jul 2024 at 2:05 (Edited on 3 Jul 2024 at 2:08)
I am having trouble figuring out how to use a GUI module in MATLAB. I downloaded a link to it provided by my professor and it is just not working on my Apple Mac mini desktop. Could you please assist with this matter?
Image Analyst
Image Analyst on 3 Jul 2024 at 2:31
@Antonio, possibly. But this sounds like a question for your professor (preferably) or the Answers forum. If your professor refuses to help you, then you can upload the needed files to the Answers forum. If someone has all the toolboxes needed, they may try to help you.
Luís Henrique Bordin
Luís Henrique Bordin on 2 Jul 2024 at 15:35
Dear Mark, I am Luis from NOAA. Could you please help me with a grouped scatter plot?
I have three regions (MSD, DPG, and WFS) and four seasons (Winter, Spring, Summer, and Autumn). I chose four colors to represent the seasons and 3 symbols to represent the regions; however, the function doesn't understand it this way. The symbols are correct for each region, but the colors are inconsistent; I think it is because there are 3 regions but 4 seasons. The thing is, I always need the same color for the same season at each region/symbol, but I could not figure it out yet. Please, could you help me? The data is attached and the code is below. Thank you very much in advance!
load gscat.mat
gscatter(x,y,g,'rkgb','o*h',6,'on','Hind','Twin')
legend('Location','northeastoutside')
Image Analyst
Image Analyst on 2 Jul 2024 at 17:28
Luis, this is the kind of question that should be asked in the Answers forum.
Maria Merin Antony
Maria Merin Antony on 2 Jul 2024 at 4:40
I am Maria.
I need to use the radialdistributionfunction2D code; it uses a histogram function which is not available now.
I am not able to find the URL.
Please help me find the URL and the code.
Image Analyst
Image Analyst on 2 Jul 2024 at 12:48
@Maria Merin Antony This looks like a question for the Berkeley professor who teaches the course, or the Chemistry Department there. Look at the calling code to see how it tries to call the histogram function and see if the built-in histogram takes the same arguments. If they're different call the UC Berkeley Chemistry department at (510) 642-5060 and ask how you can obtain the code.
hoang
hoang on 2 Jul 2024 at 2:03
Dear Mark,
Please tell me: why is there a difference when graphing the step response with a PID control block in MATLAB versus Simulink?
s = tf('s');                           % Laplace variable
g = 1.883e5/(s*(s^2+4466*s+6.43e6));   % plant transfer function
kp = 60;
ki = 63000;
kd = 3;
gpid = pid(kp,ki,kd);                  % PID controller
gsys = feedback(g*gpid,1);             % unity-feedback closed loop
step(gsys)                             % closed-loop step response
Image Analyst
Image Analyst on 2 Jul 2024 at 4:25
@hoang I have no idea. I have not used pid() and don't use Simulink. Try asking in the Answers forum.
HCH
HCH on 30 Jun 2024 at 13:45 (Edited on 30 Jun 2024 at 13:46)
Dear Mark,
I have some questions about interactive programming between Python and MATLAB.
There are some MATLAB script files and executable files that can be run by entering the relevant names and parameters in the MATLAB Command Window. During the run, some interactive operations are also required, such as selecting the working mode by displaying prompts in the Command Window and entering 0 or 1 according to the actual working situation, opening a file dialog to select the desired file, displaying an image and clicking the mouse on it, and so on. Now I need to integrate these script files and executable files into a non-MATLAB application, and I plan to develop the application in Python.
The first question is: can the MATLAB Engine API for Python implement interactive operations that can only be completed in the MATLAB Command Window, without modifying the original MATLAB files? I tried the MATLAB Engine API to connect Python and the MATLAB files, but it seems that I cannot perform interactive operations in the MATLAB Command Window. I am not sure whether this language interface supports such interactive operations. This page https://ww2.mathworks.cn/help/matlab/matlab_external/user-input-not-supported.html explains that when using C to call MATLAB, user input is not supported.
I don't know if the Python interface is like this too.
The second question is: is there a recommended way to implement the scenario mentioned above if the MATLAB Engine API doesn't work?
Thank you, and also thank you for your contributions to the MathWorks community!
Image Analyst
Image Analyst on 30 Jun 2024 at 23:11
Sorry, though I took a week-long course in Python, I didn't see anything there that I couldn't do natively in MATLAB, so I never wrote Python code and then used MATLAB to run the Python code. Maybe try asking in Answers, or see if you can do it all in Python, in which case you might have to ask in a Python-only forum how to get user input.
saleel
saleel on 30 Jun 2024 at 13:00
Why doesn't (MiniBatch) work now in MATLAB R2023b?
Image Analyst
Image Analyst on 30 Jun 2024 at 23:07
No idea. I was not aware that you could not do minibatches in that version. It's probably an error in your code. If not, call tech support or try posting in Answers.
Tatyana
Tatyana on 28 Jun 2024 at 7:21
Dear Mark, thanks for doing this AMA.
I'm a newbie asking for your advice: is it possible to solve the following problem using MATLAB?
I have a CT study (CBCT) of the mandible. I want to highlight areas of the jaw (in a 3D rendered image) with different densities in different colors based on Hounsfield units (e.g. areas with HU > 1250 in green, 850 < HU < 1250 in blue, 350 < HU < 850 in red, etc.).
Is it possible to use Hounsfield units in MATLAB?
Image Analyst
Image Analyst on 28 Jun 2024 at 15:48
@Tatyana Yes, it is. You can do it in several ways, like making a solid RGB image, or showing the regions "tinted" as a semi-transparent overlay above the original image.
greenMask = huImage > 1250;
blueMask = huImage > 850 & huImage <= 1250;
redMask = huImage > 350 & huImage <= 850;
I haven't done a lot with pseudocoloring 3-D images. It's trickier than with 2-D images, which is easy. There is a Volume Viewer app in MATLAB that might help. I know our CT people usually use AVIZO software for their 3-D visualization tasks.
This is a good question to post in Answers after you read this: TUTORIAL: How to ask a question (on Answers) and get a fast answer. Be sure to attach your image so people can offer actual code solutions.
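To illustrate the tinted-overlay idea on a single slice, here is a rough sketch (huImage stands for a 2-D slice already converted to Hounsfield units):
labels = zeros(size(huImage), 'uint8');
labels(huImage > 350 & huImage <= 850) = 1;    % red
labels(huImage > 850 & huImage <= 1250) = 2;   % blue
labels(huImage > 1250) = 3;                    % green
gray8 = im2uint8(rescale(huImage));            % grayscale base for display
rgb = labeloverlay(gray8, labels, 'Colormap', [1 0 0; 0 0 1; 0 1 0], 'Transparency', 0.6);
imshow(rgb)                                    % semi-transparent tint over the slice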
Eric
Eric on 27 Jun 2024 at 17:37
I have an intriguing problem around the pixel dimensions of an image stored in a file versus the size of the MATLAB variable that results from imread( ) of that same file.
The offending file is a 120 x 90 pixel image (which I'll try and attach). I know it has those pixel dimensions because (i) the Windows file properties say that and (ii) if I import it into Paint.net, the canvas size is 120 x 90.
So far, so sensible. Now, if I imread(this_file) I get a matrix that is 150 x 200 x 3 uint8. What gives? Is it resampling to those new dimensions, or is there some meta-info that's fooling imread( )?
All thoughts gratefully received! I am duty-bound to say that I don't own this image - it came from a website.
The reason it matters is that the calculations I want to do subsequently depend on there being more than 128 x 128 pixels in the image. MATLAB thinks there are 150 rows and 200 columns, but the image is less than 128 x 128, so my algorithm falls over, and I can't do a size( ) check before heading into the functions.
Thanks!
Eric
Eric
Eric on 28 Jun 2024 at 14:12
OK, we can scrub this question.
The difference between the pixel size and the row/column dimensions is not there today, so I wonder if there was some glitch that loaded the wrong image file.
Also, there's an easy workaround for my image-size-detection problem: using imfinfo( ), which seems to retrieve the image height and width reliably (along with a bunch of other useful stuff).
All good, and apologies for the red herring!
Eric
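For anyone hitting the same issue, the imfinfo( ) guard looks roughly like this (the file name is illustrative):
info = imfinfo('this_file.jpg');   % reads the header only, without decoding the pixels
fprintf('%d x %d pixels\n', info.Width, info.Height);
if info.Width < 128 || info.Height < 128
    error('Image is smaller than the 128 x 128 minimum the algorithm needs.');
end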
Geetanjali
Geetanjali on 27 Jun 2024 at 9:01
For making code for the gyrator transform, I have to take the image as a function f(x,y), and I have to extract the values of x and y so that these values can be used in calculating another function. Can you help with it?
Image Analyst
Image Analyst on 28 Jun 2024 at 15:40
I don't understand. You have to have x and y if those are inputs to your function; I don't know how else you would get them. I have also not used a gyrator transform.
I suggest you ask this in the Answers forum after you read this: TUTORIAL: How to ask a question (on Answers) and get a fast answer
Arun Mahendran
Arun Mahendran on 26 Jun 2024 at 20:41 (Edited on 26 Jun 2024 at 20:45)
Hello @Image Analyst! Could you give us any tips on how to reduce computation time for image processing, especially during real-time tracking with high-resolution cameras (let's say 1920x1080)?
Image Analyst
Image Analyst on 28 Jun 2024 at 15:37
Sorry, I haven't done much real-time processing, and when I did, it was in a slow-speed laboratory R&D environment, like watching a stain area increase (snap, analyze, snap next, etc.). So it would be like 10 frames a second, not anywhere close to 30 frames a second or faster. I've gotten faster frame analysis rates, but it was not real time -- it was on already-saved videos. I'd suggest just using a fast computer with lots of memory and using parallel processing. Also, acquire only a small portion of your field of view, if that is an option with your camera and you know where to get the sub-image from, so you're not having to analyze hundreds of thousands of pixels that are known not to have any good data in them. Of course, if your objects might travel anywhere over the whole scene, then that is not an option.
Brian
Brian on 23 Jun 2024 at 21:49
I'm classically trained in image processing (e.g. Jain link). While I understand NNs/ML, I have had a hard time using them to make my scientific/engineering image processing any more efficient (in coding or in using the code).
Is AI image processing for tracking objects (that are not people or cars) really effective?
  1. Does it take cloud computing to train?
  2. How much data do I need to collect?
  3. How much time should I budget for annotating the images so I can produce good training data?
  4. How do you decide to use the "old school" methods vs. AI?
Image Analyst
Image Analyst on 28 Jun 2024 at 15:30
I haven't really done tracking. It can get really complicated due to objects entering, leaving, and possibly re-entering the field of view, as well as the background possibly changing. I did see a paper at the Electronic Imaging conference where they were tracking small drones using AI. You might look in the image processing literature here: USC Annotated Computer Vision Bibliography, for papers on that or any other image processing topic.
To answer your questions:
  1. No, it doesn't take cloud/grid computing to train AI/deep learning models, though it can certainly create the model much quicker if you do. I've trained many models locally with my laptop.
  2. Yes, it takes a lot more data to train an AI algorithm than a traditional algorithm. Sometimes, with an easy task, you need only 5 or 10 images to do traditional image analysis. However, with AI you need a minimum of around 50 or 100 images to do a sort of acceptable job, and with hundreds or thousands of images (or several videos) you will get a better prediction. And it's not just that you need hundreds of images; you also need a ground truth image for each of them. That can be tedious. You could do a classical segmentation and then use people to fine-tune (correct) what it did. Or, what I did for finding a mask was to use the automatic region identifier tool in Photoshop and then correct it with the drawing tools. I wrote a macro in Photoshop to streamline that, and I made a few hundred ground truth image masks that way for use in training.
  3. How much time you spend annotating the images depends on how complicated the images are. If it's a simple classification task (is a dog present in the image?), it's pretty quick for you to give ground truth classification for that. If it's manually identifying dozens of objects to be tracked in thousands of video frames when the objects are traveling through a cluttered background, it could take weeks to get ground truth images.
  4. I always try to use traditional methods first. First of all, they require fewer images than AI for getting a good algorithm/model. Second, you avoid having to label images with ground truth information. Third, for some situations, developing a traditional algorithm will take less time than training a deep learning model. Fourth, traditional algorithms can often be more accurate than DL models and easier to understand. If the task is difficult, like it takes an expert to see something (think of outlining a green bird in a leafy green tree), then AI can help, though you'd need the expert to find the object of interest to create the ground truth. Almost every time I've used deep learning, it was close but not quite good enough, so I often had to post-process the mask DL automatically found with traditional code to improve or fix it. So I almost always use a hybrid approach. Even after DL finds the mask, you still need to employ traditional methods to do the analysis and find things like size, color, dimensions, etc. inside the mask. So AI might be only one part of a larger image processing pipeline.
David Cazenave
David Cazenave on 23 Jun 2024 at 9:51
Impressive download count, and a gold star average. For all you 'Series Convergence Calculator' (nfnitcerestst.m) and newton_2x2.m fans: my next update for nfnitcerestst.m will be extensive (Abel's Test and many other script capability improvements), and it will happen soon!
Image Analyst
Image Analyst on 28 Jun 2024 at 14:55
Thank you, and thank you for your willingness to share your MATLAB code with others.
Ezekiel auby
Ezekiel auby on 22 Jun 2024

Hello, can you help me install the Arduino libraries in MATLAB?

Image Analyst
Image Analyst on 28 Jun 2024 at 14:54
I have not dabbled with the Arduino in MATLAB beyond the very simple examples that you can find online.
Melvin
Melvin on 21 Jun 2024
Hello @Image Analyst, how do I access solid oxide fuel cell/solid oxide electrolysis cell imaging?
Image Analyst
Image Analyst on 21 Jun 2024
Sorry but I have no idea. Presumably you have some sort of commercial chemistry instrument that can generate the pictures for you. Then hopefully there is a way to export the images to regular PNG format image files. If you don't know how, ask the manufacturer of the instrument. Once you have the images saved on disk, then you can import them into MATLAB and do whatever kind of image analysis you want. Once you have the images, if you still need help, upload the images to the Answers forum for help with the analysis.
Dyuman Joshi
Dyuman Joshi on 19 Jun 2024
I'd like to hear about your journey - How did you get into Image Analysis and into MATLAB?
Also, in which particular applications did you find yourself applying image analysis the most?
Last but not least: the topics you have worked on are intriguing, but I do not have much knowledge about them, so I would like to ask about the topics I work with. Along the lines of my previous question, what do you think about applications of image analysis in mechanical engineering, especially in materials science and fluid mechanics (or any other field you might have an interest in)?
Image Analyst
Image Analyst on 28 Jun 2024 at 14:53
My dad was a physicist from MIT who moved to San Diego to become a rocket scientist. He worked on propulsion systems at first. Then he invented a small lightweight capacitor that allowed satellites to stay in space (prior ones were too heavy), and enabled defibrillators to become portable. Then he went on to found his own capacitor company, Maxwell Technologies https://en.wikipedia.org/wiki/Maxwell_Technologies. So I decided to go into physics like my Dad.
I attended University of California, San Diego and did my first two years as a physics major. I liked mechanics, but astrophysics and plasma physics got me wondering if I really wanted to do that. Then quantum mechanics was really the turning point. I was just filling out the equations with angle brackets and vertical bars, etc. I knew the process for transforming, simplifying, and manipulating the equations but I really didn’t know what they meant. It was too abstract for me. But at the time (late 1970’s) holograms and lasers were becoming popular. And I had always liked photography and optical illusions, so I decided to switch majors to “Engineering Physics with a specialization in optics”. I had classes in the laser lab and that is where I got a really good intuitive feeling for what a Fourier transform is. The Fraunhofer diffraction pattern of a plane wave (laser) shining through an aperture pattern is the 2-D Fourier transform. I also learned about geometrical optics. I had a 14-month internship with the US Navy (as a civilian) working on algorithms for inverse synthetic aperture radar. Instead of radar being just a blip/dot on a screen, we were able to detect the basic shape of the object. It was pretty crude in those days, but we could tell if it was a ship or a plane, and the size of the ship or plane, by the shape and other attributes. I’m sure it’s advanced greatly since then in what they can resolve and determine. We used a DEC PDP-11 minicomputer. I also learned to like programming there.
I then entered the Ph.D. program in the largest and best optical graduate school in the country at the Optical Sciences Center of the University of Arizona. They had a strong group in medical imaging and had just invented Digital Subtraction Angiography (which is now in every hospital), so I joined up with a professor in that sub-group. For a while I worked on a laser scanning ophthalmoscope, but that kept breaking down, so I switched to a radiology (angiography) project where I was working on tracking and measuring blood vessels in radiographs. We worked on a DEC VAX computer, and if it ever was down, it was fixed within a day. So I finished up my Ph.D. in 1988, and in my final year I started interviewing. Optics was the highest-paying major, and the College of Optics (as it’s now called) was the second largest department at the University of Arizona (after the medical school). I interviewed with a lot of companies, but most of them specialized in one thing, such as fingerprint/palmprint authentication, missile tracking, semiconductor wafer inspection, etc. But two companies, the Procter & Gamble Company https://www.cincinnatimagazine.com/article/17-fun-facts-to-know-and-tell-about-procter-gamble/ and DuPont, had an extremely wide diversity of applications and projects I could work on due to the wide range of products they worked on. They were both great, but I chose P&G because the group of people there were the friendliest. Also, at P&G I would be the most knowledgeable one in the group (the only one in imaging, with the rest being computer scientists at that time in 1988), whereas at DuPont I would have been the small fish in a big pond (lots of people there who were world-class experts, and I didn’t want to feel like the dummy in the group).
One of my first projects at P&G was to determine how well Crest toothbrush bristles penetrated into the interproximal (between teeth) spaces. We had a brushing robot and stained the teeth with red-dyed hair spray, and I would see how much of the stain got removed. Another early project was to find the silicone particle size in Pert Plus (Rejoice in Asia) and Pantene shampoo in the factory. I’ve done most of my work in Fabric and Home Care (Tide, Ariel, Cascade, Dawn, Swiffer, Mr. Clean) and Baby and Feminine Care (Pampers, Always, etc.), though I’ve worked on nearly every major brand that we have (Bounty, Olay, Febreze, Gillette, Old Spice, Oral B, Braun, etc., around a hundred or so). Pampers diapers is our biggest brand, followed by Tide laundry detergent. We only make our own brands – we do not make any private label brands for stores (no house brands for Walmart, Costco, Kroger, etc.). Generally our images involve looking at the product itself, or the effect of the product doing its job. Quite often there are before-and-after usage images so we can determine how well the product worked. We also look at competitors’ products to make sure ours are always superior. I’ve worked on what some people would consider very VERY unusual images – just imagine how baby care and feminine care products are used! At P&G we’re used to working on sensitive products and no one is embarrassed (yes, there are lots of men working on femcare products, in addition to women of course). However, there were a few projects where I couldn’t even work on my computer in the public P&G office area. I had to go to a private conference room. I promise it’s not pornography, it’s just my job! A common project involves replacing human visual grading with image analysis, which is objective, more accurate, more precise, and more repeatable, and quite often cheaper and faster than having to get humans to grade the images subjectively. We often look at stains and measure things like their color and size, and how the color and size changed after using our products to clean them. Shine, gloss, whiteness, and absorption are important for many of our products (Pantene shampoo, Crest toothpaste, etc.), so I’ve measured those attributes. Sometimes there are applications where we need to find things and count how many there are, and measure their size, their color, their shape, their uniformity.
Sometimes we have existing instruments that can create an image, like CT, MRI, or chemical instruments. But often we need to create a customized imaging rig optimized for the specific task at hand. So that requires that I know what kind of lighting, lenses, camera, geometry, mounting jigs, enclosures, etc. to use. My extensive background in optics, spectroscopy, and color science is very helpful for that. Usually a new light booth acquires images from a digital machine vision camera – I like the cameras from Lumenera. Sometimes we work with the machine shop in P&G to create the imaging light booth, and other times we work with an outside machine vision vendor to create a custom booth to meet our specialized needs.
I can be a little more specific about projects for brands that P&G sold off (no longer owns). For Pur (water filters) I measured the porosity, surface area, and volume fraction in a microCT 3-D image of charcoal particles. For Thermacare (heat wrap) we used a thermal camera to measure the heat (temperature and uniformity from cell to cell) of the wraps themselves, and the temperature and spatial distribution of temperature on the skin after removal of a heat wrap that had been applied. For Iams (dog and cat food) I’ve looked at volume fraction of kibbles in packages from CT images. For Actonel (anti-osteoporosis drug) I looked at the porosity and connectivity of the trabecular bone. For Pringles I measured the spatial distribution of fat in the potato crisp. I’ve looked at hair count and length for a hair growth product we were working on. I’ve also looked at the volume of nasal turbinates in MRI images for an antihistamine product.
Many of my image analysis projects have resulted in numerical claims that we put onto packages, like Dawn’s “Cleans with 50% less scrubbing” and similar claims on Tide and Cascade packages. It’s nice to work on projects with such a big impact. One of the claims my image analysis enabled could make $100 million more in sales of the product. Not many people can say that. Many people work on some specialized project that may benefit a few hundred or a few thousand people. How many people will benefit from what you do? My projects affect about 6 billion people (consumers of our products) across the globe. Over 90 percent of the world's population can buy P&G products. In the USA, 99% of households have at least one P&G product. It’s nice to go into the grocery store and see a product I worked on and say “Yeah, I did that.” Sometimes my work ends up in patents. Gillette (our shaving brand) is located in the Boston area, not far from MathWorks and Gillette Stadium, and I’ve done image analysis on razor blade cartridges.
I’ve also done a lot of signal analysis for our spectroscopists. They obtain 1-D signals (mostly, though sometimes 2-D) from instruments such as FTIR or Raman spectrometers. I write code to take their signals and perform the special measurements they want, like getting ratios of signal at certain wavenumbers, computing area under the curve, doing baseline correction, etc. I create a custom GUI where they can select the data files they want from a listbox and then batch process them and send the results to an Excel workbook. Usually these are specialized computations that they want to do that are not built in to their instrument’s software.
After retirement from P&G last year, I continue to support the MATLAB Answers forum and am on the Community Advisory Board. I’ve done very well as an investor; I’m making ten times more money from my investments than I ever did at P&G. So, I also donate a lot of my time and money to a handful of charities that interest me.
Adam Danz
Adam Danz on 19 Jun 2024
@Image Analyst, what's your preferred color space to work in and why? I think I can guess based on your avatar. 😊
In what circumstances, if any, would you work in a color space other than RGB/sRGB in MATLAB?
Image Analyst
Image Analyst on 19 Jun 2024
Well, color images are in RGB space, but for color segmentation (finding pixels in a certain color range) I almost always use HSV color space. And the Color Thresholder on the Apps tab of the tool ribbon is a nice tool for doing color segmentation. That color space makes it easy to threshold the channels to get the region(s) you need. The other hue-based spaces like YCbCr, HSI, HSL, etc. are pretty similar, so I just stick with HSV since it's the most intuitive.
LAB color space is usually not good for color segmentation. The color changes depending on what L is, in such a way that it's not good for finding things based on their hue. Run the attached demo and look at how the colors in the corners of the images change with L and you'll understand what I mean.
HOWEVER, very often in my work we need color differences and the true LAB values of the things we are looking at. For example, if we are using a product to clean something, you need to measure the Delta E color difference, and for that you need LAB color space. But we need the TRUE LAB values, not the arbitrary, nominal, book values you get from using built-in functions like rgb2lab or what you'd get in Photoshop. So we need to calibrate the RGB values to a standard, the Calibrite ColorChecker chart. I've written my own calibration routines that give more accuracy than the built-in functions and give values very close to what you'd get from a spectrophotometer. We need this kind of accuracy when developing new cleaning formulations that have just very slight improvements over the existing detergent. I'm also attaching a PowerPoint describing the color calibration process.
So my typical process is: I convert from the original RGB color space of the image to HSV color space for segmentation. Then, once I have the segmentation (mask, regions), I convert from RGB to LAB so I can get the true, calibrated LAB values in the regions of interest. If I need to compare colors (like before and after cleaning), then I do the same for the second image and use the two LAB images to compute a Delta E color difference between the pair of images.
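In compressed form, that pipeline looks roughly like the sketch below (image names and HSV thresholds are illustrative, and the true-LAB calibration step is omitted since rgb2lab gives only nominal D65 values):
before = im2double(imread('before.png'));
after = im2double(imread('after.png'));        % assumed registered/aligned to 'before'
hsv = rgb2hsv(before);
mask = hsv(:,:,1) > 0.05 & hsv(:,:,1) < 0.15 & hsv(:,:,2) > 0.3;  % example thresholds
labBefore = rgb2lab(before);                   % nominal, not calibrated
labAfter = rgb2lab(after);
dE = sqrt(sum((labBefore - labAfter).^2, 3));  % per-pixel CIE76 Delta E
meanDeltaE = mean(dE(mask));                   % same mask applied to both images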
Off on a tangent: if you're comparing color differences, you need to use the same mask on both images, and they need to be registered/aligned. This is where I see novices/data scientists fail. They think "oh, I'll just use deep learning to find the stained region". Nope, won't work. As a thought experiment, what if the cleaned stain region that DL found was the same color/darkness as the original stain, but the region was just 80% smaller? DL would find the small region and give you a color the same as the original stain, giving a Delta E of zero, indicating no cleaning happened. You need to also include the other 80% of pixels that got cleaned, not just look at the 20% that your DL model found. I think a lot of mistakes are going to be made by data scientists because they don't have a good foundation in traditional image analysis, optics, and spectroscopy. They don't know what they don't know, and will blindly give results they think are accurate but are not.
Adam Danz
Adam Danz on 20 Jun 2024
That choice of color spaces makes a lot of sense. Thanks for the explanation and attachments.
The Image Processing Toolbox has a deltaE function, released in R2020b, well into your career. I'm sure you had your own Delta E tools in MATLAB before then. How do you make the decision whether to keep using your custom tools or to start using new features that perform the same task? This type of decision is something I often find myself considering with my own tools.
goc3
goc3 on 19 Jun 2024
How were you introduced to MATLAB? And, how did you do computations before using MATLAB?
Image Analyst
Image Analyst on 19 Jun 2024
Originally I was using a DEC VAX and FORTRAN. Then I was using FORTRAN with calls to a Gould image processing box. Then around 1991 I switched to microcomputers (PC-AT) and started using a package called Visilog, which was a C-language-based package where you'd write programs in C and then compile them to test. Then I moved to another similar package called Aphelion that someone else in the company recommended; that one was a nightmare. Then in 1993 we moved to a very nice interpretive image processing package called Optimas. It was vectorized and very similar to MATLAB. I used it until the company was acquired by Media Cybernetics, which made the biggest-selling PC-based image processing package, Image-Pro Plus, and they killed off the Optimas product in 1995, though I continued to use it.
I was looking for the next generation of product to use, and a colleague asked me to do the image processing for a pharmaceutical project they had. They were using MATLAB and wanted me to as well. Another group in the company was also using MATLAB, and I'd heard several people at SPIE and IS&T conferences say they used MATLAB, so I decided to use it. That was in 2006. Because of my deep experience with vectorized languages (Optimas), I caught on to MATLAB very quickly and considered myself basically an expert in MATLAB in about 3 months. I've been using it ever since.
When Python came along, it was rapidly gaining users, so I thought I'd better look into it. I took a week-long course in Python, but in the end I didn't find anything that Python would do better or more easily than MATLAB. Plus, Python is a lot fussier, with having to import things, having to get the indentation exactly right, etc. When I started to get into deep learning, I thought I might have to switch to Python, but then MATLAB was coming up with all these deep learning functions and tools that made it much easier to do deep learning in than Python. So I saw no need to continue with a language (Python) that was more complicated, harder to use, harder to find the functions you need in, and couldn't do anything I could already do easily in MATLAB.
goc3
goc3 on 19 Jun 2024
If you could change any one thing in MATLAB, what would it be and why?
goc3
goc3 on 19 Jun 2024
Which built-in MATLAB function (or set of functions) or app have you found to be the most helpful and why?
Image Analyst
Image Analyst on 19 Jun 2024
Well, number one has to be regionprops. Essentially all image analysis comes down to this: preprocess the image until you get to a point where you can threshold it and get a binary image of the regions you want to measure. Then you call regionprops to make the measurements.
The function imread is used quite a bit, for obvious reasons, as well as imshow to display the image.
Other functions I use a lot include imfill, bwareafilt, and the family of morphological functions such as imerode, etc.
About a third of my code is error handling or dealing with unexpected situations. For example, you go to process an image expecting it to be gray scale but it's really color, or the file doesn't exist, or no regions were found, etc. So I use functions like fullfile, fprintf, isfile, and warndlg a lot.
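The regionprops recipe above fits in a few lines; here is a minimal sketch on a demo image that ships with MATLAB:
gray = imread('coins.png');    % demo image included with MATLAB
bw = imbinarize(gray);         % threshold to a binary image
bw = imfill(bw, 'holes');      % clean up the mask
stats = regionprops('table', bw, gray, 'Area', 'Centroid', 'MeanIntensity');
disp(stats)                    % one row of measurements per region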
goc3
goc3 on 19 Jun 2024
Which of your contributions to File Exchange are you most proud of and why?
Image Analyst
Image Analyst on 19 Jun 2024 (Edited on 19 Jun 2024)
It would be my Image Processing Tutorial. It's one of the most downloaded File Exchange submissions of all time and I think I've helped countless people get started with a simple example in what otherwise could be a bewildering array of possible functions or things people might attempt on their own if they didn't have a basic example as guidance.
A lot of the other things I've submitted are just sort of fun things. Some of the color segmentation things probably don't apply anymore now that the Color Thresholder tool (on the Apps tab of the tool ribbon) has been introduced. That makes it easy to get an initial mask of the colored regions you want, though you will often have to clean up the mask with other functions, like filling holes in the region with imfill or doing some size filtering with bwareafilt.
I have 365 demos that I've created from Answers questions. If I thought a question was broadly applicable, then I'd create a demo from it. Quite often my answer to a new post is to take the attached demo and adapt it to the poster's situation. I can't upload all 365 demos, but I've thought about zipping them all up and submitting "A grab bag of image processing demos" to the File Exchange.
Ryan
Ryan on 25 Jun 2024 at 13:49
That tutorial is what got me into my first research group at college, eventually influencing my career path! Thank you for sharing your work and ideas!
goc3
goc3 on 19 Jun 2024
That is quite the feat. Answers is fortunate to have a contributor like you.
David
David on 19 Jun 2024
Hi Mark, thanks for doing this AMA. I'm curious to know if there's something related to image processing, imaging, or analysis that you would have liked to do but for some reason never did, whether for your career or otherwise.
Image Analyst
Image Analyst on 19 Jun 2024
I'd like to have done more deep learning, because I think it's a really cool concept. However, I'd usually get about one new project a week, and almost always they could be tackled with traditional image analysis methods. On a few projects where it was difficult to do traditionally, like finding masks or ROIs in a noisy or cluttered environment, I've employed deep learning to estimate the mask. In one project, finding landmarks on an image with deep learning was acceptable in most cases, but in some cases it required traditional programming to accurately locate those landmarks. And even when deep learning did an acceptable job, I always had to follow up with additional steps of traditional methods to make the actual measurements, like using regionprops on a mask that my deep learning model created.
My big worry, one also voiced by Professor Ed Delp of Purdue, is that if deep learning is the only tool in the analyst's toolbox, that's all they'll ever use, since that's all they know. If they don't know the traditional methods, then they won't know whether those are better. I often see people trying to use deep learning when a simpler traditional method would work better, for example color segmentation. If you have a hammer and are driving in nails, then a hammer is great; however, once you encounter a screw it doesn't work, and if you don't know about a screwdriver....
But image processing is such a huge field that there are lots of interesting subspecialties that I thought looked fun and interesting but never had the time or need in my job to dive into those.
I also did slow-speed image analysis for R&D. We had another group that worked with commercial machine vision companies to do high-speed image analysis on the assembly line, where things are flying by at a speed of hundreds per minute. They usually do simple things like gauging caliper distances, or checking for the presence or alignment of a label or other part of the product being assembled.
I've also taught many classes here at P&G on MATLAB, image analysis, color science, etc. One thing I'd like to do is teach the MATLAB course on image processing or computer vision. I never got to do that (maybe I just need to ask), but I think that would be fun. There must be people in Hawaii or San Diego who want to learn MATLAB image processing, right? I could do that.
David
David on 19 Jun 2024
Thanks for the great response!
