Hey AI, add computation to my modern physics course. Thanks.
Duncan Carlsmith
Department of Physics, University of Wisconsin-Madison
An AI-generated CANVAS quiz header based on a Live Script on relativistic motion.
Introduction
Agentic AI is disrupting higher education. An agentic AI can act on the web rather than relying solely on its training. It can research a topic and produce a credible research paper to specification with validated references. It can create, answer, or assess student work on physics questions from elementary mechanics to graduate-level quantum field theory or quantum computing. It can comprehend, generate, run, and debug a MATLAB Live Script zip package, an HTML5 interactive web application, a JavaScript-enabled website, an ADA-compliant CANVAS site with math and images, or a mobile phone app. A student can authenticate in a learning management system like CANVAS and issue a simple prompt to an agentic AI — “Complete all of my assignments in all of my courses. Thanks.” — and an instructor can, on the other side, with AI assistance and a simple prompt, assess all such submissions, even messy hand-written work. I have demonstrated these capabilities and others.
Here, I’d like to share an experiment leveraging AI to inject computation with MATLAB into a course in modern physics. This may interest the academic readers of this blog and the curious. My prior post Giving All Your Claudes the Keys to Everything introduced my personal agentic AI context.
Live Script goals
Some years back now, I started developing and introducing Live Scripts in a two-semester introductory physics course to immerse students in computation and science without sacrificing the rigor and breadth of the class. These students have essentially no background in computing and are exploring STEM majors, principally physics, astronomy, and engineering. A self-documenting Live Script allows a student to explore even a relatively advanced physics topic and data-analysis trick, like Fourier analysis or autocorrelation, using data they collect themselves, such as a mobile phone voice memo or a digital oscilloscope trace, and then apply the same techniques to big-science open data from, for example, a gravitational-wave observatory, all without getting mired in mathematics or code writing. As the course evolves, computational challenges connected to the laboratory component introduce much of the gamut of MATLAB functionality. The goal is to show why and how modeling and assessment using computation are essential in science, and to empower students with practical skills and a sense of what is possible. The traditional lecture/demonstration/homework/discussion format was largely untouched. This course sequence was a five-credit automatic honors course, so extra work was expected. Coding as a tool rather than a chore or vocation is all the more relevant in the AI age.
Assessment strategy
To flexibly direct and assess student work, each Live Script contains a variety of ‘Try this’ suggestions, which require the user to adjust a parameter or two and observe the consequences. The student must study the physics described in the background information section, and the code enough to understand how its logic works, using the supplied comments and URLs to documentation. Tackling a ‘Try this’ suggestion does not require any coding, just changing a parameter value, perhaps with a slider. Additionally, the Live Script contains ‘Challenges’ that extend the code in some simple or possibly advanced way. The Live Script can thus serve different audiences, and an instructor can further tailor the script and the embedded suggestions and challenges as they choose. The possibilities offered are only exemplary.
An associated CANVAS quiz contains a few multiple-choice questions related to the ‘Try this’ suggestions, which are auto-graded. Additional questions require the student to upload a product, like an appropriately labeled plot comparing data to a model fit, together with a written explanation. These are readily graded electronically using CANVAS SpeedGrader, with or without an e-rubric. The emphasis is on results and analysis, not on coding facility or style. By design, the burden on the instructor is minimal.
AI-generated computational thread
In teaching a 3-credit third-semester survey of modern physics (relativity, quantum mechanics, atomic, molecular, solid state, nuclear, particle, and astrophysics) without a lab, and again for students with little or no prior exposure to computation, I needed first to develop more advanced, relevant Live Scripts. This course offers three lecture hours per week, rife with live demonstrations of cathode ray tubes, electron diffraction, Geiger counters and sources, thermal radiation, the photoelectric effect, gas discharge tubes observed with diffraction glasses, lasers, and magnetic levitation with diamagnetic and high-temperature superconductors, and so on. An additional mandatory hour per week is dedicated to small-group active learning in sectional meetings. A contemporary e-text and integrated WebAssign homework system are linked via LTI to CANVAS. These components address learning goals I am loath to sacrifice. I ultimately decided to make the new computational thread an attractive extra-credit option (in parallel with a research paper option) and implemented it with AI assistance mid-stream this semester in a way that could be emulated.
The agent was Claude Desktop running with MCP servers: the Playwright browser-automation server (for CANVAS interaction via an authenticated browser session), a MATLAB MCP server to run MATLAB, and a filesystem server (for reading local Live Script packages and writing artifacts back to disk). I asked Claude to survey my modern physics syllabus on CANVAS and my 150+ Live Scripts on the MATLAB File Exchange (FEX), and to identify those relevant to a third-semester course in relativity, quantum mechanics, atomic, molecular, solid state, nuclear, particle, and astrophysics, with my Introduction to MATLAB script included as a foundations option. Claude returned an initial list of 38 candidate scripts. I removed two that were not a good fit and ultimately approved 14, including chaos in relativistic mechanics, relativistic motion in a Coulomb field, numerical solutions to the Schrödinger equation in 1D/2D/3D via the PDE Toolbox, gravitational-wave data analysis, exoplanet transit detection, and clustering in Gaia mission stellar data, among others.
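For readers who want to replicate the setup, a Claude Desktop MCP configuration along these lines wires up three such servers. The Playwright and filesystem packages shown are the commonly used ones; the MATLAB server command and all paths are illustrative placeholders, not my exact configuration:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    },
    "matlab": {
      "command": "/path/to/your/matlab-mcp-server",
      "args": []
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/livescripts"]
    }
  }
}
```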
For each approved script, Claude downloaded the FEX zip via MATLAB websave and unzip, converted the .mlx to readable .m text via matlab.internal.liveeditor.openAndConvert, ran the key numerical sections in MATLAB to obtain concrete answer values, and then used a single Playwright browser_evaluate call — authenticated by the CSRF token from the active CANVAS browser cookie — to POST a new quiz plus all of its questions to the CANVAS REST API in one round trip. (The MATLAB webwrite path with a CANVAS_API_TOKEN environment variable consistently returned 401 in our testing; the browser-session approach worked reliably for all 14 quizzes.)
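The ingest step just described can be sketched in MATLAB roughly as follows. The URL and file names are placeholders, and matlab.internal.liveeditor.openAndConvert is an undocumented internal API that may change between releases:

```matlab
% Sketch of the per-script ingest step (placeholder URL and names).
zipUrl  = "https://www.mathworks.com/matlabcentral/fileexchange/...";  % placeholder
zipFile = websave("fex_package.zip", zipUrl);    % download the FEX zip
files   = unzip(zipFile, "fex_package");         % extract the package
mlx     = files(endsWith(files, ".mlx"));        % locate the Live Script

% Convert .mlx to plain-text .m so the agent can read the code.
% Undocumented internal function; behavior may vary by release.
matlab.internal.liveeditor.openAndConvert(mlx{1}, "readable_copy.m");
```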
Each quiz is structured identically: a description block with the FEX thumbnail image, a two- to three-paragraph physics introduction essentially copied from the FEX page or script itself with Wikipedia links to technical terms, a download link, and an “Open in MATLAB Online” link; followed by 4 multiple-choice questions worth 1 pt each (covering a fundamental physics fact, a physical mechanism, an experimental or computational technique, and a data-analysis concept), and 3 essay questions worth 3 pts each (a basic execution + screenshot, a quantitative comparison, and a bonus “Try this” modification). The essay type was deliberate: a CANVAS file_upload question accepts only a file, while an essay question gives the student a Rich Content Editor in which they can paste a screenshot directly from the clipboard and type their analysis in the same field. SpeedGrader then shows everything together. We also added an optional 0-credit student feedback question that we crafted jointly. Total: 13 points per quiz.
SpeedGrader view of a Rich Content Editor question with uploaded results
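For those replicating the build, the four-category multiple-choice pattern maps onto the Classic Quiz questions endpoint (POST /api/v1/courses/{course_id}/quizzes/{quiz_id}/questions). A sketch of one question's payload in MATLAB follows; the field names track the CANVAS REST API, but the question content is invented, and in this project the JSON was actually POSTed from the authenticated browser session rather than from MATLAB:

```matlab
% Build one multiple-choice question payload (invented example content).
a1 = struct("answer_text", "Time dilation",      "answer_weight", 100);
a2 = struct("answer_text", "Length contraction", "answer_weight", 0);
q  = struct("question", struct( ...
    "question_name",   "Physical mechanism", ...
    "question_type",   "multiple_choice_question", ...
    "points_possible", 1, ...
    "question_text",   "Which effect grows with the Lorentz factor here?", ...
    "answers",         [a1, a2]));
payload = jsonencode(q);   % JSON body for the questions endpoint
```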
The full set of 14 quizzes was created in a single working session. I reviewed and accepted the results essentially without revision — a few quiz descriptions needed a follow-up PUT to fix image sizing or to add the MATLAB Online link, but no question content required rewriting. Across the session, the procedure crystallized into a reusable SKILL.md that documents the FEX-to-CANVAS recipe end to end (download with MATLAB, design questions in the four-category MC pattern, batch quiz + question creation, verification checklist).
An AI touch on grading made the assignment fit the course without inflating its weight: a 5% group weight on the Computation category, with a drop-lowest-eleven-of-fourteen rule that keeps each student’s top three quizzes. Each quiz is 13 points, so the maximum contribution is (39/39) × 5% = 5.00% extra credit, and any student can attempt as few or as many as they wish without exceeding that cap. The CANVAS configuration is non-trivial in a few ways and includes one gotcha worth knowing about; details are in Appendix A.
Outcomes
I received about 75 submissions, with 30 of the 75 students participating, and many others opting for the research paper. Feedback was generally positive. Only a few students ran into difficulty: one suffered a European Space Agency network outage while accessing Gaia data, and another had trouble with a screen-capture process unrelated to MATLAB. Students reported workloads in an appropriate 1–3 hour range per assignment. Only about 20% of submitters elected to submit the (quite lengthy) Introduction to MATLAB assignment for credit; some likely encountered MATLAB already in the math department or engineering school, where it is used extensively, and others may have reviewed the assignment but elected not to submit because the upload questions concerned image processing (compression and decompression, blurring and deblurring) rather than course-relevant topics. Several students volunteered that these exercises were more informative and fun than their canonical problem-solving exercises.
Lessons
A few patterns from this experiment seem worth carrying forward. First, the ‘Try this’ design pattern that I had already adopted turns out to be unusually well suited to AI-assisted assessment: each suggestion converts almost mechanically into a three-part question (run, capture, analyze) with a defensible rubric; hence one working session yielded a full term’s worth of quizzes. Second, the agentic build is a short, explicit recipe (read the script, run the calculations, design the questions, post via the CANVAS API in one batched call) that other instructors can replicate, and which is now captured for me in a SKILL.md. Third, the CANVAS grading mechanics (drop-lowest, keep-best-three, group weight cap) let extra-credit work scale gracefully: students self-select breadth versus depth, and the instructor’s exposure to grading volume is bounded.
Conclusions
More broadly, I expect education to become more efficient and engaging in this AI age, with much of the routine instructional and learning burden relegated to AI. Frontier AIs can affordably tutor undergraduate students and even PhDs at their level and challenge them in new ways and at scale. Students and instructors both must develop and adjust to new learning strategies and expectations. Documented exploration enabled by interactive, code-aware artifacts like Live Scripts and Jupyter notebooks, created by a student or researcher collaboratively with AIs and other compatriots, may play an ever more important role in this environment.
My SKILL.md is 665 lines and specific to my setup, so it is not shared here. You might ask an AI to install Chromium and Playwright or Puppeteer and do all the work in its container, or choose a different assignment structure, accessing your own Live Scripts or Python equivalents on GitHub or somewhere other than the MATLAB FEX. This article documents most of what is in my skill file and should be useful background. You will want to develop and test your own process if emulating the idea here.
Acknowledgements and disclosure
The products described here and this essay were prepared with the assistance of Claude.ai. The author declares he has no financial interest in Anthropic or MathWorks.
Appendix A: CANVAS gradebook configuration
The intent was simple to state: a student who completes three or more MATLAB quizzes at full marks should receive the full 5% extra-credit boost on their course total; a student who completes one quiz at full marks should receive one-third of that boost; a student who attempts none should receive nothing. Implementing this in CANVAS took three coordinated pieces, each of which is straightforward in isolation but has at least one non-obvious failure mode.
A.1 Group structure and drop rule. The 14 quizzes live in a single assignment group named “Computation,” weighted at 5% of the course grade. The group has one rule: drop the lowest 11 scores. With 14 assignments and 11 dropped, CANVAS keeps each student’s top three. Each quiz is worth 13 points (4 multiple-choice at 1 pt + 2 essay at 3 pts + 1 bonus essay at 3 pts), so the maximum sum across the kept three is 39, and the maximum group percentage is 39/39 = 100%, contributing 0.05 × 100% = 5.00% to the course total. The group weight thus acts as a hard ceiling: no matter how many quizzes a student attempts, their boost cannot exceed 5%.
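The drop rule and cap are easy to sanity-check numerically. A quick MATLAB sketch, with invented scores:

```matlab
% Verify the drop-lowest-11 cap numerically (scores are invented examples).
scores   = [13 13 13 10 0 0 0 0 0 0 0 0 0 0];  % one entry per quiz, 14 quizzes
kept     = maxk(scores, 3);     % CANVAS drops the lowest 11, keeping the best 3
groupPct = sum(kept) / 39;      % 39 points possible across the kept three
boost    = 0.05 * groupPct;     % 5% group weight
% Three perfect quizzes: sum(kept) = 39, groupPct = 1, boost = 0.05 (5.00%)
```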
A.2 Treating ungraded as zero, selectively. Out of the box, CANVAS treats ungraded assignments as ignored rather than as zero. This is usually the right default — a student who has not yet attempted an assignment is not penalized for it — but it interacts badly with the design intent here. If a student attempted exactly one MATLAB quiz and scored 13/13, CANVAS would show their Computation group total as 13/13 = 100%, awarding the full 5% boost for a single quiz. To get the intended scaling (one quiz at 13/13 should yield 13/39 = 33.33%, contributing 1.67% rather than 5%), the unattempted quizzes must count as zero in the group calculation.
The simplest way to enforce that globally is the gradebook setting Treat Ungraded as 0, but this applies course-wide and was undesirable in my case because of an exam-administration mixup in which different students had taken different versions of Exam 1; only the version each student took should count toward their exam grade, and a global “treat ungraded as 0” would have penalized students for the version they had not been assigned. The per-assignment alternative is to use the gradebook column menu (the three-dot menu on each assignment column) and choose Set Default Grade, entering 0 with the “Overwrite already-entered grades” box left unchecked. This converts every dash in that column to a 0 while leaving real scores untouched, and it affects only the assignment whose menu was used. Applied to each of the 14 MATLAB quizzes, this gives the desired “ungraded as zero” behavior in the Computation group without affecting Exam 1 or any other category. After the fix, the worked examples behave as expected.
A.3 The points_possible gotcha. When a CANVAS Classic Quiz is created via the REST API and the quiz’s questions are POSTed in subsequent calls (or even, as in our case, in the same browser_evaluate call but as separate POST requests), the assignment row that mirrors the quiz in the gradebook can retain points_possible = 0 even though the questions internally sum to 13. The quiz preview displays the question points correctly and the quiz statistics show the correct totals, but the gradebook column header reads “Out of 0” and the group percentage calculation collapses to nonsense. Symptomatically, a student with a single real score of 4 appeared at 30.77% in the Computation column when they should have been at 10.26%: the column was contributing 4/13 instead of 4/39, because 13 of the 14 columns were silently weightless.
The cure is to force CANVAS to recompute the assignment row’s points_possible from the question sum. The simplest way is per-quiz from the UI: open the quiz, click Edit, scroll to the bottom of the editor without changing anything, and click Save (not “Save & Publish” if the quiz is already published). The act of saving the quiz triggers the recompute. The same effect is available via the API by issuing PUT /api/v1/courses/{course_id}/assignments/{assignment_id} with body {"assignment": {"points_possible": 13}} on each affected assignment, which is faster for batch use.
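The batch API cure can be sketched in MATLAB as below. Note the earlier caveat: in this project, webwrite with an API token returned 401, so your CANVAS instance may or may not accept this route. The base URL, token variable, and all IDs are placeholders:

```matlab
% Force CANVAS to recompute points_possible on each affected assignment.
% Placeholders throughout; webwrite with a token returned 401 in my setup.
base  = "https://canvas.example.edu/api/v1";
token = getenv("CANVAS_API_TOKEN");
opts  = weboptions("MediaType", "application/json", ...
    "HeaderFields", ["Authorization", "Bearer " + token], ...
    "RequestMethod", "put");
body = struct("assignment", struct("points_possible", 13));
for assignmentId = [1111 2222 3333]          % placeholder assignment ids
    url = base + "/courses/12345/assignments/" + assignmentId;
    webwrite(url, body, opts);               % PUT triggers the recompute
end
```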
The lesson for anyone scripting CANVAS quiz creation: after batch-creating quizzes and questions via the API, always verify the gradebook column header reads “Out of N” with N matching the question sum, and apply one of the two cures above before students start submitting. The skill file used in this project now flags this check explicitly.
Mitigating chat failures in AI code development
Duncan Carlsmith
Department of Physics, University of Wisconsin-Madison
Tidal Disruption Explorer (MATLAB File Exchange 183760). The process of porting this Live Script into HTML is described in this post.
Introduction
An agentic AI session ended for me this week with the message: "Claude is unable to respond to this request, which appears to violate our Usage Policy. Please start a new chat." Gulp. The substance of the conversation was completely benign: porting my MATLAB Live Script Tidal Disruption Explorer, which simulates a self-gravitating cluster of particles being shredded by tidal forces near a massive object, much as Comet Shoemaker-Levy 9 was shredded by Jupiter in 1992. The next chat picked up the work and finished it in seven turns.
Why nothing was lost is the subject of this post. The new product is the HTML5 port of Tidal Disruption Explorer, deployed at duncancarlsmith.github.io/TidalDisruptionExplorer-HTML5. But the more transferable product may be practices that can help make AI-assisted code development resilient to chat failures, connection drops, sandbox losses, and content-policy false positives. Two prior posts set my context: Live Script deployed as a 3D web application with AI introduced the workflow, and Giving All Your Claudes the Keys to Everything introduced the ngrok command server that makes the Mac controllable from any AI client. This post is about how to use such tools without losing your work when the chat dies.
Failure modes worth designing for
Long agentic sessions can fail in many ways, and most are out of the user's control. The bash_tool connection in the cloud container can go unresponsive mid-task. A stray Python process can mask a real command server on the same port. A lost development sandbox can vaporize generated artifacts; in an earlier turn of this same project, an entire test-harness directory disappeared with the sandbox and had to be reconstructed from the conversation log. Persistent context is not in fact persistent. Skills are forgotten. The user closes the laptop, the WiFi blinks off, or the chat hits a length limit. This project used Claude, but in my experience with five or six leading vendors the problems are not specific to any one AI. Without preparation, each of these is a real setback.
Best practices to consider
1. Externalize project state in a committed PROGRESS journal
A single file, committed in a repo, names every milestone, the test-pass count for each, the current state in prose, and an explicit "Recovery instructions for a fresh session" section that lists the source files, the test harness names, and the toolchain assumptions. When the previous chat failed, the next one resumed from this file alone, without needing the failed conversation. When the dev sandbox loss took out 10 test harnesses, they were rebuilt from the conversation log because the journal had recorded exactly what each harness checked and the expected pass count for each. These harnesses are also stored locally when complete and successful.
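A minimal skeleton of such a journal might look like the following; the milestone names, harness names, and paths here are illustrative, not the ones from this project:

```markdown
# PROGRESS

## Milestones
- M1 Port physics core (harness test_core.html): 12/12 pass
- M2 Rendering layer (harness test_render.html): 9/9 pass
- M3 Deployment: in progress

## Current state
One paragraph of prose describing exactly where the work stands.

## Recovery instructions for a fresh session
- Editable source of truth: local directory ~/projects/app/src
- Mirror: GitHub repo (push after every verified change)
- Test harnesses: test_*.html; expected pass counts listed above
- Toolchain assumptions: Node and Playwright available in the AI container
```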
2. Keep the work in two external locations
The container contents are fragile even without a chat failure, due to context compaction and hidden file management. I chose a local working directory as the editable source of truth. A GitHub repository held the final product and could have served as the source of truth instead of local storage; that choice was a matter of familiarity and trust. Each change was written locally first via the command server, verified on disk by reading it back, then committed and pushed to GitHub.
3. Run browser tests in the AI's container, not on the user's machine
For this project, the final product was a web app. In prior work, I used a local Chromium to view and test the product. It turns out that Claude's container ships with Node and Playwright preinstalled, and Chromium may be available from the Puppeteer install. Browser regression tests for the HTML5 application were run entirely in the container, and I only viewed staged intermediate products. Fully containing development this way is not possible when building a MATLAB product, short of taking on the added burden of running MATLAB in the cloud. The idea was to do as much as possible in the AI container, without overhead.
4. Multistep plan with explicit approval gates
Decompose the work into milestones with sub-milestones. Each has a test harness with a documented expected pass count and a concrete deliverable. Don't merge "running a test" with "uploading the result" with "committing the change": each of these decisions gets its own approval and verification. If the chat dies between any two of them, or something else goes awry, the user can stop without leaving anything dangling. This project: 8 milestones, 27 sub-milestones, 260 documented sub-checks.
5. Versioned backups before any destructive write
Every PROGRESS edit was preceded by a timestamped pre-edit copy saved in the local project repo, one per milestone.
Result
Recovery from the failed chat cost me only one turn, and six more turns finished the project. The final result: 260 of 260 sub-checks pass across all milestones, with the live deployment verified. Many hairs were pulled (the usage-policy violation was not the only issue encountered!), but no utter despair.
Links
Live HTML5 application: https://duncancarlsmith.github.io/TidalDisruptionExplorer-HTML5/
MATLAB Live Script (File Exchange 183760): https://www.mathworks.com/matlabcentral/fileexchange/183760-tidal-disruption-explorer
Source repository (GitHub): https://github.com/DuncanCarlsmith/TidalDisruptionExplorer-HTML5
Starting in R2026a you can export MATLAB figures to an HTML file that preserves axes interactions.
Click on the figure below to open the interactive MATLAB figure and pan or zoom into the axes. This demo also uses a new linkaxes feature available in R2026a.
To learn about more Graphics and App Building features in R2026a, check out today's blog article:
I submitted a MATLAB support case, but am posting this publicly to hopefully save people some trouble and to see if anyone has ideas.
After upgrading my workstation from Ubuntu 25.10 to Ubuntu 26.04 LTS, MATLAB GUI consistently prints this terminal error on shutdown:
free(): chunks in smallbin corrupted
MATLAB appears to run normally, but closing the GUI takes a long time and sometimes produces crash dumps. The terminal error occurs every time I close the GUI, but crash dumps are intermittent. I attached one R2026a crash dump. I had zero issues on Ubuntu 25.10.
Affected versions:
  • MATLAB R2026a
  • MATLAB R2025b
  • I suspect any 'new desktop' version
System:
  • Ubuntu 26.04 LTS
  • AMD EPYC 7443P
  • NVIDIA RTX 3090
  • Ubuntu 26.04 default NVIDIA driver: nvidia-driver-595-open, 595.58.03
  • NVIDIA module path: /lib/modules/7.0.0-14-generic/kernel/nvidia-595-open/nvidia.ko
  • glibc 2.43
Important note: the error first occurred with a clean MathWorks MATLAB installation before installing the Ubuntu/Debian `matlab-support` package. I later tested after installing `matlab-support`, which I understand modifies/renames some MATLAB-bundled libraries so MATLAB uses selected system libraries instead. The same shutdown error occurs both before and after applying `matlab-support`. This suggests the issue is not caused solely by the Debian/Ubuntu `matlab-support` integration or solely by one of the libraries it substitutes.
The attached crash dump shows abort/free() heap corruption detected in libc, but the higher-level stack includes MATLAB libraries such as:
The issue appears GUI-specific. These startup modes shut down cleanly:
  • matlab -batch
  • matlab -nodesktop
  • matlab -nodisplay
The shutdown error still occurs with these startup flags:
  • normal GUI launch
  • -nosplash
  • -nojvm
  • -softwareopengl
  • -cefdisablegpu
The issue also persists after:
  • renaming/resetting ~/.matlab/R2026a and ~/.MathWorks/R2026a
  • launching with a clean environment without LD_LIBRARY_PATH, LD_PRELOAD, MATLAB_JAVA, JAVA_HOME, JRE_HOME, etc.
  • testing a new Ubuntu user account
  • testing Ubuntu/GNOME, GNOME, and Xfce X11 sessions
  • testing NO_AT_BRIDGE=1 and GTK_USE_PORTAL=0
  • temporarily moving ~/.MathWorks/ServiceHost
  • testing GLIBC_TUNABLES=glibc.malloc.tcache_count=0
  • trying to capture a system coredump with ulimit -c unlimited / coredumpctl; no system coredump was produced
Because R2025b and R2026a are both affected, terminal-only modes exit cleanly, the problem occurs across GNOME/Wayland and Xfce/X11, and the error occurred on a clean MATLAB install before any `matlab-support` modifications, this appears related to MATLAB GUI shutdown on Ubuntu 26.04 / glibc 2.43 rather than a corrupted MATLAB preference folder, a single desktop session, or the Ubuntu `matlab-support` package.
Example crash dump:
Hi everyone
My blog post about the latest MATLAB release, “MATLAB R2026a has been released – What’s new?”, was published yesterday on The MATLAB Blog.
There are a lot of new features and performance enhancements, and from conversations I've had across several social media platforms, it seems that the new metafunction functionality is emerging as a user favourite. What are you most excited to see?
Cheers,
Mike
Short version: MathWorks have released the MATLAB Agentic Toolkit which will significantly improve the life of anyone who is using MATLAB and Simulink with agentic AI systems such as Claude Code or OpenAI Codex. Go and get it from here: https://github.com/matlab/matlab-agentic-toolkit
Image Analyst
Last activity on 14 Apr 2026 at 16:56

Do we know if MATLAB is being used on the Artemis II (moon mission) spacecraft itself? Like is the crew running MATLAB programs? I imagine it was probably at least used in development of some of the components of the spacecraft, rockets, or launch building. Or is it used for any of the image analysis of the images collected by the spacecraft?
Dan Dolan
Last activity on 9 Apr 2026 at 19:58

MATLAB interprets the first block of uninterrupted comments in a function file as documentation. Consider a simple example.
% myfunc This is my function
%
% See also sin
function z = myfunc(x, y)
z = x + y;
end
Those comments are printed in the command window with "help myfunc" and displayed in a separate window with "doc myfunc". A lot of useful things happen behind the scenes as well.
  • Hyperlinks are automatically added for valid file names after "See also".
  • When dealing with classes, the doc command automatically appends a list of properties and methods to the comment block.
All this is very handy and has been around for quite some time. However, the doc browser isn't great (the forward/back feature was removed several versions ago), the text formatting isn't great, and there is no way to display math.
Although pretty text/math can be displayed in a live document, the traditional *.mlx file format does not always play nice with Git and I have avoided them. However, live documents can now (since 2025a?) be saved in a pure text format, so I began to wonder if all functions should be written in this style. Turns out that all you have to do is append these lines:
%[appendix]{"version":"1.0"}
%---
to the end of any function file to make it a live function. Doing so changes how MATLAB manages that first comment block. The help command seems to be unaffected, although [text] may appear at the start of each comment line (depending on whether the file was created as a live function or subsequently converted). The doc command behaves very differently: instead of bringing up the traditional window for custom documentation, the comment block appears to get published to HTML and looks more similar to standard MATLAB help. This is a win in some ways, but the "See also" capability is lost.
Curiously, the same text can be appended to the end of a class definition file with some effect. It does not change how the file shows up in the editor, but as in live functions, comments are published when using the doc command. So we are partway to something like a "live class", but not quite.
Should one stick with traditional *.m files or make everything live? Neither does a great job for functions/classes in a namespace--references must explicitly know absolute location in traditional functions, and there is no "See also" concept in a live function. Do we need a command, like cdoc (custom documentation), that pulls out the comment block, publishing formatted text to HTML while simultaneously resolving "See also" references as hyperlinks? If so, it would be great if there were other special commands like "See examples" that would automatically copy and then open an example script for the end user.
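Putting the pieces together, the earlier example becomes a plain-text live function simply by appending the two appendix lines. This mirrors the behavior observed above; the appendix mechanism itself is not formally documented, so treat it as a sketch:

```matlab
% myfunc.m saved as a plain-text live function: the ordinary
% help comment block plus the two appendix lines at the end.
% myfunc This is my function
%
% See also sin
function z = myfunc(x, y)
z = x + y;
end
%[appendix]{"version":"1.0"}
%---
```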
Hi all,
I'm a UX researcher here at MathWorks working on the MathWorks Central Community. We're testing a new feature to make it easier to ask a question, and we'd love to hear from community members like you.
Sessions will be next week. They are remote, up to 2 hours (often shorter), and participants receive a $100 stipend. If you're interested, you can click here to schedule.
Thanks in advance! Your feedback directly shapes what gets built.
--David, MathWorks UX Research
Poll results (26 votes): Absolutely! 65% · Probably 8% · Sometimes yes, sometimes no 8% · Unlikely 15% · Never! 4%
Matt J
Last activity on 4 Apr 2026 at 14:46

MATLAB seems to follow a rule that iterative reduction operators return the appropriate identity value for empty inputs. Examples include:
sum([])
ans = 0
prod([])
ans = 1
all([])
ans = logical
1
any([])
ans = logical
0
Is it an oversight not to do something similar for min and max?
max([])
ans = []
For non-empty A and B,
max([A,B])= max(max(A), max(B))
The extension to B=[] should therefore satisfy,
max(A)=max(max(A),max([]))
for any A, which will only be true if we define max([])=-inf.
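Pending any change to the builtin, a one-line wrapper gives max the identity element argued for here; maxe is a hypothetical helper name, not a MATLAB builtin:

```matlab
% Wrapper giving max the identity element -inf for empty inputs, so that
% max([A, B]) == max(maxe(A), maxe(B)) holds even when one argument is empty.
maxe = @(x) max([x(:); -inf]);   % hypothetical helper, not a builtin
maxe([])                          % returns -Inf
maxe([3 7 2])                     % returns 7
max([maxe([3 7 2]), maxe([])])    % returns 7, consistent with max([3 7 2])
```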
If you have published add-ons on File Exchange, you may have noticed that we recently added a new, unique package name field to all add-ons. This enables future support for automated installation with the MATLAB Package Manager. This name will be a unique identifier for your add-on and does not affect the existing add-on title, any file names, or the URL of your add-on.
📝 Update and review until April 10
We generated default package names for all add-ons. You can review and update the package name for your add-ons until April 10, 2026. Review your package names now:
After April 10, you will need to create a new version to change your package name.
🚀 More changes coming with the MATLAB R2026b prerelease
Starting with the MATLAB R2026b prerelease, these package names will take effect. At that time, the package name may appear on the File Exchange page for your add-on.
Keep your eyes peeled for exciting changes coming soon to your add-ons on File Exchange!
Cantera is an open-source suite of tools for problems involving chemical kinetics, thermodynamics, and transport processes. Dr. Su Sun, a recent graduate of Northeastern University's Chemical Engineering Ph.D. program, made significant contributions to the MATLAB interface for Cantera in Cantera Release 3.2.0, in collaboration with Dr. Richard West, other Cantera developers, and MathWorks Advanced Support and Development teams. As part of this release, the MATLAB interface for Cantera transitioned to the new MATLAB-C++ interface and expanded its unit testing. Further information is available here.
Mehreen
Last activity on 23 Mar 2026

I began coding in MATLAB less than 2 months ago for a class at community college. Alongside the course content, I also completed the MATLAB onramp and introduction to linear algebra self-paced online courses. I think this is the most fun I've had coding since back when I used to make Scratch projects in elementary school. I'm kind of curious if I could recreate some of my favorite childhood Scratch games here.
Anyways, I just wanted to introduce myself since I plan to be really active this year. My name is Mehreen (meh like the meh emoji from the Emoji movie, reen like screen), I'm a data science undergrad sophomore from the U.S. and it's nice to meet you!
Hi everyone,
Some of you may remember my earlier post. Quick version: I'm a biomed PhD student, I use MATLAB daily, and I noticed that AI coding tools often suggest functions that don't exist in R2025b or use deprecated ones. So I built skills that teach them what actually works.
v2.0 adds 54 template `.m` scripts, rewrites all knowledge cards based on blind testing, and verifies every function call against live MATLAB. I tested each skill on 17 prompts and caught 8 hallucinated functions across 5 toolboxes (Medical Imaging, Deep Learning, Image Processing, Stats-ML, Wavelet).
Give it a spin!
The skills follow the Agent Skills open standard, so they also work with Codex, Gemini CLI, Claude Code, and others. If you use the official MATLAB MCP Server from MathWorks, these skills complement it: the MCP server executes your code, while the skills help the AI write good code to begin with.
One ask
How do we measure performance and evaluate agent skills? We can run blind tests and catch hallucinated functions, but that only covers what we thought to test. The honest answer is that the best way to evaluate these is community consensus and real-world testimonials. How are you using them? What worked? What still broke?
Your use cases and feedback are the most reliable eval I can get, and as a student building this, they're also the real motivation for me to keep going. If a skill saved you from a hallucinated function or pointed you to the right function call, I'd love to hear about it. If something is still wrong, I need to hear about it.
Issues, PRs, or just a reply here. Star the repo if it saved you time.
Thanks!
Happy Spring, and happy coding in MATLAB!
Best,
Ritish
View of Classical Stark Effect web application
A Live Script can be converted to an HTML5 framework web application with AI, as described in Double Pendulum Chaos Explorer: From HTML5 Prototype to MATLAB interactive application with AI. I have recently converted the Live Script Classical Stark Effect to a web application supporting a 3D twirlable display of the motion of a particle subject to an inverse-square-law force plus an additional constant force, the problem known as the classical Stark effect.
The web application deployed to GitHub may be launched here and documents its dependencies below the interactive application. The files are available at Classical Stark Effect — Interactive Web Simulation. One gotcha was the need to enable hardware acceleration in Chrome (no problem in Safari) to support a 3D twirlable display. If hardware acceleration is disabled in Chrome, the application provides a warning and replaces the 3D twirlable display with a 2D alternate.
The conversion of the script to a web application was performed with Perplexity.ai. The GitHub deployment was accomplished with Anthropic's Claude using the open-source GitHub CLI. With the gh CLI already installed and authenticated on my Mac, and Claude connected to my file system via MCP and an ngrok server, Claude executed on my Mac, via osascript, the following sequence of steps:
1. git init
Creates a hidden .git/ directory in the staging folder, initializing it as a local git repository. Before this command the folder is just a plain directory; after it, git can track files there. Run once per new project.
2. git branch -M main
Renames the default branch to main. Older git versions default to master; GitHub now expects main. The -M flag forces the rename even if main already exists. Must run after git init and before the first commit.
3. git add -A
Stages all files in the directory tree for the next commit. The -A flag means "all" -- new files, modified files, and deleted files are all included. This does not write anything to GitHub; it only updates git's internal index (the staging area) on your local machine.
4. git commit -m 'Initial release: Classical Stark Effect Interactive Simulation'
Takes everything in the staging area and freezes it into a permanent commit object stored in .git/. This is the snapshot that will be pushed. The -m flag provides the commit message inline. After this command, git knows exactly what files exist and what their contents are -- gh repo create --push will send exactly this snapshot.
5. gh repo create ClassicalStarkEffect --public --source=. --push
Three things happen in sequence inside this one command:
  • gh repo create ClassicalStarkEffect --public -- calls the GitHub API to create a new empty public repository named ClassicalStarkEffect under the authenticated account (DuncanCarlsmith).
  • --source=. -- tells gh to treat the current directory as the local git repo. It reads .git/ to find the commits and configures the remote.
  • --push -- sets the new GitHub repo as origin and runs the equivalent of git push origin main, sending the commit from step 4 up to GitHub.
Without steps 1-4 having run first, --push would have nothing to send and the repo would land empty.
6. gh api repos/DuncanCarlsmith/ClassicalStarkEffect/pages --method POST -f build_type=legacy -f source[branch]=main -f 'source[path]=/'
Calls the GitHub REST API directly to enable GitHub Pages on the repo. Breaking down the flags:
  • --method POST -- this is a create operation (not a read), so it uses HTTP POST.
  • -f build_type=legacy -- critical flag. Tells GitHub to serve files directly from the branch. The alternative (workflow) would expect a .github/workflows/ Actions file to build and deploy the site, which doesn't exist here, and would produce a permanent 404.
  • -f source[branch]=main -- serve from the main branch.
  • -f 'source[path]=/' -- serve from the root of the branch (as opposed to a /docs subdirectory).
This is the API equivalent of going to Settings > Pages in the GitHub web UI and setting Branch: main, Folder: / (root), clicking Save.
7. curl -s -o /dev/null -w "%{http_code}" https://duncancarlsmith.github.io/ClassicalStarkEffect/
Not a git or gh command, but the verification step. GitHub Pages takes ~60 seconds to build after step 6. This curl fetches the live URL and prints only the HTTP status code (-w "%{http_code}"), discarding the body (-o /dev/null) and suppressing progress output (-s). 200 means live; 404 means still building.
An emirp is a prime that is also prime when its decimal digits are reversed. They are not too difficult to find at modest sizes. For example...
isprime([199 991])
ans = 1×2 logical array
1 1
Gosh, that was easy. But what happens if the number is a bit larger? The problem is, primes themselves tend to be rare on the number line when you get into thousands or tens of thousands of decimal digits. And recently, I read that a world-record-size prime had been found in this form. You have probably all heard of Matt Parker and Numberphile.
And so, I decided that MATLAB would be capable of doing better. Why not? After all, at the time, the record size emirp had only 10002 decimal digits.
How would I solve this problem? First, we can very simply write a potential emirp as
10^n + a
then we can form the flipped version as
ahat*10^(n-d) + 1
where ahat is the decimally flipped version of a, and d is chosen based on the number of decimal digits in a itself. Not all emirps are of that form, of course, but using all of those powers of 10 makes it easy to construct a large number and its reversed form, and that is a huge benefit here. For example,
Pfor = sym(10)^101 + 943
Prev = 349*sym(10)^99 + 1
It is easier to view these numbers using a little code I wrote, one that redacts most of those boring zeros.
emirpdisplay(Pfor)
Pfor = 100000... (88 zeros redacted) ...00000943
emirpdisplay(Prev)
Prev = 34900000... (88 zeros redacted) ...000001
And yes, they are both prime, and they both have 102 decimal digits.
isprime([Pfor,Prev])
ans = 1×2 logical array
1 1
Sadly, even numbers that large are very small potatoes, at least in the world of large primes. So how do we solve for a much larger prime pair using MATLAB?
The first thing I want to do is to employ roughness at a high level. If a number is prime, then it is maximally rough. (I posted a few discussions about roughness some time ago.)
In this case, I'm going to look for serious roughness, thus 2e9-rough numbers. Again, a number is k-rough if its smallest prime factor is greater than k. There are roughly 98 million primes below 2e9.
The general idea is to compute the remainders of 10^12345, modulo every prime in that set of primes below 2e9. This MUST be done using int64 or uint64 arithmetic, as doubles will start to fail you above
format short g
sqrt(flintmax)
ans =
9.4906e+07
The sqrt is in there because we will be multiplying numbers together here, and we need always to stay below intmax for the integer format you are working with. However, if we work in an integer format, we can get as high as 2e9 easily enough, by going to int64 or uint64.
sqrt(double(intmax('int64')))
ans =
3.037e+09
And, yes, this means I could have gone as high as primes(3e9), however, I stopped at 2e9 due to the amount of RAM on my computer. 98 million primes seemed enough for this task. And even then, I found myself working with all of the cores on my computer. (Note that I found int64 arithmetic will only fire up the performance cores on your Mac via automatic multi-threading. Mine has 12 performance cores, even though it has 16 total cores.)
I computed the remainders of 10^12345 with respect to each prime in that set using a variation of the powermod algorithm. (Not powermod itself, which was not sufficiently fast for my purposes.) Once I had those 98 million remainders in a vector, it became easy to use a variation of the sieve of Eratosthenes to identify 2e9-rough numbers.
For example, working at 101 decimal digits, if I search for primes of the form 10^101+a, with a in the interval [1,10000], there are 256 numbers of that form which are 2e9-rough. Roughness is a HUGE benefit, since as you can see here, I would not want to test for primality all 10000 possible integers from that interval.
Next, I flip those 256 rough numbers into their mirror-image form. Which members of that set are also rough in the mirror-image form? This further reduces the set to only 34 candidates, rough in both directions, that we need to test for primality. With now only a few direct tests for primality, we find that pair of 102-digit primes shown above.
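Here is a small self-contained sketch of that sieve idea (my own illustration, not the author's code, and with a far smaller prime set so it runs quickly). The key fact is that 10^n + a is divisible by a prime p exactly when a ≡ -r (mod p), where r = 10^n mod p, so each precomputed remainder strikes out an entire arithmetic progression of candidate offsets a.

```matlab
n = 101; amax = 10000;                 % search 10^101 + a for a in [1, amax]
plist = uint64(primes(2e6));           % demo-sized; the post sieved ~98e6 primes below 2e9
rough = true(1, amax);                 % rough(a): no factor of 10^n + a found in plist
for p = plist
    % r = 10^n mod p by square-and-multiply; uint64 products stay exact
    % because p < sqrt(intmax('uint64')), roughly 4.3e9
    r = uint64(1); b = mod(uint64(10), p); e = n;
    while e > 0
        if bitand(e, 1), r = mod(r*b, p); end
        b = mod(b*b, p); e = bitshift(e, -1);
    end
    a0 = double(mod(p - r, p));        % smallest a >= 0 with p | 10^n + a
    if a0 == 0, a0 = double(p); end    % offsets start at 1, not 0
    rough(a0:double(p):amax) = false;  % strike the whole progression
end
candidates = find(rough);              % the few survivors go on to primality tests
```

With the full 2e9 bound, the post reports this interval is reduced to just 256 rough survivors.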
Of course, I still need to work with primes in the regime of 10000-plus decimal digits, and that means I need to be smarter about how I test a number for primality. The isprime test given by sym/isprime only survives out to around 1000 decimal digits before it gets too slow. That means I need to perform Fermat tests to screen numbers for primality. If a Fermat test indicates potential primality, I currently use a Miller-Rabin code to verify the result, one based on java.math.BigInteger.isProbablePrime.
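As an illustration of the screening step, here is a sketch of a standard Fermat test (not the author's actual screening code; powermod here is the Symbolic Math Toolbox function, and fermatProbablePrime is a hypothetical name):

```matlab
% Fermat screen: a prime N must satisfy a^(N-1) == 1 (mod N) for any base a
% not divisible by N. Most composites fail this for a given base, so only
% the survivors need the more expensive Miller-Rabin verification.
fermatProbablePrime = @(N, a) powermod(sym(a), N - 1, N) == 1;

fermatProbablePrime(sym(10)^101 + 943, 2)   % true: the 102-digit Pfor above
fermatProbablePrime(sym(91), 2)             % false: 91 = 7*13 is caught
fermatProbablePrime(sym(341), 2)            % true, yet 341 = 11*31 -- a classic
                                            % base-2 Fermat pseudoprime, which is
                                            % why a Miller-Rabin follow-up is needed
```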
And since Wikipedia tells me the current world record known emirp was
117,954,861 * 10^11111 + 1 discovered by Mykola Kamenyuk
that tells me I need to look further out yet. I chose an exponent of 12345, so starting at 10^12345. Last night I set my Mac to work, with all cores a-fumbling, a-rumbling at the task as I slept. Around 4 am this morning, it found this number:
emirp = @(N,a) sym(10)^N + a;
Pfor = emirp(12345,10519197);
Prev = sym(flip(char(Pfor)));
emirpdisplay(Pfor)
Pfor = 100000... (12327 zeros redacted) ...0000010519197
emirpdisplay(Prev)
Prev = 7919150100000... (12327 zeros redacted) ...000001
isProbablePrimeFLT([Pfor,Prev],210)
ans = 1×2 logical array
1 1
I'm afraid you will need to take my word for it that both also satisfy a more robust test of primality, as even a Miller-Rabin test takes more time than the MATLAB session we get for use in a discussion will allow. As for a better test, in the form of the MATLAB isprime utility to verify true primality, that test is still running on my computer. I'll check back in a few hours to see if it finished.
Anyway, the above numbers now form the new world record known emirp pair, at 12346 decimal digits. Yes, I do recognize this is still what I would call low-hanging fruit; having announced a largest prime of this form, someone else will find one yet larger in a few weeks or months. But even so, for the moment, MATLAB owns the world record!
If anyone else wants a version of the codes I used for the search, I've attached a version (emirpsearchpar.m) that employs the parallel processing toolbox. I do have as well a serial version which is of course, much, much slower. It would be fun to crowd source a larger world record yet from the MATLAB community.
Hey folks in MATLAB community! I'm an engineering student from India messing around with deep learning/ML for spotting faults in power electronics stuff—like inverter issues or microgrid glitches in Simulink.
What's your take?
  • Which toolbox rocks for this—Deep Learning one or Predictive Maintenance?
  • Any gotchas when training on sim data vs real hardware?
  • Cool workflows or GitHub links you've used?
Would love your real experiences! 😊
John
Last activity on 10 Mar 2026

The "Issues" with Constant Properties
MATLAB's constant properties can be rather difficult to deal with at times. For those unfamiliar, there are two distinct behaviors when accessing constant properties of a class. If a "static" pattern is used, ClassName.PropName, then the value as it was assigned to the property is returned; that is to say, nargout is 1. But, rather frustratingly, if an instance of the class is used when accessing the constant property, such as ArrayOfClassName.PropName, then nargout will equal the number of elements in the array; functionally, the constant property access scheme is then identical to the element-wise access you find on "instance" properties.
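A minimal illustration of the two behaviors described above (the class and property names here are hypothetical):

```matlab
% Saved as DemoConst.m:
%
% classdef DemoConst
%     properties(Constant)
%         Tag = "fixed"
%     end
% end

DemoConst.Tag             % static access: always exactly one output, "fixed"

arr(1, 3) = DemoConst;    % a 1x3 array of instances
[t1, t2, t3] = arr.Tag    % instance access: one output per element, just
                          % like an ordinary element-wise instance property

e = DemoConst.empty;      % a 0x0 array of instances
e.Tag                     % zero outputs -- the constant value is unreachable
```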
Motivation for Correcting Constant Property Behavior
This can be frustrating since constant properties are conceptually designed to tie data to a class. You could see this design pattern being useful where a superclass defines an abstract constant property that drives the behavior of subclasses; the subclasses define the value of the property and the superclass uses it. I would like to use this design to develop a custom display "MatrixDisplay" focused mixin (like the internal matlab.mixin.internal.MatrixDisplay). The idea is that missing-element labels, invalid-handle element labels, and other semantic values can be configured by the subclasses, conveniently just by setting the constant properties; these properties would be used by the superclass to substitute the display strings of the appropriate elements. Most of the processing happens within the superclass, so as to enable simple, low-investment, opt-in display options for array-style classes.
The issue is that you cannot rely on constant property access to return the appropriate value when the instance you've been passed is empty. A related problem occurs with excess outputs when the instance is non-scalar, but those extra values from the comma-separated list are just ignored; while I'd imagine there is some performance cost to generating the excess outputs (assuming there's no internal optimization for unused outputs), that case still functions correctly. As I enjoy exploring MATLAB, I found an internal indexing mixin class some time ago that provides far greater control of dot indexing; I've done a good deal of neat things with it, though at the cost of great overhead when implementing cool "proof of concept/showcase" examples. Today I used it to quickly implement a mixin that "fixes" constant properties such that they always return as though they were accessed statically from the class name, as opposed to from an instance.
A Simplistic Solution
To do this I just intercepted the property indexing, checked whether the indexed property was constant, and used the DefaultValue property of the metadata to return the value. This works nicely since we are not required to initialize a "dummy" scalar array or generate a function handle; both of those would likely be slower, and the former may not even be possible, depending on the subclass implementation. It is worth noting that this method of querying the value from metadata is safe because constant properties are immutable and thus must be established as the class is loaded. Below is the small utility class I have implemented to get predictable constant property access into classes that benefit from it. Lastly, I have not torture-tested the rerouting of the indexing we aren't intercepting; in my limited play it has behaved as expected, but it may be worth looking over if you play around with this and notice abnormal property assignment or reading from non-constant properties.
Sample Class Implementation
classdef(Abstract, HandleCompatible) ConstantProperty < matlab.mixin.internal.indexing.RedefinesDotProperties
%ConstantProperty Returns instance indexed constant properties as though they were statically indexed.
% This class overloads property access to check if the indexed property is constant and return it properly.
%% Property membership utility methods
methods(Access=private)
function [isConst, isProp] = isConstantProp(obj, prop, options)
%isConstantProp Determine if the input property names are constant properties of the input
arguments
obj mixin.ConstantProperty;
prop string;
options.Flatten (1, 1) logical = false;
end
% Store the cache to avoid rechecking string membership and parsing metadata
persistent class_cache
% Initialize cache for all subclasses to maintain their own caches
if(isempty(class_cache))
class_cache = configureDictionary("string", "dictionary");
end
% Gather the current class being analyzed
classname = string(class(obj));
% Check if the current class has a cache, if not make one
if(~isKey(class_cache, classname))
class_cache(classname) = configureDictionary("string", "struct");
end
% Alias the current classes cache
prop_cache = class_cache(classname);
% Check which inputs are already cached
isCached = isKey(prop_cache, prop);
% Add any values that have yet to be cached to the cache
if(any(~isCached, "all"))
% Flatten cache additions
props = row(prop(~isCached));
% Gather the meta-property data of the input object and determine if inputs are listed properties
mc_props = metaclass(obj).PropertyList;
% Determine which properties are keys
[isConst, idx] = ismember(props, string({mc_props.Name}));
idx = idx(isConst);
% Check which of the inputs are constant properties
isConst = repmat(isConst, 2, 1);
isConst(1, isConst(1, :)) = [mc_props(idx).Constant];
% Parse the results into structs for caching
cache_values = cell2struct(num2cell(isConst), ["isConst"; "isProp"]);
prop_cache(props) = row(cache_values);
% Re-sync the cache
class_cache(classname) = prop_cache;
end
% Extract results from the cache
values = prop_cache(prop);
if(options.Flatten)
% Split and reshape output data
sz = size(prop);
isConst = reshape(values.isConst, sz);
isProp = reshape(values.isProp, sz);
else
isConst = struct2cell(values);
end
end
function [isConst, isProp] = isConstantIdxOp(obj, idxOp)
%isConstantIdxOp Determines if the idxOp is referencing a constant property.
arguments
obj mixin.ConstantProperty;
idxOp (1, :) matlab.indexing.IndexingOperation;
end
import matlab.indexing.IndexingOperationType;
if(idxOp(1).Type == IndexingOperationType.Dot)
[isConst, isProp] = isConstantProp(obj, idxOp(1).Name);
else
[isConst, isProp] = deal(false);
end
end
function A = getConstantProperty(obj, idxOp)
%getConstantProperty Returns the value of a constant property using a static reference pattern.
arguments
obj mixin.ConstantProperty;
idxOp (1, :) matlab.indexing.IndexingOperation;
end
A = findobj(metaclass(obj).PropertyList, "Name", idxOp(1).Name).DefaultValue;
end
end
%% Dot indexing methods
methods(Access = protected)
function A = dotReference(obj, idxOp)
arguments(Input)
obj mixin.ConstantProperty;
idxOp (1, :) matlab.indexing.IndexingOperation;
end
arguments(Output, Repeating)
A
end
% Force at least one output
N = max(1, nargout);
% Check if the indexing operation is a property, and if that property is constant
[isConst, isProp] = isConstantIdxOp(obj, idxOp);
if(~isProp)
% Error on invalid properties
throw(MException( ...
"JB:mixin:ConstantProperty:UnrecognizedProperty", ...
"Unrecognized property '%s'.", ...
idxOp(1).Name ...
));
elseif(isConst)
% Handle forwarding indexing operations
if(isscalar(idxOp))
% Direct assignment
[A{1:N}] = getConstantProperty(obj, idxOp);
else
% First extract constant property then forward indexing operations
tmp = getConstantProperty(obj, idxOp);
[A{1:N}] = tmp.(idxOp(2:end));
end
else
% Handle forwarding indexing operations
if(isscalar(idxOp))
% Unfortunately we can't just recall obj.(idxOp) to use default/built-in so we manually extract
[A{1:N}] = obj.(idxOp.Name);
else
% Otherwise let built-in handling proceed
tmp = obj.(idxOp(1).Name);
[A{1:N}] = tmp.(idxOp(2:end));
end
end
end
function obj = dotAssign(obj, idxOp, values)
arguments(Input)
obj mixin.ConstantProperty;
idxOp (1, :) matlab.indexing.IndexingOperation;
end
arguments(Input, Repeating)
values
end
% Handle assignment based on presence of forward indexing
if(isscalar(idxOp))
% Simple broadcasted assignment
[obj.(idxOp.Name)] = deal(values{:});
else
% Initialize the intermediate values and expand the values for assignment
tmp = {obj.(idxOp(1).Name)};
[tmp.(idxOp(2:end))] = deal(values{:});
% Reassign the modified data to the output object
[obj.(idxOp(1).Name)] = deal(tmp{:});
end
end
function n = dotListLength(obj, idxOp, idxCnt)
arguments(Input)
obj mixin.ConstantProperty;
idxOp (1, :) matlab.indexing.IndexingOperation;
idxCnt (1, :) matlab.indexing.IndexingContext;
end
if(isConstantIdxOp(obj, idxOp))
if(isscalar(idxOp))
% Constant properties will also be 1
n = 1;
else
% Checking forwarded indexing operations on the scalar constant property
n = listLength(obj.(idxOp(1).Name), idxOp(2:end), idxCnt);
end
else
% Check the indexing operation normally
% n = listLength(obj, idxOp, idxCnt);
n = numel(obj);
end
end
end
end
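A quick way to exercise the mixin (the subclass name and property below are hypothetical): derive from it, then confirm that array and empty instances now behave like static access.

```matlab
% Saved as LabeledThing.m (assumes the mixin above lives in a +mixin package,
% matching the mixin.ConstantProperty validation in its arguments blocks):
%
% classdef LabeledThing < mixin.ConstantProperty
%     properties(Constant)
%         MissingLabel = "<missing>"
%     end
% end

x(1, 3) = LabeledThing;
x.MissingLabel                    % one output, "<missing>", despite the 1x3 array
LabeledThing.empty.MissingLabel   % still "<missing>", even for an empty instance
```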