Finding the points where Bessel functions are equivalent
Hello,
I am trying to reproduce some results from a research paper using MATLAB, and I am having some trouble finding some propagation constants. I suspect MATLAB is capable of doing this, but I'm not sure how to go about it. The value I am trying to find is the propagation constant β, which enters through the two parameters (written here as they appear in my code below, with n1 = nCore and n0 = nClad):

u = (a/2)*sqrt(β^2 - k^2*n0^2),  w = (a/2)*sqrt(k^2*n1^2 - β^2)

The values for k, n1, n0 and a are known. Only β is unknown. I am trying to find β by determining the values of u and w that satisfy the characteristic equation:

J1(u)/(u*J0(u)) = -K1(w)/(w*K0(w))

where J1 and J0 are Bessel functions of the first kind and K1 and K0 are modified Bessel functions of the second kind. I have tried solving this with the Symbolic Math Toolbox as follows:
um = 1e-6;          % metres per micron
nCore = 1.4682;     % core refractive index (n1)
nClad = 1.446;      % cladding refractive index (n0)
lam = 1.55*um;      % wavelength
k = 2*pi/lam;       % free-space wavenumber
a = 8.2*um;         % core diameter (a/2 = radius)
A = 125*um;         % cladding diameter (unused below)
syms x              % x stands for beta
w = (a/2)*sqrt(k^2*nCore^2 - x^2);
u = (a/2)*sqrt(x^2 - k^2*nClad^2);
eqn = (besselj(1, u)/(u*besselj(0, u))) == -1*(besselk(1, w)/(w*besselk(0, w)));
S = vpasolve(eqn, x);
This returns the following:
Warning: Unable to find explicit solution. For options, see help.
> In sym/solve (line 317)
In besselTest (line 24)
The results here are well known, so I think the issue is with my formulation of the problem.
Is there another way to go about this without using symbolic math?
Thanks!
2 Comments
Walter Roberson
on 23 Feb 2025
The error message does not match the code: the message is about solve(), but the code uses vpasolve().
Matthew
on 23 Feb 2025
Accepted Answer
Like this?
um = 1e-6;
nCore = 1.4682;
nClad = 1.446;
lam = 1.55*um;
k = 2*pi/lam;
a = 8.2*um;
A = 125*um;
w = @(x) (a/2)*sqrt(k^2*nCore^2 - x^2);
u = @(x) (a/2)*sqrt(x^2 - k^2*nClad^2);
eqn = @(x) (besselj(1, u(x))/(u(x)*besselj(0, u(x)))) + 1*(besselk(1, w(x))/(w(x)*besselk(0, w(x)))); % residual: zero when both sides of the characteristic equation agree
x0 = k*(nCore+nClad)/2; % initial guess: midway between k*nClad and k*nCore
x = fzero(eqn,x0);
disp(x)
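For readers without MATLAB, here is a sketch of the same root-find in Python with SciPy (not part of the original answer; it assumes scipy.special.jv/kv and scipy.optimize.brentq, with all constants copied from the thread). A bracket is used instead of a single initial guess:

```python
import numpy as np
from scipy.special import jv, kv
from scipy.optimize import brentq

um = 1e-6
nCore, nClad = 1.4682, 1.446
lam = 1.55 * um
k = 2 * np.pi / lam        # free-space wavenumber
a = 8.2 * um               # core diameter

def u(x):
    return (a / 2) * np.sqrt(x**2 - k**2 * nClad**2)

def w(x):
    return (a / 2) * np.sqrt(k**2 * nCore**2 - x**2)

def eqn(x):
    # residual of J1(u)/(u*J0(u)) == -K1(w)/(w*K0(w))
    return jv(1, u(x)) / (u(x) * jv(0, u(x))) + kv(1, w(x)) / (w(x) * kv(0, w(x)))

# bracket chosen inside the guided-mode range k*nClad < beta < k*nCore,
# above the singularity where J0(u) = 0; the residual changes sign here
beta = brentq(eqn, 5.905e6, 5.910e6)
print(beta)
```

The bracket endpoints are taken from the search bounds used later in the thread; brentq only needs the residual to change sign between them.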
Hi @Matthew

You might consider approaching the problem as finding the minimum of a cost function; I have used both ga() and fmincon() for this purpose. You should exercise caution with the square-root functions to prevent the generation of complex numbers: both expressions under the square roots, k^2*nCore^2 - beta^2 and beta^2 - k^2*nClad^2, are differences of very large values (k^2*n^2 and beta^2 are on the order of 10^13). Therefore, it is essential to estimate the range of β to solve the problem more efficiently. The β I found using this approach is around 5,908,196.505.
format long
%% initial beta search
lb = 5.905e6; % lower bound of beta search
ub = 5.910e6; % upper bound of beta search
beta0 = ga(@costfcn, 1, [], [], [], [], lb, ub)
%% refined beta search
options = optimoptions('fmincon', 'Display', 'iter', 'OptimalityTolerance', 1e-8);
[bestbeta, fval] = fmincon(@costfcn, beta0, [], [], [], [], lb, ub, [], options)
%% plot result
um = 1e-6;
nCore = 1.4682;
nClad = 1.446;
lam = 1.55*um;
k = 2*pi/lam;
a = 8.2*um;
A = 125*um;
beta = linspace(5.90e6, 5.92e6, 10001);
w = (a/2)*sqrt(k^2*nCore^2 - beta.^2);
u = (a/2)*sqrt(beta.^2 - k^2*nClad^2);
Leqn = (besselj(1, u)./(u.*besselj(0, u))); % left side of equation
Reqn = -1*(besselk(1, w)./(w.*besselk(0, w))); % right side of equation
plot(beta, [Leqn; Reqn]), grid on, xlabel('\beta')
xline(bestbeta, '--', sprintf('best beta: %.3f', bestbeta), 'color', '#7F7F7F', 'LabelVerticalAlignment', 'bottom')
legend('Leqn', 'Reqn', 'best \beta', 'location', 'east')
%% cost function to find the minimum
function J = costfcn(beta)
um = 1e-6;
nCore = 1.4682;
nClad = 1.446;
lam = 1.55*um;
k = 2*pi/lam;
a = 8.2*um;
A = 125*um;
% to find the range
% ub = k^2*nCore^2
% lb = k^2*nClad^2
% beta = linspace(sqrt(lb+100), sqrt(ub-100), 10001);
% beta = linspace(5.90e6, 5.92e6, 10001);
w = (a/2)*sqrt(k^2*nCore^2 - beta.^2);
u = (a/2)*sqrt(beta.^2 - k^2*nClad^2);
Leqn = (besselj(1, u)./(u.*besselj(0, u)));
Reqn = -1*(besselk(1, w)./(w.*besselk(0, w)));
J = (Leqn - Reqn).^2;
end
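The same squared-residual idea can be sketched in SciPy (my addition, not from the answer; minimize_scalar with the 'bounded' Brent method stands in for ga()/fmincon(), and the bounds are the lb/ub search window above):

```python
import numpy as np
from scipy.special import jv, kv
from scipy.optimize import minimize_scalar

um = 1e-6
nCore, nClad = 1.4682, 1.446
lam = 1.55 * um
k = 2 * np.pi / lam
a = 8.2 * um

def costfcn(beta):
    # squared residual of the characteristic equation, mirroring the MATLAB costfcn
    w = (a / 2) * np.sqrt(k**2 * nCore**2 - beta**2)
    u = (a / 2) * np.sqrt(beta**2 - k**2 * nClad**2)
    Leqn = jv(1, u) / (u * jv(0, u))
    Reqn = -kv(1, w) / (w * kv(0, w))
    return (Leqn - Reqn)**2

# bounded scalar minimisation over the same window as lb/ub in the ga() search
res = minimize_scalar(costfcn, bounds=(5.905e6, 5.910e6), method='bounded')
print(res.x, res.fun)
```

The bounds matter here for the same reason given above: outside the guided-mode range the square roots go complex, so restricting the search keeps the cost real-valued.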
2 Comments
Matthew
on 23 Feb 2025
Sam Chak
on 24 Feb 2025
You do not need to supply an initial guess when using the genetic algorithm (ga) method, as it automatically generates a random initial population of solutions to initiate the search process. However, it does not return the same solution each time it is executed because it incorporates randomness in its operations to find a 'just good enough' solution.
Most optimization methods require an initial guess from the user, as this allows for guiding the iterative process from a desired starting point toward a potential optimum. This enables the optimizer to navigate the solution space more efficiently and converge to a result more quickly. You may refer to @David Goodmanson's excellent answer for further details.
Hi @Matthew
I revisited your approach using vpasolve(), but I provided an initial guess that is close to the solution, as illustrated in the graph.
um = 1e-6;
nCore = 1.4682;
nClad = 1.446;
lam = 1.55*um;
k = 2*pi/lam;
a = 8.2*um;
A = 125*um;
syms x
w = (a/2)*sqrt(k^2*nCore^2 - x^2);
u = (a/2)*sqrt(x^2 - k^2*nClad^2);
% eqn = (besselj(1, u)/(u*besselj(0, u))) == -1*(besselk(1, w)/(w*besselk(0, w)));
eqnLeft = (besselj(1, u)/(u*besselj(0, u)));
eqnRight = -1*(besselk(1, w)/(w*besselk(0, w)));
lb = 5.90e6; % lower bound
ub = 5.92e6; % upper bound
fplot([eqnLeft eqnRight], [lb, ub]), grid on
legend('Leqn', 'Reqn', 'location', 'east')
x0 = (lb + ub)/2;
bestbeta = vpasolve(eqnLeft == eqnRight, x, x0)
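As a cross-check of the initial-guess approach (my sketch, not from the answer), scipy.optimize.fsolve plays the role of vpasolve() with a starting point, using the same midpoint guess between the lb/ub bounds above:

```python
import numpy as np
from scipy.special import jv, kv
from scipy.optimize import fsolve

um = 1e-6
nCore, nClad = 1.4682, 1.446
lam = 1.55 * um
k = 2 * np.pi / lam
a = 8.2 * um

def eqn(x):
    # residual form of eqnLeft - eqnRight from the MATLAB code above
    w = (a / 2) * np.sqrt(k**2 * nCore**2 - x**2)
    u = (a / 2) * np.sqrt(x**2 - k**2 * nClad**2)
    return jv(1, u) / (u * jv(0, u)) + kv(1, w) / (w * kv(0, w))

x0 = (5.90e6 + 5.92e6) / 2   # midpoint of the plotted interval, as in the answer
beta = fsolve(eqn, x0)[0]
print(beta)
```

Like vpasolve(), fsolve only needs the guess to be close enough that the iteration stays away from the J0(u) = 0 singularity below the root.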