Before R2020b, the easiest way to bring the critic into Simulink without using the Agent block is to call generatePolicyFunction to generate a script that does inference, and then use a MATLAB Function block to call the generated script. You may need to add coder.extrinsic statements at the beginning of the script for functions that do not support code generation, but that should work.
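As a quick sketch of that workflow (file names below are the defaults generatePolicyFunction produces; the extrinsic function name is just a placeholder):

```matlab
% At the MATLAB command line, after training the agent:
generatePolicyFunction(agent);   % creates evaluatePolicy.m and agentData.mat

% If evaluatePolicy.m calls something that does not support code
% generation, declare it extrinsic at the top of that function, e.g.:
%   coder.extrinsic('someUnsupportedFunction');   % placeholder name
```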
In R2020b (you can try this in the prerelease that is out), Deep Learning Toolbox ships a couple of blocks (Predict and Stateful Predict) that let you bring deep neural networks into Simulink directly. This would be a faster way to do the same.
To do inference on the critic in Simulink before R2020b, create a function that does inference like the following:
function q = evaluateCritic(observation1)
q = localEvaluate(observation1);

function q = localEvaluate(observation1)
persistent policy    % cache the network so it is loaded only once
if isempty(policy)
    policy = coder.loadDeepLearningNetwork('agentData.mat','policy');
end
q = predict(policy,observation1);
and in the MATLAB Function block in Simulink, put the following:
function q = MATLABFcn(observation1)
q = evaluateCritic(observation1);
Note that it is preferable to use a critic architecture that has a single input channel and outputs one Q-value per action. I tested the above and it works in R2020a.
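For reference, a minimal sketch of such a single-input, multi-output critic (the layer sizes, layer names, and the 4-element observation / 2-action setup are illustrative; obsInfo and actInfo come from your environment via getObservationInfo/getActionInfo):

```matlab
% One observation input, one Q-value per action on the output
net = [
    imageInputLayer([4 1 1],'Normalization','none','Name','state')  % 4-element observation (assumed)
    fullyConnectedLayer(24,'Name','fc1')
    reluLayer('Name','relu1')
    fullyConnectedLayer(2,'Name','q')    % 2 discrete actions (assumed)
    ];
critic = rlQValueRepresentation(net,obsInfo,actInfo,'Observation',{'state'});
```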
Hope that helps