As Star Strider said, this is a heuristic exploration. I am not sure there is a method that avoids this search for an initial guess.
For the model you've shown, you can take some of the heuristics out of it by analyzing the log of your model function, log(F):
logF = log(x1)+ x3*(1-xdata/x2);
Since log(F) depends linearly on log(x1) and x3, the least squares estimates of those two parameters are available in closed form once x2 is fixed. You can therefore write the least squares objective in terms of the single variable x2 alone, the essential line being
logF=norm( Q*z-log(y) );
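For concreteness, that one-variable objective might be packaged as a function like this. Only the norm(...) line above comes from the actual code; the data names t and y, the design matrix Q, and the output convention are assumptions based on the other snippets in this answer:
function [fval, xfit] = logFlsq(x2, t, y)
    % For fixed x2, log(F) = log(x1) + x3*(1 - t/x2) is linear in
    % z = [log(x1); x3], so z has a closed-form least squares solution.
    Q = [ones(numel(t),1), 1 - t(:)/x2];   % design matrix
    z = Q \ log(y(:));                     % linear least squares for z
    fval = norm(Q*z - log(y(:)));          % residual norm, to be minimized over x2
    xfit = [exp(z(1)), x2, z(2)];          % recovered parameters [x1, x2, x3]
end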
If you plot this over a range of x2,
logF = @(x2) arrayfun(@(c) logFlsq(c,t,y), x2);
you will see that it is minimized at about x2 = 12, and that the fit becomes very insensitive to x2 near that point.
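For example, using the anonymous function above (the plotting range here is an assumption, not taken from the original):
x2grid = linspace(2, 100, 200);
plot(x2grid, logF(x2grid))
xlabel('x2'), ylabel('log-domain residual norm')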
By calling logFlsq() with two output arguments, you can recover the minimizing values of x1 and x3 corresponding to x2 = 12, with something like
[~, x0] = logFlsq(12, t, y)
which gives
x0 =
    8.3133   12.0000   12.5592
Feeding this as your initial guess x0, together with your original model F, to lsqcurvefit,
x = lsqcurvefit(F, x0, t, y)
leads to a very different parameter vector,
x =
   13.2006  457.1577  622.1261
but not a very different fitted curve: both x and x0 fit the data very closely, at least visually.
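A comparison plot along those lines might look like this (assuming F(x,t) is the two-argument model handle passed to lsqcurvefit, and tfine is a hypothetical fine grid):
tfine = linspace(min(t), max(t), 200);
plot(tfine, F(x,tfine), tfine, F(x0,tfine), t, y, 'o')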
legend('Final Fit','Initial Fit', 'Data');
So, it appears that the fitting problem is highly ill-conditioned.