The only real purpose of the standard errors is to serve as intermediate values used to compute the confidence intervals. If you want to compare Prism's results to those of other programs, you will want to include standard errors in the output. Otherwise, we suggest that you ask Prism to report only the confidence intervals (choose on the Diagnostics tab). The calculation of the standard errors depends on the sum-of-squares, the spacing of X values, the choice of equation, and the number of replicates.
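As a rough illustration of where those values come from in general least-squares software (not Prism's internal code), the sketch below fits a made-up one-site binding dataset with SciPy and reads the parameter standard errors off the diagonal of the covariance matrix. The model, data, parameter names, and starting values are assumptions for this example.

```python
# A minimal sketch of how parameter standard errors arise in least-squares
# fitting. The model, data, and starting values are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def one_site(x, top, kd):
    """Hypothetical one-site binding model: Y = Top*X / (Kd + X)."""
    return top * x / (kd + x)

x = np.array([1, 2, 4, 8, 16, 32], dtype=float)
y = np.array([0.9, 1.6, 2.6, 3.3, 3.8, 4.1])

popt, pcov = curve_fit(one_site, x, y, p0=[4.0, 3.0])

# The standard error of each parameter is the square root of the corresponding
# diagonal element of the covariance matrix. Its size depends on the scatter
# (sum-of-squares), the X spacing, the model, and the number of points.
se = np.sqrt(np.diag(pcov))
for name, value, err in zip(["Top", "Kd"], popt, se):
    print(f"{name} = {value:.3f} +/- {err:.3f} (SE)")
```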
Prism reports the standard error of each parameter, but some other programs report the same values as 'standard deviations'. Both terms mean the same thing in this context.
When you look at a group of numbers, the standard deviation (SD) and standard error of the mean (SEM) are very different. The SD tells you about the scatter of the data. The SEM tells you about how well you have determined the mean. The SEM can be thought of as "the standard deviation of the mean" -- if you were to repeat the experiment many times, the SEM (of your first experiment) is your best guess for the standard deviation of all the measured means that would result.
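To make the distinction concrete, here is a small sketch that computes both the SD and the SEM of one sample; the replicate values are made up for this example.

```python
# SD versus SEM for a set of replicate measurements (values are made up).
import numpy as np

values = np.array([4.8, 5.1, 5.6, 4.9, 5.3, 5.2])
n = values.size

sd = values.std(ddof=1)   # scatter of the individual values
sem = sd / np.sqrt(n)     # how precisely the mean is determined

print(f"SD  = {sd:.3f}  (spread of the data)")
print(f"SEM = {sem:.3f}  (expected SD of the mean over repeated experiments)")
```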
When applied to a calculated value, the terms "standard error" and "standard deviation" really mean the same thing. The standard error of a parameter is the expected value of the standard deviation of that parameter if you repeated the experiment many times. Prism (and most programs) calls that value a standard error, but some others call it a standard deviation.
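The simulation sketch below illustrates that statement under assumed conditions (a straight-line model, chosen X values, and Gaussian scatter): it fits one simulated experiment and compares the reported SE of the slope with the standard deviation of the fitted slope across many repeated simulated experiments. The two numbers should be close.

```python
# Hedged simulation sketch: SE of a parameter from one fit versus the SD of
# that parameter over many repeated (simulated) experiments.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 12)
true_slope, true_intercept, noise_sd = 2.0, 1.0, 0.5

def line(x, slope, intercept):
    return slope * x + intercept

# One "real" experiment: the SE of the slope reported from that single fit.
y = line(x, true_slope, true_intercept) + rng.normal(0, noise_sd, x.size)
popt, pcov = curve_fit(line, x, y)
se_slope = np.sqrt(pcov[0, 0])

# Many repeated experiments: the SD of the fitted slopes.
slopes = []
for _ in range(2000):
    y_rep = line(x, true_slope, true_intercept) + rng.normal(0, noise_sd, x.size)
    slopes.append(curve_fit(line, x, y_rep)[0][0])

print(f"SE of slope from one fit:       {se_slope:.4f}")
print(f"SD of slope over repeated fits: {np.std(slopes, ddof=1):.4f}")
```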
If a value is preceded by ~, it means the results are 'ambiguous'. Ordinarily, changing the value of any parameter moves the curve further from the data and increases the sum-of-squares. But when the fit is 'ambiguous', a change in one parameter can be compensated by changes in the others, moving the curve close to the data again. In other words, many combinations of parameter values lead to curves that fit equally well, so the standard errors of those parameters are not well defined.
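A minimal way to see this is to fit a deliberately over-parameterized model. In the hypothetical sketch below, only the product a*b affects the curve, so any pair of values with the same product fits equally well and the reported standard errors are enormous or not computable at all.

```python
# Sketch of an 'ambiguous' fit: only the product a*b is identifiable, so the
# parameter standard errors blow up. Model and data are made up.
import numpy as np
from scipy.optimize import curve_fit

def redundant(x, a, b):
    return a * b * x   # many (a, b) pairs give the same curve

x = np.linspace(1, 10, 10)
y = 3.0 * x + np.random.default_rng(1).normal(0, 0.2, x.size)

popt, pcov = curve_fit(redundant, x, y, p0=[1.0, 1.0])
se = np.sqrt(np.diag(pcov))

# The SEs come back huge or infinite; SciPy may warn that the covariance
# of the parameters could not be estimated.
print("a, b:", popt)
print("SE of a, SE of b:", se)
```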
When the SE value is preceded by ~, the corresponding confidence intervals are shown as "very wide" with no numerical range (the range would be infinitely wide).
Starting with Prism 8.2, Prism only computes and displays the SE of parameters if you ask it (on the Confidence tab) to report symmetrical confidence intervals. Those intervals are computed from the SE values, so it makes sense to display both. We recommend reporting asymmetrical profile-likelihood intervals instead, which are more useful. Because these are not computed from the SE values, it makes no sense to report the SE alongside an asymmetrical confidence interval.
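For reference, a symmetrical (Wald-style) interval is simply the best-fit value plus or minus a t critical value times the SE. The sketch below shows that calculation with placeholder numbers; the best-fit value, SE, and degrees of freedom are assumptions for illustration, and profile-likelihood intervals are computed differently.

```python
# Symmetrical (Wald-style) 95% confidence interval from a parameter's SE.
# The numbers are placeholders, not results from a real fit.
import numpy as np
from scipy import stats

best_fit = 3.42          # hypothetical best-fit parameter value
se = 0.55                # hypothetical standard error of that parameter
n_points, n_params = 12, 2
df = n_points - n_params # degrees of freedom of the fit

t_star = stats.t.ppf(0.975, df)  # two-sided 95% critical value
ci_low, ci_high = best_fit - t_star * se, best_fit + t_star * se
print(f"95% CI (symmetrical): {ci_low:.3f} to {ci_high:.3f}")
```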