If you correct for multiple comparisons using statistical hypothesis testing, one of the main results will be a decision for each comparison as to whether it is statistically significant or not. With all the methods except Fisher's LSD, these decisions correct for multiple comparisons. If all the data in all groups were really sampled from the same population, then there is a 5% chance (if you pick the traditional value for alpha) that any one (or more) of the comparisons would be designated as statistically significant. Note that the 5% probability applies to the family of comparisons, not just to one comparison.
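To see why the 5% applies to the whole family, note that with k independent comparisons each tested at alpha = 0.05, the chance of at least one false positive is 1 - (1 - 0.05)^k. A minimal Python sketch (a simulation with made-up data, not Prism's calculations) illustrates what happens with uncorrected pairwise tests; since the pairwise comparisons share data, the independence formula is only a rough approximation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, n_sims = 0.05, 10, 10_000
k = 6  # number of pairwise comparisons among 4 groups

false_positives = 0
for _ in range(n_sims):
    # Four groups drawn from the SAME population, so every null is true
    groups = [rng.normal(0, 1, n) for _ in range(4)]
    # Uncorrected pairwise t tests (in effect, Fisher's LSD without the protection)
    pvals = [stats.ttest_ind(groups[i], groups[j]).pvalue
             for i in range(4) for j in range(i + 1, 4)]
    if min(pvals) < alpha:
        false_positives += 1

print(f"Simulated family-wise error rate: {false_positives / n_sims:.3f}")
print(f"Independence approximation 1-(1-alpha)^k: {1 - (1 - alpha)**k:.3f}")
```

Both numbers come out far above 5%, which is exactly what the corrected methods are designed to prevent.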
“Statistically significant” is not the same as “scientifically important”:
• In many cases it is better to focus on the size of the difference and on the precision of that value, quantified as a confidence interval.
• Rather than simply reporting which comparisons are, or are not, "statistically significant", Prism can report multiplicity adjusted P values for many tests. These can be more informative than a simple statement of statistical significance (the first sketch after this list shows both adjusted P values and confidence intervals).
• Don't get misled into focusing on whether or not error bars overlap. That doesn't tell you much about whether multiple comparisons tests will be statistically significant. If two SE error bars overlap, you can be sure that a multiple comparison test comparing those two groups will find no statistical significance. But if two SE error bars do not overlap, you can't tell whether a multiple comparison test will, or will not, find a statistically significant difference. And if you plot SD error bars rather than SEM, the fact that they do (or don't) overlap does not let you reach any conclusion about statistical significance (see the second sketch after this list). Details.
• With one-way ANOVA, you can choose to test for a linear trend between column mean and column order, and Prism will report the slope. Details here. This test tells you whether or not the trend is statistically significant (the last sketch below implements the underlying contrast).
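As an illustration of the first two points above, here is a sketch using pairwise_tukeyhsd from the statsmodels library (not Prism itself) on hypothetical data; for each pair of groups it reports the difference between means, a 95% confidence interval for that difference, and a multiplicity adjusted P value:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Hypothetical data: three groups of 12 values each
values = np.concatenate([rng.normal(10, 2, 12),   # control
                         rng.normal(12, 2, 12),   # treatment A
                         rng.normal(15, 2, 12)])  # treatment B
labels = np.repeat(["control", "A", "B"], 12)

# Tukey's test: mean difference, 95% CI, and adjusted P value per pair
result = pairwise_tukeyhsd(values, labels, alpha=0.05)
print(result.summary())
```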
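The error-bar caution can be checked directly. This toy sketch (an ordinary two-group t test on made-up data, not a full multiple comparison test) reports whether the SE bars overlap alongside the P value, making the one-directional nature of the overlap rule easy to explore:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = rng.normal(10.0, 3.0, 15)
b = rng.normal(11.0, 3.0, 15)

def se_interval(x):
    # mean plus or minus one standard error of the mean
    sem = x.std(ddof=1) / np.sqrt(len(x))
    return x.mean() - sem, x.mean() + sem

lo_a, hi_a = se_interval(a)
lo_b, hi_b = se_interval(b)
overlap = hi_a >= lo_b and hi_b >= lo_a

p = stats.ttest_ind(a, b).pvalue
print(f"SE bars overlap: {overlap}, unadjusted t-test P = {p:.3f}")
```

Overlapping SE bars rule significance out, but non-overlapping bars leave the question open.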
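The post test for linear trend is a contrast of the group means with coefficients proportional to the centered column order. The sketch below implements that standard contrast directly on hypothetical data (equally spaced columns assumed); it is not Prism's code, but it computes the same kind of slope and P value:

```python
import numpy as np
from scipy import stats

# Hypothetical data: four groups listed in column order (e.g. increasing dose)
groups = [np.array([3.1, 2.8, 3.5, 3.0]),
          np.array([3.6, 3.9, 3.4, 3.8]),
          np.array([4.2, 4.5, 4.1, 4.4]),
          np.array([5.0, 4.8, 5.3, 5.1])]

order = np.arange(1, len(groups) + 1, dtype=float)
means = np.array([g.mean() for g in groups])
ns = np.array([len(g) for g in groups])
k, N = len(groups), int(ns.sum())

# Contrast coefficients: centered column order
c = order - order.mean()

# The linear-trend contrast and the slope of column mean vs. column order
L = (c * means).sum()
slope = L / (c ** 2).sum()

# Pooled within-group variance (the ANOVA mean square within)
ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (N - k)

# t test of the contrast against zero, with N - k degrees of freedom
se_L = np.sqrt(ms_within * (c ** 2 / ns).sum())
t = L / se_L
p = 2 * stats.t.sf(abs(t), df=N - k)

print(f"slope = {slope:.3f}, t = {t:.2f}, P = {p:.4f}")
```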