#11
August 9th, 2017, 12:45 PM
Michael_Stones

Quote:
Originally Posted by Asher Kelman
James,

I have been a scientist all of my professional life. In medical research, where folks' lives are on the line, we take a P value of < 0.05 as the limit of acceptability. (See the caveat below as a footnote.)

Using this (albeit artificial) boundary for plausibility (not "proof", mind you), we have succeeded in advancing the quality and success of medical care, as all new treatments are filtered this way.


Asher
Asher, I too have been a scientist all my professional life and will continue to be one after formal retirement later this year. I'm far less convinced than you are by the 'magic' of p < .05 or a more extreme probability figure. The two types of databases I work with are close to a technical ideal of 'census level', meaning observations from nearly everyone in specified populations. The numbers of people range from about 5,000 to over 100,000, with the observations analysed in different models usually limited to about 5-10 variables, plus pertinent interactions. With databases as big as this, believe me, the likelihood of not getting p < .05 is close to zero. In other words, the obtained probabilities in a statistical model of a large database are nearly all at p < .05 or beyond. What this means is that a statistical effect is usually significant even if the relative discrepancy between people showing one trend or another is close to 50% (e.g., 49% to 51%).

If this ain't bad enough, statistical significance with small samples is rarely meaningful if the purpose is to generalize to a larger population. A more appropriate index of statistical importance is effect size, which here means the proportion of overall variability accounted for by a given variable or interaction.
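[Editor's note: the point about a trivial 49%/51% split becoming "significant" at census scale can be checked with a minimal sketch. This is not Mike's actual modelling, just a stdlib normal-approximation test of a single proportion against 0.5; the sample sizes mirror the 5,000 to 100,000 range he mentions.]

```python
from math import sqrt, erfc

def two_sided_p(successes: int, n: int, p0: float = 0.5) -> float:
    """Two-sided p-value for a one-sample proportion test (normal approximation)."""
    p_hat = successes / n
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
    return erfc(abs(z) / sqrt(2))  # equals 2 * P(Z > |z|)

# The same 51% vs 49% split: not significant at n = 5,000,
# far beyond p < .05 at n = 100,000.
for n in (5_000, 100_000):
    p = two_sided_p(round(0.51 * n), n)
    print(f"n = {n:>7}: p = {p:.2e}")
```

At n = 5,000 the p-value is around 0.16; at n = 100,000 it collapses to roughly 2.5e-10, even though the underlying 2-point discrepancy is practically negligible.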

It's for reasons like this that findings are inconsistent upon replication in many major studies in disciplines like social psychology and epidemiology. One type of database I've worked with for nearly 40 years is notable because statistical modelling outcomes, whether by myself or by other groups of researchers, have proved replicable throughout this time. The main reason is that the models account for something like 98% of the variance we try to explain. In studies that show inconsistency upon replication, the explained variance is usually way, way lower: < 10% in many studies, even in prestigious medical journals.
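[Editor's note: the 98%-versus-<10% contrast in explained variance can be illustrated with a toy R² computation. The data below are simulated, not from the databases discussed: the same linear signal with low versus high noise, fit by ordinary least squares.]

```python
import random

def r_squared(xs, ys):
    """Proportion of variance in ys explained by a least-squares line on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx                 # slope
    a = my - b * mx               # intercept
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

random.seed(1)
xs = [i / 100 for i in range(1000)]
tight = [2 * x + random.gauss(0, 0.3) for x in xs]   # high signal-to-noise
noisy = [2 * x + random.gauss(0, 20.0) for x in xs]  # low signal-to-noise
print(f"tight model R^2 = {r_squared(xs, tight):.3f}")
print(f"noisy model R^2 = {r_squared(xs, noisy):.3f}")
```

Both slopes are estimated as "significant" with n = 1,000, but the first model explains nearly all the variance while the second explains a small fraction, which is exactly why only the first kind tends to replicate cleanly.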

Now I know little about the methodology and theoretical background of research on climate science. But I do get worried when reports by proponents suggest that data essential to the Paris Agreement are faulty: http://www.bbc.com/news/science-environment-40669449. It makes me wonder how good the data were that led to this agreement. There clearly must be an imperative to collect good data and use appropriate analyses.

Over the years, there have been many examples of Doomsday scenarios promoted by scientists and advocacy groups, with support from government and industry. I remember previous scenarios in which the world's oil supply would be used up by now, famine would wipe out large populations in Africa, AIDS would do the same in Africa and beyond, and computers would crash and havoc would ensue because of the millennium bug. Most of these scenarios did not come to pass, partly because good science was applied to remediation. It bothers me that we now have climate change deniers who dismiss science they do not understand, for reasons I do not understand. As far as I remember, there were few such deniers for preceding doomsday scenarios.

A scenario that worries me more than climate change concerns the obesity epidemic, examples of which I see every day. A debate has begun about whether medicine and nutritional science contributed to this epidemic because of a scientific mistake over 60 years ago: http://www.independent.co.uk/life-st...n-9692121.html. The 'mistake' was to link 'unhealthy eating' with saturated fat rather than carbohydrates, on the basis of dietary information collected during a period when the sample did not eat their regular diet. The jury is still out, but over 200 Canadian physicians recently petitioned the Canadian Government to change the Canadian Food Guide accordingly. We'll see what happens next.

Cheers
Mike