Meta-science

Why meta-science?

Andrew Gelman:

I spend a lot of time thinking and writing about the research process, rather than just doing research.

And all too often I find myself taking time out to explain why I’m spending time on meta-science that I could be spending on science instead.

My answer is that meta-science discussions are, like it or not, necessary conditions for more technical work. Without meta-science, we keep getting caught in deference traps. Remember when we and others criticized silly regression discontinuity analyses? The response in some quarters was a reflexive deference to standard methods in econometrics, without reflection on the applicability of these methods to the problems at hand. Remember when we and others criticized silly psychology experiments? The response in some quarters was a reflexive deference to standard practices in that field (“You have no choice but to accept,” etc.).

[…] Remember that gremlins research? Again, the researcher didn’t give an inch; he relied on the deference given to the economics profession, and the economics profession didn’t seem to care that its reputation was being used in this way. Remember beauty and sex ratio, divorce predictions, etc.? Technical criticism means nothing at all to the Freakonomicses, Gladwells, NPRs, and Teds of the world. Remember how methods of modern survey adjustment were blasted by authority figures such as the president of the American Association for Public Opinion Research and the world’s most famous statistical analyst? Again, our technical arguments didn’t matter one bit to these people. Technical reasoning didn’t come into play at all. It was just one deference trap after another. So, yes, I spend hours dismantling these deference traps so we can get to our real work. Perhaps not the best use of my time, given all my technical training, but somebody’s gotta do it. I’m sick and tired of feeling like I have to explain myself, but the alternative, where people don’t understand why I’m doing this, seems worse. In the words of Auden, “To-morrow, perhaps the future.”

Objectivity

(NOTE: Do not mix up "objectivity" with the Ayn Rand cult called Objectivism.)

See also Subjective facts

An objective account of inference is deliberately designed to avoid being misled, despite limitations, biases, and limited information. By being deliberately conscious of the ways biases and prejudices lead us astray, an adequate account of inference develops procedures to set up stringent probes of errors while teaching us new things. I am not saying this is true for Gelman (it isn’t), I am saying that I have constantly heard, over the years, “we cannot get rid of human judgments” as a prelude for (a) throwing up one’s hands and saying all science/inference is subjective and/or (b) regarding as false or self-deceived any claim that a method is objective in this scientifically relevant sense – in accomplishing reliable learning about the world. This is dangerous nonsense that has too often passed as a deep insight as to why scientific objectivity is impossible. In the trivial sense (science is done by humans), it is true. The idea of objectivity as the disinterested scientist (as Mike mentions, citing Galison) is indeed absurd. Objectivity demands a lot of interest – interest in not being fooled and deceived.

— Deborah Mayo (emphasis mine), statmodeling.stat.columbia.edu/2012/02/01/philosophy-of-bayesian-statistics-my-reactions-to-cox-and-mayo/

On "true objectivity":

We humans would not desire an “objectivity” that was irrelevant to humans in their desire to find things out, avoid being misled, block dogmatic authoritarian positions where relevant criticism is barred, or claimed to be irrelevant or impossible. Who wants “true objectivity” (whatever that might mean, but perhaps “ask the robot” would do) when it hasn’t got a chance of producing the results we want and need to intelligently relate to the world. I really don’t get it…

— Mayo

Science is not just for scientists

See also Responsible Research, Super MoRRI

We should celebrate new discoveries such as the Higgs Boson and the Mars Rover but we also need to find a space where scientists and the public can be involved in a debate about responsible scientific innovation. Both the innovators and the rest of us need to be held to account.

The financial sector shows us what can happen when this accountability is missing. In the wake of the 2008 financial crash, politicians and commentators of all stripes talked of the crisis also being an opportunity to have a public debate on the rebalancing of our economy and how our financial system should work.

Years later, it’s clear that neither the debate nor the rebalancing happened. I’d argue that this is, at least in part, because so few people are engaged enough with the issues to competently participate in any such debate. With scientific advancements playing a larger role in our lives in every year that goes by, we can’t afford for the public to become as antipathetic towards science and scientists as it has towards finance and financiers.

It is vital that the processes and products of science are readily available for the public to understand and interrogate.

[…]

But this leaves most scientists in a fairly unique position of self-regulation. Many other professions and sectors have had this privilege or responsibility removed.

— www.theguardian.com/science/political-science/2015/dec/01/science-not-just-for-scientists

A broader community of critical friends would be good for science as a whole, and not just specific areas of research. This extended peer community, as advocated for by Funtowicz and Ravetz, should include representatives of all those that are affected by the subject and that are willing to discuss it. The breadth of their experience would be invaluable in keeping a check on what scientists are doing.

For instance, the recent review of the Research Councils, led by the British Science Association’s esteemed outgoing President, Sir Paul Nurse, had an advisory board made up entirely of scientists or people with a science background. Most of the Research Councils themselves, who disburse funding on behalf of the taxpayer, suffer from the same problem. For how many other sectors would this lack of independent input be tolerated?

— same article

Basically: divert funding to where it will make a difference. Today a good amount of funding comes from companies expecting profit, but the remainder is spent by scientists themselves on fields with prestige. Suppose funding were distributed not by prestige (many smart people now flock to physics because of its prestige) but by social good (almost no one works in anti-aging research, because it lacks prestige)?

They may not be talking much about funding, actually… governments can already distribute funding as they please, in principle, though apparently scientists get to pick what to spend it on.

As an example, when the Human Fertilisation & Embryology Authority was created in 1991, its rules stipulated that the Chair, Deputy Chair and at least half of HFEA members needed to come from outside medicine or science. The group currently includes several people who have undergone IVF – people who are directly affected by the technology that the authority regulates.

Scientists themselves stand to benefit from this approach. For example, when it comes to making the case for a bigger role for science in society – whether that’s through government funding, industrial policy, education, or regulations – scientists themselves have something of a vested interest problem. Non-scientists could make the case far more effectively.

With more social engagement in science, more funding is politically doable.

"P-hacking"🔗

Multiple comparisons are likely to bring forth a significant result somewhere.

www.explainxkcd.com/wiki/index.php/882:_Significant is a visualization of the Texas sharpshooter problem in research. That's the mechanism behind p-hacking / data-dredging.
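A minimal simulation of that mechanism (my own sketch, not from the xkcd page): run twenty comparisons on pure noise, over and over, and count how often at least one of them comes out "significant" at the usual 0.05 threshold.

```python
# Simulate the xkcd-882 situation: test 20 "jelly bean colors" against
# pure-noise data at alpha = 0.05 and count how often at least one
# comparison looks "significant" even though every null is true.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 10_000   # repetitions of the whole 20-comparison study
n_comparisons = 20       # jelly bean colors
n_per_group = 30         # subjects per group

hits = 0
for _ in range(n_experiments):
    # Null world: treatment and control are draws from the same distribution.
    p_values = [
        stats.ttest_ind(rng.normal(size=n_per_group),
                        rng.normal(size=n_per_group)).pvalue
        for _ in range(n_comparisons)
    ]
    hits += min(p_values) < 0.05

print(f"P(at least one 'significant' result) ≈ {hits / n_experiments:.2f}")
# Expected: roughly 1 - 0.95**20 ≈ 0.64, even though nothing is going on.
```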

But we are starting to feel that the term “fishing” was unfortunate, in that it invokes an image of a researcher trying out comparison after comparison, throwing the line into the lake repeatedly until a fish is snagged. We have no reason to think that researchers regularly do that. We think the real story is that researchers can perform a reasonable analysis given their assumptions and their data, but had the data turned out differently, they could have done other analyses that were just as reasonable in those circumstances.

We regret the spread of the terms “fishing” and “p-hacking” (and even “researcher degrees of freedom”) for two reasons: first, because when such terms are used to describe a study, there is the misleading implication that researchers were consciously trying out many different analyses on a single data set; and, second, because it can lead researchers who know they did not try out many different analyses to mistakenly think they are not so strongly subject to problems of researcher degrees of freedom.

[…]

Our key point here is that it is possible to have multiple potential comparisons, in the sense of a data analysis whose details are highly contingent on data, without the researcher performing any conscious procedure of fishing or examining multiple p-values.

cite:gelmanGardenForkingPaths2013

In other words, p-hacking is a specific brand of the general problem of data-contingent analysis. You work around it by deciding, before you even see any data, which analyses you will run, and then sticking to that.
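The forking-paths point can be simulated too. In this hedged sketch (my own construction; the subgroup rule is invented for illustration), the analyst runs only one test per dataset, yet the error rate is inflated anyway, because which test gets run depends on the data.

```python
# Garden of forking paths, sketched: ONE t-test per dataset, but WHICH
# subgroup gets tested is chosen after peeking at the raw differences.
# The realized type-I error rate climbs above the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n = 10_000, 50
false_positives = 0

for _ in range(n_sims):
    y = rng.normal(size=n)                 # null world: outcome is pure noise
    group = rng.integers(0, 2, size=n)     # treatment / control
    sex = rng.integers(0, 2, size=n)       # an irrelevant covariate

    # Data-contingent choice: eyeball which sex shows the bigger raw
    # difference, then test only that subgroup. One test either way.
    diffs = []
    for s in (0, 1):
        t = y[(group == 1) & (sex == s)].mean()
        c = y[(group == 0) & (sex == s)].mean()
        diffs.append(abs(t - c))
    mask = sex == int(diffs[1] > diffs[0])
    p = stats.ttest_ind(y[mask & (group == 1)], y[mask & (group == 0)]).pvalue
    false_positives += p < 0.05

print(f"realized type-I error ≈ {false_positives / n_sims:.3f} (nominal 0.050)")
```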

Concretely, the workaround means:

Triple-blinding
simulate a fake dataset, try out different methods of analysis on it to iron out early mistakes in your thinking, then decide on the one analysis that will make the most sense to run on the real data (see the sketch after this list). This is called "data-blind analysis", or "triple-blinding": science has been done double-blind ever since Blondlot's N-rays (before which we were content with single-blinding), and now we merely take it one step further.
Pre-registration
pre-register your research: tell an external institution, ideally not your own university, how you will analyze your data, before you acquire funding and gather the data. This lets other people trust that your methods were not shaped by the data you ended up getting.
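A minimal sketch of the fake-data step, under assumptions of my own (a simple two-group design; `load_real_data` is a hypothetical placeholder, not a real function):

```python
# Data-blind analysis, sketched: commit to one analysis by debugging it on
# simulated data from the model you *believe* generates the real data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def planned_analysis(treated, control):
    """The single pre-decided analysis: Welch's t-test on group means."""
    return stats.ttest_ind(treated, control, equal_var=False).pvalue

# Fake world with a known, assumed effect size.
true_effect = 0.5
fake_treated = rng.normal(loc=true_effect, size=100)
fake_control = rng.normal(loc=0.0, size=100)

# Dry run: iron out mistakes and check the pipeline recovers the known
# effect, all before the real data exists, so the analysis cannot be
# data-contingent.
print("p on fake data:", planned_analysis(fake_treated, fake_control))

# Only then, unblind and run the same function once on the real data:
# real_treated, real_control = load_real_data()   # hypothetical loader
# print("p on real data:", planned_analysis(real_treated, real_control))
```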

The analyst herself is a random variable: simplystatistics.org/posts/2023-01-04-the-analyst-is-a-random-variable/
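One way to picture this (my own illustration, not code from the linked post) is a multiverse-style check: run several defensible analysis pipelines on the same data and look at the spread of the resulting estimates.

```python
# Same data, several reasonable analysts, several answers: the estimate's
# variance has an "analyst" component on top of sampling variance.
import numpy as np

rng = np.random.default_rng(7)
y = rng.normal(loc=0.2, size=200)      # one shared dataset
x = rng.integers(0, 2, size=200)       # group labels

def mean_diff(y, x):
    return y[x == 1].mean() - y[x == 0].mean()

# Defensible-but-different choices an analyst might make:
keep = np.abs(y) < 2                   # one analyst's outlier rule
pipelines = {
    "raw difference":       lambda: mean_diff(y, x),
    "outliers trimmed":     lambda: mean_diff(y[keep], x[keep]),
    "log-ish transform":    lambda: mean_diff(np.sign(y) * np.log1p(np.abs(y)), x),
    "median difference":    lambda: np.median(y[x == 1]) - np.median(y[x == 0]),
}

for name, run in pipelines.items():
    print(f"{name:>20}: {run():+.3f}")
```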

Reproducible vs replicable

A replicable study is one that gives the exact same results when someone else follows the steps exactly. This is a low bar to set: of course this is what should happen. But many studies are found not to replicate, among other reasons because the data was simply made up. So, require replicability as a good minimum standard.

A reproducible study is one whose conclusion can be reproduced even with a different experimental set-up. I think; usage varies between fields, and some communities swap the two terms.

Javert paradox

Where the harder you demand that someone fix or retract their paper, the more you look like an obsessive, like the detective Javert in Les Misérables, who obsessively hunted a man for stealing a loaf of bread.

In short: the more you care, the crazier you look, the less people listen.

History

Epidemiology

Tell me what you don't know

We’ll ask an expert, or even a student, to “tell me what you know” about some topic. But now I’m thinking it makes more sense to ask people to tell us what they don’t know.

Why? Consider your understanding of a particular topic to be divided into three parts:

  1. What you know.
  2. What you don’t know.
  3. What you don’t know you don’t know.

If you ask someone about 1, you get some sense of the boundary between 1 and 2.

But if you ask someone about 2, you implicitly get a lot of 1, you get a sense of the boundary between 1 and 2, and you get a sense of the boundary between 2 and 3.

The Freakonomics attack

Offenders

Is there a name for this phenomenon where an author adopts a mainstream position, pretends the mainstream isn't there yet (a particularly evil kind of Dismissive review), then argues against a mainstream that is 50 years out of date, so as to look like an original thinker?

Easy sales, but making the mainstream look so stupid damages the public's confidence in it.

Your publication is not your personal soapbox

On how replication is carried out
