
Noticing

Background

See Agenty Duck's amazing agentyduck.blogspot.com/2014/09/what-its-like-to-notice-things.html and the related agentyduck.blogspot.com/p/noticing.html.

A commenter:

Noticing is absolutely the critical skill that allows all sorts of other interesting and vital abilities. For personal anecdotes, I got a huge gain in ability-to-maintain-eye-contact over a weekend where I noticed what it feels like to get distracted by an idea and lose focus on a conversational partner.

Steps to learn to notice a thing:

  1. What is the thing you want to notice?
  2. What does it feel like when this thing is approaching/happening? (E.g.: does your body do anything, do your limbs jerk or reach for something, is there a weight in your lungs or a pit in your stomach, do you feel a restless drive, a sense of unbalance, or any other particular kind of emotion?)
  3. Equip your tally counter. Imagine clearly to yourself what it'll feel like next time, so you're on the lookout. (Upon experiencing it, you'll likely detect aspects that you missed in step 2: learn from that, and repeat this step).
  4. Repeat step 3 until you can reliably notice the thing. This may take a week.
  5. Pick an action to do every time you notice. Stop using the tally counter and do the action instead. Congratulations!

Example

In cognitive behavioral therapy, patients are often taught to monitor their thoughts for specific words or phrases that have emotional power; kids who struggle with ADHD are sometimes encouraged to note exactly what happened right before they got distracted, and the first thing that caught their attention once they looked away.

What happens when I reach for a new tab to visit Hacker News? One time: nothing external caught my attention, but the first event was an urge to sort of refocus on the thing I was reading, to give it some extra gas and continue reading – this then triggered a feeling of "meh, I give up, let's do something fun", and my hands moved almost before I'd noticed making any decision.

After watching for it for a few minutes, I missed it. This may be extremely difficult; for now it's rewarding enough to just notice when I make the move to open a new tab or switch workspaces for no reason. Maybe later I'll watch for this mental cue.

Physical tally device

A "knitting counter" or "tally counter" is a simple hardware device where you just press a button or lever to increment a number. As a game, you can press it every time you notice the thing you're trying to notice, and in so doing you train yourself to notice it.

Think twice about which model to buy. Which knitting counter?

Trigger-action plans

You could think of the distance between the dotted and solid lines as a measure of the total effort required to make it back to the better timeline. The quicker you notice that you’ve changed course, the shorter the distance back to the better path. The less time that you’ve spent accelerating in the wrong direction, the less inertia you have to overcome.

Which leads to one of the key actionable insights of the TAPs perspective: there are times when the total effort to switch from 🙁 to 🙂 is zero, or close enough—e.g. simply catching the moment when you would have made the unfortunate switch […]

www.greaterwrong.com/posts/W5HcGywyPoDDdJtbz/trigger-action-planning


Created (3 years ago)

Reasoning about unknown reasoners

Quantum Risk-Taking, Book: Anthropic bias

Maybe similar to reasoning about unknown unknowns, Black Swans? Barbell strategies, bounded loss.

Looks like a typical decision-theoretic problem, except there the reasoner is known, just not always their motivations. Has anyone analyzed the issue of when you're unaware of what other agents may be on the chessboard?

Created (3 years ago)

The iatrogenics of having a model

"Stan uses No U-Turn sampling, JAGS uses Gibbs sampling". Great. How many people understand that? Is the method proven? What is it proven to do? What are the assumptions that sneak aboard, if any?

The more subcomponents a piece of research is built on, the more people we need to trust. I want to know that the mathematicians/economists/etc. who popularized each of these components knew what they were doing, and that those who get into the gritty math anew also see nothing amiss.

When it comes to using a pure-math lemma to prove another lemma, there's little to worry about, as math is unusual in being provable as a whole. Once a theorem has been proven, you never have to worry about it again.

This is one aspect of Trusting research: trusting the implementation correctness. It overlaps a bit with my idea of having an explicit Chain of verifiability, so that you can Measure payoff from increased complexity.

But when I wrote this title, I had something else in mind with "iatrogenics": basically, the harm done when people have a mental framework they trust too much, when they'd be better served by having no framework at all.

Example 1: Categorizing has consequences.

Example 2: The finance bubble of 2008 was propped up by the unreasonable belief that the economists and financial analysts were modeling the world well. You could call it a species of "a scientist's mindset oversimplifies reality", but I'd place the fault on the fact that the analysts involved didn't have high epistemic standards while everybody else believed they did. Perhaps I'd prefer to fault the deceptive practices of big banks, but if the analysts had been more intimately aware of the "iatrogenics of creating a model", more aware of what they didn't know, perhaps they'd never have gone out and made claims others would bet on.

Sometimes nerds think that people's behavior ought to be reducible to a small set of rules just like in physics, and that's an error we could talk about more in high school: it's possible to do real science in the social sciences, but because people are people, it's much harder to extract useful truth than in fields of study that don't involve people (Double hermeneutic), and there is never going to be a small set of rules. (For more on this: History is not a science)

Created (3 years ago)

Trusting research

Suppose you research the arms race between liars and debunkers. You'd have to hypothesize somewhat, predicting where the liars will cover up next as they catch on.

One resilient strategy is false data, against which a solid methodology (even an exaggeratedly solid one) means nothing. Verifying data is problematic, and there is a trust chain here as well. Suppose the university vouches for a researcher's methodology, even sending out someone to watch the data being collected and having someone neutral input the data and record a cryptographic hash of the resulting data file before handing it to the researcher. Do you trust the university? This also introduces a vector of corruption: those who can't afford to pay a university won't be taken seriously.

Nassim Taleb suggests requiring service providers to have "skin in the game". It's best if this is not a simple threat of penalty, which doesn't work in a corrupt society; the providers themselves should benefit from providing what they say they provide.

Some tricks include assigning a singular person rather than an amorphous organization such as the university: an organization induces a lot of biases in people, via marketing and halo effects, and suffers internal diffusion of responsibility, whereas a singular person is someone the university can reject to protect its image. Further, researchers would have a motivation to select such a person based on how many people already trust that person, as well as on how unrelated they are to the researcher. In fact this whole strategy is implementable by you, today. Just have someone who is not your friend, and who doesn't stand to benefit (e.g. you give them $100 regardless of whether or not they help you produce a confirmatory paper), vouch for you: ask them to input the data and promise to stand up for the hash they got whenever someone asks. It'll give your paper an edge, right?
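A minimal sketch of the hashing step in that protocol, assuming the neutral data-entry person ends up with the finished data file on disk (the file name and the choice of SHA-256 below are my illustration, not prescribed above):

    # The neutral data-entry person computes a digest of the finished data file
    # and later vouches for exactly that digest whenever someone asks.
    import hashlib

    def file_digest(path, algo="sha256", chunk_size=65536):
        """Return the hex digest of a file, read in chunks so large files are fine."""
        h = hashlib.new(algo)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    # "dataset.csv" is a hypothetical file name.
    print(file_digest("dataset.csv"))

Anyone who later doubts the dataset can rehash the published file and compare digests; the voucher only needs to remember or publish one short string.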

Perhaps, regardless of corruption, we can benefit from having a measure of "probability the paper is false". John Ioannidis did some work on this; he can give you a prior (80%) if you don't want to rehearse his evidence yourself.

Pr(Dishonest) increases with:

  • verbosity
  • complexity
  • Big Words (good studies do not need to dress up)
  • grouping data in inconsistent ways
  • not explaining how or why they grouped data or calculated averages
  • not setting hypotheses before the study
  • the Pr(Dishonest) (compared to our prior) of their other papers

On the other end, Pr(Dishonest) can only go so low. If the methodology is exaggeratedly, unusually solid, it may be a savvy researcher who chose to put the lie in the dataset so that the methods could be solid and easy to explain. Imagine that: an actually good study predicting something unexpected! The laurels and fame! We should set a floor of 10%, double the so-called "statistical significance" threshold. Or base it on a prior motivation to lie (if this is not already in our 80% prior).
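A toy odds-form version of this scoring, combining the 80% prior, the red flags listed above, and the 10% floor. Only the prior and the floor come from the text; every likelihood ratio below is a number I made up for illustration:

    # Toy Pr(Dishonest) update in odds form: start from the prior, multiply in one
    # likelihood ratio per observed red flag, convert back and clamp at the floor.
    PRIOR = 0.80  # Ioannidis-style prior from the text
    FLOOR = 0.10  # floor from the text

    # red flag -> assumed ratio of how much more often dishonest papers show it
    RED_FLAG_LR = {
        "verbose": 1.3,
        "needlessly complex": 1.5,
        "big words": 1.2,
        "inconsistent grouping": 2.0,
        "grouping unexplained": 1.8,
        "no hypotheses set beforehand": 2.5,
    }

    def pr_dishonest(observed_flags):
        odds = PRIOR / (1 - PRIOR)
        for flag in observed_flags:
            odds *= RED_FLAG_LR.get(flag, 1.0)
        # The floor keeps the estimate from dropping too far even when a paper
        # shows none of the flags and looks exaggeratedly solid.
        return max(FLOOR, odds / (1 + odds))

    print(round(pr_dishonest([]), 2))                                # 0.8, just the prior
    print(round(pr_dishonest(["no hypotheses set beforehand"]), 2))  # ~0.91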

Check if you agree with this

Pr(Lying|Solid methods) ~= Pr(Lying)

IOW that the solidity of methodology is irrelevant if the researcher intends to lie anyway – it just affects the location of the lie.
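A toy numerical check of that claim. The 80% prior is from above; the two conditional probabilities are my assumptions, chosen to reflect "a savvy liar puts the lie in the dataset and keeps the methods clean":

    # If liars can look methodologically solid about as often as honest researchers,
    # Bayes' rule says observing "solid methods" barely moves Pr(Lying).
    prior_lying = 0.80
    p_solid_given_lying = 0.70   # assumed: the lie sits in the data, not the methods
    p_solid_given_honest = 0.75  # assumed: honest work also tends to look solid

    p_solid = (p_solid_given_lying * prior_lying
               + p_solid_given_honest * (1 - prior_lying))
    posterior_lying = p_solid_given_lying * prior_lying / p_solid
    print(round(posterior_lying, 2))  # ~0.79, barely below the 0.80 prior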

Checking who funded a study is a common way to screen for bias today. More and more funders will catch on, creating front organizations. In Merchants of Doubt, the authors show that many front groups are named to sound like grassroots organizations, e.g.


Created (3 years ago)