Showing 208 to 211

Potential of slipboxes

My beliefs:

  • ([2022-01-21 Fri] 80% confidence): this will massively improve my life

My feelings:

  • [2022-01-21 Fri] I feel very good about it.

I see potential for slipboxes to help the owner with the following things, and more:

  • Internal Double Crux
  • Goal Factoring
    • when fleshing out a node, add how I feel about the topic
  • halos / affect heuristic
  • ugh fields
  • confirmation bias
  • keeping beliefs cruxy (The purpose of my personal wiki is Cruxiness)
  • beliefs not paying rent
  • circular belief networks
  • rationalization
  • Chesterton's Fences
  • cached thoughts
  • Help find semantic stopsigns
    • When you have no interest in fleshing out a node because it feels obvious or basic, at least write a blurb about your belief and its Epistemic status. This could help you detect a semantic stopsign or some such error, so you're at least aware of it. Such nodes aren't uninteresting extremities of your belief system – they are the underpinnings that justify it! Having access to these statements could also help you later if you want to explore these underpinnings more deeply.
  • bucket errors
    • automatically prevented by the nature of slipboxes, I think
  • inside/outside view (reference class forecasting)
    • Part of recipe
  • calibration / inner sim training / countering the availability heuristic
    • whenever possible, on every topic, present both data and examples on it
      • ideally, data is visualized according to good principles of data visualization
      • this can take so many forms, but for example when you write a node about bicycle tires, you can add an example about what products you might find in your local store and how to interpret the descriptions on those products
  • gears-level understanding
    • The slipbox confronts us with our lack of understanding
    • When a node links to another with an implication of causality, state the causal connection explicitly, and the how or why (if possible).
      • Instead of writing "Free market capitalism may lead to eco-collapse", you'd write "Free market capitalism may, by motivating selfish behavior, lead to eco-collapse". The idea is to make it easy to realize later on that you don't think this way anymore and reverse the conclusion. This requirement also naturally leads to asking "What do I know and why do I think I know it?", which is always good.
  • humility

Prosocial potential

Techno-optimism is unfashionable at the moment, but I suspect we still haven’t come close to realizing the potential of even the internet technology of the 1990s. When thousands of people converge on a topic, the collective knowledge far exceeds any one person, but our current interaction models don’t do a great job of synthesizing it. It’s a difficult problem, but it’s hard to imagine that in a hundred years we won’t have more effective ways to interact.

www.greaterwrong.com/posts/8mjoPYdeESB7ZcZvB/observations-about-writing-and-commenting-on-the-internet

See also The Garden and the Stream

What links here

Created (4 years ago)

Perceptually uniform colormap

Using these can be considered a form of honesty, of Transparent language.

Graphing colors: for example, area charts that show intensity with a rainbow palette are not good. Intensity should be shown with luminance, not hue.

IBM did research on this back in the day. There is a collection of perceptually uniform maps at colorcet.com/. Also, Matplotlib adopted a new set of perceptually uniform colormaps (viridis and friends) in 2015. Lecture at www.youtube.com/watch?v=xAoljeRJ3lU.

Cool guide: seaborn.pydata.org/tutorial/color_palettes.html
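The point about luminance versus hue can be sketched numerically. Below is a minimal pure-Python example (the RGB stops are hypothetical samples along a classic rainbow, not taken from any real colormap): perceived luminance along a rainbow is non-monotonic, so equal steps in "intensity" do not look like equal steps, while a gray ramp is monotonic by construction.

```python
# Sketch: why hue is a poor channel for intensity. Luminance weights are the
# ITU-R BT.709 coefficients; the rainbow stops are hypothetical samples.

def luminance(r, g, b):
    """Approximate relative luminance for RGB components in 0..1."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Hypothetical stops along a classic rainbow: blue -> cyan -> green -> yellow -> red
rainbow = [(0, 0, 1), (0, 1, 1), (0, 1, 0), (1, 1, 0), (1, 0, 0)]
# A plain gray ramp, where luminance IS the encoded value
ramp = [(v, v, v) for v in (0.0, 0.25, 0.5, 0.75, 1.0)]

rainbow_lum = [round(luminance(*c), 3) for c in rainbow]
ramp_lum = [round(luminance(*c), 3) for c in ramp]

print(rainbow_lum)  # rises, dips, rises, then drops: not monotonic
print(ramp_lum)     # strictly increasing
```

Note how yellow is far brighter than the red that follows it, so a rainbow-colored area chart makes the highest values look dimmer than intermediate ones.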


Created (4 years ago)


Akaike Information Criterion

Bayesian methods, #statistics

The rationale for information criteria can be found in Kullback's book Information Theory and Statistics, which explains it well.

True model: y_i = 1 + 0.1x - 0.2x^2 + …

Various models (hundreds, thousands) ∑ …

Choose the model with the best (smallest) AIC/BIC/DIC/WAIC.

AIC = D_train + 2p, where D_train is the deviance on the training data (-2 × the maximized log-likelihood) and p is the number of free parameters.

AIC is an approximation that is reliable only when: (1) The priors are flat or overwhelmed by the likelihood. (2) The posterior distribution is approximately multivariate Gaussian. (3) The sample size N is much greater than the number of parameters k.
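A toy sketch of the ranking step, on made-up data: fit two least-squares models with Gaussian errors, compute each one's training deviance, and compare AIC = D_train + 2p (p counts all free parameters, including the noise variance).

```python
# Toy AIC comparison on made-up data: constant-mean model vs. straight line.
import math

x = [0, 1, 2, 3, 4, 5, 6, 7]
y = [1.0, 1.2, 1.9, 2.4, 3.1, 3.3, 4.2, 4.4]  # roughly linear, invented

def gaussian_deviance(residuals):
    """D = -2 * max log-likelihood under i.i.d. Gaussian errors."""
    n = len(residuals)
    sigma2 = sum(r * r for r in residuals) / n  # MLE of noise variance
    loglik = -0.5 * n * (math.log(2 * math.pi * sigma2) + 1)
    return -2 * loglik

# Model A: constant mean (p = 2: mean, sigma)
mean_y = sum(y) / len(y)
dev_a = gaussian_deviance([yi - mean_y for yi in y])
aic_a = dev_a + 2 * 2

# Model B: ordinary least-squares line (p = 3: slope, intercept, sigma)
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx
dev_b = gaussian_deviance([yi - (intercept + slope * xi) for xi, yi in zip(x, y)])
aic_b = dev_b + 2 * 3

print(aic_a, aic_b)  # the line fits this data far better, so its AIC is smaller
```

The extra 2p term is what keeps the more flexible model from winning automatically: it must reduce the deviance by more than 2 per added parameter.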

Watanabe-Akaike Information Criterion

Like AIC, you can rank models by WAIC. But a more interpretable measure is an Akaike weight. The weight for a model i in a set of m models is given by

w_i = exp(-dWAIC_i / 2) / Σ_j exp(-dWAIC_j / 2),  j = 1, …, m

where dWAIC_i is the difference between model i's WAIC and the lowest WAIC in the set, i.e. dWAIC_i = WAIC_i - WAIC_min.
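The weight computation is a softmax over -WAIC/2. A minimal sketch, using hypothetical WAIC scores for three candidate models:

```python
# Akaike weights from WAIC scores:
# w_i = exp(-dWAIC_i / 2) / sum_j exp(-dWAIC_j / 2),
# where dWAIC_i = WAIC_i - min(WAIC). Subtracting the minimum first
# also keeps the exponentials numerically well-behaved.
import math

def akaike_weights(waics):
    best = min(waics)
    rel = [math.exp(-0.5 * (w - best)) for w in waics]
    total = sum(rel)
    return [r / total for r in rel]

waics = [312.4, 314.1, 320.8]  # made-up scores for three models
weights = akaike_weights(waics)
print([round(w, 3) for w in weights])  # weights sum to 1; smallest WAIC gets the most
```

A weight can be read (heuristically) as the estimated probability that the model will make the best predictions on new data, among the models compared.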

Leave-one-out cross-validation (LOO-CV)

The new kid on the block: around 2020 it was considered the best option (in which situations?).
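The idea in its most literal form: hold out one observation, fit on the rest, score the prediction, and average over all holdouts. A bare-bones sketch with a toy "model" that just predicts the training mean (invented data; in Bayesian practice one would use an importance-sampling approximation such as PSIS-LOO rather than refitting m times):

```python
# Literal leave-one-out cross-validation with a trivial mean-prediction model.

def loo_cv_mean_model(y):
    errors = []
    for i in range(len(y)):
        train = y[:i] + y[i + 1:]          # drop observation i
        pred = sum(train) / len(train)     # "refit" on the remaining points
        errors.append((y[i] - pred) ** 2)  # score on the held-out point
    return sum(errors) / len(errors)       # mean out-of-sample squared error

data = [2.1, 1.9, 2.4, 2.0, 2.2]  # made-up observations
print(loo_cv_mean_model(data))
```

Because each point is scored by a model that never saw it, the result estimates out-of-sample error directly, which is what AIC/WAIC only approximate.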

Created (4 years ago)