
Book: How to Take Smart Notes

Book by Sönke Ahrens (around 2017) about how to use Niklas Luhmann's slipbox method ("zettelkasten" in German).

Reading this book, I generated these pages:

When learning anything, do everything as if nothing counts other than writing

Imagine a strange alternate universe in which the only way humans could hold on to any abstract information was by writing it out first. How would you approach learning abstract topics? Now what if I told you: that universe is this one!

Slipbox doesn't mean archiving info outside your head

First, the writing process itself alters your brain. Writing is learning. In a way, writing onto paper has the effect of writing it into your grey matter. So the act really creates two copies of the info – you imagine you'll be highly dependent on your notes, but sans having written the notes, the info wouldn't even be in your own head, only the vaguest recollections.

Second, the slipbox helps you discover how an idea fits in with other ideas – hard without access to old notes. You can't remember every idea you've had all the time.

Third, the work involved in "slotting" info into your slipbox is not a wasted overhead. Sans slipbox, you'd have done the same cognitive work in some form or other, once you really needed to do something with an idea (but it'd also have required more upfront work due to the lack of old notes to converse with).

Reading is amusement, writing is learning

  1. Even if we could remember everything we read, it's not clear it's best to just read as much as possible. We want to think about what we take in, and we want to ensure we remember the right things at the right times.
  2. We can't in fact remember everything we read.

Both issues can be taken care of through notes: not excerpts, but condensed reformulated accounts of a text.

Why ever just read? The whole point of reading is to gather ideas for writing! Taking notes is never a detour.

Getting the gist

By "reading with pen in hand" to write slipbox notes, you engage in deliberate practice at getting the gist of an idea.

Doubly so if not using a laptop, but literal pen and paper.

Niklas Luhmann's notes are very condensed. With practice comes the ability to express something in the best possible way. This benefits readers of your texts, and spills over into conversation and even thinking.

Slipbox develops real expertise

Gut feeling is not a mysterious force, but an incorporated history of experience. It is the sedimentation of deeply learned practice through numerous feedback loops on success or failure.

Experts have enough experience that they can intuit answers. This is how intuition is built in the first place. Rheinberger 1997 apparently studied (or refers to studies on) natural scientists in their labs and concludes that science does not function without expertise, intuition and experience.

Chess masters seem to think less than beginners: they make it seem effortless because it is. Veteran paramedics even look like they "do it wrong".

When the time comes to push the shovel, it's best to have done all your thinking already, during past studies. College students rarely do this; I believe it's common to only start thinking about the facts you learned once you need to rely on them at the workplace. It's even expected at most workplaces that you'll need a novice period. This study method doesn't work well for everyone, leading to low grades for people who may still be very intelligent and who may be better served by an alternative way to study, such as the slipbox.

The slipbox builds gut feeling, and therefore real expertise.

Created (2 years ago)

Effective altruism

I'll assume you're already familiar with Effective Altruism.

Different viewpoint

There is a totally different approach to personal life policy, potentially also valid, held by Guerrilla Foundation and possibly Extinction Rebellion and others.

See guerrillafoundation.org/some-thoughts-on-effective-altruism/

and guerrillafoundation.org/additional-thoughts-on-effective-altruism/

Some things held in common with EA:

  • Yes, traditional philanthropy may do more harm than good (for reflections on why, Guerrilla mentions book Winner Takes All by Anand Giridharadas)
  • Yes, good to evaluate actions by their consequence instead of their effect on the agent (no "warm-glow giving")
  • Yes, good to seek out neglected causes
  • Yes, donors (of either time or money) should try to seek out where they can make the most difference.
    • Caveat: they argue that the places where you'll make the most difference will rarely be certain or measurable, and it's still worth using gut feelings combined with an EA-like mindset.

Some pain points they see in EA:

  • Does not invite the philanthropist to reflect on the systemic causes of their wealth
    • In addition, there's a worry that once someone has the EA framework, they may never reflect on the system because they've gotten a way to feel ethically superior and hold a delusion of impartiality. When you think you know something, it blocks you from learning.
    • But so what? They still have wealth to give… is the alternative not to give? Is it fine if they give nothing, or waste it on a corrupt organization? Is it so important that the philanthropist knows where they got the wealth, so long as they give away a large chunk of their wealth anyway? Perhaps this sort of "introspecting on your privileges" is more important for people who weren't naturally inclined to give anything.
    • What does it mean to "acknowledge privilege"? Giving/sharing the decision of donation with the historically underprivileged?
  • At least with the values held by most philanthropists that currently buy into EA (highly educated white technologists from Silicon Valley), it's not clear that this effective giving will lead to any improvement in the social system as a whole. These people may be fixing symptoms caused by a broken system instead of going for the root causes, and in the long term this could mean having done zero net good, if there's any risk that the philanthropy means the system is allowed to persist.
    • This anti-good could take many forms:
      • If EA became mainstream, perhaps it winds up enforcing a socially "correct" way to contribute, thus invalidating forms of activism that are harder to justify with numbers.
        • But it seems to me that with EA going mainstream, there will be much more activism overall than there is now (see subpoint 1). So even if most of the activism became EA-guided, the fraction that's not EA-guided would likely still involve even more people than it does currently, so there is nothing lost.
          1. I don't know why I think this, but I have the feeling that EA empowers people and turns activism into a "real option" in people's minds who would never otherwise have gotten into it. Measurable and tangible outcomes energize people to cause those exact outcomes, and starting to consider the idea of donating 80% of your income goes hand-in-hand with adopting other life policies related to doing good. That's how my own process went, anyway, so maybe I'm overestimating the amount of people who would be likewise affected by discovering EA.
        • While it can make sense to have an aversion to "cold numbers" (because number-guided policies and recommendations can be subverted as tools of the powerful), all forms of activism can ultimately be described with numbers, so the problem cannot, strictly speaking, be that the numbers won't work – it just takes probabilistic numbers instead of concrete numbers for the hard-to-measure things. In later years, effective altruists have been increasingly talking about "long shots": fighting for things with huge margins of uncertainty or things that can't be measured. (Proxy measures can often be Fermi-calculated to give you a rough idea of an action's relative impact, and if not, you can "use your gut" to elicit prior probabilities.) Effecting radical systemic change looks to me absolutely as something that can fit within the framework, they're not opposed. I don't know what wouldn't fit within it.
      • It's a "valve for releasing the pressure from systemic injustice". Slowly drain wealth from Haiti over decades, but when disaster hits Haiti, donate $10 and suddenly you're a "white savior" even if in some sense you're just giving Haiti's resources back to them. But I feel this hasn't anything to do with EA specifically, just with the idea of philanthropy in general. EA aims only to improve how the money is given, and maybe empowers more people to start giving in the first place. Of course people can exploit the fact of giving as a moral license to be bad people, but they could do that before EA too.
  • In the linked article, I don't understand this paragraph: "[…] preventing empathy and solidarity for those who aren't as well off as you"
  • EA donors favor things that have already been proven to work, so may fail to experimentally fund "startup charities" that haven't yet shown their worth. Guerrilla Foundation cites how they funded Extinction Rebellion when it was new, without any sort of guarantees. Quote: "More philanthropic funding, about half of it we would argue, should go to initiatives that are still small, unproven and/or academically ‘unprovable’, that tackle the system rather than the symptoms, and adopt a grassroots, participatory bottom-up approach to finding alternative solutions, which might bear more plentiful fruit in the long run."
  • The scale only goes up to "global", but they'd prefer to go up one more level, to the "system"

… the founder of the Chorus Foundation, which started out as a traditional single-issue climate funder. A couple of years into his spending down plan for the foundation (!) he shares one of their main lessons learned arguing that their work is not about “identifying the best policy or the most promising technology or the scariest science” (which is what EA would focus on) but that it’s about “generating the political will to enact the best policy, adopt the most promising technology and heed the scariest science“. This means a more radical, root-cause oriented approach to philanthropy oriented towards a just transition. It involves building political and cultural power to change the goals of the system (e.g. from maximum wealth generation for a few, to wellbeing for all), opposing and breaking power where it is unchallenged and concentrated, building grassroots power and providing the funding for the creation of bold alternatives to the current system.


I agree with Guerrilla that "the end goal of philanthropy must be its own abolition".

Within any specific cause, there's only so much you can give until the cause is "done" and the problem is solved. The same seems to apply to the concept of giving overall, if it's effective and the problems targeted are true social problems. EA is not something you can do forever. Giving is always dependent on how much others have given so far – and on how many people in the past caused the social problem in the first place through some form of exploitation, and exploiting is fundamentally just "negative giving", right? Giving helps correct the balance.

To put another spin on what I said above, take so-called "offset donation" for the climate, where you give money to permanently prevent releasing some X kilograms of CO2-equivalents into the atmosphere, to make up for an action you did, such as flying, that released X into the atmosphere. When we hear about this strategy, we may have any of several immediate reactions: that it looks like moral licensing, someone trying to worm their way out of responsibility with "mere money", that it's cheating, or that it's using your privilege to avoid living by example.

But the thing is: only airplane companies, burger chains and similar consumer services offer "offset" donation, where you only offset X or maybe double X. Their offset provider may also be ineffective enough that you don't even manage to offset X.

By direct donation to an effective organization, on a Western income, you may prevent release of not just X but a thousand times X. It's no longer talk of mere "offsetting", it's a real attack on CO2 levels. And this is possible precisely because so few people do it. Of course if everyone did it, it would no longer be as effective an approach, and they'd then have to look at their lifestyle, but as it stands, looking at lifestyle may even be harmful due to wasting time and money on something that makes a small difference.
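A minimal Fermi sketch of this comparison. Every number here is invented for illustration (the note itself gives no figures, only the claim that the multiplier is large):

```python
# Hypothetical Fermi sketch (all numbers assumed, not sourced):
# a round-trip flight emits roughly 1 tonne CO2e; suppose an airline
# offset averts a tonne for $10, while an effective climate charity
# averts a tonne for $1. What does a $1000 donation buy in each case?
flight_emissions_t = 1.0       # tonnes CO2e from the flight (assumed)
donation_usd = 1000.0          # a plausible donation on a Western income

cost_per_tonne_offset = 10.0    # assumed airline-offset price per tonne
cost_per_tonne_effective = 1.0  # assumed effective-charity cost per tonne

tonnes_via_offset = donation_usd / cost_per_tonne_offset
tonnes_via_effective = donation_usd / cost_per_tonne_effective
print(tonnes_via_offset / flight_emissions_t)     # → 100.0  (100x the flight)
print(tonnes_via_effective / flight_emissions_t)  # → 1000.0 (1000x the flight)
```

The point is not the exact prices, which vary wildly in reality, but that the ratio between an at-face-value "offset" and a deliberately chosen effective donation can span orders of magnitude.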

As long as EA remains unknown and uncommon, it tends to be a neglected way to spend resources for good. Paradoxically, we want more people to do EA so that EA becomes less necessary. To engage in EA is to live by example, since it's what we'd want the wealthy to do if we were not ourselves wealthy, and it's what we will do more of if and when we become more wealthy.

Not everyone in EA thinks this way. At least near the beginning of EA, people did not discuss what to do in the medium- to long-term, after all the low-hanging fruit are eliminated. Guerrilla argues it must consider more complex, politically-involved actions, even if they're harder to measure.


In Doing Good Better, MacAskill mentions a book, Dead Aid: Why Aid Is Not Working and How There Is a Better Way for Africa by Dambisa Moyo. Griselda suspects that Moyo probably had more to say that MacAskill neglected to respond to.


Quote: "Shouldn’t wealth owners first, at the very least, be sure to have engaged in reparations for past wrongs (an issue that increasingly receives attention right now), act consciously to produce wealth for the many and not the few, through a more just and regenerative economy, and then and only then, think about how to maximize the impact of their philanthropic giving?"

How are these three not one and the same?

I guess it's easy to take a too-narrow perspective when you engage in EA and not consider what systemic effects you may be able to bring about.

But it still seems like it's just guidance for which EA causes to pick, not stepping outside the EA toolkit.

Alternative visions

Jennifer Rubenstein's critique:

The effective altruism movement fails to meet normative criteria of democracy and equality. A supporter of this movement might respond that democracy and equality are less important than improving individual welfare. Yet in the medium-to-long term, the movement will likely fall short in this regard as well. As the low-hanging fruit of basic health programs and cash transfers are exhausted, saving lives and alleviating suffering will require more complicated political action, such as reforming global institutions. Undertaking this action will require outsiders to work with, and follow the lead of, activists in poor countries. Yet the effective altruism movement as Singer describes it does not cultivate the expectations, attitudes, or relationships necessary for this kind of work.

While Singer does not address these difficulties in his book, other adherents of the effective altruism movement are trying to do so. I hope their debates, with each other and external critics, result in a more pluralistic approach that includes poor people as partners or follows their lead, even if this means less certainty about doing the most good. This pluralist approach would have to jettison “social movement of altruists” as an organizing frame, but it would retain the effective altruism movement’s crucial twin insights: some donations do vastly more good than others, and donors should focus on those that do more good.

What might it look like for a philosophy and social movement to grow up around this insight, without EA’s narrow focus on either maximization (doing the most good) or welfare (doing the most good)? Can we imagine an organization, very roughly analogous to GiveWell, that would offer guidance to individual donors who reject warm-glow giving and passionate philanthropy, and are looking instead for guidance on donating that is rigorous, empirically grounded, comparative, and attentive to consequences but that also acknowledges and incorporates a range of hard-to-measure substantive and procedural values, including justice, equality, fairness, and empowerment of those directly affected?

Singer and MacAskill have done an outstanding job of presenting EA to a wide audience. For those of us who see the promise in EA, but also recognize its limitations, their books are best seen as contributions to a much larger conversation.


Guerrilla:

So we agree with one of the EA forum commentators that our challenge, which we take fully onboard, will continue to be to justify our belief that grassroots work to change the political and economic systems is important/tractable/neglected to the extent that it should be at least as prioritized as many interventions that get funded through EA-affiliated organizations.

Basically, yes, EA people tend to be too narrowly focused, but their framework of important/tractable/neglected remains useful, and if we aim to support a hard-to-measure solution, we should still try our best to justify why we might expect this solution to be comparable to or better than the best 'typical EA' solutions.


All movements have their stupid members, and you see it in the EA movement too. Guerrilla points out:

We fully acknowledge that many EAs accept ‘cluelessness’ and model it in their decision-making accordingly, but many other EAs we’ve come across don’t, and reject social justice philanthropy on the basis that the proof that it works cannot be described in ‘scientific’ terms amenable to them.


Guerrilla blesses "moral circle expansion" among other new sorts of causes for EA.

What we have a problem with is the belief that the technical solutions identified through accepted, Westernized scientific methodologies are the only or main solutions […] Instead, we call for the development and inclusion of more forms of data that can best capture the potential for systems change.

[…] if we want to end the profit-obsessed capitalism and the culture of expansion-at-all-costs that are among the root causes of factory farming, we believe philanthropic euros should be dedicated to tackling the adaptive challenge of how can we as a society [change our culture].

[…] This could include what one of the commentators in the EA forum terms “moral circle expansion” and “corporate campaigns”, not just by changing political processes and coordination mechanisms.


Guerrilla: You must introspect first.

This is exactly what we recommend wealth owners to do: while the technical cure can be outsourced to “experts” to a certain extent, there is a whole deal of personal work that cannot be outsourced: no one can do your physical exercising for you ultimately. As one of the commentators on the EA forum mentions: “Our mental states have significant effects on our actions, so we’d better help others by cleaning up our harmful mental tendencies”. Therefore, along the same lines, we would recommend that financial wealth holders join groups like Resource Generation (US) and Resource Justice (UK), and first and foremost digest and internalize and own up to their privilege.

Concepts I don't understand

  • Just transition
  • Warm data
  • Reparations: Why would you expect to do more good with reparations to the underprivileged in your own country instead of to the global poor? Why would you expect to do more good via reparations in the specific sector that generated your wealth, instead of any other sector?
  • (See EA discussion with Griselda)

Animal rights

See Diet ethics.

Resources

What links here

  • Diet ethics
Created (2 years ago)

Machine learning vs expert system

> This reminds me of an experience I had watching a company trying to replace a system with ML.

> […] The entire system ended up a verbatim port of the VB6 crap which was a verbatim port of the original AS400 crap that actually worked.

> The marketing to this day says it’s ML based and everyone buys into the hype. It’s not. It was a complete failure. But the original system has 30 years of human experience codified in it. [it's an expert system]

Rule engine is the term I believe.

Often the biggest benefit is that the ML version is good at catching the cases where the experts hadn't had their cup of coffee.

Most experts are like family doctors, they get the correct diagnosis 70% of the time. And even if you juice them up real good, they will ALWAYS lose 5% to human error.

The ML also hits the 70% mark, but it's a different 70%, so it'll fix 70% of the errors. Then you're batting at 0.91 instead of 0.70.
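The arithmetic above can be checked in a few lines. The 70% rates and the assumption that the ML's errors are independent of the expert's come from the note itself, not from any measurement:

```python
# Back-of-envelope: an expert is correct 70% of the time; an ML model,
# making errors independent of the expert's (the note's premise),
# catches 70% of the cases the expert gets wrong.
expert_acc = 0.70
ml_fix_rate = 0.70  # fraction of the expert's errors the ML corrects

combined_acc = expert_acc + (1 - expert_acc) * ml_fix_rate
print(round(combined_acc, 2))  # → 0.91
```

The independence assumption is doing all the work here: if the ML tends to fail on the same hard cases as the expert, the combined accuracy lands well below 0.91.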

Created (2 years ago)

Toy Examples

Adapted from McElreath Chapter 10

library(rethinking)  # provides data(chimpanzees) and data(UCBadmit)
library(dplyr)       # provides glimpse() and mutate()

# Prosocial chimpanzees: experiments to see whether chimps would
# pull a lever that delivers food to a fellow chimp, at no cost to themselves.
# L pulled_left: 1 if the left lever was pulled
# P prosoc_left: 1 if the left lever was the prosocial option
#   chose_prosoc: 1 if pulled_left == prosoc_left
# C condition: 1 if another chimp was at the other end of the table
#   actor: which one out of 7 chimps
data(chimpanzees)
glimpse(chimpanzees)
help(chimpanzees)

# UC Berkeley admissions: 12 rows, one per department/gender combination.
data(UCBadmit)
ucb <- UCBadmit %>%
    mutate(male = ifelse(applicant.gender == "male", 1, 0)) %>%
    mutate(case = factor(1:12))
Created (3 years ago)