
Computing

Hash table

Normal lists and tables take longer to scan through the bigger they are, right? A hash table is a trick programmers figured out so that the cost of looking up a value is independent of the number of items in the table. For small data it's not so appropriate, but for enormous data it's the only appropriate data type, at least if you're going to be accessing values often.

How does this work?
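Roughly: a hash function maps each key to a bucket index, so a lookup jumps straight to the right bucket instead of scanning everything. Here's a minimal toy sketch assuming separate chaining for collisions (Python's built-in dict is a real, far more optimized hash table):

```python
class HashTable:
    def __init__(self, num_buckets=8):
        # A fixed array of buckets; each bucket holds (key, value) pairs.
        self.buckets = [[] for _ in range(num_buckets)]

    def _index(self, key):
        # hash() maps the key to an integer; modulo picks a bucket.
        return hash(key) % len(self.buckets)

    def set(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite existing key
                return
        bucket.append((key, value))

    def get(self, key):
        # Cost depends on the size of one bucket, not the whole table:
        # with enough buckets each one stays tiny, so lookup is O(1)
        # on average no matter how many items the table holds.
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)

table = HashTable()
table.set("apple", 3)
table.set("pear", 7)
print(table.get("apple"))  # → 3
```

Real implementations also grow the bucket array as the table fills up, so the buckets stay short.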

The Blub paradox

Graham considers the hierarchy of programming languages with the example of "Blub", a hypothetically average language "right in the middle of the abstractness continuum. It is not the most powerful language, but it is more powerful than Cobol or machine language."[21] It was used by Graham to illustrate a comparison, beyond Turing completeness, of programming language power, and more specifically to illustrate the difficulty of comparing a programming language one knows to one that one does not.[22]

Graham considers a hypothetical Blub programmer. When the programmer looks down the "power continuum", he considers the lower languages to be less powerful because they miss some feature that a Blub programmer is used to. But when he looks up, he fails to realise that he is looking up: he merely sees "weird languages" with unnecessary features and assumes they are equivalent in power, but with "other hairy stuff thrown in as well". When Graham considers the point of view of a programmer using a language higher than Blub, he describes that programmer as looking down on Blub and noting its "missing" features from the point of view of the higher language.[22]

Graham describes this as the "Blub paradox" and concludes that "By induction, the only programmers in a position to see all the differences in power between the various languages are those who understand the most powerful one."[22]

Test-driven development (TDD)

Each new feature begins with writing a test, which also means you specify exactly what the feature should do. Then write code to pass the test, nothing more.
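A minimal sketch of the rhythm, using a made-up `slugify()` feature (the function and its spec are hypothetical, just for illustration):

```python
# Step 1: write the test first. It pins down exactly what the
# feature should do, and it fails until the feature exists.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim me  ") == "trim-me"

# Step 2: write just enough code to make the test pass, nothing more.
def slugify(text):
    return "-".join(text.lower().split())

test_slugify()  # passes silently once the feature works
```

New requirements (say, stripping punctuation) start the cycle again: first a new failing assertion, then the smallest change that makes it pass.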


Systematic review

Science quality assurance

This is a methodology for meta-analysis that only got popular recently, like around 2018.

In hindsight, it's weird that we didn't have it from the beginning. How did anyone do science before? Seems like dumb luck that science even got started. I guess it actually was dumb luck; science didn't start until ~5,000 years after writing, so yeah, in alternate possible worlds we might even now be in the Post-Late-Middle Ages!

It's so important to preserve a good understanding of what makes science work, to carry us through an eventual global collapse… but I digress.

Systematic review is often described with five phases: "Search, Dedupe, Screen, Extract, Report."

Depending on your needs, this process may become cyclic, i.e. you loop back to Search and start again. That doesn't apply to writing a grad paper (since you write it once and forget it), but does apply if you are responsible for developing a line of MRI machines.
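The five phases can be sketched as a toy pipeline. Everything here is made up for illustration (the records, the keyword filter, the function names); real reviews query databases like PubMed or Scopus and screen by hand:

```python
def search(query):
    # Stand-in for querying bibliographic databases.
    return [
        {"title": "Aspirin and Headache", "year": 2019},
        {"title": "aspirin and headache", "year": 2019},  # duplicate
        {"title": "Knitting Patterns", "year": 2020},
    ]

def dedupe(records):
    # Normalize titles so near-identical records collapse to one.
    seen, unique = set(), []
    for r in records:
        key = r["title"].lower()
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique

def screen(records, keyword):
    # Keep only records relevant to the research question.
    return [r for r in records if keyword in r["title"].lower()]

def extract(records):
    # Pull out the data points needed for the synthesis.
    return [(r["title"], r["year"]) for r in records]

def report(findings):
    return f"{len(findings)} study/studies included"

records = dedupe(search("aspirin headache"))
included = screen(records, "aspirin")
print(report(extract(included)))  # → 1 study/studies included
```

In the cyclic case you'd wrap the whole thing in a loop, feeding what you learned in Report back into a refined Search query.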

There's also rapid review, which drops some parts of the process in favour of efficiency. There's even interactive rapid review (ebib:ricoGuidelinesConductingInteractive2020), which arose in evidence-based software engineering (EBSE) and is comparable to agile development.


Positivism

The term "positivism"

Etymologically, the name derives from the verb to posit.

The English noun positivism was re-imported in the 19th century from the French word positivisme, derived from positif in its philosophical sense of 'imposed on the mind by experience'. The corresponding adjective (Latin positīvus) has been used in a similar sense to discuss law (positive law (human-made laws that oblige or specify an action) compared to natural law (inherent rights)) since the time of Chaucer.[5]

Positivism is part of a more general ancient quarrel between philosophy and poetry, notably laid out by Plato and later reformulated as a quarrel between the sciences and the humanities.

Wilhelm Dilthey (1833–1911) popularized the distinction between Geisteswissenschaft (humanities) and Naturwissenschaften (natural sciences).[8]

Positivism asserts that all authentic knowledge allows verification, and that the only valid knowledge is scientific.

Wilhelm Dilthey (1833–1911), in contrast, fought strenuously against the assumption that only explanations derived from science are valid.[8] He reprised the argument, already found in Vico, that scientific explanations do not reach the inner nature of phenomena[8] and it is humanistic knowledge that gives us insight into thoughts, feelings and desires.[8] Dilthey was in part influenced by the historicism of Leopold von Ranke (1795–1886).[8]

In the original Comtean usage, the term "positivism" roughly meant the use of scientific methods to uncover the laws according to which both physical and human events occur, while "sociology" was the overarching science that would synthesize all such knowledge for the betterment of society. "Positivism is a way of understanding based on science"; people don't rely on the faith in God but instead on the science behind humanity.

"Antipositivism" formally dates back to the start of the twentieth century, and is based on the belief that natural and human sciences are ontologically and epistemologically distinct. Neither of these terms is used any longer in this sense.[22] There are no fewer than twelve distinct epistemologies that are referred to as positivism.[45] Many of these approaches do not self-identify as "positivist", some because they themselves arose in opposition to older forms of positivism, and some because the label has over time become a term of abuse[22] by being mistakenly linked with a theoretical empiricism.

The extent of antipositivist criticism has also become broad, with many philosophies broadly rejecting the scientifically based social epistemology and other ones only seeking to amend it to reflect 20th-century developments in the philosophy of science. However, positivism (understood as the use of scientific methods for studying society) remains the dominant approach to both the research and the theory construction in contemporary sociology, especially in the United States.[22]

This popularity may be because research utilizing positivist quantitative methodologies holds a greater prestige in the social sciences than qualitative work; quantitative work is easier to justify, as data can be manipulated to answer any question.[48] Such research is generally perceived as being more scientific and more trustworthy, and thus has a greater impact on policy and public opinion (though such judgments are frequently contested by scholars doing non-positivist work).[48][need quotation to verify]

Logical positivism

Logical positivism, 1920–1960, was a plague on philosophy.

In the early 20th century, logical positivism—a descendant of Auguste Comte's basic thesis but an independent movement—sprang up in Vienna and grew to become one of the dominant schools in Anglo-American philosophy and the analytic tradition.

Logical positivism (later and more accurately called logical empiricism) is a school of philosophy that combines empiricism, the idea that observational evidence is indispensable for knowledge of the world, with a version of rationalism, the idea that our knowledge includes a component that is not derived from observation.

Postpositivism

Logical positivists (or 'neopositivists') rejected metaphysical speculation and attempted to reduce statements and propositions to pure logic. Strong critiques of this approach by philosophers such as Karl Popper, Willard Van Orman Quine and Thomas Kuhn have been highly influential, and led to the development of postpositivism.

Historians identify two types of positivism: classical positivism, an empirical tradition first described by Henri de Saint-Simon and Auguste Comte,[1] and logical positivism, which is most strongly associated with the Vienna Circle. Postpositivism is the name D.C. Phillips[3] gave to a group of critiques and amendments which apply to both forms of positivism.[3]

While positivists emphasize independence between the researcher and the researched person (or object), postpositivists argue that theories, hypotheses, background knowledge and values of the researcher can influence what is observed.[2] Postpositivists pursue objectivity by recognizing the possible effects of biases.[2][3][4] While positivists emphasize quantitative methods, postpositivists consider both quantitative and qualitative methods to be valid approaches.

Postpositivists believe that a reality exists, but, unlike positivists, they believe reality can be known only imperfectly[3] and probabilistically.[2] Postpositivists also draw from social constructionism in forming their understanding and definition of reality.[3]

While positivists believe that research is or can be value-free or value-neutral, postpositivists take the position that bias is undesired but inevitable, and therefore the investigator must work to detect and try to correct it. Postpositivists work to understand how their axiology (i.e. values and beliefs) may have influenced their research, including through their choice of measures, populations, questions, and definitions, as well as through their interpretation and analysis of their work.[3]

Antipositivism

At the turn of the 20th century the first wave of German sociologists, including Max Weber and Georg Simmel, rejected positivism, thus founding the antipositivist tradition in sociology. Later antipositivists and critical theorists have associated positivism with scientism, science as ideology.

Later in his career, Werner Heisenberg distanced himself from positivism:

The positivists have a simple solution: the world must be divided into that which we can say clearly and the rest, which we had better pass over in silence. But can any one conceive of a more pointless philosophy, seeing that what we can say clearly amounts to next to nothing? If we omitted all that is unclear we would probably be left with completely uninteresting and trivial tautologies.[15]

In social science, antipositivism (also Interpretivism, negativism or antinaturalism) is a theoretical stance that proposes that the social realm cannot be studied with the scientific method of investigation utilized within the natural sciences, and that investigation of the social realm requires a different epistemology.

Interpretivism developed among researchers dissatisfied with post-positivism, the theories of which they considered too general and ill-suited to reflect the nuance and variability found in human interaction.

Because the values and beliefs of researchers cannot fully be removed from their inquiry, interpretivists believe research on human beings by human beings cannot yield objective results. Thus, rather than seeking an objective perspective, interpretivists look for meaning in the subjective experiences of individuals engaging in social interaction. Many interpretivist researchers immerse themselves in the social context they are studying, seeking to understand and formulate theories about a community or group of individuals by observing them from the inside. Interpretivism is an inductive practice influenced by philosophical frameworks such as hermeneutics, phenomenology, and symbolic interactionism.

The antipositivist tradition continued in the establishment of critical theory, particularly the work associated with the Frankfurt School of social research. Antipositivism would be further facilitated by rejections of 'scientism'; or science as ideology.

Historical positivism


Underdetermination

Meta-science, Willard Van Orman Quine

In the philosophy of science, underdetermination or the underdetermination of theory by data (sometimes abbreviated UTD) is the idea that evidence available to us at a given time may be insufficient to determine what beliefs we should hold in response to it.[1] Underdetermination says that all evidence necessarily underdetermines any scientific theory.[2]

Underdetermined ideas are not implied to be incorrect (taking into account present evidence); rather, we cannot know if they are correct.

To show that a conclusion is underdetermined, one must show that there is a rival conclusion that is equally well supported by the standards of evidence. For example, the conclusion "objects near earth fall toward it when dropped" might be opposed by "objects near earth fall toward it when dropped but only when one checks to see that they do." Since one may append this to any conclusion, all conclusions are at least trivially underdetermined. If one considers such statements to be illegitimate, e.g. by applying Occam's Razor, then such "tricks" are not considered demonstrations of underdetermination.

Arguments involving underdetermination attempt to show that there is no reason to believe some conclusion because it is underdetermined by the evidence. Then, if the evidence available at a particular time can be equally well explained by at least one other hypothesis, there is no reason to believe it rather than the equally supported rival, which can be considered observationally equivalent (although many other hypotheses may still be eliminated).

Underdetermination is often presented as a problem for scientific realism, which holds that we have reason to believe in the unobservable entities (such as electrons) posited by scientific theories. […]

A more general response from the scientific realist is to argue that underdetermination is no special problem for science, because, as indicated earlier in this article, all knowledge that is directly or indirectly supported by evidence suffers from it—for example, conjectures concerning unobserved observables. It is therefore too powerful an argument to have any significance in the philosophy of science, since it does not cast doubt uniquely on conjectured unobservables.
