What do scientists mean by observed facts?

One possibility is that dark energy is a new kind of dynamical energy fluid or field; some theorists have named this "quintessence," after the fifth element of the Greek philosophers. But if quintessence is the answer, we still don't know what it is like, what it interacts with, or why it exists. A last possibility is that Einstein's theory of gravity is not correct. That would not only affect the expansion of the universe, but it would also affect the way that normal matter in galaxies and clusters of galaxies behaves.

This would provide a way to decide whether the solution to the dark energy problem is a new theory of gravity or not: we could observe how galaxies come together in clusters. But if it does turn out that a new theory of gravity is needed, what kind of theory would it be?

How could it correctly describe the motion of the bodies in the Solar System, as Einstein's theory is known to do, and still give us the different prediction for the universe that we need? There are candidate theories, but none are compelling. What is needed to decide between the dark energy possibilities - a property of space, a new dynamic fluid, or a new theory of gravity - is more data, better data.

What is dark matter?

We are much more certain about what dark matter is not than about what it is.

First, it is dark, meaning that it is not in the form of stars and planets that we see. Second, it is not in the form of dark clouds of normal matter, matter made up of particles called baryons.

We know this because we would be able to detect baryonic clouds by their absorption of radiation passing through them. Third, dark matter is not antimatter, because we do not see the unique gamma rays that are produced when antimatter annihilates with matter.

Finally, we can rule out large, galaxy-sized black holes on the basis of how many gravitational lenses we see. However, at this point there are still a few dark matter possibilities that are viable. Baryonic matter could still make up the dark matter if it were all tied up in brown dwarfs or in small, dense chunks of heavy elements. But the most common view is that dark matter is not baryonic at all, and that it is instead made up of other, more exotic particles such as axions or WIMPs (Weakly Interacting Massive Particles).

What, then, do scientists mean by observed facts? Consider functional brain imaging: fMRI differs from, e.g., looking and seeing, in ways that make it uninformative to call both of them observation. And similarly for many other methods scientists use to produce non-perceptual evidence.

The role of the senses in fMRI data production is limited to such things as monitoring the equipment and keeping an eye on the subject. Their epistemic role is limited to discriminating the colors in the finished image, reading tables of numbers the computer used to assign them, and so on.

While it is true that researchers typically use their sense of sight to take in visualizations of processed fMRI data—or numbers on a page or screen for that matter—this is not the primary locus of epistemic action.

Researchers learn about brain processes through fMRI data, to the extent that they do, primarily in virtue of the suitability of the causal connection between the target processes and the data records, and of the transformations those data undergo when they are processed into the maps or other results that scientists want to use. The interesting questions are not about observability as such. The epistemic significance of the fMRI data depends on their delivering the right sort of access to the target, but observation is neither necessary nor sufficient for that access.

However, it is hard to reconcile the idea that highly processed data like fMRI images record observations with the traditional empiricist notion that calculations involving theoretical assumptions and background beliefs must not be allowed to intrude into the process of data production, on pain of a loss of objectivity.

Observation garnered its special epistemic status in the first place because it seemed more direct, more immediate, and therefore less distorted and muddled than, say, detection or inference. Yet the production of fMRI images requires extensive statistical manipulation based on theories about the radio signals and the factors affecting their detection, along with beliefs about relations between blood oxygen levels and neuronal activity, sources of systematic error, and more.

Deposing observation from its traditional perch in empiricist epistemologies of science need not estrange philosophers from scientific practice. Working scientists themselves tend to talk less of observations and observation reports, and more of data.

Philosophers who adopt this usage are free to think about standard examples of observation as members of a large, diverse, and growing family of data production methods. Instead of trying to decide which methods to classify as observational and which things qualify as observables, philosophers can then concentrate on the epistemic influence of the factors that differentiate members of the family.

In particular, they can focus their attention on what questions data produced by a given method can be used to answer, what must be done to use those data fruitfully, and the credibility of the answers they afford (Bogen). Satisfactorily answering such questions warrants further philosophical work. As Bogen and Woodward have argued, there is often a long road from a particular dataset, replete with idiosyncrasies born of unspecified causal nuances, to any claim about the phenomenon ultimately of interest to the researchers.

Empirical data are typically produced in ways that make it impossible to predict them from the generalizations they are used to test, or to derive instances of those generalizations from data and non ad hoc auxiliary hypotheses. Indeed, it is unusual for many members of a set of reasonably precise quantitative data to agree with one another, let alone with a quantitative prediction.

That is because precise, publicly accessible data typically cannot be produced except through processes whose results reflect the influence of causal factors that are too numerous, too different in kind, and too irregular in behavior for any single theory to account for them. When Bernard Katz recorded electrical activity in nerve fiber preparations, the numerical values of his data were influenced by factors peculiar to the operation of his galvanometers and other pieces of equipment, variations among the positions of the stimulating and recording electrodes that had to be inserted into the nerve, the physiological effects of their insertion, and changes in the condition of the nerve as it deteriorated during the course of the experiment.

The data were also influenced by vibrations set off by A. V. Hill walking up and down the stairs outside of the laboratory. To make matters worse, many of these factors influenced the data as parts of irregularly occurring, transient, and shifting assemblies of causal influences. The effects of systematic and random sources of error are typically such that considerable analysis and interpretation are required to take investigators from data sets to conclusions that can be used to evaluate theoretical claims. Interestingly, this applies as much to clear cases of perceptual data as to machine-produced records.

When 19th- and early 20th-century astronomers looked through telescopes and pushed buttons to record the time at which they saw a star pass a crosshair, the values of their data points depended not only upon light from that star, but also upon features of perceptual processes, reaction times, and other psychological factors that varied from observer to observer.

No astronomical theory has the resources to take such things into account. Instead of testing theoretical claims by direct comparison to the data initially collected, investigators use data to infer facts about phenomena, i.e., about events, regularities, and processes whose instances are uniform and uncomplicated enough to be susceptible to systematic prediction and explanation.

The fact that lead melts at temperatures at or close to 327.5 °C is an example of a phenomenon, as distinguished from the scattered temperature readings from which that value is estimated. Theories that cannot be expected to predict or explain such things as individual temperature readings can nevertheless be evaluated on the basis of how useful they are in predicting or explaining phenomena. The same holds for the action potential as opposed to the electrical data from which its features are calculated, and for the motions of astronomical bodies in contrast to the data of observational astronomy.

It is reasonable to ask a genetic theory how probable it is, given similar upbringings in similar environments, that the offspring of a parent or parents diagnosed with alcohol use disorder will develop one or more symptoms that the DSM classifies as indicative of alcohol use disorder.

Leonelli argues that when data are suitably packaged, they can travel to new epistemic contexts and retain epistemic utility: it is not just claims about the phenomena that can travel; data travel too. The fact that theories typically predict and explain features of phenomena rather than idiosyncratic data should not be interpreted as a failing.

For many purposes, this is the more useful and illuminating capacity. Suppose you could choose between a theory that predicted or explained the way in which neurotransmitter release relates to neuronal spiking in general, and a theory that predicted or explained only the numbers recorded by the relevant experimental equipment in one or a few individual cases. For most purposes, the former theory would be preferable to the latter, at the very least because it applies to so many more cases. Similarly for a theory that predicts or explains the probability of alcohol use disorder conditional on some genetic factor, or one that predicts or explains the probability of faulty diagnoses of alcohol use disorder conditional on facts about the training that psychiatrists receive.

For most purposes, these would be preferable to a theory that predicted specific descriptions in a single particular case history. However, there are circumstances in which scientists do want to explain data.

In empirical research, it is often crucial for getting a useful signal that scientists deal with sources of background noise and confounding signals. This is part of the long road from newly collected data to useful empirical results. An important step on the way to eliminating unwanted noise or confounds is to determine their sources.

Different sources of noise can have different characteristics that can be derived from and explained by theory.

For instance, light collected by a detector does not arrive all at once or in a perfectly continuous fashion. Photons rain onto a detector shot by shot on account of being quanta. Imagine building up an image one photon at a time: at first the structure of the image is barely recognizable, but after the arrival of many photons, the image eventually fills in. The contribution of this type of noise, called shot noise, goes as the square root of the signal.

By contrast, thermal noise is due to non-zero temperature: thermal fluctuations cause a small current to flow in any circuit. If you cool your instrument (as very many precision experiments in physics do), then you can decrease the thermal noise.

Cooling the detector, however, is not going to change the quantum nature of photons; only collecting more photons will improve the signal-to-noise ratio with respect to shot noise.
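To make the contrast concrete, here is a minimal simulation sketch in Python with NumPy; the photon counts and noise levels are illustrative assumptions rather than values from any particular experiment. It models shot noise as Poisson fluctuations in photon counts and thermal noise as an additive Gaussian term that shrinks when the detector is cooled, and it shows the shot-noise-limited signal-to-noise ratio growing as the square root of the number of photons collected.

    import numpy as np

    rng = np.random.default_rng(0)

    def observe(mean_photons, thermal_sigma, n_trials=100_000):
        """Simulate repeated exposures of a single detector pixel."""
        shot = rng.poisson(mean_photons, n_trials)          # photon-counting (shot) noise
        thermal = rng.normal(0.0, thermal_sigma, n_trials)  # thermal (readout) noise
        return shot + thermal

    # With the thermal term switched off, the SNR follows sqrt(N).
    for n_photons in (100, 10_000):
        counts = observe(n_photons, thermal_sigma=0.0)
        print(f"N={n_photons:>6}  SNR ~ {counts.mean() / counts.std():.1f}"
              f"  (sqrt(N) = {np.sqrt(n_photons):.1f})")

    # Cooling shrinks the thermal term but leaves the shot noise untouched.
    warm = observe(100, thermal_sigma=5.0)
    cold = observe(100, thermal_sigma=0.5)
    print(f"std warm: {warm.std():.2f}   std cold: {cold.std():.2f}   shot-only: {np.sqrt(100):.2f}")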

There are also circumstances in which scientists want to provide a substantive, detailed explanation for a particular idiosyncratic datum, and even circumstances in which procuring such explanations is epistemically imperative. Consider Millikan's oil-drop measurements of the charge of the electron: at least one drop gave a result so anomalous that Millikan not only never published it, he never published why he failed to publish it. Precisely because they are outliers, some data require specific, detailed, idiosyncratic causal explanations.

Indeed, it is often in virtue of those very explanations that outliers can be responsibly rejected. Otherwise, scientists risk biasing their own work.

Thus, while scientists, in transforming data as collected into something useful for learning about phenomena, often account for features of the data such as different types of noise contribution, and sometimes even explain the odd outlying data point or artifact, they simply do not explain every tiny causal contribution to the exact character of a data set or datum in full detail.

This is because scientists typically cannot discover such causal minutiae, and invoking them would not be necessary for typical research questions anyway. In view of all of this, together with the fact that a great many theoretical claims can only be tested directly against facts about phenomena, it behooves epistemologists to think about how data are used to answer questions about phenomena.

Lacking space for a detailed discussion, the most that can be done here is to mention two main kinds of things investigators do in order to draw conclusions from data. The first is causal analysis carried out with or without the use of statistical techniques.

The second is non-causal statistical analysis.

First, investigators must distinguish features of the data that are indicative of facts about the phenomenon of interest from those which can safely be ignored, and those which must be corrected for. Sometimes background knowledge makes this easy.

Under normal circumstances, investigators know that their thermometers are sensitive to temperature and their pressure gauges to pressure. An astronomer or a chemist who knows what spectrographic equipment does, and what she has applied it to, will know what her data indicate. Sometimes it is less obvious whether a feature of the data reflects the phenomenon of interest or the workings of the instruments and the circumstances of its production. Analogous considerations apply to quantitative data.

It can be harder to tell whether an abrupt jump in the amplitude of a high-frequency EEG oscillation was due to a feature of the subject's brain activity or to an artifact of extraneous electrical activity in the laboratory or operating room where the measurements were made.
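As a toy illustration of one kind of check that can help with such questions, the following Python/NumPy sketch (the 60 Hz mains frequency, sampling rate, and amplitudes are assumptions made for the example) inspects the spectrum of a simulated recording to see whether a suspicious high-frequency component lines up with the mains frequency rather than with anything brain-like.

    import numpy as np

    rng = np.random.default_rng(1)
    fs = 500.0                                  # sampling rate in Hz (assumed)
    t = np.arange(0, 10, 1 / fs)                # 10 seconds of "recording"

    brain_like = rng.normal(0.0, 1.0, t.size)   # stand-in for broadband neural signal
    mains = 0.8 * np.sin(2 * np.pi * 60.0 * t)  # 60 Hz line-noise contamination
    signal = brain_like + mains

    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(t.size, d=1 / fs)

    # Compare the power right at 60 Hz with the median power of its neighbours.
    at_60 = spectrum[np.argmin(np.abs(freqs - 60.0))]
    nearby = spectrum[(np.abs(freqs - 60.0) > 1.0) & (np.abs(freqs - 60.0) < 10.0)]
    if at_60 > 5 * np.median(nearby):
        print("Sharp 60 Hz peak: likely electrical artifact, not brain activity.")
    else:
        print("No prominent 60 Hz peak detected.")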

The answers to questions about which features of numerical and non-numerical data are indicative of a phenomenon of interest typically depend at least in part on what is known about the causes that conspire to produce the data. Statistical arguments are often used to deal with questions about the influence of epistemically relevant causal factors. For example, when it is known that similar data can be produced by factors that have nothing to do with the phenomenon of interest, Monte Carlo simulations, regression analyses of sample data, and a variety of other statistical techniques sometimes provide investigators with their best chance of deciding how seriously to take a putatively illuminating feature of their data.
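The following sketch (Python/NumPy; the dataset, the "tallest histogram bin" statistic, and the planted bump are all hypothetical) illustrates the Monte Carlo idea: simulate many datasets in which the phenomenon of interest is absent, and ask how often chance alone produces a feature as striking as the one actually observed.

    import numpy as np

    rng = np.random.default_rng(2)

    def max_bin_count(values, bins=20):
        """Test statistic: the height of the tallest histogram bin."""
        counts, _ = np.histogram(values, bins=bins, range=(0.0, 1.0))
        return counts.max()

    # "Observed" data: mostly featureless background with a suspicious bump planted in it.
    observed = rng.uniform(0, 1, 500)
    observed[:40] = rng.normal(0.525, 0.005, 40).clip(0, 1)
    obs_stat = max_bin_count(observed)

    # Null model: the same amount of data with no bump at all.
    null_stats = np.array([max_bin_count(rng.uniform(0, 1, 500)) for _ in range(5000)])
    p_value = np.mean(null_stats >= obs_stat)
    print(f"tallest bin = {obs_stat}; probability of this arising by chance ~ {p_value:.4f}")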

But statistical techniques are also required for purposes other than causal analysis. To calculate the magnitude of a quantity like the melting point of lead from a scatter of numerical data, investigators throw out outliers, calculate the mean and the standard deviation, etc.
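As a minimal sketch of that kind of data reduction (Python/NumPy; the simulated readings and the three-sigma rejection rule are assumptions made for illustration, not a reconstruction of any historical dataset), one might start from scattered temperature readings, discard gross outliers, and report a mean with a standard error:

    import numpy as np

    rng = np.random.default_rng(3)

    # Simulated melting-point readings in deg C, scattered around a true value,
    # plus a couple of gross outliers (say, a miscalibrated thermocouple).
    readings = np.concatenate([rng.normal(327.5, 0.8, 30), [341.2, 310.4]])

    # Simple 3-sigma rejection rule (one of many possible conventions).
    mean, std = readings.mean(), readings.std(ddof=1)
    kept = readings[np.abs(readings - mean) < 3 * std]

    estimate = kept.mean()
    std_err = kept.std(ddof=1) / np.sqrt(kept.size)
    print(f"kept {kept.size}/{readings.size} readings")
    print(f"melting point ~ {estimate:.2f} +/- {std_err:.2f} deg C")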

Regression and other techniques are then applied to such results to estimate how far from the mean the magnitude of interest can be expected to fall in the population of interest (e.g., all samples of lead).

The fact that little can be learned from data without causal, statistical, and related argumentation has interesting consequences for received ideas about how the use of observational evidence distinguishes science from pseudoscience, religion, and other non-scientific cognitive endeavors.

First, scientists are not the only ones who use observational evidence to support their claims; astrologers and medical quacks use it too. To find epistemically significant differences, one must carefully consider what sorts of data they use, where those data come from, and how they are employed. The virtues of scientific, as opposed to non-scientific, theory evaluation depend not only on its reliance on empirical data, but also on how the data are produced, analyzed, and interpreted to draw the conclusions against which theories are evaluated.

Second, data are produced and used in far too many different ways to treat informatively as instances of any single method. Third, it is usually, if not always, impossible for investigators to draw conclusions to test theories against observational data without explicit or implicit reliance on theoretical resources.

Bokulich has helpfully outlined a taxonomy of the ways in which data can be model-laden to increase their epistemic utility.

She focuses on seven categories: data conversion, data correction, data interpolation, data scaling, data fusion, data assimilation, and synthetic data. Of these, conversion and correction are perhaps the most familiar. In simple cases, data conversion amounts to little more than converting units; in more complicated cases, such as processing the arrival times of acoustic signals in seismic reflection measurements to yield values for subsurface depth, it may involve substantive models (ibid.).

In this example, models of the composition and geometry of the subsurface are needed in order to account for differences in the speed of sound in different materials. Bokulich rightly points out that involving models in these ways routinely improves the epistemic uses to which data can be put.
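A highly simplified sketch of such a conversion (Python; the layer thicknesses, velocities, and travel times below are invented for illustration and are not drawn from Bokulich's example) uses a layered velocity model to turn two-way travel times into depths:

    # Convert two-way travel times (seconds) of seismic reflections into depths (metres)
    # using an assumed layered model of the subsurface.

    layers = [               # (thickness in m, P-wave speed in m/s) -- illustrative values
        (500.0, 1800.0),     # unconsolidated sediments
        (1200.0, 2500.0),    # sandstone
        (10_000.0, 3500.0),  # basement (effectively unbounded for this sketch)
    ]

    def depth_from_twt(twt_seconds):
        """Walk down through the layers, spending one-way travel time until it runs out."""
        one_way = twt_seconds / 2.0
        depth = 0.0
        for thickness, velocity in layers:
            layer_time = thickness / velocity
            if one_way <= layer_time:
                return depth + one_way * velocity
            one_way -= layer_time
            depth += thickness
        return depth  # ran out of model

    for twt in (0.4, 1.0, 1.6):
        print(f"two-way time {twt:.1f} s  ->  depth ~ {depth_from_twt(twt):.0f} m")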

Interpolation involves filling in missing data in a patchy data set, under the guidance of models. Data are scaled when they have been generated at a particular scale (temporal, spatial, or energy) and modeling assumptions are recruited to transform them so that they apply at another scale.
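For interpolation, a minimal sketch (Python/NumPy; the monthly temperature series and its gaps are invented, and simple linear interpolation stands in for whatever model a real study would justify) looks like this:

    import numpy as np

    months = np.arange(12)
    temps = np.array([2.1, 3.4, np.nan, 9.8, 14.2, np.nan, np.nan, 20.5,
                      16.0, 10.3, 5.6, 2.9])   # patchy monthly record, deg C

    missing = np.isnan(temps)
    # Fill the gaps by linear interpolation between the surviving points.
    filled = temps.copy()
    filled[missing] = np.interp(months[missing], months[~missing], temps[~missing])

    print("interpolated values:", np.round(filled[missing], 2))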

Data fusion occurs when datasets of diverse provenance, for instance ice-core records, tree-ring measurements, and the historical logbooks of sea captains, are merged into a joint climate dataset. Scientists must take care in combining such heterogeneous data, and must model the new uncertainties arising from the very amalgamation of datasets (ibid.). Synthetic data are virtual, or simulated, data; they are not produced by physical interaction with worldly research targets.

Bokulich emphasizes the role that simulated data can usefully play in testing and troubleshooting aspects of data processing that are eventually to be deployed on empirical data (ibid.). It can be incredibly useful, when developing and stress-testing a data processing pipeline, to have fake datasets whose characteristics are already known, in virtue of having been produced by the researchers themselves and being available for their inspection at will.

When the characteristics of a dataset are known, or indeed can be tailored according to need, the effects of new processing methods can be traced far more readily than they otherwise could be. In this way, researchers can familiarize themselves with the effects of a data processing pipeline, and make adjustments to it in light of what they learn by feeding fake data through it, before attempting to use the pipeline on actual science data.
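A toy version of this practice (Python/NumPy; the "pipeline" here is just a moving-average smoother, and the planted signal parameters are arbitrary) plants a known signal in simulated noise and checks whether the processing recovers it:

    import numpy as np

    rng = np.random.default_rng(4)

    def pipeline(series, window=25):
        """Stand-in processing step: a simple moving-average smoother."""
        kernel = np.ones(window) / window
        return np.convolve(series, kernel, mode="same")

    # Synthetic data: we know exactly what went in, because we put it there.
    x = np.linspace(0, 10, 1000)
    true_signal = np.exp(-0.5 * ((x - 4.0) / 0.3) ** 2)   # a bump centred at x = 4.0
    fake_data = true_signal + rng.normal(0.0, 0.5, x.size)

    recovered = pipeline(fake_data)
    peak_location = x[np.argmax(recovered)]
    print(f"planted peak at x = 4.0, pipeline recovers peak at x = {peak_location:.2f}")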

Such investigations can be critical to eventually arguing for the credibility of the final empirical results and their appropriate interpretation and use.

Data assimilation is perhaps a less widely appreciated aspect of model-based data processing among philosophers of science (though see Parker's work). Roughly, data assimilation involves balancing the contributions of empirical data and the output of models in an integrated estimate, weighted according to the uncertainties associated with those contributions.
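In its simplest form, that balancing act is just inverse-variance weighting, as in the scalar Python sketch below (the forecast, observation, and uncertainties are invented numbers; real assimilation schemes, such as Kalman filtering or variational methods, are far more elaborate):

    # Combine a model forecast and an observation of the same quantity,
    # weighting each by the inverse of its variance (its stated uncertainty squared).

    forecast, forecast_sigma = 14.0, 2.0     # e.g. a modelled temperature, deg C
    observed, observed_sigma = 11.5, 1.0     # e.g. a (noisy) station measurement

    w_forecast = 1.0 / forecast_sigma**2
    w_observed = 1.0 / observed_sigma**2

    analysis = (w_forecast * forecast + w_observed * observed) / (w_forecast + w_observed)
    analysis_sigma = (w_forecast + w_observed) ** -0.5

    print(f"analysis = {analysis:.2f} +/- {analysis_sigma:.2f} deg C")
    # The combined estimate sits closer to whichever input carries the smaller uncertainty.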

Bokulich argues that the involvement of models in these various aspects of data processing does not necessarily lead to better epistemic outcomes. Done wrong, integrating models and data can introduce artifacts and make the processed data unreliable for the purpose at hand (ibid.).

Empirical results are laden with values and theoretical commitments, and philosophers have long worried about what this means. They have worried about the extent to which human perception itself is distorted by our commitments. They have worried that drawing upon theoretical resources from the very theory to be appraised, or from its competitors, in the generation of empirical results yields vicious circularity or inconsistency.

Do the theory- and value-ladenness of empirical results render them hopelessly parochial? That is, when scientists leave theoretical commitments behind and adopt new ones, must they also relinquish the fruits of the empirical research imbued with their prior commitments?

In this section, we discuss these worries and the responses philosophers have offered to assuage them. If you believe that observation by human sense perception is the objective basis of all scientific knowledge, then you ought to be particularly worried about the potential for human perception to be corrupted by theoretical assumptions, wishful thinking, framing effects, and so on.

Working in the late nineteenth century, Worthington investigated the hydrodynamics of falling fluid droplets and their evolution upon impacting a hard surface. At first, he tried to carefully track the drop dynamics with a strobe light, relying on the images the flashes burned into his own retinas. The images he drew to record what he saw were radially symmetric, with rays of the drop splashes emanating evenly from the center of the impact. However, when Worthington transitioned from using his eyes and his capacity to draw from memory to using photography, he was shocked to find that the splashes he had been observing were in fact irregular splats (ibid.).

Curiouser still, when Worthington returned to his drawings, he found that he had indeed recorded some unsymmetrical splashes. He had evidently dismissed them as uninformative accidents instead of regarding them as revelatory of the phenomenon he was intent on studying (ibid.).

In attempting to document the ideal form of the splashes, a general and regular form, he had subconsciously downplayed the irregularity of individual splashes. The perceptual psychologists Bruner and Postman found that subjects who were briefly shown anomalous playing cards, e.g., a black four of hearts, tended to report having seen their normal counterparts, e.g., a red four of hearts. For a more up-to-date discussion of theory- and concept-laden perception, see Lupyan. By analogy, Kuhn supposed, when observers working in conflicting paradigms look at the same thing, their conceptual limitations should keep them from having the same visual experiences (Kuhn).

It is plausible that observers' expectations influence their reports in this way. Even so, it is possible for scientists to share empirical results, not just across diverse laboratory cultures, but even across serious differences in worldview. Much as they disagreed about the nature of respiration and combustion, Priestley and Lavoisier gave quantitatively similar reports of how long their mice stayed alive and their candles kept burning in closed bell jars.

Priestley taught Lavoisier how to obtain what he took to be measurements of the phlogiston content of an unknown gas. A sample of the gas to be tested is run into a graduated tube filled with water and inverted over a water bath; when a measured amount of "nitrous air" is added, the water level in the tube changes. Priestley, who thought there was no such thing as oxygen, believed the change in water level indicated how much phlogiston the gas contained. Lavoisier reported observing the same water levels as Priestley even after he abandoned phlogiston theory and became convinced that changes in water level indicated free oxygen content (Conant).

A related issue is that of salience.

Kuhn claimed that if Galileo and an Aristotelian physicist had watched the same pendulum experiment, they would not have looked at or attended to the same things. Galileo would have attended to quantities such as the radius of the swing, the angular displacement, and the time per swing; these were salient to him because he treated pendulum swings as constrained circular motions.

The Galilean quantities would be of no interest to an Aristotelian who treats the stone as falling under constraint toward the center of the earth (ibid.).

Thus Galileo and the Aristotelian would not have collected the same data. Absent records of Aristotelian pendulum experiments, we can think of this as a thought experiment.
