DRAWING CONCLUSIONS

For any research project and any scientific discipline, drawing conclusions is the final, and most important, part of the process. Whichever reasoning processes and experimental methods were used, the final conclusion is critical, determining success or failure. If an otherwise excellent experiment is summarized by a weak conclusion, the results will not be taken seriously. Success or failure is not a measure of whether a hypothesis is accepted or refuted, because both results still advance scientific knowledge. Failure means poor experimental design, or flaws in the reasoning processes, which invalidate the results. As long as the research process is robust and well designed, the findings are sound, and the process of drawing conclusions begins. The key is to establish what the results mean. How do they apply to the world?

WHAT HAS BEEN LEARNED

Generally, a researcher will summarize what they believe has been learned from the research, and will try to assess the strength of the hypothesis. Even if the null hypothesis is accepted, a strong conclusion will analyze why the results were not as predicted. In observational research, with no hypothesis, the researcher will analyze the findings and establish whether any valuable new information has been uncovered.

GENERATING LEADS FOR FUTURE RESEARCH

However, very few experiments give clear-cut results, and most research uncovers more questions than answers. The researcher can use these to suggest interesting directions for further study. If, for example, the null hypothesis was accepted, there may still have been trends apparent within the results. These could form the basis of further study, or of experimental refinement and redesign.

EVALUATION – FLAWS IN THE RESEARCH PROCESS

The researcher will then evaluate any apparent problems with the experiment. This involves critically evaluating any weaknesses and errors in the design which may have influenced the results. Even strict, 'true experimental' designs have to make compromises, and the researcher must be thorough in pointing these out, justifying the methodology and reasoning. For example, when drawing conclusions, the researcher may suspect that another causal effect influenced the results, and that this variable was not eliminated during the experimental process. A refined version of the experiment may help to achieve better results, if the new effect is included in the design. In the global warming example, the researcher might establish that carbon dioxide emission alone cannot be responsible for global warming. They may decide that another effect is contributing, and propose that methane may also be a factor. A new study would incorporate methane into the model.

WHAT ARE THE CLEAR-CUT BENEFITS OF THE RESEARCH?

The next stage is to evaluate the advantages and benefits of the research. In medicine and psychology, for example, the results may suggest a new way of diagnosing a medical problem, so the advantages are obvious. However, all well-constructed research is useful, even if it only adds to the sum of human knowledge. An accepted null hypothesis still has an important meaning for science.

SUGGESTIONS BASED UPON THE CONCLUSIONS

The final stage is the researcher's recommendations based upon the results, depending upon the field of study. This area of the research process can be based around the researcher's personal opinion, and will integrate previous studies.
For example, a researcher into schizophrenia may recommend a more effective treatment, or a physicist might postulate that our picture of the structure of the atom should be changed. A researcher could also make suggestions for refining the experimental design, or highlight interesting areas for further study. This final piece of the paper is the most critical, and pulls together all of the findings. The conclusions are the part of a research paper that provokes the most intense and heated debate amongst scientists; they are critical in determining the direction taken by the scientific community, so the researcher must be able to justify them.

SUMMARY – THE STRENGTH OF THE RESULTS

The key to drawing a valid conclusion is to ensure that the deductive and inductive processes are correctly used, and that all steps of the scientific method were followed. If your research had a robust design, questioning and scrutiny will be devoted to the conclusions, rather than the methods.

WHAT IS GENERALIZATION?

Generalization is an essential component of the wider scientific process. In an ideal world, to test a hypothesis, you would sample an entire population and use every possible variation of an independent variable. In the vast majority of cases this is not feasible, so a representative group is chosen to reflect the whole population. For any experiment, you may be criticized for your generalizations about sample, time and size.

• You must ensure that the sample group is as truly representative of the whole population as possible.
• For many experiments, time is critical, as behaviors can change yearly, monthly or even by the hour.
• The size of the group must allow the statistics to be safely extrapolated to the entire population.

In reality, perfection is not possible, due to budget, time and feasibility. For example, you may want to test a hypothesis about the effect of an educational program on schoolchildren in the US. For the perfect experiment, you would test every single child using the program against a control group. If this number runs into the millions, it may not be possible without a huge number of researchers and a bottomless pit of money. Thus, you need to generalize, and try to select a sample group that is representative of the whole population. A high-budget research project might take a smaller sample from every school in the country; a lower-budget operation may have to concentrate upon one city or even a single school.

The key to generalization is to understand how far your results can be extended to represent the group of children as a whole. The first example, using every school, would be a strong representation, because the range and number of samples is high. Testing one school makes generalization difficult and weakens the external validity. You might find that the individual school tested generates better results for children using that particular educational program, while a school in the next town might contain children who do not like the system, or students from a completely different socio-economic background or culture. Critics of your results will pounce upon such discrepancies and question your entire experimental design.

Most statistical tests contain an inbuilt mechanism for taking sample size into account, with larger groups and numbers leading to results that are more significant, as the sketch below illustrates. The problem is that they cannot judge the validity of the results, or determine whether your generalization is sound.
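To make the sample-size point concrete, here is a minimal, hypothetical sketch in Python (not part of the original text): two simulated studies with exactly the same underlying effect, differing only in the number of participants per group. The effect size of 0.3 and the group sizes are arbitrary choices for illustration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulated_p_value(n_per_group, effect=0.3):
    # Draw a control and a treatment group with the same true effect,
    # then return the p-value of an independent-samples t-test.
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    treatment = rng.normal(loc=effect, scale=1.0, size=n_per_group)
    return stats.ttest_ind(control, treatment).pvalue

print("n = 20 per group :", round(simulated_p_value(20), 3))
print("n = 500 per group:", round(simulated_p_value(500), 3))

The larger study is far more likely to cross the conventional significance threshold, even though nothing about the effect itself has changed; significance alone says nothing about how well the sample represents the wider population.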
This is something that must be taken into account when generating a hypothesis and designing the experiment. The other option, if the sample groups are small, is to use proximal similarity and restrict your generalization. This is where you accept that a limited sample group cannot represent all of the population. If you sampled children from one town, it is dangerous to assume that they represent all children. It is, however, reasonable to assume that the results should apply to a similarly sized town with a similar socio-economic profile. This is not perfect, but it carries more external validity and would be an acceptable generalization.

CONFOUNDING VARIABLES

Confounding variables are variables that the researcher failed to control, or eliminate, damaging the internal validity of an experiment. A confounding variable, also known as a third variable, can adversely affect the relationship between the independent variable and the dependent variable. This may cause the researcher to analyze the results incorrectly: the results may show a false correlation between the dependent and independent variables, leading to an incorrect rejection of the null hypothesis.

THE DAMAGE CAUSED BY CONFOUNDING VARIABLES

For example, a research group might design a study to determine whether heavy drinkers die at a younger age. They design the study, gather data, and their results, backed by a battery of statistical tests, indeed show that people who drink excessively are likely to die younger. Unfortunately, when their peers scrutinize the work, the results are ripped apart: perhaps there is another factor, not measured, that influences both drinking and age of death. The weakness in the experimental design was that they failed to take confounding variables into account, and did not try to eliminate or control any other factors. It is quite possible that the heaviest drinkers hailed from a different background or social group, or that heavy drinkers are more likely to smoke or eat junk food, all of which could be factors in reducing longevity. A third variable may have adversely influenced the results.

MINIMIZING THE EFFECTS OF CONFOUNDING VARIABLES

In many fields of science, it is difficult to remove all extraneous variables entirely, especially outside the controlled conditions of a laboratory. A well-planned experimental design, and constant checks, will filter out the worst confounding variables; for example, randomizing groups, utilizing strict controls, and sound operationalization practice all contribute to eliminating potential third variables. When the results of a study are discussed and assessed by a group of peers, this is the area that stimulates the most heated debate. When you read stories of different foods making you die young, or hear claims about the next superfood, assess these findings carefully: many media outlets jump upon sensational results but pay no regard to the possibility of confounding variables.

CORRELATION AND CAUSATION

The principle is closely related to the problem of correlation and causation: a scientist performs statistical tests, sees a correlation, and incorrectly announces that there is a causal link between two variables. Constant monitoring, before, during and after an experiment, is the only way to ensure that any confounding variables are eliminated. Statistical tests, whilst excellent at detecting correlations, can be almost too sensitive: they will flag a relationship whether or not a causal link lies behind it, as the sketch below illustrates.
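The following sketch is a purely illustrative simulation (the numbers are invented and do not come from any real study): an unmeasured third variable, here smoking, drives both heavy drinking and earlier death, manufacturing a correlation between drinking and lifespan even though drinking has no effect at all in this toy model.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

smoker = rng.random(n) < 0.5                       # hidden confounder
heavy_drinker = rng.random(n) < np.where(smoker, 0.6, 0.2)
lifespan = 80 - 10 * smoker + rng.normal(0, 5, n)  # drinking never enters here

print("overall correlation:",
      round(np.corrcoef(heavy_drinker, lifespan)[0, 1], 2))
for mask, label in [(smoker, "smokers"), (~smoker, "non-smokers")]:
    r = np.corrcoef(heavy_drinker[mask], lifespan[mask])[0, 1]
    print(f"within {label}: {r:+.2f}")

The overall correlation between drinking and lifespan is clearly negative, yet within each stratum of the confounder it vanishes; the apparent effect of drinking is entirely an artifact of the hidden variable.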
Human judgment is always needed to eliminate any underlying problems, ensuring that researchers do not jump to conclusions.

VALIDITY AND RELIABILITY

The principles of validity and reliability are fundamental cornerstones of the scientific method. Together, they are at the core of what is accepted as scientific proof, by scientist and philosopher alike. By following a few basic principles, any experimental design will stand up to rigorous questioning and skepticism.

WHAT IS RELIABILITY?

The idea behind reliability is that any significant result must be more than a one-off finding and be inherently repeatable. Other researchers must be able to perform exactly the same experiment, under the same conditions, and generate the same results. This will reinforce the findings and ensure that the wider scientific community will accept the hypothesis. Without this replication of statistically significant results, the experiment and research have not fulfilled all of the requirements of testability. This prerequisite is essential for a hypothesis to establish itself as an accepted scientific truth.

For example, if you are performing a time-critical experiment, you will be using some type of stopwatch. Generally, it is reasonable to assume that the instruments are reliable and will keep true and accurate time. However, diligent scientists take measurements many times, to minimize the chances of malfunction and maintain validity and reliability. At the other extreme, any experiment that relies on human judgment is always going to come under question. For example, if observers rate certain aspects, as in Bandura's Bobo Doll Experiment, then the reliability of the test is compromised. Human judgment can vary wildly between observers, and the same individual may rate things differently depending upon the time of day and their current mood. This means that such experiments are more difficult to repeat and are inherently less reliable. Reliability is a necessary ingredient for determining the overall validity of a scientific experiment and enhancing the strength of the results. Debate between social and pure scientists concerning reliability is robust and ongoing.

WHAT IS VALIDITY?

Validity encompasses the entire experimental concept and establishes whether the results obtained meet all of the requirements of the scientific research method. For example, there must have been randomization of the sample groups, and appropriate care and diligence shown in the allocation of controls. Internal validity dictates how an experimental design is structured and encompasses all of the steps of the scientific research method. Even if your results are excellent, sloppy and inconsistent design will compromise your integrity in the eyes of the scientific community. Internal validity and reliability are at the core of any experimental design.

External validity is the process of examining the results and questioning whether there are any other possible causal relationships. Control groups and randomization will lessen external validity problems (a minimal sketch of random assignment follows below), but no method can be completely successful. This is why statistical support for a hypothesis is called significant, rather than absolute truth. Any scientific research design only puts forward a possible cause for the studied effect; there is always the chance that another unknown factor contributed to the results and findings. This extraneous causal relationship may become more apparent as techniques are refined and honed.
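As a small aside, here is a minimal sketch of the random assignment mentioned above. The participant names and group sizes are made up for illustration; the point is simply that shuffling the list before splitting it removes any systematic pattern in who ends up in the control or treatment group.

import random

# Hypothetical participant list; in practice this would be your recruited sample.
participants = [f"participant_{i:02d}" for i in range(1, 21)]

random.seed(7)                # fixed seed so the split is reproducible
random.shuffle(participants)  # order is now unrelated to any participant trait

midpoint = len(participants) // 2
control_group = participants[:midpoint]
treatment_group = participants[midpoint:]

print("control  :", control_group)
print("treatment:", treatment_group)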
CONCLUSION

If you have constructed your experiment to contain internal validity and reliability, the scientific community is more likely to accept your findings. Eliminating other potential causal relationships, by using controls and duplicate samples, is the best way to ensure that your results stand up to rigorous questioning.

CORRELATION AND CAUSATION

Confusing correlation with causation, a problem closely related to confounding variables, is the incorrect assumption that because two things correlate, one must cause the other. Causality is the area of statistics most commonly misused and misinterpreted by non-specialists. Media sources, politicians and lobby groups often leap upon a perceived correlation and use it to 'prove' their own beliefs, failing to understand that a correlation in the results is no proof of an underlying causal link. Many people assume that because a poll, or a statistic, contains many numbers, it must be scientific, and therefore correct.

PATTERNS OF CAUSALITY IN THE MIND

Unfortunately, the human mind is built to subconsciously establish links between many contrasting pieces of information. The brain often tries to construct patterns from randomness, jumps to conclusions, and assumes that a relationship exists; a sketch later in this section shows how easily chance alone produces convincing-looking correlations. Overcoming this tendency is part of the academic training of students and academics in most fields, from physics to the arts. The ability to evaluate data objectively is absolutely crucial to academic success.

THE SENSATIONALISM OF THE MEDIA

The best way to look at the misuse of correlation and causation is through an example. A survey, as reported in a British newspaper, involved questioning a group of teenagers about their behavior and establishing whether their parents smoked. The newspaper reported, as fact, that children whose parents smoked were more likely to exhibit delinquent behavior. The results seemed to show a correlation between the two variables, so the paper printed the headline: "Parental smoking causes children to misbehave." The professor leading the investigation stated that cigarette packets should carry warnings about social issues alongside the prominent health warnings. (Source: criticalthinking.org.uk/smokingparents/)

However, there are a number of problems with this assumption. The first is that correlations can often work in reverse: it is perfectly possible that the parents smoked because of the stress of looking after delinquent children. Another possibility is that social class drives the correlation; lower socio-economic groups are more likely to smoke and are also more likely to have delinquent children. Parental smoking and delinquency may both be symptoms of the underlying problem of poverty, with no direct link between them.

EMOTIVE BIAS INFLUENCES CAUSALITY

This example highlights another reason behind correlation and causation errors: the professor was strongly anti-smoking, and was hoping to find a link that would support his own agenda. This is not to say that his results were useless, because they suggest a common root cause behind delinquency and the likelihood of smoking. That, however, is not the same as a cause-and-effect relationship, and he allowed his emotions to cloud his judgment. Smoking is a very emotive subject, but academics must remain detached and unbiased if internal validity is to remain intact.
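To underline how easily patterns emerge from pure noise, here is an illustrative simulation (everything is randomly generated; none of the variables are causally linked, and the counts of variables and observations are arbitrary choices):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_observations, n_variables = 30, 40

# 40 completely independent random variables, 30 observations each
data = rng.normal(size=(n_variables, n_observations))

spurious = 0
for i in range(n_variables):
    for j in range(i + 1, n_variables):
        r, p = stats.pearsonr(data[i], data[j])
        if p < 0.05:
            spurious += 1

total_pairs = n_variables * (n_variables - 1) // 2
print(f"{spurious} of {total_pairs} pairs 'significant' at p < 0.05, by chance alone")

Roughly five percent of the pairs pass the conventional significance test despite there being no relationship whatsoever; this is exactly the trap that sensational headlines, and emotionally invested researchers, can fall into.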
THE COST OF DISREGARDING CORRELATION AND CAUSATION

Incorrectly linking correlation and causation is closely related to post-hoc reasoning, where incorrect assumptions generate an incorrect link between two effects. The principle of correlation and causation is very important for anybody working as a scientist or researcher, and it is also useful for non-scientists, especially those studying politics, media and marketing. Understanding causality promotes a greater understanding, and a more honest evaluation, of the alleged facts given by pollsters. Imagine an expensive advertising campaign, based around intense market research, where misunderstanding a correlation could cost a great deal of money in advertising, production costs, and damage to the company's reputation.

by Martyn Shuttleworth

QUANTITATIVE RESEARCH DESIGN

INTRODUCTION

Quantitative research design is the standard experimental method of most scientific disciplines. These experiments are sometimes referred to as true science, and use traditional mathematical and statistical means to measure results conclusively. They are most commonly used by physical scientists, although the social sciences, education and economics also use this type of research. It is the opposite of qualitative research.

Quantitative experiments all use a standard format, with a few minor inter-disciplinary differences, of generating a hypothesis to be proved or disproved. This hypothesis must be provable by mathematical and statistical means, and is the basis around which the whole experiment is designed. Randomization of any study groups is essential, and a control group should be included wherever possible. A sound quantitative design should manipulate only one variable at a time, or the statistical analysis becomes cumbersome and open to question. Ideally, the research should be constructed in a manner that allows others to repeat the experiment and obtain similar results.

ADVANTAGES

Quantitative research design is an excellent way of finalizing results and proving or disproving a hypothesis. The structure has not changed for centuries, so it is standard across many scientific fields and disciplines. After statistical analysis of the results, a comprehensive answer is reached, and the results can be legitimately discussed and published. Quantitative experiments also filter out external factors, if properly designed, and so the results gained can be seen as real and unbiased. Quantitative experiments are useful for testing the results gained by a series of qualitative experiments, leading to a final answer and a narrowing down of possible directions for follow-up research to take.

DISADVANTAGES

Quantitative experiments can be difficult and expensive and require a lot of time to perform. They must be carefully planned to ensure that there is complete randomization and correct designation of control groups. Quantitative studies usually require extensive statistical analysis, which can be difficult because most scientists are not statisticians; the field of statistics is a whole discipline in itself and can be difficult for non-mathematicians. In addition, the requirements for successful statistical confirmation of results are very stringent, with very few experiments comprehensively proving a hypothesis; there is usually some ambiguity, which requires retesting and refinement of the design.
This means another investment of time and resources must be committed to fine-tune the results. Quantitative research design also tends to generate only proved or unproven results, with very little room for grey areas and uncertainty. For the social sciences, education, anthropology and psychology, human nature is a lot more complex than a simple yes or no response.

QUALITATIVE RESEARCH DESIGN

INTRODUCTION

Qualitative research design is a method of experimentation used extensively by scientists and researchers studying human behavior and habits. It is also very useful for product designers who want to make a product that will sell. For example, a designer generating ideas for a new product might want to study people's habits and preferences, to make sure that the product is commercially viable. Quantitative research is then used to assess whether the completed design is popular or not.

Qualitative research is often regarded as a precursor to quantitative research, in that it is often used to generate possible leads and ideas which can be used to formulate a realistic and testable hypothesis. This hypothesis can then be comprehensively tested and mathematically analyzed with standard quantitative research methods. For these reasons, qualitative methods are often closely allied with survey design techniques and individual case studies, as a way to reinforce and evaluate findings over a broader scale.

One example of a qualitative research design might be a survey constructed as a precursor to the paper towel experiment. A study completed before the experiment would reveal which of the multitude of brands were the most popular. The quantitative experiment could then be constructed around only those brands, saving a lot of time, money and resources. Qualitative methods are probably the oldest of all experimental techniques, with Ancient Greek philosophers qualitatively observing the world around them and trying to come up with answers to explain what they saw.

DESIGN

The design of qualitative research is probably the most flexible of the various experimental techniques, encompassing a variety of accepted methods and structures. From an individual case study to an extensive survey, this type of study still needs to be carefully constructed and designed, but there is no standardized structure. Case studies and survey designs are the most commonly used methods.

ADVANTAGES

Qualitative techniques are extremely useful when a subject is too complex to be answered by a simple yes or no hypothesis. These types of designs are much easier to plan and carry out, which is useful when budgetary decisions have to be taken into account. The broader scope covered by these designs ensures that some useful data is always generated, whereas an unproved hypothesis in a quantitative experiment can mean that a lot of time has been wasted. Qualitative research methods are not as dependent upon sample sizes as quantitative methods; a case study, for example, can generate meaningful results with a small sample group.

DISADVANTAGES

Whilst not as time- or resource-consuming as quantitative experiments, qualitative methods still require a lot of careful thought and planning to ensure that the results obtained are as accurate as possible. Qualitative data cannot be mathematically analyzed in the same comprehensive way as quantitative results, so it can only give a guide to general trends.
It is a lot more open to personal opinion and judgment, and so can only ever give observations rather than definitive results. Any qualitative research design is usually unique and cannot be exactly recreated, meaning that such studies lack the replicability that peer scrutiny relies upon.

experiment-resources/experimental-research.html