Hopefully no substantial or permanent side effects
My thoughts:
A rating of pain is a hard thing to measure. Some people may simply experience pain differently, and there were no control groups (as DBA said). Therefore, the study's results cannot be measured with accuracy and are unreliable. Hence, the drug shouldn't be sold.
Plenty of drugs have substantial and indeed permanent side effects. What makes a side effect acceptable or not is the benefit of the drug weighed against the side effects. For that reason, drugs that are more likely to save your life are often the kinds of drugs that have quite severe side effects. Chemotherapy tends to come to mind here. The side effects are terrible, but the results of not using the drug are often (but not always) more terrible.
You're right in a sense that a pain scale isn't hugely accurate, but we're not necessarily bothered by that. As long as the results are precise and you can demonstrate a difference between the groups, then using such a scale should be fine. Pain is a subjective experience. Using a scale that acknowledges that subjectivity is ok here.
Just some thoughts:
- Since the results are summarised and only average scores were shown, there could be data that is not consistent with the average, or that affected the average a great deal, but is hidden. For example, the results of each person could be plotted on a graph and a trend line drawn through the middle, when the individual results are in fact zigzagged and irregular. The results should be shown in more detail.
- The scientist's desire to make money could have led him to alter some results to fit his hypothesis, and it seems like he is working on his own rather than in a team, so his results might not be reliable, since no other scientists were there to make sure the results are true. It is also unclear who summarised the results and put them together this way.
1. This is a reasonable point. Normally results are reported with a measure of their spread. I didn't include that because it would have overcomplicated the question, but suffice to say that in a sample of 10,000 participants it is highly unlikely that any outliers skewed the result. This is well beyond VCE science, but Methods students might appreciate the effect that a small sample size could have on the accuracy of a data set.
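For anyone curious, here's a quick Python sketch of the sample-size point. The scores and the outlier are invented for illustration: a single extreme score noticeably shifts the average of a small sample, but barely moves the average of 10,000.

```python
import random

random.seed(0)

def mean_with_outlier(n):
    # Invented pain-reduction scores centred on 5,
    # plus one extreme outlier score of 10.
    scores = [random.gauss(5, 1.5) for _ in range(n)] + [10]
    return sum(scores) / len(scores)

small = mean_with_outlier(10)      # the outlier shifts this mean noticeably
large = mean_with_outlier(10_000)  # the outlier is drowned out here
print(round(small, 2), round(large, 2))
```

With 10 participants the single outlier pulls the average well away from 5; with 10,000 the average stays within a few hundredths of it.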
2. This could be true but is reasonably uncommon. There are some issues about blinding that might be more worthwhile considering here than the possibility of research fraud. If you're interested in research fraud, Benford's law is an interesting concept you might enjoy reading about! Very much beyond VCE though.
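For the curious: Benford's law says that in many naturally occurring data sets, the leading digit d appears with probability log10(1 + 1/d), so digit 1 leads about 30% of the time. Fabricated numbers often fail this test. A quick Python sketch (powers of 2 are just a convenient data set known to follow the law):

```python
import math
from collections import Counter

# First digits of 2^1 .. 2^1000, which closely follow Benford's law.
observed = Counter(int(str(2 ** k)[0]) for k in range(1, 1001))

for d in range(1, 10):
    expected = math.log10(1 + 1 / d)  # Benford's predicted proportion
    print(d, observed[d] / 1000, round(expected, 3))
```

The observed proportion for each leading digit lands very close to the Benford prediction, which is why auditors use the law to flag suspiciously uniform made-up figures.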
Just a few thoughts-
1) The selection process for the participants was unspecified. It may not have been random sampling, and experimenter bias may have been present during sampling (e.g. convenience sampling). The participants may not be representative of the overall population, meaning the study cannot be generalised.
2) There was no control group to contrast the results of the drug against, so the study is unreliable.
3) As the info emphasised that he valued money a lot and did not mention any co-experimenters, the experimenter may have been biased and chosen not to report undesirable results in order to promote his drug, affecting the accuracy and validity of the results.
4) People have different pain tolerances and may rate pain differently. Furthermore, the type of pain experienced varies, so this study is unreliable. An individual may consider a headache worthy of taking the drug whilst another may not, and some participants may be prone to more pain than others (e.g. if they have medical issues). A better method would have been to induce the same type of pain, in an ethical manner, in all participants.
5) There is no mention of how the results were collected. It may have been a simple question of whether the participant had experienced a reduction in pain. In this case, participant bias may be present, as they may feel inclined to say yes because they took the drug. This is reinforced by the small change in the percentage across increasing concentrations of the drug, suggesting participant bias may have taken place. A double-blind procedure would have been good.
6) The actual experimental process may not have met ethical guidelines, meaning the study itself cannot be relied upon. It is unclear whether informed consent, withdrawal rights, and debriefing about the drug and its side effects took place. Furthermore, beneficence is important to consider: it is unclear whether this drug offers a benefit to the research of pain medications or to the individual, and whether it can compete with other drugs.
1) You're quite right that the sampling method wasn't specified here. Again, I omitted this detail to simplify the question. I'm glad that you're thinking about sampling though, as it can have a big impact on the results and potentially distort them. Convenience sampling doesn't generate experimenter bias, it generates selection bias. It's a very broad term and one that isn't really necessary to know in VCE!
2) YES
3) Potentially, but lots of people value money a lot. Fraud happens but we can assume that in this case it hasn't. Any trial involving 10,000 people would have a very large team working on it.
4) You're the second person to raise the point about the pain scale. See above for my response to it. tl;dr: I disagree that it's a problem. We're interested in whether the drug reduces pain broadly. To measure it broadly is entirely appropriate.
5) Again, omitted to simplify the question but, also again, a very worthy point to make.
6) See 5. Very glad you're thinking about these things!
Overall thoughts
This was a tricky first question. I'm pleased that many of you are looking for the important elements of research, although I think some might have tried to be a little too smart with their answers and missed some of the more obvious problems. The question was deliberately designed to encourage you to do that, as this was something I struggled with a lot during VCE and certainly cost me a lot of marks over the years.
A lot of what is discussed above goes beyond VCE. This is ok. The purpose of this thread isn't to rigidly stick to the course, but to improve our understanding about research more generally so that you develop a framework for thinking about research.
Whenever we ask a question about an experiment, we need to consider what the experiment is for. In this case:
Does drug X cause a reduction in pain?
We now need to consider whether the independent and dependent variables of the experiment have been set up to answer that question. The IV is the drug and the DV is the reduction in pain, so yes, we have tried to test this question. (In some cases an appropriate question at this point might be whether the original question is the right question, but this is probably taking things too far.)
Now that we've ticked this box, we start asking some more detailed questions about our experiment. I always like to think about the people first: do we have enough, and did we select them appropriately? In the above example, I didn't go into much detail about this. But suffice to say, we have 10,000 people (which is heaps) and they were randomised into three different groups. So this is good!
The next thing we need to ask is whether we have something to compare our results to. The complex answer is sort of, but for the purposes of VCE we can say the answer is no. We only compare the drug to itself; we don't compare it to a placebo (something that looks like the drug but has no active ingredient). Without a point of comparison, how can we tell whether our drug is truly reducing pain, or whether patients' pain is only being reduced because of their belief in the drug? Certainly, if you give someone a Tic Tac and tell them it's morphine, any pain they're experiencing will likely go away. This is called the placebo effect. In this case we can't be sure whether our results are what they are because the drug works or because of the placebo effect. Therefore, it's not ethical to sell the drug.
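To make the placebo problem concrete, here's a small Python sketch with invented numbers (the effect sizes are assumptions for illustration, not anything from the question). Everyone reports some pain reduction just from believing they took a drug, so the drug group's average on its own overstates the drug's real effect; only the difference against a placebo group isolates it.

```python
import random

random.seed(1)

PLACEBO_EFFECT = 2.0  # assumed average reduction from belief alone (invented)
DRUG_EFFECT = 0.5     # assumed true pharmacological reduction (invented)

def reported_reductions(true_effect, n=1000):
    # Reported reduction = belief effect + true drug effect + individual noise
    return [PLACEBO_EFFECT + true_effect + random.gauss(0, 1) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

drug_group = reported_reductions(DRUG_EFFECT)
placebo_group = reported_reductions(0.0)

# The drug group's average mixes belief and drug; the between-group
# difference recovers the drug's real contribution.
print(round(mean(drug_group), 2))
print(round(mean(drug_group) - mean(placebo_group), 2))
```

Without the placebo group, all we would see is the first number, and we'd have no way of knowing how much of it is the drug and how much is belief.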
Well done to everyone who also asked questions about ethics and about blinding of the participants. These are also important considerations, but the obvious answer here was a lack of a clear control group.