Saturday, February 6, 2016

My Sources

      A credible paper needs credible sources: sources that contribute to the work as a whole, not ones included just to give the work the appearance of being factual. A good source is genuinely useful to both the author and the audience, informing them more deeply about the subject presented.
       To ensure I have the best sources for my work, I will present all 10 of them below.

mirko. "[Research]" 03/02/2015 via Flickr.
Attribution-NonCommercial-ShareAlike 2.0 Generic

1. Power Failure: Why Small Sample Size Undermines the Reliability of Neuroscience
       This source is from an online review journal, Nature Reviews, specifically its Neuroscience title. As a source it is fairly credible: it exists specifically to report scientific news and to cater to readers who want in-depth coverage of a subject.
       There are multiple authors of this source, whom I discussed in my post evaluating general sources. All of them are well qualified and list their credentials.
       The source was published online on April 10, 2013. Since this paper is the start of the controversy, this date is the catalyst for the discourse as a whole. In general world news there was nothing of note that would have affected the announcement or publication of these findings.
       This source is the most important because it starts the controversy. The stakeholders within it are the authors, whose renown and credibility rest on the merit of their paper and study. The information in this article is the foundation that everything else in the controversy builds on, and it is what generated the desire to research statistical power within neuroscience further.


    The source for this article is the Guardian US, the American online edition of the British newspaper. The US site itself only goes back five years, but the original British paper was founded in 1821, so the newspaper carries the credibility of many years of reliability.
        The author of this source is Kate Button, a research psychologist at the University of Bristol. She has a degree in neuroscience from Fitzwilliam College, so she has the credentials for speaking about this study and discussing its reliability in her own field. 
        This source came out the exact same day as the first source, which obviously affects how it was received. The same-day posting is important: the author made a conscious decision to publish her commentary at the same time the paper became available to a much larger crowd. This could have generated more readership for the original source, since the audience reading this newspaper article might then go read the original paper.
       The information this source offers is practically a synopsis of the main source: the author explains the information and reasoning behind the original study, along with a deeper explanation of why small sample sizes matter. The quick-reference style makes it more accessible than the scientific article published in the journal, allowing a broader range of readers to understand the importance of this study.

       The source for this article is National Geographic, a fairly reliable source for academic works and articles, as it is focused on educating people about scientific happenings so they can view the world more broadly.
       The author of this source is Ed Yong, a science writer for The Atlantic who also blogs for National Geographic about scientific phenomena. He has credibility because he has been writing in this field for at least the past five years, if not longer.
       This source also came out the same day the main source was posted online, clearly to draw attention to it. It is also the first to point out that all of this was happening around the time President Obama launched the initiative to map all the neurons in the brain, with European efforts aiding the process. That is an important situational distinction: the main study came out just in time to cast a negative light on how studies within neuroscience are conducted.
       Ed Yong's piece not only describes how this study arrives in light of President Obama's initiative for mapping the entire brain, but also sheds light on how this problem is not neuroscience-specific and plagues multiple fields of study. He also details how researchers might go about raising statistical power within neuroscience, providing a possible solution to the controversy.

      The source for this article is an organization called the Public Library of Science (PLOS), an advocate for and publisher of "Open Access" research; this particular paper comes from the Medicine section of PLOS. This source has a fair amount of credibility because of its use as a research engine and because it ensures that the works it publishes have been reviewed for accuracy.
       The author of this work is John P. A. Ioannidis. He is highly qualified to write any sort of medical text or scientific article, having held multiple administrative and academic positions, including professorships, and having received many awards and honors for his work. So I would consider this one of the more credible sources on this topic.
      This specific work actually predates the main controversy, having been published online in August of 2005, but it was cited in the main source as a basis for their research, so it is likely extremely important. As for other news around that time, Hurricane Katrina had just occurred, or was in fact occurring, and perhaps this story did not spread as widely because of other concerns at the time.
      The information in this article may have helped stimulate the idea for the original source of the controversy. It details how research studies in general are increasingly discovered to be untrue because they had small sample sizes. It was most likely the inspiration for Button et al. to look specifically into the statistical power of neuroscience studies, and it backs up their claim that this is a problem for most scientific fields.

     The source for this article is the online magazine WIRED, which covers how new discoveries may affect everyday life. This source is fairly credible because of its well-known reputation for accuracy on scientific matters.
     The author of this source is Greg Miller, a science journalist, both freelance and previously a senior writer at several news sites, who earned a PhD in neuroscience, so he most definitely has the credibility to discuss this topic in depth. He also completed a graduate science-writing program, so writing about science is his specialty.
      This article came out on April 15, 2013, just a few days after the main source and the other articles, so like the rest this author is most likely trying to bring the information out into the world and let people know what is happening. The timeline for this article is nearly identical to the others, except that by that date more people may have read the original source.
      Like the other sources published so close to the original main source, this one mostly breaks down what statistical power means and how it affects the outcome of studies.
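To see concretely what these articles are breaking down, here is a small sketch of my own (not code from any of these sources) using the standard normal approximation for a two-sample comparison: the chance of detecting a true effect depends sharply on how many subjects are in each group.

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample(d, n, alpha=0.05):
    """Approximate power of a two-sided, two-sample test under a normal
    approximation: d = standardized effect size (Cohen's d), n = per group."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)      # two-sided critical value
    return NormalDist().cdf(d * sqrt(n / 2) - z_crit)  # P(detect a true effect)

# A "medium" effect (d = 0.5) with 20 subjects per group is badly
# underpowered (around a one-in-three chance of detection), while
# 64 per group reaches the conventional 80% power target:
print(power_two_sample(0.5, 20))   # well under 0.5
print(power_two_sample(0.5, 64))   # about 0.8
```

This is the core of the Button et al. complaint: with typical neuroscience sample sizes, most true effects are simply missed, and the ones that do reach significance tend to be overestimated.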

      The source of this article is the same as the original source, the Neuroscience title of Nature Reviews. This is most likely considered a separate publication within the review, but it holds the same amount of credibility as it did for the original source.
      The author of this source is Philip T. Quinlan, a member of the Department of Psychology at the University of York. He has published around 78 works, some of them in the field of neuroscience as well, so he has the credibility to argue a point about the study and its impact.
      This article was published online on July 3rd of 2013, so a couple of months have passed, enough that the original work now has published articles arguing for or against its points. This time frame is important when thinking about how far the information spread in that amount of time. Also within this year, in the month before this story was published online, the Snowden debacle happened. This may have affected how online sources such as these were treated, though any effects would be merely speculation.
       This article can be understood as the start of a classy, written academic bar fight, or that is how some would describe it. The article is a response to the Button et al. work, calling its reliability into question just as that work questioned the statistical power of neuroscience. The author presents the case that small sample sizes can indeed be used with statistical accuracy thanks to techniques like "meta-analysis".
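The meta-analysis idea this response leans on can be sketched very simply (my own illustration, with made-up numbers, not Quinlan's actual analysis): several small, noisy studies can be combined into one estimate that is more precise than any of them alone, by weighting each study inversely to its variance.

```python
def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance) meta-analysis: each study's
    estimate is weighted by 1/variance, so more precise studies count
    more, and the pooled variance shrinks as studies accumulate."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var

# Three hypothetical small studies, each noisy on its own:
est, var = pooled_effect([0.6, 0.3, 0.5], [0.10, 0.08, 0.12])
print(est, var)  # pooled variance is smaller than any single study's
```

This is why the counterargument has teeth: an individual underpowered study may be unreliable, yet a literature of such studies can still support precise conclusions when pooled.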

      The source for this article is the same as the previous one (#6) as well as the original source; it actually appears within the same grouping of publications as Misuse of Power. So the credibility of this source, Nature Reviews, can still be considered secure.
       The author of this source, John C. Ashton, works in pharmacology and toxicology at the University of Otago. He has participated in many studies and has credibility within the field through his writing and through being cited by others.
        The date this article was published is also July 3rd, 2013, so the same considerations apply when analyzing its credibility and its possible effects on the audience.
       The information this author provides is mostly that, while it is not wrong to point out that statistical power is too low in studies, it is more important to discuss how the null hypothesis is not being tested accurately, and that is what is really undermining statistical power.

       The source for this work is also Nature Reviews; as a response to the article by Button et al., it is all connected, and just as credible.
     The author of this work is Peter Bacchetti, who is part of the Department of Epidemiology and Biostatistics at the University of California. His qualifications in biostatistics lend to his credibility for talking about the statistical power within scientific studies as he's had to work with them often. 
     This commentary was published online at the same time as the others. This unloading of articles all surrounding the same subject at once may have helped pit the arguments in these works against each other, pushing the audience, who would probably read all of the articles, to consider the different sides.
     The information in this piece concerns how targeting small sample sizes as bad causes more problems than it fixes. The author defends small sample sizes as being just as efficient, and possibly better statistically as a whole. His arguments are important to my paper because they give the other side of the argument from the main source, adding to the controversy.

      The source for this work is the same as the previous ones. It could be problematic to have all the information, or a good part of it, coming from the same source, because bias could be argued; however, since the source draws on multiple authors with differing opinions, it should not be an issue.
      The authors of this article are actually the same large group that started this controversy, Button et al., who have all the degrees of credibility; and since it was their idea, or at least their paper, that started this argument, they have the credentials to defend the merit of their work.
      This article was posted at the same time as the rest of the responses, obviously a sign they were published together. Since it is impossible that they were all written at the same time, the correspondence articles were most likely collected elsewhere and published at once, allowing the controversy to be easily viewed as a whole.
      The information provided in this response is what the original group of authors has to say to the previous article writers: either proving their own point or making a statement about the other authors' arguments and whether those arguments hold merit against the evidence.
     
      This final source comes from the online Proceedings of the National Academy of Sciences of the U.S.A. (PNAS), a scientific journal. This source has credibility not only for its scientific use but because it is trusted by universities as a source of information for students; one such university it is affiliated with is the University of Arizona.
      The author of this source is Valen E. Johnson, who has written many articles for PNAS on statistical evidence and who works in the Department of Statistics at Texas A&M University. This lends to the author's credibility. That he is writing such a work on revising standards of statistical evidence is important to my paper, even though it is not specific to neuroscience, because it may resolve the controversy within the field.
     This article was written sometime before July 18th of 2013, when it was revised, but was not published online until October of that same year. This timeline is important because the article was obviously being written and revised while the controversy over statistical power was unfolding throughout the neuroscience field. The much later electronic publication could have given it less of an impact than if it had been posted in the same wave as the other works.
       The information provided in this work has to do with the fact that the statistics for many studies are not repeatable, and so they are not reliable. The author was able to determine that this is tied to small sample sizes combined with extreme levels of significance for the size of the tests. To fix this problem, the author suggests changing the confidence levels at which tests are conducted, based on a certain statistical model, which could put a stop to unreliably low statistical power.
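Tightening the evidence threshold, as this work proposes, has a direct cost in sample size. As a rough sketch of my own (using the same normal approximation as before, not Johnson's actual statistical model), here is how the required group size grows when the significance level alpha is made stricter:

```python
from math import sqrt, ceil
from statistics import NormalDist

def n_per_group(d, alpha, power=0.80):
    """Per-group sample size for a two-sided, two-sample comparison
    (normal approximation): solve power = Phi(d*sqrt(n/2) - z_crit) for n."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # stricter alpha -> larger z
    z_power = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_power) / d) ** 2)

# For a medium effect (d = 0.5) at 80% power, moving from the usual
# alpha = 0.05 to a stricter alpha = 0.005 roughly doubles the
# required group size:
print(n_per_group(0.5, 0.05))    # 63 per group
print(n_per_group(0.5, 0.005))   # 107 per group
```

So a resolution along these lines would trade cheap, small studies for fewer but far more reliable ones, which is exactly the tension running through this whole controversy.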
