Budgets are getting tighter in many research areas, and unfortunately that means more competition for funds. Some of those chasing grants respond by exaggerating their journal findings, or sometimes just flat out publishing what might be called a form of fiction, close to the real results but not the real results. Software and analytics are helping identify some of this activity, since chances are the same kinds of tools were used to substantiate the information published in the first place. Real results, of course, have to be reproducible.
When fraud is caught the results can be devastating, and some go further and try to cover it up, with an entire lab, for example, at stake of losing its credentials over such publications. One of the doctors quoted said that the folks who are really good at faking it are hard to catch. The next step of course is to retract such publications, and that is difficult too, as the original paper may have had a lot of media coverage if it was substantial enough to be considered big news or a breakthrough. Software can also help catch plagiarism, flagging reused images and the same exact words used in other documents.
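The "same exact words" side of that detection can be sketched with a toy word n-gram comparison. This is a deliberately minimal illustration with made-up sentences, not any real plagiarism tool; production systems also handle paraphrasing, stemming, and image forensics:

```python
def ngram_overlap(text_a, text_b, n=5):
    """Fraction of word n-grams from text_a that also appear in text_b.

    High overlap on long n-grams suggests copied wording; low overlap
    suggests independent writing."""
    def ngrams(text):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    source = ngrams(text_a)
    if not source:
        return 0.0
    return len(source & ngrams(text_b)) / len(source)

# Made-up example: the "suspect" sentence copies eleven words verbatim.
original = "the cells were cultured for ten days under identical conditions before analysis"
suspect = "the cells were cultured for ten days under identical conditions before testing began"
score = ngram_overlap(original, suspect)  # most 5-grams match
```

Exact five-word matches are hard to produce by coincidence, which is why even a crude check like this surfaces candidates for a human to review.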
The article also mentions that psychology is one area that is hard to monitor. In addition to inaccurate information being published, we also have mislabeled lab tissues, a problem that goes back a number of years. It's a real mess when researchers find out that everything they published is not authentic simply because they were supplied mislabeled tissue, and big philanthropy organizations have financed some of these projects. So again you have a ton of research that is not accurate, not due to the researcher but rather the lab tissues, and here's the lab needing to make the decision to come clean. It's a real bad spot, as nobody intentionally wrote a bad paper here; it goes back to the labeling, and it has been reported that cover-ups occur here too.
Mislabeling Work in Labs Sends Years of Cancer Research Down the Drain With Misidentified Contaminated Cells
One of the most publicized stories was the big one at Duke University with flawed data; this one made it all the way to 60 Minutes. The researcher published fiction, and his own credentials had a bit of fiction included too. What was disturbing here is that the story states they reversed the algorithms and found the "sold" theories were not true. So we have math in the picture, with software formulas. I called this entry Chapter 15 in the Attack of the Killer Algorithms series, where a researcher falsified enough material that clinical trials were set to be initiated on what was reported.
Story of Duke University - The Sad Case of Flawed Data Published in Medical Journals That Was Declared Inaccurate by 60 Minutes - Attack of the Killer Algorithms Chapter 15
Ok, so we are back to one of the topics I harp on here: the use of Killer Algorithms. It happens outside of research too, and you can check out the link below for a view of how this functions in the financial markets with "making the numbers work." I like technology, and the treatments and cures that come out of research, for sure, but there are always those who play the other side of the fiddle for profit, no matter what the cost, and that's sad. So I continue writing about "flawed data" and the need to ensure we are not all "Algo Duped" in one form or another. There's a great video at the link below with programmers and a quant discussing how this evolves from their side and how they make big money doing it, though conscience ends up winning out with most of them, as they know they are writing fictional math and formulas. Algo Duping lives amongst us.
Quants: The Alchemists of Wall Street Video Documentary - Why It Needs to Matter What Companies Do and Not Focus Only On the Price of Stock With So Called Value - Attack of the Killer Algorithms Chapter 44
Bogus research costs everyone money, as it could be information feeding the creation of new drugs or treatments, for one example. When mislabeled lab information gets combined with fictionalized research, we don't have anything to build upon. Sometimes, though, legitimate mistakes occur, and hats off to those who admit they made an error; we are still human. With investment money at stake, you do wonder whether some of the "make the numbers work" ethics found in the financial world drift over here. I might venture to say there's a pretty good chance of that...let's revisit the Duke case.
If this were not an issue, we wouldn't have Ivan Oransky's (of Reuters) "Retraction Watch" website growing with material. His site is also mentioned in the Guardian article at the source link below. I like his other site, "Embargo Watch," too, and it's interesting what shows up over there. I remember when he started that site and asked the social media world whether it was a good idea. I think it was, as new articles appear there all the time when information is publicly released ahead of embargo dates, a lot of it studies and scientific releases, and it's an interesting site to visit. As publishers we all try to watch the embargo dates, but errors do get made, and what I have seen so far are mostly "internal" errors that publish ahead of time, not so much publishers. Speaking for myself, I watch those dates very carefully when I receive embargoed information so I don't end up on his page :)
In summary, it's a good thing we have enough people out there today checking up on scientific publications, but not all cases are caught, and some of those coming to light go back a number of years. The amount of "flawed data" out there sadly seems to be increasing, and it all comes back to formulas and math, everywhere you turn. We are gaining real intelligence with predictive modeling, and when it is used and interpreted properly we gain; however, much of the data such models analyze and create combines credible data with non-credible data. Non-credible data is what I call information from social networks, speculation, and so on that has not been verified as truthful. It can give us a lot of knowledge, but bad things happen when folks combine it with credible data and take it down to an "individual scoring" process to allow or deny, say, insurance claims, or to determine eligibility for jobs. Now we have something that inaccurately attacks consumers: Killer Algorithms. Treating information that can be helpful for guidance as fully credible grounds to give or deny services is a huge danger zone.
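That danger of mixing credible and non-credible data can be shown with a toy simulation, entirely hypothetical numbers and not any vendor's real model: a "score" built from one verified signal plus one unverified, noisy web-mined signal still averages out fine at the group level, while individual-level decisions become unreliable:

```python
import random
import statistics

def score_people(people=2000, noise_weight=0.8, seed=1):
    """Toy model: each person's 'score' combines a verified signal
    (credible data, small measurement error) with an unverified
    web-mined signal that here is pure noise (non-credible data)."""
    rng = random.Random(seed)
    truth = [rng.gauss(0.0, 1.0) for _ in range(people)]        # real risk
    verified = [t + rng.gauss(0.0, 0.3) for t in truth]         # credible
    unverified = [rng.gauss(0.0, 1.0) for _ in range(people)]   # noise
    scores = [v + noise_weight * u for v, u in zip(verified, unverified)]
    # Group-level average: the noise largely cancels out.
    group_gap = abs(statistics.mean(scores) - statistics.mean(truth))
    # Individual-level error: the noise does NOT cancel for one person.
    person_err = statistics.mean(abs(s - t) for s, t in zip(scores, truth))
    return group_gap, person_err

group_gap, person_err = score_people()
# group_gap stays near zero while person_err is substantial, which is
# why group reports can look fine even when individual scoring is unfair.
```

This is the gap between the two uses: demographic-level intelligence tolerates noisy inputs, while a per-person allow/deny decision inherits every bit of that noise.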
FICO is a great example, trying to sell this bogus type of analytics in exactly this area, with predictive, mismatched data promising intelligence that can be used at the individual level, and the purpose of it is only to make money. They claim they can take your credit score, combine it with other information mined from the web, of which we don't know how much is credible and how much is not, and the overall "sale" here is to score you with all of this and determine whether you will be a patient who takes your prescriptions. Again, that's nonsense that should never be brought down to the individual level. They can use it for group predictive intelligence reports all they want, and insurance companies do that all the time when looking for demographic intelligence.
We simply don't have enough people qualified to tell the difference, and we have companies marketing such analytics for profit, writing queries and algorithms that mix credible and non-credible data and are designed to make money. Consumers have become data chasers today, mostly trying to fix all the "non-credible" information they are judged on, and the game is not slowing down at all, as this concept is beyond what most can look behind the scenes and recognize.
Again, I have no problem with predictive modeling; we get some good information from it. But the misuse of bringing it down to scoring an individual with "flawed data" needs to be fixed, as it's driving consumers crazy when we get denied services, money, etc., all through this process of selling "flawed data" backed by tons of reports that substantiate its use; of course, those reports are made up by the folks selling the algorithms :) That is why I started my series called "The Attack of the Killer Algorithms," and it's on my front page with more than 40 links to everyday examples.
Scientific "flawed data" hurts all of us, and if the motive is not money, sadly we must assume it is the rise to fame. Hopefully we will continue to have enough folks out there with computer science and coding knowledge to keep tracking down the flaws, and new processes are coming online to make faking it harder. That is in fact happening, and it's a good thing if we catch it soon enough. So I'll make this comment again about executive department heads in government needing to have "some" IT, programming, or even computer science in their backgrounds, as we are getting walloped in more ways than one in both research and the financial world, and they do meet on one common level: money. BD
Dirk Smeesters had spent several years of his career as a social psychologist at Erasmus University in Rotterdam studying how consumers behaved in different situations. Did colour have an effect on what they bought? How did death-related stories in the media affect how people picked products? And was it better to use supermodels in cosmetics adverts than average-looking women?
The questions are certainly intriguing, but unfortunately for anyone wanting truthful answers, some of Smeesters' work turned out to be fraudulent.
The psychologist, who admitted "massaging" the data in some of his papers, resigned from his position in June after being investigated by his university, which had been tipped off by Uri Simonsohn from the University of Pennsylvania in Philadelphia. Simonsohn carried out an independent analysis of the data and was suspicious of how perfect many of Smeesters' results seemed when, statistically speaking, there should have been more variation in his measurements.
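The statistical intuition here, that honest sample means should wobble by at least their standard error, can be sketched with hypothetical numbers. This is a simplified illustration of the general idea, not Simonsohn's actual analysis:

```python
import random
import statistics

def too_perfect_pvalue(reported_means, n_per_cell, pooled_sd,
                       sims=20000, seed=0):
    """Estimate how often honest sampling would produce condition means
    at least as tightly clustered as the reported ones.

    Even if every condition truly had the same mean, each reported
    sample mean should still vary by roughly sd/sqrt(n); means that
    cluster far tighter than that look 'too perfect'."""
    rng = random.Random(seed)
    k = len(reported_means)
    se = pooled_sd / (n_per_cell ** 0.5)
    observed = statistics.stdev(reported_means)
    hits = sum(
        1 for _ in range(sims)
        if statistics.stdev([rng.gauss(0.0, se) for _ in range(k)]) <= observed
    )
    return hits / sims

# Hypothetical report: ten condition means from cells of 15 people with
# a pooled SD of 1.0, yet the means barely differ from one another.
suspicious = [5.01, 5.02, 5.00, 5.03, 4.99, 5.01, 5.02, 5.00, 5.01, 4.98]
p = too_perfect_pvalue(suspicious, n_per_cell=15, pooled_sd=1.0)
# A p-value near zero means real sampling almost never looks this tidy.
```

A tiny p-value is not proof of fraud on its own, but it is exactly the kind of "less variation than statistics predicts" red flag described above, a reason to ask for the raw data.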