Failures in Research Ethics: Transportation Is Not Immune

If you are a transportation researcher, you absolutely must make yourself aware of the scandal currently embroiling the field of academic criminology.  Through either deliberate deceit or incomprehensible incompetence, a group of researchers (eleven different co-authors in total) published a series of articles with dubious findings from a custom survey.  All these articles had a single, common lead author at Florida State University who authorized the survey and managed the data analysis.  The flaws were detected by an anonymous reader, who sent e-mail messages to all co-authors asking for clarification.  One co-author looked carefully at his own paper and was disturbed enough to request that his paper, published in the leading journal Criminology, be retracted.  The paper has not been retracted; FSU has not completed its investigation; and the lead author has gone silent.  The co-author requesting the retraction has been subject to much second-guessing about his motives.  A summary of this mess has been published in the Chronicle of Higher Education.

I have refereed many papers with obvious flaws.  Most of the flaws appeared to be stupid errors rather than blatant fabrication.  However, there are still many other papers where the results are implausible but not obviously wrong.  I usually ask for clarification in those cases, but a determined author can usually convince me and the other referees that the analysis was done correctly.  The reward for publishing a paper in a respected journal is high, and the risk of being found a fraud is low.

Personally, I have never knowingly published erroneous results or participated in a research study where someone else tried to publish erroneous results.  However, early in my career I had personal knowledge of a researcher who did not pull an article from a publication queue after he was told that there was a serious bug in the statistical software he had used.  I faced a dilemma: whether or not to blow the whistle.

A few years ago I was asked to give a speech at a Tau Beta Pi initiation banquet.  The subject of the talk was ethics.  For this talk I laid out my whistle-blower dilemma and then asked the audience (mostly engineering college juniors and seniors) whether they would blow the whistle.  Almost everyone said they would.  It is easier said than done.

This incident occurred about midway through my tenure at General Motors Research Labs.  (I am blurring some details to hide the identities of those involved.)  The lead researcher had some management responsibilities and clout within my department.  I will call him Franklin.  He was assisted by a junior researcher whom I will call Harry.  Harry did much of the data analysis for this research project.  Their research was a transportation behavioral study using multinomial logit analysis, and a journal article was well along in the publication process.  It is important to mention that publishing research outside of GM was difficult.  A paper had to move through multiple approval steps and management layers, taking months.  A rejection by a journal after all this internal approval work was a blot on the researcher’s record.

Shortly after Franklin submitted the paper for internal approval, somebody suggested that I could improve my own research results by also doing a logit analysis.  I borrowed Franklin’s and Harry’s software.  This was early in the days of logit estimation with maximum likelihood, and the software was home brew.  I never found out who wrote it, but it wasn’t somebody at GM.  With my data, I got curious results.  Regardless of what order I chose, the first independent variable in the dataset was always the most significant.  The last variable was always the least significant.  Every reordering produced the same pattern.  Statistical significance had less to do with the nature of the independent variables than with the order in which they were fed to the software.

It was clear to me that all analyses done previously with this software were bad.
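A correct maximum-likelihood logit estimator should be indifferent to the order of the columns.  For readers who want to see what that sanity check looks like with today’s off-the-shelf tools, here is a minimal sketch using simulated data and the statsmodels library rather than that old home-brew program; the variable names and coefficients are invented purely for illustration.

```python
# Sanity check: multinomial logit estimates should not depend on the order
# in which the independent variables are supplied.  Illustrative only;
# the variables "cost", "time", and "comfort" are made up.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

# Three invented explanatory variables.
X = pd.DataFrame({
    "cost": rng.normal(size=n),
    "time": rng.normal(size=n),
    "comfort": rng.normal(size=n),
})

# Simulate a three-alternative choice from known utilities plus Gumbel noise.
utilities = np.column_stack([
    rng.gumbel(size=n),                                            # base alternative
    0.8 * X["cost"] - 0.5 * X["time"] + rng.gumbel(size=n),
    -0.3 * X["cost"] + 0.9 * X["comfort"] + rng.gumbel(size=n),
])
y = utilities.argmax(axis=1)

def fit(columns):
    """Fit a multinomial logit with the predictors supplied in the given order."""
    exog = sm.add_constant(X[list(columns)])   # 'const' is prepended first
    return sm.MNLogit(y, exog).fit(disp=False)

order_a = ["cost", "time", "comfort"]
order_b = ["comfort", "cost", "time"]          # same variables, different order
res_a = fit(order_a)
res_b = fit(order_b)

# Match res_b's rows back to res_a's variable order before comparing.
idx = [(["const"] + order_b).index(name) for name in ["const"] + order_a]
params_a = np.asarray(res_a.params)
params_b = np.asarray(res_b.params)[idx]
pvals_a = np.asarray(res_a.pvalues)
pvals_b = np.asarray(res_b.pvalues)[idx]

assert np.allclose(params_a, params_b, atol=1e-6)
assert np.allclose(pvals_a, pvals_b, atol=1e-6)
print("Coefficients and p-values are invariant to column order, as they should be.")
```

The buggy program I borrowed failed exactly this kind of test: reordering the columns reshuffled the significance levels, which a sound estimator would never do.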

I decided to tell Franklin about it.  Franklin showed no concern whatsoever.  When I asked what he intended to do with the submitted paper, he told me he would do nothing.  When I asked him whether they would redo their analysis, he said they would not.

I could have taken it further up the line to the department head, but I judged that this was a fight I could not win and probably did not want to win.  In a corporate environment, there is little to be gained by publicly undercutting another employee.  Franklin was then a friend and ally, and my future dealings with him would be severely compromised.

So I know it happens.  Not often, and not initially deliberate, but it happens.  Once a paper gets far enough along in the publication process, it takes a huge amount of integrity and willpower to stop it.

Initially I felt anger at Franklin.  The paper temporarily helped Franklin’s reputation, but subsequent events, including a restructuring of the department, undid any advantage he might have gained.  I later blamed GM’s culture and management practices more than I blamed Franklin.

I learned it can be a lot more difficult to blow that whistle than most people think.

Epilogue.  Franklin left GM about a year later.  He eventually landed a good research position, where he remained until his retirement.  Harry stayed at GM for his whole career.  By Detroit standards, Harry did quite well for himself.  That paper has been cited 29 times, according to Google Scholar.

Afterword.  Refereeing a journal paper is a difficult job without much, if any, reward.  Referees are pretty much the only defense our profession has against poor research.  I believe it is the author’s responsibility to convince the referee that the research has been done well.  It is also the author’s responsibility to admit to mistakes when they happen.  Even then, erroneous research will still get published.

Alan Horowitz, Whitefish Bay, October 10, 2019