• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

Why does 37 occur so often in our DNA?

I agree with your conclusion but not with the casual racism.
What's racist about recognizing the well-earned reputation for quackery from institutions in the former Soviet bloc? The Soviets invested quite a lot of time and energy in pushing pseudoscience through the 1970s and then backed off for a while. Putin picked up on this for much the same reason, and in the last ten years Russian and Russian-aligned scientists have come to lead the scientific world in quackery. There's nothing racial about that, it's pretty much just the Russians trolling everyone.
 
What's racist about recognizing the well-earned reputation for quackery from institutions in the former Soviet bloc? The Soviets invested quite a lot of time and energy in pushing pseudoscience through the 1970s and then backed off for a while. Putin picked up on this for much the same reason, and in the last ten years Russian and Russian-aligned scientists have come to lead the scientific world in quackery. There's nothing racial about that, it's pretty much just the Russians trolling everyone.
You tar all Kazakh scientists (and now include Russian scientists) with the same brush, you are derogatory about the country of Kazakhstan for no sound reason, and you make sweeping generalisations. I would suggest taking a moment to consider whether this is behaviour typical of a rational person or of a racist.
 
You tar all Kazakh scientists (and now include Russian scientists) with the same brush, you are derogatory about the country of Kazakhstan for no sound reason, and you make sweeping generalisations.
Well, there was that time the president of Kazakhstan asked his country's scientists to discover the secret of immortality, and then there were those two guys who claimed to have developed a solar panel that was 80% efficient before reviewers tore their paper apart. Back in the 90s there was a shit ton of scientists from Uzbekistan and Tajikistan pushing articles about gamma ray bursts having provable extraterrestrial origin, always -- coincidentally -- around the same time they were plugging for more funding for radio telescopes or renovations thereof.

Not saying the old soviet bloc is unique for its quackery, they're just a lot more likely to have it published.
 
I'm curious: is there any systematic evidence for a bias in the validity of published studies? Has anyone published a paper looking for significant differences in publishing patterns, or proposed hypotheses to explain those patterns? Could the issue be financial, with journals having fewer resources and staff available for editing or validating peer-reviewed papers? Could it be down to political pressure?

Has anyone compared the incidence of these papers with the quackery and commercial bias occasionally practised by western drug companies? (The olanzapine debacle springs to mind)
 
Back in the 90s there was a shit ton of scientists from Uzbekistan and Tajikistan pushing articles about gamma ray bursts having provable extraterrestrial origin, always -- coincidentally -- around the same time they were plugging for more funding for radio telescopes or renovations thereof.

Gamma ray bursts DO have provable extraterrestrial origin. That is, they don’t originate on Earth (well, the really big ones certainly don’t).

Did your source confuse “extraterrestrial” with “intelligent alien?”
 
Gamma ray bursts DO have provable extraterrestrial origin. That is, they don’t originate on Earth (well, the really big ones certainly don’t).

Did your source confuse “extraterrestrial” with “intelligent alien?”
There wasn't much confusion in those cases, the researchers were trying to make the case that GRBs were evidence of some kind of massive interstellar war, arguing that their intensity was too great to be naturally occurring.
 
Never forget to factor in family-wise error. What are the odds that *any* prime number will come up the number of times we see 37?
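To put a rough number on the family-wise point: even if each individual prime had only a small chance of looking "special" by coincidence, the chance that *some* prime does is much larger. A minimal sketch (the 1% per-prime rate and the cutoff of 100 are made-up numbers purely for illustration):

```python
# Family-wise error illustration: assume each of the 25 primes below 100
# independently has a 1% chance of appearing "suspiciously often" by chance.
p_single = 0.01          # assumed per-prime false-positive rate (invented)
n_primes = 25            # number of primes below 100
p_any = 1 - (1 - p_single) ** n_primes
print(f"P(at least one prime looks special) = {p_any:.3f}")
```

So a 1-in-100 coincidence for any single prime becomes better than a 1-in-5 chance that some prime stands out.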
 
Benford's law states that the probability of 1 occurring first in a number chosen at random from a given naturally occurring numerical set is higher than for 2, which has a greater probability than 3 etc. This law is useful for indicating whether financial and scientific data has been artificially tampered with. Other such probabilistic rules exist and there may be yet more to be discovered. For example, Fisher information theory predicts that there should be about as many physical constants between 0 and 1 as between 1 and infinity - no matter what units are chosen.
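Benford's law is easy to check numerically. A small sketch, using powers of 2 as a stand-in for a "naturally occurring" set spanning many orders of magnitude (a classic Benford-conforming example):

```python
import math
from collections import Counter

# Benford's law: P(leading digit = d) = log10(1 + 1/d) for d = 1..9.
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

# Compare against the leading digits of 2^1 .. 2^1000.
digits = Counter(int(str(2**n)[0]) for n in range(1, 1001))
observed = {d: digits[d] / 1000 for d in range(1, 10)}

for d in range(1, 10):
    print(d, f"theory={benford[d]:.3f}", f"powers_of_2={observed[d]:.3f}")
```

The leading digit 1 turns up about 30% of the time in both columns, far more often than the 11% a naive uniform guess would suggest.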
 
I'm curious: is there any systematic evidence for a bias in the validity of published studies? Has anyone published a paper looking for significant differences in publishing patterns, or proposed hypotheses to explain those patterns? Could the issue be financial, with journals having fewer resources and staff available for editing or validating peer-reviewed papers? Could it be down to political pressure?

Has anyone compared the incidence of these papers with the quackery and commercial bias occasionally practised by western drug companies? (The olanzapine debacle springs to mind)
I just had the thought that you could probably come up with a predictive model for the probability of researchers engaging in fraud in relation to a country or political bloc's prevailing politics. Generally, for example, you could expect researchers in the United States to be unusually likely to discover a lack of correlation between, say, gun ownership and the rate of gun violence, or between the use of beta blockers and positive outcomes for clinical depression.

But that's probably the wrong tool for quantifying and analyzing this effect, since it implies there's a STATISTICAL pattern to quackery. You can't really apply statistics to something that is primarily caused by the interaction of personalities in the context of money, politics, journalism and history. It's more complicated than that.
 
I just had the thought that you could probably come up with a predictive model for the probability of researchers engaging in fraud in relation to a country or political bloc's prevailing politics. Generally, for example, you could expect researchers in the United States to be unusually likely to discover a lack of correlation between, say, gun ownership and the rate of gun violence, or between the use of beta blockers and positive outcomes for clinical depression.

But that's probably the wrong tool for quantifying and analyzing this effect, since it implies there's a STATISTICAL pattern to quackery. You can't really apply statistics to something that is primarily caused by the interaction of personalities in the context of money, politics, journalism and history. It's more complicated than that.

This was exactly the kind of meta-study I had in mind. The statistics would be complex, but I'm not sure they would be impossible for a sufficiently skilled and resourced researcher; you'd just have to have someone who fully understood how to go beyond the basics of looking for trends based on two or three variables and how to correctly control for and weight multiple covariates.

One could identify measures of validity such as replicability (which after all is already pretty much established as a benchmark of scientific merit) or numbers of citations (easily possible given the sort of search functions available in most academic libraries these days) and look for significant differences between individual countries and aggregated political groupings, significant differences between academic disciplines, correlations based on population size and various metrics of socioeconomic success, national crime rates, availability and nature of research grants, whether they are publicly or privately funded, etc.

I'm actually mildly curious if anything along these lines already exists in the literature and I'm not at all sure it is out of the question.

It would be a huge undertaking, but one might imagine a wealth of possible hypotheses becoming testable, from broad questions, such as looking for significant differences between political systems (do we see significant differences in the rates of fraudulent publishing between privately and publicly funded research, and does that relationship hold between broadly capitalist and socialist systems?), to more specific and detailed modelling of optimal conditions for both pure and applied research.
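The "control for and weight multiple covariates" step can be sketched numerically. The toy below is entirely synthetic (the variable names "pressure", "gdp" and "rate" are invented for illustration; real bibliometric data would be far messier): a confounder correlated with the predictor of interest is included in an ordinary least squares fit, which recovers the true effect where a naive two-variable trend would mislead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: "retraction rate" driven by publish-or-perish pressure,
# but pressure is itself correlated with a confounder (here labelled gdp).
n = 500
gdp = rng.normal(size=n)
pressure = 0.8 * gdp + rng.normal(size=n)       # correlated with confounder
rate = 2.0 * pressure + 1.5 * gdp + rng.normal(size=n)

# Multiple regression: include the confounder as a covariate.
X = np.column_stack([np.ones(n), pressure, gdp])  # intercept + both predictors
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
print("estimated pressure effect:", round(coef[1], 2))  # should land near 2.0
```

Leaving `gdp` out of `X` would fold part of its effect into the pressure coefficient, which is exactly the kind of trap the post is describing.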
 
In Russia, the term "British scientist" is highly risible as it's viewed as being synonymous with quack research. Not sure what the Russians make of American scientists.

I'm wondering if the frequency of numbers occurring in DNA when analysed in a consistent way follows a Zipf distribution. If so, some number has to be the most probable and 37 just might happen to be it.
 
Whenever you see a really improbable pattern, your question shouldn't be "What are the odds of this pattern happening at random?" It should be "What are the odds of any pattern I would find equally or more notable occurring at random?"
 
I'm curious: is there any systematic evidence for a bias in the validity of published studies? Has anyone published a paper looking for significant differences in publishing patterns, or proposed hypotheses to explain those patterns? Could the issue be financial, with journals having fewer resources and staff available for editing or validating peer-reviewed papers? Could it be down to political pressure?

Has anyone compared the incidence of these papers with the quackery and commercial bias occasionally practised by western drug companies? (The olanzapine debacle springs to mind)


Or the cosmetics industry... Some of the words they come up with to describe ingredients.... pffft.
 
This was exactly the kind of meta-study I had in mind. The statistics would be complex, but I'm not sure they would be impossible for a sufficiently skilled and resourced researcher; you'd just have to have someone who fully understood how to go beyond the basics of looking for trends based on two or three variables and how to correctly control for and weight multiple covariates.

One could identify measures of validity such as replicability (which after all is already pretty much established as a benchmark of scientific merit) or numbers of citations (easily possible given the sort of search functions available in most academic libraries these days) and look for significant differences between individual countries and aggregated political groupings, significant differences between academic disciplines, correlations based on population size and various metrics of socioeconomic success, national crime rates, availability and nature of research grants, whether they are publicly or privately funded, etc.

I'm actually mildly curious if anything along these lines already exists in the literature and I'm not at all sure it is out of the question.

It would be a huge undertaking, but one might imagine a wealth of possible hypotheses becoming testable, from broad questions, such as looking for significant differences between political systems (do we see significant differences in the rates of fraudulent publishing between privately and publicly funded research, and does that relationship hold between broadly capitalist and socialist systems?), to more specific and detailed modelling of optimal conditions for both pure and applied research.
In the UK, with tenure having been largely abolished, there is a great deal of pressure on academic researchers to publish frequently or lose their jobs. There is a tendency for low quality research to be published in whatever journal will accept it. The quality doesn't matter to management, only the quantity. Hence the Russian joke about British scientists, based on research papers into subjects such as "Do duck quacks echo?" The ultimate culprit for the assault on the academic system was Mrs Thatcher, who did research in the chemistry of soft-scoop ice cream before she decided to impose herself on the nation as a politician. The bitch is dead but her legacy lingers on.
 
That the prime number 37 seems to occur so many times is very strange, but is it just the human disposition to seek patterns and meaning that's going on here? I suspect that explanation to be the case. Like a conspiracy theory, a form of confirmation bias if not selection bias is probably at play but, dag nabbit, it is an intriguing observation.
Agreed on all points.
 
Not sure what the Russians make of American scientists.
I suspect that in Russian the term for "American scientist" is usually meaningless except as the punchline of a joke.

I'm wondering if the frequency of numbers occurring in DNA when analysed in a consistent way follows a Zipf distribution. If so, some number has to be the most probable and 37 just might happen to be it.
Honestly, I'm pretty sure the most common number would turn out to be either 2 or 4. I suspect 37 is cherrypicked from carefully filtered data because it seems less likely to be random.
 
I think the whole story is silly, because if you analyze any complex structure for patterns you are bound to find repeating numbers of different values.
 