Saturday, April 5, 2025

Trump isn't a threat to our democracy, he's the proof that ours is stronger than Europe's!

By now, you’ve probably heard the usual chorus from the American left: Trump is a threat to democracy. It’s the go-to refrain - part panic, part performance art. But here’s the thing: if Trump is the threat, democracy seems to be handling it like a champ. Impeached twice, indicted four times, dragged through more courts than an NBA free agent, and still, he won the popular vote and the Electoral College. That’s not a threat to democracy. That’s democracy dunking on its critics in slow motion, from the free-throw line.


Compare that to what’s going on across the pond in the so-called enlightened bastion of liberal democracy: Europe. Over there, if the populists start winning, the courts jump in to make sure they don’t.


Let’s take a quick tour, shall we?


France just disqualified Marine Le Pen, one of the top contenders for the presidency in 2027, over charges of “embezzling EU funds” via (checks notes) fake job contracts. Now, I’m not saying she’s innocent, but let’s be real: it’s France. The entire bureaucracy runs on fake jobs. This wasn’t about corruption. It was about stopping her from winning. The establishment couldn’t beat her at the ballot box, so they went with Plan B: ban her from running.


Romania went one better. Their leading populist, Călin Georgescu, actually won the first round of the presidential election. So what did the Constitutional Court do? It annulled the whole election, claiming “foreign interference.” How convenient. Apparently democracy is only valid if the right people win.


Germany’s AfD (Alternative für Deutschland) had to muzzle its lead candidate, Maximilian Krah, after he said something controversial about the SS (which, yes, is a stupid thing to do in Germany). But the real story here is the way the establishment pounced, using internal party rules and public outrage to silence dissenting views before the voters ever got a chance.


And over in Bosnia, Milorad Dodik, President of Republika Srpska, had an arrest warrant slapped on him by Bosnia’s top court for “undermining the constitutional order.” That’s Balkan for “we don’t like what you stand for.” Interpol even said “nah, we’re good” and refused to issue a red notice. When Interpol tells you to cool it with the legal theatrics, maybe it’s time to reevaluate.


Now, contrast all that with the United States. The left pulled out every procedural stop imaginable to take Trump off the board: impeachments, indictments, lawsuits, you name it. They tried to bar him from the ballot in Colorado. They tried to lock him up before Super Tuesday. And yet… he won. Again.


That’s not authoritarianism. That’s resilience. That’s the will of the people surviving a legal obstacle course that would make Kafka weep.


The irony is thick: The same folks who cheer when Le Pen gets disqualified in France or when AfD candidates get muzzled in Germany are the ones screaming that Trump winning fair and square is a “threat to democracy.” What they really mean is: Democracy is great, unless you vote for the wrong guy.


The truth is, the American left doesn’t hate Trump because he’s undemocratic. They hate him because he is democratic and he keeps winning. He says what he’ll do, does it when elected, and then wins again because people like it. That’s called a mandate. But the problem for the progressive elite is that if every man really does get a vote, their side keeps losing. So they use legal gimmicks to tip the scales.


They don’t fear fascism. They fear the franchise.


So the next time someone starts wringing their hands about Trump being a threat to democracy, remind them: If democracy can survive him, it can survive anything. But if you keep trying to disqualify your opponents instead of beating them at the ballot box, maybe you’re the threat we should be worried about.


Just ask Europe.

Monday, March 10, 2025

Why Trump's slashing of NIH overhead rates is a good thing for science!

 

On February 7th, 2025, the National Institutes of Health (NIH) announced a sweeping policy change that sent shockwaves through the academic research community: overhead reimbursements on extramural grants will now be capped at 15% for all domestic institutions.

This is a seismic shift for elite universities that have grown accustomed to overhead rates often exceeding 60%, and their reaction has been exactly what you would expect from institutions suddenly cut off from a major revenue stream. Universities are in an uproar, fearing massive budget shortfalls and disruptions to longstanding financial models.

For decades, concerns have been raised with NIH’s overhead reimbursement system and how it shapes the culture of academic science. The intent behind overhead payments is to cover the hidden institutional costs of conducting research—expenses such as administrative support, facilities, and maintenance. However, in practice, the system has metastasized into something far beyond that.

Universities have come to depend on overhead as a critical and flexible source of fungible income, and this has influenced institutional priorities in ways that are not always aligned with the best interests of research. The current funding structure has inadvertently shifted priorities, placing a premium on securing large grants, often at the expense of fostering intellectual curiosity and risk-taking in research.

With the recent decision to cap NIH overhead rates at 15%, there is understandable concern and uncertainty. This shift will be painful for many institutions that have planned their budgets around expectations of significant overhead reimbursements that may not materialize, and adjusting to this new reality will be difficult.

While the transition may be abrupt, it provides an opportunity to reassess the way research funding is structured. By moving towards a model more similar to that of the National Science Foundation (NSF), where direct costs are emphasized and grant sizes are more modest, we have the chance to realign incentives in medical science, away from massive, bureaucratic studies toward more innovative and creative research.

This moment, though painful, offers a chance to correct a long-standing imbalance. While universities adapt, this shift could ultimately benefit the broader scientific community by prioritizing research quality over institutional financial strategies. The challenge now is to rebuild a funding culture that values discovery and insight, rather than continuing to reward the pursuit of ever-larger grants because of the overhead they bring.

NIH Overhead: The Silent Engine of Medical School Profits

The way overhead works is that if a researcher is awarded $100,000 in direct costs, the university receives an additional percentage of that amount to cover the “indirect costs” of research: administrative support, office space, utilities, IT support, library facilities, building maintenance, safety and research protection services, and so on. Before Friday, that extra would have been more than $60,000 at elite institutions; now it has been cut to $15,000.
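The arithmetic above is simple enough to sketch in a few lines of Python (the 60% and 15% rates come from the figures in this post; the function itself is purely illustrative):

```python
def total_award(direct_costs: float, overhead_rate: float) -> float:
    """Total outlay for a grant: direct costs plus the university's overhead cut."""
    return direct_costs * (1 + overhead_rate)

direct = 100_000  # direct costs awarded to the researcher

# Elite-institution rate before the cap (often exceeding 60%)
old_total = total_award(direct, 0.60)   # roughly $160,000 in total

# New flat 15% cap for all domestic institutions
new_total = total_award(direct, 0.15)   # roughly $115,000 in total

print(f"Overhead before: ${old_total - direct:,.0f}; after: ${new_total - direct:,.0f}")
```

The difference - about $45,000 per $100,000 of direct costs at a formerly 60% school - is the money universities are now fighting to keep.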

Disparities in negotiated overhead rates have exacerbated inequality among institutions. Elite universities, such as Harvard, Yale, and Johns Hopkins, have secured overhead rates exceeding 60%, while the average overhead rate has reportedly hovered around 27%, leaving many institutions to operate with significantly less financial cushion.

Negotiating individual overhead rates separately with every institution is a complex undertaking, which led even President Obama to consider imposing a flat rate for all sites to promote greater equity between institutions. As it stands, well-endowed universities only get richer: not only do they win more grants, but that advantage is compounded by their higher overhead rates, supposedly justified because they have fancier equipment and more expensive facilities to maintain.

While 15% seems low, it should be pointed out that foreign entities are only eligible for 8% overhead on NIH grants, and yet they still eagerly accept such funding, implying that those universities recognize that NIH research funding clearly benefits them, even without enormous overhead.

Unlike most academic departments, medical schools do not have to pay the full salaries of their own research professors, as NIH grants are expected to cover them. By contrast, NSF grants, which fund non-medical scientific research, do not pay faculty salaries during the academic year, as NSF considers research to be part of one’s normal institutional duties. This is yet another reason professor-to-student ratios are much higher in medical schools than elsewhere in the university.

This fact also leads to a system where salaries are generally higher for medical researchers than those of other scientists. Why?  Because the salaries, while determined by the university, are paid largely by NIH, and as the salary goes up, so does the magnitude of the university’s overhead payment.  By contrast, in Mathematics or Anthropology, salary costs are typically paid directly by the university.

The NIH Overhead Game: Bigger Grants, Bigger Profits, Worse Science

Indirect costs are real, but the current system incentivizes researchers to seek ever-larger grants to sustain institutional budgets. Every new grant brings a fresh injection of overhead funds, which universities use to prop up budgets across the board.   The more money a professor brings in, the more overhead the university gets, and this relationship is key to career advancement for medical faculty.

This led to an institutional preference for bigger, more expensive grants rather than better science. Small, hypothesis-driven research and theoretical modeling were devalued in favor of behemoth multi-center data collection studies that guarantee massive long-term overhead payouts, and ultimately faculty came to spend more time writing grant proposals than doing research.

Worst of all, the large multi-center studies prioritized because of their big budgets are not only the least creative research scientifically but also the worst for education and science culture. Because of their scale, those studies emphasize homogenizing research protocols across many sites, making sure everyone does everything in exactly the same way so the data can be easily combined for joint analysis. That guarantees small, incremental increases in knowledge but renders revolutionary discoveries and creative original thinking virtually impossible.

This leaves little room for the people doing the work to engage in real scientific thinking. Instead, scientists become little more than technicians: processing samples, running prepackaged statistical analyses, executing pre-defined protocols akin to recipe books. This is corrosive to the culture of academic science, as it concentrates thinking in a small number of powerful voices, with everyone else following orders. While large-scale projects provide stability, they can prioritize institutional security over groundbreaking innovation.

The system rewards universities not for good science, but for maximizing grant revenue streams. The more faculty applied for and secured NIH grants, the more overhead flowed into institutional budgets. Medical school faculty are often discouraged from applying for NSF or private grants that do not pay equally high overhead rates; some universities, like the California Institute of Technology, have refused to accept grants that pay less than 20%, for example.

Many years ago, a senior administrator came around to the basic science departments in our medical campus and commented that although our papers, grants and awards were as good as or better than the other top tier medical schools, the size of our grants was smaller, and hence so was the resulting overhead.

His response was to ask what he could do to encourage us to write more grants - not better grants, not grants for more impactful research, but just more grants, because he needed more overhead to keep the campus in the black.  If it were only funding the indirect costs of the funded research, this complaint would make no sense, would it?

This past week, the Stand Columbia Society, a group of Columbia University affiliates dedicated to “advocating for Columbia University’s core mission of excellence in teaching, learning, research and patient care…” sent out an alarmed message claiming that the 15% overhead cap could cost Columbia anywhere between $114 million and $202 million. They did not express concern that cutting overhead would hurt scientific research, but rather alarm about its devastating effects on non-scientific programs.

In their own words, the $348.9 million of indirect costs Columbia received in 2024 were fungible: they “effectively cross-subsidize diverse academic programs” that could not sustain themselves, fund graduate tuition subsidies and faculty start-up and retention bonuses, and “indirectly subsidize the arts and humanities which receive no comparable grant support. This cross-subsidization is essential.” If Columbia could afford to use NIH overhead to pay for all these things outside of science, why should taxpayers keep footing the bill as research costs?

If the government wants to subsidize such things, let them do it explicitly, rather than through the backdoor. This highlights long-standing concerns that high overhead rates may have provided financial flexibility beyond the direct costs of research.

And it’s certainly nothing unique to Columbia.  Some institutions even give investigators some of their indirect costs back to use as “unrestricted funds” to be spent on whatever unexpected expenses come up in their labs.  This suggests that their universities did not need the full amount of overhead paid to cover research costs either. 

To be completely fair, universities are simply playing the game according to rules set up by the government.  It is completely reasonable that they take this approach when the system supports backdoor subsidies. By negotiating these high overhead rates, universities benefited for decades, but now the system is undergoing a necessary correction.

Big Genetics, Big Money, No Breakthroughs: How NIH Funding Went Wrong

For decades, I was a vocal critic of former NIH Director Francis Collins, long before it became fashionable to do so, arguing that genome-wide association studies (GWAS) and the Human Genome Project (HGP) would fail to deliver meaningful clinical breakthroughs. I was right. The promises of personalized medicine and revolutionary treatments never materialized. Instead, NIH doubled down on bigger and bigger genetic studies, not because they were finding anything, but because they had no choice but to keep going.

Collins had staked his reputation (and NIH’s budget) on the idea that sequencing the human genome would transform medicine. But when smaller GWAS studies failed to uncover actionable genetic variants, they couldn’t admit defeat. Instead, they scaled up, arguing that bigger sample sizes were needed. When those studies failed, they scaled up again. At this point, the Human Genome Project and GWAS were too big to fail—NIH had to keep funneling money into them to justify the massive initial investment.

Universities were all too happy to play along, because these massive projects were ideal for securing huge indirect cost returns. Unlike small-scale, investigator-driven research, GWAS and other massive multi-omics projects required vast infrastructure, computing power, data coordination centers, and armies of administrators - all of which translated into higher indirect cost payouts for universities.

But the scientific returns never matched the investment. If a study needs half a million subjects to detect an effect, that effect is too small to matter. The NIH’s focus on big science crowded out hypothesis-driven, high-risk, high-reward research, replacing it with data hoarding and statistical fishing expeditions.

The new NIH overhead cap may finally force a reckoning. Without the perverse incentives of massive indirect cost payouts, universities will no longer have a financial motive to prioritize scale over substance. NIH itself will have to rethink its addiction to big science, redistributing funds toward smaller, more intellectually rigorous projects that actually test ideas, rather than just generating more data.

Maybe, just maybe, this reset will bring us back to the kind of science that actually moves medicine forward.

Why the Trump Administration Had to Rip Off the Band-Aid on NIH Overhead

It is fair to argue that this policy change was too radical and too fast, and that universities should be given time to adjust.  Their budgets have been developed on the assumption they would receive a predetermined amount of overhead that suddenly seems unlikely to materialize. While I strongly agree with the decision to reform the overhead system, to improve the culture of science, and to set overhead rates to a flat universal rate, changing it suddenly on a Friday afternoon without warning could have serious short-term consequences, especially for universities that lack a large endowment to fall back on. 

Even for rich elite universities like Columbia, unexpectedly losing $200,000,000 is no walk in the park. Personally, I might have suggested that the rate cut apply only to newly awarded grants, for example, to give universities some time to adjust gradually to their new reality. A four-fold rate cut from 60% to 15% might better have been phased in gradually, allowing the system to adjust and giving universities time to prepare by pursuing other sources of funding and modifying their long-term budget plans accordingly.
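To make the phase-in alternative concrete, here is one hypothetical schedule - purely my own illustration, not anything that has actually been proposed - stepping the rate down linearly from 60% to 15% over four budget years:

```python
def phase_in_schedule(start_rate: float, end_rate: float, years: int) -> list[float]:
    """Linear step-down from the old overhead rate to the new cap, one step per year."""
    step = (start_rate - end_rate) / years
    return [round(start_rate - step * y, 4) for y in range(years + 1)]

# 60% -> 15% over four years
print(phase_in_schedule(0.60, 0.15, 4))  # [0.6, 0.4875, 0.375, 0.2625, 0.15]
```

Under a schedule like this, universities would see the cut coming years in advance and could adjust their long-term budgets, rather than absorbing the full shock in one Friday afternoon.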

Normally, when dealing with addiction, you don’t just rip away the supply overnight; you wean people off. But the Trump Administration did not give universities methadone; they threw them straight into detox.

The backlash from elite universities was immediate, and as expected, they’ve turned to the courts to protect their financial windfall. Earlier this week, a federal judge temporarily blocked this rate cut following a lawsuit filed by 22 states.  If history is any guide, the higher ed lobby will continue to push for delay or even reversal of this reform before it can take full effect. Whether the Trump administration sticks the landing will depend on whether they anticipate this resistance and hold firm against the usual DC pressure.

Given the resistance to change of any kind in Washington, it is likely that the Trump administration felt that the only way to make sure these overhead rates really do get cut is to simply do it suddenly and unexpectedly before the system could rally to stop it.  After all, when they pursued a similar option in 2017, it was quickly quashed.

The Overhead Gravy Train Just Stopped—And Universities Are Scrambling

The professoriate and university administrators are out in full force challenging the cuts.  Several recent articles have amplified the outrage from medical schools, claiming that these cuts will cripple research and cost lives, but when you break down their arguments, it’s clear they are more worried about losing money than losing scientific progress.

For example, the Association of American Medical Colleges declared that the cuts would “diminish the nation’s research capacity, slow scientific progress, and deprive patients, families, and communities across the country of new treatments, diagnostics, and preventative interventions.”

NIH is still fully funding direct research costs: salaries, lab supplies, equipment, and experiments. By reducing overhead payouts, more funds will be available for new research projects, potentially increasing the number of grants awarded and diversifying the range of funded science.
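A rough sketch of why lower overhead can mean more funded science, not less (the 60% and 15% rates are from above; the budget and grant size are hypothetical round numbers chosen for illustration):

```python
def grants_fundable(budget: float, direct_per_grant: float, overhead_rate: float) -> int:
    """How many grants a fixed budget covers once overhead is added to each one."""
    cost_per_grant = round(direct_per_grant * (1 + overhead_rate), 2)
    return int(budget // cost_per_grant)

budget = 1_000_000_000   # hypothetical $1B funding pool
direct = 500_000         # hypothetical direct costs per grant

at_60 = grants_fundable(budget, direct, 0.60)  # 1250 grants
at_15 = grants_fundable(budget, direct, 0.15)  # 1739 grants

print(f"Same budget funds {at_15 - at_60} more grants at 15% overhead")
```

Same dollars, nearly 40% more funded projects: that is the mechanism behind the claim that the cap can increase the number of grants awarded.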

What they really mean is that universities will have less “fungible” money to spend on non-research priorities, and that the institutions that got the highest overhead rates (like Harvard and Stanford) will no longer have such an enormous financial advantage over smaller schools.

More Reforms Are Coming—Here’s What NIH Should Do Next

Overhead reform is a good start, but NIH’s problems run deeper. For decades, it has funneled billions into bloated megaprojects, entrenched an old boys’ network of grant recipients, and allowed universities to profit off taxpayer-funded research while bearing none of the financial risk. If Trump wants real reform, he’ll need Congress—because one of the biggest problems can’t be fixed by executive action alone.

The Bayh-Dole Act of 1980 is why universities can patent federally funded discoveries. NIH pays for the research—salaries, equipment, and overhead—but when those projects yield lucrative patents, institutions keep the profits. Columbia, for example, has made nearly $790 million from Richard Axel’s co-transformation patents. This isn’t a scandal - it’s how the law works. But there’s something wrong with a system where taxpayers take all the financial risk, yet universities get all the reward.

Bayh-Dole reform would require new legislation, but solutions exist. Universities could be required to reinvest a portion of patent profits into federally funded research or let NIH recover its investment before institutions take full control of revenues. Another option is partial government ownership of patents, perhaps through President Trump’s proposed sovereign wealth fund, ensuring taxpayers see a return. If universities want full control of their discoveries, they should fund the research themselves.

Beyond patents, NIH’s funding model needs a reset. A small group of entrenched researchers controls a disproportionate share of grants, locking out early-career scientists and smaller institutions. Some researchers are named investigators on ten or more grants simultaneously, creating a monopoly that stifles competition. Just as market monopolies limit innovation, capping the number of simultaneous NIH grants per investigator would open funding to a broader range of researchers with a wider range of ideas. This would be consistent with global norms: in Finland, a given investigator can only be named on one active grant at a time from the Academy of Finland, their largest domestic science funder.

But the biggest problem is Big Science bloat. NIH has spent decades dumping money into massive multi-center studies that generate endless data but few breakthroughs. When early GWAS failed to deliver, NIH didn’t pivot - it doubled down, demanding bigger sample sizes and larger budgets. Universities adapted to this model because it guaranteed long-term funding. The result? A system that rewards predictability over risk-taking, institutional security over transformative science, minor incremental progress over potential revolutionary breakthroughs.

NIH needs to break this cycle. Instead of $40 million for one bureaucratic megaproject, fund forty $1 million grants supporting high-risk, high-reward ideas. The safest science is rarely the most important - but under NIH’s current structure, it’s most likely to get funded.

Part of the problem is that NIH bureaucrats, not scientists, too often set the agenda. Funding priorities are increasingly driven by Requests for Applications (RFAs), pre-determined categories dictated by NIH program officers who aren’t actively conducting research. This governmental central-planning approach pushes politically safe or bureaucratically favored scientific priorities over independent, investigator-initiated proposals. The best way to fix this? Reduce RFA-driven funding and restore competition, letting the marketplace of ideas, rather than government administrators, set the agenda for the future of medical discovery.

Finally, there’s the issue of transparency. Universities rely on NIH overhead payments to sustain research infrastructure, but those funds have also been used strategically beyond science. If overhead is covering non-research expenses, the public has a right to know. The answer isn’t eliminating overhead—it’s full disclosure of how taxpayer dollars are spent.

The backlash to Trump’s overhead cap proves how deeply embedded NIH money is in university budgets. But universities aren’t the problem - the system is. With its built-in cycle of government dependency, NIH has spent decades rewarding institutional size over scientific creativity.

If Congress wants real reform, it should start with Bayh-Dole—ensuring taxpayers see a return on investment. NIH should end monopolized funding, prioritize scientific competition over bureaucratic stability, and demand transparency in spending.

This isn’t an attack on universities, it’s a fight to save science from bureaucratic inertia. The best institutions will thrive. The bloated ones will have to adapt. Either way, the era of unchecked, runaway NIH spending is coming to an end.

No, NIH Cuts Won’t Kill Science—They’ll Save It

The NIH overhead cap is not a crisis, it’s a long-overdue course correction. We academics face a painful period of adjustment, but this reform is an opportunity to prioritize real scientific discovery over bureaucratic excess.

By shifting funds away from administrative overhead and into direct research costs, NIH is restoring a healthier, more competitive funding model - one that will reward creativity, risk-taking, and scientific excellence instead of conservative megaprojects that fuel institutional bloat. This shift will allow NIH to fund more projects, support more investigators, and drive more genuine breakthroughs with the same budget, rather than further concentrating resources in a handful of elite institutions.

The best universities - those that truly value the disinterested search for truth - will adapt. Some will streamline operations, seek private partnerships, or restructure their funding models to be less reliant on NIH subsidies. Others may embrace industry collaborations, fueling greater private-sector innovation and breaking free from the culture of dependence on government-controlled research dollars.

For too long, NIH spending has sustained a culture of administrative excess instead of fueling true scientific progress. That era is ending. The universities that embrace innovation will lead the future. The ones that cling to the old system will fall behind. Science isn’t driven by government payouts - it’s driven by daring ideas, visionary research, and fearless intellectual pursuit. It’s time to fund it that way.

 

Wednesday, May 20, 2020

Lockdown cannot last forever - is mass civil disobedience on the horizon?

Personally, I am dealing with lockdown as well as a single person in isolation with zero social contacts for 2.5 months can - it's disorienting and dystopic, to be sure. But I am able to work by telecommuting, unlike most people, and I have what I need to survive and minimally entertain myself. I feel a bit like Gus the Polar Bear from the Central Park Zoo: stuck in a confined habitat with no alternative on the horizon, but with access to food, water, and basic necessities, and making the best of an intolerable situation.
Having said that, I also realize that nothing I care about will be coming back any time soon, whether or not we stay on lockdown. Music won't be back for a long time, and neither will sports or international travel. So for me, whether or not I can go to the book store or sit 6 feet away from someone in a bar or restaurant (what the hell is the point of that, when the only reason to go to a bar is to be social?!), the dystopia is virtually guaranteed to last an unacceptably long time.
That said, based on what I hear from friends and family, not to mention what we can all plainly see in social media (if you get out of your bubbles), I do not believe that Americans will agree to stay on government-imposed lockdown much longer, nor will they submit to a second lockdown, should cases start to rise again, despite the consequences and the risks.
The economic and psychological consequences are simply unacceptable to an enormous swath of American society, and people will be willing to take the risk of catching a nasty virus, and even to accept that it means a lot of people of advanced years, and those with pre-existing conditions will die a bit prematurely. Nobody is happy about that, but the virus is part of nature, and nature will take its toll.
Flattening the curve is about controlling the rate of infections, not about preventing them, after all. And the psychological and economic costs are becoming unacceptable to many. Government cannot impose its will indefinitely. And the public has been a very good sport in trying to do its part to slow the spread of this virus. But at some point you have to say "fuck it", because despite our best efforts, the virus is everywhere now, and there really isn't any possibility of eliminating it until a vaccine or cure is available - and that is not going to happen in a time-frame that people are going to find acceptable.
I am fine with staying in my apartment and having groceries and other goods delivered, as I have a salary, and can work from home. I have to accept the abhorrent reality that even if people refuse to continue to "shelter in place", Madison Square Garden won't be open for hockey, there will be no orchestras or chamber music ensembles, and international travel won't be a practical option with governments around the world continuing to flex their atrophying muscles by imposing quarantine conditions on arrivals from abroad. Even if we end "shelter in place" tomorrow, my life won't be any better as the things I love involve crowds and airplanes.
I am not one of the people clamoring for a return to barber shops (my last haircut was more than 3 years ago anyway) or socially distanced restaurants or bars (social distancing defeats the entire purpose of going to such places). But as an objective observer, with lots of friends on all sides of the political spectrum, I just don't believe America will agree to go along with government-imposed lockdowns for much longer, even if it puts grandma at risk.

Friday, March 13, 2020

Why do so many infectious diseases seem to come from China?

Everyone needs to stop blaming everyone else for this virus.  Let’s talk facts and reason here.

I have argued for the last 25 years that we invest way too much money in studying the trivial effects of genes on chronic diseases, to the detriment of infectious disease research. Infectious diseases have only been in check for 100 years or so, mostly because of improvements in sanitation in the Western world, and to a lesser extent, antibiotics, vaccines, and antivirals. But bacteria, viruses, and parasites are also subject to natural selection, and are adapting by random mutation to survive despite our interventions. Evolution is a powerful force, helping all organisms adapt and survive.

In recent years we have seen many dangerous bugs emerge, disproportionately in China. Why China? Because of population density, substandard sanitation (compared to the West), and economic prosperity.  Yes, economic prosperity...  50 years ago, if a disease emerged in China it would have been largely contained there for a very long time because there weren’t direct flights to everywhere as there are now. 

Viruses mutate all the time, and sometimes these mutations increase either transmissibility or virulence. But high virulence is generally bad for the virus: if it kills its host, it dies as well, unless there are alternative hosts around to infect before the current host dies. When such mutations occur in sparsely populated places, the effects are highly localized and do not spread. But in densely populated Chinese cities (remember that New York City would be a small to medium-sized city in China), there are tons of potential hosts around to infect, such that even highly virulent bugs can take root and become serious problems. And the connectivity of the world allows them to spread internationally once they reach critical mass in the population.

These sorts of events happen all the time in Africa, but because it is still underdeveloped, there is much less connectivity to the rest of the world, so outbreaks are often localized and can be controlled before they spread.  Climate also plays a role, as many tropical diseases spread largely through insects and other such vectors, and therefore don’t do well in more temperate climates...  But as those countries develop and the world becomes more interconnected, these things will inevitably become a bigger and bigger issue over time.  This is simply how evolution works: just as we develop countermeasures, mutations happen and sometimes confer advantages that allow the bugs to circumvent our countermeasures.

The current level of dominance humans have over pathogens is likely to be temporary. I have often posited that we may outlive our great grandchildren, because we are lucky to be alive in the era when we have suppressed many infectious diseases, before they have had time for adaptations to arise and spread...  But ultimately nature has a balance: overpopulation breeds disease, and connectivity helps it spread.

We shouldn’t blame China for its rapid economic development - but that really is why these problems are starting to emerge as global threats.  And China shouldn’t blame us for germ warfare, as that is as silly as blaming China for our current problems.  These things are inevitable.  And perhaps ending absurdly inefficient big-science genomics research programs like “All of Us” in favor of more research on infectious disease and epidemic preparation measures would be wise (as I have argued consistently for 25 years).

Furthermore, while this virus is pretty mild for most of us, this is an important exercise for the world’s public health system - just as SARS, MERS, and H1N1 were.  By taking this very seriously we can learn a lot about how to better prepare ourselves for the truly horrific bugs that will inevitably emerge in densely populated and connected parts of the world.  And if you deal with old folks or sick folks, it’s really important to isolate them and keep them as safe as possible while we wait for countermeasures that can keep this infection in check...

But we need to stop blaming China, and China, Russia and Iran need to stop blaming us. We are in this together, and have to accept that in an interconnected world, we have to work together and not worry about casting blame....

Wednesday, February 26, 2020

"Diversity of ideas"

So, in the spring of 2017, I attended a writing workshop at Columbia University, taught by journalists, on how to write OpEd pieces. In the course of the seminar they espoused the need for openness to a diversity of ideas, and said it was fine to disagree with ideas but never to disrespect the person... Then they talked about how it's okay in the current political era to be intolerant, because what happened in the election is unacceptable.

I was a bit shocked by this hypocrisy, and wrote my OpEd exercise piece about this intolerance being practiced by people who claim to respect the diversity of ideas even as they teach how to express unheard opinions, which mine certainly are in the campus environment. While the instructors seemed a bit troubled by what I wrote, every colleague, most of whom were humanities professors at Columbia, agreed emphatically with the position I took in my OpEd, and many later told me they had exactly the same initial reaction, even as they agreed with the political position of the instructor.

What I want to say is how proud I am of the faculty of my "coastal elite" university for agreeing that my position was completely valid and they added that they were also largely troubled by the very same intolerance to different opinions on campus... Here is a draft of what I wrote trying to be provocative....

--------------------------------------------------------------------------------------------------------------------------

The Public Voices program of the OpEd Project, in which I am excited to participate this year, has been billed as a "diversity of ideas project", which "was founded to ensure that all human beings have a chance to be heard". Furthermore, in the ground rules, it is clearly stated that "we believe in a wide range of ideas, including ideas we may disagree with", and it was suggested that "it is rare that we can have an open, frank discussion about what we think...but take care to be respectful. Disagree with the idea, not the individual." And ground rule 3 explicitly is entitled "everyone is welcome".

Despite this statement, during the first day of the workshop it was repeatedly suggested that this class is more important now than ever because of what happened in the recent election. At one point the lead instructor went so far as to say it is okay to be intolerant. I fail to see, however, how intolerance is compatible with "diversity of ideas" and "respect for the individual".

Surely among an audience of twenty Columbia professors it may be unrealistic to expect significant support for President Trump. But those individuals who do support him would seem to be the people whose voices are most strongly silenced by the current political climate, not the opposite. The recent election's results came as a shock to many people largely because those who do support the president often felt inhibited from speaking openly, for fear of being attacked as individuals rather than for the ideas about which disagreement may exist. In the context of this workshop and the desire for an open, frank exchange of ideas, this strikes me as a direct violation of Ground Rule number 3.

Personally, I am a libertarian, not a Republican. I was a delegate at the Libertarian National Convention, helped Governor Gary Johnson in his presidential campaign, and even helped write his science policy over the past year. I live in New York, where the election was not competitive, and as such I saw no downside to working for the Libertarian party during the election cycle. That said, Donald Trump was the first mainstream political candidate I was excited about since Pat Buchanan's primary campaign in 1992, which led to the unseating of George H.W. Bush. I agree with President Trump on about 70% of issues, which is far more than any candidate offered by either main party since Ronald Reagan. To this end, I donated to Mr. Trump's political campaign, and am very satisfied that the efforts of the Libertarian party to get out the vote for Governor Johnson in swing states helped elect our new President.

I entered this program with much optimism to learn how better to express myself in writing, and learn to better make a visible and open-minded argument for what is a minority view among "the coastal elites", as the instructors called us on Friday. Obviously I realize my political views are the minority view among Columbia faculty, but I was encouraged to participate in your workshop because I believed that diversity of ideas includes everyone, as the ground rules state, "regardless of which side of the aisle you come from, you are welcome."

When instructors openly suggest that it is okay to be intolerant of the ideas that won the recent election, or that the people who elected him are just misinformed or ignorant, I am forced to question what was meant by the celebration of the "diversity of ideas". I work as a professor at Columbia and as a freelance musician, and am actively engaged in diplomatic outreach in such diverse places as North Korea, Iran, Venezuela, Afghanistan and other troubled areas of the world. I am able to do this because I treat everyone with respect, engage them politely, and celebrate their diversity of opinion. Obviously very few people in my professional or personal life agree with my political views, and yet I have always engaged them with respect and never disparaged anyone for their political views, including my friend, North Korean leader Marshal Kim Jong Un. But today, hysteria over the election and the coming changes to American political life has created an environment where it has somehow become acceptable to attack the individual, by openly calling people racist, misogynist homophobes just because they didn't vote for Hillary Clinton. That is not "tolerance" or "celebration" of diversity of opinion by any definition.

I must say that during Friday's presentation, I frequently felt attacked personally, not because the instructors disagreed with me about politics, but because of the frequent allusions to it being okay to be intolerant of those who disagree, implying that President Trump and his supporters are horrible people who disparage elites and are destroying American values. In this era, perhaps it is wiser to encourage us to stop being silent and to speak up, so that people come to understand and respect the diversity of thought on our campus. I would suggest that this lack of respect creates an environment that silences the frank discussion the ground rules state is to be encouraged. To this end, I suggest the instructors answer their own question of "whether the ground rules are unacceptable to you". Perhaps it is time for a re-evaluation of what it really means to you to celebrate diversity of ideas, and to respect those who disagree as individuals...

Tuesday, September 9, 2008

Thoughts on Life and Science in Finland

Below is the submitted draft of an article I wrote on the topic of life in Finland and science in Finland for the magazine of the Finnish Academy, who have provided funding for my research there and who wanted some critical perspectives from their foreign professors in the Finland Distinguished Professor program...

------------------------------

When one moves abroad there are various phases of adjustment. Initially one becomes fascinated with the new country and sees all its advantages over the familiar environments of home. Once the novelty wears off, it is said that one exaggerates the negative aspects of the new country and culture in one’s mind’s eye, but after one survives this morning-after hangover, one accepts the bitter with the better and reaches a stable and healthy equilibrium. The request to write this article about my experiences with life and science in Finland comes at a less than ideal time for me, as I am now waking up to see Suomi-neito without her makeup on, after spending my first year living and working independently in Finland. Of course, this is a transitional phase until I get used to seeing her, warts and all; for now the warts seem exaggerated because of the culture shock and frustrations inevitably associated with immigration. Finding myself frustrated in my attempts to write fair and balanced academic prose on the topic of life and work in Finland in this context, I opted instead to present my experiences through more of a life-history approach, balancing the highs and lows that are inevitable in every cross-cultural relationship.

Personally, I have been a frequent visitor to Finland since 1992, when I was a very, very young graduate student. Since that time, I have spent an average of 1-2 months a year in Finland, split across 3 – 4 visits. I was infatuated with the country, the people, the culture, the work environment, the climate (my first visit was during an October blizzard), and especially the cuisine (yes, unlike Jacques Chirac, I absolutely love läskisoosi, kalakukko, and makkaraperunat). During my graduate student years as well, I was motivated to study the Finnish language at Columbia University (where it is taught in the department of Germanic Languages, oddly enough), including even one semester devoted to reading Kalevala.

On a professional level, working in Finland was an amazing experience for me as a young student, because scientists were taking advantage of Finland’s unique population and family resources to apply experimental strategies that were unthinkable in Southwestern Europe and the USA. At the time I started here in the early 1990s, there was not a lot of funding in human genetics, but people made more advances here than in the US because they were forced by austerity to be creative and exploit the natural experiment that characterized the Finnish population history. As a statistical geneticist, I was provided with many novel and interesting questions to apply my quantitative modeling skills to, and without doubt both I and my Finnish colleagues benefitted from this collaborative relationship synergistically. Working here, I was exposed to statistical and population genetic issues that scientists working elsewhere never thought about much until recent technological advances enabled them to see the same phenomena in their own populations. These observations have recently been re-discovered to great fanfare by scientists working in larger populations outside of Finland, by many of the same people who had earlier claimed these phenomena were not likely to be generalizable or relevant outside of small isolates. Of course following the undeniable successes in Finland most US-based geneticists have sought collaborations outside the US where more appropriate populations for genetic study can be found. Finland taught the world a lesson that it seems to have forgotten itself in their recent drive to emulate the “big science” and “technology driven” efforts promoted by the larger countries in Southwest Europe and the USA.

I was further impressed by the educational system in science in Finland, and its “big picture” emphasis on thinking and understanding all aspects of a problem. This was in stark contrast to the “trade school” microspecialization mentality of human genetics training in the USA, where most Ph.D. students working in gene mapping projects have the same amount of creative scientific input on their projects as lab technicians who are not receiving a Ph.D. for the same work. In the early to mid 1990s, when I was myself a student, I was very impressed by the way Finnish students were much more interested in discussing science in a more philosophical and “out of the box” manner. That is to say they were more interested in the “what?” and “why?” questions of science than the “how?” questions of engineering. I do not intend to trivialize the latter. Engineering and technology are critically important – perhaps more important to society than science. However, the goals of human geneticists are scientific – to use technology to ask questions about nature. Many of those Finnish students I speak of have now gone on to promising careers in Finnish academia. I would hope that the Finnish Academy devotes equivalent effort to promoting and supporting the career development of these highly talented young Finnish scientists as they do in recruiting foreign experts, as the biggest problem in Finnish academia is that there aren’t enough positions for the many talented young scholars to return to after successful postdocs abroad. More opportunities should be provided to them, as their freedom to explore scientific areas of their own interests represent the most promising prospects for the future of science in Finland. The greatest scientific discoveries are always made by the youngest, freest minds working without the biases and vested interests of the entrenched scientific establishment (i.e. those with the power in a “one professor per department” system).

On an academic level, as a scientist who was quite successful in publishing articles in a field in which research costs were relatively small, I quickly discovered that the most significant factor in career advancement in American academia (in the medical sciences, at least) was not the quality of one’s research, but rather the size of one’s grants. This is largely because the private university system in the USA is funded largely through the overhead universities receive, which can exceed 60 cents for every dollar we bring in. As a result, American universities are run more like businesses than centers of intellectual inquiry, and there is enormous pressure to spend one’s time trying to bring in more and more money rather than actually teaching and doing research. This has led to an academic culture in which “big science” is overvalued. In Finland, the bias seemed to go in the opposite direction, in that career success is evaluated by an equally unfair standard – number of publications. As I had published tons of papers at minimal monetary cost, I obviously saw the Finnish model as one in which I could thrive. And the opportunity to thrive was provided to me initially through a visiting professor position I held at the University of Helsinki from 2003 – 2006, through which I also received a research grant from the Finnish Academy, and more recently this opportunity has been expanded and extended through the generosity of the Finnish Academy’s FiDiPro program through 2010. This extremely generous program has afforded me the opportunity to expand my research activities in Finland and to collaborate with more research groups, by funding the students and staff I needed to make my research happen. In the US I had more than enough funding as well, but because the dollar amount was not large, I received little administrative support in such areas as hiring and obtaining space for the staff I attempted to hire.
This represented a huge advantage of the Finnish system, that I had talented and helpful administrative assistants working with me to navigate the financial side of things (though getting an accurate detailed accounting of transactions and balances on the grants has been surprisingly challenging). I honestly hope to make optimal use of this opportunity, and hope to have the opportunity to extend this collaborative relationship once this funding runs out after 2010, as I am grateful for the opportunity to work collaboratively with my Finnish colleagues and friends.

Of course, once I bought an apartment, started working and living in Helsinki half the time, and became a resident rather than a visitor, seeing Finland from the vantage point of a local, I started to realize that while many of the advantages I described above were legitimate, many were more illusory upon closer examination. My earlier experiences were largely from the perspective of Dian Fossey in “Gorillas in the Mist” rather than from the perspective of a fellow gorilla. As a New Yorker, I was shocked by the extreme pressure to conform to a narrow range of cultural norms. For example, the idea that one should expect apartments in downtown Helsinki to be as quiet as Lapland seems absurd to a guy who was born in Manhattan, plays the tuba, and likes to watch sports from the US with his fellow expat friends, often accompanied by boisterous discussions (quite often about Finns). Speaking of conformity, the most personally shocking conversation I ever had was with a high school principal in Finland who proudly told me that not one student in his school would vote for the Republican candidates (as I typically do) in the US presidential elections. It is not that his political view surprised me – even the irony of people wearing Che Guevara shirts to peace demonstrations goes unnoticed here – it was a shock that any academic would brag about the lack of diversity of opinions and attitudes among the students he was charged with educating. My earlier impression of Finland was that the Finns challenged the status quo, were open-minded to new ideas, and encouraged “out of the box” contrarian thinking. Upon closer examination, however, it appears that much of what I had taken for lively, open-minded intellectual debate was rather the expression of ideas that merely differed from mainstream American views, within a narrow internal spectrum.

“Change” is not always for the better. Europeans have been trying to remake their science funding apparatus modeled on the US funding system – encouraging a smaller number of large collaborative multinational projects to the detriment of the smaller hypothesis-driven projects. The EU grant system, for example, has modeled itself on all the negative excesses and bureaucratic complexity of the US system while adapting few of the positive characteristics – such as the emphasis in the US on smaller investigator-initiated research projects which generally lead to more creative individual thinking. When bureaucrats in funding agencies decide on scientific initiatives, rather than letting the marketplace of ideas sort out good from bad, one tends to dilute the creativity that scientists are able to employ. It is obvious that smaller countries with smaller resources are not going to outspend the US and UK. Thus, the best way for smaller countries to be successful is to pursue approaches that are different and contrarian. This is exactly what Finns did in the 1980s and 90s in human genetics – exploiting their unique advantages and resources, rather than emulating the American approach.

In the end, while life and work in the USA has many downsides which are beyond the scope of this blurb, many of the advantages of the American system only became clear to me after living abroad and experiencing Finnish (and previously, British) culture and society first hand. As is the case for most immigrants, I have become more pro-American through my experiences living abroad, from social bonding with the American expat community, to the frequent call to rhetorically defend my country and society in discussions with my Finnish friends, especially during this election season, where my attitudes diverge sharply from the Finnish mainstream. Further, my experiences living in a society that values social conformity and harmony over individual rights and freedoms here in Finland have made me even more American and more libertarian than I was before I came here. As I said from the outset, my current mindset is admittedly biased by the phase of the immigrant adjustment cycle I find myself in at the moment, and surely with the passage of time, the warts underneath Suomi-neito’s makeup will become endearing, and the passion I felt for her since my first visit will inevitably reassert itself.

Monday, August 25, 2008

The rise and fall of human genetics and the common variant - common disease hypothesis.

There has been an enormous amount of positive press coverage for the Human Genome Project and its successor, the HapMap Project, even though within the field the euphoric party that greeted the first results has already done a full 180, replaced by the hangover that inevitably follows such excesses.

For those of you not familiar with the history of this field and the controversies about its prognosis which were present from the outset, I refer you to a review paper a colleague and I wrote back in 2000 at the height of the controversy (Nature Genetics 26:151-157). The basic gist of the argument put forward for the HapMap project was the so-called common variant/common disease hypothesis (CV/CD), which proposed that "most of the genetic risk for common, complex diseases is due to disease loci where there is one common variant (or a small number of them)" [Hum Molec Genet 11:2417-23]. Under those circumstances it was widely argued that, using the technologies being developed for the HapMap project, one would be able to identify these genes using "genome-wide association studies" (GWAS), basically by scoring the genotype of each individual in a cross-sectional study at each of 500,000 to 1,000,000 marker loci - the argument being that if common variants explained a large fraction of the attributable risk for a given disease, one could identify them by comparing allele frequencies at nearby common variants in affected vs unaffected individuals. This point was contested by researchers only with regard to how many markers you might have to study for this to work if that model of the true state of nature applied. Many overly optimistic scientists initially proposed that 30,000 such loci would be sufficient, and when Kruglyak suggested it might take 500,000 markers, people attacked his models; yet today the current technological platforms use 1,000,000 and more markers, with products in the pipeline to increase this even further, because it quickly became clear that the earlier models of regular and predictable levels of linkage disequilibrium were not realistic, something that should have been clear from even the most basic understanding of population genetics, or from empirical data in lower organisms.

Today such studies are widespread, having been conducted for virtually every disease under the sun, and yet the number of common variants with appreciable attributable fractions that have been identified is minuscule. Scientists have trumpeted such results as have been found for Crohn's disease, in which 32 loci were detected using panels of thousands of individuals genotyped at hundreds of thousands of markers. This sounds great until you start looking at the fine print, where it is pointed out that all of these loci put together explain less than 10% of the attributable risk of disease, and for various well-known statistical reasons even that is a gross overestimate of the actual percentage of the variance explained. Most of these loci individually explain far less than half a percent of the risk, meaning that while this may be biologically interesting, it has no impact at all on public health, as most of the risk remains unexplained. This is completely opposite to the CV/CD theory as defined above. In fact, this is about the best case for any complex trait studied; in virtually every example dataset I have personally looked at, absolutely nothing was discovered at all.
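To see why a relative risk of 1.03 or so at a common variant "has no impact at all on public health", one can plug illustrative numbers (my own, not the actual Crohn's estimates) into Levin's classic formula for the population attributable fraction, PAF = p(RR-1)/(1 + p(RR-1)), where p is the frequency of the risk-conferring exposure:

```python
def attributable_fraction(p, rr):
    """Levin's population attributable fraction for an exposure
    carried by a fraction `p` of the population with relative risk `rr`."""
    return p * (rr - 1) / (1 + p * (rr - 1))

# A typical GWAS hit: common allele (30% of carriers), relative risk 1.03
print(f"RR 1.03: {attributable_fraction(0.30, 1.03):.4f}")  # under 1% of risk

# Even a generously large common-variant effect barely moves the needle
print(f"RR 1.30: {attributable_fraction(0.30, 1.30):.4f}")
```

Under the CV/CD hypothesis as stated, a handful of such loci were supposed to account for most of the risk; at these effect sizes, dozens of them together still leave the overwhelming majority of the risk unexplained.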

At the beginning of the euphoria for such association studies, the "poster child" example used to justify the proposal was the relationship between variation at the ApoE gene and risk of Alzheimer disease. In an impressively gutsy recent paper, a GWAS was performed in Alzheimer disease and published as an important result, with a title that sent me rolling on the floor in tears laughing: "A high-density whole-genome association study reveals that APOE is the major susceptibility gene for sporadic late-onset Alzheimer's disease" [J Clin Psychiatry. 2007 Apr;68(4):613-8]. In an amazingly negative study, they did not even have the expected number of false positive findings - just ApoE and absolutely nothing else... And the authors went on to describe how important this result was, and claimed it means they need more money to do bigger studies to find the rest of the genes. Has anyone ever heard of stopping rules? Maybe there aren't any common variants of high attributable fraction??? This is a point that Ken Weiss and I have put forward many times over the past 15 years, and Ken had been making it for a decade before that in his book, "Genetic Variation and Human Disease", which anyone working in this field should read if they are not familiar with the basic evolutionary theory and empirical data showing why no one should ever have expected the CV/CD hypothesis to hold...

In many other fields, the studies that have been done at enormous expense have found absolutely nothing, and in what Ken Weiss calls a form of Western Zen (in which no means yes), the failure of one's research to find anything means one should get more money to do bigger studies, since obviously there are things to find and the studies simply did not have enough patients or enough markers; it could not possibly be that the hypotheses are wrong and should be rejected... It is a truly bizarre world where failure is rewarded with more money. But when it comes to promising upper-middle-aged men (i.e. Congress) that they might not die if they fund our projects, they are happy to invest in things that have pretty much now been proven not to work...

In a truly bizarre propaganda piece, a parting sycophantic commentary (J Clin Invest. 2008 May;118(5):1590-605), Francis Collins claimed that the controversy about the CV/CD hypothesis was "... ultimately resolved by the remarkable success of the genetic association studies enabled by the HapMap project." He went on to list a massive table of "successful" studies, including loci for such traits as bipolar disorder, Parkinson disease and schizophrenia, and of course the laughable success of ApoE and Alzheimer disease. To be objective about these claims, let me quote what researchers studying those diseases had to say.

Parkinson disease: "Taken together, studies appear to provide substantial evidence that none of the SNPs originally featured as PD loci (sic from GWAS studies) are convincingly replicated and that all may be false positives...it is worth examining the implications for GWAS in general." Am J Hum Genet 78:1081-82

Schizophrenia: "...data do not provide evidence for involvement of any genomic region with schizophrenia detectable with moderate [sic 1500 people!] sample size" Mol Psych 13:570-84

Bipolar AND Schizophrenia: "There has been great anticipation in the world of psychiatric research over the past year, with the community awaiting the results of a number of GWASs... Similar pictures emerged for both disorders - no strong replications across studies, no candidates with strong effect on disease risk, and no clear replications of genes implicated by candidate gene studies." - Report of the World Congress of Psychiatric Genetics.

Ischaemic stroke: "We produced more than 200 million genotypes...Preliminary analysis of these data did not reveal any single locus conferring a large effect on risk for ischaemic stroke." Lancet Neurol. 2007 May;6(5):383-4.

And the list goes on and on of traits for which nothing was found, with the authors concluding they need more money for bigger studies with more markers. It is really scary that people are never willing to let go of hypotheses that did not pan out. Clearly CV/CD is not a reasonable model for complex traits. Even the diseases for which they claim enormous success do not fit the model: they get very small p-values for associations that confer relative risks of 1.03 or so - not "the majority of the risk" as the CV/CD hypothesis proposed.

One must recall that in the initial paper proposing GWAS, by Risch and Merikangas (Science 1996 Sep 13;273(5281):1516-7) - a paper which, incidentally, pointed out that one always has more power for such studies when collecting families rather than unrelated individuals - the authors stated that "despite the small magnitude of such (sic: common variants in) genes, the magnitude of their attributable risk (the proportion of people affected due to them) may be large because they are quite frequent in the population (sic: meaning >>10% in their models), making them of public health significance." The obvious corollary is that if they are not quite frequent, they do NOT have high attributable fraction and are therefore NOT of public health significance.
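A back-of-the-envelope power calculation (my own sketch, not Risch and Merikangas's exact arithmetic; the allele frequency and odds ratio below are illustrative) makes the same point quantitatively. Using the standard two-proportion sample-size formula with a genome-wide significance threshold of 5e-8, an allele at 30% frequency with an odds ratio of 1.1 requires on the order of tens of thousands of cases and as many controls:

```python
from statistics import NormalDist

def cases_needed(p_ctrl, odds_ratio, alpha=5e-8, power=0.8):
    """Per-group sample size to distinguish case vs. control allele
    frequencies with a two-sided test (normal approximation).
    Counts chromosomes rather than people - a deliberate simplification."""
    odds_case = odds_ratio * p_ctrl / (1 - p_ctrl)
    p_case = odds_case / (1 + odds_case)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~5.45 at genome-wide alpha
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    var = p_case * (1 - p_case) + p_ctrl * (1 - p_ctrl)
    return (z_a + z_b) ** 2 * var / (p_case - p_ctrl) ** 2

print(f"{cases_needed(0.30, 1.1):,.0f} cases (and as many controls)")
```

The answer lands in the tens of thousands per group, which is exactly why effects of this size, even when statistically detectable, carry so little of the attributable risk the hypothesis promised.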

And yet, you still have scientists claiming that the results of these studies will lead to a scenario in which "we will say to you, 'suppose you have a 65% chance of getting prostate cancer when you're 65. If you start taking these pills when you're 45, that percent will change to 2'" (Leroy Hood, quoted in the Seattle Post-Intelligencer). Amazing claims, when the empirical evidence is clear that the majority of the risk of the majority of complex diseases is not explained by anything common across ethnicities, or common in populations... Francis Collins recently claimed that by 2020, "new gene-based designer drugs will be developed for ... Alzheimer disease, schizophrenia and many other conditions", and by 2010, "predictive genetic tests will be available for as many as a dozen common conditions". This does not jibe with the empirical evidence... In breast cancer, for example, researchers claimed that knowledge of the BRCA1 and BRCA2 genes (which confer enormously high risk of breast cancer to carriers) was uninteresting because it had such a small attributable fraction in the population. Of course, now they have performed GWAS studies examining tens of thousands of individuals and have identified several additional loci which, put together, have a much smaller attributable fraction than BRCA1 and BRCA2, yet they claim this proves how important GWAS is. Interesting how the arguments change to fit the data, and everything is made to sound as if it were consistent with the theory.

I suggest that people go back and read "How many diseases does it take to map a gene with SNPs?" (Nature Genetics 26, 151-157, 2000). There are virtually no arguments we made in that controversial commentary 8 years ago that we could not make even stronger today, as the empirical data that has emerged since then supports our theory almost perfectly and conclusively refutes the CV/CD (common variant/common disease) hypothesis, despite Francis Collins' rather odd claims to the contrary...

In the end, these projects will likely continue to be funded for another 5 or 10 years before people start realizing the boy has been crying wolf for a damned long time... This is a real problem for science in America, however, as NIH is spending big money on these rather non-scientific, technologically-driven, hypothesis-free projects at the expense of investigator-initiated, hypothesis-driven science. Even more tragically, training grants are enormously plentiful, meaning that we are training an enormous number of students and postdocs in a field in which there will never be job opportunities for them, even if things are successful. Hypothesis-free science should never be allowed to result in Ph.D. degrees, if one believes that science is about questioning what truth is and asking questions about nature, while engineering is about how to accomplish a definable task (like sequencing the genome quickly and cheaply). The mythological "financial crisis" at NIH is really more a function of the enormous amounts of money going into projects that are predetermined to be funded by political appointees and government bureaucrats, rather than through the marketplace of ideas via investigator-initiated proposals. Enormous amounts of government funding concentrated in a small number of projects is a bad idea - one which began with Eric Lander's group at MIT proposing to build large factories for sequencing the genome rather than spreading the work across sites, with the goal of getting it done faster (an engineering goal) instead of getting more sites involved so that perhaps better scientific research could have come along the way.
This has led to a scenario years later in which the factories now want to do science and not just engineering, which is totally contrary to their raison d'être, and leads to further concentration of funding in a small number of hands - when science is better served, perhaps, by a larger number of groups receiving smaller amounts of money, so that more brains are working in different directions, thinking of novel and innovative ideas not reliant on pure throughput. Human genetics has been transformed from a field with low funding, driven by creative thinking, into a field driven by big money and sheep following whatever shepherd du jour tells them they should do (i.e., "innovative" means following the current trend rather than doing something truly original and creative). This is bad for science, and it is also bad science. GWAS has been successful technologically, and it has resoundingly rejected the CV/CD hypothesis through empirical data. If we accept this and move on, we can put the HapMap and the HGP where they belong - sharing the scientific fate of the Supercollider - and get back to thinking, instead of throwing money at problems that are fundamentally biological and not technological!
