If You Haven’t Read Hundreds of Books

Last week I read Call Sign Chaos by Jim Mattis (and Bing West). The book is the memoir of a general who served for decades in the US Marines, holding leading roles in the two Gulf wars and in the Afghanistan campaign.

In the book he offers his perspective on these decades of war and also highlights several political errors he thinks have been committed on both sides of the political spectrum. Like most books, this one contains some perspectives I agree with and others I don't. Regardless, it's a masterpiece, and a book that should be read.

Here I want to comment on a specific concept that is expressed from the point of view of a military commander but that extends to any field, really.

Reading is an honor and a gift from a warrior or historian who—a decade or a thousand decades ago—set aside time to write. He distilled a lifetime of campaigning in order to have a “conversation” with you. […] it would be idiotic and unethical to not take advantage of such accumulated experiences. If you haven’t read hundreds of books, you are functionally illiterate, and you will be incompetent, because your personal experiences alone aren’t broad enough to sustain you. Any commander who claims he is “too busy to read” is going to fill body bags with his troops as he learns the hard way.

Call me a bookworm, but on this Mattis and I agree 100%.

I wish more people integrated this Weltanschauung into their lives. The world would be a better place.

That feeling of unease

The COVID-19 epidemic forced all the university staff to work from home. I consider myself very lucky to have a job that can be performed entirely from home: my main issue has been following all the recommendations to avoid catching the virus as much as possible, whereas many people had to struggle with that while also losing their jobs or going through a really hard time because their usual income dried up.

While working from home I started feeling more stressed than usual. At the beginning I attributed this to being at home for entire days, seldom going out for groceries or a walk. More recently, now that the lockdown has started to be lifted, I have been going out more, but the feeling of stress hasn't gone down.

I thought that maybe I just don't work well at home, but I actually enjoy working from home quite a lot: although I get some interruptions from the family, I can focus pretty well, I have my cats nearby, all my books, my comfy armchair, and so on and so forth.

Then I spent a couple of days at the university, and it dawned on me. When I commute, I benefit from two very strong effects. First, I have uninterrupted time in which I can only think, with no distractions and nothing else I could possibly do. Second, commuting forces me to stop working at a very precise hour in order to arrive home on time. Yesterday I made an approximate count of the daily hours I worked during the lockdown, and it turns out I worked on average two hours per day more than usual: no wonder I was feeling stressed and unhappy, with the sense that I could not detach from work!

The good news is that the solution is very simple: I'll just track my working hours and shut down the laptop when I reach the daily maximum! This is something I had already learnt years ago, but the epidemic-induced unstructured time (no "I arrive at work at X, I leave at Y" boundaries) sneaked up on me in a very deceitful way. No more!

Maybe (un)true

Andrew Gelman (whose BDA 3 is definitely worth having and consulting) comments on the reactions to a COVID-19-related preprint that counts John P.A. Ioannidis among its authors. I am following the full story because I included part of it in a seminar I gave recently on statistical fluctuations, the reproducibility crisis, causality, and the like. I wrote about another excerpt from that seminar when commenting on Sabine Hossenfelder's opinion on model predictivity. I should definitely stop quoting the seminar until I finish drafting my write-up.

In any case, you might know Ioannidis as the author of the famous "Why Most Published Research Findings Are False", a 2005 paper where he argued that several causes concur in making a large number of research findings non-reproducible. The causes he identified are mostly related to the conscious or unconscious biases of researchers: sheer prejudice, incorrect application of statistical methods, biases stemming from competition between research groups in lively fields, and publication bias. Publication bias is the tendency of journals to publish positive results more readily than negative ones: the idea is that researchers, as a consequence, unconsciously give a positive preliminary result less scrutiny than a negative one.
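
To get a feeling for how strong the effect can be, here is a minimal simulation sketch (in Python, with entirely made-up numbers for the prior, the power, and the publication probabilities) of a literature in which positive results are published much more readily than negative ones:

    import numpy as np

    rng = np.random.default_rng(42)

    n_studies = 100_000
    prior_true = 0.1                  # assumed fraction of tested hypotheses that are actually true
    alpha, power = 0.05, 0.8          # conventional type-I error rate and statistical power
    p_pub_pos, p_pub_neg = 0.9, 0.05  # assumed publication probabilities: the bias

    is_true = rng.random(n_studies) < prior_true
    # A study comes out "positive" with probability `power` if the effect is real,
    # and with probability `alpha` (a false positive) if it is not.
    positive = np.where(is_true,
                        rng.random(n_studies) < power,
                        rng.random(n_studies) < alpha)
    # Journals publish positive results far more readily than negative ones.
    published = np.where(positive,
                         rng.random(n_studies) < p_pub_pos,
                         rng.random(n_studies) < p_pub_neg)

    pub_pos = published & positive
    print(f"published papers reporting a positive result: {pub_pos.sum() / published.sum():.0%}")
    print(f"published positives that are actually false:  {(pub_pos & ~is_true).sum() / pub_pos.sum():.0%}")

With these invented numbers the published literature heavily over-represents positive claims, and roughly a third of the published positive findings are false, even though every individual test was run honestly at the nominal 5% level.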

Back to the COVID-19 preprint. The preprint sparked quite some discussion on the internet: Gelman pointed out some fatal flaws of the study, and Taleb… well, Taleb went ballistic (see photo). To spice things up, there has also apparently been a whistleblower complaint stating that the study was funded by the JetBlue founder (who is notoriously skeptical about the COVID-19 mortality rate).

What I want to highlight in the whole affair is what Gelman says in the comment I also linked above:

[…] “peer-reviewed research” is also “provisional knowledge — maybe true, maybe not.”

That, fellow researchers, is the whole point of the science we do.

J.S. Bach Clichés

I am listening to a handy two-hour YouTube video of Johann Sebastian Bach's violin concertos—although I must say I fancy the Brandenburgischen Konzerte more. Yesterday I was listening to Vivaldi instead. I like to listen to this kind of structured music when I do some programming.

[Image: Guardians Of The Galaxy Awesome Mix Vol 1, from MyStickerMania; not sure of the copyright, I can take it out if requested by the owners.]

At the risk of sounding cliché, I had a flashback to when I was in middle and high school, studying piano and composition: to get my hands on anything like these concertos I had to ask my parents for money, then get on my bike, ride to the record shop, buy a tape cassette, head back home, and only then could I finally listen to the music.

Now I have practically every single music piece written by humanity in the last centuries at a mouse click’s distance.

Oh and speaking of all humanity’s knowledge at our fingertips, do you know about Emergency Kitten? You can even fork them on Github!!!

All models are wrong, anyway

Today I gave a seminar on the ASA statement about p-values, the 5-sigma criterion, and other amenities. In the seminar I slipped in some comments on Bob Cousins's stance on models, hypotheses, and laws of Nature, and ended up ranting about interventionist definitions of causality (following Pearl, mainly).

A few minutes ago I opened Twitter and found the somewhat excessive prodromes of an internet flame war sparked by a post by Sabine Hossenfelder titled "Predictions are Overrated". I disagree with that post on a few key points, which would take too long to describe on Twitter. Bear with me here.

 

The argument of the shady forecaster

The way scammer forecasters work is to generate a huge number of predictions, send them to random people, and then follow up only with the people who received the predictions that a posteriori proved to be correct. The scammers repeat this iteratively until, for a small set of people, they appear to have always provided correct predictions. It's a well-known strategy, which I think was described extensively in a book by Nate Silver. Or Nassim Taleb. Or Levitt and Dubner. I read too much.
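
As a toy illustration of the mechanism (a hypothetical sketch in Python, not a description of any real forecaster): every week the remaining recipients are split in two, each half receives the opposite prediction, and only the half that happened to get the correct call is contacted again.

    import random

    random.seed(1)

    recipients = list(range(1024))               # hypothetical mailing list
    for week in range(1, 7):
        random.shuffle(recipients)
        half = len(recipients) // 2
        told_up, told_down = recipients[:half], recipients[half:]
        market_went_up = random.random() < 0.5   # the forecaster has no skill at all
        # Keep following up only with the people who happened to get the right call.
        recipients = told_up if market_went_up else told_down
        print(f"week {week}: {len(recipients)} people have seen only correct predictions")

After six weeks, 16 people out of 1024 have received six correct predictions in a row, and conditionally on what they have seen the forecaster looks infallible.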

Sabine uses this story to argue that, since any successful prediction can be successful just by chance—because of the size of the pool of scientists producing models—judging theories based on their predictive power is meaningless. However, the shady-forecaster example seems to me quite disconnected from the topic at hand: the shady forecaster relies on selecting their targets based on a-posteriori considerations and conditional probabilities, whereas Sabine's point is one of pure unconditional chance.

 

The power of doing

Sabine remarks that epidemic models are incomplete because they don't include the "actions society takes to prevent the spread"; that's true, but Sabine hints that this is because "They require, basically, to predict the minds of political leaders". The real point is instead that, to some extent, researchers cannot access the causal structure underlying the epidemic model because they are stuck with conditional probabilities and cannot transform them into interventions; in other words, they cannot fix some of the conditions to remove spurious causal links and highlight the true causal structure—mainly because they would need the politicians to take certain actions, which sometimes would be plainly unethical and sometimes would have too high a political cost.
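
A toy numerical sketch of the distinction may help (in Python, with a completely made-up causal model: a confounder Z that drives both the action X and the outcome Y). The observational conditional probability P(Y | X=1) is not the same quantity as the interventional P(Y | do(X=1)), which is what you would measure if you could force the action on everyone:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000

    # Made-up structural model: Z -> X, Z -> Y, X -> Y.
    z = rng.random(n) < 0.3                      # confounder
    x = rng.random(n) < np.where(z, 0.8, 0.2)    # action, driven by the confounder
    y = rng.random(n) < 0.1 + 0.3 * x + 0.4 * z  # outcome, driven by both

    # What observational data gives us: conditioning on X = 1.
    p_cond = y[x].mean()

    # What an intervention do(X = 1) would give: force X = 1 for everyone,
    # leave the confounder Z untouched, and regenerate the outcome.
    y_do = rng.random(n) < 0.1 + 0.3 * 1 + 0.4 * z
    p_do = y_do.mean()

    print(f"P(Y=1 | X=1)     = {p_cond:.3f}")    # inflated by the confounder
    print(f"P(Y=1 | do(X=1)) = {p_do:.3f}")

Without the ability to intervene (or extra assumptions that let you adjust for Z), the two numbers are simply different quantities, and only the second one tells you what happens when the action is actually taken.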

 

A theory should describe nature

The exact quote from Sabine's article is "If I have a scientific theory, it is either a good description of nature, or it is not". Books have been written on the meaning of a good description of nature, but the full sentence is simply what Quine would call a logical truth, that is, an expression which is true regardless of the actual content of the sentence—if I have a cup, it is either broken or not broken. I won't go into deeper considerations about factual truth vs logical truth.

Sabine in any case goes on to define an explanatory power which "measures how much data you can fit from which number of assumptions. The fewer assumptions you make and the more data you fit, the higher the explanatory power, and the better the theory". This is ultimately an expression of Occam's Razor, which is embedded in our mentality as scientists—and in Bayesian model selection.

Sabine also points out that ultimately there is a trade-off between obtaining a better fit and introducing more (ad-hoc) assumptions, which again is something deeply embedded in Bayesian model selection and in formal procedures such as the Fisher test for choosing the minimal complexity of a model we want to fit to the data. So far so good.
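
As an illustration of that trade-off, here is a sketch (in Python, with invented toy data, nothing taken from the post) of the classic Fisher F-test between nested polynomial models: you accept an extra parameter only if it reduces the residuals significantly more than chance alone would.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Toy data: a straight line plus Gaussian noise.
    x = np.linspace(0, 10, 50)
    y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, x.size)

    def rss(degree):
        """Residual sum of squares of a least-squares polynomial fit."""
        coeffs = np.polyfit(x, y, degree)
        return np.sum((y - np.polyval(coeffs, x)) ** 2)

    for d in (1, 2, 3):
        # F-test between the nested models of degree d-1 and d:
        # is the one extra parameter worth it?
        rss0, rss1 = rss(d - 1), rss(d)
        dof = x.size - (d + 1)            # residual degrees of freedom of the larger model
        f = (rss0 - rss1) / (rss1 / dof)
        p = stats.f.sf(f, 1, dof)
        print(f"degree {d - 1} -> {d}: F = {f:8.2f}, p = {p:.3f}")

With this fake dataset the jump from degree 0 to degree 1 is overwhelmingly significant, while degrees 2 and 3 buy essentially nothing: exactly the Occam-like behaviour described above.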

We diverge towards the end, where Sabine claims that “By now I hope it is clear that you should not judge the scientific worth of a theory by looking at the predictions it has made. It is a crude and error-prone criterion.” and laments that “it has become widely mistaken as a measure for scientific quality, and this has serious consequences, way beyond physics”.

To me the explanatory power of a model, or even better its interpretability, should indeed be a fundamental characteristic, but I subscribe to Box's all models are wrong. I want a reasonable explanatory power or interpretability before considering a theory minimally acceptable as a physics model, but ultimately the sole judge of the success or failure of a model is indeed the data. Rather than focussing on the possibility that a model predicts the data by chance, I prefer to focus on requiring that a well-interpretable model predicts multiple data, in multiple scenarios, in multiple independent experiments. If its predictions are successful, I'll take it as the current working assumption about how things work.

Particle physics, as pointed out by Bob Cousins in the paper linked above, is indeed a happy realm where we have tremendous predictive power and where we can often build models starting from first principles rather than just figuring out what type of line fits the data best (as, I recently experienced, is common in other sciences). Bob is also right when he remarks that when we go from Newtonian motion to special/general relativity, the former is "the correct mathematical limit in a precise sense" rather than an approximation. However, all of this to me simply justifies the use of (quasi-)point null hypotheses: it does not imply any strong connection with a ground truth. More importantly, even our choice of those very reasonable assumptions (symmetries and whatnot) that generate the explanatory power or interpretability of the model might ultimately result in a very successful theory by chance alone. After all—I insist—all models are wrong anyway.

 

“Readers of some journals don’t read other journals” (or maybe they do?)

Last September I was in Crete for a conference and stayed on afterwards for a few days of vacation.

The wife and I did a wonderful hike down through the Samaría Gorge. I won't go into the details: just be warned that it is a magnificent experience but also quite taxing. Bring food, water, hats, hiking shoes, and more water.

After the hike we had the opportunity to sunbathe on a nice rocky beach and take a swim, before taking a ferry back to the pickup point for the bus back to Chaniá. During the bus trip the wife fell asleep, and I listened (albeit interrupted by the occasional loss of connection in the mountains) to an episode of the recommendable EverythingHertz podcast, hosted by Dan Quintana and James Heathers, featuring a nice interview with Kristin Sainani.

The interview mentioned Sainani's scientific writing course on Coursera, but I shamelessly forgot about that—until last Sunday. While searching for a few references about writing, I serendipitously stumbled upon the course again, enrolled, and spent the last couple of nights going through the very cool material. If my prose in this blog is not improving, it's entirely my fault: the course is very good.

Long story short, I just watched two of the course’s interviews: one with Brad Efron (the real one, not a bootstrapped replica), which I cite merely because you should really go and watch it; and one with George Lundberg, which I cite because I want to speculate on a point Lundberg raised.

Lundberg—a physician and editor of many journals, from what I could gather—states that you should choose the appropriate journal for your paper based on the typical readers you want to reach. On the one hand I agree (as Efron also mentioned, if you want readers interested in theoretical statistics you shouldn't submit to the Journal of Applied Statistics); on the other hand I tend to disagree with one specific sentence: "readers of some journals don't read other journals".

This was probably true in the pre-internet era, when you had to go to your university's library to pick up a printed journal: maybe (if the library didn't have enough copies) you could only read it for a moderate amount of time, to leave room for colleagues to read it too, and you had to ruthlessly select the papers you wanted to photocopy. I imagine that bibliographic research was based on a similar approach too—my older readers, if any, are welcome to comment on this. I hear that people often snail-mailed authors to ask for (snail-mailed) copies of their papers.

Nowadays you tend to search for papers online; you can get practically any paper from any journal via a plethora of sources: preprints, open-access journals, university online subscriptions, or a colleague willing to send you a PDF—or sci-hub, if you feel particularly remorseless.

I think that the younger generation certainly still takes into account the perceived importance of a journal when choosing what to read. But I also think that the separation between readers of this or that journal might have washed out so much that readers should now rather be divided by search keys—people searching for "sampling techniques" rather than "likelihood asymptotics" or "cats loving statistics".

My hunch is totally anecdotal, but if you are interested let me know and we might think of setting up a study (if one does not exist already) in which scientists are interviewed about their reading habits.

Academic writing in High Energy Physics

It turns out I haven't written in this blog since last April, which is a bit disappointing.

Since then, I have become more and more involved in academic writing; I have a couple of draft articles (not within the Collaboration) that I am now polishing, and I got a contract with a prestigious press for a textbook due next year. As a result, I started writing almost every working day, which is something that as a particle physicist you don't really do.

The life of a particle physicist in a large experimental Collaboration revolves around doing analysis work and service work. The typical service work consists of accessory tasks like tuning some calibration of the detector, reviewing a specific aspect of analyses you did not perform yourself, or other menial tasks that are nevertheless extremely important to keep the company (sorry, the Collaboration) functioning. Not much writing there (except for emails. You will always be writing emails).

The typical analysis work can be roughly schematized in a workflow like this:

  • Design an analysis targeting an interesting physics case, and read the relevant bibliography (old analyses targeting the same case, related theory papers, etc.);
  • Perform the analysis (select an interesting subset of your data sample, estimate some tricky accessory quantities you need, study the systematic uncertainties your analysis is affected by, extract estimates for the parameters you are targeting);
  • Present the analysis a few times in meetings to get feedback from other members of the Collaboration;
  • Write up detailed internal documentation (the Analysis Note), and get some more feedback;
  • Write a draft of the public documentation (journal paper or preliminary analysis summary);
  • Get the analysis approved from the point of view of the physics;
  • Get the paper approved from the point of view of the writing (including the best way of conveying the desired concepts, and style/grammar considerations).

I don't claim total generality; I just find that I and most of the colleagues I know follow this workflow. You might have a different one, probably a better one, and that's just fine.

The implication of such a workflow is that you end up writing the documentation (internal or external) only after having finalized the bulk of the analysis work; until that moment, the logical organization of the material is deferred to slides presented at meetings. When you write the documentation you are also generally under pressure to meet some deadline—usually a conference at which your result should be presented. Sadly, sometimes there is not even much organization of the material to be done, because most analyses have been performed and optimized in the past, and the modifications you can make are kind of adiabatic (plug in a different estimate for a specific background, retrain a classification algorithm, and so on). For new analyses, the track is predetermined anyway (tune your object identification, tune your event selection, estimate backgrounds, plug in some analysis method specific to the case at hand, estimate systematic uncertainties, calculate the final numbers representing your result).

That's all fine, but the unintended consequence of this workflow is, in my opinion and experience, that academic writing ends up relegated to the role of a task you have to get through pretty quickly, a mere accessory to an analysis you have already done.

Things are made worse by the last stage of the workflow; the review of the paper text by the Collaboration (usually in the form of a Publication Committee) is designed to standardize the text of all the Collaboration's papers and to ensure the highest standards of quality in the resulting text. The problem is that, while iterating with the internal reviewers on the text, you will often feel that your authorship is being taken away from you. What I mean is that the set of rules and comments is designed to produce a perfect Collaboration text, and this strips most of your personality (reflected in your personal writing style) away from the paper. Unless you discuss a lot and manage to slip some lively bits into it.

Just to make things clear, I am not complaining about the existence of these rules; it is certainly desirable that the Collaboration outputs papers with the highest standard of text quality, and setting up internal reviews and writing rules is a necessity. It's just that the papers end up being the Collaboration's papers, not your papers.

In any case, my point is that this kind of workflow unwittingly teaches us that writing is the last thing you do after having done everything else, and that the final result is not entirely under your control, because it will be the product of the Collaboration.

If you look at other fields, maybe even going into the social sciences or the humanities, writing tends to be seen more as a necessary tool to organize your thoughts. This applies to the point of using writing to organize your thoughts into a paper-like format, which helps you identify, at any stage, what you need from an analysis point of view; but it also applies in general to taking random notes to fix your thoughts and reorganize them.

Once I started writing for my own projects regularly, I realized that what in high school was a vague unidentified feeling is actually a clear truth: writing is probably the best way of interacting with your own mind, and that is true regardless of what you are writing about (work, feelings, life in general). Writing activates your mind and enhances its capabilities.

In addition to the projects I am working on, I started to regularly jot down notes on pretty much anything (meetings, random thoughts, summaries of papers I have read, etc.). The result is that I feel more focussed, I feel like I am thinking more clearly about pretty much anything, and I am retaining information much more easily. As a bonus, I can retrieve from my notes any information I have forgotten or failed to retain!

In high school I could write pretty easily, but I guess my ability atrophied over the years; now I think I have regained it and pushed it even further. I could now probably be described as a writing junkie. A resource that helped me quite a lot in regaining momentum is Joli Jensen's Write No Matter What, a very nice book whose main point is that in order to write you should have frequent, low-stress, and high-reward contacts with your writing.

How does all of this apply to this blog? Well, for a long time I thought that to write regularly I would need to regularly produce very long pieces of text, mainly because the blogs I usually enjoy reading are made of very long posts. Recently I started following, and greatly enjoying, a blog which mixes longer posts with very short random ones, and I finally came to terms with the idea that a blog can be entertaining and useful even if a post is very short or consists of jotting down a single random idea. I will try this new format. I actually started this post with the idea of writing just a few lines to kick off the blog again and look: here I am at 1310 words, with a couple more paragraphs to go.

I even have plans for a whole series of posts. The COVID-19 boredom induced me to slip a couple of slides about "The interesting paper of the week" into the news slides of the weekly meeting I chair at my institution. It's a meeting about the group's CMS efforts, but all the papers I am slipping in are about Bayesian statistics or machine learning, because that's where my interests lie right now. Yesterday it suddenly dawned on me that porting those weekly slides to weekly posts would make for a great low-stress series.

So, basically, I'm back, with plans to finally kick this blog off on its intended course.

My New Paper is a Manifesto for Good Practices!

Too much time has passed since my last post; I had a couple busy months, and I will have a few more 🙂

Among my recent activities, I spent last week in Zürich attending Standard Model at the LHC 2019, where I presented the status of W and Z multiboson measurements in ATLAS and CMS. Together with Carlos from Oviedo—where I was previously based—we produced a nice result, which was part of my presentation, on the WZ inclusive and differential cross sections and the search for anomalous couplings, published in JHEP just a couple of days before the conference 🙂

The SM@LHC series actually consists of specialized workshops designed to bring together experienced researchers and have them discuss the open topics and points of improvement that concern Standard Model physics. Well, not only Standard Model, actually; nowadays the precision of SM measurements is so high that we expect to be able to see sizeable discrepancies from SM predictions if there is some new physics nested in the couplings (parameters representing the strength of an interaction between a set of particles).

In the "usual" HEP conferences, a talk on "multiboson measurements in ATLAS and CMS" would consist of a list of nice results with highlights about who did what with better precision; while showcasing results is very important, one sometimes feels the need for a more critical discussion of the results, to identify possible improvements to be made and therefore inform future action.

Workshops like SM@LHC satisfy exactly this need; speakers are invited by the organizers to give talks focused more on the issues and open points than on the accomplishments. To prepare my overview of multiboson measurements, I read a number of ATLAS and CMS papers in detail, following this mandate. Because of my tasks within the Collaboration (I review papers for the phrasing of statistical claims, for example), I have grown a bit picky on the topic of reporting results, and I started to notice things.

After preparing the talk I took a plane to Zürich right before Easter, spent the weekend visiting the town with my wife, and started thinking about systematizing the observations I had made, to possibly abstract some kind of guidelines from them.

During the four days of the workshop I started jotting down a few ideas, and on Friday morning I submitted the result to arXiv.

My Reporting Results in High Energy Physics Papers: a Manifesto is now out, and I have already received feedback from the community (quite good, so far!). If you feel like reading these 10 entertaining pages, make sure you drop me a line with additional feedback; I surely missed some points, and the document can always be improved.

Last but not least, the act of writing the acknowledgements section for this paper led me to look into my CMS membership; I realized that this year (in July, actually) marks my 10th year in CMS. I am not sure this is a milestone, but somehow it feels like one.

For sure, looking back, I realize how many things I now kind of understand—things  I had absolutely no clue about when I started. And that feels good 😀

Turok, Dark Matter, and the Issue of Telephone Games in Science

Chinese Whispers is a children’s game; according to the linked Wikipedia article, it’s called Telephone Game in American English, which better resembles the Italian telefono senza fili (literally, wireless phone).

Regardless of the name, which in its British version might stir up some discussion due to the stereotype it leans on, the point is that it's a game in which information gets progressively distorted at each step—or I should rather say that the opportunity for distortion at each step is embedded in the rules of the game.

Information is usually distorted by the environment (i.e. by the challenge of quickly whispering words from one player to the next), but there's always the chance that a player intentionally changes the message. This often makes the game a bit less fun (the funniest realizations—at least to me—are the ones in which the changes are unintentional), but it's no big deal; the message has no real utility.

In the real world, messages are usually important precisely because they are meant to have some effect on the recipient, and intentional distortion becomes an issue because the distortion is motivated by the hidden agenda of the player (or, more generally, the actor) who distorts the message.

In science this issue can arise in the way scientific results are presented to the general public, and also in the way results are presented to an audience of peers; I will discuss two recent examples that bothered me a bit.

The first example is the popular book The Order of Time by Carlo Rovelli. In the book, Rovelli argues essentially that time is a sort of emergent property rather than a fundamental entity. The book has been followed by a series of interviews and articles in the press, which helped popularize it and certainly pumped up sales.

The book—and the general attitude shown in press articles and interviews—does huge harm, though, because the notion that sticks with the layman is precisely that time does not exist. While this is certainly an interesting theory, worthy of discussion and scientific exploration (if feasible), it is a theory. A fancy, interesting theory that is not supported by any evidence whatsoever, at this moment in time (pun intended).

I think that selling (because the issue here is selling) a theory as if it were a fact seriously damages both the public and the community, with the aggravating factor that the public is defenseless; the public just trusts whatever is written in a popular book or in a press article, regardless of the truth—as the Trump campaign taught us. Furthermore, the general public unfortunately does not go and check more informed reports, such as an article in Nature which points out that the theory is just Rovelli's theory and that the layman should not buy it as if it were the truth.

If you think I am exaggerating, consider that I am one of the administrators of what is probably the major Italian Facebook outreach group on Quantum Mechanics, Meccanica Quantistica; Gruppo Serio; every couple of days users post their thoughts "on the fact that time does not exist", to the point that we stopped allowing those posts to pass through our filters. When we still accepted those discussions, I experienced firsthand that these people have read the book (or a press article about it) and have taken home the message that the state of the art of scientific knowledge is that time does not exist. And this is very bothersome. I think Rovelli messed up very badly here, and I have the impression (I hope the incorrect impression) that he is unwilling to correct this mistake, or does not care to.

Rovelli's book is not the only example of a book that does a disservice to outreach by projecting the theory or the biases of the author onto the general public; another recent example would be the book (and blog post about the FCC) by Sabine Hossenfelder in which she claims that a new particle collider would be a waste of money. But I think others have already written extensively about that, so I won't delve into it in this blog post (I already did on Twitter, though), and my second example won't be Sabine's book.

My second example is a sneakier one, which I witnessed last week at a seminar at my institution, Université catholique de Louvain. In the context of the awarding of some PhDs honoris causa to renowned scientists, Neil Turok was invited and gave a couple of lectures. One lecture was for the general public, and I missed it because of other commitments; you can find the full video of it on my institution's website. The second lecture, the one I will focus on, was for a semi-general audience: not only researchers like me from CP3 (the Centre for Cosmology, Particle Physics and Phenomenology—kudos for centre, though the Oxford comma is missing), but also bachelor's and master's students in Physics.

A seminar for specialists is pretty much an open field, where it’s assumed that the spectators will be actively engaged and will critically evaluate any bit of information transmitted by the speaker.

A lecture with bachelor's and master's students—who were encouraged to participate and ask questions—is a more delicate scenario, in which I would argue you want to make sure that everything is communicated with the necessary caveats. Either well-established theories should be presented, or new, bizarre, untested ones; in the latter case, there should be ample warning that the theories are not part of the scientific consensus. I am not saying that new/bizarre/untested theories should not be presented; on the contrary, it is good for the formation of the students' critical minds that debate is stirred up and that exciting possibilities are presented to them. What I am saying is that such possibilities should be presented as such, and not as the unquestionable truth; here is where I think Turok messed up pretty badly.

The lecture was about a CPT-symmetric universe; a couple of slides into the talk, he showed a slide with an equation, outlining its different components and the scientists who solved those pieces of the puzzle. There was an almost invisible (dark violet on black) bit of the equation that I was not able to read but that turned out to be pretty crucial; he claimed that he used to put disclaimers on that piece of the equation, because it referred to dark matter, but that he recently removed the disclaimer because that part of the puzzle had been solved.

At that point, I kind of woke up, because to this day we are pretty far from being able to state that “we solved Dark Matter”.

It became clear a few slides later that what he meant was that, in his theory, Dark Matter is constituted by right-handed (RH) neutrinos, and that consequently the Standard Model plus right-handed neutrinos is enough to explain the whole universe.

He then went on to state that competing theories such as freeze-out and freeze-in are full of ad-hoc assumptions, whereas his theory was simple and elegant; he even threw in some paternalistic comments saying that in astrophysics/cosmology lately people just produce bad papers for the sake of it, whereas he prefers simple solutions based on works from 50 years ago.

Now, it might be true that some people produce bad papers just for the sake of it, and it might be true that going back to the roots of a discipline can result in ideas with newfound strength and solidity. But using this argument to bash competing models seems to me a bit arrogant and uncalled for. Particularly in front of undergraduate students.

During the Q&A, a couple of colleagues of mine argued on two different fronts. One argued that freeze-in mechanisms—contrary to what Turok stated—do not assume a huge number of new fields and ad-hoc assumptions. I am no expert in astrophysics, but in the past weeks we had two or three seminars at CP3 about freeze-out and freeze-in mechanisms, and I am pretty sure my colleague was right; yet Turok dismissed him, basically saying that he was sure my colleague was wrong, and in the end the moderator had to resort to the traditional, diplomatic "let's continue discussing this during the coffee break" before things went awry.

The other colleague argued that the "very simple and Standard-Model-only" model by Turok assumed not just the Standard Model but also right-handed neutrinos, and a small exchange followed about whether RH neutrinos can be considered practically Standard Model or not. The discussion dragged on a bit, and at some point Turok admitted—although very much en passant—that his model too relies on totally ad-hoc assumptions, such as the Z2 symmetry that makes one and only one of the RH neutrinos stable. And yes, that assumption is totally ad-hoc, and it is apparently the only way the theory can explain why, of all the RH neutrinos, only one should be stable and give rise to Dark Matter. Again, I think that while it's healthy for students to be exposed to debate and new ideas, the way the theory was presented before the criticism arrived was very problematic.

[Image: screenshot from the Turok fandom wiki]

To summarize, I think our duty as scientists is to give both the public and the students the most objective picture of whatever new theory we fancy at the moment—even if we ourselves devised that theory.

It is good to expose the public to some degree of the professional debate about certain topics—although it probably depends on the topic; a debate about CPT does not have the same impact on the layman as a debate about black holes—remember when people believed the LHC would destroy the Earth?—or vaccines.

However, when speaking to—or writing for—people who do not have the ability to critically sift through information, we should be very careful not to misrepresent the difference between the current scientific consensus and as-yet-untested theories.

After all, not everything is about Turok (the Neil); the image above teaches us that Dark Matter is a pretty delicate issue in Turok (the game) as well 😀