Archive for January, 2009

Statistics in fMRI studies: mere voodoo?

“Do you think the media are partly responsible for sensationalizing the findings of social neuroscience? And how can the media do a better job of reporting on brain scanning data?

Ed Vul: In general, I would advocate a bit more skepticism on the part of reporters, with respect to all scientific findings. I think reporters generally try to write up conclusions in slightly grander terms than the scientists used originally. What they may not realize is that scientists themselves have often oversold the implications of their findings a bit. You put these things together and you can end up with really overblown coverage. (On the other hand, perhaps if this advice were followed, science columns would end up dull and unread, so perhaps I should withdraw the suggestion.).”

This is from an interview with Ed Vul, a graduate student with an inquisitive mind.

Ed Vul

He has an article in press that caused big waves in the small community of social neuroscientists and neuroeconomists. In this paper, he offers a strong critique of the statistical methods used to correlate a behavioral trait with activity in a particular brain region – which is the bread and butter of fMRI studies. For interested readers, here is the exchange in chronological order:

Ed Vul et al., article in press: Voodoo Correlations in Social Neuroscience

Original rebuttal by the criticized scientists (including Tania Singer, a neuroeconomist in Zurich), and a new version of their rebuttal; they are also working on an article-length version.

Rejoinder by Ed Vul (to the first version of the rebuttal)

Interview with Ed Vul for Scientific American

My bet on the final outcome of this debate, for what it is worth? From the quick look I had at the papers, “regression to the mean” seems to be a central issue in this statistical debate. And ah! if there is one topic on which nobody agrees (among and between statisticians, biologists, and economists), this is it.* So in my humble opinion, the debate is ripe to take a turn that is very common in these cases: “it always ends in statistics”. Participants will retort with increasingly sophisticated and intractable analytical refinements, obfuscating the core issue that drew a large audience to the debate in the first place.

* See an article by Stephen Stigler on the topic
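For readers who want to see why regression to the mean matters here, a toy Python simulation can make the effect concrete. This is my own illustration, not code from any of the papers above: each simulated “subject” has a stable true score, measured twice with independent noise. Selecting the subjects who score highest on the first measurement and then looking at their second measurement shows the fallback toward the average.

```python
import random

random.seed(42)

# Each subject: a stable true score plus fresh measurement noise
# on each of two occasions.
n = 10_000
true_scores = [random.gauss(0, 1) for _ in range(n)]
first = [t + random.gauss(0, 1) for t in true_scores]
second = [t + random.gauss(0, 1) for t in true_scores]

# Select the top 10% on the FIRST measurement only.
cutoff = sorted(first)[int(0.9 * n)]
selected = [i for i in range(n) if first[i] >= cutoff]

mean_first = sum(first[i] for i in selected) / len(selected)
mean_second = sum(second[i] for i in selected) / len(selected)

# The group's second-measurement mean falls back toward the
# population mean (0): part of their extremity the first time
# around was just noise.
print(f"top group, first measurement:  {mean_first:.2f}")
print(f"top group, second measurement: {mean_second:.2f}")
```

The same logic is why picking out the strongest correlations in a dataset and then reporting those very correlations tends to overstate them: the selection step favours values that noise pushed upward.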


Read Full Post »

The institutionalization of neuromarketing

To my knowledge, the history and sociology of science concentrate their efforts mostly on ideas (intellectual histories) and technologies (technology studies, or STS). Coming from economics, I have developed the feeling that this distinction leads to neglecting the applied side of science: neither purely ethereal, as ideas can be, nor completely embodied in objects with contours and patented identities, as technologies can be.

Applied fields of economics, like finance, health economics, and agricultural economics, surely deserve special reflection on their organizational dimension and on the special places where they are developed. Beyond the dichotomies of ideas and tools, theoretical and technological, the stuff of applied science is the organizational and the intercultural. Where is it practiced? Under which contractual arrangements? For which output, measured against which standards? Sales, publications, royalties, the size of an organization, a successful entrepreneurial career? Peer review or hierarchical coordination? Impact factor, profit target, or audience ratings?

To give an example: yesterday, the University of Reading posted a job announcement to hire a “Neuro Marketing Researcher“.

Bunnyfoot specializes in eyetracking

The deal is a two-year project carried out not at the university but in a private firm specializing in eye-tracking, Bunnyfoot Ltd. The firm cooperates with the University of Reading under a Knowledge Transfer Partnership, for example in this recruitment process.

To me, this is a nice illustration of how neuromarketing develops in practice, and of why it must be understood by observing its development at the interface between several cultures. It can’t be classified, or studied, as merely an academic venture OR a business opportunity. Just like finance, neuroeconomics, development economics, and any economics-of-applied-stuff, neuromarketing does not develop only in the ivory tower of academia, but also in consumer groups, small consulting firms, hospitals, courts, cabinets, NGOs, funding agencies, professional and popular media, and the interstices between all of these. What an exciting research program!

Read Full Post »

It is not every day that a research group gets the attention of the international media. That was the case a few days ago with a study on social conformity, i.e. peer pressure, published by my colleagues at the Erasmus Centre for Neuroeconomics: they got CNN coverage!

Vasily Klucharev, lead author on the study of social conformity

You can check it out here: http://edition.cnn.com/2009/HEALTH/01/15/social.conformity.brain/#cnnSTCVideo

Or for a popular-science version, click here: http://www.sciencedaily.com/releases/2009/01/090114124109.htm

And the original article:


What accounts for such a huge reaction from the media? I tend to think that the seriousness of brain research and the sexiness of probing social issues connected to our daily lives make for a very efficient cocktail. It secures both the publication of the study in a top journal with a lot of exposure (its impact factor has two digits…!!) and the intelligibility of the research question to a wider audience, which makes it easy for journalists to “translate” the results into broad moral lessons. Note, by the way, how the journalists on CNN conclude the story by completely denying the results suggested by the study!

Read Full Post »

At the opening symposium of the Donders Institute (Netherlands) last November, Victor Lamme showed a video that fully captured the attention of the packed audience. It was a filmed psychology experiment in which people were being fooled by simple, even obvious tricks.

It introduced his discussion of the nature of consciousness and awareness, with difficult questions such as: can we say of someone who is not aware of what s/he perceives that s/he is still conscious? Or as Victor Lamme puts it elsewhere: ‘In my research I want to separate becoming conscious of the outside world from the reporting on it’.

The talk was good, with Lamme suggesting a definition of consciousness based on functional and structural criteria of neural activity. At the very end, however, a question from the floor completely undermined his argument: someone pointed out that, by those very criteria, neural activity during sleep would also qualify as consciousness. Quite a problem!

Anyway, I have found a similar video demonstrating how easy it is to trick conscious and fully aware people. It is less academic than the one presented by Lamme, but it has the benefit of working on you, the viewer. So, will you be tricked? (Please answer the anonymous poll below.)

Read Full Post »

It would seem that using the language of neuroscience makes it easier to trick people into believing false statements. This argument was made by a group of psychologists in a celebrated paper in the Journal of Cognitive Neuroscience last year, but it deserves a third glance, at least for its ironic conclusion.

On irony… Think about it: if making something sound ‘neurosciency’ makes it easier for others to believe, how much easier is it to make people believe an argument about neurosciency appendages published in a peer-reviewed psychology/neuroscience journal? In fact, every sucker around should be clamouring to believe that paper. I say sucker, and by that I mean myself, who liked the argument immediately; and I guess also non-neuroscientists like the media, the popular press… and psychologists?

This is where ‘celebrated’ comes in: the paper was popularized in the media, and it spends quite a bit of time congratulating itself and its subject, noting that “it is hardly mysterious that members of the public should find psychological research fascinating” (p. 470).

So are the results really good? Well, maybe… and maybe not. The authors note older evidence that longer explanations seem more credible to the unsuspecting public, and the neurosciency explanations in their test were longer… Last week a neuroscience blogger noted that what “the authors have strictly shown is that longer, more jargon-filled explanations are rated as better – which is an interesting finding, but is not necessarily specific to neuroscience.” This point is also conceded in the original paper, where the authors “believe that our results are not necessarily limited to neuroscience or even to psychology” (p. 476), and it is really just what Kikas (2003) had already said.

Well, if the paper’s fundamental argument is weak, the popular reaction ironically underlines its thesis while at the same time questioning the quality of the research… Then, by extension, does it also call into question neuroscience research as a whole? Or just the accompanying psychology?

Read Full Post »