podbaydoor wrote:...so basically, if you read the article all the way to the end, it doesn't say anything with the certainty that the headline seems to imply. How typical.
You know how it goes. "Organized religion shows steady long-term decline" fed through the media machine comes out as "Gee Willikers religion is going extinct!"
As it pertains to this BBC article, the gap is actually a bit wider. It's a good case study of the well-known problem of news articles overgeneralizing the conclusions of the journal articles they're based on. That this article sits at the top of the BBC's "most read" list isn't surprising; it says more about the newness of supposed news than about the inventiveness or insight of the research. Still, few of the statements in the BBC article are actually wrong. They're just arranged and directed to imply far more than the research supports.
Here are a few examples:
BBC wrote:The study found a steady rise in those claiming no religious affiliation.
It did, but hardly beyond what the referenced papers had already shown extensively. From the paper:
Paper wrote:For decades, authors have commented on the surprisingly rapid decline of organized religion in many regions of the world. The work we have presented does not exclude previous models, but provides a new framework for the understanding of different models of human behavior in majority/minority social systems in which groups compete for members.
An understanding, as it is, of an almost entirely mathematical variety. Readers should keep two contexts separate: the broad motivating one (a social understanding of where we are heading as an increasingly secular society) and the narrower one that most similar research papers actually occupy (a bit-by-bit formalization of poorly understood complex phenomena, in the hope that the sum of those bits eventually achieves the former). In the end, the paper deals and contributes primarily in the realm of numerical models, perhaps with the hope that data-driven application of those models will better explain present data, without any silly notion of predicting the future without future data.
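To make "the realm of numerical models" concrete, here is a minimal sketch of the general shape of such a two-group competition model, written as a simple Euler integration. The functional form, parameter names, and values below are illustrative assumptions for this post, not the paper's actual equations:

```python
def simulate(x0=0.6, u=0.7, a=1.5, c=1.0, dt=0.01, steps=20000):
    """Euler-integrate a toy two-group competition model.

    x is the fraction of the population in group X. The rate of
    conversion into a group grows with that group's current size and
    its perceived utility u (u is group X's utility; the other group
    gets 1 - u). Illustrative form only, not the paper's equations.
    """
    x = x0
    for _ in range(steps):
        inflow = (1 - x) * c * x**a * u          # Y members converting to X
        outflow = x * c * (1 - x)**a * (1 - u)   # X members converting to Y
        x += dt * (inflow - outflow)
    return x
```

With these made-up parameters the model is bistable: starting from x0 = 0.6 with u = 0.7, the majority-plus-utility advantage drives x toward 1, while a small enough starting minority (say x0 = 0.1) shrinks toward 0 despite its higher utility. That kind of qualitative behavior, not any forecast, is what these models are built to exhibit.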
To help fit the model to the data (apparently) newly collected by the authors, they propose a numerical experiment (tl;dr: they wrote a computer program) to test the introduction of a novel component into that model:
Paper wrote:We have thus far assumed that society is highly interconnected in the sense that individual benefits stem from membership in the group that has an overall majority. For that reason, the model as written is best applied on a small spatial scale where interaction is more nearly all-to-all. We can generalize this model to include the effects of social networks: rather than an individual deriving benefits from membership in the global majority group, he or she will instead benefit from belonging to the local majority among his or her social contacts.
Which is followed by the aforementioned formalization into a mathematical framework, and a description of the results of testing that formalization in simulation. Translated into the language of a news article: this paper theorizes and tests the effect of replacing the common model of whole-population social interaction with one that includes the effects of separation (geographical or otherwise) between members of those populations. Not exactly a newsworthy or exciting headline, but that's the point.
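The spirit of that generalization is easy to sketch as an agent-based simulation. The update rule below, where an agent's conversion probability depends on its neighbors' affiliations weighted by perceived utility, is my own illustrative stand-in, not the paper's actual dynamics:

```python
import random

def local_majority_step(adj, state, u=0.6, rng=random):
    """One asynchronous update: a randomly chosen agent reconsiders.

    Rather than looking at the global majority, the agent weighs the
    fraction of its own neighbors in group 1 by that group's perceived
    utility u (assumed strictly between 0 and 1). Illustrative rule
    only, not the paper's.
    """
    i = rng.randrange(len(state))
    nbrs = adj[i]
    if not nbrs:
        return
    frac1 = sum(state[j] for j in nbrs) / len(nbrs)
    # odds of joining group 1 rise with its local share and its utility
    p1 = frac1 * u / (frac1 * u + (1 - frac1) * (1 - u))
    state[i] = 1 if rng.random() < p1 else 0

def ring(n, k=2):
    """A ring network where each node links to its k nearest neighbors per side."""
    return {i: [(i + d) % n for d in range(-k, k + 1) if d] for i in range(n)}

rng = random.Random(0)
n = 200
state = [1 if rng.random() < 0.5 else 0 for _ in range(n)]
adj = ring(n)
for _ in range(20000):
    local_majority_step(adj, state, u=0.6, rng=rng)
print(sum(state) / n)  # the higher-utility group tends to spread through the ring
```

On this spatial network the higher-utility group still tends to win, but by growth of local domains rather than through a single global tipping point, which is roughly the distinction the paper's generalization is after.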
Next to that, even the most technical language the BBC article bothers to use sounds banal:
BBC wrote:"It posits that social groups that have more members are going to be more attractive to join, and it posits that social groups have a social status or utility.
It does posit those things, but merely in the introductory section explaining how such factors have already been analyzed and used to form models, and before the main point of the paper, which is to modify the model to account for a nuance in those factors. In the real world, we and the authors knew all along the obvious problems with using either the old or the modified model to support any sort of overgeneralized implication. In fact, the only generalized point of the paper seems to be that introducing the "perturbations" confirms that the models remain well-behaved. From the paper:
Paper wrote:According to our calculations, the steady-state predictions should remain valid under small perturbations to the all-to-all network structure that the model assumes, and, in fact, the all-to-all analysis remains applicable to networks very different from all-to-all. Even an idealized highly polarized society with a two-clique network structure follows the dynamics of our all-to-all model closely, albeit with the introduction of a time delay. This perturbation analysis suggests why the simple all-to-all model fits data from societies that undoubtedly have more complex network structures.
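That quoted claim has a flavor that's easy to check numerically. Below, a toy two-group dynamic is run both all-to-all and on an idealized two-clique society whose cliques mostly see their own members; everything here (the functional form, the coupling parameter eps, the starting values) is assumed for illustration, not taken from the paper:

```python
def step_all_to_all(x, u=0.7, a=1.5, c=1.0, dt=0.01):
    """One Euler step of a toy all-to-all two-group model."""
    return x + dt * ((1 - x) * c * x**a * u - x * c * (1 - x)**a * (1 - u))

def step_two_clique(x1, x2, eps=0.1, u=0.7, a=1.5, c=1.0, dt=0.01):
    """One Euler step for two cliques that each weigh their own members
    at (1 - eps) and the other clique's at eps. An illustrative reading
    of a two-clique perturbation, not the paper's exact setup."""
    out = []
    for xi, xj in ((x1, x2), (x2, x1)):
        m = (1 - eps) * xi + eps * xj  # locally perceived fraction in group X
        out.append(xi + dt * ((1 - xi) * c * m**a * u
                              - xi * c * (1 - m)**a * (1 - u)))
    return tuple(out)

x = 0.6            # all-to-all, mixed start
x1, x2 = 0.9, 0.3  # polarized cliques with the same overall mean
for _ in range(20000):
    x = step_all_to_all(x)
    x1, x2 = step_two_clique(x1, x2)
print(round(x, 3), round((x1 + x2) / 2, 3))
```

In this toy version both populations settle on the same steady state, with the initially low clique arriving later, consistent in flavor with the quoted "all-to-all dynamics plus a time delay" (though of course proving nothing beyond this one parameter choice).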
This is a point extremely interesting in its own right, and for those of us paying attention, really only useful within that computational realm. Were new authors to attempt a model that accounted for all actual real-world factors, they would end up with one with very little stability, and hence very little predictive ability.

In the language of the paper, their monotonically increasing (over the number of adherents) conversion probability function explicitly depends on a variable they call perceived utility. What if we made this variable explicitly dependent on time (vastly complicating, incidentally, any analytical solution of the differential equation), as it is for the long-term trend the paper is helping to explain (the perceived utility of religion having decreased in many countries in recent decades)? An interesting approach would be to generate various over-time noise functions (of varying temporal frequency) and apply them to that dependence. This might show how well the model accounts for unexpected fluctuations in perceived utility, without providing any way to predict those fluctuations. As it stands, the best way to discover what those fluctuations look like is data-driven, and since data lives in the past, that's where we'd probably find the most utility in using the model to explain actual phenomena.
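A minimal version of that noise experiment might look like the following. The model form, the parameters, and the sinusoid standing in for a generated noise function are all illustrative assumptions, not the paper's:

```python
import math

def simulate_noisy_u(u0=0.7, amp=0.15, freq=0.5, x0=0.6, a=1.5, c=1.0,
                     dt=0.01, steps=20000):
    """Euler-integrate a toy two-group model whose perceived utility
    varies in time: u(t) = u0 + amp * sin(2*pi*freq*t), clipped to
    [0, 1]. Illustrative form only, not the paper's equations.
    """
    x = x0
    for k in range(steps):
        t = k * dt
        u = min(max(u0 + amp * math.sin(2 * math.pi * freq * t), 0.0), 1.0)
        x += dt * ((1 - x) * c * x**a * u - x * c * (1 - x)**a * (1 - u))
    return x
```

Sweeping freq (and swapping the sinusoid for generated noise of various spectra) would show how far the steady-state picture survives fluctuations in u. That is exactly the well-behavedness question, and still not a forecast of where u goes next.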
But these are theoretical points. The real point is not to take the validity of these points to be confirmation or denial of the headline of the BBC article. Religion may be headed toward extinction, but then again, so is the human race.