Matthew Williams
My grandfather is a statistician. Over his decades of work, he encountered a tremendous amount of problematic social science research. In 2002, he even wrote a book about the stories behind the field’s original innovations and its endemic, ridiculous flaws.
One of his fondest stories is about the time his local Jewish federation asked him to conduct a communal study. Once he had determined the relative overall size of the Jewish community (easier back then, given the preponderance of affiliation with a limited number of institutions, namely synagogues and temples), he suggested sending out a mere few hundred surveys and following up with respondents. A number of the communal leaders could not believe that such a limited sample could really represent their community. To remedy the problem, one of the federation leaders passed out the survey to all the Jews he knew. My grandfather chuckles every time. In social science research, he understood, a much smaller random sample with a high, or even decent, response rate far outpaces a larger, biased survey in its ability to represent, with any degree of accuracy, the community it seeks to study.
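His point is easy to demonstrate for yourself. Below is a minimal simulation sketch in Python, using entirely hypothetical numbers (a 10,000-person community, a trait held by 40% of its members, and trait-holders three times as likely to answer the opt-in survey): a few hundred random responses land near the truth, while thousands of self-selected ones drift far from it.

```python
import random

# Hypothetical community: 10,000 people, 40% of whom hold some trait we want to measure.
random.seed(0)
community = [1] * 4000 + [0] * 6000
random.shuffle(community)
true_rate = sum(community) / len(community)  # 0.40 by construction

# A small random sample: 300 people drawn uniformly at random.
random_sample = random.sample(community, 300)
random_estimate = sum(random_sample) / len(random_sample)

# A large biased "survey": trait-holders are three times as likely to opt in
# (60% vs. 20%), as when a leader hands the form to everyone he knows.
biased_sample = [x for x in community if random.random() < (0.60 if x else 0.20)]
biased_estimate = sum(biased_sample) / len(biased_sample)

print(f"true rate:             {true_rate:.2f}")
print(f"random sample (n=300): {random_estimate:.2f}")  # typically within a few points of 0.40
print(f"opt-in sample (n={len(biased_sample)}):  {biased_estimate:.2f}")  # near 0.67, despite ~3,600 responses
```

Under these assumptions, the opt-in survey collects more than ten times as many responses as the random one and still overstates the trait by more than twenty-five points in expectation; the small random sample does not.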
The tale seems especially pertinent today. Two new surveys have recently emerged on the Modern Orthodox landscape, attempting an in-depth look at the religious behaviors and beliefs of a population that, with a few notable exceptions, has been largely and historically neglected by the broader Jewish social science research community. This population is also subject to some of the costliest interventions (e.g., Jewish day schools) on the contemporary Jewish scene, making the lack of data to gauge philanthropic returns all the stranger. They are “The Nishma Research Profile of American Modern Orthodox Jews” (September 2017) and the Lookstein Center’s Zvi Grumet’s “Survey of Yeshiva High School Graduates” (January 2018). Key findings of both studies include a seeming fragmentation of Modern Orthodoxy, as some adherents “slide to the right” while others move “left” or “leave the fold.” Both also highlight the transformational impact of the younger generation, roughly one third of whom, on each side, move further away from the “center” in various observances and beliefs.
Unfortunately, both studies fall well short of the standards of social science research, generally. In doing so, both end up reinforcing many of the problems endemic to the study of Jews, specifically.
The lens these researchers use to investigate and portray their subject, measuring a population against an “accepted” constellation of standards and the words used to describe them, comes with troubling implications. To name just two problems: first, the studies assume a set of “core” values but offer participants no space to define their behaviors and beliefs in their own terms. As a result, both surveys provide less data about the sampled population than they appear to; what they offer instead is a rather skewed view of how participants perceive themselves relative to these asserted standards.
To take one example, the Lookstein study writes that “while 93.9% required rabbinic kashrut certification for products in the home, only 76.4% indicated the same requirement for restaurants, suggesting that communal norms on having a home that others could eat in was more important than the personal observance of the restrictions.” Setting aside whether those percentages are even accurate, here we find a discussion about observance that takes place entirely in the realm of the researcher’s analysis. Nowhere does the survey allow respondents to define the standards by which they measure “observance”; such a question would have provided surer footing for the speculation offered here. Without the respondents’ own correlative set of definitions, we’re left with an implicit frame developed and deployed by the researcher based on what… we don’t know.
The second, and perhaps more troubling, feature is that the language used in the surveys themselves (e.g., OTD, or “Off the Derekh,” to refer to those who “leave” Orthodoxy) can alienate potential respondents (many who leave Orthodoxy prefer the term ex-O). Beyond the political and social repercussions of stigmatizing an already marginalized group, alienating respondents narrows the pool from which surveys can draw to craft a more comprehensive picture.
This last point, the alienation of potential respondents, gets us to the crux of the matter. Over the course of conducting and reviewing dozens of studies on faith and ethnic communities, I have come to believe that the threshold for accuracy is very low as long as the rhetorical flavor is right: as long as a study’s findings offer “provocative” points that prompt discussion around the issue du jour, whether it’s the place of LGBTQ-identifying individuals in the Orthodox community or the potential for women in the rabbinate. Perhaps this is a bit harsh, but the Jewish community, as evidenced by these and many other studies, does not really seem to care about alienating respondents because it does not care about getting it right.
On the one hand, both surveys, to their credit, acknowledge these limitations. The authors of the Nishma research write:
[T]he social research profession advises treating web-based opt-in surveys with caution. That means, for example, that we should draw conclusions only if the findings are rather pronounced and we have good theoretical reason to believe them. We follow that approach throughout our analysis. We seek findings that have statistical validity and have underlying theoretical rationale.
Similarly, the Lookstein survey notes that “[t]here are limitations to this survey. The method of its distribution does not guarantee a representative sampling, even though it is clear that it did reach multiple segments of the population with equal opportunities for distribution through each respondent.” And here its author hits on the core of the problem: “Because the survey was distributed through social media and not through individual contacts there is no way to gauge response rates.”
On the other hand, both of these surveys treat their samples as if they were representative of their larger populations. This goes for the statistical methods they use and the conclusions they draw. The authors of the Nishma report write that “All survey questions were asked of the Modern Orthodox and the overall responses for the group are accurate within ±1.7% at the standard 95% confidence interval.” This is a patently false claim. Any social science researcher or statistician will tell you that margins of error and confidence intervals can be applied only to a random sample; otherwise, the potential bias of those who choose to respond overwhelms any attempt to characterize the general attitudes of the community studied.
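A back-of-the-envelope check shows what a ±1.7% claim presupposes. Using the standard margin-of-error formula for a simple random sample, with the worst-case proportion p = 0.5 and the 95% critical value z = 1.96 (the sample size below is my inference from that formula, not a figure from the report):

```latex
\[
\mathrm{MOE} \;=\; z\,\sqrt{\frac{p(1-p)}{n}}
\qquad\Longrightarrow\qquad
n \;=\; \frac{z^{2}\,p(1-p)}{\mathrm{MOE}^{2}}
\;=\; \frac{(1.96)^{2}(0.5)(0.5)}{(0.017)^{2}}
\;\approx\; 3{,}300.
\]
```

Every symbol in that derivation is licensed by random-sampling theory alone. With a self-selected, opt-in sample, the formula’s premise fails, and no number of respondents, 3,300 or 33,000, restores the ±1.7% figure.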
Internet-based opt-in surveys are becoming the norm in market research. After all, they’re inexpensive, easy to set up, and easy to distribute, and large numbers of respondents make for a seemingly attractive, high N (number of respondents). Yet researchers have begun to struggle mightily with a simple question: what, if anything, can we really learn from these surveys? If you do not have a response rate (as Lookstein admits), or if you don’t have a sense of the overall size of the population (as both Lookstein and Nishma say), then what is it that we’re really doing? Can we learn anything at all? Most social science researchers today are skeptical that we can, especially if we don’t know the response rate (the percentage of those asked to fill out the survey who actually completed it).
Many social scientists in the Jewish community would argue, as Nishma does, that we should take these findings seriously, given a sound theoretical basis or a sufficiently pronounced result. Others have argued that if a number of surveys’ findings bunch together, then their collective weight confirms their validity despite each survey’s individual lack of statistical validity.
All these defenses have been roundly debunked by the majority of social science researchers.
Theories are meant to be tested, not used as warrants for a survey’s reliability. A large discrepancy can be a mere artifact of a particular survey, sample, or theory; it does not necessarily reflect anything about the community you’re trying to understand. Finally, researchers simply cannot compare results across samples when some of those samples come with severe limitations, like an unknown response rate or an unknown overall population size.
All of this is to say that we don’t really know what we think we know. To pretend otherwise seems to me a grave mistake. If we are truly interested in understanding the populations we hope to study, then we have to do a much better job designing surveys that both ask less judgmental questions and sample accurately, so that we can really engage with the Jewish community as it exists.
But therein might lie my mistake, my own naiveté. Claims like “we need to take a study seriously even if it’s not representative” or “we ought to find uses for such research regardless of its statistical validity” underscore a deeper point: the Jewish community has accepted this level of sociological competency because we either do not know better or do not care.
These surveys pass what I’ve come to think of as the “Shabbos table” threshold. They seem plausible enough to the layperson. They are good to debate because they touch on the “issues” of the day. They provide fodder for various communal pundits. They also reinforce many existing conversational touchstones of the community—perhaps most significantly that Jewish practice is in various states of decline. What is more, their designers make for great synagogue speakers.
There is too much on the line, though, for the Jewish community to settle for this threshold, in at least two respects.
The first is that philanthropists, foundations, federations, and service agencies take these surveys seriously when deciding how and where to invest in the Jewish community. Million-dollar bets that rest upon a house of cards simply will not do. Not only do they lead to significant and sometimes severe waste, but they also set up unfair expectations for service agencies, which end up having to evaluate their programs against such data. Even more tragic than the wasted money, these communal portraits may be woefully misaligned with the realities they ostensibly seek to represent.
Without representative samples or statistically viable research programs, the quantitative data produced by these surveys captures only a marginal view of a vibrant, diverse, and idiosyncratic community of individuals about whom we really know very little.
The second is that, by making these surveys part of, if not foundational to, “Shabbos table” conversation, the Jewish community signals that it doesn’t care about those who are left out. They become the voiceless Jews who aren’t counted, who aren’t considered, and who fall between the cracks opened by the standards of our accepted methodology. That loss, hopefully not just to me, seems plainly unacceptable.