American Orthodoxy

Drawing the Right Conclusions: A Defense of a Recent Orthodox Survey


Zvi Grumet

Matt Williams critiques a study I recently conducted and publicized. His critique focuses on two main elements: first, that my work does not meet the standards of academic, statistically valid social science research; second, that the types of questions, language, and analysis I used reflect an inherent bias. Let me address each of these.

Williams is right. I am neither a professional social science researcher nor a statistician, and I do not pretend to be either one. I am an educator deeply concerned that we don’t know enough about what we’re doing in our educational systems or about its long-term impact. I have spoken to hundreds, if not thousands, of students over the years and noticed certain patterns, particularly in the last 10-15 years. I thought it was important to find out whether those patterns held true beyond my limited interactions. That is why I undertook the study.

It was clear from the outset that using social media to disseminate the survey instrument would compromise the statistical purity of the truly random sample demanded by the academic community. I acknowledged that limitation at the beginning of my report, and again in an addendum I later posted on the blog where the report first appeared, noting that certain populations were clearly underrepresented. Despite that, I was and still am comfortable with that limitation, and I would probably do it again the same way.

The primary reason is practicality. Without significant funding and access to a database of Yeshiva high school graduates, there was no reasonable way to reach and target this population. And although I have good relationships with many people deeply involved in Yeshiva high schools, I doubt that they would have released the names and contact information of their graduates, for a plethora of good reasons.

It was because of this methodological limitation that I was careful in my report to emphasize that it represented nothing more than what the respondents reported. A quick search of the report will reveal that I used the word “respondents” 62 times (including seven times in the summary) to emphasize that point. Further, I made no specific recommendations for how the educational community should respond, other than to consider my findings as they deliberate how to plan their educational programming and messages.

This brings me to the motivation which drove me to undertake this project. The Yeshiva high school system (it is more of a loose association of independent schools than a system) costs Jewish families upwards of $250 million annually, yet I am not aware of a single study done by the schools regarding their long-term success in their Jewish program. In fact, it is not clear to me how they would even define long-term success. I am hoping that, in the wake of this study, the schools will begin to develop their own instruments to measure that success and use that data to reevaluate and refine what they teach and how they teach.

This is not what Williams calls “Shabbos-table” talk and is not directed at central agencies tasked with distributing large sums of money. This, in my opinion, should be the talk of board meetings, parent meetings, and faculty meetings of schools at the local level.

I believe that, despite the statistical limitation, the data in this study provide enough grounds to warrant beginning that kind of reflective process. I do not believe that the distinctions between Orthodox practice and Orthodox belief, or between public and private observance, revealed in the study are the products of an anomalous sample; I do not believe that the distinction between intimacy practice and other practice is an accident in the data; and I do not believe that the observed shifts in religious behavior through various stages of life, or movement in and out of Orthodox practice, are the result of bad sampling.

It is hard to believe that two studies conducted in very different ways and with no contact with each other would find the same patterns based on the same accidental anomalies—rejecting those findings because they lack the elegance of statistical purity raises more questions about the rejection than about the studies. The numbers may not be precise, and I’m not convinced that the numbers of any survey are going to be precise, but that is not the point of this survey.

In many fields, including education, there is a research method known as action research, in which practitioners gather data about their practice and its impact to inform future practice. Those data would never stand up to the statistical demands Williams would require, nor should they. It is a different kind of research, one which provides data that are useful in real time. Action research, like social media-based research, is an example of newer forms of gathering meaningful data quickly so that they can be used effectively. It may be frowned upon by the academy, but it has considerable value in making positive change before the data become irrelevant.

That being said, the demographic data do actually suggest that the respondent population is not as completely skewed as Williams might want us to believe. There is an almost 50-50 split between men and women. Two-thirds of the respondents were raised in the greater metropolitan NY area, with the rest scattered among 19 other regions with Jewish communities. Nearly 80 percent attended Yeshivot/seminaries in Israel. All this is fairly consonant with what we would expect from a completely random sample.

All the above relates to the purist, statistical critique.

But Williams critiques other elements of the survey as well. For example, he insists that there is questioning bias. In his words:

The lens these researchers utilize to investigate and portray their subject—measuring a population against an “accepted” constellation of standards and the words used to describe them—comes with troubling implications. To name just two problems: first, the studies assume a constellation of “core” values but this does not allow for space or opportunity for participants to offer their own definitions of behaviors and beliefs. As a result, both surveys provide less data about the sampled population. Instead, they offer a rather skewed view of how these participants perceive themselves relative to these asserted standards.

While Williams decries “an ‘accepted’ constellation of standards and the words used to describe them,” that is precisely what most of the Orthodox community, and I venture to say nearly all of Orthodox education, is built upon. To avoid using that accepted constellation would miss the point of the study—to what extent have the respondents bought into that constellation which is the warp and woof of the Jewish dialogue in the Orthodox educational world. I would assert that to sidestep the use of those terms would completely misunderstand and misrepresent this population.

Williams also critiques the analysis employed. In his words:

To take one example, the Lookstein (sic) study writes that “while 93.9% required rabbinic kashrut certification for products in the home, only 76.4% indicated the same requirement for restaurants, suggesting that communal norms on having a home that others could eat in was more important than the personal observance of the restrictions.” Setting aside whether or not those percentages are even accurate, here we find a discussion about observance that takes places entirely in the realm of the researcher’s analyses. There’s no place in the survey that allows respondents to define a set of standards by which they measure “observance.”

It seems to me that he completely missed the point. Both halakhically and sociologically, kashrut (outside of Israel) is defined primarily by two factors—the nature of the products being consumed and the separation between meat and milk. In contemporary Orthodoxy, levels of kashrut are defined by the extent to which one is careful in these two areas. Discrepancy between the observance of these rules in two different realms demands an explanation, and the explanation offered emerges from both the evidence provided and insider knowledge of the community and its practices.

Another critique leveled by Williams relates to the language used. For example, he does not like the term “off the derekh” or OTD, and suggests that

the language used in the surveys themselves (e.g., OTD or “Off the Derekh,” to refer to those who “leave” Orthodoxy) can alienate potential respondents (e.g., many who leave Orthodoxy prefer the term ex-O). In addition to the political and social repercussions—it is a difficult thing to do to an otherwise already marginalized community—alienating respondents also narrows the population that surveys can potentially draw from to help craft a more comprehensive image.

Williams could not be more wrong, and he would have realized that had he read the report more carefully. I used the term once in the report, to describe a phenomenon as characterized by others. That language, or any implication of it, did not appear anywhere in the survey instrument itself. There is no way that any respondent could have been alienated by a term that was not used or even implied.

There are other areas in which Williams would not have erred had he read more carefully. For example, the report’s first page explicitly states:

This survey was undertaken as a private research project by the author after 35 years of work in and for day schools. It was not sponsored by any granting organization and not influenced by any agenda other than my own desire to find out where the graduates of Yeshiva high schools are.

Despite this, multiple times in his critique he identifies the survey as one conducted by The Lookstein Center. Although I do work for The Lookstein Center in a completely different capacity, it played no role in this project and bears no responsibility for creating or administering the survey instrument, nor for the content or flaws in the analyses and the report.

And here is the rub. While Williams may be right to call for greater statistical rigor in studies of the Jewish community, it is he who may be drawing conclusions based on something other than the evidence. Someone committed to ensuring that we learn the right things about the Jewish community through careful work should take greater care before charging that “the Jewish community, as evidenced by these and many other studies, does not really seem to care about alienating respondents because it does not care about getting it right.”

I would be more careful about drawing spurious conclusions from scant evidence.

Zvi Grumet earned his Rabbinic ordination and Ed.D. at Yeshiva University. He teaches at Yeshivat Eretz HaTzvi and other university-level programs in Israel. Rabbi Grumet is Director of Education at The Lookstein Center for Jewish Education. His books include Moses and the Path to Leadership (Urim, 2014) and Genesis: From Creation to Covenant (Maggid, 2017), and he is Senior Editor of the soon-to-be published Koren Young Adult Humash - Lev Ladaat (Koren).