The Method in Our Madness: Data Collection and Analysis for Our Study of Higher Education, Part II

by Wendy Fischman and Howard Gardner

When hearing about our ambitious national study of higher education, colleagues often ask us how we went about carrying out the study and how we will analyze the various kinds of data and draw conclusions. At first blush, the decision to carry out approximately 2000 semi-structured hour-long interviews across ten deliberately disparate campuses, to record and transcribe them, and then to analyze the resulting “data” seems overwhelming—and not just to others! Moreover, when asked for the “hypotheses” being tested, we always reply that we did not have specific hypotheses—at most, we knew what general issues we wanted to probe (e.g., academics, campus life, and general perspectives on the means and goals of higher education). Additionally, we wanted to discover approaches and programs that seemed promising and to probe them sufficiently so that we could write about them convincingly and—with luck—evocatively.

An Earlier Model

We did not undertake this study entirely free of expectations. Our colleague Richard Light, now a senior adviser to the project, has spent decades studying higher education in the United States; he provided much valuable background information, ideas about promising avenues to investigate, and some intriguing questions to include in our interview. Both of us (Wendy and Howard) had devoted over a decade to an empirical study of “good work” across the professions. In that research, planned and carried out with psychologists Mihaly Csikszentmihalyi and William Damon and their colleagues, we had interviewed well over 1200 workers drawn from nine distinct professions. The methods of interviewing—and the lack of guiding hypotheses—were quite similar. Because we were frequently asked about our methodological approach, we prepared a free-standing paper on the “empirical basis” of good work. In addition, reports on our findings yielded ten books and close to 100 articles; moreover, this project led to several other lines of research—see TheGoodProject.org. Our prior work on “good work” served as a reasonable model as we undertook an equally ambitious study of higher education.

In this blog post and the two others in this series, we seek to convey the “method” behind our undertaking.

Part II. Major Concepts in our National Study of Higher Education

Initial Surprises and Emerging Concepts

When thoughtful researchers begin to carry out interviews, and speak with one another about what they have seen and heard, their own thinking inevitably evolves. When our study began, we were at most dimly aware of the importance of mental health issues on American campuses; as that impression solidified, we added interviews with leaders and counselors who are directly concerned with student well-being. More recently, as we encountered evidence of the importance of a feeling of “belonging” on campus, we also explored that topic in more depth, specifically in our coding.

From another angle, we were surprised by the scant mention of artistic opportunities on campus (by students who do not major in an art form) and by the lack of detail about ethical and moral dilemmas that arise. And so we did additional probing on those topics to learn more about participants’ experiences and perspectives. To our disappointment, most participants had little or nothing to say about these topics.

The original title of our study was “Liberal Arts and Sciences in the 21st Century”—and certainly we were motivated to find out the current status and ultimate fate of a form of non-vocational education that had been valorized in the United States (and in some other nations) in the last decades of the 20th century. Many individuals seem to have little or no knowledge about the meaning of “liberal arts”; and many others have quite serious misconceptions—e.g., that the word “liberal” signals a political rather than a scholarly orientation. Though we will write more about these misunderstandings in forthcoming publications, we do not intend to emphasize the specific term, so as not to polarize our potential readers.

But concern about non-vocational higher education certainly informs our thinking. And indeed, once we had carried out a sufficient number of interviews (a few hundred), we began to develop concepts that would allow us to assess the quality and sophistication of thinking of our participants as well as their orientations to higher education.

Two Key Concepts

Assume that you speak to an individual for an hour, touching on a variety of topics, and giving that person the opportunity to express herself freely, to make connections across questions and issues, and to question or interrogate the questions herself—an analogy would be an uninterrupted wide-ranging conversation with a stranger on a train or a plane. More often than not, after such a conversation, you learn a lot about that person, not only in terms of what she says, but also in terms of how she thinks. We believe that over the course of an hour, you can also make a reasonable determination of whether that person has somehow acquired the equivalent of a non-vocational education in the liberal arts and sciences.

1. LAS Capital

Accordingly, we developed the concept of “LAS capital” (pun intended)—which we call LASCAP—a rough measure of the extent to which a participant displays the kind of thinking that one would expect of a graduate of non-vocational higher education (and not, except in rare cases, of a high school student).

And here is where our methods begin to reveal themselves. To assess an individual’s LASCAP, we use two separate measures, each scored separately. First, a coder “blind scores” (i.e., the coder does not know any identifying information about the participant) a participant’s responses to seven specific questions in the interview. The questions range widely, from how a participant rank-orders the four main purposes of college to her selection of a book to give to a hypothetical graduating student.

Second, the same coder scores a participant’s LASCAP based on the entire interview, a cumulative measure. Both the seven question scores and the whole-interview score range from 1 to 3 (1 for little or no capital; 3 for a high amount of capital). We code LASCAP in this way to ensure that we consider the specific questions that we believe elicit the most LASCAP as well as monitor what emerges over the course of an hour (in case a participant does not have much to say about some specific questions).

For both measures, we ensured that scoring was consistent across coders (through pilot testing the measures and discussing them in team meetings). As with the other holistic concepts in the coding scheme, any scoring disagreements (between a coder and a shadow coder) are discussed and resolved. In our coding, the two measures (the overall measure and the mean of the seven question scores) correlate quite well.
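To make the two-measure scheme concrete, here is a minimal sketch, in Python, of how the mean of the seven blind-scored questions might be compared with the holistic whole-interview score. It is not the project’s actual coding software; all participant scores and field names below are invented for illustration.

```python
# Minimal illustrative sketch (hypothetical data, not the study's records).
# Each participant has seven blind-scored question ratings and one holistic
# rating for the whole interview, all on the 1-3 LASCAP scale described above.
from statistics import mean, correlation  # statistics.correlation needs Python 3.10+

participants = [
    {"question_scores": [1, 2, 2, 1, 2, 1, 2], "holistic": 2},
    {"question_scores": [3, 3, 2, 3, 3, 2, 3], "holistic": 3},
    {"question_scores": [1, 1, 2, 1, 1, 1, 2], "holistic": 1},
    {"question_scores": [2, 3, 2, 2, 3, 2, 2], "holistic": 2},
]

# Measure 1: mean of the seven question-level scores per participant.
question_means = [mean(p["question_scores"]) for p in participants]
# Measure 2: the holistic whole-interview score per participant.
holistic_scores = [p["holistic"] for p in participants]

# How well do the two measures agree across participants?
r = correlation(question_means, holistic_scores)
print(f"Correlation between mean-of-seven and holistic LASCAP: {r:.2f}")
```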

Additional points about LASCAP:

a.) It is difficult to prove that someone lacks LAS capital; scoring is based on the degree of its presence.

b.) Needless to say, first year college students differ greatly in the amount of capital that they display. Of special interest, within and across campuses, is the difference in LASCAP between beginning and graduating students. Ideally, one would want longitudinal data, showing the difference between mean LASCAP for first year students and mean LASCAP for these same individuals several years later. Even without such longitudinal data, the cross-sectional data that we have assembled can be quite revealing; when we publish our findings, we will report whether the mean scores of graduating students are higher than those of the first year students, or whether there are no differences between the two groups. Based on initial data, we expect to find evidence of “growth” between first year students and graduating students (even though these are two different groups of students) as a function of the college experience; we also expect that the degree of change may differ across campuses.
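The cross-sectional comparison described in point b.) amounts to comparing cohort means within (and then across) campuses. The sketch below shows the shape of that computation; the campus names, cohort labels, and scores are invented and are not our actual data.

```python
# Illustrative only: mean LASCAP by cohort within each campus (invented data).
from collections import defaultdict
from statistics import mean

# (campus, cohort, holistic LASCAP score on the 1-3 scale)
records = [
    ("Campus A", "first year", 1.6), ("Campus A", "graduating", 2.3),
    ("Campus A", "first year", 2.0), ("Campus A", "graduating", 2.6),
    ("Campus B", "first year", 1.9), ("Campus B", "graduating", 2.0),
]

by_group = defaultdict(list)
for campus, cohort, score in records:
    by_group[(campus, cohort)].append(score)

for (campus, cohort), scores in sorted(by_group.items()):
    print(f"{campus}  {cohort:10s}  mean LASCAP = {mean(scores):.2f}")
```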

2. Mental Models

Few questions in our study are more important than how individuals think about the purpose of college and what they hope to get out of it. Again, this question can be approached in two ways.

One way is quite simple and straightforward. Toward the end of the interview, we ask subjects to rank-order various purposes of college that have been proposed (as briefly mentioned above). All subjects are given the following four choices, in this order:

  • To get a job

  • To gain diverse perspectives on people, knowledge, and the world

  • To learn to live independently

  • To study a particular content area in depth

Most participants answer this question quite readily. Then, to probe further, we ask participants for the rationale for their ordering. And then we ask them how they think other constituencies would rank-order the options. Thus, for example, if we are speaking to a student, we ask how faculty, administrators, trustees, parents, and alums would answer the question. Needless to say, there may be a lot of “projection” onto other constituencies of purposes which the participants may be reluctant themselves to rank as most important. For example, a first year student might say that faculty would rank studying a content area in depth as most important, but that her parents would rank getting a job as most important.
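For readers who prefer to see the structure of a response, here is one hypothetical way to record a participant’s own ranking along with the rankings she projects onto other constituencies. The identifier, rankings, and rationale are invented, and this is not the study’s actual instrument or database format.

```python
# Hypothetical record format for the ranking question (invented example).
PURPOSES = [
    "to get a job",
    "to gain diverse perspectives on people, knowledge, and the world",
    "to learn to live independently",
    "to study a particular content area in depth",
]

response = {
    "participant_id": "S-001",        # invented identifier
    "own_ranking": [1, 3, 4, 2],      # rank given to each purpose, in the order listed above
    "rationale": "A job makes everything else possible.",
    "projected_rankings": {           # how she thinks other constituencies would rank the purposes
        "faculty": [4, 2, 3, 1],
        "parents": [1, 3, 2, 4],
    },
}

# The purpose this participant herself ranks first:
top = PURPOSES[response["own_ranking"].index(1)]
print(f"{response['participant_id']} ranks '{top}' as most important")
```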

Our other approach, more complex, is analogous to the “holistic” measure of LASCAP. On the basis of pilot and early interviews, we posited the existence of four different “mental models” of college (described here with prototypical responses):

1. Inertial: “First one goes to high school; then the next step is to go to college. I am not exactly sure what college is about or why I am here.”

2. Transactional: “I am here to get a degree, period. I’ll do whatever is required, in terms of academics, social life, and extra-curricular activities, to ensure that I get into graduate school or get a job after college, and I’ll be sure not to do anything that will jeopardize that chance.”

3. Exploratory: “College offers me a once-in-a-lifetime opportunity to take new courses, make new friends, participate in unfamiliar activities, and travel and spend time in new destinations. I intend to make the most of this opportunity—better to venture and fail than to stick to the tried and the true.”

4. Transformational: “Before coming here, I was one kind of person, from a certain locale, demography, and set of expectations. In college, I shall strive to become a new person—fashion and refashion my identity, interact differently with individuals, and, as a result of my studies, gain new and different ways of thinking about people and content knowledge, using my mind and my imagination in unanticipated ways. I’ll visit for sure, but ‘I won’t go home again.’”

Researchers code each participant as exhibiting one particular mental model, based on a “holistic” reading of the transcript. Though coders do not score specific questions (as they do for LASCAP), there are particular questions which elicit the most useful information—for example, participants’ goals for college, views on what all students should learn from college, and, as mentioned above, how they rank the main purposes of college. As with LASCAP, we are interested in the ways in which these mental models may differ between first year students and graduating students within and across campuses.
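Once each transcript has been assigned a single mental model, the within- and cross-cohort comparisons we have in mind reduce to simple tallies. A minimal sketch follows; the coded assignments are invented, not our data.

```python
# Illustrative tally of holistically coded mental models by cohort (invented codes).
from collections import Counter

MODELS = ["inertial", "transactional", "exploratory", "transformational"]

coded = [
    ("first year", "transactional"), ("first year", "inertial"),
    ("first year", "exploratory"),   ("graduating", "transformational"),
    ("graduating", "exploratory"),   ("graduating", "transactional"),
]

for cohort in ("first year", "graduating"):
    counts = Counter(model for c, model in coded if c == cohort)
    print(cohort + ": " + ", ".join(f"{m}: {counts[m]}" for m in MODELS))
```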

In the conclusion of this three-part blog, we describe other key analyses that we are undertaking.

© 2018 Wendy Fischman and Howard Gardner
