This is Part II of my analysis of the research supporting the LETRS professional development program. LETRS is owned by Lexia, which has an annual revenue of somewhere between $100 million and $500 million (Google research). Lexia’s parent company, Cambium Learning, has an annual revenue of somewhere between $250 million and $750 million (Google research). Keep in mind, however, that these are just Google facts, so their accuracy cannot be verified. But suffice it to say, there’s a whole bunch of money being distributed here, and none of it is going to poor people or to fight global warming.

My Alexa app tells me that Dr. Louisa Moats, the author of LETRS, is worth $20 million. Again, like Google, this is not a very good source of data. Alexa can say anything; there’s no peer review. But even if this is just a little bit correct, it’s still about $20 million more than I and the teachers who are forced to take LETRS training are worth.

And why do I mention this? Context. Context matters. Whether you are looking at a letter, a word, a sentence, a fact, data, a reading curriculum, a professional development program, or a research study … context matters. As such, research must also be considered in the context of the researcher’s past work as well as the social, political, and economic contexts in which it is found. Context matters.

The context of LETRS is money. LETRS professional development cannot be fully understood outside the context of a whole bunch of money. It is designed by profiteers to generate profits for profiteers (cite). LETRS also cannot be fully understood apart from the political forces shaping its forced implementation: forces that are trying to disempower and control teachers (cite) and de-legitimize public education (cite). Make no mistake, LETRS is not an academic act. As this short review of the supporting research shows, it is a political act. Context matters.
LETRS and Causal Variables

Let’s start with the simple three-part proposition put forward by the good Dr. Moats and the LETRS family: What teachers know impacts their ability to teach, and teachers’ ability to teach impacts students’ ability to read; therefore, we need to find out what teachers don’t know and make sure they know it. Like Moses coming down from the mountain, Dr. Moats offers us this special knowledge to save us from our reading instructional sins. Glory hallelujah.

Put another way:

A (LETRS professional development) = B (more effective teaching).
B = C (higher levels of reading achievement).
Therefore, A = C.
Also, A = D (the solution to all reading problems).
Also, A > E (all other forms of teacher professional development for reading).
Also, F (all teacher preparation programs) < A.
Amen.

All well and good. However, the monkey wrench in this fly ointment is that Dr. Moats has yet to establish a causal link from A to B, C, or D. The empirical data she offers consist of a lot of “perceptions,” observations, and surveys. There is also a distinct lack of comparative research showing that A is greater than E, or that F is less than A. However, one causal link has been established: A (LETRS professional development) = G (profits).

Again, we know that knowledge is important to good teaching. However, I’m still looking for some legitimate research showing that Dr. Moats’s knowledge is the right kind of knowledge, or that Dr. Moats’s knowledge is more effective than, say, Dr. Johnson’s knowledge, or Dr. Allington’s knowledge, or the kind of knowledge you might find in the owner’s manual for an Evinrude outboard motor.

LETRS and the SoR

LETRS professional development can be found in the context of the new Science of Reading initiative here in Minnesota called The Read Act. Currently, there are similar SoR initiatives in 38 states. But what exactly is the Science of Reading? The good Dr.
Moats (2019) defines the science of reading as: “It (SoR) is the emerging consensus from many related disciplines, based on literally thousands of studies, supported by hundreds of millions of research dollars, conducted across the world in many languages.” All well and good, but this doesn’t really tell us anything.

Dr. Timothy Shanahan (2021) gets us a little closer to a specific definition of SoR by distinguishing applied research (conducted in classroom settings) from basic research (conducted apart from the context in which it is used or applied). He went on to say that “The science of reading should refer to all empirical studies of any aspect of learning to read, write, and spell in any language” (Shanahan, 2024). Empirical research is research based on the observation and measurement of phenomena as directly experienced by the researcher. Technically, that could include qualitative research; however, the US Department of Education has determined that only a single type of research methodology can be used to ask and answer questions in the field of education. It defines scientifically based educational research as that which:

“… is evaluated using experimental or quasi-experimental designs in which individuals, entities, programs, or activities are assigned to different conditions and with appropriate controls to evaluate the effects of the condition of interest, with a preference for random-assignment experiments, or other designs to the extent that those designs contain within-condition or across-condition controls.”

This is similar to the standard employed by Dr. Shanahan and the National Reading Panel: “To make a determination that any instructional practice could be or should be adopted widely to improve reading achievement requires that the belief, assumption, or claim supporting the practice be causally linked to a particular outcome.
The highest standard of evidence for such a claim is the experimental study, in which it is shown that treatment can make such changes and effect such outcomes. Sometimes when it is not feasible to do a randomized experiment, a quasi-experimental study is conducted. This type of study provides a standard of evidence that, while not as high, is acceptable, depending on the study design” (NRP Report, p. 7).

This means that the SoR refers to a general consensus among researchers about the strategies and practices that lead to improved reading outcomes. These strategies and practices have been determined to be effective using experimental or quasi-experimental research. This research will have established a causal link between the strategies or practices and student reading outcomes (see Chapter * for quasi-experimental design). Further, this research will have been conducted in actual classroom learning environments. According to SoR advocates, this is the standard that should be used to make all decisions related to reading instruction and policy.

“Are you teaching SoR?” “Are you a SoR teacher?” “Do you teach SoR in your reading methods class?” “Are you doing SoR at your school?” When these questions are asked, they are most often referencing a set of pedagogical strategies and practices, often involving some form of direct instruction.

Basic SoR definition = Using strategies and practices shown to be effective using controlled experimental or quasi-experimental research conducted in classroom settings.

Even though I think the SoR definition above represents a very narrow view of what reading research is, I could live with it … (dramatic pause) … if SoR advocates would hold themselves to the same standard. That is, if mandated programs, curricula, and policies were all based on that which was shown to be effective using controlled experimental research conducted in the settings in which they will be used: classrooms. But, alas and alack, they’re not.
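As an aside on why that control-group standard matters: here is a toy simulation (mine, not from any study discussed here; every number in it is invented) showing how a single group of students selected for low pre-test scores will post “gains” on a re-test even when there is no treatment effect at all, purely from measurement noise.

```python
import random
import statistics

random.seed(42)

def observed(true_score, noise_sd=8):
    """One noisy test administration of a student's underlying 'true' score."""
    return true_score + random.gauss(0, noise_sd)

# A population of students with stable true scores and ZERO treatment effect.
population = [random.gauss(50, 10) for _ in range(10_000)]

# Single-group design: select the students who scored low on the noisy
# pre-test (as studies of "poor readers" do), then re-test the same students.
pre = [(t, observed(t)) for t in population]
low_scorers = [(t, p) for t, p in pre if p < 40]
post = [observed(t) for t, _ in low_scorers]

pre_mean = statistics.mean(p for _, p in low_scorers)
post_mean = statistics.mean(post)

# With no real improvement, the selected group's mean still rises on the
# re-test: regression toward the mean masquerading as a treatment effect.
print(f"pre-test mean:  {pre_mean:.1f}")
print(f"post-test mean: {post_mean:.1f}")  # several points higher, no treatment
```

A randomized control group would expose the artifact immediately, because untreated low scorers would show the same phantom “gain.” That is the whole point of the experimental standard quoted above.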
A good example of this is LETRS, one of three professional development programs that teachers are required to take here in Minnesota. One would think that one would be able to find at least one of the “literally thousands of studies, supported by hundreds of millions of research dollars, conducted across the world in many languages” that Dr. Moats references. These would demonstrate a causal link between LETRS professional development and (a) teachers’ ability to teach reading effectively or (b) readers’ ability to read effectively. One would also think that one could find at least one of those “literally thousands” of controlled experimental research studies comparing LETRS to other forms of professional development. This, after all, is the SoR standard. However … (another dramatic pause) … there are relatively few. And the research studies that have been conducted don’t seem to meet basic SoR standards (see Figure 1). It seems as if Dr. Moats and her LETRS family are given a free pass. The question is, why? Why are they not held to the same scientific standards as the reading teachers in Minnesota? A standard is not really a standard if it is not standardized. Which leads me to offer my own SoR definition:

Andy’s SoR definition = You just can’t say shit.

LETRS Knowledge

Again, Dr. Moats and her LETRS minions would have us believe that if teachers just had the right kind of knowledge (her kind of knowledge), then they could teach reading effectively and all those pesky reading problems would go away. Now, they are partially correct. It has been well established that having a body of knowledge is an essential component of expertise in any domain (Sternberg & Williams, 2010), including teaching (Bruer, 1999; Darling-Hammond, 1999; Eggen & Kauchak, 2007; Sternberg & Williams, 2010). However, the real question has always been: what kind of knowledge is important? Dr. Moats claims that her kind of knowledge is the right kind of knowledge. According to Dr.
Moats, it’s not just important, it’s imperative for teaching reading effectively. To which I would respond in the most respectful way possible: “pish-posh.”

Baloney-Based Conclusions

Before we examine some of the research used to support the claims of Dr. Moats, we need to understand three important things about research.

1. Research is a process. The researcher asks a question, then conducts research to generate or gather data to answer that question. Logical inferences, in the form of conclusions, are made based on the data.

2. Conclusions should be based solely on the data collected. I mention this because in some of the research studies conducted by Dr. Moats, there was a tendency to go well beyond the data in drawing conclusions (Moats, 1994; Moats & Foorman, 2003; Moats, 2004; Foorman et al., 2003; Foorman et al., 2006). In technical terms, this is known as baloney.

3. Baloney-based conclusions are problematic. Readers have a tendency to look at baloney-based conclusions and say, “Aha! Research supports baloney,” when in fact the data collected do nothing of the kind. Baloney-based conclusions may be interesting and even compelling, but if they go beyond the data in a research study, they are baloney. And as everyone knows, baloney is for sandwiches, not for research.

SoR Research Standards

So, if we want to be responsible consumers of educational research, and if we truly want to be in alignment with the SoR, the research standards in Figure 1 should be used to evaluate programs such as LETRS or policies such as The Read Act in Minnesota.

Figure 1.
SoR research standards

• Conducted in an actual learning environment
• Experimental or quasi-experimental design
• Treatment linked to teaching effectiveness
• Treatment linked to reading achievement
• Treatment compared to something else
• Supports the proposition/s for which it was cited
• Conclusions are based solely on the data
• The specific question or purpose is clear

Three Primary Research Studies Conducted by Louisa Moats

Below, I closely analyze three research studies conducted by Dr. Moats that are often used, in some form, to support LETRS.

Moats, L. (1994). The missing foundation in teacher education: Knowledge of the structure of spoken and written language. Annals of Dyslexia, 44, 81-102.

This 1994 study was cited in her white paper, Literacy Professional Learning: 10 Reasons Why It’s Essential (Moats, 2021), found on the Lexia LETRS website. It was used to support two propositions:

• Teachers need more training in phoneme awareness and phonics.
• Many adults who become teachers of reading do not have fully developed phoneme awareness or an understanding of why words are spelled the way they are.

For this study, Dr. Moats created a survey and gave it to 89 students enrolled in six sections of an elective class, Reading, Spelling, and Phonology. The survey was based on what students wanted to learn in this elective class (reading, spelling, and phonology). The participants included an equal distribution of classroom teachers, speech-language pathologists, reading teachers, classroom teaching assistants, and graduate students. So, 89 self-selected students, maybe 17 of whom were classroom teachers, were given a survey/test. The content of the survey/test is in Figure 2.

Figure 2. Content of “survey” used in Moats’s 1994 research.

• Identify an inflection and inflected word form.
• Identify the number of morphemes in a word.
• Consistently identify consonant blends.
• Consistently identify consonant digraphs.
• Count the number of phonemes in the following words: ox, straight, king, precious, thank.
• Identify the number of syllables in talked.
• Identify schwa vowels in written words.
• Explain when ck is used.
• Explain the “y to i” rule.
• Know the six syllable types.
• Explain Greek spellings.
• Explain the spelling of double m.

This knowledge is neither necessary nor sufficient for being an effective reading teacher. Also, no specific research question was provided in this research report. The essence of this research is this: Moats gave a survey/test based on what she thought was important, and then wrote a bunch of stuff, some of which related to the survey. This is one of the research studies used to generalize about teachers’ knowledge and teacher education programs.

Moats, L., & Foorman, B. (2003). Measuring teachers’ content knowledge of language and reading. Annals of Dyslexia, 53, 23-45.

The second study we’ll examine is a 2003, three-year study (see Figure 4). In year one, 50 kindergarten, first-grade, and second-grade teachers were given a survey. In year two, 41 second- and third-grade teachers were given a survey. And in year three, 103 third- and fourth-grade teachers were given a survey. The content of these surveys was similar to that in the study above (Moats, 1994). Not surprisingly, she found “surprising gaps in teachers’ insights about learning to read” (p. 36). In the abstract and the discussion section of this article, she tried to make a connection to measures of students’ reading achievement levels as well as teachers’ observed teaching competence. However, those measures were part of a larger study (Foorman & Moats, 2004). In that larger study, the instrument used to observe teachers was not related to teaching reading and had an emphasis on teacher-centered, direct instruction. Also, the measures of reading achievement focused on a couple of low-level reading subskills.
But the larger point is, neither of these measures was part of this study.

Moats, L. (2004). Efficacy of a structured, systematic, language curriculum for adolescent poor readers. Reading and Writing Quarterly, 20, 145-159.

In this third study, Dr. Moats measured gain scores of poorly performing middle school students over the course of two years (see Figure 5). These students used a structured language curriculum called LANGUAGE! (Greene, 1995). Three schools were chosen in which the majority of students (83%) scored below the 25th percentile in reading on the Stanford Achievement Test (SAT-9). Selected subtests from three standardized instruments were used as pre- and post-test measures for reading and spelling: a comprehension subtest, word attack, letter-word identification, and cloze for comprehension. Gain scores were used to compare pre-test to post-test measures. There are several problems with using single-group gain scores (Marsden & Torgerson, 2012; Rock, 2007; Pike, 1992; Tennant, Arnold, Ellison, & Gilthorpe, 2021):

1. Floor and ceiling effects. Students with lower baseline scores consistently make larger gains than those with higher baseline scores. It’s easier to make large gains if you start lower. For example, 8% growth starting at the 23rd percentile is more likely than 8% growth starting at the 78th percentile.

2. Regression toward the mean. Extreme scores on a pre-test tend to move toward the mean on a re-test simply because of measurement error. This is especially likely when the sample overrepresents students with low beginning scores (like this study): their scores will rise on the post-test even with no real improvement.

3. Test effects. Improvement could be a result of the test itself. Students could remember the questions, or the questions could raise an awareness that triggers learning after the pre-test.

4. Maturation. Simply growing, developing, and being exposed to content causes growth. Learners tend to improve over time due to maturation alone.
Without a control group, we can’t say that the treatment was the causal factor.

5. History. The treatment is not compared to anything else. We don’t know if the change from pre-test to post-test is a result of (a) the treatment, (b) the effects of normal educational experience, or (c) other innovations or differences in practice.

LETRS Efficacy Research and White Paper

LETRS is published by Lexia®. If you go to their website (www.lexia.learning.com), you will find Lexia® LETRS® Efficacy Research (Lexia, 2023). This document contains the 18 research studies that Lexia claims “constitutes the evidence base for LETRS” (p. 1). On that same site, you will find a white paper written by Louisa Moats (2021) entitled Literacy Professional Learning: 10 Reasons Why It’s Essential. Of these 38 research studies across the two documents, none meets basic SoR standards. None of them makes a causal link between LETRS professional development and (a) teachers’ ability to teach reading effectively or (b) readers’ ability to read effectively. And none of them compares LETRS to other forms of professional development.

Conclusions

The Science of Reading promotes the exclusive use of strategies and practices that have been shown to be effective using controlled experimental or quasi-experimental research conducted in actual classroom settings. Further, according to its advocates, this standard should be the basis for all decisions about reading instruction and reading policy. What are we to conclude based on the data? This standard is being selectively applied. And LETRS fails to meet this basic SoR standard.