1
00:00:00,000 --> 00:00:06,320
This is the Reading Instruction Show. I'm your host as always, Dr. Andy Johnson.

2
00:00:06,320 --> 00:00:11,160
The topic of today's podcast is an important one, well they're all

3
00:00:11,160 --> 00:00:16,640
important, for reading instructors, for teachers, for educators. It's called

4
00:00:16,640 --> 00:00:24,840
collecting data is not research. There's a difference between collecting data and

5
00:00:24,840 --> 00:00:32,880
research. While the research process includes data collection, collecting data

6
00:00:32,880 --> 00:00:41,160
is not research. Let me explain. The basic research process is this. You ask a

7
00:00:41,160 --> 00:00:49,200
question and then use data to answer the question. That's the basic process. What's

8
00:00:49,200 --> 00:00:56,080
the temperature? That's the question. Look at the thermometer. That's the data. 72

9
00:00:56,080 --> 00:01:02,880
degrees. That's the answer. That's the basic process. An example in reading. Do

10
00:01:02,880 --> 00:01:09,520
our eyes fixate on every word as we read? That's the question. The data would be an

11
00:01:09,520 --> 00:01:16,560
analysis of reading using eye tracking systems. That's the data. And the answer

12
00:01:16,560 --> 00:01:22,640
would be no, our eyes do not fixate on every word as we read. The research

13
00:01:22,640 --> 00:01:30,560
process includes data, but collecting data is not research. In the world of

14
00:01:30,560 --> 00:01:38,640
science and academia and education and reading instruction, research is not

15
00:01:38,640 --> 00:01:44,800
research unless and until it has been subjected to blind peer review.

16
00:01:44,800 --> 00:01:53,240
Collecting data is not research. Let me explain. Peer review or blind peer

17
00:01:53,240 --> 00:02:00,760
review refers to the process used to evaluate the quality of research. This

18
00:02:00,760 --> 00:02:07,240
is how it works. Once a study has been conducted, researchers write an article

19
00:02:07,240 --> 00:02:14,480
describing what they did and what they found. This article is sent off to an

20
00:02:14,480 --> 00:02:21,040
academic journal for consideration for publication. The editor of the journal

21
00:02:21,040 --> 00:02:26,760
selects reviewers who are considered to have expertise in the field and

22
00:02:26,760 --> 00:02:32,480
to know something about what the research was about.

23
00:02:32,480 --> 00:02:40,400
These reviewers evaluate the study without knowing who conducted it. Hence

24
00:02:40,400 --> 00:02:47,720
the term blind peer review. The name is taken off the study, off

25
00:02:47,720 --> 00:02:54,000
the article. Reviewers consider such things as the clarity of the research

26
00:02:54,000 --> 00:03:00,480
question, the theoretical context in which the research question was set, the

27
00:03:00,480 --> 00:03:07,440
adequacy of the methodology, the analysis of the data, the interpretation of the

28
00:03:07,440 --> 00:03:13,720
data, the validity of the conclusions, and the quality of the writing. They then

29
00:03:13,720 --> 00:03:20,240
have four options. One, recommend it for publication. Two, recommend it for

30
00:03:20,240 --> 00:03:26,800
publication with revisions. Three, suggest specific revisions be made and that it

31
00:03:26,800 --> 00:03:35,680
be resubmitted for consideration. And four, recommend the article be rejected. Peer

32
00:03:35,680 --> 00:03:45,280
review simply denotes a process and the quality or rigor of this process varies.

33
00:03:45,280 --> 00:03:51,520
Reviewers and editors of highly prestigious academic journals use a

34
00:03:51,520 --> 00:03:58,480
process that's rigorous and very selective. These journals have low

35
00:03:58,480 --> 00:04:07,080
acceptance rates and tend to have considerable influence on the field.

36
00:04:07,080 --> 00:04:13,680
Other journals have a less rigorous review process and higher acceptance

37
00:04:13,680 --> 00:04:22,280
rates. However, all are still considered to be peer reviewed journals. Blind peer

38
00:04:22,280 --> 00:04:29,320
review is not a perfect process, but it is a process and this process is

39
00:04:29,320 --> 00:04:38,240
important. We recognize the process is not without bias or flaws. Peer review

40
00:04:38,240 --> 00:04:45,040
does not magically make research unbiased or pure. It's not possible for human

41
00:04:45,040 --> 00:04:51,160
beings to have a completely objective, unbiased view of anything. Peer review is

42
00:04:51,160 --> 00:04:58,120
simply another filter to try to remove some of the impurities related to bias,

43
00:04:58,120 --> 00:05:05,440
methodology, theoretical context, analysis, applications, and conclusions. But you,

44
00:05:05,440 --> 00:05:13,280
dear podcast listener, are the ultimate filter. You are the most important peer

45
00:05:13,280 --> 00:05:21,160
reviewer. In this respect, you must always ask, does the strategy or approach work

46
00:05:21,160 --> 00:05:26,440
with the students in front of you? Does it enhance their ability to create

47
00:05:26,440 --> 00:05:33,240
meaning with print? Does it move them forward, unimpeded, in their journey to

48
00:05:33,240 --> 00:05:39,200
achieve their full literacy potential? Does it matter if a strategy or approach

49
00:05:39,200 --> 00:05:45,160
demonstrates significant results with a large sample size if it doesn't work

50
00:05:45,160 --> 00:05:53,880
with your small sample size, your students? So back to data collection. Data is not

51
00:05:53,880 --> 00:05:59,320
the same as research even though some would like you to think so. Let's say I

52
00:05:59,320 --> 00:06:04,400
am the owner of a new type of reading instruction program. We'll call it RIDO.

53
00:06:04,400 --> 00:06:11,880
RIDO is adopted by a school district and I decide to collect data. At the

54
00:06:11,880 --> 00:06:16,920
beginning of the year, the average score on the Walmart standardized reading test

55
00:06:16,920 --> 00:06:24,600
was 85. After three months, the Walmart standardized reading test was again

56
00:06:24,600 --> 00:06:33,080
given. This time, the average score was 95, a rise of 10 points. I collected data.

57
00:06:33,080 --> 00:06:40,520
So I then put the following label on my RIDO website. Research shows that RIDO is

58
00:06:40,520 --> 00:06:47,200
effective in raising students' reading achievement. Okay, I had data to back it

59
00:06:47,200 --> 00:06:53,960
up. But it wasn't research. It was just data. We don't know what the variable was

60
00:06:53,960 --> 00:07:00,160
here. What caused the scores to increase? Students' scores could have increased due

61
00:07:00,160 --> 00:07:07,080
to maturation, or because they were exposed to good books and writing. Or maybe they took

62
00:07:07,080 --> 00:07:14,080
the pre-test on a really bad day and the post-test on a really good day. Maybe

63
00:07:14,080 --> 00:07:18,680
all the good readers were sick on the pre-test and all the struggling readers

64
00:07:18,680 --> 00:07:25,240
were sick on the post-test. Maybe my sample size was only 10. And there was

65
00:07:25,240 --> 00:07:31,200
also no comparison group. We don't know if RIDO was better than something else, or

66
00:07:31,200 --> 00:07:38,520
better than nothing, or better than what is usually done. Confusing data with research

67
00:07:38,520 --> 00:07:45,600
is a common practice in the for-profit realm. Or let's say I compared RIDO to a

68
00:07:45,600 --> 00:07:53,680
variety of other programs. Post-test scores show that the RIDO average was 95

69
00:07:53,680 --> 00:07:58,560
on the Walmart standardized reading test while the average score of the other

70
00:07:58,560 --> 00:08:08,880
programs was 93. I could say that RIDO outperforms all the other programs.

71
00:08:08,880 --> 00:08:15,440
However, the difference in scores was not statistically significant.

72
00:08:15,440 --> 00:08:23,400
Statistical significance means the difference is greater than could occur by chance, which means

73
00:08:23,400 --> 00:08:31,360
there really was no difference. Confusing data with research is also a common

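As a rough illustration of what "greater than could occur by chance" means, here is a small sketch in Python. The scores and sample sizes are invented for the example (the podcast only gives the two averages); it computes Welch's t statistic for two independent groups whose means differ by 2 points, as in the RIDO comparison:

```python
import math
import statistics

# Hypothetical post-test scores: the group means differ by 2 points
# (95 vs 93), echoing the RIDO example, but individual scores vary.
rido   = [95, 88, 102, 91, 99, 94, 97, 90, 96, 98]   # mean = 95
others = [93, 87, 100, 89, 97, 92, 95, 88, 94, 95]   # mean = 93

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))

t = welch_t(rido, others)
print(round(t, 2))  # → 1.06, well below the usual ~2.0 cutoff for p < .05
```

With this much score-to-score variation and only ten students per group, the t statistic stays far below the conventional significance threshold, so a 2-point gap in averages is exactly the kind of difference that could occur by chance.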
74
00:08:31,360 --> 00:08:38,080
practice with those who have a political or ideological agenda. Without peer

75
00:08:38,080 --> 00:08:44,720
review, we'd have no sense of how these other programs were taught, by whom, for

76
00:08:44,720 --> 00:08:51,200
how long, or how often. This is done often in the reading instruction world

77
00:08:51,200 --> 00:08:59,160
when they try to say one program or approach is better than another. A

78
00:08:59,160 --> 00:09:06,600
common trick is to have highly trained teachers of program A give extended

79
00:09:06,600 --> 00:09:13,960
specialized instruction in program A to a group of students, while the measures used

80
00:09:13,960 --> 00:09:19,960
to determine growth essentially replicate what program A is teaching. And

81
00:09:19,960 --> 00:09:27,440
of course program A is going to have higher scores than the other programs.

82
00:09:27,440 --> 00:09:33,040
Maybe these other programs did not have specialized training. Maybe they did not

83
00:09:33,040 --> 00:09:40,080
have specialized instruction. We just don't know without peer review. Confusing

84
00:09:40,080 --> 00:09:46,960
data with research is a common practice outside of education by those outside of

85
00:09:46,960 --> 00:09:53,760
the research or literacy field. Another common example is this. Test scores are

86
00:09:53,760 --> 00:10:01,640
going down. For example, here in Minnesota some data show that reading test scores

87
00:10:01,640 --> 00:10:08,720
in our state have gone down over the last three years. Conclusions by some are

88
00:10:08,720 --> 00:10:16,800
these. Oh, our schools must be doing a horrible job. It's the teachers or the

89
00:10:16,800 --> 00:10:23,360
problem is the teacher education programs. They're not training new teachers how to

90
00:10:23,360 --> 00:10:30,880
teach reading. That's the common whine. We need to buy a product with big fancy

91
00:10:30,880 --> 00:10:38,320
words and fancy charts and graphs or we need to pay a private for-profit person

92
00:10:38,320 --> 00:10:45,080
or group or service to fix the problem. If it's a for-profit and has lobbyists

93
00:10:45,080 --> 00:10:51,320
and PR and marketing people, it has to be better than anything someone from the

94
00:10:51,320 --> 00:10:58,000
public sector could offer. After all, lobbyists and marketing people are always

95
00:10:58,000 --> 00:11:03,680
much more credible than, say, people with terminal degrees in literacy instruction

96
00:11:03,680 --> 00:11:10,000
who spend years studying research and how to best teach reading, right? This is

97
00:11:10,000 --> 00:11:16,000
the case here in Minnesota. Of all the variables impacting children and their

98
00:11:16,000 --> 00:11:23,200
reading scores, some have decided that it has to be the teacher education programs

99
00:11:23,200 --> 00:11:30,240
that are making the test scores go down. It just has to be that. State

100
00:11:30,240 --> 00:11:37,240
legislators and school administrators who have undoubtedly never read

101
00:11:37,240 --> 00:11:41,800
a research article about reading instruction or teacher education or

102
00:11:41,800 --> 00:11:48,120
anything, have decided that, since they once went to first grade, they must

103
00:11:48,120 --> 00:11:55,480
have all the answers for teaching reading and preparing teachers. They want to

104
00:11:55,480 --> 00:12:03,120
mandate LETRS, a for-profit professional development

105
00:12:03,120 --> 00:12:10,240
program, for teachers, pre-service teachers, and

106
00:12:10,240 --> 00:12:16,960
teacher preparation programs. After all, experts say it puts the why behind the

107
00:12:16,960 --> 00:12:23,560
phonics system, and that it is effective and research-based. But who is an

108
00:12:23,560 --> 00:12:30,640
expert? Am I an expert? Are you an expert? What does it take to be an expert? And is

109
00:12:30,640 --> 00:12:36,720
the expert an expert in reading instruction or perhaps the expert is an

110
00:12:36,720 --> 00:12:43,760
expert in animal husbandry? It doesn't really say. But apparently an expert has

111
00:12:43,760 --> 00:12:49,720
said it. An expert has said that LETRS is an important program. After all, an

112
00:12:49,720 --> 00:12:58,840
expert said it. You can't argue with an expert, can you? Now, reading scores may

113
00:12:58,840 --> 00:13:04,000
have indeed gone down in the last three years, but has anything else been going

114
00:13:04,000 --> 00:13:11,080
on? Oh yes, we've had a pandemic and students have been at home and George

115
00:13:11,080 --> 00:13:19,480
Floyd and housing shortages and crowded classrooms. But if you look at

116
00:13:19,480 --> 00:13:26,600
the long term, the National Assessment of Educational Progress, or NAEP, data

117
00:13:26,600 --> 00:13:32,000
from the US Department of Education show Minnesota 4th and 8th grade students are

118
00:13:32,000 --> 00:13:39,280
at about the same place they were in 1998. In 1998, the average reading score for

119
00:13:39,280 --> 00:13:50,200
4th grade students was 219. It steadily rose to 225 in 2017 and was 222 in 2019.

120
00:13:50,200 --> 00:13:56,040
Same with 8th grade. We're at about the same place. Now it's

121
00:13:56,040 --> 00:14:02,000
natural for there to be a fluctuation in scores, but instead of reporting on the

122
00:14:02,000 --> 00:14:10,920
steady rise in reading scores from 1998 to 2017 or the statistically significant

123
00:14:10,920 --> 00:14:19,720
steady rise in scores from 1972 to 2019, the focus is on a short-term dip in

124
00:14:19,720 --> 00:14:28,720
scores. Instead of calling for smaller class sizes and better working conditions

125
00:14:28,720 --> 00:14:35,160
for teachers and funding for health care or food assistance, the call is for a

126
00:14:35,160 --> 00:14:42,720
mandatory for-profit LETRS training program for teachers and for students in

127
00:14:42,720 --> 00:14:49,840
teacher preparation programs because the experts say...

