Changing Institutional Research Productivity of United States' Science Education Programs: 2000s vs. 1990s

The purpose of this study was to identify the major science education programs in the United States where science education researchers published their research during the first decade of the twenty-first century. This study of the scholarly productivity of science education programs at United States institutions of higher education is compared with the last decade, the 1990s (Barrow, Settlage, & Germann, 2008). Each issue of the eight research journals (Journal of Research in Science Teaching, Science Education, International Journal of Science Education, Journal of Science Teacher Education, School Science and Mathematics, Journal of Computers in Math and Science Teaching, Journal of Science Education and Technology, and Journal of Elementary Science Education) published in the 2000s provided the author(s) and their institutional affiliation. The resultant rankings of raw and weighted counts for the top 30 U.S. science education programs show variation in the journals where research was published. Almost 50% of the 2000s top 30 institutions were not listed for the 1990s. Potential explanations for variations and uses for the rankings are discussed.


Introduction
This study of institutional research productivity in science education is a replication of the first institutional research productivity study in science education (Barrow et al., 2008). Why is it important to compare 2000s with 1990s institutional productivity? If institutions are basing decisions on the 1990s results, it is important to know whether rankings are static or dynamic. The first purpose of this study was to compare major U.S. science education programs based upon their empirical research publications of the 2000s with those of the 1990s. The second purpose was to compare raw and weighted counts to determine the better rating for future institutional productivity research.

Related Literature
A review of prestige/reputation rankings (rankings based upon perceptions, e.g., U.S. News and World Report's annual ranking of graduate schools) and productivity rankings (e.g., faculty members' publications) was detailed by Barrow et al. (2008). According to Keith and Babchuk (1994), use of the institutional program as the unit of analysis is superior to use of individuals. In 2010, the National Research Council published an extensive review of graduate programs at many U.S. higher education institutions. It should be noted that science education, like most education programs, was excluded.
In addition to the study of science education's institutional productivity (Barrow et al., 2008), the fields of library and information science, sociology, social work, counseling psychology, and reading/literacy education have had institutional productivity studies reported over the past two-plus decades. Budd and colleagues (Budd & Seavey, 1996; Budd, 2000; Adkins & Budd, 2006) conducted replications of institutional productivity research for library and information science. For their three studies, Budd and colleagues used only the Social Science Citation Index (SSCI) to rank programs. Over the ten-year period, they concluded there was an increase in research productivity and that the productive programs of 1996 were still productive in 2006. The SSCI favors faculty with longer careers, resulting in more citations. Keith and Babchuk (1994) conducted a longitudinal assessment of three dominant sociology empirical research journals. They concluded that previous prestige level was considered more important than current level of scholarly productivity and that the number of faculty influences prestige. In the related area of social work, Thyer, Ligon, and colleagues conducted a series of replication studies every five years: 1979-1983 (Thyer & Bentley, 1986), 1984-1988 (Ligon, Thyer, & Dixon, 1995; Thyer, Boynton, Bennis, & Levine, 1994), 1994-1998 (Ligon & Thyer, 2002), 1999-2003 (Ligon, Jackson, & Thyer, 2007), and 2004-2008 (Ligon, Cobb, & Thyer, 2012). Their longitudinal analysis of over 25 years, which used the same six journals throughout, showed that the number of articles per institution has steadily increased. Ligon et al. (2012) cautioned that these six journals do not capture the overall scholarly activity of social work researchers, including publications outside social work.
Another career productivity index, the h-index (Hirsch, 2005), was used by Tracey to study institutional productivity in counseling psychology. Earlier, the Journal of Counseling Psychology was used to rate counseling psychology programs (Bohn Jr., 1966; Cox & Cott, 1977). While Howard (1983) and Delgado and Howard (1994) examined earlier periods, a later analysis covered 1994-2007 using journals nominated by 14 journal editors and members of editorial boards. They used a weighted score for individuals and correlated productivity for the top 30 for each journal. Researchers tended to publish mainly in one of the journals. They noted variability because some faculty could be starting their careers while others are tapering off into retirement. Morrison and Wilcox (2008) focused on institutional productivity for reading/literacy education publications in nine different journals. They considered the consistency among top-ranked programs that had been reported by Johns (1986), Hopkins (1979), and Hollingsworth and Reutzel (1994). Morrison and Wilcox noted that, regardless of time period and different journals, major reading/literacy education programs are still dominant. The stability of faculty employment contributes to this pattern. Also, Morrison and Wilcox reported an increase in multiple authors from collaborating institutions, including precollege authors.
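For readers unfamiliar with the h-index mentioned above, Hirsch (2005) defines it as the largest h such that h of a researcher's papers each have at least h citations. The short Python sketch below illustrates that definition; the function name and inputs are illustrative, not drawn from the studies cited:

```python
def h_index(citation_counts):
    """h-index (Hirsch, 2005): the largest h such that h of the
    researcher's papers each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still satisfies the h condition
        else:
            break
    return h
```

For example, a researcher whose papers have been cited 10, 8, 5, 4, and 3 times has an h-index of 4: four papers have at least four citations each, but not five papers with five.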

Ku (2009) identified researchers whose work was published in Educational Technology Research and Development (ETR&D) over 20 years. Ku used only the first three authors of ETR&D manuscripts, considering that only the first three authors make major contributions to the work. He concluded that first authors differed in productivity and total authorship in ETR&D. There has been a greater number of multiple authors involved in research studies. For example, there were 23 authors associated with one published concept inventory (Johanes Stroble, Personal Communication, August 9, 2012).
In summary, institutional productivity research in these disciplines has found both stability and variation in program rankings. The lack of consistent criteria (e.g., SSCI, raw score, weighted score, or different journals in the field) raises questions about the results. This study was needed to determine whether science education programs' institutional productivity was stable.

Method
This replication study maintained the criteria from the baseline study (Barrow et al., 2008): a 10-year time interval, as recommended by Howard (1983); all articles except letters to the editor, editorials, and book reviews; the same journals (JRST, SE, IJSE, JSTE, SS&M [only science articles], JCM&ST [only science articles], JESE, and JES&T); raw credit of 1.0 for every author; and a weighted count, as recommended by Howard et al. (1987), in which a lower position in the author chain resulted in a smaller share of the article's total value of 1.0. According to Lykken (1968), as clarified by Kelly, Chase, and Tucker (1979), this is a literal replication since the only difference was the publication years (2000-2009).
In Australia, Howard (2011) summarized the attributes of top research journals as having a low acceptance rate and an editorial board consisting of leaders in the field. The eight journals in this study are noted for publishing empirical science education results. For every article, the title, all authors, and their institutional affiliations were recorded; editorials, letters to the editor, and book reviews were excluded. All authors were counted, whether faculty or graduate students, in contrast to Ku (2009), who used only the first three authors, and Robinson, Hartley, and Dunn (2001), who recognized only the first author. Researchers who were not specialists in science education were included. For the 2000s, only the contributors and their institution at the time of submission were considered.
The weighted ranking of institutional productivity has been previously utilized by Barrow et al. (2008), Corrado and Ferries (1997), and Scott and Mitias (1996). The weighted formula based upon Howard et al. (1987), which was utilized for the 1990s (Barrow et al., 2008), was also used in this study (see Table 3 for the weighted approach). This method provides greater weight to the senior author, who by tradition made the most substantive contributions to the manuscript. The total raw and weighted counts were compiled for each program and subsequently ranked from high to low across all eight journals.

Table 3.
Weighting formula applied to authorship sequence
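The exact weights of Table 3 are not reproduced in this text. As a hedged illustration only, one common reading of the Howard, Cole, and Maxwell (1987) scheme assigns each successive author a geometrically smaller share (ratio 1.5) of the article's total credit of 1.0. The Python sketch below follows that assumption; the function name and the ratio are this illustration's, not necessarily the paper's table:

```python
def author_credits(n_authors, ratio=1.5):
    """Split an article's total credit of 1.0 across the author chain,
    each later author receiving a geometrically smaller share
    (illustrative reading of Howard, Cole, & Maxwell, 1987)."""
    if n_authors < 1:
        raise ValueError("an article needs at least one author")
    # Unnormalized geometric weights: first author highest, last author lowest.
    raw = [ratio ** (n_authors - position) for position in range(1, n_authors + 1)]
    total = sum(raw)
    # Normalize so the shares sum to the article's total value of 1.0.
    return [share / total for share in raw]
```

Under this assumption a two-author paper splits as 0.6 and 0.4, while a sole author keeps the full 1.0. A raw count, by contrast, gives every author 1.0 regardless of position in the chain.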

Results
There were a total of 1,591 research authors (raw count) in the eight research journals during the first decade of the 2000s. Institutional totals were compiled for the top 30 for both raw and weighted rankings. Table 4 contains the alphabetical listing of the top 30 U.S. institutions by raw count for each journal. The top five were Indiana University, University of Michigan, Purdue University, North Carolina State University, and University of Georgia. Table 5 contains the alphabetical listing of the top 30 U.S. institutions by weighted count for each journal. In contrast to the 1990s, 15 of the 30 institutions were new in the 2000s. Ten percent of the top 30 raw rankings were not in the 1990s top 30 weighted rankings (Table 6); all three were in the lower third of the raw rankings. There was 90% agreement among the 2000s top 10 for both raw and weighted rankings, although there was slight variation. The 2000s ranking, regardless of raw or weighted count, shows significant changes since the 1990s. Only 60% of the top 30 for the 1990s are found in the 2000s ranking. Also, 50% of the 1990s top 30 programs are absent from the 2000s ranking. Among the top programs for the 2000s, 50% were in the bottom 10 or absent from the 1990s rankings. Eight of the top 30 programs for the 2000s ranked more than five positions higher. Barrow and Tang (2009) reported at the National Association for Research in Science Teaching (NARST) on a 5- rather than 10-year analysis. A total of 10 institutions from the 1990s study were not in the top 30 in 2004, although three did appear in the top 30 for the 2000s. Future studies should use the 10-year interval since the review process can be long for some journals, and the use of print publication rather than electronic acceptance dates seems to vary from journal to journal.

Discussion
Why is this replication study important? This study kept all conditions stable from the baseline research on science education institutional productivity except the time period: the first decade of the 21st century. This study is important because of the drastic changes in the ranking of institutions. Current rankings matter because prospective faculty and graduate students should be aware of the most recent rankings rather than those for the 1990s.
This study, like the baseline research (Barrow et al., 2008), used eight science education journals that publish empirical research. The multiple journals allow the entire field of science education (Yager, 1984) to be considered. This follows the pattern of other disciplines such as counseling psychology (Howard, 1983). Recently, Barrow and Tang (2013) compared institutional productivity in the top four science education research journals (JRST, SE, IJSE, and JSTE) as identified by van Aalst (2010). There was 90% agreement with the top 10 ranked institutions in this study for the 2000s. There was also 90% agreement in the bottom 10 rankings for the 2000s, but variation in rank was observed.

Implications
The next decade of research will use only seven empirical science education journals, since JESE has been integrated into JSTE, providing a possible three-decade analysis of institutional productivity in science education. This three-decade research would allow analysis of whether an institution's ranking is stable. With retirement, death, and/or faculty mobility, what is the impact upon the involved institutions? Historically, when a full professor position becomes open, it is filled by an assistant professor. Typically, it takes six years to achieve tenure; how does this impact an institution's productivity? The change in institutional productivity could be more drastic for small programs with three or fewer faculty. If the search process takes longer than a year, it could result in a lower institutional productivity level. Table 7 allows quick comparison of raw and weighted rankings for the top 30 science education programs. Administrators and faculty can compare their program's productivity under both rankings. Due to increases in the number of authors per publication, as noted by Ku (2009), we recommend the use of the weighted ranking when all members of the author chain are to be recognized. If the number of members in an author chain keeps increasing, then perhaps future analyses should use only the first three authors, as was done by Ku. We question whether the seventh individual on an author team makes a significant contribution; however, in a raw count they would receive the same credit as the senior author.
That baseline study identified the top 30 U.S. science education programs for the 1990s based upon eight research journals (Journal of Research in Science Teaching [JRST], Science Education [SE], International Journal of Science Education [IJSE], Journal of Science Teacher Education [JSTE], School Science and Mathematics [SS&M], Journal of Computers in Math and Science Teaching [JCM&ST], Journal of Elementary Science Education [JESE], and Journal of Science Education and Technology [JES&T]) to provide a coherent view of science education research. The original study identified the top 30 ranked U.S. science education programs based upon both raw count (all authors credited equally) and weighted score (credit assigned by author position, with reduced emphasis for later authors). The only change in this study was the 2000-2009 time period.
A follow-up study should be conducted for top U.S. science education research institutions. A Delphi study involving 5-10 years of leadership of science education organizations, editorial boards, etc., could identify the attributes considered most important for identifying outstanding institutional research programs. It appears that science education researchers have a preference for one or two research journals where most of their research is published. A study similar to Lau et al. (2008) could focus upon individual researchers and where their research was published. A correlation study using weighted values for each science education research journal could also be conducted.
Using different time periods plus different journals to rank counseling psychology programs, Smith et al. reported that counseling psychology research was published in more than five journals in their ranking. Earlier, Barrow (2002) made a similar observation for science education. Smith used a weighted count as recommended by Howard, Cole, and Maxwell (1987). Smith noted that institutions continue to publish LGBT manuscripts, which was consistent with earlier analyses by Tinsley and Tinsley (1979, 1987).

Table 1.
Acceptance rate for each science education journal

Table 1 contains the acceptance rate for each journal based upon Cabell (2011) or the editor's personal communication. Generally, these research journals are associated with a professional organization, except SE, IJSE, and JES&T. Also, editors were faculty at major science education research institutions. Table 2 contains a listing of journals, volumes, years, editors, and institutional affiliations for the 2000s.

Table 2 .
Volumes, institutional affiliations, editors, and years for journals

Table 4 .
Raw Counts for Journals of Top Institutions

Table 5 .
Weighted Counts for Journals of Top Institutions

Table 6 .
Top 30 Science Education Programs of 2000s - Raw Rankings Compared to 1990s Rankings

The multiple-journal approach parallels counseling psychology (Tracey, Claiborn, Goodyear, Lichtenberg, & Wampold, 2008; Smith et al., 2003), library and information science (Budd & Seavey, 1996; Budd, 2000; Adkins & Budd, 2006), and literacy education (Morrison & Wilcox, 2008; Johns, 1986; Hollingsworth & Reutzel, 1994), rather than the single journal used by Ku (2009). Across the science education research journals, Indiana University, Iowa State University, Michigan State University, North Carolina State University, Purdue University, Teachers College Columbia University, University of Georgia, and University of Minnesota had publications in each journal for the 2000s. The use of only a few science education research journals could result in a drastically different ranking. All of the top 30 ranked U.S. institutions had publications in JRST and SE. Only 3 of the institutions lacked a publication in IJSE. There were 15 and 13 institutions without a publication in JCM&ST and JESE, respectively.

Table 7 .
Alphabetical Listing of Top 30 Science Education Programs of 2000s for Raw and Weighted