PARCC has commissioned several white papers, written by TAC members, that discuss issues related to psychometrics and assessment design. Two more white papers are now available: one on growth models and value-added design, and the other on performance level descriptors (PLDs). All white papers can be found in the Technical Advisory Committee section of the PARCC website.
Making Inferences about Growth and Value-Added: Design Issues for the PARCC Consortium by Derek Briggs (University of Colorado at Boulder) FULL TEXT
There is often confusion about the distinction between growth models and value-added models. The first half of this paper attempts to dispel some of this confusion by clarifying terminology and illustrating by example how the results from a large-scale assessment can and will be used to make inferences about student growth and the value-added attributable to teachers or schools. Two key differences between growth models and value-added models are discussed: the unit of analysis (growth models focus first and foremost on students; value-added models focus on teachers or schools) and inferential intent (growth models are primarily descriptive; value-added models are meant to support causal inferences). The point is made that all growth models can be used to make value-added inferences, but value-added models almost never lead to student-specific inferences about growth.

The focus of the second half of this paper is on design issues that will need to be considered by the PARCC consortium so that test scores can be used for either growth or value-added inferences. It is shown that vertically scaled test scores are not a prerequisite for value-added modeling. Vertical scales are most desirable in support of student-level growth interpretations, but certain conditions must be met before their creation would be defensible. In particular, the case is made that vertical scales will be most compatible with a learning progression perspective on construct and item development. The second half of the paper also discusses design factors that would minimize the role measurement error plays in distorting inferences about growth and value-added. Finally, the paper concludes with some recommendations for the PARCC consortium.
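To make the paper's two distinctions (unit of analysis and inferential intent) concrete, here is a minimal sketch using synthetic data. The scores, teacher assignments, effect sizes, and model specification are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 300 students with prior- and current-year scale scores,
# each assigned to one of 3 teachers (all values are hypothetical).
n = 300
teacher = rng.integers(0, 3, size=n)          # teacher assignment: 0, 1, or 2
prior = rng.normal(500, 50, size=n)           # prior-year score
true_effect = np.array([-4.0, 0.0, 5.0])      # "true" teacher effects, unknown in practice
current = 0.9 * prior + 60 + true_effect[teacher] + rng.normal(0, 20, size=n)

# Growth model: the unit of analysis is the student, and the intent is
# descriptive. The simplest descriptive growth measure is a gain score.
gain = current - prior
print("Gain score for student 0:", round(gain[0], 1))

# Value-added model: the unit of analysis is the teacher, and the intent is
# causal. Regress current scores on prior scores plus teacher indicators and
# read the (centered) teacher coefficients as value-added estimates.
dummies = (teacher[:, None] == np.arange(3)).astype(float)
X = np.column_stack([prior, dummies])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)
value_added = beta[1:] - beta[1:].mean()      # effects relative to the average teacher
print("Value-added estimates:", np.round(value_added, 1))
```

Note that the regression conditions current scores on prior scores without requiring the two to lie on a common vertical scale, which is consistent with the paper's observation that vertical scaling is not a prerequisite for value-added modeling.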
Defining and Measuring College and Career Readiness and Informing the Development of Performance Level Descriptors (PLDs) by Wayne Camara (College Board) and Rachel Quenemoen (National Center on Educational Outcomes) FULL TEXT
EXECUTIVE SUMMARY

PARCC is a consortium of states that is developing a next-generation assessment system in English and math anchored in what it takes to be ready for college and careers. To accomplish this goal, the consortium must determine whether individual students are college- and career-ready or on track to be so. A direct empirical relationship between PARCC assessment scores and subsequent success in college and career training provides the strongest form of evidence. This paper reviews many criteria that can be used to gauge college and career success but argues that student academic performance (e.g., grades, GPA) in credit-bearing courses is the most relevant and available criterion for defining success. College and career readiness can be conceptually defined as including multiple factors, but consortium assessments should be more narrowly tailored to a definition based on the cognitive skills and content knowledge required by the Common Core standards and the types of learning that occur in schools and classrooms.

There are alternative approaches to establishing the performance level descriptors (PLDs), cut scores, and metrics that will be used to determine whether students are college- and career-ready. Judgmental approaches have generally been used in state assessment programs, but since scores from these assessments will primarily be used to make inferences about future performance, empirical methods (e.g., prediction models, regression, linking scores across assessments) traditionally used in admissions and placement testing programs are of greater relevance (Kane, 2001). The paper describes a general validation approach and the evidence required to conduct predictive studies between PARCC secondary assessments and postsecondary success. The progression and coherence of PLDs should be based on empirical data from the statistical links between high school assessments and college and career outcomes, as well as educators’ judgments from content-based standard-setting approaches. The paper provides examples of PLDs based on statistical data and postsecondary outcomes, and nine major recommendations for establishing a validity argument for consortium assessments.
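As a rough illustration of the kind of empirical prediction method the summary contrasts with judgmental standard setting, here is a minimal sketch linking test scores to a binary postsecondary success criterion. The data are synthetic, and the 0.65 success benchmark is an arbitrary stand-in, not a value from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic linking data: high school assessment scores paired with a binary
# postsecondary criterion, e.g., earning a B or better in a credit-bearing
# course (the scores, relationship, and benchmark are all hypothetical).
n = 1000
score = rng.normal(500, 50, size=n)
p_success = 1 / (1 + np.exp(-(score - 500) / 25))   # assumed true relationship
success = rng.random(n) < p_success

# Fit the prediction model linking test scores to the success criterion.
model = LogisticRegression().fit(score.reshape(-1, 1), success)

# One empirical way to anchor a "college ready" cut score: find the lowest
# score at which the predicted probability of success reaches a benchmark.
grid = np.linspace(350, 650, 301)
probs = model.predict_proba(grid.reshape(-1, 1))[:, 1]
cut = grid[np.argmax(probs >= 0.65)]                # 0.65 chosen only for illustration
print(f"Lowest score with P(success) >= 0.65: {cut:.0f}")
```

In this framing, a progression of PLDs could be described in terms of the predicted probabilities of postsecondary success at successive cut scores, which is the sense in which the summary calls for PLDs grounded in statistical links to college and career outcomes.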