
BSP-Ruscetti-Annotated Bibliography

20150712


Annotated Bibliography
Question: Create and validate a system for assigning difficulty levels to quantitative reasoning (QR) skills within the context of data analysis.
What do I need to know? Other measures of difficulty.


1. Momsen, J. L., T. M. Long, S. A. Wyse, and D. Ebert-May. "Just the Facts? Introductory Undergraduate Biology Courses Focus on Low-Level Cognitive Skills." CBE—Life Sciences Education 9.4 (2010): 435-40.

This paper describes a method for assigning a difficulty score to complex assessments; the score is essentially a point-weighted average of cognitive level. The authors developed an equation that can be applied to any question. First, each level of Bloom's taxonomy is assigned a number from 1 to 6, with 1 being the lowest level of cognition. Next, the question's points are apportioned by Bloom's level, and each fraction of the points is multiplied by its level number. Example: a 10-point question in which 6 points are knowledge (1), 3 points are understanding (2), and 1 point is application (3) yields a difficulty score of 1.5 [(6/10 × 1) + (3/10 × 2) + (1/10 × 3) = 1.5]. The paper also reports that Bloom's levels were assigned by committee and that agreement was close. The paper's focus is not the difficulty score itself but how low-level the questions in introductory biology courses are. This paper is relevant because I had independently developed a similar tool for assigning difficulty scores to any assessment. In my tool, the multiplier is 100 rather than 1, giving scores that range from 100 to 600. I call this tool "Numerical Difficulty Leveling," and it lets me directly compare difficulty across any assessment. This paper does not address quantitative skills assessments directly.
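
A minimal sketch of this weighted score in Python, assuming the scheme above (the function name and data layout are mine, not the paper's):

    # Point-weighted Bloom's difficulty score (Momsen et al. 2010).
    # Bloom's levels are numbered 1 (lowest) to 6; the score is the
    # point-weighted mean level of the question.
    def difficulty_score(points_by_level, multiplier=1):
        """points_by_level maps a Bloom's level (1-6) to the points at that level.
        multiplier=100 gives the 100-600 'Numerical Difficulty Leveling' scale."""
        total = sum(points_by_level.values())
        return sum(points / total * level * multiplier
                   for level, points in points_by_level.items())

    # The paper's worked example: 10 points split as 6 knowledge (1),
    # 3 understanding (2), 1 application (3).
    print(difficulty_score({1: 6, 2: 3, 3: 1}))       # -> 1.5
    print(difficulty_score({1: 6, 2: 3, 3: 1}, 100))  # -> 150.0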


2. Lemons, P. P., and J. D. Lemons. "Questions for Assessing Higher-Order Cognitive Skills: It's Not Just Bloom's." CBE—Life Sciences Education 12.1 (2013): 47-58.


This paper discusses the difficult task of building assessments of higher-order cognitive skills (HOCS). The authors used multiple-choice questions for clicker-based assessments. Here, higher-order cognition comes in when students lack the expertise to answer a question directly and must instead determine the most relevant evidence to use. Questions were developed by one team and rated by another. Below is the list of HOCS used to determine whether a question required higher-order cognition.

  • Use information, methods, concepts, or theories in new situations

  • Predict consequences and outcomes

  • Solve problems in which students must select the approach to use

  • Break down a problem into its parts

  • Identify the critical components of a new problem

  • See patterns and organization of parts (e.g., classify, order)

  • Determine the quality/importance of different pieces of information

  • Discriminate among ideas

  • Weigh the relative value of different pieces of evidence to determine the likelihood of certain outcomes/scenarios

  • Make choices based on reasoned argument

  • Includes Bloom’s categories application, analysis, and evaluation

They found that about 66% of the raters did not use Bloom's to describe difficulty. Often, raters discussed how long a problem took students or how much prior experience students needed to complete it. This paper relates to my problem because Bloom's may not be enough to describe the difficulty of more complex quantitative problems; some quantitative problems may be broken down into the number of steps required to complete them. I'm stuck on finding methods in math that I can use as a template. One lead to follow up: the psychometric "item difficulty index."
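
A minimal sketch of that classical item difficulty index, using its standard definition from classical test theory (the function name and response data here are hypothetical):

    # Classical test theory: an item's difficulty index p is the fraction
    # of examinees who answered the item correctly; lower p = harder item.
    def item_difficulty(responses):
        """responses: list of 0/1 scores for one item across all students."""
        return sum(responses) / len(responses)

    # Hypothetical data: 8 of 10 students answered correctly.
    print(item_difficulty([1, 1, 0, 1, 1, 1, 0, 1, 1, 1]))  # -> 0.8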


3. Follette, Katherine B., Donald W. McCarthy, Erin Dokter, Sanlyn Buxner, and Edward Prather. "The Quantitative Reasoning for College Science (QuaRCS) Assessment, 1: Development and Validation." Numeracy 8.2 (2015): Article 2.

DOI: http://dx.doi.org/10.5038/1936-4660.8.2.2

Available at: http://scholarcommons.usf.edu/numeracy/vol8/iss2/art2

In this paper, the authors discuss the development of an assessment tool for non-majors in general science courses. Other tools exist, they note, but those are proprietary, expensive, or subject-specific. They evaluate the QLRA (Quantitative Literacy and Reasoning Assessment) and the QuaRCS (Quantitative Reasoning for College Science). They asked both science and math instructors which QR skills were most important; the top five were graph reading, table reading, arithmetic, proportional reasoning, and estimation. Percentages, measurement, probability, statistics, area/volume, error, using numbers in writing, and dimensional analysis/unit conversions were also rated very important, but the authors eliminated using numbers in writing and measurement because those cannot easily be assessed in a multiple-choice format. They mention Carleton College's Quantitative Inquiry, Reasoning, and Knowledge (QuIRK) initiative as one attempt to handle the types of assessments not suited to multiple-choice questions. The paper also gets at faculty and student preconceptions about math (our students don't hate math as much as we think they do), and it teases out students' attitudes about their confidence, effort, and behavior on the instrument. The paper uses the words "easy" and "hard" to denote difficulty level but makes no attempt to assess the true difficulty of any of its questions; for example, volume is "easy" and statistics/error is "hard." They gave the instrument in 40 courses over 5 years and validated questions using a long-form, open-ended free-response version to evolve the wording. This paper helps me see a plan for validating more difficult quantitative questions.
4. Aikens, Melissa L., and Erin L. Dolan. "Teaching Quantitative Biology: Goals, Assessments, and Resources." Molecular Biology of the Cell 25.22 (2014): 3478-81.

DOI: 10.1091/mbc.E14-06-1045

This paper looks at a variety of quantitative assessment tools. Rather than being a paper on how to assess QR, it is a compilation of tools with QR assessments embedded in them. The paper is very good at explaining how important QR is, but that importance is supported by precious little evidence of robust assessment. This paper helps me get a sense of the types of QR assessments available. None of them are about writing with QR.
5. Polito, Jessica. "The Language of Comparisons: Communicating about Percentages." Numeracy 7.1 (2014): Article 6.

DOI: http://dx.doi.org/10.5038/1936-4660.7.1.6

Available at: http://scholarcommons.usf.edu/numeracy/vol7/iss1/art6
This paper dissects the way students make comparisons in written communication, using percentages as an example of common failings among learners. The author provides a number of salient examples of how she teaches a QR course. This paper may relate by helping me rank common issues students face when writing quantitatively supported statements. Common failings she describes include writing that:

  • “fails to provide numbers that would contextualize the argument,”

  • “numbers without comparisons that might give them meaning”

  • “Presents numbers but doesn’t weave them into a coherent argument.”

The author also outlines a step-by-step process for teaching students how to break down a comparison problem.
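
A small worked sketch of the kind of percentage comparison the paper addresses (the numbers and function name are mine, not the author's):

    # Relative difference: how many percent larger a is than the reference b.
    def percent_more(a, b):
        return (a - b) / b * 100

    # 60 is 50% more than 40 (not 20%): the comparison must be anchored
    # to the reference value, a point students often miss in writing.
    print(percent_more(60, 40))  # -> 50.0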
6. The Carleton Quantitative Inquiry, Reasoning, and Knowledge (QuIRK) initiative. This initiative tried to assess quantitative reasoning in writing. I didn't see any papers come out of QuIRK, and it seems to have wound down around 2010, but it has rubrics and procedures for assessing QR in writing.