Qualitative Manual Assessment of Motor Control (Q-MAC)

Get evidence-based resources for assessing occupational performance and infusing occupation into practice.

Complete the Calibration Process to Become Certified

  1. Measure the Quality of Your Client’s Performance
  2. Show Evidence of Your Client’s Improvement
  3. Support Your Recommendations with Evidence

Occupational Therapy Intervention Process Model (OTIPM)

Center your practice on occupation to more effectively help your clients meet their occupational goals.

  • Keep your focus on occupation
  • Implement more precise evaluations and interventions
  • Effectively document outcomes
  • Renew your passion for occupational therapy

Assessment Tools

School Assessment of Motor and Process Skills
(School AMPS)

Measure the quality of schoolwork task performance

Assessment of Compared Qualities – Occupational Performance
(ACQ-OP)

Measure the extent of discrepancy between a person’s reported quality of ADL task performance and what the occupational therapist observed

Assessment of Compared Qualities – Social Interaction
(ACQ-SI)

Measure the extent of discrepancy between a person’s reported quality of social interaction and what the occupational therapist observed

“The AMPS course has completely changed the way I practice occupational therapy.”
“I have been an OT for 20 years and [AMPS] is by far the best OT assessment there is.”
“[The OTIPM] is the first model I have seen in a long time that has tempted me to return to the clinic.”

Folks:

The posting below describes the differences between quantitative and qualitative research and the appropriate uses of each of them. It is from Chapter 4, Assessment Methods, in the book Demonstrating Student Success: A Practical Guide to Outcomes-Based Assessment of Learning and Development in Student Affairs, by Marilee J. Bresciani, Megan Moore Gardner, and Jessica Hickmott. Published by Stylus Publishing, LLC, 2283 Quicksilver Drive, Sterling, Virginia 20166-2102 [http://www.styluspub.com]. © Copyright 2009 by Stylus Publishing, LLC. All rights reserved. Reprinted with permission.

Regards,

Rick Reis

reis@stanford.edu

UP NEXT: What Mentors Do

Tomorrow's Research

--------------------------------------------- 1,700 words -------------------------------------------

Quantitative and Qualitative Assessment Methods

Quantitative Assessment Methods

Quantitative methods use numbers for interpreting data (Maki, 2004) and 'are distinguished by emphasis on numbers, measurement, experimental design, and statistical analysis' (Palomba & Banta, 1999). Large numbers of cases may be analyzed using quantitative design, and this type of design is deductive in nature, often stemming from a preconceived hypothesis (Patton, 2002). The potential to generalize results to a broader audience and to other situations makes this type of research/assessment design popular with many. Although assessment can be carried out with the rigor of traditional research, including a hypothesis and results that are statistically significant, this is not a necessary component of programmatic outcomes-based assessment. It is not essential to have a certain sample size unless the scope of your assessment is at the institutional level.
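
To make the 'emphasis on numbers, measurement, and statistical analysis' concrete, the short Python sketch below summarizes responses to a single survey item. The item and the response values are invented for illustration; they are not drawn from any of the instruments discussed in this chapter, and the normal-approximation confidence interval is only a rough convenience for a small example like this one.

# A minimal sketch of the numeric summary a quantitative method produces.
# The responses are hypothetical Likert-scale ratings (1 = strongly disagree, 5 = strongly agree).
from statistics import mean, stdev
from math import sqrt

responses = [4, 5, 3, 4, 4, 5, 2, 4, 3, 5, 4, 4]  # hypothetical item ratings

n = len(responses)
m = mean(responses)
s = stdev(responses)
se = s / sqrt(n)

# Rough 95% confidence interval for the mean using a normal approximation.
ci_low, ci_high = m - 1.96 * se, m + 1.96 * se

print(f"n = {n}, mean = {m:.2f}, sd = {s:.2f}")
print(f"approximate 95% CI for the mean: [{ci_low:.2f}, {ci_high:.2f}]")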

A traditionally favored type of research design that has influenced outcomes-based assessment methodology is quantitative assessment. Quantitative assessment offers a myriad of data collection tools, including structured interviews, questionnaires, and tests. In the higher education setting, this type of design is found in many nationally employed assessment tools (e.g., the National Survey of Student Engagement, the Community College Survey of Student Engagement, and the Core Institute Alcohol and Drug Survey) but can also be locally developed and used to assess more specific campus needs and student learning outcomes. When engaging in quantitative methodological design, sampling, analysis, and interpretation, it is important to ensure that the individuals involved are knowledgeable about, as well as comfortable with, quantitative design (Palomba & Banta, 1999).

At Colorado State University, two primary quantitative assessment methods are used to examine apartment life on campus. 'The Apartment Life Exit Survey is given to residents as they begin the 'vacate' process from their apartment. Results are tabulated twice each year, once at the end of fall semester and once in the summer' (Bresciani et al., in press).

Administrators at Pennsylvania State University originally measured the success of their newspaper readership program based on satisfaction and use. The quantitative survey they were using was later revised 'to include more detailed information on students' readership behavior (e.g., how frequently they are reading a paper, how long, and which sections), students' engagement on campus and in the community, and their self-reported gains in various outcomes (e.g., developing an understanding of current issues, expanding their vocabulary, articulating their views on issues, increasing their reading comprehension)' (Bresciani et al., 2009). This revision allowed them to use survey methodology while still measuring the impact of the program on student learning.

CSUS underwent a similar revision process of a locally developed quantitative survey looking at its new student orientation program. Originally, only student and parent satisfaction were measured. This was later revised to include a true/false component in the orientation evaluation that used a form of indirect assessment. In the final revision, pre- and post-tests were administered to the students attending orientation to measure the knowledge gained in the orientation session (Bresciani et al., 2009).
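
A pre-/post-test design of this kind lends itself to a very simple tabulation. The Python sketch below shows one way the knowledge gain might be summarized; the student labels and scores (out of 10) are hypothetical, and this is not presented as the analysis CSUS actually performed.

# A minimal sketch comparing hypothetical pre- and post-test scores from an orientation session.
pre_scores = {"student_a": 4, "student_b": 6, "student_c": 5, "student_d": 3}
post_scores = {"student_a": 8, "student_b": 7, "student_c": 9, "student_d": 6}

# Per-student gain, average gain, and the count of students who improved.
gains = {s: post_scores[s] - pre_scores[s] for s in pre_scores}
average_gain = sum(gains.values()) / len(gains)
improved = sum(1 for g in gains.values() if g > 0)

print(f"average gain: {average_gain:.1f} points")
print(f"{improved} of {len(gains)} students scored higher on the post-test")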

In addition, a great deal of data already contained in student transactional systems can be used to assist in the evaluation of programs. Data such as facility usage, service usage, adviser notations, participation in student organizations, leadership roles held, and length of community service can all help explain why outcomes may or may not have been met. For instance, staff at an institution's counseling service want all students who are treated for sexually transmitted diseases to be able to identify the steps and strategies to avoid contracting them before leaving the 45-minute office appointment. When they evaluated this, they learned that only 70% of the students were able to do so; however, when they also examined their office appointment log, they realized that, because of the high volume of patients, they were only able to spend 27 minutes with each student on average. The reduction in the time intended for teaching students about their well-being may explain why the counseling staff's results were lower than they would have desired.
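
The counseling example shows how an existing transactional record can help interpret an outcome result. The Python sketch below, with invented log entries, illustrates the two figures involved: the share of students who met the outcome and the average appointment length compared with the 45 minutes intended.

# A minimal sketch of mining a hypothetical appointment log to help explain an outcome result.
appointments = [
    {"minutes": 25, "identified_steps": True},
    {"minutes": 30, "identified_steps": False},
    {"minutes": 27, "identified_steps": True},
    {"minutes": 24, "identified_steps": False},
    {"minutes": 29, "identified_steps": True},
]

met_outcome = sum(1 for a in appointments if a["identified_steps"]) / len(appointments)
avg_minutes = sum(a["minutes"] for a in appointments) / len(appointments)

print(f"{met_outcome:.0%} of students could identify the prevention steps")
print(f"average appointment length: {avg_minutes:.0f} minutes (45 minutes intended)")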

Qualitative Assessment Methods

According to Denzin and Lincoln (2000), qualitative research is 'multimethod in focus, involving an interpretive, naturalistic approach to its subject matter' (p. 2). Upcraft and Schuh (1996) expand this definition by stating, 'Qualitative methodology is the detailed description of situations, events, people, interactions, and observed behaviors, the use of direct quotations from people about their experiences, attitudes, beliefs, and thoughts' (p. 21). Qualitative assessment is focused on understanding how people make meaning of and experience their environment or world (Patton, 2002). It is narrow in scope, applicable to specific situations and experiences, and is not intended for generalization to broad situations. Different from quantitative research, qualitative research employs the researcher as the primary means of data collection (e.g., through interviews, focus groups, and observations). Also unlike quantitative research, the qualitative approach is inductive in nature, leading to the development or creation of a theory rather than the testing of a preconceived theory or hypothesis (Patton, 2002). It is important to note, then, that when applying qualitative methodology to outcomes-based assessment, you are not fully using an inductive approach, because you are using the methodology to determine whether an intended outcome has been met. However, the application of the methods themselves can yield very rich findings for outcomes-based assessment.

Data for qualitative analysis generally result from fieldwork. According to Patton (2002), during fieldwork a researcher spends a significant amount of time in the setting that is being investigated or examined. Generally multimethod in focus, the qualitative fieldwork experience often yields three types of findings: interviews, observations, and documents.

Each primary type of qualitative data contributes unique and valuable perspectives about student learning to the outcomes-based assessment process. When used in combination, a more complete or holistic picture of student learning is created.

Interviews

Interviews comprise a number of open-ended questions that result in responses that yield information 'about people's experiences, perceptions, opinions, feelings, and knowledge' (Patton, 2002, p. 4). It is common to engage in face-to-face verbal interviews with one individual; however, interviews may also be conducted with a group and administered via mail, telephone, or the Web (Upcraft & Schuh, 1996). Though questions and format may differ, an essential component of any interview is the 'trust and rapport to be built with respondents' (Upcraft & Schuh, 1996, p. 32). Open-ended questions can also be given to students at the conclusion of a program or an event to receive quick and immediate feedback. At Widener University, 'questions presented before, during, and after the [student health services] presentations allowed for an interactive experience and a means to monitor learning progress' (Bresciani et al., in press).

Observations

Observations, on the other hand, do not require direct contact with a study participant or group. Rather, this type of data collection involves a researcher providing information-rich descriptions of behavior, conversations, interactions, organizational processes, or any other type of human experience obtained through observation. Such observation may be either participant, in which the researcher is actually involved in the activities, conversations, or organizational processes, or nonparticipant, in which the researcher remains outside the activity, conversation, or organizational process being observed (Creswell, 1998; Denzin & Lincoln, 2000; Patton, 2002). In keeping a record of observations, many methods can be used. One way is to take notes during the observation; another method commonly employed is to create a checklist or rubric to use during the observation. The checklist or rubric not only gives the observer a set of criteria to observe, but it also allows the observer to show student progress over time and to correlate a number with a qualitative process (a minimal scoring sketch follows the excerpt below). At North Carolina State University, for example,

a total of 259 students that were found guilty of a violation of the [Student] Code [of Conduct] were assigned a paper with questions specifically written to correspond with the criteria for the development of insight and impact on life issues, as identified in the learning outcome. A rubric was used to review the papers. The rubric was created based on a theory of insight by Mary M. Murray (1995). In her book Artwork of the Mind, Murray describes how to determine the development of insight through writing. Initially 20 papers were drawn randomly to test the rubric. The rubric originally had a scale with three categories (beginning, developing, and achieved) and six dimensions based on the theory and practice. In total, 22 papers were drawn and reviewed based on the rubric. (Bresciani et al., 2009)
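
To show how a rubric of this shape can 'correlate a number with a qualitative process,' the Python sketch below scores one hypothetical paper against a rubric with three levels and six dimensions, mirroring the structure described in the excerpt. The dimension names, ratings, and scoring scheme are placeholders, not the actual Murray-based rubric used at North Carolina State University.

# A minimal sketch of converting rubric ratings into a number.
# Three levels and six generic dimensions, loosely following the structure described above.
LEVELS = {"beginning": 1, "developing": 2, "achieved": 3}
DIMENSIONS = ["dimension_1", "dimension_2", "dimension_3",
              "dimension_4", "dimension_5", "dimension_6"]

def score_paper(ratings):
    """Convert one paper's per-dimension ratings into a numeric total."""
    return sum(LEVELS[ratings[d]] for d in DIMENSIONS)

# Hypothetical ratings for a single student paper.
paper_ratings = {
    "dimension_1": "developing", "dimension_2": "achieved",
    "dimension_3": "beginning",  "dimension_4": "developing",
    "dimension_5": "achieved",   "dimension_6": "developing",
}

total = score_paper(paper_ratings)
print(f"rubric total: {total} of {len(DIMENSIONS) * max(LEVELS.values())}")

Repeating the same scoring over time is what allows a rubric to show student progress as well as a point-in-time result.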

Isothermal Community College (ICC) incorporated the qualitative assessment method of using portfolios for professionals completing the assessment process. Although this particular example focuses on staff and departments using portfolios, this method of assessment is commonly used with students as well. At ICC

each year staff set aside time to reflect on what has been learned through assessment, compile related documents into a portfolio, and summarize major areas of learning into what we refer to as 'reflective narratives.' The process is systematic and ongoing, with portfolios and narratives submitted for review by various administrators in June of each year. (Bresciani et al., 2009)

Documents

Finally, documents include 'written materials and other documents from organizational, clinical, or program records; memoranda and correspondence; official publications and reports; personal diaries, letters, artistic works, photographs, and memorabilia; and written responses to open-ended surveys' (Patton, 2002, p. 4). Public records and personal documents are the two primary categories of documents one might use when doing outcomes-based assessment or research (Upcraft & Schuh, 1996). Newspaper and magazine excerpts, enrollment and retention records, and judicial records are examples of public records. Both types of documents can enhance the overall data collected in an assessment project. It is important to note, however, that the authenticity of documents must be determined prior to using them for assessment (Creswell, 1998; Patton, 2002; Upcraft & Schuh, 1996).

In addition to the aforementioned documents, many student affairs professionals also use portfolios, student reflections, reports, or other forms of classroom-type documents for outcomes-based assessment data collection. Again, criteria checklists or rubrics can be used in the analysis of documents to identify whether outcomes are met. Keep in mind that whenever criteria are used with a qualitative method, the process of inductive discovery is diminished, and therefore so is the true nature of the qualitative methodology. Nonetheless, documents are a rich source of information and provide a great starting point for any assessment project.

REFERENCES

Bresciani, M. J. (in press-a). Challenges in the implementation of outcome-based assessment program review in a California Community College District. Community College Journal of Research and Practice.

Bresciani, M. J. (in press-b). An introduction to outcomes-based assessment: A comparison of approaches. In McClellan & J. Stringer (Eds.), Handbook for student affairs administration (3rd ed.). San Francisco: Jossey-Bass.

Bresciani, M. J. (in press-c). Understanding barriers to student affairs/services professionals' engagement in outcomes-based assessment of student learning and development. College Student Journal.

Creswell, J. W. (1998). Qualitative inquiry and research design: Choosing among five traditions. Thousand Oaks, CA: Sage.

Denzin, N., & Lincoln, Y. (Eds.). (2000). Handbook of qualitative research. Thousand Oaks, CA: Sage.

Maki, P. L. (2004). Assessing for learning: Building a sustainable commitment across the institution. Sterling, VA: Stylus.

Palomba, C. A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco: Jossey-Bass.

Patton, M. Q. (2002). Qualitative research and evaluation methods. Thousand Oaks, CA: Sage.

Schuh, J. H., Upcraft, M. L., & Associates. (2001). Assessment practice in student affairs: An application manual. San Francisco: Jossey-Bass.

Upcraft, M. L., & Schuh, J. H. (1996). Assessment in student affairs: A guide for practitioners. San Francisco: Jossey-Bass.