If you’re like many introductory physics instructors, you likely use an online homework system. These systems have evolved tremendously over the past decade and now allow instructors to customize assignment settings extensively. For example, instructors control how many attempts each student has and whether hints and feedback are available. While this flexibility can be good, it can also cause problems. Settings that are too strict can frustrate and demotivate students, and as problems mount, they eat into your busy schedule as you field student questions and concerns.
Recently, an instructor new to the Expert TA online homework system indicated that his students “didn’t like the feedback.” After investigating, we found that the students were accessing available feedback less than 1 percent of the time. The instructor’s grade settings offered an explanation: while incorrect answers resulted in a 1 percent grade deduction, accessing hints and feedback cost 10 percent. Understandably, students weren’t willing to lose a full letter grade for using one hint, so they simply avoided them. After these settings were changed to fall within the ranges described below, students began accessing hints and feedback readily, and the overall success rate on homework increased significantly. (See a recent case study about this topic.)
With so much flexibility available, it is natural to wonder if there is an optimal configuration that increases student engagement and leads to improved outcomes. The Expert TA team decided to use “Big Data” and our Analytics Platform to find out. In a large case study, we analyzed data from over 100 classes at 75 institutions (millions of submitted answers) and found some very clear regions where productive work was happening.
So what are the sweet spots?
- Number of Allowed Answer Submissions: 4-7 submissions. A single attempt clearly leaves no room for correction. The data showed that students with only 2-3 tries were often headed in the right direction but did not have enough attempts to reach the correct answer. At the other extreme, students with more than 10 incorrect answers on a given question rarely reached a successful outcome; past a certain point, students are simply entering additional guesses rather than working thoughtfully. Students were most engaged and reached successful outcomes most often when 4-7 submission attempts were allowed.
- Deduction for Incorrect Answers: 5 points. The penalty for a wrong answer should be 2-3 times greater than the cost of accessing hints and feedback. This discourages the “plug-and-chug” approach of simply reorganizing equations or changing values to produce a different answer, which is no better than guessing. The idea is to encourage students to rethink their strategy and reevaluate before making another submission.
- Deduction for Hints: 1-2 points. Accessing a hint should not be free, but the deduction should be smaller than the one for incorrect answers. Some cost must be attached to make hints psychologically meaningful, but not so much that it discourages their use. Since hints are meant to get students “unstuck,” the incentive should favor accessing the hint over simply guessing again.
- Deduction for Feedback: 1-2 points. Feedback and hints are similar but distinct. In Expert TA, feedback is based on the incorrect submission and targeted at the specific mistake made. The deduction for feedback should be in the same range as the one for accessing hints. The goal, again, is to give students an incentive to access and read information designed to help them frame the problem.
- Late Work: Some partial credit (e.g., 50%). Most instructors in the case study offered some credit for late submissions until either the next exam or the final exam. Consider a student who finished 7 out of 10 homework questions on time. Without any incentive, that student may leave homework points and needed practice on the table. Giving partial late-work credit for the remaining questions can motivate the student to work those problems before the exam, creating a win-win: a small boost to the homework grade while studying for the exam.
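Taken together, the sweet spots above amount to a small configuration. As a minimal sketch (the setting names and structure here are illustrative, not Expert TA’s actual API), the recommended ranges could be written down and checked like this:

```python
# Recommended ranges from the case study, expressed as (low, high) bounds.
# These dictionary keys are hypothetical names, not Expert TA settings.
RECOMMENDED = {
    "allowed_submissions": (4, 7),     # answer attempts per question
    "wrong_answer_deduction": (5, 5),  # percent lost per incorrect answer
    "hint_deduction": (1, 2),          # percent lost per hint accessed
    "feedback_deduction": (1, 2),      # percent lost per feedback view
    "late_credit_percent": (50, 50),   # example partial credit for late work
}

def within_sweet_spot(setting: str, value: float) -> bool:
    """Return True if a chosen value falls inside the recommended range."""
    low, high = RECOMMENDED[setting]
    return low <= value <= high

# The instructor from the earlier anecdote charged 10% per hint --
# far outside the recommended 1-2% range:
print(within_sweet_spot("hint_deduction", 10))      # False
print(within_sweet_spot("allowed_submissions", 5))  # True
```

A check like this makes it easy to see at a glance which of your current assignment settings fall outside the productive regions the data identified.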
Interestingly, the recipe above produces overall grades that philosophically resonate with most of the instructors interviewed about this data. Consider an “average” student who is working through a given assignment with real effort. This hypothetical student eventually gets to the correct answer for all of the questions, but takes two or three submission attempts and needs to access hints and/or feedback along the way. What grade do you think the student would deserve? Almost no instructors suggested a D or F and very few suggested a high A. Most felt that the student would deserve a grade in the range of C+ to B (or equivalently a ~78 to 85 on a 100-point scale). The setting ranges described here provide overall grades in that range.
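To see why these settings tend to land the hypothetical “average” student in that band, here is a back-of-the-envelope calculation. It assumes deductions are flat and additive (5% per wrong answer, 2% per hint or feedback access), which is one plausible reading of the settings above, not a statement of how Expert TA actually computes grades:

```python
def question_score(wrong_attempts, hints, feedback,
                   wrong_cost=5, hint_cost=2, feedback_cost=2):
    """Score for one question under flat, additive deductions (assumed model)."""
    deduction = (wrong_attempts * wrong_cost
                 + hints * hint_cost
                 + feedback * feedback_cost)
    return max(0, 100 - deduction)

# A 10-question assignment: the student eventually answers everything, but
# half the questions take 3 submissions (2 wrong) with one hint and one
# feedback view, and half take 4 submissions (3 wrong) with two hints and
# one feedback view.
scores = ([question_score(2, 1, 1) for _ in range(5)]
          + [question_score(3, 2, 1) for _ in range(5)])
print(sum(scores) / len(scores))  # 82.5 -- squarely in the C+ to B range
```

Under these assumed deduction values, honest-but-imperfect work comes out in the low 80s, which matches the grade most interviewed instructors said such a student deserves.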
Each classroom is different, so we suggest tweaking your settings to whatever makes the most sense for your course structure, but hopefully this commentary and these results are helpful.
SEE EXPERT TA’S HOMEWORK SYSTEM FOR YOURSELF THROUGH AN ONLINE DEMO.
Formed from the belief that a homework system should help instructors teach and students learn, Expert TA harnesses the power of technology to encourage practice during homework, while also giving meaningful feedback to both instructors and students. The Expert TA blog was created to serve as a hub of information to help educators track and discuss trends in education, software and student performance.