The three principles of Universal Design for Learning (version 3.0) each address specific aspects of learning and how we process information. They are:
Representation: which focuses on recognition networks in the mind
Action and Expression: which focuses on strategic networks in the mind
Engagement: which focuses on affective networks in the mind
The first principle pertains to how we present information to learners. Essentially, it is about what they are learning. To ensure learning is most successful, offer a choice of multiple presentations of the information.
People differ in the ways that they perceive and comprehend information, so how we present it to them matters. Besides sensory disabilities, such as blindness or deafness, and various learning disabilities, there are several other factors, beyond standard accessibility concerns, that significantly affect individual learning. A few of these factors include:
Language differences
Cultural differences
Economic differences
Each of these may require different strategies for presenting content to ensure student success. A more inclusive approach would accommodate all of these factors to offer an equal opportunity for understanding.
Even setting these factors aside, some learners simply acquire information faster through one means of presentation than another, such as excelling with visual presentations instead of textual information. For this reason, offering multiple representations of content improves student learning. It allows students to make connections between, and within, the content, which improves learning and the transfer of concepts. The use of multiple representations is vital because there is no single optimal representation for learning; multiple representations allow students to better comprehend the material during the learning process.
There are three guidelines to follow when focusing on the representation of content. They are:
Perception
Language and Symbols
Building Knowledge (version 3.0)
Each of these guidelines, with its own goals to meet, ensures students have the best chance to recognize what they need to learn. Meeting the Principle of Representation also contributes to student inclusion, as it does not bar access to the content for any group. This results in overall student success and is simply good teaching practice.
AI company CEOs are claiming that artificial intelligence will replace workers (Cutter & Zimmerman, 2025) based on the vast amount of content quickly created by AI. However, faster does not necessarily entail better. When one needs brain surgery, it is probably not wise to go with the surgeon who claims to perform the procedure faster than anyone else with no attention to the survival rate. When stakes are high, we value accuracy. Sacrificing a marginal amount of time becomes an acceptable trade.
Online learning took years of research to confirm its legitimacy, and in 2009 the Department of Education confirmed that online courses were not only comparable to, but in some cases better than, traditional courses (Means, et al., 2009). Once the pandemic necessitated a move to online instruction, the vast number of academics rushing online frequently resulted in best practices being ignored in favor of speed by those new to the modality (Farrell, 2022). Years later, the effects of online instruction without proper instructional design are still not determined.
The introduction of AI-powered tools has produced more AI-generated text than the combined output of human-generated text since the Gutenberg printing press (Vincent, 2021). This vast amount of content is hawked as a good thing. However, quality isn't measured in terabytes.
When writing computer programming code, AI code often contains bugs and introduces security risks (Perry et al., 2023). Alarmingly, while programmers predicted that AI tools would reduce their completion time by at least 20%, studies reveal the tools slowed completion by 19% (Becker, et al., 2025). This significant difference illustrates that being enamored of a quick and easy tool can lead to a sacrifice in quality that impedes the end result. Simply having a tool that can produce vast amounts of content does not vouch for the quality of that content.
How does this apply to online learning? Over twenty years of research molded an online design approach that focuses on positive learning experiences (Wasson & Kirschner, 2020), where courses are designed to be accessible to all students, follow a Universal Design for Learning approach (Dell, 2015), and focus on complex learning through engaging designs instead of merely presenting content (Wasson & Kirschner, 2020). A few features that are expected in these online designs include:
A focus on the learner
Prototyping designs
Designs that accommodate multiple learning strategies (Rose & Meyer, 2002)
Iterative development incorporating student feedback (Adnan & Rizhaupt, 2018)
Aligning learning outcomes to assessments (Ni She, et al, 2021)
Being able to develop online learning environments that apply best practices defined by applied research in learning is critical for creating courses that promote student success. While AI may be able to quickly produce content, would it be able to create discussion activities for adult learners that:
Promote active learning
Provide open-ended questions that explore and apply concepts
Encourage learners to apply their real-world experience to the content
Avoid soliciting facts that close off conversations
Apply scaffolds that ensure inclusive learning practices
With the vast growth of the AI industry, faculty access to
AI, and economic issues facing schools, it is quite possible that AI will quickly
provide shoddy content at the cost of the learners.
Case Study: Brightspace LUMI AI
LUMI is an AI tool available through D2L Brightspace that applies only the specific content instructors provide while promising content privacy. Based on Anthropic's Claude architecture, the tool can generate:
the tool can generate:
test questions,
discussion questions,
assignment ideas, and
module summaries.
This can be done quickly and relatively easily. Impressively, users can even set the level of Bloom's taxonomy for many of the outputs. Being integrated within Brightspace provides a common platform for a school to use, saving money and training time. Its one-press button is also convenient for faculty.
Will it replace developers and instructional designers? Like the claims of many selling AI tools, these are exaggerated. When using AI to develop questions for online discussions, I noticed a disturbing trend. The questions resembled those of a new teacher with no knowledge of best practices in instructional design.
With each generated output, the discussion questions lacked instructional design principles for online learning. Presumably developers will edit the material and make necessary additions or changes. In reality, overworked instructors are not incentivized to do so. And this is the crux of the problem with AI. It encourages the quick and easy solution and contributes to ignoring specialists. AI slop is quick and easy; however, it does not promote student learning (Weller, 2024).
The Lesson
AI is a tool, not a solution. If you are going to use AI, be sure to edit the output and apply best practices in instructional design for learning. For example, online discussion questions should:
Be open-ended
Provide follow-up questions and time for reflection
Link to authentic life experiences
Encourage sharing references and promote an academic conversation
While there is nothing wrong with using AI to help inspire
you when developing courses, you will probably note that the best courses have
inspiring and engaging learning activities that are more than just a vague
question for a standard online forum. To be fair to LUMI, Brightspace has designed it such that instructors must review and edit the generated results before they are applied.
The AI Apocalypse - Wow?
Replacing experts with stochastic parrots is probably a recipe for disaster. The hallmark of a good college is its instruction. AI is a tool, and it is important not to be enamored of its novelty. Instead, learn to work with it and know its limitations. Be sure to vet all generative AI material. Be warned: you may be like the programmers who think AI tools are increasing their speed, when in actuality the tool has become an albatross.
Since instructional design is a detailed field, it is prudent to understand best practices in online course development to assist your assessment of AI content. Most teaching and learning centers are filled with instructional designers who are more than happy to share that knowledge with faculty. While they may dispel the razzle-dazzle of some AI tools, they will contribute to developing courses that increase student success.
Perry, N., Srivastava, M., Kumar, D., & Boneh, D. (2023). Do Users Write More Insecure Code with AI Assistants? In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security (CCS '23), November 26–30, 2023, Copenhagen, Denmark. ACM, New York, NY, USA.
Rose, D., & Meyer, A. (2002). Teaching Every Student in the Digital Age: Universal Design for Learning. Alexandria, VA: Association for Supervision and Curriculum Development.
When we teach, whether online or in the classroom, gathering information for assessment is critical to improving what we do as educators. Assessment is one of the aspects that educators often don't do well. Many times, course evaluation surveys are the extent of our attempts to assess whether we are meeting the course's goals. These surveys are distributed at the end of the course, and if the instructor is lucky, the results may be available months later. This process neglects the needs of instructors by not affording them the information they need to make corrections and improve in real time. Naturally, the students also suffer, as they will not benefit from real-time corrections by the instructor.
The ADDIE model is a traditional instructional design methodology to help streamline the production of your course. The name is an acronym for the 5 steps of the model:
Analyze
Design
Develop
Implement
Evaluate
Evaluation is set at the end of the process, and while the process is supposed to be cyclical, a frequent critique is that it is too time-consuming. Models like Rapid Instructional Design (RID) or Agile emphasize including evaluation throughout the stages. Whether these critiques of the ADDIE model are fair, or whether people have traditionally failed to apply the model functionally at different levels, is an interesting question, but it will not directly help you improve your course now.
Micro-Feedback
Developing mechanisms to collect feedback throughout the course, or micro-feedback, and using it to improve your course on the fly can be very helpful. By including short surveys at the module level (or smaller), instead of just one large survey at the end of the course, instructors can gain valuable insight that can help them rapidly adapt to their students' needs. This improvement will directly affect the students. Moreover, the adaptation can meet the specific needs of the students and may vary the next time the course is taught.
Adopting a flexible and adaptive approach using micro-feedback gives you a greater understanding of your course. Surveys can be short and can include some reflective feedback from the student, giving you the information you need to better meet their needs and improve student success. While it may seem difficult, adding a small (five-question) optional survey can give you valuable insight for improving your courses. You may find that the information streamlines your instruction and reduces your work by making it more effective.
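For example, one possible version of a module-level survey (a sketch, not a prescription) might ask:
Which concept in this module was clearest to you?
Which concept is still confusing, and why?
How well did the readings and activities prepare you for the assignments?
What is one change that would help you learn better in the next module?
What else would you like me to know?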
Computer Science faculty had been dealing with AI tools and coding before the vast popularity of generative AI (Gen AI) in 2023. Since then, there has been much attention on how AI will transform software development and coding.
Pros of AI Assisted Programming
Specifically, proponents of Gen AI identify key benefits that it offers. These include:
Decreased coding time: Programmers can code up to twice as fast using generative AI (Deniz et al., 2023).
Higher satisfaction: 60-75% of programmers reported feeling more fulfilled in their jobs when using AI. Likewise, 73% stated that AI assisted their focus, and 87% stated that they benefited from the reduced mental effort on repetitive tasks (Kalliamvakou, 2024). The rate of adopting AI suggestions also correlates with programmers' perceived satisfaction (Ziegler et al., 2022).
However, it is important to note that perceived satisfaction from surveys may contain biases and is not the same as quantitative research on productivity and programmer focus (Moore, 2024b).
Cons of AI Assisted Programming
However, there are also significant costs that come with using Gen AI to code. A few of these costs include:
Security: Programmers who use AI to assist their coding are more likely to introduce security vulnerabilities in their code (Perry et al., 2023); a sketch of one common pattern appears after this list. Investing in repeated inquiries and examining the AI-generated code could offset this increase, but at a cost to the time saved by using AI.
Productivity: There has been no productivity gain, and there are potentially serious downsides. First, programmers using Gen AI had a 41% increase in their bug rate (Moore, 2024a). This creates a greater need to debug the code created.
Satisfaction and burn-out: Controlling for sustained effort and extended work time outside of standard hours, there was no reduction in the rate of burn-out among programmers using AI (Moore, 2024c).
These costs suggest that whatever the perceived benefits of using AI to code, they can be offset by the increased time spent debugging and examining the code it generates.
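To make the security cost concrete, below is a minimal Python sketch of the kind of vulnerability Perry et al. (2023) describe: building an SQL query by string interpolation, a pattern AI assistants frequently suggest, next to the parameterized version a careful programmer would write. The table and data are hypothetical.

import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # BAD: untrusted input is interpolated directly into the SQL string,
    # so a username like "x' OR '1'='1" returns every row (SQL injection).
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # GOOD: a parameterized query treats the input as data, not as SQL.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, username TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])
    malicious = "x' OR '1'='1"
    print(find_user_insecure(conn, malicious))  # leaks both rows
    print(find_user_secure(conn, malicious))    # returns nothing

Code reviews and security linters catch exactly this class of bug, and that kind of examination is what erodes the time AI appears to save.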
The appealing nature of AI coding tools for creating vast amounts of code poses a long-term danger: as developers get accustomed to the perceived speed of the AI tools, they may gain false confidence and rely on them to program and review code more. This will perpetuate more bugs, amplified by code that is not documented. Organizations will also suffer a loss of skill.
Tips When Using AI Assisted Programming
If one is going to use Gen AI to assist with programming, it is important to understand its limitations. This entails:
Repeatedly examine the code for bugs and errors. Gen AI repeatedly produces code with errors, and these errors do not decrease with additional prompting. In some cases, AI produced twice as many errors and took longer to patch them (Jesse, 2023).
Do not rely on Gen AI for the most difficult programming issues. AI increasingly creates unsatisfactory code the harder the problem becomes. This is also the case when working with specific organizational content. Since the AI was not trained on that content, it regularly creates errors, preferring patterns from its training data rather than solving the problem within the organizational context (Moore, 2024b).
Given these needs, programmers must leverage their skills to monitor code and reduce error rates. Producing fewer bugs and security issues in the final product will be the hallmark of desired programmers. This will also leverage programming expertise against the speed of dubious code produced by Gen AI and ensure that skilled programmers remain preferable to those who rely on Gen AI to produce their final product.
How Does that Affect Education?
If decisions are not made soon, educators will lose the ability to shape future programmers and to face the challenges to come (Becker, et al., 2023). Responsible educators must remind their students of the critical issues associated with coding with AI. Further, they need to produce learners who can assess and reflect on Gen AI as a tool instead of a solution.
Meyer, A., Fritz, T., Murphy, G., & Zimmermann, T. (2014). Software developers' perceptions of productivity. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2014). Association for Computing Machinery, New York, NY, USA, 19–29.
Perry, N., Srivastava, M., Kumar, D., & Boneh, D. (2023). Do Users Write More Insecure Code with AI Assistants? In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security (CCS '23), November 26–30, 2023, Copenhagen, Denmark. ACM, New York, NY, USA.
Ziegler, A., Kalliamvakou, E., Li, A., Rice, A., Rifkin, D., Simister, S., Sittampalam, G., & Aftandilian, E. (2022). Productivity Assessment of Neural Code Completion. In Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming (MAPS '22), June 13, 2022, San Diego, CA, USA. ACM, New York, NY, USA.
While there has been considerable attention recently on Artificial Intelligence (AI), development in the field is not new. The term was coined in 1956 at Dartmouth College during a summer research project organized by Dr. John McCarthy.
Since then, "artificial intelligence" has been used in one way or another in education for years. Search engines, personal assistants on phones, assistive technology to increase accessibility, and other technologies all use some form of applied artificial intelligence. However, what most people are concerned about are the advances in OpenAI's technology found in GPT-4 and ChatGPT. To address this, let's briefly look at how ChatGPT works and a few of the issues it poses through its architecture.
ChatGPT: How it Works
Unlike Deep Blue, which used "brute force" to check all possible outcomes and determine an optimal answer, ChatGPT uses an artificial neural network that trains on data sets from billions of webpages, with trillions of words, and selects a word that statistically will follow its predecessor. Artificial neural networks have been around for a long time, and they are excellent at pattern matching. In the case of ChatGPT, the program weights all the words that have been connected to the input and selects the word with the highest probability the weights have assigned to it. The program then looks for the next word the same way.
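A toy Python sketch of this idea (the probability table below is invented for illustration; a real model derives billions of weights from its training data):

import random

# Invented probabilities standing in for learned weights.
next_word_weights = {
    "the": {"cat": 0.4, "dog": 0.35, "idea": 0.25},
    "cat": {"sat": 0.5, "ran": 0.3, "slept": 0.2},
    "sat": {"quietly": 0.6, "down": 0.4},
}

def next_word(word: str) -> str:
    candidates = next_word_weights[word]
    # Pick the continuation at random, weighted by probability.
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

phrase = ["the"]
while phrase[-1] in next_word_weights and len(phrase) < 5:
    phrase.append(next_word(phrase[-1]))
print(" ".join(phrase))  # e.g. "the cat sat quietly"

At no point does this program consult any fact about cats; it only follows the weights, which is the core of the point being made here.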
GPT-3 (the model that runs ChatGPT) used over 3,000 HGX A100 servers with over 28,000 GPUs to train on over 570 GB of text data and assign weights to words. Over months of training, it created over 170 billion connections between words, with weights representing the importance of each connection relative to the word. This level of computing was very costly, at about $500,000/day, using more electricity than 150,000 people use in a month (23M kWh). Researchers then 'tweaked' the weights of the trained network to help rule out odd responses, and training continued until the responses were consistent with what the programmers wanted.
This results in a program that makes a next-token prediction from a list of rated words, choosing the word most likely to appear next. There is no semantic grounding, no set knowledge base built on confirmed scientific data, and no criterion of truth other than the statistical probability of another word following your prompt, given its large training set. For example, when you use ChatGPT, you:
Give it a prompt
ChatGPT looks at the last word of the prompt and assigns a number to encode it.
It multiplies that number through the learned associations between words (the embedding), creating a roughly 12,000-dimensional representation.
Attention transformers identify which words in the prompt should receive more attention than others (such as nouns over adverbs).
The output is "normalized" to make it resemble a matrix again.
The results are fed forward to another layer of attention transformers (repeated roughly 95 times).
One word is produced.
ChatGPT repeats steps 2-7 to produce each subsequent word.
At no point does ChatGPT know what the question is.
There is no knowledge-base being consulted.
It simply assigns weights to nodes in a matrix representing the probability that a word will follow its predecessor, and it selects the one with the highest value. These weights are assigned through a training algorithm working with a training set, which includes some "randomness" to ensure that the text appears "fresh," and researchers "tweak" the weights to help create desired results.
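That "randomness" is typically controlled by a temperature parameter. A minimal sketch, with invented scores standing in for the network's output over a vocabulary of tens of thousands of tokens:

import math
import random

# Invented raw scores ("logits") for candidate next words.
logits = {"ocean": 2.1, "sky": 1.7, "sandwich": -0.5}

def sample(logits: dict, temperature: float = 1.0) -> str:
    # Softmax with temperature: low temperature concentrates probability
    # on the top-scoring word; higher temperature spreads it out, which
    # is what keeps generated text feeling "fresh".
    scaled = {w: math.exp(s / temperature) for w, s in logits.items()}
    total = sum(scaled.values())
    words = list(scaled)
    probs = [scaled[w] / total for w in words]
    return random.choices(words, weights=probs)[0]

print(sample(logits, temperature=0.1))  # almost always "ocean"
print(sample(logits, temperature=2.0))  # occasionally "sandwich"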
It seems clear that one of the desired results is a program that appears intelligent regardless of knowledge or accuracy. This might not be too surprising when we remember that ChatGPT is the product of a private company, where the perception of a successful product is key for profit and increasing stock prices. Moreover, there is a lot of pressure to sell these systems. The current costs of these AI systems are creating a money pit for investors, while the product seems to be a solution looking for a problem.
What Could Go Wrong?
Misinformation and the Lack of Truth
ChatGPT may be designed to present its information in a convincing way. Since the first chatbot, ELIZA, people have tended to anthropomorphize these software applications and place trust in them. This can be dangerous with a program that can produce misleading statements. While ChatGPT carries a disclaimer, OpenAI benefited from the hype while not warning of issues with its product.
ChatGPT is capable of generating a considerable amount of nonsense, such as:
“Some experts believe that the act of eating a sock helps the brain to come out of its altered state as a result of meditation.”
While this is noticeably absurd, other statements may not be so obvious. ChatGPT has already been documented fabricating information and adamantly defending these fabrications. These cases are often referred to as "hallucinations," as the chatbot produces responses as though they were correct. Apparently ChatGPT has been so convincing that a lawyer has already been caught citing cases hallucinated by the program. This was a career-ending mistake.
OpenAI recommends that users check the output of ChatGPT. However, not producing accurate information seems to be a serious flaw in the tool. Instead, ChatGPT produces vast amounts of text with a variable level of "truthiness." Another term for its output would be fiction or misinformation. Others argue that its output conforms with the technical term coined by Harry Frankfurt: 'bullshit', named in reference to the game in which players deliberately try to convince others of a statement with no regard to its truth.
In small doses, misinformation on the web is not a problem; however, AI has already produced more text than humans have since the Gutenberg printing press. ChatGPT produces approximately 4.5 billion words a day. This flood of information will make it harder to find accurate and truthful information on the web. This is harmful on many levels, including contributing to the undermining of democracies. If colleges are going to promote the wholesale adoption of this technology, we have to consider the increase in misinformation this will produce. It may be wise to take a more tempered approach.
Whenever a new language model like ChatGPT comes out, it gets a lot of hype. However, how should we proceed? We are not faced with a dilemma between encouraging it, and thereby promoting the spread of misinformation, or banning all AI, reminiscent of humanity's history in Herbert's Dune. Another option is a measured approach in which we carefully employ AI to illustrate its flaws and prepare students for the future. This may include teaching students how to combat a vast amount of misinformation, and it should certainly include highlighting the importance of information literacy and of research librarians, who are regularly under-utilized by students.
References
Berry, D. (2018). "Weizenbaum, ELIZA and the End of Human Reason." In M. Baranovska & S. Höltgen (Eds.), Hello, I'm Eliza: Fünfzig Jahre Gespräche mit Computern [Hello, I'm Eliza: Fifty Years of Conversations with Computers] (1st ed., pp. 53–70). Berlin: Projekt Verlag.
Estreich, G. (2019). Fables and Futures: Biotechnology, Disability, and the Stories We Tell Ourselves. Cambridge, MA: MIT Press.
Frankfurt, H. G. (1988). "On Bullshit." In The Importance of What We Care About: Philosophical Essays (pp. 117–133). Cambridge: Cambridge University Press (originally published in the Raritan Quarterly Review, 6(2): 81–100, 1986; reprinted as a book in 2005 by Princeton University Press).
Gebru, T., Morgenstern, J., Vecchione, B., Wortman Vaughan, J., Wallach, H., Daumé, H., & Crawford, K. (2022). Excerpt from Datasheets for Datasets. In K. Martin (Ed.), Ethics of Data and Analytics: Concepts and Cases. New York: Auerbach Publications.
With the popularity of AI and the extraordinary, and dubious, claims about its abilities (Narayanan & Kapoor, 2024), heightened concern about students cheating has emerged. Naturally, proctoring and anti-plagiarism companies benefit from this fear, and the sales of their 'solutions' increase. The problem with this policing solution is that, beyond a point, teachers get into an 'arms race' between AI and AI-detection software where the real loser is the institution footing the bill. One might think that the companies are happy to 'stir the pot' to increase anxiety for the benefit of their shareholders. Meanwhile, research has indicated that the level of academic dishonesty has not changed with the prevalence of AI (Lee, et al., 2024). Yet the focus on policing students places instructors in an adversarial relationship with the students, instead of an instructive one.
A Better Solution
Low-stakes assignments have the added benefit of deterring academic dishonesty. Because there is less risk involved in the assignment, there is less incentive to cheat, which could bring about severe consequences. Why take the risk, when there is so little to gain?
Low-stakes assignments, such as threaded assignments, thwart the use of AI that quickly generates content. The low-stakes assignments act as scaffolds that require further reflection and meta-cognitive skills, which are not easily replicated by the stochastic language models modern AI uses. The would-be culprit would have to reflect and expend so much energy producing something to submit that it isn't worth the effort to use whatever the AI model can produce.
When assignments:
offer less stress,
supply tools for the students to succeed,
clearly express expectations, and
encourage the student to take control of their learning,
the students begin to see the value in what they are learning. This dissuades the learner from cutting corners instead of genuinely going through the process. Not only will this lower academic dishonesty in your classes, but low-stakes assignments will also encourage more of your students to become engaged, active learners.
Low-stakes assignments engage students in the learning process. The best way to eliminate academic dishonesty is to remove the incentive to cheat and be up-front about the rules. Students often 'cheat' because they have not received adequate instruction and expectations (Waltzer, Bareket-Shavit & Dahl, 2023). To solve this, explicitly state the acceptable level of AI usage, and you will curb the level of unintended violations of your academic expectations. A group of low-stakes assignments, often scaffolded to create a larger assignment, then undermines the pay-off from cheating.
Narayanan, A., & Kapoor, S. (2024). AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference. Princeton, NJ: Princeton University Press.
Did you ever have a dream where you were back in school, you enter a class, and you realize you have a final exam on a topic you know nothing about? The pressure of exams is so great that it unconsciously affects us decades later. In fact, high-stakes assignments and testing have been linked to increased suicidal ideation (Wang, 2016) and higher suicide rates (Kapur, 2021; Singer, 2017). They have been connected to undermining educational goals, perpetuating inequalities, creating inequitable learning environments, and encouraging cheating both by students and by educational actors, such as teachers, administrators, and even state officials (Nicols & Berliner, 2007). So then, why do we use high-stakes testing and assignments? Tradition?
Low-stakes assignments, taken individually, do not significantly impact a student's grades. Their purpose is primarily to provide students with a performance indicator. Students can then reflect on the areas in need of improvement and how to improve. Low-stakes assignments also provide assistive scaffolding by supplying formative feedback that is frequent and timely (Kuh, Kinzie, Schuh, & Whitt, 2010). They work best when the formative feedback starts early and continues throughout the course.
Benefits of Low-Stakes Assignments
A few of the benefits of low-stakes assignments include:
They provide feedback for instructors about how successfully students are learning. This can be particularly effective in environments where it is hard to pick up on subtle clues that students are struggling, such as in online or hybrid classes.
They allow instructors to direct students to resources if they need further assistance or support.
Early feedback opens up communication between students and their instructors, possibly increasing their likelihood of seeking help when needed.
They allow students to be active participants in the evaluation of their own learning.
They encourage students and increase the likelihood of their engagement and attendance.
Many of these will raise your retention rates and help students succeed.
Examples of Low-Stakes Assignments
But what would a low-stakes assignment look like? Some
examples of low-stakes assignments include:
Self-tests (ungraded or low-point). These can be automated with online testing so that they do not take any class time. They can also be anonymized to give students comfort. Self-tests are particularly effective when combined with having…
Multiple attempts (on questions or the whole exam). This feature reduces test anxiety and allows students to learn from their errors. When feedback is given for each question, you will notice the best results. The knowledge that they can take the exam another time also reduces the pressure to cheat (Wehlburg, 2021).
Discussion/Collaboration: Students who share their writing or thoughts with others and get feedback will improve their learning and better meet learning outcomes.
Multiple submissions of a paper. Feedback on a first submission, with time to reflect and rewrite, allows students to hone their writing skills.
Reflective journaling. Writing self-reflective content both increases the meta-cognitive skills used for learning and develops writing skills. An added perk is that AI tools have a hard time replicating this type of writing.
A threaded assignment, i.e., breaking down the assignment into several parts. Individually, the grades are low, but collectively the parts aggregate into a large assignment, such as a term paper. This technique often provides the scaffolds that help disenfranchised, or otherwise struggling, students succeed. A sample of deconstructing a large assignment into components would be breaking a thesis paper into the following smaller assignments:
Thesis/Abstract
Outline
Annotated bibliography
1st draft
Final draft
These are just a few examples; however, they offer an excellent opportunity for both you and your students to get the feedback needed to help improve your course's student success rate. They also help develop a grading system that clearly shows the steps necessary for meeting the learning outcomes of the course.
Drabick, D. A. G., Weisberg, R., Paul, L., & Bubier, J. L. (2007). Keeping it short and sweet: Brief, ungraded writing assignments facilitate learning. Teaching of Psychology, 34, 172–176.
Kuh, G. D., Kinzie, J., Schuh, J. S., & Whitt, E. J. (2010). Student Success in College: Creating Conditions That Matter. San Francisco, CA: Jossey-Bass.
Nicols, S., & Berliner, D. (2007). Collateral Damage: How High-Stakes Testing Corrupts America's Schools. Cambridge, MA: Harvard Education Press.