Fostering Digital Scholarship Based on Critical Thinking and Reflective Use of AI
EATAW Position Statement on Generative AI
May 2024
The growth of large language models such as ChatGPT along with other generative AI resources raises questions about how these technologies (hereafter generative AI) should or should not be used, taught or monitored in higher education. Now that the initial hype surrounding generative AI technologies has receded a little, we can ponder their benefits but also consider their shortcomings and hazards.
Higher education aims to prepare students with the knowledge and skills they need to contribute to society in a variety of ways and roles. As educators, tutors, instructors, coaches, advisors and researchers of academic writing (hereafter writing specialists), we see a dual role for writing in the acquisition of these skills. It is a skill that students will need in future social roles and careers, and it is a tool for thinking that facilitates learning and assessment. It is this latter role that risks being sidelined by generative AI, and about this we need to be cautious. As writing specialists, EATAW members have a distinct perspective on the development of generative AI and its use in higher education. Therefore, we feel we are in a position to comment and advise on the pedagogical implications of generative AI use in higher education, specifically as they pertain to the teaching of academic writing.
This is not the first time writing specialists such as EATAW members have encountered new, disruptive technologies. The appearance of word processors and the internet both challenged our assumptions about the role of writing in academic work and the nature of academic work itself. We know from these earlier digital revolutions that we need to proactively open our working fields to new technologies while protecting them from the abusive use of technology and inadequate pedagogical practices. To accomplish this, we point in this statement to areas of concern and opportunity for EATAW members and their institutions in deciding the most appropriate policy choices and practices for the use and monitoring of generative AI in their specific context of academic writing. We thus offer some guidelines for the discussion and interpretation of the developments in light of what we think writing research can contribute.
Writing research shows that there is no single “academic writing,” but multiple “academic writings” that emerge from disciplines and are enacted through disciplinary genres. By this token, there is no single relationship between generative AI and writing. For each discipline and each of its genres, there may be a different use for AI. For some disciplines, AI use may become almost obligatory; for others, it may be unhelpful or even harmful.
While we recognise that one set of standards is not always appropriate for every context, in line with the UNESCO (2023) analysis of the use of digital technologies during the pandemic, we believe that “[t]he future of education needs to be a humanistic one” (p. 19). Students will need to learn how to work with AI, but their personal development and the benefit of society as a whole must take first place.
The rapid growth of generative AI raises new concerns for educators and their institutions in four principal and overlapping areas:
1. AI literacy and digital scholarship
Developing digital scholarship
Generative AI is not the starting point for a new kind of literacy but builds on almost fifty years of digitalization in writing and education. “Technology ever increasingly is taking over the work previously done by humans in the composition, distribution, storage, access, and use of communications,” Bazerman (2018, p. 187) has said. Generative AI extends this development in important ways. With the arrival of generative AI, we potentially enter an age in which a new form of collaboration between computers and humans emerges. Generative AI has the potential to transform the writing process by automating or assisting with language-related tasks, thereby reshaping the roles and responsibilities of writing professionals and students. Students may also turn to AI – rightly or wrongly – for assistance with higher order thinking tasks.
To capture the competences needed to cope with these developments, we suggest adopting the term “digital scholarship” as proposed by Borgman (2007) and Weller (2011), which refers to the skills, attitudes and working habits necessary to cope with the digitalization of working and thinking processes in higher education. This concept applies to all levels of education from undergraduate studies to early career research training and allows for expansions in many directions, including AI features that are still to come. It includes the reflective adjustment to a constantly changing digital landscape and an understanding of the trajectories of technological innovations. It also includes a range of elementary skills in technology use such as those listed in UNESCO’s (2018) digital literacy skills framework or in the EU’s DigComp Conceptual Reference Model (https://joint-research-centre.ec.europa.eu/digcomp/digcomp-framework_en). In contrast to pure competence models, it stresses the individual’s higher-order competences and proposes a self-directed learning and acquisition mode. Acquisition of digital scholarship is what we see as the desired aim of higher education at all levels and in all disciplines.
Studying our students’ use of AI
University students are often quicker in picking up new technologies than their teachers. It is advisable to monitor students’ technology use and listen to their evaluations and critiques of the new and emerging technologies. Questionnaires exist for diagnosing and comparing student skills and self-reported usage patterns (for instance, Cieliebak et al., 2023).
Supporting AI use in writing centres
Writing centres should strengthen their role as facilitators of digital writing and digital scholarship. Making transparent what AI can do and what its shortcomings are has become a key task and will be a permanent mission for the future, as will supporting faculty in understanding and teaching AI use and in finding the right tools for their particular disciplines and courses. For many colleagues, the rapid evolution of AI has been overwhelming, diminishing their readiness to learn and use the new technologies.
Monitoring the impact of generative AI on learning
At this stage, relatively little is known about the long-term impact of using generative AI on students’ critical thinking and writing skills. Institutions should make efforts to monitor and reflect on whatever impacts may occur. In this way, policies can be adjusted according to the positive or negative effects of AI on critical skills. Writing specialists can play a valuable role in this monitoring and should be supported and compensated accordingly. They should also seek research opportunities to contribute to a deeper empirical understanding of the impact of generative AI and thereby help design better policies.
2. AI as a teaching subject
Integrating generative AI into the curriculum
To effectively integrate digital and generative AI skills into education, appropriate policies, guidelines, and competence frameworks are needed. Moreover, certain skills connected to the use of generative AI in writing are likely to become central to the world of work. Writing teachers should consider and advise on how these can be incorporated into the curriculum. Intended learning outcomes of curricula and courses should reflect this, including:
- Data literacy
- Meta-skills of knowledge attainment and assessment in digital contexts
- Higher-order thinking skills rooted in human discourses and rhetoric
- AI ethical awareness and responsible usage
Digital and AI skills should be integrated into courses on academic literacy, writing and text production, integrity, methodology, and information literacy (as taught by libraries).
As writing teachers may be no more knowledgeable regarding AI than their students, qualification programs for them may be needed as well.
Focussing on skills that humans can do better than machines
The growth of generative AI tools creates a new need to educate students in the critical use of AI, notably in evaluating the reliability, bias, accuracy, and relevance of AI-generated texts. Writing specialists should take the initiative and be supported in developing learning goals, teaching and learning activities, and appropriate assessment to ensure these skills are acquired. Students, more than ever, will require fine-grained analytical skills, rooted in their disciplines’ writing cultures, to evaluate AI outputs.
Critical thinking covers more than the critical use of AI; it also refers to the kind of thinking that remains for humans once the machines have done their work. Critical thinking, by definition, cannot be performed by AI. In particular, its reflective aspects seem out of reach for AI: taking initiative, acting intentionally, assessing contexts, making judgements in unclear problem situations, thinking analytically or hypothetically, enduring ambivalence and uncertainty, thinking from multiple perspectives, inventing something useful, reflecting on ethical judgements, breaking out of established thought routines, questioning something critically, and gaining knowledge collaboratively. Taken together, these intellectual skills and habits are essential parts of critical thinking and should be a prime target of teaching about AI. The main aim, therefore, is not simply the mastery of AI but the ability to take responsibility for what writer and AI have co-produced.
Monitoring and encouraging meaningful collaboration
Learning accelerates through meaningful collaboration between students, instructors and developers. When students explore software together, they exchange competences and skills, while instructors and developers collaborate to create tools that align with students’ needs. This collaborative approach fosters a deeper understanding of technology and its potential application in learning. Writing specialists are in a position to both encourage meaningful collaboration and to learn from it – to observe students’ collaborative use of AI, and to explore what this means for academic work.
3. Sharpening ethical awareness
Like any technology, generative AI is used by humans who must make ethical choices about using the technology in ways that either facilitate their own learning or provide them an unfair advantage. Universities will need to both promote and ensure the ethical use of generative AI, so as to maximise learning and minimise academic dishonesty. In the area of writing, the topic of authorship is of particular salience, along with the declaration of machine-made contributions to student papers.
Awareness of the potential of (human) writing to foster thinking also needs to be increased. A deeper understanding of the possible negative impact of generative AI use on human cognitive abilities is needed, and writing specialists should devote attention to this concern.
Developing clear policies
Writing specialists will need to understand the ways in which writing can occur with the use of generative AI, including the selection of appropriate technologies, effective use, and risk management. They should also seek to advise their institutions, so that informed decisions can be made as to which uses are ethical and educational, and which not.
Institutions will need to develop clear policies about the ways in which generative AI tools may be used: how they may support or facilitate writing, what is not appropriate, and what sanctions are to be applied in the case of infringements. These policies should clearly define permitted and prohibited uses of AI in writing, specifying which AI tools are allowed and the contexts in which they may be used. They should include both overarching institutional principles and specifics at the course level, which may differ depending on the intended learning outcomes or the nature of a written assignment. Such policies should also make clearer what types of machine assistance (e.g., grammar checks, translation tools, literature search tools) are acceptable in which contexts. They should further establish a clear and consistent system of sanctions against inappropriate AI use, which should be made known to all students and teaching staff. Writing specialists should facilitate and advise in these discussions.
Equity
Institutional and course policies related to the use of AI in writing will need to consider equity of access, notably when more sophisticated technologies are behind paywalls, but also when some students may have more limited internet connectivity or be constrained by older equipment. When learning goals or assessment methods entail or permit the use of AI tools that not all students have equal access to, inequalities will arise that are detrimental to the goals of education. Policies should ensure that the use of AI tools does not create unfair advantages for students with access to more advanced or costly technology and provide alternative assignments or accommodations for students who may not have equal access to AI tools.
Attribution
Policies should include institutional or course positions on the attribution of generative AI. Unless a particular course or assignment already stipulates the use of AI, policies should require students to make clear whether AI resources have been used, and in what ways their use has contributed to the final written product. Standardised methods for students to attribute AI assistance in their work, such as a specified format or statement, should be developed.
Education and Monitoring
Policies should include provisions for educating students and faculty about the proper use of AI in writing and the consequences of misuse. Regular review and updating of policies should be conducted to keep pace with the evolving landscape of AI in writing, and feedback from students and faculty should be gathered to assess the effectiveness of the incentives and consequences outlined in the policies.
Acknowledgement
This document has been written by the members of the EATAW board 2024. An early draft was shared with the following persons for feedback and comments: Annette Vee, Otto Kruse, Heidi McKee, Douglas Eyman, Julia Molinari and Mike Sharples. We wish to thank and acknowledge their valuable feedback, comments, and suggestions. The present version has incorporated many of their suggestions to reflect EATAW’s current position.
References
Bazerman, C. (2018). What do humans do best? Developing communicative humans in the changing socio-cyborgian landscape. In S. Logan, & W. Slater (Eds.), Perspectives on academic and professional writing in an age of accountability (pp. 187- 203). Southern Illinois University Press.
Borgman, C. L. (2007). Scholarship in the digital age: Information, infrastructure, and the Internet. MIT Press.
Cieliebak, M., Drewek, A., Jakob Grob, K., Kruse, O., Mlynchyk, K., Rapp, C., & Waller, G. (2023). Generative KI beim Verfassen von Bachelorarbeiten: Ergebnisse einer Studierendenbefragung im Juli 2023 [Generative AI in the writing of bachelor theses: Results of a student survey in July 2023]. ZHAW Zurich University of Applied Sciences. https://doi.org/10.21256/zhaw-2491
Gouseti, A. (2017). Exploring doctoral students’ use of digital technologies: what do they use them for and why? Educational Review, 69(5), 638-654. https://doi.org/10.1080/00131911.2017.1291492
Gudanowska, A. E. (2016). Technology mapping – Proposal of a method of technology analysis in foresight studies. Verslas: Teorija ir praktika / Business: Theory & Practice, 17(3), 243-250. http://dx.doi.org/10.3846/btp.2016.774
UNESCO. (2018). A Global Framework of Reference on Digital Literacy Skills for Indicator 4.4.2. Information Paper No. 51. UIS/2018/ICT/IP/51. https://uis.unesco.org/sites/default/files/documents/ip51-global-framework-reference-digital-literacy-skills-2018-en.pdf
Weller, M. (2011). The digital scholar: How technology is transforming scholarly practice. Bloomsbury.