Policy on the use of Artificial Intelligence

1. Definition and scope

1.1 Artificial Intelligence (AI) is technology that enables a computer to think or act in a more ‘human’ way. It does this by taking in data and deciding its response based on algorithms.

1.2 This policy refers specifically to generative AI, which the Department for Education (2023) defines as:

“Technology that can be used to create new content based on large volumes of data that models have been trained on. This can include audio, code, images, text, simulations, and videos.”

1.3 This policy draws upon advice from HM Government, the Department for Education (DfE), the Joint Council for Qualifications (JCQ), Advance HE, and academics based in UK and international higher education providers.

1.4 This policy applies to the use of AI by all employees and students at the College.

2. Principles

The following underlying principles have guided the procedures within this policy:

2.1 AI presents both opportunities and challenges for the education sector. The College will make the best use of opportunities, build trust, and mitigate challenges to protect integrity, safety and security.

2.2 AI tools can make tasks quicker and easier. They can generate routine information that would take a human much longer to produce. AI works within the parameters set for it by users, so users need to be skilled in asking effective questions.

2.3 Using AI tools can improve comprehension and retention of key concepts, reduce frustration, and motivate and engage users (Chen, Chen and Lin 2020; DfE 2023).

2.4 Having access to AI is not a substitute for having knowledge because humans cannot make the most of AI without knowledge to draw upon. We learn how to write good prompts for AI tools by writing clearly and understanding the subject; we sense check the results if we have a schema against which to compare them (University of Exeter 2023). AI is not a replacement for effective teaching, learning or professional development activities.

2.5 Information generated by AI is not always accurate or appropriate, so users need skills to verify, analyse, evaluate and adapt material produced by AI tools.

2.6 AI tends to be developed by a specific demographic; therefore, it could perpetuate a one-dimensional view. AI tools may not reflect cultural differences or a range of voices. Users need to be aware of this and of the potential for bias in AI output.

2.7 Personal and sensitive data entered into AI tools might be shared with unknown parties, posing a security risk and potential data breach.

3. Roles, Responsibilities and Procedures

3.1 Students

3.1.1 Students may use AI to support their studies, provided the text generated is:

• Checked for validity, accuracy, reliability and relevance.
• Free from bias or prejudice and used with integrity.
• Critically evaluated, like any other information source.
• Referenced correctly in-text and in final references.

In-Text Citations

3.1.2 The in-text citation must follow these rules:

• State who used the AI tool.
• Name the AI tool and the developer.
• State what question was asked, and any additional parameters set.
• State the year the question was asked/parameters set.
• Explain that the full response appears in an appendix, and state which one – ensure the appendix contains everything generated by the AI tool on this occasion.
• Evaluate the AI response.
• If text is taken directly from AI, quotation marks must be used. The text must be exact, including errors or use of American English.

3.1.3 In-text citation example 1:

When prompted by the author of this assignment, ChatGPT responded to the question, ‘What is a definition of academic integrity?’ with the following:

“An ethical code or set of principles that governs honest and responsible behavior.” (OpenAI ChatGPT 2023)

A copy of the full response can be found in Appendix 1.

This definition does not explain what that code is, or what those principles might be, so is generalised and of limited use.

3.1.4 In-text citation example 2:

The author’s tutor, Uzma Patel, used a different AI tool and specified that the definition should be specific to Higher Education settings. This returned the following response:

“Academic integrity in higher education refers to the ethical and moral framework that guides the behavior of students, faculty, researchers, and staff within colleges and universities.” (Google Bard 2023).

A copy of the full response can be found in Appendix 2.

This refers to frameworks, and who they apply to, but does not specify what those frameworks might contain, so requires further research to define.

3.1.5 Table 1 below contains analysis of examples used in paragraphs 3.1.3 and 3.1.4, to show how each part of the text in the examples meets the citation rules.

Table 1: Analysis of examples


Final Reference List

3.1.6 When compiling the final reference list, AI is treated as personal communication. The following information is required for Harvard style referencing of personal communication with AI:

• Name of AI tool and developer
• Year (in brackets)
• Medium of the communication
• Receiver of the communication
• Day and month of communication

3.1.7 Final reference list example 1:

OpenAI (2023) ChatGPT online response to Alex Radu, 2nd April.

3.1.8 Final reference list example 2:

Google Bard (2023) Bard online response to Uzma Patel, 3rd April.

3.1.9 If AI is used and not referenced, it will be treated as cheating under the College’s Academic Misconduct Policy. It is the student’s responsibility to ensure AI is correctly referenced and that the information gained from AI tools is accurate and used appropriately in the work submitted.

3.1.10 If there is an over-reliance on AI, without critical analysis or evaluation, the student will not be considered to have “independently met the marking criteria and therefore will not be rewarded” (JCQ 2023). It is the student’s responsibility to ensure the evidence submitted for assessment demonstrates that they have met the criteria independently of their use of AI.

3.2 Lecturers

3.2.1 Lecturers must teach students critical AI literacy so they have the skills to use it responsibly, ethically and appropriately. This supports students in preparing for workplaces which are constantly changing. Students must be able to use emerging technologies by understanding:

• benefits and limitations
• reliability and validity
• potential bias
• organisation and ranking of information on the internet
• online safety to protect against harmful or misleading content

3.2.2 The following are examples of strategies used by lecturers to encourage open and transparent use of AI by students:

• Making the AI policy, and students’ responsibilities under this policy, clear to them during induction, as well as throughout the duration of their programme.
• Encouraging students to use AI for feedback on their formative assessments, and then to discuss the value of the AI output with their peers. For example, to refine a research proposal and research questions.
• Asking students to critique and edit an AI-generated answer, solution, or translation.
• Openly modelling the ethical, appropriate and critically evaluative use of AI during their teaching, familiarising students with these tools.
• Asking students to reflect on the extent to which AI has been useful for a task/unit and the extent to which a human was needed.
• Using AI to analyse and draw conclusions from a data set, then discussing the strengths and weaknesses of the output.
• Getting AI to create experimental design and data collection for research, then comparing with students’ own approaches.
• Asking students to identify AI-generated answers, giving their justifications.
• Discussing AI hallucinations (where AI generates false information and presents it as fact), explaining why they might seem plausible.
• Setting an AI-generated artistic element, e.g. logo design, where students explain their choice of prompts.
• Getting AI to generate prompts or questions, if students get stuck on reflective logs.
• Asking AI to identify key themes in reflective logs and asking students to reflect on and respond to these themes.
• Asking students to include an AI-generated literature review and provide a critique.
• Asking students to post prompts for advice and solutions for simulations, with critique of results.
• Asking AI to create a structure for a report, paper, article or other written document.
• Writing clear assignment briefs that include analytical and evaluative use of AI in the tasks. Some examples are shown in Table 2 below:

Table 2: Examples of how to include AI in assignment briefs.


3.2.3 Lecturers must ensure they are aware of possible AI-related assessment issues and how to make assessment more resilient to avoid academic misconduct. Some examples are shown in Table 3 below.


3.2.4 Student submissions can be run through AI detectors, such as OpenAI Classifier, GPT Zero or GLTR, but these are not always accurate or reliable. They base their scores on the predictability of words and may give lower scores where text has been subsequently adapted. They should be used alongside other methods for checking authenticity in a holistic approach to academic misconduct.

3.2.5 Some indications that a submission may have been generated using AI include:

• use of American spelling, currency, terms and localisations
• use of language or vocabulary which might not be appropriate to the qualification level
• lack of direct quotations and/or references where these are required/expected
• lack of graphs/data tables/visual aids where these would normally be expected
• references which cannot be found or verified
• lack of reference to events occurring after a certain date
• incorrect/inconsistent use of first-person and third-person perspective
• difference in the language style used when compared to that used by a student in the classroom or in other previously submitted work
• overly verbose language
• submission of student work in a typed format, where their normal output is handwritten
• inclusion by students of warnings or provisos produced by AI to highlight the limits of its ability, or the hypothetical nature of its output
• unusual use of several concluding statements throughout the text, or several repetitions of an overarching essay structure within a single lengthy essay, which can be a result of AI being asked to produce an essay several times to add depth or variety
• use of non-sequiturs (lack of meaning relative to what was previously said)
• confidently incorrect statements within otherwise cohesive content
• lack of specific local or topical knowledge
• content of a generic nature rather than relating to the student themself, the task or scenario

3.2.6 Lecturers must make sure students understand that submission and declaration forms cover the use of AI in the evidence they have submitted. This should be pointed out during induction, with reminders at each assessment point during the course.

3.2.7 AI tools can be used in the production of learning resources, plans and documents, provided the following points are considered:

(i) Lecturers must carefully check their own AI-generated materials to protect students from potentially harmful, inaccurate or biased content.

(ii) In many cases, a given tool will not have been trained on the English curriculum and AI can only return results based on the dataset it has been trained on. Lecturers cannot assume that AI output will be comparable with a human-designed resource that has been developed in the context of the College’s curriculum.

(iii) The quality and content of the final document, plan or resource remains the professional responsibility of the lecturer who produces it, and the College.

3.2.8 Lecturers must not use AI tools to generate their summative assessment feedback to learners. Effective feedback is motivational, specific, developmental and personalised for each learner by the lecturer. AI cannot do this as it does not know individual students like a human does.

3.2.9 AI can be used to give instant feedback to students on formative assessment tasks, e.g. online quizzes.

3.2.10 To protect students and staff, personal and sensitive data must never be entered into AI tools. Doing so would be a breach of GDPR.

3.2.11 If a lecturer believes AI has been used without being credited as a source of information, the Academic Misconduct Policy should be followed. The lecturer must report it as a suspected case of cheating to the Higher Education Manager and the Lead IV for further investigation.

3.2.12 If there is over-reliance on AI to the extent that the lecturer decides the student has not independently demonstrated the assessment criteria, the work submitted will not be awarded a pass and should be referred for resubmission. The lecturer’s feedback must clearly explain how the use of AI contributed to the referral, so the student is aware of how to improve their use of AI in future.

3.3 Programme Leaders

3.3.1 Programme leaders need to monitor induction activities, learning resources, plans and documents produced by lecturers using AI, for appropriateness and accuracy. They need to ensure lecturers are following the most recent version of the policy and are aware of their responsibilities.

3.3.2 Use of AI should be included on the agenda for regular discussion at Programme Team Meetings to support a collaborative approach to ethical use of AI.

3.3.3 If a need for Professional Development relating to AI amongst team members is identified, Programme Leaders must notify the Quality Manager and Principal so this can be arranged.

3.3.4 Use of AI must be included in onboarding processes. Programme Leaders must also ensure their team members have undertaken mandatory GDPR training and updates.

3.3.5 Where cases of cheating by using AI are suspected, Programme Leaders should advise lecturers in their team and ensure the Higher Education Manager and Lead IV are aware of each case, supporting the resulting investigation where necessary.

3.4 Internal Verifiers

3.4.1 IVs must be aware of all issues relating to use of AI above, so they can support high quality, ethical assessment processes and consistent practice in the College. Monitoring the appropriate use of AI in assessment is an important part of the IV process.

3.4.2 The Lead IV, along with the Higher Education Manager, will investigate and recommend outcomes for any breaches of the Academic Misconduct Policy that involve AI.

3.5 All Employees

3.5.1 All employees need to be vigilant with regard to cyber security, particularly as AI could increase the sophistication and credibility of attacks (DfE 2023).

3.5.2 Employees may use AI in their own work, provided:

• No private or sensitive data is entered into AI tools
• AI tools are credited and referenced correctly (see paragraphs 3.1.2 to 3.1.8)

3.5.3 Any employee who suspects AI has been used by students inappropriately should report this to the Higher Education Manager and Lead IV for further investigation.

4. References

Acar, O.A. (2023) Are Your Students Ready for AI? A 4-step framework to prepare learners for a ChatGPT world. Harvard Business Publishing: Education, 15 June 2023. Available at https://hbsp.harvard.edu/inspiring-minds/are-your-students-ready-for-ai? [Accessed 9th October 2023]

Chen, L., Chen, P. and Lin, Z. (2020) Artificial Intelligence in Education: A Review. IEEE Access, 17 April 2020, vol. 8, pp. 75264-75278. Available at https://ieeeaccess.ieee.org/featuredarticles/ai_in_education_review/ [Accessed 6th October 2023]

DfE (2023) Generative artificial intelligence in education. Available at
https://www.gov.uk/government/publications/generative-artificial-intelligence-in-education [Accessed 7th October 2023]

JCQ (2023) AI Use in Assessments: Protecting the Integrity of Qualifications. Available at https://www.jcq.org.uk/exams-office/malpractice/artificial-intelligence/ [Accessed 7th October 2023]

University of Exeter (2023) AI and Assessment Matrix. Available at https://s3.eu-west2.amazonaws.com/assets.creode.advancehe-document-manager/documents/advancehe/AI%20and%20Assessment%20matrix_1693985641.pdf [Accessed 7th October 2023]