HAN UAS has a task force on AI use.
The group has formulated guidelines for the use of AI (in Dutch): Framework_gebruik_AI_binnen_de_HAN-juni-2024.pdf
The use of AI tools in education raises questions about the reliability and validity of existing assessment methods, teaching methods, and the learning process of students as a whole. If a student uses an AI tool to (partly) write an essay, for example, how do you as a teacher and assessor (or indeed the students themselves) have sufficient insight into the extent of their knowledge and skills?
Unauthorized use of texts produced by others is not something new (e.g., ghostwriters, friends, parents, fellow students). What is new is how easy it is to produce texts that appear very convincing using AI tools such as ChatGPT, which can do so for free or at relatively low cost. Students can use AI tools to quickly gather information and produce long and impressive texts. It is very hard to detect whether a text was wholly or partly produced by such a tool.
As there are many differences between tools, academies, degree programmes, and courses, there is not a one-size-fits-all solution and different or additional rules may apply. At the central level, further information and knowledge are disseminated, for instance via the HAN Study Centre and the AI Taskforce.
It is up to the programme management and the Board of Examiners of each degree programme to determine whether the learning outcomes are still up-to-date and correctly formulated and if assessment needs to be adapted. That being said, in general it can be stated that:
Unauthorized use of AI tools by students that hinders the assessment of a learning goal can be considered fraud and a breach of academic integrity, covered by existing definitions. Fraud always needs to be determined by the Board of Examiners.
If the use of AI tools is allowed in a course, the conditions (for instance when acknowledgements are necessary) need to be shared and clarified beforehand to prevent possible misunderstanding by students.
The easy access to AI tools may lead to changes in education. Possible changes include building in more safeguards or adapting the structure, organization, or method of assessment. Changes can only be made under the conditions that A) the adopted learning outcomes continue to be achieved after the change of format, and B) the changes are communicated to students in time and in accordance with the Rules and Regulations of the Board of Examiners and the TER.
If contracts and processing agreements with the owner of a third party tool are not in place (such as is the case with OpenAI/ChatGPT), the use of a tool may not be made mandatory for students.
Several measures against the use of AI can also be taken; suggestions are described below. However, please note that these have important limitations and complications.
The unauthorized use of AI tools by students (or indeed the outsourcing of any form of work to a third party) that hinders the assessment of learning outcomes is an act of fraud and a breach of academic integrity. This needs to be determined by the Board of Examiners of the degree programme. In the Student Statute / OER this is already covered by the current section on cheating and plagiarism:
Cheating is an act or omission by a student that partly or wholly hinders the forming of a correct assessment of their – or another’s – knowledge, understanding, and skills.
Cheating also includes plagiarism, which means copying someone else’s or your own work without correctly acknowledging the source.
The assessment of theses and written assignments requires a plagiarism check to be performed, by means of a plagiarism scanner accessed by the University. Students are individually responsible for maintaining academic integrity.
If a student cheats, the Board of Examiners may exclude that student from participation in one or more examinations or final assessments for a maximum of one year.
In the event of serious cheating, the Board of Examiners can ask the Board of the University to definitively terminate the student’s registration in the degree programme.
The Board of Examiners sets out its course of action in the event of cheating in its Rules and Regulations.
Sanctions can only be imposed by the Board of Examiners, and only after it has been established indisputably that fraud has been committed. Before any action can be taken in this regard, it must be determined whether an AI tool has been used without authorization. In the process of doing so it is important that all parties involved have the opportunity to express their views.
Is it possible to recognize the use of AI tools? Not that easily, especially as AI tools continue to be developed and produce ever more convincing output. The plagiarism scanner available through Brightspace does not properly detect texts created by AI tools, as it was not designed to do so. Some detectors that estimate whether a text was created with AI are available online, but it is not transparent how these detectors work, and therefore whether their estimates are accurate. These detectors function as so-called black boxes and can produce false positives, making it hard to prove that a student has used AI.
One could also ask an AI tool itself whether a text is an AI product, but such answers are not reliable.
There are some telltale signs that AI tools may have been used in a writing assignment, but be aware that the output of AI will continue to become more convincing and that these signs do not constitute a foolproof detection method:
Factual errors: as AI tools are language models that calculate the next probable word based on preceding words, they can make factual errors;
Wrong or absent references: some chat AIs provide sources with their answers. The quality differs from one AI tool to the next. ChatGPT has been known to make up references and quotes. Bing Chat (which currently runs on GPT-4), however, lists links to the webpages used as sources in its output. Google’s Bard is said to list sources sometimes, but not always.
Statements that are very different from the material discussed in class;
Writing style: language models such as ChatGPT tend to generate linear sentences and pick broad, obvious words instead of the occasionally narrower vocabulary that a student would use. Language models also tend to use recurring patterns in their writing, for example in the way multiple sentences in a piece of text are structured. However, these models keep improving and can therefore mimic natural language better and better.
Outdated information: ChatGPT’s training data has a cut-off date, so recent developments and sources may be missing or described incorrectly.
Some of the tech giants, such as Google and Microsoft, have announced that images and videos created by their AI tools will be watermarked in the near future. However, verifiable digital watermarks for texts created by AI tools do not yet exist and are likely to be quickly bypassed once they are introduced. Historical races between creation and detection suggest that detection is always one or more steps behind.
The risks for unacknowledged use of AI by students seem highest for unsupervised take-home exams and essays and thus may have the most impact on assessment in those academic fields, degree programmes and courses where such assessment methods are a dominant part of the curriculum. Measures taken against the use of AI should always depend on the context of the degree programme, the impact on achieving the stated learning goals, and the existence of other assessment methods.
The following suggestions for short-term measures to safeguard assessment are based on external sources as well as various discussions and presentations within the UG. The list focuses on unauthorized use of AI tools such as chat AIs. Please note: changes in content/topics and assessment methods can only be made under the conditions that A) the adopted learning outcomes continue to be achieved after the change of format, and B) the changes are communicated to students in time and in accordance with the Rules and Regulations of the Board of Examiners. Moreover, some of the suggested measures can have a high impact on the workload of teachers.
Be clear to students up front that the use of AI tools is not allowed. For example: ‘AI-based writing tools cannot be used to actually write the final version of this assignment’. Students may not always realize that the use of AI tools is considered cheating.
Discuss academic responsibility, academic integrity and possible negative consequences of using tools (factual mistakes, student is always responsible for the submitted work, unacknowledged use may constitute fraud) with your students. Also, explain to your students the relevance for their future careers of the skills they need to learn and are being assessed on. Explain that using AI tools may interfere with the learning process.
Pick topics that are as specific as possible, and personalize them and/or connect them to specific contexts. At the moment it is still difficult for most AI tools to generalize existing knowledge to a specific context. This does not, however, apply to all fields of education, and AI tools are quickly improving in this respect.
Require verifiable sources and quotations.
Use more continuous assessment, including intermediate products and personal reflection by the student as an extra assignment. Ask for documentation of the writing process (logbooks, mind maps, track changes).
Create assessments of critical thinking, problem-solving, and communication skills. Ask students about argumentation and the process. Instead of writing an essay, let students engage in group discussions, presentations and other interactive activities that involve the application of knowledge and skills.
Use direct supervision during assessments and/or restrict (digital) learning environments. On computers used for assessment in the UG Aletta Jacobs hall, access to the internet is restricted. Students do not have access to non-authorized internet pages in the digital assessment environment and therefore cannot use AI tools during these exams. It is possible to allow the use of certain (AI) websites during digital exams; however, the UG cannot yet solve capacity problems of external websites, so availability during an exam can be unreliable. Applications such as GPT4All are being developed that allow individuals to run language models locally.
Monitor the individual writing process of students.
Apply peer review by students to let them engage in peer learning. Note: only examiners have official assessment authority.
Check for factual mistakes, correct use of references, and logical structures in the works of students. Certain recurring patterns may indicate the use of an AI tool.
Carry out extra (random) oral summative assessments and formative feedback.
Changing assessment to another frequency (more intermediate assignments and checks, including self-reflection on the writing process) or a variety of methods (presentations, oral exams, podcasts, et cetera) should always align with the teaching and learning goals of the course. Changes in methods could improve the quality of assessment, but could also have major implications for time requirements and work pressure for both teachers and students. A statement about changing the method of assessment needs to be present in the Teaching and Examination Regulations (OER) of the degree programme. As stated in the UG Model OER, “In situations of force majeure, where it is not reasonably possible to conduct examinations in the manner indicated in Ocasys, it is possible to temporarily switch to another format of teaching and examination. This is also subject to the condition that the adopted learning outcomes continue to be achieved after the change of format.”
The method of assessment must be communicated to students in time and in accordance with the Rules and Regulations of the Board of Examiners. The UG Model Rules and Regulations states that this must be done no less than 4 weeks before the assessment moment, via Ocasys. Information on which resources (including AI), if any, are allowed during the assessment must also be communicated to students in time (not necessarily via Ocasys).
While AI tools can aid the learning process, their use can also reduce opportunities for personal interaction. In turn, relying too heavily on AI-powered educational tools may negatively impact students’ sense of support.
Providing questions/prompts to AI tools can create privacy risks for students and teachers, as the party behind the tool can access and use the submitted texts and may store prompts and other information provided by users. For many AI tools it is unclear how (and which) data is stored by the company behind the tool. For that reason, processing personal data and research data with third-party AI tools is advised against for the time being, until the requirements of the GDPR (AVG) can be demonstrably met. A data processing agreement (‘Verwerkersovereenkomst’) needs to be in place.
The intellectual property/legal ownership of AI-generated content is a topic that is currently being debated. As soon as there is more clarity on this topic, this page will be updated accordingly.
Although there is currently no specific AI regulation, in general the HAN is bound by the General Data Protection Regulation (GDPR/AVG), which applies to any operation with personal data. Meanwhile, the European Commission has also proposed a legal framework for AI which addresses the risks of AI (Artificial Intelligence Act; AIA). More regulation is therefore to be expected.
When you do choose to use an AI tool, it is also important for teachers and students to be aware of potential biases of many types, such as racist, sexist, or political biases. These can amplify real-world disparities and reinforce stereotypes. Using AI could potentially reduce cultural and linguistic diversity, which can also lead to the systematic marginalization of less-represented groups. ChatGPT, for example, has mainly been trained on English data and data written in a few other languages, meaning that source materials in English are overrepresented. In general, the output of AI always needs to be checked (‘human-in-the-loop’), as it can have a steering effect on information and (self-)learning processes.
Selected AI-Based Literature Review Tools
The 4,000+ results can be sorted by Fields of Study, Date Range, Author, and Journals & Conferences.
Over 200 million scholarly documents / peer-reviewed / subjects ranging from the medical sciences and physics to the social sciences and economics.
Prompt “write me a paragraph about the impact of climate change on GDP with citations“
Increased risk of sudden infant death syndrome (SIDS)
Increased risk of premature birth
Increased risk of low birth weight
Increased risk of respiratory problems in newborns
Increased risk of respiratory problems in infants exposed to secondhand smoke
Increased risk of developing asthma and other respiratory illnesses later in life for infants exposed to secondhand smoke
[Note: ChatGPT may generate non-existent references or false knowledge. To find out why Large Language Models hallucinate, check out the Wikipedia article Hallucination (artificial intelligence) and the blog post “A Gentle Introduction to Hallucinations in Large Language Models” by Adrian Tam.]
Infant death
Neonatal mortality (referring specifically to deaths within the first 28 days of life)
Perinatal mortality (referring to deaths occurring during the period from 22 completed weeks of gestation to 7 completed days after birth)
Early childhood mortality (referring to deaths occurring within the first five years of life)
Child mortality (referring to deaths occurring before the age of 18)
[Press the Regenerate button to get more.]
(("Infant Death"[Mesh] OR "Sudden Infant Death Syndrome"[Mesh] OR "Infant Mortality"[Mesh] OR "Perinatal Mortality"[Mesh] OR "Neonatal Mortality"[Mesh] OR "Fetal Mortality"[Mesh] OR "Infant Death" OR "Sudden Infant Death Syndrome" OR "Infant Mortality" OR "Perinatal Mortality" OR "Neonatal Mortality" OR "Fetal Mortality") AND ("Risk Factors"[Mesh] OR "Etiology"[Mesh] OR "Prevention and Control"[Mesh] OR "Interventions"[Mesh] OR "Public Health"[Mesh] OR "Health Disparities"[Mesh] OR "Healthcare Disparities"[Mesh] OR "Poverty"[Mesh] OR "Malnutrition"[Mesh] OR "Infection"[Mesh] OR "Congenital Abnormalities"[Mesh] OR "Prematurity"[Mesh] OR "Low Birth Weight"[Mesh] OR "Maternal Education"[Mesh] OR "Maternal Health"[Mesh] OR "Risk Factors" OR "Etiology" OR "Prevention and Control" OR "Interventions" OR "Public Health" OR "Health Disparities" OR "Healthcare Disparities" OR "Poverty" OR "Malnutrition" OR "Infection" OR "Congenital Abnormalities" OR "Prematurity" OR "Low Birth Weight" OR "Maternal Education" OR "Maternal Health"))
Comments and Suggestions:
#1 - This search string may not work on all database platforms. For example, Medline (Ovid) may not be able to run it, and Medline & CINAHL (EBSCO) can run it without any problems, while PubMed will run it but will indicate which MeSH terms are not indexed. See PubMed's Search History and Details under Advanced.
#2 -This is just one possible combination. You may need to adjust it based on your task and the databases you are using.
#3 - It might be easier to break the task into three parts: ask for synonyms > ask for thesaurus terms > ask it to combine them (the sketch below shows what the combined result looks like).
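To make the “combine” step in #3 concrete, here is a minimal Python sketch (purely illustrative, not part of any database platform or AI tool) that joins thesaurus terms and free-text synonyms for one concept into a single OR-block like those in the example search string above:

# Illustrative only: combine MeSH terms and free-text synonyms for one concept
# into a single OR-block, as in the example search string above.
mesh_terms = ['"Infant Death"[Mesh]', '"Infant Mortality"[Mesh]', '"Perinatal Mortality"[Mesh]']
free_text = ['Infant Death', 'Infant Mortality', 'Perinatal Mortality']

quoted_free_text = [f'"{term}"' for term in free_text]
concept_block = '(' + ' OR '.join(mesh_terms + quoted_free_text) + ')'
print(concept_block)
# ("Infant Death"[Mesh] OR "Infant Mortality"[Mesh] OR "Perinatal Mortality"[Mesh] OR "Infant Death" OR "Infant Mortality" OR "Perinatal Mortality")

Blocks for different concepts can then be combined with AND, as in the full example above.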
See other Comparison Examples under Bing AI.
I am going to upload a PDF file. Please extract its Research Objectives, Hypotheses, Methodology and findings for me, and then save it into a text file.
ChatGPT-4o: Yes, you can upload the PDF... The uploaded document appears to be a paper titled "......" I'll begin by searching for and extracting the relevant sections. Extracting sections: Research Objectives and Hypotheses, Methodology, Results and Findings
Let me start with the extraction. The information has been saved into a text file. You can download it using the link below: Download Extracted_Info.txt
Bing Chat / Copilot
https://www.bing.com/images/create
See HOW TO WRITE PROMPTS FOR IMAGE GENERATION on the left-side panel about how to do it.
- Please create some images of data extraction on a white background
(Note: Generated using Bing Chat opened in Microsoft Edge. Each of the images comes with its link for sharing or web publishing. You can also download them. How to cite or credit the images? See Bing's recommendations: "The images I generate are original creations and do not require citation. However, if you would like to give credit, you can simply mention that the image was generated by Bing.")
The ERIC Thesaurus contains nearly 12,000 terms, including more than 4,500 descriptors and 7,000 synonyms. You can use the term “Academic Achievement” or its synonyms such as “Educational Achievement”, “Student Achievement”, “Academic Performance”, “Academic Success”, “Learning Achievement” and more.
[Note: The follow-up responses include some additional sources. See two below:
https://www.powerthesaurus.org/academic_achievement
https://eric.ed.gov/?ti=Achievement+Gap including https://eric.ed.gov/?ti=Academic+Achievement]
My topic is about psychology. Which grey literature databases should I use?
ChatGPT
PsycEXTRA: This database is produced by the American Psychological Association (APA) and includes grey literature such as conference papers, white papers, and government reports.
OpenGrey: This database provides access to grey literature produced in Europe, including reports, theses, and conference papers
Grey Literature Report: This database is produced by The New York Academy of Medicine and includes reports, conference papers, and other grey literature on health and biomedical topics, including psychology.
ProQuest Dissertations and Theses Global: This database provides access to graduate-level research, including theses and dissertations related to psychology.
Google Scholar: While not exclusively a grey literature database, Google Scholar can be a useful tool for finding conference papers and other grey literature related to psychology.
National Technical Information Service (NTIS): This database provides access to technical reports produced by U.S. government agencies, including those related to psychology.
Bing AI - [asked in the Precise mode]
For psychology, some recommended grey literature databases include PsycEXTRA and PsyArXiv. PsycEXTRA is a database of grey literature material relating to psychology, behavioral sciences and health. PsyArXiv is a psychology archive designed to facilitate rapid dissemination of psychological research.
What other models have been developed based on SERVQUAL?
ChatGPT
SERVPERF: This is a modified version of SERVQUAL that measures service performance instead of service quality. It was developed by Cronin and Taylor (1992) and is based on the same five dimensions as SERVQUAL.
SERVPERF-GAP: This model, developed by Caruana and Ewing (2006), extends the SERVQUAL model to include three additional gaps that can occur when customers evaluate service quality.
E-SERVQUAL: This model, developed by Parasuraman et al. (2005), adapts the SERVQUAL scale to measure the quality of online services.
SERVCO: This is a modified version of SERVQUAL that includes a sixth dimension, "cost," in addition to the five original dimensions. It was developed by Sureshchandar et al. (2002) to address the importance of cost in service quality evaluation.
SERVQ-HC: This is a modified version of SERVQUAL that is specifically designed for evaluating the quality of health care services. It was developed by Cronin et al. (2000) and includes 20 items across five dimensions.
Bing AI
The Nordic model of service quality.
The hierarchical model of service quality.
The American Customer Satisfaction Index (ACSI).
Bing AI Sidebar - CHAT, THINK, & WRITE
STEP 1
Prompt: As a research assistant specializing in renewable energy, your task is to locate comparable research studies based on the provided abstract from an article: "This paper discusses the perspective of renewable energy (wind, solar, wave and biomass) in the making of strategies for a sustainable development. Such strategies typically involve three major technological changes: energy savings on the demand side, efficiency improvements in the energy production, and replacement of fossil fuels by various sources of renewable energy."
Bing Chat then found 3 documents: the ScienceDirect paper where the abstract came from and two other documents from un.org and frontiersin.org respectively.
STEP 2
Prompt: Please synthesize the findings of the three studies provided above.
Bing Chat: "Sure! Here is a synthesis of the findings from the three studies:
All three studies emphasize the importance of renewable energy in achieving sustainable development and mitigating climate change. They all discuss various strategies and actions that can be taken to increase the use of renewable energy sources and reduce reliance on fossil fuels.
Henrik Lund’s paper discusses the perspective of renewable energy (wind, solar, wave and biomass) in the making of strategies for a sustainable development. Such strategies typically involve three major technological changes: energy savings on the demand side, efficiency improvements in the energy production, and .....
The United Nations outlines five critical actions the world needs to prioritize now to transform our energy systems and speed up the shift to renewable energy: Make renewable energy technology a global public good, Improve global access to components and raw materials......
“Sustainable Energy Transition for Renewable and Low Carbon Grid” discusses that a sustainable global electricity transition will entail increased use of renewable energy sources particularly wind and solar, nuclear energy as a low carbon energy source, electrification of transport and.......
In summary, these studies suggest that transitioning to renewable energy sources is crucial for sustainable development and mitigating climate change. This can be achieved through various strategies such as increasing energy efficiency, improving access to renewable technology and materials, leveling the playing field for renewables, ending fossil fuel subsidies, and redirecting investments towards renewables."
Example #2
You are an expert researcher, given the following article, please discuss its key findings. Reference: How Large Language Models are Transforming Machine-Paraphrased Plagiarism at https://arxiv.org/abs/2210.03568
This is another way to provide reference text. As a side note, specifying a role or persona is important, as it can affect the level of the responses obtained. Just imagine if the designated role were "You are a high school student...."
Example #3
Please read this article and summarize it for me - "Who Should I Trust: AI or Myself? Leveraging Human and AI Correctness Likelihood to Promote Appropriate Trust in AI-Assisted Decision-Making" and then find 5 similar studies and please also summarize each for me.
Daily paper digest /
Conference papers digest /
Best paper digest /
Topic tracking.
In Account, enter the subject areas you are interested in. The Daily Digest will then present studies based on your interests.
Although this can differ from one programme to another, in general terms it can be stated that AI tools such as chat AIs can help students learn basic concepts and thereby free up time to help them address the more complex cognitive levels.
That being said, it is difficult to give general advice on the use of AI tools, as all faculties and programmes run into different challenges and opportunities.
So the first step is to explore the tools by trying them out (with your students) and identifying what the tools are good at (writing drafts in different styles) and what they are bad at (sourcing, argumentation). AI tools are here to stay and will be part of students’ future workplaces as well.
One of the tasks of educators is to prepare students for this. Therefore, it would be good to explore the possibilities and limitations of AI use together with your students and have conversations about the use of AI tools and academic responsibility with them. For example, discuss with your students how AI tools can be used during the writing of an assignment for looking up information or proofreading, but not for actually writing the entire final version of the assignment. Once this is clear you can look for ways to leverage the benefits of the tool(s), and simultaneously make students aware of the downsides and how to overcome them. Also address the importance of good prompting with your students, as this is essential to get the most out of AI tools.
Being able to construct the input you give to an AI tool in such a way that its output is close to what you are looking for (i.e., prompting or ‘prompt engineering’) is essential for effective use of these tools. If there is a large likelihood that students will be using prompting skills in their work after completing their academic career, it may be desirable to include the learning of prompt engineering in the curriculum. But even if students are merely using AI tools as an aid during their studies, good prompt engineering is crucial to producing high-quality output from AI writing tools, as it ensures that the content generated by the tool is relevant, coherent, and in line with the desired output.
By asking well-crafted questions, students can better understand concepts, develop logical reasoning skills, and learn to express their ideas clearly. Effective prompting also helps students to become more engaged in their learning, and can lead to a deeper understanding of the materials they are working with.
The following guidelines will help you write effective prompts for Chat AI to get better output from them. Be aware that the study of the effectiveness of prompt engineering is still a developing field of science.
Be specific about what you want the AI writing tool to generate. For example, you can tell the AI writing tool which aspects it should focus on, what format and style to write in, who the intended audience is, and what the length of your desired output should be. By adjusting the prompt you can generate different results.
Be clear and specific about what you want the AI writing tool to cover. Provide it with a list of points or topics that you want it to address in the output. You can also add constraints to the prompt, such as the length of the text or the writing style. A longer prompt that includes clarity, specifications, and context gives the tool more to work with and leads to better output than a shorter prompt.
Be simple and precise in your language. Avoid using words or phrases that are too technical, vague, or ambiguous. This also includes avoiding the use of jargon. Use common and concrete terms that the AI can easily interpret and follow.
Be direct in your cues. For example, you can use symbols, numbers, bullet points, brackets or quotation marks to indicate what you want the AI to do or generate.
Be proactive and preventative in your communication. Anticipate potential problems or misunderstandings that the AI might encounter and give it pre-corrections and prompts to guide it. For example, you can tell the AI what to do or not to do before it starts generating, or remind it of the rules or criteria during the generation process.
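As a purely illustrative example that applies these guidelines (the topic, audience, format, and length are made up here), a prompt could read: ‘Summarize the abstract below for first-year nursing students. Use plain language, avoid jargon, keep it under 150 words, and present it as three bullet points. Do not add information that is not in the abstract.’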
You can further enhance your prompt by using the ‘few shot standard prompts’ approach: mention the task description, provide an example of an answer, followed by the prompt. Be sure to use delimiters in your prompt here. The AI writing tool will then answer in a similar way as you showed in the example you provided. By doing this you are essentially modeling the type of output you would like to receive from the tool.
Example:
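A minimal sketch of a few-shot prompt of this kind, assuming the OpenAI Python SDK (openai >= 1.0); the task, example sentences, and model name are purely illustrative:

# Sketch of a 'few shot standard prompt': a task description, an example answer,
# and then the actual prompt, separated by ### delimiters. Illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

few_shot_prompt = (
    "Task: rewrite the sentence between ### in a formal academic tone.\n"
    "###This study is pretty cool and shows some stuff about memory.###\n"
    "Answer: This study presents noteworthy findings on memory.\n"
    "###Our survey got loads of answers, which is nice.###\n"
    "Answer:"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response.choices[0].message.content)

The same prompt text can of course be pasted directly into a chat interface; the point is the structure (task description, worked example, delimiters, then the new input).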
‘Role prompting’ is another way in which output can be enhanced. In it you tell the AI writing tool which particular perspective or point of view you want it to assume when creating the output. This is especially useful when you are generating a text that needs to be written with a certain tone or style that is appropriate for a particular role you fulfill. For example, ask the AI tool to act as a writer, teacher, HR advisor, etc.
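For instance (purely illustrative): ‘You are an experienced HR advisor. Write a concise, formal reply to an employee who asks how parental leave is arranged, in a supportive tone.’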
Another method you can use is ‘chain-of-thought prompting’. Here you provide a few-shot example in which the reasoning behind the answer is written out within the example itself. In this way, the reasoning process will also be shown when the prompt is answered. This type of prompting is especially useful in arithmetic and symbolic reasoning tasks. Prompting an AI tool to reveal and follow a ‘chain of thought’ can even improve its output.
Example (figure from: Wei, J., Wang, X., Schuurmans, D., Bosma, M., Chi, E., Le, Q., & Zhou, D. (2022). Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903).
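A paraphrased version of the kind of exemplar shown in that figure (the arithmetic content is illustrative):
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. 5 + 6 = 11. The answer is 11.
Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?
A: (the model is now expected to reason step by step: 23 - 20 = 3, 3 + 6 = 9. The answer is 9.)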
There are many tools that can be used in the different types of courses offered in higher education. Below you will find an overview of one course (a thesis project or similar) where it is outlined which different tools can be used at different points throughout this type of course. Note, however, that this is just to give you an example of how and when these tools can be used and not to say that all (or any) tools have to be used in such a course.
Finding literature: students can use Elicit to find relevant papers. It even works without a perfect keyword match, meaning that it does not only search for the literal words that are provided as input. The tool can also extract and summarize key information and takeaways from the papers that are specific to the research question. Another tool that could be used is Scispace. This tool can aid in understanding scientific literature. The user can highlight confusing text, math, and tables to get a simple explanation, can ask follow-up questions and get instant answers. A third tool that is useful in this part of the process is Connected Papers. This AI tool helps analyze scientific papers and can visualize their relationships. It can also show citation maps and connections between papers.
Analyzing data: If students are, for example, conducting interviews and need to transcribe the text to be able to analyze it, an audio-to-text tool they can use is MacWhisper. This tool automatically transcribes audio files into text. An additional benefit is that it runs locally, so potential data protection risks are minimized. If students want to convert written text into spoken form, text-to-audio tools that can be used are: Eleven Labs and Uberduck.
AI tools can also be used when analyzing data with a programming language such as R or Python. By describing the analysis you want to perform to a chat AI, it can provide you with the code necessary to execute that analysis, as illustrated in the sketch below.
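For instance, asking a chat AI to compare the average scores of two groups in a data file might yield code along these lines (a sketch only; the file name and column names are hypothetical, and the student remains responsible for checking that the generated code actually does what was asked):

# Sketch of the kind of code a chat AI might return for a simple two-group comparison.
# File name and column names are hypothetical.
import pandas as pd
from scipy import stats

data = pd.read_csv("survey_results.csv")        # hypothetical data file
group_a = data[data["group"] == "A"]["score"]   # scores of group A
group_b = data[data["group"] == "B"]["score"]   # scores of group B

# Welch's t-test (does not assume equal variances)
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")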
Writing: A student can use a chat AI when (re-)formulating a research question. For example, they could present the different topics and their relationship that should be included and engage in a conversation to refine the question.
If a student wants to illustrate a certain topic, text-to-image tools that can be used are DALL-E 2 and Midjourney. These tools can create unique and realistic images based on text prompts written in natural language. These tools are not useful for graphs, tables, or images that include language, but can be used for more general illustrations to visualize concepts.
When writing the thesis, students could use AI tools to aid in this process. For example, when reporting statistical information in the results section, a chat AI could be used to present this information in the correct format (e.g., APA style). In addition, the Sudowrite tool could be used to find the right formulation and tone for a piece of text.
There are several tools that can be used to paraphrase (Quillbot) or summarize (Bearly) long texts. Glasp is a tool that generates summaries based on users’ notes and highlights, whereas Perplexity is an AI-powered search engine that provides short answers with citations.
Presenting: A student can use AI tools to make a presentation for the purposes of presenting their results. The tools mentioned under ‘writing’ can also be used here to generate the content of the presentation. For example, Quillbot or Bearly can be used to paraphrase or summarize long texts to distill bullet points. Furthermore, text-to-image tools such as DALL-E 2 and Midjourney can be used to create visuals for the presentation. Finally, to create the presentation itself, a tool such as Gamma can be used to make and refine a visually pleasing presentation.
Disclaimer: It should be stressed that for a lot of AI tools it is unclear how (and which) data is stored by the company behind the tool. As has always been the case, the user (teacher, student, researcher) is responsible for critically choosing tools to work with. It is advised against processing personal data, including research data, with third-party AI tools for the time being, until the requirements of the GDPR (AVG) can be demonstrably met. Privacy is something more than just a 'concern' in this respect, as you have obligations under the law. Open source and/or UG/local tools are sometimes already available.
HAN therefore prefers Copilot as a tool, as it has a number of safeguards in place.
AI tools are becoming increasingly sophisticated and are being used for a wide range of applications. Below you can find just a few examples that can be relevant to (higher) education. Be aware that the UG has no licenses for any of these tools (yet), and also be careful which information you share or use within the tool.
ChatGPT: can generate texts on a wide variety of topics based on user prompts and has risen in popularity
Claude: can help with writing tasks such as summarizing, searching, answering questions and coding.
Ivy Chatbot: An AI-powered chatbot that can answer student questions and provide support
Lex: generates essays
Bing (runs on GPT-4): search engine / chat AI
Talk to Books: a search engine with a large curated database of books that returns quotes based on user prompts
Sheet Plus: converts text to Google Sheets or Excel formulas
Bearly: summarizes long texts
Glasp: generates summaries based on users’ notes and highlights
Perplexity: an AI-powered search engine that provides short answers with citations
Quillbot: AI-powered paraphrasing tool
Sudowrite: AI tool that can rewrite texts in various tones, lengths, etc.
DALL-E 2: can create unique and realistic images based on text prompts written in natural language
Midjourney: generates images
Leonardo: a tool that uses AI to create game assets and concept art. It enables users to ideate and train their own AI models, and create unique production-ready assets with an artist-friendly interface.
Stable Diffusion: a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
Elicit: Can find relevant papers without perfect keyword match, summarize takeaways from the paper specific to your question, and extract key information from the papers.
Scispace: Helps to understand scientific literature. Highlight confusing text, math, and tables to get a simple explanation, ask follow-up questions and get instant answers.
Connected Papers: an AI tool that helps analyze scientific papers and can visualize their relationships. Can show citation maps, connections between papers.
Gradescope: an AI tool that enables students to assess each other while providing feedback, tasks that are often time-consuming without AI technology.
MacWhisper: AI app that transcribes audio files into text. Runs locally, so minimal privacy concerns.
Eleven Labs: AI tool that transforms text to speech
Uberduck: AI tool that transforms text to speech and with which voice-overs can be created
DeepL: Translation tool
Gamma: AI tool that creates and adjusts a presentation based on text input.