
Artificial Intelligence

Risks when using AI-generated research

Relying on content generated by AI for research for class assignments or even as a study aid for quizzes, tests, and exams can be risky for several reasons.

AI large language models (like ChatGPT) are trained on existing data, much of which contains bias against various groups (including but not limited to women, people of colour, religious believers, and different political groups). Additionally, these tools have been known to produce inaccurate, incorrect, or entirely fabricated information, including "hallucinated" academic articles. OpenAI even has a warning about this.

See below for ways to evaluate any information created by generative AI tools (and information from more traditional sources, as well!). 

Accuracy

Content created by generative AI tools often contains errors, false claims, or entirely fabricated information.

Generative AI can also create fake images and videos that are increasingly difficult to detect. Be careful which images and videos you trust, as they may have been created to spread disinformation. Understanding where, how, and why the information you use was created is important.

Fact checking is crucial when using AI!

Bias

Generative AI relies on information it finds on the internet to create new output. Because information online is often biased, the newly generated content may contain a similar kind of bias. Examples of potential bias include gender bias, racial bias, cultural bias, political bias, religious bias, and so on. Closely scrutinize AI-generated content to check for inherent biases.

Comprehensiveness

AI content may be selective, as it depends on the algorithm used to create the responses. Although these tools draw on a huge amount of information found on the internet, they may not be able to access subscription-based information secured behind paywalls. Content may also lack depth, be vague rather than specific, and be full of clichés, repetitions, and even contradictions. As AI becomes more prevalent in our lives, it may also start to affect the results found in search engines.

For help finding academic articles in library databases, ask a librarian!

Currency

AI tools may not always use the most current information in the content they create. In some disciplines, it is crucial to have the most recent and updated information available. Think, for example, about the recent pandemic: research moved at a very fast pace, and it was important to have not only the most comprehensive and reliable data available, but also the most recent. Technology is another area that is constantly changing, and information that is valid one year may not be valid the next. There are many other examples, so it is important that you check the publication dates for any sources of information used in AI-generated texts.

Sources/References

Generative AI tools don't always include citations to their sources of information. They are also known to create incorrect citations, and to simply make up citations to non-existent sources (sometimes referred to as AI hallucination). A tool may provide citations by an author who usually writes about your topic, or even identify a relevant, well-known journal, but the title, page numbers, dates, and sometimes the authors are completely fictional.

Not crediting sources of information used and creating fake citations are both cases of plagiarism, and therefore breaches of Academic Integrity. Be sure to check the Library's OneSearch, Databases, and/or Google Scholar to verify whether the sources are correct or even exist.

Copyright

Generative AI tools rely on the vast repository of existing works they were trained on to create new work, and a new work may infringe on copyright if it incorporates copyrighted material.

For example, there have been several lawsuits against tech companies that use images found on the internet to train their AI tools. One such lawsuit in the United States was filed by Getty Images, which accuses Stability AI of using millions of pictures from Getty's library to train its Stable Diffusion tool. Getty is claiming damages of US $1.8 trillion.

There is much debate about who owns the copyright to a product created by AI. Is it the person who wrote the code for the AI tool, the person who came up with the prompt, or the AI tool itself? Although AI-generated works are currently not copyright protected in Canada, this may change in the future. Also note that laws in other countries may differ from those in Canada.

Uploading class material to a generative AI tool to create study materials may infringe upon your instructor's intellectual property rights.