
What do I do when I want to use Copilot or other AI chatbots?

What is prompting? 

A prompt is the text or instruction you give a generative AI tool to produce the desired response.

A prompt can be simple, e.g. a single line, or highly detailed, describing precisely what response the chatbot should generate.
The prompt also sets the tone and direction of the conversation with the chatbot, which helps ensure that the chatbot delivers relevant responses.

To achieve a good result, you need to be precise and clear. The rule of thumb is: you only get what you ask for! 

Here are some tips for creating effective prompts:

Be specific

How precise the prompt is makes a great difference to the response.

– Simple prompt: Often just one line, similar to a quick question you would type into a Google search.
– Precise prompt: Several lines that describe more explicitly the response you want.
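
For example (the topic and details here are invented purely for illustration):

– Simple prompt: "Write about working from home."
– Precise prompt: "Write a 200-word introduction to the advantages and disadvantages of working from home for university staff, in a professional but friendly tone, aimed at administrative employees."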

Think of the chatbot as a colleague

If you were to ask your colleague to help you, how would you ask the question or describe the task you need help with? Create prompts along the same lines.

A generative AI chatbot is not only a search engine, but also a conversation partner.

If you are not happy with the initial response, continue with new prompts asking for elaboration, clarification, adjustment, rewriting, etc. 

For example:

- Clarify the angle and target audience: At what level and in what tone should the text be written? What style/genre and attitude, and who is the target audience?
- Narrow the context: Enter relevant details about the topic or desired action.
- Text length and structure: Specify whether the text should be short, long or a certain number of words. Indicate whether the text should be structured in a certain way, such as purpose, analysis and conclusion.
- Chatbot methodology: Ask the chatbot to explain how it arrived at the answer.
- Source quality: Set a condition that the chatbot draws on high-quality sources.

In this way, the chatbot will generate something that is at least at your own level (see the example below).
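
A follow-up prompt that combines several of these tips might look like this (the example is invented purely for illustration):

"Rewrite the text so it is no longer than 150 words, aimed at first-year students, structured as purpose, analysis and conclusion, and explain briefly how you arrived at the result."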

Use anonymised data 

Before entering data or uploading images, ensure that personal/personally identifiable data and business-critical information have been anonymised or removed from the dataset (see the example below).
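
For example, anonymisation could look like this (the name and details are fictitious):

Before: "Hans Jensen from the Finance Department, born 12.03.1985, reported an error in his salary statement."
After: "[Employee] from [department], [date of birth removed], reported an error in a salary statement."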

The various AI chatbots are energy-intensive, which affects the carbon footprint of every prompt. Image, audio and video prompts are considerably more energy-intensive than text prompts.

What is SDU’s current position?
The system suppliers, such as Microsoft, are obliged to deliver sustainable solutions, and this is a point of focus in SDU’s dialogue with them.
Emissions from generative AI are among the indirect emissions that SDU addresses under Scope 3 of SDU’s climate goals and plan for 2030, which covers all other SDU-related emissions, e.g. from air transport, construction, procurement of goods and services, waste and water consumption.
The focus is on mapping the University’s energy consumption for servers, cloud solutions, etc. in order to reduce the University’s digital climate footprint (see Package A [Campus, buildings and operations] in SDU’s climate goals and plan for 2030).

Read SDU’s climate goals and plan for 2030

At SDU, Copilot Chat is available for use, but it is not forbidden to use other generative AI programmes. The only condition is that GDPR, copyright and IT security requirements are not violated.

For Copilot Chat, you must be logged in with your SDU account, and you must select the conversation style.
Other generative AI programmes usually require a login, which can be free or paid.
If you are unsure about data security or other terms related to a free or paid generative AI programme, please use the contact form (in Danish) on the service page
https://sdunet.dk/da/servicesider/digital/inkoeb_af_it_systemer_short

You must ensure and vouch for the quality of what you use from the chatbot

- Transparency: Communicate clearly with colleagues, students or other stakeholders about how generative AI has been used and what data has been used, for example if you use generative AI to get an overview of a subject area and this affects your further work or assumptions about a topic.
- Accountability: Be aware of any errors and bias resulting from generative AI and make the necessary corrections or compensations. 

Do not leave the thinking to Copilot (or other chatbots) and do not attribute human characteristics to them. 
Chatbots cannot think, feel, remember or empathise. 

Bias and false information

Be aware of bias and false information inherited from the data the chatbot has been trained on.
Generative AI is not perfect, and it can produce unexpected or even categorically false results. 
It is important to quality-assure the output so that what you want to use is correct and meets the necessary standards.

Ethical considerations

Generative AI can raise ethical issues, particularly when it comes to personal data and business-sensitive information. 
The service or product you use has rules built into its programming that assess what is right and wrong. These rules shape the answers, and views that exist in the source material may therefore be suppressed or omitted in the response.
This gives rise to ethical considerations about inclusion and the representation of views, for example.

As a university, SDU also upholds a code of good conduct and ethics, which you as an employee are obliged to comply with. This also applies to the use of generative AI products.

You do not know the source – remember source criticism and critical thinking!

You do not know the extent or quality of the data from which the chatbot generates responses. Although Copilot Chat combines big-data modelling with the Bing search engine, the references to various websites in Copilot Chat responses should always be checked.

It is important to follow the general rules of source criticism:

– Know your source: Find out who is behind the information/visualisation (image, film, etc.). Is the source credible?
– Background of the author/graphic designer: What are the author’s qualifications and expertise in the field?
– Date of publication: Is the information/visualisation current or is it outdated?
– The purpose of the source: Is the purpose of the source to inform, convince, sell something or maybe even to mislead?
– Objectivity: Is the information/visualisation presented in an objective way or are there signs of bias?
– Accuracy: Is the information/visualisation correct? Can it be verified through other sources?
– Coverage: Does the source fully cover the topic or are important aspects missing?
– Consistency: Is the information/visualisation consistent with what you already know or with what is reported in other sources?
– References: Are there references to other sources and are these sources credible?
– Language: Is the language in the source professional, or are there signs of sloppy language, misspellings or grammatical errors?
– Deepfakes: Is something not quite right with the visualisation? For example, are all parts of the visualisation consistent in style, and do voice and mouth movements match?


Last Updated 02.07.2024