
AI at SDU

How artificial intelligence is used at SDU – Anton Pottegård’s experiences

Artificial intelligence is transforming society – and how research, teaching and innovation are done at universities. In a new series, we ask SDU employees how they use artificial intelligence (AI) and what difference it makes. Here is the response from Anton Pottegård, professor at the Department of Public Health.

By Anton Pottegård and Susan Grønbech Kongpetsak, 11/26/2025

1. Do you use AI in your group?

Yes, in our research group, AI has quickly become a natural part of everyday life, and we consider it a fundamental tool comparable to, for example, literature searches or our statistical programmes. Of course, this doesn’t mean that everyone uses it in the same way or to the same extent. We haven’t created a formal strategy – but we’re trying to create an environment where you’re expected to try out the technology and find uses that make sense in your own work. We learn from each other and we’re continuously developing how we use the technology.

2. Can you give examples of what you use AI for in your group?

We use AI in many and sometimes surprising ways, but three main areas stand out. Firstly, we work a lot with AI in writing and writing training. In this respect, the models act as regular sparring partners: they suggest structures, rewrite paragraphs, provide rhetorical alternatives and point out logical weaknesses. Particularly for our junior researchers, it has been a great help in the early stages of writing, where you have to learn the linguistic, structural, logical and pedagogical aspects of the craft. We use it as a kind of writing tutor that makes training more continuous than traditional feedback.

Secondly, we use AI very actively in programming training. In our research we mainly use the statistical programming language R, but the learning curve for newcomers can be steep. Here, AI helps as a kind of code mentor, explaining errors, generating examples, suggesting alternative solutions and providing insights into how code can be optimised. This dynamic support allows people to make good headway because the feedback is immediate.

Finally, we use AI in a wide range of administrative and communication tasks. This might involve distilling long documents into short texts for websites, preparing draft emails and memos, structuring meeting materials or creating first drafts of project descriptions. This frees up time from routine tasks and makes it easier to keep the quality of communication consistent.

3. What difference has AI made in the group so far?

The most noticeable difference is that some processes that previously took a long time or required many iterations are now significantly faster. It has fundamentally made us more productive. More importantly, it leaves room for the parts of the work where professionalism really makes a difference – like methodology, interpretation, design and discussion. It reduces frustration at work because you’re not stuck for long periods of time, wrestling with technical or language barriers.

Across the group, it also supports a shared dialogue about work processes. We talk about how we use AI, what our experiences are and what works best. It has created a fun and experimental period of trying things out and sharing both successes (like when someone has found a smart solution) and failures (like when someone has wasted two hours playing with ChatGPT and nothing useful came out of it).

4. How do researchers and lecturers get started using the technology?

You have to jump right in with a specific task you’re already working on instead of sitting down to ‘learn AI’. Take a current text, code snippet or teaching task and ask: can AI help me take a step forward here? The possibilities become clear when you start experimenting with your own materials.

At the same time, you have to accept that it’s not possible – or necessary – to fully understand the technology before you get started. Many researchers have a natural tendency to want to figure out the mechanisms first, but in this case it will only delay the process. The pace of development is so fast that no one can grasp all the underlying issues. The most important thing is therefore to let go of the traditional need for control, test the technology in practice and learn through actual use. It’s in doing the experiments that the real applications are revealed.

5. What do you see as SDU’s strengths and opportunities in terms of utilising AI?

I really enjoy working at SDU. One thing that particularly appeals to me is that we are big enough to have strong academic environments and strong administrative structures – but small enough to be able to move quickly when something new arises. I think this is an excellent starting point for being among those who are experimenting with and really utilising the new technology that’s available. I see a thriving culture of interdisciplinarity and curiosity, which is crucial for realising the potential of AI.

In my experience, we are also good at focusing on creating good environments for junior researchers. Younger people in particular are open to new ways of working and do not carry the same historical notions of how academic work should be organised. Let us (old fogies) be inspired by them. This can make SDU an environment where AI can become a source of real innovation – methodologically, pedagogically and organisationally.

6. What principles do you think should guide the responsible and development-oriented use of AI at SDU?

The guiding principle that has worked best for me is: ‘good enough for now, safe enough to try.’ This means that we can’t and don’t need to fully understand the technology before we use it; we need to dive in and adjust as we go. It also means that it’s acceptable if something goes wrong. You may end up wasting your time chasing a smart solution that ultimately doesn’t work. The experiment is valuable in itself.

For me, accountability is not about waiting, but about working transparently, documenting processes and insisting that professional judgement always lies with people. We must not use AI for tasks where we can’t judge for ourselves whether they have been solved correctly.

Of course, we need to have boundaries: we don’t use AI for anything that could mislead or blur professional responsibility. For example, photorealistic generation or automated decision-making is beyond what I consider justifiable in an academic context. For now.

Within that framework, however, I think we need to be brave and creative. If we dare to experiment responsibly, we can develop new methods, new ways of teaching and new ways of organising our work. And striking that balance – between courage and thoughtfulness – is precisely what I see as one of SDU’s key tasks in the journey we are embarking on.

P.S.: The article above should have been based on a classic interview. Due to travel commitments and a tight deadline, this was not possible. Instead, the article was structured and written by Anton with support from ChatGPT 5.1. Specifically, he first had a 15-minute oral conversation with ChatGPT about his thoughts on the subject (while picking up pizza one Friday night). He then suggested a structure for an article and asked ChatGPT to summarise his thoughts for an article format based on this. This did not work at all. Instead, he asked it to suggest which questions could summarise the main points if it were to be a Q&A format instead. These questions were manually corrected and then sent to Susan Kongpetsak, who also adjusted them. ChatGPT was then asked to complete a first draft of answers to the final questions (based on the previous conversation). It was asked to restructure some of the answers to emphasise other points. The final draft was then corrected in Word and subsequently checked and edited by Susan.

Read more about artificial intelligence at SDU

Anton Pottegård

Professor at the Department of Public Health

Editing was completed: 26.11.2025