AI at SDU
How artificial intelligence is used at SDU – the experiences of Jacob Jensen, director of SDU Analytics
Artificial intelligence is transforming society – and how research, teaching and innovation are done at universities. In a new series, we ask SDU employees how they use artificial intelligence (AI) and what difference it makes. Here are Jacob Jensen’s answers.
1. Do you use AI in your unit?
Yes, in SDU Analytics we use AI in two basic ways.
Firstly, as individual professional support – like many others at SDU, we use generative AI such as Copilot and ChatGPT.
Secondly, AI is embedded in the analytical support we build, for use in, for example, prediction, hypothesis testing and digital assistants.
However, we are very aware that there are many different types of AI – and, not least, acutely aware of what AI is and is not suitable for.
SDU Analytics provides data for official government reporting, internal quality processes, planning and strategic discussions. This places high demands on consistency, documentation and clarification of concepts across applications. This is also why I get concerned as a professional when Excel extracts from analytical reports are uploaded directly into Copilot with the expectation that new insights will be generated. Generative AI produces quick and plausible answers. Those answers are not necessarily the correct, explanatory or professionally responsible answers.
As a university, we have a special responsibility to protect what knowledge is and how knowledge is created. In our use of AI, we must not gamble with our academic and professional credibility. For this reason, we are very conscious of our professional responsibilities at SDU Analytics. We prioritise high data quality, methodological transparency and a correct interpretation of concepts. Our ambition is that managers and employees at SDU can trust the figures and analyses we provide, even if we use AI along the way.
2. Can you give examples of what you use AI for in your unit?
We have worked with predictive models for several years, such as those used to forecast recruitment and graduate unemployment. Along the way, we have learned that the challenge is that figures are deceptively simple to communicate, but their preconditions are not. We strongly suggest that our predictions be used with caution. Regardless of the choice of AI model, there will be statistical uncertainty. What’s more, structural changes in the university sector undermine the assumptions embedded in the AI models’ training data, so the models’ results will unfortunately fail from time to time.
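As a purely illustrative sketch of why such forecasts should always be communicated with their uncertainty, the following fits a linear trend to hypothetical intake figures and reports a residual-based band alongside the point forecast. The numbers and the simple trend model are invented assumptions, not SDU data or SDU Analytics’ actual method:

```python
import statistics

# Hypothetical yearly intake figures (invented for illustration, not SDU data)
years = [2019, 2020, 2021, 2022, 2023]
intake = [1200, 1180, 1250, 1230, 1300]

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

slope, intercept = linear_fit(years, intake)

# Point forecast for the next year
forecast = slope * 2024 + intercept

# Residual spread as a rough (optimistic) uncertainty estimate:
# it captures noise around the trend, but not structural breaks
residuals = [y - (slope * x + intercept) for x, y in zip(years, intake)]
sigma = statistics.stdev(residuals)

# Reporting an interval, not a single figure, keeps the caveat attached
print(f"2024 forecast: {forecast:.0f} (±{2 * sigma:.0f})")
```

Note that the band only reflects historical noise; as the answer above stresses, structural changes in the sector can invalidate the model’s assumptions entirely, which no interval computed from past data will capture.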
In addition, we have high expectations for our ongoing development of AI agents. The SDU Analysis Portal contains more than 200 web reports, which makes it difficult to find the exact insights you need. That is why we are now developing digital analysis assistants that users can communicate with using natural language. Behind the scenes, we will combine generative AI with standard, rule-based machine learning methods. The generative AI will handle the dialogue with the user, whereas rule-based methods such as symbolic AI will ensure that only correct, up-to-date and approved data can be retrieved and presented. Our pilot test agent is already capable of, for example, generating PowerPoint-ready visualisations with correct data in SDU’s brand colours; perhaps it should also provide a brief interpretation of results in the form of speaker notes? AI agents also learn from user behaviour in ways different from how we as analysts work today. This is why I am eagerly anticipating the user testing of our first digital analysis assistants among colleagues at SDU in 2026, because it will help us develop a completely different service from what we offer today.
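The split described in this answer – a generative layer for the dialogue and a rule-based layer guarding the data – can be sketched roughly as follows. Everything here is a hypothetical illustration: `APPROVED_REPORTS`, the report numbers and the keyword-based `parse_intent` are stand-ins, not SDU’s actual implementation, and a real agent would use an LLM where the keyword matcher sits:

```python
import re

# Approved, versioned figures (illustrative; in practice a governed data store)
APPROVED_REPORTS = {
    ("enrolment", 2024): {"value": 4321, "source": "Analysis Portal report #117"},
    ("graduate_unemployment", 2024): {"value": 0.071, "source": "Analysis Portal report #042"},
}

def parse_intent(question: str):
    """Stand-in for the generative layer: map free text to a structured query.
    A real agent would call an LLM here; this keyword matcher only
    illustrates the split between dialogue and retrieval."""
    q = question.lower()
    topic = "enrolment" if "enrol" in q else (
        "graduate_unemployment" if "unemploy" in q else None)
    year_match = re.search(r"\b(19|20)\d{2}\b", q)
    year = int(year_match.group()) if year_match else None
    return topic, year

def answer(question: str) -> str:
    topic, year = parse_intent(question)
    record = APPROVED_REPORTS.get((topic, year))
    if record is None:
        # The rule-based layer refuses rather than letting the
        # generative layer improvise a plausible-sounding figure
        return "No approved figure found for that question."
    return f"{record['value']} (source: {record['source']})"

print(answer("What was enrolment in 2024?"))
```

The design point is that the generative component never produces the figures themselves: every answer either comes from the approved store, with its source attached, or is an explicit refusal.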
3. What difference has AI made in the unit so far?
Overall, we have learned not to overestimate the benefits of AI and not to underestimate its costs. And we are still learning every single day, because the use of AI has so many kinds of effects.
As early as 2018–2021 – before AI became a widely known concept – we worked systematically with ‘advanced analytics’. One key realisation from that period was that model development represents only a small portion of the total resource consumption. When an AI model is deployed behind an analytical product, new types of problems arise. Launching an analytical product based on AI is therefore not the conclusion of a project but the beginning of a new one: handling new data noise, data breaches, new user behaviours and new interpretations of use, all of which are resource-intensive to maintain.
We also learned that legal frameworks, organisational values and ethical considerations are not supplements but fundamental prerequisites for the responsible use of data and AI models. Today, we can build on these experiences to achieve a more nuanced and realistic understanding of both the potentials and the limitations of AI.
4. How do TAPs get started using the technology?
It is hard to offer just one piece of good advice. I am well aware that for the individual TAP colleague, learning to use AI tools while solving tasks takes time. This means that the AI experience can be frustrating in a busy everyday life, because of the dilemma of opportunity cost: the time spent experimenting with AI on a task could, with established working methods, have been spent solving some of the other tasks piling up on their desk.
However, without a doubt, all of us will have to get started with AI. In the future, I expect strong AI skills to become a natural part of the job requirements for TAPs, both in job postings and in the annual performance and development review (MUS). Managers therefore play a key role in creating safe learning spaces in which employees can experiment with AI in relation to real tasks, including accepting mistakes and uncertainties along the way, because this is where learning takes place.
Imagine if every TAP at SDU chose just one specific task each week to try to solve with the help of AI. Their struggles and their excitement would generate the reflection, competences and organisational learning needed to shape the administration of the future at SDU.
5. What do you see as SDU’s strengths and opportunities in terms of utilising AI?
SDU’s strong culture of collaboration across the organisation, including between academic and technical-administrative staff, is a very significant strength because AI as an approach only truly creates value when experiences and solutions can be shared and scaled.
If we share experiences, methods and solutions more systematically at SDU, in the long term AI will transform the way in which we are a university. Not only internally, but also in relation to the surrounding society. We can always start small, but with AI we are allowed to dream big.
I believe that AI will lead to new forms of co-creation; not only internally within the processes at SDU, but also between SDU and the surrounding society, where teaching, research and knowledge sharing are taking place off campus to a greater extent. AI has the potential to be a catalyst for an even more open and engaged university.
6. What principles do you think should guide the responsible and development-oriented use of AI at SDU?
In 2025, SDU implemented an AI code of conduct, which provides a good framework for responsible development with AI. Perhaps the next step is a broader shared recognition that AI is not just a set of new tools but an occasion to rethink workflows, roles and value creation – who creates value, and in what ways, through the use of AI.
If AI is only used to optimise the speed of existing ways of working and is only based on individual approaches, then we risk losing significant strategic potential at organisational level. The framework must be able to support new ways of thinking and encourage experimentation; adding more rungs to the existing ladder will not get us to the moon.
In my view, it is a sound principle to promote the use of AI as a strategic opportunity to challenge habits and encourage new ways of working. In the coming years, we will have to balance control with curiosity, not least because there are many different kinds of considerations in relation to the increasing use of AI: at the moment, we hardly have a complete overview of the derived effects, such as environmental impacts and implications for IT and information security.
Basically, as an employee at SDU, I feel quite confident that SDU’s strategy 2030 and our core academic critical thinking will largely ensure an appropriate development in the use of AI at SDU.
Jacob Jensen
Director of SDU Analytics