Responsible Use of Artificial Intelligence in Doctoral Research
[Last updated: 16/01/2026]
The Doctoral School is developing a framework to promote the responsible, transparent, and ethical use of AI throughout doctoral studies. This framework builds on an internal draft (06/02/2025) that outlines the main risks, limitations, and ethical considerations associated with AI, including over-reliance on automated tools, loss of critical research skills, issues of authorship and academic integrity, and the potential reproduction of gender, geographic, racial, or disciplinary biases. The draft also emphasizes the inherent limitations of AI systems, particularly their lack of contextual understanding, their dependence on the quality of training data, and their inability to replace human creativity, judgment, and scholarly responsibility. It further stresses the importance of preserving human supervision and mentorship as core elements of doctoral training.
As part of these actions, the Doctoral School will soon implement (as announced on 14/10/2025) a self-assessment questionnaire to be completed by doctoral candidates at the time of thesis submission. The questionnaire invites candidates to reflect on whether and how AI tools were used at different stages of their doctoral work (such as planning, literature review, writing, translation, and data analysis), and to assess the degree of influence of these tools, how their outputs were verified and validated, and the ethical implications of their use. The aim is not to prohibit AI, but to foster awareness, critical use, transparency, and accountability.
This approach is aligned with international publishing standards, such as the ACM Policy on Authorship in the context of Computing. This policy clearly states that only identifiable human beings may be listed as authors of a scientific work, that authors must have made substantial intellectual contributions and must take full responsibility for the content of the publication, and that generative AI tools cannot be considered authors. While the use of AI tools to assist in content creation is permitted, it must be explicitly disclosed in the manuscript, ensuring transparency, human authorship, and the integrity and trustworthiness of scholarly work.
We encourage all PhD students to review these policies carefully and to remain attentive to future updates, as the boundary between traditional writing tools (such as spelling and grammar checkers) and AI-based systems (including text completion and generative AI tools) is becoming increasingly blurred.