Artificial intelligence (AI) can be found at CERN in many contexts: embedded in devices, software products and cloud services procured by CERN, brought on-site by individuals, or developed in-house. Following the approval of a CERN-wide AI strategy, these general principles are designed to promote the responsible and ethical use, development and deployment (collectively “use”) of AI at CERN. They are technology neutral and apply to all AI technologies as they become available.

The principles apply across all areas of CERN’s activities, including:

- AI for scientific and technical research: data analysis, anomaly detection, simulation, predictive maintenance, and optimisation of accelerator performance or detector operations.
- AI for productivity and administrative use: document drafting, note taking, automated translation, language correction and enhancement, coding assistants, and workflow automation.

General Principles

CERN, members of its personnel, and anyone using CERN computing facilities shall ensure that AI is used in accordance with the following principles:

- Transparency and explainability: Document and communicate when and how AI is used, and how AI contributes to specific tasks or decisions.
- Responsibility and accountability: The use of AI, including its impact and resulting outputs throughout its lifecycle, must not displace ultimate human responsibility and accountability.
- Lawfulness and conduct: The use of AI must be lawful, comply with CERN’s internal legal framework, and respect third-party rights and CERN’s Code of Conduct.
- Fairness, non-discrimination and “do no harm”: AI must be used in a way that promotes fairness and inclusiveness and prevents bias, discrimination and any other form of harm.
- Security and safety: AI must be adequately protected to reduce the likelihood and impact of cybersecurity incidents. AI must be used in a way that is safe, respects confidentiality, integrity and availability requirements, and prevents negative outcomes.
- S...