RISKS ASSOCIATED WITH THE USE OF ARTIFICIAL INTELLIGENCE

With the increasing use of artificial intelligence tools, it is necessary to consider not only their benefits but also their potential risks. These include, for example, distorted algorithmic outputs, privacy violations, security threats, and the spread of disinformation, as well as ethical issues associated with automated decision-making and the misuse of AI in academic settings (e.g. in grading papers or in cheating).

The aim of this section of the website is to offer useful links and to support debate on how to recognize, evaluate, and actively prevent these risks in a timely manner – not only in a technical context, but also in social, legal, and ethical ones.

The Karel Čapek Centre for the Study of Values in Science and Technology has published a report entitled “Ethical Rules for the Regulation of AI”, which addresses the need for ethical regulation of the development and use of artificial intelligence. The aim is to ensure that AI respects fundamental human rights, values, and ethical principles, while being technically robust and reliable. The report emphasises the importance of trustworthy AI that will not lead to moral panic or to the rejection of its applications in society. The document also analyses the situation in the Czech Republic, points to the shortage of experts in the field of AI ethics, and proposes the creation of ethical guidelines governing the research, development, and application of AI in practice.

The Kempelen Institute for Intelligent Technologies (KInIT) is an independent non-profit organization based in Slovakia that focuses on research and development in artificial intelligence and other areas of computer science. Its mission is to connect cutting-edge research with practical applications for people and industry. On the website, you will find information about the institute, its research areas, educational activities, current job opportunities, news, and co-operation opportunities. The main research areas include web and user data processing, solutions for a sustainable and safe environment, natural language processing, and ethics and human values in technology.

This page on the Digital Czech Republic website summarizes how AI systems are classified according to their risk level under the European Artificial Intelligence Act. It presents four basic categories: prohibited practices, high-risk systems, limited-risk systems, and minimal-risk systems. The classification aims to ensure the safe, transparent, and ethical use of AI in the EU.

The OECD website on AI-related risks and incidents provides information on how such incidents are monitored and analysed. Through the AI Incidents Monitor (AIM), the OECD documents these incidents and risks, with the aim of providing better evidence for policymakers, experts, and other stakeholders. The website provides definitions of key terms such as “AI incidents” and “AI risks”, as well as information on the OECD’s efforts to develop a common framework for reporting these incidents.

The non-profit organization AI Futures Project has published predictions for the development of AI until 2027 and beyond. The team of authors, which includes former OpenAI researchers and other experts in AI safety and policy, describes in great detail possible scenarios for the development of artificial intelligence in the coming years – including the gradual emergence of AGI (Artificial General Intelligence), the impacts on the labour market, geopolitics, and security, and the potential risk of AI escaping direct human control. These predictions have provoked a number of reactions, some more critical than others. Here you can find a link to one such reaction, by researcher Max Harms, who is affiliated with the Machine Intelligence Research Institute (MIRI). A Czech-language interview about this material is available on the Czech Radio website, in the podcast Vinohradská 12.

The ACS team led by Jan Kulveit has published the text Gradual Disempowerment, which he co-authored. It examines a less dramatic but all the more realistic scenario of artificial intelligence development, one that anticipates not a sudden catastrophe but a gradual loss of human influence. This newly defined category of risk may be less conspicuous than the classic images of an AI rebellion or global misuse, but, according to the authors, that is precisely why it needs to be taken seriously.