The Future of AI Systems on a Legal Basis

In today’s world, where technological innovation permeates daily life more than ever, AI has become influential across numerous fields. These advances make transactions faster and more reliable for both sides: sellers of goods and services, and their customers. Data processing has never been as fast as it is today. AI offers a variety of advantages, including a safer internet environment, faster and neater inventory management, and solutions to problems that customers face. However, AI has also brought disadvantages to these areas. From a legal perspective, the protection of personal data, the contestability of decisions made by AI, and liability for harm caused are only a few of the problems that can emerge.

AI will become even more integrated into our lives in the future. For instance, people will begin to delegate daily activities such as household chores to AI. Under this assumption, more serious problems can occur, such as physical harm caused by AI systems. Lawmakers are therefore obliged to answer several questions. Who should be liable for damage caused by AI? How can AI be controlled and prevented from discriminating against people? What are the boundaries of data processing performed by AI?

In 2021, the European Union (EU) published a draft regulation on AI. The EU classified AI systems by their level of harm and identified three tiers: unacceptable-risk AI systems, high-risk AI systems, and limited- and minimal-risk AI systems. Systems such as spam filters or chatbots are regarded as minimal-risk because they cause almost no harm; indeed, thanks to them the internet is a safer environment today. High-risk AI systems mostly involve AI activity in crucial areas of our lives. Developers and users of these systems must be very careful while configuring, programming, and operating them, because without strong cybersecurity they can be used for unethical purposes.
Even in a perfectly secured system, problems can arise, such as AI discriminating when recruiting employees or selecting people eligible for loans. Therefore, the EU plans to regulate the liability of those who deploy such systems. Unacceptable-risk AI systems are to be prohibited in most cases, with exceptions for use by legal authorities. For example, defence systems are broadly connected with AI that could be used to harm people and threaten national security; these will therefore be available only to a few organizations. The EU regulation does not limit its scope to Europe: anyone connected to the EU through such a system falls within it. Accordingly, individuals or companies located within the European Union, placing an AI system on the EU market, or otherwise connected to the EU through an AI system will be accountable under the regulation.

Resources:

https://openai.com/blog/gpt-3-apps/

https://openai.com/blog/openai-codex/

https://www.notboring.co/p/scale-rational-in-the-fullness-of

https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/what-the-draft-european-union-ai-regulations-mean-for-business

https://www.sciencedirect.com/science/article/pii/S2666659620300056

https://www.weforum.org/agenda/2021/07/ai-machine-learning-bias-discrimination/