Artificial intelligence (AI), one of the most groundbreaking technologies of the digital age, is rapidly permeating every aspect of our lives. AI systems, which increase efficiency, optimize processes, and offer innovative solutions in numerous sectors, from healthcare and finance to education and transportation, also raise important legal and ethical questions. Chief among these questions is the protection and privacy of personal data. Within the framework of the Personal Data Protection Law (KVKK) in Türkiye, the processing of personal data by AI systems poses a complex challenge for both data controllers and individuals. Does the use of AI violate the KVKK, or can it find legal grounds under certain circumstances? This article examines this critical question in detail, providing a comprehensive guide to new regulations, potential risks, and criminal liability.

The Relationship Between Artificial Intelligence and the Personal Data Protection Law (KVKK)

Artificial intelligence systems require large amounts of data to perform learning, prediction, and decision-making processes. This data often includes personal data. Personal data, defined as any information that allows a person’s identity to be directly or indirectly determined, is strictly protected under the Personal Data Protection Law (KVKK). When AI systems process personal data, they must comply with the KVKK at every stage, from collection and storage to analysis and sharing.

The primary purpose of the KVKK is to protect the fundamental rights and freedoms of individuals in the processing of personal data and to regulate the obligations of natural and legal persons who process personal data, as well as the procedures and principles they must follow. To this end, artificial intelligence applications must observe fundamental principles such as data minimization, transparency, purpose limitation, and data security.

Data Processing Conditions in AI Applications

The Personal Data Protection Law (KVKK) stipulates certain conditions for the processing of personal data. These conditions must also be met in data processing activities using artificial intelligence systems:

* Explicit Consent: One of the most fundamental bases for processing personal data is the data subject’s explicit consent, given freely and on an informed basis. Properly obtaining this consent and providing the opportunity to withdraw it at any time is crucial in the data collection and processing activities of AI systems. However, the complex nature and continuous learning processes of AI systems can make it difficult to determine the scope of the consent obtained.
* Expressly Provided for by Law: In some cases, the processing of personal data is expressly permitted or required by a specific statute. An AI system that processes data within the scope of such a statutory provision may rely on this condition.
* Establishment or Performance of a Contract: Processing of personal data belonging to the parties to a contract is possible provided that it is directly related to the establishment or performance of a contract to which the data subject is a party. AI-supported customer service or contract management applications may rely on this condition.
* Legal Obligation: Processing is permitted where it is necessary for the data controller to fulfill its legal obligations.
* Legitimate Interest: Processing is permitted where it is necessary for the data controller’s legitimate interests, provided this does not harm the data subject’s fundamental rights and freedoms. This ground frequently arises in AI development and optimization. However, careful consideration must be given to whether the legitimate interest actually exists and whether it outweighs the individual’s rights.
* Actual Impossibility: Where the data subject is unable to give consent due to actual impossibility, or where their consent is not legally valid, personal data may be processed if this is necessary to protect their own life or physical integrity, or that of another person.

Special Personal Data and Artificial Intelligence

The Personal Data Protection Law defines data relating to race, ethnic origin, political opinion, philosophical belief, religion, sect or other beliefs, appearance and attire, membership in associations, foundations or trade unions, health, sexual life, criminal convictions and security measures, as well as biometric and genetic data, as “special personal data” and imposes much stricter conditions on their processing. When artificial intelligence systems are used, particularly in the areas of health, biometrics, or genetics, the processing of such data requires either the explicit consent of the data subject or the existence of a condition clearly stipulated in the law. Otherwise, serious legal sanctions are inevitable.

New Regulations and International Approaches to Artificial Intelligence

The rapidly evolving nature of AI has raised concerns that existing legal frameworks may be inadequate. Consequently, efforts to develop AI-specific regulations have accelerated globally.

European Union Artificial Intelligence Act (AI Act)

The European Union has adopted the Artificial Intelligence Act (AI Act), the world’s first comprehensive legal framework for artificial intelligence technologies. The Act classifies AI systems according to their risk level (unacceptable risk, high risk, limited risk, and minimal risk) and imposes different obligations accordingly. Strict rules on data governance, transparency, human oversight, and security apply in particular to high-risk AI systems (e.g., those used in employment, education, critical infrastructure, and legal processes). This EU approach aims to strengthen personal data protection in AI applications by operating in tandem with data protection legislation such as the GDPR. Türkiye’s close relationship with the EU and the fact that the KVKK drew inspiration from the GDPR (General Data Protection Regulation) suggest that similar regulations may come onto the agenda in Türkiye in the future.

Developments and Expectations in Türkiye

Türkiye does not yet have a comprehensive AI-specific law. However, the Personal Data Protection Authority publishes guidelines and continues its awareness-raising work on artificial intelligence and personal data protection. The existing provisions of the KVKK form the legal basis for personal data processing in AI applications. Given the unique risks of AI (e.g., algorithmic discrimination, the “black box” problem, and automated decision-making), however, AI-specific regulations are expected to be introduced in the future, whether in line with EU developments or in response to national needs. Such regulations are likely to focus in particular on automated decision-making, explainability, and transparency.

Risks and Criminal Liability in the Use of Artificial Intelligence

The processing of personal data by AI systems presents a number of risks. If these risks materialize, data controllers could face significant administrative fines and even criminal liability.

Data Breaches and Security Vulnerabilities

Security breaches in the datasets used to train or operate artificial intelligence models could lead to the leak of the personal data of millions of people. The confidentiality, integrity, or availability of data may be compromised by cyberattacks, malicious software, or system vulnerabilities. Under the Personal Data Protection Law (KVKK), data controllers are obligated to take all technical and administrative measures necessary to ensure the security of personal data, and in the event of a data breach they must notify the Authority and the affected individuals. Failure to fulfill these obligations or to implement adequate security measures may result in administrative fines.
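
Purely as an illustration of one such technical measure, the sketch below shows field-level encryption of a personal data value before storage, assuming the third-party `cryptography` package; the field name and the in-memory key are hypothetical shortcuts, and a real deployment would rely on a proper key-management system.

```python
# Minimal sketch of encrypting a personal-data field before storage,
# assuming the third-party `cryptography` package (pip install cryptography).
# Key management (secure storage, rotation) is deliberately out of scope here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, load from a secure key store
cipher = Fernet(key)

national_id = "12345678901"      # hypothetical personal data field
token = cipher.encrypt(national_id.encode("utf-8"))

# Only the ciphertext would be persisted; decryption is restricted to
# authorized components holding the key.
print(token)
print(cipher.decrypt(token).decode("utf-8"))
```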

Algorithmic Discrimination and Bias

AI systems can reflect and even reinforce biases present in the data they are trained on. This can lead to algorithmic decisions that discriminate against individuals based on race, gender, age, or other sensitive categories. For example, if a recruiting AI consistently excludes a certain demographic group, or a credit-scoring model produces discriminatory results for loan applications, this creates legal and ethical problems. The Personal Data Protection Law (KVKK) prohibits the unlawful processing of personal data and embraces non-discrimination as a fundamental principle. Where such discrimination is identified, it can result in serious legal sanctions and significant damage to the data controller’s reputation.
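
As a purely illustrative aid rather than a legal test under the KVKK, a data controller might monitor for this kind of disparity by comparing positive-outcome rates across groups in a model’s output, roughly as sketched below; the data, group labels, and the 0.8 threshold are hypothetical.

```python
# Illustrative sketch: comparing positive-outcome rates across groups
# to flag potential algorithmic discrimination. The data, group labels,
# and the 0.8 threshold (a common rule of thumb) are hypothetical examples,
# not a legal standard under the KVKK.
from collections import defaultdict

def outcome_rates(decisions):
    """decisions: list of (group, approved: bool) tuples."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparity(rates, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold`
    times the best-performing group's rate."""
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = outcome_rates(decisions)
print(rates)                 # e.g. {'A': 0.67, 'B': 0.33}
print(flag_disparity(rates)) # e.g. ['B']
```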

Transparency, Explainability and Automated Decision Making

The “black box” nature of AI systems makes it difficult to understand how they reach their decisions. Pursuant to Article 11 of the Personal Data Protection Law, data subjects have the right to object to a result that is to their detriment and that arises from the analysis of their personal data exclusively by automated systems. In AI-based automated decision-making processes, individuals must therefore be informed of the reasons for the decision and given access to objection mechanisms. Failure to fulfill these transparency and explainability obligations can lead to violations of the law.
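
Purely by way of illustration of what such a mechanism might look like in practice, and not as a method prescribed by the Law, an automated decision could be stored together with the plain-language reasons communicated to the data subject and a channel for recording objections; the structure and field names below are hypothetical.

```python
# Hypothetical sketch: storing an automated decision together with the
# plain-language reasons communicated to the data subject, and recording
# any objection so it can be routed to human review.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str                  # e.g. "loan_rejected"
    reasons: list[str]            # human-readable explanation given to the subject
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    objection: str | None = None  # filled in if the data subject objects

    def register_objection(self, text: str) -> None:
        """Record the data subject's objection for human review."""
        self.objection = text

decision = AutomatedDecision(
    subject_id="subject-001",
    outcome="loan_rejected",
    reasons=["income below declared threshold", "short credit history"],
)
decision.register_objection("The income figure used is outdated.")
print(decision)
```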

Sanctions to be Applied in Case of KVKK Violation

In case of use of artificial intelligence contrary to KVKK, various sanctions may be imposed on data controllers:

* Administrative Fines: In accordance with Article 18 of the KVKK, administrative fines ranging from tens of thousands to millions of Turkish lira may be imposed, depending on the nature of the violation, where the data controller fails to fulfill the obligation to inform, fails to take data security measures, fails to comply with decisions of the Board, or violates the obligation to register with and notify the Data Controllers’ Registry (VERBİS).
* Criminal Liability: Articles 135 to 140 of the Turkish Penal Code (TCK) stipulate prison sentences for unlawfully recording personal data, unlawfully transferring or acquiring it, and failing to destroy it when required. If these offenses are committed through artificial intelligence systems, criminal proceedings may be initiated against the individuals responsible.

Obligations of Data Controllers and Precautions to Be Taken

Data controllers who plan to use or use artificial intelligence technologies must take proactive steps to comply with the KVKK and other relevant legislation.

* Data Protection Impact Assessment (DPIA): Conducting a DPIA that evaluates the potential impacts of data processing activities on the protection of personal data is critical, especially for high-risk AI projects. This assessment helps identify and mitigate risks proactively.
* Obligation to Inform: Data subjects must be informed in a clear, understandable, and transparent manner about the purposes for which their personal data is processed by AI systems, how it is processed, with whom it is shared, and their rights.
* Consent Management: In cases requiring explicit consent, care must be taken to ensure that consent is validly obtained, documented, and easily withdrawn.
* Data Minimization and Anonymization/Pseudonymization: It should be ensured that artificial intelligence systems process only the minimum amount of personal data necessary, and anonymized or pseudonymized data should be used wherever possible (see the sketch after this list).
* Security Measures: Technical (encryption, access control, penetration testing, etc.) and administrative (policy, training, auditing, etc.) measures should be taken to ensure the security of personal data processed in AI systems.
* Transparency and Explainability Mechanisms: In automated decision-making processes, mechanisms should be developed to explain to individuals the logic behind the decision and provide them with the opportunity to object.
* Regular Audits and Compliance: The compliance of AI systems with the KVKK should be reviewed regularly, and deficiencies should be identified and remedied through internal and external audits.
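
As a minimal sketch of the data minimization and pseudonymization idea referenced in the list above, and not a KVKK-mandated method, the example below keeps only the fields a model actually needs and replaces the direct identifier with a keyed hash; the field names, salt handling, and record layout are illustrative assumptions.

```python
# Illustrative sketch: keeping only the fields an AI model actually needs
# and replacing the direct identifier with a keyed hash (pseudonymization).
# Field names, the salt source, and the record layout are hypothetical.
import hashlib
import hmac

SECRET_SALT = b"load-from-secure-config"   # never hard-code in production

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash."""
    return hmac.new(SECRET_SALT, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

def minimize(record: dict, needed_fields: set[str]) -> dict:
    """Keep only the fields the model genuinely needs."""
    return {k: v for k, v in record.items() if k in needed_fields}

raw = {"national_id": "12345678901", "name": "Ayşe", "age": 34, "purchase_total": 1250.0}
prepared = minimize(raw, {"age", "purchase_total"})
prepared["subject_key"] = pseudonymize(raw["national_id"])
print(prepared)
```

Note that pseudonymized data still counts as personal data under the KVKK; only data that has been properly anonymized, so that it can no longer be linked to an identified or identifiable person, falls outside the Law’s scope.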

While artificial intelligence offers opportunities, it also presents significant challenges in the area of personal data protection. The legal and ethical use of this technology is crucial for both protecting the rights of individuals and minimizing legal risks for businesses. Data controllers must carefully plan their AI projects in accordance with the Personal Data Protection Law (KVKK) and relevant international standards, implement the necessary technical and administrative measures, and obtain legal consultancy during this process to protect themselves from potential sanctions and build a secure digital future.