Governments' Resort to Artificial Intelligence for National Security and the Threat to Human Rights

Article type: Policy-making and Political Governance

Authors

1 Department of Law and Political Science, Islamic Azad University; university lecturer

2 PhD in Political Science (Political Thought), Islamic Azad University, Tabriz Branch

3 MA in Political Science, University of Tabriz

Abstract

Many governments use artificial intelligence and related technologies to maintain national security, and they justify this use with the argument that "technology and machines are neutral." This study, which aims to examine governments' resort to artificial intelligence for national security and the resulting threat to the most fundamental human rights, asks: how does governments' resort to artificial intelligence in pursuit of national security goals threaten the most fundamental human rights? The research concludes that artificial intelligence technology can embed prejudiced information, biases, and errors, which produce both false-positive and false-negative results. When such technology is developed for national security purposes, aspects of citizens' human rights are put at risk. Technology cannot, in fact, be neutral, because inherent bias and error are inseparable from it; under the pretext of protecting national security, this seriously threatens people's fundamental rights, including privacy, the right to a fair trial, freedom of opinion, and even the right to life. Using a descriptive-analytical method and note-taking for data collection, this study offers a legal examination of the rights that have been endangered by governments' use of artificial intelligence for national security purposes.

Keywords


Article title [English]

Governments' Resort to Artificial Intelligence for National Security and the Threat to Human Rights

Authors [English]

  • Pezhman Elhami Taleshmikaeil 1
  • Amir Kord Karimie 2
  • Hessam Urujii 3
1 Department of Law and Political Science, Islamic Azad University; university lecturer
2 PhD in Political Science (Political Thought), Islamic Azad University, Tabriz Branch
3 MA in Political Science, University of Tabriz
Abstract [English]

Many governments use artificial intelligence and related technologies to maintain national security, and they justify this use with the argument that "technology and machines are neutral." This study, which aims to examine governments' resort to artificial intelligence for national security and the resulting threat to the most fundamental human rights, asks: how does governments' resort to artificial intelligence in pursuit of national security goals threaten the most fundamental human rights? The research concludes that artificial intelligence technology can embed prejudiced information, biases, and errors, which produce both false-positive and false-negative results. When such technology is developed for national security purposes, aspects of citizens' human rights are put at risk. Technology cannot, in fact, be neutral, because inherent bias and error are inseparable from it; under the pretext of protecting national security, this seriously threatens people's fundamental rights, including privacy, the right to a fair trial, freedom of opinion, and even the right to life. Using a descriptive-analytical method and note-taking for data collection, this study offers a legal examination of the rights that have been endangered by governments' use of artificial intelligence for national security purposes.

Keywords [English]

  • artificial intelligence
  • national security
  • surveillance
  • right to privacy
  • freedom of speech
  • government

