The Korean Society for Journalism & Communication (KSJCS)
Korean Journal of Journalism & Communication Studies, Vol. 67, No. 4, pp. 238-271
ISSN: 2586-7369 (Online)
Print publication date: 31 Aug 2023
Received: 16 Feb 2023; Revised: 13 Jul 2023; Accepted: 28 Jul 2023
DOI: https://doi.org/10.20879/kjjcs.2023.67.4.007

팩트체킹 인공지능 기술과 사실성의 역학 : 현장 참여자의 심층 인터뷰 분석

박소영** ; 이정현***
**조선대학교 신문방송학과 조교수 sy.park@chosun.ac.kr
***중앙대학교 인문콘텐츠연구소 HK연구교수 maryjlee1205@cau.ac.kr
Artificial Intelligence Fact-Checking Technology and the Dynamics of Factuality: An In-depth Interview Analysis of Field Participants
Soyoung Park** ; Jeonghyun Lee***
**Assistant Professor, Chosun University (first author), sy.park@chosun.ac.kr
***HK Research Professor, Humanities Research Institute, Chung-Ang University (corresponding author), maryjlee1205@cau.ac.kr

초록

디지털 환경 속에서 잘못되거나 왜곡된 정보들이 대량으로 생산되고 빠르게 유통, 소비되면서 ‘팩트체킹(fact-checking, 사실 확인)’이 대응책으로 주목 받고 있다. 국내에서도 팩트체크 전문기관이 등장했고 팩트체크 관행을 뉴스 생산 현장에서 강조하고자 하는 언론사 수도 증가해왔으며, 2010년대 후반부터는 정부의 지원에 힘입어 인공지능을 기반으로 팩트체킹을 자동화하려는 시도가 발전해 왔다. 하지만 정작 국내 인공지능 팩트체크 기술 개발의 필요성, 한계, 전망, 방향성 등에 대한 현장의 목소리는 사회적 담론으로 충분히 생산되지 못했다. 이 연구는 이에 문제 의식을 갖고 팩트체크 현장에의 참여 경험이 있는 이해관계자를 인터뷰함으로써 국내 인공지능 기반 자동화 팩트체크 기술의 현황, 과제 및 전망을 제시하고자 했다. 국내 팩트체크 전문기관에서 활동하고 있는 7인의 주요 이해관계자에 대해 심층 인터뷰를 진행했고, 한국형 인공지능 기반 자동화 팩트체킹 기술 개발과 관련된 정부기관의 기술개발 지원 현황과 학계 발표 연구 등을 현상 분석했다. 연구 결과, 현재 한국형 인공지능 팩트체크 기술은 국가적 차원의 연구지원이 소모적 정쟁 속에서 정치적으로 변질되고, 양질의 한글 데이터를 확보하기 어려운 비우호적인 연구환경 속에서 연구가 답보 상태에 머물러 있음이 드러났다. 현장 참여자들은 팩트체킹 과정에서 인공지능의 역할과 범위에 대해 다양한 이해관계자 간 합의점을 찾기 위한 사회적 노력이 필요하다고 강조했다. 특히, 가치와 맥락 정보가 내재된 사회적 해석의 영역인 팩트체킹에 내재된 본질적인 주관성과 정치성을 감안해 기술 개발 과정에 사회적 합의가 수반되어야 함을 강조하고 있다. 본 연구의 결과는 팩트체크 생태계 내에서 실제 활동 중인 참여자들의 시각을 사회적 담론으로 재구성하는 데 일조하고, 다양한 행위자의 지형도 안에서 한국형 인공지능 팩트체크 기술이 나아갈 방향성을 제시하고 있다. 본 연구의 의의는 국내 인공지능 기반 팩트체크 기술 개발 과정과 시행착오를 기술사의 일부로 기록하여 남기고, 사회적 숙의 과정을 거친 인공지능 개발 방향을 제안하는 데 있다.

Abstract

Fact-checking has drawn attention as a countermeasure against false or misleading information that is produced in large quantities and rapidly distributed and consumed in the digital environment. In response, the number of fact-checking organizations, as well as news outlets that seek to emphasize fact-checking practices in the news production process, has increased in Korea. Since the late 2010s, attempts to automate fact-checking with artificial intelligence (AI) technologies have also developed with government-backed financial support. To date, however, social discourse on the necessity, limitations, prospects, and direction of AI-based fact-checking technologies has not been sufficiently developed. Addressing this gap, this study presents the current status, challenges, and prospects of Korean AI fact-checking research and development through in-depth interviews with seven stakeholders who have been involved in two representative fact-checking institutes, together with an analysis of the current state of government support for technology development and of published academic research. The results suggest that Korean AI fact-checking research has stalled in an unfavorable research environment in which national research support has become politicized amid wasteful partisan strife and high-quality Korean-language data are difficult to secure. Participants emphasized that social efforts are needed to reach consensus among the various stakeholders on the role and scope of AI in the fact-checking process. They also suggested that the development of fact-checking technology should be accompanied by mature social discourse, given the subjectivity and politics inherent in fact-checking, a domain of social interpretation embedded with values and contextual information. Based on these findings, the study emphasizes the need to explore alternative governance systems that ensure and strengthen the independence and impartiality of fact-checking, and calls for self-reflection by all members of society so that the fact-checking process can become a virtuous cycle. This means that all of us need to form an accurate perception of what constitutes a "fact," implement appropriate practices in the actual fact-checking process, and, most importantly, respect and accept fact-checking results without being bound by partisanship, thereby solidifying the socio-cognitive foundation for healthy social debate. Overall, our findings help restructure the sociotechnical discourse across the topography of actors in the fact-checking ecosystem and suggest the future direction of AI fact-checking technologies in Korea. The significance of this study lies in documenting the trial-and-error development of national AI-based fact-checking technology as part of its technological history and in raising concerns about AI fact-checking research and regulation in Korea being pursued in an overly politicized or uncritical manner without mature social deliberation.

Keywords:

Fact-Check, Artificial Intelligence, Automation, Journalism, Fake News

키워드:

팩트체크, 인공지능, 자동화, 저널리즘, 가짜뉴스

Acknowledgments

This work was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2017S1A6A3A01078538)(이 논문은 2017년 대한민국 교육부와 한국연구재단의 지원을 받아 수행된 연구임(NRF-2017S1A6A3A01078538)).
