The Korean Society for Journalism & Communication (KSJCS)
[ Article ]
Korean Journal of Journalism & Communication Studies - Vol. 68, No. 1, pp. 348-385
ISSN: 2586-7369 (Online)
Print publication date 29 Feb 2024
Received 13 Oct 2023; Accepted 25 Jan 2024; Revised 29 Jan 2024
DOI: https://doi.org/10.20879/kjjcs.2024.68.1.010

맞춤화 정보 환경에서 뉴스 추천 알고리즘에 대한 이용자 이해도와 인식의 중요성 : 관점 일치 뉴스 노출, 뉴스 신뢰, 뉴스 추구 행위와의 관계를 중심으로

이슬기** ; 강신후***
**부산대학교 미디어커뮤니케이션학과 조교수 sg.lee@pusan.ac.kr
***서강대학교 신문방송학과 박사과정 kswho98@naver.com
User Understanding and Perceptions of News Recommendation Algorithms: Relationships with Attitude-Consistent News Exposure, News Trust, and News-Seeking Behavior
Slgi (Sage) Lee** ; Shin-Who Kang***
**Assistant Professor, Department of Media and Communication, Pusan National University, corresponding author sg.lee@pusan.ac.kr
***Ph.D. Student, Department of Mass Communication, Sogang University kswho98@naver.com

초록

뉴스 추천 알고리즘이 보편화됨에 따라 추천 알고리즘의 이용이 이용자의 개별적 선호사항과 관심사에 치우친, 폐쇄적 정보 환경을 초래할 수 있다는 우려가 제기되었다. 본 연구는 이용자의 정보 환경이 알고리즘 맞춤화의 이용 여부에 따라 획일적으로 결정되는 것이 아닌, 알고리즘에 대한 이용자의 이해 정도와 인식에 따라 다르게 형성될 가능성을 탐색하였다. 이를 위해 (1) 추천 알고리즘에 대한 인지된 이해도와 알고리즘 인식(편리성, 프라이버시 침해, 편향성) 간의 관계를 탐색하고, (2) 알고리즘 인식들과 이용자 정보 환경의 폐쇄성을 예측할 수 있는 세 가지 변인(관점 일치 뉴스 노출, 뉴스 신뢰, 뉴스 추구 행위) 간의 관계를 분석하였다. 1,169명의 온라인 설문 응답을 분석한 결과, 첫째, 뉴스 추천 알고리즘에 대한 이해도가 높은 개인일수록 편리성과 편향성을 높게 인식하였으나, 프라이버시 침해 인식과는 유의미한 관계가 없었다. 둘째, 편리성 인식은 추천 알고리즘 수용 행위와 정적인 관계를 보였고, 이를 통해 관점 일치 뉴스에 대한 우연적 노출과 정적인 관계를 보였다. 또한 편리성 인식은 알고리즘 추천 뉴스를 신뢰하고, 뉴스를 능동적으로 추구하는 행위와도 정적인 관계를 보였다. 한편 프라이버시 침해 인식과 편향성 인식은 알고리즘 추천 뉴스 신뢰와는 유의미한 관계가 없었으나 알고리즘 수용 행위와의 부적 관계를 통해 관점 일치 뉴스 노출의 감소와 유의미한 관계를 보였다. 또한 편향성 인식이 뉴스 추구 행위와 정적 관계에 있었던 것에 반해 프라이버시 침해 인식은 뉴스 추구 행위와 부적인 관계를 보였다. 맞춤화 환경에서의 정보 다양성을 위해 알고리즘 이해와 인식이 중요함을 확인하였으며, 리터러시 교육과 관련된 함의를 논의하였다.

Abstract

The use of news recommendation algorithms is becoming more widespread, raising concerns that their implementation could lead to a “filter bubble,” a biased and closed information environment tailored to individual preferences. Previous literature presents mixed findings about whether recommendation algorithms contribute to constructing filter bubbles. While a body of literature has confirmed that an algorithmic environment can encourage attitude polarization and a homogeneous information environment, some research has shown contrasting evidence that the algorithmic environment can increase incidental exposure to diverse information or cross-cutting viewpoints. Each body of literature offers valid points about recommendation algorithms and their filter bubble possibilities; however, little work has considered users’ agency in using algorithms and how individuals’ use of algorithms can make a difference in the level of information diversity. Drawing on the possibility that users’ understanding and perceptions of algorithms can influence how they use them, this study explored whether users’ information environment is shaped differently by their understanding and perceptions of algorithms. We first examined the relationship between the level of algorithm understanding and three frequently discussed algorithm perceptions, namely, convenience, privacy risk, and bias perceptions. Convenience perception is defined as individuals’ belief that using a recommendation algorithm service would enable them to effortlessly locate the most recent information aligning with their desires and needs, ultimately helping them achieve their goals. Privacy risk perception concerns users’ recognition that the recommendation algorithm may compromise personal information while collecting and analyzing users’ private data. Bias perception pertains to the awareness that the use of news recommendation algorithms may hinder users’ exposure to a diverse array of social issues, potentially leading to the consumption of information that predominantly emphasizes one-sided perspectives or political viewpoints. We also examined how these algorithm perceptions are respectively associated with the following three predictors of a closed information environment: (1) exposure to attitude-consistent news, (2) news trust (trust in algorithm-recommended news), and (3) news-seeking behavior. The analysis of 1,169 online survey responses revealed that, first, individuals with a higher understanding of news recommendation algorithms perceived a greater level of convenience and bias in algorithms, but algorithm understanding had no significant relationship with privacy risk perception. Second, convenience perception had a positive relationship with recommendation algorithm-accepting behavior, which, in turn, was associated with an increase in exposure to attitude-consistent news. Convenience perception also showed positive relationships with both news trust and news-seeking behavior. Meanwhile, privacy risk and bias perceptions, while having no significant relationships with news trust, had negative relationships with attitude-consistent news exposure through lowered algorithm-accepting behavior. Bias perception showed a positive relationship with news-seeking behavior, whereas privacy risk perception was negatively linked to news-seeking behavior.
We confirmed the importance of users’ algorithm understanding and perceptions in achieving information diversity in a customized environment. The findings offer meaningful insights into how users’ perceptions of news recommendation algorithms should be improved and corrected to foster a more diverse information environment. Implications for algorithm literacy education are also discussed.
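To make the analytic logic of the abstract concrete, the block below is a minimal, hypothetical sketch of a regression-based mediation path of the kind described above (convenience perception → algorithm-accepting behavior → attitude-consistent news exposure), in the spirit of the Hayes (2017) approach cited in the references. The variable names (convenience, accepting, consistent_exposure) and the synthetic data are illustrative assumptions, not the study’s actual measures or dataset.

```python
# Hypothetical sketch of a regression-based mediation analysis
# (convenience perception -> algorithm-accepting behavior -> attitude-consistent news exposure).
# Synthetic data and variable names are assumptions for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1169  # sample size reported in the abstract

# Simulated stand-in measures (roughly 5-point-scale-like continuous scores)
convenience = rng.normal(3.5, 0.8, n)
accepting = 0.4 * convenience + rng.normal(0, 0.7, n)           # mediator
consistent_exposure = 0.3 * accepting + rng.normal(0, 0.8, n)   # outcome
df = pd.DataFrame({"convenience": convenience,
                   "accepting": accepting,
                   "consistent_exposure": consistent_exposure})

# Path a: perception -> accepting behavior; Path b: accepting behavior -> exposure (controlling for perception)
model_a = smf.ols("accepting ~ convenience", data=df).fit()
model_b = smf.ols("consistent_exposure ~ accepting + convenience", data=df).fit()
a = model_a.params["convenience"]
b = model_b.params["accepting"]
print(f"indirect effect (a*b) = {a * b:.3f}")

# Percentile bootstrap confidence interval for the indirect effect
boot_estimates = []
for _ in range(1000):
    s = df.sample(n, replace=True)
    a_i = smf.ols("accepting ~ convenience", data=s).fit().params["convenience"]
    b_i = smf.ols("consistent_exposure ~ accepting + convenience", data=s).fit().params["accepting"]
    boot_estimates.append(a_i * b_i)
ci_low, ci_high = np.percentile(boot_estimates, [2.5, 97.5])
print(f"95% bootstrap CI: [{ci_low:.3f}, {ci_high:.3f}]")
```

A bootstrap interval that excludes zero would indicate a significant indirect effect; the substantive estimates themselves are reported in the article, not here.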

Keywords:

News-Recommendation Algorithm Perceptions, Algorithm Understanding, Attitude-Consistent News Exposure, News Trust, News-Seeking Behavior

키워드:

뉴스 추천 알고리즘 인식, 알고리즘 이해도, 관점 일치 뉴스 노출, 뉴스 신뢰, 뉴스 추구 행위

Acknowledgments

This work was supported by a 2-Year Research Grant of Pusan National University (이 논문은 부산대학교 기본연구지원사업(2년)에 의하여 연구되었음).

References

  • Anderson, J. R., & Reder, L. M. (1979). An elaborative processing explanation of depth of processing. In L. S. Cermak & F. I. M. Craik (Eds.), Levels of processing in human memory (pp. 385-403). Hillsdale, NJ: Erlbaum.
  • Arts, J. W. C., Frambach, R. T., & Bijmolt, T. H. A. (2011). Generalizations on consumer innovation adoption: A meta-analysis on drivers of intention and behavior. International Journal of Research in Marketing, 28(2), 134-144. [https://doi.org/10.1016/j.ijresmar.2010.11.002]
  • Ayaburi, E. W. (2023). Understanding online information disclosure: Examination of data breach victimization experience effect. Information Technology & People, 36(1), 95-114. [https://doi.org/10.1108/ITP-04-2021-0262]
  • Cho, J., Ahmed, S., Hilbert, M., Liu, B., & Luu, J. (2020). Do search algorithms endanger democracy? An experimental investigation of algorithm effects on political polarization. Journal of Broadcasting & Electronic Media, 64(2), 150-172. [https://doi.org/10.1080/08838151.2020.1757365]
  • Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340. [https://doi.org/10.2307/249008]
  • Debevec, K., & Romeo, J. B. (1992). Self‐referent processing in perceptions of verbal and visual commercial information. Journal of Consumer Psychology, 1(1), 83-102. [https://doi.org/10.1207/s15327663jcp0101_05]
  • Gil de Zúñiga, H., Weeks, B., & Ardèvol-Abreu, A. (2017). Effects of the News-Finds-Me perception in communication: Social media use implications for news seeking and learning about politics. Journal of Computer-Mediated Communication, 22(3), 105-123. [https://doi.org/10.1111/jcc4.12185]
  • Gross, R., & Acquisti, A. (2005, November). Information revelation and privacy in online social networks. In Proceedings of the 2005 ACM Workshop on Privacy in the Electronic Society (pp. 71-80). [https://doi.org/10.1145/1102199.1102214]
  • Gunther, A. C., & Liebhart, J. L. (2006). Broad reach or biased source? Decomposing the hostile media effect. Journal of Communication, 56(3), 449-466. [https://doi.org/10.1111/j.1460-2466.2006.00295.x]
  • Haim, M., Graefe, A., & Brosius, H. B. (2018). Burst of the filter bubble? Effects of personalization on the diversity of Google News. Digital Journalism, 6(3), 330-343. [https://doi.org/10.1080/21670811.2017.1338145]
  • Hamilton, K. A., Lee, S. Y., Chung, U. C., Liu, W., & Duff, B. R. (2021). Putting the “Me” in endorsement: Understanding and conceptualizing dimensions of self-endorsement using intelligent personal assistants. New Media & Society, 23(6), 1506-1526. [https://doi.org/10.1177/1461444820912197]
  • Hayes, A. F. (2017). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. New York, NY: The Guilford Press.
  • Hyun, K., & Chae, Y. (2022). A study on the factors affecting the trust of portal news. Media and Society, 30(2), 5-41.
    현기득·채영길 (2022). 포털 뉴스 신뢰도 지각에 미치는 요인들과 그 영향에 대한 분석. <언론과 사회>, 30권 2호, 5-41. [ https://doi.org/10.52874/medsoc.2022.05.30.2.5 ]
  • Hyun, K., & Seo, M. (2019). Comparing conservative and progressive audiences in their partisan perception, trust and use of hostile and friendly news media. Korean Journal of Journalism & Communication Studies, 63(2), 46-76.
    현기득·서미혜 (2019). 한국 정파언론 환경의 특수성은 보수와 진보 수용자의 매체 태도와 이용에 차별적 영향을 미치는가?: 적대적 및 우호적 매체에 대한 정파성 지각이 매체 신뢰와 이용에 미치는 영향. <한국언론학보>, 63권 2호, 46-76. [ https://doi.org/10.20879/kjjcs.2019.63.2.002 ]
  • Keum, J. (2023, July 8). Looking into news algorithms shared by Naver. Media Today. Retrieved 10/1/23 from http://www.mediatoday.co.kr/news/articleView.html?idxno=311154
    금준경 (2023, 7, 8). 네이버 뉴스 알고리즘 공개 내용 살펴보니. <미디어오늘>.
  • Kim, A. (2015). Understanding media literacy education. Seoul: Communication Books.
    김아미 (2015). <미디어 리터러시 교육의 이해>. 서울: 커뮤니케이션북스
  • Kim, D. (2023, April 18). National unity commission launches media special team... Aims to block fake news on portals. The Chosun Ilbo. Retrieved 10/1/23 from https://www.chosun.com/politics/politics_general/2023/04/17/NJS7OF4VC5ESTEWIGI2TOGCF2E/
    김동하 (2023, 4, 18). 국민통합위, 미디어특위 출범... 포털 가짜뉴스 차단 나선다. <조선일보>.
  • Kim, K., Kim, A., Kim, H., Park, D., Park, Y., Song, H., Lee, K., Jang, Y., Jung, J., Cho, Y., & Hong, Y. (2023). Media literacy: Perspectives on viewing the world. Seoul: Korea Press Foundation.
    김경달·김아미·김해원·박대용·박영흠·송해엽·이경원·장윤재·정재민·조연하·홍종윤 (2023). <세상을 바라보는 눈 미디어 리터러시>. 서울: 한국언론진흥재단.
  • Kim, K., Lee, S., & Go, J. (2018). Antecedents and coping strategies in perceived news overload and news uses. Korean Journal of Journalism & Communication Studies, 62(5), 7-36.
    김균수·이선경·고준 (2018). 뉴스과잉 지각과 뉴스이용의 관계: 선행요인과 대응전략을 중심으로. <한국언론학보>, 62권5호, 7-36. [ https://doi.org/10.20879/kjjcs.2018.62.5.001 ]
  • Kim, M. (2022a). The effect of user’s attitude on perception of algorithm recommendation customized service: Mediating effects of false consensus, perceived risk and perceived bias. Journal of Communication Science, 22(2), 196-231.
    김미경 (2022a). 디지털 플랫폼의 알고리즘 추천 서비스의 개인화 지각이 이용태도에 미치는 영향: 허위합의, 지각된 위험, 지각된 편향의 매개효과. <언론과학연구>, 22권 2호, 196-231. [ https://doi.org/10.14696/jcs.2022.06.22.2.196 ]
  • Kim, M. (2022b). The effect of perception of the usefulness of YouTube algorithm recommendation on media trust on YouTube: Mediated effects of perceived harm, confirmation bias, and privacy concerns. Journal of Speech, Media and Communication Research, 21(4), 7-42.
    김미경 (2022b). 유튜브 알고리즘 추천의 유용성 인식에 따른 유튜브에 대한 미디어 신뢰도: 지각된 유해성, 확증편향, 프라이버시 염려의 매개 효과. <한국소통학보>, 21권 4호, 7-42. [ https://doi.org/10.51652/ksmca.2022.21.4.1 ]
  • Kim, M., & Lee, E. (2019). Digital news algorithm platform’s news reliability and false consensus effect: An analysis of the influence of motivation, perceived usefulness, perceived risk and perceived bias. Journal of Political Communication, 55, 39-83.
    김미경·이은지 (2019). 디지털 뉴스 알고리즘 플랫폼의 뉴스 신뢰도와 합의착각 효과: 이용 동기, 지각된 유용성, 지각된 위험성과 지각된 편향성의 영향. <정치커뮤니케이션연구>, 통권 55호, 39-83.
  • Kiousis, S. (2001). Public trust or mistrust? Perceptions of media credibility in the information age. Mass Communication & Society, 4(4), 381-403. [https://doi.org/10.1207/S15327825MCS0404_4]
  • Lee, J., & Kang, J. (2020). A new perception of news consumption: Focusing on the relationship between News-Finds-Me and news consumption through traditional media. Social Science Research, 27(3), 227-257.
    이장근·강재원 (2020). 뉴스 소비의 새로운 인식: 뉴스파인즈미(News-Finds-Me)와 전통매체를 통한 뉴스 소비의 관계를 중심으로. <사회과학연구>, 27권 3호, 227-257. [ https://doi.org/10.46415/jss.2020.09.27.3.227 ]
  • Lee, S., & Son, Y. (2018). Coorientational analysis among media literacy practitioners - literacy experienced persons - literacy nonexperienced persons. Journal of Communication Research, 55(2), 213-257.
    이수범·손영곤 (2018). 미디어 리터러시에 대한 기획자, 경험자, 비경험자간 인식 차이: 상호지향성 모델을 중심으로. <언론정보연구>, 55권 2호, 213-257. [ https://doi.org/10.22174/jcr.2018.55.2.213 ]
  • Liu, Y., Keum, H., & Cho, J. (2020). The effects of customized information environments on knowledge and expression on SNS: Focusing on the roles of selective exposure and incidental exposure. Korean Journal of Journalism & Communication Studies, 64(4), 289-324.
    유연·금희조·조재호 (2020). 이용자의 정보 환경 맞춤화가 시사 지식과 SNS상의 의견 표현에 미치는 영향: 정치 성향에 따른 선택적 노출과 우연적 이견 노출의 역할을 중심으로. <한국언론학보>, 64권 4호, 289-324. [ https://doi.org/10.20879/kjjcs.2020.64.4.008 ]
  • Mao, C. M., & Hovick, S. R. (2022). Adding affordances and communication efficacy to the technology acceptance model to study the messaging features of online patient portals among young adults. Health Communication, 37(3), 307-315. [https://doi.org/10.1080/10410236.2020.1838106]
  • Metzger, M. J., & Flanagin, A. J. (2013). Credibility and trust of information in online environments: The use of cognitive heuristics. Journal of Pragmatics, 59, 210-220. [https://doi.org/10.1016/j.pragma.2013.07.012]
  • Metzger, M. J., Hartsell, E. H., & Flanagin, A. J. (2020). Cognitive dissonance or credibility? A comparison of two theoretical explanations for selective exposure to partisan news. Communication Research, 47(1), 3-28. [https://doi.org/10.1177/0093650215613136]
  • Min, J., & Kim, B. (2013). A study on continued intention of social network services by applying privacy calculus model: Facebook and KakaoTalk cases. Information Systems Review, 15(1), 105-122.
    민진영·김병수 (2013). 프라이버시 계산 모형을 적용한 SNS 지속 사용 의도에 대한 연구: 페이스북과 카카오톡 사례 중심으로. <Information Systems Review>, 15권 1호, 105-122.
  • Möller, J., Trilling, D., Helberger, N., & van Es, B. (2018). Do not blame it on the algorithm: An empirical assessment of multiple recommender systems and their impact on content diversity. Information, Communication & Society, 21(7), 959-977. [https://doi.org/10.1080/1369118X.2018.1444076]
  • Nechushtai, E., & Lewis, S. C. (2019). What kind of news gatekeepers do we want machines to be? Filter bubbles, fragmentation, and the normative dimensions of algorithmic recommendations. Computers in Human Behavior, 90, 298-307. [https://doi.org/10.1016/j.chb.2018.07.043]
  • Nie, N. H., Miller, D. W., Golde, S., Butler, D. M., & Winneg, K. (2010). The world wide web and the US political news market. American Journal of Political Science, 54(2), 428-439. [https://doi.org/10.1111/j.1540-5907.2010.00439.x]
  • Oh, S. (2019). Mobile news use patterns after portals’ adoption of algorithm-based news arrangement. Media Issue, 5(4), 1-16.
    오세욱 (2019). 포털 등의 알고리즘 배열 전환 이후 모바일 뉴스 이용 행태. <미디어이슈>, 5권 4호, 1-16.
  • Oh, S., & Yoon, H. (2022). ‘Algorithm’ approached with ‘media literacy’: Focusing on the case of ‘NewsAlgo’. Korean Journal of Broadcasting & Telecommunications Research, 2022 Special Issue, 7-37.
    오세욱·윤현옥 (2022). ‘미디어 리터러시’로 접근한 ‘알고리즘’: ‘뉴스알고(NewsAlgo)’ 사례를 중심으로. <방송통신연구>, 2022년 특집호, 7-37.
  • Ostlund, L. E. (1974). Perceived innovation attributes as predictors of innovativeness. Journal of Consumer Research, 1(2), 23-29. [https://doi.org/10.1086/208587]
  • Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. New York, NY: Penguin Press.
  • Park, S., Sung, I., Seo, S., Hwang, J., Noh, J., & Kim, D. (2017). News recommendation service using machine learning: Focusing on Kakao’s RUBICS. Journal of Cybercommunication Academic Society, 34(1), 5-48.
    박승택·성인재·서상원·황지수·노지성·김대원 (2017). 기계학습 기반의 뉴스 추천 서비스 구조와 그 효과에 대한 고찰: 카카오의 루빅스를 중심으로. <사이버커뮤니케이션학보>, 34권 1호, 5-48.
  • Pavlou, P. A. (2002). Institution-based trust in interorganizational exchange relationships: The role of online B2B marketplaces on trust formation. The Journal of Strategic Information Systems, 11(3-4), 215-243. [https://doi.org/10.1016/S0963-8687(02)00017-3]
  • Petty, R. E., & Cacioppo, J. T. (1986). The elaboration likelihood model of persuasion. Advances in Experimental Social Psychology, 19, 123-205. [https://doi.org/10.1016/S0065-2601(08)60214-2]
  • Shin, Y., & Lee, S. (2021). An analysis of filter bubble phenomenon on YouTube recommendation algorithm using text mining. The Journal of the Korea Contents Association, 21(5), 1-10.
    신유진·이상우 (2021). 텍스트 마이닝 기법을 이용한 유튜브 추천 알고리즘의 필터버블 현상 분석. <한국콘텐츠학회논문지>, 21권 5호, 1-10.
  • Song, H., Jung, J., & Kim, Y. (2017). Perceived news overload and its cognitive and attitudinal consequences for news usage in South Korea. Journalism & Mass Communication Quarterly, 94(4), 1172-1190. [https://doi.org/10.1177/1077699016679975]
  • Strauß, N., Huber, B., & Gil de Zúñiga, H. (2021). Structural influences on the news finds me perception: Why people believe they don’t have to actively seek news anymore. Social Media + Society, 7(2). [https://doi.org/10.1177/20563051211024966]
  • Sundar, S. S., & Marathe, S. S. (2010). Personalization versus customization: The importance of agency, privacy, and power usage. Human Communication Research, 36(3), 298-322. [https://doi.org/10.1111/j.1468-2958.2010.01377.x]
  • Sunstein, C. R. (2001). Republic.com. Princeton, NJ: Princeton University Press.
  • Thorson, K. (2020). Attracting the news: Algorithms, platforms, and reframing incidental exposure. Journalism, 1-16. [https://doi.org/10.1177/1464884920915352]
  • Thorson, K., Cotter, K., Medeiros, M., & Pak, C. (2021). Algorithmic inference, political interest, and exposure to news and politics on Facebook. Information, Communication & Society, 24(2), 183-200. [https://doi.org/10.1080/1369118X.2019.1642934]
  • Tsfati, Y., & Cappella, J. N. (2003). Do people watch what they do not trust? Exploring the association between news media skepticism and exposure. Communication Research, 30(5), 504-529. [https://doi.org/10.1177/0093650203253371]
  • Velasquez, A., & Rojas, H. (2017). Political expression on social media: The role of communication competence and expected outcomes. Social Media + Society, 3(1). [https://doi.org/10.1177/2056305117696521]
  • Weeks, B. E., Lane, D. S., Kim, D. H., Lee, S. S., & Kwak, N. (2017). Incidental exposure, selective exposure, and political information sharing: Integrating online exposure patterns and expression on social media. Journal of Computer-Mediated Communication, 22(6), 363-379. [https://doi.org/10.1111/jcc4.12199]
  • Weihong, X. I. E., & Qian, Z. (2022). The online website privacy disclosure behavior of users based on concerns-outcomes model. Soft Computing, 26(21), 11733-11747. [https://doi.org/10.1007/s00500-022-07369-1]
  • Yan, Y., Zha, D., Yan, A., & Zhang, Q. (2016). Exploring the effect of individual differences on self-efficacy in getting information. Information Development, 32(4), 1097-1108. [https://doi.org/10.1177/0266666915588795]
  • Yeom, J., & Jung, S. (2018). Research on fake news perception and fact-checking effect: Role of prior-belief consistency. Korean Journal of Journalism & Communication Studies, 62(2), 41-80.
    염정윤·정세훈 (2018). 가짜뉴스에 대한 인식과 팩트체크 효과 연구: 기존 신념과의 일치 여부를 중심으로. <한국언론학보>, 62권 2호, 41-80. [ https://doi.org/10.20879/kjjcs.2018.62.2.002 ]
  • Zhang, Q., & Sapp, D. A. (2013). Psychological reactance and resistance intention in the classroom: Effects of perceived request politeness and legitimacy, relationship distance, and teacher credibility. Communication Education, 62(1), 1-25. [https://doi.org/10.1080/03634523.2012.727008]
  • Zhang, X., Akhter, S., Nassani, A. A., & Haffar, M. (2022). Impact of news overload on social media news curation: Mediating role of news avoidance. Frontiers in Psychology, 13. [https://doi.org/10.3389/fpsyg.2022.865246]