News
16 de diciembre de 2024
Data Protection in the era of AI-powered marketing
By: Paola Cardozo Solano - PhD Researcher & Lecturer, Vrije Universiteit (VU) Amsterdam, The Netherlands
Second edition. Data Protection and Privacy Concerns in AI-based Marketing
The previous post explored how Artificial Intelligence (AI) is transforming the marketing discipline by providing unprecedented insights into consumer behavior, allowing marketers to make predictions, personalize offers and advertisements, and automate processes. However, the omnipresence of data processing in all stages of marketing, exacerbated by AI, raises major concerns, as these new techniques can subliminally influence individuals, shaping their preferences and beliefs (Verdoodt, 2019). For instance, Malgieri (2021, p. 2) claims that commercial manipulation via AI tools is “a growing reality and a concrete threat to mental privacy in the digital market”. Therefore, the growing demand from users for greater privacy, especially in areas such as advertising and marketing, is unsurprising. These fields have always relied on consumer persuasion but now risk violating fundamental rights on a massive scale.
The following outline of the general concerns of AI marketing in terms of privacy and data protection adopts the structure proposed by Tamò-Larrieux (2018), which considers security, anonymity, autonomy, the balancing of interests, and transparency as fundamental angles of data protection under the GDPR. Additionally, considerations of data minimization and purpose limitation are incorporated into the discussion.
Concerning security, Quach et al. (2022) state that the use of digital technologies and marketing’s growing reliance on data escalate privacy concerns regarding how firms’ actions affect users, buyers, and regulators. It is worth noting that today’s AI systems are vulnerable to adversarial attacks that violate confidentiality, integrity, or availability by injecting malicious data at the testing phase, causing false predictions (Oseni et al., 2021). Machine learning models can also unintentionally disclose private information from their training data, which is especially problematic when those models are trained on sensitive datasets such as health records (Jurafsky & Martin, 2008). For example, a data breach in marketing software trained on personal data about customer habits and preferences, classified as special categories of data under the GDPR, can have severe consequences.
In addition, the literature has identified a further risk to data privacy: AI’s capacity to re-identify anonymized data (Kopalle et al., 2022). A striking study by Rocher et al. (2019) found that, even with heavy sampling and anonymization, 99.98% of US citizens could be accurately re-identified using just 15 data points, calling into question the adequacy of current de-identification practices. Another study found that social security numbers can be predicted from publicly available data, and that hidden social connections and sexual orientation can be inferred from public social media (Acquisti & Gross, as cited in Bleier et al., 2020).
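The intuition behind this re-identification risk can be shown with a toy sketch (the records, field names, and values below are entirely hypothetical): counting how often each combination of quasi-identifiers occurs in a “de-identified” dataset reveals that a few innocuous attributes, taken together, single out most records.

```python
from collections import Counter

# Hypothetical, synthetic records: names removed ("anonymized"),
# but quasi-identifiers (ZIP code, birth year, gender) retained.
records = [
    {"zip": "1081", "birth_year": 1990, "gender": "F"},
    {"zip": "1081", "birth_year": 1990, "gender": "M"},
    {"zip": "1082", "birth_year": 1985, "gender": "F"},
    {"zip": "1081", "birth_year": 1990, "gender": "F"},
    {"zip": "1083", "birth_year": 1972, "gender": "M"},
]

def uniqueness(records, keys):
    """Fraction of records whose quasi-identifier combination is unique,
    i.e. records that could be singled out by anyone who knows those
    attributes about a person."""
    combos = Counter(tuple(r[k] for k in keys) for r in records)
    unique = sum(1 for r in records if combos[tuple(r[k] for k in keys)] == 1)
    return unique / len(records)

print(uniqueness(records, ["zip"]))                          # 0.4: ZIP alone rarely singles anyone out
print(uniqueness(records, ["zip", "birth_year", "gender"]))  # 0.6: combined attributes isolate most rows
```

With real population-scale data and 15 such attributes, as in Rocher et al. (2019), this uniqueness fraction approaches 100%, which is why removing direct identifiers alone is a weak safeguard.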
Consequently, the mass processing of data for marketing purposes poses risks to individuals since, in many cases, it involves personal data and enables the individualization of consumers. Once again, the privacy and data protection expectations of potential customers are likely at odds with the sharing of sensitive information with entities like chatbots or the use of past purchase data, which belongs to the individual’s private sphere.
In regard to individual autonomy, the various categorizations of algorithmic manipulation, as summarized by Malgieri (2021), share common elements that apply to AI marketing strategies. These strategies have the potential to (i) micro-target individuals based on previously collected data; (ii) exploit cognitive and emotional vulnerabilities; (iii) use non-transparent advanced data analytic tools; and (iv) influence people in a hidden manner. More specifically, machine learning tools applied to marketing raise concerns about “the growing capacity not only to predict choices but also to influence emotions and thoughts (…) sometimes subliminally” (Committee of Ministers, 2019). In our view, this is one of the most worrisome consequences of the indiscriminate use of advanced marketing tools in the absence of an ethical or legal framework deterring conduct that harms fundamental rights.
Individuals may feel uncomfortable receiving personalized advertising once they realize how extensively their data is being collected and analyzed. Similarly, customers who are more concerned about their data privacy tend to respond negatively to the brand (Aslam et al., 2021). One key takeaway from this analysis is therefore that complying with privacy and data protection regulations adds value and makes organizations more competitive; marketers should invest in understanding the privacy and data protection expectations of their target audience in order to offer an even higher level of protection than the one contained in the legislation.
Finally, transparency is essential for consumers, who increasingly question the uncanny accuracy of targeted advertisements and call for stronger protections (Quach et al., 2022). They seek clear information on how their personal data is processed, as well as insight into how predictions, classifications, and recommendations about their behavior are generated. How AI arrives at a recommendation is often opaque to humans, which can have implications for fair marketing exchanges (Pitt et al., 2021). For example, users may be surprised by the psychographic and behavioral factors Spotify uses to generate recommendations, as listening patterns reflect preferences, location, and even moods and habits (van de Haar et al., 2019).
Haapio and Uusitalo observe that most organizations gather large amounts of data on app use or cookies for mining information “and selectively use it to fit the needs of the organization” (2021, p. 152). Such strategies built around the massive capture of personal data often conflict with the principles of data minimization and purpose limitation. From a different perspective, in the study conducted by Aslam et al. (2021), several interviewees in charge of marketing strategies explained that companies do not know what to do with the enormous quantity of data they collect; thus, the most significant AI-related issues for companies are data quality and data storage. It is safe to say that the bulk-gathering mentality will change because keeping data is expensive for companies; as Belo & Maldonado (2022) affirm, data minimization will not only be a principle but a business need.
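As a rough illustration of what data minimization and purpose limitation can look like in practice, the following sketch (all purpose names, field names, and values are hypothetical, not a GDPR-compliance tool) keeps only the fields a declared processing purpose actually requires and drops everything else before storage.

```python
# Each processing purpose declares the minimal set of fields it needs
# (hypothetical allow-lists for illustration only).
PURPOSES = {
    "order_fulfilment": {"name", "address", "email"},
    "product_recommendations": {"purchase_history"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Return a copy of the record containing only the fields
    required for the declared purpose."""
    allowed = PURPOSES[purpose]
    return {k: v for k, v in record.items() if k in allowed}

raw = {
    "name": "A. Buyer",
    "address": "Somewhere 1",
    "email": "a@example.com",
    "purchase_history": ["sku-1", "sku-2"],
    "browsing_log": ["page-1", "page-2"],  # collected, but needed for no declared purpose
}

print(minimize(raw, "order_fulfilment"))
# {'name': 'A. Buyer', 'address': 'Somewhere 1', 'email': 'a@example.com'}
```

Filtering at collection time, rather than storing everything "just in case", is precisely the shift from a bulk-gathering mentality to minimization as a business need.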
Interestingly, Google’s (2021) report on privacy points out that consumers’ expectations go beyond what marketers expect and can deliver. The industry is thus aware that much remains to be done to comply with the new data protection frameworks and satisfy consumers’ growing demand for more protection and control over their personal data, especially when AI is involved.
While much attention has been given to consumer privacy expectations in the business-to-consumer (B2C) space, the differences between B2C and business-to-business (B2B) marketing strategies also merit consideration. B2C marketing strategies appeal to customers’ emotions and the irrationality of purchase decisions, whereas B2B marketing promotes product features and functionalities from suppliers to buyers (Rėklaitis & Pilelienė, 2019). Given these distinctions, the literature has predominantly concentrated on marketing and advertising techniques aimed at consumers, leaving B2B privacy expectations little understood, a territory that nonetheless holds substantial potential for exploration.
Swani et al. (2023) argue that buyers’ privacy and data protection expectations extend beyond information protection and must be safeguarded. They further contend that privacy violations in the B2B environment can affect the value chain and the company’s competitiveness.
However, the situation is more severe than it appears at first glance; referencing a 2018 Salesforce report, Swani et al. (2023) reveal that around 50% of buyers expressed discomfort with the way their personal and business data is processed by suppliers, especially when automation technologies, including AI, are used. Buyers also reported that marketers invaded their personal time, failed to meet their transparency expectations about how their data would be used, lacked sufficient protocols, and conflicted with their expectation to be left alone, thus surpassing the orbit of the right to data protection and landing in the field of the right to privacy.
In sum, AI and data-intensive technologies pose significant privacy challenges, as they involve the processing of personal data that can impact individuals’ private lives. Moreover, seen in the light of marketing, it becomes concerning how marketers are skeptical about implementing data protection in their campaigns. Therefore, privacy by design becomes crucial in protecting individuals’ privacy throughout the entire product lifecycle.
The extended use of AI in marketing raises significant privacy and data protection concerns. Key areas of concern include security vulnerabilities, the risk of re-identification of anonymized data, algorithmic manipulation impacting individuals’ autonomy, and the lack of transparency in data practices. This is of growing relevance, considering that consumers are increasingly demanding more information about how their data is collected, processed, and used for marketing purposes.
The next post will introduce two prominent AI marketing applications: AI-based Customer Relationship Management (CRM) and chatbots. It will delve into their functionality, highlight the benefits they offer to marketers, and examine the key concerns they pose in terms of data protection.
Bibliography:
Aslam, B., Karjaluoto, H., & Varmavuo, E. (2021). Data obstacles and privacy concerns in artificial intelligence initiatives. In O. Niininen (Ed.), Contemporary Issues in Digital Marketing. Milton: Taylor and Francis. https://doi.org/10.4324/9781003093909
Bleier, A., Goldfarb, A., & Tucker, C. (2020). Consumer privacy and the future of data-based innovation and marketing. International Journal of Research in Marketing, 37(3), 466-480. https://doi.org/10.1016/j.ijresmar.2020.03.006
Belo, J., & Maldonado, S. (n.d.). Artificial Intelligence in MarTech and AdTech. https://open.spotify.com/episode/4nnwVP0D3scEsLGHlRrk4L
Committee of Ministers. (2019). Declaration by the Committee of Ministers on the manipulative capabilities of algorithmic processes. Data Protection. https://www.coe.int/en/web/data-protection/-/declaration-by-the-committee-of-ministers-on-the-manipulative-capabilities-of-algorithmic-processes
Google. (2021). Privacy by Design: Exceeding Customer Expectations. https://iapp.org/resources/article/privacy-by-design-exceeding-customer-expectations/
Haapio, H., & Uusitalo, O. (2021). ‘Interesting but scary’: Customers’ perceived value of MyData. In O. Niininen (Ed.), Contemporary Issues in Digital Marketing. Milton: Taylor and Francis. https://doi.org/10.4324/9781003093909
Jurafsky, D., & Martin, J. (2008). Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition (Vol. 2).
Kopalle, P. K., Gangwar, M., Kaplan, A., Ramachandran, D., Reinartz, W., & Rindfleisch, A. (2022). Examining artificial intelligence (AI) technologies in marketing via a global lens: Current trends and future research opportunities. International Journal of Research in Marketing, 39(2), 522-540. https://doi.org/10.1016/j.ijresmar.2021.11.002
Malgieri, G. (2021). In/acceptable Marketing and Consumers’ Privacy Expectations: Four Tests from EU Data Protection Law (SSRN Scholarly Paper No. 3973353). https://doi.org/10.2139/ssrn.3973353
Oseni, A., Moustafa, N., Janicke, H., Liu, P., Tari, Z., & Vasilakos, A. (2021). Security and Privacy for Artificial Intelligence: Opportunities and Challenges (arXiv:2102.04661). arXiv. http://arxiv.org/abs/2102.04661
Pitt, C., Paschen, J., Kietzmann, J. H., Pitt, L. F., & Pala, E. (2021). Artificial Intelligence, Marketing, and the History of Technology: Kranzberg’s Laws as a Conceptual Lens. Australasian Marketing Journal, 31, 81-89.
Quach, S., Thaichon, P., Martin, K. D., Weaven, S., & Palmatier, R. W. (2022). Digital technologies: Tensions in privacy and data. Journal of the Academy of Marketing Science, 50(6), 1299-1323. https://doi.org/10.1007/s11747-022-00845-y
Rėklaitis, K., & Pilelienė, L. (2019). Principle Differences between B2B and B2C Marketing Communication Processes. Management of Organizations: Systematic Research, 81, 73-86. https://doi.org/10.1515/mosr-2019-0005
Tamò-Larrieux, A. (2018). Designing for Privacy and Its Legal Framework: Data Protection by Design and Default for the Internet of Things. Springer International Publishing AG. http://ebookcentral.proquest.com/lib/leidenuniv/detail.action?docID=5596908
Swani, K., Milne, G. R., & Brown, B. P. (2023). The benefits of meeting buyer privacy expectations across information, time, and space dimensions. Industrial Marketing Management, 112, 14-26. https://doi.org/10.1016/j.indmarman.2023.04.013
van de Haar, I., Broberg, C. P., & Doshoris, I. (2019). How Artificial Intelligence is changing The Relationship Between The Consumer and Brand in The Music Industry. LBMG Strategic Brand Management – Masters Paper Series. http://lup.lub.lu.se/student-papers/record/9007033
Verdoodt, V. (2019). The Role of Children’s Rights in Regulating Digital Advertising. The International Journal of Children’s Rights, 27(3), 455-481. https://doi.org/10.1163/15718182-02703002