Sixty-three percent of marketing leaders intend to invest in generative AI (Gen AI) in the next two years. But while the technology promises to be transformative, incorporating it poses real challenges. Here are five questions every marketing leader needs to ask before implementing Gen AI if the technology is to be effective.
How good is your organization’s current data?
Gen AI tools that build on your data will only be as good as your data. For instance, if you plan to deliver personalized content, the Gen AI tool must rely on your existing customer information, and that data will need to be, among other things, accurate and complete. If your customer data repository contains old or outdated records, the data is inaccurate; if it is missing important information, such as demographic details for some of your customers, the data is incomplete. Such inaccuracies and gaps will severely hamper your Gen AI tool's ability to deliver personalized marketing collateral. Your data teams must conduct a comprehensive audit of the marketing data and prepare it for your Gen AI investment.
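As a concrete illustration, an audit like this can start with simple automated checks for completeness and staleness. The sketch below is a minimal, hypothetical example: the record layout, field names, and cutoff date are assumptions for illustration, not a prescribed schema.

```python
from datetime import date

# Hypothetical customer records; field names and values are illustrative assumptions.
customers = [
    {"id": 1, "email": "a@example.com", "age": 34, "last_updated": date(2024, 5, 1)},
    {"id": 2, "email": "b@example.com", "age": None, "last_updated": date(2024, 6, 2)},
    {"id": 3, "email": None, "age": 29, "last_updated": date(2019, 1, 15)},
]

REQUIRED_FIELDS = ["email", "age"]        # demographic details the tool would need
STALENESS_CUTOFF = date(2023, 1, 1)       # records untouched since then count as outdated

def audit(records):
    """Flag incomplete records (missing required fields) and stale records."""
    incomplete = [r["id"] for r in records
                  if any(r.get(f) is None for f in REQUIRED_FIELDS)]
    stale = [r["id"] for r in records if r["last_updated"] < STALENESS_CUTOFF]
    return {"incomplete": incomplete, "stale": stale}

report = audit(customers)
print(report)  # {'incomplete': [2, 3], 'stale': [3]}
```

Checks like these do not replace a full audit, but they give data teams a repeatable starting point for measuring how ready the data actually is.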
Will the Gen AI tool integrate with your marketing technology stack?
Getting a fancy tool that does not integrate with your current technology stack will compromise your goals. Proper integration ensures, for instance, that data flows smoothly between systems, enabling sound analysis and content personalization. To be successful, you must evaluate the tool's compatibility with your current systems, especially with respect to APIs and data formats. Include your IT counterpart as well as data science leaders in assessing technical requirements. Work with them to build an integration roadmap with a phased timeline that allows for testing without disrupting your current marketing efforts.
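Much of the data-format compatibility work comes down to small adapters that translate one system's records into the shape another expects. The sketch below is purely illustrative: the field names and record shapes are assumptions, not any specific vendor's API.

```python
# Hypothetical adapter: map a CRM export record into the shape a Gen AI
# personalization tool might ingest. All field names here are assumptions.
def crm_to_genai(record):
    """Translate one hypothetical CRM record into a hypothetical Gen AI format."""
    first, _, last = record["full_name"].partition(" ")
    return {
        "customer_id": str(record["id"]),   # target system expects string IDs
        "first_name": first,
        "last_name": last,
        "segment": record.get("segment", "unknown"),
    }

crm_record = {"id": 42, "full_name": "Ada Lovelace", "segment": "enterprise"}
print(crm_to_genai(crm_record))
```

Cataloging where adapters like this are needed, and how brittle they are, is a useful input to the phased integration roadmap.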
What privacy and security measures do you need to have in place?
Some Gen AI tools, like ChatGPT, learn from data you input into their models. Consequently, you must deploy them carefully to avoid leaking confidential company information or any personally identifiable customer information. Companies should train such tools with zero-party data, which is data that customers share proactively, and first-party data, which companies collect directly. Work with your legal, compliance, and IT teams to conduct a detailed review of the data privacy and security policies of the tool you are considering. The prospective tool must be compliant with governmental and industry standards (e.g., CCPA, GDPR) and, at a minimum, offer secure data storage and transmission along with transparent policies on data use. Involving your legal and compliance counterparts from the outset will help you navigate these complex issues well.
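One practical safeguard teams often add is scrubbing obvious personal identifiers before any prompt leaves their systems. The sketch below is a deliberately simplistic illustration, not production-grade PII detection; the patterns are assumptions, and real deployments need vetted tooling and legal review.

```python
import re

# Illustrative-only patterns for two common PII types; far from exhaustive.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

def redact(prompt: str) -> str:
    """Replace emails and phone numbers with placeholders before sending a prompt."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    prompt = PHONE.sub("[PHONE]", prompt)
    return prompt

print(redact("Write a follow-up to jane@acme.com, phone 555-867-5309."))
# Write a follow-up to [EMAIL], phone [PHONE].
```

Even a simple gate like this makes the data-handling policy concrete: nothing identifiable goes into a third-party model by default.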
How will your customers perceive the content created by Gen AI?
A lot of companies use AI to power their customer support, especially chatbots. Consumer reactions to such chatbots are mixed, which raises the question: How will customers perceive personalized content created by Gen AI? Gen AI will likely be invisible in some contexts or use cases; what matters is how a trained model helps delight customers. But to overcome any customer apprehensions about AI, you must be transparent with them about your use of Gen AI. Educate customers on how you use the technology and its potential to transform their experience; they will be more receptive if they understand that Gen AI enhances it.
What ethical considerations should you think about and how could you address them?
Gen AI models suffer from the inherent bias present in the datasets on which they are trained, so the output they produce can perpetuate discrimination and stereotypes. The output can also be inaccurate or misleading. For instance, in a study of images produced by one model, more than 80% of the images generated for the word "inmate" were of people with darker skin color, even though the Federal Bureau of Prisons' data shows that people of color make up less than 50% of the U.S. prison population. In this context, it is critical to develop an internal standard for accuracy and fairness, and to invest in human oversight of AI-generated content before it is published.
Barie Carmichael is a senior counselor at APCO and Praveen Krishnamurthy is a product marketing manager at Adobe.