Labeling AI-Generated Content: 57% of Companies Expect a Positive Impact from Meta’s Initiative

The transparent and ethical use of AI increasingly requires compliance with established rules and guidelines rather than simply maintaining quality. Recent initiatives by Meta, the parent company of Facebook and Instagram, aim to label every post or piece of content produced by algorithms with the “AI-generated” label.

This decision will, in theory, not only help users gain some perspective on the content they view, but also set a precedent for other companies to follow. How are new content labeling initiatives changing the generative AI ecosystem?

GetApp sheds light on these topics, accompanied by the expert insight of Julien Pibourret [1], a social selling specialist, author, and social media and web marketing trainer, whom we had the pleasure of interviewing.

Key Takeaways

  • 47% of companies say they are aware of Meta’s initiative to label AI-generated images and content. The majority expect this initiative to have a positive impact.
  • 27% of companies “always” indicate that their content was created by or with the help of AI. 44% indicate “sometimes,” and 27% “never” mention it. These findings reflect the need for clearer guidelines and more consistent transparency practices for disclosing AI involvement in content creation.
  • More than a third of respondents say that between 10% and 25% of the social media content they produce is currently generated using GenAI tools.
  • 66% of respondents believe that all digital marketing tasks will be supported by generative AI tools within the next five years.

Social Media Engagement and Impressions Boosted by GenAI
AI-generated content, while effective, can sometimes lack the nuanced understanding and creativity that humans bring, potentially leading to generic or poor-quality results. While there are risks associated with using GenAI, 61% report an increase (“significant” for 14% of respondents and “slight” for 47%) in engagement and impressions on their social media. Furthermore, when respondents were asked to what extent GenAI-assisted content performed better on social media than content generated solely by humans, 45% said AI performed better.

As Julien Pibourret points out, “Thanks to automation, companies will be able to generate content faster and at scale, while adapting this content to each user’s specific preferences using advanced personalization techniques. Content optimization will also be facilitated by AI, which will analyze performance in real time to adjust and improve content creation strategies.”

Internal and External Policies: Regulations and Integrity
More than a third of respondents say that between 10% and 25% of the social media content they produce is currently produced using GenAI tools. The trend is growing: one in three respondents believe that AI will be present in a quarter to half of the social media content they produce over the next 18 months.

This is further illustrated by the fact that 66% of respondents agree (22% “strongly agree” and 44% “somewhat agree”) that over the next five years, all digital marketing tasks will be assisted by generative artificial intelligence tools.

This increase in production therefore requires more rigorous policies to guarantee the quality, authenticity, and integrity of this content, as Julien Pibourret explains: “Ultimately, we won’t be able to differentiate between human-generated content and AI-generated content. Improving content transparency for consumers is very important, especially today when we face legal issues, but also issues of authenticity.”

Indeed, clear guidelines and effective monitoring practices are becoming essential to maintain consumer trust, particularly that of specific groups. Julien Pibourret adds: “I also think that different generations have different expectations and sensitivities about the content they consume online. Ignoring these generational differences in transparency expectations can not only weaken engagement, but also create a trust gap between the brand and its various audiences.”

According to him, “To maintain the integrity of AI-generated content, it is possible to use artificial intelligence locally, without going online to avoid data leaks. Teams must also be regularly informed about best practices and new technologies to stay abreast of AI developments. Finally, at the corporate and governance levels, it is necessary to develop a critical mindset when it comes to integrating an AI tool.”

The Importance of a Policy Regarding the Use of GenAI
In this GetApp study, 40% of respondents reported having a formal, documented policy regarding the use of generative AI. According to Julien Pibourret, “Companies can implement a formal, documented policy or refer to the AI Act, a law regarding the use of GenAI that will come into effect by August 2024. For now, in France, I would say we’re more in a logic of opportunity than regulation, and I find that some countries like India, for example, are much further ahead than France in the use and regulation of GenAI.”

Companies should be aware of the potential legal implications of AI-generated content, such as copyright infringement or plagiarism, as mentioned on the European Parliament’s website. [2] These concerns should prompt companies to establish strict quality standards, implement regular audits, and use advanced detection tools to evaluate AI-generated content.

While companies employ various strategies to ensure content integrity, new initiatives continue to emerge to further improve transparency and trust among consumers and internet users. One such initiative, recently launched by Meta, involves labeling AI-generated content on its platforms (Instagram, Facebook, Threads).

Meta’s Initiative: Labeling AI-Generated Content
47% of companies reported being “moderately aware” and 31% “very aware” of Meta’s initiative to label AI-generated content. Currently, only 27% of companies “always” indicate that their content was created by or with the help of AI, meaning that the majority of companies are not consistently transparent about its use in their content creation strategies.

“Failing to disclose that content is AI-generated can compromise a company’s reputation, even weaken community engagement, and lead to legal consequences,” explains Julien Pibourret.

Meta’s initiative to indicate AI-generated content on its platforms aims to increase transparency and help users easily identify such content. “By proactively communicating about the use of AI, companies can demonstrate their commitment to the transparency and ethics of their online content strategy,” adds Julien Pibourret.

If social media platforms were to require the labeling of all AI-generated content, the impact on social media campaigns would appear to be mixed. According to the survey data, 16% of marketers believe it would have a “very positive” impact, likely improving transparency and trust with their audiences, and another 42% expect a “somewhat positive” impact, potentially improving content authenticity and engagement.

However, 27% of them anticipate no impact, suggesting that labeling will not significantly alter their campaign results. On the other hand, 11% anticipate a “somewhat negative” impact, likely due to concerns about public perception and the distrust associated with AI-generated content.

Navigating Algorithms with GenAI
Content driven by generative AI isn’t necessarily punished by social media algorithms, but its reception and processing depend on several factors.

1. Quality and Relevance

High-quality, engaging, and well-targeted content is likely to capture audience attention and stimulate interaction. Social media algorithms prioritize content that resonates with users, generating likes, shares, and comments. Conversely, if AI-generated content is perceived as spammy, irrelevant, or lacking substance, its visibility will likely be reduced. Ensuring that GenAI content meets high quality standards and closely aligns with audience interests is crucial to obtaining favorable algorithmic treatment.

2. Transparency and Labeling

High-profile labeling can truly benefit businesses by aligning their practices with evolving platform standards. Algorithms can adapt to prioritize clearly labeled content, rewarding honesty rather than punishing the use of AI. By being open about the role of AI in content creation, companies can strengthen their credibility and foster a more trusting relationship with their audience.
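In practice, this kind of disclosure can be handled as a simple metadata step in a publishing workflow. The sketch below is purely illustrative: the `Post` structure, the `apply_ai_disclosure` helper, and the label text are assumptions for the example, not Meta's actual API or any platform's real labeling mechanism.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    """A draft social media post with optional disclosure labels."""
    text: str
    ai_generated: bool
    labels: list = field(default_factory=list)

def apply_ai_disclosure(post: Post) -> Post:
    """Attach an 'AI-generated' label to posts produced with GenAI.

    Runs before publication so disclosure is never an afterthought;
    posts written entirely by humans pass through unchanged.
    """
    if post.ai_generated and "AI-generated" not in post.labels:
        post.labels.append("AI-generated")
    return post

draft = Post(text="Summer sale starts today!", ai_generated=True)
labeled = apply_ai_disclosure(draft)
print(labeled.labels)  # ['AI-generated']
```

Making the label an automatic, idempotent step (it is never added twice) keeps disclosure consistent across campaigns rather than depending on each editor remembering it.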

3. User Engagement

User engagement remains a key factor in how social media algorithms rank content. Posts that garner likes, shares, comments, and extended watch times are typically promoted by algorithms. Authenticity plays a key role here; users are more likely to engage with content that appears authentic and interactive. Therefore, companies should focus on creating AI-generated content that encourages meaningful interactions, fostering a sense of community and connection with their audience.

4. Ethical Considerations

Social media platforms are keen to promote content that adheres to community guidelines and ethical standards. AI content that is ethically produced, transparently labeled, and meets user expectations is less likely to be penalized. Conversely, deceptive practices or the misuse of AI to generate misleading content can lead to a decrease in trust and potential algorithmic penalties. Maintaining ethical standards not only protects a brand’s reputation but also ensures compliance with platform policies, thereby fostering a safer and more trustworthy digital environment.

5. Algorithm Changes

Social media platforms frequently update their algorithms to improve user experience and adapt to new trends. Businesses must stay informed of these changes to optimize their content strategy through GenAI. Adapting to algorithm updates involves continuously monitoring performance metrics and adjusting content accordingly. Understanding and anticipating algorithmic trends allows businesses to refine their strategies, ensuring sustained engagement and compliance with platform requirements.

Finally, to maintain control and ensure the integrity of the content produced, companies can:

  • Use plagiarism detection tools to help ensure that content is thoroughly checked and reviewed before publication;
  • Leverage AI content detection tools to assess content quality.
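A pre-publication review of this kind can be organized as a small pipeline of checks, where each check returns a pass/fail result and a message. In the sketch below, the two checks are trivial stand-ins chosen for illustration; a real deployment would replace them with calls to plagiarism-detection or AI-content-detection services, whose names and interfaces are not specified here.

```python
from typing import Callable, List, Tuple

# A check takes the draft text and returns (passed, message).
Check = Callable[[str], Tuple[bool, str]]

def min_length_check(text: str) -> Tuple[bool, str]:
    """Stand-in check: flag drafts too short to review meaningfully."""
    ok = len(text.split()) >= 5
    return ok, "ok" if ok else "content too short to review"

def placeholder_check(text: str) -> Tuple[bool, str]:
    """Stand-in check: flag leftover placeholder text in the draft."""
    ok = "lorem ipsum" not in text.lower()
    return ok, "ok" if ok else "placeholder text found"

def review_before_publishing(text: str, checks: List[Check]) -> List[str]:
    """Run every check and return the failure messages.

    An empty list means the draft passed all pre-publication checks
    and can move on to labeling and scheduling.
    """
    failures = []
    for check in checks:
        ok, msg = check(text)
        if not ok:
            failures.append(msg)
    return failures

issues = review_before_publishing(
    "Lorem ipsum", [min_length_check, placeholder_check]
)
print(issues)  # ['content too short to review', 'placeholder text found']
```

Keeping checks as interchangeable functions makes it easy to add a new detection tool to the pipeline without touching the review logic itself.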
