Welcome to Publishing Pulse, your weekly source for industry updates in online publishing. Stay informed about the latest trends and breakthroughs in the ad ecosystem, content creation, SEO, AI technology, and monetization.
If you prefer to listen to industry news, tune in to The Publisher Lab podcast. New episodes are released weekly on Thursdays.
Google November 2023 Reviews Update
Google’s November 2023 update, targeting review-based content, marks its last standalone update in this category. The update evaluates content at the page level, a change that matters most for sites where reviews are not the primary focus, and it applies to articles and blog posts offering opinions and analysis rather than to third-party user reviews. Because the rollout coincides with a core search algorithm update, publishers could see changes in search traffic and rankings.
Google plans to implement regular, ongoing changes to its review system without specific update announcements. This approach means sites primarily featuring reviews must constantly monitor and adapt their content, although they might recover more quickly from ranking issues. The update, initially affecting English content, emphasizes the importance of following Google’s quality guidelines, which include originality, transparency, and balanced coverage.
Sites that rely heavily on review content may see ranking fluctuations and are advised to audit their reviews against Google’s guidelines, prioritizing improvements to weak content. In the meantime, affected sites can lean on Google Ads while they raise their review quality. Overall, the update ushers in a new phase of continuous evaluation for review-centric websites.
Google Initiates Lawsuits Against Bard AI Scammers and Perpetrators of Copyright Fraud
Google is actively engaging in legal action against two groups for separate offenses: AI chatbot Bard-related scams and mass copyright fraud.
The first lawsuit targets scammers who created misleading social media pages and online advertisements about Bard, directing users to malware rather than the actual chatbot. This legal move aims to halt the spread of malware and address ethical abuses in AI.
The second lawsuit targets entities that filed fraudulent copyright claims as a tactic to take down competitors’ websites, a scheme that has affected over 100,000 businesses.
Through these lawsuits, Google aims to address the immediate issues and deter future AI-related scams and copyright abuses, contributing to a safer internet environment.
EU Gives Meta and Snapchat Deadline For Child Protection Measures
The European Commission has set a December 1 deadline for Meta Platforms (Facebook) and Snap to submit detailed information on their measures to safeguard children from illegal and harmful online content.
This follows similar requests to YouTube and TikTok regarding child protection policies. The Commission had previously issued urgent orders to Meta, X, and TikTok, demanding an account of their efforts against terrorism, violence, and hate speech.
Failure to provide satisfactory responses could lead to formal investigations. Under the new Digital Services Act (DSA), major internet platforms are required to remove illegal and harmful content, with the risk of facing substantial fines up to 6% of their global turnover for non-compliance.
Court Papers Suggest Zuckerberg Blocked Meta’s Teen Mental Health Initiatives
A lawsuit against Meta has brought to light internal communications suggesting that CEO Mark Zuckerberg repeatedly obstructed efforts to enhance teen well-being on Facebook and Instagram. Despite proposals from top executives like Instagram CEO Adam Mosseri and President of Global Affairs Nick Clegg for increased protections for teenage users, Zuckerberg is alleged to have overruled these initiatives.
Notably, he rejected a 2019 proposal to disable Instagram’s “beauty filters,” which have been linked to unrealistic body image standards and potential mental health impacts on teens. Zuckerberg’s decision was reportedly based on the demand for these filters and a lack of data proving their harm, even though other executives supported the proposal.
In 2021, Clegg pushed for greater investment in well-being measures, citing concerns over addiction, self-harm, and bullying on the platforms, but his efforts drew minimal response. Zuckerberg’s reliance on data and his insistence on a high standard of proof before approving well-being interventions have faced criticism.
The lawsuit also accuses Meta of exploiting adolescent psychology to boost engagement on Instagram. These revelations have drawn condemnation from technology advocacy groups and various critics, spotlighting Meta’s approach to user well-being and safety.
Meta Mandates AI Disclosure for Political Ads
Meta is set to enforce a new policy for political advertisers on Facebook and Instagram, mandating the disclosure of AI or digital methods used in ad creation.
This global policy, taking effect next year, addresses growing concerns about the potential impact of AI-generated misinformation on elections. Advertisers will need to declare any significant digital alterations, such as photorealistic visuals or realistic audio, though minor changes like resizing images are exempt. Non-compliance could lead to ad rejection and penalties.
This move, aimed at boosting ad transparency, builds on Meta’s existing measures like authorization processes and “paid for by” labels for political ads. Google has already implemented a similar policy earlier this year.
NMA CEO Addresses Challenges of AI Training with News Content at AI+ Summit
At Axios’ AI+ Summit, News/Media Alliance (NMA) CEO Danielle Coffey expressed concerns about AI models being trained using news content without proper compensation or permission. Representing 2,000 global news and magazine media outlets, NMA is tackling challenges from Big Tech and now from generative AI.
The organization has formally opposed the practice of content scraping by AI companies in comments to the U.S. Copyright Office. Coffey highlighted the necessity for accountability and standards in AI training, cautioning against relying solely on “citizen journalism.”
She advocated for bilateral agreements and collective negotiations through bodies like NMA, arguing that individual deals between news organizations and AI companies, such as those pursued by the New York Times and the AP, are not sufficient to ensure the fair use of news content.