Response to the public consultation on the Digital Fairness Act
Introduction: Addressing risks from mental health to the environment
This response from the Centre for Future Generations (CFG) contributes to the European Commission’s call for evidence for the Digital Fairness Act (Ref. Ares(2025)5829481; legal basis: Article 114 TFEU), launched on 17 July 2025. As a think-and-do tank supporting decision-makers in responsibly governing rapid technological change in the best interests of current and future generations, CFG welcomes the Commission’s ambition to close persistent gaps in online consumer protection. The Fitness Check of EU Consumer Law[1] confirmed that consumers behave differently online than offline and do not always feel fully in control of their digital environment – underscoring the need for stronger safeguards against manipulative or opaque design practices.
Despite the entry into force of the Digital Services Act (DSA), Digital Markets Act (DMA), and AI Act (AIA), critical blind spots remain in the EU’s digital regulatory framework. Neither the DSA nor the DMA directly addresses the design of user experiences, behavioural manipulation, or the erosion of individual agency through algorithmic influence – mechanisms that underpin dark patterns and hyper-personalisation[2]. And because of the AI Act’s specific scope, the vast majority of consumer AI systems fall outside its high-risk categories[3], leaving most personalised recommendation and advertising systems effectively unregulated or subject only to voluntary action. The Digital Fairness Act (DFA) would not duplicate existing rules – it would operationalise fairness and individual empowerment in ways that the DSA, DMA, and AIA structurally cannot, given their different legislative and policy scopes.
Experts increasingly warn that the EU’s framework must evolve[4] to keep pace with emerging manipulative architectures. Just as the EU has moved to regulate political advertising[5] to prevent the manipulation of democratic will, commercial practices that distort consumer choice through deceptive or hyper-personalised persuasion should also be scrutinised.
Finally, calls for simplification of EU law must not become an excuse for inaction on digital power asymmetries. Instead, simplification should be harnessed to overcome the implementation, enforcement, and compliance hurdles experienced so far across the rest of the EU’s digital and data regulatory framework, and channelled toward a more cohesive, independent digital governance framework for Europe.
The DFA therefore represents a crucial opportunity to fill these regulatory gaps and to protect both people and the planet in an era of accelerating technology compounded by environmental degradation[6]. As digital systems increasingly shape everyday choices, fairness in the emerging tech age must mean freedom from manipulative personalisation and the right to understand and control how algorithms influence decisions – including their environmental footprint.
Moreover, in today’s tech ecosystems, advertising and personalisation operate in contexts of asymmetric information and psychological influence, which can undermine informed and autonomous decision-making. The consequences of this imbalance extend beyond individual welfare to the collective well-being of society – and to future generations who will inherit the social and ecological impacts of today’s technology design choices.
We recommend the DFA address:
1) Manipulative hyper-personalisation based on mental-state inferences
2) Consumer transactions in the digital attention economy
3) Lack of clarity on wellness versus medical marketing
4) Hidden drivers of environmental harm
5) Unfair and deceptive commercial practices on the horizon
6) Governance and enforcement gaps for digital fairness in Europe
1) Manipulative hyper-personalisation based on mental-state inferences
Our research[7] indicates that an increasing number of consumer wellness products collect or infer users’ emotional or cognitive data. Examples include devices that aim to help users meditate, track their focus, or provide health-related metrics. While such data may support well-being and even certain health applications[8], it can also expose individuals’ mental states and psychological vulnerabilities – such as fatigue, stress, or lack of focus – to the device manufacturers.
At present, there are no regulatory safeguards that prevent manufacturers from exploiting consumers’ inferred mental states, for instance to personalise content, notifications, or even pricing. Information about mental states risks becoming an additional variable in time-optimised personalisation strategies designed to maximise engagement or sales.
This issue is especially pertinent as big tech companies – many of which rely on ad-based business models – are increasingly taking an interest[9] in neurotechnology and health-adjacent products. Big tech companies have been seen to commercialise[10], patent[11], and develop[12] their own devices, in addition to partnering[13] with other neurotechnology or health companies. For example, scientific and technological initiatives such as neuroadaptive computing and passive brain-computer interfaces[14] are already exploring how mental-state information could be used to adapt digital environments to the user. While these developments can enhance human-machine interaction or personalise learning[15] and gaming experiences, they could also be used to exploit psychological vulnerabilities – for example, by adjusting tone, timing, or discounts when a user appears stressed or fatigued, and thus more susceptible to persuasion.
AI chatbots, such as ChatGPT, could also be fed emotional-state information and adjust their language to be more empathetic or patient – but also more persuasive or manipulative. This is of particular concern, as emotional prompting appears to amplify misinformation in LLMs[16]. At the same time, as AI systems advance, inferences about mental and cognitive states are likely to become more powerful. Traditional indicators of focus or fatigue could increasingly reveal data relating to age, cognitive decline, anxiety, depression, or other mental health conditions. This could enable companies to rank users by inferred cognitive or emotional characteristics and adjust pricing or offers accordingly.
Recommendation: In line with the upcoming UNESCO Recommendation on the Ethics of Neurotechnology[17], we recommend that the DFA explicitly classify the use of information regarding mental states for persuasion, social control, commercial surveillance, retention strategies, engagement optimisation, or pricing adjustment as an unfair commercial practice, comparable to dark patterns or addictive design mechanisms.
Our review[18] also indicates that the technologies enabling emotion-based personalisation could soon be available in applications used by young people (e.g. companion chatbots, neurotech in education, therapy chatbots).
Recommendation: Emotion-based personalisation often involves the use of online activity-tracking technologies, and could soon include eye and biometric trackers and neurotechnologies. Because children may be particularly susceptible to emotional nudging, the DFA could strengthen child-protection provisions by requiring emotion-based personalisation to be off by default and available only via opt-in. Policy should also be informed by developmental milestones, age windows, and gender, as research has shown that these factors affect sensitivity to online harms[19],[20].
2) Reframing consumer transactions in the digital attention economy
In the digital economy, consumer transactions should not be understood solely as monetary exchanges. Users also ‘transact’ when they devote attention, engagement, or personal data to a service. These non-monetary exchanges underpin the business models of most digital platforms, particularly social media and AI chatbots that offer assistance, companionship, or therapy.
Our research[21] suggests that the recommender systems and user-experience interfaces used in social media increasingly constrain users’ ability to autonomously limit their engagement. Transparency around how strongly these systems steer user behaviour is limited, preventing users from making fully informed decisions about the content they encounter. Both elements arguably compromise key bioethical principles, such as autonomy and fairness, for consumers.
Other manipulative interface designs include infinite scroll, autoplay, pull-to-refresh, ephemeral content loops, engagement incentives, and disengagement frictions – all of which function as dark patterns that further erode user autonomy. Additional ethical concerns arise from dishonest anthropomorphism[22], where AI agents adopt human-like traits to elicit emotional trust and attachment, often for commercial gain[23].
To drive engagement, these designs exploit human vulnerabilities[24], including, but not limited to, reduced attentional capacity, cognitive biases, novelty- and emotion-seeking, and the tendency to anthropomorphise entities. The result is a systemic erosion of consumer autonomy, magnified in children and adolescents, whose cognitive, emotional, and social capacities are still developing[25]. This trend may be exacerbated by the rise of wearable neurotechnologies, as described in section 1.
Recommendation: The DFA should explicitly recognise attention, engagement, and user data as commodities of exchange and introduce precise, age-appropriate information requirements for platforms to ensure consumers are aware of the personalisation practices to which they are being subjected.
3) Clarity on wellness versus medical marketing claims across all communication channels
The distinction between wellness and medical products is becoming increasingly blurred[26], particularly on online platforms such as social media, due to the language used in marketing. Examples of products that tread this line include therapy chatbots, neurotechnologies, biometric tracking wearables, and AI lifestyle coaches. Many companies strategically adapt their marketing claims to appeal to health-conscious consumers while retaining their legal status as consumer companies. While a statement such as “treats anxiety” would bring a device under the scope of the Medical Device Regulation (MDR), a softer phrasing like “alleviates stress” places it in a regulatory grey zone and allows for its commercialisation as a consumer product. Such language ambiguities can lead consumers to believe that a product not tested for efficacy and marketed solely for wellbeing purposes is a medically validated device. Crucially, disclosures about products not being medical devices are often absent or buried in lengthy Terms and Conditions.
Our research also shows that claims about the mental or physical benefits of these consumer products often appear more prominently in transient content on social media than on official websites[27]. On platforms such as Instagram and LinkedIn, companies typically use tactics such as webinars, conversations with experts, user or patient testimonials, and endorsements from clinicians to tout the health benefits of their devices.
Recommendation: To promote fairness and informed decision-making, especially in the context of Europe’s ongoing mental health crisis[28], the DFA could require clear front-end disclosures indicating whether a product is classified as wellness/lifestyle or medical/clinical. These disclosures should apply consistently across all communication channels – including transient content, websites, apps, and social media. Such transparency would help consumers understand the level of evidence, oversight, and protection they can reasonably expect, without having to search through Terms and Conditions and regardless of where the claim appears.
4) Tackling hidden drivers of environmental harm
Beyond the mental dimensions of hyper-personalisation and dark patterns, algorithms should not invisibly push people toward environmental harm. The DFA is thus a timely opportunity to align digital fairness with the EU’s just green transition. Neither the DSA nor the DMA addresses the environmental impacts of digital platforms or service design. Enabled emissions – the real but hidden carbon costs of algorithmic nudging – drive overconsumption without users’ knowledge or consent. From impulse travel bookings to endless shopping recommendations, recommender systems and manipulative design patterns carry indirect environmental costs – a concern recognised in the DFA’s own call for evidence – as they promote high-carbon behaviours that distort consumer choice and undermine sustainability goals. In keeping with the EU’s priority of a twin transition, the DFA is an important lever for meeting the EU’s 2040 climate target, which, as the Commission’s own Impact Assessment[29] notes, will depend on fostering sustainable lifestyles and consumption.
Algorithmic dark patterns further entrench unsustainable habits across fast fashion, electronics, food delivery, and logistics. The fashion industry alone accounts for around 10% of global emissions[30], with recommender systems perpetuating rapid consumption cycles and waste[31]. These are not isolated consumer actions but systemic effects of opaque, profit-optimised design. Momentum is already building to embed environmental safeguards into consumer protection: France’s proposed eco-score[32] for ultra-fast fashion signals growing political will to confront digital drivers of unsustainability.
Recommendation: The DFA should close the regulatory blindspot between digital fairness and indirect environmental harm by explicitly recognising and mitigating enabled emissions. This can be achieved through provisions on algorithmic transparency[33], sustainability-by-default[34] design, and user rights to carbon-aware personalisation[35]. Regulators should also consider mandatory sustainability impact assessments[36] for recommender systems, bans on dark patterns[37] that drive overconsumption, and clear disclosure[38] of the environmental footprint of personalised ads and rankings. In doing so, the DFA can help ensure emerging tech systems serve both people and the planet – advancing fairness, agency, and sustainability together.
5) Unfair and deceptive commercial practices on the horizon
The monetisation of parasocial relationships[39] through real and AI-generated influencers blurs the line between authentic connection and commercial persuasion. This occurs, for example, when influencers integrate brand preferences into personal narratives and interactive exchanges with their communities, or when AI-generated influencers simulate emotional connection to promote products or values[40]. These trends are on the rise, as highlighted by recent public controversies[41] that have prompted the creation of ad-hoc national codes of conduct[42]. Moreover, parasocial dynamics have begun to attract institutional attention[43], underscoring the need for clearer regulation to distinguish commercial communication from editorial content.
A related concern is the commercialisation of children, who increasingly appear not only as consumers but as a form of commercial product – popularised as “kidfluencers”[44] – either through standalone profiles or via content shared by their caregivers (“sharenting”[45]). From a consumer-protection angle, children’s visibility reliably increases engagement, an effect amplified by recommender-system dynamics; this represents an emotional dark pattern that weaponises human empathy for profit.
These emerging practices are especially unfair to children and adolescents, who have limited capacity to recognise manipulative intent or its effects. CFG is actively addressing these and related challenges through its Tech & the Brain workstream[46], which explores how digital technologies interact with cognitive and emotional processes, in order to inform policy frameworks.
Recommendation: The DFA should explicitly address the monetisation of parasocial relationships and the algorithmic amplification of children’s visibility as emerging unfair commercial practices. It should require clear disclosure of promotional intent, apply stricter safeguards for child audiences, and extend consumer-protection and advertising standards to both human and AI-generated influencers.
6) A coherent and independent governance framework for digital fairness in Europe
To deliver on its objectives, the DFA must be enforced effectively and consistently – both substantively and practically – in alignment with the EU’s existing consumer and digital regulatory frameworks, and uniformly across the Union.
Replicating an existing governance model – whether fragmented like the DSA/DMA or directive-based like the Consumer Rights Directive – risks layering additional bureaucracy onto what is already a patchwork digital enforcement ecosystem. This could result in more divergent practices, uneven and slow implementation, and diminished coherence overall, effectively thwarting the EU’s ability to uphold the spirit of the act.
Alternatively, considering the accelerating pace of technological change, coupled with the EU’s drive for regulatory simplification, the adoption of a more tightly centralised and independent governance structure should be prioritised in order to safeguard the consistency, authority, and efficacy of enforcement—not only of the DFA, but of the entire digital rulebook.
With the next Multiannual Financial Framework in sight, the EU has the chance to invest strategically in digital governance for a more sovereign and fair digital future. To complement its industrial policy ambitions for competitiveness and sovereignty, the EU should channel its purchasing power into a more independent and centralised European digital enforcement agency.
Such an agency could provide a single locus of expertise and accountability, bringing together all relevant regulators, boards, coordination centres, and taskforces related to the digital rulebook under one umbrella for better oversight, cooperation, and coordination. Designing and implementing it would require careful consideration and due multi-stakeholder dialogue, but the political will of member states has already begun leaning toward such an investment, and the time is right to accelerate planning for how such an agency should be structured ahead of the next budget cycle.
Additionally, a digital fairness board – an independent body with legal standing, modelled on the European Data Protection Board – could promote regulatory convergence and issue joint guidance across related domains beyond the digital sphere, such as consumer protection, data governance, and algorithmic accountability.
Complementing these institutional measures, the DFA’s implementation should place impact assessment at its core, ensuring that all regulatory and enforcement actions are informed by robust, routine, forward-looking, and transparent evaluations of their social, environmental, and economic implications. In parallel, a civic enforcement dimension, inspired by the DSA’s trusted flaggers network, should empower accredited organisations and independent experts to support the monitoring of systemic risks and unfair practices.
Recommendation: The DFA should be the regulatory impetus for establishing a centralised and independent EU digital governance framework, anchored by a new European digital enforcement agency and a digital fairness board. Together, these bodies should combine central oversight, tighter implementation and enforcement, civic participation, and robust impact assessment to ensure consistent, evidence-based, and future-proof digital fairness across the Union.
Our commitment moving forward
In conclusion, in light of the rapidly accelerating trends in the digital landscape, CFG strongly supports the DFA’s ambition and stands ready to contribute our multidisciplinary expertise, cross-sectoral networks, and forward-looking policy insights to help translate its vision into lasting impact.
Please contact Maria Koomen or Virginia Mahieu at the Centre for Future Generations with any questions or to request further briefing.
Endnotes
[1] European Commission, Review of EU Consumer Law, 4 October 2024, https://commission.europa.eu/law/law-topic/consumer-protection-law/review-eu-consumer-law_en#digital-fairness-fitness-check-of-eu-consumer-law (accessed 18 September 2025)
[2] Fassiaux, S., Preserving Consumer Autonomy through European Union Regulation of Artificial Intelligence: A Long-Term Approach, European Journal of Risk Regulation, Vol. 14 Special Issue 4, December 2023, pp. 710-730, https://www.cambridge.org/core/journals/european-journal-of-risk-regulation/article/preserving-consumer-autonomy-through-european-union-regulation-of-artificial-intelligence-a-longterm-approach/C59490014B968AB10ECA772683D2B283 (accessed 12 October 2025)
[3] International Association of Privacy Professionals (IAPP), Marketing sits in a gray zone under EU AI Act, 2024, https://iapp.org/news/a/at-aigg-2024-marketing-sits-in-a-gray-zone-under-eu-ai-act? (accessed 12 October 2025)
[4] European Parliamentary Research Service, Regulating dark patterns in the EU: Towards digital fairness, 14 January 2025, https://epthinktank.eu/2025/01/14/regulating-dark-patterns-in-the-eu-towards-digital-fairness/ (accessed 12 October 2025)
[5] European Commission, New EU rules on political advertising come into effect, 10 October 2025, https://commission.europa.eu/news-and-media/news/new-eu-rules-political-advertising-come-effect-2025-10-10_en (accessed 12 October 2025)
[6] European Environment Agency (EEA), State of Europe’s environment not good: threats to nature and impacts of climate change top challenges, 2025, https://www.eea.europa.eu/en/newsroom/news/state-of-europes-environment-2025 (accessed 12 October 2025)
[7] Centre for Future Generations (CFG), Neurotech consumer market atlas: How the sector is making moves into the mainstream, 2025, https://cfg.eu/neurotech-market-atlas/ (accessed 12 October 2025)
[8] Wall, C., Hetherington, V. & Godfrey, A., Beyond the clinic: the rise of wearables and smartphones in decentralising healthcare, Nature Digital Medicine, 2023, https://www.nature.com/articles/s41746-023-00971-z (accessed 12 October 2025)
[9] Centre for Future Generations (CFG), Neurotech consumer market atlas: How the sector is making moves into the mainstream, 2025, https://cfg.eu/neurotech-market-atlas/ (accessed 12 October 2025)
[10] Meta Platforms, Meta Ray-Ban Display: AI Glasses With an EMG Wristband, 17 September 2025, https://about.fb.com/news/2025/09/meta-ray-ban-display-ai-glasses-emg-wristband/ (accessed 12 October 2025)
[11] World Intellectual Property Organization (WIPO), US20230225659 – Biosignal Sensing Device Using Dynamic Selection of Electrodes, 2023, https://patentscope.wipo.int/search/en/detail.jsf?docId=US402825807&_cid=P10-LRT3OJ-01103-1 (accessed 12 October 2025)
[12] IEEE Sensors Journal, Development of a New Around-the-Ear Electroencephalography Device for Passive Brain–Computer Interface Applications, IEEE, 2025, https://ieeexplore.ieee.org/abstract/document/11123617/authors#authors (accessed 12 October 2025)
[13] Synchron, Synchron BCI x Apple Vision Pro, 2025 (accessed 12 October 2025)
[14] Zander Labs, Human intelligence for a neuroadaptive world, 2025, https://www.zanderlabs.com/scientific (accessed 12 October 2025)
[15] Sensors (MDPI), Application of Electroencephalography Sensors and Artificial Intelligence in Automated Language Teaching, 2025, https://www.mdpi.com/1424-8220/24/21/6969 (accessed 12 October 2025)
[16] Frontiers in Artificial Intelligence, Emotional prompting amplifies disinformation generation in AI large language models, 2025, https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2025.1543603/full (accessed 12 October 2025)
[17] UNESCO, Draft Recommendation on the Ethics of Neurotechnology, 2025, https://unesdoc.unesco.org/ark:/48223/pf0000394866 (accessed 12 October 2025)
[18] Centre for Future Generations (CFG), Emerging tech, emerging mental health risks (review), 2025, https://cfg.eu/emerging-tech-emerging-mental-health-risks/ (accessed 12 October 2025)
[19] 5Rights Foundation, Develop products that are age appropriate by design and consider using age assurance, Child Rights by Design, 2023, https://childrightsbydesign.5rightsfoundation.com/principles/4-age-appropriate/ (accessed 23 October 2025)
[20] Orben, A., Przybylski, A.K., Blakemore, S.J. et al., Windows of developmental sensitivity to social media, Nature Communications, Vol. 13, 1649, 2022, https://doi.org/10.1038/s41467-022-29296-3 (accessed 23 October 2025)
[21] Centre for Future Generations (CFG), Emerging tech, emerging mental health risks (research), 2025, https://cfg.eu/emerging-tech-emerging-mental-health-risks/ (accessed 12 October 2025)
[22] Akbulut, C., All Too Human? Mapping and Mitigating the Risk from Anthropomorphic AI, Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES-24), 2024, https://ojs.aaai.org/index.php/AIES/article/view/31613 (accessed 12 October 2025)
[23] ScienceDirect, Virtually human: anthropomorphism in virtual influencer marketing, 2024, https://www.sciencedirect.com/science/article/abs/pii/S0969698924000936 (accessed 12 October 2025)
[24] Nature Human Behaviour, A taxonomy of technology design features that promote potentially addictive online behaviours, 14 February 2023, https://www.nature.com/articles/s44159-023-00153-4 (accessed 12 October 2025)
[25] Blakemore, S.J., Development of the adolescent brain: implications for executive function and social cognition, European Psychiatry, 2018, https://www.sciencedirect.com/science/article/abs/pii/S0924977X17320473 (accessed 12 October 2025)
[26] Centre for Future Generations (CFG), Between wellness and medicine, https://cfg.eu/between-wellness-and-medicine/ (publication forthcoming at time of writing).
[27] Centre for Future Generations (CFG), Between wellness and medicine, https://cfg.eu/between-wellness-and-medicine/ (publication forthcoming at time of writing).
[28] European Commission, EU comprehensive approach to mental health, 2024, https://health.ec.europa.eu/non-communicable-diseases/mental-health_en (accessed 12 October 2025)
[29] European Commission, Commission Staff Working Document: Impact Assessment Report, Strasbourg, 6 February 2024, https://publications.europa.eu/resource/cellar/6c154426-c5a6-11ee-95d9-01aa75ed71a1.0001.03/DOC_3 (accessed 12 October 2025)
[30] Earth.Org, The Environmental Impact of Fast Fashion, Explained, 2024, https://earth.org/fast-fashions-detrimental-effect-on-the-environment/ (accessed 12 October 2025)
[31] Long, X., How does fast fashion affect the environment?, Economics Observatory, 26 March 2025, https://www.economicsobservatory.com/how-does-fast-fashion-affect-the-environment (accessed 12 October 2025)
[32] CarbonFact, The French Textile Eco-Score: October 2025 – Key Updates and What Fashion Brands Need to Know, October 2025, https://www.carbonfact.com/blog/policy/french-eco-score (accessed 12 October 2025)
[33] OECD, Measuring the environmental impacts of artificial intelligence compute and applications, 2025, https://www.oecd.org/en/publications/measuring-the-environmental-impacts-of-artificial-intelligence-compute-and-applications_7babf571-en.html (accessed 12 October 2025)
[34] OECD, Dark Commercial Patterns, OECD Digital Economy Papers No. 336, October 2022, https://www.oecd.org/content/dam/oecd/en/publications/reports/2022/10/dark-commercial-patterns_9f6169cd/44f5e846-en.pdf (accessed 12 October 2025)
[35] Wegmeth, L., Vente, T., Said, A. & Beel, J., Green Recommender Systems: Understanding and Minimizing the Carbon Footprint of AI-Powered Personalization, arXiv preprint, September 2025, https://arxiv.org/abs/2509.13001 (accessed 12 October 2025)
[36] Vente, T., Wegmeth, L., Said, A. & Beel, J., From Clicks to Carbon: The Environmental Toll of Recommender Systems, arXiv preprint, August 2024, https://arxiv.org/abs/2408.08203 (accessed 12 October 2025)
[37] OECD, Stronger consumer protections needed to address current and emerging harms consumers face online, Press release, 9 October 2024, https://www.oecd.org/en/about/news/press-releases/2024/10/stronger-consumer-protections-needed-to-address-current-and-emerging-harms-consumers-face-online.html (accessed 12 October 2025)
[38] Wegmeth, L., Vente, T., Said, A. & Beel, J., Green Recommender Systems: Understanding and Minimizing the Carbon Footprint of AI-Powered Personalization, arXiv preprint, September 2025, https://arxiv.org/abs/2509.13001 (accessed 12 October 2025)
[39] Marketing Journal (Japan Marketing Association), The Parasocial Relationships Between Influencers and Consumers: The Impact on Product Attitudes and Product Referrals, 2025, https://www.jstage.jst.go.jp/article/marketing/45/3/45_2025.032/_pdf/-char/en (accessed 12 October 2025)
[40] Block, E. & Lovegrove, R., Discordant storytelling, ‘honest fakery’, identity peddling: How uncanny CGI characters are jamming public relations and influencer practices, Public Relations Inquiry, 2021, https://journals.sagepub.com/doi/10.1177/2046147X211026936 (accessed 12 October 2025)
[41] Euronews – Business, Chiara Ferragni: How influential are influencers on consumer decisions?, 22 December 2023, https://www.euronews.com/business/2023/12/22/chiara-ferragni-how-influential-are-influencers-on-consumer-decisions (accessed 12 October 2025)
[42] Italian Communications Authority (AGCOM), Resolution 197/25/CONS: Amendments to the guidelines set out in Resolution 7/24/CONS and approval of the Code of Conduct for Influencers, 23 July 2025, (Translated) https://www.agcom.it/provvedimenti/delibera-197-25-cons (accessed 12 October 2025)
[43] Michaelsen, F., Collini, L. et al., The impact of influencers on advertising and consumer protection in the Single Market, Policy Department for Economic, Scientific and Quality of Life Policies, European Parliament, February 2022, https://www.europarl.europa.eu/thinktank/en/document/IPOL_STU%282022%29703350 (accessed 12 October 2025)
[44] Taylor & Francis Online, Children’s ‘playbour’ as influencers on social media: an investigation into the legal and ethical issues surrounding kidfluencers, 2025, https://www.tandfonline.com/doi/full/10.1080/22041451.2025.2523654#abstract (accessed 12 October 2025)
[45] Blum-Ross, A. & Livingstone, S., “Sharenting,” parent blogging and the boundaries of the digital self, Popular Communication, 2017, https://www.tandfonline.com/doi/full/10.1080/15405702.2016.1223300 (accessed 12 October 2025)
[46] Mehmet, O., Koomen, M. & Mahieu, V., Emerging tech, emerging mental health risks, 10 September 2025, https://cfg.eu/emerging-tech-emerging-mental-health-risks/