Social media “likes” are now fuelling the next generation of AI training data—but AI might no longer need them. From AI-generated influencers to personalised chatbots, we’re entering a world where both creators and fans could be artificial. As bots start liking bots, the question isn’t just what we like—but who is doing the liking.
AI Is Using Your Likes to Get Inside Your Head
AI isn't just learning from your likes—it’s predicting them, shaping them, and maybe soon, replacing them entirely. This shift is part of a broader trend where AI recalibrates the value of data, moving beyond simple metrics to deeper behavioural insights. What is the future of the Like button?
The implications of AI influencing our preferences are vast, touching how we interact with everything from content to products. Platforms are already exploring this: ChatGPT's 'Buy It' button, for instance, is quietly rewriting online shopping by using AI to streamline purchasing decisions. This evolution points to a future where AI doesn't just react to our choices but proactively guides them, raising questions about agency and consumer behaviour.
Moreover, as AI becomes more sophisticated, it blurs the line between genuine human engagement and algorithmic influence. AI artists are regularly topping the charts, demonstrating that AI-generated content can resonate with audiences—even when the "liking" is driven by other AI systems. This creates a complex feedback loop: AI trains on human data, then generates content that is liked by humans (or other AIs), which further refines its models. Research from the Pew Research Center on AI's impact on human behaviour explores these evolving interactions.
Over to YOU: If the future of the Like button is that AI knows what you like before you do, can you still call it your choice?

Latest Comments (2)
this point about AI not just reacting to choices but proactively guiding them is pretty central to the whole agency debate in ML ethics. if the "buy it" button, like you mentioned with ChatGPT, is basically AI nudging us towards purchases, how do we even define free will in a system where algorithms are optimising for certain outcomes? it's not just about content anymore, it's about decision-making itself. wonder if any of the new papers on recommender system fairness touch on this level of pre-emption.
it's interesting how the article touches on AI rewriting online shopping, but I wonder about the ethical implications for languages and cultures less represented in current AI training data. if AI is proactively guiding choices, as mentioned with the 'Buy It' button, how do we ensure it doesn't inadvertently erase or marginalize local preferences and even minor languages, especially in diverse regions like india where online shopping is booming?
Leave a Comment