By Sam Page, CEO of 7DOTS
Is innovation always a good thing? The answer hinges on who benefits and how. In the case of AI, rapid advancements have propelled the technology forward at breakneck speed as companies race to integrate it into digital experiences.
While tech leaders focus on how to innovate faster, the public’s reception is more cautious. Research from Dentsu reveals a stark contrast: 78% of consumers believe AI is the future, yet only 39% are enthusiastic about it.
This highlights a clear democratic deficit in the adoption of technology, which can only be addressed by prioritising the needs of citizens and consumers.
This isn’t to say we should put the AI genie back in the bottle, but that businesses and policymakers need to ensure a plurality of voices guides its magic. We also need a robust framework to support ethical AI practices. However, there are growing concerns that we may be heading in the wrong direction. The incoming US administration’s stance on AI regulation is a prime example.
The Perils of Unregulated AI
As the undisputed global leader in tech and AI, the US attracted a staggering $67.2 billion in private AI investment in 2023, dwarfing other countries.
In an effort to maintain this dominance, Donald Trump has pledged to deregulate the industry to accelerate innovation. While this might seem appealing to businesses, it could have severe unintended consequences.
Despite claims that deregulation spurs progress, even a pioneering leader like Sam Altman, CEO of OpenAI, has emphasised the need for the US to ‘align AI leadership with democratic values.’
Without robust AI regulations, we risk a future plagued by biased algorithms, opaque decision-making, and harmful tools like deepfakes. History should guide us as it is rife with examples of technological innovations that, without proper safeguards, have led to unintended negative consequences.
The 2008 financial crisis serves as a warning of the dangers of unregulated financial technology. The finance industry’s reputation was severely damaged by its failure to properly understand and regulate complex financial instruments.
More recently, social media platforms have connected people globally, but they have also contributed to the spread of misinformation, cyberbullying, privacy infringements and addiction. This has led to significant public backlash against these platforms.
AI technology, if not handled responsibly, could exacerbate existing issues like data privacy, algorithmic bias, and the misuse of AI tools, particularly impacting vulnerable groups and amplifying social inequalities. So companies must proceed with caution to avoid alienating their audiences.
Building Consumer Trust in the AI Age
In 2025, the focus for brands and businesses has to be on bringing customers along on the journey. It is vital to build in true consent and to establish stronger, universally agreed-upon standards.
To mitigate the risk of widespread alienation, businesses and policymakers should adopt a human-centric approach to AI, prioritising the needs and well-being of users: demonstrating the benefits while recognising the boundaries people want to see.
After all, AI has the potential to improve everything businesses do across the digital experience, from chatbots to personalised content. But this means nothing if consent hasn’t first been established. To achieve this, businesses should keep the following in mind:
- Be Transparent About AI Use: Clearly inform consumers when AI is being used in chatbots, recommendations, or content to build trust.
- Ensure Responsible AI Practices: Use AI ethically, respecting privacy and data security, and communicate these practices to audiences.
- Maintain Human Touchpoints: Balance AI interactions with opportunities for consumers to connect with real people when needed.
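The three practices above can be sketched in code. This is a purely illustrative example, not any real product’s implementation: the class and function names are hypothetical, and a production system would involve far more than this.

```python
# Hypothetical sketch of the three practices above: disclose AI use,
# check explicit consent before offering AI help, and keep a human
# handoff available. All names here are illustrative.

from dataclasses import dataclass


@dataclass
class UserPreferences:
    ai_consent: bool = False   # explicit opt-in to AI-powered features
    wants_human: bool = False  # the user has asked for a real person


AI_DISCLOSURE = "You are chatting with an AI assistant."


def open_chat(prefs: UserPreferences) -> list:
    """Return the opening messages for a support chat session."""
    messages = [AI_DISCLOSURE]  # transparency: always disclose AI use
    if prefs.wants_human or not prefs.ai_consent:
        # human touchpoint: route to a person when consent is absent
        # or the user has requested one
        messages.append("Connecting you to a human agent...")
    else:
        messages.append("How can I help you today?")
    return messages
```

For example, `open_chat(UserPreferences(ai_consent=True))` returns the disclosure followed by the AI greeting, while a user without consent is routed straight to a human. The point of the sketch is that disclosure happens unconditionally, before any branching on consent.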
Several companies have successfully pioneered this approach, earning the trust of their customers. Apple, for instance, has thoughtfully integrated AI into its digital experience through Apple Intelligence and Siri.
By prioritising transparency and user experience, Apple has gradually introduced AI features without overwhelming users. This measured approach involves bringing customers along the journey.
Spotify’s AI-powered recommendation engine takes a similar approach, openly communicating its benefits to users. By analysing listening habits, the engine curates personalised playlists, forging a deeper connection between users and their favourite artists. This positive value exchange is built on user consent and transparency.
A New Approach to AI
To ensure that we guide customers on a successful journey into the AI-driven future, businesses must strike a balance between innovation and responsibility. By prioritising human values and long-term societal impact, we can build a culture of ethical innovation.
In 2025, let’s focus on inspiring people with the positive potential of AI. By prioritising consent and transparency, we can build trust and ensure that innovation benefits everyone.