Getting AI right in 2025: control, control, control

2024 has been a year of rapid AI adoption, with many businesses scrambling to capitalize on the latest advancements for fear of being left behind. However, despite significant investment, organizations often struggle to realize tangible benefits from their AI initiatives. In fact, reports suggest that while 68% of large companies have integrated AI, a quarter of IT professionals regret rapid AI adoption, and two-thirds wish they had chosen technologies more carefully.

Arguably, the root of this issue lies in a lack of control. Organizations are struggling to implement AI tools in a manner that not only brings benefits, but also does not compromise their data privacy. In 2025, businesses need to ensure they choose the right AI tool for the job while retaining the control and privacy their data needs.

Identify why you want an AI tool

Before embarking on an AI initiative, it’s crucial to define clear objectives. What specific problem are you trying to solve? What value do you expect to derive from AI? Is it threat intelligence, enhanced decision-making, or improved customer experience? It is only once these goals are identified that a business can know what type of AI they need.

Crucial to this is finding the right tool for the job. The first step is to understand that while Large Language Models (LLMs) have dominated the headlines and fueled the hype, they are not the only form of AI model. There are a number of tools available that are focused on specialist tasks and solutions, and these may be not only more suitable, but also more capable.

This is because specialist AI is designed to tackle a specific task rather than to serve as a one-size-fits-all solution for every user and use case. What’s more, unlike LLMs, which are trained on vast, often uncurated datasets, specialist AI models focus only on relevant data, resulting in higher accuracy and efficiency. Finally, specialist AI models consume less compute and energy, making them more cost-effective, less environmentally impactful, and faster to implement.

It is crucial to consider all options when seeking the right tool for the job to make sure you retain control of your data and are focused on the job, not the hype. After all, if you choose the wrong tool then you will lose control of your data the second you sign in.

The right data and the right privacy

A heavily touted advantage of LLMs is that they are trained on vast amounts of data and can therefore provide insights and generate content for organizations across all industries and regions. However, while this breadth is an advantage for those who need it, in most business cases it is in fact a drawback.

This is because training on such huge pools of data can reduce the quality, accuracy, and integrity of that data. What’s more, it is often difficult to discover exactly what data an LLM was trained on in order to validate it. This is a particular challenge for businesses that need a high degree of transparency and accuracy in their outputs, as LLMs have been shown to be prone to hallucinations and biases as a result of learning from such vast and varied data.

Specialist AI tools, meanwhile, can offer users the option to choose the data the model is trained on, with the customer able to see and curate those sources transparently. For example, a Small Language Model (SLM) can be fed a number of sources in the form of thesauruses so it accurately understands the specific needs of a user. This covers not just languages in the formal sense, but also the technical jargon of a company’s industry and that company’s own annotations and coded shorthands. This is a highly efficient approach to AI adoption: the tool adapts to the user, rather than staff having to be trained for the tool.
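The curated-glossary idea above can be sketched as a simple preprocessing step. This is a hypothetical illustration rather than any vendor’s actual API: the glossary entries, shorthands, and function names are invented for the example, and a real deployment would curate these sources with the model provider.

```python
# Hypothetical sketch: expanding a company's internal shorthand before a
# prompt reaches a small, domain-tuned language model. The glossary below
# is invented for illustration; in practice an organization would curate
# these sources (thesauruses, annotations, coded shorthands) itself.

COMPANY_GLOSSARY = {
    "FY25": "fiscal year 2025",
    "CSAT": "customer satisfaction score",
    "P1": "priority-one incident",
}

def expand_shorthand(prompt: str, glossary: dict) -> str:
    """Replace known in-house abbreviations with their curated meanings."""
    words = []
    for word in prompt.split():
        # Strip trailing punctuation so "FY25." still matches the glossary.
        stripped = word.rstrip(".,;:")
        if stripped in glossary:
            word = word.replace(stripped, glossary[stripped])
        words.append(word)
    return " ".join(words)

print(expand_shorthand("Summarize every P1 from FY25.", COMPANY_GLOSSARY))
# The expanded prompt is then sent to the domain-specific model.
```

The point of the sketch is the direction of adaptation: the curated sources make the tool understand the company’s language, rather than employees learning to phrase things for the tool.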

A further aspect to consider is the privacy of that data. Any data an organization gives an AI tool to tailor its training must be kept private and confidential, and never shared externally. This matters not just to protect a business from breaches and to keep its sensitive information secret, but also for regulatory and legal reasons: many industries face strict controls over financial, health, and PII data. The same goes for data used in prompts and AI analysis once the tool is in use; any data that passes through or is subjected to an AI tool needs to be secure and private.

For example, LLMs often require vast amounts of data to be shared with third-party providers. This can pose significant risks to sensitive information, particularly for businesses operating in highly regulated industries. In contrast, private AI models, such as specialist AI, can be deployed within a secure, zero-trust environment, ensuring that data remains confidential and protected from unauthorized access.
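One common mitigation when data must cross an organization’s boundary is to redact sensitive tokens from prompts before they reach a third-party provider. The sketch below is a deliberately minimal illustration of that idea, not a production redaction system; real coverage would need far broader patterns (names, addresses, account numbers) and review by compliance.

```python
import re

# Hypothetical sketch: masking obvious PII in a prompt before it leaves
# the organization's boundary for a third-party LLM. These two patterns
# (email addresses and US-style SSNs) are illustrative only.

REDACTION_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

def redact(prompt: str) -> str:
    """Mask recognized sensitive tokens before the prompt is sent externally."""
    for pattern, placeholder in REDACTION_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Draft a reply to jane.doe@example.com about SSN 123-45-6789."))
```

A private or on-premises specialist model avoids the problem at the source; redaction of this kind is a fallback for the data that still has to travel.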

By opting for a private AI solution, organizations can safeguard their intellectual property and maintain control over their data, mitigating the potential for data breaches and reputational damage. They can therefore use the AI with even their most confidential and regulated data, rather than limiting it to publicly available material, maximizing the potential gains of the tool.

Integration, control and security

It is imperative that an organization has complete control over how the AI is implemented into their workflow and system with all data access tightly controlled and transparent. This is particularly important in industries working with sensitive and regulated data as they need to be able to report on how that data has been used and who has had access to it.

The importance of this has been highlighted in 2024 by a number of surveys and reports uncovering the prevalence of data exposure due to AI tools. For example, research by Syrenis found that 71% of AI users regret sharing their data with AI tools after realizing the extent of what was shared, while a RiverSafe survey of CISOs revealed that one in five UK companies exposed sensitive corporate data as a result of employees using AI tools.

To put it bluntly, if an AI tool, or indeed any tool, harvests a business’s data or shares their information externally, then that business is at risk of a breach and could be at risk of failing compliance requirements.

When implementing new AI tools, pay close attention to how they integrate with your existing architecture and ensure they do not require data to be stored outside your control. For example, if a business chooses a cloud-based AI tool, it is crucial that it can either host that cloud infrastructure on its own systems, or prevent third-party access to the data and protect it from cyberattacks such as ransomware. This can be achieved by combining the cloud provider’s infrastructure with your own decentralized storage, for example blockchain, and implementing strict access controls and encryption.

These same encryption and access measures can also give you control over what data is accessed and by whom, ensuring your information is protected by least-privilege access, with nobody able to reach data they do not need. Homomorphic encryption can go further, keeping data encrypted at rest, in transit, and in use, with search and computation possible on the fully encrypted data. However, while the security and privacy of the data are crucial, it is also important to check the scalability and speed of the system to ensure the AI can deliver the real-time insights and services today’s market demands.
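The least-privilege principle described above amounts to a deny-by-default check in front of the AI tool’s data layer: a request succeeds only if the role holds an explicit grant for that dataset. The sketch below illustrates the idea; the role and dataset names are invented for the example, and real systems would typically delegate this to an identity provider or policy engine.

```python
# Hypothetical sketch of least-privilege access control in front of an AI
# tool's data layer. Anything not explicitly granted is denied by default.
# Role and dataset names are invented for illustration.

ROLE_PERMISSIONS = {
    "analyst": {"sales_reports"},
    "clinician": {"patient_records"},
    "auditor": {"sales_reports", "access_logs"},
}

def can_access(role: str, dataset: str) -> bool:
    """Deny by default: access requires an explicit grant for this role."""
    return dataset in ROLE_PERMISSIONS.get(role, set())

# An analyst may query sales data through the AI, but never patient records.
assert can_access("analyst", "sales_reports")
assert not can_access("analyst", "patient_records")
# Unknown roles get nothing.
assert not can_access("intern", "sales_reports")
```

Checks of this kind also produce the audit trail regulated industries need: every grant is explicit, so reporting on who could access what becomes a matter of reading the policy rather than reconstructing it.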

Final thoughts

The successful implementation of AI hinges on a balanced approach that prioritizes control, data privacy, and security. By carefully selecting AI tools tailored to specific needs, prioritizing data quality and transparency, and implementing robust security measures, organizations can harness the power of AI while mitigating potential risks.

As the AI landscape continues to evolve, it is imperative to stay informed about emerging technologies and best practices to ensure that AI is used responsibly and ethically. By adopting a proactive and strategic approach, organizations can unlock the full potential of AI and drive innovation while safeguarding their interests by retaining control.


This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro