James W. Marshall and ChatGPT have one thing in common: both sparked a “rush” and changed the world. While Marshall found the first gold nugget in January 1848, triggering a rush of 300,000 people to California, the launch of ChatGPT (then powered by GPT-3.5) in November 2022 caused a stir among users and tech investors. AI and Large Language Models (LLMs) suddenly became mainstream, with millions of users rushing to use the chatbot, changing the world forever.
Negative aspects of the AI boom are now coming to light, whether in handling copyrights, bias, ethics, privacy, security, or the impact on jobs. That is why the EU’s intention to consider ethical and moral issues by regulating technology with the AI Act is timely and appropriate. At the same time, perhaps every major company on the planet has considered how to intelligently integrate Artificial Intelligence into their websites, products, and services to increase productivity, optimize customer satisfaction, and ultimately boost sales.
Don’t turn a blind eye to risks and side effects
Like the gold rush, the AI boom has created a rapid influx of people jumping on the bandwagon for fear of missing out on the opportunity. However, the use of AI in companies should not be conducted in “Wild West” fashion; instead, it should come with a clear warning, much like nicotine advertising, because ignoring AI’s risks and side effects could – in extreme circumstances – have fatal consequences.
The more typical risks range from development departments accidentally sharing designs or lines of code with public LLMs to changing customer expectations about how companies use AI and their data. But these risks can scale exponentially and cause real harm: in 2016, Microsoft’s Tay chatbot posted roughly 95,000 tweets in just 16 hours, many of them racist and misogynistic. According to a study by Cohesity, more than three-quarters of consumers (78 percent) have serious concerns about the unrestricted or uncontrolled use of their data by AI.
But how can AI be tamed? It has already been deployed across many companies without anyone setting rules for its use or monitoring compliance – comparable to the “rush” to cloud computing, which left many firms starting over from scratch and losing time and money. To prevent history from repeating itself, any organization that wants to use AI responsibly in the coming year must rein in this proliferation internally, control access, and enforce strict AI policies. Many companies, including Amazon and financial giant JPMorgan Chase, have already restricted staff use of ChatGPT to establish a high level of control before the floodgates open, and they plan to gently reintroduce appropriate access once usage policies and technical controls are in place.
It is also crucial for companies to clearly define which data their own AI projects can access and how they can process it. Classic role-based access controls that link roles and tasks with data sources are a good option for controlling this in a scalable manner: only those with the necessary privileges can open the data sources. These roles should also ensure that someone who is not allowed to open specific data sources for legal reasons cannot do so, and that geographic constraints such as data sovereignty are tightly enforced. A minimal sketch of how such a check might look is shown below.
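To illustrate the idea, here is a minimal Python sketch of a role-based check that combines data classification and data-residency rules. The role names, data sources, and regions are hypothetical examples, not a reference to any specific product or policy:

```python
# Minimal sketch of role-based access control for AI data sources.
# Role names, data sources, and regions are hypothetical examples.
from dataclasses import dataclass


@dataclass(frozen=True)
class DataSource:
    name: str
    region: str          # where the data must legally reside
    classification: str  # e.g. "public", "internal", "restricted"


# Which classifications and regions each role may access
ROLE_POLICY = {
    "ml_engineer":  {"classifications": {"public", "internal"}, "regions": {"eu"}},
    "data_steward": {"classifications": {"public", "internal", "restricted"}, "regions": {"eu", "us"}},
}


def can_access(role: str, source: DataSource) -> bool:
    """Return True only if the role may read this source in its region."""
    policy = ROLE_POLICY.get(role)
    if policy is None:
        return False
    return (source.classification in policy["classifications"]
            and source.region in policy["regions"])


crm_exports = DataSource("crm_exports", region="us", classification="restricted")
print(can_access("ml_engineer", crm_exports))  # False: wrong region and classification
```

In practice, the same check would sit in front of every data pipeline that feeds an AI project, so that both humans and the AI itself are bound by the same rules.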
What is rarely checked today – and could become problematic in the future – is whether and how it is possible to trace exactly what the AI models were fed (trained on) and in what order. This blind spot can have legal, moral, and ethical consequences. If an AI makes a fatal decision, that will have problematic consequences in at least one – or, in the worst case, all – of those areas. A rigorous judge will want to know how the models were trained to arrive at that outcome, and companies may also be required to keep a full version history of model training for a prescribed period.
Make learning processes transparent and install a “back” button
Therefore, it is crucial to classify the data being fed in and to document the learning process. This will enable companies to create more transparency for customers and improve the quality of the learning process. It also means approaching this in a governed and responsible manner: using only appropriately approved data, ensuring that the AI and the humans around it have the right level of access, and making sure neither can amend data inappropriately or see data they are not allowed to see – role-based access controls protect privacy and keep the AI’s own access properly in check. A simple way to make the learning process traceable is sketched below.
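One lightweight way to document what a model was trained on, and in what order, is an append-only manifest that records each dataset before it is used. The following Python sketch assumes illustrative file paths and field names; it is not a description of any particular vendor’s tooling:

```python
# Minimal sketch of an append-only training-data manifest, so that what the
# model was fed -- and in what order -- can be reconstructed later.
# File paths and field names are illustrative assumptions.
import datetime
import hashlib
import json
import pathlib

MANIFEST = pathlib.Path("training_manifest.jsonl")


def record_training_input(dataset_path: str, classification: str, approved_by: str) -> None:
    """Append one entry describing a dataset before it is used for training."""
    data = pathlib.Path(dataset_path).read_bytes()
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "dataset": dataset_path,
        "sha256": hashlib.sha256(data).hexdigest(),  # proves exactly which version was used
        "classification": classification,            # e.g. "approved-internal"
        "approved_by": approved_by,
    }
    with MANIFEST.open("a") as f:
        f.write(json.dumps(entry) + "\n")


# Example (hypothetical dataset and approver):
# record_training_input("data/support_tickets_q1.csv", "approved-internal", "data_steward@example.com")
```

Because the manifest is ordered and hashes each dataset, it can later answer the judge’s question from above: what exactly went into the model, when, and on whose approval.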
At the same time, however, the AI learning process is still a mystery; it takes place in mathematically complex algorithms, and, above all, it takes a long time. For years, Tesla has trained its AI to drive autonomously in real traffic situations. But how do you protect the essence of years of learning from loss and incorrect input? How do you protect that learning from competitors or threat actors who may want to adversely influence its behavior? How do you protect your intellectual property from being used unlawfully in AI training? A good example of the latter is the New York Times suing OpenAI and Microsoft over the unauthorized use of NYT articles to train GPT LLMs. This leads us nicely back to approaching AI in a responsible and governed manner.
So far, no startup has devised a way for an AI engine to record which bits and bytes were changed during the learning process after fresh data is fed in. Anyone who wants to reset an AI to an earlier state because it was fed the wrong thing – legally protected content, for example – will be unable to do so directly in the AI engine. They need a workaround that is already well established in other areas of IT. In IT security, tried-and-tested methods can also help to better protect AI models: some solutions make it possible to take snapshots of the entire system and return to a previous version in an emergency. You then lose the learning done between the snapshot’s creation and the point at which the problem was identified, but not all of the accumulated knowledge. Companies should bear this in mind and take advantage of it when weighing the risks of AI. The sketch below shows the basic idea.
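As a rough illustration of the snapshot-and-rollback idea borrowed from backup and IT-security tooling, the following Python sketch treats the trained model as a file artifact. The paths and naming scheme are assumptions for the example, not a specific product’s behavior:

```python
# Minimal sketch of snapshot-and-rollback for a trained model artifact,
# borrowing the snapshot idea from backup/IT-security tooling.
# Paths and the naming scheme are illustrative assumptions.
import datetime
import pathlib
import shutil

MODEL_FILE = pathlib.Path("models/current_model.bin")
SNAPSHOT_DIR = pathlib.Path("models/snapshots")


def take_snapshot() -> pathlib.Path:
    """Copy the current model artifact into a timestamped snapshot."""
    SNAPSHOT_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    target = SNAPSHOT_DIR / f"model_{stamp}.bin"
    shutil.copy2(MODEL_FILE, target)
    return target


def roll_back(snapshot: pathlib.Path) -> None:
    """Restore a known-good snapshot, discarding training done since it was taken."""
    shutil.copy2(snapshot, MODEL_FILE)


# Typical use: snapshot before each training run, roll back if bad data slipped in.
# good = take_snapshot(); ...train...; roll_back(good)
```

Taken together with the training manifest above, this gives a company both a record of what went into the model and a practical “back” button when something that should never have been included turns up in the training data.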