Last updated on August 1st, 2023 at 12:38 pm
This past fall, the explosive arrival of ChatGPT introduced a new technology term to the mainstream: “generative artificial intelligence (AI)”. New competitors in the field are rapidly emerging, hunting for new and innovative data solutions for companies and individuals alike. While the range of potential applications is staggering, errors will be costly. At this crossroads, it is imperative that companies consider the risks they face with generative AI, including discrimination, fraud, and increasingly sophisticated manipulation through fake news and deepfakes. Companies must get it right, and below are some key considerations when defining an AI governance framework.
Define what AI can do for you.
Given the rise in its popularity, people are inundated each day with information about the latest generative AI tools. While this progress seems to happen at rocket speed, putting generative AI in the right perspective can be a difficult task for organizations.
Beneath the glitz and glamour of this new technology, it is crucial to see generative AI through the right prism: the prism of productivity. When used properly, these tools will assist (not replace) humans. With the right long-term strategy, generative AI will unlock unprecedented creativity and profoundly transform the operations of a company. By streamlining tasks, providing quick access to information, and offering valuable support in many aspects of work and daily life, generative AI can unquestionably help us become more productive. By training AI on enterprise data, companies can let AI handle mundane, repetitive tasks while employees focus on higher-value work.
Crafting the right AI governance framework starts with harnessing data: ensuring it is organized appropriately and that the company has the skills to prepare the right datasets, avoiding biased outputs as well as privacy-related issues. For example, at Wipro, we are working on multiple use cases with our customers to create synthesized datasets that can be used for various testing scenarios and to build new operating models. This is very important, especially given the requirements for large datasets. Synthetic data helps because it removes many of the legal challenges around personal and confidential information.
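To make the idea concrete, here is a minimal, hypothetical sketch of what a synthetic dataset generator might look like. This is an illustration only, not Wipro's actual tooling: the field names, value pools, and spend distribution are all invented for the example. The point is that every record is fabricated from distributions, so test scenarios can run on realistic-looking data that contains no real personal information.

```python
import random

def synthesize_records(n, seed=0):
    """Generate n synthetic customer-like records containing no real PII.

    Values are drawn from fixed pools and a lognormal spend distribution,
    so the output is deterministic for a given seed and safe to share
    with testing teams.
    """
    rng = random.Random(seed)  # seeded for reproducible test datasets
    regions = ["North", "South", "East", "West"]
    plans = ["basic", "standard", "premium"]
    records = []
    for i in range(n):
        records.append({
            "customer_id": f"SYN-{i:06d}",  # clearly-marked synthetic ID
            "region": rng.choice(regions),
            "plan": rng.choice(plans),
            # rough stand-in for a skewed spend distribution
            "monthly_spend": round(rng.lognormvariate(3.5, 0.6), 2),
        })
    return records

sample = synthesize_records(1000)
print(len(sample), sample[0]["customer_id"])
```

In practice, production-grade synthetic data tools also preserve statistical correlations from the source data and measure re-identification risk; this sketch shows only the basic principle of replacing real records with generated ones.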
Employees are an asset in your digital transformation journey.
In a nutshell, AI is not here to hire and fire employees, but rather the opposite. Integrated well, AI can help unleash creativity, increase knowledge, and give employees greater independence. Employees play a key part in every stage of the digital transformation journey, whether through training, upskilling, or engaging directly with the technology.
This means employees must be trained to be familiar with these tools. As part of our recently released ai360 program, 250,000 Wipro employees will be trained in AI within a span of three years. Although this is an ambitious plan, I believe there is no alternative. As these tools hold both promise and risk, it is important to familiarize ourselves with them and develop a critical approach to harness their potential and mitigate any risks.
Establish the right AI governance framework.
Regulation specific to AI is also on the rise in Canada, with the first national regulatory framework for AI proposed within Bill C-27, the Digital Charter Implementation Act. Aligning with the European Union’s Artificial Intelligence Act, C-27 takes a risk-based approach to the responsible development and adoption of AI across the Canadian economy.
Companies cannot wait: however they decide to deploy these tools, they need the right AI governance framework in place. Whether access is direct via an application programming interface (API), through fine-tuning with enterprise data, or by building large language models (LLMs) from scratch, companies need to establish responsible-use and development policies and procedures. They must also put resources into monitoring the product once it is out. At Wipro, we have a task force that ensures everything we use and develop is in tune with our guiding principles of responsible AI, which include privacy, fairness, transparency, interpretability, and environmental and social sustainability.
Bringing together innovation and responsibility.
As the world continues to face challenges, AI can be part of both the problem and the solution, and there is no one-size-fits-all answer. Organizations need to ensure that they have the right AI governance framework in place, with AI responsibility at its very heart.
AI products are socio-technical artifacts, in the sense that they are a bundle of data, parameters, and people. Fairness is not going to be an obvious outcome unless there is a deliberate choice to achieve an equitable output. This means that companies will have to make choices and, in some situations, fairness and efficiency may be at odds.
Similar choices will have to be made around privacy and data protection. Investments in privacy-enhancing solutions and synthetic data will not only enable moving away from a model rooted in data extraction but will also reduce risks for individuals.
While generative AI continues to gain momentum in organizations, patience for trial and error will be necessary. Now more than ever, it is important for companies to focus on long-term innovation, productivity, and the well-being of our society and people, which will happen if innovation and responsibility go hand in hand.
By Ivana Bartoletti, Global Chief Privacy Officer at Wipro