Accurate AI responses demand appropriate queries
So why is ChatGPT having issues, and does that mean we can't rely on its capabilities? The answers lie in understanding its limitations.
First and foremost, it's important to note that ChatGPT is an example of Artificial Narrow Intelligence (ANI), not Artificial General Intelligence (AGI). ANI systems are very good at performing the one type of task for which they have been trained, but they cannot handle tasks outside that training, however simple. For example, an ANI system designed to generate images will likely be unable to answer a simple mathematical question such as "What is five plus seven?"6
Second, ChatGPT is a generative AI model, designed to generate new content based on a set of inputs and rules. Its primary application is generating human-like responses, but it lacks human-like reasoning skills. In ChatGPT's own words: "I am designed to be able to generate human-like text by predicting the next word in a sequence based on the context of the words that come before it."
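To make that mechanism concrete, here is a minimal sketch of next-word prediction. It uses the small open-source GPT-2 model via the Hugging Face transformers library as a stand-in; ChatGPT's actual model is far larger, but the core loop of predicting one token at a time based on the preceding context is the same idea.

```python
# A minimal sketch of next-word prediction using the open GPT-2 model
# (a small stand-in for ChatGPT's much larger model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Generate a few tokens greedily: at each step, pick the single most
# probable next token given everything generated so far.
for _ in range(5):
    with torch.no_grad():
        logits = model(input_ids).logits
    next_id = logits[0, -1].argmax()  # most likely next token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

The model never "knows" facts in a human sense; it simply continues the sequence with whatever tokens are statistically most plausible, which is why pairing it with a suitable use case matters so much.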
Therefore, for ChatGPT to be trustworthy, it's the responsibility of each user to apply its AI capabilities to a suitable use case. Equally important, developers should use reliable datasets to train the AI model and apply relevant bias and content filters. In classical computing, the concept of GIGO (garbage in, garbage out) is pervasive and holds true. With AI, it becomes GISGO (garbage in, super garbage out): unreliable training data doesn't just pass through, it is amplified across everything the model generates.
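As a deliberately simplified illustration of guarding against GISGO, the sketch below filters obviously unreliable records out of a corpus before they ever reach a model. The quality checks and blocklist are hypothetical placeholders; production pipelines use far more sophisticated bias and content filters.

```python
# A toy pre-training filter: drop unreliable records before training.
# The junk markers and length threshold below are hypothetical examples.
BLOCKLIST = {"lorem ipsum", "click here to win"}

def is_reliable(record: str) -> bool:
    text = record.strip().lower()
    if len(text) < 20:  # too short to be informative
        return False
    if any(marker in text for marker in BLOCKLIST):
        return False
    return True

raw_corpus = [
    "Paris is the capital of France and its largest city.",
    "click here to win a FREE prize!!!",
    "ok",
]
training_corpus = [r for r in raw_corpus if is_reliable(r)]
print(training_corpus)  # only the first record survives
```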
The good news is that ChatGPT is quite aware of its limitations and can respond to users accordingly. ChatGPT also combines supervised learning with reinforcement learning, which provides the benefits of faster learning through a reward system and the ability to learn from human input.
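The reward idea behind that reinforcement learning step (commonly called reinforcement learning from human feedback, or RLHF) can be sketched in miniature: candidate responses are scored by a reward model trained on human preferences, and higher-scoring behavior is reinforced. The candidates and the hand-written reward function below are hypothetical placeholders, not OpenAI's actual implementation.

```python
# A toy sketch of the reward signal used in RLHF-style training.
def reward_model(response: str) -> float:
    """Stand-in for a learned reward model: a hand-written heuristic
    that prefers polite, non-empty answers."""
    score = 0.0
    if response:
        score += 1.0
    if "please" in response.lower() or "happy to help" in response.lower():
        score += 0.5
    return score

candidates = [
    "",
    "No.",
    "I'd be happy to help. Could you share more details, please?",
]
best = max(candidates, key=reward_model)
print(best)  # the polite, informative candidate scores highest
```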
Establish guardrails to maximize the benefits of AI
As organizations explore use cases for powerful new AI solutions like ChatGPT, it's crucial that cyber and risk teams set guardrails for secure implementation. The following steps can help organizations get ahead of the hype; the list is non-exhaustive and offers initial steps to consider as AI continues to evolve:
- Set expectations for how ChatGPT and similar solutions should be used in an enterprise context. Develop acceptable use policies; define a list of approved solutions, use cases and data that staff can rely on; and require checks to validate the accuracy of responses.
- Establish internal processes to review the implications and evolution of regulations regarding the use of cognitive automation solutions, particularly the management of intellectual property, personal data, and inclusion and diversity where appropriate.
- Educate your people on the benefits and risks of using these AI solutions, as well as how to get the most out of them, including suitable use cases and the importance of training the model with reliable datasets.
- Implement technical cyber controls, paying special attention to testing code for operational resilience and scanning for malicious payloads. Other controls include, but are not limited to:
  - Multifactor authentication and enabling access only to authorized users;
  - Application of data loss prevention (DLP) solutions;
  - Processes to ensure all code produced by the tool undergoes standard reviews and cannot be copied directly into production environments;
  - Configuration of web filtering to provide alerts when staff access non-approved solutions (a minimal sketch of such an alerting check follows this list).
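To illustrate the web-filtering control above, here is a minimal sketch that scans proxy log entries and raises an alert whenever someone reaches a known AI service that is not on the organization's approved list. The domains, log format and alert mechanism are all hypothetical placeholders; a production deployment would integrate with existing proxy and SIEM tooling.

```python
# A toy web-filtering check: flag access to non-approved AI services.
# All domains and log entries below are hypothetical examples.
APPROVED_AI_DOMAINS = {"chat.openai.com"}
KNOWN_AI_DOMAINS = {"chat.openai.com", "some-unvetted-ai.example"}

proxy_log = [
    ("alice", "chat.openai.com"),
    ("bob", "some-unvetted-ai.example"),
]

for user, domain in proxy_log:
    if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
        # In production this would feed a SIEM or ticketing system.
        print(f"ALERT: {user} accessed non-approved AI solution {domain}")
```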