
Ask a question on almost any topic, and ChatGPT has a reasonable answer ready. You can ask it to write a song or give you a five-part framework for a corporate digital strategy. On most general topics, the output will likely be sensible. But on more specific questions, it might get a fair amount of detail wrong.

People have used generative AI to negotiate discounts on phone bills, dispense therapy to real-life patients, write Python code, poems, songs or novels, and to take (or cheat in) exams. In general, large language models (LLMs) produce good results, and often ones that appear amazing.

As such, they could signal a shift in the way communications and businesses work. But it would be all too easy to assume that it's time to make room for our AI overlords. Several writers have, with some irony, written about how AI will likely put them out of business. That sort of panic is a mistake. To understand the potential, let's look at how AI tools like ChatGPT work, what they're capable of, and how businesses can use them.

What's behind the interface?

The most recent generation of AI is based on LLMs. Interestingly, ChatGPT combines an LLM with an interaction layer that uses reinforcement learning from human feedback.

An LLM is a neural network model that uses unsupervised learning to predict the next element in a sequence of text. Among the many AI models developed, LLMs are notably hard to explain: it is rarely possible to say exactly why one produced a particular output.

Language models (as distinct from large language models) have existed for a while and can predict the next word or phrase in a sentence. They use different techniques than LLMs and have different applications; auto-correct is a common use.
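To make the idea concrete, here is a minimal sketch of next-word prediction using bigram counts, assuming a tiny invented corpus; real auto-correct systems are far more sophisticated, and all names here are made up for illustration.

```python
from collections import Counter, defaultdict

# Tiny invented corpus; real language models train on vastly more text.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word (bigram counts).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` in the corpus."""
    followers = next_word_counts.get(word)
    if not followers:
        return "<unknown>"
    return followers.most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' (ties broken by first-seen order)
print(predict_next("sat"))  # -> 'on'
```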

Adding the 'large' element involves training the models on a large collection of publicly accessible electronic documents. That collection (or 'corpus' in AI terminology) comprises many petabytes of data, one petabyte being a million gigabytes. Training a model on such massive amounts of data allows it to learn about many topics, as well as language patterns.

So LLMs are 'large' partly because of the amount of data they're trained on, but also because of the size of the models themselves. A few years ago, a complex model might have had a couple of hundred million parameters; LLMs have billions. ChatGPT's underlying LLM has 175 billion parameters and was trained on something like 500 billion 'tokens' (words or word fragments).
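To show what a 'token' looks like in practice, here is a minimal sketch using the open-source tiktoken library; the choice of the cl100k_base encoding is an assumption for illustration, since different OpenAI models use different encodings.

```python
# pip install tiktoken
import tiktoken

# cl100k_base is one of OpenAI's published encodings, chosen here as an example.
enc = tiktoken.get_encoding("cl100k_base")

text = "Large language models predict tokens, not words."
token_ids = enc.encode(text)

print(token_ids)                              # a list of integer token IDs
print(len(token_ids))                         # usually fewer tokens than characters
print([enc.decode([t]) for t in token_ids])   # the words and word fragments
```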

The advances we've seen so far are largely a result of efforts to answer a single question: how can a model with that many parameters do something useful?

In the Philippines, AI is also being integrated into business operations, with these advanced technologies used for a wide range of applications, including natural language processing, predictive analytics and machine learning.

However, it's important to note that the use of AI also raises ethical and privacy concerns, particularly around the use of personal data.


It is of paramount importance that businesses embrace a mindset that places transparency, responsibility and ethics at the core of their decision-making when harnessing the power of artificial intelligence. By doing so, businesses can navigate the complexities and potential pitfalls associated with AI adoption, and foster an environment of trust and accountability among stakeholders.



Jallain Marcel S. Manrique
Technology Consulting Head
乐鱼(Leyu)体育官网 in the Philippines


Making AI work

1. A good UX

AI results must meet a certain threshold to be useful. For the better part of a decade, AI tools have outperformed clinicians in determining whether MRI scans show cancer. For some 20 years, other tools have been able to predict from a CV whether an employee will be successful at a company. But these applications failed to gain traction because the users who would need to adopt them weren't convinced by the UX.

Perhaps an extension of the Turing Test should be whether AI tools feel too smart or bossy, and whether we can ask our own questions rather than being told what to ask.

2. Failing well

ChatGPT answers sound coherent and authoritative, even when some of the details are flawed. This is what we call a good "fail state".

Failing well can be more critical to AI adoption than succeeding (i.e., being accurate). If users have a poor experience, even just once or twice, they'll quickly lose trust in the tool.

Good fail states vary between applications. Sounding plausible, but getting the finer details wrong, isn't a great fail state for an investment advice tool. In some cases, a good fail state may mean asking for more information or allowing users to refine the output via a conversation. ChatGPT does this, as do some image-generation AI tools.
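As a sketch of that design idea, the snippet below shows one way an application might implement such a fail state: when the model layer reports low confidence, the tool asks a clarifying question instead of asserting a guess. The ask_model function, the confidence score and the threshold are all hypothetical, invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ModelReply:
    text: str
    confidence: float  # hypothetical score in [0, 1] from the model layer

def ask_model(question: str) -> ModelReply:
    """Hypothetical call into an LLM service, stubbed out for this sketch."""
    # A real implementation might estimate confidence from token
    # log-probabilities or a separate verifier model.
    return ModelReply(text="Draft answer...", confidence=0.42)

CONFIDENCE_THRESHOLD = 0.7  # assumed cut-off; would be tuned per application

def respond(question: str) -> str:
    reply = ask_model(question)
    if reply.confidence < CONFIDENCE_THRESHOLD:
        # Good fail state: ask for more information rather than guess.
        return "I may be missing context. Could you share more detail?"
    return reply.text

print(respond("Should I rebalance my portfolio?"))
```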

3. Ethical boundaries

The use of AI is fraught with ethical complications. There are many examples of language models "learning" to be offensive because they absorb offensive content into their corpus. And let's face it: social media is rife with bad patterns for them to learn from.

It's pretty difficult to get LLMs not to learn certain things, even though that's what their analytical and policy layers are for. This is a particular problem for image-generation models, which are capable of producing images in the style of specific artists. That's why some recently developed tools have introduced artist rights protections.

Consent is another key ethical consideration. A non-profit mental-health platform ran into trouble for using AI to provide health counseling without informing the patients that the content was AI-generated.

Work in progress

LLMs can be deployed to address a wide range of problems, many quite limited in scope. They can be modified to help summarize and classify legal documents; respond to customer inquiries; assist expert advisors; and generate engineering and architectural drawings. Such applications require labeling to produce a good UX, but far less of it than previous generations of language-modeling technology.
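As an illustration of the first of those applications, here is a minimal sketch of legal-document classification using the OpenAI Python package as it existed in early 2023 (pre-1.0); the label set, prompt wording and model choice are assumptions made for the example, not a recommended design.

```python
# pip install "openai<1.0"  (this sketch uses the pre-1.0 API style)
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Assumed label set for the example; a real deployment would define its own.
LABELS = ["contract", "court filing", "regulatory notice", "other"]

def classify_document(text: str) -> str:
    """Ask the model to assign exactly one label to a legal document."""
    prompt = (
        f"Classify the following legal document as one of: {', '.join(LABELS)}. "
        f"Reply with the label only.\n\n{text}"
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,          # keep label output as deterministic as possible
    )
    return response["choices"][0]["message"]["content"].strip()

print(classify_document("This Agreement is made and entered into by..."))
```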

All the same, there's work to do before we can use the latest generation of AI for customer, employee, citizen and business interactions.

The starting point for that work is UX. While modifying models and working on analytics, businesses must think about users, interactions and processes. Users may range from employees and expert practitioners to customers, regulators and legal supervisors. How might each of these transition to using AI-powered tools?

Counterintuitively, limiting what models can do may have a more transformative impact, as users are likelier to reject more far-reaching change.

Start-ups may have the luxury of targeting users who are comfortable with change. And they can start small, making headway before regulation and enforcement catch up with the potential impact of their applications at scale. However, larger firms will likely face a UX challenge from the outset; that should be the target for AI pilots.

The excerpt was taken from the 乐鱼(Leyu)体育官网 Thought Leadership publication: /xx/en/home/insights/2023/02/the-potential-impact-of-chatgpt-and-the-new-ai-on-business.html

© 2023 R.G. Manabat & Co., a Philippine partnership and a member firm of the 乐鱼(Leyu)体育官网 global organization of independent member firms affiliated with 乐鱼(Leyu)体育官网 International Limited, a private English company limited by guarantee. All rights reserved.

For more information, you may reach out to Technology Consulting Head Jallain Marcel S. Manrique through [email protected], social media or visit .

This article is for general information purposes only and should not be considered as professional advice to a specific issue or entity. The views and opinions expressed herein are those of the author and do not necessarily represent 乐鱼(Leyu)体育官网 International or 乐鱼(Leyu)体育官网 in the Philippines.