Automate multilingual CX in 5 steps
Since OpenAI’s release of ChatGPT in November 2022, there has been hype surrounding its ability to produce human-like text for a variety of purposes: writing essays, creating marketing content, and even writing code.
A quick technical background: ChatGPT is trained with humans in the loop. It is optimised for contextual conversations using reinforcement learning from human feedback (RLHF), a technique that relies on humans to author the articles and conversations that form the model’s training data. Humans are also required to write instructions that direct the model to generate long, informative, and coherent texts. Applying this technique to the largest-ever volume of training data has allowed ChatGPT to offer impressively smooth conversational ability.
Amidst the initial hype, there is speculation that ChatGPT could outright replace search engines, since it can generate a coherent answer that is much easier for a human to absorb than a list of search results or a Wikipedia page. However, within parallel industries such as AI customer experience, large language models (LLMs) like ChatGPT have certain limitations and complementary capabilities that are worth digging into…
Since the release of transformer models in 2017, large language models have been taking over the field of natural language processing (NLP), with ever-increasing datasets and higher associated cloud-hosting costs. Bidirectional Encoder Representations from Transformers (BERT) and Generative Pre-trained Transformer (GPT) are probably the most famous large language models, trained to predict the next words within a particular context. However, LLMs are not trained to extract information from their input or to generate outputs grounded in facts.
Simply put, just because ChatGPT’s responses are human-like does not make them correct. What makes ChatGPT unique is its superior capability to make its stochastic results read as a plausible response – meaning the conversations are incredibly smooth. This is certainly the cutting edge in AI, but beyond low-risk search and chatting purposes, it raises a red flag for applications such as supporting patients, protecting consumers, or processing financial transactions.
LLMs are trained to predict plausible continuations of their input, and they have no ability to access the internet. ChatGPT cannot update its information after the training process is complete – so while ChatGPT can answer “Who won the World Cup in 2006?”, it cannot answer “Who won the World Cup in 2022?”, as its training data cuts off in 2021.
The only way to update or modify the response of an LLM is to gather enough data and restart the RLHF process. Imagine having to provide thousands of conversations as training data, simply to change the answer to a question like: “What’s your refund policy?”
As OpenAI CEO Sam Altman tweeted, “It's a mistake to be relying on it [ChatGPT] for anything important right now. It’s a preview of progress; we have lots of work to do on robustness and truthfulness.”
In short, ChatGPT and LLMs do not interpret the information contained within the question as data analysts, customer service agents, or any human would. For specific industry use-cases, it is important to build chatbots from best practices, such as those offered by Proto’s sample bots and experienced chatbot developers.
Let’s look at an example. A travel website might get the inquiry: “What’s the best 10 restaurants to go to in Manila?”
A human agent is likely to open their restaurant database, set Manila in the location filter, exclude the entries with a closed tag, sort by rating, and then respond with the top 10 results. In contrast, ChatGPT will simply invoke its training data to randomly select – or make up – 10 restaurant names and summaries.
A specially trained customer experience chatbot is able to recognise the inquiry as a RECOMMEND_RESTAURANTS(location=Manila) intent, use an API action to fetch the data, and then render the information into a chatbot reply.
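The intent-to-action flow described above can be sketched in a few lines of Python. This is a minimal illustration, not Proto’s actual implementation: the names (RECOMMEND_RESTAURANTS, fetch_restaurants) and the keyword-matching classifier are hypothetical stand-ins for a trained NLP model and a live restaurant API.

```python
def classify_intent(message: str) -> tuple[str, dict]:
    """Toy intent classifier; a real chatbot would use a trained NLP model."""
    text = message.lower()
    if "restaurant" in text:
        # Naive slot extraction, purely for illustration
        location = "Manila" if "manila" in text else "unknown"
        return "RECOMMEND_RESTAURANTS", {"location": location}
    return "FALLBACK", {}

def fetch_restaurants(location: str, limit: int = 10) -> list[dict]:
    """Stand-in for an API action; a real deployment would query a live database."""
    database = [
        {"name": "Cafe A", "city": "Manila", "rating": 4.8, "closed": False},
        {"name": "Bistro B", "city": "Manila", "rating": 4.6, "closed": True},
        {"name": "Grill C", "city": "Manila", "rating": 4.2, "closed": False},
    ]
    # Mirror the human agent's steps: filter by location, exclude closed, sort by rating
    open_local = [r for r in database if r["city"] == location and not r["closed"]]
    return sorted(open_local, key=lambda r: r["rating"], reverse=True)[:limit]

def reply(message: str) -> str:
    """Route the recognised intent to an action and render the result as a reply."""
    intent, slots = classify_intent(message)
    if intent == "RECOMMEND_RESTAURANTS":
        results = fetch_restaurants(slots["location"])
        names = ", ".join(r["name"] for r in results)
        return f"Top restaurants in {slots['location']}: {names}"
    return "Sorry, I didn't understand that."
```

The key contrast with a bare LLM is that the answer here is assembled from verifiable data fetched at reply time, so updating the restaurant database updates the bot’s answers without any retraining.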
There are, however, several advantages of using ChatGPT when it comes to designing chatbots for more specific and riskier use-cases. ChatGPT’s standout strength is the generation of contextual and human-like texts, which offer immediate potential for the following:
Multilingual communication with customers has many natural language quirks, such as intentional typos, slang, loanwords and mixed languages (e.g. Tagalog + English = Taglish). Understanding all of this for unique business use cases requires NLP training capability with access to local, industry-specific data.
Innovators at the ground level in these markets, like Proto, are gathering this localised data to augment large language models like BERT with their own deep-learning algorithms. This delivers businesses and governments chatbot capability for phenomena like mixed language, along with datasets for low-resource languages such as Cebuano in the Philippines and Kinyarwanda in Rwanda.
ChatGPT is an excellent example of how large language models have the power to automate certain manual and low-risk work. It proves that chatbots for multilingual customer experience should be using large language models, as Proto has done for two years with its own augmentations. Looking ahead, the AI customer experience industry will deliver better chatbots thanks to the advantages of text generation and exhaustive testing. But it is critical not to go overboard with this technology, and instead to harness the power of ChatGPT within a specialised AICX platform to ensure accuracy, localisation, model improvement, customer protection and brand integrity. To learn more about how Proto can enable your business to use ChatGPT safely, reach out to our experts today.