Intelligent CIO Africa Issue 81 | Page 69

…chit-chat from the agent's verbiage. Once the transcription is completed and stored in the processed data repository, the text is pushed to the inference pipeline for the language model to process what was said.


The inference pipeline contains three components, defined as SageMaker processing jobs to allow longer runtimes and custom virtual machines (VMs), since a more powerful machine is needed to load and use the Transformer language model. These three components are pre-processing, inference and post-processing.
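The three stages above can be sketched, at a much smaller scale, as plain Python functions. All names here are illustrative (the article does not publish code), and the toy "embedding" is a stand-in; in the actual architecture each stage runs as a separate SageMaker processing job:

```python
# Minimal sketch of the three-stage inference pipeline. Function and
# variable names are illustrative, not taken from the actual solution.

def preprocess(transcript: str) -> list[str]:
    # Split the raw transcript into utterances and normalise whitespace.
    return [line.strip() for line in transcript.splitlines() if line.strip()]

def inference(utterances: list[str]) -> list[list[float]]:
    # Placeholder for the Transformer model: returns a toy per-utterance
    # "embedding" (character length and vowel count) for illustration only.
    return [[float(len(u)), float(sum(c in "aeiou" for c in u.lower()))]
            for u in utterances]

def postprocess(embeddings: list[list[float]]) -> dict:
    # Apply business logic to the model output, e.g. summary statistics.
    return {"utterance_count": len(embeddings)}

def run_pipeline(transcript: str) -> dict:
    return postprocess(inference(preprocess(transcript)))

print(run_pipeline("Hello, thank you for calling.\nHow can I help you today?"))
# → {'utterance_count': 2}
```

Keeping the three stages as separate functions mirrors the separation into distinct processing jobs: each can be swapped out independently, which is what makes the architecture reusable.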
At this stage, we should mention that the inference pipeline is the only component that needs to change for this solution architecture to be reused for other use cases.
The pre-processing component prepares the text data, along with any additional metadata, and puts it into the format that the model expects. Any additional data cleaning and Personally Identifiable Information (PII) redaction components are implemented in the pre-processing step. Once the data has been prepared, it is sent to the inference component.
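A minimal sketch of what a PII redaction step inside pre-processing might look like; the regex patterns and labels below are assumptions for illustration, not the patterns used in the actual solution, and a production system would typically rely on a dedicated PII detection service:

```python
import re

# Illustrative PII redaction for the pre-processing job. The patterns
# (a simple email and phone format) are examples only.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{4}\b"),
}

def redact_pii(text: str) -> str:
    # Replace each detected entity with a bracketed label so that the
    # downstream model never sees the raw personal data.
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact_pii("Reach me at jane.doe@example.com or 555-123-4567."))
# → Reach me at [EMAIL] or [PHONE].
```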
The secret ingredient that makes this entire solution possible, and so incredibly effective, is the type of AI used in the inference component. We make use of a pre-trained Transformer model that gives us a language-agnostic numeric representation of the verbiage, one that retains semantic and syntactic information in a way that allows us to perform meaningful analytics on it. These numeric representations are called embeddings.
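To make the idea concrete without downloading a Transformer model, the stand-in below uses bag-of-words vectors and cosine similarity. The real solution's pre-trained Transformer produces far richer, language-agnostic embeddings, but the analytics pattern (embed, then compare numerically) is the same:

```python
import math
from collections import Counter

# Toy stand-in for Transformer embeddings: a bag-of-words vector.
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    # Similar texts point in similar directions, so their cosine is high.
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

said = embed("this call may be recorded for quality purposes")
expected = embed("the call is recorded for quality purposes")
print(cosine_similarity(said, expected))  # high: the phrasings differ, the meaning overlaps
```

A bag-of-words vector only overlaps on exact shared words; a Transformer embedding would also score paraphrases highly, which is precisely what makes it suitable for compliance phrase matching.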
Once we have the embeddings, we invoke the post-processing component. Here we apply business logic to the model output to derive value from it.
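A sketch of such business logic for the compliance use case this article describes: given per-phrase similarity scores against each utterance in a call (the numbers, phrase list and 0.8 threshold here are invented for illustration), post-processing flags whether each required phrase was spoken:

```python
# Hypothetical required-phrase list; a real deployment might have 50.
REQUIRED_PHRASES = ["calls are recorded", "confirm your identity"]

def check_compliance(scores: dict[str, list[float]],
                     threshold: float = 0.8) -> dict[str, bool]:
    # scores maps each required phrase to its similarity score against
    # every utterance in the call; the best match decides compliance.
    return {phrase: max(scores.get(phrase, []), default=0.0) >= threshold
            for phrase in REQUIRED_PHRASES}

scores = {
    "calls are recorded": [0.12, 0.91, 0.30],    # spoken near the start
    "confirm your identity": [0.05, 0.40, 0.22], # never clearly spoken
}
print(check_compliance(scores))
# → {'calls are recorded': True, 'confirm your identity': False}
```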
In the case of compliance, this business logic includes a list of the phrases that agents are expected to utter to clients. We then use the embeddings to determine whether or not each of these phrases was spoken.

This means that if we have a list of 50 phrases that need to be spoken for a call to be deemed compliant, we obtain 50 output scores for each of the incoming phrases from the conversation. These outputs function as probabilities that the specific compliance phrase was found in the spoken language.

We can say that for a specific call involving Employee A and Customer B, we know that Product A was discussed, whether or not the employee was compliant (and, in the negative case, why not), what the customer's sentiment was towards the organisation, product or agent, and what complaints or compliments, if any, were made.

Beyond compliance

Call data is annotated with valuable insights. It allows for access to data that could answer questions such as:

• How much of my workforce is being used?
• Which products are being discussed most often?
• How efficiently are agents picking up calls, and how much silence is picked up during calls?
• Are calls taking longer for certain agents or products?
• At which times of the day or week does the call centre experience the largest and smallest call volumes?

To understand why clients are calling the call centre, we simply need to swap out the type of model used in the inference component of the architecture. This could be implemented as either a supervised or an unsupervised approach.

We could either try to extract topics in a data-driven way, or we could map them to a pre-defined list of intents. Typically, for this type of use case, we would suggest implementing an unsupervised paradigm first to understand your data better, as an exploratory data analysis exercise, and then switching to a supervised paradigm to refine and deepen the way in which the model tries to understand the verbiage.

This will allow you to answer questions such as:

• Why are my clients calling?
• When are my clients calling, and to discuss which topics or products?
• Why are calls being dropped prematurely?

As mentioned, this solution architecture can be used in many ways other than compliance. If you can formulate the business rule, you can use this architecture to derive the data from the interactions with your clients.

Hugo Loubser, Associate Principal Machine Learning Engineer at Synthesis.