Article | May 3, 2021
Clear conceptualization (taxonomies, categories, criteria, properties) when solving complex, real-life, contextualized problems is non-negotiable: a "must" for unveiling the hidden potential of NLP and its impact on the transparency of a model.
It is common knowledge that many authors and researchers in natural language processing (NLP) and machine learning (ML) are prone to using explainability and interpretability interchangeably, which is a fallacy from the start. The two terms do not mean the same thing, from whichever perspective one seeks a definition.
A formal definition of what explanation, explainable, and explainability mean can be traced to social science, psychology, hermeneutics, philosophy, physics, and biology. In The Nature of Explanation, Craik (1967:7) states that "explanations are not purely subjective things; they win general approval or have to be withdrawn in the face of evidence or criticism." Moreover, the power of explanation is the power of insight and anticipation; asking why one explanation is satisfactory raises the prior question of why any explanation at all should be satisfactory, or, in machine-learning terms, why a model is performant in different contextual situations. Beyond their utilitarian value, and beyond the impulse to resolve a problem whether or not there is ultimately a practical application that time will verify or disprove, explanations should be "meaningful".
We come across explanations every day; perhaps the most common are reason-giving ones. Before advancing into the realm of ExNLP, it is crucial to conceptualize what constitutes an explanation. Miller (2017) considered explanations to be "social interactions between the explainer and explainee"; the social context therefore has a significant impact on the actual content of an explanation. Explanations, in general terms, seek to answer the why type of question: there is a need for justification. According to Bengtsson (2003), "we will accept an explanation when we feel satisfied that the explanans reaches what we already hold to be true of the explanandum" (the explanandum being a statement that describes the phenomenon to be explained (a description, not the phenomenon itself) and the explanans being at least two sets of statements used to elucidate the phenomenon).
In discourse theory (my approach), it is important to highlight, first and foremost, that there is a correlation between understanding and explanation. The two are articulated together, although they belong to different paradigmatic fields. This dichotomous pair is perceived as a duality, an irreducible form of intelligibility.
When there are observable external facts subject to empirical validation, systematicity, and subordination to hypothetical procedures, then we can say that we explain. An explanation is inscribed in the analytical domain: the realm of rules, laws, and structures. When we explain, we display propositions and meaning. But we do not explain in a vacuum. The contextual situation permeates the content of an explanation; in other words, explanation is an epistemic activity: it can only relate things described or conceptualized in a certain way. Most authors agree that explanations are answers to questions of the form "why fact?".
Understanding can mean a number of things in different contexts. According to Ricoeur, "understanding precedes, accompanies and swathes an explanation, and an explanation analytically develops understanding." Following this line of thought, when we understand we grasp the chain of partial senses as a whole in a single act of synthesis. Originally belonging to the field of the so-called human sciences, understanding refers to a circular process directed at the intentional unity of a discourse, whereas an explanation is oriented toward the analytical structure of a discourse.
Now, to ground any discussion of what interpretation is, it is crucial to highlight that the concept of interpretation opposes the concept of explanation. They cannot be used interchangeably. Considered as a unit, they compose what is called une combinaison éprouvée (a contrasted dichotomy). Moreover, in dissecting both definitions we will see that the agent that performs the explanation differs from the one that produces the interpretation.
At present there is the challenge of defining, and evaluating, what constitutes a quality interpretation. Linguistically speaking, "interpretation" is the complete process that encompasses understanding and explanation. It is true that there is more than one way to interpret an explanation (and, by extension, an explanation of a prediction), but it is also true that there is a limited number of possible explanations, if not a unique one, since they are contextualized. It is also true that an interpretation must not only be plausible but more plausible than another interpretation. Of course there are criteria for resolving this conflict, and proving that one interpretation is more plausible, on the basis of an explanation or of knowledge, relates to the logic of validation rather than to the logic of subjective probability.
Narrowing it down
How are these concepts transferred from theory to praxis? What is the importance of the "interpretability" of an explainable model? What do we call a "good" explainable model? What constitutes a "good explanation"? These are some of the many questions that researchers from both academia and industry are still trying to answer.
In the realm of machine learning, current approaches conceptualize interpretation in a rather ad hoc manner, motivated by practical use cases and applications. Some suggest model interpretability as a remedy, but only a few are able to articulate precisely what interpretability means or why it is important. Furthermore, most in the research community and industry use the term as a synonym of explainability, which it certainly is not; they are not overlapping terms. Needless to say, in most cases technical descriptions of interpretable models are diverse and occasionally discordant.
A model is more interpretable than another model if its decisions are easier for a human to comprehend than decisions of the other model (Molnar, 2021). For a model to be interpretable (interpretability being a quality of the model), the information conferred by an interpretation must be useful; thus one purpose of interpretations is to convey useful information of any kind. In Molnar's words, "the higher the interpretability of a machine learning model, the easier it is for someone to comprehend why certain decisions or predictions have been made." I will make an observation here and add: "the higher the interpretability of an explainable machine learning model." Luo et al. (2021) define interpretability as "the ability [of a model] to explain or to present [its predictions] in understandable terms to a human." Notice that this definition includes "understanding," giving the idea of completeness. Thus the triadic closure explanation-understanding-interpretation is fulfilled, in which the explainer and the interpretant (the agents) belong to different instances and where interpretation allows the extraction and formation of additional knowledge captured by the explainable model.
Now, are models inherently interpretable? It is rather a matter of selecting a method for achieving interpretability: (a) interpreting existing models via post-hoc techniques, or (b) designing inherently interpretable models, which claim to provide more faithful interpretations than post-hoc interpretation of black-box models. The difference also lies in the agency, as noted before, and in how, in one case, interpretation may affect the explanation process, that is, the model's inner workings, or merely provide natural-language explanations of learned representations or models.
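As a concrete illustration of route (a), a post-hoc technique such as permutation importance probes a model from the outside: shuffle one feature's values and measure how much a quality metric degrades. The sketch below is minimal and self-contained on toy data; the "black box" here is just a stand-in threshold rule, not a real trained model.

```python
import random

random.seed(0)

# Toy dataset: feature 0 determines the label, feature 1 is pure noise.
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]

def model(row):
    # Stand-in for a trained black-box model (it happens to ignore feature 1).
    return 1 if row[0] > 0.5 else 0

def accuracy(rows, labels):
    return sum(model(r) == t for r, t in zip(rows, labels)) / len(labels)

def permutation_importance(rows, labels, feature):
    """Accuracy drop when one feature's values are shuffled across rows."""
    baseline = accuracy(rows, labels)
    shuffled = [row[:] for row in rows]
    values = [row[feature] for row in shuffled]
    random.shuffle(values)
    for row, v in zip(shuffled, values):
        row[feature] = v
    return baseline - accuracy(shuffled, labels)

imp0 = permutation_importance(X, y, 0)  # informative feature: large drop
imp1 = permutation_importance(X, y, 1)  # noise feature: no drop
print(imp0, imp1)
```

The interpretation is produced by an external agent acting on a finished model, which is exactly what distinguishes this route from inherently interpretable designs.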
Article | May 31, 2021
According to Google Trends, predictive data analytics has gained a significant amount of popularity over the last few years. Many businesses have implemented predictive analytics applications to increase their business reach, gain new customers, forecast sales, and more.
Predictive analytics is a type of data analytics technology that makes predictions with the help of data sets, statistical modeling, and machine learning. It relies on historical data: this data is fed into a mathematical model that recognizes patterns and trends, which are then applied to current data to forecast trends, practices, and behaviors over horizons ranging from milliseconds to days and even years.
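As a minimal illustration of that loop, the sketch below fits a straight-line trend to a short, made-up sales history using ordinary least squares and extrapolates one step ahead. Real deployments use far richer models and data; the numbers here are purely illustrative.

```python
# Hypothetical monthly sales history with a steady upward trend.
history = [100, 110, 121, 133, 146, 160]

n = len(history)
xs = list(range(n))
mean_x = sum(xs) / n
mean_y = sum(history) / n

# Ordinary least squares for a straight-line trend: y = a + b*x
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

# Apply the fitted trend to forecast the next (unseen) month.
forecast_next = a + b * n
print(round(forecast_next, 1))
```

The same pattern-from-history, apply-to-new-data structure underlies every predictive analytics application, however sophisticated the model.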
Based on the parameters supplied to them, organizations find patterns within that data to detect risks and opportunities and to forecast conditions and events that will occur at a particular time. At its heart, predictive analytics answers a simple question: "What will happen based on my current data, and what can be done to change the outcome?"
Businesses today can choose from multiple product offerings by vendors of big-data predictive analytics across different industries. These products help businesses leverage historical data by discovering complex data correlations, recognizing patterns, and forecasting.
Organizations are turning to predictive analytics to increase their bottom line and gain an advantage over their competition. Some of the reasons are listed below:
• Growing volumes and types of data, and rising interest in using them to produce valuable insights
• More powerful computers
• An abundance of easy-to-use software
• The need for competitive differentiation in the face of tougher competition
As more easy-to-use software has been introduced, businesses no longer need dedicated statisticians and mathematicians for predictive analytics and forecasting.
Benefits of Predictive Analytics
Competitive edge over other businesses
The most common reason companies pick up predictive analytics is to gain an advantage over their competitors. Customer trends and buying patterns keep changing, and the business that identifies a shift first gets ahead in the game. Embracing predictive analytics is how you stay ahead of your competition: it aids qualified lead generation and gives you insight into present and potential customers.
Businesses opt for predictive analytics to predict customer behavior, preferences, and responses. Using this information, they attract their target audience and entice them into becoming loyal customers. Predictive analytics gives valuable information about your customers such as which of them are likely to lapse, how to retain them, whether you should market directly at them, etc. The more you know about them, the stronger your marketing will become. Your business will become the leader in predicting your customer’s exact needs.
Acquiring new customers is almost five times more expensive than retaining existing ones. The most successful companies invest as much in retaining those customers as in acquiring new ones.
Predictive analytics helps in directing marketing strategies toward your existing customers and getting them to return frequently. The analytics tool will make sure your marketing strategy caters to the diverse requirements of your customers.
Earlier marketing strategies revolved around a "one size fits all" approach, but those days are gone. If you want to retain existing customers and acquire new ones, you have to create personalized marketing campaigns.
Predictive analytics and data management help you to get new information about customer expectations, previous purchases, buying behaviors, and patterns. Using this data, you can create these personalized marketing strategies that will help keep up the engagement and acquire new customers.
Application of Predictive Analytics
Customer targeting
Customer targeting divides the customer base into demographic groups according to age, gender, interests, and buying and spending habits. It helps companies create marketing communications tailored specifically to the customers who are likely to buy their products. Traditional techniques do not come close to identifying potential customers as well as predictive analytics does.
The major constituents that create these customer groups are:
• Socio-demographic factors: age, gender, education, and marital status
• Engagement factors: recent interaction, frequency, spending habits, etc.
• Past campaign response: contact response, type, day, month, etc.
Customer-specific targeting is highly advantageous for companies. They can:
• Better communicate with the customers
• Save money on marketing
• Increase profits
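A minimal sketch of such targeting, using made-up customer records and hand-written segment rules standing in for a learned model:

```python
# Hypothetical customer records combining socio-demographic and
# engagement factors (all values are illustrative).
customers = [
    {"id": 1, "age": 23, "monthly_spend": 40},
    {"id": 2, "age": 35, "monthly_spend": 220},
    {"id": 3, "age": 62, "monthly_spend": 90},
    {"id": 4, "age": 29, "monthly_spend": 310},
]

def segment(c):
    """Assign a simple segment; in practice these rules would be learned."""
    if c["monthly_spend"] >= 200:
        return "high-value"
    if c["age"] < 30:
        return "young-low-spend"
    return "standard"

# Group customer ids by segment for tailored communications.
groups = {}
for c in customers:
    groups.setdefault(segment(c), []).append(c["id"])

print(groups)
```

Each resulting group can then receive its own marketing treatment, which is where the communication and cost benefits above come from.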
Customer churn prevention
Customer churn creates major hurdles in a company's growth. Although retaining customers has been proven cheaper than gaining new ones, churn remains a problem: detecting a client's dissatisfaction is not easy, as they can abruptly stop using your services without any warning.
Here, churn prevention comes into the picture. Churn prevention aims to predict who will end their relationship with the company, when, and why. The existing data sets can help develop predictive models so companies can be proactive to prevent the fallout.
Factors that can influence the churn are as follows:
• Customer variables
• Service use
• Competitor variables
Using these variables, companies can then take necessary steps to avoid the churn by offering customers personalized services or products.
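In practice, churn prediction often reduces to scoring each customer on variables like those above. The sketch below uses a logistic scorer with hand-set, purely illustrative weights and hypothetical variable names; in a real system the weights would be learned from historical churn data.

```python
import math

# Illustrative (not fitted) weights over the three factor groups:
# customer variables, service use, and competitor variables.
WEIGHTS = {"months_inactive": 0.8, "support_tickets": 0.4, "competitor_offers": 0.6}
BIAS = -3.0

def churn_probability(customer):
    """Logistic score: probability the customer ends the relationship."""
    z = BIAS + sum(WEIGHTS[k] * customer[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

at_risk = {"months_inactive": 4, "support_tickets": 3, "competitor_offers": 2}
loyal = {"months_inactive": 0, "support_tickets": 0, "competitor_offers": 0}

print(churn_probability(at_risk), churn_probability(loyal))
```

Customers whose score crosses a chosen threshold are the ones to approach proactively with personalized offers.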
Risk assessment
Risk assessment and management processes in many companies are antiquated: even though customer information is abundantly available for evaluation, it goes largely unused.
With advanced analytics, this data can be analyzed quickly and accurately while maintaining customer privacy and boundaries. Risk assessment thus allows companies to analyze problems in any part of the business, and predictive analytics can estimate with high confidence which operations are profitable and which are not.
Risk assessment analyzes the following data types:
• Socio-demographic factors
• Product details
• Customer behavior
• Risk metrics
Sales forecasting
Sales forecasting evaluates previous history, seasonality, and market-affecting events to predict the demand for a company's product or service, which makes revenue prediction vital for the company's planning. It can be applied to short-term, medium-term, and long-term forecasting.
Predictive models help in anticipating a customer’s reaction to the factors that affect sales.
The following factors can be used in sales forecasting:
• Calendar data
• Weather data
• Company data
• Social data
• Demand data
Sales forecasting allows revenue prediction and optimal resource allocation.
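A toy sketch of a factor-based forecast: a linear model over a few of the factor groups above, with illustrative, unfitted coefficients and hypothetical feature names.

```python
# Illustrative linear sales model; in practice BASE and COEFFS would be
# fitted from historical calendar, weather, and demand data.
BASE = 500.0
COEFFS = {"is_weekend": 120.0, "temperature_c": 4.0, "promo_running": 200.0}

def forecast_sales(day):
    """Predicted units sold for one day described by its factor values."""
    return BASE + sum(COEFFS[k] * day[k] for k in COEFFS)

# A warm Saturday with a promotion running.
saturday_promo = {"is_weekend": 1, "temperature_c": 25, "promo_running": 1}
print(forecast_sales(saturday_promo))
```

Summing such per-day forecasts over a planning horizon is what turns factor data into the revenue predictions and resource allocations mentioned above.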
Healthcare
Healthcare organizations have begun to use predictive analytics because the technology is helping them save money, and they are using it in several different ways. Based on past trends, they can now allocate facility resources, optimize staff schedules, identify patients at risk, and add intelligence to pharmaceutical and supply acquisition management.
Using predictive analytics in the health domain has also helped in preventing cases and risks of developing health complications like diabetes, asthma, and other life-threatening problems. The application of predictive analytics in health care can lead to making better clinical decisions for patients.
Predictive analytics is being used across different industries and is a good way to advance your company's growth and forecast future events so you can act accordingly. It has gained support from many organizations on a global scale and will continue to grow rapidly.
Frequently Asked Questions
What is predictive analytics?
Predictive analytics uses historical data to predict future events. The historical data is used to build a mathematical model that captures important trends. That predictive model is then applied to current data to predict what will happen next, or to suggest steps to take for optimal outcomes.
How to do predictive analytics?
• Define business objectives
• Collect relevant data from available sources
• Improve the collected data with data-cleaning methods
• Choose a model, or build your own, to test the data
• Evaluate and validate the predictive model to ensure it delivers accurate results
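The steps above can be sketched end to end on toy data: clean the records, split them, fit a deliberately trivial threshold "model", and evaluate it on a holdout set. Everything here is illustrative.

```python
# Toy (value, label) records; None marks a missing value to be cleaned out.
raw = [(1, 0), (2, 0), (None, 1), (3, 0), (8, 1), (9, 1), (7, 1), (2, 0)]

# Clean: drop records with missing values.
data = [(x, y) for x, y in raw if x is not None]

# Split into training and holdout sets.
train, test = data[:5], data[5:]

# "Model": predict 1 when x exceeds the midpoint between class means.
pos = [x for x, y in train if y == 1]
neg = [x for x, y in train if y == 0]
threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def predict(x):
    return 1 if x > threshold else 0

# Evaluate and validate on the holdout.
holdout_accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(threshold, holdout_accuracy)
```

Each step maps directly onto one bullet above; only the model-building step changes as projects grow more sophisticated.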
How does predictive analytics work for business?
Predictive analytics helps businesses attract, retain, and grow their profitable customers. It also helps them in improving their operations.
What tools are used for predictive analytics?
Some tools used for predictive analytics are:
• SAS Advanced Analytics
• Oracle Data Science
• IBM SPSS Statistics
• SAP Predictive Analytics
• Q Research
Article | March 23, 2020
Despite the huge contributions of deep learning to the field of artificial intelligence, there's something very wrong with it: it requires huge amounts of data. This is one thing that both the pioneers and the critics of deep learning agree on. In fact, deep learning didn't emerge as the leading AI technique until a few years ago because of the limited availability of useful data and the shortage of computing power to process that data. Reducing the data-dependency of deep learning is currently among the top priorities of AI researchers.
Article | February 20, 2020
The world is now heading into the Fourth Industrial Revolution, as Professor Klaus Schwab, Founder and Executive Chairman of the World Economic Forum, described it in 2016. Artificial Intelligence (AI) is a key driver in this revolution, and with it, machine learning is critical. But critical to the whole process is the need to process a tremendous amount of data, which in turn boosts the demand for computing power exponentially. A study by OpenAI suggested that the computing power required for AI training surged by more than 300,000 times between 2012 and 2018. This represents a doubling of computing power every three and a half months, significantly faster than Moore's Law, which has traditionally measured the time it takes to double computing power. Conventional methodology is no longer enough for such significant leaps, and we desperately need a different computing architecture to stay ahead in the game.