Article | April 1, 2020
Powerful technologies and expertise can provide better data and help people better understand their situation. As the world contends with the ongoing coronavirus outbreak, officials battling the pandemic need tools and valid information at scale to foster a greater sense of security for the public. As technologists, we have been heartened by the prevalence of projects such as Call for Code, hackathons and other efforts by our colleagues to rapidly create tools that might help stem the crisis. But for these tools to work, they need data from sources they can validate. For example, reopening the world’s economy will likely require not only testing millions of people, but also being able to map who tested positive, where people can and can’t go, and who is at exceptionally high risk of exposure and must be quarantined again.
Article | January 6, 2021
As organizations go digital, the amount of data they generate, whether in-house or from outside, is enormous. In fact, this data keeps increasing with every tick of the clock.
There is no doubt that much of this data may be junk; at the same time, it is also the data set from which an organization can gain a wealth of insight about itself.
It is a given that organizations that don’t use this generated data to build value risk accelerating their own obsolescence, or losing their competitive edge in the market.
Interestingly, it is not just larger firms that can harness this data and analytics to improve their overall performance while achieving operational excellence. Even small private equity firms can leverage this data to create value and develop a competitive edge. Private equity firms can thus achieve a high return on a low initial investment.
The private equity industry is skeptical about using data and analytics, citing the reason that it is meant for larger firms, or firms with deep pockets that can afford the cost of revamping or replacing their technology infrastructure. Meanwhile, there are private equity investment professionals who may want to use advanced data and analytics but are unable to do so for lack of the required knowledge.
US private equity firms are trying to understand the importance of advanced data and analytics and are thus seeking professionals with expertise in both. For private equity firms, it is imperative to select the use cases that offer the greatest promise for creating value. Top private equity firms all over the world can use those use cases to create quick wins, which in turn build momentum for a wider transformation of the business.
Pinpointing the right use cases requires strategic thinking from private equity investment professionals as they fill relevant gaps and address vulnerabilities. Much of the time it also requires thinking operationally to recognize where the available data can be found.
Top private equity firms in the US have to realize that the insights big data and advanced analytics offer represent an incredible opportunity for the growth of the private equity industry. As firms come to appreciate that potential, they will understand just how invaluable those insights are.
Private equity firms can use these analytics insights to study any target organization, including its competitive position in the market, and plan their next move: bidding aggressively for organizations that show promise for growth, or walking away from organizations saddled with underlying issues.
For all of this, and to build a career in private equity, a reputable qualification is important as well. A qualified private equity investment professional will be able to devise information-backed strategies in no time at all.
In addition, with big data and analytics in place, private equity firms can let go of numerous manual tasks and let the technology do the dirty work. Various studies show how big data and analytics can help a private equity firm.
Article | May 3, 2021
Clear conceptualization of taxonomies, categories, criteria and properties when solving complex, real-life, contextualized problems is non-negotiable: a “must” to unveil the hidden potential of NLP and its impact on the transparency of a model.
It is common knowledge that many authors and researchers in the fields of natural language processing (NLP) and machine learning (ML) are prone to using explainability and interpretability interchangeably, which from the start constitutes a fallacy. They do not mean the same thing, even when the definitions are drawn from different perspectives.
Formal definitions of what explanation, explainable and explainability mean can be traced to social science, psychology, hermeneutics, philosophy, physics and biology. In The Nature of Explanation, Craik (1967:7) states that “explanations are not purely subjective things; they win general approval or have to be withdrawn in the face of evidence or criticism.” Moreover, the power of explanation means the power of insight and anticipation, and the question of why one explanation is satisfactory involves the prior question of why any explanation at all should be satisfactory, or, in machine learning terminology, how a model is performant in different contextual situations. Beyond their utilitarian value, and beyond the impulse to resolve a problem whether or not a practical application emerges in the end (to be verified or disproved in the course of time), explanations should be “meaningful”.
We come across explanations every day. Perhaps the most common are reason-giving ones. Before advancing into the realm of ExNLP, it is crucial to conceptualize what constitutes an explanation. Miller (2017) considered explanations to be “social interactions between the explainer and explainee”; therefore the social context has a significant impact on the actual content of an explanation. Explanations, in general terms, seek to answer the why type of question: there is a need for justification. According to Bengtsson (2003), “we will accept an explanation when we feel satisfied that the explanans reaches what we already hold to be true of the explanandum” (the explanandum being a statement that describes the phenomenon to be explained (a description, not the phenomenon itself), and the explanans at least two sets of statements used for the purpose of elucidating the phenomenon).
In discourse theory (my approach), it is important to highlight, first and foremost, that there is a correlation between understanding and explanation. The two are articulated together although they belong to different paradigmatic fields. This dichotomous pair is perceived as a duality, which represents an irreducible form of intelligibility.
When there are observable external facts subject to empirical validation, systematicity and subordination to hypothetical procedures, then we can say that we explain. An explanation is inscribed in the analytical domain, the realm of rules, laws and structures. When we explain, we display propositions and meaning. But we do not explain in a vacuum. The contextual situation permeates the content of an explanation; in other words, explanation is an epistemic activity: it can only relate things described or conceptualized in a certain way. Most authors agree that explanations are answers to questions of the form “why fact?”.
Understanding can mean a number of things in different contexts. According to Ricoeur, “understanding precedes, accompanies and swathes an explanation, and an explanation analytically develops understanding.” Following this line of thought, when we understand we grasp or perceive the chain of partial senses as a whole in a single act of synthesis. Originally belonging to the field of the so-called human sciences, understanding refers to a circular process and is directed at the intentional unity of discourse, whereas an explanation is oriented toward the analytical structure of a discourse.
Now, to ground any discussion of what interpretation is, it is crucial to highlight that the concept of interpretation opposes the concept of explanation. They cannot be used interchangeably. Considered as a unit, they compose what is called une combinaison éprouvée (a contrasted dichotomy). Moreover, in dissecting both definitions we will see that the agent that performs the explanation differs from the one that produces the interpretation.
At present there is the challenge of defining, and evaluating, what constitutes a quality interpretation. Linguistically speaking, “interpretation” is the complete process that encompasses understanding and explanation. It is true that there is more than one way to interpret an explanation (and thus an explanation of a prediction), but it is also true that there is a limited number of possible explanations, if not a unique one, since they are contextualized. It is equally true that an interpretation must not only be plausible, but more plausible than competing interpretations. There are, of course, certain criteria for resolving this conflict, and proving that an interpretation is more plausible based on an explanation or on knowledge relates to the logic of validation rather than to the logic of subjective probability.
Narrowing it down
How are these concepts transferred from theory to praxis? What is the importance of the "interpretability" of an explainable model? What do we call a "good" explainable model? What constitutes a "good explanation"? These are some of the many questions that researchers from both academia and industry are still trying to answer.
In the realm of machine learning, current approaches conceptualize interpretation in a rather ad hoc manner, motivated by practical use cases and applications. Some suggest model interpretability as a remedy, but only a few are able to articulate precisely what interpretability means or why it is important. What is more, most in the research community and industry use this term as a synonym of explainability, which it is certainly not; they are not overlapping terms. Needless to say, in most cases technical descriptions of interpretable models are diverse and occasionally discordant.
A model is more interpretable than another model if its decisions are easier for a human to comprehend than decisions from the other model (Molnar, 2021). For a model to be interpretable (interpretability being a quality of the model), the information conferred by an interpretation should be useful; thus, one purpose of interpretations is to convey useful information of any kind. In Molnar’s words, “the higher the interpretability of a machine learning model, the easier it is for someone to comprehend why certain decisions or predictions have been made.” I will make an observation here and add “the higher the interpretability of an explainable machine learning model”. Luo et al. (2021) define interpretability as “the ability [of a model] to explain or to present [its predictions] in understandable terms to a human.” Notice that this definition includes “understanding”, giving the idea of completeness. Thus, the triadic closure explanation-understanding-interpretation is fulfilled, in which the explainer and interpretant (the agents) belong to different instances, and interpretation allows the extraction and formation of additional knowledge captured by the explainable model.
Now, are models inherently interpretable? It is more a matter of selecting the method for achieving interpretability: (a) interpreting existing models via post-hoc techniques, or (b) designing inherently interpretable models, which claim to provide more faithful interpretations than post-hoc interpretation of black-box models. The difference also lies in the agency, as I said before, and in how, in one case, interpretation may probe the model’s inner workings, while in the other it may just include natural language explanations of learned representations or models.
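To make the post-hoc route concrete, here is a minimal sketch of one common post-hoc technique, permutation importance: shuffle one feature at a time and measure how much a black-box model’s accuracy drops. The model, data and function names below are entirely hypothetical, chosen only for illustration; real post-hoc tooling works on trained models rather than a hand-written scorer.

```python
import random

# Hypothetical "black-box" scorer standing in for a trained model whose
# internals we pretend we cannot inspect.
def black_box_predict(row):
    return 1 if 0.8 * row[0] + 0.1 * row[1] > 0.5 else 0

# Tiny synthetic dataset: feature 0 drives the label, feature 1 is noise.
data = [([0.9, 0.2], 1), ([0.1, 0.8], 0), ([0.7, 0.5], 1), ([0.2, 0.1], 0),
        ([0.8, 0.9], 1), ([0.3, 0.7], 0), ([0.95, 0.05], 1), ([0.15, 0.6], 0)]

def accuracy(rows):
    return sum(black_box_predict(x) == y for x, y in rows) / len(rows)

def permutation_importance(rows, feature_idx, seed=0):
    """Post-hoc: shuffle one feature column and measure the accuracy drop."""
    rng = random.Random(seed)
    col = [x[feature_idx] for x, _ in rows]
    rng.shuffle(col)
    shuffled = [(x[:feature_idx] + [v] + x[feature_idx + 1:], y)
                for (x, y), v in zip(rows, col)]
    return accuracy(rows) - accuracy(shuffled)

for i in range(2):
    print(f"feature {i}: importance = {permutation_importance(data, i):.2f}")
```

The point of the sketch is the agency question raised above: the interpretation here is produced entirely from outside the model, by an observer perturbing inputs, whereas an inherently interpretable model (say, a shallow decision tree) would expose its reasoning directly in its structure.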
Article | July 13, 2021
We are living in the age of big data, and data has become the heart of, and the most valuable asset for, businesses across industry verticals. In today’s hyper-competitive market, data is a major contributor to business intelligence and brand equity, so effective data management is key to accelerating business success. For effective data management, organizations must ensure that the data they use is accurate and reliable. With the advent of AI, businesses can now leverage machine learning to predict outcomes from historical data. This is called predictive analytics. With predictive analytics, organizations can predict anything from customer churn to equipment maintenance needs. Moreover, the predictions it produces can be highly accurate. Let us take a look at how AI enables accurate data prediction and helps businesses equip themselves for the digital future.
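The core idea of predictive analytics, fitting a model to historical data and extrapolating, can be sketched in a few lines. This is a deliberately minimal, library-free illustration using a least-squares trend line over hypothetical monthly churn counts; real predictive analytics pipelines use far richer models and many more features.

```python
# Hypothetical historical data: customer churn counts for the past six months.
months = [1, 2, 3, 4, 5, 6]
churn = [100, 104, 109, 113, 118, 122]

# Ordinary least-squares fit of churn = slope * month + intercept.
n = len(months)
mean_x = sum(months) / n
mean_y = sum(churn) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(months, churn))
         / sum((x - mean_x) ** 2 for x in months))
intercept = mean_y - slope * mean_x

# Predict next month (month 7) from the historical trend.
forecast = slope * 7 + intercept
print(f"trend: {slope:.2f} churned customers/month; month-7 forecast: {forecast:.0f}")
```

On this toy series the fitted trend is roughly 4.5 additional churned customers per month, so the model projects about 127 for month 7: the same fit-then-extrapolate pattern, scaled up, underlies churn and maintenance forecasting.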