Re-envisioning your Information Goldmine with Artificial Intelligence

Article | November 27, 2018

A few weeks ago, I was in Moscow with one of our customers in the financial services industry discussing their challenge of decommissioning more than 1,400 legacy applications. They were looking for a modern, lightweight architected solution that could guarantee chain of custody and preserve compliance for their data and content, allowing them to fully decommission their old legacy systems once and for all. The ROI for this use case is relatively easy to calculate and is in the order of millions of dollars, so the value of decommissioning is clear. But, as so often happens when handling old legacy applications, there is very little knowledge in the enterprise about the actual data and content that the infrastructure is serving, which prevents the company from applying advanced optimization and governance strategies that could generate additional benefits.
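To make the chain-of-custody requirement concrete, here is a minimal sketch of one way an archival pipeline can preserve evidential integrity while records are exported out of a legacy system: every exported file gets a SHA-256 digest recorded in an audit manifest that can be re-verified later. The function name, paths and manifest format are illustrative assumptions, not the customer's actual design.

```python
# Minimal sketch: hash every exported record and write an audit manifest
# so the archive can be re-verified after the legacy system is gone.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def archive_with_custody(source_dir: str, manifest_path: str) -> None:
    """Record a SHA-256 digest for every exported file in an audit manifest."""
    entries = []
    for record in sorted(Path(source_dir).rglob("*")):
        if record.is_file():
            digest = hashlib.sha256(record.read_bytes()).hexdigest()
            entries.append({"file": str(record), "sha256": digest})
    manifest = {
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "record_count": len(entries),
        "records": entries,
    }
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))

# Example: archive_with_custody("legacy_export/", "custody_manifest.json")
```

Re-computing the digests against the manifest at any later audit demonstrates that the archived content has not been altered since export.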

Spotlight

Carma Systems Inc.

Makers of CarmaLink, the simple driver-monitoring tool for fleets. Every trip made with CarmaLink is logged with GPS and uploaded with speed, time and safety alerts on Google Maps, so you always know what your drivers are up to. See where your vehicles are in real time, whether they are parked or on the move, and how fast they are traveling.

OTHER ARTICLES

Natural Language Desiderata: Understanding, explaining and interpreting a model.

Article | May 3, 2021

Clear conceptualization (taxonomies, categories, criteria, properties) when solving complex, real-life, contextualized problems is non-negotiable, a "must" to unveil the hidden potential of NLP and to improve the transparency of a model. It is common knowledge that many authors and researchers in the fields of natural language processing (NLP) and machine learning (ML) are prone to using explainability and interpretability interchangeably, which from the start constitutes a fallacy. They do not mean the same thing, even when looking for a definition from different perspectives. Formal definitions of what explanation, explainable and explainability mean can be traced to social science, psychology, hermeneutics, philosophy, physics and biology.

In The Nature of Explanation, Craik (1967:7) states that "explanations are not purely subjective things; they win general approval or have to be withdrawn in the face of evidence or criticism." Moreover, the power of explanation means the power of insight and anticipation, and asking why one explanation is satisfactory involves the prior question of why any explanation at all should be satisfactory, or, in machine learning terminology, how a model is performant in different contextual situations. Beyond their utilitarian value (the impulse to resolve a problem whether or not there is, in the end, a practical application, one that will be verified or disproved in the course of time), explanations should be "meaningful".

We come across explanations every day. Perhaps the most common are reason-giving ones. Before advancing in the realm of ExNLP, it is crucial to conceptualize what constitutes an explanation. Miller (2017) considered explanations to be "social interactions between the explainer and explainee"; the social context therefore has a significant impact on the actual content of an explanation. Explanations, in general terms, seek to answer why-type questions: there is a need for justification. According to Bengtsson (2003), "we will accept an explanation when we feel satisfied that the explanans reaches what we already hold to be true of the explanandum" (the explanandum being a statement that describes the phenomenon to be explained (a description, not the phenomenon itself), and the explanans at least two sets of statements used to elucidate that phenomenon).

In discourse theory (my approach), it is important to highlight, first and foremost, that there is a correlation between understanding and explanation. The two are articulated together although they belong to different paradigmatic fields. This dichotomous pair is perceived as a duality, which represents an irreducible form of intelligibility. When there are observable external facts subject to empirical validation, systematicity and subordination to hypothetical procedures, then we can say that we explain. An explanation is inscribed in the analytical domain, the realm of rules, laws and structures. When we explain we display propositions and meaning. But we do not explain in a vacuum. The contextual situation permeates the content of an explanation; in other words, explanation is an epistemic activity: it can only relate things described or conceptualized in a certain way. Explanations are answers to questions of the form "why fact", a point most authors agree upon.

Understanding can mean a number of things in different contexts. According to Ricoeur, "understanding precedes, accompanies and swathes an explanation, and an explanation analytically develops understanding." Following this line of thought, when we understand we grasp or perceive the chain of partial senses as a whole in a single act of synthesis. Originally belonging to the field of the so-called human sciences, understanding refers to a circular process, and it is directed at the intentional unity of discourse, whereas an explanation is oriented to the analytical structure of a discourse.

Now, to ground any discussion of what interpretation is, it is crucial to highlight that the concept of interpretation opposes the concept of explanation. They cannot be used interchangeably. Considered as a unit, they compose what is called une combinaison éprouvée (a contrasted dichotomy). Moreover, in dissecting both definitions we will see that the agent that performs the explanation differs from the one that produces the interpretation. At present there is a challenge in defining, and evaluating, what constitutes a quality interpretation. Linguistically speaking, "interpretation" is the complete process that encompasses understanding and explanation. It is true that there is more than one way to interpret an explanation (and thus an explanation of a prediction), but it is also true that there is a limited number of possible explanations, if not a unique one, since they are contextualized. And it is also true that an interpretation must not only be plausible, but more plausible than another interpretation. Of course there are certain criteria for resolving this conflict, and proving that one interpretation is more plausible based on an explanation or on knowledge relates to the logic of validation rather than to the logic of subjective probability.

Narrowing it down

How are these concepts transferred from theory to praxis? What is the importance of the "interpretability" of an explainable model? What do we call a "good" explainable model? What constitutes a "good explanation"? These are some of the many questions that researchers from both academia and industry are still trying to answer. In the realm of machine learning, current approaches conceptualize interpretation in a rather ad-hoc manner, motivated by practical use cases and applications. Some suggest model interpretability as a remedy, but only a few are able to articulate precisely what interpretability means or why it is important. Furthermore, most in the research community and industry use the term as a synonym of explainability, which it certainly is not: they are not overlapping terms. Needless to say, in most cases technical descriptions of interpretable models are diverse and occasionally discordant.

A model is better interpretable than another model if its decisions are easier for a human to comprehend than decisions from the other model (Molnar, 2021). For a model to be interpretable (interpretability being a quality of the model), the information conferred by an interpretation should be useful; thus one purpose of interpretations is to convey useful information of any kind. In Molnar's words, "the higher the interpretability of a machine learning model, the easier it is for someone to comprehend why certain decisions or predictions have been made." I will make an observation here and add "the higher the interpretability of an explainable machine learning model". Luo et al. (2021) define interpretability as "the ability [of a model] to explain or to present [its predictions] in understandable terms to a human." Notice that this definition includes "understanding", giving the idea of completeness. Thus the triadic closure explanation-understanding-interpretation is fulfilled, in which the explainer and the interpretant (the agents) belong to different instances, and where interpretation allows the extraction and formation of additional knowledge captured by the explainable model.

Now, are models inherently interpretable? Well, it is more a matter of selecting the method for achieving interpretability: (a) interpreting existing models via post-hoc techniques, or (b) designing inherently interpretable models, which claim to provide more faithful interpretations than post-hoc interpretation of black-box models. The difference also lies in the agency, as I said before, and in how, in one case, interpretation may affect the explanation process, that is, the model's inner workings, or merely include natural language explanations of learned representations or models.
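To ground routes (a) and (b) in working code, here is a minimal sketch in Python contrasting a post-hoc technique (permutation importance, via scikit-learn) applied to a black-box random forest with an inherently interpretable shallow decision tree, whose structure is itself the explanation. The dataset and model choices are illustrative assumptions, not drawn from the papers cited above.

```python
# Minimal sketch: post-hoc interpretation of a black box vs. an
# inherently interpretable model, on an illustrative dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# (a) Post-hoc: explain a black-box model after the fact by measuring
# how much shuffling each feature degrades held-out performance.
blackbox = RandomForestClassifier(n_estimators=200, random_state=0)
blackbox.fit(X_train, y_train)
result = permutation_importance(blackbox, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")

# (b) Inherently interpretable: the model's own structure is the explanation.
glassbox = DecisionTreeClassifier(max_depth=3, random_state=0)
glassbox.fit(X_train, y_train)
print(export_text(glassbox, feature_names=list(X.columns)))
```

Note the difference in agency the article points to: in (a) the interpretation is produced after, and apart from, the model, while in (b) the printed tree is the model itself.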


A learning guide to accelerate data analysis with SPSS Statistics

Article | May 20, 2021

IBM SPSS Statistics provides a powerful suite of data analytics tools that lets you quickly analyze your data through a simple point-and-click interface and extract critical insights with ease. In times of rapid change that demand agility, it is imperative to embrace data-driven decision-making to improve business outcomes. Organizations of all kinds have relied on IBM SPSS Statistics for decades to help solve a wide range of business and research problems.


The importance of Big Data in the Food Industry: Strategies and best practices

Article | March 5, 2020

Do you know the real importance of Big Data in the food industry? Knowing your audience is important, even fundamental, for any kind of business. In this article we will analyze the best practices and the best data-driven strategies (marketing, but not only marketing) for the food industry. Food and beverage is a large and complex sector that embraces a number of very different players, some of whom are interconnected. The ecosystem includes both small producers and large multinational brands, players who cater to everyone and those who target a specific niche; then there are the distributors, clubs, restaurants both small and large, and retail chains.


Taking a qualitative approach to a data-driven market

Article | February 18, 2021

While digital transformation is proving to have many benefits for businesses, perhaps the most significant is the vast amount of data it makes available. And now, with an increasing number of businesses turning their focus online, there is more to be collected on competitors and markets than ever before. Having all this information to hand may seem like any business owner's dream, as they can now make insightful, informed commercial decisions based on what others are doing, what customers want and where markets are heading. But according to Nate Burke, CEO of Diginius, a proprietary software and solutions provider for ecommerce businesses, data should not be all a company relies upon when making important decisions. Instead, there is a line to be drawn between where data is required and where human expertise and judgement can provide greater value.

Undeniably, the power of data is unmatched. With an abundance of data collection opportunities available online, and an increasing number of businesses taking them, the potential and value of such information is richer than ever before. And businesses are benefiting, particularly where data concerns customer behaviour and market patterns. For instance, over the recent Christmas period, data clearly suggested a preference for ecommerce, with marketplaces such as Amazon leading the way due to greater convenience and price advantages. Businesses that recognised and understood the trend could better prepare for the digital shopping season, placing greater emphasis on their online marketing tactics to encourage purchases and allocating resources to ensure product availability and on-time delivery. Businesses that ignored, or simply did not utilise, the information available to them, on the other hand, would have been left with overstocked shops and, now, out-of-season items that would have to be heavily discounted or, worse, disposed of.

Similarly, search and sales data can be used to understand changing consumer needs, and consequently which items businesses should be ordering, manufacturing, marketing and selling for the best returns. Understandably, in 2020 DIY was at its peak, with increases in searches for "DIY facemasks", "DIY decking" and "DIY garden ideas". Those who recognised the trend early had the chance to shift their offerings and marketing accordingly, and in turn really reap the rewards. So paying attention to data certainly does pay off. And thanks to smarter and more sophisticated ways of collecting data online, such as cookies, and through AI and machine learning technologies, the value and use of such information is only likely to increase. The future, therefore, looks bright. But even with all this potential at our fingertips, there are a number of issues businesses may face if they rely entirely on a data- and insight-driven approach. Just as disregarding data's power and potential can be damaging, so can using it as the sole basis for important decisions.

Human error

While the value of data for understanding the market and consumer patterns is undeniable, it is only as rich as the quality of the data being input. So if businesses are collecting and analysing data on their own activity, and then using it to draw meaningful insight, there should be a strong focus on the data-gathering phase, with attention given to what needs to be collected, why and how it should be collected, and whether it is in fact an accurate representation of what you are trying to monitor or measure. Human error becomes an issue when this is done by individuals or teams who do not completely understand the numbers and patterns they are seeing. There is also an obstacle when various channels and platforms generate leads or sales for the business: any omission can skew results and provide an inaccurate picture, so decisions based on that data may lead to ineffective and unsuccessful changes. But as data gathering becomes more and more autonomous, the possibility of human error is lessened, although this may add fuel to the next issue.

Drawing a line

The benefits of data and insights are clear, particularly as the tasks of collection and analysis become less of a burden for businesses and their people thanks to automation and AI advancements. But because data collection and analysis are becoming so effortless, we can only expect more businesses to be doing them, meaning data's ability to offer each individual company something unique is also lessened. So businesses need to look elsewhere for their edge. Interestingly, this is where a line should be drawn and human judgement used to set them apart from the competition and differentiate them from what everyone else is doing.

It makes perfect sense when you think about it. Your business is unique for a number of reasons, but mainly because of the brand, its values, its reputation and the perceptions of the service you uphold. And it is usually these aspects that encourage consumers to choose your business over a competitor. But these intangible aspects are often much more difficult to measure and monitor through data collection and analysis, especially in the autonomous, number-driven format that many platforms utilise. Here, then, there is a strong case for businesses to use their own judgement, expertise and experience to determine what works well and what does not. For instance, you can begin to gauge consumer perceptions of a change in your product or services, which quantitative data may not pick up until much later, when sales figures begin to rise or fall. And while the data will eventually pick it up, it will not necessarily help you decide on an appropriate alternative solution, should the latter occur. Human judgement, however, can listen to and understand qualitative feedback and consumer sentiment, which can often provide much more meaningful insight for businesses to base their decisions on.

So when it comes to competitor analysis, using insights generated from figure-based data sets and performance metrics is key to ensuring you are doing the same as the competition. But if you are looking to get ahead, you may want to consider taking a human approach too.
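As a concrete illustration of the data-gathering discipline described above, here is a minimal sketch, assuming a hypothetical pandas DataFrame of lead records with channel and lead_id columns, of the kind of automated check that surfaces omissions and duplicates before they skew an analysis. The column names and expected-channel list are illustrative assumptions.

```python
# Minimal sketch: surface data-quality issues that would skew lead analysis.
# Column names and the expected-channel list are illustrative assumptions.
import pandas as pd

EXPECTED_CHANNELS = {"organic", "paid_search", "email", "marketplace"}

def validate_leads(df: pd.DataFrame) -> list[str]:
    """Return human-readable issues found in a DataFrame of lead records."""
    issues = []
    missing = EXPECTED_CHANNELS - set(df["channel"].dropna().unique())
    if missing:
        issues.append(f"No leads recorded for channels: {sorted(missing)}")
    if df["lead_id"].duplicated().any():
        issues.append("Duplicate lead IDs found; counts will be inflated.")
    if df[["channel", "lead_id"]].isna().any().any():
        issues.append("Missing channel or lead_id values; totals will be skewed.")
    return issues

# Example:
# leads = pd.read_csv("leads.csv")
# for issue in validate_leads(leads):
#     print(issue)
```

Checks like these do not replace the human judgement the article argues for; they simply keep the quantitative inputs honest before any judgement is applied.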


