Heroku Explains How to Integrate Salesforce Data into Looker #JOINData 2016

| November 29, 2016

Overseeing all of Heroku’s Business Operations, including Finance, Sales and Marketing Operations, and Analytics and Data Science, Michael provides real insight into how marketing and sales operations gurus use data to deliver real-time enlightenment in any SaaS-based business. Topics covered in this presentation include: Heroku's data infrastructure architecture, Heroku's best practices in Looker, and integrating Salesforce data into Looker.

Spotlight

Aspen Technology

AspenTech (NASDAQ: AZPN), the asset optimization software company, makes the best companies better by optimizing the design, operation and maintenance of critical assets in complex, capital-intensive environments. For more than three decades, we have been fueling customers’ competitive drive by pushing the boundaries of what’s possible in process modeling, simulation and optimization. We’re now extending optimization’s reach by combining our process modeling expertise with big data machine learning. We model customer success with software and insight to run assets faster, safer, longer and greener.

OTHER ARTICLES

Saurav Singla, the machine learning guru, empowering society

Article | December 10, 2020

Saurav Singla is a Senior Data Scientist, Machine Learning Expert, author, technical writer, data science course creator and instructor, mentor, and speaker. While Media 7 has followed Saurav Singla's story closely, this chat with Saurav was about analytics, his journey as a data scientist, and what he brings to the table with his 15 years of extensive experience in statistical modeling, machine learning, natural language processing, deep learning, and data analytics across the consumer durables, retail, finance, energy, human resources, and healthcare sectors. He has grown multiple businesses in the past and is still a researcher at heart. In the past, analytics and predictive modeling were predominant in only a few industries, but they are now becoming an eminent part of emerging fields such as health, human resource management, pharma, IoT, and other smart solutions as well.

Saurav has worked in data science since 2003. Over the years, he realized that all the people his teams had hired, whether from business or engineering backgrounds, needed extensive training to be able to perform analytics on real-world business datasets. He got an opportunity to move to Australia in 2003, joining the retail company Harvey Norman and working out of their Melbourne office for four years. After moving back to India in 2008, he joined one of the verticals of Siemens, one of the few companies in India then using analytics services in-house, and stayed for eight years. He is a passionate believer that the use of data and analytics will dramatically change not only corporations but also our societies. Building and expanding the application of analytics for supply chain, logistics, sales, marketing, and finance at Siemens was a tremendously rewarding and enjoyable experience for him. He grew the team from zero to fifteen while he was the data science leader, and he believes those eight years taught him how to think big and scale organizations using data science. He has demonstrated success in developing and seamlessly executing plans in complex organizational structures, and has been recognized for maximizing performance by implementing appropriate project management tools, analyzing details to ensure quality control, and understanding emerging technology.

In 2016, he felt a serious inner push to move into consulting and shifted to a company based out of Delhi NCR. During his ten months with them, he improved the way clients and businesses implement and exploit machine learning in their consumer commitments. As part of that vision, he developed class-defining applications that eliminate tension between technologies, processes, and humans. Another main aspect of his plan was to ensure that it was effected in very fast agile cycles, and toward that end he actively innovated on operating and engagement models. In 2017, he moved to London and joined a digital technology company, helping to build artificial intelligence and machine learning products for their clients; he aimed to solve problems and transform costs using technology and machine learning, and was associated with them for two years. At the beginning of 2018, he joined Mindrops, where he developed advanced machine learning technologies and processes to solve client problems, mentored the data science function, and guided it in developing solutions. He built robust data science capabilities for clients that can scale across multiple business use cases.

Outside work, Saurav is associated with Mentoring Club and Revive. He volunteers in his spare time to help, coach, and mentor young people taking up careers in the data science domain, and to help data practitioners build high-performing teams and grow the industry. He helps data science enthusiasts stay motivated and guides them along their career paths, fills knowledge gaps so aspirants understand the core of the industry, helps them analyze their progress and upskill accordingly, and connects them with potential job opportunities through his industry-leading network. Additionally, in 2018 he joined as a mentor a transaction behavioral intelligence company that accelerates business growth for banks with AI- and machine-learning-enabled products, where he guides their machine learning engineers on their projects and is enhancing the capabilities of their AI-driven recommendation engine product.

Saurav also teaches learners to grasp data science in a more engaging way through courses on the Udemy marketplace; he has created two courses with over twenty thousand students enrolled. He regularly speaks at meetups and writes articles on data science topics in major publications such as AI Time Journal, Towards Data Science, Data Science Central, KDnuggets, Data-Driven Investor, HackerNoon, and Infotech Report. He actively contributes academic research papers in machine learning, deep learning, natural language processing, statistics, and artificial intelligence. His book on machine learning for finance was published by BPB Publications, Asia's largest publisher of computer and IT books, which is possibly one of the biggest milestones of his career. Saurav has turned his passion toward making knowledge available to society; he believes sharing knowledge is cool and wishes everyone had that same passion. That would be his success.

Read More

What Is The Value Of A Big Data Project?

Article | April 7, 2020

According to the software vendors executing big data projects, the answer is clear: more data means more options. Add a bit of machine learning (ML) for good measure to get told what to do, and revenue will thrive. This is not really feasible in practice. Therefore, before starting a big data project, a checklist might come in handy.

Make sure that the insights gained through machine learning are actionable. Gaining insights is always good, but it is even better if you can act on this new knowledge. A shopping basket analysis, for example, shows which products are sold together. What should you do with that information? Companies could place the two products in opposite corners of the shop, so customers walk through all areas and find other products to buy along the way. Or they could place both products next to each other, so each boosts the sales of the other. Or how about discounting one product to gain more customers? As all of these actions have unknown side effects, companies have to decide for themselves which action makes sense in their case.
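
To make the shopping basket example concrete, here is a minimal Python sketch (the transaction data is invented for illustration and is not taken from the article): it counts how often product pairs appear in the same basket and flags pairs whose lift is above 1, meaning the pair is sold together more often than the items' individual popularity alone would predict.

```python
# Minimal market-basket sketch: find product pairs bought together
# more often than chance alone would suggest. Transactions are invented.
from collections import Counter
from itertools import combinations

transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"beer", "chips"},
    {"bread", "milk"},
    {"beer", "chips", "bread"},
]

n = len(transactions)
item_counts = Counter(item for basket in transactions for item in basket)
pair_counts = Counter(
    pair for basket in transactions for pair in combinations(sorted(basket), 2)
)

for (a, b), joint in pair_counts.most_common():
    support = joint / n                                   # share of baskets containing both items
    lift = support / ((item_counts[a] / n) * (item_counts[b] / n))
    if lift > 1:                                          # bought together more than expected
        print(f"{a} + {b}: support={support:.2f}, lift={lift:.2f}")
```

Which pairs surface is only the starting point; as the article notes, deciding what to do with a high-lift pair (co-locate, separate, or discount) is a business decision, not a data science one.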

Read More

Natural Language Desiderata: Understanding, explaining and interpreting a model.

Article | May 3, 2021

Clear conceptualization, taxonomies, categories, criteria, and properties when solving complex, real-life, contextualized problems are non-negotiable, a "must" to unveil the hidden potential of NLP and its impact on the transparency of a model. It is common knowledge that many authors and researchers in the fields of natural language processing (NLP) and machine learning (ML) are prone to using explainability and interpretability interchangeably, which from the start constitutes a fallacy. They do not mean the same thing, even when looking for a definition from different perspectives. A formal definition of what explanation, explainable, and explainability mean can be traced to social science, psychology, hermeneutics, philosophy, physics, and biology. In The Nature of Explanation, Craik (1967:7) states that "explanations are not purely subjective things; they win general approval or have to be withdrawn in the face of evidence or criticism." Moreover, the power of explanation means the power of insight and anticipation, and asking why one explanation is satisfactory involves the prior question of why any explanation at all should be satisfactory, or, in machine learning terminology, how a model is performant in different contextual situations. Beyond their utilitarian value, that impulse to resolve a problem whether or not (in the end) there is a practical application, and which will be verified or disproved in the course of time, explanations should be "meaningful". We come across explanations every day; perhaps the most common are reason-giving ones.

Before advancing into the realm of ExNLP, it is crucial to conceptualize what constitutes an explanation. Miller (2017) considered explanations to be "social interactions between the explainer and explainee"; therefore, the social context has a significant impact on the actual content of an explanation. Explanations, in general terms, seek to answer the "why" type of question: there is a need for justification. According to Bengtsson (2003), "we will accept an explanation when we feel satisfied that the explanans reaches what we already hold to be true of the explanandum" (the explanandum being a statement that describes the phenomenon to be explained, a description rather than the phenomenon itself, and the explanans being at least two sets of statements used for the purpose of elucidating the phenomenon). In discourse theory (my approach), it is important to highlight, first and foremost, that there is a correlation between understanding and explanation. Both are articulated, although they belong to different paradigmatic fields. This dichotomous pair is perceived as a duality, which represents an irreducible form of intelligibility. When there are observable external facts subject to empirical validation, systematicity, and subordination to hypothetical procedures, then we can say that we explain. An explanation is inscribed in the analytical domain, the realm of rules, laws, and structures. When we explain, we display propositions and meaning. But we do not explain in a vacuum: the contextual situation permeates the content of an explanation. In other words, explanation is an epistemic activity; it can only relate things described or conceptualized in a certain way. Most authors agree that explanations are answers to questions of the form "why fact?". Understanding, in turn, can mean a number of things in different contexts.

According to Ricoeur, "understanding precedes, accompanies and swathes an explanation, and an explanation analytically develops understanding." Following this line of thought, when we understand we grasp or perceive the chain of partial senses as a whole in a single act of synthesis. Originally belonging to the field of the so-called human sciences, understanding refers to a circular process and is directed at the intentional unity of a discourse, whereas an explanation is oriented to the analytical structure of a discourse. Now, to ground any discussion of what interpretation is, it is crucial to highlight that the concept of interpretation opposes the concept of explanation; they cannot be used interchangeably. Considered as a unit, they compose what is called une combinaison éprouvée (a contrasted dichotomy). Besides, in dissecting both definitions we will see that the agent that performs the explanation differs from the one that produces the interpretation. At present, there is the challenge of defining, and evaluating, what constitutes a quality interpretation. Linguistically speaking, "interpretation" is the complete process that encompasses understanding and explanation. It is true that there is more than one way to interpret an explanation (and, in turn, an explanation of a prediction), but it is also true that there is a limited number of possible explanations, if not a unique one, since they are contextualized. It is also true that an interpretation must not only be plausible, but more plausible than another interpretation. Of course, there are certain criteria to resolve this conflict, and proving that an interpretation is more plausible, based on an explanation or on the available knowledge, relates to the logic of validation rather than to the logic of subjective probability.

Narrowing it down: how are these concepts transferred from theory to praxis? What is the importance of the "interpretability" of an explainable model? What do we call a "good" explainable model? What constitutes a "good explanation"? These are some of the many questions that researchers from both academia and industry are still trying to answer. In the realm of machine learning, current approaches conceptualize interpretation in a rather ad hoc manner, motivated by practical use cases and applications. Some suggest model interpretability as a remedy, but only a few are able to articulate precisely what interpretability means or why it is important. What is more, most in the research community and industry use the term as a synonym of explainability, which it certainly is not; they are not overlapping terms. Needless to say, in most cases the technical descriptions of interpretable models are diverse and occasionally discordant.

A model is more interpretable than another model if its decisions are easier for a human to comprehend than the decisions of the other model (Molnar, 2021). For a model to be interpretable (interpretability being a quality of the model), the information conferred by an interpretation should be useful; thus, one purpose of interpretations may be to convey useful information of any kind. In Molnar's words, "the higher the interpretability of a machine learning model, the easier it is for someone to comprehend why certain decisions or predictions have been made." I will make an observation here and add "the higher the interpretability of an explainable machine learning model". Luo et al. (2021) define interpretability as "the ability [of a model] to explain or to present [its predictions] in understandable terms to a human." Notice that this definition includes "understanding", giving the idea of completeness. Thus, the triadic closure of explanation-understanding-interpretation is fulfilled, in which the explainer and the interpretant (the agents) belong to different instances, and where interpretation allows the extraction and formation of additional knowledge captured by the explainable model. Now, are models inherently interpretable? It is more a matter of selecting the method of achieving interpretability: (a) interpreting existing models via post-hoc techniques, or (b) designing inherently interpretable models, which claim to provide more faithful interpretations than post-hoc interpretation of black-box models. The difference also lies in the agency, as noted before, and in whether interpretation affects the explanation process itself (that is, the model's inner workings) or merely adds natural language explanations of learned representations or models.
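
To make the post-hoc versus inherently interpretable distinction concrete, here is a minimal scikit-learn sketch (the synthetic dataset and the choice of estimators are illustrative assumptions, not taken from the article): a logistic regression is inherently interpretable because its learned coefficients are the interpretation, whereas a gradient-boosted black box is interpreted post hoc via permutation importance.

```python
# Sketch: inherently interpretable model vs. post-hoc interpretation of a black box.
# Illustrative assumptions: a synthetic dataset and standard scikit-learn estimators.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# (b) Inherently interpretable: the coefficients themselves are the interpretation.
linear = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Logistic regression coefficients:", linear.coef_[0].round(2))

# (a) Post-hoc interpretation of a black-box model: permutation importance
# estimates how much each feature contributes to held-out performance.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(black_box, X_test, y_test, n_repeats=10, random_state=0)
print("Permutation importances:", result.importances_mean.round(3))
```

Whether either output counts as a satisfactory explanation, rather than merely an interpretation aid, is exactly the conceptual question the article raises.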

Read More

Can you really trust Amazon Product Recommendation?

Article | January 28, 2021

Since the internet became popular, the way we purchase things has evolved from a simple process into a more complicated one. Unlike traditional shopping, it is not possible to experience products first-hand when purchasing online. Not only that, but there are more options and variants of a single product than ever before, which makes it more challenging to decide. To avoid making a bad purchase, the consumer has to rely heavily on the reviews posted by people who already use the product. However, sorting through relevant reviews for different products across multiple eCommerce platforms and then comparing them can be too much work. To solve this problem, Amazon has come up with sentiment analysis of product review data. Amazon performs sentiment analysis on product review data with artificial intelligence to develop the products best suited to the customer. This technology enables Amazon to create products that are most likely to be ideal for the customer.

A consumer wants to see only relevant and useful reviews when deciding on a product. A rating system is an excellent way to gauge the quality and efficiency of a product, but it still cannot provide complete information, as ratings can be biased. Detailed textual reviews are necessary to improve the consumer experience and help customers make informed choices. Consumer experience is a vital tool for understanding customer behavior and increasing sales. Amazon has come up with a unique way to make things easier for its customers. It does not promote products that merely resemble other customers' search histories; instead, it recommends products that are similar to the product a user is searching for. This way, it guides the customer using the correlation between products. To understand this concept better, we must understand how Amazon's recommendation algorithm has evolved over time.

The history of Amazon's recommendation algorithm

Before Amazon started performing sentiment analysis of customer product reviews using machine learning, it relied on collaborative filtering to make recommendations. Collaborative filtering is the most widely used way to recommend products online. Earlier, user-based collaborative filtering was common, which was not well suited to the task because it left many factors unaccounted for. Researchers at Amazon came up with a better way to recommend products that depends on the correlation between products instead of similarities between customers. In user-based collaborative filtering, a customer is shown recommendations based on the purchase histories of people with similar search histories. In item-to-item collaborative filtering, people are shown products similar to those in their recent purchase history. For example, if a person bought a mobile phone, they will be shown that phone's accessories.

Amazon's Personalization team found that using purchase history at the product level provides better recommendations. This way of filtering also offers a computational advantage: user-based collaborative filtering requires analyzing many users with similar shopping histories, which is time-consuming because of the demographic factors involved, such as location, gender, and age. Also, a customer's shopping history can change within a day, so to keep the data relevant you would have to update the index storing shopping histories daily. Item-to-item collaborative filtering, by contrast, is easy to maintain, as only a tiny subset of the website's customers purchase any specific product. Computing the list of individuals who bought a particular item is much easier than analyzing all of the site's customers for similar shopping histories. There is, however, a proper science behind calculating the relatedness of products. You cannot merely count the number of times two items were bought together, as that would not produce accurate recommendations. Amazon's research uses a relatedness metric to come up with recommendations: item Y is considered related to item X only if purchasers of item X are more likely to buy item Y than customers in general; only then is the recommendation considered accurate.

Conclusion

To provide a good recommendation to a customer, you must show products that have a higher chance of being relevant. There are countless products in Amazon's marketplace, and a customer will not go through many of them to figure out the best one; eventually, the customer becomes frustrated with thousands of options and tries a different platform. So Amazon had to develop a unique and efficient way to recommend products that works better than its competition. User-based collaborative filtering worked fine until the competition increased. As product listings in the marketplace have grown, you cannot rely merely on previously working algorithms; there are more filters and factors to consider than before. Item-to-item collaborative filtering is much more efficient, as it automatically narrows the candidates to products that are likely to be purchased, which limits the factors that need to be analyzed to provide useful recommendations. Amazon has grown into the biggest marketplace in the industry because customers trust and rely on its service, and it frequently makes changes to fit recent trends and provide the best customer experience possible.
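
As a rough illustration of the relatedness idea described above, here is a small Python sketch (the purchase data and the specific score are invented for illustration and are not Amazon's actual algorithm): for a chosen item X, it scores every other item Y by how much more often buyers of X purchase Y than customers overall do.

```python
# Sketch of item-to-item relatedness: Y is related to X when buyers of X
# purchase Y more often than the overall customer base does.
# Purchase histories below are invented for illustration.
from collections import defaultdict

purchases = {
    "alice": {"phone", "case", "charger"},
    "bob":   {"phone", "case"},
    "carol": {"laptop", "mouse"},
    "dave":  {"phone", "charger"},
    "erin":  {"laptop", "mouse", "case"},
}

n_customers = len(purchases)
buyers = defaultdict(set)          # item -> set of customers who bought it
for customer, items in purchases.items():
    for item in items:
        buyers[item].add(customer)

def relatedness(x: str, y: str) -> float:
    """P(bought Y | bought X) divided by P(bought Y); > 1 means more likely than average."""
    p_y_given_x = len(buyers[x] & buyers[y]) / len(buyers[x])
    p_y = len(buyers[y]) / n_customers
    return p_y_given_x / p_y

# Recommend the items most related to "phone", excluding the phone itself.
scores = {item: relatedness("phone", item) for item in list(buyers) if item != "phone"}
for item, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{item}: {score:.2f}")
```

In this toy data, accessories bought by phone purchasers score above 1, while unrelated items score below 1, which mirrors the intuition in the article: co-purchase counts alone are not enough, the comparison against the overall customer base is what makes the recommendation meaningful.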

Read More


Events