Artificial Intelligence and Advertising: Understanding Essentials

December 12, 2018

How can artificial intelligence help advertisers? The world of advertising is no exception when it comes to the use of AI. Some of the most popular examples of AI in the industry include the following:

Ad targeting. AI helps marketers analyze your interests, activities, and the device you use to determine which ads should be served to you. This is why you'll see ads related to a product or service you just searched for or found on a website you just visited. A minimal sketch of this idea follows below.

Retargeting. AI is highly effective at determining the kind of content that appeals to you, based on content you've read or watched before.

Search. Amazed at how Google seems to know exactly what you're searching for, regardless of your keywords' spelling or grammar? The search engine giant uses AI to analyze your spoken or written keywords and produce the search results closest to what you're looking for.
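To make the ad-targeting idea concrete, here is a minimal, hypothetical sketch in Python: it scores each candidate ad by the cosine similarity between a user's interest vector and the ad's topic vector, then serves the best match. The topic axes, interest values, and ad names are all invented for illustration; real targeting systems are far more elaborate.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine of the angle between two vectors: 1.0 means identical direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical topic axes: [sports, travel, cooking]
user_interests = np.array([0.9, 0.1, 0.4])  # e.g., inferred from browsing history
candidate_ads = {
    "running_shoes": np.array([1.0, 0.0, 0.1]),
    "flight_deals":  np.array([0.1, 1.0, 0.0]),
    "cookware_set":  np.array([0.0, 0.1, 1.0]),
}

# Serve the ad whose topic vector best matches the user's interests.
best_ad = max(candidate_ads,
              key=lambda ad: cosine_similarity(user_interests, candidate_ads[ad]))
print(best_ad)  # -> running_shoes
```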

Spotlight

Omnitracs

Omnitracs, LLC is a global pioneer of fleet management, routing and predictive analytics solutions for private and for-hire fleets. Omnitracs’ nearly 1,000 employees deliver software-as-a-service-based solutions to help more than 50,000 private and for-hire fleet customers manage nearly 1,500,000 mobile assets in more than 70 countries. The company pioneered the use of commercial vehicle telematics over 25 years ago and serves today as a powerhouse of innovative, intuitive technologies. Omnitracs transforms the transportation industry through technology and insight, featuring best-in-class solutions for compliance, safety and security, productivity, telematics and tracking, transportation management (TMS), planning and delivery, data and analytics, and professional services.

OTHER ARTICLES

DEEP THOMAS: EMBEDDING DATA-DRIVEN CULTURE ACROSS BUSINESS WITH CUTTING-EDGE INNOVATION

Article | February 24, 2020

A US$ 48.3 billion corporation, the Aditya Birla Group is in the league of the Fortune 500. Anchored by an extraordinary force of over 120,000 employees belonging to 42 nationalities, the Group is built on a strong foundation of stakeholder value creation. With over seven decades of responsible business practices, the Aditya Birla Group's businesses have grown into global powerhouses in a wide range of sectors: metals, chemicals, pulp & fibre, textiles, carbon black, cement, and telecom. Today, over 50% of its revenues flow from overseas operations that span 36 countries in North and South America, Africa, and Asia.

The Group Data 'n' Analytics Cell (GDNA) is the Big Data and Analytics arm of the Aditya Birla Group, created at its centre to strategize and partner with 18+ Group businesses across B2B and B2C domains to deliver on its strategic priorities through the power of AI. GDNA combines strong analytics and domain expertise, drawing best-in-class talent from leading global and Indian businesses, and leverages cutting-edge tools and advanced AI algorithms, built on a highly scalable and robust big data infrastructure, to mine and act upon petabytes of structured and unstructured data.

Read More

Thinking Like a Data Scientist

Article | December 23, 2020

Introduction

Nowadays, everyone with some technical expertise and a data science bootcamp under their belt calls themselves a data scientist. Moreover, most managers don't know enough about the field to distinguish an actual data scientist from a make-believe one: someone who calls themselves a data science professional today but may work as a cab driver next year. As data science is a very responsible field, dealing with complex problems that require serious attention and work, the data scientist role has never been more significant. So, perhaps instead of arguing about which programming language or which all-in-one solution is best, we should focus on something more fundamental: the thinking process of a data scientist.

The challenges of the Data Science professional

Any data science professional, regardless of his specialization, faces certain challenges in his day-to-day work. The most important of these involve decisions about how he goes about his work. He may have planned to use a particular model for his predictions, only to find that the model doesn't yield adequate performance (e.g., not high enough accuracy, or too high a computational cost, among other issues). What should he do then? It could also be that the data doesn't have a strong enough signal, and last time I checked, no data science programming library offered a fool-proof method that provides a clear-cut view on this matter (a rough heuristic is sketched at the end of this piece). These are calls that the data scientist has to make, shouldering all the responsibility that goes with them.

Why Data Science automation often fails

Then there is the matter of automating data science tasks. Although the idea sounds promising, it's probably the most challenging task in a data science pipeline. It's not unfeasible, but it takes a lot of work and a breadth of expertise that's usually impossible to find in a single data scientist. Often, you need to combine the work of data engineers, software developers, data scientists, and even data modelers. Since most organizations don't have all that expertise, or don't know how to manage it effectively, automation doesn't happen as they envision, and a large part of the data science pipeline ends up being done manually.

The Data Science mindset overall

The data science mindset is the thinking process of the data scientist, the operating system of her mind. Without it, she can't do her work properly across the large variety of circumstances she may find herself in. It's her mindset that organizes her know-how and helps her find solutions to the complex problems she encounters, whether that's wrangling data, building and testing a model, or deploying the model on the cloud. This mindset is her strategic potential, the think tank within, which enables her to make the tough calls she often needs to make for data science projects to move forward.

Specific aspects of the Data Science mindset

Of course, the data science mindset is more than a general disposition. It involves specific components, such as specialized know-how, tools that are compatible with each other and relevant to the task at hand, a deep understanding of the methodologies used in data science work, problem-solving skills, and, most importantly, communication abilities. The latter involves both the data scientist expressing himself clearly and understanding what the stakeholders need and expect of him. Naturally, the data science mindset also includes organizational skills (project management), the ability to work well with other professionals (even those not directly related to data science), and the ability to come up with creative approaches to the problem at hand.

The Data Science process

The data science process/pipeline is a distillation of data science work in a comprehensible form. It's particularly useful for understanding the various stages of a data science project and planning accordingly. You can view one version of it in Fig. 1 below. If the data science mindset is one's ability to navigate the data science landscape, the data science process is a map of that landscape. It's not 100% accurate, but it's good enough to help you gain perspective if you feel overwhelmed or need a better grip on the bigger picture.

Learning more about the topic

Naturally, it's impossible to exhaust this topic in a single article (or even a series of articles). The material I've gathered on it could fill a book! If you are interested in such a book, feel free to check out the one I put together a few years back; it's called Data Science Mindset, Methodologies, and Misconceptions, and it's geared towards data scientists, data science learners, and people involved in data science work in some way (e.g., project leaders or data analysts). Check it out when you have a moment. Cheers!
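Returning to the signal question raised above: there is no fool-proof library method, but here is a rough heuristic sketch (assuming scikit-learn, which the article doesn't prescribe) that compares cross-validated accuracy on the real labels against accuracy on randomly shuffled labels. If the two are indistinguishable, the features likely carry little usable signal; the final call still rests with the data scientist.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def signal_check(X, y, n_shuffles=20, seed=0):
    """Compare real-label CV accuracy with shuffled-label (no-signal) accuracy."""
    rng = np.random.default_rng(seed)
    model = LogisticRegression(max_iter=1000)
    real_score = cross_val_score(model, X, y, cv=5).mean()
    shuffled = [cross_val_score(model, X, rng.permutation(y), cv=5).mean()
                for _ in range(n_shuffles)]
    return real_score, float(np.mean(shuffled))

# Synthetic demo data; substitute your own feature matrix and labels.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
real, baseline = signal_check(X, y)
print(f"real: {real:.2f}, shuffled baseline: {baseline:.2f}")
# A real score well above the shuffled baseline suggests the data carries signal.
```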

Read More

Data Analytics Convergence: Business Intelligence (BI) Meets Machine Learning (ML)

Article | July 29, 2020

Headquartered in London, England, BP (NYSE: BP) is a multinational oil and gas company. Operating since 1909, the organization offers its customers fuel for transportation, energy for heat and light, lubricants to keep engines moving, and petrochemical products.

Business intelligence has always been a key enabler for improving decision-making processes in large enterprises, from the early days of spreadsheet software, to building enterprise data warehouses for housing large sets of enterprise data, to more recent efforts to mine those datasets to unearth hidden relationships. One underlying theme throughout this evolution has been the delegation of the crucial task of finding remarkable relationships between various objects of interest to human beings. What BI technology has been doing, in other words, is making it possible (and often easy) to find the needle in the proverbial haystack, provided you somehow know in which sectors of the barn it is likely to be. It is a validating, as opposed to a predictive, technology.

When the data is huge in terms of variety, volume, and dimensionality (a.k.a. Big Data) and/or the relationships between datasets go beyond the first-order linear relationships amenable to human intuition, the above strategy of relying solely on humans to do the essential thinking about the datasets, while utilizing machines only for crucial but dumb data infrastructure tasks, becomes totally inadequate. The remedy follows directly from this characterization of the problem: finding ways to utilize machines beyond menial tasks, offloading some or most of the cognitive work from humans to the machines.

Does this mean all the technology and associated practices developed over the decades in the BI space are no longer useful in the Big Data age? Not at all. On the contrary, they are more useful than ever: whereas in the past humans were in the driving seat, controlling the demand for the use of the diligently acquired and curated datasets, machines now take up that important role, unleashing manifold new ways of using the data and finding obscure, non-intuitive relationships that elude humans. Moreover, machines bring unprecedented speed and processing scalability to the game, which would be either prohibitively expensive or outright impossible with a human workforce.

Companies have to realize both the enormous potential of new automated, predictive analytics technologies such as machine learning and how to successfully incorporate those advanced technologies into the data analysis and processing fabric of their existing infrastructure. It is this marrying of relatively old, stable technologies (data mining, data warehousing, enterprise data models, etc.) with the new automated predictive technologies that has the potential to unleash the benefits so often hyped by the vested interests behind new tools and applications as the answer to all data analytics problems.

To see this in the context of predictive analytics, consider machine learning (ML). The easiest way to understand machine learning is to look at the simplest ML algorithm: linear regression. ML technology builds on the basic interpolation idea of regression and extends it using sophisticated mathematical techniques that are not necessarily obvious to casual users. For example, some ML algorithms extend the linear regression approach to model non-linear (i.e., higher-order) relationships between dependent and independent variables in the dataset via clever mathematical transformations (a.k.a. kernel methods) that express those non-linear relationships in a linear form, making them suitable to be run through a linear algorithm (illustrated in the sketch below).

Be it a simple linear algorithm or its more sophisticated kernel-method variations, ML algorithms have no context on the data they process. This is both a strength and a weakness. A strength, because the same algorithms can process a variety of different kinds of data, allowing us to leverage all the work that has gone into developing those algorithms across different business contexts; a weakness, because, since the algorithms lack any contextual understanding of the data, the perennial computer science truth of "garbage in, garbage out" manifests itself unceremoniously here: ML models have to be fed the "right" kind of data to draw out correct insights that explain the inner relationships in the data being processed.

ML technology provides an impressive set of sophisticated data analysis and modeling algorithms that can uncover very intricate relationships among the datasets they process. It provides not only very sophisticated, advanced data analysis and modeling methods but also the ability to use these methods in an automated, hence massively distributed and scalable, way. Its Achilles' heel, however, is its heavy dependence on the data it is fed. The best analytic methods would be useless, as far as drawing out useful insights is concerned, if applied to the wrong kind of data. More seriously, the use of advanced analytical technology can give users a false sense of confidence in the analysis results those methods produce, making the whole undertaking not just useless but actually dangerous.

We can address this fundamental weakness of ML technology by deploying its advanced, raw algorithmic processing capabilities in conjunction with existing data analytics technology, whereby contextual data relationships and key domain knowledge coming from the existing BI estate (data mining efforts, data warehouses, enterprise data models, business rules, etc.) are used to feed the ML analytics pipeline. This approach combines the superior algorithmic processing capabilities of the new ML technology with the enterprise knowledge accumulated through BI efforts, and allows companies to build on their existing data analytics investments while transitioning to incoming advanced technologies. This, I believe, is effectively a win-win situation and will be key to the success of any company involved in data analytics efforts.
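To make the kernel-method point concrete, here is a minimal sketch (using scikit-learn, an assumption on my part; the article names no tooling). Plain linear regression fits sin-shaped data poorly, while kernel ridge regression with an RBF kernel implicitly maps the inputs into a space where the relationship is linear and fits it well, illustrating the transformation described above.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X = np.sort(rng.uniform(-3, 3, 200)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(0, 0.1, 200)  # non-linear ground truth

linear = LinearRegression().fit(X, y)
kernel = KernelRidge(kernel="rbf", alpha=0.1, gamma=1.0).fit(X, y)

print(f"linear R^2: {linear.score(X, y):.2f}")  # poor: a line can't follow sin(x)
print(f"kernel R^2: {kernel.score(X, y):.2f}")  # high: captures the curvature
```

The same garbage-in, garbage-out caveat applies here: the kernel model fits well only because the features fed to it actually contain the non-linear relationship.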

Read More

CAN QUANTUM COMPUTING BE THE NEW BUZZWORD?

Article | March 30, 2020

Quantum mechanics wrote its chapter in the history of the early 20th century. Now, with its regular binary computing twin going out of style, quantum mechanics has led quantum computing to become the new belle of the ball! While the memory used in a classical computer encodes binary 'bits', one and zero, quantum computers use qubits (quantum bits). A qubit is not confined to a two-state solution; it can also exist in superposition, i.e., a qubit can represent 0, 1, or both 0 and 1 at the same time.
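As a minimal illustration of superposition, a qubit can be modeled as a normalized two-component complex vector; applying the Hadamard gate to the |0⟩ state yields an equal superposition of |0⟩ and |1⟩ (a textbook sketch, assuming NumPy):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)        # the classical-bit-like state |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

superposition = H @ ket0                      # (|0> + |1>) / sqrt(2)
probabilities = np.abs(superposition) ** 2    # squared amplitudes
print(probabilities)  # [0.5 0.5] -> equal chance of measuring 0 or 1
```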

Read More

