Q&A with Tom O'Regan, CEO at Madison Logic

MEDIA 7 | September 5, 2019

Tom O'Regan is CEO at Madison Logic, a global leader in account-based marketing. In this role, O'Regan leads all company initiatives, with an emphasis on positioning ActivateABM™ as the only truly global account-based marketing platform built for B2B marketers.

Focused on driving Madison Logic's momentum and enabling the marketer to be the driving force for growth and change within the enterprise, O'Regan also sits on the Advisory Board of The Fiscal Times and on both the Sales Executive Committee and the B2B Operating Group at the IAB.

MEDIA 7: If I were to say to a bunch of people who know you, ‘Give me three adjectives that best describe you’, what would I hear?
TOM O'REGAN: 
Passionate, optimistic, and indefatigable.

M7: Ovum has named Madison Logic a “Leader” in the Ovum Market Radar: Account-Based Marketing report. What factors have led Madison Logic to emerge as a leader in ABM?
TO'R:
Ovum recognized the strength of our ABM platform in achieving predictable growth among the fastest-growing B2B organizations globally – specifically, the ability to leverage intent data to target the right accounts, measure ABM effectiveness, shorten sales cycles, and accelerate growth. The Ovum report also highlights Madison Logic's data gateway, which allows CRM and MAP platforms – including Salesforce, Marketo, and Oracle Eloqua – to connect seamlessly.


"AI will play a critical role in the dynamic account targeting, content personalization, and conversion metrics that will accelerate growth for B2B marketers."

M7: How will the bi-directional integration with Oracle Eloqua help Madison Logic to empower B2B marketers?
TO'R:
Communication between marketing and sales, along with integration with the leading CRM and MAP platforms, is a key factor for ABM success at large B2B organizations. With the Oracle Eloqua integration, Madison Logic seamlessly accesses target account lists to activate ABM Demand Generation and Advertising programs, and transfers engagement data back into Eloqua for lead scoring. Joint clients can also take advantage of Journey Acceleration™, which personalizes messaging based on marketing funnel stage to increase the volume, velocity, and value of deals in the pipeline.


"Communication between marketing and sales, along with integration with the leading CRM and MAP platforms, is a key factor for ABM success at large B2B organizations."

M7: How do you see technology and artificial intelligence impacting the ABM landscape in the near future?
TO'R:
AI will play a critical role in the dynamic account targeting, content personalization, and conversion metrics that will accelerate growth for B2B marketers.



M7: Tom, your love for golf is well-known. How long have you been playing golf, and how do you make time for it?
TO'R:
I’ve played golf since college and have met so many unbelievable people, explored so many amazing places around the world, and had incredible experiences enjoying the game of golf. With offices in NY, Boston, San Francisco, Seattle, London, Dublin, and Singapore, I have the opportunity to play with team members, partners, and clients once in a while. When I’m not working, I pick vacation spots partially on the quality of the links.

M7: What is your superpower?
TO'R:
I'd like to think I have a good eye for talent: finding people who have the skills and, more importantly, the determination needed to grow and scale a business as a team.

ABOUT MADISON LOGIC

Madison Logic’s global account-based marketing (ABM) solution empowers B2B marketers to convert their best accounts faster. By integrating the ML Data Cloud with CRM and marketing automation platforms, intent data, and more than 20 other datasets, marketers execute a unified activation strategy across the funnel with ABM Advertising and ABM Demand Generation to align with sales, accelerate the buyer journey, and drive growth.

More C-Suite on deck

Listen to your customers, advises Christopher Penn, Co-Founder and Chief Data Scientist at TrustInsights.ai

Media 7 | November 16, 2021

Christopher Penn, Co-Founder and Chief Data Scientist at TrustInsights.ai shared his insights with us on how marketers can make better use of data, attribution models and natural language processing to promote conversions and increase customer engagement. Read on to find out about his three-part strategy for successful marketing campaigns.

Read More

'Raising the voices of those who may not always be heard is critical,' says Claire Thomas

Media 7 | April 28, 2023

Claire Thomas is responsible for developing and implementing a strategy for diversity, equity, and inclusion (DEI) across Hitachi Vantara through programs that reflect the diverse backgrounds, interests, and passions of their current and future workforce. Continue reading to learn her views on the significance of inclusion and diversity in an organization.

Read More

‘Data teams are critical in defining and driving business growth metrics’ says Gaurav Rewari, CEO of Mode.

Media 7 | March 4, 2022

Gaurav Rewari, CEO of Mode Analytics, elaborates on his role leading the most comprehensive platform for collaborative Business Intelligence and Interactive Data Science. Read on to know more about his thoughts on digitization and Mode's brand-new visualization tool, Visual Explorer.

Read More


Related News

Big Data

Teradata helps customers accelerate AI-led initiatives with new ModelOps capabilities in ClearScape Analytics

iTWire | September 27, 2023

Teradata today announced new enhancements to its AI/ML (artificial intelligence/machine learning) model management software in ClearScape Analytics, known as ModelOps, to meet the growing demand from organisations across the globe for advanced analytics and AI. The new features – including "no code" capabilities as well as robust new governance and AI "explainability" controls – enable businesses to accelerate, scale, and optimise AI/ML deployments to generate business value from their AI investments more quickly.

Deploying AI models into production is notoriously challenging. A recent O'Reilly survey on AI adoption in the enterprise found that only 26% of respondents currently have models deployed in production, with many companies stating they have yet to see a return on their AI investments. This is compounded by the recent excitement around generative AI and the pressure many executives are under to implement it within their organisation, according to a recent IDC survey sponsored by Teradata.

ModelOps in ClearScape Analytics makes it easier to operationalise AI investments by addressing the key challenges that arise when moving from model development to production deployment: end-to-end model lifecycle management, automated deployment, governance for trusted AI, and model monitoring. The governed ModelOps capability supplies the framework to manage, deploy, monitor, and maintain analytic outcomes, with features such as dataset auditing, code tracking, model approval workflows, performance monitoring, and alerting when models are not performing well.

"We stand on the precipice of a new AI-driven era, which promises to usher in frontiers of creativity, productivity, and innovation. Teradata is uniquely positioned to help businesses take advantage of advanced analytics, AI, and especially generative AI, to solve the most complex challenges and create massive enterprise business value," said Teradata chief product officer Hillary Ashton. "We offer the most complete cloud analytics and data platform for AI. And with our enhanced ModelOps capabilities, we are enabling organisations to cost-effectively operationalise and scale trusted AI through robust governance and automated lifecycle management, while encouraging rapid AI innovation via our open and connected ecosystem. Teradata is also the most cost-effective, with proven performance and flexibility to innovate faster, enrich customer experiences, and deliver value."

New capabilities and enhancements to ModelOps include:
- Bring Your Own Model (BYOM), now with no-code capabilities, allowing users to deploy their own machine learning models without writing any code and simplifying the deployment journey with automated validation, deployment, and monitoring
- Mitigation of regulatory risks through advanced model governance capabilities and robust explainability controls to ensure trusted AI
- Automatic monitoring of model performance and data drift, with zero-configuration alerts

Teradata customers are already using ModelOps to accelerate time-to-value for their AI investments. A major US healthcare institution uses ModelOps to speed up its deployment process and scale its AI/ML personalisation journey; a 3x increase in productivity helped it deploy thirty AI/ML models that predict which of its patients are most likely to need an office visit, implementing "Personalisation at Scale." A major European financial institution used ModelOps to reduce AI model deployment time from five months to one week, with models deployed at scale and integrated with operational data to deliver business value.

Read More

Big Data Management

Microsoft's AI Data Exposure Highlights Challenges in AI Integration

Microsoft | September 22, 2023

AI models rely heavily on vast data volumes for their functionality, which increases the risks associated with mishandling data in AI projects. Microsoft's AI research team accidentally exposed 38 terabytes of private data on GitHub, and many companies feel compelled to adopt generative AI despite lacking the expertise to do so effectively.

Artificial intelligence (AI) models are renowned for their enormous appetite for data, making them among the most data-intensive computing platforms in existence. While AI holds the potential to revolutionize the world, it is utterly dependent on the availability and ingestion of vast volumes of data.

An alarming incident involving Microsoft's AI research team recently highlighted the immense data exposure risks inherent in this technology. The team inadvertently exposed 38 terabytes of private data when publishing open-source AI training data on the cloud-based code hosting platform GitHub. The exposed data included a complete backup of two Microsoft employees' workstations, containing highly sensitive personal information such as private keys, passwords to internal Microsoft services, and over 30,000 messages from 359 Microsoft employees. The exposure was the result of an accidental misconfiguration that granted "full control" access instead of "read-only" permissions, meaning potential attackers could not only view the exposed files but also manipulate, overwrite, or delete them.

Although a crisis was narrowly averted in this instance, the incident is a glaring example of the new risks organizations face as they integrate AI more extensively into their operations. With staff engineers increasingly handling vast amounts of specialized and sensitive data to train AI models, it is imperative for companies to establish robust governance policies and educational safeguards to mitigate security risks.

Training specialized AI models necessitates specialized data. As organizations of all sizes embrace the advantages AI offers in their day-to-day workflows, IT, data, and security teams must grasp the exposure risks inherent in each stage of the AI development process. Open data sharing plays a critical role in AI training, with researchers gathering and disseminating extensive amounts of both external and internal data to build the training datasets their AI models need. However, the more data that is shared, the greater the risk if it is not handled correctly, as the Microsoft incident shows. AI, in many ways, challenges an organization's internal corporate policies like no other technology before it. To harness AI tools effectively and securely, businesses must first establish a robust data infrastructure that avoids the fundamental pitfalls of AI.

Securing the future of AI requires a nuanced approach. Despite concerns about AI's potential risks, organizations should be more concerned about the quality of AI software than about the technology turning rogue. PYMNTS Intelligence's research indicates that many companies are uncertain about their readiness for generative AI but still feel compelled to adopt it: a substantial 62% of surveyed executives believe their companies lack the expertise to harness the technology effectively, according to 'Understanding the Future of Generative AI,' a collaboration between PYMNTS and AI-ID.

The rapid advancement of computing power and cloud storage infrastructure has reshaped the business landscape, setting the stage for data-driven innovations like AI to revolutionize business processes. While tech giants and well-funded startups produce most of today's AI models, computing power costs are continually decreasing, and in a few years everyday consumers may be able to run models as advanced as today's cutting-edge platforms on personal devices at home. This juncture signifies a tipping point: the ever-increasing zettabytes of proprietary data produced each year must be addressed promptly, or the risks associated with future innovations will scale up in step with their capabilities.

Read More

Big Data Management

Kinetica Redefines Real-Time Analytics with Native LLM Integration

Kinetica | September 22, 2023

Kinetica, a renowned speed layer for generative AI and real-time analytics, has unveiled a native Large Language Model (LLM) integrated with Kinetica's architecture. It empowers users to perform ad-hoc analysis of real-time, structured data in natural language, without external API calls and without data ever leaving the secure confines of the customer's environment. This milestone follows Kinetica's earlier innovation as the first analytic database to integrate with OpenAI.

Amid the LLM fervor, enterprises and government agencies are actively seeking ways to automate business functions while safeguarding sensitive information that could be exposed through fine-tuning or prompt augmentation. Public LLMs, exemplified by OpenAI's GPT-3.5, raise valid privacy and security concerns. These concerns are mitigated by native offerings that are integrated into the Kinetica deployment and securely nestled within the customer's network perimeter.

Beyond its security advantages, Kinetica's native LLM is fine-tuned to the syntax and industry-specific data definitions of domains such as telecommunications, automotive, financial services, and logistics. This tailored approach produces more reliable and precise SQL queries. Notably, the capability extends beyond conventional SQL, efficiently handling the intricate time-series, graph, and spatial inquiries essential to better decision-making. Kinetica's approach to fine-tuning emphasizes optimizing SQL generation for consistent, accurate results, in contrast to more conventional approaches that prioritize creativity but yield diverse and unpredictable responses. This commitment to reliable SQL query outcomes gives businesses and users peace of mind.

Illustrating the practical impact of this innovation, the US Air Force has been collaborating closely with Kinetica to apply advanced analytics to sensor data, enabling swift identification of and response to potential threats – a partnership that contributes significantly to the safety and security of the national airspace system. The US Air Force now employs Kinetica's embedded LLM to detect airspace threats and anomalies using natural language. Kinetica's database excels at converting natural language queries into SQL, delivering responses in seconds even for complex or unfamiliar questions. Furthermore, Kinetica combines multiple analytics modes, including time series, spatial, graph, and machine learning, expanding the range of queries it can address.

What enables Kinetica to excel at conversational query processing is its use of native vectorization. In a vectorized query engine, data is organized into fixed-size blocks called vectors, and query operations run on these vectors in parallel, in contrast to traditional engines that process individual data elements sequentially. The result is significantly faster query execution within a smaller compute footprint. This speed comes from GPUs and the latest CPU advancements, which perform simultaneous calculations on multiple data elements, greatly accelerating computation-intensive tasks across multiple cores or threads.

About Kinetica

Kinetica is a pioneer in real-time analytics and the creator of a real-time analytical database designed for sensor and machine data. The company offers native vectorized analytics for generative AI, spatial analysis, time-series modeling, and graph processing. Many of the world's largest enterprises across the public sector, financial services, telecommunications, energy, healthcare, retail, and automotive industries rely on Kinetica to build novel solutions for time-series data and spatial analysis. Customers include the US Air Force, Citibank, Ford, T-Mobile, and numerous others.
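The vectorized execution model described above can be sketched in a few lines. The snippet below is an illustrative example only (plain Python, not Kinetica code; the `block_size` parameter and function names are hypothetical): it contrasts element-at-a-time processing with block-wise processing over fixed-size chunks, the data layout that lets real engines apply one SIMD or GPU instruction to many values at once.

```python
# Sketch: scalar vs. block-wise ("vectorized") processing of the same data.
# Not Kinetica code; `block_size` and both function names are illustrative.

def scalar_sum_of_squares(data):
    total = 0
    for x in data:                  # one element per iteration
        total += x * x
    return total

def blockwise_sum_of_squares(data, block_size=4):
    total = 0
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]        # fixed-size vector of values
        total += sum(x * x for x in block)    # one operation over the block
    return total

data = list(range(10))
assert scalar_sum_of_squares(data) == blockwise_sum_of_squares(data) == 285
```

In a real vectorized engine the per-block operation is a single hardware instruction over the whole vector rather than a Python loop; the point of the sketch is only the organization of data into fixed-size blocks that makes that possible.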

Read More

