Listen to your customers, advises Christopher Penn, Co-Founder and Chief Data Scientist at TrustInsights.ai

Media 7 | November 16, 2021

Christopher Penn, Co-Founder and Chief Data Scientist at TrustInsights.ai, shared his insights with us on how marketers can make better use of data, attribution models, and natural language processing to drive conversions and increase customer engagement. Read on to find out about his three-part strategy for successful marketing campaigns.

Data of any kind is a value exchange trade. I give you my data, you give me something of value in return.

MEDIA 7: Hi, Christopher, thank you for your time! We are very excited to have you here! Let us begin with you telling us a little bit about TrustInsights.ai and what the journey has been like. What inspired you to co-found Trust Insights?
CHRISTOPHER PENN:
My partner and I worked together at a public relations firm for a few years before founding the company, and a lot of our work was focused on change management and governance, analytics, data science, machine learning, and AI. The firm we were at was moving in a different direction, so that, combined with a few other factors, made us think it was probably time for us to go off on our own and do something aligned with our focus. That is the genesis of the company. We wanted to see if there was a large market for the types of marketing, technology, and management consulting that we wanted to do, and so far, so good! We are three years into our journey and certainly having a lot of fun. We have a decent amount of revenue. We are still only three people, but the power of automation, AI, and data science makes that scale well.


M7: That is very inspiring! To be honest, I think it is part of every one of us; at some point, we want to go off and start on our own. You've also mentioned that you are a minority- and women-owned business. Could you please tell us a little bit about that? How has the journey been so far?
CP:
So, yes, it's funny: by law and our company charters, we're 50-50 partners. My partner identifies as female, and I am a non-majority in the USA, so we meet both criteria. We've seen and experienced more issues around gender than around race. When we were planning the company, we initially thought maybe we should try to get some funding, try to get some investment. We decided that Katie, my partner, was going to be the CEO because she is better at overall operations and running an organization. She is a better people manager and things like that. I am more of the mad scientist who's just going to sit in the lab in the back, make crazy things, and make stuff blow up. Having worked together for three years before founding the company, I knew that she was the better manager, the better executive, better at running an entity than I ever would be. Very early on, when talking to some of those investors, we had one investor say to our faces that they would not fund a company run by a woman. Well, all right, we will scratch you off the list of people we ever want to talk to again. That attitude may have been common 75 years ago, but in 2018, in the modern world, it just does not fly anymore.


M7: Data and machine learning have become a powerful influence in the marketing industry. At Trust Insights, what are the most powerful data science and machine learning-powered solutions you offer to your clients?
CP:
Probably the most in-demand are attribution models. We have custom code we've written for getting data out of Google Analytics to build an attribution model. It's based on how we think attribution works, which is different from the way Google does theirs. Google has published papers about how theirs works, and it makes sense for what they're trying to do; it's aligned with their goals and is by no means bad technology. If Google were to use our methodology, for example, it wouldn't scale as well, because it's much more computationally intensive. It's fine for a consulting firm to run a model on behalf of a client: set it up, let it go do its thing, grab a sandwich or whatever, and come back in 20 minutes when it's done. If you have an application like Google Analytics, you can't tell users, "Click on this button and come back in 20 minutes." That's not a good user experience. So we have good channel-level attribution, and then we have content-level attribution, where we look at the places people go on your website and your digital properties and the paths they take to conversion.

We identify the most commonly walked paths, the pages and the content that most often nudge people towards conversion, either in general or because they have very high conversion efficiency, as we call it: if Page A requires three visitors to get a conversion and Page B requires 200, where should I send my traffic? Send it to Page A, because it's a more efficient page for conversion. In terms of machine learning, those are some of the basics we do that are very effective. Then we have been doing a lot with natural language processing, particularly anything around customer experience. A fun example: a couple of years ago a recruiting company hired us to answer the question, why are our job ads not performing? Why are we spending a lot of money and not getting many candidates? We took 5,000 of the ads and did some natural language processing on them to understand the words and phrases being used in those ads and how prominent they were, and then they gave us access to their call center.

We took 17,000 calls from their call center and converted them into text using AI-based transcription. What we found was that the candidates in those calls were talking about starting pay, pay per mile, being home for the holidays, what kinds of loads they would be hauling, and none of that was in the job ads. So we said, if you change the language you use in the ads, you will attract more candidates. They did, and they saw a 40% increase in conversions within 30 days, just by changing the language they used. That's an example of the natural language processing work you can do that is so impactful, because companies have all this unstructured data that they're not using. It just sits in an inbox. It sits in a call center, and nobody ever digs through it to use it for marketing purposes.
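To make that kind of analysis concrete, here is a minimal sketch of how the language gap between call transcripts and job ads could be surfaced with standard NLP tooling. It is an illustration only: the function name, thresholds, and data loading are assumptions, not Trust Insights' actual code.

```python
# A minimal sketch: compare term prominence in call transcripts vs. job ads.
# Assumes the transcripts and ads are already loaded as two lists of strings.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def vocabulary_gap(transcripts, job_ads, top_n=25):
    """Rank terms that are prominent in candidate calls but rare in the job ads."""
    vec = TfidfVectorizer(stop_words="english", ngram_range=(1, 2), min_df=5)
    tfidf = vec.fit_transform(list(transcripts) + list(job_ads))
    terms = np.array(vec.get_feature_names_out())
    n = len(transcripts)
    call_weight = np.asarray(tfidf[:n].mean(axis=0)).ravel()  # average weight in calls
    ad_weight = np.asarray(tfidf[n:].mean(axis=0)).ravel()    # average weight in ads
    gap = call_weight - ad_weight  # positive = candidates say it, the ads don't
    order = gap.argsort()[::-1][:top_n]
    return list(zip(terms[order], gap[order]))

# Hypothetical usage:
# transcripts = [...]  # e.g., 17,000 transcribed call-center conversations
# job_ads = [...]      # e.g., 5,000 job ads
# for term, score in vocabulary_gap(transcripts, job_ads):
#     print(f"{term}\t{score:.4f}")
```

Terms that score high here (in the trucking example, things like starting pay or home time) are candidates for language to work back into the ads.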


An important thing about attribution that a lot of people forget is that it is not just an analysis of your marketing efforts, but also a measure of the channels’ effectiveness.



M7: That's true. So given that data is so important today, how do you think the marketing industry is changing? What are the latest trends, and how do marketers need to adapt to them?
CP:
There's a bunch of trends. The first, and I think the most important, is that marketers got spoiled by the level of data and access they were able to get, particularly from third parties. In the last five years or so, marketers have had access to a massive amount of third-party data gathered by big companies like Facebook. And now, in the last two years, we have seen legislative efforts: GDPR took effect in 2018; CCPA took effect in 2020; CPRA takes effect in 2023; and PIPL takes effect a month from today, in November, covering the entirety of the People's Republic of China, which is the largest market on the planet. All these laws are intended to curtail the use of information that consumers did not consent to share. So marketers need to figure out very quickly, if they haven't already, how to obtain informed consent for the data they're gathering if they want to continue gathering it, and to understand that they do not have a right or a privilege to customer data; they have to earn it. Data of any kind is a value exchange trade. I give you my data, you give me something of value in return.

Consumers are aware, B2B or B2C, it doesn't matter: they know that the moment they fill out the form there will be a salesperson calling, there will be 44 emails in the inbox that will not stop, and they are going to see that ad all the time. As a result, consumers say they don't particularly like that as an experience. It has been legislated in some places, and then you have technologies doing the rest: something like 30% of people use ad blockers of some kind. Apple just released blocking of third-party cookies on various devices; the new Mail Privacy Protection, which about 15% of consumers have adopted so far; app tracking blocking in iOS 14, which 96% of consumers opted into; and the new Hide My Email feature, which creates a burner email address right on your iPhone, is fantastic. So all these tools now exist for consumers to withdraw consent from marketers, to say, you need to provide me more value, and if you don't, I am not going to give you the things that you want.

The next thing is that the pandemic, as everybody knows, accelerated digital transformation. We have been sitting at home for 18 months, these devices have become our lifelines to other people, and a lot of companies have not figured out how people communicate now and are far behind. It also means that how people connect and communicate is different, and people use different communication networks now. Facebook itself has lost a lot of traction, especially among people under 30, to networks like Snapchat and TikTok, but also to private social communities, and these are popping up like weeds everywhere. The two platforms people like best are Slack and Discord. Discord has taken off like a rocket in the last 18 months. There are servers out there for literally every conceivable thing, many of which you can't talk about in a professional setting. They are out there, that is how people communicate, and they are invisible to ad targeting. They are invisible to marketers and to search engines. So we have all those conversations happening, and the same with text messaging and all the secure messaging apps, Signal and Telegram; that is where conversations happen, conversations that marketers wish they were part of but are not. So marketers have to figure out how to get invited into these conversations, how to find actual fans and advocates of our company's brands who are willing to ambassador us into private communities and introduce us as marketers, so that we can even just see what is going on and hear what is being said. And again, there are not many marketers doing that.

They are very far behind the curve, and this trend will only continue, because a large number of people, especially under the age of 30, don't want companies tracking them. They don't want things happening without their consent. Take my 16-year-old child, who uses Firefox because of its ad-blocking technology and has something like 14 different email addresses. They have one just for corporations, with a rule set up so that any email to it is automatically deleted after an hour, because their attitude is, I don't ever want to hear from a corporation, I just needed to sign up for something. So, as marketers, we have to do two things: we have to understand the changes that are happening, and we have to listen to people more, and we are not doing either of those things very well.


M7: What would you say are the benchmarks of a successful marketing and sales strategy today? We are talking about a time when email marketing and even newsletters are something people actively consume and demand. How should a marketer approach this?
CP:
You need success in three areas, and I evaluate them based on the top, middle, and bottom of your marketing operations. At the top is awareness: attention is everything at the top of the funnel, because if you don't have people's attention you have nothing else. A lot of people like to go with branding and things like that, but guess what, if nobody remembers who you are, there is no rest of your funnel. If the top of the funnel is empty, there is no rest of the funnel. That is number one.

Number two, when it comes to engagement and retention of that attention, is publishing, and again there are so many different options. Email newsletters have made a huge resurgence in the last few years. When you see something like Facebook and the way Facebook behaves, sometimes you feel that you don't want to do business with a company that behaves like that. If you look at the way Google operates, it is very effective but also extremely expensive. Things like email are an integral part of keeping yourself in front of an audience, and if you can figure out how to provide a lot of value as a publisher, then you grow an audience that you get to hold on to for a substantial time.

In the last three years, my mailing list has gone from 40,000 subscribers to 250,000, because people want things on their terms. When I send someone an email, it can stay in their inbox for as long as they want. They don't have to read it the moment it comes out. In your social media feed, if you don't interact with a post right now, you won't find it again, because all these different algorithms change what you see; you don't have those issues with email. It is there when you need it to be, on your schedule and your preferences. It's on-demand publishing. Video is the same thing: some companies and people are doing very well with YouTube, not because YouTube is inherently better as a channel, but simply because it's there when people want it. A lot of people have looked into live streaming and live social audio and things like that, and that's fun, but ultimately most people don't do scheduled appointment media. They do what they want to. Netflix has trained us to have whatever we want, whenever we want, and marketers have to adapt to that.

The third part of that strategy is your community. You need some kind of community that you can occasionally pitch to, but for the most part it is about benefiting the community and taking on the responsibility of being a student of that community. Otherwise, you won't be able to hold on to them or, more importantly, be present in their minds when they need you. We have a Slack group of about 2,000 people. It's a community that we created and that we run, administer, and nurture. Most of the time we are not pitching ourselves; most of the time we are answering questions and people are talking to each other. It is a way for us to keep those true fans, if you will, in connection with each other and in a forum that we have a little bit more control over. There is no algorithm in a Slack group; you see what you see.

So you have a three-part strategy: awareness, publishing for retention, and community. That is a successful strategy. There is a study done by LinkedIn, either LinkedIn Labs or the LinkedIn institute, one of the two. It looked specifically at B2B, but it applies in a lot of cases to any kind of complex sale, and it found that 95% to 98% of the audience is not looking to buy. So if all your efforts say buy now, at best you are going to get 2% of your audience, and you are going to piss off the rest. If you build the strategy of awareness, publishing, and community, think of a person's attention as a spotlight that comes around for about two seconds when they are looking for a solution. If you have their attention, you can earn that business in that very thin slice of time. The rest of the time you provide value, so that when that spotlight comes around again, you are there. That is the strategy to pursue, and it is what we see a lot of successful companies pursuing.


M7: What are the different ways that TrustInsights helps clients build a funnel and a strategy exactly like the one you just explained?
CP:
A lot of it is providing clients with data and analytics so that they can make better decisions, for example with awareness. Companies will ask, how do we put together a marketing strategy or a marketing plan, particularly a content marketing plan? Using things like predictive analytics and time-series forecasting, we can take a keyword list, for example out of your SEO tool, and forecast forward when each keyword is most likely to be searched in the next 52 weeks, and then, week by week, find the topics or phrases people will be searching for that week. Then you change your content strategy to reflect what people are thinking about and when they are thinking about it. You will do better, because there is nothing quite like a potential customer thinking about, say, Google Analytics, and that very week a newsletter with a lead article on it lands in their inbox, and they go, "Oh, I was just thinking about that!" So predictive analytics, depending on the topic, can be a very strong solution not only for staying relevant to customers, but for understanding what they want and when they want it.
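As a rough illustration of that kind of forecast, the sketch below fits a simple seasonal model to weekly search-interest history for each keyword and projects it 52 weeks forward. The file name, data layout, and choice of model are assumptions for the example, not the specific tooling Trust Insights uses.

```python
# A minimal sketch: forecast weekly search interest per keyword 52 weeks out.
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def forecast_keyword(weekly_series: pd.Series, horizon: int = 52) -> pd.Series:
    """Project weekly search interest forward, assuming a yearly seasonal cycle."""
    model = ExponentialSmoothing(
        weekly_series,          # needs at least two years of weekly history
        trend="add",
        seasonal="add",
        seasonal_periods=52,    # 52 weeks per seasonal cycle
    ).fit()
    return model.forecast(horizon)

# Hypothetical usage: one column of weekly search interest per keyword.
# history = pd.read_csv("keyword_history.csv", index_col="week", parse_dates=True)
# forecasts = pd.DataFrame({kw: forecast_keyword(history[kw]) for kw in history.columns})
# print(forecasts.idxmax(axis=1).head(12))  # which keyword peaks in each upcoming week
```

The week-by-week view of which keyword peaks when is what would feed an editorial calendar, so the newsletter lands while the topic is top of mind.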

The second is analytics infrastructure. If you don't know what you have and you can't measure it, you won't be able to manage it. You can't manage what you can't measure. A lot of our clients, when they first come to us, don't have the right pieces in the right places. It is kind of like a kitchen where all the appliances are in pieces spread across the counters. Technically, they have everything, but it is not put together right and they don't know how to use it. Maybe they have this nice new piece of marketing automation software, but they don't know what to do with it. It is like having a nice blender and saying, I should put the steak in this. No, that's not what it's for! So we do a lot of training and education for our clients: you make soup with the blender, you use the frying pan for the steak, and things will work out a little bit better. Don't make soup in the frying pan! It's a horrible idea.

That's another key part of what we do for clients. And then, of course, there is a lot of change management: helping a company change its processes and train its people to be able to use the technology. If you have Google Analytics and Google Tag Manager set up properly, you have world-class analytics capabilities. But if you don't know what to do with that data and don't know how to make decisions from it, then it is like owning a nice Tesla you never drive. It looks great in the driveway, but it is not fulfilling the function it was intended for. If you have Google Analytics and you don't use it to make decisions, it is just a decoration. That's the third big thing: helping companies use technology and data to make better decisions.



If you don't know what you have and you can't measure it, you won't be able to manage it. You can't manage what you can't measure.


M7: Speaking about TrustInsights, you also offer AI-powered attribution modelling. Could you please tell us a bit about that?
CP:
Well, yeah, that's the Markov chain modelling. We use Markov chain propensity analysis for channel-based attribution. An important thing about attribution that a lot of people forget is that it is not just an analysis of your marketing efforts, but also a measure of the channels’ effectiveness. As marketers, we tend to think, and advertising companies have certainly tried to persuade us, that everything is our responsibility, but half of the responsibility lies with the AdTech platform. If the AdTech platform has a terrible audience, it doesn't matter how good your ads are, you are going to get crap results out of it. So good attribution analysis can help you better understand the effect of all the different channels you are working with, and then you can start to dig in and say, well, is this channel not working because we are bad at using it, or is this channel not working because it is a bad channel?
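For readers curious what Markov chain attribution looks like in practice, here is a minimal sketch of the common removal-effect approach: estimate transition probabilities between touchpoints from observed journeys, measure how much the overall conversion probability drops when each channel is removed, and split conversions in proportion to those drops. The path format and function names are illustrative assumptions, not Trust Insights' production code.

```python
# A minimal sketch of removal-effect Markov chain attribution.
from collections import defaultdict

def transition_probs(paths):
    """Estimate transition probabilities between touchpoints from observed paths."""
    counts = defaultdict(lambda: defaultdict(int))
    for path in paths:
        for a, b in zip(path, path[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def conversion_prob(probs, removed=None, iters=500):
    """Probability of reaching 'conversion' from 'start', optionally with a channel removed."""
    states = set(probs) | {s for nxt in probs.values() for s in nxt}
    p = {s: 0.0 for s in states}
    p["conversion"] = 1.0
    for _ in range(iters):  # value iteration on the absorbing chain
        for s, nxt in probs.items():
            if s in ("conversion", "null") or s == removed:
                continue  # absorbing states and the removed channel stay fixed
            p[s] = sum(w * (0.0 if t == removed else p[t]) for t, w in nxt.items())
    return p.get("start", 0.0)

def markov_attribution(paths):
    """Split observed conversions across channels by their removal effect."""
    probs = transition_probs(paths)
    base = conversion_prob(probs)
    channels = {s for s in probs if s not in ("start", "conversion", "null")}
    effects = {c: 1 - conversion_prob(probs, removed=c) / base for c in channels}
    total = sum(effects.values())
    conversions = sum(1 for path in paths if path[-1] == "conversion")
    return {c: conversions * e / total for c, e in effects.items()}

# Hypothetical journeys: each path starts at "start" and ends in "conversion" or "null".
paths = [
    ["start", "organic", "email", "conversion"],
    ["start", "social", "null"],
    ["start", "email", "conversion"],
    ["start", "social", "organic", "null"],
]
print(markov_attribution(paths))  # fractional credit per channel, summing to total conversions
```

The "is it a bad channel or are we bad at it" question maps onto the removal effect: a channel whose removal barely changes the conversion probability is earning little credit, whichever of the two explanations applies.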


M7: You have said before that it is very important for marketers to be aware of their customers and what they want, and to base their approach on that so they can convert them. How would you say marketers today should stay in touch and keep up with changing consumer demands?
CP:
That is an easy one: talk to your customers, and listen to them when they talk to you. I am astonished at the number of companies that will do things like hold three-day executive retreats, sitting in a room full of Post-it notes about what the customer wants, and when you listen to them you realize that not one of these people has talked to a single customer in the last 18 months. They have not listened to their customers; they have been guessing, based on their own opinions, about what the customer wants, and their guesses, in a lot of cases, are wrong. One of the best examples of a company that did this well is T-Mobile, and I use this example a lot because it was great in its simplicity. Under John Legere, for about six years, they went out and did a bunch of market research with their customers: they talked to their customers, looked into their customer service inbox, listened to their call center, and made a long list of everything their customers hate.

Extra fees, absurd rules about what people can trade in. They said, hey, let us make a list of all the things our customers hate, and let us stop doing them, one at a time. It was a brilliant strategy that allowed them to vacuum up a bunch of customers, simply because they stopped doing things that customers hate. So look at the marketplace and at what your competitors do differently, and listen to your customers: talk to them, pick up the phone, meet for coffee (wear a mask), anything like that. Ask them what they don't like about your industry, why they don't like you, what they wish you would change, and listen. You make a whole big set of long lists, you do some quantitative surveys, and then you say, let's stop doing the things our customers hate the most. That is pretty easy to market and sell to the customer: hey, you hate this, we are going to do it less. Not only is that great, the customer appreciates it. It always astonishes me, the lengths marketers will go to in order to not talk to a customer. We want to stay in tune with our customers, so we look at social media monitoring and research firms, but have you tried talking to the customers? Have you gone down to the call center and just listened? Have you had lunch with your customers recently?

Companies that do that are very successful, because they can say, this is what our customers want us to do. It's funny, with Apple's latest product release, the new line of MacBooks, there were two things: people really liked magnetic charging, so they brought that back; and nobody liked the Touch Bar, so that went away and the function keys came back, and everyone said, this is amazing. If you listen to your customers and stop doing the things they hate, they will like your products more.

ABOUT TRUST INSIGHTS

Trust Insights was founded in 2017 with a simple mission: to help marketers solve issues with collecting data and measuring their digital marketing efforts so that they can make better decisions with that data and exceed their goals with more automation, fewer errors, and deeper insights. They light up dark data. They help businesses make better decisions, faster. They make the world a better place by helping companies unlock and transform their data into useful analysis, valuable insights, and actionable strategies.




