Q&A with Richard Stevenson, Chief Executive Officer at Red Box

Media 7 | March 25, 2021

Richard Stevenson, Chief Executive Officer at Red Box, is a senior leader with a strong track record of execution, having worked in the Software and Financial Services sectors for over twenty years. He is an effective, customer-focused communicator with proven leadership capabilities and a history of achieving significant revenue growth, both organically and via acquisition, along with experience in organizational strategy and the development of fundraising plans. He has worked with a variety of businesses, ranging from start-ups to an FTSE 100 company, in a number of markets including South Africa, the USA, Hong Kong, and Germany.

As data center footprints grow, it is more important than ever that enterprises are fully utilizing the data they collect.



MEDIA 7: Could you please tell us a little bit about yourself and what made you choose this career path?
RICHARD STEVENSON:
I’ve worked in the Software and Financial Services sectors for over twenty years now, focusing on transformation, innovation, and an open approach to platforms and enterprise architecture. I’ve seen firsthand the challenges faced by organizations locked into legacy or proprietary technologies and dealing with data silos, and the negative impact this has on the customer journey, employee experience and in preventing organizations from innovating in an agile way. This agility is increasingly important for organizations to differentiate in rapidly changing and highly competitive marketplaces.

Throughout my career, I have been drawn to leading start-up organizations where a team can be built around a clear challenge and purpose and deliver true innovation for customers and colleagues, which in turn builds shareholder value. Whilst Red Box is not a start-up, I joined as CEO in 2016 at a time when technological advances in Automatic Speech Recognition (ASR) were starting to make it easier to extract insights from the conversations taking place across an organization, at scale, by turning audio files into structured data sets that can be reasoned over by AI and ML engines.
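To make this audio-to-data step concrete, here is a minimal sketch of flattening ASR output into a structured record set that AI and ML engines could reason over. The segment schema (speaker, start, end, text) and field names are illustrative assumptions, not a Red Box or ASR-vendor format.

```python
# Sketch: flattening hypothetical ASR output into structured records.
# The segment schema (speaker/start/end/text) is an illustrative
# assumption, not any specific vendor's format.

def segments_to_records(call_id, segments):
    """Turn timed ASR segments into flat records suitable for analytics."""
    records = []
    for seg in segments:
        records.append({
            "call_id": call_id,
            "speaker": seg["speaker"],
            "start_s": seg["start"],
            "duration_s": round(seg["end"] - seg["start"], 2),
            "text": seg["text"],
            "word_count": len(seg["text"].split()),
        })
    return records

asr_output = [
    {"speaker": "agent", "start": 0.0, "end": 2.5,
     "text": "Thanks for calling, how can I help?"},
    {"speaker": "customer", "start": 2.8, "end": 5.1,
     "text": "I want to close my account."},
]

records = segments_to_records("call-001", asr_output)
```

Once conversations are in this shape, they can be loaded into whatever warehouse or analytics tool an organization already uses, which is the point of treating voice as structured data.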

Whilst our core capabilities and expertise remain the same, the opportunities for voice have expanded significantly. It has become clear that established call recording practices, which served organizations well enough over the years for compliance and quality purposes, are now holding them back from fully maximizing ROI on speech analytics investments, and Red Box is ideally placed to address this issue and the growing demand for voice data for AI.

My background is in enabling organizations through technology and our mission at Red Box continues that theme by empowering organizations to unlock the value of voice. Our open API philosophy ensures timely access to and control of high-quality voice recordings from across the enterprise which is now critical for any organization looking to leverage voice data as a strategic asset.  We provide customers with full control of and access to their voice data sets and the freedom to tap into that data within their applications of choice.

M7: What is the role of AI in telecommunication? How does Red Box work on voice data analytics and artificial intelligence to unlock the value in the customers’ voice data?
RS:
With each passing year, AI is becoming more mature, and in turn, more viable. When you combine the unique richness of the content of conversations your customers and colleagues are having with AI’s capability to analyze every conversation and combine these insights with other operational data, you can see why there is increasing adoption of speech analytics by organizations seeking to understand the true Voice of the Customer (VoC) and Voice of the Employee (VoE) for experience personalization and optimization. 
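As a deliberately simple illustration of analyzing every conversation for Voice of the Customer signals, the sketch below scores a batch of transcripts and aggregates the results. The tiny word lists stand in for a trained sentiment model; nothing here reflects Red Box's actual analytics.

```python
# Minimal Voice-of-the-Customer sketch: score every transcript and
# aggregate. The word lists are toy placeholders for a real model.

POSITIVE = {"great", "thanks", "helpful", "resolved"}
NEGATIVE = {"frustrated", "cancel", "complaint", "unhappy"}

def score(transcript):
    """Net sentiment: positive-word hits minus negative-word hits."""
    words = transcript.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def voc_summary(transcripts):
    """Aggregate per-call scores into a simple VoC summary."""
    scores = [score(t) for t in transcripts]
    return {
        "calls": len(scores),
        "negative_calls": sum(s < 0 for s in scores),
        "mean_score": sum(scores) / len(scores),
    }

calls = [
    "thanks that was helpful my issue is resolved",
    "i am unhappy and want to cancel",
    "great service thanks",
]
summary = voc_summary(calls)
```

The value comes from joining output like this with other operational data (agent, product line, outcome), which is the combination described above.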

With voice data sets increasingly seen as a strategic asset awash with rich insights, timely access to high-quality audio and transcripts for AI engines to reason over is critical. We provide enterprises with open access to, and control of, high-quality unstructured and structured audio data and the freedom to leverage that data in any application they choose. Conversa, our new enterprise voice platform and the first that is truly open and microservices-based, provides real-time, high-quality audio capture. Working with our partners, such as Deepgram, who provide state-of-the-art, customizable deep-learning speech recognition, we transform this real-time audio stream into highly accurate transcripts for enterprises to leverage in AI, analytics, and compliance solutions. Both the Deepgram and Red Box Conversa platforms have been engineered for flexibility and scalability and offer low compute footprints, market-leading total cost of ownership, and flexible deployment options.


The advancement of technology means opportunities to break into the world of unstructured data are rife, and organizations will be - and should be - looking for faster ways to unlock data and gain insights quickly.



M7: What are some of the barriers to AI adoption?
RS:
Traditionally, there have been significant barriers to AI adoption. Even when people recognize its importance, the complexity of analyzing data requires an end-to-end understanding of the process in order to consolidate the customer journey into a single record. Unfortunately, a lot of businesses have installed expensive technology that hasn't delivered, and a poor previous experience is certainly a factor when it comes to adopting the right AI later down the line.

To avoid disappointment, organizations must focus on specific outcomes and work with vendors that meet specific needs without assuming one provider can do everything. Critically they must also ensure they have complete control of and access to any data they wish to tap into from across the enterprise and in the highest quality possible, as data silos and data quality issues can significantly impact outcomes. Control and access to data also have significant implications for data governance across data residency, data sovereignty, and data localization - concepts particularly relevant when cloud solutions are being considered.

M7: What are your best practices for improving operational performances using voice data?
RS:
As data center footprints grow, it is more important than ever that enterprises are fully utilizing the data they collect. Voice is inherently rich given its ability to convey sentiment, context, intent, emotions, and actions. Enterprises that collect voice data but do not tap into these insights are missing out on information that can provide real organizational intelligence and drive valuable business outcomes.

Secure access to and sovereignty of data is critical in the ‘Data Economy’, as is investing in vendors with an open API approach that gives enterprises flexibility when accessing their data and the capability to leverage voice data into the tools and applications of their choice without tying them to one provider. Indeed, our own research suggests that only 8% of the voice data organizations capture is easily accessible for use in these tools, which is a missed opportunity for those looking to derive insights to help them differentiate in an increasingly competitive marketplace.


Using insights to personalize products and services on a wide scale is changing the fundamentals of competition in many sectors, including banking, education, healthcare and retail.



M7: What do you believe are the top three product challenges in the post COVID-19 era?
RS:
COVID-19 has stirred uncertainty and change for businesses globally. As a result of the pandemic, technology and IT leadership roles have been in the spotlight more than ever before, as companies scrambling to transform operations and customer engagement look to CIOs for new ways of navigating a rapidly changing way of working.

With data analytics identified as the number one tech initiative driving 2021 investments, CIOs face the challenge of unlocking data insights and incorporating AI and analytics while working with a largely remote workforce and still maintaining a human connection. In this environment, a few emerging product marketing trends can be identified to help stay ahead of the game, not just in 2021 but beyond.

With the scale of insights that can now be generated using voice - from customer journey to markets, competitors and products, as well as the use of voice biometric technology - the growing interest businesses are taking is no surprise considering the advantage the insights and operational efficiencies can provide them with against their competitors. Voice analytics, for example, has the ability to make agents responsible for more complex conversations, whilst efficiently automating more mundane processes for a better customer and agent experience.

As well as the time-saving advantage, analytics of unstructured data has the potential to reveal finer levels of distinctions, and micro-segment populations based on the characteristics of individuals. Deep learning models have significant benefits when it comes to market intelligence and enable businesses to quickly scan unstructured data sets and find patterns. Using insights to personalize products and services on a wide scale is changing the fundamentals of competition in many sectors, including banking, education, healthcare, and retail.
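The micro-segmentation idea above can be sketched as a toy rule-based bucketing of customers by characteristics that conversation analytics might surface. The feature names and thresholds here are invented for illustration; a production system would learn segments from data rather than hard-code them.

```python
# Toy micro-segmentation sketch: bucket customers by characteristics
# that conversation analytics might surface. Feature names and
# thresholds are illustrative assumptions, not a production model.

def segment(customer):
    """Assign a customer to a segment using simple ordered rules."""
    if customer["churn_mentions"] > 0:
        return "at-risk"
    if customer["calls_per_month"] >= 4:
        return "high-touch"
    if customer["avg_sentiment"] > 0.5:
        return "promoter"
    return "steady"

customers = [
    {"id": "c1", "churn_mentions": 2, "calls_per_month": 1, "avg_sentiment": -0.4},
    {"id": "c2", "churn_mentions": 0, "calls_per_month": 6, "avg_sentiment": 0.1},
    {"id": "c3", "churn_mentions": 0, "calls_per_month": 1, "avg_sentiment": 0.8},
]

segments = {c["id"]: segment(c) for c in customers}
```

In practice the "finer levels of distinction" come from replacing rules like these with clustering or deep learning over many more features, but the output shape — a segment label per individual — is the same.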

M7: How has the pandemic changed the perception of voice data?
RS:
Voice is fundamental to communication of all kinds and, naturally, humans are well tuned to understanding it and deriving meaning from it. Customer experience, long a core enterprise priority, became an even greater imperative during the pandemic as companies scrambled to find alternative ways to engage clients, conduct business, and respond to changing requirements as COVID-19 forced business lockdowns and created significant socioeconomic stress.

In all of this, I believe voice is still the best alternative for connecting where face-to-face communication is not an option. IDG found that 81% of CIOs have already confirmed they are implementing new technology to enable better customer experiences and interactions, with 65% of companies now leaning on technology to provide an alternative to face-to-face communications. Even if companies do not rely solely on a call centre, they have some way of communicating with customers or clients, and I believe voice is still the richest and most personal form of communication. Beyond external conversations, recording and transcribing HR meetings can also help organizations understand their internal culture, provided privacy concerns are addressed by clear policies and procedures.

2021 is already proving to be the year that data separates organizations from their competitors. The ability to unlock, analyze, and act on data will become foundational to growth. As the world leans towards hybrid working models and more communication via technology platforms rather than in person, I believe we will see a much greater reliance on voice continue, thanks to a greater demand for empathetic human connections.

M7: Why do you think data will be the biggest differentiator when it comes to competitors?
RS:
Currently, it’s estimated that around 90% of all data generated in the world is unstructured - video and audio are examples of this. The small amount of data used by organizations is just the tip of the iceberg. Of the large mass of unstructured data out there, voice is far and away the biggest opportunity, in that it is sizable but also largely untapped. In the past, the technology to tap into this data in a scalable and meaningful way just didn’t exist, which meant a large team would be required to unlock and analyze conversations to extract insights.

The good news for organizations is that the potential to access this kind of data has changed drastically in the last decade. The advancement of technology means opportunities to break into the world of unstructured data are rife, and organizations will be - and should be - looking for faster ways to unlock data and gain insights quickly. According to a 2019 Deloitte survey, 55% of business leaders said they were using or planning to use voice.

ABOUT RED BOX

Red Box is a leading dedicated voice specialist with over 30 years' experience in empowering organizations to capture, secure, and unlock the value of enterprise-wide voice. Conversa by Red Box is the next-generation and first truly open, microservices-based enterprise voice platform. It provides customers with open access to and control over captured voice and media, resilient capture of high-quality real-time data from across the enterprise, the freedom to use that data in any application, and a market-leading TCO.

Red Box is trusted by leading organizations across financial services, contact center, government, and public safety sectors and we capture and secure millions of calls daily for thousands of customers around the world.

More C-Suite on deck

Listen to your customers, advises Christopher Penn, Co-Founder and Chief Data Scientist at TrustInsights.ai

Media 7 | November 16, 2021

Christopher Penn, Co-Founder and Chief Data Scientist at TrustInsights.ai shared his insights with us on how marketers can make better use of data, attribution models and natural language processing to promote conversions and increase customer engagement. Read on to find out about his three-part strategy for successful marketing campaigns.

Read More

‘Advances in AI and Machine Learning are not scalable without proper data governance program in place’ says Peggy Tsai, Chief Data Officer at BigID

Media 7 | March 8, 2022

Peggy Tsai, Chief Data Officer at BigID talks about BigID’s data intelligence platform and its integrated applications. Read on to know more about her thoughts on GDPR compliance, the future data intelligence landscape and more in her latest interview with Media 7.

Read More

‘Data teams are critical in defining and driving business growth metrics’ says Gaurav Rewari, CEO of Mode.

Media 7 | March 4, 2022

Gaurav Rewari, CEO of Mode elaborates on his role as a CEO of Mode Analytics, the most comprehensive platform for collaborative Business Intelligence and Interactive Data Science. Read on to know more about his thoughts on digitization and Mode's brand-new visualization tool, Visual Explorer.

Read More


Related News

Big Data Management

Kinetica Redefines Real-Time Analytics with Native LLM Integration

Kinetica | September 22, 2023

Kinetica, a renowned speed layer for generative AI and real-time analytics, has recently unveiled a native Large Language Model (LLM) integrated with Kinetica's innovative architecture. This empowers users to perform ad-hoc data analysis on real-time, structured data with the ease of natural language, all without the need for external API calls and without data ever leaving the secure confines of the customer's environment. This significant milestone follows Kinetica's prior innovation as the first analytic database to integrate with OpenAI.

Amid the LLM fervor, enterprises and government agencies are actively seeking inventive ways to automate various business functions while safeguarding sensitive information that could be exposed through fine-tuning or prompt augmentation. Public LLMs, exemplified by OpenAI's GPT 3.5, raise valid concerns regarding privacy and security. These concerns are effectively mitigated through native offerings, seamlessly integrated into the Kinetica deployment, and securely nestled within the customer's network perimeter.

Beyond its superior security features, Kinetica's native LLM is finely tuned to the syntax and industry-specific data definitions, spanning domains such as telecommunications, automotive, financial services, logistics, and more. This tailored approach ensures the generation of more reliable and precise SQL queries. Notably, this capability extends beyond conventional SQL, enabling efficient handling of intricate tasks essential for enhanced decision-making capabilities, particularly for time-series, graph, and spatial inquiries. Kinetica's approach to fine-tuning places emphasis on optimizing SQL generation to deliver consistent and accurate results, in stark contrast to more conventional methods that prioritize creativity but yield diverse and unpredictable responses. This steadfast commitment to reliable SQL query outcomes offers businesses and users the peace of mind they deserve.
Illustrating the practical impact of this innovation, the US Air Force has been collaborating closely with Kinetica to leverage advanced analytics on sensor data, enabling swift identification and response to potential threats. This partnership contributes significantly to the safety and security of the national airspace system. The US Air Force now employs Kinetica's embedded LLM to detect airspace threats and anomalies using natural language. Kinetica's database excels in converting natural language queries into SQL, delivering responses in mere seconds, even when faced with complex or unfamiliar questions. Furthermore, Kinetica seamlessly combines various analytics modes, including time series, spatial, graph, and machine learning, thereby expanding the range of queries it can effectively address.

What truly enables Kinetica to excel in conversational query processing is its ingenious use of native vectorization. In a vectorized query engine, data is organized into fixed-size blocks called vectors, enabling parallel query operations on these vectors. This stands in contrast to traditional approaches that process individual data elements sequentially. The result is significantly accelerated query execution, all within a smaller compute footprint. This remarkable speed is made possible by the utilization of GPUs and the latest CPU advancements, which enable simultaneous calculations on multiple data elements, thereby greatly enhancing the processing speed of computation-intensive tasks across multiple cores or threads.

About Kinetica

Kinetica is a pioneering company at the forefront of real-time analytics and is the creator of the groundbreaking real-time analytical database specially designed for sensor and machine data. The company offers native vectorized analytics capabilities in the fields of generative AI, spatial analysis, time-series modeling, and graph processing.
A distinguished array of the world's largest enterprises spanning diverse sectors, including the public sector, financial services, telecommunications, energy, healthcare, retail, and automotive industries, entrusts Kinetica to forge novel solutions in the realms of time-series data and spatial analysis. The company's clientele includes various illustrious organizations such as the US Air Force, Citibank, Ford, T-Mobile, and numerous others.
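The block-wise evaluation described above — applying a predicate to fixed-size vectors rather than one row at a time — can be sketched structurally in plain Python. Real engines such as Kinetica run each block in parallel on SIMD units or GPUs; this toy loop only imitates the data layout, and the block size is an arbitrary choice for illustration.

```python
# Structural sketch of block-wise ("vectorized") query evaluation:
# the predicate is evaluated over a whole fixed-size block per step
# rather than row by row. Real engines execute each block in parallel
# on SIMD units or GPUs; this toy version only mimics the layout.

BLOCK_SIZE = 4  # arbitrary illustrative block size

def blocks(data, size):
    """Yield the data in fixed-size chunks ("vectors")."""
    for i in range(0, len(data), size):
        yield data[i:i + size]

def vectorized_filter(data, predicate):
    """Filter by building a per-block mask, then gathering survivors."""
    out = []
    for block in blocks(data, BLOCK_SIZE):
        mask = [predicate(x) for x in block]  # one "vector op" per block
        out.extend(x for x, keep in zip(block, mask) if keep)
    return out

values = [3, 9, 1, 7, 12, 5, 8, 2, 10]
result = vectorized_filter(values, lambda v: v > 6)
```

The payoff in a real engine is that the per-block mask step maps onto hardware that evaluates many elements simultaneously, which is where the speedup over element-at-a-time processing comes from.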

Read More

Business Intelligence

Oracle's New Next-generation Platform Transforms Business Insights

Oracle | September 25, 2023

Oracle introduces a data, analytics, and AI platform for Fusion Cloud Applications to enhance business outcomes. The platform offers 360-degree Data Models, Prescriptive AI/ML Models, Rich Interactive Analytics, and Intelligent Applications. Oracle plans to extend the platform to NetSuite and other industry applications, enriching analytics offerings.

Oracle has recently unveiled the Fusion Data Intelligence Platform, a cutting-edge data, analytics, and AI solution designed to empower Oracle Fusion Cloud Applications users to enhance their business outcomes through the fusion of data-driven insights and intelligent decision-making. This groundbreaking platform, which builds upon the foundations of the Oracle Fusion Analytics Warehouse product, offers business data-as-a-service with automated data pipelines, comprehensive 360-degree data models for critical business entities, interactive analytics, AI/ML models, and intelligent applications. These ready-to-use capabilities run on top of the Oracle Cloud Infrastructure (OCI) Data Lakehouse services, including Oracle Autonomous Database and Oracle Analytics Cloud, thereby facilitating complete extensibility across data, analytics, AI/ML, and application layers.

The Oracle Fusion Data Intelligence Platform presents the following suite of pre-built capabilities that are designed to empower Oracle Fusion Cloud Applications users to unlock the full potential of their data:

360-Degree Data Models: This will equip business users with a cohesive and comprehensible representation of their organizational data, allowing them to discern the intricate relationships between data and business processes. By providing a range of conformed data models based on Oracle Fusion Cloud Applications data and other data sources, this platform offers a 360-degree view of various facets of a business, including customers, accounts, products, suppliers, and employees.

Prescriptive AI/ML Models: Leveraging pre-configured AI/ML models, such as workforce skills assessment and customer payment forecasting, organizations can solve specific business problems by automating labor-intensive tasks, freeing up resources for strategic endeavors. Furthermore, it empowers organizations to rapidly analyze substantial datasets, uncovering invaluable insights and patterns that can drive business growth and efficiency.

Rich Interactive Analytics: Business users can seamlessly explore and visualize their data using pre-built dashboards, reports, and key performance indicators (KPIs). Additionally, Analytics Cloud features like natural language query, auto insights, and mobile applications allow quick access to data and insights.

Intelligent Applications: These applications go beyond providing insights, offering intelligent recommendations based on pre-existing data models, AI/ML models, and analytics content. They enable organizations to make informed decisions swiftly, ultimately improving business outcomes.

The Fusion Data Intelligence Platform is a pivotal step in a long-term vision to transition from data and analytics to actionable decisions that drive business success. Importantly, this platform will extend its reach beyond Oracle Fusion Cloud Applications, with plans to offer the same foundational platform for NetSuite and across various Oracle industry applications, such as healthcare, financial services, and utilities, to facilitate cross-domain insights. The Fusion Data Intelligence Platform includes an extensive portfolio of ready-to-use analytics for Oracle Fusion Cloud Enterprise Resource Planning (ERP), Oracle Fusion Cloud Human Capital Management (HCM), Oracle Fusion Cloud Supply Chain & Manufacturing (SCM), and Oracle Fusion Cloud Customer Experience (CX).

These analytics offerings have been further enriched with the following additions:

Oracle Fusion ERP Analytics: The introduction of Accounting Hub analytics empowers finance teams to create a system of insights for accounting data sourced from Oracle Accounting Hub sub-ledger applications.

Oracle Fusion SCM Analytics: New Manufacturing analytics provide manufacturers with timely insights into work order performance, enhancing shop floor efficiency by rapidly identifying anomalies and continually optimizing plan-to-produce processes by connecting insights across supply chain data.

Oracle Fusion HCM Analytics: The addition of Inferred Skills, Payroll Costing, and Continuous Listening analytics equips organizational leaders with integrated workforce insights, covering employee skills, payroll trends and anomalies, and the efficacy of a continuous listening strategy at any given point in time.

Oracle Fusion CX Analytics: The new Quote-to-Cash analytics extend the analysis beyond the lead-to-opportunity pipeline by offering insights into how the price, contract, and quote process influences the overall customer experience.

Read More

Big Data Management

Microsoft's AI Data Exposure Highlights Challenges in AI Integration

Microsoft | September 22, 2023

AI models rely heavily on vast data volumes for their functionality, thus increasing risks associated with mishandling data in AI projects. Microsoft's AI research team accidentally exposed 38 terabytes of private data on GitHub. Many companies feel compelled to adopt generative AI but lack the expertise to do so effectively.

Artificial intelligence (AI) models are renowned for their enormous appetite for data, making them among the most data-intensive computing platforms in existence. While AI holds the potential to revolutionize the world, it is utterly dependent on the availability and ingestion of vast volumes of data.

An alarming incident involving Microsoft's AI research team recently highlighted the immense data exposure risks inherent in this technology. The team inadvertently exposed a staggering 38 terabytes of private data when publishing open-source AI training data on the cloud-based code hosting platform GitHub. This exposed data included a complete backup of two Microsoft employees' workstations, containing highly sensitive personal information such as private keys, passwords to internal Microsoft services, and over 30,000 messages from 359 Microsoft employees. The exposure was a result of an accidental configuration, which granted "full control" access instead of "read-only" permissions. This oversight meant that potential attackers could not only view the exposed files but also manipulate, overwrite, or delete them.

Although a crisis was narrowly averted in this instance, it serves as a glaring example of the new risks organizations face as they integrate AI more extensively into their operations. With staff engineers increasingly handling vast amounts of specialized and sensitive data to train AI models, it is imperative for companies to establish robust governance policies and educational safeguards to mitigate security risks. Training specialized AI models necessitates specialized data.
As organizations of all sizes embrace the advantages AI offers in their day-to-day workflows, IT, data, and security teams must grasp the inherent exposure risks associated with each stage of the AI development process. Open data sharing plays a critical role in AI training, with researchers gathering and disseminating extensive amounts of both external and internal data to build the necessary training datasets for their AI models. However, the more data that is shared, the greater the risk if it is not handled correctly, as evidenced by the Microsoft incident.

AI, in many ways, challenges an organization's internal corporate policies like no other technology has done before. To harness AI tools effectively and securely, businesses must first establish a robust data infrastructure to avoid the fundamental pitfalls of AI. Securing the future of AI requires a nuanced approach.

Despite concerns about AI's potential risks, organizations should be more concerned about the quality of AI software than the technology turning rogue. PYMNTS Intelligence's research indicates that many companies are uncertain about their readiness for generative AI but still feel compelled to adopt it. A substantial 62% of surveyed executives believe their companies lack the expertise to harness the technology effectively, according to 'Understanding the Future of Generative AI,' a collaboration between PYMNTS and AI-ID.

The rapid advancement of computing power and cloud storage infrastructure has reshaped the business landscape, setting the stage for data-driven innovations like AI to revolutionize business processes. While tech giants or well-funded startups primarily produce today's AI models, computing power costs are continually decreasing. In a few years, AI models may become so advanced that everyday consumers can run them on personal devices at home, akin to today's cutting-edge platforms.
This juncture signifies a tipping point, where the ever-increasing zettabytes of proprietary data produced each year must be addressed promptly. If not, the risks associated with future innovations will scale up in sync with their capabilities.
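The misconfiguration at the heart of the incident above — a shared link granting "full control" instead of "read-only" — is the kind of mistake a simple pre-publication audit can catch. The sketch below is a generic, stdlib-only illustration; it does not model Azure's actual SAS token format, and the scope names are assumptions for the example.

```python
# Generic sketch of a pre-publication permission audit: flag any shared
# link whose granted scopes exceed read-only. Illustrative only; this
# does not model Azure's real SAS token format.

READ_ONLY = {"read", "list"}  # assumed scope names for illustration

def audit(links):
    """Return names of links granting more than read-only access."""
    return [
        name for name, scopes in links.items()
        if not set(scopes) <= READ_ONLY
    ]

shared_links = {
    "training-data": ["read", "list"],
    "workstation-backup": ["read", "write", "delete"],  # over-privileged
}

violations = audit(shared_links)
```

Running a check like this in CI before any data-sharing link goes public is one concrete form the "governance policies and educational safeguards" mentioned above can take.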

Read More

Big Data Management

Kinetica Redefines Real-Time Analytics with Native LLM Integration

Kinetica | September 22, 2023

Kinetica, a renowned speed layer for generative AI and real-time analytics, has recently unveiled a native Large Language Model (LLM) integrated with Kinetica's innovative architecture. This empowers users to perform ad-hoc data analysis on real-time, structured data with the ease of natural language, all without the need for external API calls and without data ever leaving the secure confines of the customer's environment. This significant milestone follows Kinetica's prior innovation as the first analytic database to integrate with OpenAI. Amid the LLM fervor, enterprises and government agencies are actively seeking inventive ways to automate various business functions while safeguarding sensitive information that could be exposed through fine-tuning or prompt augmentation. Public LLMs, exemplified by OpenAI's GPT 3.5, raise valid concerns regarding privacy and security. These concerns are effectively mitigated through native offerings, seamlessly integrated into the Kinetica deployment, and securely nestled within the customer's network perimeter. Beyond its superior security features, Kinetica's native LLM is finely tuned to the syntax and industry-specific data definitions, spanning domains such as telecommunications, automotive, financial services, logistics, and more. This tailored approach ensures the generation of more reliable and precise SQL queries. Notably, this capability extends beyond conventional SQL, enabling efficient handling of intricate tasks essential for enhanced decision-making capabilities, particularly for time-series, graph, and spatial inquiries. Kinetica's approach to fine-tuning places emphasis on optimizing SQL generation to deliver consistent and accurate results, in stark contrast to more conventional methods that prioritize creativity but yield diverse and unpredictable responses. This steadfast commitment to reliable SQL query outcomes offers businesses and users the peace of mind they deserve. 
Illustrating the practical impact of this innovation, the US Air Force has been working closely with Kinetica to apply advanced analytics to sensor data, enabling swift identification of and response to potential threats, a partnership that contributes to the safety and security of the national airspace system. The US Air Force now employs Kinetica's embedded LLM to detect airspace threats and anomalies using natural language. Kinetica's database converts natural-language queries into SQL and delivers responses in seconds, even for complex or unfamiliar questions. It also combines multiple analytics modes, including time series, spatial, graph, and machine learning, broadening the range of queries it can address.

What enables Kinetica to excel at conversational query processing is its use of native vectorization. In a vectorized query engine, data is organized into fixed-size blocks called vectors, and query operations run on these vectors in parallel, in contrast to traditional engines that process individual data elements sequentially. The result is significantly faster query execution within a smaller compute footprint. This speed comes from GPUs and recent CPU advancements that perform simultaneous calculations on multiple data elements, greatly accelerating computation-intensive tasks across multiple cores or threads.

About Kinetica

Kinetica is a pioneer in real-time analytics and the creator of a real-time analytical database designed for sensor and machine data. The company offers native vectorized analytics capabilities for generative AI, spatial analysis, time-series modeling, and graph processing.
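The block-at-a-time execution model behind vectorized query engines can be sketched in a few lines of plain Python. The block size and data are illustrative; in a real engine each block's work would be dispatched to SIMD lanes or GPU threads rather than summed in an interpreter loop.

```python
# Sketch of vectorized execution: operate on fixed-size blocks
# ("vectors") of values per step instead of one element at a time.

BLOCK_SIZE = 4

def sum_sequential(values):
    """Traditional row-at-a-time processing: one element per step."""
    total = 0
    for v in values:
        total += v
    return total

def sum_vectorized(values):
    """Process one BLOCK_SIZE-wide vector per step; each block's work
    could run in parallel on SIMD units or GPU threads."""
    total = 0
    for i in range(0, len(values), BLOCK_SIZE):
        block = values[i:i + BLOCK_SIZE]   # one fixed-size vector
        total += sum(block)                # whole block in one operation
    return total

data = list(range(10))
assert sum_sequential(data) == sum_vectorized(data) == 45
```

Both functions compute the same result; the vectorized version takes far fewer steps over the data, which is where the speedup comes from on hardware that can process a whole block at once.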
A distinguished array of the world's largest enterprises spanning diverse sectors, including the public sector, financial services, telecommunications, energy, healthcare, retail, and automotive industries, entrusts Kinetica to forge novel solutions in the realms of time-series data and spatial analysis. The company's clientele includes various illustrious organizations such as the US Air Force, Citibank, Ford, T-Mobile, and numerous others.

Read More

Business Intelligence

Oracle's New Next-generation Platform Transforms Business Insights

Oracle | September 25, 2023

Oracle introduces a data, analytics, and AI platform for Fusion Cloud Applications to enhance business outcomes. The platform offers 360-degree data models, prescriptive AI/ML models, rich interactive analytics, and intelligent applications. Oracle plans to extend the platform to NetSuite and other industry applications, enriching its analytics offerings.

Oracle has unveiled the Fusion Data Intelligence Platform, a data, analytics, and AI solution designed to help Oracle Fusion Cloud Applications users improve business outcomes by combining data-driven insights with intelligent decision-making. The platform, which builds on the Oracle Fusion Analytics Warehouse product, offers business data-as-a-service with automated data pipelines, comprehensive 360-degree data models for critical business entities, interactive analytics, AI/ML models, and intelligent applications. These ready-to-use capabilities run on Oracle Cloud Infrastructure (OCI) Data Lakehouse services, including Oracle Autonomous Database and Oracle Analytics Cloud, enabling full extensibility across the data, analytics, AI/ML, and application layers.

The Oracle Fusion Data Intelligence Platform offers the following pre-built capabilities to help Oracle Fusion Cloud Applications users unlock the full potential of their data:

360-Degree Data Models: These equip business users with a cohesive, comprehensible representation of their organizational data, allowing them to discern the relationships between data and business processes. A range of conformed data models, based on Oracle Fusion Cloud Applications data and other sources, provides a 360-degree view of customers, accounts, products, suppliers, and employees.
Prescriptive AI/ML Models: Pre-configured AI/ML models, such as workforce skills assessment and customer payment forecasting, let organizations solve specific business problems by automating labor-intensive tasks, freeing resources for strategic work. They also enable rapid analysis of substantial datasets, uncovering insights and patterns that drive growth and efficiency.

Rich Interactive Analytics: Business users can explore and visualize their data with pre-built dashboards, reports, and key performance indicators (KPIs). Analytics Cloud features such as natural language query, auto insights, and mobile applications give quick access to data and insights.

Intelligent Applications: These applications go beyond providing insights by offering intelligent recommendations based on pre-existing data models, AI/ML models, and analytics content, enabling organizations to make informed decisions swiftly and improve business outcomes.

The Fusion Data Intelligence Platform is a pivotal step in a long-term vision to move from data and analytics to actionable decisions that drive business success. The platform will also extend beyond Oracle Fusion Cloud Applications: Oracle plans to offer the same foundational platform for NetSuite and across its industry applications, such as healthcare, financial services, and utilities, to enable cross-domain insights. The platform includes an extensive portfolio of ready-to-use analytics for Oracle Fusion Cloud Enterprise Resource Planning (ERP), Oracle Fusion Cloud Human Capital Management (HCM), Oracle Fusion Cloud Supply Chain & Manufacturing (SCM), and Oracle Fusion Cloud Customer Experience (CX).
These analytics offerings have been enriched with the following additions:

Oracle Fusion ERP Analytics: New Accounting Hub analytics let finance teams build a system of insights for accounting data sourced from Oracle Accounting Hub sub-ledger applications.

Oracle Fusion SCM Analytics: New Manufacturing analytics give manufacturers timely insight into work order performance, improving shop floor efficiency by rapidly identifying anomalies and continually optimizing plan-to-produce processes by connecting insights across supply chain data.

Oracle Fusion HCM Analytics: The addition of Inferred Skills, Payroll Costing, and Continuous Listening analytics equips organizational leaders with integrated workforce insights covering employee skills, payroll trends and anomalies, and the efficacy of a continuous listening strategy at any point in time.

Oracle Fusion CX Analytics: New Quote-to-Cash analytics extend analysis beyond the lead-to-opportunity pipeline by offering insights into how the price, contract, and quote process influences the overall customer experience.

Read More

Big Data Management

Microsoft's AI Data Exposure Highlights Challenges in AI Integration

Microsoft | September 22, 2023

AI models rely heavily on vast data volumes, which increases the risks of mishandling data in AI projects. Microsoft's AI research team accidentally exposed 38 terabytes of private data on GitHub. Many companies feel compelled to adopt generative AI but lack the expertise to do so effectively.

Artificial intelligence (AI) models are renowned for their enormous appetite for data, making them among the most data-intensive computing platforms in existence. While AI holds the potential to revolutionize the world, it is utterly dependent on the availability and ingestion of vast volumes of data.

An alarming incident involving Microsoft's AI research team recently highlighted the data exposure risks inherent in this technology. The team inadvertently exposed 38 terabytes of private data while publishing open-source AI training data on the cloud-based code hosting platform GitHub. The exposed data included a complete backup of two Microsoft employees' workstations, containing highly sensitive personal information such as private keys, passwords to internal Microsoft services, and over 30,000 messages from 359 Microsoft employees. The exposure resulted from an accidental misconfiguration that granted "full control" access instead of "read-only" permissions, meaning potential attackers could not only view the exposed files but also manipulate, overwrite, or delete them.

Although a crisis was narrowly averted in this instance, the episode is a glaring example of the new risks organizations face as they integrate AI more extensively into their operations. With staff engineers handling ever larger amounts of specialized and sensitive data to train AI models, companies must establish robust governance policies and educational safeguards to mitigate security risks. Training specialized AI models necessitates specialized data.
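One governance safeguard the incident suggests is a least-privilege check before any sharing credential is published: reject anything that grants more than read access. The sketch below uses a hypothetical permission model in plain Python, not Microsoft's actual storage-token API, to show the shape of such a guard.

```python
# Illustrative pre-publication guard: before a sharing token goes out,
# verify it grants read-only access. The permission names are
# hypothetical, not a real cloud provider's API.

READ_ONLY = {"read", "list"}

def is_read_only(granted: set) -> bool:
    """True if the token grants nothing beyond read/list."""
    return granted <= READ_ONLY

def publish_token(granted: set) -> str:
    """Refuse to publish any token with write-capable permissions."""
    if not is_read_only(granted):
        extra = sorted(granted - READ_ONLY)
        raise PermissionError(f"token grants write access: {extra}")
    return "token published"

print(publish_token({"read"}))                # allowed
try:
    publish_token({"read", "write", "delete"})  # the misconfigured case
except PermissionError as err:
    print("blocked:", err)
```

A check like this, run automatically in the publication pipeline, would have flagged a "full control" token before it ever reached a public repository.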
As organizations of all sizes embrace AI in their day-to-day workflows, IT, data, and security teams must grasp the exposure risks associated with each stage of the AI development process. Open data sharing plays a critical role in AI training: researchers gather and disseminate extensive amounts of both external and internal data to build training datasets for their models. But the more data that is shared, the greater the risk if it is not handled correctly, as the Microsoft incident shows.

AI, in many ways, challenges an organization's internal corporate policies like no technology before it. To harness AI tools effectively and securely, businesses must first establish a robust data infrastructure and avoid the fundamental pitfalls of AI. Securing the future of AI requires a nuanced approach.

Despite concerns about AI's potential risks, organizations should worry more about the quality of AI software than about the technology turning rogue. PYMNTS Intelligence's research indicates that many companies are uncertain about their readiness for generative AI but still feel compelled to adopt it: 62% of surveyed executives believe their companies lack the expertise to harness the technology effectively, according to 'Understanding the Future of Generative AI,' a collaboration between PYMNTS and AI-ID.

The rapid advancement of computing power and cloud storage infrastructure has reshaped the business landscape, setting the stage for data-driven innovations like AI to transform business processes. While today's AI models are produced mainly by tech giants and well-funded startups, computing costs are continually decreasing. In a few years, AI models may become so advanced that everyday consumers can run them on personal devices at home, matching today's cutting-edge platforms.
This juncture signifies a tipping point, where the ever-increasing zettabytes of proprietary data produced each year must be addressed promptly. If not, the risks associated with future innovations will scale up in sync with their capabilities.

Read More

Spotlight

Red Box

Red Box is a leading dedicated voice specialist with over 30 years' experience in empowering organizations to capture, secure, and unlock the value of enterprise-wide voice. Conversa by Red Box is the next-generation and first truly open microservices-based enterprise voice platform. It provides cu...

Resources

Business Intelligence, Big Data Management, Big Data

Harnessing the Power of Big Data: Events to attend in 2023

Article

Business Intelligence, Big Data Management

Are Predictive Analytics Truly Predictive?

Whitepaper

Business Intelligence, Big Data Management, Data Science

Tackling climate change with data science and AI

Whitepaper
