Business Intelligence, Big Data Management, Data Architecture

Kinetica Announces Conversational Query with First OpenAI ChatGPT Integration for an Analytic Database

Kinetica today announced the industry's first analytic database to integrate with ChatGPT, ushering in 'conversational querying.' Users can ask any question of their proprietary data, even complex questions that were not anticipated in advance, and receive an answer in seconds. The combination of ChatGPT's front-end interface, which converts natural language to Structured Query Language (SQL), and Kinetica's analytic database, purpose-built for true ad-hoc querying at speed and scale, provides a more intuitive and interactive way of analyzing complex data sets. Together, ChatGPT and Kinetica remove the limits of data exploration and unlock the full potential of an organization's data.

In today's fast-paced world, people expect instant gratification and rapid results, and ChatGPT's ability to deliver on this expectation is a major factor in its popularity. While ChatGPT can convert natural language to SQL, the speed of response for data analytics questions is dependent on the underlying data platform of the organization. Conventional analytic databases require extensive data engineering, indexing, and tuning to enable fast queries, which means the question must be known in advance. If the questions are not known in advance, a query may take hours to run or not complete at all.

The Kinetica database provides answers in seconds without the need for pre-engineering data. What makes it possible for Kinetica to deliver on conversational query is the use of native vectorization. In a vectorized query engine, data is stored in fixed-size blocks called vectors, and query operations are performed on these vectors in parallel, rather than on individual data elements. This allows the query engine to process multiple data elements simultaneously, resulting in radically faster query execution on a smaller compute footprint. Vectorization is made possible by GPUs and the latest advancements in CPUs, which perform simultaneous calculations on multiple data elements, greatly accelerating computation-intensive tasks by allowing them to be processed in parallel across multiple cores or threads.
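The effect of vectorized, block-at-a-time execution can be illustrated outside of any particular database. Below is a minimal sketch in Python, using NumPy purely as a stand-in for a vectorized engine; it is not Kinetica code, only a demonstration of why operating on whole blocks of values beats element-by-element processing:

```python
import time
import numpy as np

# One million float64 values, analogous to a single column of data.
values = np.random.rand(1_000_000)

# Scalar processing: operate on one element at a time.
start = time.perf_counter()
total_scalar = 0.0
for v in values:
    total_scalar += v * v
scalar_time = time.perf_counter() - start

# Vectorized processing: the multiply and sum run over whole blocks
# of elements at once (SIMD instructions under the hood).
start = time.perf_counter()
total_vector = float(np.sum(values * values))
vector_time = time.perf_counter() - start

print(f"scalar: {scalar_time:.3f}s, vectorized: {vector_time:.3f}s")
```

On typical hardware the vectorized path runs one to two orders of magnitude faster, which is the same principle a vectorized query engine applies across many cores, threads, or GPU lanes.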

Further, Kinetica converges multiple modes of analytics, such as time series, spatial, graph, and machine learning, which broadens the types of questions that can be answered, for example, "How can we improve the customer experience considering factors such as seasonality, service locations and relationships?" Kinetica also ingests massive amounts of streaming data in real time to ensure answers reflect the most up-to-date information, for example, "What is the real-time status of our inventory levels and how can we reroute active delivery vehicles to reduce the chances of products being out of stock?"

"While ChatGPT integration with analytic databases will become table stakes for vendors in 2023, the real value will come from rapid insights to complex ad-hoc questions," said Nima Negahban, Cofounder and CEO, Kinetica. "Enterprise users will soon expect the same lightning-fast response times to random text-based questions of their data as they currently do for questions against data in the public domain with ChatGPT."

With ChatGPT integrated with Kinetica, querying becomes more interactive and conversational. Instead of writing complex SQL queries or navigating elaborate user interfaces, users can simply ask questions in natural language. ChatGPT understands the user's intent and generates queries based on their questions, and the user can then ask follow-up questions or provide additional context.
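The general pattern behind this interaction can be sketched in a few lines. The example below is illustrative only: it assumes the OpenAI Python client and a throwaway SQLite table, and the prompt, schema, and helper function are hypothetical rather than Kinetica's actual integration:

```python
import sqlite3
from openai import OpenAI  # assumes the openai package and an API key in the environment

client = OpenAI()

SCHEMA = "CREATE TABLE orders (id INTEGER, region TEXT, amount REAL, order_date TEXT)"

def ask(question: str, conn: sqlite3.Connection) -> list[tuple]:
    """Translate a natural-language question to SQL, then execute it."""
    prompt = (
        f"Given this schema:\n{SCHEMA}\n"
        f"Write a single SQLite query answering: {question}\n"
        "Return only the SQL, with no markdown formatting."
    )
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    sql = reply.choices[0].message.content.strip()
    if sql.startswith("```"):
        sql = sql.strip("`").removeprefix("sql").strip()
    # A production system would validate the generated SQL before running it.
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?, ?)",
    [(1, "west", 120.0, "2023-01-05"), (2, "east", 80.0, "2023-01-09")],
)
print(ask("What is total revenue by region?", conn))
```

Follow-up questions work by appending the previous exchange to the messages list so the model can refine its last query with the added context.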

"Kinetica plus ChatGPT makes complex, ad-hoc queries truly interactive, avoiding the 'interactus interruptus' of other integrations between large language models and databases," said Amit Vij, Cofounder and President, Kinetica. "Generative AI is a killer app for data analytics."

Conversational querying has several benefits, including:

  • Ease of Use: Allows users to ask questions using their own words and phrasing, making it easier to express questions in a natural way. This approach removes the need to write and debug complex SQL queries, making the system more intuitive and accessible for a wider range of users. This broad data accessibility ultimately leads to more data-driven decisions.
  • Increased Productivity: Conversational querying increases productivity by providing rapid access to information. Users get immediate answers to their questions without waiting for long-running queries or data pipelines to be built. This saves time and improves overall efficiency.
  • Improved Data Insights: Conversational querying can help users uncover new insights and patterns in their data. By asking natural-language questions and receiving immediate answers, users can discover unexpected correlations and relationships that may not have been immediately apparent or that would have been too tedious to uncover through traditional querying methods. This leads to improved business outcomes and better decision-making overall.

Kinetica with ChatGPT is available now for free in Kinetica Cloud.

About Kinetica

Kinetica is the creator of the only distributed, fully vectorized database for complex real-time analytic workloads, providing unrivaled scale and speed. Many of the world's largest companies across the public sector, financial services, telecommunications, energy, healthcare, retail, automotive and beyond rely on Kinetica, including the US Air Force, Ford, Citibank, T-Mobile, and others. Kinetica is a privately held company, backed by leading global venture capital firms Canvas Ventures, Citi Ventures, GreatPoint Ventures, and Meritech Capital Partners. Kinetica has a rich partner ecosystem, including AWS, Microsoft, NVIDIA, Intel, Dell, Tableau, and Oracle. For more information and to try Kinetica, visit kinetica.com or follow us on LinkedIn and Twitter.

Related News

Big Data

Teradata helps customers accelerate AI-led initiatives with new ModelOps capabilities in ClearScape Analytics

iTWire | September 27, 2023

Teradata today announced new enhancements to its leading AI/ML (artificial intelligence/machine learning) model management software in ClearScape Analytics (i.e., ModelOps) to meet the growing demand from organisations across the globe for advanced analytics and AI. These new features – including "no code" capabilities, as well as robust new governance and AI "explainability" controls – enable businesses to accelerate, scale, and optimise AI/ML deployments to quickly generate business value from their AI investments.

Deploying AI models into production is notoriously challenging. A recent O'Reilly survey on AI adoption in the enterprise found that only 26% of respondents currently have models deployed in production, with many companies stating they have yet to see a return on their AI investments. This is compounded by the recent excitement around generative AI and the pressure many executives are under to implement it within their organisation, according to a recent survey by IDC, sponsored by Teradata.

ModelOps in ClearScape Analytics makes it easier than ever to operationalise AI investments by addressing many of the key challenges that arise when moving from model development to deployment in production: end-to-end model lifecycle management, automated deployment, governance for trusted AI, and model monitoring. The governed ModelOps capability is designed to supply the framework to manage, deploy, monitor, and maintain analytic outcomes. It includes capabilities such as auditing datasets, code tracking, model approval workflows, monitoring model performance, and alerting when models are not performing well.

"We stand on the precipice of a new AI-driven era, which promises to usher in frontiers of creativity, productivity, and innovation. Teradata is uniquely positioned to help businesses take advantage of advanced analytics, AI, and especially generative AI, to solve the most complex challenges and create massive enterprise business value," said Hillary Ashton, Teradata chief product officer. "We offer the most complete cloud analytics and data platform for AI. And with our enhanced ModelOps capabilities, we are enabling organisations to cost-effectively operationalise and scale trusted AI through robust governance and automated lifecycle management, while encouraging rapid AI innovation via our open and connected ecosystem. Teradata is also the most cost-effective, with proven performance and flexibility to innovate faster, enrich customer experiences, and deliver value."

New capabilities and enhancements to ModelOps include:

  • Bring Your Own Model (BYOM), now with no-code capabilities, allows users to deploy their own machine learning models without writing any code, simplifying the deployment journey with automated validation, deployment, and monitoring
  • Mitigation of regulatory risks with advanced model governance capabilities and robust explainability controls to ensure trusted AI
  • Automatic monitoring of model performance and data drift with zero-configuration alerts

Teradata customers are already using ModelOps to accelerate time-to-value for their AI investments. A major US healthcare institution uses ModelOps to speed up the deployment process and scale its AI/ML personalisation journey. The institution accelerated its deployment with a 3x increase in productivity, successfully deploying thirty AI/ML models that predict which of its patients are most likely to need an office visit, to implement "Personalisation at Scale." A major European financial institution leveraged ModelOps to reduce AI model deployment time from five months to one week. The models are deployed at scale and integrated with operational data to deliver business value.
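The data-drift monitoring described above can be illustrated conceptually. The sketch below uses a generic two-sample Kolmogorov-Smirnov test from SciPy; it is not Teradata's ModelOps API, and the function name and threshold are illustrative assumptions:

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(training_feature: np.ndarray,
                live_feature: np.ndarray,
                threshold: float = 0.05) -> bool:
    """Return True when the live feature distribution has drifted
    from the training distribution (two-sample KS test)."""
    statistic, p_value = ks_2samp(training_feature, live_feature)
    return p_value < threshold

# Training data versus live data whose mean has shifted.
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)
live = rng.normal(loc=0.5, scale=1.0, size=5_000)

if drift_alert(train, live):
    print("Data drift detected: review or retrain the model.")
```

A production ModelOps pipeline would run checks like this on a schedule for every monitored feature and route the alert into an approval or retraining workflow.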

Data Architecture

IQVIA Earns Healthcare Leader Recognition in Data Stack Awards

IQVIA | September 18, 2023

  • Snowflake recognizes IQVIA as a Global Healthcare Leader in the Measurement and Attribution category as part of its annual Modern Marketing Data Stack awards.
  • The Modern Marketing Data Stack report comprehensively analyzes data tools, applications, technologies, and processes in marketing data stacks.
  • Orchestrated Analytics GM Tanveer Nasir expressed his gratitude for the recognition and emphasized the company's commitment to improving brand performance and patient lives through data-driven insights and solutions.

Snowflake, a leading data cloud platform, has recognized IQVIA as a Global Healthcare Leader in the prestigious Measurement and Attribution category. This recognition comes as part of Snowflake's annual Modern Marketing Data Stack awards.

The Modern Marketing Data Stack report is the outcome of a comprehensive year-long analysis focusing on data tools, applications, technologies, and processes employed by organizations in their marketing data stacks. This exhaustive assessment, encompassing approximately 8,100 Snowflake customers, employs a weighted scoring algorithm to discern "marketplace leaders" across diverse data-driven business functions and technology categories.

The report underscores IQVIA's proficiency in aiding healthcare and life sciences organizations in the compliant utilization of extensive data resources. This enables swift and precise measurement and reporting, ultimately leading to actionable insights that facilitate informed decision-making and the formulation of effective sales and marketing strategies.

In recent years, life sciences firms have significantly increased their investments in business intelligence (BI) solutions to enhance their competitiveness and performance. However, this growth has also brought forth challenges, such as analytics failing to address essential business questions, the absence of a "single source of truth" for dependable insights, and the inability to prioritize personalized prescriptive insights.

IQVIA's Orchestrated Analytics platform has emerged as a preeminent solution in the industry due to its comprehensive consulting and change management approach. This approach guarantees that solutions align with specific business requirements, irrespective of the market, while minimizing initial investment risks. Furthermore, the platform offers an array of self-service applications empowering business stakeholders to customize insights and extract reliable and actionable intelligence.

A distinguishing feature of IQVIA's Orchestrated Analytics is its extensive library of more than 200 algorithms and over 400 Key Performance Indicators (KPIs), all aimed at elevating commercial impact through personalized insights for each user. The platform's user-friendly interface is complemented by embedded smart assistants, ensuring effortless access to personalized intelligence across a spectrum of business intelligence tools.

IQVIA's global presence is another hallmark, with a team of over 86,000 experts operating in more than 100 countries. This expansive network accelerates the commercial impact of life sciences companies by furnishing market-relevant insights. In addition, Orchestrated Analytics is trusted by seven of the top ten pharmaceutical companies worldwide as they expand their brand portfolios.

Tanveer Nasir, General Manager, Orchestrated Analytics, commented, "We are honored to be selected by Snowflake as the global leader in Measurement and Attribution in their Modern Marketing Data Stack report." [Source: IQVIA]

He further explained that their insight recommendations and user adoption framework worked together effectively to enhance the sales force's efficiency and increase productivity, and that they demonstrated a significant ROI by showing Rx uplift for a top 10 pharmaceutical brand. Nasir conveyed that their commitment to enhancing brand performance and improving patients' lives worldwide, by identifying the right customer at the right time through the correct channel and messaging, continues to drive their passion.

Big Data Management

Microsoft's AI Data Exposure Highlights Challenges in AI Integration

Microsoft | September 22, 2023

  • AI models rely heavily on vast data volumes for their functionality, increasing the risks associated with mishandling data in AI projects.
  • Microsoft's AI research team accidentally exposed 38 terabytes of private data on GitHub.
  • Many companies feel compelled to adopt generative AI but lack the expertise to do so effectively.

Artificial intelligence (AI) models are renowned for their enormous appetite for data, making them among the most data-intensive computing platforms in existence. While AI holds the potential to revolutionize the world, it is utterly dependent on the availability and ingestion of vast volumes of data.

An alarming incident involving Microsoft's AI research team recently highlighted the immense data exposure risks inherent in this technology. The team inadvertently exposed a staggering 38 terabytes of private data when publishing open-source AI training data on the cloud-based code hosting platform GitHub. The exposed data included a complete backup of two Microsoft employees' workstations, containing highly sensitive personal information such as private keys, passwords to internal Microsoft services, and over 30,000 messages from 359 Microsoft employees.

The exposure was the result of an accidental configuration that granted "full control" access instead of "read-only" permissions. This oversight meant that potential attackers could not only view the exposed files but also manipulate, overwrite, or delete them. Although a crisis was narrowly averted in this instance, it serves as a glaring example of the new risks organizations face as they integrate AI more extensively into their operations. With staff engineers increasingly handling vast amounts of specialized and sensitive data to train AI models, it is imperative for companies to establish robust governance policies and educational safeguards to mitigate security risks.

Training specialized AI models necessitates specialized data. As organizations of all sizes embrace the advantages AI offers in their day-to-day workflows, IT, data, and security teams must grasp the inherent exposure risks associated with each stage of the AI development process. Open data sharing plays a critical role in AI training, with researchers gathering and disseminating extensive amounts of both external and internal data to build the necessary training datasets for their AI models. However, the more data that is shared, the greater the risk if it is not handled correctly, as evidenced by the Microsoft incident. AI, in many ways, challenges an organization's internal corporate policies like no other technology has done before. To harness AI tools effectively and securely, businesses must first establish a robust data infrastructure to avoid the fundamental pitfalls of AI.

Securing the future of AI requires a nuanced approach. Despite concerns about AI's potential risks, organizations should be more concerned about the quality of AI software than the technology turning rogue. PYMNTS Intelligence's research indicates that many companies are uncertain about their readiness for generative AI but still feel compelled to adopt it. A substantial 62% of surveyed executives believe their companies lack the expertise to harness the technology effectively, according to 'Understanding the Future of Generative AI,' a collaboration between PYMNTS and AI-ID.

The rapid advancement of computing power and cloud storage infrastructure has reshaped the business landscape, setting the stage for data-driven innovations like AI to revolutionize business processes. While tech giants or well-funded startups primarily produce today's AI models, computing power costs are continually decreasing. In a few years, AI models may become so advanced that everyday consumers can run them on personal devices at home, akin to today's cutting-edge platforms. This juncture signifies a tipping point, where the ever-increasing zettabytes of proprietary data produced each year must be addressed promptly. If not, the risks associated with future innovations will scale up in sync with their capabilities.
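One practical safeguard this incident underscores is scoping shared storage links to the minimum permissions required and a finite lifetime. The sketch below uses the Azure Blob Storage SDK to mint a read-only, time-limited container SAS; the account name, key, and container are placeholders, and the example illustrates the general principle rather than the specific configuration involved in the incident:

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import ContainerSasPermissions, generate_container_sas

# Placeholder credentials; in practice these come from a secrets store.
ACCOUNT_NAME = "exampleaccount"
ACCOUNT_KEY = "example-account-key"
CONTAINER = "training-data"

# Read and list only -- no write, delete, or "full control" rights --
# with an explicit expiry so the shared link cannot live on indefinitely.
sas_token = generate_container_sas(
    account_name=ACCOUNT_NAME,
    container_name=CONTAINER,
    account_key=ACCOUNT_KEY,
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=7),
)

share_url = f"https://{ACCOUNT_NAME}.blob.core.windows.net/{CONTAINER}?{sas_token}"
print(share_url)
```

Pairing narrowly scoped tokens like this with periodic audits of published links is one way teams sharing large training datasets can limit the blast radius of a misconfiguration.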
