Big Data Management

Equalum Launches Continuous Data Integration Platform 3.0

Equalum | February 23, 2022

Data Integration
Equalum, a leading provider of data integration and ingestion solutions, has unveiled its most powerful data integration platform since its inception. Equalum's Continuous Data Integration Platform Version 3.0 is the first to natively handle all data integration use cases, including all needed Azure, AWS, and Google Cloud targets, under a single, unified platform with minimal coding. Equalum can be used for real-time streaming, batch ETL, replication, and Tier One Change Data Capture.

Equalum CDIP provides next-generation capabilities and simplicity, with a drag-and-drop interface for real-time and batch data pipeline development. In multi-use-case settings, the solution delivers performance equivalent to or higher than single-use-case solutions, such as CDC-only or streaming ETL tools. Equalum allows data teams to move from no knowledge of the platform to a working understanding within a few days of onboarding, and then go live with their first use case in under an hour.

Kevin Petrie, VP of Research, Eckerson Group, stated that "Equalum helps enterprises address the compelling growth opportunity that is created by digital transformation and hybrid-cloud adoption. To survive and compete, enterprises need to synchronize operations and analyze opportunities on a real-time basis. This requires automatically integrating live data across hybrid, cloud, and multi-cloud environments."

Equalum has added hundreds of new features, offerings, and refinements to Version 3.0 that make complicated transformations and data manipulations easier.

Support for All Required Cloud Targets
Equalum has added support for a variety of Azure, AWS, and Google Cloud targets, including Amazon RDS; Microsoft Azure database services for Oracle, Postgres, MySQL, and other databases; and Google Cloud Platform (GCP) targets such as Google Dataproc, Google Cloud Storage (GCS), Google BigQuery, and Google Cloud Database. Oracle and SQL Server databases on Google and Azure are also supported.

Equalum's Oracle Binary Log Parser (OBLP)
OBLP has been enhanced to deliver even better performance, making it an ideal replacement for Oracle's deprecated LogMiner. Equalum provides 10x throughput increases, an optimized CDC strategy, and pricing based on flows and endpoints.
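For context, the LogMiner-based approach that binary log parsers like OBLP replace typically polls V$LOGMNR_CONTENTS for redo records. Below is a minimal sketch of that pattern using the python-oracledb driver; the connection details, SCN range, and table filter are hypothetical placeholders, not Equalum's implementation.

```python
# Sketch of the LogMiner polling pattern that binary log parsers replace.
# The DSN, credentials, SCN range, and table filter are hypothetical.
import oracledb

conn = oracledb.connect(user="cdc_user", password="...", dsn="db-host/ORCLPDB1")
cur = conn.cursor()

# Start a LogMiner session over a committed SCN range, using the online
# catalog as the dictionary.
cur.execute("""
    BEGIN
        DBMS_LOGMNR.START_LOGMNR(
            STARTSCN => :start_scn,
            ENDSCN   => :end_scn,
            OPTIONS  => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG
                      + DBMS_LOGMNR.COMMITTED_DATA_ONLY);
    END;""", start_scn=1_000_000, end_scn=1_001_000)

# Each row of V$LOGMNR_CONTENTS is a redo record; LogMiner reconstructs the
# SQL, which a CDC pipeline would then parse and apply to the target.
cur.execute("""
    SELECT scn, operation, sql_redo
    FROM   v$logmnr_contents
    WHERE  seg_owner = 'APP' AND table_name = 'ORDERS'""")
for scn, operation, sql_redo in cur:
    print(scn, operation, sql_redo)

cur.execute("BEGIN DBMS_LOGMNR.END_LOGMNR; END;")
```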

SQL Replication Binary Parser (SRBP)
This CDC replication reads the SQL Server transaction log and requires no installation on the database server. Compared to the native SQL Server CDC solution, Equalum reduces the impact on SQL Server by 90%, delivering improved performance and throughput with less pressure on production databases.
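For comparison, native SQL Server CDC (the baseline this claim is measured against) has the database itself copy log records into change tables, which a pipeline then polls. Below is a minimal sketch with pyodbc; the connection string and the 'dbo_orders' capture instance are hypothetical, and CDC must already be enabled on the table.

```python
# Sketch of polling SQL Server's native CDC change tables, the baseline
# approach SRBP is compared against. The connection string and the
# 'dbo_orders' capture instance are hypothetical placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=db-host;DATABASE=app;UID=cdc_user;PWD=...;TrustServerCertificate=yes"
)
cur = conn.cursor()

# Native CDC relies on a capture job that copies log records into change
# tables inside the database; this is the extra server-side work an
# external log parser avoids.
cur.execute("""
    DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_orders');
    DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();
    SELECT __$operation, __$start_lsn, *
    FROM   cdc.fn_cdc_get_all_changes_dbo_orders(@from_lsn, @to_lsn, 'all');
""")
for row in cur.fetchall():
    print(row)  # __$operation: 1=delete, 2=insert, 4=update (after image)
```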

Cloud Target Expansion
Version 3.0 adds support for Google Cloud, Amazon Web Services, and Microsoft Azure cloud targets. If you decide to move to one of these major clouds in the future, Equalum fully supports the transition, future-proofing your data integration.

Replication Group Enhancements
Replication groups, built directly into the Continuous Data Integration Platform, make extensive data migrations, cross-platform data warehousing (replicating to a data lake or data warehouse), and the maintenance of tens of thousands of objects through the UI straightforward. Equalum synchronizes Initial Acquisition and Change Data Capture (CDC) to ensure "once-and-for-all" data capture.
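Conceptually, synchronizing an initial load with CDC means recording the log position before the snapshot and replaying changes from that point, with idempotent upserts absorbing any overlap. Below is a minimal generic sketch of that handoff, not Equalum's internals; the source and target objects and their methods are hypothetical interfaces.

```python
# Generic sketch of coordinating an initial snapshot with CDC so every row
# is captured exactly once (not Equalum's internals). The source/target
# objects and their methods are hypothetical interfaces.
def replicate_table(source, target, table):
    # 1. Record the log position *before* reading the snapshot, so changes
    #    committed during the copy are replayed from the log afterwards.
    start_pos = source.current_log_position()

    # 2. Bulk-copy the initial snapshot in batches.
    for batch in source.read_snapshot(table):
        target.write_batch(table, batch)

    # 3. Stream changes from the recorded position onward. Keying upserts
    #    on the primary key makes any overlap with the snapshot idempotent.
    for change in source.read_changes(table, since=start_pos):
        target.apply_change(table, change)  # insert/update/delete by PK
```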

Automatic Schema Evolution (enhanced)
In other integration solutions, data pipelines frequently fail when columns change or the schema evolves. Equalum's automated, straightforward schema evolution catches all changes and propagates them appropriately in real time.
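A minimal sketch of what automatic schema evolution does conceptually: diff each incoming record against the known target schema and issue DDL for new columns before applying the row. This is an illustration under assumed interfaces (the 'orders' table and the execute_ddl/upsert_row callables are hypothetical), not Equalum's code.

```python
# Conceptual sketch of automatic schema evolution (not Equalum's code):
# new source columns are detected per record and added to the target
# before the row is applied, so the pipeline survives schema drift.
known_columns = {"id": "BIGINT", "amount": "NUMERIC"}

def infer_sql_type(value):
    # Crude type inference, for illustration only.
    if isinstance(value, bool):  # check bool before int (bool is an int subtype)
        return "BOOLEAN"
    if isinstance(value, int):
        return "BIGINT"
    if isinstance(value, float):
        return "DOUBLE PRECISION"
    return "TEXT"

def apply_record(execute_ddl, upsert_row, record):
    # Add any column the target has not seen yet, then apply the row.
    for column, value in record.items():
        if column not in known_columns:
            ddl_type = infer_sql_type(value)
            execute_ddl(f"ALTER TABLE orders ADD COLUMN {column} {ddl_type}")
            known_columns[column] = ddl_type
    upsert_row(record)
```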

Industry-First Native Support for All Data Integration Use Cases with No Code
Equalum is the first in the industry to natively support all data integration use cases, offering streaming ETL and ELT, batch ETL, and contemporary, multi-modal Change Data Capture on a single, unified platform with a no-code user interface.

From simple pipeline building to large-scale operationalization, Equalum supports the complete data ingestion development cycle. The platform provides complete monitoring and execution metrics for all data pipelines in the system. Equalum's architecture also offers high availability and failover protection, ensuring data is protected as volume and velocity increase.

Equalum is optimized for real-time streaming data, IoT streaming, improved batch data ingestion, data file transformation for real-time analytics, real-time ERP/CRM data access, MemSQL data replication, and enterprise-wide data consolidation into data lakes. Supported enterprise initiatives include real-time streaming (ETL/ELT), real-time Change Data Capture (CDC), and data warehouse ETL performance enhancements.

Related News

Big Data Management

Synopsys Revolutionizes IC Chip Development with Synopsys.ai EDA Suite Expansion

Synopsys, Inc. | September 07, 2023

Synopsys, Inc. introduces an AI-driven data analytics solution for integrated circuit chip development. The innovation is expected to enhance productivity and quality in the semiconductor industry by leveraging AI for insights and optimization. Sanjay Bali believes the importance of AI is growing in the semiconductor industry.

On September 6, 2023, Synopsys, Inc., a renowned Silicon to Software partner for innovative companies engaged in the development of electronic products and software applications, unveiled an expansion of its Synopsys.ai full-stack EDA suite, which now includes a comprehensive AI-driven data analytics continuum designed to support every phase of integrated circuit (IC) chip development.

The Synopsys EDA Data Analytics solution represents a significant advancement within the semiconductor industry. It is the first of its kind to deliver AI-driven insights and optimization, enhancing various aspects of the IC chip development process, including exploration, design, manufacturing, and testing. By harnessing AI's power, the solution can efficiently curate and operationalize vast volumes of diverse, multi-domain data, which accelerates root-cause analysis and facilitates improvements in design productivity, manufacturing efficiency, and test quality.

The AI-powered Synopsys EDA Data Analytics (.da) solution comprises the following components:

Synopsys Design.da: Performs in-depth analysis of data generated during the execution of designs using Synopsys.ai, providing chip designers with comprehensive visibility and actionable insights related to power, performance, and area (PPA) optimization opportunities.

Synopsys Fab.da: Stores and analyzes extensive streams of fab equipment process-control data to enhance operational efficiency and maximize product quality and fab yield.

Synopsys Silicon.da: Collects petabytes of data from silicon monitors, diagnostic tools, and production tests, and uses it to improve chip production metrics, including quality, yield, and throughput, as well as silicon operation metrics such as chip power and performance.

Sanjay Bali, VP of Strategy and Product Management for the EDA Group at Synopsys, commented, "As IC complexity grows and market windows shrink, the semiconductor industry is increasingly adopting artificial intelligence technologies to enhance the quality of results (QoR), speed verification and testing, improve fab yield, and boost productivity across multiple domains spanning the entire IC design flow."

[Source: PR Newswire]

With the new data analytics capabilities within the Synopsys.ai EDA suite, Sanjay stated, companies can now aggregate and leverage data across every layer of the EDA stack, from architecture exploration, design, test, and manufacturing, to drive improvements in PPA, yield, and engineering productivity.

Unleashing the Potential Hidden Within Massive Data Sets

In the realm of electronic design automation (EDA), testing, and IC fabrication, tools generate a vast amount of heterogeneous design data, encompassing elements like timing paths, power profiles, die pass/fail reports, process-control data, and verification coverage metrics. Effectively harnessing this data is essential for enhancing productivity, power-performance-area (PPA) outcomes, and parametric/manufacturing yield.

The expansion of the Synopsys.ai full-stack EDA suite with a big data analytics solution enables the aggregation and curation of multi-domain data through AI-driven processes and methodologies, leading to substantial productivity enhancements with improved Quality of Results (QoR). With access to deeper design insights, chip designers can execute more efficient debugging and optimization workflows. Additionally, IC suppliers gain the capability to swiftly identify and rectify problem areas throughout the mask, fabrication, and testing processes before they can negatively impact product quality and yield. Furthermore, organizations can leverage generative AI techniques on their datasets to unlock new use cases, including knowledge assistance, preemptive and prescriptive what-if scenario exploration, and guided issue resolution.

Business Intelligence, Big Data Management, Data Science

Airbyte Makes Hundreds of Data Sources Available for Artificial Intelligence Applications

Business Wire | August 09, 2023

Airbyte, creators of the fastest-growing open-source data movement platform, today made available connectors for the Pinecone and Chroma vector databases as destinations for moving data from hundreds of data sources, which can then be accessed by artificial intelligence (AI) models.

"We are the first general-purpose data movement platform to add support for vector databases – the first to build a bridge between data movement platforms and AI," said Michel Tricot, CEO, Airbyte. "Now, Pinecone and Chroma users don't have to struggle with creating custom code to bring in data; they can use the new Airbyte connector to select the data sources they want."

Because vector databases have the ability to interpret data to create relationships, their usage is increasingly popular as users seek to gain more meaning from data. Vector databases are ideal for applications like recommendation systems, anomaly detection, and natural language processing, and as sources for AI applications, specifically Large Language Models (LLMs).

The vector database destination in Airbyte now enables users to configure the full ELT pipeline, starting from extracting records from a wide variety of sources to separating unstructured and structured data, preparing and embedding the text contents of records, and finally loading them into vector databases, all through a single, user-friendly interface (a minimal sketch of the embed-and-load step follows this item). These vector databases can then be accessed by LLMs.

All existing advantages of the Airbyte platform are now extended to vector databases, including:

The largest catalog of data sources that can be connected within minutes, optimized for performance.

Availability of the no-code connector builder that makes it possible to easily and quickly create new connectors for data integrations, addressing the "long-tail" of data sources.

Ability to do incremental syncs to extract only the changes in the data since a previous sync.

Built-in resiliency in the event of a disrupted session moving data, so the connection resumes from the point of disruption.

Secure authentication for data access.

Ability to schedule and monitor the status of all syncs.

Airbyte continues to innovate and support cutting-edge technologies to empower organizations in their data integration journey. The addition of vector database support marks another significant milestone in Airbyte's commitment to providing powerful and efficient solutions for data integration and analysis.

The vector database destination is currently in alpha status and available supporting: Pinecone on both Airbyte Cloud and the Open Source Software (OSS) version; Chroma and the embedded DocArray database on Airbyte OSS; plus more options in the future.

Airbyte makes moving data easy and affordable across almost any source and destination, helping enterprises provide their users with access to the right data for analysis and decision-making. Airbyte has the largest data engineering contributor community, with more than 800 contributors, and the best tooling to build and maintain connectors.

About Airbyte

Airbyte is the open-source data movement leader running in the safety of your cloud and syncing data from applications, APIs, and databases to data warehouses, lakes, and other destinations. Airbyte offers four products: Airbyte Open Source, Airbyte Enterprise, Airbyte Cloud, and Powered by Airbyte. Airbyte was co-founded by Michel Tricot (former director of engineering and head of integrations at Liveramp and RideOS) and John Lafleur (serial entrepreneur of dev tools and B2B). The company is headquartered in San Francisco with a distributed team around the world. To learn more, visit airbyte.com.
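As referenced above, here is a minimal sketch of the embed-and-load step such a pipeline performs for a vector database destination, written directly against the Chroma client library; the collection name, document contents, and IDs are placeholders.

```python
# Sketch of the embed-and-load step a vector-database destination performs,
# using the Chroma client library directly. Collection name, documents,
# and IDs are placeholders.
import chromadb

client = chromadb.Client()  # in-memory instance; persistent clients also exist
collection = client.create_collection(name="support_tickets")

# Chroma embeds the documents with its default embedding function and
# stores the vectors alongside the raw text and metadata.
collection.add(
    ids=["t-1", "t-2"],
    documents=["Sync fails after a schema change", "How do I rotate API keys?"],
    metadatas=[{"source": "helpdesk"}, {"source": "helpdesk"}],
)

# An LLM application can then retrieve the most relevant records by meaning.
results = collection.query(query_texts=["pipeline broke on a new column"], n_results=1)
print(results["documents"])
```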

Big Data Management

Cloudera and AWS Enhance Cloud Data Management and Analytics

AWS | September 11, 2023

Cloudera and AWS have signed a strategic collaboration agreement to boost cloud-native data management and analytics. Cloudera, an AWS ISV WMP Partner, will use AWS services to innovate and reduce costs with its open data lakehouse. Its Data Platform is engineered to integrate directly with AWS services, providing a cost-effective and innovative platform for customers.

Cloudera and Amazon Web Services (AWS) have announced a Strategic Collaboration Agreement (SCA) aimed at bolstering cloud-native data management and analytics. Under this agreement, Cloudera will harness AWS services to provide ongoing innovation and cost reduction to customers using the Cloudera open data lakehouse on AWS for enterprise generative AI.

As part of this collaboration, Cloudera, already an AWS Independent Software Vendor (ISV) Workload Migration Program (WMP) Partner, will further simplify workload migration to the cloud and the purchase of Cloudera Data Platform (CDP) on AWS, leveraging AWS Marketplace credits.

Cloudera has opted to run key elements of CDP on AWS, including data management, data lakes, data warehouses, operational databases, AI and machine learning, master data management, and security components. This allows customers to transition to CDP in the cloud without the need to refactor their applications, facilitating hybrid deployments. Moreover, Cloudera has designed CDP for seamless integration with various AWS services such as Amazon S3, Amazon EKS, Amazon RDS, and Amazon EC2, delivering a tightly integrated platform that reduces costs and capitalizes on AWS innovations. Customers can access AWS native services without the burden of managing integrations themselves.

"Deepening our collaboration with AWS gives customers even more reasons to choose to run the Cloudera Data Platform on AWS. With tighter hardware and AWS service integration, customers get the best possible experience with strong security and governance, along with new cost reduction options to support their most critical analytical workloads," said Paul Codding, Executive Vice President of Product Management, Cloudera. He stated that Cloudera and AWS, when combined, provide organizations with the necessary tools to construct and operate data applications in a manner that can optimally cater to the unique and evolving requirements of their business.

[Source: PR Newswire]

Beyond technology integration, AWS and Cloudera are set to collaborate on marketing and co-selling programs for customers. The partnership solidifies Cloudera's position as a trusted AWS partner in cloud-native data management and data analytics.

"Cloudera has strengthened their collaboration with AWS for shared customers to leverage their existing investments in CDP and accelerate their modernization to the cloud," said Chris Grusz, General Manager, Technology Partnerships and Marketplace at AWS. He mentioned that Cloudera is persistently innovating on AWS throughout its data management platform to deliver authentic data analytics and insights for its customers.

[Source: PR Newswire]

Cloudera's open data lakehouse approach enables secure data management and portable cloud-native data analytics across various cloud environments, aligning with their mission to make data transformation feasible for the future.
