Business Intelligence, Big Data Management, Business Strategy

Fivetran Supports the Automation of the Modern Data Lake on Amazon S3

Fivetran, a global leader in automated data movement, today announced support for Amazon Simple Storage Service (Amazon S3) with the Apache Iceberg data lake format. Amazon S3 is an object storage service from Amazon Web Services (AWS) that offers industry-leading scalability, data availability, security and performance. Apache Iceberg is a widely supported open-source table format that offers atomic, consistent, isolated and durable (ACID) transactions for data lakes. Fivetran's automated data movement platform anonymizes personally identifiable information (PII) while cleansing, normalizing and automatically loading data into the lake.

With expansive storage capacity and support for multiple data formats, the data lake is a popular destination for teams doing analysis on massive data sets or running extensive data science projects that fuel their business. Hundreds of thousands of data lakes run on top of Amazon S3 and, among the many enterprise teams that have already put them to work, a majority cite enhanced business agility, improved development of products and services, and better customer service and engagement as benefits of data lakes.

“Fivetran supporting Amazon S3 as a destination is a big deal for our platform Distilled, and anyone building external data and analytics products,” said Aaron Peabody, Co-Founder and CEO at Untitled Firm. “This new destination allows our customers to tap into the full potential of AWS's services. We couldn't be more excited that Fivetran has invested in this destination as it is a force multiplier catalyst for our own product roadmap at Untitled.”

“We now automatically extract, cleanse, deduplicate, and make ready for analysis large volumes of semi-structured data to power data lakes in the same reliable and secure way our customers get their data into their cloud warehouses today,” said Fraser Harris, Vice President of Product at Fivetran. “Fivetran and AWS share a vision that without structure, governance and accuracy of data in a data lake, organizations are unnecessarily increasing complexity and not realizing the full value of the data they store there. Fivetran’s mission is to make access to data as simple and reliable as electricity, and this new support brings that promise to the world of data lakes.”

“We are delighted that the accessibility of Amazon S3 with Iceberg continues to grow,” said Greg Khairallah, Director of Analytics at AWS. “It’s an easy way for our customers to simplify data ingestion while gaining the scalability of a data lake and the reliable data transformation of a data warehouse.”

As organizations continue to leverage data lakes to run analytics and extract insights from their data, progressive marketing intelligence teams are demanding more of them, and solutions like Amazon S3 and automated pipeline support are meeting that demand. Tinuiti, one of the largest independent performance marketing firms, handles large volumes of data on a daily basis and must have a data lake — Amazon S3 in particular — to power their customers' brand potential.

“The data lake is an easy, affordable, secure and robust way to store all our customers' data,” said Lakshmi Ramesh, Vice President, Data Services at Tinuiti. “The main challenge is in optimizing performance and accessibility, but with Fivetran’s support for Amazon S3 with Iceberg it will further optimize our Fivetran pipeline. Since the data lake is our single source of truth, it is critical that all the data ingested from different sources be accessible in the data lake.”

Instead of focusing on all the manual steps required to ingest data, cleanse it, prepare it for usage, hash and block sensitive data, and then start querying it, modern organizations see great value in reducing data lake management efforts through pipeline automation and governance.
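The hashing and blocking of sensitive fields mentioned above can be sketched in a few lines. This is a minimal illustration only, not Fivetran's actual implementation: the field names, the secret key, and the record layout are all hypothetical, and a managed pipeline would apply this during ingestion rather than as a standalone script.

```python
import hashlib
import hmac

# Hypothetical key; in practice this would come from a secrets manager.
SECRET_KEY = b"example-pipeline-key"

# Assumed field classifications, for illustration only.
PII_FIELDS = {"email", "phone"}      # hashed before loading
BLOCKED_FIELDS = {"ssn"}             # never loaded into the lake

def anonymize_record(record: dict) -> dict:
    """Hash PII fields with a keyed hash and drop blocked fields."""
    out = {}
    for key, value in record.items():
        if key in BLOCKED_FIELDS:
            continue  # block: exclude the field entirely
        if key in PII_FIELDS:
            # A keyed hash (HMAC-SHA256) keeps values consistent across
            # tables, so they remain join-able, without exposing raw PII.
            digest = hmac.new(SECRET_KEY, str(value).encode(), hashlib.sha256)
            out[key] = digest.hexdigest()
        else:
            out[key] = value
    return out

row = {"email": "user@example.com", "phone": "555-0100",
       "ssn": "000-00-0000", "plan": "pro"}
print(anonymize_record(row))
```

Because the same key and hash are applied everywhere, the masked values can still serve as stable join keys downstream, which is one reason keyed hashing is a common choice for this step.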

"Fivetran’s support for Amazon S3 and its standardization on Iceberg format makes it easier than ever for organizations to get their data into a lakehouse,” said Tomer Shiran, co-founder and CPO, Dremio. “With Fivetran, AWS and Dremio, organizations can build their open data lakehouse architecture for users to quickly access and query data and provide critical data-driven business insights."

About Fivetran

Fivetran automates data movement out of, into and across cloud data platforms. We automate the most time-consuming parts of the ELT process, from extraction to schema drift handling to transformations, so data engineers can focus on higher-impact projects with total pipeline peace of mind. With 99.9% uptime and self-healing pipelines, Fivetran enables hundreds of leading brands across the globe, including Autodesk, Conagra Brands, JetBlue, Lionsgate, Morgan Stanley and Ziff Davis, to accelerate data-driven decisions and drive business growth. Fivetran is headquartered in Oakland, California, with offices around the world.

Related News

Big Data

Airbyte Racks Up Awards from InfoWorld, BigDATAwire, Built In; Builds Largest and Fastest-Growing User Community

Airbyte | January 30, 2024

Airbyte, creators of the leading open-source data movement infrastructure, today announced a series of accomplishments and awards reinforcing its standing as the largest and fastest-growing data movement community. With a focus on innovation, community engagement, and performance enhancement, Airbyte continues to revolutionize the way data is handled and processed across industries.

“Airbyte proudly stands as the front-runner in the data movement landscape with the largest community of more than 5,000 daily users and over 125,000 deployments, with monthly data synchronizations of over 2 petabytes,” said Michel Tricot, co-founder and CEO, Airbyte. “This unparalleled growth is a testament to Airbyte's widespread adoption by users and the trust placed in its capabilities.”

The Airbyte community has more than 800 code contributors and 12,000 stars on GitHub. Recently, the company held its second annual virtual conference, move(data), which attracted over 5,000 attendees. In October, Airbyte was named an InfoWorld Technology of the Year Award finalist in Data Management – Integration, which recognizes cutting-edge products that are changing how IT organizations work and how companies do business. At the start of this year, the company was named to the Built In 2024 Best Places to Work Award in San Francisco – Best Startups to Work For, recognizing its commitment to fostering a positive work environment, remote and flexible work opportunities, and programs for diversity, equity, and inclusion. Today, the company received the BigDATAwire Readers/Editors Choice Award – Big Data and AI Startup, which recognizes companies and products that have made a difference.

Other key milestones in 2023 include: availability of more than 350 data connectors, making Airbyte the platform with the most connectors in the industry (the company aims to increase that to 500 high-quality connectors supported by the end of this year); more than 2,000 custom connectors created with the Airbyte No-Code Connector Builder, which enables data connectors to be built in minutes; a significant performance improvement, with database replication speed increased by 10 times to support larger datasets; and added support for five vector databases, in addition to unstructured data sources, making Airbyte the first company to build a bridge between data movement platforms and artificial intelligence (AI).

Looking ahead, Airbyte will introduce data lakehouse destinations, as well as a new Publish feature to push data to API destinations.

About Airbyte

Airbyte is the open-source data movement infrastructure leader, running in the safety of your cloud and syncing data from applications, APIs, and databases to data warehouses, lakes, and other destinations. Airbyte offers four products: Airbyte Open Source, Airbyte Self-Managed, Airbyte Cloud, and Powered by Airbyte. Airbyte was co-founded by Michel Tricot (former director of engineering and head of integrations at LiveRamp and rideOS) and John Lafleur (serial entrepreneur in dev tools and B2B). The company is headquartered in San Francisco with a distributed team around the world. To learn more, visit airbyte.com.

Data Architecture

SingleStore Announces Real-time Data Platform to Further Accelerate AI, Analytics and Application Development

SingleStore | January 25, 2024

SingleStore, the database that allows you to transact, analyze and contextualize data, today announced powerful new capabilities, making it the industry’s only real-time data platform. With its latest release, dubbed SingleStore Pro Max, the company announced groundbreaking features such as indexed vector search, an on-demand compute service for GPUs and CPUs, and a new free shared tier, among several other innovative new products. Together, these capabilities shrink development cycles while providing the performance and scale that customers need for building applications.

In an explosive generative AI landscape, companies are looking for a modern data platform that’s ready for enterprise AI use cases: one with the best available tooling to accelerate development, while allowing them to marry structured or semi-structured data residing in enterprise systems with unstructured data in data lakes.

“We believe that a data platform should both create new revenue streams while also decreasing technological costs and complexity for customers. And this can only happen with simplicity at the core,” said Raj Verma, CEO, SingleStore. “This isn’t just a product update, it’s a quantum leap… SingleStore is offering truly transformative capabilities in a single platform for customers to build all kinds of real-time applications, AI or otherwise.”

“At Adobe, we aim to change the world through digital experiences,” said Matt Newman, Principal Data Architect, Adobe. “SingleStore’s latest release is exciting as it pushes what is possible when it comes to database technology, real-time analytics and building modern applications that support AI workloads. We’re looking forward to these new features as more and more of our customers are seeking ways to take full advantage of generative AI capabilities.”

Key new features launched include:

Indexed vector search. SingleStore has announced support for vector search using Approximate Nearest Neighbor (ANN) vector indexing algorithms, delivering 800-1,000x faster vector search performance than exact methods (KNN). With both full-text and indexed vector search capabilities, SingleStore offers developers true hybrid search that takes advantage of the full power of SQL for queries, joins, filters and aggregations. These capabilities firmly place SingleStore above vector-only databases that require niche query languages and are not designed to meet enterprise security and resiliency needs.

Free shared tier. SingleStore has announced a new cloud-based free shared tier designed for startups and developers to quickly bring their ideas to life, without the need to commit to a paid plan.

On-demand compute service for GPUs and CPUs. SingleStore announces a compute service that works alongside SingleStore’s native Notebooks to let developers spin up GPUs and CPUs to run database-adjacent workloads, including data preparation, ETL and third-party native application frameworks. This capability brings compute to algorithms, rather than the other way around, enabling developers to build highly performant AI applications safely and securely using SingleStore, without unnecessary data movement.

New CDC capabilities for data ingest and egress. To ease the burden and costs of moving data in and out of SingleStore, the company is adding native capabilities for real-time Change Data Capture (CDC) ingestion from MongoDB®, MySQL and Apache Iceberg without requiring third-party CDC tools. SingleStore will also support CDC out capabilities that ease migrations and enable the use of SingleStore as a source for other applications and databases such as data warehouses and lakehouses.

SingleStore Kai™. Now generally available, and ready for both analytical and transactional processing for apps originally built on MongoDB. Announced in public preview in early 2023, SingleStore Kai is an API that delivers over 100x faster analytics on MongoDB® with no query changes or data transformations required. Today, SingleStore Kai supports the BSON data format natively, has improved transactional performance, increased performance for arrays and offers industry-leading compatibility with the MongoDB query language.

Projections. To further advance as the world’s fastest HTAP database, SingleStore has added Projections, which allow developers to greatly speed up range filters and group-by operations by introducing secondary sort and shard keys. Query performance improvements range from 2-3x or more, depending on the size of the table.

With this latest release, SingleStore becomes the industry’s first and only real-time data platform designed for all applications, analytics and AI. SingleStore supports high-throughput ingest performance, ACID transactions and low-latency analytics, as well as structured, semi-structured (JSON, BSON, text) and unstructured data (vector embeddings of audio, video, images, PDFs, etc.). Finally, SingleStore’s data platform is designed not just with developers in mind, but also ML engineers, data engineers and data scientists.

“Our new features and capabilities advance SingleStore’s mission of offering a real-time data platform for the next wave of gen AI and data applications,” said Nadeem Asghar, SVP, Product Management + Strategy at SingleStore. “New features, including vector search, Projections, Apache Iceberg, Scheduled Notebooks, autoscaling, GPU compute services, SingleStore Kai™, and the Free Shared Tier allow startups — as well as global enterprises — to quickly build and scale enterprise-grade real-time AI applications. We make data integration with third-party databases easy with both CDC in and CDC out support.”

“Although generative AI, LLM, and vector search capabilities are early stage, they promise to deliver a richer data experience with translytical architecture,” states the 2023 report, “Translytical Architecture 2.0 Evolves To Support Distributed, Multimodel, And AI Capabilities,” authored by Noel Yuhanna, Vice President and Principal Analyst at Forrester Research. “Generative AI and LLM can help democratize data through natural language query (NLQ), offering a ChatGPT-like interface. Also, vector storage and index can be leveraged to perform similarity searches to support data intelligence.”

SingleStore has been on a fast track leading innovation around generative AI. The company’s product evolution has been accompanied by high-momentum customer growth, surpassing $100M in ARR late last year. SingleStore also recently ranked #2 in the emerging category of vector databases, was recognized by TrustRadius as a top vector database in 2023, and was a winner of InfoWorld’s Technology of the Year award in the database category.

About SingleStore

SingleStore empowers the world’s leading organizations to build and scale modern applications using the only database that allows you to transact, analyze and contextualize data in real time. With streaming data ingestion, support for both transactions and analytics, horizontal scalability and hybrid vector search capabilities, SingleStore helps deliver 10-100x better performance at 1/3 the cost compared to legacy architectures. Hundreds of customers worldwide, including Fortune 500 companies and global data leaders, use SingleStore to power real-time applications and analytics. Learn more at singlestore.com and follow @SingleStoreDB on Twitter.

Big Data Management

data.world Integrates with Snowflake Data Quality Metrics to Bolster Data Trust

data.world | January 24, 2024

data.world, the data catalog platform company, today announced an integration with Snowflake, the Data Cloud company, that brings new data quality metrics and measurement capabilities to enterprises. The data.world Snowflake Collector now empowers enterprise data teams to measure data quality across their organization on demand, unifying data quality and analytics. Customers can now achieve greater trust in their data quality and downstream analytics to support mission-critical applications, confident data-driven decision-making, and AI initiatives.

Data quality remains one of the top concerns for chief data officers and a critical barrier to creating a data-driven culture. Traditionally, data quality assurance has relied on manual oversight, a process that is tedious and inefficient. The data.world Data Catalog Platform now delivers Snowflake data quality metrics directly to customers, streamlining quality assurance timelines and accelerating data-first initiatives. Data consumers can access contextual information in the catalog or directly within tools such as Tableau and Power BI via Hoots, data.world’s embedded trust badges, which broadcast data health status and catalog context, bolstering transparency and trust. Additionally, teams can link certification and DataOps workflows to Snowflake’s data quality metrics to automate manual workflows and quality alerts. Backed by a knowledge graph architecture, data.world provides greater insight into data quality scores via intelligence on data provenance, usage, and context, all of which support DataOps and governance workflows.

“Data trust is increasingly crucial to every facet of business, and data teams are struggling to verify the quality of their data, facing increased scrutiny from developers and decision-makers alike on the downstream impacts of their work, including analytics – and soon enough, AI applications,” said Jeff Hollan, Director, Product Management at Snowflake. “Our collaboration with data.world enables data teams and decision-makers to verify and trust their data’s quality to use in mission-critical applications and analytics across their business.”

“High-quality data has always been a priority among enterprise data teams and decision-makers. As enterprise AI ambitions grow, the number one priority is ensuring the data powering generative AI is clean, consistent, and contextual,” said Bryon Jacob, CTO at data.world. “Alongside Snowflake, we’re taking steps to ensure data scientists, analysts, and leaders can confidently feed AI and analytics applications data that delivers high-quality insights, and supports the type of decision-making that drives their business forward.”

The integration builds on the robust collaboration between data.world and Snowflake. Most recently, the companies announced an exclusive offering for joint customers, streamlining adoption timelines and offering an attractive new price point. data.world’s knowledge graph-powered data catalog already offers unique benefits for Snowflake customers, including support for Snowpark. This offering is now available to all data.world enterprise customers using the Snowflake Collector, as well as customers taking advantage of the Snowflake-only offering. To learn more about the data quality integration or the data.world data catalog platform, visit data.world.

About data.world

data.world is the data catalog platform built for your AI future. Its cloud-native SaaS (software-as-a-service) platform combines a consumer-grade user experience with a powerful knowledge graph to deliver enhanced data discovery, agile data governance, and actionable insights. data.world is a Certified B Corporation and public benefit corporation and home to the world’s largest collaborative open data community, with more than two million members, including ninety percent of the Fortune 500. The company has 76 patents and has been named one of Austin’s Best Places to Work seven years in a row.
