Setting Up a Data Integration Pipeline for Repeatable Analytics Delivery

GoodData Corporation

As part of its platform, GoodData provides a fault-tolerant, high-performance, and scalable system for data integration. While built for large-scale analytic applications, it is a metadata-driven, modular system that can start small and grow with your business. In this session, Cameron demonstrates how to set up and schedule regular data extraction from SQL databases and other sources. He also covers issues that require attention during data extraction, such as data merging and incremental loads. A future session will cover transformations and data enrichment, along with data distribution.
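To make the two extraction concerns mentioned above concrete, here is a minimal sketch of a watermark-based incremental load followed by a key-based merge (upsert). This is an illustrative example only, not GoodData's API: the table, the `updated_at` watermark column, and the helper functions are all hypothetical, and a SQLite in-memory database stands in for the source system.

```python
import sqlite3

def extract_incremental(conn, table, key_col, watermark_col, last_watermark):
    # Incremental load: fetch only rows whose watermark (e.g. an
    # updated_at timestamp) advanced past the value recorded last run.
    cur = conn.execute(
        f"SELECT {key_col}, {watermark_col} FROM {table} "
        f"WHERE {watermark_col} > ? ORDER BY {watermark_col}",
        (last_watermark,),
    )
    rows = cur.fetchall()
    # The new watermark is the highest value seen; persist it for the next run.
    new_watermark = rows[-1][1] if rows else last_watermark
    return rows, new_watermark

def merge_rows(target, rows):
    # Data merging: incoming rows overwrite earlier versions of the same
    # key, so re-extracted (updated) records do not create duplicates.
    for key, watermark in rows:
        target[key] = watermark
    return target

# Demo with an in-memory source table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, updated_at INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 100), (2, 150), (3, 200)])

target, wm = {}, 0
rows, wm = extract_incremental(conn, "orders", "id", "updated_at", wm)
target = merge_rows(target, rows)            # first pass loads everything

conn.execute("INSERT INTO orders VALUES (4, 250)")               # new row
conn.execute("UPDATE orders SET updated_at = 300 WHERE id = 2")  # changed row
rows, wm = extract_incremental(conn, "orders", "id", "updated_at", wm)
target = merge_rows(target, rows)            # second pass picks up only ids 4 and 2
```

The watermark keeps each scheduled run cheap (only changed rows travel), while the merge step ensures updates replace, rather than duplicate, previously loaded records.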
Watch Now

Spotlight

Yogi Berra said, "You can observe a lot by watching." The same is true with data. If you can appropriately display your data, you can already start to draw conclusions from it. I'll go even further: exploring your data is a crucial step in your analysis. When I say exploring your data, I mean organizing and plotting your data, and maybe computing a few numerical summaries about them.

OTHER ON-DEMAND WEBINARS

Experience limitless analytics with Azure Synapse Analytics

View this webinar, in which our experts discuss the new era of analytics with the Microsoft Azure Synapse Analytics platform: a limitless analytics service with unmatched time to insight that brings together data integration, enterprise data warehousing, and big data analytics in a single service.
Watch Now

ADVANCING CANCER TREATMENT WITH SELF-SERVICE DATA PREPARATION

Paxata

PrecisionProfile – a bioinformatics technology company – focuses on enabling oncologists and research scientists to rapidly analyze genomic profiles and create personalized treatment plans for cancer patients. Like most organizations, PrecisionProfile struggled with the most time-consuming part of every analytics exercise: combining, cleaning, and shaping data into actionable information. With self-service data preparation, the team designed and developed a platform that accelerates data pipelines, enabling scientists to spend more time analyzing data and working out how to leverage it to save lives. View this webcast to learn how PrecisionProfile empowered researchers and oncologists to spend a fraction of their time restructuring data, and reduced the cycle time of a genomic clinical study from 1-3 months to 2-8 hours.
Watch Now

Accelerating Machine Learning on Databricks

CartoDB

Learn how you can combine CARTO and Python for spatial data science from the comfort of your own Jupyter notebook. In this technical webinar, Andy Eschbacher (Data Scientist at CARTO) and Joe Pringle (VP - North America at CARTO) show how to apply CARTOframes and CARTO's Python SDK to build powerful end-to-end spatial data science workflows. This webinar focuses on two areas: using CARTO in a Jupyter notebook, and demo examples of custom data science web applications built with CARTOframes and CARTO VL.
Watch Now

ON DEMAND Q2 WEBINAR PROMOTION: "TOP 5 ELEMENTS OF A WINNING DATA STRATEGY"

Organizations with very effective enterprise data strategies report more profit and better resiliency, according to a recent Enterprise Data Maturity research report. But there's a catch: less than half of enterprise respondents believe their data str
Watch Now
