BUSINESS INTELLIGENCE, BIG DATA MANAGEMENT
Privacera | September 12, 2022
Privacera, the unified data access governance leader founded by the creators of Apache Ranger™, today announced the availability of its AWS Lake Formation integration in private preview, which offers complete data governance automation and fine-grained data access for AWS services including Amazon S3, Amazon Redshift and Amazon RDS. Privacera helps enterprise data teams protect sensitive data and enable privacy across all on-premises, hybrid and multi-cloud data sources while reducing time to insights by automating outdated, manual governance processes.
Privacera is expanding its support and native integration for diverse AWS environments with the new AWS Lake Formation integration, which simplifies data access governance for complex, heterogeneous data lake and data mesh environments by extending Lake Formation enforcement to third-party services such as Databricks, enabling additional governance use cases. With this integration, organizations can accelerate their migration to the cloud by using Privacera to securely manage data access policies across diverse on-premises and cloud data sources from a single governance platform. Increased automation, consistent policy management, and an open, consistent and proven compliance standard significantly reduce the effort involved in cloud data migrations.
"Organizations operate in diverse data ecosystems, and it's becoming increasingly challenging to not only manage the data from a governance perspective, but ensure that organizations are gleaning timely insights securely through appropriate access controls and automation, and that's why Privacera exists," said Privacera CEO Balaji Ganesan. "As an AWS partner, expanding our capabilities with this new integration allows us to deliver a solution that leverages the strengths of both Privacera and AWS Lake Formation, helping organizations with a secure and simple approach to data access while delivering business value."
The latest integration will give users:
A unified data governance strategy that includes AWS Lake Formation data assets
AWS Lake Formation policy enforcement extended to popular data analytics systems like Databricks
An intuitive and easy-to-use interface to build data access policies on top of AWS Lake Formation
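To make the fine-grained, tag-based access described above concrete, here is a minimal, library-free Python sketch of how a policy engine can decide column-level access from resource tags. The roles, tags and policy shape are hypothetical illustrations, not Privacera's or Lake Formation's actual API.

```python
# Illustrative sketch of tag-based, fine-grained access control of the kind
# Lake Formation and Privacera policies express. All names are hypothetical.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Policy:
    role: str               # principal the policy applies to
    tag: str                # resource tag, e.g. "PII" or "public"
    actions: frozenset      # permitted actions, e.g. {"SELECT"}

@dataclass
class Column:
    name: str
    tags: set = field(default_factory=set)

def is_allowed(policies, role, column, action):
    """Allow the action only if some policy grants it for every tag on the column."""
    for tag in column.tags:
        if not any(p.role == role and p.tag == tag and action in p.actions
                   for p in policies):
            return False
    return bool(column.tags)  # untagged columns are denied by default

policies = [
    Policy("analyst", "public", frozenset({"SELECT"})),
    Policy("dpo", "PII", frozenset({"SELECT"})),
    Policy("dpo", "public", frozenset({"SELECT"})),
]

email = Column("email", {"PII"})
region = Column("region", {"public"})

print(is_allowed(policies, "analyst", email, "SELECT"))   # False
print(is_allowed(policies, "dpo", email, "SELECT"))       # True
```

The deny-by-default rule for untagged columns mirrors the common governance stance that data must be classified before it can be queried.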
Financial services company Sun Life uses Privacera to accelerate AWS migration and unify data access governance and compliance. "Because Apache Ranger is critical to the success of our entire analytics platform, so is Privacera as it allows us to capitalize on existing technology and deliver critical data to our analytic teams quicker," said a Director of Cloud Infrastructure & Operations at Sun Life. "Our goal was to get our data into a data lake as quickly as possible and then apply access rules so approved Sun Life professionals can actually use the data to generate important insights. Requests that used to take three to four weeks to program can now be reacted to in less than two days."
Founded in 2016 by the creators of Apache Ranger™, Privacera's SaaS-based data security and governance platform enables analytics teams to simplify data access, security, and privacy for data applications and analytical workloads. The Privacera platform supports compliance with regulations such as GDPR, CCPA, LGPD, and HIPAA. Privacera provides a unified view and control for securing sensitive data across multiple cloud services such as AWS, Azure, Databricks, GCP, Snowflake, and Starburst. The Privacera platform is utilized by Fortune 500 customers across finance, insurance, life sciences, retail, media, and consumer industries, as well as government agencies to automate sensitive data discovery, mask sensitive data, and manage high-fidelity policies at petabyte scale on-premises and in the cloud.
BIG DATA MANAGEMENT, DATA VISUALIZATION
Syniti | September 15, 2022
Syniti, a global leader in enterprise data management, today announced new data quality and catalog capabilities available in its industry leading Syniti Knowledge Platform, building on the enhancements in data migration and data matching added earlier this year. The Syniti Knowledge Platform now includes data quality, catalog, matching, replication, migration and governance, all available under one login, in a single cloud solution. This provides users with a complete and unified data management platform enabling them to deliver faster and better business outcomes with data they can trust.
Trustworthy data is critical for the decisions businesses must make to reduce risk, drive competitive advantage and deliver bottom-line growth. According to Gartner® research, "Significant data quality issues remain a key impediment for organizations' digital initiatives. Failure to address data quality issues for critical use cases puts organizations at a disadvantage delivering business value and has severe consequences."1
Historically, to get better data, companies had to buy multiple point solutions: heavy data catalog tools that require massive teams to build and maintain, or data quality solutions that only identify problems rather than help fix them. This approach is expensive, unnecessarily complex and does not address the data needs of today's businesses. With the Syniti Knowledge Platform, customers now have a unified solution to address the data needed to drive critical business objectives now and in the future. The same Gartner research states that, "From an end-user perspective, organizations are attracted to [unified data management platforms] this option as well, anticipating improved total cost of ownership due to less integration and maintenance between data quality solutions and adjacent applications."1
1 Gartner, The State of Data Quality Solutions: Augment, Automate and Simplify, Melody Chien, Ankush Jain, 15 March 2022
Each enhanced component of the Syniti Knowledge Platform includes significant new functionality, updates and enhancements, all of which are amplified by their integration.
With these new combined capabilities, organizations will benefit from:
More efficient data management: From data identification through to resolution, stakeholders can collaborate in one platform. With a single catalog underpinning all data management activities, data work can be reused across multiple projects, helping drive faster and cheaper data management initiatives.
Better resourcing & improved business processes: Linking data management and quality to business outcomes improves processes and decision-making while helping organizations get more bang for their buck when allocating time and resources. Data quality issues with the greatest impact are automatically detected, and KPI improvements are tracked over time with smart remediation pipelines.
Faster ROI & savings potential: The Syniti Knowledge Platform offers hundreds of proven, out-of-the-box data quality rules, reports and business outcome-related dashboards, which can help users discover millions of dollars in savings. Rules created during data migrations can be reused for ongoing data quality, saving time and enforcing compliance. Knowledge reuse can help reduce the effort of future data projects by 50%.
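The idea of reusable data quality rules feeding a tracked KPI can be sketched in a few lines of Python. The rule names, record fields and KPI definition here are hypothetical illustrations of the pattern, not the Syniti Knowledge Platform's actual rule library.

```python
# Illustrative sketch of reusable data quality rules driving a quality KPI.
# Rule names and sample records are hypothetical.

import re

RULES = {
    "customer_id_present": lambda r: bool(r.get("customer_id")),
    "email_well_formed":   lambda r: re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+",
                                                  r.get("email", "")) is not None,
    "country_is_iso2":     lambda r: re.fullmatch(r"[A-Z]{2}",
                                                  r.get("country", "")) is not None,
}

def quality_kpi(records, rules=RULES):
    """Return the share of records passing every rule, plus per-rule failure counts."""
    failures = {name: 0 for name in rules}
    passing = 0
    for rec in records:
        ok = True
        for name, check in rules.items():
            if not check(rec):
                failures[name] += 1
                ok = False
        passing += ok
    return passing / len(records), failures

records = [
    {"customer_id": "C1", "email": "a@example.com", "country": "US"},
    {"customer_id": "",   "email": "bad-address",   "country": "usa"},
]
kpi, fails = quality_kpi(records)
print(kpi)    # 0.5
print(fails)  # {'customer_id_present': 1, 'email_well_formed': 1, 'country_is_iso2': 1}
```

Because the rules are plain callables, the same set written for a migration can later score production data, which is the reuse the announcement emphasizes.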
"Data quality isn't a one-time event. Organizations need a unified approach that enables them not just to rapidly find bad data, but to efficiently fix it and sustain that high quality to drive continuous, ongoing value. The Syniti Knowledge Platform's new capabilities allow our users to leverage a more efficient, interconnected and user-friendly platform in a way that's directly tied to business outcomes and objectives."
Jon Green, Vice President, Product Management, Syniti
Kevin Campbell, CEO, Syniti, said: "Poor quality data pollutes the entire organization, negatively impacting business operations and wasting time, money and resources. We have purpose-built a data platform to drive business value as opposed to the many siloed solutions that treat data quality as purely a technical exercise. We want our customers to spend more time drawing insights from trusted data versus finding and fixing data problems."
Allan Coulter, global chief technology officer for SAP Services, IBM said: "The strategic importance of clean, high-quality data cannot be overstated: it is critical to any business modernization effort and to unlocking potential from future analytics and insights. It is exciting to see the new capabilities Syniti is adding to its Syniti Knowledge Platform to help customers succeed in their transformation journeys."
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.
Syniti solves the world's most complex data challenges by uniquely combining intelligent, AI-driven software and vast data expertise to yield certain and superior business outcomes. For over 25 years, Syniti has partnered with the Fortune 2000 to unlock valuable insights that ignite growth, reduce risk and increase their competitive advantage. Syniti's silo-free enterprise data management platform supports data migration, data quality, data replication, master data management, analytics, data governance, and data strategy in a single, unified solution. Syniti is a portfolio company of private equity firm Bridge Growth Partners LLC.
BUSINESS INTELLIGENCE, BIG DATA MANAGEMENT
Comet | November 17, 2022
Comet, provider of the leading MLOps platform for machine learning (ML) teams from startup to enterprise, today announced a bold new product: Kangas. Open sourced to democratize large-scale visual dataset exploration and analysis for the computer vision and machine learning community, Kangas helps users understand and debug their data in a new and highly intuitive way. With Kangas, visualizations are generated in real time, enabling ML practitioners to group, sort, filter, query and interpret their structured and unstructured data to derive meaningful information and accelerate model development.
Data scientists often need to analyze large-scale datasets during both data preparation and model training, which can be overwhelming and time-consuming. Kangas makes it possible to intuitively explore, debug and analyze data in real time to quickly gain insights, leading to better, faster decisions. With Kangas, users can transform datasets of any scale into clear visualizations.
“A key component of data-centric Machine Learning is being able to understand how your training data impacts model results and where your model predictions are wrong. Kangas accomplishes both of these goals and dramatically improves the experience for ML practitioners.”
Gideon Mendels, CEO and co-founder of Comet
Putting Large Scale Machine Learning Dataset Analysis at Your Fingertips
Developed with the unique needs of ML practitioners in mind, Kangas is a scalable, dynamic and interoperable tool that surfaces patterns buried deep within oceans of data. With Kangas, data scientists can query large-scale datasets in a manner natural to their problem, interacting and engaging with their data in novel ways.
Noteworthy benefits of Kangas include:
Unparalleled Scalability: Kangas was developed to handle large datasets with high performance.
Purpose Built: Computer Vision/ML concepts like scoring, bounding boxes and more are supported out-of-the-box, and statistics/charts are generated automatically.
Support for Different Forms of Media: Kangas is not limited to traditional text queries. It also supports images, videos and more.
Interoperability: Kangas can run in a notebook, as a standalone local app or even deployed as a web app. It ingests data in a simple format that makes it easy to work with whatever tooling data scientists already use.
Open Source: Kangas is 100% open source and is built by and for the ML community.
Kangas was designed for the entire community, to be embraced by students, researchers and the enterprise. As individuals and teams work to further their ML initiatives, they will be able to leverage the full benefits of Kangas. Because it is open source, anyone can contribute to and further enhance it as well.
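The group, sort and filter operations Kangas performs over prediction data can be illustrated with plain Python. This is a conceptual sketch of the workflow, not the Kangas API; the rows and scores are made up.

```python
# Minimal, library-free sketch of grouping, sorting and filtering model
# predictions, the kind of dataset exploration Kangas automates at scale.

from itertools import groupby
from operator import itemgetter

rows = [
    {"label": "cat", "score": 0.91},
    {"label": "dog", "score": 0.42},
    {"label": "cat", "score": 0.77},
    {"label": "dog", "score": 0.88},
]

# Filter: keep confident predictions only.
confident = [r for r in rows if r["score"] > 0.5]

# Sort, then group by label, summarizing each group by its best score.
confident.sort(key=itemgetter("label"))
summary = {label: max(r["score"] for r in grp)
           for label, grp in groupby(confident, key=itemgetter("label"))}
print(summary)  # {'cat': 0.91, 'dog': 0.88}
```

Kangas's value is doing this interactively, with images and bounding boxes as first-class values, over datasets far too large for hand-written scripts like this one.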
“Interoperability and flexibility are inherent in Comet’s value proposition, and Comet aims to expand on that value through open source contributions,” added Mendels. “Kangas is a continuation of all of our efforts, and we couldn’t wait to get its capabilities into the hands of as many data scientists, data engineers and ML engineers as possible. We believe by open sourcing it, Comet can help teams get the most out of their ML projects in ways that have not been possible previously.”
Kangas is available as an open source package for any type of use case, released under the Apache License 2.0 and open to contributions from community members.
Comet provides an MLOps platform that data scientists and machine learning teams use to manage, optimize, and accelerate the development process across the entire ML lifecycle, from training runs to monitoring models in production. Comet’s platform is trusted by over 150 enterprise customers including Affirm, Cepsa, Etsy, Uber and Zappos. Individuals and academic teams use Comet’s platform to advance research in their fields of study. Founded in 2017, Comet is headquartered in New York, NY with a remote workforce in nine countries on four continents. Comet is free to individuals and academic teams. Startup, team, and enterprise licensing is also available.
BIG DATA MANAGEMENT, BUSINESS STRATEGY
New Relic | September 17, 2022
New Relic, the observability company, announced support for Amazon Virtual Private Cloud (Amazon VPC) Flow Logs on Amazon Kinesis Data Firehose to reduce the friction of sending logs to New Relic. Amazon VPC Flow Logs from AWS is a feature that allows customers to capture information about the IP traffic going to and from network interfaces in their Virtual Private Cloud (VPC). With New Relic support for Amazon VPC Flow Logs, both AWS and New Relic customers can quickly gain a clear understanding of a network’s performance and troubleshoot activity without impacting network throughput or latency.
Network telemetry is challenging even for network engineers. To unlock cloud-scale observability, engineers need to explore VPC performance and connectivity across multiple accounts and regions to understand if an issue started in the network or somewhere else. To solve this, New Relic has streamlined the delivery of Amazon VPC Flow Logs by allowing engineers to send them to New Relic via Kinesis Data Firehose, which reliably captures, transforms, and delivers streaming data to data lakes, data stores, and analytics services. With New Relic’s simple “add data” interface, it only takes moments to configure Amazon VPC Flow Logs using the AWS Command Line Interface (AWS CLI) or an AWS CloudFormation template. Instead of digging through raw logs across multiple accounts, any engineer can begin with an Amazon Elastic Compute Cloud (Amazon EC2) instance they own and begin to explore the data that matters, regardless of the AWS account or AWS Region.
“New Relic continues to invest in our relationship with AWS. Helping customers gain visibility into their cloud networking environment increases their overall application observability. Our support for Amazon VPC shows our commitment to enhancing our joint customers’ observability experience.”
Riya Shanmugam, GVP, Global Alliances and Channels at New Relic
“AWS is delighted to continue our strategic collaboration with New Relic to help customers innovate and migrate faster to the cloud,” said Nishant Mehta, Director of PM – EC2 and VPC Networking at AWS. “New Relic’s connected experience for Amazon VPC Flow Logs, paired with the simplicity of using Kinesis Data Firehose, enables our joint customers to easily understand how their networks are performing, troubleshoot networking issues more quickly, and explore their VPC resources more readily.”
With the New Relic support for Amazon VPC Flow Logs on Kinesis Data Firehose, customers can:
Monitor and alert on network traffic from within New Relic.
Visualize network performance metrics such as bytes and packets per second, as well as accepts and rejects per second across every TCP or UDP port.
Explore flow log deviations to look for unexpected changes in network volume or health.
Diagnose overly restrictive security group rules or potentially malicious traffic issues.
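The metrics in the list above are derived from individual flow log records. As a sketch of what that raw data looks like, here is a small Python parser for a default-format (version 2) VPC Flow Log record; the field order follows AWS's documented default format, while the sample record itself is made up.

```python
# Sketch: parsing a default-format Amazon VPC Flow Log record, the kind of
# data New Relic ingests via Kinesis Data Firehose. Sample values are invented.

FIELDS = ["version", "account_id", "interface_id", "srcaddr", "dstaddr",
          "srcport", "dstport", "protocol", "packets", "bytes",
          "start", "end", "action", "log_status"]

def parse_flow_log(line):
    """Split a space-separated flow log record into a dict with numeric fields cast."""
    rec = dict(zip(FIELDS, line.split()))
    for key in ("srcport", "dstport", "protocol", "packets", "bytes", "start", "end"):
        rec[key] = int(rec[key])
    return rec

sample = ("2 123456789012 eni-0a1b2c3d 10.0.1.5 10.0.2.9 "
          "443 49152 6 10 8400 1600000000 1600000060 ACCEPT OK")
rec = parse_flow_log(sample)

# Bytes per second over the capture window, e.g. for a throughput chart.
rate = rec["bytes"] / (rec["end"] - rec["start"])
print(rec["action"], rate)  # ACCEPT 140.0
```

Aggregating fields like `bytes`, `packets` and `action` across many such records per port and per interface yields exactly the accepts/rejects-per-second and throughput views described above.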
“Our architecture contains more than 200 microservices running on AWS. When something goes wrong, we need to find the root cause quickly to put out what we at Gett term as ‘fires,’” said Dani Konstantinovski, Global Support Manager at Gett. “With New Relic capabilities we can identify the problem, understand exactly what services were affected, what’s the reason, and what we need to do to resolve it. New Relic gives us this observability—it helps us to provide better service for our customers.”
“Proactively managing customer experience is essential to all businesses that provide part or all of their services through applications. Therefore it’s essential for engineers to have a clear understanding of their network performance and have the data needed to troubleshoot activity before it impacts customers. Also, the quality of the data is fundamental to making good decisions,” said Stephen Elliot, IDC Group Vice President, I&O, Cloud Operations and DevOps. “Solutions that ensure fast delivery of high-quality data provide engineers with the ability to act quickly and decisively with confidence, saving businesses from the costs associated with negative customer experiences.”
About New Relic
As a leader in observability, New Relic empowers engineers with a data-driven approach to planning, building, deploying, and running great software. New Relic delivers the only unified data platform that empowers engineers to get all telemetry—metrics, events, logs, and traces—paired with powerful full stack analysis tools to help engineers do their best work with data, not opinions. Delivered through the industry’s first usage-based consumption pricing that’s intuitive and predictable, New Relic gives engineers more value for the money by helping improve planning cycle times, change failure rates, release frequency, and mean time to resolution. This helps the world’s leading brands including Adidas Runtastic, American Red Cross, Australia Post, Banco Inter, Chegg, GoTo Group, Ryanair, Sainsbury’s, Signify Health, TopGolf, and World Fuel Services (WFS) improve uptime, reliability, and operational efficiency to deliver exceptional customer experiences that fuel innovation and growth.