BIG DATA MANAGEMENT

Alteryx to Acquire Trifacta

Alteryx | January 07, 2022

The analytics automation company Alteryx, Inc. has announced its acquisition of Trifacta, an award-winning cloud company known for making data analytics faster and more intuitive through scalable data management and machine learning.

Enterprise customers deploy modern data architectures based on cloud data warehouses to support SaaS-based applications and analytics. Meanwhile, business users' demand for timely insights from these enormous cloud datasets to power their digital transformation initiatives is at an all-time high, necessitating the development of scalable, secure data analysis solutions.

Trifacta provides cloud-first capabilities that help businesses accelerate their analytics transformation, and the company has a strong presence among the Global 2000 and other large corporations. The acquisition will anchor and accelerate Alteryx's journey to the cloud and open up new categories of buyers across IT within large companies.

Mark Anderson, CEO of Alteryx, shared, "Trifacta brings highly skilled cloud-first engineering, product, and go-to-market teams with decades of combined experience building and bringing to market mission-critical, cloud-native analytics solutions. Together, Trifacta and Alteryx expand our total addressable market with additional opportunities to target new data and cloud transformation initiatives for Global 2000 customers." He further added, "With Trifacta, our combined cloud platform will serve the needs of entire enterprises, from data analytics teams and IT/technology teams to line-of-business users."

Trifacta offers proven, scalable cloud data management that runs natively and securely on major cloud platforms such as Google Cloud Platform (GCP), Amazon Web Services (AWS), and Microsoft Azure. Alteryx plans to integrate its premier low-code/no-code analytics solution with Trifacta's cloud-native capabilities to give business customers a flexible deployment option for their analytics needs.

"We're incredibly excited to join forces with Alteryx to create the industry's leading independent cloud analytics provider," said Adam Wilson, CEO of Trifacta. "Together, we have the opportunity to enable thousands of customers globally to unlock powerful business insights with the combination of Trifacta's Data Engineering Cloud and Alteryx's Analytics Automation platform."

Spotlight

As organizations embrace AI-powered innovations and deploy deep learning applications, it is crucial to design a robust infrastructure that takes advantage of the compelling features of both on-premises and cloud deployment options. You must consider several technical requirements before you design and deploy an AI-enabled application: the application type, required performance, data lake size and growth, data backups and archives, and TCO, to name a few.


Other News
BUSINESS INTELLIGENCE, BIG DATA MANAGEMENT

Variphy Releases Software Version 13.0

Variphy | September 20, 2022

Variphy, the preferred Cisco Unified Collaboration reporting and analytics software solution for over 1,500 businesses, announced today the release of Variphy 13.0, its latest software version.

"Our latest release delivers features allowing for an even faster and more streamlined reporting experience with greatly improved scalability," Derek Falter, director of product development, said. "With the addition of Multi-Database Reporting, Auto-Archiving, and Audible Alerts, we make it easier to monitor and report on CUCM, UCCX, and CUBE activities."

Variphy 13.0 core feature enhancements include:

CDR Auto Database Archiving & Multi-Database Support: Variphy overhauled the CUCM and CUBE configuration UIs to isolate CDR-specific settings and activation. Auto Database Archiving allows archive databases to be created automatically for an even more seamless reporting process.

Scheduled Call Analytics Report Queueing: The update includes a new application setting to determine the maximum number of reports executed simultaneously.

CCX CSQ Widget Threshold Audible Alerts: An audible alert will be triggered when a configured threshold is breached. The flexible audio configuration is designed to fit any purpose.

Report on Individual CUBE Events: Variphy 13.0 features the ability to build CUBE CDR reports, widgets, and searches based on individual events. A pill select input was included to allow clients to focus their CDR output on individual events or sequences.

Active Directory Authentication Improvements: The Active Directory Server form has been updated to include fields to capture an Active Directory Distinguished Name user and password.

CUBE CDR Monitoring: CUBE CDR Monitoring alerts users if Variphy does not receive a minimum number of CDRs from CUBE in the previous hour.

The update includes other application settings and improvements for seamless monitoring and reporting. Email alerts, CUBE CDR processing, and widget settings are among the enhancements available. Variphy 13.0 is just the latest version in the company's tradition of consistently delivering software updates and new products. Updates are always free for existing users.

About Variphy

Variphy creates leading-edge UC tools and analytics software solutions to streamline the service delivery and management of Cisco Unified Communications and Collaboration. Since 2004, it has helped over 1,500 organizations visualize, search, analyze, and report on their Cisco UC environments. Product development, sales and marketing, service delivery, and support teams are based in the United States.

Read More

BUSINESS INTELLIGENCE, BIG DATA MANAGEMENT

Dynatrace Extends Grail to Power Business Analytics with Speed and Precision

Dynatrace | November 16, 2022

Software intelligence company Dynatrace announced today that it is extending its Grail™ causational data lakehouse to power business analytics. As a result, the Dynatrace® platform can instantly capture business data from first- and third-party applications at massive scale without requiring engineering resources or code changes. It prioritizes business data separately from observability data and stores, processes, and analyzes this data while retaining the context of the complex cloud environments where it originated.

Dynatrace designed these enhancements to enable business and IT teams to drive accurate, reliable, cost-effective automation and conduct efficient ad hoc analytics covering a wide range of business processes. Examples include order fulfillment and bill payments, service activation and customer onboarding workflows, and the impact on revenue from new digital services. Today’s announcement builds on capabilities that Dynatrace launched in October 2022, leveraging Grail to power log analytics and management. The company expects to continue to extend Grail to power additional development, security, IT, and business solutions.

Organizations depend on digital services to drive revenue, customer satisfaction, and competitive differentiation. To optimize these services and user experiences, business and IT teams increasingly rely on insights from various business data, including application usage, conversion rates, and inventory returns. Yet traditional business intelligence tools lack the speed, scale, flexibility, and granularity required to deliver insights about services built on complex cloud architectures. In fact, according to a study from Deloitte, two-thirds of organizations are not comfortable accessing or using data from their business intelligence tools. Business analytics in modern cloud environments requires a new approach.

“Dynatrace gives us valuable insight into the business impact of our applications’ performance and enables our teams to proactively solve problems, deliver better customer experiences, and drive more value for our organization,” said Stephen Evans, Head of Quality, Monitoring, SRE/DevOps Technology at PVH. “This enhanced capability to access and store all of our business data provides the scalability our business needs. It also frees our teams from the constraints of sifting through data to determine what is valuable and what should be stored. Dynatrace’s unique ability to analyze all this data and deliver precise and contextualized answers in real time enables us to improve our digital landscape.”

“To drive digital transformation at scale, organizations need trustworthy and real-time insights from their business data. Existing solutions often rely on stale data, fail to deliver precise answers in IT context, and require manual maintenance and coding from engineers,” said Bernd Greifeneder, Founder and Chief Technical Officer at Dynatrace. “The Grail causational data lakehouse uniquely positions the Dynatrace platform to overcome these hurdles. By elevating the priority of business data to ensure it arrives unsampled and with lossless precision, even from third-party applications where developers are not accessible, business and IT teams using the Dynatrace platform can now easily access valuable business insights on demand. This has the capability to unlock nearly unlimited business analytics use cases, allowing our customers to instantly answer their most challenging questions with accuracy, clarity, and speed.”

About Dynatrace

Dynatrace exists to make the world’s software work perfectly. Our unified software intelligence platform combines broad and deep observability and continuous runtime application security with the most advanced AIOps to provide answers and intelligent automation from data at an enormous scale. This enables innovators to modernize and automate cloud operations, deliver software faster and more securely, and ensure flawless digital experiences. That’s why the world’s largest organizations trust the Dynatrace® platform to accelerate digital transformation.
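For readers curious what feeding business data into the platform can look like in practice, below is a rough, hedged sketch that posts a single custom business event over HTTP. The endpoint path, required fields, and token scope reflect our understanding of Dynatrace’s business events ingest API and should be treated as assumptions; the environment URL, token, and event fields are placeholders, not details taken from the announcement above.

```typescript
// Hedged sketch: send one custom business event to a Dynatrace environment.
// The /api/v2/bizevents/ingest path and the event.provider / event.type fields
// are assumptions about the business events ingest API; the environment URL,
// token, and payload values are placeholders for illustration only.
const DT_ENVIRONMENT = "https://abc12345.live.dynatrace.com"; // placeholder
const DT_API_TOKEN = "dt0c01.EXAMPLE";                        // placeholder token

async function sendOrderEvent(): Promise<void> {
  const response = await fetch(`${DT_ENVIRONMENT}/api/v2/bizevents/ingest`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Api-Token ${DT_API_TOKEN}`,
    },
    body: JSON.stringify({
      "event.provider": "web-shop",            // hypothetical source system
      "event.type": "com.example.order.paid",  // hypothetical event type
      orderId: "A-1234",
      amount: 49.99,
      currency: "USD",
    }),
  });
  console.log("Ingest responded with status", response.status);
}

sendOrderEvent().catch(console.error);
```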

Read More

BIG DATA MANAGEMENT, DATA SCIENCE

Cloudflare Launches Data Localization Suite in Asia to Help Customers Achieve Data Sovereignty

Cloudflare | September 22, 2022

Cloudflare, Inc., the security, performance, and reliability company helping to build a better Internet, today announced that Cloudflare’s Data Localization Suite (DLS) is now available in three new countries in the Asia Pacific region: Australia, India, and Japan. The Data Localization Suite will help businesses based in these countries, as well as global companies that do business in them, comply with their data localization obligations by using Cloudflare to easily set rules and controls on where their domestic data goes and who has access to it. This ultimately allows any business with customers in these countries to service their data locally while benefiting from the speed, security, and scalability of Cloudflare’s global network.

Nearly 70% of countries in Asia have passed or drafted new data protection and privacy legislation. This often makes it difficult for regional companies to use foreign-based vendors to handle domestic traffic. Without regional support, many businesses are under pressure to use only in-country vendors and may be required to restrict their application to one data center or one cloud provider’s region. This creates a trade-off between compliance and fast, secure experiences for end users. With the Data Localization Suite, businesses of any size or industry can now use Cloudflare to get more choice and control over how to meet their data locality needs, without sacrificing security or performance.

“No business should have to choose between compliance with local data regulation and a superior experience for their customers. And yet, we hear time and again that companies are forced to do so in the face of a complex and ever-changing landscape of regional legislation,” said Matthew Prince, co-founder and CEO, Cloudflare. “By expanding our Data Localization Suite to our customers in Australia, India, and Japan, we’re ensuring data locality doesn’t have to come at the expense of the speed, security, and privacy users expect and deserve online.”

Now, businesses in Australia, India, and Japan can use Cloudflare’s Data Localization Suite to:

Control where traffic is serviced: Companies can choose the data center locations where their traffic is inspected. Businesses can also use Cloudflare’s Geo Key Manager to choose where private keys are held.

Build and deploy serverless code, with regional control: Build applications that combine global performance with local compliance requirements. Jurisdiction Restrictions for Workers Durable Objects make it easy to build serverless applications that are confined to a specific region (see the sketch after this story).

Use Cloudflare’s security features to protect their web properties: Customers can use WAF, Bot Management, DDoS protection, and more to ensure their websites are safe and stay online.

Align with global and regional security certifications: Businesses can trust that they are compliant with global privacy and security certifications such as ISO 27001, 27701, and 27018 while still offering performance and speed at scale.

“Asia Pacific has over 2.5 billion Internet users, representing more than half of the total Internet users in the world, and data protection and privacy have become increasingly important in this region. Preserving end-user privacy is core to Cloudflare’s mission of helping to build a better Internet, and we look forward to working with businesses across Australia, India, and Japan to enable them to provide fast, private, reliable, and secure services to their end users,” said Jonathon Dixon, VP and Managing Director, Asia Pacific, Japan, and China, Cloudflare.

The Data Localization Suite has supported Cloudflare customers in alignment with European localization requirements and regulations since 2020.

“We’re thrilled to extend Cloudflare’s localization benefits to our customers, providing them greater control as they manage international data transfer requirements,” said Blake Brannon, Chief Strategy Officer, OneTrust. “Our partnership with Cloudflare supports our mission to empower our customers to navigate the evolving regulatory landscape with ease.”

Today, Cloudflare’s global network spans more than 275 cities in over 100 countries, including more than 100 points of presence across Asia Pacific, bringing its security, performance, and reliability solutions as close to its regional customers as possible. Cloudflare continues to invest in the region, with offices in Beijing, Singapore, Sydney, and Tokyo. In March, Cloudflare also announced 18 new cities added to its global network, including Bhubaneshwar, India; Fukuoka, Japan; Kanpur, India; and Naha, Japan.

About Cloudflare

Cloudflare, Inc. is on a mission to help build a better Internet. Cloudflare’s suite of products protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare have all web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvements in performance and a decrease in spam and other attacks. Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures 2018 list and ranked among the World’s Most Innovative Companies by Fast Company in 2019. Headquartered in San Francisco, CA, Cloudflare has offices in Austin, TX, Champaign, IL, New York, NY, San Jose, CA, Seattle, WA, Washington, D.C., Toronto, Lisbon, London, Munich, Paris, Beijing, Singapore, Sydney, and Tokyo.
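To make the Durable Objects jurisdiction restriction mentioned in this story a little more concrete, here is a minimal sketch of a Cloudflare Worker that pins new objects to a jurisdiction. The binding name COUNTER is hypothetical, and "eu" is the documented example tag used only to show the API shape; the announcement above does not name jurisdiction tags for Australia, India, or Japan, so none are assumed here.

```typescript
// Minimal sketch of jurisdiction-restricted Durable Objects in a Worker.
// "COUNTER" is a hypothetical Durable Object binding; "eu" is the documented
// example jurisdiction tag, used here only to illustrate the API shape.
export interface Env {
  COUNTER: DurableObjectNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // IDs created with a jurisdiction pin the object, and its stored data,
    // to data centers within that jurisdiction.
    const id = env.COUNTER.newUniqueId({ jurisdiction: "eu" });
    const stub = env.COUNTER.get(id);
    return stub.fetch(request);
  },
};
```

Because the restriction travels with the object ID, subsequent reads and writes against that object are handled within the chosen jurisdiction without extra routing logic in application code.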

Read More

BIG DATA MANAGEMENT, BUSINESS STRATEGY

New Relic Announces Support for Amazon VPC Flow Logs on Amazon Kinesis Data Firehose

New Relic | September 17, 2022

New Relic, the observability company, announced support for Amazon Virtual Private Cloud (Amazon VPC) Flow Logs on Amazon Kinesis Data Firehose to reduce the friction of sending logs to New Relic. Amazon VPC Flow Logs is an AWS feature that allows customers to capture information about the IP traffic going to and from network interfaces in their Virtual Private Cloud (VPC). With New Relic support for Amazon VPC Flow Logs, both AWS and New Relic customers can quickly gain a clear understanding of a network’s performance and troubleshoot activity without impacting network throughput or latency.

Network telemetry is challenging even for network engineers. To unlock cloud-scale observability, engineers need to explore VPC performance and connectivity across multiple accounts and regions to understand whether an issue started in the network or somewhere else. To solve this, New Relic has streamlined the delivery of Amazon VPC Flow Logs by allowing engineers to send them to New Relic via Kinesis Data Firehose, which reliably captures, transforms, and delivers streaming data to data lakes, data stores, and analytics services. With New Relic’s simple “add data” interface, it takes only moments to configure Amazon VPC Flow Logs using the AWS Command Line Interface (AWS CLI) or an AWS CloudFormation template. Instead of digging through raw logs across multiple accounts, any engineer can begin with an Amazon Elastic Compute Cloud (Amazon EC2) instance they own and explore the data that matters, regardless of the AWS account or AWS Region.

“New Relic continues to invest in our relationship with AWS. Helping customers gain visibility into their cloud networking environment increases their overall application observability. Our support for Amazon VPC shows our commitment to enhancing our joint customers’ observability experience,” said Riya Shanmugam, GVP, Global Alliances and Channels at New Relic.

“AWS is delighted to continue our strategic collaboration with New Relic to help customers innovate and migrate faster to the cloud,” said Nishant Mehta, Director of PM – EC2 and VPC Networking at AWS. “New Relic’s connected experience for Amazon VPC Flow Logs, paired with the simplicity of using Kinesis Data Firehose, enables our joint customers to easily understand how their networks are performing, troubleshoot networking issues more quickly, and explore their VPC resources more readily.”

With New Relic support for Amazon VPC Flow Logs on Kinesis Data Firehose, customers can:

Monitor and alert on network traffic from within New Relic.

Visualize network performance metrics such as bytes and packets per second, as well as accepts and rejects per second, across every TCP or UDP port.

Explore flow log deviations to look for unexpected changes in network volume or health.

Diagnose overly restrictive security group rules or potentially malicious traffic issues.

“Our architecture contains over 200 microservices running on AWS. When something goes wrong, we need to find the root cause quickly to put out what we at Gett term ‘fires,’” said Dani Konstantinovski, Global Support Manager at Gett. “With New Relic capabilities we can identify the problem, understand exactly which services were affected, what the reason is, and what we need to do to resolve it. New Relic gives us this observability—it helps us provide better service for our customers.”

“Proactively managing customer experience is essential to all businesses that provide part or all of their services through applications. Therefore, it’s essential for engineers to have a clear understanding of their network performance and have the data needed to troubleshoot activity before it impacts customers. Also, the quality of the data is fundamental to making good decisions,” said Stephen Elliot, IDC Group Vice President, I&O, Cloud Operations and DevOps. “Solutions that ensure fast delivery of high-quality data provide engineers with the ability to act quickly and decisively with confidence, saving businesses from the costs associated with negative customer experiences.”

About New Relic

As a leader in observability, New Relic empowers engineers with a data-driven approach to planning, building, deploying, and running great software. New Relic delivers the only unified data platform that empowers engineers to get all telemetry—metrics, events, logs, and traces—paired with powerful full-stack analysis tools to help engineers do their best work with data, not opinions. Delivered through the industry’s first usage-based consumption pricing that’s intuitive and predictable, New Relic gives engineers more value for the money by helping improve planning cycle times, change failure rates, release frequency, and mean time to resolution. This helps the world’s leading brands, including Adidas Runtastic, American Red Cross, Australia Post, Banco Inter, Chegg, GoTo Group, Ryanair, Sainsbury’s, Signify Health, TopGolf, and World Fuel Services (WFS), improve uptime, reliability, and operational efficiency to deliver exceptional customer experiences that fuel innovation and growth.
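For readers who prefer to script the setup described in this story rather than use the AWS CLI or a CloudFormation template, the sketch below shows roughly how the same flow-log configuration could be created with the AWS SDK for JavaScript (v3). The VPC ID and delivery-stream ARN are placeholders, and the Kinesis Data Firehose stream is assumed to already exist and to forward its records to New Relic.

```typescript
// Rough sketch: enable VPC Flow Logs that publish to an existing
// Kinesis Data Firehose delivery stream (assumed to already forward to New Relic).
// The VPC ID and delivery stream ARN below are placeholders.
import { EC2Client, CreateFlowLogsCommand } from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "us-east-1" });

async function enableFlowLogs(): Promise<void> {
  const result = await ec2.send(
    new CreateFlowLogsCommand({
      ResourceType: "VPC",
      ResourceIds: ["vpc-0123456789abcdef0"], // placeholder VPC ID
      TrafficType: "ALL",                     // capture accepted and rejected traffic
      LogDestinationType: "kinesis-data-firehose",
      LogDestination:
        "arn:aws:firehose:us-east-1:123456789012:deliverystream/new-relic-vpc-flow-logs",
    })
  );
  console.log("Created flow log IDs:", result.FlowLogIds);
}

enableFlowLogs().catch(console.error);
```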

Read More

