Snowflake-powered OneBill now offers descriptive and predictive data analytics

OneBill | February 02, 2022

According to Gartner, by 2023 more than 33% of large organizations will have analysts practicing decision intelligence, including decision modeling. As a result, businesses will become increasingly reliant on aggregating data from across the organization to inform descriptive and predictive analytics.

OneBill, a provider of end-to-end billing and revenue management software, has seized this rising opportunity to build intelligent analytics by combining OneBill's diverse data sources with the rich data available in the Snowflake Data Marketplace.

Every month, OneBill processes millions of data inputs spanning product inventories, usage (e.g., minutes, data, volume), price values, taxation, and much more to generate invoices. OneBill therefore saw an opportunity to investigate how this massive amount of data could be transformed into useful analytics, allowing its clients to perform predictive analytics that inform future revenue-strategy decisions.

Thanks to the interface created between the two platforms, data from the OneBill platform will be fed into Snowflake and transformed into dashboard reporting tools that can be customized for each firm. Customers can also compare and reconcile their billing data with their accounting reports, since Snowflake can ingest data from other corporate systems such as accounting and taxation platforms.
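As an illustration, this kind of billing-versus-accounting reconciliation typically reduces to a join-and-compare step over per-invoice totals. The sketch below is a minimal, hypothetical example of that logic in Python; the invoice IDs and field layout are invented for illustration, and in practice the same comparison could be expressed as a SQL join run inside Snowflake.

```python
# Hedged sketch of reconciliation logic: compare per-invoice totals from
# a billing system against an accounting system. All names are illustrative,
# not OneBill's or Snowflake's actual schema.

def reconcile(billing, accounting, tolerance=0.01):
    """Return discrepancies between two invoice-total mappings.

    billing, accounting: dicts mapping invoice_id -> amount.
    Returns a list of (invoice_id, billing_amount, accounting_amount)
    for every invoice that is missing on one side or whose amounts
    differ by more than `tolerance`.
    """
    discrepancies = []
    for invoice_id in sorted(set(billing) | set(accounting)):
        b = billing.get(invoice_id)
        a = accounting.get(invoice_id)
        if b is None or a is None or abs(b - a) > tolerance:
            discrepancies.append((invoice_id, b, a))
    return discrepancies

billing = {"INV-001": 120.00, "INV-002": 75.50, "INV-003": 9.99}
accounting = {"INV-001": 120.00, "INV-002": 80.00}

for row in reconcile(billing, accounting):
    print(row)
# Flags INV-002 (amount mismatch) and INV-003 (missing in accounting)
```

Matching invoices within the tolerance are silently accepted; everything else surfaces for review, which is the essence of a billing reconciliation report.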

"We are excited by this partnership, as we see Snowflake as a strong partner to build this modernized, scalable, and adaptive platform, based on their unique set of product portfolio elements. Furthermore, their high level of computing, storage, and security capabilities is second to none, making them a partner we can truly trust in this venture,"

OneBill Software Founder & CEO, JK Chelladurai

"We are encouraged by OneBill's latest capabilities, Powered by Snowflake, which can be transformative for many businesses as they pursue innovation with these advanced revenue reporting tools," said Colleen Kapase, SVP of Worldwide Partners and Alliances at Snowflake. "As Snowflake continues to make strides to mobilize the world's data, partners like OneBill give our customers greater flexibility around how they make key revenue management decisions."


When an Azure region is unavailable due to issues at the data center level, this should not affect the availability of data. Hence, database replication is required to avoid data loss at any given point in time. However, some Microsoft Azure regions do not support Azure database replication across regions. To overcome this, manual replication of the MySQL database using MySQL Workbench is the only feasible option.
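Manual replication of this kind amounts to two steps: a consistent logical dump of the primary region's database, followed by replaying that dump against the server in the secondary region. The sketch below builds the two command lines in Python without executing them; the host names, user, and database name are placeholders, and in practice MySQL Workbench's Data Export/Import wizard drives the same `mysqldump`/`mysql` pair.

```python
# Hedged sketch: manual cross-region MySQL replication via mysqldump.
# All host/user/database names below are placeholders, not real endpoints.
import shlex

def dump_command(host, user, database, out_file):
    # Logical backup of the primary region's database.
    # --single-transaction takes a consistent snapshot for InnoDB tables
    # without locking them for the duration of the dump.
    return ["mysqldump", "--host", host, "--user", user,
            "--single-transaction",
            "--routines", "--triggers",
            "--result-file", out_file, database]

def restore_command(host, user, database, in_file):
    # Replay the dump against the secondary region's server.
    return (f"mysql --host {host} --user {user} "
            f"{database} < {shlex.quote(in_file)}")

cmd = dump_command("primary.region1.example", "repl", "billing", "billing.sql")
print(" ".join(cmd))
print(restore_command("replica.region2.example", "repl", "billing", "billing.sql"))
```

Because this is a point-in-time copy rather than continuous replication, the dump would need to be scheduled (and its lag accepted) until native cross-region replication becomes available in the affected regions.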



Related News


DataRobot Announces the Availability of DataRobot Notebooks

DataRobot | January 11, 2023

On January 10, 2023, DataRobot, a pioneer in the artificial intelligence sector, announced the availability of DataRobot Notebooks, a notebook solution fully integrated into the DataRobot AI platform that allows data scientists to collaborate across code-first workflows with single-click access to embedded notebooks.

Notebooks are essential for data scientists to experiment swiftly and share findings through fast environment creation, interactive computation, and code fragments. However, as the number of notebook users in a data science organization rises, data science teams face issues such as managing notebooks at scale, maintaining extensive dependencies, and coping with sprawling and expensive libraries.

"We are entering a phase of AI governance where the collaboration and productivity gains of data science teams become increasingly important," said Mike Leone, Senior Analyst at Enterprise Strategy Group. He further mentioned, "With DataRobot Notebooks, the flexibility to develop in preferred environments, including open-source ML tooling or in the DataRobot AI platform, streamlines the code development experience and allows data scientists to better collaborate as a team in a unified environment." (Source – Businesswire)

DataRobot Notebooks streamlines the code-development experience for data science processes, emphasizing automation, scalability, reproducibility, and collaboration. This improved capability provides data science teams with unique values, including:

Interoperability: DataRobot Notebooks is compliant and interoperable with the Jupyter Notebook standard, accelerating the onboarding process for the DataRobot AI platform.

Centralized management: DataRobot Notebooks is a uniform environment with fine-grained access controls and centralized governance, allowing data scientists to swiftly collaborate, organize, and share notebooks and related assets across people and teams.

Native integration within DataRobot: DataRobot Notebooks is natively connected with the DataRobot ecosystem, enabling data scientists to run their code directly on the platform with all the required libraries and tools.

Enhanced features: Users can now write and run custom code in cloud-based notebooks that provide access to scalable, private, and containerized computing environments.

About DataRobot

DataRobot is the pioneer in AI cloud, providing a unified platform for all users, data types, and environments to expedite the delivery of AI into production. The platform is trusted by clients worldwide across industries and verticals, including a third of the Fortune 50, and has delivered over a trillion predictions for the world's leading businesses.



Acceldata Announces the Launch of the Open-Source Version of Its Data Platform

Acceldata | January 02, 2023

Acceldata, the global leader in data observability, has recently announced the introduction of a new open-source version of its data platform, empowering enterprise data teams to innovate with cost-effective solutions for data observability. Several significant organizations from the fintech, telco, and data-provider industries have already contributed to, validated, and utilized the platform.

The open-source data platform provides reliable and community-validated versions of data observability libraries, as well as support for public, private, and hybrid settings, to address the evolving needs of the modern organization. Historically, enterprise data teams had limited alternatives for migrating to an open-source, community-based data platform. Acceldata's new open-source initiative consists of a data platform and six projects that are available to download for free under the Apache License Version 2.0. The data platform periodically synchronizes with open-source branches to verify conformance with the current code and new development, enabling new components to be added as innovation in the community and industry evolves.

The open-source data platform offers the following community advantages:

Deployment automation that enables observability, manageability, and package management

Flexibility in adopting technologies that provide on-demand services and elasticity in any environment

The ability for tenants to use and enhance services without affecting other tenants

A platform that guarantees maintainability, consistency, and stability of components over change cycles

About Acceldata

Acceldata, based in Campbell, California, was founded in 2018 and has created the world's first enterprise data observability cloud to help organizations develop and operate exceptional data products. Acceldata's solutions are used by customers all over the world, including Oracle, PhonePe (Walmart), PubMatic, DBS, and many more. Insight Partners, Lightspeed Venture Partners, March Capital, Sorenson Ventures, and Emergent Ventures are among Acceldata's investors.



Hammerspace Shatters Expectations for High-Performance File Data Architectures

Hammerspace | November 14, 2022

Hammerspace, the pioneer of the global data environment, today unveiled the performance capabilities that many of the most data-intensive organizations in the world depend on for high-performance data and storage in decentralized workflows. Hammerspace completely changes previously held notions of how unstructured data architectures can work, delivering the performance needed to free workloads from data silos, eliminate copy proliferation, and provide direct data access to applications and users, no matter where the data is stored.

Hammerspace allows organizations to take full advantage of the performance capabilities of any server, storage system, and network anywhere in the world. This capability enables a unified, fast, and efficient global data environment for the entire workflow, from data creation to processing, collaboration, and archiving across edge devices, data centers, and public and private clouds.

1) High-Performance Across Data Centers and to the Cloud: Saturate the Available Internet or Private Links

Instruments, applications, compute clusters, and the workforce are increasingly decentralized. With Hammerspace, all users and applications have globally shared, secured access to all data, no matter which storage platform or location it is on, as if it were all on a local NAS. Hammerspace overcomes data gravity to make remote data fast to use locally. Modern data architectures require data placement to be as local as possible to match the user's or application's latency and performance requirements. Hammerspace's Parallel Global File System orchestrates data automatically and by policy in advance to make data present locally without wasting time waiting for data placement. And data placement is fast: using dual 100GbE networks, Hammerspace can intelligently orchestrate data at 22.5 GB/second to where it is needed.

This performance level enables workflow automation to orchestrate data in the background on a file-granular basis, directly and by policy, making it possible to start working with the data as soon as the first file is transferred, without waiting for the entire data set to be moved locally. Unstructured data workloads in the cloud can take full advantage of as many compute cores as are allocated, and as much bandwidth as the job needs, even saturating the network within the cloud when desired to connect the compute environment with applications. A recent analysis of EDA workloads in Microsoft Azure showed that Hammerspace scales performance linearly, taking full advantage of the network configuration available in Azure. This high-performance cloud file access is necessary for compute-intensive use cases, including processing genomics data, rendering visual effects, training machine learning models, and implementing high-performance computing architectures in the cloud.

High-performance enhancements across data centers and to the cloud in the Release 5 software include:

Backblaze, Zadara, and Wasabi support

Continual system-wide optimization to increase scalability, improve back-end performance, and improve resilience in very large, distributed environments

A new Hammerspace Management GUI, with user-customizable tiles, a better administrator experience, and increased observability of activity within shares

Increased scale, raising the number of Hammerspace clusters supported in a single global data environment from 8 to 16 locations

2) High-Performance Across Interconnect within the Data Center: Saturate Ethernet or InfiniBand Networks within the Data Center

Data centers need massive performance to ingest data from instruments and large compute clusters. Hammerspace makes it possible to reduce the friction between resources, to get the most out of both the compute and storage environment, cutting the idle time spent waiting for data to ingest into storage. Hammerspace supports a wide range of high-performance storage platforms that organizations have in place today. The power of the Hammerspace architecture is its ability to saturate even the fastest storage and network infrastructures, orchestrating direct I/O and scaling linearly across otherwise incompatible platforms to maximize aggregate throughput and IOPS. It does this while providing the performance of a parallel file system coupled with the ease of standards-based global NAS connectivity and out-of-band metadata updates.

In one recent test with moderately sized server configurations deploying just 16 DSX nodes, the Hammerspace file system took advantage of the full storage performance to hit 1.17 Tbits/second with 32 KB file sizes and low CPU utilization, which was the maximum throughput the NVMe storage could handle. The tests demonstrated that performance would scale linearly to extreme levels if additional storage and networking were added.

High-performance enhancements across interconnect within the data center in the Release 5 software include:

A 20 percent increase in metadata performance to accelerate file creation in primary storage use cases

Accelerated collaboration on shared files in high client count environments

RDMA support for global data over NFS v4.2, providing high performance coupled with the simplicity and open standards of NAS protocols for all data in the global data environment, no matter where it is located

3) High-Performance Server-Local I/O: Deliver to Applications Near the Theoretical I/O Subsystem Maximum of Cloud Instances, VMs, and Bare Metal Servers

High-performance use cases, edge environments, and DevOps workloads all benefit from leveraging the full performance of the local server. Hammerspace takes full advantage of the underlying infrastructure, delivering 73.12 Gbits/sec from a single NVMe-based server, providing nearly the same performance through the file system that would be achieved on the same server hardware with direct-to-kernel access. The Hammerspace Parallel Global File System architecture separates the metadata control plane from the data path and can use embedded parallel file system clients with NFS v4.2 in Linux, resulting in minimal overhead in the data path. For servers running at the edge, Hammerspace elegantly handles situations where edge or remote sites become disconnected: since file metadata is global across all sites, local read/write continues until the site reconnects, at which time the metadata synchronizes with the rest of the global data environment.

Quotes:

David Flynn, founder and CEO of Hammerspace and previous co-founder and CEO of Fusion-IO: "Technology typically follows a continuum of incremental advancements over previous generations. But every once in a while, a quantum leap forward is taken with innovation that changes paradigms. This was the case at Fusion-IO when we invented the concept of highly reliable, high-performance SSDs that ultimately became the NVMe technology. Another paradigm shift is upon us to create high-performance global data architectures incorporating instruments and sensors, edge sites, data centers, and diverse cloud regions."

Eyal Waldman, co-founder and previous CEO of Mellanox Technologies, Hammerspace Advisory Board member: "The innovation at Mellanox was focused on increasing data center efficiency by providing the highest throughput and lowest latency possible in the data center and in the cloud to deliver data faster to applications and unlock system performance capability. I see high-performance access to global data as the next step in innovation for high-performance environments. The challenge of fast networks and fast computers has been well solved for years, but making remote data available to these environments was a poorly solved problem until Hammerspace came into the market. Hammerspace makes it possible to take cloud and data utilization to the next level of decentralization, where data resides."

Trond Myklebust, maintainer of the Linux kernel NFS client and Chief Technology Officer of Hammerspace: "Hammerspace helped drive the IETF process and wrote enterprise-quality code based on the standard, making NFS v4.2 enterprise-grade parallel-performance NAS a reality."

Jeremy Smith, CTO of Jellyfish Pictures: "We wanted to see if the technology really stood up to all the hype about RDMA to NFS v4.2 performance. The interconnectivity that RoCE/RDMA provides is really outstanding. When looking to get the maximum amount of performance for our clients, enabling this was an obvious choice."

Mark Nossokoff, Research Director at Hyperion Research: "Data being consumed by both traditional HPC modeling and simulation workloads and modern AI and HPDA workloads is being generated, stored, and shared between a disparate range of resources, such as the edge, HPC data centers, and the cloud. Current HPC architectures are struggling to keep up with the challenges presented by such a distributed data environment. By addressing the key areas of collaboration at scale while supporting system performance capabilities and minimizing potential costly data movement in HPC cloud environments, Hammerspace aims to deliver a key missing ingredient that many HPC users and system architects are looking for."

About Hammerspace

Hammerspace delivers a Global Data Environment that spans on-prem data centers and public cloud infrastructure, enabling the decentralized cloud. With origins in Linux, NFS, open standards, flash, and deep file system and data management technology leadership, Hammerspace delivers the world's first and only solution to connect global users with their data and applications, on any existing data center infrastructure or public cloud services including AWS, Google Cloud, Microsoft Azure, and Seagate Lyve Cloud.
