DATA ARCHITECTURE
Crunchy Data | January 04, 2022
Crunchy Data, the leading provider of trusted Open Source PostgreSQL products, solutions, and support, is proud to announce that PostgreSQL advocate Jean-Paul Argudo has joined its team to lead the company's expansion into France, building on its momentum in the European market.
Argudo previously served as the co-founder and CEO of Dalibo, where he helped build the company into the leading PostgreSQL provider in France. He has actively promoted Open Source technology since 2000 and has played an important role in driving PostgreSQL adoption throughout France. Argudo has been active in the PostgreSQL community for many years, serving as a member of the administration council and as one of the founders of the PostgreSQLFr nonprofit organization. He also created the first version of the French-speaking PostgreSQL website and co-founded PostgreSQL Europe, serving as its treasurer and a board member between 2008 and 2013.
"Crunchy Data's trusted Open Source Postgres technology, including its cloud-native Postgres for Kubernetes offerings and fully managed Postgres offering, Crunchy Bridge, have seen considerable interest in France and Europe more generally," said Crunchy Data President Paul Laurence. "We are enthusiastic about Jean-Paul joining Crunchy Data to build on the momentum we are seeing in Europe and assist Crunchy Data's customers in their successful adoption of Postgres."
"I am very happy to join Crunchy Data, which is among the finest PostgreSQL technical teams in the world. Its Open Source offerings will meet the expectations of the most demanding PostgreSQL users," said Argudo. "Crunchy Data enables Postgres users to maintain their freedom of choice when operationalizing PostgreSQL and deploying it to any of the popular clouds. Since PostgreSQL has become strategic in many businesses, I'll be happy to explain to them how Crunchy Data can help build a fast and reliable data infrastructure for their needs."
The expansion into France follows Crunchy Data's 2021 expansion into APAC with the addition of Tony Mudie leading Crunchy Data Australia.
PostgreSQL is a powerful, Open Source, object-relational database system with more than 25 years of active Open Source development and a strong global community. Commercial enterprises and government agencies with a focus on advanced data management benefit from PostgreSQL's proven architecture and reputation for reliability, data integrity, and cost effectiveness.
About Crunchy Data
Crunchy Data allows companies to build with confidence as the leading provider of trusted Open Source PostgreSQL and enterprise technology, support and training. Crunchy Data offers Crunchy Certified PostgreSQL, the most advanced true open source RDBMS on the market. The company also offers Crunchy Bridge, a fully managed cloud Postgres service available on AWS, Azure and Google Cloud. PostgreSQL's active development community, proven architecture, and reputation for reliability, data integrity, and ease of use make it a prime candidate for enterprises looking for a robust relational database alternative to expensive proprietary database technologies.
BIG DATA MANAGEMENT
Komprise | May 20, 2022
Komprise, the leader in analytics-driven unstructured data management and mobility, today announced Komprise Smart Data Workflows, a systematic process to discover relevant file and object data across cloud, edge and on-premises datacenters and feed data in native format to AI and machine learning (ML) tools and data lakes.
Industry analysts predict that at least 80% of the world’s data will be unstructured by 2025. This data is critical for AI and ML-driven applications and insights, yet much of it is locked away in disparate data storage silos. This creates an unstructured data blind spot, resulting in billions of dollars in missed big data opportunities.
Komprise has expanded Deep Analytics Actions to include copy and confine operations based on Deep Analytics queries, added the ability to execute external functions such as running natural language processing functions via API, and expanded global tagging and search to support these workflows. Komprise Smart Data Workflows allow you to define and execute a process with as many of these steps as needed, in any sequence, including external functions at the edge, datacenter or cloud. Komprise Global File Index and Smart Data Workflows together reduce the time it takes to find, enrich and move the right unstructured data by up to 80%.
“Komprise has delivered a rapid way to visualize our petabytes of instrument data and then automate processes such as tiering and deletion for optimal savings,” says Jay Smestad, senior director of information technology at PacBio. “Now, the ability to automate workflows so we can further define this data at a more granular level and then feed it into analytics tools to help meet our scientists’ needs is a game changer.”
Komprise Smart Data Workflows are relevant across many sectors. Here's an example from the pharmaceutical industry, followed by a code sketch of the same steps:
1) Search: Define and execute a custom query across on-prem, edge and cloud data silos to find all data for Project X with Komprise Deep Analytics and the Komprise Global File Index.
2) Execute & Enrich: Execute an external function on Project X data to look for a specific DNA sequence for a mutation and tag such data as "Mutation XYZ".
3) Cull & Mobilize: Move only Project X data tagged with "Mutation XYZ" to the cloud using Komprise Deep Analytics Actions for central processing.
4) Manage Data Lifecycle: Move the data to a lower storage tier for cost savings once the analysis is complete.
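Expressed as code, the same four steps might look like the following minimal Python sketch over an in-memory file index. Every name here (FileRecord, find_project_files, and so on) is a hypothetical stand-in for illustration only; this announcement does not document Komprise's actual SDK.

```python
# Illustrative sketch of the four-step pharma workflow above.
# All class and function names are hypothetical; Komprise's actual
# APIs are not described in this announcement.
from dataclasses import dataclass, field

@dataclass
class FileRecord:
    path: str
    project: str
    tier: str = "hot"
    location: str = "on-prem"
    tags: set = field(default_factory=set)

def find_project_files(index, project):
    """Step 1 (Search): query the global file index for a project's data."""
    return [f for f in index if f.project == project]

def tag_mutations(files, detect_mutation):
    """Step 2 (Execute & Enrich): run an external function and tag hits."""
    for f in files:
        if detect_mutation(f.path):
            f.tags.add("Mutation XYZ")

def move_tagged(files, tag, destination):
    """Step 3 (Cull & Mobilize): move only tagged data to the cloud."""
    for f in files:
        if tag in f.tags:
            f.location = destination

def tier_down(files):
    """Step 4 (Manage Data Lifecycle): demote data after analysis."""
    for f in files:
        f.tier = "cold"

index = [FileRecord("/genomes/a.bam", "Project X"),
         FileRecord("/genomes/b.bam", "Project X"),
         FileRecord("/assays/c.csv", "Project Y")]

project_files = find_project_files(index, "Project X")
tag_mutations(project_files, detect_mutation=lambda path: path.endswith(".bam"))
move_tagged(project_files, "Mutation XYZ", "cloud")
tier_down(project_files)
```

The point of the sketch is the sequencing: search narrows the data set first, enrichment adds tags, and only the tagged subset is moved, which is what keeps transfer and analysis costs down.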
Other Smart Data Workflow use cases include:
Legal Divestiture: Find and tag all files related to a divestiture project, moving sensitive data to an object-locked storage bucket and the rest to a writable bucket.
Autonomous Vehicles: Find crash test data related to abrupt stopping of a specific vehicle model and copy this data to the cloud for further analysis. Execute an external function to identify and tag data with Reason = Abrupt Stop and move only the relevant data to the cloud data lakehouse to reduce time and cost associated with moving and analyzing unrelated data.
“Whether it’s massive volumes of genomics data, surveillance data, IoT, GDPR or user shares across the enterprise, Komprise Smart Data Workflows orchestrate the information lifecycle of this data in the cloud to efficiently find, enrich and move the data you need for analytics projects. We are excited to move to this next phase of our product journey, making it much easier to manage and mobilize massive volumes of unstructured data for cost reduction, compliance and business value.”
Kumar Goswami, CEO of Komprise
About Komprise
Komprise is a provider of unstructured data management and mobility software that frees enterprises to easily analyze, mobilize, and monetize the right file and object data across clouds without shackling data to any vendor. With Komprise Intelligent Data Management, you can cut 70% of enterprise storage, backup and cloud costs while making data easily available to cloud-based data lakes and analytics tools.
BIG DATA MANAGEMENT
J.D. Power | June 06, 2022
J.D. Power, a global leader in data analytics and customer intelligence, today announced that it has acquired the data and predictive analytics business of We Predict, the UK-based provider of global automobile service and warranty analytics. We Predict’s software, which is used by auto manufacturers and suppliers to project future component failures and future warranty claims and costs, will be leveraged by J.D. Power to enhance its vehicle quality and dependability analytics, expand repair cost forecasting and provide critical valuation data.
“Robust data and powerful analytics that help manufacturers, suppliers and consumers better predict future repair costs are a key link in the auto industry value chain that will only become more important as fleets of new electric vehicles start rolling off the assembly line,” said Dave Habiger, president and CEO of J.D. Power. “By augmenting our existing offerings with We Predict’s forecasting software, we will be able to deliver a more complete, detailed view of repair-related costs to better anticipate financial risk exposures.”
“As the automobile industry enters a phase of massive transformation in which electric vehicles and ever-more complex technologies are rapidly becoming the norm, warranty claims and repair costs are a critical variable for manufacturers and suppliers to incorporate into their forecasting. By incorporating We Predict’s comprehensive data and powerful analytics into our vehicle quality, dependability and valuation platforms, we will be able to create the industry’s most robust and accurate view of future warranty claims and repair costs.”
Doug Betts, president of the global automotive division at J.D. Power
We Predict software uses machine learning and predictive analytics to develop detailed projections of future warranty claims and repair costs for the global automobile industry. Drawing on a database of billions of service records, We Predict can accurately forecast true vehicle ownership costs, residual values, repair and warranty claims costs and more.
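The press release does not disclose We Predict's modeling details, but the general technique it describes (training a model on historical service records to project future claim costs) can be sketched in a few lines of Python. The features, coefficients, and data below are synthetic and purely illustrative.

```python
# Minimal sketch of warranty-claims forecasting from service records.
# This illustrates the general technique only; We Predict's actual
# models, features, and data are proprietary and not described here.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000

# Synthetic service records: vehicle age (months), mileage, prior claims.
age_months = rng.uniform(0, 60, n)
mileage = age_months * rng.uniform(800, 1600, n)
prior_claims = rng.poisson(0.5, n)

# Synthetic target: warranty cost rises with age, mileage and history.
cost = (40 + 2.5 * age_months + 0.004 * mileage
        + 90 * prior_claims + rng.normal(0, 25, n))

X = np.column_stack([age_months, mileage, prior_claims])
X_train, X_test, y_train, y_test = train_test_split(X, cost, random_state=0)

model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"R^2 on held-out records: {model.score(X_test, y_test):.2f}")
```

At production scale, the interesting work lies in the feature engineering over billions of real service records, not in the regressor itself.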
“J.D. Power invented the idea of using data and analytics to evaluate vehicle quality and dependability, so the opportunity to become a part of that team and bring our software and operational data into the offering is enormously exciting to all of us at We Predict,” said James Davies, We Predict CEO. “The industry and consumers need accurate repair cost forecasting now more than ever and we look forward to being the leader in delivering those solutions.”
Davies will become vice president of repair analytics and data at J.D. Power, and We Predict will become part of the company's global automotive division.
About J.D. Power
J.D. Power is a global leader in consumer insights, advisory services and data and analytics. A pioneer in the use of big data, artificial intelligence (AI) and algorithmic modeling capabilities to understand consumer behavior, J.D. Power has been delivering incisive industry intelligence on customer interactions with brands and products for more than 50 years. The world's leading businesses across major industries rely on J.D. Power to guide their customer-facing strategies.
About We Predict
Formed in 2009, We Predict uses machine learning and unique predictive methodologies to assist global blue-chip customers in anticipating and accelerating decisions on product, market, and financial performance. Our top-notch data scientists, mathematicians, computer scientists and industry experts work together with our clients to explore and gain new insights into where your business is headed, creating the opportunity to course-correct with confidence. Using our service, clients gain insights into huge amounts of data at the touch of a button so they can take action fast. Some guess, we know.
BIG DATA MANAGEMENT
Penguin | January 03, 2022
Recently, the Penguin team announced the launch of its decentralized data network for Web 3.0. With the advancement of blockchain technology, innovative new players are entering the market. Some are bringing the offline world to a global audience, while others are transforming the way we invest in our future. Decentralized applications, DeFi, NFTs, and the Metaverse hold immense potential for future growth and real-world use. But what the current crypto arena lacks is an independent, one-stop web service that combines a high-performance smart contract blockchain with a decentralized storage solution. The Penguin network introduces a universal decentralized data network designed specifically for Web 3.0.
Penguin - The Decentralized Storage Platform
Exclusively designed for Web 3.0, Penguin is a peer-to-peer network of nodes that jointly provide decentralized storage and communication services. By offering a universal decentralized data network for Web 3.0, the platform can fulfill multiple roles for different areas of the blockchain space. Moreover, Penguin aims to work with the blockchain industry to create decentralized applications (DApps), products, and services that are seamlessly accessible in Web 3.0.
A unique feature of the platform is automatic scaling: increases in demand for storage space are handled efficiently, which should eventually lower costs across the blockchain arena. Penguin also facilitates efficient data storage and quick data retrieval. The network is economically automated with a native protocol token, PEN, thanks to its built-in smart-contract-based incentive system.
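The article does not specify how PEN pricing or settlement actually works, but the general shape of a smart-contract-based storage incentive can be illustrated with a toy Python sketch. The rate, balances, and settlement logic below are invented for the example.

```python
# Toy illustration of a smart-contract-style storage incentive.
# PEN pricing mechanics are not specified in the article; the rate,
# units, and settlement logic below are invented for illustration.

PRICE_PER_GB_MONTH_PEN = 0.02  # hypothetical rate

def storage_fee(gigabytes: float, months: float) -> float:
    """Fee a node would earn for hosting data for a period."""
    return gigabytes * months * PRICE_PER_GB_MONTH_PEN

def settle(balances: dict, renter: str, host: str, fee: float) -> None:
    """Transfer PEN from a renter to a storage host, as a contract might."""
    if balances[renter] < fee:
        raise ValueError("insufficient PEN balance")
    balances[renter] -= fee
    balances[host] += fee

balances = {"renter": 10.0, "host": 0.0}
fee = storage_fee(gigabytes=50, months=3)  # 50 GB for three months
settle(balances, "renter", "host", fee)
print(balances)  # {'renter': 7.0, 'host': 3.0}
```

On the real network, this accounting would run in smart contracts rather than application code, which is what makes the economy automated rather than administered.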
The stated goal of the platform is therefore to extend the blockchain with decentralized storage and communication, positioning itself as a world computer that can efficiently serve as an operating system and deployment environment for DApps.
Web 3.0 - The Decentralized Internet of the Future
Web 3.0 is not merely a buzzword that the tech, crypto, and venture-capital classes have become interested in lately. It aims to provide a future where distributed users and machines can seamlessly interact with data, value, and other counterparties through peer-to-peer networks, eliminating the need for third parties. It is built primarily on three novel layers of technological innovation: edge computing, decentralized data networks, and artificial intelligence. Web 3.0, built on blockchain, eliminates the big intermediaries, including centralized governing bodies and repositories.
Moreover, the most significant evolution enabled by Web 3.0 is the minimization of the trust required for coordination on a global scale. It fundamentally expands the scale and scope of human and machine interactions to an entirely new level. These interactions range from simple payments to richer information flows and trusted data transfers, all without passing through a fee-charging intermediary.
Web 3.0 enhances the current internet with significant characteristics such as trustlessness, verifiability, permissionlessness, and self-governance. This is why a permissionless, decentralized blockchain like Penguin plays a pivotal part in developing the so-called "decentralized internet of the future." Decentralized data networks like Penguin make it possible for data generators to store or sell their data without losing ownership control, compromising privacy, or relying on intermediaries.
Blockchain Technology and Web 3.0
Blockchain technology and cryptocurrencies have always been an integral part of Web 3.0. They provide financial incentives for anyone who wants to create, govern, contribute to, or improve projects. Today the internet needs Web 3.0, a new generation of Internet protocol that facilitates free identity, free contracts, and free assets. With its advanced network fundamentals, blockchain technology offers a near-perfect foundation, with built-in smart contracts for self-deployment and access and decentralized addresses as accounts. Penguin, the decentralized data network, provides a readily available, decentralized, private data storage solution for all Web 3.0 developers.
How Does Penguin Benefit the Development of Web 3.0?
Today we live in a data-driven world, where companies collect massive amounts of user data and use it with the intent to deliver value. Data privacy has become a greater concern over the past few years. Web 3.0 fundamentally changes how concerns like data privacy and storage are addressed, and it does so by deploying blockchain.
Penguin primarily focuses on data storage with zero downtime. It also features permanent, versionable content storage, zero-error operation, and resistance to intermittent node disconnection.
With exceptional privacy attributes like anonymous browsing, deniable storage, untraceable messaging, and file representation formats that leak no metadata, Penguin meets the growing demand for security on the web. Penguin also offers continuous service and resilience against outages or targeted attacks. The platform facilitates the creation of many products, all of which rely on the APIs and SDKs that Penguin provides.
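As a rough illustration of what building on such APIs could look like, here is a hypothetical upload-and-retrieve flow in Python. The gateway URL, routes, and response shape are all invented for the example; the article does not document Penguin's actual endpoints.

```python
# Hypothetical example of building on Penguin's APIs. The article does
# not document real endpoints; the gateway URL, routes, and response
# shape below are invented to show the shape of an upload/retrieve flow.
import requests

GATEWAY = "http://localhost:1633"  # assumed local Penguin node

def upload(data: bytes) -> str:
    """Store a payload and return its content reference."""
    resp = requests.post(f"{GATEWAY}/files", data=data)
    resp.raise_for_status()
    return resp.json()["reference"]

def retrieve(reference: str) -> bytes:
    """Fetch a payload back by its content reference."""
    resp = requests.get(f"{GATEWAY}/files/{reference}")
    resp.raise_for_status()
    return resp.content

if __name__ == "__main__":
    ref = upload(b"hello, decentralized web")
    print(retrieve(ref))
```

The key property a decentralized store exposes through an interface like this is content addressing: the caller gets back a reference derived from the data itself rather than a location on a particular server.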
Penguin - An Infrastructure for A Self-Sovereign Society
Penguin is more than just a network; the protocol sets a strong foundation for a market economy around data storage and retrieval. The platform has also entered into a host of strategic partnerships and collaborations with projects and protocols across the DeFi, GameFi, NFT, smart contract, and wider metaverse spaces. Moreover, as a platform for permissionless publication, the Penguin network promotes information freedom. The platform's design requirements can only be met by the network's native token, PEN.
Some of the significant features that Web 3.0 offers are zero central point of control by removing intermediaries, complete ownership of data, sharing information in a permissionless manner, reducing hacks and data breaches with decentralized data, and interoperability.
Penguin, for its part, aims to build an infrastructure for a self-sovereign society. Through permissionless access and privacy, Penguin efficiently meets the needs of freedom of speech, data sovereignty, and an open network market, while ensuring its security through integrity protection, censorship resistance, and attack resilience.
One of its vital meta-values is inclusivity: including the underprivileged in the data economy, lowering the barrier of entry by explaining complex data flows, and building decentralized applications.
Integrity of the online persona is also necessary. Because Penguin is a network with open participation that offers permissionless access to publishing, sharing, and investing in data, users have complete freedom to express their intentions and full authority to decide whether to remain anonymous or share their interactions.
Incentivization, or economic incentives, ensures that participants' behavior aligns with the network's desired emergent behavior. Finally, impartiality guarantees content neutrality and prevents gatekeeping, ruling out any value that would treat a particular group as privileged or express a preference for specific content or data from a specific source. Together, these meta-values make Penguin an efficient, decentralized, permissionless data network for Web 3.0.
Penguin’s Future-Proof Design Principles - Meeting the Needs of Web 3.0
The information society and data economy have ushered in an era where online transactions and big data are pivotal to everyday life. It is therefore essential to have future-proof supporting technology like Penguin. The Penguin network offers a strong guarantee of continuity by meeting several general system requirements: a stable and resilient specification and software implementation; scalability to many orders of magnitude more users and data without loss of performance or reliability, enabling mass adoption; security and resilience against deliberate attacks; and self-sustaining, autonomous operation, independent of human or organizational coordination or any legal entity's business.