Salesforce | September 14, 2023
Salesforce introduces the groundbreaking Einstein 1 Platform, built on a robust metadata framework.
The Einstein 1 Data Cloud supports large-scale data and high-speed automation, unifying customer data, enterprise content, and more.
The latest iteration of Einstein includes Einstein Copilot and Einstein Copilot Studio.
On September 12, 2023, Salesforce unveiled the Einstein 1 Platform, introducing significant enhancements to the Salesforce Data Cloud and Einstein AI capabilities. The platform is built on Salesforce's underlying metadata framework. Einstein 1 is a trusted AI platform for customer-centric companies: it lets organizations securely connect diverse datasets, build AI-driven applications with low-code development, and deliver entirely new CRM experiences.
Salesforce's metadata framework plays a crucial role in helping companies organize and comprehend data across various Salesforce applications, much like a common language that enables communication among different applications built on the core platform. The platform maps data from disparate systems to this metadata framework, creating a unified view of enterprise data. Organizations can then tailor user experiences and leverage data for various purposes using low-code platform services, including Einstein for AI predictions and content generation, Flow for automation, and Lightning for user interfaces. Importantly, these customizations are readily accessible to other core applications within the organization, eliminating the need for costly and fragile integration code.
In today's business landscape, customer data is exceedingly fragmented. On average, companies employ 1,061 different applications, yet only 29% of them are integrated. The complexity of enterprise data systems has increased, and previous computing revolutions, such as cloud computing, social media, and mobile technologies, have generated isolated pockets of customer data.
Furthermore, Salesforce ensures automatic upgrades three times a year, with the metadata framework safeguarding integrations, customizations, and security models from disruptions. This enables organizations to seamlessly incorporate, expand, and evolve their use of Salesforce as the platform evolves.
The Einstein 1 Data Cloud, which supports large-scale data and high-speed automation, paves the way for a new era of data-driven AI applications. This real-time hyperscale data engine combines and harmonizes customer data, enterprise content, telemetry data, Slack conversations, and other structured and unstructured data into a unified customer view. The platform already processes 30 trillion transactions per month and connects and unifies 100 billion records daily. Data Cloud is now natively integrated with the Einstein 1 Platform, unlocking previously isolated data sources and enabling the creation of comprehensive customer profiles and the delivery of entirely fresh CRM experiences.
The Einstein 1 Platform has been expanded to support thousands of metadata-enabled objects per customer, each able to manage trillions of rows. Furthermore, Marketing Cloud and Commerce Cloud, which joined Salesforce's Customer 360 portfolio through acquisitions, have been reengineered onto the Einstein 1 Platform.
Now, massive volumes of data from external systems can be seamlessly integrated into the platform and transformed into actionable Salesforce objects. Automation at scale is achieved by triggering flows in response to changes in any object, even events from IoT devices or AI predictions, at a rate of up to 20,000 events per second. These flows can interact with any enterprise system, including legacy systems, through MuleSoft.
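The pattern described above, where a change to any object fires an automation flow, can be sketched generically. The snippet below is a hypothetical, minimal event dispatcher in Python for illustration only; it is not Salesforce's Flow API, and the event names and handlers are invented:

```python
from collections import defaultdict


class EventBus:
    """Minimal event dispatcher: handlers ("flows") subscribe to
    object-change events and run whenever a matching event is published."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        # Register a flow to run on a given object-change event.
        self._handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Invoke every subscribed flow for this event type.
        for handler in self._handlers[event_type]:
            handler(payload)


# Hypothetical usage: a "flow" reacts to an order-update event.
bus = EventBus()
log = []
bus.subscribe("Order.updated", lambda event: log.append(f"flow ran for {event['id']}"))
bus.publish("Order.updated", {"id": "ORD-1"})
```

In a real platform the dispatch would be asynchronous and horizontally scaled; the sketch only shows the subscribe-and-trigger relationship between object changes and flows.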
Analytics also benefit from this scalability, as Salesforce provides a range of insights and analytics solutions, including reports and dashboards, Tableau, CRM analytics, and Marketing Cloud reports. With the Einstein 1 Platform's common metadata schema and access model, these solutions can operate on the same data at scale, delivering valuable insights for various use cases.
Salesforce has additionally made Data Cloud accessible at no cost to every customer with Enterprise Edition or higher. This allows customers to commence data ingestion, harmonization, and exploration, leveraging Data Cloud and Tableau to extend the influence of their data across all business segments and kickstart their AI journey.
Salesforce's latest iteration of Einstein introduces a conversational AI assistant to every CRM application and customer experience. This includes:
Einstein Copilot: This is an out-of-the-box conversational AI assistant integrated into every Salesforce application's user experience. Einstein Copilot enhances productivity by assisting users within their workflow, enabling natural language inquiries, and providing pertinent, trustworthy responses grounded in proprietary company data from the Data Cloud. Furthermore, Einstein Copilot proactively takes action and offers additional options beyond the user's query.
Einstein Copilot Studio: This feature enables companies to create a new generation of AI-powered apps with custom prompts, skills, and AI models. This can help accelerate sales processes, streamline customer service, auto-generate websites based on personalized browsing history, or transform natural language prompts into code. Einstein Copilot Studio offers configurability to make Einstein Copilot available across consumer-facing channels such as websites and messaging platforms like Slack, WhatsApp, or SMS.
Both Einstein Copilot and Einstein Copilot Studio operate within the secure Einstein Trust Layer, an AI architecture seamlessly integrated into the Einstein 1 Platform. This architecture ensures that teams can leverage generative AI while maintaining stringent data privacy and security standards.
The metadata framework within the Einstein 1 Platform expedites AI adoption by providing a flexible, dynamic, and context-rich environment for machine learning algorithms. Metadata describes the structure, relationships, and behaviors of data within the system, allowing AI models to better grasp the context of customer interactions, business processes, and interaction outcomes. This understanding enables fine-tuning of large language models over time, delivering continually improved results.
Big Data Management
NetApp | October 13, 2023
NetApp renews its partnership with Ducati Corse for the 2023-2025 seasons.
NetApp becomes the team's Official Data Infrastructure Partner.
The collaboration results in a virtual data management solution for real-time race analytics.
NetApp, a prominent cloud-led, data-centric software firm, has recently announced the renewal of its partnership agreement with Ducati Corse for the 2023 to 2025 seasons. In this capacity, NetApp is the team's Official Data Infrastructure Partner. A significant outcome of this enduring collaboration is a virtual data management and insights solution primarily designed to enhance race analytics and research and development processes. This solution leverages NetApp's ONTAP technology in conjunction with NetApp SnapMirror and NetApp FlexCache.
In the world of Grand Prix racing, speed is of paramount importance. Ducati Corse identified that the synchronization between the racetrack and the various engineering teams spread across multiple locations, including the Borgo Panigale headquarters, was too slow. Recognizing the potential to address this limitation effectively, Ducati turned to NetApp to co-create a state-of-the-art data insights and management solution. This solution empowers engineers with almost real-time data sharing and analytics, enabling tech teams to extract maximum value from test and race weekend sessions. Such data-driven insights often make the difference between securing pole position and starting further down the grid.
Key advantages of this collaboratively engineered solution encompass:
Seamless connectivity enables engineers to access consistently available data from any location within Ducati's data infrastructure clusters.
Capturing valuable data insights from an elite racing environment and making them accessible for utilization across all Ducati business units.
Implementation of a software-defined approach, which results in a robust and efficient data storage foundation built on NetApp's ONTAP technology. This approach benefits from NetApp's extensive experience in enterprise data management innovation and product leadership.
Optimization of data transfer through NetApp ONTAP with SnapMirror, ensuring unified, high-speed, and secure replication of data.
Introduction of FlexCache remote caching for actively read data, significantly enhancing the speed and productivity of collaboration across multiple locations while simultaneously reducing WAN bandwidth costs and increasing data throughput.
Notably, since 2018, NetApp and Ducati have maintained a fruitful collaboration aimed at successfully managing 200 applications and supporting 90 virtual machines within a disaster recovery center. NetApp delivers the speed and capacity to facilitate the management of branch operations and on-site data analysis for Ducati while seamlessly transitioning data to a hybrid cloud environment. This, in turn, significantly reduces prototyping time and expedites the development and release of new motorcycles to the market.
Big Data Management
Microsoft | September 22, 2023
AI models rely heavily on vast data volumes for their functionality, thus increasing risks associated with mishandling data in AI projects.
Microsoft's AI research team accidentally exposed 38 terabytes of private data on GitHub.
Many companies feel compelled to adopt generative AI but lack the expertise to do so effectively.
Artificial intelligence (AI) models are renowned for their enormous appetite for data, making them among the most data-intensive computing platforms in existence. While AI holds the potential to revolutionize the world, it is utterly dependent on the availability and ingestion of vast volumes of data.
An alarming incident involving Microsoft's AI research team recently highlighted the immense data exposure risks inherent in this technology. The team inadvertently exposed a staggering 38 terabytes of private data when publishing open-source AI training data on the cloud-based code hosting platform GitHub. This exposed data included a complete backup of two Microsoft employees' workstations, containing highly sensitive personal information such as private keys, passwords to internal Microsoft services, and over 30,000 messages from 359 Microsoft employees. The exposure was a result of an accidental configuration, which granted "full control" access instead of "read-only" permissions. This oversight meant that potential attackers could not only view the exposed files but also manipulate, overwrite, or delete them.
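The root cause was reportedly an over-permissive Azure Storage shared access signature (SAS) rather than any flaw in GitHub itself. The core mitigation is scoping shared links to the minimum permission needed. As a hedged illustration using the Azure CLI (the account and container names here are hypothetical), a sharing link can be restricted to read and list only, with a hard expiry:

```shell
# Grant read (r) + list (l) only -- not the full "racwdl" set that would
# allow writing, deleting, or overwriting the shared data.
# Account and container names below are placeholders for illustration.
az storage container generate-sas \
  --account-name aimodels \
  --name training-data \
  --permissions rl \
  --expiry 2023-12-31T00:00:00Z \
  --https-only \
  --output tsv
```

A token minted this way lets recipients download the published training data but leaves the underlying storage immutable to them, which is exactly the distinction between "read-only" and "full control" that the incident turned on.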
Although a crisis was narrowly averted in this instance, it serves as a glaring example of the new risks organizations face as they integrate AI more extensively into their operations. With staff engineers increasingly handling vast amounts of specialized and sensitive data to train AI models, it is imperative for companies to establish robust governance policies and educational safeguards to mitigate security risks.
Training specialized AI models necessitates specialized data. As organizations of all sizes embrace the advantages AI offers in their day-to-day workflows, IT, data, and security teams must grasp the inherent exposure risks associated with each stage of the AI development process. Open data sharing plays a critical role in AI training, with researchers gathering and disseminating extensive amounts of both external and internal data to build the necessary training datasets for their AI models. However, the more data that is shared, the greater the risk if it is not handled correctly, as evidenced by the Microsoft incident. AI, in many ways, challenges an organization's internal corporate policies like no other technology has done before. To harness AI tools effectively and securely, businesses must first establish a robust data infrastructure to avoid the fundamental pitfalls of AI.
Securing the future of AI requires a nuanced approach. Despite concerns about AI's potential risks, organizations should be more concerned about the quality of AI software than the technology turning rogue.
PYMNTS Intelligence's research indicates that many companies are uncertain about their readiness for generative AI but still feel compelled to adopt it. A substantial 62% of surveyed executives believe their companies lack the expertise to harness the technology effectively, according to 'Understanding the Future of Generative AI,' a collaboration between PYMNTS and AI-ID.
The rapid advancement of computing power and cloud storage infrastructure has reshaped the business landscape, setting the stage for data-driven innovations like AI to transform business processes. Today's AI models are produced mainly by tech giants and well-funded startups, but computing costs are continually decreasing. Within a few years, everyday consumers may be able to run advanced AI models on personal devices at home. That juncture marks a tipping point: the ever-increasing zettabytes of proprietary data produced each year must be addressed promptly, or the risks associated with future innovations will scale up in step with their capabilities.