IBM announces plans to enhance its Watsonx AI and data platform, with a focus on scaling AI impact for enterprises.
Key improvements include new generative AI models, integration of foundation models, and features like Tuning Studio and Synthetic Data Generator.
IBM emphasizes trust, transparency, and governance in training and plans to incorporate AI into its hybrid cloud solutions, although implementation difficulty and cost may be issues.
IBM reveals its plans to introduce new generative AI foundation models and enhancements to its Watsonx AI and data platform. The goal is to provide enterprises with the tools they need to scale and accelerate the impact of AI in their operations. These improvements include a technical preview for watsonx.governance, the addition of new generative AI data services to watsonx.data, and the integration of watsonx.ai foundation models into select software and infrastructure products.
Developers will have the opportunity to explore these capabilities and models at the IBM TechXchange Conference, scheduled to take place from September 11 to 14 in Las Vegas.
The upcoming AI models and features include:
1. Granite Series Models: IBM plans to launch its Granite series models, built on the decoder-only architecture that underpins most large language models (LLMs). These models will support various enterprise natural language processing (NLP) tasks, including summarization, content generation, and insight extraction, with planned availability in Q3 2023.
2. Third-Party Models: IBM is currently offering Meta's Llama 2-chat 70 billion parameter model and the StarCoder LLM for code generation within watsonx.ai on IBM Cloud.
IBM places a strong emphasis on trust and transparency in its training process for foundation models. They follow rigorous data collection procedures and include control points to ensure responsible deployments in terms of governance, risk assessment, privacy, bias mitigation, and compliance.
IBM also intends to introduce new features across the watsonx platform:
Tuning Studio: IBM plans to release the Tuning Studio, which uses prompt tuning to let clients adapt foundation models to their own enterprise data and tasks. It is expected to be available in Q3 2023.
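The article does not describe the Tuning Studio's interface, but the underlying technique, prompt tuning, is well established: a small set of trainable "soft prompt" vectors is prepended to the frozen model's input embeddings, and only those vectors are updated during training. The framework-free sketch below illustrates the idea on a toy frozen model; every name in it (the embeddings, the readout weights, the training target) is invented for illustration and is not part of watsonx.ai.

```python
# Toy illustration of prompt tuning: only the soft prompt is trained;
# the "pretrained model" (token embeddings + readout) stays frozen.
import random

random.seed(0)
DIM, PROMPT_LEN = 4, 2

# Frozen pieces: stand-ins for the pretrained model, never modified.
token_embeddings = {"hello": [0.1, 0.2, 0.3, 0.4], "world": [0.4, 0.3, 0.2, 0.1]}
readout = [0.5, -0.25, 0.75, 0.1]

# Trainable soft prompt: PROMPT_LEN vectors of size DIM, prepended to the input.
soft_prompt = [[random.uniform(-0.1, 0.1) for _ in range(DIM)]
               for _ in range(PROMPT_LEN)]

def forward(tokens):
    # Prepend the soft prompt, mean-pool the sequence, apply the frozen readout.
    seq = soft_prompt + [token_embeddings[t] for t in tokens]
    pooled = [sum(v[d] for v in seq) / len(seq) for d in range(DIM)]
    return sum(p * w for p, w in zip(pooled, readout))

def train_step(tokens, target, lr=0.5):
    # Squared-error loss; analytic gradient w.r.t. each soft-prompt entry is
    # 2 * (pred - target) * readout[d] / seq_len (from the mean-pool + dot product).
    pred = forward(tokens)
    seq_len = PROMPT_LEN + len(tokens)
    grad_scale = 2 * (pred - target)
    for i in range(PROMPT_LEN):
        for d in range(DIM):
            soft_prompt[i][d] -= lr * grad_scale * readout[d] / seq_len
    return (pred - target) ** 2

losses = [train_step(["hello", "world"], target=1.0) for _ in range(50)]
print(f"loss: {losses[0]:.4f} -> {losses[-1]:.6f}")
```

Because the base model is untouched, the same frozen weights can serve many tasks, each with its own small prompt — which is what makes the approach attractive for adapting large hosted models to enterprise data.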
Synthetic Data Generator: IBM has launched a synthetic data generator, enabling users to create artificial tabular data sets for AI model training, reducing risk and accelerating decision-making.
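IBM has not published how its generator works, but the core idea of synthetic tabular data can be sketched simply: fit per-column distributions on a small real sample, then draw artificial rows from those distributions so no real record is reproduced. The column names and data below are made up for illustration.

```python
# Illustrative sketch (not IBM's generator): fit simple per-column
# distributions on a "real" sample, then sample synthetic rows from them.
import random
import statistics

random.seed(42)

real_rows = [
    {"age": 34, "department": "sales"},
    {"age": 41, "department": "engineering"},
    {"age": 29, "department": "sales"},
    {"age": 52, "department": "hr"},
]

# Fit: mean/stdev for the numeric column, observed values for the categorical one.
ages = [r["age"] for r in real_rows]
age_mu, age_sigma = statistics.mean(ages), statistics.stdev(ages)
departments = [r["department"] for r in real_rows]

def synth_row():
    return {
        # Gaussian draw clipped to a plausible floor.
        "age": max(18, round(random.gauss(age_mu, age_sigma))),
        # random.choice over the raw column samples by observed frequency.
        "department": random.choice(departments),
    }

synthetic = [synth_row() for _ in range(100)]
print(synthetic[:3])
```

Real generators also need to preserve correlations between columns (e.g., via copulas or generative models), which this per-column sketch deliberately ignores.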
Generative AI: IBM aims to incorporate generative AI capabilities into watsonx.data to help users discover, augment, visualize, and refine data for AI through a self-service, natural language interface. This feature is planned for technical preview in Q4 2023.
Vector Database Capability: IBM plans to integrate vector database capabilities into watsonx.data to support watsonx.ai retrieval-augmented generation (RAG) use cases, also expected in technical preview in Q4 2023.
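In a RAG pipeline, the vector database stores document embeddings so that a query can be embedded the same way and the nearest documents pulled into the model's prompt. The toy sketch below uses bag-of-words vectors and brute-force cosine similarity in place of a learned embedding model and a real vector index; the documents and function names are invented for illustration.

```python
# Minimal sketch of the retrieval step in retrieval-augmented generation (RAG).
import math
from collections import Counter

documents = [
    "watsonx.data adds vector search for retrieval use cases",
    "prompt tuning adapts a foundation model with trainable prompt vectors",
    "synthetic tabular data reduces privacy risk during model training",
]

def embed(text):
    # Stand-in embedding: a bag-of-words term-count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# "Index" the corpus: store each document next to its vector.
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query, k=1):
    # Brute-force nearest neighbours; a vector database does this at scale.
    scored = sorted(index, key=lambda pair: cosine(embed(query), pair[1]),
                    reverse=True)
    return [doc for doc, _ in scored[:k]]

question = "how does vector search support retrieval"
context = retrieve(question, k=1)
prompt = f"Context: {context[0]}\nQuestion: {question}"
print(prompt)
```

The retrieved context is then prepended to the user's question before it reaches the generative model, which is what lets a general-purpose LLM answer from private enterprise data it was never trained on.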
Model Risk Governance for Generative AI: IBM is launching a technical preview for watsonx.governance, providing automated collection and documentation of foundation model details along with model risk governance capabilities.
Dinesh Nirmal, Senior Vice President, Products, IBM Software, stated that IBM is dedicated to supporting clients throughout the AI lifecycle, from establishing foundational data strategies to model tuning and governance. Additionally, IBM will offer AI assistants to help clients scale AI's impact across various enterprise use cases, such as application modernization, customer care, and HR and talent management.
IBM also intends to integrate watsonx.ai innovations into its hybrid cloud software and infrastructure products, including intelligent IT automation and developer services. IBM's upgrades to the Watsonx AI and data platform offer promise but come with potential drawbacks. Implementation complexity and the need for additional training may create a steep learning curve, and the cost of advanced technology could be prohibitive for smaller organizations.
The introduction of generative AI and synthetic data raises data privacy and security concerns. Additionally, despite efforts for responsible AI, the risk of bias in models necessitates ongoing vigilance to avoid legal and ethical issues.