Q&A with Vishal Srivastava, Vice President (Model Validation) at Citi

Vishal Srivastava, Vice President (Model Validation) at Citi, was invited as a keynote speaker to present on fraud analytics using machine learning at the International Automation in Banking Summit in New York in November 2019. Vishal has experience in quantitative risk modeling using advanced engineering, statistical, and machine learning techniques. His academic qualifications, a Ph.D. in Chemical Engineering and an MBA in Finance, have enabled him to challenge quantitative risk models with scientific rigor. Vishal's doctoral thesis included the development of statistical and machine learning-based risk models, some of which are currently used commercially. Vishal has 120+ peer-reviewed citations in areas such as risk management, quantitative modeling, machine learning, and predictive analytics.

As workplaces start to reopen, a hybrid model seems to be the new norm, giving people the flexibility to work from both home and the office as we emerge from the pandemic.



MEDIA 7: Could you please tell us a little bit about yourself and what made you choose this career path?
VISHAL SRIVASTAVA:
Since my childhood, I have had a deep interest in math and science, which led me to pursue a bachelor's degree in engineering at NIT Trichy (National Institute of Technology, Tiruchirappalli) in India. Later, to advance my knowledge, I pursued MBA and Ph.D. studies in the United States on fully funded university scholarships. During my Ph.D. research, I was intrigued by the various applications of mathematics through which risk in engineering systems could be quantified. Thanks to my advisors Prof. Carolyn Koh and Prof. Luis Zerpa at the Colorado School of Mines, I got the opportunity to explore ideas, from first principles to machine learning, and to build risk modeling frameworks for high-pressure flow systems. As a result of my Ph.D., we were able to develop risk frameworks used by consortium partners that included leading global energy companies. My Ph.D. research in the quantification of risk was an intellectually stimulating experience that taught me that anything is possible if we devote our focus and energy to a single idea over a reasonable period of time.

Due to the nature of my Ph.D. research, which included quantitative risk modeling, and my MBA in Finance, I was contacted by several risk management professionals about potential job opportunities in the finance sector. From a mathematical standpoint, risk management in engineering and finance overlaps considerably. The computation of risk in engineering systems involves investigating factors that can lead to system failure, which can be predicted using first-principles engineering methods or statistical models built on the historical distribution of failure events. Similarly, credit risk management can be approached using first-principles mathematical methods or statistical models that forecast defaults as a function of macroeconomic or account-level variables. In both cases, a binary classification model can be developed to model these default or failure events. I found it fascinating to explore the different avenues where graduate study in risk engineering could be applied.
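The parallel drawn above can be made concrete with a minimal sketch of a binary default classifier. This is an illustrative example on synthetic data; the driver names (loan-to-value, credit score, unemployment) and coefficients are hypothetical, not any bank's actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5000

# Hypothetical account-level and macroeconomic drivers (synthetic data)
ltv = rng.uniform(0.3, 1.2, n)           # loan-to-value ratio
fico = rng.normal(700, 50, n)            # credit score
unemployment = rng.normal(5.0, 1.5, n)   # macro variable (%)

# Synthetic default events: higher LTV/unemployment and lower score raise risk
logit = -4.0 + 3.0 * ltv - 0.01 * (fico - 700) + 0.3 * unemployment
default = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

# A binary classification model of default events, here via logistic regression
X = np.column_stack([ltv, fico, unemployment])
model = LogisticRegression(max_iter=5000).fit(X, default)

auc = roc_auc_score(default, model.predict_proba(X)[:, 1])
print(f"In-sample AUC: {auc:.3f}")
```

The same structure (features in, failure probability out) applies whether the "failure" is an engineering system breakdown or a loan default.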

About six months before my Ph.D. defense, I had an offer from Bank of the West, BNP Paribas, in the model risk division. My role as an Assistant Vice President in the Model Risk Team was to challenge the model-building process of credit and fraud risk models, both of which involved binary classification. The credit risk models used a logistic regression framework, a well-accepted industry methodology for classification that is easy to interpret. The fraud risk models included both traditional rule-based models and newer RNN (Recurrent Neural Network)-based sequential models, which are complex and non-linear. From this experience, I learned that, from a regulatory standpoint, model explainability can be a key factor in selecting a model. This was a valuable experience, but I've always enjoyed challenging myself and moving out of my comfort zone. So, about one and a half years later, I accepted an opportunity to work as Vice President in Citibank's Model Risk Division on the Secured Loan team, where my responsibilities included working with the international model validation team members to review international and US mortgage default risk models. My focus in this role is to challenge mortgage default risk models across various continents to ensure that they are regulatory compliant. This experience has been extremely insightful because of the varied nature of credit default events across continents as well as the homogeneity of the modeling approaches used to develop the models.


M7: What are some of the means through which you select appropriate model validation methodology?
VS:
In an increasingly competitive environment, financial institutions depend on models to help them optimize risk and make well-informed decisions. Model validation managers need to ensure that every step in the model-building process (data acquisition, conceptual soundness evaluation, model stability analysis, back-testing, performance assessment, and model implementation testing) is well supported by a sound scientific framework. This ensures that critical decisions such as loss estimates, capital allocation, and budget planning are based on scientific and mathematical reasoning rather than intuition. One key aspect of the model validation process is ensuring that the given model complies with the prevailing regulatory framework. In that regard, model developers present an assessment of all model usages and outputs, and performance assessment is conducted for all usages and outputs across all forecasting horizons. One caveat of this process is that model risk assessment across all models can be cost-intensive. Therefore, the model review process is prioritized, and models of higher importance (those of substantial size and with significant risk contribution) are reviewed with greater frequency. These are some of the key guidelines model validators keep in mind while performing model risk management activities.


The US economy has stayed resilient for most of 2021, with macroeconomic factors such as consumer spending and the unemployment rate showing promising trends.



M7: What are some of your go-to model validation techniques that help you effectively identify and manage model risk?
VS:
There is no single technique that can be applied uniformly to evaluate whether a model under review is fit for purpose. However, at a high level, some guiding principles can be quite useful when deciding whether to approve or reject a model. The first check is whether enough analysis has been performed on the conceptual soundness of the methodology proposed for the model. The goal here is to ensure there is sufficient evidence to justify that the selected methodology is indeed the right modeling approach. For example, a scorecard model could be built with logistic regression, a decision tree, or a neural network. In such a situation, the model validator reviews whether enough analysis has been performed to justify that the given modeling framework suits the data best and that the selected model can be sufficiently explained to the regulators.
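The methodology comparison described above can be sketched in a few lines. This is a hedged illustration on synthetic data: the candidate set (logistic regression versus a shallow decision tree) and the use of cross-validated AUC are one reasonable piece of evidence, not the full conceptual soundness review, which also weighs explainability.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Synthetic scorecard-style data: imbalanced binary target (hypothetical
# stand-in for real portfolio data)
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           weights=[0.9], random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(max_depth=4, random_state=0),
}

# Cross-validated AUC as evidence for the challenger analysis; the final
# selection also depends on how well the model can be explained to regulators
scores = {name: cross_val_score(est, X, y, cv=5, scoring="roc_auc").mean()
          for name, est in candidates.items()}
for name, auc in scores.items():
    print(f"{name}: mean CV AUC = {auc:.3f}")
```

A validator would expect to see an analysis like this documented alongside the rationale for the chosen framework.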

Additionally, model developers also explore alternative modeling frameworks to demonstrate why the selected framework is superior to the alternatives. The next aspect of model validation is finding any inadequacies in the analysis or the model documentation. Anything observed during validation needs to be recorded in the model validation report as findings and recommendations; this report later serves as a reference document for model developers when future model enhancements are needed.

Next, model developers need to ensure that model assumptions continue to be reasonable and rest on sound theory, because violations of model assumptions can be expensive. For example, in the run-up to the financial crisis of 2007–2008, several modelers assumed that the housing market would continue to grow based on historical performance and previous data. When the housing market plunged during the crisis, many of those assumptions were violated, and several companies faced huge financial losses. Hence, it is imperative that each model assumption is carefully evaluated.

Model validators also need to ensure that data quality checks have been performed sufficiently. The goal here is a scientific approach to data segmentation, data cleaning, data sampling methodology, missing values, and data outliers, all of which can severely affect model forecasts. The validator also needs to ensure that data sources, both internal and external (rating agencies, etc.), are well checked and properly recorded, with all data exclusions clearly justified.
The model validator also needs to ensure that the model developer has performed a sound variable selection process and that all variable transformations are well documented. Often, continuous variables are converted into categorical variables through a process called binning, and dummy variables are created from the resulting categories. Any discrepancy in variable transformation between the modeling and implementation stages can lead to a large gap between modeling and production results. Another very important part of the model validation exercise is model back-testing and performance analysis, which ensures that the model still produces accurate forecasts for the recent period on unseen data. As described, the three main pillars of the model validation process can be depicted as below:
 

 



Model validation managers need to ensure that every step in the model building process—data acquisition, conceptual soundness evaluation, model stability analysis, back-testing, performance assessment, model implementation testing—is well supported by a sound scientific framework.
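The binning and dummy-variable transformation mentioned earlier can be illustrated with a small sketch. The variable name, bin edges, and labels here are hypothetical; the point is that the exact same edges must be replicated in production to avoid the modeling-versus-implementation discrepancy described above.

```python
import pandas as pd

# Hypothetical continuous driver: borrower debt-to-income ratio (%)
df = pd.DataFrame({"dti": [12.0, 28.5, 35.0, 44.2, 51.0, 63.7]})

# Binning: convert the continuous variable into ordered categories.
# These edges must be identical at implementation time, or modeling
# and production scores will diverge.
bins = [0, 20, 36, 50, 100]
labels = ["low", "medium", "high", "very_high"]
df["dti_band"] = pd.cut(df["dti"], bins=bins, labels=labels)

# Dummy (one-hot) variables for the regression design matrix
dummies = pd.get_dummies(df["dti_band"], prefix="dti")
print(dummies.columns.tolist())
```

A validator would check that both the edges and the dummy encoding are documented and match between the development code and the production system.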

The model validator reviews whether the model developer has performed back-testing on OOT (out-of-time) and OOS (out-of-sample) data to ascertain that the model remains accurate on samples drawn from outside the original development period, ruling out overfitting. Next, the model validator must ensure that the model meets all necessary regulatory compliance requirements and that the model documentation fully complies with them. Model validators also need to review model dependencies: for instance, if the output of one model feeds into a second model, a performance issue with the first model can adversely affect the second. These are some of the pointers model validators use when reviewing a given model. A summary of the model validation review process can be pictorially represented in the below diagram:
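The OOT back-test described above can be sketched as a minimal example. Everything here is synthetic and hypothetical: the 36-month development / 12-month OOT split, the single driver, and the default-generating process are illustrations of the check, not any actual validation procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 4000

# Hypothetical monthly snapshots: first 36 months for development,
# last 12 months held out as the out-of-time (OOT) sample
month = rng.integers(0, 48, n)
driver = rng.normal(0, 1, n)
default = (rng.uniform(size=n) < 1 / (1 + np.exp(-(-2.0 + 1.5 * driver)))).astype(int)

X = driver.reshape(-1, 1)
dev, oot = month < 36, month >= 36

# Fit only on the development window, then score the unseen OOT window
model = LogisticRegression().fit(X[dev], default[dev])
auc_dev = roc_auc_score(default[dev], model.predict_proba(X[dev])[:, 1])
auc_oot = roc_auc_score(default[oot], model.predict_proba(X[oot])[:, 1])
print(f"Development AUC: {auc_dev:.3f}, OOT AUC: {auc_oot:.3f}")

# A large drop from development to OOT performance would flag overfitting
# or population drift and would be recorded as a validation finding.
```

In a real review, this comparison would be run across all forecasting horizons and segments, not a single metric.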


M7: What do you see as the most noticeable change right now happening in the workforce, encouraged by the rise of digital technologies?
VS:
There is a Chinese proverb that says, "May you live in interesting times." If we look around, we are living in transformational times that will redefine our future. Many banking tasks that earlier required physical proximity are now being automated through digital innovations, including advancements in computer vision and image recognition. Financial institutions have already introduced several innovative products, from automatic cheque deposits and online cash transfers to digital payments and transactions. Additionally, the rise of digital technologies, coupled with the changes brought by the pandemic, has produced irreversible changes in our workforce. As workplaces start to reopen, a hybrid model seems to be the new norm, giving people the flexibility to work from both home and the office as we emerge from the pandemic. There is an immense opportunity to retain the best parts of office culture while shedding inefficient tasks and unproductive meetings. One resulting trend is that commercial workplaces are moving into residential complexes as organizations explore new ways to be more efficient. We are seeing a new form of organizational agility that empowers teamwork across all disciplines and offshore locations. In my opinion, companies that quickly adapt to this remote, flexi-time organizational culture, rather than enforcing the orthodoxy of 9-to-5 office-centric work, will have a clear competitive advantage in this new era of work. As digital transactions take precedence, many traditional banking channels for payments and deposits are fast becoming obsolete because people can use these applications on their cell phones. The ongoing pandemic has accelerated the adoption of automation and AI processes that began in the pre-COVID period. All these changes create immense opportunities in the financial sector.


M7: What are the top challenges you see for the industry in general?
VS:
The year 2021 has been full of change in many respects. First, due to the rapid increase in pandemic cases worldwide, many countries witnessed some slowdown in their economies last year. However, with the ongoing vaccination drive and the reopening of offices and workplaces, a synchronous global recovery has been observed recently. The US economy has stayed resilient for most of 2021, with macroeconomic factors such as consumer spending and the unemployment rate showing promising trends. However, the US unemployment rate last year was among the highest in several decades. The dynamics and volatility of macroeconomic drivers have thus affected many modeling forecasts. This is one of the main challenges from a model risk standpoint: many traditional models don't seem to work as well as they did pre-pandemic. The rise in macroeconomic volatility in the wake of COVID-19 has increased the uncertainty in modeling forecasts. When this uncertainty is not handled soundly, it can result in one of two things: an inaccurate forecast from a simple model, or a move toward a more complex model, giving rise to overfitting problems. From a model risk validation standpoint, model complexity is a growing challenge as many products adopt AI and machine learning to make the best use of banking data for improving efficiency and gaining competitive intelligence. For such models, modelers need to explain how the model works, not just how it performs. With greater use of AI and analytics in the model risk domain, model explainability becomes a challenge faced by modelers. However, there have been significant advancements in model interpretability through Explainable AI techniques such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations).
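The additive attribution idea behind SHAP can be shown without the full library. For a linear model with roughly independent features, SHAP values reduce to a closed form, phi_i = coef_i * (x_i - mean(x_i)), and the attributions sum exactly to the prediction minus the average prediction. This is a hedged sketch of that special case on synthetic data, not a general SHAP implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(0, 1, size=(500, 3))
beta = np.array([2.0, -1.0, 0.5])       # hypothetical true coefficients
y = X @ beta + rng.normal(0, 0.1, 500)  # synthetic target

model = LinearRegression().fit(X, y)

# Closed-form SHAP values for a linear model with independent features:
# each feature's attribution is its coefficient times its deviation
# from the feature mean
x = X[0]
phi = model.coef_ * (x - X.mean(axis=0))

print("attributions:", np.round(phi, 3))
print("sum check:", np.isclose(
    phi.sum(),
    model.predict(x.reshape(1, -1))[0] - model.predict(X).mean()))
```

The "sum check" is the additive property that makes such explanations auditable: every prediction decomposes exactly into per-feature contributions, which is what regulators and validators look for when a complex model must be explained.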
Nevertheless, it is a constant battle to strike the right balance between model accuracy and model explainability in the face of regulatory requirements. From a compliance viewpoint, this could also result in an environment requiring greater regulatory intervention in the model risk domain. These are some of the main technical challenges in the model risk domain. From a human resources viewpoint, finding good talent in model risk is a big challenge at a time when many technology companies are hiring data scientists for similar roles. All challenges, however, come with great opportunities. Financial institutions are innovating and offering products that are creative and user-friendly. The speed of innovation has improved, and the future only looks more promising.


M7: When you are not working, what else are you seen doing?
VS:
I love jogging and hiking in nature. I recently finished a 100-day challenge of jogging 3 miles a day without missing a single day, and I hope to take this to the next level by running a marathon in Dallas when I move there next week. Apart from that, I love listening to podcasts on a variety of subjects. I have recently been listening to podcasts by Rich Roll and Andrew Huberman, a neuroscientist from Stanford who publicly presents his research about neuroscience and the experiments his team performs at Stanford University. I also enjoy exploring different types of meditation and like to read about the healing effects of meditation. Other than these, I also enjoy swimming and vacationing in hilly places.



ABOUT CITIBANK

Citibank is one of the world's leading financial institutions and is headquartered in New York City. It has one of the largest customer bases, having served more than 200 million clients, with operations in more than 160 countries. Its U.S. branches are concentrated in six metropolitan areas: New York, Chicago, Los Angeles, San Francisco, Washington, D.C., and Miami. In addition, Citi is a leading philanthropic company focused on catalyzing sustainable growth through transparency, innovation, and market-based solutions.

More THOUGHT LEADERS

Q&A with Charles Southwood, Vice President, N. Europe and MEA at Denodo

Media 7 | September 15, 2021

Charles Southwood, Regional VP at Denodo Technologies is responsible for the company’s business revenues in Northern Europe, Middle East and South Africa. He is passionate about working in rapidly moving and innovative markets to support customer success and to align IT solutions that meet the changing business needs. With a degree in engineering from Imperial College London, Charles has over 20 years of experience in data integration, big data, IT infrastructure/IT operations and Business Analytics....

Read More

Q&A with Sadiqah Musa, Co-Founder at Black In Data

Media 7 | September 1, 2021

Sadiqah Musa, Co-Founder at Black In Data, is also an experienced Senior Data Analyst at Guardian News and Media with a demonstrated history of working in the energy and publishing sectors. She is skilled in Advanced Excel, SQL, Python, data visualization, project management, and Data Analysis and has a strong professional background with a Master of Science (MSc) from The University of Manchester....

Read More

Q&A with Alastair Speare-Cole, President of Insurance at QOMPLX

Media 7 | August 20, 2021

Alastair Speare-Cole, President and General Manager of the Insurance Division at QOMPLX, leads the overall strategy for the business unit, the development of QOMPLX’s underwriting-as-a-service platform, the management of the company’s Managing General Agent (MGA), as well as setting the direction for the company’s next-generation insurance decision platform that leverages a wide variety of data and advanced analytics to provide advanced risk and portfolio management solutions. Prior to joining QOMPLX, he served as Chief Underwriting Officer at Qatar, and he served as the CEO of JLT Towers from 2012 to 2015. He was also COO at Aon Re for ten years and has also held board appointments at reinsurance and banking subsidiaries in the United Kingdom....

Read More


Related News

Big Data Management

The Modern Data Company Recognized in Gartner's Magic Quadrant for Data Integration

The Modern Data Company | January 23, 2024

The Modern Data Company, recognized for its expertise in developing and managing advanced data products, is delighted to announce its distinction as an honorable mention in Gartner's 'Magic Quadrant for Data Integration Tools,' powered by our leading product, DataOS. “This accolade underscores our commitment to productizing data and revolutionizing data management technologies. Our focus extends beyond traditional data management, guiding companies on their journey to effectively utilize data, realize tangible ROI on their data investments, and harness advanced technologies such as AI, ML, and Large Language Models (LLMs). This recognition is a testament to Modern Data’s alignment with the latest industry trends and our dedication to setting new standards in data integration and utilization.” – Srujan Akula, CEO of The Modern Data Company The inclusion in the Gartner report highlights The Modern Data Company's pivotal role in shaping the future of data integration. Our innovative approach, embodied in DataOS, enables businesses to navigate the complexities of data management, transforming data into a strategic asset. By simplifying data access and integration, we empower organizations to unlock the full potential of their data, driving insights and innovation without disruption. "Modern Data's recognition as an Honorable Mention in the Gartner MQ for Data Integration is a testament to the transformative impact their solutions have on businesses like ours. DataOS has been pivotal in allowing us to integrate multiple data sources, enabling our teams to have access to the data needed to make data driven decisions." – Emma Spight, SVP Technology, MIND 24-7 The Modern Data Company simplifies how organizations manage, access, and interact with data using its DataOS (data operating system) that unifies data silos, at scale. It provides ontology support, graph modeling, and a virtual data tier (e.g. a customer 360 model). 
From a technical point of view, it closes the gap from conceptual to physical data model. Users can define conceptually what they want and its software traverses and integrates data. DataOS provides a structured, repeatable approach to data integration that enhances agility and ensures high-quality outputs. This shift from traditional pipeline management to data products allows for more efficient data operations, as each 'product' is designed with a specific purpose and standardized interfaces, ensuring consistency across different uses and applications. With DataOS, businesses can expect a transformative impact on their data strategies, marked by increased efficiency and a robust framework for handling complex data ecosystems, allowing for more and faster iterations of conceptual models. About The Modern Data Company The Modern Data Company, with its flagship product DataOS, revolutionizes the creation of data products. DataOS® is engineered to build and manage comprehensive data products to foster data mesh adoption, propelling organizations towards a data-driven future. DataOS directly addresses key AI/ML and LLM challenges: ensuring quality data, scaling computational resources, and integrating seamlessly into business processes. In our commitment to provide open systems, we have created an open data developer platform specification that is gaining wide industry support.

Read More

Machine Learning

InterSystems Introduces Two New Cloud-Native Smart Data Services to Accelerate Database and Machine Learning Application Development

InterSystems | January 12, 2024

InterSystems, a creative data technology provider dedicated to helping customers solve their most critical scalability, interoperability, and speed problems, today announced general availability of the InterSystems IRIS Cloud SQL and InterSystems IRIS Cloud IntegratedML® services. These fully managed cloud-native smart data services empower developers to build cloud-native database and machine learning (ML) applications in SQL environments with ease. With Cloud SQL and Cloud IntegratedML, developers can access a next generation relational database-as-a-service (DBaaS) that is fast and easy to provision and use. Embedded AutoML capabilities allow developers to easily develop and execute machine learning models with just a few SQL-like commands in a fully-managed, elastic cloud-native environment. Shaping a complete data management portfolio for mission-critical applications As part of the InterSystems Cloud portfolio of smart data services, Cloud SQL and Cloud IntegratedML provide application developers with access to InterSystems proven enterprise-class capabilities as self-service, fully managed offerings on Amazon Web Services (AWS), while providing a fast and seamless on-ramp to the full suite of capabilities in InterSystems IRIS® data platform. InterSystems IRIS is a next-generation data platform designed for organizations implementing smart data fabrics that provide powerful database management, integration, and application development capabilities. By consolidating these capabilities into a single product, InterSystems IRIS accelerates the time it takes to realize value from data, simplifies overall system architectures, and reduces both maintenance effort and costs. “We are excited for the capabilities of InterSystems IRIS to be exposed through these new, easy to deploy and easy to use services,” said Scott Gnau, Global Head of Data Platforms at InterSystems. 
“With native support for AutoML, we give developers the power to build comprehensive, predictive, and prescriptive applications.” Fully managed, enterprise-class reliability with InterSystems IRIS Cloud SQL Cloud SQL makes it easy for application developers to leverage advanced relational database capabilities as a fully managed, secure, scalable, high performance, highly available cloud-native database-as-a-service (DBaaS). Cloud SQL delivers the following benefits for SQL developers: Extremely high performance, especially for ingesting and processing incoming data and performing SQL queries on the data with low latency at scale Fast and easy to provision and use Ability to easily connect client applications via JDBC, ODBC, DB-API, and ADO.NET drivers Automated security, data encryption, and backups Automation of machine learning tasks with InterSystems IRIS Cloud IntegratedML Available as an additional cloud managed service for InterSystems IRIS Cloud SQL customers, Cloud IntegratedML extends the capabilities of Cloud SQL to enable SQL developers to quickly build, tune, and execute machine learning models with just a few SQL-like commands, without moving or copying data to a different environment. A significant advantage of Cloud IntegratedML is the elimination of the need to transfer or replicate data to an external platform to build ML models, or to move ML models to a different environment for execution. 
Cloud IntegratedML delivers the following benefits for SQL developers: Automation of machine learning processes and resource-intensive tasks such as feature engineering, model development, and fine-tuning Seamless integration of models developed and trained with Cloud IntegratedML within Cloud SQL, facilitating real-time predictive insights and prescriptive actions in response to events and transactions This comprehensive suite of smart data services establishes the InterSystems Cloud portfolio of smart data services as an optimal choice for SQL developers seeking a robust, high-performance database solution tailored to their needs. The new Cloud SQL and Cloud IntegratedML services are available through InterSystems Developer Hub. About InterSystems Established in 1978, InterSystems is the leading provider of next-generation solutions for enterprise digital transformations in the healthcare, finance, manufacturing, and supply chain sectors. Its cloud-first data platforms solve interoperability, speed, and scalability problems for large organizations around the globe. InterSystems is committed to excellence through its award-winning, 24×7 support for customers and partners in more than 80 countries. Privately held and headquartered in Cambridge, Massachusetts, InterSystems has 38 offices in 28 countries worldwide. For more information, please visit InterSystems.com.

Read More

Big Data Management

Radiant Logic Announces RadiantOne AI, with New Generative AI Data Assistant “AIDA”

Radiant Logic | January 11, 2024

Radiant Logic, the Identity Data Fabric company, today unveiled RadiantOne AI, its data-lake-powered artificial intelligence engine, and AIDA, its generative AI data assistant. RadiantOne AI is designed to complement an organization's existing tech stack and governance products by correlating data across multiple sources and providing contextual information to drive better decision making. The result is a radical reduction in the time and resources needed to gather the data required to meet audit demands effectively—meaning fewer security gaps and increased compliance with organizational policies.

The first capability to be unveiled on RadiantOne AI is a fully automated user access review (UAR) process guided by AIDA. Many business leaders are familiar with the tedious UAR process: it is crucial for demonstrating compliance and improving organizational security posture, but the laborious work often ends in a "bulk approval" that saves time and checks an audit box instead of accurately reviewing access rights to ensure the right business outcomes. RadiantOne's AI-driven approach aims to transform and streamline this usually time-consuming process from months down to days, even minutes.

"Historically, user access reviews are a highly manual process, a 'necessary evil' within security practices. This approach not only creates fatigue for the team but also introduces a considerable amount of risk," says Dr. John Pritchard, Chief Product Officer at Radiant Logic. "While this may work in the short term to satisfy auditor requirements, the company's assets are never truly protected. There is also still the risk that something may be overlooked, or that someone within the business has retained access to something they shouldn't. With RadiantOne AI and AIDA, existing IAM and IGA processes can be automated and simplified for overworked teams trying to comb through mountains of user access data to make the right decisions to protect their organizations."

With RadiantOne AI, conducting a user access review becomes as easy as following AIDA's guidance. Using large language models to drive advanced data correlation, contextualization, and analysis, combined with an intuitive data visualization dashboard, AIDA reinvents the user access review ritual. Based on an organization's proprietary data, the fully guided UAR experience allows reviewers to pose questions to AIDA in natural language, such as "Where does this access come from?" or "Who else has these access rights?" AIDA highlights potential user access risks, offers expert insight, and suggests remediations or access modifications based on the organization's policies. Changes such as low-risk bulk access approvals or revoking atypical access rights are completed with a click of a button, so the reviews require less training and carry less risk of human error.

RadiantOne AI's AIDA-guided user access review capability provides enterprise organizations with:

- Automated workflows: leverage vast data sets and contextual insights to make intelligent, confident decisions about access rights.
- Simplified compliance: easily detect over-privileged accounts or atypical access rights with intuitive data visualization techniques.
- Greater visibility into user actions: get beyond roles quickly to see who has access to what and how they received that access, so insights and remediations are easily actionable.
- Click-button remediation: based on AIDA's insights and recommendations, reviewers can approve or revoke access individually or take bulk approval/rejection actions with a single click.
- Data in the hands of business owners: put relevant, risk-based identity data insights into the hands of business users, in language they understand, making it easy to adhere to compliance policies.

"User access reviews with AIDA are just the beginning," comments Joe Sander, Radiant Logic's CEO. "Using RadiantOne's AI engine, we see potential to revolutionize identity data management, governance, risk, compliance, and cybersecurity processes by removing complexity as a roadblock. This frees up critical IT and security resources to focus on other business-critical tasks and expands the role of identity to truly be a business enabler."

RadiantOne AI comes on the heels of the integration of Brainwave Identity Analytics into the RadiantOne Identity Data Platform. AIDA will initially be available as a complement to the RadiantOne Identity Analytics solution.

About Radiant Logic

Radiant Logic, the identity data experts, helps organizations turn identity data into a strategic asset that drives automated governance, enhanced security, and operational efficiency. Our RadiantOne Identity Data Platform removes complexity as a roadblock to identity-first strategies by creating an authoritative data source for real-time, context-aware controls. We provide visibility and actionable insights to intelligently detect and remediate risk using AI/ML-powered identity analytics. With RadiantOne, organizations can tap into the wealth of information across the infrastructure, combining context and analytics to deploy governance that works for the most advanced use cases.
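The "atypical access rights" detection described above can be illustrated with a simple peer-group comparison: entitlements held by few members of a user's peer group are flagged for review. This is a minimal, hypothetical sketch, not Radiant Logic's implementation; the field names, sample data, and threshold are invented for illustration.

```python
from collections import Counter

def flag_atypical_access(users, peer_key="dept", threshold=0.5):
    """Flag entitlements held by less than `threshold` of a user's peer
    group -- a crude stand-in for AIDA-style outlier detection."""
    # How many users belong to each peer group.
    group_sizes = Counter(u[peer_key] for u in users)
    # How many users in each group hold each entitlement.
    counts = Counter()
    for u in users:
        for ent in u["entitlements"]:
            counts[(u[peer_key], ent)] += 1

    findings = []
    for u in users:
        for ent in u["entitlements"]:
            share = counts[(u[peer_key], ent)] / group_sizes[u[peer_key]]
            if share < threshold:
                findings.append((u["name"], ent, round(share, 2)))
    return findings

# Hypothetical sample: one finance user holds an entitlement no peer has.
users = [
    {"name": "ana",  "dept": "finance", "entitlements": {"ledger_read"}},
    {"name": "bo",   "dept": "finance", "entitlements": {"ledger_read"}},
    {"name": "cara", "dept": "finance", "entitlements": {"ledger_read"}},
    {"name": "dev",  "dept": "finance",
     "entitlements": {"ledger_read", "prod_db_admin"}},
]

print(flag_atypical_access(users))  # dev's prod_db_admin stands out
```

A reviewer would then see the flagged grant with its peer-group share, which is the kind of contextual signal the release says AIDA surfaces before a bulk approve/revoke decision.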

Read More

Big Data Management

The Modern Data Company Recognized in Gartner's Magic Quadrant for Data Integration

The Modern Data Company | January 23, 2024

The Modern Data Company, recognized for its expertise in developing and managing advanced data products, announced its distinction as an honorable mention in Gartner's "Magic Quadrant for Data Integration Tools," earned on the strength of its flagship product, DataOS.

"This accolade underscores our commitment to productizing data and revolutionizing data management technologies. Our focus extends beyond traditional data management, guiding companies on their journey to effectively utilize data, realize tangible ROI on their data investments, and harness advanced technologies such as AI, ML, and large language models (LLMs). This recognition is a testament to Modern Data's alignment with the latest industry trends and our dedication to setting new standards in data integration and utilization." – Srujan Akula, CEO of The Modern Data Company

The inclusion in the Gartner report highlights The Modern Data Company's role in shaping the future of data integration. The company's approach, embodied in DataOS, enables businesses to navigate the complexities of data management and transform data into a strategic asset. By simplifying data access and integration, DataOS empowers organizations to unlock the full potential of their data, driving insights and innovation without disruption.

"Modern Data's recognition as an honorable mention in the Gartner MQ for Data Integration is a testament to the transformative impact their solutions have on businesses like ours. DataOS has been pivotal in allowing us to integrate multiple data sources, enabling our teams to have access to the data needed to make data-driven decisions." – Emma Spight, SVP Technology, MIND 24-7

The Modern Data Company simplifies how organizations manage, access, and interact with data through DataOS (a data operating system) that unifies data silos at scale. It provides ontology support, graph modeling, and a virtual data tier (for example, a customer 360 model). From a technical point of view, it closes the gap between the conceptual and physical data model: users define conceptually what they want, and the software traverses and integrates the underlying data. DataOS provides a structured, repeatable approach to data integration that enhances agility and ensures high-quality outputs. This shift from traditional pipeline management to data products allows for more efficient data operations, as each "product" is designed with a specific purpose and standardized interfaces, ensuring consistency across different uses and applications. With DataOS, businesses can expect a transformative impact on their data strategies, marked by increased efficiency and a robust framework for handling complex data ecosystems, allowing for more and faster iterations of conceptual models.

About The Modern Data Company

The Modern Data Company, with its flagship product DataOS, revolutionizes the creation of data products. DataOS® is engineered to build and manage comprehensive data products to foster data mesh adoption, propelling organizations toward a data-driven future. DataOS directly addresses key AI/ML and LLM challenges: ensuring quality data, scaling computational resources, and integrating seamlessly into business processes. In our commitment to open systems, we have created an open data developer platform specification that is gaining wide industry support.
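The "data product" idea in the passage above — each product has a stated purpose and a standardized interface, with quality gates between source and consumer — can be sketched generically. This is not DataOS code; the class, field names, and sample data are hypothetical, illustrating only the pattern the release describes.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class DataProduct:
    """A minimal 'data product': a purpose, an output contract, and a
    single standardized read path, so every product is consumed the
    same way regardless of which silo feeds it."""
    name: str
    purpose: str
    schema: dict                    # column -> type: the output contract
    source: Callable[[], list]      # where the rows come from
    checks: list = field(default_factory=list)  # quality gates

    def read(self):
        rows = self.source()
        # Run each quality gate before rows reach a consumer.
        for check in self.checks:
            if not check(rows):
                raise ValueError(f"quality check failed for {self.name}")
        return rows

# Hypothetical "customer 360"-style product over a single CRM silo.
crm = lambda: [{"id": 1, "name": "Ada"}]
product = DataProduct(
    name="customer_360",
    purpose="Unified view of each customer",
    schema={"id": "int", "name": "str"},
    source=crm,
    checks=[lambda rows: all("id" in r for r in rows)],
)
print(product.read())
```

The design point is the one the release makes: consumers call `read()` and inspect `schema` identically for every product, so swapping the underlying silo or pipeline does not change how the data is used.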

Read More

Machine Learning

InterSystems Introduces Two New Cloud-Native Smart Data Services to Accelerate Database and Machine Learning Application Development

InterSystems | January 12, 2024

InterSystems, a creative data technology provider dedicated to helping customers solve their most critical scalability, interoperability, and speed problems, today announced general availability of the InterSystems IRIS Cloud SQL and InterSystems IRIS Cloud IntegratedML® services. These fully managed, cloud-native smart data services empower developers to build cloud-native database and machine learning (ML) applications in SQL environments with ease. With Cloud SQL and Cloud IntegratedML, developers get a next-generation relational database-as-a-service (DBaaS) that is fast and easy to provision and use. Embedded AutoML capabilities let developers build and execute machine learning models with just a few SQL-like commands in a fully managed, elastic, cloud-native environment.

Shaping a complete data management portfolio for mission-critical applications

As part of the InterSystems Cloud portfolio of smart data services, Cloud SQL and Cloud IntegratedML give application developers access to InterSystems' proven enterprise-class capabilities as self-service, fully managed offerings on Amazon Web Services (AWS), while providing a fast and seamless on-ramp to the full suite of capabilities in the InterSystems IRIS® data platform. InterSystems IRIS is a next-generation data platform designed for organizations implementing smart data fabrics, providing powerful database management, integration, and application development capabilities. By consolidating these capabilities into a single product, InterSystems IRIS accelerates the time it takes to realize value from data, simplifies overall system architectures, and reduces both maintenance effort and costs.

"We are excited for the capabilities of InterSystems IRIS to be exposed through these new, easy-to-deploy and easy-to-use services," said Scott Gnau, Global Head of Data Platforms at InterSystems. "With native support for AutoML, we give developers the power to build comprehensive, predictive, and prescriptive applications."

Fully managed, enterprise-class reliability with InterSystems IRIS Cloud SQL

Cloud SQL makes it easy for application developers to leverage advanced relational database capabilities as a fully managed, secure, scalable, high-performance, highly available cloud-native DBaaS. Cloud SQL delivers the following benefits for SQL developers:

- Extremely high performance, especially for ingesting and processing incoming data and performing low-latency SQL queries at scale
- Fast, easy provisioning and use
- Easy client connections via JDBC, ODBC, DB-API, and ADO.NET drivers
- Automated security, data encryption, and backups

Automation of machine learning tasks with InterSystems IRIS Cloud IntegratedML

Available as an additional managed cloud service for Cloud SQL customers, Cloud IntegratedML extends Cloud SQL so that SQL developers can quickly build, tune, and execute machine learning models with just a few SQL-like commands, without moving or copying data to a different environment. A significant advantage of Cloud IntegratedML is that there is no need to transfer or replicate data to an external platform to build ML models, or to move ML models elsewhere for execution. Cloud IntegratedML delivers the following benefits for SQL developers:

- Automation of machine learning processes and resource-intensive tasks such as feature engineering, model development, and fine-tuning
- Seamless integration of models developed and trained with Cloud IntegratedML within Cloud SQL, facilitating real-time predictive insights and prescriptive actions in response to events and transactions

This comprehensive suite establishes the InterSystems Cloud portfolio of smart data services as an optimal choice for SQL developers seeking a robust, high-performance database solution tailored to their needs. The new Cloud SQL and Cloud IntegratedML services are available through the InterSystems Developer Hub.

About InterSystems

Established in 1978, InterSystems is the leading provider of next-generation solutions for enterprise digital transformations in the healthcare, finance, manufacturing, and supply chain sectors. Its cloud-first data platforms solve interoperability, speed, and scalability problems for large organizations around the globe. InterSystems is committed to excellence through its award-winning, 24×7 support for customers and partners in more than 80 countries. Privately held and headquartered in Cambridge, Massachusetts, InterSystems has 38 offices in 28 countries worldwide. For more information, please visit InterSystems.com.
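The "few SQL-like commands" workflow can be sketched concretely. The statement shapes below follow InterSystems' published IntegratedML syntax (CREATE MODEL … PREDICTING, TRAIN MODEL, PREDICT in a SELECT), but the model, column, and table names are hypothetical, and the commented-out execution step assumes a DB-API driver connection of the kind the Cloud SQL benefits list mentions; this is an illustrative sketch, not vendor sample code.

```python
def integratedml_statements(model, target, table, scoring_table):
    """The handful of SQL-like commands IntegratedML layers on plain SQL:
    define a model over existing rows, train it in place, then score new
    rows -- all without moving data out of the database."""
    return [
        f"CREATE MODEL {model} PREDICTING ({target}) FROM {table}",
        f"TRAIN MODEL {model}",
        f"SELECT *, PREDICT({model}) FROM {scoring_table}",
    ]

# Hypothetical churn-prediction example.
stmts = integratedml_statements("ChurnModel", "Churned",
                                "Customers", "NewCustomers")
for s in stmts:
    print(s)
    # Against a live Cloud SQL instance, each statement would be sent
    # through a DB-API cursor, e.g.:
    # cursor.execute(s)
```

The point the release makes is visible in the third statement: prediction is just another SQL expression, so the trained model's output can feed real-time decisions inside ordinary queries.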

Read More
