Q&A with Vishal Srivastava, Vice President (Model Validation) at Citi

Media 7 | September 8, 2021

Vishal Srivastava, Vice President (Model Validation) at Citi, was invited as a keynote speaker to present on fraud analytics using machine learning at the International Automation in Banking Summit in New York in November 2019. Vishal has experience in quantitative risk modeling using advanced engineering, statistical, and machine learning techniques. His academic qualifications, a Ph.D. in Chemical Engineering combined with an MBA in Finance, have enabled him to challenge quantitative risk models with scientific rigor. Vishal's doctoral thesis included the development of statistical and machine learning-based risk models, some of which are currently being used commercially. Vishal has 120+ peer-reviewed citations in areas such as risk management, quantitative modeling, machine learning, and predictive analytics.

As workplaces start to reopen, a hybrid model seems to be the new norm, giving people the flexibility to work both from home and from the office as we emerge from the pandemic.



MEDIA 7: Could you please tell us a little bit about yourself and what made you choose this career path?
VISHAL SRIVASTAVA:
Since my childhood, I have had a deep interest in math and science, which led me to pursue a bachelor's degree in engineering at NIT Trichy (National Institute of Technology, Tiruchirappalli) in India. Later, to advance my knowledge, I pursued MBA and Ph.D. studies in the United States on fully funded university scholarships. During my Ph.D. research, I was intrigued by the many ways mathematics can be applied to quantify risk in engineering systems. Thanks to my advisors, Prof. Carolyn Koh and Prof. Luis Zerpa at the Colorado School of Mines, I got the opportunity to explore ideas ranging from first principles to machine learning and to build risk modeling frameworks for high-pressure flow systems. By the end of my Ph.D., we had developed risk frameworks for use by consortium partners that included leading global energy companies. My Ph.D. research in the quantification of risk was an intellectually stimulating experience that taught me that anything is possible if we keep our focus and energy on a single idea for a sustained period.

Due to the nature of my Ph.D. research, which centered on quantitative risk modeling, and my earlier MBA in Finance, I was contacted by several risk management professionals about potential job opportunities in the finance sector. From a mathematical standpoint, risk management in engineering and finance has a lot of overlap. Computing risk in an engineering system means investigating the factors that can lead to system failure, which can be predicted using first-principles engineering methods or statistical models built on the historical distribution of failure events. Similarly, credit risk can be approached using first-principles mathematical methods or statistical models that forecast defaults as a function of macroeconomic or account-level variables. In both cases, these default or failure events can be modeled with a binary classification model, as in the sketch below. I found it fascinating to explore the different avenues where graduate study in risk engineering could be applied.
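
To make the overlap concrete, here is a minimal sketch of such a binary classification model, assuming Python with scikit-learn and a synthetic dataset; the three features standing in for account-level drivers are hypothetical:

    # Minimal sketch: binary classification of default/failure events.
    # Assumes scikit-learn; the data and feature meanings are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 5000
    X = rng.normal(size=(n, 3))  # e.g., utilization, payment history, macro index
    logits = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2] - 2.0
    y = rng.binomial(1, 1 / (1 + np.exp(-logits)))  # 1 = default/failure event

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)
    print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

A first-principles engineering failure model and a credit default scorecard can both reduce to exactly this shape: features in, probability of a binary event out.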

About six months before my Ph.D. defense, I received an offer from Bank of the West, BNP Paribas, in the model risk division. My role as an Assistant Vice President in the Model Risk Team was to challenge the model-building process of credit and fraud risk models, both of which involved binary classification models. The credit risk models included a logistic regression framework, a well-accepted industry methodology for classification that is easy to interpret. The fraud risk models included both traditional rule-based models and newer RNN (Recurrent Neural Network) based sequential models, which are complex and non-linear (a sketch follows below). From this experience, I learned that, from a regulatory standpoint, model explainability can be a key factor when selecting a model. This was a valuable experience, but I have always enjoyed challenging myself and moving out of my comfort zone. So, about one and a half years later, I accepted an opportunity to work as Vice President in Citibank's Model Risk Division on the Secured Loan team, where my responsibilities included working with international model validation team members to review international and US mortgage default risk models. My focus in this job is to challenge mortgage default risk models across various continents to ensure that these models are compliant with regulations. This experience is extremely insightful because of the varied nature of credit default events across continents as well as the homogeneity of the modeling approaches used to develop the models.
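
For illustration only, here is a minimal sketch of what an RNN-based sequential fraud model can look like, assuming TensorFlow/Keras; the input shape (50 past transactions per account, 8 engineered features each) is a hypothetical choice, not any bank's actual architecture:

    # Minimal sketch of an RNN-based sequential fraud model.
    # Assumes TensorFlow/Keras; shapes and features are hypothetical.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(50, 8)),                   # 50 transactions x 8 features
        tf.keras.layers.SimpleRNN(32),                   # sequence encoder
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(fraud)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])
    model.summary()

Relative to a single-equation logistic scorecard, every prediction here flows through recurrent hidden states, which is precisely what makes regulatory explainability harder.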


M7: What are some of the means through which you select appropriate model validation methodology?
VS:
In an increasingly competitive environment, financial institutions depend on models to optimize risk and make well-informed decisions. Model validation managers need to ensure that every step in the model-building process (data acquisition, conceptual soundness evaluation, model stability analysis, back-testing, performance assessment, and model implementation testing) is well supported by a sound scientific framework. This ensures that critical decisions such as loss estimates, capital allocation, and budget planning are based on scientific and mathematical reasoning rather than intuition. One key aspect of the whole model validation process is to ensure that the given model is compliant with the prevailing regulatory framework. In that regard, model developers present an assessment of all model usages and outputs, and performance is assessed for all usages and outputs across all forecasting horizons (a sketch of one such performance assessment follows below). One caveat of this process is that assessing model risk across all models can be cost-intensive. Therefore, the model review process is prioritized: models of higher importance, those of substantial size and with significant risk contribution, are reviewed with greater frequency. These are some of the key guidelines model validators keep in mind while performing model risk management activities.
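
As a small illustration of the performance assessment step, here is a sketch, assuming Python with scikit-learn and synthetic scores, that computes two metrics commonly used for binary risk models, AUC and the KS statistic:

    # Minimal sketch of one validation step: performance assessment of a
    # binary default model on a holdout sample. Assumes scikit-learn/numpy;
    # y and p stand in for actual outcomes and model scores.
    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    def performance_report(y_true, p_hat):
        auc = roc_auc_score(y_true, p_hat)
        fpr, tpr, _ = roc_curve(y_true, p_hat)
        ks = np.max(tpr - fpr)  # KS: max separation between good/bad score CDFs
        return {"AUC": auc, "KS": ks}

    # Synthetic example
    rng = np.random.default_rng(1)
    y = rng.binomial(1, 0.1, size=2000)
    p = np.clip(0.1 + 0.25 * y + rng.normal(0, 0.15, size=2000), 0.001, 0.999)
    print(performance_report(y, p))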


The US economy has stayed resilient for most of 2021, with macroeconomic factors such as consumer spending and the unemployment rate showing promising trends.



M7: What are some of your go-to model validation techniques that help you effectively identify and manage model risk?
VS:
There is no fixed technique that can be applied uniformly to evaluate whether a model under review is fully fit for purpose. However, at a high level, there are some guiding principles that can be quite useful when deciding to approve or reject a model. The first check is to ensure that enough analysis has been performed on the conceptual soundness of the final methodology proposed in the model. Here the goal is to ensure that there is sufficient evidence to justify that the selected methodology is indeed the right modeling approach. For example, a scorecard model can be built with logistic regression, a decision tree, or a neural network. In such a situation, the model validator reviews whether enough analysis has been performed to establish that the chosen framework suits the given data best and that the selected model can be sufficiently explained to the regulators (a comparison sketch follows below).
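
For illustration, here is a minimal sketch of such a methodology comparison, assuming Python with scikit-learn and a synthetic dataset; in practice the comparison would run on the institution's own development data:

    # Minimal sketch of a conceptual-soundness check: compare candidate
    # scorecard methodologies on the same data. Assumes scikit-learn;
    # the dataset here is synthetic.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=3000, n_features=10, random_state=0)
    candidates = {
        "logistic": LogisticRegression(max_iter=1000),
        "tree": DecisionTreeClassifier(max_depth=4, random_state=0),
        "neural_net": MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                                    random_state=0),
    }
    for name, clf in candidates.items():
        auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
        print(f"{name}: mean CV AUC = {auc:.3f}")

Comparable accuracy across candidates would favor the most explainable framework, which is often the logistic regression.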

Additionally, model developers also explore alternative modeling frameworks to demonstrate why the selected framework is superior to the alternatives. The next aspect of model validation is to find whether there are any inadequacies in the analysis or the model documentation. Anything observed during validation needs to be recorded in the model validation report as findings and recommendations; this comprehensive record later serves as a reference document for model developers when future model enhancements are needed. Next, model developers need to ensure that model assumptions continue to be reasonable and rest on sound theoretical grounds. The consequences of violated model assumptions can be expensive. For example, in the run-up to the financial crisis of 2007–2008, several modelers assumed, based on historical performance and previous data, that the housing market would continue to grow. When the housing market plunged during the crisis, many of those assumptions were violated, and several companies suffered huge financial losses. Hence, it is imperative that each model assumption is carefully evaluated. Model validators also need to ensure that data quality checks have been performed sufficiently. The goal here is a scientific approach to data segmentation, data cleaning, the data sampling methodology, missing values, and data outliers, all of which can severely affect model forecasts. The validator also needs to ensure that data sources, both internal and external (rating agencies, etc.), are well checked and properly recorded, and that all data exclusions are clearly justified. The validator further ensures that the model developer has performed a sound variable selection process and that all variable transformations are well documented. Often, continuous variables are converted to categorical variables through a process called binning, and dummy variables are created; any discrepancy between the variable transformation applied at the modeling stage and at the implementation stage can lead to a large gap between development and production (a binning sketch follows after the figure below). Another very important part of the model validation exercise is model back-testing and performance analysis, which ensures that the model still produces accurate forecasts on recent, unseen data. As described, the three main pillars of the model validation process can be depicted as below:

[Figure: The three main pillars of the model validation process]
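
As a small illustration of the binning step described above, here is a sketch assuming Python with pandas; the variable name and cut points are hypothetical:

    # Minimal sketch of binning a continuous variable and creating dummy
    # variables. Assumes pandas; the cut points are hypothetical.
    import pandas as pd

    df = pd.DataFrame({"utilization": [0.05, 0.35, 0.62, 0.91]})
    bins = [0.0, 0.3, 0.6, 1.0]  # hypothetical cut points
    labels = ["low", "medium", "high"]
    df["util_band"] = pd.cut(df["utilization"], bins=bins, labels=labels)
    dummies = pd.get_dummies(df["util_band"], prefix="util", drop_first=True)
    print(dummies)
    # The exact same cut points must be applied at implementation time;
    # any mismatch creates a development-vs-production discrepancy.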
 

 



Model validation managers need to ensure that every step in the model-building process (data acquisition, conceptual soundness evaluation, model stability analysis, back-testing, performance assessment, and model implementation testing) is well supported by a sound scientific framework.

The model validator reviews whether the model developer has performed back-testing on OOT (out-of-time) and OOS (out-of-sample) data, to ascertain that the model is still accurate when the sample does not come from the data used in the original development period and thereby rule out overfitting (a sketch of these splits follows after the figure below). Next, the model validator must ensure that the model meets all necessary regulatory compliance requirements and that the model documentation fully complies with them. Model validators also need to review model dependencies: if the output of one model feeds as an input into a second model, a performance issue with the first model can adversely affect the performance of the second. These are some of the pointers that model validators use to review a given model. A summary of the model validation review process can be pictorially represented in the diagram below:

[Figure: Summary of the model validation review process]
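
To make the OOT/OOS distinction concrete, here is a minimal sketch of constructing the two holdout samples, assuming Python with pandas and scikit-learn; the synthetic panel and column names are hypothetical:

    # Minimal sketch of OOS and OOT back-testing splits.
    # Assumes pandas/scikit-learn; the panel is synthetic.
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "obs_date": pd.date_range("2015-01-01", periods=5000, freq="D"),
        "default": rng.binomial(1, 0.05, size=5000),
    })

    # Out-of-time (OOT): hold out the most recent 20% of observations entirely.
    cutoff = df["obs_date"].quantile(0.8)
    dev, oot = df[df["obs_date"] <= cutoff], df[df["obs_date"] > cutoff]

    # Out-of-sample (OOS): random holdout within the development window.
    dev_train, oos = train_test_split(dev, test_size=0.3, random_state=0)

    # A model fit on dev_train only is then assessed separately on oos
    # and oot to detect overfitting and performance drift over time.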


M7: What do you see as the most noticeable change right now happening in the workforce, encouraged by the rise of digital technologies?
VS:
There is a Chinese proverb that says, "May you live in interesting times." If we look around, we are indeed living in transformational times that will redefine our future. Many banking tasks that earlier required physical proximity are now being automated through digital innovations, including advancements in computer vision and image recognition. Financial institutions have already introduced several innovative products, from automatic cheque deposits and online cash transfers to digital payments and transactions. Additionally, the rise of digital technologies, coupled with the changes brought by the pandemic, has caused irreversible changes in our workforce. As workplaces start to reopen, a hybrid model seems to be the new norm, giving people the flexibility to work both from home and from the office as we emerge from the pandemic. There is an immense opportunity to retain the best parts of office culture while shedding inefficient tasks and unproductive meetings. One resulting trend is that commercial workplaces are moving into residential complexes as organizations explore new ways to be more efficient. We are seeing a new form of organizational agility that is empowering teamwork across all disciplines and offshore locations. In my opinion, companies that quickly adapt to this remotely operated, flexi-time organizational culture, rather than enforce the orthodoxy of 9-to-5 office-centric work, will have a clear competitive advantage in this new era of work. As digital transactions take precedence, many branch-based banking services, such as payments and other forms of deposits, are fast becoming obsolete because people can use these applications on their cell phones. The ongoing pandemic has accelerated the adoption of the automation and AI processes that were started in the pre-COVID period. All these changes create immense opportunities in the financial sector in general.


M7: What are the top challenges you see for the industry in general?
VS:
The year 2021 has been full of changes in many respects. First, due to the rapid increase in pandemic cases worldwide, many countries witnessed some form of economic slowdown last year. However, with the ongoing vaccination drive and the reopening of offices and workplaces, a synchronized global recovery has been visible in the recent period. The US economy has stayed resilient for most of 2021, with macroeconomic factors such as consumer spending and the unemployment rate showing promising trends. Still, the US unemployment rate last year was among the highest in several decades. The dynamics and volatility of macroeconomic drivers have thus affected many modeling forecasts. This is one of the main challenges from a model risk standpoint: many traditional models do not seem to work as well as they did in pre-pandemic times. The rise in macroeconomic volatility in the wake of COVID-19 has increased the uncertainty in modeling forecasts. When this uncertainty is not handled in a sound manner, it can result in one of two things: an inaccurate forecast from a simple model, or a move toward a more complex model, giving rise to overfitting problems. From a model risk validation standpoint, model complexity is a growing challenge as many products adopt AI and machine learning to make the best use of banking data, improve efficiency, and gain competitive intelligence. For such models, modelers need to explain how the model works, not just how it performs. With greater use of AI and analytics in the model risk domain, model explainability becomes a challenge for modelers. However, there have been significant advancements in model interpretability through Explainable AI techniques such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations); a short sketch follows below. Nevertheless, it is a constant battle to strike the right balance between model accuracy and model explainability in light of regulatory requirements. From a compliance viewpoint, this could also result in an environment that requires greater regulatory intervention in the model risk domain. These are some of the main challenges faced in the model risk domain from a technical standpoint. From a human resources viewpoint, finding good talent in the model risk domain is a big challenge at a time when many technology companies are hiring data scientists for similar roles. All challenges, however, come with great opportunities. Financial institutions are innovating and offering products that are creative and user-friendly. The speed of innovation has improved, and the future only looks more promising.
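
As a small illustration of the SHAP approach, here is a sketch assuming Python with the shap package and a synthetic dataset; it explains a generic tree-based challenger model, not any specific production model:

    # Minimal sketch of post-hoc explainability with SHAP.
    # Assumes the shap package is installed; data is synthetic.
    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:100])
    shap.summary_plot(shap_values, X[:100])  # per-feature contribution overview

The summary plot attributes each prediction to individual features, which is one practical way to discuss a complex model's workings, not just its performance, with regulators.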


M7: When you are not working, what else are you seen doing?
VS:
I love jogging and hiking in nature. I recently finished a 100-day challenge of jogging 3 miles a day without missing a single day, and I hope to take this to the next level by running a marathon in Dallas when I move there next week. Apart from that, I love listening to podcasts on a variety of subjects. I have recently been listening to podcasts by Rich Roll and Andrew Huberman, a neuroscientist from Stanford who publicly presents his research on neuroscience and the fun experiments his team performs at Stanford University. I also enjoy exploring different types of meditation and like to read about the healing effects of meditation. Other than these, I enjoy swimming and vacationing in hilly places.



ABOUT CITIBANK

Citibank is one of the leading financial institutions in the world and is headquartered in New York City. It has one of the largest customer bases, having served more than 200 million clients, with operations in more than 160 countries. Its U.S. branches are concentrated in six metropolitan areas: New York, Chicago, Los Angeles, San Francisco, Washington, D.C., and Miami. In addition, Citi is a leading philanthropic company focused on catalyzing sustainable growth through transparency, innovation, and market-based solutions.
