The New Promise of Business Intelligence


A lot has changed since business intelligence first came on the scene, promising to help IT deliver information to the business. Information assets have grown exponentially, business users have become far more proficient with technology, and the pace of business has accelerated.

Spotlight

NTT DATA Services

NTT DATA Services partners with clients to navigate and simplify the modern complexities of business and technology, delivering the insights, solutions and outcomes that matter most. As a division of NTT DATA Corporation, a top 10 global IT services and consulting provider, we wrap deep industry expertise around a comprehensive portfolio of infrastructure, applications and business process services.

OTHER ARTICLES

Will We Be Able to Use AI to Prevent Further Pandemics?

Article | March 9, 2021

For many, 2021 has brought hope that they can cautiously start to prepare for a world after Covid. That includes living with the possibility of future pandemics, and starting to reflect on what has been learned from such a brutal shared experience. One of the areas that has come into its own during Covid has been artificial intelligence (AI), a technology that helped bring the pandemic under control and allowed life to continue through lockdowns and other disruptions. Plenty has been written about how AI has supported many aspects of life at work and home during Covid, from videoconferencing to online food ordering. But the role of AI in preventing Covid from causing even more havoc is not as widely known. Perhaps even more importantly, little has been said about the role AI is likely to play in preparing for, responding to, and even preventing future pandemics. From what we saw in 2020, AI will help prevent global outbreaks of new diseases in three ways: prediction, diagnosis, and treatment.

Prediction

Predicting pandemics is all about tracking data that could be an early sign that a new disease is spreading in a disturbing way. The kind of data we're talking about includes public health information about symptoms presenting to hospitals and doctors around the world. Plenty of this is already captured in healthcare systems globally and consolidated into datasets such as the Johns Hopkins reports that many of us are familiar with from news briefings. Firms like Bluedot and Metabiota are part of a growing number of organisations that use AI to track both publicly available and private data and make relevant predictions about public health threats. Both received attention in 2020 for reporting the appearance of Covid before it had been officially acknowledged. Boston Children's Hospital is an example of a healthcare institution doing something similar with its Healthmap resource.

In addition to conventional healthcare data, AI is uniquely able to make use of informal data sources such as social media, news aggregators, and discussion forums, thanks to AI techniques such as natural language processing and sentiment analysis. Firms such as Stratifyd use AI this way in other business settings such as marketing, but also talk publicly about the use of their platform to predict and prevent pandemics. This is an example of so-called augmented intelligence, where AI is used to guide people to noteworthy data patterns but stops short of deciding what they mean, leaving that to human judgement.

Another important part of preventing a pandemic is keeping track of the transmission of disease through populations and geographies. A significant issue in 2020 was the difficulty of tracing people who had come into contact with infection. There was some success using mobile phones for this, and AI was critical in generating useful knowledge from mobile phone data. The emphasis of Covid tracing apps in 2020 was keeping track of how the disease had already spread, but future developments are likely to be about predicting future spread patterns from such data. Prediction is a strength of AI, and the principles used to great effect in weather forecasting are similar to those used to model likely pandemic spread.

Diagnosis

To prevent future pandemics, it won't be enough to predict when a disease is spreading rapidly. To make the most of this knowledge, it's necessary to diagnose and treat cases. One of the greatest early challenges with Covid was the lack of speedy, reliable tests. For future pandemics, AI is likely to be used to create such tests more quickly than was the case in 2020. Creating a useful test involves modelling a disease's response to different testing reagents, finding the right balance between speed, convenience, and accuracy.

AI modelling simulates in a computer how individual cells respond to different stimuli, and could be used to perform virtual testing of many different types of test to accelerate how quickly the most promising ones reach laboratory and field trials. In 2020 there were also several novel uses of AI to diagnose Covid, but there were few national or global mechanisms to deploy these at scale. One example was the use of AI imaging, diagnosing Covid by analysing chest x-rays for features specific to the disease. This would have been especially valuable in places that didn't have access to lab testing equipment. Another example was using AI to analyse the sound of coughs to identify the unique characteristics of a Covid cough. AI research to systematically investigate innovative diagnosis techniques such as these should result in better planning for alternatives to laboratory testing. Faster and wider rollout of this kind of diagnosis would help control the spread of a future disease during the critical period spent waiting for other tests to be developed or shared. This would be another contribution of AI to preventing a localised outbreak from becoming a pandemic.

Treatment

Historically, vaccination has proven to be an effective tool for dealing with pandemics, and it was the long-term solution to Covid for most countries. AI was used to accelerate the development of Covid vaccines, helping cut the development time from years or decades to months. In principle, the use of AI was similar to that described above for developing diagnostic tests. Different drug development teams used AI in different ways, but they all relied on mathematical modelling of how the Covid virus would respond to many forms of treatment at a microscopic level. Much of the vaccine research and modelling focused on the "spike" proteins that allow Covid to attack human cells and enter the body. These are also found in other viruses and were already the subject of research before the 2020 pandemic.

That research allowed scientists to quickly develop AI models to represent the spikes and simulate the effects of different possible treatments. This was crucial in trialling thousands of possible treatments in computer models and pinpointing the most likely successes for further investigation. This kind of mathematical simulation using AI continued during drug development and moved substantial amounts of work from the laboratory to the computer. It also allowed the impact of Covid mutations on vaccines to be assessed quickly, which is why scientists were reasonably confident of developing variants of vaccines for new Covid mutations in days and weeks rather than months.

As a result of the global effort to develop Covid vaccines, the body of data and knowledge about virus behaviour has grown substantially. This means it should be possible to understand new pathogens even more rapidly than Covid, potentially in hours or days rather than weeks. AI has also helped create new ways of approaching vaccine development, for example the use of pre-prepared generic vaccines designed to treat viruses from the same family as Covid. Modifying one of these to match the specific features of a new virus is much faster than starting from scratch, and AI may even have already simulated exactly such a variation.

AI has been involved in many parts of the fight against Covid, and we now have a much better idea than in 2020 of how to predict, diagnose, and treat pandemics, especially those caused by viruses similar to Covid. So we can be cautiously optimistic that vaccine development for any future Covid-like virus will be possible before an outbreak becomes a pandemic. Perhaps a trickier question is how well we will be able to respond if the next pandemic is caused by a virus that is nothing like Covid.

Was Rahman is an expert in the ethics of artificial intelligence, the CEO of AI Prescience and the author of AI and Machine Learning. See more at www.wasrahman.com
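The prediction idea described above, watching a public-health time series for unusual jumps, can be illustrated with a minimal sketch. The data, window size, and threshold here are invented for illustration only; real systems such as those run by Bluedot or Healthmap draw on far richer signals than a single count series.

```python
from statistics import mean, stdev

def flag_anomalies(daily_counts, window=7, threshold=3.0):
    """Flag days where the count exceeds the trailing mean by more than
    `threshold` standard deviations: a crude stand-in for the kind of
    early-warning signal the article describes."""
    alerts = []
    for i in range(window, len(daily_counts)):
        history = daily_counts[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and daily_counts[i] > mu + threshold * sigma:
            alerts.append(i)
    return alerts

# A steady baseline of daily symptom reports, then a sudden spike.
counts = [10, 11, 9, 10, 12, 10, 11, 10, 11, 60]
print(flag_anomalies(counts))  # the spike on the final day is flagged
```

The same moving-baseline principle generalises from case counts to keyword frequencies in social media or search data, which is closer to what augmented-intelligence platforms surface for human review.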


What is Data Integrity and Why is it Important?

Article | March 9, 2021

In an era of big data, data health has become a pressing issue as more and more data is stored and processed. Preserving the integrity of collected data is therefore increasingly necessary, and understanding the fundamentals of data integrity and how it works is the first step in safeguarding that data. Data integrity is essential for the smooth running of a company. If a company's data is altered, deleted, or changed with no way of knowing how, it can have a significant impact on any data-driven business decision.

Data integrity is the reliability and trustworthiness of data throughout its lifecycle: the overall accuracy, completeness, and consistency of data. It can be indicated by a lack of alteration between two updates of a data record, meaning the data is unchanged or intact. Data integrity also covers the safety of data with regard to regulatory compliance, such as GDPR, and security. A collection of processes, rules, and standards implemented during the design phase maintains the safety and security of data. When the information stored in the database remains secure, complete, and reliable no matter how long it has been stored, and no outside forces are harming it, the integrity of the data is safe.

The term data integrity may refer to either a state or a process. As a state, it describes a data set that is valid and accurate. As a process, it describes the measures used to ensure the validity and accuracy of a data set, or of all data contained in a database or a construct. Data integrity can be enforced at both physical and logical levels. Let us understand the fundamentals of data integrity in detail.

Types of Data Integrity

There are two types of data integrity: physical and logical. They are collections of processes and methods that enforce data integrity in both hierarchical and relational databases.

Physical Integrity

Physical integrity protects the wholeness and accuracy of data as it is stored and retrieved. It refers to storing and collecting data as accurately as possible while maintaining its accuracy and reliability. The physical level of data integrity includes protecting data against external forces such as power cuts, data breaches, unexpected catastrophes, human-caused damage, and more.

Logical Integrity

Logical integrity keeps data unchanged as it is used in different ways in a relational database, checking data accuracy in a particular context. Logical integrity is compromised when a human operator makes errors while entering data manually into the database. Other causes include bugs, malware, and transferring data from one site within the database to another with some fields missing. There are four types of logical integrity:

Entity Integrity

A database has columns, rows, and tables. These elements need to be as numerous as required for the data to be accurate, but no more than necessary. Entity integrity relies on the primary key, the unique values that identify pieces of data, making sure each piece of data is listed just once and that no key field in the table is null. It is a feature of relational systems, which store data in tables that can be linked and used in different ways.

Referential Integrity

Referential integrity refers to a series of processes that ensure data is stored and used uniformly. The database structure has rules embedded into it about the usage of foreign keys, ensuring that only proper changes, additions, or deletions of data occur. These rules can include constraints that eliminate duplicate entries, guarantee accurate data, and disallow entries that don't apply. Foreign keys relate data that can be shared or null; for example, employees who share the same work or work in the same department.

Domain Integrity

Domain integrity is a collection of processes ensuring the accuracy of each piece of data in a domain. A domain is the set of acceptable values a column is allowed to contain. It includes constraints that limit the format, type, and amount of data entered. In domain integrity, all values and categories in a database are set, including the nulls.

User-Defined Integrity

This type of logical integrity involves constraints and rules the user defines to fit their specific requirements, because data isn't always fully protected by entity, referential, or domain integrity alone. For example, if an employer creates a column to record corrective actions for employees, that data would fall under user-defined integrity.

Difference between Data Integrity and Data Security

Often, the terms data security and data integrity get muddled and are used interchangeably, but each has a distinct meaning, and the two play an essential role in each other's success. Data security means protecting data against unauthorized access or breach, and it is necessary to ensure data integrity. Data integrity is the result of successful data security; however, the term refers only to the validity and accuracy of data rather than to the act of protecting it. Data security is one of many ways to maintain data integrity. It focuses on reducing the risk of leaking intellectual property, business documents, healthcare data, emails, trade secrets, and more. Facets of data security tactics include permissions management, data classification, identity and access management, threat detection, and security analytics. For modern enterprises, data integrity is necessary for accurate and efficient business processes and for making well-intentioned decisions.
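The entity, referential, and domain rules described above are typically enforced by the database engine itself rather than by application code. The following sketch demonstrates this with SQLite's built-in constraints; the table and column names are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves referential checks off by default
conn.execute("""
    CREATE TABLE departments (
        dept_id INTEGER PRIMARY KEY           -- entity integrity: unique, non-null key
    )""")
conn.execute("""
    CREATE TABLE employees (
        emp_id  INTEGER PRIMARY KEY,
        salary  INTEGER CHECK (salary > 0),   -- domain integrity: acceptable values only
        dept_id INTEGER REFERENCES departments(dept_id)  -- referential integrity
    )""")
conn.execute("INSERT INTO departments VALUES (1)")
conn.execute("INSERT INTO employees VALUES (100, 50000, 1)")  # valid row

# Each violation below is rejected by the database, not by application logic.
violations = [(100, 60000, 1),   # duplicate primary key (entity)
              (101, -5, 1),      # salary outside the domain (domain)
              (102, 40000, 99)]  # department that does not exist (referential)
for bad_row in violations:
    try:
        conn.execute("INSERT INTO employees VALUES (?, ?, ?)", bad_row)
    except sqlite3.IntegrityError as exc:
        print("rejected:", exc)
```

Because the rules live in the schema, every application that touches the database is bound by them, which is what makes constraint-based enforcement stronger than validation scattered across application code.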
Data integrity is critical yet manageable for organizations today through backup and replication processes, database integrity constraints, validation processes, and other system protocols across varied data protection methods.

Threats to Data Integrity

Data integrity can be compromised by human error or malicious acts, and data can be accidentally altered during transfer from one device to another. An assortment of factors can affect the integrity of the data stored in databases. The following are a few examples:

Human Error

Data integrity is put in jeopardy when individuals enter information incorrectly, duplicate or delete data, don't follow the correct protocols, or make mistakes in implementing procedures intended to protect data.

Transfer Error

A transfer error occurs when data is incorrectly transferred from one location in a database to another. This error also happens when a piece of data is present in the destination table but not in the source table of a relational database.

Bugs and Viruses

Data can be stolen, altered, or deleted by spyware, malware, or viruses.

Compromised Hardware

Hardware gets compromised when a computer crashes, a server goes down, or any component malfunctions. Compromised hardware can render data incorrect or incomplete, and can limit or eliminate access to data.

Preserving Data Integrity

Companies make decisions based on data. If that data is compromised or incorrect, it could harm the company to a great extent. Organizations routinely make data-driven business decisions, and without data integrity those decisions can have a significant impact on the company's goals. The threats mentioned above highlight a part of data security that can help preserve data integrity.
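A common safeguard against the transfer errors described above is to compare a cryptographic checksum computed before and after the copy. A minimal sketch using SHA-256, with invented sample data:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical transfer: the sender publishes a digest alongside the file,
# and the receiver recomputes it to confirm nothing changed in transit.
original  = b"customer_id,balance\n42,1000\n"
received  = b"customer_id,balance\n42,1000\n"   # intact copy
corrupted = b"customer_id,balance\n42,9000\n"   # one flipped digit in transit

print(sha256_of(original) == sha256_of(received))   # True
print(sha256_of(original) == sha256_of(corrupted))  # False
```

Even a single-bit change produces a completely different digest, so a mismatch reliably signals that the received copy must be re-fetched rather than trusted.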
Minimize the risk to your organization by using the following checklist:

Validate Input

Require input validation whenever your data set is supplied by any source, known or unknown: an end user, another application, a malicious actor, or any number of other sources. The data should be validated and verified to ensure the input is correct.

Validate Data

Verifying that data processes haven't been corrupted is highly critical. Identify the key specifications and attributes that are necessary for your organization before you validate the data.

Eliminate Duplicate Data

Sensitive data from a secure database can easily end up in a document, spreadsheet, email, or shared folder where employees can see it without proper access. It is therefore sensible to clean up stray data and remove duplicates.

Data Backup

Data backups are a critical process in addition to removing duplicates and ensuring data security. Backing up all necessary information prevents permanent loss of data and goes a long way. Back up data as frequently as possible; this is critical, as organizations may be attacked by ransomware.

Access Control

Another vital data security practice is access control, since individuals in an organization with wrong intent can harm the data. A model in which only users who need access are granted access is a successful form of access control. Sensitive servers should be isolated and bolted to the floor, with only individuals holding an access key allowed to use them.

Keep an Audit Trail

In case of a data breach, an audit trail will help you track down the source. It serves as a trail of breadcrumbs to locate and pinpoint the individual responsible and the origin of the breach.

Conclusion

Data collection was difficult not too long ago; it is no longer an issue. With the amount of data being collected these days, we must maintain its integrity. Organizations can then make data-driven decisions confidently and take the company forward in the right direction.
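The "Validate Input" step above can be sketched in a few lines. The phone-number format and acceptable date range here are assumptions chosen purely for illustration; real rules would come from the organization's own specifications.

```python
import re
from datetime import date

# Assumed format: 10-15 digits with an optional leading "+".
PHONE_RE = re.compile(r"\+?\d{10,15}")

def validate_record(phone: str, joined: date) -> list:
    """Return a list of validation errors; an empty list means the input passes."""
    errors = []
    if not PHONE_RE.fullmatch(phone):
        errors.append("phone number is not in the expected format")
    if not (date(1990, 1, 1) <= joined <= date.today()):
        errors.append("date is outside the acceptable range")
    return errors

print(validate_record("+442071234567", date(2015, 6, 1)))  # [] -> passes
print(validate_record("20-71-23", date(2150, 1, 1)))       # both checks fail
```

Collecting errors in a list rather than raising on the first failure lets the caller report every problem with a submission at once, which is friendlier for end users and clearer in logs.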
Frequently Asked Questions

What are integrity rules?

Precise data integrity rules are short statements about constraints that need to be applied, or actions that need to be taken, on the data when entering the data resource or while in the data resource. For example, precise data integrity rules do not state or enforce accuracy, precision, scale, or resolution.

What is a data integrity example?

Data integrity is the overall accuracy, completeness, and consistency of data. A few examples where data integrity is compromised are:
• When a user tries to enter a date outside an acceptable range
• When a user tries to enter a phone number in the wrong format
• When a bug in an application attempts to delete the wrong record

What are the principles of data integrity?

The principles of data integrity are attributable, legible, contemporaneous, original, and accurate. These simple principles need to be part of a data life cycle, GDP, and data integrity initiatives.


Machine Learning vs. Deep Learning. Which Does Your Business Need?

Article | March 9, 2021

In recent years, artificial intelligence research and applications have accelerated rapidly. Simply saying your organization will incorporate AI isn't as specific as it once was. There are diverse implementation options across AI, machine learning, and deep learning, and within each of them a range of algorithms you can leverage to improve operations and establish a competitive edge. Algorithms are utilized across almost every industry: they power the recommendation engines in media platforms, the chatbots that support customer service efforts at scale, and the self-driving vehicles being tested by the world's largest automotive and technology companies. Because of how diverse AI has become and the many ways in which it works with data, companies must carefully evaluate what will work best for them.


Deep Thomas: Embedding Data-Driven Culture Across Business with Cutting-Edge Innovation

Article | March 9, 2021

A US$ 48.3 billion corporation, the Aditya Birla Group is in the league of Fortune 500. Anchored by an extraordinary force of over 120,000 employees belonging to 42 nationalities, the Group is built on a strong foundation of stakeholder value creation. With over seven decades of responsible business practices, Aditya Birla Group's businesses have grown into global powerhouses in a wide range of sectors: metals, chemicals, pulp & fibre, textiles, carbon black, cement, and telecom. Today, over 50% of its revenues flow from overseas operations that span 36 countries in North and South America, Africa, and Asia. The Group Data 'n' Analytics Cell (GDNA) is the big data and analytics arm of the Aditya Birla Group, created at its centre to strategize and partner with 18+ Group businesses across B2B and B2C domains to deliver on its strategic priorities through the power of AI. The team combines strong analytics and domain expertise, drawing best-in-class talent from leading global and Indian businesses, and leverages cutting-edge tools and advanced AI algorithms built on a highly scalable and robust big data infrastructure to mine and act upon petabytes of structured and unstructured data.


