The Journey to Intelligent ERP

January 24, 2019

To drive innovation and grow the business, the use of technologies such as social, mobile, analytics, cloud and artificial intelligence (AI) is a strategic imperative for more and more enterprises. This process often starts with the customer interface, such as a mobile app or web self-service portal. But companies are realizing that overhauling their core processes is equally (if not more) important. That is because that core, often made up of enterprise resource planning (ERP) applications deployed in the 1980s or 1990s, is the source of all the data required by every user interface and business process.

Spotlight

DataRobot

DataRobot offers a machine learning platform for data scientists of all skill levels to build and deploy accurate predictive models in a fraction of the time it used to take. The technology addresses the critical shortage of data scientists by changing the speed and economics of predictive analytics. The DataRobot platform uses massively parallel processing to train and evaluate thousands of models in R, Python, Spark MLlib, H2O, and other open source libraries. It searches through millions of possible combinations of algorithms, pre-processing steps, features, transformations, and tuning parameters to deliver the best models for your dataset and prediction target…

OTHER ARTICLES

Choosing External Data Sources: 4 Characteristics to Look For

Article | May 12, 2021

Decision-makers at consumer brands are finally realizing the full transformative potential of external data, but they're also realizing how difficult it is to source. Forrester reports that 87% of data and analytics decision-makers have implemented or are planning initiatives to source more external data. And ownership of those initiatives is spreading beyond the IT team: 29% of those surveyed say that IT has primary ownership of data sourcing, down from 37% in 2016. To support these projects, organizations are increasingly turning to a new specialist, the data hunter, who identifies and vets external data sources. Building external data-focused teams is a lot of work, and many leaders are finding that external data is difficult to scale as the source list grows. Perhaps that's why 66% of the decision-makers surveyed by Forrester report that they're using or planning to use external service providers for data, analytics, and insights.

Read More
BIG DATA MANAGEMENT

How can machine learning detect money laundering?

Article | December 16, 2020

In this article, we will explore different techniques for detecting money laundering. Despite many promising applications across the financial services sector, the adoption of Artificial Intelligence and Machine Learning (ML) for Anti-Money Laundering (AML) has generally been slow.

What are money laundering and anti-money laundering?

Money laundering is the movement of unlawfully obtained money in ways that conceal its criminal origin. Anti-money laundering comprises the activities that prevent, or aim to prevent, money laundering from occurring. The United Nations estimates that money-laundering transactions in a single year amount to 2–5% of global GDP, or roughly $800 billion to $3 trillion. In 2019, regulators and government agencies imposed fines of more than $8.14 billion. Even with these staggering numbers, estimates suggest that only about 1% of illicit global financial flows are ever seized by authorities. AML operations in banks consume an enormous amount of manpower, resources, and capital to run the process and comply with regulations.

What does money laundering cost?

In 2019, Celent estimated that AML spending reached $8.3 billion for technology and $23.4 billion for operations. This investment is dedicated to ensuring compliance. As we have seen in many cases, reputational costs can also carry a hefty price: in 2012, HSBC was penalized after an estimated £5.57 billion was laundered through the bank over at least seven years.

What is the current situation of banks applying ML to stop money laundering?

Given the wealth of new tools available to banks, the potential reputational risk, the amount of capital involved, and the enormous cost of fines and penalties, detection rates should be far better than they are. Despite strong national efforts to curb illicit cash flows, only a remarkably small share of money laundering is ever detected: a success rate of about 2% on average. In September 2019, the Dutch banks ABN Amro, Rabobank, ING, Triodos Bank, and Volksbank announced a joint transaction-monitoring initiative to fight money laundering. A typical challenge in transaction monitoring is the generation of a vast number of alerts, which in turn requires operations teams to triage and process them. ML models can detect and recognize suspicious behaviour, and they can also classify alerts into categories such as critical, high, medium, or low risk, so that critical or high alerts are routed to senior analysts for urgent investigation. The biggest problem today is the immense number of false positives: estimates suggest that between 95% and 99% of generated alerts are false positives, which puts enormous pressure on banks. Investigating false positives is time-consuming and expensive; one recent report found that banks were spending nearly €3.01 billion every year on it. Institutions are looking for more efficient ways to deal with financial crime and, in this context, machine learning can prove to be a significant tool. As financial activity grows, the sheer volume and velocity of transactions require an effective monitoring framework that can process them quickly, ideally in real time.
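To make the alert-triage idea concrete, here is a minimal, hypothetical sketch of such a risk classifier using scikit-learn. The features, labels, and figures are synthetic assumptions for illustration, not a production AML model.

```python
# Hypothetical sketch: triaging AML alerts into risk tiers with a
# supervised classifier. All data and feature names are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n = 5000

# Synthetic alert features: transaction amount, transfers in the past 24h,
# share of counterparties in high-risk jurisdictions, account age in days.
X = np.column_stack([
    rng.lognormal(mean=8, sigma=1.5, size=n),   # amount
    rng.poisson(lam=3, size=n),                 # velocity
    rng.beta(a=1, b=9, size=n),                 # high-risk share
    rng.integers(1, 3650, size=n),              # account age
])

# Synthetic labels: 0 = low, 1 = medium, 2 = high/critical risk.
y = rng.choice([0, 1, 2], size=n, p=[0.90, 0.07, 0.03])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(
    n_estimators=200, class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)

# Route alerts: anything predicted medium or above goes to senior analysts.
pred = clf.predict(X_test)
print(classification_report(y_test, pred,
                            target_names=["low", "medium", "high"]))
```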
What types of machine learning algorithms can identify money-laundering transactions?

For supervised machine learning, it is essential to have historical data in which past events are accurately labelled and input variables are properly captured. If biases or errors are left in the data, they are passed on to the model, producing inaccurate results. When accurately labelled historical data is not available, it is better to use unsupervised machine learning: it uncovers unknown patterns and can flag suspicious activity without prior knowledge of what a money-laundering scheme looks like.

What are the different techniques to detect money laundering?

• K-means sequence miner algorithm: ingest banking transactions, run frequent-pattern-mining algorithms over them to identify money laundering, cluster transactions and suspicious activities, and finally display the clusters on a chart.
• Time-series Euclidean distance: a sequence-matching algorithm that detects money laundering by sequentially flagging suspicious transactions. The method exploits two references to recognize suspicious transactions: the history of each individual account, and its exchange data with other accounts.
• Bayesian networks: build a model of the user's previous activity that serves as a baseline for expected future behaviour; transactions that deviate significantly from that model can be flagged as suspicious.
• Cluster-based local outlier factor algorithm: detect money laundering by combining clustering techniques with outlier detection.

Conclusion

For banks, now is the ideal time to deploy ML models into their ecosystems. At the same time, growing experience with ML implementations has prompted a discussion about the feasibility of these solutions and the degree to which ML should be trusted and potentially replace human analysis and decision-making. To fully realize ML's promise, banks need to keep deepening their understanding of its strengths, risks, and limitations and, most critically, to establish an ethical framework within which the production and use of ML can be governed, so that the feasibility and effect of these emerging models can be proven and eventually trusted.
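As a companion sketch, the cluster-based local outlier factor idea from the list above can be approximated with scikit-learn's LocalOutlierFactor; again, the transaction features and contamination rate below are illustrative assumptions.

```python
# Minimal sketch of unsupervised outlier detection on synthetic
# transaction features, approximating the LOF-based approach above.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(7)

# Most transactions cluster around normal behaviour; a few are extreme.
normal = rng.normal(loc=[100, 2], scale=[30, 1], size=(980, 2))
suspicious = rng.normal(loc=[5000, 40], scale=[500, 5], size=(20, 2))
X = np.vstack([normal, suspicious])  # columns: amount, daily frequency

lof = LocalOutlierFactor(n_neighbors=20, contamination=0.02)
labels = lof.fit_predict(X)          # -1 = outlier, 1 = inlier

print(f"{(labels == -1).sum()} transactions flagged for review")
```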

Read More

Why Data Science Needs DataOps

Article | March 31, 2020

DataOps helps reduce the time data scientists spend preparing data for use in applications; such tasks consume roughly 80% of their time today. We're still hopeful that digital transformation will provide the insights businesses need from big data. As a data scientist, you're probably aware of the growing pressure from companies to extract meaningful insights from data and to find the stories needed for impact. No matter how in-demand data science is in the employment numbers, equal pressure is rising on data scientists to deliver business value, and no wonder: we're approaching the age where data science and AI draw a line in the sand between the companies that remain competitive and the ones that collapse. One answer to this pressure is the rise of DataOps. Let's take a look at what it is and how it could provide a path for data scientists to give businesses what they've been after.

Read More

Will Quantum Computers Make Supercomputers Obsolete in the Field of High Performance Computing?

Article | May 12, 2021

If you want the short answer without the extra details, here it is: yes, quantum computers could replace supercomputers in the field of high performance computing, under certain conditions. If you want to know how and why this scenario is possible, and what those conditions are, I'd encourage you to read the rest of this article. To start, we will run through some very simple definitions.

Definitions

If you work in the IT sector, you have probably heard the terms 'high performance computing', 'supercomputer', and 'quantum computer' many times. These words are thrown around quite often nowadays, especially in data science and artificial intelligence. Perhaps you have deduced their meanings from context, but you may not have had the chance to sit down and research what they are and why they are used. So let's go through their definitions, for a better understanding of each concept.

• High performance computing: the process of carrying out complex calculations and computations on data at very high speed, much faster than regular computing.
• Supercomputer: a type of computer used to perform powerful computations quickly and efficiently.
• Quantum computer: a type of computer that makes use of quantum-mechanical concepts such as entanglement and superposition to carry out powerful computations.

Now that you've got the gist of these concepts, let's dive in a little more to see how they are implemented throughout the world.

Background

High performance computing is a thriving area of information technology, and rightly so, given the rapid surge in the amount of data that is produced, stored, and processed every second. Over the last few decades, data has become increasingly significant to large corporations, small businesses, and individuals because of its tremendous potential for growth and profit. By properly analysing data, it is possible to make beneficial predictions and determine optimal strategies.

The challenge is the huge amount of data generated every day. If traditional computers were used to manage and compute all of it, the results would take an unreasonably long time to produce, and massive amounts of time, computational power, and money would be consumed along the way. Supercomputers were introduced to tackle this issue: they compute huge quantities of data at much higher speeds than a regular computer, largely by spreading work across many processors in parallel (the toy sketch after the list below illustrates the idea). They are a great investment for businesses that need large amounts of data processed often. Their main advantage is that they can do what regular computers do, but much more quickly and efficiently. To date, they have been applied in the following domains:

• Nuclear Weapon Design
• Cryptography
• Medical Diagnosis
• Weather Forecasting
• Online Gaming
• Study of Subatomic Particles
• Tackling the COVID-19 Pandemic
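Here is that toy sketch: a hypothetical, embarrassingly parallel workload split across CPU cores with Python's multiprocessing. It illustrates the parallelism principle only, not real HPC code.

```python
# Toy illustration of the parallelism behind high performance computing:
# the same workload run serially, then split across CPU cores.
import time
from multiprocessing import Pool

def heavy_task(n: int) -> float:
    # Stand-in for an expensive numerical kernel.
    return sum(i ** 0.5 for i in range(n))

if __name__ == "__main__":
    workloads = [2_000_000] * 8

    start = time.perf_counter()
    serial = [heavy_task(w) for w in workloads]
    t_serial = time.perf_counter() - start

    start = time.perf_counter()
    with Pool() as pool:                 # one worker per core by default
        parallel = pool.map(heavy_task, workloads)
    t_parallel = time.perf_counter() - start

    print(f"serial: {t_serial:.2f}s, parallel: {t_parallel:.2f}s")
```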
Quantum computers, on the other hand, work on a completely different principle. Unlike regular computers, which use bits as the smallest units of data, quantum computers generate and manipulate 'qubits' or 'quantum bits', realized with subatomic particles such as electrons or photons. Qubits have two interesting quantum properties that allow them to compute data powerfully:

• Superposition: Like regular computer bits, qubits can be in a state of 1 or 0. However, they can also be in both states simultaneously. This combined state allows quantum computers to evaluate a large number of possible outcomes at once; when the final outcome is measured, the qubits fall back into a state of either 1 or 0. This property is called superposition.
• Entanglement: Pairs of qubits can exist in such a way that the two members of the pair share a single quantum state. In this situation, changing the state of one qubit instantly changes the state of the other. This property is called entanglement.
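The two properties can be demonstrated numerically with a minimal statevector sketch in plain NumPy (no quantum hardware or SDK assumed). It prepares the Bell state (|00> + |11>)/√2: a Hadamard gate puts the first qubit into superposition, and a CNOT then entangles it with the second, so measurement yields 00 or 11 with equal probability and never 01 or 10.

```python
# Two-qubit statevector simulation of a Bell state in plain NumPy.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)                                  # identity on one qubit
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                 # control = first qubit

state = np.zeros(4)
state[0] = 1.0                                  # start in |00>

state = np.kron(H, I2) @ state                  # superposition on qubit 0
state = CNOT @ state                            # entangle the pair

probs = np.abs(state) ** 2                      # Born rule: measurement odds
for basis, p in zip(["00", "01", "10", "11"], probs):
    print(f"P(|{basis}>) = {p:.2f}")            # 0.50 for |00> and |11> only
```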
The most promising applications of quantum computers so far include:

• Cybersecurity
• Cryptography
• Drug Designing
• Financial Modelling
• Weather Forecasting
• Artificial Intelligence
• Workforce Management

Despite their distinct designs, both supercomputers and quantum computers are immensely capable of providing users with strong computing facilities. The question is: how do we know which type of system would be best for high performance computing?

A Comparison

High performance computing requires robust machines that can deal with large amounts of data. This involves the collection, storage, manipulation, computation, and exchange of data in order to derive insights that benefit the user. Supercomputers have been used successfully for such operations so far. When the concept of a quantum computer first came about, it caused quite a revolution within the scientific community; people recognized its innumerable and widespread abilities and began working on ways to turn this theoretical innovation into a realistic breakthrough.

What makes a quantum computer so different from a supercomputer? Table 1.1 (a side-by-side comparison, not reproduced in this excerpt) supports the following conclusions:

1. Supercomputers have been around longer and are therefore more mature. Quantum computers are relatively new and still require a great deal of research before their workings are sufficiently understood and a sustainable system can be developed.
2. Supercomputers are easier to provide inputs to, while quantum computers need a different input mechanism.
3. Supercomputers are fast, but quantum computers are much faster.
4. Supercomputers and quantum computers share some applications.
5. Quantum computers can be perceived as extremely powerful, highly advanced supercomputers.

Thus, while supercomputers surpass quantum computers in maturity and length of existence, quantum computers are comparatively much stronger in capability and performance.

The Verdict

We have seen what supercomputers and quantum computers are and how they apply to real-world scenarios, particularly in high performance computing, and we have gone through their differences. Although supercomputers have worked well so far, and continue to serve researchers, organisations, and individuals who need intense computational power to process enormous amounts of data quickly, quantum computers have the potential to perform much better and to deliver faster, more adequate results. Quantum computers can therefore make supercomputers obsolete, especially in the field of high performance computing, but only if researchers find a way to make the development, deployment, and maintenance of these machines scalable, feasible, and optimal for consumers.

Read More
