Article | May 17, 2021
One approach for better data utilization is the data fabric, a data management approach that arranges data in a single "fabric" that spans multiple systems and endpoints. The goal of the fabric is to link all data so it can easily be accessed.
"DataOps and data fabric are two different but related things," said Ed Thompson, CTO at Matillion, which provides a cloud data integration platform. "DataOps is about taking practices which are common in modern software development and applying them to data projects. Data fabric is about the type of data landscape that you create and how the tools that you use work together."
Article | March 15, 2021
Stephen Hawking, one of the finest minds to have ever lived, once famously said, “AI is likely to be either the best or the worst thing to happen to humanity.” The observation still rings true, with valid arguments both for and against the proliferation of AI.
As a practitioner, I have witnessed the AI revolution at close quarters as it unfolded at breathtaking pace over the last two decades. My personal view is that there is no clear black and white in this debate. The pros and cons are very contextual – who is developing it, for what application, in what timeframe, towards what end?
It always helps to understand both sides of the debate. So let’s try to take a closer look at what the naysayers say. The most common apprehensions can be clubbed into three main categories:
A. Large-scale Unemployment: This is the most widely acknowledged of all the risks of AI. Technology and machines replacing humans for certain types of work is nothing new. We all know of entire professions dwindling, and even disappearing, due to technology. The Industrial Revolution, too, led to large-scale job losses, although many believe these were eventually offset by the creation of new avenues of work, lower prices, higher wages and so on.
However, a growing number of economists no longer subscribe to the belief that, over the longer term, technology has a positive effect on overall employment. In fact, multiple studies have predicted large-scale job losses due to technological advancements. A 2016 UN report concluded that 75% of jobs in the developing world are at risk of being replaced by machines.
Unemployment, particularly at a large scale, is a very perilous thing, often resulting in widespread civil unrest. AI’s potential impact in this area therefore calls for very careful political, sociological and economic thinking, to counter it effectively.
B. Singularity: The concept of Singularity is one of those things that one would have imagined seeing only in the pages of a futuristic Sci-Fi novel. However, in theory, today it is a real possibility. In a nutshell, Singularity refers to that point in human civilization when Artificial Intelligence reaches a tipping point beyond which it evolves into a superintelligence that surpasses human cognitive powers, thereby potentially posing a threat to human existence as we know it today.
While the idea of this explosion of machine intelligence is a pertinent and widely discussed topic, unlike technology-driven unemployment the concept remains primarily theoretical. There is as yet no consensus amongst experts on whether this tipping point can ever actually be reached.
C. Machine Consciousness: Unlike the previous two points, which can be regarded as risks associated with the evolution of AI, the aspect of machine consciousness perhaps is best described as an ethical conundrum. The idea deals with the possibility of implanting human-like consciousness into machines, taking them beyond the realm of ‘thinking’ to that of ‘feeling, emotions and beliefs’.
It’s a complex topic and requires delving into an amalgamation of philosophy, cognitive science and neuroscience. ‘Consciousness’ itself can be interpreted in multiple ways, bringing together a plethora of attributes like self-awareness, cause and effect in mental states, memory, and experiences. To bring machines to a state of human-like consciousness would entail replicating all the activities that happen at a neural level in a human brain, which is by no means a trivial task.
If and when this is achieved, it would require a paradigm shift in the functioning of the world. Human society, as we know it, would need a major redefinition to accommodate conscious machines co-existing with humans. It sounds far-fetched today, but such questions need pondering right now, so that we can influence the direction of AI and machine consciousness while things are still in the ‘design’ phase, so to speak.
While all of the above are pertinent questions, I believe they don’t necessarily outweigh the advantages of AI. Of course, there is a need to address them systematically, control the path of AI development and minimize adverse impact. In my opinion, the greatest and most imminent risk is actually a fourth item, one not often taken into consideration when discussing the pitfalls of AI.
D. Oligarchy: Or to put it differently, the question of control. Due to the very nature of AI (it requires immense investment in technology and science), there are realistically only a handful of organizations, private or government, that can take AI into the mainstream, at scale and across a vast array of applications. There will be very little room for small upstarts, however smart they might be, to compete against these at scale.
Given how many aspects of our lives will likely be steered by AI-enabled machines, those who control that ‘intelligence’ will hold immense power over the rest of us. The all-too-familiar phrase ‘with great power comes great responsibility’ takes on a whole new meaning: the organizations and individuals at the forefront of generally available AI applications would likely wield more power than the most despotic autocrats in history. This is a true and present hazard, aspects of which are already surfacing as concerns in discussions around issues like privacy.
In conclusion, AI, like all major transformative events in human history, is certain to have wide-reaching ramifications. But with careful forethought these can be addressed. In the short to medium term, the advantages of AI in enhancing our lives will likely outweigh the risks. Any major conception that touches human lives broadly can, if not handled properly, pose immense danger. The best analogy I can think of is religion: when not channelled appropriately, it probably poses a greater threat than any technological advancement ever could.
Article | February 18, 2021
While digital transformation is proving to have many benefits for businesses, perhaps the most significant is the vast amount of data it makes available. And now, with an increasing number of businesses turning their focus online, there is more to be collected on competitors and markets than ever before.
Having all this information to hand may seem like any business owner’s dream, as they can now make insightful and informed commercial decisions based on what others are doing, what customers want and where markets are heading.
But according to Nate Burke, CEO of Diginius, a proprietary software and solutions provider for ecommerce businesses, data should not be all a company relies upon when making important decisions.
Instead, there is a line to be drawn on where data is required and where human expertise and judgement can provide greater value.
Undeniably, the power of data is unmatched. With an abundance of data collection opportunities available online, and with an increasing number of businesses taking them, the potential and value of such information is richer than ever before.
And businesses are benefiting, particularly where data concerns customer behaviour and market patterns. For instance, over the recent Christmas period, data clearly suggested a preference for ecommerce, with marketplaces such as Amazon leading the way thanks to greater convenience and price advantages.
Businesses that recognised and understood the trend could better prepare for the digital shopping season, placing greater emphasis on their online marketing tactics to encourage purchases and allocating resources to ensure product availability and on-time delivery.
Businesses that ignored, or simply did not utilise, the information available to them, on the other hand, would have been left with overstocked shops and, now, out-of-season items that have to be heavily discounted or, worse, disposed of.
Similarly, search and sales data can be used to understand changing consumer needs, and consequently, what items businesses should be ordering, manufacturing, marketing and selling for the best returns.
For instance, DIY understandably peaked in 2020, with increases in searches for “DIY facemasks”, “DIY decking” and “DIY garden ideas”. Those who recognised the trend early had the chance to shift their offerings and marketing accordingly, and really reaped the rewards.
So, paying attention to data certainly does pay off. And thanks to smarter and more sophisticated ways of collecting data online, such as cookies, and through AI and machine learning technologies, the value and use of such information is only likely to increase.
The future, therefore, looks bright. But even with all this potential at our fingertips, there are a number of issues businesses may face if they rely entirely on a data- and insight-driven approach. Just as disregarding data's power and potential can be damaging, so can using it as the sole basis for important decisions.
While the value of data for understanding the market and consumer patterns is undeniable, insight is only as rich as the quality of the data being inputted. So, if businesses are collecting and analysing data on their own activity, and then using it to draw meaningful insight, there should be a strong focus on the data gathering phase: what needs to be collected, why and how it should be collected, and whether it is in fact an accurate representation of what you are trying to monitor or measure.
Human error can become an issue when data gathering is done by individuals or teams who do not completely understand the numbers and patterns they are seeing. A further obstacle arises when various channels and platforms are generating leads or sales for the business: any omission can skew results and paint an inaccurate picture, so decisions based on them may lead to ineffective and unsuccessful changes.
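A quick sketch can make this concrete. The figures and channel names below are entirely hypothetical, but they show how omitting just one data source from an aggregation distorts every share calculated from it:

```python
# Hypothetical example: leads reported per sales channel.
leads_by_channel = {
    "website": 120,
    "marketplace": 300,
    "email": 80,
}

full_total = sum(leads_by_channel.values())  # 500 leads overall

# Suppose the marketplace feed is accidentally omitted from the report.
partial = {k: v for k, v in leads_by_channel.items() if k != "marketplace"}
partial_total = sum(partial.values())  # only 200 leads counted

# The website's share jumps from 24% of leads to 60% --
# a budget decision based on the partial view would be badly skewed.
website_share_full = leads_by_channel["website"] / full_total
website_share_partial = partial["website"] / partial_total
print(f"{website_share_full:.0%} vs {website_share_partial:.0%}")  # 24% vs 60%
```

The error is silent: nothing in the partial data set looks wrong on its own, which is exactly why the gathering phase deserves the scrutiny described above.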
But as data gathering becomes more and more autonomous, the possibility of human error lessens. That, however, may add fuel to the next issue.
Drawing a line
The benefits of data and insights are clear, particularly as the tasks of collection and analysis become less of a burden for businesses and their people thanks to automation and AI advancements. But because data collection and analysis are becoming so effortless, we can only expect more businesses to be doing it, meaning its ability to offer any individual company something unique is diminishing.
So, businesses need to look elsewhere for their edge. And interestingly, this is where a line should be drawn and human judgement should be used in order to set them apart from the competition and differentiate from what everyone else is doing.
It makes perfect sense when you think about it. Your business is unique for a number of reasons, but mainly because of its brand, values, reputation and the perceptions of the service it upholds. And it’s usually these aspects that encourage consumers to choose your business over a competitor.
But often, these intangible aspects are much more difficult to measure and monitor through data collection and analysis, especially in the autonomous, number-driven format that many platforms utilise.
Here, then, there is a great case for businesses to use their own judgement, expertise and experience to determine what works well and what does not. For instance, you can begin to gauge consumer perceptions of a change to your product or services, which quantitative data may not pick up until much later, when sales figures begin to rise or fall. And while the data will eventually register the change, it might not help you decide on an appropriate alternative solution should sales fall.
Human judgement, however, can listen to and understand qualitative feedback and consumer sentiments which can often provide much more meaningful insights for businesses to base their decisions on.
So, when it comes to competitor analysis, insights generated from figure-based data sets and performance metrics are key to keeping pace with the competition.
But if you are looking to get ahead, you may want to consider taking a human approach too.
Article | April 6, 2020
Today when we look around, we see how technology has revolutionized our world. It has created amazing tools and resources, putting useful intelligence at our fingertips. Along the way, it has made our lives easier, faster, more digital and more fun. Whenever we talk about technology today, machine learning and artificial intelligence are among the most popular buzzwords.

Machine learning has proven to be one of the game-changing technological advancements of the past decade. In an increasingly competitive corporate world, machine learning is enabling companies to fast-track digital transformation and move into an age of automation. Some might even argue that AI/ML is required to stay relevant in certain verticals, such as digital payments and fraud detection in banking, or product recommendations.

To understand what machine learning is, it is important to first understand artificial intelligence (AI), defined as a program that exhibits cognitive ability similar to that of a human being. Making computers think like humans and solve problems the way we do is one of the main tenets of artificial intelligence.
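To make the idea of "learning from data" tangible, here is a minimal sketch (not from the article, and deliberately library-free): the program is never told the rule behind the numbers; it infers a linear pattern from example pairs using ordinary least squares, then applies it to unseen input.

```python
def fit_line(xs, ys):
    """Fit y = slope*x + intercept to the data by least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Training data": generated from the hidden rule y = 2x + 1,
# which the program is never shown directly.
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]
slope, intercept = fit_line(xs, ys)
print(round(slope, 2), round(intercept, 2))  # 2.0 1.0

# "Prediction": apply the learned rule to an unseen input.
print(round(slope * 10 + intercept, 2))  # 21.0
```

Real machine learning systems use far more flexible models and far more data, but the shape is the same: estimate parameters from examples, then generalise to new cases.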