Everything You Need to Learn About Data Science

January 21, 2019

What is data, really? Data is information about the world and the people in it, collected and analyzed to aid decision making. Although data today is often associated with helpful visualizations such as charts and infographics, it is worth understanding its historical evolution. As far back as 3200 BC, when writing was first being developed in Mesopotamia, scribes recorded data from daily life, such as tax and crop information, to improve their accounting and agricultural systems. As the natural and mathematical sciences advanced, and as better technology was introduced, mathematical statistics transformed into something more powerful: data science. Data science combines what we tend to think of as traditional statistics with computer science to analyze large amounts of data and to find new ways of doing so. While data analytics uses mathematical statistics to model data, data science functions mainly as a discipline for extracting information and drawing new insight from large amounts of data.
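To make that distinction concrete, here is a minimal, hypothetical Python sketch (my illustration, not part of the original article): the first step is classical descriptive statistics on a single variable, while the second mines the whole table for structure, the kind of insight extraction the article attributes to data science. The column names and numbers are invented.

# Hypothetical example: descriptive statistics vs. extracting structure from data.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "crop_yield": rng.normal(50, 10, 500),   # invented figures
    "tax_paid": rng.normal(200, 40, 500),    # invented figures
})

# Traditional statistics: summarise and model a single quantity.
print(df["crop_yield"].describe())

# A data-science step: look for structure across the whole table at once,
# e.g. grouping similar records together, to surface patterns a single summary misses.
df["segment"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(df)
print(df.groupby("segment").mean())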

Spotlight

Endor Software Ltd

Endor is a new spin-off company from the MIT Media Lab that uses the groundbreaking new science of Social Physics to predict human behavior with unmatched accuracy and speed in any domain: finance, healthcare, communication, security, and retail. Headquartered in New York and Tel Aviv, Endor is backed by Eric Schmidt's Innovation Endeavors.

OTHER ARTICLES

The Importance of Big Data in the Food Industry: Strategies and Best Practices

Article | March 5, 2020

Do you know the real importance of Big Data in the food industry? Knowing your audience is important, even fundamental, for any kind of business. In this article we analyze the best practices and the best data-driven strategies (marketing and beyond) for the food industry. Food and beverage is a large and complex sector that embraces a number of very different players, some of whom are interconnected. The ecosystem includes both small producers and large multinational brands, players who cater to everyone and those who target a specific niche; then there are the distributors, clubs, restaurants both small and large, and retail chains.

Read More

MODERNIZED REQUIREMENTS OF EFFICIENT DATA SCIENCE SUCCESS ACROSS ORGANIZATIONS

Article | February 23, 2020

Does the success of companies like Google depend on their algorithms or on their data? Today’s fascination with artificial intelligence (AI) reflects both our appetite for data and our excitement about the new opportunities in machine learning. Amalio Telenti, Chief Data Scientist and Head of Computational Biology at Vir Biotechnology Inc., argues that newcomers to the field of data science are blinded by the shiny object of magical algorithms and forget the critical infrastructure needed to create and manage data in the first place. Data management and infrastructure are the ugly duckling of data science, but they are necessary for a successful program and therefore need to be built with purpose. This requires careful consideration of strategies for data capture, storage of raw and processed data, and instruments for retrieval. Beyond the virtues of analysis, there are also the benefits of facilitated retrieval. While there are many solutions for visualizing corporate or industrial data, there is still a need for flexible retrieval tools in the form of search engines that query the diverse sources and forms of data and information generated at a given company or institution.
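As a rough illustration of the kind of flexible retrieval layer described above, here is a minimal, hypothetical Python sketch (my own toy example, not Telenti's implementation): a tiny inverted index over metadata records drawn from different internal sources, queried with simple AND semantics. The source names and fields are invented.

# Hypothetical example: a toy inverted index over records from diverse internal sources.
from collections import defaultdict

records = [
    {"id": "assay-42", "source": "lab_results", "text": "raw sequencing run batch 7"},
    {"id": "doc-9", "source": "wiki", "text": "processed data storage policy"},
    {"id": "table-3", "source": "warehouse", "text": "retrieval latency metrics processed"},
]

index = defaultdict(set)
for rec in records:
    for token in rec["text"].lower().split():
        index[token].add(rec["id"])

def search(query: str) -> list[str]:
    """Return ids of records containing every query token (AND semantics)."""
    hits = [index.get(t, set()) for t in query.lower().split()]
    return sorted(set.intersection(*hits)) if hits else []

print(search("processed"))          # ['doc-9', 'table-3']
print(search("processed storage"))  # ['doc-9']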

Read More

Man Vs. Machine: Peeking into the Future of Artificial Intelligence

Article | March 15, 2021

Stephen Hawking, one of the finest minds to have ever lived, once famously said, “AI is likely to be either the best or the worst thing to happen to humanity.” This is of course true, with valid arguments both for and against the proliferation of AI. As a practitioner, I have witnessed the AI revolution at close quarters as it unfolded at breathtaking pace over the last two decades. My personal view is that there is no clear black and white in this debate. The pros and cons are very contextual: who is developing it, for what application, in what timeframe, towards what end? It always helps to understand both sides of the debate, so let us take a closer look at what the naysayers say. The most common apprehensions can be grouped into three main categories:

A. Large-scale unemployment: This is the most widely acknowledged of all the risks of AI. Technology and machines replacing humans for certain types of work is not new. We all know about entire professions dwindling, and even disappearing, because of technology. The Industrial Revolution, too, led to large-scale job losses, although many believe these were eventually compensated for by the creation of new avenues, lower prices, higher wages and so on. However, a growing number of economists no longer subscribe to the belief that, over the longer term, technology has positive ramifications for overall employment. In fact, multiple studies have predicted large-scale job losses due to technological advancements; a 2016 UN report concluded that 75% of jobs in the developing world are expected to be replaced by machines. Unemployment, particularly at a large scale, is a perilous thing, often resulting in widespread civil unrest. AI's potential impact in this area therefore calls for very careful political, sociological and economic thinking to counter it effectively.

B. Singularity: The concept of singularity is one of those things that one would have imagined seeing only in the pages of a futuristic sci-fi novel. In theory, however, it is today a real possibility. In a nutshell, singularity refers to the point in human civilization when artificial intelligence reaches a tipping point beyond which it evolves into a superintelligence that surpasses human cognitive powers, thereby potentially posing a threat to human existence as we know it today. While this idea of an explosion of machine intelligence is a pertinent and widely discussed topic, unlike technology-driven unemployment the concept remains primarily theoretical. There is as yet no consensus among experts on whether this tipping point can ever really be reached.

C. Machine consciousness: Unlike the previous two points, which can be regarded as risks associated with the evolution of AI, machine consciousness is perhaps best described as an ethical conundrum. The idea deals with the possibility of implanting human-like consciousness into machines, taking them beyond the realm of 'thinking' to that of 'feeling, emotions and beliefs'. It is a complex topic and requires delving into an amalgamation of philosophy, cognitive science and neuroscience. 'Consciousness' itself can be interpreted in multiple ways, bringing together a plethora of attributes such as self-awareness, cause and effect in mental states, memory and experience. To bring machines to a state of human-like consciousness would entail replicating all the activity that happens at a neural level in a human brain, by no means a meagre task. If and when this were achieved, it would require a paradigm shift in the functioning of the world. Human society as we know it would need a major redefinition to incorporate machines with consciousness co-existing with humans. It sounds far-fetched today, but such questions need pondering right now, so that we can influence the direction we take on AI and machine consciousness while things are still in the 'design' phase, so to speak.

While all of the above are pertinent questions, I believe they do not necessarily outweigh the advantages of AI. Of course, there is a need to address them systematically, control the path of AI development and minimize adverse impact. In my opinion, the greatest and most imminent risk is actually a fourth item, one not often taken into consideration when discussing the pitfalls of AI.

D. Oligarchy: Or, to put it differently, the question of control. By its very nature, AI requires immense investment in technology and science, so there are realistically only a handful of organizations (private or government) that can take AI into the mainstream in a scalable manner and across a vast array of applications. There will be very little room for small upstarts, however smart they might be, to compete at scale against these. Given how much of our lives will likely be steered by AI-enabled machines, those who control that 'intelligence' will hold immense power over the rest of us. The familiar phrase 'with great power comes great responsibility' will take on a whole new meaning: the organizations and individuals at the forefront of generally available AI applications would likely have more power than the most despotic autocrats in history. This is a true and real hazard, aspects of which are already becoming areas of concern in discussions around things like privacy.

In conclusion, AI, like all major transformative events in human history, is certain to have wide-reaching ramifications, but with careful forethought these can be addressed. In the short to medium term, the advantages of AI in enhancing our lives will likely outweigh the risks. Any major development that touches human lives broadly can, if not handled properly, pose immense danger. The best analogy I can think of is religion: when not channelled appropriately, it probably poses a greater threat than any technological advancement ever could.

Read More

Why data analytics is helping telcos keep the lights on during unprecedented times

Article | April 8, 2020

Our ‘new normal’, as we adapt to living and working in a COVID-19 era, highlights the mission-critical role that technology leadership continues to play in all our lives, one where having almost instantaneous access to data and the ability to communicate from anywhere has never been more business critical. Last week, Australia’s major telecommunication service providers were granted authorisation by the ACCC to collaborate to keep critical services operating effectively during the current COVID-19 pandemic.

Read More

