Securing Big Data Platforms: Machine Learning and Encryption

TAN WEE KWANG | June 6, 2016

Government agencies within smart cities see a critical advantage in big data analytics, especially when it comes to accessing large historical data sets for smarter decision-making. However, many public sector organizations still struggle to make information-driven decisions, and stores of untapped big data sit within these agencies because of accessibility difficulties or security concerns. When it comes to security, agencies must comply with the strictest regulatory mandates while handling sensitive information; yet the business of government is evolving so quickly that the public sector needs something scalable, flexible, and secure…
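As a purely illustrative aside on the encryption half of that topic, a minimal sketch of one common pattern, field-level encryption of sensitive values before records enter a shared analytics store, is shown below. It assumes Python's cryptography package (Fernet); the record layout, field names, and key handling are hypothetical and not drawn from the article.

    from cryptography.fernet import Fernet

    # In practice the key would come from a key-management service, not be generated inline.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    # Hypothetical record: only the sensitive field is encrypted, the rest stays queryable.
    record = {"case_id": "A-1024", "citizen_ssn": "987-65-4321", "status": "open"}
    record["citizen_ssn"] = cipher.encrypt(record["citizen_ssn"].encode()).decode()
    print(record)

    # Authorized consumers holding the key can recover the original value.
    original = cipher.decrypt(record["citizen_ssn"].encode()).decode()

Analysts can still filter and aggregate on the unencrypted columns, while the protected field is useless without access to the key.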

Spotlight

Greenplum Database

Greenplum Database® is an advanced, fully featured, open source data warehouse. It provides powerful and rapid analytics on petabyte-scale data volumes. Uniquely geared toward big data analytics, Greenplum Database is powered by the world’s most advanced cost-based query optimizer, delivering high analytical query performance on large data volumes. The Greenplum Database® project is released under the Apache 2 license. We want to thank all our current community contributors and welcome new contributions. For the Greenplum Database community, no contribution is too small; we encourage contributions of all kinds.
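Because Greenplum speaks the PostgreSQL wire protocol, it can be reached from ordinary client libraries. The sketch below is a minimal, hypothetical example using Python's psycopg2 driver; the host, database, table, and column names are placeholders rather than anything from this page.

    import psycopg2  # Greenplum is reachable through standard PostgreSQL drivers

    conn = psycopg2.connect(host="gp-master.example.org", port=5432,
                            dbname="analytics", user="analyst", password="...")
    with conn, conn.cursor() as cur:
        # A typical analytic aggregation; the cost-based optimizer plans it across segments.
        cur.execute("""
            SELECT region, date_trunc('month', sale_date) AS month, sum(amount) AS revenue
            FROM sales
            GROUP BY 1, 2
            ORDER BY 2, 1
        """)
        for region, month, revenue in cur.fetchall():
            print(region, month, revenue)
    conn.close()

The same query would run unchanged on a single PostgreSQL instance; on a Greenplum cluster the optimizer distributes the scan and aggregation across the segment hosts.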

OTHER ARTICLES

Time Machine: Big Data of the Past for the Future of Europe

Article | February 24, 2020

Emerging technology has the power to transform history and cultural heritage into a living resource. The Time Machine project will digitise archives from museums and libraries, using Artificial Intelligence and Big Data mining, to offer richer interpretations of our past. An inclusive European identity benefits from a deep engagement with the region’s past. The Time Machine project set out to offer this by exploiting already freely accessible Big Data sources. EU support for a preparatory action enabled the development of a decade-long roadmap for the large-scale digitisation of kilometres of archives, from large museum and library collections, into a distributed information system. Artificial Intelligence (AI) will play a key role at each step, from digitisation planning to document interpretation and fact-checking. Once embedded, this infrastructure could create new business and employment opportunities across a range of sectors including ICT, the creative industries and tourism.

Read More

Will Quantum Computers Make Supercomputers Obsolete in the Field of High Performance Computing?

Article | May 12, 2021

If you want an explicit answer without the extra details, here it is: yes, there is a possibility that quantum computers can replace supercomputers in the field of high performance computing, under certain conditions. If you want to know how and why this scenario is a possibility, and what those conditions are, I’d encourage you to peruse the rest of this article. To start, we will run through some very simple definitions.

Definitions

If you work in the IT sector, you have probably heard the terms ‘high performance computing’, ‘supercomputer’ and ‘quantum computer’ many times. These words are thrown around quite often nowadays, especially in the areas of data science and artificial intelligence. Perhaps you have deduced their meanings from the context in which they are used, but you may not have had the opportunity to sit down and research what they are and why they are used. It is therefore worth going through their definitions so that you have a better understanding of each concept.

• High Performance Computing: The practice of carrying out complex calculations and computations on data at very high speed, much faster than regular computing.
• Supercomputer: A computer built to perform powerful computations quickly and efficiently.
• Quantum Computer: A computer that uses concepts from quantum mechanics, such as entanglement and superposition, to carry out powerful computations.

Now that you have the gist of these concepts, let’s dive in a little more to see how they are applied throughout the world.

Background

High performance computing is a thriving area of information technology, and rightly so, given the rapid surge in the amount of data that is produced, stored, and processed every second. Over the last few decades, data has become increasingly significant to large corporations, small businesses, and individuals because of its tremendous potential for growth and profit. By properly analysing data, it is possible to make beneficial predictions and determine optimal strategies.

The challenge is that huge amounts of data are generated every day. If traditional computers were used to manage and compute all of this data, the results would take an unreasonably long time to produce, and massive amounts of time, computational power, and money would be required.

Supercomputers were introduced to tackle this issue. They make it possible to compute huge quantities of data at much higher speeds than a regular computer, and they are a sound investment for organisations that need to process large amounts of data frequently. The main advantage of supercomputers is that they can do what regular computers do, but much more quickly and efficiently. To date, they have been applied in domains such as:

• Nuclear Weapon Design
• Cryptography
• Medical Diagnosis
• Weather Forecasting
• Online Gaming
• Study of Subatomic Particles
• Tackling the COVID-19 Pandemic

Quantum computers, on the other hand, work on a completely different principle.
Unlike regular computers, which use bits as the smallest units of data, quantum computers generate and manipulate ‘qubits’, or ‘quantum bits’, typically realised in subatomic particles such as electrons or photons. Qubits have two interesting quantum properties that allow them to compute data so powerfully (a short numerical sketch illustrating both appears just before the verdict below):

• Superposition: Qubits, like regular computer bits, can be in a state of 1 or 0. However, they can also be in both states simultaneously. This combined state allows a quantum computer to explore a large number of possible outcomes at once; when the final outcome is determined, each qubit falls back into a state of either 1 or 0. This property is called superposition.
• Entanglement: Pairs of qubits can be prepared so that the two members of the pair exist in a single quantum state. In such a situation, changing the state of one qubit instantly changes the state of the other. This property is called entanglement.

Their most promising applications so far include:

• Cybersecurity
• Cryptography
• Drug Designing
• Financial Modelling
• Weather Forecasting
• Artificial Intelligence
• Workforce Management

Despite their distinct features, both supercomputers and quantum computers are immensely capable of providing users with strong computing facilities. The question is, how do we know which type of system is best for high performance computing?

A Comparison

High performance computing requires robust machines that can deal with large amounts of data: the collection, storage, manipulation, computation, and exchange of data to derive insights that benefit the user. Supercomputers have been used successfully for such operations so far. When the concept of a quantum computer first came about, it caused quite a revolution within the scientific community. People recognised its widespread potential and began working on ways to turn this theoretical innovation into a practical breakthrough.

What makes a quantum computer so different from a supercomputer? Let’s have a look at Table 1.1 below. From the table, we can draw the following conclusions about supercomputers and quantum computers:

1. Supercomputers have been around for longer and are therefore more mature. Quantum computers are relatively new and still require a great deal of research before their workings are sufficiently understood and a sustainable system can be developed.
2. Supercomputers are easier to provide inputs to, while quantum computers need a different input mechanism.
3. Supercomputers are fast, but quantum computers are much faster.
4. Supercomputers and quantum computers share some applications.
5. Quantum computers can be perceived as extremely powerful and highly advanced supercomputers.

Thus, while supercomputers surpass quantum computers in maturity and span of existence, quantum computers are much stronger in terms of capability and performance.
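To make the superposition and entanglement properties described above a little more concrete, here is a minimal numerical sketch using NumPy. It is not a quantum program; it only reproduces the measurement statistics of a single qubit put into superposition by a Hadamard gate and of an entangled Bell pair.

    import numpy as np

    # Single qubit: apply a Hadamard gate to |0> to create an equal superposition.
    ket0 = np.array([1.0, 0.0])                     # state |0>
    hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    superposed = hadamard @ ket0                    # (|0> + |1>) / sqrt(2)
    print("P(0), P(1) =", np.abs(superposed) ** 2)  # roughly 0.5 each on measurement

    # Entangled Bell pair: amplitudes only on |00> and |11>.
    bell = np.zeros(4)
    bell[0] = bell[3] = 1 / np.sqrt(2)
    outcomes = np.random.choice(["00", "01", "10", "11"], size=1000, p=np.abs(bell) ** 2)
    print({o: int((outcomes == o).sum()) for o in ["00", "11"]})  # only correlated results

Sampling the Bell pair never produces 01 or 10: measuring one qubit fixes the other, which is exactly the correlation the entanglement bullet describes.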
The Verdict

We have seen what supercomputers and quantum computers are and how they can be applied in real-world scenarios, particularly in the field of high performance computing. We have also gone through their differences and made some significant observations.

Supercomputers have served well so far, and they continue to give researchers, organisations, and individuals the intense computational power needed to process enormous amounts of data quickly. Quantum computers, however, have the potential to perform far better and deliver faster, more adequate results. Thus, quantum computers can potentially make supercomputers obsolete, especially in the field of high performance computing, if and only if researchers can make the development, deployment, and maintenance of these machines scalable, feasible, and optimal for consumers.

Read More

Man Vs. Machine: Peeking into the Future of Artificial Intelligence

Article | March 15, 2021

Stephen Hawking, one of the finest minds to have ever lived, once famously said, “AI is likely to be either the best or the worst thing to happen to humanity.” This is of course true, with valid arguments both for and against the proliferation of AI. As a practitioner, I have witnessed the AI revolution at close quarters as it unfolded at breathtaking pace over the last two decades. My personal view is that there is no clear black and white in this debate. The pros and cons are very contextual: who is developing it, for what application, in what timeframe, towards what end? It always helps to understand both sides of the debate, so let’s take a closer look at what the naysayers say. The most common apprehensions can be grouped into three main categories:

A. Large-scale Unemployment: This is the most widely acknowledged of all the risks of AI. Technology and machines replacing humans for certain types of work is not new; we all know of entire professions that have dwindled, or even disappeared, because of technology. The Industrial Revolution also led to large-scale job losses, although many believe these were eventually compensated for by new avenues of work, lower prices, higher wages and so on. However, a growing number of economists no longer subscribe to the belief that, over the longer term, technology has positive ramifications for overall employment. In fact, multiple studies have predicted large-scale job losses due to technological advancement; a 2016 UN report concluded that 75% of jobs in the developing world are expected to be replaced by machines. Unemployment at a large scale is a perilous thing, often resulting in widespread civil unrest. AI’s potential impact in this area therefore calls for very careful political, sociological and economic thinking to counter it effectively.

B. Singularity: The concept of Singularity is one of those things one would have imagined seeing only in the pages of a futuristic sci-fi novel; in theory, however, it is today a real possibility. In a nutshell, Singularity refers to the point in human civilization when Artificial Intelligence reaches a tipping point beyond which it evolves into a superintelligence that surpasses human cognitive powers, thereby potentially posing a threat to human existence as we know it. While this idea of an explosion of machine intelligence is pertinent and widely discussed, unlike technology-driven unemployment it remains primarily theoretical; there is as yet no consensus among experts on whether this tipping point can ever really be reached.

C. Machine Consciousness: Unlike the previous two points, which can be regarded as risks associated with the evolution of AI, machine consciousness is perhaps best described as an ethical conundrum. The idea deals with the possibility of implanting human-like consciousness into machines, taking them beyond the realm of ‘thinking’ to that of ‘feeling, emotions and beliefs’. It is a complex topic, requiring an amalgamation of philosophy, cognitive science and neuroscience. ‘Consciousness’ itself can be interpreted in multiple ways, bringing together a plethora of attributes such as self-awareness, cause and effect in mental states, memory, and experience. To bring machines to a state of human-like consciousness would entail replicating all the activity that happens at a neural level in a human brain, by no means a meagre task.
If and when this were achieved, it would require a paradigm shift in the functioning of the world: human society as we know it would need a major redefinition to incorporate conscious machines co-existing with humans. It sounds far-fetched today, but such questions need pondering now, so that we can influence the direction we take with AI and machine consciousness while things are still in the ‘design’ phase, so to speak.

While all of the above are pertinent questions, I believe they do not necessarily outweigh the advantages of AI. Of course, there is a need to address them systematically, control the path of AI development and minimize adverse impact. In my opinion, the greatest and most imminent risk is actually a fourth item, one not often taken into consideration when discussing the pitfalls of AI.

D. Oligarchy: Or, to put it differently, the question of control. Because of the very nature of AI, which requires immense investment in technology and science, there are realistically only a handful of organizations (private or government) that can take AI into the mainstream, in a scalable manner, and across a vast array of applications. There will be very little room for small upstarts, however smart they might be, to compete at scale against these players. Given how much of our lives will likely be steered by AI-enabled machines, those who control that ‘intelligence’ will hold immense power over the rest of us. The familiar phrase ‘with great power comes great responsibility’ takes on a whole new meaning: the organizations and individuals at the forefront of generally available AI applications would likely wield more power than the most despotic autocrats in history. This is a true and real hazard, aspects of which are already becoming areas of concern in discussions around issues such as privacy.

In conclusion, AI, like all major transformative events in human history, is certain to have wide-reaching ramifications, but with careful forethought these can be addressed. In the short to medium term, the advantages of AI in enhancing our lives will likely outweigh the risks. Any major force that touches human lives broadly can, if not handled properly, pose immense danger. The best analogy I can think of is religion: when not channelled appropriately, it probably poses a greater threat than any technological advancement ever could.

Read More

CISA Keeps Customer Focus in AI Adoption

Article | February 27, 2020

When it comes to adopting artificial intelligence (AI) and machine learning (ML) capabilities, it is important to look at the range of effects from many different viewpoints. According to Martin Stanley, Senior Advisor for AI at the Cybersecurity and Infrastructure Security Agency (CISA), his agency wanted to look at adoption through three different perspectives: how CISA would use AI, how stakeholders would use AI, and how U.S. adversaries would use AI. “You have to understand the needs of your stakeholders, but you also have to do it fast,” Stanley said at a Feb. 26 ServiceNow Federal Forum, adding that it is a challenge to take in all the necessary information and deliver an outcome, and that AI and ML can help streamline this process. Stanley noted that a large part of AI implementation is being purposeful about how the government’s data is managed, and that taking care of the data and the technology is a key part of the adoption process. He also said that helping people by making work more efficient is central to why AI adoption matters: “At the end of the day, this is all about helping people.”

Read More


Events