How are aerospace manufacturers using big data?

August 16, 2017

The Aerospace Technology Institute has revealed the findings of a study into the current state, opportunities and challenges of big data analytics in the UK aerospace sector. Big data involves using advanced technologies and processes such as algorithms, machine learning and data mining to extract value from data and inform decisions. Today, aerospace companies collect vast and rapidly increasing quantities of data. However, data by itself is relatively worthless; too many businesses, for example, have admitted to collecting data for data’s sake. Big data analytics helps to generate valuable insights which are then used to improve decision making across the value chain, from measuring product performance to assessing the impact of weather on the supply chain and how it might affect downstream operations.

Big data challenges

Before data can be analyzed, its fundamental architecture needs to be addressed. Data configurations are not always directly transferable, and legacy systems may no longer be supported. Translating input data into value can be difficult and requires skills or roles not traditionally associated with aerospace, such as data engineers and data scientists. Utilizing data analytics to enhance product services can also be restricted by the ownership of data, something that may be affected by the arrival of the EU’s General Data Protection Regulation (GDPR), which comes into effect in less than 12 months.
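To make the idea concrete, the sketch below (Python with pandas; the column names, thresholds and values are hypothetical and not taken from the study) shows the kind of cross-source analysis described above: joining product-performance readings with a weather feed to flag flight legs where operations might be at risk.

```python
# Minimal sketch (hypothetical data and column names): joining product-
# performance readings with weather observations to flag operational risk.
import pandas as pd

# Hypothetical engine-sensor summary per flight leg
sensors = pd.DataFrame({
    "leg_id": [1, 2, 3],
    "airport": ["MAN", "BRS", "MAN"],
    "date": pd.to_datetime(["2017-08-01", "2017-08-01", "2017-08-02"]),
    "egt_margin_c": [42.0, 35.5, 28.1],   # exhaust-gas-temperature margin
})

# Hypothetical weather feed for the same airports and dates
weather = pd.DataFrame({
    "airport": ["MAN", "BRS", "MAN"],
    "date": pd.to_datetime(["2017-08-01", "2017-08-01", "2017-08-02"]),
    "max_temp_c": [29, 31, 24],
})

# Combine the two sources and flag legs where hot weather coincides with a
# shrinking performance margin.
merged = sensors.merge(weather, on=["airport", "date"], how="left")
merged["at_risk"] = (merged["max_temp_c"] > 28) & (merged["egt_margin_c"] < 40)
print(merged[["leg_id", "airport", "at_risk"]])
```

In practice the same join-and-flag pattern would run over millions of records in a data warehouse or streaming pipeline rather than a handful of rows.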

Spotlight

Third I

Third I provides advisory services around business intelligence, data warehousing, and data management. Our team of consultants consists of astute, motivated problem solvers who love data and are always excited about demonstrating the ways in which analytics can make a difference to our clients. Our consultants come from diverse backgrounds, which allows us to approach every business problem, no matter how complex, from a fresh perspective. We like it better this way, because no two challenges are ever the same, and each of our clients deserves a unique solution. Over the years, we’ve had the opportunity to work on some pretty cool stuff: helping a large fast food restaurant chain standardize a performance scoring mechanism across its restaurants and leveraging SDKs to customize complex web- and mobile-based data visualizations are just some examples…

OTHER ARTICLES

HOW TO PREPARE FOR A CAREER IN DATA SCIENCE?

Article | February 17, 2020

The continuous advancement of technology and the increasing use of smart devices are leading to tremendous growth in data. According to reports, more than 2.5 quintillion bytes of data are generated every day, and it is expected that 1.7 MB of data will be produced every second in the near future. This is where data scientists play an influential role, analyzing these immense amounts of data and converting them into meaningful insights. Data science is a dominant discipline today and will remain so in the future. This drives the need for skilled talent across industries to meet the challenges of data analytics and to assist in delivering innovation in products, services and society.
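As a rough sense of scale, a quick back-of-envelope conversion of the daily figure quoted above (a minimal Python sketch, illustrative only) looks like this:

```python
# Back-of-envelope conversion of the daily figure quoted above (illustrative only).
bytes_per_day = 2.5e18            # ~2.5 quintillion bytes generated per day
seconds_per_day = 24 * 60 * 60

print(f"~{bytes_per_day / seconds_per_day / 1e12:.0f} TB of data generated every second")
print(f"~{bytes_per_day * 365 / 1e21:.1f} ZB of data generated every year")
```

That works out to roughly 29 TB every second, which is the scale at which data scientists and their tooling now have to operate.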

Read More

MAKING IOT DATA MEANINGFUL WITH AI-POWERED COGNITIVE COMPUTING

Article | February 17, 2020

Today, the world is all about Industry 4.0 and the technologies it brings. From Artificial Intelligence (AI) to Big Data Analytics, these technologies are transforming one industry or another in some way. AI-powered cognitive computing is one such technology, providing large-scale automation with ubiquitous connectivity, and it is redefining how IoT technology operates. The need for cognitive computing in the IoT emerges from the significance of information in present-day business: in the smart IoT settings of the future, everyone from new AI services companies to large enterprises will use information to make decisions based on facts rather than instincts. Cognitive computing uses information, and reacts to changes within it, to choose better options. It learns explicitly from past experience, in contrast to a rule-based decision framework.
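The contrast between a rule-based decision framework and a system that learns from past experience can be illustrated with a minimal sketch (Python with scikit-learn; the sensor readings, thresholds and fault labels are hypothetical):

```python
# Minimal sketch (hypothetical data): a fixed rule vs. a model that learns
# a decision boundary from past IoT sensor readings.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Past readings: [temperature_c, vibration_mm_s] and whether a fault followed
X = np.array([[60, 1.0], [75, 2.5], [82, 4.0], [65, 1.2],
              [90, 5.5], [70, 3.8], [85, 2.0], [95, 6.0]])
y = np.array([0, 0, 1, 0, 1, 1, 0, 1])

new_reading = np.array([[78, 3.5]])

# Rule-based decision: a hand-written threshold that never changes
rule_flags_fault = bool(new_reading[0, 0] > 85)

# Learned decision: a classifier fit on past experience
model = LogisticRegression().fit(X, y)
model_flags_fault = bool(model.predict(new_reading)[0])

print(f"rule: {rule_flags_fault}, learned model: {model_flags_fault}")
```

The rule never changes unless someone rewrites it, whereas the model can be refit as new readings arrive, which is the sense in which cognitive systems learn from experience rather than relying on fixed rules.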

Read More

COVID-19: A crisis that necessitates Open Data

Article | February 17, 2020

The coronavirus outbreak that began in China has grown into a pandemic and is affecting global health as well as social and economic dynamics. An ever-increasing velocity and scale of analysis, in terms of both processing and access, is required to succeed in the face of unimaginable shifts in market, health and social paradigms.

The COVID-19 pandemic is accompanied by an infodemic. With the novel coronavirus filling headlines, TV news and social media, it can seem as if we are drowning in information and data about the virus. With so much data being pushed at us and shared, it can be hard for the general public to know what is correct, what is useful and, unfortunately, what is dangerous. In general, levels of trust in scientists are quite high, albeit with differences across countries and regions. A 2019 survey conducted across 140 countries showed that, globally, 72% of respondents trusted scientists at “high” or “medium” levels. However, the proportion expressing “high” or “medium” levels of trust in science ranged from about 90% in Northern and Western Europe to 68% in South America and 48% in Central Africa (Rabesandratana, 2020).

In times of crisis, like the ongoing spread of COVID-19, both scientific and non-scientific data should be a trusted source for information, analysis and decision making. While global sharing and collaboration of research data has reached unprecedented levels, challenges remain. Trust in at least some of the data is relatively low, and outstanding issues include the lack of specific standards, coordination and interoperability, as well as data quality and interpretation. To strengthen the contribution of open science to the COVID-19 response, policy makers need to ensure adequate data governance models, interoperable standards, sustainable data sharing agreements involving the public sector, private sector and civil society, incentives for researchers, sustainable infrastructures, human and institutional capabilities, and mechanisms for access to data across borders.

COVID-19 data is cited as critical for vaccine discovery, for planning and forecasting healthcare capacity, and for setting up emergency systems, and it is expected to contribute to policy objectives such as higher transparency and accountability, more informed policy debates, better public services, greater citizen engagement, and new business development. This is precisely why “open data” access to COVID-19 information is critical for humanity to succeed. In global emergencies like the coronavirus (COVID-19) pandemic, open science policies can remove obstacles to the free flow of research data and ideas, and thus accelerate the pace of research critical to combating the disease. UNESCO, which has set up open access to some of this data, is playing a major role in this direction. Thankfully, scientists around the world working on COVID-19 are able to work together, share data and findings, and hopefully make a difference to the containment and treatment of, and eventually vaccines for, COVID-19.

Science and technology are essential to humanity’s collective response to the COVID-19 pandemic. Yet the extent to which policymaking is shaped by scientific evidence and by technological possibilities varies across governments and societies, and can often be limited. At the same time, collaborations across science and technology communities have grown in response to the current crisis, holding promise for enhanced cooperation in the future as well.
A prominent example of this is the Coalition for Epidemic Preparedness Innovations (CEPI), launched in 2017 as a partnership between public, private, philanthropic and civil society organizations to accelerate the development of epidemic vaccines. Its ongoing work has cut the expected development time for a COVID-19 vaccine to 12 to 18 months, and its grants are providing quick funding for some promising early candidates. It is estimated that an investment of USD 2 billion will be needed, with resources being made available from a variety of sources (Yamey et al., 2020).

The Open COVID Pledge was launched in April 2020 by an international coalition of scientists, lawyers, and technology companies, and calls on authors to make all intellectual property (IP) under their control available, free of charge and without encumbrances, to help end the COVID-19 pandemic and reduce the impact of the disease. Notable signatories include Intel, Facebook, Amazon, IBM, Sandia National Laboratories, Hewlett Packard, Microsoft, Uber, Open Knowledge Foundation, the Massachusetts Institute of Technology, and AT&T. The signatories will offer a specific non-exclusive, royalty-free Open COVID license to use IP for the purpose of diagnosing, preventing and treating COVID-19.

Also illustrating the power of open science, online platforms are increasingly facilitating the collaborative work of COVID-19 researchers around the world. A few examples include:

1. Research on treatments and vaccines supported by Elixir, REACTing, CEPI and others.
2. WHO-funded research and data organization.
3. The London School of Hygiene and Tropical Medicine’s release of a dataset about the environments that have led to significant clusters of COVID-19 cases, containing more than 250 records with date, location, whether the event was indoors or outdoors, and how many individuals became infected. (7/24/20)
4. The European Union Science Hub’s report on the concept of data-driven Mobility Functional Areas (MFAs), demonstrating how mobile data calculated at a European regional scale can be useful for informing policies related to COVID-19 and future outbreaks. (7/16/20)

While clinical, epidemiological and laboratory data about COVID-19 is widely available, including genomic sequencing of the pathogen, a number of challenges remain:

1. Not all data is sufficiently findable, accessible, interoperable and reusable (FAIR).
2. Sources of data tend to be dispersed and, even though many pooling initiatives are under way, curation needs to be performed “on the fly”.
3. Many issues arise around the interpretation of data, as illustrated by the widely followed epidemiological statistics. Typically, these statistics concern “confirmed cases”, “deaths” and “recoveries”. Each of these items seems to be treated differently in different countries, and they are sometimes subject to methodological changes within the same country (a brief illustration follows below).
4. Specific standards for COVID-19 data therefore need to be established; this is one of the priorities of the UK COVID-19 Strategy, and a working group within the Research Data Alliance has been set up to propose such standards at an international level.

Given the achievements and challenges of open science in the current crisis, lessons from prior experience, including the SARS and MERS outbreaks globally, can be drawn on to assist the design of open science initiatives to address the COVID-19 crisis.
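One of the interpretation challenges listed above, namely that “confirmed cases” and “deaths” are defined differently from country to country, is often handled by mapping each national feed onto a shared schema and recording the original case definition alongside the numbers. The sketch below (Python with pandas; the column names and values are hypothetical) shows the idea:

```python
# Minimal sketch (hypothetical column names and values): mapping differently
# labelled national statistics onto one shared schema before comparing them.
import pandas as pd

country_a = pd.DataFrame({"date": ["2020-07-01"], "lab_confirmed": [120],
                          "deceased": [3]})
country_b = pd.DataFrame({"date": ["2020-07-01"], "cases_incl_probable": [150],
                          "deaths_within_28_days": [4]})

# Rename each source to a shared vocabulary and keep a note of the original
# case definition so downstream users can interpret the numbers correctly.
harmonised = pd.concat([
    country_a.rename(columns={"lab_confirmed": "confirmed",
                              "deceased": "deaths"})
             .assign(country="A", case_definition="lab-confirmed only"),
    country_b.rename(columns={"cases_incl_probable": "confirmed",
                              "deaths_within_28_days": "deaths"})
             .assign(country="B", case_definition="includes probable cases"),
], ignore_index=True)

print(harmonised[["country", "date", "confirmed", "deaths", "case_definition"]])
```

Agreed standards of the kind proposed by the Research Data Alliance working group would make such mappings explicit and consistent rather than ad hoc.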
The following actions can help to further strengthen open science in support of responses to the COVID-19 crisis:

1. Providing regulatory frameworks that enable interoperability within the networks of large electronic health record providers, patient-mediated exchanges, and peer-to-peer direct exchanges. Data standards need to ensure that data is findable, accessible, interoperable and reusable, including general data standards as well as specific standards for the pandemic.
2. Public actors, private actors and civil society working together to develop and/or clarify a governance framework for the trusted reuse of privately held research data in the public interest. This framework should include governance principles, open data policies, trusted data reuse agreements, transparency requirements and safeguards, and accountability mechanisms, including ethical councils, that clearly define duties of care for data accessed in emergency contexts.
3. Securing adequate infrastructure (including data and software repositories, computational infrastructure, and digital collaboration platforms) to allow for recurrent emergency situations. This includes a global network of certified, trustworthy and interlinked repositories with compatible standards to guarantee the long-term preservation of FAIR COVID-19 data, as well as preparedness for any future emergencies.
4. Ensuring that adequate human capital and institutional capabilities are in place to manage, create, curate and reuse research data, both in individual institutions and in institutions that act as data aggregators, whose role is the real-time curation of data from different sources.

In increasingly knowledge-based societies and economies, data are a key resource. Enhanced access to publicly funded data enables research and innovation, and has far-reaching effects on resource efficiency, productivity and competitiveness, creating benefits for society at large. Yet these benefits must also be balanced against the associated risks to privacy, intellectual property, national security and the public interest. Entities such as UNESCO are helping the open science movement to progress towards establishing norms and standards that will facilitate greater, and more timely, access to scientific research across the world. Independent scientific assessments that inform the work of many United Nations bodies are indicating areas needing urgent action, and international cooperation can help build national capacities to implement them. At the same time, actively engaging with stakeholders in different countries around the dissemination of the findings of such assessments can help build public trust in science.

Read More

SAURAV SINGLA, THE MACHINE LEARNING GURU, EMPOWERING SOCIETY

Article | February 17, 2020

Saurav Singla is a Senior Data Scientist, machine learning expert, author, technical writer, data science course creator and instructor, mentor and speaker. While Media 7 has followed Saurav Singla’s story closely, this chat with Saurav was about analytics, his journey as a data scientist, and what he brings to the table with his 15 years of extensive experience in statistical modeling, machine learning, natural language processing, deep learning and data analytics across the consumer durables, retail, finance, energy, human resources and healthcare sectors. He has grown multiple businesses in the past and is still a researcher at heart. In the past, analytics and predictive modeling were predominant in only a few industries, but they are now becoming an eminent part of emerging fields such as health, human resource management, pharma, IoT and other smart solutions.

Saurav has worked in data science since 2003. Over the years, he realized that all the people they had hired, whether from business or engineering backgrounds, needed extensive training to be able to perform analytics on real-world business datasets. He got the opportunity to move to Australia in 2003, joining the retail company Harvey Norman and working out of their Melbourne office for four years. After moving back to India, in 2008 he joined one of the verticals of Siemens, one of the few companies in India then using analytics services in-house, where he spent eight years. He is a passionate believer that the use of data and analytics will dramatically change not only corporations but also our societies. Building and expanding the application of analytics for supply chain, logistics, sales, marketing and finance at Siemens was a tremendously rewarding and enjoyable experience for him. He grew the team from zero to fifteen while he was the data science leader, and he believes those eight years taught him how to think big and scale organizations using data science. He has demonstrated success in developing and seamlessly executing plans in complex organizational structures, and has been recognized for maximizing performance by implementing appropriate project management tools, analyzing details to ensure quality control, and understanding emerging technology.

In 2016, he started getting a serious inner push to think about consulting and shifted to a company based in Delhi NCR. During his ten-month path with them, he improved the way clients and businesses implement and exploit machine learning in their consumer commitments. As part of that vision, he developed class-defining applications that eliminate friction between technologies, processes and humans. Another main aspect of his plan was to ensure that this was delivered in very fast agile cycles, and towards that end he actively innovated on operating and engagement models. In 2017, he moved to London and joined a digital technology company, helping to build artificial intelligence and machine learning products for their clients, with the aim of solving problems and transforming costs using technology and machine learning; he was associated with them for two years. At the beginning of 2018, he joined Mindrops, where he developed advanced machine learning technologies and processes to solve client problems, mentored the data science function and guided it in developing solutions.

He built robust client data science capabilities that can scale across multiple business use cases. Outside work, Saurav is associated with Mentoring Club and Revive. He volunteers in his spare time to help, coach and mentor young people taking up careers in the data science domain, as well as data practitioners looking to build high-performing teams and grow the industry. He helps data science enthusiasts stay motivated and guides them along their career paths, filling knowledge gaps and helping aspirants understand the core of the industry. He also helps aspirants analyze their progress and upskill accordingly, and connects them with potential job opportunities through his industry-leading network. Additionally, in 2018 he joined, as a mentor, a transaction behavioral intelligence company that accelerates business growth for banks with Artificial Intelligence and Machine Learning enabled products; he guides their machine learning engineers with their projects and is enhancing the capabilities of their AI-driven recommendation engine product. Saurav also teaches learners to grasp data science knowledge in a more engaging way through courses on the Udemy marketplace; he has created two courses on Udemy, with over twenty thousand students enrolled in them. He regularly speaks at meetups on data science topics and writes articles on data science in major publications such as AI Time Journal, Towards Data Science, Data Science Central, KDnuggets, Data-Driven Investor, HackerNoon and Infotech Report. He actively contributes academic research papers in machine learning, deep learning, natural language processing, statistics and artificial intelligence. His book on machine learning for finance was published by BPB Publications, Asia’s largest publisher of computer and IT books, which is possibly one of the biggest milestones of his career. Saurav has turned his passion into making knowledge available to society. He believes sharing knowledge is cool, and he wishes everyone had that passion for knowledge sharing. That, to him, would be success.

Read More

