Ten Characteristics of a Modern Analytics Platform

June 13, 2017

Today, many vendors claim to offer a “modern” analytics platform. But as requirements change and more technology becomes available, the definition of this category of solutions is constantly evolving. Read this white paper by Wayne Eckerson to discover the ten characteristics a solution must have to be considered a true modern analytics platform.

Spotlight

Advanced Visual Systems

Advanced Visual Systems, founded in 1991, produces data visualization software and solutions for use in the fields of business intelligence, engineering and research. AVS customers include Independent Software Vendors that embed AVS technology into their own products, Enterprise and corporate users that create proprietary analytic applications, and centers of higher learning that conduct sophisticated research.

OTHER ARTICLES

Is Augmented Analytics the Future of Big Data Analytics?

Article | August 3, 2021

We currently live in the age of data. It’s not just any kind of data, but big data. Today’s data sets have become so large, complex, and fast-moving that traditional business intelligence (BI) solutions struggle to handle them: they are unable to access the data, process it, or make sense of it. Handling data well is vital, because data is everywhere and is being produced constantly, and your organization needs to discover the insights hidden in its datasets. Going through all of that data is only feasible with the right tools, such as machine learning (ML) and augmented analytics.

According to Gartner, augmented analytics is the future of data analytics, defined as follows: “Augmented analytics uses machine learning/artificial intelligence (ML/AI) techniques to automate data preparation, insight discovery, and sharing. It also automates data science and ML model development, management, and deployment.”

Augmented analytics differs from BI tools because ML technologies work continuously behind the scenes to learn and improve results. It speeds up the process of deriving insights from large amounts of structured and unstructured data and provides ML-based recommendations. In addition, it helps find patterns in the data that usually go unnoticed, removes human bias, and adds predictive capabilities that tell an organization what to do next. Artificial intelligence has turned augmented analytics into a trend, and demand for it has increased significantly.

Benefits of Augmented Analytics

Organizations now understand the benefits of augmented analytics, which has led them to adopt it to deal with the increasing volume of structured and unstructured data. Oracle identified the top four reasons organizations are opting for augmented analytics:

Data Democratization

Augmented analytics makes data science available to everyone. Augmented analytics solutions come prebuilt with models and algorithms, so data scientists are not needed to do this work. In addition, these solutions have user-friendly interfaces, making them easier for business users and executives to use.

Quicker Decision-making

Augmented analytics suggests which datasets to incorporate in analyses, alerts users when datasets are updated, and recommends new datasets when the results are not what users expect. With just one click, it provides forecasts and predictions based on historical data.

Programmed Recommendations

Augmented analytics platforms feature natural language processing (NLP), enabling non-technical users to question the source data easily. Natural language generation (NLG) automates the translation of complex data into text with intelligent recommendations, speeding up analytic insight. Anyone using these tools can uncover hidden patterns and predict trends, shortening the time it takes to go from data to insights to decisions through automated recommendations for data improvement and visualization. Non-expert users can use NLP to make sense of large amounts of data: they ask questions about the data in typical business terms, and the software finds and queries the correct data, presenting results that are easy to digest through visualization tools or natural language output.

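To make the idea of natural language querying more concrete, here is a minimal sketch in Python, assuming a hypothetical sales table and a naive keyword lookup in place of real NLP; production platforms use far more sophisticated language understanding.

```python
# Minimal sketch: mapping a natural-language question to a canned query.
# The `sales` DataFrame and its column names are invented for illustration.
import pandas as pd

sales = pd.DataFrame({
    "quarter": ["Q1", "Q2", "Q3", "Q4"],
    "revenue": [1.2e6, 1.5e6, 1.1e6, 1.9e6],
    "profit":  [2.0e5, 3.1e5, 1.4e5, 4.2e5],
})

def answer(question: str) -> str:
    """Very naive keyword matching standing in for real NLP."""
    q = question.lower()
    if "profitable quarter" in q:
        best = sales.loc[sales["profit"].idxmax()]
        return (f"The most profitable quarter was {best['quarter']} "
                f"with a profit of {best['profit']:,.0f}.")
    if "performance last year" in q:
        return (f"Total revenue last year was {sales['revenue'].sum():,.0f} "
                f"and total profit was {sales['profit'].sum():,.0f}.")
    return "Sorry, I don't understand that question yet."

print(answer("What was the most profitable quarter of the year?"))
print(answer("How was the company's performance last year?"))
```
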
Grow into a Data-driven Company

Understanding both the data and the business becomes even more important as organizations rapidly adjust to change. Analytics has become critical to everything from understanding sales trends and segmenting customers based on their online behavior to predicting how much inventory to hold and strategizing marketing campaigns. Analytics is what makes data a valuable asset.

Essential Capabilities of Augmented Analytics

Augmented analytics reduces the repetitive work data analysts must do every time they work with new datasets and decreases the time it takes to clean data in the ETL process. It leaves more time to think about the implications of the data, discover patterns, auto-generate code, create visualizations, and propose recommendations from the insights it derives.

Augmented analytics considers intents and behaviors and turns them into contextual insights. It offers new ways to look at data and identifies patterns and insights companies would otherwise have missed completely, altering the way analytics is used. The ability to highlight the most relevant hidden insights is a powerful capability. For example, augmented analytics can help users manage context at the explanation stage: it understands which data values are associated with that context and which are unrelated, resulting in powerful, relevant, context-aware suggestions.

Modern self-service BI tools have a friendly user interface that lets business users with little or no technical skill derive insights from data in real time. In addition, these tools can handle large datasets from various sources quickly and competently. The insights from augmented analytics tools can tell you what happened, why it happened, and how. They can also reveal important insights, recommendations, and relationships between data points in real time and present them to the user as reports in conversational language.

Users can query the data directly through augmented analytics tools. For example, business users can ask, “How was the company’s performance last year?” or “What was the most profitable quarter of the year?” The systems provide in-depth explanations and recommendations around data insights, giving a clear understanding of the “what” and the “why” of the data. This enhances efficiency, decision-making, and collaboration between users and encourages data literacy and data democracy throughout an organization.

Augmented Analytics: What’s Next?

Augmented analytics is going to change the way people understand and examine data, and it has become a necessity for businesses to survive. It will simplify and speed up data preparation, cleansing, and standardization, helping businesses focus their efforts on data analysis. BI and analytics will become an immersive environment with integrations that let users interact with their data. New insights and data will be easier to access through various devices and interfaces such as mobile phones, virtual assistants, and chatbots. In addition, augmented analytics will support decision-making by alerting users to issues that need immediate attention, helping businesses stay on top of changes as they happen in real time.

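As a rough illustration of the automated insight discovery and plain-language output described above, here is a minimal sketch over an invented listings dataset; the column names, thresholds, and wording are assumptions for the example, not how any particular product works.

```python
# Minimal sketch: scan a dataset for simple "insights" and describe them
# in plain language. The data and thresholds are invented for illustration.
import pandas as pd

listings = pd.DataFrame({
    "price":    [100, 120, 95, 110, 3000, 105, 98],
    "views":    [50, 60, 40, 55, 20, 52, 45],
    "contacts": [5, 7, 4, 6, 0, 5, 4],
})

def discover_insights(df: pd.DataFrame, corr_threshold: float = 0.8) -> list[str]:
    insights = []
    # Flag pairs of strongly correlated numeric columns.
    corr = df.corr()
    for a in corr.columns:
        for b in corr.columns:
            if a < b and abs(corr.loc[a, b]) >= corr_threshold:
                insights.append(
                    f"'{a}' and '{b}' are strongly correlated (r = {corr.loc[a, b]:.2f})."
                )
    # Flag values more than two standard deviations from the column mean.
    for col in df.columns:
        z = (df[col] - df[col].mean()) / df[col].std()
        rows = df.index[z.abs() > 2].tolist()
        if rows:
            insights.append(f"Column '{col}' has unusual values at rows {rows}.")
    return insights

for insight in discover_insights(listings):
    print(insight)
```
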
Frequently Asked Questions

What are the benefits of augmented analytics?

Augmented analytics helps companies become more agile, broadens access to analytics, helps users make better, faster, data-driven decisions, and reduces costs.

How important is augmented analytics?

Augmented analytics builds efficiency into the data analysis process, equips businesses and people with tools that can answer data-based questions within seconds, and helps companies get ahead of their competitors.

What are the examples of augmented analytics?

Augmented analytics can help retain existing customers, capitalize on customer needs, drive revenue through optimized pricing, and optimize operations in the healthcare sector for better patient outcomes. These are some examples of how augmented analytics is used.

Read More
BIG DATA MANAGEMENT

Roles in a Data Team

Article | December 17, 2020

In this article, we’ll talk about the different roles in a data team and discuss their responsibilities. In particular, we will cover: the types of roles in a data team; the responsibilities of each role; and the skills and knowledge each role needs to have. This is not a comprehensive list, and most of what you will read here is my opinion, which comes from my experience working as a data scientist. You can read the following as “the description of data roles from the perspective of a data scientist”. For example, my views on the role of a data engineer may be a bit simplified because I don’t see all the complexities of their work firsthand. I hope you will find this information useful nonetheless.

Roles in a Team

A typical data team consists of the following roles: product managers, data analysts, data scientists, data engineers, machine learning engineers, and site reliability engineers / MLOps engineers. All these people work to create a data product.

To explain the core responsibilities of each role, we will use a case scenario. Suppose we work at an online classifieds company: a platform where users can go to sell things they don’t need (like OLX, where I work). If a user has an iPhone they want to sell, they go to this website, create a listing, and sell their phone. On this platform, sellers sometimes have problems identifying the correct category for the items they are selling. To help them, we want to build a service that suggests the best category. When a user creates a listing to sell their iPhone, the site needs to automatically understand that this iPhone has to go in the “mobile phones” category. Let’s start with the first role: product manager.

Product Manager

A product manager is someone responsible for developing products. Their goal is to make sure that the team is building the right thing. They are typically less technical than the rest of the team: they don’t focus on the implementation aspects of a problem, but rather on the problem itself.

Product managers need to ensure that the product is actually used by the end users. This is a common problem: in many companies, engineers create something that doesn’t solve real problems. The product manager is therefore somebody who speaks to the team on behalf of the users.

The primary skill a PM needs is communication. For data scientists, communication is a soft skill, but for a product manager it’s a hard skill; they have to have it to perform their work. Product managers also do a lot of planning: they need to understand the problem, come up with a solution, and make sure the solution is implemented in a timely manner. To accomplish this, PMs need to know what’s important and plan the work accordingly. When somebody has a problem, they approach the PM with it. The PM’s task is then to figure out whether users actually need this feature, how important it is, and whether the team has the capacity to implement it.

Let’s come back to our example. Suppose somebody comes to the PM and says: “We want to build a feature to automatically suggest the category for a listing. Somebody’s selling an iPhone, and we want to create a service that predicts that the item goes in the mobile phones category.” Product managers need to answer questions such as: “Is this feature that important to the user?” and “Is it an important problem to solve in the product at all?” To answer them, PMs ask data analysts to help figure out what to do next.

Data Analyst

Data analysts know how to analyze the data available in the company. They discover insights in the data and then explain their findings to others. So, analysts need to know what kind of data the company has, how to get it, how to interpret the results, and how to explain their findings to colleagues and management.

Data analysts are also often responsible for defining key metrics and building dashboards: showing the company’s profits, displaying the number of listings, or how many contacts buyers made with sellers. Thus, data analysts should know how to calculate all the important business metrics and how to present them in a way that is understandable to others.

When it comes to skills, data analysts should know: SQL, the main tool they work with; programming languages such as Python or R; Tableau or similar tools for building dashboards; the basics of statistics; how to run experiments; and a bit of machine learning, such as regression analysis and time series modeling.

For our example, product managers turn to data analysts to help them quantify the extent of the problem. Together with the PM, the data analyst tries to answer questions like: “How many users are affected by this problem?”, “How many users don’t finish creating their listing because of this problem?”, and “How many listings are there on the platform that don’t have the right category selected?” After the analyst gets the data, analyzes it, and answers these questions, they may conclude: “Yes, this is actually a problem.” The PM and the team then discuss the report and agree: “Indeed, this problem is actually worth solving.” Now the data team can go ahead and start solving it.

After the model for the service is created, it’s necessary to understand whether the service is effective: whether the model helps people and solves the problem. For that, data analysts usually run experiments, typically A/B tests. When running an experiment, we can see whether more users successfully finish posting an item for sale, or whether fewer ads end up in the wrong category.

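To make the analyst’s experimentation work concrete, here is a minimal sketch of evaluating such an A/B test with a two-proportion z-test; the completion counts are invented for illustration.

```python
# Minimal sketch: two-proportion z-test for an A/B test on listing completion.
# The numbers are invented for illustration.
from statistics import NormalDist

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Return (z, two-sided p-value) for the difference in completion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

z, p = two_proportion_ztest(success_a=4_100, n_a=10_000,   # control: old posting flow
                            success_b=4_350, n_b=10_000)   # treatment: with suggestions
print(f"lift in completion rate: {(4_350 - 4_100) / 10_000:.1%}, z = {z:.2f}, p-value = {p:.4f}")
```
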
Data Scientist

The roles of a data scientist and a data analyst are pretty similar, and in some companies the same person does both jobs. However, data scientists typically focus more on predicting than on explaining. A data analyst fetches the data, looks at it, explains what’s going on to the team, and gives recommendations on what to do about it. A data scientist, on the other hand, focuses more on creating machine learning services. For example, one of the questions a data scientist would want to answer is: “How can we use this data to build a machine learning model for predicting something?”

In other words, data scientists incorporate the data into the product. Their focus is more on engineering than on analysis, and they work more closely with engineers on integrating data solutions into the product.

The skills of data scientists include: machine learning, the main tool for building predictive services; Python, the primary programming language; SQL, necessary to fetch the data for training their models; and Flask, Docker, and similar tools for creating simple web services that serve the models.

For our example, the data scientists are the people who develop the model used for predicting the category. Once they have a model, they can develop a simple web service for hosting it.

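As a rough sketch of what a first baseline for the category model might look like, here is a small scikit-learn pipeline; the toy titles and categories below stand in for real listing data.

```python
# Minimal sketch: baseline listing-category classifier with scikit-learn.
# The tiny training set below is a toy stand-in for real listing data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

titles = [
    "iphone 12 64gb black", "samsung galaxy s10", "nokia 3310 classic",
    "leather sofa three seats", "oak dining table", "ikea wardrobe",
    "mountain bike 29 inch", "road bike carbon frame",
]
categories = [
    "mobile phones", "mobile phones", "mobile phones",
    "furniture", "furniture", "furniture",
    "bikes", "bikes",
]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(titles, categories)

print(model.predict(["iphone 11 pro 256gb"]))   # likely: ['mobile phones']
```

In practice, the model would be trained on millions of real listings and evaluated properly, but the shape of the solution is the same: text features in, category label out.
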
Data Engineers

Data engineers do all the heavy lifting when it comes to data. A lot of work needs to happen before data analysts can go to a database, fetch the data, perform their analysis, and come up with a report. This is precisely the focus of data engineers: they make sure this is possible. Their responsibility is to prepare all the necessary data in a form that is consumable by their colleagues.

To accomplish this, data engineers create a “data lake”. All the data that users generate needs to be captured properly and saved in a separate database, so that analysts can run their analysis and data scientists can use the data for training models. Another thing data engineers often need to do, especially at larger companies, is ensure that the people who look at the data have the necessary clearance to do so. Some user data is sensitive, and people can’t just go looking at personal information (such as emails or phone numbers) unless they have a really good reason to do so. Therefore, data engineers need to set up a system that doesn’t let people access all the data at once.

The skills needed for data engineers usually include: AWS or Google Cloud, the popular cloud providers; Kubernetes and Terraform for infrastructure; Kafka or RabbitMQ for capturing and processing the data; databases, to save the data in a way that is accessible to data analysts; and Airflow or Luigi, data orchestration tools for building complex data pipelines.

In our example, a data engineer prepares all the required data. First, they make sure the analyst has the data to perform the analysis. Then they work with the data scientist to prepare the information needed for training the model: the title of the listing, its description, the category, and so on.

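As an illustration of the kind of pipeline a data engineer might build for this example, here is a minimal Airflow-style sketch; the extract and load functions are hypothetical placeholders.

```python
# Minimal sketch: a daily Airflow DAG that prepares listing data for
# analysts and data scientists. The extract/load functions are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_listings(**context):
    # Placeholder: pull yesterday's listings (title, description, category, ...)
    # from the production database or an event stream such as Kafka.
    ...

def load_to_data_lake(**context):
    # Placeholder: write the cleaned records to the data lake table that
    # analysts and model training jobs read from.
    ...

with DAG(
    dag_id="prepare_listing_data",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_listings", python_callable=extract_listings)
    load = PythonOperator(task_id="load_to_data_lake", python_callable=load_to_data_lake)
    extract >> load
```
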
A data engineer isn’t the only type of engineer on a data team; there are also machine learning engineers.

Machine Learning Engineer

Machine learning engineers take whatever data scientists build and help them scale it up. They also ensure that the service is maintainable and that the team follows the best engineering practices. Their focus is more on engineering than on modeling.

The skills ML engineers have are similar to those of data engineers: AWS or Google Cloud; infrastructure tools like Kubernetes and Terraform; Python and other programming languages; and Flask, Docker, and other tools for creating web services. Additionally, ML engineers work closely with more “traditional” engineers, like backend, frontend, or mobile engineers, to ensure that the services from the data team are included in the final product.

For our example, ML engineers work together with data scientists on productionizing the category suggestion service. They make sure it’s stable once it’s rolled out to all the users, that it’s maintainable, and that it’s possible to make changes to the service in the future.

There’s another kind of engineer that can be pretty important in a data team: site reliability engineers.

DevOps / Site Reliability Engineer

The role of SREs is similar to that of the ML engineer, but the focus is more on the availability and reliability of the services. SREs aren’t strictly limited to working with data; their role is more general: they tend to focus less on business logic and more on infrastructure, which includes things like networking and provisioning. SREs look after the servers where the services are running and take care of collecting operational metrics like CPU usage, requests per second, the services’ processes, and so on.

As the name suggests, site reliability engineers have to make sure that everything runs reliably. They set up alerts and are constantly on call to make sure that the services are up and running without interruptions. If something breaks, SREs quickly diagnose the problem and fix it, or involve an engineer to help find the solution.

The skills needed for site reliability engineers include: cloud infrastructure tools; programming languages like Python; Unix/Linux; networking; and DevOps best practices like automation and CI/CD. Of course, ML engineers and data engineers should also know these practices, but the focus of DevOps engineers and SREs is to establish them and make sure they are followed.

There is a special type of DevOps engineer called an “MLOps engineer”.

MLOps Engineer

An MLOps engineer is a DevOps engineer who also knows the basics of machine learning. Similar to an SRE, the responsibility of an MLOps engineer is to make sure that the services developed by data scientists, ML engineers, and data engineers are up and running all the time. MLOps engineers know the lifecycle of a machine learning model: the training phase, the serving phase, and so on. Despite having this knowledge, MLOps engineers are still focused more on operational support than on anything else. This means they need to know and follow all the DevOps practices and make sure that the rest of the team follows them as well. They accomplish this by setting up things like continuous retraining and CI/CD pipelines.

Even though everyone on the team has a different focus, they all work together toward the same goal: solving the problems of the users.

Summary

To summarize, the roles in a data team and their responsibilities are:

Product managers make sure that the team is building the right thing, act as a gateway for all the requests, and speak on behalf of the users.

Data analysts analyze data, define key metrics, and create dashboards.

Data scientists build models and incorporate them into the product.

Data engineers prepare the data for analysts and data scientists.

ML engineers productionize machine learning services and establish the best engineering practices.

Site reliability engineers focus on availability and reliability, and enforce the best DevOps practices.

This list is not comprehensive, but it should be a good starting point if you are just getting into the industry, or if you simply want to know how the lines between the different roles are drawn.

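To make the production side of these roles concrete, here is a minimal sketch of the category model served behind a small Flask API with a health endpoint that SREs could monitor; the model file name and route names are assumptions for the example.

```python
# Minimal sketch: serving the category model behind a small Flask API,
# with a health endpoint for monitoring. File and route names are assumptions.
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Assumes the data scientist's pipeline was pickled to this (hypothetical) file.
with open("category_model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/health")
def health():
    # SREs point their uptime checks and alerts at this endpoint.
    return jsonify(status="ok")

@app.route("/predict", methods=["POST"])
def predict():
    title = request.get_json()["title"]
    category = model.predict([title])[0]
    return jsonify(category=category)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```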

Read More

How Incorta Customers are Leveraging Real-Time Operational Intelligence to Quickly & Effectively Respond to 3 Likely Scenarios Caused by COVID-19

Article | March 30, 2020

Most businesses do not have contingency or business continuity plans that account for the world we now see unfolding before us—one in which we seem to wake up to an entirely new reality each day. Broad mandates to work from home are now a given. But how do we move beyond this and strategically prepare for—and respond to—the business implications of the coronavirus pandemic? Some of our customers are showing us how. These organizations have developed comprehensive, real-time operational intelligence views of their global teams—some in only 24-48 hours—that help them better protect their remote workforces, customers, and business at hand.

Read More

Combination of Virtual Reality and Data Analytics

Article | March 30, 2020

Virtual reality is an innovation with boundless opportunities, which become apparent when it is combined with other technologies. Paired with gaming, for instance, VR lets the user enter the virtual universe of the game, for example an online casino they can walk into from the comfort of their own home. Used in marketing, it lets property developers show houses to potential buyers wherever they are in the world.

Read More

