Article | March 30, 2020
Most businesses do not have contingency or business continuity plans that correlate to the world we see unfold before us—one in which we seem to wake up to an entirely new reality each day. Broad mandates to work at home are now a given. But how do we move beyond this and strategically prepare for—and respond to—business implications resulting from the coronavirus pandemic? Some of our customers are showing us how. These organizations have developed comprehensive, real-time operational intelligence views of their global teams—some in only 24-48 hours—that help them better protect their remote workforces, customers, and business at hand.
Article | April 2, 2020
The coronavirus outbreak has engulfed many countries, most of which are suffering economic losses and rising mortality rates. Governments face a difficult dilemma: how to handle both a falling economy and surging coronavirus infections. To get a better hold on the situation across their countries, they are turning to innovative technologies. Among these new-age technologies, big data and data analytics offer a great opportunity for governments across nations to understand outbreak analytics.
Article | April 16, 2021
There are many articles explaining advanced methods in AI, machine learning, or reinforcement learning. Yet in real life, data scientists often have to deal with smaller, operational tasks that are not necessarily at the cutting edge of science, such as writing simple SQL queries to generate lists of email addresses to target in CRM campaigns. In theory, these tasks should be assigned to someone better suited, such as business analysts or data analysts, but companies do not always have people dedicated specifically to them, especially smaller organizations.
In some cases, these activities consume so much of our time that we have little left for the work that matters, and we may end up doing a subpar job at both. So how should we deal with these tasks? On the one hand, we usually dislike operational work, and it is also a poor use of an expensive professional. On the other hand, someone has to do it, and not everyone has the necessary SQL knowledge. Let's look at some ways to handle these tasks so as to optimize your team's time.
The first and most obvious way to do fewer operational tasks is simply to refuse them. That may sound harsh, and it can be impractical depending on your company and its hierarchy, but in some cases it is worth trying. By "refusing," I mean questioning whether the task is really necessary and looking for better ways to do it. Say that every month you have to prepare three different reports, for different areas, containing similar information. You have managed to automate the SQL queries, but you still have to double-check the results and occasionally add or remove information at a user's request, or change a chart's layout. In this example, you could ask whether all three reports are necessary, or whether you could adapt them into a single report that you send to all three users. In any case, look for ways to reduce the time those tasks require or, ideally, to stop performing them altogether.
Sometimes it pays to take the time to empower your users to perform some of these tasks themselves. If a specific team generates most of the operational requests, encourage them to use no-code tools, framing it so they feel they will gain autonomy. You can either adopt existing solutions or develop them in-house (a great opportunity to build your data scientists' app-building skills).
If you notice it’s a task that you can’t get rid of and can’t delegate, then try to automate it as much as possible. For reports, try to migrate them to a data visualization tool such as Tableau or Google Data Studio and synchronize them with your database. If it’s related to ad hoc requests, try to make your SQL queries as flexible as possible, with variable dates and names, so that you don’t have to re-write them every time.
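As a minimal sketch of the flexible-query idea, one parameterized query can serve every ad hoc request instead of being rewritten each time. The table, column names, and data below are hypothetical, using Python's built-in sqlite3 for illustration:

```python
import sqlite3

# Hypothetical schema and data, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (email TEXT, signup_date TEXT, segment TEXT)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [
        ("a@example.com", "2021-03-01", "retail"),
        ("b@example.com", "2021-04-10", "retail"),
        ("c@example.com", "2021-04-12", "b2b"),
    ],
)

def emails_for_campaign(conn, segment, start_date, end_date):
    """One reusable query: segment and dates are parameters, so the same
    SQL answers every campaign request without being rewritten."""
    rows = conn.execute(
        "SELECT email FROM customers "
        "WHERE segment = ? AND signup_date BETWEEN ? AND ?",
        (segment, start_date, end_date),
    )
    return [r[0] for r in rows]

print(emails_for_campaign(conn, "retail", "2021-04-01", "2021-04-30"))
# → ['b@example.com']
```

The same pattern applies to any SQL engine: keep the query fixed and pass dates and names as bound parameters, which also protects against injection.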
Especially as a manager, you have to prioritize so that you and your team don't drown in endless operational tasks. Set aside one or two days a week for that kind of work, and don't touch it on the remaining three or four days. To achieve this, adapt your workload by following the previous steps, and manage expectations by factoring this reduced time budget into your deadlines. That also means explaining the shift to your internal clients so they can adjust to the new deadlines. This step may require some internal politics, negotiating with your superiors and with other departments.
Once you have mapped all of your operational activities, start by eliminating as many as possible from your pipeline: first get rid of unnecessary activities for good, then delegate the rest to the teams that request them. Whatever is left, automate and organize, so that you free up time for the relevant work your team has to do. This way you ensure that expensive employees' time is well spent, maximizing the company's profit.
Article | April 13, 2020
The acronym DMaaS can refer to two related but separate things: data center management-as-a-service (referred to here by its other acronym, DCMaaS) and data management-as-a-service. The former addresses infrastructure-level questions such as optimizing data flows in a cloud service; the latter refers to master data management and data preparation as applied to federated cloud services.

DCMaaS has been under development for some years; DMaaS is slightly younger, a product of the growing interest in machine learning and big data analytics, along with increasing concern over privacy, security, and compliance in cloud environments.

DMaaS responds to a developing concern over data quality in machine learning, driven by the large amounts of data needed for training and the inherent risk posed by divergent data structures across sources. To use the rapidly growing array of cloud data, including public cloud information and corporate internal information from hybrid clouds, you must aggregate data in a normalized way so it can be made available for model training and processing with ML algorithms. As data volumes and data diversity increase, this becomes increasingly difficult.
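The aggregation-and-normalization step described above can be sketched in plain Python. The sources, field names, and adapters here are hypothetical: each source delivers records in its own shape, and a per-source adapter maps them onto one shared schema before the combined data feeds a training pipeline.

```python
# Hypothetical sketch: two sources expose the same entity under different
# field names; adapters normalize both into one schema for downstream ML use.

def from_public_cloud(record):
    # Illustrative mapping for a public-cloud feed.
    return {"user_id": record["id"], "country": record["country_code"].upper()}

def from_internal_crm(record):
    # Illustrative mapping for an internal hybrid-cloud system.
    return {"user_id": record["customer_id"], "country": record["region"].upper()}

def aggregate(sources):
    """sources: list of (adapter, records) pairs -> one normalized list."""
    normalized = []
    for adapter, records in sources:
        normalized.extend(adapter(r) for r in records)
    return normalized

combined = aggregate([
    (from_public_cloud, [{"id": 1, "country_code": "us"}]),
    (from_internal_crm, [{"customer_id": 2, "region": "de"}]),
])
print(combined)
# → [{'user_id': 1, 'country': 'US'}, {'user_id': 2, 'country': 'DE'}]
```

In a real DMaaS offering this adapter layer would sit behind a service API and handle schema drift, validation, and compliance rules; the point of the sketch is only that divergent structures must be reconciled into one schema before training.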