Kyvos session at MicroStrategy World 2019 showcases how it delivers the fastest BI on Big Data

February 15, 2019

We participated in MicroStrategy World 2019, held last week in Phoenix, AZ. The conference was a great experience, as we were able to showcase our expertise in delivering the fastest insights on Big Data for MicroStrategy users. We also collaborated with MicroStrategy experts, developers, and users to find ways to strengthen our partnership with MicroStrategy. During the conference, we hosted a session entitled “Kyvos – Making MicroStrategy Perform on Big Data,” where Ajay Anand, our Vice President of Products and Marketing, demonstrated how several large enterprises have revolutionized analytics by combining MicroStrategy’s powerful visualizations with Kyvos’ ability to deliver high performance and unlimited scalability on Big Data. He was joined by Fei Zhao and Ryan Levman from Bell Canada, who spoke about how Kyvos helps them scale their BI and deliver faster time to insight for more than 10,000 employees. In another session at the event, Anthony Maresco from MicroStrategy discussed how Kyvos enables speed-of-thought Big Data analytics by making Big Data work for MicroStrategy.

Spotlight

Precily

Precily AI: Precily AI is a text analysis tool powered by AI, NLP, and deep learning modules. The engine can analyze business documents, legal documents, and research papers. Within text analysis, Aura (Precily AI) can perform entity extraction, sentiment analysis, text clustering, concept extraction, and custom tag summarization, as well as compare multiple documents, extract relevant data, eliminate repeated content, and create a summary of the major points of the original document. The AI can produce a coherent summary that takes into account variables such as length, writing style, and syntax.

OTHER ARTICLES

Evolution of capabilities of Data Platforms & data ecosystem

Article | October 27, 2020

Data platforms and frameworks have been constantly evolving. At different points in time we have been excited by Hadoop (for almost 10 years now), then by Snowflake, or as I call it the Snowflake blizzard (which pulled off the biggest software IPO in history), and by Google (which solves problems and serves use cases in a way that few companies can match).

The end of the data warehouse

Once upon a time, life was simple; or at least, the basic approach to Business Intelligence was fairly easy to describe: a process of collecting information from systems, building a repository of consistent data, and bolting on one or more reporting and visualisation tools that presented information to users. Data used to be managed in expensive, slow, inaccessible SQL data warehouses, and SQL systems were notorious for their lack of scalability. Their demise came from a few technological advances, one of which is the ubiquitous, and still growing, Hadoop.

On April 1, 2006, Apache Hadoop was unleashed upon Silicon Valley. Inspired by Google, Hadoop’s primary purpose was to improve the flexibility and scalability of data processing by splitting the work into smaller functions that run on commodity hardware. Hadoop’s intent was to replace enterprise data warehouses based on SQL. Unfortunately, a technology used by Google may not be the best solution for everyone else. It is not that others are incompetent: Google solves problems and serves use cases in a way that few companies can match. Google has been running massive-scale applications such as its eponymous search engine, YouTube, and the Ads platform, and the technologies and infrastructure that make these geographically distributed offerings perform at scale are what make various components of Google Cloud Platform enterprise-ready and well-featured. Google has also shown leadership in developing innovations that have been made available to the open-source community and are used extensively by other public cloud vendors and Gartner clients.
Examples of these include the Kubernetes container management framework, the TensorFlow machine learning platform, and the Apache Beam data processing programming model. GCP also uses open-source offerings in its cloud, while treating third-party data and analytics providers such as DataStax, Redis Labs, InfluxData, MongoDB, Elastic, Neo4j, and Confluent as first-class citizens on its cloud and providing unified billing for its customers.

Silicon Valley tried to make Hadoop work, but the technology was extremely complicated and nearly impossible to use efficiently. Hadoop’s lack of speed was compounded by its focus on unstructured data: you had to be a “flip-flop wearing” data scientist to truly make use of it, because unstructured datasets are very difficult to query and analyze without deep knowledge of computer science. At one point, Gartner estimated that 70% of Hadoop deployments would not achieve their goals of cost savings and revenue growth, mainly due to insufficient skills and technical integration difficulties. And seventy percent seems like an understatement.

Data storage through the years: from GFS to the Snowflake blizzard

Developing in parallel with Hadoop’s journey was that of Marcin Zukowski, co-founder and CEO of Vectorwise, who took the data warehouse in another direction: the world of advanced vector processing. Despite being almost unheard of among the general public, Snowflake was actually founded back in 2012. Snowflake is not a consumer tech firm like Netflix or Uber; it is business-to-business only, which may explain its high valuation, since enterprise companies are often seen as a more "stable" investment. In short, Snowflake helps businesses manage data that is stored in the cloud. The firm's motto is "mobilising the world's data", because it allows big companies to make better use of their vast data stores.
Marcin and his teammates rethought the data warehouse by leveraging the elasticity of the public cloud in an unexpected way: separating storage and compute. Their message was this: don’t pay for a data warehouse you don’t need. Only pay for the storage you need, and add compute capacity as you go. This is considered one of Snowflake’s key innovations: separating storage (where the data is held) from compute (the act of querying). By offering this service before Google, Amazon, and Microsoft had equivalent products of their own, Snowflake was able to attract customers and build market share in the data warehousing space.

Naming the company after a discredited database concept was very brave. For those of us not in the details, a snowflake schema is a logical arrangement of tables in a multidimensional database such that the entity-relationship diagram resembles a snowflake shape: when the dimension tables are completely normalized, the resulting structure resembles a snowflake with the fact table in the middle. Needless to say, the snowflake schema is about as far from Hadoop’s design philosophy as technically possible. While Silicon Valley was headed toward a dead end, Snowflake captured an entire cloud data market.

Read More

MiPasa project and IBM Blockchain team on open data platform to support Covid-19 response

Article | April 1, 2020

Powerful technologies and expertise can help provide better data and help people better understand their situation. As the world contends with the ongoing coronavirus outbreak, officials battling the pandemic need tools and valid information at scale to help foster a greater sense of security for the public. As technologists, we have been heartened by the prevalence of projects such as Call for Code, hackathons and other attempts by our colleagues to rapidly create tools that might be able to help stem the crisis. But for these tools to work, they need data from sources they can validate. For example, reopening the world’s economy will likely require not only testing millions of people, but also being able to map who tested positive, where people can and can’t go and who is at exceptionally high risk of exposure and must be quarantined again.

Read More

How can machine learning detect money laundering?

Article | December 16, 2020

In this article, we will explore different techniques for detecting money laundering activities. Despite its many potential applications within the financial services sector, the adoption of Artificial Intelligence (AI) and Machine Learning (ML) for Anti-Money Laundering (AML) has been comparatively slow.

What are money laundering and anti-money laundering? Money laundering is the act of unlawfully obtaining money and moving it around to cover up the crime. Anti-money laundering can be defined as an activity that prevents, or aims to prevent, money laundering from occurring. The UN estimates that money-laundering transactions in a single year amount to 2–5% of worldwide GDP, or roughly $800 billion to $2 trillion USD. In 2019, regulators and government agencies levied fines of more than $8.14 billion. Even with these staggering numbers, estimates are that only about 1% of unlawful global financial flows are ever seized by the authorities. AML activities in banks consume an excessive amount of manpower, resources, and capital to manage the process and comply with the regulations.

What does fighting money laundering cost? In 2019, Celent estimated that AML spending reached $8.3 billion for technology and $23.4 billion for operations, respectively, investment directed toward ensuring anti-money laundering compliance. As we have seen in many cases, reputational costs can also carry a hefty price: in 2012, HSBC was found to have laundered an estimated £5.57 billion over at least seven years.

What is the current situation of banks applying ML to stop money laundering? Given the wealth of new tools the banks have available, the potential reputational risk, the amount of capital involved, and the enormous costs in the form of fines and penalties, adoption should not be this slow.
Strong efforts by nations to curb illicit cash movement have nevertheless resulted in only a remarkably small fraction of money laundering being detected: a success rate of about 2% on average. Dutch banks (ABN Amro, Rabobank, ING, Triodos Bank, and Volksbank) announced in September 2019 that they would work toward joint transaction monitoring to fight money laundering together. A typical challenge in transaction monitoring, for instance, is the generation of an enormous number of alerts, which in turn requires operations teams to triage and process them. ML models can identify and recognize suspicious behavior and, moreover, can classify alerts into classes such as critical, high, medium, or low risk, so that critical or high alerts can be routed to senior experts for urgent investigation. A major problem today is the immense number of false positives: estimates put the average false-positive rate in the range of 95–99%, which places a great burden on banks. Investigating false positives is tedious and costly; a recent report found that banks were spending nearly €3.01 billion every year investigating them. Institutions are looking for more productive ways to deal with financial crime, and in this context machine learning can prove to be a significant tool. As financial activity grows, the enormous volume and speed of financial transactions require an effective monitoring framework that can process transactions rapidly, ideally in real time.

What types of machine learning algorithms can identify money laundering transactions? For supervised machine learning, it is essential to have historical data with events accurately labeled and input variables appropriately captured. If biases or errors are left in the data without being dealt with, they will be passed on to the model, resulting in inaccurate models.
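As a rough illustration of the supervised approach described above, the sketch below trains a classifier to sort alerts into risk classes. All features, thresholds, and labels here are invented for the example; a real system would train on a bank's labeled historical alerts.

```python
# Hypothetical sketch: supervised classification of AML alerts into risk
# classes (0 = low, 1 = medium, 2 = high). Features and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Invented alert features: amount, tx count in last 24h, share sent abroad
X = np.column_stack([
    rng.lognormal(8, 1.5, n),   # transaction amount
    rng.poisson(3, n),          # number of transactions in last 24h
    rng.uniform(0, 1, n),       # fraction routed to foreign accounts
])
# Synthetic labels standing in for analyst-assigned risk classes
y = (X[:, 0] > 10_000).astype(int) + (X[:, 2] > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
risk = clf.predict(X_te)  # each incoming alert mapped to a risk class for triage
```

In practice the model's output would feed a routing rule (critical/high classes escalated to senior investigators), which is exactly the triage use case the article describes.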
Unsupervised machine learning, by contrast, is the better choice when you do not have historical data with events accurately labeled. It detects unknown patterns and outcomes, and can recognize suspicious activity without prior knowledge of exactly what a money-laundering scheme looks like.

What are the different techniques to detect money laundering?

K-means sequence miner algorithm: ingest banking transactions, then run frequent-pattern-mining algorithms over them to identify money laundering, clustering transactions and suspicious activities, and finally display them on a chart.

Time series Euclidean distance: a sequence-matching algorithm for money laundering detection based on sequential detection of suspicious transactions. This method exploits two references to recognize suspicious transactions: the history of each individual account, and exchange data with other accounts.

Bayesian networks: a model is built of the user's previous activities, and this model serves as a baseline for future customer activity; transactions that deviate significantly from the modeled behavior are flagged as suspicious.

Cluster-based local outlier factor algorithm: money laundering detection using a combination of clustering techniques and outlier detection.

Conclusion: For banks, now is the ideal time to deploy ML models into their ecosystems. Despite this opportunity, growing experience with ML implementations has prompted a discussion about the feasibility of these solutions and the degree to which ML should be trusted and potentially replace human analysis and decision-making. To further exploit and achieve the promise of ML, banks need to continue to expand their awareness of ML's strengths, risks, and limitations and, most critically, to create an ethical framework by which the production and use of ML can be controlled and the feasibility and effect of these emerging models proven and eventually trusted.
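To make the unsupervised, outlier-based family of techniques above concrete, the sketch below flags anomalous transactions with the Local Outlier Factor, a density-based relative of the cluster-based local outlier factor mentioned in the list. The transaction data is synthetic and the two features are invented for the example.

```python
# Hypothetical sketch: unsupervised detection of suspicious transactions
# with Local Outlier Factor. -1 marks outliers, 1 marks inliers.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(1)
# 500 "normal" transactions: typical amounts and daily frequencies
normal = rng.normal(loc=[100.0, 3.0], scale=[20.0, 1.0], size=(500, 2))
# 5 anomalous transactions: very large amounts at unusual frequency
anomalies = rng.normal(loc=[5000.0, 40.0], scale=[100.0, 2.0], size=(5, 2))
X = np.vstack([normal, anomalies])

lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)             # -1 = outlier, 1 = inlier
suspicious = np.where(labels == -1)[0]  # indices to route for human review
```

No labels are needed: the model scores each transaction by how sparse its local neighborhood is, which matches the article's point that unsupervised methods can surface suspicious activity without prior knowledge of what a laundering scheme looks like.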

Read More

Data Analytics: the Force Behind the IoT Evolution

Article | April 3, 2020

Primarily, the IoT stack is going beyond merely ingesting data to data analytics and management, with a focus on real-time analysis and autonomous AI capabilities. Enterprises are finding more advanced ways to apply IoT for better and more profitable outcomes. IoT platforms have evolved to use standard open-source protocols and components, and enterprises are now primarily focused on resolving business problems such as predictive maintenance or using smart devices to streamline business operations. Platforms focus on similar things, but early attempts at creating highly discrete solutions around specific use cases, in place of broad platforms, have been successful. That means more vendors offer more choices for customers, broadening the chances of success. Clearly, IoT platforms sit at the heart of value creation in the IoT.

Read More

