How Dutch Railways uses big data to keep schedules on track

techhq.com | January 13, 2020

Transporting one million commuters each day, Dutch Railways (Nederlandse Spoorwegen, NS) sees 92 percent of its trains arrive on time, making it the third-most punctual rail service in the world, behind Switzerland and Japan. While certain delays are unavoidable (the result of severe weather or obstructions on the line, for example), NS owes its industry-leading scheduling to big data analytics, thanks to thousands of sensors on its locomotives. As reported in TNW, NS has been leveraging the power of predictive maintenance since 4G became available in the Netherlands, allowing data to be processed at much higher speeds. The data covers not only the health of the vehicles themselves but is also combined with passenger check-in data and status measurement points on the tracks. In total, 140 data sources converge to provide an instant snapshot of the state of operation across the company's entire network.

Spotlight

Learn how 4,200 business and IT leaders address their IT and data protection strategies. Download this brief and learn from their shared insights.

Related News

BIG DATA MANAGEMENT, BUSINESS STRATEGY, DATA SCIENCE

HEAVY.AI Launches HEAVY 7.0, Introducing Real-Time Machine Learning Capabilities

Businesswire | April 20, 2023

HEAVY.AI, an innovator in advanced analytics, today announced general availability of HEAVY 7.0. The new product adds machine learning capabilities, enabling telcos and utilities to perform in-database predictive modeling and simulate any scenario to uncover key insights. HEAVY 7.0 also incorporates new ways to interactively join and fuse data in the Heavy Immerse visualization platform, as well as more powerful cell site planning and optimization capabilities via significant enhancements to the HeavyRF telco module.

"For telcos and utilities, delivering the best service to their customers means constantly analyzing, investigating and learning from the immense amounts and vast sources of data available to them. But analyzing complex geospatial data combined with customer and radio-frequency data is a cumbersome and error-prone process," said Jon Kondo, CEO, HEAVY.AI. "HEAVY 7.0 provides tools and features that make it fast and easy for these organizations to analyze any type of data and uncover insights that are critical for their business."

Introduction of Machine Learning Capabilities via In-Database Predictive Modeling

HEAVY 7.0 introduces HeavyML, enabling predictive analytics directly in-database as a public beta feature. Implemented as native SQL operators that can be evaluated interactively on GPUs and then visualized and rendered in Heavy Immerse dashboards, HeavyML supports a variety of clustering and regression algorithms, including tree-based models such as random forest regression. With this addition, domain experts and other end users not intimately familiar with data science workflows can leverage predictive analytics on large datasets.
Expanded HeavyRF Cell Site Planning and Optimization Capabilities

HEAVY 7.0 features a new site editor for the graphical specification of network hardware and its configuration under various complex operating scenarios, such as rush-hour conditions, reduced-power or maintenance modes, and seasonal or monthly variation in vegetation optical thickness, a critical capability for midband 5G and lower frequencies. As a result, telcos can develop and test various software optimized network (SON) scenarios safely and with full visibility into potential customer experience impacts. HeavyRF has also gained improved workflows for establishing and monitoring business targets involving large numbers of buildings. HeavyRF has always provided a continuously updated view of relevant business metrics, but it can now target thousands of buildings at once using fully customizable and extensible building tagging.

The Addition of No-Code Joins in Heavy Immerse

Heavy Immerse has long offered non-technical users the ability to rapidly visualize, map and filter enormous datasets interactively and in real time. HEAVY 7.0 further enables users to get rapid, visual insights from their data via the addition of no-code join capabilities. Joins can now be specified directly from Heavy Immerse dashboards and, thanks to the speed of the underlying GPU database, execute across multi-billion-record datasets at interactive speeds, with no indexing or down-sampling required. The ability to performantly fuse large datasets without writing SQL further democratizes access to complex insights for a broad set of users. A major California utility is using the new join capabilities to downscale huge weather models, measuring weather's impact on specific assets as well as its effect on customers.

HEAVY 7.0 is available now in limited release for existing HEAVY.AI customers.
About HEAVY.AI

HEAVY.AI provides advanced analytics that empower businesses and government to visualize high-value opportunities and risks hidden in their big location and time data. HEAVY.AI supports high-impact decisions in previously unimaginable timelines by harnessing the massive parallelism of modern GPU and CPU hardware. The analytics technology unifies today's exploding data volumes from multiple sources for an immersive, real-time, interactive visual experience. It is available in the cloud and on premises. HEAVY.AI originated from research at Harvard and the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and is funded by GV, In-Q-Tel, New Enterprise Associates (NEA), NVIDIA, Tiger Global Management, Vanedge Capital and Verizon Ventures. The company is headquartered in San Francisco. Learn more about HEAVY.AI at heavy.ai.
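HeavyML's models are exposed as native SQL operators on the HeavyDB GPU database. As a rough, hedged illustration of the general idea of in-database predictive modeling (pushing computation to where the data lives rather than exporting it), the sketch below fits an ordinary least-squares line entirely in SQL aggregates, using Python's standard-library sqlite3 as a stand-in for HeavyDB. The table name, columns, and figures are invented for the example; HeavyML's actual operators (which cover clustering and tree-based models such as random forest regression) are not shown.

```python
import sqlite3

# Stand-in database: an invented table of (towers, throughput) observations.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE signal_load (towers REAL, throughput REAL)")
conn.executemany(
    "INSERT INTO signal_load VALUES (?, ?)",
    [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0), (5, 9.8)],
)

# Ordinary least squares computed entirely inside the database via SQL
# aggregates: slope = (n*Σxy - Σx*Σy) / (n*Σx² - (Σx)²),
# intercept = mean(y) - slope * mean(x).
slope, intercept = conn.execute("""
    SELECT
        (COUNT(*) * SUM(towers * throughput) - SUM(towers) * SUM(throughput))
        / (COUNT(*) * SUM(towers * towers) - SUM(towers) * SUM(towers)),
        AVG(throughput)
        - (COUNT(*) * SUM(towers * throughput) - SUM(towers) * SUM(throughput))
        / (COUNT(*) * SUM(towers * towers) - SUM(towers) * SUM(towers))
        * AVG(towers)
    FROM signal_load
""").fetchone()

print(round(slope, 3), round(intercept, 3))  # prints: 1.95 0.15
```

Only the two fitted coefficients leave the database here; keeping the heavy computation next to the data is the same property that lets HeavyML evaluate models interactively at GPU scale.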

Read More

BUSINESS INTELLIGENCE, BIG DATA MANAGEMENT, DATA SCIENCE

Gigasheet Launches New API To Power Collaboration Between Business and Data Teams

Prnewswire | April 21, 2023

Gigasheet, the big data analytics startup, has announced the launch of its new enterprise API, enabling programmatic access to the platform's big data spreadsheets. Gigasheet's spreadsheet-like interface is second nature to most business users, and the web-based application offers many 'one-click' solutions to common data problems. These capabilities can now be extended, integrated, and automated via the new enterprise API.

Data teams are using the new API to automate repetitive tasks, schedule imports and exports, and deliver large volumes of data to non-technical users for exploration in an intuitive interface. Data can be pushed into Gigasheet from databases, data warehouses, or enterprise applications. The API also enables business users to help with data preparation before running data pipelines, or to inspect data quality throughout its lifecycle.

"From day one we have focused on helping empower people to answer questions about big data. The existing tools used to work with enterprise data are increasingly sophisticated, but these tools require users to know SQL or Python," says Gigasheet CEO and Co-founder Jason Hines. "Data engineers know about data availability and quality, while the business users have the context. This causes a lot of back and forth, and too often it's inefficient for both teams."

With more than 30,000 users on the company's platform, it's clear users love Gigasheet's data transformations and operations. The user interface is highly performant on large data sets, and the speed of complex operations far exceeds that of other tools. The usual complexity of dealing with big data is hidden from the user, allowing them to focus on analysis. Gigasheet's API helps data teams by enabling business users to contribute to the company's data preparation and analysis efforts. Their requirements can be embedded directly into data pipelines, without burdening internal IT or data teams for training and support.
"Gigasheet makes it so easy to work with huge sales and marketing data sets. We're excited to see they have added an API for more seamless integration into workflows," said Mark Feldman, CEO of RevenueBase, a B2B customer intelligence company. Feldman says the company has been using the product for about a year.

The company's API is also playing a role in data preparation through its partnership with Tamr. Led by seasoned data veteran Andy Palmer, Tamr's Data Mastering technology helps enterprises transform their data into an asset and a competitive advantage. Palmer says that "The Gigasheet API is a game-changer. Gigasheet's spreadsheet interface makes it easy for any data citizen to profile and clean up raw files. With the API, we can now embed Gigasheet into pipelines to help our customers get to insights faster."

About Gigasheet

Gigasheet is a cloud-based big data spreadsheet that allows users to work with large and complex data sets in a simple and intuitive manner. With powerful data preparation and analysis features, Gigasheet helps businesses of all sizes make informed decisions based on data.
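As a hedged sketch of what embedding an enterprise API like Gigasheet's into a pipeline might look like, the snippet below builds a scheduled-import request with Python's standard-library urllib. The host, endpoint path, authorization scheme, and payload fields are all hypothetical placeholders invented for illustration, not Gigasheet's actual interface; consult the vendor's API documentation for the real calls.

```python
import json
import urllib.request

API_KEY = "your-api-key"  # placeholder credential

# Hypothetical payload: schedule a daily import from a warehouse export.
# Field names are invented for the example.
payload = json.dumps({
    "source": "s3://example-bucket/sales_export.csv",  # hypothetical source
    "schedule": "daily",
}).encode("utf-8")

# Hypothetical endpoint; the real Gigasheet API host and path will differ.
req = urllib.request.Request(
    "https://api.example-host.invalid/v1/imports",
    data=payload,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# urllib.request.urlopen(req) would submit the job; it is omitted here
# because the endpoint above is illustrative only.
print(req.method, req.get_header("Content-type"))
```

The request object is built but never sent, which keeps the sketch self-contained while showing the shape of the integration: push data in programmatically, let business users explore it in the spreadsheet UI.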

Read More

BUSINESS INTELLIGENCE, BIG DATA MANAGEMENT, DATA SCIENCE

Amperity Gets Chosen by MillerKnoll as its Customer Data Platform

Amperity | March 14, 2023

On March 13, 2023, Amperity was selected by MillerKnoll, a collective of dynamic design brands, as its enterprise customer data platform (CDP) to enhance its omnichannel data and provide personalized customer experiences.

MillerKnoll possesses a vast source of customer data across its numerous design brands and various digital and offline channels, which can serve as a foundation for delivering exceptional customer interactions. However, with consumers' ever-changing shopping preferences and behaviors, the company required a comprehensive solution to unite and manage its diverse data sources. By partnering with Amperity, MillerKnoll can use its customer data to reach more people and improve its advertising campaigns with data science scores, insights on cross-channel behavior, and content affinities. Amperity's platform lets MillerKnoll easily connect its data sources, giving it a full picture of its customers and the ability to deliver personalized experiences that are relevant and engaging. With the help of Amperity's 360-degree unified view, MillerKnoll can now understand and use data across all touchpoints to provide consistent and relevant shopping experiences. MillerKnoll can also use Amperity's AI-powered technology to drive retargeting, lookalike, and suppression campaigns at scale and to resolve identities and segment audiences.

Barry Padgett, CEO at Amperity, said, "By improving its customer data infrastructure, MillerKnoll will be able to supercharge the rest of its tech stack, and deliver the level of excellence consumers expect from brands on a daily basis." He also said, "We are proud to work with a forward-thinking partner to help take its customer experience to the next level." (Source - Businesswire)

Visit ShopTalk in Las Vegas, March 26–29, at booth 1674 to learn how Amperity can help brands unlock business growth with a unified customer data foundation.
About Amperity

Amperity is a customer data platform (CDP) provider that helps businesses use customer data to improve marketing performance, increase customer loyalty, and drive revenue. The company offers tools to manage and analyze customer data and creates unified customer profiles to provide personalized experiences. Amperity works with over 100 leading global brands across various industries and is headquartered in Seattle with an office in New York City.

Read More