Advanced graph analytics algorithms
The rise of sensors and computational power has led to the collection and storage of big data sets with multi-dimensional, complex relationships. This new generation of data requires not only innovative databases but also efficient algorithms to analyze it.
The challenge has been how to analyze complex data sets efficiently with limited computational power, even on hardware as modest as a Raspberry Pi. Hipergraph, a high-performance graph analytics engine, was developed by PARC to tackle this problem. Hipergraph enables customer data to be processed quickly, simply, and at low cost. As the performance and memory capacity of servers continue to grow, most customer datasets can be processed without the significant performance and cost penalties inherent in big data clusters. PARC has developed its Hipergraph algorithms over the last few years in the domains of artificial intelligence (AI) and graph analytics. Hipergraph is relevant to clients with large, multi-dimensional datasets requiring fast, memory-efficient algorithms.
The applications of graph analytics and of Hipergraph are broad; two examples come from the retail and healthcare industries. PARC's Hipergraph algorithms have been applied to recommending new products to customers in retail stores based on their own and their peers' shopping histories, as well as their budgetary and other constraints. Hipergraph has also been applied to healthcare insurance data to detect fraudulent activity.
How the Technology Works
Hipergraph runs on a single laptop, workstation, or server. In order to process graphs with hundreds of millions of nodes and billions of edges within these memory constraints, Hipergraph uses an extremely compact internal graph representation. The internal format also enables graph traversals to make excellent use of the processor cache and memory hierarchies. Hipergraph is written in C, making it straightforward to build into customer applications. Sample applications built on Hipergraph include PageRank, link-based clustering, triangle counting, collaborative filtering, centralities, and others.