The rise of sensors and computational power has led to the collection and storage of big data sets with multi-dimensional, complex relationships. This new generation of data requires not only innovative databases but also new, efficient algorithms to analyze it.
The central problem is how to analyze complex data sets efficiently using as little computational power as possible, even on hardware as modest as a Raspberry Pi. Hipergraph, a high-performance graph analytics engine, was developed to tackle this problem. Hipergraph enables customer data to be processed quickly, simply, and at low cost. As the performance and memory capacity of servers continue to grow, most customer datasets can be processed without the significant performance and cost penalties inherent in big data clusters. PARC has developed its Hipergraph algorithms over the past several years in the domains of artificial intelligence and graph analytics. Hipergraph is relevant to clients with large, multi-dimensional datasets requiring fast, memory-efficient algorithms.
Graph analytics, and therefore Hipergraph, has broad applications. Two examples for PARC's Hipergraph are in the retail and healthcare industries. In retail, PARC's Hipergraph algorithms have been applied to recommend new products to customers based on their own and their peers' shopping histories, as well as their budgetary and other constraints. In healthcare, Hipergraph was applied to insurance data to detect fraudulent activity.
APPLICATION BY INDUSTRY
- Retail: product recommendation
- Healthcare: fraud detection
How Hipergraph works
Hipergraph runs on a single laptop, workstation, or server. To process graphs with hundreds of millions of nodes and billions of edges within these memory constraints, Hipergraph uses an extremely compact internal graph representation. This internal format also enables graph traversals to make excellent use of the processor cache and memory hierarchy. Hipergraph is written in C, making it straightforward to build into customer applications. Sample applications built on Hipergraph include PageRank, link-based clustering, triangle counting, collaborative filtering, centralities, and many others.