## Flow Crash Course - Part 6 - HyperCube Computation

## Overview

This is the sixth blog post in our crash course series on Flow. In this blog post, I provide an introduction to HyperCube Computation in the Flow Computing Framework.





7. July 2017
analytics, hypercube, data validation, data summary, expressions, flow analytics, reporting

In this post, we build a reusable eight-step Flow that performs a basic Benford's Analysis on a sample data set. The Flow loads the sample data set, extracts the first digit from each observation, builds a hypercube and uses it to count the first digits, extracts a dataset containing the observed distribution, and finally computes the expected distribution and compares it to the observed distribution by taking the difference.
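For readers who want to see the logic outside of Flow, the eight steps above can be sketched in plain Python. This is a minimal illustration, not Flow code: the `benford_analysis` function and its handling of the sample data are assumptions made for this sketch, but the expected distribution it uses, log10(1 + 1/d) for each leading digit d, is the standard Benford formula.

```python
import math
from collections import Counter

def benford_analysis(values):
    """Compare the observed first-digit distribution of `values`
    against Benford's expected distribution log10(1 + 1/d)."""
    # Steps 1-2: load the observations and take the leading
    # nonzero digit of each one (zeros are skipped).
    digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v != 0]
    # Steps 3-4: the hypercube's role here is a simple count of
    # how often each first digit (1-9) occurs.
    counts = Counter(digits)
    total = sum(counts.values())
    # Steps 5-6: observed proportions and Benford's expected proportions.
    observed = {d: counts.get(d, 0) / total for d in range(1, 10)}
    expected = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
    # Steps 7-8: compare by taking the difference per digit.
    return {d: observed[d] - expected[d] for d in range(1, 10)}
```

Because both distributions sum to 1, the per-digit differences always sum to zero; large individual deviations are what a Benford check flags for further review.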

Copyright © 2020 Data to Decisions - Powered by BlogEngine.NET 3.2.0.3 - Design by FS