Data to Decisions

Automation, Integration, Analytics, Reports & Dashboards

A Quick Introduction to the Five Types of Filters in Flow

Overview

In this blog post, I provide an introduction to the five filter actions in Flow. Filter actions are functions that select a specific subset of records from a designated data collection based on target match criteria. The post walks through the different types of filters and provides a comprehensive worked video example demonstrating how to configure and apply each one against a sample data collection. The filter actions are among the most elementary and integral operations in the Flow computing framework, and mastering them is key to data processing, data analytics, and business intelligence workflow design.
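
Flow's filter actions are configured visually rather than written as code, but the underlying idea is applying a match predicate to every record and keeping the rows that satisfy it. Here is a minimal conceptual sketch in Python, using made-up sample records rather than Flow's actual mechanics:

    # Conceptual sketch only: a filter keeps the records that satisfy a predicate.
    # The sample records and criteria below are hypothetical illustrations.
    records = [
        {"name": "Alice", "region": "East", "sales": 1200},
        {"name": "Bob",   "region": "West", "sales": 450},
        {"name": "Carol", "region": "East", "sales": 80},
    ]

    # Exact-match criterion: keep records whose region equals "East".
    east = [r for r in records if r["region"] == "East"]

    # Threshold criterion: keep records whose sales meet a minimum value.
    big_sales = [r for r in records if r["sales"] >= 500]

    print(east)       # Alice and Carol
    print(big_sales)  # Alice and Bob

Each filter type can be thought of as a different kind of predicate applied in this way.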

Import and Analyze JSON Data with Flow Analytics

Overview

In this blog post, I provide a worked example demonstrating how to import and analyze data from JSON-based sources. Flow allows for the consumption of JSON data into a tabular form for analysis without requiring any knowledge of its structure or schema. I demonstrate how to leverage this functionality to read and flatten JSON from a web-based resource into a dataset, then show how to apply transformations to the data by using the expression builder to calculate new data points on the fly. Next, I compute hypercubes against the flattened data and perform a simple language analysis, highlighting the ability to wrangle and analyze the data. Finally, I demonstrate how to export the transformed data to various file formats, allowing us to persist the flattened set for use elsewhere.
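
To give a feel for what flattening nested JSON into tabular rows involves, here is a small self-contained Python sketch; the nested structure, the derived total_with_vat column, and the output file name are hypothetical stand-ins, since Flow performs these steps through its visual actions rather than code:

    import csv
    import json

    # Hypothetical nested JSON, standing in for a web-based resource.
    raw = json.loads("""
    [
      {"id": 1, "customer": {"name": "Acme",   "city": "Leeds"}, "total": 250.0},
      {"id": 2, "customer": {"name": "Globex", "city": "York"},  "total": 90.5}
    ]
    """)

    def flatten(obj, prefix=""):
        """Recursively flatten nested objects into dotted column names."""
        row = {}
        for key, value in obj.items():
            name = f"{prefix}{key}"
            if isinstance(value, dict):
                row.update(flatten(value, name + "."))
            else:
                row[name] = value
        return row

    rows = [flatten(r) for r in raw]

    # A derived column, analogous to an expression-builder calculation.
    for row in rows:
        row["total_with_vat"] = round(row["total"] * 1.2, 2)

    # Persist the flattened set, for example as a CSV file.
    with open("orders_flat.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)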

Analyze Blank or Missing Data Values Using Flow Analytics

Overview

In this blog post, I provide a worked example demonstrating how to perform an analysis of blanks on a target dataset. When analyzing data, a typical first step is to understand where values are missing. Identifying the blanks in your data helps you make more informed decisions about your analysis approach.
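
The core of a blanks analysis is a per-column count of missing values. As a rough illustration of the idea (not Flow's implementation), here is a minimal Python sketch over hypothetical records, treating both None and empty strings as blanks:

    # Hypothetical records with gaps; None and "" both count as blank here.
    records = [
        {"name": "Alice", "email": "alice@example.com", "phone": None},
        {"name": "Bob",   "email": "",                  "phone": "555-0101"},
        {"name": None,    "email": "carol@example.com", "phone": ""},
    ]

    # Count the blanks in each column, then report them as percentages.
    blanks = {
        col: sum(1 for r in records if r[col] in (None, ""))
        for col in records[0]
    }

    for col, count in blanks.items():
        print(f"{col}: {count} blank ({100 * count / len(records):.0f}%)")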

How to Denormalize (Join) Data Using Flow Analytics

Overview

This blog post demonstrates how to configure the denormalize function to join disconnected data sets together. A worked example shows how to import and merge various delimited files: the denormalize action joins the data from the separate files, consolidating them into a single set for analysis. Once the data is joined, we learn how to use hypercubes to aggregate and summarize it.
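
Conceptually, denormalizing is a key-based join followed by whatever aggregation you need. Here is a rough Python sketch of that idea using hypothetical rows, as if loaded from two separate delimited files; Flow's denormalize action and hypercubes do the equivalent work through visual configuration:

    from collections import defaultdict

    # Hypothetical rows, standing in for two imported delimited files.
    orders = [
        {"order_id": 1, "customer_id": 100, "amount": 250.0},
        {"order_id": 2, "customer_id": 101, "amount": 90.0},
        {"order_id": 3, "customer_id": 100, "amount": 60.0},
    ]
    customers = [
        {"customer_id": 100, "region": "East"},
        {"customer_id": 101, "region": "West"},
    ]

    # Index one side by the join key, then attach its columns to each order.
    by_id = {c["customer_id"]: c for c in customers}
    joined = [{**o, **by_id[o["customer_id"]]} for o in orders]

    # Aggregate the joined set, roughly what a hypercube summarization does.
    totals = defaultdict(float)
    for row in joined:
        totals[row["region"]] += row["amount"]

    print(dict(totals))  # {'East': 310.0, 'West': 90.0}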

Building Grouped Reports with Flow Analytics

Overview

Here is the second in a series of posts focusing on building reports in Flow. A grouped report is an advanced report produced by Flow. Grouped reports organize records into one or more nested groups, where each group is a collection of records sharing a common column value. There are two basic methods you can employ to create grouped reports in Flow. The first is to add a Grouped Report action to a new or existing workflow. The second is to open a hypercube within the Flow portal and click the Create Report button in the toolbar at the top of the hypercube view. This post covers the first method.
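
To make the grouping idea concrete, here is a small Python sketch that nests hypothetical records first by region and then by product; Flow builds the equivalent structure from the Grouped Report action's configuration rather than from code:

    from itertools import groupby

    # Hypothetical records to be grouped, first by region, then by product.
    records = [
        {"region": "East", "product": "Widget", "qty": 5},
        {"region": "West", "product": "Widget", "qty": 7},
        {"region": "East", "product": "Gadget", "qty": 1},
        {"region": "East", "product": "Widget", "qty": 2},
    ]

    # groupby requires the records to be sorted by the same keys it groups on.
    records.sort(key=lambda r: (r["region"], r["product"]))

    for region, region_rows in groupby(records, key=lambda r: r["region"]):
        print(region)
        for product, rows in groupby(list(region_rows), key=lambda r: r["product"]):
            rows = list(rows)
            print(f"  {product}: {sum(r['qty'] for r in rows)} units across {len(rows)} records")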

Doing Data Quality with Flow Analytics

Overview

In this article, I provide an introduction to measuring and evaluating data quality using Flow. I briefly discuss data quality dimensions and data quality assessment. Then I examine how a schema-on-write approach increases the time and cost required to assess data quality, and briefly discuss schema-on-read technology. I then introduce Flow's "Generic Data" technology as a solution to the deficiencies of schema-on-write and schema-on-read for data quality. Finally, I provide a hands-on working example of doing data quality in Flow Analytics using some sample name and address data.
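
To give a flavor of what a quality assessment measures, here is a minimal Python sketch computing two common dimensions, completeness and validity, over hypothetical name and address records; the postcode pattern is illustrative only and is not how Flow performs its checks:

    import re

    # Hypothetical name-and-address records for a small quality assessment.
    records = [
        {"name": "Alice Smith", "postcode": "LS1 4DY"},
        {"name": "",            "postcode": "YO1 7HH"},
        {"name": "Bob Jones",   "postcode": "not-a-code"},
    ]

    # A simplified UK-style postcode pattern, purely for illustration.
    POSTCODE = re.compile(r"^[A-Z]{1,2}\d[A-Z\d]? ?\d[A-Z]{2}$")

    n = len(records)
    complete = sum(1 for r in records if r["name"].strip())
    valid = sum(1 for r in records if POSTCODE.match(r["postcode"]))

    print(f"name completeness: {complete}/{n} ({100 * complete / n:.0f}%)")
    print(f"postcode validity: {valid}/{n} ({100 * valid / n:.0f}%)")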

Building Tabular Reports in Flow Analytics

Overview

Here is the first in a series of posts focusing on building reports in Flow. Flow enables you to build many types of reports, such as tabular reports, grouped reports, pivot tables, tables, and data summaries; you can learn more about these different report types in the Flow online help. A tabular report is the most basic type of report you can build in Flow: it is organized in a multicolumn, multirow format, with each column corresponding to a column in a dataset.

There are two basic methods you can employ to create tabular reports in Flow. The first is to add a Tabular Report action to a new or existing workflow. The second is to open a dataset within the Flow portal and click the Create Report button in the toolbar at the top of the dataset view. This post covers the first method.
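
Conceptually, a tabular report is the dataset's columns rendered as aligned header and body rows. Here is a minimal Python sketch of that layout over hypothetical rows; Flow generates the real report from the Tabular Report action's configuration:

    # Hypothetical dataset rows; the report columns mirror the dataset columns.
    rows = [
        {"customer": "Acme",   "region": "East", "total": 250.0},
        {"customer": "Globex", "region": "West", "total": 90.5},
    ]
    columns = list(rows[0])

    # Size each column to its widest value, then print header and body rows.
    widths = {c: max(len(c), *(len(str(r[c])) for r in rows)) for c in columns}
    print("  ".join(c.ljust(widths[c]) for c in columns))
    print("  ".join("-" * widths[c] for c in columns))
    for r in rows:
        print("  ".join(str(r[c]).ljust(widths[c]) for c in columns))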
