Overview
This blog post provides a worked example of an end-to-end automated business intelligence solution. It demonstrates how to load data from different sources, join the data together, cleanse and correct data quality issues, compute hypercubes, and design a dashboard. I then demonstrate how to deploy the solution to the autonomous agent framework for continuous execution.
Overview
This is the ninth blog post in our crash course series on Flow. In this blog post, I provide an introduction to deploying the solution developed in the previous sections to the Flow Autonomous Agent Framework.
Overview
This is the eighth blog post in our crash course series on Flow. In this blog post, I provide an introduction to HyperCube Visualizations and Multi-dimensional HyperCube Tables in the Flow Computing Framework.
Overview
This is the seventh blog post in our crash course series on Flow. In this blog post, I provide an introduction to Results, Dashboard Design, and a first look at HyperCube Reporting in the Flow Computing Framework.
Overview
This is the sixth blog post in our crash course series on Flow. In this blog post, I provide an introduction to HyperCube Computation in the Flow Computing Framework.
Overview
This is the fifth blog post in our crash course series on Flow. In this blog post, I provide an introduction to Generic Expressions in the Flow Computing Framework.
Overview
This is the fourth blog post in our crash course series on Flow. In this blog post, I provide an introduction to Basic Data Summary Functions in the Flow Computing Framework.
Overview
This is the first blog post in our crash course series on Flow. In this blog post, I provide an introduction to the Flow Computing Framework.
Overview
This is the third blog post in our crash course series on Flow. In this blog post, I provide an introduction to Working Data Point Actions in the Flow Computing Framework.
Overview
This is the second blog post in our crash course series on Flow. In this blog post, I provide an introduction to Basic Data Point Actions in the Flow Computing Framework.
Overview
In this blog post, I provide an introduction to the five filter actions in Flow. Filter actions are functions that select a specific subset of records from a designated data collection based on target match criteria. This blog post introduces the different types of filters and provides a comprehensive worked video example demonstrating how to configure and implement them against a sample data collection. Filter actions are among the most elementary and integral operations in the Flow Computing Framework, and mastering them is key to data processing, data analytics, and business intelligence workflow design.
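For readers who think in code, the following minimal Python sketch illustrates the general idea behind a filter action. The records and field names are hypothetical, and Flow itself configures filters through its visual designer rather than code; this is only a conceptual analogue.

```python
# Hypothetical records standing in for a Flow data collection.
records = [
    {"customer": "Acme",  "region": "East", "amount": 1200},
    {"customer": "Bravo", "region": "West", "amount": 450},
    {"customer": "Acme",  "region": "East", "amount": 980},
]

# A filter action selects the subset of records that satisfy a target
# match criterion -- here, every record whose region equals "East".
east_records = [r for r in records if r["region"] == "East"]
print(east_records)
```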
Overview
In this blog post, I demonstrate how to build a hypercube-based autonomous BI dashboard. I explain the current-state landscape of BI reporting and data analytics technologies. I provide details on the current limitations of existing BI approaches to automated reporting. I then define the characteristics required for a next-generation BI analytics and reporting framework capable of meeting the current and emerging reporting requirements that most businesses face. I provide a worked example demonstrating how to develop a solution which answers these emerging challenges. In the worked example, I show how to compute hypercubes from raw data and use those hypercubes as the basis for n-dimensional drill-through dashboards. I explore various transformations and aggregation techniques across hypercubes to demonstrate how to summarize data across multiple dimensions. I show the power of Flow's multidimensional visualization and pivot engine by creating visualizations that allow for 5+ levels of drill-down. I finish the example by designing an interactive dashboard and showing how to distribute the completed report across an organization. Finally, I cover how to deploy the developed workflow to Flow's agent framework to continuously and autonomously execute our reporting tasks on a schedule.
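As a rough analogue of what a hypercube aggregation produces, the pandas sketch below pivots hypothetical sales records across two dimensions. Flow's hypercube engine and drill-down dashboards are configured visually, so this is only illustrative of the underlying idea.

```python
import pandas as pd

# Hypothetical raw sales records with two dimensions: region and product.
sales = pd.DataFrame({
    "region":  ["East", "East", "West", "West", "West"],
    "product": ["A", "B", "A", "A", "B"],
    "revenue": [950, 400, 1200, 300, 800],
})

# Summing revenue across the region x product dimensions yields a small
# two-dimensional "cube" of the kind a drill-down dashboard sits on top of.
cube = sales.pivot_table(index="region", columns="product",
                         values="revenue", aggfunc="sum")
print(cube)
```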
Overview
This post provides a hands-on introduction to cognitive computing applications in Flow. It introduces the IBM Watson cognitive actions for unstructured text analytics.
Overview
This blog post provides a worked example of how to import and analyze Microsoft Access data. We learn how to use the Access Database integration interface to consume the sample Northwind database into Flow. A step-by-step walkthrough details how to denormalize the various relational tables into a consolidated, flattened set for analysis. We learn how to apply generic expressions to compute new data points on the fly. Finally, we learn how to leverage Flow's multidimensional analysis engine to compute hypercubes and summarize the data.
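The pandas sketch below gives a rough sense of the denormalization and computed-column steps described above. The table and column names are simplified, hypothetical stand-ins for Northwind, and Flow performs these steps through its integration interface and expression builder rather than code.

```python
import pandas as pd

# Simplified, hypothetical stand-ins for Northwind's relational tables.
customers = pd.DataFrame({"customer_id": [1, 2],
                          "company": ["Alfreds", "Berglunds"]})
orders = pd.DataFrame({"order_id": [10, 11, 12],
                       "customer_id": [1, 2, 1]})
details = pd.DataFrame({"order_id": [10, 11, 12],
                        "unit_price": [18.0, 19.0, 10.0],
                        "quantity": [12, 10, 5]})

# Denormalize: join the separate tables into one flattened set.
flat = (orders.merge(customers, on="customer_id")
              .merge(details, on="order_id"))

# A generic-expression-style computed column: line total on the fly.
flat["line_total"] = flat["unit_price"] * flat["quantity"]
print(flat)
```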
Overview
In this blog post, I provide a worked example demonstrating how to import and analyze data from JSON-based sources. Flow allows JSON data to be consumed into tabular form for analysis without requiring any knowledge of its structure or schema. I demonstrate how to leverage this functionality to read and flatten JSON from a web-based resource into a dataset. I then show how to apply transformations to the data by using the expression builder to calculate new data points on the fly. I show how to compute hypercubes against the flattened data and perform a simple language analysis, highlighting the ability to wrangle and analyze the data. Finally, I demonstrate how to export the transformed data to various file formats, allowing us to persist the flattened set for use elsewhere.
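As a conceptual parallel, pandas can flatten nested JSON into a table with json_normalize. The JSON payload below is hypothetical, and Flow performs this consumption, flattening, and export without any code.

```python
import pandas as pd

# Hypothetical nested JSON, e.g. as returned by a web-based resource.
payload = [
    {"id": 1, "user": {"name": "Ada",  "country": "UK"}, "score": 0.9},
    {"id": 2, "user": {"name": "Alan", "country": "UK"}, "score": 0.7},
]

# Flatten the nested structure into tabular columns (user.name, user.country).
flat = pd.json_normalize(payload)

# Compute a new data point on the fly, then persist the flattened set.
flat["score_pct"] = flat["score"] * 100
flat.to_csv("flattened.csv", index=False)
print(flat)
```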
Overview
In this blog post, I provide a worked example demonstrating how to perform an analysis of blanks on a target dataset. When analyzing data, a typical first step is to understand where values are missing. Identifying these gaps helps you make more informed decisions about your analysis approach.
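A minimal pandas sketch of the same idea, using hypothetical data: count the blank (missing) values per column before deciding on an analysis approach. Flow produces this profile through its own blank-analysis step; the code is only an analogue.

```python
import pandas as pd

# Hypothetical dataset containing some blank values.
df = pd.DataFrame({
    "customer": ["Acme", "Bravo", None, "Delta"],
    "email":    ["a@x.com", None, None, "d@x.com"],
    "amount":   [1200, 450, 980, None],
})

# Count missing values per column, and as a share of all records.
blank_counts = df.isna().sum()
blank_share = df.isna().mean()
print(blank_counts)
print(blank_share)
```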
Overview
This blog post demonstrates how to perform a word count analysis in Flow. I provide a worked example showing how to take in unstructured natural language data and compute a unigram language model against it. The language analysis returns a new profile dataset holding each unique token present in the natural text and the number of times it occurred. This post teaches a quick, one-step technique for an initial exploratory analysis of natural text data.
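The idea of a unigram count can be sketched in a few lines of Python with collections.Counter. The sample text is hypothetical, and Flow produces the equivalent profile dataset in a single step without code.

```python
from collections import Counter
import re

# Hypothetical unstructured text.
text = "the quick brown fox jumps over the lazy dog the fox"

# Tokenize into lowercase words and count occurrences of each unique token --
# effectively a unigram count over the sample text.
tokens = re.findall(r"[a-z']+", text.lower())
unigram_counts = Counter(tokens)
print(unigram_counts.most_common(5))
```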
Overview
This blog post demonstrates how to configure the denormalize function to join disconnected data sets together. A worked example shows how to import various delimited files and use the denormalize action to merge them into a single, consolidated set for analysis. Once the data is joined, we learn how to use hypercubes to aggregate and summarize it.
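A hedged pandas analogue of the same workflow, with hypothetical file and column names: read the delimited files, join them on a shared key, then aggregate. In Flow these steps are configured visually rather than written as code.

```python
import pandas as pd

# Hypothetical delimited files; in Flow these would be separate imports.
orders = pd.read_csv("orders.csv")        # order_id, customer_id, amount
customers = pd.read_csv("customers.csv")  # customer_id, region

# Join (denormalize) the disconnected sets on their shared key.
combined = orders.merge(customers, on="customer_id", how="left")

# Summarize the consolidated set -- roughly what a hypercube aggregation does.
summary = combined.groupby("region")["amount"].agg(["sum", "mean", "count"])
print(summary)
```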
Overview
This blog post demonstrates how to identify and remove duplicate records from a dataset. I provide a worked example that shows how to configure and implement the deduplicate function against some sample customer data. The deduplicate function is a critical action that allows the workflow developer to create rich data validation and transformation rules.
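A minimal pandas sketch of deduplication on hypothetical customer data. Flow's deduplicate action is configured through its visual interface, so this is only a conceptual parallel.

```python
import pandas as pd

# Hypothetical customer data containing an exact duplicate record.
customers = pd.DataFrame({
    "customer_id": [1, 2, 2, 3],
    "name":  ["Acme", "Bravo", "Bravo", "Delta"],
    "email": ["a@x.com", "b@x.com", "b@x.com", "d@x.com"],
})

# Identify the duplicate rows, then remove them, keeping the first occurrence.
dupes = customers[customers.duplicated()]
deduped = customers.drop_duplicates()
print(dupes)
print(deduped)
```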