Overview
This blog post provides a worked example of an end-to-end automated business intelligence solution. It demonstrates how to load data from different sources, join the data together, cleanse and correct data quality issues, compute hypercubes, and design a dashboard. I then show how to deploy the solution to the autonomous agent framework for continuous execution.
Overview
This is the ninth blog post in our crash course series on Flow. In this blog post, I provide an introduction to deploying the solution developed in the previous sections to the Flow Autonomous Agent Framework.
Overview
This is the eighth blog post in our crash course series on Flow. In this blog post, I provide an introduction to HyperCube Visualizations and Multi-dimensional HyperCube Tables in the Flow Computing Framework.
Overview
This is the seventh blog post in our crash course series on Flow. In this blog post, I provide an introduction to Results, Dashboard Design, and a first look at HyperCube Reporting in the Flow Computing Framework.
Overview
This blog post provides a worked example of how to import and analyze Microsoft Access data. We learn how to use the Access Database integration interface to import the sample Northwind database into Flow. A step-by-step walkthrough details how to denormalize the various relational tables into a consolidated, flattened set for analysis. We learn how to apply generic expressions to compute new data points on the fly. Finally, we learn how to leverage Flow's multidimensional analysis engine to compute hypercubes and summarize the data.
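To make the denormalize-then-summarize idea concrete outside of Flow's own interface, here is a minimal pandas sketch. It assumes hypothetical CSV exports of the Northwind tables (the file names are assumptions); the joins flatten the relational tables, the computed column plays the role of a generic expression, and the pivot table stands in for a hypercube summary.

```python
import pandas as pd

# Hypothetical CSV exports of the Northwind tables; file names are assumptions.
orders = pd.read_csv("orders.csv")            # OrderID, CustomerID, OrderDate, ...
details = pd.read_csv("order_details.csv")    # OrderID, ProductID, UnitPrice, Quantity, Discount
products = pd.read_csv("products.csv")        # ProductID, ProductName, CategoryID
customers = pd.read_csv("customers.csv")      # CustomerID, CompanyName, Country

# Denormalize: join the relational tables into one flattened set.
flat = (details
        .merge(orders, on="OrderID")
        .merge(products, on="ProductID")
        .merge(customers, on="CustomerID"))

# Compute a new data point on the fly, analogous to a generic expression.
flat["LineTotal"] = flat["UnitPrice"] * flat["Quantity"] * (1 - flat["Discount"])

# Summarize the flattened set, roughly what a hypercube aggregation produces.
cube = flat.pivot_table(index="Country", columns="ProductName",
                        values="LineTotal", aggfunc="sum", fill_value=0)
print(cube.head())
```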
Overview
This blog post demonstrates how to configure the denormalize function to join disconnected data sets. A worked example shows how to import and merge various delimited files: the denormalize action joins the data from the separate files into a single consolidated set for analysis. Once the data is joined, we learn how to use hypercubes to aggregate and summarize it.
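As a rough illustration of what the denormalize step accomplishes conceptually, the sketch below joins two pipe-delimited files on a shared key and then aggregates the result. The file names, delimiter, and column names are all assumptions for the example; Flow's own denormalize action is configured through the workflow designer rather than code.

```python
import pandas as pd

# Illustrative sketch only; file names, delimiter, and key column are assumptions.
sales = pd.read_csv("sales.txt", sep="|")        # RegionID, Product, Amount
regions = pd.read_csv("regions.txt", sep="|")    # RegionID, RegionName

# Join the disconnected files on their shared key, as the denormalize step does conceptually.
merged = sales.merge(regions, on="RegionID", how="left")

# Aggregate and summarize, analogous to computing a hypercube over the joined set.
summary = merged.groupby(["RegionName", "Product"])["Amount"].agg(["sum", "count"])
print(summary)
```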
Overview
Here is the second in a series of posts focusing on building reports in Flow. A grouped report is an advanced report produced by Flow. Grouped reports organize records into one or more nested groups, where each group is a collection of records that share a common column value. There are two basic methods you can employ to create grouped reports in Flow. The first is to add a Grouped Report action to a new or existing workflow. The second is to open a hypercube within the Flow portal and click the Create Report button in the toolbar at the top of the hypercube view. This post covers the first method.
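To show what "nested groups of records sharing a common column value" means in practice, here is a small Python sketch. The order records and column names are invented for illustration; the nested loops mirror how a grouped report arranges rows under an outer group and an inner group, each with its own subtotal.

```python
import pandas as pd

# Hypothetical order records; column names are assumptions for illustration.
orders = pd.DataFrame({
    "Country":  ["USA", "USA", "Germany", "Germany", "USA"],
    "Category": ["Beverages", "Produce", "Beverages", "Beverages", "Produce"],
    "Amount":   [120.0, 75.5, 200.0, 80.0, 40.0],
})

# Nest records into groups: outer group on Country, inner group on Category,
# which mirrors how a grouped report organizes rows by common column values.
for country, country_rows in orders.groupby("Country"):
    print(f"Country: {country}  (total {country_rows['Amount'].sum():.2f})")
    for category, rows in country_rows.groupby("Category"):
        print(f"  Category: {category}  (subtotal {rows['Amount'].sum():.2f})")
        for _, record in rows.iterrows():
            print(f"    {record['Amount']:.2f}")
```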
Overview
In this blog post, I provide a worked example demonstrating how to design a workflow that extracts and analyzes cryptocurrency news articles using artificial intelligence. I explain how to use the HTML integration interface to extract the links to all top news stories from a target website into a structured data set. I show how to use generic expressions to transform and clean the raw links, preparing them for processing. Flow is used to loop through each of the structured links and invoke the built-in Watson artificial intelligence functions to perform advanced cognitive analytics against the text of each news article. Flow collects the results of the cognitive analysis and compiles an aggregate dataset of sentiments, emotions, concepts, topics, keywords, and named entities for all of the supplied articles. I finish the example by showing how to compute hypercubes against the cognitive output to summarize the results and generate various multidimensional views.
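The overall shape of that pipeline, extract links, clean them, loop, analyze, aggregate, can be sketched in a few lines of Python. Everything here is an assumption made for illustration: the source URL, the CSS selector, and the analyze_article helper, which is only a hypothetical stand-in for the cognitive analysis that Flow performs with its built-in Watson functions.

```python
import requests
from bs4 import BeautifulSoup

# Illustrative sketch only; the URL, selector, and analyze_article helper are assumptions.
SOURCE_URL = "https://example.com/crypto-news"

# Extract the top-story links from the page's HTML.
html = requests.get(SOURCE_URL, timeout=30).text
soup = BeautifulSoup(html, "html.parser")
links = [a["href"] for a in soup.select("a.headline") if a.get("href")]

# Clean the raw links (e.g. resolve relative URLs), the rough equivalent of the generic expressions step.
links = [l if l.startswith("http") else SOURCE_URL.rstrip("/") + "/" + l.lstrip("/") for l in links]

def analyze_article(url: str) -> dict:
    """Hypothetical stand-in for the cognitive analysis step (sentiment, emotions, keywords, ...)."""
    return {"url": url, "sentiment": 0.0, "keywords": []}

# Loop through each link and collect the results into one aggregate dataset.
results = [analyze_article(url) for url in links]
print(f"Analyzed {len(results)} articles")
```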
Overview
In this article, I provide an introduction to measuring and evaluating data quality using Flow. I briefly discuss data quality dimensions and data quality assessment. Then I examine how a schema-on-write approach increases the time and cost required to assess data quality, followed by a brief discussion of schema-on-read technology. I then introduce Flow's "Generic Data" technology as a solution to the deficiencies of both schema-on-write and schema-on-read for data quality work. Finally, I provide a hands-on worked example of assessing data quality in Flow Analytics using some sample name and address data.
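The kind of checks involved can be illustrated with a small Python sketch over invented name-and-address records. The columns, rules, and scoring are assumptions chosen only to show completeness and validity checks in miniature, not Flow's actual assessment mechanics.

```python
import pandas as pd

# Hypothetical name-and-address records; the columns and rules are assumptions for illustration.
people = pd.DataFrame({
    "Name":  ["Ada Lovelace", "", "Grace Hopper", None],
    "Zip":   ["30301", "ABCDE", "10001", "940"],
    "State": ["GA", "CA", "NY", "XX"],
})

VALID_STATES = {"GA", "CA", "NY"}  # truncated list for the example

# Simple checks along two common data quality dimensions: completeness and validity.
checks = pd.DataFrame({
    "name_present": people["Name"].fillna("").str.strip() != "",
    "zip_valid":    people["Zip"].fillna("").str.match(r"^\d{5}$"),
    "state_valid":  people["State"].isin(VALID_STATES),
})

# Score each record and summarize quality across the set.
checks["record_score"] = checks.mean(axis=1)
print(checks)
print("Overall completeness/validity score:", round(checks["record_score"].mean(), 2))
```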
Overview
Flow enables you to build dashboards containing a variety of elements, including tables, charts, reports, and data summaries. This post focuses on two methods you can use to build, populate, and update dashboards. I show how to add a new dashboard, then how to create and add a chart result using one of the provided sample datasets. Next, I provide an in-depth discussion of adding workflow-generated results to a dashboard.