Excel formula for success is +1: learn how to complement it with Omniscope

There are plenty of articles about ‘Excel hell’, but the truth is people still love it. It’s easy to use and returns answers fast (at least for simple questions). However, it was never built for the multi-dimensional data gymnastics that fill an ordinary day of today’s data analyst, so does that mean it’s time to drop Excel?

Not necessarily… Sometimes the winning formula combines good old tools with innovative solutions. Omniscope provides an environment where any organisation can keep using Excel for the tasks it’s good at and make it work as a productive component inside a modern data ecosystem. XLSX and CSV spreadsheets are useful data containers that are easy to access and edit, but this convenience will inevitably take a toll on the company’s resources if there is no automated ETL process to move the data from source to destination without human intervention.

How to import, transform or blend Excel data in Omniscope?

a) Super speed: import Excel or CSV spreadsheets in 2 seconds – just drag and drop a file onto the projects list page, or directly onto the data workflow canvas. Omniscope will load the file in the same folder and automatically visualise it! From there you can combine these inputs with data in other formats, from various sources (databases, live feeds, APIs…).

b) Live connection: connect to an Excel or CSV spreadsheet on a local or network drive and preserve the connection. Browse to the file location from the File input block. This establishes a live link, so any edits to the file in that location will be picked up and the new data will follow the same transformation process in Omniscope. Refreshes can happen on demand – by clicking the ‘refresh’ button – or via an automated process, using the Scheduler application.

c) Octopus mode: connect to a folder and extract multiple files in one step! If your data storage mechanism collects multiple Excel/CSV files in one folder, there is no need to drag 52 weekly file blocks onto the workflow to get the full picture of your annual data. The ‘Batch append folder’ block extracts any number of Excel, CSV or text files, adding a source field to show where each record came from. (Very useful for audit purposes!) Using a wildcard character, you can target specific file names or extensions.
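To make the idea concrete, here is roughly what a batch-append does, sketched in Python with the standard library. The function name, the `week_*.csv` pattern and the `source_file` column are illustrative assumptions, not Omniscope’s API:

```python
import csv
from pathlib import Path

def batch_append(folder, pattern="week_*.csv"):
    """Append every CSV matching the wildcard pattern, tagging each
    record with the file it came from (handy for audit trails)."""
    records = []
    for path in sorted(Path(folder).glob(pattern)):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                row["source_file"] = path.name  # provenance column
                records.append(row)
    return records
```

Pointing this at a folder of `week_01.csv` … `week_52.csv` files returns one combined record list, with each row tagged with its originating file.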

d) Any file, any location – one block to append them all: extract multiple files from different folders. The ‘Append Files’ block extracts multiple files from any location (machine, network or cloud) and in any format (not just Excel – CSV, JSON, XML, etc.). By default Omniscope auto-detects the file type and reads the contents according to the default options for that type. You can configure how Omniscope reads the files by un-ticking ‘Auto detect file type’, then selecting the file type and further options.
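The auto-detection step can be thought of as a dispatch on file type. A minimal sketch, assuming detection by file extension only (a real tool would also sniff the file contents, and this `read_any` helper is hypothetical):

```python
import csv
import json
from pathlib import Path

def read_any(path):
    """Read a file into records, dispatching on its extension.
    Only JSON and CSV are shown; other formats would slot in the same way."""
    path = Path(path)
    if path.suffix.lower() == ".json":
        return json.loads(path.read_text())
    # default: treat as delimited text
    with open(path, newline="") as f:
        return list(csv.DictReader(f))
```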

Transformation phase – ETL and data quality evaluation

How can the analyst make sure that all those imported Excel files don’t contain ‘fat finger’ entries or a messy copy-paste result? Anyone who has spent more than one Monday morning hunting for a typo in a big table knows that getting correct answers matters even more than getting them fast.

You can combine multiple data quality tests inside an Omniscope workflow – start by adding a ‘Validate data’ block to your ETL process. It will screen the data schema (field list and data types) and also evaluate the record count or the cell values in target fields. If anything is wrong, a detailed data anomaly list will appear in the block’s diagnostics tab. You won’t even have to worry about removing duplicates – that operation is just a block away, and it will be applied every time new data is appended.
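The kinds of checks described above can be expressed in a few lines. This sketch (the function and its parameters are hypothetical, not Omniscope’s implementation) collects anomalies into a diagnostics-style list instead of failing on the first problem:

```python
def validate(records, expected_fields, min_rows=1):
    """Run basic data-quality checks: schema, record count, duplicates.
    Returns a list of human-readable anomaly messages (empty = all clear)."""
    problems = []
    # schema check: field list must match what downstream steps expect
    if records and set(records[0]) != set(expected_fields):
        problems.append(f"schema mismatch: {sorted(set(records[0]))}")
    # record count check
    if len(records) < min_rows:
        problems.append(f"too few records: {len(records)}")
    # duplicate check on whole-row identity
    seen, dupes = set(), 0
    for r in records:
        key = tuple(sorted(r.items()))
        if key in seen:
            dupes += 1
        seen.add(key)
    if dupes:
        problems.append(f"{dupes} duplicate record(s)")
    return problems
```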


Publishing the results

What happens after you’ve imported, validated, augmented and transformed spreadsheet data in Omniscope? You can customise and produce multiple outputs – share clean data with your team, and create interactive graphs that let users explore the findings.

There are also options for mass publishing – for those serving hundreds of customised outputs. Use the ‘Batch Output’ block’s spreadsheet interface to define the filtering variations for the data outputs, or the ‘Report Generator’ block to create dashboard variations, e.g. showing just one country per report.
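Mass publishing boils down to splitting the data on a field and writing one output per distinct value. A minimal sketch of the one-file-per-country idea, using only the standard library (the function name and layout are illustrative):

```python
import csv
from collections import defaultdict
from pathlib import Path

def batch_output(records, split_field, out_dir):
    """Write one CSV per distinct value of split_field,
    e.g. one file per country."""
    groups = defaultdict(list)
    for r in records:
        groups[r[split_field]].append(r)
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    for value, rows in groups.items():
        with open(out_dir / f"{value}.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=rows[0].keys())
            writer.writeheader()
            writer.writerows(rows)
```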


What about the price of this additional software? Excel is cheap… Think again – Excel is not automating your data management! The person-hours lost on manual data transformation and cleaning (copy/paste, rename, search/replace, pivot, then do it all again?), plus the time spent fishing for data faults, add up to more than the cost of most modern data management tools. By far the biggest danger to an organisation is the cost of not having the right information at the right time, or of basing decisions on incorrect data!

Humble spreadsheets are here to stay; however, you may wish to change how Excel is used and integrate it inside a modern data management system – and allow it to excel!

