According to a survey performed by KPMG and Forrester, only 34% of decision-makers say they feel confident in their data …
A sad state of affairs.
Fortunately for the 66% of decision-makers who cannot rely upon their data, there is an answer *drum roll please* . . . the Enlightenment . . . of data.✨
Who would’ve known a school of thought that revolutionized the 17th and 18th centuries could be harvested to optimize our present-day data management?
“The Enlightenment included a range of ideas centered on the sovereignty of reason.” Dorinda Outram, Panorama of the Enlightenment
Governed by a philosophy of rationality, the Age of Enlightenment (also known as the Age of Reason) made decisions based upon a single source of power, or primary source of knowledge. In this instance, the source of truth and knowledge was determined by the senses.
Single Source of Truth or SSOT, the focus of this Process Street blog post, also takes the sovereign approach.
However, rather than the senses, the sovereign power for SSOT is a reliance on un-replicated, autonomous, self-contained, and enhanced data.
To find out more about SSOT and how it can enlighten your data processes, keep reading.
Alternatively, to jump to a specific section of the post, click the appropriate link below.
- The path to Data Enlightenment
- What is Single Source of Truth?
- The role of data for Single Source of Truth
- How to obtain a Single Source of Truth
- Best practices for Single Source of Truth
- Single Source of Truth is Data Enlightenment
The path to Data Enlightenment
I’ve already touched on the similarities between the core concepts constituting the Age of Enlightenment and those used when taking the modern SSOT approach.
So, before I get stuck into SSOT, its role, best practices, and so on, let me first clarify exactly what I mean by data enlightenment.
According to the Cambridge English Dictionary, to be enlightened has two definitions:
The first defines the enlightened state as:
“Showing understanding, acting in a positive way, and not following old-fashioned or false beliefs.” Cambridge English Dictionary.
And the second – which goes a little deeper into the spiritual realm – interprets enlightenment as:
“Knowing the truth about existence”. Cambridge English Dictionary
Now, let’s break down these definitions and apply them to the data arena.
Definition number one:
Imagine a situation whereby all data and all of the information it produces is clear and easy to understand… The outcome would be a blissful state of data clarity.
And, suppose all of the data that is used is acted upon in a positive way … You’d be leveraging data.
As for avoiding the old-fashioned…
Remember the days when we used to have to input all data by hand … as in writing it down on paper?📝
To make matters worse, the paperwork was then stored in humongous metal filing cabinets with never-ending drawers that were all organized alphabetically 😳. . . By the time folder “Z” was exposed, the drawer had all but broken off its runners and filled the entire width of the office.
Fortunately, many of us said goodbye to gargantuan filing cabinets years ago and replaced them with computer hard drives and cloud-based remote data storage systems.
Finally, the last component of the first definition: “to not follow false beliefs”.
False data … need I say more?
The novel coronavirus presents the perfect example of how false or erroneous data can not only be misleading but can also result in poor decision making.
In August 2020, when the coronavirus outbreak was at large, Iowa City in the United States experienced a glitch in its system which produced a data error.
This error caused the city’s official website to show a lower number of infections than the city was actually experiencing.
Because the data results downplayed the severity of the outbreak, decisions such as whether or not to re-open schools were made based upon false information.
This is just one of many examples whereby false data has informed decision-makers and caused them to make the wrong call.
Unfortunately, in the context of the novel coronavirus, making the wrong call involves putting people’s lives at risk.
But, worry not …
I’ll go over how to stop false data from infecting your decision-making processes later on in this post. 🦠
Definition number two:
Just to re-cap, the second definition of enlightenment is: “Knowing the truth about existence”.
Do you have multiple variations of contact information for each of your family members saved on your phone?
No? … Well, I certainly do.
In fact, I have a “Mum” entry for every occasion.
There’s “Mumsy” for the everyday call, “Mummy 2019” for the throwback, “Mum” for everything urgent, and “Mother” … for when I am 90% sure the phone call isn’t going to be a pleasant one.
The only issue is that “Mummy 2019” is definitely an expired number, and I can’t remember whether it’s “Mumsy” or “Mum” that has the right digits.
In this instance knowing the truth, or in other words, the correct number would make my life and overall existence substantially easier.
If I were to change the multiple data entries, remove the redundant numbers, and have just one data entry for “Mum” not only would I have made my life easier: I’d have created my very own Single Source of Truth.
What is Single Source of Truth?
Single Source of Truth, or SSOT, is one inviolable primary source of data; it is the source from which multiple versions of the truth are developed.
SSOT acts as a common language built upon one reliable source of data: with an SSOT, revenue is reported, customers are defined, and products are classified in a single, unchanging, agreed-upon way.
Furthering its spiritual connotations, an SSOT is considered a state of being for a company’s data, in that all data can be found via a single point of reference.
“A logical, often virtual and cloud-based repository that contains one authoritative copy of all crucial data, such as customer, supplier, and product details.” Leandro DalleMule and Thomas H. Davenport, What’s Your Data Strategy?
The role of data for Single Source of Truth
An SSOT aggregates an organization’s raw data and distills it into reliable information. This information can then guide the decision-making process.
In technical terms, this process of leveraging data to ensure it reaches its true potential can be broken down into two key components:
- Data architecture, which explains how data is stored, collected, transformed, distributed, and used. It includes the systems for connecting data with the business processes that consume it.
- Information architecture, on the other hand, consumes the raw data, analyses it, and then integrates it to reveal information about a company’s performance.
To clarify, let’s take a look at an example:
Suppose a large industrial company had over a dozen data sources containing the basic information of their suppliers (such as the supplier’s name or address).
But, suppose the content of this information was ever so slightly different in each data source.
For example, one data source entered a supplier as Carpa; another called it Carpa, Inc.; and a third referred to it as CARPA Corp.
In this example, the data architecture is flawed, meaning that the information architecture relying upon the data is also likely to be flawed.
Although human beings may be able to pick up the variations within the data (albeit as a time-consuming, labor-intensive task), traditional IT systems are unlikely to do so reliably.
Thus, the data is rendered useless.
In this instance, to mitigate the issue, the industrial company would select one source of truth, one sovereignty of reason, and shut down everything else.
For argument’s sake, let’s say that out of Carpa; Carpa, Inc.; and CARPA Corp., the industrial company selected “Carpa” as the single source of truth.
By selecting one SSOT and shutting down redundant systems, the company would save itself substantial IT costs. It would also save one lucky human from the time-consuming task of checking the data sources for variations.
Data, information, and OKRs
Information architecture derived from raw data is particularly insightful when determining a company’s Objectives and Key Results (OKRs), which lay out a plan of execution and ensure that goals are targeted and achievable.
The OKRs define:
The “O” = Objectives, which help us to understand what we’re aiming for.
The “KR” = Key Results, which determine how we intend to measure the success of the objective.
Let’s take a look at Process Street for an example:
At Process Street we have clear OKRs that consciously tie goals to performance.
Let’s imagine our Objective is to increase organic sign-ups, with the Key Results of releasing 15 blog posts per month, out of which at least 5 rank on page 1 of Google for their core keyword.
When determining our OKRs we would need to rely heavily upon an SSOT.
Without an SSOT, there would be no data clarity.
By sourcing our MRR (monthly recurring revenue) data exclusively from ChartMogul, there is no room for variation in the data. Each and every team member knows where to find the true data.
Remember the coronavirus example?🦠
By selecting one single source of truth (ChartMogul), we at Process Street have prevented unclear data from infecting our information architecture and decision-making processes.
How to obtain a Single Source of Truth
For a business to obtain an SSOT, all systems across the business need to aggregate their datasets into the primary single-source-of-truth location 🌍.
Before aggregating the datasets, it is essential that the data being aggregated is correct. This will ensure data clarity as you continue the transition towards an SSOT.
In order to achieve data clarity, you’ll need to go through the process of normalization.
Normalization is like spring cleaning for your datasets of company knowledge. It consists of three primary steps:
- Componentize data
- Eliminate redundant data
- Organize the gaps
How to obtain a Single Source of Truth: Componentize Data
Componentizing data is the process of breaking down data into building blocks of knowledge.
An example of this would be classifying data according to the industry that a supplier or client falls into.
Thus, all of the data concerning your relationship with Apple and Levis, such as market information and commercial transactions, would be mapped according to their industry: consumer electronics or retail.
The componentization of data is fundamental in achieving an SSOT and is crucial for the successful implementation of the following two steps.
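The idea of componentizing can be sketched in a few lines of Python. This is a minimal illustration rather than a real data pipeline; the record fields, suppliers, and amounts are hypothetical:

```python
from collections import defaultdict

# Hypothetical supplier records; the field names are illustrative only.
records = [
    {"supplier": "Apple", "industry": "consumer electronics", "amount": 1200},
    {"supplier": "Levis", "industry": "retail", "amount": 450},
    {"supplier": "Apple", "industry": "consumer electronics", "amount": 300},
]

def componentize(records, key="industry"):
    """Group raw records into building blocks keyed by a classifying field."""
    blocks = defaultdict(list)
    for record in records:
        blocks[record[key]].append(record)
    return dict(blocks)

blocks = componentize(records)
# blocks["consumer electronics"] now holds every Apple transaction,
# and blocks["retail"] every Levis transaction.
```

The choice of `key` is the mapping decision: here records are componentized by industry, exactly as in the Apple/Levis example above.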
How to obtain a Single Source of Truth: Eliminate Redundancy
I touched on this earlier in the post; think of the “Mum” 📞 example.
The same thing applies in a business context: once data has been componentized, duplicates and variations of the data need to be eliminated.
The exact duplicates are easy to identify and eliminate; however, similar yet slightly different data entries will take effort to evaluate individually. With similar components, it is vital to determine if the differences in the data are there for a reason.
Ask yourself: is the difference necessary? If the answer is yes, no matter how small the difference, the data entry is justified.
However, more often than not the majority of these differences are superficial.
Often, similar versions of the same data source will be entered in slightly different ways; a good example of this is the “Carpa/Carpa, Inc.” example used previously. In this case, the data was always referring to “Carpa”, but the component entered was slightly different (Carpa, Inc.; CARPA Corp.).
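To make this concrete, here is a minimal Python sketch of collapsing superficial name variants like those above. The suffix list and normalization rules are assumptions; real-world matching usually also needs fuzzy string comparison and a human review step:

```python
import re

# Collapse superficial variants of the same supplier name by stripping
# case differences and common legal suffixes. The suffix list is an
# illustrative assumption, not an exhaustive rule set.
LEGAL_SUFFIXES = re.compile(r"[,.]?\s*(inc|corp|llc|ltd)\.?$", re.IGNORECASE)

def canonicalize(name: str) -> str:
    name = LEGAL_SUFFIXES.sub("", name.strip())
    return name.strip(" ,.").lower()

entries = ["Carpa", "Carpa, Inc.", "CARPA Corp."]
canonical = {canonicalize(e) for e in entries}
# All three variants collapse to the single key "carpa".
```

Anything that does not collapse cleanly is exactly the “is the difference necessary?” case that deserves individual evaluation.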
How to obtain a Single Source of Truth: Organize the gaps
After you have componentized and consolidated your data, there will likely be gaps in places where duplicates were eliminated.
The third and final step of normalization is to organize these gaps.
In the context of an SSOT, organizing the gaps means making a link or reference to the true (in simpler terms: the correct) data entry, or deleting the gap for good.
Do not copy and paste. ❌
To copy and paste is to undo all the progress made in the elimination process.
The organizing and linking of these gaps is an essential part of normalization. And, whilst there are many right ways of doing this, there are also a few wrong ways. In order to do things the right way, be sure to have a predetermined mapping strategy behind your linking. Poor linking will ultimately result in a poorly functioning SSOT.
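The link-don’t-copy principle can be sketched as follows. The supplier IDs, field names, and record shapes are hypothetical; the point is that a correction made once in the true entry propagates everywhere that references it:

```python
# One authoritative supplier table: the single true entry.
suppliers = {"SUP-001": {"name": "Carpa", "address": "1 Harbor Way"}}

# Dependent records store a reference (a canonical ID), not a pasted copy.
purchase_orders = [
    {"po": 1001, "supplier_id": "SUP-001", "total": 250.0},
    {"po": 1002, "supplier_id": "SUP-001", "total": 990.0},
]

def supplier_for(po):
    # Resolve the link at read time instead of duplicating the data.
    return suppliers[po["supplier_id"]]

# Correct the address once in the true entry...
suppliers["SUP-001"]["address"] = "2 Harbor Way"
# ...and every purchase order now resolves to the corrected entry.
assert supplier_for(purchase_orders[0])["address"] == "2 Harbor Way"
```

Had the address been copy-pasted into each purchase order, the correction would have had to be repeated everywhere, undoing the elimination work.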
If only data enlightenment were as easy as these three simple steps and a good linking strategy…Unfortunately, not all who organize their data will achieve a perfect SSOT.
In fact, a fully functional SSOT depends upon a few more factors.
To put an SSOT in place, an organization will also need to provide its employees with adequate training regarding how to access and manage the SSOT data.
Perhaps the most fundamental step in the process of implementing an SSOT is ensuring that absolutely everyone is using the same data to make a decision; it is essential that every single employee, no matter their position, is working from a single source of truth.
Many companies struggle with data-related issues such as organizing multiple sources of data, a lack of data accuracy, poor collaboration between team members, and poor data accessibility.
And these data-related issues can have a detrimental effect on the company as a whole:
“Making decisions or targeting consumers based on inaccurate predictions will quickly erode, if not extinguish, consumer trust and shake the confidence of those executives who rely on these predictions to make informed decisions.” KPMG, Building Trust In Analytics
So, to summarize: in order to obtain and implement an SSOT successfully, you will need to perform the process of normalization and ensure all employees receive adequate training.
Best practices for Single Source of Truth
Integration is fundamental for any type of SSOT structure.
For any type of SSOT architecture, a business needs its systems to be integrated with each other or into a host system.
To do this, an integration strategy is needed, as well as an interface that will host and surface the organization’s data.
Additionally, all departments and teams must provide access to the systems and data they use.
There are a number of ways to achieve an SSOT architecture.
One method is through an enterprise service bus (ESB), which enables different systems to receive data updates from one another.
With an ESB, the numerous data source systems regularly send their data to an aggregated data system, and any changes in the data being received are published via the ESB.
Another way to implement integration is with a master data management (MDM) system. An MDM tool serves as a hub for an organization’s master data, making it visible and accessible via a single reference point.
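The publish/subscribe idea behind an ESB can be illustrated with a toy, in-memory sketch. This is a simplified stand-in, not a real ESB or MDM product; real buses add durability, routing, and transformation on top of this pattern:

```python
# A minimal in-memory stand-in for the publish/subscribe pattern an ESB
# provides: source systems publish data changes to a bus, and any
# subscribed system receives the update.
class Bus:
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers.get(topic, []):
            handler(message)

bus = Bus()
crm_cache = {}  # a downstream system's local view of supplier data

# The (hypothetical) CRM subscribes to supplier updates.
bus.subscribe("supplier.updated", lambda msg: crm_cache.update({msg["id"]: msg}))

# The (hypothetical) ERP publishes a change; the CRM's copy stays in sync.
bus.publish("supplier.updated", {"id": "SUP-001", "name": "Carpa"})
```

An MDM hub inverts this slightly: instead of systems updating each other, they all read from and write to one master reference point.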
You can also use integration to, well… integrate your truth sources.
If tools only provide isolated pieces of the puzzle, the extent of the information and truth they offer is limited.
Figma, in contrast, embraces collaboration to allow for file-sharing, version management, approval chains, sync conflicts, and more.
“Figma puts all the pieces together, and into more than the sum of those parts. What used to take four or five discrete tools can now be done end-to-end in Figma, the single source of “truth” for product design and design systems.” Peter Levine, Investing in Figma: The Decade of Design
And finally, let’s turn to Google for an example of both integration and collaboration being harnessed to optimize SSOT.
Say a searcher enters a query into Google about their favorite restaurant. Google would then aim to be the SSOT for anything and everything the searcher may need to know about the restaurant, such as its hours, phone number, locations, menu link, ratings, and popular times.
Google is answering the searcher’s query by bringing in data from an array of sources: Google maps, the restaurant’s website, TripAdvisor, Google ratings, and so on.
By collaborating and drawing data from these sources Google becomes the searcher’s SSOT for information about the restaurant.
Single Source of Truth is Data Enlightenment
And there we have it. By covering the bases of:
- Data normalization
- System integration
- Employee education
- (and) Collaboration
Your data strategy will be well on its way to achieving an enlightened state of being.
But, bear in mind … When a person takes on meditation for the first time, they are unlikely to become enlightened overnight 🧘🏽♀️🧘🏻♂️.
The same can be said for data enlightenment and achieving a Single Source of Truth.
Nevertheless, there is no avoiding the necessity of building a data strategy. Much like the gargantuan filing cabinet, the days of unclear and useless data are behind us.
In today’s tech-savvy climate, businesses need to be working to consistently achieve data clarity that both they themselves, and their clients, can depend upon.
The time has come to enlighten your data processes: Implement SSOT to enforce data autonomy, eliminate data replication, enhance data maintenance, and advocate for data self-containment.
Embrace the sovereign power that is … a Single Source of Truth.
Use the comment field below to share your experience of optimizing your data processes, and share any useful SSOT tools or tips that you may have for anyone looking to implement an SSOT.👇🏼