Reducing Organizational Data Infrastructure Costs

We speak with a number of organizations that are building and deploying data infrastructure and analytical processes. They face a range of challenges that prevent them from meeting their analytical business objectives. The aim of this note is to share our thoughts on one specific challenge: high cost. Specifically, we cover:

  1. Cost model – Deconstruction of the cost of data infrastructure
  2. Drivers – Drivers of each cost dimension
  3. Recommendations – Actions to address each driver

Simple Cost Scaling Model for Data Infrastructure

In our experience, when we abstract out the cost structure, it looks like the equation below. For each use case that a business is considering, there is a transaction or threshold cost imposed by the existing infrastructure and processes, plus a use-case-dependent, effective cost of delivering on the use case.
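A minimal sketch of this relationship, assuming the threshold terms add and the use-case-specific terms combine multiplicatively; the symbol C_delivery (the nominal cost of building and delivering the use case) is ours:

\[
\text{Cost}_{\text{use case}} \;=\; \text{Base} + \text{Process} + \frac{\text{Depth} \times \text{Confidence} \times C_{\text{delivery}}}{\text{Reuse}}
\]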

The various dimensions of the cost model are:

  1. Base: Threshold transaction cost that always has to be paid due to the way the systems are organized
  2. Process: Task-specific additional coordination cost
  3. Depth: Cost scaling based on the complexity of the modeling and delivery
  4. Confidence: Cost scaling due to the degree of trust needed in the output
  5. Reuse: Cost amortization from the ability to reuse assets, processes, and outputs

Drivers of Cost Dimensions

We expand on the underlying drivers for each of the cost dimensions:

  1. Use Case Selection: A use case with limited buy-in and unclear business benefit results in loss of interest over time. Projects are sometimes shut down midway, resulting in lost time and wasted effort.
  2. Base Complexity: The complexity of the IT systems increases the transaction cost of every data project at every step. Discovering what data exists in the system, how to access it, and whether it is relevant and usable can be time consuming. Further, implementing the data project may require workarounds and additional modules.
  3. Process: Each use case may involve a different degree of coordination between people and organizations, and of integration between systems. Friction in this activity, due to economic or other issues, increases the overhead of a given data project.
  4. Depth: Questions in an organization can be framed and addressed at varying levels of scale, accuracy, and relevance. Cost increases with scale (e.g., every customer instead of a cohort), accuracy (e.g., fundamental drivers instead of proximate causes), and relevance (e.g., integrated into the workflow at the exact time and level of detail instead of loosely enabling a decision).
  5. Confidence: Analytical processes tend to be error prone. Testing an answer for robustness across dimensions such as time, space, and user groups is often several times the cost of the initial analysis, largely because systems are rarely designed for controlled experimentation. In addition, more infrastructure has to be built to repeat the experiments.
  6. Reuse: Questions in an organization tend to build upon previous questions, and systems are needed to enable reuse of the process, technology, and data assets created. Mechanisms for managing these artifacts and sharing the results are often missing in enterprises, so analysis attempts often start from scratch.

Common across all these drivers is the people cost. As the cost of technology drops over time, cost shifts to people, and this people cost is growing rapidly.

Recommendations

The cost model immediately shows how the cost can be reduced over time:

  1. Find Good Quality Use Cases: It is worth spending time to find a good quality use case to drive the development. Such use cases have clear and positive economics that align people (motive), meet data and people preconditions (means), and secure commitment and resources from the organization (opportunity).
  2. Incentivize IT for Data Consumption: Data engineering can account for up to 80% of a data project. Architecting systems and data infrastructure for data discovery and consumption will reduce the transaction cost for all projects. This is not simple, though. IT organizations are overburdened and are incentivized for functionality and robustness, not for making technology and other choices that enable the organization’s data journey. There is often technical debt, built up over time, that needs to be addressed first.
  3. Build a Well-Oiled Team: Most data projects involve coordination across business functions, including IT, business, and data teams, for a number of reasons: formulating the problem, selecting methods, and uncovering tacit knowledge. Ensure low friction and a high degree of collaboration.
  4. Build a Balanced Team: Data projects have three main areas – modeling/statistics (20-40%), engineering (40-80%), and domain (10-30%). Strong and proportionate representation from each of these areas enables a defensible result, an efficient implementation, and a business-relevant output.
  5. Build a Sharing Culture: Data analysis generates a significant amount of knowledge about an organization’s business, including its customers, products, and data assets. Providing a mechanism to share work products, and incentivizing its use, saves significant resources by reducing errors and reusing work already done.

Takeaway

The high cost of data infrastructure and projects can be understood and reduced, but it takes discipline in thinking and execution to achieve this. Data science, more than anything, is a test of the character of the organization.

Dr. Venkata Pingali is Co-Founder and CEO of Scribble Data, a data engineering company. Their flagship product, Enrich, is a robust data enrichment platform.
