
Security in ML Systems using Feature Stores

With transformational early successes in value creation, AI/ML is set to become ubiquitous: by 2030, AI could contribute up to $15.7 trillion to the global economy. As more and more organizations depend on data and machine learning (ML) models for crucial decision-making, the security of that data and of the ML systems built on it becomes business critical. ML security is imperative to tap into the potential of ML and, more importantly, to mitigate negative business impact. In this article we discuss the vulnerability introduced by data, the system requirements to counter it, and how feature stores and feature engineering platforms, such as Scribble Data's Enrich Intelligence Platform, enable organizations to fortify their ML systems.

Security and risk management in ML systems

As with most hardware and software systems, security in ML systems is an end-to-end endeavor with security by design at its foundation. Potential vulnerability points exist across the machine learning workflow. A typical ML system consists of the ML model(s), the data it uses, the systems, dependencies, and processes around the model, and the resulting use cases. The attack vectors associated with each of these components need to be accounted for to ensure safe and secure operation. In addition to standard cybersecurity best practices, security checkpoints need to be integrated throughout the design, development, and implementation of an ML system.

With proliferating data usage and the increasing complexity of models (statistical or ML), policymakers across geographies have become increasingly vigilant about the risks of using models in business-critical systems. This has resulted in more stringent regulatory compliance requirements across industries.

In 2011, after the financial crisis of 2008, the US Federal Reserve and Office of the Comptroller of the Currency (OCC) issued SR 11-7: a comprehensive Supervisory Guidance for banking organizations on effective Model Risk Management. As per the attachment to the SR letter: “The expanding use of models in all aspects of banking reflects the extent to which models can improve business decisions. But models also come with costs, such as the possible adverse consequences (including financial loss). Those consequences should be addressed by active management of model risk.” The Guidance extends across:

  1. Robust model development, implementation, and use,
  2. Sound model validation process, and 
  3. Governance, policies, and control mechanisms. 

It presents two primary reasons for model risk: 

  1. The model may have fundamental errors and may produce inaccurate outputs when viewed against the design objective and intended business uses; or 
  2. The model may be used incorrectly or inappropriately.

Such guidelines are as relevant to ML models as they are to statistical models, and should inform the ML system risk management framework being used by an organization.

Data-borne vulnerabilities in ML systems

In addition to the accuracy of model methodologies and the processing components (such as the codebase) that implement them, the integrity of input data is critical, both the data used to develop a model and the data it consumes in operation. From training to inference to downstream use cases, the model lifecycle is largely automated. Beyond rigorous assessment of input data quality and relevance, and analysis of outcomes, there are few ways to check for errors, and the subtle attacks that cause them are very hard to detect. When an error creeps in, it propagates through the system undetected, making ML systems fragile.

In an ML system, attacks can be intentional (adversarial machine learning) or unintentional (human errors, bugs, or oversight). Data-associated attacks can happen in two ways:

  1. During the training process.
    The data can be tampered with or corrupted, or a delay or a bias can be introduced, resulting in an inaccurate model with fundamental errors (model poisoning); a minimal integrity check against such tampering is sketched after this list.
  2. During the inference process.
    (i) The model can be queried, its behavior learnt, and data then manipulated to compromise and trick the model into (undetectably) inaccurate results, or to trigger worst-case scenarios that break or interfere with business operations (model evasion).
    (ii) The model can also be attacked to recover the features used to train it, which could compromise private data (model inversion).
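
To make the poisoning scenario concrete, here is a minimal sketch, in Python, of one common pre-training defence: verifying every dataset file against an approved hash manifest before a training run begins. The directory layout and manifest format are assumptions for illustration, not part of any particular platform.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large datasets do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_training_data(data_dir: str, manifest_path: str) -> None:
    """Abort before training if any file is missing or differs from the
    approved manifest (a hypothetical JSON map of filename -> SHA-256)."""
    manifest = json.loads(Path(manifest_path).read_text())
    for filename, expected in manifest.items():
        path = Path(data_dir) / filename
        if not path.exists():
            raise RuntimeError(f"Missing training file: {filename}")
        actual = sha256_of(path)
        if actual != expected:
            raise RuntimeError(
                f"Integrity check failed for {filename}: "
                f"expected {expected}, got {actual}"
            )

# Example: run as the first step of the training pipeline.
# verify_training_data("data/train", "data/train_manifest.json")
```

A check like this does not prevent poisoning at the source, but it does guarantee that whatever was reviewed and approved is exactly what the training job consumes.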

The increased dependence on data and ML models for business decision-making and process improvement across industries has been well complemented by advances in technology and an ever-growing data infrastructure. The volume of data available to these models is growing manifold. The increase in data sources, data channels, data diversity, and data volume has expanded the attack surface. It has also increased the number of people involved, and the complexity of the systems and processes, across the ML lifecycle (growing attack vectors). The expanding attack surface and growing attack vectors, together with the rich returns a successful attack offers, have made the ML ecosystem more vulnerable.

How to address the data vulnerability in ML systems?

Along with security by design, continuous evaluation of controls and governance, and rigorous model testing and outcome validation, the increasing data vulnerability necessitates a defensive ML system: a highly streamlined, disciplined, auditable, stable, and trusted system that watches and defends itself to preserve the safety, integrity, and fidelity of its data and outcomes. The focus needs to be on the means and methods to maintain data correctness and model accuracy.

Introducing a security buffer and minimizing direct user access to raw data and models will reduce risk significantly, in addition to adopting robust user access and (multi-factor) authentication policies. A user should not be able to access, interact with, or query the raw data, the input data to models, or the models directly. Instead, there should be a secure layer (or layers) of separation between a user and all the core elements of an ML system.
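
As an illustration of such a buffer, the sketch below, with invented names, wraps the data and the model behind a gateway that exposes only vetted operations: a caller can request a named feature view or a prediction, but never the raw records or the model object itself.

```python
from typing import Any, Dict, Set, Tuple

class AllowList:
    """Toy authorization policy: an explicit set of (user, action, resource)."""
    def __init__(self, grants: Set[Tuple[str, str, str]]) -> None:
        self._grants = grants

    def allows(self, user: str, action: str, resource: str) -> bool:
        return (user, action, resource) in self._grants

class MLGateway:
    """Hypothetical mediation layer: the raw data and the model live behind
    this class and are never handed back to callers."""
    def __init__(self, feature_views: Dict[str, Any], model: Any,
                 authz: AllowList) -> None:
        self._feature_views = feature_views  # curated, pre-approved views only
        self._model = model                  # kept private; results only
        self._authz = authz

    def get_feature_view(self, user: str, view_name: str) -> Any:
        if not self._authz.allows(user, "read", view_name):
            raise PermissionError(f"{user} may not read view {view_name}")
        return self._feature_views[view_name]  # a governed slice, not raw data

    def predict(self, user: str, features: Any) -> Any:
        if not self._authz.allows(user, "invoke", "model"):
            raise PermissionError(f"{user} may not query the model")
        return self._model.predict(features)  # output only; model stays hidden
```

Rate-limiting and logging the predict path in such a gateway also blunts the query-and-probe step that model evasion and inversion attacks depend on.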

This security buffer is exactly what a feature store can facilitate, seamlessly and effectively. The vantage point at which a feature store sits allows it to anchor the design and implementation of sturdier data security measures in an ML system.

Introducing robust automation and secure data handling practices across the ML lifecycle (for example, in identity and access management and security management) will further secure the system by reducing the points of direct intervention and data access. The improved system efficiency will also enable faster response times to attacks. Understanding the inherent limitations of ML algorithms and implementation processes will help identify, and secure against, the dangerous hidden gaps in data access management.
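
One small piece of that automation, sketched below with assumed file locations, is an append-only audit trail: every access decision is recorded automatically, so detecting and responding to suspicious activity does not depend on manual bookkeeping.

```python
import json
import time
from pathlib import Path

AUDIT_LOG = Path("audit/access.log")  # assumed location, for illustration

def audit(user: str, action: str, resource: str, allowed: bool) -> None:
    """Append one structured record per access decision, allowed or denied."""
    record = {
        "ts": time.time(),
        "user": user,
        "action": action,
        "resource": resource,
        "allowed": allowed,
    }
    AUDIT_LOG.parent.mkdir(parents=True, exist_ok=True)
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")

# Example: called from every gateway method, including on denials.
# audit("analyst_1", "read", "customer_features_v2", allowed=False)
```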

Needless to say, a holistic approach is indispensable to minimize the risk to data security. Safeguards against both internal and external exfiltration are important. In addition to risk mitigation measures, early threat detection and fast response to an attack are equally crucial to reduce the risk of data corruption, loss, or misuse. MITRE's Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS) aims to raise awareness of the existing and new vulnerabilities that compromise the confidentiality, integrity, or availability of production ML systems. Its threat matrix presents the progression of tactics used in attacks and the associated breaches in access control to data and models. Drawing best practices from such frameworks, which are seeded with learnings from across the industry, will enable organizations to build sound strategies and secure the data in their ML systems.

Feature stores and data security across ML systems

A feature store is a centralized location where features are cataloged and organized to be accessed directly by data scientists and data engineers, for tasks ranging from data visualization and ad hoc insight generation to training ML models and running inference on already trained models. However, a feature store is not only a data layer. The framework also supports data manipulation: you can create or update groups of features extracted from multiple data sources, or generate new datasets from those feature groups. By virtue of the very purpose it serves, a feature store efficiently ensures the availability of:

  1. correct data (ensures data consistency, integrity, and fidelity),
  2. safe data (with data privacy and legal risk considerations),
  3. relevant data (with access only to selective useful data), and 
  4. quality data (rich and auditable data). 

It inherently introduces a layer of separation, an air gap of sorts, between a user and raw data. This separation is what offers the required data security layer across ML systems, as the sketch below illustrates.
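To picture the day-to-day effect of that separation, here is a hedged sketch of consuming features through a store. The client and method names are invented for illustration and do not correspond to any particular product's API.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FeatureStoreClient:
    """Hypothetical client: users ask for cataloged feature groups by name and
    the store resolves them to governed data; raw source tables stay hidden."""
    catalog: Dict[str, List[dict]] = field(default_factory=dict)

    def register_feature_group(self, name: str, rows: List[dict]) -> None:
        # In a real store this would be a governed, audited pipeline.
        self.catalog[name] = rows

    def get_training_frame(self, feature_groups: List[str]) -> List[dict]:
        """Assemble the requested feature groups into one training dataset.
        (A real store would join on entity keys, point-in-time correct.)"""
        frame: List[dict] = []
        for group in feature_groups:
            if group not in self.catalog:
                raise KeyError(f"Unknown feature group: {group}")
            frame.extend(self.catalog[group])
        return frame

store = FeatureStoreClient()
store.register_feature_group("customer_spend_30d",
                             [{"customer_id": 1, "spend_30d": 420.0}])
training_data = store.get_training_frame(["customer_spend_30d"])
# The caller gets curated features; it never touches the raw source tables.
```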

Organizations are now beginning to align with the shift in the competitive landscape and the changing economics of data. They see data as a critical asset, because efficiently solving bread-and-butter problems (day-to-day business decision-making) with reliable data has become necessary to stay relevant and in business. Data integrity is therefore crucial, and organizations are adopting the methods, tools, and processes required to encourage the disciplined handling of data. Still, most current avatars of the feature store make sense only once an organization crosses a certain business scale or risk threshold. For mass adoption of feature stores, and thereby improved data security across the ML ecosystem, the economics of a feature store have to change.

With improved economics, and with the help of add-on elements (such as an App Store), a feature store can become a security bridge with multiple ingress and egress points across the ML workflow. For example, Scribble Data's Enrich Intelligence Platform attaches an App Store to its feature store. Via the App Store, users can access the data in the feature store, interact with the ML models that consume that data, and access the outputs of those models, all without any direct access to either the raw data or the models themselves.

Final thoughts

With increasing ML adoption, the tech community is rallying around machine learning security to protect data and networks from adversaries. The growing complexity of ML systems keeps adding unforeseen and dangerous vulnerabilities, and with bad actors keeping pace, novel security approaches will continue to emerge. Watch this space to stay in touch with the latest in the world of ML systems.
