Structural Attack

Structural attacks.

Runs a number of ‘static’ structural attacks based on (i) the target model’s properties and (ii) the TRE’s risk appetite as applied to tables and standard regressions.

This module provides the StructuralAttack class, which assesses a trained machine learning model for several common structural vulnerabilities.

These include:

- Degrees of freedom risk
- k-anonymity violations
- Class disclosure
- ‘Unnecessary Risk’ caused by hyperparameters likely to lead to undue model complexity

The methodology is aligned with SACRO-ML’s privacy risk framework.
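
A minimal usage sketch (the construction of the sacroml Target is assumed here; only StructuralAttack, attackable, and attack are taken from this module):

    from sacroml.attacks.structural_attack import StructuralAttack

    # `target` is assumed to be a sacroml Target wrapping a trained
    # classifier together with its training and test data.
    attack = StructuralAttack(output_dir="outputs", write_report=True)
    if StructuralAttack.attackable(target):
        output = attack.attack(target)  # dict of attack results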

class sacroml.attacks.structural_attack.StructuralAttack(output_dir: str = 'outputs', write_report: bool = True, risk_appetite_config: str = 'default')

Structural attacks based on the static structure of a model.

Performs structural privacy risk assessments on trained ML models.

This class implements static structural attacks based on model architecture and hyperparameters, aligned with TRE risk appetite configurations.

Attack pipeline includes (see the sketch below):

- Equivalence class analysis
- Degrees of freedom check
- k-anonymity check
- Class disclosure risk
- Complexity risk
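
As a rough illustration of the first three checks (a sketch of the idea, not the library’s implementation; in sacroml the thresholds come from the risk appetite configuration):

    import numpy as np

    def min_equivalence_class_size(probs: np.ndarray) -> int:
        # Records that receive identical predicted-probability vectors are
        # indistinguishable to the model and form one equivalence class;
        # the smallest class size is the model's effective k-anonymity.
        _, counts = np.unique(probs, axis=0, return_counts=True)
        return int(counts.min())

    def dof_risky(n_samples: int, n_params: int, threshold: int = 10) -> bool:
        # Residual degrees of freedom: training samples minus learned
        # parameters. The threshold of 10 is illustrative only.
        return (n_samples - n_params) < threshold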

Methods

attack(target)

Check whether an attack can be performed and run the attack.

attackable(target)

Return whether a target can be assessed with StructuralAttack.

get_params()

Get parameters for this attack.

classmethod attackable(target: Target) → bool

Return whether a target can be assessed with StructuralAttack.

__init__(output_dir: str = 'outputs', write_report: bool = True, risk_appetite_config: str = 'default') → None

Construct an object to execute a structural attack.

Parameters:
output_dir : str

Name of a directory to write outputs.

write_report : bool

Whether to generate a JSON and PDF report.

risk_appetite_config : str

Path to a YAML file specifying the TRE risk appetite.
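
For example, pointing the attack at a custom risk appetite file (the file name shown is illustrative):

    from sacroml.attacks.structural_attack import StructuralAttack

    attack = StructuralAttack(
        output_dir="structural_outputs",
        write_report=True,
        risk_appetite_config="my_tre_risk_appetite.yaml",  # illustrative path
    )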

attack(target: Target) → dict

Check whether an attack can be performed and run the attack.

get_params() → dict

Get parameters for this attack.

Returns:
params : dict

Parameter names mapped to their values.
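
For instance (the exact keys returned are not guaranteed; those shown are a plausible example based on the constructor parameters):

    params = attack.get_params()
    print(params)  # e.g. {'output_dir': 'outputs', 'write_report': True, ...}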

class sacroml.attacks.structural_attack.StructuralAttackResults(dof_risk: bool, k_anonymity_risk: bool, class_disclosure_risk: bool, lowvals_cd_risk: bool, unnecessary_risk: bool, details: dict | None = None)

Dataclass to store the results of a structural attack.

Attributes:
dof_risk (bool) : Risk based on degrees of freedom.
k_anonymity_risk (bool) : Risk based on k-anonymity violations.
class_disclosure_risk (bool) : Risk of class label disclosure.
lowvals_cd_risk (bool) : Risk from low-frequency class values.
unnecessary_risk (bool) : Risk due to unnecessarily complex model structure.
details (dict | None) : Optional additional metadata.
__init__(dof_risk: bool, k_anonymity_risk: bool, class_disclosure_risk: bool, lowvals_cd_risk: bool, unnecessary_risk: bool, details: dict | None = None) → None
class_disclosure_risk: bool
details: dict | None = None
dof_risk: bool
k_anonymity_risk: bool
lowvals_cd_risk: bool
unnecessary_risk: bool
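
A sketch of constructing and inspecting a results object (all field values and the details payload are invented for illustration):

    from sacroml.attacks.structural_attack import StructuralAttackResults

    results = StructuralAttackResults(
        dof_risk=False,
        k_anonymity_risk=True,
        class_disclosure_risk=False,
        lowvals_cd_risk=False,
        unnecessary_risk=False,
        details={"k": 3},  # illustrative metadata
    )
    if results.k_anonymity_risk:
        print("k-anonymity check flagged a risk:", results.details)
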
sacroml.attacks.structural_attack.get_model_param_count(model: BaseEstimator) → int

Return the number of trained parameters in a model.

This includes learned weights, thresholds, and decision rules depending on model type. Supports DecisionTree, RandomForest, AdaBoost, XGBoost, and MLP classifiers.

Parameters:
model (BaseEstimator) : A trained scikit-learn or XGBoost model.

Returns:
int : Estimated number of learned parameters.
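
A usage sketch with a small scikit-learn model:

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier
    from sacroml.attacks.structural_attack import get_model_param_count

    X, y = load_iris(return_X_y=True)
    model = DecisionTreeClassifier(max_depth=3).fit(X, y)
    n_params = get_model_param_count(model)  # estimated learned parameters
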
sacroml.attacks.structural_attack.get_unnecessary_risk(model: BaseEstimator) → bool

Check whether model hyperparameters are in the top 20% most risky.

This check is based on a classifier trained on the results of a large-scale study described in https://doi.org/10.48550/arXiv.2502.09396.

Parameters:
model : BaseEstimator

The trained model to check for risk.

Returns:
bool

True if the model’s hyperparameters are considered high risk, otherwise False.
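
For example, reusing the fitted decision tree from the sketch above:

    from sacroml.attacks.structural_attack import get_unnecessary_risk

    if get_unnecessary_risk(model):
        print("Hyperparameters fall in the high-risk region identified by the study.")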