Structural Attack

Structural attacks.

Runs a number of ‘static’ structural attacks based on: (i) the target model’s properties; (ii) the TRE’s risk appetite as applied to tables and standard regressions.

Tree-based model types are currently supported.
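
A minimal end-to-end sketch is given below. The keyword arguments used to construct the Target object (model, X_train and so on) are assumptions for illustration only; consult the Target class documentation for the exact interface.

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    from sacroml.attacks.structural_attack import StructuralAttack
    from sacroml.attacks.target import Target

    # Train a tree-based target model
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

    # Wrap the trained model and data (constructor arguments assumed for illustration)
    target = Target(
        model=model,
        X_train=X_train,
        y_train=y_train,
        X_test=X_test,
        y_test=y_test,
    )

    # Run the structural attack; a report is written to the output directory
    attack_obj = StructuralAttack(output_dir="outputs", write_report=True)
    report = attack_obj.attack(target)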

class sacroml.attacks.structural_attack.StructuralAttack(output_dir: str = 'outputs', write_report: bool = True, risk_appetite_config: str = 'default')[source]

Structural attacks based on the static structure of a model.

Methods

attack(target)

Run structural attack.

dt_get_equivalence_classes()

Get details of equivalence classes based on white box inspection.

get_equivalence_classes()

Get details of equivalence classes based on predicted probabilities.

get_params()

Get parameters for this attack.

__init__(output_dir: str = 'outputs', write_report: bool = True, risk_appetite_config: str = 'default') → None [source]

Construct an object to execute a structural attack.

Parameters:
output_dir : str

Name of a directory to write outputs.

write_report : bool

Whether to generate a JSON and PDF report.

risk_appetite_config : str

Path to the YAML file specifying the TRE risk appetite.
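
For example, an attack object could be configured with a custom output directory and a TRE-supplied risk appetite file; the YAML path below is a placeholder:

    from sacroml.attacks.structural_attack import StructuralAttack

    attack_obj = StructuralAttack(
        output_dir="structural_outputs",
        write_report=True,
        risk_appetite_config="tre_risk_appetite.yaml",  # placeholder path
    )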

attack(target: Target) → dict [source]

Run structural attack.

To be used when the code has access to the Target class and the trained target model.

Parameters:
target : attacks.target.Target

The target, wrapped as a Target class object.

Returns:
dict

Attack report.
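
Continuing the sketch above, the attack is invoked directly on a prepared Target; the contents of the returned dictionary are not assumed here:

    report = attack_obj.attack(target)  # runs the static structural checks
    print(sorted(report.keys()))        # inspect what the attack report contains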

dt_get_equivalence_classes() → tuple [source]

Get details of equivalence classes based on white box inspection.

get_equivalence_classes() → tuple [source]

Get details of equivalence classes based on predicted probabilities.
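
The underlying idea can be illustrated outside this class: records that receive identical predicted probability vectors are indistinguishable to the model and so fall into the same equivalence class, and as with small cells in a table, small classes suggest higher disclosure risk. The snippet below is an illustrative sketch only, not the implementation of this method; it reuses the model and X_train from the sketch above.

    import numpy as np

    probs = model.predict_proba(X_train)  # predicted probabilities for the training data
    _, class_ids = np.unique(probs, axis=0, return_inverse=True)
    class_sizes = np.bincount(class_ids)  # number of records per equivalence class
    print(len(class_sizes), class_sizes.min())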

get_params() → dict

Get parameters for this attack.

Returns:
params : dict

Parameter names mapped to their values.
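
Continuing the sketch above, for example:

    print(attack_obj.get_params())  # e.g. output_dir, write_report, risk_appetite_config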

sacroml.attacks.structural_attack.get_model_param_count(model: BaseEstimator) → int [source]

Return the number of trained parameters in a model.
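
A brief usage sketch, reusing the fitted model from the sketch above:

    from sacroml.attacks.structural_attack import get_model_param_count

    n_params = get_model_param_count(model)  # model is a fitted scikit-learn BaseEstimator
    print(f"Trained parameters: {n_params}")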

sacroml.attacks.structural_attack.get_tree_parameter_count(dtree: DecisionTreeClassifier) → int [source]

Read the tree structure and return the number of learned parameters.
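
A brief usage sketch for a fitted decision tree, reusing the training data from the sketch above; the node count from the fitted tree_ structure is printed only for comparison, not to assert the exact counting rule used:

    from sklearn.tree import DecisionTreeClassifier
    from sacroml.attacks.structural_attack import get_tree_parameter_count

    dtree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
    print(get_tree_parameter_count(dtree))  # learned parameters read from the tree structure
    print(dtree.tree_.node_count)           # raw node count, for comparison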

sacroml.attacks.structural_attack.get_unnecessary_risk(model: BaseEstimator) → bool [source]

Check whether model hyperparameters are in the top 20% most risky.

This check is designed to assess whether a model is likely to be unnecessarily risky, i.e., whether it is highly likely that a different combination of hyper-parameters would have led to a model with similar or better accuracy on the task but with lower membership inference risk.

The rules applied were derived from an experimental study using a grid search in which:

- max_features was one-hot encoded from the set [None, log2, sqrt]
- splitter was encoded using 0=best, 1=random

The target models created were then subjected to membership inference attacks (MIA) and the hyper-parameter combinations were rank-ordered according to MIA AUC. A decision tree was then trained to recognise whether a hyper-parameter combination was in the 20% most risky. The rules used by this check were extracted from that tree for the ‘least risky’ nodes.

Parameters:
model : BaseEstimator

Model to check for risk.

Returns:
bool

True if high risk, otherwise False.
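
A brief usage sketch, reusing the training data from the sketch above; whether a particular hyper-parameter combination is flagged depends on the rules extracted from the study described above:

    from sklearn.tree import DecisionTreeClassifier
    from sacroml.attacks.structural_attack import get_unnecessary_risk

    deep_tree = DecisionTreeClassifier(max_depth=None, min_samples_leaf=1).fit(X_train, y_train)
    if get_unnecessary_risk(deep_tree):
        print("Hyper-parameters are in the top 20% most risky combinations")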