Base Class

This module contains prototypes of privacy-safe model wrappers.

class aisdc.safemodel.safemodel.SafeModel[source]

Privacy protected model base class.

Examples

>>> safeRFModel = SafeRandomForestClassifier()
>>> safeRFModel.fit(X, y)
>>> safeRFModel.save(name="safe.pkl")
>>> safeRFModel.preliminary_check()
>>> safeRFModel.request_release(path="safe", ext="pkl", target=target)
WARNING: model parameters may present a disclosure risk:
- parameter min_samples_leaf = 1 identified as less than the recommended min value of 5.
Changed parameter min_samples_leaf = 5.

Model parameters are within recommended ranges.

Attributes:
model_type : string

A string describing the type of model. Default is “None”.

model :

The Machine Learning Model.

saved_model :

A saved copy of the Machine Learning Model used for comparison.

ignore_items : list

A list of items to ignore when comparing the model with the saved_model.

examine_separately_items : list

A list of items to examine separately. These items are more complex data structures that cannot be compared directly.

filename : string

A filename to save the model.

researcher : string

The researcher user-id used for logging.

Methods

additional_checks(curr_separate, saved_separate)

Placeholder function for additional post-hoc checks, e.g. for Keras; this version just checks that any lists have the same contents.

examine_seperate_items(curr_vals, saved_vals)

Comparison of more complex structures; in the super class we just check that these model-specific items exist in both current and saved copies.

get_current_and_saved_models()

Makes a copy of self.__dict__ and splits it into dicts for the current and saved versions.

get_params([deep])

Gets a dictionary of parameter values restricted to those expected by the base classifier.

posthoc_check()

Checks whether model has been interfered with since fit() was last run.

preliminary_check([verbose, apply_constraints])

Checks whether current model parameters violate the safe rules.

request_release(path, ext[, target])

Saves model to filename specified and creates a report for the TRE output checkers.

run_attack([target, attack_name, ...])

Runs a specified attack on the trained model and saves a report to file.

save([name])

Writes model to file in appropriate format.

__apply_constraints(operator: str, key: str, val: Any, cur_val: Any) → str

Applies a safe rule for a given parameter.

__check_model_param(rule: dict, apply_constraints: bool) → tuple[str, bool]

Checks whether a current model parameter violates a safe rule. Optionally fixes violations.

__check_model_param_and(rule: dict, apply_constraints: bool) → tuple[str, bool]

Checks whether current model parameters violate a logical AND rule. Optionally fixes violations.

__check_model_param_or(rule: dict) → tuple[str, bool]

Checks whether current model parameters violate a logical OR rule.

__get_constraints() → dict

Gets constraints relevant to the model type from the master read-only file.

__init__() → None[source]

Super class constructor, gets researcher name.

__str__() → str[source]

Returns a string with the model description. No point writing a test, especially as it depends on the username.

__weakref__

List of weak references to the object (if defined).

additional_checks(curr_separate: dict, saved_separate: dict) → tuple[str, bool][source]

Placeholder function for additional post-hoc checks, e.g. for Keras; this version just checks that any lists have the same contents.

Parameters:
curr_separate : python dictionary
saved_separate : python dictionary
Returns:
msg : string
A message string.
disclosive : bool
A boolean value to indicate whether the model is potentially disclosive.

Notes

Post-hoc checking makes sure that the two dicts have the same set of keys, as defined in the list self.examine_separately_items.
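
As a sketch of the calling contract (the dictionary key and list contents below are invented for illustration, and identical lists are assumed to yield disclosive == False):

>>> msg, disclosive = safeRFModel.additional_checks(
...     {"layer_sizes": [64, 32]}, {"layer_sizes": [64, 32]}
... )
>>> disclosive
False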

examine_seperate_items(curr_vals: dict, saved_vals: dict) → tuple[str, bool][source]

Comparison of more complex structures; in the super class we just check that these model-specific items exist in both current and saved copies.

get_current_and_saved_models() → tuple[dict, dict][source]

Makes a copy of self.__dict__ and splits it into dicts for the current and saved versions.
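
A usage sketch, continuing the SafeRandomForestClassifier example above; passing the full dicts straight into examine_seperate_items is an assumption made for illustration:

>>> current, saved = safeRFModel.get_current_and_saved_models()
>>> msg, disclosive = safeRFModel.examine_seperate_items(current, saved)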

get_params(deep=True)[source]

Gets a dictionary of parameter values restricted to those expected by the base classifier.
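
A usage sketch, assuming the fitted SafeRandomForestClassifier from the Examples above, where min_samples_leaf was raised to the recommended minimum of 5:

>>> params = safeRFModel.get_params()
>>> params["min_samples_leaf"]
5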

posthoc_check() → tuple[str, bool][source]

Checks whether model has been interfered with since fit() was last run.
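
A typical pattern is to run this check before requesting release, for example:

>>> msg, disclosive = safeRFModel.posthoc_check()
>>> if disclosive:
...     print(msg)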

preliminary_check(verbose: bool = True, apply_constraints: bool = False) → tuple[str, bool][source]

Checks whether current model parameters violate the safe rules. Optionally fixes violations.

Parameters:
verbose : bool

A boolean value determining whether to produce more detailed output.

apply_constraints : bool

A boolean determining whether identified constraints are to be upheld and applied.

Returns:
msg : string

A message string.

disclosive : bool

A boolean value indicating whether the model is potentially disclosive.
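
For example, the warning shown in the Examples above corresponds to a call such as the following (output omitted here):

>>> msg, disclosive = safeRFModel.preliminary_check(
...     verbose=True, apply_constraints=True
... )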

request_release(path: str, ext: str, target: Target | None = None) → None[source]

Saves model to filename specified and creates a report for the TRE output checkers.

Parameters:
path : string

Path to save the outputs.

ext : str

File extension defining the model saved format, e.g., “pkl” or “sav”.

target : attacks.target.Target

Contains model and dataset information.

Notes

If target is not None, then worst-case membership inference (MIA) and attribute inference attacks are run via run_attack.
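
A hedged usage sketch; the import path and Target constructor arguments shown are assumptions based on the attacks.target.Target reference above:

>>> from aisdc.attacks.target import Target
>>> target = Target(model=safeRFModel)
>>> safeRFModel.request_release(path="safe", ext="pkl", target=target)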

run_attack(target: Target | None = None, attack_name: str | None = None, output_dir: str = 'RES', report_name: str = 'undefined') → dict[source]

Runs a specified attack on the trained model and saves a report to file.

Parameters:
target : Target

The target in the form of a Target object.

attack_name : str

Name of the attack to run.

output_dir : str

Name of the directory in which to store the .json and .pdf output reports.

report_name : str

Name of the .json file in which to save the report.

Returns:
dict

Metadata results.

Notes

Currently implemented attack types are:
- Likelihood ratio: “lira”
- Worst-case membership inference: “worst_case”
- Single attribute inference: “attributes”
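
For instance, a worst-case membership inference attack could be run as follows (the report name is an arbitrary example):

>>> metadata = safeRFModel.run_attack(
...     target=target,
...     attack_name="worst_case",
...     output_dir="RES",
...     report_name="worst_case_report",
... )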

save(name: str = 'undefined') → None[source]

Writes model to file in appropriate format.

Note this is overloaded in SafeKerasClassifier to deal with TensorFlow specifics.

Parameters:
name : string

The name of the file to save.

Returns:
None.

Notes

The optimizer is deliberately excluded, to prevent the possibility of restarting training and thus opening a possible back door for attacks.
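
A usage sketch; it is assumed here, mirroring the ext parameter of request_release, that the on-disk format follows the file extension:

>>> safeRFModel.save(name="safe.sav")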

aisdc.safemodel.safemodel.check_equal(key: str, val: Any, cur_val: Any) → tuple[str, bool][source]

Checks equality value constraint.

Parameters:
key : string

The dictionary key to examine.

val : Any Type

The expected value of the key.

cur_val : Any Type

The current value of the key.

Returns:
msg : string

A message string.

disclosive : bool

A boolean value indicating whether the model is potentially disclosive.
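
A sketch of the (msg, disclosive) contract shared by the check_* helpers; the parameter name and values below are invented, and a mismatch is assumed to be flagged as disclosive:

>>> from aisdc.safemodel.safemodel import check_equal
>>> msg, disclosive = check_equal("bootstrap", True, False)
>>> disclosive
True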

aisdc.safemodel.safemodel.check_max(key: str, val: Any, cur_val: Any) → tuple[str, bool][source]

Checks maximum value constraint.

Parameters:
key : string

The dictionary key to examine.

val : Any Type

The expected value of the key.

cur_val : Any Type

The current value of the key.

Returns:
msg : string

A message string.

disclosive : bool

A boolean value indicating whether the model is potentially disclosive.

aisdc.safemodel.safemodel.check_min(key: str, val: Any, cur_val: Any) → tuple[str, bool][source]

Checks minimum value constraint.

Parameters:
key : string

The dictionary key to examine.

val : Any Type

The expected value of the key.

cur_val : Any Type

The current value of the key.

Returns:
msg : string

A message string.

disclosive : bool

A boolean value indicating whether the model is potentially disclosive.

aisdc.safemodel.safemodel.check_type(key: str, val: Any, cur_val: Any) → tuple[str, bool][source]

Checks the type of a value.

Parameters:
key : string

The dictionary key to examine.

val : Any Type

The expected value of the key.

cur_val : Any Type

The current value of the key.

Returns:
msg : string

A message string.

disclosive : bool

A boolean value indicating whether the model is potentially disclosive.