SafeKerasModel

An example Python notebook is available here.

Safekeras.py: Jim Smith, Andrew McCarty and Richard Preen, UWE, 2022.

class aisdc.safemodel.classifiers.safekeras.SafeKerasModel(*args, **kwargs)[source]

Privacy-protected wrapper around the tf.keras.Model class from TensorFlow 2.8. Pylint warnings about the number of instance attributes are disabled, as this class is necessarily complex.
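A minimal construction sketch, based on the standard Keras functional API rather than on code from this page; the layer shapes and names are purely illustrative assumptions:

import tensorflow as tf

from aisdc.safemodel.classifiers.safekeras import SafeKerasModel

# Define the network with the ordinary Keras functional API.
inputs = tf.keras.Input(shape=(32,))
hidden = tf.keras.layers.Dense(64, activation="relu")(inputs)
outputs = tf.keras.layers.Dense(2, activation="softmax")(hidden)

# Use the privacy-protected wrapper in place of tf.keras.Model.
model = SafeKerasModel(inputs=inputs, outputs=outputs, name="safe_net")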

Attributes:
activity_regularizer

Optional regularizer function for the output of this layer.

compute_dtype

The dtype of the layer’s computations.

distribute_reduction_method

The method employed to reduce per-replica values during training.

distribute_strategy

The tf.distribute.Strategy this model was created under.

dtype

The dtype of the layer weights.

dtype_policy

The dtype policy associated with this layer.

dynamic

Whether the layer is dynamic (eager-only); set in the constructor.

enable_tune_steps_per_execution
inbound_nodes

Return Functional API nodes upstream of this layer.

input

Retrieves the input tensor(s) of a layer.

input_mask

Retrieves the input mask tensor(s) of a layer.

input_shape

Retrieves the input shape(s) of a layer.

input_spec

InputSpec instance(s) describing the input format for this layer.

jit_compile

Specify whether to compile the model with XLA.

layers
losses

List of losses added using the add_loss() API.

metrics

Return metrics added using compile() or add_metric().

metrics_names

Returns the model’s display labels for all outputs.

name

Name of the layer (string), set in the constructor.

name_scope

Returns a tf.name_scope instance for this class.

non_trainable_variables

Sequence of non-trainable variables owned by this module and its submodules.

non_trainable_weights

List of all non-trainable weights tracked by this layer.

outbound_nodes

Return Functional API nodes downstream of this layer.

output

Retrieves the output tensor(s) of a layer.

output_mask

Retrieves the output mask tensor(s) of a layer.

output_shape

Retrieves the output shape(s) of a layer.

run_eagerly

Settable attribute indicating whether the model should run eagerly.

state_updates

Deprecated, do NOT use!

stateful
submodules

Sequence of all sub-modules.

supports_masking

Whether this layer supports computing a mask using compute_mask.

trainable
trainable_variables

Sequence of trainable variables owned by this module and its submodules.

trainable_weights

List of all trainable weights tracked by this layer.

updates
variable_dtype

Alias of Layer.dtype, the dtype of the weights.

variables

Returns the list of all layer variables/weights.

weights

Returns the list of all layer variables/weights.

Methods

add_loss(losses, **kwargs)

Add loss tensor(s), potentially dependent on layer inputs.

add_metric(value[, name])

Adds metric tensor to the layer.

add_update(updates)

Add update op(s), potentially dependent on layer inputs.

add_variable(*args, **kwargs)

Deprecated, do NOT use! Alias for add_weight.

add_weight([name, shape, dtype, ...])

Adds a new variable to the layer.

additional_checks(curr_separate, saved_separate)

Placeholder for additional post-hoc checks, e.g. for Keras; this version just checks that any lists have the same contents.

build(input_shape)

Builds the model based on input shapes received.

build_from_config(config)

Builds the layer's states with the supplied config dict.

call(inputs[, training, mask])

Calls the model on new inputs and returns the outputs as tensors.

check_epsilon(num_samples, batch_size, epochs)

Checks that the level of privacy guarantee is within recommended limits and produces feedback.

compile([optimizer, loss, metrics])

Replaces the optimiser with a DP variant if needed and creates the necessary DP params in the opt and loss dict, then calls tf compile.

compile_from_config(config)

Compiles the model with the information given in config.

compute_loss([x, y, y_pred, sample_weight])

Compute the total loss, validate it, and return it.

compute_mask(inputs[, mask])

Computes an output mask tensor.

compute_metrics(x, y, y_pred, sample_weight)

Update metric states and collect all metrics to be returned.

compute_output_shape(input_shape)

Computes the output shape of the layer.

compute_output_signature(input_signature)

Compute the output tensor signature of the layer based on the inputs.

count_params()

Count the total number of scalars composing the weights.

dp_epsilon_met(num_examples[, batch_size, ...])

Checks if epsilon is sufficient for differential privacy; provides feedback to the user if it is not.

evaluate([x, y, batch_size, verbose, ...])

Returns the loss value & metrics values for the model in test mode.

evaluate_generator(generator[, steps, ...])

Evaluates the model on a data generator.

examine_seperate_items(curr_vals, saved_vals)

Comparison of more complex structures: in the superclass we just check that these model-specific items exist in both the current and saved copies.

export(filepath)

Create a SavedModel artifact for inference (e.g. via TF-Serving).

finalize_state()

Finalizes the layer's state after updating layer weights.

fit(X, Y, validation_data, epochs, batch_size)

Overrides the tensorflow fit() method with some extra functionality: (i) records the number of samples for checking DP epsilon values.

fit_generator(generator[, steps_per_epoch, ...])

Fits the model on data yielded batch-by-batch by a Python generator.

from_config(config[, custom_objects])

Creates a layer from its config.

get_build_config()

Returns a dictionary with the layer's input shape.

get_compile_config()

Returns a serialized config with information for compiling the model.

get_config()

Returns the config of the Model.

get_current_and_saved_models()

Makes a copy of self.__dict__ and splits it into dicts for the current and saved versions.

get_input_at(node_index)

Retrieves the input tensor(s) of a layer at a given node.

get_input_mask_at(node_index)

Retrieves the input mask tensor(s) of a layer at a given node.

get_input_shape_at(node_index)

Retrieves the input shape(s) of a layer at a given node.

get_layer([name, index])

Retrieves a layer based on either its name (unique) or index.

get_metrics_result()

Returns the model's metrics values as a dict.

get_output_at(node_index)

Retrieves the output tensor(s) of a layer at a given node.

get_output_mask_at(node_index)

Retrieves the output mask tensor(s) of a layer at a given node.

get_output_shape_at(node_index)

Retrieves the output shape(s) of a layer at a given node.

get_params([deep])

Gets dictionary of parameter values restricted to those expected by base classifier.

get_weight_paths()

Retrieve all the variables and their paths for the model.

get_weights()

Retrieves the weights of the model.

load_own_variables(store)

Loads the state of the layer.

load_weights(filepath[, skip_mismatch, ...])

Loads all layer weights from saved files.

make_predict_function([force])

Creates a function that executes one step of inference.

make_test_function([force])

Creates a function that executes one step of evaluation.

make_train_function([force])

Creates a function that executes one step of training.

posthoc_check([verbose])

Checks whether the model should be considered unsafe: for example, it has been changed since fit() was last run, or does not meet the DP policy.

predict(x[, batch_size, verbose, steps, ...])

Generates output predictions for the input samples.

predict_generator(generator[, steps, ...])

Generates predictions for the input samples from a data generator.

predict_on_batch(x)

Returns predictions for a single batch of samples.

predict_step(data)

The logic for one inference step.

preliminary_check([verbose, apply_constraints])

Checks whether current model parameters violate the safe rules.

request_release(path, ext[, target])

Saves the model to the specified filename and creates a report for the TRE output checkers.

reset_metrics()

Resets the state of all the metrics in the model.

run_attack([target, attack_name, ...])

Runs a specified attack on the trained model and saves a report to file.

save([name])

Writes model to file in appropriate format.

save_own_variables(store)

Saves the state of the layer.

save_spec([dynamic_batch])

Returns the tf.TensorSpec of call args as a tuple (args, kwargs).

save_weights(filepath[, overwrite, ...])

Saves all layer weights.

set_weights(weights)

Sets the weights of the layer, from NumPy arrays.

summary([line_length, positions, print_fn, ...])

Prints a string summary of the network.

test_on_batch(x[, y, sample_weight, ...])

Test the model on a single batch of samples.

test_step(data)

The logic for one evaluation step.

to_json(**kwargs)

Returns a JSON string containing the network configuration.

to_yaml(**kwargs)

Returns a yaml string containing the network configuration.

train_on_batch(x[, y, sample_weight, ...])

Runs a single gradient update on a single batch of data.

train_step(data)

The logic for one training step.

with_name_scope(method)

Decorator to automatically enter the module name scope.

__call__

reset_states

check_epsilon(num_samples: int, batch_size: int, epochs: int) Tuple[bool, str][source]

Checks that the level of privacy guarantee is within recommended limits and produces feedback.
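For example, continuing the construction sketch near the top of this page; the sample counts below are illustrative assumptions:

# Returns a (bool, str) pair: whether the privacy guarantee is within
# recommended limits, plus a human-readable message.
ok, msg = model.check_epsilon(num_samples=50_000, batch_size=32, epochs=10)
print(ok, msg)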

compile(optimizer=None, loss='categorical_crossentropy', metrics=['accuracy'])[source]

Replaces the optimizer with a DP variant if needed, creates the necessary DP parameters in the optimizer and loss dicts, then calls the TensorFlow compile(). None is allowed as the default value for the optimizer parameter because it is dealt with explicitly.
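A sketch of the documented call, again continuing the construction example; passing optimizer=None lets the wrapper substitute a DP variant:

model.compile(
    optimizer=None,  # replaced with a DP variant by the wrapper
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)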

dp_epsilon_met(num_examples: int, batch_size: int = 0, epochs: int = 0) Tuple[bool, str][source]

Checks if epsilon is sufficient for differential privacy; provides feedback to the user if it is not.

fit(X: Any, Y: Any, validation_data: Any, epochs: int, batch_size: int, refine_epsilon: bool = False) Any[source]

Overrides the tensorflow fit() method with some extra functionality: (i) records the number of samples for checking DP epsilon values; (ii) does an automatic epsilon check and reports the result; (iia) if the user sets refine_epsilon=True, returns without fitting the model; (iii) then calls the tensorflow fit() function; (iv) finally makes a saved copy of the newly fitted model.
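A training-call sketch under the documented signature, continuing the construction and compile examples above. The data is random and purely illustrative, and treating validation_data as a (features, labels) tuple is an assumption carried over from standard Keras usage:

import numpy as np

rng = np.random.default_rng(42)
X_train = rng.random((1000, 32))
y_train = tf.keras.utils.to_categorical(rng.integers(0, 2, 1000), 2)
X_val = rng.random((200, 32))
y_val = tf.keras.utils.to_categorical(rng.integers(0, 2, 200), 2)

# With refine_epsilon=True the call only performs the epsilon check and
# returns without fitting; leave it False (the default) to train.
model.fit(
    X_train,
    y_train,
    validation_data=(X_val, y_val),
    epochs=10,
    batch_size=32,
    refine_epsilon=False,
)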

posthoc_check(verbose: bool = True) Tuple[str, bool][source]

Checks whether the model should be considered unsafe: for example, it has been changed since fit() was last run, or does not meet the DP policy.
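For example, with the return order taken from the documented Tuple[str, bool] signature:

# msg is a report string; unsafe is True if the model fails the checks.
msg, unsafe = model.posthoc_check()
if unsafe:
    print(msg)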

save(name: str = 'undefined') None[source]

Writes model to file in appropriate format.

Parameters:
name : string

The name of the file to save.

Returns:

Notes

No return value.

The optimizer is deliberately excluded, to prevent the possibility of restarting training and thus a potential back door for attacks.
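A sketch of a save call; the file extension shown here is an assumption, not taken from this page:

# Writes the architecture and weights only; the optimizer is never saved.
model.save("safe_net.tf")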

aisdc.safemodel.classifiers.safekeras.check_DP_used(optimizer) Tuple[bool, str][source]

Checks whether the DP optimizer was actually the one used.

aisdc.safemodel.classifiers.safekeras.check_checkpoint_equality(v1: str, v2: str) Tuple[bool, str][source]

Compares two checkpoints saved with tensorflow save_model. On the assumption that the optimizer is not going to be saved, and that the model is going to be saved in frozen form, this only checks the architecture and weights layer by layer.

aisdc.safemodel.classifiers.safekeras.check_optimizer_allowed(optimizer) Tuple[bool, str][source]

Checks if the model's optimizer is in our white-list; the default setting is not allowed.

aisdc.safemodel.classifiers.safekeras.check_optimizer_is_DP(optimizer) Tuple[bool, str][source]

Checks whether optimizer is one of tensorflow’s DP versions.
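A sketch of how the three optimizer checks might be called on a compiled model; passing model.optimizer is an assumption, and each helper returns a (bool, str) verdict-and-message pair:

from aisdc.safemodel.classifiers import safekeras

allowed, msg = safekeras.check_optimizer_allowed(model.optimizer)
is_dp, msg = safekeras.check_optimizer_is_DP(model.optimizer)
dp_used, msg = safekeras.check_DP_used(model.optimizer)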

aisdc.safemodel.classifiers.safekeras.load_safe_keras_model(name: str = 'undefined') Tuple[bool, Any][source]

Reads a model from file in the appropriate format. The optimizer is deliberately excluded in the save, to prevent the possibility of restarting training, which could offer a back door for attacks. Thus the optimizer cannot be loaded.
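A reload sketch using the file name from the save example above; interpreting the Tuple[bool, Any] return as (success flag, model) is an assumption:

from aisdc.safemodel.classifiers.safekeras import load_safe_keras_model

# The reloaded model has no optimizer, so training cannot simply resume.
ok, reloaded = load_safe_keras_model("safe_net.tf")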

aisdc.safemodel.classifiers.safekeras.same_configs(m1: Any, m2: Any) Tuple[bool, str][source]

Checks if two models have the same architecture.

aisdc.safemodel.classifiers.safekeras.same_weights(m1: Any, m2: Any) Tuple[bool, str][source]

Checks if two nets with the same architecture have the same weights.
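For example, comparing the original and reloaded models from the sketches above; each helper returns a (bool, str) pair:

from aisdc.safemodel.classifiers.safekeras import same_configs, same_weights

configs_match, msg = same_configs(model, reloaded)
weights_match, msg = same_weights(model, reloaded)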