SafeKerasModel

An example Python notebook is available here.

Privacy protected Keras model.

class sacroml.safemodel.classifiers.safekeras.SafeKerasModel(*args, **kwargs)[source]

Privacy-protected wrapper around the tf.keras.Model class from TensorFlow 2.8.
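
A minimal usage sketch is shown below, assuming a Functional-style construction mirroring tf.keras.Model; the layer sizes, constructor keywords, and the data variables X, y, X_val and y_val are illustrative placeholders rather than requirements of the package:

```python
# A minimal sketch; layer sizes and constructor keywords are illustrative
# assumptions, and X, y, X_val, y_val are placeholder NumPy arrays.
import tensorflow as tf
from sacroml.safemodel.classifiers.safekeras import SafeKerasModel

inputs = tf.keras.Input(shape=(10,))
hidden = tf.keras.layers.Dense(32, activation="relu")(inputs)
outputs = tf.keras.layers.Dense(2, activation="softmax")(hidden)

model = SafeKerasModel(inputs=inputs, outputs=outputs, name="safe_example")
model.compile(optimizer=None, loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, validation_data=(X_val, y_val), epochs=10, batch_size=32)
msg, disclosive = model.posthoc_check()
```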

Attributes:
activity_regularizer

Optional regularizer function for the output of this layer.

compute_dtype

The dtype of the layer’s computations.

distribute_reduction_method

The method employed to reduce per-replica values during training.

distribute_strategy

The tf.distribute.Strategy this model was created under.

dtype

The dtype of the layer weights.

dtype_policy

The dtype policy associated with this layer.

dynamic

Whether the layer is dynamic (eager-only); set in the constructor.

enable_tune_steps_per_execution
inbound_nodes

Return Functional API nodes upstream of this layer.

input

Retrieves the input tensor(s) of a layer.

input_mask

Retrieves the input mask tensor(s) of a layer.

input_shape

Retrieves the input shape(s) of a layer.

input_spec

InputSpec instance(s) describing the input format for this layer.

jit_compile

Specify whether to compile the model with XLA.

layers
losses

List of losses added using the add_loss() API.

metrics

Return metrics added using compile() or add_metric().

metrics_names

Returns the model’s display labels for all outputs.

name

Name of the layer (string), set in the constructor.

name_scope

Returns a tf.name_scope instance for this class.

non_trainable_variables

Sequence of non-trainable variables owned by this module and its submodules.

non_trainable_weights

List of all non-trainable weights tracked by this layer.

outbound_nodes

Return Functional API nodes downstream of this layer.

output

Retrieves the output tensor(s) of a layer.

output_mask

Retrieves the output mask tensor(s) of a layer.

output_shape

Retrieves the output shape(s) of a layer.

run_eagerly

Settable attribute indicating whether the model should run eagerly.

state_updates

Deprecated, do NOT use!

stateful
submodules

Sequence of all sub-modules.

supports_masking

Whether this layer supports computing a mask using compute_mask.

trainable
trainable_variables

Sequence of trainable variables owned by this module and its submodules.

trainable_weights

List of all trainable weights tracked by this layer.

updates
variable_dtype

Alias of Layer.dtype, the dtype of the weights.

variables

Returns the list of all layer variables/weights.

weights

Returns the list of all layer variables/weights.

Methods

add_loss(losses, **kwargs)

Add loss tensor(s), potentially dependent on layer inputs.

add_metric(value[, name])

Adds metric tensor to the layer.

add_update(updates)

Add update op(s), potentially dependent on layer inputs.

add_variable(*args, **kwargs)

Deprecated, do NOT use! Alias for add_weight.

add_weight([name, shape, dtype, ...])

Adds a new variable to the layer.

additional_checks(curr_separate, saved_separate)

Perform additional posthoc checks.

build(input_shape)

Builds the model based on input shapes received.

build_from_config(config)

Builds the layer's states with the supplied config dict.

call(inputs[, training, mask])

Calls the model on new inputs and returns the outputs as tensors.

check_epsilon(num_samples, batch_size, epochs)

Check if the level of privacy guarantee is within recommended limits.

compile([optimizer, loss, metrics])

Compile the safe Keras model.

compile_from_config(config)

Compiles the model with the information given in config.

compute_loss([x, y, y_pred, sample_weight])

Compute the total loss, validate it, and return it.

compute_mask(inputs[, mask])

Computes an output mask tensor.

compute_metrics(x, y, y_pred, sample_weight)

Update metric states and collect all metrics to be returned.

compute_output_shape(input_shape)

Computes the output shape of the layer.

compute_output_signature(input_signature)

Compute the output tensor signature of the layer based on the inputs.

count_params()

Count the total number of scalars composing the weights.

dp_epsilon_met(num_examples[, batch_size, ...])

Check if epsilon is sufficient for Differential Privacy.

evaluate([x, y, batch_size, verbose, ...])

Returns the loss value & metrics values for the model in test mode.

evaluate_generator(generator[, steps, ...])

Evaluates the model on a data generator.

examine_seperate_items(curr_vals, saved_vals)

Check model-specific items exist in both current and saved copies.

export(filepath)

Create a SavedModel artifact for inference (e.g. via TF-Serving).

finalize_state()

Finalizes the layers state after updating layer weights.

fit(X, y, validation_data, epochs, batch_size)

Fit a safe Keras model.

fit_generator(generator[, steps_per_epoch, ...])

Fits the model on data yielded batch-by-batch by a Python generator.

from_config(config[, custom_objects])

Creates a layer from its config.

get_build_config()

Returns a dictionary with the layer's input shape.

get_compile_config()

Returns a serialized config with information for compiling the model.

get_config()

Returns the config of the Model.

get_current_and_saved_models()

Copy self.__dict__ and split into dicts for current and saved versions.

get_input_at(node_index)

Retrieves the input tensor(s) of a layer at a given node.

get_input_mask_at(node_index)

Retrieves the input mask tensor(s) of a layer at a given node.

get_input_shape_at(node_index)

Retrieves the input shape(s) of a layer at a given node.

get_layer([name, index])

Retrieves a layer based on either its name (unique) or index.

get_metrics_result()

Returns the model's metrics values as a dict.

get_output_at(node_index)

Retrieves the output tensor(s) of a layer at a given node.

get_output_mask_at(node_index)

Retrieves the output mask tensor(s) of a layer at a given node.

get_output_shape_at(node_index)

Retrieves the output shape(s) of a layer at a given node.

get_params([deep])

Get a dictionary of parameter values restricted to those expected.

get_weight_paths()

Retrieve all the variables and their paths for the model.

get_weights()

Retrieves the weights of the model.

load_own_variables(store)

Loads the state of the layer.

load_weights(filepath[, skip_mismatch, ...])

Loads all layer weights from saved files.

make_predict_function([force])

Creates a function that executes one step of inference.

make_test_function([force])

Creates a function that executes one step of evaluation.

make_train_function([force])

Creates a function that executes one step of training.

posthoc_check([verbose])

Check whether the model should be considered unsafe.

predict(x[, batch_size, verbose, steps, ...])

Generates output predictions for the input samples.

predict_generator(generator[, steps, ...])

Generates predictions for the input samples from a data generator.

predict_on_batch(x)

Returns predictions for a single batch of samples.

predict_step(data)

The logic for one inference step.

preliminary_check([verbose, apply_constraints])

Check whether current model parameters violate the safe rules.

request_release(path, ext[, target])

Save model and create a report for the TRE output checkers.

reset_metrics()

Resets the state of all the metrics in the model.

run_attack(target, attack_name[, output_dir])

Run a specified attack on the trained model and save report to file.

save([name])

Write model to file in appropriate format.

save_own_variables(store)

Saves the state of the layer.

save_spec([dynamic_batch])

Returns the tf.TensorSpec of call args as a tuple (args, kwargs).

save_weights(filepath[, overwrite, ...])

Saves all layer weights.

set_weights(weights)

Sets the weights of the layer, from NumPy arrays.

summary([line_length, positions, print_fn, ...])

Prints a string summary of the network.

test_on_batch(x[, y, sample_weight, ...])

Test the model on a single batch of samples.

test_step(data)

The logic for one evaluation step.

to_json(**kwargs)

Returns a JSON string containing the network configuration.

to_yaml(**kwargs)

Returns a yaml string containing the network configuration.

train_on_batch(x[, y, sample_weight, ...])

Runs a single gradient update on a single batch of data.

train_step(data)

The logic for one training step.

with_name_scope(method)

Decorator to automatically enter the module name scope.

__call__

reset_states

__init__(*args: Any, **kwargs: Any) → None[source]

Create model and apply constraints to params.

add_loss(losses, **kwargs)

Add loss tensor(s), potentially dependent on layer inputs.

Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.losses may be dependent on a and some on b. This method automatically keeps track of dependencies.

This method can be used inside a subclassed layer or model’s call function, in which case losses should be a Tensor or list of Tensors.

Example:

```python
class MyLayer(tf.keras.layers.Layer):
    def call(self, inputs):
        self.add_loss(tf.abs(tf.reduce_mean(inputs)))
        return inputs
```

The same code works in distributed training: the input to add_loss() is treated like a regularization loss and averaged across replicas by the training loop (both built-in Model.fit() and compliant custom training loops).

The add_loss method can also be called directly on a Functional Model during construction. In this case, any loss Tensors passed to this Model must be symbolic and be able to be traced back to the model's `Input`s. These losses become part of the model's topology and are tracked in `get_config`.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Activity regularization.
model.add_loss(tf.abs(tf.reduce_mean(x)))
```

If this is not the case for your loss (if, for example, your loss references a Variable of one of the model’s layers), you can wrap your loss in a zero-argument lambda. These losses are not tracked as part of the model’s topology since they can’t be serialized.

Example:

```python
inputs = tf.keras.Input(shape=(10,))
d = tf.keras.layers.Dense(10)
x = d(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
# Weight regularization.
model.add_loss(lambda: tf.reduce_mean(d.kernel))
```

Args:
    losses: Loss tensor, or list/tuple of tensors. Rather than tensors, losses may also be zero-argument callables which create a loss tensor.
    **kwargs: Used for backwards compatibility only.

add_metric(value, name=None, **kwargs)

Adds metric tensor to the layer.

This method can be used inside the call() method of a subclassed layer or model.

```python
class MyMetricLayer(tf.keras.layers.Layer):
    def __init__(self):
        super(MyMetricLayer, self).__init__(name='my_metric_layer')
        self.mean = tf.keras.metrics.Mean(name='metric_1')

    def call(self, inputs):
        self.add_metric(self.mean(inputs))
        self.add_metric(tf.reduce_sum(inputs), name='metric_2')
        return inputs
```

This method can also be called directly on a Functional Model during construction. In this case, any tensor passed to this Model must be symbolic and be able to be traced back to the model's `Input`s. These metrics become part of the model's topology and are tracked when you save the model via `save()`.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(math_ops.reduce_sum(x), name='metric_1')
```

Note: Calling add_metric() with the result of a metric object on a Functional Model, as shown in the example below, is not supported. This is because we cannot trace the metric result tensor back to the model’s inputs.

```python
inputs = tf.keras.Input(shape=(10,))
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.add_metric(tf.keras.metrics.Mean()(x), name='metric_1')
```

Args:
    value: Metric tensor.
    name: String metric name.
    **kwargs: Additional keyword arguments for backward compatibility. Accepted values: aggregation - when the value tensor provided is not the result of calling a keras.Metric instance, it will be aggregated by default using a keras.Metric.Mean.

add_update(updates)

Add update op(s), potentially dependent on layer inputs.

Weight updates (for instance, the updates of the moving mean and variance in a BatchNormalization layer) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.updates may be dependent on a and some on b. This method automatically keeps track of dependencies.

This call is ignored when eager execution is enabled (in that case, variable updates are run on the fly and thus do not need to be tracked for later execution).

Args:
    updates: Update op, or list/tuple of update ops, or zero-arg callable that returns an update op. A zero-arg callable should be passed in order to disable running the updates by setting trainable=False on this Layer, when executing in Eager mode.

add_variable(*args, **kwargs)

Deprecated, do NOT use! Alias for add_weight.

add_weight(name=None, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, constraint=None, use_resource=None, synchronization=VariableSynchronization.AUTO, aggregation=VariableAggregationV2.NONE, **kwargs)

Adds a new variable to the layer.

Args:
    name: Variable name.
    shape: Variable shape. Defaults to scalar if unspecified.
    dtype: The type of the variable. Defaults to self.dtype.
    initializer: Initializer instance (callable).
    regularizer: Regularizer instance (callable).
    trainable: Boolean, whether the variable should be part of the layer's "trainable_variables" (e.g. variables, biases) or "non_trainable_variables" (e.g. BatchNorm mean and variance). Note that trainable cannot be True if synchronization is set to ON_READ.
    constraint: Constraint instance (callable).
    use_resource: Whether to use a ResourceVariable or not.
    synchronization: Indicates when a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize. If synchronization is set to ON_READ, trainable must not be set to True.
    aggregation: Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation.
    **kwargs: Additional keyword arguments. Accepted values are getter, collections, experimental_autocast and caching_device.

Returns:

The variable created.

Raises:
    ValueError: When giving unsupported dtype and no initializer, or when trainable has been set to True with synchronization set as ON_READ.

additional_checks(curr_separate: dict, saved_separate: dict) → tuple[str, bool]

Perform additional posthoc checks.

Placeholder function for additional posthoc checks e.g. keras. This version just checks that any lists have the same contents.

Parameters:
    curr_separate : dict
    saved_separate : dict

Returns:
    msg : string
        A message string.
    disclosive : bool
        A boolean value to indicate whether the model is potentially disclosive.

Notes

Posthoc checking makes sure that the two dicts have the same set of keys as defined in the list self.examine_separately.

build(input_shape)

Builds the model based on input shapes received.

This is to be used for subclassed models, which do not know at instantiation time what their inputs look like.

This method only exists for users who want to call model.build() in a standalone way (as a substitute for calling the model on real data to build it). It will never be called by the framework (and thus it will never throw unexpected errors in an unrelated workflow).

Args:
    input_shape: Single tuple, TensorShape instance, or list/dict of shapes, where shapes are tuples, integers, or TensorShape instances.

Raises:
ValueError:
  1. In case of invalid user-provided data (not of type tuple, list, TensorShape, or dict).

  2. If the model requires call arguments that are agnostic to the input shapes (positional or keyword arg in call signature).

  3. If not all layers were properly built.

  4. If float type inputs are not supported within the layers.

In each of these cases, the user should build their model by calling it on real tensor data.

build_from_config(config)

Builds the layer’s states with the supplied config dict.

By default, this method calls the build(config["input_shape"]) method, which creates weights based on the layer's input shape in the supplied config. If your config contains other information needed to load the layer's state, you should override this method.

Args:

config: Dict containing the input shape associated with this layer.

call(inputs, training=None, mask=None)

Calls the model on new inputs and returns the outputs as tensors.

In this case call() just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).

Note: This method should not be called directly. It is only meant to be overridden when subclassing tf.keras.Model. To call a model on an input, always use the __call__() method, i.e. model(inputs), which relies on the underlying call() method.

Args:
    inputs: Input tensor, or dict/list/tuple of input tensors.
    training: Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.
    mask: A mask or list of masks. A mask can be either a boolean tensor or None (no mask). For more details, check the guide [here](https://www.tensorflow.org/guide/keras/masking_and_padding).

Returns:

A tensor if there is a single output, or a list of tensors if there is more than one output.

check_epsilon(num_samples: int, batch_size: int, epochs: int) → tuple[bool, str][source]

Check if the level of privacy guarantee is within recommended limits.
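
For example, a check for a hypothetical training configuration (the sample count, batch size and epoch values below are illustrative only) might look like:

```python
# A minimal sketch; the numbers are illustrative assumptions.
ok, msg = model.check_epsilon(num_samples=10_000, batch_size=32, epochs=10)
print(msg)  # reports whether the DP epsilon is within recommended limits
```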

compile(optimizer=None, loss='categorical_crossentropy', metrics=None)[source]

Compile the safe Keras model.

Replaces the optimizer with a DP variant if needed, creates the necessary DP parameters in the optimizer and loss dicts, then calls the TensorFlow compile(). None is allowed as the default value for the optimizer parameter because it is handled explicitly.
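
A hedged sketch of a typical call; passing optimizer=None leaves optimizer selection to the wrapper, and the metric name is an illustrative assumption:

```python
# A minimal sketch; the wrapper replaces the optimizer with a DP variant
# where needed before delegating to the TensorFlow compile().
model.compile(optimizer=None,
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```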

compile_from_config(config)

Compiles the model with the information given in config.

This method uses the information in the config (optimizer, loss, metrics, etc.) to compile the model.

Args:

config: Dict containing information for compiling the model.

compute_loss(x=None, y=None, y_pred=None, sample_weight=None)

Compute the total loss, validate it, and return it.

Subclasses can optionally override this method to provide custom loss computation logic.

Example:

```python
class MyModel(tf.keras.Model):

    def __init__(self, *args, **kwargs):
        super(MyModel, self).__init__(*args, **kwargs)
        self.loss_tracker = tf.keras.metrics.Mean(name='loss')

    def compute_loss(self, x, y, y_pred, sample_weight):
        loss = tf.reduce_mean(tf.math.squared_difference(y_pred, y))
        loss += tf.add_n(self.losses)
        self.loss_tracker.update_state(loss)
        return loss

    def reset_metrics(self):
        self.loss_tracker.reset_states()

    @property
    def metrics(self):
        return [self.loss_tracker]

tensors = tf.random.uniform((10, 10)), tf.random.uniform((10,))
dataset = tf.data.Dataset.from_tensor_slices(tensors).repeat().batch(1)

inputs = tf.keras.layers.Input(shape=(10,), name='my_input')
outputs = tf.keras.layers.Dense(10)(inputs)
model = MyModel(inputs, outputs)
model.add_loss(tf.reduce_sum(outputs))

optimizer = tf.keras.optimizers.SGD()
model.compile(optimizer, loss='mse', steps_per_execution=10)
model.fit(dataset, epochs=2, steps_per_epoch=10)
print('My custom loss: ', model.loss_tracker.result().numpy())
```

Args:
    x: Input data.
    y: Target data.
    y_pred: Predictions returned by the model (output of model(x)).
    sample_weight: Sample weights for weighting the loss function.

Returns:

The total loss as a tf.Tensor, or None if no loss results (which is the case when called by Model.test_step).

compute_mask(inputs, mask=None)

Computes an output mask tensor.

Args:
    inputs: Tensor or list of tensors.
    mask: Tensor or list of tensors.

Returns:
    None or a tensor (or list of tensors, one per output tensor of the layer).

compute_metrics(x, y, y_pred, sample_weight)

Update metric states and collect all metrics to be returned.

Subclasses can optionally override this method to provide custom metric updating and collection logic.

Example:

```python
class MyModel(tf.keras.Sequential):

    def compute_metrics(self, x, y, y_pred, sample_weight):
        # This super call updates self.compiled_metrics and returns
        # results for all metrics listed in self.metrics.
        metric_results = super(MyModel, self).compute_metrics(
            x, y, y_pred, sample_weight)

        # Note that self.custom_metric is not listed in self.metrics.
        self.custom_metric.update_state(x, y, y_pred, sample_weight)
        metric_results['custom_metric_name'] = self.custom_metric.result()
        return metric_results
```

Args:
    x: Input data.
    y: Target data.
    y_pred: Predictions returned by the model (output of model.call(x)).
    sample_weight: Sample weights for weighting the loss function.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end(). Typically, the values of the metrics listed in self.metrics are returned. Example: {'loss': 0.2, 'accuracy': 0.7}.

compute_output_shape(input_shape)

Computes the output shape of the layer.

This method will cause the layer’s state to be built, if that has not happened before. This requires that the layer will later be used with inputs that match the input shape provided here.

Args:
    input_shape: Shape tuple (tuple of integers) or tf.TensorShape, or structure of shape tuples / tf.TensorShape instances (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.

Returns:

A tf.TensorShape instance or structure of tf.TensorShape instances.

compute_output_signature(input_signature)

Compute the output tensor signature of the layer based on the inputs.

Unlike a TensorShape object, a TensorSpec object contains both shape and dtype information for a tensor. This method allows layers to provide output dtype information if it is different from the input dtype. For any layer that doesn’t implement this function, the framework will fall back to use compute_output_shape, and will assume that the output dtype matches the input dtype.

Args:
    input_signature: Single TensorSpec or nested structure of TensorSpec objects, describing a candidate input for the layer.

Returns:
    Single TensorSpec or nested structure of TensorSpec objects, describing how the layer would transform the provided input.

Raises:

TypeError: If input_signature contains a non-TensorSpec object.

count_params()

Count the total number of scalars composing the weights.

Returns:

An integer count.

Raises:
    ValueError: if the layer isn't yet built (in which case its weights aren't yet defined).

dp_epsilon_met(num_examples: int, batch_size: int = 0, epochs: int = 0) → tuple[bool, str][source]

Check if epsilon is sufficient for Differential Privacy.

Provides feedback to user if epsilon is not sufficient.
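
For example (the numbers are illustrative only):

```python
# A minimal sketch; the numbers are illustrative assumptions.
met, msg = model.dp_epsilon_met(num_examples=10_000, batch_size=32, epochs=10)
if not met:
    print(msg)  # feedback explaining why epsilon is insufficient
```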

evaluate(x=None, y=None, batch_size=None, verbose='auto', sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, return_dict=False, **kwargs)

Returns the loss value & metrics values for the model in test mode.

Computation is done in batches (see the batch_size arg.)

Args:
    x: Input data. It could be:
      • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
      • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
      • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.
      • A tf.data dataset. Should return a tuple of either (inputs, targets) or (inputs, targets, sample_weights).
      • A generator or keras.utils.Sequence returning (inputs, targets) or (inputs, targets, sample_weights).
      A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.
    y: Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely). If x is a dataset, generator or keras.utils.Sequence instance, y should not be specified (since targets will be obtained from the iterator/dataset).
    batch_size: Integer or None. Number of samples per batch of computation. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of a dataset, generators, or keras.utils.Sequence instances (since they generate batches).
    verbose: "auto", 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = single line. "auto" becomes 1 for most cases, and 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to "auto".
    sample_weight: Optional Numpy array of weights for the test samples, used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample. This argument is not supported when x is a dataset; instead pass sample weights as the third element of x.
    steps: Integer or None. Total number of steps (batches of samples) before declaring the evaluation round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, evaluate() will run until the dataset is exhausted. This argument is not supported with array inputs.
    callbacks: List of keras.callbacks.Callback instances to apply during evaluation. See [callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).
    max_queue_size: Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.
    workers: Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.
    use_multiprocessing: Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can't be passed easily to children processes.
    return_dict: If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list.
    **kwargs: Unused at this time.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.evaluate is wrapped in a tf.function.

evaluate_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)

Evaluates the model on a data generator.

DEPRECATED:

Model.evaluate now supports generators, so there is no longer any need to use this endpoint.

examine_seperate_items(curr_vals: dict, saved_vals: dict) → tuple[str, bool]

Check model-specific items exist in both current and saved copies.

export(filepath)

Create a SavedModel artifact for inference (e.g. via TF-Serving).

This method lets you export a model to a lightweight SavedModel artifact that contains the model’s forward pass only (its call() method) and can be served via e.g. TF-Serving. The forward pass is registered under the name serve() (see example below).

The original code of the model (including any custom layers you may have used) is no longer necessary to reload the artifact – it is entirely standalone.

Args:
    filepath: str or pathlib.Path object. Path where to save the artifact.

Example:

```python
# Create the artifact
model.export("path/to/location")

# Later, in a different process / environment...
reloaded_artifact = tf.saved_model.load("path/to/location")
predictions = reloaded_artifact.serve(input_data)
```

If you would like to customize your serving endpoints, you can use the lower-level keras.export.ExportArchive class. The export() method relies on ExportArchive internally.

finalize_state()

Finalizes the layers state after updating layer weights.

This function can be subclassed in a layer and will be called after updating a layer weights. It can be overridden to finalize any additional layer state after a weight update.

This function will be called after weights of a layer have been restored from a loaded model.

fit(X: Any, y: Any, validation_data: Any, epochs: int, batch_size: int, refine_epsilon: bool = False) → Any[source]

Fit a safe Keras model.

Overrides the tensorflow fit() method with some extra functionality: (i) records the number of samples for checking DP epsilon values; (ii) does an automatic epsilon check and reports; (iia) if the user sets refine_epsilon=True, returns without fitting the model; (iii) then calls the tensorflow fit() function; (iv) finally makes a saved copy of the newly fitted model.
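
A hedged sketch of both modes; X, y, X_val and y_val are placeholder NumPy arrays:

```python
# Normal training run; DP epsilon is checked and reported automatically.
model.fit(X, y, validation_data=(X_val, y_val), epochs=10, batch_size=32)

# With refine_epsilon=True the method only performs the epsilon check and
# returns without fitting, which allows batch_size/epochs to be tuned first.
model.fit(X, y, validation_data=(X_val, y_val),
          epochs=10, batch_size=32, refine_epsilon=True)
```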

fit_generator(generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, validation_freq=1, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0)

Fits the model on data yielded batch-by-batch by a Python generator.

DEPRECATED:

Model.fit now supports generators, so there is no longer any need to use this endpoint.

classmethod from_config(config, custom_objects=None)

Creates a layer from its config.

This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).

Args:
    config: A Python dictionary, typically the output of get_config.

Returns:

A layer instance.

get_build_config()

Returns a dictionary with the layer’s input shape.

This method returns a config dict that can be used by build_from_config(config) to create all states (e.g. Variables and Lookup tables) needed by the layer.

By default, the config only contains the input shape that the layer was built with. If you’re writing a custom layer that creates state in an unusual way, you should override this method to make sure this state is already created when Keras attempts to load its value upon model loading.

Returns:

A dict containing the input shape associated with the layer.

get_compile_config()

Returns a serialized config with information for compiling the model.

This method returns a config dictionary containing all the information (optimizer, loss, metrics, etc.) with which the model was compiled.

Returns:

A dict containing information for compiling the model.

get_config()

Returns the config of the Model.

Config is a Python dictionary (serializable) containing the configuration of an object, which in this case is a Model. This allows the Model to be be reinstantiated later (without its trained weights) from this configuration.

Note that get_config() does not guarantee to return a fresh copy of dict every time it is called. The callers should make a copy of the returned dict if they want to modify it.

Developers of subclassed Model are advised to override this method, and continue to update the dict from super(MyModel, self).get_config() to provide the proper configuration of this Model. The default config will return config dict for init parameters if they are basic types. Raises NotImplementedError when in cases where a custom get_config() implementation is required for the subclassed model.

Returns:

Python dictionary containing the configuration of this Model.

get_current_and_saved_models() → tuple[dict, dict]

Copy self.__dict__ and split into dicts for current and saved versions.

get_input_at(node_index)

Retrieves the input tensor(s) of a layer at a given node.

Args:
    node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first input node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_input_mask_at(node_index)

Retrieves the input mask tensor(s) of a layer at a given node.

Args:
    node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple inputs).

get_input_shape_at(node_index)

Retrieves the input shape(s) of a layer at a given node.

Args:
    node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple inputs).

Raises:

RuntimeError: If called in Eager mode.

get_layer(name=None, index=None)

Retrieves a layer based on either its name (unique) or index.

If name and index are both provided, index will take precedence. Indices are based on order of horizontal graph traversal (bottom-up).

Args:

name: String, name of layer.
index: Integer, index of layer.

Returns:

A layer instance.

get_metrics_result()

Returns the model’s metrics values as a dict.

If any of the metric result is a dict (containing multiple metrics), each of them gets added to the top level returned dict of this method.

Returns:

A dict containing values of the metrics listed in self.metrics. Example: {'loss': 0.2, 'accuracy': 0.7}.

get_output_at(node_index)

Retrieves the output tensor(s) of a layer at a given node.

Args:
    node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first output node of the layer.

Returns:

A tensor (or list of tensors if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_output_mask_at(node_index)

Retrieves the output mask tensor(s) of a layer at a given node.

Args:
    node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A mask tensor (or list of tensors if the layer has multiple outputs).

get_output_shape_at(node_index)

Retrieves the output shape(s) of a layer at a given node.

Args:
    node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:

A shape tuple (or list of shape tuples if the layer has multiple outputs).

Raises:

RuntimeError: If called in Eager mode.

get_params(deep: bool = True) → dict

Get a dictionary of parameter values restricted to those expected.

get_weight_paths()

Retrieve all the variables and their paths for the model.

The variable path (string) is a stable key to identify a tf.Variable instance owned by the model. It can be used to specify variable-specific configurations (e.g. DTensor, quantization) from a global view.

This method returns a dict with weight object paths as keys and the corresponding tf.Variable instances as values.

Note that if the model is a subclassed model and the weights haven’t been initialized, an empty dict will be returned.

Returns:
    A dict where keys are variable paths and values are tf.Variable instances.

Example:

```python
class SubclassModel(tf.keras.Model):

    def __init__(self, name=None):
        super().__init__(name=name)
        self.d1 = tf.keras.layers.Dense(10)
        self.d2 = tf.keras.layers.Dense(20)

    def call(self, inputs):
        x = self.d1(inputs)
        return self.d2(x)

model = SubclassModel()
model(tf.zeros((10, 10)))
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': model.d1.kernel,
#    'd1.bias': model.d1.bias,
#    'd2.kernel': model.d2.kernel,
#    'd2.bias': model.d2.bias,
# }

# Functional model
inputs = tf.keras.Input((10,), batch_size=10)
x = tf.keras.layers.Dense(20, name='d1')(inputs)
output = tf.keras.layers.Dense(30, name='d2')(x)
model = tf.keras.Model(inputs, output)
d1 = model.layers[1]
d2 = model.layers[2]
weight_paths = model.get_weight_paths()
# weight_paths:
# {
#    'd1.kernel': d1.kernel,
#    'd1.bias': d1.bias,
#    'd2.kernel': d2.kernel,
#    'd2.bias': d2.bias,
# }
```

get_weights()

Retrieves the weights of the model.

Returns:

A flat list of Numpy arrays.

load_own_variables(store)

Loads the state of the layer.

You can override this method to take full control of how the state of the layer is loaded upon calling keras.models.load_model().

Args:

store: Dict from which the state of the model will be loaded.

load_weights(filepath, skip_mismatch=False, by_name=False, options=None)

Loads all layer weights from saved files.

The saved file could be a SavedModel file, a .keras file (v3 saving format), or a file created via model.save_weights().

By default, weights are loaded based on the network’s topology. This means the architecture should be the same as when the weights were saved. Note that layers that don’t have weights are not taken into account in the topological ordering, so adding or removing layers is fine as long as they don’t have weights.

Partial weight loading

If you have modified your model, for instance by adding a new layer (with weights) or by changing the shape of the weights of a layer, you can choose to ignore errors and continue loading by setting skip_mismatch=True. In this case any layer with mismatching weights will be skipped. A warning will be displayed for each skipped layer.

Weight loading by name

If your weights are saved as a .h5 file created via model.save_weights(), you can use the argument by_name=True.

In this case, weights are loaded into layers only if they share the same name. This is useful for fine-tuning or transfer-learning models where some of the layers have changed.

Note that only topological loading (by_name=False) is supported when loading weights from the .keras v3 format or from the TensorFlow SavedModel format.

Args:
    filepath: String, path to the weights file to load. For weight files in TensorFlow format, this is the file prefix (the same as was passed to save_weights()). This can also be a path to a SavedModel or a .keras file (v3 saving format) saved via model.save().
    skip_mismatch: Boolean, whether to skip loading of layers where there is a mismatch in the number of weights, or a mismatch in the shape of the weights.
    by_name: Boolean, whether to load weights by name or by topological order. Only topological loading is supported for weight files in the .keras v3 format or in the TensorFlow SavedModel format.
    options: Optional tf.train.CheckpointOptions object that specifies options for loading weights (only valid for a SavedModel file).

make_predict_function(force=False)

Creates a function that executes one step of inference.

This method can be overridden to support custom inference logic. This method is called by Model.predict and Model.predict_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.predict_step.

This function is cached the first time Model.predict or Model.predict_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.

Args:
    force: Whether to regenerate the predict function and skip the cached function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return the outputs of the Model.

make_test_function(force=False)

Creates a function that executes one step of evaluation.

This method can be overridden to support custom evaluation logic. This method is called by Model.evaluate and Model.test_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.test_step.

This function is cached the first time Model.evaluate or Model.test_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.

Args:
    force: Whether to regenerate the test function and skip the cached function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_test_batch_end.

make_train_function(force=False)

Creates a function that executes one step of training.

This method can be overridden to support custom training logic. This method is called by Model.fit and Model.train_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual training logic to Model.train_step.

This function is cached the first time Model.fit or Model.train_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.

Args:
    force: Whether to regenerate the train function and skip the cached function if available.

Returns:

Function. The function created by this method should accept a tf.data.Iterator, and return a dict containing values that will be passed to tf.keras.Callbacks.on_train_batch_end, such as {'loss': 0.2, 'accuracy': 0.7}.

posthoc_check(verbose: bool = True) → tuple[str, bool][source]

Check whether the model should be considered unsafe.

For example, whether it has been changed since fit() was last run, or does not meet the DP policy.
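
For example:

```python
# A minimal sketch of a post-training disclosure check.
msg, disclosive = model.posthoc_check(verbose=True)
if disclosive:
    print(msg)  # explains why the model is considered potentially unsafe
```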

predict(x, batch_size=None, verbose='auto', steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False)

Generates output predictions for the input samples.

Computation is done in batches. This method is designed for batch processing of large numbers of inputs. It is not intended for use inside of loops that iterate over your data and process small numbers of inputs at a time.

For small numbers of inputs that fit in one batch, directly use __call__() for faster execution, e.g., model(x), or model(x, training=False) if you have layers such as tf.keras.layers.BatchNormalization that behave differently during inference. You may pair the individual model call with a tf.function for additional performance inside your inner loop. If you need access to numpy array values instead of tensors after your model call, you can use tensor.numpy() to get the numpy array value of an eager tensor.

Also, note the fact that test loss is not affected by regularization layers like noise and dropout.

Note: See [this FAQ entry]( https://keras.io/getting_started/faq/#whats-the-difference-between-model-methods-predict-and-call) for more details about the difference between Model methods predict() and __call__().

Args:
    x: Input samples. It could be:
      • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
      • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
      • A tf.data dataset.
      • A generator or keras.utils.Sequence instance.
      A more detailed description of unpacking behavior for iterator types (Dataset, generator, Sequence) is given in the Unpacking behavior for iterator-like inputs section of Model.fit.
    batch_size: Integer or None. Number of samples per batch. If unspecified, batch_size will default to 32. Do not specify the batch_size if your data is in the form of dataset, generators, or keras.utils.Sequence instances (since they generate batches).
    verbose: "auto", 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = single line. "auto" becomes 1 for most cases, and 2 when used with ParameterServerStrategy. Note that the progress bar is not particularly useful when logged to a file, so verbose=2 is recommended when not running interactively (e.g. in a production environment). Defaults to "auto".
    steps: Total number of steps (batches of samples) before declaring the prediction round finished. Ignored with the default value of None. If x is a tf.data dataset and steps is None, predict() will run until the input dataset is exhausted.
    callbacks: List of keras.callbacks.Callback instances to apply during prediction. See [callbacks](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks).
    max_queue_size: Integer. Used for generator or keras.utils.Sequence input only. Maximum size for the generator queue. If unspecified, max_queue_size will default to 10.
    workers: Integer. Used for generator or keras.utils.Sequence input only. Maximum number of processes to spin up when using process-based threading. If unspecified, workers will default to 1.
    use_multiprocessing: Boolean. Used for generator or keras.utils.Sequence input only. If True, use process-based threading. If unspecified, use_multiprocessing will default to False. Note that because this implementation relies on multiprocessing, you should not pass non-pickleable arguments to the generator as they can't be passed easily to children processes.

See the discussion of Unpacking behavior for iterator-like inputs for Model.fit. Note that Model.predict uses the same interpretation rules as Model.fit and Model.evaluate, so inputs must be unambiguous for all three methods.

Returns:

Numpy array(s) of predictions.

Raises:
    RuntimeError: If model.predict is wrapped in a tf.function.
    ValueError: In case of mismatch between the provided input data and the model's expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.

predict_generator(generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0)

Generates predictions for the input samples from a data generator.

DEPRECATED:

Model.predict now supports generators, so there is no longer any need to use this endpoint.

predict_on_batch(x)

Returns predictions for a single batch of samples.

Args:
    x: Input data. It could be:
      • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
      • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).

Returns:

Numpy array(s) of predictions.

Raises:
    RuntimeError: If model.predict_on_batch is wrapped in a tf.function.

predict_step(data)

The logic for one inference step.

This method can be overridden to support custom inference logic. This method is called by Model.make_predict_function.

This method should contain the mathematical logic for one step of inference. This typically includes the forward pass.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_predict_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

The result of one inference step, typically the output of calling the Model on data.

preliminary_check(verbose: bool = True, apply_constraints: bool = False) → tuple[str, bool]

Check whether current model parameters violate the safe rules.

Optionally fixes violations.

Parameters:
    verbose : bool
        A boolean value to determine increased output level.
    apply_constraints : bool
        A boolean to determine whether identified constraints are to be upheld and applied.

Returns:
    msg : string
        A message string.
    disclosive : bool
        A boolean value indicating whether the model is potentially disclosive.
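
For example:

```python
# A minimal sketch of a pre-training parameter check.
msg, disclosive = model.preliminary_check(verbose=True, apply_constraints=False)
if disclosive:
    print(msg)  # lists the parameters that violate the safe rules
```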

request_release(path: str, ext: str, target: Target | None = None) → None

Save model and create a report for the TRE output checkers.

Parameters:
    path : string
        Path to save the outputs.
    ext : str
        File extension defining the model saved format, e.g., "pkl" or "sav".
    target : attacks.target.Target
        Contains model and dataset information.

Notes

If target is not None, then worst-case MIA and attribute inference attacks are run via run_attack.
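
A hedged sketch; the path value is an illustrative assumption and target is a previously constructed Target object:

```python
# A minimal sketch; "pkl" is one of the extensions named above.
model.request_release(path="outputs/release", ext="pkl", target=target)
```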

reset_metrics()

Resets the state of all the metrics in the model.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> _ = model.fit(x, y, verbose=0)
>>> assert all(float(m.result()) for m in model.metrics)
>>> model.reset_metrics()
>>> assert all(float(m.result()) == 0 for m in model.metrics)
reset_states()
run_attack(target: Target, attack_name: str, output_dir: str = 'outputs_safemodel') → dict

Run a specified attack on the trained model and save report to file.

Parameters:
    target : Target
        The target in the form of a Target object.
    attack_name : str
        Name of the attack to run.
    output_dir : str
        Name of the directory to store JSON and PDF reports.

Returns:
    dict
        Metadata results.
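
A hedged sketch; the attack name is an illustrative assumption (the request_release notes mention worst-case MIA attacks) and target is a previously constructed Target object:

```python
# A minimal sketch; the attack_name value is an illustrative assumption.
metadata = model.run_attack(target, attack_name="worst_case",
                            output_dir="outputs_safemodel")
```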

save(name: str = 'undefined') → None[source]

Write model to file in appropriate format.

Parameters:
    name : string
        The name of the file to save.

Notes

The optimizer is deliberately excluded, to prevent the possibility of restarting training, which could provide a back door for attacks.
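
For example (the file name and extension are illustrative assumptions):

```python
# A minimal sketch; saves the model without its optimizer state.
model.save("my_safe_model.h5")
```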

save_own_variables(store)

Saves the state of the layer.

You can override this method to take full control of how the state of the layer is saved upon calling model.save().

Args:

store: Dict where the state of the model will be saved.

save_spec(dynamic_batch=True)

Returns the tf.TensorSpec of call args as a tuple (args, kwargs).

This value is automatically defined after calling the model for the first time. Afterwards, you can use it when exporting the model for serving:

```python
model = tf.keras.Model(...)

@tf.function
def serve(*args, **kwargs):
    outputs = model(*args, **kwargs)
    # Apply postprocessing steps, or add additional outputs.
    ...
    return outputs

# arg_specs is `[tf.TensorSpec(...), ...]`. kwarg_specs, in this
# example, is an empty dict since functional models do not use keyword
# arguments.
arg_specs, kwarg_specs = model.save_spec()

model.save(path, signatures={
    'serving_default': serve.get_concrete_function(*arg_specs, **kwarg_specs)
})
```

Args:
    dynamic_batch: Whether to set the batch sizes of all the returned tf.TensorSpec to None. (Note that when defining functional or Sequential models with tf.keras.Input([...], batch_size=X), the batch size will always be preserved). Defaults to True.

Returns:

If the model inputs are defined, returns a tuple (args, kwargs). All elements in args and kwargs are tf.TensorSpec. If the model inputs are not defined, returns None. The model inputs are automatically set when calling the model, model.fit, model.evaluate or model.predict.

save_weights(filepath, overwrite=True, save_format=None, options=None)

Saves all layer weights.

Either saves in HDF5 or in TensorFlow format based on the save_format argument.

When saving in HDF5 format, the weight file has:
  • layer_names (attribute), a list of strings (ordered names of model layers).
  • For every layer, a group named layer.name.
    • For every such layer group, a group attribute weight_names, a list of strings (ordered names of weights tensor of the layer).
    • For every weight in the layer, a dataset storing the weight value, named after the weight tensor.

When saving in TensorFlow format, all objects referenced by the network are saved in the same format as tf.train.Checkpoint, including any Layer instances or Optimizer instances assigned to object attributes. For networks constructed from inputs and outputs using tf.keras.Model(inputs, outputs), Layer instances used by the network are tracked/saved automatically. For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor. See the documentation of tf.train.Checkpoint and tf.keras.Model for details.

While the formats are the same, do not mix save_weights and tf.train.Checkpoint. Checkpoints saved by Model.save_weights should be loaded using Model.load_weights. Checkpoints saved using tf.train.Checkpoint.save should be restored using the corresponding tf.train.Checkpoint.restore. Prefer tf.train.Checkpoint over save_weights for training checkpoints.

The TensorFlow format matches objects and variables by starting at a root object, self for save_weights, and greedily matching attribute names. For Model.save this is the Model, and for Checkpoint.save this is the Checkpoint even if the Checkpoint has a model attached. This means saving a tf.keras.Model using save_weights and loading into a tf.train.Checkpoint with a Model attached (or vice versa) will not match the Model’s variables. See the [guide to training checkpoints]( https://www.tensorflow.org/guide/checkpoint) for details on the TensorFlow format.

Args:
    filepath: String or PathLike, path to the file to save the weights to. When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the '.h5' suffix causes weights to be saved in HDF5 format.
    overwrite: Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt.
    save_format: Either 'tf' or 'h5'. A filepath ending in '.h5' or '.keras' will default to HDF5 if save_format is None. Otherwise, None becomes 'tf'. Defaults to None.
    options: Optional tf.train.CheckpointOptions object that specifies options for saving weights.

Raises:
    ImportError: If h5py is not available when attempting to save in HDF5 format.

set_weights(weights)

Sets the weights of the layer, from NumPy arrays.

The weights of a layer represent the state of the layer. This function sets the weight values from numpy arrays. The weight values should be passed in the order they are created by the layer. Note that the layer’s weights must be instantiated before calling this function, by calling the layer.

For example, a Dense layer returns a list of two values: the kernel matrix and the bias vector. These can be used to set the weights of another Dense layer:

>>> layer_a = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(1.))
>>> a_out = layer_a(tf.convert_to_tensor([[1., 2., 3.]]))
>>> layer_a.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b = tf.keras.layers.Dense(1,
...   kernel_initializer=tf.constant_initializer(2.))
>>> b_out = layer_b(tf.convert_to_tensor([[10., 20., 30.]]))
>>> layer_b.get_weights()
[array([[2.],
       [2.],
       [2.]], dtype=float32), array([0.], dtype=float32)]
>>> layer_b.set_weights(layer_a.get_weights())
>>> layer_b.get_weights()
[array([[1.],
       [1.],
       [1.]], dtype=float32), array([0.], dtype=float32)]
Args:
    weights: a list of NumPy arrays. The number of arrays and their shape must match the number of the dimensions of the weights of the layer (i.e. it should match the output of get_weights).

Raises:
    ValueError: If the provided weights list does not match the layer's specifications.

summary(line_length=None, positions=None, print_fn=None, expand_nested=False, show_trainable=False, layer_range=None)

Prints a string summary of the network.

Args:
    line_length: Total length of printed lines (e.g. set this to adapt the display to different terminal window sizes).
    positions: Relative or absolute positions of log elements in each line. If not provided, becomes [0.3, 0.6, 0.70, 1.]. Defaults to None.
    print_fn: Print function to use. By default, prints to stdout. If stdout doesn't work in your environment, change to print. It will be called on each line of the summary. You can set it to a custom function in order to capture the string summary.
    expand_nested: Whether to expand the nested models. Defaults to False.
    show_trainable: Whether to show if a layer is trainable. Defaults to False.
    layer_range: a list or tuple of 2 strings, which is the starting layer name and ending layer name (both inclusive) indicating the range of layers to be printed in summary. It also accepts regex patterns instead of exact names. In such a case, the start predicate will be the first element that matches layer_range[0] and the end predicate will be the last element that matches layer_range[1]. By default None, which considers all layers of the model.

Raises:

ValueError: if summary() is called before the model is built.
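
For example, print_fn can be used to capture the summary as a string rather than printing it (a sketch assuming a built model):

>>> lines = []
>>> model.summary(print_fn=lines.append)  # each summary line is appended to the list
>>> text = '\n'.join(lines)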

test_on_batch(x, y=None, sample_weight=None, reset_metrics=True, return_dict=False)

Test the model on a single batch of samples.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.
y: Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s). It should be consistent with x (you cannot have Numpy inputs and tensor targets, or inversely).
sample_weight: Optional array of the same length as x, containing weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.
reset_metrics: If True, the metrics returned will be only for this batch. If False, the metrics will be statefully accumulated across batches.
return_dict: If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar test loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:
RuntimeError: If model.test_on_batch is wrapped in a tf.function.
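
A minimal sketch, assuming a model compiled with a loss and an 'mae' metric and taking 3-feature inputs:

>>> x = np.random.random((8, 3))
>>> y = np.random.random((8, 2))
>>> results = model.test_on_batch(x, y, return_dict=True)  # e.g. {'loss': ..., 'mae': ...}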

test_step(data)

The logic for one evaluation step.

This method can be overridden to support custom evaluation logic. This method is called by Model.make_test_function.

This function should contain the mathematical logic for one step of evaluation. This typically includes the forward pass, loss calculation, and metrics updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_test_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_test_batch_end. Typically, the values of the Model’s metrics are returned.
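
A sketch of a typical override using the standard compiled-loss and compiled-metrics helpers (an illustration, not the SafeKerasModel implementation):

>>> class MyModel(tf.keras.Model):
...   def test_step(self, data):
...     x, y = data                       # unpack the batch
...     y_pred = self(x, training=False)  # forward pass
...     self.compiled_loss(y, y_pred)     # loss calculation (updates the loss tracker)
...     self.compiled_metrics.update_state(y, y_pred)  # metrics updates
...     return {m.name: m.result() for m in self.metrics}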

to_json(**kwargs)

Returns a JSON string containing the network configuration.

To load a network from a JSON save file, use keras.models.model_from_json(json_string, custom_objects={}).

Args:
**kwargs: Additional keyword arguments to be passed to json.dumps().

Returns:

A JSON string.
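
For example, a round trip (keyword arguments such as indent are forwarded to json.dumps):

>>> json_string = model.to_json(indent=2)
>>> clone = tf.keras.models.model_from_json(json_string)  # same architecture, freshly initialised weights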

to_yaml(**kwargs)

Returns a yaml string containing the network configuration.

Note: Since TF 2.6, this method is no longer supported and will raise a RuntimeError.

To load a network from a yaml save file, use keras.models.model_from_yaml(yaml_string, custom_objects={}).

custom_objects should be a dictionary mapping the names of custom losses / layers / etc to the corresponding functions / classes.

Args:
**kwargs: Additional keyword arguments to be passed to yaml.dump().

Returns:

A YAML string.

Raises:

RuntimeError: announces that the method poses a security risk

train_on_batch(x, y=None, sample_weight=None, class_weight=None, reset_metrics=True, return_dict=False)

Runs a single gradient update on a single batch of data.

Args:
x: Input data. It could be:
  • A Numpy array (or array-like), or a list of arrays (in case the model has multiple inputs).
  • A TensorFlow tensor, or a list of tensors (in case the model has multiple inputs).
  • A dict mapping input names to the corresponding array/tensors, if the model has named inputs.
y: Target data. Like the input data x, it could be either Numpy array(s) or TensorFlow tensor(s).
sample_weight: Optional array of the same length as x, containing weights to apply to the model’s loss for each sample. In the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.
class_weight: Optional dictionary mapping class indices (integers) to a weight (float) to apply to the model’s loss for the samples from this class during training. This can be useful to tell the model to “pay more attention” to samples from an under-represented class. When class_weight is specified and targets have a rank of 2 or greater, either y must be one-hot encoded, or an explicit final dimension of 1 must be included for sparse class labels.
reset_metrics: If True, the metrics returned will be only for this batch. If False, the metrics will be statefully accumulated across batches.
return_dict: If True, loss and metric results are returned as a dict, with each key being the name of the metric. If False, they are returned as a list.

Returns:

Scalar training loss (if the model has a single output and no metrics) or list of scalars (if the model has multiple outputs and/or metrics). The attribute model.metrics_names will give you the display labels for the scalar outputs.

Raises:

RuntimeError: If model.train_on_batch is wrapped in a tf.function.
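
A minimal sketch, assuming a compiled binary classifier taking 3-feature inputs; class 1 is up-weighted via class_weight:

>>> x = np.random.random((8, 3))
>>> y = np.random.randint(0, 2, (8, 1))
>>> loss = model.train_on_batch(x, y, class_weight={0: 1.0, 1: 2.0})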

train_step(data)

The logic for one training step.

This method can be overridden to support custom training logic. For concrete examples of how to override this method see [Customizing what happens in fit]( https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This method is called by Model.make_train_function.

This method should contain the mathematical logic for one step of training. This typically includes the forward pass, loss calculation, backpropagation, and metric updates.

Configuration details for how this logic is run (e.g. tf.function and tf.distribute.Strategy settings), should be left to Model.make_train_function, which can also be overridden.

Args:

data: A nested structure of `Tensor`s.

Returns:

A dict containing values that will be passed to tf.keras.callbacks.CallbackList.on_train_batch_end. Typically, the values of the Model’s metrics are returned. Example: {‘loss’: 0.2, ‘accuracy’: 0.7}.
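
A sketch of a typical override (again an illustration, not the SafeKerasModel implementation):

>>> class MyModel(tf.keras.Model):
...   def train_step(self, data):
...     x, y = data
...     with tf.GradientTape() as tape:
...       y_pred = self(x, training=True)        # forward pass
...       loss = self.compiled_loss(y, y_pred)   # loss calculation
...     grads = tape.gradient(loss, self.trainable_variables)  # backpropagation
...     self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
...     self.compiled_metrics.update_state(y, y_pred)  # metric updates
...     return {m.name: m.result() for m in self.metrics}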

classmethod with_name_scope(method)

Decorator to automatically enter the module name scope.

>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:

>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Args:

method: The method to wrap.

Returns:

The original method wrapped such that it enters the module’s name scope.

property activity_regularizer

Optional regularizer function for the output of this layer.

property compute_dtype

The dtype of the layer’s computations.

This is equivalent to Layer.dtype_policy.compute_dtype. Unless mixed precision is used, this is the same as Layer.dtype, the dtype of the weights.

Layers automatically cast their inputs to the compute dtype, which causes computations and the output to be in the compute dtype as well. This is done by the base Layer class in Layer.__call__, so you do not have to insert these casts if implementing your own layer.

Layers often perform certain internal computations in higher precision when compute_dtype is float16 or bfloat16 for numeric stability. The output will still typically be float16 or bfloat16 in such cases.

Returns:

The layer’s compute dtype.

property distribute_reduction_method

The method employed to reduce per-replica values during training.

Unless specified, the value “auto” will be assumed, indicating that the reduction strategy should be chosen based on the current running environment. See reduce_per_replica function for more details.

property distribute_strategy

The tf.distribute.Strategy this model was created under.

property dtype

The dtype of the layer weights.

This is equivalent to Layer.dtype_policy.variable_dtype. Unless mixed precision is used, this is the same as Layer.compute_dtype, the dtype of the layer’s computations.

property dtype_policy

The dtype policy associated with this layer.

This is an instance of a tf.keras.mixed_precision.Policy.

property dynamic

Whether the layer is dynamic (eager-only); set in the constructor.

property enable_tune_steps_per_execution
property inbound_nodes

Return Functional API nodes upstream of this layer.

property input

Retrieves the input tensor(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.

Returns:

Input tensor or list of input tensors.

Raises:

RuntimeError: If called in Eager mode.
AttributeError: If no inbound nodes are found.

property input_mask

Retrieves the input mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Input mask tensor (potentially None) or list of input mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property input_shape

Retrieves the input shape(s) of a layer.

Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.

Returns:

Input shape, as an integer shape tuple (or list of shape tuples, one tuple per input tensor).

Raises:

AttributeError: if the layer has no defined input_shape.
RuntimeError: if called in Eager mode.

property input_spec

InputSpec instance(s) describing the input format for this layer.

When you create a layer subclass, you can set self.input_spec to enable the layer to run input compatibility checks when it is called. Consider a Conv2D layer: it can only be called on a single input tensor of rank 4. As such, you can set, in __init__():

self.input_spec = tf.keras.layers.InputSpec(ndim=4)

Now, if you try to call the layer on an input that isn’t rank 4 (for instance, an input of shape (2,)), it will raise a nicely-formatted error:

ValueError: Input 0 of layer conv2d is incompatible with the layer: expected ndim=4, found ndim=1. Full shape received: [2]

Input checks that can be specified via input_spec include:
  • Structure (e.g. a single input, a list of 2 inputs, etc.)
  • Shape
  • Rank (ndim)
  • Dtype

For more information, see tf.keras.layers.InputSpec.

Returns:

A tf.keras.layers.InputSpec instance, or nested structure thereof.
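
For instance, a hypothetical layer subclass might declare the spec in its constructor:

>>> class Rank4Only(tf.keras.layers.Layer):
...   def __init__(self, **kwargs):
...     super().__init__(**kwargs)
...     self.input_spec = tf.keras.layers.InputSpec(ndim=4)  # enforce rank-4 inputs
...   def call(self, inputs):
...     return inputs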

property jit_compile

Specify whether to compile the model with XLA.

[XLA](https://www.tensorflow.org/xla) is an optimizing compiler for machine learning. jit_compile is not enabled by default. Note that jit_compile=True may not necessarily work for all models.

For more information on supported operations please refer to the [XLA documentation](https://www.tensorflow.org/xla). Also refer to [known XLA issues](https://www.tensorflow.org/xla/known_issues) for more details.

property layers
property losses

List of losses added using the add_loss() API.

Variable regularization tensors are created when this property is accessed, so it is eager safe: accessing losses under a tf.GradientTape will propagate gradients back to the corresponding variables.

Examples:

>>> class MyLayer(tf.keras.layers.Layer):
...   def call(self, inputs):
...     self.add_loss(tf.abs(tf.reduce_mean(inputs)))
...     return inputs
>>> l = MyLayer()
>>> l(np.ones((10, 1)))
>>> l.losses
[1.0]
>>> inputs = tf.keras.Input(shape=(10,))
>>> x = tf.keras.layers.Dense(10)(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Activity regularization.
>>> len(model.losses)
0
>>> model.add_loss(tf.abs(tf.reduce_mean(x)))
>>> len(model.losses)
1
>>> inputs = tf.keras.Input(shape=(10,))
>>> d = tf.keras.layers.Dense(10, kernel_initializer='ones')
>>> x = d(inputs)
>>> outputs = tf.keras.layers.Dense(1)(x)
>>> model = tf.keras.Model(inputs, outputs)
>>> # Weight regularization.
>>> model.add_loss(lambda: tf.reduce_mean(d.kernel))
>>> model.losses
[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>]
Returns:

A list of tensors.

property metrics

Return metrics added using compile() or add_metric().

Note: Metrics passed to compile() are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> [m.name for m in model.metrics]
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> [m.name for m in model.metrics]
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.add_metric(
...    tf.reduce_sum(output_2), name='mean', aggregation='mean')
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> [m.name for m in model.metrics]
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc', 'mean']
property metrics_names

Returns the model’s display labels for all outputs.

Note: metrics_names are available only after a keras.Model has been trained/evaluated on actual data.

Examples:

>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> outputs = tf.keras.layers.Dense(2)(inputs)
>>> model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae"])
>>> model.metrics_names
[]
>>> x = np.random.random((2, 3))
>>> y = np.random.randint(0, 2, (2, 2))
>>> model.fit(x, y)
>>> model.metrics_names
['loss', 'mae']
>>> inputs = tf.keras.layers.Input(shape=(3,))
>>> d = tf.keras.layers.Dense(2, name='out')
>>> output_1 = d(inputs)
>>> output_2 = d(inputs)
>>> model = tf.keras.models.Model(
...    inputs=inputs, outputs=[output_1, output_2])
>>> model.compile(optimizer="Adam", loss="mse", metrics=["mae", "acc"])
>>> model.fit(x, (y, y))
>>> model.metrics_names
['loss', 'out_loss', 'out_1_loss', 'out_mae', 'out_acc', 'out_1_mae',
'out_1_acc']
property name

Name of the layer (string), set in the constructor.

property name_scope

Returns a tf.name_scope instance for this class.

property non_trainable_variables

Sequence of non-trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property non_trainable_weights

List of all non-trainable weights tracked by this layer.

Non-trainable weights are not updated during training. They are expected to be updated manually in call().

Returns:

A list of non-trainable variables.

property outbound_nodes

Return Functional API nodes downstream of this layer.

property output

Retrieves the output tensor(s) of a layer.

Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.

Returns:

Output tensor or list of output tensors.

Raises:
AttributeError: if the layer is connected to more than one incoming layer.
RuntimeError: if called in Eager mode.

property output_mask

Retrieves the output mask tensor(s) of a layer.

Only applicable if the layer has exactly one inbound node, i.e. if it is connected to one incoming layer.

Returns:

Output mask tensor (potentially None) or list of output mask tensors.

Raises:

AttributeError: if the layer is connected to more than one incoming layer.

property output_shape

Retrieves the output shape(s) of a layer.

Only applicable if the layer has one output, or if all outputs have the same shape.

Returns:

Output shape, as an integer shape tuple (or list of shape tuples, one tuple per output tensor).

Raises:

AttributeError: if the layer has no defined output shape.
RuntimeError: if called in Eager mode.

property run_eagerly

Settable attribute indicating whether the model should run eagerly.

Running eagerly means that your model will be run step by step, like Python code. Your model might run slower, but it should become easier for you to debug it by stepping into individual layer calls.

By default, we will attempt to compile your model to a static graph to deliver the best execution performance.

Returns:

Boolean, whether the model should run eagerly.
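
For example, eager execution can be toggled for debugging (a sketch assuming a compiled model and training data x, y):

>>> model.run_eagerly = True    # layer calls now execute step by step and can be stepped into
>>> model.fit(x, y, epochs=1)
>>> model.run_eagerly = False   # return to compiled-graph execution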

property state_updates

Deprecated, do NOT use!

Returns the updates from all layers that are stateful.

This is useful for separating training updates and state updates, e.g. when we need to update a layer’s internal state during prediction.

Returns:

A list of update ops.

property stateful
property submodules

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True
Returns:

A sequence of all submodules.

property supports_masking

Whether this layer supports computing a mask using compute_mask.

property trainable
property trainable_variables

Sequence of trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns:

A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).

property trainable_weights

List of all trainable weights tracked by this layer.

Trainable weights are updated via gradient descent during training.

Returns:

A list of trainable variables.

property updates
property variable_dtype

Alias of Layer.dtype, the dtype of the weights.

property variables

Returns the list of all layer variables/weights.

Alias of self.weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

property weights

Returns the list of all layer variables/weights.

Note: This will not track the weights of nested tf.Modules that are not themselves Keras layers.

Returns:

A list of variables.

sacroml.safemodel.classifiers.safekeras.check_checkpoint_equality(v1: str, v2: str) tuple[bool, str][source]

Compare two checkpoints saved with tensorflow save_model.

On the assumption that the optimiser is not going to be saved and that the model is going to be saved in frozen form, this only checks the architecture and weights layer by layer.

sacroml.safemodel.classifiers.safekeras.check_dp_used(optimizer) tuple[bool, str][source]

Check whether the DP optimizer was actually the one used.

sacroml.safemodel.classifiers.safekeras.check_optimizer_allowed(optimizer) tuple[bool, str][source]

Check if the model’s optimizer is in our white-list.

By default an optimizer is not allowed; only those in the white-list pass the check.

sacroml.safemodel.classifiers.safekeras.check_optimizer_is_dp(optimizer) tuple[bool, str][source]

Check whether optimizer is one of tensorflow’s DP versions.

sacroml.safemodel.classifiers.safekeras.load_safe_keras_model(name: str = 'undefined') tuple[bool, Any][source]

Read model from file in appropriate format.

The optimizer is deliberately excluded from the save. This prevents training from being restarted, which could offer a possible back door for attacks; thus the optimizer cannot be loaded.

sacroml.safemodel.classifiers.safekeras.same_configs(m1: Any, m2: Any) tuple[bool, str][source]

Check if two models have the same architecture.

sacroml.safemodel.classifiers.safekeras.same_weights(m1: Any, m2: Any) tuple[bool, str][source]

Check if two nets with same architecture have the same weights.
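
A hedged usage sketch of these helpers, assuming compiled SafeKerasModel instances model, model_a and model_b; each helper returns a (passed, message) tuple:

>>> from sacroml.safemodel.classifiers import safekeras
>>> ok, msg = safekeras.check_optimizer_is_dp(model.optimizer)  # is it a tensorflow-privacy DP optimizer?
>>> used, msg = safekeras.check_dp_used(model.optimizer)        # was the DP optimizer actually used?
>>> same, msg = safekeras.same_configs(model_a, model_b)        # same architecture?
>>> equal, msg = safekeras.same_weights(model_a, model_b)       # same weights, layer by layer?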