LiRA Attack

Likelihood Ratio Attack (LiRA): the likelihood testing scenario from https://arxiv.org/pdf/2112.03570.pdf.

class sacroml.attacks.likelihood_attack.LIRAAttack(output_dir: str = 'outputs', write_report: bool = True, n_shadow_models: int = 100, p_thresh: float = 0.05, mode: str = 'offline', fix_variance: bool = False, report_individual: bool = False)

The main LiRA Attack class.

Methods

attack(target)

Run a LiRA attack from a Target object and a target model.

get_params()

Get parameters for this attack.

__init__(output_dir: str = 'outputs', write_report: bool = True, n_shadow_models: int = 100, p_thresh: float = 0.05, mode: str = 'offline', fix_variance: bool = False, report_individual: bool = False) → None

Construct an object to execute a LiRA attack.

Parameters:
output_dir : str

Name of the directory where outputs are stored.

write_report : bool

Whether to generate a JSON and PDF report.

n_shadow_models : int

Number of shadow models to be trained.

p_thresh : float

Significance threshold, used for example for auc_p_value and pdif_vals.

mode : str

Attack mode: {"offline", "offline-carlini", "online-carlini"}

fix_variance : bool

Whether to use the global standard deviation rather than a per-record estimate.

report_individual : bool

Whether to report metrics for each individual record.
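
A minimal construction sketch, using only the parameters documented above with their documented default values:

    from sacroml.attacks.likelihood_attack import LIRAAttack

    # Configure the attack; every keyword below is a documented parameter.
    attack_obj = LIRAAttack(
        output_dir="outputs",      # directory where outputs are stored
        write_report=True,         # generate JSON and PDF reports
        n_shadow_models=100,       # number of shadow models to train
        p_thresh=0.05,             # significance threshold (e.g. for auc_p_value, pdif_vals)
        mode="offline",            # or "offline-carlini" / "online-carlini"
        fix_variance=False,        # use per-record rather than global standard deviation
        report_individual=False,   # aggregate metrics only
    )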

attack(target: Target) → dict

Run a LiRA attack from a Target object and a target model.

Parameters:
target : attacks.target.Target

The target as an instance of the Target class.

Returns:
dict

Attack report.
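
A sketch of invoking the attack. It assumes target is an already-prepared instance of sacroml.attacks.target.Target wrapping the trained model and its data; Target construction is documented separately and not shown here:

    # `target` is assumed to be a prepared sacroml.attacks.target.Target instance,
    # and `attack_obj` the LIRAAttack configured in the sketch above.
    report = attack_obj.attack(target)

    # The return value is the attack report as a dictionary.
    print(sorted(report.keys()))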

get_params() → dict

Get parameters for this attack.

Returns:
params : dict

Parameter names mapped to their values.
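
For example, the configuration can be retrieved and recorded alongside the attack report (attack_obj is the LIRAAttack instance from the sketches above):

    # Retrieve the attack's configuration as parameter-name / value pairs.
    params = attack_obj.get_params()
    for name, value in params.items():
        print(f"{name}: {value}")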