
Releases: vtjeng/MIPVerify.jl

v0.2.2

26 May 00:20
b6a585d

MIPVerify v0.2.2

Diff since v0.2.1

Features:

  • Add valid and fixed padding to conv2d layer (#43)
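A minimal sketch of the new padding options, assuming padding is selected via an extra positional argument on Conv2d and a ValidPadding type (check the conv2d docstring for the exact signature); the filter and bias values here are random placeholders:

```julia
using MIPVerify

conv_filter = rand(3, 3, 1, 4)  # hypothetical 3x3 filter, 1 input channel -> 4 output channels
conv_bias = rand(4)

conv_same = Conv2d(conv_filter, conv_bias)                                # "same" padding (existing behavior)
conv_valid = Conv2d(conv_filter, conv_bias, 1, MIPVerify.ValidPadding())  # "valid": no padding applied
conv_fixed = Conv2d(conv_filter, conv_bias, 1, 2)                         # fixed padding of 2 cells per side
```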

Compatibility:

  • "Memento": add "0.13, 1.0" (#35, #37)
  • "CSV": add "0.6" (#39)
  • "MAT": add "0.8" (#41)
  • "DataFrames": add "0.21" (#45)

Internal:

  • Add CompatHelper action (#29)
  • Decrease TagBot action frequency to once per day. (#31)
  • Run CI on Julia v1.4 rather than v1.3, and consolidate jobs and matrix keys in travis.yml (#47, #50)
  • Clean up formatting of entire repo (#49) and add JuliaFormatter format check (#48, #52)
  • Update badges in README. (#51)

Closed issues:

  • 'Valid' padding for convolutions (#15)
  • Importing Convolutional Neural Network (#36)
  • Input bounds support (#40)

Merged pull requests:

  • Add CompatHelper.yml action. (#29) (@vtjeng)
  • Decrease TagBot action frequency to once per day. (#31) (@vtjeng)
  • CompatHelper: bump compat for "Memento" to "0.13" (#35) (@github-actions[bot])
  • CompatHelper: bump compat for "Memento" to "1.0" (#37) (@github-actions[bot])
  • CompatHelper: bump compat for "CSV" to "0.6" (#39) (@github-actions[bot])
  • CompatHelper: bump compat for "MAT" to "0.8" (#41) (@github-actions[bot])
  • Added valid and fixed padding. (#43) (@samuelemarro)
  • CompatHelper: bump compat for "DataFrames" to "0.21" (#45) (@github-actions[bot])
  • Run code on Julia versions 1.0 through 1.4 in CI. (#47) (@vtjeng)
  • Add JuliaFormatter format check (#48) (@vtjeng)
  • Clean up formatting of entire repo (#49) (@vtjeng)
  • Remove Julia v1.1,1.2,1.3 and consolidate jobs/matrix keys. (#50) (@vtjeng)
  • Update badges in README. (#51) (@vtjeng)
  • [scripts] fix format checker (#52) (@vtjeng)
  • 0.2.2 release. (#53) (@vtjeng)

v0.2.1

25 Dec 22:44
393362d

Bugfix:

  • Fix skip_unit (#26)
  • Align "same" padding computation with the TensorFlow implementation (#23; see the sketch below)
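For reference, a minimal sketch of TensorFlow's "same" padding rule that the fix aligns with (this is TensorFlow's documented behavior, not code from this package):

```julia
# Output size is ceil(input/stride); when the total padding is odd, the extra
# cell goes after (right/bottom), matching TensorFlow.
function same_padding(in_size::Int, filter_size::Int, stride::Int)
    out_size = cld(in_size, stride)  # ceiling division
    pad_total = max((out_size - 1) * stride + filter_size - in_size, 0)
    pad_before = pad_total ÷ 2
    pad_after = pad_total - pad_before
    return (pad_before, pad_after)
end

same_padding(7, 3, 2)  # (1, 1): a 7-wide input at stride 2 yields a 4-wide output
```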

Internal:

  • Reduce time for tests to run by verifying smaller neural networks (#25)
  • Improve test coverage for MNIST.WK17a_linf0.1_authors network and maximum, abs_ge, and get_relu_type functions (#19)
  • Add TagBot (https://github.com/apps/julia-tagbot) (#27)
  • More precise specification of dependency versions (#28)

Support Julia v1.0 (and drop Julia v0.6)

18 Aug 05:59
b8489c8
  • Refine and streamline integration tests
  • Specify the least restrictive possible set of dependencies.

v0.1.1

01 Dec 20:51
aa8e164

Summary

Added support for residual nets via SkipBlock and SkipSequential layers.

Other Changes

  • Features
    • Add support for residual nets.
    • Add support for normalizing layer.
    • Enable searching for the worst adversarial example: pass adversarial_example_objective = MIPVerify.worst to find_adversarial_example or batch_find_untargeted_attack (see the sketch after this list).
  • Documentation
    • Update documentation with installation instructions
  • Bugfix
    • Fix memory leak
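A sketch of the new objective option; the network, image, and solver setup follow the tutorials of this era, and everything other than adversarial_example_objective = MIPVerify.worst is illustrative:

```julia
using MIPVerify, Gurobi

mnist = MIPVerify.read_datasets("MNIST")
nn = MIPVerify.get_example_network_params("MNIST.n1")
sample_image = MIPVerify.get_image(mnist.test.images, 1)

# Search for the worst adversarial example rather than the default closest one:
d = find_adversarial_example(
    nn,
    sample_image,
    10,  # target label
    GurobiSolver(),
    adversarial_example_objective = MIPVerify.worst,
)
```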

v0.1.0

05 Sep 03:55

Summary

Major changes; highlights include support for the CIFAR-10 dataset, code that makes it convenient to run find_adversarial_example on datasets, and significantly reduced runtimes compared to the previous release (~10x).

Breaking Changes

  • Change default tightening algorithm in find_adversarial_example to MIP.

Other Changes

  • Features

    • Added support for the CIFAR-10 dataset.
    • Batch processing via batch_find_untargeted_attack and batch_find_targeted_attack allows find_adversarial_example to be run on multiple samples from a single dataset, with intelligent restarts (see the sketch after this list).
    • Tightening algorithm can be specified per ReLU layer.
    • Better progress bars for applying individual layers.
    • Better log messages.
    • Eliminate dependency on StatsBase.
  • Performance

    • Progressive bounds computation.
    • Output from layers that produce affine expressions is explicitly simplified, avoiding overly large expressions in later layers.
    • Ensure updated versions of dependencies are used to improve performance.
  • Bugfix

    • Fall back to interval_arithmetic if bounds found via LP/MIP are inconsistent.
    • Address other open issues.
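A sketch of the new batch-processing entry point. The sample-index range and the keyword names (norm_order, tightening_algorithm) are assumptions based on find_adversarial_example; consult the batch_processing documentation for the exact signature:

```julia
using MIPVerify, Gurobi

mnist = MIPVerify.read_datasets("MNIST")
nn = MIPVerify.get_example_network_params("MNIST.n1")

# Run an untargeted attack on test samples 1-100; progress is saved so that
# interrupted runs can restart intelligently.
MIPVerify.batch_find_untargeted_attack(
    nn,
    mnist.test,
    1:100,
    GurobiSolver(),
    norm_order = Inf,
    tightening_algorithm = MIPVerify.lp,  # override the new MIP default
)
```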

v0.0.8

06 Apr 18:05

Summary

Bugfixes, further performance improvements, and wrappers that make running find_adversarial_example on a large dataset more manageable.

Breaking Changes

  • ImageDataset → LabelledImageDataset
  • PerturbationParameters → PerturbationFamily
  • AdditivePerturbationParameters → UnrestrictedPerturbationFamily
  • BlurPerturbationParameters → BlurringPerturbationFamily

Other Changes

  • Changes to signature of find_adversarial_example
    • Added cache_model named argument to allow users to skip the model-caching step (default behavior remains the same).
  • Improved performance.
    • Eliminate introduction of unnecessary variables (impact varies per model)
    • Make saving models optional (avoids the ~1-10s per solve previously used to serialize the model)
    • Added LInfNormBoundedPerturbationFamily for perturbations bounded in L∞ norm (see the sketch after this list).
    • Avoid introducing additional binary variables for calculating the norm.
    • Remove unused heuristic objective that was being saved to model.
  • Provide helper functions to simplify batch solving.
    • (See batch_processing in documentation).
  • Bugfix
    • Fix incorrect indexing in maximum.
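A sketch combining two new options from this release; apart from LInfNormBoundedPerturbationFamily and cache_model, which these notes name, the keyword pp and the setup code are assumptions:

```julia
using MIPVerify, Gurobi

mnist = MIPVerify.read_datasets("MNIST")
nn = MIPVerify.get_example_network_params("MNIST.n1")
sample_image = MIPVerify.get_image(mnist.test.images, 1)

d = find_adversarial_example(
    nn,
    sample_image,
    10,
    GurobiSolver(),
    # Restrict the search to perturbations with L∞ norm at most 0.05
    # (the constructor argument is assumed to be the norm bound):
    pp = MIPVerify.LInfNormBoundedPerturbationFamily(0.05),
    cache_model = false,  # new named argument: skip the model-caching step
)
```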

v0.0.7

08 Mar 02:08

Summary

Major changes to interface + performance improvements.

Breaking Changes

  • Removed layers combining multiple underlying layers:
    • ConvolutionLayerParameters() → Conv2d() + MaxPool() + ReLU()
    • FullyConnectedLayerParameters() → Linear() + ReLU()
    • MaskedFullyConnectedLayerParameters() → Linear() + MaskedReLU()
  • Streamlined neural net types into a single feedforward net (see the sketch after this list):
    • StandardNeuralNetParameters() → Sequential()
    • MaskedFullyConnectedNetParameters() → Sequential()
  • Changes to signature of find_adversarial_example.
    • Removed tighten_bounds option.
      • tightening_algorithm provides a more flexible way to specify the tightening algorithm for upper and lower bounds on input.
    • Renamed model_build_solver → tightening_solver.
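A sketch of what the decomposition looks like in practice: one former ConvolutionLayerParameters and one former FullyConnectedLayerParameters written out as basic layers inside a Sequential net. Parameter values are random placeholders, and the shapes (28x28x1 input) and constructor argument orders are assumptions:

```julia
using MIPVerify

conv_filter = rand(3, 3, 1, 16)
conv_bias = rand(16)
fc_weight = rand(3136, 10)  # 14x14x16 = 3136 flattened features -> 10 logits
fc_bias = rand(10)

nn = Sequential(
    [
        Conv2d(conv_filter, conv_bias),  # formerly bundled in ConvolutionLayerParameters
        MaxPool((1, 2, 2, 1)),           # ... together with pooling
        ReLU(),                          # ... and rectification
        Flatten(4),
        Linear(fc_weight, fc_bias),      # formerly bundled in FullyConnectedLayerParameters
    ],
    "example_net",
)
```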

Other Changes

  • Directly expose basic layers: Conv2d(), Flatten(), Linear(), MaskedReLU(), Pool(), ReLU().
  • Fix memory issues for masked ReLU formulation.
  • Progress bars + text feedback for long-running processes.
  • Enable strided convolution in conv2d (by specifying stride; see the sketch below).
  • Added option to specify an expected_stride when importing convolution parameters in get_conv_params.
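A sketch of strided convolution, assuming the stride is the third positional argument of Conv2d; the get_conv_params call is left as a comment since its parameter dictionary depends on your saved weights:

```julia
using MIPVerify

conv_filter = rand(3, 3, 1, 16)
conv_bias = rand(16)

conv_strided = Conv2d(conv_filter, conv_bias, 2)  # stride of 2

# When importing parameters, expected_stride (per these notes) lets
# get_conv_params check the stride of the imported layer, e.g.:
# get_conv_params(param_dict, "conv1", (3, 3, 1, 16), expected_stride = 2)
```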

Fix broken links in documentation and remove num_correct

22 Feb 03:29

Summary

  • Remove num_correct function, replacing it with frac_correct.
  • Fix broken links in documentation.
  • Complete tutorials.

Breaking Changes

  • The num_correct function is now frac_correct; it returns the fraction of correctly classified images and requires you to pass in an ImageDataset instead of the dataset's name as a string. In short: if you previously used num_correct(nnparams, "MNIST", num_samples), you should now import the MNIST dataset explicitly (e.g. mnist = MIPVerify.read_datasets("MNIST")) and call frac_correct(nnparams, mnist.test, num_samples).
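The migration in code, using the names from the note above (nnparams and num_samples stand in for your network parameters and sample count):

```julia
using MIPVerify

# Before (removed): returned a count, taking the dataset name as a string.
# num_correct(nnparams, "MNIST", num_samples)

# After: import the dataset explicitly and pass in its test split;
# frac_correct returns the fraction of correctly classified images.
mnist = MIPVerify.read_datasets("MNIST")
accuracy = frac_correct(nnparams, mnist.test, num_samples)
```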

More detailed examples and better documentation

21 Feb 14:45
c0d098f

Summary

  • Documentation is now available and hosted.
  • Full tutorial on the options for find_adversarial_example.
  • Better logging for find_adversarial_example.

Breaking Changes

  • find_adversarial_example defaults to using a cached model. If you were previously not passing in the rebuild named argument and wish to rebuild the model, pass in rebuild=true (see the sketch below).
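A sketch of the migration; every name other than rebuild is a placeholder following the tutorials of this era:

```julia
using MIPVerify, Gurobi

# Cached models are now reused by default; pass rebuild = true to force a rebuild.
d = find_adversarial_example(
    nnparams,
    sample_image,
    target_label,
    GurobiSolver(),
    rebuild = true,
)
```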

Improved logging and masked fully-connected layers

15 Feb 23:31
  • Improved logging
    • Log messages are actually produced.
    • Default log level ensures that find_adversarial_example solves are more informative.
    • Levels for individual messages modified to better line up with usage.
    • Logging is associated with MIPVerify package (instead of main).
  • masked_fully_connected_net now available
    • Architecture: fully connected layers followed by a softmax layer, with a mask for each ReLU specifying whether its output should be rectified as usual, zeroed (if the input to the ReLU is always negative), or passed through (if the input to the ReLU is always positive). A sketch of these semantics follows below.
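A minimal sketch of the masked-ReLU semantics described above (plain Julia, not the package's internal implementation); the mask encoding here is an assumption, with 0 meaning rectify as usual:

```julia
# mask[i] == 0: rectify as usual; mask[i] < 0: unit is always off, so zero it;
# mask[i] > 0: unit is always on, so pass the input through unchanged.
function masked_relu(x::AbstractVector, mask::AbstractVector)
    @assert length(x) == length(mask)
    map(x, mask) do xi, mi
        mi == 0 ? max(xi, zero(xi)) :
        mi < 0  ? zero(xi)          :
                  xi
    end
end

masked_relu([-1.0, -2.0, 3.0], [0, -1, 1])  # -> [0.0, 0.0, 3.0]
```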