Releases: vtjeng/MIPVerify.jl
v0.2.2
MIPVerify v0.2.2
Features:
- Add valid and fixed padding to the `conv2d` layer (#43)
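The new padding options might be exercised as in the sketch below. This is a hedged illustration only: the exact argument positions and the padding-type names (`SamePadding`/`ValidPadding`, or a fixed integer) should be confirmed against the package documentation for this release; the filter and bias values are placeholders.

```julia
using MIPVerify

# Illustrative 3x3 filter (1 input channel, 1 output channel) and bias.
filter = ones(3, 3, 1, 1)
bias = zeros(1)

# Sketch, assuming the padding argument added in #43:
conv_same  = Conv2d(filter, bias)                               # default ("same"-style) padding
conv_valid = Conv2d(filter, bias, 1, MIPVerify.ValidPadding())  # 'valid' padding: no padding
conv_fixed = Conv2d(filter, bias, 1, 2)                         # fixed padding of 2 on each side
```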
Compatibility
- "Memento": add "0.13, 1.0" (#35, #37)
- "CSV": add "0.6" (#39)
- "MAT": add "0.8" (#41)
- "DataFrames": add "0.21" (#45)
Internal:
- Add CompatHelper action (#29)
- Decrease TagBot action frequency to once per day. (#31)
- Run CI on Julia v1.4 rather than v1.3, and consolidate `jobs` and `matrix` keys in `travis.yml` (#47, #50)
- Clean up formatting of entire repo (#49) and add JuliaFormatter format check (#48, #52)
- Update badges in README. (#51)
Closed issues:
- 'Valid' padding for convolutions (#15)
- Importing Convolutional Neural Network (#36)
- Input bounds support (#40)
Merged pull requests:
- Add CompatHelper.yml action. (#29) (@vtjeng)
- Decrease TagBot action frequency to once per day. (#31) (@vtjeng)
- CompatHelper: bump compat for "Memento" to "0.13" (#35) (@github-actions[bot])
- CompatHelper: bump compat for "Memento" to "1.0" (#37) (@github-actions[bot])
- CompatHelper: bump compat for "CSV" to "0.6" (#39) (@github-actions[bot])
- CompatHelper: bump compat for "MAT" to "0.8" (#41) (@github-actions[bot])
- Added valid and fixed padding. (#43) (@samuelemarro)
- CompatHelper: bump compat for "DataFrames" to "0.21" (#45) (@github-actions[bot])
- Run code on Julia versions 1.0 through 1.4 in CI. (#47) (@vtjeng)
- Add JuliaFormatter format check (#48) (@vtjeng)
- Clean up formatting of entire repo (#49) (@vtjeng)
- Remove Julia v1.1,1.2,1.3 and consolidate jobs/matrix keys. (#50) (@vtjeng)
- Update badges in README. (#51) (@vtjeng)
- [scripts] fix format checker (#52) (@vtjeng)
- 0.2.2 release. (#53) (@vtjeng)
v0.2.1
Bugfix:
Internal:
- Reduce time for tests to run by verifying smaller neural networks (#25)
- Improve test coverage for the `MNIST.WK17a_linf0.1_authors` network and the `maximum`, `abs_ge`, and `get_relu_type` functions (#19)
- Add TagBot (https://github.com/apps/julia-tagbot) (#27).
- More precise specification of dependency versions (#28).
Support Julia v1.0 (and drop Julia v0.6)
- Refine and streamline integration tests
- Specify the least restrictive possible set of dependencies.
v0.1.1
Summary
Added support for residual nets via `SkipBlock` and `SkipSequential` layers.
Other Changes
- Features
- Add support for residual nets.
- Add support for normalizing layer.
- Enable searching for the worst adversarial example: pass `adversarial_example_objective = MIPVerify.worst` to `find_adversarial_example` or `batch_find_untargeted_attack`.
- Documentation
- Update documentation with installation instructions
- Bugfix
- Fix memory leak
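The worst-case objective mentioned in the features above can be requested as in this sketch. The `adversarial_example_objective` keyword and `MIPVerify.worst` come from the release notes; the remaining arguments (network, sample, target label, solver) are illustrative placeholders, so check the package documentation for the actual signature.

```julia
using MIPVerify, Gurobi  # any supported solver works; Gurobi is illustrative

mnist = MIPVerify.read_datasets("MNIST")
nn = MIPVerify.get_example_network_params("MNIST.n1")  # illustrative network
sample = MIPVerify.get_image(mnist.test.images, 1)

# Search for the *worst* adversarial example rather than the closest one:
d = find_adversarial_example(
    nn, sample, 9, GurobiSolver();
    adversarial_example_objective = MIPVerify.worst,
)
```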
v0.1.0
Summary
Major changes; highlights are added support for CIFAR-10, code to conveniently run `find_adversarial_example` on datasets, and significantly reduced runtimes compared to the previous release (~10x).
Breaking Changes
- Change default tightening algorithm in `find_adversarial_example` to MIP.
Other Changes
- Features
  - Added support for the CIFAR-10 dataset.
  - Batch processing via `batch_find_untargeted_attack` and `batch_find_targeted_attack`, allowing `find_adversarial_example` to be run on multiple samples from a single dataset with intelligent restarts.
  - Tightening algorithm can be specified per ReLU layer.
  - Better progress bars for applying individual layers.
  - Better log messages.
  - Eliminate dependency on StatsBase.
- Performance
  - Progressive bounds computation.
  - Output from layers that produce affine expressions is explicitly simplified, avoiding overly large expressions in later layers.
  - Ensure updated versions of dependencies are used to improve performance.
- Bugfix
  - Falls back to `interval_arithmetic` if bounds found via LP/MIP are inconsistent.
  - Address other open issues.
v0.0.8
Summary
Bugfixes, further performance improvements, and wrappers to make running `find_adversarial_example` on a large dataset more manageable.
Breaking Changes
- ✖ `ImageDataset` → ✔ `LabelledImageDataset`
- ✖ `PerturbationParameters` → ✔ `PerturbationFamily`
- ✖ `AdditivePerturbationParameters` → ✔ `UnrestrictedPerturbationFamily`
- ✖ `BlurPerturbationParameters` → ✔ `BlurringPerturbationFamily`
Other Changes
- Changes to signature of `find_adversarial_example`:
  - Added `cache_model` named argument to allow users to skip the model caching step (default behavior remains the same).
- Improved performance:
  - Eliminate introduction of unnecessary variables (variable impact per model).
  - Make saving models optional (avoids the ~1-10s per solve previously used to serialize the model).
  - Avoid introducing additional binary variables for calculating the norm.
  - Remove unused heuristic objective that was being saved to the model.
- Added `LInfNormBoundedPerturbationFamily`.
- Provide helper functions to simplify batch solving (see batch processing in the documentation).
- Bugfix:
  - Fix incorrect indexing in `maximum`.
v0.0.7
Summary
Major changes to interface + performance improvements.
Breaking Changes
- Removed layers combining multiple underlying layers:
  - ✖ `ConvolutionLayerParameters()` → ✔ `Conv2d() + MaxPool() + ReLU()`
  - ✖ `FullyConnectedLayerParameters()` → ✔ `Linear() + ReLU()`
  - ✖ `MaskedFullyConnectedLayerParameters()` → ✔ `Linear() + MaskedReLU()`
- Streamlined neural net types into a single feedforward net:
  - ✖ `StandardNeuralNetParameters()` → ✔ `Sequential()`
  - ✖ `MaskedFullyConnectedNetParameters()` → ✔ `Sequential()`
- Changes to the signature of `find_adversarial_example`:
  - Removed the `tighten_bounds` option; `tightening_algorithm` provides a more flexible way to specify the tightening algorithm for upper and lower bounds on input.
  - Renamed `model_build_solver` → `tightening_solver`.
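Under the new interface, a network previously expressed with a composite layer type is composed from basic layers via `Sequential`. The sketch below illustrates the idea; the layer constructor arguments and the trailing name argument are assumptions for illustration, so verify them against the package documentation.

```julia
using MIPVerify

# Was: FullyConnectedLayerParameters(...) — now composed from basic layers.
nn = Sequential(
    [
        Flatten(4),                        # flatten a 4-d image input
        Linear(ones(784, 10), zeros(10)),  # illustrative weights and bias
        ReLU(),
    ],
    "example-net",                         # network name (assumed argument)
)
```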
Other Changes
- Directly expose basic layers: `Conv2d()`, `Flatten()`, `Linear()`, `MaskedReLU()`, `Pool()`, `ReLU()`.
- Fix memory issues for masked ReLU formulation.
- Progress bars + text feedback for long-running processes.
- Enable strided convolution in `conv2d` (by specifying `stride`).
- Added option to specify an `expected_stride` when importing convolution parameters in `get_conv_params`.
Fix broken links in documentation and remove `num_correct`
Summary
- Remove `num_correct` function, replacing it with `frac_correct`.
- Fix broken links in documentation.
- Complete tutorials.
Breaking Changes
The `num_correct` function is now `frac_correct`; it returns the fraction of correctly classified images and requires you to pass in an `ImageDataset` instead of the string of the dataset name. In short: if you previously used `num_correct(nnparams, "MNIST", num_samples)`, you should now import the MNIST dataset explicitly (e.g. `mnist = MIPVerify.read_datasets("MNIST")`) and pass in `frac_correct(nnparams, mnist.test, num_samples)`.
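The migration described above, in code. The calls themselves are taken directly from the note; `nnparams` and `num_samples` stand for whatever you were already passing.

```julia
using MIPVerify

# Before:
# acc = num_correct(nnparams, "MNIST", num_samples)

# After: import the dataset explicitly and call frac_correct,
# which returns the *fraction* of correctly classified images.
mnist = MIPVerify.read_datasets("MNIST")
acc = frac_correct(nnparams, mnist.test, num_samples)
```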
More detailed examples and better documentation
Summary
- Documentation is now available and hosted.
- Full tutorial on options for `find_adversarial_example`.
- Better logging for `find_adversarial_example`.
Breaking Changes
`find_adversarial_example` now defaults to using a cached model. If you were previously not passing in the `rebuild` named argument and wish to rebuild the model, pass in `rebuild=true`.
Improved logging and masked fully-connected layers
- Improved logging
  - Log messages are actually produced.
  - Default log level ensures that `find_adversarial_example` solves are more informative.
  - Levels for individual messages modified to better line up with usage.
  - Logging is associated with the `MIPVerify` package (instead of `main`).
- `masked_fully_connected_net` now available
  - Architecture: fully-connected layers followed by a softmax layer, with a mask for each ReLU specifying whether its output should be rectified as usual, zeroed (if the input to the ReLU is always negative), or passed through (if the input to the ReLU is always positive).