CS231n-LAB

These are the accompanying assignments for Stanford University's computer vision course, CS231n.

Assignment 1 in CS231n-LAB focuses on fundamental concepts in machine learning and neural networks. In this assignment, we implemented several key components:

  • K-nearest neighbors (KNN) classifier for classification tasks.
  • Fully connected layer for dense neural network architecture.
  • Rectified Linear Unit (ReLU) layer for introducing non-linearity.
  • Multiclass Support Vector Machine (SVM) loss for classification.
  • Softmax loss for multiclass classification.
  • A two-layer fully connected neural network that combines these components to demonstrate basic neural network principles.
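As a minimal NumPy sketch of two of these building blocks (the function names and signatures here are illustrative, not the assignment's exact API), the ReLU forward/backward pair and the softmax loss can be written as:

```python
import numpy as np

def relu_forward(x):
    # Elementwise max(0, x); cache the input for the backward pass
    return np.maximum(0, x), x

def relu_backward(dout, cache):
    # Gradient flows only where the forward input was positive
    return dout * (cache > 0)

def softmax_loss(scores, y):
    # Numerically stable softmax cross-entropy loss and its gradient
    shifted = scores - scores.max(axis=1, keepdims=True)
    probs = np.exp(shifted)
    probs /= probs.sum(axis=1, keepdims=True)
    N = scores.shape[0]
    loss = -np.log(probs[np.arange(N), y]).mean()
    dscores = probs.copy()
    dscores[np.arange(N), y] -= 1
    return loss, dscores / N
```

Shifting the scores by their row maximum before exponentiating avoids overflow without changing the result, which is the standard trick the assignments test with numerical gradient checks.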

Assignment 2 in CS231n-LAB delves into advanced topics in deep learning and neural network architectures. This assignment includes the implementation of the following techniques:

  • Batch normalization, layer normalization, spatial batch normalization, and spatial group normalization layers for normalizing activations.
  • Dropout layer for regularization and preventing overfitting.
  • Convolutional layer for learning spatial hierarchies of features.
  • A high-performance deep network built with the PyTorch framework, integrating the techniques above.
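To make the normalization and regularization ideas concrete, here is a NumPy sketch of the training-time forward passes for batch normalization and inverted dropout (names and signatures are illustrative; the assignment's versions also track running statistics and caches for backprop):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the mini-batch, then scale and shift
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

def dropout_forward(x, p, rng):
    # Inverted dropout: drop units with probability p at train time and
    # rescale the survivors so the test-time forward pass needs no change
    mask = (rng.random(x.shape) >= p) / (1.0 - p)
    return x * mask
```

The "inverted" scaling by 1/(1-p) is what lets the test-time network skip dropout entirely, since the expected activation magnitude is already matched during training.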

Assignment 3 in CS231n-LAB explores feature extraction, adversarial examples, and advanced machine learning frameworks. This assignment involves the following tasks:

  • Network visualization: saliency maps, fooling images, and class visualizations for analyzing network activations and vulnerabilities.
  • Recurrent Neural Networks (RNN): Suitable for sequential data processing tasks such as natural language processing and time series analysis.
  • Long Short-Term Memory (LSTM): A type of RNN architecture with improved ability to capture long-range dependencies.
  • Transformer: An attention-based architecture that underlies models such as BERT for language understanding tasks.
  • Generative Adversarial Networks (GAN): Used for generating realistic data samples, such as images, by training a generator and discriminator network simultaneously.
  • SimCLR (Contrastive Learning): Focuses on learning effective representations from unlabeled data using contrastive loss functions, enabling better generalization and transfer learning capabilities.
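The sequence models above all build on the same unrolling idea; a minimal NumPy sketch of a vanilla RNN (function names are illustrative, and the assignment's versions also return caches for backpropagation through time) looks like:

```python
import numpy as np

def rnn_step_forward(x, prev_h, Wx, Wh, b):
    # One vanilla RNN timestep: next_h = tanh(x @ Wx + prev_h @ Wh + b)
    return np.tanh(x @ Wx + prev_h @ Wh + b)

def rnn_forward(x_seq, h0, Wx, Wh, b):
    # Unroll over time, reusing the same weights at every step
    h = h0
    hs = []
    for t in range(x_seq.shape[1]):
        h = rnn_step_forward(x_seq[:, t, :], h, Wx, Wh, b)
        hs.append(h)
    return np.stack(hs, axis=1)  # shape (N, T, H)
```

The LSTM replaces the single tanh update with gated cell-state updates to better preserve long-range information, which is the motivation behind the LSTM part of the assignment.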

Note: Some files, such as cifar-10-python.tar.gz and the coco_captioning data, are not uploaded. To run the code in one click, first download the corresponding files from the addresses given in the notebooks or in the terminal output.
