diff --git a/examples/dataset_examples/SNN_Tutorial_Derrick_Lee.ipynb b/examples/dataset_examples/SNN_Tutorial_Derrick_Lee.ipynb
new file mode 100644
index 00000000..089b0e7f
--- /dev/null
+++ b/examples/dataset_examples/SNN_Tutorial_Derrick_Lee.ipynb
@@ -0,0 +1,1736 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "view-in-github",
+ "colab_type": "text"
+ },
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "[](https://github.com/jeshraghian/snntorch/)\n",
+ "\n",
+ "\n",
+ "# snnTorch - Training Spiking Neural Networks with snnTorch\n",
+ "### By Derrick Lee\n",
+ "\n",
+ "\n"
+ ],
+ "metadata": {
+ "id": "LPxYaNwaj6zk"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "The snnTorch tutorial series is based on the following paper. If you find these resources or code useful in your work, please consider citing the following source:\n",
+ "\n",
+ "> [Jason K. Eshraghian, Max Ward, Emre Neftci, Xinxin Wang, Gregor Lenz, Girish Dwivedi, Mohammed Bennamoun, Doo Seok Jeong, and Wei D. Lu. \"Training Spiking Neural Networks Using Lessons From Deep Learning\". Proceedings of the IEEE, 111(9) September 2023.](https://ieeexplore.ieee.org/abstract/document/10242251) \n"
+ ],
+ "metadata": {
+ "id": "2KZ5Jo9mkd_y"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "\n",
+ "## Introduction\n",
+ "In this tutorial you will learn how to:\n",
+ "\n",
+ "* Train a convolutional spiking neural network (CSNN)\n",
+ "* Construct a dataloader using [Tonic](https://tonic.readthedocs.io/en/latest/#)\n",
+ "* Train a model on the [DVS Gesture Dataset](https://research.ibm.com/interactive/dvsgesture/)\n",
+ "\n",
+ "If running in Google Colab:\n",
+ "* You may connect to a GPU by selecting `Runtime` > `Change runtime type` > `Hardware accelerator: GPU`\n",
+ "* Next, install the latest PyPI distributions of Tonic and snnTorch by clicking into the following cell and pressing `Shift+Enter`."
+ ],
+ "metadata": {
+ "id": "AEIyFscwk9Cx"
+ }
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "KJAF2tZPvPMk",
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "outputId": "8e60c8cd-1f7e-4c56-f60a-e4fd0da674bf"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m109.7/109.7 kB\u001b[0m \u001b[31m2.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m112.7/112.7 kB\u001b[0m \u001b[31m9.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m50.4/50.4 kB\u001b[0m \u001b[31m6.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m109.0/109.0 kB\u001b[0m \u001b[31m3.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25h"
+ ]
+ }
+ ],
+ "source": [
+ "!pip install tonic --quiet\n",
+ "!pip install snntorch --quiet"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "# Event-based dataset handling and plotting\n",
+ "import tonic\n",
+ "import tonic.transforms as transforms\n",
+ "import matplotlib.pyplot as plt\n",
+ "\n",
+ "# PyTorch\n",
+ "import torch\n",
+ "import torchvision\n",
+ "import torch.nn as nn\n",
+ "\n",
+ "# snnTorch: spiking neuron models, surrogate gradients, and training utilities\n",
+ "import snntorch as snn\n",
+ "from snntorch import surrogate\n",
+ "from snntorch import functional as SF\n",
+ "from snntorch import utils"
+ ],
+ "metadata": {
+ "id": "C9F9ihlGls9u"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# 1. Dataset"
+ ],
+ "metadata": {
+ "id": "IRW_OSVfmMF1"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "T-KcsehPANDP"
+ },
+ "source": [
+ "The dataset used in this tutorial is DVSGesture from IBM.\n",
+ "\\\n",
+ "It contains 11 classes, each corresponding to a hand gesture such as waving or clapping. All samples were recorded with a DVS128 camera, an event-based vision sensor that registers changes in brightness (events) at individual pixels rather than capturing full frames, so a moving hand produces a dense stream of events while the static backdrop contributes little more than noise.\n"
+ ]
+ },
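+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To make the event format concrete before downloading anything, the short sketch below builds a toy event stream by hand. The field names `x`, `y`, `p`, `t` follow Tonic's convention; the first record mirrors the sample event printed in the next section, and the remaining values are made up purely for illustration.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import numpy as np\n",
+ "\n",
+ "# A DVS recording is just a long list of events: each one records where a\n",
+ "# brightness change occurred, its direction (polarity), and a timestamp.\n",
+ "# Toy data only -- real recordings contain hundreds of thousands of events.\n",
+ "toy_events = np.array(\n",
+ "    [(119, 113, 0, 6), (64, 72, 1, 31), (3, 90, 0, 55)],\n",
+ "    dtype=[(\"x\", \"<i8\"), (\"y\", \"<i8\"), (\"p\", \"<i8\"), (\"t\", \"<i8\")],\n",
+ ")\n",
+ "\n",
+ "print(toy_events[0])    # a single (x, y, polarity, timestamp) event\n",
+ "print(toy_events[\"t\"])  # timestamps of every event in the stream"
+ ]
+ },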
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "L9ALJIrHL7AW"
+ },
+ "source": [
+ "## 1.1 Loading the dataset with Tonic"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 156,
+ "referenced_widgets": [
+ "3974416247f7451bb2bb82bdb83138fc",
+ "0f504b5f4ad74595ae37ebf795d46514",
+ "5fd4888429c342f2b1493c1689e6fc6b",
+ "b4a14c5ad41441a7bbb76aa96f72778b",
+ "766a82612ccc442fb46264421109c713",
+ "92e3de6e58a24ea1a870432e8d907504",
+ "c0c3f6d088534ceb9d70af69d7e08392",
+ "0b28db65ac3743c2abb7ddd085fbed21",
+ "1071d5988c9b4cf7b846b51db4f66a32",
+ "daa45045ae794990867fce4f0be28245",
+ "c710df2cd11e4ec2b1c9da6585c76016"
+ ]
+ },
+ "id": "peDXjxJUwEVX",
+ "outputId": "0a85d931-90d1-4fc4-8867-32266e8cd4fb"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Downloading https://s3-eu-west-1.amazonaws.com/pfigshare-u-files/38022171/ibmGestureTrain.tar.gz?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIYCQYOYV5JSSROOA/20231102/eu-west-1/s3/aws4_request&X-Amz-Date=20231102T212615Z&X-Amz-Expires=10&X-Amz-SignedHeaders=host&X-Amz-Signature=587183aaf756ad2dc11fa835233d8e4b110563001b4e86117a308156d072b349 to ./data/DVSGesture/ibmGestureTrain.tar.gz\n"
+ ]
+ },
+ {
+ "output_type": "display_data",
+ "data": {
+ "text/plain": [
+ " 0%| | 0/2443675558 [00:00, ?it/s]"
+ ],
+ "application/vnd.jupyter.widget-view+json": {
+ "version_major": 2,
+ "version_minor": 0,
+ "model_id": "3974416247f7451bb2bb82bdb83138fc"
+ }
+ },
+ "metadata": {}
+ },
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Extracting ./data/DVSGesture/ibmGestureTrain.tar.gz to ./data/DVSGesture\n",
+ "Dataset contains 1077 samples.\n",
+ "There are 787770 events in the first sample.\n",
+ "A single event: (119, 113, False, 6)\n"
+ ]
+ }
+ ],
+ "source": [
+ "dataset = tonic.datasets.DVSGesture(save_to='./data', train=True)\n",
+ "events, label = dataset[0]\n",
+ "\n",
+ "# Dataset size\n",
+ "print(f\"Dataset contains {len(dataset)} samples.\")\n",
+ "\n",
+ "# Number of events in the first sample\n",
+ "print(f\"There are {len(events)} events in the first sample.\")\n",
+ "\n",
+ "# A single event is (x-pos, y-pos, polarity, timestamp)\n",
+ "print(f\"A single event: {events[0]}\")"
+ ]
+ },
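+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Raw event lists are awkward to feed into a convolutional network directly, so they are commonly binned into dense frame tensors of shape `(time bins, 2 polarities, 128, 128)`. The sketch below does this with Tonic's `Denoise` and `ToFrame` transforms; the `filter_time` and `time_window` values (both in microseconds) are illustrative choices only.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# DVS128 resolution with two polarity channels: (128, 128, 2)\n",
+ "sensor_size = tonic.datasets.DVSGesture.sensor_size\n",
+ "\n",
+ "frame_transform = transforms.Compose([\n",
+ "    # drop isolated events with no neighboring activity within 10 ms (illustrative value)\n",
+ "    transforms.Denoise(filter_time=10000),\n",
+ "    # accumulate the remaining events into 10 ms frames (illustrative value)\n",
+ "    transforms.ToFrame(sensor_size=sensor_size, time_window=10000),\n",
+ "])\n",
+ "\n",
+ "frames = frame_transform(events)\n",
+ "print(frames.shape)  # (number of time bins, 2, 128, 128)"
+ ]
+ },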
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ZSTk0p-HQUPk"
+ },
+ "source": [
+ "## 1.2 Visualizing the Data"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "tIUgiH4z52IC",
+ "outputId": "386e6579-be2f-46ba-d5a6-3424cad3878e"
+ },
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "(787, 2, 128, 128)\n"
+ ]
+ },
+ {
+ "output_type": "execute_result",
+ "data": {
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "execution_count": 4
+ },
+ {
+ "output_type": "display_data",
+ "data": {
+ "text/plain": [
+ "