- I took the pledge #30daysofUdacity in my Deep Learning Nanodegree.
- Completed Lesson 1 of the Deep Learning Nanodegree class.
- Revisited the mlcourse.ai lesson on Decision Trees and Random Forests. https://www.youtube.com/watch?v=H4XlBTPv5rQ&feature=youtu.be
- Watched a Siraj Raval video on Decision Trees and Random Forests for clarity https://www.youtube.com/watch?v=QHOazyP-YlM
- A Random Forest is a collection of Decision Trees
- Random Forests are used for both classification and regression problems
- Random Forests are good for small datasets (see the sketch below)
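A minimal sketch of these points, assuming scikit-learn (the Iris data here just stands in for any small dataset):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# A Random Forest is an ensemble of Decision Trees; the same API also offers
# RandomForestRegressor for regression problems.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=100)  # a collection of 100 trees
forest.fit(X_train, y_train)
print(forest.score(X_test, y_test))  # mean accuracy on the held-out split
```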
- Broadened my knowledge on Matrix multiplication and Matrix dot product
- Decision trees are prone to error and are unstable
- I started Introduction to Neural Networks
- Broadened my knowledge on Perceptron https://deepai.org/machine-learning-glossary-and-terms/perceptron
- Watched a video on step function https://www.youtube.com/watch?v=tHwpj9b4zZo
- The heart of deep learning is the Neural Network
- Neural networks have nodes, edges, and layers
- The Perceptron is an algorithm for binary classification. It consists of four main parts: input values, weights and a bias, a net sum, and an activation function
- Another name for the Perceptron is Linear Binary Classifier
- Default equation: output = Wx + b, where W = weights, x = input values, b = bias (see the sketch below)
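A minimal sketch of the perceptron equation, with illustrative weights and inputs:

```python
import numpy as np

def step(z):
    # step activation: outputs 1 when the net sum is non-negative, else 0
    return 1 if z >= 0 else 0

W = np.array([0.5, -0.5])   # weights
x = np.array([1.0, 2.0])    # input values
b = 0.1                     # bias (its coefficient is 1)

net_sum = np.dot(W, x) + b  # Wx + b
print(step(net_sum))        # binary output: a linear binary classifier
```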
- Read about the cost function equation
- I learnt the categories of models in Data Science
- Difference between local and global minima https://statinfer.com/204-5-10-local-vs-global-minimum/
- The coefficient of a bias is 1
- If the question is to determine the probability of a dataset, use a Predictive Model
- If the question is to show relationships in a dataset, use a Descriptive Model
- If the question requires a Yes or No answer about a dataset, use a Classification Model
- A matrix is a rectangular array of numbers, written mathematically as M x N (rows x columns)
- A vector is an N x 1 matrix (see the sketch below)
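A quick sketch of both definitions in NumPy:

```python
import numpy as np

M = np.array([[1, 2, 3],
              [4, 5, 6]])        # an M x N matrix: 2 rows, 3 columns
v = np.array([[1], [0], [2]])    # an N x 1 matrix, i.e. a column vector

print(M @ v)   # matrix (dot) product, shape (2, 1)
print(M * 2)   # element-wise multiplication, same shape as M
```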
- Lesson 1:12 - Non-Linear Region
- Lesson 1:13 - Error Function
- Lesson 1:14 - Softmax
- Broadened my knowledge about softmax https://developers.google.com/machine-learning/crash-course/multi-class-neural-networks/softmax
- Error function is used to detect how close we are to the goal
- Error function must be differentiable
- Error function must be continuous, not discrete
- Predictions are the answers we get from the algorithm
- The discrete step function is made continuous by swapping it for another activation function, the Sigmoid function
- Softmax normalizes a set of numbers into a probability distribution (see the sketch below)
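A minimal sketch of both functions in NumPy:

```python
import numpy as np

def sigmoid(z):
    # smooth, continuous (and differentiable) alternative to the step function
    return 1 / (1 + np.exp(-z))

def softmax(scores):
    # normalizes arbitrary scores into probabilities that sum to 1
    exps = np.exp(scores - np.max(scores))  # shift for numerical stability
    return exps / exps.sum()

print(sigmoid(0.0))                        # 0.5
print(softmax(np.array([2.0, 1.0, 0.1])))  # roughly [0.66, 0.24, 0.10]
```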
- To be proficient in AI, you need to be proficient in Object-Oriented Programming
- Completed Lesson 1
- Started Lesson 2
- A clear video on One-Hot Encoding https://www.youtube.com/watch?v=v_4KWmkwmsU
- Watched this video on an image classifier https://www.youtube.com/watch?v=cAICT4Al5Ow
- Completed Lesson 2 - Implementing Gradient Descent
- Started and Completed Lesson 3 - Training neural networks
- Trying to Style Transfer my picture, but it is not working. Trying to fix the error
- Started Lesson 4
- Read a little about style Transfer
- Completed all subtopics in Lesson 2 - Neural Networks
- Viewed my first project - Predicting Bike-Sharing Patterns
- Difference between Underfitting and Overfitting
- What is Regularization?
- L1 and L2 Regularization
- Dropout is a solution to Overfitting
- Random Restart is used to solve the problem of local minima
- Other activation functions besides Sigmoid: the Hyperbolic Tangent (tanh) function and the Rectified Linear Unit (ReLU) function (see the sketch below)
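A quick sketch of the two alternatives in NumPy:

```python
import numpy as np

def tanh(z):
    return np.tanh(z)         # zero-centred outputs in (-1, 1)

def relu(z):
    return np.maximum(0, z)   # passes positives through, zeros out negatives

z = np.array([-2.0, 0.0, 2.0])
print(tanh(z))   # [-0.96  0.    0.96]
print(relu(z))   # [0. 0. 2.]
```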
- Working on my project - Predicting Bike-sharing Patterns
- Trying to style transfer my images with the fast style transfer code
- Reading about how to build a neural network with PyTorch
https://www.datahubbs.com/deep-learning-101-first-neural-network-with-pytorch/?fbclid=IwAR2_MxZ6aAgWKunezBjgCPn4mIE_tpflWLFZ0yMv-kBDsTD8KpwGlqRrYyU
- It is very interesting to see the concepts from the class implemented in code.
- Learnt Feedforward Propagation and Backpropagation
- How to tune hyperparameters: learning rate, number of iterations, et cetera
- Working on my project - Predicting Bike-sharing Patterns
- Learning how to use GitHub professionally, using https://www.udacity.com material
- Built a multivariable Linear Regression model
- Working on my project - Predicting Bike-sharing Patterns
- Started Sentiment analysis videos
- Submitted my first project on Predicting Bike-sharing Patterns
- Continued lessons on Sentiment analysis
- I continued my course on sentiment analysis
- Dived into some documentation in Python https://docs.python.org/2/library/collections.html
- Solving the projects in Sentiment analysis videos
- Studying some python documentation
- Solving the projects in Sentiment analysis videos
- Watched some videos on Neural Network
- Read some medium article
- Attended a meetup where Linear regression and logistic regression were discussed
- Learnt about the difference in their equations
- Read some articles on neural network
- I discovered there are some similarities between AI and Robotics
- I discovered that if a linear equation is used for logistic regression, it becomes difficult to identify the local minimum.
- Revisiting Bayes Rule in Introduction to Machine Learning Udacity's video
- Read article about it https://towardsdatascience.com/what-is-bayes-rule-bb6598d8a2fd
- It is all about Algebraic mathematics
- Worked on some Python code
- Impossibility is a Mirage - Brace up
- Brushing up on Python Object-Oriented Programming
- Learning algebraic concepts. Thank you, Udacity, for the awesome material
- Revisiting sentiment analysis
- This article is enlightening https://medium.com/dsnet/chai-time-data-science-show-announcement-bfaaf38df219 Tomorrow is another day for good progress
- There is nothing as good as learning the basics
- Avoid assumptions
- Sorted out the error in my sentiment analysis code
- Rounding up the lesson and moving on to Introduction to Deep Learning with Pytorch
- Going through Introduction to Deep Learning with Pytorch
- Read about fully connected network https://www.oreilly.com/library/view/tensorflow-for-deep/9781491980446/ch04.html
- Watching Introduction to Deep Learning with Pytorch videos
- Going through the codes too
- Revised Sentiment analysis project
- Revising Backpropagation and also reading some articles
- Learning the basics has been an amazing experience - essential mathematics topics in AI
- Read some articles about AI and Deep Learning
- Clarified my doubt on Linear regression and logistic regression https://ml-cheatsheet.readthedocs.io/en/latest/logistic_regression.html
- Revising Introduction to deep learning with Pytorch
- Watched a video about element-wise operation https://www.youtube.com/watch?v=2GPZlRVhQWY
- Solving Project 4 in Sentiment analysis
- Solved Network Architecture with Pytorch
- Watched transfer learning videos
- Solved some mathematics questions
- Read some articles about Convolutional Neural Networks
- Read some articles about Recurrent Neural Networks
- Started Convolutional Neural Network
- Attended the Webinar by Sourena Yadegari (My Mentor)
- Continued with my CNN Videos
- Read some CNN articles
- Read an article on MLPs, CNNs and their applications with PyTorch
- Started the video on MLPs vs CNNs
- Learnt about Filters and Convolution layer
- Filters and Edges
- Frequency in Images as regards CNN
- MLPs flatten images into vectors, while CNNs accept matrices that preserve spatial structure
- CNNs group pixels into edges
- Frequency in images, as regards CNNs, is similar to frequency in sound waves
- MLPs are fully connected while CNNs are sparsely connected, which makes the network less dense
- Learnt about Pooling layers
- Convolution layers in PyTorch
- Image Augmentation
- Visualizing CNN
- CNNs in Pytorch
- Augmentation using transfer learning
- ReLU is mostly used in CNNs because the slope of its graph does not saturate (approach zero). Stacking convolution layers requires a constant gradient instead of a diminished value. Leaky ReLU can also be used. Parametric ReLU is a type of Leaky ReLU that figures out its slope itself. https://medium.com/@danqing/a-practical-guide-to-relu-b83ca804f1f7 https://towardsdatascience.com/activation-functions-neural-networks-1cbd9f8d91d6
- I discovered Concatenated ReLU was used here. Concatenated ReLU is used to reduce the redundancy that ReLU might have caused. It is denoted C-ReLU (see the sketch below) https://arxiv.org/pdf/1603.05201.pdf
- What is C-ReLU? https://arxiv.org/abs/1603.05201
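A minimal sketch of these variants using PyTorch's built-in modules (C-ReLU has no built-in module, so it is written out by hand):

```python
import torch
import torch.nn as nn

x = torch.tensor([-1.0, 0.0, 2.0])

relu = nn.ReLU()             # max(0, x): constant gradient for x > 0
leaky = nn.LeakyReLU(0.01)   # small fixed slope for x < 0
prelu = nn.PReLU()           # Parametric ReLU: the negative slope is learned

print(relu(x), leaky(x), prelu(x))

# C-ReLU concatenates ReLU(x) and ReLU(-x), keeping negative-phase information
crelu = torch.cat([torch.relu(x), torch.relu(-x)])
print(crelu)
```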
- Increasing the depth of the convolution layers enables you to extract more features
- Features become more subtle as more layers are added
- There are different CNN architectures that can be used for Transfer Learning (typically pre-trained on ImageNet): ResNet, VGG16
- Transfer Learning
- Weight Initialization
- Constant weight
- Normal Distribution and Random Uniform Distribution
- Using constant weights makes backpropagation fail, because it is not designed to deal with that kind of uniformity
- Backpropagation is designed to look at how different weight values affect the training loss
- The solution to the constant-weight problem is choosing random weights
- To get random weights, use a uniform distribution
- Weight initialization is about giving the model its best chance to train (see the sketch below)
- Completed my videos on Weight Initialization
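A short sketch of the two initialization schemes in PyTorch (layer sizes are illustrative):

```python
import torch.nn as nn

layer = nn.Linear(256, 128)

nn.init.constant_(layer.weight, 0.5)       # every node starts identical, so
                                           # backpropagation cannot tell them apart
nn.init.uniform_(layer.weight, -0.1, 0.1)  # random uniform weights give each
                                           # node a different starting point
```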
- Working on my second project.
- Started watching videos on Autoencoder
- Working on my second project
- An Autoencoder is used to compress images without losing their content
- Read some articles about CNN and RNN
- Article on Pytorch and Neural Network https://medium.com/dair-ai/pytorch-1-2-introduction-guide-f6fa9bb7597c
- In-depth knowledge about Haar cascade object detection in OpenCV https://docs.opencv.org/trunk/db/d28/tutorial_cascade_classifier.html
- OpenCV Haar cascade dataset https://github.com/opencv/opencv/tree/master/data/haarcascades
- Gradient Descent for Machine Learning https://machinelearningmastery.com/gradient-descent-for-machine-learning/
- More information about activation functions http://cs231n.github.io/neural-networks-1/#actfun https://www.datasciencecentral.com/profiles/blogs/deep-learning-advantages-of-relu-over-sigmoid-function-in-deep
- First research paper on Dropout https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf
- Watching a video on Numpy https://youtu.be/QUT1VHiLmmI
- Stochastic gradient descent is used when we have a large dataset.
- The sigmoid function has been the most used, but recently there is a shift away from it because of its drawbacks.
- The two drawbacks of sigmoid are: (1) sigmoid saturates and kills gradients; (2) its output is not zero-centred
- Tanh is a scaled sigmoid neuron
- Tanh non-linearity is mostly preferred to sigmoid non-linearity
- NumPy arrays are faster than Python lists
- Working on my project
- It might take longer than I expected
- Revising NumPy and matrix libraries
- How to write lower and upper triangular matrices in code (see the sketch below)
- Learnt how to use NumPy for my task
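A quick sketch of triangular matrices in NumPy:

```python
import numpy as np

A = np.arange(1, 10).reshape(3, 3)

print(np.tril(A))   # lower triangular: entries above the diagonal set to zero
print(np.triu(A))   # upper triangular: entries below the diagonal set to zero
```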
- It is always impossible till you TRY. STAY HUNGRY
- Data Augmentation
- Difference between the model architecture
- What is Transfer Learning
- Data Augmentation helps the data generalize well
- The number in front of the model architecture name indicates the number of layers; for example, VGG16 has 16 layers
- The larger the number of layers, the longer the computation time, but the lower the error.
- What is learnt from one dataset is transferred to my dataset; this is what we call TRANSFER LEARNING
- Scale Invariance is when the size of the object does not affect the prediction of the model
- Rotational Invariance is when the angular position of an object does not affect the prediction of the model
- Translation Invariance is when shifting the object to either the left or the right does not affect the prediction of the model
- Data Augmentation helps avoid overfitting (see the sketch below)
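A minimal sketch of an augmentation pipeline targeting these invariances, assuming torchvision (parameter values are illustrative):

```python
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),                 # scale invariance
    transforms.RandomRotation(15),                     # rotational invariance
    transforms.RandomAffine(0, translate=(0.1, 0.1)),  # translation invariance
    transforms.ToTensor(),
])
```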
- VGG Documentation https://arxiv.org/pdf/1409.1556.pdf https://github.com/jcjohnson/cnn-benchmarks
- Auto-encoder
- Linear Auto-encoder
- Linear Upsampling
- Convolutional Auto-encoder
- To convert coloured images to grayscale, use the Python Imaging Library (PIL). http://www.pythonware.com/products/pil/ https://www.science-emergence.com/Articles/How-to-convert-an-image-to-grayscale-using-python-/
- How to convert images to RGB https://github.com/eriklindernoren/PyTorch-YOLOv3/commit/38c246588cf7d131aac2f68cd35ba222b366c378
- An Autoencoder contains an encoder that compresses the input data and a decoder that reconstructs the data from the compressed representation
- An advantage of an Autoencoder is that it can be used to reduce the dimensionality of the input
- The key aspect of an Autoencoder is that it compresses the data while retaining its content
- Autoencoders can also be used to denoise images
- In CNNs, we can apply the Autoencoder concept using an interpolation technique like Nearest Neighbours, but this is not very effective
- A better option is the Transpose Convolutional Layer
- Upsampling is the opposite of max pooling (see the sketch below)
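A small sketch in PyTorch of how a transpose convolution undoes max pooling's downsampling (sizes are illustrative):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 16, 14, 14)   # (batch, channels, height, width)

pool = nn.MaxPool2d(2)                                          # 14x14 -> 7x7
upsample = nn.ConvTranspose2d(16, 16, kernel_size=2, stride=2)  # 7x7 -> 14x14

print(upsample(pool(x)).shape)   # torch.Size([1, 16, 14, 14])
```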
- Working on my project
- Examples of spatial data
- Python string Method https://docs.python.org/3/library/stdtypes.html#string-methods
- Images are spatial data
- A list is a mutable, ordered datatype; a string is ordered but immutable (see the sketch below)
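A quick illustration of the difference:

```python
items = [1, 2, 3]
items[0] = 99        # fine: lists are mutable and support in-place assignment
print(items)         # [99, 2, 3]

s = "abc"
print(s[0])          # "a": indexing works because strings are ordered
# s[0] = "z"         # TypeError: strings are immutable
```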
- Read an article on Autoencoder
- Attended a meet-up with my Udacity mentor
- Reviewed my code
- In-depth knowledge about control flow: conditional statements in Python
- Autoencoders
- Optimizer
- Shortcuts for navigating the Jupyter notebook
- He shared some links with me. On optimizers: http://ruder.io/optimizing-gradient-descent/index.html On Autoencoders: https://www.youtube.com/watch?v=uaaqyVS9-rM
- Learnt the characteristics of Lists, Sets, Tuples, Dictionaries, and conditional statements in Python.
- I learnt shortcuts for navigating the Jupyter notebook. For example: Esc activates command mode, and then `l` toggles line numbers
- Filled 60 days of Udacity
- I got my badge
- How to solve the error message "image file is truncated (150 bytes not processed)": use `from PIL import ImageFile` followed by `ImageFile.LOAD_TRUNCATED_IMAGES = True`
- In-depth article on the VGG16 architecture https://neurohive.io/en/popular-networks/vgg16/
- Understanding ResNet, AlexNet, VGG, Inception https://cv-tricks.com/cnn/understand-resnet-alexnet-vgg-inception/
- Validation and Training Loss https://www.pyimagesearch.com/2019/10/14/why-is-my-validation-loss-lower-than-my-training-loss/
- Research paper and related news articles:
- https://www.nature.com/articles/nature21056.epdf?author_access_token=8oxIcYWf5UNrNpHsUHd2StRgN0jAjWel9jnR3ZoTv0NXpMHRAJy8Qn10ys2O4tuPakXos4UhQAFZ750CsBNMMsISFHIKinKDMKjShCpHIlYPYUHhNzkn6pSnOCt0Ftf6
- https://fortune.com/2017/01/26/stanford-ai-skin-cancer/
- https://www.bloomberg.com/news/articles/2017-06-29/diagnosing-skin-cancer-with-google-images
- https://www.bbc.com/news/health-38717928
- https://www.wsj.com/articles/computers-turn-medical-sleuths-and-identify-skin-cancer-1486740634?emailToken=JRrzcPt+aXiegNA9bcw301gwc7UFEfTMWk7NKjXPN0TNv3XR5Pmlyrgph8DyqGWjAEd26tYY7mAuACbSgWwvV8aXkLNl1A74KycC8smailE=
- https://www.forbes.com/sites/forbestechcouncil/2017/09/27/what-can-computer-vision-do-in-the-palm-of-your-hand/#5d5e8db47a7b
- https://www.scientificamerican.com/article/deep-learning-networks-rival-human-vision1/
- RNNs deal with sequential data while CNNs deal with spatial data, like images
- Learnt how to upload images to Udacity workspace
- Read articles from MIT Technology Review. Always insightful
- Documentation in Github https://help.github.com/en/github/writing-on-github/getting-started-with-writing-and-formatting-on-github
- Read some articles on RNNs
- Backpropagation in RNNs
- Read an article on hyperparameter tunings
- Finally done with my second deep learning project
- My model is biased, so I was advised by the project reviewer to work on Data Augmentation.
- Article on Natural Language Processing (NLP).
- Difference between RNN and LSTM.
- Long Short-Term Memory
- Preparing for Project 3 - Generate TV Scripts
- RNN and LSTM
- Implementation of RNN and LSTM
- Matrix operations make training more efficient
- Batch size corresponds to the number of sequences (see the sketch after these notes)
- What are tokenizers in RNN implementations?
- Number of hidden layers and Units in RNN. https://cs231n.github.io/neural-networks-1/
- RNN Hyperparameters
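A minimal sketch of an LSTM batch in PyTorch, where the batch size is the number of sequences processed together (all sizes are illustrative):

```python
import torch
import torch.nn as nn

batch_size, seq_len, input_size, hidden_size = 32, 20, 300, 256
lstm = nn.LSTM(input_size, hidden_size, num_layers=2, batch_first=True)

x = torch.randn(batch_size, seq_len, input_size)  # 32 sequences of 20 steps
output, (h, c) = lstm(x)
print(output.shape)   # torch.Size([32, 20, 256])
```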
- Word Embeddings
- LSTM and GRU perform better than Vanilla RNN
- LSTM vs GRU https://arxiv.org/abs/1412.3555 http://proceedings.mlr.press/v37/jozefowicz15.pdf https://arxiv.org/abs/1506.02078 https://colah.github.io/posts/2015-08-Understanding-LSTMs/ https://arxiv.org/abs/1703.03906v2
- In Word Embedding, you have to be careful of word bias or incorrect source text
- The number of hidden units is the embedding dimension.
- The embedding lookup table is the weight matrix
- The embedding layer is the hidden layer (see the sketch below)
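A short sketch of those three points in PyTorch (sizes are illustrative):

```python
import torch
import torch.nn as nn

# The embedding layer is the hidden layer: a lookup table whose weight matrix
# has one row per vocabulary word and embedding_dim columns (the hidden units).
vocab_size, embedding_dim = 5000, 300
embedding = nn.Embedding(vocab_size, embedding_dim)

tokens = torch.tensor([12, 7, 451])   # token indices from a tokenizer
vectors = embedding(tokens)           # row lookups into the weight matrix
print(vectors.shape)                  # torch.Size([3, 300])
```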
- Sentiment in RNN
- Zoom call with Sourena on Word Embedding, RNN and Code explanation
- He explained Python OOP concept and shared some links https://www.reddit.com/r/learnpython/comments/5jgbiq/python_oop_series/
https://www.thedigitalcatonline.com/blog/2014/08/20/python-3-oop-part-1-objects-and-types/
- Data Augmentation https://medium.com/@thimblot/data-augmentation-boost-your-image-dataset-with-few-lines-of-python-155c2dc1baec
- Introduction to LSTM
- LSTM vs RNN
- LSTM Architecture
- Learn, Forget and Remember Gate
- Hyperparameters
- Materials on Hyperparameter
- Practical recommendations for gradient-based training of deep architectures by Yoshua Bengio - https://arxiv.org/abs/1206.5533
- Practical Methodology by Ian Goodfellow, Yoshua Bengio, Aaron Courville - http://www.deeplearningbook.org/contents/guidelines.html
- How to choose a neural network's hyper-parameters? by Michael Nielsen - http://neuralnetworksanddeeplearning.com/chap3.html#how_to_choose_a_neural_network's_hyper-parameters
- Efficient Back Prop by Yann LeCun - http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf
- How to Generate a Good Word Embedding? by Siwei Lai, Kang Liu, Liheng Xu, Jun Zhao - https://arxiv.org/abs/1507.05523
- Systematic evaluation of CNN advances on the ImageNet by Dmytro Mishkin, Nikolay Sergievskiy, Jiri Matas - https://arxiv.org/abs/1606.02228
- Visualizing and Understanding Recurrent Networks by Andrej Karpathy, Justin Johnson, Li Fei-Fei - https://arxiv.org/abs/1506.02078
- Learnt how to mount google drive
- Using Colab for my third Nanodegree Project
- Modified my code
- Working on my RNN Project
- Working on my third project
- Preprocessing the data in RNN is the hardest part of the project
- Started watching GANs videos
- Introduction to GANs
- Application of GANs
- Generator and Discriminator
- GANs are trained by running two (2) optimization algorithms simultaneously
- Game theory is used to understand GANs mathematically
- Both the discriminator and the generator have at least one hidden layer (see the sketch below)
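A minimal sketch of that setup in PyTorch, with one hidden layer each and two separate optimizers (sizes are illustrative):

```python
import torch.nn as nn
import torch.optim as optim

G = nn.Sequential(nn.Linear(100, 128), nn.ReLU(),
                  nn.Linear(128, 784), nn.Tanh())   # generator: noise -> fake image
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1))                # discriminator: real/fake score

# the two optimization algorithms that run simultaneously during training
g_optimizer = optim.Adam(G.parameters(), lr=0.0002)
d_optimizer = optim.Adam(D.parameters(), lr=0.0002)
```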
- Wrapping up my project
- Submitted my project for review
- Deepmind reinforcement learning framework https://towardsdatascience.com/deepmind-quietly-open-sourced-three-new-impressive-reinforcement-learning-frameworks-f99443910b16
- Videos on GANs https://www.youtube.com/watch?v=Sw9r8CL98N0 https://www.youtube.com/watch?v=9JpdAg6uMXs
- Article on word embedding https://lilianweng.github.io/lil-log/2017/10/15/learning-word-embedding.html
- Support Vector Machine
- SVM is robust to outliers
- https://www.google.com/amp/s/venturebeat.com/2019/12/10/deepminds-dreamer-ai-uses-the-past-to-predict-the-future/amp/
- https://venturebeat.com/2019/12/13/deepmind-proposes-novel-way-to-train-safe-reinforcement-learning-ai/
- Sign language recognition in Pytorch. https://towardsdatascience.com/sign-language-recognition-in-pytorch-5d72688f98b7
- https://www.deeplearningwizard.com/deep_learning/practical_pytorch/pytorch_convolutional_neuralnetwork/
- Videos on statistics https://www.youtube.com/channel/UCtYLUTtgS3k1Fg4y5tAhLbw
- https://apple.news/AoRVlsNoTRPeBcG0mESVNSw
- Working on Project 4 - Generate Faces (GANs)
- Reading some GANs research papers
- Submitted my GitHub account for review
- Machine Learning Workflow:
  i. Explore and process data
     - Retrieve
     - Clean and explore
     - Prepare/Transform
  ii. Modeling
     - Develop and train the model
     - Validate and evaluate the model
     - Validation sets are used for model tuning and selection
  iii. Deployment
     - Deploy to production
     - Monitor and update the model and data
- Resources:
  i. SSD [https://arxiv.org/abs/1512.02325]
  ii. YOLO [https://arxiv.org/abs/1506.02640]
  iii. Faster R-CNN [https://arxiv.org/abs/1506.01497]
  iv. MobileNet [https://arxiv.org/abs/1704.04861]
  v. ResNet [https://arxiv.org/abs/1512.03385]
  vi. Inception [https://arxiv.org/pdf/1409.4842.pdf]
- Started Deployment of Sentiment Analysis Lectures
- Learning how to use Amazon SageMaker.
- To deploy a model is to create an endpoint (see the sketch below)
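A sketch of deploying with the SageMaker Python SDK; the S3 path, role, and entry-point script here are hypothetical placeholders:

```python
from sagemaker.pytorch import PyTorchModel

model = PyTorchModel(model_data="s3://my-bucket/model.tar.gz",  # hypothetical artifact
                     role="my-sagemaker-role",                  # hypothetical IAM role
                     framework_version="0.4.0",
                     entry_point="predict.py",                  # hypothetical script
                     source_dir="serve")

# deploying the model creates the endpoint that serves predictions
predictor = model.deploy(initial_instance_count=1,
                         instance_type="ml.m4.xlarge")
```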
- Reinforcement Learning https://towardsdatascience.com/machine-learning-part-4-reinforcement-learning-43070cbd83ab
- Completed Lesson 2 of Deployment of Sentiment Analysis
- Started Lesson 3 of Deployment of Sentiment Analysis
- Revisited the previous lectures
- Read some research papers
- Read some research papers
- Did some research on how to write better code
- Deployment with Sagemaker
- Navigating Sagemaker interface
- Scraping some websites for data
- Deployment platform
- Cloud Computing
- XGBoost material
- Benefit of cloud computing Amazon : https://aws.amazon.com/what-is-cloud-computing/ Google : https://cloud.google.com/what-is-cloud-computing/ Microsoft : https://azure.microsoft.com/en-us/overview/what-is-cloud-computing/
- Cloud computing can be used for any and all parts of the machine learning process
- Technologies like containers, endpoints, and APIs (Application Programming Interfaces) also help ease the work required for deploying a model into the production environment.
- ONNX (Open Neural Network Exchange) https://onnx.ai/
- XGBoost material https://xgboost.readthedocs.io/en/latest/tutorials/model.html
- Setting up my AWS Account
- What are Instances?
- Instances are also known as Virtual Machines
- Preparing for deployment with SageMaker.
- Wrapped up deployment with SageMaker
- Started working on my last Nanodegree project
- What is a corpus?
- What is NLP?
- Applications of NLP
- Techniques for solving AI projects
- https://github.com/nscalo/swadel-monolith/tree/master/app/online/SpeechRecognition
- https://github.com/nscalo/swadel-monolith/tree/master/app/data/training/HandWrittenDigitsDataGAN
- https://github.com/HartP97/Lane-Detection
- Researching how to deploy a deep learning project using SageMaker
- Speech Emotion Recognition with Convolutional Neural Network https://towardsdatascience.com/speech-emotion-recognition-with-convolution-neural-network-1e6bb7130ce3
- Information about Amazon AWS:
  i. https://aws.amazon.com/sagemaker/pricing/
  ii. https://aws.amazon.com/sagemaker/faqs/
  iii. https://www.youtube.com/user/AmazonWebServices
- Autonomous Learning library https://medium.com/syncedreview/autonomous-learning-library-simplifies-intelligent-agent-creation-c7ec60576a3e
- [Machine Learning solutions with code](https://paperswithcode.com/)
- Improving my Python skills
- Trying to set up my AWS account
- Doing some study of the AWS documentation
- Preprocessing data for a sentiment analysis project
- Edited my data with NLTK and BeautifulSoup
- Completed my Udacity Deep Learning course
- https://towardsdatascience.com/web-scraping-5649074f3ead
- Object-oriented Reinforcement Learning https://towardsdatascience.com/object-oriented-reinforcement-learning-95c284427ea
- Studying the Mathematics of Machine Learning
- Studying Linear Algebra
- Studying Vectors
- Studying Logistic Regression
- https://medium.com/datadriveninvestor/deep-learning-for-image-segmentation-d10d19131113
- https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/
- https://en.wikipedia.org/wiki/U-Net
- https://www.marktechpost.com/free-resources/
- https://medium.com/deepquestai/object-detection-training-preparing-your-custom-dataset-6248679f0d1d
- https://medium.com/datadriveninvestor/how-to-create-custom-coco-data-set-for-object-detection-96ec91958f36
- http://cocodataset.org/#detection-2019
- https://medium.com/tektorch-ai/best-image-labeling-tools-for-computer-vision-393e256be0a0
- https://nanonets.com/blog/how-to-do-semantic-segmentation-using-deep-learning/
- https://www.mediterranee-infection.com/hydroxychloroquine-and-azithromycin-as-a-treatment-of-covid-19/
- https://nanonets.com/blog/data-augmentation-how-to-use-deep-learning-when-you-have-limited-data-part-2/
- https://cs50.harvard.edu/ai/
- https://deepsense.ai/region-of-interest-pooling-explained/
- https://medium.com/@jonathan_hui/image-segmentation-with-mask-r-cnn-ebe6d793272
- https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html
- https://github.com/VirajDeshwal/COVID-19?files=1
- Annotation Tools videos:
- https://www.youtube.com/watch?time_continue=20&v=fXalzNpYLGg&feature=emb_logo
- https://www.youtube.com/watch?time_continue=6&v=VyUwHbqJ26g&feature=emb_logo
- https://www.youtube.com/watch?v=iTvG3G84Ez4
- https://hackernoon.com/the-best-image-annotation-platforms-for-computer-vision-an-honest-review-of-each-dac7f565fea
- Data Augmentation for Deep Learning
- https://www.arunponnusamy.com/preparing-custom-dataset-for-training-yolo-object-detector.html
- On Seeing Stuff: The Perception of Materials by Humans and Machines
- https://www.programiz.com/python-programming/modules
- https://www.w3schools.com/python/python_modules.asp
- https://python-packaging-tutorial.readthedocs.io/en/latest/setup_py.html
- Building a segmentation model from scratch using Deep Learning
- Learning rate
- Batch size
- Number of Workers
- How to apply continual learning to your machine learning models
- What's New In Gartner's Hype Cycle For Emerging Technologies, 2020
- secure-data-science-reference-architecture on GitHub