<p class="ql-align-center">IPhysResearch<a href="http://iphysresearch.github.io" target="_blank"><img src="https://i.loli.net/2018/07/11/5b44e3a6a798a.jpg" alt="Background"></a></p><h1 class="ql-align-center">🍺 Teaching is Learning, Writing is Thinking 🍺</h1><hr><p class="ql-align-center"><strong>Physics | Gravitational Waves | Machine Learning | Deep Learning</strong></p><p class="ql-align-center"><a href="http://iphysresearch.github.io" target="_blank"><img src="https://img.shields.io/badge/Update-2018.8.17-green.svg?style=plastic" alt="Update"></a><a href="https://github.com/iphysresearch" target="_blank"><img src="https://img.shields.io/github/followers/iphysresearch.svg?style=social&label=Follow" alt="GITHUB"></a><a href="http://weibo.com/IPhysresearch" target="_blank"><img src="https://img.shields.io/badge/[email protected]?style=plastic" alt="Tweet"></a><a href="https://twitter.com/Herb_hewang" target="_blank"><img src="https://img.shields.io/twitter/url/https/github.com/iphysresearch/iphysresearch.github.io.svg?style=social" alt="Tweet"></a></p><p> </p><p><br></p><p><a href="about:blank" target="_blank">Welcome!</a></p><p><a href="about:blank" target="_blank">About</a></p><p><a href="about:blank" target="_blank">How to comment</a></p><p><a href="about:blank" target="_blank">My Learning Notes on ...</a></p><p><a href="about:blank" target="_blank">Data Science Courses</a></p><p><a href="about:blank" target="_blank">CS231n</a></p><p><a href="about:blank" target="_blank">Books</a></p><p><a href="about:blank" target="_blank">Data Analysis in Gravitational Wave Detection</a></p><p><a href="about:blank" target="_blank">Paper Summary</a></p><p><a href="about:blank" target="_blank">🌈 GW astronomy</a></p><p><a href="about:blank" target="_blank">🏄 Survey & Review</a></p><p><a href="about:blank" target="_blank">🏃 ImageNet Evolution</a></p><p><a 
href="about:blank" target="_blank">🥅 Model</a></p><p><a href="about:blank" target="_blank">⛷ Optimization</a></p><p><a href="about:blank" target="_blank">My Blog Posts</a></p><p><a href="about:blank" target="_blank">My Github Projects</a></p><p> </p><p><br></p><h1>Welcome!</h1><h2>About</h2><p>Thanks for visiting!</p><p>I'm a PhD candidate majoring in theoretical physics. I love to share knowledge, and I have a keen passion for scientific research on data analysis for <strong><em>gravitational-wave</em></strong> (GW) detection and on <strong><em>deep learning</em></strong> (DL) technologies.</p><p>Most of the blog posts and notes here are written in <strong>Chinese</strong>, and future updates will gradually bring them to <em>completion</em>. After some years of study, I hope to have accumulated a sufficient body of knowledge and arrived at a view of my own. Thus, as <em>S. Chandrasekhar</em> notes, "I have the urge to present my point of view <em>ab initio</em>, in a coherent account with order, form, and structure."</p><blockquote>"My scientific work has followed a certain pattern motivated, principally, by <em>a quest after perspectives</em>." —— S. Chandrasekhar</blockquote><p>This site is currently <strong>under construction</strong>; I will make updates weekly, and I look forward to resuming blog posts in the fall.</p><p><br></p><h2>How to comment</h2><p>With the <a href="https://hypothes.is/" target="_blank">hypothes.is</a> extension (in the right sidebar), you can highlight and annotate any passage and discuss these notes inline on <em>any page</em> and <em>post</em>.</p><p><em>Please feel free</em> to let me know and <em>share</em> your thoughts here.</p><p> </p><p><br></p><hr><h1>My Learning Notes on ...</h1><blockquote>"<em>Men Learn While They Teach</em>" —— Seneca</blockquote><h2>Data Science Courses</h2><h3>CS231n</h3><ul><li>From lecture videos and slides</li><li class="ql-indent-1"><a href="about:blank" target="_blank">Lecture 1. 
Computer vision overview & Historical context</a></li><li class="ql-indent-1"><a href="about:blank" target="_blank">Lecture 2. Image Classification & K-nearest neighbor</a></li><li class="ql-indent-1"><a href="about:blank" target="_blank">Lecture 3. Loss Functions and Optimization</a></li><li class="ql-indent-1">Lecture 4. Introduction to Neural Networks</li><li class="ql-indent-1">Lecture 5. Convolutional Neural Networks</li><li class="ql-indent-1">Lecture 6. Training Neural Networks, part I</li><li class="ql-indent-1">Lecture 7. Training Neural Networks, part II</li><li class="ql-indent-1">Lecture 8. Deep Learning Hardware and Software</li><li class="ql-indent-1">Lecture 9. CNN Architectures</li><li class="ql-indent-1">Lecture 10. Recurrent Neural Networks</li><li class="ql-indent-1">Lecture 11. Detection and Segmentation</li><li class="ql-indent-1">Lecture 12. Generative Models</li><li class="ql-indent-1">Lecture 13. Visualizing and Understanding</li><li class="ql-indent-1">Lecture 14. 
Deep Reinforcement Learning</li><li class="ql-indent-1">Lecture 15.</li><li>From course notes</li><li class="ql-indent-1"><a href="about:blank" target="_blank">Image Classification (图像分类)</a>: L1/L2 distances, hyperparameter search, cross-validation</li><li class="ql-indent-1"><a href="about:blank" target="_blank">Linear Classification (线性分类)</a>: parametric approach, bias trick, hinge loss, cross-entropy loss, L2 regularization, web demo</li><li class="ql-indent-1"><a href="about:blank" target="_blank">Optimization (最优化)</a>: optimization landscapes, local search, learning rate, analytic/numerical gradient</li><li>Others</li><li class="ql-indent-1"><a href="about:blank" target="_blank">A Story about Neural Networks (一段关于神经网络的故事)</a> (<strong>Original</strong>, 30,671 characters + many figures)</li></ul><p> </p><p><br></p><h2>Books</h2><ul><li><a href="about:blank" target="_blank">Beginning Python, 3rd ed. (Python 基础教程(第3版))</a> (<strong>Annotations</strong>)</li></ul><p> </p><p><br></p><h2>Data Analysis in Gravitational Wave Detection</h2><p> </p><p><br></p><hr><h1>Paper Summary</h1><blockquote><strong>Please note that these posts are for my future self to review the material in these papers without reading them all over again.</strong> (Inspired by <a href="https://jaedukseo.me" target="_blank">Jae Duk Seo</a>, and also referring to the <a href="https://github.com/floodsung/Deep-Learning-Papers-Reading-Roadmap" target="_blank">Deep Learning Papers Reading Roadmap</a> & <a href="https://github.com/terryum/awesome-deep-learning-papers" target="_blank">Awesome - Most Cited Deep Learning Papers</a>)</blockquote><p> </p><p>I have keys but no locks. I have space but no room. You can enter but can't leave. What am I?</p><p>A keyboard.</p><p> </p><p><br></p><h2>🌈 GW astronomy</h2><ul><li><br></li></ul><p> </p><p><br></p><h2>🏄 Survey & Review</h2><ul><li>[Paper Summary] LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. "<strong>Deep learning</strong>." 
<strong>(Three Giants' Survey)</strong></li></ul><h2>🏃 ImageNet Evolution</h2><blockquote>Deep Learning broke out from here</blockquote><ul><li>[Paper Summary] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "<strong>Imagenet classification with deep convolutional neural networks</strong>." (2012). <strong>(AlexNet, Deep Learning Breakthrough)</strong></li><li>[Paper Summary] Simonyan, Karen, and Andrew Zisserman. "<strong>Very deep convolutional networks for large-scale image recognition</strong>." (2014). <strong>(VGGNet, neural networks become very deep!)</strong></li><li>[Paper Summary] Szegedy, Christian, et al. "<strong>Going deeper with convolutions</strong>." (2015). <strong>(GoogLeNet)</strong></li><li>[Paper Summary] He, Kaiming, et al. "<strong>Deep residual learning for image recognition</strong>." (2015). <strong>(ResNet, very, very deep networks, CVPR best paper)</strong></li></ul><h2>🥅 Model</h2><ul><li>[Paper Summary] Hinton, Geoffrey E., et al. "<strong>Improving neural networks by preventing co-adaptation of feature detectors</strong>." (2012). <strong>(Dropout)</strong></li><li>[Paper Summary] Srivastava, Nitish, et al. "<strong>Dropout: a simple way to prevent neural networks from overfitting</strong>." (2014).</li><li>[Paper Summary] Ioffe, Sergey, and Christian Szegedy. "<strong>Batch normalization: Accelerating deep network training by reducing internal covariate shift</strong>." (2015). <strong>(An outstanding work in 2015)</strong></li><li>[Paper Summary] Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. "<strong>Layer normalization</strong>." (2016). <strong>(Update of Batch Normalization)</strong></li><li>[Paper Summary] Courbariaux, Matthieu, et al. "<strong>Binarized Neural Networks: Training Neural Networks with Weights and Activations Constrained to +1 or −1</strong>." <strong>(New model, fast)</strong></li><li>[Paper Summary] Jaderberg, Max, et al. "<strong>Decoupled neural interfaces using synthetic gradients</strong>." 
(2016). <strong>(Innovation in training method, amazing work)</strong></li><li>[Paper Summary] Chen, Tianqi, Ian Goodfellow, and Jonathon Shlens. "<strong>Net2net: Accelerating learning via knowledge transfer</strong>." (2015). <strong>(Modify a previously trained network to reduce training epochs)</strong></li><li>[Paper Summary] Wei, Tao, et al. "<strong>Network Morphism</strong>." (2016). <strong>(Modify a previously trained network to reduce training epochs)</strong></li></ul><p> </p><p><br></p><h2>⛷ Optimization</h2><ul><li>[Paper Summary] Sutskever, Ilya, et al. "<strong>On the importance of initialization and momentum in deep learning</strong>." (2013). <strong>(Momentum optimizer)</strong></li><li>[Paper Summary] Kingma, Diederik, and Jimmy Ba. "<strong>Adam: A method for stochastic optimization</strong>." (2014). <strong>(Maybe the most often used currently)</strong></li><li>[Paper Summary] Andrychowicz, Marcin, et al. "<strong>Learning to learn by gradient descent by gradient descent</strong>." (2016). <strong>(Neural optimizer, amazing work)</strong></li><li>[Paper Summary] Han, Song, Huizi Mao, and William J. Dally. "<strong>Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding</strong>." (2015). <strong>(ICLR best paper; a new direction for making NNs run fast; DeePhi tech startup)</strong></li><li>[Paper Summary] Iandola, Forrest N., et al. "<strong>SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and &lt;1MB model size</strong>." 
(2016). <strong>(Also a new direction for optimizing NNs; DeePhi tech startup)</strong></li></ul><p> </p><p><br></p><hr><h1>My Blog Posts</h1><ul><li><a href="about:blank" target="_blank">Personal Plans and Goals for 2018 (2018年个人计划和目标)</a></li><li><a href="about:blank" target="_blank">My Take on Getting Started with Data Science (数据科学入门之我谈, 2017)</a></li><li><a href="about:blank" target="_blank">The S_Dbw Clustering Validity Index: A Full Code Walkthrough (S_Dbw 聚类评估指标)</a></li><li><a href="about:blank" target="_blank">Training Neural Networks with Mixed Precision: Theory and Practice</a></li></ul><p> </p><p><br></p><hr><h1>My Github Projects</h1><h4><a href="https://github.com/iphysresearch/DataSciComp/" target="_blank">DataSciComp</a> <a href="https://github.com/iphysresearch/DataSciComp/watchers" target="_blank"><img src="https://img.shields.io/github/watchers/iphysresearch/DataSciComp.svg?style=social" alt="Github Watch Badge"></a><a href="https://github.com/iphysresearch/DataSciComp/stargazers" target="_blank"><img src="https://img.shields.io/github/stars/iphysresearch/DataSciComp.svg?style=social" alt="Github Star Badge"></a></h4><blockquote>A collection of popular Data Science Competitions</blockquote><h4><a href="https://github.com/iphysresearch/TOP250movie_douban" target="_blank">TOP250movie_douban</a> <a href="https://github.com/iphysresearch/TOP250movie_douban/watchers" target="_blank"><img src="https://img.shields.io/github/watchers/iphysresearch/TOP250movie_douban.svg?style=social" alt="Github Watch Badge"></a><a href="https://github.com/iphysresearch/TOP250movie_douban/stargazers" target="_blank"><img src="https://img.shields.io/github/stars/iphysresearch/TOP250movie_douban.svg?style=social" alt="Github Star Badge"></a></h4><blockquote>Short reviews of the Douban Top 250 movies: a Scrapy crawler, data cleaning/analysis, and a Chinese-text sentiment-analysis model</blockquote><h4><a href="https://github.com/iphysresearch/S_Dbw_validity_index" target="_blank">S_Dbw_validity_index</a> <a href="https://github.com/iphysresearch/S_Dbw_validity_index/watchers" target="_blank"><img 
src="https://img.shields.io/github/watchers/iphysresearch/S_Dbw_validity_index.svg?style=social" alt="Github Watch Badge"></a><a href="https://github.com/iphysresearch/S_Dbw_validity_index/stargazers" target="_blank"><img src="https://img.shields.io/github/stars/iphysresearch/S_Dbw_validity_index.svg?style=social" alt="Github Star Badge"></a></h4><blockquote>S_Dbw validity index | a full walkthrough of the code; see also the <a href="about:blank" target="_blank">blog post</a>.</blockquote><p> </p><p><br></p><hr><p class="ql-align-center"><a href="http://iphysresearch.github.io" target="_blank"><img src="https://i.loli.net/2018/07/11/5b44d8c9d094f.jpeg" alt="Background"></a></p><blockquote>The content on this site is the author's original work. If you notice any intellectual-property or copyright issue, or a theoretical error, please point it out.</blockquote><blockquote>When reposting, please credit the original author and source. Thank you.</blockquote><blockquote><a href="http://creativecommons.org/licenses/by-nc-sa/4.0/" target="_blank"><img src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" alt="Creative Commons License"></a></blockquote><blockquote><a href="http://iphysresearch.github.io" target="_blank">IPhysResearch</a> · <a href="http://iphysresearch.github.io" target="_blank">土豆</a> · <a href="http://iphysresearch.github.io" target="_blank">Herb</a> · <a href="http://iphysresearch.github.io" target="_blank">He Wang</a> © 2018 (under construction)</blockquote><p> { "openSidebar": false, "showHighlights": true, "theme": "classic", "enableExperimentalNewNoteButton": true } </p><p><br></p>
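<p>The JSON fragment above is the hypothes.is client configuration mentioned in the "How to comment" section (note that <code>theme</code> must be a quoted string, e.g. <code>"classic"</code>, for the JSON to parse). A minimal sketch of how such a configuration is typically embedded in a page, following the hypothes.is publisher documentation; the setting values simply mirror the ones above:</p>

```html
<!-- hypothes.is reads configuration from a script tag with this class;
     it must be valid JSON and appear before embed.js is loaded -->
<script type="application/json" class="js-hypothesis-config">
  {
    "openSidebar": false,
    "showHighlights": true,
    "theme": "classic",
    "enableExperimentalNewNoteButton": true
  }
</script>
<!-- Load the hypothes.is annotation client -->
<script async src="https://hypothes.is/embed.js"></script>
```

<p>With this in place, visitors get the highlight/annotate sidebar on every page without installing a browser extension.</p>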