
Detailed Record

Author: Chun-Ming Huang (黃俊銘)
Title: An Automatic P Phase Picking Toolbox Using Deep Learning Method (利用深度學習為基礎的 P 波自動挑波套件)
Advisors: Hao Kuo-Chen (郭陳澔), Chien-Ying Wang (王乾盈)
Degree: Master's
Institution: National Central University
Department: Department of Earth Sciences
Student ID: 106622021
Year of Publication: 2020 (ROC year 109)
Academic Year of Graduation: 2019-2020 (ROC year 108)
Language: Chinese
Pages: 57
Keywords: P arrival, deep learning, auto-picking
Abstract (Chinese):
Taiwan lies on a seismic belt, and thousands of earthquakes occur there every month, but there is often not enough manpower to process the resulting data. With today's rapid technological progress, artificial intelligence can help: for a single, well-defined task, computers can greatly reduce the human workload. Within 12 days of the 2018 Hualien earthquake, the temporary seismic network detected more than 4,000 aftershocks. Although many papers have proposed deep learning solutions, a considerable gap remains between academic research and practical workflows, so we want a toolbox that can be quickly integrated into a production pipeline.
In this study, we use PhaseNet, proposed by Zhu et al. in 2018, as our design blueprint. Its key concept is to replace the P-arrival time with a Gaussian distribution, representing the manual picking error, as the training label. In addition, we replace the neural network in PhaseNet with the newer UNet++, whose redesigned skip pathways converge faster than the original U-Net. We open-sourced the resulting toolbox, SeisNN, which uses Obspy as the main I/O library, reads SEISAN S-files to generate training datasets, and uses TensorFlow 2.0 as the main deep learning framework. We also provide a Docker image and a Dockerfile for fast installation and a consistent development environment.
In our laboratory's use case, most of the collected seismic data contain only the vertical component, and we only need P arrivals for subsequent velocity analysis, so we reduced the three-component input of the original PhaseNet to a single component, and reduced the output from three classes (P, S, and noise) to P arrivals only. We trained on three datasets: the Meinong dataset (13,357 records), the Hualien 2017 dataset (30,852 records), and the Hualien 2018 dataset (56,223 records). After training, the F1 scores were 0.729, 0.918, and 0.925, respectively.
Using the data accumulated in our laboratory over the years, this study successfully simplifies the tedious phase-picking process with artificial intelligence; once the method matures, it can be integrated into existing workflows.
Abstract (English):
Thousands of earthquakes take place in Taiwan within a single month, but there are not enough researchers to process the data. Nowadays, researchers can leverage the power of artificial intelligence to accomplish repetitive tasks. After the 2018 Hualien earthquake, our temporary station network detected over 4,000 aftershocks within 12 days. Although numerous methods have been proposed to tackle this problem, we need a real-world solution for our routine workflow.
In this research, we design our toolbox based on PhaseNet, proposed by Zhu et al. in 2018. The main concept is to label the phase picking time with a Gaussian distribution mask that represents the picking error. Furthermore, we swap the architecture from the well-known U-Net to its successor, UNet++, whose redesigned skip pathways help the model converge faster. We developed our package, SeisNN, and released its source code to the public; it uses Obspy for data I/O, reads SEISAN S-files to generate training data, and uses TensorFlow 2.0 as the main deep learning framework. In addition, we provide a Docker image and a Dockerfile for fast deployment and a uniform environment.
In our scenario, most of the data we recover from the field contain only the Z component, and we only need the P arrivals for further processing, so we shrink the network output from P, S, and noise down to P arrivals only. We test our model on three different datasets: the Meinong dataset with 13,357 training samples, the Hualien 2017 dataset with 30,852, and the Hualien 2018 dataset with 56,223. After training, the corresponding F1 scores are 0.729, 0.918, and 0.925.
In summary, this research utilizes the historical data in our lab and successfully simplifies the cumbersome picking process with artificial intelligence. Once the toolbox is fully developed, it can be integrated into our routine workflow.
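The Gaussian-mask labeling described above can be sketched as follows. This is a minimal illustration, not the actual SeisNN API: the function name and the `sigma` default (pick uncertainty in samples) are assumptions for demonstration.

```python
import numpy as np

def gaussian_label(n_samples, pick_index, sigma=10.0):
    """Soft training label: a Gaussian centered on the P-arrival sample.

    Instead of a hard 0/1 spike at the pick time, the target encodes
    the manual picking uncertainty (sigma, in samples) as a bell curve,
    which gives the network a smoother objective to learn.
    """
    x = np.arange(n_samples)
    return np.exp(-0.5 * ((x - pick_index) / sigma) ** 2)

# Example: a 30-second trace at 100 Hz with a pick at sample 1500.
label = gaussian_label(3001, pick_index=1500, sigma=10.0)
```

The mask peaks at 1.0 exactly at the pick and decays symmetrically, so samples a few tenths of a second away from the pick still contribute a partial positive target.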
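The F1 scores reported above combine precision and recall. A minimal sketch of that computation from confusion-matrix counts (the counts in the example call are illustrative only, not the thesis's actual numbers):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from confusion-matrix counts.

    tp: model picks that match a manual pick (true positives)
    fp: spurious model picks (false positives)
    fn: manual picks the model missed (false negatives)
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts only:
p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=10)
```

F1 is the harmonic mean of precision and recall, so a model cannot score well by over-picking (which inflates fp) or under-picking (which inflates fn) alone.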
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Tables
1. Introduction
1-1 Motivation and Objectives
1-2 Thesis Structure
2. Literature Review
2-1 Fundamentals of Manual Phase Picking
2-2 Traditional Automatic Picking
3. Methods
3-1 Artificial Neural Networks
3-1-1 Perceptron
3-1-2 Activation Functions
3-1-3 Multilayer Perceptron
3-1-4 Loss Functions
3-1-5 Gradient Descent
3-1-6 Backpropagation
3-1-7 Optimizers
3-1-8 Learning Curves
3-2 Convolutional Neural Networks
3-3 Semantic Segmentation
3-3-1 FCN
3-3-2 U-Net
3-3-3 UNet++
3-3-4 PhaseNet
3-4 Evaluation Methods
3-4-1 Signal-to-Noise Ratio
3-4-2 Confusion Matrix
3-4-3 Precision, Recall, and F1 Score
4. Experimental Workflow
4-1 Defining the Training Target
4-2 Earthquake Datasets
4-2-1 Meinong Dataset
4-2-2 Hualien Dataset
4-3 Data Preprocessing
4-4 Data Pipeline Construction
4-5 Model Architecture
4-6 Model Training
4-7 Model Evaluation and Output
5. Results
5-1 Meinong Dataset
5-2 Hualien 2017 Dataset
5-3 Hualien 2018 Dataset
6. Discussion and Conclusions
6-1 Dataset Comparison
6-2 Conclusions
References
Diehl, T., & Kissling, E. (2007). Users guide for consistent phase picking at local to regional scales. Institute of Geophysics, ETH Zurich, Switzerland.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A. & Bengio, Y. (2014). Generative adversarial nets. In Advances in neural information processing systems (pp. 2672-2680).
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097-1105).
LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324.
Long, J., Shelhamer, E., & Darrell, T. (2015). Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3431-3440).
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8).
Ronneberger, O., Fischer, P., & Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention (pp. 234-241). Springer, Cham.
Sculley, D., Holt, G., Golovin, D., Davydov, E., Phillips, T., Ebner, D., Chaudhary, V., Young, M., Crespo, J.F., & Dennison, D. (2015). Hidden technical debt in machine learning systems. In Advances in neural information processing systems (pp. 2503-2511).
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T. & Hassabis, D. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484.
Zhou, Z., Siddiquee, M. M. R., Tajbakhsh, N., & Liang, J. (2018). Unet++: A nested u-net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support (pp. 3-11). Springer, Cham.
Zhu, W., & Beroza, G. C. (2018). PhaseNet: A Deep-Neural-Network-Based Seismic Arrival Time Picking Method. arXiv preprint arXiv:1803.03211.