3. Seong D, Yi BK. Research trends in clinical natural language processing. Commun Korean Inst Inf Sci Eng 2017;35(5):20-6.
6. Shin SY, Park YR, Shin Y, Choi HJ, Park J, Lyu Y, et al. A de-identification method for bilingual clinical texts of various note types. J Korean Med Sci 2015;30(1):7-15.
7. Lafferty J, McCallum A, Pereira FC. Conditional random fields: probabilistic models for segmenting and labeling sequence data. Proceedings of the 18th International Conference on Machine Learning (ICML); 2001 Jun 28–Jul 1. Williamstown, MA; p. 282-9.
8. Wang Y. Annotating and recognising named entities in clinical notes. Proceedings of the ACL-IJCNLP 2009 Student Research Workshop; 2009 Aug 4. Suntec, Singapore; p. 18-26.
9. Dreyfus SE. Artificial neural networks, back propagation, and the Kelley-Bryson gradient procedure. J Guid Control Dyn 1990;13(5):926-8.
11. Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput 1997;9(8):1735-80.
13. Mikolov T, Chen K, Corrado G, Dean J. Efficient estimation of word representations in vector space [Internet]. Ithaca (NY): arXiv.org; 2013 [cited at 2022 Jan 10]. Available from: https://arxiv.org/abs/1301.3781
14. fastText [Internet]. Menlo Park (CA): Facebook Inc.; 2020 [cited at 2022 Jan 10]. Available from: https://fasttext.cc/
15. Peters ME, Neumann M, Iyyer M, Gardner M, Clark C, Lee K, et al. Deep contextualized word representations [Internet]. Ithaca (NY): arXiv.org; 2018 [cited at 2022 Jan 10]. Available from: https://arxiv.org/abs/1802.05365
16. Kim JM, Lee JH. Text document classification based on recurrent neural network using word2vec. J Korean Inst Intell Syst 2017;27(6):560-5.
19. Devlin J, Chang MW, Lee K, Toutanova K. BERT: pre-training of deep bidirectional transformers for language understanding [Internet]. Ithaca (NY): arXiv.org; 2018 [cited at 2022 Jan 10]. Available from: https://arxiv.org/abs/1810.04805
20. Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R. ALBERT: a lite BERT for self-supervised learning of language representations. Proceedings of the 8th International Conference on Learning Representations (ICLR); 2020 Apr 26–30. Addis Ababa, Ethiopia.
21. Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al. RoBERTa: a robustly optimized BERT pretraining approach [Internet]. Ithaca (NY): arXiv.org; 2019 [cited at 2022 Jan 10]. Available from: https://arxiv.org/abs/1907.11692
23. Kingma DP, Ba J. Adam: a method for stochastic optimization [Internet]. Ithaca (NY): arXiv.org; 2014 [cited at 2022 Jan 10]. Available from: https://arxiv.org/abs/1412.6980
24. Yang Z, Dai Z, Yang Y, Carbonell J, Salakhutdinov RR, Le QV. XLNet: generalized autoregressive pretraining for language understanding. Adv Neural Inf Process Syst 2019;32:5754-64.
28. Kingma DP, Welling M. Auto-encoding variational Bayes [Internet]. Ithaca (NY): arXiv.org; 2013 [cited at 2022 Jan 10]. Available from: https://arxiv.org/abs/1312.6114
29. Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial nets. Adv Neural Inf Process Syst 2014;27:2672-80.
30. Alsentzer E, Murphy JR, Boag W, Weng WH, Jin D, Naumann T, et al. Publicly available clinical BERT embeddings [Internet]. Ithaca (NY): arXiv.org; 2019 [cited at 2022 Jan 10]. Available from: https://arxiv.org/abs/1904.03323