
Author: Admin | 2025-04-27

Processing (EMNLP ’22). Association for Computational Linguistics, 4163–4181. DOI:
[112] Andrey Kutuzov, Lilja Øvrelid, Terrence Szymanski, and Erik Velldal. 2018. Diachronic Word Embeddings and Semantic Shifts: A Survey. In Proceedings of the 27th International Conference on Computational Linguistics (COLING ’18). Association for Computational Linguistics, 1384–1397.
[113] Stefan Larson, Anish Mahendran, Joseph J. Peper, Christopher Clarke, Andrew Lee, Parker Hill, Jonathan K. Kummerfeld, Kevin Leach, Michael A. Laurenzano, Lingjia Tang, et al. 2019. An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP ’19). Association for Computational Linguistics, 1311–1316. DOI:
[114] Jey Han Lau, David Newman, and Timothy Baldwin. 2014. Machine Reading Tea Leaves: Automatically Evaluating Topic Coherence and Topic Model Quality. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL ’14). Association for Computational Linguistics, 530–539. DOI:
[115] Quoc Le and Tomas Mikolov. 2014. Distributed Representations of Sentences and Documents. In Proceedings of the 31st International Conference on Machine Learning (ICML ’14), JMLR Workshop and Conference Proceedings, Vol. 32. JMLR.org, 1188–1196.
[116] Daniel Leite, Rosangela Ballini, Pyramo Costa, and Fernando Gomide. 2012. Evolving Fuzzy Granular Modeling from Nonstationary Fuzzy Data Streams. Evolving Systems 3, 2 (2012), 65–79. DOI:
[117] David D. Lewis, Yiming Yang, Tony Russell-Rose, and Fan Li. 2004. RCV1: A New Benchmark Collection for Text Categorization Research. Journal of Machine Learning Research 5 (2004), 361–397.
[118] Peipei Li, Lu He, Haiyan Wang, Xuegang Hu, Yuhong Zhang, Lei Li, and Xindong Wu. 2018. Learning from Short Text Streams with Topic Drifts. IEEE Transactions on Cybernetics 48, 9 (2018), 2697–2711. DOI:
[119] Peipei Li, Yingying Liu, Yang Hu, Yuhong Zhang, Xuegang Hu, and Kui Yu. 2022. A Drift-Sensitive Distributed LSTM Method for Short Text Stream Classification. IEEE Transactions on Big Data 9, 1 (2022), 341–357. DOI:
[120] Shaohua Li, Jun Zhu, and Chunyan Miao. 2017. PSDVec: A Toolbox for Incremental and Scalable Word Embedding. Neurocomputing 237 (2017), 405–409. DOI:
[121] Jinghua Liu, Yaojin Lin, Yuwen Li, Wei Weng, and Shunxiang Wu. 2018. Online Multi-Label Streaming Feature Selection Based on Neighborhood Rough Set. Pattern Recognition 84 (2018), 273–287. DOI:
[122] Yupei Liu, Yuqi Jia, Runpeng Geng, Jinyuan Jia, and Neil Zhenqiang Gong. 2023. Prompt Injection Attacks and Defenses in LLM-Integrated Applications. arXiv:2310.12815. Retrieved from https://arxiv.org/abs/2310.12815
[123] Yang Liu, Alan Medlar, and Dorota Glowacka. 2021. Statistically Significant Detection of Semantic Shifts Using Contextual Word Embeddings. In Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems (Eval4NLP ’21). Association for Computational Linguistics, 104–113.
[124] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv:1907.11692. Retrieved from https://arxiv.org/abs/1907.11692
[125] Stuart Lloyd. 1982. Least Squares Quantization in PCM. IEEE
