A Weakly Supervised Learning Approach to Anomaly Detection on Cloud Server Configuration


Author(s): Qiuyu Tian, Hongwei Tang, Xiaohong Wang


DOI: 10.18483/ijSci.2779 · Volume 13, July 2024 · pp. 41–51

Abstract

Cloud computing platforms have become increasingly popular across various industries, offering publicly accessible computing, storage, and network solutions to meet the demands of building, scaling, and managing applications. A critical component of these platforms is the recommendation system, which significantly influences customer experience and platform revenue. However, variations in customer behavior and product attributes result in different recommendation scenarios across platforms. One key scenario faced by customers of cloud computing platforms is configuration selection. In this paper, we present a novel approach to detecting potentially misconfigured cloud servers in this scenario. Our method applies weakly supervised learning, treating server lifetime as a weak supervisory signal for the configuration anomaly detection model. By implementing this configuration check, we can prevent customers from purchasing misconfigured products, thus promoting a stable and satisfactory relationship between cloud computing platforms and their customers.
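The core idea — deriving weak labels from server lifetime and using them to score configurations for anomalies — can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's actual model: the lifetime threshold, the (vCPU, memory) feature pair, and the mean-absolute-z-score rule are all assumptions introduced here, chosen to show how a weak lifetime signal defines the "normal" population against which configurations are scored.

```python
# Hypothetical sketch: weak labels from server lifetime drive a simple
# configuration anomaly score. The threshold, features, and scoring rule
# are illustrative assumptions, not the method described in the paper.
from statistics import mean, stdev

LIFETIME_THRESHOLD_DAYS = 7  # assumption: very short-lived servers proxy for misconfiguration

def weak_label(lifetime_days: float) -> int:
    """Return 1 (weakly anomalous) for short-lived servers, else 0 (weakly normal)."""
    return 1 if lifetime_days < LIFETIME_THRESHOLD_DAYS else 0

def anomaly_scores(configs, lifetimes):
    """Score each configuration by its distance from the weakly-normal population.

    'Normal' is defined purely by the weak lifetime signal; the score is the
    mean absolute z-score across configuration dimensions.
    """
    normal = [c for c, t in zip(configs, lifetimes) if weak_label(t) == 0]
    dims = len(configs[0])
    mu = [mean(c[d] for c in normal) for d in range(dims)]
    sd = [stdev(c[d] for c in normal) or 1.0 for d in range(dims)]  # guard zero spread
    return [mean(abs(c[d] - mu[d]) / sd[d] for d in range(dims)) for c in configs]

# Toy data: (vCPUs, memory GiB); the last server pairs a tiny CPU with huge memory
# and died after 3 days, so the weak signal flags it and the score is largest there.
configs = [(4, 16), (8, 32), (4, 16), (8, 32), (2, 256)]
lifetimes = [400, 380, 350, 420, 3]
scores = anomaly_scores(configs, lifetimes)
```

In a deployed check, the score would be thresholded before purchase so that an unusually imbalanced configuration triggers a review rather than a sale.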

Keywords

Weakly Supervised Learning, Anomaly Detection, Cloud Computing



International Journal of Sciences is an Open Access journal.
This article is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) License.
The author(s) retain copyright of this article; publication rights are with Alkhaer Publications.
