Browsing by Subject "Bagging"
Degree of mispricing with the Black-Scholes model and nonparametric cures (Peking University Press, 2003)
Gençay, R.; Salih, A.
The Black-Scholes pricing errors are larger for deep out-of-the-money options than for near out-of-the-money options, and the mispricing worsens with increased volatility. Our results indicate that the Black-Scholes model is not the proper pricing tool in high-volatility situations, especially for very deep out-of-the-money options. Feedforward networks provide more accurate pricing estimates for deeper out-of-the-money options and handle pricing during high volatility with considerably lower errors for out-of-the-money call and put options. This could be invaluable information for practitioners, as option pricing is a major challenge during high-volatility periods.

A novel online stacked ensemble for multi-label stream classification (ACM, 2018)
Büyükçakır, Alican; Bonab, H.; Can, Fazlı
As data streams become more prevalent, the necessity for online algorithms that mine this transient and dynamic data becomes clearer. Multi-label data stream classification is a supervised learning problem where each instance in the data stream is classified into one or more of a pre-defined set of labels. Many methods have been proposed to tackle this problem, including but not limited to ensemble-based methods. Some of these ensemble-based methods are specifically designed to work with certain multi-label base classifiers; others employ online bagging schemes to build their ensembles. In this study, we introduce a novel online and dynamically-weighted stacked ensemble for multi-label classification, called GOOWE-ML, that utilizes spatial modeling to assign optimal weights to its component classifiers. Our model can be used with any existing incremental multi-label classification algorithm as its base classifier. We conduct experiments with 4 GOOWE-ML-based multi-label ensembles and 7 baseline models on 7 real-world datasets from diverse areas of interest. Our experiments show that GOOWE-ML ensembles consistently yield better predictive performance than the other prominent ensemble models on almost all of the datasets.
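The first abstract above evaluates the closed-form Black-Scholes price for out-of-the-money options. As a point of reference, here is a minimal sketch of the standard Black-Scholes European call formula (the model the paper finds to misprice deep out-of-the-money options; the feedforward-network alternative is not sketched here). The function name and parameter choices are illustrative, not taken from the paper.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Closed-form Black-Scholes price of a European call.

    S: spot price, K: strike, T: time to expiry (years),
    r: risk-free rate, sigma: volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative comparison (hypothetical parameters): with spot 100,
# a K=140 call is "deep" out-of-the-money, K=105 is "near".
deep = black_scholes_call(100, 140, 0.25, 0.01, 0.2)
near = black_scholes_call(100, 105, 0.25, 0.01, 0.2)
```

The deep out-of-the-money price is a small fraction of the near out-of-the-money price, and both rise with `sigma`; it is in these deep, high-volatility regions that the paper reports the largest model errors.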
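The second abstract mentions online bagging schemes as one way to build stream ensembles. Matching the page's subject ("Bagging"), here is a minimal sketch of Oza-and-Russell-style online bagging for multi-label streams, where each component learner trains on each arriving instance k ~ Poisson(1) times. The toy base learner and class names are my own illustration; a simple majority vote stands in for GOOWE-ML's spatially optimized weights, which this sketch does not implement.

```python
import math
import random
from collections import defaultdict

class MajorityLabelLearner:
    """Toy incremental multi-label base learner: keeps per-label
    present/absent counts and predicts labels seen more often present."""
    def __init__(self):
        self.pos = defaultdict(int)
        self.neg = defaultdict(int)

    def partial_fit(self, x, labels, all_labels):
        for lab in all_labels:
            if lab in labels:
                self.pos[lab] += 1
            else:
                self.neg[lab] += 1

    def predict(self, x, all_labels):
        return {lab for lab in all_labels if self.pos[lab] > self.neg[lab]}

def poisson1(rng):
    """Draw k ~ Poisson(lambda=1) with Knuth's algorithm."""
    limit, k, p = math.exp(-1.0), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= limit:
            return k - 1

class OnlineBaggingML:
    """Online bagging: each learner trains on each incoming
    instance k ~ Poisson(1) times; prediction is a majority vote."""
    def __init__(self, n_learners=5, seed=0):
        self.rng = random.Random(seed)
        self.learners = [MajorityLabelLearner() for _ in range(n_learners)]

    def partial_fit(self, x, labels, all_labels):
        for learner in self.learners:
            for _ in range(poisson1(self.rng)):
                learner.partial_fit(x, labels, all_labels)

    def predict(self, x, all_labels):
        votes = defaultdict(int)
        for learner in self.learners:
            for lab in learner.predict(x, all_labels):
                votes[lab] += 1
        return {lab for lab in all_labels
                if votes[lab] > len(self.learners) / 2}
```

Any incremental multi-label classifier could replace `MajorityLabelLearner` as the base learner, which mirrors the abstract's point that the ensemble wrapper is agnostic to its components.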