Browsing by Author "Cinbiş, R. G."
Now showing 1 - 6 of 6
Item (Open Access): Fine-grained object recognition and zero-shot learning in multispectral imagery (IEEE, 2018)
Sümbül, Gencer; Aksoy, Selim; Cinbiş, R. G.
We present a method for the fine-grained object recognition problem, which aims to recognize the type of an object among a large number of sub-categories, under a zero-shot learning scenario on multispectral images. To establish a relation between seen classes and new unseen classes, a compatibility function between image features extracted from a convolutional neural network and auxiliary class information is learned. Knowledge transfer to unseen classes is carried out by maximizing this function. The model's performance (15.2%), evaluated with manually annotated attributes, a natural language model, and a scientific taxonomy as auxiliary information, is promisingly better than that of the other methods on 16 test classes.

Item (Open Access): Fire detection in infrared video using wavelet analysis (SPIE - International Society for Optical Engineering, 2007)
Töreyin, B. U.; Cinbiş, R. G.; Dedeoğlu, Y.; Çetin, A. Enis
A novel method to detect flames in infrared (IR) video is proposed. Image regions containing flames appear as bright regions in IR video. In addition to ordinary motion and brightness clues, the flame flicker process is detected by using a hidden Markov model (HMM) describing its temporal behavior. IR image frames are also analyzed spatially: boundaries of flames are represented in the wavelet domain, and the high-frequency nature of fire-region boundaries is used as a further clue to model the flame flicker. All of the temporal and spatial clues extracted from the IR video are combined to reach a final decision. False alarms due to ordinary bright moving objects are greatly reduced thanks to the HMM-based flicker modeling and the wavelet-domain boundary modeling.

Item (Open Access): Image mining using directional spatial constraints (Institute of Electrical and Electronics Engineers, 2010-01)
Aksoy, S.; Cinbiş, R. G.
Spatial information plays a fundamental role in building high-level content models for supporting analysts' interpretations and automating geospatial intelligence. We describe a framework for modeling directional spatial relationships among objects and using this information for contextual classification and retrieval. The proposed model first identifies image areas that have a high degree of satisfaction of a spatial relation with respect to several reference objects. This information is then incorporated into the Bayesian decision rule as spatial priors for contextual classification. The model also supports dynamic queries by using directional relationships as spatial constraints, enabling object detection based on the properties of individual objects as well as their spatial relationships to other objects. Comparative experiments using high-resolution satellite imagery illustrate the flexibility and effectiveness of the proposed framework in image mining, with significant improvements in both classification and retrieval performance.

Item (Open Access): Key protected classification for collaborative learning (Elsevier, 2020)
Sarıyıldız, Mert Bülent; Cinbiş, R. G.; Ayday, Erman
Large-scale datasets play a fundamental role in training deep learning models. However, dataset collection is difficult in domains that involve sensitive information. Collaborative learning techniques provide a privacy-preserving solution by enabling training over a number of private datasets that are not shared by their owners. However, it has recently been shown that existing collaborative learning frameworks are vulnerable to an active adversary that runs a generative adversarial network (GAN) attack. In this work, we propose a novel classification model that is resilient against such attacks by design.
More specifically, we introduce a key-based classification model and a principled training scheme that protects class scores by using class-specific private keys, which effectively hide the information necessary for a GAN attack. We additionally show how to utilize high-dimensional keys to improve robustness against attacks without increasing the model complexity. Our detailed experiments demonstrate the effectiveness of the proposed technique. Source code will be made available at https://github.com/mbsariyildiz/key-protected-classification.

Item (Open Access): Weakly supervised deep convolutional networks for fine-grained object recognition in multispectral images (Institute of Electrical and Electronics Engineers Inc., 2019)
Aygüneş, Bulut; Aksoy, Selim; Cinbiş, R. G.
The challenging task of training object detectors for fine-grained classification faces additional difficulties when there are registration errors between the image data and the ground truth. We propose a weakly supervised learning methodology for the classification of 40 types of trees using fixed-sized multispectral images that carry a class label but no exact knowledge of the object location. Our approach consists of an end-to-end trainable convolutional neural network with separate branches for learning class-specific and location-specific scoring of image regions. Comparative experiments show that the proposed method simultaneously learns to detect and classify the objects of interest with high accuracy.

Item (Open Access): Weakly supervised instance attention for multisource fine-grained object recognition with an application to tree species classification (Elsevier BV, 2021-06)
Aygüneş, Bulut; Cinbiş, R. G.; Aksoy, Selim
Multisource image analysis that leverages complementary spectral, spatial, and structural information benefits fine-grained object recognition, which aims to classify an object into one of many similar subcategories.
However, for multisource tasks that involve relatively small objects, even the smallest registration errors can introduce high uncertainty in the classification process. We approach this problem from a weakly supervised learning perspective, in which the input images correspond to larger neighborhoods around the expected object locations: an object with a given class label is present somewhere in the neighborhood, but its exact location is unknown. The proposed method uses a single-source deep instance attention model with parallel branches for joint localization and classification of objects, and extends this model into a multisource setting where a reference source that is assumed to have no location uncertainty aids the fusion of multiple sources at four different levels: probability level, logit level, feature level, and pixel level. We show that all levels of fusion provide higher accuracies than the state of the art, with the best-performing method of feature-level fusion reaching 53% accuracy for the recognition of 40 different types of trees, an improvement of 5.7% over the best-performing baseline when RGB, multispectral, and LiDAR data are used. We also provide an in-depth comparison by evaluating each model at various parameter-complexity settings, where the increased model capacity yields a further improvement of 6.3% over the default capacity setting.
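As an illustration only, and not the authors' implementation: the joint localization-and-classification idea shared by the last two items (separate branches for class-specific and location-specific scoring of image regions) can be sketched as attention-weighted pooling, where a softmax over per-region localization scores weights the per-region class scores so the model needs only an image-level label. The function names and toy numbers below are hypothetical.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def weakly_supervised_scores(region_class_scores, region_loc_scores):
    """Pool per-region class scores with a localization (attention) branch.

    region_class_scores: R x C list (one class-score vector per candidate region)
    region_loc_scores:   length-R list (one localization score per region)
    Returns a single C-dimensional image-level score vector: the attention
    weights (softmax over regions) decide how much each region contributes,
    so training can use only an image-level class label.
    """
    attn = softmax(region_loc_scores)  # where the model "looks"
    num_classes = len(region_class_scores[0])
    return [
        sum(attn[r] * region_class_scores[r][c] for r in range(len(attn)))
        for c in range(num_classes)
    ]

# Toy example: 3 candidate regions, 4 classes.
cls = [[0.1, 2.0, 0.3, 0.0],   # region 0 votes strongly for class 1
       [0.0, 0.1, 0.2, 0.1],   # region 1 is background-like
       [0.2, 1.5, 0.1, 0.0]]   # region 2 also leans toward class 1
loc = [3.0, -1.0, 1.0]         # localization branch favors region 0
scores = weakly_supervised_scores(cls, loc)
pred = max(range(len(scores)), key=lambda c: scores[c])  # predicted class: 1
```

Feature-level fusion of multiple sources, as in the last item, would concatenate per-region features from each source before the class-scoring branch; this sketch stays single-source for brevity.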