Authors: Demirel, B.; Cinbiş, Ramazan Gökberk; İkizler-Cinbiş, N.
Date issued: 2017-10
Date accessioned/available: 2018-04-12
Handle: http://hdl.handle.net/11693/37629
Date of conference: 22-29 Oct. 2017
Conference name: IEEE International Conference on Computer Vision (ICCV) 2017
Abstract: We propose a novel approach for unsupervised zero-shot learning (ZSL) of classes based on their names. Most existing unsupervised ZSL methods aim to learn a model for directly comparing image features and class names. However, this proves to be a difficult task due to the dominance of non-visual semantics in the underlying vector-space embeddings of class names. To address this issue, we discriminatively learn a word representation such that the similarities between class names and combinations of attribute names align with visual similarity. Contrary to traditional zero-shot learning approaches that are built upon attribute presence, our approach bypasses the laborious attribute-class relation annotations for unseen classes. In addition, our proposed approach renders text-only training possible; hence, training can be augmented without the need to collect additional image data. The experimental results show that our method yields state-of-the-art results for unsupervised ZSL on three benchmark datasets. © 2017 IEEE.
Language: English
Keywords: Semantics; Vector spaces; Attribute-based; Benchmark datasets; Discriminative models; Image features; Learning approach; State of the art; Visual similarity; Word representations; Computer vision
Title: Attributes2Classname: a discriminative model for attribute-based unsupervised zero-shot learning
Type: Conference Paper
DOI: 10.1109/ICCV.2017.139
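
The abstract's core idea can be sketched in a few lines: learn a mapping of off-the-shelf word vectors so that a class-name embedding is most similar to the embedding of its own attribute-name combination, which lets unseen classes be scored from their names alone. The snippet below is a minimal illustrative sketch, not the authors' released code; the random word vectors, the mean-pooling of attribute names, the margin ranking loss, and all dimensions are assumptions made for the example.

import torch
import torch.nn as nn

torch.manual_seed(0)

n_classes, n_attrs, word_dim, embed_dim = 10, 30, 300, 128

# Placeholder word vectors for class names and attribute names
# (in practice these would come from a pretrained word-embedding model).
class_word_vecs = torch.randn(n_classes, word_dim)
attr_word_vecs = torch.randn(n_attrs, word_dim)

# Binary class-attribute associations, available for seen classes only (assumed given).
class_attr = (torch.rand(n_classes, n_attrs) > 0.7).float()

# Learned linear map from word space to a shared discriminative space.
W = nn.Linear(word_dim, embed_dim, bias=False)
opt = torch.optim.Adam(W.parameters(), lr=1e-3)

def attr_combination(c):
    """Mean attribute word vector for class c (one simple way to 'combine' attribute names)."""
    mask = class_attr[c].unsqueeze(1)
    return (attr_word_vecs * mask).sum(0) / mask.sum().clamp(min=1.0)

margin = 0.2  # illustrative value
for step in range(200):
    opt.zero_grad()
    loss = 0.0
    cls_emb = nn.functional.normalize(W(class_word_vecs), dim=1)
    for c in range(n_classes):
        combo = nn.functional.normalize(W(attr_combination(c)), dim=0)
        sims = cls_emb @ combo                      # similarity of the combination to every class name
        pos = sims[c]
        neg = torch.cat([sims[:c], sims[c + 1:]])
        # Ranking loss: the correct class name should be the most similar one.
        loss = loss + torch.clamp(margin + neg - pos, min=0).sum()
    loss.backward()
    opt.step()

# At test time an unseen class is represented by its name embedding alone,
# which is why no attribute annotations are needed for unseen classes.

Note that training touches only text embeddings and class-attribute relations for seen classes, which mirrors the abstract's point that text-only training is possible and no extra image data is required.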