We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing ~0.25M images, ~0.76M questions, and ~10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines for VQA are provided and compared with human performance.
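The claim that VQA is amenable to automatic evaluation rests on most open-ended answers being short. The paper's dataset collects ten human answers per question and scores a predicted answer by consensus. A minimal sketch of that consensus accuracy, assuming answers have already been normalized (lowercased, punctuation stripped) as the official evaluation does:

```python
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """VQA consensus accuracy: an answer counts as fully correct
    if at least 3 of the (typically 10) human annotators gave it;
    partial credit otherwise."""
    matches = sum(1 for ans in human_answers if ans == predicted)
    return min(matches / 3.0, 1.0)
```

For example, a prediction matching 2 of 10 annotators scores 2/3, while matching 3 or more scores 1.0; this tolerates legitimate disagreement among humans on open-ended questions.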
Stanislaw Antol
Aishwarya Agrawal
Jiasen Lu
Georgia Institute of Technology
Virginia Tech
Microsoft (United States)
www.synapsesocial.com/papers/698659c429958b2750b9d65a — DOI: https://doi.org/10.1109/iccv.2015.279