Paying attention to words not just images leads to better image captions


A team of University of Rochester and Adobe researchers is outperforming other approaches to computer-generated image captioning in an international competition. The key to their winning approach? Thinking about words – what they mean and how they fit into a sentence – just as much as thinking about the image itself.

The Rochester/Adobe model mixes the two approaches often used in image captioning: the “top-down” approach, which starts from the “gist” of the image and then converts it into words, and the “bottom-up” approach, which first assigns words to different aspects of the image and then combines them into a sentence.
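The contrast between the two strategies can be sketched in a few lines of code. The example below is purely illustrative and is not the team’s implementation: the functions return hand-written toy values so the two pipelines can be run and compared end to end.

```python
# Illustrative sketch only: toy stand-ins for the two standard captioning
# strategies. The top-down path compresses the image into one summary vector
# and decodes it; the bottom-up path first names objects, then composes them.

from typing import List

def encode_whole_image(image_path: str) -> List[float]:
    """Top-down, step 1: compress the whole image into one gist vector (toy values)."""
    return [0.12, 0.87, 0.33]

def decode_gist(gist: List[float]) -> str:
    """Top-down, step 2: decode the gist into a sentence (a toy lookup here)."""
    return "a close-up of a plate of food on a table"

def detect_object_words(image_path: str) -> List[str]:
    """Bottom-up, step 1: propose words for individual regions of the image (toy values)."""
    return ["table", "cake", "candles"]

def compose_sentence(words: List[str]) -> str:
    """Bottom-up, step 2: stitch the detected words into a sentence (a toy template)."""
    return f"a {words[0]} topped with a {words[1]} with {words[2]} on it"

image = "birthday.jpg"
print("top-down :", decode_gist(encode_whole_image(image)))
print("bottom-up:", compose_sentence(detect_object_words(image)))
```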

The Rochester/Adobe model is currently beating Google, Microsoft, Baidu/UCLA, Stanford University, the University of California, Berkeley, the University of Toronto/Montreal, and others to top the leaderboard in an image captioning competition run by Microsoft, called the Microsoft COCO Image Captioning Challenge. While the winner of the year-long competition is still to be determined, the Rochester “Attention” system – or ATT on the leaderboard – has been leading the field since last November.

Other groups have also tried to combine these two methods, adding a feedback mechanism that lets a system improve on what either approach could do alone. However, several systems that blended the two approaches focused on “visual attention,” which weighs which parts of an image are visually most important in order to describe it better.

Google caption: “A close-up of a plate of food on a table.” Rochester ATT caption: “A table topped with a cake with candles on it.”

The Rochester/Adobe system focuses on what the researchers describe as “semantic attention.” In a paper accepted by the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), entitled “Image Captioning with Semantic Attention,” computer science professor Jiebo Luo and his colleagues define semantic attention as “the ability to provide a detailed, coherent description of semantically important objects that are needed exactly when they are needed.”

“To describe an image you need to decide what to pay more attention to,” said Luo. “It is not only about what is in the center of the image or a bigger object, it’s also about coming up with a way of deciding on the importance of specific words.”

Google caption: “A baby is eating a piece of paper.” Rochester ATT caption: “A baby with a toothbrush in its mouth.”

For example, take an image that shows a table and seated people. The table might be at the center of the image, but a better caption might be “a group of people sitting around a table” rather than “a table with people seated.” Both are correct, but the former also takes into account what might be of interest to readers and viewers.

Computer image captioning brings together two key areas in artificial intelligence: computer vision and natural language processing. On the computer vision side, researchers train their systems on massive datasets of images so they learn to identify the objects in them. Language models can then be used to put the resulting words together. For the algorithm behind their system, Luo and his team also trained on large amounts of text. The objective was to learn not only sentence structure but also the meanings of individual words, which words tend to appear alongside them, and which words might be semantically more important.
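The general idea of weighting candidate words against the state of the caption being generated can be illustrated with a short sketch. The embeddings, words, and scoring function below are toy values chosen for the example, not the published model: the point is only that the word most relevant at the current step receives the largest weight.

```python
# Toy illustration of attention over candidate words during caption generation.
# Everything here (embeddings, decoder state, scoring) is made up for the example.

import numpy as np

rng = np.random.default_rng(0)

# Toy embeddings for words proposed by a visual concept detector (assumed input,
# not the team's actual detector).
candidate_words = ["baby", "toothbrush", "mouth", "paper"]
word_vectors = {w: rng.normal(size=8) for w in candidate_words}

def softmax(x: np.ndarray) -> np.ndarray:
    """Turn raw scores into weights that sum to one."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(decoder_state: np.ndarray) -> dict:
    """Score each candidate word against the current decoder state and normalize."""
    scores = np.array([word_vectors[w] @ decoder_state for w in candidate_words])
    return dict(zip(candidate_words, softmax(scores)))

# Pretend the caption generator has just produced "A baby with a ..." and its
# hidden state now points toward the object it should mention next.
decoder_state = 0.8 * word_vectors["toothbrush"] + 0.1 * rng.normal(size=8)

for word, weight in sorted(attend(decoder_state).items(), key=lambda kv: -kv[1]):
    print(f"{word:>10s}  {weight:.2f}")
```

A trained system would learn the word embeddings and the scoring function jointly with the caption generator; the fixed vectors here only stand in for that machinery.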

Google caption: “A white plate with a variety of food.” Rochester ATT caption: “A plate with a sandwich and french fries.”

A closely related paper on video captioning by Luo, graduate student Yuncheng Li, and their Yahoo Research colleagues Yale Song, Liangliang Cao, Joel Tetreault, and Larry Goldberg, “TGIF: A New Dataset and Benchmark on Animated GIF Description,” will also be featured as a “Spotlight” presentation at CVPR.
[Source: Phys.org]
