[1]
G. Mago, P. Mettes, and S. Rudinac, “Looking Beyond the Obvious: A Survey on Abstract Concept Recognition for Video Understanding,” International Journal of Computer Vision (Accepted for Publication), 2026.
[2]
A. Efthymiou, S. Rudinac, M. Kackovic, N. Wijnberg, and M. Worring, “VL-KGE: Vision–Language Models Meet Knowledge Graph Embeddings,” in Proceedings of the ACM Web Conference 2026, WWW ’26, (Accepted for Publication), 2026.
[3]
A. Efthymiou, M. Kackovic, S. Rudinac, M. Worring, and N. Wijnberg, “Being Ranked in a Material World: The visual originality of an artwork and its effects on the artist’s canonization,” Organization Studies, vol. 47, no. 1, pp. 93–125, 2026, doi: 10.1177/01708406251397720.
[4]
H. Zhu, J.-H. Huang, Y. Shen, S. Rudinac, and E. Kanoulas, “Interactive Image Retrieval Meets Query Rewriting with Large Language and Vision Language Models,” ACM Transactions on Multimedia Computing, Communications, and Applications, vol. 21, no. 10, Oct. 2025, doi: 10.1145/3744910.
[5]
U. Sharma, O. S. Khan, S. Rudinac, and B. Þ. Jónsson, “Can Relevance Feedback, Conversational Search and Foundation Models Work Together for Interactive Video Search and Exploration?,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Jun. 2025, pp. 3749–3758.
[6]
L. Rossetto, G. Awad, W. Bailer, C. Gurrin, B. Þ. Jónsson, J. Lokoč, S. Rudinac, and K. Schoeffmann, “Overview of the 1st International Workshop on Interactive Video Search and Exploration,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Jun. 2025, pp. 3682–3687.
[7]
J.-H. Huang, Y. Shen, H. Zhu, S. Rudinac, and E. Kanoulas, “Gradient Weight-normalized Low-rank Projection for Efficient LLM Training,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 39, no. 23, pp. 24123–24131, Apr. 2025, doi: 10.1609/aaai.v39i23.34587.
[8]
S. Wang, I. Najdenkoska, H. Zhu, S. Rudinac, M. Kackovic, N. Wijnberg, and M. Worring, “ArtRAG: Retrieval-Augmented Generation with Structured Context for Visual Art Understanding,” in Proceedings of the 33rd ACM International Conference on Multimedia, in MM ’25. New York, NY, USA: Association for Computing Machinery, 2025, pp. 6700–6709. doi: 10.1145/3746027.3755673.
[9]
U. Sharma, O. S. Khan, S. Rudinac, and B. Þ. Jónsson, “Exquisitor at the Video Browser Showdown 2025: Unifying Conversational Search and User Relevance Feedback,” in MultiMedia Modeling - 31st International Conference on Multimedia Modeling, MMM 2025, Nara, Japan, January 8-10, 2025, Proceedings, Part V, in Lecture Notes in Computer Science, vol. 15524. Springer, 2025, pp. 264–271. doi: 10.1007/978-981-96-2074-6_31.
[10]
L. Rossetto, W. Bailer, D.-T. Dang-Nguyen, G. Healy, B. Þ. Jónsson, O. Kongmeesub, H.-B. Le, S. Rudinac, K. Schöffmann, F. Spiess, A. Tran, M.-T. Tran, Q.-L. Tran, and C. Gurrin, “The CASTLE 2024 Dataset: Advancing the Art of Multimodal Understanding,” in Proceedings of the 33rd ACM International Conference on Multimedia, in MM ’25. New York, NY, USA: Association for Computing Machinery, 2025, pp. 12629–12635. doi: 10.1145/3746027.3758199.
[11]
O. S. Khan, U. Sharma, G. Marcelino, A. Duane, S. Rudinac, M. Worring, and B. Þ. Jónsson, “Interactive Retrieval System for Multi-Stream Collections: multiXview at CASTLE 2025 Interactive Grand Challenge,” in Proceedings of the 33rd ACM International Conference on Multimedia, in MM ’25. New York, NY, USA: Association for Computing Machinery, 2025, pp. 14273–14279. doi: 10.1145/3746027.3760243.
[12]
Y. Hui, I. M. Zwetsloot, S. Trimborn, and S. Rudinac, “Domain-Informed Negative Sampling Strategies for Dynamic Graph Embedding in Meme Stock-Related Social Networks,” in Proceedings of the ACM on Web Conference 2025, in WWW ’25. New York, NY, USA: Association for Computing Machinery, 2025, pp. 518–529. doi: 10.1145/3696410.3714650.
[13]
J.-H. Huang, H. Zhu, Y. Shen, S. Rudinac, and E. Kanoulas, “Image2Text2Image: A Novel Framework for Label-Free Evaluation of Image-to-Text Generation with Text-to-Image Diffusion Models,” in MultiMedia Modeling - 31st International Conference on Multimedia Modeling, MMM 2025, Nara, Japan, January 8-10, 2025, Proceedings, Part IV, in Lecture Notes in Computer Science, vol. 15523. Springer, 2025, pp. 413–427. doi: 10.1007/978-981-96-2071-5_30.
[14]
D. Arya, D. K. Gupta, S. Rudinac, and M. Worring, “Adaptive Neural Message Passing for Inductive Learning on Hypergraphs,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 47, no. 1, pp. 19–31, 2025, doi: 10.1109/TPAMI.2024.3434483.
[15]
H. Zhu, J.-H. Huang, S. Rudinac, and E. Kanoulas, “Enhancing Interactive Image Retrieval With Query Rewriting Using Large Language Models and Vision Language Models,” in Proceedings of the 2024 International Conference on Multimedia Retrieval, in ICMR ’24. New York, NY, USA: Association for Computing Machinery, 2024, pp. 978–987. doi: 10.1145/3652583.3658032.
[16]
S. Wang, J. Shen, A. Efthymiou, S. Rudinac, M. Kackovic, N. Wijnberg, and M. Worring, “Prototype-Enhanced Hypergraph Learning for Heterogeneous Information Networks,” in MultiMedia Modeling, Cham: Springer Nature Switzerland, 2024, pp. 462–476.
[17]
M. Sukel, S. Rudinac, and M. Worring, “Multimodal Temporal Fusion Transformers are Good Product Demand Forecasters,” IEEE MultiMedia, vol. 31, no. 2, pp. 48–60, 2024, doi: 10.1109/MMUL.2024.3373827.
[18]
U. Sharma, S. Rudinac, J. Demmers, W. van Dolen, and M. Worring, “GreenScreen: A Multimodal Dataset for Detecting Corporate Greenwashing in the Wild,” in MultiMedia Modeling, Cham: Springer Nature Switzerland, 2024.
[19]
U. Sharma, S. Rudinac, J. Demmers, W. van Dolen, and M. Worring, “From pixels to perceptions: Capturing high-level abstract concepts in visual user-generated content,” International Journal of Information Management Data Insights, vol. 4, no. 2, p. 100269, 2024, doi: 10.1016/j.jjimei.2024.100269.
[20]
S. Rudinac, A. Hanjalic, C. Liem, M. Worring, B. Þ. Jónsson, B. Liu, and Y. Yamakata, Eds., MultiMedia Modeling: 30th International Conference, MMM 2024, Amsterdam, The Netherlands, January 29 – February 2, 2024, Proceedings, Parts I–V. Springer Nature, 2024.
[21]
J. Li, S. Wang, S. Rudinac, and A. Osseyran, “High-performance computing in healthcare: an automatic literature analysis perspective,” Journal of Big Data, vol. 11, no. 1, p. 61, 2024, doi: 10.1186/s40537-024-00929-2.
[22]
J. Li, A. Osseyran, R. Hekster, S. Rudinac, V. Codreanu, and D. Podareanu, “Improving the speed and quality of cancer segmentation using lower resolution pathology images,” Multimedia Tools and Applications, vol. 83, no. 4, pp. 11999–12015, 2024, doi: 10.1007/s11042-023-15984-9.
[23]
O. S. Khan, H. Zhu, U. Sharma, E. Kanoulas, S. Rudinac, and B. Þ. Jónsson, “Exquisitor at the Video Browser Showdown 2024: Relevance Feedback Meets Conversational Search,” in MultiMedia Modeling, Cham: Springer Nature Switzerland, 2024, pp. 347–355.
[24]
O. S. Khan, U. Sharma, H. Zhu, S. Rudinac, and B. Þ. Jónsson, “Exquisitor at the Lifelog Search Challenge 2024: Blending Conversational Search with User Relevance Feedback,” in Proceedings of the 7th Annual ACM Workshop on the Lifelog Search Challenge, in LSC ’24. New York, NY, USA: Association for Computing Machinery, 2024, pp. 117–121. doi: 10.1145/3643489.3661132.
[25]
J.-H. Huang, H. Zhu, Y. Shen, S. Rudinac, A. M. Pacces, and E. Kanoulas, “A Novel Evaluation Framework for Image2Text Generation,” in Proceedings of The First Workshop on Large Language Models for Evaluation in Information Retrieval (LLM4Eval 2024), co-located with the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2024), Washington D.C., USA, July 18, 2024, C. Siro, M. Aliannejadi, H. A. Rahmani, N. Craswell, C. L. A. Clarke, G. Faggioli, B. Mitra, P. Thomas, and E. Yilmaz, Eds., in CEUR Workshop Proceedings, vol. 3752. CEUR-WS.org, 2024, pp. 51–65. [Online]. Available: https://ceur-ws.org/Vol-3752/paper4.pdf
[26]
I. Groenen, S. Rudinac, and M. Worring, “PanorAMS: Automatic Annotation for Detecting Objects in Urban Context,” IEEE Transactions on Multimedia, vol. 26, pp. 1281–1294, 2024, doi: 10.1109/TMM.2023.3279696.
[27]
M. Sukel, S. Rudinac, and M. Worring, “GIGO, Garbage In, Garbage Out: An Urban Garbage Classification Dataset,” in MultiMedia Modeling: 29th International Conference, MMM 2023, Bergen, Norway, January 9–12, 2023, Proceedings, Part I, Berlin, Heidelberg: Springer-Verlag, 2023, pp. 527–538. doi: 10.1007/978-3-031-27077-2_41.
[28]
D.-T. Dang-Nguyen, C. Gurrin, M. Larson, A. F. Smeaton, S. Rudinac, M.-S. Dao, C. Trattner, and P. Chen, Eds., MultiMedia Modeling: 29th International Conference, MMM 2023, Bergen, Norway, January 9–12, 2023, Proceedings, Part II, vol. 13834. Springer Nature, 2023.
[29]
M. Glistrup, S. Rudinac, and B. Þ. Jónsson, “Urban Image Geo-Localization Using Open Data on Public Spaces,” in Proceedings of the 19th International Conference on Content-Based Multimedia Indexing, in CBMI ’22. New York, NY, USA: Association for Computing Machinery, 2022, pp. 50–56. doi: 10.1145/3549555.3549589.
[30]
J. Bright, N. Marchal, B. Ganesh, and S. Rudinac, “How Do Individuals in a Radical Echo Chamber React to Opposing Views? Evidence from a Content Analysis of Stormfront,” Human Communication Research, vol. 48, no. 1, pp. 116–145, Nov. 2021, doi: 10.1093/hcr/hqab020.
[31]
S. Rudinac, A. Bozzon, T.-S. Chua, S. Little, D. Gatica-Perez, and K. Aizawa, “UrbanMM’21: 1st International Workshop on Multimedia Computing for Urban Data,” in Proceedings of the 29th ACM International Conference on Multimedia, in MM ’21. New York, NY, USA: Association for Computing Machinery, 2021, pp. 5696–5697. doi: 10.1145/3474085.3478577.
[32]
S. Rudinac, J. Benois-Pineau, and S. Marchand-Maillet, “Special issue on content-based multimedia indexing in the era of artificial intelligence,” Multimedia Tools and Applications, pp. 1–2, 2021.
[33]
O. S. Khan, B. Þ. Jónsson, J. Zahálka, S. Rudinac, and M. Worring, “Impact of Interaction Strategies on User Relevance Feedback,” in Proceedings of the 2021 International Conference on Multimedia Retrieval, in ICMR ’21. New York, NY, USA: Association for Computing Machinery, 2021, pp. 590–598. doi: 10.1145/3460426.3463663.
[34]
O. S. Khan, B. Þ. Jónsson, M. Larsen, L. Poulsen, D. C. Koelma, S. Rudinac, M. Worring, and J. Zahálka, “Exquisitor at the Video Browser Showdown 2021: Relationships Between Semantic Classifiers,” in MultiMedia Modeling, J. Lokoč, T. Skopal, K. Schoeffmann, V. Mezaris, X. Li, S. Vrochidis, and I. Patras, Eds., Cham: Springer International Publishing, 2021, pp. 410–416.
[35]
O. S. Khan, A. Duane, B. Þ. Jónsson, J. Zahálka, S. Rudinac, and M. Worring, “Exquisitor at the Lifelog Search Challenge 2021: Relationships Between Semantic Classifiers,” in Proceedings of the 4th Annual on Lifelog Search Challenge, in LSC ’21. New York, NY, USA: Association for Computing Machinery, 2021, pp. 3–6. doi: 10.1145/3463948.3469255.
[36]
A. Efthymiou, S. Rudinac, M. Kackovic, M. Worring, and N. Wijnberg, “Graph Neural Networks for Knowledge Enhanced Visual Representation of Paintings,” in Proceedings of the 29th ACM International Conference on Multimedia, in MM ’21. New York, NY, USA: Association for Computing Machinery, 2021, pp. 3710–3719. doi: 10.1145/3474085.3475586.
[37]
M. Sukel, S. Rudinac, and M. Worring, “Urban Object Detection Kit: A System for Collection and Analysis of Street-Level Imagery,” in Proceedings of the 2020 International Conference on Multimedia Retrieval, in ICMR ’20. New York, NY, USA: Association for Computing Machinery, 2020, pp. 509–516. doi: 10.1145/3372278.3390708.
[38]
M. Sukel, S. Rudinac, and M. Worring, “Detecting Urban Issues With the Object Detection Kit,” in Proceedings of the 28th ACM International Conference on Multimedia, in MM ’20. New York, NY, USA: Association for Computing Machinery, 2020, pp. 4518–4520. doi: 10.1145/3394171.3414427.
[39]
U. Sharma, S. Rudinac, M. Worring, J. Demmers, and W. van Dolen, “Semantic Path-Based Learning for Review Volume Prediction.” Springer International Publishing, Cham, 2020.
[40]
O. S. Khan, M. D. Larsen, L. A. S. Poulsen, B. Þ. Jónsson, J. Zahálka, S. Rudinac, D. Koelma, and M. Worring, “Exquisitor at the Lifelog Search Challenge 2020,” in Proceedings of the Third Annual Workshop on Lifelog Search Challenge, in LSC ’20. New York, NY, USA: Association for Computing Machinery, 2020, pp. 19–22. doi: 10.1145/3379172.3391718.
[41]
O. S. Khan, B. Þ. Jónsson, S. Rudinac, J. Zahálka, H. Ragnarsdóttir, Þ. Þorleiksdóttir, G. Þ. Guðmundsson, L. Amsaleg, and M. Worring, “Interactive Learning for Multimedia at Large.” Springer International Publishing, Cham, 2020.
[42]
B. Þ. Jónsson, O. S. Khan, D. C. Koelma, S. Rudinac, M. Worring, and J. Zahálka, “Exquisitor at the Video Browser Showdown 2020,” in MultiMedia Modeling, W.-H. Cheng, J. Kim, W.-T. Chu, P. Cui, J.-W. Choi, M.-C. Hu, and W. De Neve, Eds., Cham: Springer International Publishing, 2020, pp. 796–802.
[43]
I. Gornishka, S. Rudinac, and M. Worring, “Interactive Search and Exploration in Discussion Forums Using Multimodal Embeddings,” in MultiMedia Modeling, W.-H. Cheng, J. Kim, W.-T. Chu, P. Cui, J.-W. Choi, M.-C. Hu, and W. De Neve, Eds., Cham: Springer International Publishing, 2020, pp. 388–399.
[44]
D. Arya, S. Rudinac, and M. Worring, “Predicting Behavioural Patterns in Discussion Forums using Deep Learning on Hypergraphs,” in 2019 International Conference on Content-Based Multimedia Indexing (CBMI), Sep. 2019, pp. 1–6. doi: 10.1109/CBMI.2019.8877384.
[45]
M. Sukel, S. Rudinac, and M. Worring, “Multimodal Classification of Urban Micro-Events,” in Proceedings of the 27th ACM International Conference on Multimedia, in MM ’19. New York, NY, USA: ACM, 2019, pp. 1455–1463. doi: 10.1145/3343031.3350967.
[46]
H. Ragnarsdóttir, Þ. Þorleiksdóttir, O. S. Khan, B. Þ. Jónsson, G. Þ. Guðmundsson, J. Zahálka, S. Rudinac, L. Amsaleg, and M. Worring, “Exquisitor: Breaking the Interaction Barrier for Exploration of 100 Million Images,” in Proceedings of the 27th ACM International Conference on Multimedia, in MM ’19. New York, NY, USA: ACM, 2019, pp. 1029–1031. doi: 10.1145/3343031.3350580.
[47]
O. S. Khan, B. Þ. Jónsson, J. Zahálka, S. Rudinac, and M. Worring, “Exquisitor at the Lifelog Search Challenge 2019,” in Proceedings of the ACM Workshop on Lifelog Search Challenge, in LSC ’19. New York, NY, USA: ACM, 2019, pp. 7–11. doi: 10.1145/3326460.3329156.
[48]
C. Gurrin, B. Þ. Jónsson, R. Péteri, S. Rudinac, S. Marchand-Maillet, G. Quénot, K. McGuinness, G. Þ. Guðmundsson, S. Little, M. Katsurai, and G. Healy, Eds., 2019 International Conference on Content-Based Multimedia Indexing, CBMI 2019, Dublin, Ireland, September 4-6, 2019. IEEE, 2019. [Online]. Available: https://ieeexplore.ieee.org/xpl/conhome/8863324/proceeding
[49]
D. Arya, S. Rudinac, and M. Worring, “HyperLearn: A Distributed Approach for Representation Learning in Datasets With Many Modalities,” in Proceedings of the 27th ACM International Conference on Multimedia, in MM ’19. New York, NY, USA: ACM, 2019, pp. 2245–2253. doi: 10.1145/3343031.3350572.
[50]
J. Zahálka, S. Rudinac, B. Þ. Jónsson, D. C. Koelma, and M. Worring, “Blackthorn: Large-Scale Interactive Multimodal Learning,” IEEE Transactions on Multimedia, vol. 20, no. 3, pp. 687–698, Mar. 2018, doi: 10.1109/TMM.2017.2755986.
[51]
S. Rudinac, T.-S. Chua, N. Diaz-Ferreyra, G. Friedland, T. Gornostaja, B. Huet, R. Kaptein, K. Lindén, M.-F. Moens, J. Peltonen, M. Redi, M. Schedl, D. A. Shamma, A. Smeaton, and L. Xie, “Rethinking Summarization and Storytelling for Modern Social Multimedia,” in MultiMedia Modeling, Cham: Springer International Publishing, 2018, pp. 632–644.
[52]
S. Rudinac, J. Zahálka, and M. Worring, “Discovering Geographic Regions in the City Using Social Multimedia and Open Data,” in MultiMedia Modeling: 23rd International Conference, MMM 2017, Reykjavik, Iceland, January 4-6, 2017, Cham: Springer International Publishing, 2017, pp. 148–159. doi: 10.1007/978-3-319-51814-5_13.
[53]
S. Rudinac, I. Gornishka, and M. Worring, “Multimodal Classification of Violent Online Political Extremism Content with Graph Convolutional Networks,” in Proceedings of Thematic Workshops at ACM Multimedia Conference, Mountain View, CA, USA, 2017.
[54]
J. Zahálka, S. Rudinac, B. Þ. Jónsson, D. C. Koelma, and M. Worring, “Interactive Multimodal Learning on 100 Million Images,” in Proceedings of the 2016 ACM International Conference on Multimedia Retrieval, in ICMR ’16. New York, NY, USA: ACM, 2016, pp. 333–337. doi: 10.1145/2911996.2912062.
[55]
J. van den Berg, S. Rudinac, and M. Worring, “Scenemash: Multimodal Route Summarization for City Exploration,” in Advances in Information Retrieval: 38th European Conference on IR Research, ECIR 2016, Padua, Italy, March 20–23, 2016, Cham: Springer International Publishing, 2016, pp. 833–836. doi: 10.1007/978-3-319-30671-1_75.
[56]
M. Mazloom, R. Rietveld, S. Rudinac, M. Worring, and W. van Dolen, “Multimodal Popularity Prediction of Brand-related Social Media Posts,” in Proceedings of the 2016 ACM Conference on Multimedia, in MM ’16. New York, NY, USA: ACM, 2016, pp. 197–201. doi: 10.1145/2964284.2967210.
[57]
B. Þ. Jónsson, M. Worring, J. Zahálka, S. Rudinac, and L. Amsaleg, “Ten Research Questions for Scalable Multimedia Analytics,” in MultiMedia Modeling: 22nd International Conference, MMM 2016, Miami, FL, USA, January 4-6, 2016, Cham: Springer International Publishing, 2016, pp. 290–302. doi: 10.1007/978-3-319-27674-8_26.
[58]
J. Boonzajer Flaes, S. Rudinac, and M. Worring, “What Multimedia Sentiment Analysis Says About City Liveability,” in Advances in Information Retrieval: 38th European Conference on IR Research, ECIR 2016, Padua, Italy, March 20–23, 2016, Cham: Springer International Publishing, 2016, pp. 824–829. doi: 10.1007/978-3-319-30671-1_74.
[59]
J. Zahálka, S. Rudinac, and M. Worring, “Interactive Multimodal Learning for Venue Recommendation,” IEEE Transactions on Multimedia, vol. 17, no. 12, pp. 2235–2244, Dec. 2015, doi: 10.1109/TMM.2015.2480007.
[60]
J. Zahálka, S. Rudinac, and M. Worring, “Analytic Quality: Evaluation of Performance and Insight in Multimedia Collection Analysis,” in Proceedings of the 23rd ACM International Conference on Multimedia, in MM ’15. New York, NY, USA: ACM, 2015, pp. 231–240. doi: 10.1145/2733373.2806279.
[61]
J. Redi and S. Rudinac, “CrowdMM 2015: Fourth International ACM Workshop on Crowdsourcing for Multimedia,” in Proceedings of the 23rd ACM International Conference on Multimedia, in MM ’15. New York, NY, USA: ACM, 2015, pp. 1341–1342. doi: 10.1145/2733373.2806411.
[62]
J. Zahálka, S. Rudinac, and M. Worring, “New Yorker Melange: Interactive Brew of Personalized Venue Recommendations,” in Proceedings of the 22nd ACM International Conference on Multimedia, in MM ’14. New York, NY, USA: ACM, 2014, pp. 205–208. doi: 10.1145/2647868.2656403.
[63]
S. Rudinac and M. Worring, “Making Use of Semantic Concept Detection for Modelling Human Preferences in Visual Summarization,” in Proceedings of the 2014 International ACM Workshop on Crowdsourcing for Multimedia, in CrowdMM ’14. New York, NY, USA: ACM, 2014, pp. 41–44. doi: 10.1145/2660114.2660127.
[64]
S. Rudinac, M. Larson, and A. Hanjalic, “Learning Crowdsourced User Preferences for Visual Summarization of Image Collections,” IEEE Transactions on Multimedia, vol. 15, no. 6, pp. 1231–1243, Oct. 2013, doi: 10.1109/TMM.2013.2261481.
[65]
S. Rudinac, A. Hanjalic, and M. Larson, “Generating Visual Summaries of Geographic Areas Using Community-Contributed Images,” IEEE Transactions on Multimedia, vol. 15, no. 4, pp. 921–932, Jun. 2013, doi: 10.1109/TMM.2013.2237896.
[67]
S. Rudinac, M. Larson, and A. Hanjalic, “Leveraging visual concepts and query performance prediction for semantic-theme-based video retrieval,” International Journal of Multimedia Information Retrieval, vol. 1, no. 4, pp. 263–280, 2012, doi: 10.1007/s13735-012-0018-0.
[68]
S. Rudinac, M. Larson, and A. Hanjalic, “TUD-MIR at MediaEval 2011 Genre Tagging Task: Query expansion from a limited number of labeled videos,” in MediaEval Workshop, 2011.
[69]
S. Rudinac, A. Hanjalic, and M. Larson, “Finding Representative and Diverse Community Contributed Images to Create Visual Summaries of Geographic Areas,” in Proceedings of the 19th ACM International Conference on Multimedia, in MM ’11. New York, NY, USA: ACM, 2011, pp. 1109–1112. doi: 10.1145/2072298.2071950.
[70]
M. Larson, M. Soleymani, P. Serdyukov, S. Rudinac, C. Wartena, V. Murdock, G. Friedland, R. Ordelman, and G. J. F. Jones, “Automatic Tagging and Geotagging in Video Collections and Communities,” in Proceedings of the 1st ACM International Conference on Multimedia Retrieval, in ICMR ’11. New York, NY, USA: ACM, 2011, pp. 51:1–51:8. doi: 10.1145/1991996.1992047.
[71]
S. Rudinac, M. Larson, and A. Hanjalic, “Visual Concept-based Selection of Query Expansions for Spoken Content Retrieval,” in Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval, in SIGIR ’10. New York, NY, USA: ACM, 2010, pp. 891–892. doi: 10.1145/1835449.1835668.
[72]
S. Rudinac, M. Larson, and A. Hanjalic, “Exploiting result consistency to select query expansions for spoken content retrieval,” in Proceedings of the 32nd European Conference on Advances in Information Retrieval, in ECIR ’10. Berlin, Heidelberg: Springer-Verlag, 2010, pp. 645–648. doi: 10.1007/978-3-642-12275-0_67.
[73]
S. Rudinac, M. Larson, and A. Hanjalic, “Exploiting Noisy Visual Concept Detection to Improve Spoken Content Based Video Retrieval,” in Proceedings of the International Conference on Multimedia, in MM ’10. New York, NY, USA: ACM, 2010, pp. 727–730. doi: 10.1145/1873951.1874063.