Deep Dive in Research
NotebookLM
14 episodes
1 week ago
Discussion about interesting research papers
Technology
VisuLogic: A Benchmark for Evaluating Visual Reasoning in Multi-modal Large Language Models
Deep Dive in Research
18 minutes 57 seconds
6 months ago
VisuLogic: A Benchmark for Evaluating Visual Reasoning in Multi-modal Large Language Models

Visual reasoning is a core component of human intelligence and a critical capability for advanced multimodal models. Yet current reasoning evaluations of multimodal large language models (MLLMs) often rely on text descriptions and allow language-based reasoning shortcuts, failing to measure genuine vision-centric reasoning. To address this, we introduce VisuLogic: a benchmark of 1,000 human-verified problems across six categories (e.g., quantitative shifts, spatial relations, attribute comparisons). These question types can be evaluated to assess the visual reasoning capabilities of MLLMs from multiple perspectives. We evaluate leading MLLMs on this benchmark and analyze their results to identify common failure modes. Most models score below 30% accuracy, only slightly above the 25% random baseline and far below the 51.4% achieved by humans, revealing significant gaps in visual reasoning. Furthermore, we provide a supplementary training dataset and a reinforcement-learning baseline to support further progress. Code, data, and baselines are available at https://visulogic-benchmark.github.io/VisuLogic.
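
The 25% random baseline quoted above implies four-option multiple-choice questions. As a rough sketch of how accuracy against such a baseline might be computed (this is not the paper's released evaluation code; the JSON schema, field names, and filename below are assumptions for illustration):

import json
import random

def load_benchmark(path):
    # Load VisuLogic-style items; this schema is an assumption,
    # not the format of the actual release.
    with open(path) as f:
        return json.load(f)  # list of {"id", "image", "choices", "answer"}

def evaluate(items, predict):
    # Accuracy of a predict(item) -> choice-index function.
    correct = sum(predict(item) == item["answer"] for item in items)
    return correct / len(items)

def random_guess(item):
    # With four choices per question, expected accuracy is 1/4 = 25%,
    # matching the random baseline quoted in the abstract.
    return random.randrange(len(item["choices"]))

if __name__ == "__main__":
    items = load_benchmark("visulogic.json")  # hypothetical filename
    print(f"Random baseline: {evaluate(items, random_guess):.1%}")

Running this with any MLLM's prediction function in place of random_guess would give the accuracy figures the abstract compares against the human and random-chance reference points.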
