A proof-of-concept method for Inconclusiveness-based Abstention *

Authors:
DPID: 1076

Abstract

Recent studies evaluating Large Language Models (LLMs) on abstention under uncertainty suggest that the problem of reasoning under uncertainty remains unresolved, even though In-context Learning (ICL) has improved abstention in LLMs (Kirichenko et al., 2025). For evaluating logical inconclusiveness, "refute a query" is selected as the abstention option (Wen et al., 2024). I propose a proof-of-concept method to create example datasets that are inconclusive yet "satisfiable". I created two example Knowledge Base (KB) sets: one with a polysemous noun and another using the Wumpus world, in 1. The model is asked to choose from multiple answer choices of the form "True if ..." and "Inconclusive if ...". LLMs ground "bat (mammal)" and "baseball bat" as terms early on and deduce True instead of Inconclusive, unlike in the Wumpus-world case. LLMs do not seem to diverge from standard commonsense reasoning (though contrary information might be available during reasoning); i.e., once grounded in world knowledge across several variations of the same example KB, the inconclusiveness is not detected. Under the Wumpus-world context, however, all three LLMs accurately detected the inconclusiveness of the example KB. These preliminary two-KB analyses over three LLMs hint that a combination of commonsense, logical, and lateral reasoning under uncertainty might nudge LLMs towards detecting inconclusiveness in real-world contexts, which requires more elaborate evaluation and analysis. (Method, example KBs: in 4).
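The distinction the abstract relies on, a query that is satisfiable under a KB but not entailed by it (hence "inconclusive"), can be made concrete with a toy propositional example. The sketch below is purely illustrative and is not one of the paper's actual KBs: it uses a hypothetical two-symbol KB (P implies Q) and a brute-force truth-table check to show that the query Q is neither entailed nor refuted, yet KB and Q together are satisfiable.

```python
from itertools import product

def entails(kb, query, symbols):
    """True iff every model satisfying kb also satisfies query (truth-table check)."""
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False
    return True

def satisfiable(formula, symbols):
    """True iff some assignment of the symbols satisfies the formula."""
    return any(formula(dict(zip(symbols, values)))
               for values in product([False, True], repeat=len(symbols)))

symbols = ["P", "Q"]
kb = lambda m: (not m["P"]) or m["Q"]   # hypothetical KB: P -> Q
query = lambda m: m["Q"]                # query: Q

print(entails(kb, query, symbols))                          # False: KB does not entail Q
print(entails(kb, lambda m: not query(m), symbols))         # False: KB does not refute Q either
print(satisfiable(lambda m: kb(m) and query(m), symbols))   # True: KB with Q is satisfiable
```

Since the KB neither entails nor refutes the query while remaining jointly satisfiable with it, the logically correct answer for such a query is "Inconclusive", which is the behavior the proposed dataset probes for.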