Mitigating Hallucinations in Large Vision-Language Models by Self-Injecting Hallucinations (EMNLP 2025) — Autonomous Preference Alignment via Self-Injection (APASI) 2025-09-17 #deep-learning #large-models
Nullu: Mitigating Object Hallucinations in Large Vision-Language Models via HalluSpace Projection (CVPR 2025) — In vision-language models, object hallucinations (OH) refer to the model mentioning objects in a generated image description that are not actually present in the image. 2025-09-15 #deep-learning #large-models
Vision Transformers Don't Need Trained Registers — The ICLR 2024 paper "Vision Transformers Need Registers" showed that ViTs also exhibit attention-sink-like artifacts. Do registers actually need to be trainable? 2025-09-14 #deep-learning #large-models
Safety Behavior of MoE in LLMs (arXiv 2025) — [2509.09660] Steering MoE LLMs via Expert (De)Activation; [2506.17368] SAFEx: Analyzing Vulnerabilities of MoE-Based LLMs via Stable Safety-critical Expert Identification 2025-09-13 #deep-learning #large-models
Contrastive Decoding: VCD (CVPR 2024 Highlight) — "Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding" 2025-09-07 #deep-learning #multimodal #large-models
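The core idea behind VCD is to contrast next-token logits conditioned on the original image against logits conditioned on a distorted (noised) image, amplifying visually grounded tokens and suppressing those driven by the language prior. A minimal NumPy sketch of that adjustment (the function name `vcd_logits` and the `alpha`/`beta` values are illustrative, not from the paper's code):

```python
import numpy as np

def vcd_logits(logits_orig, logits_distorted, alpha=1.0, beta=0.1):
    """Sketch of VCD-style visual contrastive decoding.

    logits_orig:      next-token logits given the original image
    logits_distorted: next-token logits given a distorted/noised image
    """
    # Contrastive adjustment: boost tokens supported by the clean image,
    # penalize tokens the distorted input (i.e., the language prior) also favors.
    contrast = (1 + alpha) * logits_orig - alpha * logits_distorted

    # Adaptive plausibility constraint: only keep tokens whose original
    # probability is at least beta times the most likely token's probability.
    probs_orig = np.exp(logits_orig - logits_orig.max())
    probs_orig /= probs_orig.sum()
    mask = probs_orig >= beta * probs_orig.max()
    return np.where(mask, contrast, -np.inf)
```

Decoding then samples (or takes the argmax) from a softmax over the returned adjusted logits instead of the raw ones.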
AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models (ICLR 2025 Outstanding Paper) 2025-09-03 #deep-learning
kNN-LMs: a memory-augmentation approach predating RAG and modern LLMs (ICLR 2020) — "Generalization through Memorization: Nearest Neighbor Language Models" 2025-09-02 #deep-learning