- 🔭 I’m currently a fourth-year Ph.D. student at SJTU (IWIN Lab).
- 🌱 I’m currently learning about LLMs/MLLMs, explainable attention, information flow, and truthful AI.
- 💬 Feel free to reach out if you’d like to work on something interesting together~ I’m currently working on hallucination, information flow, and saliency-based interpretability. 😊😊
- 📫 Email: [email protected]. WeChat: SemiZxf
- 📕 “When the tide arrives on the Qiantang River, today at last I know that I am who I am.”
- I am looking for a job; if you are interested in working with me, please contact me~
- 🌱 Homepage: [zhangbaijin.github.io](https://zhangbaijin.github.io)
- 💬 Google Scholar
 
Pinned repositories:
- From-Redundancy-to-Relevance: [NAACL 2025 Oral] 🎉 From redundancy to relevance: Enhancing explainability in multimodal large language models
- FanshuoZeng/Simignore: [AAAI 2025] Code for the paper: Enhancing Multimodal Large Language Models Complex Reasoning via Similarity Computation
- ErikZ719/MCA-LLaVA: [ACM MM 2025] MCA-LLaVA: Manhattan Causal Attention for Reducing Hallucination in Large Vision-Language Models
- itsqyh/Shallow-Focus-Deep-Fixes: [EMNLP 2025 Oral] 🎉 Shallow Focus, Deep Fixes: Enhancing Shallow Layers Vision Attention Sinks to Alleviate Hallucination in LVLMs
- Massive-activations-VLMs: [IPM 2025] Code for the paper: What Drives Attention Sinks? A Study of Massive Activations and Rotational Positional Encoding in Large Vision-Language Models
