affiliation: "Associate Professor, Shanghai Jiao Tong University",
175
-
description: "Talk: TBA",
176
-
link: "https://sjtu-xai-lab.github.io/#people"
177
+
description: "Talk: Can Neural Network Interpretability Be the Key to Breaking Through Scaling Law Limitations in Deep Learning?",
178
+
link: "https://sjtu-xai-lab.github.io/#people",
179
+
abstract: "The “lack of interpretability” and the “constraints of the scaling law” are two major bottlenecks in deep learning, but they fundamentally converge on the same root cause—the absence of a foundational explanation, localization, and debugging representation problems of a neural network. Currently, most explainable AI research remains at the engineering level, and fails to build up a theoretical connection between the “detailed knowledge representation” and “generalization power.” The interaction-based Interpretability theory proposed by Dr. Quanshi Zhang has partially addressed these issues from a new perspective. It rigorously demonstrates that the complex inference logic of neural networks can be comprehensively summarized as sparse interactions. Based on these interactions, the theory successfully explains the root causes of neural network performance, thereby breaking free from the black-box training paradigm. This enables real-time monitoring and correction of model representation flaws, improving training and testing efficiency, and ultimately overcoming the constraints of the scaling law.",
180
+
bio: "Dr. Quanshi Zhang is a tenured associate professor in the Department of Computer Science and Engineering at Shanghai Jiao Tong University. He has received the ACM China Rising Star Award. He obtained his Ph.D. from the University of Tokyo, Japan in 2014 and conducted postdoctoral research at the University of California, Los Angeles (UCLA) from 2014 to 2018. Dr. Zhang’s research mainly focuses on explainable AI, and has proposed theory system of interaction-based explanation. He serves as an action editor for TMLR, area chair for NeurIPS 2024 and 2025, presented tutorials on interpretability at IJCAI 2020 and IJCAI 2021."
question: "How are the shared task submissions evaluated?",
299
303
answer: "Shared task submissions will be evaluated by the workshop organizers and MIB creators based on the novelty and effectiveness of the proposed method. In practice, including more model-task combinations in the evaluation will strengthen high-scoring submissions by demonstrating the generality of the proposed method's effectiveness. Novelty will be evaluated in light of currently established methods for each one of the tracks."
304
+
},
305
+
{
306
+
question: "Are submissions to the shared task archival?",
307
+
answer: "Yes, submissions to the shared task will be considered archival, and will be published in the BlackboxNLP 2025 workshop proceedings on the ACL Anthology."
src/pages/shared_task.mdx (2 additions, 2 deletions)
@@ -1,6 +1,6 @@
 # Shared Task

-⚠️ **Interested in participating?** Join our [Discord server](https://discord.gg/6QhDHCJ9) to stay updated and share your ideas with other participants!
+⚠️ **Interested in participating?** Join our [Discord server](http://discord.gg/n5uwjQcxPR) to stay updated and share your ideas with other participants!

 ## Call for Submissions

@@ -53,7 +53,7 @@ In order to be considered for the final ranking, submissions to either one of th
 <br/>
 This ensures that all submissions are evaluated on a common set of tasks and models, which are selected to require as few computational resources as possible. **Submissions that do not include these three combinations will not be considered for the final ranking, but will still be included in the leaderboard.**
 <br/>
-Submissions to either track will be evaluated by organizers on the private test set from the MIB benchmark, and results will be made available on the [MIB Leaderboard](https://huggingface.co/spaces/mib-bench/leaderboard). Participants will be invited to submit a technical report describing their approach, results, and any insights gained during the process. The report should be no more than 4 pages long (excluding references) and follow the [BlackboxNLP 2025 formatting guidelines](https://blackboxnlp.github.io/2025/call).
+Submissions to either track will be evaluated by organizers on the private test set from the MIB benchmark, and results will be made available on the [MIB Leaderboard](https://huggingface.co/spaces/mib-bench/leaderboard). Participants will be invited to submit a technical report describing their approach, results, and any insights gained during the process. The report should be no more than 4 pages long (excluding references) and follow the [BlackboxNLP 2025 formatting guidelines](https://blackboxnlp.github.io/2025/call). **The report will be considered archival and will be published in the BlackboxNLP 2025 workshop proceedings on the ACL Anthology.**