Multi-Agent Social Behavior Understanding via Ego-GAT-SqueezeNet

Quantifying social behavior in laboratory animals is fundamental to neuroscience but remains hindered by the subjectivity of manual annotation. The Multi-Agent Behavior (MABe) challenge addresses this by benchmarking automated behavior recognition from pose data, yet the task poses challenges such as extreme class imbalance, complex multi-agent interaction topology, and cross-laboratory domain shifts.

In this work, we propose Ego-GAT-SqueezeNet, a unified framework for multi-agent behavior understanding. First, we introduce an egocentric alignment strategy that makes agent features invariant to translation and rotation. Second, we employ a Graph Attention Network (GAT) to explicitly model the dynamic spatial topology among agents. Crucially, we integrate a Squeezeformer backbone whose efficient downsampling captures long-range dependencies in sequences sampled at high frequency. To handle environmental heterogeneity, we use Feature-wise Linear Modulation (FiLM) to dynamically recalibrate features based on laboratory and subject identities. Our approach achieves an F1-score of 0.7702 on the validation set, outperforming the baselines, particularly on rare social actions across diverse experimental setups.
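To make the preprocessing concrete, the sketch below shows one way the egocentric alignment and FiLM conditioning could look in PyTorch. The keypoint indices, tensor shapes, and layer sizes are illustrative assumptions rather than the authors' implementation, and the GAT and Squeezeformer stages are omitted.

```python
# Minimal sketch (not the authors' code): egocentric alignment of 2-D keypoints
# plus FiLM conditioning on a lab/subject identity. The choice of "center" and
# "axis" keypoints and all sizes below are illustrative assumptions.
import torch
import torch.nn as nn

def egocentric_align(kpts, center_idx=0, axis_idx=1):
    """kpts: (T, K, 2) keypoints of one agent over T frames.
    Translate the chosen center keypoint to the origin, then rotate each frame
    so the center->axis vector points along +x (translation/rotation invariance)."""
    centered = kpts - kpts[:, center_idx:center_idx + 1, :]        # remove translation
    axis = centered[:, axis_idx, :]                                # (T, 2) body axis
    theta = torch.atan2(axis[:, 1], axis[:, 0])                    # per-frame heading
    cos, sin = torch.cos(-theta), torch.sin(-theta)
    rot = torch.stack([torch.stack([cos, -sin], -1),
                       torch.stack([sin,  cos], -1)], -2)          # (T, 2, 2) rotations
    return torch.einsum('tij,tkj->tki', rot, centered)             # rotated keypoints

class FiLM(nn.Module):
    """Feature-wise linear modulation: per-channel scale/shift from an identity embedding."""
    def __init__(self, num_ids, dim):
        super().__init__()
        self.emb = nn.Embedding(num_ids, 2 * dim)

    def forward(self, x, ids):
        gamma, beta = self.emb(ids).chunk(2, dim=-1)               # (B, dim) each
        return gamma.unsqueeze(1) * x + beta.unsqueeze(1)          # broadcast over time

# usage: features (B, T, D) conditioned on one lab/subject id per sequence
feats, ids = torch.randn(4, 256, 128), torch.randint(0, 10, (4,))
modulated = FiLM(num_ids=10, dim=128)(feats, ids)
```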


PastureNet - Cross-Domain Biomass Estimation

Accurate pasture biomass estimation is critical for precision grazing management, yet it remains constrained by the trade-off between the scalability of remote sensing and the reliability of manual sampling. To address this, we introduce PastureNet, a novel hierarchical ensemble framework that estimates biomass directly from high-resolution RGB images. Unlike traditional approaches, PastureNet combines complementary inductive biases by integrating three state-of-the-art Vision Transformers: DINOv3 (object-centric), SigLIP 2 (semantic-aligned), and EVA-02 (texture-sensitive). A key innovation is the integration of Zero-shot Semantic Concept Scores, which inject explicit ecological domain knowledge (e.g., clover presence) into the regression pipeline, alongside a Matrix Reconciliation post-processing step that enforces biological consistency across biomass components. Evaluated on a heterogeneous Australian dataset, our method achieves a weighted R² of 0.70, significantly outperforming CNN baselines (0.47) and demonstrating robust generalization without requiring physical metadata at inference time.
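As an illustration of the reconciliation idea, the sketch below applies a standard OLS-style hierarchical reconciliation so that the predicted total biomass equals the sum of its components. The component names, units, and identity-weighted projection are assumptions for illustration, not PastureNet's exact procedure.

```python
# Minimal sketch (assumption: an OLS-style hierarchical reconciliation in which
# total biomass should equal the sum of its components; component names and the
# identity weighting are illustrative, not the authors' exact formulation).
import numpy as np

# Summing matrix S maps the 4 bottom-level components to the full hierarchy:
# rows = [total, green, clover, weeds, dead], cols = [green, clover, weeds, dead].
S = np.array([[1, 1, 1, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)

def reconcile(y_hat):
    """OLS reconciliation: project raw predictions for all 5 series onto the
    subspace where total == sum(components), then re-expand with S."""
    P = np.linalg.inv(S.T @ S) @ S.T          # least-squares mapping to bottom level
    bottom = P @ y_hat                        # coherent bottom-level estimates
    return S @ bottom                         # coherent predictions for every series

# raw model outputs (e.g., kg dry matter/ha): [total, green, clover, weeds, dead]
raw = np.array([2500.0, 1500.0, 600.0, 250.0, 100.0])   # components disagree with total
coherent = reconcile(raw)                                # now total == sum of components
```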


From Classical Algorithms to Deep Learning: The Rebirth and Evolution of AdaBoost, PCA, Sparse Coding, and Particle Filtering

As attention-based large models run up against limits on data, compute, and power, and face rising demands for interpretability, controllability, and reasoning ability, deep learning is seeing a notable "looking back" trend: researchers are returning to the classic algorithmic ideas of the pre-deep-learning era. For example, in November 2025 OpenAI published "Understanding neural networks through sparse circuits," and in December 2025 Sun Maosong's group at Tsinghua University released the paper "H-Neurons," on the existence, role, and origin of hallucination-related neurons in large language models, which uses the L1-sparse linear regressor Lasso to study how hallucination-related neurons are distributed across the network.
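To make the Lasso-based neuron analysis concrete, here is a minimal sketch of the general recipe (not the H-Neurons paper's exact protocol): regress a hallucination score on per-example neuron activations with an L1 penalty and treat the nonzero coefficients as candidate neurons. The data shapes, random stand-in data, and regularization strength are illustrative assumptions.

```python
# Sketch of the general idea only: L1-regularized linear regression (Lasso) from
# per-example neuron activations to a hallucination score; the sparse support
# flags candidate "hallucination-related" neurons.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_examples, n_neurons = 2000, 4096
activations = rng.standard_normal((n_examples, n_neurons))   # hidden-state activations
halluc_score = rng.random(n_examples)                        # stand-in hallucination labels

model = Lasso(alpha=0.05).fit(activations, halluc_score)
candidate_neurons = np.flatnonzero(model.coef_)              # nonzero weights = candidates
print(f"{candidate_neurons.size} candidate neurons selected")
```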

This article takes a close look at how the core ideas behind four classic algorithms, AdaBoost, principal component analysis (PCA), sparse coding, and particle filtering, are being reborn and evolving in the large-model era of 2025. Surveying papers from the past three years leads to a clear conclusion: these classics converge, by different routes, on the central concerns of modern large models, namely alignment, parameter-efficient fine-tuning (PEFT), interpretability, and complex reasoning. AdaBoost's margin theory and error-correction idea not only help explain "benign overfitting" in deep learning, but also address reward hacking in RLHF through Bayesian reward model ensembles (BRME). PCA's low-rank assumption and manifold view directly gave rise to efficient fine-tuning methods such as LoRA-XS and to KV cache compression, while revealing the essentially linear structure of model features. Sparse coding's basis-decomposition idea, realized as sparse autoencoders (SAE), tackles the interpretability problem of neuron superposition and has driven the evolution of MoE architectures and Sparse-Linear Attention (SLA). Particle filtering's sequential state estimation provides a probabilistic framework for chain-of-thought (CoT) reasoning and gives video generation models the ability to simulate a physical world under uncertainty. These classic ideas are becoming key stepping stones in the leap of large models from System 1 to System 2.
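As one concrete example of the sparse-coding thread, the sketch below sets up a toy sparse autoencoder of the kind used to pull superposed activations apart into more interpretable features. The dimensions, expansion factor, and L1 coefficient are illustrative assumptions, not any specific paper's recipe.

```python
# Minimal SAE sketch: an overcomplete ReLU autoencoder trained to reconstruct
# activations under an L1 sparsity penalty on its feature codes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model=768, d_features=8 * 768):
        super().__init__()
        self.enc = nn.Linear(d_model, d_features)
        self.dec = nn.Linear(d_features, d_model)

    def forward(self, x):
        f = F.relu(self.enc(x))          # sparse, overcomplete feature activations
        return self.dec(f), f            # reconstruction + codes

sae = SparseAutoencoder()
acts = torch.randn(64, 768)              # residual-stream activations (stand-in data)
recon, codes = sae(acts)
l1_coeff = 1e-3
loss = F.mse_loss(recon, acts) + l1_coeff * codes.abs().mean()   # reconstruction + sparsity
loss.backward()                          # one optimization step would follow in training
```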

