AI-generated website
Quantifying social behavior in laboratory animals is fundamental to neuroscience but remains hindered by the subjectivity of manual annotation. The Multi-Agent Behavior (MABe) challenge addresses this by benchmarking automated behavior recognition from pose data, yet faces obstacles such as extreme class imbalance, complex spatial topology, and cross-laboratory domain shifts.
In this work, we propose Ego-GAT-SqueezeNet, a unified framework for multi-agent behavior understanding. First, we introduce an egocentric alignment strategy that renders agent features invariant to translation and rotation. Second, we employ a Graph Attention Network (GAT) to explicitly model the dynamic spatial topology among agents. Crucially, we integrate a Squeezeformer backbone that uses efficient downsampling to capture long-range dependencies in high-frequency sequences. To handle environmental heterogeneity, we apply Feature-wise Linear Modulation (FiLM) to dynamically recalibrate features conditioned on laboratory and subject identities. Our approach achieves an F1-score of 0.7702 on the validation set, outperforming baselines by identifying rare social actions across diverse experimental setups.
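The egocentric alignment step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes 2-D keypoints and a precomputed heading angle for the focal animal, and the function name and signature are invented for this example.

```python
import numpy as np

def egocentric_align(poses, center, heading):
    """Translate keypoints so `center` (e.g. the focal animal's centroid)
    is the origin, then rotate by -heading so the focal animal faces the
    +x axis. The result is invariant to the agent's global position and
    orientation.

    poses:   (K, 2) array of keypoint coordinates
    center:  (2,) translation reference point
    heading: body-axis angle in radians
    """
    c, s = np.cos(-heading), np.sin(-heading)
    R = np.array([[c, -s],
                  [s,  c]])          # 2-D rotation by -heading
    return (poses - center) @ R.T    # translate, then rotate
```

For example, a keypoint directly "ahead" of an animal facing +y maps onto the +x axis after alignment, regardless of where the animal sits in the arena.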
Accurate pasture biomass estimation is critical for precision grazing management yet remains challenged by the trade-off between the scalability of remote sensing and the reliability of manual sampling. To address this, we introduce PastureNet, a novel hierarchical ensemble framework that estimates biomass directly from high-resolution RGB images. Unlike traditional approaches, PastureNet synergizes diverse inductive biases by integrating three state-of-the-art Vision Transformers: DINOv3 (object-centric), SigLIP 2 (semantic-aligned), and EVA-02 (texture-sensitive). A key innovation is the integration of zero-shot semantic concept scores to inject explicit ecological domain knowledge (e.g., clover presence) into the regression pipeline, alongside a matrix reconciliation post-processing step that enforces biological consistency across biomass components. Evaluated on a heterogeneous Australian dataset, our method achieves a weighted R² of 0.70, significantly outperforming CNN baselines (0.47) and demonstrating robust generalization without requiring physical metadata at inference time.
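The reconciliation idea, that per-component biomass predictions should be consistent with the predicted total, can be sketched as below. This is a simplified proportional-scaling version under assumed component names; the actual matrix reconciliation in PastureNet may differ.

```python
import numpy as np

def reconcile(total_pred, component_preds):
    """Rescale non-negative component predictions (e.g. grass, clover,
    dead matter) so they sum exactly to the predicted total biomass,
    preserving their relative proportions."""
    comp = np.clip(component_preds, 0.0, None)  # biomass cannot be negative
    s = comp.sum()
    if s == 0.0:
        # degenerate case: split the total evenly across components
        return np.full_like(comp, total_pred / len(comp))
    return comp * (total_pred / s)
```

After this step the components add up to the total by construction, so downstream consumers never see a clover estimate exceeding the whole sward's biomass.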
Abstract: The JiT (Just image Transformers) architecture proposed by Li and He builds on the manifold hypothesis and shows, via direct prediction of the clean image (x-prediction), that a simple linear layer combined with a ViT can effectively handle high-dimensional pixel data. However, JiT's minimalist linear patch embedding may be insufficient to capture the highly curved nonlinear manifold structure of natural images. This work first introduces a SiLU activation into the embedding layer, constructing a nonlinear bottleneck to strengthen the fit to low-dimensional manifold embeddings. Going further, we examine the fundamental tension in the backbone between manifold constraints (dimensionality reduction) and computational capacity (dimensionality expansion). Through comparative experiments that replace the interior of the Transformer block with bottleneck structures, we reveal a key precision–diversity trade-off (precision–recall trade-off): explicit dimensionality-reducing compression effectively filters off-manifold noise, markedly improving the fidelity (precision) and FID of generated images; yet this strict manifold constraint also limits the model's capacity to represent high-entropy stochastic deviations, reducing sample diversity (recall). In addition, to address JiT's lack of semantic constraints, we introduce self-supervised auxiliary losses for time and rotation prediction. Experiments on ImageNet 256 × 256 show that the nonlinear embedding and self-supervised signals effectively improve FID, while the block-level bottleneck experiments demonstrate, from the opposite direction, the necessity of computational capacity in diffusion-model backbones.
Keywords: computer vision; diffusion models; JiT; nonlinear manifolds; bottleneck structures; high-dimensional data fitting; self-supervised learning
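The nonlinear patch embedding described above can be sketched as a two-layer projection with a SiLU bottleneck replacing JiT's single linear map. This is an illustrative NumPy sketch with invented weight names, not the authors' code; bottleneck width and initialization are assumptions.

```python
import numpy as np

def silu(x):
    """SiLU activation: x * sigmoid(x)."""
    return x / (1.0 + np.exp(-x))

def nonlinear_patch_embed(patches, W1, W2):
    """Nonlinear patch embedding: project flattened pixel patches through
    a SiLU bottleneck before the final embedding dimension, instead of a
    single linear layer as in vanilla JiT.

    patches: (N, P) flattened patches
    W1:      (P, H) projection into the bottleneck
    W2:      (H, D) projection to the embedding dimension
    """
    return silu(patches @ W1) @ W2
```

The added nonlinearity is what allows the embedding to bend, rather than only shear, the patch distribution toward a low-dimensional manifold.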
Computer Graphics Fundamentals final project
CSC207 Drawing App (Python)
Pygame application that plays CLRS video(s) made using 3b1b's manim