Publication date: 2026-04-04
Entries: 20
1. Anthropic essentially bans OpenClaw from Claude by making subscribers pay extra
- Source: The Verge AI
- Published: 2026-04-03 23:52 UTC
- Link: https://www.theverge.com/ai-artificial-intelligence/907074/anthropic-openclaw-claude-subscription-ban
Summary: Using OpenClaw with Claude AI is about to get a lot more expensive, thanks to Anthropic's new policy changes. Beginning April 4th at 3PM ET, users will "no longer be able to use your Claude subscription limits for third-…
2. Google DeepMind’s Research Lets an LLM Rewrite Its Own Game Theory Algorithms — And It Outperformed the Experts
- Source: MarkTechPost
- Published: 2026-04-03 22:26 UTC
- Link: https://www.marktechpost.com/2026/04/03/google-deepminds-research-lets-an-llm-rewrite-its-own-game-theory-algorithms-and-it-outperformed-the-experts/
Summary: Designing algorithms for Multi-Agent Reinforcement Learning (MARL) in imperfect-information games — scenarios where players act sequentially and cannot see each other’s private information, like poker — has historically…
3. OpenAI’s AGI boss is taking a leave of absence
- Source: The Verge AI
- Published: 2026-04-03 20:22 UTC
- Link: https://www.theverge.com/ai-artificial-intelligence/906965/openais-agi-boss-is-taking-a-leave-of-absence
Summary: OpenAI is undergoing another round of C-suite changes, according to an internal memo viewed by The Verge. Fidji Simo, OpenAI's CEO of AGI deployment - who was until recently the company's CEO of applications - says in th…
4. Apple’s best product ever
- Source: The Verge AI
- Published: 2026-04-03 12:52 UTC
- Link: https://www.theverge.com/podcast/906548/best-apple-product-vergecast
Summary: All week, we've been asking you to help us rank the 50 best products Apple ever made, as we mark the company's 50th anniversary. Thanks to everyone who pitched in - we ended up with more than 1.6 million votes! We also h…
5. Chatbots are now prescribing psychiatric drugs
- Source: The Verge AI
- Published: 2026-04-03 11:43 UTC
- Link: https://www.theverge.com/ai-artificial-intelligence/906525/ai-chatbot-prescribe-refill-psychiatric-drugs
Summary: Utah is allowing an AI system to prescribe psychiatric drugs without a doctor. It's only the second time the state - and the country - has delegated this kind of clinical authority to AI. State officials say it could bri…
6. TII Releases Falcon Perception: A 0.6B-Parameter Early-Fusion Transformer for Open-Vocabulary Grounding and Segmentation from Natural Language Prompts
- Source: MarkTechPost
- Published: 2026-04-03 08:49 UTC
- Link: https://www.marktechpost.com/2026/04/03/tii-releases-falcon-perception-a-0-6b-parameter-early-fusion-transformer-for-open-vocabulary-grounding-and-segmentation-from-natural-language-prompts/
Summary: In the current landscape of computer vision, the standard operating procedure involves a modular ‘Lego-brick’ approach: a pre-trained vision encoder for feature extraction paired with a separate decoder for task predicti…
7. Step by Step Guide to Build an End-to-End Model Optimization Pipeline with NVIDIA Model Optimizer Using FastNAS Pruning and Fine-Tuning
- Source: MarkTechPost
- Published: 2026-04-03 07:48 UTC
- Link: https://www.marktechpost.com/2026/04/03/step-by-step-guide-to-build-an-end-to-end-model-optimization-pipeline-with-nvidia-model-optimizer-using-fastnas-pruning-and-fine-tuning/
Summary: In this tutorial, we build a complete end-to-end pipeline using NVIDIA Model Optimizer to train, prune, and fine-tune a deep learning model directly in Google Colab. We start by setting up the environment and preparing t…
8. How Emotion Shapes the Behavior of LLMs and Agents: A Mechanistic Study
- Source: arXiv cs.AI
- Published: 2026-04-03 04:00 UTC
- Link: https://arxiv.org/abs/2604.00005
Summary: Emotion plays an important role in human cognition and performance. Motivated by this, we investigate whether analogous emotional signals can shape the behavior of large la…
9. One Panel Does Not Fit All: Case-Adaptive Multi-Agent Deliberation for Clinical Prediction
- Source: arXiv cs.AI
- Published: 2026-04-03 04:00 UTC
- Link: https://arxiv.org/abs/2604.00085
Summary: Large language models applied to clinical prediction exhibit case-level heterogeneity: simple cases yield consistent outputs, while complex cases produce divergent predicti…
10. Open, Reliable, and Collective: A Community-Driven Framework for Tool-Using AI Agents
- Source: arXiv cs.AI
- Published: 2026-04-03 04:00 UTC
- Link: https://arxiv.org/abs/2604.00137
Summary: Tool-integrated LLMs can retrieve, compute, and take real-world actions via external tools, but reliability remains a key bottleneck. We argue that failures stem from both…
11. A Safety-Aware Role-Orchestrated Multi-Agent LLM Framework for Behavioral Health Communication Simulation
- Source: arXiv cs.AI
- Published: 2026-04-03 04:00 UTC
- Link: https://arxiv.org/abs/2604.00249
Summary: Single-agent large language model (LLM) systems struggle to simultaneously support diverse conversational functions and maintain safety in behavioral health communication.
12. Human-in-the-Loop Control of Objective Drift in LLM-Assisted Computer Science Education
- Source: arXiv cs.AI
- Published: 2026-04-03 04:00 UTC
- Link: https://arxiv.org/abs/2604.00281
Summary: Large language models (LLMs) are increasingly embedded in computer science education through AI-assisted programming tools, yet such workflows often exhibit objective drift…
13. Improvisational Games as a Benchmark for Social Intelligence of AI Agents: The Case of Connections
- Source: arXiv cs.AI
- Published: 2026-04-03 04:00 UTC
- Link: https://arxiv.org/abs/2604.00284
Summary: We formally introduce an improvisational wordplay game called Connections to explore reasoning capabilities of AI agents. Playing Connections combines skills in knowledge re…
14. Collaborative AI Agents and Critics for Fault Detection and Cause Analysis in Network Telemetry
- Source: arXiv cs.AI
- Published: 2026-04-03 04:00 UTC
- Link: https://arxiv.org/abs/2604.00319
Summary: We develop algorithms for collaborative control of AI agents and critics in a multi-actor, multi-critic federated multi-agent system. Each AI agent and critic has access to…
15. Signals: Trajectory Sampling and Triage for Agentic Interactions
- Source: arXiv cs.AI
- Published: 2026-04-03 04:00 UTC
- Link: https://arxiv.org/abs/2604.00356
Summary: Agentic applications based on large language models increasingly rely on multi-step interaction loops involving planning, action execution, and environment feedback. While…
16. In harmony with gpt-oss
- Source: arXiv cs.AI
- Published: 2026-04-03 04:00 UTC
- Link: https://arxiv.org/abs/2604.00362
Summary: No one has independently reproduced OpenAI's published scores for gpt-oss-20b with tools, because the original paper discloses neither the tools nor the agent harness. We r…
17. Decision-Centric Design for LLM Systems
- Source: arXiv cs.AI
- Published: 2026-04-03 04:00 UTC
- Link: https://arxiv.org/abs/2604.00414
Summary: LLM systems must make control decisions in addition to generating outputs: whether to answer, clarify, retrieve, call tools, repair, or escalate. In many current architectu…
18. Self-Routing: Parameter-Free Expert Routing from Hidden States
- Source: arXiv cs.AI
- Published: 2026-04-03 04:00 UTC
- Link: https://arxiv.org/abs/2604.00421
Summary: Mixture-of-Experts (MoE) layers increase model capacity by activating only a small subset of experts per token, and typically rely on a learned router to map hidden states…
19. Execution-Verified Reinforcement Learning for Optimization Modeling
- Source: arXiv cs.AI
- Published: 2026-04-03 04:00 UTC
- Link: https://arxiv.org/abs/2604.00442
Summary: Automating optimization modeling with LLMs is a promising path toward scalable decision intelligence, but existing approaches either rely on agentic pipelines built on clos…
20. Towards Reliable Truth-Aligned Uncertainty Estimation in Large Language Models
- Source: arXiv cs.AI
- Published: 2026-04-03 04:00 UTC
- Link: https://arxiv.org/abs/2604.00445
Summary: Uncertainty estimation (UE) aims to detect hallucinated outputs of large language models (LLMs) to improve their reliability. However, UE metrics often exhibit unstable per…