Published: 2026-04-10
Entries: 20
1. ChatGPT has a new $100 per month Pro subscription
- Source: The Verge AI
- Published: 2026-04-09 22:57 UTC
- Link: https://www.theverge.com/ai-artificial-intelligence/909599/chatgpt-pro-subscription-new
Summary: OpenAI has announced a new version of its ChatGPT Pro subscription that costs $100 per month. The new Pro tier offers "5x more" usage of its Codex coding tool than the $20 per month Plus subscription and "is best for…"
2. Florida launches investigation into OpenAI
- Source: The Verge AI
- Published: 2026-04-09 22:17 UTC
- Link: https://www.theverge.com/policy/909557/openai-florida-investigation
Summary: Florida Attorney General James Uthmeier is launching an investigation into OpenAI over public safety and national security risks, as reported earlier by Reuters. In a statement on Thursday, Uthmeier says there are concerns…
3. Google’s Gemini AI can answer your questions with 3D models and simulations
- Source: The Verge AI
- Published: 2026-04-09 17:57 UTC
- Link: https://www.theverge.com/tech/909391/google-gemini-ai-3d-models-simulations
Summary: Google's latest upgrade for Gemini will allow the chatbot to generate interactive 3D models and simulations in response to your questions. With the new feature, you may see options to rotate the AI-generated model, manually…
4. Understanding Amazon Bedrock model lifecycle
- Source: AWS ML Blog
- Published: 2026-04-09 17:33 UTC
- Link: https://aws.amazon.com/blogs/machine-learning/understanding-amazon-bedrock-model-lifecycle/
Summary: This post shows you how to manage FM transitions in Amazon Bedrock, so you can make sure your AI applications remain operational as models evolve. We discuss the three lifecycle states and how to plan migrations with the new…
5. The future of managing agents at scale: AWS Agent Registry now in preview
- Source: AWS ML Blog
- Published: 2026-04-09 17:28 UTC
- Link: https://aws.amazon.com/blogs/machine-learning/the-future-of-managing-agents-at-scale-aws-agent-registry-now-in-preview/
Summary: Today, we're announcing AWS Agent Registry (preview) in AgentCore, a single place to discover, share, and reuse AI agents, tools, and agent skills across your enterprise.
6. Embed a live AI browser agent in your React app with Amazon Bedrock AgentCore
- Source: AWS ML Blog
- Published: 2026-04-09 17:06 UTC
- Link: https://aws.amazon.com/blogs/machine-learning/embed-a-live-ai-browser-agent-in-your-react-app-with-amazon-bedrock-agentcore/
Summary: This post walks you through three steps: starting a session and generating the Live View URL, rendering the stream in your React application, and wiring up an AI agent that drives the browser while your users watch.
7. Introducing stateful MCP client capabilities on Amazon Bedrock AgentCore Runtime
- Source: AWS ML Blog
- Published: 2026-04-09 14:47 UTC
- Link: https://aws.amazon.com/blogs/machine-learning/introducing-stateful-mcp-client-capabilities-on-amazon-bedrock-agentcore-runtime/
Summary: In this post, you will learn how to build stateful MCP servers that request user input during execution, invoke LLM sampling for dynamic content generation, and stream progress updates for long-running tasks.
8. The AI industry’s race for profits is now existential
- Source: The Verge AI
- Published: 2026-04-09 14:00 UTC
- Link: https://www.theverge.com/podcast/909042/ai-monetization-cliff-anthropic-openai-profitable-ai-existential-moment
Summary: Today on Decoder, let's talk about the looming AI monetization cliff, and whether some of the biggest companies in the space can become real, profitable businesses before they careen right off it. My guest today is Hayde…
9. Google makes it easy to deepfake yourself
- Source: The Verge AI
- Published: 2026-04-09 10:53 UTC
- Link: https://www.theverge.com/ai-artificial-intelligence/909104/youtube-shorts-make-ai-avatar
Summary: YouTube Shorts is rolling out a new AI-powered feature giving creators an easy way to realistically clone themselves on camera. The launch, hinted at earlier this year, reflects the platform's fraught relationship with AI…
10. High-Precision Estimation of the State-Space Complexity of Shogi via the Monte Carlo Method
- Source: arXiv cs.AI
- Published: 2026-04-09 04:00 UTC
- Link: https://arxiv.org/abs/2604.06189
Summary: Determining the state-space complexity of the game of Shogi (Japanese Chess) has been a challenging problem, with previous combinatorial estimates leaving a gap of five orders of magnitude…
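The truncated abstract does not describe the paper's actual procedure, but the general Monte Carlo counting technique it names can be sketched: to estimate the size of a set that is hard to enumerate, sample uniformly from a superset of known size and scale the hit rate. The derangement example below is purely illustrative (it is not Shogi-specific), using the known count D(8) = 14833 as a check:

```python
import math
import random

def estimate_derangements(n: int, samples: int = 200_000) -> float:
    """Monte Carlo estimate of the number of derangements of n items:
    sample uniformly from all n! permutations, measure the fraction
    with no fixed point, and scale by the known superset size n!."""
    hits = 0
    items = list(range(n))
    for _ in range(samples):
        perm = items[:]
        random.shuffle(perm)  # uniform random permutation
        if all(perm[i] != i for i in items):
            hits += 1
    return (hits / samples) * math.factorial(n)

random.seed(0)
est = estimate_derangements(8)
print(est)  # close to the exact value D(8) = 14833
```

With 200,000 samples, the relative standard error of the hit fraction is well under 1%, so the estimate lands within a few percent of the exact count; state-space estimates for games apply the same scale-by-hit-rate idea to a vastly larger superset.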
11. Blind Refusal: Language Models Refuse to Help Users Evade Unjust, Absurd, and Illegitimate Rules
- Source: arXiv cs.AI
- Published: 2026-04-09 04:00 UTC
- Link: https://arxiv.org/abs/2604.06233
Summary: Safety-trained language models routinely refuse requests for help circumventing rules. But not all rules deserve compliance. When users ask for help evading rules imposed by…
12. Toward Reducing Unproductive Container Moves: Predicting Service Requirements and Dwell Times
- Source: arXiv cs.AI
- Published: 2026-04-09 04:00 UTC
- Link: https://arxiv.org/abs/2604.06251
Summary: This article presents the results of a data science study conducted at a container terminal, aimed at reducing unproductive container moves through the prediction of service requirements and dwell times.
13. Weakly Supervised Distillation of Hallucination Signals into Transformer Representations
- Source: arXiv cs.AI
- Published: 2026-04-09 04:00 UTC
- Link: https://arxiv.org/abs/2604.06277
Summary: Existing hallucination detection methods for large language models (LLMs) rely on external verification at inference time, requiring gold answers, retrieval systems, or auxiliary…
14. SymptomWise: A Deterministic Reasoning Layer for Reliable and Efficient AI Systems
- Source: arXiv cs.AI
- Published: 2026-04-09 04:00 UTC
- Link: https://arxiv.org/abs/2604.06375
Summary: AI-driven symptom analysis systems face persistent challenges in reliability, interpretability, and hallucination. End-to-end generative approaches often lack traceability…
15. SELFDOUBT: Uncertainty Quantification for Reasoning LLMs via the Hedge-to-Verify Ratio
- Source: arXiv cs.AI
- Published: 2026-04-09 04:00 UTC
- Link: https://arxiv.org/abs/2604.06389
Summary: Uncertainty estimation for reasoning language models remains difficult to deploy in practice: sampling-based methods are computationally expensive, while common single-pass…
16. Qualixar OS: A Universal Operating System for AI Agent Orchestration
- Source: arXiv cs.AI
- Published: 2026-04-09 04:00 UTC
- Link: https://arxiv.org/abs/2604.06392
Summary: We present Qualixar OS, the first application-layer operating system for universal AI agent orchestration. Unlike kernel-level approaches (AIOS) or single-framework tools…
17. ProofSketcher: Hybrid LLM + Lightweight Proof Checker for Reliable Math/Logic Reasoning
- Source: arXiv cs.AI
- Published: 2026-04-09 04:00 UTC
- Link: https://arxiv.org/abs/2604.06401
Summary: Large language models (LLMs) can produce persuasive arguments in mathematical and logical domains, but such arguments often include minor missteps, including…
18. BDI-Kit Demo: A Toolkit for Programmable and Conversational Data Harmonization
- Source: arXiv cs.AI
- Published: 2026-04-09 04:00 UTC
- Link: https://arxiv.org/abs/2604.06405
Summary: Data harmonization remains a major bottleneck for integrative analysis due to heterogeneity in schemas, value representations, and domain-specific conventions. BDI-Kit provides…
19. On Emotion-Sensitive Decision Making of Small Language Model Agents
- Source: arXiv cs.AI
- Published: 2026-04-09 04:00 UTC
- Link: https://arxiv.org/abs/2604.06562
Summary: Small language models (SLMs) are increasingly used as interactive decision-making agents, yet most decision-oriented evaluations ignore emotion as a causal factor influencing…
20. Rethinking Generalization in Reasoning SFT: A Conditional Analysis on Optimization, Data, and Model Capability
- Source: arXiv cs.AI
- Published: 2026-04-09 04:00 UTC
- Link: https://arxiv.org/abs/2604.06628
Summary: A prevailing narrative in LLM post-training holds that supervised finetuning (SFT) memorizes while reinforcement learning (RL) generalizes. We revisit this claim for reasoning…