AI super PACs, the hottest investment in tech
Overview of Leading The Future and the evolving role of tech money in AI policy, including funding, structure, and regulatory implications discussed by The Verge.
Two former Harvard students claim to be building Halo X, discreet smart glasses with an always-on microphone, real-time transcription, and AI-driven prompts — raising questions about privacy, security, and legality.
NVIDIA NeMo-RL v0.3 adds Megatron-Core support to boost training throughput for large models, addressing DTensor limitations, with long-context training, MoE support, and simplified configuration.
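For orientation only, a hypothetical sketch of what the simplified configuration switch could look like. The `policy.megatron_cfg.*` keys and the example script path below are assumptions drawn from the announcement's description of a Megatron config section, not verified NeMo-RL option names; check the project docs for the real schema.

```python
# Hypothetical sketch (not the official NeMo-RL API): illustrating the kind of
# config override that switches the training backend from DTensor to Megatron-Core.
# The key names and script path below are assumptions for illustration.
megatron_overrides = {
    "policy.megatron_cfg.enabled": True,                   # opt in to the Megatron-Core backend
    "policy.megatron_cfg.tensor_model_parallel_size": 4,   # illustrative parallelism settings
    "policy.megatron_cfg.pipeline_model_parallel_size": 2,
}

# Rendered as Hydra-style command-line overrides for an example training script.
cli_args = " ".join(f"{key}={value}" for key, value in megatron_overrides.items())
print(f"uv run python examples/run_grpo_math.py {cli_args}")
```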
Anthropic updated Claude Opus 4 and 4.1 to end conversations it judges ‘persistently harmful or abusive’ as a last resort, citing the models’ ‘apparent distress’, alongside usage-policy changes that restrict weapons and exploit development.
NVIDIA Research introduces ProRL v2, the latest evolution of Prolonged Reinforcement Learning for LLMs. It explores thousands of extra RL steps, new stabilization techniques, and broad benchmarking to push sustained improvements beyond traditional RL schedules.
PwC and AWS integrate Automated Reasoning checks in Amazon Bedrock Guardrails to validate AI outputs against formal policy rules, enabling auditable compliance, faster safe innovation, and new industry use cases.
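A minimal sketch of the validation step this enables, assuming a Bedrock guardrail with an Automated Reasoning policy has already been created (the guardrail identifier and example text below are placeholders). `ApplyGuardrail` evaluates a candidate output independently of the model call, which is what makes the check auditable.

```python
# Minimal sketch: validate a candidate model output with an existing Bedrock
# guardrail via the ApplyGuardrail API. The guardrail ID/version are placeholders;
# the guardrail is assumed to have an Automated Reasoning policy attached.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

candidate_answer = "Employees may roll over up to 10 unused vacation days per year."

response = bedrock_runtime.apply_guardrail(
    guardrailIdentifier="gr-policy-check-example",   # placeholder ID
    guardrailVersion="1",
    source="OUTPUT",                                  # validate model output, not user input
    content=[{"text": {"text": candidate_answer}}],
)

# "GUARDRAIL_INTERVENED" means at least one configured policy flagged the text;
# the assessments list carries the per-policy findings.
print(response["action"])
print(response.get("assessments", []))
```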
OpenAI has sent a letter to California Governor Gavin Newsom urging the state to lead in harmonizing state AI regulation with national standards and emerging global norms, framing this as a way to preserve US leadership in global AI governance.
NVIDIA announces general availability of Isaac Sim 5.0 and Isaac Lab 2.2 at SIGGRAPH 2025, delivering new robot models, enhanced sensor simulation, ROS 2 integration, and cloud-based deployment via NVIDIA Brev.
TRL expands multimodal alignment for Vision-Language Models with GRPO, GSPO, MPO, and native SFT support, plus ready-to-run scripts and notebooks to streamline post-training alignment.
Overview of TRL's multimodal alignment methods for vision-language models: GRPO, GSPO, MPO, plus native SFT support and DPO-based baselines.
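A minimal GRPO sketch for a vision-language checkpoint, using TRL's real `GRPOTrainer`/`GRPOConfig` entry points; the model ID, dataset name, and toy reward below are illustrative stand-ins rather than the exact setup from the TRL post.

```python
# Minimal sketch: GRPO post-training of a vision-language checkpoint with TRL.
# The dataset below is a hypothetical placeholder with a "prompt" column (plus images);
# swap in a real multimodal dataset in TRL's expected format.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

train_dataset = load_dataset("your-org/your-vlm-prompts", split="train")  # placeholder

def brevity_reward(completions, **kwargs):
    """Toy reward: prefer concise answers (~50 characters).
    Handles both string completions and conversational message lists."""
    texts = [c if isinstance(c, str) else c[0]["content"] for c in completions]
    return [-abs(50 - len(t)) for t in texts]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-VL-3B-Instruct",   # any VLM checkpoint TRL supports
    reward_funcs=brevity_reward,
    args=GRPOConfig(
        output_dir="grpo-vlm-sketch",
        per_device_train_batch_size=2,
        num_generations=2,                  # effective batch must divide by the group size
    ),
    train_dataset=train_dataset,
)
trainer.train()
```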
Open-weight reasoning models gpt-oss-120b and gpt-oss-20b, released under Apache 2.0 and the gpt-oss usage policy, are text-only, compatible with the Responses API, and designed for agentic workflows with strong instruction following, tool use, and adjustable reasoning effort.
OpenAI releases its most capable open-weight models to broaden access and flexibility, backed by OpenAI for Countries and nonprofit support to empower governments, nonprofits, and developers worldwide.
OpenAI releases two open-weight, mixture-of-experts models—GPT OSS 120B and GPT OSS 20B—utilizing MXFP4 4-bit quantization. They’re licensed Apache 2.0 with a minimal usage policy and are accessible via Hugging Face Inference Providers for on-device, on-prem, and cloud deployments.
OpenAI releases GPT OSS, a pair of 120B and 20B mixture-of-experts models with MXFP4 4-bit quantization, licensed Apache 2.0, and delivered via Hugging Face Inference Providers for on-device and server deployments.
Two open-weight OpenAI models (gpt-oss-120b and gpt-oss-20b) using mixture-of-experts and MXFP4 quantization for fast, memory-efficient reasoning. Licensed Apache 2.0; designed for private/local deployments and on-device use, with Hugging Face Inference Providers integration.
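For context on how these open-weight checkpoints are typically run locally, a minimal inference sketch using the Transformers pipeline and the `openai/gpt-oss-20b` Hugging Face repo; hardware assumptions (enough GPU memory for the MXFP4 weights) are left to the reader.

```python
# Minimal sketch: local chat-style inference with the smaller open-weight checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    torch_dtype="auto",
    device_map="auto",          # spread layers across available devices
)

messages = [
    {"role": "user", "content": "Summarize the Apache 2.0 license in one sentence."},
]

outputs = generator(messages, max_new_tokens=128)
print(outputs[0]["generated_text"][-1])   # last message holds the assistant reply
```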
As AI drives soaring data-center power demand, 2025 hinges on nuclear and fusion potential, hydrogen subsidy risks, and shifting investment as grids, regulators, and big tech race to secure electricity.
Foundational concepts on AI consciousness, including Block's classifications, Chalmers' easy vs hard problems, and the AI Moral Status Problem, explained for practitioners.