
OpenAI's Sam Altman breaks silence on AI executive order


Update: Hours after this story published, Sam Altman posted on X/Twitter saying, "there are some great parts about the AI EO, but as the govt implements it, it will be important not to slow down innovation by smaller companies/research teams."

Altman also said he is "pro-regulation on frontier systems," or large-scale foundation models, and "against regulatory capture."


In the wake of President Biden's executive order on Monday, AI companies and industry leaders have weighed in on this watershed moment in AI regulation. But the biggest player in the AI space, OpenAI, has been conspicuously quiet.



The Biden-Harris administration's far-ranging executive order addressing the risks of AI builds upon voluntary commitments secured by 15 leading AI companies. OpenAI was among the first batch of companies to promise the White House safe, secure, and trustworthy development of its AI tools. Yet the company hasn't issued any statement on its website or X (formerly known as Twitter). CEO Sam Altman, who regularly shares OpenAI news on X, hasn't posted anything either.

OpenAI has not responded to Mashable's request for comment.

SEE ALSO: White House announces new AI initiatives at Global Summit on AI Safety

Of the 15 companies that made voluntary commitments to the Biden administration, the following have made public statements, all of which expressed support for the executive order: Adobe, Amazon, Anthropic, Google, IBM, Microsoft, Salesforce, and Scale AI. Nvidia declined to comment.

In addition to crickets from OpenAI, Mashable has yet to hear from Cohere, Inflection, Meta, Palantir, and Stability AI. But OpenAI and Altman's publicity tour proclaiming the urgent risks of AI and the need for regulation makes the company's silence all the more noticeable.

Altman has been vocal about the threat posed by the generative AI his own company makes. In May, Altman, along with technology pioneers Geoffrey Hinton and Bill Gates, signed an open letter stating, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

At a Senate hearing in May, Altman expressed the need for AI regulation: "I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that," said Altman in response to an inquiry from Sen. Richard Blumenthal, D-CT, about the threat of superhuman machine intelligence.


So far, cooperation with lawmakers and world leaders has worked in OpenAI's favor. Altman participated in the Senate's bipartisan closed-door AI summit, giving OpenAI a seat at the table for formulating AI legislation. Shortly after Altman's testimony, leaked documents from OpenAI showed the company lobbying for weaker regulation in the European Union.

It's unclear where OpenAI stands on the executive order, but open-source advocates say the company already has too much lobbying influence. On Wednesday, the same day as the AI Safety Summit in the U.K., more than 70 AI leaders issued a joint statement calling for a more transparent approach to AI regulation. "The idea that tight and proprietary control of foundational AI models is the only path to protecting us from society-scale harm is naive at best, dangerous at worst," said the statement.

Meta Chief AI Scientist Yann LeCun, one of the signatories, doubled down on this sentiment on X (formerly known as Twitter) by calling out OpenAI, DeepMind (a subsidiary of Google), and Anthropic for using fear-mongering to ensure favorable outcomes. "[Sam] Altman, [Demis] Hassabis, and [Dario] Amodei are the ones doing massive corporate lobbying at the moment. They are the ones who are attempting to perform a regulatory capture of the AI industry," he posted.

Anthropic and Google leadership have both provided statements supporting the executive order, leaving OpenAI as the only company accused of regulatory capture that has yet to comment.

What could the executive order mean for OpenAI?

Many of the testing provisions in the EO relate to huge foundation models not yet on the market and future development of AI systems, suggesting consumer-facing tools like OpenAI's ChatGPT won't be impacted much.

"I don't think we're likely to see any immediate changes to any of the generative AI tools available to consumers," said Jake Williams, former US National Security Agency (NSA) hacker and Faculty member at IANS Research. "OpenAI, Google, and others are definitely training foundation models and those are specifically called out in the EO if they might impact national security."

So, whatever OpenAI is working on might be subjected to government testing.

In terms of how the executive order might directly impact OpenAI, Beth Simone Noveck, director of the Burnes Center for Social Change, said it could slow the pace at which new products and updates are released, and companies will have to invest more in research and development and compliance.

"Companies developing large-scale language models (e.g. ChatGPT, Bard and those trained on billions of parameters of data) will be required to provide ongoing information to the federal government, including details of how they test their platforms," said Noveck, who previously served as the first United States Deputy Chief Technology Officer under President Obama.

More than anything, the executive order signals an alignment with growing consumer expectations for greater control and protection of their personal data, said Avani Desai, CEO of Schellman, a top CPA firm that specializes in IT audit and cybersecurity.

"This is a huge win for privacy advocates as the transparency and data privacy measures can boost user confidence in AI-powered products and services," Desai said.

So while the consequences of the executive order may not be immediate, it squarely applies to OpenAI's tools and practices. You'd think OpenAI might have something to say about that.

Topics Artificial Intelligence OpenAI
