
AI automation still poses accessibility issues, especially for audio transcription

Source: Global Perspective Monitoring | Editor: hotspot | Time: 2025-07-03 15:27:31

The case for human oversight of artificial intelligence (AI) services continues, with the intertwined world of audio transcription, captioning, and automatic speech recognition (ASR) joining the call for applications that complement, not replace, human input.

Captions and subtitles serve a vital role in providing media and information access to viewers who are deaf or hard of hearing, and they've risen in popular use over the past several years. Disability advocates have pushed for better captioning options for decades, highlighting a need that's increasingly relevant with the proliferation of on-demand streaming services. Video-based platforms have quickly latched onto AI, as well, with YouTube announcing early tests of a new AI feature that summarizes entire videos and TikTok exploring its own chatbot.

So with the growing craze over AI as a buoy to tech's limitations, involving the latest AI tools and services in automatic captioning might seem like a logical next step. 


3Play Media, a video accessibility and captioning services company, focused on the impact of generative AI tools on captions used primarily by viewers who are deaf and hard of hearing in its recently published 2023 State of Automatic Speech Recognition report. According to the findings, users have to be aware of much more than simple accuracy when new, quickly-advancing AI services are thrown in the mix. 

The accuracy of Automatic Speech Recognition

3Play Media's report analyzed the word error rate (the share of words transcribed incorrectly) and the formatted error rate (the combined error rate of words and formatting in a transcribed file) of different ASR engines, or AI-powered caption generators. The various ASR engines are used across a range of industries, including news, higher education, and sports.
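Word error rate is conventionally computed as the word-level edit distance between a reference transcript and the ASR output, divided by the number of reference words (so 90 percent accuracy corresponds to a WER of 0.10). A minimal sketch of that calculation, not 3Play Media's own methodology, which also scores formatting:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # One-row dynamic-programming edit distance over words.
    row = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        diag, row[0] = row[0], i
        for j, h in enumerate(hyp, 1):
            above = row[j]
            row[j] = min(
                above + 1,                       # deletion
                row[j - 1] + 1,                  # insertion
                diag + (0 if r == h else 1),     # match / substitution
            )
            diag = above
    return row[-1] / len(ref)
```

For example, dropping one word from a six-word reference yields a WER of 1/6, roughly 83 percent accuracy by this measure.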

"High-quality ASR does not necessarily lead to high-quality captions," the report found. "For word error rate, even the best engines only performed around 90 percent accurately, and for formatted error rate, only around 80 percent accurately, neither of which is sufficient for legal compliance and 99 percent accuracy, the industry standard for accessibility."

The Americans with Disabilities Act (ADA) requires state and local governments, businesses, and nonprofit organizations that serve the public to "communicate effectively with people who have communication disabilities," including closed or real-time captioning services for deaf and hard-of-hearing people. According to Federal Communications Commission (FCC) compliance rules for television, captions must be accurate, in-sync, continuous, and properly placed to the "fullest extent possible." 

Caption accuracy across the data set fluctuated greatly in different markets and use cases, as well. "News and networks, cinematic, and sports are the toughest for ASR to transcribe accurately," 3Play Media writes, "as these markets often have content with background music, overlapping speech, and difficult audio. These markets have the highest average error rates for word error rate and formatted error rate, with news and networks being the least accurate."

While, in general, performances have improved since 3Play Media's 2022 report, the company found that error rates were still high enough to warrant human editor collaboration for all markets tested.

Keeping humans in the loop

Transcription models at every level, from consumer to industry use, have incorporated AI-generated audio captioning for years. Many already use what's known as "human-in-the-loop" systems, where a multi-step process incorporates both ASR (or AI) tools and human editors. Companies like Rev, another captioning and transcription service, have pointed out the importance of human editors in audio-visual syncing, screen formatting, and other necessary steps in making fully accessible visual media. 
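The multi-step process described here can be sketched as a pipeline in which the ASR draft is always routed through a human editor before publication. The function and type names below are illustrative, not any vendor's API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Caption:
    start: float  # seconds
    end: float
    text: str

def hitl_captions(
    audio: bytes,
    asr_engine: Callable[[bytes], list[Caption]],
    human_review: Callable[[list[Caption]], list[Caption]],
) -> list[Caption]:
    """Human-in-the-loop captioning: ASR produces a fast draft,
    then a human editor corrects words, sync, and formatting."""
    draft = asr_engine(audio)
    return human_review(draft)
```

The design point is simply that the human step is structural, not optional: the pipeline never publishes the raw ASR output directly.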


Human-in-the-loop (also known as HITL) models have been promoted across generative AI development to better monitor implicit bias in AI models, and to guide generative AI with human-led decision making. 

The World Wide Web Consortium (W3C)'s Web Accessibility Initiative has long held its stance on human oversight as well, noted in its guidelines on captions and subtitles. "Automatically-generated captions do not meet user needs or accessibility requirements, unless they are confirmed to be fully accurate. Usually they need significant editing," the organization's guidelines state. "Automatic captions can be used as a starting point for developing accurate captions and transcripts." 

And in a 2021 report on the importance of live human-generated transcriptions, 3Play Media noted similar hesitancies.

"AI doesn’t have the same capacity for contextualization as a human being, meaning that when ASR misunderstands a word, there's a possibility it will be substituted with something irrelevant, or omitted altogether," the company writes. "While there is currently no definitive legal requirement for live captioning accuracy rates, existing federal and state captioning regulations for recorded content state that accessible accommodations must provide an equal experience to that of a hearing viewer... While neither AI nor human captioners can provide 100% accuracy, the most effective methods of live captioning incorporate both in order to get as close as possible."

Flagging hallucinations

In addition to lower accuracy numbers using ASR alone, 3Play Media's report noted an explicit concern for the possibility of AI "hallucinations," both in the form of factual inaccuracies and the inclusion of completely fabricated whole sentences. 

Broadly, hallucinations have become a central complaint against AI-generated text. 



In January, misinformation watchdog NewsGuard published a study on ChatGPT's ease at generating and delivering misleading claims to users posing as "bad actors." It noted that the AI bot shared misinformation about news events 80 out of 100 times in response to leading prompts related to a sampling of false narratives. In June, an American radio host filed a defamation lawsuit against OpenAI after its chatbot, ChatGPT, allegedly offered erroneous "facts" about the host to a user searching for details on a federal court case. 

Just last month, AI leaders (including Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) met with the Biden-Harris administration "to help move toward safe, secure, and transparent development of AI technology" ahead of a possible executive order on responsible AI use. All of the companies in attendance signed on to a series of eight commitments to ensure public security, safety, and trust. 

For AI's incorporation into day-to-day tech — and specifically for developers seeking other forms of text-generating AI as a paved path to accessibility — inaccuracies like hallucinations pose just as great a risk to users, 3Play Media explains.

"From an accessibility standpoint, hallucinations present an even more egregious problem: the false portrayal of accuracy for deaf and hard-of-hearing viewers," the report explains. 3Play writes that, despite impressive performance in producing well-punctuated, grammatical sentences, issues like hallucinations currently pose high risks to users.

Industry leaders are attempting to address hallucinations with continued training, and some of tech's biggest leaders, like Bill Gates, are extremely optimistic. But those in need of accessible services don't have time to wait around for developers to perfect their AI systems. 

"While it’s possible that these hallucinations would be reduced through fine-tuning, the negative consequences for accessibility could be profound," 3Play Media's report concludes. "Human editors remain indispensable in producing high-quality captions accessible to our primary end users: people who are deaf and hard-of-hearing."


Topics: Artificial Intelligence, Social Good, Accessibility
