
Can 'free speech' be 'moderated'? Yes.


Editor's note: This essay was originally published on July 19, 2016 on Medium. It has been republished here in light of recent and somewhat related events.

Can what we call free speech be moderated when expressed on today’s globally connected communication platforms, without limiting the very openness and freedom that these platforms provide? Yes. Really.

In this post I wanted to share my personal thoughts and experience on this important topic.



From the earliest days of the “net”, meaning going back to the original online newsgroups and subsequently dial-up, the presence of trolling has been a constant and unrelenting force of ill-will and bad taste. In the early days of online forums, some community members would post deliberately provocative comments or posts simply to get a rise out of other members. Within a very short time, such trolling would be met by an increasingly vitriolic exchange of hyperbolic insults. These insults would too often degrade into racial, ethnic, geographic, or other slurs.

Unfortunately, such behavior was rarely punished and, for reasons of zeitgeist and the effectively homogenous makeup of those forums, often even tolerated. One could argue trolling became somewhat of a celebrated skill because it would be used to silence views that ran counter to some conventional view in a forum. In other words, trolling became a way to reduce the very exchange of ideas people wanted to enjoy.

Subsequently, with the increasing number of people online and rise of chat and email, a new form of trolling was added to that arsenal of hate — person-to-person harassment that came with instant messaging or email. The ability to connect to the individual target of forum-trolling was now a feature. This rise of 1:1 cyberbullying took trolling to the next level.

It was often said that the overwhelming majority of people who actively participate in, or passively support, these acts of hate and bullying would never do so if they had to do so verbally and in person. This became somewhat of an excuse: the idea that it is somehow the tool of communication, the platform, that causes people to lose control of their frontal lobe and unleash words in writing that they could never say eye to eye. Again, because of this “out” there was some level of tolerance for the behavior since, you know, the keyboard is so much more difficult to control than the mouth.

All the while, the physical world was making progress at thwarting directed and hateful speech. As our society passed the generational torch from Baby Boomers to Generation X to Millennials, we collectively became much less tolerant of hateful speech, or perhaps more politically correct about it. This was not easy to accomplish and was a contentious journey.

Going as far back as attempting to regulate speech deemed pornographic or political, the Supreme Court has ruled in difficult free speech cases. For many the key, and often controversial, ruling will remain Potter Stewart’s “I know it when I see it” description of hard-core pornography. In this “realistic and gallant” example of court candor (see https://en.m.wikipedia.org/wiki/I_know_it_when_I_see_it for background and citations) he led an effort to put in place a ruling that limited what was viewed as something that was undeniably without limits.

The idea of limiting speech in any way is incredibly risky to many. Most liberal-minded people believe that the speech most important to protect and see openly expressed is that which makes us the most uncomfortable. So by definition, limiting speech that one person deems hateful or disrespectful is the opposite of our First Amendment ideal.


In the US, such a belief led to an even higher level of free speech, political expression. While from the earliest days some forms of speech have been assumed to be subject to potential restrictions (e.g., the risk of immediate danger created by shouting “fire” in a crowded movie house), the remaining expression of political ideas went unregulated, no matter how hateful. This led to “expressions” such as burning crosses, flags, or effigies, often accompanied by hateful written materials. All in all, it was a lot of work to express a lot of hate. Still the Supreme Court upheld the right to do so, so long as the speech did not violate other laws such as fire codes, arson, or safety regulations.


The rise of a movement to control directed hate began in our universities, as is often the case with societal or generational changes. In the 1980s, when I was in college, the world saw the simultaneous challenges of the appearance of AIDS and the rise of a New Conservatism and traditional family values. With this came a wave of Anti-Gay (today this would be Anti-LGBTQ) speech, defamation, and even violence on campus. Universities responded with speech codes, which some, on both sides of the debate, deemed silencing or worse. As we see over time, and often, the norms proclaimed in those university codes became societal norms as students graduated into the workforce. Not everyone became more willing to accept different people or less willing to accept hate directed at those they disagreed with, but it was abundantly clear societal attitudes were changing.


In fact, “hate crime” statutes began to appear in the 1980s. While legal scholars might argue these were redundant with existing laws against violence and property damage, they demonstrated a consensus that a crime motivated by hate deserved special notice and prosecutorial power. These laws also made it easier to measure such crime, and even with better measurement, the amount of hate-driven crime decreased over the following decades. That is a good thing, and a sign that norms were changing.

Even with this progress, the online world lacked any such protections. While the offline world moved forward, the online world seemed stuck in the mid-20th century, before any real efforts to reduce broadly offensive speech (as distinct from strictly protected political speech). The new tools of online forums, messaging, and email were littered with pornography, abuse, and bullying of individuals.

The fear of overt government regulation (perhaps on the heels of the new conservatism), and frankly the fear of losing competitively, resulted in quick action by many players as well as the creation and success of many companies designed to help both individuals and corporations protect against offensive content.

Whether it was moderating comment threads about a new product or protecting mail servers and accounts from SPAM, the product teams I worked with and was part of were quick to dig in and find ways to protect both users and our own business. This did not happen lightly. For example, if you’re a mail service (like Hotmail) or a mail client (like Outlook) your whole existence depends on, you know, reliably delivering mail. Thus the idea of just taking over and blocking certain mail seemed to run entirely counter to a platform or protocol view of your role in the flow of information or exchange of ideas.

Users and businesses demanded protections and even though many companies and services ended up in litigation, the industry moved forward. I once spent a good solid week in a very hot San Mateo courtroom attempting to justify Outlook’s SPAM filter by signing up a mediator to a variety of online forums and then waiting for the “know it when you see it” to start rolling in. While that experiment worked, we still had to settle because our blocking mail wasn’t viewed as entirely “fair”.

We went back, redesigned our product and continued to favor protecting users. The marketplace worked without formal regulation. In fact, if you look at any reviews of enterprise email or free mail services from the 90’s you will see “protection” and “filtering” right up there as criteria that were evaluated.
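The “protection” and “filtering” criteria mentioned above were typically served by simple probabilistic classifiers trained on word frequencies. As a minimal, purely illustrative sketch of that idea (hypothetical word counts, and not a description of Outlook’s or Hotmail’s actual filter), a naive-Bayes-style score compares how much more often each word appears in known spam than in legitimate mail:

```python
from collections import Counter
import math

# Hypothetical training data: word counts from labeled spam and legitimate ("ham") mail.
spam_words = Counter({"free": 40, "winner": 25, "meeting": 2})
ham_words = Counter({"free": 5, "winner": 1, "meeting": 30})

def spam_score(message: str) -> float:
    """Return the log-odds that `message` is spam, with add-one smoothing
    so unseen words do not zero out the score."""
    spam_total = sum(spam_words.values())
    ham_total = sum(ham_words.values())
    score = 0.0
    for word in message.lower().split():
        p_spam = (spam_words[word] + 1) / (spam_total + 2)
        p_ham = (ham_words[word] + 1) / (ham_total + 2)
        score += math.log(p_spam / p_ham)
    return score

# A positive score means the words look more like spam than legitimate mail.
print(spam_score("free winner") > 0)   # True
print(spam_score("team meeting") > 0)  # False
```

Real filters of the era layered many more signals (sender reputation, headers, user block lists) on top of this kind of scoring, which is exactly why a single threshold could end up blocking mail in ways a court might not consider “fair”.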


Today companies employ vast amounts of technology, people, and dollars to prevent abuse, denial of service, and in many cases offensive content. At the same time, too many forums exist where hateful or offensive content continues unabated. When I think back to that sweltering courtroom, I do not understand what the holdup is. I understand the idea that the industry provides “platforms” and in doing so should be agnostic about content. Still, I think we can be more responsible and respectful.

It is not without risk to make it possible and easy to block (or cause to be blocked) someone’s speech on a platform. Doing so gets to the heart of the formation of our country and our sense that the most borderline and edgy speech should be most protected. It is also why most services exist and why they are most often celebrated around the world as symbols of free expression.

From a user perspective we should be careful in talking about a “right to use” a particular service, because none of us really want to see a service viewed as some sort of “essential facility” by the legal system (a specific term I learned from the DOJ and EU). We want services to be insanely useful, but not regulated like other insanely useful privileges. However, we can vote with our accounts and we can be vocal through many means about what we think as individual users. We can use our marketplace influence to inform and change what we don’t agree with. Product teams can and should be tuned into and act on this feedback. We want the marketplace to work and to respond.

In my view, today’s online forums take the place of universities in shaping the “modern” way to engage with other humans, if for no other reason than the sheer number of people participating. As a whole our industry tends towards self-determinism and self-regulation yet we find ourselves today with a number of incredibly important platforms that are not keeping up with the basic test of “know it when we see it”. This is not to single out any one platform just as we could not single out any one free email service or messaging service back in the day. Rather this is something that every platform that supports broadcasting or 1:1 speech simply needs to continuously work and improve.

It is easy to claim that providing a platform implies it is for others to use in an unfiltered or neutral manner, but modern services are already more than passive. Perhaps if this were still an era where industry made printing presses, cameras, and recorders, that would be reasonable. Today our industry provides interactive services that make constant decisions over what content to show, in what order, and to whom, along with tools to manually point out potential issues. I believe with that comes a responsibility to also “know” what is hateful, obscene, or aggravating “when it is seen” and to act on that point of view.
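The tools the paragraph above mentions (ranking decisions plus manual flagging) naturally combine into a moderation policy. A hypothetical sketch, with made-up thresholds and a made-up `Post` type, not any platform’s actual policy: act automatically only when an automated score and user reports agree, and route conflicts to a human who can “know it when they see it”:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    model_score: float   # hypothetical classifier output in [0, 1]
    user_reports: int    # number of "flag this" clicks from users

def moderation_action(post: Post) -> str:
    """Illustrative policy: act when both signals agree, review when they conflict."""
    flagged_by_users = post.user_reports >= 3
    flagged_by_model = post.model_score >= 0.9
    if flagged_by_model and flagged_by_users:
        return "remove"          # both signals agree: high confidence
    if flagged_by_model or flagged_by_users:
        return "human_review"    # one signal alone: let a person decide
    return "allow"

print(moderation_action(Post("...", 0.95, 7)))  # remove
print(moderation_action(Post("...", 0.40, 5)))  # human_review
print(moderation_action(Post("...", 0.10, 0)))  # allow
```

The design choice here is the point of the essay: the platform is not neutral either way, so the question is only whether the decisions it already makes reflect a point of view about hateful content.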


The free market works. Some services or forums might go too far, becoming too heavy-handed or too overt in championing a specific point of view. That would be over-reaching, I think, but it is also well within their rights as companies. Some services might learn the lessons of “false positives” from protecting against SPAM and need to moderate their efforts or provide controls. The tools exist for companies to do more, so let’s see them put to use.

As users we should campaign for or choose platforms that support the kind of dialog we wish to see and step back from those that fail to do so. We do not have “rights” to use products and services but we have the “A” and the “U” in MAU, so let’s use them appropriately.


Steven Sinofsky is a board partner at Andreessen Horowitz, an adviser at Box Inc., and an advisor/investor to Silicon Valley startups. Follow him @stevesi or read more at https://medium.learningbyshipping.com/
