
A Blueprint for Content Governance and Enforcement

Mark Zuckerberg
My focus in 2018 has been addressing the most important issues facing Facebook. As the year wraps up, I'm writing a series of notes about these challenges and the progress we've made. The first note was about Preparing for Elections and this is the second in the series.
•••
Many of us got into technology because we believe it can be a democratizing force for putting power in people's hands. I've always cared about this and that's why the first words of our mission have always been "give people the power". I believe the world is better when more people have a voice to share their experiences, and when traditional gatekeepers like governments and media companies don't control what ideas can be expressed.
At the same time, we have a responsibility to keep people safe on our services -- whether from terrorism, bullying, or other threats. We also have a broader social responsibility to help bring people closer together -- against polarization and extremism. The past two years have shown that without sufficient safeguards, people will misuse these tools to interfere in elections, spread misinformation, and incite violence. One of the most painful lessons I've learned is that when you connect two billion people, you will see all the beauty and ugliness of humanity.
An important question we face is how to balance the ideal of giving everyone a voice with the realities of keeping people safe and bringing people together. What should be the limits to what people can express? What content should be distributed and what should be blocked? Who should decide these policies and make enforcement decisions? Who should hold those people accountable?
As with many of the biggest challenges we face, there isn't broad agreement on the right approach, and thoughtful people come to very different conclusions on what are acceptable tradeoffs. To make this even harder, cultural norms vary widely in different countries, and are shifting rapidly.
I have focused more on these content governance and enforcement issues than any others over the past couple of years. While it has taken time to understand the complexity of the challenges, we have made a lot of progress. Still, we have significant work ahead to get all our systems to the levels people expect, and where we need to be operating.
Even then, there will always be issues. These are not problems you fix, but issues where you continually improve. Just as a free society will always have crime and our expectation of government is not to eliminate all crime but to effectively manage and reduce it, our community will also always face its share of abuse. Our job is to keep the misuse low, consistently improve over time, and stay ahead of new threats.
In this note, I will outline the approach we're taking. A full system requires addressing both governance and enforcement. I will discuss how we're proactively enforcing our policies to remove more harmful content, preventing borderline content from spreading, giving people more control of their experience, and creating independent oversight and transparency into our systems.
Community Standards
Before getting into what we need to improve, it's important to understand how we've approached these problems until now. Every community has standards, and since our earliest days we've also had our Community Standards -- the rules that determine what content stays up and what comes down on Facebook. Our goal is to err on the side of giving people a voice while preventing real world harm and ensuring that people feel safe in our community. You can read them here: http://www.facebook.com/communitystandards
In April, we went a step further and published our internal guidelines that our teams use to enforce these standards. These guidelines are designed to reduce subjectivity and ensure that decisions made by reviewers are as consistent as possible. For example, our Community Standards on violence and graphic content say "we remove content that glorifies violence or celebrates the suffering or humiliation of others". Sometimes there are reasons to share this kind of troubling content, like to draw attention to human rights abuses or as a news organization covering important events. But there have to be limits, and our guidelines include 18 specific types of content we remove, including visible internal organs and charred or burning people.
The team responsible for setting these policies is global -- based in more than 10 offices across six countries to reflect the different cultural norms of our community. Many of them have devoted their careers to issues like child safety, hate speech, and terrorism, including as human rights lawyers or criminal prosecutors.
Our policy process involves regularly getting input from outside experts and organizations to ensure we understand the different perspectives that exist on free expression and safety, as well as the impacts of our policies on different communities globally. Every few weeks, the team runs a meeting to discuss potential changes to our policies based on new research or data. For each change the team gets outside input -- and we've also invited academics and journalists to join this meeting to understand this process. Starting today, we will also publish minutes of these meetings to increase transparency and accountability.
The team responsible for enforcing these policies is made up of around 30,000 people, including content reviewers who speak almost every language widely used in the world. We have offices in many time zones to ensure we can respond to reports quickly. We invest heavily in training and support for every person and team. In total, they review more than two million pieces of content every day. We issue a transparency report with a more detailed breakdown of the content we take down.
For most of our history, the content review process has been very reactive and manual -- with people reporting content they have found problematic, and then our team reviewing that content. This approach has enabled us to remove a lot of harmful content, but it has major limits: we can't remove harmful content before people see it, and we can't act on content that no one reports.
Accuracy is also an important issue. Our reviewers work hard to enforce our policies, but many of the judgements require nuance and exceptions. For example, our Community Standards prohibit most nudity, but we make an exception for imagery that is historically significant. We don't allow the sale of regulated goods like firearms, but it can be hard to distinguish those from images of paintball or toy guns. As you get into hate speech and bullying, linguistic nuances get even harder -- like understanding when someone is condemning a racial slur as opposed to using it to attack others. On top of these issues, while computers are consistent at highly repetitive tasks, people are not always as consistent in their judgements.
The vast majority of mistakes we make are due to errors enforcing the nuances of our policies rather than disagreements about what those policies should actually be. Today, depending on the type of content, our review teams make the wrong call in more than 1 out of every 10 cases.
Reducing these errors is one of our most important priorities. To do this, in the last few years we have significantly ramped up our efforts to proactively enforce our policies using a combination of artificial intelligence doing the most repetitive work, and a much larger team of people focused on the more nuanced cases. It's important to remember though that given the size of our community, even if we were able to reduce errors to 1 in 100, that would still be a very large number of mistakes.
Proactively Identifying Harmful Content
The single most important improvement in enforcing our policies is using artificial intelligence to proactively report potentially problematic content to our team of reviewers, and in some cases to take action on the content automatically as well.
This approach helps us identify and remove a much larger percent of the harmful content -- and we can often remove it faster, before anyone even sees it rather than waiting until it has been reported.
Moving from reactive to proactive handling of content at scale has only started to become possible recently because of advances in artificial intelligence -- and because of the multi-billion dollar annual investments we can now fund. To be clear, the state of the art in AI is still not sufficient to handle these challenges on its own. So we use computers for what they're good at -- making basic judgements on large amounts of content quickly -- and we rely on people for making more complex and nuanced judgements that require deeper expertise.
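This division of labor between computers and people can be sketched as a simple triage step. This is a minimal illustration, not Facebook's actual pipeline; the threshold values and routing labels are hypothetical:

```python
def triage(confidence: float, auto_threshold: float = 0.99,
           review_threshold: float = 0.5) -> str:
    """Route a piece of flagged content based on classifier confidence.

    Thresholds are illustrative only: clear-cut cases are handled
    automatically, ambiguous ones go to trained human reviewers.
    """
    if confidence >= auto_threshold:
        return "auto-action"          # computers: fast, repetitive judgements
    if confidence >= review_threshold:
        return "human-review-queue"   # people: nuanced, expert judgements
    return "no-action"

print(triage(0.995))  # auto-action
print(triage(0.7))    # human-review-queue
```

The key design choice is that automation only acts alone where its confidence is very high; everything borderline is escalated rather than decided by the model.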
In training our AI systems, we've generally prioritized proactively detecting content related to the most real world harm. For example, we prioritized removing terrorist content -- and now 99% of the terrorist content we remove is flagged by our systems before anyone on our services reports it to us. We currently have a team of more than 200 people working on counter-terrorism specifically.
Another category we prioritized was self harm. After someone tragically live-streamed their suicide, we trained our systems to flag content that suggested a risk -- in this case so we could get the person help. We built a team of thousands of people around the world so we could respond to these flags usually within minutes. In the last year, we've helped first responders quickly reach around 3,500 people globally who needed help.
Some categories of harmful content are easier for AI to identify, and in others it takes more time to train our systems. For example, visual problems, like identifying nudity, are often easier than nuanced linguistic challenges, like hate speech. Our systems already proactively identify 96% of the nudity we take down, up from close to zero just a few years ago. We are also making progress on hate speech, now with 52% identified proactively. This work will require further advances in technology as well as hiring more language experts to get to the levels we need.
In the past year, we have prioritized identifying people and content related to spreading hate in countries with crises like Myanmar. We were too slow to get started here, but in the third quarter of 2018, we proactively identified about 63% of the hate speech we removed in Myanmar, up from just 13% in the last quarter of 2017. This is the result of investments we've made in both technology and people. By the end of this year, we will have at least 100 Burmese language experts reviewing content.
In my note about our efforts Preparing for Elections, I discussed our work fighting misinformation. This includes proactively identifying fake accounts, which are the source of much of the spam, misinformation, and coordinated information campaigns. This approach works across all our services, including encrypted services like WhatsApp, because it focuses on patterns of activity rather than the content itself. In the last two quarters, we have removed more than 1.5 billion fake accounts.
Over the course of our three-year roadmap through the end of 2019, we expect to have trained our systems to proactively detect the vast majority of problematic content. And while we will never be perfect, we expect to continue improving and we will report on our progress in our transparency and enforcement reports.
It's important to note that proactive enforcement doesn't change any of the policies around what content should stay up and what should come down. That is still determined by our Community Standards. Proactive enforcement simply helps us remove more harmful content, faster. Some of the other improvements we're making will affect which types of content we take action against, and we'll discuss that next.
Discouraging Borderline Content
One of the biggest issues social networks face is that, when left unchecked, people will engage disproportionately with more sensationalist and provocative content. This is not a new phenomenon. It is widespread on cable news today and has been a staple of tabloids for more than a century. At scale it can undermine the quality of public discourse and lead to polarization. In our case, it can also degrade the quality of our services.
Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average -- even when they tell us afterwards they don't like the content.
This is a basic incentive problem that we can address by penalizing borderline content so it gets less distribution and engagement. By shaping the distribution curve so that distribution declines as content gets more sensational, people are disincentivized from creating provocative content that is as close to the line as possible.
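The inverted curve can be sketched as a ranking adjustment. This is a minimal sketch under stated assumptions: `borderline_score` stands in for a hypothetical classifier signal in [0, 1], and the quadratic penalty is illustrative, not Facebook's actual formula:

```python
def adjusted_distribution(base_score: float, borderline_score: float) -> float:
    """Reduce a post's ranking score as it approaches the policy line.

    base_score: the post's normal engagement-based ranking score.
    borderline_score: hypothetical model output in [0, 1], where 1.0
    means the content sits right at the policy line.
    """
    # The penalty grows super-linearly near the line, inverting the
    # natural engagement curve so borderline content is demoted most.
    penalty = borderline_score ** 2
    return base_score * (1.0 - penalty)

# Content far from the line keeps almost all of its distribution,
# while content right at the line is demoted sharply.
print(adjusted_distribution(100.0, 0.1))  # ~99
print(adjusted_distribution(100.0, 0.9))  # ~19
```

The point is the shape, not the specific function: as long as distribution falls monotonically toward the line, the incentive to produce near-violating content disappears.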
The process for adjusting this curve is similar to what I described above for proactively identifying harmful content, but is now focused on identifying borderline content instead. We train AI systems to detect borderline content so we can distribute that content less.
The categories we're most focused on are click-bait and misinformation. People consistently tell us these types of content make our services worse -- even though they engage with them. As I mentioned above, the most effective way to stop the spread of misinformation is to remove the fake accounts that generate it. The next most effective strategy is reducing its distribution and virality. (I wrote about these approaches in more detail in my note on Preparing for Elections.)
Interestingly, our research has found that this natural pattern of borderline content getting more engagement applies not only to news but to almost every category of content. For example, photos close to the line of nudity, like with revealing clothing or sexually suggestive positions, got more engagement on average before we changed the distribution curve to discourage this. The same goes for posts that don't come within our definition of hate speech but are still offensive.
This pattern may apply to the groups people join and pages they follow as well. This is especially important to address because while social networks in general expose people to more diverse views, and while groups in general encourage inclusion and acceptance, divisive groups and pages can still fuel polarization. To manage this, we need to apply these distribution changes not only to feed ranking but to all of our recommendation systems for things you should join.
One common reaction is that rather than reducing distribution, we should simply move the line defining what is acceptable. In some cases this is worth considering, but it's important to remember that won't address the underlying incentive problem, which is often the bigger issue. This engagement pattern seems to exist no matter where we draw the lines, so we need to change this incentive and not just remove content.
I believe these efforts on the underlying incentives in our systems are some of the most important work we're doing across the company. We've made significant progress in the last year, but we still have a lot of work ahead.
By fixing this incentive problem in our services, we believe it'll create a virtuous cycle: by reducing sensationalism of all forms, we'll create a healthier, less polarized discourse where more people feel safe participating.
Giving People Control and Allowing More Content
Once we have technology that can understand content well enough to proactively remove harmful content and reduce the distribution of borderline content, we can also use it to give people more control of what they see.
The first control we're building is about providing the safer experience described above. It will be on by default and it means you will see less content that is close to the line, even if it doesn't actually violate our standards. For those who want to make these decisions themselves, we believe they should have that choice since this content doesn't violate our standards.
Over time, these controls may also enable us to have more flexible standards in categories like nudity, where cultural norms are very different around the world and personal preferences vary. Of course, we're not going to offer controls to allow any content that could cause real world harm. And we won't be able to consider allowing more content until our artificial intelligence is accurate enough to remove it for everyone else who doesn't want to see it. So we will roll out further controls cautiously.
But by giving people individual control, we can better balance our principles of free expression and safety for everyone.
Addressing Algorithmic Bias
Everything we've discussed so far depends on building artificial intelligence systems that can proactively identify potentially harmful content so we can act on it more quickly. While I expect this technology to improve significantly, it will never be finished or perfect. With that in mind, I will focus the rest of this note on governance and oversight, including how we handle mistakes, set policies, and most importantly increase transparency and independent review.
A fundamental question is how we can ensure that our systems are not biased in ways that treat people unfairly. There is an emerging academic field on algorithmic fairness at the intersection of ethics and artificial intelligence, and this year we started a major effort to work on these issues. Our goal is to develop a rigorous analytical framework and computational tools for ensuring that changes we make fit within a clear definition of fairness.
However, this is not simply an AI question because at a philosophical level, people do not broadly agree on how to define fairness. To demonstrate this, consider two common definitions: equality of treatment and equality of impact. Equality of treatment focuses on ensuring the rules are applied equally to everyone, whereas equality of impact focuses on ensuring the rules are defined and applied in a way that produces equal impact. It is often hard, if not impossible, to guarantee both. Focusing on equal treatment often produces disparate outcomes, and focusing on equal impact often requires disparate treatment. Either way a system could be accused of bias. This is not just a computational problem -- it's also an issue of ethics. Overall, this work is important and early, and we will update you as it progresses.
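The tension between these two definitions can be made concrete with a toy example. All numbers below are hypothetical and exist only to show why equal treatment and equal impact can't always coexist:

```python
# One classifier threshold applied uniformly to two groups of content.
# Scores are hypothetical model outputs; content above the threshold
# is removed.
threshold = 0.8  # same rule for everyone -> equality of treatment

group_a_scores = [0.2, 0.5, 0.85, 0.9]    # hypothetical distribution
group_b_scores = [0.81, 0.82, 0.83, 0.3]  # clusters near the threshold

def removal_rate(scores, t):
    """Fraction of a group's content removed under threshold t."""
    return sum(s > t for s in scores) / len(scores)

# Equal treatment still yields unequal impact:
print(removal_rate(group_a_scores, threshold))  # 0.5
print(removal_rate(group_b_scores, threshold))  # 0.75
# Equalizing impact would require group-specific thresholds --
# that is, disparate treatment.
```

Whichever choice a system makes, one of the two fairness definitions is violated, which is why this is a question of ethics as much as engineering.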
Building an Appeals Process
Any system that operates at scale will make errors, so how we handle those errors is important. This matters both for ensuring we're not mistakenly stifling people's voices or failing to keep people safe, and also for building a sense of legitimacy in the way we handle enforcement and community governance.
We began rolling out our content appeals process this year. We started by allowing you to appeal decisions that resulted in your content being taken down. Next we're working to expand this so you can appeal any decision on a report you filed as well. We're also working to provide more transparency into how policies were either violated or not.
In practice, one issue we've found is that content that was hard to judge correctly the first time is often hard to judge correctly the second time as well. Still, this appeals process has already helped us correct a significant number of errors and we will continue to improve its accuracy over time.
Independent Governance and Oversight
As I've thought about these content issues, I've increasingly come to believe that Facebook should not make so many important decisions about free expression and safety on our own.
In the next year, we're planning to create a new way for people to appeal content decisions to an independent body, whose decisions would be transparent and binding. The purpose of this body would be to uphold the principle of giving people a voice while also recognizing the reality of keeping people safe.
I believe independence is important for a few reasons. First, it will prevent the concentration of too much decision-making within our teams. Second, it will create accountability and oversight. Third, it will provide assurance that these decisions are made in the best interests of our community and not for commercial reasons.
This is an incredibly important undertaking -- and we're still in the early stages of defining how this will work in practice. Starting today, we're beginning a consultation period to address the hardest questions, such as: how are members of the body selected? How do we ensure their independence from Facebook, but also their commitment to the principles they must uphold? How do people petition this body? How does the body pick which cases to hear from potentially millions of requests? As part of this consultation period, we will begin piloting these ideas in different regions of the world in the first half of 2019, with the aim of establishing this independent body by the end of the year.
Over time, I believe this body will play an important role in our overall governance. Just as our board of directors is accountable to our shareholders, this body would be focused only on our community. Both are important, and I believe will help us serve everyone better over the long term.
Creating Transparency and Enabling Research
Beyond formal oversight, a broader way to create accountability is to provide transparency into how our systems are performing so academics, journalists, and other experts can review our progress and help us improve. We are focused on two efforts: establishing quarterly transparency and enforcement reports and enabling more academic research.
In order to improve our systems, we've worked hard to measure how common harmful content is on our services and track our effectiveness over time. When we were starting to build and debug our measurement systems, we only used the data internally to focus our work. As we've gained confidence in the measurements of more of our systems, we're publishing these metrics as well so people can hold us accountable for our progress. We released our first transparency and enforcement report earlier this year, and we're releasing the second report today, which you can read here.
These reports focus on three key questions:
1. How prevalent, or common, is content that violates our Community Standards? We think the most important measure of our effectiveness in managing a category of harmful content is how often a person encounters it. For example, we found that in the third quarter of this year between 0.23-0.27% of content viewed violates our policies against violent and graphic content. By focusing on prevalence, we're asserting that it's more important to remove a piece of harmful content that will be seen by many people than it is to quickly remove multiple pieces of content that won't be as widely viewed. We think prevalence should be the industry standard metric for measuring how platforms manage harmful content.
2. How much content do we take action on? While less important than prevalence, this still demonstrates the scale of the challenges we're dealing with. For example, in Q3 we removed more than 1.2 billion pieces of content for violating our spam policies. Even though we typically remove these before many people see them so prevalence is low, this shows the scale of the potential problem if our adversaries evolve faster than our defenses.
3. How much violating content do we find proactively before people report it? This is the clearest measure of our progress in proactively identifying harmful content. Ideally our systems would find all of it before people do, but for nuanced categories we think 90%+ is good. For example, 96% of the content we remove for nudity is identified by our systems before anyone reports it, and that number is 99% for terrorist content. Because these are adversarial systems, these metrics fluctuate depending on whether we're improving faster than people looking for weaknesses in our systems.
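The two headline metrics above are straightforward ratios. A minimal sketch, using illustrative counts rather than real report data:

```python
def prevalence(violating_views: int, total_views: int) -> float:
    """Share of all content views that were views of violating content."""
    return violating_views / total_views

def proactive_rate(flagged_by_systems: int, total_removed: int) -> float:
    """Share of removed content that automated systems flagged before
    any user report."""
    return flagged_by_systems / total_removed

# Illustrative: 25 violating views out of 10,000 total views,
# and 96 of every 100 removals flagged by systems first.
print(f"{prevalence(25, 10_000):.2%}")      # 0.25%
print(f"{proactive_rate(96, 100):.0%}")     # 96%
```

Prevalence weights enforcement by audience reached, which is why a single widely viewed violating post matters more under this metric than many posts that almost no one saw.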
Our priority is getting these measurements stable enough to report for every category of harmful content. After that, we plan to add more metrics as well, including on mistakes we make and the speed of our actions.
By late next year, we expect to have our systems instrumented to release transparency and enforcement reports every quarter. I think it's important to report on these community issues at the same frequency as we report our earnings and business results -- since these issues matter just as much. To emphasize this equivalence further and to create more accountability, we will start doing conference calls just like our earnings calls after we issue each transparency report.
In addition to transparency reports, we're also working with members of the academic community in different ways to study our systems and their impact. This work already focuses on preventing misuse during elections as well as removing bad content from our services. We also plan to expand this work to share more information on our policy-making and appeals processes, as well as working on additional research projects. These partnerships are critical for learning from outside experts on these important challenges.
Working Together on Regulation
While creating independent oversight and transparency is necessary, I believe the right regulations will also be an important part of a full system of content governance and enforcement. At the end of the day, services must respect local content laws, and I think everyone would benefit from greater clarity on how local governments expect content moderation to work in their countries.
I believe the ideal long term regulatory framework would focus on managing the prevalence of harmful content through proactive enforcement. This would mean defining the acceptable rates of different content types. Without clear definitions, people rely on individual examples of bad content to understand if a service is meeting its overall responsibilities. In reality, there will always be some harmful content, so it's important for society to agree on how to reduce that to a minimum -- and where the lines should be drawn between free expression and safety.
A good starting point would be to require internet companies to report the prevalence of harmful content on their services and then work to reduce that prevalence. Once all major services are reporting these metrics, we'll have a better sense as a society of what thresholds we should all work towards.
To start moving in this direction, we're working with several governments to establish these regulations. For example, as President Macron announced earlier this week, we are working with the French government on a new approach to content regulation. We'll work with other governments as well, including hopefully with the European Commission to create a framework for Europe in the next couple of years.
Of course, there are clear risks to establishing regulations and many people have warned us against encouraging this. It would be a bad outcome if the regulations end up focusing on metrics other than prevalence that do not help to reduce harmful experiences, or if the regulations end up being overly prescriptive about how we must technically execute our content enforcement in a way that prevents us from doing our most effective work. It is also important that the regulations aren't so difficult to comply with that only incumbents are able to do so.
Despite these risks, I do not believe individual companies can or should be handling so many of these issues of free expression and public safety on their own. This will require working together across industry and governments to find the right balance and solutions together.
Conclusion
These questions of what we want the internet to be are some of the most important issues facing our society today. On one hand, giving people a voice aligns with our democratic ideals and enlightenment philosophy of free thought and free expression. We've seen many examples where giving people the power to share their experiences has supported important movements and brought people together. But on the other hand, we've also seen that some people will always seek to use this power to subvert these same ideals and to divide us. We have seen that, left unchecked, they will attempt to interfere in elections, spread misinformation, and even incite violence.
There is no single solution to these challenges, and these are not problems you ever fully fix. But we can improve our systems over time, as we've shown over the last two years. We will continue making progress as we increase the effectiveness of our proactive enforcement and develop a more open, independent, and rigorous policy-making process. And we will continue working to ensure that our services are a positive force for bringing people closer together.

Mahendra Bemal
Adding

Md Raju Ahmed
excellent photo collection

Saddam Hossen
thanks

Salman Raaz
https://m.facebook.com/story.php?story_fbid=458990528177587&id=100021999303737

Kunal Guman Singh
Add me

Ron Isaacks
Mark keep your hands off our free speech. If you want to regulate content, when you find content you believe crosses the boundaries, do not remove it, instead notify the federal authorities, and let them deal with any threats of violence or identity politics to cause division among the community. If you must alter content, then you must be sued with no liability protections. It is not your job to censor content..

Shahid Husain
Add me

Keith Smith
Facebook... Why you want to take down a poll about how inept your employees and bosses?

Saeed Shahidi
Ji bilkul

Dana Willige
Fuck your Democracy you communist piece of shit. We live in a Republic. don't like it, move your ass to China!

Hermann Mahncke
Like it Mark.

Alice Jones Vaders
We signed up for FB to connect with the people and organizations from whom we want to hear. We did NOT sign up for you to "protect" us. We can take care of ourselves, thank you. Stop the censorship.

Rajesh Bhagwat RB
Adddd

Rajesh Bhagwat RB
Full support

Rajesh Bhagwat RB
Added

Rajesh Bhagwat RB
Please add me Bhai ❤️❤️❤️❤️
Please add me Bhai ❤️❤️❤️
Please add me Bhai ❤️❤️❤️
Please add me Bhai ❤️❤️❤️
Please add me Bhai ❤️❤️❤️

شلبي النوبى عبد المجيد
https://m.facebook.com/story.php?story_fbid=2291534151168999&id=1671575629831524

Gian Ricardo
please a reaction like this and also disgusted reaction

DJokeybest Onde Mix
boss please help me my website is not working on facebook

Zainab Saladeen
Perfect!

Zainab Saladeen
Perfect!

Tony Macrini
Regularly blocked from viewing WH briefs, and many other Conservative posts. Many of my posts and shares do not appear once i post them.
The frequency of blocked posts is increasing by the day and at this rate I fear we will be silenced if we're not democratic liberals.
THIS IS CENSORSHIP AT ITS WORST.
The frequency of blocked posts is increasing by the day and at this rate I fear we will be silenced if we're not democratic liberals.
THIS IS CENSORSHIP AT ITS WORST.

Lucy Nelson
OH PLEASE ... EVERYONE is ON TO YOU Mark... give it up !!!!!!!!!

Robert StMarie
How about you leave the known individuals the fuck alone and stop with your NAZI censorship bullshit. How about YOU stop with your hate and disrespect of everyone who does not share your ideology. No worries though. The big bad uncle named Sam is about to turn you to rubble.

Xochitl Carolina Rios Dupont
You are not relative this time and thats good to me 🙂☯ El respeto al derecho ajeno es la paz🇲🇽

Shivani Singh Neta

Othman Khalid El Sharaa
متت

Akram Al-shlabi
Othman Khalid Elsharaa
في طريقه جديده عمل ام علي
في طريقه جديده عمل ام علي

Othman Khalid El Sharaa
Michael Rilling ديننا هو دين جميع الامم انتم ليس لكم الحق نحن لا نقول اطفال او هكذا نحن ديننا يدعونا ان نتقبل و لكن لما انتم حاقدون علينا هكذا نحن لا شأن لنا بكم اذهبو

Othman Khalid El Sharaa
I bear witness that there is no god but God.

Deanne DeVlugt
I'll wait and see. Personally, if conservative posts continue to be banned, I'll be leaving. I believe in being fair, not controlled. I hate being controlled because I'm an older American who recognizes what communism is and sees it constantly in the news. We are being forced into it, and I will fight it as long as I can.

Georgie Porgie
Facebook is one of the biggest influences on our elections. People interact and learn all in one session; they decide who they do or don't support, if Facebook allows it. If someone doesn't like a post, don't read it. But if it's truthful, don't make it disappear because you don't agree with it, like has been happening since this last election.
Facebook is responsible for more interference in our election process than Russia and China put together.

Howard Miller
David Land not very smart is he? Lol

DLynn Wilson
I had content removed that only I can see. What the hell!!!!!

Joyce Currie
Don’t be censored

Judy Walden
I hate it when you censor my comments! If it is negative about the Demos, you censor it! Not fair at all!

Larry Ross
You're taking control to give control. How controlling is that?

Sioma Conart
Didn't read past the Demo comment in the 1st paragraph. You censor people who have the Constitutional right to freedom of speech. I do not want someone to tell me their version of what I read and think. Get the porn off that the advertisers use. Censor that first.👹👺

Thomas C Barth Sr.

Daniel Goodman
https://pulpitandpen.org/2019/05/30/christianity-and-religious-freedom-conference-goes-against-facebook-community-standards/

Roger Allen
You have no right to stomp all over our First Amendment rights just because they don't align with your beliefs. You sure are not god, and this is illegal. We are Americans and this is not a socialist country; we live by the Constitution, which is our lawbook, and you have NO RIGHTS WHATSOEVER to go against the Constitution!!!

Linda M Chambré
Why am I getting unreasonable ads constantly? More ads than what I get from my friends? I want this stopped. Are you doing this on purpose to conservatives? This has got to stop. I don't want any ads on my site, and your ad support does not help one iota. You only stop speech when it doesn't fit your agenda.

Linda M Chambré
He didn't invent it; his friend in college did, and he bought him out.

Linda M Chambré
Are you paying people to say good things about you? I don't know anyone that likes FB

Linda M Chambré
You hit the nail on the head, CONTROL!

Benjamin McBeer
Justin Heng Marc would never allow that

Zinhle Lucia
Like this page (y) & after that send me your pics, zoku post 👦😻👧 https://www.facebook.com/milimilimrCool/

Salai Biakhnin
Hi, how do I get Facebook dark mode?

Ann Finney
Why is it that your 'team' allows The President to be called a racist Nazi, but saying Michelle Obama looks like a dude is considered hate speech? Nooo, your people aren't at all biased.

Vivek Singh Yadav
Congratulations

Sebastian Cumpatescu
So now you "demote" art, paintings deemed "borderline". Absolutely retarded, well done!

Afzal Shaikh
Nice bhai

Vanne Hivard
Forget advertising and user data misuse. You are one of the richest social networks on the planet. Give us the ethics of the Facebook you created in 2004, when the sole mission was to bring people together and help connect us all. Making the world a little less big.

Olivier Tietsap
Thank you very much, Mark Zuckerberg. I am pleased with you.

Bos Chan
Give me back my account.

Allan Poeter
wonderful job for all the people

MD Maruf Hossain Rony
Sir, I have tried to inform you about the problems we face from your community people! We just can't understand: any of our posts or comments that go against India go against your community!! Why??? Don't we have freedom of speech? On the other hand, Indian people use slurs about us and about our country, and your community never takes any action!! If we report, nothing happens!! Why??? Please take the necessary action regarding this. We want justice!!
Thanks

Yvonne Arneson
Should delete the ones that NEED to be!!!!!!! Mark... please contact me.

Jim West
Hello again.
Guess who??
The daily moaner..
Can one of Mark's plebs look at this??
Ahhhh the daily usual p*sh from Pussbooks sponsers. So the daily messages to its owner..
Surely someone is getting peeved about all this. Plus me reporting all the ads..
Ahhhhh back to the usual crap of getting sponsored ads. Is it not that simple to understand myself & others do not want our newsfeed getting these ads every 5 posts???
why am I still getting ads on my news feed? I went and deleted/stopped everything on my profile under my settings.... and guess what??
My 2nd bit on my news feed was an ad...
I know you make money from your ads, which is fair enough... but for them to be like every 4th or 5th item on my newsfeed gets very annoying.
AND NOW YOUR VERY KIND COMPANY HAS STOPPED ME FROM REPORTING THESE ADS. OH HOW VERY GROWN UP.
I am sure I am not the only one that has reported this.
I do not mind the odd ad or two during the DAY.... not one every 10 seconds!!!
Now please please please stop them on my news feed.....
and can someone have the decency to even acknowledge this request?? I bet I dont even get one...
One very p*ssed off facebooker..
Sent from Mail for Windows

محمد الموالي
How are you, man?

M Ei Mon
Hi

M Ei Mon
ዞᏜ℘℘Ꮍ ℬℹℛʈዞᗬᏜᎽ,¡i|¹i¡¡i¹|i¡,
`'¹li¡|¡|¡il¹

Hendri Susanto
yeah... but your own FB need to slap itself (or whoever is taking control of it).
My account is blocked because it seems I violated your community standards, but all I did was post my discontent about a game: no swearing, no dissing, nothing bad whatsoever... it's not funny, and you need to re-evaluate your community standards altogether.

Juliet Borela
Thank you Mark for giving us this FB; at least I can see all my family around the world. Without this I can't contact all my loved ones and relatives. And thank you again for taking out the fake news, fake accounts, nudity, and most of the terrorism and violence too. God bless you and your family, and keep up the good work... take care everyone. 💖💖💖🙏 Mark, you're the greatest man on earth.💙💙💙😘😘😘

Man Mtv
Man

Luz Viera
I have an Apple phone, and for the past month I keep getting 2 very fine small circles in the top left corner of my phone. I want to know if I should worry about that.

Jahangir Alom
nc

Jahangir Alom
joss

Jahangir Alom
np

Jahangir Alom
joss

Jahangir Alom
wow

Rhonda Conley
Why allow posts and comments from leftist idiots but not us conservatives? EVIL! That's why. Trump 2020!

Rhonda Conley
What " leaders" ? Disgusting

MD Jishan Khan
Support

Miaad Adel
Xoxo

YAhaya Shehu
HOW INFORMATION

Houssem Salem
Peace be upon you.

Houssem Salem
How can I get the blue checkmark?

Eldar Nəcəfov

Eldar Nəcəfov
Thank you all very much for helping people.

حجاج احمد صقر،
Hello, at your service.

حجاج احمد صقر،
The coffin.

Jerie Mahobe
Any changes should not limit the boundaries and audiences people want to share their stories with.
Facebook messaging in the inbox provides private communication. Grouping in the inbox is also possible. What is not is video communication, and other visual files cannot be uploaded; where uploads are possible elsewhere, you can't access them unless you can find such materials on YouTube or other apps.
Innovation is expected and accepted, but let it add value to our experience depending on prevailing needs as shown by FB users.

Rd Lee
I see the need to save us from ourselves, but not the need to take away our right to say things that are true, or that have happened, or that can happen. It is a very fine line that can take our rights away from us. Please be careful with your choices!

Ivan Mushastyy
Hey guys, add me on Facebook, Ivan Mushastyy. Also gonna make some food reviews on YouTube soon, so stay connected and in touch. Stay positive, people!

बिर. बि. भाटि बिर।
Support

Kamal Mantu
Hi Mark Zuckerberg, I am Kamal Mantu from Bangalore. I have a new opportunity for your website; please contact me.

Latte Kagome
A bit long-winded, but it goes into each issue with the complexity of the issue in mind. Interesting read; I just wish there were a CliffsNotes version so that more people could access the information. Looking forward to using Facebook in the future!

Albimar Reis Reis
THANK YOU FOR CORRECTING ME WHEN I MAKE MISTAKES

عيشاها جنان
Thank you for the new update and for your support.

Lin Sok Hak
Haksoklin

Enrique Puentes
I currently have a problem of content blocking, rejection of my articles, and deception by those who review my attempts to publish them, so I believe I am being discriminated against on the social and cultural page I have on FB called @[340390949982808:274:Hay que decirlo]. I have noticed that those who judge me do not read my articles; they ignore their content and are surely guided by the bot, which of course is not reliable in reading my posts of three thousand words. The censors then accuse me, guided by the report of this bot, which has no capacity to understand concepts, only algorithms of repeated words, claiming that my writings are about political elections and internal politics in other countries. That is not so; I speak of historical facts and recount documentary matters from the Middle Ages, already published by other authors on YouTube or Google. So I will appeal on the basis of these very good proposals from this brilliant man, Mark Zuckerberg, ideas that these censors have distorted and that flagrantly violate my rights of free expression, accusing me and inventing false motivations. I hope they do not destroy my posts, because they are precisely the proof that they are lying.

Sherry Bernier Helmic
Plain and simple Mr. Zuckerberg, you are out for top dollar and could care less about what's the right thing to do. Shame on you.

Vipin Patel
Sir, please help me with the community standards.

علی سیناه گلابی
Hi

GeoffreModco ReModco

Narayan Ramawat Bikaner
Hy mark zuck Bro......

Midge Nelson
Wish you could stop the hackers. I don't know if it's true, but I have been told someone is hacking and using your pictures in obscene videos and sending them to your friends. If this is true, reply.

Eze BU Chukwu
Nice write-up, though there's not much sense in it.

Shravan Kumar
Add me

Shravan Kumar
Please add me FB please

Shravan Kumar
Please

Shravan Kumar

Shravan Kumar
Add me

Shravan Kumar

Shravan Kumar
Add me FB please

Kisma Kisma
Thanks dear

Aakash Sharma
Namaste sir

João Jamba John
Very good!!!!

অনেক কষ্টে তোমাকে ছাই
Tnx

Robert Hudson
Fuck censorship..the government and the thought police

حنان عادل
Perhaps this will be strange, but I have a simple request of you. You are the wife of Mr. Mark, and I could not reach him to register my account, since he may be busy. I am from Iraq. I have used the Facebook application since 2009, but my account was disabled by mistake and I have no other way to communicate with my relatives and friends. In fact, two accounts were disabled for me. I have not grieved in my life or faced any difficulty except in this one thing. I want to retrieve the account that I used for 11 years, and I hope you will make it a birthday gift coinciding with 2/13 of next month. I beg you to reactivate my account. I will verify my identity so that I don't get in trouble with Facebook, and here is my information as it is in the account. Name: Hashem Khader Awad. Email: hashomaloka@gmail.com. Please help me recover my account; I have lost everything in my life, and this account means a lot to me. Please read my message and help me; I know you can influence your husband, because he loves you.

Aliyev Farid
Hmm, who decides what content can be published in certain countries where there is no Facebook team? Can a national prosecutor's office decide what content can be published?

Balkayhra Ahmad
Hello, brothers, is there a way to increase followers?

মাসুদ রানা
post problem copyright

Gazi Salauddin Raa
Mark To You Bettar Know Theat I &' May Related Data Involbd Eabry Heab Meany Safar Etc' Data Popaganda Parpass. i Hop to Knowing are .&' Mark Theat You'r Fully Knowing &' Fell Theat Whay's I Us I Tell To You meany Leat Deay.May Freand Mark I Tell To You. Whay ,,,,,,?. Ok,Nayce. Fast Thenk Fast. But,Mark Whear Is May Data Feacbook Profail Parpass Moany Profarty Etc'.
* Imargency I Messing May Fast Feacbook Profail .&' More Imargency Preajnt Taime
I No Heab Involved Local Country Party Gob Parpass.Becous She's Reall Seemness Women.But,Some Deay Ago May Profail Recobar Tray &' 2 Peag Recobar &' Cobar &' Profail Peag Photo Imeag Cheaing 1 Peag Fast Heab May Photo Imeag.But,Whay I Cean Not Whear Cheaingeng or Meaking May Data Feacbook Peag Weab Intearnet Neame.Whear No Heab She May Feamyle &' No Heab Any Relation. I No Heab Whear Any Agreement Party Parpass Eabrything Is Lai. So, Whear She's Whar May Feacbook Profail About tool &' May Data Feacbook Profail Parpass What's App, Messenger Etc' Profail Tools Used &' Probably Probleam Creat. Mark What Cean Do It to You Pearsoneally etc' Problemtic Destarb situation Parpass Any Tram's,,,,,.
-Rana.

David Mwangale Musungu
Amen.

Abubakar Muhammad Liman
Hello

Shravan Kumar
As social media was a whole new world, so now is governing it. We need to expect some hits and misses in the fine-tuning. I agree that reporting and appealing need to work better than they do, but the question is how to do it without having to hire an enormous worldwide staff (which would then make it necessary to charge for the use of social media). I think this is a good start, and eventually the process will be streamlined. And then a whole new world will open up in some other way and we'll have to start over again. Add me. Please add me FB

Shravan Kumar

Shravan Kumar
Add me

Ayan Bilal
I want to make a complaint against a link which is based on a fake account, and it can be dangerous for me or my friend. So it is humbly requested that you please ban or block this account permanently. Here is the link:
https://www.facebook.com/profile.php?id=100045699319964

Esmer Villa

Hamid Ali Poor
Ok

Sridhar Kavadger

Ezekiel Dosky Eze
Hello sir, my Facebook account is disabled and I need help. USERNAME (FAITH AGBONTEAN) 0812 273 5948, OR YOU CAN HELP ME TO OPEN MY OLD ACCOUNT (FAITH LOVE) 09024688990. I have been using that account since 2015. I used a fake name, which was why Facebook disabled my account, but I submitted my national ID card and have not seen any feedback. I have been opening different accounts and they keep getting disabled even when I submit my national ID; I still get kicked out. I have cried my eyes out. Please sir, help me; it is very painful to be kicked off Facebook. I can't even open another Facebook account because I have opened so many. If I could get my old disabled account (FAITH LOVE) back, or my new FAITH AGBONTEAN (I just opened that account in January and already had followers and friends before I got kicked out, even after submitting my national ID card and using my real name), I would be grateful. Please help me open one of my disabled accounts. I just pray you see my message; may God touch your heart.

Izack Paul
Let's all just be friends.

Selim Dalgaç
Hello. I wonder which of us thinks they know all the truths. Including me...

Ŋasser Mj
https://www.facebook.com/benasirm For sale, for sale

Wasim Fawaz
Ali_hoballah79@hotmail.com
Hello, this is my friend; his account has been banned for reasons related to the standards. I ask you to solve his problem. He works and lives from his account; now he is unemployed due to the ban. I ask you to help him. Please lift the ban.

Yong Hyuk Lee
I created a new account by email, but access was denied due to the use of a pseudonym. I sent my ID; I sent my driver's license three times, but I only kept getting the same reply telling me to keep sending it, and there has been no answer for almost a week since then. Facebook Korea, Jiyeon Kim? I don't know how it's going, but apparently running a global company is harder than running a game company. There is a paid advertisement on the page and the payment has settled, but since page operation permission is tied to the account, it is not possible to post. So I am leaving a post here. When I tried to attach the captured evidence, uploading photos was also blocked. ㅋㅋㅋㅋㅋ If you try to log in now, you get "we were unable to give you access to your account while reviewing additional data." I kept getting only this message, but when I searched, it wasn't just one or two people whose accounts are broken like this. This is the first time I have ever seen a company run a customer center this way.

Anthony Knight
GREAT, like what I've been seeing. Probably should get some eyewear though, just saying.

Kamal Mantu
Hi, Mark Zuckerberg. I am Kamal Mantu from Bangalore, India. I have some secret intelligence reports and a new vision for Facebook; please contact me.

Melisa Mizrachi
Credibility and good conscience make us appreciate technology in the most important and professional way. We appreciate this as a feat of human ingenuity.

Melisa Mizrachi
Babies or no babies.
Technology in good conscience is the most serious and important thing. 👍

Robin David Williamson
I think the goal that you've set has been accomplished... you have given people power... I can't tell if the challenge from the bureaucracy is slowing down the amount of power you are allowed to give. I mean, I remember at one point Facebook was in Safeways and department stores with kiosk advertising, expanding the brand... children could safely part from their parents while shopping to be part of demonstrations... that's vital... the gimmicks have been lost in today's common areas... and I believe in that void comes danger and mischief... you sir are giving digital balls to tomorrow's adventurer. They gotta crown you for that...

Ayoub Alkilani
Fuck you.

Oluwatosin Acapella Anthony
Like this...

Alsayed Abuaqilah
Hello Mark, you are a man who deserves it, because you brought the world out of a dark sea into the light of knowledge, and connected much of the world with enlightening, beautiful information and the fast spread of good and of information. So thank you, and good luck.

Hamid Ali Poor
Mark Zuckerberg you

Ladont'a Logan
fbf vh h bc jvm v bgfhx FCC b bf cnnv ybn UTC h FM mbv CNNvh c g CNN WWF h bbn m i.v n CNNBBC b bbn UTCn.vGB ggvbuzz n.vfyi bvb CV h vhGB yb vfc cc by HB bnb g.j v n

Elba Rain
#CONTENT #GOVERANCE AND #ENFORCEMENT #SOUNDS #LIKE #FBCENSORSHIP #TO #ME
#MARKFAKENEWS #MISLEADING
#WHITESUPREMACYNULLANDVOID
#TREASON
#NAACP
#FREEDOMFIGHTERS
#LEGALAIDSOCIETY
#UNITEDSTATESCONSTITUTION
#UNITEDSTATESSUPREMECOURT

Jayanta Barman Rkteam
There is a problem happening on Facebook; please fix it.

Bhimbahadur Rai
How to hack a Facebook ID?

William Oliver
THE MAIN PROBLEM IS: No one gets to monitor or hold FB accountable to any law or moral altitude. AND FB is abusing that. Allowing "Fact Checkers" & "FB Admins" to delete TRUTHFUL content based upon fact. That in and of itself is used as manipulation and suppression against ideals such as the "Bill of Rights", "The Constitution" & "Deprivation of Rights under Color of Law".
...................... We are all being duped.
.... This image speaks for itself. Try to post the mentioned LINK yourself & see.

William Oliver
Try posting the mentioned LINK on FB and it will be censored.

William Oliver
You're foolish enough to trust Zuckerberg? 1 of 2

William Oliver
You're foolish enough to trust Zuckerberg? 2 of 2

William Oliver
Oooooops 1 more

Mic Israel Garcia Montes
Go to the VERGAAA CULERO block by assholes, because you do not block pornography that go up fucking shit, I hope you get the competition and you go to the fucking fucking dick

Joselus Mosquetez
.i.

Dalia Alfaghal
What if the community standards are homophobic/sexist/racist? I am a queer activist from Egypt, and there is an aggressive attack on LGBTQ+ individuals and women. Lots of posts are being reported, and in many situations reports of posts inciting hate and death are being ignored by FB. We need representation in the "community" appointed to review content reported on FB.

Nihat Kerküklü
I have no money to buy a phone. Could you please help me?

Haris Tuharea
https://www.facebook.com/mentari.putri.9638718

Haris Tuharea
https://www.facebook.com/dek.kesy

Haris Tuharea
https://www.facebook.com/maya.gingsul.98

Jose-Ramon Sanchez
My name is Jose Ramon Sanchez and I am offering you
⏩⏩A GREAT OPPORTUNITY FOR YOU!!!! ⏪⏪
⏩⏩UNLIMITED PAYPAL!!!!!!⏪⏪
⏩⏩FOR ONLY 3 DOLLARS, ONE TIME ONLY!!!⏪⏪
🎁💲EARN $60 TO $80 IN ONE DAY💲🎁
🎁I HAVE PROOF OF PAYMENTS🎁
YOU WON'T FIND THIS PROMOTION ANYWHERE ELSE 🚀🔥
Requirements!
✓Read my strategies carefully; they will help you earn more
✓Internet or WiFi
✓At least 2 hours
✓A desire to work
🔴YOU WILL EARN $60 TO $80 DOLLARS DAILY
🔴🎁THERE ARE 3 SURPRISES IN THE LINK🎁
👌👍🗣TRUSTWORTHY LINK, IT BELONGS TO GOOGLE🙏🙌
https://docs.google.com/forms/d/e/1FAIpQLSc51O7oGrbzD3ZXY5I4dztuF06-aHkg74qVXbMc1K5iG8ldQA/viewform?usp=sf_link

Landis Linda
Why remind me it's National hot dog day? 🍣🌭

Jenny Fletcher
Why is it SO difficult to get FB to take down obviously fake accounts? Look, if an account has no content, is using stolen pictures, and has a name that is radically different from the URL chosen, then it is BLATANTLY FAKE!
There is no satisfactory way to report such accounts, because there is no free text box to say for instance ' this account is using stolen pictures belonging to X' and giving a link to back up your statement.
Doesn't FB understand that, as a result of allowing such accounts to stand and proliferate, scammers are cheating vulnerable FB members out of money and causing some of them huge emotional damage?
I am a member of two anti-scammer groups and we have the greatest difficulty to get known scam accounts taken down. Why is it like this?
In addition, I am spending hours every week, as an admin of another group, to remove join requests from accounts that I know perfectly well are fake and likely to try scamming our group members. Every group admin on FB has this issue.
I realise that there is little that FB can do to prevent such fake accounts being created but the least you can do is to allow people to report impersonation effectively, and if we put in a report and you initially refuse to take it down, then there has to be a robust, helpful and usable appeals process.

Jason Hardy
Sooo, is removing the Hodgetwins page because they are conservative? Or because you are in favor of socialism? Facebook has been censoring more and more conservative content in recent years. If this continues to happen, I can guarantee you will end up facing multiple lawsuits. And I'm pretty sure you don't want to be sitting in front of another congressional panel trying to find a way to keep from going to jail....

Helio Sandro
https://www.facebook.com/helioda.sju/videos/102158981642823/

حہمہز'ة رجہاويے
https://m.facebook.com/story.php?story_fbid=127953099022822&substory_index=114&id=100054243984144

Bablu Qureshi
Mark, please help me, Mark. I am from a poor family. Facebook moderator, Bill Gates sir, please help me.
qureshibablu81@gmail.com
WhatsApp 7978313840

Bill Allemann
when accusations are true, it's not a smear.

Bill Allemann
I don't understand FB's enforcement of standards like: ""17. Misrepresentation
Authenticity is the cornerstone of our community. We believe that people are more accountable for their statements and actions when they use their authentic identities. That's why we require people to connect on Facebook using the name they go by in everyday life. Our authenticity policies are intended to create a safe environment where people can trust and hold one another accountable."" I've reported a few accounts that were blatantly and 100% fake and using the names of well-known people, and the results of the reviews were always that they didn't violate community standards (which, of course, they certainly did). So the question becomes: of what does a "review" of community standards consist?

محبان یا الله
Mark your

رویا کابل
Mark Zuckerberg, you

Бегим Ембергенов
Mark Zuckerberg, can you help me? Please help me. I sent you a message; look at me on Facebook Messenger.

Greg Vezina
You are about to find out that in Canada you cannot suppress legitimate political parties from advertising, especially for year-end fundraising. You cost us a fortune and we will reciprocate. See you in Court very soon. You might have your lawyers look up the Canadian and Ontario laws that DO NOT PROTECT YOU. #BTW #GFY.
You have placed us in a "False Light" and you have engaged in partisan political promotion of some registered political parties while suppressing others. We have a Constitution in Canada that Facebook is not exempt from facing legal consequences for violating the most fundamental rights in it.
Canada: Four Of A Kind: Ontario Recognizes The Fourth Privacy Tort – False Light
https://www.mondaq.com/canada/privacy-protection/906126/four-of-a-kind-ontario-recognizes-the-fourth-privacy-tort-false-light
In late 2019, the Ontario Superior Court recognized the tort of placing a person in a false light for the first time. This landmark decision completes the set of four privacy torts, which are now all recognized in Ontario, and has implications for businesses.
For background on the three other privacy torts, intrusion upon seclusion was recognized by the Ontario Court of Appeal in Jones v Tsige in 2012. Following this landmark ruling, in 2016 and again in 2018, the Ontario Superior Court recognized the tort of public disclosure of private facts.[1] Misappropriation of personality has been recognized in Ontario since the 1970s.[2]
As detailed below, given this new tort's (i) flexible test that requires consideration for the "reasonable" person's view of what is offensive, (ii) potential imposition of liability based on "reckless" conduct, and (iii) unclear adoption of affirmative defences, businesses must be attuned to the potential application of this tort. While the tort in this case was applied to egregious facts, this new tort may be applied to impose a positive obligation on businesses to ensure the accuracy of information that may be distributed, whether lawfully or through a data breach.

Greg Vezina
Get ready to PAY!!! Commercial interference and tort laws apply to Facebook in Canada. #GFY
http://www.dentonsdata.com/four-of-a-kind-ontario-recognizes-the-fourth-privacy-tort-false-light/?utm_source=Mondaq&utm_medium=syndication&utm_campaign=LinkedIn-integration
In late 2019, the Ontario Superior Court recognized the tort of placing a person in a false light for the first time. This landmark decision completes the set of four privacy torts, which are now all recognized in Ontario, and has implications for businesses.
For background on the three other privacy torts, intrusion upon seclusion was recognized by the Ontario Court of Appeal in Jones v Tsige in 2012. Following this landmark ruling, in 2016 and again in 2018, the Ontario Superior Court recognized the tort of public disclosure of private facts.[1] Misappropriation of personality has been recognized in Ontario since the 1970s.[2]
As detailed below, given this new tort’s (i) flexible test that requires consideration for the “reasonable” person’s view of what is offensive, (ii) potential imposition of liability based on “reckless” conduct, and (iii) unclear adoption of affirmative defences, businesses must be attuned to the potential application of this tort. While the tort in this case was applied to egregious facts, this new tort may be applied to impose a positive obligation on businesses to ensure the accuracy of information that may be distributed, whether lawfully or through a data breach.
Background
In Yenovkian v Gulian, 2019 ONSC 7279, the court disposed of a couple’s family law trial along with the wife’s tort claims against her (now ex-) husband. Justice Kristjanson found that the husband, Mr. Yenovkian, engaged in a litany of misconduct including a cyberbullying campaign abusing his wife, Ms. Gulian, and her parents. The court also awarded $150,000 in punitive damages and $50,000 in compensatory damages for the intentional infliction of mental suffering.
After granting, amongst other things, a permanent restraining order against Mr. Yenovkian and sole custody of the children to Ms. Gulian, the court turned to Mr. Yenovkian’s liability for his “outrageous and egregious” conduct.
As it relates to this new privacy tort, Mr. Yenovkian falsely said that Ms. Gulian is a kidnapper, abuses the children, drugs the children, forges documents, and defrauds governments. Mr. Yenovkian also publicized private true facts about Ms. Gulian’s living situation with the children and her parents (including videos of their home) and details of access visits with their children. The court found that this was tortious and awarded damages of $100,000 for the tort of invasion of privacy, combining false light and public disclosure of private facts.
The tort of false light
The court described the tort of false light as follows: it is tortious for a person to place another person before the public in a false light if (a) the false light in which the other was placed would be highly offensive to a reasonable person, and (b) the actor had knowledge of or acted in reckless disregard as to the falsity of the publicized matter and the false light in which the other would be placed.
The court found that while the publicity giving rise to this cause of action will often be defamatory, defamation is not required. It is enough for the plaintiff to show that a reasonable person would find it highly offensive to be publicly misrepresented as they have been. The wrong is in publicly representing someone, not worse than they are but as other than they are.
Damages cap does not apply
The court found the false publicity egregious in that it involved alleged criminal acts, including by Ms. Gulian against her children. The publications were widely disseminated online and through targeted friends and colleagues of the claimant. These publications had adverse effects on Ms. Gulian’s health and welfare. Despite court orders, Mr. Yenovkian did not retract these statements. These factors led the court to award a higher quantum of damages. Notably, the $20,000 cap on damages applied in the tort of intrusion upon seclusion was found not to apply to the torts of false light and disclosure of private facts.
In assessing the quantum of damages, the court was guided by the factors in the seminal defamation case Hill v Church of Scientology and adapted them to the tort of false light:
The nature of the false publicity and the circumstances in which it was made,
The nature and position of the victim of the false publicity,
The possible effects of the false publicity on the life of the plaintiff, and
The actions and motivations of the defendant.
Unanswered questions
While the decision provides useful guidance as to the variety of torts available to plaintiffs, it also raises several questions for defendants. Since false light aims to respect a person’s privacy right to control the way they present themselves to the world, it remains to be seen how this tort will accord with defamation[3] and the affirmative defences available to defendants in such proceedings.[4] Since no-one appeared on behalf of Mr. Yenovkian, these questions were not canvassed with the court.
Another difference between this tort and defamation is that, pursuant to section 38 of the Trustee Act, it appears a deceased person’s estate can sue for a privacy tort but not defamation.[5]
Looking to the US for guidance, we expect that some parallel and modified defamation defences may apply to these privacy torts. In California, for example, the court has recognized the availability of certain analogous parallel defences such as privilege and matters of public interest.[6] The hopes for the application of such defences in Ontario must be tempered by the fact that this US caselaw is shaped significantly by freedom of expression, which does not have the same application between private actors in Ontario.
Businesses may also be exposed to additional privacy tort claims post-breach. Where sensitive personal information is obtained from an organization by a rogue employee, who then publishes the information, it is conceivable that the employer could face direct or vicarious liability claims related to this new tort. However, this issue was not canvassed in the case.
[1] The reasons in Jane Doe 464533 v D.N, 2016 ONSC 4920 stem from a motion for default judgment that was ultimately set aside. In Jane Doe 72511 v Morgan, 2018 ONSC 660 the tort was recognized anew.
[2] See Krouse v Chrysler Can. Ltd. [1972] 2 OR 133 and Athans v Canadian Adventure Camps Ltd. [1977] 2 A.C.W.S. 1065 (Ont HCJ).
[3] In a defamation case, the plaintiff must establish:
The impugned words are defamatory, in the sense that they would tend to lower the plaintiff’s reputation in the eyes of a reasonable person;
The words in fact refer to the plaintiff; and
The words were published, (i.e., that they were communicated to at least one person other than the plaintiff).
[4] If defamation is established, a defendant may rely on one of its affirmative defences including justification, fair comment, responsible communication or privilege. A defendant may also move early to dismiss a proceeding for being a strategic lawsuit against public participation (i.e. it can bring an anti-SLAPP motion).
[5] Trustee Act, s. 38 (1) Except in cases of libel and slander, the executor or administrator of any deceased person may maintain an action for all torts or injuries to the person or to the property of the deceased in the same manner and with the same rights and remedies as the deceased would, if living, have been entitled to do, and the damages when recovered shall form part of the personal estate of the deceased….
[6] See e.g. White v State, 17 Cal App 3d 621 (1971) and Maheu v CBS, Inc, 201 Cal App 3d 662.

Myrzaa Noorzahi Myrzaa Noorzahi
Set my posts to public on all pages.

Landa Marie
🌻

Srabon Chowdhury
🆗 thanks

Antonella Arena
Y

Htoo Maung
Maung Htoo

Babak Mehrdoost
Ddd

Ben Brett
Not sure if Facebook would even bother to read all the comments here. I would be very grateful if someone from the Facebook team would attend to my concern. I went through all the forms to report, but nothing is happening!
Waiting on someone from Facebook team. Urgent. Thanks

Bro Pich Small
@PicnZer EmmZni

Scott Fisher
This guy is an absolute moron. Just like most leftist-elected politicians. His IQ must be minuscule.

Debra Elfenbein
It is not wrong to censor someone who spreads sedition and gaslights Americans with lies. Ban Donald Trump from all your social media platforms.

Catherine Elizabeth Clay
You do absolutely nothing about bullying, and I would love to talk to somebody about a person who has bullied me for the longest time. You do nothing about it, so I would certainly appreciate it if somebody would look into these bullies. Not only that, you should consider how people with brain damage can't function in your neurotypical world the way everybody else can. But you don't care what I have to say, even though you ask me to be a part of your marketing every six months because of how popular I really am and how many people care about what I have to say.
I sure as hell do not appreciate that you took down two of my websites that use the words b**** and c***, when one was describing a website and the other was describing a book I've written. However, that doesn't matter to you. Nobody's going to care what I think, or that bullies from a certain brain tumor website keep reporting me for anything they don't like, and even if I have a 30-day suspension, you add to it if they say so.
It really breaks my heart that you can't do anything for people that suffer from brain damage or other mental illnesses. You kept me off Facebook for 60 days, and that caused me many problems, because my husband died and I couldn't even say anything on my site the day he died.
You don't care what I have to say, because you're going to censor almost everything that comes out of my mouth. I think it's horrible that you're taking away our freedom of speech and trying to make us adhere to community standards that even you can't adhere to. I'd love to know how Mark Zuckerberg isn't as transparent as I am. Nobody on the Internet is as honest as I am, because I've been writing since 1992, and you took down my website without even seeing what it meant. I'm very sexist because of the way that I was raised, and you didn't even check the website to see if it was a joke about what men have been calling women for centuries. You even have a Facebook page dedicated to the word that starts with a C, and I hope to God you don't take that down, since that word once meant a goddess. That would break my heart, but you probably will take it down, because that's the level of idiots you're dealing with; they don't have the time to actually do their job. I'd love to be a content manager, considering I've been doing it since 1992 and I was there when the internet was framed. I wrote my diary online, and it is still active; it's been active since 1997, and my book is now selling for over $1,000.

Catherine Elizabeth Clay
It always makes me wonder: if I had put down that I was a boy, could I have gotten away with a lot more than what you let me get away with as a woman? I'm sure the algorithm has something to do with sexism; if you're male you can say more than you can if you're female. You can't have opinions, which is why my website, one opinionated b****.com, was taken down, and it was describing a website. So I guess you're taking down everybody's sites without even giving them a heads-up so they can download everything they had written, considering I'm a writer and I have a lot to say.

ᏗᏦᏗᏕᏂ ᎷᏗᏂᏗᎷᏬᎴ
A fake account is uploading sexual pictures and videos to Facebook. Please help me and accept my report.
Facebook link: https://www.facebook.com/profile.php?id=100052519335917

Wa Fg
https://www.facebook.com/help/120939471321735/?ref=share

Joshua Adelante
Please heart and share the actual pic in the link, guys, and if you still have time to follow me, please follow me as well. Thank you very much for the support 💗
https://www.facebook.com/photo.php?fbid=586489822336140&set=a.106647966986997&type=3&app=fbl

Robert Duffy
Lost my notes and files. Doesn't seem like "Giving People Control and Allowing More Content"

Robert Duffy
Not as useful as before "There is no single solution to these challenges, and these are not problems you ever fully fix. But we can improve our systems over time, as we've shown over the last two years. We will continue making progress as we increase the effectiveness of our proactive enforcement and develop a more open, independent, and rigorous policy-making process. And we will continue working to ensure that our services are a positive force for bringing people closer together."

Arindam Roy
Add me