NEWS

Infinit Care Spotlight Series: A closer look at worker profiles that need workplace mental health support

May 26, 2022 8:00 p.m.

Being on the internet has become a regular part of our everyday life. According to the latest statistics, 3.96 billion people use social media globally, with each person spending an average of 147 minutes, or two hours and 27 minutes, on digital platforms every day.

These are significant figures when you consider the level of exposure we get from the digital platforms we access.

Over the last few years, the internet has played a pivotal role in society: building businesses and new industries, creating new needs, and, of course, shaping the mindset of the public. Without a doubt, the internet has become powerful enough to shape how entire generations think and act.

But have you ever wondered how information is sifted and checked in the online worlds we love to immerse ourselves in?

Websites and applications, big and small, have community guidelines that protect their users from being exposed to harmful information, but who exactly are the people working behind the scenes and doing the heavy lifting of screening this information?

In this article, we will talk about the sentinels of the internet and the plight that comes with their profession.

Meet the Content Moderators.

Content Moderation in a Nutshell

Content moderation, at its simplest, is the process of screening and monitoring user-generated content posted on online platforms.

Whenever a user submits or uploads something to a website, moderators go through the content to make sure that the material follows the community regulations and is not illegal in nature.

Some examples of banned content that content moderators screen are those that contain sexual themes, drugs, bigotry, homophobia, harassment, and racism.

While content moderation is applied to a majority of online platforms, it is practiced even more heavily on websites that lean toward user-generated uploads.

This includes social media platforms, online marketplaces, communities and forums, the sharing economy, and even dating sites.

There are two different types of content moderation that websites use: AI-automated and human moderation.

In the first type, a machine learning system is designed to moderate posts based on previous data gathered from the internet.

AI moderation is significantly faster, sometimes taking only seconds to review a post, but it might not always be 100 percent accurate because it relies on machine learning, which may not always pick up the right cues.

Human moderation, on the other hand, is a manual process in which an actual person reviews the posts.

Under this category, the screener follows specific platform rules and guidelines to check the user-generated content submitted to the website. While this type of moderation is more reliable than its counterpart, it also takes more time due to its manual nature.
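In practice, many platforms combine the two approaches: an automated system handles the clear-cut cases quickly, and uncertain posts are escalated to a human queue. Below is a toy sketch of that hybrid idea; the placeholder terms, threshold values, and function names are assumptions for illustration, not any platform's actual system.

```python
from dataclasses import dataclass

# Placeholder banned terms for the example; a real system would use a
# trained classifier, not a word list.
BANNED_TERMS = {"slur1", "slur2", "graphic-violence"}

@dataclass
class Post:
    post_id: int
    text: str

def machine_score(post: Post) -> float:
    """Crude stand-in for an ML model: fraction of banned terms present."""
    words = set(post.text.lower().split())
    return len(words & BANNED_TERMS) / len(BANNED_TERMS)

def moderate(post: Post, block_at: float = 0.5, review_at: float = 0.2) -> str:
    """Auto-remove confident violations; escalate borderline posts to humans."""
    score = machine_score(post)
    if score >= block_at:
        return "removed"        # automated decision, takes seconds
    if score >= review_at:
        return "human_review"   # uncertain: queue for a human moderator
    return "approved"

print(moderate(Post(1, "a friendly hello")))                  # approved
print(moderate(Post(2, "slur1 ok")))                          # human_review
print(moderate(Post(3, "slur1 slur2 graphic-violence")))      # removed
```

The design choice this illustrates is the trade-off the article describes: the automated path is fast but fallible, so only high-confidence decisions are automated, while the ambiguous middle band is exactly the material that lands in front of human moderators.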

Moreover, it also presents a serious problem within its workforce that, unfortunately, is often not well addressed: mental distress.

The Dark Side of Content Moderation

While content moderation remains a discreet profession, at least in the Philippines, more and more people who have worked in the field have stepped up in recent years to speak about the challenges and dangers prevalent in the industry.

A riveting 2018 internationally produced documentary titled ‘The Cleaners’ gave an exhaustive look at the plight of moderators in the country who worked for online giants like Facebook, Twitter, and Google, and tackled the mental health struggles stemming from their job.

Facebook itself has acknowledged the difficulties that come with the profession while Microsoft has faced lawsuits from former employees who claim that they were not given proper support despite the psychological dangers of their job.

Moderators sift through hundreds of submissions that contain triggering content not limited to depictions of death, torture, mutilation, and violence for hours, sometimes with only limited time for breaks.

The nature of the work can lead to the development of mental distress and psychological issues such as post-traumatic stress disorder (PTSD), anxiety, and even depression.

This is also supported by data from studies in journalism, law enforcement, and child protection, which show that repeated trauma exposure can lead to psychological distress.

On top of that, workers in these fields have also been reported to suffer more from burnout, relationship challenges, and even suicide.

The following are other mental health problems that can arise from exposure to toxic content:

  • Panic attacks – some moderators have reported panic attacks when around animals and children, fearing something will happen to them, after repeated exposure to violent videos.
  • Normalization/Desensitization to disturbing humor and language – repetitive exposure to disturbing content can change the mindsets and perspectives of its audience, leading to inappropriate humor and language.
  • Self-destructive habits – alcoholism, drug use, and indiscriminate sexual behavior have reportedly also surfaced in moderators’ workplaces, presumably as a form of emotional escape from the job.
  • Skewed beliefs – in some cases, content moderators can also develop fringe views (e.g. believing conspiracy theories) that are not supported by hard facts because of constant exposure to such materials.

The Cost of Internet Safety

Without a doubt, content moderators serve as the first layer of protection of the general public from disturbing and harmful materials.

Unfortunately, they are not always properly protected from the rigors that come with their profession.

Unlike other workplaces (for example, those in the health sector, law and policing, and journalism), which have more solid guidelines for taking care of the mental needs of their workforce, there is an obvious lack of a similar system for those working in the content moderation industry.

A Harvard article even notes that companies are very restrictive about letting others investigate their existing procedures and treatment of these workers.

Not only are there no third parties monitoring the welfare of employees, but people working in the industry are also commonly asked to refrain from talking about their work through non-disclosure contracts.

Fortunately, some companies have also taken the initiative to develop workplace guidelines that can improve the treatment of those in the industry.

Facebook, for example, helped create the Technology Coalition which then designed the Employee Resilience Guidebook, a guide that outlines rules protecting the occupational health and safety of workers reviewing distressing content.

While the guidelines were made for employees dealing with child pornography, they also contain terms that can apply to other professions that expose workers to distressing imagery and content.

Specifically, the guide includes rules such as mandatory individual and group counseling sessions with a certified trauma specialist, limiting exposure to disturbing content to four hours, giving employees the choice to opt out of viewing specific disturbing content, encouraging them to switch to other projects as a form of relief, and giving them enough time to take a break and recover from their work.
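A cap like the four-hour exposure limit above is straightforward to enforce in scheduling tools. The following is a minimal sketch of that bookkeeping; the function names and the idea of tracking review time as minute blocks are assumptions for the example, not the guidebook's actual tooling.

```python
# Illustrative sketch of a daily exposure cap on distressing-content review.
FOUR_HOURS_MIN = 4 * 60  # cap expressed in minutes

def within_exposure_cap(review_blocks_minutes: list[int]) -> bool:
    """True if a moderator's review time today stays within the cap."""
    return sum(review_blocks_minutes) <= FOUR_HOURS_MIN

def remaining_exposure(review_blocks_minutes: list[int]) -> int:
    """Minutes of distressing-queue time that may still be assigned today."""
    return max(FOUR_HOURS_MIN - sum(review_blocks_minutes), 0)

# Three 90-minute review blocks already exceed the four-hour cap.
print(within_exposure_cap([90, 90, 90]))   # False (270 min > 240 min)
print(remaining_exposure([60, 60]))        # 120
```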

Protecting the Protectors

While overarching guidelines are already being developed on a global scale, it is undeniable that a huge chunk of the responsibility falls on employers, who are in a better position to observe and improve best practices in this area.

Here at Infinit Care, for example, we follow a tried and tested framework, the Mental Health Continuum, to make sure that every employee working in high-risk professions gets the mental health support that they need, wherever they are on the scale, whether they are excelling, surviving, or in crisis. (Click here to know more about the Mental Health Continuum.)

Our Head of Clinical Care Shyne Mangulabnan suggests several ways employers can put this to work.

“Having a counseling professional who can help these employees is essential as well as having a solid support and assessment system for them. For example, surveys given to agents which can be used as a reference for the design of a wellness strategy is a good place to start. Constant monitoring of employees should also be done to make sure that their needs are met.”

On top of that, Mangulabnan also suggests creating proper escalation procedures for concerns relating to the mental health challenges of content moderators.

Properly educating important stakeholders within the company (the human resources team, upper management) about the mental health risks of the job is also necessary, since they are the decision-makers who create the systems that take care of employees.

“It would be best to have an end-to-end solution: an onboarding process that gives candidates the training and education they need to understand the risks and concepts of well-being,  round-the-clock onsite and virtual counseling services, community support groups, yoga and meditation activities, and workshops are just some of the many things that employers can initiate to make sure that they give the support that their workforce needs.”

True enough, it is the responsibility of employers to make sure that they ‘protect the protectors’ of the internet.

However, it’s not only content moderators who should be given this kind of support, especially with 43 percent of the global workforce saying that the COVID-19 pandemic has increased their work-related stress.

This story is just the first chapter of a series that will shed light on the professions that need mental health support most in these trying times.

Do you need help on how you can start caring for your employees in this aspect? We’d be more than happy to guide you here at Infinit Care. We are a company that helps other companies provide comprehensive mental health care support to their employees through the use of science-backed methodologies. You can reach out to us here to know more about how we can help.


TikTok enhances safety, transparency for Filipino community with new initiatives

7:01 p.m. July 16, 2023

TikTok, the world’s leading short-form video platform, is taking further steps to ensure a safer and more transparent platform for its Filipino community. In response to the evolving digital landscape and the rise of AI-generated content (AIGC), these initiatives are designed to maintain a secure environment, uphold community guidelines, and ensure users can trust the content they encounter on the platform.

Strengthening Community Guidelines Enforcement

As part of its ongoing efforts to safeguard its community, TikTok recently published its Q1 2024 Community Guidelines Enforcement Report. During this period from January 1 to March 31, 2024, TikTok removed 4.26 million videos in the Philippines for violations of its Community Guidelines. Of these, 99.7% were removed proactively, and 95% were taken down within 24 hours.

To further enhance transparency, TikTok updated its Community Guidelines in April to provide clearer rules and introduce new features that help creators understand and comply with policies. Available in English and Filipino, these guidelines include detailed definitions and outline moderation practices for features like Search, LIVE, and the For You feed, ensuring policies are clear and accessible to all users.

Advancing AI-Generated Content Transparency

In response to the increasing prevalence of AI-generated content, TikTok has implemented new measures for transparency. Since May, TikTok has automatically labeled AI-generated content uploaded from specific platforms. This initiative is part of a collaboration with MediaWise, a program of the Poynter Institute, and the Coalition for Content Provenance and Authenticity (C2PA), making TikTok the first video-sharing platform to adopt C2PA’s Content Credentials technology. These labels aim to provide users with clear context about the nature of the content they consume.

Educating the Community with Media Literacy Tools

To support its community in navigating AI-generated content and combating misinformation, TikTok is launching new media literacy resources. Developed in collaboration with experts, these resources are integral to TikTok’s broader strategy to enhance user understanding and foster a more informed community. As part of this initiative, TikTok has partnered with MediaWise to release 12 educational videos throughout the year. These videos aim to teach universal media literacy skills and explain how TikTok’s AI-generated content labels can help contextualize content. This partnership underscores TikTok’s commitment to educating its community and fostering a more informed user base.

Expanding AIGC Labeling Through Partnerships

Building on its efforts to ensure content transparency, TikTok has extended its auto-labeling capabilities for AI-generated content created on other platforms. By integrating the ability to read Content Credentials from C2PA, TikTok automatically recognizes and labels AI-generated content, with plans to expand this to audio-only content soon.

In the coming months, TikTok plans to attach Content Credentials to its content, ensuring transparency even when content is downloaded, allowing users to utilize C2PA’s Verify tool to identify AI-generated content and understand its creation details.

Driving Industry-Wide Adoption

In its mission to promote industry-wide adoption of Content Credentials, TikTok has joined the Adobe-led Content Authenticity Initiative (CAI). As the first video-sharing platform to implement Content Credentials, TikTok is at the forefront of encouraging transparent content practices across the industry. The gradual increase in auto-labeled AI-generated content on TikTok is expected to grow as more platforms adopt this technology, fostering a more transparent digital landscape.

For You Feed and Creator Code of Conduct

To further enhance safety, TikTok has introduced new standards that will temporarily restrict accounts that repeatedly violate content standards. These accounts and their content will be harder to find in search, with creators being notified and given the option to appeal.

Additionally, TikTok published a Creator Code of Conduct outlining the standards expected from creators involved in TikTok programs, features, events, and campaigns. This code reinforces TikTok’s commitment to maintaining a safe and inclusive platform.

Through these measures, TikTok continues to focus on helping its community, especially creators, understand its rules and enforcement methods to ensure a safer experience for its users. By embracing continuous innovation and collaboration, TikTok strives to create a secure and inclusive space for creativity and connection.



SM Prime, DTI empower MSMEs with 83 SM mall spaces, training, mentorship

(L-R): SM Prime Chairman of the Executive Committee Hans Sy and Department of Trade and Industry (DTI) Secretary Alfredo Pascual

9:15 p.m. July 12, 2024

Good news for Micro-, Small, and Medium-Sized Enterprises (MSMEs)! SM Prime Holdings (SM Prime) and the Department of Trade and Industry (DTI) solidified a partnership through a Memorandum of Agreement (MOA) signing ceremony held on July 1 at the SM Prime Headquarters.

This collaboration empowers MSMEs with prime mall space in 83 SM Malls nationwide, aligning with the One Town, One Product (OTOP) Philippines program. Besides providing space, SM Prime offers MSMEs discounted booth rentals, training programs on product development, marketing, financial management, and mentorship opportunities with experienced business leaders.

SM Supermalls’ President Steven Tan (3rd from left) and Department of Trade and Industry (DTI) Secretary Alfredo Pascual (3rd from right) with (L-R): SM Supermalls’ Assistant Vice President for Operations Royston Cabunag, SM Supermalls’ Vice President for Operations Junias Eusebio, DTI Undersecretary for Micro-, Small, and Medium-Sized Enterprises (MSME) Development Group Cristina Roque, and DTI-Bureau of Market Development, Promotions, and One Town, One Product (OTOP) Philippines Director Marievic Bonoan
Key signatories from SM and the Department of Trade and Industry (DTI) sign the Memorandum of Agreement.
Department of Trade and Industry (DTI) Undersecretary for Micro-, Small, and Medium-Sized Enterprises (MSME) Development Group Cristina Roque



DTI National Food Fair celebrates local flavors at SM Megamall

8:26 p.m. July 11, 2024

The Department of Trade and Industry (DTI) successfully concluded the 10th National Food Fair at SM Megamall’s Megatrade Halls 1-3, held from July 3-7, 2024. This premier event showcased the rich flavors of the Philippines and empowered over 200 Micro-, Small, and Medium-Sized Enterprises (MSMEs).

Food enthusiasts enjoyed a bounty of fresh produce, regional specialties, and delectable treats from all corners of the country. Attendees had the opportunity to stock up on pantry staples, explore health-conscious options, and discover unique ingredients to elevate their cooking skills.

(L-R): Megatrade Hall’s Maite Quiogue, SM Supermalls’ Assistant Vice President for Operations Royston Cabunag, Department of Trade and Industry (DTI) Undersecretary for Micro-, Small, and Medium-Sized Enterprises (MSME) Development Group Cristina Roque, Guest of Honor Winnie Chua-Go, SM Megamall Assistant Vice President for Operations Christian Mathay, SM Supermalls’ Vice President for Corporate Marketing Grace Magno, DTI-Bureau of Market Development, Promotions, and One Town, One Product (OTOP) Philippines Director Marievic Bonoan, and SM Megamall Assistant Mall Manager Isabella Manjon

(L-R): Department of Trade and Industry (DTI) Undersecretary for Micro-, Small, and Medium-Sized Enterprises (MSME) Development Group Cristina Roque, Guest of Honor Winnie Chua-Go, and DTI-Bureau of Market Development, Promotions, and One Town, One Product (OTOP) Philippines Director Marievic Bonoan

The 2024 Department of Trade and Industry (DTI) Bagong Pilipinas National Food Fair brings together the best food and flavors from all 16 regions.

Fresh pomelos and other local fruits take center stage at the National Food Fair in Megatrade Hall.

A potential buyer gets ready to take home bottled Bicol Express and Laing at the 10th National Food Fair in SM Megamall.

Crispy, salty, and packed with nutrients, these water spinach chips are the perfect healthy snack.

Bottled honey and baked fruit crisps, all made with local ingredients.

Davao del Sur and Misamis Oriental’s chocolate products are crafted from premium cacao beans.

Quality golden salted eggs from Rizal.

A variety of coconut products from San Pablo, Laguna.
