The Turquoise Tick Initiative: A Roadmap to Better Mental Health and Cyber Wellness for Youths in Singapore

How can young people foster their wellness online in a socially-distanced world?

This question must be on the minds of many educators, parents, and others who are in contact with youths, especially in this pandemic era. As former educators, we (the writers of this article) have seen students and friends turn to the internet for help whenever they faced struggles, simply because they were not ready to seek help in real life. They sought advice from online sources because they valued the solace that anonymity brings in the online world. For users who wish to seek comfort from the safety of their screens, simply reading about someone else’s wellness journey while remaining anonymous can be cathartic and can help them in their search for answers.

In a research article by the Community Health Assessment Team (CHAT) Singapore on youth-friendly health services, only 31.8% of youths sought help for treatment. The findings also revealed some barriers, including a lack of resources and poor accessibility of services. The survey also found that 62.2% of youths were already aware that CHAT is a youth mental health service and that 42.5% first made contact with CHAT online. This shows that youths are more likely to seek help online first, or are open to receiving help if they can do their own anonymous research before deciding to go any further.

But what if these youths do not know how to distinguish credible information from misinformation? They could accept what they find as the truth because they have not yet acquired the skills to critically evaluate what they see online. The internet can then turn from a place of solace into an unsafe space riddled with distrust.

How then can we help people, especially youths or the less informed, to distinguish what is legitimate, credible, and verified from what is simply misinformation that could harm their mental well-being?

Overview

This article will focus on measures that can aid youths in their mental health journey, as well as look at how the industry can help in this cause. Our experience last year, when the COVID-19 pandemic escalated the amount of misinformation on social media platforms, underscored an important point: misinformation is false or misleading information that is spread, regardless of any intent to mislead.

Against this backdrop, we have decided to band together to address how tech companies and online platforms such as websites can help mitigate the risks that the propagation of misinformation poses to youths’ cyber wellness and mental health online.

To frame the gravity of this issue, the large amount of time spent online comes with a troubling statistic: 75 per cent of youths in Singapore have encountered cyberbullying, yet only 3 per cent have reported it to their parents. This is unfortunate yet intriguing, as youths have unprecedented access to mental health resources online – a quick search on Instagram alone surfaced 33.7 million posts supporting the message and spreading awareness about mental health. Adding to these concerns, the 2020 Child Online Safety Index (COSI) report found that despite Singapore scoring well in almost all fields researched in the report, such as connectivity and cyber-security infrastructure, what is lacking is the social infrastructure that could tackle the issue of online safety for a positive and less mentally injurious online experience.

Data from the Digital 2020 Singapore report by We Are Social revealed that Singaporeans spend an average of 7 hours a day on the Internet, a figure that has only increased since the COVID-19 pandemic. As one of the most highly connected nations in the region, boasting affordable high-speed Internet access, the island-republic recently reported, in a study announced at the Singapore Mental Health Conference 2021, a spike in mental health cases exacerbated by the effects of the pandemic.

With so much time spent online, some of which youths use to find solace or seek preliminary advice on mental health, there should be pathways to help youths distinguish credible community pages and online entities that share mental health resources from unreliable ones.

The Turquoise Tick Initiative

We came up with the idea for the Turquoise Tick Initiative (TTI) at a Cyber Wellness design jam called Heartbits, co-organized by Facebook and the National Youth Council (NYC) Singapore. Subject matter experts spoke about the prevalent issues of mental health and cyber wellness amongst youths in Singapore, and through workshops and mentor consultations with fellow changemakers, we crafted and refined the idea for the TTI.

Screenshot of a session in the Heartbits Design Jam

The TTI serves as a way of verifying a community page or online support group that shares resources on mental health issues. The Turquoise Tick (TT) is a legitimacy checkmark that visually distinguishes credible sources of information and groups from others, in a bid to help users filter credible information from misinformation.

A mock-up of The Turquoise Tick

Turquoise, a combination of blue and green, is a nod to the “verified blue ticks” seen on some social media accounts and the increasingly common “green ticks” related to environmental responsibility. Turquoise represents calmness, clarity, and assurance. As a tertiary colour, it also represents the evolution and amalgamation of primary and secondary colours, symbolic of how our society has progressed to the point where we can combine our knowledge, expertise, and willingness to help others in a simple symbol.

How it can be done – Whole-of-society Approach

The parameters that govern the TTI will be explored through a series of stakeholder discussions that take a whole-of-society approach.

Pathway 1: Roundtable Stakeholder Discussions

As part of the TTI, we recommend convening key stakeholders for several rounds of discussion on ways to tackle cyber wellness and mental health amongst our youths in Singapore. This will bring multiple perspectives into the conversation: the Internet industry (which owns social media and online intermediary platforms), mental health professionals, government and regulators, civil society, and prominent public figures. The hope is to break down barriers that may exist amongst the parties involved and to capture a multitude of voices, drawing insights from the United Nations’ Nairobi Outcome Document, which calls for more inclusive, multi-stakeholder partnerships to deal with complex issues, with intersecting efforts coordinated and complementary.

There are several advantages to undertaking this approach as the first step of the Initiative:

  1. Fostering cooperation and inter-relations between the parties involved
  2. Building greater trust and understanding amongst the parties involved in the work
  3. Increasing civic engagement between the Internet industry and civil societies
A diagram showing who would be involved in a whole-of-society approach, with the circles corresponding to the importance of the groups in deciding what constitutes the turquoise tick

Kick-off Session

Invited agencies and parties will be introduced to one another.

Objective: to build openness and authenticity, and to lay the foundation for meaningful, deeper connections before structuring pathways for undertaking a topic as complex as mental health in society.

Goal: each organization gains an awareness of the others’ interests, capabilities, and possible intersections.

The Initiative will then carve out a space for the parties to contribute by framing problem statements on the topic and aligning on a common vision. To close the first engagement, the parties will vote to shortlist the problem statements and a common vision they can all agree on.

Roundtable Stakeholder Discussions

A series of three initial discussions (which can expand later) will highlight the areas of cooperation that representatives from the larger group can focus on, and can then be scaled to involve more parties should the need arise.

This stage may take longer to percolate, given the complexity of the issues concerned and the web of relationships among the parties at every layer of the discussion. To capture the nuances of these discussions, the Initiative will gather both qualitative and quantitative insights, which will eventually be presented as a larger insights paper on the state of mental health and cyber wellness among youths living in Singapore.

Insights Paper

The paper will segue from the roundtable series and pave the way for a formalized solution that brings together the civil society sector and the Internet industry from the earlier roundtables. They will work on and commit to changes to our online space, as it is imperative for these two parties to share responsibility for safe content and information on the Internet with societal actors and platform users. In addition, mental health professionals will play a pivotal role as credible custodians and amplifiers of conversations and resources on social media platforms.

Pathway 2: Industry case studies and what can be learned from them

The issue of verification and legitimacy is one that many online platforms, such as social media sites and websites, have tried to tackle, with varying degrees of success. To date, however, there is no industry standard that platforms have agreed upon.

However, there are some commonalities across online platforms, namely:

  • Account verification via identification
  • Notability and activity used as grounds for verification
  • Use of admins or moderators to monitor content or any violation of rules and guidelines

Case study: social media and the Blue Verified Badge

Social media platforms use the “Blue Verified Badge” to identify accounts that have met their criteria for verification. Once verified, a simple blue badge appears on the profile. Although different platforms verify accounts in different ways, the common underlying reason for doing so is to provide users with a way to identify legitimate accounts.

An analysis of some social media platforms shows more clearly the commonalities in their verification rules and processes.

Facebook & Instagram

Name: verified badge
Requirements:

  • Authentic: Represent a real person, registered business, or entity.
  • Unique: Be the only presence of this person or business. Only one Page or profile per person or business may be verified, with exceptions for language-specific Pages and profiles. General interest Pages and profiles (e.g., puppy memes) are not verified.
  • Complete: Have an about section, a Page or profile photo, and recent activity, including at least one post.
  • Notable: Represent a well-known, often-searched person, brand, or entity. The platform reviews Pages and profiles that are featured in multiple news sources, and does not consider paid or promotional content as sources for review.

Process:

  • Fill out a form in which the person, entity, or business provides official identification and a write-up of why they should be verified. Optional fields include links to other social media accounts or an official website.

Twitter

Name: blue verified badge
Requirements:

  • Authentic: official website, official ID verification, official email address
  • Notable: 6 types:
    • Government
    • Companies, brands, and non-profit organizations
    • News organizations and journalists
    • Entertainment
    • Sports and esports
    • Activists, organizers, and other influential individuals
  • Active: Complete profile, active use, a confirmed email address or phone number as an approved security feature, and no record of an account lockout from Twitter.

Process:

  • Fill out their form by choosing the type of account and providing identification; Twitter will then contact the account holder via email.

Commonalities:

  • Verified accounts must be public and linked to an official identification.
  • Use of forms and vetting by a team.
  • Verification does not mean endorsement or promotion; it is simply a way of verifying authenticity.
  • Removal of the badge if the account is deemed suspicious or has violated guidelines and policies.

Differences (Facebook & Instagram):

  • Having a badge on Facebook does not guarantee a badge on Instagram and vice versa.
  • Badges can be removed if profiles show suspicious activity, used third parties for verification, or attempt to sell their verification.

Differences (Twitter):

  • Badge will be removed if the user changes the handle, becomes inactive, or is no longer in the position they were initially verified for (e.g., elected government official who leaves office).

These social media giants have aimed to tackle the issue of inauthentic or suspicious accounts by using a blue verified badge and various means of authentication. While it is impossible to eliminate all suspicious actors, the badge does provide a level of assurance to users, especially when viewed with a critical lens.

This contributed to the inspiration behind the TT. The TT centres on the legitimacy and credibility of a group’s claim to share information, especially groups sharing information about mental health online. Social media platforms could agree on verification factors that would legitimize social groups seeking to enact genuine change by helping people.
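
The sketch below, in Python, shows one way a platform might encode TT-style criteria as a simple rule check, modelled on the commonalities observed across the platforms above. The field names, the professional-endorsement criterion, and the all-criteria-must-pass rule are our illustrative assumptions, not a published specification.

    from dataclasses import dataclass

    # Hypothetical criteria for a Turquoise Tick application, modelled on the
    # commonalities across the platforms above. Names and thresholds are
    # illustrative assumptions, not a platform's actual implementation.
    @dataclass
    class TTApplication:
        has_official_identification: bool  # authentic: tied to a real entity
        endorsed_by_professional: bool     # vetted by a mental health professional
        is_public: bool                    # verified accounts must be public
        is_only_presence: bool             # unique: one page/profile per group
        has_recent_activity: bool          # complete and active profile
        violation_count: int               # guideline violations on record

    def qualifies_for_turquoise_tick(app: TTApplication) -> bool:
        """Return True only if every baseline criterion is met."""
        return (
            app.has_official_identification
            and app.endorsed_by_professional
            and app.is_public
            and app.is_only_presence
            and app.has_recent_activity
            and app.violation_count == 0
        )

    # Example: a public support group with professional endorsement qualifies.
    print(qualifies_for_turquoise_tick(
        TTApplication(True, True, True, True, True, 0)))  # True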

Case study: Psych Central

For someone at the start of their mental health journey, doing a quick search online might be one of the first things that they do. Therefore, websites providing tips and resources for mental health should also adhere to standards that the TT would suggest.

Psych Central is an American website that provides mental health resources via articles written by experienced writers and vetted by its editorial team and medical experts. Its mission is to provide a strong, evidence-based foundation for readers, without complicated medical jargon, so that readers can embark on their journey.

They acknowledge that legitimate information can be hard to find online. As part of their values, they are very clear about their editorial process and transparent about the writers and medical experts who vet each article.

In one of their articles giving advice on online support groups, the content is simple and straightforward, recommending sources and groups that readers can use to start their wellness journey. This is a common baseline for a wellness article, but what gives it legitimacy is shown at the top of the article: it clearly states who wrote it and, more importantly, who vetted it.

Screenshot of Psych Central’s format for transparency of their content.

Each article links to a profile of the medical team member or associate who vetted it, someone who always has a background relevant to the article’s topic. This gives a layer of assurance that even if the article was written by a writer who is not a subject matter expert (which is common), it was vetted by someone who is. It serves as a form of moderation that adds a layer of content authenticity.
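
As an illustration, the short Python sketch below models this dual-byline pattern: every article records both its writer and the credentialed reviewer who vetted it, plus a review date to support periodic re-review. The field names and the one-year review window are our assumptions for illustration, not Psych Central’s actual system.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Reviewer:
        name: str            # e.g., a licensed clinical social worker
        credentials: str
        profile_url: str     # public profile, so readers can check the background

    @dataclass
    class Article:
        title: str
        author: str          # the writer, who may not be a subject matter expert
        reviewer: Reviewer   # the medical expert who vetted the content
        last_reviewed: date

        def due_for_review(self, today: date, max_age_days: int = 365) -> bool:
            """Flag articles whose last medical review is older than the policy window."""
            return (today - self.last_reviewed).days > max_age_days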

An article like this links to support groups and associations. Having someone like Danielle Wade, a clinical social worker, vet and approve those links assures readers that these are not just groups the writer found online, but legitimate groups that someone in the field has approved.

Another screenshot showing Psych Central’s commitment to content authenticity

This article discusses a position paper on how teens find solace online and through social media in times of stress and isolation. It is written by a senior news editor with a PhD and vetted by Psych Central’s Scientific Advisory Board, which also periodically reviews existing articles. This commitment to ensuring that each article’s content is legitimate, and to reviewing it periodically, is one way sites like this can help mitigate misinformation online.

Case study: Reddit

As a social news aggregator and discussion website, Reddit, with its substantial user base of 330 million, is known as a platform catering to specific user interests through a network of activity-focused communities known as subreddits. Subreddits can be thought of as individual community sites, while Reddit is the single main domain under which these subreddits sit.

Utilising a decentralised and hybrid approach to content moderation, Reddit uses a user-driven voting system that determines the ranking of content posted on each page and in each subreddit; a simplified sketch of such a ranking follows. The platform provides a set of high-level, overarching content policies and prohibits illegal or harmful content and content that encourages or incites violence.
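
The Python sketch below is a simplified version of this vote-driven ranking, based on the “hot” formula from Reddit’s formerly open-sourced codebase; treat it as an illustration rather than Reddit’s current production algorithm.

    from datetime import datetime, timezone
    from math import log10

    # Reddit's historical epoch constant (from its open-sourced ranking code).
    REDDIT_EPOCH = datetime(2005, 12, 8, 7, 46, 43, tzinfo=timezone.utc)

    def hot_score(upvotes: int, downvotes: int, posted_at: datetime) -> float:
        score = upvotes - downvotes
        # Vote margins count logarithmically, so the first 10 net votes
        # matter about as much as the next 100.
        order = log10(max(abs(score), 1))
        sign = 1 if score > 0 else (-1 if score < 0 else 0)
        seconds = (posted_at - REDDIT_EPOCH).total_seconds()
        # A steadily growing time bonus means content decays off the front
        # page unless votes keep it competitive.
        return round(sign * order + seconds / 45000, 7)

    # A newer post with fewer net votes can outrank an older, popular one.
    old = hot_score(500, 20, datetime(2021, 6, 1, tzinfo=timezone.utc))
    new = hot_score(80, 5, datetime(2021, 6, 2, tzinfo=timezone.utc))
    print(new > old)  # True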

Content moderation on Reddit is done through the following approaches (a sketch of this layered division of labour follows the list):

  • A small, centralised team of moderators (known to users as administrators or admins)
  • The majority of content moderation on the platform is carried out by voluntary moderators of individual subreddits, who are known as mods
    • Mods have reasonable editorial discretion and the ability to remove content that violates Reddit’s platform policies or content that is deemed objectionable
    • Mods can also temporarily mute or ban users from their subreddit
    • They are also empowered to craft parameters where acceptable content can be defined as long as they do not conflict with Reddit’s global set of content policies
    • All of the mods on a subreddit can also collectively create guidelines that elaborate on their responsibilities and codes of conduct
    • Mods may also have additional roles, such as fostering discussions and engaging redditors on the subreddits
    • Intervention by admins is rare; they step in only to remove objectionable content that is illegal or violates Reddit’s content policies, or to ban users from the site for such violations
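
As a sketch of this layered division of labour, the Python below checks content first against a global policy (enforced centrally by admins) and then against subreddit-level rules (enforced by volunteer mods). The rule names and string checks are hypothetical placeholders; only the layering itself comes from the description above.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Post:
        subreddit: str
        text: str

    # Sitewide prohibitions, enforced by the small central team of admins.
    GLOBAL_POLICY: List[Callable[[Post], bool]] = [
        lambda p: "incitement to violence" not in p.text,
        lambda p: "illegal content" not in p.text,
    ]

    # Local rules set by each subreddit's mods; they may be stricter than,
    # but must not conflict with, the global policy.
    SUBREDDIT_RULES: Dict[str, List[Callable[[Post], bool]]] = {
        "mentalhealthsupport": [lambda p: "unqualified diagnosis" not in p.text],
    }

    def moderate(post: Post) -> str:
        if not all(rule(post) for rule in GLOBAL_POLICY):
            return "removed by admins (global policy violation)"
        if not all(rule(post) for rule in SUBREDDIT_RULES.get(post.subreddit, [])):
            return "removed by mods (subreddit rule violation)"
        return "visible (ranked by community votes)"

    print(moderate(Post("mentalhealthsupport", "sharing helpline resources")))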

Recommendations and Conclusion

Addressing young people’s mental health and wellbeing in their online engagements poses a multi-dimensional challenge. Factoring in the safety sensibilities across the spectrum of young ages is complex, and mental health and wellness need new, proactive approaches from stakeholders across society. The following is a summary of recommendations that would address this holistically:

  • A whole-of-society approach: In the Singaporean context, this approach echoes the country’s leadership, with the social sector called on to amplify the government’s efforts in addressing the complex challenges the younger generation faces. There is much virtue in collaborating with key stakeholders on a matter of such societal importance. Involving multiple parties of interest provides a fair and equitable platform for all stakeholders to work on efforts aligned with the collective aim of addressing youth mental health and well-being online.
  • User verification and content authenticity: Drawing inspiration from intermediary and social media platforms that use verified user and account mechanisms, a proactive step is to assure users of the authenticity of mental health resources shared by legitimate and credible users on the platform. This can take the form of a legitimacy checkmark, as alluded to earlier. A checkmark is a simple signal, especially for youths, and can help distinguish credible sources of help and information from unverified accounts and groups.
  • Access to mental health resources online: Since more youths are spending time online, it would be helpful for platforms to work with mental health professionals to share resources and tips that support youths’ well-being. This can come in the form of credible, verified resources shared within the platform by accredited mental health professionals, or wellness programs and materials co-launched online for young persons.
  • Proactive content moderation: Collaborating with other stakeholders and enlisting assistance from the online community will help in moderating content. Platforms can decide how to balance maintaining their community guidelines and platform policies with putting the responsibility for fostering a safe online experience back in the hands of their users. Moderators can also include mental health professionals, who lend legitimacy to the content moderation process.
  • Teaching good digital citizenship and digital literacy practices: Building and scaling digital education in the earliest years of a young person’s school curriculum is important as our lives become hyper-digitalised. Stakeholders can work together to develop curricula and craft programs that are age-specific within the youth spectrum, with content that caters to the sensibilities and specific nuances of the target demographic. Teaching critical thinking skills that span both the online and offline spaces of youths will help develop a healthy generation of digital citizens.

This set of recommendations can be applied to different societal contexts with the appropriate localized nuances and values. As the popular African proverb goes, “It takes a village to raise a child”. It is our sincere hope that many will join hands to make the online space safe for every young person. Every effort and enabler counts in collectively cultivating our young to thrive and succeed in this increasingly digitalized world.

Note: The opinions expressed in this blog post are those of the authors. They do not purport to reflect the opinions or views of the Asia Internet Coalition (AIC) or its members.

References and Citations: