
needthebus

needthebus

Longing to Becoming HRU
Apr 29, 2024
303

It's more focused on the technical aspects of the content itself, and considering it's an academic paper it's wordy as fuck, but the importance lies in the data collection and pre-processing.

"Using network data generated from over 3.2 million unique interactions of N = 192 individuals, n = 48 of which were determined to be highest risk users (HRUs), a machine learning classification model was trained, validated, and tested to predict HRU status."
"A complete record of posting activity within the "Suicide Discussion" subforum of Sanctioned Suicide, from inception on March 17, 2018 to February 5, 2021, was programmatically collected and organized as tabular data using a custom Python (v3.8) script that primarily leveraged the BeautifulSoup package to parse the site's HTML and XML information.34 This effort resulted in a dataset containing more than 600,000 time-stamped posts across nearly 40,000 threads and over 11,000 users. This posting activity information consisted of (i) thread title, (ii) thread author, (iii) post author, (iv) post date, (v) post text content, and (vi) direct mentions and references to other user comments within the post text. All information, except for post text, was used in this study. To impose an added layer of user anonymity, each username was automatically assigned a randomly generated, 32-character hashed ID. These de-identifying IDs were automatically replaced with all instances of users' online handles within the data prior to subsequent preprocessing and analysis."
"A structured approach to select a subset of appropriate users and identify HRUs was devised based on the findings discussed through the New York Times investigation into Sanctioned Suicide35 as well as the authors' thorough review of the forum content. Moreover, this strategy was described and utilized in a previously published analysis of users on the Sanctioned Suicide forum.33 To reiterate herein, data was first filtered by searching for thread titles with the following keywords/phrases: "bus is here," "catch the bus," "fare well," "farewell," "final day," "good bye," "goodbye," "leaving," "my time," "my turn," "so long," and "took SN." Of note, "catch the bus" is a euphemism adopted by the community to symbolize suicide,33 while "SN" is short for sodium nitrate, an increasingly popular chemical used in suicide-related methodology. These terms were used to identify "goodbye threads" on Sanctioned Suicide, and thus have the highest probability of signaling for an impending attempt"
In simpler terms: a large amount of data was collected by a university research group, and the posts were then analysed for keywords. Not that this is of any interest to me personally, but @RainAndSadness, are there counter-measures against this kind of thing, or did you already know about it?

They even said: "Accordingly, written informed consent was waived and this study "exempt" from further review."

it would have been a more interesting study if they had also looked at post times. that's the sort of thing AI could find patterns in, even if time zones weren't known.
 
  • Hmph!
Reactions: ihateearth
T

TheUncommon

Student
May 19, 2021
129
It is not reasonable to think that information willfully published online can be kept out of the hands of third parties, especially when the site is open to the general public.
 
hereornot

hereornot

Freedom
May 16, 2024
142
Every time, everywhere, and for a long time now; not only SaSu but anywhere...
 
C

CatLvr

Warlock
Aug 1, 2024
798
well, we can all do our part by just commenting incorrect shit to throw off the AI

I can't wait to commit suicide by oxygen inhalation! I can't wait to die from drinking 2L of water daily! I can't wait to kill myself from 8 hours of uninterrupted sleep each night!
I would die if I actually got a whole 8 hours sleep!! 🥴
 
anonymousfoxxo

anonymousfoxxo

Stray Fox
Nov 9, 2023
33
I don't even have the energy to write all my thoughts about this. This is just horrible. I hate it.
 
  • Like
Reactions: ihateearth
Judah

Judah

Nobody remembers me
Oct 1, 2020
1,592
I guess it's impossible to prevent this site from being used as a research tool or as a way to feed AIs with information... Privatizing the forum would be a valid option, but RAS once said that this would mean the registration queue would grow considerably.
 
SNastablesalt

SNastablesalt

she longs for freedom
Oct 6, 2023
106

needthebus said: (full post quoted above)

I just audibly exclaimed "what the fuck" lolol
 
  • Like
Reactions: ihateearth
I

ihateearth

Student
Apr 1, 2024
150
the study seemed fairly ethical to me

they were using language terms to try to guess who actually died

then they were looking at interactions between users and threads to see who was more at risk

some of their conclusions were things like: if you interact with a few scattered people who aren't linked into a social group (i.e. your contacts have fewer connections of their own), your risk is higher. some of their conclusions I didn't understand. the study authors are smarter than I am, so I couldn't follow it all; perhaps I could if I learned more.

this didn't seem to me to be about trying to figure out who was posting, or even about stopping people. it seemed to be more about identifying social media signals that show who is at high risk

some people are high risk and want help. it's really tough to study the highly suicidal because the mental health industry's unethical rules and conduct leave people afraid of care.

in a normal setting (facebook) how do you study who wants to die?

the article is not as bad as it seems. this isn't about stopping us if we want to go; it's more about studying suicide as a phenomenon.

they also took steps to anonymize the dataset even while studying it. this study seems respectful of those involved and of the site, from what I read, although any publicity for this site is bad
It is unethical to steal data from this website, and from its users, without paying for it or asking for consent as part of a research study! Test subjects in studies are compensated. Unauthorized data mining is stealing, and it is unethical because people don't know they're being watched like lab rats.

Why do you think websites are increasingly locking down and monetizing their data? It stops thieves like these researchers and protects their users.

Stop normalizing research on suicidal people that develops tools to further monitor, label, and potentially arrest them. When police show up at their homes, as they have in the past, people will be taken away against their will, on top of being put on record.

Their names will be added to various systems carrying mental health stigma, even if it's just a rough period in their lives that they later overcome. Their names will be stigmatized, and in situations like court cases they can have custody of their kids taken away or have the record and data used against them.

Just because they anonymized the usernames and data doesn't mean they won't hand over the information if compelled by a subpoena, the police, or a government order. They have the raw data and the key. They're doing the government's work for it. I wonder who's funding the study? Government grants?

They didn't need to use AI to study suicide as a phenomenon. There have been innumerable studies done on suicide. These researchers are leeches. They want any opportunity to make money with AI and to create something with AI for their egos.

What's next? Developing a product. Having that product track people while the developers and creators get rich or richer. Pathetic. They don't care about suicidal people. They want accolades.

Leave suicidal people alone. If somebody wants to die, it's their life, their body, and their choice. Leave them alone. You suck for trying to make it sound normal for snooping college researchers, possibly backed by a government grant, to develop tools against people.

Judah said: (quoted above)
The website owners should stop being lazy and do whatever works best to make this website less of an easy target for the media and for random data thieves like these researchers. It seems they don't care. One comment highlighted multiple options.

These people should be paying for access to the website and its user data. Leaving it open is losing revenue that could be used to keep the site running. It's also reckless towards the users.
 
Last edited:
  • Like
Reactions: needthebus
EvisceratedJester

EvisceratedJester

|| What Else Could I Be But a Jester ||
Oct 21, 2023
3,730
Oh holy fudge no. I looked at this site as a safe haven to chat with others of a similar mind, not to be STARED AT ON DISPLAY LIKE A MONKEY IN A CAGE!!!! If they want to study us, then go ahead and talk to us, 1-on-1, and say it to our faces, and if they can't handle that, then they shouldn't even bother violating the personal rights we came to this site to protect and preserve, where we went to get AWAY from this kind of thing.

Sorry for the rant. This just feels VERY stupid and frustrating in all kinds of ways to me.
The thing is, they aren't violating anyone's rights. They are using posts made on a public forum. The data being collected likely falls under public data, meaning that informed consent is not required in this case. All of the information that they gathered is information that didn't require them to have any direct interactions with the site's users, let alone even require an account to access. Along with that, they also anonymized the data.
To impose an added layer of user anonymity, each username was automatically assigned a randomly generated, 32-character hashed ID. These de-identifying IDs were automatically replaced with all instances of users' online handles within the data prior to subsequent preprocessing and analysis.

Informed consent can actually be waived under certain circumstances.

This isn't anything new. There have been other studies done looking at posts on this forum. This isn't really unique to SaSu either, as you can find plenty of studies looking at posts from all sorts of other social media platforms, from more mainstream ones (e.g. Twitter) to forums.

For reference, I don't like what these researchers did and I wish that they had just left this site alone, but at the same time they did not violate anyone's rights, at least to my knowledge.
 
needthebus

needthebus

Longing to Becoming HRU
Apr 29, 2024
303
(quoting the post above in full)
all your points are really good, the mental health industry sucks and the overlap between them and police/government oppression is pretty close to 1:1

there could also be a SaSu ToS on the opening page:

By entering, I agree to the ToS. This ToS is also a contract. By entering SaSu, users get access to the platform and the ability to use the platform. The benefit is derived from usage of the platform, irrespective of content or lack thereof, which allows users to post. The benefit is accrued even without posting and just by being given the ability to post. For unregistered users, the benefit is the opportunity to use some of the platform and apply to be a full member so as to post, and having the ability to become a registered user requires consent of the contract.

I agree that I am not a researcher, journalist, or person under 18.

I agree not to use the data in any AI research or any research without compensating the owners of this website as per the terms herein and getting their express written permission first.

All user posts become the property of SaSu upon posting. If a researcher wishes to use these posts for an AI study, even a non-profit study, the cost is $500,000 per usage of all site data at the time of payment, plus 1 additional month of data scraping. For-profit data scraping costs the same. If a journalist or researcher wants platform access, but not content, for any purpose, the cost is $200,000 per month of access. By entering and accepting, I agree that these are reasonable fees, since the site is funded by its users, and the presence of journalists, researchers, and AI data scraping may result in a loss of users who are concerned about being monitored. In the event I use this data without consent, I consent to pay the legal fees for recovery of these amounts, and I agree that I will be liable in a personal capacity and, if I represent an organization, that it will also be liable. I agree that if I use any information on this website, including fair-use quotations or fair-use discussions of posts, or information for general research, without first contacting the admins, I will be liable for $250,000 per article. By entering, I agree that these fees are reasonable and not excessive.

This would at least make it hard for AI data scraping to occur. This was written mostly by ChatGPT.
 
Linda

Linda

Member
Jul 30, 2020
1,683
I have read the paper, and this study is not hostile. It's neutral, or even mildly positive, in its attitude to this site. Moreover, it reaches a very important conclusion, so important that I have broken my self-imposed rule of taking a long break from SaSu, so I can point it out. Users who interact less with other users are at greater risk of suicide. So it's really important that every one of us makes an effort to interact with people who get very few responses to their posts and/or threads. Talk to people. Don't leave anyone isolated. (And that's all you will be hearing from me for about another 6 months.)
 
  • Love
  • Like
Reactions: -Link- and Rust
needthebus

needthebus

Longing to Becoming HRU
Apr 29, 2024
303
Linda said: (post quoted above)
i interact with lots of people on here :-(

what if I never become a HRU? 😭
Linda said: (same post quoted above)
wait... is that a correct conclusion?

couldn't that just be random?

so let's say there is a website called Roulette. 100 people are on it. Each day, every surviving person posts once in reply to one user and then flips a coin. If they flip tails, they die.

On day 1, 100 people post and about half die.

On day 2, ~50 people post and about half die.

On day 3, ~25 people post and about half die.

And so on and so forth...

The people who died early are exactly the ones who ended up with the fewest posts and connections, so that pattern would lead to the conclusion that people who connect less with others are more likely to die, when it's actually random chance, right?
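
here's a quick simulation of that toy example (the numbers and the 20% daily death chance are made up — a gentler version of the coin flip, just so some users survive the window):

```python
import random

# Toy "Roulette" site: every surviving user posts once per day (one
# interaction) and then may die at random, independent of behavior.
# We then check whether the dead ended up looking "less connected".
random.seed(42)

N_USERS, N_DAYS, P_DEATH = 100, 10, 0.2
posts = {u: 0 for u in range(N_USERS)}
alive = set(range(N_USERS))

for _ in range(N_DAYS):
    for user in list(alive):
        posts[user] += 1               # one interaction today
        if random.random() < P_DEATH:  # random death, unrelated to posting
            alive.discard(user)

dead = [u for u in posts if u not in alive]
avg = lambda group: sum(posts[u] for u in group) / max(len(group), 1)
print(f"avg interactions, died:     {avg(dead):.2f}")
print(f"avg interactions, survived: {avg(alive):.2f}")
# The dead always average fewer interactions, purely because dying
# cuts posting short -- not because low connectivity caused anything.
```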

Did they control for that? I'm not smart enough to know if they controlled for that in the study.

I shouldn't have called that clinician a fucking asshole earlier, they probably know the answer! :-(

@bosschop did the study control for that?
 
Last edited:
Rust

Rust

Member
Aug 28, 2024
32
Because I had nothing better to do, I had a brief look at the paper. There were some interesting highlights:

To study Suicidal thought and behavior (STB) as it manifests within a modern modality of communication, free from the limitations of censorship found on mainstream platforms, this work leveraged data from an unconventional and unique community—the pro-choice forum, "Sanctioned Suicide."
...while an online community like Sanctioned Suicide has the capacity to offer novel insight into the nature of STB, it is not entirely representative of other modern communication platforms with much stricter content policies and mechanisms of censorship (e.g., Instagram or Reddit subforums for suicidal ideation).

They chose to focus on SaSu because there isn't a social filter here that would taint their data. The implication of this, though (at least as I understand it), is that their model only really works on SaSu and its users. So what they've built won't really be applicable to any other social platform; that would require further research.

STB is pervasive across the World Wide Web, and there is a concerted effort among suicidology and data science researchers to understand, detect, and prevent it.
Pairing these network-based features with other proven digital markers of STB risk may improve data-driven suicide prevention efforts.

It seems their intention is prevention, though they don't explicitly do or recommend anything about how to go about implementing it. I don't think their work is malicious on its own, but it could be bad depending on how it's used.

Upholding a pro-choice, censorship-free philosophy, Sanctioned Suicide has deservedly come under fire due to public outcry regarding suicides that were believed to have been facilitated through the content hosted within its virtual walls. The forum's content covers a variety of topics and themes, with the most concerning in regards to the solicitation and sharing of suicide methods-related advice. Although this obviates any potentially positive impact this community has on its highly vulnerable members, it is still important to note that individuals come to Sanctioned Suicide for different reasons. While some individuals seek aid in carrying out their plans to end their own lives, others join the community to be heard, understood, and validated due to thoughts, feelings, and opinions that make them pariahs in their day-to-day lives. This allows them to achieve a sense of belonging. Therefore, these users do not necessarily represent imminent attempters as they interact with others, share life histories, and bond through a mutual understanding of ideology.

They're very critical in their choice of wording: "deservedly come under fire", "this obviates any potentially positive impact this community has". They do end this by saying that there's a sense of community, but they still seem rather dismissive about it. I think they just added this to say that not everyone who joins SaSu is immediately interested in CTB.

the results of the current analysis suggest that individuals who are involved with more highly connected users, interact directly or indirectly with a larger number of users, post in a larger array of threads, or are more central receivers of information/attention are less likely to be High Risk Users (HRUs) than their peers. Conversely, the association and involvement with smaller cliques of users or interaction with a less connected or smaller number of users was shown by the model's prediction tendencies to be HRU signatures.

Ironically, later on, they seem to believe that if you're more involved in the SaSu community, you'll be less likely to CTB (I might be misinterpreting this one), which kinda contradicts the earlier quote. It also kind of invalidates the paper a little, since they basically say that some users come for the community aspect (instead of intending to CTB), but then proceed to say that those involved in the community don't often CTB (like of course you'd discover this).
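
As an aside, the network features they mention (how connected your contacts are, how many users you reach, how central you are) are standard graph metrics. A rough sketch of how that kind of thing is computed (hypothetical toy graph, using networkx; the paper's actual feature set and edge definition may differ):

```python
import networkx as nx

# Toy interaction graph: an edge means two users replied to each other.
G = nx.Graph()
G.add_edges_from([
    ("u1", "u2"), ("u1", "u3"), ("u2", "u3"),  # a tightly linked trio
    ("u4", "u5"),                              # an isolated pair
])

degree = dict(G.degree())                      # number of direct contacts
neighbor_deg = nx.average_neighbor_degree(G)   # how connected your contacts are
closeness = nx.closeness_centrality(G)         # how "central" each user is

for u in sorted(G.nodes):
    print(u, degree[u], round(neighbor_deg[u], 2), round(closeness[u], 3))
```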

Importantly, no account creation or interaction with users on Sanctioned Suicide was carried out by the researchers of this work.

Figured I'd just add this one for curiosity's sake.

I admit I cherry-picked the above quotes, but I still think it's a strange paper. To prevent data collection, I'd go for a rate limit on the number of threads a user can access in a given time frame. A captcha is normally used on registration or login, not on viewing threads, so I'm not sure how well that would work. Anyways, figured I'd just share this in case someone finds it interesting. I also see that some posts were made while I was writing this, but I'm too lazy to change it now.

---

Small Edit:
They do caveat the part I mentioned about how the paper could be invalidated with the following:
The unique moral duality of Sanctioned Suicide—existing simultaneously as both an unconventional "therapeutic" resource that discourages the act of suicide and a place to obtain all information necessary to successfully attempt it—makes it difficult to hypothesize
I definitely could've read the paper a little more thoroughly.

Also, in hindsight, a rate limit might not work, since they could just make multiple crawlers. So that might not be the greatest idea.
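
For what it's worth, the usual shape of that rate-limit idea is a per-client token bucket, something like the sketch below (my own illustration, not anything the site actually runs; keying by IP is exactly what a fleet of crawlers defeats, and keying by account only helps if viewing threads requires login):

```python
import time
from collections import defaultdict

# Minimal token-bucket sketch of a per-client thread-view limit.
RATE = 30 / 60.0   # refill rate: ~30 thread views per minute
BURST = 10         # allow short bursts of up to 10 views

buckets = defaultdict(lambda: {"tokens": BURST, "ts": time.monotonic()})

def allow_view(client_key: str) -> bool:
    """Return True if this client may view another thread right now."""
    b = buckets[client_key]
    now = time.monotonic()
    b["tokens"] = min(BURST, b["tokens"] + (now - b["ts"]) * RATE)
    b["ts"] = now
    if b["tokens"] >= 1.0:
        b["tokens"] -= 1.0
        return True
    return False  # over the limit; the request should be rejected

# e.g. check allow_view(request_ip) before serving each thread page
```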
 
Last edited:
  • Informative
Reactions: -Link-
