Because I had nothing better to do, I had a brief look at the paper. There were some interesting highlights:
To study Suicidal thought and behavior (STB) as it manifests within a modern modality of communication, free from the limitations of censorship found on mainstream platforms, this work leveraged data from an unconventional and unique community—the pro-choice forum, "Sanctioned Suicide."
...while an online community like Sanctioned Suicide has the capacity to offer novel insight into the nature of STB, it is not entirely representative of other modern communication platforms with much stricter content policies and mechanisms of censorship (e.g., Instagram or Reddit subforums for suicidal ideation).
They chose to focus on SaSu because there's no social filter here that would taint their data. The implication of this (at least as I understand it) is that their model only really works on SaSu and its users, so what they've built won't transfer to any other social platform without further research.
STB is pervasive across the World Wide Web, and there is a concerted effort among suicidology and data science researchers to understand, detect, and prevent it.
Pairing these network-based features with other proven digital markers of STB risk may improve data-driven suicide prevention efforts.
It seems their intentions center on prevention, though they don't explicitly do or recommend anything about how to go about implementing that. I don't think their work is malicious on its own, but it could be harmful depending on how it's used.
Upholding a pro-choice, censorship-free philosophy, Sanctioned Suicide has deservedly come under fire due to public outcry regarding suicides that were believed to have been facilitated through the content hosted within its virtual walls. The forum's content covers a variety of topics and themes, with the most concerning in regards to the solicitation and sharing of suicide methods-related advice. Although this obviates any potentially positive impact this community has on its highly vulnerable members, it is still important to note that individuals come to Sanctioned Suicide for different reasons. While some individuals seek aid in carrying out their plans to end their own lives, others join the community to be heard, understood, and validated due to thoughts, feelings, and opinions that make them pariahs in their day-to-day lives. This allows them to achieve a sense of belonging. Therefore, these users do not necessarily represent imminent attempters as they interact with others, share life histories, and bond through a mutual understanding of ideology.
They're very critical in their choice of wording: "deservedly come under fire", "this obviates any potentially positive impact this community has". They do end by acknowledging that there's a sense of community, but they still seem rather dismissive of it. I think they only added this to say that not everyone who joins SaSu is immediately interested in CTB.
the results of the current analysis suggest that individuals who are involved with more highly connected users, interact directly or indirectly with a larger number of users, post in a larger array of threads, or are more central receivers of information/attention are less likely to be High Risk Users (HRUs) than their peers. Conversely, the association and involvement with smaller cliques of users or interaction with a less connected or smaller number of users was shown by the model's prediction tendencies to be HRU signatures.
Ironically, later on, they seem to believe that the more involved you are in the SaSu community, the less likely you are to CTB (I might be misinterpreting this one), which kinda contradicts the earlier quote. It also somewhat undermines the paper, since they first say that some users come for the community aspect (rather than intending to CTB), but then proceed to find that those involved in the community don't often CTB (of course you'd discover this).
Importantly, no account creation or interaction with users on Sanctioned Suicide was carried out by the researchers of this work.
Figured I'd just add this one for curiosity's sake.
I admit I cherry-picked the above quotes, but I still think it's a strange paper. To prevent data collection, I'd go for a rate limit on the number of threads a user can access in a given time frame. A CAPTCHA is normally used on registration or login, not on viewing threads, so I'm not sure how well that would work here. Anyway, figured I'd share this in case someone finds it interesting. I also see that some posts were made while I was writing this, but I'm too lazy to change it now.
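For what it's worth, the rate-limit idea I mean is a per-user sliding window over thread views. This is just a minimal sketch of that idea; the class name, limits, and `allow` method are all hypothetical, not from the paper or any real forum software:

```python
import time
from collections import defaultdict, deque

class ThreadViewLimiter:
    """Hypothetical per-user sliding-window limit on thread views."""

    def __init__(self, max_views, window_seconds):
        self.max_views = max_views
        self.window = window_seconds
        # Maps each user id to timestamps of their recent thread views.
        self.views = defaultdict(deque)

    def allow(self, user_id, now=None):
        """Return True if the user may view another thread right now."""
        now = time.monotonic() if now is None else now
        q = self.views[user_id]
        # Drop timestamps that have fallen outside the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_views:
            return False  # over the limit within the window: throttle
        q.append(now)
        return True
```

A scraper pulling thousands of threads from one account would trip this quickly, but since the limit is per user, it can be sidestepped by spreading requests across multiple accounts or IPs, which is the usual weakness of this approach.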
---
Small Edit:
They do caveat the part I mentioned about how the paper could be invalidated with the following:
The unique moral duality of Sanctioned Suicide—existing simultaneously as both an unconventional "therapeutic" resource that discourages the act of suicide and a place to obtain all information necessary to successfully attempt it—makes it difficult to hypothesize
I definitely could've read the paper a little more thoroughly.
Also, in hindsight, a rate limit might not work, since they could just run multiple crawlers. So that might not be the greatest idea.