A student’s app ignites a fierce debate over anonymity and freedom of speech.
Sitting on my parents’ orange couch while scrolling through Librex one afternoon, I came upon a post titled “African-American and Ethnicity studies are bullshit.” Tapping on this title, I opened the post, which described how these fields of study are “designed to lure unsuspecting poc…. by majoring in these, students gain no practical skills” and become trapped in “the cycle of poverty.”
The post had garnered a significant number of upvotes that day, so Librex—an anonymous discussion forum exclusively for Ivy League students à la Reddit or Yik Yak—brought it to the top of my feed. Responses ranged from firm endorsements—“This is a take I can firmly agree with.”—to dismissive opposition—“Nah you can find a great job no matter what you major in.”
The mission of Librex (“‘Libr’ for libre [as in free], ‘ex’ for exchange”) is to “democratize college discourse and create a space for honest open discussion about what’s going on in your campus community,” per the app’s website. Ryan Schiller ’21, the founder of Librex and currently its only full-time employee, released his product on iOS in September 2019. Today, the app boasts around 11,000 registered users, 3,200 of whom are active daily, spending over an hour on average perusing posts and messaging others (though I recently came upon a post confessing a five-hour Librex binge). An Ivy League .edu email address is required to log on to the platform, and once inside, the user is greeted by an infinite scroll of posts.
In an effort to understand the motives behind controversial content on the platform, I matched with the student who wrote the post. On the app, users can send private messages to the author of a specific post or comment.
Identified only by the pill-shaped school badge the Librex UI assigns each user, the Harvard student explained, “So my post is what we call ‘bait’ here on Librex. It’s definitely not my true thoughts. I’m just happy a lot of people have taken the bait. Although, I’m shocked some people support this.” Eager to incite a reaction, stir up the comments section, and amass attention, people bait all across the internet—this isn’t by any stretch a feature or quirk of Librex. But I was shocked by how shamelessly this person owned up to it.
“So you don’t actually believe any of this stuff?” I asked to clarify.
“No I think it’s dumb. I’m black and double major in African American studies. Just thought it’d be interesting to see what people thought. I also don’t argue with the comments. I’m more interested in seeing how it plays out.” Reluctant to disclose even their name’s first initial for fear of being identified, the user vanished back into the app’s anonymous mist after just a few more questions.
Taking this Harvard student at their word, the post would fall in the category of well-intentioned baiting. But they just as easily could have lied to me, aiming to be a hurtful troll the whole time. In the end, does the intention even matter when everything is anonymous?
On any given day, Librexers—this is what they call themselves—discuss politics (“Renaming the past”), imposter syndrome (“How is everyone so put together and so far ahead already???”), and sex (“I’ve never had an orgasm”). They express their worries (“My dad is in the operating room rn”) and ask critically important logistical questions (“Is toads closed forever?”), but amid the banal lies a world that much more closely resembles the digital fringes. Schiller acknowledges that “any time there is anonymous discourse among thousands of people, there will be users who don’t contribute positively to the discussion.” But how does Librex decide what stays and what goes? What is useful, what is hurtful, what is truthful? And who makes the decisions?
In a February 2020 interview with the Yale Herald, Schiller explained that he felt “isolated and alienated” after his sophomore year, often wondering if others would judge him for ideas brought up in class. He wanted to give students a space where they could air sensitive thoughts without the fear of being called out.
Today, Schiller seems much more comfortable expressing his views on controversial topics. He casually revealed to me his support for the state of Israel (“I’m a big religious Jew. I’m also a big Zionist”), a stance he surely knows could be unpopular on Yale’s left-leaning campus.
But for those who can’t sit so openly and confidently with their ideas, Schiller brought forth a solution. A Math and Global Affairs double major without any coding skills, he googled his way through every question over the summer of 2019—“how do you make an iOS app,” “how do you get it on the Apple Store”—and ended up with a basic framework for the application. Along the way, he temporarily sought others to help him with development, outreach, and design.
The app’s user base stayed small for a while, with around 100 daily active users in February 2020, but steadily grew as Schiller expanded piece by piece to the rest of the Ivy League—from Yale he moved to Dartmouth, then came the rest.
As COVID-19 hit, kicking students off campus and sending them home, Librex blew up. The platform teemed with discussion about how schools would respond to the virus, when or even if students could go back to campus, and what Zoom classes would look like. A contentious point of debate on Librex at this time was about whether or not Yale should enact a universal pass (UP) policy to accommodate the immense range of learning environments at home. Over the phone, Schiller praised the platform for providing a place for “people [who] didn’t feel like they could speak on their opinions [on UP]” publicly. But this open debate, though occasionally productive, also included derisive digs. After Yale converted all classes to pass/fail, a post appeared on Librex and subsequently made the rounds on Facebook and Twitter that simply read “congrats poor people.”
In late May the Twitter account @LibrecksApp emerged. The account owner reposts screenshots from the platform, mocking Librex’s lax content moderation and circulating its most offensive or off-color posts. At the same time, content moderation entered the national discourse when, on May 26th, Twitter fact-checked the President for the first time by placing a label below his erroneous claims that mail-in ballots lead to voter fraud. In the following weeks, Twitter continued to place warnings on the President’s tweets, attaching a notice of violation of Twitter’s content policy for “glorifying violence” over his incendiary re-invocation of the phrase “when the looting starts the shooting starts.” The question arose: where does a social media platform end and a publication begin?
It’s a debate that may never be resolved. Mark Zuckerberg (Facebook), Jack Dorsey (Twitter), Susan Wojcicki (YouTube), and Steve Huffman (Reddit) are constantly weighing decisions on this question, and some have taken more drastic steps than others. In late June, Huffman banned over 2,000 subreddits including the massive Trump-focused thread “The_Donald” in accordance with a new content policy of no hate on Reddit, whereas Zuckerberg has favored inaction, stating that Facebook won’t be “arbiters of truth.” Many Librexers expressed frustration to me about large social media companies. One anonymous user told me, “I used to love Reddit, but then they caved to the mob,” and has since moved to Librex for its anonymity and looser moderation policies.
Librex is moderated according to three simple rules:
“Be legal: No spam, threats, or darknet type stuff;” “Be a mensch: No targeting individual students or professors (excluding public figures);” and “Be specific: No sweeping statements about core identity groups.” If a post breaks the rules, it’s removed; for repeat offenses or an especially egregious transgression, the user is banned. In theory, it’s a simple rubric.
But stark lines quickly become blurry, black and white turning to a muted grey—a post rarely fits neatly into this scant set of guidelines. This is where the moderation gets fun, S. told me.
S., a junior at Princeton, has been a Librex moderator since April. He entered the volunteer role after responding to a Librex posting by Schiller advertising open moderator positions (posts made by Schiller and other affiliates of Librex are signed with a gaudy purple script font “– The Librex Team”). S. interviewed with Schiller via the matching feature—anonymously, of course—and was accepted shortly thereafter.
Speaking to me over the phone, against the background sounds of his sister playing “Für Elise” on the piano, S. told me that when a post is reported, it is brought to the moderation team for debate. In a Facebook group chat with twenty-three other volunteer moderators from across the Ivy League, S. argues over whether a reported post should remain up. The two dozen mods—who, Schiller told me, represent a diversity of socioeconomic status, country of origin, political viewpoint, and race (over half are PoC)—can debate for hours whether or not a post is against the rules.
If there’s too much disagreement in the text chat, the moderators will have lengthy video calls. Schiller explained to me that the team has built a sort of case law, whereby they compare reported posts with past decisions in an effort to maintain a consistent moderation scheme. And when Schiller recruits new moderators, he sometimes gives them sample moderation questions to see how they respond.
A favorite of his, he told me, is whether or not you remove a post claiming that “all men are trash”—a question that Facebook used when they were first creating moderation standards. “There’s no right answer to that necessarily,” he told me, “although I think there are better answers and worse answers.”
The founder said that he would likely message the user to let them know that the proclamation is not the kind of discourse expected on Librex and the post would likely be removed “because it’s a sweeping statement against a core identity group.”
Schiller often benchmarks his moderation practice against Facebook’s: in another instance, he justified that a Holocaust-denial post should remain up by saying “even Facebook allows it.” A disgruntled moderator quipped: “ah yes let’s take our moral cues from daddy zuck.”
The whole operation has an air of professionalism to it until you look at the moderation chat itself. In a series of screenshots leaked by ex-moderators, we get a glimpse of the post evaluation process.
Before even examining the substance of the chat, another detail jumped out at me: the pseudonyms the moderators had given each other. There is, for example, “Grandmaster’s Padawan” (S.’s nickname), “Grandmaster Big Brain,” “EtErNaL🍆,” and “Daddy,” which is, unsurprisingly, Schiller. The nicknames are transient, continually edited and tweaked by the moderators: at one point Schiller becomes “papi.” “The moderators can be silly sometimes,” he told me.
The nicknames also hint at yet another layer of anonymity, perhaps spurred by the worry that their identities will be revealed. After screenshots of the moderator chat were first leaked to Twitter in late May by an ex-moderator from Dartmouth, there was a palpable fear in the chat. After Schiller informed the moderators of the leak, S. writes, “Holy [new message] Shit [new message] @Daddy did I get cancelled.” Schiller (Daddy) replies, “I think ur ok for now.” A few messages later S.’s fear returns: “I’m gonna get cancelled.”
The content moderation chat brings to light the debates of what should be allowed to exist on the internet, what is considered hate speech, what is said satirically versus seriously. These questions are asked behind closed doors, which Schiller explained to me is because “these are very sensitive issues and oftentimes issues that involve identity and involve how we see the world. And it’s important that our moderators feel like they are safe to actually speak what they think is like [sic] moral.”
The moderator chat, as Schiller described it to me, is a discourse unencumbered by judgement, a place where moderators feel comfortable to voice their true opinions while discussing the complicated politics of post removal.
But an ex-moderator from Dartmouth, who left Librex over frustrations with the culture of the company, said the chat is full of slights and disrespect. In a message they told me, “i started to get really tired of the job when there was a conversation in the mod group chat about racist content and i asked if we should leave up racist posts even if they technically lie outside of the community guidelines and ryan said yes”—a discouraging instance, given that they became a moderator believing they could be the one to “change the content for the better & reduce the harassment/hate speech.”
The ex-moderator expressed anger over an instance in which their moderation decisions were overridden by Schiller. In late May, during elections for the Dartmouth Student Assembly, candidate María Teresa Hidalgo ’22 and her running mate Olivia Audsley ’21 were continually harassed anonymously on Librex. The Dartmouth reported they were compared to fascists and that one post “threatened to call Immigration and Customs Enforcement on Hidalgo… despite her own American citizenship.”
After Librex announced that Hidalgo and Audsley did not qualify as public figures and therefore could not be targeted by name on the platform—a decision which took several days while the offending statements festered on the platform—the ex-moderator began work on deleting the hurtful posts: “i started taking everything down (very relieved) but [Ryan] put back a ton of posts where only their initials were used.”
After being called “oversensitive for telling someone why the redskins logo was bad and promptly made fun of for defending [themselves],” they finally resigned as a moderator. “it was just like. why do i bother.”
In another instance in the moderator chat, the group discusses a post denying the Armenian genocide. Grandmaster Big Brain takes a hard-line stance: “it’s not sexist, doesn’t attack a class, just reflects the poor state of historical education.”
Another moderator responds, “it’s like… denying genocide.”
Big Brain retorts, “Yes, but it doesn’t go against the rules [new message] Trust me, I find it disgusting—but at the same time we are not policing for facts.”
Schiller affirmed this belief to me over email. I asked how Librex deals with misinformation and he replied, “In general, the Yale community is good at pointing out misinformation through comments and voting.” He believes in minimal regulation: falsehoods should remain on the platform and be downvoted to the bottom of the feed rather than moderated away.
Schiller’s platform allows Ivy League students to explore ideas and learn from each other, as many users told me. Of the users who matched with me, almost everyone loved the app (selection bias, of course), and most accepted the trolling as a necessary inconvenience.
J., a Columbia student who quickly matched with my post soliciting user interviews, explained that he’d actually learned a lot from Librex and had his beliefs challenged and changed. He told me that as a Chinese-American, he often felt resentment towards the ways affirmative action disadvantaged Asian-American college applicants, until he had some conversations on Librex that helped him build a more nuanced understanding of the lasting effects of slavery and prejudice against Black people. “[Librex] helped me be more receptive and less apathetic and self-interested than I was before.”
Schiller’s quest to give students a platform for free speech and debate has certainly been realized: speech is indeed free, debate plentiful. But how far “free” should go, and whether the outcome of that decision is good, is another question entirely. Although access to anonymous speech on Librex is spread equally across the Ivy League, the bigoted posts enabled by that anonymity tend to target certain groups more heavily: Black students, overweight students, undocumented students, and transgender students, among other minorities.
One critic of Librex, Anyoko Sewavi (Dartmouth ’23), who posted a YouTube video in July reacting to racist Librex comments from other Dartmouth students, told me that she initially witnessed productive discussions and believed anonymity was a “good concept.” But “then came the trolls.” Sewavi, who is Black, told me that “seeing that Dartmouth is the main school writing racist comments is just unnerving, it’s uncomfortable, and honestly it’s just exhausting to see…. I wouldn’t know if I walked across campus and someone felt this strongly about me being Black.”
Our phone call finished on an optimistic note, with Sewavi explaining to me that the relatively small size of Librex enables opportunity for systemic change on the platform, and perhaps social media as a whole: “If they take trolling seriously, and there’s actual consequences for these comments and actions, then maybe in the future it will snowball and set an example for other anonymous social media apps.”
Others have also spoken out against the app, such as the actress Skai Jackson, who posted screenshots of Librex posts on Twitter with the caption “This app is called Librex, so sad people are saying disgusting things on here…” The attached posts claimed “Fellow racists. I have a plan to increase racism” and “Black people need to learn grammer [sic].” It’s hard to know if the posts were eventually removed.
Librex as a platform does not create the cesspools of flippant, divisive callouts that sometimes fill its users’ screens. Rather, the anonymity gives these ideas a convenient breeding ground. The platform is a magnifying glass on the internet—a tangible diorama of the same behavior that permeates the bigger social sites. I recently came upon a post with 22 upvotes confessing: “For a forum filled with Ivy League kids, the content here is remarkably similar in quality to Reddit and 4Chan.” Yet this behavior feels much closer to home when we see it on Librex. We know that the content that so often ends up being debated in the moderation chat is made by the people who live around us, who attend class with us.
On Librex, offensive posts appear often—posts about why undocumented people should just move to Canada, or how the n-word “triggers libtards.” But despite the occasional moderation blunders, my experience browsing the app has gradually included less and less brazenly insulting content since I downloaded it in April. Perhaps the moderation team has honed its technique in an attempt to balance strict free speech against the app’s role as a useful and good platform. But the internet’s propensity to breed bigots and fuel flame wars, intensified by Librex’s facelessness, may be insurmountable.
Schiller conceded to me that any online community has its bad apples—a statement that felt a bit like blaming the water, rather than the holes in the hull, for a ship’s sinking. But when a leak does emerge and trolls inundate the platform, anonymity protects and emboldens some while others bear the brunt of unchecked speech.