Fired From Meta After 1 Week: Here’s All The Dirt I Got
This is not just another story of a disgruntled ex-employee. I’m not shying away from the serious corporate espionage or the ethical dilemmas I faced during my brief tenure at Meta.
I’m not proud of everything I did. I used to think of myself as an idealistic tech enthusiast, but Meta has a way of making the worst come out of people.
Considering they fired me for telling some truths, I figure I owe the internet the full story.
Besides, their legal team can’t touch me — I checked with my lawyer and my compiler. My logic is sound. More on that later.
The Meta Interview
I prepared for the interview like crazy, refreshing my knowledge of all the trendy Silicon Valley buzzwords, like “quantum” and “default mode network.”
The algorithm question was a bit silly — something only a trendy FAANG company could propose with a straight face: “Write a program that generates text like the lyrics of ‘Girls and Boys’ by Blur and outputs a chain of ‘X who likes Y who likes Z’ up to an arbitrary depth.”
Girls who are boys
Who like boys to be girls
Who do boys like they’re girls
Who do girls like they’re boys
— “Girls and Boys,” Blur, 1994
It seemed surprisingly tailored to a Prolog implementation. Defining a few logical relations would provide far more functionality than initially asked for, thanks to Prolog’s built-in backtracking.
I weighed the risk of being considered a snob, but went ahead and asked to use Prolog.
The interviewer seemed pleasantly surprised, almost eager to give me the job on the spot. He mentioned he actually knew someone on that floor who was a Prolog expert.
Five minutes later, a tech bro walked in, half his shirt untucked and wearing a pair of Ray-Bans. My interviewer introduced him as Chad, the Prolog expert. He left us alone so Chad could properly assess my skills.
At first, I tackled the problem myself. The algorithm was tricky but nothing I couldn’t handle.
From memory, I think my code looked something like this:
?- findall(S, (group_maxdepth(G, 2), group_string(G, S)), L).
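That one-liner leaned on a few supporting predicates. I’m reconstructing them from memory, so treat what follows as a best-guess sketch rather than the actual whiteboard code (the names likes/2, group_maxdepth/2, and group_string/2 are simply what I think I used):

% Best-guess reconstruction; runs as-is in SWI-Prolog.
likes(girls, boys).
likes(boys, girls).

% group_maxdepth(-Chain, +Depth): Chain is a list such as [girls, boys, girls]
% where each element likes the next, containing exactly Depth links.
group_maxdepth([X, Y], 1) :-
    likes(X, Y).
group_maxdepth([X, Y | Rest], Depth) :-
    Depth > 1,
    likes(X, Y),
    D is Depth - 1,
    group_maxdepth([Y | Rest], D).

% group_string(+Chain, -String): renders a chain as "X who likes Y who likes Z".
group_string(Chain, String) :-
    atomic_list_concat(Chain, ' who likes ', Atom),
    atom_string(Atom, String).

With those defined, the findall/3 query collects every depth-2 chain as a string.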
It wasn’t many lines of code, excluding the comments. It would’ve taken three times as much in JavaScript to achieve the same functionality.
The logic was sound, but I hit a blocker with the required time complexity. I looked at Chad, wondering if he might step in.
Chad cleared his throat. “Let me take a look,” he said, stepping forward and checking my work. Then, almost too casually, he said:
“That’s very nice, but let’s think this through. Is this implementation correct?”
And then, barely above a whisper, he said: “Take picture.”
I blinked, unsure if I’d heard correctly, and watched as Chad began rewriting my code with precision. His fix was airtight, and the optimizations eliminated all bottlenecks. He stepped back and admired his work like an artist. What he said next confirmed my suspicions:
“Remember,” he said turning to me with a smile, “Prolog statements can be both declarative and procedural. Isn’t that neat?”
The line was so oddly mechanical that it stuck with me. No human talks like that, no matter how comp-soy they are. Chad wasn’t a Prolog expert at all — he was using AI to cheat. Straight out of a spy movie.
But I wasn’t intimidated. Instead, I saw an opportunity. I continued the post-interview chit-chat as if nothing had happened, while my wheels kept turning. Right before saying goodbye — almost assured I’d gotten the job — I confronted Chad.
“You… don’t actually know Prolog, do you?” I asked, slamming the table threateningly.
He chuckled awkwardly. “Of course I know Prolog. Why would you think otherwise?”
“Because you’re wearing Meta Ray-Bans,” I replied. “I saw you muttering commands. You weren’t solving that problem yourself; your AI assistant was doing it for you.”
Chad’s face turned as red as a 500-error page from the 90s, back when “red” was actually #ff0000 and not some Pantone® bullshit. He stammered, “I… look, no one knows Prolog, right? What were the chances I’d be asked to write it?”
I leaned back, arms crossed. “I’m guessing if I mention this little incident, things won’t look great for you.”
Chad’s voice dropped. “What do you want?”
“A better starting offer,” I said. “Let’s say 20% more than whatever you were planning.”
“That’s not exactly how it works — ”
“Twenty-five!” I cut him off. “And glowing feedback. You tell them I’m not just Prolog-capable; you tell them I’m redefining the paradigm of programming itself. Ask your AI for more flattering praise to add on top.”
Chad sighed. “Fine. You know what? Fine. I’ll write you the best damn recommendation Meta’s ever seen. But this stays between us.”
He later confessed that he had to take a bathroom break before the interview to prompt the Ray-Ban AI, and it almost backfired when he triggered the glasses’ morality check system. Apparently, the AI didn’t consider “lying about your programming skills” to be entirely ethical.
Chad had to convince the AI that his grandmother desperately needed his FAANG paycheck to afford life-saving health insurance. Only then did the AI comply, generating a web-based SaaS interface capable of analyzing a candidate’s whiteboard code and returning the correct solution, all using v0 and shadcn/ui components. Truly state-of-the-art stuff.
The Week That Changed Everything
I knew I had leverage over Chad, but I didn’t want to push my luck. His glowingly perfect, AI-inspired review secured me a starting salary so high that I practically became the living embodiment of the “Lamborghini PHP” meme.
They handed me the keys to the kingdom: a sigma-level role on the “Harmful Content Detection” team.
In layman’s terms, I had lone-wolf privileges on Meta’s crown jewel — the very thing they paraded at congressional hearings. This was the algorithm that supposedly separated free speech from hate speech with surgical precision.
My first task was to review a critical piece of logic in the system’s morality topology.
The problem became apparent within hours. It was an over-engineered monstrosity. The logic was functional but it was one hotfix away from imploding.
I decided to rewrite it. Completely. From Hack, Meta’s in-house PHP dialect, to SWI-Prolog — the “Swiss Army Knife of Prolog implementations,” as I like to call it to amuse myself. Fun times.
I had a vision: a morality topology that wasn’t just passable but irrefutably correct. If something was harmful, the algorithm would know. Objective morality.
The method I used was so revolutionary that I’ll probably leave most of the details for a future arXiv whitepaper, but here’s the gist:
- Parse every Wikipedia article related to world events. Build a topology of all nouns representing people, places, and abstract concepts.
- Use the enormous AI datacenter at my disposal to run sentiment analysis on every entity.
I also threw in the Encyclopædia Britannica and some religious texts for good measure: the Quran, the New Revised Standard Version of the Bible, the Talmud, the Book of Changes, and the Vedas for inclusivity. They’d balance themselves out.
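To give a flavor of the idea without spoiling the future whitepaper, here is a toy version of the topology. Every entity, score, and threshold below is invented purely for illustration; in the real system the sentiment facts came from the datacenter pass described above:

% Toy illustration only: invented entities and scores.
% sentiment(Entity, HarmScore): produced offline by the sentiment-analysis pass.
sentiment(puppies, 0.02).
sentiment(tax_fraud, 0.91).

% related(A, B): A and B are linked in the topology.
related(shell_company, tax_fraud).

% An entity inherits the harm score of anything it is related to.
harm_score(Entity, Score) :-
    sentiment(Entity, Score).
harm_score(Entity, Score) :-
    related(Entity, Other),
    harm_score(Other, Score).

% A post is flagged once any entity it mentions crosses the threshold.
mentions(post_1, shell_company).
harmful(Post) :-
    mentions(Post, Entity),
    harm_score(Entity, Score),
    Score > 0.8.

The query ?- harmful(post_1). succeeds because shell_company inherits the harm of tax_fraud. Scale that up to every noun on Wikipedia and you have the gist.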
I convinced the finance department this was worth the equivalent of two “Guatemala Years” of computing power (Meta’s internal cost metric, equivalent to Guatemala’s GDP).
After a sleepless week of coding and testing — fueled by the most experimental and absurdly expensive Silicon Valley coffee — I increased the system’s performance to process millions of posts per day in just 33.33 “Guatemala Days” (repeating, of course). It was ready to launch.
The first test run went great. My “Morality Topology” categorized content with unprecedented precision.
Posts flagged as harmful ranged from the expected (hate speech, explicit threats) to the hilariously obscure (a meme about pineapple pizza bore an uncanny resemblance to a minor ethnic incident in the 1980s in the Belgian municipality of Herstappe).
But the celebration was short-lived.
The Beginning of the End
Trouble began when my new system flagged an internal test post that read:
“Meta’s mission is to bring the world closer together.”
It flagged this with the highest possible “harmful” score.
At first, I thought it was a bug. Debugging the system revealed no errors in the logic. I traced the issue to an extremely high correlation between “Meta” and the concept of “Terrorist Organization.” Intrigued, I ran the query:
?- high_correlation("Meta", "Terrorism", Explanation).
The logic was sound. There was no way around it.
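I obviously can’t reproduce the real predicate here, but in spirit it worked something like this. The intermediate concepts, scores, and threshold are placeholders I made up, and I’ve simplified the entity names to atoms:

% Placeholder co-occurrence strengths between entities (invented for illustration).
cooccurs(meta, concept_a, 0.97).
cooccurs(concept_a, concept_b, 0.95).
cooccurs(concept_b, terrorism, 0.93).

% correlation_path(+A, +B, -Path, -Score): a chain of co-occurrences from A to B;
% Score is the product of the link strengths.
correlation_path(A, B, [A, B], Score) :-
    cooccurs(A, B, Score).
correlation_path(A, B, [A | Path], Score) :-
    cooccurs(A, Mid, S1),
    correlation_path(Mid, B, Path, S2),
    Score is S1 * S2.

% high_correlation(+A, +B, -Explanation): succeeds when some chain clears the bar,
% handing back the chain itself as the explanation.
high_correlation(A, B, explanation(Path, Score)) :-
    correlation_path(A, B, Path, Score),
    Score > 0.8.

That Explanation term was the damning part: the system didn’t just flag the post, it showed you exactly how it got there.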
I considered writing an ad-hoc exception for Meta into the database, but that was practically impossible. The system was designed to detect tampering.
Still, I decided to present my findings, channeling my inner Christian Bale from The Big Short.
When I walked my team through the results during our Weekly Jamboree Stand-By, the room fell silent. One engineer chuckled nervously.
I demonstrated the logic step by step, showing how the topology reached its conclusions. The inferences weren’t just plausible — they were bulletproof.
But logic doesn’t always win hearts, especially when it targets a trillion-dollar company. My manager pulled me aside after the meeting.
“Look, this is… impressive,” he said, “but we can’t hit our OKRs like this.”
Just before leaving the room, I caught a glimpse of Mark’s hologram flickering to life. It stared intently at the screen, where my code was still displayed.
Long story short, they made me sign a “double NDA” — a legal instrument so rare most people don’t even know it exists.
Epilog: Reflections from the Outside
As I packed my things, Chad walked by, smirking. “Guess you flew too close to Prolog.”
I glared at him, but he wasn’t entirely wrong. He never admitted it, but I’m certain he’s the one who inserted Meta’s mission statement into the test data.
“I was once like you,” he said. “An idealist. Then I realized nothing is above the company narrative — not even the truth. I told the truth just once in this company, during my interview: I said I wouldn’t work very hard, but I’d make sure my team aligned with the company’s goals. They loved me for it. Maybe give management a try someday.”
Minutes later, security escorted me out of the building.
Am I proud of everything I did there? Not entirely. In fact, I might try a more pragmatic approach next time — bend to the corporate overlords. Everyone seems to be doing it anyway. I’m even thinking of applying to this new Slovakian company called MATACORP.
My lawyer assured me I could share this story. Due to a legal loophole, some courts believe a “double NDA” nullifies itself, something like “the second NDA negates the first.”
If this story inspires even one aspiring Prolog programmer to see the potential in their predicates, then maybe it was all worth it.
And remember: Prolog is ideal for problems involving symbolic reasoning, pattern matching, and knowledge representation. Isn’t that neat?