
Algorithms, silence, and invisible videos

[Infographic: a dark 16:9 image with a circular maze and padlock at the center, titled "OPAQUE ALGORITHMS." Surrounding sections connect to it: "Blocked Videos" (vague "community guidelines violation" notices, video not delivered), "Sensational Content" (fake news, gossip, and controversy as what drives engagement), "Hiring Process" (generic feedback, silence, or vague responses), and "Portals & Influencers" (large creators, high immunity). Three tags at the bottom read "Scale and Cost," "Bias and Opacity," and "Power and Profit."]

I grew up on the internet. Not just as a user, but as someone who decided to produce content on platforms like TikTok Shop and YouTube Shorts. At the same time, I earned a degree in Data Science. That combination changes the way I see the game.


When you understand metrics like distribution, retention, CTR, completion rate, and A/B testing, naiveté ends. You start to notice patterns. And you start to find certain things strange.


One of them is simple and bothersome: why are we encouraged to post videos every day, but frequently receive vague notifications that "the video violates community guidelines," without a clear explanation? Why, upon appeal, are many videos restored, but some remain "undelivered"? And why do large portals and influencers seem immune to this type of limitation?


Over time, I came to associate this with another common experience: selection processes with generic answers or absolute silence.


It's not just a coincidence. There's a structural logic behind it.


The daily pressure for production


The myth of infinite consistency


If you've ever created content on TikTok or YouTube, you've probably heard the same recommendation: post every day.


The promise is clear: consistency generates growth. The algorithm rewards frequency. Those who publish more get more visibility.


But this narrative hides a tension.


As a data scientist, I know that platforms optimize for retention and screen time. They want people to spend more time in the app. This means the algorithm isn't a "neutral judge." It's an optimization system.



Optimization systems choose what maximizes an objective function.


In the case of these platforms, this function is usually something like:


  • Display time

  • Immediate engagement

  • Shares

  • Return to the app


None of that is necessarily quality or truth. It's efficiency.
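The idea can be sketched as a toy scoring function. Everything here is hypothetical: the feature names, weights, and numbers are invented for illustration, not taken from any real ranking system.

```python
# Hypothetical sketch of an engagement-style objective function.
# Feature names and weights are invented for illustration; real
# ranking systems are far more complex and proprietary.

def engagement_score(video: dict) -> float:
    """Combine the signals above into a single number to maximize."""
    weights = {
        "watch_time_sec": 1.0,        # display time
        "immediate_engagement": 5.0,  # likes/comments shortly after upload
        "shares": 10.0,               # shares push content to new users
        "return_visits": 20.0,        # did the viewer come back to the app?
    }
    return sum(w * video.get(k, 0) for k, w in weights.items())

# A short polarizing clip can outrank a longer, neutral one.
neutral = {"watch_time_sec": 300, "immediate_engagement": 2,
           "shares": 1, "return_visits": 0}
polarizing = {"watch_time_sec": 90, "immediate_engagement": 40,
              "shares": 25, "return_visits": 3}
assert engagement_score(polarizing) > engagement_score(neutral)
```

Nothing in a function like this rewards accuracy or depth. Whatever maximizes the weighted sum wins, which is the whole point.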


What really engages?


It's nothing new for those who work with data: content that evokes strong emotions performs better.


Outrage. Curiosity. Fear. Gossip. Sensationalism.


Fake news too.


This isn't an opinion. It's human behavior observed on a large scale. When you measure share rates, retention peaks, comments per thousand views, you realize that neutral and balanced content rarely wins attention battles against polarizing content.


So I wonder: if the system is optimized to maximize engagement, why am I surprised when technical, critical, or reflective videos have limited reach?


Perhaps the mistake lies in the expectation.


Vague notification and algorithmic silence


"Your video violates community guidelines."


Anyone who creates content has already received this message. The text is almost always generic. There is no clarity about which passage, which word, which framing was problematic.

You appeal.


A few days later, you receive another notification: "After review, your video does not violate our guidelines."


Great. But the damage has already been done. The video missed its timing. Delivery dropped. Reach cannot be recovered.


In other cases, even without a confirmed violation, the video continues to have limited distribution. It is not removed. It is just not delivered.


For those who understand systems, this raises some hypotheses.


Moderation at scale and operational cost


Platforms like Meta, TikTok, and YouTube handle billions of uploads.


It's not possible to review everything manually. Therefore, a large part of the moderation is automated.


Machine learning models classify content based on the probability of a violation. It's not a perfect binary decision. It's statistical.


If the probability exceeds a certain threshold, the video may:


  • Be removed

  • Have its reach reduced

  • Be sent for manual review


Here's an important technical point: false positives.


In any classification system, there is a trade-off between precision and recall. If you want to catch as much problematic content as possible, you increase the model's sensitivity. That generates more false positives.


In other words, content that does not violate any rules can be penalized preventively.
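This dynamic can be shown with a toy threshold-based moderator. The scores, labels, and the "reduced reach" band below are all made up; the only real point is that a more cautious threshold catches more violations while also penalizing more innocent videos.

```python
# Toy model of threshold-based moderation. All numbers are invented;
# the point is the precision/recall trade-off, not any real system.

def moderate(violation_prob: float, threshold: float) -> str:
    """Route a video based on a model's estimated violation probability."""
    if violation_prob >= threshold:
        return "removed"
    if violation_prob >= threshold - 0.2:
        return "reduced_reach"  # the quiet "not delivered" zone
    return "delivered"

# (model_score, actually_violates) for four hypothetical videos
videos = [(0.95, True), (0.70, False), (0.55, False), (0.10, False)]

def false_positives(threshold: float) -> int:
    """Innocent videos that were removed or throttled at this threshold."""
    return sum(1 for score, violates in videos
               if moderate(score, threshold) != "delivered" and not violates)

# Lowering the threshold (being more cautious) penalizes more
# content that violates no rule at all.
assert false_positives(0.9) < false_positives(0.6)
```

From the platform's side, those extra false positives are a rounding error. From the creator's side, each one is a video that silently died.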

From the company's point of view, this is acceptable. From the creator's point of view, it's frustrating.


Strategic opacity


But there is another element besides technical limitations: strategy.


Full transparency would allow creators to "play against the system" with more precision. If I knew exactly which words trigger blocks, I could circumvent that without altering the essence of the content.


Platforms deliberately avoid this level of clarity.


Opacity is not just a flaw. It's a tool for control.



The parallel with selection processes


This experience reminds me a lot of job interviews.


You participate in a process. You send your resume. You take a technical test. You talk to HR.


Speak to the manager.


Silence.


Or you might receive a standard message: "We've chosen to proceed with another candidate whose profile better matches the role."


What profile? In what aspect? Experience? Communication? Cultural fit?


Nothing is explained.


The bias of first impressions


In Data Science and Organizational Psychology, we know that human decisions are profoundly affected by first impression bias.


In the first few minutes of conversation, the interviewer forms a hypothesis. The rest of the interview often serves to confirm that hypothesis.


Then, the justification sent to the candidate is generic. Not because there isn't a reason, but because providing details involves legal risk, operational effort, and exposure.


Silence is safer.


Just like on the platforms.


Scale and standardization


Large companies receive thousands of resumes. Platforms receive millions of videos.


In both cases, customization comes at a high cost.


Then standard responses, automated decisions, and pre-filtered options emerge.


It's not personal. It's about scale.


But the psychological effect is personal. Very much so.


You start to doubt your own competence. Or your own integrity.


Why do large portals seem immune?


That's the question that bothers me the most.


Major news outlets and influencers publish dozens of videos a day. The content is often sensationalist. And we don't see any public notifications of restrictions.


What's behind all this?


History and internal score


Platforms work with reputation systems.


A creator with a long track record, a low rate of confirmed violations, and high positive engagement tends to have greater algorithmic trust.


It is plausible to imagine that there are "internal scores" of reliability. This would impact:


  • Probability of automatic blocking

  • Review speed

  • Initial reach


Large portals have legal teams, are familiar with the rules, and strategically adjust their language. This reduces the risk of penalties.
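Here is how such a trust score might interact with moderation, as pure speculation. The formula, variable names, and numbers below are hypothetical; nothing here is based on a documented platform mechanism.

```python
# Speculative sketch: an internal "trust score" shifting moderation
# outcomes. The formula and all numbers are hypothetical.

def effective_threshold(base: float, trust: float) -> float:
    """Higher-trust accounts would need a higher model score to be limited."""
    return min(base + 0.3 * trust, 0.99)

BASE = 0.7
new_creator_trust = 0.1  # short history, few signals
big_portal_trust = 0.9   # long track record, low confirmed-violation rate

model_score = 0.75  # the same video, the same model output

limited_for_new = model_score >= effective_threshold(BASE, new_creator_trust)
limited_for_big = model_score >= effective_threshold(BASE, big_portal_trust)

# The same borderline score throttles the newcomer and leaves the
# established account untouched.
assert limited_for_new and not limited_for_big
```

If anything like this exists, the same borderline video would be invisible for one account and fully delivered for another, which is exactly the asymmetry creators perceive.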


Business relationship


Another factor is monetization.


Large creators generate significant revenue. They attract advertisers, retain users on the platform, and strengthen their brand.


I'm not claiming direct favoritism. But, in any company, strategic accounts receive preferential treatment.


This happens in banks, software companies, and consulting firms. Why would it be different on digital platforms?


The advantage of redundancy


Big influencers post a lot. If one video is limited, ten others make up for it.

For smaller creators, a penalized video can represent 30% of their weekly reach.


The perception of injustice is greater because the relative impact is greater.
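The arithmetic behind that asymmetry is simple enough to write down. The posting volumes are illustrative, not measured.

```python
# Same penalty, very different relative impact (illustrative volumes).

def relative_impact(videos_per_week: int, limited: int = 1) -> float:
    """Share of a week's output affected when `limited` videos are throttled."""
    return limited / videos_per_week

small_creator = relative_impact(3)    # 1 of 3 weekly videos: about 33%
big_influencer = relative_impact(30)  # 1 of 30: about 3%

assert small_creator > big_influencer
```

One throttled video costs the small creator a third of the week; the big account barely notices.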


What's really behind all this?


If I had to summarize, I would say there are five layers.


1. Engagement optimization


The algorithm prioritizes retention and advertising revenue. Content that maximizes these indicators tends to be favored.


2. Risk management


Platforms prefer to err on the side of restriction rather than allow content that could generate a reputational crisis.


3. Operational scale


Billions of pieces of content require automation. Automation generates errors. Errors generate frustration.


4. Opacity as a strategy


Total transparency would reduce control and open the door to massive manipulation.


5. Power imbalance


Small creators have less influence, less direct contact, and less margin for error.


The feeling of invisibility


There's something deeper to all of this.


When a video is "not delivered," there is no public debate. There is no explicit removal. There is invisibility.


It's a modern form of silence.


You speak, but almost no one listens.


This creates an environment where the creator internalizes guilt. "Maybe my content isn't good." "Maybe I don't understand the algorithm."


Sometimes that's true. But not always.


As someone who analyzes data, I know that systems are imperfect. And I know that metrics are chosen. They are not natural. They are human decisions.



The impact on content production


Over time, many creators adapt their behavior:


  • They avoid complex topics.

  • They oversimplify.

  • They use more appealing titles.

  • They bet on controversy.


The system shapes the content.


This worries me.


If what engages is superficial, and what is profound is limited or risky, the trend is towards a shallower internet.


Not out of individual malice. But due to structural incentives.


What I learned as a data scientist


I learned that every system has a goal. And whoever defines the goal defines the system's behavior.


If the objective function were verified informational quality, perhaps the feed would look different.


But measuring quality is difficult. Measuring display time is easy.


So platforms choose what is measurable.


This also happens in companies. Selection processes prioritize speed and reducing legal risk, not necessarily detailed feedback.


Metrics shape culture.


Is there a solution?


I don't believe in simple solutions.


Greater transparency would help. Clearer moderation criteria would also be beneficial.


But there are real limits:


  • Competition between platforms

  • Pressure for profit

  • Regulatory risk

  • Massive scale


Perhaps the change will come more from external regulation than from internal initiative.


Or perhaps it comes from users who begin to value in-depth content more than immediate entertainment.


I don't know.


What do I do about this?


I continue producing.


But I produce consciously.


I know that not every video will be delivered. I know that some notifications will be vague. I know that big players are playing a different game.


I also know that I can't base my self-esteem on the metrics of a platform whose main objective isn't my intellectual growth, but user retention.


Similarly, I cannot define my professional value based on a generic rejection email.


Conclusion


Growing up on social media platforms that demand daily content creation is both stimulating and strange.


As someone with a degree in Data Science, I see what goes on behind the scenes. I see the statistical logic, the false positives, the retention optimization, the strategic opacity.


As a creator, I feel the frustration of the lack of clarity.


As a candidate in selection processes, I recognize the same pattern of generic answers and opaque decisions.


Ultimately, it all revolves around scale, risk, and incentives.


It's not personal. But it's not neutral either.


And perhaps the most important question is not "why does this happen?", but "what kind of system do we want to continue feeding?"


That question is still open.
