Latest text of pad IVlnVpwRfx
Saved April 16, 2024
Correa: Much of the content produced for people on the Internet is generated by artificial communities with manufactured personalities that influence real people.
  
Part 1 - Introduction: Personal experiences of the early internet and references on the history of the internet.
 
The internet has changed in a very short period of time. It was once a wild, untamed place, but in the last two decades it has, for lack of a better term, become "normalized."
 
When I was a kid in the late 1990s and 2000s, Facebook, YouTube, TikTok and Instagram did not exist. Instead, the content of that era was almost entirely made up of hobbyists creating small, personal websites with no intention of generating revenue. In that sense, the internet was only for people who knew how to use it and who could afford it, so it ended up full of people genuinely sharing the things they valued.
 
Eventually, the internet went mainstream. Social media emerged, mobile phones became ubiquitous all over the world, and access to the internet became easier and cheaper. Through this explosion in access, the average person became the dominant force in the online population. You could argue that they were just trying to appear cool and trendy by going online, but regardless of their intentions, they steadily diluted the value and quality of what was published on the internet.
 
[ Mention "netiquette": disclosing personal data on the Internet was previously considered unacceptable.]
 
Over the years, the internet became something that "they" didn't control, and they've had to put in a significant effort to control it; after a lot of work, that's what they're doing now. Bots are a BIG part of that. Multi-platform shills are another. Now that the internet is so widely used by so many people, manipulating it has become very profitable for those seeking money or power.
  
A prime example that comes to mind is Reddit. I've never been one for site loyalty, so I still go there from time to time to check things out, but in general, Reddit is rife with bots and shills. There was a report of a user who went to a subreddit dedicated to politics and privately messaged a link to a lot of users, telling them, "This link will record your IP address, don't click on it." He wasn't lying; the link did record IP addresses. According to him, by the end of the experiment, about 80% of the accounts had clicked the link within seconds of receiving the private message, and many of them replied minutes later denying that they collected IP addresses. That's not to say there were no genuine visitors: the few real users, confused by the message, never clicked the link. Admittedly, this is a small sample and, at best, a questionable experiment. However, it would not be entirely wrong to take it as suggesting something like 10 bots and 2 shills for every 1 genuine user.
 
[ The rest of the internet isn't much better. 4chan doesn't have a private messaging system, so we can't run the same test; however, having immersed myself in /pol/ over the last few months, posts that exist purely to promote something seem to be the norm. Even Discord raiders, called in solely to annoy other users (who are themselves also unpaid dummy users), are being discovered to be anything but real people. ]
 
 
 
Part 2 - The Problem: Outline the basics of what appears to be happening. 
 
There is a large-scale, deliberate effort to manipulate discourse online, and wider culture, via a complex system of bots and paid employees whose job is to produce content, and respond to content, online in order to further the agenda of their employers.
 
We've  already seen this in action through foreign nations influencing  elections by manipulating advertising algorithms on social media in  order to push specific candidates.
 
[Oxy: need to talk about shills and other key disinformation tactics]
 
[Oxy: misinformation might also help but it's less of a focus for this theory]
 
I strongly believe this is a positive feedback loop, and for that I blame Facebook and Twitter. The internet is the de facto means of access to information, and information is what moves the mind. The mind likes recognition. When user interaction systems like voting or "likes" were introduced without negative feedback, they created a subconscious copy-the-feedback reflex. As such, only "positive" opinions are able to propagate, while "negative" opinions are sequestered into obsolescence.
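The dynamic described above can be sketched as a toy simulation (everything here is invented for illustration, not a model of any real platform): two opinions compete, and each new "like" goes to an opinion with probability proportional to the likes it already has. With no downvotes, an early lead tends to snowball.

```python
import random

def simulate(rounds=10000, seed=42):
    """Toy positive-feedback loop: each new user 'likes' one of two
    opinions with probability proportional to its existing likes.
    There is no negative feedback, so popularity only compounds."""
    random.seed(seed)
    likes = {"A": 1, "B": 1}  # both opinions start with one like each
    for _ in range(rounds):
        total = likes["A"] + likes["B"]
        # preferential attachment: visibility scales with prior likes
        pick = "A" if random.random() < likes["A"] / total else "B"
        likes[pick] += 1
    return likes

result = simulate()
print(result)
```

The point of the sketch is only that the loop has no corrective term: nothing in it ever subtracts from an opinion's score, so whatever is rewarded early keeps being rewarded.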
 
It's this system, more than anything, that drives groupthink. The average internet user is afraid to hold an opinion that goes against everyone else because they want the votes (or, as Luke Smith describes it in this video on the matter, upcummies: https://www.youtube.com/watch?v=YjbyDU0WzYI). <!-- really? luke smith? why not talk about social media dependency etc instead of mentioning that hipster soyboy -->
 
Consequently, they are more likely to follow trends and repeat what other people have said. A prominent example of this comes in the form of near-religious advocacy of experts, experts, experts. (I use "near-religious" deliberately, since no rational person would "believe" in science. They understand it.) That's not to say that the fear of missing out on trends, or of running against them, is something to scoff at, either.
 
I  believe that the proliferation of the internet was originally supposed  to democratize media and content by allowing users to create whatever  they wanted. On the old internet, you could start anew every time you  posted something. Anonymity was a means of protection against toxic  negative feedback, so people were willing to express their opinions and  try radical or experimental things. Though many people on all sides  today would spit on what may be juvenile proliferation of such behavior  (See: Derision of Antifa), they were counterculture.
So what about bots then? Aren't they the ghost in the machine?
 
Not necessarily. For all their wonderful AI, bots don't add a higher degree of complexity to this system (which up until now has been startlingly human-driven). It'd be better to assume that they amplify what's already happening.
 
They repeat the same opinion more and more, and faster than us. Anonymity can't do anything against bots, because we can't externally influence a bot the way we could a human; that has to come from whoever is operating it. Since most people are more concerned with their online identity (which is in and of itself yet another problem), they don't stop to consider the content, and so they continue to disseminate it further. If this is true [need to include evidence of bots], it's a startling evolution of propaganda. The rigid mechanics of bots make them an easy weapon for manipulating people, and a boon for anyone with an agenda. (Let's not forget there are different sides here.)
 
I believe Google is one of those that makes bots; after all, they work like a search engine, surfacing the most accepted content first. It's the same as running an ad. [Oxy: Do they work like a search engine, though? Aren't they repeating and amplifying topics and opinions by being activated by key words in otherwise organic conversation? "Obtaining the most popular content first" is a huge duh moment, because of course they're the first ones to get it. They are bots! They need to be loaded up with specific information and pointed in a certain direction! Green highlighter dude, I have no idea what you're trying to say here.]
 
It might be easy to conclude that these functions are what's keeping us locked in this state of widespread groupthink, and that is true to an extent. However, when you consider who's manning these tools, it wouldn't be unreasonable to assume that an elite few have enabled mainstream media to develop online, and then hijacked it.
 
Part 3 - Why is This Happening?: Explain who would create such a system and what they stand to gain.
 
 
There are also services that people can now pay for to have their captchas solved.
 
 
Part 4 - How Are They Doing it?: Semi-technical description of how botnets are used. 
 
I've  been here long enough to remember the days before the captcha and I  always have a pang of nostalgia for that lost little bit of extra  freedom and ease, but if anyone else here was around back then, you'll  likely remember how frequently the site was overrun by rogue bots. It  literally made 4chan unusable for days at a time. 
 
Does  anyone remember the clockwork orange bot? It's the one I particularly  remember, and which (if I recall) may have ushered in the era of the  captcha. 
 
For  those who don't remember it, what happened was that a particular still  of Alex from A Clockwork Orange holding his head and screaming (NOT the  famous scene where he's in the strait jacket with his eyes held open)  started to get posted without any text in random threads. It looked like  a standard memey reaction image at first, but then it started to get  posted an excessive amount: multiple posts in a row of this image, then  new threads started with the image. And to get around the script that  blocks multiple posts of the same image, the picture changed colour  slightly each time it was posted. 
 
Within  a day or two, /b/ was almost nothing but empty-text posts of  differently-tinted images of this picture. I think (and I can't remember  for sure, but this is how I recall it) that moot shut down the site  completely for maintenance and when it came back online he'd installed  the captcha 'as a temporary fix'. 
 
An easy way for bots to get past captchas is for someone to set up a system for feeding captchas to humans at another location. 
 
Example: 
 
Mr Shill wants to get his bot to post on /pol/, but the bot can't get past the captcha. Mr Shill implements a piece of software that triggers every time his bot tries to post. The software takes the captcha image that his bot is trying to solve and feeds it into a 'fake' captcha that he has set up as a gatekeeper for a pornographic film download on a different website.
 
Jonny  Retard wants to download Ass Licking Trannies 23 and finds a link to  download it on Mr Shill's website (1080p WebRip aUtHeNtIc DoWnLoAd!!!).  He clicks the download button and a captcha pops up. He eagerly solves  the captcha. The text he has just typed in gets copied and used to solve  the captcha image on 4chan, thus allowing the bot to post. 
 
Jonny Retard is disappointed when Mr Shill's website tells him he must log in as a premium user to download this movie. 
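The data flow in the example above can be sketched as a harmless in-memory simulation (no real sites, bots, or services; every name here is a stand-in): the bot forwards each challenge it can't solve to a fake gate, an unwitting human solves it, and the answer flows back.

```python
import queue

# Toy simulation of the captcha relay described above. Two queues stand
# in for the two network hops; nothing here touches a real service.
captcha_to_human = queue.Queue()   # bot -> fake "download" gate
solution_to_bot = queue.Queue()    # human's answer -> bot

def bot_hits_captcha(captcha_image_id):
    # The bot can't solve the captcha, so it relays the challenge outward.
    captcha_to_human.put(captcha_image_id)

def human_visits_fake_gate(solve):
    # The human believes this captcha protects a download and solves it.
    challenge = captcha_to_human.get()
    solution_to_bot.put((challenge, solve(challenge)))

def bot_submits_post():
    # The relayed answer lets the bot complete its original post.
    challenge, answer = solution_to_bot.get()
    return f"posted (captcha {challenge} answered '{answer}')"

bot_hits_captcha("img_001")
human_visits_fake_gate(lambda c: "XK4F9")  # human types in the answer
print(bot_submits_post())
```

The sketch shows why the scheme works at all: the captcha only proves *a* human solved it, not that the human and the poster are the same party.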
 
 
Part  5 - The Effects: Describe the outcomes that we have seen so far in  culture, human psychology, etc. and where it might lead. 
 
The  real problem is that flesh-and-blood human beings have been directed by  an algorithm to act in a way that emulates bot behaviour. 
 
Realistically-animated people pretending to be real isn't necessary when the algorithm can get real people to do its bidding. 
 
We  know for a fact that this happens because it's exactly what happened in  an obvious way in 'Elsagate'. More disturbing is how we might be seeing  algorithmic feedback loops encouraging content creators to create  certain types of pornography. 
 
Think  about it: porn used to be made by a big film company that had funding  to make films. The people in the company would come up with some stupid  idea for a film, call up some appropriate porn actors, shoot the damn  film and then try to sell it to people. 
 
Now,  increasingly (and this has really only happened in the last few years),  porn is created by independent users and posted online directly. They  do not have funding, and are not trying to simply make a quality film  that people will want to buy, but are trying to make something that will  trigger the algorithm, and thus generate revenue through networks of ad  revenue, data-farming, and/or micropayments.* 
 
What  happens when the algorithm drifts (having no human compunctions, no  understanding of morality or culture) towards increasingly bizarre and  extreme content? The human content creators who are trying to ride the  algorithm will respond to this change and alter their output to better  fit the algorithm and trigger the correct search terms and to trigger  the feed of links that bump content up search rankings. 
 
I'm not explaining this very well, but if you read the original big exposé on Elsagate, you'll know what I'm talking about. 
 
*Video games do this, too: notoriously. The business model used to be to make a fun game that people would want to play, and sell lots of copies. The business model now is to make a game that keeps people playing. It doesn't have to be fun, it just has to keep people playing.
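The creator-algorithm loop described in this part can be sketched as a toy simulation. All of the numbers and the one-dimensional "extremeness" scale are invented for illustration; the only point is the direction of drift when creators chase whatever the algorithm last rewarded.

```python
import random

def simulate_drift(steps=50, seed=1):
    """Toy model of the feedback loop above: each creator's content has
    an 'extremeness' score in [0, 1]; the algorithm (no morality, no
    culture) rewards engagement, which in this toy rises with
    extremeness; creators then drift toward last round's winner."""
    random.seed(seed)
    creators = [random.random() * 0.3 for _ in range(10)]  # start mild
    history = []
    for _ in range(steps):
        # reward is just measured clicks: extremeness plus a little noise
        rewards = [c + random.uniform(-0.05, 0.05) for c in creators]
        best = creators[rewards.index(max(rewards))]
        # each creator moves 10% of the way toward the winner, plus a
        # small escalation term, capped at the top of the scale
        creators = [min(1.0, c + 0.1 * (best - c) + 0.02) for c in creators]
        history.append(sum(creators) / len(creators))
    return history

h = simulate_drift()
print(f"mean extremeness: start={h[0]:.2f}, end={h[-1]:.2f}")
```

Because nothing in the loop pushes back, the mean only ratchets upward; that is the mechanical core of the "drift" the text describes.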
 
 
 
 
 
[ - should mention how games/social media/etc. are heavily into exploiting pleasure/reward circuitry in the brain to force the issue of "engagement"]
 
[ i think this website offers great information on how your brain is hijacked by things like pornography and video games https://www.yourbrainonporn.com/]
 
[ - and how these days, many people tend to be online somewhat compulsively]
 
[Oxy:  At that point, you may as well rope psychology in--and I really don't  mean that in a snide way. Mainstream internet culture is still rather  organic even without bots and shills, and I suspect we can find what  consequences occur and why it happens at the intersection of psychology  and sociology. If you're particularly daring, you can even discuss  internet memes.]
 
 
Part 6 - Conclusion: Summarise the key points of what we know, the consequences, and how we might respond.

Internet may have slipped out of our control. Need to raise public awareness of this.
 
Possible  solutions may be increased reliance on encrypted peer-to-peer  communication software, or using less centralised networks like the idea  of a p2p internet or 'meshnet'. 
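The "meshnet" idea mentioned above can be sketched minimally (the topology and names are invented for illustration; real mesh protocols add routing, encryption, and loop suppression far beyond this): nodes with no central server flood each message to their neighbours, so no single chokepoint can filter the conversation.

```python
# Minimal flooding-gossip sketch of a mesh with no central server.
class Node:
    def __init__(self, name):
        self.name = name
        self.peers = []    # direct links, as in a mesh
        self.seen = set()  # message ids already relayed

    def link(self, other):
        self.peers.append(other)
        other.peers.append(self)

    def receive(self, msg_id, text):
        if msg_id in self.seen:
            return         # drop duplicates so the flood terminates
        self.seen.add(msg_id)
        for peer in self.peers:
            peer.receive(msg_id, text)  # relay to every neighbour

# A five-node ring with no hub: a--b--c--d--e, plus a--e closing the loop
nodes = [Node(n) for n in "abcde"]
for left, right in zip(nodes, nodes[1:]):
    left.link(right)
nodes[0].link(nodes[-1])

nodes[0].receive("m1", "hello mesh")
print(all("m1" in n.seen for n in nodes))  # every node got the message
```

The design point is that delivery depends only on the links between peers: removing any one node still leaves a path around the ring, which is the censorship-resistance property the text is after.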
 
Imageboards  and their "wild west" attitude have allowed for the free exchange of  ideas to flow more or less uninhibited (barring jannies, pedos getting  banned, etc.)
 
>As  a result, conscious or otherwise, the cream of the crop of the content  that originates here disseminates to the normies in a gradual,  stratified way
>The  structure and culture of imageboards has also made it difficult for  traditional structures of power and influence to subvert effectively,  which is why imageboards are pretty much the only vestige of old web  type content
>In  an attempt to circumvent this, TPTB are trying to push bots and shills   on us in a last ditch effort to drown out our own voices with ones they  have more direct control over
>Moreover, even if the majority of anons dismiss or call out bots or shills, it's inevitable that trolls or just low-IQ anons will imitate their posts and mannerisms for attention, effectively doubling these efforts' reach
 
Every  impulse in my brain is to basically say "I can't really wrap my head  around the ramifications of shit like this" as a copout for shrugging it  off. 
 
This  happens all the time. Major, world-shaking government secrets leak and  people turn their heads. Meanwhile, someone says nigger on Twitter and it's headline news for the next month. 
 
And  I do it too. Tackling this stuff requires an obstinate determination to  stay focused on it and to not be afraid to sound like a schizo to most  normal people. 
 
There's  a pretty powerful impulse in us which, when we hear something huge that  could change our view of everything, rejects it to protect ourselves.  No-one wants to have their whole world-view, which they've built a life  upon, blown apart.
 
 
Experiments  on reddit proving Amazon shill posters. Catching bots on 4chan. Use  user/bot networks to gather data, force trends, sell products, silence  dissent, essentially anything they want to. 
 
By controlling a fake public consciousness, you control culture and can then enact whatever you want. 
 
Corporations and governments can use this for profiteering and suppressing dissent.
 
Evidence that Chinese government--or any other government--may create teams of shills to influence American culture.  
 
 
 
Greetings, it is too late to riot, but we can still take back the internet by developing a parallel internet. What say you?
 
To create a new internet would require either a ton of routers, satellites and towers, or fiber optics with fallback dependencies.
 
Hello, basically brand new here. But it's never too late to riot, just too soon to act.