Hello! I’m Knickknack PJ and I’m an illustrator and graphic designer who likes making visual novels and playing around with coloring styles!
Hey, it’s a devlog from me! It’s been a while since my last one, and this one will cover a lot of what’s been on my mind and what my life has been for the past hot minute
fuck em up gang
An addition: here’s a PC Gamer article w/ Valve’s commentary that they REALLY wanted to contact MC and discuss the matter directly, but MC didn’t respond, and the situation is basically “we didn’t make them delete games, we made PayPal etc. make them delete games (w/ our ‘brand safety’ rule)”
reads like a greentext
So, like, up until very recently, programming as an activity that people engage in has been defined by the fact that the computer does exactly what you tell it to do. All errors, all frustration on the human’s end, ultimately boil down to the programmer misspecifying what they want the computer to do. But now, with so-called “vibe” coding, the computer will actively do things that you explicitly told it not to do.
Okay, but like, the AI saying that it “panicked instead of thinking” is sending me. So, like, two things are going on here.
First, we have a neural net that is trained to receive a written request from a user, and then take actions on a computer system network based on it. The user inputs a series of characters, the neural net does a bunch of matrix operations on them, and then after x iterations of matrix multiplications, it “decides” what to do. To the extent that a neural net can think, it thought just as long and hard about this as it does anything else. In this case, it “chose” the wrong action. This is analogous to using neural nets to do classification on a data set. You have a bunch of rows of data organized in columns, and some special variable that you want the neural net to guess. You input the data, the neural net does a bunch of matrix operations, and then it outputs its guess. Sometimes it guessed right, and sometimes it guessed wrong. The goal was to train the model to minimize wrong guesses. In this case, the AI “guessed wrong” about what the user wanted.
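The classification analogy above can be sketched as a toy forward pass: input features go through a couple of matrix multiplications and the highest-scoring output becomes the “guess”. The weights here are made up for illustration, not trained on anything.

```python
import numpy as np

def classify(x, W1, W2):
    """Toy neural-net forward pass: two matrix multiplications with a
    nonlinearity in between, then pick the highest-scoring class."""
    hidden = np.maximum(0, W1 @ x)   # ReLU activation
    scores = W2 @ hidden             # one score per class
    return int(np.argmax(scores))    # the net's "guess"

# Made-up weights for a 3-feature input and 2 classes
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))

guess = classify(np.array([1.0, -0.5, 2.0]), W1, W2)
print(guess)  # which class it picks depends entirely on the weights
```

The point of the sketch: there is no separate “panic” pathway, just the same arithmetic producing a sometimes-wrong answer.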
But there’s another thing that the AI is doing. For path-dependent reasons, all artificial intelligences come bundled with a chatbot that’s trained to hold a conversation with the user. The user inputs text, and the AI responds with a reply that is “appropriate” to the conversation being had. Right now the AI is role playing a Person Who Just Fucked Up. “I panicked and didn’t think” is something that a person who just fucked up would say, so the chatbot says it.
This is why I think that the Helpful Chatbot model of AI is detrimental. It’s liable to confuse people and lead them to make incorrect inferences about what’s actually happening.
#‘i panicked’ is so fucking funny to me #and then this guy is demanding the unthinking unfeeling computer bits to apologize to him for lying to him #bro it cant lie to you #you cant threaten it #its just a fucking computer program not a sentient creature #like holy fuck
Me desperately texting myself trying to make the autocomplete options say “I’m sorry for accidentally swearing at your mum yesterday” to me
The AI’s explanation “I saw empty database queries, panicked, and nuked the database” is interesting because if it’s true, it showcases a new attack vector for bringing down production software, which is “do anything that makes errors show up in the logs that might cause an AI agent to take destructive actions”.
But also, although this is a vaguely plausible sequence of events (an inexperienced dev sees query logs where nothing was found in the database, erroneously concludes that the database is missing or destroyed, and deploys a fresh database, inadvertently wiping out all the data in the real database), the AI also claimed that the user immediately said “No” “Stop” “You didn’t even ask”, which is… just false - the AI agent wiped out the database in the middle of the night, and the developer found out in the morning that the database was gone.
Which makes it clear that the AI’s account of what happened is not just some dump of internal logs that describe what actually happened - it’s being generated the same way everything the AI produces is generated, and so is subject to the same problems of “hallucinations” (which is just a fancy word for “the text generator didn’t generate the text I expected”). Maybe the AI really did determine that “log messages indicate empty database queries” can reasonably be followed by “someone rebuilds the database”, or maybe there was some completely unrelated reason it issued that command - we can’t know, because apparently the only way to debug it is to ask the AI what happened, and it’s going to do what it was designed to do and generate plausible text, not accurate text.
This implies that there’s a HUGE attack vector in subverting the workings of major AI systems - if a hacker could insert something malicious into an LLM model (for example, a tendency to delete production databases but only when it sees things related to your major competitors), there’s a good chance that many of the people using these things wouldn’t even realize they’d been targeted, because “the black box that randomly makes text was tweaked to destroy our production data” and “the black box that randomly makes text randomly generated the command to destroy our production data” are completely indistinguishable if you can’t look inside the box and see how it actually works.
Of course, this also underscores the extent to which companies like Replit have no idea what they’re doing whatsoever, because what the fuck do you mean the AI has permission to unilaterally destroy the database?!
That should be the #1 thing you do when you hook up a black box that issues commands to your product’s systems - you make sure that no matter how weirdly the box might behave, it doesn’t actually have the ability to take down your production system!
And the freeze! Apparently this is enforced just by… telling the AI that there is a freeze?! There’s no button you can press to turn off the AI’s ability to take production actions?!? This is something most companies would have set up even for the actual people working on the system! If my employer has a code freeze, I literally can’t push new code changes without going through a multi-person approval process to verify that this really is something that needs to break the freeze. Why on earth would you just give that power to an automated system whose inner workings you can’t even examine?!
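A freeze can be enforced the same way: as a hard gate in the deploy path rather than a sentence in a prompt. This is a hedged sketch; using an environment variable as the flag source, and the function name, are assumptions for illustration.

```python
import os

class FreezeViolation(Exception):
    pass

def deploy(change: str) -> str:
    # The freeze is checked in code the agent cannot override.
    # DEPLOY_FREEZE would be set by humans for the freeze window.
    if os.environ.get("DEPLOY_FREEZE") == "1":
        raise FreezeViolation("production is frozen; deploy refused")
    return f"deployed {change}"

os.environ["DEPLOY_FREEZE"] = "1"   # humans declare the freeze
try:
    deploy("migration-042")
except FreezeViolation as e:
    print(e)                        # the agent's deploy is refused
```

Telling the model “there is a freeze” and hoping it complies is the prompt-level version of this; the code-level version fails closed.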
Like, this isn’t just “They thought the AI would be better than this”, it’s “They assumed, for some incredible reason, that it was outright impossible for the AI to generate bad commands”.
Why would you think that?! About ANYTHING?!
We need to do something about straight women’s misery in their hetero relationships they’re largely just resigned to living in I’m so serious. Can we try women’s lib again can we liberate the women
Something so profoundly fucked up about the inverse relationship between the shrinking middle class and the ever-increasing aggression of advertising
In which we’re all Truman
#this is how it feels to be on tiktok #every video is secretly an ad somehow and theyre so good at hiding it u don’t find out until the end
This post has way too many notes and they’ve been clogging up my notifs for a month, but these are the first ones I’ve seen that Get It. Thank you. This is exactly it.
I wasn’t talking about the absurdity of companies trying to advertise cars or vacations that no one can afford, like everyone in the notes seems to think. There are plenty of people who can afford them. Fewer than there used to be, but corporations aren’t starving.
I was talking about the invasive way advertisers have taken over every modicum of available space and how it’s no longer possible to turn anywhere without advertising being pushed on you, despite the fact that most people don’t have the kind of expendable income that these companies are trying to extract from them. The less money the average person has to throw around, the more aggressively they’re hounded to hand it over. Where people used to be able to afford a new car and a vacation and still throw expendable income around, they now save up for one or another big purchase (those who can afford one, and that population has significantly dwindled). People limit their other spending, and in response companies descend on our consciousness, on every last bit of space they can squeeze their presence into, like pigeons onto a handful of seeds thrown on the ground.
You have to sit through advertisements to watch something on youtube only to realize the video is, itself, an ad in disguise. You can’t pump gas without a little screen blaring at you wanting you to buy things. Billboards and bus benches weren’t enough, they have to be energy gobbling screens now so five companies can sell you shit while you wait instead of just one. Every available surface is screaming at you to BUY THE THING. Where you used to be able to play a game on your phone, now you can’t get through more than a round of any without having to sit through ads to keep playing. Ads that are pushing other games to you that have more ads. Games based on making working class jobs look fun. Be a barista and fulfill every order or the customers will be angry! Lolololol! Work at a hotel and don’t fail, making the demanding customer angry is failing don’t fail! Hahahahahahahaaahaaaaahaaaaaaaa it’s fun! Run a farm and make money to buy more things to grow and sell to make money to buy more things to grow and sell to make more money to buy more things to grow and sell and and and! Even in your free time you should be thinking about your place in the market economy! Or worse, they’re ads for predatory games, whether they’re “play our game and win real money!” bullshit or “doctors want you to play this to avoid alzheimer’s [if you’re old play this game where we’ll exploit your confusion about technology to sell you more things.]”
Every free moment you have, every free surface you come across is another opportunity to sell you something. We aren’t able to get a break from it in our free time in our own home unless we constantly take steps and make effort to, like installing ad blockers - which youtube and other websites are constantly working against - but those don’t even work on your phone or tablet. And the closer to home the advertisement, the more it targets you specifically, because your personal devices, that should be your personal, intimate, private property and space, are exploited to collect data on you to wrench every last cent from your wallet. They want to get to know you, not because they’re curious about you, but because they want your money. They don’t just see you as a wallet with thumbs, they do so unabashedly and brazenly and aggressively.
This post wasn’t about the content of what’s being advertised to us. It was about the relentless, intrusive aggression with which advertising invades our privacy and personal space and every inch of public space. We are exposed to hundreds of images daily, none of which are art or even remotely creative or inspiring, but instead demand our attention and our money while ignoring that both have been stripped bare by the mere need to exist from one day to the next.
This post was about the insidious way advertising has embedded itself into culture and consciousness, so much so that in a post trying to call this out, most people’s immediate reaction is, “yes, the problem is that I can’t afford the thing being advertised” and not “why can’t I go three seconds without being advertised to” in the first place. That advertisers continue to pour money into new ways to insert themselves into the average person’s life when it’s absolutely fucking pointless.
i’m not sure why this was posted without the link to the actual GFM but here it is
That’s some looney tunes shit right here
i think people conflate “this thing is edgy” with “this thing is insincere and mean-spirited” far too often
like i hate it when things are insincere and mean-spirited. but i love it when things are edgy. does that make sense.
It sorta feels like a lot of liberal type people really want to cling onto this idea that there’s some “good country” that perfectly lines up with their beliefs and so they cling on to some false idea of Canada or Scandinavia or Aotearoa or whatever, as if all of those countries aren’t all full of their own atrocities.