just sold you out
They been sellin us out since the start. And they never even paid for us!
There’s this podcast I used to enjoy (I still enjoy it, but they stopped making new episodes) called Build For Tomorrow (previously known as The Pessimists Archive).
It’s all about times in the past when people freaked out about stuff changing but it all turned out okay.
After having listened to every single episode — some multiple times — I’ve got this sinking feeling that just mocking the worries of the past misses a few important things.
I’m not so sure that the concerns about AI “killing culture” actually are as overblown as the worry about cursive, or record players, or whatever. The closest comparison we have is probably the printing press. And things got so weird with that so quickly that the government claimed a monopoly on it. This could actually be a problem.
If we’ve learned any lesson from the internet, it’s that once something exists it never goes away.
Sure, people shouldn’t blindly trust the output of their own prompts. But if you can generate that output, a site can use the API to generate a similar output for a similar request. A bot can generate it and post it to social media.
Yeah, don’t trust the first source you see. But if the search results are slowly being colonized by AI slop, it gets to a point where the signal-to-noise ratio is so poor it stops making sense to only blame the poor discernment of those trying to find the signal.
I recommend listening to the episode. The crash is the overarching story, but there are smaller stories woven in which are specifically about AI, and it covers multiple areas of concern.
The theme I’d highlight here, though:
More automation means fewer opportunities to practice the basics. When automation fails, humans may be unprepared to take over even the basic tasks.
But it compounds. Because the better the automation gets, the rarer manual intervention becomes. At some point, a human only needs to handle the absolute most unusual and difficult scenarios.
How will you be ready for that if you don’t get practice along the way?
Nor is losing your night vision to the glare of a car (it’s always a pickup) behind you with too-bright lights that fill your mirrors.
It really fucking is. Nothing is a bigger red flag to me than a pickup. 98% of pickup drivers are assholes.
Basically this: Flying Too High: AI and Air France Flight 447
Description
Panic has erupted in the cockpit of Air France Flight 447. The pilots are convinced they’ve lost control of the plane. It’s lurching violently. Then, it begins plummeting from the sky at breakneck speed, careening towards catastrophe. The pilots are sure they’re done-for.
Only, they haven’t lost control of the aircraft at all: one simple manoeuvre could avoid disaster… The pilots are sure they’re done for.
In the age of artificial intelligence, we often compare humans and computers, asking ourselves which is “better”. But is this even the right question? The case of Air France Flight 447 suggests it isn’t, and that the consequences of asking the wrong question are disastrous.
Yep: https://www.scientificamerican.com/blog/beautiful-minds/who-created-maslows-iconic-pyramid/
However, many people may not realize that during the last few years of his life Maslow believed self-transcendence, not self-actualization, was the pinnacle of human needs. What’s more, it’s difficult to find any evidence that *he ever actually represented his theory as a pyramid*. On the contrary, it’s clear from his writings that he did not view his hierarchy of needs like a video game, as though you reach one level and then unlock the next level, never again returning to the “lower” levels. He made it quite clear that we are always going back and forth in the hierarchy, and we can target multiple needs at the same time.
Don’t worry. Someone will soon come by to remind us that it’s pointless to regulate AI, and also harmful to do it, and it’s actually a good thing for everyone, and also we’ll be shoveling shit until we die if we don’t get on board, and please oh please just let me get off to one more deepfake of my classmate before you take away my toy it’s not faiiiiir.
Yeah… What a mess. A horrible, horrible idea.
Mass producing disguised explosives is risky business.
Obviously they wanna price them low, to attract buyers in the target market. But if you price them too low, they become an opportunity for middlemen to resell to another market.
And now you’ve spread several batches of explosives to who-knows-where.
Hopefully they thought of that and restricted the detonation trigger to specific country codes. But that doesn’t erase the fact that there are explosives in the device.
Arguably one of the most important groups to hear from if we’re gonna find the right balance between freedom to create and freedom from harm.
Seems like we’re going to be stuck in the uncanny valley of telepresence. The more fidelity we add, the more we’re able to pick up on microexpressions, subtle eye movements, and breathing, which helps trigger oxytocin and promote trust. But also, the more fidelity we add, the more attack surface we open up for malicious actors to exploit.
I’m sympathetic to the reflexive impulse to defend OpenAI out of a fear that this whole thing results in even worse copyright law.
I, too, think copyright law is already smothering the cultural conversation and we’re potentially only a couple of legislative acts away from having “property of Disney” emblazoned on our eyeballs.
But don’t fall into their trap of seeing everything through the lens of copyright!
We have other laws!
We can attack OpenAI on antitrust, likeness rights, libel, privacy, and labor laws.
Being critical of OpenAI doesn’t have to mean siding with the big IP bosses. Don’t accept that framing.
Not even stealing cheese to run a sandwich shop.
Stealing cheese to melt it all together and run a cheese shop that undercuts the original cheese shops they stole from.
That’s the reason we got copyright, but I don’t think that’s the only reason we could want copyright.
Two good reasons to want copyright:
Accurate attribution:
Open source thrives on the notion that if there’s a new problem to be solved, and it requires a new way of thinking to solve it, someone will start a project whose goal is not just to build new tools to solve the problem but also to attract other people who want to think about the problem together.
If anyone can take the codebase and pretend to be the original author, that will splinter the conversation and degrade the ability of everyone to find each other and collaborate.
In the past, this was pretty much impossible because you could check a search engine or social media to find the truth. But with enshittification and bots at every turn, that looks less and less guaranteed.
Faithful reproduction:
If I write a book and make some controversial claims, yet it still provokes a lot of interest, people might be inclined to publish slightly different versions to advance their own opinions.
Maybe a version where I seem to be making an abhorrent argument, in an effort to mitigate my influence. Maybe a version where I make an argument that the rogue publisher finds more palatable, to use my popularity to boost their own arguments.
This actually happened during the early days of publishing, by the way! It’s part of the reason we got copyright in the first place.
And again, it seems like this would be impossible to get away with now, buuut… I’m not so sure anymore.
—
Personally:
I favor piracy in the sense that I think everyone has a right to witness culture even if they can’t afford the price of admission.
And I favor remixing because the cultural conversation should be an active read-write two-way street, not just passive consumption.
But I also favor some form of licensing, because I think we have a duty to respect the integrity of the work and the voice of the creator.
I think AI training is very different from piracy. I’ve never downloaded a mega pack of songs and said to my friends “Listen to what I made!” I think anyone who compares OpenAI to pirates (favorably) is unwittingly helping the next set of feudal tech lords build a wall around the entirety of human creativity, and they won’t realize their mistake until the real toll booths open up.
You’re presupposing the superiority of science. What good is knowing the chemical composition of a mind, if such chemicals are but shadows on the cave wall?
You can’t actually witness a rock, in its full objective “rock-ness”. You can only witness yourself perceiving the rock. I call this the Principle of Objective Things in Space.
Admittedly, the study of consciousness is still in its infancy, especially compared to the study of the physical world. But it would be foolish to discard the entire concept when it is unavoidably fundamental. Suppose we do invent teleporters and they do erase consciousness. Doesn’t it say something about the peril of worshipping quantification over all else, that we wouldn’t even know until we had already teleported all of our bread? The entire field is babies. I am heavy ideas guy and this is my PoOTiS.
To quote Searle: Should I pinch myself and report the results in the Journal of Philosophy?
The physical world is the hologram.
Between saccades, fnords, and confabulation, I don’t trust a single thing my senses tell me. But the one thing I know for sure is that I’m conscious.
So, knowing that only consciousness is “real”, why would I assume it can be recreated through atoms (which are a mere hallucination)?
Fake lawyers, fake reviews, and several pyramid schemes. Solid takedowns, FTC!