I’ve had similar frustration with conversations that focus on how devastating AI will be, without much mention of who is responsible or how to hold them accountable. One person compared my take to “guns don’t kill people, people kill people.” I feel like responsibility has already been successfully deflected and we’ve barely gotten started. I don’t even know how to have that conversation at this point.
There is nothing we can do in the end. Anyone can create an AI in their own home, and it’s only going to get easier.
We could ban AI and it still wouldn’t stop it; I think we have passed the threshold.
This is a case of the cat being out of the bag…
While the cat being out of the bag is definitely an interesting and probably wholly positive development, I would still wonder how much big corporations can flex their muscles on this front. Keep in mind, AI isn’t a finished tech; there’s most likely a large amount of room for development, both gradual and categorical … plus it’s not quite settled where its killer applications will lie and who will have control over that, which is a market question, not just a pure tech one.
And that take IS still true.
Though of course there’s still a strong argument for background checks, reasonable limits, safety measures, etc…
The US isn’t the only country with plenty of guns, but IS the only place this keeps happening.
Plenty of countries have knives all about, yet only China has had spree child stabbings.
Because it’s about culture and social conditions.
Yea wow, hadn’t thought about how resonant the guns debate is with this (I’m not a USian). But very on point.
I can’t help but wonder about great filters. With AI, if it is a great filter (if they even exist), it’s not so much the invention of AI itself as the improbability of arriving at that invention only after sorting through and solving a bunch of social prerequisites. There’s a natural tension between technological progress and social safety and caution, such that whoever invents AI first will also be less “safe”, a little like the inevitable mutations that lead to cancer.