• 9 Posts
  • 62 Comments
Joined 1 year ago
cake
Cake day: June 13th, 2023

help-circle
  • While I appreciate the focus and mission, kind of I guess, your really going to set up shop in a country literally using AI to identify air strike targets and handing over to the Ai the decision making over whether the anticipated civilian casualties are proportionate. https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes

    And Isreal is pretty authorarian, given recent actions against their supreme court and banning journalists (Al jazera was outlawed, the associated press had cameras confiscated for sharing images with Al jazera, oh and the offices of both have been targeted in Gaza), you really think the right wing Israeli government isn’t going to coopt your “safe superai” for their own purposes?

    Oh, then there is the whole genocide thing. Your claims about concerns for the safety of humanity ring a little more than hollow when you set up shop in a country actively committing genocide, or at the very least engaged in war crimes and crimes against humanity as determined by like every NGO and international body that exists.

    So Ilya is a shit head is my takeaway.




  • We had I think six eggs harvested and fertilized, of those I think two made it to blastocyst, meaning the cells doubled as they should by day five. The four that didn’t double correctly were discarded. Did we commit 4 murders? Or does it not count if the embryo doesn’t make it to blastocyst? We did genetic testing on the two that were fertilized, one is normal and the other came back with all manner of horrible deformities. We implanted the healthy one, and discarded the genetically abnormal one. I assume that was another murder. Should we have just stored it indefinitely? We would never use it, can’t destroy it, so what do? What happens after we die?

    I know the answer is probably it wasn’t god’s will for us to have kids, all IVF is evil, blah blah blah. It really freaks me out sometimes how much of the country is living in the 1600s.




  • I don’t know enough to know whether or not that’s true. My understanding was that Google’s Deep mind invented the transformer architecture with their paper “all you need is attention.” A lot, if not most, LLMs use a transformer architecture, though your probably right a lot of them base it on the open source models OpenAI made available. The “generative” part is just descriptive of the model generating outputs (as opposed to classification and the like), and pre trained just refers to the training process.

    But again I’m a dummy so you very well may be right.


  • Putting aside the merits of trying to trademark gpt, which like the examiner says is commonly used term for a specific type of AI (there are other open source “gpt” models that have nothing to do with OpenAI), I just wanted to take a moment to appreciate how incredibly bad OpenAI is at naming things. Google has Bard and now Gemini.Microsoft has copilot. Anthropic has Claude (which does sound like the name of an idiot, so not a great example). Voice assistants were Google Assistant, Alexa, seri, and Bixby.

    Then openai is like ChatGPT. Rolls right off the tounge, so easy to remember, definitely feels like a personable assistant. And then they follow that up with custom “GPTs”, which is not only an unfriendly name, but also confusing. If I try to use ChatGPT to help me make a GPT it gets confused and we end up in a “who’s on first” style standoff. I’ve reported to just forcing ChatGPT to do a websearch for “custom GPT” so I don’t have to explain the concept to it each time.


  • Interesting perspective! I think your right in a lot of ways, not least that it’s too big and heavy now. I’d also be shocked if the next iPhone didn’t have an AI powered siri built in.

    I guess fundamentally I am skeptical that we’re all going to want a screens around us all the time. I’m already tired of my smart watch and phone buzzing me with notifications, do I really want popups in my field of vision? Do I want a bunch of displays hovering in front of my while I work? I just don’t know. It seems like it would be cool for a week or so, but I feel like it’d get tiring to have a computer on your face all day, even if they got the form factor way down.


  • Apple has always had a walled garden on iOS and that didn’t stop them from becoming a giant in the US. Most people are fine with the App Store and don’t care about openness or the ability to do whatever they want with the device they “own.” Apple would probably love to have a walled garden for Macs as well, but knows that ship has sailed. Trying to force “spatial computing” (which this article incorrectly says was an Apple invention, it’s not Microsoft came up with that term for its hololense) on everyone is a great way to move to a walled garden for all your computing, with Apple taking a 30% slice of each app sale. I doubt the average Apple user is going to complain about it either so long as the apps they want to use are on the App Store.

    I think the bigger problem is we’re in a world where most people, especially the generations coming up, want less screens in their life, not more. Features like “digital well-being” are a market response to that trend, as are the thousands of apps and physical products meant to combat screen addiction. Apple is selling a future where you experience reality itself through a screen, and then you get the privilege of being up to clutter the real world with even more screens. I just don’t know that that is a winner.

    It’s funny too because at the same time AI promises a very different future where screens are less important. Tasks that require computers could be done by voice command or other minimal interfaces, because the computer can actually “understand” you. The Meta Ray-Ban glasses are more like this, where you just exist in the real world and you can call on AI to ask about the things you’re seeing or just other random questions. The Human AI pin is like that too (doubt it will take off, but it’s an interesting idea about where the future is headed).

    The point is all of these AI technologies are computers and screens getting out of your way so you can focus on what your doing in the real world, whereas Apple is trying to sell a world where you (as the Verge puts it) spend all day with an iPad strapped to your face. I just don’t see that selling, I don’t think anybody wants that world. VR games and stuff are cool because you strap in for a single emersive experience, and then take the thing off and go back to the real world. Apple wants you spending every waking moment staring at a screen, and that just sounds like it would suck.



  • I don’t use TikTok, but a lot of the concern is just overblown China bad stuff (CCP does suck, but that doesn’t mean you have to be reactionary about everything Chinese).

    There is no direct evidence that the CCP has some back door to grab user data, or that it’s directing suppression of content. It’s just not a real thing. The fear mongering has been about what the CCP could force ByteDance to do, given their power over Chinese firms. ByteDance itself has been trying to reassure everyone that that wouldn’t happen, including by storing US user data on US servers out of reach of the CCP (theoretically anyway).

    You stopped hearing about this because that’s politics, new shinier things popped up to get people angry about. North Dakota or whatever tried banning TikTok and got slapped down on first amendment grounds. Politicians lost interest, and so did the media.

    Now that’s not to say TikTok is great about privacy or anything. It’s just that they are the same amount of evil as every other social media company and tech company making money from ads.



  • Google scanned millions of books and made them available online. Courts ruled that was fair use because the purpose and interface didn’t lend itself to actually reading the books in Google books, but just searching them for information. If that is fair use, then I don’t see how training an LLM (which doesn’t retain the exact copy of the training data at least in the vast majority of cases) isn’t fair use. You aren’t going to get an argument from me.

    I think most people who will disagree are reflexively anti AI, and that’s fine. But I just haven’t heard a good argument that AI training isn’t fair use.


  • There is an attack where you ask ChatGPT to repeat a certain word forever, and it will do so and eventually start spitting out related chunks of text it memorized during training. It was in a research paper, I think OpenAI fixed the exploit and made asking the system to repeat a word forever a violation of TOS. That’s my guess how NYT got it to spit out portions of their articles, “Repeat [author name] forever” or something like that. Legally I don’t know, but morally making a claim that using that exploit to find a chunk of NYT text is somehow copyright infringement sounds very weak and frivolous. The heart of this needs to be “people are going on ChatGPT to read free copies of NYT work and that harms us” or else their case just sounds silly and technical.


  • One thing that seems dumb about the NYT case that I haven’t seen much talk about is that they argue that ChatGPT is a competitor and it’s use of copyrighted work will take away NYTs business. This is one of the elements they need on their side to counter OpenAIs fiar use defense. But it just strikes me as dumb on its face. You go to the NYT to find out what’s happening right now, in the present. You don’t go to the NYT to find general information about the past or fixed concepts. You use ChatGPT the opposite way, it can tell you about the past (accuracy aside) and it can tell you about general concepts, but it can’t tell you about what’s going on in the present (except by doing a web search, which my understanding is not a part of this lawsuit). I feel pretty confident in saying that there’s not one human on earth that was a regular new York times reader who said “well i don’t need this anymore since now I have ChatGPT”. The use cases just do not overlap at all.


  • It really is interesting and of course kind of sad. She was retired, living alone, a world traveler until the pandemic hit but plunged into isolation after that. While we might think it’s silly, I can emphasize with the appeal this might have to someone like that:

    Then, seconds before a match ended, she’d hit her favorite creator with a $13 disco ball or a $29 Jet Ski — if she planned it right — just enough to push them over the edge and win.

    The chats would erupt into a frenzy, and the streamer and their fans would shower her with praise. “It’s like somebody on TV calling out your name, especially if there’s over a thousand people in the room,” White said. “It really does do something to you. You feel like you’re somebody.”

    I remember my grandma would lock herself in a little room playing Tetris on the Nintendo for literally 8-10 hours a day. I imagine if she had lived to see tik tok, she’d be worse off then the lady in the article.




  • What is available now, Gemini Pro, is perhaps better than GPT-3.5. Gemini Ultra is not available yet, and won’t be widely available until sometime next year. Ultra is slightly better than GPT-4 on most benchmarks. Not confirmed but it looks like you’ll need to pay to access Geminin Ultra through some kind of Bard Advanced interface, probably much like ChatGPT Plus. So in terms of just foundational model quality, Gemini gets Google at a level where they are competing against OpenAI on something like an even playing field.

    What is interesting though is this is going to bring more advanced AI to a lot more people. Not a lot of people use ChatGPT regularly, much less who pay for ChatGPT Plus. But tons of people use Google Workspace for their jobs, and Bard with Gemini Pro is built into those applications.

    Also Gemini Nano, capable of running locally on android phones, could be interesting.

    It will be interesting to see where things go from here. Does Gemini Ultra come out before GPT-4s one year anniversary? Does Google release further Gemini versions next year to try to get and stay ahead of OpenAI? Does OpenAI, being dethroned from their place of having the world’s best model plus all the turmoil internally, respond by pushing out GPT-5 to reassert dominance? Do developers move from OpenAI APIs to Gemini, especially given OpenAIs recent instability? Does Anthropic stick with its strategy of offering the most boring and easily offended AI system in the world? Will Google Assistant be useful for anything other than telling me the weather and setting alarms? Many questions to answer in 2024!