Hi! I am Creesch, also creesch on other platforms :)

  • 0 Posts
  • 10 Comments
Joined 1 year ago
Cake day: July 2nd, 2023

  • What do you mean by “it”? The ChatGPT interface? Could be, but then you are also missing the point I am making.

    After all, ChatGPT is just one possible implementation of an LLM, and indeed not perfect in how some things, like search, are implemented. In fact, I do think they shot themselves in the foot by implementing search through Bing and implementing it poorly. It is basically nothing more than a proof-of-concept tech demo.

    That doesn’t mean that LLMs are useless for tasks like searching, it just means that you need to properly implement the functionality to make it possible. It certainly is possible to implement search functionality around LLMs that is both capable and can be reviewed by a human user to make sure it is not fucking up.

    Let me demonstrate. I am doing some steps that you would normally automate with conventional code:

    I started by asking ChatGPT a simple question:

    *“I want to provide a list of all the children Bach had. What would be a suitable Google query?”*

    It then responded with:

    “To compile a comprehensive list of Johann Sebastian Bach’s children, a suitable Google query would be: “List of Johann Sebastian Bach’s children names”. This query is specific enough to indicate that you’re looking for a detailed list, including the names of all the children, which should yield targeted results.”

    The following step I did manually, but it is something you would normally automate. I put the suggested query into Google, grabbed the first 5 links, and then put the following into ChatGPT:

    The top 5 results in google are the pages I provided below. Browse to each page, and answer the query “List of Johann Sebastian Bach’s children names”. Just answer the query, but let me know if there is conflicting information on the page.

    https://en.wikipedia.org/wiki/Bach_family
    https://www.classicfm.com/composers/bach/guides/children/
    https://www.bachonbach.com/100-bach-faq-and-soon-there-are-500-bach-faq/faq-26-what-were-the-names-of-all-of-johann-sebastian-bach-s-kids/
    https://www.spiritsound.com/music/bachchildren.html
    https://www.grahamsmusic.net/post/bach-s-wives-and-children

    It then proceeded to give me the following answer:

    Johann Sebastian Bach had 20 children in total. With his first wife, Maria Barbara Bach, he had seven children, four of whom survived into adulthood: Catharina Dorothea, Wilhelm Friedemann, Carl Philipp Emanuel, and Johann Gottfried Bernhard. With his second wife, Anna Magdalena Wilcke, he had 13 children, of whom six survived into adulthood: Gottfried Heinrich, Elisabeth Juliane Friederica, Johanna Carolina, Regina Susanna, Johann Christoph Friedrich, and Johann Christian.

    Going over the search results myself seems to confirm this list. Most importantly, except for the initial input, all of this can be automated. And of course, a lot of it can be done better, as I didn’t want to spend too much time.
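    To make the “all of this can be automated” point concrete, here is a hypothetical sketch of the steps above. Everything in it is a placeholder of my own naming: `llm()` stands in for whatever LLM API you would actually call, and the `search` argument would wrap a real search API instead of pasting links in by hand.

    ```python
    # Hypothetical sketch of automating the manual steps above.
    # llm() is a placeholder for a real LLM API call; the `search` callable
    # would wrap a real search API instead of copy-pasting results by hand.

    def llm(prompt: str) -> str:
        """Placeholder: send `prompt` to an LLM API and return its reply."""
        raise NotImplementedError

    def build_query_prompt(task: str) -> str:
        # Step 1: ask the LLM to turn the task into a search query.
        return f"I want to {task}. What would be a suitable Google query?"

    def build_answer_prompt(query: str, urls: list[str]) -> str:
        # Step 2: hand the top search results back to the LLM for review.
        pages = "\n".join(urls)
        return (
            f"The top {len(urls)} results in Google are the pages below. "
            f'Browse to each page and answer the query "{query}". '
            "Just answer the query, but let me know if there is "
            "conflicting information on the pages.\n" + pages
        )

    def answer(task: str, search) -> str:
        # Full pipeline: query generation -> search -> grounded answer.
        query = llm(build_query_prompt(task))
        return llm(build_answer_prompt(query, search(query)))
    ```

    The only non-mechanical parts are the two `llm()` calls, and a human can still review the generated query, the result list, and the final answer at every step.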



  • True, though that isn’t all that different from people giving knee-jerk responses on the internet…

    I am not claiming they are perfect, but for the steps I described, a human aware of the limitations is perfectly able to validate the outcome, while still saving a bunch of time and effort on the initial search pass.

    All I am saying is that it is fine to be critical of LLM and AI claims in general, as there is a lot of hype going on. But some people seem to lean towards the “they just suck, period” extreme end of the spectrum. Which is no longer being critical, but just being a reverse fanboy/girl/person.


  • I don’t know how to say this in a less direct way. If this is your take, then you probably should look to get slightly more informed about what LLMs can do. Specifically, what they can do if you combine them with some code to fill the gaps.

    Things LLMs can do quite well:

    • Generate useful search queries.
    • Dig through provided text to determine what it contains.
    • Summarize text.

    These are all the building blocks for searching on the internet. If you are talking about local documents and such, retrieval-augmented generation (RAG) can be pretty damn useful.
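    As a toy illustration of the retrieval half of RAG, here is a minimal sketch of my own that scores local documents by keyword overlap with the question. Real systems typically use vector embeddings instead, but the shape of the pipeline is the same: retrieve the most relevant text, then hand only that text to the LLM.

    ```python
    # Minimal retrieval sketch for RAG. Keyword overlap is a stand-in for
    # the embedding similarity a real system would use.

    def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
        """Return the k documents sharing the most words with the question."""
        terms = set(question.lower().split())
        return sorted(
            documents,
            key=lambda doc: len(terms & set(doc.lower().split())),
            reverse=True,
        )[:k]

    def build_rag_prompt(question: str, documents: list[str]) -> str:
        # The retrieved snippets become the only context the LLM may use.
        context = "\n".join(retrieve(question, documents))
        return f"Using only the text below, answer: {question}\n{context}"
    ```

    The point is that the LLM never has to “know” the documents; the surrounding code narrows them down, which also makes the result easy for a human to double-check.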




  • Long-term wearing of VR headsets might indeed not be all that good. Though the article is light on actual information and is mostly speculation, which for the Apple Vision Pro can only be the case, as it hasn’t been out long enough to conduct anything more than a short-term experiment. That leaves very little in the way of long-term data points.

    As for the experiment they did, some information was provided (although not much), and this bit stood out to me:

    The team wore Vision Pros and Quests around college campuses for a couple of weeks, trying to do all the things they would have done without them (with a minder nearby in case they tripped or walked into a wall).

    I wonder why the Meta Quests were not included in the title. If it is the Meta Quest 3, it is fairly capable as far as passthrough goes, but not nearly as good as I understand the Apple Vision Pro’s passthrough to be. I am not saying the Apple Vision Pro is perfect; it isn’t, if the reviews I have seen are any indicator. It is still very good, but there is still distortion around the edges of vision, etc.

    But given the price difference between the two, I am wondering if the majority of the participants actually used Quests, as then I’d say that the next bit is basically a given:

    They experienced “simulator sickness” — nausea, headaches, dizziness. That was weird, given how experienced they all were with headsets of all kinds.

    VR nausea is a known thing that even experienced people will get. Walking around with these devices, with the distorted views you get, is bound to trigger it. Certainly with the passthrough distortion I have seen in Quest 3 videos. I’d assume there are no Quest 2 headsets in play, as the passthrough there is just grainy black-and-white video. :D

    Even Apple, with all their fancy promo videos, mostly shows people using the Vision Pro sitting down or indoors walking short distances.

    So yeah, certainly with the current state of technology I am not surprised there are all sorts of weird side effects and distorted views of reality.

    What I’d be more interested in, but what is not really possible to test yet, is what the effects will be when these devices become even better. To the point where there is barely a perceivable difference in having them on or off. That would be, I feel, the point where some speculated downsides from the article might actually come into play.


  • They’re for different needs.

    Yes… but also extremely no. Superficially you are right, but a lot of the reasons new distros are created come down to human nature. This covers everything from infighting over inane issues to more pragmatic reasons. A lot of them, probably even a majority, don’t provide enough actual differentiators to honestly claim that they exist because of different needs. In the end it all boils down to the fact that people can just create a new distro when they feel like it.

    Which is a strength in one way, but not with regard to fragmentation.


  • I am not quite sure why there are all these bullet points that have very little to do with the actual issue.

    Researchers at the University of Wisconsin–Madison found that Chrome browser extensions can still steal passwords, despite compliance with Chrome’s latest security standard, Manifest V3.

    I am not sure how Manifest V3 is relevant here? Nothing in Manifest V3 suggests that content_scripts can’t access the DOM.

    The core issue lies in the extensions’ full access to the Document Object Model (DOM) of web pages, allowing them to interact with text input fields like passwords.

    I’d also say this isn’t directly the issue. Yes, content_scripts needing an extra permission to be able to access password input fields would help, of course.

    Analysis of existing extensions showed that 12.5% had the permissions to exploit this vulnerability, identifying 190 extensions that directly access password fields.

    Yes… because accessing the DOM and interacting with it is what browser extensions do. If anything, that 12.5% feels low, so I am going to guess it is the combination of accessing the DOM and being able to phone home with that information.

    A proof of concept extension successfully passed the Chrome Web Store review process, demonstrating the vulnerability.

    This, to me, feels like the core of the issue right now. The behavior described has always been part of browser extensions, and Manifest V3 didn’t change that or claim to, as far as I know. So that isn’t directly relevant right now. I’d also say that Firefox is just as much at risk here. Their review process has changed a lot over the years and isn’t always as thorough as people tend to think it is.

    Researchers propose two fixes: a JavaScript library for websites to block unwanted access to password fields, and a browser-level alert system for password field interactions.

    “A JavaScript library” is not going to do much against the content_scripts of extensions accessing the DOM.

    The alert system seems better indeed, but that might as well become a browser extension permission.

    To be clear, I am not saying that all is fine and there are no risks. I just think that the bullet point summary doesn’t really focus on the right things.