![](https://beehaw.org/pictrs/image/dd5908b7-de40-4ea3-845e-10c86207a96c.png)
![](https://programming.dev/pictrs/image/170721ad-9010-470f-a4a4-ead95f51f13b.png)
- C++ is fine
- Python is fine
- C# is fine
- PHP is fine
- JavaScript is fine
- C is fine
- Java is fine
I could go on
Doesn’t know the lyrics. Just goes meow meow meow.
Also chaotic neutral: prioritizes issues by curiosity.
The only part I miss about going to the office. It’s not the same when you have to bake your own bribes.
If it listens and nods to the unedited, director’s cut version of my woes and frustrations, I’ll give it a cookie.
Thank you! Wow, they were truly ahead of their time. 🙃
What is this cursed place? The clickbait has eaten everything. uBlock should make this into a blank page.
How do you produce the coffee to power the Rust users?
Reducing emotion to voice intonation and facial expression is trivializing what it means to feel. This kind of approach dates from the 70s (promoted notably by Paul Ekman) and has been widely criticized from the get-go. It’s telling of the serious lack of emotional intelligence of the makers of such models. This field keeps redefining words pointing to deep concepts with their superficial facsimiles. If “emotion” is reduced to a smirk and “learning” to a calibrated variable, then of course OpenAI will be able to claim grand things based on that amputated view of the human experience.
Wrong article?
The actual research page is so awkward. The TLDR at the top goes:
> single portrait photo + speech audio = hyper-realistic talking face video
Then a little lower comes the big red warning:
> We are exploring visual affective skill generation for virtual, interactive characters, NOT impersonating any person in the real world.
No siree! Big “not what it looks like” vibes.
Yeah, their reporting suffers from not adequately defining what is being measured.
From the org’s definition of bots, I’d say it’s implicit that bot activity excludes expected communication in an infrastructure, client-server or otherwise. A bot is historically understood as an unexpected, nosy guest poking around a system. A good one might be indexing a website for a search engine. A bad one might be scraping email addresses for spammers.
In any case, none of the examples you give can be reasonably categorized as bots and the full report gives no indication of doing so.
Can you start by providing a little background and context for the study? Many people might expect that LLMs would treat a person’s name as a neutral data point, but that isn’t the case at all, according to your research?
Ideally, when someone submits a query to a language model, what they would want to see, even if they add a person’s name to the query, is a response that is not sensitive to the name. But at the end of the day, these models just generate the most likely next token (or the most likely next word) based on how they were trained.
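That greedy next-token step can be sketched in a few lines. The names and every probability below are made up purely to illustrate how a name appearing in the context can shift the most likely continuation; no real model or dataset is being quoted:

```python
# Toy sketch of next-token selection: an LLM emits the most likely
# continuation given its context, so if training data associated
# different continuations with different names, the output shifts
# with the name in the prompt.
# All entries below are invented for illustration only.
next_token_probs = {
    "Name A": {"teacher": 0.5, "engineer": 0.3, "doctor": 0.2},
    "Name B": {"teacher": 0.2, "engineer": 0.3, "doctor": 0.5},
}

def most_likely_next(name):
    """Pick the highest-probability continuation for a given context key."""
    probs = next_token_probs[name]
    return max(probs, key=probs.get)

print(most_likely_next("Name A"))  # prints teacher
print(most_likely_next("Name B"))  # prints doctor
```

Real models sample from a distribution over tens of thousands of tokens rather than a two-row table, but the point stands: the name is just another conditioning token, not a neutral data point.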
LLMs are being sold by tech gurus as lesser general AIs and this post speaks at least as much about LLMs’ shortcomings as it does about our lack of understanding of what is actually being sold to us.
Good to hear, I’ll check it out again and make sure I’m not having an issue on my end.
Does anyone actually use offline installers on a regular basis? I tried a few times and I had problems. Dunno if just bad luck. Never managed to install Pillars of Eternity with it because it errored out every time. Another game’s offline installer (can’t remember which) would stall for hours then crash. I suspect a lot of users would be in for a surprise if they actually tried them.
I know you’re not alone with the opinion that a website asking for an email address to create an account is dangerous, but frankly I still don’t understand the slippery slope argument attached to it. There are laws governing email marketing nowadays (CAN-SPAM in the US), as any actual business fucking around will find out.
In my humble opinion, an important lesson we can take from the last decades of the web is to be wary of a private free lunch. The Google search engine has never required an email and yet today they sit on an empire based on the exploitation of our data. In that sense, paying for a service is much more honest than mining the users’ privacy and selling it to advertisers (as mentioned in some hermetic Terms & Conditions). The system may not be perfect, but asking for an email address is the least invasive way to recognize someone who paid for a service.
Also, what do you mean by “self-protectionism”? It sounds like a derogatory euphemism for “making a living”. It’s fine for four journalists to live from their profession. I think paying human-sized businesses for services is quite different from doing the same with disruptive, market-devouring corporations.
I wish I shared your confidence. Mozilla jumped on the VR hype, then the Metaverse hype and now they’re specifically betting on generative AI. It’s leaving me feeling as suspicious as the article’s author about Mozilla’s latest ventures.
Does anyone know of any good Firefox hard forks?
`Kernel/Syscalls/jail.cpp` includes the gender-neutral “they” as well. Good on them for merging that PR.