
FaceDeer

@FaceDeer@kbin.social

Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.

Spent many years on Reddit and is now exploring new vistas in social media.


FaceDeer,

Not to mention that a response "containing" plagiarism is a pretty poorly defined criterion. The system being used here is proprietary so we don't even know how it works.

I went and looked at how low the scores for theater and the like were, and it's dramatic:

The lowest similarity scores appeared in theater (0.9%), humanities (2.8%) and English language (5.4%).

FaceDeer,

Article mentioned 400-word chunks, so much less than paper-sized.
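For a rough sense of what that means in practice, here's what splitting a response into 400-word chunks might look like. The chunk size is the only detail taken from the article; everything else (no overlap, the function name) is just my own guess, since the actual detection system is proprietary:

```python
def chunk_words(text: str, chunk_size: int = 400) -> list[str]:
    """Split a response into consecutive chunks of roughly chunk_size words."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

# A 10,000-word response becomes ~25 chunks, each far smaller than a paper.
```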

FaceDeer,

Those recent failures only come across as cracks to people who saw AI as magic in the first place. What they're really cracks in is people's misperceptions about what AI can do.

Recent AI advances are still amazing and world-changing. People have been spoiled by science fiction, though, and are disappointed that it's not the person-in-a-robot-body kind of AI that they imagined they were being promised. Turns out we don't need to jump straight to that level to still get dramatic changes to society and the economy out of it.

I get strong "everything is amazing and nobody is happy" vibes from this sort of thing.

FaceDeer,

I actually think public perception is not going to be that big a deal one way or the other. A lot of decisions about AI applications will be made by businessmen in boardrooms, and people will be presented with the results without necessarily even knowing that it's AI.

FaceDeer,

Conversely, there are way too many people who think that humans are magic and that it's impossible for AI to ever do <insert whatever is currently being debated here>.

I've long believed that there's a smooth spectrum between not-intelligent and human-intelligent. It's not a binary yes/no sort of thing. There are inert rocks at one end and humans at the other, and everything else gets scattered at various points in between. So I think it's fine to discuss where exactly on that scale LLMs fall, and accept the possibility that they're moving in our direction.

FaceDeer,

And even if local small-scale models turn out to be optimal, that wouldn't stop big business from using them. I'm not sure what "it" is being referred to with "I hope it collapses."

FaceDeer,

There was an interesting paper published just recently titled Generative Models: What do they know? Do they know things? Let's find out! (a lot of fun names and titles in the AI field these days :) ) that does a lot of work actually analyzing what an AI image generator "knows" about what it's depicting. These models seem to have an awareness of three-dimensional space, of light and shadow and reflectivity, lots of things you wouldn't necessarily expect from something trained just on 2-D images tagged with a few short descriptive sentences. This article from a few months ago also delved into this; it showed that when you ask a generative AI to create a picture of a physical object, the first thing it does is come up with the three-dimensional shape of the scene before it starts figuring out what it looks like. Quite interesting stuff.

FaceDeer,

Call it whatever makes you feel happy; it's letting me accomplish things much more quickly and easily than I could without it.

FaceDeer,

Indeed, and many of the more advanced AI systems currently out there are already using LLMs as just one component. Retrieval-augmented generation, for example, adds a separate "memory" that gets searched and bits inserted into the context of the LLM when it's answering questions. LLMs have been trained to be able to call external APIs to do the things they're bad at, like math. The LLM is typically still the central "core" of the system, though; the other stuff is routine sorts of computer activities that we've already had a handle on for decades.
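A bare-bones sketch of that retrieval-augmented pattern, with the "memory" reduced to a list of documents and the retriever reduced to naive keyword overlap (all the names here are my own placeholders, not any particular framework's API; a real system would use an embedding model and a vector database):

```python
def retrieve(memory: list[str], question: str, top_k: int = 2) -> list[str]:
    """Score each stored document by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(memory, key=lambda doc: -len(q_words & set(doc.lower().split())))
    return scored[:top_k]

def build_prompt(memory: list[str], question: str) -> str:
    """Insert the retrieved snippets into the LLM's context ahead of the question."""
    context = "\n".join(retrieve(memory, question))
    return f"Use the following notes to answer.\n\nNotes:\n{context}\n\nQuestion: {question}"

memory = [
    "The Treaty of Westphalia was signed in 1648.",
    "Photosynthesis converts light energy into chemical energy.",
]
print(build_prompt(memory, "When was the Treaty of Westphalia signed?"))
# The resulting prompt then goes to whatever LLM sits at the core of the system.
```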

IMO it still boils down to a continuum. If there's an AI system that's got an LLM in it but also a Wolfram Alpha API and a websearch API and other such "helpers", then that system should be considered as a whole when asking how "intelligent" it is.

FaceDeer,

And even if it were Google, these companies aren't magic. Once there's a proof of concept out there that something like this can be done, other companies will dump resources into catching up. Cue the famous "we have no moat" memo.

FaceDeer, (edited)

The term "AI" has a much broader meaning and use than the sci-fi "thinking machine" that people are interpeting it as. The term has been in use by scientists for many decades already and these generative image programs and LLMs definitely fit within it.

You are likely thinking of AGI, or artificial general intelligence. We don't have that yet, but these systems aren't intended to be AGI, so that's to be expected.

FaceDeer,

It's not the training data that's the problem here.

FaceDeer,

It's the "make some people non-white" kludge that's the specific problem being discussed here.

The training data skewing white is a different problem, but IMO not as big a one. The solution is simple, as I've discovered over many months of using local image generators: let the user specify exactly what they want.
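For what it's worth, with a local setup (this sketch assumes the Hugging Face diffusers library; the checkpoint and prompt are just examples) that fix is literally just writing the prompt yourself:

```python
from diffusers import StableDiffusionPipeline
import torch

# Load a locally runnable checkpoint; the model id here is just an example.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# No hidden prompt rewriting: the user states exactly who and what they want.
prompt = "portrait of a middle-aged Nigerian woman reading in a library, natural light"
image = pipe(prompt).images[0]
image.save("portrait.png")
```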

FaceDeer,

You'd need to gut the car completely and rebuild it; it would be more work than starting from scratch.

FaceDeer,

Negative examples are just as useful to train on as positive ones.

FaceDeer,

I've lost track, is AI a good thing today or a bad thing?

FaceDeer,

They're rolling it out gradually, as is customary for routine updates.

I'm not sure why this is worthy of a headline, frankly. This is how Microsoft typically does these things. I guess it's the "...with AI involved somehow!" bit in the title that makes it interesting? I expect that's going to get old fairly quickly.

FaceDeer,

Indeed, the level of obsession some people have with Elon Musk is kind of ridiculous.

FaceDeer,

Why do you think so, and why does it matter?

FaceDeer,

How dare they provide a useful tool like this, those bastards.

FaceDeer,

It's not exactly training, but Google just recently previewed an LLM with a million-token context that can do effectively the same thing. One of the tests they did was to put a dictionary for a very obscure language (only 200 speakers worldwide) into the context, knowing that nothing about that language was in its original training data, and the LLM was able to translate it fluently.
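Mechanically there's nothing exotic going on; it's just concatenation at a huge scale. A minimal sketch of the idea (the long_context_model.generate() call is a placeholder, not Google's actual API):

```python
def build_translation_prompt(dictionary: str, sentence: str) -> str:
    """Stuff the whole dictionary into the context, then ask for a translation."""
    return ("Below is a dictionary for a low-resource language.\n\n"
            f"{dictionary}\n\n"
            "Using only the material above, translate this sentence into English:\n"
            f"{sentence}")

# With a ~1M-token window the entire dictionary fits in one prompt, so the
# language never needs to appear in the training data at all.
# reply = long_context_model.generate(build_translation_prompt(dictionary, sentence))
```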

OpenAI has already said they’re not making that publicly available for now

This just means that OpenAI is voluntarily ceding the field to more ambitious companies.

FaceDeer,

Exactly. The article looked fine to me; if it was AI-written, then it did a good job.

FaceDeer,

Writing code to do math is different from actually doing the math. I can easily write "x = 8982.2 / 98984", but ask me what value x actually has and I'll need to do a lot more work and quite probably get it wrong.

This is why one of the common improvements for LLM execution frameworks these days is to give them access to external tools. Essentially, give it access to a calculator.
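A toy version of that, assuming the model emits something like {"tool": "calculator", "expression": "8982.2 / 98984"} and ordinary code does the arithmetic (the dispatch format here is made up purely for illustration):

```python
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculator(expression: str) -> float:
    """Safely evaluate a simple arithmetic expression passed back by the LLM."""
    def ev(node):
        if isinstance(node, ast.Constant):   # a bare number
            return node.value
        if isinstance(node, ast.BinOp):      # left <op> right
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expression, mode="eval").body)

print(calculator("8982.2 / 98984"))  # ~0.0907, no guessing required
```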

FaceDeer,

Sora's capabilities aren't really relevant to the competition if OpenAI isn't allowing it to be used, though. All it does is let the actual competitors know what's possible if they try, which can make it easier to get investment.
