Neat!

Now just imagine, ChatGPT gets access to all the patent files and you can build just about anything in your garage! ChatGPT is not on my favorites list. “Shall we play a game?” (just showing my age) or how about Terminator?

4 Likes

Greetings, Professor Falken.

Not too worried about Skynet, just yet… But I, for one, welcome our new AI overlords. :rofl:

2 Likes

Well, actually ChatGPT, Bard, and every other LLM/generative engine has a paid version where your data doesn’t serve as training data and isn’t shared with others :wink:

I believe that was my thread.

My boss is still pushing for this. Thanks for the link!

1 Like

Did anyone see the 60 Minutes story this past Sunday, the one with the godfather of AI?

Kind of gave me the feeling we are closer to a problem than we think. Just hope it’s after my time on this planet.
It’s a race to see which will be first: WarGames, Terminator, or I, Robot. It’s sort of amazing to me that people were that forward-thinking, that long ago, before AI’s latest releases to the public.

Tied to this, I’m exhausted with the various sources accusing things of being AI that aren’t, like calling simple if/then workflows AI. If that were the case, we had AI before the godfather of AI accidentally created it in the ’70s.

Stepping back off my soapbox, my apologies for the vent.

2 Likes

That’s how it all begins, then they add fuzzy logic and then you are in trouble!! :smiley:

The lines are very blurred at the moment.

In one of my other lives I am dealing with the complexities of AI and photographic competitions - almost every function of modern processing software relies to an extent on true AI: “subject selection,” “remove object,” and so on. Every smartphone uses it to process the data it collects and turn it into what we would call an image.

Presently the rules say “must be comprised entirely of information captured through a lens”, but how would anyone know?

I often remind myself that AI is only as good as its training regime, for better or for worse. I recall stories of early neural nets that were trained to find tanks in photos of forests and then couldn’t find tanks in fields, but thought that every forest had tanks. And let’s not dismiss the pervasive gender and racial bias currently exhibited by most AIs on the market.

OK, off Steve’s soapbox (thanks for letting me borrow it!)

If AI ever makes good on the “LoTR by Wes Anderson” trailer/meme that I’ve seen around, I’m watching it. I’ll donate to SAG/AFTRA as penance.

1 Like

Garbage in, garbage out.

I guess that’s kind of my takeaway from the now two 60 Minutes stories on AI. Don’t go by my opinion; look for the story named “The godfather of AI”. Surely it’s out there on the web (unless the AI keeps deleting it). Humans aren’t teaching the true AI anymore. In the case of four small robots playing football (soccer), they only told it the task was to get the ball in the goal. Then they watched it learn for X amount of time and develop strategies, then wiped the system to start all over again and watched what it learned on the second try; lather, rinse, repeat, over and over, watching how it learns so they can see the subtle differences each time. But no garbage in, it’s creating its own. Then there’s ChatGPT making up stories, or lies, depending how you want to interpret the data.

Sometimes amazing, sometimes scary.

1 Like

Well, when the robots rise, I know who they’ll get first…

1 Like

So one group of people is teasing the robots and beating them with sticks…

And another group is teaching them how to use manufacturing processes to create the parts they need to build more robots by themselves…

Awesome…

1 Like

I’ve been a little obsessed lately trying to figure out what real practical utility I can squeeze out of these AI tools. There are a lot of crap demos but I feel like there is real potential that hasn’t yet been tapped.

My current sense is that the missing piece is developing some procedures around the AI LLMs, something like the McDonald’s handbook or CMMI but different, to steer the AI outputs toward results with known properties.

Like taking the CAM demo from above but actually plugging it into the CNC with some load cells and letting it make test cuts until it knows exactly what to do?

2 Likes

Or having it optimize the path

Since no-one knows what’s going to happen here, I’ll just drop this and run.

This relates specifically to AI in visual media, but I imagine a similar logic will follow elsewhere.
Eighteen months ago I was given a detailed run-down on a new text-to-image AI system that a large ad agency was exploring. At that early stage of the tech, they had two people working full time “training” the AI to output in the company style.

Six months ago, those people had the title “prompt engineer”, and there were twelve of them on large six-figure salaries.

The consensus among them is that twelve months from now there will be one left, whose job will be to continue steering the outputs, as you say, and to keep the machine from being influenced by outside sources.

In the graphics world there is a widely held theory that the overwhelming amount of public data available could result in AI creating the “perfect” amalgamation. You would eventually end up with a “grey” result, similar to what you would get if you mixed every colour into one pot, so that every publicly available generator produces a single style, with fashion evolution slowed by the mass of data already analysed.

I don’t know how that translates to robotics, but I think it’s an interesting aside.
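That “one pot” intuition is easy to check numerically. Here’s a throwaway sketch (random colours standing in for a real image corpus, so the numbers are illustrative only):

```python
import random

# Rough numeric version of the "mix every colour into one pot" idea:
# averaging a large pile of random RGB colours drifts every channel
# toward mid-grey. Random colours are a stand-in for real images.
random.seed(0)  # arbitrary seed so the run is repeatable
colours = [
    (random.randrange(256), random.randrange(256), random.randrange(256))
    for _ in range(100_000)
]

# Per-channel mean over the whole "corpus"
mean = tuple(sum(c[i] for c in colours) / len(colours) for i in range(3))
print(mean)  # each channel lands near 127.5, i.e. grey
```

Real image datasets aren’t uniformly random, of course, but the more styles you blend, the closer the blend sits to the middle.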

3 Likes

Open the pod bay doors, HAL.

1 Like

The ground robotics problem was supposed to be easy to solve. And it is, for the first few months.

One unexpected issue that a lot of algorithms hit is the “fork in the road” problem. If you consider every possible solution, choose the best ones, and then take an average, you usually get something pretty good. Good enough to get a big chunk of that delicious VC money. :stuck_out_tongue:

But when you try it in a forest, it finds a fork in the road, averages a perfectly good solution to the left with a perfectly good solution to the right, and slams into a tree.

It sounds silly, but I promise you $100M+ has been spent on algorithms that suffer in this context.
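A toy sketch of that failure mode (made-up numbers, not any real planner):

```python
# Two equally good steering commands around an obstacle average out
# to a command aimed straight at it. Angles are hypothetical radians.

def average_steering(candidates):
    """Naive fusion: blend the top candidate steering angles."""
    return sum(candidates) / len(candidates)

def commit_to_one(candidates):
    """Mode-aware fusion: pick one candidate instead of blending.
    The tie-break rule here is arbitrary, just for illustration."""
    return max(candidates, key=abs)

left, right = -0.5, 0.5   # steer around the tree either way
tree_ahead = 0.0          # the tree sits at steering angle 0

blended = average_steering([left, right])
print(blended == tree_ahead)   # True -> straight into the tree

chosen = commit_to_one([left, right])
print(abs(chosen - tree_ahead) > 0.0)  # True -> actually misses it
```

The real fix is harder than picking a winner, but the point is the same: the fusion step has to respect that the solution space is multi-modal.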

Any solution needs to be smarter than combining everything into an average. Just as those “prompt engineers” trained the model to have the style of the agency, what makes it valuable is its ability to make a distinctive solution that is not average.

The prompt engineer is a good title too. So many times I have seen people getting decent first drafts from the LLM on basic software problems. But they need someone smart to give it the right prompt and to evaluate and edit its response. Eventually, that person will be more effective than me at designing, writing, testing, and delivering code. I will have to adapt or be out of a job. I am pretty sure that day hasn’t come. But maybe it’s just on the other side of the horizon and coming at me fast. :person_shrugging:

3 Likes

Nah, those AI-based code sets will have subtle and obscure problems. You’ll be even more valuable because, unlike the person who just gets first drafts, you’ll actually understand the underlying software engineering. You’ll be the only one left who can try to fix that mess.

3 Likes

Came across this and thought it was pretty neat…

4 Likes