Cellular Automata

Clinamenic

Binary & Tweed
I think we're entering a serious renaissance-like period now with AI-assisted creativity and work. Sure, there are a bunch of serious risks we'll need to contend with, but the fact that I can now use Python without really needing to know Python is kinda profound. The coding, combined with the image creation, high-level strategy, etc., can make for a pretty staggering toolbox.

You can even use ChatGPT to refine the prompts you feed back into it! That is, you can tell it what you want it to do, but let it tell itself how to do it, in order to maximize prompt efficacy.
 

Mr. Tea

Let's Talk About Ceps
@william_kent @wektor I just had ChatGPT write a bunch of Python code to create a cellular automaton animation which could be parameterized (cell size, grid size, color, rules, etc.) and automatically exported as a GIF - and it actually worked! I had to splice together some pieces from a few of ChatGPT's attempts, but ultimately it works now:

[attached GIF of the cellular automaton animation]
This must have profound implications for the future of programmers as a profession, doesn't it? I mean, why pay someone to write a routine to achieve, say, a certain shading effect in a graphics engine for a game when you can ask ChatGPT to write it for you? Granted, that sort of task is orders of magnitude more complex than the one you've given it, but it's probably only a matter of time, right?
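For reference, here is a minimal sketch of the kind of script the quoted post describes: a parameterized Game of Life animation exported as a GIF. This is not the code ChatGPT actually produced (that isn't reproduced in the thread); it assumes numpy and Pillow are installed, and the parameter names (grid_size, cell_size, density, seed) are purely illustrative.

```python
# Minimal sketch of a parameterized Game of Life GIF exporter.
# Assumes numpy and Pillow; not the original code from the post.
import numpy as np
from PIL import Image

def step(grid):
    """One Game of Life update with toroidal (wrap-around) neighbours."""
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(np.uint8)

def render(grid, cell_size, alive=(255, 255, 255), dead=(0, 0, 0)):
    """Scale each cell up to cell_size x cell_size pixels and colour it."""
    rgb = np.where(grid[..., None] == 1, alive, dead).astype(np.uint8)
    return Image.fromarray(rgb).resize(
        (grid.shape[1] * cell_size, grid.shape[0] * cell_size), Image.NEAREST
    )

def make_gif(path, grid_size=64, cell_size=8, frames=100, density=0.2, seed=None):
    """Run the automaton from a random start and save the frames as an animated GIF."""
    rng = np.random.default_rng(seed)
    grid = (rng.random((grid_size, grid_size)) < density).astype(np.uint8)
    images = []
    for _ in range(frames):
        images.append(render(grid, cell_size))
        grid = step(grid)
    images[0].save(path, save_all=True, append_images=images[1:], duration=80, loop=0)

make_gif("life.gif", grid_size=80, cell_size=6, frames=120, seed=42)
```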
 

Mr. Tea

Let's Talk About Ceps
This must have profound implications for the future of programmers as a profession, doesn't it? I mean, why pay someone to write a routine to achieve, say, a certain shading effect in a graphics engine for a game when you can ask ChatGPT to write it for you? Granted, that sort of task is orders of magnitude more complex than the one you've given it, but it's probably only a matter of time, right?
I apologise for writing a post consisting of three questions.
 

Clinamenic

Binary & Tweed
This must have profound implications for the future of programmers as a profession, doesn't it? I mean, why pay someone to write a routine to achieve, say, a certain shading effect in a graphics engine for a game when you can ask ChatGPT to write it for you? Granted, that sort of task is orders of magnitude more complex than the one you've given it, but it's probably only a matter of time, right?
I mean there are already ways people are using ChatGPT 4 to run physics simulations, so I'd imagine there's some pretty advanced 3D rendering/shading stuff you can do with it.

But yeah AI is gonna be automating a lot of mid-level managerial and creative work, including programming, graphic design, basic legal consulting, copywriting, marketing strategy, education, therapy, etc.

But yeah, I'm enjoying it while it's still free and in this research/beta phase. I already pay for the Midjourney image-generation AI, but I'm not sure how much I'd pay to keep using ChatGPT.
 

Leo

Well-known member
So AI can provide the equivalent of a best practice on a task, but is it able to innovate? It works off knowledge of what's been done in the past, but does it have the ability to come up with a new way of doing the task that might be better?
 

Clinamenic

Binary & Tweed
So AI can provide the equivalent of a best practice on a task, but is it able to innovate? It works off knowledge of what's been done in the past, but does it have the ability to come up with a new way of doing the task that might be better?
I mean it can write new code, create new images, write new text, and in general demonstrate creativity and innovation in that respect. It can also apparently create fictional languages (and potentially even novel programming languages). But I'd imagine the approach most or all of these AI models take toward innovation is just brute-force probabilistic innovation, i.e. trying out a bunch of novel combinations to see what the user likes. As far as I know, it can't really have any opinions of its own, so it might not be able to internally judge how innovative its own methods are, and thus might not be able to develop radically innovative solutions.
 

wektor

Well-known member
The big issue is that it's often far from ideal with more specific solutions. It's difficult to get it to do array slicing right in Python, so I definitely wouldn't trust it with something as complex as physics modelling; it's pretty much a roulette. If you're curious, try to get it to draw a landscape using P5.js. I suggested that a few of my students try it, and it takes 10x more time than doing it yourself.
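As a concrete (hypothetical) example of the kind of slicing detail that tends to go wrong: extracting a clamped neighbourhood from a 2D numpy array, where a naive negative start index silently produces an empty slice instead of an error. The grid and indices below are made up for illustration.

```python
# Hypothetical example of a Python/numpy slicing pitfall.
import numpy as np

grid = np.arange(36).reshape(6, 6)
r, c = 0, 5  # a cell on the top edge, where r - 1 goes negative

# Naive: grid[r-1:r+2, ...] becomes grid[-1:2, ...], an empty slice rather than an error.
naive = grid[r - 1:r + 2, c - 1:c + 2]

# Correct: clamp the lower bound to 0 explicitly.
neigh = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]

print(naive.shape, neigh.shape)  # (0, 2) vs (2, 2)
```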
 

Slothrop

Tight but Polite
The ultimate limitation for the application of AI to programming seems to be that once you get out of fairly well-known toy examples you still fundamentally need to be able to specify the behavior that you want, and specifying the behavior that you want (and often even figuring out what behavior you do want) is essentially what programmers do, albeit at a slightly lower level. I mean, it has the potential to be quite a big multiplier for the effectiveness of programmers, in the same way that high-level languages and libraries and IDEs were relative to everyone fucking around writing assembler on punch cards, but we still seem to be a way off the point where management can just tell the AI what features they want and have them pop out of the machine ready to deploy.
 

Clinamenic

Binary & Tweed
The ultimate limitation for the application of AI to programming seems to be that once you get out of fairly well-known toy examples you still fundamentally need to be able to specify the behavior that you want, and specifying the behavior that you want (and often even figuring out what behavior you do want) is essentially what programmers do, albeit at a slightly lower level. I mean, it has the potential to be quite a big multiplier for the effectiveness of programmers, in the same way that high-level languages and libraries and IDEs were relative to everyone fucking around writing assembler on punch cards, but we still seem to be a way off the point where management can just tell the AI what features they want and have them pop out of the machine ready to deploy.
Totally agree. I see this as mostly being beneficial for programmers, who can outsource a lot of the legwork to AI and then check and refine it. But then again, in some situations it may be just as much work to debug AI code as it would be to just write it yourself.
 

william_kent

Well-known member
@william_kent @wektor I just had ChatGPT write a bunch of Python code to create a cellular automaton animation which could be parameterized (cell size, grid size, color, rules, etc.) and automatically exported as a GIF - and it actually worked! I had to splice together some pieces from a few of ChatGPT's attempts, but ultimately it works now:

[attached GIF of the cellular automaton animation]

I meant to reply to this at the time, but I was on holiday in Taiwan. I wanted to mention my various arguments with ChatGPT while getting it to implement CA algorithms in "the C programming language", but I'm so jet-lagged and hammered from sneakily exceeding the customs limits that I'll have to leave it for another day... mainly because I can't log in to OpenAI at the moment due to some inebriation issues, so I can't retrieve my history of complaints.
 

IdleRich

IdleRich
There's a number of properties of this kind of algorithmically generated art that make it conceptually interesting to me. It's non-deterministic, so there's no sense of "final" form. Because it takes environmental/human inputs, each instantiation of a rule will have a different, unpredictable end state (if there is one). Algo "art" defies regular categories because it is fundamentally just a rule for producing complexity, not a specific form. Thus it's form-agnostic. One could have a near-infinite number of ways of instantiating an automata rule, whether virtually (e.g. simulated visually on a screen, or turned into an audio file) or physically (e.g. a mechanical instantiation of a rule). Thus the "art" is a type with many different possible tokens.
Is it? I thought it was entirely deterministic. There is no random element. I think you're mixing up non-deterministic and undecidable, etc.
 

Clinamenic

Binary & Tweed
Is it? I thought it was entirely deterministic. There is no random element. I think you're mixing up non-deterministic and undecidable, etc.
Yeah I think it is deterministic, using a “seed” value as one of the determining constraints. A lot of the text-to-image models use a random seed as a default constraint, which means you can put the same text input in and get different image outputs.
 

Clinamenic

Binary & Tweed
But maybe there are other factors, aside from the seed value, model type, and text input, which play determining roles too. But yeah, short of some noise element or randomness generator, I don't know how these models could be non-deterministic.
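Not the actual text-to-image API, but the seed point can be illustrated with plain numpy: the starting noise is a pure function of the seed, so fixing the seed makes a run reproducible, while leaving it unset gives different noise (and hence a different image) every time. The function name and shape below are just illustrative.

```python
# Illustration only: a seeded "starting noise", standing in for the noise a
# text-to-image model begins from.
import numpy as np

def starting_noise(seed=None, shape=(4, 4)):
    return np.random.default_rng(seed).standard_normal(shape)

print(np.allclose(starting_noise(seed=7), starting_noise(seed=7)))  # True: same seed, same noise
print(np.allclose(starting_noise(), starting_noise()))              # almost surely False: fresh noise each run
```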
 

IdleRich

IdleRich
Yeah I think it is deterministic, using a “seed” value as one of the determining constraints. A lot of the text-to-image models use a random seed as a default constraint, which means you can put the same text input in and get different image outputs.
He said "it's non-deterministic" and you said "yeah" - but then I said "I think that's wrong, it is deterministic" and you said "yeah that's right".

I think it's just a mix-up with terminology. The thing is, the ones on the first pages that I read about have a rule that generates them; the rule is of the form: if it's like this do x, if it's like that do y, etc.

These rules are deterministic, they contain no random or probabilistic element - although you can normally choose your starting conditions and inputs and various other parameters - and what makes them interesting is that these very simple, deterministic rules generate very complex things. But if you ran the same program twice, with the same input parameters etc., you would get the same result.

But the, I think, counter-intuitive thing is that very simple, deterministic rules generate very complicated patterns. And in fact in many cases, even though we know and understand the rule and we choose the starting conditions, we often can't work out where it will end up, or what type of pattern it will end up being - other than by actually just doing it.

So I think it's tempting to think that, cos we know the inputs and the generating rule but we don't know where it will end up, the explanation for us seemingly getting lost between the known start and the unpredictable result is that it's non-deterministic. But that's not it - we lose the ability to predict where it will go not cos of randomness or non-determinism, but simply because of complexity* or undecidability.

Just cos it's impossible to predict, we want to say it's random, but it's not; it's something our mind wants to interpret that way, cos we are not used to something where we know the start and we know precisely every step, and yet we can't tell where it will end up. That's right, ain't it @Slothrop, @MrTea?

*Great/terrible mangling of the language "simply because of complexity"
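A small sketch of my own (not from the thread) of exactly this kind of rule: elementary cellular automaton Rule 30, written as a literal "if the neighbourhood looks like this, do that" lookup table. Running it twice from the same starting row gives identical output, even though the pattern itself is hard to predict without just running it.

```python
# Rule 30 as an explicit lookup table: (left, centre, right) -> next state of the centre cell.
RULE_30 = {
    (1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def run(row, steps):
    """Apply the rule repeatedly (with wrap-around ends) and keep every generation."""
    history = [row]
    for _ in range(steps):
        n = len(row)
        row = [RULE_30[(row[(i - 1) % n], row[i], row[(i + 1) % n])] for i in range(n)]
        history.append(row)
    return history

start = [0] * 31 + [1] + [0] * 31      # a single live cell in the middle
a = run(start, 20)
b = run(start, 20)
print(a == b)                          # True: same start, same rule, same result every time
for row in a[:8]:                      # yet the pattern it traces out is far from obvious
    print("".join("#" if c else "." for c in row))
```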
 

Clinamenic

Binary & Tweed
Yeah and it also shows that the resulting complexity needn’t be foreseen by the creator, which is philosophically profound in my opinion.
Is this the other post you're referring to? I think I'm making pretty much the same point you made, that the complexity is just unpredictable, without saying that it's non-deterministic. Maybe somewhere else in this thread I said these models were non-deterministic, but I don't recall saying that, because I'd be really surprised if there were non-deterministic models already. Could be possible though, from what little I know. Maybe something to do with noise?
 

IdleRich

IdleRich
The post that said it was non-det was by Toko; you merely agreed with it. I suspect it was just a typo anyhow, cos I'm sure he understands the whole thing much better than I do.

The post you highlight above I agree with. I think it is profound. And I think it's very weird to have a situation where we know the start and every step and yet can't predict where it will go. Pretty headfucking.
 

IdleRich

IdleRich
I’d be really surprised if there were non-deterministic models already. Could be possible though, from what little I know. Maybe something to do with noise?
You can make probabilistic models in general if you want, e.g. when people try to predict the weather. A crude one could be: if it rains today then say 7/10 chance of rain tomorrow, but if it's dry today then 4/10, and you could draw a forking probability tree going forwards. Of course you don't know for certain where you will be after even one day, cos there are non-deterministic choices at every step. But the fact that you won't know where you'll be in a probabilistic model is trivial, or not that interesting in the way it was with the stuff above, cos the randomness is built into it.
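For contrast, a quick sketch of the two-state weather model described above, using the post's numbers (0.7 chance of rain after a rainy day, 0.4 after a dry one). Here the unpredictability really is built in, because a random choice is made at every step.

```python
# Two-state probabilistic weather model: a random choice at every step.
import random

P_RAIN = {"rain": 0.7, "dry": 0.4}  # probability of rain tomorrow, given today's weather

def simulate(start="rain", days=7, seed=None):
    rng = random.Random(seed)
    today, path = start, [start]
    for _ in range(days):
        today = "rain" if rng.random() < P_RAIN[today] else "dry"
        path.append(today)
    return path

print(simulate(seed=1))
print(simulate(seed=2))  # same start, same rule, different outcome: the randomness is built in
```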
 

catalog

Well-known member
Not read everything in this convo, sorry, but...

I've just installed Stable Diffusion on this laptop, so I'm running it locally, and like Stan said, you run the same text prompt again and again and you definitely don't get the same result.

So it's not quite this

But if you ran the same program twice, with the same input parameters etc., you would get the same result.

But it sort of is, in that you get a result that fulfils the prompt; it's just that it might not be as good as a previous one you did.

So that element of chance is definitely present, I would say. Which is what makes it interesting, for me.

I don't understand this deterministic thing you are both on about, but my mate who got me into all this to begin with said that the best way of thinking about AIs is as black boxes, in that there's so much training data, so many ways of doing things inside them, so many micro-decisions; as @IdleRich is saying, that is the whole point.
 

IdleRich

IdleRich
Not read everything in this convo, sorry, but...

I've just installed Stable Diffusion on this laptop, so I'm running it locally, and like Stan said, you run the same text prompt again and again and you definitely don't get the same result.

So it's not quite this

But if you ran the same program twice, with the same input parameters etc., you would get the same result.

But it sort of is, in that you get a result that fulfils the prompt; it's just that it might not be as good as a previous one you did.

So that element of chance is definitely present, I would say. Which is what makes it interesting, for me.

I don't understand this deterministic thing you are both on about, but my mate who got me into all this to begin with said that the best way of thinking about AIs is as black boxes, in that there's so much training data, so many ways of doing things inside them, so many micro-decisions; as @IdleRich is saying, that is the whole point.
I haven't read the whole thread either, but the ones at the start that I did read about said that they were deterministic, so you should get the same result with the same inputs; you must be talking about something different that I haven't read about yet.
 

catalog

Well-known member
I think it's the same thing, but his terminology is wrong, or different to what you think it should mean.

Or maybe I'm talking about something different. He's saying algo art, I'm saying AI art; that's maybe the difference. Cos AI art is a level up from algo in terms of the autonomy of the machine.

With something like game of life, isn't the whole point that it comes out the same thing if you don't switch any parameters?

Wrong thread for me to barge into probably, without reading properly. But an interesting thing is how quickly it's moving. These two things were feeling quite new; now it's all gone bananas.
 