I use "we" as in: we, Cyberdyne Systems.
Article that seems to confirm current suggestions that the Dall-E image maker has developed its own secret language without any prompting
An Image Generation AI Created Its Own Secret Language But Skynet Says No Worries
A researcher claims that DALL-E, an OpenAI system that creates images from textual descriptions, is making up its own language. (hothardware.com)
They've denied it, but there's definitely something to this
Apoploe vesrreaitais
I wonder how many of these are just artifacts of the training sets? Even if that's the case, it's still effectively sensing patterns that don't make sense to us.
I imagine that's exactly what's happening. If you think of image recognition, the intermediate layers in a neural net build up abstract images which are then used by later layers to match against more complicated and concrete images. You could potentially feed the network an image based on those abstract images and have it recognise it as a cat or whatever, even though it's completely abstract and synthetic.
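The idea above can be sketched in miniature. This is a toy, pure-Python "network" with made-up weights (nothing to do with DALL-E's actual architecture): the hidden layer acts as two abstract feature detectors, and any input that lights up both of them, even a synthetic one that looks nothing like the "natural" input, gets the same label.

```python
# Toy sketch with hypothetical, hand-picked weights -- for illustration only.

def relu(x):
    return max(0.0, x)

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Layer 1: two abstract "feature detectors" (weights are invented)
W1 = [[1.0, -1.0, 0.5, 0.0],   # feature A
      [0.0, 0.5, -1.0, 1.0]]   # feature B

# Layer 2: the "cat" score is high when both abstract features fire
W2 = [1.0, 1.0]

def classify(x):
    hidden = [relu(dot(w, x)) for w in W1]   # abstract intermediate features
    score = dot(W2, hidden)
    return ("cat" if score > 1.0 else "not cat"), hidden

# A "natural" input that triggers both features
natural = [1.0, 0.0, 0.0, 1.0]
# A synthetic input, numerically quite different, engineered to produce
# the same hidden activations
synthetic = [2.0, 1.0, 0.0, 0.5]

print(classify(natural))    # both inputs map to hidden [1.0, 1.0] -> "cat"
print(classify(synthetic))
```

Both inputs produce identical hidden activations, so the later layer can't tell them apart: the network "sees a cat" in the synthetic input. Something analogous may be going on when gibberish prompts reliably steer an image model.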
Going to train a language model
Mancbook will never be as good as Scousebook. 97% of people will never have a brain, how is Mancbook going to have one, behave