i sent a journal article for internal review before submission, and the person who needed to approve it had clearly run it through ChatGPT and given me the output. i had to change the article to respond to the computer's thoughts
i now regularly receive emails written by ChatGPT that colleagues purport to have written themselves. i am being thanked by a computer; the thoughts are being framed and described by a computer. it's rude to point this out, so i write responses to what the computer has said, as though it were the thoughts of the person i'm emailing
i am getting slick, professional-looking documents that appear to be well-thought-through institutional publications, but they are simply ChatGPT doing its thing. this breaks a basic part of interpretation and communication: the inference that a document which looks professional is something someone actually cares about and has put thought into
a layman suggested to me yesterday that some modelling a university professor is doing should simply be done by AI instead
AI is intervening in the world of ideas, with humans as intermediaries. i think this is the first time computers have really done that.
at some point, figuring out how to manipulate what an AI is likely to say in response to a query could become as important as gaming the YouTube algorithm, gaming the Google search results, and so on