4 Chan.

suspendedreason

Well-known member
Talk about a psyop. The geriatrics holding media power are direct industry competitors with social media. These tech companies are the main competitor to their business model. It's like trusting McCarthyite Congressional reports on the Soviets.
 

version

Who loves ya, baby?
This is you, isn't it?
The tragic thing is that the BLM movement, the organized left and the Trump administration's effects on hysterical increases in funding for groups like the ACLU has set back the introduction of mass surveillance, AI-optimized facial recognition at every junction in every city, effective pre-crime algorithms, better national ID, gait analysis and really a lot of other hugely helpful technologies that could significantly improve quality of life for the majority of the population over the last year - at least in the US - by decades.
 

beiser

Well-known member
On a not unrelated note:

this is a truly astounding piece: it goes all the way back to Marx and Schumpeter without engaging with the body of literature called “Disruption Theory.” It calls the word meaningless, but it never actually mentions that the word does have a well-defined meaning in a pretty straightforward and anodyne framework.
 

beiser

Well-known member
This is you, isn't it?
There's the issue of algorithms disproportionately excluding or negatively highlighting minorities too, e.g. facial recognition software.
something funny about tiktok is that this isn’t actually algorithmic: they have humans who are responsible for screening videos and who have excluded gay people, minorities, people with disabilities, and so forth. But curiously enough, that’s never anywhere near as frightening to people.
 

beiser

Well-known member
The interesting thing about algorithms is not that they’re more or less biased than human processes; it’s that they’re consistent and testable. If you can take a system and root out algorithmic bias, you have a fair system, and you can feel pretty good about it staying that way. Humans are never like this: they’re shifting, their judgements are inscrutable and incontestable. Algorithms create a venue that allows for fairness; I don’t know why more people don’t understand that.
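The "consistent and testable" point can be made concrete: because an algorithmic decision rule is a fixed function, you can re-run the same audit against it at any time and get the same answer, which is exactly what you cannot do with an ad-hoc human process. A minimal sketch (the decision rule and applicant pool below are invented for illustration):

```python
from collections import defaultdict

def approval_rates(decide, applicants):
    """Audit a decision function: approval rate per group.

    `decide` is deterministic, so this audit is repeatable --
    the same inputs always yield the same measured rates.
    """
    approved = defaultdict(int)
    total = defaultdict(int)
    for person in applicants:
        total[person["group"]] += 1
        if decide(person):
            approved[person["group"]] += 1
    return {g: approved[g] / total[g] for g in total}

# A toy decision rule and toy applicant pool (both invented).
decide = lambda p: p["score"] >= 60
applicants = [
    {"group": "A", "score": 70}, {"group": "A", "score": 55},
    {"group": "B", "score": 65}, {"group": "B", "score": 40},
    {"group": "B", "score": 62},
]

print(approval_rates(decide, applicants))
```

Any disparity the audit surfaces is then a property of a fixed artifact that can be contested and fixed, rather than of an individual's unrecorded judgement.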
 

constant escape

winter withered, warm
And bias is nonetheless a tricky issue here, as you know. In ways that are reducible to the bias of such and such developers, and in ways that are not. An example of the latter, there was some issue with Google facial recognition software (I could be mistaken about Google, could be another company) wherein black people's faces didn't register as faces, but instead were passed over as background or some such.

Even if I've botched some key details here, it still works as an example of what could go wrong, in ways that are not reducible to developer bias, no?
 

suspendedreason

Well-known member
My pal's in NLP, says the entire field is incapable of moving past this "bias" issue, it's the primary cathexis-point for intellectual capital there, despite being an incoherently framed concept to begin with.

Developing a model that isn't "biased" would mean a whole philosophical undertaking about what it means to stereotype without stereotyping, or to make predictions based on certain "acceptable" kinds of data versus unacceptable ones.
 

constant escape

winter withered, warm
I think a solid step would be to administer bias with some kind of egalitarian/diverse committee of sorts, which still leaves room for error, but much less room than previous infrastructural developments in our history, no?
 

constant escape

winter withered, warm
The folks pushing for zero bias still, perhaps, ought to push that hard; otherwise the results won't be as tempered as they could have been.
 

constant escape

winter withered, warm
One of those situations where the optimal outcome is predicated on some variety of excessive, contra/partisan efforts that manage to partially cancel each other, partially complement each other. The balance of efforts would be an order higher than individual/partisan efforts, which are necessarily and functionally myopic. That is, in order for the higher order to reach its optimal state, the lower order individuals need to reach beyond their grasp.
 

beiser

Well-known member
these two tweets have more wisdom about algorithmic bias than any article ever published in a mainstream source or journal [two attached tweet screenshots]
 

beiser

Well-known member
the question of what it means for AI to be “fair” is not simple at all. There are dozens of mutually exclusive theories and the key thing to note is that this is also true of existing processes, but they’re sufficiently opaque, informal, and ad-hoc that people get away with them, because they’re not analyzable. You will not win this battle by “taking a hard line on fairness,” it actually requires people and societies to think about what they value.
 

beiser

Well-known member
the examples people bring up are always ones that are comically bad, unjustifiable on any grounds. it’s the opacity that gets people; what would make a lot more sense is to push fervently for direct access to models, to let them be interrogated and understood.
 

beiser

Well-known member
the brain worms that exist across europe are simple: nobody believes anything; they can problematize, but can't construct a position. believing things is gauche, you're naive.
 

boxedjoy

Well-known member
Our online recruitment portal asks candidates to rate themselves 1-5 on various skills, as well as answer some questions for evidence of those skills. Then when you log in as a manager it recommends you the applications of the highest self-scoring applicants. The highest-scoring ones are nearly always awful in their answers to the questions. So you end up having to go through every application anyway, because the person who rates themselves 4/5 might actually have done something more useful to the job than the over-confident or desperate candidate willing to give themselves 5/5. It's a complete waste of time. Plus, as we know, women and POC are much more likely to underrate their skills and abilities. I absolutely hate it, but we have to be seen to be using it.
 

luka

Well-known member
Staff member
it gets weirder and weirder. If you read a modern job application, there's no human in it. A sophisticated operator knows how the things are assessed and is able to bleed all the humanity out of it and methodically and ruthlessly tick every box.
 

luka

Well-known member
Staff member
Because you're not being assessed by a human as a human but by a checklist.
 

luka

Well-known member
Staff member
I don't really know why our American brothers are pretending not to know why normal human beings see algorithms as nightmarish. For those of us who know nothing really about how they work, what they seem to promise is a rigidly controlled future in which you are nothing but your data, and your data will decide whether you are executed by drone or denied social security or a library card, whether you are promoted or sacked, whether your children are taken away from you, whether you are dissolved in acid.
 