AIDS: conspiracy or reality?

nomadthethird

more issues than Time mag
I believe in the lizard people, I'm just not sure that they're Jewish. I think it's power that turns people into lizards, and sometimes being in prison. People get that shark-eyed, shiny-skinned thing. But then I've seen people I don't like turn into snakes in front of my eyes, so I should shut up and take some meds.

Yeah, I know what you mean... I think there's an old thread on that topic... the whole lizard people thing seems like a good metaphor for the observation that sociopaths and the power-hungry tend to reach positions of power and influence before the worthy and upstanding...

The thing is, every video I've seen on the internet by one of the true believers in either the masonic or the lizard conspiracy (or any other, for that matter) is filled with virulent anti-semitism as well.

swears said:
Kary Mullis

I bet he stopped believing in his meds, too. Typical, but sad.

James Watson arguably went off the deep end, too. Not that far, tho.
 
Last edited:

nomadthethird

more issues than Time mag
the possibility that general health and nutrition MAY be more important than drugs for patients??? fairly shocking stuff don't you think? (edit: again, to a non-specialist) it will be interesting to see reactions when this full doc is out.

But that's NOT what the guy's saying... good lord, Sloane makes a really good point and you twist it.

It may be just as important to pay for "infrastructure" in Africa--I agree, but I also know what usually happens, due to corruption, when foreign aid comes in for those things. I agree with Sloane. But that doesn't mean that paying for infrastructure (things like water, better sanitation, better schools, etc.) is going to lower AIDS infection in and of itself. It's a step in the right direction, but nutrition alone has no clear effect on HIV.

Hell, if you're a doctor and get an accidental needle-stick injury on the job, there are drugs they can give you (post-exposure prophylaxis) to stop HIV dead in its tracks, with something like 80% success.

Think about this-- if food were the magic bullet, then the food industry could just as easily be over in Africa raking in the billions that you imagine drug companies are. As it is, the west is spending more money than it's making on the AIDS pandemic. I'm all for reforming Big Pharma but these weird "pull the trigger and spray bullets blindly" sort of criticisms aren't going to do anyone any good.
 
Last edited:

Mr. Tea

Let's Talk About Ceps
I don't know, T, those results seem believable (if ehh a little trivial) to me.

Well yeah, I'm not saying I don't believe the results - it just seems to come from a tradition of socio/psych studies that either tell us stuff we all intuitively know anyway, or seem to have been conducted purely to support some clichéd parental maxim, or are simply of no conceivable benefit to anyone even if they do turn up some unexpected result (says the former LHC grad student, yeah yeah...):

"'Benefit not felt' of coats donned before going outside"

"Breadcrust-eating children 'have curlier hair'"

"Dogs really do look like owners, new survey shows"

"Scientist wastes life developing mathematically optimal biscuit-dunking technique"

"£50m, ten-year-study probes why fish, chips taste better in North of England"

"Red cars 3.1% more likely to be involved in accidents"

"Oncologist takes life over 'science' column inches lavished on mindless bullshit"

...kinda thing.
 

nomadthethird

more issues than Time mag
Well yeah, I'm not saying I don't believe the results - it just seems to come from a tradition of socio/psych studies that either tell us stuff we all intuitively know anyway, or seem to have been conducted purely to support some clichéd parental maxim, or are simply of no conceivable benefit to anyone even if they do turn up some unexpected result (says the former LHC grad student, yeah yeah...):

"'Benefit not felt' of coats donned before going outside"

"Breadcrust-eating children 'have curlier hair'"

"Dogs really do look like owners, new survey shows"

"Scientist wastes life developing mathematically optimal biscuit-dunking technique"

"£50m, ten-year-study probes why fish, chips taste better in North of England"

"Red cars 3.1% more likely to be involved in accidents"

"Oncologist takes life over 'science' column inches lavished on mindless bullshit"

...kinda thing.

You don't have to convince me... I've seen some really ridiculous shit masquerading as "data"...

The worst was when I realized that sociologists don't use the null hypothesis or any type of control (usually) in their studies. (It really was kinda disappointing. Call me Pollyanna...)

You might just as well toss a poll up on a Myspace banner ad; the results would be just as scientifically sound...
 

vimothy

yurp
The worst was when I realized that sociologists don't use the null hypothesis or any type of control (usually) in their studies. (It really was kinda disappointing. Call me Pollyanna...)

Ironically, this is totally untrue.
 

nomadthethird

more issues than Time mag
Ironically, this is totally untrue.


Social sciences in general don't work according to the null hypothesis even if they use controls (which are very hard to set up in general and aren't always possible...)

Witness, for example, this gem:

Viewing Cute Images Increases Behavioral Carefulness

No null hypothesis! Just a bunch of lame associations made post hoc about "carefulness", which isn't necessarily what their study demonstrated; what it actually showed is that looking at "cute" (whatever that means) images increases fine motor skills in participants.

I will say for psych studies like this one that at least they compared the self-reports to the actual performance of participants. Soc studies are nowhere near as rigorous, in general.
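For contrast, here's roughly what an explicit null-hypothesis version of that study would look like: a minimal sketch in Python, with made-up scores (nothing from the actual paper). You state H0 up front ("cute images have no effect on fine motor scores"), then ask how surprising the data would be if H0 were true.

# Hypothetical re-design: does viewing cute images change fine motor scores?
# H0: the mean score is the same in both groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
cute_group = rng.normal(loc=7.2, scale=1.5, size=40)     # invented scores
neutral_group = rng.normal(loc=6.8, scale=1.5, size=40)  # invented scores

# Two-sample t-test: p is the probability of a difference at least this
# large *if H0 were true* -- not the probability that H0 is true.
t_stat, p_value = stats.ttest_ind(cute_group, neutral_group)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")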
 

nomadthethird

more issues than Time mag
Oh look! Other people have already critiqued this one.

Strengths

Controlled and assessed other cognitive/emotional influences of the animal images that were presented.

Cautions

Authors assume, but do not demonstrate, that the improved fine-motor performance in response to cuteness can lead to improved reproductive success because of the fragility of human infants/small children. This relationship between fine-motor activation and behaviors helpful to fragile children has not – to my knowledge – been demonstrated.
The nature of human infant/small child fragility is assumed, but neither demonstrated nor made explicit.
 

vimothy

yurp
I can't speak for the undergrad syllabus, but hypothesis testing is boilerplate for any postgrad social sciences course. It's fully half the research methodology.
 
Last edited:

nomadthethird

more issues than Time mag
I can't speak for the undergrad syllabus, but hypothesis testing is boilerplate for any postgrad social sciences course. It's fully half the research methodology.

Research methods is required in MA programs here, too. And psychology students start with research methods as sophomores in college. But social sciences generally don't use the null hypothesis.

Having experienced both humanities research methods and scientific ones, I think it's more than fair to say there's no comparison in terms of formal rigor.
 

vimothy

yurp
But social sciences generally don't use the null hypothesis.

You're wrong about this. Hypothesis testing is bread and butter for fully half of all social research methodology. Why do you think social scientists use quantitative methods in the first place? If you were applying it here, you wouldn't be making such hideously out-of-touch generalisations, so I guess you're almost proving yourself right.
 

padraig (u.s.)

a monkey that will go ape
interesting little article from TIME on David Ho & the search for an AIDS vaccine (& the difficulties thereof): The Man Who Could Beat AIDS. talks a bit on the same molecular bio tip I was on about upthread, in this case designing a vaccine that would work by altering or binding to HIV to make it physically impossible for the virus to enter cells.

on a separate note, normally I'd steer well clear of a dispute between Vim & Nomad about social science research methodology, but by coincidence I happen to be taking "Social Research" this semester; normally I try to avoid the social "sciences" as much as possible - at least in classes I'm paying for - but research methodology is far more tolerable, interesting & relevant than listening to some dude gob on about critical theory or whatever (tho mainly I'm just taking it cause it's an honors course). anyway, from my limited knowledge, w/r/t the null hypothesis, doesn't it depend entirely on which social science you're looking at (i.e., psychology is more likely to be rigorous than sociology, & so on)? and in fact, which study?
 

vimothy

yurp
It depends on whether the research is quantitative (or mixed methods), or not. If it is, well, hypothesis testing is the sine qua non of quantitative research. It doesn't matter what sub-field of the social sciences we're talking about--it could be econometrics, quantitative sociological research, or whatever. Generalising from a sample to the population necessarily involves hypothesis testing. All SPSS (or whatever stats package you're using) does when it generates a p value is give you a measure of the probability that the null hypothesis is true.

Of course, there are large swathes of social science research that don't use hypothesis testing: non-quantitative research, or research that doesn't use hypothesis testing. But that's totally obvious, to say nothing of totally circular. This is a slightly silly debate though. I just thought it was ironic that Nomad was making an incorrect generalisation about social science's use of a methodology that is supposed to prevent researchers from making incorrect generalisations.

EDIT: Just to add that I work for a mixed-methods social science research project that employs two statisticians (one full-time, one part-time). Hypothesis testing is hard-wired into the most basic and fundamental quantitative procedure, which exists to (1) establish whether a relationship exists between two or more variables in a dataset and (2) establish whether that relationship exists in the population. No. 2 is a test of the probability that the null hypothesis is true (i.e. that the relationship does not exist in the population). This is the essence of quantitative social research methods.
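To make that concrete, here's a minimal sketch of the two-step procedure in Python (the variable names and numbers are invented for illustration, not from our project):

# Hypothetical example of the basic quantitative procedure:
# (1) measure a relationship between two variables in the sample,
# (2) test whether it plausibly exists in the population.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
income = rng.normal(30000, 8000, size=200)                 # made-up sample
wellbeing = 0.0001 * income + rng.normal(0, 2, size=200)   # weak true relationship

# Step 1: the sample correlation. Step 2: the p-value, i.e. how likely a
# correlation at least this strong would be if none existed in the population.
r, p = stats.pearsonr(income, wellbeing)
print(f"r = {r:.3f}, p = {p:.4f}")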
 
Last edited:

nomadthethird

more issues than Time mag
Good article padraig.

It depends on whether the research is quantitative (or mixed methods), or not. If it is, well, hypothesis testing is the sine qua non of quantitative research. It doesn't matter what sub-field of the social sciences we're talking about--it could be econometrics, quantitative sociological research, or whatever. Generalising from a sample to the population necessarily involves hypothesis testing. All SPSS (or whatever stats package you're using) does when it generates a p value is give you a measure of the probability that the null hypothesis is true.

Of course, there are large swathes of social science research that don't use hypothesis testing: non-quantitative research, or research that doesn't use hypothesis testing. But that's totally obvious, to say nothing of totally circular. This is a slightly silly debate though. I just thought it was ironic that Nomad was making an incorrect generalisation about social science's use of a methodology that is supposed to prevent researchers from making incorrect generalisations.

EDIT: Just to add, that I do work for a mixed methods social science research project that employs two statisticians (one full and one part time). Hypothesis testing is hard wired into the most basic and fundamental quantitative procedure, which exists to 1, establish if a relationship exists between two or more variables in a dataset and 2, establish if that relationship exists in the population. No. 2 is a test of the probability that the null hypothesis is true (i.e. that the relationship does not exist in the population). This is the essence of quantitative social research methods.

Dude, you still don't get it. This is not a debate. It was a statement about FORMAL RIGOR, and how social science has less of it than the other sciences. (Econ possibly excepted in some cases.) Social scientists often test things that simply can't be quantified as easily as, say, genetic drift or allele frequency can.

And as far as chi-square/p values go, no shit... but you can test a fucking shit hypothesis with a chi-square and still get a p value. That doesn't make your hypothesis good; it just makes your data set nice and "statistically significant"... which, again, doesn't mean that what you're saying makes any sense. Any number of anomalies can get you a really low p number. (which is a measure of whether your hypothesis, not the null hypothesis, is stat. sign.)

The null hypothesis should be "our research will not produce usable results" or you're not doing it right.
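To put numbers on it, here's a toy sketch in Python with invented accident counts (think of the red-cars headline above): crank the sample size up and a ~0.25-percentage-point difference comes back wildly "significant", which says nothing about whether the hypothesis behind it is any good.

# Toy illustration: a huge sample makes a tiny, possibly meaningless
# association "statistically significant". The test has no opinion on
# whether the hypothesis ("red cars cause accidents") makes sense.
from scipy import stats

#            accident  no accident
table = [[ 52000,  948000],   # red cars    (5.20% -- invented counts)
         [ 49500,  950500]]   # other cars  (4.95% -- invented counts)
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.1e}")  # p is astronomically small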
 
Last edited:

vimothy

yurp
No, that's not right. You said:

I realized that sociologists don't use the null hypothesis or any type of control (usually) in their studies.

EDIT: You're also wrong about p-values. A p-value tests the null hypothesis (we want to avoid type I errors). A low p-value means a low chance that the null hypothesis is correct, which means there's a good chance your relationship holds in the population. Low p-values make us happy.
 
Last edited:

nomadthethird

more issues than Time mag
No, that's not right. You said:

I realized that sociologists don't use the null hypothesis or any type of control (usually) in their studies.

EDIT: You're also wrong about p-values. A p-value tests the null hypothesis (we want to avoid type I errors). A low p-value means a low chance that the null hypothesis is correct, which means there's a good chance your relationship holds in the population. Low p-values make us happy.

No, chi-squares/p values test your data, and the likelihood that such results could have resulted from chance, not the null hypothesis. You can't really test the null hypothesis with a chi square, since the null hypothesis is kind of like a big old nothing.

A low p value is in fact a good thing, but not because it tests the null hypothesis. It's a good thing because it indicates that *your results* are *statistically significant* which means that they probably mean something. (What that something is, however, is not whether the null hypothesis is correct or incorrect...)

From wikipedia:

There are several common misunderstandings about p-values.[2][3]

  1. The p-value is not the probability that the null hypothesis is true. (This false conclusion is used to justify the "rule" of considering a result to be significant if its p-value is very small (near zero).) In fact, frequentist statistics does not, and cannot, attach probabilities to hypotheses. Comparison of Bayesian and classical approaches shows that a p-value can be very close to zero while the posterior probability of the null is very close to unity. This is the Jeffreys-Lindley paradox.
  2. The p-value is not the probability that a finding is "merely a fluke." (Again, this conclusion arises from the "rule" that small p-values indicate significant differences.) As the calculation of a p-value is based on the assumption that a finding is the product of chance alone, it patently cannot also be used to gauge the probability of that assumption being true. This is subtly different from the real meaning, which is that the p-value is the chance that the null hypothesis explains the result: the result might not be "merely a fluke," and be explicable by the null hypothesis with confidence equal to the p-value.
  3. The p-value is not the probability of falsely rejecting the null hypothesis. This error is a version of the so-called prosecutor's fallacy.
  4. The p-value is not the probability that a replicating experiment would not yield the same conclusion.
  5. 1 − (p-value) is not the probability of the alternative hypothesis being true (see (1)).
  6. The significance level of the test is not determined by the p-value. The significance level of a test is a value that should be decided upon by the agent interpreting the data before the data are viewed, and is compared against the p-value or any other statistic calculated after the test has been performed.
  7. The p-value does not indicate the size or importance of the observed effect (compare with effect size).
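You can see what a p-value actually is by brute force. A quick simulation sketch in Python (the observed value is made up): the p-value is just the fraction of datasets generated under H0 whose statistic comes out at least as extreme as yours; at no point does it assign a probability to H0 itself.

# A p-value is P(statistic at least as extreme as observed | H0 true),
# estimated here by simulating data under the null. It is NOT P(H0 | data).
import numpy as np

rng = np.random.default_rng(42)
observed_mean = 0.31   # invented observed sample mean, n = 50
n, trials = 50, 100_000

# Simulate the null world: true mean 0, sd 1, n observations per dataset.
null_means = rng.normal(0.0, 1.0, size=(trials, n)).mean(axis=1)
p_value = np.mean(np.abs(null_means) >= abs(observed_mean))
print(f"simulated two-sided p = {p_value:.4f}")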
 
Last edited:

vimothy

yurp
Okay, okay, the null hypothesis is "true" by default (the status quo ante). A low p value means a low chance of committing a type I error, i.e. a low chance of observing a test statistic as extreme as that observed, if the results were distributed as in the null hypothesis. Or,

α (or the p-value) = P(reject H₀ | H₀ is true)

It doesn't really change my argument, though.
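That identity is easy to check by simulation. A sketch in Python (a toy setup, not from any real study): generate lots of studies in which H0 really is true and count how often the test rejects at the 5% level.

# If H0 is true, rejecting whenever p < alpha should happen in roughly
# alpha of all studies, i.e. alpha = P(reject H0 | H0 true).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
alpha, rejections, trials = 0.05, 0, 10_000

for _ in range(trials):
    # Two groups drawn from the SAME distribution: H0 really is true.
    a = rng.normal(0, 1, size=30)
    b = rng.normal(0, 1, size=30)
    if stats.ttest_ind(a, b).pvalue < alpha:
        rejections += 1

print(f"rejection rate = {rejections / trials:.3f}  (should be near {alpha})")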

I think better arguments / critiques of social science are that quantitative methods are in fact all about qualities (more so than qualitative methods, perhaps), and that quantitative methods can replace critical thought with statistical ritual.

On the latter point, this is an interesting paper.

Abstract:

Statistical rituals largely eliminate statistical thinking in the social sciences. Rituals are indispensable for identification with social groups, but they should be the subject rather than the procedure of science. What I call the “null ritual” consists of three steps: (1) set up a statistical null hypothesis, but do not specify your own hypothesis nor any alternative hypothesis, (2) use the 5% significance level for rejecting the null and accepting your hypothesis, and (3) always perform this procedure. I report evidence of the resulting collective confusion and fears about sanctions on the part of students and teachers, researchers and editors, as well as textbook writers.
 