Formalism?

blissblogger

Well-known member
there's an academic called Jason Toynbee who wrote an interesting book with a boring title, "Making Popular Music: Musicians, Creativity and Institutions", a few years ago -- he's not quite a musicologist a la Susan McClary, but the approach goes against the consumer-makes-meaning angle of cultural studies in so far as he's looking at the producers of music, what their mindset is, the field of possibilities and limits within which they operate. And -- relevant to this discussion -- there are some amazingly meticulous and detailed breakdowns of what's going on in some drum'n'bass tracks -- i think one of the case studies is Shimon's 'The Predator', or maybe it's a Randall & Andy C track, 'Cool Down' -- but anyway, very very fastidious and closely observed descriptions of the timbral qualities of beats, the multiple basslines, the way the breaks are organised and intermesh, the changes within the tracks, the synth-refrains coming in, and how all of it creates a certain mood.

the thing about the McClary et al approach i always feel is that they're looking to prove infallible effects that originate in certain chord changes and note patterns and the macro-structure of a musical piece, tension, climax, etc., but just going by some of the examples they give, it's totally possible for a listener to miss all this emotional language and be completely bypassed by it. One example is a Whitesnake song that she analyses as a journey into an abyss of dread-of-the-female. Having seen the video dozens of times on MTV i can vouch that none of this ever occurred to me. It could be that the other cultural signifiers -- the cheesy video with David Coverdale's model girlfriend writhing around with the singer in a car, the timbre of his cod-blues-sub-Plant voice and all its associations, the production quality of the guitars (lite-metal) -- completely over-rode for me all the effects she was analysing in terms of the chord changes supposedly taking the listener into a kind of musical equivalent of the vagina dentata. There's a similar analysis of a Madonna song which left me equally feeling "nah, didn't affect me like that".
 

shudder

Well-known member
I think McClary's best stuff or at least her most interesting stuff is her exposition of gender coding in "classical" music. I've never really looked much at her pop stuff, and I can imagine how harmonic analyses of pop songs probably don't reveal tooooo much...
 

dogger

Sweet Virginia
blissblogger said:
the thing about the McClary et al approach i always feel is that they're looking to prove infallible effects that originate in certain chord changes and note patterns and the macro-structure of a musical piece, tension, climax, etc., but just going by some of the examples they give, it's totally possible for a listener to miss all this emotional language and be completely bypassed by it.

I agree. But I don't know whether McClary's explanations should really be read as *descriptions* of how the majority of people listen to music, so much as memorable, if idiosyncratic, critical constructs that can change the way the music is heard. Their aim is not to describe, but to create avenues of hearing. Some of them -- of classical works at least -- are fairly (deliberately?) controversial. The famous example of Beethoven's Fifth Symphony as a narrative of a rape springs to mind, and some American traditionalists got fairly shirty when she suggested that not only was Schubert gay, but you can hear his 'gayness' in the music. (I misrepresent: what she actually said was that the music constructs an alternative mode of subjectivity that could be construed as typically homosexual). Anyway, the point is that these interpretations are pretty outlandish, and not meant to be infallible -- how could an interpretation possibly be infallible-for-everyone-forever? -- but they are memorable images, and well backed-up by the scores, which makes them potentially persuasive. And why not be persuaded? I don't think there's any one 'correct' way of listening to a piece of music, and I don't think you have to listen to the same piece in the same way for the rest of your life. To give a personal example, I always heard Mozart's Requiem as a comforting bit of music -- don't really know why; I just did -- until my tutor suggested that he thought it should be heard as a violent and despairing protest against death. As he is a world expert on Mozart, I was inclined to take his advice seriously, so I did, and it made much more sense. Point being: it is totally possible to miss a lot of potentially exciting information in music due to having fallen into habitual patterns of listening. Criticism like McClary's aims to shock you out of that.
 

borderpolice

Well-known member
DigitalDjigit said:
Drum and bass is pretty sparse too, at least the '97-mid 00's part of it where it was all two steppy (don't know what it's like now). The rhythm as melody thing was all '94-'96, wasn't it? That's what separates jungle and drum and bass in my mind.

Well, this is partly a problem of semantics. The junglists of my acquaintance tell me that
jungle and DnB are the same thing. Anyway, clearly there's a continuum between Dubstep
and DnB. My analysis was "ideal-typical": late-90s "two-step jungle", being less
breakbeaty and often using programmed drums rather than samples, is definitely going in the
direction of Dubstep, but the rhythm is still 'driving' rather than working by building up
tension through long gaps.
 

borderpolice

Well-known member
dogger said:
However, this does not get round the central problem:
linguistically representing the *experience* of sound in a meaningful,
communicable way.

Well, I think that representing the sound itself is as good as it
gets. The experience is in the listener's head and is unlikely to be
shared across listeners. What is shared, what is socialised, is the
behaviour in (social) space, seen as triggered by, or related to, that
music. This is of course also what traditional musical notation does:
it does not represent the effects of pitch, duration or the scales, but
rather allows a concise representation so that the music in
question can be reconstructed easily. In that sense, sound-files
are basically as good as it can get.

dogger said:
Simply drawing a picture of a timbre using Fourier
analysis is incredibly complex (especially if you are actually going
to chart timbral change through time), and says nothing if you can't
read it (and most people can't). Even if you do understand what spectral
analyses show, they represent the objective acoustic reality of the
sound, and relate in only a very oblique way to the experience of
it.

I agree. Though interestingly, having visual representations of
spectral distribution is rather useful in production, hence the
preponderance of *graphical* equalisers. Which is a strong
indication that it would be useful in musical analysis too.
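To make the "picture of a timbre" idea concrete, here's a toy sketch (Python with numpy; my own illustration, not anything from a real production setup) of the sort of spectral distribution a graphical EQ summarises:

```python
import numpy as np

# One second of a 440 Hz sawtooth at an 8 kHz sample rate
sr = 8000
t = np.arange(sr) / sr
saw = 2 * (440 * t % 1) - 1

# Magnitude spectrum via the FFT: this is the "picture of the timbre"
spectrum = np.abs(np.fft.rfft(saw)) / len(saw)
freqs = np.fft.rfftfreq(len(saw), 1 / sr)

# A sawtooth's partials fall off as 1/n, so the fundamental (440 Hz)
# carries roughly twice the magnitude of the 2nd harmonic (880 Hz)
fund = spectrum[np.argmin(np.abs(freqs - 440))]
second = spectrum[np.argmin(np.abs(freqs - 880))]
ratio = fund / second  # roughly 2
```

Two numbers and a ratio won't tell you how it sounds, of course, but bands of this spectrum are exactly what the sliders on a graphical EQ address.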

dogger said:
So words illuminate the
experience of music, although they cannot capture it.

As I said, words (eg an mp3 file) can help you to reconstruct
the music precisely.
 

borderpolice

Well-known member
Moodles said:
I totally agree with this - there is a vocabulary out
there waiting to be used. Are there actually music critics that are
using it? For electronic musicians, it's common currency, but what
about the non-specialists who just want to learn a bit about how the
music operates. Is it safe to say that there is a bit of
mystification of electronic music technique that maybe doesn't exist
in more traditional genres?

Also, electronic music doesn't exist completely outside of traditional
music theory, but certainly music theory falls short in its ability to
analyse the electronic aspects of the music.

Yes, I agree, there's a curious bifurcation between classical and
modern music: for the former, most of its audience do in fact
understand the technical language of the genre, while for the
latter the opposite is the case.

I don't know if this will change in the future. On the one hand,
the trajectory of development in just about every aspect of the
world is towards more specialisation, suggesting that the gap
will become bigger. On the other, the increasing ubiquity of
extremely sophisticated production technology may lead to a
rapprochement between producers and consumers. Time will tell.
 

borderpolice

Well-known member
k-punk said:
I can't see how any recording technologies help
here... what CONCEPTUAL advance do mp3s/ digitisation offer over
analogue recording? A digitally recorded sound is in this sense no
different to the sound itself - what is required is a one-level up
abstract description of the sound, simply ostensively indicating the
sound ('this is sound x') does not provide such a description...

None. And none was claimed. The key change is in ease of use: linking
to an mp3 exhibiting a peculiar feature of music (e.g. this bass) is now
very easy, with a low barrier to entry. This is a radical change from
how it was done before, and allows large-scale communication about all
musical phenomena. Simply ostensively indicating the sound is
the key step in the emergence of a suitable vocabulary.

k-punk said:
I'm a bit baffled by what folk mean by the language of
pro tools or cubase... These are just sequencing programmes that treat
sounds indiscriminately, as cut and pasted sonorous blocks... The
chief service they might offer to sonic analysis is that they provide
a way to 'visualize' a sequence ... clusters of sound can literally be
seen as patterns....

Well, this visualisation is a very concise and convenient
representation of key (but not all, e.g. timbre) musical
features. Incidentally, traditional staff-based musical notation
is the same, a concise and convenient representation, at least
for music produced with certain kinds of instruments. Musical
notation has two key features:

(1) Complexity reduction: we don't always want to represent all the
information (in the sense of information theory, i.e. Shannon/Weaver
or Kolmogorov complexity) contained in a piece, because that is
usually too much, and most of it is irrelevant.

(2) We want to be able to reconstruct (much of) the music in question
from the representation.
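A toy illustration of (1) and (2) together (Python with numpy; my own sketch, not anyone's actual notation system): a score-like list of note events throws away timbre, dynamics and everything else, yet still suffices to reconstruct an audible version.

```python
import numpy as np

# Feature (1): each note is reduced to (pitch in Hz, start in s, duration in s);
# timbre, dynamics, room sound etc. are simply not represented.
score = [(440.0, 0.0, 0.5), (550.0, 0.5, 0.5), (660.0, 1.0, 1.0)]

# Feature (2): the reduced representation still reconstructs audible music.
sr = 8000
out = np.zeros(int(sr * 2.0))
for freq, start, dur in score:
    t = np.arange(int(sr * dur)) / sr
    i = int(sr * start)
    out[i:i + len(t)] += np.sin(2 * np.pi * freq * t)  # arbitrary timbre: a sine

# Swap the sine for any other oscillator and the "piece", in the notated
# sense, is unchanged -- which is exactly the information thrown away.
```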

Both are done very nicely by Logic/Pro Tools/Cubase. I'm not saying
these systems are beyond improvement, but they seem good enough.
What's lacking is familiarity on the part of those interested in musical analysis.

As an aside: modern production software is beyond cutting and pasting
sonorous blocks. These programs do offer rather powerful sound
synthesis facilities.

k-punk said:
Or are you talking about the waves themselves? Yes, we
can now 'see' the sound, but that isn't the same as being able to
describe it...

It is in fact a way of description, just usually not a very useful one,
as it doesn't throw away enough information.

k-punk said:
My view would be that such programmes tell us nothing
about the qualitative aspect of sounds ... any description would come
from the users of the programmes, not the programmes
themselves...

No, they don't, but that was not claimed. They offer a pretty perfect
way of representing this music, so it can be reconstructed at will.

k-punk said:
But surely Dogger's point about timbre is well
made. Someone made a similar argument at the NoiseTheoryNoise
conference last year... the conclusion being that Pop, since it is
based on timbre, is not music at all (if music = that which can be
notated and precisely repeated).

Whoever said this has no clue. Pop music can be perfectly precisely
notated and repeated; just ask your CD player.

And to suggest that timbre is all that matters in pop (i.e. that rhythm
and melody don't matter) is foolish.
 

borderpolice

Well-known member
blissblogger said:
is that why it seems more neurotic and stilted than d&B/jungle, in that the later feels like it's exploding, whereas dubstep seems more clenched and inhibited

I think so. It's very extended foreplay. Partly I suspect it is a
reaction to the full-on-ness of so much other music; partly it's
driven by its tight connection to MCing, because the vocals need
instrumental restraint. A key influence here is 2-step garage of
course, and the whole Timbaland way of RnB production. A lot of this
leads back to Jamaica and dub. And partly it's the age-old realisation
that 'music is what happens between the notes'.
 

Moodles

Active member
I was asked above about which threads I was referring to wrt the devaluation of formalist critiques. At the time, I couldn't remember specifically and was too lazy to dig for it, but now Blissblogger has reminded me that it was a different thread in which he also mentioned McClary. My impression had been that he was dissing a formalist approach because he had read this dubious critique of a Whitesnake song. My immediate thought was that the problem lay less with the general technique of analysis and more with the fact that she was dealing with a Whitesnake song.

Now that he's explained in more detail, I see that she had an interesting thesis that was most likely a big stretch based upon the material. I doubt that the Whitesnake tune really has that message encoded in its chord changes. Still, I do believe that a kind of narrative can be read in chord changes, melodies, rhythms, the way in which a performer plays an instrument, etc., I just think she might not have done a very good job. (I'd still like to read it though...)

OTOH, the Toynbee book sounds very interesting. It sounds like the kind of critique that I have in mind.

WRT borderpolice's comments: I think I misunderstood what was meant by "the language of Cubase, etc." I really do believe that digital technology requires musicologists to expand the language they use to analyze music, and that this technology has caused conceptual advances. The very fact of working with Random Access Memory causes musicians/producers/composers to create music that is non-linear. This in turn results in music based around repetition in which development occurs through processing, filtering, etc. rather than through melodic or harmonic progressions. To me, this is a huge paradigm shift in music.
 

dogger

Sweet Virginia
borderpolice said:
None. And none was claimed. The key change is in ease of use: linking
to an mp3 exhibiting a peculiar feature of music (e.g. this bass) is now
very easy, with a low barrier to entry. This is a radical change from
how it was done before, and allows large-scale communication about all
musical phenomena. Simply ostensively indicating the sound is
the key step in the emergence of a suitable vocabulary.


I'm not sure how a (verbal) vocabulary is going to automatically emerge from the ability to reproduce sound as samples in texts, however useful this ability is, unless you count the ability to reproduce as the 'vocabulary' itself, and I would argue that it is not: there has to be some separation between language and object, otherwise the former ceases to function as language. Which leads on to my main point: there seems to be a basic misunderstanding lurking under the surface here, that has to do with *reproduction* of sound vs *representation* of it. Clearly, you are talking about reproduction - i.e. creating the sound again, in exact form - as a form of communicating its identity, which is fair enough. However, I think I misunderstood you earlier in the thread, and assumed you were talking about *representing* sound, that is, translating the sound into another medium (words, graphics), which necessarily involves a measure of interpretation or, as you put it, complexity reduction. Of which more below...


Well, this visualisation is a very concise and convenient
representation of key (but not all, eg timbre) musical
features. Incidentally, the traditional musical notation with staffs
is the same, a concise and convenient representation, at least given
for the music produced with certain kinds of instruments. Musical
notation has two key features:

(1) Complexity reduction: we don't always want to represent all the
information (in the sense of information theory, i.e. Shannon/Weaver
or Kolmogorov complexity) contained in a piece, because that is
usually too much, and most of it is irrelevant.

In a sense, yes. But this ignores a couple of important things:
A) Except in the case of improvised music that is later notated, the 'complete' piece of music usually does not exist prior to its existence in notation, since music (arguably) only really exists as physical sound in performance. You could argue that the 'complete' work exists in the composer's head, I suppose, before s/he writes it down, but this is to ignore the fact that what the piece *is* is conditioned by the means of its representation. i.e. when you compose something it often doesn't really make sense until you force it into the mould of its representation. I am guessing a similar situation exists for electronic composition using Cubase etc., with the difference being that the music can be 'performed' (i.e. realised in actual sound) instantaneously.
B) Consequently, what is considered to be 'irrelevant' information vis a vis the notated score is conditioned not so much by one individual's decision about what is or isn't relevant to their piece, as by the capabilities of the notated language itself. This might seem uncontroversial until you consider that at various times in the past, what was considered 'irrelevant' to musical notation has included things that we would consider absolutely central, like exact pitch, rhythm, dynamics, and tempo.

(2) We want to be able to reconstruct (much of) the music in question
from the representation.

Again yes, but I don't think 'reconstruct' is the right word. And to imply that we reconstruct 'much of' the music implies that there is some pre-existing 'complete version', which in the case of non-electronic music, there never is. The notated representation instead serves as the basis for *interpretation*, which adds the information needed for a complete realisation of the music in sound (a performance). The point is that there is no certainty about the process, and much subjectivity: 'reconstruct' is completely the wrong word.

Both are done very nicely by Logic/Pro Tools/Cubase. I'm not saying
these systems are beyond improvement, but they seem good enough.

Now I don't know much about this. I had been under the impression that Logic/Pro Tools/Cubase record the processes applied to a sound as a set of instructions so that reproducing it exactly and in its entirety becomes a possibility: that is, they provide the opportunity for *reproduction* rather than *representation* - am I right? If so, this would seem to make them rather different from traditional written notation, but perhaps you can help me on this?

What's lacking is familiarity of those interested in musical analysis.

I think I stand accused there... :)
 

Rambler

Awanturnik
Great thread. I think Dogger's said most of what I'd want to say (I was at that same conference listening to Fountains of Wayne), and bravo to borderpolice's analysis of the drum and bass/dubstep distinction. A perfect example of how a little bit of technical knowledge, plus some proper listening, and the vocabulary to express it all can go a long way. Personally, I would love to see more of this in criticism, and those critics (across the spectrum from pop to classical) who can do this are the ones I tend to gravitate towards.

McClary's a perfect example of this (although she would be disturbed, I think, to be considered a prescriptive writer telling people how such and such a song should be heard. The shock aspect of her work, given the situation in which she began to write, is important, but it's also pushing you to listen a little bit harder. I don't doubt that she hears the music in the way that she describes, and sometimes I do too, but the point is that it's all part of the mix, and definitely NOT a positivistic 'this is what this song is about' approach. That's precisely what she has spent her career writing against.)

I don't want to comment too much on the 'pop isn't music because you can't notate it' thing because I knew a couple of people at that conference and it may have been one of them (although I don't think so); but it's a strange statement that seems to me an admission of failure on the part of 'traditional' musicology, one that, as has been pointed out, we're trying hard to work around.

While I like the fact that people like Richard Middleton are trying to formalise pop music-ology, I do feel that he's going about it in the wrong way, returning again and again to redundant models from classical analysis: chord changes, strophic song structures, this sort of thing. This is a damaging approach, because 90% of popular music is going to look pretty slim on that basis: nearly everything is in 4/4 time (that includes DnB), nearly everything is based on a limited number of chord patterns, and, depending on genre (rock, house, pop, etc.), the basic forms are pretty small in number as well. I find it kind of odd that musicologists who claim to be putting pop's case for serious musicological study do so in terms that, frankly, make it look very weak indeed against the classical concert repertoire for which those terms were devised.

Yet if you take a timbral approach (the HUGE problems with doing so already mentioned notwithstanding), then pop suddenly looks incredibly complex, and a much more worthy object of study. What's more, interestingly, most of the classical repertoire then looks pretty bland and weak. (This is an extension of Eno's old point about 2 seconds of a pop record versus 2 seconds of a string quartet, obv.) My frustration is not with formalism per se, but with the fact that an appropriate formalism doesn't exist yet. And no, waveforms, spectral analyses etc don't cut it -- that's placing the subjective listener at an even greater remove than classical formal analysis, and we don't want to go back there in a hurry.
 

shudder

Well-known member
re: pop analysis that does a proper service to music.

Wayne & Wax occasionally drops into a sort of tech talk that goes some way towards appropriately describing the important and interesting aspects of the music he's into. His piece on reggaeton, "we use so many snares", talks a little bit about the rhythmically and timbrally distinctive parts of the music, although much of the writing is cultural-type stuff.

I also remember a pretty neat, very formal analysis of M.I.A.'s song "M.I.A." on some blog awhile back... anyone know what I'm talking about?
 

Rambler

Awanturnik
Yeah, W&W is someone who successfully treads that tricky line between formal and informal and has something to say. Everyone should read him (see also his stuff at the Riddim Method). The M.I.A. analysis I think you're talking about was here at Clap Clap. That's also a good 'un, and is as good a set of reasons as any to make your mind up on ye olde MIA debate.
 

shudder

Well-known member
Rambler said:
Yeah, W&W is someone who successfully treads that tricky line between formal and informal and has something to say. Everyone should read him (see also his stuff at the Riddim Method). The M.I.A. analysis I think you're talking about was here at Clap Clap. That's also a good 'un, and is as good a set of reasons as any to make your mind up on ye olde MIA debate.
yup that's the MIA one... linked from zoilus, I see, which is how I must have found it.
 

borderpolice

Well-known member
OK, thanks dogger for the explanation, I understand your position much
better now.

(1) I do think that the language you are looking for is already out
there; it's just not widely spoken. Producers can and do latch on to
genres, and they can communicate to others how to learn producing in
that genre: witness, for example, the many tutorials available for
making DnB. The language used in such tutorials is often a mixture of
screen-shots from sequencing programs (these usually show two
different things: (a) the placement of notes, i.e. a variant of
conventional staff-based musical notation, and (b) the sonic
characteristics of sounds, through EQing curves, ADSR and filter levels,
and spectral distributions; for an example, see this. It is quite easy for
anyone moderately familiar with synthesisers to reproduce the sounds
described in such a text, i.e. something that is not done at all in
classical music, or, more precisely, that is done by naming instruments,
as in "op. 24 for piano", where the instrument name denotes timbral
characteristics. This was possible because of the severely limited
number of timbres instruments could produce before electronic
synthesis.), giving a concise and intuitive simplification of musical
structure (relative to an understanding of the sequencer), together with
iconic presentation of sounds by way of linked mp3 files and standardised
phrases (e.g. "Reese Bass", "Ghost Snares", "Pepperseed
Riddim"). Again, the function of these tutorials is to allow others
to reproduce a certain type of sound. The problem is
mostly that this vocabulary is unfamiliar to conventional music critics,
hence the near-complete irrelevance of the latter to most modern
musicians. Another related problem is that producer communities are
currently too distributed for a more complete standardisation of their
vocabulary: a breakbeat producer will most likely be using FruityLoops
and hence be adapted to the language suggested by that program,
whereas an RnB person is more likely to speak Pro Tools or Logic. An
averaging out, a convergence between these dialects, is likely to
emerge only when public discussion of production and composition
techniques becomes commonplace in public fora.

(2) The explosion in the complexity of music production will make the
gulf between conceptual vocabulary (in the generalised form partly
described above) and the musical product bigger than was the case with
conventional classical music. By this I mean that musical vocabulary will
necessarily be less concise (and hence to a certain degree less
useful) than it used to be for classical music. I suspect this
inevitably means that messing about, experimenting and aleatoric elements
will always play an important role in the composition process. In
other words, the idea that music exists in the composer's head in the form
of a (non-musical, linguistic, conceptual) representation at the start
of the composition process, which you seem to allude to, is likely to
be less attractive than conceptualising one's composition process as
an interaction between conceptual ideas (I want to write Machiavellian
pimp pop meets Kate Bush today), which are necessarily imprecise, and
the (for the composer) surprising results that first tentative
implementations of such ideas produce.

(3) Re: things changing in importance over time: you are right. But the
beauty of sequencer- and iconic-sample-based vocabulary is that you can
specify to absolute precision what a piece of music sounds
like; indeed, that's what a modern composer usually does. The
vocabulary allows you to mention everything, but it also allows you to
be imprecise. This is really different from the notation systems of
the past.
 

borderpolice

Well-known member
Rambler said:
And no, waveforms, spectral analyses etc don't cut it - that's placing the subjective listener at an even greater remove than classical formal analysis, and we don't want to go back there in a hurry.

I agree that waveforms are mostly not very informative. Note however that producers
talk about putting a sine wave under the bass drum, or about using a sawtooth to get this
or that effect. In addition, graphical EQing is most useful and easily communicated (visually).
Hence there is an important place for this.
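A crude sketch of the "sine under the bass drum" move (Python with numpy; a toy example of mine, not from anyone's actual production chain):

```python
import numpy as np

sr = 8000
t = np.arange(int(sr * 0.3)) / sr  # one 300 ms hit

# A clicky "kick": a fast downward pitch sweep with a sharp amplitude decay
kick = np.sin(2 * np.pi * 150 * np.exp(-12 * t) * t) * np.exp(-20 * t)

# The sine underneath: a steady 50 Hz sub with a slower decay, so the low
# end keeps ringing after the click is gone
sub = 0.8 * np.sin(2 * np.pi * 50 * t) * np.exp(-6 * t)

layered = kick + sub

# Over the last 50 ms the sub dominates: the kick has decayed away
tail = slice(-int(sr * 0.05), None)
sub_rms = np.sqrt(np.mean(sub[tail] ** 2))
kick_rms = np.sqrt(np.mean(kick[tail] ** 2))
```

The point being that the layering is trivially describable: "a 50 Hz sine under the kick, decaying slower", plus a linked mp3, pins it down.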
 

Rambler

Awanturnik
True, but those sorts of things are more useful from the composition/production side: they explain exactly how a musician created such and such an effect in the studio, and help others to do the same. They're sort of analogous (very vaguely) to musical scores in that way. So they might function as a resource for analysis but not as a method for analysis itself, in the same way that traditional formal analysis is removed from the score itself - the set of instructions for reproducing the music - by one or two steps.

I'm guessing here - I don't know enough about academic analysis of electronic music to know what sort of methodologies are in use.
 

shudder

Well-known member
borderpolice said:
I agree that waveforms are mostly not very informative. Note however that producers
talk about putting a sine wave under the bass drum, or about using a sawtooth to get this
or that effect. In addition, graphical EQing is most useful and easily communicated (visually).
Hence there is an important place for this.

two different beasts. when we talk about waveforms as a means of analysis, the waveforms are MESSY, since they are representations of a fully mixed piece. sine and sawtooth (and square and triangle) waves are super simple, and each have a clear, characteristic sound. If I were to draw an arbitrary (and complex) waveform, no producer would have a clue in hell what it might sound like.
 

borderpolice

Well-known member
shudder said:
two different beasts. when we talk about waveforms as a means of analysis, the waveforms are MESSY, since they are representations of a fully mixed piece. sine and sawtooth (and square and triangle) waves are super simple, and each have a clear, characteristic sound. If I were to draw an arbitrary (and complex) waveform, no producer would have a clue in hell what it might sound like.

Oh sure, but that shows a limitation of the method, not its complete inapplicability.
Choose the right tools for the job, innit! Incidentally, by looking over the Fourier spectrum
of a sound, one can get an idea of what kind of sound one is dealing with. And things like ADSR or
graph EQ are but simplifications of the Fourier spectrum (over time, in the former case).
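As a toy version of that (Python with numpy; my own sketch), summing spectral magnitude per band is the "graph EQ" simplification of the full Fourier spectrum:

```python
import numpy as np

sr = 8000
t = np.arange(sr) / sr
# A bright sound (sawtooth) and a dull one (sine), both at 220 Hz
saw = 2 * (220 * t % 1) - 1
sine = np.sin(2 * np.pi * 220 * t)

def band_energy(x, lo, hi, sr=8000):
    """Total spectral magnitude between lo and hi Hz: one EQ band's worth
    of the Fourier spectrum, collapsed to a single number."""
    spec = np.abs(np.fft.rfft(x))
    f = np.fft.rfftfreq(len(x), 1 / sr)
    return spec[(f >= lo) & (f < hi)].sum()

# Same pitch, very different timbre: above 1 kHz the sawtooth has real
# energy while the sine has essentially none
bright = band_energy(saw, 1000, 4000)
dull = band_energy(sine, 1000, 4000)
```

A handful of such band numbers is a very coarse description, but it is already enough to tell "what kind of sound one is dealing with" in the sense above.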
 

shudder

Well-known member
borderpolice said:
Oh sure, but that shows a limitation of the method, not its complete inapplicability.
Choose the right tools for the job, innit! Incidentally, by looking over the Fourier spectrum
of a sound, one can get an idea of what kind of sound one is dealing with. And things like ADSR or
graph EQ are but simplifications of the Fourier spectrum (over time, in the former case).

certainly in the analysis of music, we can use terms relating to the envelope of a sound (ADSR etc) and the relative loudness of frequency bands (graphical EQs, and discrete Fourier transforms) and be understandable and understood... that's a good point.

Many of these rather more vague and hard-to-interpret representations (waveforms of whole mixes, Fourier-based representations of spectra) can be useful elements of an analysis (although I would say again that looking at a waveform straight up tells you not much at all), but they are neither going to be a starting point, nor a language in which to root the analysis. Of course, in most any analysis, listening to the music should probably be the starting point (right?). We're probably still missing a sufficiently general and widely applicable language here, although some of these representations do help, yes.
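To make the envelope part concrete, here's a minimal ADSR sketch (Python with numpy; my own toy example, with the usual synth parameter names): four numbers plus a hold time describe a sound's entire amplitude contour.

```python
import numpy as np

def adsr(attack, decay, sustain, release, hold, sr=8000):
    """Piecewise-linear ADSR amplitude envelope. Times are in seconds,
    sustain is a level in [0, 1]."""
    a = np.linspace(0, 1, int(sr * attack), endpoint=False)       # attack ramp
    d = np.linspace(1, sustain, int(sr * decay), endpoint=False)  # decay
    s = np.full(int(sr * hold), sustain)                          # held sustain
    r = np.linspace(sustain, 0, int(sr * release))                # release tail
    return np.concatenate([a, d, s, r])

# A plucky shape: fast attack, quick decay to a low sustain, short release
env = adsr(attack=0.01, decay=0.1, sustain=0.3, release=0.05, hold=0.2)

# Applied to a plain tone, those five parameters are the whole "shape" story
sr = 8000
tone = np.sin(2 * np.pi * 220 * np.arange(len(env)) / sr) * env
```

That compactness is exactly why ADSR terms are "understandable and understood": the reduction is drastic, but it is shared.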
 