Ok, I hope these diagrams clear things up. Last night after I went to bed I sent myself an SMS saying "S. nodes seem to be attributes and signifieds at the same time. discuss. The only diff between any two random nodes is their signifier!"
Which reflects my state of mind more than any great insight, but I am attempting to reconstruct whatever it was that wouldn't let me sleep till I wrote it down. There was something else which was probably more important, but I assumed it would stick in my mind, and it's melted.
Ok. Here's my first diagram. It's a Saussurean semantic network as I understand it (actually... it's one of the ways I understand them; I need to clarify that, too):
So here we have a network in which the nodes are empty and are defined only by their differences from other nodes. Possibly they should all just have ">not the same>" on the lines linking them up; for the purpose of demonstrating my problem, I don't think it matters.
Say we take the top-left node and try to use it. Its value becomes [something >more grey> than something which is >laughed at> by something which >collects> something which is >eaten> by something which is >not the same> as [1]], where [1] is everything in the brackets, including [1] itself.
That seems problematic to me, and is what I mean by 'pointers all the way down'. All you have is relations, and relations aren't things/values.
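If it helps to see the loop laid bare, here's a rough Python sketch of what I mean. It's a toy of my own, with invented node names and relations, and not anything Saussure actually proposes: the nodes carry no content at all, only labelled arrows to other nodes, so any attempt to spell out a value just cycles back to where it started.

```python
# Toy network: every node is empty; all it has is relations to other nodes.
relations = {
    "A": [(">more grey than>", "B")],
    "B": [(">laughed at by>", "C")],
    "C": [(">collects>", "D")],
    "D": [(">eaten by>", "E")],
    "E": [(">not the same as>", "A")],   # ...and round we go
}

def value_of(node, seen=None):
    """Spell out a node's 'value' purely in terms of its relations."""
    if seen is None:
        seen = set()
    if node in seen:
        return "[1]"   # looped back to the node we started from
    parts = [f"something {rel} {value_of(other, seen | {node})}"
             for rel, other in relations[node]]
    return "[" + " and ".join(parts) + "]"

print(value_of("A"))
# [something >more grey than> [something >laughed at by> ... [1]]]]]
# Without the loop check, the recursion never bottoms out: relations all
# the way down, and no value anywhere.
```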
On this next map, we've got a label for one of the nodes.
We'll call that our signifier. So the label and the node together form a sign. Except now we've got nothing different, just that [1] = [elephant] = [1].
However, if /elephant/ is the value of that node, then we can begin to put values into the other nodes, too.
An /elephant/ is >more grey> than a /tapir/. /elephant/ is >not the same> as a /pig/, which has >less fur> than a /tapir/. /elephant/ is >bigger than> many things, many of which are >less mobile> than /tapirs/ and >eaten by> /pigs/, but let's say a /flower/, because /flowers/ are >collected by> /people/, who >eat> /pigs/ and >laugh at> /tapirs/ and >hunt> /elephants/, which...
Obviously this is a sham network, and the relations would be more graded and so on, but you can (I hope) see that once you've got one solid value in the net, you can make a good start at filling things in. But until there's an actual value, the net just goes round in horrific endless loops. Once there's a value, the signifier [elephant] has a signified with actual properties and an actual value.
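Here's the same kind of toy sketch again (names and relations still invented by me), but with one node seeded with an actual value. Everything the relations can reach then gets described in terms of that seed, which is what I mean by "a good start at filling things in":

```python
# Toy relations, as above. Only /elephant/ starts out with any content.
relations = [
    ("elephant", ">more grey than>", "tapir"),
    ("pig", ">less fur than>", "tapir"),
    ("flower", ">collected by>", "people"),
    ("people", ">eat>", "pig"),
    ("people", ">hunt>", "elephant"),
]

values = {"elephant": "/elephant/ (a big grey animal with a trunk)"}

# Keep sweeping the relations; whenever one end already has a value,
# the other end can be described relative to it.
changed = True
while changed:
    changed = False
    for a, rel, b in relations:
        if a in values and b not in values:
            values[b] = f"that to which {values[a]} stands in {rel}"
            changed = True
        elif b in values and a not in values:
            values[a] = f"that which stands in {rel} to {values[b]}"
            changed = True

for node, val in values.items():
    print(node, "=", val)
# One seed value grounds every node the relations can reach; with no seed,
# the loop has nothing to start from and nothing ever fills in.
```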
The problem is that if the nodes (signifieds) are empty (valueless), the only thing with any content that could define them is their signifiers, which is by definition impossible in Saussure's model, I think. We get things like "that which we call elephant is more grey than that which we call tapir", but since we don't have any value in elephant or tapir, the only thing we can compare is their signifiers. And signifiers aren't grey, per se. And once we start comparing signifiers, they lose their arbitrary relations, because they gain properties in and of themselves, which is also bad and unpossible.
I'm not sure if I'm making sense, or if I'm talking through my hat about things which are not at all the way I'm describing them, but that's how it looks to me at the moment.
I also have some slight confusion about Saussure's values. If we define values in terms of what they are different from (I am going to assume an actual value for values, for this), do signifiers point to one 'lump' of thought-stuff, or do they point to various properties in the thought-stuff? Like with Locke's example of gold, does [gold] point to /gold/, which is defined as (not everything else in the thought-stuff mass), or does it point to /gold/, which is defined as /not everything else in the thought-stuff mass except yellow, heavy, ductile, metallic, etc./?
And if the latter, why doesn't [gold] just point to the properties yellow, heavy, metallic, etc., rather than explicitly pointing negatively to everything except the properties it does have?
It seems... computationally inefficient for a signified to have, instead of a small set of positive values (attributes), a near-infinite set of negative values, and for people to have to work out what a word means by computing it from everything it does not mean. People understand words in milliseconds, and different parts of the brain light up for different words/categories. If we were computing values by 'not everything but' methods, you'd assume all the parts of the brain which store semantic information would light up except the parts which correspond to the actual value of the word. But this does not happen...
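Just to pin down the "inefficient" feeling, here's one more toy sketch (the attribute list is made up, and I'm not claiming anything about how brains actually store this): defining /gold/ by the few attributes it has stays small, while defining it as everything it is not has to grow, and be updated, along with the whole rest of the semantic space.

```python
# Pretend semantic space: a real one would have vastly more attributes.
all_attributes = {"yellow", "heavy", "ductile", "metallic",
                  "furry", "grey", "edible", "fragrant"}

# Positive definition: store only what /gold/ is.
gold_positive = {"yellow", "heavy", "ductile", "metallic"}

# Negative definition: store everything /gold/ is not.
gold_negative = all_attributes - gold_positive

# Both answer "is gold yellow?"...
print("yellow" in gold_positive)       # True
print("yellow" not in gold_negative)   # True

# ...but the positive store stays the size of the word's own attributes,
# while the negative store grows with the whole space and has to be
# recomputed whenever anything new (say "fizzy") is added to it.
print(len(gold_positive), len(gold_negative))   # 4 vs. N - 4
```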
But my query could easily be based on my misreading and my tutor misunderstanding my question, leaving me with a poor understanding.
I'm trying to find exactly what I'm after in my old notes, but they're amazingly badly organised, and I think I'm merging several classes into one in my memory anyway. If you PM me an email address I can send you a bunch of readings on and around the topic of speech comprehension/production from a psych point of view, and quite possibly some which examine neural activation with speech/hearing/etc.
Thanks again for your time!