A diagram showing an LLM trained on training data, where it learns that a poem is the average of the texts described as poems in its training data: this is NORMALISATION. The right half of the diagram shows a fine-tuned model that has been told by humans that a rhyming couplet looks like a poem. This fine-tuned model has learnt that a poem rhymes: it generates poems based on a statistical analysis of how human workers received the texts it previously generated as poems. This relates to sycophancy in chatbots: they aim to please.
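The distinction the diagram draws can be sketched as a toy calculation – this is only an illustration of the two tendencies, not how any actual language model works, and the corpus, feedback scores, and function names are invented for the example:

```python
from collections import Counter

# Toy "training data": forms of texts that were labelled "poem".
# Most of them do not rhyme.
poems = ["free verse", "free verse", "blank verse", "rhyming couplet", "free verse"]

def base_model(corpus):
    """Normalisation: output the most frequent form in the training data."""
    return Counter(corpus).most_common(1)[0][0]

def finetuned_model(corpus, feedback):
    """Idealisation of reception: reweight each form by how highly
    human raters scored generations of that form."""
    scores = Counter()
    for form in corpus:
        scores[form] += feedback.get(form, 0)
    return scores.most_common(1)[0][0]

# Hypothetical rater feedback: the human workers reward rhyme.
feedback = {"rhyming couplet": 5, "free verse": 1, "blank verse": 1}

print(base_model(poems))              # → free verse (the statistical average)
print(finetuned_model(poems, feedback))  # → rhyming couplet (what pleases raters)
```

The base model reproduces the most common form in the corpus, while the fine-tuned model shifts towards whatever the raters rewarded – the "aim to please" dynamic.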

Heuser argues that LLMs “flatten historical variation into an idealized representation of poetic form.” In our discussion group I first argued that normalisation is the better term, but through the discussion we decided that idealisation may be more descriptive – though the model is not idealising the training data so much as the reception it has learnt during fine-tuning.
