Word2complex
A workshop with Varia
This workshop was a play on word2vec, a model commonly used to create ‘word embeddings’. Word embedding is a technique used to prepare texts for machine learning. After splitting the text up into individual words, word2vec assigns a set of numbers (a vector) to each word based on the other words it tends to appear near. With word2complex Varia proposed a thought experiment to resist the flattening of meaning that is inherent to such a method, trying to think about ways to keep complexity in machinic readings of text material.
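For reference, this is roughly what such a flattening looks like in practice. A minimal sketch, assuming the gensim library (version 4.x); the toy corpus and the parameter values are only illustrative.

# A minimal sketch of how word2vec turns words into vectors,
# assuming the gensim library (version 4.x) is installed.
from gensim.models import Word2Vec

# A toy corpus: a list of tokenised sentences (illustrative only).
sentences = [
    ["the", "reader", "opens", "the", "book"],
    ["the", "writer", "closes", "the", "book"],
    ["the", "reader", "and", "the", "writer", "share", "a", "text"],
]

# Train a small model; each word becomes a vector of 50 numbers.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, seed=1)

# The vector for a single word, and the words the model places nearest to it.
print(model.wv["reader"])
print(model.wv.most_similar("reader", topn=3))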
Step 1: Cutting embeddings of words
Choose a body of text that you would like to analyse. Count how many times each word appears in this text. You can use a script or an online service. Pick one word from the resulting list of words.
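For the counting, a minimal script sketch in Python; the filename corpus.txt is only a placeholder for your own body of text.

# Count how often each word appears in a text file.
# "corpus.txt" is a placeholder for your own text.
import re
from collections import Counter

with open("corpus.txt", encoding="utf-8") as f:
    text = f.read().lower()

# Split the text into words and count how often each one appears.
words = re.findall(r"\w+", text)
counts = Counter(words)

# Print the most frequent words, to pick one from.
for word, count in counts.most_common(30):
    print(count, word)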
Step 2: Embedding words
Use CTRL+F to find your word in the text that you are analysing. For each moment in which the word is used, briefly describe the context in which the word appears.
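If CTRL+F feels too slow, a small script can list every occurrence together with a few words of surrounding context. A sketch under the same assumptions as above; the chosen word "book" and the window size are placeholders.

# Print a window of words around each occurrence of the chosen word.
# "corpus.txt" and the word "book" are placeholders.
import re

with open("corpus.txt", encoding="utf-8") as f:
    words = re.findall(r"\w+", f.read().lower())

chosen = "book"
window = 5  # number of words shown on each side

for i, w in enumerate(words):
    if w == chosen:
        left = " ".join(words[max(0, i - window):i])
        right = " ".join(words[i + 1:i + 1 + window])
        print(f"... {left} [{chosen}] {right} ...")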
Step 3: Identify/generate/complexify relations
Pick two words that have been embedded (this can include words that someone else embedded). Expand the semantic map below and feel free to adjust the connectors (they are starting points, not prompts)!
Semantic map
______ is to ______ as ______ is to ______
______ is to ______ not as ______ is to ______
______ is not to ______ as ______ is to ______
______ is to ______ as ______ is not to ______
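For contrast with the semantic map above: an embedding model answers the ‘is to ... as’ relation on its own by adding and subtracting vectors and returning a single nearest neighbour. A minimal sketch, assuming the gensim library and its downloadable pretrained vectors; the example words are the standard textbook analogy, not part of the workshop.

# How an embedding model fills in "A is to B as C is to ___":
# vector arithmetic followed by a nearest-neighbour lookup,
# collapsing the relation to one answer.
# Assumes gensim and an internet connection to fetch a small pretrained model.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # pretrained word vectors
answer = vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1)
print(answer)  # typically lands near "queen"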