
2024-03-27 Who Controls Presentation? Presentation vs. Semantics

This is part of my series on re-assessing the designs of Gemini and Gopher:

2024-03-22 Gopher's Uncontextualized Directories vs. Gemini's Contextualized Directories

2024-03-23 What Gemini Gets Wrong With Anti-Extensibility

2024-03-24 The Necessary Semantics behind Emphasis and Strong

2024-03-25 The Simplicity of List Nesting: How AsciiDoc Does It

2024-03-26 The Case for a 4th-Level Heading

For the past three articles, we covered more concrete aspects of Gemtext, so this post will focus on something more conceptual. While real-life usage is important and must inform our decisions, we cannot forget our conceptual ideas; both matter. Our conceptual ideas inform our real-life usage, and our real-life usage informs our conceptual ideas; they are interrelated.

There are two aspects to markup that I want to talk about in this post: readability, or presentation, and semantics. Initially we might think this is a dichotomy, or perhaps a spectrum. I don't think it's as simple as that. Let us begin with who can control presentation:

Presentation

The Benefits of User-Controlled Presentation

User-controlled presentation allows one to focus on linguistic art and relegate its visual presentation to whatever the user finds most comfortable or most accessible. A user could present the information as a scrollable document or as pages, with margins or without, justified or not, centered on the screen or aligned left or right, etc. The presentation becomes the means to the end, the linguistic content. EPUB, MOBI, Gemtext, and Markdown all have this characteristic and focus to some extent.

The Benefits of Author-Controlled Presentation

Author-controlled presentation becomes the focus when the author stores information in the presentation itself. You might have linguistic art that relates to graphic design, and you might want to demonstrate how presentation affects the reception and feeling of that linguistic art. In this case, the author controls the presentation as well as the text - the presentation becomes part of the content.

Author-controlled presentation is also necessary when the presentation is integral to the format itself. The visual arts, for example, cannot easily be expressed in or translated to language. The semantics of visual art are implied through its presentation, whereas linguistic art has a heavier focus on semantics and may imply sensory experience through it (a technique called imagery).

Lastly, one might want to use the presentation of a format as the art itself. ASCII art and Unicode text art are examples of this. Such art relies on the *presentation* of the individual letters, not their linguistic semantics, to convey its information.

The Fluidity of Semantics, Presentation, and Format

Like text, sound can be a presentation of a different format, the format itself, or a carrier of semantics.

Sound can present linguistic information. In fact, I would argue a reversal here: linguistic information originates from sound, and most text is merely a presentation of it. Song lyrics and spoken words are examples of this.

Sound can also be the artistic format itself. Instrumental music uses the sounds themselves as the art, and the semantics are implied from them.

Lastly, sound can convey semantic information through the use of consistent sounds that represent specific things. Examples of this are Morse code and dial-up modems. The same thing can also be done visually, as when sound is translated to an optical track on the side of film reels. Texturally, sound is converted to bumps and grooves on vinyl records and CDs, although the latter undergoes a conversion from sound to binary first.
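As a minimal sketch of that idea (my own illustration, not from this series; the table is a small subset of the real Morse code), each consistent pattern stands for exactly one letter, so the carrier can be sound, light, or touch while the semantics stay fixed:

```python
# A toy mapping in the spirit of Morse code: each signal pattern
# refers to one letter and one letter only. The presentation of a
# pattern (beeps, flashes, taps) can vary; the semantics cannot.
MORSE = {"S": "...", "O": "---", "E": ".", "T": "-"}

def encode(word):
    # Translate text into dot/dash patterns, one letter at a time.
    return " ".join(MORSE[letter] for letter in word.upper())

print(encode("sos"))  # ... --- ...
```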

A Combination of Both?

We can combine presentation and semantics in multiple ways:

Intermixing involves inline media and preformatted blocks within the document format. Gemtext allows for preformatted blocks, and HTML and Markdown allow for both preformatted blocks and inline media.

PDF documents and older forms of HTML combine semantics with a presentation created by the author. One can customize how text is visually presented, how it is broken up, etc.

Gemtext and Markdown try to fit textual information into a readable plaintext format. All line types are simple prefixes attached to lines, and sub-lists are tied to the visuals through the requirement of indentation.

Binary information in memory and on disk is presented in a multitude of ways. Integers become visual text on screen, or sound. Waveforms become sound, or visual waves, or even text in the form of live captions. Vibration happens by storing binary data that tells a device to vibrate, for how long, and in which areas.
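To make that concrete, here is a small Python sketch (my own illustration; the bytes are arbitrary example data) presenting the same four bytes as text, as an integer, and as hexadecimal:

```python
# The same four bytes, presented three different ways.
# Nothing about the data changes; only the interpretation does.
data = b"\x48\x69\x21\x00"

as_text = data[:3].decode("ascii")                 # "Hi!"
as_int = int.from_bytes(data, byteorder="little")  # 2189640
as_hex = data.hex(" ")                             # "48 69 21 00"

print(as_text, as_int, as_hex)
```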

Everything is a Presentation?

As it turns out, text itself is a visual presentation of semantics! The interesting thing about text is that it is also translatable to sound, and this is because text often *represents* sound. I say this because sound *came first* in the development of languages. But really, sound now also represents text.

Poetry combines sound and text in an interesting way. We can often visually see rhymes and rhythms, but we also hear them. And yet, some rhymes are heard only, but not seen, and some are seen only and not heard.

In fact, language itself sits in this interesting dynamic between sound and visuals. Language is conveyed through sound, but it has different characteristics from instrumental sounds. Language is conveyed through visuals, but again, it has different characteristics from visual art. Language enters this space because we try to associate semantic meaning and structure with sounds and symbols. We create units like morphemes, words, phrases, and sentences. Each layer on top of sound adds meaning and structure, but also introduces new ambiguities. Combining morphemes combines meanings in a different way from combining instrumental sounds. In text, our punctuation conveys semantics that in sound would be conveyed through the absence of sound, or through intonation and pitch.

The Dynamic between Semantic Art and Presentation Art

The question I now ask is whether all art is presentation, or whether there is also semantic art. In fact, all of language is a presentation of semantics. Words refer to ideas; they are not the ideas themselves. Images refer to objects and ideas, like metaphors and symbols.

When we try to store ideas in a computer, what do we use? HTML is a textual format that could just as easily have been an aural format of beeps and other sounds. Binary is a numeric representation of ideas; visual art, a visual representation. None of them are the ideas themselves, but presentations of those ideas. We choose certain formats based on how we want to present the information, the need to convert the information between presentations, and how authors construct and write that information into the computer to be stored.

One might think about how HTML is different from visual art. What makes it different? Well, HTML has combinations of symbols that refer to one thing and one thing only. A p tag refers only to the concept of a paragraph, and a title tag refers only to the concept of a title. In visual art, however, we are drawing the visuals - the objects and actions, space, or even visual concepts like lines and shapes. An object *could* refer to just that object, but it could also refer to something more - a metaphor, a symbol, a sound, a feeling, etc. How the semantics - the meaning and concepts - adhere to the visual arts is more fluid, more ambiguous, less certain.

So, if we want to know whether semantic art exists, we must first answer *what art is.* Are ambiguity, fluidity, and uncertainty essential components of all art? Or is it the relationships and categorization that make art art? The former could exclude semantic-focused presentations (like formal languages), but the latter doesn't.

Accessibility

Visual Art

Because not everybody can see, visual art needs some mechanism to make it accessible to those who are blind. The common approach is to provide alt text that can be spoken aloud, which might describe the visuals, or might describe what the art represents - its ideas, symbols, feelings, metaphors. I would say that the latter is more accessible than the former, but it removes some of the artistic component and can involve interpretation.

In the above case, we are really conveying a linguistic representation of the art through sound, because the alt text is spoken aloud. An additional way to convey visual art through sound is instrumental music. Instead of capturing the ideas in a linguistic format to be spoken aloud, we capture the ideas of the art through instrumental sounds. Even presentations can have multiple paradigms or "genres".

The visual arts often rely on their ambiguity, uncertainty, and fluidity. When we convert one presentation to another, we have to change this fluidity and ambiguity to fit the new presentation. Sound has different ambiguities from visuals and from language.

A semantic representation, however, tries to convey all of the relationships, categorizations, meanings, and even the ambiguities, within a consistent presentation. A combination of letters refers to one thing and one thing only. We remove the ambiguity of the representation so we can convey the ambiguity of the art as meaning itself.

Linguistic Art

Linguistic art relies on language. As mentioned above, language sits at this interesting cross between sound and visuals. For most languages, our writing systems are now a visual representation of our sounds, but this is not how writing started out.

Writing started as an attempt to convey our ideas and meaning through a visual representation. We painted deer when we wanted to write about deer. We painted a human running when we wanted to refer to the act of running. Two interesting things happened, however: we started to associate composable sounds with these symbols, and we started to associate composable meaning with these symbols.

Let us create a hypothetical world. A civilization starts to associate multiple words with one symbol: a symbol originally created to represent one word becomes the symbol for another word that sounds similar. Then it begins to compose symbols together based on the sounds of the words they refer to. The symbol for running referred to the word "run", so now all words that begin with the syllable "run-" are prefixed with the running symbol. This becomes a sound-focused writing system.

Another, separate civilization takes a semantic-focused approach: a sunrise combined with the running symbol becomes a past-tense verb. Noon becomes the symbol for the present, and sunset the symbol for the future. They begin to associate concrete symbols with grammatical constructions and with ideas.

Back to the real world: both of these approaches intermixed in the development of writing. Ancient Chinese took a more semantic-focused approach. Egyptian hieroglyphs composed meaning with symbols, and they became the basis for Phoenician, which took a sound-focused approach. Phoenician used symbols representing consonants, and that's our abjad. The Greeks took this and added symbols for the vowels, and that's our alpha-bet, a descendant of the aleph-bet.

Over the course of history, our languages go back and forth between these two poles: semantic-based and sound-based. Our emojis are hieroglyphs that can be combined to create stories. And yet we still use them with our traditional languages, many of which are sound-based, or *phonetic*.

Linguistic art imposes structure on whatever format it lays on top of. Sound is turned into phonemes, morphemes, syllables, words, phrases, and sentences. Visuals are turned into symbols and letters. These letters became the basis for translating audible morphemes, syllables, words (whether they exist and how exactly they exist won't be covered here), and audible markers for phrases and sentences, into visuals. And yet there is some commonality here, something that is part of language but transcends the aural and visual presentations. Words, phrases, and sentences are not aural or visual; they are beyond that. They are a structural component of language that is conceptual.

In linguistics, phonemes are *not* sounds. They are a structural layer of language that rests on whatever media we intend to use to convey our language. In sign language, it's signs. In audible language, it's sounds. In visual language, it's symbols (or combinations of symbols), or even components of symbols. We can also create a language of touch, where phonemes might be the placement of our fingers on another's hand or body, and their movement. This is what Protactile uses.

However, when we talk about linguistic art, we can talk about three things:

Rhyme and rhythm are most commonly thought of as relying on sound. However, they can also rely on visuals and touch. The problem is that what rhymes in sound does not necessarily rhyme in writing. Sign languages and tactile languages are usually considered separate languages, since they are not wholly based on translating sound to visuals or touch, the way writing often is.

There are also higher structures that can be used artistically. These include organization, relationships, and repetition - chiasmus, inclusio, stanzas, parallelism, and various other literary techniques. These often transcend translation, because they don't rely on a language's underlying media or on how it categorizes objects (*lexical* semantics).

Semantics involves meaning *and* relations between concepts and ideas. It includes metaphors, symbols, how we categorize objects within our world-view, etc. How translatable these are depends on whether they rely on a language's particular system of categorization. While they can always be translated, one might need to put in more work to translate them well into other languages, many times by describing the relations and world-view.

Preformatted Text vs. Textual Formats

Preformatted text and textual formats are conflated in Markdown and Gemtext. However, they are different. Preformatted text is a mechanism that allows us to control the spacing and wrapping of text. It is a toggle that effectively disables the default formatting of Gemtext/Markdown and puts us into a plain-text mode. It can be used for ASCII art, Unicode text art, or other textual formats outside of Gemtext.
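To illustrate the toggle, here is a minimal sketch of a Gemtext line classifier in Python. The line prefixes are the ones Gemtext actually defines; the function name and output shapes are my own:

```python
# A minimal Gemtext line classifier. The ``` line is a toggle:
# inside a preformatted block, line prefixes are ignored and every
# line passes through verbatim.
def classify_lines(source):
    preformatted = False
    for line in source.splitlines():
        if line.startswith("```"):
            preformatted = not preformatted
            if preformatted:
                # Text after the opening fence serves as alt text.
                yield ("preformat-alt", line[3:])
            continue
        if preformatted:
            yield ("verbatim", line)  # spacing and wrapping preserved
        elif line.startswith("=>"):
            yield ("link", line[2:].strip())
        elif line.startswith("#"):
            yield ("heading", line.lstrip("#").strip())
        elif line.startswith("* "):
            yield ("list-item", line[2:])
        elif line.startswith(">"):
            yield ("quote", line[1:].strip())
        else:
            yield ("text", line)
```

Inside the fences, every line passes through untouched - the parser deliberately knows nothing about what the text is.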

Textual formats have their own formatting rules that can be encompassed within plaintext. C is formatted a specific way that's different from Python or Lisp. AsciiDoc follows different rules from Markdown and Gemtext.

Unfortunately, because the two have been conflated, we cannot determine whether a preformatted block is a different textual format (e.g., source code, CSV), whether it is trying to control the visual presentation of text (e.g., indentation, shaped poetry), or whether it is trying to use the visuals of text for an artistic purpose (e.g., ASCII art, Unicode text art).

The difference between the three is this:

A textual format follows its own formal rules that hold regardless of how the text is displayed (e.g., source code, CSV).

Controlled presentation uses spacing, indentation, and line breaks to shape otherwise ordinary text visually (e.g., shaped poetry).

Textual art relies on the visual forms of the characters themselves rather than their linguistic meaning (e.g., ASCII art, Unicode text art).

Notice that this is a spectrum from more formal textual formats, to a combination of formal textual formats and visuals, to just visuals.

The Purpose of Document Formats

The final consideration we must wrestle with is the purpose of document formats. It turns out there are at least three purposes:

A format for converting between other document formats.

A format suited to reading and writing as plain text.

A main format intended to be read through a document viewer.

Each has its own characteristics and things that it must support.

A document format used for conversion must support a wide variety of elements so that they can be translated back and forth between documents. Tools like Pandoc and AsciiDoc try to do this by extending Markdown with more semantic information, or by creating their own document format that supports all the elements they would need. One might say that assembly language and Intermediate Representation (IR) serve a similar purpose in compilers and interpreters.
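As a hedged sketch of such a conversion driven from Python (the filenames are hypothetical; --from, --to, and --output are real pandoc flags):

```python
# Convert a hypothetical Markdown post to AsciiDoc via pandoc's CLI.
# Assumes pandoc is installed and on the PATH.
import subprocess

subprocess.run(
    ["pandoc", "--from", "markdown", "--to", "asciidoc",
     "post.md", "--output", "post.adoc"],
    check=True,  # raise if the conversion fails
)
```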

A document format suited to plain-text reading and writing constrains itself to rules that keep the plain text easy to read. Markdown does this by using indentation or line prefixes, and by reflowing text. Gemtext uses line prefixes only, gets rid of internal markup (especially links) that can clutter the text, and tries to find a balance between parsers and the plain-text reading experience. Such formats are also much easier to write by hand. However, they are only suited for media that can be represented textually, and are therefore unsuitable for most visual or aural art.

A main document format that is intended to be read through a document viewer might want parsing to be easy, but it can also completely sacrifice the plain-text reading experience. HTML and XML fall into this category, as do many other formats, including PDFs and essentially every binary format. These are more likely to have dedicated word processors or editors because the format is not as easy to write by hand.

Conclusion

I've just thrown a bunch of stuff at you. There are many questions in this post that have not been answered, and that is deliberate. Presentation, text, language, media, textual art, and how they all relate to each other and to meaning is complicated, and oftentimes fluid. Many presentations can support different purposes, different structures, and different semantic meanings, or they can support the same purposes, the same structures, the same semantic meanings. Meaning can be attached to presentation or can be completely orthogonal to it. So, where does semantics end and presentation start?

Continue the Series

2024-03-28 Headers, Footers, Sidebars, and Footnotes