In the past, any text that was not poetic was prosaic. Back then, prosaic carried no negative connotations; it simply indicated that a written work was made up of prose. That sense clearly owes much to the meaning of the word's Latin source prosa, meaning "prose." Poetry, however, was viewed as the more beautiful, imaginative, and emotional type of writing, and prose was relegated to the status of mundane and plain-Jane. As a result, English speakers started using prosaic to refer to anything considered matter-of-fact or ordinary, and they gradually transformed it into a synonym for "colorless," "drab," "lifeless," and "lackluster."
prosaic
As it turned out, the essays that we solicited for this special issue all manifested a structure somewhat more complex than a simple description of the prosaic imaginings performed by their chosen texts. What they all in one way or another found was a series of variations on a pattern in which a particular generic structure is problematized by its reflexive thematization. Generic forms are represented metonymically in novelistic texts in a way that then illuminates the limits of those forms. At the same time, the category of the everyday turns out to be central to the ways in which each of these problematized generic forms is organized by its recursive metonyms.
Collectively and individually, these essays give us very different answers to our question about the relationship between the novel's prosaic subject matter and novel reading as an everyday activity. Perhaps, too, that question was loaded, and we might have explored in greater depth the sheer diversity of representational modes to which the novel lends itself. Yet the prosaic imaginary has been formative of the European literary tradition and of the world literatures to which it has, by way of Euro-American political and cultural hegemony, contributed. The essays collected here demonstrate some of the complexities of that imaginary.
1650s, "having to do with prose" (a sense now obsolete), from French prosaique (15c.) and directly from Medieval Latin prosaicus "in prose" (16c.), from Latin prosa "prose" (see prose). Meaning "having the character of prose (in contrast to the feeling of poetry)" is by 1746; the extended sense of "ordinary, commonplace in style or expression, lacking poetic imagination or beauty" is by 1813. Both sense are from French. Related: Prosaical; prosaically.
In the Davis Square Symphony, the bustle of Davis Square in Somerville, Massachusetts, has been translated into an orchestral score. Vehicles become strings, pedestrians shift into wind instruments, and bicycles emerge as snare drums. Traffic, which is generally considered irritating and unpleasant, has been transformed into a work of unique beauty. Translating these prosaic phenomena into poetry is akin to the ancient alchemical dream of transmuting lead into gold.
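The mapping described above, traffic categories onto instrument families, can be sketched in a few lines of code. The snippet below is a hypothetical illustration in the spirit of the piece, not the actual process behind the Davis Square Symphony; the event counts, the dynamic thresholds, and the `sonify` helper are all invented for the example.

```python
# Hypothetical traffic-to-orchestra mapping in the spirit of the Davis Square
# Symphony. Instrument assignments follow the description above (vehicles ->
# strings, pedestrians -> winds, bicycles -> snare drum); counts, dynamics,
# and thresholds are invented for illustration.

# One observation per minute of (imaginary) footage.
traffic_events = [
    {"type": "vehicle", "count": 12},
    {"type": "pedestrian", "count": 7},
    {"type": "bicycle", "count": 3},
]

# Map each prosaic category to an orchestral voice.
INSTRUMENT_FOR = {
    "vehicle": "strings",
    "pedestrian": "woodwinds",
    "bicycle": "snare drum",
}

def sonify(events):
    """Turn traffic counts into a toy 'score' of (instrument, dynamic) pairs."""
    score = []
    for event in events:
        instrument = INSTRUMENT_FOR[event["type"]]
        dynamic = "forte" if event["count"] >= 10 else "piano"  # busier = louder
        score.append((instrument, dynamic))
    return score

print(sonify(traffic_events))
# [('strings', 'forte'), ('woodwinds', 'piano'), ('snare drum', 'piano')]
```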
I think that this is definitely a concern for prosaic AI safety methods. In the case of something like amplification or debate, I think the bet that you're making is that language modeling alone is sufficient to get you everything you need in a competitive way. I tend to think that that claim is probably true, but it's definitely an assumption of the approach that isn't often made explicit (but probably should be).
It sounds like your notion of "prosaic" assumes something related to agency/reinforcement learning, but I believe several top AI people think what we'll need for AGI is progress in unsupervised learning -- not sure if that counts as "prosaic". (FWIW, this position seems obviously correct to me.)
Interesting, I was not aware of that, thanks! I was thinking of "prosaic" as basically all current methods, including both agency/reinforcement learning stuff and unsupervised learning stuff. It's true that the example I gave was more about agency... but couldn't the same argument be run using e.g. a language model built like GPT-2? (Isn't that a classic example of unsupervised learning?) The conjecture would be that you need e.g. the whole corpus of the internet, not just a corpus of e.g. debate texts, to get cutting-edge performance. And a system trained merely to predict the next word when reading the whole corpus of the internet... might not be safely retrained to do something else. (Or is the idea that mere unsupervised learning wouldn't result in an agent-like architecture, and therefore we don't need to worry about mesa-optimizers? That might be true, but if so it's news to me.)
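For readers unfamiliar with the objective being discussed, "trained merely to predict the next word" can be made concrete with a deliberately tiny stand-in: a bigram counter that learns which word tends to follow which in a corpus. This is a toy analogue of the pretraining objective only, not of GPT-2 itself; the corpus and the `predict_next` helper are invented for the example.

```python
# Toy stand-in for the "predict the next word" objective mentioned above.
# A system like GPT-2 trains a large neural network on an internet-scale
# corpus; this sketch merely counts bigrams in a made-up sentence to show
# the shape of the task: given a context, guess the most likely next token.
from collections import Counter, defaultdict

corpus = "the judge heard the debate and the judge ruled on the matter".split()

# "Training": count how often each word follows each other word.
successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent continuation observed in training, if any."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))    # 'judge' (seen twice, vs. once each for the others)
print(predict_next("ruled"))  # 'on'
```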
I think there is a bit of a motte and bailey structure to our conversation. In your post above, you wrote: "to be competitive prosaic AI safety schemes must deliberately create misaligned mesa-optimizers" (emphasis mine). And now in bullet point 2, we have (paraphrase) "maybe if you had a really weird/broken training scheme where it's possible to sabotage rival subnetworks, agenty things get selected for somehow [probably in a way that makes the system as a whole less competitive]". I realize this is a bit of a caricature, and I don't mean to call you out or anything, but this is a pattern I've seen in AI safety discussions and it seemed worth flagging.
Comment on your planned opinion: I mostly agree; I think what this means is that prosaic AI safety depends somewhat on an empirical premise: that joint training doesn't bring a major competitiveness penalty. I guess I only disagree insofar as I'm a bit more skeptical of that premise. What does the current evidence on joint training say on the matter? I have no idea, but I am under the impression that you can't just take an existing training process--such as the one that made AlphaStar--and mix in some training tasks from a completely different domain and expect it to work. This seems like evidence against the premise to me. As someone (Paul?) pointed out in the comments when I said this, this point applies to fine-tuning as well. But if so that just means that the second and third ways of the dilemma are both uncompetitive, which means prosaic AI safety is uncompetitive in general.
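The empirical premise at stake here, whether joint training carries a major competitiveness penalty relative to fine-tuning, is at bottom a question about how training batches are scheduled. The sketch below only makes that scheduling difference concrete; the `train_step` stub, the batch lists, and the alternating scheme are invented for illustration and say nothing about which regime actually performs better.

```python
# Schematic contrast between the two regimes discussed above. `train_step`
# stands in for one gradient update; here it only records which domain each
# batch came from, so printing the schedule shows the difference.

def train_step(schedule, domain):
    """Placeholder for a single update; records the domain of the batch."""
    schedule.append(domain)

game_batches = ["game"] * 6      # e.g. the original AlphaStar-style task
safety_batches = ["safety"] * 6  # tasks mixed in from another domain

# Joint training: interleave batches from both domains throughout training.
joint_schedule = []
for game_batch, safety_batch in zip(game_batches, safety_batches):
    train_step(joint_schedule, game_batch)
    train_step(joint_schedule, safety_batch)

# Fine-tuning: train fully on the original domain, then adapt afterwards.
finetune_schedule = []
for batch in game_batches + safety_batches:
    train_step(finetune_schedule, batch)

print(joint_schedule)     # ['game', 'safety', 'game', 'safety', ...]
print(finetune_schedule)  # ['game', 'game', ..., 'safety', 'safety']
```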