Some verbs are trivially nounable, even the dictionaries have them nouned. Ex: I had a good run yesterday. I suspect it is more limited by one’s imagination and creativity.
Unrelated: real shame NewsBlur still doesn’t support reply trees. :(
Last week, at the completion of their high school studies, 750,000 students in France took the baccalauréat exam in philosophy, or “bac philo,” as it is called. Meanwhile, worries about reforms to the baccalauréat system have some teachers threatening to strike.
In an ordeal that has changed little since Napoleon Bonaparte introduced the baccalaureate in 1808, the students wrestled with choosing a single question from a limited selection. These included “Is desire the mark of our imperfection?” and “Can experience be misleading?” They had the alternative of critiquing texts from Schopenhauer, Mill or Montesquieu.
It is a four-hour exam in which, according to philosophy teacher Marie Perret, quoted at France 24, “students aren’t just asked to display their knowledge but to think about a problem themselves by using the notions they studied during the year.”
The set of exams comprising the baccalauréat is due for an overhaul in 2021, with a reduction in exam subjects. The Times reports that philosophy will still be required; however, the amount of time dedicated to philosophy lessons in high school will be cut in half, from eight hours per week to four.
Arguing that “four hours a week is far too little to learn the skills demanded of la philosophie,” more than 120 teachers signed a protest letter to Jean-Michel Blanquer, the education minister, last month, and hardline defenders of philosophy are talking of strike action.
“We are not stuff that abides, but patterns that perpetuate themselves. A pattern is a message.”
“Information will never replace illumination,” Susan Sontag asserted in considering the conscience of words. “Words are events, they do things, change things,” Ursula K. Le Guin wrote in the same era in her exquisite meditation on the magic of real human communication. “They transform both speaker and hearer; they feed energy back and forth and amplify it. They feed understanding or emotion back and forth and amplify it.” But what happens when words are stripped of their humanity, fed into unfeeling machines, and used as currencies of information that no longer illuminates?
Half a century before the golden age of algorithms and two decades before the birth of the Internet, the mathematician and philosopher Norbert Wiener (November 26, 1894–March 18, 1964) tried to protect us from that then-hypothetical scenario in his immensely insightful and prescient 1950 book The Human Use of Human Beings: Cybernetics and Society (public library) — a book Wiener described as concerned with “the limits of communication within and among individuals,” which went on to influence generations of thinkers, creators, and entrepreneurs as wide-ranging as beloved author Kurt Vonnegut, anthropologist Mary Catherine Bateson, and virtual reality pioneer Jaron Lanier.
Wiener had coined the word cybernetics two years earlier, drawing on the Greek word for “steersman” — kubernētēs, from which the word “governor” is also derived — to describe “the scientific study of control and communication in the animal and the machine,” pioneering a new way of thinking about causal chains and how the feedback loop taking place within a system changes the system itself. (Today’s social media ecosystem is a superficial but highly illustrative example of this.)
Information is a name for the content of what is exchanged with the outer world as we adjust to it, and make our adjustment felt upon it. The process of receiving and of using information is the process of our adjusting to the contingencies of the outer environment, and of our living effectively within that environment. The needs and the complexity of modern life make greater demands on this process of information than ever before, and our press, our museums, our scientific laboratories, our universities, our libraries and textbooks, are obliged to meet the needs of this process or fail in their purpose. To live effectively is to live with adequate information. Thus, communication and control belong to the essence of man’s inner life, even as they belong to his life in society.
A pillar of Wiener’s insight is the second law of thermodynamics and its central premise that entropy — the growing tendency toward disorder, chaos, and unpredictability — increases over time in any closed system. But even if we were to consider the universe itself a closed system — an assumption neglecting the possibility that our universe may be one of many universes — neither individual human beings nor the societies they form can be thought of as closed systems. Rather, they are pockets of attempted order and decreasing entropy amid the vast expanse of cosmic chaos — attempts encoded in our systems of organizing and communicating information. Wiener examines the parallel between organisms and machines in this regard — a radical notion in his day and plainly obvious, if still poorly understood, in ours:
If we wish to use the word “life” to cover all phenomena which locally swim upstream against the current of increasing entropy, we are at liberty to do so. However, we shall then include many astronomical phenomena which have only the shadiest resemblance to life as we ordinarily know it. It is in my opinion, therefore, best to avoid all question-begging epithets such as “life,” “soul,” “vitalism,” and the like, and say merely in connection with machines that there is no reason why they may not resemble human beings in representing pockets of decreasing entropy in a framework in which the large entropy tends to increase.
When I compare the living organism with such a machine, I do not for a moment mean that the specific physical, chemical, and spiritual processes of life as we ordinarily know it are the same as those of life-imitating machines. I mean simply that they both can exemplify locally anti-entropic processes, which perhaps may also be exemplified in many other ways which we should naturally term neither biological nor mechanical.
In a sentiment of astounding foresight, Wiener adds:
Society can only be understood through a study of the messages and the communication facilities which belong to it; and that in the future development of these messages and communication facilities, messages between man and machines, between machines and man, and between machine and machine, are destined to play an ever-increasing part.
In control and communication we are always fighting nature’s tendency to degrade the organized and to destroy the meaningful; the tendency… for entropy to increase.
Organism is opposed to chaos, to disintegration, to death, as message is to noise. To describe an organism, we do not try to specify each molecule in it, and catalogue it bit by bit, but rather to answer certain questions about it which reveal its pattern: a pattern which is more significant and less probable as the organism becomes, so to speak, more fully an organism.
We are not stuff that abides, but patterns that perpetuate themselves. A pattern is a message.
Messages are themselves a form of pattern and organization. Indeed, it is possible to treat sets of messages as having an entropy like sets of states of the external world. Just as entropy is a measure of disorganization, the information carried by a set of messages is a measure of organization. In fact, it is possible to interpret the information carried by a message as essentially the negative of its entropy, and the negative logarithm of its probability. That is, the more probable the message, the less information it gives.
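Wiener’s identification of a message’s information with the negative logarithm of its probability is what later became formalized as Shannon’s self-information. As an illustrative sketch only (the function name and probabilities here are our own, not Wiener’s), the relation can be computed directly:

```python
import math

def self_information(p):
    """Self-information of a message with probability p, in bits:
    the more probable the message, the less information it carries."""
    return -math.log2(p)

# A near-certain message (a cliche) carries almost no information...
print(self_information(0.99))   # ~0.0145 bits
# ...while a highly improbable one carries far more.
print(self_information(0.01))   # ~6.64 bits
```

A message expected with certainty (p = 1) yields exactly zero bits — it tells us nothing we did not already know.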
Wiener illustrates this idea with an example that would have pleased Emily Dickinson:
Just as entropy tends to increase spontaneously in a closed system, so information tends to decrease; just as entropy is a measure of disorder, so information is a measure of order. Information and entropy are not conserved, and are equally unsuited to being commodities. Clichés, for example, are less illuminating than great poems.
The prevalence of clichés is no accident, but inherent in the nature of information. Property rights in information suffer from the necessary disadvantage that a piece of information, in order to contribute to the general information of the community, must say something substantially different from the community’s previous common stock of information. Even in the great classics of literature and art, much of the obvious informative value has gone out of them, merely by the fact that the public has become acquainted with their contents. Schoolboys do not like Shakespeare, because he seems to them nothing but a mass of familiar quotations. It is only when the study of such an author has penetrated to a layer deeper than that which has been absorbed into the superficial clichés of the time, that we can re-establish with him an informative rapport, and give him a new and fresh literary value.
From this follows a corollary made all the clearer by the technologies and media landscapes which Wiener never lived to see and with which we must and do live:
The idea that information can be stored in a changing world without an overwhelming depreciation in its value is false.
Information is more a matter of process than of storage… Information is important as a stage in the continuous process by which we observe the outer world, and act effectively upon it… To be alive is to participate in a continuous stream of influences from the outer world and acts on the outer world, in which we are merely the transitional stage. In the figurative sense, to be alive to what is happening in the world, means to participate in a continual development of knowledge and its unhampered exchange.
In a passage that calls to mind Zadie Smith’s lucid antidote to the illusion of universal progress and offers a sobering counterpoint to today’s strain of social scientists purveying feel-good versions of “progress” via the tranquilizing half-truths of highly selective statistics willfully ignorant of the for whom question, Wiener writes:
We are immersed in a life in which the world as a whole obeys the second law of thermodynamics: confusion increases and order decreases. Yet, as we have seen, the second law of thermodynamics, while it may be a valid statement about the whole of a closed system, is definitely not valid concerning a non-isolated part of it. There are local and temporary islands of decreasing entropy in a world in which the entropy as a whole tends to increase, and the existence of these islands enables some of us to assert the existence of progress.
Thus the question of whether to interpret the second law of thermodynamics pessimistically or not depends on the importance we give to the universe at large, on the one hand, and to the islands of locally decreasing entropy which we find in it, on the other. Remember that we ourselves constitute such an island of decreasing entropy, and that we live among other such islands. The result is that the normal prospective difference between the near and the remote leads us to give far greater importance to the regions of decreasing entropy and increasing order than to the universe at large.
Wiener considers the central flaw of the claim that the arrow of historical time is aligned with the arrow of “progress” in a universal sense:
Our worship of progress may be discussed from two points of view: a factual one and an ethical one — that is, one which furnishes standards for approval and disapproval. Factually, it asserts that the earlier advance of geographical discovery, whose inception corresponds to the beginning of modern times, is to be continued into an indefinite period of invention, of the discovery of new techniques for controlling the human environment. This, the believers in progress say, will go on and on without any visible termination in a future not too remote for human contemplation. Those who uphold the idea of progress as an ethical principle regard this unlimited and quasi-spontaneous process of change as a Good Thing, and as the basis on which they guarantee to future generations a Heaven on Earth. It is possible to believe in progress as a fact without believing in progress as an ethical principle; but in the catechism of many Americans, the one goes with the other.
What many of us fail to realize is that the last four hundred years are a highly special period in the history of the world. The pace at which changes during these years have taken place is unexampled in earlier history, as is the very nature of these changes. This is partly the result of increased communication, but also of an increased mastery over nature which, on a limited planet like the earth, may prove in the long run to be an increased slavery to nature… We have modified our environment so radically that we must now modify ourselves in order to exist in this new environment. We can no longer live in the old one. Progress imposes not only new possibilities for the future but new restrictions… May we have the courage to face the eventual doom of our civilization as we have the courage to face the certainty of our personal doom. The simple faith in progress is not a conviction belonging to strength, but one belonging to acquiescence and hence to weakness.
The new industrial revolution is a two-edged sword… It may be used for the benefit of humanity, but only if humanity survives long enough to enter a period in which such a benefit is possible. It may also be used to destroy humanity, and if it is not used intelligently it can go very far in that direction.
Three decades later, the great physician, etymologist, poet, and essayist Lewis Thomas would articulate the flip side of the same sentiment in his beautiful meditation on the peril and possibility of progress: “We are in for one surprise after another if we keep at it and keep alive. We can build structures for human society never seen before, thoughts never thought before, music never heard before… Provided we do not kill ourselves off, and provided we can connect ourselves by the affection and respect for which I believe our genes are also coded, there is no end to what we might do on or off this planet.” Wiener’s most visionary point is that if we are to not only survive but thrive as a civilization and a species, we must encode these same values of affection and respect into our machines, our information systems, and our technologies of communication, so that “the new modalities are used for the benefit of man, for increasing his leisure and enriching his spiritual life, rather than merely for profits and the worship of the machine as a new brazen calf.”
More than a century after Mary Shelley raised these enduring questions of innovation and responsibility in Frankenstein, Wiener offers a sentiment of astonishing prescience and relevance to the artificial intelligence precipice on which we now stand, in an era when algorithms are deciding for us what we read, where we go, and how much of reality we see:
The machine’s danger to society is not from the machine itself but from what man makes of it.
The modern man, and especially the modern American, however much “know-how” he may have, has very little “know-what.” He will accept the superior dexterity of the machine-made decisions without too much inquiry as to the motives and principles behind these… Any machine constructed for the purpose of making decisions, if it does not possess the power of learning, will be completely literal-minded. Woe to us if we let it decide our conduct, unless we have previously examined the laws of its action, and know fully that its conduct will be carried out on principles acceptable to us! On the other hand, the machine [that] can learn and can make decisions on the basis of its learning, will in no way be obliged to make such decisions as we should have made, or will be acceptable to us. For the man who is not aware of this, to throw the problem of his responsibility on the machine, whether it can learn or not, is to cast his responsibility to the winds, and to find it coming back seated on the whirlwind.
When human atoms are knit into an organization in which they are used, not in their full right as responsible human beings, but as cogs and levers and rods, it matters little that their raw material is flesh and blood. What is used as an element in a machine, is in fact an element in the machine. Whether we entrust our decisions to machines of metal, or to those machines of flesh and blood which are bureaus and vast laboratories and armies and corporations, we shall never receive the right answers to our questions unless we ask the right questions.
Precisely because our existence is so improbable against the backdrop of a universe governed by entropy, it is imbued with a singular responsibility — a responsibility that is the source and succor of meaning in human life. In a sentiment which the Nobel-winning Polish poet Wisława Szymborska would later echo, Wiener writes:
It is quite conceivable that life belongs to a limited stretch of time; that before the earliest geological ages it did not exist, and that the time may well come when the earth is again a lifeless, burnt-out, or frozen planet. To those of us who are aware of the extremely limited range of physical conditions under which the chemical reactions necessary to life as we know it can take place, it is a foregone conclusion that the lucky accident which permits the continuation of life in any form on this earth, even without restricting life to something like human life, is bound to come to a complete and disastrous end. Yet we may succeed in framing our values so that this temporary accident of living existence, and this much more temporary accident of human existence, may be taken as all-important positive values, notwithstanding their fugitive character.
In a very real sense we are shipwrecked passengers on a doomed planet. Yet even in a shipwreck, human decencies and human values do not necessarily vanish, and we must make the most of them. We shall go down, but let it be in a manner to which we may look forward as worthy of our dignity.
Long considered aberrations in his artistic career, René Magritte’s sunlit surrealist and vache pictures have recently been reassessed by art historians and critics not only on their own terms but also in relation to the notion of “bad painting.” The two bodies of work have often been discussed separately, since they are stylistically dissimilar and the latter was produced specifically for Magritte’s first solo exhibition in Paris, in 1948. Nevertheless, there is good reason to think of them as related. Both series are almost unrecognizable as “Magrittes,” and one followed directly after the other, together spanning World War II and the immediate postwar period. Far more than a neutral background, historical events may have helped shape, if not determine, the nature and terms of these works more than has until now been presumed.
These paintings are deeply, thoroughly weird, not only in their iconography but also in their departure from Magritte’s long-established style, palette, and facture. Whereas previously Magritte acknowledged only the artistic influence of the Italian Surrealist Giorgio de Chirico, the sunlit surrealist works refer—sometimes quite directly—to late paintings by Pierre-Auguste Renoir. But examples such as La moisson, with its harlequinesque, multicolored limbs, torso, and head are far closer to parody than pastiche. Certain vache works evoke other artists: the sinuous contours of several female nudes recall those of Henri Matisse, and the intense hues and crude brushwork in other pictures have invited comparison to German Expressionism. With their penis-nosed grotesques, lurid colors, and bodily eruptions, the vache paintings have been described as “look[ing] like nothing so much as the missing link between James Ensor and Zap Comix.”
Le lyrisme (Lyricism), 1947.
The subject matter of both series is as striking as the radically different styles in which they are rendered. A prime example of sunlit surrealism is La bonne fortune. Rippled brushstrokes describe sky and ground, while an obelisk headstone dominates the background. The cemetery setting, the (arguably) phallic monument, and the floral wreath of the kind placed on soldiers’ graves are perhaps not incidental to the postwar context of the work’s creation. The principal subject, however, is a pig, its head and one tiny, beady eye turned toward the viewer. Unlike Magritte’s more familiar depictions of figures or objects, this canvas makes no attempt to evoke the actual animal: it stands upright, it wears a dark and thickly painted jacket, and the shape of its head is more human than piglike. Despite the Impressionistic brushstrokes and colors—dominated by sugary pinks, corals, and oranges—whatever this is, like La moisson, it certainly is not Impressionism, a style grounded in ideas of visual and perceptual truth that Magritte’s work systematically repudiated. Another strange element in several sunlit surrealist scenes is the substitution of objects for suns, radiating cursory beams. Perhaps the most delirious of these proxies appears in Le lyrisme, where the sun has become the pear head invented by the satirists Honoré Daumier, Charles Philipon, and J. J. Grandville as a stand-in for Louis Philippe I, the constitutional monarch installed after the French Revolution of 1830. But why would Magritte have reprised these caricatures? Was this choice determined by Louis Philippe’s identification with and enrichment of the powerful bourgeoisie—nemesis of all Surrealists—and the belief that such images fostered the 1848 uprising that led to his overthrow?
La famine (Famine), 1949.
Among the vache works, consider, for example, La famine, which shows a loosely painted chain of heads apparently eating one another. Their features are cursorily indicated, obscuring distinctions between mouths, tongues, and phallic noses or beaks. Is a literal interpretation of this picture—an acknowledgment that in famine, people may be reduced to cannibalism—possible or even appropriate? The cartoonish style neutralizes any impulse to link the image to its title—as is the case with Le stropiat. Here, an equally crudely rendered man is frontally positioned against an undefined background. Sporting a Phrygian cap and spectacles, with eight pipes sprouting from his beard, mouth, eye, and forehead, is this figure to be understood as a self-reflexive reference to Magritte’s famous pipe motif? Should Le stropiat or La famine be seen as comical or vicious? Is the language of art criticism or art history suitable or useful to make sense of such works, or should they be disregarded as aberrations or jokes?
Le stropiat (detail), 1948.
That the vache paintings and gouaches, rapidly made over five or six weeks in 1948, were intended as provocations to the French art world is a necessary but insufficient explanation for their perversity and insistent “badness.” Certainly there was more than a whiff of conspiratorial glee surrounding their production and exhibition. Unsurprisingly, not a single picture in the show sold; Magritte’s Brussels and London dealers, P.G. Van Hecke and E. L. T. Mesens, were appalled, and for the most part, the French press found the compositions unfathomable and easily dismissed. That is to say, bad. Still, there are reasons to reckon with them seriously—the form, facture, and subject matter are so defiantly non-Magrittian as to suggest calculation and raise questions of interpretation that exceed their deliberate provocation. The same can be said of the sunlit surrealist series: if any artist was temperamentally (and aesthetically) averse to apparent spontaneity or expressionist indulgence, it was Magritte. If these works look spontaneous, that was a no less strategic decision.
One way to consider both series might be in relation to madness, not in the sense of mental disorder but in terms of the range of emotions from rage and anger to irritation and frustration. This anger may have had wellsprings other than the Surrealists’ principled loathing of capitalism, militarism, religion, and the bourgeoisie. One should at least entertain the possibility that the war and occupation contributed, even subliminally, to the outrageousness of both bodies of work.
Germany’s invasion and occupation of Belgium began in May 1940. Initially Magritte and several Surrealist friends, including Louis Scutenaire, Irène Hamoir, and Raoul Ubac, fled to France, where Magritte stayed until August. Once back in Brussels, judging from his correspondence, it is as though normal life was restored. Yet by December, all Jews holding official positions were fired; in 1941, the word Jew was added to Belgian identifying documents, and Jewish children were expelled from the schools. By 1942, the occupation had become yet more oppressive and food shortages were endemic—especially in cities, where all food was rationed. Even bread was sometimes unavailable. A thriving black market emerged, some of it run by Germans, some by enterprising Belgians. Citizens were taxed to pay for their own occupation and military operations elsewhere. Censorship was imposed on all news media. Almost two hundred thousand Belgians were conscripted and sent to Germany as forced labor. By the war’s end, more than forty thousand Belgians, over half of them Jews, had been killed. Allied bombings were themselves responsible for many deaths.
But throughout the occupation, the art market continued to function. Paintings were bought and sold, galleries opened and closed, and exhibitions took place, if sometimes semi-clandestinely. Most of the Belgian Surrealists retained their day jobs; continued to publish tracts, journals, pamphlets, and manifestos; and exchanged witty and apparently carefree letters. Magritte and his friends met almost weekly and went to the coast on their holidays. Besides Paul Nougé, who was conscripted into the French army and demobilized after two months, only Marcel Mariën, the youngest of the Belgian group, was called up, subsequently spending three months in German camps. Only the Jewish Fernand Demoustier—better known by his pseudonym, Fernand Dumont—lost his life; he died either en route to or in the Bergen-Belsen concentration camp.
Mal de mer (Seasickness), 1948.
The photographs and short films Magritte produced in these years are antic and playful, and he rarely linked his art to external circumstances. In a 1944 letter to Mariën he wrote: “So I’m taking refuge in the ideal world of art. An idealist position, you’ll tell me. Well—all right. But it’s only a way of amusing myself, after all, and that’s the main thing. And the noisier reality becomes, the less reluctant I am to escape from it as much as possible.” Another letter notes, “The German occupation marked the turning point in my art. Before the war, my paintings expressed anxiety, but the experiences of war have taught me that what matters in art is to express charm. I live in a very disagreeable world, and my work is meant as a counter-offensive.” These are uncharacteristically tepid remarks given the horrors of the period, but they have generally been taken at face value, possibly because the war barely features at all in Magritte’s correspondence and other writings. Nevertheless, his statements need not mean that the war and occupation did not produce cultural or psychic symptoms in his oeuvre. His assertion that “what matters in art is to express charm” is, as we have seen, belied by the manifest charmlessness of the works themselves.
At least two vache canvases feature hams, perhaps alluding to wartime food shortages, but it is not subject matter as such in Magritte’s paintings of this period that links them to their time and place. Far more suggestive is their blatant cynicism: the parodic qualities of the sunlit surrealist compositions and the violence and disgust of the vache pictures, which are dense with fetishistic references and a manic deployment of fecal, phallic, and castration imagery. To be sure, these motifs can be identified throughout Magritte’s career; despite his frequent denunciation of psychoanalytic readings of his work, no one was more adept at translating Freudian concepts into iconography. But a distinction can be drawn between his canny exploitation of psychologically charged themes—think here of the still-shocking Le viol, reprised in a sunlit surrealist version—and what might be identified as a form of unarticulated or repressed rage at being subject to fascism, one of the threats that the Surrealists had sought to resist throughout the 1930s.
Magritte joined the Belgian Communist Party in 1945 and had contributed graphics and posters to it before the war. But during the occupation, though he fully identified with the revolutionary aspirations of Surrealism, Magritte, like most of his cohort, had no known contact with the Belgian resistance. The weirdness of his sunlit surrealist style and the brutality of his vache works may be symptomatic of the contradictions of his position as a putatively subversive artist and the acquiescent circumstances of his daily existence. At a time when the guise of the respectable bourgeois incarnated by Magritte’s man, often construed as a kind of alter ego, could not fully contain these inconsistencies, the series can perhaps be seen as means of rebellion, conscious or not: forms of protest against the status quo—be it the German occupation or merely business as usual. The perversity of these bodies of work suggests that Magritte’s statements about his intentions were either a kind of camouflage or part of a more general repression.
After the 1948 Paris exhibition closed, Magritte acknowledged that the vache paintings were a form of “slow suicide,” referring specifically to their lack of buyers. By 1949, he had returned to his signature style; his economic situation progressively improved, and the sunlit surrealist and vache works were effectively marginalized as short-term aberrations in an otherwise coherent oeuvre. They did not figure in his most important international exhibitions of the 1950s. Suggestively, his next extended series featured a wooden coffin as its central motif—a fitting coda for the war and occupation whereby its horrors could be more safely and profitably sublimated into the language of art.
La bonne fortune (detail), 1945.
Abigail Solomon-Godeau is a professor emerita of art history at the University of California, Santa Barbara. She is the author, most recently, of Photography after Photography: Gender, Genre, History (2017). Her essays on photography, art, and feminism have been widely anthologized and translated. She lives and works in Paris.
“Mad or Bad? Magritte’s Artistic Rebellion,” by Abigail Solomon-Godeau, was published in René Magritte: The Fifth Season, in 2018 by the San Francisco Museum of Modern Art in association with Distributed Art Publishers, Inc., New York, in conjunction with the exhibition of the same title at SFMOMA, on view May 19 through October 28, 2018.
In 1920, the night before Easter Sunday, Otto Loewi woke up, seemingly possessed of an important idea. He wrote it down on a piece of paper and promptly returned to sleep. When he reawakened, he found that his scribbles were illegible. But fortunately, the next night, the idea returned. It was the design of a simple experiment that eventually proved something Loewi had long hypothesized: Nerve cells communicate by exchanging chemicals, or neurotransmitters. The confirmation of that idea earned him a Nobel Prize in medicine in 1936.
Almost a century after Loewi’s fateful snoozes, many experiments have shown that sleep promotes creative problem-solving. Now, Penny Lewis from Cardiff University and two of her colleagues have collated and combined those discoveries into a new theory that explains why sleep and creativity are linked. Specifically, their idea explains how the two main phases of sleep—REM and non-REM—work together to help us find unrecognized links between what we already know, and discover out-of-the-box solutions to vexing problems.
As you start to fall asleep, you enter non-REM sleep. That includes a light phase that takes up most of the night, and a period of much heavier slumber called slow-wave sleep, or SWS, when millions of neurons fire simultaneously and strongly, like a cellular Greek chorus. “It’s something you don’t see in a wakeful state at all,” says Lewis. “You’re in a deep physiological state and you’d be unhappy if you were woken up.”
During that state, the brain replays memories. For example, the same neurons that fired when a rat ran through a maze during the day will spontaneously fire while it sleeps at night, in roughly the same order. These reruns help to consolidate and strengthen newly formed memories, integrating them into existing knowledge. But Lewis explains that they also help the brain extract generalities from specifics—an idea that others have also proposed.
“Let’s say you replay memories of birthday parties,” she says. “They all involve presents, cake, and maybe balloons. The areas of the brain that represent those things will be more strongly activated than areas that represent who was at each party, or other idiosyncrasies.” Over time, the details may fade from memory, while the gist remains. “That’s how you might form your representation of what a birthday party is.” (Some scientists have argued that dreaming is the conscious manifestation of this process; it’s effectively your brain watching itself replaying and transforming its own memories.)
This process happens all the time, but Lewis argues that it’s especially strong during SWS because of a tight connection between two parts of the brain. The first—the hippocampus—is a seahorse-shaped region in the middle of the brain that captures memories of events and places. The second—the neocortex—is the outer layer of the brain and, among other things, it’s where memories of facts, ideas, and concepts are stored. Lewis’s idea is that the hippocampus nudges the neocortex into replaying memories that are thematically related—that occur in the same place, or share some other detail. That makes it much easier for the neocortex to pull out common themes.
The other phase of sleep—REM, which stands for rapid eye movement—is very different. That Greek chorus of neurons that sang so synchronously during non-REM sleep descends into a cacophonous din, as various parts of the neocortex become activated, seemingly at random. Meanwhile, a chemical called acetylcholine—the same one that Loewi identified in his sleep-inspired work—floods the brain, disrupting the connection between the hippocampus and the neocortex, and placing both in an especially flexible state, where connections between neurons can be more easily formed, strengthened, or weakened.
These traits, Lewis suggests, allow the neocortex to unconsciously search for similarities between seemingly unrelated concepts like, say, the way the planets revolve around the sun and the way electrons orbit the nucleus of an atom. “Suppose you’re working on a problem and you’re stuck,” she says. In REM sleep, “the neocortex will replay abstracted, simplified elements [of that problem], but also other things that are randomly activated. It’ll then strengthen the commonalities between those things. When you wake up the next day, that slight strengthening might allow you to see what you were working on in a slightly different way. That might just allow you to crack the problem.”
“Many of these ideas have been out there,” says Lewis. “Some people argued that slow wave sleep is important for creativity and others argued that it’s REM. We’re saying it’s both.” Essentially, non-REM sleep extracts concepts, and REM sleep connects them.
Crucially, they build on one another. The sleeping brain goes through one cycle of non-REM and REM sleep every 90 minutes or so. Over the course of a night—or several nights—the hippocampus and neocortex repeatedly sync up and decouple, and the sequence of abstraction and connection repeats itself. “An analogy would be two researchers who initially work on the same problem together, then go away and each think about it separately, then come back together to work on it further,” Lewis writes.
“The obvious implication is that if you’re working on a difficult problem, allow yourself enough nights of sleep,” she adds. “Particularly if you’re trying to work on something that requires thinking outside the box, maybe don’t do it in too much of a rush.”
Parts of this framework are based on strong data, but others are still conjectures that need to be tested. For example, there isn’t much evidence to support Lewis’s hunch that the hippocampus prods the neocortex into replaying related memories during non-REM sleep. “I realize it’s a little bit of a stretch,” she admits, but she notes that in several studies, slow-wave sleep improves the ability to identify common concepts. In one widely used task, people have to learn a word list—night, dark, coal—that revolves around an unseen theme. If they sleep afterwards, they’re more likely to (falsely) remember that they also learned the theme word—in this case, “black.” However, Jessica Payne from the University of Notre Dame notes that in one of her experiments, SWS had the opposite effect.
Still, that “small disagreement” aside, Payne feels that Lewis is mostly on the right track, especially when it comes to the role of REM sleep in combining conceptual knowledge “in ways that can be preposterous and creative,” she says. “I think the general idea is going to be right.”
There’s another weakness to Lewis’s framework that she finds more troubling: People can be totally deprived of REM sleep without suffering from any obvious mental problems. One Israeli man, for example, lost REM sleep after a brain injury; “he’s a high-functioning lawyer and he writes puzzles for his local newspaper,” Lewis says. “That is definitely a problem for us.”
“I’m sure [the theory] isn’t 100 percent right,” she adds, laughing, “but we just got back a set of results that really strongly support it.” Her team tried to get sleeping volunteers to replay memories during slow-wave sleep and REM sleep, and found different effects in each. Those results should be published in the near future. In the meantime, the team is also developing ways of boosting or suppressing the two sleep stages to see how that affects people’s problem-solving skills. This is all part of a five-year project, and they’re just in their first year.
Lewis is also working with Mark van Rossum from the University of Nottingham to create an artificial intelligence that learns in the way she thinks the sleeping brain does, with “a stage for abstraction and a stage for linking things together,” she says.
“So you’re building an AI that sleeps?” I ask her.