Hi and welcome back! Someone’s been developing a theory lately that might explain why humans’ dreams are often so illogical and weird. Since this is me 100%, the story caught my eye. What makes it even better is that it relates to the world of artificial intelligence (AI). Today, Lord Snow Presides over new discoveries about dreams: their function, their necessity, and the distinctive AI weakness that got us thinking about it all.
(This week’s 1st-Century Friday topic can be found here.)
Everyone dreams. We must. Some of us can actually remember our dreams, though most can’t. But we all do it, even if we can’t remember a lick of ’em afterward. If we’re prevented from reaching the state of sleep that forms the wellspring of dreaming, then we are mightily messed up the next day. (We also get mightily messed up if prevented from sleeping at all. More than a few days of that, and people deteriorate very quickly.)
A doctor of psychiatry and sleep medicine, Dr. Alex Dimitriu, tells us:
“Whether they remember or not, all people do dream in their sleep. It is an essential function for the human brain, and also present in most species.”
So we know that humans must dream while we sleep. We even know about some of the stuff that can inspire strange and vivid dreams:
- Some medications
- Stress and anxiety
- Sleep disorders
- Substance abuse
- The COVID-19 pandemic, just generally
We also know that sometimes, our dreams can cause us a lot of anxiety during our waking hours — especially if they’re unpleasant.
What we don’t really know, though, is why humans must dream — or why we must sleep at all, for that matter. But we’ve got some ideas along those lines.
Dreams as Filing Systems.
For the past few years, I’ve been watching as researchers have developed some intriguing new ideas about why people must dream. This story from NBC summarizes some of those ideas:
Our brains need offline time for processing and learning new things — and they do this during sleep. (And there’s a whole lot of evidence to support the idea that sleep makes learning and memory storing possible.)
And it might be that dreaming plays a role in that process, [Robert] Stickgold says — “where the brain is trying to solve problems and complete processes that were going on during waking that it — in its waking hours — didn’t complete.”
That makes sense. When I was in college in the 90s, I read somewhere that staying up to study all night long was actually not as beneficial as spending a reasonable amount of time studying, then getting a good night’s sleep. At the time, I half-suspected it was a grand conspiracy on the part of parents to deprive college students of fun.
I learned better, though, when I very unwisely stayed up for several days straight to cram for finals one summer semester. I was so out of it by the last test that I caught myself filling in “D” on Scantron answer sheets for True/False questions. (See endnote for info about Scantron tests back in the age of the dinosaurs.) I never did that again.
But there might be something else that dreams do — something very important indeed.
AI Dreams and Stranger Things.
If you’ve ever monkeyed around with something like Deep Dream Generator, you know that the images it returns can get really wild really quickly. Today, I started with this picture of a pretty wine-red Miata:
A short while later, Deep Dream served this up to me:
When I clicked “Go Deeper” a couple more times until I was weirded out, this was what that idyllic little country scene had turned into:
Do you notice any themes in the images I got?
The “dreamed” images kinda look like the AI had something on its mind, don’t they? Like someone had recently fed it tons of images of graphic badgers and snakes, and it found things in the Miata picture that sorta reminded it of what it’d seen while “awake.”
So eventually, it produced a nightmare Lisa Frank picture.
Overfitting in the Absence of AI Dreams.
You want an AI to learn to detect patterns. It does that with what’s called a “training set” of data points.
As this Big Think article puts it, the AI needs to learn what the data means, not just what the data points are. If it doesn’t ever learn the pattern behind the data, then it’ll just mash everything users feed it into those same data points. It won’t be able to generalize enough to provide useful output with a variety of data points.
That means that if you want an AI to recognize human faces, it’s got to see millions of ’em — different ones, too, in all kinds of poses and expressions. But then if you ask it to interpret a picture of a dog, it’ll hunt through that image to see if it can find any elements that look like what it actually knows, which in this case is the human face.
And if it finds any such elements anywhere, like say the pupper’s nose looks a bit like a human ear, it might just draw you a picture of a dog with a human ear where its nose should be. And a tail that looks like a rainbow of human arms waving in the air. The AI just jams whatever it can into the pattern it recognizes.
That’s called overfitting, and AI programmers try really hard to avoid it. It severely weakens and stunts the AI by limiting its ability to interpret any new data it’s given.
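None of these articles include code, of course, but here’s a toy sketch of what overfitting looks like. It’s my own illustration in Python with NumPy, not anything from the sources: a very flexible model (a degree-9 polynomial) threads through ten noisy training points exactly, “memorizing” them, while a simpler model is forced to learn the underlying pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy training points sampled from a simple underlying pattern (a sine wave).
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 10)

# A degree-9 polynomial has enough wiggle room to pass through all
# ten points exactly -- it memorizes the data points themselves.
overfit = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)

# A degree-3 polynomial can't do that, so it has to generalize.
general = np.polynomial.Polynomial.fit(x_train, y_train, deg=3)

# Fresh points from the same underlying pattern, never seen in training:
x_test = np.linspace(0.05, 0.95, 50)
y_test = np.sin(2 * np.pi * x_test)

def mse(model, x, y):
    """Mean squared error of a fitted polynomial on points (x, y)."""
    return float(np.mean((model(x) - y) ** 2))

print("overfit model, training error:", mse(overfit, x_train, y_train))
print("overfit model, test error:    ", mse(overfit, x_test, y_test))
print("simpler model, test error:    ", mse(general, x_test, y_test))
```

The overfit model scores nearly perfectly on its own training points but does far worse on new points from the same pattern, which is the dog-with-a-human-ear problem in miniature.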
The Dreams That May Come.
To prevent overfitting, those programmers have a few tricks up their sleeves. Mostly, they introduce chaotic elements to the data set so the AI doesn’t just memorize that one zig-zag path through the exact data points. Once some elements no longer fit perfectly, the AI has to seek out the through-way, the underlying pattern, that the data points describe.
That’s what programmers want. They want it to be able to generalize from the data, not zig-zag back and forth between exact, precise data points.
The way these articles describe this chaos element, it sounds like white noise: it masks other, harsher sounds and smooths everything out, which is how it helps us sleep in noisy environments.
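To make that trick concrete, here’s a continuation of my earlier toy sketch (again my own Python/NumPy illustration, not code from the articles). One common form of this chaos injection is to train on many slightly jittered copies of the data instead of the exact points, so memorizing the zig-zag stops paying off:

```python
import numpy as np

rng = np.random.default_rng(1)

# The same setup: ten noisy samples of a simple underlying pattern.
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 10)

# Noise injection: instead of fitting the 10 exact points, fit 300
# slightly-jittered copies of them. Threading through every exact
# point no longer helps, so even a very flexible model is pushed
# toward the underlying pattern instead.
x_aug = np.repeat(x_train, 30) + rng.normal(0, 0.03, 300)
y_aug = np.repeat(y_train, 30) + rng.normal(0, 0.10, 300)

# Same flexible degree-9 model, trained two ways:
memorizer = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)
regularized = np.polynomial.Polynomial.fit(x_aug, y_aug, deg=9)

# Fresh points from the underlying pattern:
x_test = np.linspace(0.05, 0.95, 50)
y_test = np.sin(2 * np.pi * x_test)

def mse(model, x, y):
    """Mean squared error of a fitted polynomial on points (x, y)."""
    return float(np.mean((model(x) - y) ** 2))

print("memorizer, test error:  ", mse(memorizer, x_test, y_test))
print("regularized, test error:", mse(regularized, x_test, y_test))  # typically much lower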
And guess what? Just as AI learning systems need a little chaos so they can function correctly, humans might need it as well.
The Overfitted Brain Hypothesis.
About a year ago, we began hearing about a new theory about the function of dreams. Erik Hoel, a neuroscientist and assistant professor at Tufts University, has been working on the Overfitted Brain Hypothesis. Recently, he gave us a more layperson-friendly writeup here, at Inverse.
The way he puts it, humans’ lives are pretty dang repetitive. (They were probably even way more repetitive centuries and millennia ago!) And that can lead to us being exposed to only a very narrow data set of experiences that we can learn from.
So maybe our dreams are a way of introducing noise and chaos to our minds so we can retain information more effectively, make better connections between our ideas, and learn the patterns in information we learned that day.
He’s got some other interesting ideas about ways to perhaps bring a “dreamlike” state to people who need to stay awake for long periods. That may prove to be its own wondrous game-changer. But it was this part about vivid, wacky dreams that caught my eye most of all.
Today, Lord Snow Presides over our increasing understanding of one of the most mysterious parts of the human condition: our dreams.
NEXT UP: Kent Hovind’s arrest for domestic violence — and why it’s no surprise. See you tomorrow!
All About Scantron: For those who have the good fortune not to know what those are, Scantron tests come with two paper parts. One part contains the printed-out questions, each with multiple-choice answers. The answer sheets contain numbered rows of circles. They’ll have either 4 or 5 circles, each labeled A-D or A-E depending on the test. Students fill in the circle on the answer sheet that corresponds to the answer they think is correct. Then, after the test is completed, a machine reads the filled-in circles to grade the test.
A True/False question on your test means you should only have to choose between A or B. So if I hadn’t caught the mistake, I’d have been graded wrong on that question.
Oh my gosh, I suddenly wonder if students still take these on paper. I bet not.
1st-Century Friday Topic:
As always, nobody is required to do anything. I provide this announcement only for those who want to read up on our sources ahead of time. (Back to the post!)
About Lord Snow Presides (LSP)
Lord Snow Presides is our off-topic weekly chat series. Lord Snow was my very sweet white cat. He actually knew quite a bit. Though he’s passed on, he now presides over a suggested topic for the day. Of course, please feel free to chime in with anything on your mind: there’s no official topic on these days. I’m just starting us off with something, but consider the sky the limit here. We especially welcome pet pictures!
Please Support What I Do!
Come join us on Facebook, Tumblr, and Twitter! (Also Instagram, where I mostly post cat pictures, and Pinterest, where I sometimes post vintage recipes from my mom’s old recipe box.) Also please check out our Graceful Atheist podcast interview!
If you like what you see, I gratefully welcome your support. Please consider becoming one of my monthly patrons via Patreon with Roll to Disbelieve for as little as $1/month!
My PayPal is firstname.lastname@example.org (that’s an underscore in there) for one-time tips. You can also support this blog at no extra cost to yourself by beginning your Amazon shopping trips with my affiliate link — and, of course, by liking and sharing my posts on social media!
This blog exists because of readers’ support, and I appreciate every single bit of it. Thank you. <3