Machine Fantasies Are No Better Than Religious Ones

October 4, 2018

There’s an algorithm for that? Really?

Skeptics and humanists are good at attacking religious concepts of humanity, society and the universe, and describing the way these ideas dehumanize and denigrate us. But we should also be able to call out dehumanizing and dangerous rhetoric of the secular sort. I’m talking about the offhand way we use metaphors that compare us to machines. We also have a tendency to describe our modes of inquiry in technological terms, and talk about nature and reality as if they’re engineering projects.

The Machine of Nature

During the Scientific Revolution, scientists and philosophers modeled the universe on the example of the clock: a complex invention that ran according to well-understood principles. As so often happens in history, the metaphor became reality. Scientists defined phenomena by how they work, and what function they serve. Not only that, but the machine metaphor defined the process of inquiry itself: to understand something, you need to open it up and see how its parts work. A reductionist approach denies that the whole is greater than the sum of its parts; the point is that the whole is nothing more than the sum of its constituent elements.

The irony is that, scientifically speaking, the mechanical universe is as obsolete as the whale bone corset. Darwin, Einstein, and Freud showed us a universe of contingency and indeterminacy, and did away with the knowable, predictable, ordered reality of the Enlightenment. So why does the dusty old machine metaphor still resonate with us?

It’s a fantasy about control. Humanity has been at the mercy of Nature’s often dangerous whims for so long that we’ve devised methods to make us feel like we’re the ones in charge. But just as praying and ritual behavior gave us the illusion of power, making reality fit our models does nothing except reinforce our delusions of dominance. This delusion has consequences for ourselves and the environment. During the Industrial Revolution, promulgating the scientific concept of Nature as an inert machine was necessary to overcome superstitious beliefs about the consequences of exploiting natural resources for gain. What a deal: all the dominance with none of the responsibility.

Algorithms All the Way Down

Scientific inquiry itself is frequently conceptualized as a machine, a tool with which humanity studies phenomena. Considering the way scientists like Richard Dawkins and Lawrence Krauss talk about science taming Time and Space and decoding the universe, you’d think this idea ennobles humanity. However, there’s an anti-human undercurrent to this kind of rhetoric too. It conceptualizes scientific inquiry as something algorithmic, an automatic process that eliminates human error, but doesn’t acknowledge the personal, cultural, and political messiness of this human endeavor. Evidence goes into the machine, and Truth comes out.

This is another fantasy that denies the human aspect of empirical inquiry. It’s no easier to remove humanity, with all its biases and interests, from the process of inquiry than it is to remove it from the way we conceptualize government or the economy. The claim that science has a “self-correcting” mechanism is a common one, but it’s magical thinking.

The Soft Machine

In the comments section of my previous post, plenty of commenters expressed amusement or outrage at the implication that there’s more to human cognition and consciousness than neuroprocessing. One said, “Our brains are meat computers, I’ve seen nothing to indicate that this isn’t the case.” Another alleged, “There is no ghost in the machine.”

Well, that’s true. Because there’s no machine.

Talking about humans in engineering terms is the dictionary definition of dehumanization. In one sense, it’s typical of the anti-intellectualism of com-box discourse. Talking about the linguistic and cultural contexts of how we conceptualize and interpret human experience is an incredibly complex challenge, and it’s easier to spit out borrowed rhetoric with sciencey-words and pretend you’re dealing with the phenomenon in a sincere way. What’s truly disturbing, though, is that people regurgitate this anti-human numbnuttery because it relieves us of responsibility for our beliefs, behavior, and societies. Don’t blame us, we’re just data processing! As philosopher Stanley Cavell used to say, “Nothing is more human than the wish to deny one’s humanity.”

Furthermore, this shows how infatuated we can get with our own metaphors. Dawkins calls us gene machines, and DNA is so frequently referred to as digital code or a program that we’re unable to approach biology and evolution as anything more than engineering. Once again, when we make reality fit our models, that’s called pseudoscience.

What do you think? Do machine metaphors truly help us understand ourselves and reality, or do they just pander to our need for control and our urge to deny responsibility?

"Kuhn gets hung up on "established paradigms".The Big Bang didn't really re-write a paradigm....... unless ..."

Redefining Science With Thomas Kuhn
"His description of the social-institutional context of scientific inquiry makes the idea of "self-correcting" science ..."

Redefining Science With Thomas Kuhn
"I'm not sure "constantly testing their theories" and "validating their theories" are mutually exclusive. There ..."

Redefining Science With Thomas Kuhn
"What you see as upheaval as Kuhn describes rarely actually happens in science circles....even the ..."

Redefining Science With Thomas Kuhn

Browse Our Archives

Follow Us!


TRENDING AT PATHEOS Nonreligious
What Are Your Thoughts?leave a comment
  • The universe is more like a machine than it is like the whims of spirits and gods.

    The religious fantasies gave us a model which provided explanations that were psychologically satisfactory but had no further utility.
    The mechanical fantasies also provided, as you clearly indicate, psychological satisfaction, but also had utility in grounding our expectations in regular physical law. We should expect, on balance, according to the machine metaphor, for every physical event to have a physical cause, whose mechanism was regular and could be sussed out with observation.

    So it turns out, with quantum physics, that at some level the metaphor breaks down, in that mechanical clocks don’t have any stochastic parts. I guess I would respond with a shrug and say, yeah, so? No metaphor is perfect. This one, however, is pretty good, and works a whole lot of the time. So long as we keep in mind that the model is not perfect–which is something anyone who is working with any analogical model should do anyway–there’s no real problem until a better metaphor comes along and more perfectly captures our experience.

    What do you prefer? An organic metaphor? Maybe some day soon, when we understand biological activity as thoroughly as we understand physical mechanics, that will be the metaphor to go with, and that will address the possible problem of dehumanization that you raise. But we aren’t there yet.

  • Anthrotheist

    What you address is limited strictly to science’s machine metaphor being a better fit than supernaturalism for explaining nature (more useful for producing larger amounts of reproducible knowledge), and in that limited scope I think you are absolutely correct. Your disagreement though doesn’t extend into what I read as being the author’s actual objection: when the lens of the scientific machine model is turned toward human beings (biologically, psychologically, and socially).

    This is where science is dehumanizing: attempting to view humans as machines that can be reduced to constituent parts, examined and understood with little regard for the whole, and with the expectation that this machine can be engineered like any other. Society is approached with similar treatment (such as the architectural model of “social structures”), and conclusions that are drawn are skewed with unacknowledged biases in exactly the same way. Like machines, we begin to expect that we can “fix” people and societies, or “improve” them and make them “better”, and this gave us concepts like eugenics

  • Anthrotheist

    I’m not sure what I think about machine metaphors vs. religious metaphors regarding nature. While cherishing rocks seems odd, the notion of a “mother earth” still resonates with me. I was trying recently to define what “life” is, and one of the conclusions that I came up with was, “It’s the property that allows us to pulverize earth and stone with impunity, while simultaneously condemning anthropogenic extinctions.” I agree, that appears to be the machine metaphor of the nonliving universe (its materials and forces) at work. Some ancient and long-lived societies revered the natural world, giving thanks to animals killed for food for example; maybe some day modern morality will return to that point again, I don’t know. The philosophical question of course is, “Would returning to that morality be progress?”

  • The universe is more like a machine than it is like the whims of spirits and gods.

    All right, but that only demonstrates how woefully inadequate religious explanations are.

    What I wanted to point out is that the machine metaphor is more than just useful shorthand. It’s trying to make reality fit our models, and insisting that phenomena validate our prejudices. It conditions the way we study nature, and our aims and expectations regarding inquiry.

    The dehumanization aspect is, pure and simple, Hómo Sap’s perennial search for an escape hatch from responsibility.

  • This is where science is dehumanizing: attempting to view humans as machines that can be reduced to constituent parts, examined and understood with little regard for the whole, and with the expectation that this machine can be engineered like any other.

    I think this is blaming the metaphor for its bad users, which places the blame in the wrong place. For me, a mechanical metaphor doesn’t cause me to denigrate the whole for the parts, and indeed why would it be expected to do this? ‘What’s the point of the parts if the whole doesn’t work?’ is a thought I am certain has occurred to every engineer who has worked on a physical system.

    We look at the cells and organs in order to heal the person.
    Is it disconcerting to be thought of as a collection of organ systems? Yes.
    Is this dehumanizing? Not normally.
    Is it less dehumanizing than dying of preventable disease? Definitely.

    Like machines, we begin to expect that we can “fix” people and societies, or “improve” them and make them “better”, and this gave us concepts like eugenics

    …and sanitation. And city planning. And epidemiology. Treating the country as a machine, the city as a machine, the body as a machine has, if you take the totality of those metaphors’ effects, been thoroughly humanizing. There is nothing more dehumanizing than dying from dysentery because folks running the sewers failed to think systemically, and nothing more humanizing than the reorganization of affairs to make them at once less deadly and miserable, and more predictable and comfortable, than they were before we adopted the conceits that make those systems possible.

    As I said, if there is a better metaphor that more perfectly possesses and communicates the particular value-ladenness either Shem or you would like, by all means sketch it out. But the faults so far being laid at the feet of the mechanical metaphor are pretty dodgy charges, and nothing better is yet on offer.

  • To me, there is nothing really to elaborate on other than the standard warning–which applies to all models, metaphors, and conceits, everywhere, always–that we shouldn’t be seduced into thinking our particular model will be apt in all cases. When the physical indicia of a phenomenon do not fit into the current model cleanly, we should be cautious in how we proceed.

    That caution, for example, currently makes room for physical systems that don’t seem to work like clock gears (quantum systems), and physical systems whose behavior is sufficiently complicated at a higher level of abstraction that providing a lower-level explanation is beyond our current reach. The gearbox universe already has several widely-accepted and uncontroversial asterisks next to it in the Big Textbook of Science. So where’s the beef?

    If we switched to an ecological or biological metaphor, do you think that humans would fail to use these models to attempt to escape responsibility for our actions?

  • Anthrotheist

    I think that I understand your point, and I agree with it to an extent.

    The question that I think is being asked isn’t so much, “Is the scientific machine metaphor ineffective or worse than none at all?”

    Instead I think it is more, “Should we be content with the metaphor’s application to humanity; or should we stop defending it in that area and put our energies toward trying to develop a better system, rather than continuing to produce potentially flawed and dehumanizing conclusions about people?” It isn’t an all-or-nothing consideration to me. It is a pointed (but hypothetical) criticism of science that I feel is crucial in trying to make our epistemology better (not abandoning it altogether), especially regarding the subject of humanity.

    I don’t have the answer, I don’t imagine that Shem does either, but perhaps the point is to get people thinking about it, as we are right now.

  • I think the answer is that we should never be content with a metaphor, because to be content with any metaphor is to be content with an illusion.

    As far as humans go, I find it useful to remind myself (and occasionally others) that all human endeavors–including science, philosophy, religion, and everything else–start and end with the human perspective. Consequently all these things are human tools, toward human ends. In the end how they are measured is by humans, whether they live up to their promises of efficacy, whether they actually help us achieve our varied ends.

    If our ends are bad, our tools will amplify that badness. If we seek to dehumanize, all our great creative power will be bent toward that goal. I think, though, in the end that the mechanical metaphor can serve as a potent warning, a reminder that all of our tools are tools, including the cities we build, the devices we fashion, even the very words we write and speak. It is also a reminder that all of our tools are our tools, and so what is wrought by them is ultimately on us, and not what we were thinking at the time we used them.

    And I certainly agree it’s never a bad idea to get people thinking about these things.

  • The gearbox universe already has several widely-accepted and uncontroversial asterisks next to it in the Big Textbook of Science. So where’s the beef?

    I dunno. Ask my amigos, the ones who talk about the brain being a meat computer, period. On the neurocentrism discussion, I was only asserting that there’s more to human consciousness than brain meat and people made it sound like I may as well have been talking about angels. I’m afraid you’re making it sound like the average science fan is every bit as cautious and circumspect about their use of metaphors as you, and I can tell you from years of experience that’s not even remotely true.

  • Anthrotheist

    I agree, and your comment leads to another related (and in my opinion quite real) concern: techno-fetishism, where the tool becomes the point rather than the means.

  • Should we care about all science fans, or just scientists? The laity often had only the very gauziest idea of the concepts that the priests trafficked in, and patients believe all manner of things about medicine that would make a doctor snort. Does this indicate something fundamental about science, about religion, about medicine? I tend to think it’s merely that the proper circumspection and caution usually comes along with actual formal training and expertise, and people unburdened with the specific knowledge that permits doing actual scientific exploration should not be expected to understand science in the same way as those who are so burdened.

    So, for the most part, pointing to a science fan, or a lay theist, or a patient is a distraction from analysis of the underlying discipline, its actual productive activity, its actual procedural and epistemological weaknesses. Occasionally mass behavior (usually through politics, but occasionally through unorganized emergent effects) does matter, but these cases are rather the exception than the rule for understanding any decision or model produced by said disciplines.

    And I am most interested in your answer to my last question. If your primary objection to the dominance of the current model is that it is used to excuse human bad behavior and put off responsibility for our acts, what model would not have that capacity? As far as I can tell, any model heretofore used on Earth to explain any phenomenon or articulate any value can be and has been twisted to evil ends. We need not belabor examples. So, if the motivation for questioning our model is a value-laden one, what reason do we have whatsoever to believe that the problem rests with the particular current model, rather than simply that it is a problem with models, generally?

  • TinnyWhistler

    “Darwin, Einstein, and Freud showed us a universe of contingency and indeterminacy, and did away with the knowable, predictable, ordered reality of the Enlightenment.”

    I’m gonna very much disagree with this. 1) Chaos and the butterfly effect are not indeterminacy. 2) Einstein was the one who said “God does not play dice with the universe” specifically as a **reaction to** new physics which suggested that determinism is physically impossible, at least on the quantum scale.

    If you wanna talk about a rejection of determinism in physics, don’t use Einstein. Talk about quantum mechanics. But even then, you’re gonna get pushback on how even events that are physically impossible to predict and describe individually still show very predictable probabilities when taken en masse. That’s why statistics is so important to particle physics.
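
    To make that last point concrete, here is a minimal sketch added purely for illustration (the function name, the 0.3 probability, and the sample sizes are all made up for the toy, not taken from the comment above): each simulated event is irreducibly random on its own, yet the aggregate frequency settles onto a stable value, which is the sense in which statistics carries the weight in particle physics.

```python
import random

# Toy model: each "measurement" is irreducibly random on its own
# (a 30% chance of outcome 1), so no single outcome can be predicted.
def measure(p=0.3):
    return 1 if random.random() < p else 0

# But the aggregate frequency over many events still converges on p.
for n in (10, 1_000, 100_000):
    outcomes = [measure() for _ in range(n)]
    print(n, sum(outcomes) / n)  # drifts toward 0.30 as n grows
```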

  • Point well taken.

    I think Darwin is a better example of someone whose theories demonstrated just how messy, contingent, and unpredictable nature is, and yet has become the patron saint of the science fan who thinks the algorithmic DNA-and-natural-selection machine determines the future of life on Earth.

  • TinnyWhistler

    Chaotic systems can still be deterministic. Just because it’s really, REALLY hard to model something doesn’t mean it *can’t* be done eventually or that it isn’t deterministic. So far as I’m aware, there’s nothing in Darwin’s theories that rules out determinism. Evolution hasn’t been demonstrated to be indeterministic, just very (very, very) sensitive to its many (many, many) contributing factors in a way that means we can’t precisely model it in the same way we can the trajectory of a bullet. Like I said before, the butterfly effect doesn’t mean a system isn’t deterministic, just not really practical to model since simplifications have a bigger effect the more sensitive your system is to initial conditions.

    Of course, you can always bring Darwin back around to quantum-level uncertainty but it’s probably a bit silly.

    As always, any biologists are welcome to correct me as I talk out of my ass.
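
    As an editorial aside, the deterministic-but-hard-to-predict point is easy to see with the textbook logistic map. This is a sketch added for illustration, not part of the comment above; the function name and starting values are made up for the example. The update rule contains no randomness at all, yet two trajectories that start a hair apart soon diverge completely.

```python
# Logistic map x -> r*x*(1-x): fully deterministic, no randomness anywhere.
def trajectory(x0, r=4.0, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.200000)
b = trajectory(0.200001)  # initial condition differs by one part in a million

# The two runs track each other at first, then separate completely:
# sensitivity to initial conditions, not indeterminism.
for step in (0, 5, 10, 20, 40):
    print(step, round(a[step], 6), round(b[step], 6))
```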

  • And I am most interested in your answer to my last question. If your primary objection to the dominance of the current model is that it is used to excuse human bad behavior and put off responsibility for our acts, what model would not have that capacity?

    Who said that was my primary objection? I think it needs to be said that the use of machine analogies for the universe, scientific inquiry, and the human body is inaccurate and misleading. You even said that there’s “uncontroversial asterisks” next to the concept, persuasive enough evidence to me that we’re making the metaphor do heavy lifting for which it’s unequipped.

    I think that machine metaphors are unique in that they explicitly deny agency to phenomena, even phenomena that involve agency and consciousness. The science fans who responded in the neurocentrism discussion have every right to mock religious nuts who think that hurricanes or epidemics are God’s will. But denying that humans have free will, or asserting that consciousness is just an illusion created by the brain’s software setup, is something that derives from the belief that we’re just machines and anyone who claims otherwise must be a sentimental or religious crackpot.

  • I think that part of the problem is that some people (of the “techbro” persuasion) believe that the machine metaphor supplies a way to escape mortality: the singularity will come soon, and it will upload human minds into the cloud to live in paradise, free from mortal frailty, or else give humans biological immortality by continuously maintaining our bodies as if they were rusty cars whose parts you can keep replacing.

    The biggest theme in transhumanism is immortality and its converse: mortality, and how to make gods to save you from it.

  • Kevin K
  • That’s a good point. I was only thinking of the way machine metaphors allowed the techno-fetishist to escape ambiguity.

  • You’re not being smart here.

  • I dunno. I found it to be a pretty pointed objection. Why are we discarding the machine metaphor, unless we have good reason to believe that there is an element to human consciousness that is not rooted in a set of physical facts and supervenient upon them? It would be quite a discovery–Earth shattering, really–if we actually discovered an irreducible non-physical element to the experience of mind. Unless that occurs, multiplying unevidenced objects in our descriptions leads far enough astray from parsimony to obviate most meaningful rigor and testability, yielding metaphors about as useful as the spirits and daemons of old.

  • For the same reason that chemists who don’t compare molecules to machines or to computers don’t thereby become dualists.

    Calculation, or computation, is simply a misleading metaphor for what the brain does.

  • How so? What does the brain do that is not computational in nature?

  • I’m not really a biologist, so I am unable to describe all the functions performed by the brain, but I can provide a few links to wikipedia:
    https://en.wikipedia.org/wiki/Astrocyte#Function
    https://en.wikipedia.org/wiki/Neuroglia#Macroglia

    Many of the cells that comprise the white matter and the grey matter are more autonomous than originally thought; unlike a computer’s circuit elements, such as gates and transistors, brain cells can act on their own and don’t adhere to fixed roles.

  • None of that suggests anything about the computability of the tasks the cells are doing, only that they are doing them in a way different than initially expected. It is also not the case that the flexibility of role for pieces of substrate is not achievable in silicon [link].

    The fact that meat is not like silicon in several incidental ways is a distraction from the fundamental similarity that in a silicon circuit, as in a neuron-networked brain, information is being interpreted and recorded, read and written, and manipulated algorithmically. We are pretty certain about those things in both substrates.

    What we are uncertain about is how cells do these tasks, and we are also uncertain about whether all the various tasks that brains do can be described in these terms. So far, there have been no tasks that we can demonstrate brains to do that cannot be described in terms of moving information around a network and manipulating it in pretty standard ways, even if the levels of abstraction employed vary depending on the nature of the objects of description.

    So it is certainly possibly true that brains are more than computers and can do something that Turing machines fundamentally can’t, but presently there is no positive reason to believe it so.
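
    To give one concrete picture of what describing a brain in terms of moving information around and manipulating it “in pretty standard ways” can mean, here is a minimal sketch added for illustration (the function name, weights, and thresholds are invented for the toy; nothing in this thread claims real neurons reduce to this): a single McCulloch-Pitts-style model neuron that sums weighted inputs and fires past a threshold. Whether such descriptions exhaust what brains do is exactly the point under dispute.

```python
# A McCulloch-Pitts-style model neuron: weighted sum of inputs, then a threshold.
# This is the level of abstraction at which "neurons compute" is usually meant.
def neuron(inputs, weights, threshold):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# The same unit with different thresholds behaves like different logic gates:
print(neuron([1, 1], [0.6, 0.6], threshold=1.0))  # acts as AND -> 1
print(neuron([1, 0], [0.6, 0.6], threshold=0.5))  # acts as OR  -> 1
print(neuron([0, 0], [0.6, 0.6], threshold=0.5))  # OR of zeros -> 0
```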

  • “None of that suggests anything about the computability of the tasks the cells are doing, only that they are doing them in a way different than initially expected”

    It suggests that the brain architecture is vastly more flexible than any computer that has ever been designed.

    “It is also not the case that the flexibility of role for pieces of substrate is not achievable in silicon [link].”

    FPGAs don’t model astrocytes or glial cells; they cannot change hardware that has already been assembled on the IC. Any possible datapath that can be assigned must already exist on that chip, in a fundamentally limited configuration, unlike cells, which can multiply and grow.

    “The fact that meat is not like silicon in several incidental ways”

    Meat is simply not at all like silicon. When you talk about “information” being “manipulated algorithmically”, these are words that we only developed in the context of machines or computers following lists of instructions to perform certain tasks. Cells… and the organs which they comprise… simply don’t have instructions like we understand them, and DNA is simply not a programming language with any kind of semantics.

    https://www.skepticink.com/smilodonsretreat/2014/09/10/dna-is-not-like-a-computer/

    We simply can’t determine any kind of abstraction or symbol manipulation that occurs in brain hardware that is in any way analogous to abstraction or symbol manipulation on computer hardware. The fact that we (certain members of Homo sapiens, in varying degrees) can approximate symbol manipulation isn’t really here nor there, because it isn’t a common feature of mammal brains; it’s not a fundamental component of brains in general.

  • It suggests that the brain architecture is vastly more flexible than any computer that has ever been designed.

    I guess I don’t understand exactly what the bit in italics there is claiming. It’s always a bad idea to judge the total potential of a given idea by its current level of realization. It might have been a stretch for a person who saw the first fireworks in the Tang dynasty to realize that essentially the same principles, refined, would eventually take people to the moon, but their grandchildren in the Song dynasty would see those toys turned into weapons of war.

    That we currently lack the means to do X is never good evidence that it is impossible to do X.

    FPGAs don’t make a model of astrocytes or glial cells, they cannot change their hardware that has already been assembled on IC; any possible datapath that can be assigned must already exist on that chip, in a fundamentally limited configuration, unlike cells that can multiply and grow.

    Throw a “yet” after any of that, and you realize the limitations of this path of argument. We couldn’t manipulate qubits ten years ago, but we can now. Ten years ago, would a person have been justified in saying that because it hadn’t been done, qubits will never be usefully manipulated?

    Haven’t yet != Never will

    Meat is simply not at all like silicon. When you talk about “information” being “manipulated algorithmically”, these are words that we only developed in the context of machines or computers following lists of instructions to perform certain tasks.

    Assertion is not argument. When I talk about information being manipulated algorithmically, I am using language that is not specific to the computing machine context and mostly predates it.

    Cells… and the organs which they comprise… simply don’t have instructions, and DNA is not a programming language with any kind of semantics.

    This is simply false. The place where the machine model is strongest by far in the biological context is in how DNA is manipulated to yield instructions to build proteins. The semantics of DNA language (its codons, transcription protocols, start-read and halt-read commands, and meta-tagging through methylation, histone sensitizing antibodies, and chromatin blocking) are all very well understood and can be imported pretty damn cleanly for a simple conceit (see the sketch at the end of this comment), to the point where we’ve written our own working programs to produce artificial proteins using existing cellular machinery.

    The fact that we (certain members of homo sapiens, in varying degrees) can approximate symbol manipulation isn’t really here, nor there, because it isn’t a common feature of mammal brains- its not a fundamental component of brains in general.

    Some mammals do seem to be able to manipulate symbols with content. Some non-mammals, such as grey parrots and octopuses, can too. And, again, I must ask what you think is proved, if anything, by the fact that only certain sorts of brains exhibit this particular property to a detectable level?

    And also, what do you mean by “approximate symbol manipulation”? It’s not an approximation, it is symbol manipulation.
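
    Here is the sketch referred to above: a minimal, admittedly simplified illustration (an editorial addition, not a claim that cells literally execute code this way; the function name and the sample DNA string are invented for the example) of how the standard codon table can be written down as a lookup table and “run” over a nucleotide string, which is roughly the conceit being imported when DNA is described as a language with semantics.

```python
# The standard genetic code as a lookup table: codon -> one-letter amino acid ('*' = stop).
bases = "TCAG"
amino_acids = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codons = [a + b + c for a in bases for b in bases for c in bases]
genetic_code = dict(zip(codons, amino_acids))

def translate(dna):
    """Read from the first ATG start codon, three bases at a time, until a stop codon."""
    start = dna.find("ATG")
    protein = []
    for i in range(start, len(dna) - 2, 3):
        aa = genetic_code[dna[i:i + 3]]
        if aa == "*":            # stop codon: halt "execution"
            break
        protein.append(aa)
    return "".join(protein)

print(translate("CCATGGCTGAATCTTAA"))  # -> "MAES" (Met-Ala-Glu-Ser), stopping at TAA
```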

  • “I guess I don’t understand exactly what the bit in italics there is claiming. It’s always a bad idea to judge the total potential of a given idea by its current level of realization. It might have been a stretch for a person who saw the first fireworks in the Tang dynasty to realize that essentially the same principles, refined, would eventually take people to the moon, but their grandchildren in the Song dynasty would see those toys turned into weapons of war.”

    “Throw a “yet” after any of that, and you realize the limitations of this path of argument. We couldn’t manipulate qubits ten years ago, but we can now. Ten years ago, would a person have been justified in saying that because it hadn’t been done, qubits will never be usefully manipulated?”

    Moore’s law won’t give you what you want. Supercomputers today are reaching the limits of the computations they can do per power consumed and heat generated, and adding more transistors on a chip won’t prevent the chip from melting if you try to use all of them at the same time.

    https://www.nature.com/news/the-chips-are-down-for-moore-s-law-1.19338

    Additionally, quantum computers are closer to analog computers than you realize. The limitation on their computing power is measurement accuracy and precision, so although your big number of qubits may give you 2^big basis states, it doesn’t mean much if you can’t distinguish them through measurement or keep the temperature low enough to prevent thermal noise.

    Don’t believe the hype:
    https://phys.org/news/2017-08-hype-cash-muddying-quantum.html
    https://medium.com/quantum-bits/top-3-quantum-myths-and-misconceptions-2ae797550746

    And if you’re talking about moon rockets: plenty of people were willing to believe that we would be living and thriving in space by now, so over-eager technological extrapolations are pretty common in that regard too.

    “This is simply false. The place where the machine model is strongest by far in the biological context is in how DNA is manipulated to yield instructions to build proteins. The semantics of DNA language (its codons, transcription protocols, start-read and halt-read commands, and meta-tagging through methylation, histone sensitizing antibodies, and chromatin blocking ) are all very well understood and can be imported pretty damn cleanly for a simple conceit, to the point where we’ve written our own working programs to produce artificial proteins using existing cellular machinery.”

    You can make cells produce proteins, but we’ve been doing that since forever using plasmids; it’s not that hard. What’s hard is figuring out how all these systems interact inside the cell. Yes, you may be able to construct a supercomputer model of the cell (and it really does require a supercomputer to do this, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2711391/), but the supercomputer takes up maybe a couple of megawatts and the floorspace of a large office building to emulate, much more slowly, what the cell does on microwatts and in the space of a few microns. That should tip you off that maybe the cell doesn’t really adhere to a computational or mechanical metaphor, if a computer has such a hard time processing an explanatory model based on computation.

    Yes, you may theoretically be able to get a Turing machine to run every single physical process you want, so you can claim that they’re all fundamentally “computational”, but a theoretical machine that has a tape bigger than the universe and a runtime longer than the heat death of the universe isn’t really demonstrating an understanding of the important details of said process. Philosophy has to abide by practical limitations.

    https://www.scottaaronson.com/papers/philos.pdf

  • What does the brain do that is not computational in nature?

    Lots of things. Hope, fear, love, desire, use language metaphorically rather than literally, ascribe meaning and morality to its knowledge, and so on. These things are more than data processing.

  • Because you say so. Fascinating.

  • Um, well, do computers do these things?

  • martin_exp(pi*sqrt(163))

    That we currently lack the means to do X is never good evidence that it is impossible to do X.

    it might be interesting to see how turing came up with his notion of what we now call turing machines (which is not limited to what we can actually build, in the 1930s or now). he then was able to prove that there are “uncomputable problems”, in particular hilbert’s entscheidungsproblem.

    it’s interesting that his definition of turing machines is partly based on what humans/mathematicians consider to be computable (this is what he wanted to make precise with his abstract machines), how humans calculate things, the limitations of human memory and human symbol recognition, and the physical limitations of writing/printing of easily distinguishable symbols. i collected a few quotes from his paper “on computable numbers with an application to the entscheidungsproblem”:

    “Once it is granted that computable numbers are all “computable”, several other propositions of the same character follow. In particular, it follows that, if there is a general process for determining whether a formula of the Hilbert function calculus is provable, then the determination can bo carried out by a machine.”

    “Computing is normally done by writing certain symbols on paper. We may suppose this paper is divided into squares like a child’s arithmetic book. In elementary arithmetic the two-dimensional character of the paper is sometimes used. But such a use is always avoidable, and I think that it will be agreed that the two-dimensional character of paper is no essential of computation. I assume then that the computation is carried out on one-dimensional paper, i.e. on a tape divided into squares. I shall also suppose that the number of symbols which may be printed is finite. If we were to allow an infinity of symbols, then there would be symbols differing to an arbitrarily small extent.”

    “The differences from our point of view between the single and compound symbols is that the compound symbols, if they are too lengthy, cannot be observed at one glance. This is in accordance with experience. We cannot tell at a glance whether 9999999999999999 and 999999999999999 are the same.”

    “The behaviour of the computer at any moment is determined by the symbols which he is observing, and his “state of mind” at that moment.”

    “We will also suppose that the number of states of mind which need be taken into account is finite. The reasons for this are of the same character as those which restrict the number of symbols. If we admitted an infinity of states of mind, some of them will be “arbitrarily close” and will be confused.”

    “Let us imagine the operations performed by the computer to be split up into “simple operations” which are so elementary that it is not easy to imagine them further divided. Every such operation consists of some change of the physical system consisting of the computer and his tape.”

    “We may now construct a machine to do the work of this computer. To each state of mind of the computer corresponds an “m-configuration” of the machine.”

  • martin_exp(pi*sqrt(163))

    one could also imagine one turing machine for the whole universe. if we knew enough about this turing machine/universe we could prove that it’s practically impossible to replicate this (or any) turing machine in-universe. space and time would also be simulated by this turing machine, btw. neither the “size” of the cells nor the “time” for one computational step would have any physical meaning in-universe. the definition already doesn’t specify the size of the cells on the “tape” and the time for one step (which is funny: the time complexity of algorithms is often defined in terms of number of steps, not seconds).

    then one could imagine that the “turing machine of the universe” is simulated on a universal turing machine (i love puns) … or even weirder: xkcd – a bunch of rocks.

  • rationalobservations?
  • I’ve always thought memes are the lowest form of wit, and this does nothing to change my opinion.

    What point are you trying to make here?

  • IconoclastTwo

    Today? Obviously no, but I don’t think you can categorically say that it’s impossible for such a computer to exist in the future.

  • It’s ironic that you take umbrage when I appear to be underestimating the abilities of computers, not that I’m making any sort of valid distinction between humans and computers.

    There are just a lot of biochemical, linguistic and cultural contexts to human consciousness that aren’t computational. Am I being unforgivably presumptuous for pointing out these facts? Should I be so ready to dismiss human consciousness as mere data processing that I’ll accept speculation about the abilities of future computing devices with no evidence whatsoever?

  • IconoclastTwo

    You’re reading a lot more into my response than I was actually trying to say, though. I’m asking how you can claim to know, now, that this will never be representable at any point in the future. I’m not saying that it’s possible so much as I’m saying that I don’t see how you can rule it out as impossible.

  • Well, you’re reading a lot more into my claim than I was trying to make. I never made claims about what computers can conceivably or will be able to do.

    What I was trying to say is that human brains aren’t just computing devices. Plenty of features of human consciousness involve more than pure computation.

  • ortcutt

    I don’t really know whether anyone has a machine understanding of life in the 21st Century. It’s a strawman, really. In the 17th Century, they didn’t know the first thing about the biochemical basis of life, so they went with ideas they could understand, like machine analogies with gears and pulleys. We know better now: living organisms aren’t machines with tiny gears and pulleys. They are complex biochemical systems. Humans are complex biochemical systems, like every other living thing on Earth.