Document Type
Paper
Abstract
In late 2022 and early 2023, Large Language Models (LLMs) exploded into the mainstream in the form of OpenAI’s ChatGPT. Artificial neural networks, artificial general intelligence, the singularity, and other obscure topics previously restricted to the water cooler in graduate school lounges and AI conference panels are now discussed on the nightly news. Proponents of AI utopian and dystopian futures battle it out on social media and in congressional committee rooms. Influencers such as entrepreneur Elon Musk tend toward the dystopian view:
“If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it…It’s just like, if we’re building a road and an anthill just happens to be in the way, we don’t hate ants, we’re just building a road.” [1]
On the utopian end of the spectrum, futurist Ray Kurzweil predicts the following (written originally in 2001!):
“Within a few decades, machine intelligence will surpass human intelligence, leading to … the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light.” [2]
These are both remarkable claims (luminaries such as Stephen Hawking, Mark Zuckerberg, and Sam Altman make similar ones). Underneath these claims lies an unstated materialist or naturalist worldview: the mind emerges from the brain, and therefore, with enough time and computational resources, we will achieve artificial general intelligence (AGI), in which the cognitive capabilities of computers match or exceed those of humans. Do technology trends and the history of computation over the last century support this view? If not, why not? The view of the authors is that they do not support the dystopian/utopian views above. We are a long way from either a utopian or dystopian future, and there are good reasons to believe that we will never achieve AGI with today’s technology.
For the average Christian (in the pew and in the halls of academia), discussions about AI can be intimidating. When reading about predictions of the future of AI, we would do well to keep in mind the advice of Nobel Laureate Richard P. Feynman who once said:
“I believe that a scientist looking at nonscientific problems is just as dumb as the next guy – and when he talks about a nonscientific matter, he will sound as naive as anyone untrained in the matter.” [3]
Or even farther back, Luke the Physician in the book of Acts:
“Now these people were more noble-minded than those in Thessalonica, for they received the word with great eagerness, examining the Scriptures daily to see whether these things were so. Therefore, many of them believed, along with a significant number of prominent Greek women and men.” (Acts 17:11)
To paraphrase the quotes above: Christians should not be intimidated by “experts,” and we should do our own research before accepting any view on the future of AI. Experts are often blind to their own worldview, yet they dictate the terms of the debate. As Christians, we should not forget that Scripture has much to say about natural and artificial intelligence, especially as it relates to what it means to be human. There is, after all, nothing more fundamental to questions of worldview than what it means to be human. Despite what some may claim, STEM (Science, Technology, Engineering, Mathematics) is not “worldview neutral.” One’s worldview shapes how one views STEM issues in general and AI in particular.
We will argue in this paper that a materialistic worldview is preventing popular AI commentators from “seeing the forest for the trees.” More precisely, it is the closed universe assumption of the naturalistic or materialistic worldview that best explains why the experts believe as they do. By closed universe assumption, we mean the view that all there is to know is what can be sensed, observed, or measured in this physical world. In contrast, the open universe assumption characteristic of the theistic worldview holds that there are forces or influences beyond our senses, and that these must be taken into account when assessing claims about AI.
The open universe perspective characteristic of the theistic worldview is needed to properly assess modern AI claims. This is something Christians should have no difficulty grasping. The modern AGI debate is an excellent opportunity for Christians to discuss (open) worldview beliefs with believers and unbelievers alike. In popular discussions, the closed worldview underlying these positions is left unstated and unchallenged; the opportunity is there to introduce the theistic, and specifically Christian, worldview into these discussions. The authors believe this is an excellent opportunity to inform Christians not only about the latest AGI technologies, but also about why this topic matters in the discussion of worldview.
In the remainder of this paper, we present a brief overview of the history of artificial intelligence leading up to the current state of the art, including the emergence of large language models (LLMs) and generative AI. We discuss some of the classical challenges to AI (the Turing test, the Lovelace test, the Chinese Room experiment) and how they often miss the mark because of their underlying worldviews. We then discuss how Scripture can help us assess these technologies and inform how we should approach them from a Christian ethical and moral perspective. We conclude by describing some of the efforts underway at Oklahoma Baptist University (OBU) to educate students on the integration of faith and technology in general and artificial intelligence in particular. We hope that the ideas in this paper will help outline the role Christians should play in shaping the narrative around AI technologies.
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.
Large Language Models and Worldview – An Opportunity for Christian Computer Scientists