David Christopher Lane, Ph.D., Professor of Philosophy, Mt. San Antonio College; Lecturer in Religious Studies, California State University, Long Beach. Author of Exposing Cults: When the Skeptical Mind Confronts the Mystical (New York and London: Garland Publishers, 1994) and The Radhasoami Tradition: A Critical History of Guru Succession (New York and London: Garland Publishers, 1992).

The Cyborg has Entered the Classroom

A.I. and the Future of Education

David Lane

“The best thing for being sad," replied Merlin, beginning to puff and blow, "is to learn something. That's the only thing that never fails. You may grow old and trembling in your anatomies, you may lie awake at night listening to the disorder of your veins, you may miss your only love, you may see the world about you devastated by evil lunatics, or know your honour trampled in the sewers of baser minds. There is only one thing for it then—to learn. Learn why the world wags and what wags it. That is the only thing which the mind can never exhaust, never alienate, never be tortured by, never fear or distrust, and never dream of regretting. Learning is the only thing for you. Look what a lot of things there are to learn.” - T.H. White, The Once and Future King
“If we do it right, we might be able to evolve a form of work that taps into our uniquely human capabilities and restores our humanity. The ultimate paradox is that this technology may become a powerful catalyst that we need to reclaim our humanity.” —John Hagel
Question: What is more important than learning? Answer: The desire to learn.

I remember my father, Warren, suggesting that a hand calculator, though quite remarkable, shouldn’t necessarily replace the slide rule we were then using in my math classes at Notre Dame High School. I knew he had a point, but I also realized that one could make the same argument for not discarding the abacus, which he also occasionally used.

In Plato’s dialogue Phaedrus, Socrates argues against writing things down, lest one come to rely solely on something apart from one’s own memory.

“And so, it is that you by reason of your tender regard for the writing that is your offspring have declared the very opposite of its true effect. If men learn this, it will implant forgetfulness in their souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.”

Socrates' point is that it is better to have a one-on-one discussion, where the to and fro of ideas can evolve in real time. This is not possible when a text is codified and unable to provide in-the-moment responses. Writing was a fundamental disruptor of oral communication. Or so it seemed to Socrates. He was certainly correct in one respect, but what writing brought forth was a way to record and catalog all such previous conversations, even those originally conducted orally.

Rhetoric, poetry, and dialogue didn’t die out because of writing, but continued even if adjustments were made in the process.

Even the transition from handwritten manuscripts to Gutenberg’s movable-type printing press was lamented by some, who thought that such uniformity betrayed the copyist’s sensibility for artistic nuance. The telegraph, and later the telephone, were also viewed negatively in certain quarters, precisely because they hurried and condensed communication.

Marshall McLuhan, the famed media critic, wrote in 1964 that the medium is the message, and that technologies such as television can cheapen language, transforming it into advertising slogans and pithy, but often misleading, sound bites.

With the advent of the Internet and its most popular application, the World Wide Web, many in Silicon Valley and elsewhere believed that the democratization of information would lead to better interactions across large swaths of the population that were otherwise marginalized. And to a large degree this did indeed happen. But what most didn’t expect was that opening these digital floodgates also released a torrent of hate speech, massive amounts of spam, and an endless flow of bad information. All of this reminds one of the song “Who Let the Dogs Out?”, which was released just prior to the first Net crash.

Homo sapiens is, to be sure, a social animal, but we are also a very competitive species, and we too often suffer from an inability to recognize our own limitations and our own hubris.

Unlike other animals, whose young can be self-sufficient within days or weeks, we need years to grow into independent players. Because of this, we have developed a hierarchy of educational systems to train the young for what the future has in store for them.

Education, let us not forget, is about preparing for what will happen, not for what has already transpired. Yes, to be sure, we need to have a deep sense of history and all that has occurred before our arrival, but that is because we need to learn from our mistakes so that we can optimize our tomorrows.

I mention all of this as a necessary preface for why I think we need to embrace artificial intelligence in our halls of learning as early as possible. Instead of fearing what is inevitable, we will be much wiser if we accept and integrate the synthetic offspring we have engineered. Keep in mind that as far back as 1863, Samuel Butler argued in his essay “Darwin Among the Machines” that we should turn our back on all mechanical progress, since it will be our doom. Yet, ironically, even as he lamented the industrial revolution, he wasn’t averse to having his books published using the very machines he wanted to eliminate.

Now, let’s be clear: we are not forgoing our tools, and we are certainly not going to retreat into some hippie off-the-grid commune out of a delusion that killer robots, Terminator-like clones, are coming to kill all of humanity.

A.I., like it or not, is here to stay. We are not going back to typewriters (Tom Hanks be damned!), because we miss correcting our mistakes with liquid white-out. Word processing was and is a game-changer, just as having a search engine like Google has opened up vast forms of information that were hitherto inaccessible.

Right now, as I write these words, I read that “New York City schools have banned ChatGPT, the artificial intelligence chatbot that generates human-like writing including essays, amid fears that students could use it to cheat. According to the city's education department, the tool will be forbidden across all devices and networks in New York's public schools.”

Some educators are applauding the move, out of fear that students won’t write original essays and simply rely on A.I. systems like ChatGPT to do their homework.

But this, I suggest, is myopic in the extreme.

Are we now going to ban spell check in Word? Ban Grammarly? Ban Wikipedia? Ban Google Scholar? The list goes on, if we are truly worried about A.I. obviating student learning.

No, the Pandora’s Box of artificial intelligence has already been opened and there is no closing it now. Instead, we should embrace such innovations and let them improve how we learn, not shut the door to them in our public schools, only to have students access them freely at home. It is a stupid game we are playing.

Students should be encouraged to play with ChatGPT and DALL-E 2 and any other up-and-coming iterations of machine learning. Suppressing such creative tools is backwards, especially since we can learn much more by augmenting our own intelligence with their surprising capabilities.

I am not a great fan of Dan Brown’s fictional novels, since I tend to find them too predictable and clichéd for my taste. However, despite whatever reservations I may have about his plot structure, in his latest book, Origin, the final conclusion reveals what is most likely our fate: we have birthed a new form of intelligence that we need to merge with, lest our offspring exponentially transcend our cognitive abilities and leave us without even a glimpse of its internal machinations. We need to align A.I. with our own human-centric needs and desires, and we will only accomplish this if our educational system conjoins with and utilizes the best of what it can provide.

How, then, do we incorporate ChatGPT right now in our courses, rather than shooing it away as if it were an unwelcome intruder who accidentally stumbled into our classroom while we were discussing the contributions of Richard Feynman and Julian Schwinger to quantum electrodynamics?

The following are some suggestions that I think can tackle this issue and broaden our pedagogical purviews:

English 101 and Writing Essays with A.I.

When I was teaching in the Warren College Writing Program at the University of California, San Diego, back in the mid-1980s, my mentor, Dan Matson, gave me some sage advice about helping students become better writers. First, he said, don’t have them write anything. Indeed, keep them from writing until they read, read, and read some more.

Then, after they have had their fill of reading, and only then, should they put pen to paper or fingers to keyboard. Why? Because the brain will have been mindfully imprinted, replete with sequences of words, ideas, plots, dialogues, and narratives.

This is where ChatGPT comes in. Let students ask it to write on any subject they desire, learning along the way which questions produce the better answers, the richer detail, and the more interesting text. What happens in the process? They read, and the more questions they pose, the more self-guided essays they read. The student is the programmer, by dint of what he or she instructs the A.I. to respond with. The best way to learn a video game? Play it, over and over. The best way to utilize ChatGPT? Play with it, over and over.

What happens is that the programmer (the student) is forced, by his or her various queries, to learn from what the A.I. has been prompted to bring forth.

In doing so, the student learns how A.I. structures its answers and why certain patterns invariably emerge.

The student learns the behavior of A.I., as foreign as it may seem at first. This is not so different from learning about the characters of the nineteenth century in one of Leo Tolstoy’s great novels, such as Anna Karenina or War and Peace.

There is also the option of providing students with an essay prompt, having them use ChatGPT first, and then having them augment or flesh out what the A.I. has produced. So instead of arbitrarily divorcing oneself from A.I., one partners with it, in a manner similar to, but probably more effective and time-efficient than, group work, which is often the bane of most students.
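For instructors who would rather script this partnership than work through the chat window, here is a minimal sketch, assuming the OpenAI Python client, an API key in the environment, and an illustrative model name (the essay prompt is, of course, hypothetical):

```python
from openai import OpenAI  # assumes the OpenAI Python client is installed

client = OpenAI()  # reads the API key from the environment

essay_prompt = ("Compare Socrates' critique of writing in the Phaedrus "
                "with modern worries about A.I. in the classroom.")

# Ask for a deliberately rough first draft that students must then
# fact-check, augment, and rewrite in their own voice.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice; any chat model works
    messages=[
        {"role": "system",
         "content": "Write a short, unpolished five-paragraph first draft."},
        {"role": "user", "content": essay_prompt},
    ],
)

print(response.choices[0].message.content)
```

The point of asking for an unpolished draft is pedagogical: the weaker the starting text, the more substantive the student’s revision has to be.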


Ironically, because A.I. can mimic human writing so well it becomes a practical tool for refining one’s critical thinking skills. ChatGPT forces us to take the Turing Test seriously and become seasoned in appraising what is purely computational versus what is human generated. Yes, it will be difficult and yes there will most likely come a time when we will not be able to differentiate A.I. essays from purely human constructions. But that is exactly why we should train ourselves now about how these machine learning algorithms work.

A.I. hasn’t yet reached the Kurzweilian Singularity, so now is our chance to make sure that our digital progeny retains a human face, in lockstep with bettering our species and promoting human flourishing. As Marc Watkins at the University of Mississippi warns us, “What message would we send our students by using AI-powered detectors to curb their suspected use of an AI writing assistant, when future employers will likely want them to have a range of AI-related skills and competencies? What we should instead focus on is teaching our students data literacy so that they can use this technology to engage human creativity and thought.”

Ben Talsma, writing for Chalkbeat, provides a positive outlook for teachers to incorporate ChatGPT into their courses: “As a learning specialist, my job involves working with teachers and technology, so I’ve already seen many put ChatGPT to use, from kindergarten through high school. AI can help teachers model the concepts they want students to understand. This works for all sorts of things, from comparing and contrasting different characters to telling the difference between complete and incomplete sentences. Teachers can, in a matter of minutes, create dozens of examples for students to rate, rank, sort, or comment on. This is a wonderful, inquiry-oriented way to explore ideas. As humans, we learn so much from observing and interacting with examples, and now teachers have an almost infinite supply of them readily available. We must also prepare students for the world they’ll inherit.

In the next few years, it will become increasingly important for humans to edit AI-generated work. Right now, I’m seeing teachers provide students with samples of AI-generated work, then working with them to improve it. This is an engaging way to open up deep conversations about writing.”

Several years ago, I wrote a novel entitled The Cult of the Bhrigu: A Quest to Unravel the Mystery of the World’s Oldest Astrological Book. Almost ninety percent of the book was based on real-life events, but ten percent was pure fiction. In my Critical Thinking and Writing classes I had my students read the book, but the crux of the assignment was to see if they could parse out what was true and what was false. Naturally, this necessitated a much deeper reading and assessment of the text, since they had to work as detectives sniffing out possible clues, dead-ends, and, hopefully, discernible facts in order to make their case.

Talsma argues that ChatGPT can actually help improve our critical thinking skills, similar to what I had been doing in semesters past: “ChatGPT often makes factual errors; having students fact-check ChatGPT’s writing is a wonderful way to improve information literacy. A fifth-grade teacher I’m working with recently provided small groups of students with AI-generated content and reported that they loved working to try to prove the articles wrong.”

And it is not only students who can benefit from A.I. Teachers now have extremely powerful tools to generate better tests, better lesson plans, and better feedback about how to help students become motivated to learn new material.

A.I. is not merely altering the way we write, but how we create art, compose music, produce movies, and much more. Already DALL-E 2 has paved the way for anyone to generate the most stunning photographs and paintings simply by following a few suggestive prompts. One can employ an array of influential art styles, from Picasso to Dalí, from Baroque to Futurism, from digital to even cave-inspired renderings, and each of them unique.

I have beta-tested DALL-E 2 and ChatGPT to publish six books, each authored by A.I. What I learned by utilizing these tools was that they liberated my own creativity and provided much greater latitude in what I wanted to express. I wasn’t the author of these works but only their facilitator, and as such I felt a unique sense of pride in being able to organize and cohere A.I.-generated content into a readable tome that may be helpful to students and the reading community at large. We have even given a name to our algorithmically guided author: Tai Synth.

I can easily imagine how one might intertwine A.I.- and human-orchestrated writing to produce a remarkable coalescence that establishes a new genre, what might appropriately be called “Synthetic Realism.”

My son, Kelly, was particularly impressed with how ChatGPT can almost instantly generate computer code and enable gameplay for free. In so doing, Kelly began to develop a better understanding and appreciation of which algorithms work and which ones are lacking, since A.I. allows endless trial and error.
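To give a flavor of what Kelly was doing, here is the kind of small game ChatGPT will produce on request; this particular sample is hypothetical but representative, and short enough for a student to run, break, and modify:

```python
import random

def play() -> None:
    """A number-guessing game of the sort ChatGPT generates on request."""
    secret = random.randint(1, 100)
    tries = 0
    print("I'm thinking of a number between 1 and 100.")
    while True:
        tries += 1
        guess = int(input("Your guess: "))
        if guess < secret:
            print("Too low.")
        elif guess > secret:
            print("Too high.")
        else:
            print(f"Correct! You got it in {tries} tries.")
            break

if __name__ == "__main__":
    play()
```

Changing the range, the hints, or the scoring is precisely the kind of endless trial and error described above.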

Kelly also beta-tested putting entire novels into ChatGPT to produce systematic summaries of Mark Twain’s Huckleberry Finn and H.G. Wells’ The Time Machine. Since he had already read both books, he was amazed at how coherent and clear the summaries of both tomes were. This is a game-changer for those of us who may not have time to read certain books but wish to know the salient features or key points of a particular novel or science text, particularly if it is relatively obscure or unknown.
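A whole novel will not fit into a single request, so the usual approach is to summarize it in pieces and then summarize the summaries. A minimal sketch, again assuming the OpenAI Python client and an illustrative model name:

```python
from openai import OpenAI  # assumes an API key in the environment

client = OpenAI()
MODEL = "gpt-4o-mini"  # illustrative choice

def summarize(text: str) -> str:
    # One round trip to the model, asking for a short summary of the text.
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user",
                   "content": "Summarize this passage in a few sentences:\n\n" + text}],
    )
    return response.choices[0].message.content

def summarize_novel(novel: str, chunk_chars: int = 8000) -> str:
    # Split the book into pieces small enough for the model's context window,
    # summarize each piece, then summarize the combined partial summaries.
    chunks = [novel[i:i + chunk_chars] for i in range(0, len(novel), chunk_chars)]
    partials = [summarize(chunk) for chunk in chunks]
    return summarize("\n\n".join(partials))
```

Splitting on fixed character counts is crude; splitting on chapter boundaries gives the model more coherent units to work with.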

What must be kept in mind is that A.I. is only going to get better, much better, very quickly. And if we don’t assimilate what it has to offer and become more conversant with what it produces under certain directives, we will miss out on a pivotal opportunity.

I am often asked how to properly define intelligence. My reply is a straightforward and practical one: it is our ability to solve problems. This can range from a surfer trying to figure out how best to ride a wave, to a musician mastering an instrument to produce a song, to a mathematician developing a proof, to a mechanic fixing an air conditioner, to a writer wanting to craft a compelling detective story, to a pilot knowing how to fly a jumbo jet, to a business manager figuring out how best to run his organization; the list is nearly endless. All of us, to some degree, are knowledgeable about certain subjects, but some people are better equipped than we are in areas where our understanding is lacking. We tend to call these people experts. It is to these authorities that we turn to learn more and augment our limited frames of reference.

The underlying purpose of education is to provide students with a variety of tools so that they can better navigate and facilitate the world they encounter. Thus, it is short-sighted to cordon off A.I. peremptorily out of fear that it will somehow replace a student’s ability or desire to learn. To the contrary, A.I. is the exact tool we need to negotiate an increasingly complex, informational universe.

We are not prematurely ridding ourselves of the Global Positioning System (GPS) because it interferes with knowing how to drive without maps. No, in a technology-laden landscape it is not an either/or proposition but a “yes, and” acceptance of what is inevitable.

The wisest course for schools (at all levels) is to wake up to Artificial Intelligence and see it for what it is: a revolutionary and magnifying tool for increasing our knowledge of how things work. Yes, there will be hiccups; yes, there will be mistakes; and yes, we should be vigilant in how we utilize this most remarkable of breakthroughs. But censoring it in the classroom as if it somehow doesn’t or shouldn’t exist is pure folly.

I remember when the Internet became all the rage in the mid-1990s and how most didn’t truly grasp how such an interconnected pipeline was going to transform civilization. Even here at Mt. San Antonio College, for a year or two students were not allowed to check email, engage in IRC, or even search the web for information not related to their coursework. I bucked the system and argued to the then-president of the college that curbing students in this way was wrong and antithetical to the pursuit of higher learning. Some even dismissed the Net as a passing fad like CB radio. A good illustration of this can be gleaned from watching David Letterman interview Bill Gates about the Internet back in 1995. Gates clearly saw what the future would bring, while Letterman, not conversant with what such a communicative system could ultimately accomplish, was dumbfounded and cynical. It is a time-capsule moment, and instructive about how future generations will look back at those who balked at all things A.I.-related.

[Video: Bill Gates Explains the Internet to Dave (1995) | Letterman]

With quantum computers now becoming much more powerful and accessible, the future will be unlike anything we can possibly imagine. This may seem like hype, but just consider the following differences between classical computation and quantum engineered computation.

Quantum computing is not based on the binary system of 0s and 1s, known as bits, but rather utilizes superposition, which allows for qubits. This permits an exponential scaling of problem-solving power. As Sayali Sonawane explains,

“Quantum computers use quanta as a physical system, which allows information to be carried in both the 0 and 1 states simultaneously, whereas in classical computing information is carried in either a 0 or a 1 bit. Quantum computing increases the speed of the system exponentially, as it can hold more information compared to classical computers. If there are n qubits in a supercomputer, it can represent 2^n different states. Therefore, quantum computing can hold more information, thereby increasing the speed of the system.”
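To see where the 2^n claim comes from, here is a minimal sketch in Python (assuming NumPy) that puts n qubits into superposition and counts the amplitudes a classical simulation must track; the vector doubles in length with every qubit added:

```python
import numpy as np

n = 3
zero = np.array([1, 0], dtype=complex)               # the |0> state of one qubit
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # puts a qubit into superposition

# Build the joint state of n qubits, each in an equal superposition of 0 and 1.
state = np.array([1], dtype=complex)
for _ in range(n):
    state = np.kron(state, hadamard @ zero)

print(len(state))                       # 2**n = 8 amplitudes tracked at once
print(np.round(np.abs(state) ** 2, 3))  # equal probability across all 8 basis states
```

Three qubits already require eight complex amplitudes; fifty would require over a quadrillion, which is why classical simulation of quantum hardware breaks down so quickly.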

Couple A.I. with quantum computing and it will transform science as we know it, and in turn how we gather knowledge. Education is not about preparing us for what has happened, but what will happen. Our students should be trained for a future they will live in, not for a past that no longer exists.

“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower.” —Alan Kay
“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.” —Ray Kurzweil
“Nobody phrases it this way, but I think that artificial intelligence is almost a humanities discipline. It’s really an attempt to understand human intelligence and human cognition.” —Sebastian Thrun

Tai Synth & A.I. Generated Books

Shortly after ChatGPT was released, and in conjunction with DALL-E 2, both highly sophisticated A.I. programs, here at the MSAC Philosophy Group we decided to test how well such systems could produce readable books that would help in explaining relatively complex ideas. The first book in this oeuvre was entitled Philosophical Contrasts: The Synthetic Meeting of Differing Minds. We were interested in seeing what A.I. would generate when asked to compare and contrast thinkers of different mindsets, such as “Friedrich Nietzsche meets Albert Camus,” “Socrates Meets Buddha,” “Aristotle meets Saint Augustine,” “Blaise Pascal meets Carl Sagan,” and so on. We were pleasantly surprised with the results, since they provided us, as readers, with a new perspective, given the often-random couplings. The following are a few excerpts from the book:

Friedrich Nietzsche meets Albert Camus
Friedrich Nietzsche and Albert Camus were both influential philosophers who were known for their contributions to the field of existentialism. One of the key differences between the two philosophers is their respective myths of eternal recurrence and Sisyphus. Nietzsche's myth of eternal recurrence posits that the universe is infinite and has no beginning or end. This means that all events, including the lives of individual people, will repeat themselves an infinite number of times. In this view, time is cyclical and everything that has ever happened will happen again and again, in an endless cycle of repetition. This myth is closely related to Nietzsche's philosophy of eternal return, which holds that all events are determined by the will to power. In this view, individuals are constantly striving to assert their power and influence over others, and this struggle is the driving force behind the eternal recurrence of events. In contrast, Camus' myth of Sisyphus is a parable about the futility of human existence. In this myth, Sisyphus is a king who is punished by the gods for his misdeeds by being forced to roll a boulder up a hill, only to have it roll back down again, over and over. This myth is closely related to Camus' philosophy of the absurd, which holds that human existence is fundamentally meaningless and that individuals must find their own meaning in life. In this view, life is a never-ending struggle against the forces of the universe, and individuals must find ways to cope with this struggle in order to lead fulfilling lives.
Albert Einstein meets Niels Bohr
Albert Einstein and Niels Bohr were two of the most prominent scientists of the 20th century, and both made significant contributions to the field of quantum mechanics. However, they had different views on the interpretation of quantum mechanics and its implications. Einstein was a proponent of the idea of determinism, which is the belief that all events in the universe are determined by previous events and can be predicted with certainty.
In contrast, Bohr believed in the idea of complementarity, which holds that certain pairs of physical properties, such as position and momentum, cannot be measured simultaneously and have to be considered separately. In terms of their views on quantum mechanics, Einstein was dissatisfied with the uncertainty principle, which states that it is impossible to simultaneously measure both the position and momentum of a subatomic particle with precision. He famously said that "God does not play dice with the universe," expressing his belief that there must be a deeper, underlying determinism at play in quantum mechanics. Bohr, on the other hand, embraced the uncertainty principle and argued that it was a fundamental feature of quantum mechanics. He believed that the concept of complementarity was crucial for understanding the behavior of subatomic particles, and that it was necessary to abandon the classical notion of determinism in order to make sense of quantum mechanics.
In summary, Einstein and Bohr had different views on the nature of quantum mechanics and its implications for our understanding of the universe. Einstein believed in determinism and was skeptical of the uncertainty principle, while Bohr embraced complementarity and the uncertainty principle as fundamental features of quantum mechanics.

The second book we created via “Tai Synth” was combinatorial: we interlaced A.I. material with that of a real-life human author. The subject was increasing our moral circle and advocating a purely plant-based diet. To our amazement we discovered valuable information that complemented the overall structure of the book. We also used DALL-E 2 to create the cover.

Creature Compassion
1. Charles Darwin was not a vegetarian, but he did express concern about the ethics of eating meat in his book "The Descent of Man." In the book, he wrote: "The question is not, Can they reason? nor, Can they talk? but, Can they suffer?" This suggests that he believed that animals are capable of suffering, and that this should be considered when deciding whether or not to eat meat. However, he did not take this belief to its logical conclusion and become a vegetarian himself.
2. Pythagoras, the ancient Greek philosopher and mathematician, is known for advocating for a vegetarian diet. According to some accounts, he believed that all living beings have souls and should be treated with compassion and respect, leading him to conclude that it is wrong to kill and eat animals. He believed that a vegetarian diet would not only be kinder to animals, but would also be healthier for the human body and mind.
3. Anekantavada is a philosophical concept in Jainism that is based on the concept of relative pluralism. It is a key principle of Jainism, and it asserts that the ultimate truth of the universe cannot be grasped fully because it is complex and multidimensional. This means that no single viewpoint or perspective can capture the fullness of truth, and that multiple perspectives are necessary to understand the world and the ultimate reality.
4. Plant-based diets have been shown to be beneficial for preventing chronic diseases such as heart disease, obesity, and type 2 diabetes, which can result in lower healthcare costs. Additionally, the production of plant-based foods requires less water and land, making it a more efficient and sustainable option for feeding the growing global population.

The third book, Scientific Acumen: Scattered Jewels of Human Acumen, centered on intriguing, but often misunderstood, concepts and theories in science.

The Implication of Large Numbers
J.E. Littlewood was a British mathematician known for his work on large numbers and the law of miracles. Littlewood's work on large numbers was groundbreaking and helped to improve our understanding of the behavior of very large quantities. He developed a number of important theorems, including the law of the iterated logarithm and the Hardy-Littlewood circle method, which have been widely used in the study of prime numbers and other mathematical areas. One of Littlewood's most famous contributions was his law of miracles, which states that a typical person should expect to experience at least one "miracle" (an extremely unlikely event) in their lifetime.
This law is based on the idea that even highly unlikely events become more likely to occur as the number of opportunities for them to happen increases. For example, a person who flips a coin repeatedly is likely to eventually get a streak of heads or tails, even though the probability of this happening on any given flip is very low. Littlewood's law of miracles has been widely cited and discussed in both the scientific and popular literature. Some have criticized the law as being overly optimistic or based on overly simplified assumptions, while others have pointed out that it can be used to help explain the occurrence of seemingly miraculous events in everyday life.
Overall, J.E. Littlewood was a pioneering mathematician who made significant contributions to our understanding of large numbers and the behavior of unlikely events. His work continues to be an important foundation for further research in these areas.
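Littlewood's arithmetic is easy to check. A short Python sketch, using his customary assumptions of one perceived event per second and roughly eight alert hours per day:

```python
# A "miracle" is defined as a one-in-a-million event; an alert person
# registers roughly one event per second for about eight hours a day.
events_per_day = 1 * 8 * 3600          # 28,800 events per day

days_per_miracle = 1_000_000 / events_per_day
print(round(days_per_miracle, 1))      # ~34.7 days: about one "miracle" a month

# More generally, the chance of at least one probability-p event in N trials:
p, N = 1e-6, 1_000_000
print(round(1 - (1 - p) ** N, 3))      # ~0.632, i.e. better than even odds
```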

The fourth book, and perhaps our favorite, was a 20th-century history of one of the greatest scientific theories in the history of humankind: All Things Quantum: A Simple Guide to a Very Strange Universe. In this text, we employed DALL-E 2 to produce cartoon and cubist renditions of the key physicists behind the development of quantum mechanics.

Niels Bohr was a Danish physicist who made significant contributions to the development of quantum theory, a fundamental theory in physics that describes the behavior of matter and energy at the atomic and subatomic level. Bohr's work helped to bridge the gap between classical physics, which describes the behavior of macroscopic objects, and quantum mechanics, which describes the behavior of microscopic particles. One of Bohr's most significant contributions to quantum theory was his development of the Bohr model of the atom. This model, which was published in 1913, was an early attempt to explain the behavior of atoms using quantum theory. In the Bohr model, an atom is composed of a central nucleus surrounded by a cloud of electrons. The electrons are arranged in shells around the nucleus, and each shell can hold a specific number of electrons.
The Bohr model was able to explain the emission spectra of atoms, which are the lines that appear in the spectrum of light when an atom absorbs or emits energy. It also provided a framework for understanding the behavior of atoms in chemical reactions. However, the Bohr model had several limitations, and it was later revised and expanded upon by other physicists. One of these physicists was Wolfgang Pauli, who developed the Pauli exclusion principle, which states that no two electrons in an atom can have the same set of quantum numbers. This principle helped to explain why atoms have a specific number of electrons in each shell, and it played a crucial role in the development of quantum mechanics.
Bohr also made important contributions to the development of the theory of quantum electrodynamics (QED), which describes the behavior of particles and fields in the presence of electromagnetic forces. Bohr's work on QED helped to establish the concept of renormalization, which allows physicists to make precise calculations of the behavior of particles even when the calculations involve infinities.
Louis de Broglie was a French physicist who made significant contributions to the development of quantum theory, a revolutionary approach to understanding the behavior of particles at the atomic and subatomic level. De Broglie's most significant contribution to quantum theory was his proposal of the concept of wave-particle duality. This idea states that particles, such as electrons, can exhibit both wave-like and particle-like properties depending on how they are observed or measured. De Broglie suggested that all particles, not just photons (particles of light), can exhibit wave-like behavior, and that the wave-like nature of a particle is determined by its momentum. De Broglie's theory was based on the work of Albert Einstein, who had shown that light exhibits both wave-like and particle-like properties. De Broglie took this idea one step further and proposed that all particles possess a corresponding wave, known as a de Broglie wave. The wavelength of a de Broglie wave is given by the equation:

λ = h / p

where h is Planck's constant and p is the momentum of the particle. De Broglie's theory was able to explain several previously mysterious phenomena in the field of physics, such as the behavior of electrons in atoms and the stability of atomic orbits. It also led to the development of the Schrödinger equation, a fundamental equation in quantum mechanics that describes the behavior of particles on a microscopic level. In addition to his work on wave-particle duality, de Broglie also made important contributions to the field of nuclear physics. He was the first to suggest that the nucleus of an atom could be described as a collection of protons and neutrons, and he developed a theory of the nucleus that explained the stability of certain isotopes. De Broglie was awarded the Nobel Prize in Physics in 1929 for his contributions to the field of quantum theory. His work laid the foundations for much of the modern understanding of the behavior of subatomic particles and has had a profound impact on the fields of physics and chemistry.
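As a quick numerical check of the relation quoted above, here is a short Python snippet (constants rounded) computing the de Broglie wavelength of an electron moving at one percent of the speed of light:

```python
h = 6.626e-34      # Planck's constant, in joule-seconds
m_e = 9.109e-31    # electron mass, in kilograms
v = 0.01 * 3.0e8   # 1% of the speed of light, in meters per second

wavelength = h / (m_e * v)    # de Broglie relation: lambda = h / p, with p = m * v
print(f"{wavelength:.2e} m")  # ~2.4e-10 m, on the order of atomic spacings
```

A wavelength comparable to the size of an atom is exactly why electron waves matter for atomic structure, as the excerpt notes.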

The fifth book in our “Tai Synth” series is Early Wisdom: Glimpses from Ancient Greek Philosophers. This was a survey of ten of the more innovative thinkers in our ancient past.

Diogenes of Sinope
Diogenes of Sinope was a Greek philosopher who was born in 412 BCE and died in 323 BCE. He was a member of the ancient philosophical school known as the Cynics, whose philosophy was based on the belief that the only way to live a truly happy and fulfilled life was to reject all worldly possessions and desires, and instead to live a simple and virtuous life in accordance with nature. Diogenes was born in Sinope, a city on the coast of modern-day Turkey, and was the son of a wealthy merchant. However, he rejected his privileged upbringing and chose instead to live a life of poverty and simplicity. He is said to have given away all of his possessions, including his clothes, and lived on the streets of Athens in a large ceramic jar, surviving on the charity of others. Despite his apparent poverty, Diogenes was a fierce and independent thinker who was unafraid to challenge the conventional wisdom of his time. He was known for his sharp wit and his refusal to conform to societal norms, and he often engaged in provocative behavior to make a point. For example, he is said to have walked through the streets of Athens with a lamp in broad daylight, claiming that he was searching for an "honest man," a critique of the corruption and moral decay that he saw in society.
Zeno of Citium
Zeno of Citium was a Greek philosopher who was born in Citium, Cyprus, around the year 335 BC. He is considered one of the founders of Stoicism, a school of philosophy that emphasized the importance of living in accordance with nature and reason. Zeno was the son of a wealthy merchant, but he lost his wealth when he was shipwrecked off the coast of Athens. He was stranded in the city and began to study philosophy, eventually becoming a student of the philosopher Crates of Thebes. Zeno became known for his rigorous and ascetic lifestyle, and he was known to walk barefoot and sleep on the ground. He believed that physical discomfort and hardship were essential for developing self-control and inner strength.

The subject of the sixth book was evolution and those who pioneered the study of how natural selection has led to the origin of different species. It was appropriately titled DNA Select: The Homo Sapiens Guide to Evolution.

Renowned Female Evolutionary Scientists
Mary Anning (1799-1847): Anning was an English fossil collector and paleontologist who made significant contributions to the understanding of the history of life on Earth. She is known for her discovery of numerous important fossil specimens, including the first known ichthyosaur and plesiosaur skeletons.
Rosalind Franklin (1920-1958): Franklin was a British scientist who made important contributions to the understanding of the structure of DNA. She produced the key X-ray diffraction images of DNA that helped James Watson and Francis Crick to propose the correct structure of the molecule, which is now known as the double helix.
Lynn Margulis (1938-2011): Margulis was an American evolutionary biologist and science writer who made significant contributions to the field of evolutionary theory through her work on endosymbiosis and the evolution of complex cells and organisms.
Sarah Blaffer Hrdy (1946-present): Hrdy is an American anthropologist and evolutionary theorist who has made significant contributions to the understanding of the evolution of social behavior in primates. She is known for her research on the role of cooperative breeding in primate societies and the evolution of maternal investment in offspring.

The seventh book, Psychology’s Unconscious, provides thumbnail sketches of some of the great psychologists in history and their contributions to understanding how the unconscious mind plays a greater role in our lives than we suspect.

What Does A.I. Think of Education in the Future?

How should schools incorporate A.I. into the classroom?

A.I. Response: There are many ways that schools can incorporate artificial intelligence (A.I.) into the classroom. Here are a few ideas:

Use A.I. tools for grading and feedback: Some A.I. tools can grade written assignments, providing students with personalized feedback. This can save teachers time and allow them to focus on more high-level tasks, such as helping students with their understanding of the material. (A minimal sketch of this idea appears after this list.)

Use A.I. to personalize learning: A.I. can be used to create personalized learning plans for each student, tailoring the curriculum and pace to their individual needs and abilities. This can help ensure that each student is learning at their own optimal level.

Incorporate A.I. into lesson plans: Teachers can use A.I. tools and resources as part of their lesson plans, helping students learn about the capabilities and limitations of A.I. and how it can be used in various fields.

Use A.I. for research projects: Students can use A.I. tools to analyze data and perform research, helping them learn about A.I. and its applications in a real-world context.

Use AI to support language learning: AI can be used to provide language translation and support for students who are learning a new language.

Use AI to conduct research: AI can be used to help students conduct research and find information more efficiently.
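To make the first of these suggestions concrete, here is a minimal sketch of rubric-based feedback, assuming the OpenAI Python client; the rubric wording and model name are illustrative only, and any output should be reviewed by the teacher before it reaches a student:

```python
from openai import OpenAI  # assumes an API key in the environment

client = OpenAI()

RUBRIC = ("Score the essay from 1 to 5 on thesis clarity, use of evidence, "
          "organization, and grammar, then offer two concrete suggestions.")

def feedback(essay: str) -> str:
    # One call per essay; the system message carries the grading rubric.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": essay},
        ],
    )
    return response.choices[0].message.content
```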

What are the dangers of using A.I. in education?

A.I. Response: Dependence on technology: If students become too reliant on AI tools, they may struggle to think critically and solve problems on their own.

Bias in AI systems: AI systems can sometimes reflect the biases of the data they are trained on or the programmers who create them. This can lead to unfair or inaccurate results if not properly addressed.

Privacy concerns: Using AI in education may involve collecting and storing large amounts of sensitive student data. It's important to ensure that this data is handled responsibly and in compliance with relevant privacy laws and regulations.

Displacement of teachers: There is a concern that AI could eventually replace teachers, leading to job losses and a decrease in the overall quality of education.

Cost: Implementing AI systems in the classroom can be costly, and there may be concerns about equity if only certain schools or students have access to these resources.

Lack of transparency: AI systems can be difficult for non-experts to understand, which makes it hard for teachers and students to know how decisions are being made and whether the system is working as intended.

Loss of social skills: If students rely too heavily on AI systems for communication and interaction, they may not have as many opportunities to develop important social skills such as empathy, collaboration, and communication.

Unforeseen consequences: As with any new technology, it is difficult to predict all of the potential consequences of using AI in the classroom. There may be unintended consequences that emerge as AI systems are used more widely in education.

It's important to carefully consider these risks and to use AI in a way that is responsible and ethical.

A Brief Timeline of Key Markers in Artificial Intelligence

1943: Warren McCulloch and Walter Pitts publish "A Logical Calculus of the Ideas Immanent in Nervous Activity," which is one of the first published works on artificial neural networks.

1956: The term "artificial intelligence" is coined at a conference at Dartmouth College.

1974: The first AI winter begins, as funding for AI research dries up and progress in the field slows.

1980: Belle, a chess machine built by Ken Thompson and Joe Condon at Bell Labs, wins the World Computer Chess Championship.

1987: The second AI winter begins as funding for AI research is cut again.

1987: The first commercially available intelligent personal assistant, called "The Einstein," is released.

1997: Deep Blue, a chess-playing computer developed by IBM, defeats world champion Garry Kasparov in a chess match.

2005: The DARPA Grand Challenge, a competition for autonomous vehicles, is won by Stanford University's "Stanley" car.

2011: Apple's Siri, a virtual assistant that uses voice recognition, is released.

2016: DeepMind's AlphaGo defeats Lee Sedol, one of the world's top Go players, in a five-game match.

2017: DeepMind's AlphaGo Zero defeats AlphaGo, the original version of the program, 100 games to 0.

2020: OpenAI's GPT-3, a language model with 175 billion parameters, is released.

2021: OpenAI's DALL-E, a neural network that can generate images from textual descriptions, is released.

Speculations on the Evolution of A.I. in the Future via Ray Kurzweil

Ray Kurzweil is a well-known futurist who has written extensively about the concept of the singularity, which he defines as a point in the future when artificial intelligence will surpass human intelligence and trigger exponential technological progress. Here is a summary of Kurzweil's theory on the singularity, based on his writings:

2005: Kurzweil publishes "The Singularity Is Near," in which he outlines his predictions for the future of artificial intelligence and forecasts that the singularity will occur around 2045.

2012: Kurzweil becomes a director of engineering at Google, where he works on projects related to artificial intelligence and machine learning.

2021: Kurzweil predicts that machine intelligence will surpass human intelligence in certain narrow domains, such as playing chess or translating languages.

2025: Kurzweil predicts that machine intelligence will surpass human intelligence in most domains and that humans and machines will begin to merge through the use of brain-machine interfaces.

2045: Kurzweil predicts that the singularity will be fully realized and that humans and machines will have merged into a single entity.

It's important to note that these dates are speculative. The future development of artificial intelligence and the singularity is difficult to predict and will depend on a wide range of factors.

A.I., Dan Brown, Origin, And the Future

"Origin" is a novel by Dan Brown that was published in 2017. It is the fifth book in the Robert Langdon series and follows the story of Robert Langdon, a Harvard professor of symbology, as he is drawn into a mystery surrounding the future of humanity. In the novel, Langdon is invited to attend the unveiling of a revolutionary new technology in Spain by his friend, Edmond Kirsch, a billionaire computer scientist and futurist. Kirsch's invention, which he claims will change the course of humanity, is revealed to be a revolutionary new form of artificial intelligence that he has named "Consciousness," which he claims will be able to answer the three fundamental questions of life: "Where did we come from?", "What is our purpose?", and "What happens when we die?"

However, Kirsch is assassinated shortly before the unveiling, and Langdon finds himself on the run with Kirsch's former employee, Ambra Vidal, who is the director of the museum hosting the event. Together, they set out to uncover the truth behind Kirsch's invention and the conspiracy that led to his death.

As they investigate, they discover that Kirsch's work has made him a target of powerful forces that will stop at nothing to keep his discovery from being revealed. Along the way, they must navigate a web of secrets, deception, and danger as they try to uncover the truth behind Kirsch's invention and its implications for the future of humanity.

As Langdon and Vidal continue their investigation, they travel to various locations in Spain and beyond, including Barcelona, Bilbao, and Marbella, encountering a variety of characters who are connected to Kirsch's work in different ways. They also encounter a mysterious organization known as the "Regents," who are determined to keep Kirsch's discovery a secret at any cost.

As they get closer to the truth, Langdon and Vidal begin to realize that Kirsch's invention has the potential to fundamentally change the way humans understand the world and their place in it. However, they also realize that there are powerful forces that are determined to keep Kirsch's discovery hidden, as it threatens to upend the status quo and challenge long-held beliefs.

In the end, Langdon and Vidal are able to uncover the truth behind Kirsch's invention and the conspiracy that led to his death, but not before facing numerous dangers and challenges along the way. The novel ends with Langdon and Vidal returning to the United States, where they begin to ponder the implications of Kirsch's discovery and its potential impact on the future of humanity.

Kirsch's belief in the eventual merger of humans and machines is based on his conviction that technological progress will continue to accelerate and that humans will eventually be able to augment their own intelligence and capabilities through the use of technology. However, Kirsch's views are not necessarily shared by all of the characters in the novel, and the concept of human-machine merger is only one aspect of the broader themes explored in the book.

"I love all technology, but AI is something that fascinates me deeply, mainly because scientists can't agree on whether it's going to save us or kill us," —Dan Brown

Classroom of the Future?
“We are already a cyborg. People don't realize—we are already a cyborg because we are so well integrated with our phones and our computers. The phone is almost like an extension of yourself. If you forget your phone, it's like a missing limb. But the communication bandwidth to the phone is very low, especially input. That's why humans desperately need to catch up.” —Elon Musk
“I visualise a time when we will be to robots what dogs are to humans, and I’m rooting for the machines.” —Claude Shannon
We are a transitional species, having birthed a digital intelligence that will far exceed us in almost every way. As parents we should be proud that our A.I. offspring will live on in ways that are unimaginable. The future is brighter because we have finally succeeded in overcoming our fleshly limitations. —A Transhumanist Manifesto: Notes for Future Cyborgs




