Growth Ritual #14
📋 IN THIS ISSUE: The Magnitude of Change in a Quarter Century ✨ Adaptation is Hardwired into Us ✨ Why AI Struggles with Basic Math
🎙️ AUDIO DEEP DIVE OF THIS ISSUE:
Sammy & Mila offer in-depth analysis on each newsletter issue. Subscribe to their podcast on Spotify or any other podcast platform.
📊 TRENDS, RESEARCH & REPORTS:
The Magnitude of Change in a Quarter Century
Compared to the vast expanse of cosmic time, a human lifespan is fleeting.
From the birth of stars and planets to the rise and fall of civilizations, the history of our universe dwarfs our individual existence. Yet, here we are, reading this in the present moment. Our lives, measured in mere decades, are a tiny flicker in the grand cosmic scheme.
Given the World Health Organization’s average life expectancy of 72.8 years, our time on Earth is remarkably brief. When viewed through this cosmic lens, the complexities of daily life — our challenges, joys, and sorrows — seem comparatively insignificant. Carl Sagan's iconic “Pale Blue Dot” image serves as a powerful reminder of humanity's place in the universe.
Amidst the constant ebb and flow of life, transformative events emerge, capable of reshaping our world. The advent of ChatGPT is one such milestone. These watershed moments often provoke fear and uncertainty as we grapple with the shifting foundations of our reality.
To better understand the profound impact of such changes, I aim to reflect on the past 25 years. By identifying key milestones that have fundamentally altered our world, we can gain a fresh perspective on the current era of rapid transformation.
I’ve compiled a list of ten pivotal milestones in science and technology that have profoundly reshaped human existence:
1998: Human Genome Race Accelerates: Celera Genomics entered the race with the publicly funded Human Genome Project to decipher the human genetic code, laying the groundwork for personalized medicine and a deeper understanding of genetic diseases.
2001: Wikipedia Launched: By democratizing knowledge creation and access, Wikipedia revolutionized information sharing and challenged traditional authority in the realm of expertise.
2004: Facebook Established: Transforming the internet from a platform for information into a social hub, Facebook redefined communication and connection.
2007: iPhone Introduced: Reimagining the mobile phone as a multifunctional device, the iPhone placed computing power and connectivity in the palm of our hands.
2008: Bitcoin Emerges: Challenging traditional financial systems, Bitcoin pioneered decentralized digital currency and blockchain technology.
2009: CRISPR-Cas9 Gene Editing Discovered: This breakthrough in genetic engineering opened new avenues for disease treatment and offered unprecedented precision in manipulating genetic material.
2010: Significance of Göbeklitepe Recognized: Findings from the excavations at Göbeklitepe radically altered our perception of prehistoric societies, challenging long-held assumptions about the transition to settled life.
2012: Higgs Boson Discovered: Confirming a fundamental prediction of the Standard Model of particle physics, the discovery of the Higgs boson deepened our understanding of the universe’s building blocks.
2016: AlphaGo Defeats Go Champion: Demonstrating artificial intelligence’s capacity for complex strategic thinking, AlphaGo’s victory marked a significant leap in AI development.
2020: Shift to Remote Work and Learning: The COVID-19 pandemic accelerated the adoption of remote work and online education, fundamentally changing how we work and learn.
These milestones collectively underscore humanity’s remarkable capacity for adaptation.
📰 LATEST NEWS DECODED:
Adaptation is Hardwired into Us
In a groundbreaking experiment conducted by Cortical Labs in 2022, scientists achieved the remarkable feat of teaching brain cells to play Pong.
By connecting 800,000 brain cells to a computer, researchers enabled them to perceive the ball's position and control a virtual paddle.
Initially, the cells were unable to interpret the computer's signals. However, through a reward-based system, where correct actions were met with regular electrical stimulation and incorrect ones with irregular, chaotic stimulation, the cells gradually learned to track the ball and effectively maneuver the paddle.
This astonishing result underscores the profound adaptability inherent in our biology.
Interestingly, this reward-punishment mechanism mirrors the fundamental learning process employed in artificial intelligence models like ChatGPT.
The accompanying video vividly illustrates this concept by showing an AI model learning to walk through countless trial-and-error attempts. It's a captivating demonstration of the underlying logic powering these models.
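The reward-and-punishment loop described above is, in essence, reinforcement learning. Here is a minimal sketch of the idea (an illustration of the principle, not the Cortical Labs protocol or a production RL library): an agent learns, from reward signals alone, to move a paddle toward a ball.

```python
import random

# Toy reinforcement learning: an agent learns, by reward alone, to move
# a paddle toward a ball on a 1-D line of 5 positions. This illustrates
# the reward/punishment principle described above — it is NOT the actual
# Cortical Labs setup.

POSITIONS = 5
ACTIONS = [-1, 0, +1]          # move left, stay, move right

Q = {}                          # expected reward per (paddle, ball, action)

def q(state, action):
    return Q.get((state, action), 0.0)

random.seed(0)
alpha = 0.5                     # learning rate

for episode in range(2000):
    paddle = random.randrange(POSITIONS)
    ball = random.randrange(POSITIONS)
    state = (paddle, ball)

    # Epsilon-greedy: mostly exploit what we know, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q(state, a))

    new_paddle = min(max(paddle + action, 0), POSITIONS - 1)

    # "Regular stimulation" (reward) for closing the gap or staying on
    # the ball; "chaotic stimulation" (punishment) otherwise.
    closed_gap = abs(new_paddle - ball) < abs(paddle - ball)
    reward = 1.0 if closed_gap or new_paddle == ball else -1.0

    Q[(state, action)] = q(state, action) + alpha * (reward - q(state, action))

def best_move(paddle, ball):
    """After training, the greedy policy chases the ball."""
    return max(ACTIONS, key=lambda a: q((paddle, ball), a))

print(best_move(0, 4))  # ball to the right -> 1 (move right)
print(best_move(4, 0))  # ball to the left -> -1 (move left)
```

The agent never receives instructions, only feedback, yet the table of expected rewards converges on ball-chasing behavior, just as the neurons converged on paddle control.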
The capacity of brain cells to master a game as complex as Pong marks a profound inflection point in our understanding of the relationship between organic and digital systems.
This achievement, while undeniably remarkable, is merely a glimpse into a future where such boundaries might dissolve entirely.
It's a future that Elon Musk, with his characteristically audacious vision, has anticipated. His extrapolation from the rapid evolution of video games to the potential for indistinguishable simulations is not merely speculative; it's a logical progression based on observed technological trajectories.
If our reality can be so meticulously replicated, the question of its authenticity becomes profoundly philosophical. Are we inhabitants of a meticulously crafted illusion, a simulation designed for a purpose yet unknown?
The implications of such a hypothesis are staggering. If our universe is indeed a construct, it raises profound questions about our creators, their motivations, and the nature of consciousness itself. And if we, as a species, are capable of creating such sophisticated simulations, where does the hierarchy of creation end? Are we merely a cog in an infinite regress of simulations within simulations?
The convergence of biology, technology, and philosophy is creating a new paradigm. Quantum physics, with its exploration of parallel universes and the strange behavior of matter at the subatomic level, offers tantalizing parallels to the concept of multiple simulations.
Perhaps the multiverse theory is not merely a theoretical construct, but a reflection of the underlying architecture of our reality, or simulation.
As we stand on the precipice of this new era, it's imperative that we approach these questions with a blend of scientific rigor and philosophical contemplation.
The exploration of simulated realities is not merely an intellectual exercise; it has the potential to redefine our understanding of existence, consciousness, and our place in the cosmos.
Whether we find answers or merely uncover deeper layers of complexity remains to be seen. But one thing is certain: the journey promises to be as intellectually stimulating as it is existentially profound.
🧙‍♂️ TIPS & TRICKS:
Why AI Struggles with Basic Math
Having entertained the conclusion that “life is just a simulation” because of the incredible point artificial intelligence has reached, I would now like to talk about why it often stumbles over basic mathematical operations.
You may have asked ChatGPT a math question that even a primary school student could solve, gotten a wrong answer, and muttered, “Idiot!”
In fact, the problem stems from our treating these models as if they were human. Because they use language so fluently, we assume artificial intelligence models have human capacity in every respect. That is a flawed perspective.
If we understand their working mechanisms, we can be more accurate in evaluating their performance.
These models, known as Large Language Models (LLMs), learn how words fit together by reading huge volumes of text: books, articles, websites, and so on, much like we do. They then write new text using the patterns they have learned.
For example, they might learn that the word “milk” often appears near the word “cat” because of phrases like “the cat drank the milk”.
But when it comes to mathematics, things change. Math is based on precise rules over numbers and symbols, not on how words tend to be combined. 2 + 2 always equals 4; it doesn't change. For LLMs, however, nothing is that absolute.
They work mostly with probabilities. The word “milk” is likely to follow the word “cat”, but that is not an absolute rule; when the context changes, the word “dog” may follow “cat” instead.
LLMs represent words as points in a mathematical space, measure how close those points are across a very large body of text, and complete sentences based on similarity and sequence probability scores.
In other words, the way LLMs appear to be “writing in the moment” is not a stylistic flourish. As each word is written, the next word is calculated from probability scores.
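The word-by-word process described above can be sketched with a toy bigram model: count which word follows which in a tiny corpus, then generate text by always picking the most probable successor. Real LLMs use neural networks over subword tokens rather than raw word counts, but the next-token principle is the same.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then generate text one word at a time by picking the most likely successor.
# (Invented mini-corpus; real LLMs learn from billions of words.)
corpus = (
    "the cat drank the milk . "
    "the cat chased the dog . "
    "the dog drank the milk ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the most probable word to follow `word`."""
    return follows[word].most_common(1)[0][0]

def generate(start, length=5):
    words = [start]
    for _ in range(length):
        words.append(next_word(words[-1]))
    return " ".join(words)

print(next_word("drank"))   # 'the' — the only word ever seen after "drank"
print(generate("the"))      # 'the cat drank the cat drank'
```

Notice how the generator happily loops: it has no idea what a cat or milk *is*; it only knows which word tends to come next.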
This approach has proven highly effective for generating human-like text but is fundamentally unsuited to the precise, rule-based world of mathematics.
While language is inherently probabilistic, with multiple correct answers often possible depending on context, mathematics operates on definitive rules.
It's crucial to understand that LLMs do not possess an intrinsic understanding of numbers or mathematical operations.
They manipulate symbols based on statistical correlations learned from text data. This limitation becomes evident when dealing with mathematical concepts that require logical reasoning and symbolic manipulation, rather than pattern recognition.
That's why, when you pose a numerical question as text, the LLM consults its world of probabilities (roughly, “what usually comes after 2?”) and can return laughably wrong answers.
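This failure mode is easy to reproduce with the same counting approach. Suppose the training text happens to contain more wrong sums than right ones (a deliberately contrived corpus, purely for illustration); a purely statistical model will confidently repeat the most frequent continuation, with no arithmetic anywhere in the pipeline:

```python
from collections import Counter

# A purely statistical "model" answers "2 + 2 =" with whatever continuation
# it saw most often. Contrived corpus, purely for illustration: typos,
# jokes, and errors in the training text count the same as facts.
corpus = [
    "2 + 2 = 4",
    "2 + 2 = 4",
    "2 + 2 = 5",
    "2 + 2 = 5",
    "2 + 2 = 5",
]

continuations = Counter(
    line.split("= ")[1] for line in corpus if line.startswith("2 + 2 =")
)
answer = continuations.most_common(1)[0][0]
print(answer)   # '5' — the statistically likely answer, not the correct one
```

A calculator applies a rule; this model only tallies what it has seen.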
To overcome this challenge, researchers are developing AI models specifically designed for mathematical tasks. These models are trained on datasets that emphasize mathematical principles and rules.
While progress has been made, achieving human-level mathematical proficiency in AI remains a formidable challenge.
In summary, keep this in mind: an LLM doesn't know what you mean by “cat”. It only knows, within the scope of the data set it was trained on, which other words “cat” is associated with, and which tend to come before or after it.
So, if you were talking to an LLM trained solely on text produced by Arctic communities, your questions about “cat” would likely get strange answers associated with “ice” or “cold”.
The conclusion is that there is no single artificial intelligence; each model is different, shaped by the data sets it is trained on, just as each person lives in the world of meaning of their own culture.
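The same counting trick shows why two models trained on different text worlds “mean” different things by the same word. Below, two invented mini-corpora (purely illustrative) yield entirely different top associations for “cat”:

```python
from collections import Counter

STOPWORDS = {"the", "on"}   # ignore function words when counting neighbors

def associations(text, word, window=2):
    """Count non-stopword neighbors of `word` within +/- `window` tokens."""
    tokens = text.lower().split()
    counts = Counter()
    for i, t in enumerate(tokens):
        if t == word:
            neighbors = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
            for n in neighbors:
                if n not in STOPWORDS:
                    counts[n] += 1
    return counts

# Two hypothetical training worlds (invented mini-corpora):
household = "the cat drank milk the cat likes milk the cat spilled milk"
arctic = "the cat sat on ice the cat felt cold ice the cat walked on ice"

print(associations(household, "cat").most_common(1)[0][0])  # 'milk'
print(associations(arctic, "cat").most_common(1)[0][0])     # 'ice'
```

Same word, different corpus, different “meaning”: each model carries the worldview of its training data.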
💡 INSPIRING IDEAS:
“The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”
— Eliezer Yudkowsky


