Nothing can be certain
Executive Summary
- Success and failure, right and wrong, depend on current knowledge and may completely reverse with new information.
- All values, including “yes” and “no,” “one” and “zero,” inherently blur over time. It is impossible to assert “yes” without acknowledging that it will inevitably decay into “I don’t know” due to information obsolescence.
- External factors, in addition to time, can accelerate the blurring of values. If conditions allow, we can estimate the rate of this blurring, but in many cases, even the rate itself is subject to uncertainty.
- All real-world values are physical, not purely mathematical. The equation 1+1=2 holds exactly only by coincidence; in reality, it is always an approximation because of the inherent physical nature of all quantities.
There is no such thing as absolute truth or falsity. Instead, truth exists only within the framework of our current knowledge and should always be considered provisional. Assessments of correctness or incorrectness must adapt to new information. It is a fundamental mistake, both in robotics and in life, to assume that a previously verified truth remains valid without being reaffirmed by new evidence. Likewise, assuming it has simply become false is equally incorrect. The only rational approach is to recognize a gradual decrease in confidence over time.
The rate at which confidence declines depends on the nature of the event, its environment, and the interactions between the observed object and other objects or subjects.
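As a minimal sketch of this idea (in Python, with invented names and purely illustrative decay rates), confidence can be modeled as a quantity that decays over time at a rate chosen per situation, rather than as a flag that stays true forever:

```python
import math

# Hypothetical decay rates (per second); the numbers are purely illustrative.
# A parked pallet "stays where it was measured" far longer than a walking person,
# so confidence in its last known position decays far more slowly.
DECAY_RATE = {
    "static_object": 0.001,    # confidence halves in roughly 11 minutes
    "slow_vehicle": 0.05,      # confidence halves in roughly 14 seconds
    "walking_person": 0.2,     # confidence halves in roughly 3.5 seconds
}

def decayed_confidence(initial_confidence: float, age_s: float, environment: str) -> float:
    """Confidence in a stored observation, reduced by the time elapsed since it was made."""
    return initial_confidence * math.exp(-DECAY_RATE[environment] * age_s)

# A position fix that was 95% trustworthy when taken is nearly worthless 10 s later
# for a walking person, but almost unchanged for a static object.
print(decayed_confidence(0.95, 10.0, "walking_person"))  # ~0.13
print(decayed_confidence(0.95, 10.0, "static_object"))   # ~0.94
```

Exponential decay is only one convenient assumption; the essential point is that the rate is a property of the observed object and its environment, not a universal constant.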
Key Insights and Implications
- Nothing should be accepted unconditionally. Everything comes with implicit caveats, which are often ignored, leading to flawed reasoning. However, paradoxically, complex systems remain stable precisely because they are built with the assumption that nothing is absolutely reliable. This fundamental uncertainty is embedded in the resilience of systems such as nature, life, and intelligence itself.
- From the lowest levels of the system, both code and approach must not only take this into account but be fundamentally built upon the principle that nothing should be blindly trusted.
- All data and conclusions inevitably decay over time. This principle can be extended to a manifestation of entropy: as events accumulate, their relationships become harder to trace, and what was once structured information dissolves into apparent randomness.
- Contemporary programming languages, mathematics, and formal terminologies lack a precise way to express that 1+1 probabilistically equals 2 at a given moment, given current knowledge (one possible representation is sketched a little further below).
- In reality, the concept of “one” is always physical, never purely mathematical—one ampere of current, one apple, or one photon.
- One ampere cannot be measured with perfect precision.
- One apple may differ significantly from another.
- One photon carries variable energy.
- We simplify with 1+1=2, but such simplifications introduce fundamental inaccuracies. While this approach has led to great advancements, we increasingly encounter its limitations.
- The key to future progress—particularly in intelligence engineering—lies in developing a more physical, context-aware approach.
- Even in domains where precision is maximized (e.g., counting electrons), uncertainty persists. Time, for instance, has not been proven to be granular. No matter how precisely we measure electrons, there remains an inherent uncertainty in the measurement duration itself. The deeper we probe reality, the more we encounter what could be seen as a generalized Heisenberg uncertainty principle.
All variables, constants, values, and knowledge inherently come with extended disclaimers. At every level of a system, from the lowest layers of code to the overarching methodology, this should not only be considered but must serve as a fundamental principle: nothing can be believed without reservation.
We know anything only within the bounds of a specific level of confidence, a defined precision, and a particular moment in time.
- Even the word “know” itself is misleadingly definitive—it falsely implies absolute certainty, which is inherently unattainable. We never truly “know” anything; instead, we always “assume” within the constraints of current inputs and past experience, where experience itself is merely a historical aggregation of prior inputs and interaction outcomes.
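One way to keep these disclaimers attached to the data itself, rather than leaving them implicit, is to never pass around a bare number. The sketch below is hypothetical (the `Measured` type and its fields are invented for illustration): a value travels together with its precision, its confidence, and the moment it was obtained.

```python
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Measured:
    """A value that carries its own 'disclaimer' instead of pretending to be exact."""
    value: float        # best current estimate
    sigma: float        # estimated precision (standard deviation, in the value's units)
    confidence: float   # how much we currently trust the estimate, 0..1
    timestamp: float = field(default_factory=time.time)  # when it was obtained

    def age_s(self, now: Optional[float] = None) -> float:
        """How stale the value is; the older it is, the less it should be trusted."""
        return (now if now is not None else time.time()) - self.timestamp

# "One ampere" is never just 1: it is 1.00 A, +/- 0.02 A, trusted at 0.9, taken at time t.
current_a = Measured(value=1.00, sigma=0.02, confidence=0.9)
print(f"{current_a.value} A +/- {current_a.sigma} A, confidence {current_a.confidence}")
```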
A concrete example from the Marvelmind indoor positioning system: when determining a distance, we begin with multiple candidates. Until we eliminate the unsuitable ones, all candidates appear plausible. However, we know that the object can exist at only one of these distances, and at most one candidate is truly correct.
Furthermore, we cannot be 100% certain that there is even a valid candidate within our set. This means we can identify the most probable estimate, yet even this best estimate may still be factually incorrect.
Even the term “factually correct”, or “ground truth”, is itself not an absolute truth. Instead, it serves as a reference point, one that we currently trust more than the rejected candidates, not because it is inherently true, but because our historical candidate elimination process and prior experience have led us to assign it higher credibility at this moment.
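A deliberately simplified sketch of this candidate-based reasoning (the scoring rule and thresholds are invented; the real elimination logic uses far more information): unsuitable candidates are removed, the survivors are ranked, and even the winner is returned together with its score rather than as "the truth".

```python
from typing import List, Optional, Tuple

def best_distance_candidate(
    candidates_m: List[float],
    predicted_m: float,
    max_deviation_m: float,
) -> Optional[Tuple[float, float]]:
    """Return the most plausible (distance, score) pair, or None if nothing survives.

    The score here is a toy plausibility based only on closeness to the predicted
    distance; a real system would also weigh signal quality, geometry, and history.
    """
    survivors = []
    for d in candidates_m:
        deviation = abs(d - predicted_m)
        if deviation > max_deviation_m:
            continue  # eliminated: implausible given what we currently expect
        survivors.append((d, 1.0 - deviation / max_deviation_m))

    if not survivors:
        return None  # we cannot even be sure a valid candidate was in the set

    # The winner is only "the candidate we currently trust most", not ground truth.
    return max(survivors, key=lambda pair: pair[1])

# Echo candidates at several distances; the previous fix predicted roughly 4.2 m.
print(best_distance_candidate([1.3, 4.05, 4.4, 9.8], predicted_m=4.2, max_deviation_m=0.5))
# -> roughly (4.05, 0.7): the most credible candidate right now, still possibly wrong
```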
The Fundamental Link Between Truth, Time, and Cost
Since we can never have perfect knowledge, truth always involves a degree of trust and a temporal component. This means that “more accurate but later data” is not necessarily better than earlier, less precise data.
A practical example:
- The Marvelmind indoor positioning system measures coordinates with centimeter-level precision.
- If an object moves at several meters per second and the system updates the location at 10 Hz (10 times per second), then even at just 3 m/s the real-time error can easily reach 20-30 cm, far exceeding our nominal 1-2 cm accuracy.
- Thus, our “highly precise” data is already outdated the moment we obtain it.
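A back-of-the-envelope version of the numbers above, assuming the worst case where a fix is used just before the next update arrives:

```python
def worst_case_realtime_error_m(nominal_accuracy_m: float,
                                update_rate_hz: float,
                                speed_m_s: float) -> float:
    """Nominal accuracy plus the distance the object travels while a fix grows stale."""
    max_fix_age_s = 1.0 / update_rate_hz        # a fix can be up to one update period old
    staleness_error_m = speed_m_s * max_fix_age_s
    return nominal_accuracy_m + staleness_error_m

# 2 cm nominal accuracy, 10 Hz updates, 3 m/s motion -> about 32 cm worst-case error.
print(worst_case_realtime_error_m(0.02, 10.0, 3.0))  # 0.32
```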
Here we encounter the crucial principle of practically necessary precision:
- Greater precision costs more money, energy, and effort, but yields diminishing returns.
- Beyond a certain point, increased accuracy offers no practical benefit because other errors dominate.
- A less precise but faster system may produce more accurate real-time results than a high-precision system with significant latency (see the sketch after this list).
- A cheaper, even less precise system might still be preferable if higher-end solutions are financially unattainable.
- However, this is not always true: a system with a correlated error may be worse than a purely random guess.
- A misleading system that we trust incorrectly can produce worse results than random chance, effectively making us victims of deception.
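Using the same crude model as the previous sketch, the faster-but-less-precise trade-off can be made concrete. The two systems and all numbers below are hypothetical:

```python
def effective_error_m(static_accuracy_m: float, latency_s: float, speed_m_s: float) -> float:
    """Crude effective real-time error: static accuracy plus motion during the latency."""
    return static_accuracy_m + speed_m_s * latency_s

SPEED_M_S = 3.0

# A 'high precision' system that delivers results 0.5 s late
# versus a coarser system that delivers them after 0.05 s.
slow_precise = effective_error_m(static_accuracy_m=0.01, latency_s=0.50, speed_m_s=SPEED_M_S)
fast_coarse = effective_error_m(static_accuracy_m=0.10, latency_s=0.05, speed_m_s=SPEED_M_S)

print(slow_precise, fast_coarse)  # 1.51 vs 0.25: the 'less precise' system wins in real time
```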
A Thought Experiment: The Evolution of Success and Failure
Consider a scenario where a person must decide which direction to move:
- Initial uncertainty:
- There is no prior knowledge about the environment, so going left, right, or forward all seem equally valid.
- First step taken:
- This step does not result in immediate harm.
- The person naturally assumes that the decision was correct.
- A new variable appears:
- After traveling for some time, they encounter a wild predator.
- With this new information, the initial decision now seems poor.
- An escape attempt fails:
- Trying to retreat is impossible due to a rockslide.
- The situation appears even worse.
- A new perspective emerges:
- The person notices a rock behind them.
- They realize it can be used as a weapon.
- A new outcome is reached:
- The predator is killed.
- The person now has warm fur, food, and safety.
- The original decision now seems like a success again.
However, what if the scent of meat attracts a larger pack of predators?
- The person is ultimately killed—a tragic outcome.
- Yet, their act of resistance delayed the predators, saving a nearby village.
- To the villagers, they are now a hero, even a legend.
- But perhaps they would not have wanted to die, even if remembered as a saint.
Implications: The Relativity of “Good” and “Bad”
This sequence of shifting perspectives suggests that “good” and “bad,” “absolute evil” and “pure virtue,” “success” and “failure” cannot exist independently of context, time, and circumstances.
Just as electromagnetic waves interfere and propagate, our current state and all events around us form an interference pattern of past experiences, decisions, and external influences.
- Like waves in water, our past choices and external factors interact, sometimes amplifying, sometimes canceling each other.
- The significance of an event depends on its amplitude (importance).
- Minor factors can accumulate, leading to unexpected and seemingly random outcomes.
- The probability of any single event may be small, but when multiplied across countless interactions, the net effect is far from negligible.
Thus, when assessing any event, decision, or outcome, we must always consider:
- The timescale of evaluation
- The surrounding context
- The degree of uncertainty
Without these, any assessment of “success” or “failure” becomes inherently flawed.
We must eliminate dangerous misconceptions:
We do not know values precisely, and it is not just a limitation of measurement accuracy. It is a fundamental property of our Universe. The question of who designed our Universe and why it was made this way is beyond our immediate task. However, since we operate within this world, and not within a virtual world where fundamental rules might be different, we must adhere to the fundamental laws of physics that govern our physical reality—especially if we aim to build physical robots in a physical world.
The approach of “let’s build it as is, and later add some parameter variability to simulate uncertainty” is absolutely wrong. No, no, and once again, no. That would require rewriting everything from scratch—reworking every line of code and every design decision. It is far simpler to do things correctly from the start.
Correct design means that mathematics, coding approach, and algorithms must incorporate this fundamental principle from the ground up. Uncertainty should not be an artificial afterthought, but a cornerstone of system architecture.
Why is this approach simpler?
Why search for a variable with precision up to the 10th decimal place if the input data itself has only 1–3% accuracy? It is meaningless.
Why digitize at 16-bit resolution, then struggle with 32-bit computations and massive memory overhead, when input precision is effectively 1–4 bits?
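One small illustration of practically necessary precision in code: an iterative search that stops as soon as further refinement falls below the uncertainty already present in the inputs, instead of iterating down to machine precision. The function, tolerances, and example are invented for illustration.

```python
def solve_to_input_precision(f, lo: float, hi: float, input_rel_error: float) -> float:
    """Bisection that stops once the bracket is narrower than the input uncertainty.

    If the inputs are only good to a few percent, iterating down to the 10th
    decimal place costs time and energy but adds no information.
    """
    tol = input_rel_error * max(abs(lo), abs(hi))  # tolerance derived from the data
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid  # the sign change (and the root) lies in the lower half
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Find where a roughly-2%-accurate calibration curve crosses zero between 0 and 10.
root = solve_to_input_precision(lambda x: x * x - 2.0, 0.0, 10.0, input_rel_error=0.02)
print(root)  # about 1.48, within the ~0.2 tolerance the inputs can justify (true root ~1.414)
```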
From the very foundation of code, hardware, algorithms, and decisions, we must embed the principle that nothing can be blindly trusted.
If we compute something over a prolonged period (relative to the validity window of a given value), we must consider that by the time the computation completes, the result may already be incorrect.
If we still use the outdated result, we must account for its degradation—we should never rely on it as if it were still precise. If we must use it, then only with far less confidence and lower precision than at the moment of measurement.
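A hypothetical sketch of using an outdated result only with degraded precision: before a stale position fix is used, its uncertainty is widened by how far the object could have moved since the fix was taken. All names and numbers are illustrative.

```python
def degraded_position_sigma_m(sigma_at_measurement_m: float,
                              age_s: float,
                              max_speed_m_s: float) -> float:
    """Widen a stale fix's uncertainty by how far the object may have moved since."""
    possible_drift_m = max_speed_m_s * age_s
    # Treat the possible drift as an extra, independent error source (root-sum-square).
    return (sigma_at_measurement_m ** 2 + possible_drift_m ** 2) ** 0.5

# A fix that was good to 2 cm is only good to ~30 cm half a second later at 0.6 m/s.
print(degraded_position_sigma_m(0.02, age_s=0.5, max_speed_m_s=0.6))  # ~0.30
```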
Fundamental Misconception: 1+1 ≠ 2
1+1 is never exactly 2—except in rare coincidences.
1+1=2 is valid only in simplified arithmetic, such as for kindergarten-level math or in specialized branches of pure algebra—not for real-world engineering.
In reality, 1+1 could mean:
- 1A + 1A of electrical current,
- 1 kg + 1 kg of mass,
- 1 apple + 1 apple,
- 1 logical ‘1’ in CMOS + another logical ‘1’ in CMOS,
- But what if power fluctuates?
- What if a radioactive particle interferes?
- What if an unconnected high-impedance input oscillates at 100 Hz due to power line noise?
In physical terms, 1+1 is essentially never exactly equal to 2.
- We may describe 1 A as a count of electrons per second, but time itself cannot be measured with arbitrary precision, and even our best atomic clocks have finite resolution.
- Engineering reality is not an idealized mathematical construct where limits can simply approach zero.
- Simplified models are fine for rough estimates, but applying them blindly at higher levels leads to critical errors.
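A minimal sketch of what 1+1 looks like when the ones are physical quantities: each carries an uncertainty, and the sum's uncertainty must be propagated as well (here with the standard root-sum-square rule for independent errors; the tolerances are made up).

```python
import math

def add_measured(a: float, sigma_a: float, b: float, sigma_b: float):
    """Add two independently measured quantities and propagate their uncertainties."""
    total = a + b
    sigma_total = math.sqrt(sigma_a ** 2 + sigma_b ** 2)  # independent errors add in quadrature
    return total, sigma_total

# 1 A known to +/- 0.02 A plus 1 A known to +/- 0.02 A:
total, sigma = add_measured(1.0, 0.02, 1.0, 0.02)
print(f"{total:.3f} A +/- {sigma:.3f} A")  # 2.000 A +/- 0.028 A, not 'exactly 2'
```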
The problem is not just computational inaccuracy—it is a flaw in logic and approach.
- Blind faith in 1+1=2 leads to wasted hours debugging issues that should never exist.
- A real danger is when a programmer assumes:
- “5.00000000001 m > 5.0000000 m, so let’s turn the robot around.”
- Meanwhile, a three-year-old in a sandbox sees the robot behaving strangely and loses interest.
- The mistake?
- Trusting a minuscule difference,
- Ignoring the 5-10% measurement uncertainty,
- Overlooking sensor failures or factory defects.
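The same mistake, and one hedged alternative, expressed in code. The threshold, the uncertainty, and the 2-sigma margin are all illustrative; the point is that a comparison should only trigger an action when the difference exceeds what the measurement can actually resolve.

```python
def naive_should_turn(distance_m: float, limit_m: float) -> bool:
    # Trusts a difference of 0.00000000001 m that no real sensor can resolve.
    return distance_m > limit_m

def hedged_should_turn(distance_m: float, limit_m: float, sigma_m: float) -> bool:
    # Act only when the measured distance exceeds the limit by clearly more than
    # the measurement uncertainty (a ~2-sigma margin, chosen arbitrarily here).
    return distance_m > limit_m + 2.0 * sigma_m

d = 5.00000000001  # meters, from a sensor with roughly 5-10% uncertainty
print(naive_should_turn(d, 5.0))                  # True: the robot turns around over noise
print(hedged_should_turn(d, 5.0, sigma_m=0.25))   # False: the difference is meaningless, keep going
```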
This flawed thinking appears early in development:
- “I receive data, I process it—its source and accuracy are not my concern.” → Wrong.
- “If data came with confidence intervals, I’d know how to handle it.” → Wrong again.
- This assumes confidence intervals themselves are perfectly known.
- It ignores baseline measurement errors, which are always present.
Such misconceptions prevent building reliable systems.
- The result is a rigid, rule-based system that fails in real-world environments.
Does solving this require fuzzy logic or analog computing? → Not necessarily.
Does it require a new programming language? → Maybe, but modern tools are usually enough.
What matters is a shift in system design:
- Nothing should be blindly trusted.
- Every value has potential errors.
- 1+1=2 is a coincidence, not a rule.
Systems must be built with this principle from the ground up:
- There are candidates, assumptions, and values we trust with some confidence—because decisions must be made.
- But trust must always be conditional:
- It must align with other data,
- Account for historical changes,
- Degrade over time if left unverified.
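Pulling the earlier sketches together, here is one hypothetical shape such conditional trust could take: a stored value whose confidence decays unless it is re-verified, and which is accepted only while it remains both confident enough and consistent with independent data. Every name and constant below is illustrative, not a prescription.

```python
import math
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TrustedValue:
    """A value we are allowed to use, but never to believe unconditionally."""
    value: float
    sigma: float                      # precision at the moment of measurement
    confidence: float                 # trust at the moment of measurement, 0..1
    timestamp: float = field(default_factory=time.time)
    decay_rate: float = 0.1           # per second; purely hypothetical

    def current_confidence(self, now: Optional[float] = None) -> float:
        """Trust degrades the longer the value goes unverified."""
        age = (now if now is not None else time.time()) - self.timestamp
        return self.confidence * math.exp(-self.decay_rate * age)

    def usable(self, cross_check: float, min_confidence: float = 0.5) -> bool:
        """Conditional trust: still confident enough AND consistent with other data."""
        agrees = abs(self.value - cross_check) <= 3.0 * self.sigma
        return agrees and self.current_confidence() >= min_confidence

    def verify(self, new_value: float, new_sigma: float) -> None:
        """Re-verification updates the value and resets the trust clock."""
        self.value, self.sigma = new_value, new_sigma
        self.confidence, self.timestamp = 0.95, time.time()

# Example: a 2.30 m range with 3 cm sigma, cross-checked against an independent estimate.
r = TrustedValue(value=2.30, sigma=0.03, confidence=0.9)
print(r.usable(cross_check=2.35))  # True now; after ~6 s without verify() it becomes False
```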