Hacking Pi — The Bootloader of the Universe
Pattern Field Theory and the Genome of Reality
In Pattern Field Theory™ (PFT™), we uncovered something extraordinary: π (Pi) is not just a number — it is the bootloader of the universe. It functions as the recursive code sequence that seeds coherence, generates symmetry, and initiates the structural genome of reality itself.
Pi as the Universal Genome
Just as DNA encodes the blueprint for life, Pi encodes the instruction set for the universe. Within its digits, recursive structures unfold that act as the boot sequence for reality. This is why PFT describes Pi not as a random irrational, but as the genome of the cosmos.
Pi = Bootloader
It contains the recursive closure instructions that seed resonance, coherence, and field stability.
Formula Representation
Pi → Genome(Code)
Genome(Code) → Boot Sequence
Boot Sequence → Pattern Coherence
Pattern Coherence → Reality
Your Pi file
Use a plain‑text file containing digits of π, any size (thousands to billions). The script ignores non‑digit characters, so a file beginning with 3.1415… and a file of bare digits both work.
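If you don't have a Pi digits file yet, one way to generate one locally is with the mpmath package (an assumption for illustration only; hack_pi.py does not require it, and any source of digits works):

# Sketch: generate roughly 1,000,000 digits of pi with mpmath (pip install mpmath).
# mpmath is assumed here for illustration; hack_pi.py itself does not need it.
from mpmath import mp

n_digits = 1_000_000
mp.dps = n_digits + 10      # decimal precision, with a small guard
with open("pi_1e6.txt", "w") as f:
    f.write(str(mp.pi))     # writes "3.14159..."; the decimal point is ignored by the analyzer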
Quick start
- Save the Python script below as hack_pi.py in the same folder as your Pi file (e.g., pi_1e8.txt).
- Run one of these commands:
# Windows (PowerShell)
python .\hack_pi.py --pi-file .\pi_1e8.txt --limit 0 --outdir .\results
# macOS / Linux
python3 ./hack_pi.py --pi-file ./pi_1e8.txt --limit 0 --outdir ./results
--limit 0 = use the entire file. For a quick test, try --limit 1000000 (first 1M digits).
Outputs: results/metrics.json (full) and results/metrics.preview.txt (short).
The Python script (hack_pi.py)
#!/usr/bin/env python3
# /* Pattern Field Theory™ — Hacking Pi (structure probes)
# © 2025 James Johan Sebastian Allen — https://patternfieldtheory.com
# Must include this header + link. Code License: MIT. Theory/Docs: site terms. */
import argparse, re, json, math
from pathlib import Path
from collections import Counter, defaultdict
def stream_digits(path, limit=0, chunk=1_000_000):
    rx = re.compile(rb"\d")
    total = 0
    with open(path, "rb") as f:
        while True:
            b = f.read(chunk)
            if not b: break
            digs = rx.findall(b)
            for d in digs:
                yield d[0] - 48  # '0' -> 0
                total += 1
                if limit and total >= limit:
                    return
def chi_square(observed_counts, N):
    if N == 0: return float("nan"), float("nan")
    expected = N / 10.0
    chi = sum((c - expected) ** 2 / expected for c in observed_counts)
    return chi, 9.0  # df=9 for 10 categories
def autocorr_lags(seq, maxlag=10):
    sums = defaultdict(int); sums_sq = defaultdict(int)
    pairs = {k: defaultdict(int) for k in range(1, maxlag+1)}
    n = 0; buf = []
    for d in seq:
        buf.append(d)
        if len(buf) > maxlag+1: buf.pop(0)
        sums["x"] += d; sums_sq["x2"] += d*d; n += 1
        for k in range(1, maxlag+1):
            if len(buf) > k:
                pairs[k][(buf[-1-k], buf[-1])] += 1
    if n == 0: return {}
    mean = sums["x"]/n; var = (sums_sq["x2"]/n) - mean*mean
    if var <= 0: return {f"lag_{k}": float("nan") for k in range(1, maxlag+1)}
    res = {}
    for k in range(1, maxlag+1):
        cov_num = 0.0; denom = 0
        for (a,b), cnt in pairs[k].items():
            cov_num += ((a-mean)*(b-mean)) * cnt; denom += cnt
        r = float("nan") if denom == 0 else (cov_num/denom)/var
        res[f"lag_{k}"] = r
    return res
def ngram_counts(seq, n=3, limit_pairs=2_000_000):
    counts = Counter(); buf = []; c = 0
    for d in seq:
        buf.append(d)
        if len(buf) == n:
            counts[tuple(buf)] += 1
            buf.pop(0); c += 1
            if c >= limit_pairs: break
    return counts, c
def sliding_entropy(seq, window=10000, step=10000):
    ent=[]; buf=[]; counts=[0]*10
    for d in seq:
        buf.append(d); counts[d]+=1
        if len(buf)==window:
            H=0.0
            for i in range(10):
                p=counts[i]/window
                if p>0: H -= p*math.log(p,10)
            ent.append(H)
            if step==window:
                buf.clear(); counts=[0]*10
            else:
                for _ in range(step):
                    if not buf: break
                    first=buf.pop(0); counts[first]-=1
    return ent
def count_patterns(path, limit, patterns):
    maxlen = max(len(p) for p in patterns)
    rx = re.compile(r"\d"); tail=""; found={p:0 for p in patterns}; seen=0
    with open(path, "r", encoding="utf-8", errors="ignore") as f:
        while True:
            chunk = f.read(1_000_000)
            if not chunk: break
            digs = "".join(rx.findall(chunk))
            if not digs: continue
            s = tail + digs
            for p in patterns:
                plen = len(p)
                # skip matches that lie entirely inside the carried-over tail:
                # those were already counted while processing the previous chunk
                start = max(0, len(tail) - plen + 1)
                while True:
                    idx = s.find(p, start)
                    if idx==-1: break
                    found[p]+=1; start=idx+1
            seen += len(digs)
            if limit and seen >= limit: break
            tail = s[-(maxlen-1):] if maxlen>1 else ""
    return found
def fib_indices_upto(N):
    F=[1,1]
    while F[-1] <= N: F.append(F[-1]+F[-2])
    return set(F[:-1])
def prime_sieve_upto(N):
    if N < 2: return set()
    sieve = bytearray(b"\x01")*(N+1); sieve[0:2]=b"\x00\x00"; p=2
    while p*p <= N:
        if sieve[p]:
            start=p*p; step=p
            sieve[start:N+1:step] = b"\x00"*(((N-start)//step)+1)
        p += 1
    return {i for i in range(2, N+1) if sieve[i]}
def analyze_pi(pi_path, limit=0, outdir="results"):
    pi_path = Path(pi_path); out = Path(outdir); out.mkdir(parents=True, exist_ok=True)
    # 1) stream basic counts
    freq=[0]*10; total=0; mem=[]
    cap_window=5_000_000
    for d in stream_digits(pi_path, limit=limit):
        freq[d]+=1; total+=1
        if len(mem) < cap_window: mem.append(d)
    # 2) tests
    chi, df = chi_square(freq, total)
    ac = autocorr_lags(iter(mem), maxlag=10)
    # 3) n-grams
    tri, tri_seen = ngram_counts(iter(mem), n=3, limit_pairs=2_000_000)
    top_tri = tri.most_common(20)
    # 4) sliding entropy
    ent = sliding_entropy(iter(mem), window=10000, step=10000)
    ent_stats = {
        "count": len(ent),
        "min": float(min(ent)) if ent else float("nan"),
        "max": float(max(ent)) if ent else float("nan"),
        "mean": float(sum(ent)/len(ent)) if ent else float("nan"),
    }
    # 5) pattern counts
    patterns = ["314","141","15926","26535","897932","000","12345","98765"]
    pat_counts = count_patterns(pi_path, limit, patterns)
    # 6) prime & Fibonacci index digits (cap to 5M)
    up_to = min(total, 5_000_000)
    prs = prime_sieve_upto(up_to)
    fibs = fib_indices_upto(up_to)
    sel_positions = sorted(prs | fibs)
    sel_digits = {}
    if sel_positions:
        pos_i=0; target=sel_positions[pos_i]; idx=0
        for d in stream_digits(pi_path, limit=up_to):
            idx+=1
            if idx==target:
                sel_digits[target]=d
                pos_i+=1
                if pos_i==len(sel_positions): break
                target=sel_positions[pos_i]
    prime_digit_freq=[0]*10; fib_digit_freq=[0]*10
    for pos in prs:
        if pos in sel_digits: prime_digit_freq[sel_digits[pos]] += 1
    for pos in fibs:
        if pos in sel_digits: fib_digit_freq[sel_digits[pos]] += 1
    results = {
        "input_file": str(pi_path.resolve()),
        "limit_used": int(limit),
        "total_digits": int(total),
        "digit_freq": freq,
        "chi_square": {"chi": float(chi), "df": float(df)},
        "autocorr_lags": ac,
        "top_trigrams": [{"tri": "".join(map(str,k)), "count": v} for (k,v) in top_tri],
        "trigram_pairs_seen": int(tri_seen),
        "sliding_entropy_stats": ent_stats,
        "pattern_counts": pat_counts,
        "beacons": {
            "prime_index_upto": int(up_to),
            "prime_digit_freq": prime_digit_freq,
            "fib_index_upto": int(up_to),
            "fib_digit_freq": fib_digit_freq
        },
        "notes": "Uniform Pi ~ flat digit_freq, chi~9±, small autocorr, high entropy (~1.0 base-10). Deviations indicate structure to investigate."
    }
    out_json = out/"metrics.json"
    with open(out_json, "w") as f: json.dump(results, f, indent=2)
    prev = out/"metrics.preview.txt"
    with open(prev, "w") as f:
        f.write(f"digits: {total}\n")
        f.write(f"freq: {freq}\n")
        f.write(f"chi^2(df=9): {chi:.3f}\n")
        for k in range(1,11):
            f.write(f"autocorr lag {k}: {ac.get(f'lag_{k}', float('nan'))}\n")
        f.write(f"top 3-grams: {[(''.join(map(str,k)),v) for (k,v) in top_tri]}\n")
        f.write(f"entropy mean: {ent_stats['mean']:.5f}\n")
        f.write(f"patterns: {pat_counts}\n")
        f.write(f"prime_digit_freq: {prime_digit_freq}\n")
        f.write(f"fib_digit_freq: {fib_digit_freq}\n")
    print(f"Wrote {out_json}")
    print(f"Wrote {prev}")
if __name__ == "__main__":
    ap = argparse.ArgumentParser(description="Hacking Pi — structure probes")
    ap.add_argument("--pi-file", required=True, help="Path to text file of Pi digits")
    ap.add_argument("--limit", type=int, default=0, help="Use first N digits (0 = all)")
    ap.add_argument("--outdir", default="results", help="Output directory")
    args = ap.parse_args()
    analyze_pi(args.pi_file, limit=args.limit, outdir=args.outdir)
What it measures
- Digit frequencies (0–9) and Chi‑square vs uniform (df=9).
- Autocorrelation lags 1–10 (should be near zero if iid).
- Top trigrams (3‑digit sequences) from a large streaming window.
- Sliding entropy (min/mean/max across chunks; ~1.0 base‑10 when uniform‑like).
- Pattern counts for notable substrings (e.g., 314, 15926); the baseline sketch below shows the IID expectation for such counts.
- Prime‑index & Fibonacci‑index digit distributions (first ≤5M positions).
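For reference, here is a minimal sketch (illustrative only, not part of hack_pi.py) of the IID-uniform baselines these probes are compared against: each digit should appear about N/10 times, the chi-square statistic with df=9 has mean 9, the base-10 entropy of a uniform digit stream is 1.0, and a specific k-digit pattern is expected roughly (N - k + 1)/10^k times in N digits.

# Sketch of IID-uniform baselines (illustrative; not part of hack_pi.py).
def iid_baselines(N, pattern="15926"):
    k = len(pattern)
    return {
        "expected_count_per_digit": N / 10.0,         # each of 0-9 appears ~N/10 times
        "chi_square_mean_df9": 9.0,                   # mean of a chi-square with 9 degrees of freedom
        "max_entropy_base10": 1.0,                    # uniform over 10 symbols, log base 10
        "expected_pattern_hits": (N - k + 1) / 10**k  # overlapping length-k windows
    }

print(iid_baselines(1_000_000))   # e.g., "15926" is expected ~10 times in 1M digits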
Ask Grok / ChatGPT — copy this prompt
Analyze my Pi structure results.
Context:
- I ran the "hack_pi.py" script from Pattern Field Theory on a local Pi file.
- It produced results/metrics.json and results/metrics.preview.txt.
Tasks:
1) Check digit frequencies against uniform using chi-square (df=9). Are they within expectation?
2) Inspect autocorrelation lags 1–10. Flag any |r| > 0.01.
3) List the top 10 trigrams and whether counts materially deviate from IID expectation.
4) Summarize sliding entropy (min/mean/max) vs ~1.0 (base-10).
5) Compare prime-index vs Fibonacci-index digit distributions to overall.
6) Comment on notable pattern counts (e.g., '314', '15926') relative to digits analyzed.
7) Verdict: “consistent with uniform” or “shows structure worth deeper tests”.
JSON follows:
--- BEGIN JSON ---
[paste contents of results/metrics.json here]
--- END JSON ---
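If results/metrics.json is too large to paste in full, a small sketch (a workflow suggestion, not part of the script) for loading it and printing a trimmed copy:

# Sketch: trim results/metrics.json before pasting into the prompt above.
import json

with open("results/metrics.json") as f:
    m = json.load(f)
m["top_trigrams"] = m["top_trigrams"][:10]   # keep only the top 10 trigrams
print(json.dumps(m, indent=2))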
FAQ
- Huge files? The script streams; memory stays modest. Prime/Fib beacons cap at 5M indices for speed.
- Quick run? Use --limit 1000000 for 1M digits, then scale up.
- File format? Any text with digits; non‑digits are ignored.
© 2025 James Johan Sebastian Allen — Pattern Field Theory™. Keep this header and link in any reposts of the code.