London Historical LLM

A custom GPT-2 model trained from scratch on historical London texts from 1500-1850. It is fast to run on CPU and supports NVIDIA (CUDA) and AMD (ROCm) GPUs.

Note: This model was trained from scratch - not fine-tuned from existing models.

This page includes simple virtual-env setup, install choices for CPU/CUDA/ROCm, and an auto-device inference example so anyone can get going quickly.


πŸ”Ž Model Description

This is a Regular Language Model built from scratch on the GPT-2 architecture and trained on a comprehensive collection of historical London documents spanning 1500-1850, including:

  • Parliamentary records and debates
  • Historical newspapers and journals
  • Literary works and correspondence
  • Government documents and reports
  • Personal letters and diaries

Key Features

  • ~354M parameters (vs ~117M in the SLM version)
  • Custom historical tokenizer (~30k vocab) with London-specific tokens (see the quick check after this list)
  • London-specific context awareness and historical language patterns
  • Trained from scratch - not fine-tuned from existing models
  • Optimized for historical text generation (1500-1850)
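
To get a quick feel for the custom tokenizer, a minimal check (the printed tokens are illustrative; the exact London-specific special tokens aren't listed here):

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bahree/london-historical-llm")
print("vocab size:", tok.vocab_size)  # ~30k custom historical vocabulary
print(tok.tokenize("The Thames flowed dark and mysterious beneath London Bridge."))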

πŸ§ͺ Intended Use & Limitations

Use cases: historical-style narrative generation, prompt-based exploration of London themes (1500-1850), creative writing aids.
Limitations: may produce anachronisms or historically inaccurate statements; complex sampling parameters may produce gibberish due to the historical nature of the training data. Validate outputs before downstream use.


🐍 Set up a virtual environment (Linux/macOS/Windows)

Virtual environments isolate project dependencies. Official Python docs: venv.

Check Python & pip

# Linux/macOS
python3 --version && python3 -m pip --version
# Windows (PowerShell)
python --version; python -m pip --version

Create the env

# Linux/macOS
python3 -m venv helloLondon
# Windows (PowerShell)
python -m venv helloLondon
:: Windows (Command Prompt)
python -m venv helloLondon

Note: You can name your virtual environment anything you like, e.g., .venv, my_env, london_env.

Activate

# Linux/macOS
source helloLondon/bin/activate
# Windows (PowerShell)
.\helloLondon\Scripts\Activate.ps1
:: Windows (CMD)
.\helloLondon\Scripts\activate.bat

If PowerShell blocks activation ("running scripts is disabled"), set the policy then retry activation:

Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy RemoteSigned
# or just for this session:
Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass

πŸ“¦ Install libraries

Upgrade basics, then install Hugging Face libs:

python -m pip install -U pip setuptools wheel
python -m pip install "transformers" "accelerate" "safetensors"

βš™οΈ Install one PyTorch variant (CPU / NVIDIA / AMD)

Use one of the commands below. For the most accurate command per OS/accelerator and version, prefer PyTorch's Get Started selector.

A) CPU-only (Linux/Windows/macOS)

pip install torch --index-url https://download.pytorch.org/whl/cpu

B) NVIDIA GPU (CUDA)

Pick the CUDA series that matches your system (examples below):

# CUDA 12.6
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126

# CUDA 12.4
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124

# CUDA 11.8
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

C) AMD GPU (ROCm, Linux-only)

Install the ROCm build matching your ROCm runtime (examples):

# ROCm 6.3
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.3

# ROCm 6.2 (incl. 6.2.x)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.2.4

# ROCm 6.1
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.1

Quick sanity check

python - <<'PY'
import torch
print("torch:", torch.__version__)
print("GPU available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
PY
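
The heredoc above works in Linux/macOS shells. On Windows (PowerShell or CMD), an equivalent one-line check is:

python -c "import torch; print('torch:', torch.__version__); print('GPU available:', torch.cuda.is_available())"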

πŸš€ Inference (auto-detect device)

This snippet picks the best device (CUDA/ROCm if available, else CPU) and uses sensible generation defaults for this model.

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "bahree/london-historical-llm"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

prompt = "In the year 1834, I walked through the streets of London and witnessed"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

outputs = model.generate(
    inputs["input_ids"],
    max_new_tokens=50,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.2,
    no_repeat_ngram_size=3,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.eos_token_id,
    early_stopping=True,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))

πŸ“– Sample Output

Prompt: "In the year 1834, I walked through the streets of London and witnessed"

Generated Text:

"In the year 1834, I walked through the streets of London and witnessed a scene in which some of those who had no inclination to come in contact with him took part in his discourse. It was on this occasion that I perceived that he had been engaged in some new business connected with the house, but for some days it had not taken place, nor did he appear so desirous of pursuing any further display of interest. The result was, however, that if he came in contact witli any one else in company with him he must be regarded as an old acquaintance or companion, and when he came to the point of leaving, I had no leisure to take up his abode. The same evening, having ram ##bled about the streets, I observed that the young man who had just arrived from a neighbouring village at the time, was enjoying himself at a certain hour, and I thought that he would sleep quietly until morning, when he said in a low voice β€” " You are coming. Miss β€” I have come from the West Indies . " Then my father bade me go into the shop, and bid me put on his spectacles, which he had in his hand; but he replied no: the room was empty, and he did not want to see what had passed. When I asked him the cause of all this conversation, he answered in the affirmative, and turned away, saying that as soon as the lad could recover, the sight of him might be renewed. " Well, Mr. , " said I, " you have got a little more of your wages, do you ? " " No, sir, thank ' ee kindly, " returned the boy, " but we don ' t want to pay the poor rates . We"

Notice how the model captures:

  • Period-appropriate language ("thank 'ee kindly," "bade me go," "spectacles")
  • Historical dialogue patterns (formal speech, period-appropriate contractions)
  • Historical context (the West Indies, poor rates, wages)
  • Authentic historical narrative (detailed scene setting, period-appropriate social interactions)

πŸ§ͺ Testing Your Model

Quick Testing (10 Automated Prompts)

# Test with 10 automated historical prompts
python 06_inference/test_published_models.py --model_type regular

Expected Output:

πŸ§ͺ Testing Regular Model: bahree/london-historical-llm

πŸ“‚ Loading model...
βœ… Model loaded in 12.5 seconds
πŸ“Š Model Info:
  Type: REGULAR
  Description: Regular Language Model (354M parameters)
  Device: cuda
  Vocabulary size: 30,000
  Max length: 1024

🎯 Testing generation with 10 prompts...
[10 automated tests with historical text generation]

Interactive Testing

# Interactive mode for custom prompts
python 06_inference/inference_unified.py --published --model_type regular --interactive

# Single prompt test
python 06_inference/inference_unified.py --published --model_type regular --prompt "In the year 1834, I walked through the streets of London and witnessed"

Need more headroom later? Load with πŸ€— Accelerate and device_map="auto" to spread layers across available devices/CPU automatically.

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "bahree/london-historical-llm"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
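
When layers are spread with device_map="auto", place the prompt tensors on the model's device before generating. A minimal sketch, reusing tok and model from above (assuming model.device points at the first shard, as πŸ€— Transformers exposes for dispatched models):

prompt = "In the year 1834, I walked through the streets of London and witnessed"
inputs = tok(prompt, return_tensors="pt").to(model.device)  # move inputs to the first shard's device
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tok.decode(outputs[0], skip_special_tokens=True))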

πŸͺŸ Windows Terminal one-liners

PowerShell

python -c "from transformers import AutoTokenizer,AutoModelForCausalLM; m='bahree/london-historical-llm'; t=AutoTokenizer.from_pretrained(m); model=AutoModelForCausalLM.from_pretrained(m); p='Today I walked through the streets of London and witnessed'; i=t(p,return_tensors='pt'); print(t.decode(model.generate(i['input_ids'],max_new_tokens=50,do_sample=True)[0],skip_special_tokens=True))"

Command Prompt (CMD)

python -c "from transformers import AutoTokenizer, AutoModelForCausalLM ^&^& import torch ^&^& m='bahree/london-historical-llm' ^&^& t=AutoTokenizer.from_pretrained(m) ^&^& model=AutoModelForCausalLM.from_pretrained(m) ^&^& p='Today I walked through the streets of London and witnessed' ^&^& i=t(p, return_tensors='pt') ^&^& print(t.decode(model.generate(i['input_ids'], max_new_tokens=50, do_sample=True)[0], skip_special_tokens=True))"

πŸ’‘ Basic Usage (Python)

⚠️ Important: This model works best with greedy decoding for historical text generation. Complex sampling parameters may produce gibberish due to the historical nature of the training data.

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bahree/london-historical-llm")
model = AutoModelForCausalLM.from_pretrained("bahree/london-historical-llm")

if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

prompt = "Today I walked through the streets of London and witnessed"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    inputs["input_ids"],
    max_new_tokens=50,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    top_k=30,
    repetition_penalty=1.25,
    no_repeat_ngram_size=4,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
    early_stopping=True,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
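
Given the note above, a greedy-decoding variant (do_sample=False) is the safest starting point; a minimal sketch reusing the tokenizer, model, and inputs from the snippet above:

outputs = model.generate(
    inputs["input_ids"],
    max_new_tokens=50,
    do_sample=False,              # greedy decoding
    repetition_penalty=1.25,      # optional: discourages loops
    no_repeat_ngram_size=4,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))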

🧰 Example Prompts

  • Tudor (1558): "On this day in 1558, Queen Mary has died and …"
  • Stuart (1666): "The Great Fire of London has consumed much of the city, and …"
  • Georgian/Victorian: "As I journeyed through the streets of London, I observed …"
  • London specifics: "Parliament sat in Westminster Hall …", "The Thames flowed dark and mysterious …"

πŸ› οΈ Training Details

  • Architecture: Custom GPT-2 (built from scratch)
  • Parameters: ~354M
  • Tokenizer: Custom historical tokenizer (~30k vocab) with London-specific and historical tokens
  • Data: Historical London corpus (1500-1850) with proper segmentation
  • Steps: 60,000+ steps (extended training for better convergence)
  • Final Training Loss: ~2.78 (excellent convergence)
  • Final Validation Loss: ~3.62 (good generalization)
  • Training Time: 13+ hours
  • Hardware: 2Γ— GPU training with Distributed Data Parallel
  • Training Method: Trained from scratch - not fine-tuned
  • Context Length: 1024 tokens (optimized for historical text segments)
  • Status: βœ… Successfully published and tested - ready for production use

⚠️ Troubleshooting

  • ImportError: AutoModelForCausalLM requires the PyTorch library β†’ Install PyTorch with the correct accelerator variant (see CPU/CUDA/ROCm above or use the official selector).

  • AMD GPU not used β†’ Ensure you installed a ROCm build and you're on Linux (pip install ... --index-url https://download.pytorch.org/whl/rocmX.Y). Verify with torch.cuda.is_available() and check the device name. ROCm wheels are Linux-only.

  • Running out of VRAM β†’ Try smaller batch/sequence lengths, or load with device_map="auto" via πŸ€— Accelerate to offload layers to CPU/disk.

  • Gibberish output with historical text β†’ Use greedy decoding (do_sample=False) and avoid complex sampling parameters. This model works best with simple generation settings due to the historical nature of the training data.


πŸ“š Citation

If you use this model, please cite:

@misc{london-historical-llm,
  title   = {London Historical LLM: A Custom GPT-2 for Historical Text Generation},
  author  = {Amit Bahree},
  year    = {2025},
  url     = {https://huggingface.co/bahree/london-historical-llm}
}

Repository

The complete source code, training scripts, and documentation for this model are available on GitHub:

πŸ”— https://github.com/bahree/helloLondon

This repository includes:

  • Complete data collection pipeline for 1500-1850 historical English
  • Custom tokenizer optimized for historical text
  • Training infrastructure with GPU optimization
  • Evaluation and deployment tools
  • Comprehensive documentation and examples

Quick Start with Repository

git clone https://github.com/bahree/helloLondon.git
cd helloLondon
python 06_inference/test_published_models.py --model_type regular

🧾 License

MIT (see LICENSE in repo).
