🏆 BANNER PROJECT · AI-ASSISTED DOCUMENTATION

Khojant + AI: Building the Marathon System Help Portal
for Marathon Insurance Software

How Jeff Hojka used Claude to deliver a complete, searchable documentation portal in 10 days — a project that would have cost a professional team $100,000–$160,000+ and taken months.

Jeff Hojka
// FOUNDER · SOFTWARE ENGINEER · CLOUD ARCHITECT
View Full Portfolio →
ESTIMATED COST SAVINGS
$100,000 – $160,000+
CALENDAR TIME
10 Days
ITEMS DOCUMENTED
1,250+
10 Days to Complete
43 HTML Pages Built
553 Reports Documented
270 Form Templates
214 Letter / Email / SMS Templates
1,250+ Total Items

Executive Summary

In 10 days, Jeff Hojka — working as a single developer — used Claude (Anthropic's AI assistant) to design, write, and publish a fully navigable, searchable help portal for Marathon, a long-established insurance management software platform. The finished system comprises 43 HTML pages, covers more than 1,250 individually documented items, and rivals documentation that professional services teams typically quote at $100,000–$160,000+ and deliver over several months.

This case study examines what was built, how it was built, the realistic cost of doing it without AI, and what Jeff's approach reveals about the emerging skill of directing AI to achieve professional-grade outcomes.

About the Builder: Jeff Hojka

Founder of Khojant LLC and architect of The Marathon System, Jeff has been building mission-critical software for independent insurance agencies since 1982 and pioneered cloud-hosted office management for the industry. His expertise spans custom application development (Clarion, C/C++/C#), cloud infrastructure (AWS, Hyper-V), and end-to-end system integrations. This project is a direct demonstration of that expertise applied through AI-augmented development.


Background: The Marathon System

Marathon is a mature insurance agency and finance management platform used by brokerages to run day-to-day operations — from client policy management and accounting to automated communications and regulatory reporting. Like many enterprise systems of its generation, Marathon's Reporting Engine is powerful but largely undocumented from a user perspective.

Staff who needed to understand which of the system's 553 built-in reports to run, how to customise them, or how to trigger automated client communications had no central reference to turn to. The underlying data — field names, form codes, report definitions, letter templates, and filter logic — existed only in raw CSV files and Clarion include files (.INC). That was the problem Jeff set out to solve.

What Was Built: The RGE Help System

The finished product is a self-contained, browser-based help portal — the Reporting Engine (RGE) documentation suite — consisting of 43 interconnected HTML files with a shared JavaScript/CSS framework, consistent navigation, and live in-browser search across every major category.

Portal Coverage

Section | Pages | Items Covered
Reports (Agency, Finance, Accounting, Processing, Setup, System) | 12 | 553 definitions
Form Templates Reference | 2 | 270 templates
Report Templates Reference | 2 | 234 templates
Process Report Templates | 1 | 22 templates
Data Form Templates | 1 | 190 definitions
Graphs (Agency & Finance) | 3 | 18 definitions
Exports (Agency & Finance) | 4 | 29 definitions
Letters, Emails & SMS | 4 | 214 templates
System Forms & Data Forms | 5 | 27 + 190 definitions
How-to Guides (configure, schedule, automate, send-to) | 6 | Step-by-step workflows
Live Search Pages (Reports, Forms, Letters) | 3 | 553 + 270 + 214 items searchable
Field Reference, Documentation Todo, Home & Guide | 4 | All reportable fields & relationships

Technical Depth

Beyond the page count, the system demonstrates genuine technical depth. Raw data was extracted from multiple CSV files — form codes, field definitions, form templates, report generation data, sort/range configurations, and letter/campaign tables — and transformed into structured, readable reference pages. The portal also incorporates:

  • Live JavaScript search across all 553 reports, 270 forms, and 214 letter/email/SMS templates
  • A complete Field Reference Guide covering all reportable files, sort keys, and data relationships
  • Step-by-step screenshot-based guides with over 60 annotated images for key workflows
  • Automated email/SMS campaign setup guides pairing form triggers with letter templates
  • Report scheduling documentation covering daily/weekly/monthly delivery via email, FTP/SFTP, and AWS S3
  • A shared rge.js / rge.css framework maintaining visual consistency across all 43 pages
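The live search described above amounts to client-side filtering over pre-built item indexes. A minimal sketch follows; the item shape (`code`, `name`, `description`) and the function name `searchItems` are illustrative assumptions, not the actual rge.js API.

```javascript
// Minimal sketch of an in-browser search over a pre-built item index.
// The fields (code, name, description) are assumed for illustration;
// the real rge.js may structure its index differently.

// Example index entries, as they might be emitted from the source CSVs.
const reportIndex = [
  { code: "AGY-001", name: "Client Aging Summary", description: "Outstanding balances by client" },
  { code: "FIN-014", name: "Premium Trust Reconciliation", description: "Trust account activity" },
];

// Case-insensitive substring match across every text field of an item.
function searchItems(items, query) {
  const q = query.trim().toLowerCase();
  if (!q) return items; // empty query shows the full list
  return items.filter((item) =>
    [item.code, item.name, item.description]
      .some((field) => field.toLowerCase().includes(q))
  );
}

// In the portal this would be wired to an <input> element's 'input' event
// and re-render a result list; here we just log the matching codes.
console.log(searchItems(reportIndex, "aging").map((r) => r.code));
```

Because all 553 + 270 + 214 items ship with the static pages, a simple linear filter like this runs instantly in the browser with no server or external search service.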

How It Was Built: The Human–AI Collaboration

The project ran across multiple Claude sessions over 10 days. Jeff acted as the project director — supplying domain knowledge, raw data files, screenshots, and feedback — while Claude acted as the builder, writing HTML, JavaScript, CSS, and documentation content based on Jeff's direction. Each session followed a consistent pattern:

1. Context Setting: Jeff opened a session and oriented Claude to the goal, e.g., "build the Letters & SMS reference page from this CSV data."
2. Data Analysis: Claude read and analysed the relevant CSV or .INC source files, extracting field names, codes, descriptions, and relationships.
3. Page Generation: Claude produced the HTML page (structure, content, navigation, styling) in one or more iterations based on feedback.
4. Refinement: Jeff reviewed the output, flagged corrections or additions, and Claude iterated until the page met the standard.
5. Integration: Each completed page was linked into the portal navigation, maintaining consistency with the shared design system.
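Steps 2 and 3 of the session pattern boil down to transforming CSV rows into HTML reference markup. The sketch below shows the shape of that transformation; the column names, the naive CSV parser, and the `rge-ref` class name are illustrative assumptions, not the actual Marathon data layout or build code.

```javascript
// Sketch of the data-analysis → page-generation step: CSV rows in,
// an HTML reference table out. Column names are hypothetical; real
// Marathon CSV layouts differ.

function parseCsv(text) {
  // Naive split on newlines and commas (no quoted-field handling) —
  // sufficient for a sketch, not for production CSV.
  const [header, ...rows] = text.trim().split("\n").map((line) => line.split(","));
  return rows.map((row) =>
    Object.fromEntries(header.map((col, i) => [col, row[i] ?? ""]))
  );
}

function escapeHtml(s) {
  // Escape the characters that would break HTML output.
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}

function renderReferenceTable(records) {
  const rows = records
    .map((r) => `  <tr><td>${escapeHtml(r.code)}</td><td>${escapeHtml(r.name)}</td></tr>`)
    .join("\n");
  return `<table class="rge-ref">\n${rows}\n</table>`;
}

const csv = "code,name\nAGY-001,Client Aging Summary\nFIN-014,Premium Trust Reconciliation";
console.log(renderReferenceTable(parseCsv(csv)));
```

Generating pages from structured data this way is what made documenting 553 reports tractable in days rather than months: each record becomes a table row mechanically, and the human effort shifts to verifying accuracy rather than typing.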

Time & Cost Analysis

What It Actually Took

Jeff completed the project in 10 days. His personal time investment was primarily spent reviewing outputs, providing direction, supplying source data, and quality-checking results — estimated at 20–40 hours across the 10 days, with Claude handling execution.

Side-by-Side Comparison

Metric | ✓ With Claude | ✗ Without Claude
Total calendar time | 10 days | 3–6 months
Direct human hours | ~20–40 hrs (direction) | 600–900+ hrs (execution)
Roles required | 1 (Jeff) | Tech writer + web dev + BA
553 reports documented | Automated from CSV data | ~1 hr each = 553 hrs
270 forms documented | Automated from CSV data | ~30 min each = 135 hrs
214 letters/SMS documented | Automated from CSV data | ~30 min each = 107 hrs
43 HTML pages built | Generated by Claude | ~8 hrs each = 344 hrs
Search functionality | Built by Claude | 40–80 hrs developer time
Total estimated effort | ~20–40 hrs (Jeff's time) | ~1,200–1,400 hrs professional

Estimated Cost Without AI

Work Stream | Est. Hours | Est. Cost
Content research & writing (tech writer @ $85–120/hr) | 500–600 hrs | $50,000–$72,000
HTML/JS/CSS development (web developer @ $100–150/hr) | 300–400 hrs | $37,500–$60,000
BA / project coordination (@ $90–120/hr) | 100–150 hrs | $9,000–$18,000
Review, QA, and iteration | 100–150 hrs | $12,000–$18,000
TOTAL | 1,000–1,300 hrs | $108,500–$168,000

Assessment: Jeff's Ability to Direct AI

Using AI to produce results of this calibre is not simply a matter of typing prompts. It requires domain expertise, project discipline, and AI literacy working together. Jeff demonstrated each of these throughout the project.

What Jeff Did Well

1. Domain mastery translated into precise direction
Jeff's deep knowledge of the Marathon system was the foundation of the project's quality. He knew which CSV files contained the relevant data, how field codes mapped to user-facing concepts, what workflows mattered to end users, and what the documentation needed to say. Without that domain knowledge, no AI could have produced accurate, meaningful output.
2. Breaking a large problem into manageable sessions
Rather than attempting to build the entire system in one go, Jeff correctly decomposed the project into logical units — one page or section at a time — and built up the system incrementally across 10 days. This is a disciplined, professional approach to AI project management.
3. Supplying the right raw materials
Jeff identified and provided Claude with the source data that made the documentation possible: the CSV files, .INC include files, filter logic, screenshot images, and existing text. Good AI output depends on good inputs, and Jeff consistently provided the context Claude needed to generate accurate content.
4. Maintaining quality standards through feedback loops
Jeff did not simply accept first-draft outputs. He reviewed pages, identified gaps, requested corrections, and pushed for a consistent standard across the portal. This iterative quality control is what separates a functional AI-assisted project from a mediocre one.
5. Establishing and enforcing design consistency
The finished portal has a coherent visual identity — card-based navigation, consistent typography, colour-coded sections, and shared JavaScript/CSS. Achieving this across 43 pages required Jeff to direct Claude toward a unified design language and reject deviations. The result looks and feels like professional software documentation.
6. Knowing when to trust the AI and when to verify
Working with large datasets creates opportunities for subtle errors. Jeff's background in the Marathon system gave him the ability to spot when an output was wrong or incomplete — a critical capability that separates an expert-directed AI project from an unreviewed one.

Areas to Build On

Version control habit
Saving each session's outputs to a named folder will make it easier to track what changed and roll back if needed as the system grows.
Prompt templates per page type
A brief "prompt template" for each page type (report page, search page, guide page) will speed up future sessions and maintain consistency across new additions.
Periodic content reviews
As Marathon itself is updated, periodic reviews of earlier pages will catch drift and keep the documentation accurate over time.

Broader Implications

This project illustrates something important about where AI value is actually created in the workplace. The limiting factor was never Claude's ability to write HTML or extract data from CSVs. The limiting factor — and the source of the project's quality — was Jeff's knowledge of the Marathon system, his judgment about what users needed, and his ability to direct the AI with precision.

For software companies, insurance agencies, and any organisation running legacy systems with large amounts of undocumented institutional knowledge, this case study demonstrates a repeatable pattern: a single domain expert, equipped with the right AI tools and working systematically, can produce documentation assets that previously required entire professional services engagements. The economic implication is significant — not only in cost savings, but in the speed at which organisations can make complex systems accessible to the people who use them.

Conclusion

In 10 days, Jeff Hojka accomplished what a professional team would have taken 3–6 months and $100,000–$160,000+ to deliver. He did it by combining irreplaceable domain knowledge with effective AI direction — decomposing a large problem, supplying quality inputs, maintaining design standards, and iterating on outputs until they met the bar. The RGE help system now gives every Marathon user a searchable, navigable, professionally structured reference for a system that previously had none. That is a genuine, measurable, lasting contribution — and a compelling example of what becomes possible when the right person learns to work with AI effectively.

// ready.to.talk

Ready to Work Together?

AI-directed development delivered a $100K+ project in 10 days. Want to explore what it can do for you?