
Editorial Policy

How Coda One produces content, where AI fits in our workflow, and the rules our editors follow — including when we cover our own competitors.

Effective date
16 April 2026
Last updated
16 April 2026
Version
1.0
Responsible editor
Daniel Park

1. Why this policy exists

Coda One publishes reviews, comparisons, tutorials, and guides about AI writing tools, AI detection, and adjacent software. Because we also build AI tools ourselves, and because some of the tools we cover are direct competitors or commercial partners, we owe readers a transparent account of how our content is produced and what could bias it.

This policy is also our public commitment to Google's E-E-A-T standard (Experience, Expertise, Authoritativeness, Trustworthiness). Everything below is operational, not aspirational — our editors are expected to follow it, and reviewers cite it when rejecting drafts.

One rule above all others: if you can't verify a claim from a primary source, you don't publish it. This applies equally to AI-generated drafts and human-written ones.

2. Who writes our content

The Coda One Editorial Team produces content. Named researchers and reviewers currently include:

  • Mei Zhou — Senior AI Writing Researcher. Background in computational linguistics; hands-on tester for Humanizer and Grammar coverage.
  • Daniel Park — Lead Editor, Tools & Comparisons. Responsible for comparison pages and competitor analyses.
  • Priya Nair — Research Analyst, Education & Emerging Markets. Leads coverage of academic-use scenarios and non-UK regional angles.

Every article of substance (more than 600 words, or anything making a product claim) carries a named author byline. A reviewer byline appears on pages where a second pair of eyes is required before publication.

Short utilities pages may be bylined "Coda One Editorial Team" collectively. Any such attribution means the team has reviewed and owns the content, not that no human wrote it.

Author profiles with credentials, links to published work, and contactable email aliases are maintained at codaone.ai/authors.

3. How AI is used in our content production

We build AI tools. We also use AI tools in our editorial workflow — honestly and transparently. The table below sets out where AI is allowed, where it is not, and the rule that applies at each stage.

  • Research and background reading. AI allowed: yes. AI may summarise sources, but the named author still reads every cited primary source.
  • First-draft writing. AI allowed: scaffolding only. A human author rewrites every paragraph; no AI-drafted sentence ships unchanged.
  • Factual claims. AI allowed: no. Every benchmark, price, date, statistic, or quote is confirmed against a primary source by a human.
  • Hands-on tool testing. Human only. We pay for paid tiers where necessary. AI cannot "pretend" to have used a tool; if we didn't test it, we say so.
  • Headlines, meta descriptions, and alt text. AI allowed: yes, with human approval before publish.
  • Final review. Human only. A second human editor signs off on every publication.

Our rule of thumb: AI is a research assistant, not a reporter. A reader who asks "did a human actually try this?" must always be able to get "yes" as an answer for any hands-on claim on our site.

4. Fact-checking

Every article with factual claims passes through a three-step check before publication:

  1. Author self-check. The author lists every factual claim and the primary source that supports it.
  2. Reviewer cross-check. A second editor opens the primary source and confirms the claim. We do not cite third-party summaries in place of primary sources unless the primary is inaccessible.
  3. Freshness check at publication. Pricing, feature lists, and dates are confirmed within 14 days of the go-live date.

Benchmarks (for example, "AI Humanizer evades Turnitin at X%") are run by a named human tester on the date cited, with methodology documented. We do not reuse third-party benchmark numbers without running our own, or without explicitly attributing the external study.

When reviewing AI-drafted content, we fact-check in this order (highest risk first): dates, statistics, person names, product features, pricing, quotes, URLs. Every one of these categories is something AI models routinely get wrong.

5. Corrections policy

We make mistakes. When we do, we handle them openly:

  • Minor corrections (typos, broken links, small formatting fixes): corrected without a correction notice.
  • Substantive corrections (any change to a factual claim, price, date, benchmark, or quote): a dated correction note is appended at the foot of the article describing what was wrong and what it now says. The URL is preserved; we do not delete-and-repost.
  • Major corrections (a claim that significantly misled readers): the correction note is also placed at the top of the article for at least 30 days. For evergreen pieces, we email subscribers who engaged with the original.

Corrections are logged internally with article URL, date, original text, corrected text, and responsible editor. The log is audited during quarterly editorial reviews.

Readers can flag an error by emailing [email protected] or using the "Report a correction" link in each article's footer. We acknowledge receipt within 2 working days and resolve within 10.

6. Sources and citation

  • Every factual claim links to or names a primary source — vendor documentation, company filings, peer-reviewed research, government data, or on-the-record statements.
  • Where a secondary source is used, it is named and the reader is told it is secondary.
  • Vendor marketing pages are labelled as vendor sources, not neutral sources.
  • Quotes are verbatim, dated, and attributed. We do not paraphrase quotes and present them as direct speech.
  • Screenshots are dated in the caption and retaken at least annually for evergreen content.

7. How we evaluate tools — including our competitors

Coda One builds humanizer, detector, and grammar tools, which means we cover a space that includes our direct competitors. The rules below apply to every tool we review, ours or otherwise.

  • Same methodology across all tools. Every comparison page documents its test set, inputs, scoring rubric, and date. We run our own tools through the same tests and publish the results whether we win or lose.
  • No "alternatives to Coda One" pages. We do not author pages designed to steer traffic back to ourselves. Our comparison pages cover competitors on the merits.
  • Negative findings about Coda One are published. If a competitor's tool beats ours on a specific benchmark, we say so in the comparison, date it, and link back to our tool page where improvement work is tracked.
  • Vendor claim rights exist and are labelled. Tool vendors may claim their listing to correct factual errors; claims do not grant control over reviews, ratings, or rankings.

This is a deliberate departure from most in-house media operations, which protect the house. We don't — because publishing where our own products lose is the single biggest trust signal we can offer readers.

8. Conflicts of interest

Coda Web3 Creative Ltd builds and sells AI writing tools. That is a structural conflict of interest when we write about the same category. We manage it in three ways:

  1. Full disclosure of Coda One's commercial interest on every comparison page that includes a Coda One product.
  2. Self-critical coverage is required. A Coda One tool appearing in a comparison must also have its weaknesses documented in that comparison — the same standard we apply to competitors.
  3. Individual disclosures. If a named author has a personal stake (prior employment, equity, family member at a covered vendor), it is disclosed in the byline block for that article. Authors may recuse themselves.

We do not take equity, share-for-coverage arrangements, or undisclosed retainers from any vendor we cover.

9. AI content labelling for readers

We label content for readers when a piece is produced with unusually high AI involvement:

  • Articles meeting our standard rule (Section 3) are not labelled — human authorship with AI assistance is our baseline, and applies to most pages.
  • Articles where AI wrote a substantial portion that we elected to keep (rare; mostly programmatic pages for pure utility content) are labelled "AI-assisted — human-reviewed" at the foot of the article.
  • Pure AI-generated pages with only automated review are not published. If this ever changes, it will be disclosed before any such page goes live.

10. Author byline attribution

Bylines are how we assign accountability. Every editorial piece carries one, following the rules below.

  • Every blog post, long-form comparison, and long-form glossary entry receives a named byline with the author's role and a one-line credential block.
  • Collective bylines ("Coda One Editorial Team") are used only on short utility pages where multiple editors contributed and no single primary author is appropriate.
  • Reviewer bylines appear on any piece where a second editor signed off before publication. For factual comparisons, benchmarks, and competitor coverage, the reviewer byline is mandatory.
  • Legal and policy pages (About, Terms, Privacy, this Editorial Policy, Refund Policy, Cookie Policy) are attributed to Coda Web3 Creative Ltd, not to an individual author.
  • Author profiles at /authors list each author's remit, credentials, and published work. An author's remit is narrow on purpose — we do not have one generalist writing across unrelated topic areas.

What we do not do

  • We do not fabricate credentials. Author bios describe real expertise and real responsibilities.
  • We do not use an author outside their remit to pad byline counts.
  • We do not ship content under a name that has not actually reviewed it.

11. Governance, appeals, and contact

  • Responsible editor. Daniel Park, Lead Editor — [email protected].
  • Escalation path. Complaints about editorial decisions, or disputes about a correction, can be escalated to the CEO at [email protected]. The CEO will respond within 10 working days.
  • Review cycle. This policy is reviewed at least annually, or whenever a material change to AI-assistance practice occurs.
