AI writing shouldn’t sound like AI. In this episode of AI Native, we sit down with Aleksandr Lashkov, co‑founder of Linguix, to unpack seven years of building grammar tools, from rule‑based systems to LLM‑powered assistants. We dig into why delivery beats model choice (hello, browser extensions), how “humanizer” features reduce AI tells, and where AI helps or harms learning. Aleksandr shares hard‑won product lessons, what changed after ChatGPT, and practical advice for builders weighing open‑source models vs. APIs and the real costs of data, evals, and hiring.
What you’ll learn
• Why delivery (browser extensions, in‑context help) matters more than model choice
• How “humanizer” features reduce common AI tells, and the etiquette emerging around them
• Where AI helps or harms learning: hints over answers and preserving critical thinking
• Why grammar tools didn’t die after ChatGPT, and which features took off
• How to weigh open‑source models vs. APIs, plus the real costs of data, evals, and hiring
Chapters (YouTube)
00:00 – The problem with AI‑sounding writing
00:45 – Meet Aleksandr Lashkov & the early Linguix journey
02:30 – Who uses grammar tools (native vs. non‑native)
05:20 – From rules to LLMs: the 3‑layer stack
07:45 – Post‑ChatGPT: why grammar tools didn’t die
10:30 – Delivery beats model choice (extensions, in‑context help)
12:40 – Humanizer: removing AI tells & emerging etiquette
15:20 – AI in education: hints over answers, critical thinking
18:40 – Why “writing coach” flopped at work
21:30 – Simplifier vs. paraphraser: usage hockey stick
24:05 – Two educator camps & using analytics for support
26:50 – The future: AI everywhere, natural language as the new UI
29:30 – Build vs. buy: open source, data costs, and evals
33:10 – What Aleksandr would do differently today
36:20 – Open‑source parity & getting started
38:30 – Wrap
Links & mentions
• Sponsor: AIorNot.com — detect whether text is human or AI‑generated.
• Guest: Aleksandr Lashkov — co‑founder, Linguix (AI writing assistant).