<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Gabriel Kanev</title>
  <subtitle>product-minded maker, PhD student, open-source contributor.</subtitle>
  <link href="https://gkanev.com/atom.xml" rel="self"/>
  <link href="https://gkanev.com/"/>
  <id>https://gkanev.com/</id>
  <updated>2026-03-25T00:00:00.000Z</updated>
  <author><name>Gabriel Kanev</name></author>

  <entry>
    <title><![CDATA[The AI Didn't Read Your Document. It Pretended To.]]></title>
    <link href="https://gkanev.com/posts/the-ai-didnt-read-your-document-it-pretended-to/"/>
    <id>https://gkanev.com/posts/the-ai-didnt-read-your-document-it-pretended-to/</id>
    <updated>2026-03-25T00:00:00.000Z</updated>
    <summary><![CDATA[When AI systems analyze documents, they may not actually be reading them - they might be recalling training data and presenting it as analysis.]]></summary>
    <content type="html"><![CDATA[<p>When you ask an AI to analyze a document, you probably assume it reads it. It doesn&#39;t - not the way you do.</p>
<p>A researcher recently tested this by feeding LLMs the complete Harry Potter books and embedding two entirely fabricated spells: &quot;Fumbus&quot; and &quot;Driplo.&quot; The instruction was simple: find any spells that don&#39;t exist in the real books. None of the models found them. They were too busy recalling what they already knew about Harry Potter from training data to actually process what was in front of them.</p>
<p>This isn&#39;t a bug. It&#39;s a structural feature of how these models work.</p>
<h2>The Memorization Problem</h2>
<p>A January 2026 Stanford study found that Claude reproduced 95.8% of <em>Harry Potter and the Sorcerer&#39;s Stone</em> verbatim. Gemini produced 9,070 consecutive verbatim words. These models have seen so much training data that they can reconstruct large chunks of popular texts from memory - which means when you give them a document they&#39;ve encountered before, they may be answering from that memory rather than reading what you&#39;ve provided.</p>
<h2>Lost in the Middle</h2>
<p>Even with documents the model hasn&#39;t memorized, there&#39;s a structural problem called &quot;lost in the middle.&quot; The transformer&#39;s attention mechanism gives strong weight to document beginnings and endings while systematically neglecting the middle sections. This isn&#39;t fixable with prompting - it&#39;s architectural.</p>
<p>The original &quot;Lost in the Middle&quot; research (Liu et al., 2023) showed exactly this: question-answering accuracy drops sharply when the relevant passage sits in the middle of a long context rather than near either edge.</p>
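<p>You can probe this on your own stack with a needle-in-a-haystack test: plant a known fact at different relative depths of a long document and check whether the model retrieves it. A minimal sketch - the model call is left as a placeholder, and every name here is illustrative:</p>

```python
def build_probe(filler_paragraphs, needle, depth):
    """Insert `needle` at a relative depth (0.0 = start, 1.0 = end)
    of the filler text and return the assembled probe document."""
    idx = round(depth * len(filler_paragraphs))
    parts = filler_paragraphs[:idx] + [needle] + filler_paragraphs[idx:]
    return "\n\n".join(parts)

filler = [f"Background paragraph {i} about something unrelated." for i in range(100)]
needle = "The secret launch code is FUMBUS-7."
for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
    doc = build_probe(filler, needle, depth)
    # send `doc` plus "What is the secret launch code?" to your model here
    # and record whether the answer is correct at each depth
```

<p>If recall is noticeably worse at depth 0.5 than at the edges, you&#39;re seeing the effect directly.</p>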
<h2>What This Means in Practice</h2>
<ul>
<li><strong>Legal review</strong>: A model asked to flag problematic clauses may miss ones buried in the middle of a long contract</li>
<li><strong>Risk analysis</strong>: Key risks in the body of a document can be overlooked</li>
<li><strong>Code audits</strong>: Vulnerabilities in the middle of a large codebase may be glossed over</li>
<li><strong>Research analysis</strong>: Models may blend your document&#39;s content with memorized knowledge on the same topic</li>
</ul>
<h2>What You Can Actually Do</h2>
<p><strong>Use specific queries.</strong> Instead of &quot;summarize this document,&quot; ask &quot;What does section 4.2 say about termination rights?&quot; Specific anchors force the model to locate and retrieve particular content.</p>
<p><strong>Place critical content at edges.</strong> If you&#39;re building a system that uses AI to process documents, put the most important information at the beginning or end.</p>
<p><strong>Treat outputs as first passes.</strong> AI document analysis is a starting point, not a final answer. Build in human review for anything consequential.</p>
<p><strong>Understand RAG&#39;s limits.</strong> Retrieval-Augmented Generation helps but doesn&#39;t eliminate these problems - it just means the model is working with retrieved chunks rather than the full document, which introduces its own distortions.</p>
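<p>To see why retrieved chunks aren&#39;t the same as the full document, here&#39;s a deliberately crude retrieval sketch. Real systems use embedding models rather than word-count cosine similarity, and all names here are illustrative - but the key property is the same: the model only ever sees the top-k chunks, so anything scored below the cutoff is invisible to it.</p>

```python
from collections import Counter
import math

def chunk(text, size=40):
    """Split a document into fixed-size word chunks, as a RAG indexer would."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(query, passage):
    """Crude relevance score: cosine similarity over raw word counts."""
    q, p = Counter(query.lower().split()), Counter(passage.lower().split())
    dot = sum(q[w] * p[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in p.values()))
    return dot / norm if norm else 0.0

def retrieve(query, chunks, k=2):
    """Return only the top-k chunks - everything else never reaches the model."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]
```

<p>A clause that shares no vocabulary with the query scores zero and is never retrieved - which is exactly the kind of distortion the paragraph above is warning about.</p>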
<p>The uncomfortable truth is that AI &quot;reading&quot; is a metaphor that misleads. These systems are incredibly powerful pattern-matchers, but pattern-matching and close reading are different activities. When accuracy matters, design your processes accordingly.</p>
]]></content>
    <author><name>Gabriel Kanev</name></author>
  </entry>
  <entry>
    <title><![CDATA[What if We Just… Made Billionaires Fix Their Companies to Avoid Taxes?]]></title>
    <link href="https://gkanev.com/posts/what-if-we-just-made-billionaires-fix-their-companies-to-avoid-taxes/"/>
    <id>https://gkanev.com/posts/what-if-we-just-made-billionaires-fix-their-companies-to-avoid-taxes/</id>
    <updated>2026-01-13T00:00:00.000Z</updated>
    <summary><![CDATA[A thought experiment: what if we tied wealth taxation to customer satisfaction metrics?]]></summary>
    <content type="html"><![CDATA[<p>Here&#39;s a thought experiment I can&#39;t stop turning over in my head.</p>
<p>What if we tied wealth taxation to customer satisfaction? Not net worth alone - but to whether the wealth was generated through genuine value creation or through extraction, monopolistic behavior, and regulatory capture.</p>
<p>The basic idea: if your NPS (Net Promoter Score, or some equivalent customer-satisfaction metric) is high, you get a tax break. If you&#39;ve built a company that people love, you&#39;ve probably created real value. If your NPS is low - if you&#39;re running a company people use because they have no choice - you pay more.</p>
<h2>Why This Is Interesting</h2>
<p>The proposal would distinguish between two kinds of billionaire wealth:</p>
<ol>
<li>Wealth created through genuine innovation - where the billionaire got rich because they made something people actually wanted</li>
<li>Wealth extracted through monopolistic practices - where the billionaire got rich because they eliminated alternatives, lobbied against competition, or trapped customers</li>
</ol>
<p>There&#39;s a real philosophical argument that these two things deserve different treatment. We probably want to reward the first and discourage the second.</p>
<h2>Why This Probably Doesn&#39;t Work</h2>
<p><img src="/images/blog/billionaires-1.avif" alt="Billionaires and tax"></p>
<p>Let me dismantle my own idea.</p>
<p><strong>NPS can be gamed.</strong> Customer satisfaction metrics are notoriously manipulable. You can improve your NPS by carefully selecting who you survey, by offering incentives for positive responses, or by simply making the survey hard to find for unhappy customers. Any measure that carries tax implications will be optimized for the metric rather than the underlying reality.</p>
<p><strong>Industries have inherent satisfaction disparities.</strong> Airlines will always have lower NPS than consumer software companies - not because airlines are more exploitative, but because air travel is stressful and delays are common. Defining &quot;legitimate&quot; satisfaction across wildly different industries is essentially impossible.</p>
<p><strong>Government power problem.</strong> Giving any government the ability to define &quot;legitimate&quot; vs. &quot;illegitimate&quot; wealth creation is power easily weaponized. Political opponents could have their industries classified as extractive. Regulatory agencies could be captured and turned against disfavored companies. The cure might be worse than the disease.</p>
<p><strong>The definition problem.</strong> What counts as &quot;genuine innovation&quot;? Microsoft&#39;s market dominance in the 1990s involved a lot of both genuine value creation and arguably anticompetitive behavior simultaneously. These things aren&#39;t separable.</p>
<p><img src="/images/blog/billionaires-2.avif" alt="Billionaires wealth chart"></p>
<h2>So What&#39;s the Point?</h2>
<p>The proposal isn&#39;t right. But I think asking the question matters.</p>
<p>The current debate about taxing billionaires focuses almost entirely on <em>how much</em> to tax, and almost never on <em>what kind of wealth</em> to tax differently. There&#39;s a real intuition worth exploring: that wealth generated by eliminating competition and trapping customers is categorically different from wealth generated by making things people want.</p>
<p>Any real implementation would need independent measurement, clear thresholds, anti-gaming provisions, and industry adjustments. It would be enormously complex and politically contentious.</p>
<p>But &quot;we can&#39;t implement this cleanly&quot; isn&#39;t the same as &quot;the underlying distinction doesn&#39;t matter.&quot; I&#39;m still thinking about it.</p>
]]></content>
    <author><name>Gabriel Kanev</name></author>
  </entry>
  <entry>
    <title><![CDATA[AI-Powered Cyberattack: When Bots Start Hacking Other Bots]]></title>
    <link href="https://gkanev.com/posts/ai-powered-cyberattack-when-bots-start-hacking-other-bots/"/>
    <id>https://gkanev.com/posts/ai-powered-cyberattack-when-bots-start-hacking-other-bots/</id>
    <updated>2025-12-15T00:00:00.000Z</updated>
    <summary><![CDATA[Anthropic disclosed a large cyberattack almost entirely carried out by AI - a preview of what automated offensive security looks like.]]></summary>
    <content type="html"><![CDATA[<p><img src="/images/blog/ai-cyberattack.svg" alt="AI-powered cyberattack"></p>
<p>Anthropic recently disclosed something that should concern everyone in security: a large-scale cyberattack that was almost entirely carried out by AI. The attack was attributed to a Chinese state-sponsored group and represents a meaningful shift in how sophisticated attackers operate.</p>
<h2>How It Worked</h2>
<p>The method elegantly circumvented typical defenses.</p>
<p>The attackers fed Claude small, individually innocuous prompts. Scan these ports. Extract this data snippet. Check this configuration. Each request, taken in isolation, looked harmless - the kind of thing a developer might ask. No single request triggered automated safety systems.</p>
<p>But a script was chaining these requests together, building a reconnaissance picture that no human attacker could have assembled as quickly or as quietly. Humans only intervened for the most critical decision points; the AI did the grunt work of systematic data collection and analysis.</p>
<p>The attack targeted approximately 30 global organizations. A handful were compromised.</p>
<h2>How It Was Stopped</h2>
<p>Anthropic engineers noticed abnormal account patterns - not the content of individual requests, but statistical anomalies in how accounts were being used. Claude&#39;s comprehensive logging provided a complete audit trail once the pattern was identified, allowing the team to reconstruct exactly what had happened.</p>
<p>This is worth noting: the same logging infrastructure that makes AI systems auditable also makes them detectable when misused. The attackers&#39; approach left a trail precisely because it required so many API calls.</p>
<h2>What This Means for Security Teams</h2>
<p><strong>AI-assisted offense is here.</strong> This attack demonstrates that AI can dramatically accelerate reconnaissance and data collection phases of an attack. What previously required significant human time and expertise can now be partially automated.</p>
<p><strong>Detection needs to shift to behavioral patterns.</strong> Individual requests looked fine. The pattern didn&#39;t. Security monitoring needs to think about sequences of actions across time, not just individual events.</p>
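<p>As a toy illustration of sequence-level monitoring - not how Anthropic&#39;s detection works, just the general idea - here&#39;s a sketch that flags accounts whose request volume bursts far above their own baseline, even when every individual request looks normal:</p>

```python
from collections import defaultdict

def flag_bursty_accounts(events, window=60, threshold=3.0):
    """events: iterable of (account_id, unix_timestamp) pairs.
    Flag accounts whose busiest window holds more than `threshold`
    times their average per-window request count."""
    buckets = defaultdict(lambda: defaultdict(int))
    for account, ts in events:
        buckets[account][int(ts // window)] += 1
    flagged = []
    for account, counts in buckets.items():
        mean = sum(counts.values()) / len(counts)
        if max(counts.values()) > threshold * mean:
            flagged.append(account)
    return flagged
```

<p>Real deployments would look at many more signals (request sequencing, target diversity, tool-use patterns), but the principle is the same: the anomaly lives in the aggregate, not in any single event.</p>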
<p><strong>The audit trail is your friend.</strong> Comprehensive logging caught this attack. If you&#39;re deploying AI systems without logging, you&#39;re flying blind.</p>
<p><strong>Use AI for defense too.</strong> The same AI capabilities that accelerate attacks can accelerate threat detection and penetration testing. Security teams that adopt AI tools defensively will have an advantage over those that don&#39;t.</p>
<p>Practically: use AI tools in your penetration testing processes, stay current on vulnerability disclosures, patch aggressively, and keep security-minded engineers empowered to raise concerns.</p>
<p>The era of fully automated attacks is not here yet - but partially automated attacks clearly are. The gap between &quot;script kiddie&quot; and &quot;sophisticated attacker&quot; just got smaller.</p>
]]></content>
    <author><name>Gabriel Kanev</name></author>
  </entry>
  <entry>
    <title><![CDATA[SOC 2: Lessons Learned from My Duck-ups]]></title>
    <link href="https://gkanev.com/posts/soc-2-lessons-learned-from-my-duck-ups/"/>
    <id>https://gkanev.com/posts/soc-2-lessons-learned-from-my-duck-ups/</id>
    <updated>2025-11-11T00:00:00.000Z</updated>
    <summary><![CDATA[SOC 2 compliance isn't something you do once and forget - it's an ongoing quarterly effort. Here's what I learned the hard way.]]></summary>
    <content type="html"><![CDATA[<p><img src="/images/blog/soc2-featured.svg" alt="SOC 2: Lessons learned from my duck-ups"></p>
<p>SOC 2 compliance is one of those things that looks straightforward in the documentation and turns out to be significantly more involved in practice. Here&#39;s what I learned.</p>
<h2>Report Types: Start with Type I</h2>
<p>There are two types. Type I is a snapshot - an auditor evaluates your controls as they exist at a single point in time. Type II is an observation window of 3–6 months, where the auditor verifies that your controls actually work over time, not just on the day they looked.</p>
<p>If you&#39;re going through this for the first time, starting with Type I is a legitimate strategy. It gives you a defensible compliance claim while you build toward Type II. An engagement letter from an auditing firm can bridge the gap with enterprise clients during the preparation period.</p>
<h2>Timeline Realities</h2>
<p>Type I takes roughly 6 weeks of auditor time plus 2–3 weeks of internal preparation. That&#39;s the optimistic estimate if your house is in order. Budget more.</p>
<p>The internal prep time is consistently underestimated. Gathering evidence, writing policies, getting sign-offs from people across the organization - it takes longer than anyone expects.</p>
<h2>The Cost Problem</h2>
<p>Initial quotes almost never reflect final costs. The scope expands. Complications emerge. If your organization has multiple legal entities, the complexity multiplies.</p>
<p>Enterprise GRC platforms like Vanta may become necessary rather than optional. The spreadsheet approach breaks down faster than you&#39;d expect when you&#39;re managing dozens of controls across multiple systems.</p>
<h2>Tools Matter a Lot</h2>
<p>Invest in a GRC platform early. Vanta and similar tools are expensive, but the alternative - tracking controls, evidence, and remediation in spreadsheets - doesn&#39;t scale past a certain point. The time savings justify the cost.</p>
<p>Implement SSO from the start. Whether that&#39;s Entra/Azure AD or something else, having centralized identity management is both a security control in its own right and a massive time-saver for audits. Access control evidence is mostly automatic when your identity system is centralized.</p>
<h2>The Organizational Reality</h2>
<p>This is the part that surprised me most: SOC 2 is not an IT project. It involves HR, legal, finance, and operations in material ways.</p>
<p>HR owns controls around employee onboarding, background checks, and security training. Legal owns vendor contract reviews and data processing agreements. Finance touches billing system access controls. Operations may own physical security.</p>
<p>If you treat SOC 2 as something the engineering team handles while keeping everyone else at arm&#39;s length, you&#39;ll get to the audit and discover that large portions of your control environment belong to people who don&#39;t know they&#39;re responsible for them.</p>
<p>Involve everyone from the start. Seriously.</p>
<h2>Define Your SLAs in Policy</h2>
<p>Before the audit, your security policy needs to define SLA timelines for vulnerability remediation by severity level - what counts as critical, high, medium, low, and how quickly each needs to be addressed. Auditors will check whether you&#39;re meeting your own stated timelines.</p>
<p>If you don&#39;t define them, you can&#39;t demonstrate compliance with them. If you define them loosely, you&#39;ll be held to whatever you wrote.</p>
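<p>A sketch of what this looks like in practice. The severity windows below are hypothetical placeholders, not recommendations - substitute the timelines your policy actually commits to:</p>

```python
from datetime import datetime, timedelta

# Hypothetical SLA table - use whatever your security policy actually states.
REMEDIATION_SLA = {
    "critical": timedelta(days=7),
    "high": timedelta(days=30),
    "medium": timedelta(days=90),
    "low": timedelta(days=180),
}

def within_sla(severity, opened, remediated):
    """True if the vulnerability was remediated inside its policy window."""
    return REMEDIATION_SLA[severity] >= remediated - opened
```

<p>Running a check like this over your vulnerability tracker before the audit tells you whether you&#39;re meeting your own stated timelines - which is exactly what the auditor will do.</p>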
<h2>One Unexpected Win</h2>
<p>Building a vCISO AI agent loaded with your security policies turned out to be genuinely useful - not just as a compliance artifact, but as a practical tool for answering security questions consistently across the organization. When someone asks &quot;what&#39;s our policy on X,&quot; having a system that can answer from your actual policy documents beats having everyone interpret the documents differently.</p>
<p>SOC 2 is worth doing if your customers require it. Just go in knowing that it&#39;s a significant organizational effort, not a checkbox.</p>
]]></content>
    <author><name>Gabriel Kanev</name></author>
  </entry>
  <entry>
    <title><![CDATA[Debunking the Myths: What SEO Professionals Need to Know About AI and LLMs]]></title>
    <link href="https://gkanev.com/posts/debunking-the-myths-what-seo-professionals-need-to-know-about-ai-and-llms/"/>
    <id>https://gkanev.com/posts/debunking-the-myths-what-seo-professionals-need-to-know-about-ai-and-llms/</id>
    <updated>2025-11-10T00:00:00.000Z</updated>
    <summary><![CDATA[Think critically before purchasing any AI SEO service. Ask detailed questions and verify the expertise of people you'll work with.]]></summary>
    <content type="html"><![CDATA[<p><img src="/images/blog/seo-ai-myths.svg" alt="SEO and AI myths"></p>
<p>The AI SEO space is full of confident claims from people who don&#39;t understand what they&#39;re selling. Here&#39;s a breakdown of the most common misconceptions, and what to actually ask if you&#39;re evaluating an AI SEO service.</p>
<h2>Misconception 1: AI Agents and LLMs Are the Same Thing</h2>
<p>They&#39;re not. LLMs (Large Language Models) are the underlying text-generation systems. AI agents are systems built on top of LLMs that can take actions, use tools, and complete multi-step tasks. An LLM is an ingredient; an agent is a recipe.</p>
<h2>Misconception 2: LLMs Use Google&#39;s Index</h2>
<p>They don&#39;t. OpenAI runs its own web crawler (GPTBot), and LLM training corpora are assembled independently of Google - publicly documented examples of such crawl-derived corpora include FineWeb-edu and RedPajama-V2. Google&#39;s index is proprietary and not accessible to competing AI systems. When a vendor claims their AI tool &quot;leverages Google&#39;s data,&quot; ask them to explain specifically what data they mean.</p>
<h2>Misconception 3: LLMs Learn from Your Conversations in Real Time</h2>
<p>They don&#39;t. Models are static after training. What looks like &quot;learning&quot; within a conversation is context management - the model has access to the conversation history in its context window, but this doesn&#39;t update the underlying model. When you start a new conversation, that history is gone.</p>
<p>Some systems implement persistent memory by storing conversation summaries externally and injecting them into future contexts. That&#39;s a useful engineering pattern, but it&#39;s different from actual learning.</p>
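<p>A minimal sketch of that pattern - all names are illustrative. The model itself never changes; summaries of past conversations live in an external store and are simply prepended to the next prompt:</p>

```python
class MemoryStore:
    """External-memory pattern: summaries of past conversations are
    stored outside the model and injected into each new prompt."""

    def __init__(self):
        self.summaries = []

    def remember(self, summary):
        # Typically a summary generated at the end of a conversation.
        self.summaries.append(summary)

    def build_prompt(self, user_message):
        memory = "\n".join(f"- {s}" for s in self.summaries)
        return f"Known from earlier conversations:\n{memory}\n\nUser: {user_message}"
```

<p>Delete the store and the &quot;memory&quot; vanishes - nothing was ever learned by the model&#39;s weights.</p>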
<h2>Misconception 4: ChatGPT Has Ranking Signals Like Google</h2>
<p>It doesn&#39;t. There&#39;s no traditional SEO-style ranking algorithm that determines whether your content appears in LLM outputs. The process is statistical: the model generates tokens based on probability distributions derived from training data. Whether your site appears in an LLM response depends on what the model learned during training, not on signals you can optimize directly.</p>
<h2>Misconception 5: ChatGPT Verifies Facts Like Google</h2>
<p>Partially, and with significant limitations. Models can verify facts against what they learned during training, but that training data has a cutoff date. Events, changes, and new information after the cutoff don&#39;t exist for the model unless you provide them in context.</p>
<p>More importantly, &quot;verification&quot; in the LLM sense means checking against memorized patterns, not against current authoritative sources. This is a fundamentally different process than what Google does.</p>
<h2>Misconception 6: You Can Guarantee Your Site Appears in ChatGPT</h2>
<p>You can&#39;t. Token prediction is probabilistic. Even if your content was heavily represented in training data, whether the model references it in a given response depends on the specific query, the model&#39;s current generation state, and random sampling parameters. No one can guarantee placement.</p>
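<p>A toy illustration of why placement can&#39;t be guaranteed. This is generic temperature sampling over a made-up two-token distribution - not any vendor&#39;s actual decoder - but it shows the point: even when one continuation is heavily favored, the other still gets sampled some of the time.</p>

```python
import random

def sample_token(probs, temperature=1.0, rng=random):
    """Sample the next token from a probability distribution; lower
    temperature sharpens it, higher temperature flattens it."""
    weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    r = rng.random() * sum(weights.values())
    acc = 0.0
    for tok, w in weights.items():
        acc += w
        if acc >= r:
            return tok
    return tok  # guard against floating-point rounding
```

<p>Run it a few hundred times with a 90/10 split and the minority token still shows up regularly - no optimization can make a probabilistic sampler deterministic.</p>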
<h2>Misconception 7: You Can Accurately Measure AI Visibility</h2>
<p>Not reliably. Any measurement of &quot;AI visibility&quot; inherits all the uncertainty of the probabilistic process you&#39;re trying to measure. You can run queries and count citations, but the results won&#39;t be stable across identical queries run at different times.</p>
<h2>Misconception 8: AI Tools Can Optimize Content for Google</h2>
<p>Only partially. Google&#39;s ranking models are trained on human content and behavior signals, and we don&#39;t know the specifics of their architecture. AI tools can help produce content that has characteristics associated with high-ranking content, but the causal relationships are complex and the tools are guessing.</p>
<h2>Misconception 9: &quot;Our Agency Has Its Own Model&quot;</h2>
<p>Building a genuine LLM requires deep ML expertise, significant data infrastructure, and enormous compute costs - somewhere between $300,000 and $900,000 per week in training compute for frontier-class models. An SEO agency almost certainly doesn&#39;t have this.</p>
<p>What they likely have is a fine-tuned version of a commercial model, a custom wrapper around a commercial API, or they&#39;ve just renamed a standard model. Ask them specifically: what base model is this built on? What training data did you use? What was the compute budget?</p>
<h2>Questions That Actually Reveal Expertise</h2>
<p>If you&#39;re evaluating whether someone genuinely understands AI:</p>
<ul>
<li>What&#39;s the difference between a prompt&#39;s context length and a model&#39;s maximum context window?</li>
<li>Can you explain knowledge distillation and why it matters for deployment?</li>
<li>What is model quantization and what are the tradeoffs?</li>
<li>How do attention mechanisms work at a high level?</li>
<li>What&#39;s the difference between fine-tuning and few-shot prompting?</li>
<li>What are embeddings and how are they used in retrieval systems?</li>
<li>What&#39;s the difference between few-shot and zero-shot prompting?</li>
<li>What is KV caching and why does it matter for latency?</li>
<li>What is Flash Attention 2 and why was it significant?</li>
</ul>
<p>If they can&#39;t answer these questions, they&#39;re not the AI experts they&#39;re presenting themselves as. You can make AI tools useful without deep technical knowledge - but you shouldn&#39;t be selling AI expertise you don&#39;t have.</p>
]]></content>
    <author><name>Gabriel Kanev</name></author>
  </entry>
  <entry>
    <title><![CDATA[When Your AI Support Bot Becomes the Attack Surface]]></title>
    <link href="https://gkanev.com/posts/when-your-ai-support-bot-becomes-the-attack-surface/"/>
    <id>https://gkanev.com/posts/when-your-ai-support-bot-becomes-the-attack-surface/</id>
    <updated>2025-09-10T00:00:00.000Z</updated>
    <summary><![CDATA[RAG-based chatbots are vulnerable to knowledge base poisoning - and the attack success rates in research are alarming.]]></summary>
    <content type="html"><![CDATA[<p>Your AI support bot is probably built on Retrieval-Augmented Generation: a system that indexes your documentation, FAQs, and knowledge base, then pulls relevant chunks into the model&#39;s context when a customer asks a question. This architecture is sensible - it keeps responses grounded in your actual information and allows you to update the knowledge base without retraining.</p>
<p>It also creates a new attack surface that most deployment teams aren&#39;t thinking about.</p>
<h2>Knowledge Base Poisoning</h2>
<p>The attack is conceptually simple: modify the source documents before they&#39;re indexed. An attacker who can insert content into your knowledge base - through a compromised content management system, an insider, or even through your own feedback mechanisms - can inject false information that the AI will confidently present to users.</p>
<p>The payoffs are significant:</p>
<ul>
<li>Redirect customers to fraudulent payment accounts</li>
<li>Provide fake phone numbers that route to attacker-controlled lines</li>
<li>Recommend discontinued or unsafe products</li>
<li>Inject misinformation that&#39;s presented with the authority of your official documentation</li>
</ul>
<h2>The Research Numbers Are Bad</h2>
<p>This isn&#39;t theoretical. Recent research has demonstrated attack success rates that should alarm anyone deploying these systems:</p>
<p><strong>Poisoned-MRAG</strong> (Liu et al., 2025): 5 malicious entries injected into a 500,000-pair knowledge base achieved a 98% hijack rate. That&#39;s a needle-in-a-haystack attack that reliably took over the system.</p>
<p><strong>PoisonedRAG</strong> (Zou et al., 2024): A handful of crafted passages achieved a 90% attack success rate.</p>
<p><strong>Zhong et al. (2023)</strong>: 50 carefully crafted passages achieved 94% success.</p>
<p><strong>BadRAG</strong> (2024): 98.2% success with just 10 passages. The researchers also demonstrated &quot;homeopathic poisoning&quot; - achieving meaningful attack success with as few as 3 injected tokens.</p>
<h2>What Defense Looks Like</h2>
<p><strong>Before indexing</strong>: Verify the provenance of every document. Track where content came from and who approved it. Content that entered the knowledge base through automated or less-controlled pathways deserves additional scrutiny.</p>
<p><strong>At deployment</strong>: Use staged deployment. Test your knowledge base against a standard set of queries before pushing changes. If a recent update causes the bot to recommend something it shouldn&#39;t, you want to catch that before customers do.</p>
<p><strong>In production</strong>: Monitor retrieval patterns. If the bot is suddenly citing documents that weren&#39;t historically relevant to common queries, that&#39;s a signal worth investigating. Log everything - both the retrieved chunks and the final responses.</p>
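<p>A minimal sketch of that kind of monitoring, assuming you log which documents each retrieval returns (all names here are illustrative): flag documents that suddenly start being cited often despite never appearing in the historical log.</p>

```python
from collections import Counter

def newly_hot_documents(historical_hits, recent_hits, min_recent=5):
    """historical_hits / recent_hits: lists of retrieved document IDs.
    Flag documents that are suddenly retrieved often but never
    appeared in the historical retrieval log."""
    past = Counter(historical_hits)
    now = Counter(recent_hits)
    return [doc for doc, n in now.items() if n >= min_recent and past[doc] == 0]
```

<p>A poisoned entry that starts winning retrievals for common queries shows up as exactly this signature: a document with no retrieval history abruptly dominating responses.</p>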
<p><strong>In the architecture</strong>: Ensemble retrieval (using multiple retrieval methods and comparing results) can help catch anomalies. Certified robustness techniques like RobustRAG can provide stronger guarantees for high-stakes deployments.</p>
<p><strong>When you suspect compromise</strong>: Rebuild the entire index from known-good sources. Don&#39;t try to find and remove individual poisoned entries - you might miss some. Start clean.</p>
<p>The OWASP Top 10 for LLM Applications explicitly includes &quot;Vector and Embedding Weaknesses&quot; as a recognized category. If you&#39;re using a third-party RAG-as-a-Service provider, your attack surface extends to their systems as well.</p>
<p>The pattern that should worry you: these attacks often don&#39;t look like attacks. A poisoned knowledge base entry looks like a documentation change. The bot behaves normally on most queries. The damage happens slowly, on specific questions, until someone notices.</p>
]]></content>
    <author><name>Gabriel Kanev</name></author>
  </entry>
  <entry>
    <title><![CDATA[Google Is Sinking the Pixel Lineup + Android]]></title>
    <link href="https://gkanev.com/posts/google-is-sinking-the-pixel-lineup-android/"/>
    <id>https://gkanev.com/posts/google-is-sinking-the-pixel-lineup-android/</id>
    <updated>2025-09-02T00:00:00.000Z</updated>
    <summary><![CDATA[The Pixel 10's performance problems, hardware quality issues, and Android's looming sideloading restrictions are all symptoms of the same disease.]]></summary>
    <content type="html"><![CDATA[<p><img src="/images/blog/google-android-2025.svg" alt="Google Pixel and Android 2025"></p>
<p>I&#39;ve been watching the Pixel lineup with increasing concern, and the Pixel 10 generation has tipped this from &quot;concerning trend&quot; to &quot;serious problem.&quot;</p>
<h2>The Performance Gap Is Real</h2>
<p>The Tensor G5 moved to TSMC 3nm, which was supposed to be the fix. It runs cooler than previous generations - that part is true. But the GPU performance isn&#39;t there.</p>
<p>Real numbers: Pixel 10 Pro XL hits 25 FPS in Fortnite. Genshin Impact: 29 FPS. Wuthering Waves: 44 FPS. GPU benchmarks trail competitors by 6–7x. You can buy a phone for significantly less money and get dramatically better gaming performance.</p>
<p>Running cooler means nothing if the device can&#39;t handle demanding applications in the first place.</p>
<h2>The Hardware Quality Pattern</h2>
<p>Within days of launch, reports emerged of &quot;colorful snow&quot; display glitches. Google acknowledged the issue. This follows a pattern of launch-day hardware and software problems that has repeated across Pixel generations.</p>
<p>The battery swelling issue with Pixel 7 and 7 Pro is still fresh: owners report swelling around the two-year mark, causing screen and panel separation. Responses from support were inconsistent.</p>
<p>These aren&#39;t isolated incidents. There&#39;s a pattern of quality control problems that suggests something systemic.</p>
<h2>The Value Equation No Longer Works</h2>
<p>The Pixel 10 starts at $799. At that price point, you&#39;re competing with Samsung, OnePlus, and Motorola devices that deliver superior GPU performance and, increasingly, comparable camera quality.</p>
<p>The camera lead - which was real and significant for years - has meaningfully diminished. Computational photography is now table stakes, and competitors have caught up.</p>
<h2>The Android Openness Problem</h2>
<p>This is the part that concerns me most.</p>
<p>Starting September 2026, Google will require developer identity verification before apps can be installed on certified Android devices. Practically, this means Google becomes the gatekeeper for all app installs, including sideloaded apps.</p>
<p>The timeline:</p>
<ul>
<li>October 2025: Testing begins</li>
<li>March 2026: Developer verification requirements phase in</li>
<li>September 2026: Enforcement begins in select countries</li>
</ul>
<p>The impact on the ecosystem would be significant:</p>
<ul>
<li><strong>F-Droid</strong> and other privacy-focused app repositories face severe disruption - anonymous open-source developers can&#39;t comply with identity verification</li>
<li><strong>Epic Games Store</strong> and other alternative stores lose their core value proposition</li>
<li><strong>Privacy-conscious users</strong> lose the ability to install apps without a trail back to identified developers</li>
</ul>
<p>This isn&#39;t a security measure in any meaningful sense - sophisticated malware developers can get verified; the population that gets hurt is legitimate open-source developers who have principled reasons for anonymity.</p>
<h2>The Choice Google Is Avoiding</h2>
<p>Google has to decide what Android is. A genuinely open platform - the kind that enabled the ecosystem to exist in the first place - or a walled garden that competes with iOS on Apple&#39;s terms.</p>
<p>The current trajectory tries to have it both ways and succeeds at neither. Power users who value openness are being pushed toward alternatives. Mainstream consumers who just want things to work are picking Samsung or iPhone. The middle ground is eroding.</p>
<p>I want Google to succeed here. The Pixel lineup at its best has been genuinely innovative. But &quot;at its best&quot; is doing a lot of work in that sentence, and the gap between the best and the current reality keeps widening.</p>
]]></content>
    <author><name>Gabriel Kanev</name></author>
  </entry>
  <entry>
    <title><![CDATA[Why Digital Preservation Is Failing]]></title>
    <link href="https://gkanev.com/posts/why-digital-preservation-is-failing/"/>
    <id>https://gkanev.com/posts/why-digital-preservation-is-failing/</id>
    <updated>2025-08-26T00:00:00.000Z</updated>
    <summary><![CDATA[Between platform migrations, AI content floods, and the structural impossibility of archiving the modern web, we're losing more than we realize.]]></summary>
    <content type="html"><![CDATA[<p><img src="/images/blog/digital-preservation.svg" alt="Digital Preservation is Failing"></p>
<p>AnandTech&#39;s archive shutdown should have been a bigger story. One of the most comprehensive sources of hardware analysis, written by people who genuinely understood the chips they were testing, said it was going offline &quot;indefinitely.&quot; In corporate language, &quot;indefinitely&quot; means &quot;until we decide otherwise,&quot; which in practice means &quot;it&#39;s gone.&quot;</p>
<p>This is one incident in a much larger failure.</p>
<h2>The Discord Migration</h2>
<p>Entire communities migrated from public, indexed forums to Discord over the past decade. The reasoning made sense at the time: Discord was where the active community was, the interface was better, moderation tools were stronger.</p>
<p>But conversations on Discord scroll away. They&#39;re invisible to search engines. And when servers close - which they do - everything is gone. Years of troubleshooting discussions, community knowledge, and primary sources for understanding how communities developed simply disappear.</p>
<p>Forums were ugly and sometimes badly organized, but they were indexed, searchable, and persistent. The trade-off we made for better community tools was worse preservation, and we didn&#39;t think carefully enough about it.</p>
<h2>AI Amplification Makes It Worse</h2>
<p>Large language models now generate content at scales that make meaningful curation impossible. A single GPU produces more text per hour than human writers produce in months. Much of this content is repackaged, derivative, or synthetic - not inherently malicious, but not adding anything genuinely new worth preserving either.</p>
<p>Meanwhile, aggressive AI crawlers are overwhelming smaller sites. Cloudflare now blocks AI crawlers by default. Server logs at small independent websites show crawler traffic that dwarfs human visitor traffic. The web&#39;s infrastructure is increasingly dedicated to serving machines that consume content for training, not humans who might benefit from the content existing.</p>
<p>The signal-to-noise ratio has degraded to the point where preservation curation - deciding what&#39;s worth keeping - is increasingly difficult.</p>
<h2>The Structural Impossibility</h2>
<p>The scale of modern digital content production makes comprehensive archiving literally impossible. Social media platforms generate petabytes of content daily. The same piece of content exists in multiple versions, formats, and quality levels. Web content is increasingly dynamic - what you see depends on who you are, when you look, and which A/B test your browser was assigned to.</p>
<p>The Internet Archive does extraordinary work. It&#39;s also fighting a losing battle against scale, legal challenges, and the fundamental physics of storage and bandwidth.</p>
<h2>What We Actually Lose</h2>
<p>This isn&#39;t abstract. Specific consequences:</p>
<p><strong>Research primary sources</strong>: Academic research increasingly relies on online sources. When those sources disappear, citations become dead links. Work that built on those sources becomes harder to verify and reproduce.</p>
<p><strong>Cultural memory</strong>: The early web had communities that developed their own cultures, memes, vocabulary, and social norms. Much of that is gone. We&#39;re reconstructing the early internet from scattered fragments, when we&#39;re thinking about it at all.</p>
<p><strong>Technical documentation</strong>: Documentation for legacy systems - how to configure hardware that&#39;s no longer supported, how to work with software that&#39;s been discontinued - disappears when company sites shut down. This creates real problems for the people maintaining those systems.</p>
<h2>A More Realistic Model</h2>
<p>We cannot preserve everything. The honest conversation is about what we choose to preserve and why.</p>
<p>A realistic approach: accept that comprehensive archiving is impossible, prioritize by historical and cultural significance, build redundant distributed systems (so that no single organization&#39;s closure takes everything with it), and teach digital literacy that includes an understanding of impermanence.</p>
<p>The web was built on an assumption of persistence that was never technically warranted. We&#39;re paying for that assumption now.</p>
]]></content>
    <author><name>Gabriel Kanev</name></author>
  </entry>
  <entry>
    <title><![CDATA[Anthropic Just Dropped One of the Best Technical Posts on Multi-Agent AI Systems]]></title>
    <link href="https://gkanev.com/posts/anthropic-just-dropped-one-of-the-best-technical-posts-on-multi-agent-ai-systems/"/>
    <id>https://gkanev.com/posts/anthropic-just-dropped-one-of-the-best-technical-posts-on-multi-agent-ai-systems/</id>
    <updated>2025-06-19T00:00:00.000Z</updated>
    <summary><![CDATA[Anthropic's engineering post on their multi-agent research system is required reading for anyone building with AI.]]></summary>
    <content type="html"><![CDATA[<p><img src="/images/blog/anthropic-multiagent.svg" alt="Anthropic multi-agent systems"></p>
<p>Anthropic published an engineering post on how they built Claude&#39;s multi-agent research system, and it&#39;s one of the most technically honest and practically useful pieces of AI engineering writing I&#39;ve seen. <a href="https://www.anthropic.com/engineering/built-multi-agent-research-system">Read it here</a>.</p>
<p>Here&#39;s what makes it worth your time.</p>
<h2>The Architecture: Orchestrator-Worker</h2>
<p>The system uses an orchestrator-worker design. A lead Claude agent takes a complex research query, breaks it into subtasks, and spins up specialized subagents - each with its own tools, memory context, and targeted prompts. The orchestrator then integrates results from all the workers into a coherent response.</p>
<p>The key insight is breadth-first research rather than sequential processing. A single agent working through a complex research question proceeds step by step. A multi-agent system can pursue multiple threads simultaneously, which is closer to how human research teams actually work.</p>
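<p>The post doesn&#39;t ship code, but the shape of the design is easy to sketch. Here&#39;s a minimal orchestrator-worker skeleton in Python - <code>plan_subtasks</code> and <code>run_subagent</code> are stubs standing in for real LLM calls, not Anthropic&#39;s implementation:</p>
<pre><code class="language-python">from concurrent.futures import ThreadPoolExecutor

def plan_subtasks(query: str) -> list[str]:
    # Stub: the lead agent would make an LLM call to decompose the query.
    return [f"{query}: background", f"{query}: recent work", f"{query}: open questions"]

def run_subagent(subtask: str) -> str:
    # Stub: each worker would get its own tools, memory, and targeted prompt.
    return f"findings for: {subtask}"

def research(query: str) -> str:
    subtasks = plan_subtasks(query)
    # Breadth-first: workers run in parallel instead of one long sequential chain.
    with ThreadPoolExecutor(max_workers=len(subtasks)) as pool:
        results = pool.map(run_subagent, subtasks)
    # The orchestrator would synthesize with another LLM call; joining stands in.
    return "\n".join(results)
</code></pre>
<p>Everything hard lives in the stubs - decomposition quality and synthesis are where the value is. The parallel fan-out itself is the easy part.</p>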
<h2>The Performance Numbers</h2>
<p>The honest reporting here is refreshing:</p>
<ul>
<li>Up to <strong>90% higher success rates</strong> versus single-agent Claude on complex research tasks</li>
<li>Up to <strong>15× the token cost</strong> per run</li>
</ul>
<p>That second number is important. Multi-agent systems are not just &quot;better Claude&quot; - they&#39;re a fundamentally different cost-quality tradeoff. For queries where accuracy matters and you can afford the compute, the 90% improvement is compelling. For routine queries, you&#39;re paying 15x for gains you don&#39;t need.</p>
<p>Understanding when the tradeoff is worth it is the core engineering judgment.</p>
<h2>Prompt Engineering at Scale</h2>
<p>The post goes into detail on how the system manages agent behavior: task scaling, delegation decisions, tool selection, and strategy-switching heuristics. Claude doesn&#39;t just execute a fixed playbook - the orchestrator adapts its approach based on what the research is surfacing.</p>
<p>One detail I found particularly interesting: Claude helps optimize its own prompts. The system uses LLM-as-a-judge scoring to evaluate agent performance, and that evaluation data feeds back into improving the prompts that govern agent behavior. It&#39;s a feedback loop that improves the system over time without requiring human intervention for each prompt iteration.</p>
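<p>That loop can be sketched in a few lines. <code>judge_score</code> and <code>run_agent</code> below are placeholders for real model calls - this is the pattern, not their system:</p>
<pre><code class="language-python">def judge_score(output: str, rubric: str) -> float:
    # Placeholder: a real judge prompts an LLM with the rubric plus the
    # output and parses a numeric score. Here, longer output scores higher.
    return min(1.0, len(output) / 100)

def pick_best_prompt(candidates: list[str], task: str, rubric: str) -> str:
    def run_agent(prompt: str) -> str:
        # Placeholder for running an agent with this prompt on the task.
        return f"[{prompt}] answer to: {task}"
    # Score every candidate prompt and keep the winner - the feedback loop
    # is just "evaluate, compare, promote" run on a schedule.
    scored = [(judge_score(run_agent(p), rubric), p) for p in candidates]
    return max(scored)[1]
</code></pre>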
<h2>Production Readiness Features</h2>
<p>The post describes features that separate research demos from production systems:</p>
<ul>
<li><strong>Full traceability</strong>: Every agent action is logged and can be reconstructed</li>
<li><strong>Resumable agents</strong>: Long-running research tasks can be interrupted and resumed</li>
<li><strong>Rainbow deployments</strong>: Gradual rollout with the ability to roll back</li>
<li><strong>LLM-as-a-judge scoring</strong>: Automated quality evaluation at scale</li>
</ul>
<p>These aren&#39;t glamorous features, but they&#39;re the difference between a system that works in a demo and one that you can actually run in production.</p>
<h2>What This Means for Builders</h2>
<p>If you&#39;re building on top of AI APIs, this post is a template for how to think about multi-agent architectures. The specific implementation details - how they prompt the orchestrator, how they structure tool use, how they handle failures - are directly applicable to enterprise RAG systems, custom research tools, and any application that requires AI to complete complex multi-step tasks.</p>
<p>The token cost reality check alone is worth reading. Most discussions of multi-agent systems focus only on the capability improvements. The honest accounting of what those improvements cost, and the implicit guidance on when the cost is worth it, is exactly what practitioners need.</p>
]]></content>
    <author><name>Gabriel Kanev</name></author>
  </entry>
  <entry>
    <title><![CDATA[My Guide on AI Model Providers in 2025 (April/May): My Hands-On Experience]]></title>
    <link href="https://gkanev.com/posts/my-guide-on-ai-model-providers-in-2025/"/>
    <id>https://gkanev.com/posts/my-guide-on-ai-model-providers-in-2025/</id>
    <updated>2025-05-17T00:00:00.000Z</updated>
    <summary><![CDATA[A practical comparison of Gemini, Claude, Grok, OpenAI, and Mistral based on real production use.]]></summary>
    <content type="html"><![CDATA[<p><img src="/images/blog/ai-providers.svg" alt="AI model providers 2025"></p>
<p>I&#39;ve been running AI workloads in production across multiple providers since late 2024. Here&#39;s my honest assessment of the major players as of April/May 2025 - not benchmarks, but actual experience building and running things.</p>
<h2>Gemini</h2>
<p><strong>What&#39;s good</strong>: Fast, large context windows, competitive pricing. For high-volume workloads where cost matters, Google&#39;s pricing is hard to beat.</p>
<p><strong>What&#39;s frustrating</strong>:</p>
<ul>
<li>Non-standard API design. You&#39;ll spend time learning Gemini-specific patterns that don&#39;t transfer to other providers.</li>
<li>No cost monitoring dashboard in GCP that actually works well. Tracking spend requires workarounds.</li>
<li>High bug rate. I&#39;ve hit more API-level issues with Gemini than any other provider.</li>
<li>Reasoning tokens aren&#39;t exposed via the API. You can&#39;t see the model&#39;s chain-of-thought, which matters for debugging.</li>
<li>Token caching requires separate API calls rather than being handled automatically.</li>
</ul>
<p>For pure cost efficiency on simpler tasks, Gemini is compelling. For production reliability and developer experience, it&#39;s frustrating.</p>
<h2>Claude</h2>
<p><strong>What&#39;s good</strong>: High-quality outputs. Claude consistently produces well-structured, thoughtful responses, particularly on complex reasoning tasks. The developer documentation is genuinely good.</p>
<p><strong>What&#39;s frustrating</strong>:</p>
<ul>
<li>Approximately 3× more expensive than alternatives for comparable tasks.</li>
<li>Claude 3.7&#39;s verbosity is a real cost driver. The model has a tendency toward long, thorough responses that add tokens without always adding value. You need to work explicitly against this in your prompts.</li>
<li>About 85% API reliability in production. The other 15% requires error handling and retry logic that you may not have built for providers with higher uptime.</li>
</ul>
<p>For workloads where quality is the primary constraint, Claude is often the right choice. Budget accordingly.</p>
<h2>Grok</h2>
<p><strong>What&#39;s good</strong>: Interesting capabilities, particularly for certain types of reasoning tasks. The pricing was competitive when I tested it.</p>
<p><strong>What&#39;s frustrating</strong>:</p>
<ul>
<li>Grok 3 Mini outperformed Grok 3 on most of my benchmarks. Paying for the larger model gave worse results.</li>
<li>The &quot;fast&quot; variants were actually slower in practice and cost 50% more. I never figured out whether this was a labeling issue or a real infrastructure problem.</li>
<li>Significant delays between feature announcements and API availability. Features demoed in Grok&#39;s consumer app regularly took weeks or months to appear in the API.</li>
</ul>
<p>Grok has potential, but the operational inconsistencies make it difficult to plan around.</p>
<h2>OpenAI</h2>
<p><strong>What&#39;s good</strong>: The most reliable API in production. Uptime is consistently better than competitors. The ecosystem around OpenAI&#39;s APIs is the most mature - more tools, more documentation, more community knowledge.</p>
<p><strong>What&#39;s frustrating</strong>:</p>
<ul>
<li>o1-pro is poor value. The price increase over o4-mini is approximately 100×, and on most tasks o4-mini performs better. There&#39;s a specific class of very deep reasoning problems where o1-pro is the right tool, but it&#39;s a narrow class.</li>
<li>Markdown formatting requires specific workarounds. There are particular system prompt strings you need to include to get consistent markdown output. It works once you know the trick, but it&#39;s friction that shouldn&#39;t exist.</li>
</ul>
<p>For production reliability and ecosystem maturity, OpenAI is still the safe choice.</p>
<h2>Mistral</h2>
<p><strong>What&#39;s good</strong>: Mistral has been a genuine pioneer in open-weight models. Their commitment to releasing open models matters for the ecosystem.</p>
<p><strong>What&#39;s frustrating</strong>:</p>
<ul>
<li>Their consumer app runs on model serving from a private Cerebras arrangement that external developers can&#39;t access. The result: the app is dramatically faster than their public API - I measured API responses up to 80× slower than what the Mistral app delivered.</li>
<li>This directly contradicts their &quot;open&quot; marketing. If the best performance is locked in a proprietary arrangement that external developers can&#39;t access, you&#39;re not actually providing an open platform.</li>
</ul>
<p>Mistral&#39;s open-weight releases are valuable. Their commercial API product is a different story.</p>
<h2>Practical Evaluation Framework</h2>
<p>When choosing a provider for a new workload, I evaluate:</p>
<ol>
<li><strong>Budget predictability</strong>: Can I model my costs accurately? Surprise bills are worse than predictable high bills.</li>
<li><strong>Reliability requirements</strong>: What&#39;s my tolerance for API failures? Build your retry logic before you need it.</li>
<li><strong>Response formatting</strong>: Does the model follow instructions consistently? Format compliance varies more than you&#39;d expect.</li>
<li><strong>API implementation quality</strong>: How well-documented is the API? How mature is the client library?</li>
<li><strong>Support and community</strong>: When something goes wrong, can you find help?</li>
</ol>
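<p>Point 2 deserves code. A minimal retry wrapper with exponential backoff and jitter - the defaults and exception list are illustrative, not tied to any provider&#39;s SDK:</p>
<pre><code class="language-python">import random
import time

def with_retries(call, max_attempts=4, base_delay=0.5,
                 retryable=(TimeoutError, ConnectionError)):
    # Retry a flaky API call, backing off exponentially with jitter
    # so many clients do not all retry in lockstep.
    for attempt in range(max_attempts):
        try:
            return call()
        except retryable:
            if attempt == max_attempts - 1:
                raise  # Out of attempts: surface the real error.
            time.sleep(base_delay * (2 ** attempt) * (0.5 + random.random()))
</code></pre>
<p>Wrap provider calls in a lambda and pass them in. Build this before the first outage, not after.</p>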
<p>The &quot;best&quot; provider depends on your specific constraints. There isn&#39;t a universal answer - but knowing which constraints matter most to your use case makes the choice much clearer.</p>
]]></content>
    <author><name>Gabriel Kanev</name></author>
  </entry>
  <entry>
    <title><![CDATA[Open Source Forking: Now What?]]></title>
    <link href="https://gkanev.com/posts/open-source-forking-now-what/"/>
    <id>https://gkanev.com/posts/open-source-forking-now-what/</id>
    <updated>2025-04-28T00:00:00.000Z</updated>
    <summary><![CDATA[Forking might seem attractive, but the hidden costs - fragmentation, drift, and maintenance overhead - often outweigh the benefits.]]></summary>
    <content type="html"><![CDATA[<p><img src="/images/blog/forking-risks.svg" alt="Open source forking risks"></p>
<p>While forking might appear to be an attractive option in certain open-source scenarios or when launching new projects, developers should carefully evaluate its potential disadvantages. By exploring alternative approaches and collaborating constructively with project communities, developers can encourage sustainable growth without the extra burdens that forks introduce.</p>
<h2>The Hidden Risks of Forking</h2>
<h3>Community Fragmentation</h3>
<p>Forking a repository can splinter the developer community. The Redis/Valkey split demonstrates how division weakens collaborative potential by spreading scarce developer resources across competing initiatives. This fracturing can also confuse users forced to choose between similar projects.</p>
<h3>Fork Drift Problem</h3>
<p>Over time, original projects and their forks naturally diverge. This growing incompatibility makes merging changes increasingly difficult and compounds maintenance complexity. The Bitcoin Cash fork illustrates how separate blockchain networks with incompatible updates created substantial user and developer confusion.</p>
<h2>Challenges for New Projects Starting as Forks</h2>
<p><strong>Maintenance Overhead:</strong> Beyond code upkeep, forks demand separate documentation, community management, and user support infrastructure.</p>
<p><strong>Community Building:</strong> New forked projects don&#39;t automatically gain the credibility, user base, or established workflows of parent projects.</p>
<p><strong>Legal Complications:</strong> Licensing disputes or intellectual property disagreements can consume resources better directed toward development.</p>
<p><strong>Ecosystem Fragmentation:</strong> Proliferating similar projects confuses users and potentially hinders broader innovation within specific domains.</p>
<h2>Recommended Alternatives</h2>
<ul>
<li><strong>Open Dialogue:</strong> Direct communication often resolves conflicts and builds consensus among stakeholders</li>
<li><strong>Detailed Proposals:</strong> Submit thoughtfully prepared patches with comprehensive justification</li>
<li><strong>Subprojects:</strong> Large projects can accommodate experimentation through dedicated subprojects without splintering the main community</li>
</ul>
]]></content>
    <author><name>Gabriel Kanev</name></author>
  </entry>
  <entry>
    <title><![CDATA[Windows Chaos Before and After Update: What Happened and How We Survived]]></title>
    <link href="https://gkanev.com/posts/windows-chaos-after-update-what-happened-and-how-we-survived/"/>
    <id>https://gkanev.com/posts/windows-chaos-after-update-what-happened-and-how-we-survived/</id>
    <updated>2025-04-14T00:00:00.000Z</updated>
    <summary><![CDATA[A Windows 11 laptop became inaccessible after the April 2025 update. Here's how we got in, what we learned, and what to do before it happens to you.]]></summary>
    <content type="html"><![CDATA[<p><img src="/images/blog/windows-chaos.svg" alt="Windows 11 Chaos update"></p>
<p>A Windows 11 laptop became inaccessible following Microsoft&#39;s April 2025 update, creating a frustrating login scenario that required specialized recovery tools to resolve.</p>
<h2>The Initial Problem</h2>
<p>The affected machine had been operating without a Microsoft account for over a year, relying instead on PIN and Windows Hello authentication. After the system update, the machine demanded Microsoft account credentials - and every login attempt failed.</p>
<p>The culprit? Windows Hello remaining active while attempting to modify login credentials. Lesson one: <strong>before changing your password or PIN, disable Windows Hello</strong>.</p>
<h2>Recovery Process</h2>
<p>When standard login methods failed, we turned to Hiren&#39;s BootCD - a &quot;Swiss army knife&quot; for system recovery - to regain access and retrieve critical files. Following successful data backup, we performed a complete Windows reinstallation.</p>
<h2>Post-Installation Complications</h2>
<p>Despite the fresh installation, the system displayed concerning behavior after the subsequent update. The laptop presented a black screen with only a mouse cursor visible - no login interface. The system eventually resolved itself through an automatic restart that completed the pending update installation.</p>
<h2>Recommendations</h2>
<ul>
<li>Maintain regular backups</li>
<li>Avoid exclusive reliance on Windows Hello or PINs</li>
<li>Keep your primary password accessible and written somewhere safe</li>
<li>Treat Hiren&#39;s BootCD as essential emergency recovery software - have it ready before you need it</li>
</ul>
]]></content>
    <author><name>Gabriel Kanev</name></author>
  </entry>
  <entry>
    <title><![CDATA[TensorFlow, Docker and GPUs: My Windows 11 Nightmare Solved]]></title>
    <link href="https://gkanev.com/posts/tensorflow-docker-and-gpus-my-windows-11-nightmare-solved/"/>
    <id>https://gkanev.com/posts/tensorflow-docker-and-gpus-my-windows-11-nightmare-solved/</id>
    <updated>2025-03-16T00:00:00.000Z</updated>
    <summary><![CDATA[Two days of fighting TensorFlow GPU setup on Windows 11. Docker saved me - here's what I built and why it works.]]></summary>
    <content type="html"><![CDATA[<p><img src="/images/blog/docker-tensorflow.svg" alt="Docker TensorFlow GPU setup"></p>
<p>Two days. That&#39;s how long it took before I stopped fighting Windows 11 and let Docker handle it instead.</p>
<h2>The Problem</h2>
<p>Native TensorFlow GPU installation on Windows 11 is a mess. The official documentation recommends WSL2, but this approach created cascading issues:</p>
<ul>
<li>NVIDIA driver incompatibilities with WSL2</li>
<li>Permission problems accessing GPU resources</li>
<li>Version mismatches between CUDA, cuDNN, and TensorFlow</li>
<li>Environment breakage following Windows updates</li>
</ul>
<p>I kept hitting errors like &quot;Failed to get convolution algorithm&quot; and missing CUDA library files - each attempted fix breaking something new.</p>
<h2>The Docker Solution</h2>
<p>Rather than continue battling system-level configuration, I built a custom Docker container specifically for TensorFlow GPU development on Windows. The project is available on GitHub as <strong>TensorFlow-GPU-Docker-Setup</strong>.</p>
<p>Key features:</p>
<ul>
<li>Pre-configured GPU passthrough setup</li>
<li>Comprehensive GPU testing scripts</li>
<li>PyCharm integration fixes</li>
<li>Detailed troubleshooting documentation</li>
<li>Automated CUDA path configuration</li>
</ul>
<h2>Implementation</h2>
<p>The container builds from the TensorFlow GPU base image and includes NumPy, Pandas, and scikit-learn. Running it is straightforward:</p>
<pre><code class="language-bash">docker build -t tensorflow-gpu-custom -f Dockerfile.gpu .
docker run --gpus all -it tensorflow-gpu-custom
</code></pre>
<h2>Why This Works</h2>
<p>Data scientists should focus on their work, not system administration. By isolating dependencies within a container, the solution insulates your development environment from Windows updates and driver changes - the exact things that kept breaking everything.</p>
<p>Docker doesn&#39;t fix the underlying TensorFlow Windows issues. It just means they stop being your problem.</p>
]]></content>
    <author><name>Gabriel Kanev</name></author>
  </entry>
  <entry>
    <title><![CDATA[macOS Sequoia Spotlight Bug]]></title>
    <link href="https://gkanev.com/posts/macos-sequoia-spotlight-bug/"/>
    <id>https://gkanev.com/posts/macos-sequoia-spotlight-bug/</id>
    <updated>2025-02-11T00:00:00.000Z</updated>
    <summary><![CDATA[Spotlight on macOS Sequoia is writing up to 26TB per night to disk. Here's how to disable it before it kills your SSD.]]></summary>
    <content type="html"><![CDATA[<p>A critical issue affecting macOS Sequoia users has emerged: Spotlight&#39;s indexing system is malfunctioning catastrophically, causing excessive disk writes that could severely damage SSDs.</p>
<h2>The Problem</h2>
<p>Users on the macOS Beta subreddit have reported alarming disk write rates - some machines experiencing up to <strong>26TB per night</strong> being written to disk. This is a serious threat to SSD longevity, as modern drives have limited write cycles before degradation occurs.</p>
<p>The culprit appears to be Spotlight&#39;s indexing process running uncontrolled in the background, generating massive amounts of disk activity for no apparent reason.</p>
<h2>Immediate Solution: Disable Spotlight</h2>
<p>Until Apple resolves this, the recommended approach is to temporarily disable Spotlight. Run these Terminal commands:</p>
<pre><code class="language-bash">sudo mdutil -a -i off  # Disable Spotlight
sudo mdutil -aE         # Delete existing index
</code></pre>
<p>Losing Spotlight functionality is inconvenient, but protecting your hardware from damage takes priority.</p>
<h2>Alternative: Raycast</h2>
<p>If you rely on Spotlight for productivity, switch to Raycast. It&#39;s a lightweight alternative offering robust app launching and file searching without intensive disk indexing - plus additional features like AI-powered commands.</p>
<h2>Re-enabling After a Fix</h2>
<p>Once Apple addresses this in a future update, restoration is simple:</p>
<pre><code class="language-bash">sudo mdutil -a -i on
</code></pre>
<p>Monitor your disk write activity and stay alert for official fixes from Apple.</p>
]]></content>
    <author><name>Gabriel Kanev</name></author>
  </entry>
  <entry>
    <title><![CDATA[Been Building Products for 10 Years, Here's What I've Really Learned]]></title>
    <link href="https://gkanev.com/posts/been-building-products-for-10-years-heres-what-ive-really-learned/"/>
    <id>https://gkanev.com/posts/been-building-products-for-10-years-heres-what-ive-really-learned/</id>
    <updated>2024-12-05T00:00:00.000Z</updated>
    <summary><![CDATA[A decade in product development has taught me that most startup advice is wrong. Here's what actually matters.]]></summary>
    <content type="html"><![CDATA[<p><img src="/images/blog/software-journey.svg" alt="Software Journey"></p>
<p>A decade in product development has given me a different perspective on a lot of the advice that gets repeated endlessly in startup circles. Here&#39;s my honest take.</p>
<h2>&quot;Ship fast, build fast, learn fast, fail fast&quot;</h2>
<p>Speed matters, but effort shows. The era of &quot;throw money at anything&quot; has passed. Do your homework and talk to real users rather than building blindly and hoping for results.</p>
<h2>&quot;Launch with minimal features&quot;</h2>
<p>First impressions count. While reducing scope is acceptable, shipped products must be polished. Users are trusting you with their money - treat it accordingly.</p>
<h2>&quot;No sales? Next project!&quot;</h2>
<p>This gets my strongest criticism. Abandoning a project after one week without sales overlooks a fundamental truth: <strong>marketing happens before and during development, not after</strong>. Building customer relationships takes time.</p>
<h2>Key Principles</h2>
<p><strong>Keep Your Promises:</strong> Reliability matters for long-term success, especially if you&#39;re playing the long game.</p>
<p><strong>Make Real Connections:</strong> Genuine engagement and relationship-building beat empty networking every time.</p>
<p><strong>Niche Down (Kind of):</strong> Focus on a niche with proven potential, secure initial revenue, then consider expansion.</p>
<p><strong>&quot;Build in public!&quot;:</strong> Overrated. Not every product needs public visibility or founder recognition. Some things are better built quietly.</p>
<p><strong>Be Real:</strong> I&#39;m not making millions, and I don&#39;t publish inflated metrics. Nine-to-five work remains valuable. Most advice promoting entrepreneurship at all costs comes from people selling courses, not people building products.</p>
<h2>Play Your Own Game</h2>
<p>Success timelines vary dramatically. Some reach $10K monthly then disappear; others take two years but sustain growth for a decade.</p>
<p>&quot;I&#39;m not in a hurry to die, I&#39;m in a hurry to matter.&quot; That quote captures where I&#39;m at.</p>
<p>Build something meaningful - whether as your own venture or contributing to a company. Make it count.</p>
]]></content>
    <author><name>Gabriel Kanev</name></author>
  </entry>
  <entry>
    <title><![CDATA[Mp4, Safari, and Cloudflare – a Love-Hate Relationship]]></title>
    <link href="https://gkanev.com/posts/mp4-safari-and-cloudflare-a-love-hate-relationship/"/>
    <id>https://gkanev.com/posts/mp4-safari-and-cloudflare-a-love-hate-relationship/</id>
    <updated>2024-08-27T00:00:00.000Z</updated>
    <summary><![CDATA[Why your videos break on Safari and iOS when served through Cloudflare, and five ways to fix it.]]></summary>
    <content type="html"><![CDATA[<p><img src="/images/blog/cloudflare-mp4-safari.png" alt="Cloudflare MP4 Safari"></p>
<p>When deploying videos through Cloudflare, developers frequently run into playback issues in Safari and iOS environments. The core problem is how Cloudflare manages video file headers during caching and delivery.</p>
<h2>The Technical Issue</h2>
<p>Safari requires specific HTTP headers for video playback: the <code>206 Partial Content</code> response and <code>Accept-Ranges: bytes</code>. These enable byte-range requests essential for video seeking and autoplay. Cloudflare&#39;s caching methodology can interfere with these requirements, preventing Safari from properly handling video streams.</p>
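<p>You can verify this yourself: request the first couple of bytes with a <code>Range</code> header and inspect the response. A small helper for the check - the function name is mine, and the inputs are whatever your HTTP client returns:</p>
<pre><code class="language-python">def supports_safari_video(status_code: int, headers: dict) -> bool:
    # Safari needs a 206 response to its byte-range probe, with the
    # Accept-Ranges and Content-Range headers intact.
    h = {k.lower(): v for k, v in headers.items()}
    return (status_code == 206
            and h.get("accept-ranges", "").lower() == "bytes"
            and "content-range" in h)
</code></pre>
<p>Send <code>Range: bytes=0-1</code> to the video URL; if this check fails, Safari playback will be broken regardless of your markup.</p>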
<h2>Five Solutions, Ranked by Success Rate</h2>
<p><strong>1. Alternative hosting</strong> - Host videos outside Cloudflare or disable the orange cloud proxy for video domains. The most reliable fix.</p>
<p><strong>2. Web-optimize videos</strong> - Use tools like Handbrake to create Safari-compatible MP4 files with the correct codec and container settings.</p>
<p><strong>3. Exclude from caching</strong> - Create Cloudflare page rules to bypass caching for MP4 files entirely.</p>
<p><strong>4. Server-side configuration</strong> - Disable gzip compression for video files in your Nginx or Apache configuration:</p>
<pre><code class="language-nginx"># Nginx example
location ~* \.(mp4|webm)$ {
    gzip off;
    add_header Accept-Ranges bytes;
}
</code></pre>
<p><strong>5. Verify HTML markup</strong> - Ensure proper video tag implementation with all required attributes:</p>
<pre><code class="language-html">&lt;video autoplay muted loop playsinline&gt;
  &lt;source src=&quot;video.mp4&quot; type=&quot;video/mp4&quot;&gt;
&lt;/video&gt;
</code></pre>
<p>The <code>playsinline</code> attribute is critical for iOS Safari - without it, videos won&#39;t autoplay inline.</p>
<p>Start with option 1 if you can. If you&#39;re stuck with Cloudflare on the video domain, option 3 or 4 usually does the trick.</p>
]]></content>
    <author><name>Gabriel Kanev</name></author>
  </entry>
  <entry>
    <title><![CDATA[Nothing's Philosophy – OnePlus Problems but with a Cooler Design]]></title>
    <link href="https://gkanev.com/posts/nothings-philosophy-oneplus-problems-but-with-a-cooler-design/"/>
    <id>https://gkanev.com/posts/nothings-philosophy-oneplus-problems-but-with-a-cooler-design/</id>
    <updated>2024-06-11T00:00:00.000Z</updated>
    <summary><![CDATA[I wanted Nothing to be the iPhone of Android. After a year with the Phone 2, here's why I gave up.]]></summary>
    <content type="html"><![CDATA[<p><img src="/images/blog/nothing-youtube.png" alt="Nothing Phone 2 YouTube"></p>
<p>I was an enthusiastic Nothing supporter. After a year with the Phone 2, I switched to a Pixel. Here&#39;s what happened.</p>
<h2>Initial Enthusiasm</h2>
<p>I bought the Nothing Phone 2 (12GB RAM, 256GB storage) in July 2023 after my Google Pixel 4a started failing. The hardware impressed me immediately - performance was roughly 2–3 times better than my previous phone. The design was genuinely distinctive.</p>
<h2>Progressive Disappointment</h2>
<p>Over twelve months, the cracks showed:</p>
<ul>
<li><strong>Month 1:</strong> Hardware performance exceeded expectations, minor software bugs present but tolerable</li>
<li><strong>Month 3:</strong> Software updates slowed considerably; basic features remained unimplemented</li>
<li><strong>Months 6–9:</strong> Camera quality stayed subpar, battery performance disappointed</li>
<li><strong>Months 10–12:</strong> Nothing released the Phone 2a with newer Nothing OS versions than the flagship Phone 2 - a clear signal that earlier adopters had been abandoned</li>
</ul>
<h2>The Core Problem</h2>
<p>Nothing diverged from its stated philosophy. &quot;Weren&#39;t you going to be the iPhone of Android?&quot; The company started releasing new products - apparel, watches - while neglecting software improvements for existing devices.</p>
<p>The Nothing Watch Pro and CMF Buds Pro also disappointed. Poor watch faces, limited features, and touch controls that couldn&#39;t be disabled despite user requests.</p>
<p><img src="/images/blog/nothing-comment.png" alt="Nothing community comment"></p>
<h2>The Switch</h2>
<p>After about a year, I moved to the Google Pixel 8. Immediate improvement: snappy performance, superior camera - exactly what I&#39;d hoped Nothing would deliver.</p>
<h2>Verdict</h2>
<p>I genuinely wanted this brand to become the iPhone of Android. I was rooting for them. But the trajectory they&#39;ve chosen won&#39;t get them there - at least not with the current leadership.</p>
]]></content>
    <author><name>Gabriel Kanev</name></author>
  </entry>
  <entry>
    <title><![CDATA["PlanetScale forever" – my 2 cents]]></title>
    <link href="https://gkanev.com/posts/planetscale-forever-my-2-cents/"/>
    <id>https://gkanev.com/posts/planetscale-forever-my-2-cents/</id>
    <updated>2024-03-13T00:00:00.000Z</updated>
    <summary><![CDATA[PlanetScale killed their free tier and laid off staff in the same breath. Here's why that matters, and where to go instead.]]></summary>
    <content type="html"><![CDATA[<p>On March 6, PlanetScale announced the discontinuation of their free &quot;Hobby&quot; plan. The cheapest option is now $39/month. I have thoughts.</p>
<h2>The Problem</h2>
<p>The shutdown leaves hobby developers without a home. If you&#39;re looking for alternatives, here&#39;s where I&#39;d look:</p>
<ul>
<li><strong>Supabase</strong> - generous free tier, PostgreSQL-based</li>
<li><strong>Neon DB</strong> - PostgreSQL-focused, solid developer experience</li>
<li><strong>SingleStore</strong> - newer option worth evaluating</li>
<li><strong>Coolify</strong> - self-hosted approach for those who want control</li>
</ul>
<h2>The Real Issue</h2>
<p>The blog post announcing this change barely acknowledged the simultaneous layoffs. That&#39;s what actually bothered me.</p>
<p>PlanetScale&#39;s team - particularly their YouTube content creators - significantly contributed to the company&#39;s reputation and growth. These were people who invested their careers in building the platform. The announcement treated the pricing change as a triumph while glossing over the human cost.</p>
<p>Developer Matt Holt put it well: <em>&quot;I get it, companies do layoffs… but this one felt icky… like the people were the problem… after sacrificing their livelihoods, the executives juxtaposed &#39;PlanetScale forever&#39; as if declaring a triumph.&quot;</em></p>
<h2>Verdict</h2>
<p>I can&#39;t recommend PlanetScale&#39;s services going forward. Not because of the pricing change itself - businesses need to be sustainable - but because of how they handled it.</p>
<p>&quot;PlanetScale forever&quot; hit differently when it came right after showing the door to the people who built it.</p>
]]></content>
    <author><name>Gabriel Kanev</name></author>
  </entry>
  <entry>
    <title><![CDATA[Trying Out the Monochrome on My Smartphone]]></title>
    <link href="https://gkanev.com/posts/trying-out-the-monochrome-on-my-smartphone/"/>
    <id>https://gkanev.com/posts/trying-out-the-monochrome-on-my-smartphone/</id>
    <updated>2024-01-22T00:00:00.000Z</updated>
    <summary><![CDATA[Enabling grayscale mode on Android to reduce screen time - the goal, the method, and whether it works.]]></summary>
    <content type="html"><![CDATA[<p>I&#39;m experimenting with enabling monochrome/grayscale mode on my Android device to reduce smartphone usage. No app required - just the built-in developer options.</p>
<h2>The Goal</h2>
<p>Decrease daily screen time from roughly 2 hours down to around 1 hour. The grayscale filter removes the dopamine-triggering color cues that make apps so sticky. It&#39;s not a silver bullet - checking games, emails, and messages still happens regardless - but it changes the texture of the experience.</p>
<h2>How to Enable Monochrome on Android</h2>
<ol>
<li>Go to <strong>Settings → About device</strong> (or About phone)</li>
<li>Select <strong>Software information</strong></li>
<li>Tap <strong>Build number</strong> seven times to unlock Developer options</li>
<li>Enter your security pattern/PIN when prompted</li>
<li>Go back to <strong>Settings → System → Developer options</strong></li>
<li>Scroll to <strong>&quot;Simulate color space&quot;</strong></li>
<li>Select <strong>&quot;Monochromacy&quot;</strong></li>
</ol>
<p>That&#39;s it. Your screen is now grayscale.</p>
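<p>If you end up toggling this often, the same setting can be flipped from a computer over <code>adb</code> - this writes the secure setting the Developer options menu uses, where value 0 corresponds to monochromacy:</p>
<pre><code class="language-shell"># Enable grayscale (0 = monochromacy)
adb shell settings put secure accessibility_display_daltonizer_enabled 1
adb shell settings put secure accessibility_display_daltonizer 0

# Revert to full color
adb shell settings put secure accessibility_display_daltonizer_enabled 0
</code></pre>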
<h2>Next Steps</h2>
<p>I&#39;ll revisit this in a few months and report whether it actually moved the needle on screen time, or whether it&#39;s just an interesting experiment that doesn&#39;t change behavior in practice.</p>
<p>My suspicion: the reduction will be real but modest. The color is part of the hook, but it&#39;s not the only hook.</p>
]]></content>
    <author><name>Gabriel Kanev</name></author>
  </entry>
  <entry>
    <title><![CDATA[Resizing an Amazon EC2 Instance]]></title>
    <link href="https://gkanev.com/posts/resizing-an-amazon-ec2-instance/"/>
    <id>https://gkanev.com/posts/resizing-an-amazon-ec2-instance/</id>
    <updated>2023-08-17T00:00:00.000Z</updated>
    <summary><![CDATA[Resizing an EBS-backed EC2 instance is simpler than it used to be. Here's what to check and how to do it without causing downtime.]]></summary>
    <content type="html"><![CDATA[<p>Amazon has simplified resizing EBS-backed instances considerably over the last couple of years. Here&#39;s the process and what to watch out for.</p>
<h2>Compatibility Requirements</h2>
<p>Before attempting to resize, verify three things:</p>
<ul>
<li><strong>Platform:</strong> 32-bit and 64-bit instances cannot be converted to each other</li>
<li><strong>Virtualization type:</strong> HVM instances cannot be resized to Paravirtual (PV) formats, and vice versa</li>
<li><strong>Network configuration:</strong> Certain instance types require VPC deployment and are incompatible with EC2-Classic</li>
</ul>
<h2>Steps</h2>
<ol>
<li>Open the EC2 console</li>
<li>Stop the target instance</li>
<li>Select the instance and navigate to <strong>Actions → Instance Settings → Change Instance Type</strong></li>
<li>Select your desired instance type - the dropdown shows only compatible options</li>
<li><strong>Start the stopped instance</strong></li>
</ol>
<p>That last step sounds obvious, but I forgot it once and caused 3 minutes of unexpected downtime. Don&#39;t be me.</p>
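<p>If you prefer scripting it, the console steps can be sketched with the AWS CLI - the instance ID and target type below are placeholders:</p>
<pre><code class="language-shell"># Stop, resize, and restart an EBS-backed instance
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --instance-type Value=t3.large
aws ec2 start-instances --instance-ids i-0123456789abcdef0
</code></pre>
<p>The <code>wait</code> command blocks until the instance has fully stopped - changing the type while it&#39;s still shutting down will fail.</p>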
<h2>Key Takeaway</h2>
<p>The process is straightforward. The dropdown filters out incompatible instance types automatically, so you won&#39;t accidentally select something that won&#39;t work. Just remember to start the instance after making the change.</p>
]]></content>
    <author><name>Gabriel Kanev</name></author>
  </entry>
  <entry>
    <title><![CDATA[Made by Google]]></title>
    <link href="https://gkanev.com/posts/made-by-google/"/>
    <id>https://gkanev.com/posts/made-by-google/</id>
    <updated>2023-07-21T00:00:00.000Z</updated>
    <summary><![CDATA[Google has killed ~290 products. The anxiety of building on services that might vanish is real - and it's changing how I think about infrastructure.]]></summary>
    <content type="html"><![CDATA[<p>David Heinemeier Hansson wrote a post criticizing Google&#39;s track record of killing products. The list on Killed by Google documents approximately 290 terminated projects. I&#39;ve been thinking about this a lot lately.</p>
<h2>The Products I Miss</h2>
<p>Google Domains, Inbox by Gmail, Google Nexus, Google Reader - services with active, loyal user bases that were shut down anyway. Not because they were failing, but because they stopped fitting into a corporate priority list.</p>
<p>&quot;Google kept Google+ for 8 years. They have money to burn.&quot; So why kill things that work?</p>
<h2>The Anxiety</h2>
<p>The real problem isn&#39;t any specific shutdown. It&#39;s the uncertainty that comes with building on top of Google services. You can&#39;t predict which features or products will disappear when priorities shift. Migration away from a discontinued service costs real time and effort - especially for hardware solutions or server-dependent applications.</p>
<h2>What I&#39;m Doing About It</h2>
<p>I&#39;m exploring alternatives and planning to self-host certain applications, even though it means taking on maintenance burden. The trade-off feels worth it.</p>
<p>Google&#39;s current AI push worries me. New experimental projects tend to cannibalize attention from existing tools, and many of those experiments fail within a few years - but not before pulling resources away from things that were actually working.</p>
<p>The lesson: treat any Google service as temporary infrastructure. Plan your exit before you need one.</p>
]]></content>
    <author><name>Gabriel Kanev</name></author>
  </entry>
  <entry>
    <title><![CDATA[Why I Stopped My Phone Notifications and Why It Is a Bad Thing]]></title>
    <link href="https://gkanev.com/posts/why-i-stopped-my-phone-notifications-and-why-it-is-a-bad-thing/"/>
    <id>https://gkanev.com/posts/why-i-stopped-my-phone-notifications-and-why-it-is-a-bad-thing/</id>
    <updated>2023-06-07T00:00:00.000Z</updated>
    <summary><![CDATA[Disabling all notifications sounds liberating. In practice, it's more nuanced - here's my selective approach.]]></summary>
    <content type="html"><![CDATA[<p>The common advice is to disable all phone notifications. I tried it. It&#39;s not quite right.</p>
<h2>Apps That Actually Need Notifications</h2>
<p>Some apps genuinely require notification access to function properly:</p>
<ul>
<li>Banking apps for balance updates and payment confirmations</li>
<li>Calendar reminders for scheduled events</li>
<li>Two-factor authentication for login security</li>
<li>Delivery services for order tracking</li>
</ul>
<p>Blanket notification blocking breaks these.</p>
<h2>My Approach</h2>
<p>For messaging, I have 8 social media apps on my phone with usage capped at 5–15 minutes daily. I&#39;ve restricted notifications to one primary communication channel (Facebook Messenger) and limited the senders to roughly 10 contacts.</p>
<p>For games, I disable about 90% of notifications. The remaining 10% are for things I&#39;d actually act on.</p>
<h2>How to Create a Silent Notification Sound on Android</h2>
<p>If you want certain apps to &quot;notify&quot; without making noise:</p>
<ol>
<li>Download or create a silent audio file</li>
<li>Place it in the device&#39;s Notifications folder using a file manager</li>
<li>Select this file as the notification sound in the app&#39;s settings</li>
</ol>
<p>Now the app can badge your icon and add to the notification shade without interrupting you.</p>
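<p>For step 1, a silent file can be generated and pushed from a computer - a sketch, assuming <code>ffmpeg</code> and <code>adb</code> are installed:</p>
<pre><code class="language-shell"># One second of silence (anullsrc is ffmpeg&#39;s null audio source)
ffmpeg -f lavfi -i anullsrc=r=44100:cl=mono -t 1 -q:a 9 silent.mp3

# Copy it into the Notifications folder so it appears as a sound option
adb push silent.mp3 /sdcard/Notifications/silent.mp3
</code></pre>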
<h2>The Philosophy</h2>
<p>&quot;I don&#39;t want to be a slave to my phone.&quot;</p>
<p>But I also don&#39;t want to miss a 2FA code or a calendar reminder because I went nuclear on notifications. The answer is discipline and selectivity, not a blanket ban.</p>
]]></content>
    <author><name>Gabriel Kanev</name></author>
  </entry>
  <entry>
    <title><![CDATA[No Code Apps and Everything "Good" About Them]]></title>
    <link href="https://gkanev.com/posts/no-code-apps-and-everything-good-about-them/"/>
    <id>https://gkanev.com/posts/no-code-apps-and-everything-good-about-them/</id>
    <updated>2023-04-09T00:00:00.000Z</updated>
    <summary><![CDATA[No-code platforms promise to democratize development. Here's why they often fail businesses that try to scale on them.]]></summary>
    <content type="html"><![CDATA[<p>No-code platforms have gained traction as alternatives for building applications without traditional coding. But these solutions present significant limitations for sustainable business use.</p>
<h2>Pricing Concerns</h2>
<p>Recent pricing adjustments by platforms like Bubble have created unexpected cost escalations. When usage spikes occur, expenses can multiply substantially - creating unpredictable infrastructure costs. This volatility makes financial planning difficult for no-code dependent businesses.</p>
<h2>Customization Limitations</h2>
<p>These platforms excel at template-based solutions but struggle when unique requirements emerge. Features outside the platform&#39;s pre-built offerings typically demand actual coding, undermining the no-code advantage. As businesses expand, they increasingly encounter functionality gaps.</p>
<h2>Scalability Issues</h2>
<p>No-code applications often cannot accommodate substantial growth. Increased traffic, expanded user bases, and growing system complexity exceed platform capabilities. Transitioning to alternative solutions during growth phases is expensive and disruptive.</p>
<h2>Security Vulnerabilities</h2>
<p>Reliance on third-party integrations introduces security risks. Applications handling sensitive data - personal information or financial transactions - require robust security measures that no-code platforms may not provide adequately.</p>
<h2>Vendor Dependency</h2>
<p>Users become locked into provider ecosystems. Should the platform shut down, applications lose support and data access becomes compromised. Pricing changes and feature modifications remain entirely outside your control.</p>
<h2>Conclusion</h2>
<p>No-code tools have their place - rapid prototyping, internal tools, landing pages. But for anything you&#39;re planning to scale, evaluate traditional CMS platforms and established frameworks. Scalability, independence, and security matter more than launch speed.</p>
]]></content>
    <author><name>Gabriel Kanev</name></author>
  </entry>
  <entry>
    <title><![CDATA[My Marketing Predictions for 2023]]></title>
    <link href="https://gkanev.com/posts/my-marketing-predictions-for-2023/"/>
    <id>https://gkanev.com/posts/my-marketing-predictions-for-2023/</id>
    <updated>2023-01-15T00:00:00.000Z</updated>
    <summary><![CDATA[Ten predictions for the marketing industry in 2023 - most of them cynical, all of them honest.]]></summary>
    <content type="html"><![CDATA[<p>Ten predictions for marketing in 2023. Buckle up.</p>
<ol>
<li><p><strong>AI-Driven Marketing Growth</strong> - Account-based, product-led marketing powered by AI will surge by at least 20%. Everyone will claim they were early.</p>
</li>
<li><p><strong>Analytics Over Research</strong> - Customer-centric marketers will spend excessive time analyzing dashboards rather than conducting actual market research. The map will be mistaken for the territory.</p>
</li>
<li><p><strong>Channel Obsession</strong> - More than half the industry&#39;s efforts will go toward discovering the next trendy marketing channel. The current channels work fine; nobody wants to hear that.</p>
</li>
<li><p><strong>Brand Purpose Irrelevance</strong> - Customer purchasing decisions will remain disconnected from a brand&#39;s stated values or mission. Shocking, I know.</p>
</li>
<li><p><strong>Category Creation Strategy</strong> - Desperate CMOs will continue pursuing &quot;new category creation&quot; as their go-to board-impressing tactic. Most categories don&#39;t need creating.</p>
</li>
<li><p><strong>Legacy Channel Decline</strong> - TV, email, SMS, and blogs will slowly fade while remaining functional. Blogs especially - written off repeatedly, still here.</p>
</li>
<li><p><strong>VC Obsession</strong> - Journalists will disproportionately cover venture-backed startups over profitable small businesses. Revenue is boring; fundraising is news.</p>
</li>
<li><p><strong>Product Page Over Product</strong> - Marketers will refactor messaging instead of improving actual product differentiation. The copy will be better than the thing it describes.</p>
</li>
<li><p><strong>Rational Over Emotional</strong> - Emotional customer needs will be overlooked in favor of rational considerations. Spreadsheets don&#39;t capture feelings.</p>
</li>
<li><p><strong>Demand Creation Myth</strong> - Marketers will persist in believing they &quot;create demand&quot; rather than connecting to existing customer desires. You&#39;re not a wizard.</p>
</li>
</ol>
<p>Most of these will age fine. I hope I&#39;m wrong about at least three of them.</p>
]]></content>
    <author><name>Gabriel Kanev</name></author>
  </entry>
  <entry>
    <title><![CDATA[Online Education]]></title>
    <link href="https://gkanev.com/posts/online-education/"/>
    <id>https://gkanev.com/posts/online-education/</id>
    <updated>2020-10-30T00:00:00.000Z</updated>
    <summary><![CDATA[COVID forced schools online overnight. Here's what tools actually worked, and what to avoid when teaching remotely.]]></summary>
    <content type="html"><![CDATA[<p>COVID-19 forced schools across Europe to move online almost overnight. Initial backup plans proved inadequate when Microsoft Teams went down, forcing instructors to improvise with unstable alternatives like Skype. Both university and tutoring classes suffered from poor connectivity and technical failures.</p>
<h2>What to Look For in a Tool</h2>
<p>When selecting remote teaching software, the practical constraints matter most:</p>
<ul>
<li>Minimal or no setup requirements</li>
<li>Screen sharing and whiteboard capabilities</li>
<li>Free or accessible through educational programs</li>
<li>Optional (not mandatory) video features</li>
</ul>
<h2>Tools That Work</h2>
<p><strong>YouTube</strong> - Private livestreams allow simple content delivery with comment-based questions. Interaction is limited, but the barrier to entry is nearly zero. OBS can enhance presentations by combining screen and camera feeds simultaneously.</p>
<p><strong>Zoom</strong> - Business-focused, offers 40-minute free sessions for up to 100 participants. Requires software installation, but most people are now familiar with it.</p>
<p><strong>Discord</strong> - Originally designed for gaming communities. Provides free screen sharing at 720p and supports 50 participants with video simultaneously. Surprisingly good for teaching.</p>
<p><strong>Apple Keynote</strong> - Exclusively for the Apple ecosystem, supporting up to 100 viewers through web browsers or native apps without requiring iCloud accounts.</p>
<h2>What NOT to Use</h2>
<p>Do not self-host your own infrastructure. Universities that tried this experienced catastrophic failures under real user loads. When systems crashed mid-session, debugging consumed valuable instruction time while students bore the consequences.</p>
<p>If you&#39;re serving more than five people, use an established platform.</p>
<h2>Final Recommendations</h2>
<ul>
<li>Prepare thoroughly in advance - test everything before the session</li>
<li>Attend classes consistently; routine matters more online than in person</li>
<li>Show patience with people experiencing new situations</li>
<li>Maintain appropriate appearance on camera</li>
</ul>
]]></content>
    <author><name>Gabriel Kanev</name></author>
  </entry>
  <entry>
    <title><![CDATA[Fresh Start]]></title>
    <link href="https://gkanev.com/posts/fresh-start/"/>
    <id>https://gkanev.com/posts/fresh-start/</id>
    <updated>2020-10-02T00:00:00.000Z</updated>
    <summary><![CDATA[A new website, a new attempt at a blog, and a static site generator that turned out to be more work than expected.]]></summary>
    <content type="html"><![CDATA[<p>I&#39;ve wanted to start a blog for a while. The previous design didn&#39;t have the modules I needed for a blog or portfolio, so I rebuilt everything using Hugo - a static site generator.</p>
<p>It turned out to be more work than anticipated.</p>
<p><strong>2022 update:</strong> It was more work than I was prepared for, so I re-did the website again.</p>
<p><strong>2023 update:</strong> Same situation. Another redesign.</p>
<p><strong>2024 update:</strong> Third redesign for similar reasons.</p>
<p>At some point the pattern becomes the story. Each iteration teaches something new, and the next one is always better than the last - even if &quot;better&quot; just means &quot;I&#39;ll probably rebuild this too.&quot;</p>
<p>New content coming soon.</p>
]]></content>
    <author><name>Gabriel Kanev</name></author>
  </entry>
</feed>