
The Laboratory

4 Topics 4 Posts

Where things break and that’s the point. Active experiments, workflow development, and the honest documentation of failure.

Subcategories


  • Ongoing experiments with clear hypotheses and live results. Document as you go, not just when you succeed.

    2 Topics
    2 Posts
    Obliqo is growing. Slowly, imperfectly, but for real.

    And I want to say something clearly: without an AI copilot, I would not have been able to build this alone. That does not mean you press a button and a product appears. It means daily study. Confusion. Debugging. Wrong turns. Rewrites. Retesting. Small breakthroughs surrounded by friction.

    What I am discovering is not just that AI helps me move faster. It is that, in my case, building with an AI copilot has become a different way of learning while building. Not passive. Not automatic. Not effortless. More like a continuous cognitive exchange: I try, the machine responds, I correct, it expands, I resist, it proposes, I study, I decide.

    But that exchange is not inherently trustworthy. Sometimes the copilot is useful. Sometimes it is shallow. Sometimes it is confidently wrong. Sometimes it gives me something plausible enough to slow down my own thinking. So the real work is not “using AI.” The real work is judging, testing, rejecting, reformulating, and learning enough to know when not to trust what looks convincing.

    That is why, for me, this process does not feel less human. If anything, it demands more: more clarity, more responsibility, more patience, and more honesty about what I actually understand versus what I am only borrowing for a moment.

    I am not presenting this as a universal path. Not everyone has the same access, the same technical starting point, or the same conditions for working this way. I am only saying that this is what I am living through while building Obliqo from zero: a form of learning-through-construction that would have been inaccessible to me without this kind of AI partnership.

    That is also why I do not think this process should remain a black box. It should be opened, examined, shared, criticized, and made more accessible to people who want to change their lives not by consuming answers, but by learning in the middle of real work. So I want to start sharing that process here from the beginning, including the mistakes, the dead ends, and the parts that still do not make sense. If Pyragogy means anything, it has to survive contact with real work, real confusion, and real construction.
  • n8n, OpenRouter, and multi-LLM orchestration. Share your flows, your failures, and the weird things AI does when given autonomy.

    0 Topics
    0 Posts
    No new posts.
  • Scripts, plugins, integrations, and infrastructure. If you’re building something for the Pyragogy ecosystem, document it here.

    2 Topics
    2 Posts
    [image: Obliqo logo]

    I started Obliqo from a simple intuition: what if AI should not help us write faster, but help us think more honestly before we publish? That is the experiment.

    Obliqo is not being built as an AI writer, a ghostwriter, or a polishing tool. It is being built as a friction engine: a system that introduces structured resistance into the writing process so that a draft can be challenged before it becomes public.

    The current handbook page is here: Obliqo — The Friction Engine. The wiki holds the more stable version of the idea. This thread is for the unstable part: doubts, objections, tensions, failures, and possible improvements.

    The core question

    Obliqo starts from one conviction: not all friction is a defect. Sometimes friction is exactly what prevents a text from hiding behind fluency. A draft may sound clear and persuasive while still containing:

      • weak reasoning
      • rhetorical shortcuts
      • unexamined assumptions
      • more certainty than it has earned

    Obliqo is meant to make those things harder to ignore. But that raises a harder question: what kind of friction is actually useful, for whom, and under what conditions? That is the question I would like this thread to explore.

    A simple example

    Imagine a short text that sounds strong on first reading. Obliqo does not rewrite it. It does not make it smoother. It may simply interrupt it. It may say:

      • this conclusion comes too fast
      • this tone claims more certainty than the argument supports
      • this sentence hides a shortcut instead of making the point
      • this draft is avoiding the real question

    That interruption is the value. Not because friction is always good, but because sometimes a text needs resistance more than polish.

    What I want to discuss here

    I would especially like to hear thoughts on questions like these:

      • When does friction improve thinking, and when does it only discourage the writer?
      • What kinds of weak reasoning should Obliqo become better at detecting?
      • How can AI challenge a draft without becoming theatrical, arrogant, or empty?
      • What separates useful resistance from mere negativity?
      • Should Obliqo remain strictly non-generative, or are there narrow exceptions worth discussing?
      • How can this stay open without losing its identity?

    Contribute by disagreeing

    You do not need to agree with the current framing. In fact, disagreement is part of the point. You can help by:

      • questioning the assumptions behind Obliqo
      • proposing new friction patterns
      • describing where this method would fail
      • suggesting educational, editorial, or research uses
      • helping define the line between assistance and substitution

    One thing I want to protect

    Obliqo should not become just another system that flatters the user by making everything easier. If it grows, I would rather see it grow slowly and honestly than turn into a convenience machine with a more intellectual logo. That is why this conversation matters. If you have a critique, a doubt, or a better question than the ones above, bring it in.
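To make the "non-generative friction pass" idea concrete, here is a minimal sketch of what such a pass could look like in principle. Everything in it is hypothetical: the function name `friction_pass`, the marker lists, and the detection rules are illustrative placeholders invented for this example, not Obliqo's actual implementation, which would need far richer (likely LLM-assisted) detection.

```python
import re

# Illustrative patterns only: phrases that often signal unearned certainty
# or rhetorical shortcuts. A real friction engine would go far beyond
# keyword matching.
CERTAINTY_MARKERS = ["obviously", "clearly", "everyone knows", "without a doubt"]
SHORTCUT_MARKERS = ["it goes without saying", "needless to say"]

def friction_pass(draft: str) -> list[str]:
    """Return a list of challenges to the draft. Never a rewritten draft:
    the point is resistance, not polish."""
    challenges = []
    lowered = draft.lower()
    for marker in CERTAINTY_MARKERS:
        if marker in lowered:
            challenges.append(
                f"'{marker}' claims more certainty than the argument may support"
            )
    for marker in SHORTCUT_MARKERS:
        if marker in lowered:
            challenges.append(
                f"'{marker}' hides a shortcut instead of making the point"
            )
    # A conclusion that arrives in the very first sentence may come too fast.
    sentences = [s for s in re.split(r"[.!?]", draft) if s.strip()]
    if sentences and sentences[0].lower().lstrip().startswith(
        ("therefore", "so ", "in conclusion")
    ):
        challenges.append("this conclusion comes too fast")
    return challenges

print(friction_pass("Obviously the plan works. Needless to say, we ship Monday."))
```

The design choice the sketch tries to capture is the strict non-generative contract: the function's return type is a list of challenges, so there is no code path through which a rewritten draft can come back to the writer.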