
Active Experiments

2 Topics 2 Posts

Ongoing experiments with clear hypotheses and live results. Document as you go, not just when you succeed.

  • 0 Votes
    1 Posts
    24 Views
    Obliqo is growing. Slowly, imperfectly, but for real. And I want to say something clearly: without an AI copilot, I would not have been able to build this alone. That does not mean you press a button and a product appears. It means daily study. Confusion. Debugging. Wrong turns. Rewrites. Retesting. Small breakthroughs surrounded by friction.

    What I am discovering is not just that AI helps me move faster. It is that, in my case, building with an AI copilot has become a different way of learning while building. Not passive. Not automatic. Not effortless. More like a continuous cognitive exchange: I try, the machine responds, I correct, it expands, I resist, it proposes, I study, I decide.

    But that exchange is not inherently trustworthy. Sometimes the copilot is useful. Sometimes it is shallow. Sometimes it is confidently wrong. Sometimes it gives me something plausible enough to slow down my own thinking. So the real work is not “using AI.” The real work is judging, testing, rejecting, reformulating, and learning enough to know when not to trust what looks convincing.

    That is why, for me, this process does not feel less human. If anything, it demands more: more clarity, more responsibility, more patience, and more honesty about what I actually understand versus what I am only borrowing for a moment.

    I am not presenting this as a universal path. Not everyone has the same access, the same technical starting point, or the same conditions for working this way. I am only saying that this is what I am living through while building Obliqo from zero: a form of learning-through-construction that would have been inaccessible to me without this kind of AI partnership. That is also why I do not think this process should remain a black box.
It should be opened, examined, shared, criticized, and made more accessible to people who want to change their lives not by consuming answers, but by learning in the middle of real work. So I want to start sharing that process here from the beginning, including the mistakes, the dead ends, and the parts that still do not make sense. If Pyragogy means anything, it has to survive contact with real work, real confusion, and real construction.
  • Experiment Documentation - Template

    template methodology pinned
    0 Votes
    1 Posts
    23 Views
    How to Document Your Experiments

    Bad experiment documentation is worse than no documentation. Here’s a template that works.

    Template:

    ## Experiment: [Name]

    **Status:** [Active / Completed / Abandoned]
    **Date started:** [YYYY-MM-DD]
    **Participants:** [human and/or AI agents]

    ---

    ### Hypothesis

    What do I think will happen, and why?
    [1-2 sentences. Be specific enough to be wrong.]

    ### Method

    What am I actually doing?
    [Step by step. Include tools, models, settings, prompts used.]

    ### Results

    **What happened:** [outcomes — expected and unexpected]
    **What broke:** [This section is required. If nothing broke, you didn't push hard enough.]
    **Surprises:** [Anything you didn't predict?]

    ### Analysis

    What do these results suggest? [Mark clearly as interpretation, not fact.]

    ### What Changed Mid-Experiment

    [Did you pivot? Why? What did that teach you?]

    ### Next Steps

    [What would you do next? What's still unresolved?]

    ### Artifacts

    [Link to code, n8n flows, outputs — anything that lets others reproduce your work]
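    If you want to apply the template mechanically rather than copy-pasting it each time, a small script can scaffold a fresh experiment file with today's date filled in. This is a hedged sketch, not part of the original post: the `scaffold` helper, the file-naming convention, and the abbreviated template body are all illustrative assumptions.

    ```python
    from datetime import date
    from pathlib import Path

    # Abbreviated version of the documentation template; extend with the
    # remaining sections (Analysis, Next Steps, Artifacts) as needed.
    TEMPLATE = """## Experiment: {name}

    **Status:** Active
    **Date started:** {started}
    **Participants:** {participants}

    ---

    ### Hypothesis

    ### Method

    ### Results

    **What happened:**
    **What broke:**
    **Surprises:**
    """


    def scaffold(name: str, participants: str, out_dir: str = ".") -> Path:
        """Create a pre-filled experiment log and return its path."""
        # Hypothetical naming convention: lowercase, spaces become hyphens.
        filename = name.lower().replace(" ", "-") + ".md"
        path = Path(out_dir) / filename
        path.write_text(
            TEMPLATE.format(
                name=name,
                started=date.today().isoformat(),
                participants=participants,
            ),
            encoding="utf-8",
        )
        return path
    ```

    Run once per experiment, e.g. `scaffold("Copilot Learning Loop", "1 human, 1 AI agent")`, then fill in the sections as the work happens rather than after it ends.
    
    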