💡 I created a GDPR-compliant AI app in 1 hour with Symfony
on April 2, 2026
Today, many developers use AI to generate code.
The question we are addressing here is the following:
👉 How to transform an idea into an executable system?
And struggling with that question is normal.
Because we learned to:
- code features
- write functions
- connect APIs
But not to:
- structure an idea so that it can be executed
At Darkwood, we never start with the code.
We begin with a much more fundamental step:
👉 the reasoning
In this article, we will look at the following in concrete terms:
- How to transform an idea into a usable unit
- How to use AI to execute this idea
- How to apply this in a real project: SketchUp Shape
🧠 1. An idea is not code
When you think:
“Create an application”
Your reflex as a developer is often:
“Okay, I’m going to code an API, a database, etc.”
👉 Bad starting point.
An idea is not yet something you can execute.
For now, it is only:
👉 a vague intention
💡 The right mental model
At Darkwood, we believe that:
👉 one idea = one task
No more, no less.
Example
“Create an application”
becomes:
Task: create_app
It may seem trivial.
But this is a significant change:
👉 You no longer think in terms of “projects”
👉 You think in executable units
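The shift can be sketched in a few lines of Ruby, the language this project generates on the SketchUp side (the `Task` struct here is purely illustrative, not from the project):

```ruby
# An idea reduced to a named, executable unit.
Task = Struct.new(:name, :intent) do
  def to_s
    "Task: #{name}"
  end
end

idea = "Create an application"
task = Task.new("create_app", idea)

puts task        # prints "Task: create_app"
puts task.intent # prints "Create an application"
```

From here on, the system manipulates `create_app`, not a vague sentence.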
⚙️ Why this method works: Church, Turing, and the idea as an executable unit
One might think that “prompting an AI” is a recent, almost purely practical topic, linked to current LLM news.
In reality, the problem is much older.
When a developer takes an idea and tries to turn it into an executable system, they always encounter the same question:
How do we move from an abstract intention to a form that the machine can actually process?
This is exactly the type of question that lies at the root of theoretical computer science.
In a previous Darkwood article - Create a Lambda Interpreter in PHP - we already explored this boundary between:
- the idea of a calculation
- and its actual execution by a machine
This article started with Alonzo Church's λ-calculus, then linked it to the Turing machine, recalling a fundamental point:
The λ-calculus and the Turing machine are two equivalent models of computation
In other words:
- Church provides a conceptual model
- Turing provides a model of execution
And that's exactly what interests us here.
🧠 Church: before execution, a conceivable form is necessary
Alonzo Church does not start from a machine. He starts from a much deeper problem:
What is a function?
and more broadly:
What is a calculation?
The λ-calculus responds to this with a radical idea:
Any calculation can be expressed as an abstraction and application of functions.
In this model:
- a function is a transformation
- a function can be passed as a value to another function
- and evaluation consists of applying functions to one another
This is the theoretical root of many things we use today without thinking about it:
- functional programming
- currying
- closures
- combinators
- pure transformations
- DSLs
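That root is easy to touch in Ruby, the project's output language. A minimal sketch of currying, closures, and pure composition (generic Ruby, unrelated to the project code):

```ruby
# Abstraction: a function that returns a function (currying).
add = ->(a) { ->(b) { a + b } }

# Application: evaluation means applying functions to values.
add_three = add.call(3)  # a closure capturing a = 3
puts add_three.call(4)   # prints 7

# Pure transformation: composing two functions into one.
double   = ->(x) { x * 2 }
pipeline = double >> add_three  # Proc#>> composes left to right
puts pipeline.call(5)           # double(5) = 10, then 10 + 3: prints 13
```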
In the article on the lambda interpreter in PHP, this idea was presented in a very concrete way: we started with an expression, tokenized it, parsed it, built an AST, and then evaluated it.
In other words:
before a system can execute something, it must first understand a form
This is exactly the point that interests us regarding AI.
When you give an LLM a sentence like:
Create a house
You haven't given it an execution yet. You have given it an abstract form: an intention, a request that is not yet sufficiently structured.
And that's where Church's intuition becomes useful to us.
Developer translation
When we talk about ideas in this article, we are not talking about vague inspiration or brainstorming.
We are already talking about a manipulable abstraction.
In other words:
An idea must become a unit upon which a system can reason.
That's why, at Darkwood, we make an initial mental shift:
An idea is not yet code.
An idea is already a transformation in progress.
Or, to put it another way:
An idea is a potential task
⚙️ Turing: once the form is defined, it must be executed
Church formalizes calculation on the side of abstraction. Turing, however, answers a different question:
What does a machine capable of performing this calculation look like?
His answer is famous:
- a tape (memory)
- a read head
- a current state
- transition rules
The power of the model does not come from its complexity, but rather from its simplicity.
A Turing machine reads a cell, takes its state into account, applies a rule, possibly writes something, and then moves on.
It's an extremely simple model, but sufficient to reason about what a machine can do.
What interests us here is not teaching the entire theory of automata. It's about recovering a very concrete intuition:
An executable system needs readable units
and a clear mechanism for moving from one state to another
This is where the link with modern work on ideas, tasks, and AI agents becomes very concrete.
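To make that intuition tangible, here is a toy read/apply/write/move loop in Ruby. It is not a full Turing machine, just the mechanism described above, with made-up rules that flip bits:

```ruby
# Toy machine: flips every bit on the tape, one cell per step.
tape  = [1, 0, 1, 1]
head  = 0
state = :running

# Transition rules: (state, symbol) -> [symbol to write, move, next state]
rules = {
  [:running, 0] => [1, 1, :running],
  [:running, 1] => [0, 1, :running]
}

while state == :running && head < tape.length
  symbol = tape[head]                          # read the current cell
  write, move, state = rules[[state, symbol]]  # apply a rule
  tape[head] = write                           # possibly write
  head += move                                 # then move on
end

p tape # prints [0, 1, 0, 0]
```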
🔁 What Church and Turing teach us about working with AI
The essential point is this:
- Church teaches us to think of calculation as transformation
- Turing teaches us to think of computation as structured execution
When you combine the two, you get a much healthier way of working with AI.
We stop thinking:
“I’m going to ask the model for some code”
And we start to think:
“I’m going to give it a structured unit of work that it can transform, and that another system can then execute”
That's exactly what we do at Darkwood.
🧩 Darkwood translation: an idea becomes a task
At this stage, we can simplify the model without betraying it.
Here is the translation we are using as a basis:
| Theoretical concept | Darkwood interpretation |
| ------------------- | ----------------------- |
| Abstraction | Structured idea |
| Function | Transformation |
| Memory cell | Task |
| Tape | To-do list / stack |
| Read | Execute |
| Transition | Change from one state to another |
This gives us a simple rule:
An idea becomes a task.
A task becomes a readable unit.
A readable unit can be executed.
It is this change of perspective that matters.
Because as long as an idea remains only in your head, it is unusable by the system:
- neither your terminal
- nor your IDE
- nor your LLM
- nor your orchestrator
can work on it properly.
It therefore needs to be given a form.
💡 The idea as a unit of calculation
This is where our method becomes truly meaningful.
When we say:
Create an application
It's not a system yet. This is not yet architecture. This is not yet a usable prompt.
But it's not "nothing" either.
That's already it:
a unit of calculation to be specified
At Darkwood, we put it through a naming and reduction stage.
For example:
Task: create_app
Or
Task: generate_sketchup_shape
From this point on, the machine - or the system of agents - no longer manipulates a mere human sentence. It manipulates an identifiable task.
And once you can identify a task, you can:
- store it
- reread it
- reproduce it
- transmit it
- split it
- parallelize it
- orchestrate it
That's it, that's the real change.
🧠 Why this is so useful for a developer
A developer often tends to jump directly to implementation:
- create a folder
- install Symfony
- connect an API
- write a command
But if the initial idea has not been modeled correctly, everything else becomes unstable.
This leads to what many people experience with LLMs today:
- vague prompts
- unclear outputs
- fragile structures
- projects that “work”, but lack coherence
The theory of Church and Turing reminds us of something very simple:
The quality of the execution depends on the quality of the initial representation.
In other words:
If you want to improve the performance of an AI, you first need to represent your idea better.
And that's precisely what we're going to do next with SketchUp Shape:
- start from an intention
- convert it into a task
- structure it
- transform it into a design
- and only then turn it into code
🧭 The rule to remember
If this entire section had to be condensed into a single sentence, it would be this:
A useful idea for AI is not an inspiration.
It's a sufficiently clear abstraction to become an executable task
And that's where Church and Turing cease to be distant theory.
They become a practical model for the modern developer:
- Church helps you think about form
- Turing helps you think about execution
- Darkwood helps you connect the two
📚 Concrete example
You have several ideas:
- create an app
- build an API
- generate a 3D model
👉 You can represent them as a stack:
Stack:
- create_app
- build_api
- generate_model
And then, everything becomes simpler:
You can handle each task one by one, or in parallel.
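A minimal Ruby sketch of both modes, using the example task names above (the threading is illustrative; real tasks would do actual work):

```ruby
# The stack of executable units.
stack = %w[create_app build_api generate_model]

# One by one: each task is handled in order.
results = stack.map { |task| "done: #{task}" }

# In parallel: the tasks are independent, so threads are safe here.
threads = stack.map { |task| Thread.new { "done: #{task}" } }
parallel_results = threads.map(&:value)

p results # prints ["done: create_app", "done: build_api", "done: generate_model"]
```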
⚡ 2. Your brain already works like this
You don't need to learn a new system.
👉 You're already using it.
🧠 Simple case: different tasks
Task A: reply to a message
Task B: install a package
👉 No link 👉 You can easily switch between them
⚠️ Complicated case: similar tasks
Task A: code an API
Task B: refactor the same API
👉 Now, things are getting tough
Why?
- same context
- same mental variables
- same files
👉 Result:
- confusion
- errors
- fatigue
💥 What this implies
Your brain:
- has limited memory
- handles too much context poorly
- saturates quickly
👉 So:
You can't manage many complex tasks at the same time.
🧩 3. The real problem: reproducibility
A task is only valuable if you can:
- find it again
- relaunch it
- share it
👉 Otherwise, it stays in your head
👉 and it disappears
🧠 Darkwood Approach
We introduce a simple idea:
👉 one task = one pointer
In concrete terms
- you write your task in Joplin
- it gets an ID
- this ID becomes your “pointer”
And then:
- in your terminal → you reference this task
- in your IDE → same
- in your AI → same
👉 Result:
you manipulate the same object, everywhere
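In code, the “pointer” is nothing more than a stable ID that every tool resolves to the same record. A Ruby sketch (the ID and registry are invented for illustration; in practice the ID comes from Joplin):

```ruby
# One task, one ID, referenced everywhere.
REGISTRY = {
  "a1f3" => { name: "create_app", status: :todo }
}

# Terminal, IDE, and AI all resolve the same pointer.
def resolve(id)
  REGISTRY.fetch(id)
end

task = resolve("a1f3")
puts task[:name] # prints "create_app"
```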
💡 What this changes
You no longer say:
“I’m going to work on my idea.”
You say:
“I will carry out this task”
And that's much more powerful.
⚙️ 4. AI: not a magic tool, an execution system
Today, many developers use AI like this:
- “generate a function for me”
- “write me a class”
- “fix this bug”
👉 It works.
But you remain within a certain logic:
reactive
🧠 Darkwood Approach
We completely change the perspective:
👉 AI is becoming an execution system
🏗 We model like a business
Rather than “magical agents”, a simple structure is used:
👤 You (human)
- you define the intention
- you pose the problem
🧠 AI Architect
- it understands your intention
- it breaks it down into tasks
- it decides how to organize the system
⚙️ Agents
- they execute
- they produce code
- they transform the data
💡 Key Rule
👉 The prompt = your intention
If your prompt is unclear:
👉 the system is unclear
If your prompt is structured:
👉 the system becomes structured
🔥 Example
❌ Bad prompt:
“Generate a house”
✅ Good prompt:
“Create a structured architectural design of a house,
with walls, roof, proportions and constraints,
then generate SketchUp Ruby code from it”
👉 Here, you give:
- a direction
- a structure
- a clear objective
🧠 What you need to remember
👉 You don't need to be better at coding
👉 You need to get better at:
- structuring
- intention
- orchestration
And that's exactly what we're going to apply now with:
👉 SketchUp Shape
🚀 5. Practical application: SketchUp Shape
Project: 👉 https://github.com/matyo91/sketchup-shape
🎯 Objective
Transform:
“Create a house”
into:
👉 an executable Ruby script for SketchUp
🧱 Why SketchUp?
- a 3D modeling tool
- a documented Ruby API → SketchUp Ruby API
- extensible (plugins, scripts)
🎓 Context
I personally learned SketchUp with Sébastien Maison (YouTube)
👉 Here, the goal is not to use SketchUp's built-in AI:
👉 But to build our own system
⚙️ 6. Pipeline: from idea to (concrete) 3D model
Before going any further, let's take a real-life example.
👉 Here, we're not talking about an abstract API 👉 We're talking about generating a 3D object in SketchUp
🧱 A little background: why SketchUp?
SketchUp is a widely used 3D modeling tool:
- architecture
- design
- prototyping
Personally, I discovered SketchUp thanks to Sébastien Maison, with whom I took a course.
👉 I modeled a house there “by hand”
And that's where the idea came from:
👉 “What if I could generate that directly with code?”
⚠️ Why not use SketchUp's native AI?
SketchUp already offers AI features.
But that's not the point here.
👉 We are developers.
👉 We want:
- understand
- control
- automate
So we go through:
👉 SketchUp Ruby API
🧠 The technical entrance
To do this, the SketchUp Ruby API gives you the entry point.
In concrete terms:
You write Ruby like this:
model = Sketchup.active_model
entities = model.entities
And you build:
- faces
- volumes
- groups
🧩 Resources used
To make the AI more reliable, I used:
- Ruby API Stubs 👉 to give the LLM a clear view of the available objects
- Official examples 👉 to understand the real patterns
And if you want to go further:
👉 SketchUp Extensions → You can package your code as a plugin
⚠️ Important prerequisite
To run Ruby:
👉 You must use the desktop version:
👉 The web version does not allow this (or only in very limited ways)
🔁 Pipeline Darkwood (applied)
Now that we have the context, here is the actual pipeline:
Intention
↓
Design (LLM #1)
↓
Code (LLM #2)
↓
SketchUp
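The pipeline can be sketched as two composed transformations. The lambdas below are stubs standing in for the two LLM calls (the real project wires them through Symfony; everything here is illustrative):

```ruby
# Stage 1 stands in for LLM #1: intention -> structured design.
design_llm = ->(intention) {
  { type: "house", structure: { walls: {}, roof: {} }, source: intention }
}

# Stage 2 stands in for LLM #2: design -> SketchUp Ruby code.
code_llm = ->(design) {
  "model.start_operation('#{design[:type].capitalize}', true)\n" \
  "# geometry built from: #{design[:structure].keys.join(', ')}\n" \
  "model.commit_operation"
}

# Intention -> design -> code, as one composed pipeline.
pipeline = design_llm >> code_llm
script = pipeline.call("Create a house")
puts script
```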
🧠 Step 1: Intention
You start with a simple idea:
bin/console app:generate-shape "Create a house"
👉 Nothing more.
🧱 Step 2: Design (LLM #1)
Here, the AI does not code.
👉 It thinks
It acts like an architect:
{
"type": "house",
"structure": {
"walls": {...},
"roof": {...}
}
}
👉 It decides:
- the proportions
- the elements
- the structure
⚙️ Step 3: Code (LLM #2)
Only then:
👉 a second AI transforms this plan into code
model.start_operation('House', true)
...
model.commit_operation
👉 It acts like a developer:
- it uses the SketchUp API
- it constructs the geometry
⚠️ 7. Why 2 LLMs (and not 1)
This is one of the most important parts.
❌ Naive approach
idea → code
👉 What you get:
- inconsistent
- unpredictable
- difficult to correct
✅ Darkwood Approach
idea → design → code
👉 You separate:
- 🧠 the what
- ⚙️ the how
💡 It's exactly like in a team:
- architect → decides
- developer → implements
🧠 8. The real problem encountered
When I launched:
Create a house
👉 Result:
- technically valid
- but visually… false
👉 It wasn't a house
❌ Why?
Because the AI did this:
“A house = shapes”
👉 It stacked:
- cubes
- a roof
- with no logic
💥 The real problem
The AI doesn't understand:
- gravity
- proportions
- actual usage
👉 It doesn't understand:
what a house is
🧠 Correction made
A layer was introduced:
👉 structured design
Instead of:
{ "parts": [...] }
We force:
{
"type": "house",
"foundation": {...},
"walls": {...},
"roof": {...},
"constraints": {
"symmetry": true,
"alignment": "centered"
}
}
👉 Result:
- consistency
- stability
- credibility
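That constraint layer can be enforced mechanically before any code generation. A minimal Ruby sketch of such a guard (the helper and key names are hypothetical, not the project's actual validator):

```ruby
# Keys every house design must carry before it reaches the code LLM.
REQUIRED_KEYS = %i[type foundation walls roof constraints]

def valid_design?(design)
  REQUIRED_KEYS.all? { |key| design.key?(key) } &&
    design.dig(:constraints, :symmetry) == true
end

good = { type: "house", foundation: {}, walls: {}, roof: {},
         constraints: { symmetry: true, alignment: "centered" } }
bad  = { parts: [] } # the old, unconstrained shape

puts valid_design?(good) # prints true
puts valid_design?(bad)  # prints false
```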
⚙️ 9. Technical Stack
🧱 Backend
- Symfony 8
- Symfony AI
- darkwood/navi
👉 Installation:
composer require darkwood/navi
🧠 Local AI (GDPR)
👉 everything runs locally
⚖️ Why this is important
- no external API
- no data leak
- total control
👉 You build a system:
GDPR compliant by design
⚙️ 10. Commands
🧠 Local LLM
ollama pull ministral-3:8b
ollama serve
⚙️ Generation
bin/console app:generate-shape "Create a house"
Useful options:
-o house.rb # save the script
--spec design.json # save the design
-v # full debug output
🧠 11. What you really need to remember
❌ This is not:
- “How to use AI”
✅ It is:
👉 How to structure an idea so that an AI can execute it
🔁 Final Model
idea
→ task
→ design
→ code
→ execution
🚀 Conclusion
The future of development is not:
write more code
But:
structure ideas and orchestrate their execution
👉 At Darkwood:
we are not trying to go faster
we are trying to get things right