
🧩 Meetup AFUP Paris – Novembre 2025

on November 28, 2025

Document validation with Symfony + AI & technical test decryption

On November 27, 2025, the AFUP Paris branch met at Eleven Labs, 102 Rue du Faubourg Saint-Honoré, for another meetup of the 2025-2026 season: a friendly, technical evening resolutely focused on the current challenges of PHP development, with AI applied to Symfony, quality, recruitment… and a good time for networking.

🏢 A welcome at Eleven Labs

For this edition, Eleven Labs – an IT services company specializing in custom web and mobile projects, with offices in Paris, Nantes, and Montreal – hosted the community on the 5th floor of its premises. An ideal atmosphere for exchanging ideas, learning, and meeting other web enthusiasts.

🎤 AFUP Introduction & News – Opening presentation of the meetup

As with every edition, the AFUP Paris team opened the evening with a welcome address, important announcements, and an overview of PHP and community news. Here is a complete summary of the points presented.

🙋‍♀️ 🎙️ Call for Speakers, Sponsors & Venues

AFUP Paris is always looking for speakers and venues, and sometimes for partners to sponsor the food and drinks served after the talks.

👉 If you would like to:

  • suggest a topic,
  • host a meetup at your premises,
  • or sponsor the aperitif,

you are welcome! Just contact us: we plan meetups several months in advance.

🎓 AFUP Mentoring Program: two components

AFUP has two mentoring programs, both voluntary and free:

  1. Mentoring for speakers

For those who want to:

  • learn how to prepare a talk,
  • structure a presentation,
  • or gain confidence speaking in public.

This program is still relatively unknown, but open to all levels. Don't hesitate to register!

  2. PHP Mentoring

For:

  • mentees wishing to progress in PHP,
  • mentors who want to pass on their knowledge.

👉 The registration link is available during meetups and on our social media.

🗓️ Upcoming AFUP Paris meetups

The next two dates have already been set:

  • Tuesday, December 17 – at Acolia
  • Thursday, January 8 – at Libid

As always, these evenings are free and open to everyone, subject to availability.

🔄 Overview of Parisian tech meetups

AFUP takes advantage of each edition to promote related tech events. In early December, several meetups will take place in Paris focusing on:

  • Ruby
  • Security / AppSec
  • Angular
  • and other local communities

A great opportunity to explore other ecosystems and expand your network!

🗞️ AFUP National News

🔹 AFUP Day 2025 – May 22

The event was announced last month. It will take place in 24 cities including: Bordeaux, Lille, Lyon… and, for the first time, Paris!

👉 Registration is open. A QR code was displayed: there are still a few places available at the early "blind bird" price.

🐘 PHP News

PHP 8.5 is released!

The new version has been available for a few days:

  • a new official website,
  • performance improvements,
  • welcome new features for developers.

Results of the AFUP Salary Barometer

The annual barometer has been published:

  • market trends,
  • salary ranges by role,
  • the evolution of PHP salaries in France.

Very useful for tracking your progress or preparing negotiations.

PHPStan massively improves its support

Specifically for:

  • data providers,
  • other PHP constructs that have historically been poorly supported.

Good news for projects with high quality and static-analysis requirements.
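For illustration, a data provider is the PHPUnit construct in question. Here is a minimal sketch (the `Slugifier` class is hypothetical) of the kind of typed provider that PHPStan's improved analysis can now follow end to end:

```php
<?php
// Sketch of a PHPUnit data provider; `Slugifier` is a hypothetical class.
use PHPUnit\Framework\Attributes\DataProvider;
use PHPUnit\Framework\TestCase;

final class SlugifierTest extends TestCase
{
    /** @return iterable<string, array{string, string}> */
    public static function slugCases(): iterable
    {
        yield 'simple'  => ['Hello World', 'hello-world'];
        yield 'accents' => ['Café au lait', 'cafe-au-lait'];
    }

    #[DataProvider('slugCases')]
    public function testSlugify(string $input, string $expected): void
    {
        // PHPStan can now check the provider's value shapes against these parameters.
        self::assertSame($expected, (new Slugifier())->slugify($input));
    }
}
```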

📱 AFUP Networks & Community (Linktree)

All useful links are grouped here: LinkedIn, Twitter/X, Discord, Slack, Meetup…

Feedback questionnaire

A QR code was provided to collect your feedback on the evening. This feedback helps us improve future meetups.

🎙️ 1. Symfony + AI: validate your documents before even opening them!

Speaker: Marianne Joseph-Géhannin

The evening got off to a great start with a presentation focused on the automated pre-validation of documents using the Symfony + OCR + LLM combination. A very current use case for product teams and platforms that handle heterogeneous files on a daily basis.

For this first presentation of the evening, Marianne shared concrete feedback: how to combine Symfony, OCR, and language models (LLMs) to automate the pre-validation of administrative documents before a human even looks at them.

A practical topic, stemming from a real customer need, combining AI, quality and architecture.

👤 Who is Marianne?

Marianne has been developing in PHP since 2011, with a background that has included Symfony since version 1.4. She has been working at Eleven Labs since 2020 and is currently on assignment at Le Monde, notably on the Falcon framework.

🧩 The problem: documents to process… in bulk

The use case presented comes from a call for tenders for a company managing school trips and holiday stays.

Each case requires numerous documents:

  • identity card,
  • passport,
  • certificate,
  • various supporting documents…

For operational teams, manual verification means:

  • a time-consuming task,
  • a risk of error,
  • low-value work,
  • and high variability in the quality of the documents provided.

🎯 Project objective: to automatically eliminate incorrect documents so that operators only process truly valid files.

🧰 The three technical pillars of the project

Marianne built her solution using three main building blocks:

  1. Symfony + Symfony-AI

The new Symfony AI initiative offers:

  • native integration of LLMs into the Symfony ecosystem,
  • clients for language models,
  • integrated vectorization,
  • an "AI Platform" to simplify the use of providers.

In this project, Marianne mainly used:

  • ChatGPT integration via the Symfony-AI Platform,
  • a first approach to vectorization for structuring rules.
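To give a sense of what such an integration involves, here is a minimal sketch of the underlying HTTP call that the platform abstracts away. This is NOT the Symfony-AI Platform API itself; it uses Symfony's HttpClient against OpenAI's chat completions endpoint, and the model name and `OPENAI_API_KEY` environment variable are assumptions:

```php
<?php
// Minimal sketch, assuming an OPENAI_API_KEY env var. This is NOT the
// Symfony-AI Platform API, only the raw call it abstracts away.
require __DIR__.'/vendor/autoload.php'; // composer require symfony/http-client

use Symfony\Component\HttpClient\HttpClient;

$ocrText = "RÉPUBLIQUE FRANÇAISE\nCARTE NATIONALE D'IDENTITÉ\n..."; // from Tesseract

$response = HttpClient::create()->request('POST', 'https://api.openai.com/v1/chat/completions', [
    'auth_bearer' => getenv('OPENAI_API_KEY'),
    'json' => [
        'model' => 'gpt-4o-mini', // model name chosen for illustration
        'messages' => [
            ['role' => 'system', 'content' => 'You classify administrative documents.'],
            ['role' => 'user', 'content' => $ocrText],
        ],
        // Ask the API to return a JSON object, as in the talk's prompt design.
        'response_format' => ['type' => 'json_object'],
    ],
]);

echo $response->toArray()['choices'][0]['message']['content'];
```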

  2. OCR with Tesseract

The documents provided are often images (scans, photos, unstructured PDFs). OCR is therefore essential to extract usable text.

How it works:

  • cleaning (contrast, noise reduction…),
  • detection of text areas,
  • character recognition,
  • production of raw text.

Marianne uses Tesseract via:

  • an HTTP client,
  • a local Docker container,
  • and an abstraction layer that makes it possible to swap OCR engines if needed.
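As a minimal sketch of that kind of abstraction (interface and class names are hypothetical, not Marianne's actual code, and the `tesseract` binary with French language data is assumed to be installed):

```php
<?php
// Hypothetical OCR abstraction; names invented for illustration.
interface OcrInterface
{
    /** Extracts raw text from an image or scanned page. */
    public function extract(string $path): string;
}

final class TesseractOcr implements OcrInterface
{
    public function extract(string $path): string
    {
        // 'stdout' makes tesseract print the text instead of writing a file;
        // '-l fra' selects the French language model.
        $output = shell_exec(sprintf('tesseract %s stdout -l fra 2>/dev/null', escapeshellarg($path)));

        if (!is_string($output)) {
            throw new RuntimeException("OCR failed for {$path}");
        }

        return trim($output);
    }
}

// Swapping engines later only requires another OcrInterface implementation.
$text = (new TesseractOcr())->extract('id-card.png');
```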

  3. An LLM to interpret and classify documents

Once the text is extracted, an LLM analyzes:

  • what type of document it is,
  • whether the expected information is present,
  • whether the document is consistent.

Marianne tested several models:

  • ChatGPT → works perfectly
  • LLama, Qwen… via Ollama locally → limitations (response time, quality, hardware constraints)

She observed:

➡️ unpredictable or unusable local responses
➡️ insufficient hardware power
➡️ timeout issues

Result: for this POC, ChatGPT was the most reliable model.

📝 The prompt used

The prompt is intentionally simple:

  • provide a list of possible document types (ID card, passport, residence permit…),
  • provide the text extracted by OCR,
  • request a response strictly in JSON format.

This JSON format makes it possible to:

  • validate the structure,
  • categorize the results,
  • save the result to the database.

ChatGPT consistently adhered to the format, which was not the case with the local models.
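Here is a sketch of that contract in plain PHP. The prompt wording, document types, and sample model output are illustrative, not the exact ones used in the talk:

```php
<?php
// Illustrative prompt construction and JSON validation step.
$documentTypes = ['identity card', 'passport', 'residence permit'];
$ocrText = 'RÉPUBLIQUE FRANÇAISE ...'; // text produced by the OCR step

$prompt = sprintf(
    "Classify the following document.\nPossible types: %s.\nOCR text:\n%s\n".
    'Reply strictly as JSON: {"type": string, "valid": bool, "reason": string}',
    implode(', ', $documentTypes),
    $ocrText
);

// Validate that the model actually honoured the JSON contract before storing.
$raw = '{"type":"identity card","valid":true,"reason":"title and MRZ found"}'; // sample model output
$data = json_decode($raw, true, 512, JSON_THROW_ON_ERROR);

foreach (['type', 'valid', 'reason'] as $key) {
    if (!array_key_exists($key, $data)) {
        throw new UnexpectedValueException("Model response is missing key: {$key}");
    }
}
```

Parsing with `JSON_THROW_ON_ERROR` and checking the keys is what makes the strict-JSON requirement enforceable: a local model that drifts from the format fails loudly instead of corrupting the database.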

🔄 Processing architecture (pipeline)

Here is the complete pipeline:

  1. 📤 The file is sent via a Symfony command (no form for the POC).
  2. 🧠 Tesseract OCR → text extraction.
  3. 💬 LLM → analysis, classification, validation.
  4. 🗃️ The result is recorded in the database, for use by the operators.
  5. ❌ If a document does not match its expected type → rejection + explanatory message.

The code is structured around:

  • a DTO for the data,
  • a handler,
  • dedicated services (OCR, AI, storage),
  • a modular design to replace components as needed.
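A hypothetical sketch of that shape (all class and interface names are invented for illustration; the collaborating OCR, classifier, and storage services are assumed to be implemented elsewhere and injected by the container):

```php
<?php
// Hypothetical pipeline shape: DTO + handler + injected services.
final class DocumentDto
{
    public function __construct(
        public readonly string $path,
        public readonly string $expectedType, // e.g. 'passport'
    ) {}
}

final class ValidateDocumentHandler
{
    public function __construct(
        private readonly OcrInterface $ocr,       // Tesseract behind an interface
        private readonly ClassifierInterface $ai, // LLM client behind an interface
        private readonly ResultStore $store,      // database persistence
    ) {}

    public function __invoke(DocumentDto $doc): void
    {
        $text = $this->ocr->extract($doc->path);  // OCR extraction
        $result = $this->ai->classify($text);     // LLM analysis + classification

        if ($result->type !== $doc->expectedType) {
            // Rejection with an explanatory message for the operators.
            $this->store->reject($doc, "Expected {$doc->expectedType}, got {$result->type}");
            return;
        }

        $this->store->accept($doc, $result);      // persisted for operator review
    }
}
```

Because each collaborator sits behind an interface, the OCR engine or the LLM provider can be replaced without touching the handler, which is exactly the modularity the POC aims for.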

🎥 Demonstration

Marianne demonstrated the tool on several documents:

  • an identity card,
  • a passport,
  • a poster reading "French Republic" (to test its robustness).

The tool was able to:

  • recognize the correct document in simple cases,
  • reject false positives in ambiguous cases.

🧪 Limitations & Lessons Learned

❗ Finding the Right AI Model

Local models (via Ollama) have shown their limitations: performance, inconsistencies, difficulty in respecting the JSON format.

ChatGPT has proven to be much more reliable.

❗ Manage the timeout

Several attempts were needed to stabilize the response time: temperature settings, output constraints… Sometimes it was even necessary to patch the local engine code directly.

❗ The documentation

  • Symfony-AI → very good documentation
  • OCR / Tesseract → more spartan

AI doesn't always help us understand what we don't yet understand!

❗ GDPR and security

In a real-world context, it's impossible to send sensitive documents to an external provider. Hence the appeal of local processing, which in turn requires suitable hardware.

🔎 What this POC demonstrates

✔️ AI can bring real business value to document validation
✔️ The Symfony → OCR → LLM pipeline is perfectly feasible
✔️ Operators save time by focusing on the real cases
✔️ Symfony-AI integration greatly simplifies these processes

But also:

⚠️ AI isn't magic: you need to assess its relevance, workload, and costs.
⚠️ Don't underestimate the effort required to make an AI pipeline reliable and usable.

🎉 Conclusion

This project perfectly illustrates how Symfony + AI can solve very concrete problems by automating tasks that were previously done manually. It's a pragmatic, measured, and intelligent use of AI—far from being just a buzzword—and perfectly applicable to other sectors.

Thank you to Marianne for this rich and clear sharing!

🎙️ 2. The technical test: useful, useless or counterproductive?

Speaker: Jeanne Londiche

The second talk delved into a sometimes sensitive but crucial topic: technical testing in recruitment. Useful? Not always. Poorly calibrated? Often. Counterproductive? Sometimes.

💡 What was explored:

  • what the technical test measures… and what it doesn't,
  • the most common biases,
  • the impact on stress, communication, and trust,
  • how a test reveals logic and curiosity, not just code,
  • what alternatives exist for fairer and more effective recruitment,
  • interactive exchanges with the audience: feedback, anecdotes, best practices.

A talk that asks the right questions and helps both candidates and recruiters better use (or rethink) this tool.

🎙️ Talk: Technical testing – useful, useless, or downright counterproductive?

By Jeanne Londiche

For the second talk of the evening, Jeanne offered an interactive conference on a topic that many developers know well (and sometimes dread): 👉 technical testing in recruitment processes.

Objectives:

  • review the different types of tests,
  • share concrete feedback,
  • question what is actually being evaluated,
  • and ask: "Does it really help with better recruitment… or not?"

👤 Who is Jeanne?

Jeanne introduces herself:

  • She runs an open-source job board focused on recruitment (hosted on GitHub).
  • She grew up in England and has lived there since 2013.
  • She has specialized in PHP recruitment since 2011.

With over a decade of experience, she has seen all kinds of technical tests, from both the candidate and company sides.

🧪 The main categories of tests covered

Jeanne structures her presentation around several approaches:

  • multiple-choice questions / online questionnaires
  • in-house / application test (develop a small application)
  • algorithm test
  • refactoring test
  • pair programming
  • conversational technical interview

The conference focuses mainly on three very common formats: 👉 multiple-choice questions, in-house tests, and refactoring, with many concrete examples.

#1️⃣ Technical MCQs: quick but very theoretical

Multiple-choice quizzes are often offered via specialized platforms. Jeanne asks the room: “Who has already taken a technical multiple-choice quiz? Do you like them?” Almost unanimous answer: no.

✅ The positive points:

  • Quick to correct.
  • Useful for an initial screening of a large number of candidates.
  • Gives an "objective" aspect to the process (scores, thresholds, etc.).

❌ Major limitations:

  • Highly theoretical, often disconnected from the real-world experience of a developer.
  • Heavily dependent on how the questions are worded.
  • Can exclude talented candidates who may not have the "correct" definition in mind, but who would be perfectly capable of solving the problem in a real-world context.
  • Favor those who simply recite the "recipe book" rather than those with curiosity, practical experience, and common sense.

In summary: ➡️ useful for rough filtering ➡️ very insufficient for judging the true value of a developer.

#2️⃣ In-house/application testing: practical, but often time-consuming

In-house (or application) testing involves developing a small application or feature "as if it were in real life":

  • to be done at home,
  • sometimes with a time limit,
  • sometimes "to be handed in when it's ready".

In the room, several people testified: some had spent 12 to 15 hours on a single test.

✅ The positive points:

  • Closer to the reality of the job.
  • Allows you to see:
    • code structuring,
    • architectural choices,
    • test management,
    • documentation,
    • even a short summary email ("with more time, I would have done…"), which is often as important as the submitted code.
  • Leaves room for personal expression (technical choices, project organization, etc.).

❌ Major drawbacks:

  • Can be very long: several evenings or an entire weekend.
  • Very often unpaid, especially for senior positions.
  • Risk of exploitation: companies sometimes use these tests to advance their own product without ever hiring.

Many experienced developers now refuse these tests:

    "It's my job, I get paid to produce code; my experience and my CV speak for themselves."

And most importantly: 🟥 the lack of feedback is consistently cited as a major problem. Candidates spend hours on a test and, at best, receive:

    "Sorry, we are not continuing the process."

without ever understanding what went wrong.

💬 Feedback: essential… and all too often absent

Jeanne insists on one point: 👉 a technical test without feedback is a missed opportunity for both sides.

  • For the candidate:
    • it's frustrating,
    • impossible to know what was expected,
    • impossible to make progress.
  • For the company:
    • it gives a bad image of the process,
    • candidates often draw the following conclusion:

      "If even the test is poorly managed, do I really want to work with them?"

The room confirms: several people report having received:

  • vague remarks such as "not thorough enough" without explanation,
  • or no feedback at all.

Jeanne also points out that the technical test assesses the company as much as the candidate. A messy, opaque, or disrespectful process is a sign of a poor fit.

#3️⃣ Refactoring test: a format often more revealing

Jeanne then mentions a test format that she particularly likes: 👉 refactoring testing, sometimes presented as a kata.

Principle:

  • Existing code is provided, sometimes very "legacy" or intentionally flawed.
  • The candidate is asked to explain:
    • what they would do,
    • where they would start,
    • which parts they would refactor and why,
    • how they would improve readability, architecture, and testing.

Sometimes this test is done:

  • in pair programming,
  • or in an interview, based on a projected code excerpt.

✅ Why this format can be very interesting:

  • It closely resembles the reality of many assignments (we rarely work from scratch).
  • It allows us to see:
    • analytical skills,
    • understanding of business challenges,
    • legacy system management,
    • technical communication.
  • It facilitates genuine exchange: choices can be explored in depth, discussed, and challenged.

But here again, the quality of the exercise depends on:

  • the clarity of the instructions,
  • the presence or absence of a structured debrief afterwards,
  • and how time is managed (explicit or implicit timer).

🤖 And what about AI in all of this?

The question is posed: “Will AI change everything for technical testing?”

Jeanne replies:

  • Not really yet: the processes haven't been widely adapted.
  • But since AI is already changing our daily lives as developers, it would make sense for it to also change:
    • test formats,
    • what is being assessed (the ability to use tools, not just to do everything "by hand").

We are probably heading towards tests that value equally:

  • the ability to model a problem,
  • and the ability to rely intelligently on tools (including AI).

🧭 What Jeanne suggests remembering

There is no perfect technical test. But we can ask the right questions:

For businesses:

  • What do we really want to measure?
  • Does this test reflect the reality of the job?
  • Are we respecting the candidates' time?
  • Are we providing meaningful feedback?

For developers:

  • What does this test tell me about the company culture?
  • Do I really want to work in an environment that recruits this way?
  • Does the process make me want to join… or raise red flags?

Ultimately, the technical test should be:

a tool for meeting, not a torture test.

🤝 A meetup open to everyone

As always with AFUP, the evening was open to everyone: beginners, juniors, seniors, tech leads, freelancers, or simply curious individuals. The goal was simple: to share knowledge, demystify concepts, and build connections.

Participation was free but places were limited; the event showed great momentum and an engaged audience.

🎯 Why do these topics matter?

  • AI continues to have a significant impact on our PHP and Symfony applications.
  • Document quality and validation are real issues in production.
  • Tech recruitment and its methods are evolving, sometimes too slowly.
  • Meetups play a vital role: breaking down silos, sharing experience, and staying up-to-date.

📅 See you at the next meetup

The AFUP Paris branch continues its 2025-2026 program with new topics, new speakers and always the same spirit: to share, learn, and advance the PHP community.
