🔨 API Platform Conference 2025: a look back at two intense days focused on the Symfony and PHP ecosystem
on September 20, 2025
The 2025 edition of the API Platform Conference took place on September 18th and 19th. For two days, the Symfony, PHP, and API communities gathered around keynotes, experience reports, and passionate discussions. Here's a talk-by-talk recap to keep a complete record of the event's takeaways.
🎤 Day 1 – September 18
🔑 Opening Keynote – Kévin Dunglas
Introducing the new features of API Platform 4.2, with a focus on automation, real-time support (Mercure, SSE), and LLM integration. A clear vision for the future of the ecosystem.
🔑 Keynote by Kévin Dunglas – 10 years of API Platform and a major new feature
To celebrate API Platform's 10th anniversary, its creator, Kévin Dunglas, took to the stage in Lille with a major announcement: a new feature shared by API Platform and FrankenPHP.
👨💻 A community and cooperative journey
Kévin recalled his many roles: maintainer of Symfony and PHP, co-founder of the Les-Tilleuls.coop cooperative, and initiator of projects such as Mercure, FrankenPHP, and several Symfony components. He emphasized his company's cooperative model, where decisions are made democratically and profits are shared.
🎂 Ten years of API Platform
- It started as a simple Symfony bundle of 2,000 lines of code to expose a REST API.
- Today: a complete library, usable with Symfony, Laravel, or standalone, with nearly 10,000 GitHub stars and nearly 1,000 contributors.
- The spirit of the project remains the same: exposing modern APIs from plain PHP classes, while offering multi-style support.
🌐 Multi-API Support
API Platform now allows you to automatically generate:
- REST (Hydra, HAL, JSON:API)
- OpenAPI (machine-readable description)
- GraphQL
- Async APIs with Mercure and SSE
This ability to unify multiple API styles with the same code remains a unique strength of the framework.
💡 What's new: gRPC with FrankenPHP
The big announcement was the arrival of gRPC in the ecosystem.
- gRPC (created by Google) is a high-performance protocol based on Protocol Buffers and HTTP/2.
- Advantages: strongly typed, fast, efficient communication, suited to microservices, IoT, and critical systems.
- Until now, PHP could not act as a gRPC server (a technical limitation).
👉 Thanks to FrankenPHP, written in Go, it is now possible to:
- write Go extensions for FrankenPHP,
- create a gRPC server that delegates business logic to PHP workers,
- combine the best of both worlds (Go for network performance, PHP for application logic).
A prototype FrankenPHP gRPC extension is already available on Kévin's GitHub.
📈 Perspectives
- Automatically generate .proto files from ApiResource entities.
- Integrate gRPC directly as a format supported by API Platform.
- Facilitate interoperability between PHP, Go, and other languages.
🤝 A community above all
Kévin concluded by reiterating that the true strength of API Platform is its community: contributors, trainers, and developers. He also paid tribute to Ryan Reza, a major contributor who recently passed away, and called for support for his family through a fundraiser.
📌 In summary
The opening keynote celebrated 10 years of innovation and community around API Platform, while announcing a major development: ➡️ the arrival of gRPC in the ecosystem via FrankenPHP. This advancement brings Symfony, Laravel, and the PHP world even closer to the future of modern APIs.
⚡ Performance
- 180,000 requests per second explained simply – Xavier Leune An educational talk that detailed the techniques behind extreme performance: fine-grained connection management, choice of network architecture, and the importance of runtime.
Network performance and concurrency in PHP
(kernel, non-blocking IO, TCP/TLS, HTTP/2/3, DNS, fork & memory sharing)
Hello—I'm Xavier Leune. Today we're going to talk about very practical things: how to efficiently handle thousands of requests from PHP, why the CPU behaves the way it does, and what techniques to use to avoid running into a connection wall.
Context: why we waste CPU time when waiting for responses
When you're doing a lot of network requests, most of the time is spent waiting on IO, not actually computing. Yet in a typical synchronous script, plenty of CPU cycles are consumed just running a loop that polls the IO state: getting `false`, `false`, `false` over and over again. A typical result: 10 seconds of wall-clock time but 20 seconds of cumulative CPU, because the loop spins in vain.
Kernel / Non-blocking IO: Free up the CPU
The solution is to let the kernel (or runtime) handle the wait rather than spinning in userland. Two approaches:
- Naive polling: an active loop that checks without pausing → expensive in CPU.
- select / epoll / kqueue: wait for the kernel to signal a ready IO. In PHP, using the equivalent (`stream_select` or event libraries) drastically reduces iterations and CPU time: from thousands of iterations down to a few dozen.
Concretely: replace a `while (!done) { check(); }` loop with a `select` call that wakes the script up only when there are events to process.
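To make this concrete, here is a minimal sketch (my own illustration, not the speaker's code) of waiting on sockets with `stream_select()` instead of busy-polling; the host and request are placeholders:

```php
<?php
// Minimal sketch (assumption): let the kernel wake us up with stream_select()
// instead of spinning in userland.
$host = 'example.com';
$sockets = [];
for ($i = 0; $i < 2; $i++) {
    $s = stream_socket_client("tcp://$host:80", $errno, $errstr, 5);
    stream_set_blocking($s, false);
    fwrite($s, "GET / HTTP/1.1\r\nHost: $host\r\nConnection: close\r\n\r\n");
    $sockets[(int) $s] = $s;
}

while ($sockets) {
    $read = array_values($sockets);
    $write = $except = null;
    // Sleeps inside the kernel until at least one stream is readable (1 s timeout).
    if (stream_select($read, $write, $except, 1) === false) {
        break;
    }
    foreach ($read as $s) {
        $chunk = fread($s, 8192);
        if ($chunk === '' || $chunk === false) { // EOF: this connection is done
            fclose($s);
            unset($sockets[(int) $s]);
        }
        // ... otherwise process $chunk ...
    }
}
```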
Attention to the ceiling: establishing connections
Establishing a TCP connection costs: system resources, sockets, handlers, etc. If you open 2k, 4k, 8k connections in a short time, you risk:
- a full server-side backlog → SYN packets that are never handled;
- client-side timeouts (5–15 s) because the connection was never completed;
- errors visible only at scale (timeouts, refusals, drops).
Tip: throttle the number of active connections—for example, limit N simultaneous connections and only launch new ones when one becomes available. Gradually increase N based on actual behavior.
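A hedged illustration of that throttling idea, using `curl_multi` with a fixed-size pool; the URL list and the limit are hypothetical:

```php
<?php
// Minimal sketch (assumption, not the talk's code): cap concurrent curl
// handles and refill a slot only when a transfer finishes.
$urls = array_fill(0, 1000, 'https://example.com/');
$maxConcurrent = 200;

$mh = curl_multi_init();
$addHandle = function () use (&$urls, $mh): void {
    if (!$urls) {
        return;
    }
    $ch = curl_init(array_pop($urls));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
};

// Prime the pool up to the limit, not beyond.
for ($i = 0; $i < $maxConcurrent; $i++) {
    $addHandle();
}

do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh, 1.0); // sleep in the kernel until something moves
    while ($info = curl_multi_info_read($mh)) {
        curl_multi_remove_handle($mh, $info['handle']);
        curl_close($info['handle']);
        $addHandle(); // a slot freed up: launch the next request
    }
} while ($running > 0 || $urls);

curl_multi_close($mh);
```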
TLS (HTTPS): an additional costly step
After the TCP handshake, TLS adds round trips (handshakes) and cryptographic computation. This increases the connection latency. If you multiply the number of short encrypted connections, the cost per request increases significantly.
- If possible, reuse connections (connection pooling / keep-alive).
- If you must open many connections, budget for the TLS cost and test with your actual load.
Decorrelation of connections ↔ requests: multiplexing & protocol
History:
- HTTP/1.x: one request = one connection (or limited pipelining) → many connections.
- HTTP/2: multiplexing over a single connection (multiple streams), independent ordering, far fewer connections.
- HTTP/3 (QUIC): over UDP, lighter connection setup, integrated TLS, designed for high-latency, lossy mobile networks.
Practical consequences:
- With HTTP/2, you cut the number of connections and significantly raise the requests/s on a well-configured server.
- With HTTP/3, you gain resilience on unstable networks and reduced latency in some mobile scenarios, but client and server implementations can still be less mature than HTTP/2 depending on the stack.
TCP: retransmission & head-of-line blocking
With TCP (a reliable protocol), if a packet carrying part of a response is lost, TCP retransmits the block in question and everything queued behind it waits: head-of-line blocking, which can stall other multiplexed responses unless you use a more modern protocol (QUIC/HTTP/3). Hence the interest of QUIC for certain cases (mobile latency, losses), but be careful: server and client implementations and tooling must be mature.
DNS and client-side load balancing
If your domain has multiple A records (or multiple backends), how you resolve the IP affects your load distribution:
- Resolving DNS on every request can increase recipient diversity (round-robin).
- Client-side cached resolution can concentrate the load on a single backend.
- Sometimes rotating DNS resolution yourself helps distribute the load.
Handy tip: Pre-resolve backend IPs, build your queries on those IPs to force client-side round-robin if needed.
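As a hedged sketch of this tip (not from the talk), curl's `CURLOPT_RESOLVE` lets you pin a pre-resolved IP per request; the domain is a placeholder:

```php
<?php
// Minimal sketch (assumption): pre-resolve the backend IPs once, then pin
// each request to one of them for client-side round-robin.
$host = 'api.example.com'; // hypothetical domain
$ips  = array_column(dns_get_record($host, DNS_A), 'ip');

foreach (range(1, 10) as $n) {
    $ip = $ips[$n % count($ips)]; // rotate across backends
    $ch = curl_init("https://$host/resource/$n");
    // Force a "host:port:ip" resolution for this handle only.
    curl_setopt($ch, CURLOPT_RESOLVE, ["$host:443:$ip"]);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $body = curl_exec($ch);
    curl_close($ch);
}
```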
Measurements & Comparisons: HTTP/1 vs HTTP/2 vs HTTP/3 (Summary)
Classic observations on load tests:
- HTTP/1: a huge number of connections, low RPS per socket, high client CPU.
- HTTP/2: far fewer real connections, very high RPS (thousands → tens of thousands).
- HTTP/3: sometimes better on mobile/unstable links; in practice results vary by implementation, but it conceptually avoids some TCP blocking.
Conclusion: HTTP/2 is often the best compromise for most server→browser/API loads, except in mobile/extreme latency cases where HTTP/3 can help — test.
Increase CPU on the client side: fork / parallel / pcntl
Sometimes you want to fully utilize the client CPU (load testing, heavy processing). Options in PHP:
- pcntl_fork (process fork): child-process creation; simple and robust; beware of shared resources (sockets, DB).
- parallel (extension): parallel execution in lightweight threads (if available).
- pthreads (deprecated, non-CLI) and other OS-level solutions.
Important: after a fork, do not share the same open connections (sockets, DB handles) between parent and child without precautions — it breaks the flow. Two approaches:
1. Open the connection after the fork: each process gets its own socket/DB handle (see the sketch below).
2. Close and reopen the connection in the child: safe and simple.
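A minimal sketch of approach 1, assuming the `pcntl` extension and a hypothetical PDO DSN:

```php
<?php
// Minimal sketch (assumption): fork workers and open the DB connection
// *after* the fork so parent and children never share a socket.
$workers = 4;
$pids = [];

for ($i = 0; $i < $workers; $i++) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        fwrite(STDERR, "fork failed\n");
        exit(1);
    }
    if ($pid === 0) {
        // Child: open its own connection here, never before the fork.
        $pdo = new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'pass');
        // ... do a slice of the work ...
        exit(0);
    }
    $pids[] = $pid; // parent keeps track of its children
}

foreach ($pids as $pid) {
    pcntl_waitpid($pid, $status); // reap children
}
```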
Communication between processes: shared memory
When forking, you need a way to synchronize/communicate:
- Shared memory (shmop / SysV shm / ext-shm): create a shared memory segment to read/write strings, states, etc. Useful and simple.
- Unix semaphores / files / sockets: alternatives as needed.
Classic Pattern:
- the parent creates the shared memory segment (ftok + shmget),
- the child writes periodically,
- the parent reads/waits (poll/sleep),
- cleanup happens at the end (a sketch follows below).
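A minimal sketch of that pattern using the `shmop` and `pcntl` extensions (my own illustration, not the talk's demo code):

```php
<?php
// Minimal sketch (assumption): parent/child progress reporting through a
// small shared memory segment.
$key = ftok(__FILE__, 'a');              // derive an IPC key from this file
$shm = shmop_open($key, 'c', 0644, 64);  // create a 64-byte segment

$pid = pcntl_fork();
if ($pid === 0) {
    // Child: periodically publish a counter.
    for ($done = 0; $done <= 100; $done += 10) {
        shmop_write($shm, str_pad((string) $done, 8), 0);
        usleep(100_000);
    }
    exit(0);
}

// Parent: poll progress until the child reports 100%.
do {
    usleep(200_000);
    $done = (int) shmop_read($shm, 0, 8);
    echo "progress: $done%\n";
} while ($done < 100);

pcntl_waitpid($pid, $status);
shmop_delete($shm); // cleanup
```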
Resources to watch & best practices
- Limit simultaneous connection creation (throttle).
- Reuse connections (keep-alive, pools).
- Use select/epoll with non-blocking IO; do not spin.
- Test the TLS cost (handshake): measure the impact on latency.
- Monitor the server backlog: increase `somaxconn` or the server config if needed.
- After a `fork`, reopen connections on the child side.
- Pre-resolve / manage DNS if you want to distribute the client-side load.
- Measure: client CPU, server CPU, RPS, p95/p99 latency, errors/timeouts.
Demo & Results (Summary)
- Naive synchronous script (200 concurrent, 1,000 requests → slow server): very high client CPU.
- The same script using `select`/wait: client CPU greatly reduced, loop iterations divided by more than 100.
- Adding throttling on connection establishment: fewer timeouts and errors.
- Comparative HTTP/1 vs HTTP/2 vs HTTP/3 test: HTTP/2 gives the best throughput on this bench; HTTP/3 is interesting but varies by stack.
Where to find the code and continue
All demo code and scripts are available on GitHub (repository linked to the presentation) — you can clone, run the benches and adjust the settings for your infrastructure.
Conclusion
- Effective network concurrency is not just about opening more connections: it's about letting the kernel handle IO, limiting concurrent connections, reusing resources, and adapting the protocol (HTTP/2/3) to the context.
- When you scale up the load, watch the signals: backlog, timeouts, client/server CPU, network errors, and adjust your architecture.
Thanks — if you have any questions I'm here after the session, and the code is live on GitHub.
- API Platform, JsonStreamer and ESA for a skyrocketing API – Mathias Arlaud Highlighting JSON streaming to reduce memory consumption and boost the response speed of large-scale APIs.
- Scaling Databases – Tobias Petry Exploring database scaling strategies: sharding, replication, index optimization. A reminder of the database's central role in performance.
🏗️ Feedback and architecture
- API Platform in PrestaShop, a walk in the park? – Jonathan Lelièvre Concrete feedback on integrating API Platform into an existing e-commerce environment. Challenges: compatibility, performance, and gradual migration.
- API Platform x Redis – Clément Talleu A presentation of Redis use cases with API Platform to accelerate caching, sessions, and job queues.
- Design Pattern, the treasure is in the vendor – Smaïne Milianni A conceptual talk: how the patterns buried in our dependencies influence our architectures, and how to exploit them better.
- What if we used Event Storming in our API Platform projects? – Grégory Planchat A demonstration of Event Storming as a collaborative method for designing rich, coherent models.
The quest for “truth” in distributed systems
A write-up of the talk by Rob Landers (Engineering Manager, Fintech).
TL;DR
- "Truth" in software = provable facts → a source of truth (the database) + proof (the application).
- Caching accelerates… and introduces lies if poorly designed (incomplete keys, haphazard invalidations, transactional pollution, race-to-stale).
- External effects (email, payments, webhooks) do not participate in your transactions. Solution: outbox + message bus + idempotency.
- Sharding multiplies truths and destroys your guarantees (transactions, joins, migrations). Avoid it as much as possible.
- Objective: consistent caches, reliable effects, systems you can trust when (not if) failure occurs.
What is “truth” for an application?
Philosophically vague; in software we construct it:
- Bucket of facts = the database (source of truth).
- Proving these facts = your application (business logic, invariants).
- If you can stick to this simple model (App ↔ DB), stick to it.
The inevitable cache
- Performance pressure → "Let's add a cache and everything will be better."
- Yes… until the day the cache contradicts the DB and the system diverges.
The 4 classic cache traps (and how to avoid them)
a) Incomplete keys (key collision)
Anti-pattern
$key = "bookstore.revenue.$year"; // missing storeId! return $cache->remember($key, fn() => calcRevenue($storeId, $year));
Fix: Encode all dependencies in the key.
$key = "bookstore.$storeId.revenue.$year";
b) Impossible invalidations
- If your key encodes the what but not the who/when, you don't know what to invalidate on change.
- Solution: tags/groups. Example: tag by `store:$id` and `year:$year`, then invalidate by tag (see the sketch below).
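A hedged sketch of tag-based invalidation with Symfony Cache; `$redisClient` and `calcRevenue()` are hypothetical:

```php
<?php
// Minimal sketch (assumption): tag cache entries by their dependencies,
// then invalidate whole groups at once.
use Symfony\Component\Cache\Adapter\RedisTagAwareAdapter;
use Symfony\Contracts\Cache\ItemInterface;

$cache = new RedisTagAwareAdapter($redisClient); // your Redis connection

$revenue = $cache->get(
    "bookstore.$storeId.revenue.$year",
    function (ItemInterface $item) use ($storeId, $year) {
        $item->tag(["store:$storeId", "year:$year"]);
        return calcRevenue($storeId, $year); // hypothetical helper
    }
);

// When a store changes, drop every entry that depends on it:
$cache->invalidateTags(["store:$storeId"]);
```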
c) Transactional pollution
- You write to the cache before `COMMIT`.
- If the transaction rolls back, the cache broadcasts a lie (an uncommitted value).
- Rule of thumb: write/invalidate the cache after `COMMIT`.
- Implement a transaction-aware cache (post-commit hooks, sketched below), or move the "cache layer" to the DB side (materializations/indexes/query plans) to benefit from ACID properties.
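As a minimal sketch of the post-commit idea (my illustration, not Rob's code): queue invalidations and flush them only once the transaction has committed:

```php
<?php
// Minimal sketch (assumption): defer cache invalidations until after COMMIT
// so a rollback can never leave a lie in the cache.
use Symfony\Contracts\Cache\TagAwareCacheInterface;

final class TransactionAwareCache
{
    /** @var list<string> */
    private array $pendingTags = [];

    public function __construct(private TagAwareCacheInterface $cache) {}

    public function queueInvalidation(string ...$tags): void
    {
        $this->pendingTags = [...$this->pendingTags, ...$tags];
    }

    /** Call this from your commit wrapper, strictly after the DB COMMIT. */
    public function onCommit(): void
    {
        if ($this->pendingTags) {
            $this->cache->invalidateTags($this->pendingTags);
            $this->pendingTags = [];
        }
    }

    /** Call on rollback: the DB never changed, so drop the queued work. */
    public function onRollback(): void
    {
        $this->pendingTags = [];
    }
}
```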
d) Race-to-stale
- T1 updates the DB + invalidates → T2 reloads the old value between the invalidation and the commit → the cache is stale.
- Mitigations:
  - Order "COMMIT → invalidate/write" (post-commit hooks).
  - Locks/versions (ETags, object versions) in the cache.
  - Short TTLs + robust cache-aside.
Cache checklist:
- [ ] Keys = all dependencies (user, locale, filters, feature flags, etc.).
- [ ] Invalidations by tags/groups.
- [ ] Post-commit cache writes only.
- [ ] Concurrency tests (race conditions).
- [ ] Observability (hit/miss, stale rate, latencies).
External effects: truth outside the transaction
The problem
- Payments, emails, webhooks, third-party APIs: not part of your transaction.
- Irreversible effects (an email already sent), invisible state until commit, unchecked retries.
The pattern that works
1. Outbox (in your DB): before "publishing", write a message to an outbox table in the same transaction as your business data.
2. A message bus (e.g. Symfony Messenger with the Doctrine transport) reads the outbox after commit, executes the effect, and marks success/failure.
3. Idempotency keys on the provider side (payment, email): at-least-once delivery → the effect is applied only once.
```mermaid
flowchart LR
    A[App] -- Tx begin --> DB[(DB)]
    A -->|write data + outbox| DB
    DB -->|COMMIT| Q[Outbox Reader / Messenger]
    Q --> S[External service]
    S --> Q
    Q --> DB
```
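A minimal sketch of the outbox write, assuming a plain PDO connection and hypothetical `orders`/`outbox` tables:

```php
<?php
// Minimal sketch (assumption): the outbox row is written in the SAME
// transaction as the business data, so both commit or roll back together.
$pdo->beginTransaction();
try {
    $pdo->prepare('UPDATE orders SET status = ? WHERE id = ?')
        ->execute(['paid', $orderId]);

    $pdo->prepare('INSERT INTO outbox (idempotency_key, type, payload) VALUES (?, ?, ?)')
        ->execute([$orderUuid, 'order.paid', json_encode(['order_id' => $orderId])]);

    $pdo->commit(); // only now may a relay/Messenger worker pick the message up
} catch (\Throwable $e) {
    $pdo->rollBack();
    throw $e;
}
```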
Bonus
- Compensating actions: you "cancel" by moving forward (e.g. a refund).
- Idempotency on the webhook-consumer side as well (deduplication by key).
“Scaling to infinity”: why sharding is (often) a false good idea
- No more global transactions (or very expensive ones).
- No more cross-shard joins → painful denormalization.
- Long and risky migrations (long-lived multi-schema code).
- Cache keys must be expanded (to include the `shardId`).
- Search must be (re)invented.
- No single clock ("now" varies by shard).
> Recommendation: first exhaust every vertical/horizontal scaling lever short of sharding (indexes, queries, read replicas, internal partitioning, consistent caching, CQRS, projections/materialized views). Sharding comes very late on the curve.
“Reliable Truth” Recipe
1. DB = source of truth. Database invariants, constraints, transactions.
2. Cache: complete keys, driven invalidations, post-commit only, race tests.
3. Side effects: outbox + bus + idempotency (+ compensations).
4. Observability: cache metrics (hit/stale), outbox delays, retry rates, dead-letter queues, audit trail.
5. Resilience: timeouts, backoff, circuit breakers, bulkheads.
6. Tests: concurrency (races), mild chaos (disable a cache node, inject latency).
7. Do not shard while other levers remain.
Appendix — Anti-patterns & remedies
- `if ($isAdmin) { … } if ($isCustomer) { … }` → prefer `if … elseif …` when the states are exclusive; otherwise make the compound states explicit (FSM/invariants).
- Writing to the cache inside a transaction → move the write to a post-commit hook or a Doctrine listener.
- Hand-rolled invalidations → tags/groups + key "ownership" per bounded context.
- Bus without idempotency → a deterministic key (e.g. a business UUID), so at-least-once becomes effectively once.
- Non-deduplicated webhooks → a `webhook_receipts(idempotency_key, received_at, status)` table + a unique index (see the sketch below).
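A hedged sketch of that webhook-deduplication remedy, assuming PDO/MySQL and a unique index on `idempotency_key`:

```php
<?php
// Minimal sketch (assumption): a duplicate delivery fails the INSERT on the
// unique index and is acknowledged without being reprocessed.
try {
    $pdo->prepare(
        'INSERT INTO webhook_receipts (idempotency_key, received_at, status)
         VALUES (?, NOW(), ?)'
    )->execute([$event['id'], 'processing']); // $event: the decoded payload
} catch (\PDOException $e) {
    if ($e->getCode() === '23000') { // integrity violation: already seen
        http_response_code(200);     // ack so the sender stops retrying
        exit;
    }
    throw $e;
}
// ... handle the event, then mark the receipt as done ...
```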
Conclusion
“Truth” isn’t found: it’s engineered. By setting clear boundaries (DB, cache, effects), requiring time commitments (post-commit), and treating the outside as untrustworthy by default (outbox/idempotence), we build systems that fail without collapsing—and that tell the truth, even under pressure.
🛠️ Tools and practices
- Composer Best Practices 2025 – Nils Adermann Current recommendations for effectively managing dependencies: version constraints, security, build reproducibility.
🧰 Composer Best Practices 2025 — by Nils Adermann
"Many good practices haven't changed in 5–10 years… but the ecosystem, the security landscape, and our tools are changing." — Nils
🆕 What changes (2025)
- Goodbye Composer 1 / Packagist API v1
  - Upgrade to Composer 2 (mandatory for updates; Composer 1 can no longer resolve via the v1 API).
  - If you're stuck, private proxies can help… but the safest route remains migration.
- New supply-chain threats
  - Typosquatting and AI-hallucinated packages (names invented, then published by attackers).
  - Increased vigilance about what you add to `composer.json`.
- Small features that make life easier
  - `composer update --minimal-changes`: upgrade only what is strictly necessary to resolve a conflict.
  - `composer update --patch-only`: only take patch releases (x.y.Z), ideal for security hotfixes.
  - `composer update --bump` (or `composer bump`): aligns your `composer.json` constraints with the installed versions.
  - `composer audit` (auto-run on update): detects known vulnerabilities in your lock file.
🔐 Supply chain & security
- Why everyone is concerned: even a "small" site collects data → a potential target (phishing, pivot, etc.).
- Key best practices:
  - Run `composer audit` in CI and alert if vulnerabilities appear after deployment.
  - Add the `roave/security-advisories` metapackage: it prevents installing a vulnerable version.
  - Use a private Composer repository (Private Packagist / Nexus / Artifactory / Cloudsmith…):
    - Mirror artifacts (not just metadata) → protects against deletions or wild retags.
    - A reliable entry point for your builds (less direct dependency on GitHub).
  - Never retag a published version: make a new release.
  - Sponsor your dependencies (`composer fund`), the PHP Foundation, etc.: they are your supply chain.
-
🧭 Semantics & constraints (useful reminders)
- Prefer `^` (caret) to express "compatible until the next major" (`^1.2` means ≥ 1.2.0 and < 2.0.0):

```json
{ "require": { "vendor/lib": "^1.2" } }
```

- Multiple majors (often for PHP):

```json
{ "require": { "php": "^8.1 || ^8.2 || ^8.3" } }
```

- Exclude broken versions:

```json
{ "require": { "vendor/lib": "^1.2, !=1.3.2, !=1.4.0" } }
```

- Stability: `dev`, `alpha`, `beta`, `RC`, `stable` (inferred from the tag). Branches = `dev-xxx`.
🍴 Forks: temporary vs permanent
- Temporary fork (urgent hotfix)
  - Reference the VCS repository + alias the branch so it looks like 1.2.3:

    ```json
    "repositories": [{ "type": "vcs", "url": "https://github.com/me/lib" }],
    "require": { "vendor/lib": "dev-fix as 1.2.3" }
    ```

  - ⚠️ You won't get upstream updates automatically → monitor, and go back upstream as soon as possible.
- Permanent fork
  - Rename the package (`my/lib`) and replace the original:

    ```json
    "replace": { "vendor/lib": "self.version" }
    ```

  - Publish your package (e.g. on Private Packagist) and remove the VCS source from the project.
🎯 Controlled updates
- Partial updates:

```bash
composer update vendor/zebra --with-dependencies
composer update vendor/zebra vendor/giraffe --with-all-dependencies
```

- Limit the shockwave:
  - `--minimal-changes`: keep current versions whenever possible.
  - `--patch-only`: only take patches.
- Prevent accidental downgrades by raising your constraints:

```bash
composer update --bump
```

- Automate:
  - Detectors/PR bots: Dependabot, Renovate (watch out for gaps); Nils presented Conductor (a PHP/Composer-focused tool: it runs the update in your CI, understands plugins/scripts, and groups PRs better).
🧩 Monorepos
- Use `path` repositories to link your local libs (symlinked into `vendor/`):

```json
"repositories": [
  { "type": "path", "url": "packages/*", "options": { "symlink": true } }
]
```

- After changing a constraint in a monorepo lib, re-run `composer update` at the root.
🔒 The central role of composer.lock
- The lock file freezes the ENTIRE tree (exact versions + URLs).
- Always commit `composer.lock` (for applications).
- Merge conflicts (on the content hash) are intentional → reset to `main` and re-run the exact update command.
  - Tip: paste the `composer update …` command into the PR/commit message.
🚀 Reliable deployment (pipeline type)
1. CI: `composer install --no-dev --prefer-dist --no-interaction --no-progress`
2. `composer check-platform-reqs` (or during the image build)
3. Optimized autoloader dump: `composer dump-autoload -o`
4. Build an artifact (archive / Docker image) including `vendor/`
5. Deploy the artifact (zero updates in prod) → the same code everywhere, no surprises.
⚡ Caching that works (CI)
- Cache the Composer cache (`~/.composer/cache`) and, optionally, `vendor/`:
  - The Composer cache accumulates over time (ideal for multi-branch/multi-job setups).
  - Caching `vendor/` skips decompression when the state hasn't changed.
  - In Docker, leverage layers and invalidate on `composer.lock` changes.
📝 The 2025 checklist

- [ ] Composer 2 everywhere (+ Packagist v2 API).
- [ ] `composer audit` in CI + security alerting outside the update cycle.
- [ ] `roave/security-advisories` to block vulnerable versions.
- [ ] A private Composer repository to make artifacts more reliable.
- [ ] Frequent updates (Renovate/Dependabot/Conductor), small and regular.
- [ ] `--minimal-changes`, `--patch-only`, `--bump` in your routine.
- [ ] Commit `composer.lock` and document update commands.
- [ ] Cache the Composer cache (+ `vendor/` depending on context).
- [ ] Never retag; publish a new version.
- [ ] Support your dependencies (`composer fund`).
- Extending the Caddy web server with your favorite language – Sylvain Combraque A presentation of Caddy's extension points for integrating custom features directly into the server.
- Growing the PHP Core — One Test at a Time – Florian Engelhardt A plea for contributing to the language through targeted tests. Every test counts toward strengthening PHP.
Become a PHP contributor… by writing tests
(and a bit of history from 1993 to today)
👋 Introduction
Hi, my name is Florian. I work on the Profiler team at Datadog, where I build a continuous profiler for PHP. I also contribute to open source: PHP core, PHPUnit, GraphQLite… and I co-maintain the parallel extension (multithreading in PHP). I do all this… while being married and the father of 5 children. The bottom line: you can always find a little time to contribute 😉
🧒 Personal prehistory
- 1993: first PC (an IBM PS/2, 286) + a book "GW-Basic for absolute beginners" → first steps in coding.
- 1995: I discover the Internet, HTML/CSS/JS/Perl. We deploy via FTP + F5.
- 2000: I join a web agency in Germany. Two teams: JSP and PHP. I'm put on the PHP side. Someone shows me `echo 1+1;` → F5 → 2. I answer: "No one will ever use that." 😅 Then they show me MySQL, real code, real projects… and I finally understand what a software engineer does all day.
🧭 Why this talk
In my 25-year career, PHP has given me everything. I wanted to give back to the community, but without starting by writing C or an RFC. I discovered PHP TestFest (2017 edition): the idea is simple—write tests for PHP. Perfect for learning the codebase and contributing right away.
🔧 Build PHP & Run the Test Suite
Compile from source
```bash
git clone https://github.com/php/php-src.git
cd php-src
./buildconf
./configure
make -j"$(nproc)"
sapi/cli/php -v   # check: PHP 8.x-dev
```
Run the tests (in parallel)
```bash
# From the repo root
make test              # sequential
# or in parallel
php run-tests.php -j10
```
- 18k+ tests, with pass/skip/fail clearly listed.
- Many skips if extensions are not compiled.
- A final report with stats and possible failures (to investigate).
🧪 PHPT: the PHP test format
A test is a `.phpt` file made of sections:

- `--TEST--` short title (+ `--DESCRIPTION--` if needed)
- `--EXTENSIONS--` dependencies (e.g. `zlib`)
- `--SKIPIF--` skip logic (OS, network, etc.)
- `--FILE--` the PHP code under test (often `var_dump`)
- `--EXPECT--` the expected output
- `--CLEAN--` housekeeping (isolated from `--FILE--`)
Tip: each section runs in isolation → no shared variables.
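As an illustration (not one of the talk's tests), a complete minimal `.phpt` could look like this:

```phpt
--TEST--
strrev() reverses a simple ASCII string
--FILE--
<?php
var_dump(strrev("api"));
?>
--EXPECT--
string(3) "ipa"
```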
🧩 Real example: testing zlib_get_coding_type()
Context
- PHP can compress the output automatically if `zlib.output_compression=On` and the client sends `Accept-Encoding`.
- The `zlib_get_coding_type()` function returns:
  - `false` if no compression,
  - `"gzip"` / `"deflate"` depending on the algorithm PHP will use.
Case ideas to test
1. No `Accept-Encoding` → `false`
2. `Accept-Encoding: gzip` + compression Off → `false`
3. `Accept-Encoding: gzip` + compression On → `"gzip"`
The pitfalls encountered (and what they taught)

1. Headers already sent
   - If you print something before changing the INI setting, PHP sends the headers → you can no longer change the compression.
   - Solution: buffer the output (store it in a variable; don't `echo` too early).
2. Copy-on-write superglobals
   - Changing `$_SERVER['HTTP_ACCEPT_ENCODING']` in userland does not change the internal value used by the engine.
   - Solution: use the `--ENV--` section of the `.phpt` file to inject `HTTP_ACCEPT_ENCODING=gzip` at the start of the test process.
3. Watch the output
   - With compression enabled, the output becomes… gzip binary.
   - Solution: capture, change the INI, then emit the expected output in clear text for the `--EXPECT--` section.

Result: a robust final test, merged (into PHP 7.3 at the time), and coverage gained on previously untested branches.
🎁 What I learned along the way
- Superglobals (`$_SERVER`, `$_GET`, `$_POST`…) are copy-on-write → the internal original remains immutable.
- `ini_set()` is not magic: once the headers are sent, it is sometimes too late to change behavior that should have been declared in the HTTP response.
- There are hidden treasures: while hunting for coverage, I (re)discovered ZipArchive, etc.
- The PHPT format is not reserved for the core: PHPUnit can also execute these files, which is useful for testing a SAPI/CLI or a binary.
🚀 Why you should write tests for PHP
- You stabilize the ecosystem for everyone.
- You learn the engine step by step, without writing a single line of C.
- You become… a PHP contributor (and that's cool ✨).
Where to start (5-minute checklist)
1. Fork `php-src`, then `buildconf && configure && make`.
2. `php run-tests.php -j8` for a first run.
3. Open Codecov/coverage → find simple red spots (switch/return branches).
4. Write one `.phpt`: `--ENV--`, `--EXTENSIONS--`, `--FILE--`, `--EXPECT--`.
5. `make test TESTS=path/to/your-test.phpt`
6. A small, targeted PR with clear explanations → an easier merge.
🧑💻 Final word
We don't do this because it's easy, we do it because we think it's going to be easy... and we learn along the way. Thank you—and if you have any questions, I'm here!
- MongoDB: Ask more from your database – Jérôme Tamarelle Overview of MongoDB's advanced features (aggregations, complex queries) in an API context.
🧑💻 FrankenPHP in the spotlight
- How Clever Cloud Redesigned Its Way of Deploying PHP Applications with FrankenPHP – Steven Le Roux & David Legrand Feedback on integrating FrankenPHP into a PaaS. Gains in efficiency, simplicity, and performance.
- FrankenPHP in production, migrating an e-commerce site – Loïc Caillieux A real-life case of migrating a project to FrankenPHP. Performance figures and feedback on stability.
💡 Other notable talks
- Mercure, SSE, API Platform and an LLM raise a chat(bot) – Mathieu Santostefano An experiment building a real-time chatbot with Mercure and API Platform, enriched by an LLM.
- How API Platform 4.2 is Redefining API Development – Antoine Bluchet (Soyuka) A detailed presentation of the new features in 4.2: new filters, DX improvements, better scalability.
🎉 10 years of API Platform & release of 4.2 (live on stage)
“The release goes out right after the talk — the Wi-Fi is playing tricks on me.” — Antoine
🚦 Retro 4.0 → 4.2
- 600 commits and 200,000 lines modified; 300 issues opened, two-thirds of which are closed.
- Thanks to Les-Tilleuls.coop for sponsoring Antoine full-time.
🧩 Metadata: declare & modify more easily
- New PHP declaration style (in addition to attributes/YAML), ported from Symfony.
- Targeted mutators: `AsResourceMutator` / `AsOperationMutator` (plus an `OperationMutator` interface) to adjust an operation or resource without hassle, useful for bundle authors (see the sketch below).
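A hedged sketch of what an operation mutator can look like, based on the names above; the attribute arguments and the operation name are assumptions, so check the 4.2 docs:

```php
<?php
// Hedged sketch (assumption): tweak one operation without redeclaring the
// whole resource. The operation name below is hypothetical.
use ApiPlatform\Metadata\AsOperationMutator;
use ApiPlatform\Metadata\Operation;
use ApiPlatform\Metadata\OperationMutator;

#[AsOperationMutator(operationName: '_api_/books/{id}{._format}_get')]
final class BookGetMutator implements OperationMutator
{
    public function __invoke(Operation $operation): Operation
    {
        // Return a modified copy; metadata objects are immutable.
        return $operation->withDescription('A single book, tweaked via a mutator.');
    }
}
```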
🔎 Filters, finally decoupled (doc ↔ transformation ↔ validation ↔ SQL)
Historically, a filter mixed description, SQL strategy, etc. In 4.2, we separate the responsibilities:
- Documentation:
  - `JsonSchemaFilterInterface` declares a parameter's schema (the inferred type drives automatic coercion/validation on the PHP side).
  - `OpenApiParameterFilterInterface` declares OpenAPI parameters (can override the JSON Schema).
- Filtering: the storage interfaces are unchanged (ORM/ODM/Elasticsearch…).
- Filters become simple callables, without DI, that receive typed parameters.
🧭 Unified HTTP parameters
- New `Parameter` (query) and `HeaderParameter` (header) attributes with advanced options (type, array, formats, coercion, etc.), as sketched below.
- Parameters are declared on the operation, independent of entity properties.
- Free-text search via `q=` (Hydra style) works out of the box.
- Composite filters become possible, closing very old tickets.
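A hedged sketch of declaring such parameters on an operation; the class and option names are assumptions based on the talk, so verify against the docs:

```php
<?php
// Hedged sketch (assumption): typed, documented parameters declared directly
// on the operation. The resource and parameter names are hypothetical.
use ApiPlatform\Metadata\GetCollection;
use ApiPlatform\Metadata\HeaderParameter;
use ApiPlatform\Metadata\QueryParameter;

#[GetCollection(
    uriTemplate: '/books',
    parameters: [
        'q'           => new QueryParameter(description: 'Free-text search'),
        'X-Tenant-Id' => new HeaderParameter(required: true),
    ],
)]
class Book
{
    // ...
}
```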
🔗 “Smart” path parameters
`Link` extends `Parameter`, with a dedicated provider to resolve a linked resource (e.g. a `company` injected as a ready-to-use entity into the provider).
📜 OpenAPI & JSON Schema: lighter, cleaner
- Schema pooling: one base schema plus enrichments (JSON-LD, JSON API, …) via `$ref` → 30% smaller large specs, less I/O.
- ⚠️ If you were asserting the exact shape of the schemas, expect diffs (functional validation is unchanged).
- A new, stricter, up-to-date validator; many inconsistencies fixed.
⚡ Performance: FrankenPHP in worker mode, figures to support it
- Benchmarks of NGINX/PHP-FPM vs FrankenPHP (with an optimized "sweet spot" config).
- Without worker mode: equivalent. With worker mode: more RPS, latency halved on a Sylius page.
- Key message: enable worker mode. (And go tease those who haven't 😉)
🧱 State Options: links and sub-resources… painlessly
- For specific sub-resources, a dedicated callback yields a clear `WHERE` clause and avoids large automatic joins.
- The entity-class magic is modernized with Symfony's ObjectMapper:
  - Your API shape no longer has to match the Doctrine entity.
  - You annotate with `#[Map]` to describe correspondences (e.g. `firstName` + `lastName` → `username`).
  - Clean, maintainable bidirectional mapping (see the sketch below).
🛒 Real case (Sylius + JSON-LD / schema.org)
- Expose a schema.org-compliant product sheet in JSON-LD even though the Sylius entity doesn't match.
- A provider reads Sylius → ObjectMapper remaps → the Serializer emits the JSON-LD.
- Use the API Platform profiler (content negotiation, provider, serializer, etc.) to see where the time goes (often serialization).
🧵 Built-in JSON Streamer: Serialize large payloads faster
- Integration of the Symfony JsonStreamer (+ TypeInfo) for JSON and JSON-LD.
- Principle: a precomputed schema, streamed character by character.
- Measured gains: up to +32% RPS in Antoine's tests (the bigger the object, the bigger the gain).
- Activation: the `json_stream: true` operation option.
  - ⚠️ Requires public properties (otherwise, stick with the classic Serializer).
  - To go further: see Mathias Arlaud's dedicated talk.
🧡 Laravel: functional coverage booming
- Since last year's introduction: 124 PRs and 100 issues addressed.
- 80–90% of API Platform features are now operational on the Laravel side (including HTTP cache).
- Thanks to the top Laravel contributors. Deployment on Laravel Cloud was presented by Joe Dixon.
🧪 Availability & Compatibility
- 4.2: released right after the talk (a beta already exists).
- Main breaking change: the JSON Schema format (form, not substance).
- OpenAPI defaults adjusted (low risk of impact).
- Parameters: no longer experimental; adopt them.
🛣️ Roadmap to 5.0
- Deprecate `#[ApiFilter]` in favor of the parameters system (assisted migration: a script, with compatibility kept for a long time).
- Extend the JsonStreamer to other formats; feedback and tests welcome.
- Keep maturing ObjectMapper (Symfony) through concrete uses in the ecosystem.
✋ Key points (TL;DR version)
- Unified parameters (typed, documented) + decoupled filters ⇒ DX and precision.
- Lighter, stricter OpenAPI.
- FrankenPHP (worker mode) ⇒ a real performance boost.
- ObjectMapper ⇒ a clean API even if your entities aren't.
- JsonStreamer ⇒ faster big payloads.
- Laravel: we're (almost) there, feature by feature.
- How Laravel Cloud Uses FrankenPHP in Production – Florian Beer Focus on the synergy between Laravel Cloud and FrankenPHP.
🚀 Context
Florian Beer (Laravel Cloud infrastructure team) explained how the zero-ops platform launched in February allows you to deploy a Laravel app "in one minute" (GitHub/GitLab/Bitbucket connection → Deploy → public URL). The goal: no client-side infrastructure management (servers, containers, scaling, network... everything is managed).
⚙️ Octane: long-running execution
- Without Octane: Laravel runs on PHP-FPM; the application boots for each request.
- With Octane: the app boots once and stays in memory; requests are served by a long-running worker.
- Octane supports multiple servers; Laravel Cloud chose FrankenPHP.
🧩 Why FrankenPHP?
FrankenPHP (based on Caddy) provides:
- HTTP/2 & HTTP/3, Early Hints, automatic TLS,
- a powerful worker mode,
- easy integration into the Laravel/Octane ecosystem.
In practice on Laravel Cloud, enabling Octane means switching to FrankenPHP (with a way back to PHP-FPM if necessary).
🎬 Live demo (step by step)
1. Create an app from a template on Laravel Cloud (Frankfurt region).
2. Initial deployment → the app is served via PHP-FPM.
3. `composer require laravel/octane`, then add a "runtime" route to expose runtime info.
4. Push → auto-deploy (container build, publish).
5. Flip the switch: enable Octane in the interface → redeploy.
6. The "runtime" route now shows FrankenPHP as the runtime.
💡 Warning: In worker mode, monitor for memory leaks on the application code side (customer responsibility). The platform facilitates activation but does not "garbage-collect" your business logic.
🏗️ Under the hood of Laravel Cloud
The platform maintains two families of Docker images:
- PHP-FPM (classic),
- FrankenPHP (Octane).
The pipeline takes your repo, builds the image, pushes it, and attaches the service to the public network.
🤝 Performance & collaboration
- Direct collaboration with Kévin Dunglas to optimize FrankenPHP on a wide variety of workloads (from side-projects to high-traffic SaaS). * Result: significant performance gains already observed on the client side.
✅ Issues & best practices
- When to switch to Octane/FrankenPHP? Intensive I/O, critical latency, hot endpoints, busy web/API workloads.
- Points of attention:
  - global state & singletons (properly re-initialized between requests),
  - connections (DB, cache) managed cleanly across the worker lifecycle,
  - observability (metrics, memory usage per worker).
🧭 Key message
> On Laravel Cloud, Octane + FrankenPHP is activated with one click. You retain the simplicity of zero-ops while taking advantage of the modern runtime and worker mode for performance.
- Help! My Tech Skills Have an Expiration Date – Helvira Goma Reflecting on the rapid obsolescence of skills and how to stay relevant in a constantly changing industry.
🎤 Day 2 – September 19
🔑 Keynotes
- Nicolas Grekas: Symfony's status, new features, and roadmap.
- Fabien Potencier: Symfony's long-term vision and a focus on AI-related components.
A detailed recap of Fabien Potencier's keynote on LLMs, agents, and the future of APIs, keeping its pragmatic tone and concrete examples.
1) Why this talk?
- The AI world moves so fast that what I say today may be obsolete tomorrow.
- Goal: understand how LLMs and agents are changing the way we design APIs.
Who here uses an LLM to code (almost) every day? Who has never called an API? Try it 😉
2) What is an "agent"?
- Definition (Anthropic, summarized): a model that uses tools in a loop.
- Mental model: prompt → choose a tool → observe → iterate → produce a result.
- Possible tools: a web browser, an SDK/API, a local executable, a hand-rolled function…
- Important: both "human" and "machine": it plans, has memory, takes initiative, but remains a program.
3) 30 years of interfaces: from website to agent
- 90s: sites for humans (pure HTML, then CSS/JS).
- CLI: for devs/ops.
- APIs: machine-to-machine (mashups!), internal or public, with expectations of completeness and determinism.
- New: agents interact with everything:
  - websites (scraping / a browsing tool),
  - CLIs (via MCP servers),
  - APIs (via SDKs or direct HTTP).
4) Current APIs: perfect for programs, not for agents
- Strict inputs/outputs (OpenAPI/JSON), errors via HTTP status (400, 422, 429, etc.).
- For deterministic apps this is perfect: on error, a human fixes the code.
- But an agent must recover on its own: it needs courses of action, not just "400 Bad Request".
5) When an agent bumps into your errors
- 400 / 422 / 429: the agent sees the code… and guesses (sometimes wrongly): missing field? bad format? retry later?
- Bad loop: it tries, fails, googles, rereads the docs, tries again… → slow, expensive, non-deterministic.
- Worse: many SDKs (e.g. in Python) only surface the status code → the detailed error body is lost.
6) Making errors actionable
- In the response body (not just the status code):
  - a problem title + actionable detail,
  - a link to a specific documentation page (not the doc root),
  - a concrete proposal: "the `date` field must be in `YYYY-MM-DD` format", "`quantity` ≤ 100", "this endpoint is deprecated, use `/orgs/{id}/projects`".
- Benefits: fewer iterations, fewer tokens, lower cost, fewer hallucinations.
Symfony has long supported structured errors (RFC 7807 "problem" responses): take advantage of it to standardize your error payloads.
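For instance, a minimal RFC 7807 `application/problem+json` response in Symfony might look like this (the URLs and fields are placeholders):

```php
<?php
// Minimal sketch (assumption): an error payload that gives an agent a course
// of action, not just a status code. Return it from your controller.
use Symfony\Component\HttpFoundation\JsonResponse;

$problem = new JsonResponse(
    [
        'type'     => 'https://docs.example.com/errors/invalid-date', // hypothetical doc URL
        'title'    => 'Invalid date format',
        'status'   => 422,
        'detail'   => 'The "date" field must be in YYYY-MM-DD format.',
        'instance' => '/orders/42',
    ],
    422,
    ['Content-Type' => 'application/problem+json'],
);
```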
7) Consistency > intelligence
- LLMs love predictability: pick a style and stick to it.
- `user_id` everywhere (not `userId` here and `author_id` elsewhere).
- Field names, URL names, formats: consistency.
- Otherwise the agent "guesses"… and gets it wrong.
8) "AX" documentation (Agent eXperience)
- Unified, up to date, centralized: avoid outdated pages and fake examples (LLMs copy them).
- Tracks:
  - llms.txt (an inventory for LLMs),
  - each page viewable as Markdown (LLMs read MD very well),
  - path guides (e.g. "buy a product": auth → cart → address → payment),
  - documenting the possible errors per endpoint, how to resolve them, and providing correct examples.
A bad example in context can “contaminate” an LLM’s answers for hours.
9) Minimize round trips with the agent
- One API call: 10–100 ms; one LLM call: seconds.
- Fewer rounds = faster, cheaper, more stable.
- Idea: expose a few high-level, task-oriented endpoints ("checkout", "full export", "provision a project") alongside your low-level endpoints, so 1 call can replace 5.
10) Testing… the non-deterministic
- Agents are not deterministic, yet tests are needed:
  - low temperature, retry limits, more constrained prompts,
  - metrics (success rate, latency, costs) and dashboards,
  - accept the "grey" (good-enough scenarios).
11) Tokens: where the bill stings
- Billing is per token (not per character).
- Surprising impacts:
  - short English words = 1 token; French/accents/Unicode = often several;
  - random UUIDs & IDs tokenize very expensively;
  - `category_id` can be 1 token depending on the tokenizer; `DoneAt` vs `CompletedAt` does not always make a difference.
- Verbose JSON is expensive; structured Markdown is often more "readable" for the model and tokenizes smaller.
- Long context ≠ precision: the larger the context, the more the agent gets confused. Segment what you expose to agents (MCP, sub-APIs).
12) Credentials & security: don't let the agent play with fire
- Never put secrets in a prompt.
- Prefer a tooling proxy (e.g. an MCP server) that holds the keys, makes the calls, and restricts permissions.
- Give the agent scoped tokens (read-only, minimal scope).
- For 429 (rate limit): say what to do ("Retry-After: 3", recommended backoff, per-minute quota, etc.).
13) Some recipes you can apply tomorrow
- Actionable errors + specific links; standardize status/body.
- Deprecation: report it in the response AND the docs; propose the alternative.
- Macro (task-oriented) endpoints in addition to micro ones.
- Consistent names and formats.
- Central documentation in Markdown, indexed (llms.txt).
- Limit JSON verbosity, avoid gigantic IDs; paginate.
- Integrate an MCP server to properly expose your tools/SDKs to agents.
14) From DX/UX to AX (Agent eXperience)
We've made great progress on DX and UX. The next step is AX: designing APIs that are understandable, actionable, and predictable for clients… that reason.
What you do for agents also benefits humans: better errors, better documentation, less friction.
Conclusion
- Agents are already using your APIs.
- Help them: fewer round trips, errors that guide, consistency, usable documentation, controlled security.
- The future of APIs is not just machine↔machine: it's reasoning machine ↔ well-designed service.
Thank you 🙏 — questions welcome!
🏗️ Architecture and REX
- 2025, an API Platform Odyssey – James Seconde An overview of API Platform's past and future evolutions.
- Deploying API Platform on Laravel Cloud – Joe Dixon A concrete example of integrating and deploying API Platform on the Laravel Cloud platform.
- Headless & Scalable: Designing a Decoupled Application with API Platform and Vue.js – Nathan de Pachtere A demonstration of a headless project with API Platform as the backend and Vue.js as the frontend.
- A seamless multi-tenant API with API Platform, Symfony and PostgreSQL – Mehdi Zaidi Technical strategies for serving multiple clients from a single API instance, leveraging PostgreSQL.
🛠️ Tools and best practices
- Make your front-end devs happy with RFC 7807 – Clement Herreman How to normalize API errors with RFC 7807 for better front-end DX.
- Symfony and Dependency Injection: From Past to Future – Imen Ezzine A history of, and projection on, the evolution of dependency injection in Symfony.
- Type System and Subtyping in PHP – Gina Peter Banyard A theoretical and practical presentation of PHP's type system, with an academic perspective.
- PIE: The Next Big Thing – Alexandre Daubois A look at a new tooling proposition that could change how we work with PHP extensions.
PIE: the tool that reconciles PHP and its extensions
Towards a “composer for extensions”, supported by the PHP Foundation
TL;DR
Install, update, and uninstall PHP extensions painlessly, with dependency management, signatures, PHP version detection, composer.json integration, and more. PIE delivers exactly that. Designed and funded by the PHP Foundation, PIE leverages the Packagist ecosystem for metadata, automates php.ini editing, supports private GitHub repositories, Windows, Linux, and macOS, and aims to replace legacy PECL/pickle usage.
Why a new tool for extensions?
In our projects, installing a PHP library is trivial (`composer require …`). Installing an extension (Redis, MongoDB, Xdebug, PCOV, etc.), on the other hand, often means:
- system dependencies,
- `./configure && make && make install`,
- variations by OS/ABI/PHP version,
- manual editing of INI files,
- fragile consistency between CI/dev/prod environments.
Initiatives have tried to smooth out this friction (PECL/pickle, the Docker PHP Extension Installer), but with limitations: a slow, hard-to-maintain site, no generalized signatures, imperfect detection of PHP compatibility, tight coupling to Docker, etc.
PIE was born from this observation: bring the Composer experience to extensions.
PIE in two sentences
- What it is: an extension manager that automatically downloads, builds (or fetches binaries when relevant), installs, and enables your PHP extensions.
- What it changes: you treat your extensions as project dependencies (Packagist metadata, version constraints, `composer.json` integration), but with the intelligence the extension world requires (C/Rust/Go, compilation, DLL/SO, ABI, etc.).
Key Features
- Simplified installation:

```bash
# Download sources into the local cache
pie download redis
# Build (configure/compile) for your platform
pie build redis
# All-in-one: download + build + install + enable
pie install redis
```

- Automatic update of `php.ini`: no need to manually add `extension=…`; PIE enables the extension in the right configuration file.
- Smart compatibility (PHP/OS/arch): extension authors can restrict OS compatibility and declare PHP min/max bounds; PIE cleanly refuses anything that doesn't match.
- Signatures and verification: PIE can consume signed artifacts (e.g. GitHub Releases) and verify their integrity before installation.
- Private & monorepo friendly: add repositories as with Composer (VCS, local path, Private Packagist, etc.), ideal for private extensions.

```bash
pie repo add my-ext vcs https://github.com/acme/php-ext-foo.git
```

- Reads `composer.json`: a simple `pie install` in your project lets PIE scan your `composer.json` (e.g. `require: { "ext-redis": "*" }`) and install any missing extensions. 🪄
- Clean uninstall:

```bash
pie uninstall xdebug
```

- Multi-PHP support: install for a specific PHP binary (useful in a multi-version CI):

```bash
pie install pcov --with-php-path=/usr/bin/php8.3
```

- Windows first-class: on Windows, PIE fetches precompiled DLLs when available; on Linux/macOS it compiles by default (classic and reliable).
- Symfony CLI integration: regular users can drive PIE via:

```bash
symfony pie install xdebug
```
Where are the extension packages?
PIE relies on Packagist to index metadata (name, version, PHP/OS constraints, sources, signatures, etc.). PIE-compatible extensions are published under a dedicated vendor namespace (e.g. packagist.org/extensions/...) or via your own repositories. 👉 Consequence: the same reflexes as Composer (semantic versioning, ranges, private repositories).
Typical workflow (developer & CI)
1. Declare your requirements (in the README and/or via `composer.json`: `ext-…`).
2. Developer:

```bash
pie install          # installs all extensions the project requires
php -m | grep redis
```

3. CI:
   - Cache the PIE cache and build artifacts for speed.
   - OS × PHP matrix: PIE handles the build and activation differences.
   - Avoid pipeline-specific `apt-get`/`brew`: PIE centralizes this.
Quick Comparisons
| Need | PECL/pickle | Docker Ext Installer | PIE |
| --- | --- | --- | --- |
| Local installation without Docker | Medium | No | Yes |
| PHP/OS version detection | Partial | N/A (Docker) | Yes (metadata) |
| Signatures & verification | Heterogeneous | N/A | Yes |
| Auto-activation (`php.ini`) | No | N/A | Yes |
| Private repositories | Complicated | No | Yes (VCS, Private Packagist) |
| Reading `composer.json` | No | No | Yes |
| Windows | Variable | No | Yes (DLL) |
Express FAQ
Is there a `.lock` per project like Composer? No. An extension is installed at the system/PHP-binary level. PIE tracks what it manages (`pie show`) and respects the target PHP version (`--with-php-path`). Reproducibility happens at the CI level (matrix/OS/versions) and via your constraints.

Can I use PIE with private GitHub sources? Yes: PIE reads `GH_TOKEN` and authenticates downloads of private artifacts.

Precompiled binaries on Linux/macOS? By default no (local compilation = ABI robustness), but yes on Windows (DLLs).

Is PIE officially replacing PECL/pickle? Adoption is going through the RFC/vote process on the PHP side; the direction is to recommend PIE as the preferred path. Either way, you can use it right now.
Best practices to adopt today
- Declare your extensions in `composer.json` (`"ext-redis": "*"`) and document the supported PHP versions.
- Standardize your CI pipelines around `pie install` (rather than OS-specific scripts).
- Publish complete metadata on the extension side: PHP/OS constraints, signatures, build instructions.
- Cache the PIE cache in CI and pin extension versions in production (via stable tags).
Conclusion
PIE finally brings to PHP extensions the ergonomics and reliability that Composer brought to libraries: a unified, reproducible, scriptable, multi-platform workflow, tailored to modern realities (monorepos, private code, CI, Windows).
If you've ever said "no" to an extension because it seemed risky or time-consuming to install… try again with PIE. You might even get a taste for it. 🚀
Appendices – Command Reminder
```bash
# Inventory of extensions managed by PIE
pie show

# Add a private extension repository
pie repo add my-ext vcs https://github.com/acme/php-ext-foo.git

# Download the sources
pie download xdebug

# Build for the current OS/PHP version
pie build xdebug

# Install and enable
pie install xdebug
pie install pcov --with-php-path=/usr/bin/php8.3

# Uninstall
pie uninstall xdebug
```
🌍 Society
- Where Have the Women in Tech History Gone? 2.0 – Laura Durieux Inspiring conference highlighting the place of women in history and the importance of inclusion in tech.
🎉 Closing
A unifying speech concluded the conference, highlighting the importance of community and setting a date for the 2026 edition.
📌 Conclusion
This 2025 edition was marked by:
- The omnipresence of FrankenPHP, featured in most of the experience reports.
- The rapid evolution of API Platform 4.2, focused on automation, performance, and real time.
- An emphasis on best practices: Composer, API filters, error handling, types.
- A community that keeps innovating while addressing human and societal issues.
A must-attend event for any developer who wants to stay at the forefront of PHP and Symfony technologies.