Ask Codex
on April 22, 2025
🚀 Codex is OpenAI's newly open-sourced CLI tool for local agents. Just five days after its release, I jumped in to explore how I could take it further. In this article, I'll walk you through how I connected Codex with Uniflow to automate code understanding and summarization.
✨ What Is Codex?
💡 Codex lets you ask questions about your codebase via the terminal. You install it with:
npm install -g @openai/codex
Then you can run prompts like:
codex -q --json "explain utils.ts"
This example is one of the official use cases published by OpenAI on their GitHub repository.
⚙️ Why Automate It With Uniflow?
I didn't just want to run commands manually. I wanted to industrialize the interaction and make Codex a callable tool inside Uniflow. This allows ChatGPT to delegate execution to Codex and return human-readable insights.
🛠️ Building the Workflow
Inside Uniflow, I created a new project called AskCodex. Using Uniflow's Node client, I structured the workflow to:
- Set the OPENAI_API_KEY as an ENV variable
- Define a custom prompt such as: Use Codex to explain utils.ts
- Register runCodex as a tool inside the ChatGPT request
- Have ChatGPT call runCodex with the prompt
- Execute the CLI tool via Uniflow's batch bridge
- Summarize Codex's output using a second call to ChatGPT
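The steps above can be condensed into a small control-path sketch. The full flow code appears later in this article; here the two helpers are passed in as parameters purely to make the sequence readable, and the async details are simplified:

```javascript
// Simplified outline of the AskCodex flow: ChatGPT picks the tool,
// Codex runs, and a second ChatGPT call summarizes the raw output.
// askChatGPT and runCodex are the helpers defined in the article's flow.
function askCodexFlow(askChatGPT, runCodex, prompt) {
  // 1. Ask ChatGPT with runCodex registered as a tool.
  const first = askChatGPT(prompt);
  const calls = first.choices?.[0]?.message?.tool_calls || [];
  // 2. Execute every runCodex tool call via the CLI.
  const outputs = calls
    .filter((c) => c.function?.name === 'runCodex')
    .map((c) => runCodex(JSON.parse(c.function.arguments).prompt));
  // 3. Summarize the raw CLI output with a second ChatGPT call.
  const summary = askChatGPT(`Summarize this content:\n\n${outputs.join('\n')}`);
  return summary.choices?.[0]?.message?.content;
}
```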
📦 Example Output
In my test, Codex successfully read and explained utils.ts. It identified helper functions like:
- sleep() → creates a delay
- isNil() → returns true for null or undefined
- saveAsJSON() → handles object persistence
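Based on Codex's descriptions, a utils.ts with these helpers might look roughly like the sketch below. The real file's code was not shown in the output, so this is only a hypothetical reconstruction:

```typescript
import { writeFileSync } from 'fs';

// Pauses execution for the given number of milliseconds.
function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// Returns true for null or undefined, false for everything else.
function isNil(value: unknown): value is null | undefined {
  return value === null || value === undefined;
}

// Persists an object to disk as pretty-printed JSON.
function saveAsJSON(path: string, obj: unknown): void {
  writeFileSync(path, JSON.stringify(obj, null, 2));
}
```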
After the tool runs, ChatGPT then provides a clean summary of the file.
🧠 Limitations and Future Ideas
❌ Codex currently doesn't maintain context across calls. It would be amazing to have a persistent interactive API mode where the model can ask clarifying questions.
🔧 Until then, we simulate interactivity with Uniflow by re-prompting GPT with additional information.
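One way to fake that interactivity is to thread Codex's previous answer back into the next ChatGPT prompt as conversation history. A minimal sketch — buildFollowUp is a hypothetical helper, not part of Uniflow or Codex:

```javascript
// Appends Codex's last output and the user's follow-up question to the
// running conversation, so the next ChatGPT call has the prior context.
function buildFollowUp(history, codexOutput, question) {
  return [
    ...history,
    { role: 'assistant', content: codexOutput },
    { role: 'user', content: question }
  ];
}

// Example: start from the original prompt and ask a follow-up.
const messages = buildFollowUp(
  [{ role: 'user', content: 'explain utils.ts' }],
  'utils.ts exports sleep(), isNil() and saveAsJSON().',
  'What does saveAsJSON() write to disk?'
);
```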
💫 Run It Yourself
If you want to test locally:
npm install -g @openai/codex
export OPENAI_API_KEY=your_key
codex -q "explain utils.ts" --json
🎥 Try the Full Automation
To reproduce what I did in this video, run your own flow inside Uniflow using its Node client. My example uses the CLI + Docker API and connects to Uniflow locally at port 9016.
Here are the resources I used:
Flows:
- Object Variable: env = { openai_api_key: "<your OpenAI API key>" }
- Text Variable: codex_prompt = Use the Codex tool and ask it to "explain utils.ts"
- Function Code:
// Tool function to run Codex CLI
function runCodex(prompt) {
  console.log(`🛠️ Running Codex with prompt:\n${prompt}`);
try {
const sanitizedPrompt = prompt.replace(/"/g, '\\"');
const command = `OPENAI_API_KEY=${env.openai_api_key} codex -q "${sanitizedPrompt}" --json`;
const output = Bash.exec(command, { encoding: 'utf-8' });
return output;
} catch (error) {
return 'Codex error: ' + (error.stderr || error.message);
}
}
// Function to call ChatGPT with fetch
function askChatGPT(prompt) {
  console.log(`🤖 Asking ChatGPT:\n${prompt}`);
return fetch('https://api.openai.com/v1/chat/completions', {
method: 'POST',
headers: {
'Authorization': 'Bearer ' + env.openai_api_key,
'Content-Type': 'application/json'
},
body: JSON.stringify({
model: 'gpt-4',
tools: [
{
type: 'function',
function: {
name: 'runCodex',
description: 'Executes Codex CLI with a given prompt',
parameters: {
type: 'object',
properties: {
prompt: { type: 'string', description: 'Prompt to send to Codex CLI' }
},
required: ['prompt']
}
}
}
],
messages: [
{
role: 'system',
content: 'You are an automation assistant that can call a custom CLI tool named "codex".'
},
{
role: 'user',
content: prompt
}
]
})
});
}
// Full flow execution
askChatGPT(codex_prompt)
.then(res => res.json())
.then(data => {
const toolCalls = data.choices?.[0]?.message?.tool_calls || [];
if (toolCalls.length === 0) {
    console.log('ℹ️ No tool calls detected in the response.');
return Promise.resolve('No output from Codex.');
}
const executions = toolCalls.map(call => {
if (call.function?.name === 'runCodex') {
const args = JSON.parse(call.function.arguments);
return runCodex(args.prompt);
}
return Promise.resolve(null);
});
return Promise.all(executions);
})
.then(outputs => {
const content = outputs.join('\n');
  console.log('📤 Codex Output:\n', content);
// Summarize Codex result with ChatGPT
return askChatGPT(`Summarize this content:\n\n${content}`);
})
.then(res => res.json())
.then(result => {
  const summary = result.choices?.[0]?.message?.content || '⚠️ No summary generated.';
  console.log('✅ Final Summary:\n', summary);
})
.catch(error => {
  console.error('❌ Flow error:', error.message);
});
🚀 Run Flow
Go to library/uniflow-client-node and generate dist/node.js:
npm run build:dev
Then run the flow with the Uniflow Node client:
node dist/node.js --env=dev --api-key=<your-uniflow-api-key> ask-codex
🏁 Conclusion
We saw how Codex can become much more powerful when plugged into an orchestrator like Uniflow. This combination turns raw CLI outputs into refined, automated summaries, which is perfect for DevOps, onboarding, or code-review pipelines.
👉 Want to build your own workflows?
🎥 Watch the video on YouTube and don't forget to like, share, and subscribe. See you soon!