Ten years from now, AI support may be separated into two tiers.
Tier 1: Strategically important customers, served by a team of top-flight employees.
Tier 2: All other users, served by a dynamic marketplace.
Let me illustrate how this could work.
Creating a List of OpenAI Customers
To start, we'll generate a list of current OpenAI customers. We can go to the OpenAI blog to find some examples:
I used Codex, OpenAI's command line tool, to write Python scripts to:
- Download all the web pages of the blog.
- Remove the HTML tags from the saved files (to avoid wasting money/time on unnecessary tokens).
- Use the OpenAI Responses API to extract the names of OpenAI customers.
For cheaper queries and faster responses, I first tried the GPT-4.1 mini model to extract the customer names.
It wasn't as accurate as I would have liked - for example, it misinterpreted this article, naming the wrong organisation as the customer.
I changed the model to GPT-5 and it extracted the correct company names without issue.
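Here is a minimal sketch of that final extraction step, assuming the blog pages have already been saved as plain text. The folder name and prompt wording are my own illustration, not the exact script Codex produced:

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# "blog_text" is a hypothetical folder of blog posts already stripped of HTML.
for page in sorted(Path("blog_text").glob("*.txt")):
    response = client.responses.create(
        model="gpt-5",
        input=(
            "List the names of any OpenAI customers mentioned in this article. "
            "Return one name per line, or 'none' if there are no customers.\n\n"
            + page.read_text()
        ),
    )
    print(page.name, "->", response.output_text)
```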
OpenAI Customers
Gathering Customer Data
Now that we have a list of customers, let's build a system for ranking them by importance.
I used Codex to write a script to go through the list of customers, one by one. For each customer, I queried the Responses API.
This time, I used the web search tool and requested structured output.
I asked the model to return machine-readable data about each customer's business:
- Company Valuation
- Industry Name
- Industry Size Today
- Projected Industry Size in 2035
I also requested links to the sources of the data.
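A sketch of one such request is below. The schema field names and prompt are my own assumptions; the key pieces are the web search tool and a structured output format for the model to fill in (depending on your SDK version, the tool may be named "web_search_preview"):

```python
from openai import OpenAI
from pydantic import BaseModel

client = OpenAI()

# Field names are my own illustration, not the exact schema from the script.
class CustomerInfo(BaseModel):
    company_valuation_usd: float | None
    industry_name: str
    industry_size_today_usd: float | None
    projected_industry_size_2035_usd: float | None
    source_links: list[str]

def fetch_customer_info(name: str) -> CustomerInfo:
    response = client.responses.parse(
        model="gpt-5",
        tools=[{"type": "web_search"}],  # let the model look up current figures
        input=f"Find business data for the OpenAI customer '{name}'. Cite your sources.",
        text_format=CustomerInfo,
    )
    return response.output_parsed

print(fetch_customer_info("Example Corp"))
```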
Customer Info
Deep Research
I passed the customer info to a deep research model to generate a prioritised list of customers.
When employing deep research, it is advisable to use background mode.
A deep research model needs extra time to perform its task, so a standard web request may fail (the connection could be closed automatically after a set number of seconds).
Background mode allows you to start the process and check for the result later.
I kicked off the task and then retrieved the list an hour later. To limit costs, I set a max number of output tokens.
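A minimal sketch of the background-mode flow looks like this; the deep research model name, prompt, and polling interval are assumptions on my part:

```python
import time
from openai import OpenAI

client = OpenAI()

# background=True returns immediately with a job we can poll later.
job = client.responses.create(
    model="o4-mini-deep-research",
    background=True,
    max_output_tokens=20_000,  # cap the cost of the final report
    tools=[{"type": "web_search_preview"}],
    input="Rank these OpenAI customers by strategic importance: <customer data here>",
)

while True:
    job = client.responses.retrieve(job.id)
    if job.status in ("completed", "failed", "cancelled", "incomplete"):
        break
    time.sleep(600)  # in practice I simply checked back an hour later

print(job.output_text)
```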
The deep research model provided customer importance rankings and scores, with text explaining the reasoning behind these metrics.
Customers by Priority
Generating Support Tickets
Finally, we'll need sample ticket data. For this, I went to the OpenAI Community Forum:

I extracted the titles of all the posts from the previous three months.
Then I used gpt-5-nano (a fast and cost-effective model for simple tasks) to give each of these sample titles an urgency rating.
I limited the model to four possibilities: "critical", "high", "medium", and "low".
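A sketch of that classification step is below, using a structured output to constrain the model to the four labels (the prompt wording is my own):

```python
from typing import Literal
from openai import OpenAI
from pydantic import BaseModel

client = OpenAI()

# Structured output keeps the model to exactly these four labels.
class TicketUrgency(BaseModel):
    urgency: Literal["critical", "high", "medium", "low"]

def rate_urgency(title: str) -> str:
    response = client.responses.parse(
        model="gpt-5-nano",
        input=f"Rate the urgency of this support forum post title: {title!r}",
        text_format=TicketUrgency,
    )
    return response.output_parsed.urgency

print(rate_urgency("API returns 500 errors on every request since this morning"))
```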
Ticket Urgencies
Ticketing System Demo
For the sake of variety, I used Claude Code to complete this section.
It took the sample data and built a ticketing system demo.
The demo displays the most urgent issues from high-priority customers.
These are the problems that would be fixed by the AI company's best engineers:
Support Queue
This looks similar to the support queues of today.
The major difference would be the scale. Today, a large customer might generate a handful of support tickets per day.
As usage grows, that number could grow to thousands.
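At that scale, ordering the queue has to be automated. As a purely illustrative sketch (not the actual demo Claude Code built), tickets could be sorted by customer priority score and then by urgency:

```python
# Made-up priority scores and tickets, purely to illustrate the ordering.
URGENCY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

customer_priority = {"BigBank": 95, "Acme Robotics": 60}
tickets = [
    {"customer": "Acme Robotics", "title": "Webhook retries failing", "urgency": "critical"},
    {"customer": "BigBank", "title": "Fine-tuned model latency spike", "urgency": "high"},
    {"customer": "BigBank", "title": "Billing dashboard typo", "urgency": "low"},
]

# Sort by customer priority (descending), then by urgency.
queue = sorted(
    tickets,
    key=lambda t: (-customer_priority.get(t["customer"], 0), URGENCY_RANK[t["urgency"]]),
)
for ticket in queue:
    print(f'{ticket["customer"]}: {ticket["title"]} ({ticket["urgency"]})')
```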
AI would be deployed to maximise engineer productivity. It would continuously absorb data and decide on company priorities.
At times, the best use of engineer time might be to help government agencies - in the interest of fostering a productive relationship.
At other times, the right choice may be to focus on gaining market share in an emerging industry.
The Marketplace
The rest of the user base would find assistance via a dynamic marketplace, owned and operated by the AI company.
This would be similar to the talent marketplaces of today:

However, there may be one crucial difference.
Sites like Upwork charge a fixed percentage fee for each task.
AI companies could take a lesson from Google, which charges less for ads that are more relevant.
The logic is that relevant ads lead to a better user experience, and thereby improve the company's long-term prospects.
The same thinking could be applied to the support marketplace.
The fee charged could change dynamically, based on the importance of the task to the AI company's long-term goals.
A millionaire who wants his robot butler to cook a better omelette may pay a high price for support.

This use case would be too rare to be a company priority.
But the fee could subsidise more important cases (e.g. helping government agencies with small budgets).
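As a purely hypothetical sketch, the fee could be a simple function of a task's strategic value to the company:

```python
# Purely hypothetical: the fee shrinks as a task's strategic value grows.
BASE_FEE = 0.15   # a 15% baseline, comparable to a fixed marketplace cut
MIN_FEE = 0.02    # never drop below 2%

def marketplace_fee(strategic_value: float) -> float:
    """strategic_value is a 0-1 score of how much the task advances company goals."""
    return max(MIN_FEE, BASE_FEE * (1.0 - strategic_value))

print(marketplace_fee(0.9))  # important task, e.g. a small government agency: 2%
print(marketplace_fee(0.1))  # rare luxury task, e.g. the omelette: 13.5%
```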
This marketplace would also provide laid-off workers with new employment opportunities.
Even if AI can do any task, it seems likely that humans will still want to monitor it and assess its performance.
Future Projects
So how does one become a top-tier AI support engineer?
Personally, I plan to learn every aspect of the OpenAI ecosystem.
Here are my upcoming projects:
The OpenAI interview guide suggests that candidates may be asked to design a solution using OpenAI's products. Maybe this can be automated.
I could download the OpenAI documentation and cookbooks. I could then pass these as context to a deep research model, or perhaps use embeddings and vector stores for better search capabilities.
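As a first step towards the embeddings approach, a minimal sketch (with made-up documentation snippets) could look like this:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

# Made-up snippets standing in for chunks of the docs and cookbooks.
docs = [
    "Background mode lets long-running responses be retrieved later.",
    "The file search tool queries files attached to a vector store.",
    "Structured outputs constrain the model to a JSON schema.",
]

def embed(texts: list[str]) -> np.ndarray:
    result = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in result.data])

doc_vectors = embed(docs)
query_vector = embed(["How do I stop long requests from timing out?"])[0]

# These embeddings are unit length, so a dot product gives cosine similarity.
scores = doc_vectors @ query_vector
print(docs[int(np.argmax(scores))])
```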
I want to understand the image models better. I could make an AI-powered version of 4 Pics 1 Word.
The image streaming capability could make the game more entertaining. If users can guess the word before all the images are created, they earn more points.
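A sketch of generating the four pictures for one round is below; the word, prompts, and file names are my own examples, and the streaming variant for the "guess early" mechanic would build on this step:

```python
import base64
from openai import OpenAI

client = OpenAI()

# Generate four images that each hint at the same hidden word.
word = "bridge"
for i in range(4):
    result = client.images.generate(
        model="gpt-image-1",
        prompt=f"A photo that hints at the word '{word}', scene {i + 1} of 4",
        size="1024x1024",
    )
    with open(f"round_pic_{i + 1}.png", "wb") as f:
        f.write(base64.b64decode(result.data[0].b64_json))
```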

I tested GPT-5, and it wasn't able to find Waldo in the image below.
It created the incorrect red highlight; I created the correct green highlight:

I could try to fix this with vision fine-tuning.
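Vision fine-tuning takes JSONL training examples in a chat-style format, with images supplied alongside the text. A sketch of building one entirely made-up example:

```python
import json

# One made-up training example; the image URL and the answer are placeholders.
example = {
    "messages": [
        {"role": "system", "content": "You locate Waldo and describe where he is."},
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Where is Waldo in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/waldo_scene_1.png"}},
            ],
        },
        {"role": "assistant", "content": "Waldo is near the top-left corner, beside the striped tent."},
    ]
}

with open("waldo_train.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")
```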
Training a bot to play chess would give me a chance to try the computer use tool:

I recently vibe-coded a stock trading game that uses real price data.
It would be interesting to see if the model could beat the stock market:

This could offer an opportunity to use reinforcement fine-tuning.
Perhaps I could combine all my previous efforts into one big project: a virtual content creator.
It could record itself playing games, provide audio commentary, and use MCP workflows to publish to social media.
Maybe it could be interviewed via the Realtime API.
Continuous Improvement
Before writing this post, I made a video covering some of these ideas.
I wanted to show my dedication to continuous improvement, so I decided to share both iterations.
You can watch the video below; I'd be interested to hear your feedback:
Thank you for your time.