GPT Store and SEO for GPTs

OpenAI announced the GPT Store today, and it has the potential to become as busy a marketplace as Apple’s App Store, and very fast. What this means for developers of custom GPTs is that they need to think about ranking in the store, because the store will be a type of search engine. There is a familiar challenge here: getting found. The GPT Store and SEO for GPTs will become a serious issue. Just as the advent of the App Store revolutionized mobile software, the GPT Store promises to be a playground for creatives and developers alike, offering tailored AI experiences through a plethora of specialized GPTs.

For your custom GPT to succeed, it’s not just about how intelligent or advanced it is; it’s also about how visible it will be in the GPT Store. This is where the concept of ‘GPT Store Optimization’ (GSO) might come into play, mirroring the well-established practice of App Store Optimization (ASO) for mobile apps.

The premise is simple yet critical: When users search the GPT Store, they should be able to find your GPT easily, and your creation should rank well within its category. But how?

Understanding GPT Store Algorithms

The announcement from OpenAI suggests that GPTs will be searchable and may “climb the leaderboards.” This implies an algorithmic ranking, possibly akin to search engines and app stores, where factors such as relevance, user engagement, and quality drive visibility.

Relevance and Keyword Optimization

Your GPT must be meticulously tailored to your target audience’s needs. As with optimizing web content for Google or an app for the App Store, choosing the right keywords for your GPT’s description and metadata is crucial. Understand the language and terms potential users will employ when seeking out the functionality your GPT offers. At the moment we don’t have any information about how GPTs are going to be defined, labelled, categorised and so on, but there will have to be some sort of taxonomy and labelling.
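One way to approach this, sketched below under the assumption that you have collected (or guessed at) some sample user queries: score candidate description keywords by how often they appear in the language users actually search with. The queries and keywords here are invented for illustration; this is a toy word-count heuristic, not a real SEO tool.

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def keyword_scores(sample_queries, candidate_keywords):
    """Score each candidate keyword by how many times it occurs
    across the sample user queries."""
    counts = Counter()
    for query in sample_queries:
        counts.update(tokenize(query))
    return {kw: counts[kw] for kw in candidate_keywords}

# Hypothetical queries users might type into the GPT Store search box.
queries = [
    "GPT that writes cover letters",
    "cover letter generator",
    "help me write a CV and cover letter",
]

print(keyword_scores(queries, ["cover", "letter", "resume", "cv"]))
```

A keyword that scores zero against realistic queries is probably the wrong word for your description, however accurate it feels to you as the builder.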

User Engagement and Reviews

High engagement levels are likely to influence your GPT’s visibility positively. These marketplaces are nothing if not Darwinian, so getting users to interact with your GPT frequently and for longer sessions will probably benefit your ranking. Reviews will undoubtedly play a significant role, too: stellar feedback may boost your standing on the leaderboards, while negative reviews could do the opposite.

Quality and Retention

The announcement hints at the possibility of monetization based on usage. This means retention could be a vital metric. Quality will be a cornerstone here; if your GPT is not only unique but also provides value, users will return, and new users will find it through recommendations and higher rankings.

Categories and Leaderboards

It’s essential to place your GPT in the correct category to ensure it reaches the right audience. Being a top performer in a niche category can be more beneficial than being lost in a sea of generalists. Climbing the leaderboard in your category will require understanding the nuances of what the GPT Store algorithm values most.

Spotlights: The Role of OpenAI’s Curation

The mention of spotlighting “the most useful and delightful GPTs” suggests that OpenAI will curate content, much like featured apps on the App Store. Gaining such a spotlight could significantly enhance your GPT’s visibility. This could involve networking within the OpenAI community and ensuring your creation stands out in utility and innovation.

The launch of the GPT Store marks a significant milestone in the evolution of AI tools and services. It will democratise and dramatically accelerate the process of app development, so to succeed, builders must adapt and apply robust SEO strategies to ensure their custom GPTs are not only useful and innovative but also discoverable. With the right approach, the opportunities for visibility and monetization are boundless. Keep these factors in mind, and you may just find your GPT leading its category, one search at a time. The GPT Store and SEO for GPTs is the new marketing challenge.

Personal LLMs

Personal LLMs: have a discussion with yourself, but a self with a much better memory

Personal LLMs (Large Language Models) will be ubiquitous within two years, and it seems likely that companies like Apple and Google will develop mobile LLM apps in parallel with hardware and software developments focused on AI. A personal LLM will let you converse with your own private data, such as your Notion or Roam database, your Google Docs, your journal, internal company databases, your photo albums and the data generated by health apps and other systems. You will be able to talk to your documents and interrogate them using Natural Language Understanding, or NLU.

Apart from the obvious privacy benefits – your data will remain on your device – how might this be useful? What are the implications for how we deal with information, both in the work environment and at home? Will it also lead to the “death of search”, the gradual migration away from search engines to LLMs by people looking for information? The “what” may still be served better by search engines, but the “how” is often already better handled by LLMs, and that will only improve as LLMs gain access to the web. Questions with complicated answers are already answered in a much better style by LLMs, although the accuracy of the information contained in those answers can be poor.

Let’s look at these questions.

How local LLMs will be useful

You will be able to talk to your device and ask questions of all your personal data, as if having a dialogue with yourself, but a self with a better memory.

“When did I go to Rome, and what was the name of the hotel I stayed in?” “Show me any photos I took on that trip that include my wife.”

“When did I last go to the dentist?”

“How many times have I been to Oxford this year?”

“What was that article about lowering sea levels by flooding parts of the desert called?”

“Have I ever written notes about Gustav Holst?” And so on.
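Under the hood, queries like these would typically be answered by retrieving the most relevant personal documents and handing them to the model as context (retrieval-augmented generation). A minimal sketch of the retrieval step is below; the note text is invented, and a real personal LLM would use neural embeddings rather than this simple word-count similarity.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def cosine(a, b):
    """Cosine similarity between two token-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query, notes):
    """Return the notes ranked by similarity to the query.
    The top hits would be passed to the LLM as context."""
    query_vec = Counter(tokenize(query))
    return sorted(
        notes,
        key=lambda n: cosine(query_vec, Counter(tokenize(n))),
        reverse=True,
    )

# Invented stand-ins for a personal notes database.
notes = [
    "Trip to Rome in May 2019, stayed at the Hotel Artemide.",
    "Dentist appointment on 3 March, check-up only.",
    "Notes on Gustav Holst and The Planets suite.",
]

print(retrieve("what hotel did I stay in when I went to Rome?", notes)[0])
```

Even this crude ranking surfaces the right note for distinctive queries; the hard engineering in a real personal LLM lies in doing it reliably and privately across millions of documents on-device.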

The fluidity of streamlined knowledge retrieval is an important benefit in its own right – it will speed up thought, stop the retrieval process from interrupting ideation and reduce time spent on information administration and housekeeping. But with all gains there will be losses. If we stop needing to retrieve information from the deep recesses of our memories, will we lose the ability as neural pathways atrophy? In a simple sense, the evidence suggests we will. After all, most people who could once do mental arithmetic well are now worse at it because they use calculators instead. Why shouldn’t the process of information retrieval go the same way?

Personalised, local LLMs will act as personal assistants, or PAs. Coupled with specific plugins, you will be able to get them to book tickets or flights, write articles while you sleep based on overnight headlines and Twitter feeds, suggest recipes and order the ingredients missing from your fridge.

The death of search

Search engines like Google and Bing are good for atomised searching – looking for something specific and well-defined. They have always been poor at providing answers to more nuanced questions such as “why does temperature fall as altitude increases?” Instead, they list sources of answers that you often have to view one after another to get the answer you need. With an LLM there is a single step: you ask the question and you get the answer. So people are already migrating away from search towards more helpful tools like Claude, Perplexity and ChatGPT for anything but the most basic of searches. This will of course cut into the advertising income generated by search ads. Google in particular, which has a degree of financial diarrhoea due to heavy ingestion of ad cash, may start to suffer. As someone pointed out a while back, if Bing search dies, Microsoft still thrives; if Google search dies, so does Google. So far, no search engine has successfully worked out how to splatter LLM outputs with ads and perhaps, with a bit of luck, no one will.