Personal LLMs

Personal LLMs: have a discussion with yourself – but a yourself with a much better memory

Personal LLMs (Large Language Models) will be ubiquitous within two years, and it seems likely that companies like Apple and Google will develop mobile LLM apps while focusing their hardware and software development around AI. A personal LLM will let you converse with your own private data – your Notion or Roam database, your Google Docs, your journal, internal company databases, your photo albums, and data generated by health apps and other systems. You will be able to talk to your documents and interrogate them using Natural Language Understanding (NLU).

Apart from the obvious privacy benefits – your data will remain on your device – how might this be useful? What are the implications for how we deal with information, both at work and at home? Will it also lead to the “death of search” – the gradual migration away from search engines to LLMs by people looking for information? The “what” may still be served better by search engines, but the “how” is often already performed better by LLMs, and the gap will widen as LLMs gain access to the web. Questions with complicated answers are already answered in a much better style by LLMs, although the accuracy of the information contained in those answers can be poor.

Let’s look at these questions.

How local LLMs will be useful

You will be able to talk to your device and ask questions of all your personal data, as if you were having a dialogue with yourself – but a yourself with a better memory.

“When did I go to Rome and what was the name of the hotel I stayed in?”

“Show me any photos I took on that trip that include pictures of my wife.”

“When did I last go to the dentist?”

“How many times have I been to Oxford this year?”

“What was that article about lowering sea levels by flooding parts of the desert called?”

“Have I ever written notes about Gustav Holst?” And so on.
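Under the hood, queries like these are typically answered in two steps: first retrieve the relevant note or document, then let the model answer from it (retrieval-augmented generation). Here is a minimal sketch of the retrieval step, with toy word-overlap scoring standing in for the embedding search a real system would use; the notes themselves are invented examples:

```python
import re

def tokenize(text):
    """Lowercase a string and split it into a set of words."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, notes, top_k=1):
    """Rank notes by how many words they share with the query."""
    q = tokenize(query)
    scored = sorted(notes, key=lambda n: len(q & tokenize(n)), reverse=True)
    return scored[:top_k]

# A stand-in for a personal note archive.
notes = [
    "Stayed at Hotel Artemide in Rome, March 2022.",
    "Dentist appointment on 14 June, cleaning and check-up.",
    "Notes on Gustav Holst: The Planets, folk-song influence.",
]

best = retrieve("When did I go to Rome and what hotel did I stay in?", notes)[0]
print(best)  # the Rome note shares the most words with the query
```

In a real personal LLM the word-overlap scoring would be replaced by vector embeddings and a local vector store, and the winning note would be passed to the model as context for its answer – but the retrieve-then-answer shape is the same.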

The fluidity of streamlined knowledge retrieval is an important benefit in its own right – it will speed up thought, stop the retrieval process from interrupting ideation, and reduce time spent on information administration and housekeeping. But with all gains there will be losses. If we stop needing to retrieve information from the deep recesses of our memories, will we lose the ability as neural pathways atrophy? The simple answer is that the evidence suggests we will. After all, most people who could once do mental arithmetic well are now worse at it because they use calculators instead. Why shouldn’t information retrieval go the same way?

Personalised, local LLMs will act as personal assistants, or PAs. When coupled with specific plugins, you will be able to get them to book tickets or flights, write articles while you sleep based on overnight headlines and Twitter feeds, suggest recipes, and order ingredients missing from your fridge.
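The plugin mechanism behind such assistants is usually simple: the model emits the name of a tool plus its arguments, and a registry maps that name to an ordinary function. A minimal sketch – the tool names and the fake booking function are purely illustrative, not any real plugin API:

```python
# Illustrative "plugins": ordinary functions the assistant can invoke.
def book_flight(destination, date):
    return f"Booked flight to {destination} on {date}"

def suggest_recipe(ingredient):
    return f"Recipe suggestion using {ingredient}: frittata"

# Registry mapping tool names to functions.
TOOLS = {"book_flight": book_flight, "suggest_recipe": suggest_recipe}

def dispatch(tool_call):
    """Execute a tool call of the form {'name': ..., 'args': {...}}."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["args"])

# In practice the LLM emits this structure from your request;
# here it is hard-coded for the sketch.
print(dispatch({"name": "book_flight",
                "args": {"destination": "Rome", "date": "2025-05-01"}}))
```

The safety question – which tools a personal LLM is allowed to call without asking you first – is exactly where the interesting design work lies.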

The death of search

Search engines like Google and Bing are good for atomised searching – looking for something specific and well-defined. They have always been poor at answering more nuanced questions such as “why does temperature fall as altitude increases?” Instead, they list sources of answers that you often have to view one after another to find what you need. With an LLM there is a single step: you ask the question and you get the answer. So people are already migrating away from search towards more helpful tools like Claude, Perplexity and ChatGPT for anything but the most basic of searches. This, of course, will cut into the advertising income generated by search ads. Google in particular, which has a degree of financial diarrhoea due to heavy ingestion of ad cash, may start to suffer. As someone pointed out a while back, if Bing search dies, Microsoft still thrives. If Google search dies, so does Google. So far, no search engine has successfully worked out how to splatter LLM outputs with ads and perhaps, with a bit of luck, no-one will.