a good thing, and it will also make us grow faster.
Last week, I saw someone on Reddit point out that OpenAI had registered the domain 'search.chatgpt.com' and provisioned an SSL certificate for it, which seems to herald the launch of an OpenAI search engine.
Bloomberg's latest article yesterday also reported that, according to people familiar with the matter, OpenAI is developing a feature that lets ChatGPT search the web and cite sources, such as Wikipedia entries and blog posts, in its results. Users would be able to ask questions and get answers with detailed information drawn from the web; for example, asking how to replace a doorknob might return an illustrated walkthrough of the task. One version of the product reportedly also uses images where appropriate to supplement the written explanation.
The Information reported on this search product back in February, but the specifics of how it works had not been disclosed until yesterday. Perplexity, whose AI-driven search engine emphasizes accuracy and citations, has already won enormous popularity and a billion-dollar valuation. Meanwhile, Google is accelerating its rethinking of its core search experience around AI and is expected to announce its latest Gemini AI model plans at next week's annual I/O event (our colleagues will be attending I/O in Silicon Valley to bring back front-line news, and I hope my old employer keeps getting better).
I think OpenAI's logic for building search is also clear. I remember a recorded conversation between MIT's president and OpenAI CEO Sam Altman in which they discussed this: although GPT-4 can act as a kind of database, using it that way makes inference slow and expensive, and the results are often unsatisfactory, largely because of the resources wasted by how the model is designed and trained. Sam Altman said this is unavoidable for now, a side effect of the current way of building reasoning-engine models. But he foresees new technical approaches that could separate a model's reasoning ability from its need to store vast amounts of data, solving these problems more effectively.
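To make that division of labor concrete, here is a minimal, purely hypothetical sketch of search-augmented answering: a retrieval step supplies the up-to-date facts, and the model only has to reason over them and cite its sources. The toy corpus, the keyword_search scorer, and the answer_with_citations helper are all my own illustrative assumptions, not OpenAI's actual design.

```python
# A toy illustration of search-augmented answering: retrieval supplies the
# facts, and the "reasoning" step only has to select and cite them.
# The corpus, scorer, and answer assembler below are all hypothetical.

from dataclasses import dataclass


@dataclass
class Page:
    url: str
    text: str


# Stand-in for a web index; a real system would call a search engine here.
CORPUS = [
    Page("https://example.org/doorknob-guide",
         "To replace a doorknob, remove the two screws on the interior plate, "
         "pull both halves apart, then slide out the latch assembly."),
    Page("https://example.org/latch-types",
         "Most residential doors use a tubular latch; measure the backset "
         "(usually 60 mm or 70 mm) before buying a replacement knob."),
]


def keyword_search(query: str, corpus: list[Page], k: int = 2) -> list[Page]:
    """Rank pages by naive keyword overlap with the query (toy retriever)."""
    terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(terms & set(p.text.lower().split())),
        reverse=True,
    )
    return scored[:k]


def answer_with_citations(query: str) -> str:
    """Assemble an answer from retrieved snippets, citing each source URL.

    In a real product the snippets would be handed to an LLM as context;
    here we simply concatenate them to show the citation structure.
    """
    hits = keyword_search(query, CORPUS)
    lines = [f"Q: {query}"]
    for i, page in enumerate(hits, start=1):
        lines.append(f"[{i}] {page.text} (source: {page.url})")
    return "\n".join(lines)


if __name__ == "__main__":
    print(answer_with_citations("how to replace a doorknob"))
```

The point is only the structure: the freshest facts live in the index, and the expensive model is asked to do nothing but reason over them and point back to where they came from.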
Render unto Caesar the things which are Caesar’s, and unto God the things that are God’s.
AI technology is evolving rapidly, with updates arriving daily, which pushes us to keep learning new skills and new ways of thinking. If search is added, the way we interact with AI, and especially how we phrase our prompts, may change as well. At first I thought I could hand the tasks I didn't want to do over to AI; later I realized that sometimes it's the opposite, and I end up handling the tasks the AI won't do.
Finally, quoting Yuan Bo: "I think in the future, more and more jobs will involve humans serving AI. For example, writing better documentation for AI so it can perform RAG more accurately. This is equivalent to adding leverage to human work, meaning the same number of people can produce more products. Just like in the industrial era, workers on the assembly line served machines, allowing the same number of people to produce more. I speculate that companies that complete this transition earlier will have employees who are more valuable in the market."
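As a small, hypothetical illustration of what "writing better documentation for RAG" might mean in practice: documentation written with clear headings naturally splits into self-contained chunks that a retriever can match against a question, whereas a wall of text does not. The ourtool snippets and the chunk_by_heading and retrieve helpers below are my own assumptions, not any real product's behavior.

```python
# Hypothetical sketch: why well-structured documentation helps RAG.
# Clear headings let a document split into self-contained chunks that a
# retriever can match against a question.

import re

DOC = """\
## Installation
Run pip install ourtool and verify with ourtool --version.

## Configuration
Set the API_KEY environment variable before first use.

## Troubleshooting
If requests time out, increase the timeout setting in config.toml.
"""


def chunk_by_heading(doc: str) -> dict[str, str]:
    """Split a markdown-style document into {heading: body} chunks."""
    chunks = {}
    for section in re.split(r"^## ", doc, flags=re.MULTILINE):
        if not section.strip():
            continue
        heading, _, body = section.partition("\n")
        chunks[heading.strip()] = body.strip()
    return chunks


def retrieve(question: str, chunks: dict[str, str]) -> str:
    """Return the chunk whose heading and body overlap most with the question."""
    terms = set(question.lower().split())
    best = max(
        chunks.items(),
        key=lambda kv: len(terms & set((kv[0] + " " + kv[1]).lower().split())),
    )
    return f"{best[0]}: {best[1]}"


if __name__ == "__main__":
    chunks = chunk_by_heading(DOC)
    print(retrieve("requests time out, what setting should I change?", chunks))
```

The same content dumped into one unstructured paragraph would give the retriever nothing to latch onto, which is exactly the sense in which humans writing for AI adds leverage.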