I'm going to tell you a little story about why I inherently mistrust technology companies who "move fast and break things", and in particular how this applies in generative AI.
I was working as a senior technical writer for a multinational company up until the end of last year, when the US parent company decided that a product team isn't a necessary function and all you need is engineers™️. Given that I was the writer in charge of all developer documentation, as well as the developer of the writing platform we worked on, I was told to assess the use of a generative AI tool to improve search.
I shan't name the company with whom we worked because I don't want to give them any publicity (even bad publicity), but suffice it to say they were a generative AI chatbot outfit that claimed to specialize in summarizing knowledgebases. Sounds like an ideal fit, thinks I, and I had seen this tool used elsewhere with mixed results. So I set up a consultation with them to get some information.
I met with the CEO over Zoom and asked some pointed questions about the product. Most of the answers were pretty standard, but two of them worried me immensely.
The first question was around pricing. I informed them of our content structure, the number of users we had on a daily basis, and the number of search queries we handled + associated clickthrough rates. I asked how much – ballpark – their product would set us back. That, he replied, was an unknown. They would negotiate a price with us, and anything that we used over that price would be absorbed by them. A screaming red flag, if ever I saw one.
The next question was one that was extremely important to me. The company I worked for had a huge footprint in Asia, particularly in Japan. We hired localizers and translators to work on all copy so that we could be sure it was translated appropriately for each country, and also to help us with naming conventions. Let's say you have a company called SoHo Housing, and you specialize in building management. Then let's say you hit upon the idea of creating a queryable index of housing stock that you want to sell as an add-on. You call this product "SHIndex". Your Japanese localizer will immediately point out that Japanese clients will hate this name, because it contains the syllable "shi" (死), meaning "death". At this point, you need to rework the name.
I asked this CEO about how the bot would handle international content. We stored content for different languages under subpaths (e.g. /en, /ja, /ko, /zh) and worked hard with localizers to make the content searchable. The CEO responded excitedly that this didn't matter, and that only the English-language content was needed. The mechanism, he explained, was that the bot would detect that a person was asking a question in Japanese, translate it to English, query the English content set to find an answer, then translate a summary of that content back into Japanese.
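To make the problem concrete, here's a minimal sketch of the flow as he described it. Every name in it is hypothetical and my own; the vendor never shared any code, so this is just my reconstruction of the design he pitched:

```python
from typing import Callable, Sequence

# A rough sketch of the pipeline the CEO described, as I understood it.
# All function names are placeholders of my own invention, not the vendor's API.

def answer_query(
    query: str,
    english_corpus: Sequence[str],
    detect_language: Callable[[str], str],
    translate: Callable[[str, str, str], str],   # (text, source, target) -> text
    search: Callable[[Sequence[str], str], Sequence[str]],
    summarize: Callable[[Sequence[str]], str],
) -> str:
    source_lang = detect_language(query)  # e.g. "ja"

    # Step 1: discard the user's language and work in English only.
    english_query = translate(query, source_lang, "en")

    # Step 2: search ONLY the English content set. The localized
    # /ja, /ko, /zh content is never consulted at all.
    hits = search(english_corpus, english_query)
    summary = summarize(hits)

    # Step 3: machine-translate the generated summary back into the
    # user's language, bypassing every naming and terminology decision
    # the human localizers made.
    return translate(summary, "en", source_lang)
```

Even if every translation in that loop were flawless, the content our localizers had so carefully crafted would simply never enter the picture.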
At this point, the answer was an immediate "no". It may well be that this company has since resolved the issue, but under no circumstances should they have been allowed anywhere near a production knowledgebase with a design that provably bad, one guaranteed to cause glaring and culturally sensitive problems. That's quite apart from the fact that the generated summaries were often misleading or flat-out wrong, or that caching would have forced us into expensive reindexing whenever we needed to correct something. The absolute lack of understanding of their own chosen field was utterly baffling and extremely concerning.
I really despise the fact that the software industry is full of people who just don't seem to care about the basics. If this limitation had been presented as such, it might have been acceptable: if they'd said "this only works with English content", we might have trialed it. But the fact that they had monkey-patched such a dangerous solution and presented it as production-ready meant that I had to veto the use of the product entirely.
Good fun. Fuck software.