Search Engine Optimisation: The curious question of efficiency
For one reason or another I found myself a few days ago inspecting the code behind Pepperminty Wiki's full-text search engine. What I found was interesting enough that I thought I'd blog about it.
Forget about that kind of Search Engine Optimisation (the horrible click-baity kind - if there's enough interest I'll blog about my thoughts there too) and cue the appropriate music - we're going on a field trip fraught with the perils of Unicode, page ids, transliteration, and more!
Firstly, I should probably mention a little about the starting point. The (personal) wiki that is exhibiting the slowness has ~75K words spread across 546 pages. Pepperminty Wiki manages to search all of this in about 2.8 seconds by way of an inverted index. If you haven't read my last post, you should do so now - it sets the stage for this one, and you'll be rather confused otherwise.
2.8 seconds is far too slow though. Let's do something about it! Before I can show you what I did to optimise it, there are several other things that need explaining. Let's look at Pepperminty Wiki's search system first. It's best explained with the aid of a diagram:
In short, every page has a numerical id, which is tracked by the ids core class. The search system interacts with this during the indexing phase (that's a topic for another blog post) and during the query phase. The query phase works something like this:
- The inverted index is loaded from disk (in my personal wiki the inverted index is ~968k, and loads in ~128ms)
- The inverted index is queried for pages that match the tokenised query terms.
- The results returned from the query are ranked and sorted before being returned.
- A context is extracted from the source of each page in the results returned - just like how Duck Duck Go or Google show a bit of text below the title of each result
- Said context has the search terms highlighted
It sounds complicated, but it really isn't. The complicated bit came when I tried to optimise it. To start with, I analysed how long each of the above steps was taking. The results were quite surprising:
- Step #1 took ~128ms on average
- Steps #2 & #3 took ~1200ms on average
- Step #4 took ~1500ms on average(!)
- Step #5 took a negligible amount of time
I did this by setting headers on the returned page. Timing things in PHP is relatively easy:
$start_time = microtime(true); // current Unix timestamp, with microsecond precision
// Do work here
$end_time = microtime(true);
// Convert seconds to milliseconds, rounded to 3 decimal places
$time_taken_ms = round(($end_time - $start_time) * 1000, 3);
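Since the search results are rendered as a normal page, the easiest way to surface these timings is to attach them as a response header, which then shows up in the browser's network inspector. Something along these lines does the trick (the header name here is just an example, not necessarily the one Pepperminty Wiki actually sends):

// Report the measurement as a custom HTTP header (illustrative header name)
header("x-time-taken: {$time_taken_ms}ms");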
This gave me a general idea as to what needed attention. I was surprised to learn that the context extractor was taking most of the time. At first, I thought that my weird and probably inefficient algorithm was to blame. There's no way it should be taking 1500ms!
So I set to work rewriting it to make it more efficient. Firstly, I tried something like this: instead of multiple sub-loops, I figured out a way to do it with just 1 for loop and a few calls to mb_stripos_all().
Unfortunately, this did not have the desired effect. While it did shave about 50ms off the total time, it was far from what I'd hoped for. I tried refactoring it slightly again to use preg_match_all(), but it still didn't give me the speed boost I was after - only another 50ms or so.
To get some answers, I brought out the big guns and profiled it with XDebug.
Upon analysing the generated profile, it immediately became clear what the issue was: transliteration. Transliteration is the process of removing the diacritics and other accents from a string to make it easier to compare with other strings. For example, café becomes cafe. In PHP this process is a bit funky. Here's what I do in Pepperminty Wiki:
// Build a reusable transliterator: any script -> Latin -> plain ASCII, strip combining marks, lowercase, then recompose
$literator = Transliterator::createFromRules(':: Any-Latin; :: Latin-ASCII; :: NFD; :: [:Nonspacing Mark:] Remove; :: Lower(); :: NFC;', Transliterator::FORWARD);
$literator->transliterate($string);
(Originally from this StackOverflow answer)
Note that this requires the intl PHP extension (which should be installed & enabled by default). This does several things:
- Converts the text to lowercase
- Normalises the Unicode representation of the text (see this article for more information)
- Casts characters from other alphabets (e.g. Cyrillic) to their closest Latin alphabet (i.e. a-z) phonetic equivalent
- Removes all diacritics
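To illustrate, pushing a few strings through the transliterator defined above should yield plain lowercase ASCII. The expected outputs in the comments are my reading of the rules, rather than captured output:

// Expected results, going by the transliteration rules above
echo $literator->transliterate("Café") . "\n";   // cafe
echo $literator->transliterate("Привет") . "\n"; // privet (Cyrillic cast to Latin)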
In short, it preprocesses a chunk of text so that it can be easily used by the search system. In my case, I transliterate search queries before tokenising them, source texts before indexing them, and crucially: source texts before extracting contextual information.
The thing about this wonderful transliteration process is that, at least in PHP, it's really slow. Thinking about it, the answer was obvious: why bother extracting offset information when the inverted index already contains that information?
The answer is that you don't. Upon refactoring the context extractor to utilise the inverted index, I managed to get it down to just ~59ms. Success!
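For the curious, the core of the idea looks something like this. The function signature and the shape of the offset data are simplified assumptions rather than Pepperminty Wiki's actual implementation, but the principle is the same: the offsets come straight out of the index, so the slow transliterate-and-search step disappears entirely.

// Sketch: build a context snippet from offsets already stored in the inverted index.
// $term_offsets is assumed to be a flat array of character offsets for the matched
// terms on this page, pulled straight from the relevant index entries.
function extract_context_from_offsets(string $source, array $term_offsets, int $radius = 100) : string {
    if (empty($term_offsets)) return "";
    sort($term_offsets); // work from the earliest match
    $start = max(0, $term_offsets[0] - $radius);
    return mb_substr($source, $start, $radius * 2); // grab a window of text around it
}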
Next up was the query system itself. 1200ms seemed a bit high, so while I was at it, I analysed a profile of that as well. It turned out that a similar problem was occurring here too. Surprisingly, the page id system's getid($pagename) function was being really slow. I found 2 issues here.
Firstly, I was doing too much Unicode normalisation. In the page id system, I don't want to transliterate to remove diacritics, but I do want to make sure that all the diacritics and accents are represented in the same way.
If you didn't know, Unicode has both a character for letters like é (e-acute), and a code-point for the acute accent itself, which gets merged into the previous letter during rendering. This can cause a page to acquire 2 (or even more!) seemingly identical ids in the system, which caused me a few headaches in the past! If you'd like to learn more, the article on Unicode normalisation I linked to above explains it in some detail. Thankfully, the solution is quite simple. Here's what Pepperminty Wiki does:
Normalizer::normalize($string, Normalizer::FORM_C)
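To see what this actually does, consider that the precomposed é and the two-code-point sequence of e plus a combining acute accent are different strings until they're normalised. A quick demonstration (again relying on the intl extension):

$composed   = "\u{00E9}";   // é as a single precomposed code point
$decomposed = "e\u{0301}";  // e followed by a combining acute accent
var_dump($composed === $decomposed);                              // bool(false)
var_dump(Normalizer::normalize($composed, Normalizer::FORM_C)
     === Normalizer::normalize($decomposed, Normalizer::FORM_C)); // bool(true)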
Normalising like this ensures that all accents and other weird characters are represented in the same way. As you might guess though, it's slow. I found that in the getid() function I was normalising both the page names I was iterating over in the index and the target page name to find, on every iteration of the loop. The solution here was simple:
- Don't normalise the page names from the index - that's the job of the assign() protected method, which ensures that they are suitably normalised when adding them in the first place
- Normalise the target page name only once, and then use that when spinning through the index.
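In code, the fixed lookup looks roughly like this. Note that this is a standalone sketch, with the id-to-page-name map passed in as a plain array rather than living inside the real ids class:

// Sketch of the fix: normalise the target name once, then scan the
// id -> page name map, whose entries assign() already keeps normalised.
function getid_sketch(array $index, string $pagename) {
    $pagename = Normalizer::normalize($pagename, Normalizer::FORM_C); // once, not per iteration
    foreach ($index as $id => $entry_name) {
        if ($entry_name === $pagename) // no per-entry normalisation needed any more
            return $id;
    }
    return null; // it's then up to the caller to assign a new id if needed
}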
Implementing these simple changes brought the overall search time down to 700ms. The other thing to note here is the structure of the index. I show it in the diagram, but here it is again:
- 1: Booster
- 2: Rocket
- 3: Satellite
The index is basically a hash-table mapping numerical ids to their page names. This is great for when you have an id and want to know what the name of the page associated with it is, but terrible for when you want to go in the other direction, as we need to do when performing a query!
I haven't quite decided what to do about this. Obviously, the implications for efficiency are significant whenever we need to convert a page name into its respective numerical id. The problem lies in the fact that the search query system travels in both directions: it needs to convert page ids into page names when unravelling the results from the inverted index, but it also needs to convert page names into their respective ids when searching the titles and tags in the page index (the index that contains information about all the pages on a wiki - not pictured in the diagram above).
I have several options that I can see immediately:
- Maintain 2 indexes: one going in each direction (see the sketch after this list). This would also bring a minor improvement when indexing new content and updating existing content in the inverted index.
- Use some fancy footwork to refactor the search query system to unwind the page ids into their respective page names before we search the pages' titles and tags.
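The first option is particularly tempting because PHP makes the reverse map nearly free to construct - something along these lines (a sketch, not what Pepperminty Wiki currently does):

// Forward index: id -> page name, as stored today
$ids_to_names = [ 1 => "Booster", 2 => "Rocket", 3 => "Satellite" ];
// Reverse index: page name -> id. Keeping this alongside the forward index turns
// every name -> id lookup from an O(n) scan into an O(1) hash lookup.
$names_to_ids = array_flip($ids_to_names);

$id = $names_to_ids["Rocket"] ?? null; // 2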
While deciding what to do, I did manage to reduce the number of times I convert a page name into its respective id by only performing the conversion if I find a match in the page metadata. This brought the average search time down to ~455ms, which is perfectly fine for my needs at the moment.
In the future, I may come back to this and optimise it further - but as it stands I'm getting to the point of diminishing returns: where every additional optimisation takes twice as long to implement as the last, and provides only a marginal gain in speed.
With that in mind, it doesn't seem worth spending ages tackling this issue now. Pepperminty Wiki is written in such a way that I can come back later and turn the inner workings of any part of the system upside-down without it affecting the rest of the system (most of the time, anyway.... :P).
If you do find the search system too slow after these optimisations are released in v0.17, I'd like to hear about it! Please open an issue and I'll investigate further - I certainly haven't reached the end of this particular lollipop.
Found this interesting? Learnt something? Got a better way of doing it? Comment below!